Patent ID: 11863235
DESCRIPTION OF EMBODIMENTS

Problem to be Solved by the Present Disclosure

In an optical network system, a plurality of terminals may mutually perform optical communication. In such a case, providing one-to-one optical transmission lines between the terminals gives rise to a problem: the number of optical transmission lines becomes enormous as the number of terminals increases, and the configuration becomes complicated. Consequently, an object of the present disclosure is to simplify the configuration of an optical network system in which a plurality of terminals mutually perform optical communication, and to provide an autonomous driving system including the simplified optical network system.

Effect of the Present Disclosure

According to the present disclosure, it is possible to simplify the configuration of an optical network system in which a plurality of terminals mutually perform optical communication. It is also possible to provide an autonomous driving system including the simplified optical network system.

DESCRIPTION OF EMBODIMENT OF THE PRESENT DISCLOSURE

First, embodiments of the present disclosure will be listed and described. An optical network system according to one embodiment includes N (N is an integer equal to or greater than 2) information processing terminals each including an optical transmitter unit and an optical receiver unit, N first optical fibers, N first optical transmission lines, N second optical fibers, N second optical transmission lines, an optical coupling unit, and an optical branching unit. The N first optical fibers extend from the respective optical transmitter units of the N information processing terminals. The N first optical transmission lines are connected at one end to the respective first optical fibers. The N second optical fibers extend from the respective optical receiver units of the N information processing terminals. The N second optical transmission lines are connected at one end to the respective second optical fibers. The optical coupling unit couples the other ends of the N first optical transmission lines to one end of a third optical transmission line. The optical branching unit couples the other end of the third optical transmission line to the other ends of the N second optical transmission lines.

In this optical network system, when an optical signal is transmitted from the optical transmitter unit of a certain information processing terminal, the optical signal propagates through one first optical transmission line and reaches the optical coupling unit. The optical signal then propagates through the third optical transmission line via the optical coupling unit and is distributed to the N second optical transmission lines by the optical branching unit. The optical signal having propagated through each second optical transmission line reaches the N information processing terminals through the N second optical fibers and is received in the optical receiver unit of each information processing terminal. The flow of the optical signal is the same when an optical signal is transmitted from the optical transmitter unit of any other information processing terminal. In this way, in the above-described optical network system, the optical signal transmitted from each information processing terminal is aggregated in the third optical transmission line and then sent out to all the information processing terminals.
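To make the combine-then-broadcast topology concrete, the following minimal sketch (Python, with hypothetical names not taken from the patent) models the behavior described above: a signal entering any first transmission line is merged onto the shared third line and then copied to every receiver, including the sender's own.

```python
# Minimal model of the N-terminal combine/broadcast network: the coupling
# unit merges all first lines onto one trunk (the third optical transmission
# line), and the branching unit copies the trunk onto all second lines.

class BroadcastOpticalNetwork:
    def __init__(self, n_terminals: int):
        assert n_terminals >= 2
        self.n = n_terminals

    def transmit(self, sender: int, signal: str) -> dict[int, str]:
        """Model coupling unit + trunk + branching unit as a single hop."""
        # Every receiver, including the sender, gets the same signal; the
        # sender simply ignores its own copy, as the embodiment describes.
        return {receiver: signal for receiver in range(self.n)}


net = BroadcastOpticalNetwork(4)
print(net.transmit(sender=0, signal="hello"))  # all four terminals receive "hello"
```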
Therefore, the configuration of the optical network system can be simplified compared with, for example, a case where a one-to-one optical transmission line is provided between the information processing terminals.

The above-described optical network system may further include an optical amplifier provided in the middle of the third optical transmission line. For example, in a configuration in which a one-to-one optical transmission line is provided between the information processing terminals, if an optical amplifier is provided in the middle of each optical transmission line, an amplifier is needed for each of a large number of optical transmission lines, which results in an increase in costs and further complication of the system. On the other hand, according to the above-described optical network system, the optical signal transmitted from each information processing terminal is temporarily aggregated in the third optical transmission line, and thus the number of optical amplifiers can be significantly reduced by providing an optical amplifier on the third optical transmission line. Therefore, it is possible to suppress the increase in costs and the complication of the system caused by providing optical amplifiers. In this case, the optical amplifier may be a rare-earth-doped fiber amplifier or a semiconductor optical amplifier.

The above-described optical network system may further include N optical signal detection units, a communication control unit, and an avoidance signal sending unit. Each of the N optical signal detection units detects an optical signal propagating through the corresponding first optical transmission line. The communication control unit generates a collision avoidance signal for avoiding a collision between optical signals on the basis of the detection results of the N optical signal detection units. The avoidance signal sending unit converts the collision avoidance signal into light and sends out this light from the third optical transmission line or the N second optical transmission lines to the N information processing terminals. When optical signals are sent out simultaneously from two or more information processing terminals, these optical signals collide with each other in the third optical transmission line and become unreceivable. In such an optical network system, the communication control unit sends a signal for avoiding a collision between optical signals to each information processing terminal, and thus it is possible to adjust the sending timings of the optical signals between the information processing terminals and to avoid a collision between the optical signals.

In the above-described optical network system, the scheme for avoiding a collision between optical signals may be dynamic bandwidth allocation, CSMA/CA, CSMA/CD, or a token ring. Using any of these, it is possible to adjust the sending timings of the optical signals between the information processing terminals and to reduce collisions between the optical signals. In the above-described optical network system, the N first optical transmission lines and the N second optical transmission lines may comprise single-mode optical fibers. In the above-described optical network system, in a case where N is equal to or greater than 4, the optical coupling unit may be formed by combining a plurality of optical couplers having 2 inputs and 1 output.
In the above-described optical network system, in a case where N is equal to or greater than 4, the optical branching unit may be formed by combining a plurality of optical couplers having 1 input and 2 outputs.

An autonomous driving system according to one embodiment includes any of the above optical network systems. The N information processing terminals are installed on a roadside at intervals from each other. The N information processing terminals transmit and receive information for autonomous driving to and from a traveling automobile, and mutually provide and share position information relating to the automobile. According to this autonomous driving system, all the information processing terminals can reliably know of the presence of the automobile.

Details of Embodiment of the Present Disclosure

Specific examples of an optical network system and an autonomous driving system of the present disclosure will be described below with reference to the drawings. The present invention is not limited to these examples. The present invention is indicated by the claims, and it is intended to include all changes within the scope and meaning equivalent to the claims. In the following description, the same components are denoted by the same reference numerals in the description of the drawings, and repeated description thereof will be omitted.

FIG. 1 is a diagram illustrating an optical network system 1 according to an embodiment of the present disclosure. As shown in the drawing, the optical network system 1 includes an optical network unit 2 and N information processing terminals 20 connected to the optical network unit 2. N is an integer equal to or greater than 2, and a case where N is 4 is illustrated in the drawing. Specifically, the optical transmitter unit of each information processing terminal 20 and the optical network unit 2 are optically coupled to each other through an optical fiber F1. The optical fiber F1 is an example of the first optical fiber in the present disclosure. In addition, the optical receiver unit of each information processing terminal 20 and the optical network unit 2 are optically coupled to each other through an optical fiber F2. The optical fiber F2 is an example of the second optical fiber in the present disclosure. The optical network unit 2 assists with mutual optical communication between the N information processing terminals 20.

FIG. 2 is a block diagram illustrating a hardware configuration example of each information processing terminal 20. As shown in FIG. 2, the information processing terminal 20 is configured to include a computer provided with hardware such as a central processing unit (CPU) 21, a volatile memory (random access memory (RAM)) 22, a non-volatile memory (read only memory (ROM)) 23, an optical transmitter module 24, an optical receiver module 25, and an auxiliary storage device 26. The information processing terminal 20 realizes a predetermined function by these components operating using a program or the like. The optical transmitter module 24 corresponds to the above-described optical transmitter unit, and the optical receiver module 25 corresponds to the above-described optical receiver unit.

FIG. 3 is a diagram illustrating an internal configuration of the optical network unit 2.
As shown in FIG. 3, the optical network unit 2 includes N optical transmission lines 3, N optical transmission lines 4, one optical transmission line 5, an optical coupling unit 6, an optical branching unit 7, an optical amplifier 8, N optical signal detection units 9, a communication control unit 10, an avoidance signal sending unit 11, a communication mechanism 12, N optical input ports 13, and N optical output ports 14. The optical transmission line 3 is an example of the first optical transmission line in the present disclosure. The optical transmission line 4 is an example of the second optical transmission line in the present disclosure. The optical transmission line 5 is an example of the third optical transmission line in the present disclosure.

One end of each of the N optical transmission lines 3 is connected, through the corresponding optical input port 13, to the optical fiber F1 (see FIG. 1) extending from the optical transmitter unit of the corresponding information processing terminal 20. One end of each of the N optical transmission lines 4 is connected, through the corresponding optical output port 14, to the optical fiber F2 (see FIG. 1) extending from the optical receiver unit of the corresponding information processing terminal 20. The optical transmission lines 3 and 4 can comprise, for example, single-mode optical fibers (SMF).

The optical coupling unit 6 couples the other ends of the N optical transmission lines 3 to one end of the optical transmission line 5. The optical coupling unit 6 is, for example, an optical coupler with N inputs and 1 output. In a case where N is equal to or greater than 4, the optical coupling unit 6 can be formed by combining a plurality of optical couplers having 2 inputs and 1 output. In the shown example, that is, in a case where N is 4, the optical coupling unit 6 is configured to include three optical couplers 61 having 2 inputs and 1 output. Specifically, the other ends of two optical transmission lines 3 are coupled to the two input ends of one optical coupler 61, and the other ends of the remaining two optical transmission lines 3 are coupled to the two input ends of a second optical coupler 61. The output ends of these two optical couplers 61 are coupled, through optical fibers, to the two input ends of the third optical coupler 61. The output end of the third optical coupler 61 is coupled to the one end of the optical transmission line 5. The optical transmission line 5 can comprise, for example, an SMF.

The optical branching unit 7 couples the other end of the optical transmission line 5 to the other ends of the N optical transmission lines 4. The optical branching unit 7 is, for example, an optical coupler with 1 input and N outputs. In a case where N is equal to or greater than 4, the optical branching unit 7 can be formed by combining a plurality of optical couplers having 1 input and 2 outputs. In the shown example, that is, in a case where N is 4, the optical branching unit 7 is configured to include three optical couplers 71 having 1 input and 2 outputs. Specifically, the other end of the optical transmission line 5 is coupled to the input end of one optical coupler 71. The two output ends of this optical coupler 71 are coupled, through optical fibers, to the respective input ends of the other two optical couplers 71. The two output ends of one of these two optical couplers 71 are coupled to the other ends of two of the optical transmission lines 4, and the two output ends of the remaining optical coupler 71 are coupled to the other ends of the remaining two optical transmission lines 4.
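The tree construction described for N = 4 generalizes: pairing inputs stage by stage yields a binary tree of 2-input/1-output couplers, using N - 1 couplers for N inputs. A minimal sketch (Python, hypothetical helper names) of the idea; the branching unit is the mirror image built from 1-input/2-output couplers.

```python
# Build an N-input coupling unit out of 2-input/1-output couplers by pairing
# signals stage by stage.

def build_coupling_tree(inputs: list[str]) -> tuple[str, int]:
    """Return (label of the single combined output, number of 2x1 couplers)."""
    couplers = 0
    stage = inputs
    while len(stage) > 1:
        nxt = []
        for i in range(0, len(stage) - 1, 2):
            nxt.append(f"coupler({stage[i]},{stage[i + 1]})")
            couplers += 1
        if len(stage) % 2:        # an odd leftover passes straight to the next stage
            nxt.append(stage[-1])
        stage = nxt
    return stage[0], couplers


output, count = build_coupling_tree(["tx1", "tx2", "tx3", "tx4"])
print(count)   # 3 -- matching the three optical couplers 61 in the N = 4 example
print(output)  # coupler(coupler(tx1,tx2),coupler(tx3,tx4))
```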
In the optical coupling unit 6, it is preferable that the coupling ratios of the N optical transmission lines 3 are substantially the same as each other. In the optical branching unit 7, it is preferable that the branching ratios of the N optical transmission lines 4 are substantially the same as each other.

The optical amplifier 8 is provided in the middle of the optical transmission line 5. That is, the optical transmission line 5 has three portions 51, 52, and 53; the input end of the optical amplifier 8 is optically coupled to the optical coupling unit 6 through the portions 51 and 52, and the output end of the optical amplifier 8 is optically coupled to the optical branching unit 7 through the portion 53. The optical amplifier 8 amplifies an optical signal propagating through the optical transmission line 5 as an optical signal, without converting it into an electrical signal. Examples of such an optical amplifier 8 include rare-earth-doped fiber amplifiers such as an erbium-doped fiber amplifier (EDFA), a praseodymium-doped fiber amplifier (PDFA), and a thulium-doped fiber amplifier (TDFA), as well as a semiconductor optical amplifier (SOA), and the like.

Here, the movement of optical signals in the N optical transmission lines 3, the N optical transmission lines 4, the optical transmission line 5, the optical coupling unit 6, the optical branching unit 7, and the optical amplifier 8 will be described. When an optical signal is transmitted from the optical transmitter module 24, which is the optical transmitter unit of a certain information processing terminal 20, the optical signal propagates through one optical transmission line 3 and reaches the optical coupling unit 6. The optical signal passes through the optical coupling unit 6 and is amplified by the optical amplifier 8 while propagating through the optical transmission line 5. Thereafter, this optical signal is distributed to the N optical transmission lines 4 by the optical branching unit 7. That is, the same optical signal propagates through the N optical transmission lines 4. The optical signal having propagated through each optical transmission line 4 reaches all the information processing terminals 20 through the optical fibers F2, and is received in the optical receiver module 25, which is the optical receiver unit of each information processing terminal 20. The flow of the optical signal is the same when an optical signal is transmitted from the optical transmitter unit of any other information processing terminal 20. Meanwhile, the optical signal received in the information processing terminal 20 that is the sending source of the optical signal is ignored by the CPU 21 inside that information processing terminal 20. In this way, the optical network unit 2 assists with mutual optical communication among the N information processing terminals 20.

The configuration of the optical network unit 2 will be further described. The N optical signal detection units 9 detect optical signals propagating through the corresponding optical transmission lines 3. Each optical signal detection unit 9 includes an optical branching unit 91 and a light receiving element 92. The optical branching unit 91 is, for example, an optical coupler having 1 input and 2 outputs, and is provided in the middle of the optical transmission line 3. The optical transmission line 3 has two portions 31 and 32. The input end of the optical branching unit 91 is optically coupled to the optical input port 13 through the portion 31.
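Because each ideal 1x2 (or 2x1) coupler splits optical power in half, each stage of the tree costs about 3 dB, so an N-way combine or branch costs roughly 10·log10(N) dB; the single amplifier 8 on the trunk can make up for both tree losses at once. A back-of-the-envelope budget, under idealized lossless-coupler assumptions (real couplers add excess loss, and the gain figure below is hypothetical):

```python
# Rough link budget for the combine -> amplify -> broadcast trunk.
# Assumes ideal couplers (3 dB per stage) and ignores fiber/connector losses.

import math


def receive_power_dBm(n_terminals: int, tx_dBm: float, amp_gain_dB: float) -> float:
    stages = math.ceil(math.log2(n_terminals))
    combine_loss_dB = 3.0 * stages   # 2x1 coupler tree on the transmit side
    branch_loss_dB = 3.0 * stages    # 1x2 coupler tree on the receive side
    return tx_dBm - combine_loss_dB + amp_gain_dB - branch_loss_dB


# N = 4, 0 dBm launch power, and an assumed 12 dB amplifier gain:
print(receive_power_dBm(4, tx_dBm=0.0, amp_gain_dB=12.0))  # 0.0 dBm per receiver
```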
One output end of the optical branching unit 91 is optically coupled to the optical coupling unit 6 through the portion 32. The light receiving element 92 is, for example, a photodiode. The light receiving element 92 is optically coupled to the other output end of the optical branching unit 91 through an optical fiber. The light receiving element 92 receives a part of the optical signal propagating through the optical transmission line 3 from the optical branching unit 91, and generates an electrical signal according to the light intensity of that part of the optical signal.

The communication control unit 10 generates a collision avoidance signal Sg on the basis of the detection results of the N optical signal detection units 9. The collision avoidance signal Sg is a signal for avoiding a collision between optical signals. In an example, when the communication control unit 10 confirms, from a certain optical signal detection unit 9, that an optical signal is being sent out, the communication control unit 10 transmits the collision avoidance signal Sg restricting the sending of optical signals to all the other information processing terminals 20 until the sending of that optical signal is completed.

The avoidance signal sending unit 11 includes an optical coupling unit 111 and a light-emitting element 112. The light-emitting element 112 is electrically connected to the communication control unit 10, and receives as input the collision avoidance signal Sg generated in the communication control unit 10. The light-emitting element 112 converts the collision avoidance signal Sg into an optical signal and outputs the converted signal. The light-emitting element 112 is, for example, a laser diode. The optical coupling unit 111 is, for example, an optical coupler having 2 inputs and 1 output. The optical coupling unit 111 is provided in the middle of the optical transmission line 5. In an example, the optical coupling unit 111 is provided between the optical coupling unit 6 and the optical amplifier 8. That is, one input end of the optical coupling unit 111 is optically coupled to the output end of the optical coupling unit 6 through the portion 51 of the optical transmission line 5. The output end of the optical coupling unit 111 is optically coupled to the optical amplifier 8 through the portion 52 of the optical transmission line 5. The other input end of the optical coupling unit 111 is optically coupled to the light-emitting element 112 through an optical fiber.

The collision avoidance signal Sg, which is output from the light-emitting element 112 as an optical signal, reaches the optical amplifier 8 through the optical coupling unit 111. Thereafter, the collision avoidance signal Sg is distributed to the N optical transmission lines 4 by the optical branching unit 7 and sent out to the N information processing terminals 20. The collision avoidance signal Sg is also delivered to the information processing terminal 20 that is currently outputting an optical signal, but it is ignored in that information processing terminal 20.

Meanwhile, the optical coupling unit 111 may instead be provided in the middle of the optical transmission line 5 between the optical amplifier 8 and the optical branching unit 7. In this case, one input end of the optical coupling unit 111 is optically coupled to the output end of the optical amplifier 8 through a portion of the optical transmission line 5, and the output end of the optical coupling unit 111 is optically coupled to the optical branching unit 7 through another portion of the optical transmission line 5.
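The control rule just described can be stated compactly: while exactly one detection unit sees light, every other terminal is told to hold off. A minimal sketch (Python; the names and the handling of the no-sender/collision cases are assumptions, not taken from the patent):

```python
# Communication control unit logic: compute, from the N detection results,
# which terminals must refrain from sending (the content of signal Sg).

def collision_avoidance_signal(active: list[bool]) -> list[bool]:
    """active[i] is True while detection unit i sees light on line i.
    Returns hold[i]: True if terminal i must not start sending."""
    senders = [i for i, a in enumerate(active) if a]
    if len(senders) == 1:
        return [i != senders[0] for i in range(len(active))]
    # No sender, or an actual collision already in progress: hold everyone
    # off until the trunk is quiet (a policy choice made for this sketch).
    return [len(senders) > 0] * len(active)


print(collision_avoidance_signal([False, True, False, False]))
# [True, False, True, True] -- only terminal 1 may continue sending
```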
The other input end of the optical coupling unit 111 is optically coupled to the light-emitting element 112 through an optical fiber.

The communication mechanism 12 is a portion that communicates with an upper-level device, which is not shown. The communication mechanism 12 is electrically connected to the communication control unit 10, and reports the communication status, communication content, and the like of the optical network system 1 to the upper-level device.

Examples of schemes for avoiding a collision between optical signals other than the above include dynamic bandwidth allocation (DBA), carrier sense multiple access with collision detection (CSMA/CD), carrier sense multiple access with collision avoidance (CSMA/CA), and a token ring.

DBA is a scheme of dynamically allocating a transmission band, that is, a channel of an optical signal, in accordance with the amount of traffic. In this scheme, instead of allocating an individually independent transmission band to each of the N information processing terminals 20, a transmission band is allocated only to an information processing terminal 20 that transmits an optical signal, and the allocation of the transmission band is flexibly changed every time the information processing terminal 20 that transmits an optical signal changes. In this scheme, the communication control unit 10 allocates the transmission band. The communication control unit 10 transmits, to each information processing terminal 20, a signal indicating the transmission band allocated to it, instead of the collision avoidance signal Sg.

CSMA/CD is a communication scheme in which, when a plurality of instruments share one communication line, the right to use the line can be arbitrated even when there is no instrument that performs central monitoring and control. CSMA/CD is adopted in Ethernet (registered trademark) and the like. Specifically, an information processing terminal 20 that desires to transmit an optical signal monitors the status of optical signals flowing through the optical network unit 2, confirms that none of the information processing terminals 20 is transmitting an optical signal, and then starts to transmit. If another information processing terminal 20 happens to start transmitting simultaneously, both of them interrupt the communication in order to prevent data from being corrupted by a collision between the optical signals. Thereafter, both information processing terminals 20 wait for a random period of several milliseconds and then resume transmission. Since the probability that the randomly determined waiting times are exactly the same is low, the information processing terminal 20 that determined the shorter waiting time transmits first, and the other information processing terminal 20 can resume its transmission after the first one completes. In this scheme, the communication control unit 10 and the avoidance signal sending unit 11 are not required.

In CSMA/CA, each information processing terminal 20 continuously monitors the shared transmission band, and starts to transmit an optical signal when it confirms that the transmission band has been free for a certain period of time or longer. The waiting time of each information processing terminal 20 is determined by adding a back-off to the common minimum waiting time. The back-off is a random time that is independently determined by each information processing terminal 20.
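The CSMA/CD recovery step lends itself to a short sketch (Python; the timing values are illustrative assumptions): after a collision, each terminal draws a random wait, and the shorter draw retransmits first.

```python
# CSMA/CD-style random back-off after a collision, as described above.

import random


def backoff_after_collision(terminals: list[str], max_wait_ms: float = 10.0):
    """Each collided terminal draws a random wait; the shortest draw wins."""
    waits = {t: random.uniform(0.0, max_wait_ms) for t in terminals}
    order = sorted(terminals, key=waits.get)
    # The first terminal in `order` resumes transmitting; the others sense
    # the line busy and resume only after it has finished.
    return order, waits


order, waits = backoff_after_collision(["terminal_A", "terminal_B"])
print(f"{order[0]} retransmits first after {waits[order[0]]:.2f} ms")
```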
This prevents transmissions from being started all at once after a certain period of time has elapsed since the immediately preceding communication was completed. Whether an optical signal has actually been transmitted correctly is determined by whether an ACK (acknowledgement) signal from the information processing terminal 20 on the receiving side reaches the information processing terminal 20 on the transmitting side. In a case where there is no ACK signal from the information processing terminal 20 on the receiving side, it is considered that there was a communication failure, and the information processing terminal 20 on the transmitting side retransmits the optical signal. In this scheme, too, the communication control unit 10 and the avoidance signal sending unit 11 are not required.

In the token ring, the N information processing terminals 20 that participate in the network are connected to each other so as to logically draw one ring. Physically, however, a star-type connection mode is formed in which the N information processing terminals 20 are connected to one line concentrator, that is, the optical network unit 2. In order to prevent a plurality of information processing terminals 20 from simultaneously sending out optical signals and the optical signals from colliding with each other, a scheme called token passing is used. That is, a signal called a token, indicating the right to transmit data, circulates around the ring at high speed, and an information processing terminal 20 that desires to transmit data sends out the data and the token together to the network when a free token arrives at it. A token to which data is attached in this way is called a busy token, and follows the ring to the information processing terminal 20 that is its destination. The destination information processing terminal 20 receives the data and then returns the received busy token along the ring to the information processing terminal 20 that is the transmission source. The transmission-source information processing terminal 20 converts the returned busy token into a free token and returns it to the network, so that another information processing terminal 20 can transmit data. In this scheme, too, the communication control unit 10 and the avoidance signal sending unit 11 are not required.

Effects obtained by the optical network system 1 according to the present embodiment described above will now be described. In the optical network system 1 of the present embodiment, the optical signal transmitted from each information processing terminal 20 is aggregated in the optical transmission line 5 and then sent out to all the information processing terminals 20. Therefore, the configuration of the optical network system can be simplified compared with, for example, a case where a one-to-one optical transmission line is provided between the information processing terminals 20.

As in the present embodiment, the optical amplifier 8 may be provided in the middle of the optical transmission line 5. For example, in a configuration in which a one-to-one optical transmission line is provided between the information processing terminals 20, if an optical amplifier is provided in the middle of each optical transmission line, an amplifier is needed for each of a large number of optical transmission lines, which results in an increase in costs and further complication of the system.
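The token-passing cycle (free token, busy token, delivery, token freed by the source) can be sketched as follows (Python; the data structure and station ordering are illustrative assumptions):

```python
# One-step-at-a-time model of token passing on a logical ring.

from dataclasses import dataclass


@dataclass
class Token:
    busy: bool = False
    src: int | None = None
    dst: int | None = None
    data: str | None = None


def ring_step(token: Token, station: int, tx_queue: dict) -> Token:
    """Advance the token past one station; tx_queue maps station -> (dst, data)."""
    if not token.busy and station in tx_queue:
        dst, data = tx_queue.pop(station)
        return Token(busy=True, src=station, dst=dst, data=data)  # seize free token
    if token.busy and station == token.dst:
        print(f"station {station} received {token.data!r}")        # deliver data
    if token.busy and station == token.src:
        return Token()                                             # source frees token
    return token


token, queue = Token(), {1: (3, "position update")}
for station in [0, 1, 2, 3, 0, 1]:  # the token circulates around the ring
    token = ring_step(token, station, queue)
```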
On the other hand, according to the optical network system 1 of the present embodiment, the optical signal transmitted from each information processing terminal 20 is temporarily aggregated in the optical transmission line 5, and thus the number of optical amplifiers 8 can be significantly reduced by providing the optical amplifier 8 on the optical transmission line 5. Therefore, it is possible to suppress the increase in costs and the complication of the system caused by providing optical amplifiers.

As in the present embodiment, the optical network system 1 may include the N optical signal detection units 9, the communication control unit 10, and the avoidance signal sending unit 11. When optical signals are sent out simultaneously from two or more information processing terminals 20, these optical signals collide with each other in the optical transmission line 5 and become unreceivable. In this optical network system 1, the communication control unit 10 sends the collision avoidance signal Sg to each information processing terminal 20, and thus it is possible to adjust the sending timings of the optical signals between the information processing terminals 20 and to avoid a collision between the optical signals. Alternatively, the scheme for avoiding a collision between optical signals may be DBA, CSMA/CA, CSMA/CD, or a token ring. Using any of these, it is possible to adjust the sending timings of the optical signals between the information processing terminals 20 and to reduce collisions between the optical signals.

FIG. 4 is a diagram schematically illustrating a configuration of an autonomous driving system 100 as an application example of the present embodiment. This autonomous driving system 100 includes N information processing terminals 20 and the optical network unit 2 connected to the N information processing terminals 20. The drawing illustrates a case where N is 3. Each information processing terminal 20 is installed on a roadside at a substantially constant interval, for example, an interval of 100 meters. Each information processing terminal 20 transmits and receives various types of information for autonomous driving to and from a traveling automobile 102. Such an automobile 102 is called a connected car. Depending on the positional relationship between an information processing terminal 20 and the automobile 102, the information processing terminal 20 may not be able to detect the presence of the automobile 102, for example because an obstacle is present between them. Even in such a case, the N information processing terminals 20 mutually provide and share position information relating to the automobile 102 and the like, and thus all the information processing terminals 20 can reliably know of the presence of the automobile 102. The optical network system 1 is used, for example, for such information sharing. In a case where a network system, a wireless system, or the like using an electrical signal is used, there is a concern that communication between the information processing terminals may be interrupted by lightning, noise, or the like, which is not preferable for the automobile 102 during autonomous driving. According to the autonomous driving system 100 of the present embodiment, it is possible to reduce the concern of communication being interrupted between the information processing terminals by using the above-described optical network system 1.
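The sharing step can be pictured as each terminal broadcasting the vehicle positions it observes and merging what it hears back, so a terminal whose own view is blocked still tracks the car. A small sketch (Python; the record fields are illustrative assumptions, not specified by the patent):

```python
# Roadside terminals share detected vehicle positions over the broadcast
# network; each terminal keeps the freshest report per vehicle.

from dataclasses import dataclass


@dataclass
class VehicleReport:
    terminal_id: int     # which terminal 20 made the observation
    vehicle_id: str
    position_m: float    # position along the road, in metres
    timestamp_s: float


def merge_report(view: dict, incoming: VehicleReport) -> None:
    """Keep the most recent report per vehicle, whoever observed it."""
    current = view.get(incoming.vehicle_id)
    if current is None or incoming.timestamp_s > current.timestamp_s:
        view[incoming.vehicle_id] = incoming


view: dict = {}
merge_report(view, VehicleReport(0, "car-7", 120.0, 10.0))
merge_report(view, VehicleReport(2, "car-7", 310.0, 12.5))  # relayed by terminal 2
print(view["car-7"].position_m)  # 310.0 -- the latest shared position
```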
The optical network system and the autonomous driving system according to the present disclosure are not limited to the above-described embodiment, and can be modified in various other ways. For example, the optical network system 1 of the embodiment includes the optical amplifier 8. However, in a case where the output power of the optical signal from each information processing terminal 20 is sufficiently large and each information processing terminal 20 can receive the optical signal even when optical losses in the optical coupling unit 6, the optical branching unit 7, and the like are taken into consideration, the optical amplifier 8 may be omitted. In the embodiment, the optical coupling unit 111 of the avoidance signal sending unit 11 is provided on the optical transmission line 5; however, N optical coupling units 111 may instead be provided, one in each of the N optical transmission lines 4.

REFERENCE SIGNS LIST

1 Optical network system
2 Optical network unit
3 First optical transmission line
4 Second optical transmission line
5 Third optical transmission line
6 Optical coupling unit
7 Optical branching unit
8 Optical amplifier
9 Optical signal detection unit
10 Communication control unit
11 Avoidance signal sending unit
12 Communication mechanism
13 Optical input port
14 Optical output port
20 Information processing terminal
21 CPU
22 RAM
23 ROM
24 Optical transmitter module
25 Optical receiver module
26 Auxiliary storage device
31, 32, 51, 52, 53 Portion
61, 71 Optical coupler
91 Optical branching unit
92 Light receiving element
100 Autonomous driving system
102 Automobile
111 Optical coupling unit
112 Light-emitting element
F1, F2 Optical fiber
Sg Collision avoidance signal
Patent ID: 11863236
DETAILED DESCRIPTION OF THE EMBODIMENTS

Hereinafter, embodiments of the present disclosure will be described in detail with reference to the drawings.

[First Example of Modulation and Demodulation Scheme for Visible Light Communication]

In this embodiment, an optical communication method is used that transmits and receives modulated signals as optical signals. First, a first example of visible light communication, which is one example of an optical communication method that can be applied to each of the embodiments of the present disclosure, will be given.

<Line Scan Sampling>

Smartphones and digital cameras, for example, are equipped with an image sensor such as a CMOS (Complementary Metal Oxide Semiconductor) sensor. The entire scene in a single image captured by a CMOS sensor is not captured at a single instant; rather, it is captured line by line using a rolling shutter method, whereby the sensor reads out the amount of light received line by line, as shown in "Advanced Image Sensor", The Journal of The Institute of Image Information and Television Engineers, vol. 66, no. 3, pp. 172-173, 2012, and "High Speed Technology Trends in CMOS Image Sensors", The Journal of The Institute of Image Information and Television Engineers, vol. 66, no. 3, pp. 174-177, 2012. Accordingly, taking the readout time into account, the starting and stopping of the reception of light is controlled so that there is a time shift from line to line. In other words, images captured by the CMOS sensor are constructed from a plurality of lines captured with a slight time lag between each line.

In the first example of a visible light communication method, high-speed reception of visible light signals is achieved by a method that focuses on this characteristic of the CMOS sensor. In other words, by utilizing the slight difference in exposure time between lines, the luminance and color of the light source at a plurality of points in time can be measured line by line from a single image (the image captured by the image sensor, i.e., the "captured image"), making it possible to capture a modulated signal faster than the frame rate of the image sensor, as illustrated in FIG. 1. Hereinafter, this sampling technique is referred to as "line scan sampling", and one line of pixels that are exposed at the same time is referred to as an "exposure line". Note that line scan sampling can be implemented using the rolling shutter scheme of a CMOS sensor, but even when the rolling shutter scheme is implemented using a sensor other than a CMOS sensor, such as a charge-coupled device (CCD) sensor or an organic CMOS sensor exemplified by "Proposal of New Organic CMOS Image Sensor for Reduction in Pixel Size", FUJIFILM RESEARCH & DEVELOPMENT, no. 55, pp. 14-17, 2010, line scan sampling can be implemented in the same manner.

However, when an ordinary photography setting of the camera function (the function for capturing a video or still image) is used, even if a rapidly flashing light source is captured, the flashing will not appear as a striped pattern extending along the exposure lines. This is because, with this setting, the exposure time is sufficiently longer than the flash cycle, as illustrated in FIG. 2, so the change in luminance resulting from the light source flashing (the light-emission pattern) is averaged within each exposure, the variation in pixel values between exposure lines is small, and the result is a substantially uniform image.
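Line scan sampling amounts to treating each pixel row as one time sample. A minimal sketch (Python with NumPy; the thresholding step is an illustrative assumption, not the patent's demodulator):

```python
# Recover a light source's flashing waveform from a single rolling-shutter
# frame: each row (exposure line) was exposed at a slightly different time,
# so row-wise mean luminance gives one sample per line.

import numpy as np


def line_scan_samples(frame: np.ndarray) -> np.ndarray:
    """frame: 2-D grayscale image, rows ordered by exposure time.
    Returns one luminance sample per exposure line."""
    return frame.mean(axis=1)


# A 1080-row frame yields 1080 time samples instead of one sample per frame.
frame = np.random.randint(0, 256, size=(1080, 1920)).astype(np.float32)
waveform = line_scan_samples(frame)
bits = (waveform > waveform.mean()).astype(int)  # crude bright/dark slicing
print(len(waveform), bits[:8])
```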
In contrast, by setting the exposure time in accordance with the flash cycle of the light source as illustrated in FIG. 3, the state of the flashing of the light source (the light-emission pattern) can be observed as a change in luminance between exposure lines. In FIG. 3, the length of the exposure period is set slightly longer than the shortest period of a continuous light-emitting state, and the difference in start times of the exposure periods between adjacent exposure lines is set longer than the shortest period of a continuous light-emitting state, but the exposure period setting in line scan sampling is not limited to this example. For example, the length of the exposure period may be set shorter than the shortest period of a continuous light-emitting state, or may be set to approximately double the length of the shortest period of a continuous light-emitting state.

Moreover, in addition to a method in which the optical signal is expressed as, for example, a combination of square waves as illustrated in FIG. 4A, a method in which the optical signal changes continuously may be used as the optical communication method. In any case, with respect to the sampling rate required to receive and demodulate optical signals, a reception device using such an optical communication method sets the difference between the start times, or between the end times, of temporally neighboring exposure lines to be less than or equal to the sampling interval corresponding to the sampling rate. Moreover, the reception device sets the length of the exposure period to be less than or equal to the length of the sampling interval. However, the reception device may set the length of the exposure period to less than or equal to 1.5 times the sampling interval, or to less than or equal to 2 times the sampling interval. For example, the exposure lines are designed so as to be parallel to the lengthwise direction of the image sensor. In such cases, in one example, assuming the frame rate is 30 fps (frames per second), at a resolution of 1920×1080, 32,400 or more samples are obtained each second, and at a resolution of 3840×2160, 64,800 or more samples are obtained each second.

<Line Scan Sampling Application Example>

Note that in the above description, line scan sampling in which a signal indicating the amount of light received per line is read out has been described, but the method of sampling optical signals using an image sensor such as a CMOS sensor is not limited to this line scan sampling example. A variety of methods that can obtain signals sampled at a sampling rate higher than the frame rate used in typical video capturing can be implemented as a sampling method for optical signal reception. For example, a method of controlling the exposure time per pixel and reading out a signal, or a method of controlling the exposure time per group of pixels arranged in a shape other than a line and reading out a signal, may be used by utilizing the global shutter method that has a shutter function for each pixel, disclosed in "Advanced Image Sensor", The Journal of The Institute of Image Information and Television Engineers, vol. 66, no. 3, pp. 172-173, 2012, and "High Speed Technology Trends in CMOS Image Sensors", The Journal of The Institute of Image Information and Television Engineers, vol. 66, no. 3, pp. 174-177, 2012.
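The sample-rate figures quoted above follow directly from one sample per exposure line:

```python
# Line-scan sample rate = frame rate x number of exposure lines per frame.

def line_scan_rate(frame_rate_fps: float, lines_per_frame: int) -> float:
    return frame_rate_fps * lines_per_frame


print(line_scan_rate(30, 1080))  # 32400.0 samples/s at 1920x1080, 30 fps
print(line_scan_rate(30, 2160))  # 64800.0 samples/s at 3840x2160, 30 fps
```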
Moreover, a method may be used in which a signal is read out a plurality of times from the same pixel during a period corresponding to a single frame at the frame rate used in typical video capturing.

<Frame Sampling>

Furthermore, by employing the per-pixel shutter function disclosed in "Advanced Image Sensor", The Journal of The Institute of Image Information and Television Engineers, vol. 66, no. 3, pp. 172-173, 2012, and "High Speed Technology Trends in CMOS Image Sensors", The Journal of The Institute of Image Information and Television Engineers, vol. 66, no. 3, pp. 174-177, 2012, it is possible to sample optical signals with a method that simply speeds up the frame rate. For example, the embodiments to be described hereinafter can be realized with any of the methods described above: "Line Scan Sampling", "Line Scan Sampling Application Example", and "Frame Sampling".

<Light Source and Modulation Scheme>

In visible light communication, for example, an LED (Light Emitting Diode) can be used as a transmitter. LEDs are commonly used as light sources in lamps or in display backlights, and are capable of flashing rapidly. However, light sources that are used as visible light communication transmitters cannot be allowed to flash in an uncontrolled manner when performing visible light communication. If the changes in luminance made for visible light communication are recognizable to the human eye, the original functionality of the light source as a lamp will be lost. Accordingly, the transmission signal needs to be emitted at a desired brightness while being imperceptible to the human eye.

One example of a modulation scheme that satisfies these conditions is 4 PPM (4-Pulse Position Modulation). As illustrated in FIG. 4A, 4 PPM is a scheme in which two bits are expressed by a group of four time slots, each indicating either bright or dark light emitted by the light source. Moreover, as illustrated in FIG. 4A, in 4 PPM, three of the four slots are bright and one slot is dark. Accordingly, regardless of the content of the signal, the average brightness (average luminance) is 3/4 = 75%. For comparison, one example of a similar scheme is the Manchester encoding illustrated in FIG. 4B. In the Manchester coding scheme, one bit is expressed with two states, so the modulation efficiency is 50%, the same as 4 PPM; however, of the two states, one is bright and one is dark, so the average luminance is 1/2 = 50%. In other words, 4 PPM is more suitable than Manchester encoding as a modulation scheme for visible light communication. However, since changes in luminance that are recognizable to the human eye do not adversely affect communication capability, depending on the application there may be no problem in using a method in which the changes in luminance are recognizable to the human eye. Accordingly, the transmitter (light source) may use, for example, an amplitude shift keying (ASK) method, a phase shift keying (PSK) method, or a pulse amplitude modulation (PAM) method to generate the modulated signal and pulse the light source to emit light.

<Example of Overall Configuration of Communication System>

As illustrated in FIG. 5, the communication system that performs visible light communication includes at least a transmitter that transmits (emits) optical signals and a receiver that receives optical signals.
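The 75% figure for 4 PPM can be checked mechanically. A sketch of the mapping (Python; which slot is darkened for each 2-bit value is an assumed convention, since the text does not fix it):

```python
# 4-PPM: each 2-bit symbol darkens exactly one of four slots, so every
# symbol is 3 bright slots + 1 dark slot, and average luminance is 3/4.

def ppm4_encode(bits: str) -> list[int]:
    assert len(bits) % 2 == 0, "4 PPM carries 2 bits per 4-slot symbol"
    slots = []
    for i in range(0, len(bits), 2):
        dark = int(bits[i:i + 2], 2)          # assumed: bit pair picks the dark slot
        slots += [0 if s == dark else 1 for s in range(4)]
    return slots


signal = ppm4_encode("0110")
print(signal)                     # [1, 0, 1, 1, 1, 1, 0, 1]
print(sum(signal) / len(signal))  # 0.75 -- constant regardless of the data
```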
For example, there are two types of transmitters: a variable-content transmitter using light communication, which changes the transmission content depending on the image or content to be displayed, and a fixed-content transmitter using light communication, which continues transmitting fixed transmission content. However, even with a configuration including only the variable-content transmitter or only the fixed-content transmitter, a communication system that communicates via light can be realized. The receiver can receive an optical signal from the transmitter, obtain, for example, relevant information associated with the optical signal, and provide it to the user.

This concludes the summary of the visible light communication method, but communication methods applicable to the light communication described in the following embodiments are not limited to this example. For example, the light emitter in the transmitter may transmit data using a plurality of light sources. Moreover, the light receiver in the reception device need not be an image sensor such as a CMOS sensor, and may employ a communication method that can use a device capable of converting an optical signal into an electrical signal, such as a photodiode. In such cases, since there is no need to perform sampling using the above-described line scan sampling, such a light receiver is applicable even to methods that require 32,400 or more samples per second. Moreover, depending on the application, a wireless communication method that uses light in frequencies outside of the visible light range, such as infrared light or ultraviolet light, may be used.

Embodiment 1

FIG. 6 illustrates one example of configurations of device 100 and terminal 150 according to this embodiment.

[Configuration of Device 100]

Device 100 (which corresponds to the visible light communication transmitter) includes a visible-light light source, lamp, or light (hereinafter also expressed by the all-encompassing term "light source") such as a light emitting diode (LED). Note that hereinafter, device 100 is also referred to as the "first device". In first device 100 in FIG. 6, transmission unit 102 receives an input of, for example, information 101 related to a location or position. Moreover, transmission unit 102 may receive an input of information 105 related to time. Moreover, transmission unit 102 may receive inputs of both information 101 related to a location or position and information 105 related to time. Transmission unit 102 receives inputs of information 101 related to a location or position and/or information 105 related to time, generates modulated signal (for optical communication) 103 based on the input signal(s), and outputs modulated signal 103. Modulated signal 103 is then transmitted from light source 104, for example.

Next, examples of information 101 related to a location or position will be given.

Example 1

Information 101 related to a location or position may be information indicating the latitude and/or longitude of a location or position. For example, information 101 related to a location or position may be information indicating "45 degrees north latitude, 135 degrees east longitude".

Example 2

Information 101 related to a location or position may be information indicating an address. For example, information 101 related to a location or position may be information indicating "1-1-1 XYZ-machi, Chiyoda-ku, Tokyo-to".
Example 3

Information 101 related to a location or position may be information indicating a building or facility, for example. For example, information 101 related to a location or position may be information indicating "Tokyo Tower".

Example 4

Information 101 related to a location or position may be information indicating a unique location or position of something at a building or facility, for example. For example, assume there are five parking spaces for automobiles in a parking lot, and that the first through fifth parking spaces are named A-1 through A-5, respectively. In this example, information 101 related to a location or position may be information indicating, for example, "A-3". This example is not limited to parking spaces in a parking lot. Information 101 related to a location or position may be, for example, information related to a section, a seat, a store, a facility, etc., at, for example, a concert facility, a stadium such as a baseball, soccer, or tennis stadium, an airplane, an airport lounge, a railway, a station, etc.

This concludes the examples of information 101 related to a location or position. Note that methods for configuring information 101 related to a location or position are not limited to the above examples.

[Configuration of Terminal 150]

Terminal 150 in FIG. 6 (which corresponds to the visible light communication receiver) receives modulated signal 103 transmitted from first device 100. Light receiver (light reception device) 151 is, for example, an image sensor such as a complementary metal oxide semiconductor (CMOS) or organic CMOS image sensor. Light receiver 151 receives light including the modulated signal transmitted from first device 100, and outputs reception signal 152. Note that reception signal 152 output from light receiver 151 may be a signal including an image or video obtained by an image sensor, or may be a signal output by an element that performs some other photo-electric conversion (converting light into an electric signal). In the following description, when a reception-side device is described as receiving a modulated signal without giving any further details on the processes performed by light receiver 151, this means that the reception-side device obtains a modulated signal for transmitting information, or a modulated signal of an image or video together with a modulated signal for transmitting information, by photo-electric conversion (converting light into an electric signal) of light including the modulated signal by light receiver 151. However, the method described above for receiving the modulated signal at the reception-side device is merely one non-limiting example.

Reception unit 153 receives an input of reception signal 152, performs processing such as demodulation and error correction decoding on the modulated signal included in reception signal 152, and outputs reception data 154. Data analyzer 155 receives an input of reception data 154, estimates, for example, the location or position of terminal 150 by analyzing reception data 154, and outputs information 156 including at least information on the location or position of terminal 150. Display 157 receives an input of information 156, and displays information related to the location or position of terminal 150 based on the information on the location or position of terminal 150 included in information 156.

[Frame Configuration]

FIG. 7 illustrates one example of a frame configuration of a modulated signal transmitted by first device 100. In FIG. 7, time is represented on the horizontal axis.
For example, first device 100 transmits preamble 201, and then transmits control information symbol 202, symbol 203 related to location information or position information, and symbol 204 related to time information. Preamble 201 is a symbol used by terminal 150, which receives the modulated signal transmitted by first device 100, to perform, for example, signal detection, time synchronization, and/or frame synchronization. Control information symbol 202 is, for example, a symbol including data on, for example, the configuration method of the modulated signal, the error correction encoding scheme used, and/or the frame configuration method. Symbol 203 related to location information or position information is a symbol including information 101 related to a location or position illustrated in FIG. 6. Note that the frame may include symbols other than symbols 201, 202, and 203. For example, as illustrated in FIG. 7, the frame may include symbol 204 related to time information. Here, symbol 204 related to time information includes information 105 related to the time at which first device 100 transmitted the modulated signal. Note that the configuration of the frame of the modulated signal transmitted by first device 100 is not limited to the example illustrated in FIG. 7, and the symbols included in the modulated signal are not limited to the configuration illustrated in FIG. 7. The frame may include symbols including other data and/or information.

Advantageous Effects

Next, the advantageous effects obtained when first device 100 transmits a modulated signal and terminal 150 receives that modulated signal, as illustrated in FIG. 6 and FIG. 7, will be described. Since first device 100 transmits the modulated signal via visible light, a terminal 150 capable of receiving the modulated signal cannot be in a location significantly far from the location of first device 100. Accordingly, by obtaining the location or position information transmitted by first device 100, terminal 150 can achieve an advantageous effect whereby it is possible to easily (i.e., without having to perform complicated signal processing) obtain accurate position information. Moreover, when first device 100 is disposed in a location where reception of satellite radio waves from a GPS satellite is difficult, it is possible to achieve an advantageous effect whereby terminal 150 can securely obtain accurate position information even in such locations, by receiving the modulated signal transmitted by first device 100.

Embodiment 2

In this embodiment, a configuration in which a plurality of the first devices 100 described in Embodiment 1 are provided will be described. In this embodiment, for example, as illustrated in FIG. 8, first device #1 301-1, having the same configuration as first device 100 illustrated in FIG. 6, transmits a modulated signal. Terminal 302, having the same configuration as terminal 150 illustrated in FIG. 6, receives the modulated signal transmitted by first device #1 301-1 and, for example, obtains information related to the location or position of first device #1 301-1 and information related to time pertaining to first device #1 301-1. Similarly, first device #2 301-2, having the same configuration as first device 100 illustrated in FIG. 6, transmits a modulated signal. Terminal 302 receives the modulated signal transmitted by first device #2 301-2 and, for example, obtains information related to the location or position of first device #2 301-2 and information related to time pertaining to first device #2 301-2.
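The frame of FIG. 7 maps naturally onto a simple data structure. A sketch (Python; the field contents and serialization are illustrative assumptions, as the patent does not define a byte-level format):

```python
# The FIG. 7 frame: preamble, control information, location/position symbol,
# and an optional time symbol.

from dataclasses import dataclass


@dataclass
class VlcFrame:
    preamble: bytes           # symbol 201: signal detection / time / frame sync
    control_info: bytes       # symbol 202: modulation, FEC scheme, frame layout
    location_info: bytes      # symbol 203: information 101 (location or position)
    time_info: bytes | None   # symbol 204: optional information 105 (time)

    def serialize(self) -> bytes:
        parts = [self.preamble, self.control_info, self.location_info]
        if self.time_info is not None:
            parts.append(self.time_info)
        return b"".join(parts)


frame = VlcFrame(b"\xaa\xaa", b"\x01", b"45N,135E", b"\x00\x10")
print(frame.serialize())
```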
Terminal 302 can calculate the distance between first device #1 301-1 and first device #2 301-2 illustrated in FIG. 8 based on the information related to the location or position of first device #1 301-1 and the information related to the location or position of first device #2 301-2. Moreover, terminal 302 can calculate the distance between terminal 302 and first device #1 301-1 based on the information related to time pertaining to first device #1 301-1 and, for example, the time at which terminal 302 received the modulated signal transmitted by first device #1 301-1. Similarly, terminal 302 can calculate the distance between terminal 302 and first device #2 301-2 based on the information related to time pertaining to first device #2 301-2 and, for example, the time at which terminal 302 received the modulated signal transmitted by first device #2 301-2.

Moreover, terminal 302 knows the position of first device #1 301-1 based on the information related to the location or position of first device #1 301-1, and knows the position of first device #2 301-2 based on the information related to the location or position of first device #2 301-2. Moreover, terminal 302 knows the geometry of the triangle formed by first device #1 301-1, first device #2 301-2, and terminal 302 from the distance between first device #1 301-1 and first device #2 301-2, the distance between first device #1 301-1 and terminal 302, and the distance between first device #2 301-2 and terminal 302. Accordingly, terminal 302 can accurately calculate and obtain its own position from the position of first device #1 301-1, the position of first device #2 301-2, and the geometry of this triangle. However, the geodetic measurement method used by terminal 302 to obtain the location or position information is not limited to the method described above; any geodetic measurement method may be used. Examples of geodetic measurement methods include triangulation, traverse calculation, trilateration, leveling, etc.

As described above, in this embodiment, terminal 302 can obtain the above-described information from a plurality of devices 301 including light sources that transmit location information, and as a result, it is possible to achieve an advantageous effect whereby terminal 302 can accurately estimate its own position. Moreover, in this embodiment, when a device 301 including a light source that transmits location information is disposed in a location where reception of satellite radio waves from a GPS satellite is difficult, as described in Embodiment 1, it is possible to achieve an advantageous effect whereby terminal 302 can securely obtain accurate position information even in such locations, by receiving the modulated signal transmitted by device 301. Note that in the above example, terminal 302 receives modulated signals transmitted by two devices 301, but an embodiment in which terminal 302 receives modulated signals transmitted by more than two devices 301 can be implemented in the same manner. Note that the more devices 301 there are, the more accurately terminal 302 can calculate the position information, so from this viewpoint, more devices 301 are more beneficial.
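The triangle computation can be sketched directly (Python, plain 2-D geometry; coordinates and distances in metres are assumptions for illustration). Note that two known points and two distances leave a mirror ambiguity across the baseline, which is one reason additional devices 301 improve the estimate:

```python
# Locate the terminal from the known positions of two devices and the
# measured distances to each (two-circle intersection).

import math


def locate(p1, p2, d1, d2):
    (x1, y1), (x2, y2) = p1, p2
    d = math.hypot(x2 - x1, y2 - y1)        # distance between the two devices
    a = (d1**2 - d2**2 + d**2) / (2 * d)    # foot of the terminal along the baseline
    h = math.sqrt(max(d1**2 - a**2, 0.0))   # offset perpendicular to the baseline
    xm = x1 + a * (x2 - x1) / d
    ym = y1 + a * (y2 - y1) / d
    return [(xm + h * (y2 - y1) / d, ym - h * (x2 - x1) / d),
            (xm - h * (y2 - y1) / d, ym + h * (x2 - x1) / d)]


print(locate((0.0, 0.0), (10.0, 0.0), d1=6.0, d2=8.0))
# two mirrored candidates, approximately (3.6, -4.8) and (3.6, 4.8)
```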
Embodiment 3
FIG. 9 illustrates one example of a configuration of device 400, terminal 450, and base station 470 (or access point (AP)) that communicates with terminal 450 according to this embodiment.
Device 400 includes, for example, an LED visible light source, lamp, light source, and/or light. Note that hereinafter, device 400 is also referred to as the "first device". Note that in first device 400 illustrated in FIG. 9, configurations that operate the same as in first device 100 illustrated in FIG. 6 share like reference signs. Moreover, in terminal 450 illustrated in FIG. 9, configurations that operate the same as in terminal 150 illustrated in FIG. 6 share like reference signs.
In first device 400 in FIG. 9, transmission unit 102 receives inputs of, for example, information 101 related to a location or position, information 401-1 related to the service set identifier (SSID) of base station 470, and information 401-2 related to an access destination. Moreover, transmission unit 102 may receive an input of information 105 related to time. Transmission unit 102 receives inputs of information 101 related to a location or position, information 401-1 related to an SSID, information 401-2 related to an access destination, and/or information 105 related to time, generates modulated signal (for optical communication) 103 based on the input signal(s), and outputs modulated signal 103. Modulated signal 103 is then transmitted from light source 104, for example.
Note that since an example of information 101 related to a location or position has already been given in Embodiment 1, repeated description will be omitted. Next, information 401-1 related to an SSID and information 401-2 related to an access destination will be described.
First, information 401-1 related to an SSID will be described. Information 401-1 related to an SSID is information indicating the SSID of base station 470 illustrated in FIG. 9. When processing is performed for determining whether or not the SSID notified via the optical signal is the SSID of a secure base station, first device 400 can provide access to base station 470, which is a secure access destination for terminal 450. With this, terminal 450 illustrated in FIG. 9 can securely obtain accurate position information from base station 470. On the other hand, first device 400 can restrict the terminals that access base station 470 to terminals in a space in which it is possible to receive the optical signals transmitted (emitted) by first device 400.
Note that when terminal 450 receives an optical signal transmitted via a predetermined scheme, it may be determined that the notified SSID is the SSID of a secure base station. Moreover, terminal 450 may also perform processing for determining whether the notified SSID is secure or not. For example, first device 400 may transmit a predetermined identifier in an optical signal, and terminal 450 may determine whether the notified SSID is the SSID of a secure base station or not based on the received identifier. Moreover, the processing for determining whether the base station is secure or not may be omitted by terminal 450; instead, the user may select a first device 400 that is highly secure by utilizing the characteristics of the visible light, and terminal 450 may obtain the SSID of the highly secure base station by receiving the optical signal from that first device 400.
Note that although the only base station illustrated in FIG. 9 is base station 470, even when one or more base stations (or APs) other than base station 470 are also present, terminal 450 can access base station 470 using the SSID obtained from first device 400 and obtain information.
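A minimal sketch of the identifier-based determination described above might look as follows; the trust list and identifier value are hypothetical, since the embodiment leaves the concrete determination scheme open.

    # A sketch of deciding whether a notified SSID is that of a secure base
    # station, assuming a provisioned list of trusted identifiers.
    TRUSTED_IDENTIFIERS = {"trusted-building-3F"}  # assumed provisioning

    def secure_ssid_or_none(received_identifier, received_ssid):
        # Accept the SSID only if the identifier carried in the optical
        # signal is on the terminal's trust list; otherwise do not connect
        # automatically.
        if received_identifier in TRUSTED_IDENTIFIERS:
            return received_ssid
        return None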
Next, information 401-2 related to an access destination will be described. Information 401-2 related to an access destination is information related to an access destination from which information is obtained after terminal 450 accesses base station 470. Note that an example of operations according to this embodiment will be described in greater detail later. This concludes the description of information 401-1 related to an SSID and information 401-2 related to an access destination.
Terminal 450 receives modulated signal 103 transmitted from first device 400. Light receiver 151 is, for example, an image sensor such as a CMOS or organic CMOS image sensor. Light receiver 151 receives light including the modulated signal transmitted from first device 400, and outputs reception signal 152. Reception unit 153 receives an input of reception signal 152 received via light receiver 151, performs processing such as demodulation and error correction decoding on the modulated signal included in reception signal 152, and outputs reception data 154.
Data analyzer 155 receives an input of reception data 154 and estimates, for example, the location or position of terminal 450 based on reception data 154. Data analyzer 155 then outputs information 156 including at least the location or position information of terminal 450, information 451 related to an SSID, and information 452 related to an access destination.
Display 157 receives inputs of information 156 including the location or position information of terminal 450, information 451 related to an SSID, and information 452 related to an access destination, and, for example, displays the location and/or position of terminal 450, the SSID of the communication partner to be accessed by wireless communication device 453 included in terminal 450, and/or the access destination (hereinafter, this display will be referred to as the "first display").
For example, after the first display, wireless communication device 453 receives inputs of information 451 related to an SSID and information 452 related to an access destination. Wireless communication device 453 then connects to a communication partner based on information 451 related to an SSID, by using, for example, radio waves. Note that in the example illustrated in FIG. 9, wireless communication device 453 connects to base station 470. Then, based on information 452 related to an access destination, wireless communication device 453 generates a modulated signal from data including the information related to the access destination, and transmits the generated modulated signal to base station 470 by using, for example, radio waves.
Base station 470, which is the communication partner of terminal 450 in FIG. 9, receives the modulated signal transmitted by wireless communication device 453 included in terminal 450. Base station 470 then performs processing such as demodulation and error correction decoding on the received modulated signal, and outputs reception data 471 including the information on the access destination transmitted from terminal 450. Based on this information on the access destination, base station 470 accesses the desired access destination over a network and, for example, obtains desired information 472 from the access destination. Base station 470 then receives an input of desired information 472, generates a modulated signal based on desired information 472, and transmits the generated modulated signal to terminal 450 (wireless communication device 453) using, for example, radio waves.
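The relay behaviour of base station 470 can be sketched as below, assuming for illustration that information 452 related to an access destination is delivered as a URL; demodulation, error correction decoding, and the radio-side modulation are abstracted away in this sketch.

    # A sketch of base station 470 fetching desired information 472 from the
    # access destination received from terminal 450.
    import urllib.request

    def handle_terminal_request(access_destination_url):
        # Obtain the desired information over the network (471/472) and
        # return it for transmission back to the terminal by radio.
        with urllib.request.urlopen(access_destination_url) as response:
            return response.read()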
Wireless communication device 453 in terminal 450 receives the modulated signal transmitted from base station 470, performs processing such as demodulation and error correction decoding, and obtains desired information 472. For example, assume desired information 472 is information related to a section, a seat, a store, a facility, etc., on or at, for example, a map, a map or floor guide for a building, a map or floor guide for a facility, a map or floor guide for a parking lot, a concert facility, a stadium, an airplane, an airport lounge, a railway, a station, etc.
Display 157 receives inputs of information 454 including desired information 472, information 156 including at least the location or position information of terminal 450, and information 451 related to an SSID, and, after the first display, based on desired information 472 and information 156, displays the position of terminal 450 mapped onto, for example, information on a map, a floor guide, or a facility, seating information, or store information.
FIG. 10 is an example of a detailed display by display 157. The display in FIG. 10 indicates that this is the third floor of a building. Each of A-1, A-2, A-3, A-4, A-21, A-22, A-23, and A-24 indicates the position of a parking space for an automobile. Each of a-1 and a-2 indicates the position of an elevator. The information on this map, including the positions of the parking spaces and the elevators, is one example of desired information 454 (472). As illustrated in FIG. 10, display 157 displays the current position of terminal 450 mapped on the map. Note that the current position is obtained from information 156 including at least the location or position information of terminal 450.
FIG. 11 illustrates one example of a frame configuration of a modulated signal transmitted by first device 400 illustrated in FIG. 9. In FIG. 11, time is represented on the horizontal axis. Moreover, in FIG. 11, symbols that transmit the same information as in FIG. 7 share like reference signs, and repeated description thereof is omitted.
First device 400 transmits symbol 600-1 related to an SSID and symbol 600-2 related to an access destination, in addition to preamble 201, control information symbol 202, symbol 203 related to location information or position information, and symbol 204 related to time information. Symbol 600-1 related to an SSID is a symbol for transmitting information 401-1 related to an SSID in FIG. 9, and symbol 600-2 related to an access destination is a symbol for transmitting information 401-2 related to an access destination in FIG. 9. Note that the frame in FIG. 11 may include symbols other than those shown in FIG. 11. Moreover, the configuration of the frame, including the order in which the symbols are transmitted, is not limited to the configuration illustrated in FIG. 11.
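Extending the earlier frame sketch, the FIG. 11 frame could be serialized as follows, with symbol 600-1 and symbol 600-2 carried as length-prefixed UTF-8 strings; this encoding is an assumption made only for illustration, as the embodiment does not fix it.

    # A sketch of the FIG. 11 frame with the two added symbols.
    import struct

    def _length_prefixed(text):
        raw = text.encode("utf-8")
        return struct.pack("B", len(raw)) + raw  # assumed 1-byte length prefix

    def build_fig11_frame(location, tx_time_s, ssid, access_destination):
        frame = b"\xAA\xAA"                     # preamble 201 (assumed pattern)
        frame += struct.pack("B", 0x02)         # control information symbol 202
        frame += struct.pack("!ff", *location)  # symbol 203 (location/position)
        frame += struct.pack("!d", tx_time_s)   # symbol 204 (time information)
        frame += _length_prefixed(ssid)         # symbol 600-1 (SSID)
        frame += _length_prefixed(access_destination)  # symbol 600-2
        return frame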
FIG. 12 illustrates one example of a frame configuration of a modulated signal transmitted by base station 470 illustrated in FIG. 9. In FIG. 12, time is represented on the horizontal axis. As illustrated in FIG. 12, base station 470 transmits, for example, preamble 701, and thereafter transmits control information symbol 702 and information symbol 703. Preamble 701 is a symbol used by terminal 450, which receives the modulated signal transmitted by base station 470, to perform, for example, signal detection, time synchronization, frame synchronization, frequency synchronization, and/or frequency offset estimation.
Control information symbol 702 is, for example, a symbol including information related to the error correction encoding method and/or modulation scheme used in the generation of the modulated signal, and information related to the frame configuration. Based on the information in control information symbol 702, wireless communication device 453 in terminal 450 implements, for example, demodulation of the modulated signal. Information symbol 703 is a symbol for transmitting information. Note that in this embodiment, information symbol 703 is a symbol for transmitting the above-described desired information 472.
Note that base station 470 in FIG. 9 may transmit a frame including symbols other than those shown in FIG. 12. For example, base station 470 may transmit a frame including a pilot symbol (reference symbol) between information symbols 703. Moreover, the configuration of the frame, including the order in which the symbols are transmitted, is not limited to the configuration illustrated in FIG. 12. Moreover, in FIG. 12, a plurality of symbols may be arranged along the frequency axis. In other words, in FIG. 12, symbols may be present on a plurality of frequencies (a plurality of carriers).
Moreover, for example, it is conceivable that a modulated signal having the frame configuration illustrated in FIG. 11 is transmitted by first device 400 at a regular timing, e.g., is transmitted repeatedly. With this, a plurality of terminals 450 can implement the above-described operations.
FIG. 13 is a flow chart illustrating one example of processes implemented by first device 400, terminal 450, and base station 470 illustrated in FIG. 9 and described above. First, first device 400 transmits a modulated signal having the frame configuration illustrated in FIG. 11 (ST801). Terminal 450 receives the modulated signal transmitted by first device 400 and estimates the location or position of terminal 450 (ST802). Terminal 450 also learns the SSID of base station 470 to be accessed by terminal 450 by receiving the modulated signal transmitted by first device 400 (ST803). Terminal 450 transmits, to base station 470, a modulated signal including data including information 452 related to an access destination for obtaining information such as map information, using radio waves (ST804). Base station 470 receives the modulated signal transmitted by terminal 450, obtains the information on the access destination, accesses the desired access destination via a network, and obtains desired information such as map information (information to be transmitted to terminal 450) (ST805). Base station 470 then transmits, to terminal 450, a modulated signal including the obtained desired information such as the map information, using radio waves (ST806). Terminal 450 receives the modulated signal transmitted by base station 470 and obtains the information such as map information. Terminal 450 then presents a display like that in FIG. 10, based on the information such as map information and the previously obtained information on the location or position of terminal 450.
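The FIG. 13 flow can be condensed into the following skeleton; every method named here merely stands in for signal processing described in the text and is hypothetical, so the sketch shows only the ordering of steps ST801 through ST806.

    # A skeleton of the FIG. 13 flow (helper objects and methods assumed).
    def fig13_flow(first_device, terminal, base_station):
        frame = first_device.transmit_optical_frame()           # ST801
        position = terminal.estimate_position(frame)            # ST802
        ssid = terminal.extract_ssid(frame)                     # ST803
        terminal.connect_radio(ssid)
        request = terminal.extract_access_destination(frame)
        base_station.receive(terminal.send_radio(request))      # ST804
        info = base_station.fetch_desired_information(request)  # ST805
        terminal.receive_radio(base_station.send_radio(info))   # ST806
        terminal.display_map_with_position(info, position)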
Next, an example of operations performed when a plurality of first devices 400 and base station 470 are provided in the location illustrated in FIG. 10 will be described. FIG. 14 is a map of the same location illustrated in FIG. 10. In other words, FIG. 14 is a map of the third floor described with reference to FIG. 10. In FIG. 14, each of A-1, A-2, A-3, A-4, A-21, A-22, A-23, and A-24 indicates a parking space for an automobile, and each of a-1 and a-2 indicates an elevator.
The position of circle 901-1 in FIG. 14 indicates the location of a first device having the same configuration as first device 400 illustrated in FIG. 9. Hereinafter, the first device that has the same configuration as first device 400 and is at the position of 901-1 is referred to as "first device #1 400". First device #1 400 has, as information related to a location or information related to a position, information indicating "A-1", and transmits this information indicating "A-1".
The position of circle 901-2 in FIG. 14 indicates the location of a first device having the same configuration as first device 400 illustrated in FIG. 9. Hereinafter, the first device that has the same configuration as first device 400 and is at the position of 901-2 is referred to as "first device #2 400". First device #2 400 has, as information related to a location or information related to a position, information indicating "A-2", and transmits this information indicating "A-2".
The position of circle 901-3 in FIG. 14 indicates the location of a first device having the same configuration as first device 400 illustrated in FIG. 9. Hereinafter, the first device that has the same configuration as first device 400 and is at the position of 901-3 is referred to as "first device #3 400". First device #3 400 has, as information related to a location or information related to a position, information indicating "A-3", and transmits this information indicating "A-3".
The position of circle 901-4 in FIG. 14 indicates the location of a first device having the same configuration as first device 400 illustrated in FIG. 9. Hereinafter, the first device that has the same configuration as first device 400 and is at the position of 901-4 is referred to as "first device #4 400". First device #4 400 has, as information related to a location or information related to a position, information indicating "A-4", and transmits this information indicating "A-4".
The position of circle 901-21 in FIG. 14 indicates the location of a first device having the same configuration as first device 400 illustrated in FIG. 9. Hereinafter, the first device that has the same configuration as first device 400 and is at the position of 901-21 is referred to as "first device #21 400". First device #21 400 has, as information related to a location or information related to a position, information indicating "A-21", and transmits this information indicating "A-21".
The position of circle 901-22 in FIG. 14 indicates the location of a first device having the same configuration as first device 400 illustrated in FIG. 9. Hereinafter, the first device that has the same configuration as first device 400 and is at the position of 901-22 is referred to as "first device #22 400". First device #22 400 has, as information related to a location or information related to a position, information indicating "A-22", and transmits this information indicating "A-22".
The position of circle 901-23 in FIG. 14 indicates the location of a first device having the same configuration as first device 400 illustrated in FIG. 9. Hereinafter, the first device that has the same configuration as first device 400 and is at the position of 901-23 is referred to as "first device #23 400". First device #23 400 has, as information related to a location or information related to a position, information indicating "A-23", and transmits this information indicating "A-23".
The position of circle 901-24 in FIG. 14 indicates the location of a first device having the same configuration as first device 400 illustrated in FIG. 9. Hereinafter, the first device that has the same configuration as first device 400 and is at the position of 901-24 is referred to as "first device #24 400". First device #24 400 has, as information related to a location or information related to a position, information indicating "A-24", and transmits this information indicating "A-24".
The position of double circle 902 in FIG. 14 indicates the location of a base station (or AP) having the same configuration as base station 470 illustrated in FIG. 9. Hereinafter, the base station (or AP) having the same configuration as base station 470 in FIG. 9 will be referred to simply as "base station 470". Moreover, here, the SSID of base station 470 at position 902 is "abcdef". When a terminal 450 present in the vicinity of the position indicated on the map in FIG. 14 can wirelessly communicate, terminal 450 may access base station 470 at the position of 902 in FIG. 14.
Accordingly, first device #1 400 at 901-1 in FIG. 14 transmits "abcdef" as information related to an SSID (refer to 401-1 in FIG. 9). Similarly, first device #2 400 at 901-2 in FIG. 14 transmits "abcdef" as information related to an SSID (refer to 401-1 in FIG. 9). First device #3 400 at 901-3 in FIG. 14 transmits "abcdef" as information related to an SSID (refer to 401-1 in FIG. 9). First device #4 400 at 901-4 in FIG. 14 transmits "abcdef" as information related to an SSID (refer to 401-1 in FIG. 9). First device #21 400 at 901-21 in FIG. 14 transmits "abcdef" as information related to an SSID (refer to 401-1 in FIG. 9). First device #22 400 at 901-22 in FIG. 14 transmits "abcdef" as information related to an SSID (refer to 401-1 in FIG. 9). First device #23 400 at 901-23 in FIG. 14 transmits "abcdef" as information related to an SSID (refer to 401-1 in FIG. 9). First device #24 400 at 901-24 in FIG. 14 transmits "abcdef" as information related to an SSID (refer to 401-1 in FIG. 9).
Hereinafter, an example of specific operations will be given. Assume a terminal having the same configuration as terminal 450 in FIG. 9 is positioned at 903-1 in FIG. 14 (hereinafter, this terminal will be referred to simply as terminal 450). In such cases, terminal 450 receives the modulated signal transmitted by first device #4 400 at the position of 901-4 in FIG. 14, and obtains position information indicating "A-4". Moreover, terminal 450 receives the modulated signal transmitted by first device #4 400 at the position of 901-4 in FIG. 14, and obtains SSID information indicating "abcdef". With this, terminal 450 accesses base station 470 positioned at 902 in FIG. 14. Moreover, terminal 450 obtains, from base station 470 positioned at 902 in FIG. 14, information such as map information. Terminal 450 then displays the map information and position information (for example, see FIG. 10; FIG. 10 is merely one non-limiting example).
Similarly, assume a terminal having the same configuration as terminal 450 in FIG. 9 is positioned at 903-2 in FIG. 14 (hereinafter, this terminal will be referred to simply as terminal 450). In such cases, terminal 450 receives the modulated signal transmitted by first device #22 400 at the position of 901-22 in FIG. 14, and obtains position information indicating "A-22". Moreover, terminal 450 receives the modulated signal transmitted by first device #22 400 at the position of 901-22 in FIG. 14, and obtains SSID information indicating "abcdef". With this, terminal 450 accesses base station 470 positioned at 902 in FIG. 14. Moreover, terminal 450 obtains, from base station 470 positioned at 902 in FIG. 14, information such as map information. Terminal 450 then displays the map information and position information (for example, see FIG. 10; FIG. 10 is merely one non-limiting example).
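A sketch of how a terminal might turn a received position code such as "A-22" into a point on the FIG. 10 style display is given below; the coordinate table is invented purely for illustration, since FIG. 14 fixes only the codes, not the map geometry.

    # A sketch of resolving a position code to display coordinates on the
    # third-floor map (coordinates are assumed values).
    PARKING_MAP_3F = {
        "A-1": (10, 40), "A-2": (20, 40), "A-3": (30, 40), "A-4": (40, 40),
        "A-21": (10, 10), "A-22": (20, 10), "A-23": (30, 10), "A-24": (40, 10),
    }

    def current_display_position(position_code):
        # The returned point is marked as the current position on the map.
        return PARKING_MAP_3F[position_code]

    print(current_display_position("A-22"))  # -> (20, 10)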
Note that terminal 450 may record the map (surrounding area information) and the position information like that in FIG. 14 in a storage (not illustrated in the drawings) included in terminal 450, and may read the information stored in the storage when required by the user of terminal 450. This makes it possible to use the map (surrounding area information) and the position information in a manner that is convenient for the user.
In this way, since first device 400 transmits the modulated signal via visible light, a terminal 450 capable of receiving the modulated signal is necessarily located within a region in which the optical signal from the position of first device 400 can be received. Accordingly, by obtaining the location or position information transmitted by first device 400, terminal 450 can easily (i.e., without having to perform complicated signal processing) obtain accurate position information. Moreover, when first device 400 is disposed in a location where reception of satellite radio waves from a GPS satellite is difficult, terminal 450 can, by receiving the modulated signal transmitted by first device 400, securely obtain accurate position information even in locations in which reception of radio waves from a GPS satellite is difficult.
Furthermore, based on the information on the SSID transmitted from first device 400, terminal 450 can securely obtain information by connecting to base station (or AP) 470. This is because, when terminal 450 obtains information from a visible light modulated signal, the user can easily visually recognize first device 400 transmitting the modulated signal, since it is visible light, making it possible for the user to easily determine whether the source of the information is secure or not. Conversely, for example, when the SSID is obtained from a modulated signal transmitted over radio waves via a wireless LAN, it is difficult for the user to determine which device transmitted the radio waves. Accordingly, from the viewpoint of ensuring information security, obtaining the SSID via visible light communication is more suitable than wireless LAN communication.
Note that wireless communication device 453 in terminal 450 illustrated in FIG. 9 may further receive inputs of a plurality of signals. For example, wireless communication device 453 may receive an input of a control signal for controlling wireless communication device 453, and may receive an input of information, etc., to be transmitted to base station 470. Here, one conceivable example is that wireless communication device 453 begins performing communication based on the control signal.
As described above, in this embodiment, the configuration of the first device is not limited to the configuration of first device 400 in FIG. 9, the configuration of the terminal is not limited to the configuration of terminal 450 in FIG. 9, and the connection destination and configuration of the base station are not limited to those of base station 470 in FIG. 9.
Moreover, in the example in FIG. 9, a single base station 470 is present, but a plurality of (secure) base stations (or APs) that terminal 450 can access may be present. In such cases, the symbol related to an SSID that is transmitted by first device 400 in FIG. 9 may include information indicating the SSID of each of the plurality of base stations (or APs).
In such cases, as the display of the access destination (the "first display" described above), display 157 in terminal 450 illustrated in FIG. 9 displays a list of the SSIDs of the plurality of base stations and/or a list of the plurality of access destinations. Then, based on the information on the SSIDs of the plurality of base stations (or APs), terminal 450 in FIG. 9 may select one or more base stations to actually wirelessly connect to (in other words, it may concurrently connect to a plurality of base stations).
For example, assume there are three base stations 470. Here, the three base stations 470 shall be referred to as base station #A, base station #B, and base station #C. Moreover, assume the SSID of base station #A is "abcdef", the SSID of base station #B is "ghijk", and the SSID of base station #C is "pqrstu". In such cases, symbol 600-1 related to an SSID in the frame configuration illustrated in FIG. 11 of the modulated signal transmitted by first device 400 includes information indicating that the SSID of base station #A is "abcdef", the SSID of base station #B is "ghijk", and the SSID of base station #C is "pqrstu". Then, terminal 450 in FIG. 9 receives symbol 600-1 related to an SSID and, based on the information indicating that the SSID of base station #A is "abcdef", the SSID of base station #B is "ghijk", and the SSID of base station #C is "pqrstu", selects one or more base stations 470 to actually wirelessly connect to.
Embodiment 4
FIG. 15 illustrates one example of a configuration of a communication system according to this embodiment. The communication system illustrated in FIG. 15 includes, for example, device 1000, terminal 1050, and base station (or AP) 470 that communicates with terminal 1050.
Device 1000 includes, for example, an LED visible light source, lamp, light source, and/or light (hereinafter referred to as "light source 104"). Note that hereinafter, device 1000 is also referred to as the "second device" in this embodiment. Note that in second device 1000 illustrated in FIG. 15, configurations that operate the same as in first device 100 illustrated in FIG. 6 share like reference signs. Moreover, in terminal 1050 illustrated in FIG. 15, configurations that operate the same as in terminal 150 illustrated in FIG. 6 share like reference signs. Moreover, communication between wireless communication device 453 in terminal 1050 and base station 470 illustrated in FIG. 15 uses, for example, radio waves.
In second device 1000 illustrated in FIG. 15, transmission unit 102 receives inputs of information 1001-1 related to an SSID, information 1001-2 related to an encryption key, and data 1002, generates modulated signal (for optical communication) 103 based on the input signal(s), and outputs modulated signal 103. Modulated signal 103 is then transmitted from light source 104, for example.
Next, information 1001-1 related to an SSID and information 1001-2 related to an encryption key will be described. First, information 1001-1 related to an SSID will be described. Information 1001-1 related to an SSID is information indicating the SSID of base station 470 illustrated in FIG. 15. Note that in one example, base station 470 transmits a modulated signal to terminal 1050 over radio waves, and receives a modulated signal from terminal 1050 over radio waves. In other words, second device 1000 can provide access to base station 470, which is a secure access destination for terminal 1050. With this, terminal 1050 illustrated in FIG. 15 can securely obtain information from base station 470.
On the other hand, second device 1000 can restrict the terminals that access base station 470 to terminals in a space in which it is possible to receive the optical signals transmitted (emitted) by second device 1000. Note that when terminal 1050 receives an optical signal transmitted via a predetermined scheme, it may be determined that the notified SSID is the SSID of a secure base station. Moreover, terminal 1050 may also perform processing for determining whether the notified SSID is secure or not. For example, second device 1000 may transmit a predetermined identifier in an optical signal, and terminal 1050 may determine whether the notified SSID is the SSID of a secure base station or not based on the received identifier.
Note that although the only base station illustrated in FIG. 15 is base station 470, even when, for example, a base station (or AP) other than base station 470 is also present, terminal 1050 can access base station 470 using the SSID obtained from second device 1000 and obtain information.
Next, information 1001-2 related to an encryption key will be described. Information 1001-2 related to an encryption key is information related to an encryption key that is necessary in order for terminal 1050 to communicate with base station 470. By obtaining information 1001-2 related to an encryption key from second device 1000, terminal 1050 can perform encrypted communication with base station 470. This concludes the description of information 1001-1 related to an SSID and information 1001-2 related to an encryption key.
Terminal 1050 in FIG. 15 receives the modulated signal transmitted by second device 1000. Note that in terminal 1050 illustrated in FIG. 15, configurations that operate the same as in terminal 150 in FIG. 6 and terminal 450 in FIG. 9 share like reference signs. Light receiver 151 included in terminal 1050 is, for example, an image sensor such as a CMOS or organic CMOS image sensor. Light receiver 151 receives light including the modulated signal transmitted from second device 1000, and outputs reception signal 152. Reception unit 153 receives an input of reception signal 152 received via light receiver 151, performs processing such as demodulation and error correction decoding on the modulated signal included in reception signal 152, and outputs reception data 154.
Data analyzer 155 receives an input of reception data 154 and outputs, based on reception data 154, for example, information 1051 on the SSID of the base station to be connected to and information 1052 on the encryption key for communicating with the base station to be connected to. For example, in a wireless local area network (LAN), examples of encryption schemes include wired equivalent privacy (WEP), Wi-Fi protected access (WPA), and Wi-Fi protected access 2 (WPA2) (pre-shared key (PSK) mode, extensible authentication protocol (EAP) mode). Note that the encryption method is not limited to these examples.
Display 157 receives inputs of information 1051 on the SSID and information 1052 on the encryption key, and, for example, displays (i) the SSID of the communication partner to be accessed by wireless communication device 453 included in terminal 1050 and (ii) the encryption key (hereinafter, this display is referred to as the "first display" in this embodiment). For example, after the first display, wireless communication device 453 receives inputs of information 1051 on the SSID and information 1052 on the encryption key, and establishes a connection with base station 470 (for example, assume the connection uses radio waves).
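As one hypothetical illustration of this step, a terminal could feed the optically received SSID and key into a wpa_supplicant-style network block; whether terminal 1050 actually uses such a mechanism is an assumption made only for this sketch.

    # A sketch emitting a wpa_supplicant-style network block from the SSID
    # and pre-shared key obtained over visible light (WPA2-PSK assumed).
    def wpa_network_block(ssid, psk):
        return 'network={\n    ssid="%s"\n    psk="%s"\n}\n' % (ssid, psk)

    print(wpa_network_block("abcdef", "123"))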
Here, when base station 470 communicates with wireless communication device 453 included in terminal 1050, base station 470 also transmits the modulated signal using, for example, radio waves. Thereafter, wireless communication device 453 receives inputs of data 1053 and control signal 1054, modulates data 1053 in accordance with the control indicated in control signal 1054, and transmits the modulated signal over radio waves. Then, for example, base station 470 transmits data to the network (471) and receives data from the network (472). Thereafter, for example, base station 470 transmits a modulated signal to terminal 1050 over radio waves. Wireless communication device 453 included in terminal 1050 performs processing such as demodulation and error correction decoding on the modulated signal received over radio waves, and obtains reception data 1056. Display 157 performs display based on reception data 1056.
FIG. 16 illustrates one example of a frame configuration of a modulated signal transmitted by second device 1000 illustrated in FIG. 15. In FIG. 16, time is represented on the horizontal axis. Moreover, in FIG. 16, symbols that are the same as in FIG. 7 and FIG. 11 share like reference signs, and repeated description thereof will be omitted. Symbol 600-1 related to an SSID is a symbol for transmitting information 1001-1 related to an SSID illustrated in FIG. 15, and symbol 1101 related to the encryption key is a symbol for transmitting information 1001-2 related to an encryption key illustrated in FIG. 15. Data symbol 1102 is a symbol for transmitting data 1002 illustrated in FIG. 15. Second device 1000 transmits preamble 201, control information symbol 202, symbol 600-1 related to an SSID, symbol 1101 related to the encryption key, and data symbol 1102. Note that second device 1000 may transmit a frame including symbols other than those shown in FIG. 16. Moreover, the configuration of the frame, including the order in which the symbols are transmitted, is not limited to the configuration illustrated in FIG. 16.
FIG. 17 illustrates one example of a frame configuration of a modulated signal transmitted by wireless communication device 453 included in terminal 1050 illustrated in FIG. 15. In FIG. 17, time is represented on the horizontal axis. As illustrated in FIG. 17, wireless communication device 453 included in terminal 1050 transmits, for example, preamble 1201, and thereafter transmits control information symbol 1202 and information symbol 1203. Preamble 1201 is a symbol used by base station 470, which receives the modulated signal transmitted by wireless communication device 453 in terminal 1050, to perform, for example, signal detection, time synchronization, frame synchronization, frequency synchronization, and/or frequency offset estimation. Control information symbol 1202 is, for example, a symbol including information related to the error correction encoding method and/or modulation scheme used in the generation of the modulated signal, information related to the frame configuration, and information related to the transmission method. Based on the information in control information symbol 1202, base station 470 implements, for example, demodulation of the modulated signal. Information symbol 1203 is a symbol for wireless communication device 453 in terminal 1050 to transmit data. Note that wireless communication device 453 in terminal 1050 may transmit a frame including symbols other than those shown in FIG. 17. For example, wireless communication device 453 may transmit a frame including a pilot symbol (reference symbol) between information symbols 1203.
Moreover, the configuration of the frame, including the order in which the symbols are transmitted, is not limited to the configuration illustrated in FIG. 17. Moreover, in FIG. 17, a plurality of symbols may be arranged along the frequency axis. In other words, in FIG. 17, symbols may be present on a plurality of frequencies (a plurality of carriers). Moreover, in Embodiment 3, when wireless communication device 453 included in terminal 450 illustrated in FIG. 9 transmits a modulated signal, the frame configuration illustrated in FIG. 17 may be used.
The frame configuration of the modulated signal transmitted by base station 470 in this embodiment is the same as the frame configuration illustrated in FIG. 12 and described in Embodiment 3. In other words, as illustrated in FIG. 12, base station 470 transmits, for example, preamble 701, and thereafter transmits control information symbol 702 and information symbol 703. Preamble 701 is a symbol used by wireless communication device 453 in terminal 1050, which receives the modulated signal transmitted by base station 470, to perform, for example, signal detection, time synchronization, frame synchronization, frequency synchronization, and/or frequency offset estimation. Control information symbol 702 is, for example, a symbol including information related to the error correction encoding method and/or modulation scheme used in the generation of the modulated signal, information related to the frame configuration, and information related to the transmission method. Based on the information in control information symbol 702, wireless communication device 453 in terminal 1050 implements, for example, demodulation of the modulated signal. Information symbol 703 is a symbol for base station 470 to transmit information.
Note that base station 470 in FIG. 15 may transmit a frame including symbols other than those shown in FIG. 12. For example, base station 470 may transmit a frame including a pilot symbol (reference symbol) between information symbols 703. Moreover, the configuration of the frame, including the order in which the symbols are transmitted, is not limited to the configuration illustrated in FIG. 12. Moreover, in FIG. 12, a plurality of symbols may be arranged along the frequency axis. In other words, in FIG. 12, symbols may be present on a plurality of frequencies (a plurality of carriers).
Moreover, for example, it is conceivable that a modulated signal having the frame configuration illustrated in FIG. 16 is transmitted by second device 1000 at a regular timing, e.g., is transmitted repeatedly. With this, a plurality of terminals 1050 can implement the above-described operations.
FIG. 18 is a flow chart illustrating one example of processes implemented by second device 1000, terminal 1050, and base station 470 illustrated in FIG. 15. First, second device 1000 transmits a modulated signal having the frame configuration illustrated in FIG. 16 (ST1301). Terminal 1050 obtains the SSID of base station 470 to be accessed by terminal 1050 by receiving the modulated signal transmitted by second device 1000 (ST1302). Terminal 1050 also obtains the encryption key to be used in communication with base station 470 to be accessed by terminal 1050 (ST1303). Terminal 1050 then connects with base station 470 over radio waves (ST1304). Terminal 1050 completes the connection with base station 470 by receiving a response from base station 470 (ST1305). Terminal 1050 then transmits information on the connection destination to base station 470 using radio waves (ST1306). Base station 470 obtains information to transmit to terminal 1050 from the network (ST1307).
Base station 470 then transmits the obtained information to terminal 1050 using radio waves, and terminal 1050 obtains the information (ST1308). When necessary, terminal 1050, for example, obtains required information from the network via base station 470.
As described above, based on the information on the SSID and the information on the encryption key transmitted from second device 1000, terminal 1050 connects with base station 470 and can thereby securely obtain information from base station 470, whose security has been authenticated. This is because, when terminal 1050 obtains information from a visible light modulated signal, it is possible for the user to easily determine whether the source of the information is secure or not, since it is visible light. Conversely, for example, when the SSID is obtained from a modulated signal transmitted over radio waves via a wireless LAN, it is difficult for the user to determine which device transmitted the radio waves. Accordingly, from the viewpoint of ensuring information security, obtaining the SSID via visible light communication is more suitable than wireless LAN communication.
Note that in this embodiment, a configuration in which second device 1000 transmits encryption key information has been described. However, for example, when base station 470 does not perform encrypted communication using an encryption key, second device 1000 may transmit only SSID information, without transmitting encryption key information. In such cases, the present disclosure can be implemented in the same manner simply by removing the configuration related to an encryption key from the above configurations.
Moreover, the configuration of the second device is not limited to the configuration of second device 1000 illustrated in FIG. 15, the configuration of the terminal is not limited to the configuration of terminal 1050 illustrated in FIG. 15, and the connection destination and configuration of the base station are not limited to those of base station 470 illustrated in FIG. 15.
Moreover, in the example in FIG. 15, a single base station 470 is present, but a plurality of (secure) base stations (or APs) that terminal 1050 can access may be present. Note that these base stations and terminal 1050 respectively transmit and receive modulated signals using radio waves. In such cases, the symbol related to an SSID that is transmitted by second device 1000 in FIG. 15 may include information indicating the SSID of each of the plurality of base stations (or APs). In such cases, as the display of the access destination, display 157 in terminal 1050 illustrated in FIG. 15 displays a list of the SSIDs of the plurality of base stations and/or a list of the plurality of access destinations. Moreover, the symbol related to an encryption key that is transmitted by second device 1000 in FIG. 15 may include information indicating the encryption key to be used for connection with each of the plurality of base stations (or APs). Then, based on the information on the SSIDs of the plurality of base stations and the information on the encryption keys to be used for connection with the plurality of base stations, terminal 1050 in FIG. 15 may select one or more base stations to actually wirelessly connect to (via, for example, radio waves) (in other words, it may concurrently connect to a plurality of base stations).
For example, assume there are three base stations 470. Here, the three base stations 470 shall be referred to as base station #A, base station #B, and base station #C.
Moreover, assume the SSID of base station #A is "abcdef", the SSID of base station #B is "ghijk", and the SSID of base station #C is "pqrstu". Moreover, assume the encryption key for connecting with base station #A is "123", the encryption key for connecting with base station #B is "456", and the encryption key for connecting with base station #C is "789".
In such cases, symbol 600-1 related to an SSID in the frame configuration illustrated in FIG. 16 of the modulated signal transmitted by second device 1000 includes information indicating that the SSID of base station #A is "abcdef", the SSID of base station #B is "ghijk", and the SSID of base station #C is "pqrstu". Moreover, symbol 1101 related to the encryption key in the frame configuration illustrated in FIG. 16 includes information indicating that the encryption key for connecting with base station #A is "123", the encryption key for connecting with base station #B is "456", and the encryption key for connecting with base station #C is "789".
Terminal 1050 in FIG. 15 receives symbol 600-1 related to an SSID, and thus obtains the information indicating that the SSID of base station #A is "abcdef", the SSID of base station #B is "ghijk", and the SSID of base station #C is "pqrstu". Moreover, terminal 1050 receives symbol 1101 related to the encryption key, and thus obtains the information indicating that the encryption key for connecting with base station #A is "123", the encryption key for connecting with base station #B is "456", and the encryption key for connecting with base station #C is "789". Then, based on this information, terminal 1050 selects one or more base stations to actually wirelessly connect to (via, for example, radio waves), and connects to the selected one or more base stations.
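The selection step just described can be sketched as a lookup over the SSID-to-key pairs carried in symbols 600-1 and 1101; the fixed preference list below is an assumption made only for illustration.

    # A sketch of selecting a base station from the advertised SSID/key pairs.
    def select_base_station(ssid_to_key, preferred_order):
        for ssid in preferred_order:
            if ssid in ssid_to_key:
                return ssid, ssid_to_key[ssid]
        raise LookupError("no advertised base station matches the preferences")

    ssid_to_key = {"abcdef": "123", "ghijk": "456", "pqrstu": "789"}
    print(select_base_station(ssid_to_key, ["ghijk", "abcdef"]))  # ('ghijk', '456')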
As described in this embodiment, since terminal 1050 sets which base station 470 to access by utilizing a light source, exemplified here as an LED light source, the modulated signal for connection over radio waves that is transmitted by terminal 1050 does not require a mode for making special settings for the processes of establishing a wireless connection between terminal 1050 and base station 470. Likewise, the modulated signal transmitted by base station 470 does not require a mode for making special settings for the processes of establishing a wireless connection between terminal 1050 and base station 470. With this, in this embodiment, data transmission efficiency in wireless communication can be improved.
Moreover, the encryption key may be an encryption key for an SSID on a wireless LAN, as described above, or may be an encryption key for limiting, for example, the connection type, the service type, or the connection region of a network. In other words, any encryption key that limits access in some manner may be used.
Embodiment 5
FIG. 19 illustrates one example of a configuration of a communication system according to this embodiment. The communication system illustrated in FIG. 19 includes, for example, devices 1400A and 1400B, terminal 1050, and base station (or AP) 470 that communicates with terminal 1050. Devices 1400A and 1400B include, for example, an LED visible light source, lamp, light source, and/or light (hereinafter referred to as light sources 1406-1 and 1406-2). Note that hereinafter, device 1400A is also referred to as the "third device" and device 1400B is also referred to as the "fourth device" in this embodiment.
Moreover, in terminal 1050 illustrated in FIG. 19, configurations that operate the same as in terminal 150 illustrated in FIG. 1 or terminal 1050 illustrated in FIG. 15 share like reference signs. Moreover, in base station (or AP) 470 illustrated in FIG. 19, configurations that operate the same as in base station 470 illustrated in FIG. 9 share the same reference signs as in FIG. 9. Moreover, communication between wireless communication device 453 in terminal 1050 and base station 470 illustrated in FIG. 19 uses, for example, radio waves.
In third device 1400A illustrated in FIG. 19, transmission unit 1404-1 receives inputs of information 1401-1 related to an SSID and data 1402-1, generates modulated signal (for optical communication) 1405-1 based on the input signals, and outputs modulated signal 1405-1. Modulated signal 1405-1 is then transmitted from light source 1406-1, for example. In fourth device 1400B illustrated in FIG. 19, transmission unit 1404-2 receives inputs of information 1403-2 related to an encryption key and data 1402-2, generates modulated signal (for optical communication) 1405-2 based on the input signals, and outputs modulated signal 1405-2. Modulated signal 1405-2 is then transmitted from light source 1406-2, for example.
Next, information 1401-1 related to an SSID and information 1403-2 related to an encryption key will be described. First, information 1401-1 related to an SSID will be described. Information 1401-1 related to an SSID is information indicating the SSID of base station 470 illustrated in FIG. 19. In other words, third device 1400A can provide access via radio waves to base station 470, which is a secure access destination for terminal 1050. With this, terminal 1050 illustrated in FIG. 19 can securely obtain information from base station 470.
Note that when terminal 1050 receives an optical signal transmitted via a predetermined scheme, it may be determined that the notified SSID is the SSID of a secure base station. Moreover, terminal 1050 may also perform processing for determining whether the notified SSID is secure or not. For example, third device 1400A may transmit a predetermined identifier in an optical signal, and terminal 1050 may determine whether the notified SSID is the SSID of a secure base station or not based on the received identifier.
Note that although the only base station illustrated in FIG. 19 is base station 470, even when, for example, a base station (or AP) other than base station 470 is also present, terminal 1050 can access base station 470 and obtain information by using the SSID obtained from third device 1400A and the encryption key obtained from fourth device 1400B.
Next, information 1403-2 related to an encryption key will be described. Information 1403-2 related to an encryption key is information related to an encryption key that is necessary in order for terminal 1050 to communicate with base station 470 via radio waves. By obtaining information 1403-2 related to an encryption key from fourth device 1400B, terminal 1050 can perform encrypted communication with base station 470. This concludes the description of information 1401-1 related to an SSID and information 1403-2 related to an encryption key.
Terminal 1050 in FIG. 19 receives the modulated signal transmitted by third device 1400A. Light receiver 151 included in terminal 1050 is, for example, an image sensor such as a CMOS or organic CMOS image sensor. Light receiver 151 receives light including the modulated signal transmitted from third device 1400A, and outputs reception signal 152.
Reception unit 153 receives an input of reception signal 152 received via light receiver 151, performs processing such as demodulation and error correction decoding on the modulated signal included in reception signal 152, and outputs reception data 154. Data analyzer 155 receives an input of reception data 154 and outputs, based on the reception data, for example, information 1051 on the SSID of the base station to be connected to. Wireless communication device 453 obtains, from information 1051 on the SSID, the information on the SSID of base station 470 that wireless communication device 453 connects with via radio waves.
Terminal 1050 in FIG. 19 also receives the modulated signal transmitted by fourth device 1400B. Light receiver 151 included in terminal 1050 is, for example, an image sensor such as a CMOS or organic CMOS image sensor. Light receiver 151 receives light including the modulated signal transmitted from fourth device 1400B, and outputs reception signal 152. Reception unit 153 receives an input of reception signal 152 received via light receiver 151, performs processing such as demodulation and error correction decoding on the modulated signal included in reception signal 152, and outputs reception data 154. Data analyzer 155 receives an input of reception data 154 and outputs, based on the reception data, for example, information 1052 on the encryption key for communicating with the base station to be connected to.
For example, in a wireless local area network (LAN), examples of encryption schemes include wired equivalent privacy (WEP), Wi-Fi protected access (WPA), and Wi-Fi protected access 2 (WPA2) (pre-shared key (PSK) mode, extensible authentication protocol (EAP) mode). Note that the encryption method is not limited to these examples. Wireless communication device 453 included in terminal 1050 obtains, from information 1052 on the encryption key for communicating with the base station to be connected to (via, for example, radio waves), the information on the encryption key of base station 470 that wireless communication device 453 is to connect to.
Display 157 receives inputs of information 1051 on the SSID and information 1052 on the encryption key, and, for example, displays (i) the SSID of the communication partner to be accessed by wireless communication device 453 included in terminal 1050 and (ii) the encryption key (hereinafter, this display is referred to as the "first display" in this embodiment). For example, after the first display, wireless communication device 453 receives inputs of information 1051 on the SSID and information 1052 on the encryption key, and establishes a connection with base station 470 via radio waves.
Here, when base station 470 communicates with wireless communication device 453 included in terminal 1050, base station 470 also transmits the modulated signal using, for example, radio waves. Thereafter, wireless communication device 453 receives inputs of data 1053 and control signal 1054, modulates data 1053 in accordance with the control indicated in control signal 1054, and transmits the modulated signal over radio waves. Then, for example, base station 470 transmits data to the network (471) and receives data from the network (472). Thereafter, for example, base station 470 transmits a modulated signal to terminal 1050 over radio waves. Wireless communication device 453 included in terminal 1050 performs processing such as demodulation and error correction decoding on the modulated signal received over radio waves, and obtains reception data 1056. Display 157 performs display based on reception data 1056.
FIG. 20 illustrates one example of a frame configuration of a modulated signal transmitted by third device 1400A illustrated in FIG. 19. In FIG. 20, time is represented on the horizontal axis. Moreover, in FIG. 20, symbols that are the same as in FIG. 2, FIG. 11, and FIG. 16 share like reference signs, and repeated description thereof will be omitted. Symbol 600-1 related to an SSID is a symbol for transmitting information 1401-1 related to an SSID illustrated in FIG. 19. Data symbol 1102 is a symbol for transmitting data 1402-1. Third device 1400A transmits preamble 201, control information symbol 202, symbol 600-1 related to an SSID, and data symbol 1102. Note that third device 1400A may transmit a frame including symbols other than those shown in FIG. 20. Moreover, the configuration of the frame, including the order in which the symbols are transmitted, is not limited to the configuration illustrated in FIG. 20.
FIG. 21 illustrates one example of a frame configuration of a modulated signal transmitted by fourth device 1400B illustrated in FIG. 19. In FIG. 21, time is represented on the horizontal axis. Moreover, in FIG. 21, symbols that are the same as in FIG. 7 and FIG. 16 share like reference signs, and repeated description thereof will be omitted. Symbol 1101 related to the encryption key is a symbol for transmitting information 1403-2 related to an encryption key illustrated in FIG. 19. Data symbol 1102 is a symbol for transmitting data 1402-2. Fourth device 1400B transmits preamble 201, control information symbol 202, symbol 1101 related to the encryption key, and data symbol 1102. Note that fourth device 1400B in FIG. 19 may transmit a frame including symbols other than those shown in FIG. 21. Moreover, the configuration of the frame, including the order in which the symbols are transmitted, is not limited to the configuration illustrated in FIG. 21.
The frame configuration of the modulated signal transmitted by wireless communication device 453 in this embodiment is the same as the frame configuration illustrated in FIG. 17 and described in Embodiment 4. In other words, as illustrated in FIG. 17, wireless communication device 453 included in terminal 1050 transmits, for example, preamble 1201, and thereafter transmits control information symbol 1202 and information symbol 1203. Preamble 1201 is a symbol used by base station (or AP) 470, which receives the modulated signal transmitted by wireless communication device 453 in terminal 1050 illustrated in FIG. 19, to perform, for example, signal detection, time synchronization, frame synchronization, frequency synchronization, and/or frequency offset estimation. Control information symbol 1202 is, for example, a symbol including information related to the error correction encoding method and/or modulation scheme used in the generation of the modulated signal, information related to the frame configuration, and information related to the transmission method. Based on the information in control information symbol 1202, base station 470 implements, for example, demodulation of the modulated signal. Information symbol 1203 is a symbol for wireless communication device 453 in terminal 1050 to transmit data. Note that wireless communication device 453 in terminal 1050 illustrated in FIG. 19 may transmit a frame including symbols other than those shown in FIG. 17. For example, wireless communication device 453 may transmit a frame including a pilot symbol (reference symbol) between information symbols 1203.
Moreover, the configuration of the frame, including the order in which the symbols are transmitted, is not limited to the configuration illustrated in FIG. 17. Moreover, in FIG. 17, a plurality of symbols may be arranged along the frequency axis. In other words, in FIG. 17, symbols may be present on a plurality of frequencies (a plurality of carriers).
The frame configuration of the modulated signal transmitted by base station 470 in this embodiment is the same as the frame configuration illustrated in FIG. 12 and described in Embodiment 3. In other words, as illustrated in FIG. 12, base station 470 transmits, for example, preamble 701, and thereafter transmits control information symbol 702 and information symbol 703. Preamble 701 is a symbol used by wireless communication device 453 in terminal 1050 illustrated in FIG. 19, which receives the modulated signal transmitted by base station 470, to perform, for example, signal detection, time synchronization, frame synchronization, frequency synchronization, and/or frequency offset estimation. Control information symbol 702 is, for example, a symbol including information related to the error correction encoding method and/or modulation scheme used in the generation of the modulated signal, information related to the frame configuration, and information related to the transmission method. Based on the information in control information symbol 702, wireless communication device 453 in terminal 1050 illustrated in FIG. 19 implements, for example, demodulation of the modulated signal. Information symbol 703 is a symbol for base station 470 illustrated in FIG. 19 to transmit information. Note that base station 470 in FIG. 19 may transmit a frame including symbols other than those shown in FIG. 12. For example, base station 470 may transmit a frame including a pilot symbol (reference symbol) between information symbols 703. Moreover, the configuration of the frame, including the order in which the symbols are transmitted, is not limited to the configuration illustrated in FIG. 12. Moreover, in FIG. 12, a plurality of symbols may be arranged along the frequency axis. In other words, in FIG. 12, symbols may be present on a plurality of frequencies (a plurality of carriers).
Moreover, for example, it is conceivable that a modulated signal having the frame configuration illustrated in FIG. 20 is transmitted by third device 1400A at a regular timing, e.g., is transmitted repeatedly. With this, a plurality of terminals 1050 can implement the above-described operations. Similarly, it is conceivable that a modulated signal having the frame configuration illustrated in FIG. 21 is transmitted by fourth device 1400B at a regular timing, e.g., is transmitted repeatedly. With this, a plurality of terminals 1050 can implement the above-described operations.
FIG. 22 is a flow chart illustrating a first example of processes implemented by third device 1400A, fourth device 1400B, terminal 1050, and base station 470 illustrated in FIG. 19. Note that in FIG. 22, steps that operate in the same manner as in FIG. 18 share like reference signs. First, third device 1400A transmits a modulated signal having the frame configuration illustrated in FIG. 20 (ST1701). Terminal 1050 obtains the SSID of base station 470 to be accessed by terminal 1050 by receiving the modulated signal transmitted by third device 1400A (ST1702). Next, fourth device 1400B transmits a modulated signal having the frame configuration illustrated in FIG. 21 (ST1703).
Terminal 1050 obtains the encryption key used to communicate with base station 470 to be accessed by terminal 1050 by receiving the modulated signal transmitted by fourth device 1400B (ST1704). Terminal 1050 then connects with base station 470 over radio waves (ST1304). Terminal 1050 completes the connection with base station 470 over radio waves by receiving a response from base station 470 (ST1305). Terminal 1050 then transmits information on the connection destination to base station 470 using radio waves (ST1306). Base station 470 obtains information for transmitting to terminal 1050 from the network (ST1307). Base station 470 then transmits the obtained information to terminal 1050 using radio waves, and terminal 1050 obtains the information (ST1308). When necessary, terminal 1050, for example, obtains required information from the network via base station 470.

FIG. 23 is a flow chart illustrating a second example of processes implemented by third device 1400A, fourth device 1400B, terminal 1050, and base station 470 illustrated in FIG. 19. Note that in FIG. 23, configurations that operate in the same manner as in FIG. 18 share like reference signs.

First, fourth device 1400B transmits a modulated signal having the frame configuration illustrated in FIG. 21 (ST1801). Terminal 1050 obtains the encryption key used to communicate with base station 470 to be accessed by terminal 1050 by receiving the modulated signal transmitted by fourth device 1400B (ST1802). Next, third device 1400A transmits a modulated signal having the frame configuration illustrated in FIG. 20 (ST1803). Terminal 1050 obtains the SSID of base station 470 to be accessed by terminal 1050 by receiving the modulated signal transmitted by third device 1400A (ST1804).

Terminal 1050 then connects with base station 470 over radio waves (ST1304). Terminal 1050 completes the connection with base station 470 over radio waves by receiving a response from base station 470 (ST1305). Terminal 1050 then transmits information on the connection destination to base station 470 using radio waves (ST1306). Base station 470 obtains information for transmitting to terminal 1050 from the network (ST1307). Base station 470 then transmits the obtained information to terminal 1050 using radio waves, and terminal 1050 obtains the information (ST1308). When necessary, terminal 1050, for example, obtains required information from the network via base station 470.

As described above, based on the SSID transmitted from third device 1400A and the encryption key information transmitted from fourth device 1400B, terminal 1050 connects with base station 470 and obtains information. In other words, since the device from which terminal 1050 obtains the SSID information and the device from which terminal 1050 obtains the encryption key information are different, terminal 1050 can securely obtain the information via base station 470, whose security has been authenticated. This is because, when terminal 1050 obtains information from a visible light modulated signal, the use of visible light makes it possible for the user to easily determine whether the source of the information is secure or not. Conversely, for example, when the SSID is obtained from a modulated signal transmitted over radio waves via a wireless LAN, it is difficult for the user to determine which device transmitted the radio waves. Accordingly, from the viewpoint of ensuring information security, obtaining the SSID via visible light communication is more suitable than obtaining it via wireless LAN communication.
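The sequences in FIG. 22 and FIG. 23 differ only in whether the SSID or the encryption key is obtained first. As a minimal sketch of the terminal-side steps, assuming the FIG. 22 order, the flow might look as follows; the helper functions are hypothetical stand-ins for the optical receiver and the wireless LAN stack and are not part of this disclosure.

```python
# Sketch of terminal 1050's side of FIG. 22 (ST1702, ST1704, ST1304-ST1308).
# The helpers below are hypothetical stubs, not APIs from the disclosure.

def receive_optical_field(source: str, field: str) -> str:
    """Hypothetical: demodulate a visible-light frame from `source` and
    return the requested field ("ssid" or "encryption_key")."""
    demo = {"ssid": "abcdef", "encryption_key": "123"}  # placeholder values
    return demo[field]

def wireless_connect(ssid: str, key: str) -> bool:
    """Hypothetical: send the connection request over radio waves (ST1304)
    and treat the base station's response as completion (ST1305)."""
    return bool(ssid and key)

ssid = receive_optical_field("third device 1400A", "ssid")            # ST1702
key = receive_optical_field("fourth device 1400B", "encryption_key")  # ST1704
if wireless_connect(ssid, key):                                       # ST1304/ST1305
    # ST1306-ST1308: send connection-destination information and obtain
    # the information that base station 470 fetched from the network.
    print("connected to base station 470")
```

For the FIG. 23 order, the two receive_optical_field calls are simply swapped.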
Note that in this embodiment, a configuration in which fourth device 1400B transmits encryption key information has been described. However, for example, when base station 470 does not perform encrypted communication using an encryption key, third device 1400A may transmit SSID information, and fourth device 1400B need not transmit encryption key information. In such cases, the present disclosure can be implemented in the same manner simply by removing the configuration related to an encryption key from the above configurations.

Moreover, as in this embodiment, by employing a configuration in which the device that transmits the information related to an SSID (third device 1400A) and the device that transmits the information related to an encryption key (fourth device 1400B) are separate devices, it is possible for terminal 1050 to communicate with base station 470 more securely.

For example, consider the space illustrated in FIG. 24. As illustrated in FIG. 24, the space includes area #1 and area #2, with a wall and a doorway between area #1 and area #2. In other words, in the space illustrated in FIG. 24, movement from area #1 to area #2 and movement from area #2 to area #1 are only possible through the doorway.

Base station 470, third device 1400A, and fourth device 1400B are disposed in area #1 in FIG. 24. Only third device 1400A is disposed in area #2. Moreover, assume that the radio waves transmitted by base station 470 are receivable in either of areas #1 or #2 in FIG. 24.

Here, terminal 1050 in area #1, in which fourth device 1400B is disposed, can obtain the encryption key for base station 470 from fourth device 1400B and communicate with base station 470. Moreover, even when terminal 1050 connected to base station 470 in area #1 moves to area #2, terminal 1050 can still communicate with base station 470 using the encryption key obtained from fourth device 1400B in area #1. Additionally, even when terminal 1050 connected to base station 470 in area #1 moves to an area other than area #1 or area #2 and then returns to either one of areas #1 or #2, terminal 1050 can still communicate with base station 470 using the encryption key obtained from fourth device 1400B in area #1.

However, terminal 1050 that cannot enter area #1 cannot obtain an encryption key from fourth device 1400B. In such cases, terminal 1050 knows only the SSID of base station (or AP) 470. Therefore, terminal 1050 may, for example, receive from base station 470 only a service that can be accepted with nothing more than knowledge of the SSID of base station 470. The service that can be accepted with nothing more than knowledge of the SSID of base station 470 can be more restrictive than a service that can be accepted when both the SSID and the encryption key are known.

Accordingly, it is possible to allow only terminal 1050 that can enter area #1 to communicate with base station 470. This makes it possible to assure secure communication. Moreover, this makes it possible to construct a system that can provide different services for different areas.

Note that by changing (for example, on a per time interval basis) the encryption key for terminal 1050 to communicate with base station 470, it is possible to prohibit terminal 1050 having an old encryption key from before the change from communicating with base station 470. Using such a system makes it possible to provide even more secure communication.
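As a minimal sketch of the per-time-interval key change mentioned above, the key broadcast by fourth device 1400B and the key accepted by base station 470 could both be derived from the index of the current interval, so that a key from before the change stops working automatically. The HMAC-based derivation, the shared secret, and the one-hour interval are illustrative assumptions; the disclosure only requires that the encryption key change over time.

```python
# Sketch: derive the encryption key from the current time interval so
# that a terminal holding a key from an earlier interval is rejected.
# MASTER_SECRET and INTERVAL_SECONDS are hypothetical values.

import hashlib
import hmac
import time

MASTER_SECRET = b"shared-by-fourth-device-1400B-and-base-station-470"
INTERVAL_SECONDS = 3600  # assumed rotation period

def key_for_time(now: float) -> str:
    interval_index = int(now // INTERVAL_SECONDS)
    mac = hmac.new(MASTER_SECRET, str(interval_index).encode(), hashlib.sha256)
    return mac.hexdigest()[:16]  # value carried in symbol 1101

current_key = key_for_time(time.time())
old_key = key_for_time(time.time() - INTERVAL_SECONDS)
assert current_key != old_key  # the old key no longer matches
```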
Moreover, the configuration of the third device is not limited to the configuration of third device 1400A illustrated in FIG. 19, the configuration of the fourth device is not limited to the configuration of fourth device 1400B illustrated in FIG. 19, the configuration of the terminal is not limited to the configuration of terminal 1050 illustrated in FIG. 19, and the connection destination and configuration of the base station are not limited to the connection destination and configuration of base station 470 illustrated in FIG. 19.

Moreover, in the example in FIG. 19, a single base station 470 is present, but a plurality of (secure) base stations (or APs) that terminal 1050 can access may be present. In such cases, the symbol related to an SSID that is transmitted by third device 1400A in FIG. 19 may include information indicating the SSID of each of the plurality of base stations 470. Moreover, the symbol related to an encryption key that is transmitted by fourth device 1400B in FIG. 19 may include information indicating the encryption key to be used for connection with each of the plurality of base stations.

In such cases, as the display of the access destination (the "first display" described above), display 157 in terminal 1050 illustrated in FIG. 19 displays a list of the SSIDs of the plurality of base stations and/or a list of the plurality of access destinations. Then, based on the information on the SSIDs of the plurality of base stations and the information on the encryption keys to be used for connection with the plurality of base stations, terminal 1050 in FIG. 19 may select one or more base stations to actually wirelessly connect to (in other words, terminal 1050 may concurrently connect to a plurality of base stations).

For example, assume there are three base stations 470. Here, the three base stations 470 shall be referred to as base station #A, base station #B, and base station #C. Moreover, assume the SSID of base station #A is "abcdef", the SSID of base station #B is "ghijk", and the SSID of base station #C is "pqrstu". Moreover, assume the encryption key for connecting with base station #A is "123", the encryption key for connecting with base station #B is "456", and the encryption key for connecting with base station #C is "789".

In such cases, symbol 600-1 related to an SSID in the frame configuration illustrated in FIG. 20 of the modulated signal transmitted by third device 1400A includes information indicating that the SSID of base station #A is "abcdef", the SSID of base station #B is "ghijk", and the SSID of base station #C is "pqrstu". Moreover, symbol 1101 related to the encryption key in the frame configuration illustrated in FIG. 21 of the modulated signal transmitted by fourth device 1400B includes information indicating that the encryption key for connecting with base station #A is "123", the encryption key for connecting with base station #B is "456", and the encryption key for connecting with base station #C is "789".

Terminal 1050 in FIG. 19 receives symbol 600-1 related to an SSID, and thus obtains information indicating that the SSID of base station #A is "abcdef", the SSID of base station #B is "ghijk", and the SSID of base station #C is "pqrstu". Moreover, terminal 1050 receives symbol 1101 related to the encryption key, and thus obtains information indicating that the encryption key for connecting with base station #A is "123", the encryption key for connecting with base station #B is "456", and the encryption key for connecting with base station #C is "789".
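As a minimal sketch, the three SSID/key pairs of this example could be carried and recombined as follows. The JSON encoding is an assumption chosen for readability; the disclosure specifies what the symbols convey, not a byte-level format.

```python
# Sketch: symbol 600-1 carries the SSIDs and symbol 1101 carries the
# encryption keys; terminal 1050 pairs them up for the "first display".

import json

symbol_600_1_payload = json.dumps({
    "base station #A": "abcdef",
    "base station #B": "ghijk",
    "base station #C": "pqrstu",
}).encode()
symbol_1101_payload = json.dumps({
    "base station #A": "123",
    "base station #B": "456",
    "base station #C": "789",
}).encode()

# Terminal side: recover both lists and pair them per base station.
ssids = json.loads(symbol_600_1_payload)
keys = json.loads(symbol_1101_payload)
access_destinations = [(name, ssids[name], keys[name]) for name in ssids]
```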
Then, based on this information, terminal 1050 selects a base station to wirelessly (via, for example, radio waves) connect to, and connects to the selected base station.

As described in this embodiment, since terminal 1050 sets which base station 470 to access by utilizing a light source, exemplified here as an LED light source, the modulated signal for connection over radio waves that is transmitted by terminal 1050 does not require a mode for making a special setting for the processes for establishing a wireless connection between terminal 1050 and base station 470. Moreover, the modulated signal that is transmitted by base station 470 likewise does not require a mode for making a special setting for the processes for establishing a wireless connection between terminal 1050 and base station 470. With this, in this embodiment, data transmission efficiency in wireless communication can be improved.

Moreover, the encryption key may be an encryption key for an SSID on a wireless LAN, as described above, and may be an encryption key for limiting the connection type, the service type, or the connection region of a network, for example. In other words, any implementation of an encryption key that imposes some form of limitation is acceptable.

Embodiment 6

FIG. 25 illustrates one example of a configuration of a communication system according to this embodiment. The communication system illustrated in FIG. 25 includes, for example, base station 2000 and terminal 1050. Moreover, base station 2000 includes transmission device 2001 and wireless communication device 2002. In FIG. 25, symbols that are the same as in FIG. 6 and FIG. 15 share like reference numbers, and repeated description thereof will be omitted. Moreover, communication between wireless communication device 2002 and wireless communication device 453 illustrated in FIG. 25 uses, for example, radio waves.

Transmission device 2001 included in base station (or AP) 2000 in FIG. 25 includes, for example, an LED visible light source, lamp, light source, and/or light (hereinafter referred to as "light source 104"). First, operations performed by transmission device 2001 (the element related to the LED lamp, light source, and/or light that emits visible light) will be described.

In transmission device 2001, transmission unit 102 receives inputs of information 1001-1 related to an SSID, information 1001-2 related to an encryption key, and data 1002, generates modulated signal (for optical communication) 103 based on the input signals, and outputs modulated signal 103. Modulated signal 103 is then transmitted from light source 104, for example.

Next, information 1001-1 related to an SSID and information 1001-2 related to an encryption key will be described. First, information 1001-1 related to an SSID will be described. Information 1001-1 related to an SSID is information indicating the SSID of wireless communication device 2002, which uses radio waves and is included in base station 2000 illustrated in FIG. 25. In other words, transmission device 2001 can provide access to wireless communication device 2002, which is a secure wireless access destination for terminal 1050. With this, terminal 1050 illustrated in FIG. 25 can securely obtain information from wireless communication device 2002. On the other hand, transmission device 2001 can restrict the terminals that access wireless communication device 2002 to terminals in a space in which it is possible to receive the optical signals transmitted (emitted) by transmission device 2001.
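A minimal sketch of how transmission unit 102 might combine these three inputs into one payload for modulated signal 103 is shown below. The length-prefixed field layout and the preamble pattern are assumptions; the disclosure defines the symbols to be carried, not a bit-level format.

```python
# Sketch: assemble a preamble, a control information symbol, a symbol
# related to an SSID, a symbol related to the encryption key, and a
# data symbol into one byte sequence. The framing is hypothetical.

PREAMBLE = b"\xaa\xaa\xaa\xaa"  # assumed fixed synchronization pattern

def length_prefixed(payload: bytes) -> bytes:
    # One-byte length prefix; fields are assumed to fit in 255 bytes.
    return bytes([len(payload)]) + payload

def build_optical_payload(ssid: str, encryption_key: str, data: bytes) -> bytes:
    ssid_symbol = length_prefixed(ssid.encode())
    key_symbol = length_prefixed(encryption_key.encode())
    data_symbol = length_prefixed(data)
    control_symbol = bytes([len(ssid_symbol), len(key_symbol), len(data_symbol)])
    return PREAMBLE + control_symbol + ssid_symbol + key_symbol + data_symbol

payload = build_optical_payload("example-ssid", "example-key", b"data 1002")
```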
Note that when terminal 1050 receives an optical signal transmitted via a predetermined scheme, it may be determined that the notified SSID is the SSID of a secure base station. Moreover, terminal 1050 may also perform processing for determining whether the notified SSID is secure or not. For example, transmission device 2001 may transmit a predetermined identifier in an optical signal, and terminal 1050 may determine whether the notified SSID is the SSID of a secure base station or not based on the received identifier.

Note that although the only base station illustrated in FIG. 25 is base station 2000, even when, for example, a base station (or AP) other than base station 2000 is also present, terminal 1050 can access wireless communication device 2002 of base station 2000 using the SSID and the encryption key obtained from transmission device 2001, and obtain information.

Next, information 1001-2 related to an encryption key will be described. Information 1001-2 related to an encryption key is information related to an encryption key that is necessary in order for terminal 1050 to communicate with wireless communication device 2002. By obtaining information 1001-2 related to an encryption key from transmission device 2001, terminal 1050 can perform encrypted communication with wireless communication device 2002. This concludes the description of information 1001-1 related to an SSID and information 1001-2 related to an encryption key.

Terminal 1050 in FIG. 25 receives a modulated signal transmitted by transmission device 2001. Note that in terminal 1050 illustrated in FIG. 25, configurations that operate in the same manner as in terminal 150 in FIG. 6 and terminal 1050 in FIG. 15 share like reference signs.

Light receiver 151 included in terminal 1050 is, for example, an image sensor such as a CMOS or organic CMOS image sensor. Light receiver 151 receives light including the modulated signal transmitted from transmission device 2001, and outputs reception signal 152. Reception unit 153 receives an input of reception signal 152 received via light receiver 151, performs processing such as demodulation and error correction decoding on the modulated signal included in reception signal 152, and outputs reception data 154. Data analyzer 155 receives an input of reception data 154, and outputs, based on the reception data, for example, information 1051 on the SSID for wireless communication device 2002 included in base station 2000 to be connected to, and information 1052 on the encryption key for communicating with wireless communication device 2002 included in base station 2000 to be connected to.

For example, in a wireless local area network (LAN), examples of encryption schemes include wired equivalent privacy (WEP), Wi-Fi protected access (WPA), and Wi-Fi protected access 2 (WPA2) (pre-shared key (PSK) mode, extensible authentication protocol (EAP) mode). Note that the encryption method is not limited to these examples.

Display 157 receives inputs of information 1051 on the SSID and information 1052 on the encryption key, and, for example, displays (i) the SSID of the communication partner to be accessed by wireless communication device 453 included in terminal 1050 and (ii) the encryption key (hereinafter this display is referred to as the "first display" in this embodiment). For example, after the first display, wireless communication device 453 receives inputs of information 1051 on the SSID and information 1052 on the encryption key, and establishes a connection with wireless communication device 2002 in base station 2000 (for example, assume the connection uses radio waves).
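A minimal sketch of the identifier-based determination described above: transmission device 2001 embeds a predetermined identifier in the optical signal, and terminal 1050 treats the notified SSID as that of a secure base station only when the identifier matches. The identifier value and its position in the payload are assumptions.

```python
# Sketch: terminal 1050 checks a predetermined identifier before
# trusting the SSID notified over visible light. Value and layout
# are hypothetical.

PREDETERMINED_IDENTIFIER = b"\x5a\x5a"  # assumed value known to terminals

def notified_ssid_is_secure(optical_payload: bytes) -> bool:
    """Trust the SSID only when the payload begins with the
    predetermined identifier (assumed position)."""
    return optical_payload.startswith(PREDETERMINED_IDENTIFIER)

payload = PREDETERMINED_IDENTIFIER + b"example-ssid"
assert notified_ssid_is_secure(payload)
```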
Here, when wireless communication device 2002 in base station 2000 communicates with wireless communication device 453 included in terminal 1050, wireless communication device 2002 also transmits the modulated signal using, for example, radio waves. Thereafter, wireless communication device 453 receives inputs of data 1053 and control signal 1054, modulates data 1053 in accordance with the control indicated in control signal 1054, and transmits the modulated signal over radio waves. Then, for example, wireless communication device 2002 in base station 2000 transmits data over the network (471) and receives data from the network (472). Thereafter, for example, wireless communication device 2002 in base station 2000 transmits the modulated signal to terminal 1050 over radio waves. Wireless communication device 453 included in terminal 1050 performs processing such as demodulation and error correction decoding on the modulated signal received over radio waves, and obtains reception data 1056. Display 157 performs display based on reception data 1056.

The frame configuration of the modulated signal transmitted by transmission device 2001 in base station 2000 according to this embodiment is the same as the frame configuration illustrated in FIG. 16 and described in Embodiment 4. In other words, in FIG. 16, symbol 600-1 related to an SSID is a symbol for transmitting information 1001-1 related to an SSID illustrated in FIG. 25, and symbol 1101 related to the encryption key is a symbol for transmitting information 1001-2 related to an encryption key illustrated in FIG. 25. Data symbol 1102 is a symbol for transmitting data 1002 illustrated in FIG. 25. As illustrated in FIG. 16, transmission device 2001 in base station 2000 transmits preamble 201, control information symbol 202, symbol 600-1 related to an SSID, symbol 1101 related to the encryption key, and data symbol 1102. Note that transmission device 2001 in base station 2000 may transmit a frame including symbols other than those shown in FIG. 16. Moreover, the configuration of the frame, including the order in which the symbols are transmitted, is not limited to the configuration illustrated in FIG. 16.

The frame configuration of the modulated signal transmitted by wireless communication device 453 included in terminal 1050 according to this embodiment is the same as the frame configuration illustrated in FIG. 17 and described in Embodiment 4. In other words, as illustrated in FIG. 17, wireless communication device 453 included in terminal 1050 and illustrated in FIG. 25 transmits, for example, preamble 1201, and thereafter transmits control information symbol 1202 and information symbol 1203.

Here, preamble 1201 is a symbol for wireless communication device 2002 in base station 2000, which receives the modulated signal transmitted by wireless communication device 453, to perform, for example, signal detection, time synchronization, frame synchronization, frequency synchronization, and/or frequency offset estimation. Control information symbol 1202 is a symbol including, for example, information related to the error correction encoding method and/or modulation scheme used by terminal 1050 in the generation of the modulated signal, information related to the frame configuration, and information related to the transmission method. Based on the information in control information symbol 1202, wireless communication device 2002 in base station 2000 implements, for example, demodulation of the modulated signal. Information symbol 1203 is a symbol for wireless communication device 453 in terminal 1050 to transmit data.
Note that wireless communication device 453 in terminal 1050 may transmit a frame including symbols other than those shown in FIG. 17. For example, wireless communication device 453 may transmit a frame including a pilot symbol (reference symbol) between information symbols 1203. Moreover, the configuration of the frame, including the order in which the symbols are transmitted, is not limited to the configuration illustrated in FIG. 17. Moreover, in FIG. 17, a plurality of symbols may be arranged along the frequency axis. In other words, in FIG. 17, symbols may be present on a plurality of frequencies (a plurality of carriers).

The frame configuration of the modulated signal transmitted by wireless communication device 2002 in this embodiment is the same as the frame configuration illustrated in FIG. 12 and described in Embodiment 3. In other words, as illustrated in FIG. 12, wireless communication device 2002 transmits, for example, preamble 701, and thereafter transmits control information symbol 702 and information symbol 703.

Preamble 701 is a symbol for wireless communication device 453 in terminal 1050, which receives the modulated signal transmitted by wireless communication device 2002, to perform, for example, signal detection, time synchronization, frame synchronization, frequency synchronization, and/or frequency offset estimation. Control information symbol 702 is a symbol including, for example, information related to the error correction encoding method and/or modulation scheme used in the generation of the modulated signal, information related to the frame configuration, and information related to the transmission method. Based on the information in control information symbol 702, wireless communication device 453 in terminal 1050 implements, for example, demodulation of the modulated signal. Information symbol 703 is a symbol for wireless communication device 2002 to transmit information.

Note that wireless communication device 2002 included in base station 2000 illustrated in FIG. 25 may transmit a frame including symbols other than those shown in FIG. 12. For example, wireless communication device 2002 may transmit a frame including a pilot symbol (reference symbol) between information symbols 703. Moreover, the configuration of the frame, including the order in which the symbols are transmitted, is not limited to the configuration illustrated in FIG. 12. Moreover, in FIG. 12, a plurality of symbols may be arranged along the frequency axis. In other words, in FIG. 12, symbols may be present on a plurality of frequencies (a plurality of carriers).

Moreover, for example, the modulated signal having the frame configuration illustrated in FIG. 16 may be transmitted by transmission device 2001 at a regular timing, that is, repeatedly. With this, a plurality of terminals 1050 can implement the above-described operations.

FIG. 26 is a flow chart illustrating one example of processes implemented by transmission device 2001 in base station 2000, terminal 1050, and wireless communication device 2002 in base station 2000 illustrated in FIG. 25.

First, transmission device 2001 transmits a modulated signal having the frame configuration illustrated in FIG. 16 (ST1301). Terminal 1050 obtains the SSID of base station 2000 (wireless communication device 2002) to be accessed by terminal 1050 by receiving the modulated signal transmitted by transmission device 2001 (ST1302). Terminal 1050 also obtains the encryption key to be used in communication with base station 2000 (wireless communication device 2002) to be accessed by terminal 1050 (ST1303).
Terminal 1050 then connects with wireless communication device 2002 in base station 2000 over radio waves (ST1304). Terminal 1050 completes the connection with wireless communication device 2002 in base station 2000 by receiving a response from wireless communication device 2002 in base station 2000 (ST1305). Terminal 1050 then transmits information on the connection destination to wireless communication device 2002 in base station 2000 using radio waves (ST1306). Wireless communication device 2002 in base station 2000 obtains information for transmitting to terminal 1050 from the network (ST1307). Wireless communication device 2002 in base station 2000 then transmits the obtained information to terminal 1050 using radio waves, and terminal 1050 obtains the information (ST1308). When necessary, terminal 1050, for example, obtains required information from the network via wireless communication device 2002 in base station 2000.

As described above, based on the information on the SSID and the information on the encryption key transmitted from transmission device 2001 in base station 2000, terminal 1050 connects with wireless communication device 2002 in base station 2000 and can securely obtain information from base station 2000, whose security has been authenticated. This is because, when terminal 1050 obtains information from a visible light modulated signal, the use of visible light makes it possible for the user to easily determine whether the source of the information is secure or not. Conversely, for example, when the SSID is obtained from a modulated signal transmitted over radio waves via a wireless LAN, it is difficult for the user to determine which device transmitted the radio waves. Accordingly, from the viewpoint of ensuring information security, obtaining the SSID via visible light communication is more suitable than obtaining it via wireless LAN communication.

Note that in this embodiment, a configuration in which transmission device 2001 transmits encryption key information has been described. However, for example, when wireless communication device 2002 in base station 2000 does not perform encrypted communication using an encryption key, transmission device 2001 may transmit only SSID information, without transmitting encryption key information. In such cases, the present disclosure can be implemented in the same manner simply by removing the configuration related to an encryption key from the above configurations included in transmission device 2001.

Moreover, as illustrated in FIG. 25, a configuration in which the SSID and encryption key of wireless communication device 2002 in base station 2000 can be rewritten is acceptable. For example, in FIG. 25, wireless communication device 2002 receives inputs of information 1001-1 related to an SSID and information 1001-2 related to an encryption key. Wireless communication device 2002 in base station 2000 overwrites the SSID and the encryption key in accordance with the input information 1001-1 related to an SSID and information 1001-2 related to an encryption key. With this configuration, even more secure communication between terminal 1050 and wireless communication device 2002 in base station 2000 can be assured. Note that in FIG. 25, although wireless communication device 2002 in base station 2000 has a function of overwriting both the SSID and the encryption key, a configuration having a function for overwriting only one of the SSID and the encryption key is also acceptable.
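A minimal sketch of this overwrite capability, assuming a simple setter-style interface (the class and method names are illustrative and not taken from the disclosure):

```python
# Sketch: wireless communication device 2002 accepts new values for
# information 1001-1 (SSID) and information 1001-2 (encryption key)
# and overwrites both, or only the one that is supplied.

class WirelessCommunicationDevice:
    def __init__(self, ssid: str, encryption_key: str) -> None:
        self.ssid = ssid
        self.encryption_key = encryption_key

    def overwrite(self, ssid: str | None = None,
                  encryption_key: str | None = None) -> None:
        if ssid is not None:
            self.ssid = ssid
        if encryption_key is not None:
            self.encryption_key = encryption_key

ap = WirelessCommunicationDevice(ssid="example-ssid", encryption_key="123")
ap.overwrite(encryption_key="456")  # rotate only the encryption key
```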
Moreover, the configuration of the transmission device is not limited to the configuration of transmission device 2001 illustrated in FIG. 25, the configuration of the terminal is not limited to the configuration of terminal 1050 illustrated in FIG. 25, and the connection destination and configuration of the wireless communication device are not limited to the connection destination and configuration of wireless communication device 2002 illustrated in FIG. 25.

Moreover, in the example in FIG. 25, a single base station 2000 is present, but a plurality of wireless communication devices 2002 in (secure) base stations (or APs) 2000 that terminal 1050 can access may be present. Note that the plurality of wireless communication devices 2002 in base stations 2000 and terminal 1050 respectively transmit and receive modulated signals using radio waves. In such cases, the symbol related to an SSID that is transmitted by transmission device 2001 in FIG. 25 may include information indicating the SSID of each of the plurality of wireless communication devices 2002 in base stations 2000. Moreover, the symbol related to an encryption key that is transmitted by transmission device 2001 in FIG. 25 may include information indicating the encryption key to be used for connection with each of the plurality of wireless communication devices 2002 in base stations 2000. Terminal 1050 in FIG. 25 may select a wireless communication device 2002 in a base station 2000 to wirelessly connect to (for example, over radio waves), based on the information on the SSIDs and the encryption key information of the plurality of wireless communication devices 2002 in base stations 2000 (or may connect to the plurality of wireless communication devices 2002 in base stations 2000).

For example, assume there are three base stations 2000 including wireless communication devices 2002. Here, the three wireless communication devices 2002 in the three base stations 2000 shall be referred to as wireless communication device #A, wireless communication device #B, and wireless communication device #C. Moreover, assume the SSID of wireless communication device #A is "abcdef", the SSID of wireless communication device #B is "ghijk", and the SSID of wireless communication device #C is "pqrstu". Moreover, assume the encryption key for connecting with wireless communication device #A is "123", the encryption key for connecting with wireless communication device #B is "456", and the encryption key for connecting with wireless communication device #C is "789".

In such cases, symbol 600-1 related to an SSID in the frame configuration illustrated in FIG. 16 of the modulated signal transmitted by transmission device 2001 includes information indicating that the SSID of wireless communication device #A is "abcdef", the SSID of wireless communication device #B is "ghijk", and the SSID of wireless communication device #C is "pqrstu". Moreover, symbol 1101 related to the encryption key in the frame configuration illustrated in FIG. 16 includes information indicating that the encryption key for connecting with wireless communication device #A is "123", the encryption key for connecting with wireless communication device #B is "456", and the encryption key for connecting with wireless communication device #C is "789".

Terminal 1050 in FIG. 25 receives symbol 600-1 related to an SSID, and thus obtains information indicating that the SSID of wireless communication device #A is "abcdef", the SSID of wireless communication device #B is "ghijk", and the SSID of wireless communication device #C is "pqrstu".
Moreover, terminal 1050 receives symbol 1101 related to the encryption key, and thus obtains information indicating that the encryption key for connecting with wireless communication device #A is "123", the encryption key for connecting with wireless communication device #B is "456", and the encryption key for connecting with wireless communication device #C is "789". Then, based on this information, terminal 1050 selects a base station to wirelessly (via, for example, radio waves) connect to, and connects to the selected base station.

As described in this embodiment, since terminal 1050 sets which wireless communication device 2002 in base station 2000 to access by utilizing a light source, exemplified here as an LED light source, the modulated signal for connection over radio waves that is transmitted by terminal 1050 does not require a mode for making a special setting for the processes for establishing a wireless connection between terminal 1050 and base station 2000. Moreover, the modulated signal that is transmitted by base station 2000 likewise does not require a mode for making a special setting for the processes for establishing a wireless connection between terminal 1050 and base station 2000. With this, in this embodiment, data transmission efficiency in wireless communication can be improved.

Moreover, the encryption key may be an encryption key for an SSID on a wireless LAN, as described above, and may be an encryption key for limiting the connection type, the service type, or the connection region of a network, for example. In other words, any implementation of an encryption key that imposes some form of limitation is acceptable.

Embodiment 7

FIG. 27 illustrates one example of a configuration of a communication system according to this embodiment. The communication system illustrated in FIG. 27 includes device 1000, terminal 1050, and base station (or AP) 470-1 (base station #1), base station (or AP) 470-2 (base station #2), and base station (or AP) 470-3 (base station #3) that communicate with terminal 1050. In FIG. 27, symbols that are the same as in FIG. 6, FIG. 9, and FIG. 15 share like reference numbers, and repeated description thereof will be omitted.

Device 1000 includes, for example, an LED visible light source, lamp, light source, and/or light (light source 104). Note that hereinafter, device 1000 is also referred to as "fifth device" in this embodiment. Moreover, communication between wireless communication device 453 and base station 470-1 (base station #1) illustrated in FIG. 27, communication between wireless communication device 453 and base station 470-2 (base station #2) in FIG. 27, and communication between wireless communication device 453 and base station 470-3 (base station #3) in FIG. 27 use, for example, radio waves.

In fifth device 1000 illustrated in FIG. 27, transmission unit 102 receives inputs of information 1001-1 related to an SSID, information 1001-2 related to an encryption key, and data 1002, generates modulated signal (for optical communication) 103 based on the input signals, and outputs modulated signal 103. Modulated signal 103 is then transmitted from light source 104, for example.

Next, information 1001-1 related to an SSID and information 1001-2 related to an encryption key will be described. First, information 1001-1 related to an SSID will be described.
Information 1001-1 related to an SSID includes, for example, information indicating the SSID of base station 470-1 (base station #1) in FIG. 27, information indicating the SSID of base station 470-2 (base station #2) in FIG. 27, and information indicating the SSID of base station 470-3 (base station #3) in FIG. 27. Note that in one example, base stations 470-1, 470-2, and 470-3 transmit modulated signals to terminal 1050 over radio waves, and receive modulated signals from terminal 1050 over radio waves.

In other words, fifth device 1000 can provide access to base stations 470-1, 470-2, and 470-3, which are secure access destinations for terminal 1050. With this, terminal 1050 illustrated in FIG. 27 can securely obtain information from base stations 470-1, 470-2, and 470-3. On the other hand, fifth device 1000 can restrict the terminals that access base stations 470-1, 470-2, and 470-3 to terminals in a space in which it is possible to receive the optical signals transmitted (emitted) by fifth device 1000.

Note that when terminal 1050 receives an optical signal transmitted via a predetermined scheme, it may be determined that the notified SSID is the SSID of a secure base station. Moreover, terminal 1050 may also perform processing for determining whether the notified SSID is secure or not. For example, fifth device 1000 may transmit a predetermined identifier in an optical signal, and terminal 1050 may determine whether the notified SSID is the SSID of a secure base station or not based on the received identifier.

Note that although the example illustrated in FIG. 27 shows base stations 470-1, 470-2, and 470-3, base stations (or APs) other than base stations 470-1, 470-2, and 470-3 may be present, for example.

Next, information 1001-2 related to an encryption key will be described. Information 1001-2 related to an encryption key is information related to an encryption key that is necessary in order for terminal 1050 to communicate with base stations 470-1, 470-2, and 470-3. By obtaining information 1001-2 related to an encryption key from fifth device 1000, encrypted communication can be performed between terminal 1050 and base station 470-1, between terminal 1050 and base station 470-2, and between terminal 1050 and base station 470-3. This concludes the description of information 1001-1 related to an SSID and information 1001-2 related to an encryption key.

Terminal 1050 in FIG. 27 receives a modulated signal transmitted by fifth device 1000. Note that in terminal 1050 illustrated in FIG. 27, configurations that operate in the same manner as in terminal 150 in FIG. 6 and terminal 450 in FIG. 9 share like reference signs.

Light receiver 151 included in terminal 1050 is, for example, an image sensor such as a CMOS or organic CMOS image sensor. Light receiver 151 receives light including the modulated signal transmitted from fifth device 1000, and outputs reception signal 152. Reception unit 153 receives an input of reception signal 152 received via light receiver 151, performs processing such as demodulation and error correction decoding on the modulated signal included in reception signal 152, and outputs reception data 154. Data analyzer 155 receives an input of reception data 154, and outputs, based on reception data 154, for example, information 1051 on the SSIDs of base stations 470-1, 470-2, and 470-3 to be connected to, and information 1052 on the encryption keys for communicating with base stations 470-1, 470-2, and 470-3 to be connected to.
For example, in a wireless local area network (LAN), examples of encryption schemes include wired equivalent privacy (WEP), Wi-Fi protected access (WPA), and Wi-Fi protected access 2 (WPA2) (pre-shared key (PSK) mode, extensible authentication protocol (EAP) mode). Note that the encryption method is not limited to these examples.

Display 157 receives inputs of information 1051 on the SSID and information 1052 on the encryption key, and, for example, displays (i) the SSID of the communication partner to be accessed by wireless communication device 453 included in terminal 1050 and (ii) the encryption key (hereinafter this display is referred to as the "first display" in this embodiment). For example, after the first display, wireless communication device 453 receives inputs of information 1051 on the SSIDs and information 1052 on the encryption keys, and establishes a connection with any one of base stations 470-1, 470-2, or 470-3 (for example, assume the connection uses radio waves).

Here, when the connected base station 470 communicates with wireless communication device 453 included in terminal 1050, that base station 470 also transmits the modulated signal using, for example, radio waves. Thereafter, wireless communication device 453 receives inputs of data 1053 and control signal 1054, modulates data 1053 in accordance with the control indicated in control signal 1054, and transmits the modulated signal over radio waves. Then, for example, the connected base station 470 transmits data over the network (any one of 471-1, 471-2, and 471-3) and receives data from the network (any one of 472-1, 472-2, and 472-3). Thereafter, for example, the connected base station 470 transmits the modulated signal to terminal 1050 over radio waves. Wireless communication device 453 included in terminal 1050 performs processing such as demodulation and error correction decoding on the modulated signal received over radio waves, and obtains reception data 1056. Display 157 performs display based on reception data 1056.

In the example illustrated in FIG. 27, fifth device 1000 transmits three modulated signals having three different frame configurations. FIG. 28 illustrates frame 2300-1 (frame #1) among the three frame configurations, FIG. 29 illustrates frame 2300-2 (frame #2) among the three frame configurations, and FIG. 30 illustrates frame 2300-3 (frame #3) among the three frame configurations.

FIG. 28 illustrates an example of the configuration of frame 2300-1 (frame #1) of a modulated signal transmitted by fifth device 1000. In FIG. 28, time is represented on the horizontal axis. Moreover, in FIG. 28, symbols that are the same as in FIG. 2 and FIG. 16 share like reference numbers, and repeated description thereof will be omitted.

Frame 2300-1 (frame #1) illustrated in FIG. 28 is a frame for transmitting information on the SSID of base station 470-1 (base station #1) in FIG. 27 and information on the encryption key of base station 470-1 (base station #1) in FIG. 27 (the encryption key for accessing base station 470-1). Symbol 2301-1 related to an SSID is a symbol for transmitting information 1001-1 related to an SSID illustrated in FIG. 27. Moreover, symbol 2301-1 related to an SSID is a symbol for fifth device 1000 in FIG. 27 to transmit the SSID of base station 470-1 (base station #1). Symbol 2302-1 related to the encryption key is a symbol for transmitting information 1001-2 related to an encryption key illustrated in FIG. 27.
Moreover, symbol 2302-1 related to the encryption key is a symbol for fifth device 1000 in FIG. 27 to transmit the encryption key of base station 470-1 (base station #1) (the encryption key for accessing base station 470-1). Fifth device 1000 transmits preamble 201, control information symbol 202, symbol 2301-1 related to an SSID, symbol 2302-1 related to the encryption key, and data symbol 1102. Note that fifth device 1000 may transmit frame 2300-1 (frame #1) including a symbol other than the symbols illustrated in FIG. 28. Moreover, the configuration of the frame, including the order in which the symbols are transmitted, is not limited to the configuration of frame 2300-1 (frame #1) illustrated in FIG. 28.

FIG. 29 illustrates an example of the configuration of frame 2300-2 (frame #2) of a modulated signal transmitted by fifth device 1000. In FIG. 29, time is represented on the horizontal axis. Moreover, in FIG. 29, symbols that are the same as in FIG. 2 and FIG. 16 share like reference numbers, and repeated description thereof will be omitted.

Frame 2300-2 (frame #2) illustrated in FIG. 29 is a frame for transmitting information on the SSID of base station 470-2 (base station #2) in FIG. 27 and information on the encryption key of base station 470-2 (base station #2) in FIG. 27 (the encryption key for accessing base station 470-2). Symbol 2301-2 related to an SSID is a symbol for transmitting information 1001-1 related to an SSID illustrated in FIG. 27. Moreover, symbol 2301-2 related to an SSID is a symbol for fifth device 1000 in FIG. 27 to transmit the SSID of base station 470-2 (base station #2). Symbol 2302-2 related to the encryption key is a symbol for transmitting information 1001-2 related to an encryption key illustrated in FIG. 27. Moreover, symbol 2302-2 related to the encryption key is a symbol for fifth device 1000 in FIG. 27 to transmit the encryption key of base station 470-2 (base station #2) (the encryption key for accessing base station 470-2).

Fifth device 1000 transmits preamble 201, control information symbol 202, symbol 2301-2 related to an SSID, symbol 2302-2 related to the encryption key, and data symbol 1102. Note that fifth device 1000 may transmit frame 2300-2 (frame #2) including a symbol other than the symbols illustrated in FIG. 29. Moreover, the configuration of the frame, including the order in which the symbols are transmitted, is not limited to the configuration of frame 2300-2 (frame #2) illustrated in FIG. 29.

FIG. 30 illustrates an example of the configuration of frame 2300-3 (frame #3) of a modulated signal transmitted by fifth device 1000. In FIG. 30, time is represented on the horizontal axis. Moreover, in FIG. 30, symbols that are the same as in FIG. 2 and FIG. 16 share like reference numbers, and repeated description thereof will be omitted.

Frame 2300-3 (frame #3) illustrated in FIG. 30 is a frame for transmitting information on the SSID of base station 470-3 (base station #3) in FIG. 27 and information on the encryption key of base station 470-3 (base station #3) in FIG. 27 (the encryption key for accessing base station 470-3). Symbol 2301-3 related to an SSID is a symbol for transmitting information 1001-1 related to an SSID illustrated in FIG. 27. Moreover, symbol 2301-3 related to an SSID is a symbol for fifth device 1000 in FIG. 27 to transmit the SSID of base station 470-3 (base station #3). Symbol 2302-3 related to the encryption key is a symbol for transmitting information 1001-2 related to an encryption key illustrated in FIG. 27.
Moreover, symbol 2302-3 related to the encryption key is a symbol for fifth device 1000 to transmit the encryption key of base station 470-3 (base station #3) (the encryption key for accessing base station 470-3). Fifth device 1000 transmits preamble 201, control information symbol 202, symbol 2301-3 related to an SSID, symbol 2302-3 related to the encryption key, and data symbol 1102. Note that fifth device 1000 may transmit frame 2300-3 (frame #3) including a symbol other than the symbols illustrated in FIG. 30. Moreover, the configuration of the frame, including the order in which the symbols are transmitted, is not limited to the configuration of frame 2300-3 (frame #3) illustrated in FIG. 30.

FIG. 31 illustrates an example of a transmission method used when fifth device 1000 transmits frame 2300-1 (frame #1) in FIG. 28, frame 2300-2 (frame #2) in FIG. 29, and frame 2300-3 (frame #3) in FIG. 30. In FIG. 31, time is represented on the horizontal axis.

In FIG. 31, in frame #1 group transmissions 2601-1 and 2601-2, one or more of frame 2300-1 (frame #1) illustrated in FIG. 28 is transmitted. Moreover, in frame #2 group transmissions 2602-1 and 2602-2, one or more of frame 2300-2 (frame #2) illustrated in FIG. 29 is transmitted. Moreover, in frame #3 group transmissions 2603-1 and 2603-2, one or more of frame 2300-3 (frame #3) illustrated in FIG. 30 is transmitted.

Next, this will be described in greater detail. First, the transmission of one or more of frame 2300-1 (frame #1) illustrated in FIG. 28 in frame #1 group transmissions 2601-1 and 2601-2 will be described.

For example, when an image sensor such as a CMOS or organic CMOS sensor is used in light receiver 151, it is possible to process reception signals frame by frame of a video or still image. Note that, for example, when a video is labeled "4K 30p", this means that one frame has 3840×2160 pixels, and the number of frames per second is 30. Accordingly, when fifth device 1000 transmits a modulated signal having a configuration in which frame 2300-1 (frame #1) in FIG. 28, frame 2300-2 (frame #2) in FIG. 29, and frame 2300-3 (frame #3) in FIG. 30 are present, terminal 1050 in FIG. 27 has difficulty in selecting a base station 470 to access from among the plurality of base stations 470-1, 470-2, and 470-3. In view of this, this embodiment proposes a frame configuration like that illustrated in FIG. 31.

<Method 1-1> Method 1-1 makes the time period that each of frame #1 group transmissions 2601-1 and 2601-2 occupies longer than a frame period of a video or still image by including a plurality of frames 2300-1 (frame #1) illustrated in FIG. 28 in frame #1 group transmissions 2601-1 and 2601-2. This method makes it possible for terminal 1050 to prevent the reception, from fifth device 1000, of a modulated signal including, in a single frame of a video or still image, frame 2300-1 (frame #1) in FIG. 28, frame 2300-2 (frame #2) in FIG. 29, and frame 2300-3 (frame #3) in FIG. 30, that is to say, a modulated signal including different SSIDs and encryption keys. With this, terminal 1050 illustrated in FIG. 27 can easily select a base station 470 to access from among the plurality of base stations 470-1, 470-2, and 470-3.

<Method 2-1> Method 2-1 makes the time period that frame 2300-1 (frame #1) in FIG. 28 occupies longer than a frame period of a video or still image.
For example, symbol 2301-1 related to an SSID in FIG. 28 may include a plurality of items of the information on the SSID for base station #1 (i.e., the information on the SSID for base station #1 is repeatedly included), and symbol 2302-1 related to an encryption key may include a plurality of items of the information on the encryption key for base station #1 (the encryption key for connecting with base station #1) (i.e., the information on the encryption key for base station #1 (the encryption key for connecting with base station #1) is repeatedly included). This method makes it possible for terminal 1050 to prevent the reception, from fifth device 1000, of a modulated signal including, in a single frame of a video or still image, frame 2300-1 (frame #1) in FIG. 28, frame 2300-2 (frame #2) in FIG. 29, and frame 2300-3 (frame #3) in FIG. 30, that is to say, a modulated signal including different SSIDs and encryption keys. With this, terminal 1050 illustrated in FIG. 27 can easily select a base station 470 to access from among the plurality of base stations 470-1, 470-2, and 470-3.

Similarly, frame #2 group transmissions 2602-1 and 2602-2 may have the following configurations.

<Method 1-2> Method 1-2 makes the time period that a frame #2 group transmission occupies longer than a frame period of a video or still image by including a plurality of frames 2300-2 (frame #2) illustrated in FIG. 29 in each of frame #2 group transmissions 2602-1 and 2602-2.

<Method 2-2> Method 2-2 makes the time period that frame 2300-2 (frame #2) in FIG. 29 occupies longer than a frame period of a video or still image. For example, symbol 2301-2 related to an SSID in FIG. 29 may include a plurality of items of the information on the SSID for base station #2 (i.e., the information on the SSID for base station #2 is repeatedly included), and symbol 2302-2 related to an encryption key may include a plurality of items of the information on the encryption key for base station #2 (the encryption key for connecting with base station #2) (i.e., the information on the encryption key for base station #2 (the encryption key for connecting with base station #2) is repeatedly included).

Similarly, frame #3 group transmissions 2603-1 and 2603-2 may have the following configurations.

<Method 1-3> Method 1-3 makes the time period that a frame #3 group transmission occupies longer than a frame period of a video or still image by including a plurality of frames 2300-3 (frame #3) illustrated in FIG. 30 in each of frame #3 group transmissions 2603-1 and 2603-2.

<Method 2-3> Method 2-3 makes the time period that frame 2300-3 (frame #3) in FIG. 30 occupies longer than a frame period of a video or still image. For example, symbol 2301-3 related to an SSID in FIG. 30 may include a plurality of items of the information on the SSID for base station #3 (i.e., the information on the SSID for base station #3 is repeatedly included), and symbol 2302-3 related to an encryption key may include a plurality of items of the information on the encryption key for base station #3 (the encryption key for connecting with base station #3) (i.e., the information on the encryption key for base station #3 (the encryption key for connecting with base station #3) is repeatedly included).

Next, the advantageous effects achieved when fifth device 1000 transmits frames like those in FIG. 28 through FIG. 31 will be described. As one example, consider area 2700 in FIG. 32. In FIG. 32, fifth devices 1000 are disposed at circles 2701-1, 2701-2, 2701-3, 2701-4, 2701-5, 2701-6, 2701-7, 2701-8, 2701-9, and 2701-10.
Moreover, base station 470-1 (base station #1) is disposed at double circle 2702-1, base station 470-2 (base station #2) is disposed at double circle 2702-2, and base station 470-3 (base station #3) is disposed at double circle 2702-3. For example, 99 terminals having the same configuration as terminal 1050 (hereinafter, each of these terminals is simply referred to as terminal 1050) are present in the area indicated as 2703.

Here, for example, fifth devices 1000 disposed at circles 2701-5 and 2701-10 both transmit information on the SSID of base station 470-3 (base station #3) and information on the encryption key for access to base station 470-3 (base station #3). This is because the base station closest to the positions of circles 2701-5 and 2701-10 is base station 470-3 (base station #3). In such cases, all 99 of the terminals 1050 will access base station 470-3 (base station #3). This means there is a high probability that terminals 1050 will have difficulty accessing base station 470-3 (base station #3) due to congestion.

Taking this point into consideration, by making it so that the 99 terminals 1050 access base station 470-1 (base station #1) (position of double circle 2702-1), base station 470-2 (base station #2) (position of double circle 2702-2), and base station 470-3 (base station #3) (position of double circle 2702-3) as evenly as possible, it is possible to achieve a reduction in terminals 1050 having difficulty accessing a base station 470, as described above.

For example, the 99 terminals 1050 typically access fifth device 1000 at different timings. Therefore, when fifth device 1000 transmits frames such as those illustrated in FIG. 28 through FIG. 31 in this embodiment, each of the 99 terminals 1050 obtains a single SSID and a single encryption key for one of base stations 470-1, 470-2, or 470-3, depending on the timing at which it accesses fifth device 1000. With this, control is performed such that the 99 terminals 1050 access base stations 470-1, 470-2, or 470-3 as evenly as possible. Accordingly, the above-described reduction in terminals 1050 having difficulty accessing a base station 470 can be achieved.

Note that FIG. 31 illustrates an example of a transmission method used when fifth device 1000 transmits frame 2300-1 (frame #1) in FIG. 28, frame 2300-2 (frame #2) in FIG. 29, and frame 2300-3 (frame #3) in FIG. 30. However, the transmission method used when fifth device 1000 transmits frame 2300-1 (frame #1) in FIG. 28, frame 2300-2 (frame #2) in FIG. 29, and frame 2300-3 (frame #3) in FIG. 30 is not limited to this example.

For example, in FIG. 31, the order of frame #1 group transmission, frame #2 group transmission, and frame #3 group transmission by fifth device 1000 is repeated, but the order in which frame #1 group transmission, frame #2 group transmission, and frame #3 group transmission are transmitted is not limited to the example given in FIG. 31. For example, the transmission of frame #1 group, the transmission of frame #2 group, and the transmission of frame #3 group by fifth device 1000 may be temporally randomized; alternatively, the order of the transmission of frame #1 group, the transmission of frame #2 group, and the transmission of frame #3 group may be a regular order different from the example given in FIG. 31. It is sufficient so long as fifth device 1000 transmits frame #1 group, frame #2 group, and frame #3 group.

Moreover, in FIG. 31, frame #1 group transmission, frame #2 group transmission, and frame #3 group transmission by fifth device 1000 are exemplified as being performed consecutively, but these transmissions do not necessarily need to be performed consecutively.
For example, in FIG. 31, there may be a time interval between frame #1 group transmission 2601-1 and frame #2 group transmission 2602-2. In FIG. 31, the example includes only frame #1 group transmission, frame #2 group transmission, and frame #3 group transmission, but other symbols and/or frames may be included.

Furthermore, in FIG. 31 and FIG. 27, there are three base stations 470, but the number of base stations 470 is not limited to this example. Operations in cases in which there are two or more base stations 470 are the same as in the example in which there are three base stations 470. Accordingly, for example, when there are N base stations 470 (N is an integer greater than or equal to two) and fifth device 1000 performs transmission such as that illustrated in FIG. 31, frame #k group transmission is performed. Note that k is an integer greater than or equal to one and less than or equal to N. Then, in the transmission of frame #k group, there is a symbol related to an SSID (information on the SSID of base station #k) and a symbol related to an encryption key (information on an encryption key for accessing base station #k).

The frame configuration of the modulated signal transmitted by wireless communication device 453 included in terminal 1050 illustrated in FIG. 27 is the same as the frame configuration illustrated in FIG. 17 and described in Embodiment 4. In other words, as illustrated in FIG. 17, wireless communication device 453 included in terminal 1050 and illustrated in FIG. 27 transmits, for example, preamble 1201, and thereafter transmits control information symbol 1202 and information symbol 1203.

Preamble 1201 is a symbol for base stations 470-1, 470-2, and 470-3, which receive the modulated signal transmitted by wireless communication device 453 in terminal 1050, to perform, for example, signal detection, time synchronization, frame synchronization, frequency synchronization, and/or frequency offset estimation. Control information symbol 1202 is a symbol including, for example, information related to the error correction encoding method and/or modulation scheme used in the generation of the modulated signal, information related to the frame configuration, and information related to the transmission method. Based on the information in control information symbol 1202, base stations 470-1, 470-2, and 470-3 implement, for example, demodulation of the modulated signal. Information symbol 1203 is a symbol for wireless communication device 453 in terminal 1050 to transmit data.

Note that wireless communication device 453 in terminal 1050 illustrated in FIG. 27 may transmit a frame including symbols other than those illustrated in FIG. 17 (for example, a frame including a pilot symbol (reference symbol) between information symbols 1203). Moreover, the configuration of the frame, including the order in which the symbols are transmitted, is not limited to the configuration illustrated in FIG. 17. Moreover, in FIG. 17, a plurality of symbols may be present along the frequency axis; that is, symbols may be present on a plurality of frequencies (a plurality of carriers).

The frame configuration of the modulated signal transmitted by base stations 470-1, 470-2, and 470-3 illustrated in FIG. 27 is the same as the frame configuration illustrated in FIG. 12 and described in Embodiment 3. In other words, as illustrated in FIG. 12, base stations 470-1, 470-2, and 470-3 transmit, for example, preamble 701, and thereafter transmit control information symbol 702 and information symbol 703.
Preamble 701 is a symbol used by wireless communication device 453 in terminal 1050, which receives the modulated signal transmitted by base stations 470-1, 470-2, and 470-3, to perform, for example, signal detection, time synchronization, frame synchronization, frequency synchronization, and/or frequency offset estimation. Control information symbol 702 is a symbol including, for example, information related to the error correction encoding method and/or modulation scheme used in the generation of the modulated signal, information related to the frame configuration, and information related to the transmission method. Based on the information in control information symbol 702, wireless communication device 453 in terminal 1050 implements, for example, demodulation of the modulated signal. Information symbol 703 is a symbol used by base stations 470-1, 470-2, and 470-3 to transmit data. Note that base stations 470-1, 470-2, and 470-3 may transmit a frame including symbols other than the symbols illustrated in FIG. 12. For example, base stations 470-1, 470-2, and 470-3 may transmit a frame including a pilot symbol (reference symbol) between information symbols 703. Moreover, the configuration of the frame, including the order in which the symbols are transmitted, is not limited to the configuration illustrated in FIG. 12. In FIG. 12, a plurality of symbols may be arranged along the frequency axis. In other words, in FIG. 12, symbols may be present on a plurality of frequencies (a plurality of carriers). FIG. 33 is a flow chart illustrating one example of processes implemented by fifth device 1000, terminal 1050, and base station #X. Note that X is equal to 1, 2, or 3. First, fifth device 1000 transmits a modulated signal having the frame configuration illustrated in FIG. 31 (ST2801). Terminal 1050 receives the modulated signal transmitted by fifth device 1000, and selects a base station to access from among base station 470-1 (base station #1), base station 470-2 (base station #2), and base station 470-3 (base station #3) in FIG. 27 (ST2802). Next, this point will be described. Terminal 1050 receives the modulated signal transmitted by fifth device 1000 in order to access any one of base stations 470. Here, terminal 1050 obtains any one of frame #1 group transmission, frame #2 group transmission, and frame #3 group transmission illustrated in FIG. 31, in a single frame of a video or still image. Terminal 1050 then determines which of base station 470-1 (base station #1), base station 470-2 (base station #2), and base station 470-3 (base station #3) to access, based on the obtained base station information (for example, an SSID). Next, terminal 1050 obtains the SSID of base station #X to be accessed by terminal 1050 by receiving the modulated signal transmitted by fifth device 1000 (ST2803). Terminal 1050 also obtains the encryption key to be used in communication with base station #X to be accessed by terminal 1050 (ST2804). Terminal 1050 then connects with base station #X over radio waves (ST2805). Terminal 1050 completes the connection with base station #X over radio waves by receiving a response from base station #X (ST2806). Terminal 1050 then transmits information on the connection destination to base station #X using radio waves (ST2807). Base station #X obtains information for transmitting to terminal 1050 from the network (ST2808). Base station #X then transmits the obtained information to terminal 1050 using radio waves, and terminal 1050 obtains the information (ST2809). When necessary, terminal 1050, for example, obtains required information from the network via base station #X.
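As a rough, hypothetical sketch of the flow of FIG. 33 (ST2801 through ST2807) and of how cyclic frame group transmission spreads terminals across base stations, the following Python fragment models a fifth device, 99 terminals, and three base stations; all class, method, and field names are illustrative assumptions and do not appear in the embodiment.

    import random

    class BaseStation:
        def __init__(self, ssid, key):
            self.ssid, self.key = ssid, key
            self.connected = 0
        def accept(self, ssid, key):
            # ST2805/ST2806: respond to a connection request over radio waves.
            if ssid == self.ssid and key == self.key:
                self.connected += 1
                return True
            return False

    class FifthDevice:
        def __init__(self, stations):
            self.stations = stations
        def frame_group_at(self, t):
            # ST2801: frame #1, #2, #3 groups are transmitted cyclically, so a
            # terminal accessing at slot t obtains exactly one SSID/key pair.
            bs = self.stations[t % len(self.stations)]
            return {"ssid": bs.ssid, "key": bs.key}

    stations = [BaseStation(f"BS{k}", f"key-{k}") for k in (1, 2, 3)]
    fifth = FifthDevice(stations)

    for _ in range(99):                       # 99 terminals access at random timings
        group = fifth.frame_group_at(random.randrange(10_000))   # ST2802-ST2804
        target = next(s for s in stations if s.ssid == group["ssid"])
        target.accept(group["ssid"], group["key"])               # ST2805-ST2807

    print([(s.ssid, s.connected) for s in stations])

Because the access timings are roughly uniform, the three connection counts come out close to 33 each, which corresponds to the even distribution described above.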
As described above, based on the information on the SSID and the information on the encryption key transmitted from fifth device 1000, terminal 1050 connects with base station 470 and can securely obtain information from base station 470, whose security has been authenticated. This is because, when information is obtained from a visible light modulated signal, since it is visible light, it is possible for the user to easily determine whether the source of the information is secure or not. Conversely, for example, when the SSID is obtained from a modulated signal transmitted over radio waves via a wireless LAN, it is difficult for the user to determine which device transmitted the radio waves. Accordingly, from the viewpoint of ensuring information security, obtaining the SSID via visible light communication is more suitable than via wireless LAN communication. Note that in this embodiment, a configuration in which fifth device 1000 transmits encryption key information has been described. However, for example, when base station 470 does not perform encrypted communication using an encryption key, fifth device 1000 may transmit only SSID information, without transmitting encryption key information. In such cases, the present disclosure can be implemented in the same manner simply by removing the configuration related to an encryption key from the above configurations. Moreover, the configuration of the fifth device is not limited to the configuration of fifth device 1000 illustrated in FIG. 27, the configuration of the terminal is not limited to the configuration of terminal 1050 illustrated in FIG. 27, and the connection destination and configuration of base stations #1, #2, and #3 are not limited to the connection destination and configuration of base stations 470-1, 470-2, and 470-3 illustrated in FIG. 27. Accordingly, with this embodiment, the above-described reduction in terminals 1050 having difficulty accessing a base station 470 can be achieved even when a plurality of terminals 1050 are present in a given area. Note that in FIG. 32, the frame configurations of the modulated signals transmitted by the fifth devices 1000 disposed at circles 2701-1, 2701-2, 2701-3, 2701-4, 2701-5, 2701-6, 2701-7, 2701-8, 2701-9, and 2701-10 may all be the same as illustrated in FIG. 31, the frame configurations of the modulated signals transmitted by fifth devices 1000 may be mutually different, or two or more of the fifth devices 1000 may transmit modulated signals having the same frame configuration.

Embodiment 8

In this embodiment, a case in which a communication method using optical signals is combined with image processing will be described as one example of an application of a communication method using optical signals described above. The communication system according to this embodiment is applicable to, for example, communication between two automobiles (intervehicle communication) and communication between an automobile and a communication device disposed on the road or in the vicinity thereof (road-automobile communication). First, a description of the basic configuration according to this embodiment will be given. Note that the application of this basic configuration is not limited to an automobile; the basic configuration can be applied to a mobile terminal such as a smartphone or notebook PC, as well as to other electronic devices. FIG. 34 is a block diagram illustrating the configuration of communication device A1000, which is one example of the communication device according to this embodiment.
Communication device A1000 includes light receiving device A1002, controller A1004, and wireless communication device A1006. Light receiving device A1002 receives optical signal A1001 emitted from a transmitter not illustrated in the drawings, and/or captures a still image or video, and outputs optically received data A1003. Controller A1004, for example, controls other devices included in communication device A1000, and processes optically received data A1003 input from light receiving device A1002 and/or wireless communication reception data input from wireless communication device A1006. Wireless communication device A1006 wirelessly connects to other communication device A1100 based on control signal A1005 from controller A1004, and performs wireless communication for the transmission of wireless communication transmission data and the reception of wireless communication reception data. Wireless communication transmission data and wireless communication reception data are transmitted and received between wireless communication device A1006 and controller A1004 as wireless communication data A1008. Controller A1004 outputs control signal A1007 for controlling operation of light receiving device A1002, and light receiving device A1002 operates according to control signal A1007. When optically received data A1003 generated by light receiving device A1002 includes still image data or video data, controller A1004 may perform image processing using the still image data or video data. An example of the image processing performed by controller A1004 will be given in greater detail later on. FIG. 35 is a block diagram illustrating the configuration of communication device A2000, which is another example of the communication device according to this embodiment. In FIG. 35, elements having the same functions as those in communication device A1000 illustrated in FIG. 34 share like reference signs, and repeated description thereof is omitted. Communication device A2000 differs from communication device A1000 in regard to the inclusion of presentation unit A2003 and input unit A2004. Controller A1004 generates an image based on, for example, optically received data A1003 and/or wireless communication reception data or some other input information, and information read from memory, and outputs the generated image to presentation unit A2003 as presentation information A2002. For example, presentation information A2002 is, but is not limited to, information including image information and/or text information generated based on optically received data A1003 or some other data, and, for example, presentation unit A2003 is, but is not limited to, a liquid crystal display, plasma display, or organic EL display that displays an image signal generated from the image information and/or text information obtained as presentation information A2002. For example, presentation information A2002 may be sound information, and presentation unit A2003 may be a speaker that outputs sound in accordance with the sound information. In accordance with an operation made by a user, input unit A2004 outputs, to controller A1004, input information A2005, which is, for example, information indicating the operation performed by the user and/or information indicating text input by the user. For example, input unit A2004 is, but is not limited to, a touch panel, physical key(s), floating touch display, and/or motion sensor. For example, input unit A2004 may be a microphone, and input information A2005 may be sound information.
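The wiring between light receiving device A1002, controller A1004, and wireless communication device A1006 described above might be modeled as in the following minimal structural sketch; the class and method names are hypothetical, and the stubs merely stand in for the devices.

    class CommunicationDeviceA1000:
        def __init__(self, light_receiving_device, controller, wireless_device):
            self.light_receiving_device = light_receiving_device   # produces A1003
            self.controller = controller                           # issues A1005/A1007
            self.wireless_device = wireless_device                 # exchanges A1008

        def step(self):
            # Optically received data A1003 flows into the controller, which may in
            # turn drive the wireless communication device via control signal A1005.
            a1003 = self.light_receiving_device.receive()
            a1005 = self.controller.process(a1003)
            if a1005 is not None:
                self.wireless_device.apply(a1005)

    class _StubReceiver:
        def receive(self):
            return b"optical payload"      # stands in for optically received data A1003

    class _StubController:
        def process(self, a1003):
            return {"connect": True}       # stands in for control signal A1005

    class _StubWireless:
        def apply(self, a1005):
            print("wireless control:", a1005)

    CommunicationDeviceA1000(_StubReceiver(), _StubController(), _StubWireless()).step()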
Next, the configuration of light receiving device A1002 will be described in greater detail. FIG. 36 is a block diagram illustrating the configuration of light receiving device A3000, which is a first example of a detailed configuration of light receiving device A1002 according to this embodiment. Light receiving device A3000 includes light receiver A3001 and optically received signal processor A3003. Light receiver A3001 has, for example, the same configuration as light receiver 151 illustrated in FIG. 6, receives light emitted from an external source, and outputs reception signal A3002. Optically received signal processor A3003 applies predetermined processing to reception signal A3002, and outputs the resulting signal as optically received data A1003. In one example, the predetermined processing applied to reception signal A3002 by optically received signal processor A3003 includes processing such as demodulation and error correction decoding of components of a modulated signal included in reception signal A3002, and demodulated data A4002 resulting from the demodulation is output as optically received data A1003. In another example, as the predetermined processing, optically received signal processor A3003 generates still image data or video data from reception signal A3002 obtained by light receiver A3001, which is an image sensor such as a CMOS or organic CMOS image sensor, and outputs the generated still image data or video data as optically received data A1003. Here, the still image data or video data may be encoded data encoded using an image compression method or video compression method, or may be uncompressed data. Hereinafter, an example of the configuration of optically received signal processor A3003 will be described in greater detail. FIG. 37 illustrates the configuration of optically received signal processor A4000, which is one example of a configuration of optically received signal processor A3003. Optically received signal processor A4000 includes reception processor A4001. Reception processor A4001 implements processing such as demodulation and/or error correction on reception signal A3002, and outputs the resulting demodulated data A4002 as optically received data A1003. Reception signal A3002 input into optically received signal processor A4000 may be, for example, in the case of the above-described line scan sampling implementation example, a signal obtained by an image sensor such as a CMOS sensor using a sampling method for receiving optical signals, such as sampling by frame, or may be a signal sampled at a sampling rate required for reception of optical signals using an element different from an image sensor that can convert optical signals into electrical signals, such as a photodiode. FIG. 38 illustrates the configuration of optically received signal processor A5000, which is another example of a configuration of optically received signal processor A3003. Optically received signal processor A5000 includes image data generator A5001, and outputs, as optically received data A1003, image data A5002 including optical signal information. In other words, image data generator A5001 generates still image data or video data from reception signal A3002, and outputs image data A5002, which is the generated still image data or video data, as optically received data A1003. In the following description, for ease of explanation, unless otherwise noted, image data A5002 shall be assumed to be video data.
However, it goes without saying that the present disclosure can be implemented in the same manner even if “video data” is rewritten as “still image data” or “a combination of video data and still image data” in the following description. When light receiving device A1002 includes optically received signal processor A5000, light receiver A3001 is an image sensor such as a CMOS sensor. For example, light receiving device A1002 controls operation of light receiver A3001, obtains reception signal A3002 using a sampling method for receiving optical signals in the first period illustrated in FIG. 39, and obtains reception signal A3002 using an imaging method for capturing video in the second period illustrated in FIG. 39. Hereinafter, a signal obtained using the sampling method for receiving optical signals will be referred to as an imaging signal for optical communication, and a signal obtained using the imaging method for capturing video will be referred to as an imaging signal for video. Moreover, the data generated by image data generator A5001 based on the imaging signal for optical communication will be referred to as imaging data for optical communication, and the data generated by image data generator A5001 based on the imaging signal for video will be referred to as imaging data for video. FIG. 39 illustrates one example of a control method of an image sensor in a case in which both the imaging signal for optical communication and the imaging signal for video are obtained by a single image sensor using time-division. Light receiving device A1002 obtains an imaging signal for optical communication using the sampling method for receiving optical signals via light receiver A3001 in the first period in FIG. 39, and obtains an imaging signal for video using the imaging method for capturing video via light receiver A3001 in the second period in FIG. 39. Here, each of the first period and the second period is a period corresponding to one or more frames in a video. However, light receiving device A1002 may switch between the sampling method for receiving optical signals and the imaging method for capturing video out of sync with the video frames. Light receiving device A1002 may arrange the first periods cyclically or non-cyclically. Moreover, rules for arranging the first periods, such as the cycle at which the first periods are arranged, may be changed dynamically. Note that light receiving device A1002 may determine the start and end times of the first periods based on a signal input from an external source. For example, light receiving device A1002 controls operation of light receiver A3001 based on control signal A1007 input from controller A1004. Here, controller A1004 may output a control signal for controlling operation of light receiver A3001 based on a signal received using a communication method such as wireless communication, wired communication, or optical communication from a transmission device external to communication device A1000 or A2000, or based on data obtained from a sensor such as an image sensor included in communication device A1000 or A2000. Control information for controlling operation of light receiver A3001 may be, for example, a signal specifying a rule for arranging the first periods and second periods, or a signal instructing light receiver A3001, which normally obtains imaging signals for video using the imaging method for capturing video, to temporarily or continuously obtain imaging signals for optical communication using the sampling method for receiving optical signals.
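As a rough illustration of arranging first periods (sampling method for receiving optical signals) and second periods (imaging method for capturing video) on one image sensor, the following sketch assumes a simple cyclic rule; the period lengths are arbitrary examples, and, as noted above, the embodiment also permits non-cyclic arrangement and dynamically changed rules.

    def sensor_mode(frame_index, first_len=1, second_len=3):
        # A cyclic rule: the first first_len frames of every cycle use the
        # sampling method for optical signals; the remaining frames capture video.
        cycle = first_len + second_len
        return "optical_sampling" if frame_index % cycle < first_len else "video_capture"

    for f in range(8):
        print(f, sensor_mode(f))   # frames 0 and 4 sample optical signals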
An example of such control will be given later in greater detail. Note that in the above description, the first period and the second period are exemplified as being arranged alternately, but the control method of the image sensor is not limited to this example. For example, a third period may be arranged that operates the CMOS sensor using an imaging method or a sampling method different from the methods employed in the first period and second period, and a transition period for switching operation of the image sensor may be implemented between the first period and the second period. Depending on the control method of the image sensor, it is possible to use a single image sensor to obtain both imaging signals for optical communication and imaging signals for video using time-division. As a result, it is possible to reduce the number of image sensors included in the communication device. Note that light receiving device A1002 may operate light receiver A3001 using the sampling method for receiving optical signals at all times to obtain reception signal A3002. Upon generating video data A5002, image data generator A5001 may implement encoding processing using a video compression method on a video signal configured of frames generated based on reception signal A3002. For example, when reception signal A3002 includes both an imaging signal for optical communication and an imaging signal for video, image data generator A5001 may implement video compression processing on frames generated from the imaging signal for video, excluding images (or frames) generated from the imaging signal for optical communication. Here, light receiving device A1002 outputs, as optically received data A1003, the encoded video data as well as the image data generated from the imaging signal for optical communication. In the above description, the imaging signal for optical communication is exemplified as being output from light receiving device A1002 as image data, but the imaging signal for optical communication may be output from light receiving device A1002 as data in any format, so long as the format allows for demodulation of the optical signals. For example, the data may be data arranged in order of an average or sum of luminance values of pixels included in each exposure line, or an average or sum of luminance values of pixels included in each of the regions into which each pixel line is divided. Note that the video encoding processing that can be implemented by image data generator A5001 when reception signal A3002 includes the imaging signal for optical communication and the imaging signal for video is not limited to the above-described video encoding processing. For example, image data generator A5001 may implement common video compression processing on a video including frames configured of imaging signals for optical communication and frames configured of imaging signals for video, and light receiving device A1002 may output, as optically received data A1003, encoded video data generated from the imaging signals for optical communication and the imaging signals for video. Next, operations performed by controller A1004 in a case in which light receiving device A1002 includes the configuration of optically received signal processor A5000 will be described. When light receiving device A1002 includes the configuration of optically received signal processor A5000, light receiving device A1002 does not perform processing such as demodulation and error correction on the imaging data for optical communication.
Accordingly, controller A1004 implements processing such as demodulation and error correction on an optical signal using the imaging data for optical communication included in optically received data A1003, and obtains the data transmitted via the optical signal. Note that when optically received data A1003 includes imaging data for video in addition to imaging data for optical communication, controller A1004 may perform, in addition to processing such as demodulation and error correction on the optical signal included in the imaging data for optical communication, image processing such as pattern recognition on the imaging data for video, and may further control light receiving device A1002 and/or wireless communication device A1006 based on the result of the image processing such as pattern recognition. Examples of signal processing using imaging data for video include processing of detecting a body part of a person such as the face, processing of distinguishing between people, processing of detecting a target such as a vehicle or drone, processing of distinguishing between targets such as vehicles and drones, processing of detecting movement or displacement of a detected person or target, and processing of tracking a detected person or target. These processes may be performed by extracting, from the imaging data for video, feature amounts determined depending on the intended use of the signal processing and using the extracted feature amounts, or may be performed using a model generated by machine learning using a multilayer neural network. Note that when a model generated by machine learning using a multilayer neural network is used, the imaging data for video may first be preprocessed, and then the preprocessed data may be input into the model generated by machine learning using a multilayer neural network. Note that in the above description, imaging data for video is used in the signal processing performed by controller A1004, but sound data and/or other data obtained from, for example, a sensor may be used in addition to the imaging data for video, or sound data and/or other data obtained from, for example, a sensor may be used instead of the imaging data for video. Moreover, when light receiving device A1002 includes the configuration of optically received signal processor A5000 and light receiving device A1002 outputs encoded video data as optically received data A1003, controller A1004 may perform, as the above-described signal processing or part of the signal processing, video decoding processing corresponding to the video encoding processing on the encoded video data included in optically received data A1003. Next, an example of the configuration of optically received signal processor A3003 will be given. FIG. 40 illustrates the configuration of optically received signal processor A7000, which is a third example of a configuration of optically received signal processor A3003. Optically received signal processor A7000 includes reception processor A7001 and image data generator A7003. Reception processor A7001 included in optically received signal processor A7000 has the same functions as reception processor A4001 included in optically received signal processor A4000 described with reference to FIG. 37. Image data generator A7003 included in optically received signal processor A7000 has the same functions as image data generator A5001 included in optically received signal processor A5000 described with reference to FIG. 38.
When light receiving device A1002 includes optically received signal processor A7000, light receiving device A1002 controls light receiver A3001 and obtains an imaging signal for video and an imaging signal for optical communication as reception signal A3002. Optically received signal processor A7000 inputs the imaging signal for video into image data generator A7003, and inputs the imaging signal for optical communication into reception processor A7001. However, it goes without saying that optically received signal processor A7000 may input the imaging signal for optical communication into image data generator A7003. Optically received signal processor A7000 outputs, as optically received data A1003, demodulated data A7002 and video data A7004. Here, appended information, such as time information indicating the time of reception of the modulated signal corresponding to the demodulated data, or metadata, may be appended to demodulated data A7002. Here, the time information appended to demodulated data A7002 may be in a format that allows the relationship between this information and the time information appended to video data A7004 to be distinguished. For example, optically received signal processor A7000 may append the time information for demodulated data A7002 and the time information for video data A7004 based on a common clock signal or time line, and information indicating the relationship between the time information for demodulated data A7002 and the time information for video data A7004, such as information indicating the offset between the time information for demodulated data A7002 and the time information for video data A7004, may be included in the time information for demodulated data A7002 and the time information for video data A7004. Moreover, demodulated data A7002 may include, as appended information or metadata, position information indicating a position, in an image, of the transmission device or light source that transmitted the modulated signal corresponding to the demodulated data. The appended information of demodulated data A7002 may include both time information and position information, or may include only one of the two. Moreover, other than time information and position information, the appended information of demodulated data A7002 may include other information related to the demodulated data. Note that the position information is exemplified as information indicating a position, in an image, of the transmission device or light source, but the position information may be some other type of information. For example, the position information may be information indicating the region in the image used for optical signal detection, or information indicating a position in a three-dimensional space. Position information on a position in a three-dimensional space may be, for example, information indicating a direction in which light receiving device A1002 is capturing an image and a position in the image of the imaging data for video, or may be information indicating a value and region of coordinates in a coordinate system whose origin is the light receiving device or the communication device, estimated based on the above data. Moreover, the information may be information indicating a value and region of coordinates in any given coordinate system used for, for example, GPS or three-dimensional mapping, estimated using position information on the communication device or light receiving device.
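One way to attach the appended information discussed above to demodulated data A7002, so that its time line can be related to that of video data A7004, is sketched below; the field names, the use of a shared monotonic clock, and the example light source position are all assumptions for illustration.

    import time

    def make_outputs(demodulated_payload, video_frame, clock=time.monotonic):
        t_demod = clock()                         # both stamps come from a common clock
        t_video = clock()
        demodulated = {
            "payload": demodulated_payload,
            "time": t_demod,
            "offset_to_video": t_video - t_demod,  # relates the two time lines
            "light_source_pos": (120, 64),         # position in the image (example)
        }
        video = {"frame": video_frame, "time": t_video}
        return demodulated, video

    d, v = make_outputs(b"\x2a", "frame-0")
    print(d["offset_to_video"], v["time"] - d["time"])   # the same offset, recoverable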
Moreover, when light receiving device A1002 obtains, in addition to imaging data for video, range image data indicating a depth to the captured target, the position in the three-dimensional space may be estimated using the range image data in addition to the imaging data for video. A range image can be obtained by, for example, using a time-of-flight (TOF) method, a range-finding method that uses stereo disparity, or a laser imaging detection and ranging (LIDAR) method. Demodulated data A7002 and video data A7004 may be transmitted to controller A1004 in communication device A1000 or controller A1004 in communication device A2000 as a plurality of divided data streams or data packet sequences, or may be multiplexed onto a data stream in a format that allows for storing of both demodulated data A7002 and video data A7004 and transmitted to controller A1004 in communication device A1000 or controller A1004 in communication device A2000 in a single data stream or data packet sequence. FIG. 41 illustrates the configuration of light receiving device A8000, which is a second example of a configuration of light receiving device A1002. Light receiving device A8000 includes first light receiver A8001-1, second light receiver A8001-2, first optically received signal processor A8003-1, and second optically received signal processor A8003-2. First light receiver A8001-1 is an image sensor such as a CCD, CMOS, or organic CMOS image sensor, and second light receiver A8001-2 is an image sensor such as a CCD, CMOS, or organic CMOS image sensor, or a device capable of converting optical signals into electrical signals, such as a photodiode. Light receiving device A8000 operates first light receiver A8001-1 using an imaging method for capturing video, and obtains an imaging signal for video as reception signal A8002-1. When second light receiver A8001-2 is an image sensor, light receiving device A8000 operates second light receiver A8001-2 using a sampling method for receiving optical signals, and obtains an imaging signal for optical communication as reception signal A8002-2. However, when second light receiver A8001-2 is a device capable of converting optical signals into electrical signals, such as a photodiode, light receiving device A8000 obtains reception signal A8002-2 sampled at a sampling rate required for reception of optical signals using second light receiver A8001-2. First optically received signal processor A8003-1 has the same functions as, for example, optically received signal processor A5000 illustrated in FIG. 38, and outputs image data A8004-1, which is imaging data for video, as optically received data A1003. Second optically received signal processor A8003-2 has the same functions as, for example, optically received signal processor A4000 illustrated in FIG. 37, and outputs demodulated data A8004-2 as optically received data A1003. Note that second optically received signal processor A8003-2 may instead have the same functions as, for example, optically received signal processor A5000 illustrated in FIG. 38, and output image data A8004-2, which is imaging data for optical communication, as optically received data A1003. With this configuration, since light receiving device A8000 can simultaneously obtain image data A8004-1, which is imaging data for video, and image data A8004-2, which is demodulated data or imaging data for optical communication, light receiving device A8000 can both perform optical communication and capture video, without producing a period in which imaging data for video cannot be obtained.
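A schematic of light receiving device A8000's two parallel paths, under the assumption that the first receiver always captures video while the second always samples optical signals, might look like this; the function name and the dictionary fields are illustrative.

    def a8000_step(scene):
        # First system: imaging method for capturing video (reception signal A8002-1).
        image_data = {"kind": "video_frame", "pixels": scene["pixels"]}
        # Second system: sampling method for optical signals (reception signal A8002-2),
        # e.g. a photodiode sampled at the rate needed to demodulate the signal.
        demodulated = {"kind": "demodulated", "bits": scene["optical_bits"]}
        # Both outputs are available in the same step, so there is no period in
        # which imaging data for video cannot be obtained.
        return image_data, demodulated

    print(a8000_step({"pixels": [[0, 1], [1, 0]], "optical_bits": [1, 0, 1, 1]}))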
Note that although light receiving device A8000 is exemplified as including two systems of a combination of a light receiver and an optically received signal processor, light receiving device A8000 may include N (N is an integer greater than or equal to 3) systems of a combination of a light receiver and an optically received signal processor. Moreover, first light receiver A8001-1 and second light receiver A8001-2 need not be separate components. For example, a portion of the pixels of the image sensor may be used for capturing a video by operating them using the imaging method for capturing video as first light receiver A8001-1, and a different portion of the pixels of the same image sensor may be used for optical communication by operating them using the sampling method for receiving optical signals as second light receiver A8001-2. Similarly, when light receiving device A8000 includes N or more systems of the light receiver and the optically received signal processor, pixels included in a first region of the image sensor may be used for capturing a video by operating them using the imaging method for capturing video, and pixels included in the second through N-th regions of the image sensor may be used for optical communication by operating them using the sampling method for receiving optical signals. Note that when it is not necessary to perform video capturing and optical communication concurrently, without operating any of the pixels of the image sensor using the imaging method for capturing video, the pixels of the image sensor may be divided into a plurality of regions, and the pixels in the respective regions may be operated using the sampling method for receiving optical signals to perform a plurality of instances of optical communication in parallel. Note that when video capturing or optical communication is performed using an image sensor, there is no need to always operate all of the pixels; there may be pixels that are temporarily or continuously not operated, that is to say, elements that do not read out the electric charges accumulated as a result of receiving light. Next, one example of control of the image sensor in a case in which a plurality of optical signals are concurrently received using the image sensor will be given with reference to FIG. 42. In FIG. 42, (A) illustrates a state in which four light sources A through D that transmit mutually different optical signals are present in a capture region, which is a region that is capturable when the imaging method for capturing video is used. Each of the square regions in the capture region illustrated in (A) in FIG. 42 corresponds to a pixel. Here, for example, light receiving device A8000 discerns regions A through D including the light sources A through D, as illustrated in (B) in FIG. 42, and, for each of the regions A through D, operates the pixels in that region using the sampling method for receiving optical signals to obtain the optical signals. As one example of a configuration for performing sampling for reception of optical signals for each region, a sampling method in an image sensor having a shutter function for each pixel will be given.

(Example of Line Scan Sampling for Each Region)

An example will be given in which line scan sampling is performed when, as illustrated in (C) in FIG. 42, in region A, a single line is configured of four pixels aligned in the vertical direction (column direction). In this example, region A includes 5 lines.
The light receiving device exposes the lines by shifting the exposure period on a line-by-line basis for the five lines in region A to obtain changes in luminance or color of the modulated optical signals. However, note that the size of each of the regions, that is to say, the number of pixels included in the rows and the number of pixels included in the columns in each of the regions, is not limited to the example illustrated in FIG. 42; the number of pixels is not limited. Moreover, the size of the regions in which sampling for optical communication is performed may be changed in accordance with the size, position, mutual positional relationship, etc., in the screen, of each of the light sources. In the example illustrated in (C) in FIG. 42, although a single line is exemplified as including four pixels aligned in the column direction, a single line may be, for example, five pixels aligned in the row direction, in which case there would be four row-direction lines in (C) in FIG. 42. After the light receiving device reads out the signal from Line 1 in region A of (C) in FIG. 42, which is the left-most line in region A, the light receiving device reads out the signals corresponding to the remaining lines one by one, from left to right. When the light receiving device is finished reading out the signal from Line 5, which is the right-most line in region A, the light receiving device returns to Line 1, which is the left-most line, and repeats the process of reading out the signals line by line. In each of regions B through D in (B) in FIG. 42 as well, the light receiving device performs line scan sampling by obtaining signals using the same process as in region A. Here, the light receiving device may expose the left-most line in every region at the same time or at different times. Moreover, the light receiving device may expose lines in the same column in regions A and C on the image sensor for the same exposure period, and expose lines in the same column in regions B and D on the image sensor for the same exposure period. However, regions A through D include lines that are exposed for the same exposure period. Here, an example was given in which a plurality of pixels aligned in the vertical direction (column direction) are exposed for the same period as a single line and signals are read out line by line, but line scan sampling in which a plurality of pixels aligned in the horizontal direction (row direction) are treated as a single line may be performed. In the above description, at least one pixel in the image sensor is used for both video capturing and optical communication, and switching is performed between whether to obtain a signal corresponding to that pixel or those pixels using the imaging method for capturing video or the sampling method for optical communication, but the configuration of the light receiving device including the image sensor is not limited to this example. For example, the image sensor may include pixels used for optical communication aside from the pixels that are used for video capturing. When the image sensor includes pixels used for optical communication aside from the pixels that are used for video capturing, the shape and/or size of the pixels used for optical communication may be different from the shape and/or size of the pixels used for video capturing.
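Returning to the per-region line scan described above, a rough model of the scan of region A in (C) in FIG. 42, assuming 5 lines of 4 vertically aligned pixels and a simple mean-luminance readout, is given below; the readout function is an assumption for illustration, not the embodiment's circuit.

    def line_scan(region, n_cycles=2):
        # region[row][col]: luminance values; each column is one line of the scan.
        n_rows, n_lines = len(region), len(region[0])
        samples = []
        for _ in range(n_cycles):
            for line in range(n_lines):     # Line 1..Line 5, left to right, then repeat
                column = [region[row][line] for row in range(n_rows)]
                samples.append(sum(column) / n_rows)  # one sample per exposure period
        return samples

    region_a = [[10, 20, 30, 40, 50],       # 4 rows x 5 columns of pixel luminances
                [11, 21, 31, 41, 51],
                [12, 22, 32, 42, 52],
                [13, 23, 33, 43, 53]]
    print(line_scan(region_a))   # luminance changes over time encode the optical signal

Running the same loop over regions B through D in parallel is what allows the mutually different modulated signals to be received simultaneously.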
Moreover, the capturing of video using the pixels for video capturing and the sampling for optical communication using the pixels for optical communication may be controlled independently, and in circumstances in which one of the processes is unnecessary, that process may be stopped, and the supply of power to the circuit for obtaining the signal required in that process may be stopped partially or entirely so as to reduce power consumption. By performing line scan sampling as described above, as illustrated in (A) in FIG. 42, since it is possible to receive the mutually different modulated signals from the plurality of light sources in parallel, it is possible to achieve the advantageous effect whereby data transmission speeds are increased. Next, one example of the configuration of controller A1004 included in communication device A1000 or communication device A2000 will be given. FIG. 43 illustrates controller A10000, which is one example of a physical configuration of controller A1004. Controller A10000 includes central processing unit (CPU) A10001 and memory A10002. Memory A10002 stores, for example, a program implemented by controller A1004 and data required for processing performed by the controller. CPU A10001 performs processing based on a program read from memory A10002 and achieves the functions of controller A1004. Moreover, for example, memory A10002 stores data such as image data obtained by the reception device, and the stored data is read out. Note that here, the elements that configure controller A10000 are exemplified as a CPU and memory, but controller A10000 may include other elements. For example, controller A10000 may include a graphics processing unit (GPU) in addition to and separate from the CPU, and may include a circuit for performing video encoding processing, video decoding processing, and image processing such as pattern recognition on the imaging data for video. Moreover, controller A10000 may include, for example, an input/output (I/O) for controlling the transfer of data between devices connected to controller A10000, such as wireless communication device A1006. FIG. 44 illustrates the configuration of controller A11000, which is a first example of the configuration of controller A1004. Controller A11000 includes signal processor A11002, wireless communication controller A11004, and light receiving device controller A11006. Signal processor A11002 obtains, as optically received data A1003 from light receiving device A1002, image data including imaging data for optical communication, or demodulated data on which demodulation and error correction have been performed, as an optical signal. When optically received data A1003 is image data including imaging data for optical communication, signal processor A11002 obtains a reception signal corresponding to the modulated signal from the imaging data for optical communication, and performs demodulation processing and error correction processing on the reception signal to obtain demodulated data. Wireless communication controller A11004 outputs control signal A1005 for controlling operation of wireless communication device A1006 to wireless communication device A1006.
Wireless communication controller A11004 transfers the wireless communication reception data received via wireless communication device A1006 to signal processor A11002, and transfers the wireless communication transmission data to be transmitted to other communication devices via wireless communication device A1006 from signal processor A11002 to wireless communication device A1006. Signal processor A11002 performs signal processing using arbitrary data, such as the demodulated data for optical communication, the video imaging data, and/or the wireless communication reception data obtained via light receiving device A1002 and wireless communication device A1006. For example, signal processor A11002 instructs control of wireless communication device A1006 by wireless communication controller A11004 and instructs control of light receiving device A1002 by light receiving device controller A11006, based on the result of the above-described signal processing (A11005). Light receiving device controller A11006 controls light receiving device A1002 based on the instruction from signal processor A11002. Examples of the control of light receiving device A1002 include controlling whether to obtain a signal using the imaging method for capturing video or the sampling method for receiving optical signals via light receivers A3001, A8001-1, and A8001-2, and setting the region of pixels in which to use the sampling method for receiving optical signals in cases in which a signal is obtained using the sampling method for receiving optical signals with a portion of the pixels included in the image sensor. However, the control of light receiving device A1002 is not limited to these examples. For example, the control of light receiving device A1002 may include switching the power of light receiving device A1002 ON and OFF, and switching the signal processing performed on optically received signals in light receiving device A1002. Moreover, some of the control described here may be performed automatically based on the result of the signal processing performed on the optically received signals in light receiving device A1002. FIG. 45 illustrates the configuration of controller A12000, which is a second example of the configuration of controller A1004. Controller A12000 differs from controller A11000 in regard to the inclusion of device controller A12002. Device controller A12002 receives inputs of the video imaging data obtained by signal processor A11002 and/or the processing result of signal processor A11002 (A12001), generates an image to be displayed on presentation unit A2003, and outputs the generated image signal to presentation unit A2003 as presentation information A2002. Device controller A12002 obtains input information A2005 obtained by input unit A2004 in accordance with the user operation of input unit A2004, and transfers input information A2005 to signal processor A11002. With this configuration, signal processor A11002 can perform signal processing based on input information A2005 obtained in accordance with a user operation, in addition to the demodulated data for optical communication, the video imaging data, and the wireless communication reception data obtained via light receiving device A1002 and wireless communication device A1006.
For example, signal processor A11002 instructs control of wireless communication device A1006 by wireless communication controller A11004 and instructs control of light receiving device A1002 by light receiving device controller A11006, based on the result of the above-described signal processing (A11005), and instructs the changing of the image displayed on presentation unit A2003. Hereinafter, as one example of processes performed by controller A1004, a communication control method of controlling wireless communication device A1006 based on demodulated data obtained by receiving an optical signal and the result of image processing such as pattern recognition implemented on the imaging data for video will be described. Signal processor A11002 obtains imaging data for video as optically received data A1003 from light receiving device A1002, and implements image processing such as pattern recognition on the imaging data for video. Wireless communication controller A11004 controls wireless communication device A1006 based on the result of the image processing in signal processor A11002. With the communication control method described in this embodiment, demodulated data obtained by receiving an optical signal is associated with appended information, such as position information indicating the position, in the image, of the transmitter that transmitted the optical signal or of the light source used in the transmission of the optical signal, and the demodulated data appended with the appended information is used. In this embodiment, the information transmitted using optical communication may be any kind of information, and is not limited to a specific kind of information, but in the following description related to the communication control method, as one example, the information transmitted in the optical signal is exemplified as connection information including information required for connection or communication with another wireless communication device, such as the base station SSID described in Embodiments 3 through 7, for example. Signal processor A11002 performs processing using the demodulated data appended with the appended information obtained in light receiving device A1002 or signal processor A11002. Here, the demodulated data is connection information corresponding to another wireless communication device. When there are a plurality of items of the obtained connection information, signal processor A11002 controls communication processing implemented by wireless communication device A1006 using the appended information corresponding to each of the items of connection information and the result of image processing such as pattern recognition. Next, a first example of communication control based on the image processing result will be given. In the first example of communication control based on the image processing result, communication device A1000, A2000 is implemented as a vehicle or a device provided in a vehicle, and a camera provided in the vehicle is used as light receiving device A1002. FIG. 46 schematically illustrates one example of an image captured by a camera that captures a view in front of the vehicle. In FIG. 46, three vehicles A13001, A13002, and A13003 driving in front of the vehicle corresponding to communication device A1000, A2000 are captured.
Note that in the example given in this embodiment, a camera which captures a view in front of the vehicle is used, but it goes without saying that this embodiment can be implemented in the same manner even when the camera captures a view behind the vehicle or a view to a side of the vehicle. Here, vehicles A13001, A13002, and A13003 each include a light source such as an LED, and transmission unit 102 that transmits an optical signal using the light source. Examples of light sources that can be used for optical communication include any given light source that is included in the vehicle, such as a headlight or tail light, and which light source among the light sources included in the vehicle is to be used for transmitting optical signals may be selected arbitrarily depending on how the optical communication will be used. Moreover, when a plurality of light sources included in the vehicle are used for transmitting optical signals, the vehicle may include a transmission unit for optical communication use for each of the plurality of light sources, and, alternatively, may include a single transmission unit to transmit the optical signals using the plurality of light sources. Note that the vehicle may include a light source for optical communication use apart from the headlight and/or tail light. Vehicles A13001, A13002, and A13003 include, in addition to the transmission unit and light source for optical communication, a communication device for wireless communication that corresponds to other communication device A1100 described with reference to FIG. 34 and/or FIG. 35. Note that when the host vehicle and vehicles A13001, A13002, and A13003 include functions for the transmission and reception of optical signals and wireless communication, communication device A1000, A2000 included in each of the vehicles has a configuration including transmission unit 102 and light source 104 for optical communication. In such cases, controller A1004 may control the data transmitted by transmission unit 102. In the first example of communication control based on the image processing result, vehicles A13001, A13002, and A13003 transmit connection information, which is information that can be used to connect with the communication device included in another vehicle, via optical communication. Hereinafter, the connection information will be exemplified as including information indicating the SSID and the frequency channel used in the communication, in cases in which the communication device included in each of the vehicles operates as a base station. Note that in the example in the above description, an SSID is notified as the identifier included in the connection information for determining the communication partner, but the identifier information included in the connection information is not limited to an SSID. For example, the identifier may be a physical address such as the media access control (MAC) address of the other communication device, or may be a logical address such as the internet protocol (IP) address of the other communication device.
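A connection information record of the kind described above might be represented as follows; the field names are illustrative assumptions, and the identifier field is deliberately generic so that it can hold an SSID, a MAC or IP address, or a URL.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class ConnectionInfo:
        identifier: str                        # e.g. SSID "XXX", a MAC/IP address, or a URL
        channel: Optional[int] = None          # frequency channel, if notified
        encryption_key: Optional[str] = None   # optional, as are standard/protocol fields

    print(ConnectionInfo(identifier="XXX", channel=1))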
Note that when the identifier information is used to select a resource to be accessed via a network such as the internet, rather than being used by the communication device to select another communication device with which to perform direct communication, the identifier information may be the address of a server that performs communication via a network such as the internet, or the uniform resource locator (URL), uniform resource name (URN), or uniform resource identifier (URI) used to identify a resource on the internet. So long as the identifier information included in the connection information is information that can identify another communication terminal acting as the access destination or a resource on the internet, any information may be used. Note that in the above description, the connection information is exemplified as notifying information on the frequency channel used, but the connection information need not include information on the frequency channel used, and may include other information. Examples of other information that can be used as connection information include information related to an encryption key, types of compatible physical layer transmission standards, compatible data formats and/or communication protocols, etc. FIG. 47 schematically illustrates connection information obtained, in light receiving device A1002 or controller A1004 of communication device A1000, A2000, by demodulating the optical signals transmitted using the light sources by the transmission units included in vehicles A13001, A13002, and A13003. Communication device A1000, A2000 obtains connection information from the optical signal transmitted by vehicle A13001 indicating that the SSID is “XXX” and the frequency channel used is 1, obtains connection information from the optical signal transmitted by vehicle A13002 indicating that the SSID is “YYY” and the frequency channel used is 3, and obtains connection information from the optical signal transmitted by vehicle A13003 indicating that the SSID is “ZZZ” and the frequency channel used is 3. These items of connection information may be substituted with information that can be obtained by wireless communication device A1006 in communication device A1000, A2000 performing carrier sense over a given period and receiving a signal transmitted from each of a plurality of communication devices. However, it is difficult for communication device A1000, A2000 to determine which of the plurality of other communication devices in the surrounding area transmitted the signal, and there is a possibility that communication device A1000, A2000 will connect and communicate with a communication device that is not the communication device that communication device A1000, A2000 actually wants to communicate with. Thus, in the first example of communication control based on an image processing result, controller A1004 in communication device A1000, A2000 implements image processing on the imaging data for video captured by light receiving device A1002, and detects vehicles A13001, A13002, and A13003 from, for example, the image illustrated in FIG. 46. Here, based on the positions of the light sources of the three optically received signals, controller A1004 associates the three vehicles A13001, A13002, and A13003 detected from the image with the three items of connection information received via optical communication.
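The association step just described, matching each item of connection information to a detected vehicle through the in-image position of its light source, could be sketched as below; the bounding-box centers, the coordinate values, and the nearest-neighbour matching rule are assumptions for illustration.

    def associate(detected_vehicles, optical_infos):
        # Each optical info carries the in-image position of the light source that
        # transmitted it; match it to the nearest detected vehicle's center.
        pairs = {}
        for info in optical_infos:
            vx, vy = info["light_pos"]
            nearest = min(detected_vehicles,
                          key=lambda v: (v["cx"] - vx) ** 2 + (v["cy"] - vy) ** 2)
            pairs[nearest["id"]] = info["ssid"]
        return pairs

    vehicles = [{"id": "A13001", "cx": 100, "cy": 80},
                {"id": "A13002", "cx": 220, "cy": 90},
                {"id": "A13003", "cx": 160, "cy": 140}]
    infos = [{"ssid": "XXX", "light_pos": (102, 82)},
             {"ssid": "YYY", "light_pos": (218, 92)},
             {"ssid": "ZZZ", "light_pos": (158, 138)}]
    print(associate(vehicles, infos))   # {'A13001': 'XXX', 'A13002': 'YYY', 'A13003': 'ZZZ'}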
Such association makes it possible to identify the connection information to use when wireless communication is performed with the three vehicles detected from the image. Next, controller A1004 determines the reciprocal positional relationship between vehicles A13001, A13002, and A13003 from the image and the positional relationships between each of these vehicles and the host vehicle, and then selects a target to perform wireless communication with. Controller A1004 may select the vehicle closest to the host vehicle, which is vehicle A13003, as the communication target. Controller A1004 may determine which lanes each of the vehicles is driving in and select, as the communication partner, a vehicle that is driving in the same lane as the host vehicle and is positioned frontmost in the image, which is vehicle A13001. With this configuration, it is possible to perform association with an object detected using signal processing such as pattern recognition based on (i) information that is difficult to associate with a device in a real space with wireless communication alone, like an identifier used in wireless communication such as an SSID or address, and (ii) sensing data obtained from a sensor, such as the image obtained by the image sensor. As a result, for example, when information such as the surrounding environment and the movement of surrounding vehicles is obtained for the purpose of controlling automated driving including assisted driving, this makes it easier to connect to a communication partner that is appropriate for obtaining such information. Next, a second example of communication control based on the image processing result will be given. In the second example of communication control based on the image processing result, the configuration of communication device A1000, A2000 or the configuration of the host vehicle provided with communication device A1000, A2000, and the configurations of other vehicles A13001 and A13002, are the same as described in the first example of communication control based on the image processing result. The second example of communication control based on the image processing result differs from the first example of communication control based on the image processing result in that vehicle A13003 is replaced by vehicle A15003, which is not equipped with a function of transmitting optical signals. FIG. 48 schematically illustrates one example of an image captured by a camera that captures a view in front of the vehicle, according to the second example of communication control based on the image processing result. In FIG. 48, three vehicles A13001, A13002, and A15003 driving in front of the vehicle corresponding to communication device A1000, A2000 are captured. FIG. 49 schematically illustrates connection information obtained, in light receiving device A1002 or controller A1004 of communication device A1000, A2000, by demodulating the optical signals transmitted using the light sources by the transmission units included in vehicles A13001 and A13002. Communication device A1000, A2000 obtains connection information from the optical signal transmitted by vehicle A13001 indicating that the SSID is “XXX” and the frequency channel used is 1, and obtains connection information from the optical signal transmitted by vehicle A13002 indicating that the SSID is “YYY” and the frequency channel used is 3. Here, since vehicle A15003 is not equipped with a function of transmitting optical signals, communication device A1000, A2000 does not obtain connection information relating to vehicle A15003.
In the second example of communication control based on an image processing result, controller A1004 in communication device A1000, A2000 implements image processing on imaging data for video captured by light receiving device A1002, and detects vehicles A13001, A13002, and A15003 from, for example, the image illustrated in FIG. 48. Here, based on the positions of the light sources of the two received optical signals, controller A1004 associates, from among vehicles A13001, A13002, and A15003, the two vehicles A13001 and A13002 detected from the image with the two items of connection information received via optical communication. With this, it is possible to identify the connection information to be used when performing wireless communication with vehicles A13001 and A13002 detected from the image, as well as to identify that the base stations or communication devices whose SSIDs are XXX and YYY are not to be used to communicate with vehicle A15003. First, an example in which vehicle A15003 does not have a function of transmitting optical signals but has a function of performing wireless communication using the SSID "PPP" will be given. In such cases, wireless communication device A1006 detects, via carrier sense, the three SSIDs XXX, YYY, and PPP as the SSIDs of other communication devices provided in vehicles within the range in which communication is possible, and controller A1004 determines that PPP, which differs from the SSIDs XXX and YYY included in the connection information received as optical signals, is the SSID to be used for communication with vehicle A15003, and thus associates the SSID "PPP" with vehicle A15003. Controller A1004 determines the reciprocal positional relationship between vehicles A13001, A13002, and A15003 from the image and the positional relationship between each of these vehicles and the host vehicle, and then selects a target to perform wireless communication with. For example, controller A1004 may select the vehicle closest to the host vehicle, which is vehicle A15003, as the communication target. Controller A1004 may instead determine which lane each of the vehicles is driving in and select, as the communication partner, the vehicle that is driving in the same lane as the host vehicle and is positioned frontmost in the image, which is vehicle A13001. With this configuration, it is possible to associate (i) information that is difficult to associate with a device in a real space through wireless communication alone, such as an identifier used in wireless communication like an SSID or address, with (ii) an object detected using signal processing such as pattern recognition on sensing data obtained from a sensor, such as the image obtained by the image sensor. As a result, for example, when information such as the surrounding environment and the movement of surrounding vehicles is obtained for the purpose of controlling automated driving, including assisted driving, this makes it easier to connect to a communication partner that is appropriate for obtaining such information. Next, an example in which vehicle A15003 has neither a function of transmitting optical signals nor a function of performing wireless communication will be given. Here, wireless communication device A1006 detects, via carrier sense, the two SSIDs XXX and YYY as the SSIDs of other communication devices provided in vehicles within the range in which communication is possible.
Since controller A1004 does not detect an SSID other than XXX and YYY, which are the SSIDs included in the connection information received as optical signals, as the SSID of another communication device provided in a vehicle, controller A1004 determines that vehicle A15003 does not have a function of performing wireless communication or is not currently able to participate in wireless communication. Controller A1004 determines the reciprocal positional relationship between vehicles A13001, A13002, and A15003 from the image and the positional relationship between each of these vehicles and the host vehicle, and then selects either vehicle A13001 or vehicle A13002 as a target to perform wireless communication with. For example, controller A1004 may select the vehicle that is both closest to the host vehicle and capable of communication, which is vehicle A13002, as the communication target. Controller A1004 may instead determine which lane each of the vehicles is driving in and select, as the communication partner, the vehicle that is driving in the same lane as the host vehicle and is positioned frontmost in the image, which is vehicle A13001. With this configuration, it is possible to associate (i) information that is difficult to associate with a device in a real space through wireless communication alone, such as an identifier used in wireless communication like an SSID or address, with (ii) an object detected using signal processing such as pattern recognition on sensing data obtained from a sensor, such as the image obtained by the image sensor. As a result, for example, it is possible to determine that information cannot be obtained through communication with vehicle A15003 driving directly in front of the host vehicle, and, for example, when control of automated driving including assisted driving is performed, it is possible to prevent vehicle A13001 or A13002, with which the host vehicle is capable of communicating, from being mistaken for vehicle A15003, which facilitates the provision of appropriate automated driving control. Next, a third example of communication control based on the image processing result will be given. In the third example of communication control based on the image processing result, the configuration of communication device A1000, A2000, the configuration of the host vehicle provided with communication device A1000, A2000, and the configurations of other vehicles A13002 and A13003 are the same as described in the first example of communication control based on the image processing result. The third example of communication control based on the image processing result differs from the first example of communication control based on the image processing result in that vehicle A13001 is replaced by police vehicle A17001. Police vehicle A17001 differs from vehicle A13001 in that it is a police vehicle, but has the same configuration as vehicle A13001, and is equipped with functions of transmitting optical signals and performing wireless communication. FIG. 50 schematically illustrates one example of an image captured by a camera that captures a view in front of the vehicle, according to the third example of communication control based on the image processing result. In FIG. 50, vehicles A13002 and A13003 and police vehicle A17001 driving in front of the vehicle corresponding to communication device A1000, A2000 are captured.
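Returning briefly to the second example, the carrier-sense reasoning described there might be sketched as follows; the set-based formulation and function name are illustrative assumptions.

```python
# Hedged sketch of the second example: SSIDs heard via carrier sense that do
# not appear in any optically received connection information are candidates
# for a vehicle, such as A15003, that lacks an optical transmission function.
def classify_ssids(carrier_sense_ssids: set[str],
                   optical_ssids: set[str]) -> tuple[set[str], set[str]]:
    matched = carrier_sense_ssids & optical_ssids   # associated via light sources
    leftover = carrier_sense_ssids - optical_ssids  # no optical counterpart
    return matched, leftover

# Case 1: vehicle A15003 performs wireless communication with SSID "PPP".
_, left = classify_ssids({"XXX", "YYY", "PPP"}, {"XXX", "YYY"})
assert left == {"PPP"}  # associate "PPP" with the vehicle lacking optical TX

# Case 2: vehicle A15003 has no wireless communication function either.
_, left = classify_ssids({"XXX", "YYY"}, {"XXX", "YYY"})
assert left == set()    # A15003 judged unable to perform wireless communication
```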
FIG. 51 schematically illustrates connection information obtained, in light receiving device A1002 or controller A1004 of communication device A1000, A2000, by demodulating optical signals transmitted using light sources by the transmission units included in vehicles A17001, A13002, and A13003. Communication device A1000, A2000 obtains connection information from the optical signal transmitted by police vehicle A17001 indicating that the SSID is "QQQ" and the frequency channel used is 1, obtains connection information from the optical signal transmitted by vehicle A13002 indicating that the SSID is "YYY" and the frequency channel used is 3, and obtains connection information from the optical signal transmitted by vehicle A13003 indicating that the SSID is "ZZZ" and the frequency channel used is 3. In the third example of communication control based on an image processing result, controller A1004 in communication device A1000, A2000 implements image processing on imaging data for video captured by light receiving device A1002, and detects police vehicle A17001 and vehicles A13002 and A13003 from, for example, the image illustrated in FIG. 50. Here, based on the positions of the light sources of the three received optical signals, controller A1004 associates police vehicle A17001 and vehicles A13002 and A13003 detected from the image with the three items of connection information received via optical communication. This makes it possible to identify the connection information to use when wireless communication is performed with each of police vehicle A17001 and vehicles A13002 and A13003 detected from the image. Regarding the three vehicles recognized via the image processing, controller A1004 performs detailed classification, including determining whether a vehicle is a police vehicle or not, using information on, for example, the appearance of the vehicle, and recognizes that vehicle A17001 is a police vehicle. Controller A1004 selects, as a target to perform wireless communication with, police vehicle A17001, which is the vehicle from which obtaining information takes priority from among police vehicle A17001 and vehicles A13002 and A13003. With this configuration, upon recognizing a target object through signal processing such as pattern recognition from sensing data obtained via a sensor, such as an image obtained from an image sensor, further detailed classification of the recognized target object is performed, and communication control can be performed based on this classification. Note that the above-described example of control processing, in which the police vehicle is selected as the communication partner from which obtaining information takes priority, is merely one non-limiting example; other control may be performed when a police vehicle is recognized. For example, police vehicle A17001 may include, in the transmitted optical signal, an identifier identifying itself as a police vehicle, and controller A1004 may, rather than directly wirelessly connecting to the police vehicle, specify the identifier received in the optical signal from police vehicle A17001 to vehicle A13002 or A13003 and obtain information on police vehicle A17001 via that vehicle.
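The prioritized selection in this third example might be sketched as follows; the priority table and the class labels are illustrative assumptions.

```python
# Hedged sketch: after detailed classification of the recognized vehicles, a
# class-based priority decides the wireless communication target.
PRIORITY = {"police_vehicle": 0, "vehicle": 1}  # lower value = higher priority

def select_priority_target(classified: list[tuple[str, str]]) -> str:
    # classified: (vehicle_class, ssid) pairs combining the image-based
    # classification with the optically received connection information.
    return min(classified, key=lambda c: PRIORITY.get(c[0], 99))[1]

scene = [("police_vehicle", "QQQ"), ("vehicle", "YYY"), ("vehicle", "ZZZ")]
assert select_priority_target(scene) == "QQQ"  # police vehicle A17001 first
```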
Moreover, when a police vehicle is detected through image processing, rather than always performing the same communication control, communication control may be performed that prioritizes the collection of information relating to the police vehicle when, for example, the emergency lights on the recognized police vehicle are recognized to be flashing, or when communication device A1000, A2000 includes a microphone in addition to the image sensor and controller A1004 detects the sound of a siren by implementing pattern recognition signal processing on the sound data obtained via the microphone. Note that when sound generated by another device is detected using the sound data obtained by the microphone, that device may transmit, at the same time as the sound, a modulated signal generated based on transmission data such as an identifier of the device. With this configuration, it is possible to associate a device that generates sound recognized through signal processing such as pattern recognition with transmission data such as an identifier transmitted as the sound signal. As a result, it may be possible to easily identify the device that generated the detected sound in an environment including a plurality of devices whose identifiers are known. Note that a sound signal may be used instead of the optical signal, and in such cases, light receiving device A1002 in communication device A1000, A2000 is replaced with a sound detection device such as a microphone. By using a device that can identify the direction of arrival of sound, such as an array microphone, as the sound detection device, it is possible to more accurately associate the device that generates the sound to be detected with the sound signal. Note that communication device A1000, A2000 according to this embodiment may include a plurality of wireless communication devices. For example, communication device A1000, A2000 may include a plurality of wireless communication devices that support communication schemes stipulated by mutually different standards, or may include a plurality of wireless communication devices that support the same communication scheme. Moreover, when communication device A1000, A2000 according to this embodiment is embodied as a vehicle or a communication device provided in a vehicle, light receiving device A1002 may be a camera such as a camera included in a drive recorder, a vehicle backup camera, a camera for checking the surroundings of the vehicle, or a camera used to project an image on a monitor in place of the side view mirrors. In this way, by receiving optical signals using a camera provided for purposes other than optical communication, it is possible to achieve the communication control disclosed in this embodiment without having to add a new camera, which reduces costs and encourages the broad usage of the function of receiving optical signals. Moreover, since such a camera is installed so as to capture a region containing information required by the driver, that is to say, information important in operating the vehicle, collecting more information by combining signal processing such as image recognition with wireless communication facilitates the provision of appropriate automated driving control and the provision of information to the driver. The present disclosure describes an aspect of a method and device that use sensing data obtained from a sensor such as an image sensor or microphone to demodulate a transmission signal transmitted using a communication scheme that enables reception by such a sensor.
In the above aspect, by further including an aspect of performing signal processing including pattern recognition, such as image recognition, on the sensing data obtained by the sensor, it is possible to determine the correspondence between a target object in a real space detected or recognized from the sensing data and the transmission source of the transmission signal. In the above aspect, by further including an aspect of transmitting information such as an SSID, an address, or an identifier to be used in processing over a network including communication, it is possible to easily associate the information to be used in processing over a network including communication with a target object in a real space. In other words, information to be used in processing over a network, which conventionally was difficult to associate with a target object in a real space, can be used based on sensing data obtained from the real space. In the above aspect, by further including an aspect of using an image sensor as the sensor and transmitting, in an optical signal, information to be used in processing over a network including communication, it is possible to improve the reliability of the association between a visible target object and the information to be used in processing over a network including communication. In the above aspect, by further including an aspect of transmitting an identifier to be used in transmission, such as an SSID or address, in an optical signal and selecting the identifier of a target to connect to via communication based on the result of image recognition signal processing, it is possible to perform communication control based on the positional relationship of the target object in the real space and on attributes of the target object, to perform communication by specifying the target object desired to be connected to, and to obtain information and issue control instructions. As a result, for example, it is possible to provide a means for realizing communication with an appropriate communication partner in an environment in which an unspecified number of devices are within communication range, which encourages the creation and broad usage of new communication-based services. This concludes the description of Embodiment 8 according to the present disclosure. Note that the configuration illustrated in FIG. 5 was presented as one example of a communication system that performs visible light communication, but the configuration of the communication system that performs visible light communication is not limited to the configuration illustrated in FIG. 5. For example, a configuration like that illustrated in FIG. 52 (see, for example, "IEEE 802.11-16/1499r1") is acceptable. In FIG. 52, the transmission signal is transmitted as an optical signal in a baseband bandwidth without being up-converted. In other words, a device that transmits the optical signal according to this embodiment (i.e., a device including a light source) may have the configuration illustrated on the transmission side in FIG. 52, and a terminal that receives the optical signal according to this embodiment may have the configuration illustrated on the reception side in FIG. 52. Embodiment 9 In this embodiment, additional information pertaining to FIG. 52 will be given. FIG. 52 will be described in more detail. The symbol mapper receives an input of transmission data, performs mapping based on a modulation scheme, and outputs a symbol sequence (ci). The pre-equalizer receives an input of the symbol sequence, performs pre-equalizing processing on the symbol sequence to reduce the equalizing processing on the reception side, and outputs a pre-equalized symbol sequence. The Hermitian symmetry processor receives an input of the pre-equalized symbol sequence, allocates sub-carriers to the pre-equalized symbol sequence so as to secure Hermitian symmetry, and outputs parallel signals. The inverse (fast) Fourier transformer receives inputs of the parallel signals, applies an inverse (fast) Fourier transform to the parallel signals, and outputs inverse (fast) Fourier transformed signals. The parallel-serial converter and cyclic prefix adder receives an input of the inverse (fast) Fourier transformed signals, performs parallel-to-serial conversion and adds a cyclic prefix, and outputs the signal-processed signal. The digital-to-analog converter receives an input of the signal-processed signal, performs digital-to-analog conversion, and outputs an analog signal, and the analog signal is emitted as light from, for example, one or more LEDs. Note that the pre-equalizer and the Hermitian symmetry processor need not be included. In other words, there may be instances in which the pre-equalizer and the Hermitian symmetry processor do not perform their respective signal processing. The photodiode receives an input of light, and a reception signal is obtained via a transimpedance amplifier (TIA). The analog-to-digital converter performs analog-to-digital conversion on the reception signal and outputs a digital signal. The cyclic prefix subtractor and serial-parallel converter receives an input of the digital signal, subtracts the cyclic prefix, then performs serial-to-parallel conversion, and outputs parallel signals. The (fast) Fourier transformer receives inputs of the parallel signals, applies a (fast) Fourier transform to the parallel signals, and outputs (fast) Fourier transformed signals. The detector receives inputs of the (fast) Fourier transformed signals, performs detection, and outputs a series of reception symbols. The symbol demapper receives an input of the series of reception symbols, performs demapping, and obtains a series of reception data. In this way, even when such a transmission device that transmits the modulated optical signals and such a reception device that receives the modulated optical signals are applied to the embodiments described in the present specification, the embodiments can be implemented in the same manner.
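A minimal sketch of this chain is given below, assuming an FFT size of 64, a cyclic prefix of 16 samples, QPSK mapping, and an ideal optical channel, and omitting the pre-equalizer; all parameter values and names are illustrative assumptions, not taken from the cited reference.

```python
# Hedged sketch of the FIG. 52 baseband chain: mapping -> Hermitian symmetry
# -> inverse FFT -> cyclic prefix on the transmission side, and the reverse
# processing on the reception side.
import numpy as np

N_FFT = 64                 # FFT size (assumed)
CP_LEN = 16                # cyclic prefix length (assumed)
N_DATA = N_FFT // 2 - 1    # data sub-carriers available under Hermitian symmetry
QPSK = {0: 1 + 1j, 1: -1 + 1j, 2: -1 - 1j, 3: 1 - 1j}

def transmit(bits: np.ndarray) -> np.ndarray:
    # Symbol mapper: map bit pairs to QPSK symbols.
    symbols = np.array([QPSK[2 * a + b] for a, b in bits.reshape(-1, 2)])
    # Hermitian symmetry processor: mirror conjugated symbols onto the upper
    # sub-carriers so the IFFT output is real-valued (drivable by an LED).
    X = np.zeros(N_FFT, dtype=complex)
    X[1:1 + N_DATA] = symbols
    X[-N_DATA:] = np.conj(symbols)[::-1]
    x = np.fft.ifft(X).real                  # inverse (fast) Fourier transform
    return np.concatenate([x[-CP_LEN:], x])  # cyclic prefix adder

def receive(rx: np.ndarray) -> np.ndarray:
    y = rx[CP_LEN:]                          # cyclic prefix subtractor
    symbols = np.fft.fft(y)[1:1 + N_DATA]    # (fast) Fourier transform + detector
    # Symbol demapper: nearest-constellation-point decision.
    out = []
    for s in symbols:
        k = min(QPSK, key=lambda q: abs(QPSK[q] - s))
        out += [k >> 1, k & 1]
    return np.array(out)

bits = np.random.randint(0, 2, 2 * N_DATA)
assert np.array_equal(receive(transmit(bits)), bits)
```

The `.real` projection relies on the Hermitian-symmetric sub-carrier allocation, which is what allows the baseband signal to be emitted directly as a real-valued light intensity without up-conversion.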
Embodiment 10 In Embodiment 8, an example in which the transmission device transmits a plurality of modulated optical signals and the reception device receives the plurality of modulated optical signals was given with reference to FIG. 42. In this embodiment, an implementation example for such a case will be given. FIG. 53 illustrates an example of configurations of a transmission device and a reception device according to this embodiment. In FIG. 53, transmission device 100 transmits a plurality of modulated optical signals, and reception device 150 receives a plurality of modulated optical signals to obtain reception data. Note that in FIG. 53, configurations that operate in the same manner as in FIG. 6 share like reference signs. The transmission device in FIG. 53 transmits M modulated optical signals. Note that M is an integer greater than or equal to two.
Transmission unit A2002_i receives inputs of data A2001_i and control signal A2005, and based on information related to the error correction encoding method and information related to the transmission method included in control signal A2005, implements error correction encoding and implements signal processing based on the transmission method to generate and output modulated optical signal A2003_i. Note that i is an integer greater than or equal to one and less than or equal to M. Modulated optical signal A2003_i is then transmitted from light source A2004_i. Light receiver A2051, one example of which is an image sensor, receives light corresponding to modulated optical signal A2003_i. Here, light receiver A2051 receives light corresponding to the M modulated optical signals. The method of receiving the plurality of optical reception signals used in light receiver A2051 is, for example, as described in Embodiment 8. Light receiver A2051 outputs optical reception signal A2052_i corresponding to modulated optical signal A2003_i. Note that i is an integer greater than or equal to one and less than or equal to M. Reception unit A2053_i receives an input of optical reception signal A2052_i corresponding to modulated optical signal A2003_i, performs processing such as demodulation and error correction decoding, and outputs reception data A2054_i corresponding to data A2001_i. Data obtainer A2055 receives inputs of data A2054_1, data A2054_2, . . . , and data A2054_M, and generates and outputs data A2056. FIG. 54 illustrates an example of configurations of a transmission device and a reception device according to this embodiment that differ from those in FIG. 53. Note that in FIG. 54, configurations that operate in the same manner as in FIG. 53 share like reference signs. Splitter A2102 receives inputs of information A2101 and control signal A2005, and based on information related to the error correction encoding method included in control signal A2005, performs error correction encoding on information A2101 to generate error correction encoded data. Splitter A2102 then splits the error correction encoded data and outputs error correction encoded data A2001_i. Note that the splitting of the data into M items of error correction encoded data A2001_i may be performed using any method. For example, the error correction encoded data may be split into M items, and the data sequences of the split M items of data may be allocated as the M items of error correction encoded data A2001_i. Moreover, M data sequences configured of the same data may be generated based on the error correction encoded data, and the data sequences may be allocated as the items of error correction encoded data A2001_i. The method of allocating the error correction encoded data A2001_i is not limited to these examples; any method may be used so long as M data sequences are generated from the error correction encoded data and the data sequences are allocated as the items of error correction encoded data A2001_i. Transmission unit A2002_i receives inputs of data A2001_i and control signal A2005, and based on information related to the transmission method included in control signal A2005, implements signal processing based on the transmission method to generate and output modulated optical signal A2003_i. Note that i is an integer greater than or equal to one and less than or equal to M. Modulated optical signal A2003_i is then transmitted from light source A2004_i.
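The two allocation methods just described for splitter A2102 might be sketched as follows; the round-robin split and the function names are illustrative assumptions.

```python
# Hedged sketch of splitter A2102 in FIG. 54: one error-correction-encoded
# sequence is divided among M transmission units, each feeding a light source.
import numpy as np

def split_into_m(encoded: np.ndarray, m: int) -> list[np.ndarray]:
    # First method in the text: split the encoded data into M sequences
    # (here, dealt out round-robin as one assumed realization).
    return [encoded[i::m] for i in range(m)]

def duplicate_into_m(encoded: np.ndarray, m: int) -> list[np.ndarray]:
    # Second method in the text: every sequence carries the same encoded
    # data, trading throughput for robustness.
    return [encoded.copy() for _ in range(m)]

encoded = np.random.randint(0, 2, 64)      # stand-in for encoded data
streams = split_into_m(encoded, 4)         # data A2001_1 through A2001_4
assert sum(len(s) for s in streams) == len(encoded)
```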
Light receiver A2051, one example of which is an image sensor, receives light corresponding to modulated optical signal A2003_i. Here, light receiver A2051 receives light corresponding to the M modulated optical signals. The method of receiving the plurality of optical reception signals used in light receiver A2051 is, for example, as described in Embodiment 8. Light receiver A2051 outputs optical reception signal A2052_i corresponding to modulated optical signal A2003_i. Note that i is an integer greater than or equal to one and less than or equal to M. Reception unit A2053_i receives an input of optical reception signal A2052_i corresponding to modulated optical signal A2003_i, performs processing such as demodulation, and outputs (the log-likelihood ratio of) reception data A2054_i corresponding to data A2001_i. Error correction decoder A2151 receives inputs of (the log-likelihood ratio of) reception data A2054_1, (the log-likelihood ratio of) reception data A2054_2, . . . , and (the log-likelihood ratio of) reception data A2054_M, performs error correction decoding, and outputs reception data A2152. FIG. 55 illustrates one example of a frame configuration of a modulated optical signal transmitted by transmission device 100 illustrated in FIG. 53 and FIG. 54. Frame configuration A2201_1 in FIG. 55 is one example of the frame configuration of modulated optical signal A2003_1 illustrated in FIG. 53 and FIG. 54. Note that in frame configuration A2201_1, time is represented on the horizontal axis. Likewise, frame configuration A2201_i in FIG. 55 is one example of the frame configuration of modulated optical signal A2003_i illustrated in FIG. 53 and FIG. 54. Note that in frame configuration A2201_i, time is represented on the horizontal axis. Note that i is an integer greater than or equal to one and less than or equal to M (in other words, M frame configurations are shown in FIG. 55). As illustrated in frame configuration A2201_i, transmission device 100 illustrated in FIG. 53 and FIG. 54 transmits, in modulated optical signal A2003_i, preamble A2210_i, control information symbol A2211_i, and data symbol A2212_i. FIG. 56 illustrates one example of a reception state in reception device 150. Note that in the following example, transmission device 100 illustrated in FIG. 53 and FIG. 54 includes 16 (M = 16) light sources. In FIG. 56, A2300 indicates an image sensor, which is one example of the light receiver, and A2301_1 indicates light emitted by a first light source, this light including a first modulated optical signal. Note that the first modulated optical signal corresponds to A2201_1 in FIG. 55. Likewise, in FIG. 56, A2301_i indicates light emitted by an i-th light source, this light including an i-th modulated optical signal. Note that the i-th modulated optical signal corresponds to A2201_i in FIG. 55. Note that i is an integer greater than or equal to one and less than or equal to 16. In the example of the reception state in reception device 150 illustrated in FIG. 56, the light receiver in reception device 150 receives light from a fourth light source that includes a fourth modulated optical signal, receives light from an eighth light source that includes an eighth modulated optical signal, and receives light from a twelfth light source that includes a twelfth modulated optical signal.
For example, assuming transmission device 100 illustrated in FIG. 53 and/or FIG. 54 transmits 16 modulated optical signals from the 16 light sources, in the state illustrated in FIG. 56, reception device 150 illustrated in FIG. 53 and/or FIG. 54 cannot receive all 16 of the modulated optical signals, so it is difficult to obtain correct reception data in this state. A method for overcoming this problem will be described hereinafter. FIG. 57 illustrates one example of a configuration of information included in preamble A2210_i and control information symbol A2211_i in frame configuration A2201_i of modulated optical signal A2003_i illustrated in FIG. 55, and the symbol configuration thereof. Note that i is an integer greater than or equal to one and less than or equal to M (M = 16). Preamble A2210_i and control information symbol A2211_i in frame configuration A2201_i include, as illustrated in FIG. 57, symbol A2401 for signal detection, symbol A2402 for synchronization, symbol A2403 including information related to the number of modulated optical signals transmitted, and symbol A2404 including information related to the error correction encoding method, transmission method, and modulation scheme. Symbol A2401 for signal detection is a symbol for notifying reception device 150 of the existence of the modulated optical signal, and by detecting this symbol, reception device 150 knows that the modulated optical signal exists. Symbol A2402 for synchronization is a symbol for reception device 150 to perform time synchronization (and possibly frequency synchronization), and by using this symbol, reception device 150 can perform time synchronization and accurately demodulate the symbols. Symbol A2403 including information related to the number of modulated optical signals transmitted is a symbol for notifying of the number of modulated optical signals transmitted by transmission device 100; in the state illustrated in FIG. 56, symbol A2403 including information related to the number of modulated optical signals transmitted carries information indicating "16". In the reception state illustrated in FIG. 56, reception device 150 receives symbol A2403 including information related to the number of modulated optical signals transmitted, and thus knows that the number of modulated optical signals transmitted by transmission device 100 is 16. Note that in the case of the reception state illustrated in FIG. 56, reception device 150 knows that it has received only three of the 16 modulated optical signals. Symbol A2404 including information related to the error correction encoding method, transmission method, and modulation scheme is, for example, a symbol including information on the error correction encoding method, transmission method, and modulation scheme used in the data symbol (the symbol for transmitting data) in modulated optical signal A2003_i, and by receiving this symbol, reception device 150 can know the error correction encoding method, transmission method, and modulation scheme used in modulated optical signal A2003_i. In the case of the frame configuration illustrated in FIG. 55, the symbols in FIG. 57 are transmitted by transmission device 100 in modulated optical signal A2003_1 through modulated optical signal A2003_16. As a result, even when reception device 150 cannot receive all of the modulated optical signals, as illustrated in FIG. 56, it is possible to know the number of modulated optical signals transmitted by transmission device 100, and thus reception device 150 can know whether all modulated optical signals have been received or not.
When not all of the modulated optical signals have been received, signal processing can be cancelled midway, which achieves the advantageous effect that unnecessary power consumption can be reduced. FIG. 58 illustrates one example, differing from the example illustrated in FIG. 57, of a configuration of information included in preamble A2210_i and control information symbol A2211_i in frame configuration A2201_i of modulated optical signal A2003_i illustrated in FIG. 55, and the symbol configuration thereof. Note that i is an integer greater than or equal to one and less than or equal to M (M = 16), and in FIG. 58, configurations that operate in the same manner as in FIG. 57 share like reference signs. Since those configurations have already been described, repeated description thereof will be omitted. FIG. 58 differs from FIG. 57 in that symbol A2501 including information related to modulated optical signal number has been added to the symbols that transmission device 100 transmits. Since FIG. 58 illustrates frame configuration A2201_i of modulated optical signal A2003_i in FIG. 55, that is to say, the frame configuration of the i-th modulated optical signal, symbol A2501 including information related to modulated optical signal number includes information indicating "i". For example, symbol A2501 including information related to modulated optical signal number transmitted in the first modulated optical signal by transmission device 100 includes information indicating "1". In the reception state illustrated in FIG. 56, reception device 150 receives symbol A2403 including information related to the number of modulated optical signals transmitted, and thus knows that the number of modulated optical signals transmitted by transmission device 100 is 16. Then, since reception device 150 receives symbol A2501 including information related to modulated optical signal number included in the fourth modulated optical signal, symbol A2501 including information related to modulated optical signal number included in the eighth modulated optical signal, and symbol A2501 including information related to modulated optical signal number included in the twelfth modulated optical signal, reception device 150 knows that the fourth modulated optical signal, the eighth modulated optical signal, and the twelfth modulated optical signal have been received. As a result of knowing this situation, reception device 150 implements operations for improving the reception state, and thus improves data reception quality. Note that these operations will be described in greater detail later.
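The control information of FIG. 57 and FIG. 58 and two of its uses, namely cancelling processing when signals are missing and the rearrangement discussed in the paragraphs that follow, might be sketched as follows; the field values and encodings are illustrative assumptions.

```python
# Hedged sketch of the preamble/control information symbols and their use.
from dataclasses import dataclass

@dataclass
class ControlInfo:
    detect: int         # symbol A2401: signal-detection pattern (assumed 0xAA)
    sync: int           # symbol A2402: synchronization pattern (assumed 0x55)
    num_signals: int    # symbol A2403: number of modulated optical signals
    methods: dict       # symbol A2404: FEC / transmission method / modulation
    signal_number: int  # symbol A2501: index i of this modulated optical signal

def control_info(i: int, m: int) -> ControlInfo:
    return ControlInfo(0xAA, 0x55, m, {"fec": "LDPC", "mod": "QPSK"}, i)

def all_received(ctrls: list[ControlInfo]) -> bool:
    # Use 1: cancel signal processing midway (saving power) when the received
    # signal numbers do not cover 1..num_signals, as in the state of FIG. 56.
    if not ctrls:
        return False
    expected = set(range(1, ctrls[0].num_signals + 1))
    return {c.signal_number for c in ctrls} == expected

def rearrange(received: list[tuple[ControlInfo, bytes]]) -> bytes:
    # Use 2: reorder reception data by signal number, regardless of where on
    # the image sensor each light source lands (FIG. 59 versus FIG. 60).
    return b"".join(d for _, d in
                    sorted(received, key=lambda p: p[0].signal_number))

got = [control_info(i, 16) for i in (4, 8, 12)]  # reception state of FIG. 56
assert not all_received(got)                     # only 3 of 16 received
```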
Other examples of reception states in reception device 150 are illustrated in FIG. 59 and FIG. 60. Note that in FIG. 59 and FIG. 60, configurations that operate in the same manner as in FIG. 56 share like reference signs. Since those configurations have already been described, repeated description thereof will be omitted. In the example of the reception state in reception device 150 illustrated in FIG. 59, light receiver A2300 in reception device 150 receives light from a first light source that includes a first modulated optical signal through light from a sixteenth light source that includes a sixteenth modulated optical signal, that is to say, 16 modulated optical signals. In the example illustrated in FIG. 59, for example, the first modulated optical signal is received at the upper-left region of light receiver A2300. In the example of the reception state in reception device 150 illustrated in FIG. 60, light receiver A2300 in reception device 150 receives light from a first light source that includes a first modulated optical signal through light from a sixteenth light source that includes a sixteenth modulated optical signal, that is to say, 16 modulated optical signals. In the example illustrated in FIG. 60, for example, the first modulated optical signal is received at the bottom-right region of light receiver A2300, which differs from the example in FIG. 59. The reception states in FIG. 59 and FIG. 60 are merely examples; the manner in which reception device 150 receives the first modulated optical signal through the sixteenth modulated optical signal differs depending on the environment. Taking this into consideration, since each modulated optical signal includes symbol A2501 including information related to modulated optical signal number, as in FIG. 58, reception device 150 can know which part of the light receiver receives which modulated optical signal. Then, reception device 150 obtains the i-th reception data from the reception signal of the i-th modulated optical signal, and when the first through sixteenth reception data need to be rearranged, since symbol A2501 including information related to modulated optical signal number indicates which modulated optical signal each item of reception data corresponds to, the reception data can be correctly rearranged, which improves data reception quality. Next, a frame configuration method different from the example above will be described. FIG. 55 illustrates one example of a frame configuration of a modulated optical signal transmitted by transmission device 100 illustrated in FIG. 53 and FIG. 54, and since this has already been described above, repeated description will be omitted. For example, assume the configuration of the preamble and the control information symbol in frame configuration A2201_1 of modulated optical signal A2003_1 in FIG. 55 is as illustrated in FIG. 57, and the configuration of the preamble and the control information symbol in frame configuration A2201_2 of modulated optical signal A2003_2 through frame configuration A2201_16 of modulated optical signal A2003_16 is as illustrated in FIG. 61. Note that in FIG. 61, configurations that operate in the same manner as in FIG. 57 share like reference signs. The characterizing feature of FIG. 61 is that symbol A2403 including information related to the number of modulated optical signals transmitted is not included. In other words, the characterizing feature is that transmission device 100 transmits symbol A2403 including information related to the number of modulated optical signals transmitted only in modulated optical signal A2003_1. Here, when the reception state in reception device 150 is the state illustrated in FIG. 56, reception device 150 does not receive symbol A2403 including information related to the number of modulated optical signals transmitted, so reception device 150 cannot know the number of modulated optical signals transmitted by transmission device 100. Thus, reception device 150 determines that correctly receiving the data is difficult, stops the signal processing in the reception operations, and can thereby reduce unnecessary power consumption.
Note that in the description of this example, transmission device 100 is described as transmitting symbol A2403 including information related to the number of modulated optical signals transmitted only in modulated optical signal A2003_1, but this example is not limiting. So long as transmission device 100 transmits symbol A2403 including information related to the number of modulated optical signals transmitted in one or more of the modulated optical signals from among modulated optical signals A2003_1 through A2003_16, the same advantageous effects as described above can be achieved. Next, yet another example will be given. FIG. 55 illustrates one example of a frame configuration of a modulated optical signal transmitted by transmission device 100 illustrated in FIG. 53 and FIG. 54, and since this has already been described above, repeated description will be omitted. For example, assume the configuration of the preamble and the control information symbol in frame configuration A2201_1 of modulated optical signal A2003_1 in FIG. 55 is as illustrated in FIG. 58, and the configuration of the preamble and the control information symbol in frame configuration A2201_2 of modulated optical signal A2003_2 through frame configuration A2201_16 of modulated optical signal A2003_16 is as illustrated in FIG. 62. Note that in FIG. 62, configurations that operate in the same manner as in FIG. 57 and FIG. 58 share like reference signs. The characterizing feature of FIG. 62 is that symbol A2403 including information related to the number of modulated optical signals transmitted is not included. In other words, the characterizing feature is that transmission device 100 transmits symbol A2403 including information related to the number of modulated optical signals transmitted only in modulated optical signal A2003_1. Here, when the reception state in reception device 150 is the state illustrated in FIG. 56, reception device 150 does not receive symbol A2403 including information related to the number of modulated optical signals transmitted, so reception device 150 cannot know the number of modulated optical signals transmitted by transmission device 100. Thus, reception device 150 determines that correctly receiving the data is difficult, stops the signal processing in the reception operations, and can thereby reduce unnecessary power consumption. Note that in the description of this example, transmission device 100 is described as transmitting symbol A2403 including information related to the number of modulated optical signals transmitted only in modulated optical signal A2003_1, but this example is not limiting. So long as transmission device 100 transmits symbol A2403 including information related to the number of modulated optical signals transmitted in one or more of the modulated optical signals from among modulated optical signals A2003_1 through A2003_16, the same advantageous effects as described above can be achieved. In yet another example, transmission device 100 may transmit the preamble and the control information symbol in one or more of the modulated optical signals from among modulated optical signals A2003_1 through A2003_16. As described above, when the transmission device transmits a plurality of modulated optical signals as described in this embodiment, the reception device can achieve high data reception quality and can reduce power consumption.
Note that in this embodiment, the number of modulated optical signals that the transmission device transmits is exemplified as, but not limited to, 16. For example, when the transmission device has a configuration like that of transmission device 100 illustrated in FIG. 53, the number of modulated optical signals transmitted may be changed depending on the time of transmission. For example, at a first time, 16 modulated optical signals may be transmitted, at a second time, eight modulated optical signals may be transmitted, and at a third time, one modulated optical signal may be transmitted. Moreover, in the case of this example, at the first time, information indicating 16 is transmitted in symbol A2403 including information related to the number of modulated optical signals transmitted, at the second time, information indicating eight is transmitted in symbol A2403 including information related to the number of modulated optical signals transmitted, and at the third time, information indicating one is transmitted in symbol A2403 including information related to the number of modulated optical signals transmitted. Further, in this embodiment, the frame configuration was exemplified as the frame configuration illustrated in FIG. 55, but the frame configuration is not limited to this example; other symbols may be present in the frame. Moreover, the order in which the symbols are transmitted is not limited to the order illustrated in FIG. 55. Furthermore, the configurations of the preamble and the control information symbol were exemplified as those illustrated in FIG. 57, FIG. 58, FIG. 61, and FIG. 62, but in each of these figures, one or more symbols may be omitted, or other symbols may be present. Operations can be performed in the same manner with such configurations. In other words, the configurations of the preamble and the control information symbol are not limited to the examples in FIG. 57, FIG. 58, FIG. 61, and FIG. 62. Moreover, the order in which the symbols included in the preamble and the control information symbol are transmitted is not limited to the examples in FIG. 57, FIG. 58, FIG. 61, and FIG. 62. Embodiment 11 In this embodiment, an implementation method for improving data reception quality in reception device 150 when, for example, the reception state of reception device 150 is like the situation illustrated in FIG. 56 will be described. As described in Embodiment 10, it is difficult for reception device 150 to correctly obtain reception data in a situation like that illustrated in FIG. 56, for example. Moreover, there are instances in which the reception state of reception device 150 is like that illustrated in FIG. 63. In FIG. 63, configurations that operate in the same manner as in FIG. 56 share like reference signs. In FIG. 63, since the surface area occupied on the light receiver, such as an image sensor, by the light emitted from each light source is small, there is a problem that the data reception quality in reception device 150 decreases. Moreover, when line scanning is performed or line scan sampling is performed per region, reception device 150 may experience a significant reduction in data reception quality. In this embodiment, an example of a configuration of reception device 150 that overcomes this problem will be given. Transmission device 100 in FIG. 53 is one example of a configuration of the transmission device that transmits data. Since FIG. 53 has already been described, repeated description thereof will be omitted.
The configuration of reception device 150 that receives the modulated optical signal transmitted by transmission device 100 in FIG. 53 is illustrated in FIG. 64. Another example of a configuration of the transmission device that transmits data, different from the example of FIG. 53, is transmission device 100 illustrated in FIG. 54. Since FIG. 54 has already been described, repeated description thereof will be omitted. The configuration of reception device 150 that receives the modulated optical signal transmitted by transmission device 100 in FIG. 54 is illustrated in FIG. 65. Hereinafter, reception device 150 illustrated in FIG. 64 and FIG. 65 will be described. FIG. 64 illustrates one example of a configuration of reception device 150 that receives the modulated optical signal transmitted by transmission device 100 illustrated in FIG. 53, and configurations that operate in the same manner as in FIG. 53 share like reference signs. Lens (group) A3101 receives an input of lens control signal A3109, and performs control of, for example, focal length, aperture, and focus. Image sensor (light receiver) A3103 receives an input of light A3102 that has passed through the lens, and outputs optical reception signals A2052_1 through A2052_M and image signal A3104. Note that image signal A3104 may subsequently be subjected to signal processing and displayed as an image on an internal display, or may be displayed as an image on an external display via an interface. Data obtainer A2055 receives inputs of reception data A2054_1 through A2054_M, and outputs data A2056 and reception state information A3107. Reception state information A3107 may be, for example, the information related to the number of modulated optical signals transmitted obtained from symbol A2403 including information related to the number of modulated optical signals transmitted, which is transmitted by transmission device 100 as described in Embodiment 10, or the information related to modulated optical signal number obtained from symbol A2501 including information related to modulated optical signal number, which is transmitted by transmission device 100 as described in Embodiment 10. Moreover, reception state information A3107 may be information indicating a reception state, generated from the information related to the number of modulated optical signals transmitted and/or the information related to modulated optical signal number. Note that these examples are not limiting. Object recognition unit A3105 receives inputs of image signal A3104, reception state information A3107, and instruction signal A3150, and performs object recognition based on instruction signal A3150. For example, when instruction signal A3150 indicates "perform communication", object recognition unit A3105 starts modulated optical signal recognition. Here, object recognition unit A3105 receives inputs of image signal A3104 and reception state information A3107, and outputs object recognition signal A3106. These operations will be described in greater detail later. Lens controller A3108 receives an input of object recognition signal A3106, recognizes the reception state, examples of which are illustrated in FIG. 56, FIG. 63, and so on, and outputs lens control signal A3109, which corresponds to control such as determining whether to perform lens control and, when lens control is performed, determining the set value for focal length, the set value for aperture, and the setting for focus. In FIG. 64, lens controller A3108 is exemplified as receiving an input of object recognition signal A3106, but it may receive inputs of other signals.
FIG. 65 illustrates one example of a configuration of reception device 150 that receives the modulated optical signal transmitted by transmission device 100 illustrated in FIG. 54, and configurations that operate in the same manner as in FIG. 53 and FIG. 54 share like reference signs. Since the operations performed by lens (group) A3101, image sensor A3103, object recognition unit A3105, and lens controller A3108 have already been described, repeated description is omitted. Error correction decoder A2155 receives inputs of reception data A2054_1 through A2054_M, and outputs data A2056 and reception state information A3107. Next, a detailed example of a control method for lens (group) A3101 in FIG. 64 and FIG. 65 will be given. As described in Embodiment 10, for example, when the reception state of reception device 150 is the state illustrated in FIG. 56, since the light receiver is not receiving light emitted by some of the light sources, it is difficult for reception device 150 to correctly receive the data. Moreover, as described above, when the reception state of reception device 150 is the state illustrated in FIG. 63, a problem arises in that the data reception quality of reception device 150 is poor. However, when the reception state of reception device 150 is a state like one of those illustrated in FIG. 59 and FIG. 60, data reception quality is high. From the above, when reception device 150 controls lens (group) A3101 so as to achieve a state like one of those illustrated in FIG. 59 and FIG. 60, data reception quality improves. The configurations of reception device 150 illustrated in FIG. 64 and FIG. 65 are examples of configurations for realizing this. A detailed example of control of reception device 150 illustrated in FIG. 64 and FIG. 65 will now be given. Assume the reception state of reception device 150 is the state illustrated in, for example, FIG. 56. Here, since reception state information A3107 in FIG. 64 and FIG. 65 is information generated based on the information related to the number of modulated optical signals transmitted and the information related to modulated optical signal number, as described above, object recognition unit A3105 in FIG. 64 and FIG. 65 recognizes that three of the 16 modulated optical signals have been received. Furthermore, object recognition unit A3105 recognizes, from image signal A3104, the reception state of the modulated optical signals, for example, the positions on the image sensor at which the three modulated optical signals are received. In other words, object recognition unit A3105 performs object recognition as depicted in FIG. 56. Accordingly, object recognition unit A3105 recognizes the reception state of the modulated optical signals and recognizes that the 16 modulated optical signals have not all been received. Furthermore, in the case of this example, based on these recognition results, object recognition unit A3105 determines to perform lens control, determines a suitable set value for focal length, a suitable set value for aperture, and a suitable setting for focus for realizing suitable communication, and outputs object recognition signal A3106 including this information. Note that it is sufficient if object recognition signal A3106 includes at least the suitable set value for focal length; object recognition signal A3106 need not include the suitable set value for aperture and the suitable setting for focus.
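The determination just described for object recognition unit A3105 might be sketched as follows; the widen-then-refine policy and the 12 mm to 35 mm range (borrowed from the example given later in this embodiment) are assumptions.

```python
# Hedged sketch of the focal-length decision: widen the angle of view when
# some modulated optical signals fall outside the sensor (FIG. 56), and
# lengthen it when the received light spots are too small (FIG. 63).
def choose_focal_length(num_expected: int, num_received: int,
                        current_mm: float, widest_mm: float = 12.0,
                        longest_mm: float = 35.0) -> float:
    if num_received < num_expected:
        # Some light sources are not captured: fall back to the widest angle.
        return widest_mm
    # All sources captured: a longer focal length enlarges each light source
    # on the sensor, countering the small-surface-area problem of FIG. 63.
    return min(current_mm * 1.2, longest_mm)

# Example: 3 of 16 modulated optical signals received -> widest angle.
assert choose_focal_length(16, 3, current_mm=25.0) == 12.0
```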
Lens controller A3108 receives an input of object recognition signal A3106, and based on, for example, the suitable set value for focal length, the suitable set value for aperture, and the suitable setting for focus included in object recognition signal A3106, outputs lens control signal A3109 for controlling lens (group) A3101. By implementing this sequence of operations, reception device 150 illustrated in FIG. 64 and FIG. 65 can achieve a reception state such as those illustrated in FIG. 59 and FIG. 60, and thus achieve the advantageous effect that high data reception quality can be obtained. Although the above example pertains to controlling the reception state of reception device 150 from the state illustrated in FIG. 56 to a state like one of those illustrated in FIG. 59 and FIG. 60, this example is not limiting. For example, the reception state of reception device 150 may be controlled from the state illustrated in FIG. 63 to a state like one of those illustrated in FIG. 59 and FIG. 60. However, these examples are not limiting. Next, an example of control of reception device 150 illustrated in FIG. 66 and FIG. 67, which differs from FIG. 64 and FIG. 65, will be given. FIG. 66 illustrates one example of a configuration of reception device 150 that receives the modulated optical signal transmitted by transmission device 100 illustrated in FIG. 53, and configurations that operate in the same manner as in FIG. 64 share like reference signs. Repeated description of configurations that have already been described will be omitted. Reception device 150 in FIG. 66 differs from reception device 150 in FIG. 64 in that it includes signal processor A3302 disposed after image sensor A3103. Here, assume signal processor A3302 includes at least a function for zoom processing (enlarging and/or shrinking an image). Signal processor A3302 receives inputs of image signal A3301, zoom signal A3300, object recognition signal A3106, and instruction signal A3150, and when instruction signal A3150 indicates "capturing mode (perform image capturing)", signal processor A3302 performs signal processing for zooming on image signal A3301 based on the zoom information (enlarging and/or shrinking an image) included in zoom signal A3300, and outputs signal-processed image signal A3104. When instruction signal A3150 indicates "communication mode (perform communication)", signal processor A3302 performs signal processing for zooming on image signal A3301 based on the information included in object recognition signal A3106, such as the suitable set value for focal length, the suitable set value for aperture, and the suitable setting for focus, and outputs signal-processed image signal A3104 and signal-processed optical reception signals A2052_1 through A2052_M. With this, as described above, since the reception state is improved, the advantageous effect that data reception quality is improved can be achieved. Note that since the method for improving the reception state used in lens controller A3108 has already been described, repeated description thereof will be omitted. By implementing the above, reception device 150 can achieve the advantageous effect of an improvement in data reception quality, since the reception state improves. In FIG. 66, when lens (group) A3101 does not include a function for changing the focal length, changing of the focal length to improve reception is not performed.
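A minimal sketch of the zoom signal processing attributed to signal processor A3302 is given below, assuming a simple center crop with nearest-neighbor enlargement; the embodiment does not specify the zoom method, so this is an illustrative assumption.

```python
# Hedged sketch of digital zoom in signal processor A3302: crop the center of
# the sensor image and enlarge it so each light source covers more pixels.
import numpy as np

def digital_zoom(image: np.ndarray, factor: int) -> np.ndarray:
    h, w = image.shape[:2]
    ch, cw = h // factor, w // factor
    top, left = (h - ch) // 2, (w - cw) // 2
    crop = image[top:top + ch, left:left + cw]
    # Nearest-neighbor enlargement back to the original size (assumed method).
    return np.repeat(np.repeat(crop, factor, axis=0), factor, axis=1)

frame = np.zeros((480, 640), dtype=np.uint8)  # stand-in for image signal A3301
assert digital_zoom(frame, 2).shape == frame.shape
```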
FIG. 67 illustrates one example of a configuration of reception device 150 that receives the modulated optical signal transmitted by transmission device 100 illustrated in FIG. 54, and configurations that operate in the same manner as in FIG. 65 share like reference signs. Repeated description of configurations that have already been described will be omitted. Reception device 150 in FIG. 67 differs from reception device 150 in FIG. 65 in that it includes signal processor A3302 disposed after image sensor A3103, as in FIG. 66. Since the operations performed by signal processor A3302 have already been described in detail, repeated description thereof will be omitted. Moreover, as already described, the advantageous effect of an improvement in data reception quality can be achieved, since the reception state improves. Note that since the method for improving the reception state used in lens controller A3108 has already been described, repeated description thereof will be omitted. By implementing the above, reception device 150 can achieve the advantageous effect of an improvement in data reception quality, since the reception state improves. In FIG. 67, when lens (group) A3101 does not include a function for changing the focal length, changing of the focal length to improve reception is not performed. Note that in reception device 150 illustrated in FIG. 64, FIG. 65, FIG. 66, and FIG. 67, lens (group) A3101 can be set with a plurality of focal length values. Conceivable methods include, for example, one in which the focal length can be set anywhere in a range from 12 mm to 35 mm, inclusive, and one in which the focal length can be set to either 12 mm or 25 mm. The following description will be based on this example. As a first example, consider a case in which a plurality of discrete focal length values are supported. When reception device 150 in FIG. 64, FIG. 65, FIG. 66, and FIG. 67 is set to communication mode via instruction signal A3150, reception device 150 begins performing communication, and at this time, the focal length of lens (group) A3101 is set to, for example, the widest angle of 12 mm. Note that when the focal length is set to the widest angle, it is highly probable that a reception state like that of FIG. 56, in which reception of a portion of the modulated optical signals is difficult, can be avoided. With this, the advantageous effect that data reception quality can be improved can be achieved. However, in order to further improve data reception quality, the focal length, for example, may then be controlled to a suitable value. Note that in this example, focal lengths of 12 mm and 25 mm are supported, but even when two or more focal lengths are supported, setting the focal length to, for example, the widest angle upon starting communication is an effective method for improving data reception quality. As a second example, consider a case in which the focal length can be set continuously (or in fine steps). When reception device 150 in FIG. 64, FIG. 65, FIG. 66, and FIG. 67 is set to communication mode via instruction signal A3150, reception device 150 begins performing communication, and at this time, the focal length of lens (group) A3101 is set to, for example, the widest angle of 12 mm. Note that when the focal length is set to the widest angle, it is highly probable that a reception state like that of FIG. 56, in which reception of a portion of the modulated optical signals is difficult, can be avoided. With this, the advantageous effect that data reception quality can be improved can be achieved.
However, in this example, since it is possible to minutely set the focal length, for example, even when the focal length is set to 14 mm, there is a high probability that the same advantageous effect can be achieved. However, in order to further improve data reception quality, the focal length, for example, may be controlled to a suitable value. In reception device150inFIG.66andFIG.67, assume signal processor A3302includes a function for processing zoom (enlarging (and/or shrinking) an image). In this example, assume an image enlargement of 1× (image is not enlarged), an image enlargement of 2×, and an image enlargement of 4× are supported. When reception device150inFIG.66andFIG.67is set to communication mode via instruction signal A3150, reception device150begins performing communication, and at this time, the zoom (enlarging (and/or shrinking) an image) in signal processor A3302shall be set to, for example, “an image enlargement of 1× (image is not enlarged)”, which results in the widest angle. Note that when the focal length is set to the widest angle, as inFIG.56, it is highly probable that the reception state in which reception of a portion of the modulated optical signals is difficult can be avoided. With this, the advantageous effect that data reception quality can be improved can be achieved. However, in order to further improve data reception quality, the zoom value, for example, may be controlled to a suitable value. Embodiment 12 In the present embodiment, a configuration of a communication system that includes a transmission device and a reception device and is equipped in a vehicle, and processing operations performed by the communication system will be described. Note that the transmission device and the reception device described in the present specification may include all or some of the respective functions of the transmission devices and the reception devices described in the above embodiments. FIG.68illustrates one example of a configuration of a communication system equipped in a vehicle. The communication system includes transmission device810and reception device820. For example, transmission device810transmits (emits) N modulated optical signals. Note that N is an integer that is greater than or equal to 2 (however, N may be an integer that is greater than or equal to 1). Such a transmission device810includes N transmission sets for generating the N modulated optical signals, each of which is an optical signal. Each transmission set includes light source811_iand transmission signal processor813_i. Note that i is an integer that is greater than or equal to 1 and less than or equal to N. Transmission signal processor813_igenerates and outputs modulated signal812_ibased on information814_i. Note that transmission device810does not necessarily transmit N modulated optical signals. In other words, transmission device810need not transmit any modulated optical signals, and, alternatively, transmission device810may transmit (emit) at least one and at most N modulated optical signals. Light source811_iis configured of, for example, at least one light emitter such as an LED, and is used as, for example, the left headlight of a vehicle, the right headlight of a vehicle, the left tail light of a vehicle, a fog light (or right and left fog lights) of a vehicle, a brake light (or right and left brake lights), or a position light (or right and left position lights).
Note that light source811_iis not limited to these lights, and may be used as a light or lamp for some other application. This light source811_itransmits a modulated optical signal, which is an optical signal based on modulated signal812_ifrom transmission signal processor813_i, by changing the luminance of emitted light in accordance with modulated signal812_i. Although the data transmission method is exemplified as being accomplished by changing the luminance of light in the present embodiment, the data transmission method may be accomplished by changing the color in a color space or color system resulting from changes in the stimulus values X, Y, and Z. For example, information814_iincludes model identification information (or model information) that identifies the model of the vehicle equipped with the communication system, information indicating the speed of the vehicle, and information indicating the location of light source811_ion the vehicle body. The information indicating the location of light source811_ion the vehicle body is, for example, information indicating the location on the vehicle body equipped with the communication system, and specifically, is information indicating, for example, the front-left or front-right of the vehicle, the back-left or back-right of the vehicle, the center of the front of the vehicle, or the center of the rear of the vehicle. Reception device820includes M reception sets that each receive a modulated optical signal, and display information generator830. Note that M is an integer greater than or equal to one. Each reception set includes (electronic) mirror821_j, recognition unit823_j, demodulator826_j, and sensor controller827_j. Note that j is an integer that is greater than or equal to one and less than or equal to M. Mirror821_jis, for example, an electronic mirror, and functions as a light receiver that receives light. In other words, mirror821_jis implemented electronically as, for example, the left side mirror of the vehicle, the right side mirror of the vehicle, the back mirror of the vehicle, or a mirror fitted to a bumper of the vehicle. Note that mirror821_jis not limited to these examples of mirrors, and may be used as a mirror for some other application or a mirror that is fitted to some other location. More specifically, mirror821_jincludes, for example, an image sensor such as a CMOS, organic CMOS, or CCD image sensor. Such a mirror821_jreceives ambient light and outputs imaging data822_j. Here, mirror821_jchanges the scanning method, i.e., the method used by the image sensor to read an amount of light, in accordance with control signal829_jfrom sensor controller827_j. In other words, when control signal829_jindicates control for video, mirror821_jreads the amount of light using a line scan sampling method with a long exposure time to output imaging data822_jfor video, as in Embodiment 8. When control signal829_jindicates optical communication control, mirror821_jreads the amount of light using a line scan sampling method with a short exposure time to output imaging data822_jfor optical communication, as in Embodiment 8. Imaging data822_jfor optical communication is data that is based on a modulated optical signal transmitted from transmission device810included in a communication system equipped in another vehicle. More specifically, as illustrated inFIG.42, mirror821_jperforms line scan sampling per region of the image sensor.
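The scanning-method switching just described can be summarized by the following Python sketch; the Sensor class, the control-signal fields, and the long/short exposure representation are illustrative assumptions rather than the actual hardware interface.

class Sensor:
    def line_scan(self, exposure, regions):
        # Placeholder read-out; a real image sensor returns per-line samples.
        return {"exposure": exposure, "regions": regions, "samples": []}

def read_image_sensor(control_signal, sensor):
    if control_signal["mode"] == "video":
        # Control for video: line scan sampling with a long exposure time.
        return sensor.line_scan(exposure="long", regions=None)
    # Control for optical communication: short exposure, restricted to
    # the regions in which transmitting objects appear.
    return sensor.line_scan(exposure="short", regions=control_signal["regions"])

def control_cycle(recognized_regions):
    # Alternate video and optical-communication control while a
    # transmitting object is recognized (cf. Embodiment 8).
    while True:
        yield {"mode": "video"}
        if recognized_regions:
            yield {"mode": "optical_communication",
                   "regions": list(recognized_regions)}

imaging_data = read_image_sensor({"mode": "video"}, Sensor())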
In other words, mirror821_jperforms line scan sampling on each of one or a plurality of regions indicated by control signal829_j. This allows mirror821_jto simultaneously receive a plurality of modulated optical signals. Moreover, since reception device820includes a plurality of mirrors821_j, even more modulated optical signals can be received simultaneously. Note that line scan sampling is used in the above example, but the amount of light may be read from each pixel without performing line scan sampling. Recognition unit823_jperforms pattern recognition and object recognition and the like on imaging data822_jfor video, which is output from mirror821_j. With this, recognition unit823_jcan recognize an object that is transmitting a modulated optical signal from among objects appearing in an image included in the imaging data for video. Hereinafter, such an object will be referred to as a transmitting object. Recognition unit823_joutputs object recognition signal824_jindicating a recognition result of the transmitting object. Note that when the transmitting object is a vehicle and includes, for example, a communication system like the communication system illustrated inFIG.68, the modulated optical signal transmitted (emitted) by the transmitting object includes model identification information (or model information), information indicating the speed of the vehicle, and information indicating the location of the light source on the vehicle body. Sensor controller827_joutputs control signal829_jbased on object recognition signal824_jto demodulator826_jand mirror821_j. For example, when object recognition signal824_jindicates that no transmitting object is recognized, sensor controller827_joutputs control signal829_jindicating control for video. However, when object recognition signal824_jindicates that a transmitting object is recognized, sensor controller827_jalternately outputs control signal829_jindicating control for video and control signal829_jindicating control for optical communication in a cyclic manner, as illustrated inFIG.39described in Embodiment 8, for example. Here, control signal829_jindicating control for optical communication indicates a region of the image sensor in which the transmitting object recognized by recognition unit823_jappears. When a plurality of transmitting objects are recognized, control signal829_jindicating control for optical communication indicates, for each of the plurality of recognized transmitting objects, the region of the image sensor in which the transmitting object appears. As a result, mirror821_jcan perform line scan sampling in those regions to simultaneously receive a plurality of modulated optical signals. When demodulator826_jobtains imaging data (optical reception signal)822_jfor video from mirror821_j, demodulator826_joutputs reception data828_jincluded in the imaging data (optical reception signal)822_jfor video. Moreover, when demodulator826_jobtains imaging data (optical reception signal)822_jfor optical communication from mirror821_j, demodulator826_jperforms processing such as demodulation and error correction decoding on the imaging data (optical reception signal)822_jfor optical communication. More specifically, demodulator826_jobtains control signal829_jfrom sensor controller827_j, and identifies the region including the transmitting object indicated in control signal829_jfrom the image sensor.
Demodulator826_jthen performs processing such as demodulation and error correction decoding on the data corresponding to the identified region from among the imaging data (optical reception signal)822_jfor optical communication, and outputs reception data828_j. Accordingly, reception data828_jis reception estimation data for data transmitted by a transmitting object. Reception data828_jthus includes the model identification information, information indicating the speed of the vehicle, and information indicating the location of the light source on the vehicle body, as transmitted by the transmitting object. Note that the transmitting object is, for example, an oncoming vehicle, a vehicle traveling ahead, or a vehicle following behind. Display information generator830obtains reception data828_jand object recognition signal824_jfrom each of the M reception sets. Display information generator830then generates and outputs display information831based on reception data828_jand object recognition signal824_j. A display included in the vehicle displays an image based on display information831. This makes it possible to display an image of the surrounding area of the vehicle equipped with the communication system on the display. Moreover, when a vehicle that transmits a modulated optical signal, for example, is present in the surrounding area, information related to the vehicle can be shown in association with the vehicle appearing in the image. Information related to the vehicle includes model identification information for the vehicle, information indicating the speed of the vehicle, and information indicating the location of the light source on the vehicle body. With such a transmission and reception method used in the communication system, it is possible to achieve the advantageous effects that data transmission distances of at least 100 meters can be ensured, and modulated optical signals can be transmitted and received even when vehicle speed is less than or equal to 200 km/h. Mirror821_jmay include functionality for instantaneously switching the focal length of the lens used to project an image onto the image sensor. This has the advantage that it is possible to instantaneously receive modulated optical signals from each of a plurality of transmitting objects (for example, vehicles) whose distances to the vehicle equipped with the communication system are different. Note that at least one element in the communication system according to the present embodiment, excluding light source811_1through light source811_N and mirror821_1through mirror821_M, may be configured as a CPU and/or LSI850. The above description states that “when the transmitting object is a vehicle and includes, for example, a communication system like the communication system illustrated inFIG.68, the modulated optical signal transmitted (emitted) by the transmitting object includes model identification information, information indicating the speed of the vehicle, and information indicating the location of the light source on the vehicle body”. Hereinafter, advantages of the modulated optical signal including model identification information, information indicating the speed of the vehicle, and information indicating the location of the light source on the vehicle body will be described.
In the above description, reception device820receives a modulated optical signal transmitted by a transmitting object and performs processing such as demodulation on the modulated optical signal to obtain model identification information (or model information) on the transmitting object, information indicating the speed of the vehicle, and information indicating the location of the light source on the vehicle body. Here, when reception device820obtains model identification information (or model information) on the transmitting object, for example, assume reception device820includes a storage that stores a database related to the relationship between vehicle model and vehicle size, although such a storage is not illustrated inFIG.68. For example, assume the storage stores the following database. When the model identification information indicates “0000000000” (binary), this indicates “vehicle model #0, overall length=3395 mm, vehicle body width=1475 mm, overall height=1780 mm”. When the model identification information indicates “0000000001” (binary), this indicates “vehicle model #1, overall length=4840 mm, vehicle body width=1820 mm, overall height=1695 mm”. When the model identification information indicates “1111111111” (binary), this indicates “vehicle model #1023, overall length=4270 mm, vehicle body width=1785 mm, overall height=1445 mm”. Accordingly, reception device820obtains the model identification information by receiving and demodulating the modulated optical signal transmitted by the transmitting object, and the storage included in reception device820receives an input of the model identification information, and obtains and outputs information indicating vehicle model, overall length, vehicle body width, and overall height from the above-described database. Accordingly, reception device820can estimate the distance to the transmitting object based on the information indicating vehicle model, overall length, vehicle body width, and overall height. Moreover, it is possible to more accurately comprehend the transmitting object in display information generator830included in the reception device. In this way, as a result of the transmitting object transmitting the modulated optical signal including the model information, the reception device that receives the modulated optical signal can achieve the advantageous effect that the reception device can more accurately comprehend the state of the transmitting object. Note that when the communication system includes wireless communication system842, as inFIG.70, the communication system may obtain the information indicating vehicle model, overall length, vehicle body width, and overall height and the like as follows: wireless communication system842transmits a modulated signal including the model identification information, the communication partner of wireless communication system842obtains the model identification information and modulates and transmits data indicating vehicle model, overall length, vehicle body width, and overall height and the like, and the communication system then receives this modulated signal. Note that the operations illustrated inFIG.70will be described later. The reception device obtains information indicating the location of the light source on the vehicle body. Next, advantages thereof will be described by way of example. For example, assume the vehicle including the communication system obtained only the modulated optical signal from the light source in the right headlight of the transmitting object.
In this case, reception device820in the communication system demodulates only a reception signal corresponding to the modulated optical signal from the light source in the right headlight, and recognizes that it has obtained information from the right headlight based on the information indicating the location of the light source on the vehicle body that is included in the reception signal. With this, the vehicle including the communication system recognizes that one vehicle is present based on the light source included in the right headlight, and also knows that a portion of the vehicle is obstructed by an obstruction. Moreover, the vehicle can recognize that the portion of the transmitting object that is obstructed by the obstruction is the left side. In this way, as a result of the transmitting object transmitting information indicating the location of the light source on the vehicle body from each light source, it is possible to achieve the advantageous effect that the communication system that receives this information can estimate the status of the transmitting object in greater detail. Next, the recognition result in recognition unit823_jwill be described.FIG.69illustrates one example of a recognition result. For example, recognition unit823_jobtains imaging data822_jfrom mirror821_jbeing used as a rearview mirror. As illustrated in (a) inFIG.69, as a result of performing, for example, pattern recognition on imaging data822_j, recognition unit823_jrecognizes headlights of vehicles following behind as transmitting objects. Recognition unit823_jthen identifies the locations of the transmitting objects and indicates them in object recognition signal824_j. Demodulator826_jperforms processing such as demodulation and error correction on each of the regions in imaging data822_jfor optical communication. Each region is a region including a location of a transmitting object identified by recognition unit823_j. With this, demodulator826_jobtains information on the transmitting object from the region. The information on the transmitting object includes model identification information for the vehicle, information indicating the speed of the vehicle, and information indicating the location of the light source on the vehicle body. For example, as illustrated in (a) inFIG.69, model identification information indicates, for example, “make: α, model: C”, the information indicating the speed of the vehicle indicates, for example, “48 km/h”, and the information indicating the location of the light source on the vehicle body indicates, for example, “front-right”. Display information831output from display information generator830includes information on each transmitting object obtained by demodulator826_j. Accordingly, so long as a control unit such as the electronic control unit (ECU) in the vehicle obtains display information831, the sizes, shapes, and locations of other vehicles in the surrounding area of the vehicle can be recognized, as illustrated in (b) inFIG.69. Specifically, the control unit identifies, for example, the size and shape of a vehicle based on the vehicle model indicated in the model identification information, by referencing a database, for example.
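A minimal Python sketch of this database referencing, using the example entries given above, follows; the pinhole-camera distance estimate is an assumed illustration and not a method stated in the embodiment.

# Model identification information (10 bits) -> vehicle dimensions.
MODEL_DB = {
    0b0000000000: dict(model="#0", length_mm=3395, width_mm=1475, height_mm=1780),
    0b0000000001: dict(model="#1", length_mm=4840, width_mm=1820, height_mm=1695),
    0b1111111111: dict(model="#1023", length_mm=4270, width_mm=1785, height_mm=1445),
}

def estimate_distance_m(model_id, apparent_width_px, focal_px):
    # Pinhole model: distance ~ focal length (px) * real width / apparent width.
    real_width_m = MODEL_DB[model_id]["width_mm"] / 1000.0
    return focal_px * real_width_m / apparent_width_px

# Example: vehicle model #1023 spanning 120 pixels with an assumed
# 4000-pixel focal length gives a distance estimate of about 59.5 m.
distance = estimate_distance_m(0b1111111111, apparent_width_px=120, focal_px=4000)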
Then, based on the information indicating the location of the light source on the vehicle body that is included in the information on the transmitting object, and the information indicating the location and size of the light source on the vehicle body that appears in object recognition signal824_j, the control unit recognizes the location and size of the vehicle that appears in object recognition signal824_jand ambient information, such as information on obstructions present and how an obstruction overlaps with a vehicle. In other words, the control unit can quickly recognize the size, shape, and location of another vehicle based on the information on the transmitting object, without having to recognize them from the image included in object recognition signal824_j. Stated differently, it is possible to reduce the image processing load and increase the speed at which another vehicle present in the surrounding area can be recognized. However, when the vehicle is equipped with, for example, sensors (not illustrated inFIG.68) that recognize objects and these sensors recognize the size, shape, and location of another vehicle, reception device820can obtain and use the model identification information, information indicating the speed of the vehicle, and information indicating the location of the light source on the vehicle body, which are included in the modulated optical signal transmitted by a light source of the transmitting object, to achieve the advantageous effect that the state of the transmitting object and the state of the surrounding area of the transmitting object can be more accurately known. Moreover, since the information on the transmitting object includes information indicating the location of the light source on the vehicle body, the control unit can quickly recognize whether both the left and right headlights of the other vehicle appear in the image, or whether one of the left and right headlights is obstructed. Based on model identification information, the control unit can furthermore recognize motorcycles and bicycles, which are difficult to distinguish with existing sensors that recognize objects. Moreover, since the information on the transmitting object includes information indicating the speed of the vehicle, the control unit can accurately recognize the speed of that vehicle. This makes it possible to achieve the above-described advantageous effects. Display information831output from display information generator830includes object recognition signal824_jin addition to the information on the transmitting object. Note that here, object recognition signal824_jincludes video data or still image data. Accordingly, so long as a control unit such as the electronic control unit (ECU) in the vehicle obtains display information831, an image of other vehicles in the surrounding area can be displayed on the vehicle display, as illustrated in (c) inFIG.69. Furthermore, the control unit can superimpose and display information on the transmitting object on the image that is shown on the display. In other words, information obtained from a light source that is the transmitting object appearing in the image can be shown in association with the light source. The present embodiment is also capable of achieving such advantageous effects. FIG.70illustrates one example of a configuration of a communication system equipped in a vehicle which differs from the example illustrated inFIG.68.
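As an illustration of associating the received information with a light source appearing in the image, consider the following Python sketch; the field names and the overlay structure are assumptions introduced only for this example.

def overlay_entry(info, pixel_location):
    # Build one display annotation from the information on a transmitting
    # object and the pixel location of its light source in the image.
    label = "{make}/{model}, {speed} km/h, light: {location}".format(**info)
    return {"at": pixel_location, "label": label}

entry = overlay_entry(
    {"make": "α", "model": "C", "speed": 48, "location": "front-right"},
    pixel_location=(412, 230))
# Display information 831 would carry such entries together with the
# video or still image data so the display can superimpose them.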
Note that elements in the communication system illustrated inFIG.70that are the same as in the communication system illustrated inFIG.68share like reference numbers, and detailed, repeated description thereof will be omitted. In the communication system illustrated inFIG.68, the information that is transmitted and received via modulated optical signals is the model identification information, the information indicating the speed of the vehicle, and the information indicating the location of the light source on the vehicle body. In the communication system illustrated inFIG.70, the information that is transmitted and received via the modulated optical signal is information related to wireless access to the vehicle, which differs from the communication system illustrated inFIG.68. In other words, the communication system illustrated inFIG.70transmits, to other vehicles via modulated optical signals, information related to wireless access to the vehicle equipped with the communication system (i.e., the host vehicle that transmits the information). Note that here, the host vehicle will be referred to as the “first vehicle”. For example, when the communication method of wireless communication system842is a method that uses a wireless local area network (LAN), the information related to wireless access is information indicating the service set identifier (SSID). In such cases, transmission device810in the communication system illustrated inFIG.70transmits (emits) information indicating the SSID of wireless communication system842included in the communication system, using a modulated optical signal. Note that the communication method used by wireless communication system842is not limited to a method that uses a wireless LAN; when the communication method used by wireless communication system842is some other wireless communication method that uses radio waves, identification information that makes it possible to identify wireless communication system842can be used as the information related to wireless access. Note that this point has already been described in detail in other embodiments of the present specification. Furthermore, a modulated optical signal including information related to wireless access to wireless communication system842included in a communication system equipped in another vehicle (hereinafter “second vehicle”) is transmitted by the communication system of the second vehicle, and the communication system of the first vehicle illustrated inFIG.70receives the modulated optical signal including the information related to wireless access to wireless communication system842included in the communication system of the second vehicle. The communication system of the first vehicle locates the modulated signal transmitted by wireless communication system842included in the communication system of the second vehicle, based on the information related to wireless access to wireless communication system842included in the communication system of the second vehicle, and communicates with the communication system of the second vehicle wirelessly (i.e., over radio waves). More specifically, the communication system of the first vehicle that is illustrated inFIG.70includes wireless access destination specification unit840instead of display information generator830illustrated inFIG.68, and further includes wireless communication system842. Transmission signal processor813_iin transmission device810generates and outputs modulated signal812_ibased on information814_i.
This information814_iis information related to wireless access to the host vehicle, that is to say, information related to wireless access to the first vehicle. Accordingly, light source811_iin transmission device810transmits (emits) a modulated optical signal including the information related to wireless access to the host vehicle. Mirror821_jincluded in reception device820receives, for example, a modulated optical signal including information related to wireless access to the second vehicle, and outputs imaging data822_jfor optical communication. With this, reception data828_joutput from demodulator826_jincludes, as information on the transmitting object, information related to wireless access to the second vehicle. Just like with the communication system illustrated inFIG.68, this allows mirror821_jto simultaneously receive a plurality of modulated optical signals with the communication system illustrated inFIG.70as well. In other words, with the communication system illustrated inFIG.70, a plurality of items of information related to wireless access to a plurality of other vehicles can be simultaneously obtained from one reception set. Wireless access destination specification unit840obtains reception data828_jfrom each of the M reception sets. With this, if there are a plurality of other vehicles that each include the communication system corresponding toFIG.70in the surrounding area of the first vehicle, wireless access destination specification unit840in the first vehicle can simultaneously obtain the plurality of items of information related to wireless access to the plurality of other vehicles. Wireless access destination specification unit840then outputs the plurality of items of information841related to wireless access to the plurality of other vehicles to wireless communication system842. Wireless communication system842selects a communication partner using the plurality of items of information841related to wireless access to the plurality of other vehicles, and wirelessly communicates with the wireless communication system included in the communication system equipped in the selected vehicle. For example, wireless communication system842generates a modulated signal including model identification information for the host vehicle and information indicating the speed of the host vehicle, and transmits the modulated signal to the other vehicle's communication system wirelessly (using radio waves). Furthermore, wireless communication system842included in the first vehicle receives the modulated signal including model identification information for the other vehicle and information indicating the speed of the other vehicle. It is possible to simultaneously receive a plurality of modulated optical signals with one reception set even with the communication system illustrated inFIG.70. Furthermore, since the communication system illustrated inFIG.70includes a plurality of reception sets, even more modulated optical signals can be simultaneously received. Still furthermore, the communication system illustrated inFIG.70is capable of achieving the same advantageous effects as the communication system illustrated inFIG.68. Note that transmission device810included in the communication system illustrated inFIG.70may transmit a modulated optical signal including the information indicating the SSID of wireless communication system842or the cellular terminal ID of wireless communication system842, in addition to or instead of the information related to wireless access.
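A minimal Python sketch of the destination-specification flow described above follows; the data layout and the selection rule (nearest vehicle first) are assumptions for illustration.

def specify_access_destinations(reception_data_sets):
    # Gather the items of information 841 from all M reception sets.
    return [d["wireless_access"] for d in reception_data_sets
            if "wireless_access" in d]

def select_partner(access_items, distance_of):
    # Pick one communication partner; here simply the nearest vehicle.
    return min(access_items, key=distance_of) if access_items else None

reception_data_sets = [{"wireless_access": {"ssid": "vehicle-A"}},
                       {"wireless_access": {"ssid": "vehicle-B"}}]
distances = {"vehicle-A": 12.0, "vehicle-B": 30.0}  # assumed estimates in meters
items = specify_access_destinations(reception_data_sets)
partner = select_partner(items, distance_of=lambda item: distances[item["ssid"]])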
Moreover, as another example that differs from the above example, transmission device810illustrated inFIG.70may transmit, in addition to the information related to access to wireless communication system842, the model identification information, the information indicating the speed of the vehicle, and the information indicating the location of the light source on the vehicle body, just like in the example illustrated inFIG.68. In such cases, transmission device810illustrated inFIG.70may include the functionality of display information generator830described with reference toFIG.68. Embodiment 13 In the present embodiment, a configuration of a device that does not include a light source and generates a modulated optical signal by using reflected outside light such as reflected sunlight, and processing operations performed by such a device will be described. FIG.71Aillustrates one example of a configuration of the transmission device according to the present embodiment. Transmission device900includes optical transmission unit (also referred to as a modulated signal generator)910, liquid crystal controller920, memory921, and interface unit922. Memory921is, for example, read only memory (ROM) or random access memory (RAM), and includes a region for storing modulated signal921a. Liquid crystal controller920controls modulated signal generator910. More specifically, liquid crystal controller920reads modulated signal921afrom memory921, and applies a control voltage based on modulated signal921ato liquid crystal911bof optical transmission unit910. For example, modulated signal921ashall be expressed as s(t). Note that t is time. The control voltage based on modulated signal921ashall be expressed as −s(t)+α. Note that α is a real number. Optical transmission unit910includes liquid crystal panel911and reflective panel912that is stacked on liquid crystal panel911. Liquid crystal panel911includes liquid crystal911band two polarizing panels911aand911cthat sandwich liquid crystal911b. The polarization directions of transmitted light of the two polarizing panels911aand911cdiffer by 90°. The luminous transmittance of liquid crystal panel911changes in accordance with the control voltage applied to liquid crystal911b. More specifically, when the value of the control voltage applied to liquid crystal911bis 0 V, liquid crystal911btwists the oscillation direction of light passing through liquid crystal911bby 90°. As a result, the light that passes through polarizing panel911ais twisted by liquid crystal911band then passes through polarizing panel911c. In other words, when the value of the control voltage applied to liquid crystal911bis 0 V, when sunlight or light from a lamp is incident, the light passes through liquid crystal panel911, reflects off reflective panel912, and once again passes through and is emitted from liquid crystal panel911, as illustrated inFIG.71A. Note that in this example, the light is twisted 90°, but the light may be twisted X°. However, X satisfies the following conditions: 0°≤X<360°, excluding 0° and 180°. With this, the values of X yield different luminances but allow for light to pass. On the other hand, when the value of the control voltage that is applied to liquid crystal911bis a predetermined value that is greater than or less than 0 V (i.e., an operating voltage value), liquid crystal911bdoes not twist the oscillation direction of light passing through liquid crystal911b.
As a result, the light that passes through polarizing panel911ais not twisted by liquid crystal911b, and thus does not pass through polarizing panel911c. In other words, when the value of the control voltage that is applied to liquid crystal911bis a predetermined value that is greater than or less than 0 V (i.e., an operating voltage value), the light incident on liquid crystal panel911does not pass through liquid crystal panel911. Note that in this example, the phrasing “is not twisted” is used, but this may be rephrased as “twisted Y°”. However, Y satisfies the following conditions: 0°≤Y<360°, excluding 0° and 180°. With this, depending on the value of Y, it is possible to create a state in which luminance slightly remains. Accordingly, modulated signal generator910changes the luminance of outside light such as sunlight or light from a lamp, etc., that is incident on modulated signal generator910over time in accordance with modulated signal921ato emit modulated signal921aas an optical signal, that is to say, as a modulated optical signal. For example, modulated signal921amay include information related to the SSID of a base station, as described in Embodiment 3, and may include information related to an encryption key to be used in communication with a base station, as described in Embodiment 4. Moreover, modulated signal921amay include information on the location where transmission device900is fitted. Note that the information included in modulated signal921ais not limited to these examples; modulated signal921amay include other information. Interface unit922is electrically coupled to a device external to transmission device900(an external device), and relays the reception and transmission of signals between liquid crystal controller920and the external device. For example, interface unit922is a device used for universal serial bus (USB) communication, Bluetooth (registered trademark) communication, close-proximity wireless communication, or radio frequency identifier (RFID) communication. For example, at the time of initial configuration of modulated signal921a, liquid crystal controller920receives data related to modulated signal921afrom the external device via interface unit922, and stores modulated signal921ain memory921. Moreover, when updating modulated signal921astored in memory921as well, liquid crystal controller920receives data related to the updated modulated signal921afrom the external device via interface unit922. Liquid crystal controller920then rewrites the existing modulated signal921astored in memory921into the new modulated signal921a. Note that the external device may be, for example, a server connected to transmission device900over a communication network. Moreover, the existing modulated signal921amay be kept stored in memory921and, when necessary, read out so that transmission device900can transmit a modulated optical signal corresponding to that modulated signal. When the external device is a server, the server may manage a database related to the modulated signal and data content. In such cases, the reception device that is the reception partner of transmission device900can obtain the database from the server to demodulate the modulated signal. For example, assume transmission device900changes the data to be transmitted as described above. In such cases, when the reception device cannot obtain information related to changes made in the database, the reception device has difficulty demodulating the data.
However, as described above, the reception device can demodulate the modulated signal transmitted by transmission device900by accessing the server, obtaining the database, and demodulating the modulated signal based on the database. Transmission device900may include photovoltaic generator923and storage battery924. Photovoltaic generator923receives outside light such as sunlight, converts the outside light into power, and stores the converted power in storage battery924. In this case, liquid crystal controller920controls optical transmission unit910based on power supplied from storage battery924. Note that transmission device900may internally include a small battery such as a button cell in place of photovoltaic generator923and storage battery924. With such a transmission device900according to the present embodiment, there is no need to include a light source, and a modulated optical signal can be generated and transmitted simply by applying a control voltage to liquid crystal911b. This saves electricity and reduces the size of transmission device900. Accordingly, transmission device900can be fitted to a small object such as a bicycle, motorcycle, or person. Although the modulated optical signal is generated using reflected sunlight or light from a lamp inFIG.71A, as an example of another method, the modulated optical signal can be generated even when optical transmission unit910is used as a lamp (headlight). Note that in the above description, liquid crystal911bis described as twisting the polarization direction of light passing through liquid crystal911bby 90° or X° based on the value of voltage applied to liquid crystal911b, but this can be reworded as follows: the polarization direction of light having passed through liquid crystal911bis changed by 90° or X° compared to the polarization direction of light before passing through liquid crystal911b. In the above example of liquid crystal panel911, the polarization directions of light passing through polarizing panels911aand911cdiffer by 90°, but the angle formed between the polarization direction of light passing through polarizing panel911aand the polarization direction of light passing through polarizing panel911cis not limited to 90°. For example, the angle formed between the polarization direction of light passing through polarizing panel911aand the polarization direction of light passing through polarizing panel911cmay be less than 90°, and may be 0°. For example, consider the case where the angle formed between the polarization direction of light passing through polarizing panel911aand the polarization direction of light passing through polarizing panel911cis 0°, that is to say, the two polarization directions are the same. In this case, since light does not pass through liquid crystal panel911when a voltage of 0 V is applied to liquid crystal911b, light is not reflected when transmission device900is not operating, and light is reflected as a modulated optical signal in accordance with the voltage applied to liquid crystal911bby liquid crystal controller920when transmission device900is operating. With this configuration, since transmission device900only reflects light when emitting a modulated optical signal, it is possible to easily determine, in the reception device, whether a modulated optical signal is being transmitted.
On the other hand, in a configuration in which polarizing panel911aand polarizing panel911care arranged so that the angle formed between the polarization direction of light passing through polarizing panel911aand the polarization direction of light passing through polarizing panel911cis 90°, unmodulated light is reflected even when transmission device900is not operating, and so this configuration is favorable in applications where, for example, light is desired to be reflected in a state in which a signal is not transmitted. Next, the switching between an operational state in which transmission device900outputs a modulated optical signal and a non-operational state in which transmission device900does not output a modulated optical signal and simply reflects light (or does not reflect light) will be described. Transmission device900may control the control voltage applied to liquid crystal911bby liquid crystal controller920so that a modulated optical signal is output, as an operational state, when, for example, light of a certain intensity or higher is incident on transmission device900. In such cases, the determination of whether light of a certain intensity or higher is incident on transmission device900may be performed based on the power converted by photovoltaic generator923. For example, transmission device900may control liquid crystal panel911using liquid crystal controller920when power of a predetermined threshold or higher is output from photovoltaic generator923. Moreover, transmission device900may switch between the operational state and the non-operational state according to a control signal input from an external source via interface unit922. Here, the control signal input from an external source may include information specifying either the timing or cycle for generating and outputting the modulated optical signal, and may include information specifying the data to be transmitted as the modulated optical signal. With this configuration, when light to be reflected and modulated by transmission device900is not incident on transmission device900or when a modulated optical signal is output but the strength of the modulated optical signal is so weak that it is difficult for the reception device to receive it, it is possible to reduce the amount of power consumed to a level lower than when operations for constantly outputting a modulated optical signal are always performed, since operations performed by at least part of transmission device900are stopped. Note that transmission device900may have any configuration that generates and outputs modulated optical signals in a constant operational state. Note that the configuration of the device that generates a modulated signal using liquid crystals is not limited to the example illustrated inFIG.71A. For example, data corresponding to modulated signal921amay be accumulated in memory921. A device that does not include liquid crystals may be stacked together with reflective panel912as illustrated inFIG.71A. Here, “liquid crystals” shall be something that has a state in which it transmits light (note that this state may exhibit a slight decrease in luminance) and a state in which it blocks light (note that luminance may slightly remain in this state). The state in which the liquid crystals transmit light and the state in which the liquid crystals block light are controlled in the time domain so as to generate modulated signal921a. With this, it is possible to generate a time-domain signal of modulated signal921a.
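The time-domain control of the transmit and block states can be illustrated by the following Python sketch of the control voltage −s(t)+α; the voltage level and the assumption that s(t) takes the values 0 or α are illustrative only.

ALPHA_V = 5.0  # assumed operating voltage of liquid crystal 911b

def control_voltage(s_values, alpha=ALPHA_V):
    # Apply v(t) = -s(t) + alpha to each sample of modulated signal 921a.
    return [-s + alpha for s in s_values]

# With s(t) in {0, alpha}: s = alpha -> 0 V (liquid crystal 911b transmits,
# reflected light is bright); s = 0 -> alpha volts (light is blocked).
voltages = control_voltage([ALPHA_V, 0.0, ALPHA_V, ALPHA_V, 0.0])
# -> [0.0, 5.0, 0.0, 0.0, 5.0]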
Moreover, in the transmission device illustrated inFIG.71A, the liquid crystal panel may include a plurality of pixels. In such cases, the timing of the time-based change of the state in which the liquid crystals transmit light and the state in which the liquid crystals block light can be implemented in the same manner. Moreover, in the above example, the transmission device is exemplified as including liquid crystals, but even if some other device that can change over time between a state in which it transmits light and a state in which it blocks light is used, it can be implemented in the same manner. Moreover, a plurality of the transmission devices illustrated inFIG.71Acan be operated in parallel. Operations performed in such cases will be described with reference toFIG.71B. InFIG.71B, for example, a liquid crystal screen includes liquid crystals corresponding to first liquid crystal region transmission device7101, liquid crystals corresponding to second liquid crystal region transmission device7102, liquid crystals corresponding to third liquid crystal region transmission device7103, liquid crystals corresponding to fourth liquid crystal region transmission device7104, liquid crystals corresponding to fifth liquid crystal region transmission device7105, liquid crystals corresponding to sixth liquid crystal region transmission device7106, liquid crystals corresponding to seventh liquid crystal region transmission device7107, liquid crystals corresponding to eighth liquid crystal region transmission device7108, and liquid crystals corresponding to ninth liquid crystal region transmission device7109. For example, first liquid crystal region transmission device7101, second liquid crystal region transmission device7102, third liquid crystal region transmission device7103, fourth liquid crystal region transmission device7104, fifth liquid crystal region transmission device7105, sixth liquid crystal region transmission device7106, seventh liquid crystal region transmission device7107, eighth liquid crystal region transmission device7108, and ninth liquid crystal region transmission device7109each have the configuration of transmission device900illustrated inFIG.71A. However, it is not necessary to individually provide liquid crystal controller920, interface unit922, the battery, and the power supply for each transmission device; first liquid crystal region transmission device7101, second liquid crystal region transmission device7102, third liquid crystal region transmission device7103, fourth liquid crystal region transmission device7104, fifth liquid crystal region transmission device7105, sixth liquid crystal region transmission device7106, seventh liquid crystal region transmission device7107, eighth liquid crystal region transmission device7108, and ninth liquid crystal region transmission device7109may share a common liquid crystal controller920, interface unit922, battery, and power supply.
For example, first liquid crystal region transmission device7101, second liquid crystal region transmission device7102, third liquid crystal region transmission device7103, fourth liquid crystal region transmission device7104, fifth liquid crystal region transmission device7105, sixth liquid crystal region transmission device7106, seventh liquid crystal region transmission device7107, eighth liquid crystal region transmission device7108, and ninth liquid crystal region transmission device7109illustrated inFIG.71Bmay each transmit (emit) a different modulated optical signal. In such cases, for example, it is sufficient so long as the image sensor that receives the modulated optical signal captures the liquid crystals corresponding to first liquid crystal region transmission device7101, the liquid crystals corresponding to second liquid crystal region transmission device7102, the liquid crystals corresponding to third liquid crystal region transmission device7103, the liquid crystals corresponding to fourth liquid crystal region transmission device7104, the liquid crystals corresponding to fifth liquid crystal region transmission device7105, the liquid crystals corresponding to sixth liquid crystal region transmission device7106, the liquid crystals corresponding to seventh liquid crystal region transmission device7107, the liquid crystals corresponding to eighth liquid crystal region transmission device7108, and the liquid crystals corresponding to ninth liquid crystal region transmission device7109; the reception device can demodulate the respective modulated optical signals to obtain reception data corresponding to the data transmitted by first liquid crystal region transmission device7101, reception data corresponding to the data transmitted by second liquid crystal region transmission device7102, reception data corresponding to the data transmitted by third liquid crystal region transmission device7103, reception data corresponding to the data transmitted by fourth liquid crystal region transmission device7104, reception data corresponding to the data transmitted by fifth liquid crystal region transmission device7105, reception data corresponding to the data transmitted by sixth liquid crystal region transmission device7106, reception data corresponding to the data transmitted by seventh liquid crystal region transmission device7107, reception data corresponding to the data transmitted by eighth liquid crystal region transmission device7108, and reception data corresponding to the data transmitted by ninth liquid crystal region transmission device7109. Note that inFIG.71B, different modulated optical signals need not be transmitted by the different transmission devices. For example, first liquid crystal region transmission device7101and second liquid crystal region transmission device7102may transmit the same modulated optical signal; first liquid crystal region transmission device7101and ninth liquid crystal region transmission device7109may transmit the same modulated optical signal; fifth liquid crystal region transmission device7105and eighth liquid crystal region transmission device7108may transmit the same modulated optical signal; and fourth liquid crystal region transmission device7104, fifth liquid crystal region transmission device7105, and sixth liquid crystal region transmission device7106may transmit the same modulated optical signal.
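One possible assignment of modulated optical signals to the nine regions, combining some of the groupings mentioned above purely for illustration, can be written as the following Python sketch.

# Regions that share an entry transmit the same modulated optical signal.
REGION_SIGNAL = {
    7101: "signal_a", 7102: "signal_a",                    # first and second share
    7103: "signal_b",
    7104: "signal_c", 7105: "signal_c", 7106: "signal_c",  # fourth through sixth share
    7107: "signal_d",
    7108: "signal_e",
    7109: "signal_f",
}

def regions_for(signal_name):
    # List the liquid crystal regions assigned to a given signal.
    return [region for region, name in REGION_SIGNAL.items() if name == signal_name]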
Note that the method used to transmit the same modulated optical signal from a plurality of liquid crystal region transmission devices is not limited to the example above. As described above, by dividing the liquid crystal screen into a plurality of regions and generating a plurality of modulated optical signals across the plurality of regions, it is possible to achieve the advantageous effect of an improvement in data transmission speeds. FIG.72illustrates a usage example of transmission device900. For example, transmission device900is fitted to road sign991, vehicle992, bicycle993, or clothes worn by person994. Other examples of objects to which transmission device900may be fitted include an obstruction in a parking lot, such as a cart, trolley, or wall, or a curb. When transmission device900is fitted to road sign991, optical transmission unit910is affixed to the front surface of road sign991. This makes it possible to use transmission device900as a device for providing road assistance information. In other words, transmission device900changes the luminance of reflected light in accordance with modulated signal921ato transmit road assistance information, which is information that road sign991should convey, to vehicles999, etc., in the surrounding area of road sign991. Here, when sunlight can be anticipated, such as during daytime, for example, the modulated optical signal can be obtained by reflective panel912reflecting the sunlight. In such cases, power for driving liquid crystal controller920can be obtained by, for example, photovoltaic generator923. Moreover, when sunlight cannot be anticipated, such as during nighttime, for example, the modulated optical signal can be obtained by reflective panel912reflecting light from vehicles or lamps such as street lights, for example. Moreover, when transmission device900is fitted to first vehicle992, transmission device900is affixed to the rear bumper of vehicle992, for example. With this, transmission device900can transmit information related to first vehicle992to, for example, second vehicle999located behind first vehicle992, by changing the luminance of the reflected light according to modulated signal921a. For example, during daytime when the weather is clear, light sources such as the tail lamps of first vehicle992are not turned on, but in these types of cases, the modulated optical signal can be transmitted by using reflected light, for example. Moreover, when sunlight cannot be anticipated, such as during nighttime, for example, second vehicle999(and other vehicles) can obtain the modulated optical signal by reflective panel912reflecting light emitted by second vehicle999or light from lamps such as street lights, for example. When transmission device900is fitted to bicycle993, transmission device900is affixed as a reflector to the back of bicycle993. With this, transmission device900can transmit information related to bicycle993to second vehicle999by changing the luminance of the reflected light according to modulated signal921a. In this way, since the size of transmission device900can be reduced according to the present embodiment as well, it is possible to fit transmission device900to a small object such as bicycle993. When transmission device900is fitted to clothes worn by person994, transmission device900is affixed as a reflector to the clothes worn by the person.
With this, transmission device900can transmit information related to person994to, for example, second vehicle999in the surrounding area of person994, by changing the luminance of the reflected light according to modulated signal921a. In this way, since the size of transmission device900can be reduced according to the present embodiment as well, it is possible to fit transmission device900to a small object such as a person via their clothes. Note that objects to which transmission device900can be fitted are not limited to the above examples. Although the present embodiment is exemplified as including optical transmission unit910, a color QR code (registered trademark) may be included instead of transmission device900. Moreover, although the luminous transmittance of the light is uniformly changed across the entire liquid crystal panel911in the present embodiment, the luminous transmittance of the light may be changed per region of liquid crystal panel911. For example, assume liquid crystal panel911is divided into four or eight regions. In such cases, the luminous transmittance of each region into which liquid crystal panel911is divided is controlled. This allows transmission device900to simultaneously transmit a plurality of modulated optical signals. The plurality of modulated optical signals may be the same modulated optical signal or may be mutually different modulated optical signals. Moreover, the luminous transmittance of each region need not be changed with respect to time. This makes it possible to transmit information according to a spatial pattern of luminous transmittance for each region. For example, assume the above-described state in which the liquid crystals transmit light is defined as “1” and the above-described state in which the liquid crystals block light is defined as “0”. Moreover, assume the state of first liquid crystal region transmission device7101illustrated inFIG.71Bis the state in which the liquid crystals block light. Under these conditions, the reception device receives information indicating “0” by receiving the modulated optical signal transmitted by first liquid crystal region transmission device7101. Similarly, the reception device receives information indicating “0” when the state of second liquid crystal region transmission device7102is the state in which the liquid crystals block light. The reception device receives information indicating “0” when the state of third liquid crystal region transmission device7103is the state in which the liquid crystals block light. The reception device receives information indicating “0” when the state of fourth liquid crystal region transmission device7104is the state in which the liquid crystals block light. The reception device receives information indicating “0” when the state of fifth liquid crystal region transmission device7105is the state in which the liquid crystals block light. The reception device receives information indicating “0” when the state of sixth liquid crystal region transmission device7106is the state in which the liquid crystals block light. The reception device receives information indicating “0” when the state of seventh liquid crystal region transmission device7107is the state in which the liquid crystals block light. The reception device receives information indicating “0” when the state of eighth liquid crystal region transmission device7108is the state in which the liquid crystals block light. 
The reception device receives information indicating "1" when the state of ninth liquid crystal region transmission device7109is the state in which the liquid crystals transmit light. Accordingly, in this example, the reception device receives 9-bit information indicating "000000001" (binary). As a result of first liquid crystal region transmission device7101through ninth liquid crystal region transmission device7109maintaining the above-described states, 9-bit information indicating "000000001" is continuously transmitted. Note that the state may be changed over time, and in such cases, transmission device900(optical transmission unit910) transmits data in 9-bit units along the time axis.

Moreover, a color QR code (registered trademark) or transmission device900, for example, may be fitted to, for example, a bag or a cart. Note that transmission device900may transmit a cellular terminal ID by changing the luminance of reflected light in accordance with modulated signal921aindicating that cellular terminal ID.

Embodiment 14

In the present embodiment, the configuration of, for example, mirror821_jincluded in reception device820described in Embodiment 12 will be described in detail.

FIG.73illustrates one example of a configuration of a mirror according to the present embodiment. Mirror1800illustrated inFIG.73corresponds to mirror821_jincluded in reception device820described in Embodiment 12. Mirror1800includes array lens1810and image sensor1820. Array lens1810includes a substantially rectangular, plate-shaped substrate1811, and a plurality of lenses1812arrayed on substrate1811. For example, substrate1811and the plurality of lenses1812are formed as an integrated structure from, for example, resin or glass. The plurality of lenses1812are arranged in three rows and four columns along the surface of substrate1811. In this example, the focal length of each of the plurality of lenses1812is different.

Image sensor1820receives light projected by the plurality of lenses1812of array lens1810. Since the plurality of lenses1812are arrayed in three rows and four columns, the regions of image sensor1820are also arrayed in three rows and four columns, and each region of image sensor1820receives light projected from the lens1812corresponding to that region. Accordingly, when a transmission device that transmits a modulated optical signal is imaged on image sensor1820by array lens1810, each region of image sensor1820receives light including that modulated optical signal. Each region of image sensor1820then outputs imaging data (an optical reception signal) based on the light reception result. In this example, as described above, the focal length of each of the plurality of lenses1812of array lens1810is different. Accordingly, the plurality of regions of image sensor1820can simultaneously output a plurality of items of imaging data (a plurality of optical reception signals) representing the same scene at mutually different focal lengths.
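Since each region of image sensor1820corresponds to a lens1812with a different focal length, a receiver can simply select the region whose optical reception signal exhibits the clearest modulation. The following is a minimal sketch of such a selection under that assumption; the 3x4 region layout matches FIG.73, but the quality metric (the amplitude swing of each region's signal) is a hypothetical placeholder rather than anything specified in the present disclosure.

```python
# Minimal sketch (not from the specification): choosing, among the image
# sensor regions behind array lens1810, the region whose optical reception
# signal has the best quality. The quality metric is a hypothetical
# placeholder; the 3x4 region layout matches FIG.73.

def region_quality(samples):
    """Modulation depth of one region's optical reception signal."""
    return max(samples) - min(samples)

def pick_best_region(regions):
    """regions: dict mapping (row, col) -> list of luminance samples."""
    return max(regions, key=lambda rc: region_quality(regions[rc]))

regions = {(r, c): [0.5, 0.5] for r in range(3) for c in range(4)}
regions[(1, 2)] = [0.1, 0.9, 0.1, 0.9]   # in-focus region shows clear keying
print(pick_best_region(regions))          # -> (1, 2)
```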
With any of the plurality of items of imaging data (the plurality of optical reception signals), there is a high probability that the modulated optical signal can be received with favorable reception quality. As a result, the reception device that includes mirror1800can lighten the processing load for adjusting the focal length in order to receive the modulated optical signal from the transmission device, and can receive the modulated signal. In other words, communication distance can be secured over a wide range with this reception device.

Moreover, in the present embodiment, since a plurality of items of imaging data having mutually different focal lengths can be obtained without the need for a mechanism that moves mechanically, the occurrence of malfunctions in the reception device, for example, can be reduced. In other words, when a mechanism that mechanically moves is used, movement of the mechanism is restricted, for example, by the formation of condensation and the freezing of that condensation due to changes in temperature, and by high-temperature environments; the present embodiment, however, is less susceptible to such negative effects from changes in temperature. Accordingly, it is possible to reduce the occurrence of malfunctions and the like.

Note that in the above example, mirror1800includes a single image sensor1820, but mirror1800may include a plurality of image sensors arrayed in a matrix. In other words, each of the plurality of image sensors receives light projected by the lens1812that corresponds to that image sensor. Moreover, in the above example, array lens1810includes 12 lenses1812arrayed in three rows and four columns, but the number of lenses1812and the number of rows and columns into which lenses1812are arrayed are not limited to this example. It is sufficient so long as array lens1810includes two or more lenses1812, and these two or more lenses1812may be arrayed in any manner.

Embodiment 15

In the present embodiment, control of the luminance of a light source included in a transmission device that transmits a modulated optical signal, which is an optical signal, will be described. For example, when a light source of a transmission device described in each of the above embodiments is used for lighting purposes, the transmission of the section of the modulated optical signal that is for transmitting data (hereinafter referred to as the information transmission period) causes a reduction in the amount of light output and a reduction in the luminance of the output light that is used for lighting. In view of this, in the present embodiment, an information transmission period and a lighting period are provided in order to secure a sufficient amount of light for lighting. The information transmission period is a period exclusively for transmitting the section of the modulated optical signal that is for transmitting data, and the lighting period is a period exclusively for lighting. For example, the information transmission period and the lighting period are arranged alternately. However, the information transmission period and the lighting period need not be arranged alternately, and may be arranged in any manner along the time axis.

FIG.74Aillustrates one example of changes in luminance in the information transmission period and the lighting period. As illustrated in (a) inFIG.74A, the transmission device causes the light source to emit light having first luminance y1in the lighting period.
Moreover, as illustrated in (a) inFIG.74A, the transmission device transmits, in the information transmission period, a modulated optical signal configured of, for example, second luminance y2and third luminance y3lower than second luminance y2. However, the method of transmitting a modulated optical signal in the information transmission period is not limited to the example illustrated in (a) inFIG.74A. This makes it possible to inhibit a reduction in the amount of light for lighting compared to when no lighting period is provided.

Here, the transmission device may set first luminance y1and second luminance y2to different luminances. For example, the transmission device may set first luminance y1higher than second luminance y2, as illustrated in (a) inFIG.74A. With this, the reduction in light for lighting in the information transmission period is compensated for by light for lighting in the lighting period, which makes it possible to further inhibit a reduction in the amount of light for lighting.

Moreover, the transmission device may change the ratio between first luminance y1and second luminance y2or the time ratio between the information transmission period and the lighting period in accordance with the ambient brightness. In other words, the time-based configuration of the information transmission period and the lighting period may be changed according to the ambient environment or communication environment, such as data transmission speed needs or communication quality needs. In such cases, luminance y2of the information transmission period and luminance y1of the lighting period may be changed via the time-based configuration of the information transmission period and the lighting period. Moreover, for example, when the ambient brightness changes with changes in the time of day, such as from morning to daytime, and from daytime to nighttime, the transmission device may change the above-described luminance ratio or time ratio according to the time of day.

Moreover, the transmission device may control the light source so that a guard interval is provided between the information transmission period and the lighting period, as illustrated in (b) and (c) inFIG.74A. With this, it is possible to reduce the unpleasantness felt from the switching between the information transmission period and the lighting period. This furthermore makes it easier for the reception device to receive the first symbol in the information transmission period.

The transmission device according to the present embodiment is implemented with, for example, the configuration of transmission device100illustrated inFIG.6. In other words, transmission device100includes light source104and transmission unit102. Transmission unit102causes light source104to emit a modulated optical signal having first luminance y1in the lighting period, and causes light source104to emit a modulated optical signal having second luminance y2in the information transmission period.

On the other hand, the reception device that receives the modulated optical signal transmitted from the transmission device according to the present embodiment receives light based on the modulated optical signal, and by receiving, in the modulated optical signal corresponding to the information transmission period, for example, a reference signal for synchronizing a time or frame, extracts the information transmission period from the reception signals.
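For reference, the transmitter-side luminance control described with reference toFIG.74Acan be sketched as follows. This is a minimal illustration only; the luminance values y1, y2, and y3, the period lengths, and the guard-interval level are hypothetical placeholders, and a real implementation would drive light source104rather than return a list of samples.

```python
# Minimal sketch (not from the specification): transmitter-side luminance
# scheduling with a lighting period, a guard interval, and an information
# transmission period, as in FIG.74A. All numeric values are hypothetical.

Y1, Y2, Y3 = 1.0, 0.8, 0.2   # first, second, third luminance (Y1 > Y2 > Y3)
LIGHTING_SAMPLES = 100        # samples per lighting period
GUARD_SAMPLES = 10            # samples per guard interval

def luminance_schedule(bits):
    """Return luminance samples: lighting period, guard interval, then an
    information transmission period that keys each bit to Y2 or Y3."""
    samples = [Y1] * LIGHTING_SAMPLES                # lighting period
    samples += [(Y1 + Y2) / 2] * GUARD_SAMPLES       # guard interval
    for bit in bits:                                 # information period
        samples.append(Y2 if bit else Y3)
    return samples

frame = luminance_schedule([1, 0, 1, 1, 0])
```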
Having extracted the information transmission period in this way, the reception device outputs analysis information by analyzing data based on the modulated optical signal. With this, in a state in which a reduction of the amount of light for lighting has been inhibited, the reception device can receive the modulated optical signal and obtain analysis information. Furthermore, first luminance y1and second luminance y2may be different. For example, first luminance y1may be higher than second luminance y2. With this, it is possible to receive the modulated optical signal and obtain analysis information while the reduction in light for lighting in the information transmission period is being compensated for by light for lighting in the lighting period. Moreover, the reception device may receive light in the guard interval provided between the lighting period and the information transmission period. This makes it easier for the reception device to receive the first symbol in the information transmission period.

The reception device according to the present embodiment is implemented with, for example, the configuration of reception device150illustrated inFIG.6. In other words, reception device150includes light receiver151and data analyzer155. Light receiver151receives a modulated optical signal transmitted from light source104by receiving, in the information transmission period, a signal corresponding to a modulated optical signal having second luminance y2. Data analyzer155then outputs analysis information by analyzing data based on the modulated optical signal.

FIG.74Billustrates one example, which differs from the example illustrated inFIG.74A, of a frame configuration, along the time axis, of the transmission device that transmits (emits) the modulated optical signal. Note that inFIG.74B, time is represented on the horizontal axis.

InFIG.74B, (a) illustrates one example of a first frame configuration of the transmission device. In (a) inFIG.74B, a frame of scan period7401is transmitted. Note that scan period7401is a time period in which a frame for implementing Embodiment 13 is transmitted. Accordingly, the first vehicle that includes the transmission device includes the reception device. The first vehicle emits light so that the light is incident on transmission device900illustrated inFIG.71Athat is in the surrounding area of the first vehicle, as described in Embodiment 13 (a plurality of transmission devices900may be present). This light emission period corresponds to scan period7401inFIG.74B(this also applies to scan period7411, scan period7413, scan period7421, scan period7423, scan period7431, and scan period7434). For example, in the scan period, the first vehicle may emit light while changing the direction in which light is emitted. The reception device included in the first vehicle obtains the data by receiving and demodulating the modulated optical signal transmitted by transmission device900reflecting light. Note that a reception device included in a vehicle other than the first vehicle may obtain the data by receiving and demodulating the modulated optical signal.

InFIG.74B, (b) illustrates an example of a second frame configuration of the transmission device. Scan periods7411and7413in (b) inFIG.74Bserve the same role as scan period7401in (a) inFIG.74B. Lighting periods7412and7414in (b) inFIG.74Bcorrespond to the lighting period illustrated inFIG.74A, and serve the same role as described with reference toFIG.74A. Accordingly, repeated description will be omitted.
InFIG.74B, (c) illustrates an example of a third frame configuration of the transmission device. Scan periods7421and7423in (c) inFIG.74Bserve the same role as scan period7401in (a) inFIG.74B. Information transmission periods7422and7424in (c) inFIG.74Bcorrespond to the information transmission period illustrated inFIG.74A, and serve the same role as described with reference toFIG.74A. Accordingly, repeated description will be omitted.

InFIG.74B, (d) illustrates an example of a fourth frame configuration of the transmission device. Scan periods7431and7434in (d) inFIG.74Bserve the same role as scan period7401in (a) inFIG.74B. Information transmission periods7432and7435in (d) inFIG.74Bcorrespond to the information transmission period illustrated inFIG.74A, and serve the same role as described with reference toFIG.74A. Lighting periods7433and7436inFIG.74Bcorrespond to the lighting period illustrated inFIG.74A, and serve the same role as described with reference toFIG.74A. Accordingly, repeated description will be omitted.

Although the frame configurations illustrated inFIG.74AandFIG.74Bwere presented as examples of the frame configuration of the modulated optical signal emitted by the transmission device, the frame configuration is not limited to these examples. So long as the frame is configured of at least one of the three types of periods (namely, the scan period, the lighting period, and the information transmission period), such a frame configuration can be implemented in the same manner as described above and achieve the same advantageous effects as described above. Note that the scan period, the lighting period, and the information transmission period may include other symbols such as a control information transmission symbol or a reference symbol.

The transmission device may change the frame configuration in accordance with the communication status or the environment that the vehicle including the transmission device is in, for example. For example, the types of periods that make up the frame may be changed, and the length of time of each period may be changed. Moreover, the user may set the frame configuration of the transmission device.

FIG.74Cillustrates a configuration of the transmission device and the reception device that receives the modulated optical signal transmitted by the transmission device. Note that inFIG.74C, objects that operate the same as inFIG.6share like reference marks. Accordingly, repeated description thereof will be omitted. FIG.74Cdiffers fromFIG.6in that transmission unit102receives an input of control signal7499. In this example, transmission unit102changes the frame configuration of the modulated optical signal emitted by light source104in accordance with control signal7499. Note that control signal7499includes, for example, a signal based on communication status, a signal based on information such as information on the environment of the vehicle that includes the transmission device, and/or a signal indicating settings configured by the user.

Note that the temporal length of the lighting period may be changed according to the length of the information transmission period in order to adjust the brightness. Moreover, the temporal length of the lighting period may be changed according to the environment of the vehicle (for example, the ambient brightness as affected by the time of day or the ambient brightness as affected by the weather).
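As an aside, the assembly of a frame from the scan, lighting, and information transmission periods under control signal7499, as described above forFIG.74BandFIG.74C, can be sketched as follows. The control-signal fields and the period representation are hypothetical placeholders, not structures defined in the present disclosure.

```python
# Minimal sketch (not from the specification): assembling a frame from the
# three period types based on a control signal, as in FIG.74B/FIG.74C.
# Field names, period ordering, and contents are hypothetical placeholders.

from dataclasses import dataclass

@dataclass
class ControlSignal:
    needs_scan: bool       # e.g., collect surrounding-area information
    needs_lighting: bool   # e.g., ambient brightness is low
    data_bits: list        # data for the information transmission period

def build_frame(ctrl: ControlSignal):
    """Return a frame as an ordered list of (period_type, payload) tuples."""
    frame = []
    if ctrl.needs_scan:
        frame.append(("scan", None))                   # cf. scan period7401
    if ctrl.data_bits:
        frame.append(("information", ctrl.data_bits))  # information period
    if ctrl.needs_lighting:
        frame.append(("lighting", None))               # lighting period
    return frame

frame = build_frame(ControlSignal(True, True, [1, 0, 1]))
# e.g., [("scan", None), ("information", [1, 0, 1]), ("lighting", None)]
```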
Furthermore, the length of the lighting period may be changed taking into consideration both the length of the information transmission period and the environment of the vehicle. This achieves the advantageous effect that favorable brightness can be achieved.

Moreover, the transmission device may be characterized by the transmission of a frame configuration made up of a scan period, like the frame configuration illustrated in (a) inFIG.74B, and/or a frame configuration made up of scan periods and lighting periods, like the frame configuration illustrated in (b) inFIG.74B. This achieves the advantageous effect that information related to the surrounding environment can be obtained.

Moreover, the user may set the frame configuration so as to be made up of a scan period, such as the frame configuration illustrated in (a) inFIG.74B, and/or so as to be made up of scan periods and lighting periods, such as the frame configuration illustrated in (b) inFIG.74B. For example, the user may configure settings so that the transmission device transmits a modulated optical signal having a frame configuration like that in (a) inFIG.74Bor (b) inFIG.74B. In such cases, the vehicle including the transmission device collects information about the surrounding area (for example, light source104may be configured to emit light in various directions). On the other hand, when the user is driving on the road, it is conceivable that the user may want to choose a frame configuration. This is because, for example, when light is emitted in various directions, this light may interfere with a user driving a vehicle in an oncoming lane. Being able to choose a frame configuration makes it possible to achieve the advantageous effect that such an occurrence can be avoided.

Note that in the present embodiment, the transmission device and the reception device are exemplified as, but not limited to, being equipped in a vehicle. The transmission device and the reception device may be equipped in something other than a vehicle, and may be provided as stand-alone units. Even in such cases, the operations described in the present embodiment can be implemented and the same advantageous effects can be achieved.

In the above example, an information transmission period and a lighting period are provided in order to ensure a sufficient amount of light for lighting, but a plurality of light sources may be used. In other words, the transmission device may include a light source exclusively for transmitting modulated optical signals (hereinafter referred to as a "communications light source") and a light source exclusively for lighting (hereinafter referred to as a "lighting light source"). In such cases, the transmission device may include a plurality of communications light sources. The plurality of communications light sources may have mutually different transmission speeds. Furthermore, any communications light source may output a modulated optical signal towards the ground.

FIG.75illustrates an example of a plurality of communications light sources outputting modulated optical signals. First communications light source1911outputs modulated optical signal1911atoward reception device1950. Second communications light source1912outputs modulated optical signal1912a. Here, for example as illustrated inFIG.75, from the perspective of reception device1950, second communications light source1912is obstructed by first communications light source1911.
Second communications light source1912therefore outputs modulated optical signal1912atoward the ground. With this, modulated optical signal1912areflects off the ground and is incident on reception device1950. As a result, even when second communications light source1912is obstructed by first communications light source1911, reception device1950can receive modulated optical signal1912afrom second communications light source1912.

(Supplemental Information 1)

It goes without saying that the embodiments described in the present specification may be combined with other aspects. Moreover, the embodiments are merely examples. For example, while a modulation scheme, an error correction coding method (error correction code, code length, encode rate, etc., to be used), control information, etc., are exemplified, it is possible to carry out the present disclosure with the same configuration even when other modulation schemes, error correction coding methods (error correction code, code length, encode rate, etc., to be used), control information, etc., are applied.

Regarding the modulation scheme, even when a modulation scheme other than the modulation schemes described herein is used, it is possible to carry out the embodiments and the other subject matter described herein. For example, amplitude phase shift keying (APSK) (such as 16APSK, 64APSK, 128APSK, 256APSK, 1024APSK, and 4096APSK), pulse amplitude modulation (PAM) (such as 4PAM, 8PAM, 16PAM, 64PAM, 128PAM, 256PAM, 1024PAM, and 4096PAM), phase shift keying (PSK) (such as BPSK, QPSK, 8PSK, 16PSK, 64PSK, 128PSK, 256PSK, 1024PSK, and 4096PSK), and quadrature amplitude modulation (QAM) (such as 4QAM, 8QAM, 16QAM, 64QAM, 128QAM, 256QAM, 1024QAM, and 4096QAM) may be applied, and in each modulation scheme, uniform mapping or non-uniform mapping may be performed. Moreover, a method for arranging 2, 4, 8, 16, 64, 128, 256, 1024, etc., signal points on an I-Q plane (a modulation scheme having 2, 4, 8, 16, 64, 128, 256, 1024, etc., signal points) is not limited to a signal point arrangement method of the modulation schemes described herein.

In the present specification, conceivable devices that include the wireless communication device described in the present specification include a communications and broadcast apparatus, such as a broadcast station, a base station, an access point, a terminal, or a mobile phone, or a communication apparatus such as a television, a radio, a terminal, a personal computer, a mobile phone, an access point, or a base station. Moreover, the wireless communication device described in the present specification is conceivably a device having communication functions that is connectable via some interface to a device for executing an application in, for example, a television, a radio, a personal computer, or a mobile phone.

In the present specification, conceivable devices that include the receiver described in the present specification include a communications and broadcast apparatus, such as a broadcast station, a base station, an access point, a terminal, or a mobile phone, or a communication apparatus such as a television, a radio, a terminal, a personal computer, a mobile phone, an access point, or a base station.

Moreover, in the wireless communication via radio waves according to this embodiment, symbols other than data symbols, such as pilot symbols (preamble, unique word, post-amble, reference symbol, etc.) or symbols for control information, may be arranged in any way in a frame.
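To make the modulation-scheme discussion above concrete, the following is a minimal sketch of uniform 16QAM mapping of bit groups to signal points on the I-Q plane. The Gray-coded bit ordering and the 1/sqrt(10) normalization shown here are one common convention, not a mapping mandated by the present disclosure.

```python
# Minimal sketch (not from the specification): uniform 16QAM mapping of
# 4-bit groups to signal points on the I-Q plane. The Gray-coded bit
# ordering and unit-average-energy normalization are one common convention.

import math

LEVELS = {(0, 0): -3, (0, 1): -1, (1, 1): 1, (1, 0): 3}  # Gray-coded levels
NORM = 1 / math.sqrt(10)  # gives unit average symbol energy for 16QAM

def map_16qam(bits):
    """Map a bit sequence (length a multiple of 4) to complex symbols."""
    symbols = []
    for k in range(0, len(bits), 4):
        i = LEVELS[(bits[k], bits[k + 1])]      # in-phase component
        q = LEVELS[(bits[k + 2], bits[k + 3])]  # quadrature component
        symbols.append(complex(i, q) * NORM)
    return symbols

print(map_16qam([0, 0, 1, 0]))  # one symbol: (-3 + 3j) / sqrt(10)
```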
Here, the terms "pilot symbol" and "control information symbol" are used, but the naming of such symbols is not important; the functions that they perform are. A pilot symbol may be a known symbol that is modulated using PSK modulation in the transmitter and the receiver (alternatively, the receiver may come to know the symbol transmitted by the transmitter by the symbol being transmitted periodically), and the receiver performs, for example, frequency synchronization, time synchronization, and channel estimation (channel state information (CSI) estimation) (for each modulated signal) by using the symbol. Moreover, the symbol for control information is a symbol for transmitting information required to be transmitted to a communication partner in order to establish communication pertaining to anything other than data (such as application data) (this information is, for example, the modulation scheme, error correction encoding scheme, or encode rate of the error correction encoding scheme used in the communication, or settings information in an upper layer).

(Supplemental Information 2)

Methods based on specifications stipulated by Moving Picture Experts Group (MPEG) 2, H.264/Advanced Video Coding (AVC), H.265/High Efficiency Video Coding (HEVC), VC-1, VP8, and VP9, etc., may be used as the video encoding method described in the above embodiments. However, a video encoding method different from the above examples may be used as the video encoding method described in the above embodiments.

Note that the present disclosure is not limited to the above embodiments; various modifications can be applied to them. For example, the above embodiments are implemented as a communication device, but this example is not limiting; the embodiments may be realized as a communication method implemented as software, hardware, or software paired with hardware.

Note that a program for executing the above-described communication method, transmission method, or reception method may be stored in read only memory (ROM) in advance, and a central processing unit (CPU) may be caused to operate according to this program. Moreover, the program for executing the communication method, transmission method, or reception method may be stored in a computer-readable storage medium, the program stored in the storage medium may be recorded in random access memory (RAM) in a computer, and the computer may be caused to operate according to this program.

Each functional block of each of the above-described embodiments, etc., may be partially or entirely realized as a large scale integration (LSI) circuit, which is an integrated circuit. Each process described in each of the above embodiments may be controlled partially or entirely by one LSI circuit or a combination of LSI circuits. These LSI circuits may be formed as separate chips, or may be formed as one chip so as to include the entire configuration or part of the functional block. The LSI circuit may include a data input and a data output. The term "LSI circuit" is used here, but the integrated circuit may also be referred to as an integrated circuit (IC), a system LSI circuit, a super LSI circuit, or an ultra LSI circuit depending on the degree of integration.

Moreover, the circuit integration technique is not limited to LSI, and may be realized by a dedicated circuit or a general purpose processor. After manufacturing of the LSI circuit, a field programmable gate array (FPGA) or a reconfigurable processor which is reconfigurable in connection or settings of circuit cells inside the LSI circuit may be used.
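As one concrete illustration of how a receiver might use a known pilot or preamble sequence for time synchronization, as discussed above, consider the following minimal sketch. The preamble pattern, the correlation metric, and the detection threshold are hypothetical placeholders.

```python
# Minimal sketch (not from the specification): time synchronization by
# cross-correlating received samples with a known pilot/preamble sequence.
# The preamble pattern and detection threshold are hypothetical placeholders.

PREAMBLE = [1, -1, 1, 1, -1, -1, 1, -1]  # known to transmitter and receiver

def find_preamble(samples, threshold=6.0):
    """Return the offset where the preamble correlation peaks, or None."""
    best_offset, best_corr = None, threshold
    for off in range(len(samples) - len(PREAMBLE) + 1):
        corr = sum(p * samples[off + i] for i, p in enumerate(PREAMBLE))
        if corr > best_corr:
            best_offset, best_corr = off, corr
    return best_offset

rx = [0.1, -0.2] + PREAMBLE + [0.0, 0.3]  # toy received sequence
print(find_preamble(rx))                   # -> 2
```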
The present disclosure may be implemented as digital processing or analog processing. Furthermore, if an integrated circuit technology that replaces LSI emerges as semiconductor technology advances or when a derivative technology is established, it goes without saying that the functional blocks may be integrated by using such technology. Implementation of biotechnology, for example, is a possibility.

(Supplemental Information 3)

Note that at least one of the field programmable gate array (FPGA) and central processing unit (CPU) may be configured to be able to download, via wireless or wired communication, all or part of the software required for implementing the communication method, transmission method, or reception method described in the present disclosure, and moreover may be configured to be able to download, via wireless or wired communication, all or part of the software for receiving updates. The downloaded software may be stored in storage, and the digital signal processing described in the present disclosure may be implemented by operating at least one of the FPGA and the CPU based on the stored software.

Here, a device including at least one of the FPGA and the CPU may connect to a communications modem over a wired or wireless connection, and the device and the communications modem may implement the communication method, transmission method, or reception method described in the present disclosure.

For example, a communication device (transmission device or reception device) such as the base station, AP, or terminal described in the present specification may include at least one of the FPGA and the CPU, and include an interface for obtaining, from an external source, software for operating at least one of the FPGA and the CPU. Furthermore, the communication device may include storage for storing software obtained from an external source, and may implement the signal processing described in the present disclosure by operating the FPGA and/or the CPU based on the stored software.

The transmission device described in the present specification may be included in a first automobile or vehicle, and the reception device described in the present specification may be included in a second automobile or vehicle, and the transmission and receiving of data may be implemented under such a configuration. The transmission device or part of the functions of the transmission device described in the present specification may be connected to the first automobile or vehicle via an interface, and the reception device or part of the functions of the reception device described in the present specification may be connected to the second automobile or vehicle via an interface, and the transmission of data may be implemented via transmission and reception thereby.

The transmission device described in the present specification may be included in a first automobile or vehicle, and the transmission and receiving of data between this transmission device and the reception device described in the present specification may be implemented under such a configuration. The reception device described in the present specification may be included in a second automobile or vehicle, and the transmission and receiving of data between this reception device and the transmission device described in the present specification may be implemented under such a configuration.
Furthermore, the transmission device or part of the functions of the transmission device described in the present specification may be connected to the first automobile or vehicle via an interface, and the transmission and receiving of data between this set of transmission devices and the reception device described in the present specification may be implemented under such a configuration. The reception device or part of the functions of the reception device described in the present specification may be connected to the second automobile or vehicle via an interface, and the transmission and receiving of data between this set of reception devices and the transmission device described in the present specification may be implemented under such a configuration.

When the automobile or vehicle includes the transmission device or part of the transmission device described in the present specification, or when the automobile or vehicle and the transmission device described in the present specification or part of the functions of the transmission device described in the present specification are connected via an interface, the light source included in the transmission device described in the present specification may be a light source included in the automobile or vehicle. For example, automobile B100illustrated inFIG.76includes light sources B101_1, B101_2, B101_3, and B101_4, and one or more of these light sources may be the light source to be used by the transmission device according to the present specification for transmitting the modulated optical signal. Moreover, the function for selecting which light source among the plurality of light sources included in automobile B100the transmission device according to the present specification uses for transmitting the modulated optical signal may be included in the transmission device or in a device connected to the transmission device. Moreover, the brightness of the light source, the angle of emission of the light source, and the positioning of the light source may be configurable.

When the automobile or vehicle includes the reception device or part of the reception device described in the present specification, or when the automobile or vehicle and the reception device described in the present specification or part of the functions of the reception device described in the present specification are connected via an interface, the light receiver included in the reception device described in the present specification may be a light receiver included in the automobile or vehicle (for example, an image sensor or photodiode). For example, automobile B100illustrated inFIG.77includes light receivers B201_1, B201_2, B201_3, B201_4, B201_5, and B201_6, and one or more of these light receivers may be the light receiver to be used by the reception device according to the present specification for receiving the modulated optical signal. Moreover, the function for selecting which light receiver among the plurality of light receivers included in automobile B100the reception device according to the present specification uses for receiving the modulated optical signal may be included in the reception device or in a device connected to the reception device. Moreover, the angle of the light receiver and the positioning of the light receiver may be configurable.
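A minimal sketch of the receiver-selection function mentioned above might look as follows; the per-receiver quality metric is a hypothetical placeholder, and the present disclosure does not prescribe any particular selection rule.

```python
# Minimal sketch (not from the specification): selecting which of a
# vehicle's light receivers (e.g., B201_1 through B201_6 in FIG.77) to use
# for receiving the modulated optical signal. The per-receiver quality
# values (e.g., signal-to-noise estimates) are hypothetical placeholders.

def select_receiver(quality_by_receiver):
    """Pick the receiver ID with the best reception quality."""
    return max(quality_by_receiver, key=quality_by_receiver.get)

qualities = {"B201_1": 3.2, "B201_2": 7.9, "B201_3": 5.1,
             "B201_4": 0.4, "B201_5": 6.6, "B201_6": 2.8}
print(select_receiver(qualities))  # -> "B201_2"
```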
Furthermore, the reception device described in the present specification may display, on the front panel included in the automobile or in the cockpit of the vehicle, a notification indicating that data has been received. Moreover, the reception device described in the present specification may notify a user that data has been received by vibrating the steering wheel of, for example, the automobile, or by vibrating a vibrator included on the steering wheel.

(Supplemental Information 4)

In the present specification, a server may provide an application related to processes pertaining to the reception device, and the functions of the reception device according to the present specification may be implemented by the terminal installing the application. Note that the application may be provided to the terminal by the communication device included in the transmission device according to the present specification connecting to a server over a network, or may be provided to the terminal by a communication device including a different transmission function connecting to a server over a network.

Similarly, in the present specification, a server may provide an application related to processes pertaining to the transmission device, and the functions of the transmission device according to the present specification may be implemented by the terminal installing the application. Note that a method in which the application is provided to a different communication device by the communication device connecting to a server over a network is conceivable.

Moreover, a server may provide software related to the light source included in the transmission device and the light receiver included in the reception device, and the transmission and reception of the modulated optical signal by the light source included in the transmission device and the light receiver included in the reception device, respectively, may be supported by obtaining this software.

Furthermore, the transmission device according to the present specification may function as a server, an application included in the transmission device may be provided to a communication device using some communication means, and the reception device according to the present specification can be implemented by the communication device downloading and using the application.

Note that in the present specification, there is reference to a "lamp" and a "light source", but the method may be a method in which a projector or display displays, for example, an image, a video, or an advertisement, and the modulated optical signal is included in that light. In other words, the "lamp" and the "light source" may include functions other than the emission of light. Moreover, the "lamp" and the "light source" may each comprise a plurality of lamps or light sources. Furthermore, the transmission method used by the communication device that generates a modulated optical signal and emits light may be a method other than the transmission method described in the present specification. Moreover, the modulated optical signal may include information other than what is described in the present specification. Moreover, the lamp and/or light source, such as an LED lamp and/or light source, may itself include the functions of the transmission device described in the present specification. Furthermore, the transmission device and the reception device disclosed in the present specification are exemplified as, but not limited to, being equipped in a vehicle.
The transmission device and the reception device may be equipped in something other than a vehicle, and may be provided as stand-alone units. Even in such cases, the operations described in the present specification can be implemented and the same advantageous effects can be achieved.

(Supplemental Information 5)

The communication device and reception device according to the present disclosure may be implemented as any one of the aspects according to Embodiments 1 through 11.

In other words, a first communication device according to one aspect of the present disclosure includes: a light receiver that receives a first optical signal and a second optical signal and generates a reception signal, the first optical signal transmitting first identifier information indicating an identifier of the first communication device, and the second optical signal transmitting second identifier information indicating an identifier of a second communication device; a demodulator that demodulates the reception signal to obtain the first identifier information and the second identifier information; a camera that captures a region including the first optical signal and the second optical signal to obtain video data or still image data; a controller that selects, based on the video data or still image data, one of the first identifier information or the second identifier information; and a communicator that communicates with a communication device corresponding to the selected identifier information.

A second communication device according to one aspect of the present disclosure includes: a light receiver that captures a predetermined region to obtain a reception signal for demodulating an optical signal emitted to the predetermined region and video data or still image data for use in image processing; a demodulator that demodulates the reception signal to obtain a plurality of items of identifier information indicating identifiers of other corresponding communication devices; a controller that selects, based on the video data or still image data, one item of identifier information from among the plurality of items of identifier information; and a communicator that wirelessly communicates with another communication device that corresponds to the selected identifier information.

A first reception device according to one aspect of the present disclosure includes: a first light receiver that receives a first optical signal and a second optical signal and generates an optical reception signal, the first optical signal transmitting first identifier information indicating an identifier of a first communication device and the second optical signal transmitting second identifier information indicating an identifier of a second communication device; a demodulator that demodulates the optical reception signal to obtain the first identifier information and the second identifier information; a second light receiver that obtains video data or still image data in which a region including the first optical signal and the second optical signal is captured; and a controller that selects, based on the video data or the still image data, one of the first identifier information or the second identifier information.
A second reception device according to one aspect of the present disclosure includes: a light receiver that receives a first optical signal and a second optical signal and generates a reception signal, the first optical signal transmitting first identifier information indicating an identifier of a first communication device and the second optical signal transmitting second identifier information indicating an identifier of a second communication device; a demodulator that demodulates the reception signal to obtain the first identifier information and the second identifier information; a camera that captures a region including the first optical signal and the second optical signal to obtain video data or still image data; and an analyzer that analyzes the video data or the still image data to generate relative position information indicating a positional relationship between a first transmitter that transmitted the first optical signal and a second transmitter that transmitted the second optical signal.

A third reception device according to one aspect of the present disclosure includes: a light receiver that uses an image sensor to receive a first optical signal and a second optical signal and generates a reception signal, the first optical signal transmitting first identifier information indicating an identifier of a first communication device, the second optical signal transmitting second identifier information indicating an identifier of a second communication device; a demodulator that demodulates the reception signal to obtain the first identifier information and the second identifier information; and an analyzer that generates first position information indicating a position of a first transmitter that transmitted the first optical signal and second position information indicating a position of a second transmitter that transmitted the second optical signal.

A fourth reception device according to one aspect of the present disclosure includes: a light receiver that captures a predetermined region to obtain a reception signal for demodulating an optical signal emitted to the predetermined region and video data or still image data for use in image processing; a demodulator that demodulates the reception signal to obtain demodulated data; and an analyzer that analyzes the video data or still image data to generate attribute information indicating an attribute of a transmitter that transmitted an optical signal corresponding to the demodulated data.

Moreover, the transmission method and the reception method according to the present disclosure may be in accordance with the aspect according to Embodiment 15. In other words, the transmission method according to one aspect of the present disclosure includes: in a first period, causing a light source to emit light having a first luminance; and in a second period, causing the light source to transmit an optical signal by causing the light source to alternately emit light having a second luminance and light having a third luminance lower than the second luminance. For example, the first period is the lighting period illustrated inFIG.74A, and the second period is the information transmission period illustrated inFIG.74A. With this, in the second period, since the light source transmits an optical signal by alternately emitting light having the second luminance and light having the third luminance, the reception device can securely obtain information such as an SSID by receiving the optical signal.
Moreover, when the light source is also used for lighting, the amount of light that is output for lighting can be expected to decrease due to the transmission of the optical signal. However, with the transmission method according to this aspect, it is possible to inhibit a reduction in the amount of light output for lighting since the light source emits light having the first luminance in the first period.

Moreover, the transmission method may further control the light source so as to provide a guard interval between the first period and the second period. Since this provides a guard interval as illustrated in, for example, (b) and (c) inFIG.74A, it is possible to reduce the unpleasantness felt from the switching between the first period and the second period. This furthermore makes it easier for the reception device to receive the first symbol in the second period.

Moreover, the first luminance and the second luminance may be different. For example, the transmission device may set the first luminance higher than the second luminance, as illustrated in (a) inFIG.74A. With this, the reduction in light for lighting in the second period is compensated for by light for lighting in the first period, which makes it possible to further inhibit a reduction in the amount of light for lighting.

Moreover, the reception method according to one aspect of the present disclosure includes: in a first period, receiving light having a first luminance from a light source; in a second period, receiving an optical signal transmitted from the light source, by alternately receiving light having a second luminance and light having a third luminance lower than the second luminance; and outputting analysis information by analyzing data based on the optical signal. This makes it possible for the reception device to securely obtain information such as an SSID by receiving the optical signal. Moreover, when the light source is also used for lighting, the amount of light that is output for lighting can be expected to decrease due to the transmission of the optical signal. However, with the reception method according to this aspect, it is possible to receive the optical signal and obtain the analysis information in a state in which a reduction in the amount of light output for lighting is inhibited, since the light source emits light having the first luminance in the first period.

The reception method may further include receiving light in a guard interval provided between the first period and the second period. Since this provides a guard interval as illustrated in, for example, (b) and (c) inFIG.74A, the first symbol in the second period can be easily received. Moreover, the first luminance and the second luminance may be different. For example, the first luminance may be higher than the second luminance, as illustrated in (a) inFIG.74A. With this, it is possible to receive the optical signal and obtain analysis information while the reduction in light for lighting in the second period is being compensated for by light for lighting in the first period.

Variation of Embodiments 1 to 15

Hereinafter, a variation example for each of Embodiments 1 to 15 will be described per item.

<Vehicle Visible Light Communication>

In the above embodiments, when a vehicle receives a modulated optical signal, an image sensor is used in place of a mirror on the vehicle, such as a side mirror or rearview mirror, as illustrated in, for example,FIG.68, and the image sensor receives the modulated optical signal.
However, rather than an image sensor that is used in place of a mirror, an image sensor for receiving modulated optical signals may be provided in the vehicle. Alternatively, instead of an image sensor, a photodiode for receiving modulated optical signals may be provided in the vehicle. Note that in the present disclosure, since the modulated optical signal is light, the "reception" of a modulated optical signal means both reception in a communications sense and reception in an optical sense. When a photodiode is used to receive modulated optical signals, the optical communication system that includes the photodiode may have the configuration illustrated inFIG.52.

Moreover, when modulated optical signals are transmitted from a vehicle, the vehicle or an element included in the vehicle may transmit the modulated optical signals, or a device not included in but provided on or equipped in the vehicle may transmit the modulated optical signals. Moreover, it goes without saying that the transmission device that transmits/emits the modulated optical signals may be used independently.

<Switching Between Methods>

As described above, there are two methods for transmitting modulated optical signals. The first of the two methods transmits modulated optical signals using baseband transmission, as described with reference to, for example,FIG.1,FIG.2,FIG.3, andFIG.4, and the second of the two methods transmits modulated optical signals using the configuration illustrated inFIG.52. Baseband transmission is transmission based on an ASK, Manchester encoding, or line scan (line scan sampling) method. The transmission device may switch between these two methods. For example, the transmission device switches the transmission method based on, for example, a targeted transmission distance (data reception quality) or transmission speed, and the modulated optical signal is transmitted using the transmission method switched to, i.e., the first method or the second method. Stated differently, the transmission device emits or radiates light using the transmission method switched to.

FIG.78illustrates one example of a frame configuration of a modulated optical signal. The transmission device transmits preamble11, common control information symbol12, preamble13, and data symbol14. Each of preamble11and preamble13includes a symbol for performing time synchronization and signal detection. The reception device may perform frequency offset estimation, frequency synchronization, and channel estimation using the preamble. Common control information symbol12includes at least information indicating the above-described transmission method, i.e., the first method or the second method. For example, the transmission device transmits preamble11and common control information symbol12using the first method, and transmits preamble13and data symbol14using the transmission method indicated in common control information symbol12.

Note that the frame configuration illustrated inFIG.78includes a gap of time between the end time of the transmission of common control information symbol12and the start time of the transmission of preamble13. In other words, preamble13and data symbol14are not transmitted immediately after common control information symbol12is transmitted, but are transmitted after the elapse of a predetermined amount of time.
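A minimal sketch of constructing the FIG.78frame, under the assumptions that the symbol contents are abstracted to labels and that the gap is represented as idle symbols, is given below; none of the field names are defined by the present disclosure.

```python
# Minimal sketch (not from the specification): building the FIG.78 frame,
# in which common control information symbol12 indicates whether the first
# or second method is used for preamble13 and data symbol14. Symbol
# contents and the gap length are hypothetical placeholders.

GAP_SYMBOLS = 4  # idle time letting the receiver switch reception modes

def build_switching_frame(method_id, data_symbols):
    """method_id: 1 (first method) or 2 (second method)."""
    frame = [("preamble11", None)]
    frame.append(("common_control12", {"method": method_id}))
    frame += [("gap", None)] * GAP_SYMBOLS     # predetermined amount of time
    frame.append(("preamble13", None))         # sent using the indicated method
    frame.append(("data14", data_symbols))     # sent using the indicated method
    return frame

print(build_switching_frame(2, [1, 0, 1, 1]))
```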
When the reception device receives common control information symbol12, during this predetermined amount of time, the reception device can switch modes so that the reception device can receive the modulated optical signal using the transmission method indicated in common control information symbol12. Note that usage of the predetermined amount of time is not required. In other words, preamble13and data symbol14may be transmitted immediately after common control information symbol12is transmitted. Moreover, each of the first method and the second method may be any kind of transmission method. For example, the first method and the second method may have mutually different sampling frequencies. Moreover, the number of transmission methods used is not limited to two; three or more transmission methods may be used. In such cases, common control information symbol12indicates any one of the three or more methods.

<Groupcast>

The transmission device may groupcast, unicast, or multicast (or broadcast) the modulated optical signal. Note that in groupcast, unicast, or multicast, the transmission device may transmit the modulated optical signal by changing the luminance of the light emitted by the light source that is included in the transmission device, or, alternatively, may transmit the modulated optical signal by using reflected outside light, as in the configuration illustrated inFIG.71A, for example. Moreover, groupcast transmits data to a plurality of specified communication partners.

For example, the transmission device may specify the transmission destination for groupcast, unicast, or multicast using an IP address. In other words, when the transmission device performs unicast, the transmission device transmits the IP address of the terminal that is the transmission destination. When the transmission device performs groupcast, the transmission device transmits the IP address assigned to the group that is the transmission destination. The group that is the transmission destination is, for example, a group to which standard-sized vehicles belong, a group to which large vehicles belong, or a group to which electric automobiles belong. The IP addresses of these groups may be static or arbitrarily assigned based on place or time. For example, when the reception device of a vehicle passes through an electronic toll collection (ETC) system gate, the reception device may receive, from an antenna provided at the gate, the IP address of the group to which that vehicle belongs. This assigns IP addresses to groups of vehicles that pass through the gate. When the transmission device performs multicast, the transmission device transmits the IP address corresponding to the multicast.

Note that when the transmission device performs groupcast or multicast, the transmission device may transmit a MAC address for the groupcast or the multicast. For example, a specific MAC address may be set for the groupcast or multicast. Specifically, all of the bits of the specific MAC address are either 1 or 0.

<Visible Light Communication Relaying>

FIG.79illustrates one example of the relaying of visible light communication. For example, when optical communication device50A transmits a modulated optical signal to a communication partner device, optical communication device50B relays the transmission of that modulated optical signal. Optical communication device50A includes communication processor51and light emitter52.
Light emitter52is, for example, a light source such as a light emitting diode (LED) or an organic electro-luminescent (EL) light source. Communication processor51causes a modulated optical signal to be transmitted from light emitter52by changing the luminance of the light emitted by light emitter52. Optical communication device50B includes light receiver53, communication processor54, and a plurality of light emitters55. Light receiver53is an element such as an image sensor or photodiode, and receives a modulated optical signal transmitted from light emitter52included in optical communication device50A and outputs a signal indicated by the modulated optical signal to communication processor54. Communication processor54causes a modulated optical signal to be transmitted from each of the plurality of light emitters55by changing the luminance of the light emitted by the plurality of light emitters55in accordance with the signal. In this way, optical communication device50B relays the transmission of a modulated optical signal from optical communication device50A to a communication partner device.

Here, in the relaying of the modulated optical signal, the frame of the transmission signal may include a region for transmitting destination information, and either groupcast or multicast may be specified as the destination information. Moreover, in the above example, optical communication device50B is exemplified as transmitting a modulated optical signal from each of the plurality of light emitters55, but a modulated optical signal may be transmitted from a single light emitter55.

Moreover, in the relaying of the modulated optical signal, the number of hops may be specified. For example, the frame of the modulated optical signal may include a region for transmitting the number of hops. In such cases, optical communication device50B may increment the number of hops, and when the number of hops reaches the upper limit, may stop the relaying. Moreover, the frame of the modulated optical signal that is transmitted may include information indicating the upper limit of the number of hops. Accordingly, optical communication device50B transmits a transmission frame including information indicating destination information, a number of hops, and the upper limit for the number of hops (a minimal sketch of such hop-limited relaying is given below).

Moreover, optical communication device50B may continuously or regularly transmit the modulated optical signal a plurality of times rather than transmitting the modulated optical signal a single time. Furthermore, optical communication device50B may transmit the same modulated optical signal as the modulated optical signal transmitted by optical communication device50A, or may further append additional data to the data indicated by the modulated optical signal transmitted by optical communication device50A and then transmit a modulated optical signal indicating that data.

<Supplemental Information for the Scan Period, the Lighting Period, and the Information Transmission Period>

With the scan period illustrated inFIG.74B, the vehicle including the transmission device and the reception device (the first vehicle described above) emits light so that light is incident on each of the one or more transmission devices900illustrated inFIG.72. Here, the vehicle may emit light while changing the directivity of the light, and may emit light while changing the radiation width of the light. Note that, in contrast to the scan period, in the lighting period the vehicle may emit light while maintaining a constant directivity.
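Returning briefly to the hop-limited relaying described under <Visible Light Communication Relaying>, a minimal sketch follows. The frame fields (destination information, number of hops, and the upper limit of the number of hops) follow the regions described above, but the field names and representation are hypothetical.

```python
# Minimal sketch (not from the specification): hop-limited relaying of a
# modulated optical signal at a device such as optical communication
# device50B. The dict field names are hypothetical placeholders.

def relay(frame):
    """Return the frame to retransmit, or None if the hop limit is reached."""
    if frame["hops"] >= frame["hop_limit"]:
        return None                  # stop relaying at the upper limit
    relayed = dict(frame)
    relayed["hops"] += 1             # increment the number of hops
    return relayed

frame = {"destination": "groupcast", "hops": 0, "hop_limit": 2, "data": [1, 0, 1]}
frame = relay(frame)   # first relay: hops becomes 1
frame = relay(frame)   # second relay: hops becomes 2
print(relay(frame))    # -> None (upper limit reached, relaying stops)
```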
However, the directivity of the lighting period may be set to one of a plurality of candidate directivities. Moreover, in the information transmission period, the vehicle may iteratively transmit information or data. In other words, information may be repeatedly transmitted. For example, when the vehicle transmits a first data group, the directivity of light used to transmit the first data group the first time and the directivity of light used to transmit the first data group the second time may be different. More specifically, the vehicle changes the directivity of light by moving the light source for transmitting the first data group or changing the optical system (for example, a lens or reflective surface) for transmitting the first data group. The vehicle may change the directivity of light by electronically changing a characteristic of the lens. Alternatively, the vehicle may transmit a plurality of first data groups simultaneously. In such cases, the vehicle transmits the plurality of first data groups simultaneously by using, for example, a plurality of LEDs. Note that in such cases, each of the plurality of LEDs may emit light in a different direction (emit light having a different directivity). <Modulated Optical Signal that Uses Reflected Light> FIG.80illustrates one example of a frame configuration of a modulated optical signal transmitted by transmission device900according to Embodiment 13. Note that the modulated optical signal transmitted by transmission device900according to Embodiment 13 is a signal generated using reflected outside light such as sunlight. Transmission device900transmits preamble16, control information symbol17, and data symbol18, for example, as illustrated inFIG.80. In other words, transmission device900transmits the above-described preamble16and symbols by switching liquid crystal panel911between a state in which the liquid crystals transmit light and a state in which the liquid crystals block light, that is to say, by switching optical transmission unit910between a light-reflecting state and a non-light-reflecting state. Preamble16includes a symbol for performing time synchronization and signal detection. The reception device may perform frequency offset estimation, frequency synchronization, and channel estimation using the preamble. Such a preamble16is a data sequence known both to transmission device900, which controls the transmittance of light, and to the reception device that receives the modulated optical signal from transmission device900. Control information symbol17indicates, for example, the type of data symbol18. More specifically, control information symbol17indicates, for example, information about a road sign (i.e., sign information) or information about the vehicle equipped with transmission device900. The information about the vehicle indicates, for example, “the vehicle is moving” or “the vehicle is not moving”. Control information symbol17may include information on the transmission method or error correction encoding method for data symbol18. FIG.81illustrates transmission device900that controls the transmittance of light described in Embodiment 13, and communication device25. Communication device25is an external device that communicates with interface unit922included in transmission device900(seeFIG.71A), and receives modulated signals via an antenna. Communication device25generates transmission data based on data obtained from the modulated signal, and transmits the transmission data to transmission device900.
Transmission device900receives the transmission data transmitted from communication device25, and stores a modulated signal based on the transmission data into memory921. Here, if a modulated signal is already stored in memory921, transmission device900may overwrite the existing modulated signal with the modulated signal based on the transmission data. With this, based on the transmission data, transmission device900can change the modulated optical signal or data sequence transmitted as a result of controlling the state of liquid crystal panel911. Here, transmission device900may change control information symbol17as necessary. Note that communication device25may receive the modulated signal via wireless communication using radio waves, and, alternatively, may receive the modulated signal via visible light communication or wired communication. <Plurality of Light Sources and Plurality of Light Receivers> FIG.82illustrates communication between first communication device30_1and second communication device30_2. Each of first communication device30_1and second communication device30_2includes a plurality of light sources for transmitting modulated optical signals and a plurality of light receivers for receiving modulated optical signals. First communication device30_1and second communication device30_2perform optical communication (for example, visible light communication) using the included plurality of light sources and plurality of light receivers. FIG.83illustrates an example of a configuration of a communication device. For example, each of first communication device30_1and second communication device30_2has the configuration illustrated inFIG.83. Each of first communication device30_1and second communication device30_2includes control symbol generator31, transmission unit32, reception unit33, first light source34_1through N-th light source34_N (N is an integer that is greater than or equal to 2), and first light receiver35_1through M-th light receiver35_M (M is an integer that is greater than or equal to 2). Control symbol generator31generates a control information symbol. This control information symbol is, for example, control information symbol A2211_iillustrated inFIG.55, common control information symbol12illustrated inFIG.78, or control information symbol17illustrated inFIG.80. Transmission unit32generates a modulated signal having a frame configuration including the control information symbol generated by control symbol generator31, and outputs the generated modulated signal to at least one light source among first light source34_1through N-th light source34_N. In other words, transmission unit32may transmit the same modulated signal to two or more light sources among first light source34_1through N-th light source34_N, and, alternatively, may transmit a different modulated signal to the respective light sources. Moreover, transmission unit32may, based on the modulated signal obtained by reception unit33, select a light source to be used in the next instance of optical communication from among first light source34_1through N-th light source34_N. Each of first light source34_1through N-th light source34_N transmits a modulated optical signal, which is an optical signal, by emitting light in accordance with the modulated signal output from transmission unit32. Here, first light source34_1through N-th light source34_N may emit light having mutually different directivities. Alternatively, the directions in which these light sources emit light may be mutually different.
Moreover, the light sources may be disposed in different locations. This makes it possible to inhibit interference between the modulated optical signals transmitted from first light source34_1through N-th light source34_N. First light receiver35_1through M-th light receiver35_M receive the same modulated optical signal or mutually different modulated optical signals. Each of first light receiver35_1through M-th light receiver35_M outputs a modulated signal based on the received modulated optical signal to reception unit33. Reception unit33obtains the modulated signals output from first light receiver35_1through M-th light receiver35_M. Moreover, reception unit33may, based on the modulated signals, select a light receiver to be used in the next instance of optical communication from among first light receiver35_1through M-th light receiver35_M. For example, when first communication device30_1transmits a modulated optical signal to second communication device30_2, first communication device30_1first transmits a modulated optical signal including a training symbol from each of first light source34_1through N-th light source34_N, in order to cause second communication device30_2to select a light receiver to be used in the optical communication. The transmission and reception of such a training symbol allows second communication device30_2to know which light sources interfere with one another. Hereinafter, training using a training symbol will be described in detail. FIG.84illustrates one example of the timing at which the training symbol is transmitted from each of a plurality of light sources. Note that in the following example, N=4. Firstly, first light source34_1transmits a first training symbol at a first time. Next, second light source34_2transmits a second training symbol at a second time. Next, third light source34_3transmits a third training symbol at a third time. Lastly, fourth light source34_4transmits a fourth training symbol at a fourth time. In other words, the first training symbol, the second training symbol, the third training symbol, and the fourth training symbol are transmitted by time division multiplexing (TDM). Here, the first training symbol may include information indicating an identification (ID) unique to first light source34_1. Similarly, the second training symbol may include information indicating an ID unique to second light source34_2, the third training symbol may include information indicating an ID unique to third light source34_3, and the fourth training symbol may include information indicating an ID unique to fourth light source34_4. With this, when second communication device30_2, which is the communication partner of first communication device30_1, receives a training symbol, second communication device30_2can identify which light source transmitted the training symbol. FIG.85illustrates one example of the reception timing of each of a plurality of light receivers. Note that in the following example, M=5. Second communication device30_2receives, via first light receiver35_1through fifth light receiver35_5, reception signals including the training symbols transmitted at the timings indicated inFIG.84. More specifically, first light receiver35_1receives a reception signal including the first training symbol at the first time. Second light receiver35_2receives a reception signal including the fourth training symbol at the fourth time. Third light receiver35_3receives a reception signal including the second training symbol at the second time.
Fourth light receiver35_4receives a reception signal including the first training symbol at the first time, and further receives a reception signal including the fourth training symbol at the fourth time. Fifth light receiver35_5receives a reception signal including the third training symbol at the third time. Accordingly, fourth light receiver35_4can receive a modulated optical signal from first light source34_1and a modulated optical signal from fourth light source34_4. Moreover, first light receiver35_1can receive only the modulated optical signal from first light source34_1, and second light receiver35_2can receive only the modulated optical signal from fourth light source34_4. Similarly, third light receiver35_3can receive only the modulated optical signal from second light source34_2, and fifth light receiver35_5can receive only the modulated optical signal from third light source34_3. Taking such a reception state into account, reception unit33included in second communication device30_2selects the combination of the light source from which first communication device30_1transmits (emits) the modulated optical signal and the light receiver used to receive the modulated optical signal. More specifically, in cases where first communication device30_1transmits a single stream modulated optical signal, when the reception state in second communication device30_2is the state illustrated inFIG.85, second communication device30_2searches for a set of a light source and a light receiver that yields the best reception state. For example, second communication device30_2selects the set of third light source34_3and fifth light receiver35_5as a favorable set. Second communication device30_2transmits, to first communication device30_1, request information requesting transmission using third light source34_3. First communication device30_1that receives this request information uses third light source34_3to transmit (emit) a modulated optical signal including a data symbol. Second communication device30_2receives this modulated optical signal via fifth light receiver35_5. Note that first communication device30_1may transmit a single stream modulated optical signal. In such cases, second communication device30_2selects a set including all of the light sources and all of the light receivers. On the other hand, in cases where first communication device30_1transmits a multi-stream modulated optical signal, when the reception state in second communication device30_2is the state illustrated inFIG.85, second communication device30_2, for example, searches for a set of a light source and a light receiver that yields a plurality of streams that can be received with little interference. In the reception state illustrated inFIG.85, fourth light receiver35_4receives both a modulated optical signal from first light source34_1and a modulated optical signal from fourth light source34_4, which is unfavorable. In other words, in fourth light receiver35_4, the light from first light source34_1interferes with the light from fourth light source34_4. In view of this, reception unit33included in second communication device30_2determines, for each of first light receiver35_1through fifth light receiver35_5, whether light from a plurality of light sources interferes. The following describes two examples (i.e., a first example and a second example) of this determination.
First Example When the reception state in second communication device30_2is the state illustrated inFIG.85, for example, reception unit33included in second communication device30_2determines to use first light receiver35_1, second light receiver35_2, third light receiver35_3, and fifth light receiver35_5. Then, second communication device30_2requests first communication device30_1to transmit (emit) a first modulated optical signal from first light source34_1, transmit (emit) a second modulated optical signal from second light source34_2, transmit (emit) a third modulated optical signal from third light source34_3, and transmit (emit) a fourth modulated optical signal from fourth light source34_4. In response to this request, first communication device30_1transmits (emits) the first modulated optical signal including a data symbol from first light source34_1, transmits (emits) the second modulated optical signal including a data symbol from second light source34_2, transmits (emits) the third modulated optical signal including a data symbol from third light source34_3, and transmits (emits) the fourth modulated optical signal including a data symbol from fourth light source34_4. Here, there is a time at which the first modulated optical signal, the second modulated optical signal, the third modulated optical signal, and the fourth modulated optical signal are present. Stated differently, there is a time at which a data symbol is present in the first modulated optical signal, a data symbol is present in the second modulated optical signal, a data symbol is present in the third modulated optical signal, and a data symbol is present in the fourth modulated optical signal. As described above, as a result of the selection of a light receiver to be used and the request of a light source to be used, a set of first light source34_1and first light receiver35_1, a set of second light source34_2and third light receiver35_3, a set of third light source34_3and fifth light receiver35_5, and a set of fourth light source34_4and second light receiver35_2are selected. Accordingly, first light receiver35_1included in second communication device30_2receives the first modulated optical signal, second light receiver35_2included in second communication device30_2receives the fourth modulated optical signal, third light receiver35_3included in second communication device30_2receives the second modulated optical signal, and fifth light receiver35_5included in second communication device30_2receives the third modulated optical signal. Second Example When the reception state in second communication device30_2is the state illustrated inFIG.85, for example, reception unit33included in second communication device30_2determines to use third light receiver35_3and fifth light receiver35_5, since the reception states of third light receiver35_3and fifth light receiver35_5are favorable. As a result, second communication device30_2requests first communication device30_1to transmit (emit) the second modulated optical signal from second light source34_2and transmit (emit) the third modulated optical signal from third light source34_3. In response to this request, first communication device30_1transmits (emits) the second modulated optical signal including a data symbol from second light source34_2, and transmits (emits) the third modulated optical signal including a data symbol from third light source34_3. Here, there is a time at which the second modulated optical signal and the third modulated optical signal are present.
Stated differently, there is a time at which a data symbol is present in the second modulated optical signal and a data symbol is present in the third modulated optical signal. Accordingly, third light receiver35_3included in second communication device30_2receives the second modulated optical signal and fifth light receiver35_5included in second communication device30_2receives the third modulated optical signal. Here, since the degree of directivity is high when first communication device30_1and second communication device30_2transmit a modulated optical signal, precoding is not implemented. However, the communication device may implement precoding. FIG.86illustrates a detailed configuration example of a transmission unit that is included in the communication device illustrated inFIG.83and does not implement precoding. Transmission unit32includes error correction encoder32a, four mappers32b, four signal processors32c, and light source selector32d. Error correction encoder32areceives an input of data and a control signal, performs error correction encoding based on information related to error correction encoding that is included in the control signal (for example, error correction code information, code length (block length), coding rate, etc.), and outputs encoded data. Mappers32beach receive an input of the encoded data and a control signal, perform mapping corresponding to the modulation method, based on information on the modulated signal included in the control signal, and output a mapped signal. Signal processors32ceach receive an input of a mapped signal and the control signal, perform signal processing based on the control signal, and output a signal-processed signal. Light source selector32dreceives an input of signal-processed signals output from the four signal processors32c, the control signal, and a control information symbol generated by control symbol generator31, and generates one or more modulated signals each including the control information symbol, based on the control signal. Light source selector32dfurthermore selects one or more light sources from among first light source34_1through N-th light source34_N, in response to a request from a device (for example, second communication device30_2) that is a communication partner of the device that includes this transmission unit32. Light source selector32dthen transmits each of the one or more modulated signals as modulated optical signals from the selected one or more light sources. FIG.87illustrates a detailed configuration example of a transmission unit that is included in the communication device illustrated inFIG.83and implements precoding. Transmission unit32includes error correction encoder32a, four mappers32b, four signal processors32c, and light source selector32d, and further includes weighting synthesizer32e. Weighting synthesizer32ereceives inputs of the mapped signals output from the four mappers32band the control signal, and performs weighting synthesis (i.e., precoding) based on the control signal. Weighting synthesizer32ethen outputs the weighted signals to the four signal processors32c. Signal processors32ceach receive an input of a weighted signal and the control signal, perform signal processing based on the control signal, and output a signal-processed signal. Note that in the above example, first communication device30_1transmits (emits) a modulated optical signal, and second communication device30_2receives the modulated optical signal (a sketch of the set selection based on the reception state ofFIG.85is given below).
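The set selection described with reference toFIG.84andFIG.85can be illustrated with a minimal sketch. The reception table, the function name, and the one-receiver-per-source rule are assumptions made for illustration; the description above only requires that interference-free (light source, light receiver) sets be found from the training results.

# reception[m] = set of light source indices whose training symbols light receiver m
# detected during the TDM training of FIG. 84 (hypothetical representation).
from typing import Dict, List, Set, Tuple

def select_sets(reception: Dict[int, Set[int]]) -> List[Tuple[int, int]]:
    """Pair each interference-free light receiver with the single light source it sees."""
    pairs: List[Tuple[int, int]] = []
    used_sources: Set[int] = set()
    for receiver, sources in reception.items():
        if len(sources) == 1:                 # exactly one source: no interference here
            source = next(iter(sources))
            if source not in used_sources:    # keep one receiver per light source
                pairs.append((source, receiver))
                used_sources.add(source)
    return pairs

# Reception state of FIG. 85 (N = 4 light sources, M = 5 light receivers):
reception = {1: {1}, 2: {4}, 3: {2}, 4: {1, 4}, 5: {3}}
print(select_sets(reception))  # [(1, 1), (4, 2), (2, 3), (3, 5)]; receiver 4 is excluded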
Similarly, second communication device30_2may transmit (emit) a modulated optical signal, and first communication device30_1may receive the modulated optical signal. Training in such cases is carried out in the same manner as when first communication device30_1transmits the modulated optical signal. Note that sharing the training results between the two directions may be difficult. This is because the method of arrangement of the light sources included in first communication device30_1and the method of arrangement of the light sources in second communication device30_2are different, and the method of arrangement of the light receivers included in first communication device30_1and the method of arrangement of the light receivers in second communication device30_2are different. However, such shared training may be possible depending on the configuration of the devices. Moreover, when first communication device30_1only includes a single light source, second communication device30_2selects which light receiver to use. When second communication device30_2only includes a single light receiver, first communication device30_1selects which light source to use. <Light Source Configuration> FIG.88illustrates an example of a configuration of a light source. Note that light source34illustrated inFIG.88may be any one of first light source34_1through N-th light source34_N illustrated inFIG.83. Light source34includes light emission method determiner34a, first light amount adjuster34b_1through X-th light amount adjuster34b_X (X is an integer that is greater than or equal to 2), and first light emitter34c_1through X-th light emitter34c_X. Light emission method determiner34adetermines a light emission method, and outputs a signal in accordance with the determined light emission method to first light amount adjuster34b_1through X-th light amount adjuster34b_X. First light amount adjuster34b_1through X-th light amount adjuster34b_X each adjust the amount of light based on the signal in accordance with the light emission method that is output from light emission method determiner34a. More specifically, first light amount adjuster34b_1through X-th light amount adjuster34b_X each change the amplitude (i.e., the intensity of the light) and/or change the phase of the light. First light emitter34c_1through X-th light emitter34c_X respectively correspond to first light amount adjuster34b_1through X-th light amount adjuster34b_X. First light emitter34c_1through X-th light emitter34c_X each transmit (emit) a modulated optical signal by emitting light in accordance with the amplitude and/or phase adjusted or changed by the light amount adjuster corresponding to the light emitter. This allows light source34to adjust the directivity of the modulated optical signal. In this way, light source34is composed of a plurality of light emitters. Each light emitter is composed of an LED or an organic EL element. Note that light source34may include a single light emitter. In such cases, the configuration illustrated inFIG.88can be implemented by dividing the single light emitter into X parts and causing the X parts to selectively emit light. <MAC Frame> There are three types of medium access control (MAC) frames of the modulated optical signal transmitted by the communication device that includes light source34illustrated inFIG.88, for example. The three types of frame are a management frame, a control frame, and a data frame. The management frame includes, for example, the following frames indicated as (A1) through (A9).
(A1) Beacon Frame: A beacon frame is a frame for informing a surrounding wireless communication device of network information.
(A2) Probe Request Frame: A probe request frame is a frame for a terminal to inquire whether there is a wireless cell in the surrounding area.
(A3) Probe Response Frame: A probe response frame is a response to the probe request frame.
(A4) Association Request Frame: An association request frame is a frame for a terminal to request a connection association from a base station.
(A5) Association Response Frame: An association response frame is a response to the association request frame.
(A6) Disassociation Frame: A disassociation frame is a frame for interrupting communication.
(A7) Authentication Frame: An authentication frame is a frame for performing authentication between wireless communication devices.
(A8) De-Authentication Frame: A de-authentication frame is a frame for interrupting authentication (de-authenticating).
(A9) Action Frame: An action frame is a frame for general added functions.
The control frame includes, for example, the following frames indicated as (B1) through (B5).
(B1) Request to Send (RTS) Frame: An RTS frame is a frame for requesting data transmission.
(B2) Clear to Send (CTS) Frame: A CTS frame is a frame for notifying the wireless communication device specified in the RTS frame that it is clear to send.
(B3) Acknowledgement (ACK) Frame: An ACK frame is a frame for confirming and responding to successful data reception.
(B4) Block ACK Request Frame: A block ACK request frame is a frame for requesting a block ACK.
(B5) Block ACK Frame: A block ACK frame is a frame for confirming and responding to successful reception of the data of a plurality of MAC frames.
A data frame is a frame for transmitting user data. Here, when the communication device has the configuration illustrated inFIG.83, the communication device performs transmission in accordance with, for example, the first method and/or the second method described below.
First Method: When transmitting part of the management frame (for example, the beacon frame) or part of the control frame (for example, the RTS frame), the communication device transmits (emits) a modulated optical signal using at least a plurality of light sources. For example, the plurality of light sources are all of the light sources included in the communication device.
Second Method: The communication device includes at least one light source, and each light source has the configuration illustrated inFIG.88. When transmitting part of the management frame (for example, the beacon frame) or part of the control frame (for example, the RTS frame), the communication device transmits (emits) a modulated optical signal using two or more of the light emitters.
Advantageous Effect By using the first method and/or the second method, the light including the modulated signal is emitted at a wide angle. Since this allows a plurality of communication devices (terminals) to receive the modulated signal, the system can operate stably (emission of modulated signals that cause interference can be reduced), which improves the data transmission efficiency of the system. Moreover, when the communication device transmits (emits) a modulated optical signal using the first method and/or the second method, the communication device may satisfy one of the following two conditions (condition 1 or condition 2).
Doing so allows for part of the management frame (for example, the beacon frame) or part of the control frame (for example, the RTS frame) to be transmitted with high quality to more communication devices (terminals). The conditions are as follows.
Condition 1: The amount of light is increased when transmitting part of the management frame (for example, the beacon frame) or part of the control frame (for example, the RTS frame).
Condition 2: Repetition or spread-spectrum is used when transmitting part of the management frame (for example, the beacon frame) or part of the control frame (for example, the RTS frame). When repetition is used, more repetitions are used than when transmitting the data frame. Alternatively, repetition is not used when transmitting the data frame. When spread-spectrum is used, spreading is performed with a greater spreading gain than when the data frame is transmitted. Alternatively, spread-spectrum is not used when transmitting the data frame.
Moreover, when transmitting the data frame, the communication device transmits (emits) a modulated optical signal using the light source determined in the above section “<Plurality of Light Sources and Plurality of Light Receivers>” (FIG.82throughFIG.87). The important point is that the transmission method used for the data frame and the transmission method used for part of the management frame (for example, the beacon frame) or part of the control frame (for example, the RTS frame) are different. For example, the number of light sources used is different. Alternatively, the method used by the light amount adjuster is different. <Training for Light Source Optimization> FIG.89illustrates an example of a configuration of a communication system. For example, as illustrated inFIG.89, the communication system includes an access point, terminal #1, terminal #2, and terminal #3. Each of the access point, terminal #1, terminal #2, and terminal #3is a communication device including light source34illustrated inFIG.88. Moreover, the communication device may have the configuration illustrated inFIG.83. When communication is performed between the access point and a terminal, the access point and the terminal perform training for at least one of (i) selecting a light source and a light receiver; and (ii) light source optimization. The method of selecting the light source and the light receiver is as described with reference toFIG.82throughFIG.87. FIG.90illustrates one example of the access point selecting a light source and setting a parameter, and terminal #1selecting a light receiver. As illustrated inFIG.90, for example, terminal #1transmits a training request symbol. This training request symbol is a symbol for requesting transmission of a training symbol. Moreover, the training symbol is a symbol for selecting a light source and a light receiver, and setting a light source parameter. For example, the training request symbol may include the MAC address of a communication partner (access point) that receives the symbol. This makes it possible to clarify the partner of the request to transmit a training symbol. Moreover, the training request symbol may include the MAC address of the device (terminal #1) that transmits the symbol. With this, the communication partner that receives the symbol can identify the device that transmits the symbol, that is to say, the device that requested the training symbol.
Moreover, the training request symbol may include information about whether to transmit the light source/light receiver selection training symbol. For example, when requesting the communication partner (access point) to transmit the light source/light receiver selection training symbol, the device (terminal #1) transmits b0=1. In other words, b0=1 is included in the training request symbol. On the other hand, when the device (terminal #1) does not request the communication partner (access point) to transmit the light source/light receiver selection training symbol, the device (terminal #1) transmits b0=0. In other words, b0=0 is included in the training request symbol. Moreover, the training request symbol may include information about whether to transmit light source parameter training symbols. For example, when requesting the communication partner (access point) to transmit light source parameter training symbols, the device (terminal #1) transmits b1=1. In other words, b1=1 is included in the training request symbol. On the other hand, when the device (terminal #1) does not request the communication partner (access point) to transmit light source parameter training symbols, the device (terminal #1) transmits b1=0. In other words, b1=0 is included in the training request symbol. Note that in the example illustrated inFIG.90, terminal #1transmits b0=1 and b1=1 to the access point. Details regarding the transmission method of the light source/light receiver selection training symbol are as described with reference toFIG.82throughFIG.87. Moreover, the device (access point) that transmits the light source/light receiver selection training symbol transmits information indicating the MAC address of the device along with information indicating the MAC address of the communication partner (terminal #1) that receives the symbol. This makes it possible to prevent malfunction of another terminal. Next, terminal #1receives the light source/light receiver selection training symbol transmitted by the access point, and performs signal processing for selecting a light source of the access point and selecting a light receiver of terminal #1. An example of such operations is as described with reference toFIG.82throughFIG.87. Next, terminal #1transmits a light source selection request symbol. Here, the light source selection request symbol may include the MAC address of the communication partner (access point). This clarifies the destination of the symbol. Moreover, the light source selection request symbol may include the MAC address of the device (terminal #1). This clarifies which device transmitted the symbol. Moreover, the light source selection request symbol may include at least one of the following items of information: (i) the identification (ID) of the light receiver used by terminal #1for reception; (ii) the number of transmission streams that terminal #1requests of the access point; and (iii) information indicating whether to perform light source parameter training or not. Next, the access point receives the light source selection request symbol transmitted by terminal #1, and transmits light source parameter training symbols. Hereinafter, the transmission of light source parameter training symbols will be described. Note that a light source parameter training symbol may include information indicating the MAC address of the device (access point) that transmitted the symbol and information indicating the MAC address of the communication partner (terminal #1) that receives the symbol.
For example, assume the light source selection request symbol transmitted by terminal #1specifies a first light source (of the access point). Note that the first light source corresponds to first light source34_1illustrated in, for example,FIG.83, and light source34illustrated inFIG.88. FIG.91illustrates an example where the access point uses the first light source to transmit (emit) a plurality of training symbols, namely a training symbol according to a first parameter through a training symbol according to a fourth parameter, as light source parameter training symbols. When the first light source has the configuration illustrated inFIG.88, the parameter set by light amount adjuster #j with respect to the training symbol according to the i-th parameter is expressed as Zij. Note that i is an integer greater than or equal to one and less than or equal to Y. Note that Y is exemplified as four in the example illustrated inFIG.91, but Y is an integer that is greater than or equal to 2. Moreover, j is an integer that is greater than or equal to one and less than or equal to X. Here, the following holds true. In the training symbol according to the a-th parameter and the training symbol according to the b-th parameter, when a≠b holds true, there is a j for which Zaj≠Zbj holds true. Terminal #1then receives the light source parameter training symbols. In the example illustrated inFIG.91, terminal #1searches the training symbols according to the first through fourth parameters, that is to say, searches the first through fourth parameters, for a parameter that yields favorable reception quality. Terminal #1then transmits a parameter request symbol including information indicating the favorable parameter. Note that the parameter request symbol may include information indicating the MAC address of the device (terminal #1) that transmitted the symbol and information indicating the MAC address of the communication partner (access point) that receives the symbol. The access point receives the parameter request symbol, and based on the information in the symbol, uses the first light source to transmit (emit) a modulated optical signal to terminal #1. Note that in this example, the first light source of the access point is selected in the selection of the light source, but the same operations are performed when a different light source is selected. In other words, when a different light source is selected, the access point changes a parameter in the light amount adjuster corresponding to the selected light source and transmits light source parameter training symbols. Thereafter, terminal #1receives this symbol, transmits a parameter request symbol, and the access point transmits (emits) a modulated optical signal based on the parameter request symbol. As described with reference toFIG.82throughFIG.87, terminal #1may select a plurality of light sources (of an access point). This allows for transmission just like multiple-input multiple-output (MIMO) transmission. In such cases, on a per light source basis, the access point changes a parameter and transmits (emits) light source parameter training symbols in accordance with the changed parameters. FIG.92illustrates an example of the access point transmitting training symbols on a per light source basis. For example, using a light source selection request symbol, terminal #1requests the access point to transmit (emit) a modulated optical signal using the first light source, the second light source, and the fourth light source. 
The access point then transmits light source parameter training symbols, as illustrated inFIG.92. More specifically, as illustrated inFIG.92, the access point transmits, from the first light source, a training symbol according to a first parameter, a training symbol according to a second parameter, a training symbol according to a third parameter, and a training symbol according to a fourth parameter. Note that the method used to set the parameter has already been described above. Then, the access point transmits, from the second light source, a training symbol according to a first parameter, a training symbol according to a second parameter, a training symbol according to a third parameter, and a training symbol according to a fourth parameter. Note that the method used to set the parameter has already been described above. The access point then transmits, from the fourth light source, a training symbol according to a first parameter, a training symbol according to a second parameter, a training symbol according to a third parameter, and a training symbol according to a fourth parameter. Note that the method used to set the parameter has already been described above. Terminal #1receives the light source parameter training symbols, and determines the favorable parameter for the first light source, the favorable parameter for the second light source, and the favorable parameter for the fourth light source. Terminal #1then transmits (emits) a parameter request symbol including information indicating the determined parameters to the access point. The access point then sets the parameters for the first light source, the second light source, and the fourth light source based on the information in the parameter request symbol. Thereafter, the access point transmits (emits) modulated optical signals using the first light source, the second light source, and the fourth light source set with the parameters. Moreover, the above processing is processing related to, in communication between an access point and terminal #1, (i) selecting a light source to be used by the access point to transmit (emit) a modulated optical signal and (ii) a light source parameter adjustment method. This processing for the light source selection and the light source parameter adjustment method is performed in the same manner even in the following case. In communication between an access point and terminal #1, the selecting of a light source to be used by terminal #1to transmit (emit) a modulated optical signal and the light source parameter adjustment method are performed in the same manner as the processing described above. More specifically, in the examples illustrated inFIG.90throughFIG.92, by changing the wording so that the operations performed by the access point are performed by terminal #1and the operations performed by terminal #1are performed by the access point, the selecting of a light source to be used by terminal #1to transmit (emit) a modulated optical signal and the light source parameter adjustment method can be implemented. Moreover, in communication between an access point and a terminal other than terminal #1, the selecting of a light source to be used by the access point to transmit (emit) a modulated optical signal and the light source parameter adjustment method can be implemented in the same manner as in communication between an access point and terminal #1.
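A terminal-side sketch of the parameter search inFIG.91andFIG.92follows. The reception-quality metric and the data structures are assumptions for illustration; the description above only specifies that the terminal searches the Y parameters trained on each requested light source and reports the favorable one in a parameter request symbol.

# quality[(source, parameter)] = reception quality measured while the access point
# sweeps parameters 1..Y on each requested light source (hypothetical metric).
from typing import Dict, Tuple

def favorable_parameters(quality: Dict[Tuple[int, int], float],
                         requested_sources: Tuple[int, ...]) -> Dict[int, int]:
    """Return, per requested light source, the parameter with the best reception quality."""
    best: Dict[int, int] = {}
    for source in requested_sources:
        candidates = {p: q for (s, p), q in quality.items() if s == source}
        best[source] = max(candidates, key=candidates.get)
    return best

# Example: light sources 1, 2, and 4 each trained with parameters 1..4 (Y = 4).
quality = {(1, 1): 0.2, (1, 2): 0.9, (1, 3): 0.5, (1, 4): 0.1,
           (2, 1): 0.7, (2, 2): 0.3, (2, 3): 0.8, (2, 4): 0.6,
           (4, 1): 0.4, (4, 2): 0.4, (4, 3): 0.2, (4, 4): 0.9}
print(favorable_parameters(quality, (1, 2, 4)))  # {1: 2, 2: 3, 4: 4}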
Moreover, in communication between an access point and a terminal other than terminal #1, the selecting of a light source to be used by the other terminal to transmit (emit) a modulated optical signal and the light source parameter adjustment method can be implemented in the same manner as in communication between an access point and terminal #1. <Training for Light Source Optimization: Variation> Next, a variation of the training for light source optimization will be described. FIG.93illustrates a variation of the training for light source optimization. As illustrated inFIG.93, for example, terminal #1transmits a training request symbol. This training request symbol is a symbol for requesting transmission of a training symbol. The training symbol is a symbol for selecting a light source and a light receiver, and setting a light source parameter. For example, the training request symbol may include the MAC address of a communication partner (access point) that receives the symbol. This makes it possible to clarify the partner of the request to transmit a training symbol. Furthermore, the training request symbol may include the MAC address of the device (terminal #1) that transmits the symbol. With this, the communication partner that receives the symbol can identify the device that transmits the symbol, that is to say, the device that requested the training symbol. In this way, terminal #1requests a training symbol for, for example, setting a light source parameter, from the access point. The access point receives the training request symbol transmitted by terminal #1, and transmits (emits) “light source/light receiver selection and light source parameter training symbols”. Here, the characterizing feature of the present variation is that the setting of the light source parameter is performed along with the selection of a light source and a light receiver. The access point transmits light source/light receiver selection and light source parameter training symbols, as described above. FIG.94illustrates an example of the access point transmitting light source/light receiver selection and light source parameter training symbols. Note that such a symbol may include information indicating the MAC address of the device (access point) that transmitted the symbol and information indicating the MAC address of the communication partner (terminal #1) that receives the symbol. The light source/light receiver selection and light source parameter training symbols may have the configuration illustrated inFIG.94, for example. Note that the access point includes four light sources, namely a first light source, a second light source, a third light source, and a fourth light source. The access point transmits training symbols from light sources capable of transmitting (emitting) modulated optical signals. Accordingly, as illustrated inFIG.94, the access point transmits (emits) training symbols from the first light source, transmits (emits) training symbols from the second light source, transmits (emits) training symbols from the third light source, and transmits (emits) training symbols from the fourth light source. Moreover, on a per light source basis, the access point changes a parameter and transmits training symbols. Note that, as described above, the access point includes a first light source, a second light source, a third light source, and a fourth light source.
Accordingly, the access point transmits, from the first light source, a training symbol according to a first parameter, a training symbol according to a second parameter, a training symbol according to a third parameter, and a training symbol according to a fourth parameter. Note that the method used to set the parameter has already been described above. Next, the access point transmits, from the second light source, a training symbol according to a first parameter, a training symbol according to a second parameter, a training symbol according to a third parameter, and a training symbol according to a fourth parameter. Note that the method used to set the parameter has already been described above. Next, the access point transmits, from the third light source, a training symbol according to a first parameter, a training symbol according to a second parameter, a training symbol according to a third parameter, and a training symbol according to a fourth parameter. Note that the method used to set the parameter has already been described above. Next, the access point transmits, from the fourth light source, a training symbol according to a first parameter, a training symbol according to a second parameter, a training symbol according to a third parameter, and a training symbol according to a fourth parameter. Note that the method used to set the parameter has already been described above. Note that the parameter setting for training symbol generation may or may not differ between different light sources. Next, terminal #1receives the light source/light receiver selection and light source parameter training symbols transmitted by the access point. Terminal #1then determines whether to use the first light source in the transmission of the modulated optical signal, and when terminal #1determines to use the first light source, determines what parameter to use. Similarly, terminal #1determines whether to use the second light source in the transmission of the modulated optical signal, and when terminal #1determines to use the second light source, determines what parameter to use. Terminal #1also determines whether to use the third light source in the transmission of the modulated optical signal, and when terminal #1determines to use the third light source, determines what parameter to use. Terminal #1also determines whether to use the fourth light source in the transmission of the modulated optical signal, and when terminal #1determines to use the fourth light source, determines what parameter to use. Terminal #1transmits a light source setting request symbol including the above determined information to the access point. Note that in conjunction with the above operations, terminal #1determines a light receiver for reception of the modulated optical signal transmitted by the access point. The access point then receives the light source setting request symbol transmitted by terminal #1, and determines the light source(s) to use in the transmission of the modulated optical signal and the parameter(s) to be used by the light source(s), based on the information included in the symbol. The access point also determines the number of modulated optical signals to be transmitted, and transmits (emits) the modulated optical signal(s) from one or more light sources (a sketch of this request handling is given below).
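On the access point side, handling of the light source setting request symbol in this variation might look like the following sketch. The request format, a mapping from each light source the terminal determined to use to the parameter it chose, is an assumption for illustration.

from typing import Dict

def apply_setting_request(request: Dict[int, int], num_sources: int) -> Dict[int, int]:
    """Return {light source: parameter} for the sources to be used; its size equals the
    number of modulated optical signals the access point will transmit (emit)."""
    settings: Dict[int, int] = {}
    for source in range(1, num_sources + 1):
        if source in request:          # terminal #1 determined to use this light source
            settings[source] = request[source]
    return settings

# Terminal #1 chose the first and third light sources with parameters 2 and 4.
print(apply_setting_request({1: 2, 3: 4}, num_sources=4))  # {1: 2, 3: 4}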
Moreover, the above processing is processing related to, in communication between an access point and terminal #1, (i) selecting a light source to be used by the access point to transmit (emit) a modulated optical signal and (ii) a light source parameter adjustment method. This processing for the light source selection and the light source parameter adjustment method is performed in the same manner even in the following case. In other words, in communication between an access point and terminal #1, the selecting of a light source to be used by terminal #1to transmit (emit) a modulated optical signal and the light source parameter adjustment method are performed in the same manner as the processing described above. More specifically, in the examples illustrated inFIG.93andFIG.94, by changing the wording so that the operations performed by the access point are performed by terminal #1and the operations performed by terminal #1are performed by the access point, the selecting of a light source to be used by terminal #1to transmit (emit) a modulated optical signal and the light source parameter adjustment method can be implemented. Moreover, in communication between an access point and a terminal other than terminal #1, the selecting of a light source to be used by the access point to transmit (emit) a modulated optical signal and the light source parameter adjustment method can be implemented in the same manner as in communication between an access point and terminal #1. Moreover, in communication between an access point and a terminal other than terminal #1, the selecting of a light source to be used by the other terminal to transmit (emit) a modulated optical signal and the light source parameter adjustment method can be implemented in the same manner as in communication between an access point and terminal #1. <Communication Modes> FIG.95illustrates three communication or transmission modes. The access point can set its mode to any one of (1) a multipoint communication mode, (2) a point-to-point communication mode, and (3) a multicast mode. Here, the access point has the following characteristics. When the access point is in the multicast mode (3), the access point does not specify the MAC address of the communication partner. Accordingly, the access point need not transmit information indicating the MAC address of the communication partner. Moreover, when the access point is in the point-to-point communication mode (2), the access point should emit light having directivity when transmitting a data symbol. Moreover, when the access point is in the multicast mode (3), the access point need not perform emission that controls the directivity of light, and may emit a plurality of beams of light in a plurality of directions (a short sketch contrasting the three modes is given at the end of this section). Note that in the VARIATION OF EMBODIMENTS 1 TO 15 section, the description used the terminology “vehicle”, “communication device”, “access point”, and “terminal”, but the naming is not limited to these examples. As already described above, the devices may be referred to by other names. Although a vehicle is presented as an example of the conveyance in the specification, other applicable examples include an airplane, an airship, a watercraft, and a drone (unmanned aircraft).
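The behavioral differences among the three modes can be contrasted in a short sketch. The mode names followFIG.95; the two boolean properties paraphrase the characteristics listed above, and the Python representation itself is an assumption.

from enum import Enum, auto

class Mode(Enum):
    MULTIPOINT = auto()      # (1) multipoint communication mode
    POINT_TO_POINT = auto()  # (2) point-to-point communication mode
    MULTICAST = auto()       # (3) multicast mode

def needs_partner_mac(mode: Mode) -> bool:
    # In the multicast mode, the partner MAC address need not be transmitted.
    return mode is not Mode.MULTICAST

def emits_directive_beam(mode: Mode) -> bool:
    # Point-to-point data symbols should be emitted with directivity; the multicast
    # mode may instead emit a plurality of beams in a plurality of directions.
    return mode is Mode.POINT_TO_POINT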
Although only some exemplary embodiments of the present disclosure have been described in detail above, those skilled in the art will readily appreciate that many modifications are possible in the exemplary embodiments without materially departing from the novel teachings and advantages of the present disclosure. Accordingly, all such modifications are intended to be included within the scope of the present disclosure. INDUSTRIAL APPLICABILITY In one aspect, the present disclosure is applicable as an optical communication system.
DETAILED DESCRIPTION Since the disclosure may have diverse modified embodiments, preferred embodiments are illustrated in the drawings and are described in the detailed description. However, this is not intended to limit the disclosure to particular modes of practice, and it is to be appreciated that all changes, equivalents, and substitutes that do not depart from the spirit and technical scope of the disclosure are encompassed in the disclosure. In the description of the disclosure, certain detailed explanations of the related art are omitted when it is deemed that they may unnecessarily obscure the essence of the disclosure. In addition, ordinal terms (e.g., first, second, and the like) used in describing the specification are merely identification symbols for distinguishing one element from another element. Further, in the specification, if one component is described as being “connected to” or “accessing” another component, it is understood that the one component may be directly connected to or may directly access the other component, but, unless explicitly described to the contrary, another component may be interposed between them. In addition, terms including “unit,” “er,” “or,” “module,” and the like disclosed in the specification mean a unit that processes at least one function or operation, and this may be implemented by hardware such as a processor, a microprocessor, a microcontroller, a central processing unit (CPU), a graphics processing unit (GPU), an accelerated processing unit (APU), a digital signal processor (DSP), an application specific integrated circuit (ASIC), or a field programmable gate array (FPGA), by software, or by a combination of hardware and software. In addition, it is intended to clarify that the division of the components in the specification is only made for each main function that each component is responsible for. That is, two or more components to be described later below may be combined into one component, or one component may be divided into two or more components according to more subdivided functions. In addition, it goes without saying that each of the components to be described later below may additionally perform some or all of the functions of other components in addition to its own main function, and some of the main functions that each of the components is responsible for may be dedicated to and performed by other components. An optical communication system according to embodiments may be applied to various optical communication networks based on a wavelength division multiplexing-passive optical network (WDM-PON), which are composed of optical communication devices that are located at remote locations from each other and transmit and receive optical signals through corresponding optical communication modules (optical transceivers). For example, the optical communication system may configure an optical transport network that is a sub-network constituting a fronthaul segment of a radio access network architecture. However, the disclosure is not limited thereto, and the technical spirit of the disclosure may be applied to a midhaul segment and a backhaul segment of the radio access network architecture. As another example, the optical communication system may be applied to an optical subscriber network. As another example, the optical communication system may be applied to a distributed antenna system (DAS) for resolving a shadow area of a base station.
Hereinafter, for convenience of description, in a case where the optical communication system configures a fronthaul segment of the radio access network architecture described above, an embodiment of a system including an optical communication device (e.g., a central office terminal (COT)) connected to a digital unit or baseband unit at a central office side and an optical communication device (e.g., a remote terminal (RT)) connected to a remote unit or remote radio head at a remote location will be mainly described. Hereinafter, various embodiments will be described in detail in order. FIG.1is a configuration diagram of an optical communication system according to an embodiment. Referring toFIG.1, an optical communication system100according to an embodiment may include a first optical communication device120and a second optical communication device130. InFIG.1, only one second optical communication device130is illustrated for convenience of description, but the inventive concept of the disclosure is not limited thereto. The first optical communication device120may be located on the side of a first site, and may include at least one optical transceiver1200. The second optical communication device130may be located at a second site apart from the first site by a certain distance, and may include at least one optical transceiver1300. The first optical communication device120and the second optical communication device130may be communicatively connected to each other through the respective optical transceivers1200and1300and an optical cable connecting them. In some embodiments, the optical communication system100may be applied to an optical subscriber network. In this case, the first optical communication device120may be an optical line terminal (OLT) at a central office side. In addition, the second optical communication device130may be any one of a remote terminal (RT), an optical network terminal (ONT) at a subscriber side, and an optical network unit. In another embodiment, the optical communication system100may be applied to a fronthaul transmission network of a distributed base station. In this case, the first optical communication device120may be a digital unit (DU) at the central office side or a termination device at a baseband unit (BBU) side. In addition, the second optical communication device130may be a remote unit (RU) or a remote radio head (RRH). In another embodiment, the optical communication system100may be applied to a distributed antenna system (DAS) for solving a shadow area of a base station. In this case, the first optical communication device120may be a headend unit, and the second optical communication device130may be an extension unit or a remote unit. As described above, the optical communication system100according to the inventive concept may be applied to various optical communication networks implemented by optical communication devices that are located remotely from each other and transmit and receive optical signals through corresponding optical transceivers. Hereinafter, an ‘optical wavelength automatic setting operation’ between the first optical communication device120and the second optical communication device130in the optical communication system100according to an embodiment will be described in detail with reference toFIGS.2to4. FIG.2is a block diagram of an optical communication device according to an embodiment. 
It should be noted thatFIG.2shows a main portion of an optical transceiver included in the optical communication device in more detail on the assumption that the optical communication system100ofFIG.1is applied to a WDM-PON. In addition, solid arrows shown inFIG.2may indicate moving paths of payload data, and dotted arrows may indicate moving paths of auxiliary management data (e.g., auxiliary management and control channel (AMCC) data). Referring toFIG.2, among a plurality of optical communication devices constituting the optical communication system100according to an embodiment, the first optical communication device120may include a first main controller (MCU)210, a first memory215, and n first optical transceivers1200-1to1200-n(where n is a natural number of 2 or more). Each of the n first optical transceivers1200-1to1200-nmay include a first sub controller (SCU)220, a first transmitter230, and a first receiver250. The n first optical transceivers1200-1to1200-nmay be connected to a first multiplexer (MUX)240to transmit an optical signal to the first MUX240or may receive an optical signal of a corresponding wavelength band from the first MUX240. In addition, among a plurality of optical communication devices constituting the optical communication system100according to an embodiment, each of n second optical communication devices130-1to130-nmay include the second optical transceiver1300, a second main controller (MCU)290, and a second memory295. Each of the n second optical transceivers1300may include a second sub controller280, a second receiver270, and a second transmitter275. The n second optical transceivers1300may be connected to a second MUX260to transmit an optical signal to the second MUX260or may receive an optical signal from the second MUX260. According to an embodiment, the first MUX240on the first optical communication device120side may be a separate device separated from the first optical communication device120or may be a component provided inside the first optical communication device120. In addition, the second MUX260may be a separate device from the n second optical communication devices130-1to130-n, but may be configured in plural and may be provided inside the n second optical communication devices130-1to130-n, respectively. In this case, the n second optical communication devices130-1to130-nmay include a plurality of optical transceivers, respectively. According to an embodiment, the first optical communication device120, the first MUX240, and the second MUX260may be connected to each other in a ring topology. In addition, according to an embodiment, a plurality of sub-multiplexers may be connected to the second MUX260, and a tree topology may be formed in such a way that the second optical communication devices130-1to130-nare connected to the sub-multiplexers. First, the first MCU210may be configured to control the operation of the first optical communication device120. The first MCU210may be connected to an external device such as a server or a network monitoring system (NMS) to transmit/receive information and data necessary for the operation of the first optical communication device120. The first memory215is a space in which program instructions and various types of information necessary for the operation of the first optical communication device120are stored, and may include a data storage medium such as a magnetic disk or a solid-state drive (SSD).
The first sub controller220is configured to be wired or wirelessly connected to the first MCU210, and may manage and control the first optical transceiver1200-1. The first sub controller220may process payload data transmission/reception and control management (wavelength setting/control, communication state monitoring, etc.) between the first optical transceiver1200-1and the second optical transceiver1300-1. The first sub controller220is an active configuration of the first optical transceiver1200-1, and may be a term that collectively refers to a processor for performing various control and processing and a memory in which firmware, etc. are stored, for transmitting low-speed first auxiliary management data through an auxiliary management and control channel together with high-speed payload data. The first sub controller220may transmit the first auxiliary management data to the second optical transceiver1300-1according to various methods. For example, the first sub controller220may simultaneously transmit the first auxiliary management data and the payload data to the second optical transceiver1300-1through a baseband intensity over-modulation method. For another example, the first sub controller220may superimpose the first auxiliary management data on the payload data and may transmit the same to the second optical transceiver1300-1through a radio frequency (RF) pilot tone method. The baseband intensity over-modulation method is a technology in which the first auxiliary management data is stacked on top of the payload data, and the RF pilot tone method is a technology of overlapping ASK or FSK modulated first auxiliary management data with the payload data. A transmission rate of the first auxiliary management data may be different from a transmission rate of the payload data. For example, a frequency of the first auxiliary management data may be several kHz, and a frequency of the payload data may be tens to hundreds of MHz. A first auxiliary management data transmission/reception method, such as the baseband intensity over-modulation and the RF pilot tone method, has already been disclosed, and thus, detailed contents thereof are omitted. In particular, the first sub controller220may generate first downstream wavelength information when a first test optical signal is transmitted. The first test optical signal is an optical signal of a first downstream wavelength generated from preset 'test information', and the first downstream wavelength information is auxiliary management data generated by the first sub controller220and may include information indicating the value of the first downstream wavelength. In other words, the first sub controller220may generate information about a wavelength of the first test optical signal as the first auxiliary management data (hereinafter, first auxiliary management data corresponding to the wavelength of the first test optical signal is referred to as 'first downstream wavelength information'). The first downstream wavelength information may be information generated to correspond to an AMCC by the first sub controller220. In addition, the first sub controller220may output the generated first downstream wavelength information to the first transmitter230. The first transmitter230is configured to convert received payload data and/or first auxiliary management data into an optical signal. The first transmitter230may include a transmitter optical sub-assembly (TOSA) made of a laser diode, laser diode driving circuitry (LDD), biasing circuitry, and the like.
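To make the baseband intensity over-modulation described above concrete, the following is a minimal numeric sketch in Python of stacking a few-kHz auxiliary management bit stream on top of a much faster payload as a small intensity offset. The rates, the 10% depth, and names such as overmodulate are illustrative assumptions rather than the patent's implementation.

import numpy as np

PAYLOAD_RATE = 100e6   # payload bit rate (tens to hundreds of MHz per the text)
AMCC_RATE = 5e3        # auxiliary management data rate (a few kHz)
DEPTH = 0.1            # over-modulation depth (assumed; kept small to protect payload)

def overmodulate(payload_bits, amcc_bits):
    # Stack each slow AMCC bit on top of the payload as a small intensity offset.
    spb = int(PAYLOAD_RATE / AMCC_RATE)               # payload bits per AMCC bit
    amcc = np.repeat(np.asarray(amcc_bits, float), spb)
    payload = np.asarray(payload_bits, float)[:amcc.size]
    return payload + DEPTH * amcc                     # combined intensity waveform

rng = np.random.default_rng(0)
amcc_bits = [1, 0, 1, 1]                              # e.g., downstream wavelength info
spb = int(PAYLOAD_RATE / AMCC_RATE)
tx = overmodulate(rng.integers(0, 2, 4 * spb), amcc_bits)
# A slow detector recovers the AMCC bits by averaging away the fast payload:
rx = tx.reshape(len(amcc_bits), -1).mean(axis=1) > 0.5 + DEPTH / 2
print(rx.astype(int))                                 # -> [1 0 1 1]

Because the offset is small and slow, the payload decision threshold is essentially unaffected, which is why the two data streams can share one wavelength.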
Payload data may be input to the first transmitter230through the LDD. In particular, the first transmitter230may generate first transmission light. The first transmission light may include a first test optical signal and a first downstream wavelength optical signal. The first test optical signal may be obtained by the first transmitter230converting test information into an optical signal. The first downstream wavelength optical signal may be obtained by the first transmitter230converting the first downstream wavelength information into an optical signal. The first test optical signal and the first downstream wavelength optical signal may be combined into the first transmission light, but may be transmitted to the outside through different channels and wavelengths. This is because the first downstream wavelength optical signal is an optical signal corresponding to an AMCC. The first transmitter230may output the first transmission light to the first MUX240. The first MUX240may be configured to multiplex an optical signal input from the first transmitter230and transmit the optical signal to an optical cable, and demultiplex signals received from the optical cable. In addition, the first MUX240may include wavelength selective switches (WSS). Accordingly, when a control signal is received, the first MUX240may control a wavelength of each switch of the WSS to correspond to the control signal (this will be described later with reference toFIG.3). The first receiver250may divide an optical signal input after being demultiplexed in the first MUX240into payload data and second auxiliary management data (the definition of the second auxiliary management data will be described later below) and output the payload data and the second auxiliary management data to corresponding configurations, respectively. In particular, the first receiver250may output the second auxiliary management data to the first sub controller220. The first receiver250may include a photo diode, a receiver optical sub-assembly (ROSA) including a trans-impedance amplifier (TIA), a post amplifier, and the like. In the above, the configuration of the first optical transceiver1200-1from among the n first optical transceivers1200-1to1200-nhas been described. Configurations of the remaining first optical transceivers1200-2to1200-nare substantially the same as that of the first optical transceiver1200-1, so a description thereof will be omitted. The second sub controller280of the second optical transceiver1300-1may be configured to generally control the operation of the second optical transceiver1300-1. The second sub controller280may manage transmission/reception of payload data between the first optical transceiver1200-1and the second optical transceiver1300-1and transmission/reception of information (hereinafter referred to as second auxiliary management data) for management and control (wavelength setting, communication state monitoring, etc.), and the like. The second sub controller280may transmit the payload data and second auxiliary management data to the first optical transceiver1200-1according to various methods. Like the first sub controller220, the second sub controller280may transmit the second auxiliary management data to the first optical transceiver1200-1without affecting the payload data through various methods.
The second sub controller280is an active configuration of the second optical transceiver1300-1, and may collectively refer to a processor that processes and controls information that can be transmitted and received through an auxiliary management and control channel, a memory in which firmware, etc. are stored, and the like. The second receiver270may be configured to correspond to the first receiver250, and the second transmitter275may be configured to correspond to the first transmitter230. The payload data and the second auxiliary management data transmitted to the first optical transceiver1200-1through the second transmitter275and the second MUX260may be converted into an optical signal and multiplexed. An optical signal received from the first optical transceiver1200-1through the second MUX260and the second receiver270may be demultiplexed and converted into an electrical signal. The second MCU290and the second memory295have configurations similar to those of the first MCU210and the first memory215, respectively, and thus a redundant description thereof will be omitted. In the above, all functions of respective components of the first and second optical transceivers1200-1and1300-1have been described. Hereinafter, an automatic wavelength setting operation for establishing a communication channel between the n first optical transceivers1200-1to1200-nand the n second optical transceivers1300will be described in detail with reference toFIGS.3and4. FIG.3is a configuration diagram of an optical module and an optical wavelength setting device according to an embodiment. Solid arrows shown inFIG.3may indicate moving paths of payload data, and dotted arrows may indicate moving paths of auxiliary management data (e.g., AMCC data). Referring toFIG.3, each of n first optical transceivers, that is, each of a first-1optical transceiver310-1to a first-n optical transceiver310-nmay be connected to the first MUX240. A first-1transmitter230-1of the first-1optical transceiver310-1may be connected to a first transmitting port P11of the first MUX240, and a first-1receiver250-1may be connected to a first receiving port P12of the first MUX240. A first-2transmitter230-2of the first-2optical transceiver310-2may be connected to a second transmitting port P21of the first MUX240, and a first-2receiver250-2may be connected to a second receiving port P22of the first MUX240. Similarly, a first-n transmitter230-nof the first-n optical transceiver310-nmay be connected to an nthtransmitting port Pn1of the first MUX240, and a first-n receiver250-nmay be connected to an nthreceiving port Pn2of the first MUX240. The first-1transmitter230-1and the first transmitting port P11of the first MUX240may be connected to each other by wire (e.g., an optical cable), and a first transmission coupler330-1may be formed in the optical cable. The first transmission coupler330-1may couple first transmission light output from the first-1transmitter230-1and output the first transmission light to a downstream wavelength analyzer340. That is, first partial transmission light is an optical signal separated from the first transmission light by the first transmission coupler330-1, and may be input to the downstream wavelength analyzer340. The first-2transmitter230-2and the second transmitting port P21of the first MUX240may be connected to each other by wire (e.g., an optical cable), and a second transmission coupler330-2may be formed in the optical cable.
The second transmission coupler330-2may couple second transmission light output from the first-2transmitter230-2and output the second transmission light to the downstream wavelength analyzer340. That is, second partial transmission light is an optical signal separated from the second transmission light by the second transmission coupler330-2, and may be input to the downstream wavelength analyzer340. Similarly, the first-n transmitter230-nand the nthtransmitting port Pn1of the first MUX240may be connected to each other by wire (e.g., an optical cable), and an nthtransmission coupler330-nmay be formed in the optical cable. The nthtransmission coupler330-nmay couple nthtransmission light output from the first-n transmitter230-nand output the nthtransmission light to the downstream wavelength analyzer340. That is, nthpartial transmission light is an optical signal separated from the nthtransmission light by the nthtransmission coupler330-n, and may be input to the downstream wavelength analyzer340. The downstream wavelength analyzer340may receive the first partial transmission light to the nthpartial transmission light. Because the first partial transmission light is a portion of the first transmission light, the first partial transmission light may include a portion of a first test optical signal and a portion of a first downstream wavelength optical signal. Therefore, the downstream wavelength analyzer340may separate the first partial transmission light into a portion of the first test optical signal and a portion of the first downstream wavelength optical signal, and then analyze the first downstream wavelength optical signal to recognize a first downstream wavelength. In addition, the downstream wavelength analyzer340may separate the second partial transmission light into a portion of a second test optical signal and a portion of a second downstream wavelength optical signal, and then analyze the second downstream wavelength optical signal to recognize the second downstream wavelength. In the same way, the downstream wavelength analyzer340may separate the nthpartial transmission light into a portion of an nthtest optical signal and a portion of an nthdownstream wavelength optical signal, and then analyze the nthdownstream wavelength optical signal to recognize an nthdownstream wavelength. As described above, the first to nthdownstream wavelength optical signals may correspond to an AMCC. Accordingly, the downstream wavelength analyzer340may be configured to analyze an AMCC signal. In addition, the downstream wavelength analyzer340may output information about the first to nthdownstream wavelengths to a WSS controller350. The WSS controller350may generate a first control signal by using the information about the first to nthdownstream wavelengths. For example, the WSS controller350may generate a first control signal that allows the first transmitting port P11to pass a signal of the first downstream wavelength, allows the second transmitting port P21to pass a signal of the second downstream wavelength, and allows the nthtransmitting port Pn1to pass a signal of the nthdownstream wavelength. In other words, the first control signal may be a signal for controlling individual filters of the transmitting ports P11to Pn1respectively corresponding to the first to nthtransmission lights. The WSS controller350may output the first control signal to the first MUX240. The first MUX240may include WSS, and may control the WSS according to the first control signal.
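In code form, the mapping that the first control signal encodes can be sketched as follows in Python; the port labels, dictionary layout, and function names are hypothetical illustrations rather than the patent's interfaces.

def build_first_control_signal(downstream_nm):
    # downstream_nm maps a transmitting-port label to the wavelength (nm)
    # recognized by the downstream wavelength analyzer for that port.
    return {port: {"passband_nm": wl} for port, wl in downstream_nm.items()}

def apply_to_wss(control_signal):
    # Set each port's wavelength-selective switch to its assigned passband.
    for port, setting in control_signal.items():
        print(f"{port}: pass only {setting['passband_nm']:.2f} nm")

analyzed = {"P11": 1550.12, "P21": 1550.92, "Pn1": 1554.13}
apply_to_wss(build_first_control_signal(analyzed))

The essential point is that the control signal is a per-port lookup: each transmitting port is told to pass exactly the wavelength its own transmitter was observed to emit.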
For example, a switch capable of selecting a wavelength may be formed in each transmitting port and receiving port of the first MUX240, and the first MUX240may control individual switches formed in each transmitting port and receiving port according to the first control signal. That is, the first MUX240may control a first-1switch corresponding to the first transmitting port P11to correspond to the first downstream wavelength. In addition, the first MUX240may control the second-1switch corresponding to the second transmitting port P21to correspond to the second downstream wavelength. Similarly, the first MUX240may control an nth-1switch corresponding to the nthtransmitting port Pn1to correspond to the nthdownstream wavelength. Accordingly, the first transmitting port P11may output only an optical signal corresponding to the first downstream wavelength, the second transmitting port P21may output only an optical signal corresponding to the second downstream wavelength, and the nthtransmitting port Pn1may output only an optical signal corresponding to the nthdownstream wavelength. In addition, the first MUX240may multiplex optical signals output from the first transmitting port P11to the nthtransmitting port Pn1to generate 'multiplexed transmission light', and may transmit the multiplexed transmission light to the second MUX260through an optical cable. When the multiplexed transmission light is received, the second MUX260may demultiplex the multiplexed transmission light and output the same to each of n second optical transceivers320-1to320-n. For example, the second MUX260may output a portion of the first transmission light included in the multiplexed transmission light (a portion of the first transmission light excluding the first partial transmission light, hereinafter, the portion received by the second MUX260will be abbreviated as 'first transmission light') to the second-1optical transceiver320-1corresponding to the first transmission light. In addition, the second MUX260may output the second transmission light included in the multiplexed transmission light to the second-2optical transceiver320-2corresponding to the second transmission light. Similarly, the second MUX260may output the nthtransmission light included in the multiplexed transmission light to the second-n optical transceiver320-ncorresponding to the nthtransmission light. A second-1receiver270-1of the second-1optical transceiver320-1may receive the first transmission light. In the second-1receiver270-1, the first test optical signal and the first downstream wavelength optical signal of the first transmission light may be separated, and the first downstream wavelength optical signal may be output to the second-1sub controller280(Wavelength Data Out). The second-1sub controller280may recognize the first downstream wavelength by analyzing the first downstream wavelength optical signal. In addition, the second-1sub controller280may generate first upstream wavelength information when first reply light is transmitted in response to reception of the first transmission light. The first reply light is an optical signal set to be transmitted when the first transmission light is received, and may include preset test information (this may be independent of information corresponding to the first test optical signal) as payload data. For example, when the first transmission light is received, the second-1optical transceiver320-1may generate a first reply light and output the first reply light to the second MUX260.
At this time, a second-1transmitter275-1may generate preset test information as a first reply test optical signal, and the second-1sub controller280may generate information about a wavelength of the first reply test optical signal as first auxiliary management data (hereinafter, the wavelength of the first reply test optical signal is referred to as a ‘first upstream wavelength’, and the first auxiliary management data corresponding to the first upstream wavelength is referred to as ‘first upstream wavelength information’). The first upstream wavelength information may be information generated to correspond to an AMCC by the second-1sub controller280. In addition, the second-1sub controller280may include information about the first downstream wavelength in the first upstream wavelength information. Accordingly, the first upstream wavelength information may include information about a wavelength of the first test optical signal and a wavelength of the first reply test optical signal. In addition, the second-1sub controller280may output the generated first upstream wavelength information to the second-1transmitter275-1. The second-1transmitter275-1may generate first reply light by using input test information and the first upstream wavelength information. Because this may be similar to the operation of the first-1transmitter230-1for generating the first transmission light, a detailed description thereof will be omitted. In addition, the second-1transmitter275-1may output the first reply light to the second MUX260. In the same way, a second-2transmitter275-2may generate second reply light and output the same to the second MUX260, and a second-n transmitter275-nmay generate nthreply light and output the same to the second MUX260. The first to nthreply lights include first to nthtest optical signals and first to nthupstream wavelength optical signals, respectively, and the first to nthupstream wavelength optical signals may correspond to an AMCC as auxiliary management information. The second MUX260may generate a multiplexed reply light by multiplexing the first to nthreply lights, and may transmit the multiplexed reply light to the first MUX240through an optical cable. In this case, a reply coupler360may be formed between the first MUX240and the second MUX260. The reply coupler360may couple a portion of the multiplexed reply light and output the same to a third MUX370. In other words, ‘partially multiplexed reply light’ is an optical signal separated from the multiplexed reply light by the reply coupler360and may be input to the third MUX370. The third MUX370may demultiplex the partially multiplexed reply light and output the same to an upstream wavelength analyzer380. For example, the third MUX370may demultiplex the partially multiplexed reply light and separate the same into first to nthpartial reply lights, and output the first to nthpartial reply lights to the upstream wavelength analyzer380. The upstream wavelength analyzer380may receive the first to nthpartial reply lights. Because the first partial reply light is a portion of the first reply light, the first partial reply light may include a portion of the first upstream wavelength optical signal. Accordingly, the upstream wavelength analyzer380may separate a portion of the first upstream wavelength optical signal from the first partial reply light, and then analyze the first upstream wavelength optical signal to recognize the first upstream wavelength. 
In addition, the upstream wavelength analyzer380may separate a portion of the second upstream wavelength optical signal from the second partial reply light, and then analyze the second upstream wavelength optical signal to recognize the second upstream wavelength. In the same way, the upstream wavelength analyzer380may separate a portion of the nthupstream wavelength optical signal from the nthpartial reply light, and then analyze the nthupstream wavelength optical signal to recognize the nthupstream wavelength. As described above, the first to nthupstream wavelength optical signals may correspond to an AMCC. Accordingly, the upstream wavelength analyzer380may be configured to analyze an AMCC signal. In addition, the upstream wavelength analyzer380may output information about the first to nthupstream wavelengths to the WSS controller350. The WSS controller350may generate a second control signal by using information about the first to nthupstream wavelengths. For example, the WSS controller350may generate a second control signal that allows the first receiving port P12to pass a signal of the first upstream wavelength, allows the second receiving port P22to pass a signal of the second upstream wavelength, and allows the nthreceiving port Pn2to pass a signal of the nthupstream wavelength. In other words, the second control signal may be a signal for controlling individual filters of the receiving ports P12to Pn2respectively corresponding to the first to nthreply lights. The WSS controller350may output the second control signal to the first MUX240. The first MUX240may include WSS, and may control the WSS according to the second control signal. For example, the first MUX240may control individual switches formed in each receiving port according to the second control signal. That is, the first MUX240may control the first-2switch corresponding to the first receiving port P12to correspond to the first upstream wavelength. In addition, the first MUX240may control the second-2switch corresponding to the second receiving port P22to correspond to the second upstream wavelength. Similarly, the first MUX240may control the nth-2switch corresponding to the nthreceiving port Pn2to correspond to the nthupstream wavelength. Accordingly, the first receiving port P12may output only an optical signal corresponding to the first upstream wavelength, the second receiving port P22may output only an optical signal corresponding to the second upstream wavelength, and the nthreceiving port Pn2may output only an optical signal corresponding to the nthupstream wavelength. In addition, the first MUX240may demultiplex a portion of the multiplexed reply light input through the optical cable (i.e., a portion of the multiplexed reply light excluding the partially multiplexed reply light) and output the same to each of the n first optical transceivers310-1to310-n. FIG.4is a flowchart illustrating an automatic optical wavelength setting operation according to an embodiment. InFIG.4, operations of each component (the downstream wavelength analyzer, WSS controller, upstream wavelength analyzer, and third MUX) of an optical wavelength setting device390described with reference toFIG.3are reconstructed over time and illustrated. Referring toFIG.4, the automatic optical wavelength setting operation of the optical wavelength setting device390may be more easily understood.
Referring toFIG.4, in operation S410, the first optical communication device120may generate first to nthtransmission lights including any one of first downstream wavelength information to nthdownstream wavelength information, and in operation S420, the first optical communication device120may transmit the generated first to nthtransmission lights to the second optical communication device130. As described above, each of the n first optical transceivers1200-1to1200-nincluded in the first optical communication device may generate transmission light including downstream wavelength information. The first MUX240connected to the first optical transceivers1200-1to1200-nreceives and multiplexes the first to nthtransmission lights, and may transmit the multiplexed first to nthtransmission lights to the second MUX260through an optical cable. In operation S450, the second optical communication device130may receive the corresponding mthtransmission light from among the first to nthtransmission lights received from the first optical communication device, and may read mthdownstream wavelength information included in the received mthtransmission light (m is a natural number less than or equal to n). The second MUX260may receive and demultiplex the multiplexed transmission lights. The second optical communication device130may receive the corresponding mthtransmission light from among the demultiplexed first to nthtransmission lights, and read the mthdownstream wavelength information included in the received mthtransmission light. In operation S460, the second optical communication device130may generate pthreply light including the read mthdownstream wavelength information and pthupstream wavelength information, and may transmit the generated pthreply light to the first optical communication device120through the second MUX260. The second MUX260may multiplex reply light provided from each of a plurality of second optical communication devices and transmit the multiplexed reply light to the first MUX240. The first MUX240may demultiplex the received reply light. In operation S470, the first optical communication device120may analyze the downstream wavelength information and the upstream wavelength information included in each of the reply lights received from the second optical communication device and control the WSS according to a control signal generated based on a result of the analysis, and thus, in operation S480, the first optical communication device120may set an optical wavelength between the optical communication devices. As described above, in the optical communication system100according to an embodiment, even if a plurality of wavelength-variable optical modules are included, an optical wavelength corresponding to these optical modules may be automatically set without an administrator's visit. While the embodiments have been particularly shown and described, it will be understood by one of ordinary skill in the art that various changes in form and details may be made therein without departing from the spirit and scope of the disclosure as defined by the appended claims.
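As a compact recap of operations S410 to S480 above, the following Python sketch walks one round of the wavelength-setting handshake with the optics abstracted away. The class names (FirstDevice, SecondDevice) and dictionary fields are hypothetical stand-ins, not the patent's API; the point is only the information flow: each channel's downstream wavelength goes out, is echoed back with the upstream wavelength, and both are used to configure the WSS.

class FirstDevice:
    def __init__(self, downstream_nm):
        self.downstream_nm = downstream_nm        # one wavelength per transceiver
        self.wss = {}                             # channel -> passband settings

    def transmit(self):                           # S410/S420
        return [{"ch": i, "downstream_nm": wl}
                for i, wl in enumerate(self.downstream_nm)]

    def configure_from_replies(self, replies):    # S470/S480
        for r in replies:
            self.wss[r["ch"]] = {"tx_pass_nm": r["downstream_nm"],
                                 "rx_pass_nm": r["upstream_nm"]}

class SecondDevice:
    def __init__(self, ch, upstream_nm):
        self.ch, self.upstream_nm = ch, upstream_nm

    def reply(self, tx_light):                    # S450/S460
        assert tx_light["ch"] == self.ch          # the MUX routed the right channel
        return {"ch": self.ch,
                "downstream_nm": tx_light["downstream_nm"],  # echoed back
                "upstream_nm": self.upstream_nm}

cot = FirstDevice([1550.12, 1550.92, 1551.72])
rts = [SecondDevice(i, 1530.33 + 0.8 * i) for i in range(3)]
replies = [rt.reply(light) for rt, light in zip(rts, cot.transmit())]
cot.configure_from_replies(replies)
print(cot.wss[0])   # {'tx_pass_nm': 1550.12, 'rx_pass_nm': 1530.33}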
37,748
11863238
DETAILED DESCRIPTION What follows is a description of various aspects, embodiments and/or examples in which the invention may be practiced. Reference will be made to the attached drawings, and the information included in the drawings is part of this detailed description. The aspects, embodiments and/or examples described herein are presented for exemplification purposes, and not for limitation purposes. It should be understood that structural and/or logical modifications could be made by one of ordinary skill in the art without departing from the scope of the invention. Therefore, the scope of the invention is defined by the accompanying claims and their equivalents. It should be understood that, for clarity of the drawings and of the specification, some or all details about some structural components or steps that are known in the art are not shown or described if they are not necessary for the invention to be understood by one of ordinary skill in the art. For the following description, it can be assumed that most correspondingly labeled elements across the figures (e.g.,105A and205A, etc.) possess the same characteristics and are subject to the same structure and function. If there is a difference between correspondingly labeled elements that is not pointed out, and this difference results in a non-corresponding structure or function of an element for a particular embodiment, example or aspect, then the conflicting description given for that particular embodiment, example or aspect shall govern. FIG.1is a diagram illustrating a top view of an optical-to-electrical conversion system101comprising an integrated photonics receiver chip102, a transimpedance amplifier106, and a digital signal processor107, according to an aspect. As described previously in the Background above, an integrated photonics receiver may be configured to receive optical light, such that the optical light may be absorbed by a photodetector contained within the receiver, as an example. The absorbed optical light may be converted to an electrical signal within the photodetector, and the electrical signal may be electrically transmitted to a transimpedance amplifier (e.g., a photocurrent to voltage converter), for example, contained within an electrical chip/die positioned at or near an output of the integrated photonics receiver. As will be described throughout this disclosure below, an improved integrated photonics receiver may be provided having two photodetectors and dispersion compensators, as an example, to enable a more effective and efficient optical-to-electrical conversion. It should be noted that the disclosed optical-to-electrical conversion system101ofFIG.1is configured to utilize direct detection in the conversion of the absorbed optical light into an electrical signal. As shown inFIG.1, the integrated photonics receiver chip ("integrated photonics receiver chip," "integrated photonics receiver," "receiver," "receiver chip")102may be provided with a polarization splitter rotator103, first and second dispersion compensators104A and104B, and first and second photodetectors105A and105B, as an example. As shown as an example, the polarization splitter rotator103may be provided with an input optical port112disposed at a first end ("first end," "input end," "input")102A of the receiver chip102, and first and second output optical ports113A and113B. As will be described in more detail later, the input port112may be adapted to receive an optical signal115, as an example.
As shown, the first and the second dispersion compensators104A and104B may each be provided with an input port114, each optically connected (via optical waveguides/channels (not shown)) to the polarization splitter rotator output ports113A and113B, respectively. Additionally, the first and the second dispersion compensators104A and104B may each further comprise an output port116, as shown. The first and the second photodetectors105A and105B may each be provided with an input port117, as shown, each optically connected (via optical waveguides/channels (not shown)) to the output port of each of the first and the second dispersion compensators104A and104B, respectively, as an example. Finally, as shown, an output of each of the first and the second photodetectors105A and105B may be electrically connected (via electrical connections111) to the transimpedance amplifier106, which will be described in more detail later. As shown inFIG.1, the first and the second photodetectors105A and105B may be electrically connected in parallel (e.g., at111), such that photocurrents from the first and the second photodetectors, corresponding to the TE polarizations of two optical signals (e.g.,115A and115B), for example, may be constructively combined, as will be discussed in more detail later. As can be seen inFIG.1, each photodetector105A and105B may be configured to operate as a single polarization waveguide photodetector. As mentioned above, the first and the second photodetectors105A and105B may electrically connect to the transimpedance amplifier106. As an example, the transimpedance amplifier106may be contained within an electrical chip positioned at or near a second end ("second end," "output end," "output")102B of the receiver chip102, as will be discussed in more detail when referring toFIG.2. It should be noted that, although the first end102A and the second end102B are shown opposite each other inFIG.1, the first end102A and the second end102B may also be disposed in alternate ways, such as at a right angle with respect to each other. As shown inFIG.1, the transimpedance amplifier106may be electrically connected (via electrical connection119, for example) to a digital signal processor (DSP)107. The digital signal processor107may be an application-specific integrated circuit (not shown) integrated on the electrical chip with the transimpedance amplifier106, as an example. As will be described in more detail later in this disclosure below, the digital signal processor107may be provided with forward error correction capabilities, such that the DSP107can provide digital feedback via electrical signals ("electrical signals," "digital signal," "control signals")121A and121B, for example, to each of the first and the second dispersion compensators104A and104B, respectively, for automating the control of the dispersion compensators, for example. As shown as an example inFIG.1, an optical signal115having TE and TM polarization modes may be launched into the receiver102at the input optical port112of the polarization splitter rotator103. As indicated, within the polarization splitter rotator103, the optical signal115is split into two signals, such that the split optical signals115A and115B exit the output optical ports113A and113B, respectively. The TM polarization mode of the optical signal115may be split and rotated to TE polarization and exit the output optical port113B, and the TE polarization mode of the optical signal115may be split, keep TE polarization, and exit the output optical port113A, as an example.
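The splitting-and-rotation step above can be pictured on Jones vectors. The following minimal Python sketch, with assumed field values and an idealized lossless device, shows the TM component being mapped onto the TE axis of the second output so that both downstream paths carry TE-polarized light.

import numpy as np

# Jones-vector sketch of the polarization splitter rotator's job:
# the TM component of the input is split off and rotated onto TE.
field = np.array([0.8 + 0.1j, 0.5 - 0.2j])    # input [TE, TM] components (assumed)

te_out = np.array([field[0], 0.0])            # port 113A: TE stays TE
tm_out = np.array([field[1], 0.0])            # port 113B: TM rotated onto TE
power_in = np.sum(np.abs(field) ** 2)
power_out = np.sum(np.abs(te_out) ** 2) + np.sum(np.abs(tm_out) ** 2)
print(np.isclose(power_in, power_out))        # an ideal lossless split preserves power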
As an example, the polarization splitting and rotation (within103, for example) may be achieved either by an edge coupler (not shown) with a suitable integrated polarization splitter rotator device or by a dual-polarization grating coupler (not shown). As shown, the split optical signals115A and115B may propagate toward and enter the first and the second dispersion compensators104A and104B, respectively, via the input ports114, as an example. As will be described in greater detail later in this disclosure, the first and the second dispersion compensators104A and104B, using the DSP feedback loop (realized via digital signals121A and121B), may enable the receiver chip102to reduce/compensate for dispersion (e.g., chromatic dispersion, intermodal dispersion, polarization mode dispersion) incurred by the optical signals115A and115B before they are received by the parallel photodetectors105A and105B, respectively, as an example. It should be understood that the optical paths from the optical input port112to the first and the second photodetector input ports117are arranged with substantially equal length, i.e., substantially identical in terms of propagation delay, to achieve constructive signal combining, as disclosed in greater detail hereinafter. FIG.2is a diagram illustrating a top view of exemplary electrical connections211between the photodetectors105A and105B of the integrated photonics receiver chip102and the transimpedance amplifier106ofFIG.1, according to an aspect. As mentioned previously above when referring toFIG.1, the first and the second photodetectors205A and205B may be electrically connected to the transimpedance amplifier206, which may be integrated on an electrical chip220, as shown inFIG.2, as an example. As will be described in detail below, the first and the second photodetectors205A and205B may be electrically connected to the transimpedance amplifier206in parallel, such that the photocurrents being electrically transmitted from each of the first and the second photodetectors205A and205B may be combined. As an example, let the function blocks module222represent each of the additional exemplary optical components of the receiver chip202shown previously inFIG.1, such as the dispersion compensators (104A and104B) and the polarization splitter rotator (103), for example. Similarly, let the function blocks module223represent each of the additional exemplary electrical components of the electrical chip220shown previously inFIG.1, such as the DSP (107) and feedback loop (121A and121B), for example. As shown as an example, the receiver chip202may further comprise a signal pad202S (labeled S) and a pair of ground pads202G (labeled G) disposed along the second end202B, such that a GSG pad configuration is formed at the output end202B of the receiver chip202. It should be noted that the receiver chip202can also be configured with a single ground pad instead of pairs. The GSG pad configuration of the receiver202may thus model the GSG pad configuration of a traditional transimpedance amplifier, as an example. As such, as shown inFIG.2, the electrical chip220may be provided with a signal pad220S and a pair of ground pads220G disposed at a first end220A of the electrical chip220, such that the GSG configurations of both receiver and electrical chips202and220, respectively, are parallelly aligned.
As an example, the first and the second photodetectors205A and205B may be electrically connected to each of the electrical pads via signal traces (e.g., copper traces) etched into the receiver chip202. As such, for example, the first and the second photodetectors205A and205B may each electrically connect to one of the pair of ground pads202G via signal trace218G and to the signal pad202S via signal trace218S, respectively, as shown. Finally, as shown, each corresponding pair of electrical pads between the receiver202and the electrical chip220may be electrically connected via the electrical connections211(e.g., wires), such that ground pads202G electrically connect to ground pads220G, and signal pad202S electrically connects to signal pad220S, as an example. As mentioned above, the GSG pad configuration electrically connected to the first and the second photodetectors205A and205B may match the conventional GSG pad configuration of the transimpedance amplifier206, for example. As shown inFIG.2, the transimpedance amplifier206may be electrically connected to the ground pads220G via signal traces222G, for example, and to the signal pad220S via signal trace222S. Thus, a secure electrical link is formed from each of the first and the second photodetectors205A and205B to the transimpedance amplifier206, as shown. As is known, an optical photodetector is adapted to absorb optical light and convert the absorbed optical light into an electrical signal (a photocurrent, for example). As discussed previously in the Background above, traditional optical receivers comprise a single photodetector having two inputs. As described throughout this disclosure herein above, the optical receiver202may comprise two parallelly aligned, single-input photodetectors (205A and205B, for example). The use of the first and the second photodetectors205A and205B, as compared to the traditional single photodetector, helps mitigate channel impediments (e.g., waveguide impurities, reflection, loss) by ensuring fuller optical absorption, so as to maximize the optical signal clarity at the output (at the transimpedance amplifier206, for example). Thus, an advantage of using two parallelly connected photodetectors in the disclosed receiver is that the optical signal clarity may be maximized, which may improve upon the return loss of traditional receivers. As mentioned above, an electrical link (i.e., an electrical circuit) may be formed between each of the first and the second photodetectors205A and205B and the transimpedance amplifier206via the electrically connected and paired GSG pads, respectively. As an example, as similarly shown previously inFIG.1, let an optical signal (e.g.,115inFIG.1) be launched into the optical receiver202. As mentioned previously above, the optical signal may be split into two optical signals, each having the same polarization (TE polarization, for example). Within the first and the second photodetectors205A and205B, each of the split optical signals, respectively, may be absorbed, and a photocurrent (not shown) corresponding to each of the optical signals may be outputted from the first and the second photodetectors205A and205B. As an example, the two photocurrents (not shown) may be electrically transmitted, via the signal traces218S, for example, to the signal pad202S, where the two photocurrents may be combined constructively into a single photocurrent.
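A back-of-the-envelope Python sketch of the parallel combination described above follows. The responsivity and transimpedance values are assumptions chosen only to show how the two photocurrents add at the shared signal pad before the transimpedance amplifier converts the sum to a voltage (the conversion itself is described next).

RESPONSIVITY = 0.8     # A/W, a typical photodiode responsivity (assumed)
TIA_GAIN = 5e3         # transimpedance in ohms (assumed)

def photocurrent(optical_power_w):
    # Direct detection: photocurrent is proportional to optical power.
    return RESPONSIVITY * optical_power_w

# Split optical signal: the TE part and the rotated TM part of the same data.
i_a = photocurrent(200e-6)        # from photodetector 205A
i_b = photocurrent(180e-6)        # from photodetector 205B
i_sum = i_a + i_b                 # combined constructively at signal pad S
v_out = TIA_GAIN * i_sum          # transimpedance amplifier output
print(f"{i_sum * 1e6:.0f} uA -> {v_out * 1e3:.1f} mV")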
The single photocurrent (not shown) may travel electrically between the signal pads202S and220S, via the electrical connection211, for example, and the single photocurrent may electrically flow from the signal pad220S into the transimpedance amplifier206via the signal trace222S. The transimpedance amplifier206may then convert the single photocurrent into a voltage signal (not shown), as an example. Thus, as outlined above, the use of the first and the second photodetectors205A and205B and the electrically connected GSG pads (202G and202S), respectively, may not only improve the optical reflection issues traditionally experienced by conventional receivers, but may also negate the need for using phase tuners and combiners to control the optical signal's phase, as outlined previously in the Background above. Thus, an advantage is that, because of the use of the first and the second photodetectors, conventionally used optical components, such as phase tuners and combiners, may no longer be needed, which may reduce associated manufacturing costs. Another advantage is that, because the phase tuner may be negated, a control algorithm adapted to control the phase tuner is no longer needed either, which may simplify operation of the receiver and thus reduce associated operational costs. FIGS.3A-3Billustrate top views of exemplary tunable dispersion compensator structures304, realized by cascaded ring resonators326and cascaded Mach-Zehnder interferometers327, respectively, according to an aspect. As described previously throughout this disclosure above, the integrated receiver chip (e.g.,102inFIG.1) may be provided with first and second dispersion compensators (e.g.,104A and104B inFIG.1) parallelly connected to the first and the second photodetectors (e.g.,105A and105B), respectively. As an example, each of the first and the second dispersion compensators may be or may comprise a tunable dispersion compensator (e.g.,532inFIG.5). The tunable dispersion compensator304may be an integrated optical tunable filter, which is capable of compensating for phase distortion of light signals115caused by optical fiber dispersion. As will be described in detail below, the tunable dispersion compensator may be implemented using either cascaded ring resonators326or cascaded Mach-Zehnder interferometers (MZIs)327, as an example. As shown inFIG.3A, the tunable dispersion compensator304may be realized by cascaded ring resonators326, as an example. Such ring resonators326may be constructed using silicon photonics technology, for example, and be optically connected to a silicon waveguide325, resulting in the cascaded arrangement shown, as an example. The cascaded ring resonators326may be integrated onto an optical channel of the receiver chip (e.g.,102inFIG.1), such that optical light being launched into the receiver chip may be propagated and coupled into each ring resonator326, thus resulting in a tuning of the dispersion of the optical light. Tuning the dispersion of the optical light may be performed by tuning the resonance frequency of the ring resonator326with phase tuners326A. As an example, the phase tuners326A may each be a thermo-optic phase shifter, which can change the resonance frequency of the ring resonator326.
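For the ring-based structure just described, a small Python sketch of a single all-pass stage shows the mechanism: shifting the ring's round-trip phase (as a thermo-optic tuner does) moves the resonance and therefore the group-delay peak that provides the dispersion. The coupling and loss values are assumptions, and the delay is expressed in round-trip units rather than physical time.

import numpy as np

r = 0.9                          # self-coupling coefficient (assumed)
a = 0.99                         # round-trip amplitude, i.e., low loss (assumed)

def phase_response(phi, tune=0.0):
    # Phase of the all-pass transfer H = (r - a*e^{-j(phi+tune)}) / (1 - r*a*e^{-j(phi+tune)})
    z = a * np.exp(-1j * (phi + tune))
    return np.angle((r - z) / (1 - r * z))

phi = np.linspace(-np.pi, np.pi, 4001)   # normalized optical frequency axis
for tune in (0.0, 0.5):                  # the tuner shifts the resonance by 0.5 rad
    gd = -np.gradient(np.unwrap(phase_response(phi, tune)), phi)  # group delay
    print(f"tune={tune}: peak delay at phi={phi[np.argmax(gd)]:+.2f} rad")

Cascading several such stages with staggered resonances is what lets the structure approximate a linear group-delay slope, i.e., a target dispersion, across the signal band.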
Alternatively, as shown inFIG.3B, the tunable dispersion compensator304may be realized by cascaded MZIs327, as an example. As shown inFIG.3B, the tunable dispersion compensator is composed of cascaded alternating symmetrical and asymmetrical MZIs as shown by328B and328A, respectively, for example. Furthermore, each MZI327may be optically connected such that the output ports of a first MZI optically connect to the input ports of an adjacent second MZI, as shown at329, for example, so as to form the cascaded arrangement shown. As an example, the cascaded MZIs327may be integrated onto an optical channel of the receiver chip (e.g.,102inFIG.1), such that optical light being launched into the receiver chip may be propagated through each MZI (e.g.,328A,328B). The symmetric MZIs (inter couplers)328B function as tunable couplers for guiding the path that the optical signal will take, while the asymmetric MZIs (outer couplers)328A function as the dispersive element. The two outer couplers328A may be set to a 50% coupling ratio. Controlling the coupling ratio of the inter couplers328B will result in a tuning of the dispersion applied to the optical light. As an example, the coupling ratios of the inter couplers328B may be controlled by phase tuners360, which may each be a thermo-optic phase shifter. It should be understood that either structure304may be configured to receive a control signal for causing a tuning of the dispersion, for example. FIG.4is an exemplary plot430illustrating a simulation of the tunable dispersion compensator comprising cascaded ring resonators illustrated inFIG.3A, according to an aspect. As described previously above when referring toFIGS.3A-3B, the integration of a dispersion compensator structure (e.g., cascaded ring resonators or cascaded MZIs) may enable the dispersion of a propagating optical signal on the integrated receiver chip (e.g.,102inFIG.1) to be tuned, as an example. As shown inFIG.4, the plot430illustrates a dispersion curve ("dispersion curve," "simulation curve")431, measured in picoseconds/nanometer (ps/nm), as indicated on the y-axis, against increasing values of wavelength, measured in nm, as indicated on the x-axis, for example. As mentioned above, the exemplary dispersion curve431shown inFIG.4was obtained via a simulation of a dispersion compensator using the cascaded ring topology illustrated inFIG.3A. As shown by the dispersion curve431inFIG.4, for a single dispersion compensator device (e.g.,104A inFIG.1), the amount of dispersion compensation provided can be up to −100 ps/nm, which may occur at 1551.5 nm (along the x-axis), for example, and remain for about 0.5 nm, such that the optical bandwidth431A is about 0.5 nm, as an example. Within the bandwidth region431A, for example, the phase ripples may be smaller than 0.1 radians, which may be preferred for optimal compensator functionality. The optical bandwidth431A should preferably be as wide as feasibly possible, for example. Thus, as shown by the simulation curve431ofFIG.4, the amount of dispersion of a propagating optical signal can be tuned by an effectively configured dispersion compensator, such that the dispersion compensation can be adjusted to compensate for changes in temperature in the optical transmission system or for a fiber length variation, for example. However, it should be noted that a trade-off between the dispersion compensation capability and the optical bandwidth needs to be considered when implementing such dispersion compensator devices, for example.
FIG.5is a diagram illustrating a top view of cascaded Mach-Zehnder interferometer switches534with multiple dispersion compensators532,533, according to an aspect. As described previously above when referring toFIGS.3A-3B, cascaded ring resonators (e.g.,326) or cascaded MZIs (e.g.,327) can be used to implement a dispersion compensator structure, as an example. Additionally, as described previously above when referring toFIG.4, such a dispersion compensator structure may be capable of providing up to −100 ps/nm of dispersion compensation. As will be described in detail below, for applications requiring larger amounts of dispersion tuning (e.g., long transmission or high-speed applications) where a single dispersion compensator structure may not suffice, cascaded MZI switches paired with multiple types of dispersion compensator devices may alternatively be provided to form another dispersion compensator structure504(e.g., in place of104A,104B inFIG.1), as an example. As shown inFIG.5, multiple-stage MZI switches534may be integrated with three fixed dispersion compensators533, as an example, to form the dispersion compensator structure504. As shown as an example, let the structure504be optically connected to an optical waveguide525integrated on an optical receiver chip (e.g.,102inFIG.1). As shown, a tunable dispersion compensator532(based on either structure304shown previously inFIGS.3A-3B, for example) may be provided on the optical waveguide525, as an example. For example, the tunable dispersion compensator532may be adapted to provide dispersion in a range from 0 to D, where D is an amount of dispersion (e.g., 100 ps/nm). As shown, following the tunable dispersion compensator532, a first MZI switch534A may be optically connected to an output of the tunable dispersion compensator532. As an example, the first MZI switch534A may be implemented as a 1×2 MZI structure, such that the first MZI switch534A has one input and two outputs, as shown, branching into upper and lower arms535A-a and535A-b, respectively, for example. As shown, the upper arm535A-a may be provided with a first fixed dispersion compensator533A. As an example, the first fixed dispersion compensator533A may be adapted to provide a fixed dispersion of D, again, where D is an amount of dispersion (matching the maximum of the range of the tunable dispersion compensator532). As shown, the upper and lower arms535A-a and535A-b may optically connect to the inputs, respectively, of a second MZI switch534B, as an example. As an example, the second MZI switch534B may be implemented as a 2×2 MZI structure, as shown, such that the second MZI switch534B has two inputs and two outputs, for example, branching into upper and lower arms535B-a and535B-b, respectively. As shown, the upper arm535B-a may be provided with a second fixed dispersion compensator533B. As an example, the second fixed dispersion compensator533B may be adapted to provide a fixed dispersion of 2*D. As shown, the upper and lower arms535B-a and535B-b may optically connect to the inputs, respectively, of a third MZI switch534C, as an example. As an example, the third MZI switch534C may be implemented as a 2×2 MZI structure, as shown, such that the third MZI switch534C has two inputs and two outputs, for example, branching into upper and lower arms535C-a and535C-b, respectively. As shown, the upper arm535C-a may be provided with a third fixed dispersion compensator533C. As an example, the third fixed dispersion compensator533C may be adapted to provide a fixed dispersion of 4*D.
Finally, as shown, the upper and lower arms535C-a and535C-b may optically connect to the inputs, respectively, of a fourth MZI switch534D, as an example. As an example, the fourth MZI switch534D may be implemented as a 2×1 MZI structure, as shown, such that the fourth MZI switch534D has two inputs and a single output, for example. As outlined above, the cascading of multiple MZI switches534each paired with a fixed dispersion compensator533, for example, may form a dispersion compensator structure504for accommodating applications requiring large amounts of dispersion tuning. As an example, each of the MZI switches534may be controlled (autonomously, by an external signal, for example), such that optical light can selectively propagate through a portion of or all of the fixed dispersion compensators533. For example, as mentioned above, the tunable dispersion compensator532may be tuned to a certain dispersion from 0 to D (by the external signal, for example), which may then, subsequently, determine each of the dispersion values for the first, second, and third fixed dispersion compensators533A-533C, respectively. Then, as an optical signal (not shown) is propagated through the structure504, each of the MZI switches534may be individually controlled. Referring to the first MZI switch534A, optical light may be directed onto either the upper arm535A-a, so as to pass through the first fixed dispersion compensator533A, or the lower arm535A-b, so as to avoid the first fixed dispersion compensator533A. Each of the second and the third MZI switches534B and534C may be controlled in the same way, for example, such that the optical light may be directed onto either of each of the upper arms535B-a,535C-a or lower arms535B-b,535C-b, respectively, as the optical light propagates through the structure504. As mentioned previously above, in this way, the optical light may be selectively subject to increasing amounts of dispersion, induced by the dispersion compensators533, by the controlling of the MZI switches534. As outlined above, from front end to back end in the dispersion compensator structure504, optical light may selectively achieve continuous tuning of dispersion in a large range from 0 to 8*D. As noted above, only a single dispersion compensator has to be configured as a tunable dispersion compensator (e.g.,532) tunable in the range from 0 to D, for example. As such, by the use of the above-described structure504, there is no need to configure a dispersion compensator to be tunable in a range from 0 to 8*D, which would be challenging and costly. Thus, the design of the tunable dispersion compensator may be simplified. As an example, for 100G PAM4 Dense Wave Division Multiplexing (DWDM) applications, data transmission reach may only extend to about 2 kilometers (km) without dispersion compensation. Utilizing the cascaded dispersion compensators in the exemplary formation described above, the data transmission reach may be improved significantly, extending to about 40 km, for example. Thus, an advantage is that the disclosed dispersion compensator structure may enable large amounts of dispersion tuning for an integrated receiver chip, which may thus improve data transmission reach. An additional advantage is that the dispersion compensator design is simplified, which may thus reduce manufacturing costs associated with integrating the dispersion compensator onto an integrated receiver chip.
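The selective routing described above is effectively a binary-weighted decomposition. The following Python sketch, with hypothetical names, picks the switch states for a target dispersion by engaging the 4*D, 2*D, and D fixed stages in turn and leaving the residual 0..D to the single tunable stage.

D = 100.0  # ps/nm per unit stage, matching the -100 ps/nm single-device figure

def configure(target_ps_nm):
    # Return (tunable-stage setting, [fixed stage engaged?]) for a 0..8*D target.
    assert 0.0 <= target_ps_nm <= 8 * D
    residual = target_ps_nm
    switches = []
    for stage in (4 * D, 2 * D, D):          # largest fixed stage first
        engage = residual >= stage
        switches.append(engage)
        if engage:
            residual -= stage
    return residual, switches[::-1]          # tunable stage covers what is left

tunable, (s1, s2, s3) = configure(560.0)
print(f"tunable={tunable} ps/nm, D stage={s1}, 2D stage={s2}, 4D stage={s3}")
# -> tunable=60.0 ps/nm, D stage=True, 2D stage=False, 4D stage=True

This is why only the 0..D stage needs fine tunability: any target up to 8*D decomposes into fixed binary multiples of D plus a residual inside the tunable range.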
As an example, in certain applications where even greater amounts of dispersion compensation are required (e.g., greater than 8*D), an external dispersion compensation module can be optically paired with the on-chip integrated dispersion compensator (e.g.,104A,104B inFIG.1). As such, the combination of the external dispersion compensation module with the on-chip integrated dispersion compensator may relax the dispersion compensation requirements of each component, respectively. Moreover, additional fixed dispersion compensators may be added, for example, to enable a more flexible dispersion compensation mechanism. It should be understood that the number of optical components, such as the tunable dispersion compensators532and the MZI switches534, for example, shown inFIG.5, is exemplary, and thus the number of such optical components may be increased or decreased, for example. It should also be understood that the dispersion compensation structure504shown inFIG.5may be provided as the first and the second dispersion compensators104A and104B ofFIG.1, as an example. FIG.6is a flowchart illustrating an exemplary control algorithm640for controlling the tunable dispersion compensator104A,104B ofFIG.1, according to an aspect. As mentioned previously above when referring toFIG.1, the integrated receiver chip (e.g.,102) may electrically communicate with the transimpedance amplifier (e.g.,106), which may further electrically communicate with a DSP module (e.g.,107). As described previously, the DSP module may be adapted to receive the voltage signal (i.e., converted photocurrent) from the transimpedance amplifier, and be provided with forward error correction (FEC) capabilities to facilitate the tuning of the first and the second dispersion compensators (e.g.,104A and104B) for optimum link performance, as an example. As will be described in detail below, the DSP module may be programmed with the exemplary control algorithm640for effectively and autonomously controlling the tuning of the dispersion compensators. As shown inFIG.6, upon startup of the DSP module (i.e., operating the electrical chip (e.g.,220inFIG.2)), indicated at641inFIG.6, the DSP module may set the initial dispersion compensator parameters, indicated at642. Then, the DSP module may first enter the scan operation mode, indicated at643a, as disclosed in greater detail hereinbelow. Then, after the scan operation mode program has ended, indicated at646, the DSP module may use the tracking operation mode, indicated at643b, in combination with the scan operation mode (step643a) to tune (step647) and find (step648) the optimum bit error rate (BER), as disclosed in greater detail hereinafter. After the DSP module enters the scan operation mode643a, the DSP module may scan through the preset dispersion compensation parameters, indicated by644. As described previously above when referring toFIG.5, the tunable dispersion compensator structure (e.g.,504) may thus be configured such that the dispersion is tuned in a range from 0 to D using532ofFIG.5(or 0 to 8*D, including the fixed dispersion compensators (e.g.,533)), for example. As an optical signal (e.g.,115) propagates along the optical receiver (e.g.,102) and through the tunable dispersion compensators (as well as any fixed dispersion compensators (e.g.,533)), the optical signal may be subject to dispersion, as defined by the value of D, for example.
Once the optical signal passes through the photodetectors (e.g.,105A and105B), the optical signal may be electrically transmitted (as a photocurrent, for example) to the transimpedance amplifier106/206, as described previously above when referring toFIG.2. The transimpedance amplifier may convert the photocurrent of the originally received optical signal, for example, into a voltage signal that may be electrically processed by the DSP107, for example. As mentioned above, the DSP may be provided with FEC functionality for monitoring the OE link between the receiver and transimpedance amplifier via the bit error rate (BER). As shown inFIG.6, the DSP may process the voltage signal, obtain the OE link bit error rate (BER), and then set the parameters of the dispersion compensators104A-B for the best BER, a step indicated at645. It should be noted that the OE link BER may be hindered by impairments of the optical link, such as, for example, reflection, fiber dispersion, etc. Then, the program may end, as indicated by step646, or enter tracking operation mode. Upon entering tracking operation mode, indicated by643b, the DSP module may set the dispersion compensator parameters found in step645with the best BER or, if the best-BER parameters from the scan operation mode are not available, the initial dispersion compensator parameters from step642. As shown inFIG.1, the DSP (107) may obtain the BER and generate control signals (121A and121B), which may be electrically transmitted to the first and the second dispersion compensators (104A and104B), respectively, for example. The control signals may then cause a tuning of the tunable dispersion compensators, such that the dispersion may be set to a new value in the range between 0 and D (or 0 and 8*D), indicated at step647. As the optical signal is propagated through the OE conversion system (101) ofFIG.1, a new photocurrent will be generated, and thus a new voltage signal, which will be electrically sent to the DSP, as an example. As before, the DSP will obtain the BER of the new voltage signal. Subsequently, the DSP will evaluate the BER value, so as to determine whether the obtained BER is optimum, indicated by step648. More specifically, the DSP may determine whether the obtained BER is sufficient by, for example, comparing the BER value to a predefined value/threshold. In an example, the BER value should be as low as possible. If the read BER value is insufficient (i.e., worse than the predefined value, for example), then the dispersion compensator parameters will be tuned, indicated at647, as an example. However, if the read BER value is sufficient (i.e., equal to or better than the predefined value, for example), then no further dispersion compensator tuning may be required. The DSP module may continue to monitor the OE link signal performance. If the BER becomes worse than the predefined value due to any changes on the optical link, such as temperature-induced link dispersion changes, the DSP module may autonomously start the tuning to find the best operation point for the OE link, indicated by steps647and648. Again, it should be understood that the tracking operation mode may be used in combination with the scan operation mode to tune and obtain the optimum bit error rate (BER). It should also be understood that the tuning (tracking operation mode) may allow the DSP to continually keep the dispersion compensator parameters set at optimum link BER performance.
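As a concrete illustration of the scan and tracking modes just described, below is a minimal Python sketch of the control loop. The helper names read_ber(), set_compensators(), and perturb() are hypothetical stand-ins for the DSP's FEC-based BER readout and the control signals (e.g.,121A,121B) driving the compensators; they are assumptions for illustration, not the patent's API.

    # Minimal sketch of the FIG. 6 scan (643a) and tracking (643b) modes.
    def scan(preset_params, read_ber, set_compensators):
        """Sweep the preset dispersion parameters (644) and keep the
        setting with the best (lowest) BER (645)."""
        best_params, best_ber = None, float("inf")
        for params in preset_params:
            set_compensators(params)
            ber = read_ber()
            if ber < best_ber:
                best_params, best_ber = params, ber
        set_compensators(best_params)
        return best_params, best_ber

    def track(params, read_ber, set_compensators, threshold, perturb):
        """Endless monitoring loop: when BER degrades past the predefined
        threshold (648), re-tune the compensators (647)."""
        while True:
            ber = read_ber()
            if ber <= threshold:
                continue                  # BER sufficient: keep monitoring
            candidate = perturb(params)   # try a new setting in [0, 8*D]
            set_compensators(candidate)
            if read_ber() < ber:
                params = candidate        # keep the improvement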
FIG.7is a diagram illustrating a top view of an alternative embodiment701of the optical-to-electrical conversion system101ofFIG.1, according to an aspect. As described previously above when referring toFIG.1, the integrated receiver chip (102) may be provided with a polarization splitter rotator (103) adapted to split an optical signal into two optical signals, and rotate the polarization mode of one of the two optical signals, such that the two optical signals possess the same polarization (e.g., TE polarization). As will be discussed in detail below, the polarization splitter rotator may be replaced by a polarization splitter, so as to allow both TM and TE polarizations (“dual-mode”) to be propagated along the integrated receiver chip, as an example. As shown inFIG.7, the OE conversion system701may comprise an integrated receiver chip702, a transimpedance amplifier706, and a DSP module707. As described previously above when referring toFIG.2, for example, the transimpedance amplifier706and the DSP707may each be integrated on an electrical chip (e.g.,220) disposed at or near the output of the receiver chip702, as an example. As shown, the polarization splitter rotator ofFIG.1may be replaced with the polarization splitter748, for example, having input port749and output ports750A and750B, as an example. As similarly described above when referring toFIG.1, the integrated receiver702may comprise first and second dispersion compensators704A and704B having input and output ports714and716, and first and second photodetectors705A and705B having input ports717, respectively, for example. As an example, the output ports750A and750B may be optically connected to the input ports714of the first and the second dispersion compensators704A and704B, respectively, and the output ports716of the first and the second dispersion compensators704A and704B may be optically connected to the input ports717of the first and the second photodetectors705A and705B, respectively, as shown. For example, the above-described on-chip optical connections may be made using integrated channels/waveguides (not shown). As shown as an example inFIG.7, an optical signal715having TE and TM polarization modes may be launched into the receiver702at the input optical port749of the polarization splitter748. As indicated, within the polarization splitter748, the optical signal715is split into two signals each having a different polarization mode, such that the split optical signals715A and715B exiting the output optical ports750A and750B, respectively, contain TE polarization and TM polarization, respectively, as shown. In comparison with the polarization splitter rotator103described previously when referring toFIG.1, the polarization splitter748does not rotate the TM polarization mode of the optical signal715, as an example, allowing both TE and TM polarizations to be propagated along the optical receiver702. However, using the polarization splitter748in place of the polarization splitter rotator (103) requires the first and the second dispersion compensators704A and704B and the photodetector launch waveguides (e.g.,717) to be polarization insensitive. This requirement may be met by using an optical waveguide having a square frontal cross-section (e.g., a square prism shaped waveguide, from a top view) for each of the dispersion compensators and the photodetector launch waveguides, for example.
As shown, the split optical signals715A and715B may propagate toward and enter the first and the second dispersion compensators704A and704B, respectively, via the input ports714, as an example. As described previously throughout this disclosure above, the first and the second dispersion compensators704A and704B may, using the DSP707feedback loop (realized via control signals721A and721B), enable the receiver chip702to reduce dispersion (e.g., polarization mode dispersion) incurred by the optical signals715A and715B being received by the parallel photodetectors705A and705B, respectively, as an example. Because each of the optical signals715A and715B possesses a different polarization (e.g., TE and TM polarizations, respectively), the TE polarization mode and the TM polarization mode of the input optical signal715are compensated by the first and the second dispersion compensators704A and704B, respectively, as an example. In this way, because both TE and TM polarization modes are being propagated, the first and the second dispersion compensators704A and704B may compensate the polarization mode dispersion (PMD) between the two polarization modes. For example, due to random imperfections and/or asymmetries of the optical waveguides (not shown) of the receiver chip702or fiber link, the TE and TM polarizations may propagate along the optical link at different speeds, which may cause optical pulse distortions and thus degrade data transmission or optical signal clarity, for example. The DSP707may be adapted to monitor such instances of PMD (via FEC and BER, for example), so as to control the tuning of the first and the second dispersion compensators704A and704B and compensate for the PMD (by speeding up or slowing down one or both polarizations, for example) between the TE and TM polarizations, respectively, as an example. Thus, an advantage of utilizing a polarization splitter is that the polarization mode dispersion may be compensated for the polarizations of an optical signal propagating along the disclosed integrated receiver chip. It should be understood that the control algorithm shown inFIG.6and described previously above may be included in the OE conversion system701and programmed on the DSP707, so as to control the tuning of the first and the second dispersion compensators704A and704B, as an example. It should also be understood that the first and the second photodetectors705A and705B may be electrically connected to the transimpedance amplifier706in the same manner as that shown previously inFIG.2(e.g., parallel paired GSG pad configurations). As an example, the dispersion compensator structures disclosed herein above can be integrated not only on the receiver side, as shown herein, but also on the transmitter side (not shown). Having dispersion compensators integrated on both the transmitter and receiver ends, for example, reduces the total amount of dispersion compensation required at either end, and thus also loosens the requirement for the number of cascaded MZI switches (e.g.,534inFIG.5). Furthermore, the disclosed receiver embodiments (e.g.,102and702) may be provided as wavelength division multiplexing (WDM) receivers as well. As such, the polarization manipulating component (e.g.,103,748) may be broadband, and the dispersion compensators can be configured with a wavelength periodic feature to match the channel spacing of multiple wavelengths, as is needed for WDM receivers, for example.
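As a simple illustration of the PMD compensation described above, the sketch below models first-order PMD as a differential group delay (DGD) between the TE and TM paths and computes the delay each compensator would add so that the two polarizations arrive aligned. Treating the compensators as pure delay elements is a simplifying assumption for illustration only; the function and parameter names are hypothetical.

    # Illustrative sketch (assumed model): equalize a TE/TM differential
    # group delay by slowing the faster of the two polarization paths.
    def settings_for_pmd(delay_te_ps, delay_tm_ps, max_added_delay_ps):
        """Return the delay to add to each path (compensators 704A, 704B)."""
        dgd = delay_te_ps - delay_tm_ps
        add_te = max(-dgd, 0.0)   # TE arrived early: slow the TE path
        add_tm = max(dgd, 0.0)    # TM arrived early: slow the TM path
        assert add_te <= max_added_delay_ps and add_tm <= max_added_delay_ps
        return add_te, add_tm

    print(settings_for_pmd(10.0, 13.5, 50.0))  # (3.5, 0.0): delay TE by 3.5 ps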
The multiplexing feature can be disposed on the receiver chip after the polarization manipulation and dispersion compensator components, for example (i.e., positioned on the chip between components704A,704B and705A,705B, for example). In this way, only one set of polarization manipulation and dispersion compensator components is needed to accommodate multiple wavelengths on a WDM receiver. It should be understood that the above-described integrated receiver may be based on various integrated photonics platforms, such as, for example, silicon, silicon nitride, silica, lithium niobate, polymer, III-V materials, hybrid platforms, etc. It should also be understood that the integrated receiver disclosed herein may be adapted for use with multiple wavelength ranges, including, but not limited to, visible light, O, E, S, C, and L-band. It should also be understood that the potential applications of the disclosed invention are not limited to optical communications, but may also include optical sensing, optical computing, automotive applications, quantum applications, etc. For example, the disclosed receiver may be implemented in single-wavelength 100 Gbit/s PAM4 DWDM transceivers in pluggable form factor. It may be advantageous to set forth definitions of certain words and phrases used in this patent document. The term “couple” and its derivatives refer to any direct or indirect communication between two or more elements, whether or not those elements are in physical contact with one another. The term “or” is inclusive, meaning and/or. The phrases “associated with” and “associated therewith,” as well as derivatives thereof, may mean to include, be included within, interconnect with, contain, be contained within, connect to or with, couple to or with, be communicable with, cooperate with, interleave, juxtapose, be proximate to, be bound to or with, have, have a property of, or the like. Further, as used in this application, “plurality” means two or more. A “set” of items may include one or more of such items. Whether in the written description or the claims, the terms “comprising,” “including,” “carrying,” “having,” “containing,” “involving,” and the like are to be understood to be open-ended, i.e., to mean including but not limited to. Only the transitional phrases “consisting of” and “consisting essentially of,” respectively, are closed or semi-closed transitional phrases with respect to claims. If present, use of ordinal terms such as “first,” “second,” “third,” etc., in the claims to modify a claim element does not by itself connote any priority, precedence or order of one claim element over another or the temporal order in which acts of a method are performed. These terms are used merely as labels to distinguish one claim element having a certain name from another element having a same name (but for use of the ordinal term) to distinguish the claim elements. As used in this application, “and/or” means that the listed items are alternatives, but the alternatives also include any combination of the listed items. As used throughout this disclosure, the terms/phrases “optical signal,” “optical test signal,” “optical light,” “laser light,” “laser signal,” and the like are used interchangeably. It should be understood that the aforementioned terms each individually and collectively refer to light, and more specifically, electromagnetic radiation. 
Throughout this description, the aspects, embodiments or examples shown should be considered as exemplars, rather than limitations on the apparatus or procedures disclosed or claimed. Although some of the examples may involve specific combinations of method acts or system elements, it should be understood that those acts and those elements may be combined in other ways to accomplish the same objectives. Acts, elements and features discussed only in connection with one aspect, embodiment or example are not intended to be excluded from a similar role(s) in other aspects, embodiments or examples. Aspects, embodiments or examples of the invention may be described as processes, which are usually depicted using a flowchart, a flow diagram, a structure diagram, or a block diagram. Although a flowchart may depict the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged. With regard to flowcharts, it should be understood that additional and fewer steps may be taken, and the steps as shown may be combined or further refined to achieve the described methods. If means-plus-function limitations are recited in the claims, the means are not intended to be limited to the means disclosed in this application for performing the recited function, but are intended to cover in scope any equivalent means, known now or later developed, for performing the recited function. Claim limitations should be construed as means-plus-function limitations only if the claim recites the term “means” in association with a recited function. If any are presented, the claims directed to a method and/or process should not be limited to the performance of their steps in the order written, and one skilled in the art can readily appreciate that the sequences may be varied and still remain within the spirit and scope of the present invention. Although aspects, embodiments and/or examples have been illustrated and described herein, one of ordinary skill in the art will readily recognize alternate and/or equivalent variations, which may be capable of achieving the same results, and which may be substituted for the aspects, embodiments and/or examples illustrated and described herein, without departing from the scope of the invention. Therefore, the scope of this application is intended to cover such alternate aspects, embodiments and/or examples. Hence, the scope of the invention is defined by the accompanying claims and their equivalents. Further, each and every claim is incorporated as further disclosure into the specification.
47,501
11863239
DESCRIPTION OF EXAMPLE EMBODIMENTS Overview Provided according to the techniques of the present disclosure are methods of providing acoustic communication networks. For example, methods are provided that include obtaining, at a first transceiver of a plurality of transceivers arranged within an installation pattern, a first acoustic signal provided by a second transceiver of the plurality of transceivers. A location of the first transceiver within a message distribution pattern is determined from the acoustic signal. A delay based upon the location of the first transceiver within the message distribution pattern is determined. Finally, a second acoustic signal corresponding to the first acoustic signal is provided from the first transceiver after the delay. The techniques of the present disclosure also provide for systems that include a plurality of transceivers arranged in an installation pattern. Each transceiver of the plurality of transceivers is configured to obtain a first acoustic signal from another of the plurality of transceivers. Each transceiver is also configured to determine, from the acoustic signal, a location of the transceiver within a message distribution pattern; determine a delay based upon the location within the message distribution pattern; and provide a second acoustic signal corresponding to the first acoustic signal. The techniques of the present disclosure also provide for apparatuses that include one or more speakers, one or more microphones, and one or more processors. The one or more processors are configured to obtain, via the one or more microphones, a first acoustic signal from a transceiver of a plurality of transceivers arranged in an installation pattern; determine, from the acoustic signal, a location of the apparatus within a message distribution pattern; determine a delay based upon the location within the message distribution pattern; and provide, via the one or more speakers after the delay, a second acoustic signal corresponding to the first acoustic signal. Example Embodiments With reference now made toFIG.1, depicted therein is a network environment100configured to provide an underwater communication network (UCN). Specifically, network environment100includes communication gateways105aand105b. The gateways105aand105bare configured with acoustic modems that allow them to communicate underwater amongst themselves, as well as with acoustic modems incorporated into manned submersible vehicles110and unmanned submersible vehicles115. Gateways105aand105bmay also be configured to communicate with surface vessels120, land-based gateways125and air and space vehicles130aand130busing wireless communication techniques, including wireless protocols, such as the IEEE 802.11 family of protocols, cellular protocols, such as the 5G protocol, and other radio frequency (RF) techniques. The acoustic modems within a UCN may, in general, be subject to operational constraints not present in other types of modems. For example, the transmissions from acoustic modems may be multipath, as it may be difficult to generate directed pressure waves used to transmit acoustic signals. Also, the medium used by acoustic modems, which is water for UCNs, exhibits relatively slow propagation times. Specifically, the speed of sound in water is 1,500 meters per second, compared with approximately 300,000,000 meters per second for RF and other electromagnetic-based communication techniques.
Furthermore, acoustic modems like those incorporated into gateways105aand105bmay operate with relatively low bandwidth (e.g., operating frequencies of around 5 kHz to 32 kHz). These constraints may result in low transmission speeds on the order of bits per second or kilobits per second. Furthermore, dependent on the types of acoustic modems employed, the modems may be limited to half duplex communication, as well as limited in multiple access and Open Systems Interconnection (OSI) Model Layer 2 functionality. The combination of these factors prevents UCNs from using approaches based on instantaneous or near-instantaneous message collision detection and carrier-signal sensing that are ubiquitous in today's optical, electronic, and radio frequency networks, such as Media Access Control (MAC) techniques. Accordingly, example embodiments of the techniques disclosed herein provide for cellular automaton approaches that overcome the collision detection and carrier-signal sensing challenges in the underwater environment. These constraints may make it relatively difficult to implement long range UCNs. Long range, as used in the present disclosure, refers to UCNs in which the distance between UCN network elements (e.g., communication gateways105aand105b, manned submersible vehicles110, unmanned submersible vehicles115, and surface vessels120) may be on the order of 8 km or greater. In such long range UCNs, communications may lag due to multiple second propagation times. Additionally, the network nodes may need to be self-powered as there is no undersea power grid, and the network nodes may experience collisions, a collision being when a message cannot be interpreted because multiple signals are present. Accordingly, it may be difficult to implement long range UCNs that are reliable and that remain functional for months or years at a time. Given the nature of UCNs, and long range UCNs in particular, the acoustic modems of gateways105aand105bmay be configured to address these constraints and challenges utilizing the techniques of the present disclosure. Accordingly, the acoustic modems associated with gateways105aand105bmay be configured to operate as cellular automatons. A cellular automaton consists of a regular grid of cells, each in one of a finite number of states, such that every cell interacts with its neighbors. At each step in time, or epoch, the automatons undergo a state transition. For example, according to one implementation of cellular automatons known as Conway's Game of Life, the automatons may undergo one of the following state transitions:
Any live cell with fewer than two live neighbors dies, as if by underpopulation;
Any live cell with two or three live neighbors lives on to the next generation;
Any live cell with more than three live neighbors dies, as if by overpopulation;
Any dead cell with exactly three live neighbors becomes a live cell, as if by reproduction or colonization.
According to the techniques of the present disclosure, the modems of gateways105aand105bare configured to operate as automatons within a grid of cells, but with different rules than those utilized in Conway's Game of Life. In one specific embodiment, the acoustic modems of gateways105aand105bare arranged in a hexagonal grid200of modems, as illustrated with reference toFIG.2. In grid200, it is assumed that a modem is arranged at the center of each of the illustrated hexagonal cells. In hexagonal grid200, each cell has six neighbors. Accordingly, node202has neighbor nodes204,206,208,210,212and214.
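To put these constraints in perspective, here is a back-of-envelope Python sketch of the minimum epoch length for a long range UCN hop. The message size and bit rate are assumed values chosen only to match the bits-to-kilobits-per-second regime described above.

    # Back-of-envelope sketch (assumed epoch model): an epoch must be long
    # enough for a message to fully traverse one hop.
    SPEED_OF_SOUND_WATER = 1500.0    # meters per second (from the text)
    hop_distance = 8000.0            # meters; "long range" per the text
    message_bits = 256               # assumed message size
    bit_rate = 100.0                 # bits per second (assumed, low-rate regime)

    propagation_s = hop_distance / SPEED_OF_SOUND_WATER   # ~5.33 s per hop
    transmission_s = message_bits / bit_rate               # 2.56 s on the air
    min_epoch_s = propagation_s + transmission_s
    print(round(min_epoch_s, 2))     # ~7.89 s: epochs are multiple seconds long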
The use of a hexagonal grid may increase reliability compared to, for example, a square grid. A hexagonal grid may provide approximately equal transmission distances from node202to each of its neighbor nodes204,206,208,210,212and214, which may help provide increased reliability. The modems of hexagonal grid200may also be configured to operate according to a number of design rules. For example, no modem may transmit a signal while an adjacent modem is transmitting. Accordingly, the modem of node202cannot transmit while any one of the modems of nodes204,206,208,210,212or214is transmitting. Similarly, the modem of node206may not transmit while the modem of node208is transmitting, but the modem of node206may transmit while the modem of node210is transmitting, as nodes206and210are not neighbors. Furthermore, should node206and node210transmit simultaneously, nodes that are adjacent to both node206and node210(such as node202and node208) will not reliably receive the signals transmitted by node206and node210, because the mutually adjacent nodes are simultaneously receiving signals from two transmitters, preventing the reception of either transmission. Finally, each of the modems of hexagonal grid200may be configured with signal transmission states based upon its position relative to other modems within hexagonal grid200to ensure that the nodes operate according to the above-described design rules. For example, each of the nodes within hexagonal grid200may be configured to retransmit a received signal in the next epoch after receipt, or after a further delay of one or more additional epochs, based upon the relative location of the node within hexagonal grid200. In some embodiments, the nodes of hexagonal grid200may be configured to select among two values for the delay between receiving the acoustic signal and retransmitting the acoustic signal. In such embodiments, the nodes may select between delaying retransmission until the next epoch after the acoustic signal was received or further delaying until the second epoch after the acoustic signal was received. In other embodiments, the nodes of hexagonal grid200may be configured to select among two or more possible delays. For example, some nodes may be configured such that they select among retransmitting a received signal in the very next epoch after it is received, retransmitting the signal in the second epoch after it was received, or retransmitting the signal in the third epoch after it was received. In each of these embodiments, the selection among the different delays is based upon the node's location within hexagonal grid200relative to the initial source of the signal. This configuration of different rules for retransmission produces a dynamic message distribution pattern that overlays the hexagonal grid installation pattern of grid200, examples of which shall be described with reference toFIGS.3A,3B and4. In other words, the modems of the hexagonal grid200may be embodied as a system that includes a plurality of transceivers arranged in an installation pattern. Each of the transceivers (e.g., modems) is configured to obtain or receive a first acoustic signal from another of the plurality of transceivers. Based on the received signal, each of the transceivers determines its relative location within a message distribution pattern. Furthermore, each of the plurality of transceivers determines a delay based upon the relative location within the message distribution pattern.
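The adjacency rule above can be made concrete with a short Python sketch. Axial coordinates are a common way to index hexagonal grids; the disclosure does not prescribe a coordinate system, so this representation is an assumption for illustration.

    # Sketch of the hexagonal-grid design rule: a node may transmit in an
    # epoch only if no adjacent node transmits in that epoch.
    AXIAL_NEIGHBORS = [(1, 0), (1, -1), (0, -1), (-1, 0), (-1, 1), (0, 1)]

    def neighbors(node):
        q, r = node
        return {(q + dq, r + dr) for dq, dr in AXIAL_NEIGHBORS}

    def may_transmit(node, transmitting_this_epoch):
        """No modem transmits while an adjacent modem is transmitting."""
        return neighbors(node).isdisjoint(transmitting_this_epoch)

    # Non-neighbors may transmit simultaneously, but any cell adjacent to
    # both simultaneous transmitters will experience a collision.
    print(may_transmit((0, 0), {(2, 0)}))  # True: nodes are not adjacent
    print(may_transmit((0, 0), {(1, 0)}))  # False: nodes are adjacent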
Finally, each of the plurality of transceivers is configured to provide, after the delay, a second acoustic signal corresponding to the first acoustic signal. With reference now made toFIGS.3A and3B, examples of predetermined transmission patterns for the transceivers or modems of the present disclosure will now be described. Message distribution pattern300aofFIG.3Ais illustrated as overlaid on a hexagonal grid, such as hexagonal grid200ofFIG.2. Within transmission pattern300a, the acoustic modem associated with node302aserves as the source of a transmission. Accordingly, some of the nodes, specifically nodes306a,310aand314a, have the state that they will retransmit the signal received from node302ain the epoch immediately following its receipt from the modem of node302a. Nodes304a,308aand312a, on the other hand, are configured to delay one epoch before retransmitting the signal. Nodes304a,308aand312aare configured with this delay because of the design constraint that no modem may transmit a signal while an adjacent modem is transmitting. For example, node304ais adjacent to nodes306aand314a, both of which are configured to immediately retransmit the signal in the next epoch. Because node304acannot transmit while these adjacent nodes are transmitting, it must delay its retransmission until after nodes306aand314aare done retransmitting the signal. The modem of node316a, on the other hand, is configured to delay its retransmission of the received signal by two epochs based upon its arrangement within message distribution pattern300a. For example, the modem of node316awill receive the transmission from the modem of node306awhen the modem of node306aretransmits the signal received from the modem of node302a. Node316amust delay its retransmission while node304ais retransmitting, as it is adjacent to node304a, resulting in a delay of one epoch. The modem of node316amust also delay a second epoch, as it must delay its retransmission of the signal until after the modem of node318ahas finished retransmitting the signal. The message distribution pattern300aof the nodes as illustrated inFIG.3Ais just an example, and other design parameters for an acoustic network may result in different message distribution patterns. For example, the message distribution pattern of the nodes may be rotated, as illustrated inFIG.3B. As shown inFIG.3B, the message distribution pattern of the network nodes may be rotated about the source node for the transmission. In the example ofFIG.3B, message distribution pattern300bhas been rotated 180° relative to message distribution pattern300aofFIG.3A. Additionally, the message distribution patterns illustrated inFIGS.3A and3Bare based on the location of a particular node relative to the location of the source of a signal. Other considerations may be used in determining the state for a particular node. For example, a signal may include a hop count (i.e., a count indicating how many nodes the signal has traversed on its way to reaching the receiving node). This hop count may be an additional consideration in determining the transmission state for a particular node. Another way to look at the message distribution patterns300aand300bofFIGS.3A and3B, respectively, is to see that the nodes are arranged into main paths and sub paths, which are illustrated in more detail inFIG.4.
As shown inFIG.4, the main paths through message distribution pattern400extend outward from the modem of node402, the origin of the transmission, through the modems of nodes404,408and412, respectively, which neighbor the source node402and are configured to retransmit the signal in the next epoch, to the edges of the grid. Rotations of message distribution pattern400may change the direction of each of the main paths, but regardless of where the pattern is initialized, the same path structure holds true. Sub paths extend from the source node402and the main paths after a one epoch delay. Accordingly, the sub path nodes are shown extending away from the origin and main paths after an initial delay of one epoch, and then proceed to retransmit immediately thereafter. For example, the sub path that includes nodes414,416,418,420and422extends from the origin node402after the one epoch delay at the modem of node414. Nodes416,418,420and422then retransmit the received signal in the next epoch after receipt. These path and sub path patterns may be used to simplify the implementation of message distribution patterns in programmable logic devices. With reference now made toFIG.5, depicted therein is a flowchart500that includes operations that carry out an example embodiment of the techniques of the present disclosure. Flowchart500begins in operation505where a first acoustic signal is obtained (e.g., received) at a first transceiver of a plurality of transceivers. The first acoustic signal is provided by a second transceiver of the plurality of transceivers. The plurality of transceivers are arranged within an installation pattern. For example, the hexagonal grid upon which message distribution patterns300a/bare overlaid represents an installation pattern of transceivers. Accordingly, operation505may be embodied as a transceiver associated with one of nodes304a/b,306a/b,308a/b,310a/b,312a/bor314a/bofFIG.3A or3Breceiving a signal from the modem of origin node302a/b. Operation505may also be embodied as any other modem associated with a node of the hexagonal grid upon which message distribution patterns300a/bofFIG.3A or3Bare overlaid receiving a signal from another modem associated with a node within the hexagonal grid. In operation510, a location of the first transceiver within a message distribution pattern is determined from the first acoustic signal and a location of the first transceiver within the installation pattern. For example, included within the acoustic signal may be data indicative of which transceiver within the plurality of transceivers was the source or origin of the signal. UsingFIGS.3A and3Bas a more specific example, an acoustic signal sent via the modems associated with the nodes of the hexagonal grids upon which message distribution patterns300a/bofFIGS.3A and3Bare overlaid may include data indicative of which node within the hexagonal grid was the origin or source node for the signal. The transceiver that receives that signal may contain data regarding the location of each node within the determined installation pattern.
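A minimal Python sketch of operations 505 through 520 follows, under assumed data structures: the acoustic signal carries an identifier of its origin node, each transceiver stores the installation-pattern locations of all nodes, and a delay table maps a node's location relative to the origin to a retransmission delay in epochs. None of these structures is specified by the disclosure; they are one plausible realization.

    # Sketch of flowchart 500: obtain (505), locate (510), delay (515),
    # provide (520). Delays are in epochs: 1 = retransmit in the next epoch.
    def handle_message(signal, my_id, install_locs, delay_table,
                       current_epoch, schedule):
        origin_loc = install_locs[signal["origin_id"]]
        my_loc = install_locs[my_id]
        # Operation 510: position within the message distribution pattern,
        # i.e., this node's location relative to the signal's source.
        rel = (my_loc[0] - origin_loc[0], my_loc[1] - origin_loc[1])
        # Operation 515: delay determined by the relative location.
        delay = delay_table[rel]
        # Operation 520: provide the corresponding signal after the delay.
        schedule(current_epoch + delay, signal)

    pending = []
    install_locs = {"n302": (0, 0), "n316": (2, -1)}
    delay_table = {(2, -1): 3}      # e.g., a node that delays two extra epochs
    handle_message({"origin_id": "n302", "payload": b"msg"}, "n316",
                   install_locs, delay_table, current_epoch=5,
                   schedule=lambda e, s: pending.append((e, s["payload"])))
    print(pending)                  # [(8, b'msg')]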
Accordingly, using the data from the acoustic signal indicating the origin or source of the signal, in combination with the data stored at the transceiver regarding its location within the installation pattern, a determination may be made regarding the location of the first transceiver relative to the source or origin of the acoustic signal within the message distribution pattern. The data contained in the acoustic signal may include additional data, such as data indicative of the structure of the message distribution pattern and/or a rotation of the message distribution pattern, such as the rotations of message distribution patterns300aand300billustrated inFIGS.3A and3B, respectively. While the discussion above indicates that the data contained in the acoustic message directly identifies which node served as the source of a particular acoustic message, the data contained in the acoustic signal may instead provide an indirect indication of the source of the acoustic signal. For example, the acoustic signal may include a code. This code may be used by the receiving modem to determine which delay will be used in operation520, discussed in more detail below. This code may indicate a value in a look-up table in the modem that returns a particular delay without ever explicitly identifying a specific source node for the signal. Nevertheless, the code would correspond to a signal sent by one or more particular source nodes, and therefore, determining the code from the acoustic message indirectly determines the location of the receiving modem relative to the source or origin modem. Next, a delay is determined in operation515. This delay is based upon the location of the first transceiver within the message distribution pattern. For example, and as illustrated inFIGS.3A,3B and4above, each modem associated with a node in one of message distribution patterns300a,300bor400is configured to retransmit a received signal with a particular delay based upon its position relative to the origin or source node of the received signal within the message distribution patterns. Examples of such delays include waiting until the very next epoch to retransmit the signal, allowing one intervening epoch before retransmitting the signal, or waiting two intervening epochs before retransmitting the signal. Accordingly, using the location of the first transceiver relative to the source or origin of the acoustic signal, the transceiver may determine if it should retransmit the signal in the next epoch, retransmit the signal after a one epoch delay, or retransmit the signal after a two epoch delay. Finally, in operation520, the first transceiver provides a second acoustic signal corresponding to the first acoustic signal after the delay determined in operation515. Accordingly, the providing of operation520may be embodied as the retransmission of an acoustic signal in the next epoch after the acoustic signal is received, retransmitting the acoustic signal after a one epoch delay, retransmitting the acoustic signal after a two epoch delay, or retransmitting the acoustic signal after some other delay. Additionally, because message distribution by a node requires the expenditure of what may be limited lifetime power resources for that node, and because the message bandwidth of the UCN is limited, UCN nodes may implement cryptographic verification of received messages prior to acting on those messages or relaying them in a current or subsequent epoch. This cryptographic verification may use symmetric (e.g. pre-shared) or asymmetric (e.g.
Public/Private) key material to provide message authentication and/or anti-replay capabilities. This cryptographic verification serves to prevent the exhaustion of UCN resources and to preserve the availability of the UCN for authorized messages. Accordingly, the method of flowchart500may include additional operations to implement such cryptographic verification of received messages. For example, after operation505, in which an acoustic message is received, but before operation520, in which the acoustic message is retransmitted, the message may be cryptographically verified using the authentication techniques described herein. As described above, the techniques of the present disclosure may be used to implement UCNs that may be suitable for providing long range communications. Simulations have shown that implementing the techniques of the present disclosure may result in consistent, reliable and redundant communication of acoustic signals throughout a UCN. Illustrated inFIG.6are the results of a simulation of the techniques of the present disclosure in a UCN in which it is assumed that each modem may transmit its signals to adjacent modems within grid600with 100 percent reliability. Under this assumption, it can be seen that each node receives the message at least once after 18 epochs of communication, and the vast majority of the nodes receive the message two or three times, which may significantly improve the reliability of the UCN network. This is an important consideration for UCNs, as it allows for networks to be established without needing to provide acknowledgement (ACK) messages, negative acknowledgement (NACK) messages, Routing Information Protocol (RIP) messages, Internet Control Message Protocol (ICMP) messages, Open Shortest Path First (OSPF) protocol messages, Hello messages, and/or keep-alive messages. Being able to omit these messages may be particularly beneficial in UCNs. For example, each message sent in a network utilizes energy. In UCNs where many network elements are not directly connected to a power grid, and instead rely on battery power, being able to reliably communicate without having to send these signaling messages may result in a UCN that can operate for longer periods of time without maintenance or replacement of network nodes. With reference now made toFIG.7, depicted therein are the results700of a simulation in which modems in neighboring nodes are able to communicate with only 50 percent reliability, which represents an essentially “worst case” scenario. Even in this worst case scenario, it can be seen that the vast majority of the nodes receive the message after 30 epochs of communication. As illustrated, only 29 out of 289 nodes failed to receive the signal, meaning approximately 90 percent of the nodes received the signal. Furthermore, as shown inFIG.7, a large number of nodes receive the signal multiple times, which may vastly improve the reliability of the UCN network. With reference now made toFIG.8, illustrated therein is a hardware block diagram of a device800, such as an acoustic modem, that may perform the techniques of the present disclosure. It should be appreciated thatFIG.8provides only an illustration of one embodiment and does not imply any limitations with regard to the environments in which different embodiments may be implemented. Many modifications to the depicted environment may be made.
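Before turning to the hardware ofFIG.8in detail, the cryptographic verification described above can be sketched as follows in Python. HMAC-SHA256 with a pre-shared key and a monotonically increasing sequence number is one standard way to provide message authentication and anti-replay; the framing and field layout here are assumptions for illustration, not the disclosure's wire format.

    # Sketch of symmetric (pre-shared key) message verification with
    # anti-replay: verify before acting on or relaying a message.
    import hmac, hashlib

    def verify(message: bytes, tag: bytes, seq: int, psk: bytes, last_seq: int):
        expected = hmac.new(psk, seq.to_bytes(8, "big") + message,
                            hashlib.sha256).digest()
        if not hmac.compare_digest(expected, tag):
            return False, last_seq    # forged or corrupted: do not relay
        if seq <= last_seq:
            return False, last_seq    # replayed: do not relay
        return True, seq              # authentic and fresh: safe to relay

    psk = b"pre-shared-network-key"
    msg, seq = b"hello", 42
    tag = hmac.new(psk, seq.to_bytes(8, "big") + msg, hashlib.sha256).digest()
    print(verify(msg, tag, seq, psk, last_seq=41))  # (True, 42)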
As depicted, the device800includes a bus812, which provides communications between computer processor(s)814, memory816, persistent storage818, communications unit820, and input/output (I/O) interface(s)822. Bus812can be implemented with any architecture designed for passing data and/or control information between processors (such as microprocessors, communications and network processors, etc.), system memory, peripheral devices, and any other hardware components within a system. For example, bus812can be implemented with one or more buses. I/O interfaces822may be configured to receive data from external devices828. Memory816and persistent storage818are computer readable storage media. In the depicted embodiment, memory816includes random access memory (RAM)824and cache memory826. In general, memory816can include any suitable volatile or non-volatile computer readable storage media. Instructions for the techniques of the present disclosure may be stored in memory816or persistent storage818for execution by processor(s)814. The control logic stored in memory816or persistent storage818may implement the techniques of the present disclosure. One or more programs may be stored in persistent storage818for execution by one or more of the respective computer processors814via one or more memories of memory816. The persistent storage818may be a magnetic hard disk drive, a solid state hard drive, a semiconductor storage device, read-only memory (ROM), erasable programmable read-only memory (EPROM), flash memory, or any other computer readable storage media that is capable of storing program instructions or digital information. The media used by persistent storage818may also be removable. For example, a removable hard drive may be used for persistent storage818. Other examples include optical and magnetic disks, thumb drives, and smart cards that are inserted into a drive for transfer onto another computer readable storage medium that is also part of persistent storage818. Communications unit820, in these examples, provides for communications with other data processing systems or devices. In these examples, communications unit820includes one or more network interface cards. Communications unit820may provide communications through the use of either or both physical and wireless communications links. Device800may also include an optional display830. When embodied as an acoustic modem, device800may include elements that allow it to transmit acoustic signals. Specifically, device800includes transmitter circuitry832, digital to analog converter834, and speaker836(sometimes also referred to as a transducer). Accordingly, processor814provides data to transmitter circuitry832via bus812. Transmitter circuitry832processes the received data to prepare it for transmission as an acoustic signal. Transmitter circuitry832provides the prepared data to digital to analog converter834, which drives speaker836to transmit the data as an acoustic signal. Device800may also include elements that allow it to receive acoustic signals. Specifically, device800includes receiver circuitry842, analog to digital converter844, and microphone846(sometimes also referred to as a hydrophone). Accordingly, microphone846receives an acoustic signal and provides it to analog to digital converter844. Analog to digital converter844converts the received signal into digital data that is provided to receiver circuitry842. Receiver circuitry842processes the received data and provides it to processors814via bus812.
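As a purely illustrative example of what transmitter circuitry832and digital to analog converter834might prepare for speaker836, the Python sketch below generates a binary FSK waveform inside the roughly 5 kHz to 32 kHz band mentioned earlier. The disclosure does not specify a modulation scheme, sample rate, or tone frequencies; all values here are assumptions.

    # Illustrative sketch only: binary FSK modulation standing in for the
    # transmit path (transmitter circuitry 832 feeding DAC 834).
    import math

    SAMPLE_RATE = 96000          # samples per second (assumed DAC rate)
    F0, F1 = 9000.0, 11000.0     # Hz for bits 0 and 1 (assumed tones)
    SYMBOL_S = 0.01              # seconds per bit (assumed)

    def modulate(bits):
        samples = []
        for bit in bits:
            freq = F1 if bit else F0
            n = int(SAMPLE_RATE * SYMBOL_S)
            samples.extend(math.sin(2 * math.pi * freq * i / SAMPLE_RATE)
                           for i in range(n))
        return samples           # would be sent to the DAC driving speaker 836

    waveform = modulate([1, 0, 1, 1])
    print(len(waveform))         # 3840 samples for four 10 ms symbols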
In summary, provided for in the present disclosure are methods that include the operations of: obtaining, at a first transceiver of a plurality of transceivers arranged within an installation pattern, a first acoustic signal provided by a second transceiver of the plurality of transceivers; determining, from the acoustic signal, a location of the first transceiver within a message distribution pattern; determining a delay based upon the location of the first transceiver within the message distribution pattern; and providing, from the first transceiver after the delay, a second acoustic signal corresponding to the first acoustic signal. Also provided for are systems that include a plurality of transceivers arranged in an installation pattern. Each transceiver of the plurality of transceivers is configured to: obtain a first acoustic signal from another of the plurality of transceivers; determine, from the acoustic signal, a location within the message distribution pattern; determine a delay based upon the location within the message distribution pattern; and provide, after the delay, a second acoustic signal corresponding to the first acoustic signal. The techniques of the present disclosure also provide for apparatuses that include: one or more speakers; one or more microphones; and one or more processors. The one or more processors are configured to: obtain, via the one or more microphones, a first acoustic signal from a transceiver of a plurality of transceivers arranged in an installation pattern; determine, from the acoustic signal and a location of the transceiver within the installation pattern, a location of the apparatus within a message distribution pattern; determine a delay based upon the location within the message distribution pattern; and provide, via the one or more speakers after the delay, a second acoustic signal corresponding to the first acoustic signal. By implementing the techniques of the present disclosure, UCNs may be provided that offer survivable, long lifetime mesh networks at low power. Such UCNs may be particularly suitable for Emergency Command and Control applications. Furthermore, UCNs implemented according to the techniques of the present disclosure may be suitable as backups to all other systems/networks. For example, even if Global Positioning System networks and RF infrastructure fail, UCNs implemented according to the techniques of the present disclosure may remain active. The above description is intended by way of example only. Although the techniques are illustrated and described herein as embodied in one or more specific examples, it is nevertheless not intended to be limited to the details shown, since various modifications and structural changes may be made within the scope and range of equivalents of the claims.
30,515
11863240
DETAILED DESCRIPTION FIG.1is a block diagram illustrating an electronic device101in a network environment100according to various embodiments. Referring toFIG.1, the electronic device101in the network environment100may communicate with an electronic device102via a first network198(e.g., a short-range wireless communication network), or an electronic device104or a server108via a second network199(e.g., a long-range wireless communication network). According to an embodiment, the electronic device101may communicate with the electronic device104via the server108. According to an embodiment, the electronic device101may include a processor120, memory130, an input device150, a sound output device155, a display device160, an audio module170, a sensor module176, an interface177, a haptic module179, a camera module180, a power management module188, a battery189, a communication module190, a subscriber identification module (SIM)196, or an antenna module197. In some embodiments, at least one (e.g., the display device160or the camera module180) of the components may be omitted from the electronic device101, or one or more other components may be added in the electronic device101. In some embodiments, some of the components may be integrated and implemented as in, for example, the sensor module176(e.g., a fingerprint sensor, an iris sensor, or an illuminance sensor) embedded in the display device160(e.g., a display). The processor120may execute, for example, software (e.g., a program140) to control at least one other component (e.g., a hardware or software component) of the electronic device101coupled with the processor120, and may perform various data processing and computation. The processor120may load and process a command or data received from another component (e.g., the sensor module176or the communication module190) in volatile memory132, and store resulting data in non-volatile memory134. According to an embodiment, the processor120may include a main processor121(e.g., a central processing unit (CPU) or an application processor (AP)), and an auxiliary processor123(e.g., a graphics processing unit (GPU), an image signal processor (ISP), a sensor hub processor, or a communication processor (CP)) that is operable independently from the main processor121. Additionally or alternatively, the auxiliary processor123may be adapted to consume less power than the main processor121, or to be specific to a specified function. Here, the auxiliary processor123may be operated separately from or embedded in the main processor121. In such a case, the auxiliary processor123may control, for example, at least some of the functions or states related to at least one component (e.g., the display device160, the sensor module176, or the communication module190) among the components of the electronic device101, instead of the main processor121while the main processor121is in an inactive (e.g., sleep) state, or together with the main processor121while the main processor121is in an active (e.g., executing an application) state. According to an embodiment, the auxiliary processor123(e.g., an image signal processor or a communication processor) may be implemented as part of another component (e.g., the camera module180or the communication module190) functionally related to the auxiliary processor123.
The memory130may store various data used by at least one component (e.g., the processor120or the sensor module176) of the electronic device101, for example, software (e.g., the program140) and input data or output data for a command related thereto. The memory130may include the volatile memory132or the non-volatile memory134. The program140is software stored in the memory130, and may include, for example, an operating system (OS)142, middleware144, or an application146. The input device150is a device configured to receive a command or data to be used by a component (e.g., the processor120) of the electronic device101, from the outside (e.g., a user) of the electronic device101, and may include, for example, a microphone, a mouse, or a keyboard. The sound output device155is a device configured to output sound signals to the outside of the electronic device101, and may include, for example, a speaker or a receiver. The speaker may be used for general purposes, such as playing multimedia or playing recordings, and the receiver may be used only for incoming calls. According to an embodiment, the receiver may be implemented as separate from, or as part of, the speaker. The display device160is a device configured to visually provide information to a user of the electronic device101, and may include, for example, a display, a hologram device, or a projector and control circuitry to control a corresponding one of the display, hologram device, and projector. According to an embodiment, the display device160may include touch circuitry or a pressure sensor adapted to measure the intensity of force incurred by a touch. The audio module170may convert a sound into an electrical signal and vice versa. According to an embodiment, the audio module170may obtain the sound via the input device150, or output the sound via the sound output device155or an external electronic device (e.g., an electronic device102(e.g., a speaker or a headphone)) wiredly or wirelessly coupled with the electronic device101. The sensor module176may generate an electrical signal or data value corresponding to an internal operational state (e.g., power or temperature) of the electronic device101or an environmental state external to the electronic device101. The sensor module176may include, for example, a gesture sensor, a gyro sensor, an atmospheric pressure sensor, a magnetic sensor, an acceleration sensor, a grip sensor, a proximity sensor, a color sensor, an infrared (IR) sensor, a biometric sensor, a temperature sensor, a humidity sensor, or an illuminance sensor. The interface177may support specified protocols to be used for the electronic device101to be coupled with the external electronic device (e.g., the electronic device102) wiredly or wirelessly. According to an embodiment, the interface177may include a high definition multimedia interface (HDMI), a universal serial bus (USB) interface, a secure digital (SD) card interface, or an audio interface. A connecting terminal178may include a connector via which the electronic device101may be physically connected with the external electronic device (e.g., the electronic device102), for example, an HDMI connector, a USB connector, an SD card connector, or an audio connector (e.g., a headphone connector). The haptic module179may convert an electrical signal into a mechanical stimulus (e.g., a vibration or a movement) or an electrical stimulus which may be recognized by a user via tactile sensation or kinesthetic sensation.
The haptic module 179 may include, for example, a motor, a piezoelectric element, or an electric stimulator. The camera module 180 may capture a still image or moving images. According to an embodiment, the camera module 180 may include one or more lenses, image sensors, image signal processors, or flashes. The power management module 188 is a module configured to manage power supplied to the electronic device 101, and may be implemented as at least part of, for example, a power management integrated circuit (PMIC). The battery 189 is a device configured to supply power to at least one component of the electronic device 101, and may include, for example, a primary cell which is not rechargeable, a secondary cell which is rechargeable, or a fuel cell. The communication module 190 may support establishing a wired communication channel or a wireless communication channel between the electronic device 101 and the external electronic device (e.g., the electronic device 102, the electronic device 104, or the server 108) and performing communication via the established communication channel. The communication module 190 may include one or more communication processors that are operable independently from the processor 120 (e.g., the application processor (AP)) and support wired communication or wireless communication. According to an embodiment, the communication module 190 may include a wireless communication module 192 (e.g., a cellular communication module, a short-range wireless communication module, or a global navigation satellite system (GNSS) communication module) or a wired communication module 194 (e.g., a local area network (LAN) communication module or a power line communication (PLC) module). A corresponding one of these communication modules may communicate with the external electronic device via the first network 198 (e.g., a short-range communication network, such as Bluetooth™, wireless-fidelity (Wi-Fi) direct, or infrared data association (IrDA)) or the second network 199 (e.g., a long-range communication network, such as a cellular network, the Internet, or a computer network (e.g., a LAN or wide area network (WAN))). These various types of communication modules 190 may be implemented as a single chip or as separate chips, respectively. According to an embodiment, the wireless communication module 192 may identify and authenticate the electronic device 101 in a communication network, using user information stored in the subscriber identification module 196. The antenna module 197 may include at least one antenna for transmitting a signal or power to, or receiving a signal or power from, the outside of the electronic device 101. According to an embodiment, the communication module 190 (e.g., the wireless communication module 192) may transmit or receive a signal to or from the external electronic device via an antenna appropriate for a communication scheme. At least some of the above-described components may be coupled mutually and communicate signals (e.g., commands or data) therebetween via an inter-peripheral communication scheme (e.g., a bus, general purpose input and output (GPIO), serial peripheral interface (SPI), or mobile industry processor interface (MIPI)). According to an embodiment, commands or data may be transmitted or received between the electronic device 101 and the external electronic device 104 via the server 108 coupled with the second network 199. Each of the electronic devices 102 and 104 may be a device of the same type as, or a different type from, the electronic device 101.
According to an embodiment, all or some of the operations to be executed at the electronic device 101 may be executed at one or more of the external electronic devices. According to an embodiment, if the electronic device 101 should perform a function or a service automatically, or in response to a request from a user or another device, the electronic device 101, instead of, or in addition to, executing the function or the service, may request the one or more external electronic devices to perform at least part of the function or the service. The one or more external electronic devices receiving the request may perform the function requested or an additional function, and transfer an outcome of the performing to the electronic device 101. The electronic device 101 may provide the function or service requested, with or without further processing of the outcome. To that end, cloud computing, distributed computing, or client-server computing technology may be used, for example. The electronic device according to various embodiments may be one of various types of electronic devices. The electronic devices may include, for example, a portable communication device (e.g., a smart phone), a computer device, a portable multimedia device, a portable medical device, a camera, a wearable device, or a home appliance. According to an embodiment of the disclosure, the electronic devices are not limited to those described above. It should be appreciated that various embodiments of the disclosure and the terms used therein are not intended to limit the technological features set forth herein to particular embodiments, and are intended to include various changes, equivalents, or replacements for a corresponding embodiment. With regard to the description of the drawings, similar reference numerals may be used to refer to similar or related elements. It is to be understood that a singular expression may include a plural expression, unless the relevant context clearly indicates otherwise. As used herein, each of such phrases as “A or B,” “at least one of A and B,” “at least one of A or B,” “A, B, or C,” “at least one of A, B, and C,” and “at least one of A, B, or C” may include all possible combinations of the items enumerated together in a corresponding one of the phrases. Such terms as “1st” and “2nd,” or “first” and “second,” may represent corresponding components regardless of order or importance, may be used simply to distinguish one component from another, and do not limit the corresponding components. When it is described that an element (e.g., a first element) is “(operatively or communicatively) coupled” with/to or “connected” to another element (e.g., a second element), the element can be directly connected to the other element or can be connected to the other element through another element (e.g., a third element). As used herein, the term “module” may include a unit implemented in hardware, software, or firmware, and may interchangeably be used with other terms, for example, “logic,” “logic block,” “part,” or “circuitry.” A module may be a single integral component, or a minimum unit or part thereof, adapted to perform one or more functions. For example, the module may be implemented as an application-specific integrated circuit (ASIC). Various embodiments as set forth herein may be implemented as software (e.g., the program 140) including an instruction that is stored in a machine-readable storage medium (e.g., internal memory 136 or external memory 138) that is readable by a machine (e.g., a computer).
The machine is a device capable of invoking the stored instruction and operating according to the invoked instruction, and may include the electronic device (e.g., the electronic device 101) according to the embodiments set forth herein. When the instruction is executed by the processor (e.g., the processor 120), the processor may perform the function corresponding to the instruction directly, or the function corresponding to the instruction may be performed using other components under the control of the processor. The instruction may include a code generated by a compiler or a code executable by an interpreter. The machine-readable storage medium may be provided in the form of a non-transitory storage medium. Here, the term “non-transitory” simply means that the storage medium is a tangible device and does not include a signal, but this term does not differentiate between where data is semi-permanently stored in the storage medium and where the data is temporarily stored in the storage medium. According to an embodiment, a method according to various embodiments of the disclosure may be included and provided in a computer program product. The computer program product may be traded as a product between a seller and a buyer. The computer program product may be distributed in the form of a machine-readable storage medium (e.g., compact disc read only memory (CD-ROM)), or be distributed online via an application store (e.g., Play Store™). If distributed online, at least part of the computer program product may be temporarily generated or at least temporarily stored in the machine-readable storage medium, such as memory of the manufacturer's server, a server of the application store, or a relay server. Each component (e.g., a module or a program) according to various embodiments may include a single entity or multiple entities. Some of the above-described sub-components may be omitted, or one or more other components may be added to various embodiments. Alternatively or additionally, some components (e.g., modules or programs) may be integrated into a single entity, and the single entity may still perform one or more functions of each of the components in the same or similar manner as they are performed by the corresponding one of the components before the integration. According to various embodiments, operations performed by the module, the program, or another component may be carried out sequentially, in parallel, repeatedly, or heuristically, or one or more of the operations may be executed in a different order or omitted, or one or more other operations may be added. FIG. 2 is a block diagram of an electronic device for calculating a reflection coefficient according to various embodiments. Referring to FIG. 2, the electronic device 101 according to various embodiments may include the processor 120, the wireless communication module 192, and an antenna 251. The wireless communication module 192 (or a communication circuit) may include a transceiver 210, a power amplifier 221, a duplexer 231, and a coupler 241. According to various embodiments, the transceiver 210 may generate a transmission signal. The transceiver 210 may carry a transmission data signal on a carrier and transfer the transmission signal, including the transmission data signal and the carrier, to the power amplifier 221.
The transmission data signal may include data to be transmitted to another electronic device (for example, the electronic device 104) or a Base Station (BS) by the electronic device 101, and may use a fixed particular frequency or a Continuous Wave (CW) signal configured as a single tone in an Industrial Science Medical (ISM) band. The transceiver 210 may include a Band-Pass Filter (BPF) specified for the band in which the transmission path is to be measured, in order to use a continuous frequency change of the CW signal, or may include a bypass path that does not include a filter when a low-level signal is used. The transmission signal may be transmitted to the outside of the electronic device 101 through the antenna 251 in the form of an electromagnetic wave via the duplexer 231 and the coupler 241. In order to generate the transmission signal, the transceiver 210 may include an oscillator (not shown) for generating a carrier. The transceiver 210 may include a modulation circuit for performing a modulation task to carry the transmission data signal on the carrier generated by the oscillator. The transceiver 210 may include a Radio-Frequency (RF) amplifier for amplifying the modulated carrier in order to increase the strength of the transmission signal. According to various embodiments, the transceiver 210 may receive a signal (Rx) received through the antenna 251 via the coupler 241 and the duplexer 231. The transceiver 210 may receive a reception signal including a reception data signal and a carrier from the antenna 251 and extract data from the reception signal. The transceiver 210 may transmit the extracted data to the processor 120 or the memory 130. The reception data signal may include data that the electronic device 101 receives from another electronic device 104 or the BS. In order to process the received signal, the transceiver 210 may include a demodulation circuit for performing a demodulation task to extract data from the reception signal. According to various embodiments, the power amplifier 221 may amplify the transmission signal (Tx) on the transmission side. The power amplifier 221 may receive the transmission signal (Tx) from the transceiver 210, amplify the transmission signal, and transmit the amplified transmission signal to the duplexer 231. The power amplifier 221 and the duplexer 231 may operate differently depending on the frequency band of the transmission signal or a communication scheme. For example, the power amplifier 221 may include a Multi-Mode Multi-Band (MMMB) power amplifier. The duplexer 231 may include a High-Band (HB) duplexer, a Middle-Band (MB) duplexer, or a Low-Band (LB) duplexer. The power amplifier 221 may receive the transmission signal from the transceiver 210, amplify the received transmission signal, and then transmit the amplified transmission signal to the duplexer 231. According to various embodiments, the duplexer 231 may branch the transmission signal and the reception signal. The duplexer 231 may separate the transmission signal and the reception signal and filter a transmission frequency and a reception frequency. According to an embodiment, when a signal is transmitted through the antenna 251, the duplexer 231 may pass the transmission signal therethrough. According to another embodiment, when a signal is received through the antenna 251, the duplexer 231 may pass the reception signal therethrough. According to an embodiment, the duplexer 231 may transmit the transmission signal from the transceiver 210 to the antenna 251.
According to an embodiment, the duplexer 231 may transmit the reception signal from the antenna 251 to the transceiver 210. The duplexer 231 may receive the transmission signal from the power amplifier 221 and transmit the transmission signal to the antenna 251 via the coupler 241. The duplexer 231 may receive the reception signal from the antenna 251 via the coupler 241 and transmit the reception signal to the transceiver 210. According to various embodiments, the coupler 241 may be connected between the antenna 251 and the duplexer 231, and may receive the transmission signal from the duplexer 231 or receive the reception signal from the antenna 251. The coupler 241 may individually detect the transmission signal and the reception signal. According to an embodiment, the coupler 241 may branch off a part of the transmission signal transmitted from the duplexer 231 and transfer the branched part to the transceiver 210. The coupler 241 may distinguish between a signal radiated through the antenna 251 and a signal that is not radiated therethrough but is reflected therefrom, among at least one signal output through the duplexer 231, and transfer some of the signals to the transceiver 210. For example, a part of the transmission signal may be fed back to the transceiver 210 (for example, a feedback port) from the coupler 241. According to an embodiment, the signals branched through the coupler 241 may include a forward coupling signal 211. The forward coupling signal 211 is a part of the transmission signal and may have the same frequency and phase as the frequency and phase of the transmission signal. According to an embodiment, a strength of the forward coupling signal 211 may be lower than a strength of the transmission signal. The forward coupling signal 211 may be used to calculate a reflection coefficient. According to another embodiment, the coupler 241 may transmit a signal reflected from the antenna 251 to the transceiver 210. For example, the reflected signal may be transmitted to the transceiver 210 (for example, the feedback port) from the coupler 241. According to an embodiment, the signal that is not radiated through the antenna 251 but is reflected from the antenna 251 may include a reverse coupling signal 212. The reverse coupling signal 212 may include a signal reflected from the antenna 251 and a signal received through the antenna 251. According to various embodiments, the processor 120 may control the transceiver 210, the power amplifier 221, the duplexer 231, the coupler 241, and the antenna 251. The processor 120 may perform the function of the communication module 190. The processor 120 may control the operation of the transceiver 210 for generating a transmission signal. The processor 120 may determine or generate data to be included in the transmission signal and transmit the data to the transceiver 210. The processor 120 may determine a generation scheme of the transmission signal. The transceiver 210 may generate the transmission signal from the data determined or generated by the processor 120 according to the generation scheme determined by the processor 120. For example, when the processor 120 determines that the data is in a voice format and the signal generation scheme is Amplitude Modulation (AM), the transceiver 210 may carry the voice data on the carrier in the AM scheme and generate the transmission signal. According to various embodiments, the processor 120 may determine a phase and a frequency of the transmission signal. The processor 120 may control the transceiver 210 such that the transmission signal has a specific phase and a specific frequency.
The processor 120 may determine each of the phase and the frequency of the transmission signal for the antenna 251. The processor 120 may transmit the transmission signal through the antenna 251. The processor 120 may control the transceiver 210 to compensate for the phase of the transmission signal. According to various embodiments, the processor 120 may transmit a signal through the antenna 251, and some of the signals transmitted through the antenna 251 may be detected as forward coupling signals branched through the coupler 241. According to an embodiment, the forward coupling signal 211 may have the same frequency and phase as those of the transmitted signal. According to an embodiment, a strength of the forward coupling signal 211 may be lower than a strength of the transmitted signal. The processor 120 may detect the reverse coupling signal 212 that is not radiated through the antenna 251 but is reflected from the antenna 251. The reverse coupling signal 212 may include a signal received through the antenna 251. According to an embodiment, the processor 120 may calculate a reflection coefficient of the antenna 251 based at least partially on the forward coupling signal 211 or the reverse coupling signal 212, and determine a magnitude, a phase, an I value, and a Q value of a signal corresponding to the calculated reflection coefficient. According to an embodiment, the processor 120 may use the determined reflection coefficient to identify an object type and determine a distance from the object. The electronic device 101 (for example, the processor 120) according to various embodiments may transmit or receive a signal through the transceiver 210. According to an embodiment, the electronic device 101 (for example, the processor 120) may transmit the transmission signal (Tx) through the transceiver 210, and the transmission signal (Tx) output through the transceiver 210 may be amplified via the power amplifier 221. The transmission signal (Tx) may be transmitted to the coupler 241 via the duplexer 231. According to an embodiment, the duplexer 231 may transmit the high-frequency signals transmitted and received through the antenna 251 such that the transmission signal and the reception signal are separately routed according to their communication bands. According to an embodiment, the transmission signal (Tx) passing through the coupler 241 may be transmitted to another electronic device through the antenna 251. The coupler 241 may be a bidirectional coupler, and the electronic device 101 may individually detect, through the coupler 241, the forward coupling signal 211 for the transmission signal (Tx) and the reverse coupling signal 212 that is not radiated from the antenna 251 but is reflected therefrom. According to various embodiments, the electronic device 101 (for example, the processor 120) may receive the reception signal (Rx) through the antenna 251. The reception signal (Rx) may be transmitted to the duplexer 231 via the coupler 241. According to an embodiment, the reception signal (Rx) transmitted to the duplexer 231 may be transmitted to the transceiver 210. According to an embodiment, the electronic device 101 may detect, through the coupler 241, the signal that is not radiated through the antenna 251 but is reflected therefrom.
According to an embodiment, the electronic device 101 may individually detect, through the coupler 241, a signal (Rx) received through the antenna 251 and a signal that is not radiated through the antenna 251 but is reflected therefrom. For example, the reverse coupling signal 212 may include the signal (Rx) received through the antenna 251 and the signal reflected from the antenna 251 through the coupler 241. According to various embodiments, the electronic device 101 (for example, the processor 120) may detect each of the forward coupling signal 211 for the transmission signal (Tx) and the reverse coupling signal 212 for the transmission signal (Tx), which is not radiated through the antenna 251 but is reflected therefrom through the coupler 241. According to various embodiments, the electronic device 101 (for example, the processor 120) may calculate a reflection coefficient of the antenna 251 based at least partially on the forward coupling signal 211 or the reverse coupling signal 212. According to various embodiments, the electronic device 101 (for example, the processor 120) may determine at least one of a type of an object (or an entity) adjacent to the electronic device 101 or a distance therefrom on the basis of the calculated reflection coefficient. According to various embodiments, the electronic device 101 may switch the antenna 251 to another antenna. FIG. 3 illustrates an internal structure of a coupler in detail according to various embodiments, and FIG. 4 illustrates the I value and the Q value of a lookup table as coordinates through normalization according to various embodiments. Referring to FIGS. 3 and 4, the coupler 241 according to various embodiments may be a bidirectional coupler, and may detect each of a Tx signal transmitted to the outside through the coupler 241 and an Rx signal received from the antenna 251. A Tx signal (for example, a1) generated by the transceiver 210 may pass through the coupler 241, and the Tx signal (a2) passing through the coupler 241 may be output (or radiated) to the outside through the antenna 251. The coupler 241 may receive the Tx signal (for example, a1) as input, output the Tx signal (for example, a2), and transmit the Tx signal to the antenna 251. The electronic device 101 may allow the Tx signal (for example, a1) to be output as the Tx signal (for example, a2) and transmitted to the antenna 251 through the coupler 241, and may detect the forward coupling signal (for example, b3). Further, the electronic device 101 may transmit the Rx signal (for example, b2) received through the antenna 251 and the reverse coupling signal (for example, b4), which is not radiated through the antenna 251 but is reflected from the antenna 251, to the transceiver 210 through the coupler 241. The signal b1 is, among the Tx signals (for example, a1) output from the transceiver 210, a signal that does not pass through the coupler 241 but is reflected therefrom and transmitted back to the transceiver 210. According to various embodiments, the electronic device 101 (for example, the processor 120) may individually detect, through the coupler 241, the forward coupling signal of the Tx signal and the reverse coupling signal, which is not radiated through the antenna 251 but is reflected from the antenna 251. The electronic device 101 (for example, the processor 120) may calculate a reflection coefficient (Γin) based on the reverse coupling signal using [Equation 1] or [Equation 2] below.
The electronic device 101 (for example, the processor 120) may calculate the reflection coefficient based at least partially on the transmission (Tx) signal (for example, a2), the forward coupling signal (for example, b3), or the reverse coupling signal (for example, b4). The processor 120 may calculate a reflection coefficient (Γ) through [Equation 1] or [Equation 2] below.

Y(n) = ΓX(n) + W(n),  n = 1, 2, 3, . . . , N    [Equation 1]

In [Equation 1] above, Y(n) denotes a transmission (Tx) signal (for example, a2), X(n) denotes a forward coupling signal (for example, b3), and W(n) denotes noise generated by the coupler 241. The reflection coefficient (Γ) may be calculated through the transmission (Tx) signal (for example, a2), the forward coupling signal (for example, b3), and the noise. The reflection coefficient (Γ) may be proportional to f(S21, S31, S32, S41, S42, . . . ) × b4/b3. [Equation 1] is an equation for calculating a reflection coefficient (Γ) at a point viewing the antenna and the coupler for a 4-port network when a bidirectional coupler or a coupler is connected to the antenna of the electronic device 101. The S parameters S21, S22, S31, S32, S41, and S42 may be specific values (constants). The reflection coefficient (Γ) corresponds to a value generated by dividing the reverse coupling signal (for example, b4) by the forward coupling signal (for example, b3). The equation may be stored in the memory. The S parameter is a circuit result value used for a radio frequency and means a ratio of an output voltage to an input voltage in frequency distribution. For example, S21 indicates a ratio between a voltage of a signal input into the first antenna and a voltage of a signal output from a second antenna. The S parameters may be indicated as a matrix, as shown in [Equation 2] below.

S matrix = | S11  S12 |
           | S21  S22 |    [Equation 2]

In the S parameters when a plurality of antennas exists, S11 denotes a value of a voltage of a signal reflected through the first antenna compared to a voltage of a signal input into the first antenna, S21 denotes a value of a voltage of a signal received through the second antenna compared to a voltage of a signal input into the first antenna, S31 denotes a value of a voltage of a signal received through a third antenna compared to a voltage of a signal input into the first antenna, and S41 denotes a value of a voltage of a signal received through a fourth antenna compared to a voltage of a signal input into the first antenna in [Equation 2]. Similarly, S22 denotes a value of a voltage of a signal reflected through the second antenna compared to a voltage of a signal input into the second antenna, S32 denotes a value of a voltage of a signal received through the third antenna compared to a voltage of a signal input into the second antenna, S42 denotes a value of a voltage of a signal received through the fourth antenna compared to a voltage of a signal input into the second antenna, and S43 denotes a value of a voltage of a signal received through the fourth antenna compared to a voltage of a signal input into the third antenna. For example, a lookup table (for example, an NV LUT) showing the magnitude and the phase of a signal corresponding to the reflection coefficient is as shown in [Table 1] below.
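Before turning to [Table 1] below, it may help to see how the relationship in [Equation 1] could be evaluated numerically. The following Python sketch estimates the reflection coefficient by least squares from N complex samples of the forward coupling signal b3 and the reverse coupling signal b4; this is only one illustrative reading of the equation, and the sample arrays, the noise level, and the scale factor standing in for f(S21, S31, S32, S41, S42, . . . ) are all hypothetical, since the document does not specify an implementation.

    import numpy as np

    def estimate_reflection_coefficient(b3, b4, s_scale=1.0 + 0.0j):
        # Least-squares estimate of the reflection coefficient (Gamma),
        # modeling b4(n) = Gamma * b3(n) + W(n) over n = 1..N so that the
        # coupler noise W(n) averages out.
        # s_scale is a hypothetical constant lumping the fixed S parameters
        # f(S21, S31, S32, S41, S42, ...) into a single complex factor.
        b3 = np.asarray(b3, dtype=complex)
        b4 = np.asarray(b4, dtype=complex)
        gamma = np.vdot(b3, b4) / np.vdot(b3, b3)  # (b3^H b4) / (b3^H b3)
        return s_scale * gamma

    # Synthetic check: a reflection of magnitude 0.5 at 45 degrees.
    rng = np.random.default_rng(0)
    true_gamma = 0.5 * np.exp(1j * np.pi / 4)
    b3 = rng.standard_normal(64) + 1j * rng.standard_normal(64)
    b4 = true_gamma * b3 + 0.01 * (rng.standard_normal(64) + 1j * rng.standard_normal(64))
    print(abs(estimate_reflection_coefficient(b3, b4)))  # approximately 0.5

With many samples, the least-squares ratio converges to b4/b3 in the noise-free case, which matches the statement above that the reflection coefficient corresponds to the reverse coupling signal divided by the forward coupling signal.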
TABLE 1

Magnitude    Phase     I Value    Q Value
0.0            0.0        1295        348
0.4            0.0       −5416       1967
0.4           45.0       −1321       7148
0.4           90.0        2467       5920
0.4          135.0        5839       3361
0.4          180.0        6248       −593
0.4          225.0        4703      −4785
0.4          270.0         563      −5831
0.4          315.0       −4748      −2458
0.6           22.5       −5098       8744
0.6           67.5        1548       9153
0.6          112.5        7634       5490
0.6          157.5       10358      −1188
0.6          202.5        7191      −6585
0.6          247.5         954      −9044
0.6          292.5       −6389      −6232
0.6          337.5       −9130        728
0.8            0.0       −7556      11327
0.8           45.0        4055      12926
0.8           90.0        9151       7679
0.8          135.0       12625       −409
0.8          180.0        9521      −7985
0.8          225.0        3029     −11993
0.8          270.0       −7609     −10256
0.8          315.0      −13452       −457

[Table 1] above shows the detailed magnitude and phase of a signal corresponding to the calculated reflection coefficient. FIG. 4 illustrates the I value and the Q value in [Table 1] above as coordinates through normalization; the horizontal axis indicates the value of I and the vertical axis indicates the value of Q in FIG. 4. For example, when the magnitude of the signal is 0.4 and the phase of the signal is 45 degrees in [Table 1] above, the value of I for the signal is −1321 and the value of Q for the signal is 7148, which may be expressed as the coordinates 401 of FIG. 4. [Table 1] above is a table according to an embodiment, and each value may be variably changed. The table may be a table configured on the basis of the reflection coefficient of the antenna. The table may include the magnitude and the phase of a signal corresponding to a transfer coefficient. The lookup table may be stored in the memory 130, and may be updated when an object type and a distance from the object are determined. The electronic device 101 may store at least one of the determined type of the external object and the determined distance from the external object in the lookup table on the basis of at least one of the reflection coefficient and the transfer coefficient. For example, when the magnitude of at least one signal of the reflection coefficient or the transfer coefficient of the antenna is 0.0 and the phase is 0.0, the electronic device 101 may be in a state in which there is no object (entity) in the vicinity thereof. The magnitude and the phase of each signal recorded in the table may indicate a preset state of the electronic device 101 corresponding to the magnitude and the phase of each signal. For example, when the magnitude of the signal is 0.80 and the phase is 315.0, the electronic device 101 may determine that the value of I is −13452, the value of Q is −457, and an object exists near the electronic device (for example, within several cm therefrom). According to various embodiments, the electronic device 101 (for example, the processor 120) may determine the magnitude and the phase of the signal corresponding to at least one of the reflection coefficient or the transfer coefficient of the antenna, or may acquire the magnitude and the phase of the signal corresponding to at least one of the reflection coefficient or the transfer coefficient of the antenna through the table. According to various embodiments, when an object exists near the electronic device 101, the electronic device 101 (for example, the processor 120) may calculate at least one of the reflection coefficient or the transfer coefficient of the antenna and pre-store the magnitude and the phase of the signal corresponding to at least one of the calculated reflection coefficient or transfer coefficient of the antenna in the table. According to various embodiments, the electronic device 101 (for example, the processor 120) may determine a type of the object adjacent to the electronic device or a distance from the object on the basis of the table corresponding to the antenna.
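The diversity-antenna example in the next paragraph extends this same table mechanism; before that, the following Python sketch shows one plausible way such a lookup could be consulted. The rows are a hypothetical subset of [Table 1], and the nearest-neighbor search and the proximity labels are assumptions made for illustration rather than the document's prescribed method.

    import math

    # A few rows of [Table 1]: (magnitude, phase in degrees, I value, Q value).
    LOOKUP_TABLE = [
        (0.0,   0.0,   1295,   348),
        (0.4,  45.0,  -1321,  7148),
        (0.6, 157.5,  10358, -1188),
        (0.8, 315.0, -13452,  -457),
    ]

    def classify_reflection(i_val, q_val):
        # Find the stored entry whose (I, Q) point lies nearest to the
        # measured I and Q values.
        best = min(LOOKUP_TABLE,
                   key=lambda row: math.hypot(row[2] - i_val, row[3] - q_val))
        magnitude, phase, _, _ = best
        # Per the description: magnitude 0.0 and phase 0.0 indicate no
        # nearby object, while e.g. 0.80 / 315.0 indicates an object
        # within several cm of the device.
        if magnitude == 0.0 and phase == 0.0:
            return magnitude, phase, "no object in the vicinity"
        return magnitude, phase, "object near the device"

    print(classify_reflection(-13000, -500))  # -> (0.8, 315.0, 'object near the device')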
For example, according to various embodiments, when an object exists adjacent to the electronic device 101, the electronic device 101 (for example, the processor 120) may calculate at least one of the reflection coefficient or the transfer coefficient of the diversity antenna corresponding to the object adjacent to the electronic device, and store the magnitude and the phase of the signal corresponding to at least one of the calculated reflection coefficient or transfer coefficient of the diversity antenna in the table corresponding to the diversity antenna. FIG. 5 is a block diagram of an electronic device performing a function on the basis of an external object according to various embodiments. More specifically, FIG. 5 is a first block diagram of the electronic device 101 for determining a type of an external object located adjacent to the electronic device 101 or a distance from the external object based at least partially on a reflection coefficient or a transfer coefficient according to various embodiments. Referring to FIG. 5, the electronic device 101 calculating the reflection coefficient and the transfer coefficient according to various embodiments may include the processor 120, the wireless communication module 192 (or a communication circuit), a first antenna 551, and a second antenna 552. The wireless communication module 192 may include a transceiver 510, a first power amplifier 521, a second power amplifier 522, a first duplexer 531, a second duplexer 532, a first coupler 541, a second coupler 542, and a switch 561. The wireless communication module 192 may include a transceiver for generating at least one signal, at least one amplifier for amplifying the generated signals, at least one duplexer for dividing the amplified signals according to their communication bands, and at least one coupler for transmitting the divided signals through at least one of the first antenna or the second antenna. The first coupler 541 may be disposed between the first antenna 551 and the first duplexer 531, and the second coupler 542 may be disposed between the second antenna 552 and the second duplexer 532. The switch 561 may be disposed between the first coupler 541 and the second coupler 542, and may switch signals received by the first coupler 541 and the second coupler 542 and transfer the signals to the transceiver 510. According to various embodiments, the processor 120 may perform the same operation as the processor 120 of FIG. 2, or may perform at least one operation or function thereof. The transceiver 510 may perform the same operation as the transceiver 210 of FIG. 2, or may perform at least one operation or function thereof. At least one of the first power amplifier 521 or the second power amplifier 522 may perform the same operation as the power amplifier 221 of FIG. 2, or may perform at least one operation or function thereof. At least one of the first duplexer 531 or the second duplexer 532 may perform the same operation as the duplexer 231 of FIG. 2, or may perform at least one operation or function thereof. At least one of the first coupler 541 or the second coupler 542 may perform the same operation as the coupler 241 of FIG. 2, or may perform at least one operation or function thereof. According to various embodiments, the switch 561 may switch signals transmitted between elements (for example, the first coupler 541 and the second coupler 542). The switch 561 may receive a signal from the first coupler 541 or the second coupler 542 and transfer the signal to the transceiver 510.
According to an embodiment, the switch 561 may receive a signal received from the first antenna 551 through the first coupler 541 and transfer the received signal to the transceiver 510. According to another embodiment, the switch 561 may receive a signal received from the second antenna 552 through the second coupler 542 and transfer the received signal to the transceiver 510. The switch 561 may transfer a signal received from at least one of the first antenna 551 or the second antenna 552 to a feedback port 511 of the transceiver 510. According to various embodiments, the switch 561 may transfer a signal (for example, a forward coupling signal) branched by the first coupler 541 from some of the transmission signals transmitted from the first duplexer 531 to the transceiver 510. According to an embodiment, the switch 561 may switch some signals branched from the transmission signals by the first coupler 541 and transfer the same to the transceiver 510 (for example, the feedback port 511). For example, the branched signals may include a forward coupling signal. According to another embodiment, the switch 561 may transfer a signal that is not radiated through the first antenna 551 but is reflected therefrom to the transceiver 510. According to an embodiment, the switch 561 may switch a signal (for example, a reverse coupling signal) that is not radiated through the first antenna 551 but is reflected therefrom and transfer the same to the transceiver 510 (for example, the feedback port 511). The electronic device 101 (for example, the processor 120) may use the reverse coupling signal to calculate the reflection coefficient. The reflection coefficient may be calculated as the ratio (for example, S11) of the voltage of the signal reflected from the first antenna 551 to the voltage of the signal input into the first antenna 551. Further, the reflection coefficient may be calculated as the ratio (for example, S22) of the voltage of the signal reflected from the second antenna 552 to the voltage of the signal input into the second antenna 552. According to various embodiments, the switch 561 may transfer a signal (for example, a forward coupling signal) branched by the second coupler 542 from some of the transmission signals transmitted from the second duplexer 532 to the transceiver 510. According to an embodiment, the switch 561 may switch some signals branched by the second coupler 542 and transfer the signals to the transceiver 510 (for example, the feedback port 511). For example, the branched signal may include a forward coupling signal. According to another embodiment, the switch 561 may transfer a signal branched by the second coupler 542 from some of the reception signals received from the second antenna 552 to the transceiver 510. According to an embodiment, the switch 561 may switch some signals branched by the second coupler 542 and transfer the signals to the transceiver 510 (for example, the feedback port 511). According to various embodiments, some of the signals output through the first antenna 551 may be received by the second antenna 552. When some of the signals output through the first antenna 551 are input through the second coupler 542, the switch 561 may switch some signals input through the second coupler 542 and transfer the signals to the transceiver 510 (for example, the feedback port 511). The electronic device 101 (for example, the processor 120) may calculate a transfer coefficient on the basis of some signals input through the switch 561.
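These voltage ratios, S11 and S22 for reflection and, as elaborated in the next paragraph, S21 and S12 for transfer, lend themselves to direct computation. The Python sketch below uses hypothetical complex voltage values and an assumed detection threshold; it illustrates the definitions only and is not the document's prescribed procedure.

    def s_parameters(v_in1, v_refl1, v_rx2):
        # S11: voltage reflected from the first antenna over the voltage
        #      input into the first antenna (reflection coefficient).
        # S21: voltage received through the second antenna over the voltage
        #      input into the first antenna (transfer coefficient).
        s11 = v_refl1 / v_in1
        s21 = v_rx2 / v_in1
        return s11, s21

    def maybe_adjust_tx_power(s11, s21, threshold=0.5):
        # Hypothetical policy: if either coefficient magnitude exceeds the
        # assumed threshold, treat an object as adjacent and back off the
        # transmission power as part of the predetermined function.
        if abs(s11) > threshold or abs(s21) > threshold:
            return "reduce transmission power to the predetermined magnitude"
        return "keep current transmission power"

    s11, s21 = s_parameters(1.0 + 0.0j, 0.55 + 0.2j, 0.10 - 0.05j)
    print(maybe_adjust_tx_power(s11, s21))  # the reflection term dominates here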
The transfer coefficient may be calculated using the ratio (for example, S21) of the voltage of the signal received through the second antenna 552 to the voltage of the signal input into the first antenna 551. Further, the transfer coefficient may be calculated using the ratio (for example, S12) of the voltage of the signal received through the first antenna 551 to the voltage of the signal input into the second antenna 552. The electronic device 101 (for example, the processor 120) may calculate at least one of a type of an object adjacent to the electronic device 101 and a distance from the object based at least partially on the reflection coefficient or the transfer coefficient. In general, when an object having a dielectric constant, such as a hand or a head, is adjacent to the antenna of the electronic device, the input impedance seen from the antenna and the coupling coefficient between the antennas may change due to the change in the dielectric constant around the antenna. The electronic device 101 may analyze a change in the phase and the strength of the signal received by the second antenna 552 among the signals output through the first antenna 551, and determine a type of an object existing around the electronic device 101 and a distance from the object. The electronic device 101 (for example, the processor 120) may determine at least one of the type of the object adjacent to the electronic device 101 or the distance from the object based at least partially on the signals that are branched through the first coupler 541 and input into the feedback port 511 of the transceiver 510 and the signals that are output through the first antenna 551 and received through the second antenna 552. According to an embodiment, the electronic device 101 (for example, the processor 120) may perform a predetermined function according to a value corresponding to the distance between the electronic device 101 and an external object based at least partially on the reflection coefficient and the transfer coefficient. The electronic device 101 (for example, the processor 120) may perform a predetermined function (or operation) as at least one of the type of the object adjacent to the electronic device 101 or the distance from the object is identified based at least partially on the reflection coefficient or the transfer coefficient. For example, when the object is another electronic device (for example, the electronic device 102 of FIG. 1), the electronic device 101 (for example, the processor 120) may activate a communication module (for example, a Bluetooth, NFC, Wi-Fi, or wireless charging module) for performing wired communication and/or wireless communication with the other electronic device (for example, the electronic device 102 of FIG. 1). According to an embodiment, the electronic device 101 (for example, the processor 120) may perform an operation of adjusting the transmission power used for communication to a predetermined magnitude through the communication module as at least a part of the predetermined function. FIG. 6 is a flowchart illustrating an operation in which the electronic device performs a function on the basis of an external object according to various embodiments. Hereinafter, the operation in which the electronic device according to an embodiment performs the function on the basis of the external object is described in detail with reference to FIG. 6. According to various embodiments, in operation 610, the electronic device 101 (for example, the processor 120) may output a first signal to a first antenna.
The electronic device 101 (for example, the processor 120) may transmit the first signal through the transceiver 510, the power amplifier 521, the first duplexer 531, and the first antenna 551. According to an embodiment, the electronic device 101 (for example, the processor 120) may transmit a signal through the transceiver 510, the signal output through the transceiver 510 may be amplified through the power amplifier 521, the amplified signal may be transmitted to the first coupler 541 via the first duplexer 531, and the first signal output through the first coupler 541 may be output through the first antenna. The signal output through the first antenna 551 may be transmitted to another electronic device. The electronic device 101 (for example, the processor 120) may acquire a forward coupling signal of the first signal from the first coupler 541. According to various embodiments, in operation 612, the electronic device 101 (for example, the processor 120) may acquire a second signal, obtained by reflection of the first signal from the first antenna. According to an embodiment, the electronic device 101 (for example, the processor 120) may detect the second signal (for example, a reverse coupling signal), which is not radiated from the first antenna 551 but is reflected therefrom. The second signal is a part of the first signal that is not radiated through the first antenna but is reflected therefrom, and thus may be fed back from the first coupler 541 to the transceiver 510 (for example, the feedback port 511). According to an embodiment, the fed-back second signal may include a reverse coupling signal. The electronic device 101 (for example, the processor 120) may identify a reflection coefficient, obtained by reflection of the first signal from the first antenna, on the basis of at least a part of the second signal. According to various embodiments, in operation 614, the electronic device 101 (for example, the processor 120) may acquire a third signal, obtained by reception, through the second antenna, of the first signal output through the first antenna. According to an embodiment, the electronic device 101 (for example, the processor 120) may receive, through the second antenna 552, the third signal, which is a part of the first signal output through the first antenna 551. The electronic device 101 (for example, the processor 120) may receive most of the signals received from the second antenna 552 through the second duplexer 532 and transmit some signals to the feedback port 511 through the second coupler 542 and the switch 561. When some of the signals output through the first antenna 551 are input through the second coupler 542, the switch 561 may switch some signals input through the second coupler 542 and transfer the signals to the transceiver 510 (for example, the feedback port 511). The switch 561 may operate to transmit signals received from at least one of the first antenna 551 or the second antenna 552 to the feedback port 511 of the transceiver 510. According to various embodiments, in operation 616, the electronic device 101 (for example, the processor 120) may identify a reflection coefficient obtained by reflection of the first signal from the first antenna and a transfer coefficient obtained by transmission of the first signal to the second antenna, based at least partially on the second signal and the third signal.
According to an embodiment, the electronic device 101 (for example, the processor 120) may calculate the reflection coefficient of the first antenna 551 based at least partially on the second signal (for example, a reverse coupling signal) or the third signal detected by the first coupler 541. According to an embodiment, the electronic device 101 (for example, the processor 120) may use the reflection coefficient of the first antenna 551, based at least partially on the reverse coupling signal or the third signal, in order to determine at least one of a type of an object adjacent to the electronic device 101 or a distance from the object. According to an embodiment, the electronic device 101 (for example, the processor 120) may detect reception, by the second antenna 552, of some of the signals output through the first antenna 551. When some of the signals output through the first antenna 551 are received through the second antenna 552, the electronic device 101 (for example, the processor 120) may calculate the transfer coefficient on the basis of some signals input through the second antenna 552. The electronic device 101 (for example, the processor 120) may identify (or calculate) at least one of the type of the object adjacent to the electronic device 101 or the distance from the object based at least partially on the reflection coefficient or the transfer coefficient. According to various embodiments, in operation 618, the electronic device 101 (for example, the processor 120) may perform a predetermined function according to a value corresponding to the distance between the electronic device 101 and the external object based at least partially on the reflection coefficient and the transfer coefficient. The electronic device 101 (for example, the processor 120) may perform a predetermined function (or operation) as at least one of the type of the object adjacent to the electronic device 101 or the distance from the object is identified based at least partially on the reflection coefficient or the transfer coefficient. According to an embodiment, the electronic device 101 (for example, the processor 120) may use at least a part of the reflection coefficient or the transfer coefficient to perform at least one function based on the type of the object or the distance from the object. For example, when the object is another electronic device (for example, the electronic device 102 of FIG. 1), the electronic device 101 (for example, the processor 120) may activate a communication module (for example, a Bluetooth, NFC, Wi-Fi, or wireless charging module) for performing wired communication and/or wireless communication with the other electronic device (for example, the electronic device 102 of FIG. 1). According to an embodiment, the electronic device 101 (for example, the processor 120) may perform an operation of adjusting the transmission power used for the communication to a predetermined magnitude through the communication module as at least a part of the predetermined function.
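Putting operations 610 through 618 together, the following Python sketch mirrors the FIG. 6 flow with stubbed hardware calls. Every method on the 'radio' object is hypothetical scaffolding, since the document describes the flow only at the block-diagram level, and the coefficient arithmetic assumes complex-valued samples.

    def proximity_sensing_cycle(radio):
        # One pass through the FIG. 6 flow (operations 610 to 618);
        # 'radio' stands in for the transceiver/coupler/switch hardware.
        first = radio.transmit_first_signal()    # 610: output the first signal
        second = radio.read_reverse_coupling()   # 612: reflected second signal
        third = radio.read_second_antenna()      # 614: third signal via antenna 2
        reflection = second / first              # 616: reflection coefficient
        transfer = third / first                 # 616: transfer coefficient
        # 618: perform the predetermined function according to the value
        # corresponding to the distance from the external object.
        radio.perform_predetermined_function(reflection, transfer)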
According to various embodiments, an operating method in which the electronic device performs a function on the basis of an external object may include an operation of acquiring a second signal, obtained by reflection of the first signal from the first antenna, and a third signal, acquired by reception, by the second antenna, of the first signal output through the first antenna, an operation of identifying a reflection coefficient obtained by reflection of the first signal from the first antenna and a transfer coefficient obtained by transmission of the first signal to the second antenna, based at least partially on the second signal and the third signal, and an operation of performing a predetermined function according to a value corresponding to a distance between the electronic device and an external object, based at least partially on the reflection coefficient and the transfer coefficient. According to an embodiment, an operation of identifying a value corresponding to the distance, based on a lookup table stored in the memory, may be further included. According to an embodiment, an operation of adjusting the transmission power used for communication to a predetermined magnitude through the communication module, as at least a part of the predetermined function, may be further included. According to an embodiment, an operation of identifying at least one of the type of the external object or the distance from the external object, based at least partially on the reflection coefficient and the transfer coefficient, may be further included. According to an embodiment, an operation of storing, in the lookup table, at least one of the type of the external object or the distance from the external object, determined based at least partially on the reflection coefficient and the transfer coefficient, may be further included. According to various embodiments, an operation of calculating the reflection coefficient based on a voltage of the second signal and a voltage of the first signal, and an operation of calculating the transfer coefficient based on a voltage of the third signal and the voltage of the first signal, may be further included. According to various embodiments, an operating method of the electronic device may include an operation of outputting a first signal through the first antenna using the coupler, acquiring, on the basis of the output first signal, a second signal reflected through the first antenna and a third signal obtained by reception, through the second antenna, of the first signal output through the first antenna, identifying a reflection coefficient obtained by reflection of the first signal from the first antenna and a transfer coefficient obtained by transmission of the first signal to the second antenna, based at least partially on the second signal and the third signal, and determining at least one of a type of an external object and a distance from the external object based at least partially on the reflection coefficient and the transfer coefficient. According to an embodiment, an operation of storing, in the lookup table, at least one of the type of the external object or the distance from the external object, determined based at least partially on the reflection coefficient and the transfer coefficient, may be further included.
According to an embodiment, an operation of adjusting the transmission power used for communication to a predetermined magnitude through the communication module, as at least a part of the predetermined function, may be further included. According to an embodiment, an operation of identifying at least one of the type of the external object or the distance from the external object, based at least partially on the reflection coefficient and the transfer coefficient, may be further included. According to various embodiments, an operation of calculating the reflection coefficient based on a voltage of the second signal and a voltage of the first signal, and an operation of calculating the transfer coefficient based on a voltage of the third signal and the voltage of the first signal, may be further included. FIG. 7 is a block diagram of an electronic device performing a function on the basis of an external object according to various embodiments. More specifically, FIG. 7 is a second block diagram of the electronic device 101 for determining a type of an external object adjacent to the electronic device 101 or a distance from the external object based at least partially on a reflection coefficient or a transfer coefficient according to various embodiments. Referring to FIG. 7, the electronic device 101 calculating the reflection coefficient and the transfer coefficient according to various embodiments may include the processor 120, the wireless communication module 192, a first antenna 751, and a second antenna 752. The wireless communication module 192 (or a communication circuit) may include a transceiver 710, a first power amplifier 721, a second power amplifier 722, a first duplexer 731, a second duplexer 732, a coupler 741, a first switch 761, and a second switch 762. The wireless communication module 192 may further include the first switch 761, disposed between the coupler 741 and the second switch 762, and the second switch 762, disposed between the second antenna 752 and the second duplexer 732. The second switch 762 and the first switch 761 may be electrically connected, and the second switch 762 may switch a signal to the first switch 761. The processor 120 may perform the same operation as the processor 120 of FIG. 2, or may perform at least one operation or function thereof. The transceiver 710 may perform the same operation as the transceiver 210 of FIG. 2, or may perform at least one operation or function thereof. At least one of the first power amplifier 721 or the second power amplifier 722 may perform the same operation as the power amplifier 221 of FIG. 2, or may perform at least one operation or function thereof. At least one of the first duplexer 731 or the second duplexer 732 may perform the same operation as the duplexer 231 of FIG. 2, or may perform at least one operation or function thereof. The coupler 741 may perform the same operation as the coupler 241 of FIG. 2, or may perform at least one operation or function thereof. The first switch 761 may perform the same operation as the switch 561 of FIG. 5, or may perform at least one operation or function thereof. According to various embodiments, the first switch 761 may switch a signal transmitted between elements (for example, the coupler 741 and the second switch 762).
The first switch 761 may receive a signal from the coupler 741 or the second switch 762 and transmit the signal to the transceiver 710. According to an embodiment, the first switch 761 may receive a signal received from the first antenna 751 through the coupler 741 and transmit the received signal to the transceiver 710. According to another embodiment, the first switch 761 may receive a signal received from the second antenna 752 through the second switch 762 and transmit the received signal to the transceiver 710. The first switch 761 may transmit a signal received from at least one of the first antenna 751 or the second antenna 752 to a feedback port 711 of the transceiver 710. According to various embodiments, the first switch 761 may transmit signals branched by the coupler 741 from some of the transmission signals transmitted from the first duplexer 731 to the transceiver 710. According to an embodiment, the first switch 761 may switch some of the signals branched from the transmission signals by the coupler 741 and transmit the signals to the transceiver 710 (for example, the feedback port 711). For example, the branched signals may include a forward coupling signal. According to another embodiment, the first switch 761 may transmit a signal (for example, a reverse coupling signal), which is not radiated through the first antenna 751 but is reflected therefrom, to the transceiver 710. According to an embodiment, the first switch 761 may switch the signal that is not radiated through the first antenna 751 but is reflected therefrom and transmit the signal to the transceiver 710 (for example, the feedback port 711). According to various embodiments, the second switch 762 may switch the signal received from the second antenna 752 to one of the first switch 761 or the second duplexer 732. The received signal may include some of the signals radiated from the first antenna 751, which are received by the second antenna 752. According to an embodiment, some of the signals radiated through the first antenna 751 may be received through the second antenna 752. When some of the signals radiated through the first antenna 751 are received through the second antenna 752, the second switch 762 may switch some signals received through the second antenna 752 and transmit the signals to the second duplexer 732 or the first switch 761. The electronic device 101 (for example, the processor 120) may calculate a transfer coefficient on the basis of some signals input through the first switch 761. The electronic device 101 (for example, the processor 120) may calculate at least one of a type of an object adjacent to the electronic device 101 and a distance from the object based at least partially on the reflection coefficient or the transfer coefficient. The electronic device 101 (for example, the processor 120) may determine at least one of a type of an object adjacent to the electronic device 101 or a distance from the object based at least partially on the signals, which are branched through the coupler 741 and input into the feedback port 711 of the transceiver 710, and the signals, which are output through the first antenna 751 and received through the second antenna 752. According to an embodiment, the electronic device 101 (for example, the processor 120) may perform a predetermined function according to a value corresponding to the distance between the electronic device 101 and an external object based at least partially on the reflection coefficient and the transfer coefficient.
The electronic device101(for example, the processor120) may perform a predetermined function (or operation) as at least one of the type of the object adjacent to the electronic device101or the distance from the object is identified based at least partially on the reflection coefficient or the transfer coefficient. FIG.8is a block diagram of an electronic device performing a function on the basis of an external object according to various embodiments. More specifically,FIG.8is a third block diagram of the electronic device101for determining a type of an external object adjacent to the electronic device101or a distance from the external object based at least partially on a reflection coefficient or a transfer coefficient according to various embodiments. Referring toFIG.8, the electronic device101calculating the reflection coefficient and the transfer coefficient according to various embodiments may include the processor120, the wireless communication module192, a first antenna851, and a second antenna852. The wireless communication module192may include a transceiver810, a first power amplifier821, a second power amplifier822, a first duplexer831, a second duplexer832, a first coupler841, a second coupler842, a first switch861, a second switch862, and a third switch863. The first switch861may switch the second signal output through the first coupler841, which receives the reflected second signal; the second switch862may switch the third signal output through the second coupler842, which receives the received third signal; and the third switch863may switch the signals output from the first switch861and the second switch862to be transmitted to the transceiver810. The third switch863may include a Double Pole-Double Throw (DPDT) switch operating to receive the switched signals from the first switch861and the second switch862and to transmit the signals to the transceiver810, and the transceiver810may include two feedback ports for receiving the respective signals. The third switch863may include switching terminals corresponding in number to the number of antennas included in the electronic device. According to various embodiments, the processor120may perform the same operation as the processor120ofFIG.2, or may perform at least one operation or function. The transceiver810may perform the same operation as the transceiver210ofFIG.2, or may perform at least one operation or function. At least one of the first power amplifier821or the second power amplifier822may perform the same operation as the power amplifier221ofFIG.2, or may perform at least one operation or function. At least one of the first duplexer831or the second duplexer832may perform the same operation as the duplexer231ofFIG.2, or may perform at least one operation or function. At least one of the first coupler841or the second coupler842may perform the same operation as the coupler241ofFIG.2, or may perform at least one operation or function. According to various embodiments, the first switch861may switch signals branched through the first coupler841and transmit the signals to the third switch863. The first switch861may receive the signal from the first coupler841and transmit the signal to the third switch863. The third switch863may switch the signal received from the first switch861and transmit the signal to the transceiver810.
According to an embodiment, the first switch861may receive the signal received from the first antenna851through the first coupler841and transmit the signal to the transceiver810through the third switch863. The first switch861may transmit the signal received from the first coupler841to one of a first feedback port811or a second feedback port812of the transceiver810through the third switch863. According to various embodiments, the first switch861may transmit, to the transceiver810, the signals branched by the first coupler841from the transmission signals transmitted from the first duplexer831. For example, the branched signals may include a forward coupling signal. According to another embodiment, the first switch861may transmit a signal that is not radiated through the first antenna851but is reflected therefrom to the transceiver810. The electronic device101(for example, the processor120) may calculate a reflection coefficient on the basis of some of the transmission signals branched by the first coupler841. According to various embodiments, some of the signals output through the first antenna851may be received by the second antenna852. When some of the signals radiated through the first antenna851are received by the second coupler842via the second antenna852, the second switch862may switch the signals received through the second coupler842and transmit them to the third switch863. The electronic device101(for example, the processor120) may calculate a transfer coefficient on the basis of the signals input through the second antenna852, the second coupler842, the second switch862, and the third switch863. The electronic device101(for example, the processor120) may calculate at least one of a type of an object adjacent to the electronic device101or a distance from the object based at least partially on the reflection coefficient or the transfer coefficient. The electronic device101(for example, the processor120) may determine at least one of the type of the object adjacent to the electronic device101or the distance from the object based at least partially on the signals that are branched through the first coupler841and input into the first feedback port811of the transceiver810and the signals that are output through the first antenna851and received through the second antenna852. According to various embodiments, the third switch863may switch the signal switched through the first switch861and transmit the signal to the transceiver810. The third switch863may receive the signal output from the first coupler841through the first switch861and transmit the signal to the first feedback port811of the transceiver810. According to an embodiment, the third switch863may transmit the signal that is output through the first antenna851and received through the second antenna852, the second coupler842, and the second switch862to the second feedback port812of the transceiver810. The electronic device101(for example, the processor120) may determine at least one of the type of the object adjacent to the electronic device101or the distance from the object based at least partially on the signals that are branched through the first coupler841and input into the first feedback port811of the transceiver810and the signals that are output through the first antenna851and received through the second antenna852. The third switch863may include a Double Pole-Double Throw (DPDT) switch.
According to an embodiment, the electronic device101(for example, the processor120) may perform a predetermined function according to a value corresponding to the distance between the electronic device101and an external object based at least partially on the reflection coefficient and the transfer coefficient. The electronic device101(for example, the processor120) may perform a predetermined function (or operation) as at least one of the type of the object adjacent to the electronic device101or the distance from the object is identified based at least partially on the reflection coefficient or the transfer coefficient. FIG.9is a block diagram of an electronic device performing a function on the basis of an external object according to various embodiments. More specifically,FIG.9is a fourth block diagram of the electronic device101for determining a type of an external object located adjacent to the electronic device101or a distance from the external object based at least partially on a reflection coefficient or a transfer coefficient according to various embodiments. Referring toFIG.9, the electronic device101calculating a reflection coefficient and a transfer coefficient according to various embodiments may include the processor120, the wireless communication module192, a first antenna951, a second antenna952, and an Nth antenna953. The wireless communication module192may include a transceiver910, a first power amplifier921, a second power amplifier922, an Nth power amplifier923, a first duplexer931, a second duplexer932, an Nth duplexer933, a first coupler941, a second coupler942, an Nth coupler943, and a switch961. The processor120may perform the same operation as the processor120ofFIG.2, or may perform at least one operation or function. The transceiver910may perform the same operation as the transceiver210ofFIG.2, or may perform at least one operation or function. At least one of the first power amplifier921, the second power amplifier922, or the Nth power amplifier923may perform the same operation as the power amplifier221ofFIG.2, or may perform at least one operation or function. At least one of the first duplexer931, the second duplexer932, or the Nth duplexer933may perform the same operation as the duplexer231ofFIG.2, or may perform at least one operation or function. At least one of the first coupler941, the second coupler942, or the Nth coupler943may perform the same operation as the coupler241ofFIG.2, or may perform at least one operation or function. The switch961may perform the same operation as the switch541ofFIG.5, or may perform at least one operation or function. As illustrated inFIG.9, the electronic device101according to various embodiments may include a plurality of power amplifiers921,922, and923, a plurality of duplexers931,932, and933, and a plurality of couplers941,942, and943. Each of the power amplifiers, each of the duplexers, and each of the couplers may perform the operations performed by the power amplifier221, the duplexer231, and the coupler241ofFIG.2, respectively. According to various embodiments, some of the signals radiated through the first antenna951may be received by at least one of the second antenna952or the Nth antenna953.
The electronic device101(for example, the processor120) may determine at least one of a type of an object adjacent to the electronic device101or a distance from the object based at least partially on the signals that are not radiated through the first antenna951but are reflected therefrom and input into a feedback port911of the transceiver910and the signals that are output through the first antenna951and received through at least one of the second antenna952or the Nth antenna953. According to an embodiment, the electronic device101(for example, the processor120) may perform a predetermined function according to a value corresponding to the distance between the electronic device101and an external object based at least partially on the reflection coefficient and the transfer coefficient. The electronic device101(for example, the processor120) may perform a predetermined function (or operation) as at least one of the type of the object adjacent to the electronic device101or the distance from the object is identified based at least partially on the reflection coefficient or the transfer coefficient. According to various embodiments, the electronic device may include: a first antenna and a second antenna; a communication circuit including a coupler; and a processor electrically connected to the first and second antennas and the communication circuit, wherein the processor may be configured to control the communication circuit to output a first signal through the first antenna and to acquire a second signal, obtained by reflection of the first signal through the first antenna, and a third signal, obtained by reception, through the second antenna, of the first signal output through the first antenna, identify a reflection coefficient obtained by reflection of the first signal from the first antenna and a transfer coefficient obtained by transmission of the first signal to the second antenna, based at least partially on the second signal and the third signal, and perform a predetermined function according to a value corresponding to a distance between the electronic device and an external object, based at least partially on the reflection coefficient and the transfer coefficient. According to an embodiment, the processor may be configured to perform an operation of adjusting the transmission power used for communication to a predetermined magnitude through the communication circuit as at least a part of the predetermined function. According to an embodiment, the electronic device may further include a memory, and the processor may be configured to identify a value corresponding to the distance, based on a lookup table stored in the memory. According to an embodiment, the processor may be configured to identify at least one of the type of the external object or the distance from the external object, based at least partially on the reflection coefficient and the transfer coefficient. According to an embodiment, the processor may be configured to store, in the lookup table, at least one of the type of the external object or the distance from the external object, identified based at least partially on the reflection coefficient and the transfer coefficient. According to an embodiment, the processor may be configured to calculate the reflection coefficient, based on a voltage of the second signal and a voltage of the first signal, and calculate the transfer coefficient, based on a voltage of the third signal and the voltage of the first signal.
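As a concrete illustration of the lookup-table flow described above, the sketch below maps measured coefficient magnitudes to a stored type/distance entry and backs off the transmission power when an object is identified as nearby. The table contents, thresholds, power levels, and function names are illustrative assumptions only, not values from the disclosure.

```python
# A minimal sketch, assuming the lookup table maps quantized
# (|reflection|, |transfer|) pairs to (object type, distance in mm).
# All values below are illustrative.
LOOKUP_TABLE = {
    (0.8, 0.1): ("iron plate", 1),
    (0.5, 0.3): ("iron plate", 5),
    (0.2, 0.6): ("free space", None),
}

def nearest_entry(refl_mag: float, trans_mag: float):
    """Return the table entry whose key is closest to the measured pair."""
    key = min(LOOKUP_TABLE,
              key=lambda k: (k[0] - refl_mag) ** 2 + (k[1] - trans_mag) ** 2)
    return LOOKUP_TABLE[key]

def perform_predetermined_function(refl_mag: float, trans_mag: float,
                                   set_tx_power_dbm) -> None:
    """Adjust transmission power to a predetermined magnitude when the
    identified distance indicates a nearby external object."""
    obj_type, distance_mm = nearest_entry(refl_mag, trans_mag)
    if distance_mm is not None and distance_mm <= 5:
        set_tx_power_dbm(10.0)   # assumed back-off level
    else:
        set_tx_power_dbm(23.0)   # assumed nominal level

perform_predetermined_function(0.75, 0.12,
                               lambda p: print(f"TX power -> {p} dBm"))
```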
According to an embodiment, the communication circuit may be configured to include a transceiver configured to generate at least one signal, at least one amplifier configured to amplify the generated signal, at least one duplexer configured to distinguish the amplified signals according to communication bands thereof, and at least one coupler configured to radiate the distinguished signals through at least one of the first antenna or the second antenna. According to an embodiment, the communication circuit may further include a switch, which is disposed between the first coupler receiving the reflected second signal and the second coupler receiving the third signal and is configured to switch at least one of the received second signal or the third signal to be transmitted to the transceiver. According to an embodiment, the communication circuit may further include a first switch configured to switch the third signal and a second switch, which is disposed between the first coupler receiving the reflected second signal and the first switch and is configured to switch at least one of the second signal or the third signal to be transmitted to the transceiver. According to an embodiment, the communication circuit may further include a first switch, configured to switch the second signal output through the first coupler receiving the reflected second signal, a second switch, configured to switch the third signal output through the second coupler receiving the received third signal, and a third switch, configured to switch signals output from the first switch and the second switch to be transmitted to the transceiver. According to an embodiment, the transceiver may include a first port configured to acquire the reflected second signal and a second port configured to acquire the received third signal, and the third switch may include a Double Pole-Double Throw (DPDT) switch configured to operate to receive the signal output from the first switch and transmit the signal to the first port and to receive the signal output from the second switch and transmit the signal to the second port. According to an embodiment, the third switch may include switching terminals corresponding in number to the number of antennas included in the electronic device. According to various embodiments, the electronic device may include: a first antenna and a second antenna; a communication circuit including a coupler; and a processor electrically connected to the first and second antennas and the communication circuit, wherein the processor may be configured to output a first signal through the first antenna using the coupler, acquire, based on the output first signal, a second signal reflected through the first antenna and a third signal obtained by reception, through the second antenna, of the first signal output through the first antenna, identify a reflection coefficient, obtained by reflection of the first signal from the first antenna, and a transfer coefficient, obtained by transmission of the first signal to the second antenna, based at least partially on the second signal and the third signal, and identify at least one of a type of an external object or a distance from the external object, based at least partially on the reflection coefficient and the transfer coefficient.
According to an embodiment, the processor may be configured to store, in a lookup table, at least one of the type of the external object or the distance from the external object, identified based at least partially on the reflection coefficient and the transfer coefficient. According to an embodiment, the wireless communication module may include a transceiver configured to generate at least one signal, at least one amplifier configured to amplify the generated signal, at least one duplexer configured to distinguish the amplified signals according to communication bands thereof, and at least one coupler configured to radiate the distinguished signals through at least one of the first antenna or the second antenna. According to an embodiment, the wireless communication module may further include a switch, which is disposed between the first coupler receiving the reflected second signal and the second coupler receiving the received third signal and is configured to switch at least one of the received second signal or the third signal to be transmitted to the transceiver. According to an embodiment, the wireless communication module may further include a first switch, configured to switch the third signal, and a second switch, which is disposed between the first coupler receiving the reflected second signal and the first switch and is configured to switch at least one of the received second signal or the third signal to be transmitted to the transceiver. According to an embodiment, the wireless communication module may further include a first switch, configured to switch the second signal output through the first coupler receiving the reflected second signal, a second switch, configured to switch the third signal output through the second coupler receiving the received third signal, and a third switch, configured to switch signals output from the first switch and the second switch to be transmitted to the transceiver. According to an embodiment, the transceiver may include a first port, configured to acquire the reflected second signal, and a second port, configured to acquire the received third signal, and the third switch may include a Double Pole-Double Throw (DPDT) switch, configured to operate to receive the signal output from the first switch and transmit the signal to the first port and to receive the signal output from the second switch and transmit the signal to the second port. FIG.10illustrates data for measuring a type of an adjacent object and a distance from the object on the basis of a voltage of a signal reflected through a first antenna compared to a voltage of a signal input into the first antenna of the electronic device according to various embodiments, andFIG.11illustrates data for measuring a type of an adjacent object and a distance from the object on the basis of a voltage of a signal received through a second antenna, among signals output through the first antenna, compared to a voltage of a signal input into the first antenna of the electronic device according to various embodiments. Referring toFIG.10, it may be noted that the result of Inverse Synthetic Aperture Radar (ISAR) measurements over 0 to 10 mm for a free space and an iron plate indicates a distinguishable response only for distances from the iron plate of about 0 to 5 mm on the basis of, for example, a voltage (for example, S11) of a signal reflected through the first antenna compared to a voltage of a signal input into the first antenna.
The type of the object adjacent to the electronic device101and the distance from the object may be determined through identification of a reflection coefficient based on the voltage (for example, S11) of the signal reflected through the first antenna compared to the voltage of the signal input into the first antenna. For example, when the distance between the electronic device101and the iron plate is 5 mm or more, it may not be easy to distinguish the iron plate from the free space. In this case, a better result may be obtained by identifying a transfer coefficient on the basis of the signal received through the second antenna among the signals output through the first antenna. Referring toFIG.11, the type of the object adjacent to the electronic device101and the distance from the object may be determined through identification of the transfer coefficient based on the voltage (for example, S21) of the signal received through the second antenna, among the signals output through the first antenna, compared to the voltage of the signal input into the first antenna. For example, it may be noted that the result of the Inverse Synthetic Aperture Radar (ISAR) measurements over 0 to 10 mm for the free space and the iron plate is better than that ofFIG.10. In order to acquire the transfer coefficient between the first antenna and the second antenna, the electronic device101according to various embodiments may be designed according to the various block diagrams illustrated inFIG.5andFIGS.7to9. Further, the electronic device101may identify the type of the object and the distance from the object more clearly inFIG.11than inFIG.10by obtaining an I value and a Q value corresponding to the voltage (for example, S21) of the signal received through the second antenna, among the signals output through the first antenna.
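The measurement behavior described forFIG.10andFIG.11suggests a simple decision rule: rely on the reflection measurement (S11) at very short range and fall back to the transfer measurement (S21) beyond roughly 5 mm, where S11 no longer separates an iron plate from free space. The sketch below illustrates such a rule using I/Q samples; the thresholds, the direction of the S21 comparison, and the helper names are assumptions for illustration, not from the disclosure.

```python
import math

def magnitude_from_iq(i: float, q: float) -> float:
    """Magnitude of a coefficient from its I and Q components."""
    return math.hypot(i, q)

def classify(s11_iq: tuple, s21_iq: tuple,
             s11_threshold: float = 0.6, s21_threshold: float = 0.25) -> str:
    """Decide between 'iron plate (near)', 'iron plate (far)', and
    'free space' using S11 first and S21 as the longer-range fallback.
    Threshold values and the assumption that the plate weakens the
    antenna-to-antenna coupling are illustrative."""
    s11 = magnitude_from_iq(*s11_iq)
    s21 = magnitude_from_iq(*s21_iq)
    if s11 > s11_threshold:    # strong reflection: object within ~5 mm
        return "iron plate (near)"
    if s21 < s21_threshold:    # assumed: plate suppresses coupling
        return "iron plate (far)"
    return "free space"

print(classify((0.7, 0.1), (0.1, 0.05)))
```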
11863241
DETAILED DESCRIPTION Massive MIMO is an extended version of MU-MIMO where the number of base station antennas and the number of users are large. However, the number of base station antennas is excessively large compared to the number of users served. Having a large number of antennas at the base station provides high spatial degrees of freedom that can be exploited to generate highly focused radiation towards the user equipment. This also allows the base stations to spatially multiplex data streams to multiple devices (MU-MIMO) using the same time-frequency resource. The prior art focuses on the UE as the DUT (device under test), and only on the antenna aspects. In our case the DUT is the combination of the following components of the gNB: baseband processing (channel estimation, precoding, angle-of-arrival calculation), ADCs, amplifiers, and antenna array elements. The prior art does not consider the channel learning capabilities of the DUT. The quality of beamforming heavily depends on how well the gNB derives the channel information H from the uplink pilot or reference signal; based on H, it performs the beamforming operation in the downlink. So we are covering the complete UL and DL loop. The prior art is very generic and does not cater to the 5G-NR specifications. In actual systems, performance varies according to how well the gNB reads the channel, which depends on the channel estimation algorithm and on how well the phase shifters are aligned. Each antenna element has its own phase shifter, and each must be aligned. Since we send a signal out over the air, the gNB is enabled to learn the channel and figure out how to steer the beam, for beamforming purposes. This enables us to test the gNB's channel learning performance, including actual performance and not simulations. In addition, automation capabilities are used to ease the design and verification process. For example, every time an algorithm or any antenna parameter is changed, a run is triggered which gives us the OTA performance report. Previously we estimated only the characteristics of the channel. However, when the number of antennas increases, the antennas themselves must also be characterized. So, in the uplink, using the UE emulators, we are feeding the reference signals, then estimating the channel, and then using the estimated channel we are doing the precoding in the downlink. The prior art tries to do testing of beamforming with RF chambers, e.g., choose an angle and direct some beam toward it and see how much power is received and at what angle. However, this is deficient because it doesn't determine what the gNB is learning about the channel, and how well it is reading from the signal. By sending a signal over the air, the gNB can learn and figure out how to steer the beam. For massive MIMO, it is important to include the antenna arrays in the design stages of the beamforming module. This will give us insights about how certain aspects of antenna array design can impact the overall beamforming capabilities. The invention differs from the prior art in the following respects. First, the prior art focuses on the UE as the DUT (device under test), and only on the antenna aspects. In our case the DUT is the combination of the following components of the gNB: baseband processing (channel estimation, precoding, angle-of-arrival calculation); ADCs, amplifiers, and antenna array elements. Also, the prior art does not consider the channel learning capabilities of the DUT.
The quality of beamforming heavily depends on how well the gNB derives the channel information H from the uplink pilot or reference signal, and based on H, it performs the beamforming operation in the downlink. So we are covering the complete UL and DL loop. Also, the prior art is very generic and does not cater to the 5G-NR specifications. In some embodiments, automation capabilities are used to ease the design and verification process. For example, every time we change an algorithm or any antenna parameter, we can trigger the run which gives us the OTA performance report. FIG.1shows a system100including a gNB101with M antenna elements serving K users. h_ij represents the channel experienced between the i-th antenna in the gNB and the j-th user. For a Massive MIMO system using OFDM modulation the channel is modeled with a matrix of dimension N×K as shown inFIG.2. Each element in the matrix represents the channel frequency response between a gNB antenna and a user for a given subcarrier.FIG.2shows the channel state information200to be learned by the gNB in each subcarrier. Precoding: FromFIG.1it is clear that each user receives unwanted signals (data meant for other users) causing interference, which makes it impossible for the user to detect the data directed towards it. Precoding is a technique employed at the gNB transmitter so that the interference experienced by each user is either minimal or zero and each user can properly detect the intended signal. Precoding is performed by pre-multiplying the K×1 vector of user data symbols with a precoding matrix of dimension M×K to obtain the M×1 transmit vector fed to the antennas. In order to compute the precoder matrix the gNB should have knowledge of the channel matrix H, which it estimates using the pilots sent in the uplink by each user. Time Division Duplexing (TDD) is the preferred mode of operation for massive MIMO as it allows the reciprocity of the channel to be used in both uplink and downlink and thereby eliminates the pilot overhead in the downlink. Massive MU-MIMO beamforming capabilities play a crucial role in the overall performance of the gNB. However, testing of the gNB beamforming mechanism opens new challenges. The traditional approach of cable-based testing methodologies is not applicable for massive MIMO for the reasons mentioned below, and Over-the-Air (OTA) testing methodologies need to be applied. Measurement of end-to-end performance of the beamforming mechanism is not possible by merely taking measurements at individual antennas, as the antenna array elements used for massive MIMO are active in nature. New antenna array designs are highly integrated with other active components like amplifiers, and probing the antenna signals is not possible. Also, when large antenna arrays are involved, it is not effective to design or characterize the baseband separately without considering the antenna array characteristics. The baseband that is designed based on the channel simulations is unlikely to provide the same performance when integrated with the arrays. So, it is important to involve the antenna arrays during the initial stages of baseband design. An OTA testing methodology is disclosed that helps to verify the design and also evaluate the performance of the beamforming capabilities of the gNB. The proposed test setup will reduce the iterations between baseband design changes and testing as it considers the antenna arrays.
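The patent does not name a specific precoder; as one common choice consistent with the uplink-pilot-then-downlink-precoding loop described above, the following sketch estimates H by least squares from orthogonal uplink pilots and then forms a zero-forcing precoder. NumPy usage, the dimensions, the noise level, and the variable names are illustrative assumptions; power normalization is omitted for clarity.

```python
import numpy as np

M, K = 64, 4          # gNB antennas, single-antenna users (illustrative)
rng = np.random.default_rng(0)

# True channel per subcarrier (M x K), unknown to the gNB.
H = (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))) / np.sqrt(2)

# Uplink: users send orthonormal pilots P (K x K); gNB receives Y = H P + noise.
P = np.eye(K)
noise = 0.01 * (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K)))
Y = H @ P + noise

# Least-squares channel estimate (orthonormal pilots => H_hat = Y P^H).
H_hat = Y @ P.conj().T

# Zero-forcing precoder (M x K): with TDD reciprocity the downlink channel
# is H^T, so we pick W such that H^T W ~= I and each user sees only its data.
W = np.linalg.pinv(H_hat.T)

# Downlink: transmit x = W s for the K user symbols s.
s = (np.array([1, -1, 1, 1]) + 1j * np.array([1, 1, -1, 1])) / np.sqrt(2)
y = H.T @ (W @ s)     # what the K users receive (noise ignored)
print(np.round(y, 2)) # approximately the intended symbols s
```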
It also helps to get insights about the power consumption and the amount of imperfections added as the number of antennas is varied, so that important design trade-offs can be made. The setup is made by isolating only the components contributing to the beamforming. FIG.3Ais a diagram of a portion of a test setup300including a DUT301. The DUT301includes a digital baseband section and an analog RF section. FIG.3Bis a diagram of an OTA test setup302. The OTA setup includes a device under test (DUT)301. Also shown are UE U, UEs S, and UEs V. UE U is used for assessing the performance of the beam directed towards it, UEs V are victim UEs, and UEs S increase the dimension of a matrix H for a given test. FIG.4is a diagram of a UE simulator board400. In some embodiments, baseband processing is performed in a DSP or FPGA and is coupled to the antenna arrays. A small circuit board may be used that integrates these two components, forming a DUT. In some embodiments a UE simulator is not part of the DUT. Since this doesn't need to be in an RF chamber, this results in a more real-world result. Various papers describe different types of scatterers, and these scatterers can be added to the system to test different scenarios. The antenna array elements used herein may each include a phase shifter, low noise amplifier (LNA), etc. We can additionally measure how much power is drawn and what the minimum number of elements is for a certain level of performance. The device to be tested includes all the components contributing to beamforming. The antenna array system contains N active antenna array elements along with a calibration mechanism to maintain UL DL reciprocity. The module 'Precoder' includes both digital precoding and direct weighting of the antenna elements to form a directed beam. The host PC does the scheduling of uplink transmissions from each of the UE boards and then collects the IQ samples from the UE boards to do performance analysis as shown inFIG.5. The frame structure mentioned in 3GPP 38.211 is followed. Any of the TDD UL-DL configurations (referFIG.6) can be used. The UE simulator boards as shown inFIG.4are used to simulate the UE functionality required to test the beamforming capability of the gNB. It is a simple and cost-effective module which can be constructed using off-the-shelf components. An optional power amplifier can be used for testing at long distances from the gNB. UE U is used for assessing the performance of the beam directed towards it. UEs Vx are the victim UEs that are used to evaluate the deviation of the intended beam from U. These UEs are placed at horizontal angles of θx and vertical angles φx relative to U. UEs Sx only serve to increase the dimension of the matrix H for a given test. Larger dimensions of H lead to more challenges in beamforming implementation. The gNB and all the UE boards are synchronized using a master clock. The steps involved in the testing are as follows:
a. During the UL subframes, only the UEs U and Sx transmit the OFDM symbols containing UL DMRS (Demodulation Reference Signal) that are orthogonal to each other as mentioned in 38.211. UEs Vx are victim UEs and they do not transmit in the uplink.
b. In the same UL subframe, the gNB performs the channel estimation, precoding matrix computation, and antenna element weighting coefficient computation.
c. In the subsequent DL subframe, the gNB transmits known data streams to UEs U and Sx using the massive MU-MIMO beamforming coefficients computed in step b.
d. In the same subframe as step c, the host PC collects the IQ samples from the UE U and all the victim UEs Vx and evaluates the performance (referFIG.5) of the beam that was meant for UE U.
Performance Metric: The EVM (error vector magnitude) of the demodulated QAM symbols from the UEs U and Vx is used to evaluate the 3-D beamforming performance. The thermal noise variances σ² from U and Vx are used to calibrate the values of EVM. Similar performance analysis can be done for UL data detection. FIG.5is a diagram of a circuit500for performing beamforming performance evaluation. The host PC does the scheduling of uplink transmissions from each of the UE boards and then collects the IQ samples from the UE boards to do performance analysis as shown inFIG.5. FIGS.6A and6Bare a diagram of a single TDD UL DL configuration (U: uplink, D: downlink, X: flexible). FIG.7is a schematic network architecture diagram for 3G and other-G prior art networks. The diagram shows a plurality of "Gs," including 2G, 3G, 4G, 5G and Wi-Fi. 2G is represented by GERAN701, which includes a 2G device701a, BTS701b, and BSC701c. 3G is represented by UTRAN702, which includes a 3G UE702a, nodeB702b, RNC702c, and femto gateway (FGW, which in 3GPP namespace is also known as a Home nodeB Gateway or HNBGW)702d. 4G is represented by EUTRAN or E-RAN703, which includes an LTE UE703aand LTE eNodeB703b. Wi-Fi is represented by Wi-Fi access network704, which includes a trusted Wi-Fi access point704cand an untrusted Wi-Fi access point704d. The Wi-Fi devices704aand704bmay access either AP704cor704d. In the current network architecture, each "G" has a core network. 2G circuit core network705includes a 2G MSC/VLR; 2G/3G packet core network706includes an SGSN/GGSN (for EDGE or UMTS packet traffic); 3G circuit core707includes a 3G MSC/VLR; 4G circuit core708includes an evolved packet core (EPC); and in some embodiments the Wi-Fi access network may be connected via an ePDG/TTG using S2a/S2b. Each of these nodes are connected via a number of different protocols and interfaces, as shown, to other, non-"G"-specific network nodes, such as the SCP730, the SMSC731, PCRF732, HLR/HSS733, Authentication, Authorization, and Accounting server (AAA)734, and IP Multimedia Subsystem (IMS)735. An HeMS/AAA736is present in some cases for use by the 3G UTRAN. The diagram is used to indicate schematically the basic functions of each network as known to one of skill in the art, and is not intended to be exhaustive. For example, 5G core717is shown using a single interface to 5G access716, although in some cases 5G access can be supported using dual connectivity or via a non-standalone deployment architecture. Noteworthy is that the RANs701,702,703,704and736rely on specialized core networks705,706,707,708,709,737but share essential management databases730,731,732,733,734,735,738. More specifically, for the 2G GERAN, a BSC701cis required for Abis compatibility with BTS701b, while for the 3G UTRAN, an RNC702cis required for Iub compatibility and an FGW702dis required for Iuh compatibility. These core network functions are separate because each RAT uses different methods and techniques. On the right side of the diagram are disparate functions that are shared by each of the separate RAT core networks. These shared functions include, e.g., PCRF policy functions, AAA authentication functions, and the like. Letters on the lines indicate well-defined interfaces and protocols for communication between the identified nodes.
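Since EVM is the stated performance metric, the following is a minimal sketch of how the host PC might compute it from collected IQ samples against the known reference QAM symbols. The exact noise calibration used in the patent is not specified, so subtracting the thermal noise variance from the raw error power here is an assumption, as are the function names and test values.

```python
import numpy as np

def evm_percent(rx_symbols: np.ndarray, ref_symbols: np.ndarray,
                noise_var: float = 0.0) -> float:
    """EVM (%) of demodulated QAM symbols against known references.
    noise_var is the thermal noise variance used for calibration;
    subtracting it from the raw error power is an illustrative choice."""
    err_power = np.mean(np.abs(rx_symbols - ref_symbols) ** 2)
    ref_power = np.mean(np.abs(ref_symbols) ** 2)
    calibrated = max(err_power - noise_var, 0.0)
    return 100.0 * np.sqrt(calibrated / ref_power)

# Example: QPSK references with a small gaussian impairment.
rng = np.random.default_rng(1)
ref = (rng.choice([-1, 1], 128) + 1j * rng.choice([-1, 1], 128)) / np.sqrt(2)
rx = ref + 0.05 * (rng.standard_normal(128) + 1j * rng.standard_normal(128))
print(f"EVM: {evm_percent(rx, ref, noise_var=0.005):.2f} %")
```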
FIG.8is an enhanced base station for performing the methods described herein, in accordance with some embodiments. Base station800may include processor802, processor memory804in communication with the processor, baseband processor806, and baseband processor memory808in communication with the baseband processor. Mesh network node800may also include first radio transceiver812and second radio transceiver814, internal universal serial bus (USB) port816, and subscriber information module card (SIM card)818coupled to USB port816. In some embodiments, the second radio transceiver814itself may be coupled to USB port816, and communications from the baseband processor may be passed through USB port816. The second radio transceiver may be used for wirelessly backhauling eNodeB800. Processor802and baseband processor806are in communication with one another. Processor802may perform routing functions, and may determine if/when a switch in network configuration is needed. Baseband processor806may generate and receive radio signals for both radio transceivers812and814, based on instructions from processor802. In some embodiments, processors802and806may be on the same physical logic board. In other embodiments, they may be on separate logic boards. Processor802may identify the appropriate network configuration, and may perform routing of packets from one network interface to another accordingly. Processor802may use memory804, in particular to store a routing table to be used for routing packets. Baseband processor806may perform operations to generate the radio frequency signals for transmission or retransmission by both transceivers812and814. Baseband processor806may also perform operations to decode signals received by transceivers812and814. Baseband processor806may use memory808to perform these tasks. The first radio transceiver812may be a radio transceiver capable of providing LTE eNodeB functionality, and may be capable of higher power and multi-channel OFDMA. The second radio transceiver814may be a radio transceiver capable of providing LTE UE functionality. Both transceivers812and814may be capable of receiving and transmitting on one or more LTE bands. In some embodiments, either or both of transceivers812and814may be capable of providing both LTE eNodeB and LTE UE functionality. Transceiver812may be coupled to processor802via a Peripheral Component Interconnect-Express (PCI-E) bus, and/or via a daughtercard. As transceiver814is for providing LTE UE functionality, in effect emulating a user equipment, it may be connected via the same or different PCI-E bus, or by a USB bus, and may also be coupled to SIM card818. First transceiver812may be coupled to first radio frequency (RF) chain (filter, amplifier, antenna)822, and second transceiver814may be coupled to second RF chain (filter, amplifier, antenna)824. SIM card818may provide information required for authenticating the simulated UE to the evolved packet core (EPC). When no access to an operator EPC is available, a local EPC may be used, or another local EPC on the network may be used. This information may be stored within the SIM card, and may include one or more of an international mobile equipment identity (IMEI), international mobile subscriber identity (IMSI), or other parameter needed to identify a UE. Special parameters may also be stored in the SIM card or provided by the processor during processing to identify to a target eNodeB that device800is not an ordinary UE but instead is a special UE for providing backhaul to device800.
Wired backhaul or wireless backhaul may be used. Wired backhaul may be an Ethernet-based backhaul (including Gigabit Ethernet), or a fiber-optic backhaul connection, or a cable-based backhaul connection, in some embodiments. Additionally, wireless backhaul may be provided in addition to wireless transceivers812and814, which may be 3G, 4G, 5G, Wi-Fi 802.11a/b/g/n/ac/ad/ah, Bluetooth, ZigBee, microwave (including line-of-sight microwave), or another wireless backhaul connection. Any of the wired and wireless connections described herein may be used flexibly for either access (providing a network connection to UEs) or backhaul (providing a mesh link or providing a link to a gateway or core network), according to identified network conditions and needs, and may be under the control of processor802for reconfiguration. A GPS module830may also be included, and may be in communication with a GPS antenna832for providing GPS coordinates, as described herein. When mounted in a vehicle, the GPS antenna may be located on the exterior of the vehicle pointing upward, for receiving signals from overhead without being blocked by the bulk of the vehicle or the skin of the vehicle. Automatic neighbor relations (ANR) module832may also be present and may run on processor802or on another processor, or may be located within another device, according to the methods and procedures described herein. Other elements and/or modules may also be included, such as a home eNodeB, a local gateway (LGW), a self-organizing network (SON) module, or another module. Additional radio amplifiers, radio transceivers and/or wired network connections may also be included. FIG.9is a coordinating server for providing services and performing methods as described herein, in accordance with some embodiments. Coordinating server900includes processor902and memory904, which are configured to provide the functions described herein. Also present are radio access network coordination/routing (RAN Coordination and routing) module906, including ANR module906a, RAN configuration module908, and RAN proxying module910. The ANR module906amay perform the ANR tracking, PCI disambiguation, ECGI requesting, and GPS coalescing and tracking as described herein, in coordination with RAN coordination module906(e.g., for requesting ECGIs, etc.). In some embodiments, coordinating server900may coordinate multiple RANs using coordination module906. In some embodiments, coordination server may also provide proxying, routing virtualization and RAN virtualization, via modules910and908. In some embodiments, a downstream network interface912is provided for interfacing with the RANs, which may be a radio interface (e.g., LTE), and an upstream network interface914is provided for interfacing with the core network, which may be either a radio interface (e.g., LTE) or a wired interface (e.g., Ethernet). Coordinator900includes local evolved packet core (EPC) module920, for authenticating users, storing and caching priority profile information, and performing other EPC-dependent functions when no backhaul link is available. Local EPC920may include local HSS922, local MME924, local SGW926, and local PGW928, as well as other modules. Local EPC920may incorporate these modules as software modules, processes, or containers. Local EPC920may alternatively incorporate these modules as a small number of monolithic software processes. Modules906,908,910and local EPC920may each run on processor902or on another processor, or may be located within another device.
The protocols described herein have largely been adopted by the 3GPP as a standard for the upcoming 5G network technology as well, in particular for interfacing with 4G/LTE technology. For example, X2 is used in both 4G and 5G and is also complemented by 5G-specific standard protocols called Xn. Additionally, the 5G standard includes two phases, non-standalone (which will coexist with 4G devices and networks) and standalone, and also includes specifications for dual connectivity of UEs to both LTE and NR ("New Radio") 5G radio access networks. The inter-base station protocol between an LTE eNB and a 5G gNB is called Xx. The specifications of the Xn and Xx protocol are understood to be known to those of skill in the art and are hereby incorporated by reference dated as of the priority date of this application. In some embodiments, several nodes in the 4G/LTE Evolved Packet Core (EPC), including the mobility management entity (MME) and the MME/serving gateway (S-GW), are located in a core network. Where shown in the present disclosure it is understood that an MME/S-GW is representing any combination of nodes in a core network, of whatever generation technology, as appropriate. The present disclosure contemplates a gateway node, variously described as a gateway, HetNet Gateway, multi-RAT gateway, LTE Access Controller, radio access network controller, aggregating gateway, cloud coordination server, coordinating gateway, or coordination cloud, in a gateway role and position between one or more core networks (including multiple operator core networks and core networks of heterogeneous RATs) and the radio access network (RAN). This gateway node may also provide a gateway role for the X2 protocol or other protocols among a series of base stations. The gateway node may also be a security gateway, for example, a TWAG or ePDG. The RAN shown is for use at least with an evolved universal mobile telecommunications system terrestrial radio access network (E-UTRAN) for 4G/LTE, and for 5G, and with any other combination of RATs, and is shown with multiple included base stations, which may be eNBs or may include regular eNBs, femto cells, small cells, virtual cells, virtualized cells (i.e., real cells behind a virtualization gateway), or other cellular base stations, including 3G base stations and 5G base stations (gNBs), or base stations that provide multi-RAT access in a single device, depending on context. In the present disclosure, the words "eNB," "eNodeB," and "gNodeB" are used to refer to a cellular base station. However, one of skill in the art would appreciate that it would be possible to provide the same functionality and services to other types of base stations, as well as any equivalents, such as Home eNodeBs. In some cases Wi-Fi may be provided as a RAT, either on its own or as a component of a cellular access network via a trusted wireless access gateway (TWAG), evolved packet data network gateway (ePDG) or other gateway, which may be the same as the coordinating gateway described hereinabove. The word "X2" herein may be understood to include X2 or also Xn or Xx, as appropriate. The gateway described herein is understood to be able to be used as a proxy, gateway, B2BUA, interworking node, interoperability node, etc. as described herein for and between X2, Xn, and/or Xx, as appropriate, as well as for any other protocol and/or any other communications between an LTE eNB and a 5G gNB (either NR, standalone or non-standalone).
The gateway described herein is understood to be suitable for providing a stateful proxy that models capabilities of dual connectivity-capable handsets for when such handsets are connected to any combination of eNBs and gNBs. The gateway described herein may perform stateful interworking for master cell group (MCG), secondary cell group (SCG), other dual-connectivity scenarios, or single-connectivity scenarios. In some embodiments, the base stations described herein may be compatible with a Long Term Evolution (LTE) radio transmission protocol, or another air interface. The LTE-compatible base stations may be eNodeBs, or may be gNodeBs, or may be hybrid base stations supporting multiple technologies and may have integration across multiple cellular network generations such as steering, memory sharing, data structure sharing, shared connections to core network nodes, etc. In addition to supporting the LTE protocol, the base stations may also support other air interfaces, such as UMTS/HSPA, CDMA/CDMA2000, GSM/EDGE, GPRS, EVDO, other 3G/2G, legacy TDD, 5G, or other air interfaces used for mobile telephony. In some embodiments, the base stations described herein may support Wi-Fi air interfaces, which may include one of 802.11a/b/g/n/ac/ad/af/ah. In some embodiments, the base stations described herein may support 802.16 (WiMAX), or other air interfaces. In some embodiments, the base stations described herein may provide access to land mobile radio (LMR)-associated radio frequency bands. In some embodiments, the base stations described herein may also support more than one of the above radio frequency protocols, and may also support transmit power adjustments for some or all of the radio frequency protocols supported. In any of the scenarios described herein, where processing may be performed at the cell, the processing may also be performed in coordination with a cloud coordination server. A mesh node may be an eNodeB. An eNodeB may be in communication with the cloud coordination server via an X2 protocol connection, or another connection. The eNodeB may perform inter-cell coordination via the cloud communication server when other cells are in communication with the cloud coordination server. The eNodeB may communicate with the cloud coordination server to determine whether the UE has the ability to support a handover to Wi-Fi, e.g., in a heterogeneous network. Although the methods above are described as separate embodiments, one of skill in the art would understand that it would be possible and desirable to combine several of the above methods into a single embodiment, or to combine disparate methods into a single embodiment. For example, all of the above methods could be combined. In the scenarios where multiple embodiments are described, the methods could be combined in sequential order, or in various orders as necessary. Although the above systems and methods for providing interference mitigation are described in reference to the Long Term Evolution (LTE) standard, one of skill in the art would understand that these systems and methods could be adapted for use with other wireless standards or versions thereof. The inventors have understood and appreciated that the present disclosure could be used in conjunction with various network architectures and technologies. Wherever a 4G technology is described, the inventors have understood that other RATs have similar equivalents, such as a gNodeB for 5G equivalent of eNB. Wherever an MME is described, the MME could be a 3G RNC or a 5G AMF/SMF. 
Additionally, wherever an MME is described, any other node in the core network could be managed in much the same way or in an equivalent or analogous way, for example, multiple connections to 4G EPC PGWs or SGWs, or any other node for any other RAT, could be periodically evaluated for health and otherwise monitored, and the other aspects of the present disclosure could be made to apply, in a way that would be understood by one having skill in the art. Additionally, the inventors have understood and appreciated that it is advantageous to perform certain functions at a coordination server, such as the Parallel Wireless HetNet Gateway, which performs virtualization of the RAN towards the core and vice versa, so that the core functions may be statefully proxied through the coordination server to enable the RAN to have reduced complexity. Therefore, at least four scenarios are described: (1) the selection of an MME or core node at the base station; (2) the selection of an MME or core node at a coordinating server such as a virtual radio network controller gateway (VRNCGW); (3) the selection of an MME or core node at the base station that is connected to a 5G-capable core network (either a 5G core network in a 5G standalone configuration, or a 4G core network in 5G non-standalone configuration); (4) the selection of an MME or core node at a coordinating server that is connected to a 5G-capable core network (either 5G SA or NSA). In some embodiments, the core network RAT is obscured or virtualized towards the RAN such that the coordination server and not the base station is performing the functions described herein, e.g., the health management functions, to ensure that the RAN is always connected to an appropriate core network node. Different protocols other than S1AP, or the same protocol, could be used, in some embodiments. In some embodiments, the software needed for implementing the methods and procedures described herein may be implemented in a high-level procedural or object-oriented language such as C, C++, C#, Python, Java, or Perl. The software may also be implemented in assembly language if desired. Packet processing implemented in a network device can include any processing determined by the context. For example, packet processing may involve high-level data link control (HDLC) framing, header compression, and/or encryption. In some embodiments, software that, when executed, causes a device to perform the methods described herein may be stored on a computer-readable medium such as read-only memory (ROM), programmable read-only memory (PROM), electrically erasable programmable read-only memory (EEPROM), flash memory, or a magnetic disk that is readable by a general- or special-purpose processing unit to perform the processes described in this document. The processors can include any microprocessor (single or multiple core), system on chip (SoC), microcontroller, digital signal processor (DSP), graphics processing unit (GPU), or any other integrated circuit capable of processing instructions such as an x86 microprocessor. In some embodiments, the radio transceivers described herein may be base stations compatible with a Long Term Evolution (LTE) radio transmission protocol or air interface. The LTE-compatible base stations may be eNodeBs. In addition to supporting the LTE protocol, the base stations may also support other air interfaces, such as UMTS/HSPA, CDMA/CDMA2000, GSM/EDGE, GPRS, EVDO, 2G, 3G, 5G, TDD, or other air interfaces used for mobile telephony.
In some embodiments, the base stations described herein may support Wi-Fi air interfaces, which may include one or more of IEEE 802.11a/b/g/n/ac/af/p/h. In some embodiments, the base stations described herein may support IEEE 802.16 (WiMAX), LTE transmissions in unlicensed frequency bands (e.g., LTE-U, Licensed Access or LA-LTE), LTE transmissions using dynamic spectrum access (DSA), radio transceivers for ZigBee, Bluetooth, or other radio frequency protocols, or other air interfaces. The foregoing discussion discloses and describes merely exemplary embodiments of the present invention. In some embodiments, software that, when executed, causes a device to perform the methods described herein may be stored on a computer-readable medium such as a computer memory storage device, a hard disk, a flash drive, an optical disc, or the like. As will be understood by those skilled in the art, the present invention may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. For example, wireless network topology can also apply to wired networks, optical networks, and the like. Various components in the devices described herein may be added, removed, split across different devices, combined onto a single device, or substituted with those having the same or similar functionality. Although the present disclosure has been described and illustrated in the foregoing example embodiments, it is understood that the present disclosure has been made only by way of example, and that numerous changes in the details of implementation of the disclosure may be made without departing from the spirit and scope of the disclosure, which is limited only by the claims which follow. Various components in the devices described herein may be added, removed, or substituted with those having the same or similar functionality. Various steps as described in the figures and specification may be added or removed from the processes described herein, and the steps described may be performed in an alternative order, consistent with the spirit of the invention. Features of one embodiment may be used in another embodiment. Other embodiments are within the following claims.
11863242
DESCRIPTION OF EMBODIMENTS FIG.2illustrates an exemplary system200according to an embodiment of the present principles. The system200includes a mobile station (STA)210, a first access point (AP1)220and a second access point (AP2)230such as a gateway. The two access points220,230are configured for wireless communication with mobile stations, e.g. using Wi-Fi according to IEEE 802.11. The system200further includes a status detector device240, configured to determine whether a frame is a management frame or a data frame and hence whether a station is inactive or active, and a wireless LAN (WLAN) controller250. The APs, the status detector device240and the WLAN controller250are connected by a connection260, which preferably is wired but also can be wireless. The mobile station210can be any kind of conventional device—mobile phone, tablet, sensor, etc.—compatible with the wireless communications standard used by the APs. Each AP220,230includes at least one hardware processing unit (“processor”)221,231, memory222,232and at least one wireless communications interface223,233, in the example a Wi-Fi interface, configured to communicate with other mobile stations, and a backbone interface224,234configured for communication with the other devices connected to the connection260. Any suitable communication standard, such as Wi-Fi (IEEE 802.11), Ethernet (IEEE 802.3), and PLC (power-line communication), could be used for the communication over the connection260. The APs220,230are configured to operate on different channels, i.e. different frequencies, so as to avoid interference. The channel allocation, which preferably is dynamic, can be performed in any suitable conventional way. The status detector device240and the WLAN controller250each include at least one hardware processing unit (“processor”)241,251, memory242,252and a backbone interface244,254configured for communication with the other devices connected to the connection260. In particular, the backbone interface244of the status detector device240is configured to receive measurements of transmission rate (PHY rate or other measure of bandwidth) and signal strength (RSSI) from the APs, as will be further described hereinafter. The status detector device240and the WLAN controller250can be stand-alone devices or be implemented on another device in the system200, such as on an AP, or in an external network, or in the Cloud. The system could also include a gateway device (not shown) configured to connect the system200to an external network such as the Internet. The gateway device can be a stand-alone device, but it can also be implemented on one of the devices connected to the connection260, for example an AP. The memories222,232,242,252, which can be implemented as a plurality of memory circuits possibly of different types, are configured to store software instructions for execution by the respective processors221,231,241,251, and also for various data necessary for performing the respective functions described herein. The skilled person will appreciate that the illustrated devices are very simplified for reasons of clarity and that real devices in addition would include features such as internal connections and power supplies. Non-transitory storage media270stores instructions that, when executed by processor241, perform the functions of the status detector device240as further described hereinafter with reference toFIG.3. 
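As an illustration of the measurement flow into the status detector device240, the sketch below defines the per-frame sample an AP might report over the backbone interface, pairing the PHY rate with the RSSI as described above. The record fields and names are assumptions for illustration, not from the original disclosure.

```python
from dataclasses import dataclass

@dataclass
class FrameSample:
    """One per-frame measurement reported by an AP to the status
    detector device: the PHY rate the frame was received at and
    its received signal strength (RSSI)."""
    station_mac: str
    phy_rate_mbps: float
    rssi_dbm: float

# Example: samples as APs 220 and 230 might report them.
samples = [
    FrameSample("aa:bb:cc:dd:ee:01", 1.0, -45.0),   # low rate, strong signal
    FrameSample("aa:bb:cc:dd:ee:02", 6.0, -80.0),   # low rate, weak signal
]
```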
A salient point of the present principles is the use of signal strength information, typically Received Signal Strength Indicator (RSSI) information in addition to the PHY rate to classify (802.11) frames. Thus, instead of directly classifying a frame with a low PHY rate (i.e. below a given transmission rate threshold value) as a management frame, the status detector device240checks the corresponding RSSI value. In case this RSSI sample is low (i.e. below a signal strength threshold), then it is likely that the station was simply relatively far from the AP, which means that the frame can be classified as a data frame (sent at a low transmission rate, since the low signal strength did not permit a (significantly) higher transmission rate). On the other hand, in case the RSSI sample is high (i.e. above the signal strength threshold), then it is likely that the station was relatively close to the AP and that the frame was sent with a deliberately low transmission rate—as management frames are—and that the frame thus can be classified as a management frame. The transmission rate threshold can be set as a fixed value, as in the conventional solution, for example to a maximum management frame transmission rate. The signal strength threshold can be set as a fixed value or, advantageously, as a value dependent on the transmission rate. Alternatively, the received signal strength can be converted into an expected transmission rate, i.e. a transmission rate that the station could use to transmit data frames (rather than management frames that on purpose are sent using a low transmission rate), and compared with the measured PHY rate. If the expected transmission rate exceeds the measured PHY rate by at least a certain amount—this amount can be a fixed value dependent on system characteristics—then the frame can be classified as a management frame. For example, the following table [downloaded from http://community.arubanetworks.com/t5/Controller-Based-WLANs/What-is-the-relationship-between-data-rate-SNR-and-RSS1/ta-p/178312] applies to a specific 1×1 configuration in an 802.11n network:

Rate (Mb/s)          6    9    12   18   24   36   48   54
SNR (dB)             4    5    7    9    12   16   20   24
Signal level (dBm)  −81  −81  −78  −76  −73  −69  −65  −64

Similar tables can be obtained for other configurations, as is known in the art. It should be noted that the PHY rate can further depend on factors unrelated to the RSSI, such as interference. In a variant, the interference is measured using any conventional method and the expected transmission rate is adjusted depending on the level of interference. The present system can thus use a mechanism that contains both a hard coded—measured—mapping between RSSI and PHY rates for different configurations, and a dynamic mechanism that adjusts the function in case of for example interference. FIG.3illustrates a flow chart for a method300of frame classification at a status detector device240according to an embodiment of the present principles. In step S310, the processor241of the status detector device240obtains the samples of PHY rate and RSSI from the APs220,230, which measured the received PHY rate and RSSI for their communications with the mobile stations210and provide these to the status detector device240. For each frame, the processor then performs the following steps, which may be performed in parallel or in series. In step S320, the processor241filters the samples to keep those with a PHY rate that is below a maximum management frame PHY rate, for example, 1 Mbps in 802.11n in the 2.4 GHz band.
In case the PHY rate is above the maximum management frame PHY rate, then the frame is a data frame and the station is determined, in step S350, to be active. In step S330, the processor241uses a function to calculate the expected PHY rate for the corresponding RSSI. As mentioned hereinbefore, further factors, like interference, may be taken into account as well. In step S340, the processor241determines if the measured PHY rate is lower than the expected PHY rate minus a margin, preferably given in Mbps or percent. It will be appreciated that it is also possible to deduct the margin, which for example can be 3 dBm, from the RSSI value before it is input to the function to obtain the expected PHY rate. In this case, the measured PHY rate is compared directly with the expected PHY rate. Based on the determination in step S340, the processor241determines whether the frame is a management frame or a data frame. In case the measured PHY rate is lower (than the expected PHY rate minus the margin, or than the expected PHY rate obtained with the margin-adjusted RSSI, depending on the implementation), the frame is determined, in step S360, to be a management frame; otherwise the frame is determined, in step S350, to be a data frame. In case it was determined in step S350that the frame was a data frame, it can be determined, in optional step S370, that the station is active. Conversely, in case it was determined in step S360that the frame was a management frame, it can be determined, in step S380, that the station is inactive. In order to obtain a higher confidence level of the determination, for example ‘active’ or ‘inactive’, more than a single sample can be used for the determination. In this case, the processor241is configured to determine the nature of a plurality of frames—management frame or data frame—during a given time period (such as a sliding window) and determine that the station is active if at least a given number or a given ratio are data frames. As will be appreciated, the present principles can determine, without inspection of the content of a frame, whether the frame is a management frame or a data frame and hence if a station is inactive or active. It should be understood that the elements shown in the figures may be implemented in various forms of hardware, software or combinations thereof. Preferably, these elements are implemented in a combination of hardware and software on one or more appropriately programmed general-purpose devices, which may include a processor, memory and input/output interfaces. The present description illustrates the principles of the present disclosure. It will thus be appreciated that those skilled in the art will be able to devise various arrangements that, although not explicitly described or shown herein, embody the principles of the disclosure and are included within its scope. All examples and conditional language recited herein are intended for educational purposes to aid the reader in understanding the principles of the disclosure and the concepts contributed by the inventor to furthering the art, and are to be construed as being without limitation to such specifically recited examples and conditions. Moreover, all statements herein reciting principles, aspects, and embodiments of the disclosure, as well as specific examples thereof, are intended to encompass both structural and functional equivalents thereof.
Additionally, it is intended that such equivalents include both currently known equivalents as well as equivalents developed in the future, i.e., any elements developed that perform the same function, regardless of structure. Thus, for example, it will be appreciated by those skilled in the art that the block diagrams presented herein represent conceptual views of illustrative circuitry embodying the principles of the disclosure. Similarly, it will be appreciated that any flow charts, flow diagrams, state transition diagrams, pseudocode, and the like represent various processes which may be substantially represented in computer readable media and so executed by a computer or processor, whether or not such computer or processor is explicitly shown. The functions of the various elements shown in the figures may be provided through the use of dedicated hardware as well as hardware capable of executing software in association with appropriate software. When provided by a processor, the functions may be provided by a single dedicated processor, by a single shared processor, or by a plurality of individual processors, some of which may be shared. Moreover, explicit use of the term “processor” or “controller” should not be construed to refer exclusively to hardware capable of executing software, and may implicitly include, without limitation, digital signal processor (DSP) hardware, read only memory (ROM) for storing software, random access memory (RAM), and non-volatile storage. Other hardware, conventional and/or custom, may also be included. Similarly, any switches shown in the figures are conceptual only. Their function may be carried out through the operation of program logic, through dedicated logic, through the interaction of program control and dedicated logic, or even manually, the particular technique being selectable by the implementer as more specifically understood from the context. In the claims hereof, any element expressed as a means for performing a specified function is intended to encompass any way of performing that function including, for example, a) a combination of circuit elements that performs that function or b) software in any form, including, therefore, firmware, microcode or the like, combined with appropriate circuitry for executing that software to perform the function. The disclosure as defined by such claims resides in the fact that the functionalities provided by the various recited means are combined and brought together in the manner which the claims call for. It is thus regarded that any means that can provide those functionalities are equivalent to those shown herein.
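Returning to the classification method ofFIG.3, the following Python sketch makes it concrete: it implements the PHY-rate filter of step S320, the RSSI-to-expected-rate lookup of step S330 using the 1×1 802.11n table reproduced earlier, the margin comparison of step S340, and the sliding-window activity determination of steps S370/S380. This is a minimal sketch: the margin value and the activity ratio are illustrative assumptions, not values prescribed by the description.

# A minimal sketch, assuming an illustrative margin and activity ratio, of
# the frame classification of method 300 (steps S310-S380). The RSSI-to-rate
# mapping is the 1x1 802.11n table reproduced in the description above.

SIGNAL_TO_RATE = [  # (signal level in dBm, achievable PHY rate in Mb/s)
    (-81, 6), (-81, 9), (-78, 12), (-76, 18),
    (-73, 24), (-69, 36), (-65, 48), (-64, 54),
]
MAX_MGMT_PHY_RATE_MBPS = 1.0  # e.g., 1 Mbps in 802.11n in the 2.4 GHz band
MARGIN_MBPS = 6.0             # implementation-dependent margin (assumed)

def expected_phy_rate(rssi_dbm):
    """S330: PHY rate a station could be expected to use at this RSSI."""
    rate = 0.0
    for level, r in SIGNAL_TO_RATE:
        if rssi_dbm >= level:
            rate = r
    return rate

def classify_frame(phy_rate_mbps, rssi_dbm):
    """S320-S360: return 'data' or 'management' for one sample."""
    if phy_rate_mbps > MAX_MGMT_PHY_RATE_MBPS:
        return "data"  # S320: rate too high for a management frame
    if phy_rate_mbps < expected_phy_rate(rssi_dbm) - MARGIN_MBPS:
        return "management"  # S340/S360: deliberately slow despite a strong signal
    return "data"  # S350: low rate explained by low signal strength

def station_is_active(samples, min_data_ratio=0.5):
    """S370/S380 over a window of (phy_rate_mbps, rssi_dbm) samples."""
    labels = [classify_frame(rate, rssi) for rate, rssi in samples]
    return labels.count("data") / len(labels) >= min_data_ratio

For example, classify_frame(1.0, -60.0) yields 'management' (a 1 Mb/s frame from a station whose RSSI would support 54 Mb/s), while classify_frame(1.0, -85.0) yields 'data', since at that signal strength a low rate is expected.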
12,978
11863243
DETAILED DESCRIPTION In the following description, numerous specific details are set forth to provide a more thorough understanding of the various embodiments. However, it will be apparent to one skilled in the art that the inventive concepts may be practiced without one or more of these specific details. System Overview FIG.1illustrates a network system100configured to implement one or more aspects of various embodiments. As shown, network system100includes a number of servers104(1)-104(X), each of which is referred to individually as a server104. Network system100also includes a number of cellular base stations118(1)-118(Y) (e.g., cell sites, cellular towers, etc.), each of which is referred to individually as base station118. Base stations118are capable of communicating with servers104via a network140. Network140includes any technically feasible wired, optical, wireless, or hybrid network that transmits data between or among servers104, base stations118, and/or other components. For example, network140could include a wide area network (WAN), local area network (LAN), personal area network (PAN), wireless (WiFi) network, cellular network, Ethernet network, Bluetooth network, universal serial bus (USB) network, satellite network, and/or the Internet. Network system100also includes a set of electronic devices112(1)-112(M) connected to base station118(1) via a set of cellular links114(1)-114(M), as well as a different set of electronic devices112(M+1)-112(N) connected to base station118(Y) via a different set of cellular links114(M+1)-114(N). For example, each electronic device included in electronic devices112(1)-112(M) and electronic devices112(M+1)-112(N) could establish a cellular link with the closest base station118and/or a given base station118for the cell in which the electronic device is located. Each of electronic devices112(1)-112(M) and electronic devices112(M+1)-112(N) is referred to individually as electronic device112. Electronic devices112include network-enabled devices that use a cellular communication protocol to establish and maintain cellular links114with cellular base stations118. In one embodiment, the cellular communication protocol includes the narrow-band Internet-of-Things (NB-IoT) protocol. A given electronic device112establishes a cellular link using a cellular modem, as described below in greater detail in conjunction withFIG.2. In so doing, the given electronic device112transmits identifying information included in a subscriber identification module (SIM) card to a carrier server (not shown) via a corresponding cellular tower118. In response, the carrier server authorizes the given electronic device112to establish a cellular link114with the corresponding cellular tower118. Each server104represents a destination for data generated by electronic devices112. Electronic devices112generally transmit this data to servers104via a cellular network that includes base stations118for security and reduced latency. For example, electronic devices112could generate electricity consumption data and/or other metrology data and transmit the metrology data to servers104across cellular links114. Electronic devices112establish cellular links114periodically and/or at different times according to a communication schedule defined by servers104and/or other components of network system100.
For example, each electronic device112could be scheduled to establish a corresponding cellular link114at a different hour of a given day, so that only a subset of cellular links114are active at a given time. Each electronic device112may establish cellular link114at a certain frequency (e.g., every eight hours, every 24 hours, etc.) to transmit data collected over the preceding period to one or more servers104. When cellular link114is not established for a given electronic device112(e.g., during periods in between transmission times specified in the communication schedule), the cellular modem on the electronic device may be placed in a lower-power “sleep” mode to conserve power. A given electronic device112that establishes a cellular link114with a corresponding base station118may then exchange data with one or more servers104via link114. For example, electronic devices112could include smart meters that measure and report consumption of electricity and/or other resources to servers104in a centralized management facility and/or “back office” for the corresponding utility. In another example, electronic devices112could monitor and report particulate matter mass concentrations and/or other indicators of air quality to servers104for one or more governmental, private, and/or other entities that analyze and/or aggregate air quality data from multiple locations. In general, electronic devices112may include devices that measure, collect, and/or transmit data in a non-real-time manner to servers104via cellular links114with base stations118. Each electronic device112can be powered by one or more power sources, such as (but not limited to) a power grid, a battery, and/or a solar panel. When a given electronic device112is battery-powered, electronic device112may perform a number of operations to reduce power consumption. As described in further detail below, these operations may include analyzing and/or evaluating cellular protocol and/or radio frequency (RF) parameters associated with the electronic device's link114with a corresponding base station118. When the parameters meet one or more conditions and/or thresholds indicating that the signal over link114can be used to establish a data session with base station118, the electronic device may transmit messages that include metrology and/or other data collected by the electronic device over link114to base station118for forwarding to the relevant servers104. When the parameters do not meet the conditions and/or thresholds, the electronic device may delay transmission of the messages over link114for a predetermined period and/or until the corresponding link114has sufficient signal strength. FIG.2is a more detailed illustration of one of electronic devices112ofFIG.1, according to various embodiments. As shown, electronic device112includes one or more processors202, a battery204, a cellular modem206, and a memory216coupled together. Processors202include any technically feasible hardware units configured to process data and execute software applications (e.g., software application222). For example, processors202could include one or more central processing units (CPUs), graphics processing units (GPUs), application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), artificial intelligence (AI) accelerators, microprocessors, microcontrollers, other types of processing units, and/or a combination of different processing units (e.g., a CPU configured to operate in conjunction with a GPU).
Memory216includes one or more units that store data and/or instructions. For example, memory216could include a random access memory (RAM) module, a flash memory unit, and/or another type of memory unit. Processors202, cellular modem206, and/or other components of electronic device112include functionality to read data from and write data to memory216. Memory216includes software application222, which includes program code that, when executed by one or more processors202, performs any of the operations discussed herein. Cellular modem206establishes a cellular link114between electronic device112and a corresponding base station118. For example, cellular modem206could include a narrow-band IoT (NB-IoT) modem that establishes link114with base station118and couples to a cellular network according to an NB-IoT protocol. Once link114is established, electronic device112may use cellular modem206and link114to connect to the Internet and/or another network (e.g., network140) and send and receive data with other devices on the network. Battery204supplies power to processors202, cellular modem206, memory216, and/or other components of electronic device112. For example, battery204could include sufficient capacity to allow electronic device112to operate for a number of years without replacement and/or recharging. Software application222manages battery204consumption on electronic device112by controlling the transmission of data over link114based on estimates of a signal condition228associated with link114. In some embodiments, signal condition228represents the ability of electronic device112to transmit and/or receive data using the signal associated with link114. A “good” signal condition228indicates that the signal can be used to transmit data over link114, while a “poor” signal condition228indicates that the signal is not capable of transmitting data over link114without encountering interruptions, retries, and/or other issues or errors. Signal condition228may be affected by obstructions (e.g., vehicles, debris, etc.) between electronic device112and base station118, connections between other devices and base station118, weather conditions near electronic device112and/or base station118, and/or other factors. Signal condition228may be estimated by measuring, assessing, and/or combining signal strength, signal quality, and/or other attributes that characterize the signal. More specifically, software application222may estimate signal condition228based on one or more cellular protocol parameters224and/or one or more RF link parameters226associated with link114. When cellular protocol parameters224and/or RF link parameters226indicate signal condition228is adequate for transmitting and/or receiving data, software application222may make and/or implement transmission decisions230that cause data to be transmitted over link114. When cellular protocol parameters224and/or RF link parameters226indicate that signal condition228is not adequate, software application222may make and/or implement transmission decisions230that cause the transmission of data over link114to be delayed. In one or more embodiments, software application222analyzes cellular protocol parameters224to assess signal condition228and make transmission decisions230.
As described in further detail below with respect toFIG.3A, cellular protocol parameters224may include (but are not limited to) a coverage enhancement (CE) parameter, a Signal to Interference and Noise Ratio (SINR), and/or a Reference Signal Received Quality (RSRQ) provided by the cellular modem206and/or the cellular protocol associated with link114. Software application222may perform one or more comparisons, evaluations, and/or other operations using cellular protocol parameters224to generate a binary value indicating whether or not signal condition228is adequate (e.g., whether electronic device112is inside or outside the cell edge associated with the corresponding base station118). Software application222also, or instead, uses protocol-agnostic RF link parameters226to estimate signal condition228and make corresponding transmission decisions230. As described in further detail below with respect toFIG.3B, RF link parameters226may include (but are not limited to) a path loss (PL), a Reference Signal Received Power (RSRP), and/or a SINR. Software application222may input RF link parameters226into one or more equations and/or formulas and obtain, as output from the equation(s) and/or formula(s), a numeric value representing signal condition228. A change in the numeric value indicates an increase or decrease in signal condition228. As a result, the numeric value may represent a “probability” of successful data transmission over link114, given the current state of the signal associated with link114. Preserving Battery Life when Poor Signaling Conditions Exist FIG.3Aillustrates the use of cellular protocol parameters224ofFIG.2in making a transmission decision316, according to various embodiments. As mentioned above, cellular protocol parameters224may be provided and/or supported by the cellular protocol used by an electronic device (e.g., electronic device112ofFIG.1) to establish a link with a cellular base station (e.g., base station118ofFIG.1). As shown inFIG.3A, cellular protocol parameters224include a CE parameter302, an RSRQ304, and a SINR306. CE parameter302includes a CE level, CE mode, and/or another value indicating the amount of CE on the electronic device. For example, CE parameter302may represent a number of retries, power boosting, and/or the use of other CE techniques by the electronic device in the presence of adverse signal conditions. RSRQ304and SINR306include metrics related to signal strength and/or signal quality provided by a cellular modem (e.g., cellular modem206ofFIG.2) on the electronic device. For example, measurements of RSRQ304and SINR306may be generated according to one or more cellular protocols and/or one or more propagation models supported by the cellular modem. A number of comparisons314are performed using CE parameter302, RSRQ304, and SINR306to generate an estimate of signal condition228for the link between the electronic device and base station. In particular, comparisons314may include a comparison of CE parameter302with a predetermined CE parameter value308, a comparison of RSRQ304with an RSRQ threshold310, and a comparison of SINR306with a SINR threshold312. For example, comparisons314could be performed using the following expression: (CE level == 0) && (SINR > SINR threshold) && (RSRQ > RSRQ threshold)  (1) The expression includes a first comparison of CE parameter302with a CE level of 0, which indicates a lowest number of data transmission retries over the link.
The first comparison evaluates to true if CE parameter302is set to or represents the CE level of 0 and to false otherwise. The expression also includes a second comparison of SINR306to SINR threshold312and a third comparison of RSRQ304to RSRQ threshold310. The second comparison evaluates to true if SINR306exceeds SINR threshold312and to false otherwise, and the third comparison evaluates to true if RSRQ304exceeds RSRQ threshold310and to false otherwise. In turn, the expression includes a first logical conjunction operator between the first and second comparisons and a second logical conjunction operator between the second and third comparisons, such that the expression evaluates to true if all three comparisons evaluate to true and to false otherwise. Consequently, the estimate of signal condition228produced by comparisons314includes a binary or Boolean value that indicates whether the signal over the link is adequate for transmission. Continuing with the above example, values of CE parameter302, RSRQ304, and SINR306that result in the expression evaluating to true indicate that signal condition228is adequate for transmission of data from the electronic device over the link. Conversely, values of CE parameter302, RSRQ304, and SINR306that result in the expression evaluating to false indicate that signal condition228is not adequate for transmission of data from the electronic device over the link. In one or more embodiments, CE parameter value308, RSRQ threshold310, and/or SINR threshold312are selected or adjusted to “identify” the cell edge associated with the link. For example, CE parameter value308, RSRQ threshold310, and/or SINR threshold312could be set for each combination of factors (e.g., cellular protocol, type of cellular modem, type of electronic device, etc.) that affects the transmission of data over a cellular link. Data rates, retry rates, and/or other indicators of wireless performance could be collected for different values of CE parameter302, RSRQ304, and/or SINR306sampled for a given combination of factors (e.g., values of CE parameter302, RSRQ304, and/or SINR306generated by a certain type of cellular modem according to a cellular protocol). CE parameter value308, RSRQ threshold310, and/or SINR threshold312could then be set to values of CE parameter302, RSRQ304, and/or SINR306that result in a specific data rate and/or retry rate and/or a given range of data rates and/or retry rates. In another example, CE parameter value308, RSRQ threshold310, and/or SINR threshold312could be selected to achieve a certain battery runtime in the electronic device, transmission of data over the link within a certain period from which the data was generated or sampled, and/or another goal related to cellular communication with the electronic device. After signal condition228is estimated using comparisons314, transmission decision316is made based on the estimated signal condition228. If signal condition228is adequate (e.g., if the expression used to perform comparisons314evaluates to true), transmission decision316may include immediate transmission of data from the electronic device over the link. If signal condition is not adequate (e.g., if the expression used to perform comparisons314evaluates to false), transmission decision316may include delaying the transmission of data from the electronic device over the link. 
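As a minimal sketch of comparisons314, the following Python function evaluates expression (1); the concrete threshold values are assumptions chosen for illustration, not values prescribed by the description.

SINR_THRESHOLD_DB = 5.0    # assumed illustrative value of SINR threshold 312
RSRQ_THRESHOLD_DB = -12.0  # assumed illustrative value of RSRQ threshold 310

def signal_condition_adequate(ce_level, sinr_db, rsrq_db):
    """Expression (1): (CE level == 0) && (SINR > thr) && (RSRQ > thr)."""
    return (ce_level == 0
            and sinr_db > SINR_THRESHOLD_DB
            and rsrq_db > RSRQ_THRESHOLD_DB)

A True result corresponds to immediate transmission and a False result to delayed transmission, mirroring transmission decision316.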
For example, transmission decision316could specify delaying the transmission of data over the link until the next point in time at which the electronic device is scheduled to transmit data over the link (e.g., after a predetermined period specified in a communication schedule associated with the electronic device). When the next point in time is reached, signal condition228could be re-evaluated using updated values of CE parameter302, RSRQ304, and SINR306, and a new transmission decision316may be made based on the newly estimated signal condition228. As a result, signal condition228may be estimated whenever the electronic device is scheduled to transmit data over the link, and a corresponding transmission decision316may be made so that the electronic device transmits data over the link to meet a certain data rate, likelihood of success, transmission deadline, and/or other goal or requirement represented by signal condition228. Because comparisons314involve computationally efficient evaluations that produce deterministic results, signal condition228may be estimated with minimal power consumption and/or computation time on the electronic device. The binary value representing signal condition228may similarly be used to generate a corresponding transmission decision316in a time- and/or power-efficient manner. Consequently, the technique ofFIG.3Aimproves power consumption and/or battery life on the electronic device by avoiding transmission attempts when signal condition228is not adequate and minimizing delay and/or overhead associated with transmitting data over the link when signal condition228is adequate. WhileFIG.3Aillustrates the generation of transmission decision316based on CE parameter302, RSRQ304, and SINR306, it will be appreciated that other parameters and/or thresholds may be used in comparisons314and/or to make transmission decision316. For example, one or more comparisons314could be performed using values and/or thresholds for RSRP, Received Signal Strength Indicator (RSSI), and/or other measurements of cellular signal strength, signal quality, and/or signal state supported by the cellular modem and/or cellular protocol used by the electronic device to send and receive data over the link. In another example, CE parameter302, RSRQ304, and/or SINR306could be compared with multiple values and/or thresholds, and the output of comparisons314may include a “rating” (e.g., from 1 to 5, from 1 to 10, from A to F, etc.) that represents signal condition228. FIG.3Billustrates the use of RF link parameters226ofFIG.2in making a transmission decision336, according to various embodiments. As mentioned above, RF link parameters226may be generated by a cellular modem on an electronic device (e.g., electronic device112ofFIG.1) independent of the cellular protocol used to establish a link with a base station (e.g., base station118ofFIG.1). For example, RF link parameters226could be included in link quality statistics produced by the cellular modem. As shown inFIG.3B, RF link parameters226include a PL322, an RSRP324, and SINR306. PL322, RSRP324, and SINR306may be generated by the cellular modem according to a propagation model of a cellular signal from the base station to the electronic device. PL322, RSRP324, and SINR306are combined with a bandwidth constant328and an environment constant330into an estimate of a pseudo-coupling loss (PCL)326representing a signal condition of the link.
In one or more embodiments, PCL326is calculated using the following equation: PCL_uplink(dB) = PL + RSRP − SINR − N_figureENodeB − N_floor  (2) In the above equation, PCL326for the uplink from the electronic device to the base station is represented by PCL_uplink(dB). In particular, PCL326is calculated as the sum of PL322from the electronic device to the base station, RSRP324as perceived by the cellular modem, the negative of SINR306as perceived by the cellular modem, the negative of bandwidth constant328as represented by N_figureENodeB, and the negative of environment constant330as represented by N_floor. Bandwidth constant328may be a noise figure that is relative to the bandwidth associated with the base station, and environment constant330may be a thermal noise floor defined by the environment around the electronic device and/or base station. PCL326is compared with a maximum coupling loss (MCL)332associated with the link to generate transmission decision336. In some embodiments, MCL332represents a threshold for PCL326. MCL332may be defined for the electronic device, cellular protocol, base station, and/or another factor that affects transmission of data from the electronic device over the link. When MCL332is greater than or equal to PCL326, the likelihood of success of data transmission over the link is adequate, and transmission decision336includes immediate transmission of data from the electronic device over the link. When MCL332is less than PCL326, data transmission over the link is unlikely to succeed, and transmission decision336includes delaying the transmission of data from the electronic device over the link (e.g., until the next point in time specified in the communication schedule associated with the electronic device). In one or more embodiments, the estimation of PCL326using Equation 2 is derived as follows. MCL332is calculated as the maximum conducted power loss that can be tolerated by the electronic device and/or base station. Values of MCL332may be calculated using the following: MCL_downlink = P_ENodeB − (SINR_RequiredBPD + N_figureBPD + N_floor)  (3) MCL_uplink = P_BPD − (SINR_RequiredENodeB + N_figureENodeB + N_floor)  (4) In the above equations, both values of MCL332are calculated in dB. MCL_downlink represents MCL332for the downlink from the base station to the electronic device, and MCL_uplink represents MCL332for the uplink from the electronic device to the base station. P_ENodeB and P_BPD represent the transmission power of the base station and the electronic device, respectively. SINR_RequiredBPD and SINR_RequiredENodeB represent the required SINR306(e.g., the amount of noise tolerated) for the electronic device and base station, respectively. As mentioned above, N_floor represents environment constant330and N_figureENodeB represents bandwidth constant328for the base station, while N_figureBPD represents bandwidth constant328(e.g., a bandwidth-dependent noise figure) for the electronic device. Continuing with the above equations, SINR306for the base station may be estimated based on data available to the cellular modem on the electronic device. In particular, SINR306for the electronic device may be calculated using the following equation: SINR_BPD = S_BPD − (N_BPD + I_BPD)  (5) where SINR_BPD is SINR306for the electronic device, S_BPD is the power of usable signals measured by the electronic device, N_BPD is the background noise detected by the electronic device, and I_BPD is the average interference power detected by the electronic device. All values used in the above equation may be specified in decibels (dB).
Assuming S_BPD = RSRP in decibels, then N_BPD + I_BPD = RSRP − SINR_BPD when all values are in decibels. As a lossy estimate, the noise and interference viewed from the base station is taken to be equivalent to the noise and interference viewed from the electronic device. Sources of errors associated with this estimate include the following:
- potentially higher noise and interference at the electronic device due to proximity to other transmitters (e.g., mobile phones, smart meters, etc.)
- potentially lower noise and interference at the electronic device due to location in a pit or obstructions between the electronic device and the base station
- potentially higher noise and interference at the base station due to a directional sectorized antenna
- potentially lower noise and interference at the base station due to PL distance from elements in the sector
Because these factors are both additive and subtractive with respect to the noise and interference at the electronic device and base station, the observed noise and interference at the electronic device can be assumed to be close enough to the noise and interference at the base station to be used in an estimate of PCL326. More specifically, the above equation for calculating PCL326from PL322, RSRP324, SINR306, bandwidth constant328, and environment constant330may be derived by combining N_BPD + I_BPD = RSRP − SINR_BPD with the equation for calculating the uplink MCL332into the following: PCL_uplink(dB) = P_BPD − ((P_BPD@BS − (N_BPD + I_BPD)) + N_figureENodeB + N_floor)  (6) where P_BPD@BS represents the received power of the electronic device at the base station. While this value is not available to the electronic device, the value can be calculated as the transmission power of the electronic device minus PL322: P_BPD@BS(dB) = P_BPD − PL  (7) The calculation of PCL326may be updated using the above calculation of P_BPD@BS to produce the following equation: PCL_uplink(dB) = P_BPD − (((P_BPD − PL) − (N_BPD + I_BPD)) + N_figureENodeB + N_floor)  (8) The above equation may then be converted into Equation 2 by cancelling out the transmission power terms P_BPD and replacing N_BPD + I_BPD with RSRP − SINR_BPD. As shown, a given estimate of PCL326may also be used to update a signal condition profile334associated with the link between the electronic device and base station. Signal condition profile334includes statistics and/or metrics related to PCL326values collected over one or more periods. For example, signal condition profile334could include a minimum, maximum, mean, median, quantile, standard deviation, variance, skew, kurtosis, and/or another statistic for a number of PCL326values. Signal condition profile334could also, or instead, track moving averages, trends, seasonality, and/or other patterns related to time series analysis of values of PCL326. Signal condition profile334could also, or instead, include data rates, retry rates, and/or other indicators of transmission performance that are collected concurrently with values of PCL326. At each point in time at which transmission decision336is to be made, one or more PCL326values may be generated and compared with MCL332to make transmission decision336. When transmission decision336triggers the transmission of data from the electronic device over the link, additional values of PCL326may periodically be computed (e.g., every five seconds) during the corresponding data session to further monitor the signal condition associated with the link over time. These values of PCL326may then be aggregated into one or more statistics and/or time series in signal condition profile334.
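As a minimal sketch of Equation 2 and of the comparison that drives transmission decision336, the following Python functions compute the uplink PCL from the modem-reported parameters and compare it against the MCL; the noise figure and noise floor values are illustrative assumptions, not values given in the description.

N_FIGURE_ENODEB_DB = 5.0  # assumed noise figure (bandwidth constant 328)
N_FLOOR_DBM = -114.0      # assumed thermal noise floor (environment constant 330)

def pcl_uplink_db(pl_db, rsrp_dbm, sinr_db):
    """Equation (2): PCL_uplink = PL + RSRP - SINR - N_figureENodeB - N_floor."""
    return pl_db + rsrp_dbm - sinr_db - N_FIGURE_ENODEB_DB - N_FLOOR_DBM

def should_transmit(pl_db, rsrp_dbm, sinr_db, mcl_db):
    """Transmit when MCL 332 is greater than or equal to the estimated PCL 326."""
    return mcl_db >= pcl_uplink_db(pl_db, rsrp_dbm, sinr_db)

For example, with PL = 120 dB, RSRP = −95 dBm, and SINR = 12 dB, the estimated PCL is 122 dB, so transmission proceeds for any MCL of 122 dB or more and is delayed otherwise.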
A separate signal condition profile334may be generated for each point in time at which a given transmission decision336is made (and optional subsequent data transmission is performed). Multiple signal condition profiles associated with multiple points in time and/or data sessions may optionally be aggregated into an “overall” signal condition profile334for the link. Consequently, signal condition profile334may characterize the quality, strength, and/or level of signal over the link across one or more time periods. Signal condition profile334may additionally be used to update the value of MCL332to which PCL326is compared to make transmission decision336. For example, MCL332may be set to a value of PCL326in signal condition profile334that is associated with a minimum data rate and/or a maximum retry rate. These updates to MCL332may be performed regularly (e.g., each time transmission decision336is to be made using signal condition profile334from a previous point in time) and/or in response to transmission errors and/or issues that occur after transmission decision336triggers one or more attempts to transmit the data over the link. In some embodiments, the techniques for estimating signal condition228described above with respect toFIGS.3A-3Bmay be used and/or combined in various ways to make transmission decisions316and/or336. For example, an initial transmission decision316could be made using comparisons314of CE parameter302, RSRQ304, and SINR306with CE parameter value308, RSRQ threshold310, and SINR threshold312, respectively. When transmission decision316triggers transmission of data from the electronic device over the link, values of PCL326are sampled and aggregated into signal condition profile334for the corresponding data session. In another example, transmission of data over the link could be performed or delayed based on a technique (e.g., one of the techniques illustrated inFIGS.3A and3B) that produces the most accurate estimate of signal condition228for a given cellular modem, electronic device, cellular protocol, and/or location. FIG.4is a flow chart of method steps for operating an electronic device within a cellular environment using cellular protocol information, according to various embodiments. Although the method steps are described in conjunction with the systems ofFIGS.1-2, persons skilled in the art will understand that any system configured to perform the method steps, in any order, is within the scope of the present invention. As shown, software application222determines402that a point in time associated with transmission of data on the electronic device is reached. For example, software application222could determine that the current time represents a point in time at which metrology data and/or other data collected by the electronic device is to be transmitted to one or more servers104. This point in time could be specified in a communication schedule for the electronic device and/or identified by software application222after an alert, notification, interrupt, condition, and/or another trigger to transmit data is detected by software application222and/or the electronic device. Next, software application222determines404a CE parameter and one or more RF link parameters associated with a link between an electronic device and a cellular base station. The CE parameter may include a CE mode, CE level, and/or another indicator of the use of CE on the electronic device. The RF link parameter(s) may include (but are not limited to) a SINR and/or RSRQ. 
Software application222also computes406an estimate for a signal condition associated with the link based on the CE parameter and the RF link parameters. For example, software application222may compare the CE parameter to one or more predetermined CE levels, CE modes, and/or other values related to CE on the electronic device. Software application222may also compare the SINR and/or RSRQ in the RF link parameter(s) with one or more corresponding thresholds. Software application222may then apply one or more logical operators (e.g., conjunction, disjunction, exclusive disjunction, negation, biconditional, alternative denial, etc.) to binary values computed as results of the comparisons to generate a true or false value representing the estimate of the signal condition. Software application222performs additional processing based on a determination408of whether or not the signal condition is adequate for transmission. Continuing with the above example, software application222could determine that the signal condition is adequate when the comparisons and/or evaluations performed in operation406result in a true value for the estimate. Conversely, software application222could determine that the signal condition is not adequate when the comparisons and/or evaluations performed in operation406result in a false value for the estimate. When the signal condition is determined to be adequate for transmission, software application222causes410transmission of one or more messages from the electronic device over the link. For example, software application222could generate one or more messages that include data to be transmitted to one or more servers104and/or one or more commands for transmitting the message(s) to the servers. Software application222and/or another component on the electronic device could then perform one or more operations that use the cellular modem on the electronic device to transmit the message(s) over the link to the cellular base station. After the message(s) are successfully transmitted, the component could place the cellular modem in a low-power “sleep” mode to conserve battery power on the electronic device. When the signal condition is determined to not be adequate for transmission, software application222delays412transmission of the message(s) over the link. For example, software application222could generate one or more commands that cause the cellular modem to disconnect from the base station and go into the low-power mode. Software application222could also, or instead, reschedule transmission of the message(s) for a subsequent point in time (e.g., the next point in time associated with transmission of data from the electronic device). Software application222may continue414operating the electronic device using operations402-412. For example, software application222could use operations402-412to periodically and/or repeatedly estimate the signal condition, make a transmission decision based on the signal condition, and implement the transmission decision while the electronic device is used to generate and/or collect data that is transmitted over a cellular link to one or more servers104. FIG.5is a flow chart of method steps for operating an electronic device within a cellular environment using RF link information, according to various embodiments. Although the method steps are described in conjunction with the systems ofFIGS.1-2, persons skilled in the art will understand that any system configured to perform the method steps, in any order, is within the scope of the present invention.
As shown, software application222determines502that a point in time associated with transmission of data on the electronic device is reached. For example, software application222could determine that the current time represents a point in time at which metrology data and/or other data collected by the electronic device is to be transmitted to one or more servers104. This point in time could be specified in a communication schedule for the electronic device and/or identified by software application222after an alert, notification, interrupt, condition, and/or another trigger to transmit data is detected by software application222and/or the electronic device. Next, software application222determines504one or more RF link parameters associated with a link between an electronic device and a cellular base station. The RF link parameter(s) may include, but are not limited to, a PL, RSRP, and/or SINR. Software application222also computes506an estimate for a PCL of the link based on the RF link parameter(s). For example, software application222could input the PL, the RSRP, the SINR, a thermal noise floor, and/or a noise figure associated with the cellular base station into an equation that performs additions, subtractions, and/or other arithmetic operations using the input. Software application222could then obtain the PCL as output of the equation. The PCL may include a numeric value that represents the likelihood of transmitting data over the link, given the signal associated with the link. Software application222performs additional processing based on a determination508of whether or not the estimated PCL exceeds an MCL for the link. The MCL may represent a threshold for the coupling loss of the link. If the threshold is exceeded by the estimated PCL, the signal condition represented by the estimated PCL is not adequate for transmission of data over the link. If the threshold is not exceeded by the estimated PCL, the signal condition represented by the estimated PCL is adequate for transmission of data over the link. When the estimated PCL does not exceed the MCL for the link, software application222causes510transmission of one or more messages from the electronic device over the link. For example, software application222could generate one or more messages that include data to be transmitted to one or more servers104and/or one or more commands for transmitting the message(s) to the servers. Software application222and/or another component on the electronic device could then perform one or more operations that use the cellular modem on the electronic device to transmit the message(s) over the link to the cellular base station. After the message(s) are successfully transmitted, software application222and/or another component on the electronic device could place the cellular modem in a low-power “sleep” mode to conserve battery power on the electronic device. During transmission of the message(s) over the link, software application222computes512one or more additional estimates of the PCL and aggregates514estimates of the PCL into one or more statistics associated with signal conditions for the link. For example, software application222could calculate a mean, median, maximum, minimum, quantile, standard deviation, variance, skew, kurtosis, and/or another statistic related to PCL values calculated in operations506and512.
Software application222could also, or instead, record the PCL values with timestamps for the times at which the PCL values were determined and/or determine trends, seasonality, moving averages, and/or other patterns related to the PCL values and timestamps. Software application222optionally updates516the MCL for the link based on the statistic(s). For example, software application222could set the MCL to the highest PCL value that allows for successful transmission of the data over the link and/or transmission of the data at a minimum data rate. Software application222may omit operation516if the data transmission over the link is successful and the PCL for the link remains below the MCL during the data transmission. When the signal condition is determined to not be adequate for transmission, software application222delays518transmission of the message(s) over the link. For example, software application222could generate one or more commands that cause the cellular modem to disconnect from the base station and go into the low-power mode. Software application222could also, or instead, reschedule transmission of the message(s) for a subsequent point in time (e.g., the next point in time associated with transmission of data from the electronic device). Software application222may continue520operating the electronic device using operations502-518. For example, software application222could use operations502-518to periodically and/or repeatedly estimate the signal condition, make a transmission decision based on the signal condition, and implement the transmission decision while the electronic device is used to generate and/or collect data that is transmitted over a cellular link to one or more servers104. Software application222may also operate the electronic device using various subsets, permutations, and/or repetitions of method steps from the flow charts ofFIGS.4and5. For example, software application222could perform operations402-408to efficiently determine if the signal condition associated with the link is sufficient to transmit data at a given point in time. Software application222could perform operations502-508before, after, and/or in lieu of operations402-408to perform a different and/or additional assessment of the signal condition. If the signal condition is determined to be adequate (e.g., if the estimate computed in operation406is true and/or the PCL computed in operation506does not exceed the MCL for the link), software application222may trigger the transmission of data over the link. During the data transmission, software application222may optionally use updated values of the CE and/or RF link parameters to further assess the signal condition associated with the link, generate statistics associated with the signal condition, and/or adjust one or more thresholds (e.g., SINR threshold, RSRQ threshold, MCL, etc.) used to estimate the signal condition and/or make transmission decisions based on the signal condition. As signal conditions are collected and aggregated over time, software application222may adjust estimates of signal conditions and/or the corresponding transmission decisions to balance the timeliness of data transmission over the link with the likelihood of success of the data transmission over the link. In sum, CE parameters and/or RF link parameters provided by the cellular modem on an electronic device are used with various techniques to control the transmission of data from the electronic device to a cellular base station via a link with the cellular base station.
The CE parameter includes a CE level, CE mode, and/or another indicator of the use of CE on the electronic device. The RF parameters include a SINR, RSRQ, RSRP, PL, RSSI, and/or other measurements that characterize the signal strength, signal quality, and/or other attributes of the signal associated with the link. One technique performs computationally efficient comparisons and/or evaluations using a CE parameter and one or more RF link parameters to estimate the signal condition associated with the link as a binary true/false value. Another technique inputs one or more RF link parameters and/or constants into an equation and obtains, as output from the equation, a numeric value representing the signal condition. When the signal condition is determined to be adequate (e.g., when the binary value is set to true and/or the numeric value meets a threshold), transmission of data over the link is triggered. When the signal condition is determined to not be adequate (e.g., when the binary value is set to false and/or the numeric value does not meet a threshold), transmission of data over the link is delayed. Consequently, the disclosed techniques efficiently detect poor signal conditions and delay transmission of data over the link until the signal conditions improve. One technical advantage of the disclosed techniques relative to the prior art is that the disclosed techniques, when implemented in battery-powered network devices, enable relatively higher success rates and/or reduced retry rates when transmitting data over cellular links. Accordingly, the disclosed techniques enable battery-powered devices to reduce overall power consumption relative to what can be achieved using prior art approaches, thereby extending overall battery life. In addition, the disclosed techniques reduce overall computational overhead and result in overall higher data rates in cellular communications. These technical advantages provide one or more technological improvements over prior art approaches.1. In some embodiments, a method for operating an electronic device within a cellular environment comprises determining one or more radio frequency (RF) link parameters associated with a link between the electronic device and a cellular base station at a first point in time; computing a first estimate for a pseudo-coupling loss (PCL) for the link based on the one or more RF link parameters; determining that the PCL exceeds a maximum coupling loss (MCL) for the link based on the first estimate; and delaying transmission of one or more messages from the electronic device over the link until a second point in time that is later than the first point in time.2. The method of clause 1, further comprising determining updated values of the one or more RF link parameters at the second point in time; computing a second estimate of the PCL for the link based on the updated values; determining that the PCL does not exceed the MCL for the link based on the second estimate; and causing the one or more messages to be transmitted from the electronic device over the link.3. The method of any of clauses 1-2, further comprising determining a coverage enhancement (CE) parameter associated with the link between the electronic device and a cellular base station at the first time; and computing a second estimate for a signal condition associated with the link based on the CE parameter and the one or more RF link parameters.4. 
The method of any of clauses 1-3, wherein computing the third estimate for the signal condition associated with the link comprises computing a binary value based on a first comparison between the CE parameter and a predetermined CE parameter value, a second comparison between a Signal to Interference and Noise Ratio (SINR) included in the one or more RF link parameters and a first threshold, and a third comparison between a Reference Signal Receive Quality (RSRQ) included in the one or more RF link parameters and a second threshold.5. The method of any of clauses 1-4, further comprising computing one or more additional estimates of the PCL during transmission of the one or more messages over the link; and aggregating the first estimate and the one or more additional estimates of the PCL to generate one or more statistics that characterize a signal condition associated with the link.6. The method of any of clauses 1-5, further comprising updating the MCL for the link based on the one or more statistics.7. The method of any of clauses 1-6, wherein computing the first estimate for the PCL of the link comprises performing one or more computations based on the one or more RF link parameters and one or more constants.8. The method of any of clauses 1-7, wherein the one or more constants comprise at least one of a thermal noise floor or a noise figure associated with the cellular base station.9. The method of any of clauses 1-8, wherein the one or more RF link parameters comprise at least one of a path loss (PL), a SINR, and a Reference Signal Received Power (RSRP).10. The method of any of clauses 1-9, wherein the second point in time is a predetermined period of time after the first point in time.11. In some embodiments, a non-transitory computer readable medium stores instructions that, when executed by a processor, cause the processor to perform the steps of determining one or more radio frequency (RF) link parameters associated with a link between the electronic device and a cellular base station at a first point in time; computing a first estimate for a pseudo-coupling loss (PCL) for the link based on the one or more RF link parameters; determining that the PCL exceeds a maximum coupling loss (MCL) for the link based on the first estimate; and delaying transmission of one or more messages from the electronic device over the link until a second point in time that is later than the first point in time.12. The non-transitory computer readable medium of clause 11, wherein the instructions further cause the processor to perform the steps of determining updated values of the one or more RF link parameters at the second point in time; computing a second estimate of the PCL for the link based on the updated values; determining that the PCL does not exceed the MCL for the link based on the second estimate; and causing the one or more messages to be transmitted from the electronic device over the link.13. The non-transitory computer readable medium of any of clauses 11-12, wherein the instructions further cause the processor to perform the steps of determining a coverage enhancement (CE) parameter associated with the link between the electronic device and a cellular base station at the first time; and computing a second estimate for a signal condition associated with the link based on the CE parameter and the one or more RF link parameters.14. 
14. The non-transitory computer readable medium of any of clauses 11-13, wherein computing the third estimate for the signal condition associated with the link comprises computing a first binary value based on a first comparison of the CE parameter with a predetermined CE parameter value; computing a second binary value based on a second comparison of a SINR included in the one or more RF link parameters to a first threshold; and computing a third binary value based on a third comparison of a Reference Signal Received Quality (RSRQ) included in the one or more RF link parameters to a second threshold.
15. The non-transitory computer readable medium of any of clauses 11-14, wherein the instructions further cause the processor to perform the steps of computing one or more additional estimates of the PCL during transmission of the one or more messages over the link; and aggregating the first estimate and the one or more additional estimates of the PCL into one or more statistics associated with signal conditions for the link.
16. The non-transitory computer readable medium of any of clauses 11-15, wherein the one or more statistics comprise at least one of a minimum, a maximum, a mean, a median, a quantile, a standard deviation, a variance, a skew, or a kurtosis.
17. The non-transitory computer readable medium of any of clauses 11-16, wherein computing the first estimate for the PCL for the link comprises performing one or more computations based on the one or more RF link parameters and one or more constants.
18. The non-transitory computer readable medium of any of clauses 11-17, wherein the one or more constants comprise at least one of a thermal noise floor or a noise figure associated with the cellular base station, and the one or more RF link parameters comprise at least one of a path loss (PL), a SINR, and a Reference Signal Received Power (RSRP).
19. The non-transitory computer readable medium of any of clauses 11-18, wherein the one or more computations comprise at least one of summing a first RF link parameter and a second RF link parameter included in the one or more RF link parameters or subtracting a constant from the one or more RF link parameters.
20. In some embodiments, a system comprises a memory that stores instructions, and a processor that is coupled to the memory and, when executing the instructions, is configured to determine one or more radio frequency (RF) link parameters associated with a link between an electronic device and a cellular base station at a first point in time; compute a first estimate for a pseudo-coupling loss (PCL) for the link based on the one or more RF link parameters; determine that the PCL exceeds a maximum coupling loss (MCL) for the link based on the first estimate; and delay transmission of one or more messages from the electronic device over the link until a second point in time that is later than the first point in time.
Any and all combinations of any of the claim elements recited in any of the claims and/or any elements described in this application, in any fashion, fall within the contemplated scope of the present invention and protection. The descriptions of the various embodiments have been presented for purposes of illustration, but are not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments.
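By way of illustration only, the following sketch shows one way the gating logic recited in clauses 1, 2, 4, and 7-10 could be realized in software. The formula in estimate_pcl, the constant values, the threshold values, and all names are assumptions introduced for this sketch; the patent does not fix a particular equation or particular values.

```python
import time
from dataclasses import dataclass

# Assumed constants; clause 8 names a thermal noise floor and a noise figure
# as possible constants but does not fix their values.
THERMAL_NOISE_FLOOR_DBM = -114.0   # assumed: thermal noise over one LTE resource block
NOISE_FIGURE_DB = 5.0              # assumed receiver noise figure
REFERENCE_TX_POWER_DBM = 23.0      # assumed reference transmit power
MCL_DB = 144.0                     # assumed maximum coupling loss for the link
RETRY_DELAY_S = 30.0               # assumed predetermined period (clause 10)

@dataclass
class RFLinkParameters:
    path_loss_db: float
    sinr_db: float
    rsrp_dbm: float
    rsrq_db: float
    ce_level: int

def estimate_pcl(p: RFLinkParameters) -> float:
    """One illustrative pseudo-coupling-loss estimate (not the patented equation):
    infer the received power from the SINR relative to the receiver noise floor
    (thermal noise floor plus noise figure), then take the loss between a
    reference transmit power and that inferred received power."""
    inferred_rx_dbm = THERMAL_NOISE_FLOOR_DBM + NOISE_FIGURE_DB + p.sinr_db
    return REFERENCE_TX_POWER_DBM - inferred_rx_dbm

def signal_condition_ok(p: RFLinkParameters) -> bool:
    """Binary signal-condition check in the spirit of clauses 4 and 14: compare
    the CE parameter and two RF link parameters against thresholds (all
    threshold values here are assumptions)."""
    return p.ce_level <= 1 and p.sinr_db >= -3.0 and p.rsrq_db >= -15.0

def send_when_link_is_adequate(messages, read_params, send) -> bool:
    """Delay transmission while the estimated PCL exceeds the MCL (clause 1),
    re-estimate at a later point in time, and transmit once the PCL no longer
    exceeds the MCL (clause 2). This sketch simply retries after a fixed delay
    and assumes conditions eventually improve."""
    params = read_params()
    while estimate_pcl(params) > MCL_DB or not signal_condition_ok(params):
        time.sleep(RETRY_DELAY_S)   # wait until the "second point in time"
        params = read_params()
    for message in messages:
        send(message)
    return True
```

The fixed retry delay corresponds to the predetermined period of time recited in clause 10; an implementation could equally derive the second point in time adaptively from the aggregated PCL statistics of clauses 5 and 6.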
Aspects of the present embodiments may be embodied as a system, method or computer program product. Accordingly, aspects of the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “module,” a “system,” or a “computer.” In addition, any hardware and/or software technique, process, function, component, engine, module, or system described in the present disclosure may be implemented as a circuit or set of circuits. Furthermore, aspects of the present disclosure may take the form of a computer program product embodied in one or more computer readable medium(s) having computer readable program code embodied thereon. Any combination of one or more computer readable medium(s) may be utilized. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. Aspects of the present disclosure are described above with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine. The instructions, when executed via the processor of the computer or other programmable data processing apparatus, enable the implementation of the functions/acts specified in the flowchart and/or block diagram block or blocks. Such processors may be, without limitation, general purpose processors, special-purpose processors, application-specific processors, or field-programmable gate arrays. The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). 
It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions. While the preceding is directed to embodiments of the present disclosure, other and further embodiments of the disclosure may be devised without departing from the basic scope thereof, and the scope thereof is determined by the claims that follow.
11863244
The figures depict various embodiments of the present technology for purposes of illustration only, wherein the figures use like reference numerals to identify like elements. One skilled in the art will readily recognize from the following discussion that alternative embodiments of the structures and methods illustrated in the figures can be employed without departing from the principles of the present technology described herein.
DETAILED DESCRIPTION
Today, people rely on cable networks for a variety of services, including digital telephone, multimedia entertainment, and Internet connectivity. Increasing growth in these services has spurred continued expansion of cable networks and continued development of services offered through cable network technologies. For example, a cable service provider can offer services such as digital telephone, digital television, and Internet connectivity. In one example, the cable service provider can provide one customer bundled digital television and Internet connectivity services via a broadband connection on a cable network. In another example, the cable service provider can provide another customer bundled digital telephone, digital television, and Internet connectivity services together via another broadband connection on the cable network. Through the use of various devices, such as signal amplifiers, cable networks are able to carry electronic signals across vast distances, allowing cable service providers to provide services to their customers. However, several factors can impact the provision of services via cable networks. For example, electronic signals transmitted through cable networks generally degrade over vast distances. Other environmental factors, such as inclement weather and exposure to water, can also damage cables in the cable network, resulting in signal loss. Signal loss can disrupt the provision of services via cable networks. For example, signal loss can cause tiling in digital television services or cause services to be unavailable altogether. Accordingly, testing, maintenance, and repair of cable networks are vital to optimal delivery of services via the cable networks. Under conventional approaches, a cable service provider may broadcast tones of varying frequency from a cable system source to a cable network. These tones are evaluated to test the frequency response of the cable network. However, testing the cable network under these conventional approaches faces several challenges. For example, locating faults in a cable network can be difficult because a fault can be located anywhere between the cable system source and a location where the tones are being evaluated. Further, broadcasting additional tones to a cable network increases the overall power transmitted through the cable network, which can overload the cable network, causing additional disruptions in service. These challenges become exacerbated as cable networks continue to expand and additional cable networks are deployed. Further, the use of cable networks to deliver more services, such as more digital television channels and greater Internet access speeds, also exacerbates these challenges.
FIG. 1 illustrates an example network 100, according to conventional techniques. The example network 100 can be a hybrid network that incorporates cable network technology and fiber optic network technology. In the example network 100, a headend test unit 110 is coupled to a downstream connection 102 from a headend (not shown) and an upstream connection 112 to the headend.
The downstream connection 102 from the headend is coupled to a transmitter 104 (e.g., forward path fiber laser in a fiber optic network). The transmitter 104 broadcasts signals generated by the headend and the headend test unit 110 to the example network 100. In the example network 100, a node 106 (e.g., fiber node in a fiber optic network) serves as a connection point for downstream traffic and upstream traffic in the example network 100. Further downstream from the node 106, the example network 100 includes amplifiers 108a, 108b, 108c. The amplifiers 108a, 108b, 108c amplify signal strength of the downstream traffic and the upstream traffic in the example network 100, extending the range the downstream traffic and the upstream traffic can travel. In the example network 100, upstream traffic travels to the headend through a receiver 114 (e.g., return path fiber receiver in a fiber optic network). As illustrated in FIG. 1, field test units 116a, 116b, 116c can be situated at various locations in the example network 100. Because the headend test unit 110 is situated close to the headend, a sweep test facilitated by the headend test unit 110 can be broadcast through the example network 100. The sweep test can pose challenges with respect to locating faults in the example network 100. For example, the field test unit 116c can determine a fault in the example network 100 based on a sweep test facilitated by the headend test unit 110. The fault can be located anywhere between the headend test unit 110 and the field test unit 116c, including, for example, at the amplifier 108c, at the amplifier 108a, at the node 106, at the transmitter 104, and along any connection in between. Accordingly, conventional approaches fail to address these and other challenges arising in network technology. An improved approach rooted in network technology and computer technology overcomes the foregoing and other challenges arising in network technology under conventional approaches. The present technology provides solutions that enable accurate measuring of frequency response on a network (e.g., cable network, fiber optic network, hybrid network) through frequency sweep testing. In various embodiments, the present technology provides a remote transmitter test unit that can be physically deployed at various points in a network. The remote transmitter test unit can support generating sweep test signals at the various points. In various embodiments, the present technology provides for on demand sweep testing. A remote transmitter test unit or headend test unit can periodically transmit a query message and, based on a response to the query message, can initiate a sweep test. In various embodiments, the present technology provides for automatic generation of a sweep profile for a sweep test. Based on an analysis of a frequency spectrum on a network, the sweep profile provides parameters for conducting a sweep test. In various embodiments, the present technology provides for Orthogonal Frequency-Division Multiplexing (OFDM) table generation and OFDM sweep testing. An OFDM table for an OFDM channel can be generated based on unmodulated pilot sub-channels or OFDM pilots. An OFDM sweep test can be performed on OFDM channels based on properly placed sweep measurement points within the OFDM channel.
Thus, the present technology provides for technological solutions to technological challenges by providing, for example, a remote transmitter test unit for sweep testing, on demand sweep testing, automatic sweep profile generation, and Orthogonal Frequency-Division Multiplexing (OFDM) table generation and OFDM sweep testing. More details relating to the present technology are provided below.
Sweep Remote Transmitter Test Unit
A traditional headend test unit typically requires an external power source. Further, the traditional headend test unit typically requires a dry, climate-controlled environment. Because of these constraints, the traditional headend test unit is physically located in a main distribution center (e.g., headend) of a cable network. Thus, test signals generated by the traditional headend test unit are present throughout the entire cable network. Because the test signals are present throughout the entire cable network, a detected fault can be located anywhere along the signal path, which makes faults difficult to locate. Further, these test signals increase the overall power transmitted through the cable network, potentially causing further disruptions in service. The present technology provides improvements over the aforementioned and other disadvantages associated with traditional headend test units. In various embodiments, the present technology provides a remote transmitter test unit that can include various improvements over a traditional headend test unit. For example, the remote transmitter test unit can include an integrated power source (e.g., battery). The integrated power source allows the remote transmitter test unit to be portable. The remote transmitter test unit can include a weatherproof enclosure. The remote transmitter test unit can include passive heat dissipation features. The weatherproof enclosure and the passive heat dissipation features allow the remote transmitter test unit to be deployed outdoors in various environments. The various improvements included in the remote transmitter test unit allow the remote transmitter test unit to be physically deployed at various points in a network. As the remote transmitter test unit can be physically deployed at various points in the network, the remote transmitter test unit can facilitate a sweep test at an intermediate point between a main distribution center of the network and an end point in the network. For example, the remote transmitter test unit can facilitate a sweep test at a system test point, a system test node, or a remote physical point in the network. From an intermediate point, the remote transmitter test unit can facilitate a sweep test for a section, or subsection, of the network. Thus, the present technology provides for technological solutions to technological challenges by providing a remote transmitter test unit that includes improvements over a traditional headend test unit. More details relating to the remote transmitter test unit are provided herein.
FIGS. 2A-2D illustrate example views of an example remote transmitter test unit, according to various embodiments of the present technology. The components (e.g., modules, elements, interfaces, blocks, functions, switches, etc.) of the remote transmitter test unit shown in these figures and all figures herein are exemplary only, and other implementations may include additional, fewer, integrated, or different components. Some components may not be shown so as not to obscure relevant details. Some components may be simplified so as to allow focus on relevant details.
The remote transmitter test unit illustrated in these figures, and the remote transmitter test units described in other figures herein, can constitute test equipment in the form of special purpose computers. In some embodiments, the components of the remote transmitter test unit are integrated into a single (or one) device or apparatus. In other embodiments, the components of the remote transmitter test unit can be distributed over two or more devices or apparatuses. The components shown in these figures and all figures herein are exemplary only, and other implementations can include additional, fewer, integrated, or different components.
FIG. 2A illustrates an example front view 200 and an example side view 220 of components of a remote transmitter test unit, according to various embodiments of the present technology. In various embodiments, the remote transmitter test unit can generate sweep tones for a sweep test. As illustrated by the example front view 200, the remote transmitter test unit can include a CPU board 202 within a chassis (e.g., structure, frame, body) of the remote transmitter test unit. The CPU board 202 can include various computing components (e.g., processor, memory, data store) for controlling various processes and functions of the remote transmitter test unit. Within the chassis, the remote transmitter test unit can include an RF board 204. The RF board 204 can include various RF components for measuring and evaluating frequency. The CPU board 202 and the RF board 204 can be stacked using one or more connectors to connect the CPU board 202 and the RF board 204. Stacking the CPU board 202 and the RF board 204 can improve portability of the remote transmitter test unit and make efficient use of space within the chassis. Within the chassis, the remote transmitter test unit can include a battery 206. In an example embodiment, the battery 206 can be a rechargeable 3P3S battery. As illustrated by the example front view 200, the remote transmitter test unit can include RF connections 208 on the chassis of the remote transmitter test unit. In an example embodiment, the RF connections 208 can include two F-connectors, or other types of RF connectors, one for forward signals and one for return signals. The two F-connectors can include field replaceable barrels. Waterproof slide-on boots can be used to cover the RF connections 208 to maintain weatherproofing for the remote transmitter test unit. On the chassis, the remote transmitter test unit can include a user interface panel 210. In an example embodiment, the user interface panel 210 can include user buttons for power and other functions. The user interface panel 210 can include one or more USB interfaces for communication with various USB devices (e.g., USB data store, Bluetooth dongle, Wi-Fi dongle). The user interface panel 210 can include one or more RJ-45 connectors for communication through an Ethernet connection. The user interface panel 210 can include LEDs to indicate, for example, power, active forward signal, active return signal, battery charging, and battery status. A weatherproof door and/or a sealed membrane can cover the user interface panel 210 to maintain weatherproofing for the remote transmitter test unit. The example side view 220 of the remote transmitter test unit shows the CPU board 202 and the RF board 204 stacked within the chassis of the remote transmitter test unit. The CPU board 202 can be stacked with the RF board 204 using one or more connectors, such as a connector 222.
In an example embodiment, the connector 222 is a 40-pin connector connecting the CPU board 202 and the RF board 204. The CPU board 202 can include various computing components (e.g., processor, memory, data store). The various computing components can generate heat. The RF board 204 can include various RF components for measuring and evaluating frequency that also generate heat. As illustrated in the side view 220, heat generating components 224a, 224b of the CPU board 202 can be located on a side of the CPU board 202 that is opposite from (or not facing) the RF board 204. Heat generating components 224c, 224d of the RF board 204 can be located on a side of the RF board 204 that is opposite from (or not facing) the CPU board 202. The heat generating components 224a, 224b of the CPU board 202 and the heat generating components 224c, 224d of the RF board 204 can be located on sides of the CPU board 202 and the RF board 204 opposite from (or not facing) the connector 222. The heat generating components 224a, 224b of the CPU board 202 can be located on a side of the CPU board 202 facing outward toward the chassis of the remote transmitter test unit. The heat generating components 224c, 224d of the RF board 204 can be located on a side of the RF board 204 facing outward toward the chassis of the remote transmitter test unit. As illustrated in the side view 220, the remote transmitter test unit can include thermal materials 226a, 226b. The thermal materials 226a, 226b can contact the heat generating components 224a, 224b, 224c, 224d and the chassis of the remote transmitter test unit. The thermal materials 226a, 226b can facilitate heat transfer from the heat generating components 224a, 224b, 224c, 224d through the thermal materials 226a, 226b to the chassis of the remote transmitter test unit, where the heat can be dissipated outside the remote transmitter test unit. In an example embodiment, the chassis of the remote transmitter test unit is a metal chassis that further facilitates heat dissipation. With the locations of the heat generating components 224a, 224b, 224c, 224d and the use of the thermal materials 226a, 226b, concerns with heat generation in a compact device like the remote transmitter test unit can be alleviated. Thus, as illustrated in FIG. 2A, the remote transmitter test unit can include various portability, weatherproofing, and heat dissipation features that allow the remote transmitter test unit to be physically deployed at various points in a network.
FIG. 2B illustrates an example view 240 of a removable protector 248 of a remote transmitter test unit, according to various embodiments of the present technology. The example view 240 depicts the use of the remote transmitter test unit in an outdoor environment. As illustrated in the example view 240, the remote transmitter test unit can be enclosed in the removable protector 248. In an example embodiment, the removable protector 248 can be made of a hard plastic or rubberized material for absorbing bumps and shocks that may arise from being in an outdoor environment. The removable protector 248 can include attachment points 246a, 246b, 246c, 246d from which various accessories can be attached. In the example view 240, carabiners 242a, 242b are attached at attachment points 246a, 246b on the removable protector 248. The carabiners 242a, 242b and the attachment points 246a, 246b allow the remote transmitter test unit to hang from a support rod 244. The removable protector 248 can include openings through which connectors of the remote transmitter test unit can be accessed.
As illustrated in the example view 240, the remote transmitter test unit can interface with a network through coaxial cables 252a, 252b, which are connected to the remote transmitter test unit via F-connectors 250a, 250b of the remote transmitter test unit that extend through openings of the removable protector 248.
FIG. 2C illustrates an example view 260 of a removable protector 270 of a remote transmitter test unit, according to various embodiments of the present technology. As illustrated in the example view 260, the remote transmitter test unit can be enclosed in the removable protector 270. The removable protector 270 can include attachment points 264a, 264b, 264c, 264d from which various accessories can be attached. In an example embodiment, a carry strap 262 can be attached to the attachment points 264a, 264b, 264c, 264d. The carry strap 262 can allow for easy transport of the remote transmitter test unit. As illustrated in the example view 260, the remote transmitter test unit can include two F-connectors 268a, 268b. Each F-connector 268a, 268b can be covered with a waterproof slide-on boot 266a, 266b. The waterproof slide-on boot 266a allows the remote transmitter test unit to maintain weatherproofing while an F-connector 268a is not in use. As illustrated in the example view 260, the removable protector 270 (or the remote transmitter test unit) can include a weatherproof door 272 that covers a user interface of the remote transmitter test unit. The weatherproof door 272 allows the remote transmitter test unit to maintain weatherproofing while the user interface of the remote transmitter test unit is not in use.
FIG. 2D illustrates an example view 280 of components of a remote transmitter test unit, according to various embodiments of the present technology. As illustrated in the example view 280, the remote transmitter test unit can include a CPU board 282 and an RF board 284. The CPU board 282 can include a connector port 286 to facilitate a connection between the CPU board 282 and the RF board 284. The RF board 284 can include a connector port 288 to facilitate the connection between the CPU board 282 and the RF board 284. In an example embodiment, the connector port 286 of the CPU board 282 and the connector port 288 of the RF board 284 are 40-pin connectors. The connector port 286 of the CPU board 282 and the connector port 288 of the RF board 284 can be connected using a BUS board. In various embodiments, a remote transmitter test unit can be operated to facilitate sweep testing at various points in a network. The remote transmitter test unit can be deployed at an intermediate point in the network. The remote transmitter test unit can interface with the network via one or more connectors (e.g., F-connectors). While interfaced with the network, the remote transmitter test unit can receive signals from field test units deployed on the network. For example, the remote transmitter test unit can receive a request to conduct a sweep test on the network. The remote transmitter test unit can facilitate the sweep test in response to the request. For example, the remote transmitter test unit can facilitate an on demand forward sweep, as further described herein. In addition, the remote transmitter test unit can generate a sweep profile for conducting a sweep test, as further described herein. The remote transmitter test unit also can provide for Orthogonal Frequency-Division Multiplexing (OFDM) table generation and OFDM sweep testing, as further described herein.
FIG. 8A illustrates an example method 800, according to various embodiments of the present technology.
Some or all of the functionality described with respect to the example method 800 can be performed by a remote transmitter test unit, such as the remote transmitter test unit described with respect to FIGS. 2A-2D, or a field test unit. It should be appreciated that there can be additional, fewer, or alternative steps performed in similar or alternative orders, or in parallel, within the scope of the various embodiments discussed herein unless otherwise stated. At block 802, the example method 800 interfaces with a section of a network. At block 804, the example method 800 receives a sweep request from a field test unit on the section of the network. At block 806, the example method 800 generates sweep tones on the section of the network in response to the sweep request. It is contemplated that there can be many other uses, applications, and/or variations associated with the various embodiments of the present technology. For example, various embodiments of the present technology can learn, improve, and/or be refined over time.
Sweep On-Demand
A traditional headend test unit typically facilitates a sweep test on a cable network by continuously broadcasting sweep tones throughout the cable network. These sweep tones are evaluated to test the frequency response of the cable network. Continuously broadcasting sweep tones throughout the cable network increases the overall power transmitted through the cable network, which can overload the cable network and cause disruptions in service. The present technology provides improvements over the aforementioned and other disadvantages associated with traditional headend test units. In various embodiments, the present technology provides for on demand sweep testing. For example, a communication channel (e.g., located at a center frequency of 20 MHz) in a network can be established for communication between a headend test unit (or a remote transmitter test unit) and a field test unit. The headend test unit (or the remote transmitter test unit) can listen for a sweep test request from the field test unit. In some cases, the headend test unit (or the remote transmitter test unit) can periodically (e.g., once every 2 minutes) transmit a query message on the communication channel. The field test unit can transmit a sweep test request in response to the query message. Whether transmitted independently or in response to the query message, the sweep test request indicates to the headend test unit (or the remote transmitter test unit) to initiate an on demand sweep test. Upon completion of the sweep test, the headend test unit (or the remote transmitter test unit) can return to listening on the communication channel for a new sweep test request. Thus, the present technology provides for technological solutions to technological challenges by providing on demand sweep testing that can reduce overall power transmitted through a network, thereby reducing disruptions in service caused by overloading the network. More details relating to on demand sweep testing are provided herein.
FIG. 3A illustrates an example system 300 including a headend test unit 302, according to various embodiments of the present technology. The components shown in these figures and all figures herein are exemplary only, and other implementations can include additional, fewer, integrated, or different components. Some components may not be shown so as not to obscure relevant details.
The example system 300 illustrates examples of a headend test unit 302 that can implement some or all of the functionality of the various embodiments described with respect to the remote transmitter test unit described with respect to FIGS. 2A-2D. It should be understood that there can be additional, fewer, or alternative steps performed in similar or alternative orders, or in parallel, based on the various features and embodiments discussed herein unless otherwise stated. As illustrated in FIG. 3A, the example system 300 can include the headend test unit 302 connected to a field test unit 310 through a network 308. The network 308 can include a downstream communication channel 306 for downstream traffic from the headend test unit 302 to the field test unit 310. For example, the headend test unit 302 can transmit a query message through the downstream communication channel 306. In addition, the headend test unit 302 can transmit information associated with a sweep test through the downstream communication channel 306. For example, the headend test unit 302 can transmit a sweep profile associated with the sweep test and timing sync messages through the downstream communication channel 306. The network 308 can include an upstream communication channel 304 for upstream traffic from the field test unit 310 to the headend test unit 302. For example, the field test unit 310 can transmit a sweep test request to the headend test unit 302 through the upstream communication channel 304.
FIGS. 3B-3E illustrate example methods, according to various embodiments of the present technology. Some or all of the functionality described with respect to the example methods can be performed by a headend test unit, a remote transmitter test unit (e.g., the remote transmitter test unit described with respect to FIGS. 2A-2D), or a field test unit. It should be appreciated that there can be additional, fewer, or alternative steps performed in similar or alternative orders, or in parallel, within the scope of the various embodiments discussed herein unless otherwise stated.
FIG. 3B illustrates an example method 320 associated with continuously transmitting a sweep test, according to various embodiments of the present technology. At block 322, the example method 320 transmits a sweep profile. The sweep profile can include a channel table describing active channels in a network. The sweep profile also can include start frequencies and stop frequencies associated with the active channels in the network. In an example embodiment, a headend test unit (or a remote transmitter test unit) can automatically generate the sweep profile, as further described herein. The headend test unit (or the remote transmitter test unit) transmits the sweep profile to a field test unit through a network. The sweep profile can be associated with a checksum that can be used to determine whether a sweep profile stored on the field test unit needs to be updated. At block 324, the example method 320 sets a loop counter to zero. At block 326, the example method 320 transmits a timing sync message. The timing sync message can provide a reference for a field test unit to synchronize measurement and evaluation of frequency responses on a network with sweep tones transmitted on the network. In an example embodiment, a headend test unit (or a remote transmitter test unit) transmits the timing sync message to a field test unit through a network. At block 328, the example method 320 transmits sweep tones. A field test unit can measure and evaluate the sweep tones as part of a sweep test.
At block 330, the example method 320 increments the loop counter. At block 332, the example method 320 determines whether the loop counter satisfies a threshold. The threshold can be associated with a number of times sweep tones are transmitted for a complete sweep test. Based on a determination that the loop counter does not satisfy the threshold and that the sweep test is not completed, the example method 320 returns to block 328 to transmit sweep tones. Based on a determination that the loop counter satisfies the threshold and that the sweep test is completed, the example method 320 returns to block 322 to transmit a sweep profile. In various embodiments, a headend test unit (or a remote transmitter test unit) can continuously transmit a sweep test. By continuously transmitting the sweep test, the headend test unit (or the remote transmitter test unit) can perform sweep tests with field test units that do not have the capability to request an on demand sweep test.
FIG. 3C illustrates an example method 340 associated with receiving a continuously transmitted sweep test, according to various embodiments of the present technology. At block 342, the example method 340 receives a message. In an example embodiment, a field test unit receives a message from a headend test unit (or a remote transmitter test unit). The received message can be a sweep profile message 344 or a timing sync message 348. If the message received is a sweep profile message 344, then, at block 346, the example method 340 updates a stored sweep profile. The stored sweep profile is updated with the information in the sweep profile message. For example, a stored sweep profile including a channel table describing active channels in a network as well as start frequencies and stop frequencies associated with the active channels can be updated with information in a sweep profile message. In some cases, the stored sweep profile is updated based on a comparison of a checksum associated with the sweep profile message and a checksum associated with the stored sweep profile. The stored sweep profile can be updated if the checksum associated with the stored sweep profile does not match the checksum associated with the sweep profile message. The stored sweep profile can be maintained if the checksum associated with the stored sweep profile matches the checksum associated with the sweep profile message. After the update of the stored sweep profile, the example method 340 waits to receive a next message and returns to block 342. If the message received is a timing sync message 348, then, at block 350, the example method 340 measures sweep tones. In general, sweep tones associated with a sweep test follow a timing sync message. The sweep tones can be measured based on receipt of a timing sync message indicating that sweep tones will follow the timing sync message. After measurement of the sweep tones, the example method 340 waits to receive a next message and returns to block 342.
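By way of illustration only, the control flow of FIG. 3B (blocks 322-332) and the receive-side handling of FIG. 3C (blocks 342-350) might be sketched as follows. The channel transport object, the message tags, the CRC-32 checksum, and the loop threshold are assumptions made for this sketch, not details fixed by the disclosure.

```python
import zlib

SWEEPS_PER_PROFILE = 10  # assumed threshold for the loop counter (block 332)

def profile_checksum(profile_bytes: bytes) -> int:
    # The disclosure only calls for "a checksum"; CRC-32 is an arbitrary choice.
    return zlib.crc32(profile_bytes)

def continuous_sweep_transmit(channel, sweep_profile: bytes, sweep_tones):
    """Transmit loop loosely following FIG. 3B; channel is a hypothetical
    transport object with a send method."""
    while True:
        # Block 322: transmit the sweep profile with its checksum.
        channel.send(("SWEEP_PROFILE", sweep_profile, profile_checksum(sweep_profile)))
        loops = 0                                   # block 324
        while loops < SWEEPS_PER_PROFILE:           # block 332: threshold check
            channel.send(("TIMING_SYNC",))          # block 326
            for tone in sweep_tones:                # block 328
                channel.send(("SWEEP_TONE", tone))
            loops += 1                              # block 330

def handle_message(message, stored_profile):
    """Receive-side dispatch loosely following FIG. 3C."""
    kind = message[0]
    if kind == "SWEEP_PROFILE":
        _, received, received_sum = message
        # Block 346: update only when the stored checksum does not match;
        # otherwise the stored profile is maintained.
        if stored_profile is None or profile_checksum(stored_profile) != received_sum:
            stored_profile = received
    elif kind == "TIMING_SYNC":
        pass  # block 350: measure the sweep tones that follow the sync message
    return stored_profile
```

Comparing checksums rather than whole profiles keeps the per-message work on the field test unit small, which matters when the profile itself is retransmitted at the start of every sweep cycle.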
FIG. 3D illustrates an example method 360 associated with transmitting an on demand sweep test, according to various embodiments of the present technology. At block 362, the example method 360 sends a query. In an example embodiment, the query is sent by a remote transmitter test unit (or a headend test unit) to a field test unit through a network. At block 364, the example method 360 waits for a reply. If no reply is received, the example method 360 returns to block 362 and sends another query as part of a query listen mode 376. In an example embodiment, a remote transmitter test unit (or a headend test unit) periodically sends queries and waits for a reply from a field test unit as part of a query listen mode. If a reply is received, then, at block 366, the example method 360 transmits a sweep profile. The sweep profile can include a channel table describing active channels in a network. The sweep profile also can include start frequencies and stop frequencies associated with the active channels in the network. In an example embodiment, a remote transmitter test unit (or a headend test unit) can automatically generate the sweep profile, as further described herein. The remote transmitter test unit (or the headend test unit) transmits the sweep profile to a field test unit through a network. The sweep profile can be associated with a checksum. The field test unit can determine whether a sweep profile stored at the field test unit matches the transmitted sweep profile based on the checksum. The field test unit can update the sweep profile stored at the field test unit if the checksum of the transmitted sweep profile does not match a checksum of the stored sweep profile. The field test unit can maintain the stored sweep profile if the checksum of the transmitted sweep profile matches the checksum of the stored sweep profile. At block 368, the example method 360 transmits a timing sync message. The timing sync message can indicate to a field test unit that sweep tones for a sweep test are about to be transmitted. At block 370, the example method 360 transmits sweep tones. In an example embodiment, a sweep test includes a set of sweep tones (e.g., 401 sweep tones) transmitted incrementally for a field test unit to measure and evaluate. At block 372, the example method 360 sends a query. The query can provide an opportunity for a field test unit to reply and request a subsequent sweep test. At block 374, the example method 360 waits for a reply. If a reply is received, the example method 360 returns to block 368 to initiate the subsequent sweep test. Sweep tests can be repeated as many times as they are requested as part of an active test mode 378. If no reply is received, the example method 360 returns to block 362 and returns to query listen mode 376. In an example embodiment, a remote transmitter test unit can automatically turn off if no reply is received. Not receiving a reply can indicate that an on demand sweep test has ended. As illustrated by the example method 360, an on demand sweep test can be repeated based on replies received from a field test unit. For example, each time a reply is received in response to a query during active test mode, a sweep test can be repeated. In comparison, a continuously transmitted sweep test can repeat a sweep test for a predetermined number of times.
FIG. 3E illustrates an example method 380 associated with requesting an on demand sweep test and receiving the on demand sweep test, according to various embodiments of the present technology. At block 382, the example method 380 waits for a query. In an example embodiment, a field test unit waits for a query from a remote transmitter test unit (or a headend test unit). At block 384, the example method 380 sends a reply. The reply can be sent in response to a query received from a remote transmitter test unit (or a headend test unit). In an example embodiment, a field test unit can send a reply to a query to initiate a sweep test from a remote transmitter test unit (or a headend test unit). At block 386, the example method 380 receives a sweep profile.
In an example embodiment, a field test unit can update a stored sweep profile based on a checksum associated with the stored sweep profile and a checksum associated with a received sweep profile. The stored sweep profile can be updated with the received sweep profile if the checksums do not match, and the stored sweep profile can be maintained if the checksums match. At block 388, the example method 380 receives a timing sync message. The timing sync message can indicate that sweep tones for a sweep test are about to be transmitted. At block 390, the example method 380 measures the RF power of the sweep tones. The sweep tones can be part of a sweep test following the timing sync message. At block 392, the example method 380 waits for a query. The query can provide an opportunity to request a new sweep test. At block 394, the example method 380 sends a reply. The reply can request the new sweep test. Upon request of the new sweep test, the example method 380 proceeds to block 388, where a new timing sync message is received, initiating a new sweep test. In an example embodiment, a field test unit can prevent initiation of new sweep tests by ignoring the query and not sending a request for a new sweep test. In various embodiments, the present technology provides for reverse sweep testing a network. A reverse sweep test can involve a field test unit transmitting sweep tones through a network. For example, a field test unit can establish communications with a headend test unit (or a remote transmitter test unit) through a forward communication channel and a reverse communication channel. In some cases, the field test unit can receive from the headend test unit (or the remote transmitter test unit), through the forward communication channel, information related to a frequency on which the reverse communication channel is operating. Based on the received information, the field test unit can send a message through the reverse communication channel to initiate a reverse sweep test. In some cases, based on receipt of the message, the headend test unit (or the remote transmitter test unit) can transmit to the field test unit a sweep profile for the reverse sweep test through the forward communication channel. The field test unit can conduct the reverse sweep test based on the sweep profile. During the reverse sweep test, the field test unit transmits sweep tones through the network. The headend test unit (or the remote transmitter test unit) can evaluate and measure the received sweep tones to determine a frequency response of the network. The frequency response can be provided to the field test unit through the forward communication channel as a sweep test result.
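Continuing the illustration above, the on demand behavior of FIG. 3D, with its query listen mode 376 and active test mode 378, might be sketched as follows. The channel object, its receive-with-timeout semantics, and the two-minute query interval (which simply echoes the earlier example) are assumptions made for this sketch.

```python
QUERY_INTERVAL_S = 120.0  # e.g., one query every 2 minutes, per the example above

def on_demand_sweep_transmit(channel, sweep_profile: bytes, sweep_tones):
    """On demand transmit loop loosely following FIG. 3D (blocks 362-374).
    channel.receive(timeout=...) is assumed to return None when no reply
    arrives within the timeout."""
    while True:
        # Query listen mode 376: periodically query and wait for a field test unit.
        channel.send(("QUERY",))                        # block 362
        if channel.receive(timeout=QUERY_INTERVAL_S) is None:
            continue                                    # block 364: no reply, query again
        channel.send(("SWEEP_PROFILE", sweep_profile))  # block 366
        # Active test mode 378: repeat sweeps for as long as replies keep arriving.
        while True:
            channel.send(("TIMING_SYNC",))              # block 368
            for tone in sweep_tones:                    # block 370: e.g., 401 tones
                channel.send(("SWEEP_TONE", tone))
            channel.send(("QUERY",))                    # block 372
            if channel.receive(timeout=QUERY_INTERVAL_S) is None:
                break                                   # block 374: back to query listen mode
```

In this sketch, silence from the field test unit is what ends a test session, which matches the observation above that not receiving a reply can indicate that an on demand sweep test has ended.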
FIG. 4 illustrates an example frequency diagram 400 associated with forward and reverse sweep tests, according to various embodiments of the present technology. As illustrated in the example frequency diagram 400, a frequency spectrum of a network can include Single Carrier Quadrature Amplitude Modulation (SC-QAM) upstream channels 402 and SC-QAM downstream channels 404. As illustrated in this example, the SC-QAM upstream channels 402 and the SC-QAM downstream channels 404 are frequency ranges spaced apart from each other. In an example sweep test of the network, a field test unit and a headend test unit (or a remote transmitter test unit) can communicate through a forward communication channel 412 and a reverse communication channel 414. In a forward sweep test, the headend test unit (or the remote transmitter test unit) can transmit forward sweep tones 410a, 410b that are received by the field test unit. In a reverse sweep test, the field test unit can transmit reverse sweep tones 406a, 406b to the headend test unit (or the remote transmitter test unit). In this example, a forward communication channel 408 has been established. The reverse sweep tones 406a, 406b are not sent on the forward communication channel 408. As illustrated in this example, the present technology provides for reverse sweep testing in accordance with various embodiments. It should be understood that the various examples described herein with respect to forward sweep testing can be applied to reverse sweep testing unless otherwise stated.
FIG. 8B illustrates an example method 830, according to various embodiments of the present technology. It should be appreciated that there can be additional, fewer, or alternative steps performed in similar or alternative orders, or in parallel, within the scope of the various embodiments discussed herein unless otherwise stated. At block 832, the example method 830 receives a sweep request in response to a periodic query transmission. At block 834, the example method 830 provides a sweep profile for measuring sweep tones on a network. At block 836, the example method 830 generates a timing synchronization message. At block 838, the example method 830 generates sweep tones subsequent to provision of the timing synchronization message. It is contemplated that there can be many other uses, applications, and/or variations associated with the various embodiments of the present technology. For example, various embodiments of the present technology can learn, improve, and/or be refined over time.
Sweep Profile Auto Generation
A sweep test performed by a traditional headend test unit typically relies on a manually generated sweep profile. The manually generated sweep profile generally consists of manually entered data from a cable network plan associated with a cable network. The process of manually generating a sweep profile can be tedious and prone to human error. Further, because the cable network plan may not accurately reflect the actual frequencies being used on the cable network, the manually generated sweep profile can be inaccurate. Further, because the sweep test performed by the traditional headend test unit is transmitted across the cable network, the manually generated sweep profile cannot account for varying characteristics of different sections of the cable network. The present technology provides improvements over the foregoing and other disadvantages associated with manually generated sweep profiles. In various embodiments, the present technology provides for automatic generation of a sweep profile for a sweep test. For example, a field test unit can determine spectrum data associated with a network based on a scan of a frequency spectrum on the network. The field test unit can analyze the spectrum data to determine channel characteristics, such as channel frequencies and channel types, associated with channels on the network. Based on the channel characteristics, the field test unit can generate a sweep profile for conducting a sweep test on the network.
The sweep profile can include, for example, start frequencies and stop frequencies associated with the channels on the network, a communication frequency for communications from a remote transmitter test unit (or a headend test unit) to the field test unit, guardband frequencies associated with the channels on the network, and transmission levels for sweep tones of the sweep test. The field test unit can store the sweep profile to memory and provide the sweep profile to a remote transmitter test unit (or a headend test unit). The remote transmitter test unit (or the headend test unit) can initiate a sweep test based on the sweep profile. While the foregoing example discussed automatic generation of a sweep profile by a field test unit, this is just one illustration; headend test units and remote transmitter test units likewise can automatically generate sweep profiles. More details relating to automatic generation of sweep profiles are provided herein.
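By way of illustration only, assembling such a sweep profile from scanned channel characteristics might be sketched as follows. The type names, the gap-based guardband heuristic, and the assumption that at least one gap of empty spectrum exists are all introduced for this sketch; the 15 dB backoff mirrors the example transmission level discussed below.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class DetectedChannel:
    start_mhz: float
    stop_mhz: float
    kind: str  # e.g., "analog", "QAM", "OFDM"

@dataclass
class SweepProfile:
    channel_table: List[DetectedChannel]
    guardband_table: List[Tuple[float, float, str]]  # (start, stop, flag)
    comm_freq_mhz: float      # forward communication channel in empty spectrum
    tx_level_dbmv: float      # sweep tone transmission level

def generate_sweep_profile(channels: List[DetectedChannel],
                           avg_channel_power_dbmv: float) -> SweepProfile:
    channels = sorted(channels, key=lambda c: c.start_mhz)
    # Guardband table: the gaps between adjacent active channels, flagged so
    # that no sweep tone is transmitted inside an active channel.
    guardbands = [(a.stop_mhz, b.start_mhz, "skip")
                  for a, b in zip(channels, channels[1:])
                  if b.start_mhz > a.stop_mhz]
    # Place the communication channel in the widest stretch of empty spectrum
    # (this sketch assumes at least one gap exists).
    widest = max(guardbands, key=lambda g: g[1] - g[0])
    comm_freq_mhz = (widest[0] + widest[1]) / 2.0
    # Transmission level: e.g., 15 dB below the average channel power, low
    # enough to avoid overloading the network yet high enough for stable tones.
    return SweepProfile(channel_table=channels,
                        guardband_table=guardbands,
                        comm_freq_mhz=comm_freq_mhz,
                        tx_level_dbmv=avg_channel_power_dbmv - 15.0)
```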
FIG. 5 illustrates an example method 500 associated with automatic generation of sweep profiles, according to various embodiments of the present technology. Some or all of the functionality described with respect to the example method 500 can be performed by a headend test unit, a remote transmitter test unit (e.g., the remote transmitter test unit described with respect to FIGS. 2A-2D), or a field test unit. The sweep profiles generated based on the example method 500 can be used in a sweep test, such as the sweep tests described with respect to FIGS. 3A-3E. It should be appreciated that there can be additional, fewer, or alternative steps performed in similar or alternative orders, or in parallel, within the scope of the various embodiments discussed herein unless otherwise stated. As illustrated in FIG. 5, at block 504, the example method 500 initiates a sweep of start and stop frequencies. In an example embodiment, a user can initiate a sweep of start and stop frequencies through an input command provided to a field test unit (or a headend test unit or remote transmitter test unit). At block 506, the example method 500 scans a frequency spectrum of a network and stores spectrum data. In an example embodiment, the field test unit (or the headend test unit or remote transmitter test unit) scans a frequency spectrum on a network and stores spectrum data associated with the network. The spectrum data can include, for example, amplitudes and phases of frequencies transmitted through the network. At block 508, the example method 500 determines channel characteristics associated with the network. In an example embodiment, the field test unit (or the headend test unit or remote transmitter test unit) determines channel characteristics, such as channel frequencies and channel types, associated with channels on the network. Channel frequencies can include, for example, start frequencies associated with a frequency at which a channel starts and stop frequencies associated with a frequency at which the channel stops. For example, a television channel can be 6 MHz wide and start at, for example, 54 MHz and stop at, for example, 60 MHz. Channel types can include, for example, analog signals, digital signals (e.g., QAM, ISDB-T), and Orthogonal Frequency-Division Multiplexing (OFDM) signals, which can be a type of digital signal. At block 510, the example method 500 derives a channel table. In an example embodiment, the field test unit (or the headend test unit or remote transmitter test unit) derives a channel table based on the channel characteristics associated with the channels on the network. The channel table can describe active channels in the network, including the channel characteristics associated with the active channels. At block 512, the example method 500 creates a guardband table from the channel table. A guardband can be a narrow frequency range that separates two ranges of frequency. The guardband allows the two ranges to avoid interference from each other. In an example embodiment, the field test unit (or the headend test unit or remote transmitter test unit) creates a guardband table that identifies frequency ranges of guardbands in the network. A frequency range of a guardband in the guardband table can be identified by a frequency, a value above the frequency indicating an upper bound of the frequency range, and a value below the frequency indicating a lower bound of the frequency range. The frequency range of the guardband in the guardband table can be associated with a flag indicating that a sweep tone is not to be transmitted in the frequency range. In this regard, a sweep test can avoid transmitting sweep tones in the frequency ranges that the guardband table flags, thereby avoiding interference to channels in the network. Flags in the guardband table can also indicate other actions to be performed with respect to the frequency ranges. For example, a flag in the guardband table can indicate an associated frequency range is to be measured. In some cases, the flag can indicate that a frequency is to be measured by peak power or to be measured by average power. At block 514, the example method 500 generates a sweep profile. The sweep profile can be generated based on, for example, the channel characteristics, the channel table, and the guardband table. In an example embodiment, the field test unit (or the headend test unit or remote transmitter test unit) generates the sweep profile to include start frequencies and stop frequencies associated with channels in the network, a forward communication channel in a section of empty spectrum for communication with a remote transmitter test unit (or a headend test unit), a guardband table, and a sweep test transmission level. The sweep test transmission level (or power) can be based on an average channel power. For example, a sweep test transmission level can be 15 dB below an average channel power of a network. In general, a sweep test transmission level that is too low can result in sweep tones that are unstable, resulting in unstable measurements. A sweep test transmission level that is too high can overload a network and cause interference to neighboring frequencies. At block 516, the example method 500 stores the sweep profile. The sweep profile can be stored in a data store of the field test unit (or the headend test unit or remote transmitter test unit). At block 518, the example method 500 connects the field test unit to a headend test unit (or a remote transmitter test unit). The field test unit can connect to the headend test unit (or the remote transmitter test unit) via a communication channel in the network or, in some cases, via a connection outside the network.
The sweep profile can be transferred to the headend test unit (or the remote transmitter test unit) via the communication channel, or the sweep profile can be transferred to the field test unit from the headend test unit (or the remote transmitter test unit). At block 520, the example method 500 starts a sweep test. The sweep test can be conducted based on the sweep profile.
FIG. 8C illustrates an example method 850, according to various embodiments of the present technology. It should be appreciated that there can be additional, fewer, or alternative steps performed in similar or alternative orders, or in parallel, within the scope of the various embodiments discussed herein unless otherwise stated. At block 852, the example method 850 determines spectrum data based on a scan of frequencies on a network. At block 854, the example method 850 generates a channel table including channel frequencies and channel types associated with the network based on the spectrum data. At block 856, the example method 850 generates a sweep profile associated with the network based on the channel table. At block 858, the example method 850 performs a sweep test based on the sweep profile. It is contemplated that there can be many other uses, applications, and/or variations associated with the various embodiments of the present technology. For example, various embodiments of the present technology can learn, improve, and/or be refined over time.
OFDM Table Generation and Sweeping
A sweep test performed by a traditional headend test unit typically does not account for Orthogonal Frequency-Division Multiplexing (OFDM) channels in a network. In general, OFDM channels occupy a large, continuous portion of a cable frequency spectrum. A typical OFDM channel can have a bandwidth between 24 MHz and 192 MHz. Because of the continuous nature of the OFDM channel, traditional sweep tones cannot be inserted in the OFDM channel. Thus, the sweep test performed by the traditional headend test unit does not account for OFDM channels. The present technology provides improvements over the aforementioned and other disadvantages associated with sweep tests performed by traditional headend test units. In various embodiments, the present technology provides for Orthogonal Frequency-Division Multiplexing (OFDM) table generation and OFDM sweep testing. For example, an OFDM table can be generated based on pilot subchannels, or OFDM pilots, in OFDM channels. A sweep test can include the OFDM pilots as frequencies at which to measure frequency responses. Thus, the sweep test can account for OFDM channels. More details relating to OFDM table generation and OFDM sweep testing are provided herein.
FIG. 6A illustrates an example method 600 associated with OFDM table generation and OFDM sweep testing, according to various embodiments of the present technology. Some or all of the functionality described with respect to the example method 600 can be performed by a headend test unit, a remote transmitter test unit (e.g., the remote transmitter test unit described with respect to FIGS. 2A-2D), or a field test unit. The OFDM tables generated based on the example method 600 can be included in an automatically generated sweep profile, such as the sweep profiles described with respect to FIG. 5. The OFDM sweep test described with respect to the example method 600 can be incorporated in a sweep test, such as the sweep tests described with respect to FIGS. 3A-3E.
It should be appreciated that there can be additional, fewer, or alternative steps performed in similar or alternative orders, or in parallel, within the scope of the various embodiments discussed herein unless otherwise stated. As illustrated inFIG.6A, at block602, the example method600obtains OFDM channel information. In an example embodiment, a field test unit obtains OFDM channel information from a physical link channel (PLC) within an OFDM channel. The PLC can carry an OFDM Channel Description (OCD) message that contains the OFDM channel information. At block604, the example method600extracts pilot frequencies. In an example embodiment, a field test unit extracts pilot frequencies from the OFDM channel information obtained from the OCD message delivered through the PLC within the OFDM channel. The OFDM channel information in the OCD message can include, for example, an OFDM channel ID, subchannel spacing, and subchannel assignments for the OFDM channel. The OFDM channel information in the OCD message can also indicate which subchannels are pilot subchannels. The frequencies corresponding to the pilot subchannels can be the pilot frequencies. At block606, the example method600determines guardband bandwidth. The guardband bandwidth can be determined based on spacing from a measurement point. In an example embodiment, a field test unit identifies frequency ranges of pilot subchannels based on OFDM channel information in an OCD message. The OFDM channel information in the OCD message can include subchannel spacing and subchannel assignments from which the frequency ranges of the pilot subchannels can be determined. At block608, the example method600aligns OFDM guardbands with subcarrier pilot frequencies. A guardband can be a frequency range associated with a flag indicating the frequency range is to be skipped or measured. In an example embodiment, a field test unit can align OFDM guardbands with the frequency ranges of the pilot subchannels and associate the OFDM guardbands with flags indicating the frequency ranges of the pilot subchannels are to be measured by their average power. At block610, the example method600adjusts OFDM guardband bandwidth. The OFDM guardband bandwidth can be adjusted to avoid interference from neighboring frequencies of the OFDM guardbands. At block612, the example method600merges OFDM guardbands into a sweep profile. The OFDM guardbands can be included in a sweep profile for conducting a sweep test. During a sweep test, the OFDM guardbands can indicate subcarrier pilot frequencies in OFDM channels of a cable network. The sweep profile can identify the subcarrier pilot frequencies in the OFDM channels to be measured in the sweep test. Sweep tones can be prevented from being transmitted at the OFDM guardbands. The frequency response of the OFDM guardbands can be measured based on the subcarrier pilot frequencies associated with the OFDM guardbands. FIG.6Billustrates an example frequency diagram650associated with a sweep test including OFDM channels, according to various embodiments of the present technology. As illustrated in the example frequency diagram650, a frequency spectrum of a network can include Single Carrier Quadrature Amplitude Modulation (SC-QAM) channels652,654,656and an OFDM channel658. As illustrated in this example, the SC-QAM channels652,654,656are frequency ranges spaced apart from each other. The OFDM channel658is a continuous frequency range. The OFDM channel658can include subcarrier pilot frequencies666. 
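To make the pilot-based approach concrete before walking through the example ofFIG.6B, here is a minimal sketch of blocks602-612, reusing the Guardband record from the earlier sketch. The OCD field names used here (subcarrier_zero_hz, subcarrier_spacing_hz, pilot_subcarriers) are hypothetical stand-ins for the corresponding contents of an OCD message, not names from the specification.

    def ofdm_guardbands_from_ocd(ocd):
        f0 = ocd["subcarrier_zero_hz"]          # block 602: OFDM channel info
        spacing = ocd["subcarrier_spacing_hz"]  # e.g. 25 kHz or 50 kHz
        guardbands = []
        for k in ocd["pilot_subcarriers"]:
            pilot_hz = f0 + k * spacing         # block 604: pilot frequency
            half = spacing / 2.0                # block 606: width from the spacing
            # blocks 608-610: align the guardband on the pilot and keep it narrow
            # enough to avoid energy from neighboring subcarriers
            guardbands.append(Guardband(pilot_hz, half, half, "measure_avg"))
        return guardbands                       # block 612: merged into the profile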
In an example sweep test of the network, sweep tones660a,660bcan be transmitted in the space between the SC-QAM channels652,654,656. The sweep tones660a,660bcan be measured and, based on frequency responses of the sweep tones660a,660b, faults can be identified in the network. Faults can be identified, for example, based on an RF power of a sweep tone failing to satisfy a threshold RF power. Further, communication channels664a,664bcan be determined in the space between the SC-QAM channels652,654,656. For example, the communication channels664can include a forward communication channel for communication from a headend test unit, (or a remote transmitter test unit), to a field test unit and a reverse communication channel for communication from the field test unit to the headend test unit (or the remote transmitter test unit). In the example sweep test of the network, sweep tones662can be transmitted in the space between the SC-QAM channel656and the OFDM channel658. The sweep tone(s)662can be measured and, based on frequency responses of the sweep tone(s)662, faults can be identified in the network. In the example sweep test of the network, subcarrier pilot frequencies666can be measured and, based on frequency responses of the subcarrier pilot frequencies666, faults can be identified in the network. FIGS.7A-7Gillustrate example frequency diagrams, according to various embodiments of the present technology. In various embodiments, the example frequency diagrams can be associated with example scenarios that can be encountered during a sweep test. FIG.7Aillustrates an example frequency diagram700associated with a sweep test without OFDM sweeping, according to various embodiments of the present technology. As illustrated inFIG.7A, the frequency diagram700shows measured power levels of various sweep tones generated during the sweep test. Frequency markers708a,708bmark the beginning frequency and the ending frequency of the OFDM channel. In a first section702of the frequency diagram700, the frequency diagram700shows the measured power levels of sweep tones generated for frequencies lower than an OFDM channel. In a second section704aof the frequency diagram700, the frequency diagram700shows a flat line704bfor the frequencies of the OFDM channel. The flat line indicates a lack of measured power levels for the frequencies of the OFDM channel. Because sweep tones are not generated in the OFDM channel, a sweep test that only measures generated sweep tones does not measure the power levels associated with the OFDM channel. In a third section706of the frequency diagram700, the frequency diagram700shows the measured power levels of sweep tones generated for frequencies higher than the OFDM channel. As illustrated inFIG.7A, there are no apparent faults in the first section702and the third section706. Whether there are faults in the OFDM channel is unknown. FIG.7Billustrates an example frequency diagram710associated with a sweep test with OFDM sweeping, according to various embodiments of the present technology. The example frequency diagram710can be associated with a sweep test of the same network as the sweep test associated with example frequency diagram700inFIG.7A. As illustrated inFIG.7B, the frequency diagram710shows measured power levels of various sweep tones generated during the sweep test and measured power levels of guardband frequencies in an OFDM channel. Frequency markers718a,718bmark the beginning frequency and the ending frequency of the OFDM channel. 
In a first section712of the frequency diagram710, the frequency diagram710shows the measured power levels of sweep tones generated for frequencies lower than the OFDM channel. In a second section714of the frequency diagram710, the frequency diagram710shows measured power levels of guardband frequencies of the OFDM channel. Because sweep tones are not generated in the OFDM channel, a sweep test with OFDM sweeping measures the guardband frequencies of the OFDM channel. In a third section716of the frequency diagram710, the frequency diagram710shows the measured power levels of sweep tones generated for frequencies higher than the OFDM channel. In this example, the measured power levels of the guardband frequencies of the OFDM channel are higher than the measured power levels of the generated sweep tones in the first section712and the third section716. The measured power levels of the guardband frequencies of the OFDM channel can be evaluated against the nominal power levels of the guardband frequencies. The measured power levels of the generated sweep tones can be evaluated against the power levels at which the sweep tones were generated. As illustrated inFIG.7B, there are no apparent faults in the first section712, the second section714, and the third section716. FIG.7Cillustrates an example frequency diagram720associated with a sweep test, according to various embodiments of the present technology. As illustrated inFIG.7C, the frequency diagram720shows power levels of frequencies on a network including an OFDM channel. Frequency markers724,728mark the beginning frequency and the ending frequency of the OFDM channel. In a first section722of the frequency diagram720, the frequency diagram720shows power levels of frequencies lower than the OFDM channel. In a second section726of the frequency diagram720, the frequency diagram720shows power levels of frequencies of the OFDM channel. In a third section730of the frequency diagram720, the frequency diagram720shows power levels of frequencies higher than the OFDM channel. As illustrated inFIG.7C, there are no apparent faults in the first section722, the second section726, and the third section730. FIG.7Dillustrates an example frequency diagram740associated with a sweep test, according to various embodiments of the present technology. As illustrated inFIG.7D, the frequency diagram740shows power levels of frequencies on a network including an OFDM channel. Frequency markers744,748mark the beginning frequency and the ending frequency of the OFDM channel. In a first section742of the frequency diagram740, the frequency diagram740shows power levels of frequencies lower than the OFDM channel. In a second section746of the frequency diagram740, the frequency diagram740shows power levels of frequencies of the OFDM channel. In a third section750of the frequency diagram740, the frequency diagram740shows power levels of frequencies higher than the OFDM channel. As illustrated inFIG.7D, there is a fault in the OFDM channel corresponding to a drop752in the power levels of the frequencies of the OFDM channel in the second section746. There are no apparent faults in the first section742and the third section750. FIG.7Eillustrates an example frequency diagram760associated with a sweep test without OFDM sweeping, according to various embodiments of the present technology. The example frequency diagram760can be associated with a sweep test of the network associated with example frequency diagram740inFIG.7D. 
As illustrated inFIG.7E, the frequency diagram760shows measured power levels of various sweep tones generated during the sweep test. Frequency markers768a,768bmark the beginning frequency and the ending frequency of the OFDM channel. In a first section762of the frequency diagram760, the frequency diagram760shows the measured power levels of sweep tones generated for frequencies lower than an OFDM channel. In a second section764aof the frequency diagram760, the frequency diagram760shows a flat line764bfor the frequencies of the OFDM channel. The flat line indicates a lack of measured power levels for the frequencies of the OFDM channel. Because sweep tones are not generated in the OFDM channel, a sweep test that only measures generated sweep tones does not measure the power levels associated with the OFDM channel. Accordingly, a fault in the OFDM channel, such as that illustrated in the example frequency diagram740inFIG.7D, is not detected in the sweep test that only measures generated sweep tones without OFDM sweeping. In a third section766of the frequency diagram760, the frequency diagram760shows the measured power levels of sweep tones generated for frequencies higher than the OFDM channel. As illustrated inFIG.7E, there are no apparent faults in the first section762and the third section766. Whether there are faults in the OFDM channel is unknown based on the frequency diagram760. FIG.7Fillustrates an example frequency diagram770associated with a sweep test with OFDM sweeping, according to various embodiments of the present technology. The example frequency diagram770can be associated with a sweep test of the network associated with example frequency diagram720inFIG.7C. As illustrated inFIG.7F, the frequency diagram770shows measured power levels of various sweep tones generated during the sweep test and measured power levels of guardband frequencies in an OFDM channel. Frequency markers778a,778bmark the beginning frequency and the ending frequency of the OFDM channel. In a first section772of the frequency diagram770, the frequency diagram770shows the measured power levels of sweep tones generated for frequencies lower than the OFDM channel. In a second section774of the frequency diagram770, the frequency diagram770shows measured power levels of guardband frequencies of the OFDM channel. Because sweep tones are not generated in the OFDM channel, a sweep test with OFDM sweeping measures the guardband frequencies of the OFDM channel. In a third section776of the frequency diagram770, the frequency diagram770shows the measured power levels of sweep tones generated for frequencies higher than the OFDM channel. As illustrated inFIG.7F, there are no apparent faults in the first section772, the OFDM channel associated with the second section774, and the third section776. FIG.7Gillustrates an example frequency diagram780associated with a sweep test with OFDM sweeping, according to various embodiments of the present technology. The example frequency diagram780can be associated with a sweep test of the network associated with example frequency diagram740inFIG.7D. As illustrated inFIG.7G, the frequency diagram780shows measured power levels of various sweep tones generated during the sweep test and measured power levels of guardband frequencies in an OFDM channel. Frequency markers790a,790bmark the beginning frequency and the ending frequency of the OFDM channel. 
In a first section782of the frequency diagram780, the frequency diagram780shows the measured power levels of sweep tones generated for frequencies lower than the OFDM channel. In a second section784of the frequency diagram780, the frequency diagram780shows measured power levels of guardband frequencies of the OFDM channel. The measured power levels of the guardband frequencies of the OFDM channel can be evaluated against the nominal power levels of the guardband frequencies. A fault in the OFDM channel, such as that illustrated in the example frequency diagram740inFIG.7D, is detected here at a drop788. In a third section786of the frequency diagram780, the frequency diagram780shows the measured power levels of sweep tones generated for frequencies higher than the OFDM channel. As illustrated inFIG.7G, there is a fault associated with the drop788in the OFDM channel associated with the second section784. There are no apparent faults in the first section782and the third section786. FIG.8Dillustrates an example method880, according to various embodiments of the present technology. It should be appreciated that there can be additional, fewer, or alternative steps performed in similar or alternative orders, or in parallel, within the scope of the various embodiments discussed herein unless otherwise stated. At block882, the example method880determines OFDM pilot frequencies for an OFDM channel. At block884, the example method880determines guardband frequencies based on the OFDM pilot frequencies. At block886, the example method880generates a sweep profile based on the guardband frequencies. At block888, the example method880performs a sweep test based on the sweep profile. It is contemplated that there can be many other uses, applications, and/or variations associated with the various embodiments of the present technology. For example, various embodiments of the present technology can learn, improve, and/or be refined over time. In various embodiments, the functionalities described herein with respect to the present technology can be implemented, in part or in whole, as software, hardware, or any combination thereof. In some cases, the functionalities described with respect to the present technology can be implemented, in part or in whole, as software running on one or more computing devices or systems. For example, the functionalities described with respect to on demand sweep testing, automatic generation of sweep profile, and OFDM table generation and sweeping, or at least a portion thereof, can be implemented as or within an application (e.g., app), a program, an applet, or an operating system, etc., running on a user computing device or a client computing system. In a further example, the functionalities described with respect to the present technology or at least a portion thereof can be implemented using one or more computing devices or systems that include one or more servers, such as network servers or cloud servers. The functionalities described with respect to the present technology or at least a portion thereof can be implemented using computer system900ofFIG.9. It should be understood that there can be many variations or other possibilities. 
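As a closing illustration of how the measurements described above might be evaluated, the sketch below flags a fault wherever a measured level falls more than a fixed margin below its reference: the generated level for a sweep tone, or the nominal pilot level for an OFDM guardband frequency. The 3 dB margin and the record layout are assumptions for the sketch, not values from the specification.

    def find_faults(measurements, margin_db=3.0):
        # Each measurement carries its frequency, the measured power, and the
        # reference power it is evaluated against (compare FIGS. 7B and 7G).
        return [m["freq_hz"] for m in measurements
                if m["measured_dbmv"] < m["reference_dbmv"] - margin_db]

    # A drop like 788 in FIG. 7G would surface as a returned frequency:
    faults = find_faults([
        {"freq_hz": 700e6, "measured_dbmv": -8.0, "reference_dbmv": -7.5},
        {"freq_hz": 750e6, "measured_dbmv": -19.0, "reference_dbmv": -7.5},
    ])  # -> [750000000.0]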
Hardware Implementation The foregoing processes and features can be implemented by a wide variety of machine and computer system architectures and in a wide variety of network and computing environments.FIG.9illustrates an example of a computer system900that may be used to implement one or more of the embodiments described herein according to an embodiment of the invention. The computer system900includes sets of instructions924for causing the computer system900to perform the processes and features discussed herein. The computer system900may be connected (e.g., networked) to other machines and/or computer systems. In a networked deployment, the computer system900may operate in the capacity of a server or a client machine in a client-server network environment, or as a peer machine in a peer-to-peer (or distributed) network environment. The computer system900includes a processor902(e.g., a central processing unit (CPU), a graphics processing unit (GPU), or both), a main memory904, and a nonvolatile memory906(e.g., volatile RAM and non-volatile RAM, respectively), which communicate with each other via a bus908. In some embodiments, the computer system900can be a desktop computer, a laptop computer, personal digital assistant (PDA), or mobile phone, for example. In one embodiment, the computer system900also includes a video display910, an alphanumeric input device912(e.g., a keyboard), a cursor control device914(e.g., a mouse), a drive unit916, a signal generation device918(e.g., a speaker) and a network interface device920. In one embodiment, the video display910includes a touch sensitive screen for user input. In one embodiment, the touch sensitive screen is used instead of a keyboard and mouse. The disk drive unit916includes a machine-readable medium922on which is stored one or more sets of instructions924(e.g., software) embodying any one or more of the methodologies or functions described herein. The instructions924can also reside, completely or at least partially, within the main memory904and/or within the processor902during execution thereof by the computer system900. The instructions924can further be transmitted or received over a network940via the network interface device920. In some embodiments, the machine-readable medium922also includes a database925. Volatile RAM may be implemented as dynamic RAM (DRAM), which requires power continually in order to refresh or maintain the data in the memory. Non-volatile memory is typically a magnetic hard drive, a magnetic optical drive, an optical drive (e.g., a DVD RAM), or other type of memory system that maintains data even after power is removed from the system. The non-volatile memory906may also be a random access memory. The non-volatile memory906can be a local device coupled directly to the rest of the components in the computer system900. A non-volatile memory that is remote from the system, such as a network storage device coupled to any of the computer systems described herein through a network interface such as a modem or Ethernet interface, can also be used. While the machine-readable medium922is shown in an exemplary embodiment to be a single medium, the term “machine-readable medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions. 
The term “machine-readable medium” shall also be taken to include any medium that is capable of storing, encoding, or carrying a set of instructions for execution by the machine and that cause the machine to perform any one or more of the methodologies of the present technology. Examples of machine-readable media (or computer-readable media) include, but are not limited to, recordable type media such as volatile and non-volatile memory devices; solid state memories; floppy and other removable disks; hard disk drives; magnetic media; optical disks (e.g., Compact Disk Read-Only Memory (CD ROMS), Digital Versatile Disks (DVDs)); other similar non-transitory (or transitory), tangible (or non-tangible) storage media; or any type of medium suitable for storing, encoding, or carrying a series of instructions for execution by the computer system900to perform any one or more of the processes and features described herein. In general, routines executed to implement the embodiments of the invention can be implemented as part of an operating system or a specific application, component, program, object, module, or sequence of instructions referred to as “programs” or “applications.” For example, one or more programs or applications can be used to execute any or all of the functionality, techniques, and processes described herein. The programs or applications typically comprise one or more instructions set at various times in various memory and storage devices in the machine and that, when read and executed by one or more processors, cause the computer system900to perform operations to execute elements involving the various aspects of the embodiments described herein. The executable routines and data may be stored in various places, including, for example, ROM, volatile RAM, non-volatile memory, and/or cache memory. Portions of these routines and/or data may be stored in any one of these storage devices. Further, the routines and data can be obtained from centralized servers or peer-to-peer networks. Different portions of the routines and data can be obtained from different centralized servers and/or peer-to-peer networks at different times and in different communication sessions, or in a same communication session. The routines and data can be obtained in entirety prior to the execution of the applications. Alternatively, portions of the routines and data can be obtained dynamically, just in time, when needed for execution. Thus, it is not required that the routines and data be on a machine-readable medium in entirety at a particular instance of time. While embodiments have been described fully in the context of computing systems, those skilled in the art will appreciate that the various embodiments are capable of being distributed as a program product in a variety of forms, and that the embodiments described herein apply equally regardless of the particular type of machine- or computer-readable media used to actually effect the distribution. Alternatively, or in combination, the embodiments described herein can be implemented using special purpose circuitry, with or without software instructions, such as using Application-Specific Integrated Circuit (ASIC) or Field-Programmable Gate Array (FPGA). Embodiments can be implemented using hardwired circuitry without software instructions, or in combination with software instructions. 
Thus, the techniques are limited neither to any specific combination of hardware circuitry and software, nor to any particular source for the instructions executed by the data processing system. For purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the description. It will be apparent, however, to one skilled in the art that embodiments of the technology can be practiced without these specific details. In some instances, modules, structures, processes, features, and devices are shown in block diagram form in order to avoid obscuring the description. In other instances, functional block diagrams and flow diagrams are shown to represent data and logic flows. The components of block diagrams and flow diagrams (e.g., modules, engines, blocks, structures, devices, features, etc.) may be variously combined, separated, removed, reordered, and replaced in a manner other than as expressly described and depicted herein. Reference in this specification to “one embodiment,” “an embodiment,” “other embodiments,” “another embodiment,” “in various embodiments,” or the like means that a particular feature, design, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the technology. The appearances of, for example, the phrases “according to an embodiment,” “in one embodiment,” “in an embodiment,” “in various embodiments,” or “in another embodiment” in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Moreover, whether or not there is express reference to an “embodiment” or the like, various features are described, which may be variously combined and included in some embodiments but also variously omitted in other embodiments. Similarly, various features are described which may be preferences or requirements for some embodiments but not other embodiments. Although embodiments have been described with reference to specific exemplary embodiments, it will be evident that various modifications and changes can be made to these embodiments. Accordingly, the specification and drawings are to be regarded in an illustrative sense rather than in a restrictive sense. The foregoing specification provides a description with reference to specific exemplary embodiments. It will be evident that various modifications can be made thereto without departing from the broader spirit and scope as set forth in the following claims. The specification and drawings are, accordingly, to be regarded in an illustrative sense rather than a restrictive sense. Although some of the drawings illustrate a number of operations or method steps in a particular order, steps that are not order dependent may be reordered and other steps may be combined or omitted. While some reordering or other groupings are specifically mentioned, others will be apparent to those of ordinary skill in the art and so do not present an exhaustive list of alternatives. Moreover, it should be recognized that the stages could be implemented in hardware, firmware, software, or any combination thereof. It should also be understood that a variety of changes may be made without departing from the essence of the invention. Such changes are also implicitly included in the description. They still fall within the scope of this invention. 
It should be understood that this technology is intended to yield a patent covering numerous aspects of the invention, both independently and as an overall system, and in both method and apparatus modes. Further, each of the various elements of the invention and claims may also be achieved in a variety of manners. This technology should be understood to encompass each such variation, be it a variation of any apparatus embodiment, a method or process embodiment, or even merely a variation of any element of these. Further, the transitional phrase “comprising” is used to maintain the “open-end” claims herein, according to traditional claim interpretation. Thus, unless the context requires otherwise, it should be understood that the term “comprise” or variations such as “comprises” or “comprising,” are intended to imply the inclusion of a stated element or step or group of elements or steps, but not the exclusion of any other element or step or group of elements or steps. Such terms should be interpreted in their most expansive forms so as to afford the applicant the broadest coverage legally permissible in accordance with the following claims. The language used herein has been principally selected for readability and instructional purposes, and it may not have been selected to delineate or circumscribe the inventive subject matter. It is therefore intended that the scope of the invention be limited not by this detailed description, but rather by any claims that issue on an application based hereon. Accordingly, the technology of the embodiments of the invention is intended to be illustrative, but not limiting, of the scope of the invention, which is set forth in the following claims.
80,850
11863245
DESCRIPTION OF EMBODIMENTS Hereinafter, an embodiment of the present invention will be described with reference to the drawings. Description and drawings below are examples for describing the present invention, and omission and simplification are made as appropriate for the sake of clarity of description. The present invention can be carried out in other various forms. Unless otherwise specified, each constituent may be singular or plural. There is a case where a position, size, shape, range, and the like of each constituent illustrated in the drawings do not represent an actual position, size, shape, range, and the like, in order to facilitate understanding of the invention. For this reason, the present invention is not necessarily limited to a position, size, shape, range, and the like disclosed in the drawings. In a case where there are a plurality of constituents having the same or similar functions, description may be made by attaching different subscripts to the same reference numerals. However, in a case where a plurality of such constituents do not need to be distinguished from each other, the description may be made by omitting a subscript. Further, in description below, there is a case where processing performed by executing a program will be described. However, the program is executed by a processor (for example, CPU or GPU) to perform predetermined processing using a storage resource (for example, a memory) and/or an interface device (for example, a communication port) as appropriate. Therefore, the subject of the processing may be the processor. Similarly, the subject of the processing performed by executing the program may be a control unit, a device, a system, a computer, or a node having the processor. The subject of the processing performed by executing the program only needs to be an arithmetic unit, and may include a dedicated circuit (for example, an FPGA or an ASIC) that performs specific processing. The program may be installed in a device such as a computer from a program source. The program source may be, for example, a program distribution server or a computer-readable storage medium. In a case where the program source is a program distribution server, the program distribution server may include a processor and a storage resource that stores a program to be distributed, and the processor of the program distribution server may distribute a program to be distributed to another computer. Further, in description below, two or more programs may be realized as one program, or one program may be realized as two or more programs. First Embodiment FIG.1is a diagram illustrating a configuration of a signal transmission device1according to a first embodiment of the present invention. The signal transmission device1illustrated inFIG.1is a type of electronic device, and realizes various functions by performing communication with another electronic device. For example, when an electronic control unit (ECU) that performs image processing for automatic driving is used as the signal transmission device1, the signal transmission device1receives an image signal transmitted from a camera installed in a vehicle, and performs various types of arithmetic processing related to automatic driving of the vehicle on the basis of the received image signal. The signal transmission device1includes a communication unit11, a signal processing unit12, a power supply unit13, a filter circuit14, and a capacitor15. 
The communication unit11includes a reception circuit110and a communication signal processing unit112. A communication signal Sf in a predetermined frequency band transmitted from an electronic device2(seeFIG.4) connected to the signal transmission device1via a signal wiring4(seeFIG.4) to the signal transmission device1is input to the communication unit11via the capacitor15. The communication signal Sf is, for example, a serial signal representing “1” and “0” of data by a voltage difference, and voltage changes every predetermined period. A communication speed of the communication signal Sf is determined according to the period of the voltage change, and the shorter the period, the higher a communication speed. The reception circuit110receives the communication signal Sf input to the communication unit11and outputs the communication signal Sf to the communication signal processing unit112. The communication signal processing unit112decodes communication data included in the communication signal Sf received by the reception circuit110, acquires communication quality information Qf from the communication signal Sf, and outputs these pieces of information to the signal processing unit12. The communication quality information Qf is information related to communication quality of the communication signal Sf, and is, for example, an error frequency of communication data. The communication unit11has a communication speed change function113. The communication speed change function113is a function of changing a transmission frequency band of the communication signal Sf by changing a communication speed of the communication signal Sf. For example, in a case where a communication speed of the communication signal Sf transmitted from the electronic device2to the signal transmission device1changes, the communication unit11uses the communication speed change function113to change operation of the reception circuit110and the communication signal processing unit112according to the communication speed. By the above, even when a communication speed of the communication signal Sf changes, the communication unit11can decode communication data and acquire the communication quality information Qf. Note that the communication speed change function113may be implemented by another method as long as a transmission frequency band of the communication signal Sf can be changed. The signal processing unit12is a portion that performs various types of signal processing on the basis of communication data decoded from the communication signal Sf by the communication signal processing unit112, and is realized by using, for example, a microcomputer that executes a predetermined program or an integrated circuit such as an LSI, an FPGA, or an ASIC. The signal processing unit12includes a filter state determination unit120as a part of its function. The filter state determination unit120determines a state of the filter circuit14on the basis of the communication quality information Qf and performs processing according to a determination result. Details of a method of determining a state of the filter circuit14by the filter state determination unit120will be described later. The power supply unit13generates direct current Id using power supply voltage Vo input from the outside, and outputs the generated direct current Id to the signal wiring4via the filter circuit14. 
By the above, the direct current Id is superimposed on the communication signal Sf in the signal wiring4, and current flows in a direction from the signal transmission device1to the electronic device2. As a result, the direct current Id is supplied to the electronic device2via the signal wiring4. The filter circuit14is connected between the signal wiring4and the power supply unit13, and is configured by connection of inductors L1and L2in series. The filter circuit14functions as a low-pass filter (PoC filter) that transmits the direct current Id output from the power supply unit13and blocks the communication signal Sf transmitted from the electronic device2via the signal wiring4. In the filter circuit14, the inductor L1and the inductor L2function as filters having frequency characteristics different from each other. Note that, in the example ofFIG.1, the filter circuit14is constituted by two of the inductors L1and L2, but the number of inductors constituting the filter circuit14is not limited to this, and the filter circuit14may be constituted by three or more inductors. Further, the filter circuit14may be configured using a component other than an inductor. If a plurality of filters having frequency characteristics different from each other can be combined to realize a PoC filter that transmits the direct current Id and blocks the communication signal Sf, the filter circuit14can be configured using any number of filters in any configuration. The capacitor15is connected between the signal wiring4and the communication unit11, and functions as a high-pass filter that transmits the communication signal Sf transmitted from the electronic device2via the signal wiring4and blocks the direct current Id output from the power supply unit13. Next, an outline of the present invention will be described with reference toFIGS.2and3.FIG.2is a diagram for explaining a change in impedance of the signal transmission device1when the filter circuit14fails.FIG.3is a diagram illustrating a relationship between a failure portion of the filter circuit14and a signal waveform of the communication signal Sf. Note thatFIGS.2and3illustrate an example of a case where the filter circuit14includes three of the inductors L1, L2, and L3connected in series, and the inductor L1corresponds to a radio frequency band, the inductor L2corresponds to an intermediate frequency band, and the inductor L3corresponds to a low frequency band in a frequency range in which the filter circuit14blocks the communication signal Sf. InFIG.2, the diagram on the left side illustrates a change in impedance of the filter circuit14in a case where a short-circuit fault occurs in the inductor L3on the low frequency side. As illustrated in this diagram, when the inductor L3does not have a short-circuit fault, impedance profiles of the inductors L1, L2, and L3are combined so that impedance of the filter circuit14is equal to or more than a predetermined reference value Zt over the entire frequency range to be blocked by the filter circuit14. On the other hand, when a short-circuit failure occurs in the inductor L3, an impedance profile of the inductor L3disappears, and impedance of the filter circuit14becomes less than the reference value Zt on the low frequency side. 
As a result, the communication signal Sf leaks to the power supply unit13side in the low frequency band, a signal waveform of the communication signal Sf is disturbed, and communication quality of the communication signal Sf deteriorates. This influence is particularly noticeable in a case where a communication speed of the communication signal Sf is low. InFIG.2, the diagram on the right side illustrates a change in impedance of the filter circuit14in a case where a short-circuit fault occurs in the inductor L1on the radio frequency side. As illustrated in this diagram, when the inductor L1does not have a short-circuit fault, impedance profiles of the inductors L1, L2, and L3are combined so that impedance of the filter circuit14is equal to or more than a predetermined reference value Zt over the entire frequency range to be blocked by the filter circuit14. On the other hand, when a short-circuit failure occurs in the inductor L1, an impedance profile of the inductor L1disappears, and impedance of the filter circuit14becomes less than the reference value Zt on the radio frequency side. As a result, the communication signal Sf leaks to the power supply unit13side in the radio frequency band, a signal waveform of the communication signal Sf is disturbed, and communication quality of the communication signal Sf deteriorates. This influence is particularly noticeable in a case where a communication speed of the communication signal Sf is high. FIG.3illustrates an example of a signal waveform of the communication signal Sf received by the communication unit11in a case where there is no failure in the filter circuit14and in a case where there is a short-circuit failure in each of the inductors L1, L2, and L3in the filter circuit14for each of cases where a communication speed of the communication signal Sf is 6 Gbps, 1 Gbps, 200 Mbps, and 50 Mbps. InFIG.3, for example, in a case where the inductor L1fails, signal waveforms of 6 Gbps and 1 Gbps are abnormal, and it can be seen that communication quality of the communication signal Sf deteriorates at these communication speeds. On the other hand, for example, in a case where the inductor L3fails, signal waveforms of 200 Mbps and 50 Mbps are abnormal, and it can be seen that communication quality of the communication signal Sf is deteriorated at these communication speeds. Further, in a case where the inductor L2fails, signal waveforms of 1 Gbps, 200 Mbps, and 50 Mbps are abnormal, and it can be seen that communication quality of the communication signal Sf is deteriorated at these communication speeds. In particular, at 200 Mbps, a signal waveform greatly changes as compared with a case where the inductor L1or the inductor L3fails, and it can be seen that degree of deterioration of communication quality is large. In the present invention, the filter state determination unit120determines a state of the filter circuit14in consideration of a difference in degree of influence on communication quality for each frequency in a case where a short-circuit fault occurs in each inductor constituting the filter circuit14as described above. 
Specifically, the communication quality information Qf is acquired while a communication speed of the communication signal Sf is changed using the communication speed change function113of the communication unit11, and in a case where communication quality is deteriorated at any of the communication speeds, a short-circuit failure is determined to occur in the filter of the filter circuit14that corresponds to that communication speed. For example, in a case where the filter circuit14includes two of the inductors L1and L2as illustrated inFIG.1, the communication unit11receives the communication signal Sf transmitted at a communication speed corresponding to a frequency band of the inductor L1and the communication signal Sf transmitted at a communication speed corresponding to a frequency band of the inductor L2, and acquires the communication quality information Qf of each of the signals. The filter state determination unit120determines whether communication quality is deteriorated based on each piece of the acquired communication quality information Qf, and, in a case where communication quality is deteriorated at any of the communication speeds, the filter state determination unit120determines that a short-circuit fault occurs in one of the inductors L1and L2corresponding to the communication speed. In this manner, a state of the filter circuit14can be determined on the basis of the communication quality information Qf. Note that, in the above description, the number of times of changing a communication speed of the communication signal Sf is preferably at least equal to the number of filters constituting the filter circuit14. If a communication speed corresponding to a frequency characteristic of each filter of the filter circuit14can be sufficiently covered, the communication quality information Qf can be acquired by changing a communication speed of the communication signal Sf any number of times. FIG.4is a diagram illustrating a configuration of a signal transmission system according to the first embodiment of the present invention. The signal transmission system illustrated inFIG.4is configured such that the signal transmission device1and the electronic device2described inFIG.1are connected to each other via the signal wiring4, and a signal is transmitted between the signal transmission device1and the electronic device2via the signal wiring4. In the present embodiment, the signal wiring4is configured using, for example, a coaxial cable. Note that, hereinafter, signal transmission is assumed to be performed from the electronic device2to the signal transmission device1, but conversely, signal transmission may be performed from the signal transmission device1to the electronic device2. The electronic device2is a transmission source of the communication signal Sf received by the signal transmission device1, and is used in combination with various apparatuses and devices. The electronic device2is mounted on, for example, a camera installed in a vehicle, and transmits an image signal based on image information acquired by the camera to the signal transmission device1via the signal wiring4as the communication signal Sf. The electronic device2includes a communication unit21, a signal processing unit22, a power supply unit23, a filter circuit24, and a capacitor25. 
The signal processing unit22performs various types of signal processing according to application of a device or apparatus on which the electronic device2is mounted, and outputs communication data based on a processing result to the communication unit21. The communication unit21includes a transmission circuit210, converts communication data input to the communication unit21into the communication signal Sf, and outputs the communication signal Sf from the transmission circuit210to the signal wiring4via the capacitor25. In this manner, the communication signal Sf is transmitted from the electronic device2to the signal transmission device1via the signal wiring4. The filter circuit24is connected between the signal wiring4and the power supply unit23, and is configured by connection of inductors L11and L12in series. The filter circuit24functions as a low-pass filter (PoC filter) that transmits the direct current Id supplied from the signal transmission device1via the signal wiring4and blocks the communication signal Sf transmitted from the communication unit21. In the filter circuit24, the inductors L11and L12correspond to the inductors L1and L2of the filter circuit14in the signal transmission device1, respectively, and have frequency characteristics similar to frequency characteristics of the inductors L1and L2, respectively. Note that, in the example ofFIG.4, the filter circuit24is constituted by two of the inductors L11and L12, but the number of inductors constituting the filter circuit24is not limited to this, and the filter circuit24may be configured using three or more inductors, similarly to the filter circuit14of the signal transmission device1. Further, the filter circuit24may be configured using a component other than an inductor. If a plurality of filters having frequency characteristics different from each other can be combined to realize a PoC filter that transmits the direct current Id and blocks the communication signal Sf, the filter circuit24can be configured using any number of filters in any configuration. The power supply unit23receives the direct current Id that is supplied from the signal transmission device1via the signal wiring4and passes through the filter circuit24, and uses the direct current Id to supply power supplies Vser and Vsoc to the communication unit21and the signal processing unit22, respectively. The capacitor25is connected between the signal wiring4and the communication unit21, and functions as a high-pass filter that transmits the communication signal Sf output from the communication unit21and blocks the direct current Id supplied from the signal transmission device1via the signal wiring4. In the signal transmission system ofFIG.4, the electronic device2causes the communication unit21to transmit the communication signal Sf at a communication speed in a frequency band corresponding to frequency characteristics of the inductors L1and L11and a communication speed in a frequency band corresponding to frequency characteristics of the inductors L2and L12. The communication signal Sf transmitted from the electronic device2is input to the signal transmission device1via the signal wiring4and received by the reception circuit110in the communication unit11. Then, the communication quality information Qf of the communication signal Sf obtained in each frequency band is output from the communication signal processing unit112and input to the signal processing unit12. 
In the signal processing unit12, the filter state determination unit120determines states of the filter circuit14in the signal transmission device1and the filter circuit24in the electronic device2by determining whether the communication signal Sf in each frequency band is normal or abnormal from the communication quality information Qf. That is, in a case where there is an abnormality in the communication signal Sf in the frequency band corresponding to the frequency characteristics of the inductors L1and L11, at least one of the inductors L1and L11is determined to have a short-circuit fault. On the other hand, in a case where there is an abnormality in the communication signal Sf in the frequency band corresponding to the frequency characteristics of the inductors L2and L12, at least one of the inductors L2and L12is determined to have a short-circuit fault. By the above, in a case where a short-circuit fault occurs in the filter circuits14and24, the fault can be reliably detected on the signal transmission device1side. Note that, in the signal transmission device1, an operation mode of the electronic device2may be changed on the basis of the above-described state determination results of the filter circuits14and24. For example, in a case where a short-circuit fault is determined to occur in at least one of the inductors L1and L11on the radio frequency side, the electronic device2is operated in a function stop mode in which a part of functions of the electronic device2is stopped or in a function degeneration mode in which a part of functions of the electronic device2is limited so that the signal processing unit22does not perform processing of transmitting the communication signal Sf at a communication speed on the high-speed side. On the other hand, in a case where a short-circuit fault is determined to occur in at least one of the inductors L2and L12on the low frequency side, the electronic device2is operated in a function stop mode in which a part of functions of the electronic device2is stopped or in a function degeneration mode in which a part of functions of the electronic device2is limited so that the signal processing unit22does not perform processing of transmitting the communication signal Sf at a communication speed on the low-speed side. In this way, even in a case where a part of the filter circuits14and24fails, operation of the electronic device2can be continued within a possible range. According to the first embodiment of the present invention described above, an action and an effect described below are achieved. (1) The signal transmission device1includes the communication unit11that is connected to the electronic device2via the signal wiring4and performs communication with the electronic device2via the signal wiring4, the signal processing unit12that performs signal processing related to communication performed by the communication unit11, the power supply unit13that supplies the direct current Id to the electronic device2via the signal wiring4, and the filter circuit14connected between the signal wiring4and the power supply unit13. The filter circuit14includes a plurality of filters (the inductors L1and L2) having frequency characteristics different from each other. The signal processing unit12acquires the communication quality information Qf indicating quality of communication in at least two or more frequency bands, and determines a state of the filter circuit14on the basis of the communication quality information Qf. 
With this configuration, it is possible to detect a failure of the filter circuit14used as a PoC filter. (2) The communication unit11receives the communication signal Sf transmitted from the electronic device2in a first frequency band and the communication signal Sf transmitted from the electronic device2in a second frequency band different from the first frequency band. The signal processing unit12determines a state of the filter circuit14on the basis of the communication quality information Qf in the first frequency band and the communication quality information Qf in the second frequency band. With this configuration, when a short-circuit fault occurs in the inductors L1and L2that are filters constituting the filter circuit14, the fault can be reliably detected. (3) An operation mode of the electronic device2may be changed on the basis of a determination result of a state of the filter circuit14by the signal processing unit12. In this way, even in a case where the filter circuit14fails, operation of the electronic device2can be continued as much as possible to improve availability of the electronic device2. (4) The electronic device2includes the communication unit21that performs communication with the signal transmission device1that is an electronic device via the signal wiring4, the signal processing unit22that performs signal processing related to communication performed by the communication unit21, the power supply unit23that supplies the power supplies Vser and Vsoc to the communication unit21and the signal processing unit22using the direct current Id supplied from the signal transmission device1via the signal wiring4, and the filter circuit24connected between the signal wiring4and the power supply unit23. Each of the filter circuit14and the filter circuit24includes a plurality of filters (inductors L1, L2, L11, and L12) having frequency characteristics different from each other. The signal processing unit12acquires the communication quality information Qf indicating quality of communication in at least two or more frequency bands, and determines a state of the filter circuit14and the filter circuit24on the basis of the communication quality information Qf. With this configuration, in the signal transmission system including the signal transmission device1and the electronic device2, it is possible to detect failure of the filter circuits14and24used as PoC filters. Second Embodiment Next, a signal transmission device and a signal transmission system according to a second embodiment of the present invention will be described. In the present embodiment, an example in which a signal transmission device1A and an electronic device2A connected via the signal wiring4perform bidirectional communication with each other will be described. Note that the signal transmission device1A and the electronic device2A of the present embodiment correspond to the signal transmission device1and the electronic device2described in the first embodiment, respectively, and have partially different configurations. Hereinafter, the signal transmission device1A and the electronic device2A will be described focusing on differences from the first embodiment. FIG.5is a diagram illustrating a configuration of the signal transmission device1A according to the second embodiment of the present invention. 
As illustrated inFIG.5, the signal transmission device1A of the present embodiment has the same configuration as the signal transmission device1of the first embodiment described inFIG.1except that a communication unit11A is provided instead of the communication unit11. In the present embodiment, communication data to the electronic device2A is input from the signal processing unit12to the communication unit11A. The communication data includes, for example, control data for controlling operation of the electronic device2A. The communication unit11A further includes a transmission circuit111in addition to the reception circuit110, the communication signal processing unit112, and the communication speed change function113described in the first embodiment. In the communication unit11A, the communication signal processing unit112generates communication data from the communication signal Sf received by the reception circuit110and outputs the communication data to the signal processing unit12, and converts communication data input from the signal processing unit12into a communication signal Sb and outputs the communication signal Sb to the transmission circuit111. The transmission circuit111transmits the communication signal Sb by outputting the communication signal Sb to the signal wiring4via the capacitor25. By the above, the communication signal Sb in a predetermined frequency band is transmitted from the signal transmission device1A to the electronic device2A (seeFIG.6) via the signal wiring4. Similarly to the communication signal Sf, the communication signal Sb is a serial signal representing “1” and “0” of data by, for example, a voltage difference, and voltage changes every predetermined period. A communication speed of the communication signal Sb is determined according to the period of the voltage change, and the shorter the period, the higher a communication speed. Note that a communication speed of the communication signal Sb is set according to a necessary communication data amount and communication frequency, and may be the same as or different from a communication speed of the communication signal Sf. FIG.6is a diagram illustrating a configuration of the signal transmission system according to the second embodiment of the present invention. The signal transmission system illustrated inFIG.6is configured such that the signal transmission device1A and the electronic device2A described inFIG.5are connected to each other via the signal wiring4, and a signal is transmitted between the signal transmission device1A and the electronic device2A via the signal wiring4. As illustrated inFIG.6, the electronic device2A of the present embodiment has the same configuration as the electronic device2of the first embodiment described with reference toFIG.4except that a communication unit21A is provided instead of the communication unit21. In the present embodiment, the communication signal Sb transmitted from the signal transmission device1A to the electronic device2A is input to the communication unit21A via the capacitor25. The communication unit21A further includes a reception circuit211in addition to the transmission circuit210described in the first embodiment. The reception circuit211receives the communication signal Sb input to the communication unit21A. The communication signal Sb received by the reception circuit211is decoded into communication data in the communication unit21A, and is output to the signal processing unit22. 
FIG. 6 is a diagram illustrating a configuration of the signal transmission system according to the second embodiment of the present invention. The signal transmission system illustrated in FIG. 6 is configured such that the signal transmission device 1A and the electronic device 2A described in FIG. 5 are connected to each other via the signal wiring 4, and a signal is transmitted between the signal transmission device 1A and the electronic device 2A via the signal wiring 4. As illustrated in FIG. 6, the electronic device 2A of the present embodiment has the same configuration as the electronic device 2 of the first embodiment described with reference to FIG. 4 except that a communication unit 21A is provided instead of the communication unit 21. In the present embodiment, the communication signal Sb transmitted from the signal transmission device 1A to the electronic device 2A is input to the communication unit 21A via the capacitor 25. The communication unit 21A further includes a reception circuit 211 in addition to the transmission circuit 210 described in the first embodiment. The reception circuit 211 receives the communication signal Sb input to the communication unit 21A. The communication signal Sb received by the reception circuit 211 is decoded into communication data in the communication unit 21A and output to the signal processing unit 22. In this way, communication data based on the communication signal Sb is used in the signal processing performed by the signal processing unit 22.

In the signal transmission system of FIG. 6, similarly to the electronic device 2 described in the first embodiment, the electronic device 2A causes the communication unit 21A to transmit the communication signal Sf at a communication speed in a frequency band corresponding to the frequency characteristics of the inductors L1 and L11 and at a communication speed in a frequency band corresponding to the frequency characteristics of the inductors L2 and L12. The communication signal Sf transmitted from the electronic device 2A is input to the signal transmission device 1A via the signal wiring 4 and received by the reception circuit 110 in the communication unit 11A. Then, the communication quality information Qf of the communication signal Sf obtained in each frequency band is output from the communication signal processing unit 112 and input to the signal processing unit 12. In the signal processing unit 12, the filter state determination unit 120 determines the states of the filter circuit 14 in the signal transmission device 1A and the filter circuit 24 in the electronic device 2A by determining whether the communication signal Sf in each frequency band is normal or abnormal from the communication quality information Qf.

According to the second embodiment of the present invention described above, the communication unit 11A has a bidirectional communication function of receiving the communication signal Sf transmitted from the electronic device 2A and transmitting the communication signal Sb to the electronic device 2A. With this configuration, the electronic device 2A can perform various kinds of signal processing using the communication signal Sb transmitted from the signal transmission device 1A.

Third Embodiment

Next, a signal transmission device and a signal transmission system according to a third embodiment of the present invention will be described. In the present embodiment, an example in which a signal transmission device 1B and an electronic device 2B connected via the signal wiring 4 perform bidirectional communication with each other, and the state of a filter circuit is determined using the communication quality information Qf and Qb of the respective communication signals Sf and Sb, will be described. Note that the signal transmission device 1B and the electronic device 2B of the present embodiment correspond to the signal transmission device 1A and the electronic device 2A described in the second embodiment, respectively, and have partially different configurations. Hereinafter, the signal transmission device 1B and the electronic device 2B will be described focusing on differences from the second embodiment.

FIG. 7 is a diagram illustrating a configuration of the signal transmission system according to the third embodiment of the present invention. The signal transmission system illustrated in FIG. 7 is configured such that the signal transmission device 1B and the electronic device 2B are connected to each other via the signal wiring 4, and a signal is transmitted between the signal transmission device 1B and the electronic device 2B via the signal wiring 4. Note that, in the present embodiment, the communication signal Sf and the communication signal Sb have different communication speeds.
Hereinafter, the communication signal Sb is assumed to have a lower communication speed than the communication signal Sf, but conversely, the communication signal Sb may have a higher communication speed than the communication signal Sf.

As illustrated in FIG. 7, the signal transmission device 1B of the present embodiment includes a communication unit 11B and a signal processing unit 12B. The communication unit 11B has the same configuration as the communication unit 11A in the signal transmission device 1A of the second embodiment described with reference to FIGS. 5 and 6 except that the communication speed change function 113 is not included. The signal processing unit 12B has the same configuration as the signal processing unit 12 in the signal transmission device 1 of the first embodiment described with reference to FIGS. 1 and 4 except that a filter state determination unit 120B is included instead of the filter state determination unit 120. In the communication unit 11B, the communication signal processing unit 112 acquires the communication quality information Qb of the communication signal Sb in addition to the communication quality information Qf of the communication signal Sf, and outputs the communication quality information Qb to the signal processing unit 12B. The communication quality information Qb is information related to the communication quality of the communication signal Sb, and is, for example, an error frequency of communication data. The communication signal processing unit 112 can acquire the communication quality information Qb of the communication signal Sb as, for example, information included in the communication signal Sf transmitted from the electronic device 2B.

As illustrated in FIG. 7, the electronic device 2B of the present embodiment further includes a camera unit 26 in addition to the same configuration as the electronic device 2A of the second embodiment described with reference to FIGS. 5 and 6. The camera unit 26 is configured using a lens and an image sensor, and generates an image signal as the image sensor captures a subject image formed on the image sensor by the lens. The image signal generated by the camera unit 26 is input to the signal processing unit 22, and after predetermined signal processing is performed in the signal processing unit 22, the image signal is output to the communication unit 21A as communication data addressed to the signal transmission device 1B. In this way, an image signal acquired by the camera unit 26 is transmitted from the electronic device 2B to the signal transmission device 1B. For example, in a case where the electronic device 2B is mounted on a vehicle, by capturing an image of the surrounding environment of the vehicle using the camera unit 26, the electronic device 2B can detect the surrounding environment of the vehicle and transmit the communication signal Sf including an image signal related to the detection result to the signal transmission device 1B.

Further, in the electronic device 2B, the communication unit 21A acquires the communication quality information Qb from the communication signal Sb received from the signal transmission device 1B. The communication quality information Qb of the communication signal Sb acquired by the communication unit 21A is notified from the electronic device 2B to the signal transmission device 1B.
For example, the communication quality information Qb can be notified from the electronic device 2B to the signal transmission device 1B by transmitting the communication signal Sf including the communication quality information Qb from the electronic device 2B to the signal transmission device 1B. Alternatively, the communication quality information Qb may be notified using another method, for example, a communication path different from the communication signal Sf.

In the signal transmission system of FIG. 7, the electronic device 2B causes the communication unit 21A to transmit the communication signal Sf at a communication speed in a frequency band on the radio frequency side according to the frequency characteristics of the inductors L1 and L11. The communication signal Sf transmitted from the electronic device 2B is input to the signal transmission device 1B via the signal wiring 4 and received by the reception circuit 110 in the communication unit 11B. Then, the communication quality information Qf of the communication signal Sf is output from the communication signal processing unit 112 and input to the signal processing unit 12B. On the other hand, the signal transmission device 1B causes the communication unit 11B to transmit the communication signal Sb at a communication speed in a frequency band on the low frequency side according to the frequency characteristics of the inductors L2 and L12. The communication signal Sb transmitted from the signal transmission device 1B is input to the electronic device 2B via the signal wiring 4 and received by the reception circuit 211 in the communication unit 21A, and the communication quality information Qb of the communication signal Sb is acquired. Then, as described above, the communication quality information Qb of the communication signal Sb is notified from the electronic device 2B to the signal transmission device 1B and input to the signal processing unit 12B.

In the signal processing unit 12B, the filter state determination unit 120B determines the states of the filter circuit 14 in the signal transmission device 1B and the filter circuit 24 in the electronic device 2B by determining whether the communication signals Sf and Sb are normal or abnormal from the communication quality information Qf and Qb, respectively. That is, in a case where there is an abnormality in the communication signal Sf, it is determined that at least one of the inductors L1 and L11 has a short-circuit fault. On the other hand, in a case where there is an abnormality in the communication signal Sb, it is determined that at least one of the inductors L2 and L12 has a short-circuit fault. In this way, in a case where a short-circuit fault occurs in the filter circuits 14 and 24, the fault can be reliably detected on the signal transmission device 1B side.

According to the third embodiment of the present invention described above, the communication unit 11B receives the communication signal Sf transmitted from the electronic device 2B in a first frequency band, and transmits the communication signal Sb to the electronic device 2B in a second frequency band different from the first frequency band. The signal processing unit 12B determines the states of the filter circuits 14 and 24 based on the communication quality information Qf in the first frequency band and the communication quality information Qb in the second frequency band. With this configuration, even if the communication unit 11B does not have a communication speed change function, the signal transmission device 1B can determine the states of the filter circuits 14 and 24.
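A minimal sketch of the bidirectional determination performed by the filter state determination unit 120B follows, assuming the quality of each signal has already been reduced to a normal/abnormal flag; the function and message names are illustrative assumptions.

    def determine_states(qf_normal, qb_normal):
        # Qf covers the high-band forward signal Sf; Qb the low-band backward
        # signal Sb. Each abnormal flag implicates the inductors for its band.
        findings = []
        if not qf_normal:
            findings.append("short-circuit fault in at least one of L1/L11")
        if not qb_normal:
            findings.append("short-circuit fault in at least one of L2/L12")
        return findings or ["filter circuits 14 and 24 judged normal"]

    print(determine_states(qf_normal=True, qb_normal=False))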
Note that, in the third embodiment of the present invention described above, the example in which the electronic device 2B includes the camera unit 26 and detects the surrounding environment of a vehicle using the camera unit 26 is described. However, a sensor other than a camera may be used as the sensor that detects the surrounding environment of a vehicle. For example, the electronic device 2B including various sensors such as a radar, a LiDAR, and a sonar may be mounted on a vehicle; the communication signal Sf including information regarding the surrounding environment of the vehicle detected using these sensors can be transmitted from the electronic device 2B to the signal transmission device 1B, and the communication signal Sb including control information for controlling operation of these sensors can be transmitted from the signal transmission device 1B to the electronic device 2B.

Fourth Embodiment

Next, a signal transmission device and a signal transmission system according to a fourth embodiment of the present invention will be described. In the present embodiment, an example in which the state of a filter circuit is determined on the basis of the supply state of the direct current Id from a signal transmission device 1C to the electronic device 2B will be described. Note that the signal transmission device 1C of the present embodiment corresponds to the signal transmission device 1B described in the third embodiment, and is partially different in configuration. Hereinafter, the signal transmission device 1C will be described focusing on differences from the third embodiment.

FIG. 8 is a diagram illustrating a configuration of the signal transmission system according to the fourth embodiment of the present invention. The signal transmission system illustrated in FIG. 8 is configured such that the signal transmission device 1C and the electronic device 2B are connected to each other via the signal wiring 4, and a signal is transmitted between the signal transmission device 1C and the electronic device 2B via the signal wiring 4. Note that, in the present embodiment, the electronic device 2B is the same as that described in the third embodiment.

As illustrated in FIG. 8, the signal transmission device 1C of the present embodiment includes a signal processing unit 12C and a power supply unit 13C. The signal processing unit 12C has the same configuration as the signal processing unit 12B in the signal transmission device 1B of the third embodiment described with reference to FIG. 7 except that a filter state determination unit 120C is included instead of the filter state determination unit 120B. The power supply unit 13C outputs the direct current Id to the signal wiring 4 via the filter circuit 14, and outputs power supply information Pd indicating the supply state of the direct current Id to the signal processing unit 12C. For example, information such as the current value of the direct current Id and the output voltage of the power supply unit 13C when the direct current Id is output can be used as the power supply information Pd. The power supply information Pd output from the power supply unit 13C is input to the filter state determination unit 120C in the signal processing unit 12C. The filter state determination unit 120C determines the states of the filter circuits 14 and 24 in the same manner as described in the third embodiment, and also determines the state of the power supply unit 13C based on the power supply information Pd.
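A sketch of a power-supply state check based on the power supply information Pd (output current and voltage of the power supply unit 13C) might look as follows; the nominal values and tolerances are illustrative assumptions only.

    NOMINAL_VOLTAGE_V = 12.0   # assumed nominal supply voltage
    VOLTAGE_TOL_V = 0.5        # assumed allowable deviation
    MIN_CURRENT_A = 0.05       # below this, disconnection is suspected
    MAX_CURRENT_A = 1.0        # above this, a short circuit is suspected

    def power_supply_normal(current_a, voltage_v):
        # Judge the supply state of the direct current Id from Pd.
        voltage_ok = abs(voltage_v - NOMINAL_VOLTAGE_V) <= VOLTAGE_TOL_V
        current_ok = MIN_CURRENT_A <= current_a <= MAX_CURRENT_A
        return voltage_ok and current_ok

    print(power_supply_normal(current_a=0.3, voltage_v=11.8))  # True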
The cause of a communication abnormality occurring in the signal transmission system is then identified on the basis of these determination results. That is, in a case where a communication abnormality occurs in the signal transmission system, it is determined whether one of the filter circuits 14 and 24 has failed or the power supply unit 13C has failed.

According to the fourth embodiment of the present invention described above, the signal processing unit 12C acquires the power supply information Pd indicating the supply state of the direct current Id in the power supply unit 13C, and determines the states of the filter circuits 14 and 24 and the power supply unit 13C based on the communication quality information Qf and Qb and the power supply information Pd. With this configuration, in a case where a communication abnormality occurs in the signal transmission system, the cause of the communication abnormality can be identified.

Fifth Embodiment

Next, a signal transmission device and a signal transmission system according to a fifth embodiment of the present invention will be described. In the present embodiment, an example of communication abnormality determination in a case where the PoC filter includes three inductors will be described.

FIG. 9 is a diagram illustrating a configuration of the signal transmission system according to the fifth embodiment of the present invention. The signal transmission system illustrated in FIG. 9 is configured such that a signal transmission device 1D and an electronic device 2D are connected to each other via the signal wiring 4, and a signal is transmitted between the signal transmission device 1D and the electronic device 2D via the signal wiring 4. As illustrated in FIG. 9, the signal transmission device 1D of the present embodiment includes the communication unit 11A, the signal processing unit 12C, the power supply unit 13C, and a filter circuit 14D. The communication unit 11A is the same as that of the signal transmission device 1A of the second embodiment described with reference to FIG. 5, and the signal processing unit 12C and the power supply unit 13C are the same as those of the signal transmission device 1C of the fourth embodiment described with reference to FIG. 8. Further, the electronic device 2D of the present embodiment has the same configuration as the electronic device 2B of the third and fourth embodiments described with reference to FIGS. 7 and 8, respectively, except that a filter circuit 24D is provided instead of the filter circuit 24. The filter circuits 14D and 24D are each configured by connecting three inductors L1, L2, and L3 in series. Hereinafter, description will be made assuming that, in the frequency range in which the filter circuits 14D and 24D block the communication signals Sf and Sb, the inductor L1 corresponds to a radio frequency band, the inductor L2 corresponds to an intermediate frequency band, and the inductor L3 corresponds to a low frequency band, similarly to the description in FIGS. 2 and 3.

FIG. 10 is a flowchart illustrating a process of communication abnormality determination in the signal transmission system according to the fifth embodiment of the present invention. The processing illustrated in the flowchart of FIG. 10 is realized by, for example, a microcomputer executing a predetermined program in the filter state determination unit 120C of the signal transmission device 1D. Alternatively, the processing illustrated in the flowchart of FIG. 10 may be realized using an integrated circuit such as an LSI, an FPGA, or an ASIC.
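Before the step-by-step description below, the branching of the FIG. 10 flowchart can be summarized in a short sketch; each boolean input stands in for one of the checks in Steps S10 to S60, and the names and result strings are illustrative assumptions.

    def communication_abnormality_determination(
            pd_normal,        # S10: supply state of the direct current Id
            qb_normal,        # S20: CRC check of the backward signal Sb
            qf_normal_high,   # S30/S40: Sf quality at the high rate (L1 band)
            qf_normal_low):   # S60: Sf quality after the rate change (L2 band)
        if not pd_normal:
            return "S120: power-supply abnormality (wiring, filter open, or unit 13C)"
        if qb_normal:
            if qf_normal_high:
                return "S70: no communication abnormality"
            return "S80: short-circuit fault of L1 in filter circuit 14D or 24D"
        if not qf_normal_high:
            return "S110: transmission-path abnormality (connector, capacitor, ...)"
        # S50: change the Sf rate, e.g. 6 Gbps -> 1 Gbps, then re-check (S60)
        if qf_normal_low:
            return "S100: short-circuit fault of L3 in filter circuit 14D or 24D"
        return "S90: short-circuit fault of L2 in filter circuit 14D or 24D"

    print(communication_abnormality_determination(True, False, True, True))  # S100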
In Step S10, it is determined whether or not the supply state of the direct current Id from the power supply unit 13C to the electronic device 2D is normal on the basis of the power supply information Pd. In a case where the supply state of the direct current Id is normal, the processing proceeds to Step S20, and in a case where the supply state is abnormal, the processing proceeds to Step S120.

In Step S20, it is determined whether or not the communication quality of the communication signal Sb is normal based on the communication quality information Qb. Here, as the communication quality information Qb, a result of determination in the electronic device 2D as to whether or not the value of a cyclic redundancy code (CRC) of communication data included in the communication signal Sb is normal is acquired, and whether or not the communication quality of the communication signal Sb is normal is determined from the determination result. In a case where the communication quality of the communication signal Sb is normal, the processing proceeds to Step S30, and in a case where the communication quality is abnormal, the processing proceeds to Step S40.

In Step S30, it is determined whether or not the communication quality of the communication signal Sf is normal on the basis of the communication quality information Qf. Here, a CRC of communication data included in the communication signal Sf is acquired as the communication quality information Qf, and whether or not the value of the CRC is normal is determined, so that whether or not the communication quality of the communication signal Sf is normal is determined. In a case where the communication quality of the communication signal Sf is normal, the processing proceeds to Step S70, and in a case where the communication quality is abnormal, the processing proceeds to Step S80.

In Step S40, similarly to Step S30, it is determined whether or not the communication quality of the communication signal Sf is normal based on the communication quality information Qf. In a case where the communication quality of the communication signal Sf is normal, the processing proceeds to Step S50, and in a case where the communication quality is abnormal, the processing proceeds to Step S110.

In Step S50, the transmission rate (communication speed) of the communication signal Sf is changed using the communication speed change function 113 of the communication unit 11A. Here, the transmission rate of the communication signal Sf is changed from high speed to low speed, for example, from 6 Gbps to 1 Gbps. As a result, the communication speed of the communication signal Sf is changed from a communication speed corresponding to the frequency band of the inductor L1 to a communication speed corresponding to the frequency band of the inductor L2.

In Step S60, it is determined whether or not the communication quality of the communication signal Sf is normal based on the communication quality information Qf acquired for the communication signal Sf whose transmission rate was changed in Step S50. In a case where the communication quality of the communication signal Sf is normal, the processing proceeds to Step S100, and in a case where the communication quality is abnormal, the processing proceeds to Step S90.

In Step S70, it is determined that there is no communication abnormality. In Step S80, it is determined that the inductor L1 has a short-circuit fault in at least one of the filter circuits 14D and 24D. In Step S90, it is determined that the inductor L2 has a short-circuit fault in at least one of the filter circuits 14D and 24D.
In Step S100, it is determined that the inductor L3 has a short-circuit fault in at least one of the filter circuits 14D and 24D. In Step S110, it is determined that some abnormality has occurred in the transmission path of the communication signals Sf and Sb. As the cause of this abnormality, for example, an abnormality of a connector or a harness connecting the signal wiring 4 to the signal transmission device 1D and the electronic device 2D, disconnection of the capacitors 15 and 25, disconnection of a substrate pattern in the signal transmission device 1D or the electronic device 2D, failure of the communication units 11A and 21A, and the like can be considered. In Step S120, it is determined that some abnormality has occurred in the power supply from the signal transmission device 1D to the electronic device 2D. As the cause of this abnormality, for example, disconnection of the signal wiring 4, disconnection of at least one of the filter circuits 14D and 24D, failure of the power supply unit 13C, and the like can be considered. After any of Steps S70 to S120 is executed, the processing illustrated in the flowchart of FIG. 10 ends.

Next, an operation control example of the electronic device 2D performed on the basis of a result of the communication abnormality determination of FIG. 10 will be described with reference to the table of FIG. 11. The table of FIG. 11 illustrates an example of operation control of the electronic device 2D in each of the cases where a short-circuit failure occurs in each of the inductors L1 to L3 in the filter circuits 14D and 24D or where a disconnection (open) failure occurs in any of the inductors L1 to L3. The signal transmission device 1D can perform operation control on the electronic device 2D according to a result of the communication abnormality determination of FIG. 10, for example, according to the table of FIG. 11. Note that, in the table of FIG. 11, the signal transmission device 1D serving as an ECU is represented as “ECU”, the electronic device 2D including the camera unit 26 is represented as “camera”, the communication signal Sf transmitted from the electronic device 2D to the signal transmission device 1D is represented as “forward channel”, and the communication signal Sb transmitted from the signal transmission device 1D to the electronic device 2D is represented as “backward channel”.

According to the fifth embodiment of the present invention described above, in a case where a communication abnormality occurs in the signal transmission system, the cause of the abnormality can be identified, and appropriate operation control can be performed on the electronic device 2D.

Sixth Embodiment

Next, a signal transmission device and a signal transmission system according to a sixth embodiment of the present invention will be described. In the present embodiment, an example in which a twisted pair cable is used instead of a coaxial cable for the signal wiring will be described.

FIG. 12 is a diagram illustrating a configuration of the signal transmission system according to the sixth embodiment of the present invention. In the signal transmission system illustrated in FIG. 12, a signal transmission device 1E and an electronic device 2E are connected to each other via a twisted pair cable 5, and a signal is transmitted between the signal transmission device 1E and the electronic device 2E via the twisted pair cable 5.
Note that, hereinafter, signal transmission is assumed to be performed from the electronic device 2E to the signal transmission device 1E, but conversely, signal transmission may be performed from the signal transmission device 1E to the electronic device 2E, or communication may be performed bidirectionally.

As illustrated in FIG. 12, the electronic device 2E of the present embodiment includes a communication unit 21E. The communication unit 21E has the same function as that of the communication unit 21 of FIG. 4 described in the first embodiment, converts communication data output from the signal processing unit 22 into the communication signal Sf, and transmits the communication signal Sf from the transmission circuit 210 to the signal transmission device 1E by differential transmission via the twisted pair cable 5. Capacitors 25P and 25N are connected between the communication unit 21E and the twisted pair cable 5. The capacitors 25P and 25N function as high-pass filters that transmit the communication signal Sf transmitted from the transmission circuit 210 and block the direct current Id supplied from the signal transmission device 1E via the twisted pair cable 5. Between the power supply unit 23 and the twisted pair cable 5, filter circuits 24P and 24N are connected, which function as low-pass filters (PoC filters) that transmit the direct current Id supplied from the signal transmission device 1E via the twisted pair cable 5 and block the communication signal Sf transmitted from the transmission circuit 210. The filter circuit 24P includes the inductors L11 and L12 having different frequency characteristics, and the filter circuit 24N includes inductors L13 and L14 having different frequency characteristics.

Further, the signal transmission device 1E of the present embodiment includes a communication unit 11E. The communication unit 11E has the same function as that of the communication unit 11 of FIGS. 1 and 4 described in the first embodiment, causes the reception circuit 110 to receive the communication signal Sf transmitted from the electronic device 2E by differential transmission via the twisted pair cable 5, and performs decoding of communication data included in the communication signal Sf and acquisition of the communication quality information Qf in the communication signal processing unit 112. Capacitors 15P and 15N are connected between the communication unit 11E and the twisted pair cable 5. The capacitors 15P and 15N function as high-pass filters that transmit the communication signal Sf transmitted from the electronic device 2E via the twisted pair cable 5 and block the direct current Id output from the power supply unit 13. Between the power supply unit 13 and the twisted pair cable 5, filter circuits 14P and 14N are connected, which function as low-pass filters (PoC filters) that transmit the direct current Id output from the power supply unit 13 and block the communication signal Sf transmitted from the electronic device 2E via the twisted pair cable 5. The filter circuit 14P includes the inductors L1 and L2 having different frequency characteristics, and the filter circuit 14N includes the inductors L3 and L4 having different frequency characteristics. Note that, as in the signal transmission system of the present embodiment, the combination of signal transmission and power supply over a twisted pair cable is called power over data lines (PoDL).
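A worked example of why the inductors pass the direct current Id while the capacitors pass the communication signal: the impedance magnitudes are |Z_L| = 2*pi*f*L for an inductor and |Z_C| = 1/(2*pi*f*C) for a capacitor. The component values below are illustrative assumptions, not values from the embodiment.

    import math

    def z_inductor(f_hz, l_h):
        # Impedance magnitude of an inductor: grows with frequency.
        return 2 * math.pi * f_hz * l_h

    def z_capacitor(f_hz, c_f):
        # Impedance magnitude of a capacitor: infinite at DC, small at RF.
        return math.inf if f_hz == 0 else 1 / (2 * math.pi * f_hz * c_f)

    L = 10e-6   # assumed 10 uH PoC-filter inductor
    C = 100e-9  # assumed 100 nF coupling capacitor
    for f in (0.0, 1e6, 1e9):  # DC, 1 MHz, 1 GHz
        print("f=%10.0f Hz  |Z_L|=%10.2f ohm  |Z_C|=%12.4g ohm"
              % (f, z_inductor(f, L), z_capacitor(f, C)))

At DC the inductor is nearly 0 ohm (Id passes) and the capacitor is effectively open (Id is blocked); at 1 GHz the relation reverses, which is exactly the separation the PoC filters rely on.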
According to the sixth embodiment of the present invention described above, even in a case where a PoDL system that performs signal transmission and power supply using a twisted pair cable between two electronic devices is employed, a failure of a PoC filter can be detected.

Seventh Embodiment

Next, a signal transmission system according to a seventh embodiment of the present invention will be described. In the present embodiment, an example in which two electronic devices are both connected to one signal transmission device will be described.

FIG. 13 is a diagram illustrating a configuration of the signal transmission system according to the seventh embodiment of the present invention. In the signal transmission system illustrated in FIG. 13, the signal transmission device 1B and two electronic devices 2B are connected to each other via signal wirings 4A to 4D, and a four-way switch 6 is provided between the signal wirings 4A and 4B and the signal wirings 4C and 4D, so that a signal is transmitted between the signal transmission device 1B and the two electronic devices 2B via the signal wirings 4A to 4D and the four-way switch 6. In the signal transmission system of the present embodiment, each of the signal transmission device 1B and the two electronic devices 2B has the configuration described in FIG. 7 in the third embodiment. However, the signal transmission system of the present embodiment may be configured using a signal transmission device and an electronic device described in another embodiment. Further, the two electronic devices 2B may have the same specifications or different specifications. For example, the camera unit 26 included in one of the electronic devices 2B may be a high-resolution camera and the camera unit 26 included in the other electronic device 2B may be a low-resolution camera, so that the performance of the two electronic devices 2B is differentiated.

The four-way switch 6 is a switch capable of switching, as desired, the connection state between the signal wirings 4A and 4B and the signal wirings 4C and 4D. For example, by switching the four-way switch 6 so as to connect the signal wiring 4A to the signal wiring 4C and the signal wiring 4B to the signal wiring 4D, one of the electronic devices 2B can be connected to the signal transmission device 1B via the signal wirings 4A and 4C, and the other electronic device 2B can be connected to the signal transmission device 1B via the signal wirings 4B and 4D. Further, by switching the four-way switch 6 so as to connect the signal wiring 4A to the signal wiring 4D and the signal wiring 4B to the signal wiring 4C, one of the electronic devices 2B can be connected to the signal transmission device 1B via the signal wirings 4A and 4D, and the other electronic device 2B can be connected to the signal transmission device 1B via the signal wirings 4B and 4C. Furthermore, by switching the four-way switch 6 so as to connect one of the signal wiring 4C and the signal wiring 4D to both the signal wiring 4A and the signal wiring 4B, the two electronic devices 2B may both be connected to the signal transmission device 1B via the signal wiring 4C or the signal wiring 4D. The switching state of the four-way switch 6 is controlled by the signal transmission device 1B.
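The connection states just described, and the abnormality-driven switching explained in the next paragraph, can be sketched as a lookup over healthy wirings; the state names and the health flags are illustrative assumptions.

    SWITCH_STATES = {
        "straight": {("4A", "4C"), ("4B", "4D")},
        "crossed":  {("4A", "4D"), ("4B", "4C")},
        "shared_C": {("4A", "4C"), ("4B", "4C")},  # both devices via 4C
        "shared_D": {("4A", "4D"), ("4B", "4D")},  # both devices via 4D
    }

    def select_state(wiring_ok):
        # Pick a switching state of the four-way switch 6 that uses only
        # signal wirings judged normal.
        for name, links in SWITCH_STATES.items():
            if all(wiring_ok[a] and wiring_ok[b] for a, b in links):
                return name
        raise RuntimeError("no fully healthy switch state available")

    # Example: wiring 4D judged abnormal -> route both devices through 4C.
    print(select_state({"4A": True, "4B": True, "4C": True, "4D": False}))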
The signal transmission device 1B determines the presence or absence of an abnormality in each transmission path on the basis of the communication signals Sf and Sb transmitted and received to and from the two electronic devices 2B, and controls the switching state of the four-way switch 6 according to the determination result. In this way, even in a case where an abnormality occurs in any of the transmission paths, at least one of the electronic devices 2B can be kept in operation, so that the availability of the signal transmission system can be improved.

Note that, in each of the embodiments described above, information other than the error frequency of communication data may be used as the communication quality information Qf and Qb. For example, in a case where the communication unit 11 of the signal transmission device 1 includes a waveform equalization circuit, a setting parameter of the waveform equalization circuit can be used as the communication quality information Qf and Qb. Note that the waveform equalization circuit is a circuit for realizing an equalizer function of compensating for signal attenuation due to the signal wiring 4 by adjusting the waveform of a communication signal received by the communication unit 11 according to the frequency characteristics of the signal wiring 4. Since such a waveform equalization circuit is well known, detailed description thereof will be omitted. Further, the communication unit 11 can also measure a communication signal waveform as described in FIG. 3 and use the measurement result as the communication quality information Qf and Qb. In addition, any information that appropriately represents the quality of communication performed by the communication signals Sf and Sb can be used as the communication quality information Qf and Qb.

The embodiments and various variations described above are merely examples, and the present invention is not limited to the content of these examples unless the characteristics of the invention are impaired. Further, although various embodiments and variations are described above, the present invention is not limited to the content of these embodiments and variations. Other modes considered within the scope of the technical idea of the present invention are also included in the scope of the present invention.

REFERENCE SIGNS LIST

1, 1A, 1B, 1C, 1D, 1E signal transmission device
2, 2A, 2B, 2D, 2E electronic device
4, 4A, 4B, 4C, 4D signal wiring
5 twisted pair cable
6 four-way switch
11, 11A, 11B, 11E communication unit
12, 12B, 12C signal processing unit
13, 13C power supply unit
14, 14D, 14N, 14P filter circuit
15, 15N, 15P capacitor
21, 21A, 21E communication unit
22 signal processing unit
23 power supply unit
24, 24D, 24N, 24P filter circuit
25, 25N, 25P capacitor
26 camera unit
110 reception circuit
111 transmission circuit
112 communication signal processing unit
113 communication speed change function
120, 120B, 120C filter state determination unit
210 transmission circuit
211 reception circuit
11863246
DETAILED DESCRIPTION

The present disclosure is directed, in part, to systems and methods that provide broadband internet (e.g., high-speed internet) to a premises. In some instances, the systems and methods discussed herein may use Broadband over Power Line (BPL) technology (alternatively referred to as powerline communication (PLC) and/or internet over power line (IPL)) to deliver broadband internet to a variety of premises, such as homes, multi-family units, and/or places of business. Utilizing the existing electrical wiring of the premises may alleviate the need to build broadband facilities and structures and/or route cables to individual premises. In such instances, BPL technology makes use of the existing electrical wiring of the premises. Because of this, in some instances, utility companies (e.g., water, gas, electricity) that provide power (or other utilities) may also provide broadband internet as a service. This may consolidate consumer expenditures and increase user convenience.

In some instances, a plurality of base station radio devices may be disposed atop vertical structures (e.g., utility poles and street lights) and communicate with customer premises devices (CPEs) disposed at the premises. The base station radio devices may communicatively couple to an internet service provider (ISP), wide area network (WAN), and/or service provider network (SPN) that offers or otherwise provides broadband internet to consumers. In some instances, the base station radio devices may communicatively couple to the SPN via a backhaul network, such as fiber-optic cables and/or millimeter wave (mmWave) technology. Additionally, or alternatively, the base station radio devices may communicate with the SPN, or be connected to the SPN, via powerlines of a utility service using BPL technology (e.g., over medium- or low-voltage powerlines), and/or may use other technologies. Regardless of the specific implementation, the communication between the base station radio devices and the SPN represents a high-speed communication path for providing broadband internet. The base station radio devices may also include components for routing, networking, and switching functions to facilitate the conveyance of broadband internet between consumers (e.g., users, entities, etc.), other consumers, and the SPN. In some instances, the base station radio devices may communicate with one another (e.g., via mmWave) to transmit and receive data and/or couple to the SPN.

The base station radio devices function to provide broadband internet to the CPEs (and ultimately the premises) by wirelessly communicating with the CPEs. To wirelessly communicate with one another, the base station radio devices and the CPEs may include modems, antenna(s), an array of antenna(s), transceiver systems, antenna feed networks, and so forth. In some instances, the antenna(s) of the base station radio devices and/or the antenna(s) of the CPEs may include a plurality of modems and/or antennas for communicating over a range of frequencies (e.g., mid frequencies, high frequencies, etc.). The antenna(s) of the base station radio device(s) and/or the CPEs may include antennas for any number of disparate communication technologies (e.g., 4G LTE, 5G, etc.). Additionally, or alternatively, the CPE may include various interfaces for communicating with the SPN via wired technologies and physical layer (PHY) technologies at the premises (e.g., coaxial cable, DSL, fiber, etc.).
In some instances, the CPE may include modular components for interchanging modems, antenna(s), and so forth depending on which communication technologies are utilized for delivering broadband internet. In some instances, the base station radio device and the CPE may utilize, or communicate over, any dynamic shared spectrum (DSS). By way of example, the base station radio device and the CPE may communicate over a 3100 MHz to 4200 MHz DSS, such as a C-band spectrum (3700 MHz-4200 MHz). In some instances, the base station radio device and the CPE may communicate within specific ranges of the DSS, such as the Citizens Broadcast Radio Spectrum (CBRS) between 3550 MHz and 3700 MHz. However, other frequencies are envisioned and may be utilized. Regardless, the base station radio device and the CPE may include corresponding modem(s) and antenna(s) for communicating over, or at, desired frequencies. In some instances, and as noted above, the antenna(s) and/or modems of the base station radio device and/or the CPE may be modular and interchangeable depending on the specific implementation. With the varying frequencies at which the base station radio device and the CPE communicate, under-utilized frequencies may be used depending on demand and load. That is, the base station radio device and the CPE may communicate with one another over a plurality of frequencies, depending on the current loads within those frequencies.

In some instances, the base station radio device and/or the CPE may include multiple radio transceiver ports coupled to the ports of one or more antenna elements via a coupling network. This may result in the CPE having a multiple-input and/or multiple-output (MIMO) antenna for receiving high frequencies and/or mid frequencies. In some instances, the antenna(s) may represent a massive MIMO for transmitting and receiving signals across a wide spectrum of frequencies. The antenna(s), a transceiver system, and/or an antenna feed network of the base station radio devices and/or the CPEs may also be configured to beamform or beam steer in order to increase signal strength with the base station radio device(s). For example, the antenna(s) of the base station radio devices and/or the CPEs may be steered to transmit signals in a specific direction rather than broadcasting signals in all directions. In such instances, the antenna(s) (or the array of antenna(s)) may determine a direction of interest for sending and receiving a stronger signal in that direction. As another example, the antenna(s) of the base station radio devices and/or the CPEs may transmit signals in a plurality of directions rather than broadcasting signals in all directions or in a single direction. Here, the antenna(s) of the CPEs may form multiple beams within communication channels between the CPE and the base station radio device. In such instances, the antenna(s) (or the array of antenna(s)) may determine the directions of interest for sending and receiving a stronger composite signal to the base station radio device.
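The beam-steering idea described above amounts to feeding each array element with a progressive phase shift. A sketch for a uniform linear array follows; the element count, spacing, and carrier frequency are illustrative assumptions (3.6 GHz falls inside the CBRS range mentioned above).

    import math

    C_M_S = 3e8          # speed of light
    F_HZ = 3.6e9         # assumed carrier inside the 3550-3700 MHz CBRS band
    WAVELENGTH = C_M_S / F_HZ
    D = WAVELENGTH / 2   # assumed half-wavelength element spacing

    def steering_phases(n_elements, theta_deg):
        # Per-element feed phases (radians) that point the main beam toward
        # theta_deg off boresight: phase_n = -2*pi*d*n*sin(theta)/lambda.
        theta = math.radians(theta_deg)
        k = 2 * math.pi / WAVELENGTH
        return [-k * D * n * math.sin(theta) for n in range(n_elements)]

    for n, phase in enumerate(steering_phases(4, 30.0)):
        print("element %d: %7.1f deg" % (n, math.degrees(phase)))

With half-wavelength spacing and a 30-degree target, the phases step by -90 degrees per element, which is the textbook progressive-phase result.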
The CPEs are installed at the premises of the consumer (e.g., home and/or place of business) and may represent a fixed wireless device. In some instances, the CPEs may be installed on an exterior side of the premises at a demarcation point at which services (e.g., power, phone, television, etc.) are provided to the premises. In some instances, the CPE may be installed within an electric meter panel and coupled to the electric meter and the electrical wiring of the premises. For example, the CPE may include a housing that fits within an existing electric meter panel and, when installed, is interposed between the electric meter panel and the electric meter. This coupling may provide power to the CPE, transfer power to the electric meter for metering, and connect the CPE with (or to) the electrical wiring of the premises. In some instances, a router may be plugged into an outlet within the interior of the premises and located proximate to the CPE to reduce dissipation and/or noise. The router and the CPE may be paired with one another as part of an out-of-box experience (OOBE) for providing broadband internet. Thereafter, the router may broadcast broadband internet within the interior of the premises.

The CPE includes one or more interfaces for communicating with the router. For example, the CPE may include a BPL interface and a modem coupled to the antenna(s). The BPL interface and the modem may be communicatively coupled with one another. In some instances, the BPL interface and the modem (and/or the antenna(s)) may be components of a system on a chip (SoC) of the CPE. As the antenna(s) of the CPE receives the broadband internet from the base station radio device(s), or via PHY technologies, the modem may communicate the broadband data to the BPL interface (e.g., via a digital and/or Ethernet interface). The BPL interface is configured to transmit the broadband data over the electrical wiring of the premises to the router. However, the CPE may utilize other existing wiring of the premises (e.g., plastic fiber, twisted pair, coax, etc.) for providing broadband data to the premises. In such instances, the CPE may include a LAN interface. The router, which is located within the interior side of the premises, may include a BPL interface for receiving the broadband data from the CPE. The BPL modem of the CPE and the BPL modem of the router therefore allow the CPE and the router to communicate over the electrical wiring of the premises. The router further includes a wireless modem and antenna(s) for distributing broadband internet to the premises, or to consumer device(s) within the premises. For example, the antenna(s) of the router may include a Wi-Fi module for supplying the premises with Wi-Fi (e.g., 2.4 GHz Wi-Fi, 5 GHz Wi-Fi, 6 GHz Wi-Fi, etc.). The antenna(s) may also be modular or interchangeable to provide additional Wi-Fi frequency bands to the premises. In some instances, the router may broadcast the broadband internet via wireless and/or wired technologies (e.g., Ethernet, coaxial cable, USB, twisted pair, plastic fiber, etc.). In some instances, the antenna(s), BPL interface, and/or modem of the router may be components of a SoC of the router.

Wirelessly coupling the base station radio device and the CPE may avoid conventional problems associated with providing broadband internet to individual premises. For example, costs, time, and inconveniences, sometimes referred to as the last mile problem, are often limiting factors in providing broadband internet. Compared to conventional techniques that physically connect premises to the SPN, using wireless communication between the base station radio device and the CPE, as well as BPL technology, may reduce these challenges. For example, consumers may no longer be expected to be home while broadband internet is set up. In this manner, coupling the CPEs to the electrical wiring of the premises (i.e., the electric meter panel and the electric meter) also addresses challenges associated with building penetration.
However, in non-line-of-sight (NLOS) applications, topography and obstructions make it difficult to transmit and receive signals. For example, the signals transmitted by the CPE may be reflected, diffracted, refracted, and scattered. In some instances, to overcome challenges associated with wireless communications between the CPE and the base station radio device, the CPE may include, or the antenna of the CPE may represent, a multi-antenna array having antennas arranged with different polarizations. The antenna(s) may include sub-arrays having multiple elements. In some instances, each sub-array of the multi-antenna array may include two orthogonally polarized elements. Additionally, each element of the sub-array may include a dedicated antenna feed port. By selecting specific polarizations, and by determining the phase and/or amplitude of the antenna feeds, the multi-antenna array may have a radiation pattern with a predetermined variable polarization. In some instances, the predetermined variable polarization may be a function of the direction of departure and arrival of signals. For example, the multi-antenna array may have linear, circular, and/or elliptical polarizations as a function of the direction of arrival/departure in the pattern.

The multi-antenna array includes a structure for supporting the antenna feed network and orienting the elements such that the multi-antenna array realizes a directional radiation pattern in azimuth and elevation that is broader than the radiation pattern of an individual sub-array. For example, in some instances, the multi-antenna array may be implemented as a non-planar array having sub-arrays arranged to form a pattern whose beamwidth exceeds the individual sub-array beamwidths. That is, each of the individual sub-arrays has an individual beamwidth, but when these beams experience constructive interference, a beam of the multi-antenna array may have a width that exceeds that of the individual sub-arrays. In some instances, the sub-arrays may include two orthogonally polarized elements, and each element within the sub-array may have a dedicated antenna feed port. The sub-array may be implemented as a patch antenna having a first patch feed (and associated port) and a second patch feed (and associated port) that are orthogonally polarized. In some instances, the multi-antenna array may include a single transmission port, a single receiving port, and/or a single transmission/receiving port. In instances where only a single transmission/receiving port is included, the single transmission/receiving port may split or combine the transmitted/received signal amongst the sub-arrays and drive individual elements of the sub-arrays. This splitting/combining makes it possible for the CPE to include a single transmission/receiving port but have variable polarizations across the pattern. Additionally, this results in equal power, or predetermined unequal power, being transceived by the element(s).

The multi-antenna array increases the number of transmission and receiving ports in MIMO and coherent space-polarization MIMO radio systems. For example, in conventional systems, if there is only one transmission port, then only one polarization may be used to illuminate the propagation channel. However, in such instances, this polarization may not be optimal for the communication channel between the CPE and the base station radio device (or between two devices).
The multi-antenna array may be capable of eliminating polarization dependent loss (PDL) and utilizing a method of polarization mode dispersion combining to optimize the signal-to-interference-plus-noise ratio (SINR) at the receiver to extend range and increase throughput. The differently polarized elements of the multi-antenna array allow receivers to implement PDL mitigation and adaptive interference mitigation based at least in part on polarization mode dispersion (PMD) processing. For example, the multi-antenna array provides a continuous distribution of polarizations across linear, elliptical, and circular. By way of example, for a multi-antenna array that includes three patches, assume that the left patch has a vertical polarization, the center patch has a horizontal polarization (i.e., a polarization orthogonal to the others), and the right patch has a vertical polarization. Additionally, the left and right patches may be driven with equal phase while the center patch may be driven with a composite 90-degree phase shift. That is, the composite phase of the center patch is the sum of the phase delay realized in the feed and the time-of-flight phase delay due to the physical separation of the patch antennas. If the polarization at the left patch is measured, a vertical polarization is verified. As the measurement position moves from left to right around the pattern, the polarization varies from the initial vertical polarization, through elliptical polarization, to circular polarization, and once again to vertical polarization at the right patch. The multi-antenna array seeks to create a variable polarization over its beamwidth for polarization diversity. The variation in polarization is observed to change along a trajectory around the Poincaré sphere. In some instances, the receiver may receive signals having the vertical, circular, and/or elliptical polarizations.

By diversifying the polarization, the CPE may more effectively communicate with the base station radio device. For example, when communicating with a multi-port receiver equipped with coherent spatial and/or polarization combining capability, such as the base station radio device, there is a significant advantage if the transmitter maximizes the spatial and polarization diversity. In effect, the multi-antenna array of the CPE enhances the apparent diversity via predetermined polarizations to allow the base station radio device to implement polarization dependent loss mitigation and to mitigate interference and jamming through spatial and polarization processing over the bandwidth of the signal.

In some instances, the polarization diversity may be accomplished, at least in part, by precoding the phase and/or amplitude of the feeds into the elements of the sub-arrays. For example, to adjust the polarization, and/or the direction of departure/arrival of signals in the multi-antenna array, the phase and/or amplitude of the feeds to/from the elements may be predetermined. In some instances, the phase and/or amplitude may be determined as a function of the direction of arriving/transmitted signals (i.e., where the multi-antenna array is receiving signals from and transmitting signals to).
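The three-patch example above can be made numeric with Jones vectors: the vertical patches contribute (0, 1) and the horizontal center patch contributes (1, 0) with its composite 90-degree phase shift. How strongly each patch is seen from a given look position is modeled here with a simple triangular weighting, which is an illustrative assumption rather than the array's true pattern.

    import cmath, math

    # (jones vector, feed phase in radians, patch position)
    PATCHES = [
        ((0, 1), 0.0,         -1.0),  # left patch: vertical
        ((1, 0), math.pi / 2,  0.0),  # center patch: horizontal, +90 deg
        ((0, 1), 0.0,         +1.0),  # right patch: vertical
    ]

    def classify(ex, ey):
        # Name the polarization state of the summed field (Ex, Ey).
        if abs(ex) < 1e-9:
            return "linear vertical"
        if abs(ey) < 1e-9:
            return "linear horizontal"
        dphi = abs(cmath.phase(ex) - cmath.phase(ey))
        if abs(abs(ex) - abs(ey)) < 1e-9 and abs(dphi - math.pi / 2) < 1e-9:
            return "circular"
        return "elliptical"

    def polarization_at(position):
        # Sum the patch contributions for a look position in [-1, 1].
        ex = ey = 0j
        for (jx, jy), phase, pos in PATCHES:
            weight = max(0.0, 1.0 - abs(position - pos))  # assumed taper
            c = weight * cmath.exp(1j * phase)
            ex += jx * c
            ey += jy * c
        return classify(ex, ey)

    for p in (-1.0, -0.5, 0.0, 0.5, 1.0):
        print("position %+.1f: %s" % (p, polarization_at(p)))

With the assumed taper, the sketch reproduces the qualitative trajectory described above: vertical at the left patch, circular between the patches, and vertical again at the right patch; the exact states at intermediate positions depend on the weighting chosen.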
By way of example, the sub-array may be precoded or programmed to exhibit a 70-degree pattern in both azimuth and elevation, while the multi-antenna array achieves a 3 dB pattern of +/-90 degrees in azimuth with respect to the multi-antenna array's boresight and an elevation of 0 degrees to +70 degrees with respect to the plane formed by the earth's surface. In some instances, the multi-antenna array may include two-way or three-way power splitters/combiners to drive the ports of each element within the multi-antenna array. Using orthogonal elements for adjacent sub-arrays serves to reduce the constructive or destructive interference, as orthogonal components do not interfere with each other. Stated alternatively, the use of polarization diversity in the non-planar multi-antenna array reduces the parasitic effects of beam overlap and sidelobes. In some instances, the beamwidths from the sub-arrays may partially overlap, resulting in destructive interference that decreases the array gain in specific directions. Conversely, beam overlap and sidelobes may interfere constructively, resulting in gain peaking over the beamwidth. This constructive interference may result in excess gain in specific directions that exceeds Federal Communications Commission (FCC) limits for Effective Isotropic Radiated Power (EIRP). If the EIRP is exceeded in any specific direction, the transmitter power may be required to be reduced for the entire array beamwidth, resulting in shorter range and coverage over the beamwidth. This gain variation across the beamwidth is undesirable. Therefore, the array element precoding (polarizations, power, and phase) is predetermined to maximize polarization diversity while minimizing gain variation across the array beamwidth.

The present disclosure provides an overall understanding of the principles of the structure, function, device, and system disclosed herein. One or more examples of the present disclosure are illustrated in the accompanying drawings. Those of ordinary skill in the art will understand and appreciate that the devices, the systems, and/or the methods specifically described herein and illustrated in the accompanying drawings are non-limiting embodiments. The features illustrated or described in connection with one embodiment, or instance, may be combined with the features of other embodiments or instances. Such modifications and variations are intended to be included within the scope of the disclosure and appended claims.

FIG. 1 illustrates an example environment 100 for providing broadband internet to a premises 102 (e.g., building, house, multi-dwelling complex, etc.). In some instances, the environment 100 may include a system 104 for providing the broadband internet to the premises 102. The system 104 may, in some instances, include one or more base station radio devices 106, one or more customer premises devices (CPE) 108, one or more routers 110, and/or one or more consumer device(s) 112. The base station radio device 106 is shown coupled to utility pole structures 114 so as to be disposed above the ground. In some instances, the base station radio devices 106 may be configured to mount to the utility pole structures 114, or to other structures, to vertically dispose the base station radio devices 106 above the ground. For example, the base station radio devices 106 may be disposed on a side of a building, a light pole, stop lights, telephone poles, and so forth.
In some instances, the base station radio device 106 may be disposed on the utility pole structures 114 for communicatively coupling to a service provider network (SPN) 116. In some instances, a backhaul 134 may couple the base station radio devices 106 to the SPN 116. The backhaul 134 may, in some instances, represent a network for providing broadband internet to the premises 102. For example, the backhaul 134 may include or represent cables (e.g., fiber-optic cables) that span between the utility pole structures 114 and ultimately route to the SPN 116 for providing broadband internet. In some instances, the backhaul 134 may first route to a middle-mile location with broadband internet (e.g., hospital, police station, etc.) before routing to the SPN 116. Additionally or alternatively, in some instances, the base station radio devices 106 may communicate with the SPN 116 via wireless technologies (e.g., mmWave). However, the backhaul 134 may be routed differently than shown for communicating with the SPN 116. For example, rather than the backhaul 134 being disposed on the utility pole structures 114, the backhaul 134 (or portions thereof) may be buried, and the base station radio devices 106 may couple to the backhaul 134. In such instances, the base station radio devices 106 may be disposed on vertical structures (e.g., light poles). Regardless of the specific implementation, the base station radio devices 106 may be connected to the SPN 116 for accessing broadband internet provided by the SPN 116.

Disposing the base station radio devices 106 on the utility pole structures 114 utilizes an existing network of vertical structures for providing broadband internet. Furthermore, as discussed herein, disposing the base station radio devices 106 on the utility pole structures 114, or other vertical structures, may provide an unobstructed transmission path (or a path with reduced obstruction) between the base station radio devices 106 and the CPEs 108, and vice versa. Additionally, as noted above, in communities that lack the utility pole structures 114, the base station radio devices 106 may be disposed on vertical structures other than the utility pole structures 114, such as light poles.

The base station radio devices 106 may function to provide broadband internet to one or more premises. For example, a first base station radio device may be disposed on a first powerline structure to provide broadband internet to one or more first premises, while a second base station radio device may be disposed on a second powerline structure to provide broadband internet to one or more second premises. In some instances, the one or more first premises may be the same as, or include some of, the one or more second premises. For example, referring to FIG. 1, the base station radio device 106 may provide broadband internet to multiple premises, including the premises 102. However, it is to be understood that more than two base station radio devices 106 may be included, and any number of base station radio devices 106 may be installed for providing broadband internet to a geographical region. For example, within densely populated areas, a larger number of base station radio devices 106 may be installed per block, radius, mile, etc. as compared to less densely populated areas. In this sense, the system 104 may be scaled as needed depending on demand, usage, and/or throughput requirements. The base station radio devices 106 may communicate with nearby CPEs, such as the CPE 108, installed at the premises 102.
The base station radio device106may wirelessly communicate with the CPE108via a communication channel118to provide broadband internet offered by the SPN116. In some instances, the communication channel118between the base station radio device106and the CPE108may support any dynamically shared spectrum (DSS) (e.g., between 3100 MHz and 4200 MHz). In some instances, the communication channel118may support the Citizens Broadband Radio Service (CBRS) between 3550 MHz and 3700 MHz. In some instances, the communication channel118may include any low-band, mid-band, and/or high-band frequencies, regardless of the DSS. However, it is to be understood that the communication channel118may support any range of frequencies for providing broadband internet to the premises102. The CPE108includes antenna(s)120(or a multi-antenna array) for communicating, via the communication channel118, with an antenna of the base station radio device106(not shown inFIG.1). In some instances, depending on the range of frequencies (or spectrum) at which the base station radio device106and the CPE108are configured to communicate, the CPE108may be configured accordingly. For example, the antenna(s)120may be interchangeable to accommodate for the spectrum, or range of frequencies, at which the base station radio device106and the CPE108communicate. In such instances, components of the CPE108may be modular or configurable to change antennas, modems, interfaces, and so forth. Multiple antennas, or antenna housings, may be configured to attach to the CPE108. Such configuration may make the CPE108modifiable to accommodate new technologies and communication protocols. The CPE108may include, or the antenna(s)120of the CPE108may represent, a multi-antenna array having antennas (e.g., two, three, four, etc.) arranged with different polarizations. The antenna(s)120may include sub-arrays having multiple patches or elements (e.g., two). In some instances, each sub-array of the multi-antenna array may include two orthogonally polarized elements and each element of the sub-array may include a dedicated antenna feed port. By selecting specific polarizations, and determining the phase and/or amplitude of the antenna feeds, the antenna(s)120may have a radiation pattern with a predetermined variable polarization. In some instances, the predetermined variable polarization may be a function of the direction of departure and arrival of signals at the antenna array. For example, the antenna array may have linear, circular, and/or elliptical polarizations, which may be a function of the direction of arrival/departure in the antenna array pattern. By diversifying the polarization, the CPE108may more effectively communicate with the base station radio device106. Stated alternatively, the base station radio device106may more efficiently communicate with the CPE108given the variable polarization over a beamwidth generated by antenna(s)120of the CPE108. For example, the antenna(s)120of the CPE108may enhance the apparent diversity via predetermined polarizations to allow the base station radio device106to implement PDL mitigation and mitigate interference and jamming through spatial and polarization processing over the bandwidth of the signal. This is in comparison to conventional antennas that are conditioned on a fixed polarization or fixed dual orthogonal polarization. The CPE108may be constrained such that, for example, only one transmit port is provided.
In this example, the polarization diversity may be accomplished, at least in part, by splitting the transmitter power and precoding the phase and/or amplitude of the transmit signal feeds into the elements of the sub-arrays. For example, to adjust the polarization, and/or the direction of departure/arrival of signals in the multi-antenna array, the phase and/or amplitude of the feeds to/from the elements may be predetermined. In some instances, the phase and/or amplitude may be determined as a function of the direction of arriving signals/transmitting signals (i.e., where the multi-antenna array is receiving signals from and transmitting signals to). The selection of the amplitudes and phase shifts is predetermined to minimize transmitter gain variation across the antenna pattern and maximize the polarization diversity over the antenna pattern. In some instances, the CPE108may be configured to attach as a meter collar to an existing electric meter (or panel), which may be a smart meter of the premises102. Additional details of the meter collar are discussed herein. However, generally, the meter collar includes a power module configured to supply power to the CPE108and which couples to the electrical wiring of the premises102. Alternatively, the CPE108may attach to the premises102at any demarcation point between a utility service and the premises102(e.g., electrical panel). The CPE108may include one or more interface(s) for communicatively coupling with the router110and providing the broadband internet to the consumer device(s)112. In some instances, the interfaces communicatively couple the CPE108and the router110over the electrical wiring of the premises102for providing broadband internet to the consumer device(s)112(e.g., personal computer, laptop, television, printer, audio/video receiver, audio equipment, video equipment, mobile devices, tablets, etc.) within the premises102. For example, the CPE108is shown including a first BPL interface122for communicating with a second BPL interface124of the router110. In addition, the CPE108may include a first modem module126for communicating with a second modem module128of the router110. The CPE108may include alternate interfaces as well, such as a LAN interface for communicating with the router110. Collectively, the BPL interfaces and the modem modules may provide broadband internet to the consumer device(s)112. For example, the BPL interfaces allow the CPE108and the router110to communicate over the electrical wiring of the premises102for coupling the consumer device(s)112to the SPN116. The modem modules act to wirelessly receive and transmit data between the SPN116and the consumer device(s)112. To briefly illustrate, the first modem module126, via the antenna(s)120, may receive broadband data from the base station radio device(s)106. This broadband data is communicated to the first BPL interface122. The first BPL interface122then transmits the broadband data through the premises structure130, via the electrical wiring of the premises102, to the second BPL interface124. The second modem module128then receives the broadband data from the second BPL interface124, and using antenna(s)132, broadcasts the broadband data via Wi-Fi to the consumer device(s)112. For example, the second modem module128may include a Wi-Fi module to supply wireless internet to the premises102.
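To make the downstream pathway just described concrete, the following minimal sketch traces broadband data from the first modem module, across the BPL link over the electrical wiring, to the second modem module's Wi-Fi broadcast. The class and method names are illustrative assumptions for this sketch only, not the disclosed implementation or any real API.

    # Minimal sketch of the downstream data path described above. All class
    # and method names are illustrative assumptions, not the disclosed
    # implementation.

    class ModemModule:
        """Wireless hop (base station <-> CPE, or router <-> consumer devices)."""
        def __init__(self, name):
            self.name = name

        def receive_over_air(self, payload):
            print(f"{self.name}: received {len(payload)} bytes over the air")
            return payload

        def broadcast_wifi(self, payload):
            print(f"{self.name}: broadcasting {len(payload)} bytes via Wi-Fi")

    class BplInterface:
        """BPL hop over the premises electrical wiring."""
        def __init__(self, name):
            self.name = name

        def send_over_wiring(self, payload, peer):
            print(f"{self.name}: sending over electrical wiring to {peer.name}")
            return payload

    first_modem = ModemModule("first modem module (CPE)")
    first_bpl = BplInterface("first BPL interface (CPE)")
    second_bpl = BplInterface("second BPL interface (router)")
    second_modem = ModemModule("second modem module (router)")

    # Downstream: base station -> CPE -> electrical wiring -> router -> Wi-Fi.
    data = first_modem.receive_over_air(b"broadband data from base station")
    data = first_bpl.send_over_wiring(data, second_bpl)
    second_modem.broadcast_wifi(data)

The upstream direction, described next, simply traverses the same hops in reverse order.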
Additionally, while one pathway of communication is described, it is to be understood that the router110may similarly communicate with the CPE108for transmitting data from the CPE108to the base station radio device106and the SPN116. The first modem module126and/or the second modem module128may be configured for certain spectrums. For example, the first modem module126may be modular for adapting the CPE108to communicate with the base station radio device106over a range of frequencies, and the second modem module128may be modular for adapting the CPE108to communicate with the consumer device(s)112over a range of frequencies. For example, in some instances, the first modem module126may represent a CBRS modem for communicating with the base station radio device106in the CBRS (3550 MHz-3700 MHz). Alternatively, the first modem module126may represent a DSS modem for communicating with the base station radio device106via any frequency of the DSS (3100 MHz-4200 MHz). However, it is to be understood that the first modem module126may include other modules (e.g., WWAN), interfaces, or components for wirelessly communicating with the base station radio device106over any frequency, or range of frequencies, such as mmWave. The first modem module126may additionally or alternatively be configured for wired technologies (e.g., cable, DSL, twisted pair, etc.). In such instances, the CPE108may have ports or receptacles for receiving the physical connections. Additionally, the first modem module126may be interchangeable depending on the specific configuration of the CPE108(e.g., CBRS, BPL, mmWave, LAN, Optical, etc.) or the router110(e.g., 5G, Wi-Fi, etc.). The CPE108may therefore be modular, with interchangeable modem module(s) depending on the specific implementation and technologies at the premises102. In some instances, the CPE108may include an expansion port(s) (e.g., UART, I2C, SPI, SDIO, USB, GPIOs, etc.), a real-time clock, temperature sensor(s), a Joint Test Action Group (JTAG) interface, and/or a 6-axis sensor. Additionally, the second modem module128may represent other modems coupled to the antenna(s)132and which are configured to provide Wi-Fi to the consumer device(s)112. For example, the second modem module128may be configured to provide wireless connectivity other than 2.4 GHz and 5.0 GHz Wi-Fi (e.g., Near Field Communication (NFC)). Additionally, or alternatively, in some instances, the router110may provide the broadband internet to the consumer device(s)112via wired technologies such as Ethernet, USB, coaxial, fiber optic, and the like. In such instances, the router110may include plug-ins for receiving the wired technologies. In some instances, the router110may represent a wall plug-in or device that otherwise plugs into a power outlet within the premises102. The router110may receive power, via the power outlet, and ultimately via the electrical wiring of the premises102. As the CPE108couples to the electrical wiring of the premises102, via coupling to the electric meter, the CPE108may communicate with the router110over the electrical wiring within the premises102. For example, the meter collar may couple the CPE108with the neutral and earth ground wires and/or the line voltage wires that are fed into the premises102(or which feed into the breaker box of the premises102). Once the router110is plugged in, the CPE108may communicate with the router110using the electrical wiring (e.g., wires).
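Because the first modem module is described above as interchangeable by spectrum, the selection of a module can be pictured as a simple band lookup. The following is a minimal sketch; the CBRS and DSS band edges follow the ranges given above, while the mmWave range and the module names are assumptions made for illustration.

    # Minimal sketch of selecting an interchangeable modem module by the
    # frequency it must support. CBRS and DSS band edges (MHz) follow the
    # ranges given above; the mmWave range and module names are assumptions.

    MODEM_BANDS_MHZ = {
        "CBRS modem": (3550, 3700),
        "DSS modem": (3100, 4200),
        "mmWave modem": (24250, 52600),  # assumed range for illustration
    }

    def select_modem(freq_mhz):
        """Return the narrowest modem module whose band covers freq_mhz."""
        candidates = [
            (hi - lo, name)
            for name, (lo, hi) in MODEM_BANDS_MHZ.items()
            if lo <= freq_mhz <= hi
        ]
        if not candidates:
            raise ValueError(f"no modem module covers {freq_mhz} MHz")
        return min(candidates)[1]

    print(select_modem(3600))  # CBRS modem (narrower band than DSS)
    print(select_modem(3900))  # DSS modem

Preferring the narrowest covering band mirrors the idea that a purpose-built module (e.g., CBRS) would be fitted when the deployment spectrum is known in advance.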
The BPL interfaces of the CPE108and the router110, respectively, decipher, interpret, and communicate with one another for transmitting and receiving data. In some instances, the CPE108and the router110may be paired together as part of an installation process in order to provide the broadband internet. In some instances, the router110may be plugged into a wall outlet located closest to the electric meter for reducing noise and/or decay of the broadband data over the electrical wiring of the premises102. For example, the broadband data may become attenuated with increased wire lengths between the CPE108and the router110. Additionally, appliances and/or devices that pull from the power supplied to the premises102, via the electrical wiring, may generate noise. In some instances, the router110may be installed within a breaker box, or in close proximity to the breaker box. As shown inFIG.1, the CPE108may mount to an exterior (e.g., outdoor) of the premises102and the router110may mount within or be disposed within an interior (e.g., indoor) of the premises102. This combination, or respective positioning of the CPE108and the router110, may alleviate issues associated with building penetration. For example, wireless signals may fail to penetrate building materials (e.g., siding, roofing, studs, windows, etc.) of homes and/or businesses. By mounting the CPE108on an exterior-side of the premises102, and communicatively coupling the CPE108with the router110located on the interior-side of the premises102, broadband internet may be provided to the consumer device(s)112. This may provide high-throughput wireless technologies (e.g., 4G LTE, 5G, etc.) to the premises102and without experiencing lag, latency, and/or buffering. However, in wireless technologies, non-line-of-sight (NLOS) applications may introduce challenges. These challenges may be addressed, in part, by the polarization diversity of the CPE108. For example, in NLOS applications, signals incident at the base station radio device106may be cross-polarized. This may result in polarization dependent loss (PDL), and/or the transmission path (channel) may exhibit frequency-selective multi-path fading where reflected copies of the signal cancel one another at the antenna of the base station radio device106to create a transmission null. However, the diversity of the polarization within the antenna(s)120allows the base station radio device106to implement PDL mitigation and adaptive interference mitigation based at least in part on polarization mode dispersion (PMD) processing. In other words, the antenna(s)120intentionally introduces diversity for transmitting signals and for communicating over channels with the base station radio device106. This makes it possible in MIMO applications to perform digital baseband space or polarization processing with transmission/receiving ports. For example, when a single transmission port is intended to communicate with a multi-port receiver equipped with coherent spatial and/or polarization combining capability, there is a significant advantage if the transmitter can maximize the spatial and polarization diversity transmitted into the channel. In effect, the antenna(s)120may support a predetermined polarization as a function of direction (e.g., azimuth) and based on the spatial or polarization properties of the elements within the antenna(s)120. In some instances, the antenna(s)120may have a compact non-planar array of two or more dual-polarized sub-arrays. However, the antenna(s)120may have any number of dual-polarized sub-arrays, such as four.
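The variable polarization produced by weighting two orthogonal feeds, as described above, can be expressed compactly with Jones vectors. The following is a minimal numerical sketch assuming ideal, lossless orthogonal elements; it illustrates the principle rather than the disclosed feed network.

    # Minimal Jones-vector sketch: driving two orthogonally polarized
    # elements with precoded amplitude and relative phase yields linear,
    # circular, or elliptical polarization. Ideal lossless elements assumed.
    import numpy as np

    H = np.array([1.0, 0.0])  # horizontally polarized element
    V = np.array([0.0, 1.0])  # vertically polarized element

    def radiated_polarization(a_h, a_v, phase_deg):
        """Normalized Jones vector for feed amplitudes a_h, a_v and phase."""
        j = a_h * H + a_v * np.exp(1j * np.deg2rad(phase_deg)) * V
        return j / np.linalg.norm(j)

    print(radiated_polarization(1, 1, 0))     # 45-degree linear polarization
    print(radiated_polarization(1, 1, 90))    # circular polarization
    print(radiated_polarization(1, 0.5, 90))  # elliptical polarization

Sweeping the amplitudes and relative phase per feed is what lets the radiation pattern carry a polarization that varies with direction of departure or arrival.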
In some instances, the CPE108or the antenna(s)120of the CPE108may be configured to beam-form for achieving optimum link properties with the base station radio device(s)106. In some instances, the beam-forming may be achieved by using an antenna array or a MIMO antenna. Additionally, in some instances, the MIMO antenna may combine or aggregate signals received over disparate spectrums (or frequencies). Once combined, these signals may be provided to the premises102as broadband internet. For example, in some instances, the CPE108may combine broadband data received via mmWave frequencies and other spectrums (e.g., CBRS) for providing high bandwidth and throughput to the premises102. Such aggregation may also utilize currently available bandwidths and/or loads on the DSS. That is, a portion of the broadband internet supplied to the premises102may come by way of CBRS, while another portion may come by way of mmWave. Additionally, in some instances, the antenna(s)120of the CPE108may be positioned at various positions on and/or around the premises102for achieving an increased signal strength with the base station radio device106. For example, as shown inFIG.1, the CPE108may mount to a side of the premises102facing the base station radio device106. In some instances, however, the electric meter may not be facing the base station radio device106and/or a line of sight between the electric meter and the base station radio device106may be obstructed (e.g., trees, fences, buildings, etc.). In such instances, when the CPE108mounts to the electric meter, the antenna(s)120may be similarly obstructed, which may impact the communication channel118and/or reduce a signal strength between the base station radio device106and the CPE108. Here, the antenna(s)120, in some instances, may extend from the CPE108(coupled to the utility meter) to dispose the antenna(s)120at various positions for potentially eliminating physical obstructions between the base station radio device106and the antenna(s)120. In these instances, the antenna(s)120may communicate the broadband data back to the CPE108via a cable. The cable may extend to various lengths using, for example, a cable recoil system (e.g., torsional spring, retractable reel, etc.). To find the optimum location of the antenna(s)120on the premises102, various techniques or instruments may be used. Once the optimum location is found (e.g., highest signal strength), the antenna(s)120may be mounted at that location. In some instances, and as alluded to above, the CPE108may be covered by another base station radio device106mounted on another powerline structure and in communication with the CPE108. These base station radio devices106may also be connected, via the backhaul134, to the same SPN116to provide the broadband internet to the CPE108. In some instances, the CPE108may connect with a nearest base station radio device106, a base station radio device106with which the CPE108has a strongest signal strength, and/or a base station radio device106having bandwidth to connect with the SPN116. In other instances, the CPE108may be simultaneously connected to multiple base station radio devices106for allowing aggregation of data from the multiple base station radio devices106. In some instances, the CPE108may be configured to read electrical information, such as electrical consumption and/or generation over a certain period, statistical data analysis thereof, outage information, etc. associated with the electric meter.
The CPE108may also communicatively couple to other internet-accessible devices (e.g., IoT) of the premises102for reading electrical usage and/or status. For example, the CPE108may report, or provide, data indicating energy savings, usage, load to service, and/or other statistical information of the premises102. In such instances, the CPE108may tap into power systems or components of the premises102for providing such information (e.g., batteries, solar panels, etc.). In some instances, the CPE108may be configured to transmit the electrical information, usage data, and/or status data to an associated service entity (not shown), via the communication channel118(and/or another communication channel), for advanced metering and providing essential services. Although the router110is discussed as being separate from the CPE108, in some instances, the router110may be integrated within the CPE108. In such instances, the integrated CPE may be disposed within the premises102and/or exterior to the premises102. Additionally, or alternatively, in some instances the CPE108and the router110may wirelessly communicate with one another. In such instances, the CPE108and the router110may not communicate using the existing electrical wiring within the premises102. Instead, the CPE108and the router110may include wireless interfaces/modems for communicating with one another. However, as noted above, in some instances, the CPE108may act as a wireless router for providing broadband internet to the premises102. Furthermore, in some instances, the router110may be integrated within the CPE108and/or the CPE108(with the router110) may be mounted in the interior or exterior of the premises102. In some instances, the CPE108may provide for an advanced metering infrastructure (AMI). Generally, AMI is an integrated system of smart meters, communications networks, and data management systems that enables two-way communication between utility companies and consumers. In some instances, AMI may eliminate the need for physically walking or driving to premises within a community to measure readings of power, gas, water, and so forth. In some instances, the CPE108may be used as a component of AMI for providing utility data or reporting utility data. This data may be used to optimize utilities, such as system loss, reporting maintenance planning, improving customer perception and engagement, water management, conservation and energy efficiency, consumption versus revenue trends and forecasting, power quality monitoring, theft identification, and revenue recovery. In part, this optimization may require an understanding of the premises102, and/or the condition and importance of the overall structure at the premises102. In some instances, this insight may be gleaned by aggregating utility data, including work history and condition rating, into a single system, balancing the importance of one factor versus another, and updating any condition changes as they occur. By receiving this data in real time, the utility company may obtain a more reliable view of the health of the premises102, the consumption of utilities and/or services at the premises102, and may make more meaningful investment and work decisions on how to best balance compliance, reliability, safety, and risk. For example, the CPE108may provide utility data (or other data) regarding the various components within the premises102to the utility service. These components may include meters (e.g., gas, water, electricity, etc.)
and/or appliances (e.g., coffee pot, light switch, oven, etc.). By communicatively coupling to these meters and/or appliances, via BPL and/or wireless technologies, data associated with use and consumption may be obtained. For example, the CPE108may determine electrical usage of certain appliances, and/or the router110may communicatively couple to appliances within the premises102(e.g., IoT). This coupling may be used to report usage and consumption data, and/or may be used to control certain appliances (e.g., turning on a furnace when the consumer approaches the premises). In some instances, tapping into the electrical wiring may be used to control the assets within the premises102. For example, by providing broadband data transmission between electrical outlets within the premises102, or via the wireless communication pathways, there is the potential to network all kinds of common appliances and control their associated operations. FIG.2illustrates example components of the base station radio device106, the CPE108, and the router110. Discussed above, the base station radio device106may be in communication via wired technologies (e.g., a fiber-optic cable network) and/or wireless technologies (e.g., mmWave) with the SPN116. The base station radio device106may include one or more processor(s)200, computer-readable media202, interface(s)204, and/or antenna(s)206. The processor(s)200may include a central processing unit (CPU), a graphics processing unit (GPU), both a CPU and a GPU, or other processing units or components. Additionally, each of the processor(s)200may possess its own local memory, which also may store program modules, program data, and/or one or more operating systems. The processor(s)200may be coupled to the computer-readable media202and execute computer executable instructions stored in the computer-readable media202. The processor(s)200may also couple modules and components of the base station radio device106to one another and may perform various functions including instructing and causing the modules and components of the base station radio device106to perform their associated functions. For example, the processor(s)200may cause components of the base station radio device106to transmit and receive broadband data from the SPN116, as well as transmit and receive broadband data from the CPE108. As the base station radio device106communicatively couples to multiple CPEs108to provide broadband internet, the base station radio device106may store, in the computer-readable media202, indicators and/or identifying information of individual CPEs108. Such information may be utilized for communicating (e.g., routing) with respective CPEs108at respective premises102. For example, a particular base station radio device106may provide broadband internet to multiple premises. As the base station radio device106sends data to respective premises, or receives data from the respective premises, the base station radio device106may tag or otherwise mark this outgoing and incoming data. This marking may indicate which premises is the recipient and/or originator of the data. As such, the base station radio device106may transmit the data to the respective premises, or to the proper recipients. The interface(s)204couple the base station radio device106to the SPN116(e.g., via the fiber-optic broadband network) for accessing broadband internet. Additionally, the interface(s)204may couple the base station radio device106to the CPE108.
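The per-premises tagging and routing described above amounts to marking each payload with an identifier of its recipient or originator, matched against the identifying information stored in the computer-readable media. A minimal sketch follows; the identifiers and data structures are illustrative assumptions only.

    # Minimal sketch of per-premises tagging at the base station. The
    # identifiers and data structures are illustrative assumptions.
    from dataclasses import dataclass

    @dataclass
    class TaggedFrame:
        premises_id: str  # marks the recipient and/or originator premises
        payload: bytes

    # Identifying information of individual CPEs, as would be stored in
    # the base station's computer-readable media (hypothetical entries).
    registered_cpes = {"premises-102": "CPE-108", "premises-201": "CPE-201"}

    def route_downstream(frame):
        """Return the CPE that should receive this tagged frame."""
        try:
            return registered_cpes[frame.premises_id]
        except KeyError:
            raise LookupError(f"unknown premises: {frame.premises_id}")

    print(route_downstream(TaggedFrame("premises-102", b"broadband data")))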
For example, the interface(s)204may be coupled to the processor(s)200and the antenna(s)206for communicating with the CPE108(and/or a plurality of CPEs108) to provide broadband internet. In some instances, the interface(s)204may include modems, modules, or other components for wirelessly coupling with the CPE108. For example, the interface(s)204may include a DSS modem module, a CBRS modem module, a C-band modem module, a WWAN modem module, and/or any other modem/module for communicating, via the communication channel118, with the CPE108(e.g., mid frequencies, high frequencies, etc.). The base station radio device106may therefore include a plurality of interface(s)204for communicating with corresponding interfaces (e.g., the first modem module126) of the CPEs108. In some instances, the interface(s)204may include interfaces for interacting with wide area networks (WAN), cellular networks, and so forth. The antenna(s)206may include an array of antennas for transmitting data to, and receiving data from, the CPE108. In some instances, the antenna(s)206may beam-form for achieving optimum link properties with the CPE108and/or the SPN116. The base station radio device106may include additional interface(s) for communicating with other base station radio devices106(and ultimately the SPN116) using wired and/or wireless technologies. Additionally, the antenna(s)206may be capable of receiving signals with varying polarizations from the CPE108(e.g., vertical, horizontal, elliptical, etc.). In some instances, the base station radio device106may include input/output (I/O) components coupled to the processor(s)200. The I/O components may be configured to communicate with a computing device, such as a computing device loaded with appropriate applications for programming or checking the status of the base station radio device106. For example, the computing device may be operated by a utility service or company providing the broadband internet to the premises102, and which is used for monitoring and/or troubleshooting issues experienced by the base station radio device106and/or the CPE108. The I/O components may also provide other information from the premises102, such as usage data, data generated by appliances within the premises102(e.g., IoT), for use in energy savings, system management, and/or load to service determination. The base station radio device106communicatively couples to the CPE108via the communication channel118. As shown, the CPE108may include one or more processor(s)208, computer-readable media210, the antenna(s)120, the first BPL interface122, and the first modem module126, as discussed above with regard toFIG.1. In some instances, the processor(s)208may include a CPU and/or a GPU. Additionally, the processor(s)208may possess its own local memory, which also may store program modules, program data, and/or one or more operating systems. The processor(s)208may be coupled to the computer-readable media210and execute computer executable instructions stored in the computer-readable media210. The processor(s)208may also be coupled to modules and components of the CPE108and may perform various functions including instructing and causing the modules and components of the CPE108to perform their associated functions. For example, the processor(s)208may cause components of the CPE108to send and receive broadband data to and from the base station radio device106, and to send and receive broadband data to and from the router110.
For example, as the antenna(s)120receive broadband data from the base station radio device106, the processor(s)208may cause this data to be sent to the router110via the first BPL interface122. The processor(s)208may therefore route broadband data from the antenna(s)120to interfaces of the CPE108, and vice versa, for providing broadband internet to the premises102. The first BPL interface122of the CPE108is shown communicating with the second BPL interface124of the router110. For example, a communication channel212exists between the first BPL interface122and the second BPL interface124. Noted above, the communication channel212may represent a communication channel over the electrical wiring of the premises102, where the broadband data is transmitted over wires or other cables within the premises102. However, although the discussion herein is with regard to providing broadband internet over the electrical wiring, the CPE108and the router110may wirelessly communicate with one another. In such instances, the communication channel212may represent a wireless communication channel. Additionally, the CPE108and the router110may communicate over other wiring of the premises102. The first BPL interface122communicatively couples to the first modem module126and the second BPL interface124communicatively couples to the second modem module128. The first modem module126may include a corresponding module for communicating with the interface(s)204of the base station radio device106(e.g., DSS, CBRS, G.hn, WWAN, C-band, etc.). As the first modem module126receives broadband data, via the antenna(s)120, the first modem module126may interpret the broadband data. The first BPL interface122then transmits the broadband data to the second BPL interface124, whereby the second modem module128may interpret the broadband data. Therein, the second modem module128may broadcast the broadband data to the consumer device(s)112via the antenna(s)132as broadband internet. As the CPE108receives data from the base station radio device106(via the antenna(s)120and the first modem module126(e.g., CBRS, DSS, WWAN, etc.)), the first BPL interface122may transmit (via the communication channel212) the data to the second BPL interface124. The second BPL interface124receives the data and the second modem module128(e.g., 2.4 GHz and/or 5.0 GHz Wi-Fi module) communicatively coupled to the second BPL interface124then broadcasts this data, via the antenna(s)132, to the consumer device(s)112. Similarly, the second modem module128may receive data from the consumer device(s)112(via the antenna(s)132). The second BPL interface124transmits the data to the first BPL interface122and the first modem module126broadcasts this data to the base station radio device106via the antenna(s)120. The communicative coupling between the first BPL interface122and the first modem module126, and between the second BPL interface124and the second modem module128, as well as the connection of the CPE108with the base station radio device106, permits the system104to provide broadband internet over existing electrical wiring of the premises102. Although the first BPL interface122and the first modem module126are shown as separate components, in some instances, the first BPL interface122and the first modem module126may be integrated as a single component. In some instances, the first BPL interface122and the first modem module126may be components of a SoC.
Noted above, the first modem module126may also be modular and interchangeable depending on the frequencies at which the first modem module126communicates with the base station radio device106. Additionally, or alternatively, the second BPL interface124and the second modem module128may be integrated as a single component. In some instances, the second BPL interface124and the second modem module128may be components of a SoC. The second modem module128may also be modular and interchangeable depending on the Wi-Fi or network provided to the premises102. In some instances, the antenna(s)120may be located inside, outside, or on the outside surface of a housing of the CPE108, and/or mounted at other locations distant or proximate to the electric meter. In some instances, the antenna(s)120may be configured to beam-form for achieving optimum signal strengths with the base station radio device(s)106. In some instances, the antenna(s)120, or an antenna array, may support 3100 MHz to 4200 MHz dual port/polarization, include a gain of 4 dBi, and may include an antenna pattern of 180 degrees azimuth and 0 to +70 degrees vertical. As introduced above, the CPE108may include, or the antenna(s)120may represent, a multi-antenna array having antennas arranged with different polarizations. The antenna(s)120may include sub-arrays having multiple elements and each sub-array of the multi-antenna array may include two orthogonally polarized elements. Additionally, the antenna(s)120may have a radiation pattern with a predetermined variable polarization. In some instances, the predetermined variable polarization may be a function of the direction of departure and arrival of signals at the antenna(s)120. In some instances, the polarization diversity may be accomplished, at least in part, by precoding the phase and/or amplitude of the antenna feeds into the elements of the sub-arrays. Elements of the antenna array will constructively interfere if the elements realize the same polarization and relative phase. When constructive interference is undesirable (e.g., when gain flatness is desired to meet FCC radiation limits), the relative phase of the two interfering elements may be precoded with a 180° phase offset, resulting in the replacement of the constructive interference with destructive interference. Thus, the gain peak is replaced with a gain null at a particular pattern azimuth and elevation. In some instances, this may be accomplished, at least in part, by determining a geometry for a compact antenna, as well as gain and pattern objectives for the antenna array. Three-dimensional simulation may be carried out to obtain equal phase and amplitude patterns. The selected elements may be converted to orthogonal polarization to eliminate first-order pattern peaks. Therein, the phase and/or amplitude may be adjusted for co-polarized elements to flatten the pattern response. Additionally, the phase and/or amplitude of cross-polarized elements may be adjusted to maximize polarization diversity. The CPE108includes a power module214coupled to the processor(s)208. The power module214may be coupled to the electric meter of the premises102to supply electrical power from the electric meter to some or all components and modules of the CPE108. The CPE108, or a housing of the CPE108, may be configured to attach as a meter collar to the electric meter. Coupling the CPE108to the utility meter in this manner also communicatively couples the first BPL interface122with the second BPL interface124via the electrical wiring of the premises102.
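The 180° phase-offset precoding described above can be checked numerically: two co-polarized elements whose fields arrive in phase combine into a gain peak, while precoding one feed with a 180° offset replaces the peak with a null. A minimal sketch, assuming two ideal elements with equal field strength at the observation direction:

    # Minimal sketch of the 180-degree precoding described above: two
    # co-polarized, equal-strength fields combine constructively in phase
    # and cancel with a 180-degree offset. Ideal elements are assumed.
    import numpy as np

    def combined_power_db(relative_phase_deg):
        """Combined power (dB, relative to one element) of two equal fields."""
        total = abs(1.0 + np.exp(1j * np.deg2rad(relative_phase_deg))) ** 2
        return 10 * np.log10(total) if total > 1e-12 else float("-inf")

    print(combined_power_db(0))    # ~ +6.0 dB: gain peak (may exceed EIRP limits)
    print(combined_power_db(180))  # -inf: gain null (destructive interference)
    print(combined_power_db(90))   # ~ +3.0 dB: partial combining

The +6 dB in-phase peak is the kind of localized gain excess that could violate the FCC EIRP limits discussed earlier, which is what the precoded offsets are chosen to avoid.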
In this sense, the power module214may tap into the electrical wiring of the premises102for sending broadband data through the wiring of the premises102, for delivery to the router110. Using this form of communication allows broadband internet to penetrate the premises102using existing wiring networks and alleviates the building penetration problem. The CPE108may additionally include input/output (I/O) components216coupled to the processor(s)208. The I/O components216may be configured to communicate with a programming device, such as a computing device of the utility service, or other device loaded with appropriate applications for programming or checking the status of the CPE108(or the broadband internet). This communication may provide for testing, system upgrades, reboots, and so forth. The communication may also include data from an IoT within the premises102for use in load to service determination, energy savings, system usage, and so forth. In such instances, a user interface (UI) may be provided for interfacing with the CPE108. In some instances, the I/O components216may comprise a connector, such as a telco connector, a USB connector, an RJ45 connector, and the like, and/or an RF communication module such as an NFC, Bluetooth communication, or Wi-Fi communication module for such communication. In some instances, the CPE108may also include lighting element(s)218that indicate an operational state of the CPE108(e.g., light emitting diodes (LEDs)). The lighting element(s)218may indicate, for example, a strength of the broadband internet (e.g., Received Signal Strength Indicator (RSSI)), a packet error rate (PER) associated with receiving broadband data from the CPE108and/or the router110, or a health of the connection with the base station radio device106(e.g., the communication channel118) and/or the connection with the router110(e.g., the communication channel212). The lighting element(s)218may additionally or alternatively indicate power and BPL link status, and may be disposed on a side of the CPE108and/or viewable at all angles. The computer-readable media210of the CPE108may also store electrical information associated with the electric meter, electrical information of the premises102, connectivity of the consumer device(s)112, and the like for reporting to a utility service. In some instances, the power module214may read the electrical information from memory of the electric meter for transmitting the electrical information to the associated service entity using the broadband internet. The CPE108may also include a global positioning system (GPS) component and/or other locating components for determining a location of the CPE108amongst a network or grid. Temperature sensors of the CPE108may also monitor a temperature within the CPE108. Additionally, the CPE108may include components for determining an orientation or angle at which the CPE108, or the antenna(s), are disposed (e.g., gyroscope, inclinometer, etc.). The router110may include one or more processor(s)220, computer-readable media222, and the second BPL interface124and the second modem module128as discussed above with regard toFIG.1. In some instances, the processor(s)220may include a CPU and/or a GPU. Additionally, the processor(s)220may possess its own local memory, which also may store program modules, program data, and/or one or more operating systems. The processor(s)220may be coupled to the computer-readable media222and execute computer executable instructions stored in the computer-readable media222.
The processor(s)220may also be coupled to modules and components of the router110and may perform various functions including instructing and causing the modules and components of the router110to perform their associated functions. The router110includes a power module224coupled to the processor(s)220. The power module224may be coupled to a power supply of the premises102(e.g., the electrical wiring) and receive electrical power to power components and modules of the router110. Coupling the router110to the electrical wiring in this manner couples the second BPL interface124with the first BPL interface122via electrical wiring of the premises102. In some instances, the router110may include input/output (I/O) components226coupled to the processor(s)220. The I/O components226may be configured to communicate with a computing device, such as a computing device loaded with appropriate applications for programming or checking the status of the router110. For example, the computing device may be operated by a utility service providing the broadband internet to the premises102, and which is used for monitoring and/or troubleshooting issues experienced by the base station radio device106and/or the CPE108. Discussed above, the router110includes the antenna(s)132for broadcasting the broadband internet within the premises102. Additionally, or alternatively, the router110may include plug-ins (e.g., Ethernet) for coupling to the consumer device(s)112. As used herein, a processor, such as the processor(s)200,208, and/or220, may include multiple processors and/or a processor having multiple cores. Further, the processor(s) may comprise one or more cores of different types. For example, the processor(s) may include application processor units, graphic processing units, and so forth. In one implementation, the processor(s) may comprise a microcontroller and/or a microprocessor. The processor(s) may include a graphics processing unit (GPU), a microprocessor, a digital signal processor or other processing units or components known in the art. Alternatively, or in addition, the functionality described herein can be performed, at least in part, by one or more hardware logic components. For example, and without limitation, illustrative types of hardware logic components that may be used include field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), system-on-a-chip systems (SOCs), complex programmable logic devices (CPLDs), etc. Additionally, each of the processor(s) may possess its own local memory, which also may store program components, program data, and/or one or more operating systems. Computer-readable media, such as the computer-readable media202,210, and/or222, may include volatile and nonvolatile memory, removable and non-removable media implemented in any method or technology for storage of information, such as computer-readable instructions, data structures, program components, or other data. Such memory may include, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology (e.g., embedded Multi-Media Controller (eMMC), SPI NOR), CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, RAID storage systems, DDR-SDRAM or any other medium which can be used to store the desired information and which can be accessed by a computing device.
The memory may be implemented as computer-readable storage media (“CRSM”), which may be any available physical media accessible by the processor(s) to execute instructions stored on the memory. In one basic implementation, CRSM may include random access memory (“RAM”) and Flash memory. In other implementations, CRSM may include, but is not limited to, read-only memory (“ROM”), electrically erasable programmable read-only memory (“EEPROM”), or any other tangible medium which can be used to store the desired information and which can be accessed by the processor(s). FIGS.3A and3Billustrate the CPE108for communicatively coupling with the base station radio device106and the router110within the premises102. The CPE108is shown including a collar300for coupling the CPE108to an existing electric meter. However, the CPE108may include other bodies for coupling to electrical wiring at the premises102. In some instances, the CPE108may represent a fixed wireless device installed at the premises102, within an existing electric meter panel. In such instances, the collar300may include components for coupling to an existing electric meter panel and receiving the electric meter. Although the collar300is discussed as being part of the CPE108, or as having certain components, it should be understood that references to the CPE108may include the collar300and the components thereof. That is, the CPE108may include the collar300(as well as its components) and the collar300may represent a portion of the CPE108placed at or on the premises102. The collar300may include a cylindrical shaped housing or body302. The body302extends between a first end304and a second end306, opposite the first end304and spaced apart in the Z-direction from the first end304. In some instances, the first end304may correspond to a front of the collar300and the second end306may correspond to a back of the collar300. The first end304is shown including an opening or annulus308for receiving an electric meter. In some instances, the annulus308may include a circular shape and may be sized and configured for receiving the electric meter. The annulus308provides access to an interior310of the collar300. Discussed herein, the interior310may include components of the CPE108and/or features for receiving the electric meter. For example, the interior310may include receptacles or slots312for receiving prongs of the electric meter. The slots312may extend along a lengthwise direction of the body302(e.g., Z-direction) and may function to complete a circuit from incoming power to the breaker box (or electrical panel) located within the premises102. In some instances, the slots312may include five slots corresponding to hot wires and neutral wire(s). The CPE108includes a top portion314mounted atop the body302(e.g., in the Y-direction). The top portion314may include a base316and a cover318. The base316may provide a platform for supporting the cover318or onto which the cover318mounts. As shown, the base316may include features that conform to a curvature or shape of the body302and features for receiving the cover318. For example, one side of the base316may be curved for accommodating the body302and a second side may be planar for providing a substantially flat platform for the cover318. Disposed behind (i.e., beneath, underneath, etc.) the cover318may be the antenna(s)120and components of the CPE108, as discussed above with regard toFIG.2.
The cover318may represent a radome for enclosing and protecting the antenna(s)120as well as other components of the CPE108from environmental conditions (e.g., rain, dust, debris, etc.). In some instances, the cover318may be manufactured from materials including, but not limited to, plastics, rubber-coated air-supported fabric, and/or other materials with low radio frequency loss characteristics. The location of the cover318may increase an ease of maintenance, servicing, and/or upgrading components of the CPE108. For example, as technology advances and/or as vendors continue to develop higher throughput technologies (e.g., 5G), the antenna(s)120and/or interfaces of the CPE108may be upgraded. Here, the top portion314may uncouple from the collar300(or the body302). A new top portion, which may include upgraded antenna(s), circuits, etc., may be disposed in place of the existing top portion. In such instances, locating the antenna(s) within the top portion314, and external to the interior310of the collar300, may allow for interchangeability as new technologies are introduced, as components fail and are in need of repair, and/or for configuring the CPE108to communicate with the base station radio device106using a certain spectrum (e.g., CBRS, C-band, etc.) and/or any other wireless technologies. In some instances, the top portion314may have a quick disconnect feature from the body302for quickly removing the top portion314and/or to replace the top portion314. FIGS.4A and4Billustrate the CPE108, including the collar300, from opposing ends. For example,FIG.4Aillustrates the first end304of the body302, such as the front, andFIG.4Billustrates the second end306of the body302, such as the back. Discussed above, the interior310may include the slots312, such as a first slot400(1), a second slot400(2), a third slot400(3), a fourth slot400(4), and a fifth slot400(5). The first slot400(1), the second slot400(2), the third slot400(3), the fourth slot400(4), and the fifth slot400(5) may collectively be referred to herein as “the slots312.” The slots312serve to transfer power as supplied by a utility service to a breaker box within the premises102. An electric meter couples to the slots312for completing a circuit such that power may be supplied to the premises102. In some instances, the first slot400(1) may couple to a first hot wire received from the utility service for providing a first hot lead, the second slot400(2) may couple to a neutral wire received from the utility service, the third slot400(3) may operably couple to the first hot wire (or the first hot lead) for providing power to the breaker box, the fourth slot400(4) may couple to a second hot wire received from the utility service for providing a second hot lead, and the fifth slot400(5) may operably couple to the second hot wire (or the second hot lead) for providing power to the breaker box. In other words, power may transfer through the electric meter, between the first slot400(1) and the third slot400(3), and between the fourth slot400(4) and the fifth slot400(5). The second slot400(2) serves to ground the premises102. In this sense, the first slot400(1) and the fourth slot400(4) may be on the utility side (utility service side), while the third slot400(3) and the fifth slot400(5) may be on the premises side (consumer side). The collar300may include a plurality of prongs for connecting to slots, or other receptacles, within the electric meter panel.
For example, inFIG.4B, the collar300is shown including five prongs, such as a first prong402(1), a second prong402(2), a third prong402(3), a fourth prong402(4), and a fifth prong402(5). Collectively, the first prong402(1), the second prong402(2), the third prong402(3), the fourth prong402(4), and the fifth prong402(5) may be referred to as “the prongs402.” Each of the prongs402may couple or be connected to corresponding slots312for transferring power and/or grounding the premises102. For example, the first prong402(1) may couple to the first slot400(1), the second prong402(2) may couple to the second slot400(2), the third prong402(3) may couple to the third slot400(3), the fourth prong402(4) may couple to the fourth slot400(4), and the fifth prong402(5) may couple to the fifth slot400(5). In this sense, the collar300may act as an extension or coupler for connecting the electric meter to the electric meter panel. Once the prongs402couple with corresponding slots of the electric meter panel (or otherwise couple to the electric meter panel) and prongs of the electric meter couple within the slots312of the collar300, the collar300may be interposed between the electric meter panel and the electric meter. Such coupling may not impact the functioning of the electric meter and/or the power supplied to the premises102. However, interposing the collar300in this manner provides power to the CPE108and allows the CPE108, or components thereof (e.g., the first BPL interface122, the first modem module126, the power module214, etc.), to receive power and connect to the electrical wiring of the premises102for providing broadband internet using BPL technology. InFIG.4A, at least a portion of a first connector404is shown extending into the interior310. The first connector404may couple to a second connector of the top portion314. The coupling between the first connector404and the second connector may communicatively couple the top portion314, or portions therein such as the antenna(s)120, to other components of the CPE108. Additionally, a coupling of the first connector404and the second connector of the top portion314may communicatively couple the CPE108to electrical wiring of the premises102for providing broadband internet to the premises. For example, the first connector404may communicatively couple to the electrical wiring of the premises102(e.g., via coupling to the slots312and/or the prongs402via cables, wires, etc.). Additionally, the first connector404may include prongs, receptacles, male/female connectors, etc. for providing power to the top portion314. For example, the second connector of the top portion314may snap or fit into receptacles of the first connector404for providing power to the top portion314, transferring data, etc. A passage of the body302may be disposed through an opening of the body302, atop the body302(Y-direction), for providing access to the first connector404. FIGS.5A and5Billustrate side views of the CPE108.FIG.5Aillustrates a first side of the CPE108andFIG.5Billustrates a second side of the CPE108. Discussed above, the first end304(and the annulus308) may be sized and configured (e.g., shaped) for receiving the electric meter. Disposed around the annulus308, or at the first end304, may be a coupler500(e.g., worm-gear clamp, crimping socket, hose clamp, etc.) for securing the electric meter to and/or within the collar300. The coupler500may prevent the electric meter from falling out of the collar300or otherwise disengaging from the body302of the collar300.
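The pass-through wiring described above, in which each prong of the collar carries the same conductor as the corresponding slot, can be summarized as a one-to-one mapping. A minimal sketch follows; the role labels paraphrase the slot descriptions given earlier.

    # Minimal sketch of the collar's pass-through wiring: prong n carries
    # the same conductor as slot n, so the electric meter still completes
    # the circuit. Role labels paraphrase the description above.

    SLOT_ROLES = {
        1: "first hot lead (utility side)",
        2: "neutral (grounds the premises)",
        3: "first hot lead (premises side)",
        4: "second hot lead (utility side)",
        5: "second hot lead (premises side)",
    }

    PRONG_TO_SLOT = {n: n for n in SLOT_ROLES}  # one-to-one pass-through

    for prong, slot in PRONG_TO_SLOT.items():
        print(f"prong 402({prong}) -> slot 400({slot}): {SLOT_ROLES[slot]}")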
The second end306may be sized and configured (e.g., shaped) for being disposed within an opening or receptacle of the electric meter panel. In doing so, the prongs402may couple, engage, or otherwise attach to slots of the electric meter panel for receiving power. The cover318is shown extending from a top of the base316by a distance502. The distance502may be such that, when electric meter panels are stacked, the CPEs108are of a form factor to reside between adjacent electric meters. For example, in apartment complexes, business complexes, condominium complexes, or other multi-family units, electric meters (and electric meter panels) are often placed in stacked relationships, disposed side by side, etc. For example, in an apartment building that includes twenty units, there may be twenty power meters arranged in a four by five grid. As the electric meters are in close proximity (e.g., stacked relationship, disposed side-by-side), the CPE108may include a form factor that is small enough to fit within a gap disposed between adjacent vertical meters. As part of this, and as shown, the cover318may extend the distance502from the base316. The distance502may be less than the distance (or gap) interposed between adjacent electric meters. As such, the CPEs108(or the collar300) may be installed on such premises. In some instances, a portion of the cover318may slant rearwards from a first end, located proximate to the first end304of the body302, to a second end, located proximate to the second end306of the body302. This slant extends backwards (Y-plane) towards the second end306of the body302to reduce a form factor of the CPE108. In some instances, this backwards slant may also correspond to an orientation of the antenna(s)120within the CPE108(or behind the cover318). For example, discussed herein, the antenna(s)120may be disposed at an angle or orientation to increase a field of view to the base station radio device106. Slanting the antenna(s)120in this manner directs the antenna(s) upwards towards the base station radio devices106. The slant may also reduce interference with incoming and outgoing signals. As such, the cover318may include a corresponding feature (e.g., slant) for the antenna(s)120. FIGS.6A and6Billustrate additional side views of the CPE108and the collar300.FIG.6Aillustrates a top of the collar300andFIG.6Billustrates a bottom of the collar300. The cover318may include a top600that is spaced apart from a bottom602. The bottom602may be coupled to the base316, and the top600may be disposed above the bottom602(Y-direction). As shown, in addition to the backwards slant of the cover318from the first end304to the second end306of the body302, as discussed above with regard toFIGS.5A and5B, the cover318may curve along the X-direction. For example, the cover318may include a first side604and a second side606that is spaced apart in the X-direction from the first side604. Between the first side604and the second side606, the cover318may curve, arc, or bend. In some instances, the cover318may provide a wider beamwidth in both the azimuth and elevation patterns, by 3-5 degrees, and/or the cover318may provide a slightly lower gain (˜0.3 dB), which is related to the wider pattern. InFIG.6B, the body302is shown including an opening608for providing access to the interior310. In some instances, the collar300may include a hatch for covering up or being disposed over the opening608.
In some instances, the opening608may be used to service components of the collar300, to access fittings for coupling the collar300to the electric meter panel and/or the electric meter, and/or to inspect the CPE108. FIG.7illustrates the CPE108, showing the top portion314as transparent or in faint lines to illustrate the antenna(s)120residing therebeneath. As introduced above, the antenna(s)120may represent a multi-antenna array for wirelessly communicating with the one or more devices (e.g., the base station radio device106) over one or more communication channels (e.g., radio frequency (RF) spectrum in the microwave to mmWave range of spectrums). In some instances, the antenna(s)120may include an array of sub-arrays, such as a first sub-array700(1), a second sub-array700(2), and/or a third sub-array700(3). In some instances, each of the first sub-array700(1), the second sub-array700(2), and/or the third sub-array700(3) may include two elements, such as a left element and a right element. In such instances, the antenna(s)120may include six antennas. However, the antenna(s)120may include more than or less than six antenna(s) and/or the sub-arrays may include more than two elements. In some instances, the first sub-array700(1), the second sub-array700(2), and/or the third sub-array700(3) may represent two-port patch antennas having a low profile and which can be mounted on a flat surface. The first sub-array700(1), the second sub-array700(2), and/or the third sub-array700(3) may include two orthogonal elements, such as a slant left element and a slant right element defined by feed points on the patch antenna. The slant left element and the slant right element of the first sub-array700(1), the second sub-array700(2), and/or the third sub-array700(3), respectively, may be independently driven (e.g., phase and amplitude). Additionally, the first sub-array700(1), the second sub-array700(2), and/or the third sub-array700(3) may include different orthogonally polarized elements. For example, the first sub-array700(1) may include a vertical polarization element and a horizontal polarization element. Additionally, in some instances, the first sub-array700(1) may include a right-hand circular polarization element and a left-hand circular polarization element. By extension, the first sub-array700(1) may be implemented with any orthogonal pair of elements, and each element may include a dedicated feed port. The second sub-array700(2) and the third sub-array700(3) may include differently polarized elements as well. In some instances, the diversity of polarizations across the first sub-array700(1), the second sub-array700(2), and the third sub-array700(3) may increase communications with the one or more devices when transmitting and receiving data. That is, polarization diversity may allow properly equipped transceivers to implement polarization dependent loss (PDL) mitigation and adaptive interference mitigation based on polarization mode dispersion (PMD) processing. Additionally, the direction of transmission and/or the direction of arrival of signals (e.g., to and from the base station radio devices106) may be modified through adjusting the phase and/or amplitude of the dedicated feeds for the elements of the sub-arrays. In such instances, the radiation pattern of the antenna(s)120may be adjusted and configured according to predetermined variable polarizations.
In some instances, the variable polarization may be determined as a function of the direction of departure/arrival in the array pattern of the first sub-array700(1), the second sub-array700(2), and the third sub-array700(3). The antenna(s)120are shown being coupled to or mounted on a structure702. The structure702may follow a curvature of at least a portion of the cover318. Additionally, as discussed herein, the structure702may orient the antenna(s)120upwards for increasing a line of sight with the base station radio devices106. Additional details of the structure702are discussed herein.

FIG.8illustrates details of the antenna(s)120of the CPE108. As discussed above, the antenna(s)120may include the first sub-array700(1), the second sub-array700(2), and the third sub-array700(3). In some instances, the first sub-array700(1) may be mounted to the structure702and oriented in a first direction (e.g., leftward facing from center), the second sub-array700(2) may be mounted to the structure702and oriented in a second direction (e.g., forward facing from center), and the third sub-array700(3) may be mounted to the structure702and oriented in a third direction (e.g., rightward facing from center). Additionally, as shown, the first sub-array700(1), the second sub-array700(2), and the third sub-array700(3) may be tilted upwards. In some instances, the structure702may orient the first sub-array700(1), the second sub-array700(2), and/or the third sub-array700(3) between 60 and 70 degrees upward relative to a horizontal plane. In some instances, the first sub-array700(1), the second sub-array700(2), and/or the third sub-array700(3) may be offset from surfaces of the structure702to reduce interference caused by materials of the structure702.

The mounting, angles, and orientation of the first sub-array700(1), the second sub-array700(2), and/or the third sub-array700(3) may increase a line of sight and/or radiation pattern of the CPE108(or of the antenna(s)120). For example, when the CPE108communicates with other devices (e.g., the base station radio devices106), the upward tilt and horizontal field of view may increase the signal strength. Moreover, the first sub-array700(1), the second sub-array700(2), and/or the third sub-array700(3) may include different polarizations. In such instances, the receivers of the communicating devices (such as the antenna(s)206of the base station radio devices106) may receive stronger signals from the CPE108and be capable of receiving signals with varying polarizations. The antenna(s)120may be arranged to maximize the polarization diversity across the radiation pattern of the CPE108. For example, by selecting specific polarization feeds on the first sub-array700(1), the second sub-array700(2), and/or the third sub-array700(3), and precoding (predetermining) the phase and/or amplitude of those feeds, a radiation pattern may be implemented with a predetermined variable polarization. In some instances, the predetermined variable polarization may be a function of the direction of departure and arrival in the CPE108. That is, the antenna(s)120may include a predetermined polarization and/or azimuth direction using elements of the first sub-array700(1), the second sub-array700(2), and/or the third sub-array700(3) with different polarizations, along with differential composite phase and amplitude in the feed network.
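The feed-selection idea can be sketched with Jones vectors (a simplified model; the slant-element basis and the weights are illustrative assumptions, not a recitation of this disclosure): two orthogonal slant feeds combined with a chosen complex weight yield a composite polarization ranging from linear to circular:

    import numpy as np

    slant_left = np.array([1, -1]) / np.sqrt(2)   # -45 degree linear (H, V basis)
    slant_right = np.array([1, 1]) / np.sqrt(2)   # +45 degree linear

    def composite(w_left, w_right):
        """Superpose the two orthogonal feeds with complex weights."""
        e = w_left * slant_left + w_right * slant_right
        return e / np.linalg.norm(e)

    def circularity(e):
        """Normalized S3 Stokes parameter: 0 is linear, +/-1 is circular."""
        return 2 * np.imag(np.conj(e[0]) * e[1])

    print(circularity(composite(1.0, 1.0)))                    # ~0.0, linear
    print(circularity(composite(1.0, np.exp(1j*np.pi/4))))     # ~0.71, elliptical
    print(circularity(composite(1.0, 1j)))                     # ~1.0, circular

Sweeping the relative phase of the second feed from 0 to 90 degrees walks the composite polarization continuously from linear through elliptical to circular, which is the precoding degree of freedom referred to above.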
Additionally, or alternatively, by selection of specific polarizations and precoding the phase and amplitude of those feeds, a flat total power beamwidth with a gain variation of less than 3 dB may be implemented. In some instances, the antenna(s)120may have dual orthogonal polarization, port-to-port isolation greater than 18 dB, and gain over a 3 dB pattern targeted at 4 dBi (Vertical: +70°-0°; Horizontal: ±90°). Although the antenna(s)120are shown being substantially square in shape and/or of a certain size, other shapes and/or sizes are envisioned. By way of example, the antenna(s)120may be circular, rectangular, and/or hexagonal.

FIG.9illustrates a transparent view of the CPE108, showing the top portion314disposed above the collar300and the first connector404within the collar300. The top portion314is shown including a second connector900that engages with the first connector404. For example, prongs902of the second connector900may engage within receptacles or slots of the first connector404. This may allow the top portion314to be interchangeable for different communication technologies, for repair, and so forth. In some instances, the first connector404and the second connector900may resemble a quick disconnect feature between the top portion314and the collar300, or components thereof. The coupling between the first connector404and the second connector900may be snap-fit or pressure fit and may couple computing components within the top portion314and computing components within the collar300. The connection between the first connector404and the second connector900may supply power to the top portion314, transfer data (e.g., broadband internet) between the top portion314and the collar300(and ultimately into the premises102), and so forth.

FIG.10illustrates the top portion314of the CPE108, showing the cover318removed to illustrate components of the CPE108disposed beneath the cover318. InFIG.10, the antenna(s)120and the collar300are also shown being removed. The CPE108is further shown including a printed circuit board (PCB)1000(or integrated circuit board) to which components of the CPE108couple or communicatively couple. For example, the PCB1000may house the first BPL interface122, the first modem module126, and so forth. The PCB1000, in some instances, may additionally include processor(s), memory, and so forth. In some instances, the top portion314may include batteries for supplying power to components disposed in the top portion314. In some instances, the CPE108may include a first PCB including the first BPL interface122disposed within the collar300, and a second PCB including the first modem module126disposed within the top portion314. This may allow the top portion314to be displaced from the collar300. Additionally, locating the antenna(s)120and the first modem module126external to the collar300, or within the top portion314, allows the top portion314to be quickly replaced and/or upgraded. For example, as new technologies are introduced and new antenna(s)120become available, the top portion314may be replaced without removing the collar300from the electric meter panel. Additionally, if components of the top portion314fail or break, the top portion314may be repaired without removing the collar300from the electric meter panel. The modularity of the CPE108may be provided, in part, by the first connector404(not shown inFIG.10) and the second connector900.
For example, the first connector404and the second connector900may resemble a quick disconnect feature that allows the top portion314to be separated from the collar300. The first connector404and the second connector900may include corresponding male and female slots, prongs, etc. for communicatively coupling computing components within the top portion314with those within the collar300. For example, an engagement between the first connector404and the second connector900may provide power to the top portion314, communicatively couple the top portion314with the electrical wiring of the premises102, and so forth.

FIG.11illustrates a top view of the top portion314, showing the cover318removed to illustrate the antenna(s)120and the PCB1000. In some instances, and as shown, the PCB1000may mount behind (Z-direction) the structure702(and the antenna(s)120) and extend in a vertical direction (Y-direction). The structure702is shown including a curved trajectory, between the first side604and the second side606. The first sub-array700(1), the second sub-array700(2), and/or the third sub-array700(3) couple to the structure702for disposing the first sub-array700(1), the second sub-array700(2), and/or the third sub-array700(3) across a surface of the structure702.

FIG.12illustrates a different embodiment of antenna(s)120of the CPE108. In some instances, rather than the CPE108including the first sub-array700(1), the second sub-array700(2), and the third sub-array700(3), the antenna(s)120may include two sub-arrays, as illustrated inFIG.12. For example, the CPE108may include a first sub-array1200(1) and a second sub-array1200(2). The first sub-array1200(1) and the second sub-array1200(2) are shown being coupled to or mounted on a frame1202. In some instances, the frame1202couples to the base316. The frame1202may include a first mounting surface and a second mounting surface for receiving the first sub-array1200(1) and the second sub-array1200(2), respectively. In some instances, the first mounting surface and the second mounting surface may be angled apart from one another to increase a field of view of the first sub-array1200(1) and the second sub-array1200(2). For example, the frame1202may include a V-shape. In some instances, the first mounting surface may be angled relative to the Z-plane, or relative to the second mounting surface. In some instances, the first mounting surface may be angled by 45 degrees. Similarly, the second mounting surface may be angled relative to the Z-plane by an angle equal to or substantially equal to 45 degrees. As such, in some instances, the first mounting surface and the second mounting surface may be angled apart from one another by substantially 90 degrees. The angling or orientation of the first sub-array1200(1) and the second sub-array1200(2) with respect to the Z-plane may provide a collective horizontal field of view of approximately between 160 degrees and 180 degrees. Furthermore, the frame1202may be angled backwards. In some instances, the frame1202may be angled backwards by substantially 30 degrees. The first sub-array1200(1) and the second sub-array1200(2) may therefore be angled between 60 degrees and 70 degrees upward relative to a horizontal plane. Angling the first sub-array1200(1) and the second sub-array1200(2) in this manner may increase a line of sight between the CPE108and the base station radio device106. Moreover, the radiation pattern of the antenna(s)120may be adjusted and configured according to predetermined variable polarizations.
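As a rough numerical sketch of this frame geometry (the rotation conventions, the 90 degree per-sub-array field of view, and the exact angles below are assumptions for illustration, not limitations of this disclosure):

    import numpy as np

    def boresight(splay_deg, back_tilt_deg):
        """Unit boresight after splaying about the vertical (Y) axis and
        tilting the frame backwards/upwards; Y is up, Z faces outward."""
        az, tilt = np.radians(splay_deg), np.radians(back_tilt_deg)
        v = np.array([np.sin(az), 0.0, np.cos(az)])
        tilt_up = np.array([[1.0, 0.0, 0.0],
                            [0.0, np.cos(tilt), np.sin(tilt)],
                            [0.0, -np.sin(tilt), np.cos(tilt)]])
        return tilt_up @ v

    elevation = np.degrees(np.arcsin(boresight(45, 30)[1]))
    print(f"boresight elevation: {elevation:.1f} deg")   # ~20.7 in this model

    # Collective azimuth coverage with an assumed 90 degree per-sub-array FOV
    fov = 90
    print(f"azimuth coverage: {-45 - fov // 2} to {+45 + fov // 2} deg")

Under these assumptions, the two mounting planes themselves sit about 60 degrees from horizontal (90 degrees minus the 30 degree back tilt), consistent with the 60 to 70 degree range described above, and the ±45 degree splay yields roughly the 160 to 180 degree collective horizontal field of view.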
In some instances, the variable polarization may be determined as a function of the direction of departure/arrival in the array pattern of the first sub-array1200(1) and the second sub-array1200(2).

FIG.13illustrates an alternate customer premises device (CPE)1300that includes a top portion1302positionable relative to a collar1304. In some instances, the CPE1300may be similar to and include similar components as the CPE108, and/or the collar1304may include similar components as the collar300. However, as shown inFIG.13, the top portion1302may be designed and configured to extend from the collar1304(or a body thereof). For example, depending on the location of the electric meter (or the electric meter panel) of the premises102, the wireless communication between the CPE1300and the base station radio device106may not be ideal. By way of example, the premises102may be located behind a taller building relative to the location of the base station radio device106, or the electric meter may be located at the back of the premises102relative to the location of the base station radio device106. Additionally, in some instances, not all powerline structures may include a base station radio device106. In such situations, the communication path may be obstructed (e.g., by a taller building). These factors may cause additional path loss between the base station radio device106and the CPE1300. Consequently, the CPE1300may have a reduced signal strength with the base station radio device106.

To address these situations, the CPE1300or the collar1304may include components for extending the top portion1302at various positions from the collar1304. For example, the top portion1302(which includes antenna(s) for communicating with the base station radio device106) may be disposed from the collar1304and placed at various locations around, on, or about the exterior of the premises102. In such instances, the CPE1300may be located at more desirable places on the premises102(e.g., rooftop) to achieve a closer line-of-sight communication path with the base station radio device106. In some instances, the optimum placement for the top portion1302may be determined based on expected ranges of maximum achievable throughput calculated for different sides of the premises102. For example, instruments may calculate or determine the relative signal strength around the premises102. Based on the highest signal strength, the top portion1302may be installed at a corresponding location. In some instances, an installer of the CPE1300(or the top portion1302) may utilize these instruments during an installation process, and once finding the optimal location, may use these parameters to install the CPE1300and/or the top portion1302(or the antenna(s)) at respective locations.

In some instances, the top portion1302may be tethered and/or wired to the collar1304for transmitting and receiving data (or signals). For example, in some instances, the top portion1302and/or the collar1304may include a recoil, or spool, for tethering the top portion1302at various lengths. As illustrated, the top portion1302and the collar1304may communicatively couple via one or more wires1306that are configured to extend at various lengths from the collar1304. These wires1306may be spooled within the collar1304and/or the top portion1302. The wires1306may also provide power to the top portion1302, and components thereof, such as PCBs, lighting element(s), antenna(s), etc. AlthoughFIG.13illustrates certain components being disposed from the collar1304, other embodiments are envisioned.
For example, only the antenna(s) may be disposed from the collar1304, and the PCBs and/or lighting element(s) may remain on the collar1304(e.g., not within the top portion1302). In these instances, the components that are disposed from the collar1304may be varied. In some instances, the top portion1302may include a PCB including the first modem module126, while the collar1304may include a PCB including the first BPL interface122. This may allow the first modem module126(and the top portion1302) to be disposed from the collar1304. In such instances, the top portion1302and the collar1304may be coupled via an Ethernet cable (e.g., PoE). In some instances, the top portion1302(or components thereof) may be powered via batteries and/or power transmitted through the wires1306(from the collar1304). Additionally, or alternatively, the top portion1302may be powered via Power over Ethernet (PoE). In such instances, the wires1306may correspond to and/or include Ethernet cables for communicatively coupling the top portion1302and the collar1304. Still, in some instances, components within the top portion1302may be powered via solar energy.

FIGS.14A and14Billustrate the CPE108, including the collar300, coupled to an electric meter panel1400and an electric meter1402. In some instances, the CPE108may be installed to the electric meter panel1400during an installation process. For example, after removing couplings (e.g., clamps, screws, sockets, etc.) from the electric meter1402, the electric meter1402may be pulled in a slightly downward direction (Y-direction) to remove the electric meter1402from the electric meter panel1400. Removing the electric meter1402exposes slots of the electric meter panel1400(e.g., the slots312) that receive prongs of the electric meter1402. Therein, the CPE108may be coupled to the electric meter panel1400. For example, as discussed above, the CPE108may include prongs (e.g., the prongs402) that are received within the slots of the electric meter panel1400. Additionally, an end of the collar300may fit within and/or reside within the electric meter panel1400. Thereafter, the collar300may be secured to the electric meter panel1400to provide a water-tight seal. The CPE108may couple to the wiring of the premises102via the collar300coupling to the electric meter panel1400.

Finally, the electric meter1402may be re-installed at the premises102. For example, after the CPE108is coupled to the electric meter panel1400, the electric meter1402may couple to the collar300. The collar300may include slots (e.g., the slots312) for receiving prongs of the electric meter1402. This coupling may complete a circuit to supply power to the premises102(after power is restored). In this manner, the collar300may act as an extension, interposed between the electric meter panel1400and the electric meter1402to power the CPE108, tap into the electrical wiring of the premises102, and to enable coupling of the electric meter1402to the electric meter panel1400. In turn, after installation, the CPE108may perform the operations described hereinabove for communicating with the base station radio device106and the router110to provide broadband internet to the premises102. For example, after the CPE108is installed, the consumer may plug in the router110to start receiving broadband internet. This process may involve a handshake or pairing operation.
Additionally, in some instances, after the CPE108is installed, the CPE108may automatically connect to cloud software and/or services and be provisioned with the appropriate broadband service selected by the consumer. Such coupling also allows the SPN116to communicate directly with the CPE108to diagnose issues and/or monitor a status of the CPE108.

Turning back to the illustrations shown inFIGS.14A and14B, the CPE108may be positioned or interposed between the electric meter panel1400and the electric meter1402, with the collar300acting as an extension. In this sense, the collar300may dispose the electric meter1402at a farther distance away from the premises102(e.g., side of a house). With this design, the CPE108significantly reduces the installation cost and also solves the building penetration problem by tapping into the electrical wiring of the premises102. This allows the CPE108to seamlessly integrate with electric utilities that offer, or wish to offer, broadband internet to consumers. In some instances, the CPE108may be installed in minutes by simply plugging into the electric power service entrance to the building. The CPE108is installed to provide a weathertight seal between the electric meter panel1400and the electric meter1402. Installing the CPE108on an exterior side of the premises102reduces an installation time, as the SPN116may not have to access an interior of the premises102. This may also make installation less burdensome for the SPN116and/or the premises owner. As also discussed above, the distance502permits the top600of the cover318to be disposed beneath (Y-direction) a top1404of the electric meter panel1400. This permits the CPE108to be installed on electric meters that are in close proximity to one another, such as in apartment complexes or multi-family units.

FIGS.15-20illustrate various embodiments of antenna(s) that may be implemented within the CPE108, or other customer premises devices. In some embodiments, the antenna(s) illustrated herein may represent transceiver systems that are capable of transmitting and receiving data. For example, the antenna(s) may transmit data to and receive data from the base station radio device106. In such instances, the antenna(s) may include multiple sub-arrays, a feed network, and a radio modem. It is to be understood that the antenna(s) discussed herein may be implemented within the CPE108as the antenna(s)120. Additionally, the modems discussed herein may be representative of the first modem module126.

FIG.15illustrates a transceiver system1500having multiple sub-arrays. For example, the transceiver system1500may include a first sub-array1502, a second sub-array1504, and a third sub-array1506. In some instances, the first sub-array1502, the second sub-array1504, and/or the third sub-array1506may include multiple elements that are dual polarized. In some instances, the first sub-array1502, the second sub-array1504, and/or the third sub-array1506may be dual polarized patch antennas. The first sub-array1502is shown including a first element1502(1) and a second element1502(2), the second sub-array1504is shown including a first element1504(1) and a second element1504(2), and the third sub-array1506is shown including a first element1506(1) and a second element1506(2). The first element1502(1) of the first sub-array1502may represent a left antenna and the second element1502(2) of the first sub-array1502may represent a right antenna.
The first element1502(1) and the second element1502(2) may include different polarizations or may be polarized differently than one another. For example, in some instances, the first element1502(1) and the second element1502(2) may be orthogonally polarized. The first element1504(1) of the second sub-array1504may represent a left antenna and the second element1504(2) of the second sub-array1504may represent a right antenna. The first element1504(1) and the second element1504(2) may include different polarizations and may be orthogonally polarized. The first element1506(1) of the third sub-array1506may represent a left antenna and the second element1506(2) of the third sub-array1506may represent a right antenna. The first element1506(1) and the second element1506(2) may include different polarizations and may be orthogonally polarized.

The transceiver system1500may have dedicated feed ports for transmitting and receiving via the first sub-array1502, the second sub-array1504, and the third sub-array1506. For example, the transceiver system1500is shown including a modem1508having a first port1510, a second port1512, a third port1514, and a fourth port1516. In some instances, the first port1510may represent a transmission and receiving port, while the second port1512, the third port1514, and/or the fourth port1516may represent receiving ports.

Conventional antenna designs dictate an antenna feed design that feeds only a single co-polarized set of sub-array elements. That is, a co-polarized element is chosen from each sub-array to be driven by the distributed power of a transmission/receiver port. This co-polarization is suboptimal in modern MIMO wireless links given the frequency dependent fading and polarization mode dispersion introduced in NLOS communication. That is, co-polarized elements have the same polarization (e.g., vertical to vertical, horizontal to horizontal, right hand circular to right hand circular, etc.). In such instances, given the scattering in NLOS communication, the signals may become cross-polarized, leading to PDL, insufficient signal levels at the base station radio device106, and/or loss of communication. However, comparatively, the transceiver system1500may include a single transmission/receiving port, such as the first port1510, but may split transmission signals amongst element(s) for generating variably polarized signals. This variation may improve signal levels and restore communications.

To elaborate, and as shown, the transceiver system1500may include a power splitter and combiner1518to drive the elements of the first sub-array1502, the second sub-array1504, and the third sub-array1506. In some instances, the power splitter and combiner1518may unequally split and combine power to adjust the relative magnitude of each element in the transceiver system1500. For example, the power splitter and combiner1518may split signals to the first element1502(1) of the first sub-array1502, the second element1504(2) of the second sub-array1504, and the first element1506(1) of the third sub-array1506. Additionally, signals received via the first element1502(1) of the first sub-array1502, the second element1504(2) of the second sub-array1504, and the first element1506(1) of the third sub-array1506may be combined via the power splitter and combiner1518. During transmission, the splitting of the signals may drive the polarizations of each element of the sub-arrays.
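A minimal sketch of the unequal split-and-combine behavior (the weights below are invented for illustration and do not describe the actual feed network of the power splitter and combiner1518):

    import numpy as np

    # Assumed unequal split favoring the center-facing element; power is
    # conserved because sum(|w|^2) == 1.
    w = np.array([0.5, np.sqrt(0.5), 0.5], dtype=complex)
    assert np.isclose(np.sum(np.abs(w) ** 2), 1.0)

    def transmit(symbol):
        """Split one port's symbol across the three element feeds."""
        return w * symbol

    def combine(element_signals):
        """Coherently combine the element signals back onto the single port."""
        return np.vdot(w, element_signals)   # conjugated weights, then sum

    feeds = transmit(1 + 0j)
    print(combine(feeds))                    # (1+0j) over an ideal channel

Making the weights complex rather than purely real is what lets the same network impose the per-element phases discussed next.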
In some instances, the different polarizations of the first element1502(1) of the first sub-array1502, the second element1504(2) of the second sub-array1504, and the first element1506(1) of the third sub-array1506may maximize the polarization diversity across the radiation pattern of the transceiver system1500. Moreover, given the orientation of the first element1502(1) of the first sub-array1502, the second element1504(2) of the second sub-array1504, and the first element1506(1) of the third sub-array1506, transmitted signals may be sent in multiple directions. The signals transmitted by the first element1502(1) of the first sub-array1502, the second element1504(2) of the second sub-array1504, and the first element1506(1) of the third sub-array1506may have predetermined phases and/or amplitudes for steering transmitted beams. The predetermined phases and/or amplitudes may generate variable polarizations for receipt by the base station radio device106. That is, the base station radio device106may be configured to receive vertical, horizontal, circular, and/or elliptical polarizations, for example. As such, the polarizations of the first element1502(1), the second element1504(2), and the first element1506(1) may generate various polarizations through constructive interference.

The diversity of polarizations generated by the transceiver system1500may increase the signal strength of received signals. That is, by selecting specific polarization feeds on the first element1502(1), the second element1504(2), and the first element1506(1), and precoding the phase and/or amplitude of those feeds, a radiation pattern may be emitted with a predetermined variable polarization. This predetermined variable polarization may be determined as a function of the direction of transmission and arrival in the transceiver system1500. The transceiver system1500may provide a continuous distribution of polarizations from linear to elliptical to circular, and then back to elliptical and linear. The polarization diversity may improve transmission with computing devices. The destructive and/or constructive interference between the first element1502(1) of the first sub-array1502, the second element1504(2) of the second sub-array1504, and the first element1506(1) of the third sub-array1506may generate linear, circular, and elliptical polarizations. This variance in polarizations permits receivers to receive the signals across the array of polarizations. In such instances, and given NLOS communications, if one particular polarization lacks a sufficient signal-to-noise ratio and/or sufficient signal above the receiver's sensitivity, the base station radio device106may receive signals having the different polarizations.

Additionally, by splitting the transmitted signals, the energy of the transmitted signals may remain under a certain threshold governed by FCC regulations. For example, the polarizations of the elements, the phases of transmitted signals, and/or the amplitudes of the transmitted signals may be altered to obtain destructive interference. With the sub-arrays of the transceiver system1500, the phases and/or magnitudes of the elements may be adjusted to steer the beam pattern of the transceiver system1500and/or adjust the beamwidth. Such modulation and adjustment may allow the transceiver system1500to communicate with the base station radio device106.
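The direction dependence of the composite polarization can be sketched as follows (a simplified two-element model with assumed spacing, not the three-sub-array geometry ofFIG.15): the direction-dependent path phase between spatially separated, differently polarized elements sweeps the superposed polarization from linear toward circular across the beamwidth:

    import numpy as np

    wavelength = 1.0
    d = wavelength / 2                       # assumed element separation
    k = 2 * np.pi / wavelength

    pol_a = np.array([1.0, 0.0])             # horizontally polarized element
    pol_b = np.array([0.0, 1.0])             # vertically polarized element

    def circularity(theta_deg):
        """Normalized S3 Stokes parameter of the far field toward theta."""
        path_phase = k * d * np.sin(np.radians(theta_deg))
        e = pol_a + np.exp(1j * path_phase) * pol_b
        e = e / np.linalg.norm(e)
        return 2 * np.imag(np.conj(e[0]) * e[1])

    for theta in (-30, -15, 0, 15, 30):
        print(f"{theta:+3d} deg -> S3 = {circularity(theta):+.2f}")

At broadside the two contributions arrive in phase (a linear 45 degree slant, S3 near 0), while toward ±30 degrees the half-wavelength spacing accumulates a ±90 degree path phase and the field becomes circular (S3 near ±1), mirroring the linear-to-elliptical-to-circular distribution described above.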
For example, different phases and/or amplitudes may be imparted to the signals transmitted via the first element1502(1) of the first sub-array1502, the second element1504(2) of the second sub-array1504, and the first element1506(1) of the third sub-array1506to steer beams in a particular direction (e.g., constructive interference). The second element1502(2) of the first sub-array1502is shown coupled to the second port1512, the first element1504(1) of the second sub-array1504is shown coupled to the third port1514, and the second element1506(2) of the third sub-array1506is shown coupled to the fourth port1516. In some instances, the transceiver system1500may include more than one transmitting/receiving port and/or more than three receiving ports. The dedicated receiving ports of the modem are coupled to a diversity of receive polarizations and azimuthal gain patterns. This spatial and polarization diversity will enhance the performance of MIMO signal processing between the base station radio device106and the CPE108, improving spectral efficiency (e.g., higher throughput). In addition, this antenna provides for polarimetric processing to eliminate PDL and to exploit PMD processing for interference rejection.

FIG.16illustrates a transceiver system1600. In some instances, the transceiver system1600may be similar to the transceiver system1500. For example, the transceiver system1600may include a first sub-array1602, a second sub-array1604, and a third sub-array1606. In some instances, the first sub-array1602, the second sub-array1604, and/or the third sub-array1606may include multiple elements that are dual polarized. In some instances, the first sub-array1602, the second sub-array1604, and/or the third sub-array1606may be dual polarized patch antennas. The first sub-array1602includes a first element1602(1) and a second element1602(2), the second sub-array1604includes a first element1604(1) and a second element1604(2), and the third sub-array1606includes a first element1606(1) and a second element1606(2).

The transceiver system1600may have dedicated feed ports for transmitting and receiving via the first sub-array1602, the second sub-array1604, and the third sub-array1606. For example, the transceiver system1600is shown including a modem1608having a first port1610, a second port1612, a third port1614, and a fourth port1616. In some instances, the first port1610may represent a transmission and receiving port, while the second port1612, the third port1614, and/or the fourth port1616may represent receiving ports. As shown, the transceiver system1600may include a power splitter and combiner1618to drive the elements of the first sub-array1602, the second sub-array1604, and the third sub-array1606. In some instances, the power splitter and combiner1618may unequally split and combine power to adjust the relative magnitude of element(s) in the transceiver system1600. The first element1602(1) and the second element1602(2), the first element1604(1) and the second element1604(2), and the first element1606(1) and the second element1606(2) may be polarized differently than one another (e.g., orthogonally polarized). Compared to the transceiver system1500, the transceiver system1600may have horizontal and vertical polarizations. For example, the first element1602(1), the first element1604(1), and the first element1606(1) may have horizontal polarization. The second element1602(2), the second element1604(2), and the second element1606(2) may have vertical polarizations.
The diversity of polarizations generated by the transceiver system1600may increase the signal strength of received signals. This predetermined variable polarization may be determined as a function of the direction of transmission and arrival in the transceiver system1600. Additionally, some of the elements of the sub-arrays may have 90 degree and 180 degree phase offsets. For example, the second element1604(2) may have a 90 degree phase offset and the first element1606(1) may have a 180 degree phase offset. In some instances, the phase offsets may obtain destructive interference, may steer the beam pattern of the transceiver system1600, and/or adjust the beamwidth. For example, the 90 degree phase offset and the 180 degree phase offset may steer beams in a particular direction (e.g., constructive interference). Moreover, phase shifting may create polarization diversity for receiving devices, such as the base station radio device106. In some instances, the horizontal and vertical polarizations of the first sub-array1602, the second sub-array1604, and the third sub-array1606, as well as the phase shifts of the second element1604(2) and the first element1606(1), may generate various polarizations through constructive interference. By precoding the phases of the second element1604(2) and the first element1606(1), a radiation pattern may be emitted with a predetermined variable polarization.

FIG.17illustrates a transceiver system1700. In some instances, the transceiver system1700may be similar to the transceiver system1500and/or the transceiver system1600. For example, the transceiver system1700may include a first sub-array1702, a second sub-array1704, and a third sub-array1706. The first sub-array1702, the second sub-array1704, and/or the third sub-array1706may include multiple elements that are dual polarized. The first sub-array1702includes a first element1702(1) and a second element1702(2), the second sub-array1704includes a first element1704(1) and a second element1704(2), and the third sub-array1706includes a first element1706(1) and a second element1706(2). The transceiver system1700may have dedicated feed ports for transmitting and receiving via the first sub-array1702, the second sub-array1704, and the third sub-array1706. For example, the transceiver system1700is shown including a modem1708having a first port1710, a second port1712, a third port1714, and a fourth port1716. The first port1710may represent a transmission and receiving port, while the second port1712, the third port1714, and/or the fourth port1716may represent receiving ports. The transceiver system1700may include a power splitter and combiner1718to drive the elements of the first sub-array1702, the second sub-array1704, and the third sub-array1706. In some instances, the power splitter and combiner1718may unequally split and combine power to adjust the relative magnitude of each element in the transceiver system1700.

Compared to the transceiver system1600, the transceiver system1700may have different predetermined phase shifts. For example, the second element1704(2) may have a 90 degree phase shift. This may result in the transmission of circularly polarized signals (via interaction amongst the elements within the sub-array) and permit the receivers (e.g., the second port1712, the third port1714, and the fourth port1716) of the transceiver system1700to receive circularly polarized signals. In some instances, the overlapping regions between the elements may result in circular polarization (or near circular polarization).
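A minimal sketch of why a 90 degree relative phase between orthogonal elements yields circular polarization (Jones-vector arithmetic under assumed ideal elements; the extra path term previews the "composite" phase idea elaborated next):

    import numpy as np

    h = np.array([1.0, 0.0])                 # horizontal element
    v = np.array([0.0, 1.0])                 # vertical element

    def field(feed_phase_deg, extra_path_deg=0.0):
        """Superpose H and V with a feed phase plus any path (time-of-flight)
        phase; the composite phase is their sum."""
        total = np.radians(feed_phase_deg + extra_path_deg)
        e = h + np.exp(1j * total) * v
        return e / np.linalg.norm(e)

    def circularity(e):
        return 2 * np.imag(np.conj(e[0]) * e[1])   # 0 linear, +/-1 circular

    print(circularity(field(0)))       # 0.0 -> linear (45 degree slant)
    print(circularity(field(90)))      # 1.0 -> circular
    print(circularity(field(45, 45)))  # 1.0 -> composite 90 degrees

The last line shows that a 45 degree feed phase plus a 45 degree path delay produces the same circular result as a direct 90 degree offset, which is the sense in which the phase shift below is "composite".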
In some instances, the phase offsets may obtain destructive interference, may steer the beam pattern of the transceiver system1700, and/or adjust the beamwidth. Moreover, phase shifting may create polarization diversity for receiving devices, such as the base station radio device106. In some instances, the first element1702(1) and the first element1706(1) may be driven with equal phase while the second element1704(2) may be driven with a composite 90 degree phase shift. That is, the composite 90 degree phase shift may represent the sum of the phase realized in the transceiver system1700and the time of flight phase delay due to the separation of the sub-arrays. In this instance, measuring the polarization on the far left of the transceiver system1700, a vertical polarization is realized (via the first element1702(1) of the first sub-array1702). As the measurement position moves from left to right, around the transceiver system1700(i.e., from the first sub-array1702to the third sub-array1706), the polarization varies from the initial polarization through elliptical to a circular polarization. This change in polarization is formed by selecting an orthogonal polarization for the second element1704(2) of the second sub-array1704. Therein, the polarization returns to the elliptical polarization and then to orthogonal vertical polarization as a result of the first element1706(1) of the third sub-array1706. As such, the transceiver system1700may create a variable polarization over its beamwidth.

FIG.18illustrates a transceiver system1800including a first sub-array1802and a second sub-array1804. In some instances, the first sub-array1802and the second sub-array1804may include multiple elements that are dual polarized. For example, in some instances, the first sub-array1802and the second sub-array1804may be dual polarized patch antennas or cross polarized dipole antennas. Although the CPE108discussed above includes three antenna(s), or three sub-arrays, the transceiver system1800may be embodied within the CPE108(as discussed with regard toFIG.12). In such instances, the structure may include different features and/or shapes for receiving the first sub-array1802and the second sub-array1804(e.g., a V-shaped structure). As such, the CPE108may be configurable to receive less than three antennas or sub-arrays. The first sub-array1802is shown including a first element1802(1) and a second element1802(2), and the second sub-array1804is shown including a first element1804(1) and a second element1804(2). In some instances, the first element1802(1) and the second element1802(2) may be orthogonally polarized. Additionally, or alternatively, the first element1804(1) and the second element1804(2) may be orthogonally polarized.

The transceiver system1800may have dedicated feed ports for transmitting and receiving via the first sub-array1802and the second sub-array1804. For example, the transceiver system1800is shown including a modem1806including a first port1808and a second port1810. In some instances, the first port1808may represent a transmission and receiving port, while the second port1810may represent a receiving port. As shown, the transceiver system1800may include power splitter/combiners, such as a first power splitter and combiner1812and a second power splitter and combiner1814. The first power splitter and combiner1812may drive the elements of the first sub-array1802and/or the second sub-array1804.
The second power splitter and combiner1814may drive the elements of the first sub-array1802and the second sub-array1804. In some instances, the first power splitter and combiner1812and/or the second power splitter and combiner1814may unequally split and combine signals to adjust the relative magnitude of each element, or sub-array, in the transceiver system1800. For example, the first port1808may split transmission signals via the first power splitter and combiner1812to the first element1802(1) of the first sub-array1802and the second element1804(2) of the second sub-array1804. Signals received via the first element1802(1) of the first sub-array1802and the second element1804(2) of the second sub-array1804may be combined via the first power splitter and combiner1812. Similarly, signals received via the second element1802(2) of the first sub-array1802and the first element1804(1) of the second sub-array1804may be combined via the second power splitter and combiner1814. During transmission, the splitting of the signals via the first power splitter and combiner1812may drive the polarizations of each element. In some instances, the different polarizations of the first element1802(1) of the first sub-array1802and the second element1804(2) of the second sub-array1804may maximize the polarization diversity across the radiation pattern of the transceiver system1800. As such, the transceiver system1800may create a variable polarization over its beamwidth.

Moreover, given the orientation of the first element1802(1) of the first sub-array1802and the second element1804(2) of the second sub-array1804(e.g., leftward facing, rightward facing, etc.), transmissions may be sent in multiple directions. The signals transmitted by the first element1802(1) of the first sub-array1802and the second element1804(2) of the second sub-array1804may have predetermined phases and/or amplitudes for steering transmitted beams. For example, because the elements of the first sub-array1802and the second sub-array1804have dedicated feed ports, their phase and/or magnitudes may be individually controlled for achieving polarization diversity. Moreover, the predetermined phases and/or amplitudes may generate variable polarizations for receipt by receivers of other devices (e.g., the base station radio device106). In other words, the antenna(s) of the base station radio device106may be configured to receive vertical, horizontal, circular, and/or elliptical polarizations, for example, emitted by the first sub-array1802and the second sub-array1804. As such, the polarizations of the first element1802(1) of the first sub-array1802and the second element1804(2) of the second sub-array1804may generate various polarizations through constructive interference.

The diversity of polarizations generated by the transceiver system1800may increase the signal strength of received signals. That is, by selecting specific polarization feeds on the first element1802(1) of the first sub-array1802and the second element1804(2) of the second sub-array1804, and precoding the phase and/or amplitude of those feeds, a radiation pattern may be emitted with a predetermined variable polarization. This predetermined variable polarization may be determined as a function of the direction of transmission and arrival in the transceiver system1800.
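The widening effect of pointing the two sub-arrays in different directions can be sketched numerically (the cosine-power element pattern and the ±45 degree orientations below are modeling assumptions; the specific 3 dB figures recited below for the transceiver system1800are those of this description):

    import numpy as np

    def element_power(theta_deg, pointing_deg, n=3.0):
        """Assumed cos^n power pattern, zero behind the element's panel."""
        delta = np.radians(theta_deg - pointing_deg)
        return np.cos(delta) ** n if abs(delta) < np.pi / 2 else 0.0

    thetas = np.arange(-180, 181)
    total = np.array([element_power(t, -45) + element_power(t, +45)
                      for t in thetas])
    total_db = 10 * np.log10(np.maximum(total, 1e-12) / total.max())

    single = np.array([element_power(t, 0) for t in thetas])
    single_db = 10 * np.log10(np.maximum(single, 1e-12) / single.max())

    for name, db in (("single sub-array", single_db), ("composite", total_db)):
        inside = thetas[db >= -3]
        print(f"{name}: 3 dB pattern {inside.min()} to {inside.max()} deg")

Under this model each sub-array alone covers roughly ±37 degrees at the 3 dB level, while the splayed pair covers roughly ±82 degrees, illustrating how the arrangement approaches the ±90 degree azimuth pattern described below.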
For example, by phase shifting and/or adjusting the amplitude of the outgoing signals of the first element1802(1) of the first sub-array1802and the second element1804(2) of the second sub-array1804, the direction of transmissions may be adjusted. Similarly, received signals may be phase shifted. Although a particular embodiment of the transceiver system1800is shown, more than two dual-polarized sub-arrays (i.e., more than the first sub-array1802and the second sub-array1804) may be implemented within the transceiver system1800. For example, the transceiver system1800may include four dual-polarized sub-arrays that are arranged to form a pattern beamwidth that exceeds the individual beamwidths of the individual sub-arrays. In such instances, the four sub-arrays may include two orthogonally polarized elements, and each element of the sub-array may have a dedicated antenna feed port.

The transceiver system1800may be mounted to a structure that supports and orients the first sub-array1802and the second sub-array1804. For example, the mounting of the first sub-array1802and the second sub-array1804may provide the transceiver system1800with a pattern in azimuth and elevation directions that is greater than the pattern of the first sub-array1802and the second sub-array1804. That is, the individual beam patterns of the first sub-array1802and the second sub-array1804may constructively interfere with one another to increase a beam pattern of the transceiver system1800. For example, individually, the first sub-array1802and/or the second sub-array1804may exhibit a 70 degree pattern in both azimuth and elevation directions. However, through constructive interference the transceiver system1800may achieve a 3 dB pattern of +/−90 degree azimuth with respect to the transceiver system1800boresight and an elevation of zero (0) degrees to 70 degrees with respect to a horizontal plane. This pattern may represent a directional pattern of the transceiver system1800. However, as noted above, by selecting specific polarization feeds on each of the dual polarized elements of the first sub-array1802and the second sub-array1804, and precoding the phase and/or amplitude of the feeds, a radiation pattern with a predetermined variable polarization may be generated as a function of the direction of departure/arrival in the transceiver system1800. For example, the transceiver system1800may realize a 3 dB pattern of +/−180 degree azimuth with respect to the transceiver system1800boresight and an elevation of zero (0) degrees to 70 degrees with respect to a horizontal plane. This azimuthal pattern constitutes an omni-directional pattern.

FIG.19illustrates a transceiver system1900. In some instances, the transceiver system1900may be similar to the transceiver system1800. For example, the transceiver system1900may include a first sub-array1902and a second sub-array1904. In some instances, the first sub-array1902and the second sub-array1904may include multiple elements that are dual polarized. In some instances, the first sub-array1902and the second sub-array1904may be dual polarized patch antennas. The first sub-array1902includes a first element1902(1) and a second element1902(2), and the second sub-array1904includes a first element1904(1) and a second element1904(2). The transceiver system1900may have dedicated feed ports for transmitting and receiving via the first sub-array1902and the second sub-array1904.
For example, the transceiver system1900is shown including a modem1906having a first port1908and a second port1910. In some instances, the first port1908may represent a transmission and receiving port, while the second port1910may represent a receiving port. As shown, the transceiver system1900may include a first power splitter and combiner1912to drive the first element1902(1) of the first sub-array1902and the second element1904(2) of the second sub-array1904. The transceiver system1900may also include a second power splitter and combiner1914to drive the second element1902(2) of the first sub-array1902and the first element1904(1) of the second sub-array1904. In some instances, the first power splitter and combiner1912and the second power splitter and combiner1914may unequally split and combine power to adjust the relative magnitude of each element in the transceiver system1900. The first element1902(1) and the second element1902(2), and the first element1904(1) and the second element1904(2), may be polarized differently than one another (e.g., orthogonally polarized). Additionally, some of the elements of the sub-arrays may have 90 degree phase shifts. For example, the second element1902(2) may have a 90 degree phase shift and the second element1904(2) may have a 90 degree phase shift. The predetermined phase shifts of the second element1902(2) and the second element1904(2) may produce circular polarizations. For example, the predetermined 90 degree phase shift in transmitting signals may result in the transmission of circularly polarized signals (via vector summing between the elements of the sub-arrays). Additionally, this allows the transceiver system1900to receive circularly polarized signals. That is, the overlapping regions between the elements of the first sub-array1902and the second sub-array1904may result in circular polarization (or near circular polarization).

FIG.20illustrates a transceiver system2000having multiple sub-arrays. For example, the transceiver system2000may include a first sub-array2002and a second sub-array2004. In some instances, the first sub-array2002and the second sub-array2004may include multiple antennas that are dual polarized. In some instances, the first sub-array2002and the second sub-array2004may be dual polarized patch antennas. The first sub-array2002is shown including a first element2002(1) and a second element2002(2), while the second sub-array2004is shown including a first element2004(1) and a second element2004(2). The first element2002(1) of the first sub-array2002may represent a left antenna and the second element2002(2) of the first sub-array2002may represent a right antenna. The first element2002(1) and the second element2002(2) may include different polarizations, or may be polarized differently than one another. The first element2002(1) and the second element2002(2) may be orthogonally polarized. Likewise, the first element2004(1) of the second sub-array2004may represent a left antenna and the second element2004(2) of the second sub-array2004may represent a right antenna. The first element2004(1) and the second element2004(2) may include different polarizations and may be orthogonally polarized. The transceiver system2000has dedicated feed ports for transmitting and receiving via the first sub-array2002and the second sub-array2004. For example, the transceiver system2000is shown including a modem2006that includes a first port2008, a second port2010, and a third port2012.
In some instances, the first port2008may represent a transmission and receiving port, while the second port2010and the third port2012may represent receiving ports. As shown, the transceiver system2000may include a power splitter and combiner2014to drive the elements of the first sub-array2002and the second sub-array2004. For example, the first port2008may split transmission signals to the first element2002(1) of the first sub-array2002and the second element2004(2) of the second sub-array2004via a composite 90 degree phase delay. During transmission, the splitting of the signals may drive the polarizations of each element. In some instances, the different polarizations of the first element2002(1) and the second element2004(2) may maximize the polarization diversity across the radiation pattern of the transceiver system2000. In this example, the first element2002(1), illustrated as a slant left polarization element, may form the left-most extent of the beamwidth, while the second element2004(2) may be a right-facing slant right polarization element forming the right extent of the beamwidth. Due to the 90 degree phase offset, the central facing polarization formed between the right and left elements may result in a right hand circular polarization. Additionally, this may result in the transmission of circularly polarized signals (via interaction between the elements) and permit the receivers of the transceiver system2000to receive circularly polarized signals. The second element2002(2) of the first sub-array2002may couple to the second port2010and the first element2004(1) of the second sub-array2004may couple to the third port2012. The transceiver system2000may represent an antenna array having one transmission/receiving port, with a two-way combine/split, and two receiving ports.

FIG.21illustrates a graph2100showing simulation results for a transceiver system having three sub-arrays, such as the transceiver system1500.FIG.21illustrates the total power of the transceiver system at line2102. The right hand circular polarization (RHCP) is shown by line2104and the left hand circular polarization (LHCP) is shown by line2106. By starting on the left of the graph2100, the variation in polarization is observed to change in a trajectory around the Poincaré Sphere. Additionally, the graph2100illustrates that the partial overlap of beams may result in destructive interference, decreasing the gain in specific directions. For example, at zero degrees, the RHCP is shown as having a lull (e.g., the signals may be 180 degrees out of phase).

FIG.22illustrates a graph2200showing simulation results for a transceiver system having three sub-arrays, such as the transceiver system1500.FIG.22illustrates the total power of the transceiver system1500at line2202. The horizontal polarization is shown by line2204and the vertical polarization is shown by line2206. By starting on the left of the graph2200, the variation in polarization is observed to change in a trajectory around the Poincaré Sphere.

FIG.23illustrates a graph2300showing simulation results for a transceiver system having three sub-arrays, such as the transceiver system1500.FIG.23illustrates the total power of the transceiver system1500at line2302. A +45 degree slant polarization is shown by line2304and the −45 degree slant polarization is shown by line2306. By starting on the left of the graph2300, the variation in polarization is observed to change in a trajectory around the Poincaré Sphere.
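The measurements plotted in the graphs2100,2200, and2300can be related by a short basis-projection sketch (the example field is arbitrary and the RHCP sign convention is an assumption): the same field projects very differently onto H/V, ±45 degree slant, and RHCP/LHCP receive pairs, while the summed (total) power is identical in every basis:

    import numpy as np

    e = np.array([0.8, 0.6j])                # example field in an (H, V) basis

    bases = {
        "H / V":       (np.array([1, 0]), np.array([0, 1])),
        "+45 / -45":   (np.array([1, 1]) / np.sqrt(2),
                        np.array([1, -1]) / np.sqrt(2)),
        "RHCP / LHCP": (np.array([1, -1j]) / np.sqrt(2),
                        np.array([1, 1j]) / np.sqrt(2)),
    }

    for name, (u, v) in bases.items():
        pu = abs(np.vdot(u, e)) ** 2         # power seen by each orthogonal
        pv = abs(np.vdot(v, e)) ** 2         # measurement antenna
        print(f"{name:11s}: {pu:.2f} + {pv:.2f} = {pu + pv:.2f}")

Each row sums to the same total power even though the per-polarization terms differ sharply, which is why the total-power traces coincide across the three graphs.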
Across the graphs2100,2200, and2300, the lines2102,2202, and2302illustrate that the total power is the same. Here, any orthogonal pair of measurement antennas can have large gain variations over the beamwidth while the total power is relatively flat over the +/−90 degree beamwidth.

FIG.24illustrates a graph2400showing simulation results for a transceiver system having three sub-arrays, such as the transceiver system1600. The line2402and the line2404represent pattern traces of the orthogonal measurements of the transmission/receiving port of the transceiver system1600, such as the first port1610. The lines2406,2408, and2410represent the three independent receiving port patterns (e.g., the second port1612, the third port1614, and the fourth port1616). A 180 degree phase shift may exist between the right and left facing co-polarized elements and create a deep transmission null at 0 degrees to improve array gain flatness. In addition, the transceiver system1600traverses a full 360 degrees of rotation around the Poincaré Sphere. This is due to the +/−90 degree differential phase relative to the orthogonal center element, which maximizes polarization diversity.

FIG.25illustrates the CPE108providing broadband internet to a premises102using a plurality of disparate communication protocols. As discussed above, the CPE108may be disposed on an exterior of the premises102while the router110is disposed on an interior of the premises102. The CPE108communicatively couples to the router110via a BPL interface (e.g., the first BPL interface122). The CPE108additionally communicatively couples to the base station radio device106via the communication channel118. The base station radio device106is shown communicatively coupled to a wide area network (WAN)2500. The WAN2500may be representative of the SPN116, or an ISP. In some instances, the premises102may include a plurality of diverse physical layer (PHY) technologies, such as wired, optical, or wireless, and/or wide area network (WAN) connections may be available at the utility service entrance of the premises102. In some instances, the PHY technologies and WAN technologies provided by Internet Service Providers (ISPs) are processed as needed by compatible modems at the CPE108. For example, to support multiple WAN technologies for failover redundancy, the top portion314of the CPE108may be interchangeable with compatible modems depending on technologies located at the premises102.

For example, as shown, the premises102may include a cable2502(e.g., coaxial), a DSL2504(e.g., twisted pair), and a fiber2506(e.g., fiber optic). Each of the cable2502, the DSL2504, and the fiber2506may serve to provide internet services to the premises102. For example, the DSL2504may represent telephone lines that carry signals to and from the SPN116. Traditionally, each of the cable2502, the DSL2504, and the fiber2506requires a physical routing through a structure of the premises102for connection to a modem and/or router. However, as shown, the CPE108may communicatively couple to the cable2502, the DSL2504, and/or the fiber2506for providing broadband internet to the premises102. That is, rather than routing cables through the premises102for providing broadband internet (as discussed above), the cable2502, the DSL2504, and the fiber2506may instead couple to the CPE108. Therein, the CPE108may communicatively couple to the router110for providing broadband internet to the premises102.
In some instances, the CPE108, the cable2502, the DSL2504, and/or the fiber2506may be located at a demarcation point at which services are provided to the premises102. The cable2502, the DSL2504, and the fiber2506are shown coupling to the WAN2500(e.g., the SPN116) for providing access to the broadband internet. In some instances, the premises102may include any and/or all of the cable2502, the DSL2504, and the fiber2506. That is, different premises may include different services that provide internet, or different technologies that provide internet to the premises102. However, in these instances, the CPE108may be modular for accepting any one of the cable2502, the DSL2504, and/or the fiber2506for providing broadband internet. In such instances, antenna(s) and/or modems of the CPE108may be configured to be interchangeable and installed for providing broadband internet, depending on the type of PHY technologies (e.g., the cable2502, the DSL2504, and the fiber2506). That is, in some instances, the premises102may include the cable2502and the CPE108may receive the cable2502for providing broadband internet to the premises102. In this instance, the CPE108may not wirelessly communicate with the base station radio device106, but may take advantage of a PHY technology of the premises102. The CPE108may also be configured with a modem for communicating with the WAN2500, using the cable2502. In some instances, the CPE108may provide the broadband internet to the premises102(via the router110) using wired technologies (e.g., BPL) and/or wireless technologies.

As another example, the premises102may include the fiber2506. Here, the CPE108may couple to the fiber2506for communicatively coupling with the WAN2500. The CPE108may also include modems and/or modules for transmitting and receiving data via the fiber2506. The CPE108may therein provide broadband internet to the premises102(using wired technologies and/or wireless technologies) through communicating with the router110. In some instances, rather than wirelessly receiving broadband internet via the base station radio device106, the CPE108may wirelessly couple to an ISP's wireless device2508, or wireless services.

In some instances, the connections between the CPE108and the WAN2500may be combined into a plurality of WANs (m-WAN) and conveyed using at least a single PHY to enter the premises102. For example, this single PHY may comprise BPL for transmitting data to the router110on the interior of the premises102. However, transmitting and receiving data with the WAN2500may come by way of wireless, coaxial cable, twisted pair cables, fiber, and so forth. In this sense, the CPE108may represent a hub that is utilized to transmit data into the premises102. The CPE108may also aggregate data received across a plurality of frequencies or received via the different PHY technologies. For example, the CPE108may receive first data over a first frequency and second data over a second frequency, and combine the first data and the second data before sending into the premises102, via the first BPL interface122. In this manner, the CPE108may dynamically take advantage of unused frequencies, or frequencies with low traffic, for communicating with the base station radio device106and/or the WAN2500. This process may also load balance data sent to and from the WAN2500. In some instances, the CPE108may include a mmWave antenna/modem for aggregating and/or obtaining higher bandwidths.
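As a purely hypothetical sketch of this hub behavior (the link names, capacities, and the spare-capacity policy below are invented for illustration and are not recited in this disclosure), such aggregation might load-balance flows across whichever WAN links are present before handing traffic to the single BPL PHY into the premises:

    from dataclasses import dataclass

    @dataclass
    class WanLink:
        name: str
        capacity_mbps: float
        queued_mbps: float = 0.0

    # Hypothetical links available at the service entrance
    links = [WanLink("wireless", 400.0), WanLink("fiber", 900.0),
             WanLink("dsl", 80.0)]

    def route(demand_mbps: float) -> WanLink:
        """Assign a flow to the link with the most spare capacity."""
        link = max(links, key=lambda l: l.capacity_mbps - l.queued_mbps)
        link.queued_mbps += demand_mbps
        return link

    for flow in (50, 200, 300, 100):
        print(f"{flow:3d} Mbps -> {route(flow).name}")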
Inside the premises102, in some instances, the plurality of WAN connections (if present) are separated into their own independent bridged ethernet connections for WAN aggregation. Additionally, or alternatively, the plurality of WAN connections may be aggregated using the router110. In some instances, the router110may correspond to a multi-PHY multi-WAN router (MPMWR). In instances where the router110comprises an MPMWR, the router110may support one or more PHYs on each of the WAN/LAN ports (e.g., using wireless, coax, and so forth) to distribute load-balanced fail-over or WAN-bonded multi-PHY LAN bandwidth throughout the premises. In some instances, segmentation of the CPE108and the router110on the exterior and interior side of the premises102, respectively, may provide for optimal PHY selections. That is, the broadband internet may be provided to the premises102via power networks, DSL cables, cables, and so forth. As such, the optimal (e.g., most reliable, highest bandwidth, etc.) connection(s) are chosen to match existing infrastructure throughout the premises102. Installation may be similar to that discussed above. For example, the consumer may request service, the utility service or company may survey the premises102for available or optimal PHYs, install the CPE108(along with appropriate modems), and then provision the CPE108. In some instances, the CPE108may connect to a remote antenna for delivering broadband internet to the premises102. For example, in the event a customer is beyond the useful range of the CBRS spectrum, a remote unit may be deployed. The remote unit may include a directional antenna that allows the utility service to connect with the premises in instances where deployment of a base station radio device is not feasible. FIG.26illustrates an example process2600for providing broadband internet to a premises, such as the premises102. At2602, the process2600may receive first data from a service provider network. For example, the base station radio device106may receive, from the SPN116, data for routing and/or transmitting to the premises102. The base station radio device106may receive the first data via a backhaul (e.g., wired and/or wireless). In some instances, the base station radio device106may receive the first data from another base station radio device and/or servers, devices, or facilities of the SPN116. At2604, the process2600may transmit the first data. For example, the base station radio device106may transmit the data to the CPE108using the interface(s)204and antenna(s)206. The base station radio device106may communicate with the CPE108using any spectrum (e.g., DSS, CBRS, WWAN, C-band, etc.) and according to the technology of the CPE108(e.g., the first modem module126). In some instances, the base station radio device106may perform beamforming or beam steering when sending the first data. At2606, the process2600may receive the first data. For example, the CPE108may include antenna(s)120and the first modem module126for receiving the first data. In some instances, the antenna(s)120may beamform for receiving the first data from the base station radio device106. At2608, the process2600may transmit the first data. For example, the CPE108may transmit the first data via the first BPL interface122. At2610, the process2600may receive the first data. For example, the router110may receive the first data, via the second BPL interface124, from the first BPL interface122, over electrical wiring of the premises102. At2612, the process2600may transmit the first data.
For example, the second modem module128and the antenna(s)132of the router110may transmit (e.g., broadcast) the first data to the consumer device(s)112within the premises102. In some instances, the second modem module128and the antenna(s)132may be modular to broadcast internet to the consumer device(s)112at certain frequencies (e.g., 5.0G). In some instances, the router110distributes Wi-Fi through DSS and/or CBRS. Additionally, the router110may include wired connections for providing broadband internet to the consumer device(s)112. At2614, the process2600may receive second data. For example, the second modem module128and the antenna(s)132of the router110may receive second data from the consumer device(s)112(e.g., a request to navigate to a webpage). At2616, the process2600may transmit the second data. For example, the second BPL interface124may transmit the second data to the CPE108. At2618, the process2600may receive the second data. For example, the first BPL interface122may receive the second data from the second BPL interface124via the electrical wiring of the premises102. At2620, the process2600may transmit the second data. For example, the first modem module126and the antenna(s)120may transmit the second data to the base station radio device106. At2622, the process2600may receive the second data. For example, the antenna(s)206and/or the interface(s)204may receive the second data from the CPE108. At2624, the process2600may transmit the second data. For example, the base station radio device106may transmit the second data to the SPN116. FIG.27illustrates an example process2700for determining phase shifts and amplitudes for a transceiver system. At2702, the process2700may include determining a polarization diversity associated with an antenna array. In some instances, the polarization diversity may be based on a desired achieved polarization diversity of the antenna array, a number of antennas within the antenna array, and/or a remote antenna with which the transceiver system is to communicate. For example, the remote antenna may be configured to receive a plurality of polarizations (e.g., circular, vertical, etc.) from the transceiver system (or another system). At2704, the process2700may include determining a first phase shift and a first amplitude for a first element of a first antenna of the antenna array. For example, the antenna array may include a first sub-array having multiple elements that are dual polarized. A first element of the first sub-array may be precoded with a first phase shift and a first amplitude. The first phase shift and the first amplitude may be determined based, at least in part, on the desired polarization of the antenna array. In some instances, the first phase shift and the first amplitude may be relative to an additional antenna of the first sub-array or an additional antenna of other sub-arrays of the antenna array. At2706, the process2700may include determining a second phase shift and a second amplitude for a second element of a second antenna of the antenna array. For example, the antenna array may include a second sub-array having multiple elements that are dual polarized. A second element of the second sub-array may be precoded with a second phase shift and a second amplitude. The second phase shift and the second amplitude may be determined based, at least in part, on the desired polarization of the antenna array.
In some instances, the second phase shift and the second amplitude may be relative to an additional antenna of the second sub-array or an additional antenna of other sub-arrays of the antenna array. Although the process2700is discussed with regard to determining phase shifts and/or amplitudes for two sub-arrays, or a single element of the two sub-arrays, the process2700may determine phase shifts and/or amplitudes for multiple elements within a sub-array and/or for more sub-arrays (e.g., three). While the foregoing invention is described with respect to the specific examples, it is to be understood that the scope of the invention is not limited to these specific examples. Since other modifications and changes varied to fit particular operating requirements and environments will be apparent to those skilled in the art, the invention is not considered limited to the example chosen for purposes of disclosure, and covers all changes and modifications which do not constitute departures from the true spirit and scope of this invention. Although the application describes embodiments having specific structural features and/or methodological acts, it is to be understood that the claims are not necessarily limited to the specific features or acts described. Rather, the specific features and acts are merely illustrative of some embodiments that fall within the scope of the claims of the application.
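Returning to the process2700, a hedged sketch of the per-element precoding it describes is given below; the +/−90 degree phases echo the differential phases discussed with respect to the transceiver system1600, but the specific amplitudes and angles are illustrative assumptions rather than values prescribed by the description.

import cmath, math

def precoding_weight(amplitude, phase_deg):
    """Complex weight combining an amplitude and a phase shift for one element."""
    return amplitude * cmath.exp(1j * math.radians(phase_deg))

# First element of a first sub-array and second element of a second sub-array,
# with opposite differential phases relative to an assumed center element.
w_first = precoding_weight(1.0, +90.0)
w_second = precoding_weight(1.0, -90.0)
print(abs(w_first), math.degrees(cmath.phase(w_second)))  # 1.0 -90.0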
150,402
11863247
DETAILED DESCRIPTION OF EMBODIMENTS FIG.1schematically illustrates a communication network120. The communication network120is based on power line communications (PLC). The communication network120is for example an AMM electricity supply network allowing a base node device (also called “data hub”) to collect, from smart electricity meters, energy consumption reading data for electrical installations that said smart electricity meters are respectively responsible for monitoring. The data hub and the smart electricity meters are thus node devices of the communication network120. The communication network120may comprise other node devices, for example installed in electrical transformers. The communication network120has a meshed structure. The meshed structure of the communication network120is shown schematically inFIG.1through arrows representing the communication links between two neighbouring nodes, and in which some node devices act as a relay so as to increase the communication range in the communication network120. One and the same smart electricity meter thus potentially has a plurality of paths for reaching the data hub, and vice versa. The present invention is therefore particularly suited to the context of G3-PLC technology. The communication network120thus comprises a plurality of node devices130,131,132,133,134,135,136,137,138,139. A network neighbourhood is associated with each of the node devices of the communication network120. For example, the node device133inFIG.1is associated with a network neighbourhood110incorporating the node devices130,134and137. In the communication network120, a signal or a message broadcast by a node device, such as for example the node device133, is not generally visible at any point of the communication network120. Each node device transmitting signals or messages then has a network neighbourhood, that is to say a subset of the communication network120, in which any node device is able to intelligibly receive the signals or messages directly from the node device that broadcast these signals or messages. The network neighbourhood corresponds to the range of the transmitted signals, depending on predetermined transmission parameters (for example power, modulation and coding scheme, network topology, etc.) of the node device at the source of the signals and also potentially depending on characteristics of the communication channel, such as for example an attenuation, a noise level or an impedance. The communication network120is based on a reactive routing protocol, such as for example the LOADng (“Lightweight On-demand Ad hoc Distance-vector Routing Protocol—Next Generation”) protocol. In contrast to proactive routing protocols, which are based on overall network topology knowledge, reactive routing protocols are based on on-demand route discoveries, each node device of the network then needing only to know its own network neighbourhood in order to route data in the communication network120. To discover an appropriate route in the communication network120from a source node device (for example the node device133) to a destination node device (for example the node device132), it is known that the source node device broadcasts a route discovery request, called RREQ (“Route REQuest”). A copy of this route discovery request is received by each node device in the network neighbourhood of said source node device.
Each node device in the network neighbourhood of said source node device relays said copy of the request through broadcasting if said node device in question is not the destination node device. Through step-by-step broadcasting, a plurality of copies of the route discovery request are typically received by the destination node device, each of these copies having taken a different route in the communication network120. The use of routing tables stored in the node devices makes it possible to perform point-to-point or unicast communications between any pair of node devices of the communication network120. Intermediate node devices therefore serve as a relay when the node devices of said pair are not in the network neighbourhood of one another, and the communications thus take place step-by-step, each node device using one of its own neighbours to forward messages to their respective intended recipients. For communication between neighbouring node devices (that is to say node devices that are in the network neighbourhood of one another), the messages are transmitted in the form of modulated frames. When a modulated frame is addressed specifically to a neighbouring node device and it is demodulated correctly thereby, said neighbouring node device retransmits an acknowledgement ACK to the node device that addressed said modulated frame thereto. The acknowledgement ACK is transmitted on the same frequency band as the modulated frame with which said acknowledgement ACK is associated. A plurality of frequency bands are defined in order to support the transmission of these modulated frames, an appropriate modulation scheme being associated with each of these frequency bands. Each frame transmitted in the form of modulated signals begins with a predefined preamble depending on the modulation scheme in accordance with which said signals were modulated. The preamble is designed to make it possible to perform synchronization at reception on said frame, that is to say to be able to determine an effective frame start time. To this end, the preamble typically comprises a plurality of successive copies of one and the same symbol. The effective content and the duration of the preamble are thus predefined and depend on the modulation scheme that is used. The preambles of a plurality of frames are identical when the same modulation scheme is applied, and differ otherwise. The applicable modulation schemes (and corresponding demodulation schemes) are preferably OFDM (“Orthogonal Frequency Division Multiplex”) multi-carrier modulation schemes (respectively demodulation schemes). In one particular embodiment, the frequency bands are separate. In terms of frequency bands able to be used in the context of implementing the communication network120, mention may be made of the following: the CENELEC A frequency band, which ranges from approximately 35 kHz to 91 kHz; the FCC frequency band, which ranges approximately from 150 kHz to 480 kHz; the ARIB frequency band, which ranges approximately from 150 kHz to 400 kHz; and the CENELEC B frequency band, which ranges approximately from 98 kHz to 122 kHz. It is then possible to use: a first modulation scheme with thirty-six carriers in the CENELEC A frequency band; a second modulation scheme with seventy-two carriers in the FCC frequency band; a third modulation scheme with fifty-four carriers in the ARIB frequency band; and a fourth modulation scheme with sixteen carriers in the CENELEC B frequency band.
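The band plan listed above lends itself to a small lookup table; the following Python sketch encodes the stated frequency limits and carrier counts, and derives whether two bands overlap (the field names are assumptions introduced for illustration).

BANDS = {
    "CENELEC A": {"range_khz": (35, 91),   "carriers": 36},
    "FCC":       {"range_khz": (150, 480), "carriers": 72},
    "ARIB":      {"range_khz": (150, 400), "carriers": 54},
    "CENELEC B": {"range_khz": (98, 122),  "carriers": 16},
}

def overlaps(band_a, band_b):
    """True when the two bands share frequencies (approximate limits)."""
    (a0, a1) = BANDS[band_a]["range_khz"]
    (b0, b1) = BANDS[band_b]["range_khz"]
    return a0 < b1 and b0 < a1

print(overlaps("FCC", "ARIB"))       # True: cannot be used simultaneously
print(overlaps("CENELEC A", "FCC"))  # False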
It is apparent from the above that a node device may simultaneously use a plurality of frequency bands to communicate with one or more of its neighbours by applying an appropriate transmission mechanism. However, it appears that the ARIB and FCC frequency bands cannot be used simultaneously by one and the same node device, given that they overlap. Advantageously, at least some of the node devices130,131,132,133,134,135,136,137,138,139are configured so as to communicate in a plurality of frequency bands. It is therefore important, for a given node device, to be able to determine which communication modes are supported by a node device in its network neighbourhood. The term “supported communication modes” denotes one or more native communication modes of a node device, that is to say modes that said node device is capable of implementing due to its possible configurations, and also means that these one or more native communication modes are able to be used at a given time, given the possible interference that may exist. The interference may originate for example from a noisy environment. According to one embodiment of the invention, an initiator node device configured so as to communicate in a plurality of frequency bands with a target neighbouring node device may determine, when needed, which communication modes are supported by this target neighbouring node device prior to sending more substantial messages forming communications. The term “initiator node device” in this case denotes a device executing the method for determining a communication mode for the purposes of communicating with a target neighbouring node device, that is to say one located in its network neighbourhood. The term “target node device” in this case denotes a device receiving one or more channel estimate requests from a neighbouring initiator node device executing the method for determining a communication mode and that will normally be the intended recipient (and therefore the target), after determining a communication mode, for communications performed in this mode. In order to determine which communication modes are supported by a target neighbouring node device (for example the node device134), an initiator node device (for example the node device133) sends messages, in each of the frequency bands for which it is configured so as to communicate, to the target neighbouring node device, which messages each comprise information intended to ask the target node device for a channel estimate in the frequency band that is used. The presence of the information intended to ask for a channel estimate forms a channel estimate request. For example, in a network context compatible with the G3-PLC (registered trademark) standard, the information according to which a channel estimate is requested by an initiator node device from a target neighbouring node device is a Tone Map Request indicator of a frame control header defined according to the ITU-T G9903 recommendation and the information representative of at least one channel estimate and received from the target neighbouring node device is contained in a Tone Map Response message defined according to the ITU-T G9903 recommendation.
The initiator node device then analyses the one or more responses possibly received from the target node device and determines, using the one or more items of information received in one or more possible messages received in response, which communication modes are supported by the target neighbouring node device, and then possibly which communication mode has the best performance out of these available communication modes. The information received in response to a channel estimate request is representative, besides the capability of the target node device to receive a message in a given frequency band, of the quality of the channel established in this frequency band. According to one embodiment, the response message to a channel estimate request is implemented in the form of an information block called Tone Map Response, as defined in the G3-PLC standard (ITU G.9903 March 2017 edition). In one exemplary embodiment, the Tone Map Response data block comprises information such as the type of modulation that it uses for the frequency band in question and a link quality indicator LQI. The Tone Map Response data block may contain other information as defined in table 9.9 of section 9.3.5.2.2 of the ITU-T G9903 recommendation (March 2017 version), in particular a tone map. The tone map is a list of subcarriers used to communicate in a given frequency band. According to one embodiment, the target node device, neighbouring the initiator node device, responds to the neighbouring initiator node device in each of the frequency bands in which it has received a message comprising a channel estimate request. Thus, a lack of response in one of the frequency bands used by an initiator node device to address a channel estimate request means that the target node device is not configured so as to communicate in this frequency band with the initiator node device, or else that the target node device was not able to correctly receive the channel estimate request due to interference in the transmission of the message comprising this request, or that this frequency band was not able to be used by the initiator node device to communicate with the target node device. According to one embodiment, the initiator node device records the one or more items of information representative of a channel estimate for each frequency band for which it received such information in response to a channel estimate request. The initiator node device then determines, based on this information, which communication modes are supported by the target neighbouring node device and records this information in a neighbourhood information table that comprises information representative of parameters of all of the identified neighbouring node devices. According to one variant, an initiator node device may comprise a plurality of neighbourhood tables, each of the tables corresponding to a previously detected and identified target neighbouring node device. When a new node device is added to the network neighbourhood of a given node device, information corresponding to this new neighbouring node device is added to the one or more neighbourhood information tables of the neighbouring node devices after the new node device has been able to be detected and identified and the parameters to be recorded have been able to be defined through message exchanges similar to those described above, in particular. 
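A minimal sketch of such a neighbourhood information table is given below, assuming dictionary-based storage; the field names (tone_map, lqi, modulation) loosely mirror the Tone Map Response contents mentioned above but are illustrative assumptions, not the layout defined by the ITU-T G9903 recommendation.

neighbourhood_table = {}  # keyed by neighbouring node device identifier

def record_estimate(neighbour, band, tone_map, lqi, modulation):
    """Store the channel-estimate information received for one frequency band."""
    entry = neighbourhood_table.setdefault(neighbour, {})
    entry[band] = {"tone_map": tone_map, "lqi": lqi, "modulation": modulation}

record_estimate("node-134", "B2", tone_map=[1] * 36, lqi=42, modulation="OFDM")
print(list(neighbourhood_table["node-134"]))  # bands recorded so far: ['B2']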
A node device of the network that wishes to initiate communication with a neighbouring node device may thus advantageously consult the neighbourhood information table that corresponds to the target node device in order to determine which communication mode is the best one to use with this target node device. If a communication problem occurs, such as the complete absence of transmission or else transmission subjected to strong interference, or else if a validity time of a neighbourhood information table has expired, the initiator node device that initiated the communication may simply execute the method for determining a communication mode again in order to redefine which communication modes are supported by the target node device, and possibly the best communication mode for communicating with this target node device, prior to establishing any new communication with this target node device. Communication problems may be detected through an error rate check or else through using protocols intended to verify the correct reception or else the integrity of the transmitted messages. FIG.2illustrates a first exchange of messages between the node device133and the node device134neighbouring the node device133. The node devices133and134are represented by vertical bars located, respectively, on the left and on the right inFIG.2, and the messages exchanged between the two devices are each represented by an arrow going from one to the other of the node devices133and134neighbouring one another. Reading from top to bottom inFIG.2corresponds to a chronological sequence of steps (here S2to S5) and illustrates key steps of one example of a method for determining a communication mode according to one embodiment. Thus, in a step S2, the communication node device133addresses a message TM-RQ-B1, comprising information according to which a channel estimate is requested from the node device134for the frequency band B1(channel estimate request), to the node device134. The channel estimate request of the message TM-RQ-B1is addressed to the node device134by the node device133in a frequency band B1. The frequency band B1is for example a frequency band chosen from among the group of frequency bands consisting of the CENELEC A frequency band, the CENELEC B frequency band, and the FCC frequency band or the ARIB frequency band. According to one embodiment, the channel estimate request is implemented in the form of a bit set to 1 in a TMR field of a frame control header of a message, as defined in the G3-PLC standard (ITU G.9903 2017 edition). Similarly, the node device133also addresses a second message TM-RQ-B2, comprising a channel estimate request, to the node device134in a step S3. The channel estimate request of the message TM-RQ-B2is addressed to the node device134by the node device133in a frequency band B2. The frequency band B2is for example also a frequency band chosen, separately from the frequency band B1, from among the group of frequency bands consisting of the CENELEC A frequency band, the CENELEC B frequency band, and the FCC frequency band or the ARIB frequency band. In this exemplary message exchange, the node device134does not respond to the channel estimate request in the frequency band B1, but, in a step S4, addresses a message TM-RSP-B2, in response to the message TM-RQ-B2, comprising one or more items of information linked to a channel estimate in the frequency band B2performed by the node device134.
According to one embodiment, the response message TM-RSP-B2is implemented in the form of an information block called “TONE MAP RESPONSE”, as defined in the G3-PLC standard (ITU G.9903 2017 edition). The node device133then, in a step S5, records the received information, representative of the channel estimate performed by the node device134in the frequency band B2, in the form of an information block in a neighbourhood table NT-REC in a memory internal to the node device133. The node device133is thus advantageously capable of determining that the node device134does not support communication in the frequency band B1or was possibly not able to receive the message TM-RQ-B1. It is furthermore possible that the message TM-RQ-B1sent to the node device134by the node device133was correctly received by the node device134but that the node device133was not able to receive any message in response due to interference on the communication link between the two neighbouring node devices133and134. Based on the information received and recorded in the neighbourhood table NT-REC, the node device133is able to determine, prior to subsequent communications with the node device134, which communication modes are supported thereby, or even which of these communication modes is the supported mode offering the best performance level for one or more messages to be transmitted subsequently to the neighbouring node device134. According to the example described inFIG.2, the node device133detects that the neighbouring node device134is a single-band node device capable of communicating in the frequency band B2. Advantageously, the node device133establishes first quality indicators based on the information successively received and representative of a channel estimate in the frequency bands under test, so as then to be able to determine a transmission mode with the node device134by comparing these respectively determined first transmission quality indicators for each of the frequency bands. FIG.3illustrates a second exchange of messages between the node device133and the node device134according to an embodiment similar to that ofFIG.2. According to one embodiment, the node device133is a multi-band node device configured so as to communicate in the frequency band B1and the frequency band B2, but also in a frequency band EB, called “extended band”, which groups together the frequency bands B1and B2. In other words, this means that the node device133is configured so as to be able to process communications in the extended frequency band EB that is wider than the frequency band B1or than the frequency band B2taken on their own, and that internal circuits of the node device133are configured so as to be able to generate a modulated frame on all of the subcarriers of the frequency bands B1and B2. Advantageously, the extended frequency band EB covers all of the subcarriers available in the various bands supported by the node device133, including in particular the frequency bands B1and B2. Distributing symbols to be transmitted on the extended frequency band EB therefore involves adjusting the encoding, error correction and data interleaving mechanisms used by communication modes for communicating in a “non-extended” band, such as the frequency band B1or the frequency band B2. For example, the implementation of the extended frequency band EB may be based on separate transmitter circuits of the node device133, the driving of which may be pooled under the control of an internal control unit configured so as to manage communications.
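Since the extended frequency band EB groups together all of the subcarriers of the bands it covers, it can be sketched as a simple set union; the subcarrier indices below are illustrative assumptions.

SUBCARRIERS = {"B1": set(range(0, 36)), "B2": set(range(36, 52))}

def extended_band(bands):
    """EB covers all subcarriers available in the grouped frequency bands."""
    eb = set()
    for band in bands:
        eb |= SUBCARRIERS[band]
    return eb

print(len(extended_band(["B1", "B2"])))  # 52 subcarriers in the assumed EB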
Of course, the communication mode for communicating in the extended frequency band EB thus supported is applicable only between compatible node devices, that is to say ones that support the extended frequency band EB. According to one embodiment, a node device supporting the extended frequency band EB may, through configuration, restrict its use to a specific frequency band from among those forming the extended frequency band EB, such as for example the frequency band B1or else the frequency band B2. Advantageously, backwards compatibility between node devices is supported by message protocol exchanges, such as that illustrated inFIG.3. According to one embodiment, the node device133, configured so as to communicate in the extended frequency band EB, sends a message TM-RQ-EB to the node device134. The message is sent to the node device134in the extended frequency band EB in accordance with the transposition principle described above, and comprises information according to which a channel estimate is requested from the node device134in the extended frequency band EB. According to one embodiment, the channel estimate request is implemented in the form of a bit set to 1 in a TMR field of a frame control header of a message referring to use of the extended frequency band EB, such that, if the node device134is configured so as to communicate with the node device133in the extended frequency band EB, it sends a message in response to the channel estimate request received from the node device133. The example illustrated inFIG.3thus shows that the node device134is not configured so as to communicate with the node device133in an extended frequency band EB, since it does not respond to the transmitted message TM-RQ-EB. The node device133then sends messages TM-RQ-B1and TM-RQ-B2each comprising a channel estimate request in the frequency bands B1and B2, as already illustrated inFIG.2. Sending a channel estimate request in the frequency band EB beforehand allows the node device133to check whether the node device134supports a communication mode for communicating in an extended frequency band EB, before checking which other possible modes are supported, in the frequency bands B1and B2for example. It should be noted that the channel estimate request sent in the extended frequency band EB by the node device133to the node device134may be addressed before or after the other message exchanges performed in steps S2to S4. The capability of the node device134to communicate or not communicate with the node device133in the extended frequency band EB is recorded in the neighbourhood table NT-REC of the node device133in the form of one or more items of information. FIG.4illustrates a third exchange of messages between the node device133and the node device134according to an embodiment similar to that already used inFIG.2and inFIG.3. The exchange of messages shown illustrates that the node device134does not respond to the channel estimate request message TM-RQ-EB sent by the node device133in step S1, but then responds to the two messages TM-RQ-B1and TM-RQ-B2respectively sent to the node device134in the frequency bands B1and B2in steps S2and S3. The response messages TM-RSP-B1and TM-RSP-B2sent in step S4each comprise information representative of the channel estimate of the frequency band in question. Thus, the message TM-RSP-B1comprises information representative of a channel estimate in the frequency band B1and the message TM-RSP-B2comprises information representative of a channel estimate in the frequency band B2.
This information is recorded in the network neighbourhood table NT-REC in step S5. The node device133may identify, according to the responses received in this example, that the node device134does not support the communication mode for communicating in an extended frequency band EB, but supports communication modes for communicating in the frequency band B1and in the frequency band B2. FIG.5illustrates a fourth exchange of messages between the node device133and the node device134according to an embodiment similar to that already used inFIG.2, inFIG.3and inFIG.4. In this exemplary exchange of messages between the neighbouring node devices133and134, the node device134responds to a message TM-RQ-EB, transmitted in step S1and comprising a channel estimate request, with a message TM-RSP-EB in a step S12. According to this example, the message TM-RSP-EB, in response to the message TM-RQ-EB, comprises information representative of a channel estimate in an extended frequency band EB, and the node device133is able to deduce that the node device134is configured so as to communicate therewith in the extended frequency band EB. Thus, for example, following the message TM-RQ-EB, the node device133does not address any further message with a view to obtaining a channel estimate in a frequency band other than the extended frequency band EB. Of course, this example is non-limiting, and it may be beneficial to obtain information regarding the communication in an extended frequency band EB and information regarding each of the other frequency bands B1and B2with the node device134, prior to selection of a communication mode for communicating therewith by the node device133. FIG.6is a flowchart illustrating a method for determining a mode of communication between two node devices neighbouring one another in the communication network120, according to one embodiment. These node devices are by way of example the node device133operating as initiator node device and the neighbouring node device134, operating as a target node device. At the end of an initialization step S0, the node devices133and134are configured so as to communicate with one another in at least one communication mode for communicating in at least one frequency band. It is considered that the devices are then normally operational, at this stage, and that a message exchange may be initiated. According to the embodiment illustrated inFIG.6, the initiator node device133, in step S1, sends a message comprising information according to which a channel estimate in an extended frequency band EB is requested from the target node device, and awaits a possible message in response for a predetermined time. At the end of the predetermined period, the initiator node device133, in step S12, checks whether a response has actually been received in the form of a message comprising information representative of a channel estimate in the frequency band EB. If so, the initiator node device133, in step S5, records the received information representative of a channel estimate in the extended frequency band EB in its neighbourhood table NT-REC, and determines a preferred communication mode, taking into account in particular the various information available in the neighbourhood table NT-REC.
For example, the initiator node device133determines that the communication mode for communicating in an extended frequency band EB is the most advantageous communication mode at this time for communicating with the target node device134, and initiates transmission in this mode, in the extended frequency band EB, in step S6. According to the embodiment, in the absence of any response from the target node device134after a predetermined time, the initiator node device133considers that the target node device134does not support communication in a communication mode for communicating in the extended band and, in steps S2and S3, sends messages comprising a channel estimate request in the frequency band B1and a channel estimate request in the frequency band B2, respectively. The initiator node device133then awaits a possible response to at least one of these two messages, or to each of these two messages, and records the information representative of one or more channel estimates received in response in one or more neighbourhood tables NT-REC, before communicating subsequently in step S6. If no message is received in response to a channel estimate request transmitted by the initiator node device, in step S42, the method returns to step S2and the initiator node device again sends messages to the target node device until a response is obtained in at least one of the two frequency bands B1and B2. A new message comprising a channel estimate request (TMR indicator set to 1, for example, in G3-PLC) may be sent as soon as data have to be transmitted to the target node device134. When communications are established in step S6, in a given communication mode between the two node devices, and in the absence of any communication problem detected in step S62intended to define a communication quality level, communications continue in the selected communication mode. By contrast, if a communication quality problem is detected, the determination method is relaunched starting from step S1. Advantageously, determining the transmission mode comprises a step of comparing first transmission quality indicators that are respectively determined, for each of the frequency bands, based on recorded information associated with each of the at least two frequency bands B1and B2. If the received information, representative of one or more channel estimates, indicates that the available frequency bands exhibit significant interference, determining the transmission mode may furthermore comprise selecting what is called a “robust” transmission mode using BPSK modulation and systematic repetition of the transmitted bits (for example, each bit is repeated four times or six times during a transmission). The selection of what is called a “robust” transmission mode depends for example on a transmission quality level defined based on an estimate of a transmission channel established via a multi-band transmission in said at least two frequency bands B1and B2. FIG.7schematically illustrates an exemplary internal architecture of any node device of the communication network120. It will be considered by way of illustration thatFIG.7illustrates an internal layout of the node device133. Such a node device is said to be multi-band since it is capable of transmitting a message on a plurality of frequency bands. It will be noted thatFIG.7could also schematically illustrate an exemplary hardware architecture of a processing module contained within the node device. 
According to the exemplary hardware architecture shown inFIG.7, the node device133then comprises the following, connected by a communication bus1300: a processor or CPU (“Central Processing Unit”)1331; a RAM (“Random Access Memory”)1332; a ROM (“Read Only Memory”)1333; a storage unit such as a hard disk (or a storage medium reader, such as an SD (“Secure Digital”) card reader)1334; at least one communication interface1335allowing the node device133to communicate with the node devices belonging to its network neighbourhood, such as for example the node devices134and137. The processor1331is capable of executing instructions loaded into the RAM1332from the ROM1333, from an external memory (not shown), from a storage medium (such as an SD card), or from a communication network. When the node device is turned on, the processor1331is capable of reading instructions from the RAM1332and executing them. These instructions form a computer program that causes the processor1331to implement all or some of the exchanges and methods described with reference toFIGS.2,3,4and5. All or some of the exchanges and methods described with reference toFIGS.2,3,4and5may be implemented in software form by executing a set of instructions using a programmable machine, for example a DSP (“Digital Signal Processor”) or a microcontroller, or be implemented in hardware form by a machine or a dedicated component, for example an FPGA (“Field-Programmable Gate Array”) or an ASIC (“Application-Specific Integrated Circuit”). In general, the node device133comprises electronic circuitry configured so as to implement the methods described with reference to the node device133(likewise the node device134).
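As a hedged summary of the determination method ofFIG.6, the following Python sketch probes the extended band first and falls back to per-band channel estimate requests, selecting the responding band with the best quality indicator; send_request is a hypothetical callback and the LQI comparison is one plausible reading of the quality-indicator comparison described above.

def determine_mode(send_request, bands=("B1", "B2")):
    """Return the selected band, probing "EB" before the individual bands."""
    if send_request("EB") is not None:   # steps S1/S12: EB request answered
        return "EB"
    responses = {}
    while not responses:                 # steps S2/S3 repeated until a response
        for band in bands:
            response = send_request(band)
            if response is not None:
                responses[band] = response["lqi"]
    return max(responses, key=responses.get)  # compare quality indicators

# Example: a stub target that only answers in B2, with link quality 40.
print(determine_mode(lambda band: {"lqi": 40} if band == "B2" else None))  # B2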
32,531
11863248
DETAILED DESCRIPTION OF ILLUSTRATIVE EMBODIMENTS Like features have been designated by like references in the various figures. In particular, the structural and/or functional features that are common among the various embodiments may have the same references and may have identical structural, dimensional and material properties. For the sake of clarity, only the operations and elements that are useful for an understanding of the embodiments described herein have been illustrated and described in detail. In particular, the generation of the radio frequency signals and their interpretation have not been described in detail, the described embodiments and modes of implementation being compatible with the standard techniques for generating and interpreting these signals. Unless indicated otherwise, when reference is made to two elements connected together, this signifies a direct connection without any intermediate elements other than conductors, and when reference is made to two elements coupled together, this signifies that these two elements can be connected or they can be coupled via one or more other elements. In the following disclosure, unless indicated otherwise, when reference is made to absolute positional qualifiers, such as the terms “front,” “back,” “top,” “bottom,” “left,” “right,” etc., or to relative positional qualifiers, such as the terms “above,” “below,” “higher,” “lower,” etc., or to qualifiers of orientation, such as “horizontal,” “vertical,” etc., reference is made to the orientation shown in the figures, with the understanding that, in practice, the described devices can be oriented differently. Unless specified otherwise, the expressions “around,” “approximately,” “substantially” and “in the order of” signify within 10%, and preferably within 5%. In the following description, a “binary signal” refers to a signal that alternates between a first state or constant level, for example a low state, denoted “0,” and a second state or constant level, for example a high state, denoted “1”. The binary signals can, in practice, correspond to voltages or currents that may not be perfectly constant in the high or low state. FIG.1shows, very schematically and in block diagram form, an exemplary system of the type to which, as an example, the described embodiments apply. The case is considered of two similar electronic devices, for example two mobile telephones, but everything that will be described applies more generally to any system in which a reader, terminal or device, must detect and communicate with an electromagnetic transponder or an electronic tag (TAG). To simplify, reference will be made to NFC devices in order to designate electronic devices incorporating near field communication circuits. Two NFC devices1(DEV1) and2(DEV2) are able to communicate by near field electromagnetic coupling. Depending on the applications, for a communication, one of the devices operates in so-called reader mode, while the other operates in so-called card mode, or both devices communicate in peer-to-peer (P2P) mode. Each device includes various electronic circuits12and22for generating and/or detecting a radiofrequency signal using an antenna (not shown). The radiofrequency field generated by one of the devices is detected by the other device, which is within range.
When a device (for example, the device1) transmits an electromagnetic field (EMF1) in order to initiate a communication with another NFC device (for example, the device2), this field is detected by this device2once it is within range. The coupling between the two oscillating circuits (that of the antenna of the device2and that of the antenna of the device1) is reflected by a variation of the load formed by the circuits of the device2on the oscillating circuit for generating the field of the device1. In practice, for a communication, the corresponding phase or magnitude variation of the transmitted field is detected by the device1, which then begins an NFC communication protocol with the device2. On the device1side, in practice it is detected whether the magnitude of the voltage across the terminals of the oscillating circuit and/or the phase shift relative to the signal generated by the circuit12depart from the magnitude and phase windows each defined by a lower threshold and an upper threshold. When the device1detects the presence of the device2in its field, it begins a procedure for establishing communication, implementing transmissions of the requests by the device1and responses by the device2(polling sequence as defined in the NFC Forum standard). If circuits of the device2are in standby mode, they are then reactivated. For energy saving reasons, the transmitting device1, whether it is connected to the mains electrical distribution network or supplied directly or indirectly by battery, is placed in standby mode when it is not in use for communication. NFC devices are generally equipped with circuits for detecting another device located within their field in order to exit standby mode for communication purposes. In certain applications, when an NFC device is not in the process of communicating, it is switched to so-called low power mode in order to reduce the consumed energy. This is in particular the case for battery-powered devices. In this low power mode, a device configured in reader mode executes a so-called tag detection or card detection mode and executes detection loops. The detection is similar to that done when the device is not in low power mode, but the difference is that, in normal mode, the transmission of the carrier is continuous and periodically includes polling frames whereas, in order to reduce consumption, the transmission of the field is done by periodic bursts and without polling frames when the device is in low power mode. The bursts have a significantly shorter duration (in a ratio of at least ten, preferably at least one hundred) than the duration of a polling request of a card in normal mode. When it is in low power mode, an NFC device capable of operating both in reader mode and in card mode alternates between field emission phases and field detection phases. The field emission phases serve to detect the presence of a device in card mode within range. The field detection phases allow the device to detect any presence of a field emitted by another device in reader mode located within range. The near field communication circuits are increasingly frequently integrated into devices having communication functions other than NFC. This is in particular the case for mobile telephones that incorporate mobile telephone circuits and functions and NFC circuits and functions, and most often also Wi-Fi, Bluetooth, etc. functions and circuits. Laptop computers constitute another example of NFC devices while having Wi-Fi, Bluetooth, etc. circuits and functions.
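A minimal sketch of the low power detection loop described above is the following; the emit_burst and field_present callbacks and the timing values are hypothetical assumptions introduced for illustration.

import time

def low_power_loop(emit_burst, field_present, burst_s=0.001, period_s=0.5):
    """Alternate short emission bursts and field detection phases."""
    while True:
        if emit_burst(burst_s):   # burst far shorter than a normal polling request
            return "card detected"
        if field_present():       # field emitted by a device in reader mode
            return "reader detected"
        time.sleep(period_s)      # remain in low power mode between phases

# low_power_loop(my_emit_burst, my_field_probe)  # hypothetical callbacks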
InFIG.1, it is presumed that the device1emits, aside from the electromagnetic field EMF1suitable for establishing a communication with the device2, another radiofrequency electromagnetic field (EMF2) configured to convey signals intended for an ultra-wideband (UWB) receiver3. In some cases, the field emission by an NFC device is then problematic because it disrupts the ultra-wideband communication. Such disruptions can also affect other functions implemented by the NFC device, in particular a refresh process for a screen of the NFC device. The embodiments described below seek to resolve these problems of disruptions due to the field emission. FIG.2shows, very schematically and in block diagram form, an embodiment of a near field communication device, for example the device1ofFIG.1. According to this embodiment, the device1, or its circuit12, includes a first circuit31(NFC), for example a near field communication circuit, linked, preferably connected, to a first antenna32(NFC ANTENNA), for example a near field communication antenna. A first signal, denoted NFC TX, makes it possible to control a field emission by the first near field communication antenna32. The signal NFC TX is transmitted from the first circuit31to the first antenna32, and causes the emission of the electromagnetic field EMF1by the first antenna32. The device1, or its circuit12, further includes a second circuit33(HOST). This second circuit33is for example a microcontroller, a microprocessor or an ultra-wideband communication circuit of the device1. The second circuit33here is linked, preferably connected, to a second radiofrequency communication antenna34(RF ANTENNA). A second signal, denoted RADIO, makes it possible to control a transmission of a radiofrequency signal by using the second antenna34. The signal RADIO is transmitted from the second circuit33to the second antenna34, and causes the emission of the electromagnetic field EMF2by the second antenna34. The second circuit33is further linked, preferably connected, to a screen35(SCREEN), for example a primary display screen of the device1. It is assumed that the second circuit33manages a refresh of the screen35. In other words, it is assumed that the second circuit33causes, periodically, an update of display pixels of the screen35. According to this embodiment, the second circuit33is linked, preferably connected, to the first NFC circuit31by a physical connection36. The connection36includes a first conductor360and a second conductor362linking, preferably connecting, terminals of the first circuit31to terminals of the second circuit33. More specifically, as illustrated inFIG.2, the first conductor360links, preferably connects, a first output terminal330of the second circuit33to a first input terminal310of the first circuit31. The second conductor362links, preferably connects, a second output terminal312of the first circuit31to a second input terminal332of the second circuit33. The terminals310,312,330and332are for example general purpose input/output (GPIO) terminals of the circuits31and33. If applicable, the terminals312and330are configured as outputs and the terminals310and332are configured as inputs. The first conductor360is dedicated to a transmission, from the circuit33to the circuit31, of a third signal, denoted COEX_IN. The second conductor362is dedicated to a transmission, from the circuit31to the circuit33, of a fourth signal, denoted COEX_OUT.
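The two-wire link ofFIG.2can be modelled very simply, anticipating the priority semantics detailed in the timing diagrams below; the CoexLink class and its method names are assumptions introduced for illustration.

class CoexLink:
    def __init__(self):
        self.coex_in = 0   # conductor360: circuit33 -> circuit31
        self.coex_out = 0  # conductor362: circuit31 -> circuit33

    def nfc_may_emit(self):
        """Near field emission is allowed only while COEX_IN is low."""
        return self.coex_in == 0

link = CoexLink()
link.coex_out = 1           # the first circuit31 requests a near field emission
link.coex_in = 1            # the second circuit33 gives priority to its radio
print(link.nfc_may_emit())  # False: the signal NFC TX stays idle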
According to this embodiment, the signals COEX_IN and COEX_OUT are binary signals respectively switched by the second circuit33and by the first circuit31. The signals COEX_IN and COEX_OUT respectively make it possible to give priority either to operations carried out by the circuit31, or to operations carried out by the second circuit33. In particular, the signals COEX_IN and COEX_OUT, exchanged between the circuits31and33, make it possible to ensure that the emission of the electromagnetic field EMF1by the first circuit31, on the one hand, and the emission of the electromagnetic field EMF2or the refresh of the screen35by the second circuit33, on the other hand, are not done simultaneously. The device1can also include one or several other electronic elements or circuits. These electronic elements or circuits, the operation of which will not be described in detail in the description below, are symbolized inFIG.2by functional blocks37(FCT). FIGS.3A-3Cshow timing diagrams of embodiments of methods for controlling the device described in relation withFIG.2. These timing diagrams in particular reflect examples of the evolution, as a function of time (t), of the signals RADIO, COEX_IN, COEX_OUT and NFC TX. In view A, the timing diagram illustrates an embodiment in which the priority is given to the emission of the field EMF2by the second circuit33. The second circuit33here serves as priority circuit. According to this embodiment, the signal COEX_IN serves as an inhibiting signal for the first circuit31. In particular, the signal COEX_IN here is configured to prevent a near field emission by the first circuit31. The signal COEX_OUT then serves as a state signal of the first circuit31. In particular, the signal COEX_OUT here is configured to transmit, to the second circuit33, a near field emission request by the first circuit31. According to the embodiment of view A, it is considered that: a low state of the signal COEX_IN indicates that the first circuit31is authorized to emit in near field; a high state of the signal COEX_IN indicates that the first circuit31is prohibited from emitting in near field; a low state of the signal COEX_OUT indicates that the first circuit31does not wish to emit or is not emitting in near field; and a high state of the signal COEX_OUT indicates that the first circuit31wishes to emit or is emitting in near field. At an instant t0A, the signals RADIO, COEX_IN, COEX_OUT and NFC TX are idle. In particular, the signals NFC TX and RADIO are in a state such that they do not lead to any field emission by the antennas32and34, respectively. The binary signals COEX_IN and COEX_OUT are both in a low state or level. According to this embodiment, this means that the first circuit31is allowed to emit in near field, but does not wish to emit. At an instant t1A, after instant t0A, it is assumed that emission commands simultaneously reach the first circuit31and the second circuit33. At instant t1A, the first circuit31therefore switches its signal COEX_OUT from the low state to a high state or level. This switching of the signal COEX_OUT aims to signify, to the second circuit33, that the first circuit31wishes to begin a near field emission. However, in the embodiment of view A, the emission of the second circuit33takes priority relative to the emission of the first circuit31. The second circuit33therefore prevents the first circuit31from emitting in near field. To that end, the second circuit33switches, at instant t1A, the signal COEX_IN from the low state to a high state or level.
The second circuit33begins sending the signal RADIO. During this time, the signal NFC TX is kept idle. At an instant t2A, after instant t1A, the second circuit33completes transmission. The signal RADIO is brought back to the idle state. The signal COEX_IN is then switched to the low state. This indicates that the first circuit31is authorized to emit in near field. The signal COEX_OUT is still in the high state at instant t2A, which therefore indicates that the first circuit31is prepared to emit. The emission of the signal NFC TX can then begin. It is presumed that at an instant t3A, after instant t2A, the second circuit33wishes to emit again. The second circuit33then switches the signal COEX_IN to the high state. This interrupts the near field emission by the first circuit31. The signal NFC TX is then placed in the idle state while the second circuit33emits. At an instant t4A, after instant t3A, the emission of the second circuit33ceases. The signal COEX_IN is switched to the low state. The signal COEX_OUT is then still in the high state. The emission of the first circuit31can therefore resume. At an instant t5A, after instant t4A, the emission of the first circuit31finishes. The signal COEX_OUT is then switched to the low state by the first circuit31. This means that the first circuit31no longer wishes to emit. In a variant, only the inhibiting signal COEX_IN is transmitted from the second circuit33to the first circuit31. This then makes it possible to use only the first conductor360of the connection36(FIG.2). The number of conductors of the connection36linking, preferably connecting, the first circuit31to the second circuit33is thus reduced. One also reduces the number of terminals of the circuits31and33dedicated to managing emission priorities between these two circuits. In view B, the timing diagram illustrates another embodiment in which the priority is given to the emission of the field EMF1, that is to say, the near field emission by the first circuit31. Contrary to the embodiment described in relation with view A, here it is the first circuit31that acts as the priority circuit. According to this embodiment, the signal COEX_OUT serves as an inhibiting signal for the second circuit33. In particular, the signal COEX_OUT here is configured to prevent a radiofrequency emission by the second circuit33. The transmission of the signal COEX_IN here can be omitted. According to the embodiment of view B, it is considered that the low state of the signal COEX_OUT indicates that the second circuit33is authorized to emit a radiofrequency signal. The high state of the signal COEX_OUT indicates that the second circuit33is prohibited from emitting a radiofrequency signal. At an instant t0B, the signals RADIO, COEX_IN, COEX_OUT and NFC TX are idle. In particular, the signals NFC TX and RADIO are in a state such that they do not lead to any field emission by the antennas32and34, respectively. The binary signal COEX_OUT is in a low state. According to this embodiment, this means that the second circuit33is allowed to emit. At an instant t1B, after instant t0B, it is assumed that an emission command reaches the second circuit33. At instant t1B, the signal COEX_OUT for inhibiting the second circuit33is in the low state. This means that the emission of the second circuit33is allowed. The second circuit33then begins the emission of the signal RADIO. During this time, the signal NFC TX is kept idle.
It is presumed that at an instant t2B, after instant t1B, the first circuit31wishes to communicate by near field. The first circuit31then switches the signal COEX_OUT to the high state. This interrupts the emission of the second circuit33. The signal RADIO is then placed in the idle state. At an instant t3B, after instant t2B, the first circuit31begins a near field emission. At an instant t4B, after instant t3B, the emission of the first circuit31ceases. The signal COEX_OUT is then switched to the low state. In a variant, the signal COEX_IN is also exchanged between the first circuit31and the second circuit33. This then for example allows the first circuit31to determine the state of the second circuit33. In view C, the timing diagram also illustrates another embodiment in which the emission of a circuit, among the circuits31and33, is only possible after the other circuit has ceased to emit. In other words, priority is arbitrarily given to the circuit whose emission has already begun and has not yet finished. Thus, according to the embodiment of view C, the placement in the high state of the signal COEX_IN during the emission of the second circuit33makes it possible to inhibit the emission of the first circuit31, and the placement in the high state of the signal COEX_OUT during the emission of the second circuit33makes it possible to indicate to the second circuit33that the first circuit31wishes to emit. Similarly, still according to the embodiment of view C, the placement in the high state of the signal COEX_OUT during the emission of the first circuit31makes it possible to inhibit the emission of the second circuit33, and the placement in the high state of the signal COEX_IN during the emission of the first circuit31makes it possible to indicate to the first circuit31that the second circuit33wishes to emit. At an instant t0C, the signals RADIO, COEX_IN, COEX_OUT and NFC TX are idle. In particular, the signals NFC TX and RADIO are in a state such that they do not lead to any field emission by the antennas32and34, respectively. The binary signals COEX_IN and COEX_OUT are both in the low state. This indicates that neither the first circuit31nor the second circuit33wishes to emit. At an instant t1C, after instant t0C, the second circuit33begins to emit. At an instant t2C, after instant t1C, it is assumed that the first circuit31wishes to begin a near field emission. The signal COEX_OUT is then switched to the high state. This indicates that the first circuit31wishes to emit. At an instant t3C, after instant t2C, the second circuit33switches the signal COEX_IN to the high state. This prohibits the emission of the first circuit31. This prohibition is maintained as long as the emission of the second circuit33is not completed. At an instant t4C, after instant t3C, the second circuit33ceases to emit. The second circuit33then switches the signal COEX_IN to the low state. This authorizes the emission of the first circuit31. At an instant t5C, after instant t4C, the signal COEX_OUT is still in the high state while the signal COEX_IN is in the low state. The first circuit31then begins to emit. At an instant t6C, after instant t5C, the near field emission by the first circuit31finishes. The signal COEX_OUT is then switched to the low state. Assuming that the second circuit33wished to emit between instants t5C and t6C, the second circuit33would have switched the signal COEX_IN to the high state.
The signal COEX_OUT being in the high state between instants t5C and t6C, the emission of the second circuit33would nevertheless have been prohibited until the emission of the first circuit31finishes, that is to say, until instant t6C where the signal COEX_OUT is switched to the low state. The embodiments described above relative toFIGS.3A-3Cillustrate examples in which the signals COEX_IN and COEX_OUT make it possible to manage or arbitrate priorities between the near field emission by the first circuit31and the radiofrequency emission by the second circuit33. These embodiments are nevertheless transposable to other situations in which one seeks to block the near field emission by the first circuit31while operations, for example refreshes of the screen35, are done by the second circuit33. This transposition is within the capabilities of those skilled in the art from the above description. It is also possible to provide an embodiment in which one or more operations performed by the first circuit31block or inhibit one or several operations performed by the second circuit33. Conversely, it is possible to provide another embodiment in which one or more operations performed by the second circuit33block or inhibit one or several operations performed by the first circuit31. These combinations are within the capabilities of those skilled in the art from the present description. What was described in relation with views A, B and C in reference to high and low states is transposable to inverse states (low and high). FIG.4schematically shows an embodiment of a control circuit of the device described in relation withFIG.2. According to this embodiment, the second conductor362making it possible to transmit the signal COEX_OUT is linked, preferably connected, to an output of a radiofrequency communication circuit40(DIGITAL RF) of the NFC module. The first conductor360, which transmits the signal COEX_IN from the second circuit33, is linked, preferably connected, to an input of a first logic gate41. The logic gate41is, as illustrated inFIG.4, an inverter. The logic gate41thus makes it possible to obtain, at the output of the gate41, a binary signal whose state or level is the inverse of that of the binary signal COEX_IN at the input of the gate41. The output of the logic gate41is linked, preferably connected, to an input of a second logic gate42. The logic gate42is, as illustrated inFIG.4, an AND gate. The output of the circuit40is linked, preferably connected, to another input of the second logic gate42. One thus obtains, at the output of the logic gate42, a binary signal in the high state when the signal COEX_IN is in the low state and the signal COEX_OUT is in the high state, and a binary signal in the low state otherwise. The output of the logic gate42is linked, preferably connected, to an activation (enable) input (EN) of an amplifier43(RF DRIVER) of the radiofrequency signal NFC TX suitable for an antenna, for example the antenna32(NFC ANTENNA). According to this embodiment, the signal COEX_IN is configured to prevent or block an NFC radiofrequency emission while the signal COEX_OUT is configured to communicate an emission request or command to the circuit33and to activate, under the control of the signal COEX_IN, the NFC emission. One thus reproduces, in cabled logic using gates41and42, an operation similar to that of the embodiment of the method previously described in relation withFIG.3A.
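As an illustration only, the combinational behavior of the gates41and42reduces to a one-line Boolean expression. The following sketch models it in software; the function name and truth-table driver are ours and are not part of the embodiment.

```python
# Illustrative model of the FIG. 4 wired logic: the inverter (gate 41)
# and the AND gate (gate 42) derive the enable signal EN of the RF
# driver (amplifier 43) from COEX_IN and COEX_OUT.
def rf_driver_enable(coex_in: bool, coex_out: bool) -> bool:
    """EN is high only when emission is requested (COEX_OUT high) and
    not inhibited by the second circuit (COEX_IN low)."""
    return (not coex_in) and coex_out  # gate 41 (inverter) feeding gate 42 (AND)

# Truth table matching the behavior described for FIG. 4
for coex_in in (False, True):
    for coex_out in (False, True):
        print(int(coex_in), int(coex_out), int(rf_driver_enable(coex_in, coex_out)))
```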
In a variant, the signals COEX_IN and COEX_OUT are processed by software, for example by the circuit40. The logic gates41and42are then for example programmed in the software (firmware) of the circuit40. If applicable, the signal COEX_IN is transmitted directly to the circuit40and the input EN of the circuit43is controlled by an output of the circuit40. FIG.5schematically shows another embodiment of a control circuit of the device described in relation withFIG.2. According to this embodiment, the first conductor360, which transmits the signal COEX_IN from the second circuit33, is linked, preferably connected, to an input of a first finite state machine44(LPCD FSM). The finite state machine44is for example part of the radiofrequency communication circuit40of the NFC module of the first circuit31. The signal COEX_IN is also transmitted to a processor or microcontroller of the first circuit31. The switching to the high state of the signal COEX_IN causes an interrupt (block45, CPU INTERRUPT) of the operations performed by this processor. An output of the finite state machine44is linked, preferably connected, to an input of a circuit46(DIG TX), for example an NFC modem. The circuit46is for example part of the radiofrequency communication circuit40(DIGITAL RF) of the NFC module, as illustrated inFIG.5. The circuit46is for example a circuit dedicated to the field emission (NFC TX). The circuit46can in particular control an emission power, an output impedance, etc. An output of the circuit46is linked, preferably connected, to the conductor362making it possible to transmit the signal COEX_OUT to the second circuit33. The output of the circuit46is linked, preferably connected, to an activation (enable) input (EN) of an amplifier43(RF DRIVER) of the radiofrequency signal NFC TX suitable for an antenna, for example the antenna32(NFC ANTENNA). The circuit46can correspond to a digital part of the amplifier43. Similarly to the embodiment described in relation withFIG.4, the signal COEX_IN is configured to prevent or block an NFC radiofrequency emission while the signal COEX_OUT is configured to communicate an emission request or command to the circuit33and to activate, under the control of the signal COEX_IN, the NFC emission. One thus reproduces, in cabled logic using the finite state machine44of the first circuit31, an operation similar to that of the embodiment of the method previously described in relation withFIG.3A. FIG.6is a sequence diagram of steps of an embodiment of a method for controlling the circuit described in relation withFIG.5. According to this embodiment, the first NFC emission circuit31is initially in an idle state (block60, IDLE). When the first circuit31wishes to emit in near field, the finite state machine44(FIG.5) activates (arrow61, LPCD_EN=1) the amplifier43by its input EN. This thus makes it possible to authorize the near field emission of the signal NFC TX by the antenna32. The first circuit31is then in a near field emission state (block62, Field emission). Once the near field emission is complete, the first circuit31enters (arrow63, TX_END=1) a waiting state (block64, WAIT Time). A time delay is then launched, suitable for temporarily keeping the first circuit31in the waiting state64during a duration denoted CNT. The first circuit31is meant to stay in the waiting state64as long as the duration CNT has not elapsed (arrow54, WAIT_CNT=1). The first circuit31then returns to the emission state62once the duration CNT has elapsed (arrow66, WAIT_CNT=0).
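For illustration, the state sequence ofFIG.6can be modeled as a simple software state machine. This is a hypothetical sketch under our own naming, not the cabled implementation; it also includes the return path to the idle state (arrow67) that the next paragraph details.

```python
from enum import Enum, auto

class LpcdState(Enum):
    IDLE = auto()            # block 60
    FIELD_EMISSION = auto()  # block 62
    WAIT = auto()            # block 64

def lpcd_step(state, lpcd_en, coex_in, cnt_elapsed):
    """One transition of the FIG. 6 sequence (hypothetical model).
    lpcd_en, coex_in and cnt_elapsed are the current boolean levels of
    LPCD_EN, COEX_IN and the expiry of the CNT time delay, respectively."""
    if state is LpcdState.IDLE:
        # arrow 61: LPCD_EN = 1 authorizes a burst
        return LpcdState.FIELD_EMISSION if lpcd_en else state
    if state is LpcdState.FIELD_EMISSION:
        # arrow 63: TX_END = 1 once the burst is complete
        return LpcdState.WAIT
    # WAIT state: the abort path (arrow 67) is taken if LPCD_EN = 0 or
    # COEX_IN = 1; otherwise, re-emit once CNT has elapsed (arrow 66)
    if not lpcd_en or coex_in:
        return LpcdState.IDLE
    return LpcdState.FIELD_EMISSION if cnt_elapsed else state

# Example: a burst completes, then COEX_IN goes high during the wait
s = lpcd_step(LpcdState.IDLE, lpcd_en=True, coex_in=False, cnt_elapsed=False)
s = lpcd_step(s, lpcd_en=True, coex_in=False, cnt_elapsed=False)  # -> WAIT
s = lpcd_step(s, lpcd_en=True, coex_in=True, cnt_elapsed=False)   # -> IDLE
print(s)  # LpcdState.IDLE
```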
The first circuit31can further leave the waiting state64and return (arrow67, LPCD_EN=0 OR COEX_IN=1) to the idle state60before the duration CNT has elapsed. This for example occurs if the finite state machine44deactivates the input EN of the amplifier43or if the signal COEX_IN is switched to the high state by the second circuit33. According to this embodiment, the second circuit33takes priority relative to the first circuit31. The first circuit31here is authorized to emit in near field under control of the signal COEX_IN transmitted by the second circuit33. The near field emission62finishes, however, only at the initiative of the first circuit31, the signal COEX_IN here only being taken into account in the waiting state64. In this case, the finite state machine44activates the amplifier43with the additional condition that the signal COEX_IN is in the low state. The first circuit31here is configured so that the emission of the field62is done by periodic bursts, intercut by waiting phases64. This operating mode of the circuit31is described as low power card detection (LPCD). According to one preferred embodiment, in this low consumption mode, each near field emission62of the first circuit31is done by bursts with a duration of about 30 μs. This emission is repeated about three to four times per second, the duration CNT, corresponding to an interval between bursts, therefore being between about 250 ms and about 330 ms. FIGS.7A and7Bshow timing diagrams of the control method described in relation withFIG.6. These timing diagrams in particular reflect evolution examples, as a function of time (t), of the signal COEX_IN and the state of the first circuit31, denoted LPCD. In view A, the timing diagram illustrates an exemplary evolution according to which the switching of the signal COEX_IN to the high state takes place during the near field emission of the first circuit31. At an instant t0, it is assumed that the first circuit31enters the waiting state (WAIT Time), while the signal COEX_IN is in the low state. At an instant t1, after instant t0and separated from instant t0by the duration CNT, the signal COEX_IN is still in the low state. The first circuit31therefore enters the near field emission state (Field emission). At an instant t2, after instant t1, the near field emission burst by the first circuit31finishes. The signal COEX_IN still being in the low state, the first circuit31is returned to the waiting state (WAIT Time). At an instant t3, after instant t2and separated from instant t2by the duration CNT, the signal COEX_IN is still in the low state. The first circuit31therefore again enters the near field emission state (Field emission). At an instant t4, after instant t3, the signal COEX_IN is switched to the high state by the second circuit33during the near field emission by the first circuit31. According to this embodiment, the switching of the signal COEX_IN does not interrupt the near field emission. At an instant t5, after instant t4, the near field emission by the first circuit31finishes. The signal COEX_IN being in the high state, the first circuit31is prohibited from emitting again. The first circuit31is then placed in the idle state (IDLE). In view B, the timing diagram illustrates an exemplary evolution according to which the switching of the signal COEX_IN to the high state takes place outside the near field emission of the first circuit31. Between instants t0and t2, the timing diagram of view B is similar to the timing diagram of view A.
At an instant t21, after instant t2and separated from instant t2by a duration shorter than the duration CNT, the signal COEX_IN is switched to the high state by the second circuit33. In view B, the first circuit31is thus in the waiting state (WAIT Time) at the instant of the switching of the signal COEX_IN. The first circuit31is then placed in the idle state (IDLE) at instant t21without waiting for the expiration of the duration CNT provided at instant t3, after instant t21. One advantage of the embodiment described in relation withFIGS.6and7lies in the fact that the emission of the first circuit31is not interrupted abruptly by the second circuit33. One thus avoids disturbing the operation of the first circuit31, in particular when this first circuit31is configured in low power card detection mode. In the case of a near field emission in a so-called normal mode, in other words, a mode in which the near field emission is done continuously, one generally needs to be able to interrupt the emission in a maximum period of about 300 μs to 400 μs following the demand of a priority circuit. The embodiment described in relation withFIGS.6and7is advantageously compatible with this need, even allowing a burst to finish. Indeed, the near field emission of the first circuit31finishes after a duration on the order of 30 μs. Furthermore, a hardware embodiment (with cabled logic) is preferred for speed reasons relative to a software implementation. FIG.8shows, very schematically and in block diagram form, an example implementation of the circuits ofFIG.2in a mobile telephone8. In the example ofFIG.8, the mobile telephone8includes elements similar to those of the device1ofFIG.2(inFIG.8, these elements are shown in dotted lines). The mobile telephone8in particular includes the first NFC circuit31connected to the antenna32and the second circuit33connected to the antenna34. The circuits31and33are linked, preferably connected, by the connection36. This allows the telephone8to manage priorities between operations performed by the first circuit31and other operations performed by the second circuit33. For example, the near field emission by the antenna32can be inhibited upon each refresh of the primary screen35of the telephone8. Various embodiments and variants have been described. Those skilled in the art will understand that certain features of these embodiments can be combined and other variants will readily occur to those skilled in the art. In particular, what is described more particularly in relation with one exemplary application to priority management between a near field emission and a radiofrequency emission and/or a screen refresh applies more generally to an inhibition of one or several functionalities of one circuit by another circuit owing to a dedicated connection. Finally, the practical implementation of the embodiments and variants described herein is within the capabilities of those skilled in the art based on the functional description provided hereinabove. In particular, the implantation of the connection36in the mobile telephone8is within the capabilities of one skilled in the art.
While the following detailed description will be given with respect to certain illustrative embodiments, it should be understood that the drawings are not necessarily to scale and the disclosed embodiments are sometimes illustrated diagrammatically and in partial views. In addition, in certain instances, details which are not necessary for an understanding of the disclosed subject matter or which render other details too difficult to perceive may have been omitted. It should therefore be understood that this disclosure is not limited to the particular embodiments disclosed and illustrated herein, but rather extends to a fair reading of the entire disclosure and claims, as well as any equivalents thereto. Additional, different, or fewer components and methods may be included in the systems and methods.

DETAILED DESCRIPTION

In the following description, numerous specific details are set forth by way of examples in order to provide a thorough understanding of the relevant teachings. However, it should be apparent to those skilled in the art that the present teachings may be practiced without such details. In other instances, well known methods, procedures, components, and/or circuitry have been described at a relatively high level, without detail, in order to avoid unnecessarily obscuring aspects of the present teachings. Referring now to the drawings and with specific reference toFIG.1, a wireless electrical connection system10is illustrated. The wireless electrical connection system10provides for the wireless transmission of electrical signals, such as, but not limited to, electrical energy, electrical power, electromagnetic energy, and electronically transmittable data ("electronic data"). Specifically, the wireless electrical connection system10provides for the wireless transmission of electrical signals via near field magnetic coupling. As shown in the embodiment ofFIG.1, the wireless electrical connection system10includes a wireless transmission system20and a wireless receiver system30. The wireless receiver system30is configured to receive electrical energy, electrical power, electromagnetic energy, and/or electronic data from, at least, the wireless transmission system20. As illustrated, the wireless transmission system20and wireless receiver system30may be configured to transmit electrical energy, electrical power, electromagnetic energy, and/or electronically transmittable data across, at least, a separation distance or gap17. A separation distance or gap, such as the gap17, in the context of a wireless connection system, such as the system10, does not include a physical connection, such as a wired connection. There may be intermediary objects located in a separation distance or gap, such as the gap17, including, but not limited to, air, a counter top, a casing for an electronic device, a plastic filament, an insulator, a mechanical wall, among other things; however, there is no physical, electrical connection at such a separation distance or gap. Thus, the combination of the wireless transmission system20and the wireless receiver system30creates an electrical connection without the need for a physical connection. "Electrical connection," as defined herein, refers to any facilitation of a transfer of an electrical current, voltage, and/or power from a first location, device, component, and/or source to a second location, device, component, and/or destination.
An “electrical connection” may be a physical connection, such as, but not limited to, a wire, a trace, a via, among other physical electrical connections, connecting a first location, device, component, and/or source to a second location, device, component, and/or destination. Additionally or alternatively, an “electrical connection” may be a wireless electrical connection, such as, but not limited to, magnetic, electromagnetic, resonant, and/or inductive field, among other wireless electrical connections, connecting a first location, device, component, and/or source to a second location, device, component, and/or destination. Alternatively, the gap17may be referenced as a “Z-Distance,” because, if one considers an antenna21,31to be disposed substantially along a common X-Y plane, then the distance separating the antennas21,31is the gap in a “Z” or “depth” direction. However, flexible and/or non-planar coils are certainly contemplated by embodiments of the present disclosure and, thus, it is contemplated that the gap17may not be uniform, across an envelope of connection distances between the antennas21,31. It is contemplated that various tunings, configurations, and/or other parameters may alter the possible maximum distance of the gap17, such that electrical transmission from the wireless transmission system20to the wireless receiver system30remains possible. The wireless power system10operates when the wireless transmission system20and the wireless receiver system30are coupled. As defined herein, the terms “couples,” “coupled,” and “coupling” generally refers to magnetic field coupling, which occurs when the energy of a transmitter and/or any components thereof and the energy of a receiver and/or any components thereof are coupled to each other through a magnetic field. Coupling of the wireless transmission system20and the wireless receiver system30, in the system10, may be represented by a resonant coupling coefficient of the system10and, for the purposes of wireless power transfer, the coupling coefficient for the system10may be in the range of about 0.01 and 0.9. As illustrated, the wireless transmission system20may be associated with a host device11, which may receive power from an input power source12. The host device11may be any electrically operated device, circuit board, electronic assembly, dedicated charging device, or any other contemplated electronic device. Example host devices11, with which the wireless transmission system20may be associated therewith, include, but are not limited to including, a device that includes an integrated circuit, cases for wearable electronic devices, receptacles for electronic devices, a portable computing device, clothing configured with electronics, storage medium for electronic devices, charging apparatus for one or multiple electronic devices, dedicated electrical charging devices, activity or sport related equipment, goods, and/or data collection devices, among other contemplated electronic devices. As illustrated, one or both of the wireless transmission system20and the host device11are operatively associated with an input power source12. The input power source12may be or may include one or more electrical storage devices, such as an electrochemical cell, a battery pack, and/or a capacitor, among other storage devices. 
Additionally or alternatively, the input power source12may be any electrical input source (e.g., any alternating current (AC) or direct current (DC) delivery port) and may include connection apparatus from said electrical input source to the wireless transmission system20(e.g., transformers, regulators, conductive conduits, traces, wires, or equipment, goods, computer, camera, mobile phone, and/or other electrical device connection ports and/or adaptors, such as but not limited to USB or mp3 ports and/or adaptors, among other contemplated electrical components). Electrical energy received by the wireless transmission system20is then used for at least two purposes: providing electrical power to internal components of the wireless transmission system20and providing electrical power to the transmitter antenna21. The transmitter antenna21is configured to wirelessly transmit the electrical signals conditioned and modified for wireless transmission by the wireless transmission system20via near-field magnetic coupling (NFMC). Near-field magnetic coupling enables the transfer of electrical energy, electrical power, electromagnetic energy, and/or electronically transmissible data wirelessly through magnetic induction between the transmitter antenna21and a receiving antenna31of, or associated with, the wireless receiver system30. Near-field magnetic coupling may enable "inductive coupling," which, as defined herein, is a wireless power transmission technique that utilizes an alternating electromagnetic field to transfer electrical energy between two antennas. Such inductive coupling is the near field wireless transmission of electrical energy between two magnetically coupled coils that are tuned to resonate at a similar frequency. Further, such near-field magnetic coupling may provide connection via "mutual inductance," which, as defined herein, is the production of an electromotive force in a circuit by a change in current in a second circuit magnetically coupled to the first. In one or more embodiments, the inductor coils of either the transmitter antenna21or the receiver antenna31are strategically positioned to facilitate reception and/or transmission of wirelessly transferred electrical energy, power, electromagnetic energy and/or data through near field magnetic induction. Antenna operating frequencies may comprise all operating frequency ranges, examples of which may include, but are not limited to, about 110 kHz to about 205 kHz (Qi interface standard), 100 kHz to about 350 kHz (PMA interface standard), 6.78 MHz (Rezence interface standard and/or any other proprietary interface standard operating at a frequency of 6.78 MHz), 13.56 MHz (Near Field Communications (NFC) standard, defined by ISO/IEC standard 18092), 27 MHz and/or, alternatively, at an operating frequency of another proprietary operating mode. The operating frequencies of the antennas21,31may be operating frequencies designated by the International Telecommunications Union (ITU) in the Industrial, Scientific, and Medical (ISM) frequency bands, which include, but are not limited to including, 6.78 MHz, 13.56 MHz, and 27 MHz, which are designated for use in wireless power transfer. In addition, the transmitting antenna and/or the receiving antenna of the present disclosure may be designed to transmit or receive, respectively, over a wide range of operating frequencies on the order of about 1 kHz to about 1 GHz or greater, in addition to the Qi, PMA, Rezence, and NFC interface standards.
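Purely as an illustration, the example operating frequencies listed above can be collected into a small lookup. The names and structure below are ours and not part of the disclosure.

```python
# Illustrative lookup of the example operating frequencies listed above
# (names and structure are ours, not part of the disclosure).
OPERATING_BANDS_HZ = {
    "Qi": (110e3, 205e3),                       # about 110 kHz to about 205 kHz
    "PMA": (100e3, 350e3),                      # 100 kHz to about 350 kHz
    "Rezence": (6.78e6, 6.78e6),                # 6.78 MHz
    "NFC (ISO/IEC 18092)": (13.56e6, 13.56e6),  # 13.56 MHz
    "ISM 27 MHz": (27e6, 27e6),                 # 27 MHz
}

def standards_covering(frequency_hz):
    """Return the listed interface standards whose band covers a frequency."""
    return [name for name, (low, high) in OPERATING_BANDS_HZ.items()
            if low <= frequency_hz <= high]

print(standards_covering(150e3))  # ['Qi', 'PMA']
```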
The transmitting antenna and the receiving antenna of the present disclosure may be configured to transmit and/or receive electrical power having a magnitude that ranges from about 10 mW to about 500 W. In one or more embodiments the inductor coil of the transmitting antenna21is configured to resonate at a transmitting antenna resonant frequency or within a transmitting antenna resonant frequency band. As known to those skilled in the art, a "resonant frequency" or "resonant frequency band" refers to a frequency or frequencies wherein the amplitude response of the antenna is at a relative maximum, or, additionally or alternatively, the frequency or frequency band where the capacitive reactance has a magnitude substantially similar to the magnitude of the inductive reactance. In one or more embodiments the transmitting antenna resonant frequency is at least 1 kHz. In one or more embodiments the transmitting antenna resonant frequency band extends from about 1 kHz to about 100 MHz. In one or more embodiments the inductor coil of the receiving antenna31is configured to resonate at a receiving antenna resonant frequency or within a receiving antenna resonant frequency band. In one or more embodiments the receiving antenna resonant frequency is at least 1 kHz. In one or more embodiments the receiving antenna resonant frequency band extends from about 1 kHz to about 100 MHz. The wireless receiver system30may be associated with at least one electronic device14, wherein the electronic device14may be any device that requires electrical power for any function and/or for power storage (e.g., via a battery and/or capacitor). Additionally or alternatively, the electronic device14may be any device capable of receipt of electronically transmissible data. For example, the device may be, but is not limited to being, a handheld computing device, a mobile device, a portable appliance, an integrated circuit, an identifiable tag, a kitchen utility device, an electronic tool, an electric vehicle, a game console, a robotic device, a wearable electronic device (e.g., an electronic watch, electronically modified glasses, altered-reality (AR) glasses, virtual reality (VR) glasses, among other things), a portable scanning device, a portable identifying device, a sporting good, an embedded sensor, an Internet of Things (IoT) sensor, IoT enabled clothing, IoT enabled recreational equipment, industrial equipment, medical equipment, a medical device, a tablet computing device, a portable control device, a remote controller for an electronic device, a gaming controller, among other things. For the purposes of illustrating the features and characteristics of the disclosed embodiments, arrow-ended lines are utilized to illustrate transferrable and/or communicative signals and various patterns are used to illustrate electrical signals that are intended for power transmission and electrical signals that are intended for the transmission of data and/or control instructions. Solid lines indicate signal transmission of electrical energy over a physical and/or wireless electrical connection, in the form of power signals that are, ultimately, utilized in wireless power transmission from the wireless transmission system20to the wireless receiver system30. Further, dotted lines are utilized to illustrate electronically transmittable data signals, which ultimately may be wirelessly transmitted from the wireless transmission system20to the wireless receiver system30.
While the systems and methods herein illustrate the transmission of wirelessly transmitted energy, wirelessly transmitted power, wirelessly transmitted electromagnetic energy, and electronically transmittable data, it is certainly contemplated that the systems, methods, and apparatus disclosed herein may be utilized in the transmission of only one signal, various combinations of two signals, or more than two signals and, further, it is contemplated that the systems, method, and apparatus disclosed herein may be utilized for wireless transmission of other electrical signals in addition to or uniquely in combination with one or more of the above mentioned signals. In some examples, the signal paths of solid or dotted lines may represent a functional signal path, whereas, in practical application, the actual signal is routed through additional components en route to its indicated destination. For example, it may be indicated that a data signal routes from a communications apparatus to another communications apparatus; however, in practical application, the data signal may be routed through an amplifier, then through a transmission antenna, to a receiver antenna, where, on the receiver end, the data signal is decoded by a respective communications device of the receiver. Turning now toFIG.2, the wireless connection system10is illustrated as a block diagram including example sub-systems of both the wireless transmission system20and the wireless receiver system30. The wireless transmission system20may include, at least, a power conditioning system40, a transmission control system26, a transmission tuning system24, and the transmission antenna21. A first portion of the electrical energy input from the input power source12is configured to electrically power components of the wireless transmission system20such as, but not limited to, the transmission control system26. A second portion of the electrical energy input from the input power source12is conditioned and/or modified for wireless power transmission, to the wireless receiver system30, via the transmission antenna21. Accordingly, the second portion of the input energy is modified and/or conditioned by the power conditioning system40. While not illustrated, it is certainly contemplated that one or both of the first and second portions of the input electrical energy may be modified, conditioned, altered, and/or otherwise changed prior to receipt by the power conditioning system40and/or transmission control system26, by further contemplated subsystems (e.g., a voltage regulator, a current regulator, switching systems, fault systems, safety regulators, among other things). Referring now toFIG.3, with continued reference toFIGS.1and2, subcomponents and/or systems of the transmission control system26are illustrated. The transmission control system26may include a sensing system50, a transmission controller28, a communications system29, a driver48, and a memory27. The transmission controller28may be any electronic controller or computing system that includes, at least, a processor which performs operations, executes control algorithms, stores data, retrieves data, gathers data, controls and/or provides communication with other components and/or subsystems associated with the wireless transmission system20, and/or performs any other computing or controlling task desired. 
The transmission controller28may be a single controller or may include more than one controller disposed to control various functions and/or features of the wireless transmission system20. Functionality of the transmission controller28may be implemented in hardware and/or software and may rely on one or more data maps relating to the operation of the wireless transmission system20. To that end, the transmission controller28may be operatively associated with the memory27. The memory may include one or more of internal memory, external memory, and/or remote memory (e.g., a database and/or server operatively connected to the transmission controller28via a network, such as, but not limited to, the Internet). The internal memory and/or external memory may include, but are not limited to including, one or more of a read only memory (ROM), including programmable read-only memory (PROM), erasable programmable read-only memory (EPROM or sometimes but rarely labelled EROM), electrically erasable programmable read-only memory (EEPROM), random access memory (RAM), including dynamic RAM (DRAM), static RAM (SRAM), synchronous dynamic RAM (SDRAM), single data rate synchronous dynamic RAM (SDR SDRAM), double data rate synchronous dynamic RAM (DDR SDRAM, DDR2, DDR3, DDR4), and graphics double data rate synchronous dynamic RAM (GDDR SDRAM, GDDR2, GDDR3, GDDR4, GDDR5), a flash memory, a portable memory, and the like. Such memory media are examples of nontransitory machine readable and/or computer readable memory media. While particular elements of the transmission control system26are illustrated as independent components and/or circuits (e.g., the driver48, the memory27, the communications system29, the sensing system50, among other contemplated elements), such components may be integrated with the transmission controller28. In some examples, the transmission controller28may be an integrated circuit configured to include functional elements of one or both of the transmission controller28and the wireless transmission system20, generally. As illustrated, the transmission controller28is in operative association, for the purposes of data transmission, receipt, and/or communication, with, at least, the memory27, the communications system29, the power conditioning system40, the driver48, and the sensing system50. The driver48may be implemented to control, at least in part, the operation of the power conditioning system40. In some examples, the driver48may receive instructions from the transmission controller28to generate and/or output a generated pulse width modulation (PWM) signal to the power conditioning system40. In some such examples, the PWM signal may be configured to drive the power conditioning system40to output electrical power as an alternating current signal, having an operating frequency defined by the PWM signal. The sensing system50may include one or more sensors, wherein each sensor may be operatively associated with one or more components of the wireless transmission system20and configured to provide information and/or data.
The term "sensor" is used in its broadest interpretation to define one or more components operatively associated with the wireless transmission system20that operate to sense functions, conditions, electrical characteristics, operations, and/or operating characteristics of one or more of the wireless transmission system20, the wireless receiving system30, the input power source12, the host device11, the transmission antenna21, the receiver antenna31, along with any other components and/or subcomponents thereof. As illustrated in the embodiment ofFIG.4, the sensing system50may include, but is not limited to including, a thermal sensing system52, an object sensing system54, a receiver sensing system56, and/or any other sensor(s)58. Within these systems, there may exist even more specific optional additional or alternative sensing systems addressing particular sensing aspects required by an application, such as, but not limited to: a condition-based maintenance sensing system, a performance optimization sensing system, a state-of-charge sensing system, a temperature management sensing system, a component heating sensing system, an IoT sensing system, an energy and/or power management sensing system, an impact detection sensing system, an electrical status sensing system, a speed detection sensing system, a device health sensing system, among others. The object sensing system54may be a foreign object detection (FOD) system. Each of the thermal sensing system52, the object sensing system54, the receiver sensing system56and/or the other sensor(s)58, including the optional additional or alternative systems, is operatively and/or communicatively connected to the transmission controller28. The thermal sensing system52is configured to monitor ambient and/or component temperatures within the wireless transmission system20or other elements nearby the wireless transmission system20. The thermal sensing system52may be configured to detect a temperature within the wireless transmission system20and, if the detected temperature exceeds a threshold temperature, the transmission controller28prevents the wireless transmission system20from operating. Such a threshold temperature may be configured for safety considerations, operational considerations, efficiency considerations, and/or any combinations thereof. In a non-limiting example, if, via input from the thermal sensing system52, the transmission controller28determines that the temperature within the wireless transmission system20has increased from an acceptable operating temperature to an undesired operating temperature (e.g., the internal temperature increasing from about 20° Celsius (C) to about 50° C.), the transmission controller28prevents the operation of the wireless transmission system20and/or reduces levels of power output from the wireless transmission system20. In some non-limiting examples, the thermal sensing system52may include one or more of a thermocouple, a thermistor, a negative temperature coefficient (NTC) resistor, a resistance temperature detector (RTD), and/or any combinations thereof. As depicted inFIG.4, the transmission sensing system50may include the object sensing system54. The object sensing system54may be configured to detect presence of unwanted objects in contact with or proximate to the wireless transmission system20. In some examples, the object sensing system54is configured to detect the presence of an undesired object.
In some such examples, if the transmission controller28, via information provided by the object sensing system54, detects the presence of an undesired object, then the transmission controller28prevents or otherwise modifies operation of the wireless transmission system20. In some examples, the object sensing system54utilizes an impedance change detection scheme, in which the transmission controller28analyzes a change in electrical impedance observed by the transmission antenna21against a known, acceptable electrical impedance value or range of electrical impedance values. Additionally or alternatively, the object sensing system54may utilize a quality factor (Q) change detection scheme, in which the transmission controller28analyzes a change from a known quality factor value or range of quality factor values of the object being detected, such as the receiver antenna31. The "quality factor" or "Q" of an inductor can be defined as (frequency (Hz)×inductance (H))/resistance (ohms), where frequency is the operational frequency of the circuit, inductance is the inductance output of the inductor, and resistance is the combination of the radiative and reactive resistances that are internal to the inductor. "Quality factor," as defined herein, is generally accepted as an index (figure of merit) that measures the efficiency of an apparatus like an antenna, a circuit, or a resonator. In some examples, the object sensing system54may include one or more of an optical sensor, an electro-optical sensor, a Hall effect sensor, a proximity sensor, and/or any combinations thereof. The receiver sensing system56is any sensor, circuit, and/or combinations thereof configured to detect presence of any wireless receiving system that may be couplable with the wireless transmission system20. In some examples, if the presence of any such wireless receiving system is detected, wireless transmission of electrical energy, electrical power, electromagnetic energy, and/or data by the wireless transmission system20to said wireless receiving system is enabled. In some examples, if the presence of a wireless receiver system is not detected, wireless transmission of electrical energy, electrical power, electromagnetic energy, and/or data is prevented from occurring. Accordingly, the receiver sensing system56may include one or more sensors and/or may be operatively associated with one or more sensors that are configured to analyze electrical characteristics within an environment of or proximate to the wireless transmission system20and, based on the electrical characteristics, determine presence of a wireless receiver system30. Referring now toFIG.5, and with continued reference toFIGS.1-4, a block diagram illustrating a first embodiment of the power conditioning system40is illustrated. At the power conditioning system40, electrical power is received, generally, as a direct current (DC) power source, via the input power source12itself or an intervening power converter, converting an AC source to a DC source (not shown). A voltage regulator46receives the electrical power from the input power source12and is configured to provide electrical power for transmission by the antenna21and provide electrical power for powering components of the wireless transmission system20.
Accordingly, the voltage regulator46is configured to convert the received electrical power into at least two electrical power signals, each at a proper voltage for operation of the respective downstream components: a first electrical power signal to electrically power any components of the wireless transmission system20and a second portion conditioned and modified for wireless transmission to the wireless receiver system30. As illustrated inFIG.3, such a first portion is transmitted to, at least, the sensing system50, the transmission controller28, and the communications system29; however, the first portion is not limited to transmission to just these components and can be transmitted to any electrical components of the wireless transmission system20. The second portion of the electrical power is provided to an amplifier42of the power conditioning system40, which is configured to condition the electrical power for wireless transmission by the antenna21. The amplifier42may function as an inverter, which receives an input DC power signal from the voltage regulator46and generates an alternating current (AC) signal as output, based, at least in part, on PWM input from the transmission control system26. The amplifier42may be or include, for example, a power stage inverter, such as a dual field effect transistor power stage inverter. The use of the amplifier42within the power conditioning system40and, in turn, the wireless transmission system20enables wireless transmission of electrical signals having much greater amplitudes than if transmitted without such an amplifier. For example, the addition of the amplifier42may enable the wireless transmission system20to transmit electrical energy as an electrical power signal having electrical power from about 10 mW to about 500 W. In some examples, the amplifier42may be or may include one or more class-E power amplifiers. Class-E power amplifiers are efficiently tuned switching power amplifiers designed for use at high frequencies (e.g., frequencies from about 1 MHz to about 1 GHz). Generally, a class-E amplifier employs a single-pole switching element and a tuned reactive network between the switch and an output load (e.g., the antenna21). Class-E amplifiers may achieve high efficiency at high frequencies by only operating the switching element at points of zero current (e.g., on-to-off switching) or zero voltage (e.g., off-to-on switching). Such switching characteristics may minimize power lost in the switch, even when the switching time of the device is long compared to the frequency of operation. However, the amplifier42is certainly not limited to being a class-E power amplifier and may be or may include one or more of a class-D amplifier, a class-EF amplifier, an H inverter amplifier, among other amplifiers that could be included as part of the amplifier42. Returning now toFIG.2, the conditioned signal(s) from the power conditioning system40is then received by the transmission tuning system24, prior to transmission by the antenna21. The transmission tuning system24may include tuning and/or impedance matching, filters (e.g. a low pass filter, a high pass filter, a "pi" or "Π" filter, a "T" filter, an "L" filter, a "LL" filter, an L-C trap filter, among other filters), network matching, sensing, and/or conditioning elements configured to optimize wireless transfer of signals from the wireless transmission system20to the wireless receiver system30.
Further, the transmission tuning system24may include an impedance matching circuit, which is designed to match impedance with a corresponding wireless receiver system30for given power, current, and/or voltage requirements for wireless transmission of one or more of electrical energy, electrical power, electromagnetic energy, and electronic data. Turning now toFIG.6and with continued reference to, at least,FIGS.1and2, the wireless receiver system30is illustrated in further detail. The wireless receiver system30is configured to receive, at least, electrical energy, electrical power, electromagnetic energy, and/or electrically transmittable data via near field magnetic coupling from the wireless transmission system20, via the transmission antenna21. As illustrated inFIG.6, the wireless receiver system30includes, at least, the receiver antenna31, a receiver tuning system34, a power conditioning system32, and a receiver control system36. The receiver tuning system34may be configured to substantially match the electrical impedance of the wireless transmission system20. In some examples, the receiver tuning system34may be configured to dynamically adjust and substantially match the electrical impedance of the receiver antenna31to a characteristic impedance of the power generator or the load at a driving frequency of the transmission antenna21. As illustrated, the power conditioning system32includes a rectifier33and a voltage regulator35. In some examples, the rectifier33is in electrical connection with the receiver tuning system34. The rectifier33is configured to modify the received electrical energy from an alternating current electrical energy signal to a direct current electrical energy signal. In some examples, the rectifier33is comprised of at least one diode. Some non-limiting example configurations for the rectifier33include, but are not limited to including, a full wave rectifier, including a center tapped full wave rectifier and a full wave rectifier with filter, a half wave rectifier, including a half wave rectifier with filter, a bridge rectifier, including a bridge rectifier with filter, a split supply rectifier, a single phase rectifier, a three phase rectifier, a controlled rectifier, an uncontrolled rectifier, and a half controlled rectifier. As electronic devices may be sensitive to voltage, additional protection of the electronic device may be provided by clipper circuits or devices. The rectifier33may further include a clipper circuit or a clipper device. A clipper is herein defined as a circuit or device that removes either the positive half (top half), the negative half (bottom half), or both the positive and the negative halves of an input AC signal. In other words, a clipper is a circuit or device that limits the positive amplitude, the negative amplitude, or both the positive and the negative amplitudes of the input AC signal. Some non-limiting examples of a voltage regulator35include, but are not limited to including, a series linear voltage regulator, a shunt linear voltage regulator, a step up switching voltage regulator, a step down switching voltage regulator, an inverter voltage regulator, a Zener controlled transistor series voltage regulator, and an emitter follower voltage regulator. The voltage regulator35may further include a voltage multiplier.
A voltage multiplier is herein defined as an electronic circuit or device that delivers an output voltage having an amplitude (peak value) that is two, three, or more times greater than the amplitude (peak value) of the input voltage. The voltage regulator35is in electrical connection with the rectifier33and configured to adjust the amplitude of the electrical voltage of the wirelessly received electrical energy signal, after conversion to DC by the rectifier33. In some examples, the voltage regulator35may be a low dropout linear voltage regulator; however, other voltage regulation circuits and/or systems are contemplated. As illustrated, the direct current electrical energy signal output by the voltage regulator35is received at the load16of the electronic device14. In some examples, a portion of the direct current electrical power signal may be utilized to power the receiver control system36and any components thereof; however, it is certainly possible that the receiver control system36, and any components thereof, may be powered and/or receive signals from the load16and/or other components of the electronic device14. The receiver control system36may include, but is not limited to including, a receiver controller38, a communications system39, and a memory37. The receiver controller38may be any electronic controller or computing system that includes, at least, a processor which performs operations, executes control algorithms, stores data, retrieves data, gathers data, controls and/or provides communication with other components and/or subsystems associated with the wireless receiver system30. The receiver controller38may be a single controller or may include more than one controller disposed to control various functions and/or features of the wireless receiver system30. Functionality of the receiver controller38may be implemented in hardware and/or software and may rely on one or more data maps relating to the operation of the wireless receiver system30. To that end, the receiver controller38may be operatively associated with the memory37. The memory may include one or more of internal memory, external memory, and/or remote memory (e.g., a database and/or server operatively connected to the receiver controller38via a network, such as, but not limited to, the Internet). The internal memory and/or external memory may include, but are not limited to including, one or more of a read only memory (ROM), including programmable read-only memory (PROM), erasable programmable read-only memory (EPROM or sometimes but rarely labelled EROM), electrically erasable programmable read-only memory (EEPROM), random access memory (RAM), including dynamic RAM (DRAM), static RAM (SRAM), synchronous dynamic RAM (SDRAM), single data rate synchronous dynamic RAM (SDR SDRAM), double data rate synchronous dynamic RAM (DDR SDRAM, DDR2, DDR3, DDR4), and graphics double data rate synchronous dynamic RAM (GDDR SDRAM, GDDR2, GDDR3, GDDR4, GDDR5), a flash memory, a portable memory, and the like. Such memory media are examples of nontransitory computer readable memory media. Further, while particular elements of the receiver control system36are illustrated as independent components and/or circuits (e.g., the memory37, the communications system39, among other contemplated elements), such components may be integrated with the receiver controller38.
In some examples, the receiver controller38may be and/or include one or more integrated circuits configured to include functional elements of one or both of the receiver controller38and the wireless receiver system30, generally. "Integrated circuits," as defined herein, generally refers to circuits in which all or some of the circuit elements are inseparably associated and electrically interconnected so that they are considered to be indivisible for the purposes of construction and commerce. Such integrated circuits may include, but are not limited to including, thin-film transistors, thick-film technologies, and/or hybrid integrated circuits. In some examples, the communications system39may be a dedicated circuit configured to send and receive data at a given operating frequency. For example, the communications system39may be a tagging or identifier integrated circuit, such as, but not limited to, an NFC tag and/or labelling integrated circuit. Examples of such NFC tag and/or labelling integrated circuits include the NTAG® family of integrated circuits manufactured by NXP Semiconductors N.V. Additionally or alternatively, the communications system39may include Bluetooth® communications components, WiFi communications components, TransferJet™ communications components, among other contemplated out of band communications components. However, the communications system39is certainly not limited to these example components and, in some examples, the communications system39may be implemented with another integrated circuit (e.g., integrated with the receiver controller38), may be another transceiver of or operatively associated with one or both of the electronic device14and the wireless receiver system30, among other contemplated communication systems and/or apparatus. Further, in some examples, functions of the communications system39may be integrated with the receiver controller38, such that the controller modifies the inductive field between the antennas21,31to communicate in the frequency band of the wireless power transfer operating frequency. Turning now toFIG.7, a schematic block diagram for a data communications system60is illustrated. The data communications system60operates by encoding a message using pulse width encoding, as will be discussed in greater detail below. Accordingly, any elements of the data communications system60may be implemented by one or more apparatus, hardware, software, firmware, and any combinations thereof. To that end, the data communications system60and any components thereof may be comprised of or be performed by any electronic controller or computing system that includes, at least, a processor which performs operations, executes control algorithms, stores data, retrieves data, gathers data, controls and/or provides communication with other components and/or subsystems associated with the data communications system60and any components thereof. The data communications system60may be implemented by a single controller or may include more than one controller disposed to control various functions and/or features of the data communications system60and any components thereof. Functionality of the data communications system60and any components thereof may be implemented in hardware and/or software and may rely on one or more data maps relating to the operation of the data communications system60and any components thereof. To that end, the data communications system60and any components thereof may be operatively associated with a memory.
The memory may include one or more of internal memory, external memory, and/or remote memory (e.g., a database and/or server operatively connected to the data communications system60and any components thereof via a network, such as, but not limited to, the Internet). The internal memory and/or external memory may include, but are not limited to including, one or more of a read only memory (ROM), including programmable read-only memory (PROM), erasable programmable read-only memory (EPROM or sometimes but rarely labelled EROM), electrically erasable programmable read-only memory (EEPROM), random access memory (RAM), including dynamic RAM (DRAM), static RAM (SRAM), synchronous dynamic RAM (SDRAM), single data rate synchronous dynamic RAM (SDR SDRAM), double data rate synchronous dynamic RAM (DDR SDRAM, DDR2, DDR3, DDR4), and graphics double data rate synchronous dynamic RAM (GDDR SDRAM, GDDR2, GDDR3, GDDR4, GDDR5), a flash memory, a portable memory, and the like. Such memory media are examples of nontransitory computer readable memory media. The data communications system60may be utilized to provide communications in conjunction with wireless power transfer systems, such as those discussed above, which will be discussed in greater detail below. However, it is certainly contemplated that the data communications system60may be utilized in any wireless or wired communications system, wherein pulse width encoding is optimal for achieving greater data rates, reducing bill of materials, providing asynchronous data communications, providing data communications absent a clock, and/or providing data communications over a medium susceptible to unreliable and/or inconsistent data rates, among other things. Operations of the data communications system60begin, generally, when a data input source61provides a message62. The message62may be any encodable data desired for communications, ultimately, to a data recipient66. The message62is encoded by the encoder70, using a coding format (examples and more detail, below) to generate an encoded message72. The encoded message is transferred to a decoder90over a transfer medium64. The transfer medium64may be any medium over which data is transferable; examples of transfer media that may comprise or be included as part of the transfer medium64include, but are not limited to including, a wireless connection, an electromagnetic connection, an electrical connection, a wireless electrical connection, an Internet connection, an Ethernet connection, a wired electrical connection, a wire, a trace, among other transfer media. Upon transfer via the transfer medium64, the encoded message72is received by the decoder90. The decoder90utilizes the same coding format as the encoder70to then decode the encoded message to reproduce the message62, for receipt by the data recipient66. Returning now to the encoder70, as illustrated in greater detail inFIG.8, the encoder receives the message62from the data input source61. The message62may be any data message of any length, size, duration, and any combinations thereof. Further, the message62includes one or more message words65. A "message word," as defined herein, refers to a fixed-size piece of data handled by an instruction set and/or a hardware device associated with data communications. A "message word" may be of any word length, word size, and/or word width, in accordance with its associated instruction set and/or associated hardware device.
The size of a "message word" may be constrained by hardware and/or software limitations; therefore, it is advantageous for the encoder70, in conjunction with the coding format80, to implement an intelligent instruction set that may be tailored to the specifications of hardware and/or software constraints. While it will be illustrated below that the example message words65may have a one-bit or two-bit binary format, the message words65are certainly not limited to having such binary formats and may be of any desired messaging format including, but not limited to including, higher bit binary formats (e.g., 4-bit binary, 8-bit binary, 16-bit binary, . . . , up to 2^n-bit binary, for any integer "n"), base-16 or hexadecimal messages (including single or multiple digits), base-10 or decimal messages (including single or multiple digits), alphanumeric messages (including single or multiple alphanumeric characters), ASCII messages (including single or multiple ASCII characters), among other forms of transferable data messages. The diverse array of potential messages for the message words65is enabled by the system60utilizing the coding format80to encode the message words65to generate the encoded message72, which includes a plurality of encoded words75. The coding format80is illustrated in greater detail in the block diagram ofFIG.9. The coding format80correlates a plurality of correlated ratios85, respectively, with a plurality of format words82, wherein each of the plurality of correlated ratios is a ratio of a duty cycle of a pulse to a respective period associated with one or both of the duty cycle and the pulse. The format words82have a like format to the format of the message words65(e.g., if the message words65are in binary, then the format words82are in binary). The coding format80reads the message word65, relates it to a stored format word82, then outputs an encoded word75, based on the message word65, which is a pulse having a pulse width that is the correlated ratio85of the format word82multiplied by a period of the pulse. This may be better understood in relation to the exemplary embodiment ofFIGS.10-11C. FIG.10illustrates an exemplary coding format80A, presented as a table, having a one-bit binary format. As illustrated, the format words82include three format words: a start signal, 0, and 1. The start signal may be a format start word87that correlates to a start word67of the message signal62, the start word67indicating that the data input source61intends to send a message. Accordingly, the start word67and/or the format start word87are associated with a start correlated ratio88, which is correlated with both the start word67and the format start word87. The encoded words75are output as ratios of the duty cycle of a pulse to the pulse's respective period; such output is received by a controller and, for example, a signal is modulated to include pulses having the widths of the encoded words75. By utilizing percentages of a period of a pulse to encode a message62, the decoder90only needs to know the coding format80; it need not be synchronized by a clock of the signal. Therefore, the signal communications disclosed herein may be "un-clocked" and/or asynchronous communicative signals. An "un-clocked" communication signal, as defined herein, refers to a signal that does not require an oscillating clock signal to synchronize a sender of a message with the receiver of said message.
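The coding format lends itself to a compact software model. The following is a minimal sketch, in Python, of pulse-width encoding and decoding consistent with the one-bit coding format80A; the function names, the dictionary layout, and the nearest-ratio matching strategy are illustrative assumptions rather than a reference implementation from this disclosure, and the ratio values simply mirror the example ofFIGS.10-11(start = 0.8, "1" = 0.6, "0" = 0.3).

```python
# A sketch of pulse-width encoding/decoding; ratios mirror coding format 80A.
CODING_FORMAT = {"start": 0.8, "1": 0.6, "0": 0.3}  # format word -> correlated ratio

def encode(message, periods):
    """Encode a bit string as (high_time, period) pulses, one per word.

    The periods need not be equal: only the duty-cycle ratio carries
    information, which is what makes the scheme data-rate independent.
    """
    words = ["start"] + list(message)
    if len(periods) != len(words):
        raise ValueError("need one period per encoded word")
    return [(CODING_FORMAT[w] * t, t) for w, t in zip(words, periods)]

def decode(pulses):
    """Recover the message by measuring each pulse's duty-cycle ratio."""
    words = []
    for high_time, period in pulses:
        ratio = high_time / period
        # Choose the format word whose correlated ratio is nearest the
        # measured ratio, tolerating edge-detection granularity.
        words.append(min(CODING_FORMAT, key=lambda w: abs(CODING_FORMAT[w] - ratio)))
    if words[0] != "start":
        raise ValueError("missing start word")
    return "".join(words[1:])

# Uneven periods (cf. FIGS. 11B-C): decoding is unaffected by the data rate.
pulses = encode("1001", periods=[1.0, 1.0, 1.3, 0.7, 1.0])
assert decode(pulses) == "1001"
```

Note that decode() consults only the coding format and per-pulse edge timing; no shared oscillating clock appears anywhere, which reflects the "un-clocked," asynchronous property defined above.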
As pulse width encoding using correlated ratios85has an unlimited number of possibilities for fields in the coding format (e.g., pairs of correlated ratios85to format words82), the only limit to the size of the coding format, within a single bit, is the granularity of the hardware and/or software utilized to implement the encoder70and/or the decoder90. Therefore, such pulse width encoding of the system60may enable faster data communications using less expensive, legacy hardware, when compared to utilizing legacy coding methods (e.g., Manchester coding, on-off keying, among other things). To further illustrate the data communications of the system60, visually, a sample encoded message72A is illustrated inFIG.11A. The message62A is "1001." Accordingly, the encoder70will reference the coding format80to extract the correlated ratios85, which correlate to each of the message words65of the message62A, to generate the encoded message72A as a pulse-width encoded message. As illustrated in the example ofFIG.11A, the periods of the pulses of the pulse-width encoded message72A are substantially equal. As illustrated, each pulse has a width that corresponds to the ratio of the duty cycle of each pulse for each encoded word75(e.g., the start correlated ratio is 0.8T1, "1" is 0.6T2, "0" is 0.3T3, "0" is 0.3T4, and "1" is 0.6T5). While it is certainly possible that a message encoded and decoded with the system60may have a consistent data rate and, thus, the period "T" for the entire encoded message72will remain equal, a distinct advantage of the system60is that data communications fidelity is maintained, even when data rates are uneven. To that end,FIG.11Billustrates the same message "1001" with the same coding format80; however, the data rate exhibits a slight drop, wherein T3indicates a slower data rate at the transmission of the specific encoded word75D. While, visually, the plot of the encoded message72B indicates something different from the encoded message72A ofFIG.11A, the encoded message72B is identical to the encoded message72A, as the encoding is independent of data rate and/or the pulse period. "Independent of data rate" refers to signal communication conditions wherein a sender of a message and a receiver of said message do not have to operate at a common and/or consistent rate of transfer of data between sender and receiver. As illustrated, as withFIG.11A, each pulse has a width that corresponds to the ratio of the duty cycle of each pulse for each encoded word75(e.g., the start correlated ratio is 0.8T1, "1" is 0.6T2, "0" is 0.3T3, "0" is 0.3T4, and "1" is 0.6T5). Lastly, and illustrating further the advantages of the system60,FIG.11Cillustrates a scenario in which a data rate may be inconsistent, to the point where each period may have a different length. To that end, while, visually, the plot of the encoded message72C indicates something different from the encoded message72A ofFIG.11Aand/or the encoded message72B ofFIG.11B, the encoded message72C is identical to the encoded message72A, as the encoding is independent of data rate. As illustrated, as withFIG.11A, each pulse has a width that corresponds to the ratio of the duty cycle of each pulse for each encoded word75(e.g., the start correlated ratio is 0.8T1, "1" is 0.6T2, "0" is 0.3T3, "0" is 0.3T4, and "1" is 0.6T5). Turning now toFIGS.12and13, an alternative coding format80B, which may be utilized to encode the message62A of "1001," is illustrated.
The coding format80B is illustrated as a two-bit binary coding method, including a format start word87correlating with a start word67. As illustrated, the coding format80B has five correlated ratios85, which allow for communication of four different format words82and the start word87.FIG.13illustrates, visually, the encoded message72D, utilizing the coding format80B, to encode the same message62A. While appearing, visually, as a different message than those of the encoded messages72A,72B, and/or72C, the encoded message72D includes the same data as the encoded messages72A,72B,72C. In fact, the encoded message72D illustrates yet another advantage of the data communications system60, as the encoded message72D is a compressed version of any of the encoded messages72A,72B,72C, as it includes only three pulses, versus five. By utilizing the pulse width encoding of the system60, data compression of the message62is only limited by system hardware and/or software granularity (e.g., in terms of edge detection of a pulse width modulated signal). Turning now toFIG.14, the decoder90is illustrated in greater detail. The decoder90is configured to receive the encoded message72, as one or more encoded message words75, and reference the detected encoded message words75against the coding format80. The decoder90references each of the encoded message words75against the plurality of correlated ratios85, determines correlated format words82, and outputs the correlated format words82as the output message words65, to compile the message62. The message62is then output to the data recipient66. By utilizing the coding format80, the only requirement for the hardware and/or software at the decoder90, beyond detecting high and/or low edges of pulses, to decode the encoded message72is knowledge of the correlated pairs of correlated ratios85and format words82. FIG.15illustrates an exemplary method200for performing data communications utilizing the system60. The method begins, at block204, by determining the message signal62from the data input source61. The method further includes encoding the message words65into encoded message words75, at the encoder70, utilizing the coding format80. The method further includes transmitting the encoded message signal72, including the encoded message words75, over the transfer medium64. Then, the encoded message signal72is received by the decoder90, as illustrated in block302. The decoder90then decodes the encoded message signal72into a plurality of message words65, by utilizing the coding format80to reference the encoded message words75against the correlated ratios85to determine format words82, representative of the message words65. Turning now toFIG.16, and with continued reference toFIGS.1-15, a wireless connector system110, for wireless power transfer and wireless data transmission, is illustrated. The system110may include substantially similar, identical, and/or analogous elements to those ofFIGS.1-6, as indicated by common reference numbers. Alternatively, functionally comparable components, which perform one or more similar functions to another, earlier described component, but have distinguishing characteristics, are denoted by three-digit numbers, wherein the most significant digit indicates a "series" for the current embodiment and the two least significant digits correspond to the earlier described component.
“Functionally corresponds,” as defined herein, means that the two or more components perform a similar function within the context of their respective, broader system, method, or apparatus. For example, in describing the110, the most significant digit “1” indicates the series for the embodiment ofFIG.16and the two least significant digits, “10,” indicate that the system functionally corresponds to the earlier described system10. The system10functionally corresponds with the wireless receiver system because both of the systems10,110are configured for transmission electrical energy and/or transmission of electrical data. A wireless transmission system120receives electrical power from an input power source112that is in electrical connection with a power conditioning system122, of which analogous systems are discussed in greater deal, above, with respect toFIG.5. The input power is then provided to one or more of a transmission controller128, a communications system129, a memory127, and/or any combinations thereof, each of which have analogous systems and/or components described in greater detail, above, with respect toFIGS.3and4. The transmission controller128may embody, execute, and/or include the decoder90and/or the data recipient66. A portion of the power output of the power conditioning system122is then provided to the transmission antenna121, via the transmission tuning system124, all of which have analogous systems and/or components described in greater detail, above, with respect toFIGS.1-6. The transmission system120then may transmit the electrical power to a wireless receiver system130, via a receiver antenna131, when the transmission antenna121and the receiver antenna131are operatively coupled at an operating frequency of the system110. The wireless receiver system130receives the electrical power via the operative coupling of the receiver antenna131and the transmission antenna121and provides the electrical power to the power conditioning system132, via the receiver tuning system134, all of which have analogous components discussed above with reference toFIG.5. The power conditioning system133, as discussed above with reference toFIG.5, may include, at least, a rectifier for converting an input AC power signal to a DC signal, for power distribution to a load116and/or any components of the receiver system130, such as, but not limited to, a receiver controller, a memory137, and a communications system139, all of which have analogous components discussed above with reference toFIG.5. The receiver controller138may embody, execute, and/or include the encoder70and/or the data recipient66. To that end, the receiver controller138may receive and/or generate the message62, which it then utilizes the encoder70to perform pulse-width encoding for encoding the message for transmission to the transmission system120and/or the transmission controller129. Therefore, the receiver controller138may also have stored thereon the coding format80and/or the coding format may be stored on the memory127and recalled by the receiver controller138. The receiver controller138may be utilized to modulate the electromagnetic field coupling the antennas121,131, to transmit the encoded message in the frequency band of the wireless power transmission between the systems120,130. 
Additionally or alternatively, the receiver controller138may utilize one or more of amplitude shift keying (ASK), phase shift keying (PSK), and/or frequency shift keying (FSK), among other in-band communications methods, to transmit the encoded message72over the electromagnetic connection of the antennas121,131. Further, the data input source61may include electrical characteristic information associated with the wireless receiver system130. For example, as the power conditioning system132may include or be a rectifier, as discussed above, the data input source61may include an output voltage at the output of the rectifier. To that end, the output voltage of the rectifier may then be communicated to the wireless transmission system120and, based on the output voltage of the rectifier, the wireless transmission system120may raise or lower the amount of power transmitted to the wireless receiver system130. FIG.17is a block diagram for a method300for performing data communications utilizing the system110and the system60. The method begins, at block212, wherein the antennas121,131of the system110electromagnetically couple, such that transfer of electrical energy and/or electrical data signals is possible. Then, the receiver controller138, of the receiver system130, determines the message signal62from the data input source61, as illustrated at block214. The method further includes encoding the message words65into encoded message words75, at the encoder70, utilizing the coding format80, as performed at the wireless receiver system130, at block216. The method further includes transmitting the encoded message signal72, including the encoded message words75, to the wireless transmission system120, by the wireless receiver system130, as illustrated at block218. Then, the encoded message signal72is received by the decoder90, at the transmission controller128of the wireless transmission system120, as illustrated in block312. The decoder90then decodes the encoded message signal72into a plurality of message words65, by utilizing the coding format80to reference the encoded message words75against the correlated ratios85to determine format words82, representative of the message words65, as illustrated at block314. The message signal62is then received by the wireless transmission system120, when it is determined based on the decoded message words at the decoder90, as illustrated in block316. Turning now toFIG.17, an exemplary, non-limiting embodiment of one or more of the transmission antenna21, the transmission antenna(s)121, and the receiver antenna31that may be used with any of the systems, methods, and/or apparatus disclosed herein is illustrated. In the illustrated embodiment, the antenna21,31,121has a flat spiral coil configuration. In the exemplary embodiment shown, the antenna comprises four alternating layers of an electrical conductor and an electrically insulating material integrated into a printed circuit board (PCB), a flexible circuit board (FPC), or a hybrid circuit board (HCB), the HCB comprising a PCB portion and an FPC portion. As shown, the antenna21,31,121comprises two antenna segments that are electrically connected in series. As shown, the antenna21,31,121is constructed having five turns of a copper trace95deposited on the surface of an insulative substrate99with a gap97of, for example, 15 to 200 microns between each turn of the trace95. Each segment comprises an electrical conductor (e.g., trace95) positioned on an insulative substrate98in an electrical parallel configuration.
Non-limiting examples can be found in U.S. Pat. Nos. 9,941,743 and 9,960,628, both to Peralta et al., U.S. Pat. Nos. 9,948,129 and 10,063,100 to Singh et al., U.S. Pat. No. 9,941,590 to Luzinski, U.S. Pat. No. 9,960,629 to Rajagopalan et al., and U.S. Patent App. Nos. 2017/0040107, 2017/0040105, and 2017/0040688 to Peralta et al., all of which are assigned to the assignee of the present application and incorporated fully herein by reference. In addition, the antenna21,31,121may be constructed having a multi-layer-multi-turn (MLMT) construction in which at least one insulator is positioned between a plurality of conductors. Non-limiting examples of antennas having an MLMT construction that may be incorporated within the wireless transmission system(s)20and/or the wireless receiver system(s)30may be found in U.S. Pat. Nos. 8,610,530, 8,653,927, 8,680,960, 8,692,641, 8,692,642, 8,698,590, 8,698,591, 8,707,546, 8,710,948, 8,803,649, 8,823,481, 8,823,482, 8,855,786, 8,898,885, 9,208,942, 9,232,893, and 9,300,046, all to Singh et al., assigned to the assignee of the present application and incorporated fully herein by reference. It is also noted that other antennas such as, but not limited to, an antenna configured to send and receive signals in the UHF radio wave frequency band, such as IEEE standard 802.15.1, may be incorporated within the systems, methods, and/or apparatus of the present invention. FIG.18is an example block diagram for a method1000for designing a system for wirelessly transferring one or more of electrical energy, electrical power, electromagnetic energy, and electronic data, in accordance with the systems, methods, and apparatus of the present disclosure. To that end, the method1000may be utilized to design a system in accordance with any disclosed embodiments of the systems10,110and any components thereof. At block1200, the method1000includes designing a wireless transmission system for use in the system10,110. The wireless transmission system designed at block1200may be designed in accordance with one or more of the aforementioned and disclosed embodiments of the wireless transmission systems20,120, and120A-H, in whole or in part and, optionally, including any components thereof. Block1200may be implemented as a method1200for designing a wireless transmission system. Turning now toFIG.19and with continued reference to the method1000ofFIG.18, an example block diagram for the method1200for designing a wireless transmission system is illustrated. The wireless transmission system designed by the method1200may be designed in accordance with one or more of the aforementioned and disclosed embodiments of the wireless transmission systems20,120, and120A-H in whole or in part and, optionally, including any components thereof. The method1200includes designing and/or selecting a transmission antenna for the wireless transmission system, as illustrated in block1210. The designed and/or selected transmission antenna may be designed and/or selected in accordance with one or more of the aforementioned and disclosed embodiments of the transmission antenna21,121,121A-N, in whole or in part and including any components thereof. The method1200includes designing and/or tuning a transmission tuning system for the wireless transmission system, as illustrated in block1220. Such designing and/or tuning may be utilized for, but not limited to being utilized for, impedance matching, as discussed in more detail above.
The designed and/or tuned transmission tuning system may be designed and/or tuned in accordance with one or more of the aforementioned and disclosed embodiments of wireless transmission systems20,120, and120A-H in whole or in part and, optionally, including any components thereof. The method1200further includes designing a power conditioning system for the wireless transmission system, as illustrated in block1230. The power conditioning system designed may be designed with any of a plurality of power output characteristic considerations, such as, but not limited to, power transfer efficiency, maximizing a transmission gap (e.g., the gap17), increasing output voltage to a receiver, mitigating power losses during wireless power transfer, increasing power output without degrading fidelity for data communications, optimizing power output for multiple coils receiving power from a common circuit and/or amplifier, among other contemplated power output characteristic considerations. The power conditioning system may be designed in accordance with one or more of the aforementioned and disclosed embodiments of the power conditioning system40, in whole or in part and, optionally, including any components thereof. Further, at block1240, the method1200may determine and optimize a connection, and any associated connection components, to configure and/or optimize a connection between the input power source12and the power conditioning system of block1230. Such determining, configuring, and/or optimizing may include selecting and implementing protection mechanisms and/or apparatus, selecting and/or implementing voltage protection mechanisms, among other things. The method1200further includes designing and/or programming a transmission control system of the wireless transmission system of the method1000, as illustrated in block1250. The designed transmission control system may be designed in accordance with one or more of the aforementioned and disclosed embodiments of the transmission control system26, in whole or in part and, optionally, including any components thereof. Such components thereof include, but are not limited to including, the sensing system50, the driver41, the transmission controller28, the memory27, the communications system29, the thermal sensing system52, the object sensing system54, the receiver sensing system56, the other sensor(s)58, the gate voltage regulator43, the PWM generator41, the frequency generator348, in whole or in part and, optionally, including any components thereof. Returning now toFIG.18, at block1300, the method1000includes designing a wireless receiver system for use in the system10. The wireless receiver system designed at block1300may be designed in accordance with one or more of the aforementioned and disclosed embodiments of the wireless receiver system30in whole or in part and, optionally, including any components thereof. Block1300may be implemented as a method1300for designing a wireless receiver system. Turning now toFIG.20and with continued reference to the method1000ofFIG.18, an example block diagram for the method1300for designing a wireless receiver system is illustrated. The wireless receiver system designed by the method1300may be designed in accordance with one or more of the aforementioned and disclosed embodiments of the wireless receiver system30in whole or in part and, optionally, including any components thereof. The method1300includes designing and/or selecting a receiver antenna for the wireless receiver system, as illustrated in block1310.
The designed and/or selected receiver antenna may be designed and/or selected in accordance with one or more of the aforementioned and disclosed embodiments of the receiver antenna31, in whole or in part and including any components thereof. The method1300includes designing and/or tuning a receiver tuning system for the wireless receiver system, as illustrated in block1320. Such designing and/or tuning may be utilized for, but not limited to being utilized for, impedance matching, as discussed in more detail above. The designed and/or tuned receiver tuning system may be designed and/or tuned in accordance with one or more of the aforementioned and disclosed embodiments of the receiver tuning system34in whole or in part and, optionally, including any components thereof. The method1300further includes designing a power conditioning system for the wireless receiver system, as illustrated in block1330. The power conditioning system designed may be designed with any of a plurality of power output characteristic considerations, such as, but not limited to, power transfer efficiency, maximizing a transmission gap (e.g., the gap17), increasing output voltage to a receiver, mitigating power losses during wireless power transfer, increasing power output without degrading fidelity for data communications, optimizing power output for multiple coils receiving power from a common circuit and/or amplifier, among other contemplated power output characteristic considerations. The power conditioning system may be designed in accordance with one or more of the aforementioned and disclosed embodiments of the power conditioning system32in whole or in part and, optionally, including any components thereof. Further, at block1340, the method1300may determine and optimize a connection, and any associated connection components, to configure and/or optimize a connection between the load16and the power conditioning system of block1330. Such determining, configuring, and/or optimizing may include selecting and implementing protection mechanisms and/or apparatus, selecting and/or implementing voltage protection mechanisms, among other things. The method1300further includes designing and/or programming a receiver control system of the wireless receiver system of the method1000, as illustrated in block1350. The designed receiver control system may be designed in accordance with one or more of the aforementioned and disclosed embodiments of the receiver control system36in whole or in part and, optionally, including any components thereof. Such components thereof include, but are not limited to including, the receiver controller38, the memory37, and the communications system39, in whole or in part and, optionally, including any components thereof. Returning now to the method1000ofFIG.18, the method1000further includes, at block1400, optimizing and/or tuning both the wireless transmission system and the wireless receiver system for wireless power transfer. Such optimizing and/or tuning includes, but is not limited to including, controlling and/or tuning parameters of devices to match impedance, optimizing and/or configuring voltage and/or power levels of an output power signal, among other things and in accordance with any of the disclosed systems, methods, and apparatus herein. Further, the method1000includes optimizing and/or tuning both the wireless transmission system and the wireless receiver system for data communications, in view of system characteristics necessary for wireless power transfer.
Such optimizing and/or tuning includes, but is not limited to including, optimizing power characteristics for concurrent transmission of electrical energy and electrical data signals, tuning quality factors of antennas for different transmission schemes, among other things and in accordance with any of the disclosed systems, methods, and apparatus herein. FIG.21is an example block diagram for a method2000for manufacturing a system for wirelessly transferring one or both of electrical energy and electronic data, in accordance with the systems, methods, and apparatus of the present disclosure. To that end, the method2000may be utilized to manufacture a system in accordance with any disclosed embodiments of the systems10,110and any components thereof. At block2200, the method2000includes manufacturing a wireless transmission system for use in the system10. The wireless transmission system manufactured at block2200may be manufactured in accordance with one or more of the aforementioned and disclosed embodiments of the wireless transmission systems20,120and/or120A-H in whole or in part and, optionally, including any components thereof. Block2200may be implemented as a method2200for manufacturing a wireless transmission system. Turning now toFIG.22and with continued reference to the method2000ofFIG.21, an example block diagram for the method2200for manufacturing a wireless transmission system is illustrated. The wireless transmission system manufactured by the method2200may be manufactured in accordance with one or more of the aforementioned and disclosed embodiments of the wireless transmission systems20,120, and120A-H in whole or in part and, optionally, including any components thereof. The method2200includes manufacturing a transmission antenna for the wireless transmission system, as illustrated in block2210. The manufactured transmission antenna may be built and/or tuned in accordance with one or more of the aforementioned and disclosed embodiments of the transmission antenna21,121, and121A-N, in whole or in part and including any components thereof. The method2200includes building and/or tuning a transmission tuning system for the wireless transmission system, as illustrated in block2220. Such building and/or tuning may be utilized for, but not limited to being utilized for, impedance matching, as discussed in more detail above. The built and/or tuned transmission tuning system may be built and/or tuned in accordance with one or more of the aforementioned and disclosed embodiments of the transmission tuning system24, in whole or in part and, optionally, including any components thereof. The method2200further includes selecting and/or connecting a power conditioning system for the wireless transmission system, as illustrated in block2230. The power conditioning system selected and/or connected may be designed with any of a plurality of power output characteristic considerations, such as, but not limited to, power transfer efficiency, maximizing a transmission gap (e.g., the gap17), increasing output voltage to a receiver, mitigating power losses during wireless power transfer, increasing power output without degrading fidelity for data communications, optimizing power output for multiple coils receiving power from a common circuit and/or amplifier, among other contemplated power output characteristic considerations.
The power conditioning system may be designed in accordance with one or more of the aforementioned and disclosed embodiments of the power conditioning system40in whole or in part and, optionally, including any components thereof. Further, at block2240, the method2200may determine and optimize a connection, and any associated connection components, to configure and/or optimize a connection between the input power source12and the power conditioning system of block2230. Such determining, configuring, and/or optimizing may include selecting and implementing protection mechanisms and/or apparatus, selecting and/or implementing voltage protection mechanisms, among other things. The method2200further includes assembling and/or programming a transmission control system of the wireless transmission system of the method2000, as illustrated in block2250. The assembled transmission control system may be assembled in accordance with one or more of the aforementioned and disclosed embodiments of the transmission control system26in whole or in part and, optionally, including any components thereof. Such components thereof include, but are not limited to including, the sensing system50, the driver41, the transmission controller28, the memory27, the communications system29, the thermal sensing system52, the object sensing system54, the receiver sensing system56, the other sensor(s)58, the gate voltage regulator43, the PWM generator41, the frequency generator348, in whole or in part and, optionally, including any components thereof. Returning now toFIG.21, at block2300, the method2000includes manufacturing a wireless receiver system for use in the system10. The wireless receiver system manufactured at block2300may be manufactured in accordance with one or more of the aforementioned and disclosed embodiments of the wireless receiver system30in whole or in part and, optionally, including any components thereof. Block2300may be implemented as a method2300for manufacturing a wireless receiver system. Turning now toFIG.23and with continued reference to the method2000ofFIG.21, an example block diagram for the method2300for manufacturing a wireless receiver system is illustrated. The wireless receiver system manufactured by the method2300may be manufactured in accordance with one or more of the aforementioned and disclosed embodiments of the wireless receiver system30in whole or in part and, optionally, including any components thereof. The method2300includes manufacturing a receiver antenna for the wireless receiver system, as illustrated in block2310. The manufactured receiver antenna may be manufactured, designed, and/or selected in accordance with one or more of the aforementioned and disclosed embodiments of the receiver antenna31in whole or in part and including any components thereof. The method2300includes building and/or tuning a receiver tuning system for the wireless receiver system, as illustrated in block2320. Such building and/or tuning may be utilized for, but not limited to being utilized for, impedance matching, as discussed in more detail above. The built and/or tuned receiver tuning system may be built and/or tuned in accordance with one or more of the aforementioned and disclosed embodiments of the receiver tuning system34in whole or in part and, optionally, including any components thereof. The method2300further includes selecting and/or connecting a power conditioning system for the wireless receiver system, as illustrated in block2330.
The power conditioning system selected and/or connected may be designed with any of a plurality of power output characteristic considerations, such as, but not limited to, power transfer efficiency, maximizing a transmission gap (e.g., the gap17), increasing output voltage to a receiver, mitigating power losses during wireless power transfer, increasing power output without degrading fidelity for data communications, optimizing power output for multiple coils receiving power from a common circuit and/or amplifier, among other contemplated power output characteristic considerations. The power conditioning system may be designed in accordance with one or more of the aforementioned and disclosed embodiments of the power conditioning system32in whole or in part and, optionally, including any components thereof. Further, at block2340, the method2300may determine and optimize a connection, and any associated connection components, to configure and/or optimize a connection between the load16and the power conditioning system of block2330. Such determining, configuring, and/or optimizing may include selecting and implementing protection mechanisms and/or apparatus, selecting and/or implementing voltage protection mechanisms, among other things. The method2300further includes assembling and/or programming a receiver control system of the wireless receiver system of the method2000, as illustrated in block2350. The assembled receiver control system may be assembled in accordance with one or more of the aforementioned and disclosed embodiments of the receiver control system36in whole or in part and, optionally, including any components thereof. Such components thereof include, but are not limited to including, the receiver controller38, the memory37, and the communications system39, in whole or in part and, optionally, including any components thereof. Returning now to the method2000ofFIG.21, the method2000further includes, at block2400, optimizing and/or tuning both the wireless transmission system and the wireless receiver system for wireless power transfer. Such optimizing and/or tuning includes, but is not limited to including, controlling and/or tuning parameters of devices to match impedance, optimizing and/or configuring voltage and/or power levels of an output power signal, among other things and in accordance with any of the disclosed systems, methods, and apparatus herein. Further, the method2000includes optimizing and/or tuning both the wireless transmission system and the wireless receiver system for data communications, in view of system characteristics necessary for wireless power transfer, as illustrated at block2500. Such optimizing and/or tuning includes, but is not limited to including, optimizing power characteristics for concurrent transmission of electrical energy and electrical data signals, tuning quality factors of antennas for different transmission schemes, among other things and in accordance with any of the disclosed systems, methods, and apparatus herein. The systems, methods, and apparatus disclosed herein are designed to operate in an efficient, stable and reliable manner to satisfy a variety of operating and environmental conditions. The systems, methods, and/or apparatus disclosed herein are designed to operate in a wide range of thermal and mechanical stress environments so that data and/or electrical energy is transmitted efficiently and with minimal loss.
In addition, the system10may be designed with a small form factor using a fabrication technology that allows for scalability, and at a cost that is amenable to developers and adopters. In addition, the systems, methods, and apparatus disclosed herein may be designed to operate over a wide range of frequencies to meet the requirements of a wide range of applications. In an embodiment, the system may transmit electrical power on the order of about 100 W to about 10 W. In another embodiment, electrical power up to about 500 W may also be transmitted. Specifically considering near field magnetic coupling (NFMC) as the mechanism of wireless power transfer between the wireless transmission systems20,120,120A-H and the wireless receiver systems30, it is well known that smaller sizes are generally more easily achievable if a higher operating frequency is selected. This is due to the inverse relationship of the required mutual inductance and the frequency of operation, as indicated by the following equation:

\( M = \dfrac{V_{induced}}{j \cdot \omega \cdot I_{Tx}} \)

where \( V_{induced} \) is the induced voltage on the receiver antenna coil, \( I_{Tx} \) is the AC current flowing through the transmitter antenna coil, and \( \omega \) is the operating frequency multiplied by 2π. Since the required mutual inductance increases in order to enable the wireless transfer of increased electrical energy, it is necessary to increase the inductance or coupling of the transmitter or receiver while minimizing AC losses. Mutual inductance can be calculated by the following relationship:

\( M = k \cdot \sqrt{L_{Tx} \cdot L_{Rx}} \)

where \( M \) is the mutual inductance of the system, \( k \) is the coupling of the system, \( L_{Tx} \) is the inductance of the transmitter antenna coil, and \( L_{Rx} \) is the inductance of the receiver antenna coil. As the form factor of the antenna coil is reduced, attaining the required inductance on either the receiver or transmitter is accompanied by an increase in antenna coil resistance, as the high number of turns required leads to a reduction in trace width. This increase in resistance typically reduces the quality factor of the antenna coil and the overall coil-to-coil efficiency of the system, where the quality factor is defined as:

\( Q = \dfrac{\omega \cdot L}{R} \)

where \( Q \) is the quality factor of the antenna coil, \( L \) is the inductance of the antenna coil, \( \omega \) is the operating frequency of the antenna coil in radians/second (alternatively, if the frequency of operation is in Hz, the operating frequency is \( \omega \) divided by 2π), and \( R \) is the equivalent series resistance (ESR) at the operating frequency. Further, transmission (Tx) antenna coil to receiver (Rx) antenna coil efficiency (Eff) is defined by the following equation:

\( \mathrm{Eff} = \dfrac{k^{2} \cdot Q_{Rx} \cdot Q_{Tx}}{\left( 1 + \sqrt{1 + k^{2} \cdot Q_{Rx} \cdot Q_{Tx}} \right)^{2}} \)

where \( k \) is the coupling of the system, \( Q_{Rx} \) is the quality factor of the receiver antenna, and \( Q_{Tx} \) is the quality factor of the transmission antenna. In an embodiment, a ferrite shield may be incorporated within the antenna structure to improve antenna performance. Selection of the ferrite shield material is dependent on the operating frequency, as the complex magnetic permeability (\( \mu = \mu' - j \cdot \mu'' \)) is frequency dependent. The material may be a sintered flexible ferrite sheet, a rigid shield, or a hybrid shield, wherein the hybrid shield comprises a rigid portion and a flexible portion. Additionally, the ferrite shield may be composed of varying material compositions. Examples of materials may include, but are not limited to, zinc-comprising ferrite materials such as manganese-zinc, nickel-zinc, copper-zinc, magnesium-zinc, and combinations thereof.
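To make the relationships above concrete, the short sketch below evaluates mutual inductance, quality factor, and coil-to-coil efficiency for one set of example values. The component values are assumptions chosen only to exercise the formulas (they loosely echo the Litz wire and MLMT measurements tabulated below) and are not a design recommendation.

```python
import math

f = 2e6                      # operating frequency (Hz), an assumed example value
omega = 2 * math.pi * f      # angular frequency (rad/s)
L_tx, L_rx = 3.8e-6, 0.6e-6  # assumed coil inductances (H)
R_tx, R_rx = 0.97, 0.50      # assumed equivalent series resistances (ohm)
k = 0.29                     # assumed coupling coefficient

M = k * math.sqrt(L_tx * L_rx)       # mutual inductance, M = k * sqrt(LTx * LRx)
Q_tx = omega * L_tx / R_tx           # transmitter coil quality factor, Q = wL/R
Q_rx = omega * L_rx / R_rx           # receiver coil quality factor

x = k**2 * Q_rx * Q_tx
eff = x / (1 + math.sqrt(1 + x))**2  # coil-to-coil efficiency

print(f"M = {M*1e6:.2f} uH, Q_tx = {Q_tx:.1f}, Q_rx = {Q_rx:.1f}, Eff = {eff:.1%}")
```

As the sketch makes visible, lowering the ESR directly raises Q and, through the k²·QRx·QTx product, the attainable coil-to-coil efficiency, which is the motivation for the shielded and MLMT constructions discussed below.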
In addition, depending on the operating frequency and power requirements of the system10,110, a hybrid antenna construction comprising a Litz wire and a PCB coil combination may be desired to efficiently transfer power. In an embodiment, a hybrid Litz wire and PCB coil combination may comprise one of the transmission antenna21,121,121A-N or the receiver antenna31of a wrapped Litz wire construction, while the other of the transmission antenna21,121,121A-N or the receiver antenna31may be constructed having a coil disposed on a surface of a circuit board, such as the antenna shown inFIG.17. Lower operating frequencies, on the order of 100 kHz to several MHz, may require a certain mutual inductance between the transmission and receiver antennas21,31,121,121A-N. This is attainable by using a transmitter antenna21,121,121A-N of a Litz wire construction having a novel ferrite core in combination with a receiver antenna31comprising a coil disposed on a surface of a circuit board, such as the antenna shown inFIG.17. In order to increase mutual inductance, the coupling and/or inductance of the transmitter module20,120,120A-H or the receiver module30must be increased. However, due to the small form factor constraints, coupling is limited by the physical size of the connector modules. It is noted that using transmitter and receiver antennas21,31,121,121A-N of a construction comprising a coil disposed on the surface of a circuit board, such as the antenna shown inFIG.17, may increase inductance while also increasing the resistance of the antenna coils, thereby decreasing the quality factor Q and antenna-to-antenna efficiency. In an embodiment, the system10,110comprising a transmission system20,120,120A-H having a transmission antenna21,121,121A-N of a Litz-wire construction and a shielding material, and a receiver system30having a receiver antenna31comprising a coil disposed on a surface of a circuit board (FIG.17), may be used to increase the coupling and mutual inductance of an exemplary small form factor of the system10,110. To achieve a higher antenna-to-antenna efficiency, this configuration may be used to achieve the necessary power transfer while maintaining a high Q factor at lower frequencies. These improvements may also increase the overall performance of an exemplary system10,110having a relatively small form factor. The choice of coil design and construction is determined by a combination of the following electrical and magnetic parameters: inductance (L), equivalent series resistance (ESR) at the operating frequency, coupling (k), and mutual inductance (M). For lower operating frequencies, i.e., from about 100 kHz to about 10 MHz, and for achieving increased power transmission across separation distances on the order of about 0.1 mm to about 100 mm, this particular antenna topology is beneficial. For example, per the mutual inductance equations, if the power to be delivered to a load is constant while the operating frequency decreases, the mutual inductance between the transmitter and receiver antenna coils must increase at a constant transmit current. Table I illustrates the improvement in mutual inductance, Table II illustrates the improvement in coupling, and Table III illustrates the improvement in antenna-to-antenna efficiency.
TABLE I

Transmitter Antenna Construction   Transmitter Antenna Shield   Receiver Antenna Construction   M (μH)
Coil on FR4 PCB                    Sheet                        Coil on FR4 PCB                 0.35
Litz Wire                          T-Core                       Coil on FR4 PCB                 1.35

TABLE II

Transmitter Antenna Construction   Transmitter Antenna Shield   Receiver Antenna Construction   Coupling
Coil on FR4 PCB                    Sheet                        Coil on FR4 PCB                 0.26
Litz Wire                          T-Core                       Coil on FR4 PCB                 0.29

TABLE III

Transmitter Antenna Construction   Transmitter Antenna Shield   Receiver Antenna Construction   Antenna to Antenna Efficiency
Coil on FR4 PCB                    Sheet                        Coil on FR4 PCB                 57.9%
Litz Wire                          T-Core                       Coil on FR4 PCB                 80.8%

In addition, if the system10is operated at a higher frequency, i.e., on the order of about 1 MHz or greater, the required mutual inductance will be reduced, thereby allowing for smaller transmitter and receiver antennas21,31,121,121A-N, wireless transmission systems20,120,120A-H, and wireless receiver systems30. As defined herein, a shielding material is a material that captures a magnetic field; an example is a ferrite material. In the embodiments detailed in Tables I-III, a sheet of ferrite material is positioned directly adjacent to the transmitter antenna21, for example, behind the transmission antenna21,121,121A-N. As defined herein, a "T-Core" shielding material is a magnetic field shield assembly comprising a sheet of shielding material, such as a ferrite material, placed directly behind the transmitter or receiver antenna21,31,121, and an additional second shielding material, such as a ferrite material, placed within the inside area of a coil in the plane of the transmitter or receiver antenna21,31,121. Furthermore, the wireless transmission system20or the wireless receiver system30may be constructed having the respective transmitter or receiver antennas21,31,121comprising a "C-core" shielding material, in which the shielding material, such as a ferrite material, configured similarly to the letter "C," is positioned adjacent to the antenna21,31,121. In addition, the wireless transmission system20or the wireless receiver system30may be constructed having the respective transmitter or receiver antennas21,31,121comprising an "E-core" shielding material, in which the shielding material, such as a ferrite material, configured similarly to the letter "E," is positioned adjacent to the antenna21,31,121. Utilizing relatively small sized printed circuit board or flexible printed circuit board (PCB/FPC) based coil-antennas allows for appropriate stackups, appropriate trace widths, gap widths, and copper (or other conductive material) depths that are more suitable for higher frequencies. Further, printed circuit board and flexible printed circuit board-based coil-antennas are highly integrated into the PCB fabrication process, thereby allowing for integration with the rest of the circuitry. This also allows for the integration of MLMT antenna designs to reduce ESR and improve the Q of the antennas. Furthermore, utilizing coils in a layered approach allows for other fabrication processes, for example, printing, printing on fabrics, and semiconductor fabrication processes, such as a low temperature co-fired ceramic (LTCC) process, a high temperature co-fired ceramic (HTCC) process, and the like. Small form factor PCB coil designs are suitable at higher operating frequencies due to a lower required inductance while maintaining a low coil ESR to minimize the power dissipated in the transmit and receive coils. Printed circuit board (PCB) coil antennas offer additional benefits from a manufacturing, cost, and assembly standpoint compared to wire-wound antenna coil solutions.
For applications with a strict requirement for overall assembly thickness, printed circuit board (PCB) coil antennas are preferred due to the reduced thickness possible even with multilayer construction. The ferrite shield material selected for the coil combination also depends on the operating frequency, as the complex magnetic permeability (\( \mu = \mu' - j \cdot \mu'' \)) is frequency dependent. The material may be a sintered flexible ferrite sheet or a rigid shield and be composed of varying material compositions. It is noted that the construction of the antenna21,31,121is non-limiting. The antenna that is incorporated within a system may comprise magnet wires or have a stamped metal construction. Furthermore, the antenna21,31,121may utilize thick film, thin film, or other printing fabrication technologies in its construction. In an embodiment, incorporation of a transmitter or receiver antenna21,31,121having a multi-layer-multi-turn (MLMT) construction significantly reduces the equivalent series resistance (ESR) of the respective wireless transmission systems20and wireless receiver systems30and the wireless connector system10of the present invention. The inventors have discovered that incorporation of at least one transmitter and receiver antenna21,31,121having a multi-layer-multi-turn (MLMT) construction reduces the equivalent series resistance (ESR) of the wireless transmission system20or wireless receiver system30by about 50 percent. Furthermore, reducing ESR improves the overall system efficiency and reduces heating in the antenna21,31,121and the system10by reducing the (I²×R) losses in the coil. Table IV shown below details the measured ESR for two multi-layer-multi-turn (MLMT) antenna designs in comparison to an antenna constructed comprising Litz wire wrapped around an inductor. As shown in Table IV below, the antenna constructed with an MLMT design exhibited a lower inductance (0.60 μH) and a lower equivalent series resistance (ESR) (0.50 Ω) in comparison to the antenna having a traditional wound Litz wire construction. Thus, the transmitter or receiver antenna21,31,121having a multi-layer-multi-turn (MLMT) construction contributes to increased electrical power transmission and an increased module separation distance of the gap17of the system10of the present invention.

TABLE IV

Antenna Design   Frequency (MHz)   Inductance (μH)   ESR (Ω)
Litz Wire        2                 3.80              0.97
MLMT             2                 0.60              0.50
MLMT             10                0.65              1.05

Exemplary ways of connecting the module to a host device include, but are not limited to, directly soldering or placing the at least one wireless transmission system20and wireless receiver systems30on a circuit board or a host device. Alternatively, the at least one wireless transmission system20,120,120A-H and wireless receiver systems30could be connected to a circuit board or a host device using a wire/cable. Once connected to a host device, the full structure or at least a portion of the structure of the at least one wireless transmission system20,120,120A-H and wireless receiver systems30may be encapsulated within an insulative coating. In another embodiment, the system10,110of the present application could include a module that can operate both as a transmitter and as a receiver (e.g., a transceiver). In a further embodiment, the system10,110of the present application may comprise a power and data transfer system having a single antenna, where the data is modulated onto the power frequency.
In another embodiment, the system10,110of the present invention may comprise multiple antennas within each wireless transmission system20,120,120A-H and wireless receiver systems30. If a multiple antenna system is employed, then the first antenna could be reserved for identification, diagnostics, and any uni- or bi-directional data transfer, while the second antenna can be dedicated to power transfer. As used herein, the phrase "at least one of" preceding a series of items, with the term "and" or "or" to separate any of the items, modifies the list as a whole, rather than each member of the list (i.e., each item). The phrase "at least one of" does not require selection of at least one of each item listed; rather, the phrase allows a meaning that includes at least one of any one of the items, and/or at least one of any combination of the items, and/or at least one of each of the items. By way of example, the phrases "at least one of A, B, and C" or "at least one of A, B, or C" each refer to only A, only B, or only C; any combination of A, B, and C; and/or at least one of each of A, B, and C. The predicate words "configured to", "operable to", and "programmed to" do not imply any particular tangible or intangible modification of a subject, but, rather, are intended to be used interchangeably. In one or more embodiments, a processor configured to monitor and control an operation or a component may also mean the processor being programmed to monitor and control the operation or the processor being operable to monitor and control the operation. Likewise, a processor configured to execute code can be construed as a processor programmed to execute code or operable to execute code. A phrase such as "an aspect" does not imply that such aspect is essential to the subject technology or that such aspect applies to all configurations of the subject technology. A disclosure relating to an aspect may apply to all configurations, or one or more configurations. An aspect may provide one or more examples of the disclosure. A phrase such as an "aspect" may refer to one or more aspects and vice versa. A phrase such as an "embodiment" does not imply that such embodiment is essential to the subject technology or that such embodiment applies to all configurations of the subject technology. A disclosure relating to an embodiment may apply to all embodiments, or one or more embodiments. An embodiment may provide one or more examples of the disclosure. A phrase such as an "embodiment" may refer to one or more embodiments and vice versa. A phrase such as a "configuration" does not imply that such configuration is essential to the subject technology or that such configuration applies to all configurations of the subject technology. A disclosure relating to a configuration may apply to all configurations, or one or more configurations. A configuration may provide one or more examples of the disclosure. A phrase such as a "configuration" may refer to one or more configurations and vice versa. The word "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any embodiment described herein as "exemplary" or as an "example" is not necessarily to be construed as preferred or advantageous over other embodiments. Furthermore, to the extent that the term "include," "have," or the like is used in the description or the claims, such term is intended to be inclusive in a manner similar to the term "comprise" as "comprise" is interpreted when employed as a transitional word in a claim.
All structural and functional equivalents to the elements of the various aspects described throughout this disclosure that are known or later come to be known to those of ordinary skill in the art are expressly incorporated herein by reference and are intended to be encompassed by the claims. Moreover, nothing disclosed herein is intended to be dedicated to the public regardless of whether such disclosure is explicitly recited in the claims. No claim element is to be construed under the provisions of 35 U.S.C. § 112, sixth paragraph, unless the element is expressly recited using the phrase “means for” or, in the case of a method claim, the element is recited using the phrase “step for.” Reference to an element in the singular is not intended to mean “one and only one” unless specifically so stated, but rather “one or more.” Unless specifically stated otherwise, the term “some” refers to one or more. Pronouns in the masculine (e.g., his) include the feminine and neuter gender (e.g., her and its) and vice versa. Headings and subheadings, if any, are used for convenience only and do not limit the subject disclosure. While this specification contains many specifics, these should not be construed as limitations on the scope of what may be claimed, but rather as descriptions of particular implementations of the subject matter. Certain features that are described in this specification in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable sub-combination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a sub-combination or variation of a sub-combination.
101,813
11863250
DETAILED DESCRIPTION In the following description, various embodiments will be described. For purposes of explanation, specific configurations and details are set forth in order to provide a thorough understanding of the embodiments. However, it will also be apparent to one skilled in the art that the embodiments may be practiced without the specific details. Furthermore, well-known features may be omitted or simplified in order not to obscure the embodiment being described. According to some embodiments, the techniques described herein can be implemented by one or more generalized computing systems programmed to perform the techniques pursuant to program instructions in firmware, memory, other storage, or a combination. Special-purpose computing devices may be used, such as desktop computer systems, portable computer systems, handheld devices, networking devices or any other device that incorporates hard-wired and/or program logic to implement the techniques. The present disclosure describes several embodiments of a space-based cellular communications constellation that can also provide geolocation services. Such geolocation and sets of navigation parameters can be incorporated in search and rescue applications and used for optimizing communications network operations. Terminals A terminal might be some electronic user equipment that interacts in some way with a satellite or satellite signals. Examples might include user equipment (UE) that is configured or designed for using some protocol or protocols, a mobile phone, which might be a full-featured smartphone or otherwise, or some Internet-of-Things (IoT) connected device. In many cases, it might be assumed that the terminal is portable, in that it can be easily moved from place to place but in operation is typically stationary, or that a terminal is mobile, in that it needs to be designed to operate while being moved relative to the surface of the Earth. Examples of terminals include mobile phones, cellular phones, smartphones, and other devices equipped to communicate with a particular satellite. It should be understood that an operation, function or characteristic of a terminal might also be that of a station that is effectively or functionally a mobile station, but is not at present mobile. In some examples, the terminal might be considered instead a portable station that can be moved from place to place but in operation is stationary, such as a laptop computer with several connected peripherals and having a cellular connection, or the terminal might be stationary, such as a cellular device that is embedded in a mounted home security system. A terminal might be referred to as a terrestrial mobile device, a user receiver, user equipment (UE), or the like. A terminal might be designed for use in a terrestrial cellular network. However, some terminals could be designed with modified communications services to handle GNSS signals that are outside the scope of a terrestrial cellular network protocol used by the terminal. Modifications to a terminal might or might not require changes to the chips, firmware, application software, and/or frequencies of operation in the terminals. Thus, for some terminals, GNSS functionality could be obtained using the terminal's existing protocol elements such as frequency, modulation, coding, etc., and would not require substantial modifications. Modifications might be limited to a software or firmware update on the terminals, or might require no change at all.
As explained herein, a terminal might receive navigation signals from a satellite of a GNSS network and the satellite might also be used to send point-to-point data transmissions to specific terminals, whereas the navigation signals might be widely available to many terminals. A terminal can receive those navigation signals, decode the signals into data, and process the data to find a navigation solution, perhaps in the form of a set of navigation parameters the terminal can use. Satellites A satellite might be an object, vehicle, etc. that is configured, designed, constructed, etc. to operate in an orbit, such as an Earth orbit, perhaps designed consistent with a particular orbital altitude as a design target and/or a particular orbital orientation. A satellite might include various electronic components that allow for communication among satellites, communication with fixed ground stations that are specific to satellite support, command, control, etc., and/or communication with terrestrial devices such as terminals. A satellite need not be in orbit to perform a function described herein. For example, the present disclosure can encompass a system that could be deployed in orbit but that is operating on Earth, either in a testing mode or a production mode. A satellite might be capable of transmitting signals over beam patterns and a network of satellites might provide cellular coverage of any location on the Earth with one or more beams using one or more satellites simultaneously. Each beam might implement a full-duplex bandwidth for simultaneous uplink and downlink communications using protocols such as Global System for Mobile Communications (GSM); Frequency Division Duplex (FDD) or Time Division Duplex (TDD) Long Term Evolution (LTE); or FDD or TDD 5G New Radio (NR). Some beams could implement receive-only operations to support uplink measurement of signals from terminals or base stations on the ground. Cells of a cellular communication network might be implemented via narrow beams wherein each beam serves as a cell (e.g., a Base Transceiver Station (BTS), Evolved Node B (eNB), or gNB). These cells could employ standard control and user plane channels typically implemented in a common GSM, LTE, NR, etc., network. One of these control channels might be the BCCH, or Broadcast Control CHannel. Constellations In one embodiment, the satellite communications constellation might deploy thousands of satellites in Low Earth Orbit (LEO) at an altitude of approximately 500 km. The orbits might be circular or elliptical in shape. Other embodiments may deploy fewer or more satellites in orbits lower than or higher than 500 km. For simplicity, in one embodiment, the orbit configuration might be consistent with a Walker-style constellation. In this configuration, the satellites are placed in orbits with common altitudes. The satellite positions may be evenly spaced within inertial planes around the Earth, with the planes equally, at least approximately, spaced in inertial longitude of the ascending node. Large-scale LEO constellations of hundreds and thousands of satellites might be primarily designed for communications services, but they can also be configured as transmit platforms in orbit that could provide enhanced GNSS services compared to their predecessors, as explained herein.
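For illustration only, the following sketch generates the evenly spaced Walker-style slot geometry described above; the 40-satellite size, 8 planes, and phasing factor are assumed example values, not parameters of the disclosure.

```python
from dataclasses import dataclass

@dataclass
class SatelliteSlot:
    raan_deg: float          # right ascension of ascending node (plane position)
    mean_anomaly_deg: float  # in-plane position of the satellite

def walker_constellation(total_sats: int, planes: int, phasing: int):
    """Generate evenly spaced Walker-delta slots: `planes` equally spaced
    orbital planes with total_sats/planes satellites equally spaced in each,
    and an inter-plane phase offset of 360*phasing/total_sats degrees."""
    per_plane = total_sats // planes
    slots = []
    for p in range(planes):
        raan = 360.0 * p / planes
        for s in range(per_plane):
            anomaly = (360.0 * s / per_plane
                       + 360.0 * phasing * p / total_sats) % 360.0
            slots.append(SatelliteSlot(raan, anomaly))
    return slots

# Assumed example: 40 satellites in 8 planes with phasing factor 1.
for slot in walker_constellation(40, 8, 1)[:5]:
    print(slot)
```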
With thousands of satellites in a constellation, a terminal anywhere on Earth might have a line-of-sight path to enough distinct satellites to quickly gather navigation signals and compute a navigation solution for the terminal. Line-of-sight might be a state of a satellite being above a horizon relative to the terminal. Many large-scale LEO constellations target transmission frequencies in the range of 11 to 50 GHz, and that spectrum can suffer from significant link loss due to propagation, atmospheric, fading, and foliage effects. As a result, such satellite systems might typically be designed with directive antenna technologies (small dishes for users) to receive high SNR communications services across more limited coverage areas. Both of these design factors can limit coverage for GNSS-usable signals to high elevation angles, reducing the value of the number of satellites available above the horizon. Large-scale LEO constellations designed for communications services are typically designed to provide these services to user terminals with the user terminals having unique hardware designed specifically for compatibility with the constellation providing the service. Typically, these user terminals are static or in a fixed location on the Earth. This presents several challenges for GNSS solutions for handsets since the links are not typically between satellites and mobile equipment. Since the satellite user's fixed terminal receives the satellite signals, peripheral devices, such as computers, mobile phones, etc., can only infer their distances from the fixed user terminal and cannot typically resolve their location. Embodiments described herein can provide a GNSS satellite system that improves on existing and planned solutions and will even support existing legacy cell phones, including those without built-in GPS receivers. Phased Array As explained herein, a satellite network might be used to provide GNSS functionality for terminals that are designed to communicate with current and future terrestrial 3GPP networks, but not necessarily with a GNSS system. A satellite communications system in LEO designed for communications services might deploy simultaneous GNSS services. To facilitate high SNR links for broadband mobile services, the satellites might employ directive, and perhaps steerable, antenna technologies, such as phased arrays. A phased array might comprise a number of individual antenna elements, which might be arranged in a grid, a rectangle, a line, a square, a hexagon, or some two-dimensional (2D) pattern of antenna elements. Together, the collection of antenna elements can form a phased array, with a radio system coordinating the signals to be transmitted from each of a plurality of antenna elements and, based on the coordination of signals, transmitting over a beam having a desired lobe pattern. An individual antenna element of the phased array can be used for wide beam transmissions, such as in-band, lower SNR transmissions that can be used for wide area coverage. Since the spacing of satellites in the constellation might be designed based on coverage from higher directivity, narrower beams, or array factors, a wider field of view can provide signals from many satellites to a receiver anywhere around the planet. This can speed resolving a navigation solution. The directivity dictated by the lobe pattern of the antennas may vary based on altitude, the field of coverage, and desired capacity/user data rates.
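The contrast between a single wide-beam element and the full coordinated array can be seen from the array factor. The sketch below is illustrative only; a uniform linear array with half-wavelength spacing is an added assumption, and individual element patterns are ignored. A lone element radiates essentially uniformly, while sixteen coordinated elements form a steerable main lobe.

```python
import numpy as np

def array_factor(n_elements: int, spacing_wl: float, steer_deg: float, theta_deg):
    """Normalized array factor magnitude of a uniform linear array:
    n_elements with spacing_wl (in wavelengths), phased so the main lobe
    points at steer_deg."""
    theta = np.radians(np.asarray(theta_deg, dtype=float))
    steer = np.radians(steer_deg)
    k = 2 * np.pi  # wavenumber times one wavelength
    psi = k * spacing_wl * (np.sin(theta) - np.sin(steer))
    n = np.arange(n_elements)
    return np.abs(np.exp(1j * np.outer(psi, n)).sum(axis=1)) / n_elements

angles = np.linspace(-90, 90, 181)
single = array_factor(1, 0.5, 0, angles)    # one element: flat (wide) pattern
full = array_factor(16, 0.5, 20, angles)    # 16 elements steered to 20 degrees
print("single-element peak/edge ratio:", single.max() / single.min())
print("16-element main lobe at:", angles[full.argmax()], "deg")
```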
Phased arrays can easily employ beam steering of multiple simultaneous beams to support seamless, high SNR coverage across a wide satellite footprint. A steerable phased array antenna may use antenna elements with wide beam patterns, such that appropriate spacing of antenna elements can accommodate beam steering off boresight angles of higher degrees. For instance, a highly directional phased array may employ antenna elements with full cone, half-power beamwidth radiation patterns as wide as 60, 90, or 120 degrees. Other implementations may use many low gain antenna elements to produce appropriate directivity and steerability from the array factor. A constellation of satellites might use phased arrays on board to steer a beam, or beams, at a single location on Earth. The result might be a circular or oval coverage footprint on the Earth's surface describing the coverage area for that beam or cell. In a typical LTE network, a radio transceiver station that deploys a cell like this might be referred to as an eNB or eNodeB. 2G and 5G networks may refer to these transceiver stations as BTS or gNB, respectively. Data Transmissions For data communications, a satellite might be part of a constellation that forms a space-based network of satellites designed to communicate with standard mobile handsets according to some pre-agreed protocol. Examples of data communication technologies and protocols include TDMA, FDMA, OFDMA, 3GPP technologies such as GSM (2G), CDMA (3G), UMTS (3G), LTE (4G), NR (5G), and/or others. Examples and details are provided in Speidel I through Speidel IV. Data transmissions might be for sending messages from a ground station to a satellite and then having the satellite forward the message to the terminal using protocols the terminal is programmed or configured for. For example, the satellite might send a transmission using a protocol that a standard smartphone could receive. GNSS Transmissions A GNSS transmission might involve a satellite emitting a navigation signal that can be received by a terminal or a large number of terminals in a large footprint of the navigation signal. As navigation signals need not be specific to any one terminal, they can be broadcast over a wide beam, so that any terminals that are within range (such as line-of-sight) of a satellite can pick up that satellite's navigation signals, combine navigation signals, and compute a navigation solution to derive a set of navigation parameters. While multiple terminals might receive a navigation signal from a satellite, each of them might compute a different navigation solution, reflecting the fact that different states (such as position, velocity, etc.) of different terminals could result in the navigation signal appearing different to different terminals. The frequencies used for deploying the GNSS navigation signals may be in the same bands used for service links, TT&C links, or feeder links. This can be done with a guard band or in-band implementation of the GNSS signals. The GNSS signals may also use a frequency different from the service links, TT&C links, or feeder links. Unlicensed bands (e.g., ISM) may be used over certain geographies. Furthermore, existing elements of the protocols used for the service links, TT&C links, or feeder links might be used for the GNSS signals, such as existing modulation and coding schemes (mod-cods, or MCS, etc.), control plane channels, or data plane channels.
As an example, a system information block (SIB) in a Broadcast Control Channel (BCCH) in a control plane of the LTE air interface protocol could be used to transmit the needed navigation message or other GNSS information. Navigation parameters related to geolocation can be changing as a terminal is moving. While satellites in orbit are not stationary, they are typically in well-characterized orbits with a location known to good precision at times that are precisely known. The two-way Doppler shift and round-trip time delay of a terminal's radio emissions can be measured onboard the satellite with sufficient accuracy to provide useful geolocation fixes for the network and for end-user purposes. Satellite constellations or space segment networks in orbit around the Earth may be deployed for high-SNR telecommunications or data services to a terminal and in parallel can provide signals that can be used for determining some of a terminal's navigation parameters and a navigation solution that could indicate to a terminal its current position, a current time, or other navigation or state details. As explained herein, a satellite network might be used to provide a separate GNSS solution for terminals, such as standard smartphones and cellphones, that are designed to communicate with current and future terrestrial 3GPP networks, but not necessarily with a GNSS system. A satellite communications system in LEO designed for communications services might deploy simultaneous GNSS services. An architecture of a satellite communications system is described for providing GNSS solutions using a common communications platform and infrastructure. Directive technologies, such as phased arrays, often provide optimal solutions for communications systems that desire to optimize link SNRs for higher throughput links. Since GNSS links do not require high throughput, they might be deployed with individual, or smaller groups of, antenna elements in a larger phased array. Alternatively, a separate, low gain, wide beamwidth antenna might be installed on the satellite as a dedicated aperture for GNSS signals. Variations of phased array activation might be used as well. Based on the desired beamwidth of the GNSS-relevant radiating antenna elements, the system may use multiple antenna elements in a “smaller” phased array with “some” directivity, not as much as the entire array but more than an individual antenna element. MIMO techniques could be leveraged in certain implementations. Multiple, individual antenna elements in the phased array, or separate phased arrays, with enough separation, can increase throughput and link budget for the GNSS-type services deployed by the satellite. Examples may include antenna elements widely separated in a large phased array, antenna elements on separate phased arrays, or antenna elements on separate phased arrays on separate satellites. The use of low frequency (e.g., sub-1 GHz or some low frequency) cellular bands provides favorable propagation characteristics of signals, enabling wide area coverage. A telecommunications service deployed in orbit may leverage a variety of frequencies, but some may be more suitable for wider area fields of coverage and support lower path loss and better link budgets. The system might be an LTE system in orbit and could use guard bands in LTE (180 kHz blocks) for GNSS frequency blocks. The GNSS RF spectrum could be broken into multiple orthogonal carriers (like LTE).
This might take a form similar to the NB-IoT PHY protocol structure for deploying GNSS in the guard band or even doing it “in-band”. Alternatively, the guard band implementation could be GSM channels between LTE blocks. Combined Transmissions on a Phased Array FIG.1is a diagram illustrating a phased array as might be used in an embodiment. A phased array might comprise multiple antenna elements in a two-dimensional pattern. Individual antenna elements within, or subsections of, a phased array can generate more widespread signals by using wider beams. Using the entire phased array and coordinating the signals emitted from antenna elements can generate a highly directive main lobe, which might be used, or needed, for high SNR communications services. As shown inFIG.1, a square pattern phased array101comprises multiple individual antenna elements that together can generate a directive beam103, which might be used for high SNR telecommunications services. Directive beam103might be steerable across a field of view, ultimately defined by the relative spacing of the 4 by 4 array of antenna elements109and a beamwidth of a radiation pattern107of an individual antenna element105. Individual antenna elements in the phased array can be activated for GNSS signal generation to achieve more widespread signal relevance. Multiple individual antenna elements, and sub-arrays such as111and105, might be used on a common phased array101to provide redundant or additional signals. Furthermore, sub-sections111of the array might be activated for a slightly more directive beam than an individual antenna element but less directive than the main beam103. FIG.2illustrates how a phased array might be used to project beams onto the Earth's surface using a main highly directive beam205and a wide beam pattern203.FIG.2illustrates a satellite on orbit, which has a field of view of the Earth that can be characterized in terms of Doppler shift and propagation delay (or range) using contours of overlapping concentric circles (range contours) and approximately parabolic lines (Doppler contours). The Doppler contour lines would be precisely parabolic if the Earth were flat, but for purposes of explanation it can be noted that they are slightly modified parabolic sections of a cone cut by an oblate spheroid rather than the simple conic section. A directive beam205is illustrated, providing high SNR services within a small spot beam portion of the field of view. Wide beam antennas can cover a much larger contour area within the satellite field of view. As shown inFIG.2, a satellite201, perhaps equipped with a phased array, has a field of view of the Earth203, which might be covered by one antenna element in the larger phased array. A directive beam205from the phased array may result in a localized signal spot beam215on the Earth designed for high SNR telecommunications services. An individual antenna element, or a subset of antenna elements, of the phased array may result in a wider signal spot beam217on the Earth. The spot beams would cover the satellite field of view, described with Doppler contours211and range contours209. To support compatibility between the wide beams (for GNSS) and narrow beams (for high SNR services), there may be a spectrum sharing/coordination arrangement. For instance, main beam(s)215may deploy LTE spectrum blocks of some bandwidth219dedicated for telecommunications/data services. Wide beams217may deploy narrow signal bandwidths in the LTE spectrum block guard bands221or in-band223.
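To make the guard-band idea concrete, the sketch below (an editorial illustration, not part of the disclosure) uses the standard LTE channel bandwidths and resource-block counts to estimate how many 180 kHz narrowband carriers could nominally fit in each guard edge. A real deployment would retain additional guard margin, so these counts are upper bounds.

```python
# Standard LTE channel bandwidths (MHz) and their resource-block counts.
LTE_RB = {1.4: 6, 3: 15, 5: 25, 10: 50, 15: 75, 20: 100}
RB_BW_KHZ = 180  # one LTE resource block spans 180 kHz

for chan_mhz, n_rb in LTE_RB.items():
    occupied_khz = n_rb * RB_BW_KHZ
    guard_khz = chan_mhz * 1000 - occupied_khz      # total guard, both edges
    fits = int(guard_khz // 2 // RB_BW_KHZ)         # 180 kHz carriers per edge
    print(f"{chan_mhz:>4} MHz LTE: occupied {occupied_khz/1000:.2f} MHz, "
          f"guard {guard_khz/1000:.2f} MHz, {fits} x 180 kHz slots per edge")
```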
Multiple signals might be sent at the same time, using multiple frequency blocks and a number of multiple access schemes such as OFDMA, TDMA, or FDMA. Depending on capacity, a satellite might allocate different time slots or different LTE resource blocks to improve the GNSS resolution at the temporary expense of telecom traffic capacity. By allocating more spectral resources to the GNSS services in terms of bandwidth, power, and time, the geographic resolution might be improved. Wider bands might allow for improved measurement and finer resolutions. Averaging over time can also be done, if smearing due to fast-moving satellites is taken into account. A satellite's measurement of a transmission delay and a Doppler shift of communication with a terminal allows for computation of contour lines and evaluating where they intersect. This can provide good estimates of the geolocation of the terminal. However, there is the likely possibility that the approximately parabolic Doppler contour line will intersect the approximately circular timing contour at two places, leading to ambiguity when solving for the location parameters. This is illustrated inFIG.2, where a device may be operating at a location215within the satellite's field of view. In this example, the Doppler shift and delay measurement at the satellite results in two possible position solutions225and227based on the intersection of the measured Doppler shift and delay contours. Consideration of the power at the two possible locations can often help resolve the ambiguity, particularly when the satellite is steering a directional beam205toward a location perpendicular to the ground path. Furthermore, the measurement of the Doppler shift and time delay on the air interface link can be conducted in various places in the system. For instance, the terminals themselves may be capable of measuring the Doppler shift and the time delay over the link between them and a satellite. This, combined with information about the transmitted carrier frequency of the satellite (which can be used to measure the Doppler shift) and the ephemeris of the satellite, allows for the computation of a position estimate of the terminal on the Earth. This could also be done by a ground station, which might form a link with a terminal through a satellite as a bent pipe. The Doppler shift on the link can be measured, generally, by comparing the receive frequency to the satellite transmit frequency (which may vary as a function of time). The terminal can measure time delay in various ways. One example is that the satellite sends a time stamp in the BCCH, other control channel, or other traffic channel, which is used to compare to the received timestamp. Another way might be to have the satellite send a timing advance to the terminal, possibly using conventional cellular protocols, but with a higher value than typical limited terrestrial distance timing advances. The delay and time measurements can be used along with the ephemeris of the satellites (delivered either by terrestrial means or the space network) to compute a location relative to a satellite ephemeris. This may be done over multiple RF bursts, from one or more satellites, to home in on a more accurate position. Navigation Signals Navigation services could be provided from a mixture of timing, Doppler shift, and attitude measurements to determine the position of a terminal, and attitude and geography information might be used to resolve ambiguities in the fix determinations.
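The two-solution geometry described above can be sketched in a few lines. The example below uses a simplified flat-Earth model, which is an added assumption rather than the disclosure's method: the Doppler measurement fixes the terminal's along-track coordinate, and the range measurement then leaves two mirror-image cross-track solutions.

```python
import math

C = 299_792_458.0  # speed of light (m/s)

def geolocate(rho_m, doppler_hz, f_c_hz, sat_alt_m, sat_speed_ms):
    """Solve for a terminal position in a simplified flat-Earth frame:
    satellite at (0, 0, h) moving along +x at sat_speed_ms, terminal on z = 0.
    Returns the two candidate fixes (the Doppler/range ambiguity)."""
    # Doppler fixes the along-track coordinate: f_d = f_c * v * x / (c * rho)
    x = doppler_hz * rho_m * C / (f_c_hz * sat_speed_ms)
    y_sq = rho_m**2 - x**2 - sat_alt_m**2
    if y_sq < 0:
        raise ValueError("inconsistent measurements")
    y = math.sqrt(y_sq)
    return (x, y), (x, -y)   # mirror-image solutions about the ground track

# Illustrative, assumed numbers: 500 km altitude, 7.6 km/s, 900 MHz carrier.
fix_a, fix_b = geolocate(rho_m=800e3, doppler_hz=12e3,
                         f_c_hz=900e6, sat_alt_m=500e3, sat_speed_ms=7600.0)
print(fix_a, fix_b)
```

As the text notes, the satellite would pick between the two returned fixes using the received power expected at each candidate location, or by combining fixes from multiple satellites or passes.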
Satellites could operate at distances from each other of tens, hundreds or thousands of kilometers, allowing the opportunity for multiple satellites in a constellation to receive signals from the ground and more accurately measure Doppler shift and delay to support positioning computations by the network, or the terminals, to solve for a set of navigation parameters for the terminals, as illustrated inFIG.3. Additionally, the terminals may be capable of listening to and measuring the timing and Doppler shift on signals from multiple satellites at one time, which may have overlapping wide beams, or overlapping directive beams, and may be on the same or different carrier frequencies. With this, multiple measurements can be made simultaneously, and these multiple measurements can be used to compute a higher accuracy for position or other navigation parameters, directly on the terminal, without requiring any other separate GNSS signals. FIG.3illustrates a process for geolocation of a terminal by finding an intersection of a range contour curve and a Doppler shift contour curve where the timing advance and the Doppler shift are measured from an uplink RACH Channel Request message and subsequent uplink messages. There is the potential for ambiguity among two solutions to the intersection of these two curves, but this can be resolved by considering the signal strength on the ground at those two possible fixes. The ambiguity is greatly reduced when the satellite attitude is pointed to one side of its cross-track direction so that the two ambiguous roots are far apart from the path, with one receiving the full strength of the directional beam while the other is substantially weaker. Fixes using measurements from multiple satellites and various times can also improve accuracy and resolve the ambiguity. Broadcast Control Channel Some GNSS services might be able to be provided using, at least in part, a broadcast control channel. A broadcast control channel is often used as a one-way communications mechanism to deliver “cellular broadcast” (CB) messages in a one-to-many fashion. CB is a transmission of a text-type message that is slightly different in appearance from a standard SMS in that it is displayed on the home screen of a mobile device and can have a distinct warning tone. In one protocol, the maximum length of a cellular broadcast message is 1395 characters. Specifically, CB messages comprise up to 15 pages and each page is limited to 93 alphanumeric characters. A full CB message can be sent every 1.79 seconds. For instance, in the GSM protocol, a CB message can be delivered using four SDCCH channels every “multi-frame.” One multi-frame is 51 frames, where frames comprise eight timeslots, each timeslot duration being 576.9 microseconds. A multi-frame is approximately 0.235 seconds. Each SDCCH channel of CB data can include 46 bits, totaling 184 bits every 0.235 seconds. Therefore, a GSM embodiment's peak CB data rate would be approximately 781 bps. This data rate is superior to that of the existing GNSS system deployed by the GPS constellation, which has a bit rate of 50 bps, so a navigation message of the same size can be delivered an order of magnitude faster using the CB protocol. Furthermore, the CB channel leverages the lowest mod-cod in the protocol specification. For example, in GSM, this might be GMSK with an encoding rate of 0.533. In LTE, this might be QPSK with an encoding rate of 0.0762.
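As a quick check of the throughput arithmetic above (before returning to the modulation and coding schemes just mentioned), the sketch below recomputes the multi-frame duration and the CB data rate. The 1500-bit message size used for comparison is the classic GPS LNAV frame length, an added reference point rather than a figure from the text.

```python
# GSM timing figures from the text above.
TIMESLOT_S = 576.9e-6          # one timeslot
FRAME_S = 8 * TIMESLOT_S       # one TDMA frame = 8 timeslots
MULTIFRAME_S = 51 * FRAME_S    # one 51-frame multi-frame (~0.235 s)

BITS_PER_MULTIFRAME = 184      # four SDCCH blocks of CB data per multi-frame

cb_rate_bps = BITS_PER_MULTIFRAME / MULTIFRAME_S
print(f"multi-frame: {MULTIFRAME_S:.3f} s, CB rate: {cb_rate_bps:.0f} bps")

# Time to deliver a GPS-sized navigation message each way:
NAV_MSG_BITS = 1500            # classic GPS LNAV frame size (assumed comparison)
print(f"CB channel: {NAV_MSG_BITS / cb_rate_bps:.1f} s "
      f"vs. GPS at 50 bps: {NAV_MSG_BITS / 50:.0f} s")
```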
These modulation and coding schemes may have sufficient link closure at SINR levels greater than 7 dB and −5 dB, respectively. The CB operates on channels different from the mobile networks' channels dedicated to voice, SMS, and data, and thus does not contribute to network congestion. A CB message is sent from the mobile network's core network with an identifier called the “Message ID.” Mobile phone users can make configuration changes on their handsets to display various alert message types. Most mobile phones come pre-configured for specific Message IDs that are important, such as Presidential Alerts. Configuration changes might involve changing the Message ID or Message IDs allocated for specific emergency alerts. Some handsets are pre-allocated for receiving specific Message IDs for CB. These Message IDs may include those allocated for the Earthquake & Tsunami Warning System (ETWS) or similar natural disasters and weather conditions. Some are allocated for lost or kidnapped children (e.g., AMBER alerts). Others are allocated for large-scale government messaging to nationwide populations (e.g., Presidential alerts). Thousands of Message IDs can be utilized in the cellular broadcast channel. Of these thousands, only tens to hundreds are already allocated for certain CB content (e.g., message ID 4352 is for earthquake warning, 4353 is for tsunami warning, 4370 is for Presidential alerts, etc.). Message IDs 0-999 are not allocated for any cellular broadcast content. Message IDs 1001 through 1003 are for Differential GPS Correction Data, GPS Ephemeris, and Clock Correction data for GPS Almanac and other similar data supporting assisted GPS. Each satellite in a space network might be programmed to consistently broadcast a CB message to areas of service for each deployed beam. The broadcast messages may use the same Message ID to be understood for the GNSS system. Application software on each handset may understand to pull broadcast messages on this Message ID to collect the navigation message. The pages of the CB message could be encoded with the appropriate navigation message field values. Alternative Methods for GNSS Using Telecommunications Protocols In Speidel I, a method is disclosed to measure the Doppler shift and the delay of RACH signals from terminals operating as UEs on a mobile space network compatible with LTE and GSM devices. These measurements can be used to compute the location of the terminal/UE within the satellite footprint. This information could be computed by the satellite based on RACH signals and delivered to the user using standard two-way delivery protocols, such as SMS or push notifications. The typical control plane channels leveraged by GSM or LTE could be redesigned to accommodate GNSS-relevant signaling. Existing control plane features could be leveraged or repurposed for GNSS-type services to support this. FIG.3illustrates an example300of conveying content to a UE via cellular broadcast messages, according to an embodiment. As illustrated there, content to be distributed to the UE is stored in storage302and provided to an ingest server304. A command-and-control center306can receive content from ingest server304and provide it to a content provider interface310.
A cellular broadcast entity (CBE)312can receive data from content provider interface310and provide them to a cellular broadcast center (CBC)314, which in turn can construct cellular broadcast messages and convey those to an uplink ground station316and provide those in a proprietary protocol or a standard protocol to an orbital base station320. In some embodiments, the CBE and the CBC are implemented in a command-and-control center and/or, in others, on the satellite. Orbital base station320can transmit a signal, such as those described in Speidel I, for receipt by the user equipment322, possibly using conventional cellular broadcast protocols. Orbital base station320can transmit signals to terrestrial user equipment with particular delay characteristics and Doppler shifts as explained in more detail herein. Terminal Adaptations The typical 3GPP protocols, such as GSM, LTE, NR, etc., only allow user equipment (UE) to lock onto one BTS carrier frequency or one eNB's primary component carrier frequency. Since, fundamentally, only one eNB can serve a UE in a given cell in the LTE protocol, there may be only one satellite beam with the main lobe directed at the UE in the serving area. As a result, the UE can only receive information on one BCCH channel from one satellite at a time. The protocol could be modified to allow the UE to listen to the BCCH from multiple satellites, which would allow it to receive the information that it needs for the navigation solution more rapidly. This solution might be implemented as a software application or firmware modification on handsets. The terminal might receive the navigation message from multiple satellites throughout multiple overpasses. To get at least four satellite navigation messages, the terminal may need to wait for 8 to 12 minutes. This means that if the terminal does not move at all in 12 minutes, it might get an accurate set of navigation parameters and thus an accurate navigation solution. This might be useful for applications where the terminal is, in fact, stationary almost always, such as an IoT-type device that might be portable but largely operates while stationary. Terminals might derive both their local RF oscillator and their data clocks from the received downlink signal from a spacecraft in a constellation. For increased accuracy, the satellite might implement high accuracy telemetry for the on-orbit ephemeris (state vectors) with very accurate, perhaps atomic, clocks. Examples are described in ETSI EN 300912 § 6.1 to § 6.2. Hardware Examples FIG.4illustrates an example of data structures that might be present in memory or storage accessible to computer processors. In some embodiments, the data structures are used by various components and tools, some of which are described in more detail herein. The data structures and program code used to operate on the data structures may be provided and/or carried by a transitory computer readable medium, e.g., a transmission medium such as in the form of a signal transmitted over a network. According to some embodiments, the techniques described herein are implemented by one or more generalized computing systems programmed to perform the techniques pursuant to program instructions in firmware, memory, other storage, or a combination. Special-purpose computing devices may be used, such as desktop computer systems, portable computer systems, handheld devices, networking devices or any other device that incorporates hard-wired and/or program logic to implement the techniques.
One embodiment might include a carrier medium carrying data that includes data having been processed by the methods described herein. The carrier medium can comprise any medium suitable for carrying the data, including a storage medium, e.g., solid-state memory, an optical disk or a magnetic disk, or a transient medium, e.g., a signal carrying the data such as a signal transmitted over a network, a digital signal, a radio frequency signal, an acoustic signal, an optical signal or an electrical signal. FIG.5is a block diagram that illustrates a computer system500upon which the computer systems of the systems described herein and/or data structures shown inFIG.4may be implemented. Computer system500includes a bus52or other communication mechanism for communicating information, and a processor54coupled with bus52for processing information. Processor54may be, for example, a general-purpose microprocessor. Computer system500also includes a main memory56, such as a random-access memory (RAM) or other dynamic storage device, coupled to bus52for storing information and instructions to be executed by processor54. Main memory56may also be used for storing temporary variables or other intermediate information during execution of instructions to be executed by processor54. Such instructions, when stored in non-transitory storage media accessible to processor54, render computer system500into a special-purpose machine that is customized to perform the operations specified in the instructions. Computer system500further includes a read only memory (ROM)58or other static storage device coupled to bus52for storing static information and instructions for processor54. A storage device510, such as a magnetic disk or optical disk, is provided and coupled to bus52for storing information and instructions. Computer system500may be coupled via bus52to a display512, such as a computer monitor, for displaying information to a computer user. An input device514, including alphanumeric and other keys, is coupled to bus52for communicating information and command selections to processor54. Another type of user input device is a cursor control516, such as a mouse, a trackball, or cursor direction keys for communicating direction information and command selections to processor54and for controlling cursor movement on display512. This input device typically has two degrees of freedom in two axes, a first axis (e.g., x) and a second axis (e.g., y), that allows the device to specify positions in a plane. Computer system500may implement the techniques described herein using customized hard-wired logic, one or more ASICs or FPGAs, firmware and/or program logic which in combination with the computer system causes or programs computer system500to be a special-purpose machine. According to one embodiment, the techniques herein are performed by computer system500in response to processor54executing one or more sequences of one or more instructions contained in main memory56. Such instructions may be read into main memory56from another storage medium, such as storage device510. Execution of the sequences of instructions contained in main memory56causes processor54to perform the process steps described herein. In alternative embodiments, hard-wired circuitry may be used in place of or in combination with software instructions. The term “storage media” as used herein refers to any non-transitory media that store data and/or instructions that cause a machine to operate in a specific fashion.
Such storage media may include non-volatile media and/or volatile media. Non-volatile media includes, for example, optical or magnetic disks, such as storage device510. Volatile media includes dynamic memory, such as main memory56. Common forms of storage media include, for example, a floppy disk, a flexible disk, hard disk, solid state drive, magnetic tape, or any other magnetic data storage medium, a CD-ROM, any other optical data storage medium, any physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, NVRAM, any other memory chip or cartridge. Storage media is distinct from but may be used in conjunction with transmission media. Transmission media participates in transferring information between storage media. For example, transmission media includes coaxial cables, copper wire, and fiber optics, including the wires that include bus52. Transmission media can also take the form of acoustic or light waves, such as those generated during radio-wave and infra-red data communications. Various forms of media may be involved in carrying one or more sequences of one or more instructions to processor54for execution. For example, the instructions may initially be carried on a magnetic disk or solid-state drive of a remote computer. The remote computer can load the instructions into its dynamic memory and send the instructions over a network connection. A modem or network interface local to computer system500can receive the data. Bus52carries the data to main memory56, from which processor54retrieves and executes the instructions. The instructions received by main memory56may optionally be stored on storage device510either before or after execution by processor54. Computer system500also includes a communication interface518coupled to bus52. Communication interface518provides a two-way data communication coupling to a network link520that is connected to a local network522. For example, communication interface518may be a network card, a modem, a cable modem, or a satellite modem to provide a data communication connection to a corresponding type of telephone line or communications line. Wireless links may also be implemented. In any such implementation, communication interface518sends and receives electrical, electromagnetic, or optical signals that carry digital data streams representing various types of information. Network link520typically provides data communication through one or more networks to other data devices. For example, network link520may provide a connection through local network522to a host computer524or to data equipment operated by an Internet Service Provider (ISP)526. ISP526in turn provides data communication services through the world-wide packet data communication network now commonly referred to as the “Internet”528. Local network522and Internet528both use electrical, electromagnetic, or optical signals that carry digital data streams. The signals through the various networks and the signals on network link520and through communication interface518, which carry the digital data to and from computer system500, are example forms of transmission media. Computer system500can send messages and receive data, including program code, through the network(s), network link520, and communication interface518. In the Internet example, a server530might transmit a requested code for an application program through the Internet528, ISP526, local network522, and communication interface518. 
The received code may be executed by processor54as it is received, and/or stored in storage device510, or other non-volatile storage for later execution. Operations of processes described herein can be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context. Processes described herein (or variations and/or combinations thereof) may be performed under the control of one or more computer systems configured with executable instructions and may be implemented as code (e.g., executable instructions, one or more computer programs or one or more applications) executing collectively on one or more processors, by hardware or combinations thereof. The code may be stored on a computer-readable storage medium, for example, in the form of a computer program comprising a plurality of instructions executable by one or more processors. The computer-readable storage medium may be non-transitory. The code may also be carried by a transitory computer readable medium, e.g., a transmission medium such as in the form of a signal transmitted over a network. Conjunctive language, such as phrases of the form “at least one of A, B, and C,” or “at least one of A, B and C,” unless specifically stated otherwise or otherwise clearly contradicted by context, is otherwise understood with the context as used in general to present that an item, term, etc., may be either A or B or C, or any nonempty subset of the set of A and B and C. For instance, in the illustrative example of a set having three members, the conjunctive phrases “at least one of A, B, and C” and “at least one of A, B and C” refer to any of the following sets: {A}, {B}, {C}, {A, B}, {A, C}, {B, C}, {A, B, C}. Thus, such conjunctive language is not generally intended to imply that certain embodiments require at least one of A, at least one of B and at least one of C each to be present. The use of examples, or exemplary language (e.g., “such as”) provided herein, is intended merely to better illuminate embodiments of the invention and does not pose a limitation on the scope of the invention unless otherwise claimed. No language in the specification should be construed as indicating any non-claimed element as essential to the practice of the invention. In the foregoing specification, embodiments of the invention have been described with reference to numerous specific details that may vary from implementation to implementation. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. The sole and exclusive indicator of the scope of the invention, and what is intended by the applicants to be the scope of the invention, is the literal and equivalent scope of the set of claims that issue from this application, in the specific form in which such claims issue, including any subsequent correction. Further embodiments can be envisioned by one of ordinary skill in the art after reading this disclosure. In other embodiments, combinations or sub-combinations of the above-disclosed invention can be advantageously made. The example arrangements of components are shown for purposes of illustration and combinations, additions, re-arrangements, and the like are contemplated in alternative embodiments of the present invention. Thus, while the invention has been described with respect to exemplary embodiments, one skilled in the art will recognize that numerous modifications are possible.
For example, the processes described herein may be implemented using hardware components, software components, and/or any combination thereof. It will, however, be evident that various modifications and changes may be made thereunto without departing from the broader spirit and scope of the invention as set forth in the claims and that the invention is intended to cover all modifications and equivalents within the scope of the following claims. All references, including publications, patent applications, and patents, cited herein are hereby incorporated by reference to the same extent as if each reference were individually and specifically indicated to be incorporated by reference and were set forth in its entirety herein.
46,147
11863251
DETAILED DESCRIPTION The embodiments set forth below represent information to enable those skilled in the art to practice the embodiments and illustrate the best mode of practicing the embodiments. Upon reading the following description in light of the accompanying drawing figures, those skilled in the art will understand the concepts of the disclosure and will recognize applications of these concepts not particularly addressed herein. It should be understood that these concepts and applications fall within the scope of the disclosure. Note that although terminology from 3GPP LTE has been used in this disclosure, this should not be seen as limiting the scope of the disclosure to only the aforementioned system. Other wireless systems, including New Radio (NR) (i.e., Fifth Generation (5G)), Wideband Code-Division Multiple Access (WCDMA), Worldwide Interoperability for Microwave Access (WiMax), Ultra Mobile Broadband (UMB), and Global System for Mobile Communications (GSM), may also benefit from exploiting the ideas covered within this disclosure. Also note that terminology such as evolved or enhanced NodeB (eNodeB) and User Equipment (UE) should be considered non-limiting and does not imply a certain hierarchical relation between the two; in general “eNodeB” could be considered as device 1 and “UE” device 2, and these two devices communicate with each other over some radio channel. Herein, wireless transmissions in the downlink are discussed in detail, but some embodiments of the disclosure are equally applicable in the uplink. In this regard,FIG.1illustrates one example of a wireless system10(e.g., a cellular communications system) in which embodiments of the present disclosure may be implemented. The wireless system10includes a first node12, which in this example is a radio access node. However, the first node12is not limited to a radio access node and can be another device such as a general radio node allowing communication within a radio network, including a wireless device as described below. The radio access node12provides wireless access to other nodes such as wireless devices or other access nodes, such as a second node14, within a coverage area16(e.g., cell) of the radio access node12. In some embodiments, the second node14is a Long Term Evolution User Equipment (LTE UE). Note that the term “UE” is used herein in its broad sense to mean any wireless device. As such, the terms “wireless device” and “UE” are used interchangeably herein. LTE uses Orthogonal Frequency-Division Multiplexing (OFDM) in the downlink and Discrete Fourier Transform (DFT)-spread OFDM in the uplink. The basic LTE downlink physical resource can thus be seen as a time-frequency grid as illustrated inFIG.2, where each resource element corresponds to one OFDM subcarrier during one OFDM symbol interval. FIG.3illustrates a time-domain structure as may be used in the LTE wireless communication system. In the time domain, LTE downlink transmissions are organized into radio frames of 10 ms, each radio frame consisting of ten equally-sized subframes of length Tsubframe=1 ms. Furthermore, the resource allocation in LTE is typically described in terms of resource blocks, where a resource block corresponds to one slot (0.5 ms) in the time domain and twelve contiguous subcarriers in the frequency domain. Resource blocks are numbered in the frequency domain, starting with 0 from one end of the system bandwidth. 
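As a quick illustration of the numerology just described, the following sketch (an editorial illustration, not part of the disclosure) derives the subframe length and resource-block bandwidth. The 15 kHz subcarrier spacing is standard LTE but is an added assumption here, since the text above does not state it.

```python
# LTE downlink time-frequency grid constants from the description above.
FRAME_MS = 10.0
SUBFRAMES_PER_FRAME = 10
SLOTS_PER_SUBFRAME = 2            # one resource block spans one 0.5 ms slot
SUBCARRIERS_PER_RB = 12
SUBCARRIER_SPACING_KHZ = 15       # standard LTE spacing (assumed, not in text)

def rb_index(subcarrier: int) -> int:
    """Resource blocks are numbered from 0 starting at one edge of the band."""
    return subcarrier // SUBCARRIERS_PER_RB

def rb_bandwidth_khz() -> float:
    return SUBCARRIERS_PER_RB * SUBCARRIER_SPACING_KHZ   # 180 kHz

print("subframe length:", FRAME_MS / SUBFRAMES_PER_FRAME, "ms")
print("RB bandwidth:", rb_bandwidth_khz(), "kHz")
print("subcarrier 30 falls in RB", rb_index(30))
```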
Downlink transmissions are dynamically scheduled; i.e., in each subframe the base station transmits control information regarding to which terminals data is transmitted and upon which resource blocks the data is transmitted in the current downlink subframe. This control signaling is typically transmitted in the first 1, 2, 3 or 4 OFDM symbols in each subframe. A downlink system with 3 OFDM symbols as control is illustrated inFIG.4. LTE uses Hybrid Automatic Repeat Requests (HARQ), where after receiving downlink data in a subframe, the terminal attempts to decode it and reports to the base station whether the decoding was successful (ACK) or not (NACK). In case of an unsuccessful decoding attempt, the base station can retransmit the erroneous data. Uplink control signaling from the terminal to the base station consists of:
- HARQ acknowledgements for received downlink data;
- terminal reports related to the downlink channel conditions, used as assistance for the downlink scheduling; and
- scheduling requests, indicating that a mobile terminal needs uplink resources for uplink data transmissions.
In order to provide frequency diversity, these frequency resources hop on the slot boundary, i.e., one “resource” consists of 12 subcarriers at the upper part of the spectrum within the first slot of a subframe and an equally sized resource at the lower part of the spectrum during the second slot of the subframe, or vice versa. If more resources are needed for the uplink L1/L2 control signaling, e.g., in case of very large overall transmission bandwidth supporting a large number of users, additional resource blocks can be assigned next to the previously assigned resource blocks.FIG.5illustrates uplink L1/L2 control signaling transmission on a Physical Uplink Control Channel (PUCCH). As mentioned above, uplink L1/L2 control signaling includes HARQ acknowledgements, channel state information and scheduling requests. Different combinations of these types of messages are possible as described further below, but to explain the structure for these cases it is beneficial to discuss separate transmission of each of the types first, starting with the HARQ and the scheduling request. There are five formats defined for the PUCCH in Rel-13, each capable of carrying a different number of bits. For this background art, PUCCH formats 2 and 3 are the most relevant. UEs can report channel state information (CSI) to provide the eNodeB with an estimate of the channel properties at the terminal in order to aid channel-dependent scheduling. Such channel properties are those that tend to vary with the fading of the channel or with interference, such as the relative gain and phase of the channel between antenna elements, the signal to interference and noise ratio (SINR) in a given subframe, etc. Such CSI feedback is used to adapt Multiple-Input Multiple-Output (MIMO) precoding and modulation and coding states. LTE provides other measures of channel properties, such as Received Signal Strength Indicators (RSSI), Reference Signal Received Power (RSRP), and Reference Signal Received Quality (RSRQ); however, these are longer term properties not used to adapt MIMO transmission or to select modulation and coding states, and so are not considered CSI in the context of this disclosure. A CSI report consists of multiple bits per subframe transmitted in the uplink control information (UCI) report. PUCCH Format 1, which is capable of at most two bits of information per subframe, can obviously not be used for this purpose.
Transmission of CSI reports on the PUCCH in Rel-13 is instead handled by PUCCH Formats 2, 3, 4, and 5, which are capable of multiple information bits per subframe. PUCCH Format 2 resources are semi-statically configured. A Format 2 report can carry a payload of at most 11 bits. Variants of Format 2 are Formats 2a and 2b, which also carry HARQ-ACK information of 1 and 2 bits, respectively, for a normal cyclic prefix. For an extended cyclic prefix, PUCCH Format 2 can also carry HARQ-ACK information. For simplicity, they are all referred to as Format 2 herein. PUCCH Format 3 is designed to support larger HARQ-ACK payloads and can carry up to 10 or 20 HARQ-ACK bits for FDD and TDD, respectively. It can also carry Scheduling Requests (SR), and therefore supports up to 21 bits total. PUCCH Format 3 can also carry CSI. PUCCH Formats 4 and 5 carry still larger payloads. Because PUCCH payloads are constrained, LTE defines CSI reporting types that carry subsets of CSI components (such as Channel Quality Indicators (CQI), Precoding Matrix Indicators (PMI), Rank Indicators (RI), and CSI-RS Resource Indicators (CRI)). Together with the PUCCH reporting mode and ‘Mode State,’ each reporting type defines a payload that can be carried in a given PUCCH transmission, which is given in 3GPP TS 36.213, Table 7.2.2-3. In Rel-13, all PUCCH reporting types have payloads that are less than or equal to 11 bits; therefore all can be carried in a single PUCCH Format 2 transmission. Various CSI reporting types are defined in Rel-13 LTE:
- Type 1 report supports CQI feedback for the UE-selected subbands
- Type 1a report supports subband CQI and second PMI feedback
- Type 2, Type 2b, and Type 2c reports support wideband CQI and PMI feedback
- Type 2a report supports wideband PMI feedback
- Type 3 report supports RI feedback
- Type 4 report supports wideband CQI
- Type 5 report supports RI and wideband PMI feedback
- Type 6 report supports RI and PMI feedback
- Type 7 report supports CRI and RI feedback
- Type 8 report supports CRI, RI and wideband PMI feedback
- Type 9 report supports CRI, RI and PMI feedback
- Type 10 report supports CRI feedback
These reporting types are transmitted on PUCCH with periodicities and offsets (in units of subframes) determined according to whether CQI, Class A first PMI, RI, or CRI are carried by the reporting type. Table 1 below shows the subframes in which the various reporting types are transmitted, assuming that wideband CSI reports are used with a single CSI subframe set. Similar mechanisms are used for subband reporting and for multiple subframe sets.

TABLE 1
PUCCH Report Transmission Time for CSI Reporting Types

CSI content | CSI Reporting type | Subframe in which wideband CSI reporting type(s) are transmitted
CQI | 1, 1a, 2, 2b, 2c, 4 | (10 × n_f + ⌊n_s/2⌋ − N_OFFSET,CQI) mod (N_pd) = 0
Class A first PMI | 2a | (10 × n_f + ⌊n_s/2⌋ − N_OFFSET,CQI) mod (H′ · N_pd) = 0
RI | 3, 5 | (10 × n_f + ⌊n_s/2⌋ − N_OFFSET,CQI − N_OFFSET,RI) mod (N_pd · M_RI) = 0
CRI | 7, 8, 9, 10 | (10 × n_f + ⌊n_s/2⌋ − N_OFFSET,CQI − N_OFFSET,RI) mod (N_pd · M_RI · M_CRI) = 0

Note that CRI is for the case where more than one CSI-RS resource is configured.
Where (as defined in 3GPP TSs 36.213 and 36.331):

$n_f$ is the system frame number;
$n_s$ is the slot number within a radio frame;
$N_{pd}$ is a periodicity in subframes set by the higher layer parameter cqi-pmi-ConfigIndex;
$N_{OFFSET,CQI}$ is an offset in subframes set by the higher layer parameter cqi-pmi-ConfigIndex;
$H'$ is set by the higher layer parameter periodicityFactorWB;
$M_{RI}$ is a periodicity multiple in subframes set by the higher layer parameter ri-ConfigIndex;
$N_{OFFSET,RI}$ is an offset in subframes set by the higher layer parameter ri-ConfigIndex; and
$M_{CRI}$ is a periodicity multiple in subframes set by the higher layer parameter cri-ConfigIndex.

PUCCH CSI reporting has a fundamental periodicity of $N_{pd}$ subframes, and CQIs can be reported at this rate. If an RI is configured, it can also be reported at the same rate as the CQI by configuring $M_{RI} = 1$, since an offset $N_{OFFSET,RI}$ can allow the RI to have different subframe shifts of the same periodicity as the CQI. On the other hand, a Class A first PMI is time multiplexed with the CQI, in which the Class A first PMI is transmitted instead of the CQI in one out of $H'$ transmissions of the CQI. The CRI is time multiplexed with the RI in a similar way, i.e., the CRI is transmitted instead of the RI in one out of $M_{CRI}$ transmissions of the RI. Also, PUCCH Format 3 can carry ACK/NACK and CSI in the same PUCCH transmission, but the CSI must be from only one serving cell. Furthermore, in Rel-13, a UE only transmits CSI on PUCCH Format 3 when transmitting ACK/NACK. If there is no ACK/NACK to be transmitted in a given subframe and CSI is to be transmitted on PUCCH, the UE will use PUCCH Format 2 in that subframe. LTE control signaling can be carried in a variety of ways, including carrying control information on a Physical Downlink Control Channel (PDCCH), Enhanced Physical Downlink Control Channel (EPDCCH), or PUCCH, embedded in a Physical Uplink Shared Channel (PUSCH), in Medium Access Control (MAC) control elements ('MAC CEs'), or in Radio Resource Control (RRC) signaling. Each of these mechanisms is customized to carry a particular kind of control information. As used herein, a control channel may refer to any of these mechanisms. Additionally, a transmission on a control channel may refer to a separate transmission that carries the information or a part of a transmission that carries specific information. Control information carried on the PDCCH, EPDCCH, PUCCH, or embedded in PUSCH is physical layer related control information, such as Downlink Control Information (DCI) and Uplink Control Information (UCI), as described in 3GPP TS 36.211, 36.212, and 36.213. DCI is generally used to instruct the UE to perform some physical layer function, providing the needed information to perform the function. UCI generally provides the network with needed information, such as HARQ-ACK, Scheduling Request (SR), and Channel State Information (CSI), including CQI, PMI, RI, and/or CRI. UCI and DCI can be transmitted on a subframe-by-subframe basis, and so are designed to support rapidly varying parameters, including those that can vary with a fast fading radio channel. Because UCI and DCI can be transmitted in every subframe, UCI or DCI corresponding to a given cell tend to be on the order of tens of bits, in order to limit the amount of control overhead. Control information carried in MAC CEs is carried in MAC headers on the Uplink and Downlink Shared Transport Channels (UL-SCH and DL-SCH), as described in 3GPP TS 36.321.
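Purely as an illustrative sketch (not part of the specification text), the Table 1 conditions can be evaluated in a few lines of Python; the function names and the example configuration values below are assumptions chosen for illustration:

    # Illustrative sketch of the Table 1 reporting conditions (assumed example values).
    def is_cqi_subframe(nf, ns, N_pd, N_offset_cqi):
        # CQI reporting types 1, 1a, 2, 2b, 2c, 4
        return (10 * nf + ns // 2 - N_offset_cqi) % N_pd == 0

    def is_ri_subframe(nf, ns, N_pd, N_offset_cqi, N_offset_ri, M_ri):
        # RI reporting types 3, 5
        return (10 * nf + ns // 2 - N_offset_cqi - N_offset_ri) % (N_pd * M_ri) == 0

    # Example: N_pd = 5 subframes, CQI offset 1, RI offset 0, M_RI = 2 (assumed values).
    for nf in range(2):
        for ns in range(0, 20, 2):  # even slot numbers, one per subframe
            if is_cqi_subframe(nf, ns, 5, 1):
                print("CQI report in frame", nf, "subframe", ns // 2)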
Since a MAC header does not have a fixed size, control information in MAC CEs can be sent when it is needed and does not necessarily represent a fixed overhead. Furthermore, MAC CEs can carry larger control payloads efficiently, since they are carried in UL-SCH or DL-SCH transport channels, which benefit from link adaptation and HARQ, and can be turbo coded (whereas UCI and DCI cannot be in Rel-13). MAC CEs are used to perform repetitive tasks that use a fixed set of parameters, such as maintaining timing advance or buffer status reporting, but these tasks generally do not require transmission of a MAC CE on a subframe-by-subframe basis. Consequently, channel state information related to a fast fading radio channel, such as PMIs, CQIs, RIs, and CRIs, is not carried in MAC CEs in Rel-13. Dedicated RRC control information is also carried through UL-SCHs and DL-SCHs using Signaling Radio Bearers (SRBs), as discussed in 3GPP TS 36.331. Consequently, it can also carry large control payloads efficiently. However, SRBs are not generally intended for very frequent transmission of large payloads, and need to be available to support less frequent signaling that should be highly reliably transmitted, such as for mobility procedures including handover. Therefore, similar to the MAC, RRC signaling does not carry channel state information related to a fast fading radio channel, such as PMIs, CQIs, RIs, and CRIs, in Rel-13. In fact, this kind of CSI is only carried in UCI signaling on PUSCHs or PUCCHs. Multi-antenna techniques can significantly increase the data rates and reliability of a wireless communication system. The performance is in particular improved if both the transmitter and the receiver are equipped with multiple antennas, which results in a Multiple-Input Multiple-Output (MIMO) communication channel. Such systems and/or related techniques are commonly referred to as MIMO. The LTE standard is currently evolving with enhanced MIMO support. A core component in LTE is the support of MIMO antenna deployments and MIMO related techniques. LTE Release 12 supports an 8-layer spatial multiplexing mode for 8 Tx antennas with channel dependent precoding. The spatial multiplexing mode is aimed at high data rates in favorable channel conditions. An illustration of the spatial multiplexing operation is provided in FIG. 6. As seen in FIG. 6, the information-carrying symbol vector s is multiplied by an $N_T \times r$ precoder matrix W, which serves to distribute the transmit energy in a subspace of the $N_T$-dimensional vector space (corresponding to $N_T$ antenna ports). The precoder matrix is typically selected from a codebook of possible precoder matrices, and is typically indicated by means of a PMI, which specifies a unique precoder matrix in the codebook for a given number of symbol streams. The r symbols in s each correspond to a layer, and r is referred to as the transmission rank. In this way, spatial multiplexing is achieved, since multiple symbols can be transmitted simultaneously over the same Time/Frequency Resource Element (TFRE). The number of symbols r is typically adapted to suit the current channel properties. LTE uses OFDM in the downlink (and DFT-precoded OFDM in the uplink), and hence the received $N_R \times 1$ vector $y_n$ for a certain TFRE on subcarrier n (or alternatively data TFRE number n) is modeled by:

$y_n = H_n W s_n + e_n$ (Equation 1)

where $e_n$ is a noise/interference vector obtained as realizations of a random process. The precoder W can be a wideband precoder, which is constant over frequency, or frequency selective.
The precoder matrix W is often chosen to match the characteristics of the $N_R \times N_T$ MIMO channel matrix $H_n$, resulting in so-called channel dependent precoding. This is also commonly referred to as closed-loop precoding and essentially strives to focus the transmit energy into a subspace which is strong in the sense of conveying much of the transmitted energy to the UE. In addition, the precoder matrix may also be selected to strive to orthogonalize the channel, meaning that after proper linear equalization at the UE, the inter-layer interference is reduced. One example method for a UE to select a precoder matrix W can be to select the $W_k$ that maximizes the Frobenius norm of the hypothesized equivalent channel:

$\max_k \|\hat{H}_n W_k\|_F^2$ (Equation 2)

where $\hat{H}_n$ is a channel estimate, possibly derived from CSI-RS as described below; $W_k$ is a hypothesized precoder matrix with index k; and $\hat{H}_n W_k$ is the hypothesized equivalent channel. With regard to CSI feedback, a subband is defined as a number of adjacent Physical Resource Block (PRB) pairs. In LTE, the subband size (i.e., the number of adjacent PRB pairs) depends on the system bandwidth, whether CSI reporting is configured to be periodic or aperiodic, and the feedback type (i.e., whether higher layer configured feedback or UE-selected subband feedback is configured). An example illustrating the difference between subband and wideband is shown in FIG. 7. In the example, the subband consists of 6 adjacent PRBs. Note that only two subbands are shown in FIG. 7 for simplicity of illustration. Generally, all the PRB pairs in the system bandwidth are divided into different subbands, where each subband consists of a fixed number of PRB pairs. In contrast, wideband involves all the PRB pairs in the system bandwidth. As mentioned above, a UE may feed back a single precoder that takes into account the measurements from all PRB pairs in the system bandwidth if it is configured to report wideband PMI by the eNodeB. Alternatively, if the UE is configured to report subband PMI, the UE may feed back multiple precoders with one precoder per subband. In addition to the subband precoders, the UE may also feed back the wideband PMI. In closed-loop precoding for the LTE downlink, the UE transmits, based on channel measurements in the forward link (downlink), recommendations to the eNodeB of a suitable precoder to use. The eNB configures the UE to provide feedback according to the UE's transmission mode, and may transmit CSI-RS and configure the UE to use measurements of the CSI-RS to feed back recommended precoding matrices that the UE selects from a codebook. A single precoder that is supposed to cover a large bandwidth (wideband precoding) may be fed back. It may also be beneficial to match the frequency variations of the channel and instead feed back a frequency-selective precoding report, e.g., several precoders, one per subband. This is an example of the more general case of channel state information feedback, which also encompasses feeding back other information than recommended precoders to assist the eNodeB in subsequent transmissions to the UE. Such other information may include CQIs as well as transmission RIs. Given the CSI feedback from the UE, the eNodeB determines the transmission parameters it wishes to use to transmit to the UE, including the precoding matrix, transmission rank, and Modulation and Coding State (MCS). These transmission parameters may differ from the recommendations the UE makes.
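A minimal sketch of the selection criterion in Equation 2, using a toy channel estimate and a hypothetical rank-1 codebook (neither taken from the specification), could look as follows:

    import numpy as np

    # Sketch of precoder selection per Equation 2: pick the codebook index k that
    # maximizes the Frobenius norm of the hypothesized equivalent channel H_hat @ W_k.
    def select_pmi(H_hat, codebook):
        norms = [np.linalg.norm(H_hat @ W_k, 'fro') ** 2 for W_k in codebook]
        return int(np.argmax(norms))

    # Toy example: random 2x4 channel estimate and 8 rank-1 DFT-like precoders (assumed setup).
    rng = np.random.default_rng(0)
    H_hat = rng.standard_normal((2, 4)) + 1j * rng.standard_normal((2, 4))
    codebook = [np.exp(2j * np.pi * k * np.arange(4) / 8).reshape(4, 1) / 2 for k in range(8)]
    print("best PMI:", select_pmi(H_hat, codebook))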
Therefore, a rank indicator and MCS may be signaled in DCI, and the precoding matrix can be signaled in DCI, or the eNodeB can transmit a demodulation reference signal from which the equivalent channel can be measured. The transmission rank, and thus the number of spatially multiplexed layers, is reflected in the number of columns of the precoder W. For efficient performance, it is important that a transmission rank that matches the channel properties is selected. In closed-loop MIMO transmission schemes such as TM9 and TM10, a UE estimates and feeds the downlink CSI back to the eNodeB. The eNB uses the fed-back CSI to transmit downlink data to the UE. The CSI consists of a transmission RI, a PMI, and a CQI. A codebook of precoding matrices is used by the UE to find the best match between the estimated downlink channel $H_n$ and a precoding matrix in the codebook based on certain criteria, for example, the UE throughput. The channel $H_n$ is estimated based on a Non-Zero Power CSI Reference Signal (NZP CSI-RS) transmitted in the downlink for TM9 and TM10. The CQI/RI/PMI together provide the downlink channel state to the eNodeB. This is also referred to as implicit CSI feedback, since the estimate of $H_n$ is not fed back directly. The CQI/RI/PMI can be wideband or subband, depending on which reporting mode is configured. The RI corresponds to a recommended number of streams that are to be spatially multiplexed and thus transmitted in parallel over the downlink channel. The PMI identifies a recommended precoding matrix codeword (in a codebook which contains precoders with the same number of rows as the number of CSI-RS ports) for the transmission, which relates to the spatial characteristics of the channel. The CQI represents a recommended transport block size (i.e., code rate), and LTE supports transmission of one or two simultaneous (on different layers) transmissions of transport blocks (i.e., separately encoded blocks of information) to a UE in a subframe. There is thus a relation between a CQI and an SINR of the spatial stream(s) over which the transport block or blocks are transmitted. Codebooks for up to 16 antenna ports have been defined in LTE up to Release 13. Both one-dimensional (1D) and two-dimensional (2D) antenna arrays are supported. For LTE Release 12 UEs and earlier, only codebook feedback for a 1D port layout is supported, with 2, 4, or 8 antenna ports. Hence, the codebook is designed assuming these ports are arranged on a straight line in one dimension. In LTE Rel-13, codebooks for 2D port layouts were specified for the case of 8, 12, or 16 antenna ports. In addition, a codebook for a 1D port layout for the case of 16 antenna ports was also specified in LTE Rel-13. In LTE Rel-13, two types of CSI reporting were introduced, i.e., Class A and Class B. In Class A CSI reporting, a UE measures and reports CSI based on a new codebook for the configured 2D antenna array with 8, 12, or 16 antenna ports. The Class A codebook is defined by five parameters, i.e., (N1, N2, Q1, Q2, CodebookConfig), where (N1, N2) are the numbers of antenna ports in a first and a second dimension, respectively, and (Q1, Q2) are the DFT oversampling factors for the first and the second dimension, respectively. CodebookConfig ranges from 1 to 4 and defines four different ways the codebook is formed.
For CodebookConfig = 1, a PMI corresponding to a single 2D beam is fed back for the whole system bandwidth, while for CodebookConfig = {2, 3, 4}, PMIs corresponding to four 2D beams are fed back and each subband may be associated with a different 2D beam. The CSI consists of an RI, a PMI, and a CQI or CQIs, similar to the CSI reporting in pre-Rel-13. In Class B CSI reporting, in one scenario (also referred to as "K_CSI-RS > 1"), the eNB may pre-form multiple beams in one antenna dimension. There can be multiple ports (1, 2, 4, or 8 ports) within each beam on the other antenna dimension. "Beamformed" CSI-RS are transmitted along each beam. A UE first selects the best beam from a group of configured beams and then measures CSI within the selected beam based on the legacy pre-Release 13 LTE codebook for 2, 4, or 8 ports. The UE then reports back the selected beam index and the CSI corresponding to the selected beam. In another scenario (also referred to as "K_CSI-RS = 1"), the eNB may form up to 4 (2D) beams on each polarization, and "beamformed" CSI-RS is transmitted along each beam. A UE measures CSI on the "beamformed" CSI-RS and feeds back CSI based on a new Class B codebook for 2, 4, or 8 ports. In LTE Release 10, a new reference symbol sequence, the CSI-RS, was introduced with the intent to estimate downlink channel state information. The CSI-RS provides several advantages over basing the CSI feedback on the CRS, which were used for that purpose in previous releases. Firstly, the CSI-RS is not used for demodulation of the data signal, and thus does not require the same density (i.e., the overhead of the CSI-RS is substantially less). Secondly, the CSI-RS provides a much more flexible means to configure CSI feedback measurements (e.g., which CSI-RS resource to measure on can be configured in a UE-specific manner). By measuring a CSI-RS transmitted from the eNodeB, a UE can estimate the effective channel the CSI-RS is traversing, including the radio propagation channel and antenna gains. In more mathematical rigor, this implies that if a known CSI-RS signal x is transmitted, a UE can estimate the coupling between the transmitted signal and the received signal (i.e., the effective channel). Hence, if no virtualization is performed in the transmission, the received signal y can be expressed as:

$y = Hx + e$ (Equation 3)

and the UE can estimate the effective channel H. Up to eight CSI-RS ports can be configured in LTE Rel-10; that is, the UE can estimate the channel from up to eight transmit antenna ports. In LTE Release 13, the number of CSI-RS ports that can be configured is extended to up to sixteen ports (3GPP TS 36.213, 3GPP TS 36.211). In LTE Release 14, supporting up to 32 CSI-RS ports is under consideration. Related to the CSI-RS is the concept of zero-power CSI-RS resources (also known as muted CSI-RS) that are configured just as regular CSI-RS resources, so that a UE knows that the data transmission is mapped around those resources. The intent of the zero-power CSI-RS resources is to enable the network to mute the transmission on the corresponding resources in order to boost the SINR of a corresponding non-zero power CSI-RS, possibly transmitted in a neighbor cell/transmission point. For Rel-11 of LTE, a special zero-power CSI-RS was introduced that a UE is mandated to use for measuring interference plus noise. A UE can assume that the Transmission Points (TPs) of interest are not transmitting on the zero-power CSI-RS resource, and the received power can therefore be used as a measure of the interference plus noise.
Based on a specified CSI-RS resource and on an interference measurement configuration (e.g., a zero-power CSI-RS resource), the UE can estimate the effective channel and noise plus interference, and consequently also determine the rank, precoding matrix, and MCS to recommend to best match the particular channel. Some embodiments of the current disclosure may be used with two-dimensional antenna arrays, and some of the presented embodiments use such antennas. Such antenna arrays may be (partly) described by the number of antenna columns corresponding to the horizontal dimension $N_h$, the number of antenna rows corresponding to the vertical dimension $N_v$, and the number of dimensions corresponding to different polarizations $N_p$. The total number of antennas is thus $N = N_h N_v N_p$. It should be pointed out that the concept of an antenna is non-limiting in the sense that it can refer to any virtualization (e.g., linear mapping) of the physical antenna elements. For example, pairs of physical sub-elements could be fed the same signal, and hence share the same virtualized antenna port. An example of a 4×4 array with cross-polarized antenna elements is illustrated in FIG. 8. Precoding may be interpreted as multiplying the signal with different beamforming weights for each antenna prior to transmission. A typical approach is to tailor the precoder to the antenna form factor, i.e., taking into account $N_h$, $N_v$, and $N_p$ when designing the precoder codebook. Such 2D codebooks may not strictly relate vertical or horizontal dimensions to the dimensions that antenna ports are associated with. Therefore, 2D codebooks can be considered to have a first and a second number of antenna ports $N_1$ and $N_2$, wherein $N_1$ can correspond to either the horizontal or vertical dimension, and so $N_2$ corresponds to the remaining dimension. That is, if $N_1 = N_h$, then $N_2 = N_v$, while if $N_1 = N_v$, then $N_2 = N_h$. Similarly, 2D codebooks may not strictly relate antenna ports to polarization, and may be designed with cophasing mechanisms used to combine two beams or two antenna ports, as described in the following. A common type of precoding is to use a DFT precoder, where the precoder vector used to precode a single-layer transmission using a single-polarized uniform linear array (ULA) with $N_1$ antennas is defined as:

$w_{1D}(l, N_1, O_1) = \frac{1}{\sqrt{N_1}} \begin{bmatrix} e^{j\frac{2\pi \cdot 0 \cdot l}{O_1 N_1}} \\ e^{j\frac{2\pi \cdot 1 \cdot l}{O_1 N_1}} \\ \vdots \\ e^{j\frac{2\pi (N_1-1) l}{O_1 N_1}} \end{bmatrix}$ (Equation 4)

where $l = 0, 1, \ldots, O_1 N_1 - 1$ is the precoder index and $O_1$ is an integer oversampling factor. A precoder for a dual-polarized Uniform Linear Array (ULA) with $N_1$ antennas per polarization (and so $2N_1$ antennas in total) can be similarly defined as:

$w_{1D,DP}(l, N_1, O_1) = \begin{bmatrix} w_{1D}(l) \\ e^{j\phi} w_{1D}(l) \end{bmatrix} = \begin{bmatrix} w_{1D}(l) & 0 \\ 0 & w_{1D}(l) \end{bmatrix} \begin{bmatrix} 1 \\ e^{j\phi} \end{bmatrix}$ (Equation 5)

where $e^{j\phi}$ is a cophasing factor between the two polarizations that may, for instance, be selected from a QPSK alphabet $\phi \in \{0, \pi/2, \pi, 3\pi/2\}$. A corresponding precoder vector for a two-dimensional uniform planar array (UPA) with $N_1 \times N_2$ antennas can be created by taking the Kronecker product of two precoder vectors as $w_{2D}(l, m) = w_{1D}(l, N_1, O_1) \otimes w_{1D}(m, N_2, O_2)$, where $O_2$ is an integer oversampling factor in the $N_2$ dimension. Each precoder $w_{2D}(l, m)$ forms a DFT beam; all the precoders $\{w_{2D}(l, m),\ l = 0, \ldots, N_1 O_1 - 1;\ m = 0, \ldots, N_2 O_2 - 1\}$ form a grid of DFT beams. An example is shown in FIG. 9, where $(N_1, N_2) = (4, 2)$ and $(O_1, O_2) = (4, 4)$. Throughout the following sections, the terms 'DFT beams' and 'DFT precoders' are used interchangeably.
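The following sketch illustrates Equations 4 and 5 and the grid-of-beams construction numerically; the array sizes mirror the FIG. 9 example, while the function names are assumptions for illustration:

    import numpy as np

    # Sketch of the 1D DFT precoder of Equation 4 and the 2D Kronecker construction.
    def w_1d(l, N, O):
        n = np.arange(N)
        return np.exp(2j * np.pi * n * l / (O * N)) / np.sqrt(N)

    def w_2d(l, m, N1, O1, N2, O2):
        return np.kron(w_1d(l, N1, O1), w_1d(m, N2, O2))

    # Grid of DFT beams for (N1, N2) = (4, 2) and (O1, O2) = (4, 4), as in FIG. 9.
    N1, N2, O1, O2 = 4, 2, 4, 4
    grid = [w_2d(l, m, N1, O1, N2, O2) for l in range(N1 * O1) for m in range(N2 * O2)]
    print(len(grid), "beams of length", grid[0].size)  # 128 beams of length 8

    # Dual-polarized precoder of Equation 5 with QPSK cophasing, phi in {0, pi/2, pi, 3*pi/2}.
    phi = np.pi / 2
    w_dp = np.concatenate([w_2d(0, 0, N1, O1, N2, O2),
                           np.exp(1j * phi) * w_2d(0, 0, N1, O1, N2, O2)])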
More generally, a beam with an index pair (l, m) can be identified by the direction in which the greatest energy is transmitted when precoding weights $w_{2D}(l, m)$ are used in the transmission. Also, a magnitude taper can be used with DFT beams to lower the beam's sidelobes. 1D DFT precoders along the $N_1$ and $N_2$ dimensions with magnitude tapering can be expressed as:

$w_{1D}(l, N_1, O_1, \beta) = \frac{1}{\sqrt{N_1}} \begin{bmatrix} \beta_0 e^{j\frac{2\pi \cdot 0 \cdot l}{O_1 N_1}} \\ \beta_1 e^{j\frac{2\pi \cdot 1 \cdot l}{O_1 N_1}} \\ \vdots \\ \beta_{N_1-1} e^{j\frac{2\pi (N_1-1) l}{O_1 N_1}} \end{bmatrix}, \quad w_{1D}(m, N_2, O_2, \gamma) = \frac{1}{\sqrt{N_2}} \begin{bmatrix} \gamma_0 e^{j\frac{2\pi \cdot 0 \cdot m}{O_2 N_2}} \\ \gamma_1 e^{j\frac{2\pi \cdot 1 \cdot m}{O_2 N_2}} \\ \vdots \\ \gamma_{N_2-1} e^{j\frac{2\pi (N_2-1) m}{O_2 N_2}} \end{bmatrix}$

where $0 < \beta_i, \gamma_k \le 1$ ($i = 0, 1, \ldots, N_1-1$; $k = 0, 1, \ldots, N_2-1$) are amplitude scaling factors. $\beta_i = 1$, $\gamma_k = 1$ ($i = 0, 1, \ldots, N_1-1$; $k = 0, 1, \ldots, N_2-1$) corresponds to no tapering. DFT beams (with or without a magnitude taper) have a linear phase shift between elements along each of the two dimensions. Without loss of generality, it can be assumed that the elements of $w(l, m)$ are ordered according to $w(l, m) = w_{1D}(l, N_1, O_1, \beta) \otimes w_{1D}(m, N_2, O_2, \gamma)$ such that adjacent elements correspond to adjacent antenna elements along dimension $N_2$, and elements of $w(l, m)$ spaced $N_2$ apart correspond to adjacent antenna elements along dimension $N_1$. Then the phase shift between two elements $w_{s_1}(l, m)$ and $w_{s_2}(l, m)$ of $w(l, m)$ can be expressed as:

$w_{s_2}(l, m) = w_{s_1}(l, m) \cdot \left( \frac{\alpha_{s_2}}{\alpha_{s_1}} \right) \cdot e^{j2\pi((k_1 - i_1)\Delta_1 + (k_2 - i_2)\Delta_2)}$

where $s_1 = i_1 N_2 + i_2$ and $s_2 = k_1 N_2 + k_2$ (with $0 \le i_2 < N_2$, $0 \le i_1 < N_1$, $0 \le k_2 < N_2$, and $0 \le k_1 < N_1$) are integers identifying two entries of the beam $w(l, m)$, so that $(i_1, i_2)$ indicates a first entry of beam $w(l, m)$ that is mapped to a first antenna element (or port) and $(k_1, k_2)$ indicates a second entry of beam $w(l, m)$ that is mapped to a second antenna element (or port). $\alpha_{s_1} = \beta_{i_1}\gamma_{i_2}$ and $\alpha_{s_2} = \beta_{k_1}\gamma_{k_2}$ are real numbers. $\alpha_i \ne 1$ ($i = s_1, s_2$) if magnitude tapering is used; otherwise $\alpha_i = 1$. $\Delta_1 = \frac{l}{O_1 N_1}$ is a phase shift corresponding to a direction along an axis, e.g., the horizontal axis ('azimuth'), and $\Delta_2 = \frac{m}{O_2 N_2}$ is a phase shift corresponding to a direction along an axis, e.g., the vertical axis ('elevation'). Therefore, a kth beam $d(k)$ formed with precoder $w(l_k, m_k)$ can also be referred to by the corresponding precoder $w(l_k, m_k)$, i.e., $d(k) = w(l_k, m_k)$. Thus a beam $d(k)$ can be described as a set of complex numbers, each element of the set being characterized by at least one complex phase shift such that an element of the beam is related to any other element of the beam as

$d_n(k) = d_i(k)\,\alpha_{i,n}\,e^{j2\pi(p\Delta_{1,k} + q\Delta_{2,k})} = d_i(k)\,\alpha_{i,n}\,(e^{j2\pi\Delta_{1,k}})^p (e^{j2\pi\Delta_{2,k}})^q$

where $d_i(k)$ is the ith element of the beam $d(k)$; $\alpha_{i,n}$ is a real number corresponding to the ith and nth elements of the beam $d(k)$; p and q are integers; and $\Delta_{1,k}$ and $\Delta_{2,k}$ are real numbers corresponding to a beam with index pair $(l_k, m_k)$ that determine the complex phase shifts $e^{j2\pi\Delta_{1,k}}$ and $e^{j2\pi\Delta_{2,k}}$, respectively. The index pair $(l_k, m_k)$ corresponds to a direction of arrival or departure of a plane wave when beam $d(k)$ is used for transmission or reception in a UPA or ULA. A beam $d(k)$ can be identified with a single index k, where $k = l_k + N_1 O_1 m_k$, i.e., along the vertical or $N_2$ dimension first, or alternatively $k = N_2 O_2 l_k + m_k$, i.e., along the horizontal or $N_1$ dimension first. Extending the precoder for a dual-polarized ULA may then be done as:

$w_{2D,DP}(l, m, \phi) = \begin{bmatrix} 1 \\ e^{j\phi} \end{bmatrix} \otimes w_{2D}(l, m) = \begin{bmatrix} w_{2D}(l, m) \\ e^{j\phi} w_{2D}(l, m) \end{bmatrix} = \begin{bmatrix} w_{2D}(l, m) & 0 \\ 0 & w_{2D}(l, m) \end{bmatrix} \begin{bmatrix} 1 \\ e^{j\phi} \end{bmatrix}$ (Equation 6)

A precoder matrix $W_{2D,DP}$ for multi-layer transmission may be created by appending columns of DFT precoder vectors as:

$W^{(R)}_{2D,DP} = [\,w_{2D,DP}(l_1, m_1, \phi_1)\ \ w_{2D,DP}(l_2, m_2, \phi_2)\ \cdots\ w_{2D,DP}(l_R, m_R, \phi_R)\,]$

where R is the number of transmission layers, i.e., the transmission rank.
In a special case of a rank-2 DFT precoder, with $m_1 = m_2 = m$ and $l_1 = l_2 = l$, we have:

$W^{(2)}_{2D,DP}(l, m, \phi_1, \phi_2) = [\,w_{2D,DP}(l, m, \phi_1)\ \ w_{2D,DP}(l, m, \phi_2)\,] = \begin{bmatrix} w_{2D}(l, m) & 0 \\ 0 & w_{2D}(l, m) \end{bmatrix} \begin{bmatrix} 1 & 1 \\ e^{j\phi_1} & e^{j\phi_2} \end{bmatrix}$ (Equation 7)

For each rank, all the precoder candidates form a 'precoder codebook' or a 'codebook'. A UE can first determine the rank of the estimated downlink wideband channel based on CSI-RS. After the rank is identified, for each subband the UE then searches through all the precoder candidates in the codebook for the determined rank to find the best precoder for the subband. For example, in the case of rank = 1, the UE would search through $w_{2D,DP}(l, m, \phi)$ for all the possible $(l, m, \phi)$ values. In the case of rank = 2, the UE would search through $W^{(2)}_{2D,DP}(l, m, \phi_1, \phi_2)$ for all the possible $(l, m, \phi_1, \phi_2)$ values. With multi-user MIMO (MU-MIMO), two or more users in the same cell are co-scheduled on the same time-frequency resource. That is, two or more independent data streams are transmitted to different UEs at the same time, and the spatial domain is used to separate the respective streams. By transmitting several streams simultaneously, the capacity of the system can be increased. This, however, comes at the cost of reducing the SINR per stream, as the power has to be shared between streams and the streams cause interference to each other. When increasing the antenna array size, the increased beamforming gain will lead to higher SINR; however, as the user throughput depends only logarithmically on the SINR (for large SINRs), it is instead beneficial to trade the gains in SINR for a multiplexing gain, which increases linearly with the number of multiplexed users. Accurate CSI is required in order to perform appropriate nullforming between co-scheduled users. In the current LTE Rel-13 standard, no special CSI mode for MU-MIMO exists, and thus MU-MIMO scheduling and precoder construction has to be based on the existing CSI reporting designed for single-user MIMO (that is, a PMI indicating a DFT-based precoder, an RI, and a CQI). This may prove quite challenging for MU-MIMO, as the reported precoder only contains information about the strongest channel direction for a user and may thus not contain enough information to do proper nullforming, which may lead to a large amount of interference between co-scheduled users, reducing the benefit of MU-MIMO. The DFT-based precoders discussed above and used in LTE Rel-13 calculate cophasing across pairs of (typically differently polarized) ports. If more than one beam d(k) is used in CSI reporting, beams are not combined with the cophasing, but port pairs associated with a selected beam are cophased. Consequently, such DFT-based precoders can be considered 'single beam' precoders. Multi-beam precoders are therefore an extension, where cophasing is applied across beams as well as port pairs. Herein, we describe one such codebook. While the multi-beam codebook is described with two dimensions of the codebook relating to horizontal and vertical dimensions for concreteness, the codebook is equally applicable to the general case where the first or second dimension relates to horizontal or vertical antenna ports, as described above. $D_N$ is defined as a size N×N DFT matrix, i.e., the elements of $D_N$ are defined as $[D_N]_{k,l} = \frac{1}{\sqrt{N}} e^{j\frac{2\pi k l}{N}}$. $R_N(q) = \mathrm{diag}([\,e^{j\frac{2\pi \cdot 0 \cdot q}{N}}\ \ e^{j\frac{2\pi \cdot 1 \cdot q}{N}}\ \cdots\ e^{j\frac{2\pi (N-1) q}{N}}\,])$ is further defined to be a size N×N rotation matrix, defined for $0 \le q < 1$. Multiplying $D_N$ with $R_N(q)$ from the left creates a rotated DFT matrix with entries $[R_N(q)D_N]_{k,l} = \frac{1}{\sqrt{N}} e^{j\frac{2\pi k (l+q)}{N}}$.
The rotated DFT matrix $R_N(q)D_N = [\,d_1\ d_2\ \cdots\ d_N\,]$ consists of normalized orthogonal column vectors $\{d_i\}_{i=1}^{N}$ which furthermore span the vector space $\mathbb{C}^N$. That is, the columns of $R_N(q)D_N$, for any q, form an orthonormal basis of $\mathbb{C}^N$. In some embodiments, a codebook design is created by extending the (rotated) DFT matrices that were appropriate transforms for a single-polarized ULA, as discussed above, to also fit the more general case of dual-polarized 2D UPAs. A rotated 2D DFT matrix is defined as $D_{N_V,N_H}(q_V, q_H) = (R_{N_H}(q_H)D_{N_H}) \otimes (R_{N_V}(q_V)D_{N_V}) = [\,d_1\ d_2\ \cdots\ d_{N_V N_H}\,]$. The columns $\{d_i\}_{i=1}^{N_V N_H}$ of $D_{N_V,N_H}(q_V, q_H)$ constitute an orthonormal basis of the vector space $\mathbb{C}^{N_V N_H}$. Such a column $d_i$ is henceforth denoted a (DFT) beam. A dual-polarized beam space transformation matrix suitable for a UPA is created where the upper left and lower right elements correspond to the two polarizations:

$B_{N_V,N_H}(q_V, q_H) = I_2 \otimes D_{N_V,N_H}(q_V, q_H) = \begin{bmatrix} D_{N_V,N_H}(q_V, q_H) & 0 \\ 0 & D_{N_V,N_H}(q_V, q_H) \end{bmatrix} = [\,b_1\ b_2\ \cdots\ b_{2N_V N_H}\,]$

The columns $\{b_i\}_{i=1}^{2N_V N_H}$ of $B_{N_V,N_H}(q_V, q_H)$ constitute an orthonormal basis of the vector space $\mathbb{C}^{2N_V N_H}$. Such a column $b_i$ is henceforth denoted a single-polarized beam (SP-beam), as it is constructed by a beam d transmitted on a single polarization (i.e., $b = \begin{bmatrix} d \\ 0 \end{bmatrix}$ or $b = \begin{bmatrix} 0 \\ d \end{bmatrix}$). The notation dual-polarized beam is also introduced to refer to a beam transmitted on both polarizations (which are combined with a polarization cophasing factor $e^{j\alpha}$, i.e., $b_{DP} = \begin{bmatrix} d \\ e^{j\alpha} d \end{bmatrix}$). Utilizing the assumption that the channel is somewhat sparse, much of the channel energy is captured by only selecting a column subset of $B_{N_V,N_H}(q_V, q_H)$; that is, it is sufficient to describe a couple of the SP-beams, which keeps down the feedback overhead. Therefore, selecting a column subset $I_S$ consisting of $N_{SP}$ columns of $B_{N_V,N_H}(q_V, q_H)$ creates a reduced beam space transformation matrix $B_{I_S} = [\,b_{I_S(1)}\ b_{I_S(2)}\ \cdots\ b_{I_S(N_{SP})}\,]$; e.g., selecting column numbers $I_S = [1\ 5\ 10\ 25]$ creates the reduced beam space transformation matrix $B_{I_S} = [\,b_1\ b_5\ b_{10}\ b_{25}\,]$. A general precoder structure for precoding of a single layer is:

$w = B_{I_S} \begin{bmatrix} c_1 \\ c_2 \\ \vdots \\ c_{N_{SP}} \end{bmatrix} = [\,b_{I_S(1)}\ b_{I_S(2)}\ \cdots\ b_{I_S(N_{SP})}\,] \begin{bmatrix} c_1 \\ c_2 \\ \vdots \\ c_{N_{SP}} \end{bmatrix} = \sum_{i=1}^{N_{SP}} c_i\, b_{I_S(i)}$

where $\{c_i\}_{i=1}^{N_{SP}}$ are complex beam cophasing coefficients. The precoder w in the equation above can be described as a linear combination of beams constructed by cophasing a kth beam $b_k$ with cophasing coefficient $c_k$. Such a beam cophasing coefficient is a scalar complex number that adjusts at least the phase of a beam relative to other beams according to $c_k b_k$. When a beam cophasing coefficient only adjusts relative phase, it is a unit magnitude complex number. It is in general desirable to also adjust the relative gain of beams, in which case the beam cophasing coefficient is not unit magnitude. A more refined multi-beam precoder structure is achieved by separating the complex coefficients into a power (or amplitude) part and a phase part as:

$w = B_{I_S} \begin{bmatrix} c_1 \\ c_2 \\ \vdots \\ c_{N_{SP}} \end{bmatrix} = B_{I_S} \begin{bmatrix} p_1 e^{j\alpha_1} \\ p_2 e^{j\alpha_2} \\ \vdots \\ p_{N_{SP}} e^{j\alpha_{N_{SP}}} \end{bmatrix} = B_{I_S} \begin{bmatrix} p_1 & & 0 \\ & \ddots & \\ 0 & & p_{N_{SP}} \end{bmatrix} \begin{bmatrix} e^{j\alpha_1} \\ \vdots \\ e^{j\alpha_{N_{SP}}} \end{bmatrix} = B_{I_S} \sqrt{P} \begin{bmatrix} e^{j\alpha_1} \\ e^{j\alpha_2} \\ \vdots \\ e^{j\alpha_{N_{SP}}} \end{bmatrix}$

As multiplying the precoder vector w with a complex constant C does not change its beamforming properties (as only the phase and amplitude relative to the other single-polarized beams is of importance), one may without loss of generality assume that the coefficients corresponding to, e.g., SP-beam 1 are fixed to $p_1 = 1$ and $e^{j\alpha_1} = 1$, so that parameters for one less beam need to be signaled from the UE to the base station. Furthermore, the precoder may be further assumed to be multiplied with a normalization factor, so that, e.g., a sum power constraint is fulfilled, i.e.,
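As a hedged illustration of the linear-combination structure above (using hypothetical random stand-ins for the SP-beams, not beams from any specified codebook), the precoder w can be assembled as:

    import numpy as np

    # Illustrative sketch: w = sum_i c_i * b_IS(i) with c_i = p_i * exp(j*alpha_i),
    # i.e. w = B_IS * sqrt(P) * [exp(j*alpha_1), ..., exp(j*alpha_NSP)]^T as above.
    def multibeam_precoder(beams, amplitudes, phases):
        w = sum(p * np.exp(1j * a) * b for b, p, a in zip(beams, amplitudes, phases))
        return w / np.linalg.norm(w)  # sum power constraint ||w||^2 = 1

    # Hypothetical SP-beams of length 2*Nv*Nh = 8 (random stand-ins for columns of B).
    rng = np.random.default_rng(1)
    beams = [rng.standard_normal(8) + 1j * rng.standard_normal(8) for _ in range(2)]
    w = multibeam_precoder(beams, amplitudes=[1.0, 0.5], phases=[0.0, np.pi / 2])
    print(round(float(np.linalg.norm(w)), 6))  # 1.0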
that $\|w\|^2 = 1$. Any such normalization factor is omitted from the equations herein for clarity. In some cases, the possible choices of columns of $B_{N_V,N_H}(q_V, q_H)$ are restricted so that if column $i = i_0$ is chosen, so is column $i = i_0 + N_V N_H$. That is, if an SP-beam corresponding to a certain beam mapped to the first polarization is chosen, e.g., $b_{i_0} = \begin{bmatrix} d_{i_0} \\ 0 \end{bmatrix}$, this would imply that the SP-beam $b_{i_0 + N_V N_H} = \begin{bmatrix} 0 \\ d_{i_0} \end{bmatrix}$ is chosen as well. That is, the SP-beam corresponding to the said certain beam mapped to the second polarization is chosen as well. This would reduce the feedback overhead, as only $N_{DP} = N_{SP}/2$ columns of $B_{N_V,N_H}(q_V, q_H)$ would have to be selected and signaled back to the base station. In other words, the column selection is done on a beam (or DP-beam) level rather than an SP-beam level. If a certain beam is strong on one of the polarizations, it would typically imply that the beam is strong on the other polarization as well, at least in a wideband sense, so the loss from restricting the column selection in this way would not significantly decrease the performance. In the following discussion, the use of DP-beams is generally assumed (unless stated otherwise). In some cases, the multi-beam precoder is factorized into two or more factors that are selected with different frequency granularity, in order to reduce the feedback overhead. In such cases, the SP-beam selection (i.e., the choice of matrix $B_{I_S}$) and the relative SP-beam powers/amplitudes (i.e., the choice of matrix $\sqrt{P}$) are selected with a certain frequency granularity, while the SP-beam phases (i.e., the choice of the vector $[\,e^{j\alpha_1}\ e^{j\alpha_2}\ \cdots\ e^{j\alpha_{N_{SP}}}\,]^T$) are selected with another certain frequency granularity. In one such case, the certain frequency granularity corresponds to a wideband selection (that is, one selection for the entire bandwidth), while the said another certain frequency granularity corresponds to a per-subband selection (that is, the carrier bandwidth is split into a number of subbands, typically consisting of 1-10 PRBs, and a separate selection is done for each subband). In a typical case, the multi-beam precoder vector is factorized as $w = W_1 W_2$, where $W_1$ is selected with a certain frequency granularity and $W_2$ is selected with another certain frequency granularity. The precoder vector may then be expressed as

$w = \underbrace{B_{I_S}\sqrt{P}}_{=W_1}\ \underbrace{[\,e^{j\alpha_1}\ e^{j\alpha_2}\ \cdots\ e^{j\alpha_{N_{SP}}}\,]^T}_{=W_2} = W_1 W_2.$

Using this notation, if the said certain frequency granularity corresponds to a wideband selection of $W_1$ and the said another certain frequency granularity corresponds to a per-subband selection of $W_2$, the precoder vector for subband l may be expressed as $w_l = W_1 W_2(l)$. That is, only $W_2$ is a function of the subband index l. What needs to be fed back by the UE to the eNodeB is thus:

the chosen columns of $B_{N_V,N_H}(q_V, q_H)$, i.e., the $N_{SP}$ single-polarized beams; this requires at most $N_{SP} \cdot \log_2(2 N_V N_H)$ bits;

the vertical and horizontal DFT basis rotation factors $q_V$ and $q_H$; for instance, $q(i) = \frac{i}{Q}$, $i = 0, 1, \ldots, Q-1$, for some value of Q, so the corresponding overhead would be $2 \cdot \log_2 Q$ bits;

the (relative) power levels $\{p_2, p_3, \ldots, p_{N_{SP}}\}$ of the SP-beams; if L is the number of possible discrete power levels, $(N_{SP}-1) \cdot \log_2 L$ bits are needed to feed back the SP-beam power levels; and

the cophasing factors $\{e^{j\alpha_2}, e^{j\alpha_3}, \ldots, e^{j\alpha_{N_{SP}}}\}$ of the SP-beams; for instance, $\alpha(k) = \frac{2\pi k}{K}$, $k = 0, 1, \ldots, K-1$, for some value of K; the corresponding overhead would be $(2 N_{DP} - 1) \cdot \log_2 K$ bits per rank per $W_2(l)$ report.
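The bit counts in the list above can be tallied as in the following sketch; the parameter values are example assumptions, not specified configurations:

    import math

    # Sketch of the feedback overhead bookkeeping listed above (illustrative values).
    def w1_overhead_bits(Nv, Nh, Nsp, Q, power_levels):
        beam_selection = Nsp * math.ceil(math.log2(2 * Nv * Nh))     # chosen columns of B
        rotations = 2 * math.ceil(math.log2(Q))                      # q_V and q_H
        powers = (Nsp - 1) * math.ceil(math.log2(power_levels))      # relative power levels
        return beam_selection + rotations + powers

    def w2_overhead_bits(Ndp, K):
        return (2 * Ndp - 1) * math.ceil(math.log2(K))               # cophasing per W2 report

    print(w1_overhead_bits(Nv=4, Nh=4, Nsp=4, Q=4, power_levels=4))
    print(w2_overhead_bits(Ndp=2, K=4))  # 6 bits for QPSK cophasing with N_DP = 2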
Recently, 3GPP has agreed to the following working assumption used to develop physical layer specifications for Rel-14 advanced CSI based on multi-beam precoders. Note that the term 'beam combining coefficient' is used for the cophasing factors $c_{r,l,i}$ here, although the cophasing factors can combine elements with different polarizations as well as different beams. Precoders are to be normalized in the equations below. FIG. 9B illustrates an example for W1 beam selection, W1 beam power, and W2 determination according to some embodiments of the present disclosure.

$W_1 = \begin{bmatrix} B & 0 \\ 0 & B \end{bmatrix}$, $B = [\,p_0\, b_{k_1(0),k_2(0)}, \ldots, p_{L-1}\, b_{k_1(L-1),k_2(L-1)}\,]$

For rank 1: $W = \begin{bmatrix} \tilde{w}_{0,0} \\ \tilde{w}_{1,0} \end{bmatrix} = W_1 W_2$, and $W_2 = \begin{bmatrix} c_{0,0} \\ c_{1,0} \end{bmatrix}$

For rank 2: $W = \begin{bmatrix} \tilde{w}_{0,0} & \tilde{w}_{0,1} \\ \tilde{w}_{1,0} & \tilde{w}_{1,1} \end{bmatrix} = W_1 W_2$, and $W_2 = \begin{bmatrix} c_{0,0} & c_{0,1} \\ c_{1,0} & c_{1,1} \end{bmatrix}$

$c_{r,l} = [\,c_{r,l,0}, \ldots, c_{r,l,L-1}\,]^T$, $r = 0, 1$; $l = 0, 1$

$\tilde{w}_{r,l} = \sum_{i=0}^{L-1} b_{k_1(i),k_2(i)} \cdot p_i \cdot c_{r,l,i}$; $r = 0, 1$; $l = 0, 1$

where:

L = 2 is the number of beams;
$b_{k_1,k_2}$ is a 2D DFT beam from the oversampled grid, with $k_1 = 0, 1, \ldots, N_1 O_1 - 1$ and $k_2 = 0, 1, \ldots, N_2 O_2 - 1$;
$0 \le p_i \le 1$ is the beam power scaling factor for beam i; and
$c_{r,l,i}$ is the beam combining coefficient for beam i on polarization r and layer l.

The resulting PMI payloads are:

Rank | W1 (bits) | W2 (bits)
1 | 13 | 6
2 | 13 | 12

The W1 overhead for $N_1 = N_2 = 4$ is composed of:

Indicating the leading beam: $\lceil \log_2(N_1 N_2 O_1 O_2) \rceil = \lceil \log_2(16 N_1 N_2) \rceil = 8$ bits
Indicating the second beam: $\lceil \log_2 \binom{7}{1} \rceil = 3$ bits
Relative power of the weaker beam: 2 bits

Feedback on PUSCH is supported and feedback on PUCCH is supported. Because feedback on PUCCH is to be supported, and since indications of $W_1$ and $W_2$ are (at least in some cases) larger than can be supported on PUCCH Format 2, the feedback for $W_1$ and/or $W_2$ must be modified when reporting on PUCCH Format 2 is configured.

FIGS. 10A through 13A illustrate procedures for reporting CSI feedback on a physical channel according to some embodiments of the present disclosure. FIG. 10A illustrates a procedure by which the second node 14 reports CSI feedback to the first node 12 on a physical channel (step 100A). According to some embodiments, the CSI feedback is rich CSI feedback. As used herein, rich CSI refers to CSI that conveys more information than traditional CSI. For example, rich CSI may be CSI for LTE Advanced or for NR Type 2. Additional examples and description are included below. According to some embodiments, the reporting of CSI feedback is with a small payload. Also, as used herein, a small payload is a payload that includes fewer total bits than what would usually need to be sent in other applications. For example, an application for advanced CSI is to transmit subband PMI, using a number of bits per subband (considered substantial). Compared to this application, according to some disclosed embodiments, the payload is constrained when there is a need to transmit wideband PMI and further subsample the PMI so that it fits the feedback channel. In such a case, a small payload is a payload small enough to fit the feedback channel, or smaller. This may be accomplished in many different ways, some of which are discussed below. Specifically, as shown in FIG. 11A, the second node 14 identifies a subset of codebook entries from an advanced CSI codebook of coefficients (step 200A). Then, the second node 14 selects a codebook entry from the subset (step 202A). An index of the selected codebook entry is reported to the first node 12 (step 204A). In this way, the constraints of the physical channel with the small payload are met, even when sending rich CSI. FIG. 12A illustrates a procedure by which the second node 14 reports a rank indicator and a beam count indicator in a first transmission (step 300A) and reports a cophasing indicator in a second transmission (step 302A).
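A small sketch of the $\tilde{w}_{r,l}$ construction in the working assumption, using randomly generated stand-ins for the 2D DFT beams (an assumption for illustration only), is:

    import numpy as np

    # Sketch: w~_{r,l} = sum_i b_{k1(i),k2(i)} * p_i * c_{r,l,i}, stacked per
    # polarization r = 0, 1 for one layer l (L = 2 beams; illustrative values).
    def layer_precoder(beams, p, c):
        # beams: list of L beams; p: beam power scaling factors 0 <= p_i <= 1;
        # c: beam combining coefficients for one layer, indexed as c[r][i]
        w_pol0 = sum(b * pi * c[0][i] for i, (b, pi) in enumerate(zip(beams, p)))
        w_pol1 = sum(b * pi * c[1][i] for i, (b, pi) in enumerate(zip(beams, p)))
        w = np.concatenate([w_pol0, w_pol1])
        return w / np.linalg.norm(w)  # precoders are normalized

    rng = np.random.default_rng(2)
    beams = [rng.standard_normal(16) + 1j * rng.standard_normal(16) for _ in range(2)]
    c = [[1, 1j], [-1, -1j]]  # QPSK beam combining coefficients c_{r,l,i}
    w = layer_precoder(beams, p=[1.0, np.sqrt(0.5)], c=c)
    print(w.shape)  # (32,) for 16 ports per polarization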
In some embodiments, both of these transmissions are sent on the same uplink control channel. In some embodiments, these transmissions are sent on a channel that is acting as a control channel. In some embodiments, the second node 14 determines a number of beams L used to construct the multi-beam CSI report (step 304A). The second node 14 then determines a beam indicator for an lth beam, the beam indicator identifying the index of a beam of the multi-beam CSI report if L is at least l, and otherwise identifying that L is less than l (step 306A). FIG. 13A illustrates a procedure by which the second node 14 reports CSI corresponding to a first number of beams if the CSI corresponds to a first rank (step 400A) and reports CSI corresponding to a second number of beams if the CSI corresponds to a second rank (step 402A). FIGS. 10B-13B are figures illustrating analogous operation at a receiving side, such as the first node 12. In LTE Rel-13, Class A codebook based periodic CSI feedback is carried on PUCCH Format 2 over at least three transmissions, i.e.:

1st transmission: RI
2nd transmission: W1
3rd transmission: W2 and CQI

For each transmission, up to 11 bits can be transmitted. A primary aim is to also have three transmissions for advanced CSI feedback over PUCCH Format 2. As it is possible to multiplex periodic CSI feedback over several PUCCH transmissions, the individual components comprising the PMI feedback indicating the selection of W1 and W2 are reiterated. The reporting of W1 can be split up into separate components, as was further elaborated in the background:

leading beam selection: $\log_2(N_V \cdot N_H) = 4$ bits, in the worst case of $2 N_V N_H = 32$ antenna ports;
beam rotations: $\log_2(Q_H \cdot Q_V) = \log_2(4 \cdot 4) = 4$ bits;
second beam selection: $\lceil \log_2 7 \rceil = 3$ bits; and
beam relative power: 2 bits.

Although the codebook defines precoders as linear combinations of L = 2 beams (or $N_{DP}$ beams using the notation in the description of multi-beam precoders above), it is possible to set the relative beam power of the second beam to zero, resulting in an effective precoder comprising only L = 1 beam. In such a case, precoder components describing a second beam do not need to be known to construct the precoder and, correspondingly, no signaling indicating said precoder components is needed. Thus, the reporting of the W2 matrix uses $(2L - 1) \cdot N_p \cdot r$ bits per subband, where L is the number of beams, $N_p$ is the number of phase bits per element of W2 (or $\log_2 K$ bits using the notation of the multi-beam precoder discussion above), and r is the rank. Since a QPSK constellation is used, $N_p = 2$, and the number of bits for W2 per subband for L = 1 and L = 2 is summarized in Table 2:

TABLE 2
W2 beam cophasing overhead (per subband)

Rank (r) | Beams (L) = 1 | Beams (L) = 2
1 | 2 bits | 6 bits
2 | 4 bits | 12 bits

Since it may be beneficial to report W2 together with CQI in a PUCCH transmission, for PUCCH Format 2, the total payload can be no more than 11 bits. Because CQI occupies 4 and 7 bits for 1 and 2 codewords, respectively, W2 can occupy no more than 7 or 4 bits for rank 1 or 2 (as rank 1 uses 1 codeword while rank 2 uses 2 codewords in LTE). Therefore, wideband W2 PMI for rank 1 can fit on PUCCH Format 2 without subsampling, whereas subsampling from 12 bits to 4 bits is needed for rank 2, for L = 2. This constitutes a substantial subsampling. Given the above constraints, three different payload sizes (2, 4, or 6 bits) may be used for W2 on PUCCH Format 2. The eNB must be aware of the number of beams and the rank used to compute W2 if the payload size varies.
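The $(2L-1) \cdot N_p \cdot r$ bookkeeping and the PUCCH Format 2 budget check can be expressed as the following sketch (the function names are illustrative assumptions):

    # Sketch of the W2 bit count and the PUCCH Format 2 11-bit budget check.
    def w2_bits(L, r, Np=2):  # Np = 2 phase bits per element (QPSK)
        return (2 * L - 1) * Np * r

    def fits_pucch_format2(L, r):
        cqi_bits = 4 if r == 1 else 7          # 1 codeword for rank 1, 2 for rank 2
        return w2_bits(L, r) + cqi_bits <= 11  # PUCCH Format 2 payload limit

    for L in (1, 2):
        for r in (1, 2):
            print(f"L={L}, rank={r}: W2={w2_bits(L, r)} bits, fits={fits_pucch_format2(L, r)}")

Running the sketch reproduces Table 2 and shows that only the L = 2, rank 2 case (12 + 7 = 19 bits) overshoots the 11-bit budget, matching the need for subsampling described above.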
Since, in Rel-13, the eNB determines the size of the CQI field based on the RI, that principle can be reused to determine the rank used to compute W2. If the beam power field is encoded independently of W2, then the number of beams used to determine W2 could also be determined by the eNB from the reported beam power field. Table 3 below shows the W2 payload sizes.

TABLE 3
W2 payload alternatives

Alternative | W2 + CQI payload
One or two beams, ranks 1 & 2 | Rank 1: {2 or 6} + 4 bits = 6 or 10 bits; Rank 2: 4 + 7 bits = 11 bits

The rich W2 CSI feedback in LTE Rel-14 implements a scalar quantization of beam and polarization cophasing for each layer, where the W2 matrix for rank 2 may be expressed as:

$W_2 = \begin{bmatrix} 1 & 1 \\ c_{10} & c_{11} \\ c_{20} & c_{21} \\ c_{30} & c_{31} \end{bmatrix}$

where each $c_{i,j} \in \{1, j, -1, -j\}$, i.e., each element may be independently chosen from a QPSK constellation. To further clarify, $c_{1j}$ denotes a relative phase of the first and second beam on a first polarization, $c_{2j}$ denotes a relative phase between the two polarizations of the first beam, and $c_{3j}$ denotes the relative phase of the first beam on the first polarization and the second beam on the second polarization. Since scalar quantization is used, W2 may be parametrized using the D = 6 dimensional vector $c = [\,c_{10}\ c_{20}\ c_{30}\ c_{11}\ c_{21}\ c_{31}\,]^T$ and may thus be considered to have six degrees of freedom, resulting in $S = N_P^D = 4^6 = 4096$ possible states, represented by 12 bits. The W2 codebook may thus be indexed with $k = 0, 1, \ldots, S-1$. One approach to subsampling the W2 codebook is to merely subsample the index k so that only every Xth index may be chosen, and instead report the index $\tilde{k} = 0, 1, \ldots, \frac{S}{X} - 1$, where $k = X \cdot \tilde{k}$. However, such a subsampling does not utilize the structure of the codebook and may provide low CSI granularity. Another approach to subsampling the codebook is to lower the constellation alphabet size, so that, for instance, $c_{i,j} \in \{1, -1\}$ and a Binary Phase Shift Keying (BPSK) constellation is used. In our example, though, this would still require 6 bits of feedback overhead, which overshoots the target of 4 bits for rank 2. Note that since the BPSK constellation points are comprised in the QPSK constellation, lowering the constellation alphabet size in such a manner constitutes a codebook subsampling, since all the resulting precoders in the subsampled codebook are comprised in the non-subsampled codebook. However, in order to further reduce the feedback overhead, a method of rich CSI W2 codebook subsampling is presented herein. The method works by parametrizing the W2 codebook using a smaller number of parameters M than the D parameters required to span the entire codebook. That is, the precoders in the subsampled W2 codebook may be generated from a size-M vector $\tilde{c} = [\,\tilde{c}_0 \ldots \tilde{c}_{M-1}\,]^T$ and a fixed mapping from $\tilde{c}$ to a precoder matrix. As an illustrative embodiment, consider M = 1 so that $\tilde{c} = \tilde{c}_0$. The subsampled precoder codebook may then be generated as, for instance:

$\tilde{W}_2 = \begin{bmatrix} 1 & 1 \\ \tilde{c}_0 & -\tilde{c}_0 \\ -\tilde{c}_0 & \tilde{c}_0 \\ \tilde{c}_0 & -\tilde{c}_0 \end{bmatrix}$

If $\tilde{c}_0 \in \{1, j, -1, -j\}$, there are thus $4^1 = 4$ possible W2 matrices in the subsampled codebook. Note that all possible $\tilde{W}_2$ are comprised in the non-subsampled codebook, and $\tilde{W}_2$ thus constitutes a codebook subsampling and not a new, separate codebook. For this to hold true, it is required that each element $c_{i,j}$ of the precoder matrices in the subsampled codebook belongs to the same constellation as in the non-subsampled codebook (e.g., QPSK $\{1, j, -1, -j\}$).
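As an illustrative sketch of the M = 1 subsampling example (mirroring the matrix above; not a normative definition), the four subsampled matrices can be generated and checked against the QPSK constellation:

    import numpy as np

    # Sketch: generate the M = 1 subsampled codebook from a single QPSK parameter
    # c~0 and check that every entry below the first row is itself a QPSK symbol,
    # so each W~2 also lies in the full (non-subsampled) codebook.
    QPSK = [1, 1j, -1, -1j]

    def w2_subsampled(c0):
        return np.array([[1, 1], [c0, -c0], [-c0, c0], [c0, -c0]])

    subsampled = [w2_subsampled(c0) for c0 in QPSK]  # 4^1 = 4 matrices
    for W in subsampled:
        assert all(x in QPSK for x in W[1:].flatten())
    print(len(subsampled), "subsampled W2 matrices")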
As Phase Shift Keying (PSK) constellations are closed under multiplication, one may thus construct $c_{i,j}$ by multiplying an arbitrary number of PSK symbols. Thus, if the elements of $\tilde{c}$ are from the same constellation as the elements in the non-subsampled codebook, and the elements in $\tilde{W}_2$ are formed by multiplying elements of $\tilde{c}$ or other PSK symbols (note that '−1' is a PSK symbol), $\tilde{W}_2$ is ensured to be comprised in the non-subsampled codebook. Based on these rules for generating codebook subsamplings according to the method, $\tilde{W}_2$ matrices that give a good tradeoff between performance and feedback overhead may be designed. In some embodiments, a codebook subsampling is generated utilizing two properties:

Phase offsets between beams are (partly) due to differences in propagation delay, and so may be similar on both polarizations; and

The precoding on different layers is often chosen to be mutually orthogonal.

The first property suggests that the ratios $c_{1,j}/1$ and $c_{3,j}/c_{2,j}$ may be similar in certain propagation conditions. This can be utilized in the subsampling design so that the precoding of a single layer may be expressed as:

$\begin{bmatrix} 1 \\ c \\ \varphi \\ c\varphi \end{bmatrix}$

where c is a beam cophasing coefficient and $\varphi$ is a polarization cophasing coefficient, both of which are QPSK symbols. Thus, with this design, the ratios $\frac{c_{1,j}}{1} = \frac{c_{3,j}}{c_{2,j}} = c$, fulfilling the first desired property. To fulfill the second property, the second layer may be designed to be orthogonal to the first layer so that $\tilde{W}_2^H \tilde{W}_2 = \sigma \cdot I$, where I is the identity matrix (a matrix of all zeroes except on the diagonal, which contains all ones) and $\sigma$ is a non-negative scalar. This may be achieved by copying the coefficients for the first layer but negating the entries corresponding to the second polarization as:

$\tilde{W}_2 = \begin{bmatrix} 1 & 1 \\ c & c \\ \varphi & -\varphi \\ c\varphi & -c\varphi \end{bmatrix}$

Thus, with this subsampling design, both desired properties are fulfilled. Furthermore, the subsampled codebook is generated from $\tilde{c} = [\,c\ \varphi\,]^T$, i.e., using 2 parameters, where each element in $\tilde{c}$ belongs to a QPSK constellation. Thus, 2 + 2 = 4 bits are needed to indicate an element in the subsampled codebook, which meets the requirement on PUCCH feedback overhead for W2. In some embodiments, the property that layers are often chosen to be mutually orthogonal is not utilized in the subsampling design, as this puts an unnecessary restriction on the channel quantization for some propagation conditions. Instead, each layer is encoded independently. The previously mentioned first property is still utilized, though, so that a separate beam cophasing coefficient and polarization cophasing coefficient are used for each layer, resulting in a matrix design:

$\tilde{W}_2 = \begin{bmatrix} 1 & 1 \\ c_0 & c_1 \\ \varphi_0 & \varphi_1 \\ c_0\varphi_0 & c_1\varphi_1 \end{bmatrix}$

Thus, the subsampled codebook may be generated from 4 parameters $\tilde{c} = [\,c_0\ c_1\ \varphi_0\ \varphi_1\,]^T$ in this embodiment. To meet the requirement of a 4-bit W2 report, though, each parameter cannot be selected from a QPSK constellation, as this would require an 8-bit report. However, as the BPSK constellation points are comprised within the QPSK constellation, using a lower order constellation for the parameters will still ensure that $\tilde{W}_2$ constitutes a subsampled codebook. Thus, if each parameter is selected from a BPSK constellation, the subsampled codebook may be reported with 4 bits and the requirement is met. A UE assumes L = 2 is used for reporting W2 if rank = 1 and L = 1 if rank = 2. In this case, there is no subsampling required for either the rank = 1 or rank = 2 W2, since 6 bits and 4 bits can be carried with CQI for rank 1 and rank 2, respectively, as discussed above with respect to the W2 payload alternatives.
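The orthogonality property $\tilde{W}_2^H \tilde{W}_2 = \sigma \cdot I$ of the mutually orthogonal rank-2 design above can be verified numerically, as in the following sketch covering all QPSK choices of c and $\varphi$:

    import numpy as np

    # Sketch: verify W~2^H W~2 = sigma * I for the orthogonal rank-2 design
    # [1 1; c c; phi -phi; c*phi -c*phi] over all QPSK parameter choices.
    QPSK = [1, 1j, -1, -1j]
    for c in QPSK:
        for phi in QPSK:
            W2 = np.array([[1, 1], [c, c], [phi, -phi], [c * phi, -c * phi]])
            G = W2.conj().T @ W2
            assert np.allclose(G, G[0, 0] * np.eye(2)), (c, phi)
    print("all", len(QPSK) ** 2, "candidates are orthogonal between layers")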
For rank = 1, the full resolution of W2 is preserved, and the full-size W2 (6 bits in the case of the Rel-14 codebook) is reported. For rank = 2, a single beam is used for W2, which corresponds to the W2 of the non-subsampled multi-beam codebook, and so requires 4 bits to signal W2 using the Rel-14 codebook. For PUCCH Format 2, the following design goals for consistency with Rel-13 operation are identified:

1. All CSI reporting types must fit into 11 bits.

2. At most 3 transmissions are needed to report RI, CQI, PMI, and CRI:
a. RI is carried in one transmission;
b. Wideband CQI with 4 or 7 bits can be used for 1 or 2 codeword transmission, respectively, and is carried in another PUCCH transmission;
c. At least the beam index is carried in a third PUCCH transmission.

3. Each transmission should be as useful as possible to the eNodeB in the absence of the other transmissions.

Since the RI often needs to be decoded to determine the size of other CSI fields, such as the CQI and the PMI, it is important that it be received reliably. Consequently, the RI should be multiplexed in a PUCCH transmission with as few other fields as possible, while still providing the needed CSI. Transmitting as little extra information as possible means that fewer bits are present in the PUCCH carrying the RI, and so it is received more reliably at a given received SINR. The beam power indication and the second beam index require 2 and 3 bits, respectively. On the other hand, the first beam index requires at least 4 bits (8 bits if the index includes the rotation, as is done in the Rel-14 codebook agreement). Since the first beam index should be reported together with (or directly include) the beam rotation, these 8 bits should be reported in one PUCCH transmission. Overall, then, the beam power indication and the second beam index are reasonable candidates to multiplex with RI, whereas the first beam index and/or beam rotation are not. If RI is multiplexed with the second beam index, then, if Rel-13 PUCCH reporting timing is used, since RI (for example, PUCCH reporting type 3 or 7) is likely to be reported more slowly than wideband PMI (i.e., PUCCH reporting type 2a), the two beams would be reported at different rates, which is undesirable, since they have the same basic characteristics and vary with propagation at the same rate in time. This unequal reporting rate will also likely degrade performance. Therefore, it does not seem desirable to report the second beam index with RI. Reporting the beam power indication with RI makes intuitive sense, since the number of beams in the channel is similar to its rank, as the number of beams identifies the number of parameters needed to approximate the channel just as the rank does. Furthermore, the beam power indication identifies if precoder parameters for the second beam need to be known, and so can be considered a beam count indicator. The beam power field (also 'beam count indicator') can be used to identify the size of the W2 cophasing indicator and the presence of information identifying the second beam. If the beam power field corresponding to the 2nd beam indicates a non-zero value (for example, 1, √0.5, or √0.25), then the CSI report corresponds to 2 beams. In this case, the second beam index is reported, and the size of a wideband cophasing indicator W2 reported on PUCCH will be 4 bits (with W2 subsampling as discussed above).
If the beam power field indicates a zero value, then the second beam index is not reported, and the size of a wideband cophasing indicator W2 reported on PUCCH will be 2 or 4 bits (also as discussed above with respect to the W2 beam cophasing overhead per subband), depending on whether rank 1 or rank 2, respectively, is indicated by the RI. Therefore, in an embodiment, a rank indicator and a beam count indicator are both transmitted in one transmission. The rank indicator identifies the rank used when computing the CSI feedback to which the rank relates. The beam count indicator identifies at least the number of beams used when computing the CSI feedback, and may additionally indicate the relative power of beams identified in the CSI feedback. The rank and beam count indicators may identify the size of a CSI feedback field transmitted in a separate transmission, such as a cophasing indicator (W2) or a beam index (W1). With this embodiment, the advanced CSI feedback can be carried on PUCCH Format 2 over at least three transmissions, i.e.:

1. 1st transmission: RI + beam power (or beam count indicator)
2. 2nd transmission: W1 (first beam index + beam rotation + second beam index)
3. 3rd transmission: W2 and CQI

Note that while the transmissions may be sequenced in time in the order of their numbering, this is not required. Also, these may be sent as completely separate transmissions or as separate parts of the same transmission. In a related embodiment, a later transmission carries a CQI field and a cophasing indicator field (W2). The size of the cophasing indicator field is determined by at least a beam count indicator transmitted in an earlier transmission, and the size of the CQI field is determined by at least an RI transmitted in the earlier transmission. It may also be desirable to provide an alternative indication of the number of beams used in the multi-beam CSI report. This can allow the number of beams to be reported to the eNB more often than when the number of beams is only provided in reports containing RI, since RI is generally reported infrequently. In this case, a CSI report for the second (weaker) beam jointly identifies the number of beams and an index of the second beam. The particular codebook design used in 3GPP is well suited to this, since the second beam index has 7 possible values, and so an 8th value indicating whether the second beam is present can fit in a 3-bit indicator. Therefore, in an embodiment, a first transmission carries a beam index that is jointly encoded with an indication of whether a second beam is present, where the absence of the second beam corresponds to a beam power of 0 for the second beam. Additionally, a second transmission may carry a cophasing indicator field. The size of the cophasing indicator field is determined by at least the indication of whether a second beam is present.
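As a hedged sketch (the codepoint assignment and helper names are assumptions for illustration, not the specified encoding), the joint 3-bit indicator and the resulting W2 field-size logic could be modeled as:

    # Sketch of a 3-bit indicator jointly encoding the second beam index (7 values)
    # and an eighth value meaning "second beam absent" (i.e., beam power 0).
    ABSENT = 7  # assumed reserved codepoint

    def encode_second_beam(index_or_none):
        return ABSENT if index_or_none is None else index_or_none  # index in 0..6

    def decode_second_beam(code):
        return None if code == ABSENT else code

    def w2_field_bits(rank, second_beam_present):
        # Two beams: 4 bits (subsampled). One beam: 2 bits for rank 1, 4 for rank 2.
        if second_beam_present:
            return 4
        return 2 if rank == 1 else 4

    print(decode_second_beam(encode_second_beam(None)))      # None -> beam count L = 1
    print(w2_field_bits(rank=2, second_beam_present=False))  # 4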
In some embodiments, the functionality of the wireless device 14 described above may be fully or partially implemented in software that is, e.g., stored in the memory 22 and executed by the processor(s) 20. In some embodiments, a computer program including instructions which, when executed by at least one processor, causes the at least one processor to carry out the functionality of the wireless device 14 according to any of the embodiments described herein is provided. In some embodiments, a carrier containing the aforementioned computer program product is provided. The carrier is one of an electronic signal, an optical signal, a radio signal, or a computer readable storage medium (e.g., a non-transitory computer readable medium such as memory). FIG. 15 is a schematic block diagram of the wireless device 14 according to some other embodiments of the present disclosure. The wireless device 14 includes one or more modules 32, each of which is implemented in software. The module(s) 32 provide the functionality of the wireless device 14 (e.g., UE 14) described herein. FIGS. 16 through 18 illustrate example embodiments of a radio network node according to some embodiments of the present disclosure. FIG. 16 is a schematic block diagram of the node 12 according to some embodiments of the present disclosure. Other types of network nodes may have similar architectures (particularly with respect to including processor(s), memory, and a network interface). As illustrated, the radio access node 12 includes a control system 34 that includes circuitry comprising one or more processors 36 (e.g., CPUs, ASICs, FPGAs, and/or the like) and memory 38. The control system 34 also includes a network interface 40. The radio access node 12 also includes one or more radio units 42 that each include one or more transmitters 44 and one or more receivers 46 coupled to one or more antennas 48. In some embodiments, the functionality of the radio access node 12 described above may be fully or partially implemented in software that is, e.g., stored in the memory 38 and executed by the processor(s) 36. FIG. 17 is a schematic block diagram that illustrates a virtualized embodiment of the radio access node 12 according to some embodiments of the present disclosure. Other types of network nodes may have similar architectures (particularly with respect to including processor(s), memory, and a network interface). As used herein, a "virtualized" radio access node 12 is a radio access node 12 in which at least a portion of the functionality of the radio access node 12 is implemented as a virtual component (e.g., via a virtual machine(s) executing on a physical processing node(s) in a network(s)). As illustrated, the radio access node 12 optionally includes the control system 34, as described with respect to FIG. 16. The radio access node 12 also includes the one or more radio units 42 that each include the one or more transmitters 44 and the one or more receivers 46 coupled to the one or more antennas 48, as described above. The control system 34 (if present) is connected to the radio unit(s) 42 via, for example, an optical cable or the like. The control system 34 (if present) is connected to one or more processing nodes 50 coupled to or included as part of a network(s) 52 via the network interface 40. Alternatively, if the control system 34 is not present, the one or more radio units 42 are connected to the one or more processing nodes 50 via a network interface(s). Each processing node 50 includes one or more processors 54 (e.g., CPUs, ASICs, FPGAs, and/or the like), memory 56, and a network interface 58.
In this example, functions 60 of the radio access node 12 described herein are implemented at the one or more processing nodes 50 or distributed across the control system 34 (if present) and the one or more processing nodes 50 in any desired manner. In some particular embodiments, some or all of the functions 60 of the radio access node 12 described herein are implemented as virtual components executed by one or more virtual machines implemented in a virtual environment(s) hosted by the processing node(s) 50. As will be appreciated by one of ordinary skill in the art, additional signaling or communication between the processing node(s) 50 and the control system 34 (if present) or alternatively the radio unit(s) 42 is used in order to carry out at least some of the desired functions. Notably, in some embodiments, the control system 34 may not be included, in which case the radio unit(s) 42 communicates directly with the processing node(s) 50 via an appropriate network interface(s).
In some embodiments, a computer program including instructions which, when executed by at least one processor, causes the at least one processor to carry out the functionality of the radio access node 12 or a processing node 50 according to any of the embodiments described herein is provided. In some embodiments, a carrier containing the aforementioned computer program product is provided. The carrier is one of an electronic signal, an optical signal, a radio signal, or a computer readable storage medium (e.g., a non-transitory computer readable medium such as memory).
FIG. 18 is a schematic block diagram of the radio access node 12 according to some other embodiments of the present disclosure. The radio access node 12 includes one or more modules 62, each of which is implemented in software. The module(s) 62 provide the functionality of the radio access node 12 described herein.
Example Embodiments
While not being limited thereto, some example embodiments of the present disclosure are provided below.
1. A method of operation of a second node (14) connected to a first node (12, 50) in a wireless communication network, comprising: reporting (100A) rich CSI feedback to the first node (12, 50) on a physical channel with a small payload.
2. The method of embodiment 1 wherein reporting the rich CSI feedback comprises: identifying (200A) a subset of codebook entries from a codebook of coefficients; selecting (202A) a codebook entry from the subset; and reporting (204A) an index of the selected codebook entry.
3. The method of embodiment 2 wherein: each entry of the codebook is identified by an index k; the entry of the codebook with index k comprises a vector or matrix C_k of complex numbers with L′ rows and r columns, L′ and r being positive integers; each of (L′−1)r elements of each entry comprises a scalar complex number that can be one of N complex numbers; ∥C_k1 − C_k2∥_F > 0 where k1 ≠ k2 are indices of different codebook entries, and ∥C∥_F is the Frobenius norm of a matrix or vector C; the codebook comprises N^((L′−1)r) entries; and the subset comprises one of K^M entries, where K ≤ N and M < (L′−1)r are positive integers and each entry in the subset is identified by an index.
4. The method of embodiment 3 wherein the selected codebook entry for when r = 2 can be constructed from M = 2 distinct variables and each variable can be one of K = N complex numbers and C_k^H C_k = I for each entry C_k in the subset.
5. The method of embodiment 2 wherein: each entry of the codebook comprises a vector or matrix; one or more elements of each entry comprise a scalar complex number; and a norm of the matrix or vector difference between any two different codebook entries is greater than zero.
6. The method of any one of embodiments 1 to 4 wherein the selected codebook entry for when r = 2 can be constructed from M = 4 distinct variables and each variable can be one of K = √N complex numbers and C_k^H C_k ≠ I for at least one entry C_k in the subset.
7. A method of operation of a second node (14) connected to a first node (12, 50) in a wireless communication network for reporting multi-beam CSI, comprising: reporting (300A) a rank indicator and a beam count indicator in a first transmission on an uplink control channel; and reporting (302A) a cophasing indicator in a second transmission on the uplink control channel, the cophasing indicator identifying a selected entry of a codebook of cophasing coefficients, wherein the number of bits in the cophasing indicator is identified by at least one of the beam count indicator and the rank indicator.
8. The method of embodiment 7 wherein the beam count indicator comprises at least one of a number of beams and an indication of relative powers, the possible values of the indication comprising both a zero and a non-zero value.
9. A method of operation of a second node (14) connected to a first node in a wireless communication network for reporting CSI, comprising: jointly identifying the number of beams and an index of a beam in a multi-beam CSI report; and transmitting the multi-beam CSI report to the first node (12, 50).
10. The method of embodiment 9 wherein jointly identifying the number of beams and the index of the beam in the multi-beam CSI report comprises: determining (304A) a number of beams L used to construct the multi-beam CSI report; and determining (306A) a beam indicator for an l-th beam, the beam indicator identifying the index of a beam of the multi-beam CSI report if L is at least l, and otherwise identifying that L is less than l.
11. A method of operation of a second node (14) connected to a first node (12, 50) in a wireless communication network, comprising: reporting (400A) CSI corresponding to a first number of beams if the CSI corresponds to a first rank; and reporting (402A) CSI corresponding to a second number of beams if the CSI corresponds to a second rank.
12. The method of embodiment 11 wherein: the first rank is smaller than the second rank; and the first number of beams is larger than the second number of beams.
13. The method of any one of embodiments 1 to 12 further comprising: providing an indication of at least one beam index pair (l_k, m_k) in uplink control information, UCI, each beam index pair corresponding to a beam k.
14. The method of any one of embodiments 1 to 13 wherein: each beam is a k-th beam d(k) that comprises a set of complex numbers and has index pair (l_k, m_k), each element of the set of complex numbers being characterized by at least one complex phase shift such that d_n(k) = d_i(k)·α_{i,n}·e^{j2π(pΔ_{1,k}+qΔ_{2,k})}, where d_n(k) and d_i(k) are the n-th and i-th elements of the beam d(k), respectively; α_{i,n} is a real number corresponding to the i-th and n-th elements of the beam d(k); p and q are integers; and beam directions Δ_{1,k} and Δ_{2,k} are real numbers corresponding to beams with index pair (l_k, m_k) that determine the complex phase shifts e^{j2πΔ_{1,k}} and e^{j2πΔ_{2,k}}, respectively.
15. The method of any one of embodiments 1 to 14 wherein the first node (12, 50) is a radio access node (12).
16. The method of any one of embodiments 1 to 15 wherein the second node (14) is a wireless device (14).
17. A second node (14) adapted to operate according to the method of any one of embodiments 1 to 16.
18. A second node (14), comprising: at least one processor (20); and memory (22) comprising instructions executable by the at least one processor (20) whereby the second node (14) is operable to: report rich CSI feedback to the first node (12, 50) on a physical channel with a small payload.
19. A second node (14), comprising: a reporting module (32) operable to report rich CSI feedback to the first node (12, 50) on a physical channel with a small payload.
20. A method of operation of a first node (12) in a wireless communication network, comprising: receiving (100B) rich CSI feedback from a second node (14) on a physical channel with a small payload.
21. The method of embodiment 20 wherein the reporting of the rich CSI feedback comprises: a subset of codebook entries being selected (200B) from a codebook of coefficients; a codebook entry being selected (202B) from the subset; and receiving (204B) an index of the selected codebook entry.
22. The method of embodiment 21 wherein: each entry of the codebook is identified by an index k; the entry of the codebook with index k comprises a vector or matrix C_k of complex numbers with L′ rows and r columns, L′ and r being positive integers; each of (L′−1)r elements of each entry comprises a scalar complex number that can be one of N complex numbers; ∥C_k1 − C_k2∥_F > 0 where k1 ≠ k2 are indices of different codebook entries, and ∥C∥_F is the Frobenius norm of a matrix or vector C; the codebook comprises N^((L′−1)r) entries; and the subset comprises one of K^M entries, where K ≤ N and M < (L′−1)r are positive integers and each entry in the subset is identified by an index.
23. The method of embodiment 22 wherein the selected codebook entry for when r = 2 can be constructed from M = 2 distinct variables and each variable can be one of K = N complex numbers and C_k^H C_k = I for each entry C_k in the subset.
24. The method of embodiment 21 wherein: each entry of the codebook comprises a vector or matrix; one or more elements of each entry comprise a scalar complex number; and a norm of the matrix or vector difference between any two different codebook entries is greater than zero.
25. The method of any one of embodiments 20 to 23 wherein the selected codebook entry for when r = 2 can be constructed from M = 4 distinct variables and each variable can be one of K = √N complex numbers and C_k^H C_k ≠ I for at least one entry C_k in the subset.
26. A method of operation of a first node (12) in a wireless communication network for reporting multi-beam CSI, comprising: receiving (300B) a rank indicator and a beam count indicator in a first transmission on an uplink control channel; and receiving (302B) a cophasing indicator in a second transmission on the uplink control channel, the cophasing indicator identifying a selected entry of a codebook of cophasing coefficients, wherein the number of bits in the cophasing indicator is identified by at least one of the beam count indicator and the rank indicator.
27. The method of embodiment 26 wherein the beam count indicator comprises at least one of a number of beams and an indication of relative powers, the possible values of the indication comprising both a zero and a non-zero value.
28. A method of operation of a first node (12) connected to a second node (14) in a wireless communication network for reporting CSI, comprising: jointly identifying the number of beams and an index of a beam in a multi-beam CSI report; and receiving the multi-beam CSI report from the second node (14).
29. The method of embodiment 28 wherein jointly identifying the number of beams and the index of the beam in the multi-beam CSI report comprises: determining (304A) a number of beams L used to construct the multi-beam CSI report; and determining (306A) a beam indicator for an l-th beam, the beam indicator identifying the index of a beam of the multi-beam CSI report if L is at least l, and otherwise identifying that L is less than l.
30. A method of operation of a first node (12) in a wireless communication network, comprising: receiving (400B) CSI corresponding to a first number of beams if the CSI corresponds to a first rank; and receiving (402B) CSI corresponding to a second number of beams if the CSI corresponds to a second rank.
31. The method of embodiment 30 wherein: the first rank is smaller than the second rank; and the first number of beams is larger than the second number of beams.
32. The method of any one of embodiments 20 to 31 further comprising: receiving an indication of at least one beam index pair (l_k, m_k) in uplink control information, UCI, each beam index pair corresponding to a beam k.
33. The method of any one of embodiments 20 to 32 wherein: each beam is a k-th beam d(k) that comprises a set of complex numbers and has index pair (l_k, m_k), each element of the set of complex numbers being characterized by at least one complex phase shift such that d_n(k) = d_i(k)·α_{i,n}·e^{j2π(pΔ_{1,k}+qΔ_{2,k})}, where d_n(k) and d_i(k) are the n-th and i-th elements of the beam d(k), respectively; α_{i,n} is a real number corresponding to the i-th and n-th elements of the beam d(k); p and q are integers; and beam directions Δ_{1,k} and Δ_{2,k} are real numbers corresponding to beams with index pair (l_k, m_k) that determine the complex phase shifts e^{j2πΔ_{1,k}} and e^{j2πΔ_{2,k}}, respectively.
34. The method of any one of embodiments 20 to 33 wherein the first node (12, 50) is a radio access node (12).
35. The method of any one of embodiments 20 to 34 wherein the second node (14) is a wireless device (14).
36. A first node (12) adapted to operate according to the method of any one of embodiments 20 to 35.
37. A first node (12, 50), comprising: at least one processor (36); and memory (38) comprising instructions executable by the at least one processor (36) whereby the first node (12, 50) is operable to: receive rich CSI feedback from the second node (14) on a physical channel with a small payload.
38. A first node (12, 50), comprising: a receiving module (62) operable to receive rich CSI feedback from the second node (14) on a physical channel with a small payload.
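To make the phase-shift relation in embodiments 14 and 33 concrete, the following Python sketch builds a 2D DFT beam over a uniform planar array and numerically checks that any two of its elements satisfy d_n(k) = d_i(k)·α_{i,n}·e^{j2π(pΔ_{1,k}+qΔ_{2,k})} with α_{i,n} = 1. This is a minimal sketch; the array size, oversampling factors, and indexing convention are illustrative assumptions, not values mandated by the disclosure.

```python
import numpy as np

# Illustrative UPA: N1 x N2 elements, DFT oversampling Q1, Q2 (assumed values).
N1, N2, Q1, Q2 = 4, 2, 4, 4
l_k, m_k = 3, 1                     # beam index pair (l_k, m_k)
delta1 = l_k / (Q1 * N1)            # beam direction Δ1,k
delta2 = m_k / (Q2 * N2)            # beam direction Δ2,k

def d(p, q):
    """Element (p, q) of the DFT beam d(k)."""
    return np.exp(1j * 2 * np.pi * (p * delta1 + q * delta2))

# Check d_n(k) = d_i(k) * e^{j2π(pΔ1,k + qΔ2,k)} for integer element
# offsets (p, q), i.e. α_{i,n} = 1 for a pure DFT beam.
p_i, q_i = 1, 0                     # reference element i
p_off, q_off = 2, 1                 # integer offsets p, q to element n
lhs = d(p_i + p_off, q_i + q_off)
rhs = d(p_i, q_i) * np.exp(1j * 2 * np.pi * (p_off * delta1 + q_off * delta2))
assert np.isclose(lhs, rhs)
print("phase-shift relation holds:", bool(np.isclose(lhs, rhs)))
```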
The following acronyms are used throughout this disclosure.
1D One-Dimension
2D Two-Dimension
3GPP Third Generation Partnership Project
5G Fifth Generation
ACK Acknowledgement
ARQ Automatic Repeat-Request
ASIC Application Specific Integrated Circuit
BPSK Binary Phase-Shift Keying
CE Control Element
CPU Central Processing Unit
CQI Channel Quality Indicator
CRI CSI-RS Resource Indication
CSI Channel State Information
DCI Downlink Control Information
DFT Discrete Fourier Transform
DL-SCH Downlink Shared Channel
eNodeB Enhanced or Evolved Node B
EPDCCH Enhanced PDCCH
FDD Frequency Division Duplex
FD-MIMO Full Dimension MIMO
FPGA Field Programmable Gate Array
GSM Global System for Mobile Communications
HARQ Hybrid Automatic Repeat Request
LTE Long Term Evolution
MAC Medium Access Control
MCS Modulation and Coding Scheme
MIMO Multiple-Input Multiple-Output
ms millisecond
MU-MIMO Multi-User MIMO
NACK Negative Acknowledgement
NR New Radio
NZP Non-Zero Power
OFDM Orthogonal Frequency-Division Multiplexing
PDCCH Physical Downlink Control Channel
PMI Precoder Matrix Indicator
PRB Physical Resource Block
PUCCH Physical Uplink Control Channel
PUSCH Physical Uplink Shared Channel
QPSK Quadrature Phase-Shift Keying
RI Rank Indicator
RRC Radio Resource Control
RSRP Reference Signal Received Power
RSRQ Reference Signal Received Quality
RSSI Received Signal Strength Indicator
SINR Signal-to-Interference-and-Noise Ratio
SR Scheduling Request
SRB Signaling Radio Bearers
TDD Time-Division Duplex
TFRE Time/Frequency Resource Element
TS Technical Specification
UCI Uplink Control Information
UE User Equipment
ULA Uniform Linear Array
UL-SCH Uplink Shared Channel
UMB Ultra Mobile Broadband
UPA Uniform Planar Array
WCDMA Wideband Code-Division Multiple Access
WiMax Worldwide Interoperability for Microwave Access
Those skilled in the art will recognize improvements and modifications to the embodiments of the present disclosure. All such improvements and modifications are considered within the scope of the concepts disclosed herein.
84,431
11863252
DESCRIPTION OF EMBODIMENTS To make the technical problems resolved, the technical solutions used, and the technical effects achieved in this application clearer, the following describes the technical solutions in this application with reference to the accompanying drawings in the embodiments. The detailed descriptions provide various embodiments of a device and/or a process by using block diagrams, flowcharts, and/or examples. Because these block diagrams, flowcharts, and/or examples include one or more functions and/or operations, a person skilled in the art will understand that each function and/or operation in the block diagrams, the flowcharts, and/or the examples may be performed independently and/or jointly by various hardware, software, and firmware, and/or any combination thereof.
"A plurality of" in this application refers to two or more than two. The term "and/or" in this application describes an association relationship between associated objects and indicates that three relationships may exist. For example, A and/or B may represent the following three cases: only A exists, both A and B exist, and only B exists. In addition, the character "/" in this specification generally indicates an "or" relationship between the associated objects. In this application, the terms "first", "second", "third", "fourth", and the like are intended to distinguish between different objects but do not limit a sequence of the different objects. In this application, the nouns "network" and "system" are often used interchangeably, and a person skilled in the art will understand the meanings thereof.
In some cases, all "terminals"/"terminal devices" mentioned in this application may be mobile devices, for example, mobile phones, personal digital assistants, handheld or laptop computers, and similar devices having a telecommunications capability. In some cases, the "terminals"/"terminal devices" may also be wearable devices or in-vehicle devices, and include terminals in a future 5G network, terminals in a future evolved Public Land Mobile Network (PLMN), or the like. Such a terminal may include a device and a removable storage module (for example, but not limited to, a universal integrated circuit card (UICC) including a subscriber identification module (SIM) application, a universal subscriber identification module (USIM) application, or a removable user identity module (R-UIM) application) associated with the device. Alternatively, such a terminal may include a device that does not have such a module. In other cases, the terms "terminal"/"terminal device" may refer to a non-portable device having a similar capability, for example, a desktop computer, a set top box, or a network device. The terms "terminal"/"terminal device" may also refer to any hardware or software component that can terminate a communication session of a user. In addition, "user terminal", "User Equipment", "UE", "station", "STA", "user equipment", "user agent", "User Agent", "UA", "user device", "mobile device", "device", and the like are alternative terminologies synonymous with the "terminal"/"terminal device" in this specification. For ease of description, in this application, the devices mentioned above are collectively referred to as user equipment or UE.
The "access node" mentioned in this application is a network device, is an apparatus deployed in a radio access network to provide a wireless communication function for a terminal device, and has functions such as scheduling and configuring a downlink reference signal for UE.
The access node may include various forms of macro base stations, micro base stations, relay stations, access points, and the like, including systems and devices that improve upon peer devices in a conventional wireless telecommunications system. Such advanced or next-generation devices may be included in a Long Term Evolution (LTE) communications system, a 5G communications system, a future evolved system, or a plurality of communication fusion systems, for example, an evolved universal terrestrial radio access network NodeB (E-UTRAN NodeB, eNB) included in the LTE system, a new radio access NodeB (NR NodeB) included in 5G, another radio access point, or a similar component. In systems using different radio access technologies, a device having an access node function may have different names. For ease of description, in this application, the foregoing apparatuses providing the wireless communication function for the UE are collectively referred to as an access node.
FIG. 1 shows a network system architecture in this application. The system is used for UE capability reporting and channel state information (CSI) measurement. The system includes UE 100 and an access node 200. FIG. 1 uses an example of two stages, connection establishment and CSI measurement, between the UE 100 and the access node 200 in a movement process of the UE 100. Optionally, if the network is a network centered on a user and tracks the user as the user moves, the system further includes at least one transmission and reception point 300 (TRP). The transmission and reception point 300 has some functions of the access node 200 corresponding to the region in which the transmission and reception point 300 is located, can partially replace the access node 200 to interact with the UE 100, and may further have particular functions in a particular scenario. For example, the transmission and reception point 300 can listen to an uplink tracking reference signal sent by an inactive user that has not accessed the network, and perform listening by using the user as a center and moving with the user. After the user establishes a connection to the access node 200, capability reporting of the UE may also be sent to the access node 200 by using the transmission and reception point 300.
After the UE 100 establishes the connection to the access node 200, the UE 100 reports the capability of the UE, so that the access node 200 performs corresponding configuration. The reported capability includes a UE capability for CSI reporting, a UE capability for pilot configuration, a capability of supporting a CSI measurement type, a cache capability, and the like. This application focuses on the CSI reporting capability. The UE 100 reports, to the access node 200, CSI reporting capability information related to a quantity, supported in at least one time-domain unit, of ports of pilots used for CSI measurement. A concept of the pilot may include a reference signal (RS), a synchronization signal block (SSB), a preamble, and the like. Subsequently, this application is described by using the RS to represent the pilot. After receiving the capability information, the access node 200 performs a CSI measurement and reporting related configuration for the UE 100 based on the capability information, including a resource configuration and a reporting configuration.
Based on the capability information, the access node 200 needs to consider whether the CSI measurement configuration exceeds a processing capability of the UE, whether processing of the corresponding CSI measurement can be completed by the time the UE 100 needs to perform reporting so that reporting can take place, and an appropriate time for configuring or triggering the UE 100 to perform reporting. Then, the CSI measurement stage is performed. The access node 200 sends a pilot signal to the UE 100 to perform channel measurement and interference measurement. The UE 100 reports channel state information (CSI), including, for example, a precoding matrix indicator (PMI), a rank indication (RI), and a channel quality indicator (CQI). The UE 100 may inform, by using the PMI, the access node 200 of an optimal precoding matrix in current downlink transmission, and inform, by using the RI, the access node 200 of an optimal quantity of layers in the current downlink transmission. The CQI indicates, after the suggested RI and PMI are used, an available modulation and coding scheme for ensuring that a bit error rate of downlink data receiving does not exceed a predetermined value. The CSI may be periodically or aperiodically reported to the access node 200. The difference between the two manners lies in that the reporting configuring or triggering manners are different. It should be noted that FIG. 1 shows merely an example of a network system architecture in this application, and this application is not limited thereto. Similarly, this application may also be applied to an IEEE 802.11 system: a station (STA) reports a channel measurement information reporting capability of the station to an access point (AP), thereby effectively configuring channel measurement reporting of the STA.
Embodiment 1
In a network, UE establishes a connection to an access node, and reports a CSI reporting capability by using a method in this embodiment, so that the access node learns of an actual CSI reporting capability of the UE, thereby flexibly configuring CSI measurement and reporting. In this embodiment and subsequent embodiments, interaction between the UE and the access node is described merely by way of example. This application is not limited thereto. When a transmission and reception point (TRP) managed by the access node in the network has some related functions of the access node, this application may further be applied to a scenario in which the UE interacts with the TRP to report the CSI reporting capability.
According to this embodiment of this application, FIG. 2 is a flowchart of a first embodiment of a terminal device capability reporting method according to this application. For ease of understanding the solution, behavior on the sides of both the UE and the access node is described in this embodiment and the subsequent embodiments. Descriptions are provided from the perspectives of all interacting parties. However, this in no way implies that the improvements in the system lie in combining the steps of all the interacting parties; the technical solution provided in this application has improvements on each side of the system. The method includes the following steps.
S101. UE generates capability information.
After establishing a connection to the access node, the UE needs to report, to the access node, capability information of a capability supported by the UE. The reported capability includes a UE capability for CSI reporting, a UE capability for pilot configuration, and the like. This application focuses on the CSI reporting capability.
The CSI reporting capability is a reflection of a calculation capability of the UE that relates to a CSI reporting requirement.
S102. The UE sends the capability information of the UE to an access node.
The capability information is used to indicate a channel state information (CSI) reporting capability of the UE. The capability information is associated with a quantity, supported by the UE in at least one time-domain unit, of ports of pilots (for example, RSs) used for CSI measurement. The quantity of ports of the pilots (for example, the RSs) used for CSI measurement may be a quantity of ports for sending the pilots (for example, the RSs) used for CSI measurement. A port may be logically understood as a virtual antenna. The port is defined from the perspective of a receiver: the receiver considers a port as an independent antenna channel. Ports may be distinguished by using different time-frequency resource locations, different code-domain extension sequences, and the like. Different ports may be used to distinguish different channels.
There may be one piece of or at least two pieces of capability information, for different CSI measurement types such as different codebook types, different precoding matrix indicator (PMI) types, or different bandwidth part (BWP) sizes. Optionally, there may be different combinations of types, such as different codebook types and different PMI types, different codebook types and different BWP sizes, different PMI types and different BWP sizes, or different codebook types, different PMI types, and different BWP sizes. The at least two pieces of capability information may respectively indicate CSI reporting capabilities of the UE for different CSI measurement types. If the capability information is not distinguished for the different CSI measurement types, optionally, the CSI reporting capability of the UE is determined based on a reporting capability in a case of a particular type (for example, a maximum complexity type, a maximum time-consuming type, or a preset specified type), and the capability information indicates a CSI reporting capability in the case of that particular type.
The quantity of ports of the pilots used for CSI measurement includes at least one of the following: a quantity of ports of pilots used for channel measurement; a sum of the quantity of ports of the pilots used for channel measurement and a quantity of ports of pilots used for interference measurement; and a weighted sum of the quantity of ports of the pilots used for channel measurement and the quantity of ports of the pilots used for interference measurement. Optionally, if the calculation complexity of the pilot used for interference measurement is less than the complexity of the pilot used for channel measurement, a weighting factor of the quantity of ports of the pilots used for interference measurement that is less than or equal to 1 may be selected. However, this is merely an example in this application, and this application is not limited thereto.
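As a small illustration of the weighted port count just described, the sketch below computes an effective quantity of ports from the channel measurement ports and the interference measurement ports. It is a minimal sketch: the weighting factor value is a hypothetical choice, since the description only requires it to be less than or equal to 1 when interference measurement is cheaper to process.

```python
def effective_ports(cm_ports: int, im_ports: int, im_weight: float = 0.5) -> float:
    """Weighted sum of channel measurement (CM) and interference
    measurement (IM) pilot ports. im_weight <= 1 reflects the case
    where IM processing is less complex than CM processing; the value
    0.5 here is purely illustrative."""
    assert 0 < im_weight <= 1
    return cm_ports + im_weight * im_ports

# Plain sum (weight 1) vs. a weighted sum with a cheaper IM pilot.
print(effective_ports(8, 16, im_weight=1.0))  # 24.0 ports
print(effective_ports(8, 16, im_weight=0.5))  # 16.0 ports
```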
The CSI reporting capability of the UE may be reflected by using the quantity, supported in the at least one time-domain unit, of ports of the pilots used for CSI measurement: optionally, by using a maximum quantity, supported in the at least one time-domain unit, of ports of the pilots used for CSI measurement, that is, a reflection of the maximum calculation amount that can be processed by the UE in the at least one time-domain unit; or by using a quantity, supported in at least one time-domain unit and determined according to a predetermined policy depending on a particular consideration, of ports of pilots used for CSI measurement, for example, a quantity of ports that reflects an average calculation capability or a quantity of ports that can be processed in a particular scenario. A concept of the pilot may include a reference signal (RS), a synchronization signal block (SSB), a preamble, and the like. Subsequently, this application is described by using the RS to represent the pilot.
The CSI reporting capability of the UE may also be reflected by using a quantity of time-domain units required for supporting a predetermined quantity of ports of the pilots used for CSI measurement (namely, information about the quantity of time-domain units that is required for calculating the calculation amount corresponding to the predetermined quantity of ports). Specifically, for example, the predetermined quantity of ports is P (P is greater than or equal to 1). Optionally, the predetermined quantity of ports may be a preset minimum configurable quantity of ports that is supported by the UE, a preset maximum configurable quantity of ports that is supported by the UE, or a quantity of ports that is determined according to a predetermined configuration criterion. The quantity of time-domain units that is required for the P ports is calculated as N (N is greater than or equal to 1). Therefore, P/N is the quantity, supported in each time-domain unit, of ports of the pilots used for CSI measurement. It can be learned that the quantity of time-domain units that is required for calculating the predetermined quantity of ports of the pilots used for CSI measurement is in an indirect association relationship with the quantity, supported in the at least one time-domain unit, of ports of the pilots used for CSI measurement.
Optionally, the quantity of time-domain units that is required for calculating the predetermined quantity of ports of the pilots used for CSI measurement may be a minimum quantity of time-domain units that is required for calculating the predetermined quantity of ports, that is, reflecting the fastest processing speed of the UE, or may be a quantity, determined according to a predetermined policy depending on a particular consideration, of time-domain units that is required for calculating the predetermined quantity of ports, for example, a quantity of time-domain units reflecting an average calculation speed or a quantity of time-domain units corresponding to a processing speed in a particular scenario. It should be noted that this application is not limited to these two CSI capability representations; any direct or indirect representation associated with the quantity, supported in the at least one time-domain unit, of ports of the pilots used for CSI measurement can be used in this application.
Therefore, a corresponding CSI capability indication means that the capability information includes information about the quantity, supported in the at least one time-domain unit, of ports of the pilots used for CSI measurement, or information about the quantity of time-domain units that is required for calculating the predetermined quantity of ports of the pilots used for CSI measurement.
For division of the time-domain unit, one time-domain unit may include at least one of the following division types: n time-domain symbols, n mini-slots, n slots, n subframes, n frames, or n time-domain subunits defined in another form, where n is greater than or equal to 1.
The capability information may be indicated by using a bit in a bitmap or by using a value of an indication field. For example, if the capability information is the foregoing quantity, supported in the at least one time-domain unit, of ports of the pilots used for CSI measurement, it is assumed that the quantity of potential values of all possible CSI reporting capabilities is P. (For example, P is 32. In this case, all the possible CSI reporting capabilities have 32 potential values. The 32 potential values correspond to port 1 to port 32, including port 1 and port 32. This is merely an example. The potential values may alternatively correspond to port 1 to port 48, and so on.) An indication manner using a bit in a bitmap may be as follows: the bitmap has a length of P bits; the following example is described by using an example in which P is 32, and it may be understood that this application is not limited thereto. Port 1 to port 32 are separately indicated by each of the 32 bits, and a bit may be set to 1 or 0 for indication. For example, if the quantity of ports to be reported is M, the M-th bit (the start location is 1) in the bitmap reported by the UE is set to 1 (or 0), and the other bits are set to 0 (or 1). When the value of an indication field is used for indication, the quantity of ports may be indicated by using different values of an indication field having a length of ⌈log2 P⌉ bits.
Likewise, if the capability information is the foregoing quantity of time-domain units that is required for calculating the predetermined quantity of ports of the pilots used for CSI measurement, it is assumed that the quantity of potential values of all possible CSI reporting capabilities is N. (Likewise, for example, N is 16. In this case, all the possible CSI reporting capabilities have 16 potential values, for example, unit 1 to unit 16 required for calculating 32 ports.) An indication manner using a bit in a bitmap may be as follows: the bitmap has a length of N bits; assuming that the quantity of time-domain units to be reported is M (that is, calculating the 32 ports requires M units), the M-th bit (the start location is 1) in the bitmap reported by the UE is set to 1 (or 0), and the other bits are set to 0 (or 1). When the value of an indication field is used for indication, the quantity of time-domain units may be indicated by using different values of an indication field having a length of ⌈log2 N⌉ bits. It may be understood that the indication form is not limited to the foregoing examples, and any manner that can indicate the different types may be used in this application.
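The two indication manners above can be sketched in a few lines of Python. This is a minimal illustration of the described encodings (a one-hot P-bit bitmap versus a ⌈log2 P⌉-bit field value); the function names and the set-bit-is-1 convention are assumptions for the example.

```python
import math

def encode_bitmap(value: int, num_values: int) -> str:
    """One-hot bitmap of length num_values; the value-th bit
    (start location 1) is set to 1, all other bits to 0."""
    assert 1 <= value <= num_values
    bits = ["0"] * num_values
    bits[value - 1] = "1"
    return "".join(bits)

def encode_field(value: int, num_values: int) -> str:
    """Indication field of ceil(log2(num_values)) bits carrying the
    value as a 0-based binary index."""
    width = math.ceil(math.log2(num_values))
    return format(value - 1, f"0{width}b")

# Reporting "4 ports per slot" out of P = 32 potential values:
print(encode_bitmap(4, 32))   # 32-bit bitmap with only the 4th bit set
print(encode_field(4, 32))    # 5-bit field: '00011'
```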
S103. The access node determines a CSI measurement related configuration of the UE based on the received capability information reported by the UE.
Optionally, the access node determines a configuration range based on the CSI reporting capability of the UE, and then selects a determined configuration based on a status of the access node and a possible status of the UE. Optionally, the related configuration may include a configuration of the pilot used for CSI measurement, a configuration of a corresponding time-frequency resource, and a reporting configuration. The following manner is used as an example for description, but this application is not limited thereto. For example, a base station configures one measurement setting, one or more reporting settings, and one or more resource settings based on the received capability information reported by the UE. Each measurement setting includes one or more links. Each link is used to connect one reporting setting and one resource set, and indicates whether the measurement setting is used for channel measurement or interference measurement. Each reporting setting includes the content of the CSI reporting and the occupied time-domain and frequency-domain resource locations. Each resource setting includes the time-domain and frequency-domain resource locations occupied by a pilot resource used for CSI measurement.
When the foregoing content is configured, the access node needs to consider whether the configuration exceeds a processing capability of the UE, whether processing of the corresponding CSI measurement can be completed by the time the UE needs to perform reporting so that reporting can take place, and an appropriate time for configuring or triggering the UE to perform reporting. Therefore, optionally, a configuration related parameter may include at least one of the following: a quantity of reporting settings, a quantity of resource settings, a quantity of CSI pilot resource sets included in each resource setting, a quantity of CSI pilot resources included in each resource setting, a quantity of ports of a CSI pilot included in each resource setting, a period for periodic reporting, and a time offset for aperiodic reporting, where the time offset is the time interval from CSI reporting triggering to CSI reporting. For a specific resource configuration policy, refer to the detailed descriptions of FIG. 6 to FIG. 8 below. Details are not described herein.
S104. The access node sends information about the CSI measurement related configuration to the UE.
The UE performs configuration based on the information about the CSI measurement related configuration that is sent by the access node, and completes CSI measurement and reporting based on the configuration.
It should be noted that the foregoing merely describes a manner of defining the CSI reporting capability to be associated with the quantity, supported by the UE in the at least one time-domain unit, of ports of the pilots (for example, the RSs) used for CSI measurement. However, this application is not limited thereto. The CSI reporting capability may further be defined as a quantity, supported in at least one time-domain unit, of ports, of pilots (CSI-RSs) used for CSI measurement, of CSI-RS sets, or of CSI-RS settings. A set may include one or more CSI-RS resources, and a setting may include a plurality of sets.
According to the terminal device capability reporting method in this embodiment of this application, capability information that reflects the actual CSI reporting capability of the UE is reported, so that the access node can learn the actual CSI reporting capability of the UE, thereby flexibly configuring CSI measurement and reporting of the UE.
Embodiment 2
FIG. 3 is a flowchart of a second embodiment of a terminal device capability reporting method according to this application. The difference between this embodiment and Embodiment 1 lies in that, in this embodiment, there are definitely at least two CSI reporting capabilities. Content that is the same as or similar to that in Embodiment 1 is not described again in this embodiment. It should be noted that the CSI reporting capability in this embodiment is not necessarily defined, in the manner of Embodiment 1, to be associated with the quantity, supported by the UE in the at least one time-domain unit, of ports of the pilots used for CSI measurement, and may be a CSI reporting capability defined for another purpose. The manner in this embodiment may be used provided that there are at least two CSI reporting capabilities. The method includes the following steps.
S201. UE generates capability information.
In a network, the CSI reporting capabilities of the UE may need to be distinguished for different CSI measurement types, for example, for different codebook types (such as type I and type II), different precoding matrix indicator (PMI) types (such as having a PMI, having no PMI, a narrowband PMI, and a broadband PMI), and different bandwidth part (BWP) sizes. Optionally, there may be different combinations of types, such as different codebook types and different PMI types, different codebook types and different BWP sizes, different PMI types and different BWP sizes, or different codebook types, different PMI types, and different BWP sizes. The access node needs to perform the CSI measurement related configuration separately based on the reporting capabilities of the UE for the different CSI measurement types. Therefore, the CSI reporting capabilities need to be determined for the different CSI measurement types, to generate at least two pieces of capability information of the CSI reporting capabilities corresponding to the different CSI measurement types. If the CSI reporting capability is defined in the manner of Embodiment 1, refer to the related descriptions in Embodiment 1 for details. Details are not described herein again.
S202. The UE sends the capability information of the UE to an access node.
In this embodiment, the capability information includes capability information of a capability 1, a capability 2, . . . , and a capability N, each indicating a CSI reporting capability corresponding to a different CSI measurement type. The CSI reporting capabilities for the different CSI measurement types may be correspondingly indicated by using a corresponding format location of each CSI reporting capability (for example, the capability 1 or the capability 2) in a signaling format in the reporting signaling (for example, the first piece of capability information corresponds to codebook type I, and the second piece of capability information corresponds to codebook type II). The CSI reporting capabilities may have a concatenation relationship in the format (that is, independent fields are connected end-to-end, to indicate more values compared with one field); a sketch of such a concatenated encoding is given below.
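The following Python sketch illustrates the concatenation relationship just described: per-type capability fields are packed end-to-end in a fixed order, so that a field's position in the signaling format identifies the CSI measurement type it describes. The field order and the field width are illustrative assumptions, not values fixed by this application.

```python
# Each capability value occupies a fixed-width field; the field's position
# in the concatenation identifies the CSI measurement type it describes
# (e.g., field 0 -> codebook type I, field 1 -> codebook type II).
FIELD_WIDTH = 5  # assumed width: enough for 32 capability values

def pack_capabilities(values):
    """Concatenate per-type capability fields end-to-end."""
    return "".join(format(v, f"0{FIELD_WIDTH}b") for v in values)

def unpack_capabilities(bits, count):
    """Split the concatenated format back into per-type values."""
    return [int(bits[i * FIELD_WIDTH:(i + 1) * FIELD_WIDTH], 2)
            for i in range(count)]

payload = pack_capabilities([4, 16])    # type I: 4 ports, type II: 16 ports
print(payload)                          # '0010010000'
print(unpack_capabilities(payload, 2))  # [4, 16]
```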
The meaning corresponding to each value may be different codebook types, different PMI types, different BWP sizes, or various combinations of the foregoing types.
S203. The access node determines a CSI measurement related configuration of the UE based on the received capability information reported by the UE.
Based on the plurality of pieces of received indication information of the CSI reporting capabilities, the access node performs the CSI measurement related configuration separately based on the reporting capabilities of the UE for the different CSI measurement types. Other content similar to S103 is not described herein again.
S204. The access node sends information about the CSI measurement related configuration to the UE.
The UE performs configuration based on the information about the CSI measurement related configuration that is sent by the access node, and completes CSI measurement and reporting based on the configuration during subsequent CSI measurement.
According to the terminal device capability reporting method in this embodiment of this application, the UE reports a plurality of CSI reporting capabilities for the different CSI measurement types, so that the access node can learn of the actual CSI reporting capabilities of the UE for the different CSI measurement types, thereby flexibly configuring CSI measurement and reporting of the UE.
Embodiment 3
FIG. 4 is a flowchart of a third embodiment of a terminal device capability reporting method according to this application. The difference between this embodiment and Embodiment 1 or 2 lies in that, in this embodiment, a UE capability reporting procedure is performed when the UE capability for CSI reporting has different definition type selections. In addition, Embodiment 2 describes a case in which the capability information has a plurality of capability values, whereas this embodiment is mainly for a case in which the selected format/definition form corresponding to each capability value is optional. Content that is the same as or similar to that in Embodiment 1 or 2 is not described again in this embodiment. The method includes the following steps.
S301. An access node sends CSI reporting capability definition type selection indication information to the UE.
When a network includes a plurality of CSI reporting capability definition types, after establishing a connection to the access node, the UE is uncertain about the definition type of the CSI reporting capability to be reported to the access node. Therefore, the access node needs to send the CSI reporting capability definition type selection indication information to the UE, to inform the UE of the definition type of the CSI reporting capability that is to be determined and reported subsequently. It should be noted that, in this embodiment, the CSI reporting capability corresponding to the CSI reporting capability definition type selection indication information is not necessarily defined in the manner of Embodiment 1, and may be a CSI reporting capability defined for another purpose. The manner in this embodiment may be used for indication provided that the CSI reporting capability has a plurality of definition types. Optionally, in the definition of the CSI reporting capability in Embodiment 1, the definition type is used for defining different CSI reporting capabilities according to different division types of a time-domain unit.
For example, the CSI reporting capability may be associated with a quantity, supported in n symbols, of ports of pilots used for CSI measurement; a quantity, supported in n mini-slots, of ports of pilots used for CSI measurement; a quantity, supported in n slots, of ports of pilots used for CSI measurement; a quantity, supported in n subframes, of ports of pilots used for CSI measurement; a quantity, supported in n frames, of ports of pilots used for CSI measurement; or a quantity, supported in n time-domain subunits defined in another form, of ports of pilots used for CSI measurement, where n is greater than or equal to 1. Therefore, the definition type selection indication information may indicate the definition type by using a definition type index or a corresponding bit in a bitmap. For example, if the time-domain unit has four division types, the bitmap may have a length of four bits, and each bit corresponds to one type (for example, n symbols or n slots). A corresponding bit is set to 1 or 0, to indicate the division type (for example, division by n symbols or by n slots) used for dividing the time-domain unit in the definition types. The definition type selection indication information may alternatively be indicated by using a value of an indication field. For example, if the time-domain unit has four division types, the indication field may be two bits, and different values such as 00, 01, 10, and 11 of the field are used to respectively indicate the corresponding division types. It may be understood that the indication form is not limited to the foregoing examples, and any manner that can indicate the different types may be used in this application.
Optionally, in the case of the quantity, supported in the at least one time-domain unit in the definition of the CSI reporting capability in Embodiment 1, of ports of the pilots used for CSI measurement, for the definition type, different CSI reporting capabilities may be defined for different calculation capability categories of the UE. For example, the CSI reporting capability may be associated with whether the UE has a plurality of parallel calculation channels for a CSI measurement reporting setting and whether each time-domain unit supports calculation of one reporting setting, and may include three categories. First, the UE does not have a plurality of parallel calculation channels for the CSI measurement reporting setting, and each time-domain unit supports calculation of one reporting setting. Second, the UE has a plurality of parallel calculation channels for the CSI measurement reporting setting, and one parallel calculation channel corresponds to calculation of one reporting setting. Third, the UE has a plurality of parallel calculation channels for the CSI measurement reporting setting, and the plurality of parallel calculation channels correspond to parallel calculation of one reporting setting. Therefore, likewise, the definition type selection indication information may be indicated by using the definition type index or the corresponding bit in the bitmap, or by using the value of the indication field. The specific manner is similar, and details are not described herein again. It may be understood that the indication form is not limited to the foregoing examples, and any manner that can indicate the different types may be used in this application.
S302. The UE generates capability information based on the received CSI reporting capability definition type selection indication information.
According to the foregoing example, if the definition type selection indication information indicates that the time-domain unit is divided in units of one slot, the capability information to be reported is generated based on an association definition of a quantity, supported in each slot, of ports of the pilots used for CSI measurement. Likewise, if the definition type selection indication information indicates that the calculation capability category is the second category described above, the capability information to be reported is generated based on an association definition in which the UE has a plurality of parallel calculation channels for the CSI measurement reporting setting and one parallel calculation channel corresponds to calculation of one reporting setting.
S303. The UE sends the capability information of the UE to the access node.
The capability information is used to indicate a channel state information (CSI) reporting capability of the UE. If the capability information is associated with a quantity, supported by the UE in at least one time-domain unit, of ports of pilots, refer to the related descriptions in Embodiment 1 for details. Details are not described herein again. For the three calculation capability categories in the example in S302: in the case of the first category and the third category, optionally, the capability information includes information about a quantity of time-domain units required for calculating a predetermined quantity of ports of the pilots used for CSI measurement; in the case of the second category, optionally, the capability information includes information about a quantity of time-domain units required for calculating a predetermined quantity of ports of the pilots used for CSI measurement and a quantity of parallel calculation channels.
S304. The access node determines a CSI measurement related configuration of the UE based on the received capability information reported by the UE.
This step is similar to S103, and details are not described herein again.
S305. The access node sends information about the CSI measurement related configuration to the UE.
The UE performs configuration based on the information about the CSI measurement related configuration that is sent by the access node, and completes CSI measurement and reporting based on the configuration during subsequent CSI measurement.
According to the terminal device capability reporting method in this embodiment of this application, the access node sends the CSI reporting capability definition type selection indication information to the UE, so that the UE can effectively report the CSI reporting capability when the system includes a plurality of CSI reporting capability definition types.
Embodiment 4
FIG. 5 is a flowchart of a fourth embodiment of a terminal device capability reporting method according to this application. The difference between this embodiment and Embodiment 3 lies in that, in this embodiment, the UE reports a definition type selection indication when the UE capability for CSI reporting has different definition type selections. Content that is the same as or similar to that in Embodiment 3 is not described again in this embodiment. It should be noted that, similar to Embodiment 3, the CSI reporting capability in this embodiment is not necessarily defined in the manner of Embodiment 1, and may be a CSI reporting capability defined for another purpose.
The manner in this embodiment may be used provided that the CSI reporting capability has a plurality of definition types. The method includes the following steps.
S401. The UE generates capability information.
When a network includes a plurality of CSI reporting capability definition types, after establishing a connection to an access node to perform capability reporting, the UE may generate the capability information based on a CSI reporting capability definition type selected by the UE. For example, for the division type of the time-domain unit, the UE may flexibly reflect its calculation processing capability by using different time-domain unit division sizes based on the calculation capability of the UE.
S402. The UE sends the capability information of the UE to an access node.
For related descriptions of the capability information, refer to Embodiment 1 to Embodiment 3. Details are not described herein again.
S403. The UE sends, to the access node, the CSI reporting capability definition type selection indication information selected by the UE.
When the network includes the plurality of CSI reporting capability definition types, optionally, the UE may indicate, to the access node by using different signaling formats of the capability information, the definition type selected by the UE. Optionally, the UE may inform, by using the definition type selection indication information, the access node of the definition type selected by the UE. The definition type selection indication information and the capability information may be sent together or separately. For examples of the division of the definition type and the indication manner of the definition type selection indication information, refer to the related descriptions in Embodiment 3. Details are not described herein again. It should be noted that there is no definite order between S402 and S403; the steps are merely intended to reflect that the UE sends two pieces of information to the access node. If the definition type selection indication information is indicated by using different signaling formats of the capability information reported by the UE, S403 may be omitted. If the definition type selection indication information and the capability information are sent together, S402 and S403 are one step.
S404. The access node determines a CSI measurement related configuration of the UE based on the received capability information reported by the UE.
This step is similar to S103, and details are not described herein again.
S405. The access node sends information about the CSI measurement related configuration to the UE.
The UE performs configuration based on the information about the CSI measurement related configuration that is sent by the access node, and completes CSI measurement and reporting based on the configuration during subsequent CSI measurement.
According to the terminal device capability reporting method in this embodiment of this application, the UE indicates, to the access node, the definition type selected by the UE, so that when the system includes a plurality of CSI reporting capability definition types, the access node can effectively learn of the CSI reporting capability of the UE, thereby performing effective configuration.
The foregoing embodiments emphasize reporting of the CSI reporting capability of the UE. After learning of the reporting capability of the UE, the access node may perform the CSI measurement related configuration based on the capability.
The following provides descriptions about how the access node performs the CSI measurement related configuration.
FIG. 6 is a schematic diagram of a configuration manner of configuring, by an access node based on a CSI reporting capability of UE, the UE to periodically or semi-persistently report CSI. There are two configuration manners: a configuration manner 1 and a configuration manner 2. It is assumed that the CSI reporting capability reported by the UE is that a maximum of four ports (4 ports) can be supported in each unit, and the unit is divided by one slot. In other words, the CSI reporting capability of the UE is that the UE can support a maximum of 4 ports in each slot. Therefore, when configuring CSI measurement and reporting of the UE, the access node needs to consider that the UE can support a maximum of 4 ports in each slot. When reporting CSI, the UE needs to complete the calculation amount for the corresponding CSI measurement, so as to report the CSI.
As shown in FIG. 6, in the configuration manner 1, the access node configures the UE to periodically or semi-persistently report CSI. The CSI is reported every four slot units. The access node configures, in the first slot unit (slot 0), a channel measurement reference signal set (CMR set) that includes 8 ports and that is used for CSI measurement, and configures, in the second slot unit (slot 1), an interference measurement reference signal set (IMR set) that includes 16 ports and that is used for CSI measurement. It can be learned that to complete channel measurement and interference measurement, the UE needs to complete a calculation amount of 8 ports + 16 ports = 24 ports. According to the CSI reporting capability of the UE, a maximum of 4 ports can be supported in each slot. In this case, the UE needs at least 6 slots to complete the measurement calculation for 24 ports. Therefore, if the access node configures the calculation amount for CSI measurement with 24 ports in such a manner as the configuration manner 1, the UE cannot be configured to report CSI every four slots (for example, in the fifth slot unit (slot 4)): the UE cannot complete the measurement or perform the reporting. Therefore, the configuration manner 1 is an unavailable configuration.
As shown in FIG. 6, in the configuration manner 2, the access node configures the UE to periodically or semi-persistently report CSI. Likewise, the CSI is reported every four slot units. The access node configures, in the first slot unit, a CMR set that includes 8 ports, and configures, in the second slot unit, an IMR set that includes 8 ports. It can be learned that to complete channel measurement and interference measurement, the UE needs to complete a calculation amount of 8 ports + 8 ports = 16 ports. According to the CSI reporting capability of the UE, a maximum of 4 ports can be supported in each slot. In this case, the UE needs at least four slots to complete the measurement calculation for 16 ports. Therefore, if the access node configures the calculation amount for CSI measurement with 16 ports and the reporting period (every four slots) in such a manner as the configuration manner 2, the UE can complete the corresponding measurement and perform the reporting based on the configuration. Therefore, the configuration manner 2 is an available configuration. A feasibility check corresponding to this periodic example is sketched below.
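The arithmetic behind FIG. 6 can be captured in a few lines. This is a minimal sketch of the feasibility rule described above (the total configured ports, divided by the per-slot port capability, must fit within the reporting period); it is illustrative only and not a configuration algorithm specified by this application.

```python
import math

def periodic_config_available(ports_per_set, ports_per_slot, period_slots):
    """True if the UE can finish the configured CSI calculation within
    one reporting period, given its per-slot port capability."""
    total_ports = sum(ports_per_set)
    slots_needed = math.ceil(total_ports / ports_per_slot)
    return slots_needed <= period_slots

# FIG. 6, configuration manner 1: CMR 8 ports + IMR 16 ports, 4 ports/slot,
# reporting every 4 slots -> 24 ports need 6 slots > 4 -> unavailable.
print(periodic_config_available([8, 16], 4, 4))  # False

# FIG. 6, configuration manner 2: CMR 8 + IMR 8 -> 16 ports need 4 slots.
print(periodic_config_available([8, 8], 4, 4))   # True
```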
FIG. 7 is a schematic diagram of configuring, by an access node based on a CSI reporting capability of UE, the UE to aperiodically report CSI. There are two configuration manners: a configuration manner 1 and a configuration manner 2. It is assumed that the CSI reporting capability reported by the UE is that a maximum of 16 ports can be supported in each unit, and the unit is one slot. That is, the CSI reporting capability of the UE is that the UE can support a maximum of 16 ports in each slot. Therefore, when configuring CSI measurement and reporting of the UE, the access node needs to consider that the UE can support a maximum of 16 ports in each slot. When reporting CSI, the UE needs to complete the calculation amount for the corresponding CSI measurement, so as to report the CSI.

As shown in FIG. 7, in the configuration manner 1, the access node configures a time offset for aperiodic reporting of the UE. The time offset is the interval between the triggering time-domain unit in which the access node sends the CSI trigger and the time-domain unit for CSI reporting. As shown in FIG. 7, the time offset Y is configured as 5 slots. For CSI measurement, measurement and reporting may be performed a plurality of times (such as CSI measurement 1 and CSI measurement 2) in consideration of various factors (for example, an interference change).

For the CSI measurement 1, two reporting settings are configured for the UE, respectively corresponding to a CMR set 1 and an IMR set 1, and a CMR set 2 and an IMR set 2. The access node configures, in the first slot unit (slot 0), the CMR set 1 that includes 8 ports and that is used for CSI measurement, configures, in the second slot unit (slot 1), the CMR set 2 including 16 ports, configures, in the fourth slot unit (slot 3), the IMR set 1 that includes 8 ports and that is used for CSI measurement, and configures, in the fifth slot unit (slot 4), the IMR set 2 including 16 ports. It can be learned that the total calculation amount for the CSI measurement 1 to be calculated by the UE is 8 + 16 + 8 + 16 = 48 ports. For the CSI measurement 2, two reporting settings are likewise configured for the UE, respectively corresponding to the CMR set 1 and an IMR set 3, and the CMR set 2 and an IMR set 4. The IMR set 3 including 8 ports is configured in the seventh slot unit (slot 6), and the IMR set 4 including 16 ports is configured in the eighth slot unit (slot 7). It can be learned that the total calculation amount for the CSI measurement 2 to be calculated by the UE is also 8 + 16 + 8 + 16 = 48 ports.

It can be learned that for the CSI measurement 1, to complete calculation for 48 ports, the UE needs at least 48/16 = 3 slots. If the access node configures that the CSI measurement 1 is triggered in slot 3, the UE can complete the CSI measurement 1 and perform reporting in slot 8, after the configured time offset Y of 5 slots. Therefore, the configuration of the time offset and the calculation amount for the CSI measurement 1 is available. Likewise, the configuration of the time offset and the calculation amount for the CSI measurement 2 is also available. However, if the access node configures that the CSI measurement 2 is triggered in slot 5, the UE is still calculating the CSI measurement 1 at that moment and has not finished the calculation. This is beyond the capability of the UE, and the access node cannot configure that the CSI measurement 2 is triggered at that moment. Therefore, the configuration is unavailable.

As shown in FIG. 7, in the configuration manner 2, other conditions remain unchanged. If the access node configures that the CSI measurement 2 is triggered in slot 6, the UE is idle at that point, and the access node may trigger new CSI reporting. The access node may configure triggering at this moment, and the configuration is available.
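The trigger-timing check in FIG. 7 can be expressed the same way. Below is a minimal sketch assuming the UE processes one measurement at a time (no parallel processing); the helper name is illustrative.

```python
import math

def earliest_next_trigger(prev_trigger_slot, prev_total_ports, max_ports_per_slot):
    """Earliest slot in which the access node can trigger new aperiodic CSI
    reporting, assuming the UE processes one measurement at a time.
    The UE stays busy for ceil(ports / per-slot capability) slots."""
    busy_slots = math.ceil(prev_total_ports / max_ports_per_slot)
    return prev_trigger_slot + busy_slots

# CSI measurement 1: 48 ports, triggered in slot 3, capability of 16 ports/slot.
# The UE is busy in slots 3 to 5, so slot 5 is an unavailable trigger and
# slot 6 is the earliest available one, matching the FIG. 7 discussion.
print(earliest_next_trigger(3, 48, 16))  # 6
```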
FIG. 8 is another schematic diagram of configuring, by an access node based on a CSI reporting capability of UE, the UE to aperiodically report CSI. There are two configuration manners: a configuration manner 1 and a configuration manner 2. It is assumed that the CSI reporting capability reported by the UE is that a maximum of 16 ports can be supported in each unit, and the unit is one slot. In other words, the CSI reporting capability of the UE is that the UE can support a maximum of 16 ports in each slot. Therefore, when configuring CSI measurement and reporting of the UE, the access node needs to consider that the UE can support a maximum of 16 ports in each slot. When reporting CSI, the UE needs to complete the calculation amount for the corresponding CSI measurement, so as to report the CSI.

As shown in FIG. 8, in the configuration manner 1, the time offset configured by the access node for aperiodic reporting of the UE is still 5 slots. For the CSI measurement 1, two reporting settings are configured for the UE, respectively corresponding to a CMR set 1 and an IMR set 1, and a CMR set 2 and an IMR set 2. The access node configures, in the first slot unit (slot 0), the CMR set 1 that includes 8 ports and that is used for CSI measurement, configures, in the second slot unit (slot 1), the CMR set 2 including 16 ports, configures, in the sixth slot unit (slot 5), the IMR set 1 that includes 8 ports and that is used for CSI measurement, and configures, in the seventh slot unit (slot 6), the IMR set 2 including 16 ports. It can be learned that the total calculation amount for the CSI measurement 1 to be calculated by the UE is 8 + 16 + 8 + 16 = 48 ports. For the CSI measurement 2, two reporting settings are configured for the UE, respectively corresponding to the CMR set 1 and an IMR set 3, and the CMR set 2 and an IMR set 4. The IMR set 3 including 8 ports is sent in the seventh slot unit (slot 6), and the IMR set 4 including 16 ports is sent in the eighth slot unit (slot 7). It can be learned that the total calculation amount for the CSI measurement 2 to be calculated by the UE is also 8 + 16 + 8 + 16 = 48 ports.

It can be learned that for the CSI measurement 1, to complete calculation for 48 ports, the UE needs at least 48/16 = 3 slots. However, it should be noted that in addition to the total calculation time, the access node, when performing configuration, further needs to consider the relationship between the time of delivering a CMR set and an IMR set and the reporting time of the UE. For example, according to the configuration manner 1, if the access node triggers reporting of the CSI measurement 1 at the end of slot 1, the UE needs to perform reporting at the end of slot 6, after the preconfigured time offset. The IMR set 1 (8 ports) and the IMR set 2 (16 ports) are respectively delivered in slot 5 and slot 6. The UE is not yet idle in slot 6, because it still needs to process the calculation for an interference measurement setting. Therefore, the access node cannot configure that reporting of the CSI measurement 2 is triggered in slot 6, and can trigger the CSI measurement 2 no earlier than the end of slot 6, after the UE completes calculation of the CSI measurement 1.
In addition, it should further be noted that if the IMR set 1 and the IMR set 2 are respectively delivered in slot 5 and slot 6, the UE needs to perform reporting at the end of slot 6. If the sum of the quantities of ports of the IMR set 1 and the IMR set 2 exceeds the quantity of ports supported by the UE in the two units, that is, 32 ports, the UE also cannot complete the calculation amount for the configuration by slot 6, and consequently cannot perform reporting.

As shown in FIG. 8, in the configuration manner 2, other conditions remain unchanged. If the quantity of ports of the IMR set 2 changes to 8 ports, and the CSI measurement 2 is triggered in slot 5, the UE has a remaining calculation capability of 8 ports in each of slot 5 and slot 6. Although the configured IMR set 3 and IMR set 4 are not yet received in slot 5 and slot 6, the access node can trigger new CSI reporting, and the UE may first calculate the CMR set 1 (8 ports) already known in the CSI measurement 2. Therefore, the access node may configure that the CSI measurement 2 is triggered at this moment (in slot 5), and the configuration is available. It may be understood that in this configuration manner, the UE supports parallel processing: when its calculation capability has a surplus, the UE may perform parallel calculation. A UE that supports parallel processing is considered to have a stronger CSI reporting capability. One or more reporting settings may be configured provided that the average quantity of pilot ports in each time-domain unit before the CSI reporting is not greater than the UE capability.

It should be noted that the foregoing descriptions of the configuration manners are specifically for the case in which the CSI reporting capability of the UE is a maximum quantity, supported in each time-domain unit, of ports of pilots used for CSI measurement. The configuration manners are merely examples that describe how configuration is performed based on the CSI reporting capability of the UE, and are not intended to limit this application. For other definition cases of the CSI reporting capability of the UE, the configuration logic in the foregoing examples is followed: the CSI measurement and reporting of the UE that are configured by the access node should not exceed the CSI reporting capability of the UE, so that the UE can complete the CSI measurement and reporting.

The foregoing describes the solutions provided in the embodiments of this application mainly by using a procedure in which various entities in the system interact with each other to perform parallel transmission control. It may be understood that to implement the foregoing functions, the foregoing entities include hardware structures and/or software modules corresponding to the various functions. A person skilled in the art should be easily aware that, in combination with the units and algorithm steps of the examples described in the embodiments disclosed in this specification, this application can be implemented by hardware or a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends on particular applications and design constraint conditions of the technical solutions. A person skilled in the art may use different methods to implement the described functions for each particular application, but it should not be considered that the implementation goes beyond the scope of this application.
In the embodiments of this application, function module division may be performed on the UE and the access node according to the examples of the methods. For example, various function modules may be divided according to the corresponding functions, or two or more functions may be integrated into one processing module. The integrated module may be implemented in a form of hardware, or may be implemented in a form of a software function module. It should be noted that module division in the embodiments of this application is an example and is merely logical function division. During actual implementation, there may be another division manner. The following descriptions are made by using an example in which function modules are divided corresponding to functions.

An embodiment of this application further provides a terminal device. The terminal device may be configured to perform the steps performed by the UE in any one of FIG. 2 to FIG. 5. FIG. 9 is a simplified schematic structural diagram of the terminal device. For ease of understanding and convenience of illustration, an example in which the terminal device is a mobile phone is used in FIG. 9. As shown in FIG. 9, the terminal device 90 includes a processor, a memory, a radio frequency circuit, an antenna, and an input/output apparatus. The processor is mainly configured to: process a communication protocol and communication data, control the terminal device 90, execute a software program, process data of the software program, and the like. The memory is mainly configured to store the software program and data. The radio frequency circuit is mainly configured to: perform conversion between a baseband signal and a radio frequency signal, and process the radio frequency signal. The antenna is mainly configured to receive and send a radio frequency signal in a form of an electromagnetic wave. The input/output apparatus, such as a touchscreen, a display, or a keyboard, is mainly configured to: receive data entered by a user and output data to the user. It should be noted that terminal devices 90 of some types may not have the input/output apparatus.

The memory and the processor may be integrated together or may be disposed independently. In addition, the radio frequency circuit and the processor may be integrated together or may be disposed independently. When the terminal device needs to send data, the processor performs baseband processing on the data to be sent and outputs a baseband signal to the radio frequency circuit. The radio frequency circuit performs radio frequency processing on the baseband signal and sends the resulting radio frequency signal out in a form of an electromagnetic wave by using the antenna. When data is sent to the terminal device 90, the radio frequency circuit receives a radio frequency signal by using the antenna, converts the radio frequency signal into a baseband signal, and outputs the baseband signal to the processor. The processor converts the baseband signal into data, and processes the data. For ease of description, FIG. 9 shows only one memory and one processor. In an actual terminal device product, there may be one or more processors and one or more memories. The memory may also be referred to as a storage medium, a storage device, or the like. The memory may be disposed independent of the processor, or may be integrated with the processor. This is not limited in this embodiment of this application.
In this embodiment of this application, the antenna and the radio frequency circuit that have transmission and receiving functions may be considered as a transceiver unit of the terminal device 90, and the processor that has a processing function may be considered as a processing unit of the terminal device 90. As shown in FIG. 9, the terminal device 90 includes a transceiver unit 901 and a processing unit 902. The transceiver unit may also be referred to as a transceiver (including a transmitter and/or a receiver), a transceiver machine, a transceiver apparatus, a transceiver circuit, or the like. The processing unit may also be referred to as a processor, a processing board, a processing module, a processing apparatus, or the like. Optionally, a component for implementing a receiving function in the transceiver unit 901 may be considered as a receiving unit, and a component for implementing a sending function in the transceiver unit 901 may be considered as a sending unit. That is, the transceiver unit 901 includes a receiving unit and a sending unit. The receiving unit may also be referred to as a receiver machine, a receiver, a receiver circuit, or the like. The sending unit may also be referred to as a transmitter machine, a transmitter, a transmitter circuit, or the like.

In some embodiments, the transceiver unit 901 and the processing unit 902 may be integrated together or may be disposed independently. In addition, all functions of the processing unit 902 may be integrated into one chip for implementation. Alternatively, some functions may be integrated into one chip for implementation and some other functions may be integrated into one or more other chips for implementation. This is not limited in this application. The term “unit” used in this specification may refer to an application-specific integrated circuit (ASIC), an electronic circuit, a processor (shared, dedicated, or group) and memory, or a combinational logic circuit that executes one or more software or firmware programs, and/or other suitable components that provide the function.

For example, in an implementation, the transceiver unit 901 is configured to perform the steps performed by the UE in S102 and/or S104 in FIG. 2, and/or other steps in this application. The processing unit 902 is configured to perform S101 in FIG. 2, and/or other steps in this application. For example, in another implementation, the transceiver unit 901 is configured to perform the steps performed by the UE in S202 and/or S204 in FIG. 3, and/or other steps in this application. The processing unit 902 is configured to perform S201 in FIG. 3, and/or other steps in this application. For example, in another implementation, the transceiver unit 901 is configured to perform the steps performed by the UE in S301, S303, and/or S305 in FIG. 4, and/or other steps in this application. The processing unit 902 is configured to perform S302 in FIG. 4, and/or other steps in this application. For example, in another implementation, the transceiver unit 901 is configured to perform the steps performed by the UE in S402, S403, and/or S405 in FIG. 5, and/or other steps in this application. The processing unit 902 is configured to perform S401 in FIG. 5, and/or other steps in this application.

An embodiment of this application further provides a network device.
The network device may serve as an access node or a transmission and reception point, and is configured to perform the steps performed by the access node in any one of FIG. 2 to FIG. 5. FIG. 10 is a simplified schematic structural diagram of the network device. The network device 10 includes a part 1001 and a part 1002. The part 1001 is mainly configured to receive and send a radio frequency signal and perform conversion between the radio frequency signal and a baseband signal. The part 1002 is mainly configured to perform baseband processing, control the network device 10, and the like. The part 1001 may usually be referred to as a transceiver unit, a transceiver machine, a transceiver circuit, a transceiver, or the like. The part 1002 is usually a control center of the network device 10, and may usually be referred to as a processing unit, a control unit, a processor, a controller, or the like, configured to control the network device 10 to perform the steps performed by a measurement functional entity on an access side, or by the access node/the transmission and reception point used as a measurement functional entity on an access side, in the foregoing related embodiments. For details, refer to the foregoing descriptions of the related part.

The transceiver unit of the part 1001 may also be referred to as a transceiver machine, a transceiver, or the like. The transceiver unit includes an antenna and a radio frequency unit. The radio frequency unit is mainly configured to perform radio frequency processing. Optionally, a component for implementing a receiving function in the part 1001 may be considered as a receiving unit, and a component for implementing a sending function may be considered as a sending unit. That is, the part 1001 includes a receiving unit and a sending unit. The receiving unit may also be referred to as a receiver machine, a receiver, a receiver circuit, or the like. The sending unit may also be referred to as a transmitter machine, a transmitter, a transmitter circuit, or the like.

The part 1002 may include one or more boards. Each board may include one or more processors and one or more memories. The processor is configured to read and execute a program in the memory, to implement a baseband processing function and control the network device 10. If there are a plurality of boards, the boards may be interconnected to enhance the processing capability. In an optional implementation, the plurality of boards may alternatively share one or more processors, share one or more memories, or share one or more processors and one or more memories at the same time. The memory and the processor may be integrated together or may be disposed independently. In some embodiments, the part 1001 and the part 1002 may be integrated together or may be disposed independently. In addition, all functions of the part 1002 may be integrated into one chip for implementation. Alternatively, some functions may be integrated into one chip for implementation and some other functions may be integrated into one or more other chips for implementation. This is not limited in this application.

For example, in an implementation, the transceiver unit may be configured to perform the steps performed by the access node in S102 and/or S104 in FIG. 2, and/or other steps in this application. The processing unit is configured to perform S103 in FIG. 2, and/or other steps in this application.
For example, in another implementation, the transceiver unit is configured to perform the steps performed by the access node in S202 and/or S204 in FIG. 3, and/or other steps in this application. The processing unit is configured to perform S203 in FIG. 3, and/or other steps in this application. For example, in another implementation, the transceiver unit is configured to perform the steps performed by the access node in S301, S303, and/or S305 in FIG. 4, and/or other steps in this application. The processing unit is configured to perform S304 in FIG. 4, and/or other steps in this application. For example, in another implementation, the transceiver unit is configured to perform the steps performed by the access node in S402, S403, and/or S405 in FIG. 5, and/or other steps in this application. The processing unit is configured to perform S404 in FIG. 5, and/or other steps in this application.

The apparatus on the terminal side provided above may be a terminal device, or may be a chip or a function module in a terminal device, and may implement the foregoing methods by software or hardware, or by hardware executing corresponding software. A specific implementation of the apparatus on the network side provided above may be an access node device. For example, the apparatus may be an access node device, or a chip or a function module in an access node device, and may implement the foregoing methods by software or hardware, or by hardware executing corresponding software. For descriptions of related content and beneficial effects of any terminal device, network device, and corresponding apparatus provided above, refer to the corresponding method embodiments provided above. Details are not described herein again.

This application further provides a terminal device capability transmission system. The system includes the UE (which may also be a UE-side apparatus implementing the functions of the foregoing UE) and the access node (which may also be an access-side apparatus or a transmission and reception point implementing the foregoing access node functions) in the foregoing implementations.

This application further provides a computer program product. When the computer program product runs on a computer, the computer performs any method provided above. This application further provides a chip storing an instruction. When the instruction runs on any of the foregoing devices, the device performs the methods provided above. This application further provides a computer storage medium storing a computer program (instruction). When the program (instruction) runs on a computer, the computer performs the methods provided above.

All or some of the foregoing embodiments may be implemented by using software, hardware, firmware, or any combination thereof. When software is used to implement the embodiments, the embodiments may be implemented completely or partially in a form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on the computer, the procedures or functions according to the embodiments of this application are all or partially generated. The computer may be a general-purpose computer, a dedicated computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or may be transmitted from a computer-readable storage medium to another computer-readable storage medium.
For example, the computer instructions may be transmitted from a website, computer, server, or data center to another website, computer, server, or data center in a wired (for example, a coaxial cable, an optical fiber, or a digital subscriber line (DSL)) or wireless (for example, infrared, radio, or microwave) manner. The computer-readable storage medium may be any available medium accessible by a computer, or a data storage device, such as a server or a data center, integrating one or more available media. The available medium may be a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium (for example, a DVD), a semiconductor medium (for example, a solid state disk (SSD)), or the like.

A proposal of the foregoing solutions in New Radio (NR for short) is as follows: Actually, for each reporting setting, the actual UE capability for CSI reporting depends on how much workload the UE can afford for calculating CSI information based on the CSI-RS resources configured in the resource settings linked with the reporting setting. The total number of CSI-RS ports in each unit that the UE can afford can reflect the UE capability for CSI reporting. The UE capability for CSI reporting can be defined as the maximum number of CSI-RS ports in each unit that the UE can afford for updating CSI. A UE that supports processing in parallel would have a larger UE capability for CSI reporting. One or more reporting settings can be configured if the average number of CSI-RS ports in each unit before CSI reporting is no larger than the UE capability. Then we can make the following proposal: Proposal XX: The UE capability for CSI reporting can be defined as the maximum number of CSI-RS ports in each unit that the UE can afford for updating CSI.

It can be learned that in the proposal, the total quantity of CSI-RS ports in each unit that the UE can afford may reflect the UE capability for CSI reporting. The UE capability for CSI reporting may be defined as the maximum quantity of CSI-RS ports in each unit that the UE affords for CSI update. A UE that supports processing in parallel would have a larger UE capability for CSI reporting. One or more reporting settings may be configured if the average quantity of CSI-RS ports in each unit before the CSI reporting is not greater than the UE capability. Finally, a preferred solution in the proposal is that the UE capability for CSI reporting may be defined as the maximum quantity of CSI-RS ports in each unit supported by the UE for CSI update.

Although this application is described with reference to the embodiments, in a process of implementing this application for which protection is claimed, a person skilled in the art may understand and implement other variations of the disclosed embodiments by viewing the accompanying drawings, the disclosed content, and the appended claims. In the claims, “comprising” does not exclude another component or another step, and “a” or “one” does not exclude a plurality. A single processor/controller or another unit may implement several functions enumerated in the claims. Some measures are recorded in dependent claims that are different from each other, but this does not mean that these measures cannot be combined to produce a better effect. Although this application is described with reference to specific features and the embodiments thereof, obviously, various modifications and combinations may be made to them without departing from the essence and scope of this application.
Correspondingly, the specification and the accompanying drawings are merely example descriptions of this application as defined by the appended claims, and are considered to cover any of or all modifications, variations, combinations, or equivalents that fall within the scope of this application. Obviously, a person skilled in the art can make various modifications and variations to this application without departing from the essence and scope of this application. This application is intended to cover these modifications and variations of this application provided that they fall within the scope of protection defined by the following claims and their equivalent technologies.
DETAILED DESCRIPTION

Wireless communication systems typically utilize hybrid automatic-repeat/request acknowledgment (HARQ-ACK) feedback reporting to confirm that a device has successfully received and decoded a transmission. The HARQ-ACK feedback report may include a feedback codebook (or simply codebook) that includes a series of bits that are generated based on the configuration of the transmission. For example, a base station may schedule a user equipment (UE) with a downlink transmission (e.g., a physical downlink shared channel (PDSCH) transmission) that includes a repetition factor (or aggregation factor) and an associated reporting offset value (e.g., K1 value) for the downlink transmission. The UE may be configured with multiple PDSCH aggregation factors, e.g., for a dynamic PDSCH and/or for different semi-persistent scheduling (SPS) configurations. When the repetition or aggregation factor for a downlink transmission is greater than one, the UE may report a negative-acknowledgment (NACK) bit for each repetition of the downlink transmission until the last repetition. For the last repetition, the UE determines whether at least one repetition was successfully received and decoded, and reports that acknowledgment/NACK (ACK/NACK) bit for the downlink transmission. However, some repetitions of the downlink transmission may be conflicted out, and therefore unavailable for transmission to the UE. This may generate confusion and inaccuracies in the codebook that the UE generates and provides to the base station, which may lead to wasted resources for unnecessary retransmissions and/or a loss of communications between the UE and the base station.

Aspects of the disclosure are initially described in the context of wireless communication systems. Generally, the described techniques provide for more efficient and responsive codebook generation for HARQ-ACK reporting. For example, a base station may schedule a UE for downlink transmission(s), with each downlink transmission having an associated repetition factor (e.g., aggregation factor) from a plurality of repetition factors configured at the UE (e.g., aggregation factors of one, two, four, eight, etc.). However, the base station and the UE may identify an applied repetition factor that will be applied to feedback codebook generation for the downlink transmission(s). Broadly, the applied repetition factor may be selected irrespective of the associated repetition factor for the configured downlink transmissions (e.g., it may be the same as or different from the aggregation factor configured for the downlink transmission). The base station may transmit the downlink transmission(s) to the UE, which then generates a feedback codebook to report feedback for the downlink transmission(s) to the base station. The UE may use the applied repetition factor in generating the feedback codebook, in addition to whether or not the UE was able to successfully receive and decode one or more repetitions of the downlink transmission. Accordingly, the UE may transmit or otherwise convey a feedback report to the base station that carries or conveys an indication of the feedback codebook.

Aspects of the disclosure are further illustrated by and described with reference to apparatus diagrams, system diagrams, and flowcharts that relate to type-1 codebook construction with multiple aggregation factors.
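The per-repetition feedback rule described above can be modeled directly. The sketch below is a toy illustration, not the 3GPP type-1 codebook procedure; the function name and the bit convention (0 = NACK, 1 = ACK) are assumptions made for this example.

```python
def repetition_feedback_bits(aggregation_factor, decoded_any):
    """Per-repetition HARQ-ACK bits under the rule described above:
    NACK (0) for every repetition before the last, then a single
    ACK/NACK bit for the transmission as a whole."""
    bits = [0] * (aggregation_factor - 1)     # NACK until the last repetition
    bits.append(1 if decoded_any else 0)      # ACK iff any repetition decoded
    return bits

# Aggregation factor 4; at least one repetition was decoded successfully
print(repetition_feedback_bits(4, decoded_any=True))   # [0, 0, 0, 1]
print(repetition_feedback_bits(4, decoded_any=False))  # [0, 0, 0, 0]
```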
FIG. 1 illustrates an example of a wireless communication system 100 that supports type-1 codebook construction with multiple aggregation factors in accordance with aspects of the present disclosure. The wireless communication system 100 may include one or more base stations 105, one or more UEs 115, and a core network 130. In some examples, the wireless communication system 100 may be a Long Term Evolution (LTE) network, an LTE-Advanced (LTE-A) network, an LTE-A Pro network, or a New Radio (NR) network. In some examples, the wireless communication system 100 may support enhanced broadband communications, ultra-reliable (e.g., mission critical) communications, low latency communications, communications with low-cost and low-complexity devices, or any combination thereof.

The base stations 105 may be dispersed throughout a geographic area to form the wireless communication system 100 and may be devices in different forms or having different capabilities. The base stations 105 and the UEs 115 may wirelessly communicate via one or more communication links 125. Each base station 105 may provide a coverage area 110 over which the UEs 115 and the base station 105 may establish one or more communication links 125. The coverage area 110 may be an example of a geographic area over which a base station 105 and a UE 115 may support the communication of signals according to one or more radio access technologies.

The UEs 115 may be dispersed throughout a coverage area 110 of the wireless communication system 100, and each UE 115 may be stationary, or mobile, or both at different times. The UEs 115 may be devices in different forms or having different capabilities. Some example UEs 115 are illustrated in FIG. 1. The UEs 115 described herein may be able to communicate with various types of devices, such as other UEs 115, the base stations 105, or network equipment (e.g., core network nodes, relay devices, integrated access and backhaul (IAB) nodes, or other network equipment), as shown in FIG. 1.

The base stations 105 may communicate with the core network 130, or with one another, or both. For example, the base stations 105 may interface with the core network 130 through one or more backhaul links 120 (e.g., via an S1, N2, N3, or other interface). The base stations 105 may communicate with one another over the backhaul links 120 (e.g., via an X2, Xn, or other interface) either directly (e.g., directly between base stations 105), or indirectly (e.g., via the core network 130), or both. In some examples, the backhaul links 120 may be or include one or more wireless links.

One or more of the base stations 105 described herein may include or may be referred to by a person having ordinary skill in the art as a base transceiver station, a radio base station, an access point, a radio transceiver, a NodeB, an eNodeB (eNB), a next-generation NodeB or a giga-NodeB (either of which may be referred to as a gNB), a Home NodeB, a Home eNodeB, or other suitable terminology.

A UE 115 may include or may be referred to as a mobile device, a wireless device, a remote device, a handheld device, or a subscriber device, or some other suitable terminology, where the “device” may also be referred to as a unit, a station, a terminal, or a client, among other examples. A UE 115 may also include or may be referred to as a personal electronic device such as a cellular phone, a personal digital assistant (PDA), a tablet computer, a laptop computer, or a personal computer.
In some examples, a UE 115 may include or be referred to as a wireless local loop (WLL) station, an Internet of Things (IoT) device, an Internet of Everything (IoE) device, or a machine type communications (MTC) device, among other examples, which may be implemented in various objects such as appliances, vehicles, or meters, among other examples. The UEs 115 described herein may be able to communicate with various types of devices, such as other UEs 115 that may sometimes act as relays, as well as the base stations 105 and the network equipment, including macro eNBs or gNBs, small cell eNBs or gNBs, or relay base stations, among other examples, as shown in FIG. 1.

The UEs 115 and the base stations 105 may wirelessly communicate with one another via one or more communication links 125 over one or more carriers. The term “carrier” may refer to a set of radio frequency spectrum resources having a defined physical layer structure for supporting the communication links 125. For example, a carrier used for a communication link 125 may include a portion of a radio frequency spectrum band (e.g., a bandwidth part (BWP)) that is operated according to one or more physical layer channels for a given radio access technology (e.g., LTE, LTE-A, LTE-A Pro, NR). Each physical layer channel may carry acquisition signaling (e.g., synchronization signals, system information), control signaling that coordinates operation for the carrier, user data, or other signaling.

The wireless communication system 100 may support communication with a UE 115 using carrier aggregation or multi-carrier operation. A UE 115 may be configured with multiple downlink component carriers and one or more uplink component carriers according to a carrier aggregation configuration. Carrier aggregation may be used with both frequency division duplexing (FDD) and time division duplexing (TDD) component carriers. In some examples (e.g., in a carrier aggregation configuration), a carrier may also have acquisition signaling or control signaling that coordinates operations for other carriers.

A carrier may be associated with a frequency channel (e.g., an evolved universal mobile telecommunication system terrestrial radio access (E-UTRA) absolute radio frequency channel number (EARFCN)) and may be positioned according to a channel raster for discovery by the UEs 115. A carrier may be operated in a standalone mode where initial acquisition and connection may be conducted by the UEs 115 via the carrier, or the carrier may be operated in a non-standalone mode where a connection is anchored using a different carrier (e.g., of the same or a different radio access technology).

The communication links 125 shown in the wireless communication system 100 may include uplink transmissions from a UE 115 to a base station 105, or downlink transmissions from a base station 105 to a UE 115. Carriers may carry downlink or uplink communications (e.g., in an FDD mode) or may be configured to carry downlink and uplink communications (e.g., in a TDD mode). A carrier may be associated with a particular bandwidth of the radio frequency spectrum, and in some examples the carrier bandwidth may be referred to as a “system bandwidth” of the carrier or the wireless communication system 100. For example, the carrier bandwidth may be one of a number of determined bandwidths for carriers of a particular radio access technology (e.g., 1.4, 3, 5, 10, 15, 20, 40, or 80 megahertz (MHz)).
Devices of the wireless communication system 100 (e.g., the base stations 105, the UEs 115, or both) may have hardware configurations that support communications over a particular carrier bandwidth or may be configurable to support communications over one of a set of carrier bandwidths. In some examples, the wireless communication system 100 may include base stations 105 or UEs 115 that support simultaneous communications via carriers associated with multiple carrier bandwidths. In some examples, each served UE 115 may be configured for operating over portions (e.g., a sub-band, a BWP) or all of a carrier bandwidth.

Signal waveforms transmitted over a carrier may be made up of multiple subcarriers (e.g., using multi-carrier modulation (MCM) techniques such as orthogonal frequency division multiplexing (OFDM) or discrete Fourier transform spread OFDM (DFT-S-OFDM)). In a system employing MCM techniques, a resource element may consist of one symbol period (e.g., a duration of one modulation symbol) and one subcarrier, where the symbol period and subcarrier spacing are inversely related. The number of bits carried by each resource element may depend on the modulation scheme (e.g., the order of the modulation scheme, the coding rate of the modulation scheme, or both). Thus, the more resource elements that a UE 115 receives and the higher the order of the modulation scheme, the higher the data rate may be for the UE 115. A wireless communications resource may refer to a combination of a radio frequency spectrum resource, a time resource, and a spatial resource (e.g., spatial layers or beams), and the use of multiple spatial layers may further increase the data rate or data integrity for communications with a UE 115.

One or more numerologies for a carrier may be supported, where a numerology may include a subcarrier spacing (Δf) and a cyclic prefix. A carrier may be divided into one or more BWPs having the same or different numerologies. In some examples, a UE 115 may be configured with multiple BWPs. In some examples, a single BWP for a carrier may be active at a given time and communications for the UE 115 may be restricted to one or more active BWPs.

The time intervals for the base stations 105 or the UEs 115 may be expressed in multiples of a basic time unit which may, for example, refer to a sampling period of Ts = 1/(Δfmax · Nf) seconds, where Δfmax may represent the maximum supported subcarrier spacing, and Nf may represent the maximum supported discrete Fourier transform (DFT) size. Time intervals of a communications resource may be organized according to radio frames each having a specified duration (e.g., 10 milliseconds (ms)). Each radio frame may be identified by a system frame number (SFN) (e.g., ranging from 0 to 1023). Each frame may include multiple consecutively numbered subframes or slots, and each subframe or slot may have the same duration. In some examples, a frame may be divided (e.g., in the time domain) into subframes, and each subframe may be further divided into a number of slots. Alternatively, each frame may include a variable number of slots, and the number of slots may depend on the subcarrier spacing. Each slot may include a number of symbol periods (e.g., depending on the length of the cyclic prefix prepended to each symbol period). In some wireless communication systems 100, a slot may further be divided into multiple mini-slots containing one or more symbols. Excluding the cyclic prefix, each symbol period may contain one or more (e.g., Nf) sampling periods.
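The timing relations above lend themselves to a quick worked example. In the sketch below, 480 kHz and 4096 are assumptions for Δfmax and Nf (commonly cited maxima), not values mandated by this description; substitute the values of the deployed system.

```python
# Illustrative arithmetic for the timing relations above.
DELTA_F_MAX = 480e3          # assumed maximum supported subcarrier spacing, Hz
N_F = 4096                   # assumed maximum supported DFT size

t_s = 1.0 / (DELTA_F_MAX * N_F)   # basic time unit Ts, in seconds
print(f"Ts = {t_s:.3e} s")        # ~5.09e-10 s

# The OFDM symbol period (excluding the cyclic prefix) is the inverse of the
# subcarrier spacing, so it halves each time the spacing doubles.
for scs_hz in (15e3, 30e3, 60e3, 120e3):
    print(f"SCS {scs_hz / 1e3:.0f} kHz -> symbol ~{1e6 / scs_hz:.2f} us")
```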
The duration of a symbol period may depend on the subcarrier spacing or frequency band of operation. A subframe, a slot, a mini-slot, or a symbol may be the smallest scheduling unit (e.g., in the time domain) of the wireless communication system 100 and may be referred to as a transmission time interval (TTI). In some examples, the TTI duration (e.g., the number of symbol periods in a TTI) may be variable. Additionally or alternatively, the smallest scheduling unit of the wireless communication system 100 may be dynamically selected (e.g., in bursts of shortened TTIs (sTTIs)).

Physical channels may be multiplexed on a carrier according to various techniques. A physical control channel and a physical data channel may be multiplexed on a downlink carrier, for example, using one or more of time division multiplexing (TDM) techniques, frequency division multiplexing (FDM) techniques, or hybrid TDM-FDM techniques. A control region (e.g., a control resource set (CORESET)) for a physical control channel may be defined by a number of symbol periods and may extend across the system bandwidth or a subset of the system bandwidth of the carrier. One or more control regions (e.g., CORESETs) may be configured for a set of the UEs 115. For example, one or more of the UEs 115 may monitor or search control regions for control information according to one or more search space sets, and each search space set may include one or multiple control channel candidates in one or more aggregation levels arranged in a cascaded manner. An aggregation level for a control channel candidate may refer to a number of control channel resources (e.g., control channel elements (CCEs)) associated with encoded information for a control information format having a given payload size. Search space sets may include common search space sets configured for sending control information to multiple UEs 115 and UE-specific search space sets for sending control information to a specific UE 115.

Each base station 105 may provide communication coverage via one or more cells, for example a macro cell, a small cell, a hot spot, or other types of cells, or any combination thereof. The term “cell” may refer to a logical communication entity used for communication with a base station 105 (e.g., over a carrier) and may be associated with an identifier for distinguishing neighboring cells (e.g., a physical cell identifier (PCID), a virtual cell identifier (VCID), or others). In some examples, a cell may also refer to a geographic coverage area 110 or a portion of a geographic coverage area 110 (e.g., a sector) over which the logical communication entity operates. Such cells may range from smaller areas (e.g., a structure, a subset of a structure) to larger areas depending on various factors such as the capabilities of the base station 105. For example, a cell may be or include a building, a subset of a building, or exterior spaces between or overlapping with geographic coverage areas 110, among other examples.

A macro cell generally covers a relatively large geographic area (e.g., several kilometers in radius) and may allow unrestricted access by the UEs 115 with service subscriptions with the network provider supporting the macro cell. A small cell may be associated with a lower-powered base station 105, as compared with a macro cell, and a small cell may operate in the same or different (e.g., licensed, unlicensed) frequency bands as macro cells.
Small cells may provide unrestricted access to the UEs 115 with service subscriptions with the network provider or may provide restricted access to the UEs 115 having an association with the small cell (e.g., the UEs 115 in a closed subscriber group (CSG), the UEs 115 associated with users in a home or office). A base station 105 may support one or multiple cells and may also support communications over the one or more cells using one or multiple component carriers.

In some examples, a carrier may support multiple cells, and different cells may be configured according to different protocol types (e.g., MTC, narrowband IoT (NB-IoT), enhanced mobile broadband (eMBB)) that may provide access for different types of devices.

In some examples, a base station 105 may be movable and therefore provide communication coverage for a moving geographic coverage area 110. In some examples, different geographic coverage areas 110 associated with different technologies may overlap, but the different geographic coverage areas 110 may be supported by the same base station 105. In other examples, the overlapping geographic coverage areas 110 associated with different technologies may be supported by different base stations 105. The wireless communication system 100 may include, for example, a heterogeneous network in which different types of the base stations 105 provide coverage for various geographic coverage areas 110 using the same or different radio access technologies.

The wireless communication system 100 may support synchronous or asynchronous operation. For synchronous operation, the base stations 105 may have similar frame timings, and transmissions from different base stations 105 may be approximately aligned in time. For asynchronous operation, the base stations 105 may have different frame timings, and transmissions from different base stations 105 may, in some examples, not be aligned in time. The techniques described herein may be used for either synchronous or asynchronous operations.

Some UEs 115, such as MTC or IoT devices, may be low cost or low complexity devices and may provide for automated communication between machines (e.g., via Machine-to-Machine (M2M) communication). M2M communication or MTC may refer to data communication technologies that allow devices to communicate with one another or a base station 105 without human intervention. In some examples, M2M communication or MTC may include communications from devices that integrate sensors or meters to measure or capture information and relay such information to a central server or application program that makes use of the information or presents the information to humans interacting with the application program. Some UEs 115 may be designed to collect information or enable automated behavior of machines or other devices. Examples of applications for MTC devices include smart metering, inventory monitoring, water level monitoring, equipment monitoring, healthcare monitoring, wildlife monitoring, weather and geological event monitoring, fleet management and tracking, remote security sensing, physical access control, and transaction-based business charging.

Some UEs 115 may be configured to employ operating modes that reduce power consumption, such as half-duplex communications (e.g., a mode that supports one-way communication via transmission or reception, but not transmission and reception simultaneously). In some examples, half-duplex communications may be performed at a reduced peak rate.
Other power conservation techniques for the UEs 115 include entering a power saving deep sleep mode when not engaging in active communications, operating over a limited bandwidth (e.g., according to narrowband communications), or a combination of these techniques. For example, some UEs 115 may be configured for operation using a narrowband protocol type that is associated with a defined portion or range (e.g., set of subcarriers or resource blocks (RBs)) within a carrier, within a guard-band of a carrier, or outside of a carrier.

The wireless communication system 100 may be configured to support ultra-reliable communications or low-latency communications, or various combinations thereof. For example, the wireless communication system 100 may be configured to support ultra-reliable low-latency communications (URLLC) or mission critical communications. The UEs 115 may be designed to support ultra-reliable, low-latency, or critical functions (e.g., mission critical functions). Ultra-reliable communications may include private communication or group communication and may be supported by one or more mission critical services such as mission critical push-to-talk (MCPTT), mission critical video (MCVideo), or mission critical data (MCData). Support for mission critical functions may include prioritization of services, and mission critical services may be used for public safety or general commercial applications. The terms ultra-reliable, low-latency, mission critical, and ultra-reliable low-latency may be used interchangeably herein.

In some examples, a UE 115 may also be able to communicate directly with other UEs 115 over a device-to-device (D2D) communication link 135 (e.g., using a peer-to-peer (P2P) or D2D protocol). One or more UEs 115 utilizing D2D communications may be within the geographic coverage area 110 of a base station 105. Other UEs 115 in such a group may be outside the geographic coverage area 110 of a base station 105 or be otherwise unable to receive transmissions from a base station 105. In some examples, groups of the UEs 115 communicating via D2D communications may utilize a one-to-many (1:M) system in which each UE 115 transmits to every other UE 115 in the group. In some examples, a base station 105 facilitates the scheduling of resources for D2D communications. In other cases, D2D communications are carried out between the UEs 115 without the involvement of a base station 105.

In some systems, the D2D communication link 135 may be an example of a communication channel, such as a sidelink communication channel, between vehicles (e.g., UEs 115). In some examples, vehicles may communicate using vehicle-to-everything (V2X) communications, vehicle-to-vehicle (V2V) communications, or some combination of these. A vehicle may signal information related to traffic conditions, signal scheduling, weather, safety, emergencies, or any other information relevant to a V2X system. In some examples, vehicles in a V2X system may communicate with roadside infrastructure, such as roadside units, or with the network via one or more network nodes (e.g., base stations 105) using vehicle-to-network (V2N) communications, or with both.

The core network 130 may provide user authentication, access authorization, tracking, Internet Protocol (IP) connectivity, and other access, routing, or mobility functions.
The core network 130 may be an evolved packet core (EPC) or 5G core (5GC), which may include at least one control plane entity that manages access and mobility (e.g., a mobility management entity (MME), an access and mobility management function (AMF)) and at least one user plane entity that routes packets or interconnects to external networks (e.g., a serving gateway (S-GW), a Packet Data Network (PDN) gateway (P-GW), or a user plane function (UPF)). The control plane entity may manage non-access stratum (NAS) functions such as mobility, authentication, and bearer management for the UEs 115 served by the base stations 105 associated with the core network 130. User IP packets may be transferred through the user plane entity, which may provide IP address allocation as well as other functions. The user plane entity may be connected to the network operator's IP services 150. The operator's IP services 150 may include access to the Internet, Intranet(s), an IP Multimedia Subsystem (IMS), or a Packet-Switched Streaming Service.

Some of the network devices, such as a base station 105, may include subcomponents such as an access network entity 140, which may be an example of an access node controller (ANC). Each access network entity 140 may communicate with the UEs 115 through one or more other access network transmission entities 145, which may be referred to as radio heads, smart radio heads, or transmission/reception points (TRPs). Each access network transmission entity 145 may include one or more antenna panels. In some configurations, various functions of each access network entity 140 or base station 105 may be distributed across various network devices (e.g., radio heads and ANCs) or consolidated into a single network device (e.g., a base station 105).

The wireless communication system 100 may operate using one or more frequency bands, typically in the range of 300 megahertz (MHz) to 300 gigahertz (GHz). Generally, the region from 300 MHz to 3 GHz is known as the ultra-high frequency (UHF) region or decimeter band because the wavelengths range from approximately one decimeter to one meter in length. The UHF waves may be blocked or redirected by buildings and environmental features, but the waves may penetrate structures sufficiently for a macro cell to provide service to the UEs 115 located indoors. The transmission of UHF waves may be associated with smaller antennas and shorter ranges (e.g., less than 100 kilometers) compared to transmission using the lower frequencies and longer waves of the high frequency (HF) or very high frequency (VHF) portion of the spectrum below 300 MHz.

The wireless communication system 100 may also operate in a super high frequency (SHF) region using frequency bands from 3 GHz to 30 GHz, also known as the centimeter band, or in an extremely high frequency (EHF) region of the spectrum (e.g., from 30 GHz to 300 GHz), also known as the millimeter band. In some examples, the wireless communication system 100 may support millimeter wave (mmW) communications between the UEs 115 and the base stations 105, and EHF antennas of the respective devices may be smaller and more closely spaced than UHF antennas. In some examples, this may facilitate use of antenna arrays within a device. The propagation of EHF transmissions, however, may be subject to even greater atmospheric attenuation and shorter range than SHF or UHF transmissions.
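As a quick check on the band names above, the wavelength at each boundary follows from λ = c / f:

```python
# Wavelengths at the band boundaries named above (lambda = c / f).
C = 299_792_458.0  # speed of light, m/s

for label, f_hz in (("300 MHz", 300e6), ("3 GHz", 3e9),
                    ("30 GHz", 30e9), ("300 GHz", 300e9)):
    print(f"{label}: wavelength ~ {C / f_hz * 100:.1f} cm")
# 300 MHz -> ~100 cm (one meter) and 3 GHz -> ~10 cm (one decimeter),
# matching the UHF "decimeter band" description; 30-300 GHz spans
# roughly 1 cm down to 1 mm, matching the centimeter and millimeter bands.
```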
The techniques disclosed herein may be employed across transmissions that use one or more different frequency regions, and designated use of bands across these frequency regions may differ by country or regulating body.

The wireless communication system 100 may utilize both licensed and unlicensed radio frequency spectrum bands. For example, the wireless communication system 100 may employ License Assisted Access (LAA), LTE-Unlicensed (LTE-U) radio access technology, or NR technology in an unlicensed band such as the 5 GHz industrial, scientific, and medical (ISM) band. When operating in unlicensed radio frequency spectrum bands, devices such as the base stations 105 and the UEs 115 may employ carrier sensing for collision detection and avoidance. In some examples, operations in unlicensed bands may be based on a carrier aggregation configuration in conjunction with component carriers operating in a licensed band (e.g., LAA). Operations in unlicensed spectrum may include downlink transmissions, uplink transmissions, P2P transmissions, or D2D transmissions, among other examples.

A base station 105 or a UE 115 may be equipped with multiple antennas, which may be used to employ techniques such as transmit diversity, receive diversity, multiple-input multiple-output (MIMO) communications, or beamforming. The antennas of a base station 105 or a UE 115 may be located within one or more antenna arrays or antenna panels, which may support MIMO operations or transmit or receive beamforming. For example, one or more base station antennas or antenna arrays may be co-located at an antenna assembly, such as an antenna tower. In some examples, antennas or antenna arrays associated with a base station 105 may be located in diverse geographic locations. A base station 105 may have an antenna array with a number of rows and columns of antenna ports that the base station 105 may use to support beamforming of communications with a UE 115. Likewise, a UE 115 may have one or more antenna arrays that may support various MIMO or beamforming operations. Additionally or alternatively, an antenna panel may support radio frequency beamforming for a signal transmitted via an antenna port.

The base stations 105 or the UEs 115 may use MIMO communications to exploit multipath signal propagation and increase the spectral efficiency by transmitting or receiving multiple signals via different spatial layers. Such techniques may be referred to as spatial multiplexing. The multiple signals may, for example, be transmitted by the transmitting device via different antennas or different combinations of antennas. Likewise, the multiple signals may be received by the receiving device via different antennas or different combinations of antennas. Each of the multiple signals may be referred to as a separate spatial stream and may carry bits associated with the same data stream (e.g., the same codeword) or different data streams (e.g., different codewords). Different spatial layers may be associated with different antenna ports used for channel measurement and reporting. MIMO techniques include single-user MIMO (SU-MIMO), where multiple spatial layers are transmitted to the same receiving device, and multiple-user MIMO (MU-MIMO), where multiple spatial layers are transmitted to multiple devices.
Beamforming, which may also be referred to as spatial filtering, directional transmission, or directional reception, is a signal processing technique that may be used at a transmitting device or a receiving device (e.g., a base station105, a UE115) to shape or steer an antenna beam (e.g., a transmit beam, a receive beam) along a spatial path between the transmitting device and the receiving device. Beamforming may be achieved by combining the signals communicated via antenna elements of an antenna array such that some signals propagating at particular orientations with respect to an antenna array experience constructive interference while others experience destructive interference. The adjustment of signals communicated via the antenna elements may include a transmitting device or a receiving device applying amplitude offsets, phase offsets, or both to signals carried via the antenna elements associated with the device. The adjustments associated with each of the antenna elements may be defined by a beamforming weight set associated with a particular orientation (e.g., with respect to the antenna array of the transmitting device or receiving device, or with respect to some other orientation). A base station105or a UE115may use beam sweeping techniques as part of beamforming operations. For example, a base station105may use multiple antennas or antenna arrays (e.g., antenna panels) to conduct beamforming operations for directional communications with a UE115. Some signals (e.g., synchronization signals, reference signals, beam selection signals, or other control signals) may be transmitted by a base station105multiple times in different directions. For example, the base station105may transmit a signal according to different beamforming weight sets associated with different directions of transmission. Transmissions in different beam directions may be used to identify (e.g., by a transmitting device, such as a base station105, or by a receiving device, such as a UE115) a beam direction for later transmission or reception by the base station105. Some signals, such as data signals associated with a particular receiving device, may be transmitted by a base station105in a single beam direction (e.g., a direction associated with the receiving device, such as a UE115). In some examples, the beam direction associated with transmissions along a single beam direction may be determined based on a signal that was transmitted in one or more beam directions. For example, a UE115may receive one or more of the signals transmitted by the base station105in different directions and may report to the base station105an indication of the signal that the UE115received with a highest signal quality or an otherwise acceptable signal quality. In some examples, transmissions by a device (e.g., by a base station105or a UE115) may be performed using multiple beam directions, and the device may use a combination of digital precoding or radio frequency beamforming to generate a combined beam for transmission (e.g., from a base station105to a UE115). The UE115may report feedback that indicates precoding weights for one or more beam directions, and the feedback may correspond to a configured number of beams across a system bandwidth or one or more sub-bands. The base station105may transmit a reference signal (e.g., a cell-specific reference signal (CRS), a channel state information reference signal (CSI-RS)), which may be precoded or unprecoded. 
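As a rough illustration of the beamforming weight sets described above, the short sketch below steers a uniform linear array by applying per-element phase offsets; the array geometry, element count, and steering formula are illustrative assumptions rather than the disclosed implementation.

import cmath
import math

def steering_weights(num_elements, spacing_wavelengths, angle_deg):
    # Per-element phase offsets that align signals for the chosen direction.
    phase_step = 2 * math.pi * spacing_wavelengths * math.sin(math.radians(angle_deg))
    return [cmath.exp(-1j * n * phase_step) for n in range(num_elements)]

def array_gain(weights, spacing_wavelengths, angle_deg):
    # Magnitude of the combined array response toward angle_deg.
    phase_step = 2 * math.pi * spacing_wavelengths * math.sin(math.radians(angle_deg))
    return abs(sum(w * cmath.exp(1j * n * phase_step) for n, w in enumerate(weights)))

weights = steering_weights(8, 0.5, 30.0)
print(array_gain(weights, 0.5, 30.0))   # ~8.0: constructive interference on-beam
print(array_gain(weights, 0.5, -40.0))  # ~1.0: largely destructive off-beam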
The UE115may provide feedback for beam selection, which may be a precoding matrix indicator (PMI) or codebook-based feedback (e.g., a multi-panel type codebook, a linear combination type codebook, a port selection type codebook). Although these techniques are described with reference to signals transmitted in one or more directions by a base station105, a UE115may employ similar techniques for transmitting signals multiple times in different directions (e.g., for identifying a beam direction for subsequent transmission or reception by the UE115) or for transmitting a signal in a single direction (e.g., for transmitting data to a receiving device). A receiving device (e.g., a UE115) may try multiple receive configurations (e.g., directional listening) when receiving various signals from the base station105, such as synchronization signals, reference signals, beam selection signals, or other control signals. For example, a receiving device may try multiple receive directions by receiving via different antenna subarrays, by processing received signals according to different antenna subarrays, by receiving according to different receive beamforming weight sets (e.g., different directional listening weight sets) applied to signals received at multiple antenna elements of an antenna array, or by processing received signals according to different receive beamforming weight sets applied to signals received at multiple antenna elements of an antenna array, any of which may be referred to as “listening” according to different receive configurations or receive directions. In some examples, a receiving device may use a single receive configuration to receive along a single beam direction (e.g., when receiving a data signal). The single receive configuration may be aligned in a beam direction determined based on listening according to different receive configuration directions (e.g., a beam direction determined to have a highest signal strength, highest signal-to-noise ratio (SNR), or otherwise acceptable signal quality based on listening according to multiple beam directions). The wireless communication system100may be a packet-based network that operates according to a layered protocol stack. In the user plane, communications at the bearer or Packet Data Convergence Protocol (PDCP) layer may be IP-based. A Radio Link Control (RLC) layer may perform packet segmentation and reassembly to communicate over logical channels. A Medium Access Control (MAC) layer may perform priority handling and multiplexing of logical channels into transport channels. The MAC layer may also use error detection techniques, error correction techniques, or both to support retransmissions at the MAC layer to improve link efficiency. In the control plane, the Radio Resource Control (RRC) protocol layer may provide establishment, configuration, and maintenance of an RRC connection between a UE115and a base station105or a core network130supporting radio bearers for user plane data. At the physical layer, transport channels may be mapped to physical channels. The UEs115and the base stations105may support retransmissions of data to increase the likelihood that data is received successfully. Hybrid automatic repeat request (HARQ) feedback is one technique for increasing the likelihood that data is received correctly over a communication link125. HARQ may include a combination of error detection (e.g., using a cyclic redundancy check (CRC)), forward error correction (FEC), and retransmission (e.g., automatic repeat request (ARQ)). 
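As a concrete illustration of the error-detection component of HARQ, the sketch below recomputes a CRC over a received payload and maps the outcome to ACK or NACK; the use of CRC-32 and the function names are illustrative assumptions, not the coding scheme of any particular system.

import zlib

def harq_feedback(payload, received_crc):
    # ACK if the recomputed CRC matches; NACK triggers a retransmission.
    return "ACK" if zlib.crc32(payload) == received_crc else "NACK"

data = b"transport block"
crc = zlib.crc32(data)
print(harq_feedback(data, crc))                # ACK
print(harq_feedback(b"corrupted block", crc))  # NACK, prompting retransmission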
HARQ may improve throughput at the MAC layer in poor radio conditions (e.g., low signal-to-noise conditions). In some examples, a device may support same-slot HARQ feedback, where the device may provide HARQ feedback in a specific slot for data received in a previous symbol in the slot. In other cases, the device may provide HARQ feedback in a subsequent slot, or according to some other time interval. A UE115may determine that a base station105has scheduled the UE115for one or more downlink transmissions, each of the one or more downlink transmissions having an associated repetition factor that corresponds to one of a plurality of configured repetition factors configured at the UE115. The UE115may identify an applied repetition factor to apply to feedback codebook generation for the one or more downlink transmissions. The UE115may generate a feedback codebook for reporting feedback for the one or more downlink transmissions, the feedback codebook populated based at least in part on the applied repetition factor and on whether the one or more downlink transmissions were successfully received and decoded. The UE115may transmit to the base station105a feedback report that includes the feedback codebook. A base station105may schedule a UE115for one or more downlink transmissions, each of the one or more downlink transmissions having an associated repetition factor that corresponds to one of a plurality of configured repetition factors configured at the UE115. The base station105may identify an applied repetition factor for the UE115to apply to feedback codebook generation for the one or more downlink transmissions. The base station105may receive a feedback report from the UE115that includes a feedback codebook, the feedback codebook generated for reporting feedback for the one or more downlink transmissions and populated based at least in part on the applied repetition factor and on whether the one or more downlink transmissions were successfully received and decoded by the UE115. FIG.2illustrates an example of a feedback configuration200that supports type-1 codebook construction with multiple aggregation factors in accordance with aspects of the present disclosure. In some examples, feedback configuration200may implement aspects of wireless communication system100. Aspects of feedback configuration200may be implemented by a base station and/or UE, which may be examples of the corresponding devices described herein. Broadly, feedback configuration200spans a plurality of slots205, with five slots205being shown by way of example only. Slots205may be configured with one or more PDSCH occasions210, a first downlink transmission215(e.g., PDSCH #1), a second downlink transmission220(e.g., PDSCH #2), and a physical uplink control channel (PUCCH)225(e.g., a feedback reporting occasion where the UE transmits a feedback report to a base station for the downlink transmissions). In some wireless communication systems, a UE typically reports HARQ-ACK information (e.g., a feedback report) for a PDSCH reception (e.g., downlink transmission) from slot n−NPDSCHrepeat+1 to slot n in a HARQ-ACK codebook that the UE includes in a PUCCH or physical uplink shared channel (PUSCH) transmission in slot n+k. NPDSCHrepeat may be a value of the PDSCH aggregation or repetition factor (pdsch-AggregationFactor) if the UE is provided or otherwise configured with a PDSCH aggregation factor. Otherwise, NPDSCHrepeat may be assumed to be one. 
k may be a number of slots indicated by the PDSCH-to-HARQ feedback timing indicator (e.g., the offset reporting slot indicated by a K or K1 value) in a corresponding downlink control information (DCI) format, or provided by dl-DatatoUL-ACK if the PDSCH-to-HARQ feedback timing indicator field is not present in the DCI format. If the UE reports HARQ-ACK information for the PDSCH reception in a slot other than slot n, the UE sets a value for each corresponding HARQ-ACK information bit to NACK or N. If the UE is provided with tdd-UL-DL-ConfigurationCommon or tdd-UL-DL-ConfigurationDedicated and none of the repetitions are received due to an uplink/downlink interaction/conflict, then no ACK/NACK bit is generated in the codebook for the associated PDSCH with NPDSCHrepeat repetitions. In some wireless communication systems, the UE may be configured with multiple SPS configurations. For example, these wireless communication systems may utilize a configuration of a PDSCH aggregation factor (pdsch-AggregationFactor) per downlink SPS configuration, with aggregation factor values from {1,2,4,8} (e.g., aggregation or repetition factors of one, two, four, or eight). For PDSCH scheduled without a corresponding PDCCH transmission (e.g., without DCI) using sps-Config and activated by the DCI format 1_1 or 1_2, or a PDSCH scheduled by the DCI format 1_1 or 1_2 in PDCCH with a cyclic redundancy check (CRC) scrambled with a configured scheduling radio network temporary identifier (CS-RNTI) with a new data indicator (NDI) set to zero, the PDSCH aggregation factor signaled in sps-Config is applied, if configured. Otherwise, the PDSCH aggregation factor signaled in pdsch-Config is applied. For PDSCH scheduled by the DCI format 1_1 or 1_2 in PDCCH with the CRC scrambled with CS-RNTI and the NDI set to one, the PDSCH aggregation factor signaled in pdsch-Config is applied. Accordingly, the UE may be configured with multiple downlink transmission aggregation factors (e.g., pdsch-AggregationFactor), e.g., for dynamic PDSCH in pdsch-Config and/or for different SPS configurations in sps-Config, and these configurations may have different values of aggregation or repetition factors (e.g., pdsch-AggregationFactor). When multiple aggregation or repetition factors are configured, the question arises as to what should be set for NPDSCHrepeat for the purpose of determining when to send a type-1 HARQ-ACK codebook (e.g., in which sub-slot or slot the HARQ-ACK codebook, such as a feedback report, is sent over PUCCH or PUSCH) and/or how to construct the type-1 HARQ-ACK codebook (e.g., that includes the codebook size and ACK/NACK bit location within the codebook). In such wireless communication systems, a type-1 codebook construction has a size overhead issue when the PDSCH aggregation factor is greater than one and a PDSCH occasion is dropped (e.g., due to a downlink/uplink interaction or conflict). That is, the UE may be configured with only one PDSCH aggregation factor that is set to two (e.g., two repetitions each for the first downlink transmission215and the second downlink transmission220in the example illustrated in feedback configuration200). The configured set of K1 values (illustrated as K) may be {1,2,4,8}. In slot n (e.g., slot205-d), the second repetition of the second downlink transmission220is dropped due to a conflict with an uplink symbol/signal. 
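Before the example of feedback configuration200continues, the aggregation-factor selection rule described above can be summarized in a short sketch; the boolean flag and parameter names are illustrative assumptions standing in for the DCI format, CS-RNTI scrambling, and NDI conditions.

def select_aggregation_factor(sps_config_factor, pdsch_config_factor,
                              cs_rnti_scrambled, ndi):
    # SPS-associated PDSCH (CS-RNTI with NDI = 0, or DCI-less SPS PDSCH):
    # prefer the factor signaled in sps-Config, if configured.
    if cs_rnti_scrambled and ndi == 0:
        if sps_config_factor is not None:
            return sps_config_factor
        return pdsch_config_factor or 1
    # CS-RNTI with NDI = 1, or an ordinary dynamic grant: use pdsch-Config.
    return pdsch_config_factor or 1

print(select_aggregation_factor(4, 2, cs_rnti_scrambled=True, ndi=0))  # 4
print(select_aggregation_factor(4, 2, cs_rnti_scrambled=True, ndi=1))  # 2

Returning to the example of feedback configuration200: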
Additionally, both PDSCH occasions of slot n−2 (e.g., slot205-b) overlap with uplink symbols or signals, and therefore the first repetition of the first downlink transmission215is dropped. The time domain resource allocation (TDRA) table may have only two non-overlapping start and length indicator value (SLIV) rows for a PDSCH as shown in the "old" codebook (e.g., N,N for slot205-b, A/N,N for slot205-c, and N,A/N for slot205-d). However, the codeword generated using the conventional techniques (e.g., the "old" codebook) includes excessive bits, which increases the size and/or complexity of the codebook. This may also create confusion between the base station and UE with respect to how the codebook is generated and, therefore, how it can be read by the base station. However, aspects of the described techniques enable the dropped PDSCH occasions (e.g., occasions in which a downlink transmission could be or is scheduled for a UE) to be removed from a type-1 codebook. Accordingly, aspects of the described techniques provide for a base station to schedule a UE for one or more downlink transmissions (e.g., the first downlink transmission215and the second downlink transmission220by way of example only). Each downlink transmission may have an associated repetition factor (e.g., an aggregation factor that is two for both downlink transmissions in this example) that corresponds to one of a plurality of repetition factors configured at the UE (e.g., {1,2,4,8}). But the base station and UE may identify an applied repetition factor (aggregation factor) to be applied to feedback codebook generation for the scheduled downlink transmissions instead of the associated repetition or aggregation factor. The base station may transmit the downlink transmissions to the UE according to the configurations. For example, slot n−3 (e.g., slot205-a) is scheduled with two PDSCH occasions210. Broadly, PDSCH occasions210generally refer to occasions (in the form of resources) in which a downlink transmission can be scheduled for the UE. In feedback configuration200, the PDSCH occasions210shown are resources where a downlink transmission can be scheduled, but are not scheduled. However, each of the downlink transmissions occurs during a PDSCH occasion. In feedback configuration200, the actually scheduled transmissions are indicated as a PDSCH transmission, while unused transmission resources or occasions are referred to as PDSCH occasions210. Each of the slots on which PDSCH may be received is illustrated as having the same PDSCH occasions. As discussed, portions of slot n−2 (e.g., slot205-b) have been configured as uplink portions that overlap with both PDSCH occasions, and therefore any PDSCH occurring within that slot (e.g., whether the PDSCH occasion210or a first repetition of the first downlink transmission215) is dropped. During slot n−1 (e.g., slot205-c), the base station may transmit the second repetition of the first downlink transmission215(having a reporting offset of K or K1=2) and the first repetition of the second downlink transmission220(having a reporting offset of K or K1=1). As also discussed, during slot n (e.g., slot205-d), the second repetition of the second downlink transmission220is conflicted out, and is therefore not transmitted (i.e., is dropped). 
Accordingly, the UE may generate a feedback codebook for reporting feedback for the scheduled downlink transmissions that is populated based at least in part on the applied repetition factor and on whether the scheduled downlink transmissions were successfully received and decoded by the UE. That is, the UE may utilize the applied repetition factor rather than the repetition factor associated with the downlink transmission(s) configured by the base station. Various alternatives may be utilized with respect to identifying and applying the applied repetition factor. One alternative may include identifying the applied repetition factor based on a maximum number of configured repetition factors from the configured repetition factors of the UE (e.g., the maximum across all PDSCH aggregation factors, or pdsch-AggregationFactor, configurations). For example, the UE and base station may both identify the maximum number of configured repetition factors without counting or otherwise considering the configured repetition factors corresponding to inactive SPS configuration(s) (e.g., the PDSCH aggregation factors do not include inactive SPS configurations, when configured for the UE). In another example, the base station and UE may identify the maximum number of configured repetition factors by counting the configured repetition factors corresponding to both active and inactive SPS configurations (e.g., the PDSCH aggregation factor, or pdsch-AggregationFactor, configurations that include both active and inactive SPS configurations, when configured for the UE). In a given slot205, the ACK/NACK bit position in a type-1 codebook corresponding to a PDSCH from the TDRA table is dropped only if that PDSCH is dropped in that slot205and in all of the preceding (max_pdsch-AggregationFactor)−1 slots205. The codebook size would have a larger overhead when the PDSCH within a K1 window is dropped (e.g., as illustrated in the "old" codebook). One alternative may include identifying the applied repetition factor as one. In this instance, the UE may generate the feedback codebook based on the last instance of each downlink transmission that was actually received and decoded. For example, NPDSCHrepeat may always be considered as one and the ACK/NACK bit position for each PDSCH with repetitions may be tied to the last actual PDSCH reception. This may result in the UE generating the "new" codebook illustrated in feedback configuration200. As can be seen, this approach reduces the size of the codebook that the UE generates and transmits to the base station by half. That is, this reduces the codebook size when some PDSCH occasions are dropped and the PDSCH aggregation factor is greater than one, with even greater benefits when the PDSCH aggregation factor is large. Accordingly, the UE and base station may determine that instance(s) of the downlink transmissions have been dropped, and the UE may generate the feedback codebook differently for the dropped downlink transmissions than for the non-dropped downlink transmissions. For example, the UE may generate an ACK/NACK indication (e.g., ACK/NACK bit) for each downlink transmission that was actually received and decoded, but not generate an ACK/NACK indication for each instance of a dropped downlink transmission opportunity. In some aspects, the feedback codebook may be generated without respect to a DCI associated with the downlink transmissions (e.g., may be based on SPS configurations). 
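The applied-repetition-factor-of-one alternative can be sketched as follows; the occasion labels, the NACK placeholder for unused occasions (consistent with the behavior described in relation toFIG.3below), and the single-character bit notation are illustrative assumptions rather than the normative codebook procedure.

def build_type1_codebook(occasions):
    # occasions[slot][i] is one of 'ack', 'nack', 'unused', or 'dropped'.
    # With an applied repetition factor of one, each non-dropped occasion
    # contributes one bit; dropped occasions contribute nothing.
    codebook = []
    for slot in occasions:
        for outcome in slot:
            if outcome == "dropped":
                continue                       # conflicted out: no codebook bit
            codebook.append("A" if outcome == "ack" else "N")
    return codebook

# Three slots, two occasions each: both dropped, then decoded + unused,
# then unused + dropped (loosely patterned on feedback configuration200).
print(build_type1_codebook([["dropped", "dropped"],
                            ["ack", "unused"],
                            ["unused", "dropped"]]))  # ['A', 'N', 'N']

Consistent with the DCI-independent generation noted above, the sketch builds the codebook purely from the configured occasions and their outcomes.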
For example, the UE may generate a type-1 feedback codebook based on the SPS configuration provided by the base station. During slot n+1 (e.g., slot205-e), the UE may transmit or otherwise convey an indication of a feedback report (e.g., PUCCH225) to the base station that includes the feedback codebook (e.g., the “new” codebook) generated in accordance with the described techniques. As discussed, these techniques may improve the codebook generation by the UE, reduce the overall size of the codebook to minimize overhead, and more accurately ensure consistency between the codebook the UE generates and the codebook that the base station expects to receive. FIG.3illustrates an example of a feedback configuration300that supports type-1 codebook construction with multiple aggregation factors in accordance with aspects of the present disclosure. In some examples, feedback configuration300may implement aspects of wireless communication system100and/or feedback configuration200. Aspects of feedback configuration300may be implemented by a base station and/or UE, which may be examples of the corresponding devices described herein. Broadly, feedback configuration300spans a plurality of slots305, with five slots305being shown by way of example only. Slots305may be configured with one or more PDSCH occasions310, a first downlink transmission315(e.g., PDSCH #1), a second downlink transmission320(e.g., PDSCH #2), and a PUCCH325(e.g., a feedback reporting occasion where the UE transmits a feedback report to the base station for the downlink transmissions). As discussed above, a base station may schedule the UE for one or more downlink transmissions (e.g., the first downlink transmission315and the second downlink transmission320by way of example only). Each downlink transmission may have an associated repetition factor (e.g., an aggregation factor that is two for the first downlink transmission315and four for the second downlink transmission320in this example) that corresponds to one of a plurality of repetition factors configured at the UE (e.g., {1,2,4,8}). But the base station and UE may identify an applied repetition factor (aggregation factor) to be applied to feedback codebook generation for the scheduled downlink transmissions instead of the associated repetition or aggregation factor. For example, the applied repetition factor may be one, as discussed in relation toFIG.2. The base station may transmit the downlink transmissions315,320to the UE according to the configurations. For example, slot n−3 (e.g., slot305-a) is scheduled for transmission of a first repetition of both the first downlink transmission315and the second downlink transmission320. Slot n−2 (e.g., slot305-b) has been configured to include uplink portions that overlap with the scheduled repetitions of both downlink transmissions315,320, and therefore any PDSCH(s) occurring within that slot (e.g., both of the second repetitions of the first downlink transmission315and second downlink transmission320) are dropped. During slot n−1 (e.g., slot305-c), the base station may transmit the third repetition of the second downlink transmission320, while the PDSCH occasion310is unused. During slot n (e.g., slot305-d), the PDSCH occasion310is again unused and the fourth repetition of the second downlink transmission320is dropped (e.g., conflicted out due to a designated uplink portion of slot n). 
The first downlink transmission315has a corresponding reporting offset of K or K1=3 and the second downlink transmission320has a corresponding reporting offset of K or K1=1. Accordingly, the UE may generate a feedback codebook for reporting feedback for the scheduled downlink transmissions that is populated based at least in part on the applied repetition factor and on whether the scheduled downlink transmissions were successfully received and decoded by the UE. That is, the UE may utilize the applied repetition factor rather than the repetition factor associated with the downlink transmission(s) configured by the base station. In the example ofFIG.3, if the applied repetition factor is one, and if the UE is configured with a K1 window of {1, 2, 3}, the ACK/NACK bit for the downlink transmission315will not be captured in the feedback codebook. If the applied repetition factor is one, the UE will consider slots n, n−1, and n−2 (corresponding to the K1 window) when populating the feedback codebook. Using the process outlined with respect toFIG.2, nothing would be included in the codebook corresponding to slot n−2. In slot n−1, a NACK will be included to correspond to the unused PDSCH occasion310, and an ACK/NACK will be included to correspond to the third repetition of downlink transmission320. In slot n, only a NACK will be included, corresponding to the unused PDSCH occasion310. As such, while the feedback codebook is still reduced, the codebook lacks any reporting for downlink transmission315. A scheduling rule may be used to avoid this scenario. In feedback configuration300, the scheduling rule may include that the UE may not expect to be configured with a set of K1 values such that, for a given PDSCH with a PDSCH aggregation factor greater than one, none of the actual PDSCH receptions lie within the K1 window. That is, the UE may expect to be configured such that at least one reception of a PDSCH lies within the K1 window. For example, the UE may be configured with the plurality of reporting offset values for transmitting the feedback report to the base station, where each of the reporting offset values represents a number of slots after a last nominal downlink transmission. The reporting offset values may span an evaluation window and the feedback codebook may be generated based on the UE evaluating each of the plurality of reporting offset values within the evaluation window. The UE may generate the feedback codebook based on this technique. During slot n+1 (e.g., slot305-e), the UE may transmit or otherwise convey an indication of a feedback report (e.g., PUCCH325) to the base station that includes the feedback codebook generated in accordance with the described techniques. As discussed, these techniques may improve the codebook generation by the UE, reduce the overall size of the codebook to minimize overhead, and more accurately ensure consistency between the codebook the UE generates and the codebook that the base station expects to receive. FIG.4illustrates an example of a process400that supports type-1 codebook construction with multiple aggregation factors in accordance with aspects of the present disclosure. In some examples, process400may implement aspects of wireless communication system100and/or feedback configurations200and/or300. Aspects of process400may be implemented by UE405and/or base station410, which may be examples of the corresponding devices described herein. 
At415, base station410may schedule UE405for one or more downlink transmissions (e.g., PDSCH transmissions), with each downlink transmission having an associated repetition factor (e.g., PDSCH aggregation factor) corresponding to one of the plurality of repetition factors configured at UE405by base station410. At420, UE405may identify an applied repetition factor to apply to feedback codebook generation for the one or more downlink transmissions. Similarly and at425, the base station may also identify an applied repetition factor for UE405to apply to feedback codebook generation for the one or more downlink transmissions. For example, base station410may transmit a configuration signal (e.g., in RRC configuration signaling) to UE405identifying the applied repetition factor to be utilized for feedback codebook generation. In another example, base station410may configure SPS configuration(s) for UE405, which may implicitly indicate that the applied repetition factor is to be used. Accordingly and at430, UE405may generate a feedback codebook for reporting feedback for the downlink transmissions. The feedback codebook may be populated based on the applied repetition factor and on whether one or more of the downlink transmissions were successfully received and decoded by UE405. In some aspects, the feedback codebook may be generated without respect to the DCI associated with the downlink transmissions. For example, UE405may identify the applied repetition factor based on a maximum number of configured repetition factors from the plurality of configured repetition factors. For example, UE405may identify the maximum number of configured repetition factors without counting inactive SPS configurations. In another example, UE405may identify the maximum number of configured repetition factors by counting both active and inactive SPS configurations. In some aspects, this may include UE405identifying the applied repetition factor as one. Accordingly, UE405may generate the feedback codebook based on a last instance of each downlink transmission that was actually successfully received and decoded. In some aspects, this may include UE405determining that one or more instances of the downlink transmissions have been dropped. Accordingly, UE405may generate the feedback codebook differently for the dropped downlink transmissions than for the non-dropped (e.g., actually transmitted) downlink transmissions. For example, UE405may generate an ACK/NACK indication for each downlink transmission that was actually received and decoded, but refrain from generating an ACK/NACK indication for a dropped downlink transmission. In some aspects, this may include UE405being configured with the plurality of reporting offset values (e.g., K or K1 values) for transmitting the feedback report to the base station. Each reporting offset value may represent a number of slots after a last nominal downlink transmission, and the reporting offset values may span an evaluation window (e.g., a K1 window). UE405may generate the feedback codebook based on an evaluation of each reporting offset value within the evaluation window. Accordingly and at435, UE405may transmit (and base station410may receive) a feedback report that includes the feedback codebook generated in accordance with the described techniques. The feedback report may be transmitted by PUCCH and/or PUSCH. 
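The evaluation-window check implied by the scheduling rule ofFIG.3and the reporting offsets described above can be sketched as follows; the slot bookkeeping and function names are illustrative assumptions.

def receptions_in_window(actual_reception_slots, feedback_slot, k1_values):
    # Downlink slots covered by the K1 window for a report in feedback_slot.
    window = {feedback_slot - k1 for k1 in k1_values}
    return sorted(set(actual_reception_slots) & window)

# Repetitions landed in slots 6 and 8 (slots 7 and 9 were dropped);
# the feedback report is carried in slot 10 with K1 values {1, 2, 3}.
print(receptions_in_window([6, 8], feedback_slot=10, k1_values={1, 2, 3}))  # [8]

# No actual reception inside the window: per the scheduling rule, the UE
# does not expect this configuration (no reportable reception remains).
print(receptions_in_window([6], feedback_slot=10, k1_values={1, 2, 3}))     # []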
FIG.5illustrates an example of a feedback configuration500that supports type-1 codebook construction with multiple aggregation factors in accordance with aspects of the present disclosure. In some examples, feedback configuration500may implement aspects of wireless communication system100, feedback configuration200and/or300, and/or process400. Aspects of feedback configuration500may be implemented by a base station and/or UE, which may be examples of the corresponding devices described herein. Broadly, feedback configuration500spans a plurality of slots505, with five slots505being shown by way of example only. Slots505may be configured with a first downlink transmission515(e.g., PDSCH #1), a second downlink transmission520(e.g., PDSCH #2), and a PUCCH525(e.g., a feedback reporting occasion where the UE transmits a feedback report to a base station for the downlink transmissions). As discussed above, a base station may schedule the UE for one or more downlink transmissions (e.g., the first downlink transmission515and the second downlink transmission520by way of example only). Each downlink transmission may have an associated repetition factor (e.g., an aggregation factor that is two for the first downlink transmission515and three for the second downlink transmission520in this example) that corresponds to one of a plurality of repetition factors configured at the UE (e.g., {1,2,4,8}). But the base station and UE may identify an applied repetition factor (aggregation factor) to be applied to feedback codebook generation for the scheduled downlink transmissions instead of the associated repetition or aggregation factor. The base station may transmit the downlink transmissions to the UE according to the configurations. For example, slot n−3 (e.g., slot505-a) is scheduled for transmission of a first repetition of the second downlink transmission520. Slot n−2 (e.g., slot505-b) is scheduled for transmission of a first repetition of the first downlink transmission515and a second repetition of the second downlink transmission520. Slot n−1 (e.g., slot505-c) is scheduled for transmission of a second repetition of the first downlink transmission515and a third repetition of the second downlink transmission520. During slot n (e.g., slot505-d), the second downlink transmission520is dropped (e.g., conflicted out). The first downlink transmission515has a corresponding reporting offset of K or K1=2 and the second downlink transmission520has a corresponding reporting offset of K or K1=1. Accordingly, the UE may generate a feedback codebook for reporting feedback for the scheduled downlink transmissions that is populated based at least in part on the applied repetition factor and on whether the scheduled downlink transmissions were successfully received and decoded by the UE. That is, the UE may utilize the applied repetition factor rather than the repetition factor associated with the downlink transmission(s) configured by the base station. In the example illustrated in feedback configuration500, regardless of how many PDSCH aggregation factors are configured, the UE may report HARQ-ACK information for a PDSCH with NPDSCHrepeat = pdsch-AggregationFactor repetitions, from slot n−NPDSCHrepeat+1 to slot n, in the HARQ-ACK codebook that the UE includes in a PUCCH or PUSCH transmission in slot n+k, where k is indicated in the DCI. NPDSCHrepeat may be defined per PDSCH, e.g., pdsch-AggregationFactor can be two and four for dynamically granted and SPS-configured downlink transmissions, respectively. 
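The per-PDSCH timing just described, in which each PDSCH keeps its own NPDSCHrepeat when determining the repetition span (slot n−NPDSCHrepeat+1 to slot n) and the reporting slot n+k, can be sketched as follows; the dictionary layout and the specific slot numbers are illustrative assumptions that loosely mirror feedback configuration500.

def reporting_window(last_slot_n, aggregation_factor, k):
    # Repetitions span slot n - N + 1 .. n; the report is carried in slot n + k.
    first_slot = last_slot_n - aggregation_factor + 1
    return {"repetition_slots": list(range(first_slot, last_slot_n + 1)),
            "feedback_slot": last_slot_n + k}

# PDSCH #1: factor 2, last nominal slot 9, K1 = 2; PDSCH #2: factor 3,
# last nominal slot 10, K1 = 1. Both reports land in the same slot (11).
print(reporting_window(9, 2, 2))   # {'repetition_slots': [8, 9], 'feedback_slot': 11}
print(reporting_window(10, 3, 1))  # {'repetition_slots': [8, 9, 10], 'feedback_slot': 11}

Because both windows resolve to the same feedback slot, a single PUCCH can carry the HARQ-ACK bits for both transmissions, matching the single reporting occasion in slot n+1 described next.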
Accordingly, during slot n+1 (e.g., slot505-e) the UE may transmit (and the base station may receive) a feedback report (e.g., PUCCH525) that includes the feedback codebook generated in accordance with the described techniques. The feedback report may be transmitted by PUCCH525and/or PUSCH. FIG.6shows a block diagram600of a device605that supports type-1 codebook construction with multiple aggregation factors in accordance with aspects of the present disclosure. The device605may be an example of aspects of a UE115as described herein. The device605may include a receiver610, a communications manager615, and a transmitter620. The device605may also include a processor. Each of these components may be in communication with one another (e.g., via one or more buses). The receiver610may receive information such as packets, user data, or control information associated with various information channels (e.g., control channels, data channels, and information related to type-1 codebook construction with multiple aggregation factors, etc.). Information may be passed on to other components of the device605. The receiver610may be an example of aspects of the transceiver920described with reference toFIG.9. The receiver610may utilize a single antenna or a set of antennas. The communications manager615may determine that a base station has scheduled the UE for one or more downlink transmissions, each of the one or more downlink transmissions having an associated repetition factor that corresponds to one of a set of configured repetition factors configured at the UE, identify an applied repetition factor to apply to feedback codebook generation for the one or more downlink transmissions, generate a feedback codebook for reporting feedback for the one or more downlink transmissions, the feedback codebook populated based on the applied repetition factor and on whether the one or more downlink transmissions were successfully received and decoded, and transmit to the base station a feedback report that includes the feedback codebook. The communications manager615may be an example of aspects of the communications manager910described herein. The communications manager615, or its sub-components, may be implemented in hardware, code (e.g., software or firmware) executed by a processor, or any combination thereof. If implemented in code executed by a processor, the functions of the communications manager615, or its sub-components, may be executed by a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described in the present disclosure. The communications manager615, or its sub-components, may be physically located at various positions, including being distributed such that portions of functions are implemented at different physical locations by one or more physical components. In some examples, the communications manager615, or its sub-components, may be a separate and distinct component in accordance with various aspects of the present disclosure. 
In some examples, the communications manager615, or its sub-components, may be combined with one or more other hardware components, including but not limited to an input/output (I/O) component, a transceiver, a network server, another computing device, one or more other components described in the present disclosure, or a combination thereof in accordance with various aspects of the present disclosure. The transmitter620may transmit signals generated by other components of the device605. In some examples, the transmitter620may be collocated with a receiver610in a transceiver module. For example, the transmitter620may be an example of aspects of the transceiver920described with reference toFIG.9. The transmitter620may utilize a single antenna or a set of antennas. FIG.7shows a block diagram700of a device705that supports type-1 codebook construction with multiple aggregation factors in accordance with aspects of the present disclosure. The device705may be an example of aspects of a device605, or a UE115as described herein. The device705may include a receiver710, a communications manager715, and a transmitter740. The device705may also include a processor. Each of these components may be in communication with one another (e.g., via one or more buses). The receiver710may receive information such as packets, user data, or control information associated with various information channels (e.g., control channels, data channels, and information related to type-1 codebook construction with multiple aggregation factors, etc.). Information may be passed on to other components of the device705. The receiver710may be an example of aspects of the transceiver920described with reference toFIG.9. The receiver710may utilize a single antenna or a set of antennas. The communications manager715may be an example of aspects of the communications manager615as described herein. The communications manager715may include a scheduling manager720, an applied repetition factor manager725, a feedback codebook manager730, and a feedback report manager735. The communications manager715may be an example of aspects of the communications manager910described herein. The scheduling manager720may determine that a base station has scheduled the UE for one or more downlink transmissions, each of the one or more downlink transmissions having an associated repetition factor that corresponds to one of a set of configured repetition factors configured at the UE. The applied repetition factor manager725may identify an applied repetition factor to apply to feedback codebook generation for the one or more downlink transmissions. The feedback codebook manager730may generate a feedback codebook for reporting feedback for the one or more downlink transmissions, the feedback codebook populated based on the applied repetition factor and on whether the one or more downlink transmissions were successfully received and decoded. The feedback report manager735may transmit to the base station a feedback report that includes the feedback codebook. The transmitter740may transmit signals generated by other components of the device705. In some examples, the transmitter740may be collocated with a receiver710in a transceiver module. For example, the transmitter740may be an example of aspects of the transceiver920described with reference toFIG.9. The transmitter740may utilize a single antenna or a set of antennas. 
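Purely as a structural illustration of the manager decomposition described above (the disclosure describes hardware or processor-executed components, not source code), the sub-managers might be modeled as small collaborating classes; every class and method name here is an assumption.

class SchedulingManager:
    def scheduled_transmissions(self, grants):
        # Keep only the occasions actually scheduled for the UE.
        return [g for g in grants if g.get("scheduled")]

class AppliedRepetitionFactorManager:
    def applied_factor(self, configured_factors):
        return 1  # e.g., the applied-factor-of-one alternative

class FeedbackCodebookManager:
    def generate(self, outcomes, applied_factor):
        # One bit per non-dropped occasion when the applied factor is one.
        return ["A" if o == "ack" else "N" for o in outcomes if o != "dropped"]

class FeedbackReportManager:
    def report(self, codebook):
        return {"pucch_payload": "".join(codebook)}

class CommunicationsManager:
    # Orchestrates the four steps: scheduling, factor identification,
    # codebook generation, and feedback reporting.
    def __init__(self):
        self.scheduling = SchedulingManager()
        self.factor = AppliedRepetitionFactorManager()
        self.codebook = FeedbackCodebookManager()
        self.reporter = FeedbackReportManager()

    def run(self, grants, outcomes, configured_factors):
        self.scheduling.scheduled_transmissions(grants)
        factor = self.factor.applied_factor(configured_factors)
        return self.reporter.report(self.codebook.generate(outcomes, factor))

mgr = CommunicationsManager()
print(mgr.run([{"scheduled": True}], ["ack", "dropped", "nack"], [1, 2, 4, 8]))
# {'pucch_payload': 'AN'}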
FIG.8shows a block diagram800of a communications manager805that supports type-1 codebook construction with multiple aggregation factors in accordance with aspects of the present disclosure. The communications manager805may be an example of aspects of a communications manager615, a communications manager715, or a communications manager910described herein. The communications manager805may include a scheduling manager810, an applied repetition factor manager815, a feedback codebook manager820, a feedback report manager825, a configured repetition factor manager830, a set repetition factor manager835, a dropped transmission manager840, and a slot offset manager845. Each of these modules may communicate, directly or indirectly, with one another (e.g., via one or more buses). The scheduling manager810may determine that a base station has scheduled the UE for one or more downlink transmissions, each of the one or more downlink transmissions having an associated repetition factor that corresponds to one of a set of configured repetition factors configured at the UE. The scheduling manager810may receive a configuration of a plurality of SPS configurations, wherein each downlink transmission of the one or more downlink transmissions is associated with a common SPS configuration or with different SPS configurations of the plurality of SPS configurations. The applied repetition factor manager815may identify an applied repetition factor to apply to feedback codebook generation for the one or more downlink transmissions. The feedback codebook manager820may generate a feedback codebook for reporting feedback for the one or more downlink transmissions, the feedback codebook populated based on the applied repetition factor and on whether the one or more downlink transmissions were successfully received and decoded. In some cases, the feedback codebook is generated without respect to a downlink control information associated with the one or more downlink transmissions. The feedback report manager825may transmit to the base station a feedback report that includes the feedback codebook. The configured repetition factor manager830may identify the applied repetition factor based on a maximum number of configured repetition factors from the set of configured repetition factors. In some examples, the configured repetition factor manager830may identify the maximum number of configured repetition factors without counting configured repetition factors corresponding to inactive SPS configurations. In some examples, the configured repetition factor manager830may identify the maximum number of configured repetition factors by counting configured repetition factors corresponding to both active and inactive SPS configurations. The set repetition factor manager835may identify the applied repetition factor as one. In some examples, the set repetition factor manager835may generate the feedback codebook based on a last instance of each downlink transmission that was actually received and decoded. The dropped transmission manager840may determine, for each of the one or more downlink transmissions, that one or more instances of the downlink transmission have been dropped. In some examples, the dropped transmission manager840may generate the feedback codebook differently for the one or more instances of the downlink transmission that have been dropped and for one or more instances of the downlink transmission that are not dropped. 
In some examples, the dropped transmission manager840may generate an ACK/NACK indication for each downlink transmission that was actually received and decoded. In some examples, the dropped transmission manager840may refrain from generating an ACK/NACK indication for each instance of a dropped downlink transmission opportunity. The slot offset manager845may generate the feedback codebook based on evaluating each of the set of reporting offset values within the evaluation window. In some cases, the feedback codebook is a type-1 codebook. FIG.9shows a diagram of a system900including a device905that supports type-1 codebook construction with multiple aggregation factors in accordance with aspects of the present disclosure. The device905may be an example of or include the components of device605, device705, or a UE115as described herein. The device905may include components for bi-directional voice and data communications including components for transmitting and receiving communications, including a communications manager910, an I/O controller915, a transceiver920, an antenna925, memory930, and a processor940. These components may be in electronic communication via one or more buses (e.g., bus945). The communications manager910may determine that a base station has scheduled the UE for one or more downlink transmissions, each of the one or more downlink transmissions having an associated repetition factor that corresponds to one of a set of configured repetition factors configured at the UE, identify an applied repetition factor to apply to feedback codebook generation for the one or more downlink transmissions, generate a feedback codebook for reporting feedback for the one or more downlink transmissions, the feedback codebook populated based on the applied repetition factor and on whether the one or more downlink transmissions were successfully received and decoded, and transmit to the base station a feedback report that includes the feedback codebook. The I/O controller915may manage input and output signals for the device905. The I/O controller915may also manage peripherals not integrated into the device905. In some cases, the I/O controller915may represent a physical connection or port to an external peripheral. In some cases, the I/O controller915may utilize an operating system such as iOS®, ANDROID®, MS-DOS®, MS-WINDOWS®, OS/2®, UNIX®, LINUX®, or another known operating system. In other cases, the I/O controller915may represent or interact with a modem, a keyboard, a mouse, a touchscreen, or a similar device. In some cases, the I/O controller915may be implemented as part of a processor. In some cases, a user may interact with the device905via the I/O controller915or via hardware components controlled by the I/O controller915. The transceiver920may communicate bi-directionally, via one or more antennas, wired, or wireless links as described above. For example, the transceiver920may represent a wireless transceiver and may communicate bi-directionally with another wireless transceiver. The transceiver920may also include a modem to modulate the packets and provide the modulated packets to the antennas for transmission, and to demodulate packets received from the antennas. In some cases, the wireless device may include a single antenna925. However, in some cases the device may have more than one antenna925, which may be capable of concurrently transmitting or receiving multiple wireless transmissions. The memory930may include random access memory (RAM) and read-only memory (ROM). 
The memory930may store computer-readable, computer-executable code935including instructions that, when executed, cause the processor to perform various functions described herein. In some cases, the memory930may contain, among other things, a BIOS which may control basic hardware or software operation such as the interaction with peripheral components or devices. The processor940may include an intelligent hardware device (e.g., a general-purpose processor, a DSP, a CPU, a microcontroller, an ASIC, an FPGA, a programmable logic device, a discrete gate or transistor logic component, a discrete hardware component, or any combination thereof). In some cases, the processor940may be configured to operate a memory array using a memory controller. In other cases, a memory controller may be integrated into the processor940. The processor940may be configured to execute computer-readable instructions stored in a memory (e.g., the memory930) to cause the device905to perform various functions (e.g., functions or tasks supporting type-1 codebook construction with multiple aggregation factors). The code935may include instructions to implement aspects of the present disclosure, including instructions to support wireless communications. The code935may be stored in a non-transitory computer-readable medium such as system memory or other type of memory. In some cases, the code935may not be directly executable by the processor940but may cause a computer (e.g., when compiled and executed) to perform functions described herein. FIG.10shows a block diagram1000of a device1005that supports type-1 codebook construction with multiple aggregation factors in accordance with aspects of the present disclosure. The device1005may be an example of aspects of a base station105as described herein. The device1005may include a receiver1010, a communications manager1015, and a transmitter1020. The device1005may also include a processor. Each of these components may be in communication with one another (e.g., via one or more buses). The receiver1010may receive information such as packets, user data, or control information associated with various information channels (e.g., control channels, data channels, and information related to type-1 codebook construction with multiple aggregation factors, etc.). Information may be passed on to other components of the device1005. The receiver1010may be an example of aspects of the transceiver1320described with reference toFIG.13. The receiver1010may utilize a single antenna or a set of antennas. The communications manager1015may schedule a UE for one or more downlink transmissions, each of the one or more downlink transmissions having an associated repetition factor that corresponds to one of a set of configured repetition factors configured at the UE, identify an applied repetition factor for the UE to apply to feedback codebook generation for the one or more downlink transmissions, and receive a feedback report from the UE that includes a feedback codebook, the feedback codebook generated for reporting feedback for the one or more downlink transmissions and populated based on the applied repetition factor and on whether the one or more downlink transmissions were successfully received and decoded by the UE. The communications manager1015may be an example of aspects of the communications manager1310described herein. The communications manager1015, or its sub-components, may be implemented in hardware, code (e.g., software or firmware) executed by a processor, or any combination thereof. 
If implemented in code executed by a processor, the functions of the communications manager1015, or its sub-components, may be executed by a general-purpose processor, a DSP, an ASIC, an FPGA, or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described in the present disclosure. The communications manager1015, or its sub-components, may be physically located at various positions, including being distributed such that portions of functions are implemented at different physical locations by one or more physical components. In some examples, the communications manager1015, or its sub-components, may be a separate and distinct component in accordance with various aspects of the present disclosure. In some examples, the communications manager1015, or its sub-components, may be combined with one or more other hardware components, including but not limited to an I/O component, a transceiver, a network server, another computing device, one or more other components described in the present disclosure, or a combination thereof in accordance with various aspects of the present disclosure. The transmitter1020may transmit signals generated by other components of the device1005. In some examples, the transmitter1020may be collocated with a receiver1010in a transceiver module. For example, the transmitter1020may be an example of aspects of the transceiver1320described with reference toFIG.13. The transmitter1020may utilize a single antenna or a set of antennas. FIG.11shows a block diagram1100of a device1105that supports type-1 codebook construction with multiple aggregation factors in accordance with aspects of the present disclosure. The device1105may be an example of aspects of a device1005, or a base station105as described herein. The device1105may include a receiver1110, a communications manager1115, and a transmitter1135. The device1105may also include a processor. Each of these components may be in communication with one another (e.g., via one or more buses). The receiver1110may receive information such as packets, user data, or control information associated with various information channels (e.g., control channels, data channels, and information related to type-1 codebook construction with multiple aggregation factors, etc.). Information may be passed on to other components of the device1105. The receiver1110may be an example of aspects of the transceiver1320described with reference toFIG.13. The receiver1110may utilize a single antenna or a set of antennas. The communications manager1115may be an example of aspects of the communications manager1015as described herein. The communications manager1115may include a scheduling manager1120, an applied repetition factor manager1125, and a feedback report manager1130. The communications manager1115may be an example of aspects of the communications manager1310described herein. The scheduling manager1120may schedule a UE for one or more downlink transmissions, each of the one or more downlink transmissions having an associated repetition factor that corresponds to one of a set of configured repetition factors configured at the UE. The applied repetition factor manager1125may identify an applied repetition factor for the UE to apply to feedback codebook generation for the one or more downlink transmissions. 
The feedback report manager1130may receive a feedback report from the UE that includes a feedback codebook, the feedback codebook generated for reporting feedback for the one or more downlink transmissions and populated based on the applied repetition factor and on whether the one or more downlink transmissions were successfully received and decoded by the UE. The transmitter1135may transmit signals generated by other components of the device1105. In some examples, the transmitter1135may be collocated with a receiver1110in a transceiver module. For example, the transmitter1135may be an example of aspects of the transceiver1320described with reference toFIG.13. The transmitter1135may utilize a single antenna or a set of antennas. FIG.12shows a block diagram1200of a communications manager1205that supports type-1 codebook construction with multiple aggregation factors in accordance with aspects of the present disclosure. The communications manager1205may be an example of aspects of a communications manager1015, a communications manager1115, or a communications manager1310described herein. The communications manager1205may include a scheduling manager1210, an applied repetition factor manager1215, a feedback report manager1220, a configured repetition factor manager1225, a set repetition factor manager1230, a dropped transmission manager1235, and a feedback codebook manager1240. Each of these modules may communicate, directly or indirectly, with one another (e.g., via one or more buses). The scheduling manager1210may schedule a UE for one or more downlink transmissions, each of the one or more downlink transmissions having an associated repetition factor that corresponds to one of a set of configured repetition factors configured at the UE. The scheduling manager1210may transmit a configuration of a plurality of SPS configurations, wherein each downlink transmission of the one or more downlink transmissions is associated with a common SPS configuration or with different SPS configurations of the plurality of SPS configurations. The applied repetition factor manager1215may identify an applied repetition factor for the UE to apply to feedback codebook generation for the one or more downlink transmissions. The feedback report manager1220may receive a feedback report from the UE that includes a feedback codebook, the feedback codebook generated for reporting feedback for the one or more downlink transmissions and populated based on the applied repetition factor and on whether the one or more downlink transmissions were successfully received and decoded by the UE. The configured repetition factor manager1225may identify the applied repetition factor based on a maximum number of configured repetition factors from the set of configured repetition factors. In some examples, the configured repetition factor manager1225may identify the maximum number of configured repetition factors without counting configured repetition factors corresponding to inactive SPS configurations of the UE. In some examples, the configured repetition factor manager1225may identify the maximum number of configured repetition factors by counting configured repetition factors corresponding to both active and inactive SPS configurations of the UE. The set repetition factor manager1230may identify the applied repetition factor as one, where the feedback codebook is generated based on a last instance of each downlink transmission that was actually received and decoded. 
In some examples, the set repetition factor manager1230may schedule, based on the set of configured repetition factors, at least one non-conflicted instance of the downlink transmission during an evaluation window that is based on a reporting offset value. The dropped transmission manager1235may determine, for each of the one or more downlink transmissions, that one or more instances of the downlink transmission has been dropped, where the feedback codebook is generated differently for the one or more instances of the downlink transmission that has been dropped and for one or more instances of the downlink transmission that are not dropped. In some cases, the feedback codebook is generated such that an ACK/NACK indication is generated for each downlink transmission that was actually received and decoded by the UE, and an ACK/NACK indication is not generated for each instance of a dropped downlink transmission opportunity. The feedback codebook manager1240may monitor, control, or otherwise manage aspects of the feedback codebook being generated without respect to downlink control information associated with the one or more downlink transmissions. In some cases, the UE is configured with a set of reporting offset values for transmitting the feedback report to the base station, each of the set of reporting offset values representing a number of slots after a last nominal downlink transmission, the set of reporting offset values spanning an evaluation window, and the feedback codebook is generated based on the UE evaluating each of the set of reporting offset values within the evaluation window. In some cases, the feedback codebook is a type-1 codebook. FIG.13shows a diagram of a system1300including a device1305that supports type-1 codebook construction with multiple aggregation factors in accordance with aspects of the present disclosure. The device1305may be an example of or include the components of device1005, device1105, or a base station105as described herein. The device1305may include components for bi-directional voice and data communications including components for transmitting and receiving communications, including a communications manager1310, a network communications manager1315, a transceiver1320, an antenna1325, memory1330, a processor1340, and an inter-station communications manager1345. These components may be in electronic communication via one or more buses (e.g., bus1350). The communications manager1310may schedule a UE for one or more downlink transmissions, each of the one or more downlink transmissions having an associated repetition factor that corresponds to one of a set of configured repetition factors configured at the UE, identify an applied repetition factor for the UE to apply to feedback codebook generation for the one or more downlink transmissions, and receive a feedback report from the UE that includes a feedback codebook, the feedback codebook generated for reporting feedback for the one or more downlink transmissions and populated based on the applied repetition factor and on whether the one or more downlink transmissions were successfully received and decoded by the UE. The network communications manager1315may manage communications with the core network (e.g., via one or more wired backhaul links). For example, the network communications manager1315may manage the transfer of data communications for client devices, such as one or more UEs115.
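To make the interplay of these sub-components concrete, the following is a minimal sketch, in Python, of the codebook logic described above: the applied repetition factor taken as the maximum of the configured factors (optionally ignoring inactive SPS configurations), and ACK/NACK bits populated only for instances that were not dropped. All names, the data layout, and the bit ordering are illustrative assumptions, not the claimed implementation or any 3GPP-specified procedure.

from dataclasses import dataclass
from typing import List


@dataclass
class DownlinkTransmission:
    """Per-instance outcomes for one scheduled downlink transmission; both
    lists hold one entry per scheduled repetition of the transmission."""
    decoded: List[bool]   # True where the instance was received and decoded
    dropped: List[bool]   # True where the instance (opportunity) was dropped


def applied_repetition_factor(configured: List[int], active: List[bool],
                              count_inactive: bool) -> int:
    """Applied factor = maximum of the configured repetition factors,
    optionally ignoring factors of inactive SPS configurations (both
    counting options are described above)."""
    usable = [f for f, a in zip(configured, active) if a or count_inactive]
    return max(usable) if usable else 1


def build_type1_codebook(transmissions: List[DownlinkTransmission],
                         applied_factor: int) -> List[int]:
    """Populate ACK(1)/NACK(0) bits, one candidate position per instance up
    to the applied factor; dropped opportunities contribute no bit at all."""
    bits: List[int] = []
    for tx in transmissions:
        for k in range(applied_factor):
            if k >= len(tx.decoded) or tx.dropped[k]:
                continue  # no ACK/NACK indication for a dropped opportunity
            bits.append(1 if tx.decoded[k] else 0)
    return bits


# Example: two active SPS configurations with repetition factors 2 and 4;
# the third instance of the second transmission was dropped.
factor = applied_repetition_factor([2, 4], [True, True], count_inactive=False)
tx_a = DownlinkTransmission(decoded=[True, True], dropped=[False, False])
tx_b = DownlinkTransmission(decoded=[True, False, False, True],
                            dropped=[False, False, True, False])
print(factor, build_type1_codebook([tx_a, tx_b], factor))  # 4 [1, 1, 1, 0, 1]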
The transceiver1320may communicate bi-directionally, via one or more antennas, wired, or wireless links as described above. For example, the transceiver1320may represent a wireless transceiver and may communicate bi-directionally with another wireless transceiver. The transceiver1320may also include a modem to modulate the packets and provide the modulated packets to the antennas for transmission, and to demodulate packets received from the antennas. In some cases, the wireless device may include a single antenna1325. However, in some cases the device may have more than one antenna1325, which may be capable of concurrently transmitting or receiving multiple wireless transmissions. The memory1330may include RAM, ROM, or a combination thereof. The memory1330may store computer-readable code1335including instructions that, when executed by a processor (e.g., the processor1340), cause the device to perform various functions described herein. In some cases, the memory1330may contain, among other things, a basic input/output system (BIOS) which may control basic hardware or software operation such as the interaction with peripheral components or devices. The processor1340may include an intelligent hardware device (e.g., a general-purpose processor, a DSP, a CPU, a microcontroller, an ASIC, an FPGA, a programmable logic device, a discrete gate or transistor logic component, a discrete hardware component, or any combination thereof). In some cases, the processor1340may be configured to operate a memory array using a memory controller. In some cases, a memory controller may be integrated into processor1340. The processor1340may be configured to execute computer-readable instructions stored in a memory (e.g., the memory1330) to cause the device1305to perform various functions (e.g., functions or tasks supporting type-1 codebook construction with multiple aggregation factors). The inter-station communications manager1345may manage communications with other base stations105, and may include a controller or scheduler for controlling communications with UEs115in cooperation with other base stations105. For example, the inter-station communications manager1345may coordinate scheduling for transmissions to UEs115for various interference mitigation techniques such as beamforming or joint transmission. In some examples, the inter-station communications manager1345may provide an X2 interface within an LTE/LTE-A wireless communication network technology to provide communication between base stations105. The code1335may include instructions to implement aspects of the present disclosure, including instructions to support wireless communications. The code1335may be stored in a non-transitory computer-readable medium such as system memory or other type of memory. In some cases, the code1335may not be directly executable by the processor1340but may cause a computer (e.g., when compiled and executed) to perform functions described herein. FIG.14shows a flowchart illustrating a method1400that supports type-1 codebook construction with multiple aggregation factors in accordance with aspects of the present disclosure. The operations of method1400may be implemented by a UE115or its components as described herein. For example, the operations of method1400may be performed by a communications manager as described with reference toFIGS.6through9. In some examples, a UE may execute a set of instructions to control the functional elements of the UE to perform the functions described below.
Additionally or alternatively, a UE may perform aspects of the functions described below using special-purpose hardware. At1405, the UE may determine that a base station has scheduled the UE for one or more downlink transmissions, each of the one or more downlink transmissions having an associated repetition factor that corresponds to one of a set of configured repetition factors configured at the UE. The operations of1405may be performed according to the methods described herein. In some examples, aspects of the operations of1405may be performed by a scheduling manager as described with reference toFIGS.6through9. At1410, the UE may identify an applied repetition factor to apply to feedback codebook generation for the one or more downlink transmissions. The operations of1410may be performed according to the methods described herein. In some examples, aspects of the operations of1410may be performed by an applied repetition factor manager as described with reference toFIGS.6through9. At1415, the UE may generate a feedback codebook for reporting feedback for the one or more downlink transmissions, the feedback codebook populated based on the applied repetition factor and on whether the one or more downlink transmissions were successfully received and decoded. The operations of1415may be performed according to the methods described herein. In some examples, aspects of the operations of1415may be performed by a feedback codebook manager as described with reference toFIGS.6through9. At1420, the UE may transmit to the base station a feedback report that includes the feedback codebook. The operations of1420may be performed according to the methods described herein. In some examples, aspects of the operations of1420may be performed by a feedback report manager as described with reference toFIGS.6through9. FIG.15shows a flowchart illustrating a method1500that supports type-1 codebook construction with multiple aggregation factors in accordance with aspects of the present disclosure. The operations of method1500may be implemented by a UE115or its components as described herein. For example, the operations of method1500may be performed by a communications manager as described with reference toFIGS.6through9. In some examples, a UE may execute a set of instructions to control the functional elements of the UE to perform the functions described below. Additionally or alternatively, a UE may perform aspects of the functions described below using special-purpose hardware. At1505, the UE may determine that a base station has scheduled the UE for one or more downlink transmissions, each of the one or more downlink transmissions having an associated repetition factor that corresponds to one of a set of configured repetition factors configured at the UE. The operations of1505may be performed according to the methods described herein. In some examples, aspects of the operations of1505may be performed by a scheduling manager as described with reference toFIGS.6through9. At1510, the UE may identify an applied repetition factor to apply to feedback codebook generation for the one or more downlink transmissions. The operations of1510may be performed according to the methods described herein. In some examples, aspects of the operations of1510may be performed by an applied repetition factor manager as described with reference toFIGS.6through9. At1515, the UE may identify the applied repetition factor based on a maximum number of configured repetition factors from the set of configured repetition factors. 
The operations of1515may be performed according to the methods described herein. In some examples, aspects of the operations of1515may be performed by a configured repetition factor manager as described with reference toFIGS.6through9. At1520, the UE may generate a feedback codebook for reporting feedback for the one or more downlink transmissions, the feedback codebook populated based on the applied repetition factor and on whether the one or more downlink transmissions were successfully received and decoded. The operations of1520may be performed according to the methods described herein. In some examples, aspects of the operations of1520may be performed by a feedback codebook manager as described with reference toFIGS.6through9. At1525, the UE may transmit to the base station a feedback report that includes the feedback codebook. The operations of1525may be performed according to the methods described herein. In some examples, aspects of the operations of1525may be performed by a feedback report manager as described with reference toFIGS.6through9. FIG.16shows a flowchart illustrating a method1600that supports type-1 codebook construction with multiple aggregation factors in accordance with aspects of the present disclosure. The operations of method1600may be implemented by a UE115or its components as described herein. For example, the operations of method1600may be performed by a communications manager as described with reference toFIGS.6through9. In some examples, a UE may execute a set of instructions to control the functional elements of the UE to perform the functions described below. Additionally or alternatively, a UE may perform aspects of the functions described below using special-purpose hardware. At1605, the UE may determine that a base station has scheduled the UE for one or more downlink transmissions, each of the one or more downlink transmissions having an associated repetition factor that corresponds to one of a set of configured repetition factors configured at the UE. The operations of1605may be performed according to the methods described herein. In some examples, aspects of the operations of1605may be performed by a scheduling manager as described with reference toFIGS.6through9. At1610, the UE may identify an applied repetition factor to apply to feedback codebook generation for the one or more downlink transmissions. The operations of1610may be performed according to the methods described herein. In some examples, aspects of the operations of1610may be performed by an applied repetition factor manager as described with reference toFIGS.6through9. At1615, the UE may identify the applied repetition factor as one. The operations of1615may be performed according to the methods described herein. In some examples, aspects of the operations of1615may be performed by a set repetition factor manager as described with reference toFIGS.6through9. At1620, the UE may generate a feedback codebook for reporting feedback for the one or more downlink transmissions, the feedback codebook populated based on the applied repetition factor and on whether the one or more downlink transmissions were successfully received and decoded. The operations of1620may be performed according to the methods described herein. In some examples, aspects of the operations of1620may be performed by a feedback codebook manager as described with reference toFIGS.6through9. 
At1625, the UE may generate the feedback codebook based on a last instance of each downlink transmission that was actually received and decoded. The operations of1625may be performed according to the methods described herein. In some examples, aspects of the operations of1625may be performed by a set repetition factor manager as described with reference toFIGS.6through9. At1630, the UE may transmit to the base station a feedback report that includes the feedback codebook. The operations of1630may be performed according to the methods described herein. In some examples, aspects of the operations of1630may be performed by a feedback report manager as described with reference toFIGS.6through9. FIG.17shows a flowchart illustrating a method1700that supports type-1 codebook construction with multiple aggregation factors in accordance with aspects of the present disclosure. The operations of method1700may be implemented by a base station105or its components as described herein. For example, the operations of method1700may be performed by a communications manager as described with reference toFIGS.10through13. In some examples, a base station may execute a set of instructions to control the functional elements of the base station to perform the functions described below. Additionally or alternatively, a base station may perform aspects of the functions described below using special-purpose hardware. At1705, the base station may schedule a UE for one or more downlink transmissions, each of the one or more downlink transmissions having an associated repetition factor that corresponds to one of a set of configured repetition factors configured at the UE. The operations of1705may be performed according to the methods described herein. In some examples, aspects of the operations of1705may be performed by a scheduling manager as described with reference toFIGS.10through13. At1710, the base station may identify an applied repetition factor for the UE to apply to feedback codebook generation for the one or more downlink transmissions. The operations of1710may be performed according to the methods described herein. In some examples, aspects of the operations of1710may be performed by an applied repetition factor manager as described with reference toFIGS.10through13. At1715, the base station may receive a feedback report from the UE that includes a feedback codebook, the feedback codebook generated for reporting feedback for the one or more downlink transmissions and populated based on the applied repetition factor and on whether the one or more downlink transmissions were successfully received and decoded by the UE. The operations of1715may be performed according to the methods described herein. In some examples, aspects of the operations of1715may be performed by a feedback report manager as described with reference toFIGS.10through13. FIG.18shows a flowchart illustrating a method1800that supports type-1 codebook construction with multiple aggregation factors in accordance with aspects of the present disclosure. The operations of method1800may be implemented by a base station105or its components as described herein. For example, the operations of method1800may be performed by a communications manager as described with reference toFIGS.10through13. In some examples, a base station may execute a set of instructions to control the functional elements of the base station to perform the functions described below. 
Additionally or alternatively, a base station may perform aspects of the functions described below using special-purpose hardware. At1805, the base station may schedule a UE for one or more downlink transmissions, each of the one or more downlink transmissions having an associated repetition factor that corresponds to one of a set of configured repetition factors configured at the UE. The operations of1805may be performed according to the methods described herein. In some examples, aspects of the operations of1805may be performed by a scheduling manager as described with reference toFIGS.10through13. At1810, the base station may identify an applied repetition factor for the UE to apply to feedback codebook generation for the one or more downlink transmissions. The operations of1810may be performed according to the methods described herein. In some examples, aspects of the operations of1810may be performed by an applied repetition factor manager as described with reference toFIGS.10through13. At1815, the base station may receive a feedback report from the UE that includes a feedback codebook, the feedback codebook generated for reporting feedback for the one or more downlink transmissions and populated based on the applied repetition factor and on whether the one or more downlink transmissions were successfully received and decoded by the UE. The operations of1815may be performed according to the methods described herein. In some examples, aspects of the operations of1815may be performed by a feedback report manager as described with reference toFIGS.10through13. At1820, the base station may determine, for each of the one or more downlink transmissions, that one or more instances of the downlink transmission has been dropped, where the feedback codebook is generated differently for the one or more instances of the downlink transmission that has been dropped and for one or more instances of the downlink transmission that are not dropped. The operations of1820may be performed according to the methods described herein. In some examples, aspects of the operations of1820may be performed by a dropped transmission manager as described with reference toFIGS.10through13. It should be noted that the methods described herein describe possible implementations, and that the operations and the steps may be rearranged or otherwise modified and that other implementations are possible. Further, aspects from two or more of the methods may be combined. The following provides an overview of aspects of the present disclosure: Aspect 1: A method for wireless communication at a UE, comprising: determining that a base station has scheduled the UE for one or more downlink transmissions, each of the one or more downlink transmissions having an associated repetition factor that corresponds to one of a plurality of configured repetition factors configured at the UE; identifying an applied repetition factor to apply to feedback codebook generation for the one or more downlink transmissions; generating a feedback codebook for reporting feedback for the one or more downlink transmissions, the feedback codebook populated based at least in part on the applied repetition factor and on whether the one or more downlink transmissions were successfully received and decoded; and transmitting to the base station a feedback report that includes the feedback codebook. 
Aspect 2: The method of aspect 1, further comprising: identifying the applied repetition factor based at least in part on a maximum number of configured repetition factors from the plurality of configured repetition factors. Aspect 3: The method of aspect 2, further comprising: identifying the maximum number of configured repetition factors without counting configured repetition factors corresponding to inactive SPS configurations. Aspect 4: The method of any of aspects 2 through 3, further comprising: identifying the maximum number of configured repetition factors by counting configured repetition factors corresponding to both active and inactive SPS configurations. Aspect 5: The method of any of aspects 1 through 4, further comprising: identifying the applied repetition factor as one; and generating the feedback codebook based at least in part on a last instance of each downlink transmission that was actually received and decoded. Aspect 6: The method of any of aspects 1 through 5, further comprising: determining, for each of the one or more downlink transmissions, that one or more instances of the downlink transmission has been dropped; and generating the feedback codebook differently for the one or more instances of the downlink transmission that has been dropped and for one or more instances of the downlink transmission that are not dropped. Aspect 7: The method of aspect 6, wherein generating the feedback codebook further comprises: generating an ACK/NACK indication for each downlink transmission that was actually received and decoded; and refraining from generating an ACK/NACK indication for each instance of a dropped downlink transmission opportunity. Aspect 8: The method of any of aspects 1 through 7, further comprising: receiving a configuration of a plurality of SPS configurations, wherein each downlink transmission of the one or more downlink transmissions is associated with a common SPS configuration or with different SPS configurations of the plurality of SPS configurations. Aspect 9: The method of any of aspects 1 through 8, wherein the feedback codebook is generated without respect to a DCI associated with the one or more downlink transmissions. Aspect 10: The method of any of aspects 1 through 9, wherein the UE is configured with a plurality of reporting offset values for transmitting the feedback report to the base station, each of the plurality of reporting offset values representing a number of slots after a last nominal downlink transmission, the plurality of reporting offset values spanning an evaluation window, the method further comprising: generating the feedback codebook based at least in part on evaluating each of the plurality of reporting offset values within the evaluation window. Aspect 11: The method of any of aspects 1 through 10, wherein the feedback codebook comprises a type-1 codebook. 
Aspect 12: A method for wireless communication at a base station, comprising: scheduling a UE for one or more downlink transmissions, each of the one or more downlink transmissions having an associated repetition factor that corresponds to one of a plurality of configured repetition factors configured at the UE; identifying an applied repetition factor for the UE to apply to feedback codebook generation for the one or more downlink transmissions; and receiving a feedback report from the UE that includes a feedback codebook, the feedback codebook generated for reporting feedback for the one or more downlink transmissions and populated based at least in part on the applied repetition factor and on whether the one or more downlink transmissions were successfully received and decoded by the UE. Aspect 13: The method of aspect 12, further comprising: identifying the applied repetition factor based at least in part on a maximum number of configured repetition factors from the plurality of configured repetition factors. Aspect 14: The method of aspect 13, further comprising: identifying the maximum number of configured repetition factors without counting configured repetition factors corresponding to inactive SPS configurations of the UE. Aspect 15: The method of any of aspects 13 through 14, further comprising: identifying the maximum number of configured repetition factors by counting configured repetition factors corresponding to both active and inactive SPS configurations of the UE. Aspect 16: The method of any of aspects 12 through 15, further comprising: identifying the applied repetition factor as one, wherein the feedback codebook is generated based at least in part on a last instance of each downlink transmission that was actually received and decoded. Aspect 17: The method of aspect 16, further comprising: scheduling, based at least in part on the plurality of configured repetition factors, at least one non-conflicted instance of the downlink transmission during an evaluation window that is based at least in part on a reporting offset value. Aspect 18: The method of any of aspects 12 through 17, further comprising: determining, for each of the one or more downlink transmissions, that one or more instances of the downlink transmission has been dropped, wherein the feedback codebook is generated differently for the one or more instances of the downlink transmission that has been dropped and for one or more instances of the downlink transmission that are not dropped. Aspect 19: The method of aspect 18, wherein the feedback codebook is generated such that an ACK/NACK indication is generated for each downlink transmission that was actually received and decoded by the UE, and an ACK/NACK indication is not generated for each instance of a dropped downlink transmission opportunity. Aspect 20: The method of any of aspects 12 through 19, further comprising: transmitting a configuration of a plurality of SPS configurations, wherein each downlink transmission of the one or more downlink transmissions is associated with a common SPS configuration or with different SPS configurations of the plurality of SPS configurations. Aspect 21: The method of any of aspects 12 through 20, wherein the feedback codebook is generated without respect to a DCI associated with the one or more downlink transmissions.
Aspect 22: The method of any of aspects 12 through 21, wherein the UE is configured with a plurality of reporting offset values for transmitting the feedback report to the base station, each of the plurality of reporting offset values representing a number of slots after a last nominal downlink transmission, the plurality of reporting offset values spanning an evaluation window, and the feedback codebook is generated based at least in part on the UE evaluating each of the plurality of reporting offset values within the evaluation window. Aspect 23: The method of any of aspects 12 through 22, wherein the feedback codebook comprises a type-1 codebook. Aspect 24: An apparatus for wireless communication at a UE, comprising a processor; memory coupled with the processor; and instructions stored in the memory and executable by the processor to cause the apparatus to perform a method of any of aspects 1 through 11. Aspect 25: An apparatus for wireless communication at a UE, comprising at least one means for performing a method of any of aspects 1 through 11. Aspect 26: A non-transitory computer-readable medium storing code for wireless communication at a UE, the code comprising instructions executable by a processor to perform a method of any of aspects 1 through 11. Aspect 27: An apparatus for wireless communication at a base station, comprising a processor; memory coupled with the processor; and instructions stored in the memory and executable by the processor to cause the apparatus to perform a method of any of aspects 12 through 23. Aspect 28: An apparatus for wireless communication at a base station, comprising at least one means for performing a method of any of aspects 12 through 23. Aspect 29: A non-transitory computer-readable medium storing code for wireless communication at a base station, the code comprising instructions executable by a processor to perform a method of any of aspects 12 through 23. Although aspects of an LTE, LTE-A, LTE-A Pro, or NR system may be described for purposes of example, and LTE, LTE-A, LTE-A Pro, or NR terminology may be used in much of the description, the techniques described herein are applicable beyond LTE, LTE-A, LTE-A Pro, or NR networks. For example, the described techniques may be applicable to various other wireless communication systems such as Ultra Mobile Broadband (UMB), Institute of Electrical and Electronics Engineers (IEEE) 802.11 (Wi-Fi), IEEE 802.16 (WiMAX), IEEE 802.20, Flash-OFDM, as well as other systems and radio technologies not explicitly mentioned herein. Information and signals described herein may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof. The various illustrative blocks and components described in connection with the disclosure herein may be implemented or performed with a general-purpose processor, a DSP, an ASIC, a CPU, an FPGA or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but in the alternative, the processor may be any processor, controller, microcontroller, or state machine. 
A processor may also be implemented as a combination of computing devices (e.g., a combination of a DSP and a microprocessor, multiple microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration). The functions described herein may be implemented in hardware, software executed by a processor, firmware, or any combination thereof. If implemented in software executed by a processor, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Other examples and implementations are within the scope of the disclosure and appended claims. For example, due to the nature of software, functions described herein may be implemented using software executed by a processor, hardware, firmware, hardwiring, or combinations of any of these. Features implementing functions may also be physically located at various positions, including being distributed such that portions of functions are implemented at different physical locations. Computer-readable media includes both non-transitory computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A non-transitory storage medium may be any available medium that may be accessed by a general-purpose or special purpose computer. By way of example, and not limitation, non-transitory computer-readable media may include random-access memory (RAM), read-only memory (ROM), electrically erasable programmable ROM (EEPROM), flash memory, compact disk (CD) ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other non-transitory medium that may be used to carry or store desired program code means in the form of instructions or data structures and that may be accessed by a general-purpose or special-purpose computer, or a general-purpose or special-purpose processor. Also, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of computer-readable medium. Disk and disc, as used herein, include CD, laser disc, optical disc, digital versatile disc (DVD), floppy disk and Blu-ray disc where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above are also included within the scope of computer-readable media. As used herein, including in the claims, “or” as used in a list of items (e.g., a list of items prefaced by a phrase such as “at least one of” or “one or more of”) indicates an inclusive list such that, for example, a list of at least one of A, B, or C means A or B or C or AB or AC or BC or ABC (i.e., A and B and C). Also, as used herein, the phrase “based on” shall not be construed as a reference to a closed set of conditions. For example, an example step that is described as “based on condition A” may be based on both a condition A and a condition B without departing from the scope of the present disclosure. 
In other words, as used herein, the phrase “based on” shall be construed in the same manner as the phrase “based at least in part on.” In the appended figures, similar components or features may have the same reference label. Further, various components of the same type may be distinguished by following the reference label by a dash and a second label that distinguishes among the similar components. If just the first reference label is used in the specification, the description is applicable to any one of the similar components having the same first reference label irrespective of the second reference label, or other subsequent reference label. The description set forth herein, in connection with the appended drawings, describes example configurations and does not represent all the examples that may be implemented or that are within the scope of the claims. The term “example” used herein means “serving as an example, instance, or illustration,” and not “preferred” or “advantageous over other examples.” The detailed description includes specific details for the purpose of providing an understanding of the described techniques. These techniques, however, may be practiced without these specific details. In some instances, known structures and devices are shown in block diagram form in order to avoid obscuring the concepts of the described examples. The description herein is provided to enable a person having ordinary skill in the art to make or use the disclosure. Various modifications to the disclosure will be apparent to a person having ordinary skill in the art, and the generic principles defined herein may be applied to other variations without departing from the scope of the disclosure. Thus, the disclosure is not limited to the examples and designs described herein, but is to be accorded the broadest scope consistent with the principles and novel features disclosed herein.
DETAILED DESCRIPTION
The subject matter of embodiments of the invention is described with specificity herein to meet statutory requirements. However, the description itself is not intended to limit the scope of this patent. Rather, it is contemplated that the claimed subject matter might be embodied in other ways, to include different steps or combinations of steps similar to the ones described in this document, in conjunction with other present or future technologies. Moreover, although the terms “step” and/or “block” may be used herein to connote different elements of methods employed, the terms should not be interpreted as implying any particular order among or between various steps herein disclosed unless and except when the order of individual steps is explicitly described. Throughout this disclosure, several acronyms and shorthand notations are employed to aid the understanding of certain concepts pertaining to the associated system and services. These acronyms and shorthand notations are intended to help provide an easy methodology of communicating the ideas expressed herein and are not meant to limit the scope of embodiments described in the present disclosure. The following is a list of these acronyms:
3G Third-Generation Wireless Technology
4G Fourth-Generation Cellular Communication System
5G Fifth-Generation Cellular Communication System
CD-ROM Compact Disk Read Only Memory
CDMA Code Division Multiple Access
eNodeB Evolved Node B
GIS Geographic/Geographical/Geospatial Information System
gNodeB Next Generation Node B
GPRS General Packet Radio Service
GSM Global System for Mobile communications
iDEN Integrated Digital Enhanced Network
DVD Digital Versatile Discs
EEPROM Electrically Erasable Programmable Read Only Memory
LED Light Emitting Diode
LTE Long Term Evolution
MIMO Multiple Input Multiple Output
MD Mobile Device
PC Personal Computer
PCS Personal Communications Service
PDA Personal Digital Assistant
RAM Random Access Memory
RET Remote Electrical Tilt
RF Radio-Frequency
RFI Radio-Frequency Interference
R/N Relay Node
RNR Reverse Noise Rise
ROM Read Only Memory
RSRP Reference Transmission Receive Power
RSRQ Reference Transmission Receive Quality
RSSI Received Transmission Strength Indicator
SINR Transmission-to-Interference-Plus-Noise Ratio
SNR Transmission-to-noise ratio
SON Self-Organizing Networks
TDMA Time Division Multiple Access
TXRU Transceiver (or Transceiver Unit)
UE User Equipment
UMTS Universal Mobile Telecommunications Systems
WCD Wireless Communication Device (interchangeable with UE)
Further, various technical terms are used throughout this description. An illustrative resource that fleshes out various aspects of these terms can be found in Newton's Telecom Dictionary, 31st Edition (2018). Embodiments of our technology may be embodied as, among other things, a method, system, or computer-program product. Accordingly, the embodiments may take the form of a hardware embodiment, or an embodiment combining software and hardware. An embodiment takes the form of a computer-program product that includes computer-useable instructions embodied on one or more computer-readable media. Computer-readable media include both volatile and nonvolatile media, removable and nonremovable media, and contemplate media readable by a database, a switch, and various other network devices. Network switches, routers, and related components are conventional in nature, as are means of communicating with the same.
By way of example, and not limitation, computer-readable media comprise computer-storage media and communications media. Computer-storage media, or machine-readable media, include media implemented in any method or technology for storing information. Examples of stored information include computer-useable instructions, data structures, program modules, and other data representations. Computer-storage media include, but are not limited to RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile discs (DVD), holographic media or other optical disc storage, magnetic cassettes, magnetic tape, magnetic disk storage, and other magnetic storage devices and may be considered transitory, non-transitory, or a combination of both. These memory components can store data momentarily, temporarily, or permanently. Communications media typically store computer-useable instructions—including data structures and program modules—in a modulated data signal. The term “modulated data signal” refers to a propagated signal that has one or more of its characteristics set or changed to encode information in the signal. Communications media include any information-delivery media. By way of example but not limitation, communications media include wired media, such as a wired network or direct-wired connection, and wireless media such as acoustic, infrared, radio, microwave, spread-spectrum, and other wireless media technologies. Combinations of the above are included within the scope of computer-readable media. By way of background, a traditional wireless communication network employs one or more wireless access points to provide wireless access to mobile stations, in order that they may access a telecommunication network. For example, in a wireless telecommunication network, a plurality of access points, each providing service for a particular geographic area, are used to transmit and receive wireless signals to/from one or more UEs. For the purposes of this specification, an access point may be considered to be one or more otherwise-discrete components comprising an antenna, a radio, and/or a controller, and may be alternatively referred to as a “node,” in that it is the bridge between the wired telecommunication network and the wirelessly connected UE. In aspects, a unique node may be identified by its location (i.e., at cell site A), its orientation/served geographic area (i.e., configured to serve a sector between X° and Y°), and/or its ability to communicate with a UE according to a particular protocol (e.g., 3G, 4G, LTE, 5G, and the like). Modern networks may be able to determine when an abnormal condition exists within the wireless network. Whether based on an indication from an access point, base station, or antenna array, that one or more components have failed or experienced a fault, based on an indication from one or more UEs that wireless service is in a degraded condition or is absent, or based on an indication that a traffic load is higher than a threshold (e.g., within a range of an average traffic condition, exceeding a maximum load capability, etc.), the network operator may receive an indication that wireless service is degraded in a particular geographic area. Frustrating the problem of an abnormal network condition, it may take days or weeks for a technician to climb a tower to replace or repair a damaged component. 
Further, installing redundant access points to act as backups in the instance that a serving node fails or is overloaded is limited by legal agreements with tower companies, space on towers, wind load limitations on towers, and the significant financial burden of leasing additional space on towers. As such, the present disclosure is directed to methods, systems, and computer readable media that manage the power of a redundant antenna array that is supplied with power from the same power supply as a primary antenna array, wherein the primary and redundant antenna arrays comprise one or more directional antennas used to communicate in different directions. By utilizing a redundant array, a single antenna system may be used to transmit signals in two different directions, which may allow the antenna system to provide service to user devices that have poor or no service due to congestion or a failure of a neighboring cell. Utilization of the redundant antenna array may provide for contingency service until a failed array can be repaired, network modifications can be made to serve a congested area, or until traffic conditions improve. As used herein, the terms node, access point, or base station may be used interchangeably or without limitation to describe a link between a fixed network and a mobile station (i.e., a UE). The terms “user device,” “user equipment,” “UE,” “mobile device,” “mobile handset,” and “mobile transmitting element” all describe a mobile station and may be used interchangeably in this description. Certain terminology may be used to differentiate access points and/or antenna arrays from one another; for example, a combination access point may be used to describe an access point having a primary antenna array and a redundant antenna array that have different orientations (i.e., configured to serve different geographic areas), distinguished from a traditional access point which may be used to describe an access point comprising a single antenna array used to communicate to a single geographic area. Accordingly, a first aspect of the present disclosure is directed to a system for providing redundant coverage in a wireless network, the system comprising a first antenna array, the first antenna array comprising a first set of antenna elements, each of the first set of antenna elements coupled to a power supply, wherein the first antenna array is configured to transmit in a first direction. The system also comprises a second antenna array, the second antenna array comprising a second set of antenna elements, each of the second set of antenna elements coupled to the power supply, wherein the second antenna array is configured to transmit in a second direction, the second direction different than the first direction. The system further comprises a control element configured to selectively supply power from the power supply to each of the first and second set of antenna elements. A second aspect of the present disclosure is directed to a method for providing redundant coverage in a wireless network, the method comprising receiving an indication of a wireless service degradation in a second geographic area.
The method further comprises determining that a second antenna array comprising a second set of antennas is configured to transmit signals into at least a portion of the second geographic area, each of the second set of antennas coupled to a power supply, wherein the power supply is additionally coupled to a first set of antennas, the first set of antennas comprising at least a portion of a first antenna array configured to transmit signals to a first geographic area, the first geographic area being different than the second geographic area. The method further comprises supplying a first amount of power from the power supply to the first set of antennas and a second amount of power from the power supply to the second set of antennas. According to another aspect of the technology described herein, one or more computer-readable media is provided having computer-executable instructions embodied thereon that, when executed, cause one or more processors to receive an indication of a wireless service degradation in a second geographic area. The one or more processors are further configured to determine that a second antenna array comprising a second set of antennas is configured to transmit signals into at least a portion of the second geographic area, each of the second set of antennas coupled to a power supply, wherein the power supply is additionally coupled to a first set of antennas, the first set of antennas comprising at least a portion of a first antenna array configured to transmit signals to a first geographic area, the first geographic area being different than the second geographic area. The one or more computer processors are further caused to instruct a control component to supply a first amount of power from the power supply to the first set of antennas and a second amount of power from the power supply to the second set of antennas. Referring toFIG.1, a diagram is depicted of an exemplary computing environment suitable for use with implementations of the present disclosure. In particular, the exemplary computer environment is shown and designated generally as computing device100. Computing device100is but one example of a suitable computing environment and is not intended to suggest any limitation as to the scope of use or functionality of the invention. Neither should computing device100be interpreted as having any dependency or requirement relating to any one or combination of components illustrated. In aspects, the computing device100may be a UE, WCD, or other user device, capable of two-way wireless communications with an access point. Some non-limiting examples of the computing device100include a cell phone, tablet, pager, personal electronic device, wearable electronic device, activity tracker, desktop computer, laptop, PC, and the like. The implementations of the present disclosure may be described in the general context of computer code or machine-useable instructions, including computer-executable instructions such as program components, being executed by a computer or other machine, such as a personal data assistant or other handheld device. Generally, program components, including routines, programs, objects, components, data structures, and the like, refer to code that performs particular tasks or implements particular abstract data types. Implementations of the present disclosure may be practiced in a variety of system configurations, including handheld devices, consumer electronics, general-purpose computers, specialty computing devices, etc.
Implementations of the present disclosure may also be practiced in distributed computing environments where tasks are performed by remote-processing devices that are linked through a communications network. With continued reference toFIG.1, computing device100includes bus102that directly or indirectly couples the following devices: memory104, one or more processors106, one or more presentation components108, input/output (I/O) ports110, I/O components112, and power supply114. Bus102represents what may be one or more busses (such as an address bus, data bus, or combination thereof). Although the devices ofFIG.1are shown with lines for the sake of clarity, in reality, delineating various components is not so clear, and metaphorically, the lines would more accurately be grey and fuzzy. For example, one may consider a presentation component such as a display device to be one of I/O components112. Also, processors, such as one or more processors106, have memory. The present disclosure hereof recognizes that such is the nature of the art, and reiterates thatFIG.1is merely illustrative of an exemplary computing environment that can be used in connection with one or more implementations of the present disclosure. Distinction is not made between such categories as “workstation,” “server,” “laptop,” “handheld device,” etc., as all are contemplated within the scope ofFIG.1and refer to “computer” or “computing device.” Computing device100typically includes a variety of computer-readable media. Computer-readable media can be any available media that can be accessed by computing device100and includes both volatile and nonvolatile media, removable and non-removable media. By way of example, and not limitation, computer-readable media may comprise computer storage media and communication media. Computer storage media includes both volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules or other data. Computer storage media includes RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices. Computer storage media does not comprise a propagated data signal. Communication media typically embodies computer-readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media. The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared and other wireless media. Combinations of any of the above should also be included within the scope of computer-readable media. Memory104includes computer-storage media in the form of volatile and/or nonvolatile memory. Memory104may be removable, nonremovable, or a combination thereof. Exemplary memory includes solid-state memory, hard drives, optical-disc drives, etc. Computing device100includes one or more processors106that read data from various entities such as bus102, memory104or I/O components112. 
One or more presentation components108presents data indications to a person or other device. Exemplary one or more presentation components108include a display device, speaker, printing component, vibrating component, etc. I/O ports110allow computing device100to be logically coupled to other devices including I/O components112, some of which may be built in computing device100. Illustrative I/O components112include a microphone, joystick, game pad, satellite dish, scanner, printer, wireless device, etc. Radio116represents a radio that facilitates communication with a wireless telecommunications network. In aspects, the radio116utilizes one or more transmitters, receivers, and antennas to communicate with the wireless telecommunications network on a first downlink/uplink channel. Though only one radio is depicted inFIG.1, it is expressly conceived that the computing device100may have more than one radio, and/or more than one transmitter, receiver, and antenna for the purposes of communicating with the wireless telecommunications network on multiple discrete downlink/uplink channels, at one or more wireless nodes. Illustrative wireless telecommunications technologies include CDMA, GPRS, TDMA, GSM, and the like. Radio116might additionally or alternatively facilitate other types of wireless communications including Wi-Fi, WiMAX, LTE, or other VoIP communications. As can be appreciated, in various embodiments, radio116can be configured to support multiple technologies and/or multiple radios can be utilized to support multiple technologies. A wireless telecommunications network might include an array of devices, which are not shown so as to not obscure more relevant aspects of the invention. Components such as a base station, a communications tower, or even access points (as well as other components) can provide wireless connectivity in some embodiments. Turning now toFIG.2, an exemplary network environment is illustrated in which implementations of the present disclosure may be employed. Such a network environment is illustrated and designated generally as network environment200. Network environment200is but one example of a suitable network environment and is not intended to suggest any limitation as to the scope of use or functionality of the invention. Neither should the network environment be interpreted as having any dependency or requirement relating to any one or combination of components illustrated. Network environment200generally includes a cell site202, one or more user devices, and one or more components configured to wirelessly communicate between the one or more user devices and a network220. As used herein, the term “cell site” is used to generally refer to one or more cellular base stations, nodes, RRUs control components, and the like (configured to provide a wireless interface between a wired network and a wirelessly connected user device) that are geographically concentrated at a particular site, so as not to obscure the focus of the present invention. Though illustrated as a macro site, the cell site202may be a macro cell, small cell, femto cell, pico cell, or any other suitably sized cell, as desired by a network carrier for communicating within a particular geographic area. In aspects, such as the one illustrated inFIG.2, the cell site202may comprise one or more nodes (e.g., NodeB, eNodeB, ng-eNodeB, gNodeB, en-gNodeB, and the like) that are configured to communicate with user devices in one or more discrete geographic areas using one or more antennas of an antenna array. 
In the aspect illustrated inFIG.2, the cell site202may comprise a first antenna system204, a second antenna system232, and a third antenna system234, wherein the first antenna system204is configured to provide coverage for a first sector211while the first antenna system204is operating in non-redundant mode, the second antenna system232is configured to provide coverage for a second sector213, and the third antenna system234is configured to provide coverage for a third sector215. In aspects where the cell site202comprises more than one antenna system, the antenna systems may be configured to face in different directions; for example,FIG.2illustrates that if the first antenna system204is said to face 0 degrees relative, then the second antenna system232may be said to face 180 degrees relative and the third antenna system234may be said to face 90 degrees relative. The network environment200includes one or more user devices that are in wireless communication with the cell site202via the one or more antenna systems. In an illustrative aspect, a first user device210may be disposed in the first sector211, a second user device212may be disposed in the second sector213, and a third user device214may be disposed in the third sector215(though many more user devices may be in any sector or a sector may be vacant). In network environment200, the user device210,212, or214may take on a variety of forms, such as a personal computer (PC), a user device, a smart phone, a smart watch, a laptop computer, a mobile phone, a mobile device, a tablet computer, a wearable computer, a personal digital assistant (PDA), a server, a CD player, an MP3 player, a global positioning system (GPS) device, a video player, a handheld communications device, a workstation, a router, a hotspot, and any combination of these delineated devices, or any other device (such as the computing device100) that communicates via wireless communications with the cell site202in order to interact with one or more components of the network220. Each of the first user device210, the second user device212, or the third user device214may be configured to wirelessly communicate using any one or more wireless communication protocols (e.g., 5G, 4G, and the like). In some cases, the user devices in network environment200can optionally utilize network220to communicate with each other and/or computing devices (e.g., a mobile device(s), a server(s), a personal computer(s), etc.) through the one or more components associated with the cell site202. The network220may be a telecommunications network(s), or a portion thereof. A telecommunications network might include an array of devices or components (e.g., one or more base stations), some of which are not shown. Those devices or components may form network environments similar to what is shown inFIG.2, and may also perform methods in accordance with the present disclosure. Components such as terminals, links, and nodes (as well as other components) can provide connectivity in various implementations. Network220can include multiple networks, as well as being a network of networks, but is shown in more simple form so as to not obscure other aspects of the present disclosure. Network220can be part of a telecommunication network that connects subscribers to their immediate service provider. In some instances, network220can be associated with a telecommunications provider that provides services (e.g., voice, data, SMS) to user devices, such as user devices210,212, or214.
For example, network220may provide voice, SMS, and/or data services to user devices or corresponding users that are registered or subscribed to utilize the services provided by a telecommunications provider. Network220can comprise any one or more communication networks providing voice, SMS, and/or data service(s), such as, for example, a 1× circuit voice, a 3G network (e.g., CDMA, CDMA2000, WCDMA, GSM, UMTS), a 4G network (WiMAX, LTE, HSDPA), or a 5G network. In aspects of the present invention, an antenna system is disclosed that provides redundant coverage. Conventionally, a sector antenna system (also referred to as an antenna panel) is a directional antenna system used to transmit and/or receive signals within a particular horizontal (i.e., azimuthal) range and may comprise one or more antenna elements, which may be arranged into one or more arrays or subarrays, and a reflector. As used herein, the term “reflector” is used to generally refer to a component that reflects RF waves in a particular direction (i.e., a reflecting plate, ground plane, or reflecting surface). In other words, a conventional sector antenna system is configured to transmit within a horizontal range of at most +/−90 degrees of a vector that is normal to the surface/face of the reflector. In order to provide appropriate levels of coverage, wireless network operators typically arrange at least one sector antenna system to face into each sector served by a particular cell site. Each conventional antenna system is individually supplied with a unique power supply, allowing any particular antenna system to transmit at full power to its desired coverage area. Adding antenna systems to a tower is limited by wind load considerations, weight, and space availability (towers typically host antenna systems for multiple operators), as well as by economic factors (increasing the amount of money an operator must pay a tower company to lease a particular footprint on the tower). Aspects of the present disclosure solve these problems, and are directed to a redundant antenna system. Implementations of the present disclosure are directed to a redundant antenna system that comprises a first directional antenna system (e.g., a first antenna panel) and a second directional antenna system (e.g., a second antenna panel) that share a power source but are configured to face (i.e., transmit signals) in different directions.FIG.2generally illustrates that the first antenna system204may be referred to as a redundant antenna system type and comprises a first panel206and a second panel208; in contrast, each of the second and third antenna systems232and234comprises only a single panel configured to directionally transmit signals to its respective sector. The redundant antenna system disclosed herein is defined by the first panel206and the second panel208facing in different directions; for example, in the illustrated aspect, the first panel206of the first antenna system204is configured to face 0 degrees relative and the second panel208of the first antenna system204is configured to face 180 degrees relative. Though illustrated as only having one of the redundant antenna systems (i.e., the first antenna system204), it is specifically envisioned that more than one redundant antenna system may be at a particular cell site, or that every antenna system at a cell site is of the redundant antenna system type. The redundant antenna system disclosed herein may be said to generally have two operational modes: normal (non-redundant) and redundant.
In a normal, non-redundant operation mode, the first antenna system204will only transmit signals from the first panel206to the first sector211, which may be said to be generally defined by a first sector boundary216and a third sector boundary218, and will not transmit signals from the second panel208. As discussed in greater detail herein, upon a determination or indication that a service degradation has occurred in an area that the second panel208is capable of serving, power will be supplied from a common power supply to a second set of antenna elements of a second antenna array that comprises the second panel208in addition to (or instead of) being supplied to a first set of antenna elements of a first antenna array that comprises the first panel206. Thus, in addition to supplying the first set of antennas of the first antenna panel206, which are used to transmit wireless downlink signals to the first sector211, the power supply also supplies the second set of antennas of the second antenna panel208, which are used to transmit wireless downlink signals to the second sector213. Turning toFIGS.3A-3B, the redundant antenna system204ofFIG.2is shown in greater detail. As noted with respect toFIG.2, the redundant antenna system204comprises a first antenna panel206and a second antenna panel208. The first antenna panel206comprises a first antenna array310and the second antenna panel208comprises a second antenna array320, each of the first antenna array310and the second antenna array320being comprised of a plurality of antenna elements. A subset of the first antenna array310may be defined as a first set of antenna elements312and a subset of the second antenna array320may be defined as a second set of antenna elements322. In aspects, the first set of antenna elements312may comprise the same number of antenna elements as the second set of antenna elements322; however, in other aspects the numbers may be different. Further, thoughFIG.3Adepicts each of the first set of antenna elements312and the second set of antenna elements322as comprising 8 antenna elements, each of the sets of antenna elements may comprise at least 2 antenna elements (e.g., 2, 4, 8, 16, 32, or 64 elements per set). As shown inFIG.3A, the first set of antenna elements312and the second set of antenna elements322may be mirrored; that is, the first set of antenna elements312may be in the same position on the first antenna panel206as the second set of antenna elements322on the second antenna panel208(e.g., columns A-D, rows 1-2). In other aspects, the position of the first set of antenna elements312on the first antenna panel206may be symmetrically related to the position of the second set of antenna elements322on the second antenna panel208(e.g., the first set of antenna elements312may be in columns A-D, rows 1-2 of the first antenna panel206and the second set of antenna elements322may be in columns E-H, rows 1-2 of the second antenna panel208). Regardless of how many antenna elements make up each of the first set of antenna elements312and the second set of antenna elements322, both the first set of antenna elements312and the second set of antenna elements322are powered by a common power supply302. Though it could take different forms, power supply302may comprise a power amplifier. The power amplifier302may be said to be coupled to each antenna element of the first set of antenna elements312via a first power feed314and coupled to each antenna element of the second set of antenna elements322via a second power feed324.
In normal operation, when the redundant antenna system is only transmitting with the primary first antenna panel206, the power supply302only provides power to the first set of antenna elements312and does not supply power to the second set of antenna elements322(i.e., the power supplied to the second set of antenna elements322is 0 dBm). In redundant operation, a control component, such as a switch, may operate to selectively provide power to the second set of antenna elements322. In redundant mode, the redundant antenna system204may be configured to allocate power between the first set of antenna elements312and the second set of antenna elements322in any of a number of various configurations. In a first configuration, it may be determined that the power supply is capable of supplying a maximum total power of 40 dBm to any connected antenna element(s); if a first amount of power, supplied to the first set of antennas312, is less than that maximum, it may be said that a power headroom exists (the difference between the maximum total power capable of being supplied by the power supply302and the first amount of power). In such an instance, a second amount of power may be supplied to the second set of antenna elements322in an amount equal to or less than the power headroom without changing the propagation characteristics of the first antenna panel206. In another aspect, such as an aspect where the power supply302is being fully utilized to power the first set of antenna elements312in a normal operation mode, upon activation of the redundant mode, the power supply302may re-allocate (or be instructed to re-allocate) at least a portion of the first amount of power from the first set of antenna elements312to the second set of antenna elements322. In one non-limiting example, if the power supply302is capable of supplying 40 dBm and was supplying the full 40 dBm to the first set of antenna elements312during normal operation, then when the redundant antenna system204enters redundant mode, the first amount of power may be reduced from 40 dBm to 37 dBm and re-allocated to the second set of antennas322such that the second amount of power is also 37 dBm. In other aspects, only a portion of the power reduction may be reallocated (e.g., in the previous example the first amount of power may be reduced from 40 dBm to 37 dBm, leaving 37 dBm available, and the second amount of power may be increased from 0 dBm to 34 dBm, leaving 34 dBm of power available should it be desired that the first amount of power or the second amount of power be further modified). Power allocation may be static or dynamic. In aspects where power allocation is static, the total power available to be supplied by the power supply302may be equally divided or divided based on known or anticipated propagation characteristics of the first antenna panel206and the second antenna panel208. For example, in redundant mode, if the power supply302is capable of supplying a total maximum power of 40 dBm, then 37 dBm may be allocated to each of the first set of antenna elements312and the second set of antennas322. In aspects where power allocation is based on propagation characteristics, it may be known (or estimated) that a degraded service area may be served by the second antenna panel208by supplying the plurality of antenna elements that comprise the second antenna array with a particular amount of power, which functionally equates to ensuring a certain quality of connection within a certain range.
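The dBm arithmetic in these examples is easy to verify, because the splits are done in watts while the figures are quoted in dBm. The following sketch is illustrative only; the function names and the 40 dBm supply figure are assumptions taken from the example above, not part of any claimed implementation:

```python
# Hedged sketch of the dBm/watt arithmetic used in the examples above.
# Function names are illustrative; only the numbers come from the text.
import math

def dbm_to_watts(dbm: float) -> float:
    return 10 ** (dbm / 10) / 1000.0

def watts_to_dbm(watts: float) -> float:
    return 10 * math.log10(watts * 1000.0)

MAX_TOTAL_DBM = 40.0   # assumed maximum total power of the common supply

def headroom_watts(first_amount_dbm: float) -> float:
    """Headroom: max supply power minus the power already feeding panel 1."""
    return dbm_to_watts(MAX_TOTAL_DBM) - dbm_to_watts(first_amount_dbm)

# Reducing the first amount from 40 dBm to 37 dBm halves it (10 W -> 5 W),
# freeing 5 W, i.e. 37 dBm, for the second set of antenna elements:
first = watts_to_dbm(dbm_to_watts(MAX_TOTAL_DBM) / 2)
second = watts_to_dbm(headroom_watts(first))
print(round(first, 1), round(second, 1))              # 37.0 37.0

# The 31 dBm / 39.4 dBm example discussed below checks out the same way:
print(round(watts_to_dbm(headroom_watts(31.0)), 1))   # 39.4
```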
For example, if a degraded access point serves a geographic area that is smaller than the area that can be served by the redundant antenna system204, when a service degradation occurs, the degraded geographic service area may be served by the second antenna panel208using less than half of the maximum power available from the power supply (i.e., if a degraded geographic service area can be served by the second antenna panel208by setting the second amount of power to 31 dBm, the first amount of power could be as much as 39.4 dBm). In other aspects, the power allocation may be dynamic. Dynamic power decisions may be based, for example, on the quantity and location of UEs within the degraded geographic service area and/or the quantity and/or location of UEs within the geographic area served by the first antenna panel206(e.g., the first sector211ofFIG.2). In one non-limiting example, if it is determined that a relatively low number of UEs are in the degraded geographic service area and disposed near the redundant antenna system204, the second amount of power may be reduced; whereas if the number of UEs increases and/or they move further away from the redundant antenna system204, the second amount of power may be increased to continue serving the UEs in the degraded geographic service area. The first amount of power may be similarly reduced or increased based on the location and number of UEs in the geographic service area served by the first antenna panel206. In a dynamic power allocation scheme, it is thus possible that the maximum total power available from the power supply302may not be fully utilized. The redundant antenna system204shown inFIGS.2-3Bdepicts a system wherein the first antenna panel206is back-to-back and parallel to the second antenna panel208. For the purposes of this specification, this relative orientation between panels may be described as the first antenna panel206and the second antenna panel208being offset by 180 degrees.FIGS.4A-4Dillustrate other non-limiting examples of redundant antenna system configurations. A redundant antenna system may have 2 panels or more than 2 panels. For example,FIGS.4A-4Billustrate a redundant antenna system comprising three discrete antenna panels with a 120 degree offset. In aspects with three or more antenna panels, a single power supply may be connected to a set of antenna elements on each panel and may have any one or more of the features described with respect to the two panel system disclosed inFIGS.2-3B.FIGS.4C-4Dillustrate an aspect of the redundant antenna system where each panel of the redundant antenna system is not equally offset. For example, the redundant antenna system may comprise two panels that may only be 90 degrees offset. Though the first and second antenna panels are configured/oriented to transmit signals in different directions, the areas served by the panels may have some overlap. Such an aspect may be useful when implementing redundant coverage for a particularly high traffic area. Returning toFIG.2, the network environment200may further comprise a power management engine240, which may take the form of one or more executable processes running on one or more computer processing devices. The power management engine240comprises at least a monitor242, an analyzer244, and a controller246, each of which may take the form of one or more computer processing components or executable processes running thereon.
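As a purely illustrative sketch of the dynamic scheme described above, a controller might weight each panel's share of the common supply by UE count and distance. The demand metric and the proportional split below are assumptions for illustration, not the patent's algorithm:

```python
# Toy dynamic power split driven by UE count and mean distance (assumed metric).
MAX_TOTAL_W = 10.0   # a 40 dBm supply, expressed in watts

def demand(ue_count: int, mean_distance_km: float) -> float:
    # Free-space path loss grows with distance squared, hence the square.
    return max(ue_count, 1) * mean_distance_km ** 2

def split_power(ues_1: int, dist_1: float, ues_2: int, dist_2: float):
    d1, d2 = demand(ues_1, dist_1), demand(ues_2, dist_2)
    total = d1 + d2
    return MAX_TOTAL_W * d1 / total, MAX_TOTAL_W * d2 / total

# Few nearby UEs in the degraded sector -> small second amount of power:
print(split_power(20, 2.0, 3, 0.5))
# More UEs, farther away -> the second amount grows:
print(split_power(20, 2.0, 15, 1.8))
```

A real controller could also deliberately leave part of the supply unused, consistent with the observation above that dynamic allocation may not fully utilize the maximum total power.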
The monitor242is generally responsible for monitoring the network environment200for indications that the first antenna system204should change operational modes. In aspects, the monitor242may determine whether the first antenna system204should change from the normal operating mode to the redundant mode or from redundant mode to normal mode. Specifically, the monitor242may detect or receive indications of changes in coverage in a particular area that may be relevant to making operating mode changes. The monitor242may detect (e.g., by detecting a fault in an antenna element), determine (e.g., by detecting a change to the RSRQ, RSRP, SINR, etc., observed by a UE within a particular area), or receive an indication from an outside source (e.g., an MME) that a service degradation has occurred, is likely to have occurred, or is likely to occur (e.g., based on trends of equipment or traffic load). UsingFIG.2as an illustrative example, the monitor242may determine that the second antenna system232has experienced a fault, such as by receiving an indication from a node, an MME, or the network220that at least one antenna element of the antenna array that comprises the second antenna system232is not operating nominally. The monitor242may associate the antenna fault information with a particular geographic area that may be (or is) impacted by the degradation. In this illustrative example, the monitor242may associate the second sector213with the degraded second antenna system232and communicate the information to the analyzer244. In other examples, the monitor242may determine that a service degradation exists based on an overload condition; that is, that an amount of traffic load within the second sector213exceeds a predetermined threshold and that the second antenna system232is unable to serve all of the traffic in the second sector213as desired. Such a threshold may be a maximum capacity of the second antenna system232or a portion of the maximum capacity (e.g., 75%, 90%, etc.) in order that the network environment may proactively bring on redundant antenna systems to prevent undesirable effects for user devices caused by increasing traffic (i.e., preventing the system from reaching maximum capacity would prevent call drops, call failures, etc.). Regardless of what caused the service degradation or which geographic area is affected by the service degradation (referred to herein as the degraded geographic service area), the monitor242communicates the location of the degraded geographic service area and/or the cause of the service degradation to the analyzer244. At a high level, the analyzer244is configured to determine the availability of redundant antenna systems and determine power management instructions therefor. Based on information communicated from the monitor242, the analyzer244may be provided with an indication about the location of the degraded geographic service area and/or the cause of the service degradation (e.g., fault, failure, traffic overload(ing), etc.). The analyzer244may compare the information received from the monitor242to information that is known about the location and capabilities of redundant antenna systems within the network environment. UsingFIG.2as an illustrative example, the analyzer244may compare information received from the monitor242(that the degraded geographic service area comprises the second sector213) against a known location of the redundant antenna system204.
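A minimal sketch of the monitor's degradation test, assuming a simple status record and the 75% proportional threshold from the example above (the class layout and names are hypothetical):

```python
# Hedged sketch of the monitor's fault/overload check; layout is assumed.
from dataclasses import dataclass

@dataclass
class SectorStatus:
    sector_id: int
    attached_ues: int
    max_capacity: int          # the access point's maximum UE capacity
    antenna_fault: bool = False

def service_degraded(s: SectorStatus, threshold_fraction: float = 0.75) -> bool:
    """Degradation if a fault is reported or traffic exceeds a portion of capacity."""
    overloaded = s.attached_ues > threshold_fraction * s.max_capacity
    return s.antenna_fault or overloaded

status = SectorStatus(sector_id=213, attached_ues=80, max_capacity=100)
print(service_degraded(status))   # True: 80 > 0.75 * 100
```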
The analyzer244may know that the redundant antenna system204is located at the cell site202and that the redundant antenna system204comprises a redundant second antenna panel208that is capable of transmitting signals (also referred to as a backlobe) to the second sector213. Based on this information, the analyzer244may make power management decisions. In a first aspect, if the analyzer244has received information that the service degradation is the result of a traffic condition, the analyzer244may determine that the first antenna system204should switch to redundant mode by allocating a second amount of power to the second set of antennas (e.g., the second set of antennas322ofFIG.3A). In this aspect, the analyzer244may also communicate with the monitor242to regularly monitor and provide updates of the traffic condition in the second sector213to the analyzer244in order that the analyzer244may determine when the redundant antenna system204should revert to the normal operating mode (i.e., when the traffic falls back below the predetermined threshold). In other aspects, if the analyzer244is provided with information that the service degradation is due to a hardware fault or failure of the second antenna system232, the analyzer244may determine that the first antenna system should switch to redundant mode and supply the second amount of power to the second antenna panel208until the analyzer244receives a subsequent communication that the service degradation has been resolved. The analyzer244may determine the first amount of power supplied to the first set of antennas312ofFIG.3Aand the second amount of power supplied to the second set of antennas322as described in greater detail herein. Once the analyzer has determined the first amount of power and the second amount of power, the analyzer244will communicate the same to the controller246. The controller246will either execute the power decisions (e.g., if the controller246is local, it may take the form of the power supply or a switch) or communicate the instructions to the appropriate power supply (e.g., if the controller246is remote). Turning now toFIG.5, network environment500is illustrated with one or more components ofFIG.2and an additional cell site510. Cell site510may be said to comprise access point512, wherein access point512, in a first operating condition, is configured to transmit a wireless downlink signal to and serve a neighboring sector516defined by sector boundaries514. In an aspect of the present disclosure, it is envisioned that the second antenna system232may become degraded, whether due to a fault, failure, or traffic condition. In becoming degraded, the second antenna system232may fail to serve one or more of the UEs in the second sector213, which may then be referred to as the degraded geographic service area. In such a condition, the redundant antenna system204may be configured to operate in a redundant mode such that the second antenna panel208is powered in order to serve one or more of the UEs in the second sector213. In aspects where the second antenna panel208is powered in addition to the first antenna panel206, the second antenna panel208may not be able to serve the entire degraded geographic service area. 
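The analyzer's two decision branches can be summarized in a few lines. The following is an assumption-laden sketch of that logic, not the patent's code: traffic-driven redundancy reverts when load falls back below the threshold, while fault-driven redundancy persists until the fault is resolved.

```python
# Illustrative sketch of the analyzer's mode decision (names are assumed).
def decide_mode(cause: str, load_fraction: float = 0.0,
                fault_cleared: bool = False, threshold: float = 0.75) -> str:
    if cause == "traffic":
        return "redundant" if load_fraction > threshold else "normal"
    if cause == "fault":
        return "normal" if fault_cleared else "redundant"
    return "normal"

print(decide_mode("traffic", load_fraction=0.9))    # redundant
print(decide_mode("traffic", load_fraction=0.6))    # normal (reverted)
print(decide_mode("fault", fault_cleared=False))    # redundant until resolved
```

The controller246then either applies the resulting power amounts itself or forwards them to the appropriate power supply, as described above.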
For example, a fourth UE504may have been disposed at or near the cell edge of the second sector213and may have been served by the second antenna system232when full power was allocated to serving the second sector213; however, having less power supplied, the second antenna panel208may only be able to serve a redundant geographic service area having boundaries217. In this example, the fourth UE504is disposed beyond the redundant geographic service area. In order to serve the fourth UE504, upon a determination or indication that the redundant antenna system204is operating in a redundant mode and that the redundant geographic service area served by the second antenna panel208is not serving a particular area or a particular UE (e.g., the fourth UE504), an indication may be communicated (e.g., using the X2 interface or another inter-node interface) to the access point512at the second cell site510to modify its transmission characteristics to recapture the area and/or UE that are now unserved by the second antenna panel208as it operates in redundant mode. For example, the access point512may increase its transmission power or utilize beamforming to extend its coverage area such that a recaptured geographic service area520having boundaries518includes dropped UEs, such as the fourth UE504. FIG.6depicts a flow diagram of an exemplary method600for managing power allocation of a redundant antenna system. At step610, an indication of a coverage requirement in a first sector is received. As discussed with respect at least toFIGS.2and5, such an indication may be the result of a fault or failure that has been detected by a first access point that is normally configured to serve the first sector. Said indication may also be based on a determination that an amount of traffic requesting service from the first access point exceeds a predetermined threshold. In aspects, the threshold may comprise a maximum service capacity of the first access point. For example, if the first access point is configured to serve a maximum of N number of UEs, but N+10 UEs are requesting service from the first access point, the first access point may be said to be overloaded, which may trigger the indication of step610. In another aspect, the predetermined threshold may be said to be a portion of N; for example, the threshold may be said to be 0.75N, 0.9N, or any other desirable level that, when exceeded, triggers the indication of step610. A proportional threshold may specifically be desirable to prevent UEs from having connection failures before the method600is triggered. Once the indication of step610has been received, at step620, it is determined that a second sector backlobe is available. For example, with reference toFIGS.2and5, if an indication is received that a coverage requirement exists in sector213, at step620, it may be determined that a redundant antenna system, such as the redundant antenna system204, has a second antenna panel208that is capable of transmitting a second set of downlink signals to the degraded service area in sector213, which may be said, for the purposes of method600, to serve the first sector using a second sector backlobe.
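Under assumed helper names, steps 610 and 620 reduce to a short guard sequence; step 630, described next, then causes the transmission. This is a sketch only, and the 0.75 factor is just the proportional threshold from the example above:

```python
# Hedged sketch of steps 610-620; names and return values are illustrative.
def method_600_trigger(attached_ues: int, capacity_n: int,
                       fault_detected: bool, backlobe_available: bool) -> bool:
    # Step 610: indication of a coverage requirement in the first sector.
    indication = fault_detected or attached_ues > 0.75 * capacity_n
    # Step 620: determine that a second sector backlobe is available.
    return indication and backlobe_available

# Overload case from the text: N + 10 UEs requesting service, capacity N = 100.
print(method_600_trigger(110, 100, False, True))   # True -> proceed to step 630
```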
Once it has been determined that a second sector backlobe is available, for example by determining the availability of the second antenna panel208to transmit downlink signals, the method600may proceed to step630, wherein the redundant antenna system, such as the redundant antenna system204ofFIG.2, is caused to transmit a signal to the degraded first sector using the antenna panel that is capable of producing a second sector backlobe. In aspects, step630may be the result of receiving an indication from a remote component to transmit a signal to the first sector using the second sector backlobe or, in aspects where method600is at least partially executed local to the redundant antenna system, step630may be executed by a local instruction, based on local determinations in steps610and/or620. Many different arrangements of the various components depicted, as well as components not shown, are possible without departing from the scope of the claims below. Embodiments in this disclosure are described with the intent to be illustrative rather than restrictive. Alternative embodiments will become apparent to readers of this disclosure after and because of reading it. Alternative means of implementing the aforementioned can be completed without departing from the scope of the claims below. Certain features and subcombinations are of utility and may be employed without reference to other features and subcombinations and are contemplated within the scope of the claims. In the preceding detailed description, reference is made to the accompanying drawings, which form a part hereof, wherein like numerals designate like parts throughout, and in which is shown, by way of illustration, embodiments that may be practiced. It is to be understood that other embodiments may be utilized and structural or logical changes may be made without departing from the scope of the present disclosure. Therefore, the preceding detailed description is not to be taken in a limiting sense, and the scope of embodiments is defined by the appended claims and their equivalents.
11863255
DETAILED DESCRIPTION Systems, apparatuses and methods for power control to a beam steering phased array antenna in satellite applications are disclosed. The systems, apparatuses and methods support the growing demands of satellite communications in the millimeter wave spectrum and enable the deployment of broadband wireless connectivity to users. In various examples, LEO satellites and/or ground stations are equipped with a novel Beam Steering Phased Array Antenna (“BSPA”) to enable beamforming by using multiple elements to support high gain and narrow beam formation. Power control is provided with a multi-port amplification (“MPA”) matrix to optimize antenna operation and reduce losses. These MPA matrices may be used for power distribution in various system configurations, including Multi-User Multiple-Input, Multiple-Output (“MU-MIMO”) configurations. The BSPA antennas disclosed herein provide high directional gain and narrow beams, enabling very high data rate spatial transmission in satellite communications, and in particular, in broadband mobile satellite systems, where satellites can move in orbits and communicate with ground stations, stationary or in mobility. Beams formed by BSPA antennas are steered by a certain angle to find ground stations as the communication satellite moves, such as for Satellite Communications on the Move (“SOTM”) and LEO satellites. When beams are electronically steered, there may be a gain loss observed. Larger steering angles result in higher gain loss. These losses may be compensated by adjusting the power using a power amplifier, which in the disclosed examples is part of an MPA matrix or system. Electronic beam steering also eliminates mechanical beam steering, and thereby reduces the mass, volume, and power of the controlling and operating subsystems and antenna arrays. This MPA system provides power level control, failure resilience, and power combining/dividing capabilities over a range of complex operational requirements. In multiple beam operations of the SOTM systems, beam handover is required when satellites move out of the sight of a ground station. Beam swapping can be performed for the handover procedure, where a new beam is formed and steered to a new ground station, which takes over another beam pointing to another ground station that is about to become out of sight of the satellite. As described in more detail herein below, the various examples propose and disclose a method using beam switching with a Benes network topology to support seamless handover scenarios. When beams are electronically steered, a gain loss is usually observed, which is proportional to cos(θ), with θ referring to the steering angle. The larger the steering angle, the higher the gain loss will be. At the same time, the larger the steering angle, the farther the distance between the satellite and the ground station, or terminal. In space propagation, the propagation loss is increased by six (6) dB when the distance is doubled. For example, for a LEO satellite moving in its orbit at an elevation angle of thirty degrees (30°), the beam steering loss is about three (3) decibels (dB), and the doubled LEO-terminal distance implies about six (6) dB of path loss, thereby totaling a loss of up to nine (9) dB. These losses can be compensated by adjusting the power, as one of the most effective possible solutions, by using an MPA matrix.
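The 9 dB figure can be checked numerically. In the sketch below, the steering angle is taken to be 60° off boresight for a 30° elevation pass, and the slant range is assumed to be twice the minimum distance; both are assumptions implicit in the example above rather than stated system parameters:

```python
# Worked check of the ~9 dB compensation figure quoted above.
import math

def steering_loss_db(steer_angle_deg: float) -> float:
    """Scan loss for gain proportional to cos(theta)."""
    return -10 * math.log10(math.cos(math.radians(steer_angle_deg)))

def extra_path_loss_db(distance_ratio: float) -> float:
    """Free-space loss grows 6 dB per doubling of distance (20*log10)."""
    return 20 * math.log10(distance_ratio)

loss = steering_loss_db(60.0) + extra_path_loss_db(2.0)
print(round(loss, 1))   # ~9.0 dB, matching the 3 dB + 6 dB example
```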
The proposed solution based on an MPA system provides power level control, failure resilience, and power combining/dividing features to meet very complex operational requirements. It should be noted that antennas used in space applications, such as satellites, require large-scale beamforming with high directivity gain. These antennas are often designed for beam steering and beam switching, where power control is a key consideration. The goal is to distribute power to the individual antenna elements and to control individual antenna elements (or groups of antenna elements) with individualized power levels. It is appreciated that, in the following description, numerous specific details are set forth to provide a thorough understanding of the examples. However, it is appreciated that the examples may be practiced without limitation to these specific details. In other instances, well-known methods and structures may not be described in detail to avoid unnecessarily obscuring the description of the examples. Also, the examples may be used in combination with each other.FIG.1illustrates a schematic diagram of an antenna system100in accordance with various examples. In one or more examples of the present disclosure, for a satellite with several links pointing to several different ground stations, a power switching and distribution network102may be employed. The power switching and distribution network102may comprise a transceiver104, a rearrangeable switch network106, and a power distribution network108. The power distribution network108is coupled to a plurality of Beam Steering Phased Array (“BSPA”) antennas110-114, which each transmit (radiate) and/or receive (detect) Electromagnetic (“EM”) beams116-120. In one or more examples and as described in more detail below, the BSPA antennas110-114may be metastructure antennas or any other phased array antennas capable of beam steering in millimeter wave frequencies. A metastructure, as generally defined herein, is an engineered, non- or semi-periodic structure that is spatially distributed to meet a specific phase and frequency distribution. In addition, it should be noted that, in some examples, lasers and/or detectors may be employed by the disclosed antenna system100instead of BSPA antennas110-114, as is shown inFIG.1. BSPA antennas110-114may be implemented in a BSPA antenna module as described in more detail below with reference toFIG.6. The rearrangeable switch network106, or configurable network, is used for signal source permutation. The power distribution network108is an MPA system, and is responsible for power distribution and power level control of the beams116-120. This feature of power distribution and control may also provide redundancy to achieve fault protection by having parallel amplification. It should be noted that there may be a variety of different designs for the power distribution network108, such as designs utilizing hybrid de-couplers and power amplifiers. As illustrated inFIG.1, multiple signal streams are provided from transceiver104to a switch network106. The rearrangeable switch network106enables a variety of configurations and connections from input to output, and thus supports signal source permutations. A controller (not shown) may be used to control the specific permutations depending on application, signal source and operational considerations. The various permutations are then input to the power distribution network108, also referred to as an MPA.
The MPA108is a bank of power amplifiers which are connected to an input matrix and an output matrix. An MPA uses parallel amplification of signals through a stack of Power Amplifiers (“PAs”) so as to achieve power sharing among the multiple output ports. The input signals are transformed by an Input Multiport Network (“INET”) and presented to a stack of High Performance Amplifiers (“HPAs”), whereupon after amplification the signals are recombined by the Output Multiport Network (“ONET”).FIG.2illustrates such a system incorporating an MPA in accordance with various examples. A functional block diagram200shows the amplification process, having a pre-amplification processing stage202and a post-amplification processing stage206. The multi-port amplification portion204, MPA, performs parallel amplification of the signals input thereto by a stack or bank of power amplifiers. This process provides multiple paths to the output, where power is then shared by the multiple output ports. Multiple input signals, In1, In2, In3, and In4are provided to the pre-amplification processing stage202for transformation in preparation for the MPA204. After amplification, the output signals are effectively recombined by the post-amplification processing stage206. Pre-amplification processing202includes the INET, where input signals are power-split and phase shifted in preparation for amplification in MPA204. Post-amplification processing206includes the ONET, where the amplified signals are prepared for transmission. Transmission may incorporate all of the output ports, such as Output1, Output2, Output3, and Output4, or may combine into a subset thereof. The power output of each of the output ports may be individually determined, enabling different values of power at each port, rather than a uniform power applied to all signals. Note that the pre-amplification processing stage202corresponds to the level control drivers208, the switch matrix or network210and the hybrid coupler module212. The 4×4 switch network210is a rearrangeable switch network that supports any permutation of four inputs to four outputs. In some examples, the 4×4 switch network210includes six (6) C-switches as illustrated. In other examples, various different types of switches other than C-switches may be employed for the switches of the 4×4 switch network210including, but not limited to, R-switches, T-switches, and/or cross-bar switches. In addition, the antenna system100may be scaled to support any number of inputs to outputs. As such, various different switch networks having various different numbers of inputs and outputs may be employed instead of a 4×4 switch network210for the antenna system100. The post-amplification processing stage or ONET206corresponds to the hybrid coupler216, which has output ports for the output signals. The MPA204includes the hybrid coupler212portion of stage202, the High-Power Amplifier (“HPA”) bank214and the hybrid coupler216in stage206. In some examples, the MPA204is a matrix amplification network of multiple matrices, where each matrix has M number of input ports and M number of output ports (e.g., M-by-M (M×M) matrices), where M may be equal to two, four, eight, and so on (M=2, 4, 8, . . . ). The output of a first matrix, referred to as the INET, is input to a power amplifier bank (e.g., HPA bank214) of M similar (or identical) power amplifiers, such as Traveling-Wave Tube Amplifiers (“TWTAs”) and/or Solid State Power Amplifiers (“SSPAs”).
The HPA bank214provides M number of inputs into a second matrix, i.e., ONET206, that are the outputs of the power amplifiers in HPA bank214. The ONET206separates the M number of amplified signals into M number of streams at M number of outputs. The INET in pre-amplification processing stage202and the ONET206may be recursively constructed using ninety-degree hybrid couplers212and216. As such, the INET in pre-amplification processing stage202is a first set of hybrid couplers212, and the ONET206is a second set of hybrid couplers216. Note that the hybrid couplers212and216have crossover transmission lines related to the wavelength of the center frequency of operation. When power is introduced to the input ports, the power flows to ports at different phases, such as 0° and 90°. Hybrid couplers split the high-power signals in applications where unwanted reflections could damage driver devices. In some examples, a quadrature hybrid is used to generate two outputs having equal amplitude but in quadrature, i.e., 90° apart. The matrix structure of the power distribution network108uses control of the input levels to the power amplifiers in HPA bank214to control their output level. Each of the output ports of the power amplifiers in HPA bank214corresponds to a different beam116-120. By adjusting the input power levels to the INET, the output levels of the power amplifiers in HPA bank214will be adjusted, and a dynamic and reconfigurable power distribution over the beams116-120may be achieved. This feature is highly desirable, for example, in high frequency operations, such as for Ka-band and Q/V-band. However, other frequencies may be utilized for the disclosed antenna system100ofFIG.1including, but not limited to, L-band, C-band, S-band, X-band, and Ku-band. In some examples, the MPA system204, composed of hybrid couplers212, HPA bank214and hybrid couplers216, provides failure resiliency properties. For example, when N number of power amplifiers within the M-dimensional power amplifier array214are in an off mode (e.g., N of the power amplifiers in bank214are not operating, for example due to failure or loss of power), there will be no interruption of operation for the remaining power amplifiers in HPA bank214(i.e., M minus N (M−N) of the power amplifiers in HPA bank214will remain operating). These remaining power amplifiers will continue to work for all M number of paths of the MPA system204, but with a lower power. The power level will be reduced by a factor of (M−N)/M in some examples. As such, power combining and power dividing may be achieved through control of the individual elements of the MPA system204. These solutions may be used to implement multicast or broadcast messaging and communications. For example, the antenna system100ofFIG.1may be implemented in a multi-beam satellite system and/or a wireless cellular system as described in more detail below. Also inFIG.2, the INET in pre-amplification processing stage202and the ONET206are each shown to comprise a 4×4 configuration of ninety-degree hybrid couplers212and216, respectively. In addition, the power amplifier array (e.g., a HPA bank214) is shown to comprise four power amplifiers. However, it should be noted that in some examples, the power amplifier array (e.g., a HPA bank)214may comprise additional power amplifiers (e.g., HPAs) for added redundancy.
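The power-sharing and failure-resilience behavior described above can be demonstrated numerically. The sketch below builds a 4×4 INET/ONET recursively from ideal 90° hybrids via a Kronecker product, which is one conventional construction assumed here for illustration (real MPAs also include the crossover lines mentioned above). It shows that a single-port input is split equally across all four HPAs, recombines at a single output port, and that a failed amplifier degrades power gracefully rather than severing a path:

```python
# Numerical sketch of an ideal 4x4 MPA (illustrative construction, not the
# patent's circuit): INET splits, the HPA bank amplifies, ONET recombines.
import numpy as np

H2 = (1 / np.sqrt(2)) * np.array([[1, 1j], [1j, 1]])   # ideal 90-degree hybrid
INET = np.kron(H2, H2)                                  # recursive 4x4 network
ONET = INET                                             # symmetric output network

def mpa_output(x, amp_gains):
    return ONET @ (np.diag(amp_gains) @ (INET @ x))

x = np.array([1.0, 0.0, 0.0, 0.0])            # single-port excitation

print(np.round(np.abs(INET @ x), 3))          # [0.5 0.5 0.5 0.5]: equal HPA drive

# All four HPAs on: full power emerges at a single (reversed) output port.
print(np.round(np.abs(mpa_output(x, [10, 10, 10, 10])), 3))   # [0. 0. 0. 10.]

# One HPA failed: every path keeps working at reduced power, no port is lost.
print(np.round(np.abs(mpa_output(x, [10, 10, 10, 0])), 3))    # [2.5 2.5 2.5 7.5]
```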
In some examples, the present disclosure implements rearrangeable networks for beam steering and switching on-board a satellite for high data rate space links with high-gain BSPA antennas. BSPA antennas, compared to classical satellite antennas using large size reflectors with mechanical beam steering subsystems, present several advantages, including mass and form factor reduction, and electronic beam steering and switching. System200is controlled by control module218, which may be implemented as one or more modules. Control module218may implement desired power requirements, amplification specifics and transmission paths through the system200. Control module218implements an efficient control method to achieve a power regulation requirement based on the rearrangeable switch network106and power distribution network108, which consists of regulating the downlink power so that LEO satellites are able to maintain the coverage Equivalent Isotropically Radiated Power (“EIRP”) level for user links and/or service links, while moving in orbit. In this scenario, one or several BSPA antennas (e.g., BSPA antennas110-114ofFIG.1) can also be reconfigured for different purposes. In particular, by steering and/or switching the downlink beams, together with the downlink power regulation, the overall coverage can be updated per operational requirements, and optimized for power efficiency, as expressed in EIRP coverage on several footprints on ground. These functionalities, which include switching, multiport amplification, as well as the BSPA antennas, altogether impose requirements for control, monitoring and calibration, among others. Control module218implements a methodology of dynamic control and reconfiguration. The proposed solution is to utilize the power regulation and downlink beam switching capabilities of the proposed flexible payload system architecture to optimize the on-board satellite radio frequency power resource for LEO satellites that are operated in weather-varying and moving environments. Because the number of LEO satellites in a LEO constellation can be very large, a Telemetry, Telecontrol & Command (“TT&C”) subsystem can be overwhelmed, and some timing-sensitive data must be transmitted with low latency. It is proposed to share useful data only with LEO satellites in orbit and in service. The various examples, among several others, disclosed herein consist of utilizing the return link to form a local control loop in control module218which directly controls the payload downlink channels and beams for power optimization. It is also proposed to transmit and exchange data in IP packets associated with beam manipulation using a layer-2 virtual network overlaid on the space network between the LEO constellation nodes, in view of priority scheduling and fast forwarding in Virtual Local Area Network (“VLAN”) frames. Further, it is proposed to employ a VLAN with switches on board to handle control, monitoring and command data packets either globally with the resource management center, or locally whenever possible and necessary. The VLAN can either have gateways to the satellite space network, or use TT&C, or traffic uplink and downlink links, all controlled by a centralized mechanism. One of the major benefits of providing link quality information is to alleviate the loading of the satellite network resource management system.
This is because in a LEO or Medium Earth Orbit (“MEO”) satellite constellation, a highly sophisticated system topology and architecture usually demands a considerable amount of system resources to be used in signaling, monitoring, control and coordination. As the control loop in control module218becomes local, the involved system components are limited to the satellite payload and the ground user terminals located in one or several footprints or cells on ground, and a virtual L2 local network can be used to support it, without the addressing, layering and the corresponding protocol processing relating the function to other system entities, including the hub station, TT&C subsystem, and eventually the network resource management system. Localization of certain control requirements will lead to a reduction of the response time, and of the length of the system transient period. The proposed payload architecture and use of a local control loop in control module218, as compared to classical ones, has a piggyback uplink from user terminals to the satellite via the uplink. It is appreciated that the proposed VLAN approach can be interfaced with the on-board bus for control, monitoring and command, etc. The VLAN frames encapsulate the IP packets for forwarding and switching, and are then delivered as IP packets at the payload bus access nodes. This approach efficiently supports applications such as SNMP for management, and other IP-based applications for payload operations, including the proposed downlink power regulation, downlink beam manipulation including swapping for handover, as well as multiport amplification subsystem calibration and reconfiguration. In some designs, it is also possible to integrate the payload bus and the proposed VLAN into a single VLAN domain so that the bus and payload are managed in a unified way. In this way, priority tasks can also be executed for some time-sensitive applications. Examples of the potential applications for rearrangeable switch network210are illustrated inFIGS.3A-B, where an initial configuration of a switch network300has a first input going to a base station (“BS”), or central unit, A, a second input to BS B, a third input to BS C, and a fourth input to BS D. In a second configuration shown inFIG.3B, the switch302enables permutation allowing the first input signal to be transmitted to BS B, the second input signal to be transmitted to BS D, the third input to be transmitted to BS C, and the fourth input signal to be transmitted to BS A. These are just a few of the permutations that switch network210may enable. The switch network210enables the system to route signals to different system components within the system with little to no change in the beam generation of BSPA antennas110-114. Attention is now directed toFIGS.4A-B, which illustrate switch networks according to various examples. Switch networks402-404are exemplary switch networks that may be employed for the rearrangeable switch network106of the power switching and distribution network102ofFIG.1. Switch networks402-404are each a Benes network comprising multiple base unit switches (e.g., two-by-two (2×2) switches, such as cross-bar switches or double pole-double throw switches) coupled together. These switch nodes are arranged as a sequence of stages connected by inverse shuffle permutations, wherein an inverse shuffle σ−1(x) may be a right-circular rotation of the binary representation of x, as opposed to a left-circular rotation for the ordinary shuffle permutation σ(x).
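For concreteness, the shuffle and inverse-shuffle index maps mentioned above are just bit rotations. A sketch for an 8-port (3-bit) network follows; the port count is an assumption matching the 8×8 example discussed below:

```python
# Shuffle / inverse-shuffle permutations for a Benes-type network with
# 2**K ports (here K = 3, i.e. 8 ports).
K = 3

def shuffle(x: int) -> int:
    """sigma(x): left-circular rotation of the K-bit representation of x."""
    return ((x << 1) | (x >> (K - 1))) & ((1 << K) - 1)

def inverse_shuffle(x: int) -> int:
    """sigma^-1(x): right-circular rotation of the K-bit representation."""
    return ((x >> 1) | ((x & 1) << (K - 1))) & ((1 << K) - 1)

for x in range(8):
    assert inverse_shuffle(shuffle(x)) == x
print([shuffle(x) for x in range(8)])          # [0, 2, 4, 6, 1, 3, 5, 7]
print([inverse_shuffle(x) for x in range(8)])  # [0, 4, 1, 5, 2, 6, 3, 7]
```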
The Benes-type network is a rearrangeable network, as the switch settings may be rearranged to accommodate any change of input-to-output mapping. Switch networks402-404incorporate base unit switches that may be rearranged (e.g., by various switching combinations) without blocking the signal flow through the switch networks402-404. This means they are capable of full throughput as packet switches with various routings. There are a variety of techniques for such routing. As illustrated, this type of network enables configuration to satisfy a variety of scenarios without interruption of information or signal flow through the network. These networks may be implemented for any number of input-to-output (I/O) ports, such as network402having 4 inputs and 4 outputs; or may be implemented as network404having 8 inputs and 8 outputs. The internal configurations are coupled to allow reconfiguration. As illustrated, the individual elements of networks402and404are each 2×2 elements. Alternate examples may implement a variety of sizes/configurations.FIG.5illustrates power amplifier (e.g., HPA) banks502-504comprising redundant power amplifiers (e.g., HPAs)508,516and522and power amplifiers504-506,510-512,514,518-520, and524, according to various examples. In some examples, the power amplifier banks502-504may be employed for the power amplifier array (e.g., a HPA bank)214of the system200for added redundancy. Also, in some examples, these power amplifier banks502-504may each be implemented as a small cell backhaul configuration for a cellular system, such as for 5G specifications. As shown inFIG.5, power amplifier bank502is a single redundant amplifier configuration having an additional redundant power amplifier508with four inputs and four outputs. As such, for power amplifier bank502, one power amplifier failure is recoverable without interruption. Power amplifier bank504is a double redundant amplifier configuration having two additional redundant power amplifiers516,522with four inputs and four outputs. As such, for power amplifier bank504, two power amplifier failures are recoverable without interruption. Attention is now directed toFIG.6, which is a schematic diagram of a BSPA module for use in an antenna system implemented as inFIG.1and in accordance with various examples. BSPA module600includes a dynamically controllable BSPA antenna602, an RF Integrated Circuit (“RFIC”)604and a feed network606. An antenna controller608dynamically controls the BSPA antenna602at the direction of control module218, creating transmission beams with specified parameters, such as beam width, beam direction, and so forth, to achieve a power regulation requirement for a set of LEO satellites. In various examples, the BSPA antenna602may be a metastructure antenna or any other antenna capable of radiating RF signals in millimeter wave frequencies. A metastructure, as generally defined herein, is an engineered, non- or semi-periodic structure that is spatially distributed to meet a specific phase and frequency distribution. A metastructure antenna is composed of multiple metastructure antenna elements positioned in a metastructure array, as shown with element610in metastructure array612. The metastructure antenna elements may include microstrips, gaps, patches, vias, and so forth.
The elements in a given metastructure array may have a variety of shapes and configurations and be designed to meet certain specified criteria, including, for example, desired beam characteristics for a fixed wireless network operating within the 5G standard. In some examples, the metastructure antenna elements are metamaterial cells in a variety of conductive structures and patterns, such that a received transmission signal is radiated therefrom. Each metamaterial cell may have unique properties. These properties may include a negative permittivity and permeability resulting in a negative refractive index; these structures are commonly referred to as left-handed materials (“LHM”). The use of LHM enables behavior not achieved in classical structures and materials, including interesting effects that may be observed in the propagation of electromagnetic waves, or transmission signals. Metamaterials can be used for several interesting devices in microwave and terahertz engineering, such as antennas, sensors, matching networks, and reflectors, in telecommunications, automotive and vehicular, robotic, biomedical, satellite and other applications. For antennas, metamaterials may be built at scales much smaller than the wavelengths of transmission signals radiated by the metamaterial. Metamaterial properties come from the engineered and designed structures rather than from the base material forming the structures. Precise shape, dimensions, geometry, size, orientation, arrangement and so forth result in the smart properties capable of manipulating electromagnetic waves by blocking, absorbing, enhancing, or bending waves. Active circuit components in RFIC604are able to provide RF signals at multiple steering angles to beam steering antenna602, which then radiates these signals according to their steering angles. The RFIC604includes phase shifting circuitry to provide phase shifts to the beam steering antenna elements in a full 360° direction. The phase shifting circuitry may include phase shifters such as a phase shift network, a vector modulator architecture, a varactor-based phase shifter, and so on. The phase shifting circuitry is fed by feed network606, which is a power divider structure having a plurality of transmission lines for transmitting an RF signal to the phase shifters in RFIC604. In various examples, the BSPA antenna602may be divided into subarrays, such as illustrated inFIGS.7A-B. InFIG.7A, antenna array700is made up of a plurality of metastructure antenna elements which may be dynamically arranged into arrays of multiple elements. In this particular scenario, there are subarrays702-710, where each subarray generates a specific beam for transmission. These subarrays may operate individually or in coordination as a single beamformer. The five subarrays may generate five separate and distinct beams, wherein the shape and direction of each beam is unique. The five separate beams may support transmission of five different and unique transmissions concurrently.FIG.7Billustrates three other subarrays714-718that may be configured in antenna array712. These subarrays are configured by the antenna controller608, which controls the signal(s) input to each of the specific subarrays according to a desired phase shift generated by RFIC604for one or more of the antenna elements within a subarray. The configuration, arrangement and control of the subarrays is as flexible as the antenna controller608and feed network606support.
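Although the text does not give formulas, the progressive phase a controller such as antenna controller608would program across a uniform linear subarray follows standard array theory. A sketch, assuming half-wavelength element spacing (an assumption, not a stated design parameter):

```python
# Standard array-theory phase progression for steering a linear subarray.
import math

def steering_phases_deg(n_elements: int, steer_angle_deg: float,
                        spacing_wavelengths: float = 0.5):
    """Progressive phase: -2*pi*d*sin(theta)/lambda per element."""
    step = -360.0 * spacing_wavelengths * math.sin(math.radians(steer_angle_deg))
    return [round((i * step) % 360.0, 1) for i in range(n_elements)]

# An 8-element subarray steered 30 degrees off boresight:
print(steering_phases_deg(8, 30.0))   # steps of -90 degrees per element
```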
Attention is now directed toFIG.8, which illustrates a process for operating an antenna system in a satellite according to various examples. The process first configures the system for a first transmission (802). The system then determines whether to adjust the power distribution for the first transmission according to a power regulation requirement for the satellite (802), and if not, begins the transmission (812). When the power distribution is to be adjusted, the process determines the transmission output signal power distribution (804). The process then controls the INET portion of the system to configure inputs to the MPA (806), and then controls the ONET to configure transmission outputs from the MPA (808). The process then dynamically controls a BSPA antenna to radiate a plurality of transmission beams to achieve a power regulation requirement for the satellite (810) before beginning the transmission (812). It is appreciated that the previous description of the disclosed examples is provided to enable any person skilled in the art to make or use the present disclosure. Various modifications to these examples will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other examples without departing from the spirit or scope of the disclosure. Thus, the present disclosure is not intended to be limited to the examples shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
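A compact control-flow sketch of the FIG.8process follows; every method name below is a placeholder for whatever interface a real payload exposes, and the stub merely echoes the step numbers:

```python
# Hedged sketch of the FIG. 8 flow; all helper names are assumptions.
def transmit_with_power_regulation(payload, requirement):
    payload.configure_first_transmission()                  # step 802
    if payload.needs_power_adjustment(requirement):         # decision branch
        dist = payload.compute_output_power_distribution()  # step 804
        payload.configure_inet_inputs(dist)                 # step 806
        payload.configure_onet_outputs(dist)                # step 808
        payload.steer_bspa_beams(requirement)               # step 810
    payload.begin_transmission()                            # step 812

class _StubPayload:
    def configure_first_transmission(self): print("802: configure")
    def needs_power_adjustment(self, req): return True
    def compute_output_power_distribution(self): return {"beam1": 0.6, "beam2": 0.4}
    def configure_inet_inputs(self, d): print("806: INET", d)
    def configure_onet_outputs(self, d): print("808: ONET", d)
    def steer_bspa_beams(self, req): print("810: steer beams")
    def begin_transmission(self): print("812: transmit")

transmit_with_power_regulation(_StubPayload(), requirement="EIRP coverage")
```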
11863256
DETAILED DESCRIPTION Exemplary embodiments are described in detail herein, and examples thereof are shown in the accompanying drawings. It should be understood by a person skilled in the art that, unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which the present invention belongs. It should be further understood that terms such as those defined in a dictionary should be understood as having meanings consistent with their meanings in the context of the related technology, and should not be interpreted in an idealized or excessively formal sense, unless the terms are clearly defined herein. An embodiment of the present invention provides a new channel equalization-free single-carrier broadband communication transmission method, i.e., delay alignment modulation. A transmitter deliberately introduces the corresponding delay for the symbol sequence to perform delay compensation, so that all multi-path signal components arrive at the receiver simultaneously and constructively after propagating over the time-dispersive channel, thereby avoiding the inter-symbol interference (ISI) issue in broadband transmission. In addition, the path-based beamforming is performed with the multiple antennas. As a result, the time-dispersive channel can be transformed into the frequency-flat channel by means of delay compensation and the path-based zero-forcing (ZF) beamforming. The method includes the following main content (for a specific implementation, refer to the following detailed description and example embodiment description):a) The number of temporal-resolvable multi-paths, and the delays and channel gain vectors corresponding to the multi-paths, are obtained based on a channel impulse response;b) a transmitter inserts a guard interval for each channel coherence block with duration Tc, and deliberately introduces the corresponding delay for the symbol sequence to perform delay compensation, so that all multi-path signal components arrive at the receiver simultaneously and constructively after propagating over the time-dispersive channel;c) when the number of transmit antennas is no smaller than that of the temporal-resolvable multi-paths, the path-based ZF beamforming can be used to completely eliminate ISI; on the other hand, the low-complexity path-based maximal-ratio transmission (MRT) beamforming or the optimal path-based minimum mean square error (MMSE) beamforming may be used;d) at the end of the moment Tc, the transmitter or the receiver determines whether the communication process ends; and if communication ends, the process is completed; ande) if communication continues, steps a) to d) are repeated until the process ends. Delay alignment modulation leverages the multi-path sparsity of mmWave and Terahertz channels and the abundant spatial dimension brought by large antenna arrays. When the spatial dimension of the transmitter is much larger than the number of temporal-resolvable multi-paths, the per-path delay compensation may be performed, so that all multi-path signal components arrive at the receiver simultaneously and constructively after propagating over the time-dispersive channel. Based on the channel impulse response, the delay alignment modulation method consists of delay compensation and path-based beamforming modules, whose number is equal to that of the temporal-resolvable multi-paths.
An input to the delay alignment modulation method is a single-carrier signal, and an output is the superposed signal after per-path delay compensation and path-based beamforming. In step b), the per-path delay compensation may be achieved by means of a time shift, and the pre-compensation value of each multi-path is equal to the maximum delay among all the multi-paths minus its own delay. According to the antenna configuration of the transmitter and the time-dispersive channel characteristic, the transmitter selects a proper path-based beamforming scheme. When the number of transmit antennas is no smaller than that of the temporal-resolvable multi-paths, path-based ZF beamforming is used to completely eliminate ISI. In this case, the receiver receives the multi-path signal components with an identical delay, whose value is equal to the maximum delay among all the multi-paths. All the multi-path signal components contribute to the improvement of the received signal-to-noise ratio (SNR), which enables low-complexity channel equalization-free single-carrier broadband communication. On the other hand, when the number of transmit antennas is smaller than that of the temporal-resolvable multi-paths, the low-complexity path-based MRT or the optimal path-based MMSE beamforming may be used. In step c), for the path-based ZF beamforming, one orthogonal projection matrix may be constructed for each multi-path, and the channel gain vector of this multi-path is projected into the orthogonal space of all other multi-paths' channel gain vectors, so as to obtain its path-based ZF beamforming vector. For a sparse time-dispersive channel in mmWave/Terahertz communication and/or a large antenna array at the transmitter, the performance of the path-based MRT beamforming approaches that of the path-based ZF beamforming. In this case, the low-complexity MRT beamforming can effectively mitigate the effect of the ISI, thus achieving channel equalization-free single-carrier broadband communication transmission. Within each channel coherence time, delay alignment modulation requires the insertion of one guard interval to avoid ISI across different channel coherence blocks, and the duration of the guard interval is equal to the maximum delay among all the channel coherence blocks. The method is applicable not only to a single-user communication scenario, but also to a multi-user communication scenario. For the multi-user communication scenario, delay alignment modulation may be compatible with different multiple access schemes. FIG.1is a schematic diagram of delay alignment modulation according to an exemplary embodiment, and it can be seen that each temporal-resolvable multi-path may correspond to multiple sub-paths with different AoDs. FIG.2is a schematic architecture diagram of a delay alignment modulation transmitter according to an exemplary embodiment. After per-path delay compensation is performed for the symbol sequence over each multi-path, together with the path-based beamforming, the signal is transmitted. Let $L$ denote the number of temporal-resolvable multi-paths, $h_l$ denote the channel gain vector for the $l$-th multi-path, and $n_l$ represent the corresponding delay. The discrete-time equivalent of the channel impulse response can be expressed as $h^H[n]=\sum_{l=1}^{L} h_l^H\,\delta[n-n_l]$, where $n$ is the symbol index. Let $s[n]$ be the independent and identically distributed (i.i.d.) information-bearing symbols.
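As a minimal numeric illustration of the pre-compensation rule just described (the delay values are arbitrary and the variable names are assumptions of this sketch):

```python
import numpy as np

delays = np.array([2, 5, 9])   # symbol-level path delays n_l (arbitrary example)
n_max = delays.max()
kappa = n_max - delays         # pre-compensation: kappa_l = n_max - n_l
# After delaying path l's symbol stream by kappa_l, every component arrives
# with the common delay kappa_l + n_l = n_max, so the paths add constructively.
assert np.all(kappa + delays == n_max)
print(kappa)                   # -> [7 4 0]
```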
With delay alignment modulation, the transmitted signal is
$$x[n]=\sum_{l=1}^{L} f_l\, s[n-\kappa_l],$$
where $\kappa_l$ is the deliberately introduced delay for the $l$-th multi-path at the transmitter, with $\kappa_l\neq\kappa_{l'}$, $\forall l\neq l'$, and $f_l$ denotes the path-based beamforming vector. The received signal for delay alignment modulation is
$$y[n]=h^H[n]*x[n]+z[n]=\sum_{l=1}^{L} h_l^H f_l\, s[n-\kappa_l-n_l]+\sum_{l=1}^{L}\sum_{l'\neq l}^{L} h_l^H f_{l'}\, s[n-\kappa_{l'}-n_l]+z[n],$$
where $*$ represents the linear convolution operation, and $z[n]$ denotes the additive white Gaussian noise (AWGN). Let $M$ denote the number of transmit antennas, $\mathcal{L}=\{l: l=1,\dots,L\}$ be the set of all temporal-resolvable multi-paths, and $\mathcal{L}_l=\mathcal{L}\setminus\{l\}$ include all other multi-paths excluding multi-path $l$. Let $Q_l$ denote the orthogonal projection matrix used to perform the path-based ZF beamforming for the symbol sequence over multi-path $l$, and $C$ be the interference-plus-noise covariance matrix in the path-based MMSE beamforming. Denote by $T_c$ and $T_s$ the channel coherence time and the symbol duration, respectively, and by $n_c$ the number of single-carrier symbols within each channel coherence time. Let $n_{\min}$ and $n_{\max}$ denote the minimum and maximum delay within the current channel coherence block, and $n_{\mathrm{span}}$ be the channel delay spread. The delay difference between multi-paths $l'$ and $l$ is represented by $\Delta_{l',l}$, and the maximum delay among all the channel coherence blocks is represented by $\tilde{n}_{\max}$. Denote by $P$ and $\sigma^2$ the available power of the transmitter and the power of the AWGN, respectively, and by $I_M$ an identity matrix of dimension $M\times M$. Based on the foregoing definitions, specific implementation steps of the proposed method may be summarized as follows:
(1) obtaining the number of temporal-resolvable multi-paths $L$, and their corresponding delays $\{n_l\}$ and channel gain vectors $\{h_l\}$, based on the channel impulse response;
(2) calculating the minimum delay $n_{\min}=\min_l n_l$ and the maximum delay $n_{\max}=\max_l n_l$ for the time-dispersive channel, where $1\le l\le L$;
(3) calculating the channel delay spread $n_{\mathrm{span}}=n_{\max}-n_{\min}$ of the time-dispersive channel;
(4) setting the pre-compensation delay value of each multi-path as $\kappa_l=n_{\max}-n_l$, where $1\le l\le L$; the received signal for delay alignment modulation is then
$$y[n]=\Big(\sum_{l=1}^{L} h_l^H f_l\Big) s[n-n_{\max}]+\sum_{l=1}^{L}\sum_{l'\neq l}^{L} h_l^H f_{l'}\, s[n-n_{\max}+n_{l'}-n_l]+z[n];$$
(5) selecting a proper path-based beamforming design scheme according to the relationship between the number of transmit antennas $M$ and that of the temporal-resolvable multi-paths $L$; when $M\ge L$, the path-based ZF beamforming is used; for the $l$-th multi-path, let $H_l=[h_1,\dots,h_{l-1},h_{l+1},\dots,h_L]$ denote the matrix formed by the channel gain vectors of all other multi-paths excluding multi-path $l$, and $Q_l=I_M-H_l(H_l^H H_l)^{-1}H_l^H$ be the orthogonal projection matrix; the path-based ZF beamforming vector is set as
$$f_l^{\mathrm{ZF}}=\sqrt{P}\,Q_l h_l\Big/\sqrt{\sum_{l=1}^{L}\|Q_l h_l\|^2},\quad 1\le l\le L;$$
(6) when $M<L$, the transmitter may select the path-based MRT beamforming or the path-based MMSE beamforming; if the low-complexity path-based MRT beamforming is used, step (7) is performed; and if the optimal path-based MMSE beamforming, which maximizes the signal-to-interference-plus-noise ratio (SINR), is used, step (8) is performed;
(7) setting the path-based MRT beamforming vector as
$$f_l^{\mathrm{MRT}}=\sqrt{P}\,h_l\Big/\sqrt{\sum_{l=1}^{L}\|h_l\|^2},\quad 1\le l\le L;$$
(8) calculating the delay difference $\Delta_{l',l}=n_{l'}-n_l$, $\forall l\neq l'$, between multi-paths $l'$ and $l$, where $\Delta_{l',l}\in\{\pm 1,\dots,\pm n_{\mathrm{span}}\}$; since the interfering terms with an identical delay difference correspond to identical symbols, they need to be combined;
for each delay difference $i\in\{\pm 1,\dots,\pm n_{\mathrm{span}}\}$, define the following equivalent channel:
$$g_{l'}^H[i]\triangleq\begin{cases} h_l^H, & \text{if } \exists\, l\in\mathcal{L}_{l'} \text{ s.t. } n_{l'}-n_l=i,\\ 0, & \text{otherwise};\end{cases}$$
the received signal can then be equivalently expressed as
$$y[n]=\Big(\sum_{l=1}^{L} h_l^H f_l\Big) s[n-n_{\max}]+\sum_{\substack{i=-n_{\mathrm{span}}\\ i\neq 0}}^{n_{\mathrm{span}}}\Big(\sum_{l'=1}^{L} g_{l'}^H[i]\, f_{l'}\Big) s[n-n_{\max}+i]+z[n].$$
Let $h=[h_1^T,\dots,h_L^T]^T$ and $g[i]=[g_1^T[i],\dots,g_L^T[i]]^T$; the interference-plus-noise covariance matrix is $C=\sum_{i=-n_{\mathrm{span}},\,i\neq 0}^{n_{\mathrm{span}}} g[i]\,g^H[i]+(\sigma^2/P)\,I$. Based on the MMSE criterion, we have $f^{\mathrm{MMSE}}=\sqrt{P}\,C^{-1}h/\|C^{-1}h\|$, and the path-based MMSE beamforming vector is set as the $l$-th $M$-dimensional sub-vector of $f^{\mathrm{MMSE}}$, i.e., $f_l^{\mathrm{MMSE}}=f^{\mathrm{MMSE}}_{[(l-1)M+1:\,lM]}$, where $1\le l\le L$;
(9) determining the transmitted signal $x[n]$ for delay alignment modulation according to the selected path-based beamforming vectors;
(10) as shown inFIG.3, a guard interval of length $\tilde{n}_{\max}$ is inserted at the beginning of each channel coherence block, with a guard interval overhead of $\tilde{n}_{\max}/n_c$, and transmission of the signals begins;
(11) if the transmitter uses the path-based ZF beamforming, the received signal is
$$y[n]=\Big(\sum_{l=1}^{L} h_l^H f_l^{\mathrm{ZF}}\Big) s[n-n_{\max}]+z[n],$$
in which case channel equalization-free single-carrier transmission is achieved; for the path-based MRT beamforming or the path-based MMSE beamforming, either the complexity is further reduced or the received SINR is improved, at the cost of tolerating some residual ISI;
(12) at the end of the channel coherence time $T_c$, determining whether the communication process ends; if the communication ends, the communication process is completed; and
(13) if communication continues, repeating steps (1) to (12) until the communication ends.
In the foregoing method, by leveraging the multi-path sparsity of mmWave and Terahertz channels and the abundant spatial dimension brought by large antenna arrays, the per-path delay compensation and the path-based beamforming can be effectively performed for the symbol sequence over each multi-path (steps (4) to (8)). Besides, only one guard interval is needed for each channel coherence block to avoid ISI across different channel coherence blocks (step (10)). Compared with OFDM, as shown inFIG.4, the guard interval overhead can be significantly reduced. The foregoing descriptions are merely some implementations of the present invention. It should be noted that a person of ordinary skill in the art may make several improvements or refinements without departing from the principle of the present invention, and such improvements or refinements shall fall within the protection scope of the present invention.
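The following NumPy sketch ties steps (4) and (5) together for the ZF case. It is an illustrative reading of the equations above, with the power normalization reconstructed as shown in step (5); it is not a reference implementation, and all names are assumptions:

```python
import numpy as np

def dam_transmit(symbols, H, delays, P=1.0):
    """Sketch of delay alignment modulation with path-based ZF beamforming.

    symbols: (n_sym,) i.i.d. information-bearing symbols s[n]
    H:       (M, L) matrix whose l-th column is the channel gain vector h_l
    delays:  (L,) integer symbol-level delays n_l of the resolvable paths
    """
    M, L = H.shape
    assert M >= L, "path-based ZF needs at least as many antennas as paths"
    n_max = delays.max()

    # Step (5): Q_l = I_M - H_l (H_l^H H_l)^{-1} H_l^H, where H_l stacks the
    # channel vectors of all paths except path l.
    f = np.zeros((M, L), dtype=complex)
    for l in range(L):
        H_l = np.delete(H, l, axis=1)
        Q_l = np.eye(M) - H_l @ np.linalg.inv(H_l.conj().T @ H_l) @ H_l.conj().T
        f[:, l] = Q_l @ H[:, l]
    # f_l^ZF = sqrt(P) Q_l h_l / sqrt(sum_l ||Q_l h_l||^2): Frobenius norm of f.
    f *= np.sqrt(P) / np.linalg.norm(f)

    # Step (4): pre-compensate each path by kappa_l = n_max - n_l and superpose:
    # x[n] = sum_l f_l s[n - kappa_l]
    n_sym = len(symbols)
    x = np.zeros((M, n_sym + n_max), dtype=complex)
    for l in range(L):
        kappa_l = n_max - delays[l]
        x[:, kappa_l:kappa_l + n_sym] += np.outer(f[:, l], symbols)
    return x
```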
11863257
To facilitate understanding, identical reference numerals have been used, where possible, to designate identical elements that are common to the figures. It is contemplated that elements disclosed in one aspect may be beneficially utilized on other aspects without specific recitation. DETAILED DESCRIPTION Aspects of the present disclosure provide apparatus, methods, processing systems, and computer readable mediums for reducing inter-sector interference encountered by a UE based on feedback provided by the UE. For example, a UE in a first sector may monitor interfering beamformed transmissions from a BS to another UE located in a second sector. The UE in the first sector may provide feedback to the BS with regard to the inter-sector interference (such as the interfering beamformed transmissions), and the BS may take one or more actions to reduce the inter-sector interference based on the feedback received from the UE as further described herein. The following description provides examples, and is not limiting of the scope, applicability, or examples set forth in the claims. Changes may be made in the function and arrangement of elements discussed without departing from the scope of the disclosure. Various examples may omit, substitute, or add various procedures or components as appropriate. For instance, the methods described may be performed in an order different from that described, and various steps may be added, omitted, or combined. Also, features described with respect to some examples may be combined in some other examples. For example, an apparatus may be implemented or a method may be practiced using any number of the aspects set forth herein. In addition, the scope of the disclosure is intended to cover such an apparatus or method which is practiced using other structure, functionality, or structure and functionality in addition to, or other than, the various aspects of the disclosure set forth herein. It should be understood that any aspect of the disclosure disclosed herein may be embodied by one or more elements of a claim. The word “exemplary” is used herein to mean “serving as an example, instance, or illustration.” Any aspect described herein as “exemplary” is not necessarily to be construed as preferred or advantageous over other aspects. The techniques described herein may be used for various wireless communication technologies, such as LTE, CDMA, TDMA, FDMA, OFDMA, SC-FDMA and other networks. The terms “network” and “system” are often used interchangeably. A CDMA network may implement a radio technology such as Universal Terrestrial Radio Access (UTRA), cdma2000, etc. UTRA includes Wideband CDMA (WCDMA) and other variants of CDMA. cdma2000 covers IS-2000, IS-95 and IS-856 standards. A TDMA network may implement a radio technology such as Global System for Mobile Communications (GSM). An OFDMA network may implement a radio technology such as NR (e.g. 5G RA), Evolved UTRA (E-UTRA), Ultra Mobile Broadband (UMB), IEEE 802.11 (Wi-Fi), IEEE 802.16 (WiMAX), IEEE 802.20, Flash-OFDMA, etc. UTRA and E-UTRA are part of Universal Mobile Telecommunication System (UMTS). New Radio (NR) is an emerging wireless communications technology under development in conjunction with the 5G Technology Forum (5GTF). 3GPP Long Term Evolution (LTE) and LTE-Advanced (LTE-A) are releases of UMTS that use E-UTRA. UTRA, E-UTRA, UMTS, LTE, LTE-A and GSM are described in documents from an organization named “3rd Generation Partnership Project” (3GPP). 
cdma2000 and UMB are described in documents from an organization named “3rd Generation Partnership Project 2” (3GPP2). The techniques described herein may be used for the wireless networks and radio technologies mentioned above as well as other wireless networks and radio technologies. For clarity, while aspects may be described herein using terminology commonly associated with 3G and/or 4G wireless technologies, aspects of the present disclosure can be applied in other generation-based communication systems, such as 5G and later, including NR technologies. New radio (NR) access (e.g., 5G technology) may support various wireless communication services, such as enhanced mobile broadband (eMBB) targeting wide bandwidth (e.g., 80 MHz or beyond), millimeter wave (mmW) targeting high carrier frequency (e.g., 25 GHz or beyond), massive machine-type communications (mMTC) targeting non-backward compatible MTC techniques, and/or mission critical targeting ultra-reliable low-latency communications (URLLC). These services may include latency and reliability requirements. These services may also have different transmission time intervals (TTI) to meet respective quality of service (QoS) requirements. In addition, these services may co-exist in the same subframe. Example Wireless Communications System FIG.1illustrates an example wireless communication network100in which aspects of the present disclosure may be performed. The wireless communication network100may be a New Radio (NR) or 5G network that provides UE assisted reduction of inter-sector interference. For example, the UE120amay provide feedback to the BS110awith regard to inter-sector interference encountered by the UE120a. In certain aspects, the inter-sector interference encountered by the UE120amay be from beamformed transmissions from the BS110ato the UE120b. The BS110amay take one or more actions as further described herein to reduce the inter-sector interference based on the feedback from the UE120a. For instance, the BS110amay identify the beam that is causing the interference and adjust parameters associated with the beam (e.g., a transmit power of the beam) to reduce the inter-sector interference. As illustrated inFIG.1, the wireless network100may include a number of base stations (BSs)110and other network entities. A BS may be a station that communicates with user equipment (UEs). Each BS110may provide communication coverage for a particular geographic area. In 3GPP, the term “cell” can refer to a coverage area of a Node B (NB) and/or a Node B subsystem serving this coverage area, depending on the context in which the term is used. In NR systems, the term “cell” and next generation NodeB (gNB), new radio base station (NR BS), 5G NB, access point (AP), or transmission reception point (TRP) may be interchangeable. In some examples, a cell may not necessarily be stationary, and the geographic area of the cell may move according to the location of a mobile BS. In some examples, the base stations may be interconnected to one another and/or to one or more other base stations or network nodes (not shown) in wireless communication network100through various types of backhaul interfaces, such as a direct physical connection, a wireless connection, a virtual network, or the like using any suitable transport network. In general, any number of wireless networks may be deployed in a given geographic area. Each wireless network may support a particular radio access technology (RAT) and may operate on one or more frequencies.
A RAT may also be referred to as a radio technology, an air interface, etc. A frequency may also be referred to as a carrier, a subcarrier, a frequency channel, a tone, a subband, etc. Each frequency may support a single RAT in a given geographic area in order to avoid interference between wireless networks of different RATs. In some cases, NR or 5G RAT networks may be deployed. A base station (BS) may provide communication coverage for a macro cell, a pico cell, a femto cell, and/or other types of cells. A macro cell may cover a relatively large geographic area (e.g., several kilometers in radius) and may allow unrestricted access by UEs with service subscription. A pico cell may cover a relatively small geographic area and may allow unrestricted access by UEs with service subscription. A femto cell may cover a relatively small geographic area (e.g., a home) and may allow restricted access by UEs having an association with the femto cell (e.g., UEs in a Closed Subscriber Group (CSG), UEs for users in the home, etc.). A BS for a macro cell may be referred to as a macro BS. A BS for a pico cell may be referred to as a pico BS. A BS for a femto cell may be referred to as a femto BS or a home BS. In the example shown inFIG.1, the BSs110a,110band110cmay be macro BSs for the macro cells102a,102band102c, respectively. The BS110xmay be a pico BS for a pico cell102x. The BSs110yand110zmay be femto BSs for the femto cells102yand102z, respectively. A BS may support one or multiple (e.g., three) cells. Wireless communication network100may also include relay stations. A relay station is a station that receives a transmission of data and/or other information from an upstream station (e.g., a BS or a UE) and sends a transmission of the data and/or other information to a downstream station (e.g., a UE or a BS). A relay station may also be a UE that relays transmissions for other UEs. In the example shown inFIG.1, a relay station110rmay communicate with the BS110aand a UE120rin order to facilitate communication between the BS110aand the UE120r. A relay station may also be referred to as a relay BS, a relay, etc. Wireless network100may be a heterogeneous network that includes BSs of different types, e.g., macro BS, pico BS, femto BS, relays, etc. These different types of BSs may have different transmit power levels, different coverage areas, and different impact on interference in the wireless network100. For example, macro BS may have a high transmit power level (e.g., 20 Watts) whereas pico BS, femto BS, and relays may have a lower transmit power level (e.g., 1 Watt). Wireless communication network100may support synchronous or asynchronous operation. For synchronous operation, the BSs may have similar frame timing, and transmissions from different BSs may be approximately aligned in time. For asynchronous operation, the BSs may have different frame timing, and transmissions from different BSs may not be aligned in time. The techniques described herein may be used for both synchronous and asynchronous operation. A network controller130may couple to a set of BSs and provide coordination and control for these BSs. The network controller130may communicate with the BSs110via a backhaul. The BSs110may also communicate with one another (e.g., directly or indirectly) via wireless or wireline backhaul. The UEs120(e.g.,120x,120y, etc.) may be dispersed throughout the wireless network100, and each UE may be stationary or mobile.
A UE may also be referred to as a mobile station, a terminal, an access terminal, a subscriber unit, a station, a Customer Premises Equipment (CPE), a cellular phone, a smart phone, a personal digital assistant (PDA), a wireless modem, a wireless communication device, a handheld device, a laptop computer, a cordless phone, a wireless local loop (WLL) station, a tablet computer, a camera, a gaming device, a netbook, a smartbook, an ultrabook, an appliance, a medical device or medical equipment, a biometric sensor/device, a wearable device such as a smart watch, smart clothing, smart glasses, a smart wrist band, smart jewelry (e.g., a smart ring, a smart bracelet, etc.), an entertainment device (e.g., a music device, a video device, a satellite radio, etc.), a vehicular component or sensor, a smart meter/sensor, industrial manufacturing equipment, a global positioning system device, or any other suitable device that is configured to communicate via a wireless or wired medium. Some UEs may be considered machine-type communication (MTC) devices or evolved MTC (eMTC) devices. MTC and eMTC UEs include, for example, robots, drones, remote devices, sensors, meters, monitors, location tags, etc., that may communicate with a BS, another device (e.g., remote device), or some other entity. A wireless node may provide, for example, connectivity for or to a network (e.g., a wide area network such as the Internet or a cellular network) via a wired or wireless communication link. Some UEs may be considered Internet-of-Things (IoT) devices, which may be narrowband IoT (NB-IoT) devices. Certain wireless networks (e.g., LTE) utilize orthogonal frequency division multiplexing (OFDM) on the downlink and single-carrier frequency division multiplexing (SC-FDM) on the uplink. OFDM and SC-FDM partition the system bandwidth into multiple (K) orthogonal subcarriers, which are also commonly referred to as tones, bins, etc. Each subcarrier may be modulated with data. In general, modulation symbols are sent in the frequency domain with OFDM and in the time domain with SC-FDM. The spacing between adjacent subcarriers may be fixed, and the total number of subcarriers (K) may be dependent on the system bandwidth. For example, the spacing of the subcarriers may be 15 kHz and the minimum resource allocation (called a “resource block” (RB)) may be 12 subcarriers (or 180 kHz). Consequently, the nominal Fast Fourier Transform (FFT) size may be equal to 128, 256, 512, 1024 or 2048 for system bandwidth of 1.25, 2.5, 5, 10, or 20 megahertz (MHz), respectively. The system bandwidth may also be partitioned into subbands. For example, a subband may cover 1.08 MHz (i.e., 6 resource blocks), and there may be 1, 2, 4, 8, or 16 subbands for system bandwidth of 1.25, 2.5, 5, 10 or 20 MHz, respectively. While aspects of the examples described herein may be associated with LTE technologies, aspects of the present disclosure may be applicable with other wireless communications systems, such as NR. NR may utilize OFDM with a cyclic prefix (CP) on the uplink and downlink and include support for half-duplex operation using TDD. Beamforming may be supported and beam direction may be dynamically configured. MIMO transmissions with precoding may also be supported. MIMO configurations in the DL may support up to 8 transmit antennas with multi-layer DL transmissions up to 8 streams and up to 2 streams per UE. Aggregation of multiple cells may be supported with up to 8 serving cells.
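As a quick illustration of the numerology recited above (the values are copied from the text; the dictionary layout and variable names are just for this sketch):

```python
SUBCARRIER_SPACING_HZ = 15_000          # 15 kHz subcarrier spacing
RB_SUBCARRIERS = 12                     # one resource block = 12 subcarriers
rb_bandwidth_hz = RB_SUBCARRIERS * SUBCARRIER_SPACING_HZ   # 180 kHz per RB

# System bandwidth (MHz) -> (nominal FFT size, number of 1.08 MHz subbands)
NUMEROLOGY = {1.25: (128, 1), 2.5: (256, 2), 5: (512, 4),
              10: (1024, 8), 20: (2048, 16)}

fft_size, num_subbands = NUMEROLOGY[10]  # 10 MHz -> 1024-point FFT, 8 subbands
```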
In some examples, access to the air interface may be scheduled, wherein a scheduling entity (e.g., a base station) allocates resources for communication among some or all devices and equipment within its service area or cell. The scheduling entity may be responsible for scheduling, assigning, reconfiguring, and releasing resources for one or more subordinate entities. That is, for scheduled communication, subordinate entities utilize resources allocated by the scheduling entity. Base stations are not the only entities that may function as a scheduling entity. In some examples, a UE may function as a scheduling entity and may schedule resources for one or more subordinate entities (e.g., one or more other UEs), and the other UEs may utilize the resources scheduled by the UE for wireless communication. In some examples, a UE may function as a scheduling entity in a peer-to-peer (P2P) network, and/or in a mesh network. In a mesh network example, UEs may communicate directly with one another in addition to communicating with a scheduling entity. InFIG.1, a solid line with double arrows indicates desired transmissions between a UE and a serving BS, which is a BS designated to serve the UE on the downlink and/or uplink. A finely dashed line with double arrows indicates interfering transmissions between a UE and a BS. FIG.2illustrates example components of BS110and UE120(as depicted inFIG.1), which may be used to implement aspects of the present disclosure. For example, antennas252, processors266,258,264, and/or controller/processor280of the UE120and/or antennas234, processors220,230,238, and/or controller/processor240of the BS110may be used to perform the various techniques and methods described herein (such as the operations depicted inFIGS.3and4). For example, the UE120may provide feedback to the BS110with regard to inter-sector interference encountered by the UE120. In certain aspects, the inter-sector interference encountered by the UE120may be from beamformed transmissions from the BS110to another UE located in a different sector than UE120. The BS110may take one or more actions as further described herein to reduce the inter-sector interference based on the feedback from the UE120. For instance, the BS110may identify the beam that is causing the interference and adjust parameters associated with the beam (e.g., a transmit power of the beam) to reduce the inter-sector interference. At the BS110, a transmit processor220may receive data from a data source212and control information from a controller/processor240. The control information may be for the physical broadcast channel (PBCH), physical control format indicator channel (PCFICH), physical hybrid ARQ indicator channel (PHICH), physical downlink control channel (PDCCH), group common PDCCH (GC PDCCH), etc. The data may be for the physical downlink shared channel (PDSCH), etc. The processor220may process (e.g., encode and symbol map) the data and control information to obtain data symbols and control symbols, respectively. The processor220may also generate reference symbols, e.g., for the primary synchronization signal (PSS), secondary synchronization signal (SSS), and cell-specific reference signal (CRS). A transmit (TX) multiple-input multiple-output (MIMO) processor230may perform spatial processing (e.g., precoding) on the data symbols, the control symbols, and/or the reference symbols, if applicable, and may provide output symbol streams to the modulators (MODs)232athrough232t.
Each modulator232may process a respective output symbol stream (e.g., for OFDM, etc.) to obtain an output sample stream. Each modulator may further process (e.g., convert to analog, amplify, filter, and upconvert) the output sample stream to obtain a downlink signal. Downlink signals from modulators232athrough232tmay be transmitted via the antennas234athrough234t, respectively. At the UE120, the antennas252athrough252rmay receive the downlink signals from the base station110and may provide received signals to the demodulators (DEMODs) in transceivers254athrough254r, respectively. Each demodulator254may condition (e.g., filter, amplify, downconvert, and digitize) a respective received signal to obtain input samples. Each demodulator may further process the input samples (e.g., for OFDM, etc.) to obtain received symbols. A MIMO detector256may obtain received symbols from all the demodulators254athrough254r, perform MIMO detection on the received symbols if applicable, and provide detected symbols. A receive processor258may process (e.g., demodulate, deinterleave, and decode) the detected symbols, provide decoded data for the UE120to a data sink260, and provide decoded control information to a controller/processor280. On the uplink, at UE120, a transmit processor264may receive and process data (e.g., for the physical uplink shared channel (PUSCH)) from a data source262and control information (e.g., for the physical uplink control channel (PUCCH)) from the controller/processor280. The transmit processor264may also generate reference symbols for a reference signal (e.g., for the sounding reference signal (SRS)). The symbols from the transmit processor264may be precoded by a TX MIMO processor266if applicable, further processed by the modulators in transceivers254athrough254r(e.g., for SC-FDM, etc.), and transmitted to the base station110. At the BS110, the uplink signals from the UE120may be received by the antennas234, processed by the demodulators232, detected by a MIMO detector236if applicable, and further processed by a receive processor238to obtain decoded data and control information sent by the UE120. The receive processor238may provide the decoded data to a data sink239and the decoded control information to the controller/processor240. The controllers/processors240and280may direct the operation at the base station110and the UE120, respectively. The processor240and/or other processors and modules at the BS110may perform or direct the execution of processes for the techniques described herein. The memories242and282may store data and program codes for BS110and UE120, respectively. A scheduler244may schedule UEs for data transmission on the downlink and/or uplink. Example User Equipment Assisted Inter-Sector Interference Avoidance In certain wireless communication systems (e.g., NR/mmWave networks), analog beamforming is used to improve the performance of transmissions (e.g., millimeter transmissions). The BS and UE may use directional beams to establish links (e.g., mmWave links). As an example, the BS may use large antenna arrays, which enable the BS to focus a tight/narrow analog beam directed at the UE. Cellular networks can increase network DL capacity if the BS transmits to multiple UEs simultaneously. Communicating with multiple UEs simultaneously is referred to as Multi-User MIMO (MU-MIMO). MU-MIMO enables the BS to serve simultaneous DL transmissions on different directional beams. In NR, with analog beamforming, MU-MIMO uses “orthogonal” resources in the spatial domain.
With multiple simultaneous DL transmissions in the network, the UEs may encounter increased interference, such as inter-sector interference, intra-sector interference, or inter-cell interference. The present disclosure describes techniques for reducing at least inter-sector interference encountered by a UE using feedback from the UE. FIG.3is a flow diagram illustrating example operations300that may be performed, for example, by a base station (e.g., BS110), for reducing downlink inter-sector interference, in accordance with certain aspects of the present disclosure. Operations300may begin, at302, where the BS transmits, in a multi-user multiple-input and multiple-output (MU-MIMO) mode, first beamformed transmissions using a first beam to a first user equipment (UE) in a first sector and second beamformed transmissions using a second beam to a second UE in a second sector, wherein the BS is configured to control a plurality of sectors comprising the first sector and the second sector. At304, the BS receives, from the first UE, a feedback report indicating inter-sector interference encountered by the first UE in the first sector due to the second beamformed transmissions. At306, the BS takes one or more actions based on the feedback report to reduce the inter-sector interference encountered by the first UE in the first sector. FIG.4is a flow diagram illustrating example operations400that may be performed, for example, by a user equipment (e.g., UE120), for reducing inter-sector interference, in accordance with certain aspects of the present disclosure. Operations400may begin, at402, where the UE receives, in a first sector, first beamformed transmissions transmitted via a first beam from a base station (BS). At404, the UE receives, in the first sector, interfering beamformed transmissions transmitted via a second beam from the BS, the interfering beamformed transmissions being associated with the second beam and with a second sector, wherein the BS is configured to control a plurality of sectors comprising the first sector and the second sector. At406, the UE generates a feedback report indicating inter-sector interference encountered by the first UE in the first sector due to the interfering beamformed transmissions. At408, the UE reports the feedback report to the BS. In certain aspects, each of the beamformed transmissions may provide or indicate a beam index associated with the beam and/or a sector index associated with the sector. The beam index may be a unique identifier linked to the beam used for one of the beamformed transmissions, and the sector index may be a unique identifier for the sector from where the beamformed transmission was transmitted. In aspects, the beam index and sector index may be explicitly or implicitly indicated in the beamformed transmissions. For example, control signaling information (e.g., radio resource control (RRC) message, medium access control (MAC) control element (MAC-CE) message, or a downlink control information (DCI) message) may be encoded in the beamformed transmissions with the beam index and sector index. As another example, the beam index and/or sector index may be implicitly indicated based on phase or frequency variations of the signal used to transmit the beamformed transmission. In certain aspects, the BS may indicate, to the UE (e.g., the first UE of operations300), the beamformed transmissions (e.g., the second beamformed transmissions of operations300) that are simultaneously transmitted with transmissions to the UE. 
The indication of the beamformed transmissions may be provided by a scheduling control message via a DCI message, MAC-CE message, or RRC message, for example. For instance, the BS may transmit, to the UE, a scheduling control message that indicates a beam index associated with each of the beamformed transmissions and a sector index associated with each of the beamformed transmissions. As another example, at302, the BS may transmit, to the first UE, the first beamformed transmissions with a scheduling control message that indicates a first beam index associated with the first beam, a first sector index associated with the first sector and the first beam, a second beam index associated with the second beam, and a second sector index associated with the second sector and the second beam. The beamformed transmissions may include various types of transmissions. As examples, the beamformed transmissions may include a beam training transmission, a beam management transmission, a control transmission, a scheduling transmission, or a data transmission. That is, the beamformed transmissions may be from a beam training operation, beam management operation, or a data link. The beamformed transmissions may also include various types of synchronization signals or reference signals, such as a channel state information reference signal (CSI-RS) or a demodulation reference signal (DM-RS). In aspects, the sector antennas may be arranged in different azimuthal orientations (e.g., 120° spacings). For example, the first beamformed transmissions at402may be transmitted via a first sector antenna that is arranged in a different azimuthal orientation than a second sector antenna used for transmitting the interfering beamformed transmissions. The feedback report may provide information related to the downlink inter-sector interference encountered by the UE. The UE may determine the beams to include in the feedback report, at406, based on the scheduling control message received at402and/or the interfering beamformed transmissions received at404. For instance, the feedback report may provide one or more received signal powers of interfering beamformed transmissions, a beam index associated with each of the received signal powers, and/or a sector index associated with each of received signal powers. The received signal power may be an indication of the signal power of interfering beamformed transmission measured by the UE such as a received signal strength indication (RSSI) or a reference signal received power (RSRP). The beam index and/or sector index may be identified by the UE from indexes included in the beamformed transmissions as described herein. In aspects, the one or more received signal powers may be measures of signal power (e.g., RSSI or RSRP) of the interfering beamformed transmissions, based on the beam index and/or sector index, received by the UE. In certain aspects, the UE may also generate unique identifiers for the beam index and/or sector index. The feedback report may be transmitted via various types of messages, including an acknowledgment (ACK) message, a negative ACK (NACK) message, a channel state information (CSI) report, a random access channel (RACH) message, an interference measurement report, or a beam management report. Upon receiving a feedback report, the BS may perform various actions to reduce the downlink inter-sector interference encountered by the UE. 
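Purely for illustration, the feedback fields described above (received signal power, beam index, sector index) might be grouped into a report structure along the following lines; all field names, the RSRP units, and the container layout are assumptions of this sketch, not part of the disclosure:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class InterferenceFeedbackEntry:
    beam_index: int     # unique identifier of the interfering beam
    sector_index: int   # sector from which the beamformed transmission came
    rsrp_dbm: float     # measured received signal power (e.g., RSRP)

@dataclass
class FeedbackReport:
    reporting_ue_id: int
    entries: List[InterferenceFeedbackEntry]  # one entry per interfering beam

# Example: a UE reporting two interfering beams from sectors 1 and 2.
report = FeedbackReport(
    reporting_ue_id=120,
    entries=[InterferenceFeedbackEntry(beam_index=3, sector_index=1, rsrp_dbm=-92.0),
             InterferenceFeedbackEntry(beam_index=7, sector_index=2, rsrp_dbm=-88.5)])
```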
The BS may identify the beam that is causing the interference and adjust parameters associated with the beam to reduce the inter-sector interference. For instance, the BS may adjust the transmit power of a beam identified in the feedback report, select a different UE for transmission, switch to single user-MIMO (SU-MIMO) mode, or switch to another beam than the interfering beam. In SU-MIMO mode, the BS may communicate with only a single UE at a time on a particular frequency resource (e.g., a frequency band or subband), whereas in MU-MIMO mode, the BS may communicate with multiple UEs simultaneously on the same frequency resource. The BS may perform the one or more actions if the received signal power of an interfering transmission is greater than or equal to a threshold value. As an example, the BS may reduce the transmit power for an interfering beamformed transmission with a reported signal power greater than the threshold. This may reduce the signal strength of the interfering transmission encountered by the UE. As another example, the BS may switch to SU-MIMO to communicate with the UE encountering inter-sector interference. The BS may take any combination of actions as described herein to reduce the inter-sector interference encountered by the UE. In aspects, the BS may select a different UE to send transmissions to than the UE associated with the interfering transmission. The BS may schedule the interfering transmission for a different time slot than that used for the UE encountering the inter-sector interference and also replace transmissions to the interfering UE with transmissions to a UE that might not interfere. This may eliminate the inter-sector interference associated with the interfering transmission and also allow the BS to remain in MU-MIMO mode. For example, the BS may select a third UE in the second sector of operations300. The BS may transmit, in the MU-MIMO mode, the first beamformed transmissions using the first beam to the first UE in the first sector and third beamformed transmissions using a third beam to the third UE in the second sector at a first time period. Then, the BS may transmit, in the MU-MIMO mode, the second beamformed transmissions using the second beam to the second UE in the second sector at a second time period that is different than the first time period. In aspects, the BS may identify that there are other beams available for communicating with the interfering UE than the beam that is causing the inter-sector interference. The BS may transmit the beamformed transmission using a different beam than the beam causing inter-sector interference. For instance, the BS may switch to a secondary beam of lower quality to serve a UE instead of a primary beam of higher quality. As used herein, a primary beam may be a beam having higher signal quality than another beam, and a secondary beam may be a beam having lower signal quality than another beam. The BS may select the secondary beam if the beam has a quality greater than or equal to a threshold quality value. As another example, the BS may transmit to the second UE of operations300using a different beam than the second beam for the second beamformed transmissions. FIG.5Aillustrates an example coverage cell for a base station where a UE is encountering inter-sector interference, in accordance with certain aspects of the present disclosure. As shown, the BS510has a coverage cell502partitioned into three sectors having a first sector504, a second sector506, and a third sector508.
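A hedged sketch of the threshold-based mitigation logic described above follows, reusing the FeedbackReport sketch shown earlier. The threshold value, the helper-method names on the hypothetical bs object, and the order in which actions are attempted are all assumptions of this sketch:

```python
RSRP_THRESHOLD_DBM = -95.0  # assumed interference threshold, not from the disclosure

def mitigate_inter_sector_interference(bs, report):
    """Apply the mitigation options described above to each reported beam."""
    for entry in report.entries:
        if entry.rsrp_dbm < RSRP_THRESHOLD_DBM:
            continue                                       # interference tolerable
        if bs.can_reduce_power(entry.beam_index):
            bs.reduce_tx_power(entry.beam_index)           # weaken interfering beam
        elif bs.has_alternate_beam(entry.beam_index):
            bs.switch_to_secondary_beam(entry.beam_index)  # lower-quality beam
        elif bs.can_reschedule(entry.beam_index):
            bs.schedule_in_other_slot(entry.beam_index)    # different time slot
        else:
            bs.switch_to_su_mimo()                         # serve one UE at a time
```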
The sectors504,506,508may be linked to antennas arranged in different azimuthal orientations (e.g., 120° spacing). In this example, the cell502provides links to UEs including a first UE520a, a second UE520b, a third UE520c, a fourth UE520d, and a fifth UE520e. The first UE520amay receive, in the first sector504, beamformed downlink transmissions on a first beam530aand interfering beamformed downlink transmissions on a second beam530band a third beam530cused to communicate with second and third UEs520b,520c, respectively. As illustrated, the second and third beams530b,530cmay be directed to the second and third UEs520b,520cin the second and third sectors506,508, respectively, but the interfering beamformed transmissions on these beams may reflect into the first sector504causing inter-sector interference with the first UE520a. The first UE520amay generate a feedback report indicating inter-sector interference encountered by the first UE520ain the first sector504due to the interfering beamformed transmissions. The BS510may take one or more actions based on the feedback report to reduce the inter-sector interference encountered by the first UE520ain the first sector504. FIG.5Billustrates an example coverage cell for a base station where various actions are taken to reduce the inter-sector interference encountered by a UE, in accordance with certain aspects of the present disclosure. The BS510may reduce the transmit power of the beamformed transmissions on the third beam530c. As another example, the BS510may select to transmit to the fourth and/or fifth UEs520d,520eon beams530d,530e, which are directed away from the first UE520a, during the same time period used to transmit to the first UE520a. As the beams530dand530eare directed away from the first UE520a, the first UE520amay not encounter inter-sector interference from such beams. The BS510may also select a different beam to transmit to the third UE520c, such as the secondary beam532c. In this example, the secondary beam532cmay have a lower quality than the primary beam530c. FIG.6illustrates a communications device600(e.g., BS110) that may include various components (e.g., corresponding to means-plus-function components) configured to perform operations for the techniques disclosed herein, such as the operations illustrated inFIG.3. The communications device600includes a processing system602coupled to a transceiver608. The transceiver608(e.g., a transmitter and/or receiver) is configured to transmit and/or receive signals for the communications device600via an antenna610, such as the various signals described herein. The processing system602may be configured to perform processing functions for the communications device600, including processing signals received and/or to be transmitted by the communications device600. The processing system602includes a processor604coupled to a computer-readable medium/memory612via a bus606. In certain aspects, the computer-readable medium/memory612is configured to store instructions that when executed by processor604, cause the processor604to perform the operations illustrated inFIG.3, or other operations for performing the various techniques discussed herein. In certain aspects, the processing system602may further include a transmit component614for performing the operations illustrated inFIG.3. Additionally, the processing system602may include a receive component616for performing the operations illustrated inFIG.3.
Additionally, the processing system602may include a taking action component618for performing the operations illustrated inFIG.3. Additionally, the processing system602may include a controlling component620for performing the operations illustrated inFIG.3. The transmit component614, receive component616, taking action component618, and controlling component620may be coupled to the processor604via bus606. In certain aspects, the transmit component614, receive component616, taking action component618, and controlling component620may be hardware circuits. In certain aspects, the transmit component614, receive component616, taking action component618, and controlling component620may be software components that are executed and run on processor604. FIG.7illustrates a communications device700(e.g., UE120) that may include various components (e.g., corresponding to means-plus-function components) configured to perform operations for the techniques disclosed herein, such as the operations illustrated inFIG.4. The communications device700includes a processing system702coupled to a transceiver708. The transceiver708(e.g., a transmitter and/or receiver) is configured to transmit and/or receive signals for the communications device700via an antenna710, such as the various signals described herein. The processing system702may be configured to perform processing functions for the communications device700, including processing signals received and/or to be transmitted by the communications device700. The processing system702includes a processor704coupled to a computer-readable medium/memory712via a bus706. In certain aspects, the computer-readable medium/memory712is configured to store instructions that when executed by processor704, cause the processor704to perform the operations illustrated inFIG.4, or other operations for performing the various techniques discussed herein. In certain aspects, the processing system702may further include a transmit component714for performing the operations illustrated inFIG.4. Additionally, the processing system702may include a receive component716for performing the operations illustrated inFIG.4. Additionally, the processing system702may include a generating component718for performing the operations illustrated inFIG.4. Additionally, the processing system702may include a reporting component720for performing the operations illustrated inFIG.4. The transmit component714, receive component716, generating component718, and reporting component720may be coupled to the processor704via bus706. In certain aspects, the transmit component714, receive component716, generating component718, and reporting component720may be hardware circuits. In certain aspects, the transmit component714, receive component716, generating component718, and reporting component720may be software components that are executed and run on processor704. The methods disclosed herein comprise one or more steps or actions for achieving the methods. The method steps and/or actions may be interchanged with one another without departing from the scope of the claims. In other words, unless a specific order of steps or actions is specified, the order and/or use of specific steps and/or actions may be modified without departing from the scope of the claims. As used herein, a phrase referring to “at least one of” a list of items refers to any combination of those items, including single members.
As an example, “at least one of: a, b, or c” is intended to cover a, b, c, a-b, a-c, b-c, and a-b-c, as well as any combination with multiples of the same element (e.g., a-a, a-a-a, a-a-b, a-a-c, a-b-b, a-c-c, b-b, b-b-b, b-b-c, c-c, and c-c-c or any other ordering of a, b, and c). As used herein, the term “determining” encompasses a wide variety of actions. For example, “determining” may include calculating, computing, processing, deriving, investigating, looking up (e.g., looking up in a table, a database or another data structure), ascertaining and the like. Also, “determining” may include receiving (e.g., receiving information), accessing (e.g., accessing data in a memory) and the like. Also, “determining” may include resolving, selecting, choosing, establishing and the like. The previous description is provided to enable any person skilled in the art to practice the various aspects described herein. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects. Thus, the claims are not intended to be limited to the aspects shown herein, but are to be accorded the full scope consistent with the language of the claims, wherein reference to an element in the singular is not intended to mean “one and only one” unless specifically so stated, but rather “one or more.” Unless specifically stated otherwise, the term “some” refers to one or more. All structural and functional equivalents to the elements of the various aspects described throughout this disclosure that are known or later come to be known to those of ordinary skill in the art are expressly incorporated herein by reference and are intended to be encompassed by the claims. Moreover, nothing disclosed herein is intended to be dedicated to the public regardless of whether such disclosure is explicitly recited in the claims. No claim element is to be construed under the provisions of 35 U.S.C. § 112(f) unless the element is expressly recited using the phrase “means for” or, in the case of a method claim, the element is recited using the phrase “step for.” The various operations of methods described above may be performed by any suitable means capable of performing the corresponding functions. The means may include various hardware and/or software component(s) and/or module(s), including, but not limited to a circuit, an application specific integrated circuit (ASIC), or processor. Generally, where there are operations illustrated in figures, those operations may have corresponding counterpart means-plus-function components. The various illustrative logical blocks, modules and circuits described in connection with the present disclosure may be implemented or performed with a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device (PLD), discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but in the alternative, the processor may be any commercially available processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.
If implemented in hardware, an example hardware configuration may comprise a processing system in a wireless node. The processing system may be implemented with a bus architecture. The bus may include any number of interconnecting buses and bridges depending on the specific application of the processing system and the overall design constraints. The bus may link together various circuits including a processor, machine-readable media, and a bus interface. The bus interface may be used to connect a network adapter, among other things, to the processing system via the bus. The network adapter may be used to implement the signal processing functions of the PHY layer. In the case of a user equipment120(seeFIG.1), a user interface (e.g., keypad, display, mouse, joystick, etc.) may also be connected to the bus. The bus may also link various other circuits such as timing sources, peripherals, voltage regulators, power management circuits, and the like, which are well known in the art, and therefore, will not be described any further. The processor may be implemented with one or more general-purpose and/or special-purpose processors. Examples include microprocessors, microcontrollers, DSP processors, and other circuitry that can execute software. Those skilled in the art will recognize how best to implement the described functionality for the processing system depending on the particular application and the overall design constraints imposed on the overall system. If implemented in software, the functions may be stored or transmitted over as one or more instructions or code on a computer readable medium. Software shall be construed broadly to mean instructions, data, or any combination thereof, whether referred to as software, firmware, middleware, microcode, hardware description language, or otherwise. Computer-readable media include both computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. The processor may be responsible for managing the bus and general processing, including the execution of software modules stored on the machine-readable storage media. A computer-readable storage medium may be coupled to a processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. By way of example, the machine-readable media may include a transmission line, a carrier wave modulated by data, and/or a computer readable storage medium with instructions stored thereon separate from the wireless node, all of which may be accessed by the processor through the bus interface. Alternatively, or in addition, the machine-readable media, or any portion thereof, may be integrated into the processor, such as the case may be with cache and/or general register files. Examples of machine-readable storage media may include, by way of example, RAM (Random Access Memory), flash memory, ROM (Read Only Memory), PROM (Programmable Read-Only Memory), EPROM (Erasable Programmable Read-Only Memory), EEPROM (Electrically Erasable Programmable Read-Only Memory), registers, magnetic disks, optical disks, hard drives, or any other suitable storage medium, or any combination thereof. The machine-readable media may be embodied in a computer-program product. 
A software module may comprise a single instruction, or many instructions, and may be distributed over several different code segments, among different programs, and across multiple storage media. The computer-readable media may comprise a number of software modules. The software modules include instructions that, when executed by an apparatus such as a processor, cause the processing system to perform various functions. The software modules may include a transmission module and a receiving module. Each software module may reside in a single storage device or be distributed across multiple storage devices. By way of example, a software module may be loaded into RAM from a hard drive when a triggering event occurs. During execution of the software module, the processor may load some of the instructions into cache to increase access speed. One or more cache lines may then be loaded into a general register file for execution by the processor. When referring to the functionality of a software module below, it will be understood that such functionality is implemented by the processor when executing instructions from that software module. Also, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared (IR), radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray® disc where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Thus, in some aspects computer-readable media may comprise non-transitory computer-readable media (e.g., tangible media). In addition, for other aspects computer-readable media may comprise transitory computer-readable media (e.g., a signal). Combinations of the above should also be included within the scope of computer-readable media. Thus, certain aspects may comprise a computer program product for performing the operations presented herein. For example, such a computer program product may comprise a computer-readable medium having instructions stored (and/or encoded) thereon, the instructions being executable by one or more processors to perform the operations described herein, for example, instructions for performing the operations described herein and illustrated inFIGS.3and4. Further, it should be appreciated that modules and/or other appropriate means for performing the methods and techniques described herein can be downloaded and/or otherwise obtained by a user terminal and/or base station as applicable. For example, such a device can be coupled to a server to facilitate the transfer of means for performing the methods described herein. Alternatively, various methods described herein can be provided via storage means (e.g., RAM, ROM, a physical storage medium such as a compact disc (CD) or floppy disk, etc.), such that a user terminal and/or base station can obtain the various methods upon coupling or providing the storage means to the device. Moreover, any other suitable technique for providing the methods and techniques described herein to a device can be utilized. 
It is to be understood that the claims are not limited to the precise configuration and components illustrated above. Various modifications, changes and variations may be made in the arrangement, operation and details of the methods and apparatus described above without departing from the scope of the claims.
11863258
DETAILED DESCRIPTION Systems and techniques are disclosed herein that enable machine learning and deployment of communication over an impaired RF channel using multiple-antenna transceivers. In some implementations, a transmitter implements multiple transmit antennas to send multiple signals over the RF channel, and a receiver implements multiple receive antennas to receive multiple signals over the RF channel. The number of transmit antennas at the transmitter and the number of receive antennas at the receiver may, in general, be the same number or different numbers, and each may be at least one. In a wireless communications scenario (e.g., cellular, mesh network, optical, acoustic, etc.), each receive antenna receives a signal that represents an aggregated reception of the signals that were transmitted by the multiple transmit antennas, having been mixed together and altered by transmission through the RF channel. In general, such multi-antenna communication is referred to as multi-input-multi-output (MIMO) communication. At least one machine-learning network may be implemented in at least one of the transmitter or the receiver of the MIMO communication system. For example, in some implementations, the transmitter includes a machine-learning encoder network that is trained to encode information as a signal that is transmitted over a MIMO channel using multiple transmit antennas, and/or the receiver includes a machine-learning decoder network that is trained to receive a signal over the MIMO channel using multiple receive antennas and decode the signal to recover the original information. In some implementations, the system may additionally or alternatively implement a machine-learning network to estimate channel state information (CSI) regarding the channel, such as a state of the radio transmission channel, or spatial information or scheduling information regarding multiple users of the MIMO channel model. Such CSI, for example, may be estimated at the transmitter based on a reverse channel, and/or may be estimated at the receiver and communicated to the transmitter via feedback. In many real-world scenarios, seeking compact representations of such CSI feedback may be an important objective, given limitations in delay and/or bandwidth along with the increasing number of devices and antennas deployed in modern wireless systems. In such scenarios, implementations disclosed herein may enable a machine-learning network to learn compact representations of such CSI for various types of MIMO channel models to achieve such an objective. As such, the present disclosure describes various machine-learning scenarios that may be implemented in a MIMO communication system, wherein one or more machine-learning networks may be trained to learn to encode signals transmitted over the MIMO channel, and/or to decode signals received over the MIMO channel, and/or to estimate CSI to assist communications over the MIMO channel. The MIMO system may be an open-loop system in which the transmitter and receiver learn to communicate over a MIMO channel without the help of CSI feedback, or may be a closed-loop system in which the transmitter and receiver learn to communicate over a MIMO channel with the benefit of CSI feedback.
Open loop may be an attractive option for broadcast or multicast channels, or for providing improved coverage range or resilience, especially when considering mobility, while closed loop may be a more attractive option for dense urban, multi-user interference-limited, or more stable mobility models, where it can offer improved information density, multi-user capacity, and throughput. The at least one machine-learning network may be trained or may be designed to achieve various criteria in the MIMO communication system, such as a low bit error rate, low power, low bandwidth, low complexity, low latency, performing well in particular regimes such as at a low signal-to-noise ratio (SNR) or under specific types of channel fading or interference, and/or other criteria. The results of training such machine-learning networks may then be utilized to deploy real-world communication scenarios to communicate various types of information over various types of RF communication media using multiple antennas. In some implementations, further learning and adaptation of the machine-learning network(s) may be implemented during deployment in real-world systems, for example based on feedback information. These machine-learning networks may replace or augment one or more signal processing functions such as modulation, demodulation, mapping, error correction, CSI estimation and/or CSI feedback, or other components which exist in those systems today. When tuned after deployment, these systems have the benefit that they may improve the algorithms and encoding for specific deployment parameters such as the delay spread, reflectors, spatial distribution, user behavior, specific impairments, and/or other statistical features or distributions of a specific area, specific hardware, cellular coverage area, or operating environment, thereby improving performance over the general case or previously trained models. The disclosed implementations present a novel approach to how digital radio systems are designed and deployed for MIMO radio communications. For example, the disclosed implementations may help improve a typically slow and incremental process of MIMO radio signal processing engineering, and instead enable a new way of designing, constructing, and realizing MIMO radio communications systems. By implementing machine-learning networks that may be trained to learn suitable techniques for communication over different types of communication media, techniques disclosed herein offer various advantages, such as improved power, throughput, spectral efficiency, resiliency, and complexity advantages over presently available MIMO systems. In some scenarios, this can be especially important for MIMO communications channels which have very complex sets of effects that are hard to model, or hard to optimize for using other approaches, especially when considering additional non-linear effects introduced by hardware, amplifiers, interferers, or other effects. In some implementations, a multi-antenna information representation transmitted from each antenna element may be learned using an optimization process (e.g., gradient descent or other solver) to minimize reconstruction loss of the information. As an example, the encoding process, over-the-air representation, and decoding process may all be jointly trained in an end-to-end optimization process to obtain the best representation of each portion of the system.
This optimization process may be designed to produce a MIMO transmission scheme which achieves one or more objectives, such as minimizing bit or codeword error rate, maximizing throughput, maximizing capacity, minimizing computational complexity to fit the encoding and decoding networks of interest, and/or optimizing the representation used to fit the specific MIMO channel conditions used in a MIMO channel impairment module of the training system. The scheme accordingly gives wireless systems the ability to leverage, in an efficient and non-linear manner, spatially diverse multi-antenna channels using extremely computationally efficient methods that often outperform the state-of-the-art linear analytic methods used in fourth-generation wireless systems and beyond. This system and method therefore provides a powerful MIMO wireless transmission scheme, one on which future cellular wireless and other non-cellular wireless diversity systems (such as WLAN) are expected to be based in the coming years. Further, this system and method may provide powerful techniques for scaling MIMO transmission schemes efficiently to different configurations which may have many antennas (e.g., massive MIMO systems), wherein using the antennas effectively at low computational complexity has been a challenge to this point. In general, the system may implement one or more machine-learning networks that are trained to learn suitable input-output mappings based on one or more objective criteria. For example, the machine-learning networks may be artificial neural networks. During training, the machine-learning networks may be adapted through selection of model architecture, weights, and parameters in the transmitter and/or the receiver to learn suitable mappings of inputs to outputs of the network. The machine-learning networks may be trained jointly, or may be trained in an iterative manner. For example, in some implementations, the transmitter may implement an encoder machine-learning network and the receiver may implement a decoder machine-learning network. The encoder machine-learning network and decoder machine-learning network may be implemented as an autoencoder, in which the encoder network and decoder network are jointly optimized. In some implementations, the autoencoder may be trained by modeling the effects of an impaired MIMO channel as one or more channel-modeling layers, such as stochastic layers, which may include regularization layers, transforming layers, variational layers/samplers, noise layers, mixing layers, etc., in the autoencoder network, or as another set of differentiable functions representing the behavior of a MIMO channel. The layers that model the MIMO channel may form a regularization function across random behavior of a MIMO channel. In some implementations, in addition to or as an alternative to implementing an encoder machine-learning network and/or a decoder machine-learning network, the system may implement a machine-learning network to estimate channel state information (CSI) about the MIMO channel. For example, such a CSI machine-learning network may be jointly trained with an encoder network and a decoder network in a single end-to-end autoencoder structure to achieve one or more objectives. In such a structure, an overall end-to-end system architecture for machine learning may be implemented.
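As an illustration of this end-to-end optimization, the following toy sketch jointly trains a linear encoder and decoder by gradient descent through a differentiable MIMO channel model, minimizing mean-squared reconstruction error. It is an assumption-laden illustration, not the disclosed implementation: the names W_enc and W_dec, all dimensions, and the hyperparameters are illustrative, and a practical system would use deeper non-linear networks and a transmit power constraint.

```python
# Toy end-to-end autoencoder over a differentiable MIMO channel (a sketch
# under assumed dimensions, not the disclosed implementation). A linear
# encoder maps bits onto transmit antennas, the channel applies a fixed
# matrix H plus AWGN, and a linear decoder reconstructs the bits.
import numpy as np

rng = np.random.default_rng(0)
n_bits, n_tx, n_rx = 2, 2, 2
H = rng.normal(size=(n_rx, n_tx)) / np.sqrt(n_tx)   # fixed channel draw
W_enc = 0.1 * rng.normal(size=(n_tx, n_bits))       # encoder weights
W_dec = 0.1 * rng.normal(size=(n_bits, n_rx))       # decoder weights
lr, noise_std = 0.01, 0.1

for step in range(5000):
    s = rng.integers(0, 2, size=(n_bits, 1)).astype(float)  # info bits
    x = W_enc @ s                                   # encode to TX antennas
    y = H @ x + noise_std * rng.normal(size=(n_rx, 1))  # impaired channel
    s_hat = W_dec @ y                               # decode
    err = s_hat - s                                 # reconstruction error
    # Analytic gradients of the squared error (constant factors folded
    # into lr); a power-normalization constraint is omitted here so the
    # gradients stay exact.
    W_dec -= lr * (err @ y.T)
    W_enc -= lr * (H.T @ W_dec.T @ err) @ s.T
```

In a full system, the channel layer would itself be stochastic (a fresh fading draw per step) and the update would be performed by an SGD-style optimizer over mini-batches.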
In other implementations, one or more of the encoder, the decoder, or the CSI estimator may instead be implemented with pre-designed communication components, and one or more other parts of the encoder, the decoder, and/or the CSI estimator may implement a machine-learning network to be trained and optimized around such pre-designed components. During training, the one or more machine-learning networks may be trained to perform unsupervised, or partially supervised, machine learning to determine techniques for communicating over an impaired MIMO channel. Therefore, in some scenarios, rather than being reliant upon pre-designed systems for error correction, modulation, pre-coding, shaping, etc., the disclosed implementations herein may adaptively learn techniques for encoding information into waveforms that are transmitted over a MIMO channel, and/or techniques for decoding waveforms received over the MIMO channel into reconstructed information, and/or techniques to estimate and/or feed back CSI about the MIMO channel. The one or more machine-learning networks may be trained on real or simulated MIMO channel conditions. Systems that utilize results of training such machine-learning networks may further be updated during deployment over real-world MIMO channels, thus providing advantages in adapting to different types of wireless MIMO system requirements, and in some cases improving the throughput, error rate, complexity, and power consumption performance of such MIMO systems. As such, regardless of the particular characteristics of the MIMO channel or MIMO channel impairment, implementations disclosed herein may provide broadly applicable techniques for learning representations of information that enable reliable communication over impaired MIMO channels. Depending on the configuration of the training system and the data sets and channel models used, such machine-learning communication techniques may specialize in performance for a narrow class of conditions or signal or MIMO channel types, or may generalize and optimize performance for a wide range of signal or MIMO channel types or mixtures of one or more signals or MIMO channels. Implementations disclosed herein may be applied to a wide range of MIMO radio communication systems, such as cellular systems, satellite systems, optical systems, acoustic systems, tactical mesh network systems, emergency hand-held systems, broadcast, point-to-point, Wi-Fi, Bluetooth, and other forms of MIMO radio communications that undergo transmission impairments. MIMO channel impairments may range, for example, from thermal noise, such as Gaussian-like noise, to more complex impairments such as interference, multi-path fading, impulse noise, spurious or continuous jamming, distortion, hardware effects, and other impairments of the MIMO channel. In some instances, the multiple-transceiver elements represent radio transmission on the same band from distinct antennas, but in other instances they may represent transmission over distinct polarizations within the same band, or transmission of information over multiple distinct bands or mediums. In some implementations, techniques disclosed herein may be utilized to implement a multi-user MIMO system, wherein different information from multiple users (each utilizing multiple-antenna transceivers) is communicated over a common MIMO channel. The system may be trained to learn encoding and/or decoding techniques for each user that achieve a balance of competing objectives for the multiple users sharing the same MIMO channel.
As one example of a multi-user implementation, in downlink scenarios where a single base station transmits to multiple mobile users, a single multi-user encoder may be trained to encode information for the multiple users, and multiple decoders may be trained to decode information for each of the multiple users. As another example of a multi-user implementation, in uplink scenarios where multiple mobile users transmit to a single base station, multiple encoders may be trained to encode information for each of the multiple users, and a single decoder may be trained to collectively decode information for the multiple users. In another example implementation, where distributed MIMO is considered, multiple base stations may encode or decode information across the MIMO channel for one or multiple users within or across cells. MIMO communications schemes are currently used within cellular technologies such as Long Term Evolution (LTE), and implement a variety of analytically derived methods such as beamforming, Alamouti coding, or other space-time block codes or spatial multiplexing techniques, with the goal of efficiently transmitting information from a set of transmitting antennas to a set of receiving antennas. The use of MIMO transmission schemes helps make efficient use of multi-path and multi-user spatial propagation environments, and helps to improve throughput, efficiency, and resiliency of information transmission. These schemes have been derived through various highly specific signal processing algorithms, which are not known to achieve optimal capacity in all situations. Especially in multi-user MIMO systems with non-linear effects, optimal capacity limits are currently not well defined or characterized. The system and method disclosed herein, in contrast, leverage a more adaptive method for learning a parametric encoding and decoding network, which can achieve improvements in resilience and in both single- and multi-user capacity and/or throughput by leveraging more degrees of freedom and more informed distributions over the wireless channel paths and effects, compared to the schemes noted above. FIG. 1 illustrates an example of a radio frequency (RF) system 100 that implements at least one machine-learning network to perform learned communication over a multi-input-multi-output (MIMO) channel using multi-antenna transceivers. The system 100 includes a transmitter 102 and a receiver 104 that implement encoding and decoding techniques that were learned by machine-learning networks that are trained to communicate over an impaired MIMO channel 106. In some scenarios, referred to as "closed-loop" scenarios, the transmitter 102 also utilizes channel state information (CSI) 118 regarding the MIMO channel 106 to perform the encoding. By contrast, scenarios in which the transmitter 102 encodes the input information 108 without the benefit of any CSI 118 are referred to as "open-loop" scenarios. In closed-loop scenarios, the CSI 118 may, for example, be generated using techniques that were learned by a machine-learning network that was trained to estimate the CSI 118 and/or to communicate the CSI 118 to the transmitter 102. However, implementations are not limited to performing all of the functions of encoding, decoding, and CSI estimation/feedback in the system 100 using machine-learning networks.
Instead, some implementations may utilize machine-learning networks to perform only one or some of the techniques of encoding, decoding, and CSI estimation/feedback in system 100, and other parts of the system 100 may implement pre-designed communication techniques around which the machine-learning networks are trained to adapt for communication over the MIMO channel 106. The transmitter 102 transforms the input information 108 (and the CSI 118 in closed-loop scenarios) into multiple transmitted signals 112, each of which is transmitted by one of multiple transmit antennas over the MIMO channel 106. Analogously, the receiver 104 may receive multiple received signals 114, each of which is received by one of multiple receive antennas, and generate reconstructed information 110 that approximates the original input information 108. Additionally, for closed-loop scenarios, the CSI 118 may either be estimated by the transmitter 102 (e.g., using a reverse channel or reverse pilot signal), or may be estimated by the receiver 104 and communicated to the transmitter 102 (e.g., via a feedback channel). The transmitter 102 and/or receiver 104 may be updated by an update process 116. The transmitter 102 and receiver 104 may be trained to achieve various types of objective functions, such as a measure of reconstruction error, a measure of computational complexity, bandwidth, latency, power, or various combinations thereof and other objectives. For example, the transmitter 102 and/or receiver 104 may implement one or more machine-learning networks that are updated by the update process 116. Further details of such a network structure are described below with reference to FIG. 3, and further details of training are described below with reference to FIG. 4. In scenarios of deployment, the transmitter 102 and/or receiver 104 may implement techniques that were previously learned from training, or that may be (further) trained during deployment. The transmitter 102 and receiver 104 may be deployed in various application scenarios to perform communication, using the encoding and/or decoding and/or CSI representations that were learned during training. In some implementations, the transmitter 102 and/or receiver 104 may be further updated during deployment based on real-time performance results such as reconstruction error, power consumption, delay, etc. Further details of deployment are described below with reference to FIG. 7. In some implementations, feedback, such as CSI 118 or error feedback of loss functions, may be implemented via a communications bus or a protocol message within the wireless system, which can be used to update the transmitter 102 and/or receiver 104, along with information to help characterize the response of the MIMO channel 106. The input information 108 and reconstructed information 110 may be any suitable form of information that is to be communicated over a MIMO channel, such as a stream of bits, packets, discrete-time signals, or continuous-time waveforms. Implementations disclosed herein are not limited to any particular type of input information 108 and reconstructed information 110, and are generally applicable to learning encoding and decoding techniques for communicating a wide variety of types of information over the MIMO channel 106. The transmitter 102 and receiver 104 may leverage the multiple antennas in various ways to achieve advantages over single-antenna systems. For example, the transmitter 102 and receiver 104 may leverage the multiple antennas to achieve either spatial multiplexing gain or spatial diversity gain.
The spatial multiplexing gain scenario involves splitting the input information 108 into multiple sub-streams that are transmitted simultaneously from the separate transmit antennas to improve efficiency, throughput, or density. By contrast, the spatial diversity gain scenario involves sending the same input information 108, or different encodings thereof, over the multiple transmit antennas, thus averaging out severe impairment effects of the MIMO channel and improving overall performance, reliability, or coverage. In the spatial multiplexing gain scenario, the transmitter may determine, based on the input information 108, multiple information portions that each correspond to information to be transmitted over one of the multiple transmit antennas. Based on each information portion, the transmitter may generate a corresponding one of the multiple RF signals 112 for transmission over that transmit antenna. Analogously, at the receiver, each of the received RF signals 114 may be processed to generate multiple smaller-rate sub-stream information portions, which may then be combined to yield the reconstructed information 110. In the spatial diversity gain scenario, the transmitter may transform the same input information 108 into the different RF signals 112 for transmission over the multiple transmit antennas. Analogously, at the receiver, the different received RF signals 114 may be processed collectively to generate the reconstructed information 110. In some implementations, the transmitter 102 and receiver 104 employ one or more signal processing operations which are suited to the type of RF communication domain. As examples, the transmitter 102 and/or receiver 104 may implement filtering, modulation, analog-to-digital (A/D) or digital-to-analog (D/A) conversion, equalization, subcarrier/slot assignment, or other signal processing methods that may be suitable for particular types of RF signals or MIMO communication domains. In some implementations, the transmitter 102 and/or receiver 104 may implement one or more transmit and receive antennas, and other hardware or software suitable for transmitting multiple signals 112 and receiving multiple signals 114 over the MIMO channel 106 using multiple antennas. In such scenarios, as shown in the example of FIG. 1, the transmitted signals 112 and received signals 114 may represent actual RF waveforms that are transmitted and received over the MIMO channel 106 through multiple antennas. Thus, the transmitter 102 and receiver 104 may represent generalized mappings between information 108/110 and RF waveforms 112/114. By contrast, in some implementations, the system 100 may implement signal processing and RF transmission/reception processes separately from the transmitter 102 and receiver 104. In such implementations, one or more signal transmission and/or signal reception components, such as filtering, modulation, A/D or D/A conversion, single or multiple antennas, etc., may be represented as part of the MIMO channel 106. The impairments in the MIMO channel 106 may therefore include transmitter/receiver effects, such as filtering impairments, additive noise, or other impairments in the transmitter and/or receiver components. Therefore, in such scenarios, the transmitted signals 112 and received signals 114 represent intermediate representations of information 108/110, and the channel 106 represents a general transformation of those intermediate representations of information to and from actual RF waveforms that are transmitted and received over an RF medium.
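The difference between the two gain scenarios can be made concrete with a small sketch; the bit stream, antenna count, and shapes below are assumptions chosen purely for illustration.

```python
# Illustrative sketch of the two ways a transmitter can map an information
# stream onto N_T transmit antennas (not tied to the figures above).
import numpy as np

bits = np.array([1, 0, 1, 1, 0, 1])      # input information stream
n_tx = 2

# Spatial multiplexing: split into n_tx parallel sub-streams (rate gain);
# row i feeds transmit antenna i.
sub_streams = bits.reshape(n_tx, -1)

# Spatial diversity: every antenna carries (an encoding of) the same bits,
# so a severe fade on one path can be averaged out at the receiver.
diversity_streams = np.tile(bits, (n_tx, 1))
```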
For example, each of the transmitted signals 112 and received signals 114 may represent basis coefficients for RF waveforms, time-domain samples of RF waveforms, distributions over RF waveform values, or other intermediate representations that may be transformed to and from RF waveforms. In scenarios of training, the reconstructed information 110 may be compared with the original information 108, and one or more machine-learning networks in the transmitter 102 and/or the receiver 104 may be trained (updated) based on results of the reconstruction. In some implementations, updating the machine-learning networks may also be based on other factors, such as computational complexity of the machine-learning networks (which can be measured, for example, by the number of parameters, number of multiplies/adds, execution time, Kolmogorov complexity, or otherwise), transmission bandwidth or power used to communicate over the channel 106, or various combinations thereof and other metrics. In some implementations, the transmitter 102 and/or the receiver 104 may include artificial neural networks that consist of one or more connected layers of parametric multiplications, additions, and non-linearities. In such scenarios, updating the transmitter 102 and/or receiver 104 may include updating weights of the neural network layers, or updating connectivity in the neural network layers, or other modifications of the neural network architecture, so as to modify a mapping of inputs to outputs. The transmitter 102 and/or the receiver 104 may be configured to encode, and/or decode, and/or generate CSI 118 using any suitable machine-learning technique. For example, the transmitter 102 may be configured to learn a mapping from input information 108 into a lower-dimensional or higher-dimensional representation as the transmitted signals 112 that are transmitted using multiple transmit antennas. Analogously, the receiver 104 may be configured to learn a reverse mapping from the lower-dimensional or higher-dimensional received signals 114 that are received by multiple receive antennas into the reconstructed information 110. As an example, the mappings that are implemented in the transmitter 102 and receiver 104 may involve learning a set of basis functions for RF signals. In such scenarios, for a particular set of basis functions, the transmitter 102 may transform the input information 108 into a set of basis coefficients corresponding to those basis functions, and the basis coefficients may then be used to generate a corresponding one of the multiple transmitted RF waveforms 112 (for example, by taking a weighted combination of the basis functions weighted by the basis coefficients). Analogously, the receiver 104 may generate the reconstructed information 110 by generating a set of basis coefficients from a corresponding one of the received RF waveforms 114 (for example, by taking projections of the received RF waveform onto the set of basis functions). The basis functions themselves may be any suitable orthogonal or non-orthogonal set of basis functions, subject to appropriate constraints on energy, amplitude, bandwidth, or other conditions. In closed-loop scenarios (with CSI 118), the transmitter 102 may implement the encoding mapping to take into account the CSI 118, in addition to the input information 108, when generating the transmit signals 112. For example, the receiver 104 may implement a mapping from the receive signals 114 to CSI 118, and may communicate the CSI 118 back to the transmitter 102.
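A minimal sketch of this basis-function view follows, assuming an orthonormal cosine basis chosen purely for illustration: the encoder's basis coefficients are turned into a waveform by a weighted combination of basis functions, and the receiver recovers them by projection.

```python
# Sketch of waveform synthesis from basis coefficients and recovery by
# projection (orthonormal basis assumed; any suitable basis could be used).
import numpy as np

n_samples, n_coeffs = 64, 8
t = np.arange(n_samples)
# Orthonormal cosine (DCT-II style) basis: one column per basis function
B = np.cos(np.pi * (t[:, None] + 0.5) * np.arange(n_coeffs)[None, :] / n_samples)
B /= np.linalg.norm(B, axis=0)

coeffs = np.random.default_rng(1).normal(size=n_coeffs)  # encoder output
waveform = B @ coeffs                  # weighted combination of basis fns
recovered = B.T @ waveform             # projection at the receiver
assert np.allclose(recovered, coeffs)  # exact for an orthonormal basis
```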
As another example, the transmitter 102 itself may generate the CSI 118 (e.g., using outputs of a reverse channel or reverse pilot signal from the receiver 104 to the transmitter 102). The CSI 118 may be generated by a machine-learning network that has been trained to learn to represent information about the MIMO channel 106, or may be generated by pre-designed CSI estimation and/or CSI feedback techniques (e.g., a CSI precoding table used in LTE cellular communication systems). In some scenarios, for example to reduce complexity, during deployment the transmitter 102 and/or receiver 104 may utilize simplified techniques that are based on results of training machine-learning networks. For example, the transmitter 102 and/or receiver 104 may utilize approximations or compact lookup tables based on the learned encoding/decoding mappings. In such deployment scenarios, the transmitter 102 and/or receiver 104 may implement more simplified structures, rather than a full machine-learning network. For example, techniques such as distillation may be used to train smaller machine-learning networks which perform the same signal processing function. Further discussion of such deployment scenarios is provided in regards to FIG. 7, below. In some implementations, the transmitter 102 and/or receiver 104 may include one or more fixed components or algorithms that are designed to facilitate communication over MIMO channels, such as expert synchronizers, equalizers, CSI quantizers, etc. As such, during training, the transmitter 102 and/or receiver 104 may be trained to learn encoding/decoding techniques that are suitable for such fixed components or algorithms. RF signals that are transmitted and received by system 100 may include any suitable radio-frequency signal, such as acoustic signals, optical signals, or other analog waveforms. The spectrum of RF signals that are processed by system 100 may be in a range of 1 kHz to 300 GHz. For example, such RF signals include very low frequency (VLF) RF signals between 1 kHz and 30 kHz, low frequency (LF) RF signals between 30 kHz and 300 kHz, medium frequency (MF) RF signals between 300 kHz and 1 MHz, high frequency (HF) RF signals between 1 MHz and 30 MHz, and higher-frequency RF signals up to 300 GHz. As an example of one possible application scenario, the system 100 may be utilized to perform communications from one or more base stations or access points to a mobile device (e.g., cell phone, laptop, Internet-of-Things (IoT) device, etc.) using one or more antennas for transmission and reception (i.e., a "downlink" channel in this example, from tower to mobile device). Here, information may be received from a cellular backhaul network or produced within a cellular tower to be transmitted to a cellular (or non-cellular) mobile device such as a cell phone, laptop, or IoT device. The parametric encoding network in a transmitter 102 is used by the cellular tower to encode information 108 into signals 112, which are then passed through radio transmit hardware and over wireless channel paths in the MIMO channel 106 to reach the receive MIMO antennas as received signals 114. Radio tuning and ADC may be used at the mobile device to recover sampled information in the received signals 114 from each receive antenna, which is then passed through a parametric decoding network in the receiver 104 in order to recover reconstructed information 110. In some implementations, location information may be used to inform the parametric decoding network in the receiver 104, or its weights, in this process on the cellular mobile device.
As another example of an application scenario, the system 100 may be utilized to perform communications from a mobile device to one or more base stations or access points using one or more antennas for transmission and reception (i.e., an "uplink" channel from mobile devices to a base station or tower). In this example, a cellular mobile device uses the transmitter 102 to encode information 108 and transmit signals 112 using multiple transmit antennas over wireless channel paths in the MIMO channel 106, and one or more cellular towers implementing the receiver 104 then receive the signals 114 using multiple receive antennas, and consume the information or pass it to a cellular backhaul network. In this example, the input information 108 may be processed by one or more parametric encoding networks in the transmitter 102 on the mobile device and may be passed through a digital-to-analog converter, mixer, and amplifier to be transformed into signals 112 for transmission from one or more MIMO antennas. These signals 112 emanate over wireless channel paths in the MIMO channel 106 to arrive at the multiple antennas at the receiver 104 on the cellular tower, which generates reconstructed information 110, for example by passing through a parametric decoder network. In the two examples above, the cellular downlink system and the uplink system may be used together within a bi-directional cellular transmission protocol, such as in a cellular system or cellular standard. In closed-loop implementations, the mobile device and the tower in such a system may exchange channel state information (CSI), such as the current fade conditions, which may be used within the parametric decoding process. This CSI may be quantized by obtaining a discretized encoding of the channel state information, which can be compactly transmitted to the network or mobile device. FIG. 2 illustrates an example of a network structure 200 of a transmitter implementing a machine-learning encoder network and a receiver implementing a machine-learning decoder network that may be implemented in an RF system to perform learned communication over MIMO channels using multi-antenna transceivers. The network structure 200 uses one or more layers that form an encoder network 202 and a decoder network 204. The output of each layer is used as input to the next layer in the network. Each layer of the network generates an output from a received input in accordance with current values of a respective set of parameters. For example, in some implementations, the encoder network 202 and/or decoder network 204 may include a plurality of networks that may be collectively or iteratively trained. As such, the network input 208 in FIG. 2 may be the original information (e.g., input information 108 and/or CSI 118 in FIG. 1, above), or may be an output of one or more previous layers in the encoder network 202. Analogously, the network output 210 may represent the reconstructed information (e.g., reconstructed information 110 in FIG. 1, above), or may be an input into one or more subsequent layers in the decoder network 204. In some instances, networks may not be sequential in nature, leveraging connections between various layers or neurons which bypass or route through a plurality of possible architectures. During training, the encoder network 202 and/or decoder network 204 may be trained to learn encoding and/or decoding techniques for communicating over various types of MIMO channels.
During deployment, the encoder network 202 and/or decoder network 204 (having been trained) may be implemented in an encoder and/or decoder. Alternatively, in some scenarios of deployment, a deployed encoder and decoder may utilize simplified encoding and decoding mappings based on results of training the encoder network 202 and/or decoder network 204. In the latter scenario, the encoder network 202 and/or decoder network 204 are only utilized during training, and provide learned encoding and/or decoding techniques that may be utilized in more simplified encoders and decoders that are deployed in real-world systems. Further discussion of such simplified deployment scenarios is provided in regards to FIG. 7, below. In the example of FIG. 2, the encoder network 202 and decoder network 204 are implemented using a neural network structure 200 that is configured as an autoencoder. In the scenario of an autoencoder structure, the encoder and decoder are jointly trained to learn best representations of information for communication over the MIMO channel 206. In general, however, the network structure 200 may be configured as separate networks in the encoder network 202 and decoder network 204, which may be jointly or iteratively trained. During training, the encoder network 202 and/or decoder network 204 may be updated by a network update process 216. In general, the encoder network 202 and/or decoder network 204 may include one or more collections of multiplications, divisions, and summations or other operations of inputs and intermediate values, optionally followed by non-linearities (such as rectified linear units, sigmoid functions, or otherwise) or other operations (e.g., normalization), which may be arranged in a feed-forward manner or in a manner with feedback and in-layer connections (e.g., a recurrent neural network (RNN), where sequences of training information may be used in some instances). For example, a recurrent neural network may be a long short-term memory (LSTM) neural network that includes one or more LSTM memory blocks, or a quasi-recurrent neural network (QRNN), which combines elements of convolutional networks with recurrent networks. Parameters and weight values in the network may be used for a single multiplication, as in a fully connected deep neural network (DNN), or they may be "tied" or replicated across multiple locations within the network to form one or more receptive fields, such as in a convolutional neural network, a dilated convolutional neural network, a residual network unit, or similar. A collection of one or more of these layers may constitute both the encoder 202 and the decoder 204, as shown in the example of FIG. 2. The specific structure for the networks may be explicitly specified at design time, or may be selected from a plurality of possible architecture candidates to ascertain the best-performing candidate. In some implementations, the encoder network 202 may include an output layer that includes a linear regression layer. The decoder network 204 may include at least one of (i) an output layer that includes a linear layer for regression of reconstructed information 210 in decoding the received RF signal 214, or (ii) a sigmoid or hard-sigmoid activation layer for probability regression or slicing of the received RF signal 214, or (iii) an activation of a combination of sigmoid expressions, such as a SoftMax or hierarchical SoftMax, which can compute a probabilistic expression such as a pseudo-likelihood or pseudo-probability of a discrete message, a discrete portion of a message, or one or more bits.
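As a brief illustration of these output-layer choices (the logits below are illustrative values, not tied to the figures): a sigmoid yields per-bit probabilities that can be sliced into hard decisions, while a softmax yields a pseudo-probability over discrete messages.

```python
# Sketch of decoder output activations: per-bit sigmoid regression vs. a
# softmax pseudo-probability over discrete messages (illustrative only).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    e = np.exp(z - z.max())          # shift for numerical stability
    return e / e.sum()

logits = np.array([1.2, -0.4, 0.9, -2.0])
bit_probs = sigmoid(logits)                  # soft decoding of single bits
hard_bits = (bit_probs > 0.5).astype(int)    # hard-sigmoid style slicing
msg_probs = softmax(logits)                  # pseudo-likelihood, 4 messages
```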
In some implementations, the encoder network 202 and/or decoder network 204 may include one or more layers that implement fixed communications algorithms, such as synchronization, equalization, etc. As such, in some scenarios, the encoder network 202 and/or decoder network 204 may be trained and deployed to learn suitable encoding and/or decoding techniques based on such fixed layers in the networks. Therefore, in general, the network structure 200 disclosed herein enables flexible design and training of the encoder network 202 and decoder network 204, for example by incorporating one or more existing communication algorithms that may be deployed in real-world systems in conjunction with machine-learning techniques to optimize around those fixed algorithms. The example of FIG. 2 shows only one possible implementation of a network structure that may be implemented. In general, implementations are not limited to these specific types of layers, and other configurations of layers and non-linearities may be used, such as dense, fully connected, and/or DNN layers, including rectified linear unit (ReLU), sigmoid, tanh, and others. The network structure 200 uses these layers to predict an output 210 for a received input 208. In some implementations, a linear regression layer may be implemented on the output of the encoder 202 and a linear layer on the output of the decoder 204 (for soft decoding), or a hard-sigmoid activation on the output of the decoder 204 (for hard decoding). The multiple transmitted signals 212, created by the encoder 202, may be the actual RF waveforms in analog form, or may each be a series of radio samples in time, frequency, or any other signal representation basis, or may be intermediate representations (e.g., RF samples, basis coefficients, distributions over RF waveform values, etc.) for mapping the input information 208 into RF waveforms for transmission over the MIMO channel 206. Analogously, the multiple received signals 214 may be the actual received RF waveforms in analog form, or may be intermediate representations (e.g., RF samples, basis coefficients, distributions over RF waveform values, etc.) for mapping received RF waveforms into the reconstructed information 210. For example, in the scenario where the encoder 202 and decoder 204 are implemented as a variational auto-encoder, the multiple transmitted RF signals 212 and multiple received RF signals 214 may each represent distributions over RF waveform values. The network structure 200 may also include one or more MIMO channel-modeling layers 207 (to model the effects of the MIMO channel 206), which may be stochastic layers (e.g., regularization layers). In some instances, the MIMO channel-modeling layers 207 may have at least one of weight regularization on convolutional network layer weights, activity regularization on dense network layer activations, or other stochastic impairments on activations or weights, such as dropout or noise. In some instances, or in addition to these effects, the layers may perform additional approximation of non-linearities present in a MIMO channel system (such as amplifier, antenna, or other RF component behaviors), or they may leverage variational layers such as sampling from a random distribution specified by or parameterized by weights or activations.
In some implementations, the MIMO channel-modeling layer(s) 207 may model impairment effects in the MIMO channel 206, which may include various types of impairments in a MIMO RF medium and/or transmission and reception components of the multiple transmit and/or receive antennas in the MIMO system. Such MIMO channel-modeling layers 207 may be implemented during training of the network structure 200, in which case the MIMO channel-modeling layer(s) 207 may be implemented as one or more layers in an overall auto-encoder structure to represent impairment effects of the MIMO channel 206. During evaluation or deployment over actual MIMO channels, the MIMO channel 206 would be a real-world MIMO communication channel (including possible transmitter and/or receiver effects), and the corresponding MIMO channel-modeling layers 207 would be removed from deployment, with only the network layers of the encoder 202 and the decoder 204 being deployed on the real MIMO channel 206. In general, however, MIMO channel-modeling layers 207 may be implemented in different parts of the network structure 200 for various reasons, such as to prevent over-fitting, or to implement dropout, such as a penalty on the convolutional layer weights to encourage minimum-energy bases, or to implement a penalty on dense layer activations to encourage sparsity of solutions, or to improve generalization of the system to unseen conditions or channel states or behaviors. In scenarios of training that use MIMO channel-modeling layer(s) 207 to model the MIMO channel 206, the network structure 200 may implement domain-specific regularization to model RF channel impairment effects. For example, the MIMO channel-modeling layer(s) 207 may model different types of impairments that occur during over-the-air transmission in a wireless RF system, such as additive Gaussian thermal noise, unknown time and rate of arrival, carrier frequency and phase offset, fading, hardware distortions, interference, and/or delay spread in the received signal. Such MIMO channel-modeling layers 207, such as Gaussian noise and dropout, may be used during training and removed during evaluation or deployment over real channels. In radio communications, additive noise, such as Additive White Gaussian Noise (AWGN), may be modeled by adding a real-valued Gaussian random variable to different signal components, which may be signal basis functions (e.g., in-phase (I) and quadrature (Q) components), that are passed through the channel. In some implementations, a normalization layer may be implemented before the AWGN effects, which normalizes the average power of incoming activations, for example to a normalized value equal to one. This form of constraint can be applied to the encoder 202 to enforce a wide range of possible waveform design criteria, such as a maximum power, minimum power, mean power, mean amplitude, peak-to-average power ratio, or a wide range of properties of the transmit waveform which may be used as a hard constraint. Alternatively, similar such waveform design objectives may be included as soft constraints which are combined into the network's loss function during training, as further discussed in regards to FIG. 4, below. The MIMO channel-modeling layers 207 may also be implemented to model unknown time and rate of arrival, for example by applying a random or a priori unknown shift and scaling in the time domain, which may model scenarios in which radio propagation times vary and clocks on distributed radio systems are not synchronized.
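A hedged sketch of these two layers follows, with assumed function names and parameters: the normalization enforces unit average power before a Gaussian noise layer is applied per signal component.

```python
# Sketch of a power-normalization layer followed by an AWGN layer, as used
# during training (illustrative; names and signatures are assumptions).
import numpy as np

def normalize_power(x, target=1.0):
    """Scale activations so that their average power equals `target`."""
    power = np.mean(np.abs(x) ** 2)
    return x * np.sqrt(target / max(power, 1e-12))

def awgn(x, snr_db, rng=None):
    """Add white Gaussian noise at the given SNR (signal power ~ 1)."""
    rng = rng or np.random.default_rng()
    noise_std = np.sqrt(10.0 ** (-snr_db / 10.0))
    return x + noise_std * rng.normal(size=x.shape)

tx = normalize_power(np.random.default_rng(2).normal(size=(2, 64)))
rx = awgn(tx, snr_db=10.0)    # training-time channel impairment
```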
These effects may be modeled, for example, by a random time shift and a random time-dilation rate that have Gaussian distributions. As other examples of the MIMO channel-modeling layers 207, carrier frequency and phase offset may be modeled as rotations in signal components, which may be signal basis functions. In some implementations, sampling may be performed using complex baseband representations, in which case unknown offsets in center frequency and absolute phase of arrival due to unsynchronized oscillators on transmitter and receiver, as well as Doppler shift, may result in static or linear polar mixing of the different signal components. To simulate a real system and to improve generalization, such MIMO channel-modeling layers 207 may randomly select a phase and a frequency offset, or a linear phase ramp based on an expected center frequency offset error due to independent drifting oscillators. As yet another example of MIMO channel-modeling layers 207, delay spread in the received signals 214 may be modeled to simulate the arrival of numerous delayed and phase-shifted copies of multiple signals arriving at the receiver. Since this is simulated as a linear system that is assumed to be stable over a single sample time window, a random non-impulsive channel delay spread filter can be chosen and convolved with the received signal to obtain an output which has been spread in time linearly according to a random channel response. This assumption may be appropriate, for example, in scenarios where the signal window is smaller than the channel coherence time. In scenarios where the signal window is larger than a channel coherence time, the channel progression may be modeled as a sequence with some degree of correlation, and the network 200 may learn techniques for correcting the sequence of delay spread modes (e.g., due to multiple paths in the MIMO channel, or due to memory effects within hardware components). Such delay spread and coherence time may vary in different types of communication systems, including wire-line and space-based wireless systems, which can sometimes have very short impulsive channel responses, or high-frequency and dense multi-path wireless systems, which can have long delay spreads. In some implementations, the delay spread is modeled as a MIMO channel-modeling layer 207 that implements one or more convolutions or filtering operations on the transmitted RF signals 212. In some implementations, the network structure 200 may be utilized with one or more fixed transmission and/or receiving techniques, and may adapt the layers of the encoding network 202 and/or the decoding network 204 to learn encoding and decoding operations that are suitable for those fixed transmission/reception components. For example, in some scenarios the network structure 200 may employ fixed filtering, sampling, modulation, equalization, subcarrier assignment, reference signal insertion, encoding, or other transmission/reception techniques, and may learn suitable network layer parameters or network structures that adapt the overall communication system to best utilize those fixed components. A general design objective for the network structure 200 may be to obtain a desired reconstruction performance for the reconstructed information 210, subject to other objectives or constraints. For example, certain realizations of the system may favor reduced power and/or bandwidth, other improved properties of the RF signals 212 to be transmitted over the channel, or reduced computational complexity.
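The phase/frequency-offset rotation and the delay-spread convolution described above can be sketched as follows; tap count, offset scales, and signal length are assumptions chosen for illustration only.

```python
# Sketch of two training-time impairments on complex baseband samples:
# a random phase/frequency offset (polar mixing / linear phase ramp) and
# delay spread via convolution with a random channel-response filter.
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(size=128) + 1j * rng.normal(size=128)  # complex baseband

phase = rng.uniform(0, 2 * np.pi)        # unknown absolute phase of arrival
cfo = rng.normal(scale=1e-3)             # small center frequency offset
n = np.arange(x.size)
x = x * np.exp(1j * (phase + 2 * np.pi * cfo * n))  # rotation + phase ramp

taps = 0.3 * (rng.normal(size=5) + 1j * rng.normal(size=5))
taps[0] = 1.0                            # direct path dominates
y = np.convolve(x, taps)[: x.size]       # linear delay spread in time
```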
As such, the system may evaluate a trade-off between these objectives, which may be used in order to help determine the specific architecture used for encoding, decoding, or other signal inference tasks. FIG. 3A illustrates an example of an open-loop network structure 300 that may be implemented in an RF system to perform learned communication over MIMO channels using multi-antenna transceivers, without the help of channel state information (CSI). In this example, the transmitter 302 implements a machine-learning encoder network and the receiver 304 implements a machine-learning decoder network, each of which includes one or more neural network layers that may be trained to communicate over the MIMO channel 306, as was detailed with reference to FIG. 2, above. However, implementations are not limited thereto, and systems may generally implement machine-learning networks in only one of the transmitter 302 or the receiver 304. The encoder network in the transmitter 302 includes one or more neural network layers that transform the input information 308 into multiple RF signals 312 for transmission over the MIMO channel 306, as detailed with reference to FIG. 2, above. The MIMO channel 306 includes one or more neural network layers that model the MIMO channel, and may additionally include layers that model effects of transmission and/or reception using transmitter and/or receiver components, as detailed with reference to FIG. 2, above. The receiver 304 includes one or more neural network layers that transform multiple RF signals 314 received from the MIMO channel 306 into reconstructed information 310, as detailed with reference to FIG. 2, above. During training, the MIMO channel 306 may be modeled using either analytic, simulation, or real channel data models. For example, in FIG. 3A, the MIMO channel 306 is modeled using a randomized model that implements a layer for multiplicative effects in addition to a noise layer. In some implementations, the layers of the MIMO channel model may implement an input-output transformation according to a matrix H that is configured to generate N_R outputs y = (y_1, . . . , y_{N_R}) that correspond to N_R receive antennas, based on N_T inputs x = (x_1, . . . , x_{N_T}) that correspond to N_T transmit antennas. FIG. 3B illustrates an example of a closed-loop network structure 350 that may be implemented in an RF system to perform learned communication over MIMO channels using multi-antenna transceivers, with the benefit of channel state information (CSI). Although the example of FIG. 3B shows the CSI being generated at the receiver and fed back to the transmitter, implementations are not limited thereto, and the CSI may alternatively be generated at the transmitter, for example by utilizing a reverse channel or reverse pilot signal. Analogous to the example that was discussed with reference to FIG. 3A, above, the example of FIG. 3B shows that an encoder network in the transmitter 352 includes one or more neural network layers that transform the input information 358 into multiple RF signals 362 for transmission over the MIMO channel 356. The receiver 354 includes one or more neural network layers that transform multiple RF signals 364 received from the MIMO channel 356 into reconstructed information 360. However, in this example of FIG. 3B, the receiver 354 further implements a CSI estimator 370, which generates and transmits CSI 368 back to the transmitter 352.
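In other words, one use of such a randomized open-loop channel model reduces to y = Hx + n. A minimal sketch with illustrative dimensions:

```python
# Sketch of the randomized MIMO channel model: a multiplicative layer
# (matrix H, here a Rayleigh-fading draw) plus a noise layer.
import numpy as np

rng = np.random.default_rng(4)
n_tx, n_rx = 2, 4
H = (rng.normal(size=(n_rx, n_tx)) +
     1j * rng.normal(size=(n_rx, n_tx))) / np.sqrt(2 * n_tx)

x = rng.normal(size=(n_tx, 1)) + 1j * rng.normal(size=(n_tx, 1))   # TX signals
n = 0.1 * (rng.normal(size=(n_rx, 1)) + 1j * rng.normal(size=(n_rx, 1)))
y = H @ x + n    # one use of the channel: N_T inputs -> N_R outputs
```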
The CSI 368 may be generated based on the received RF signals 364, and may indicate various types of information regarding the MIMO channel 356, for example a state of the MIMO channel 356, or spatial information regarding multiple antennas of the system, or scheduling information regarding multiple users of the MIMO channel 356. In this example, the decoder network at the receiver 354 generates the reconstructed information 360 without having explicit knowledge of the random channel state, but may then estimate the random channel state as CSI 368 for transmission back to the transmitter 352 (e.g., to assist in the encoding of the next packet), or may directly use the estimated CSI 368 for the transmission of information on the reverse link. The CSI 368 may be utilized by the transmitter 352, in addition to the input information 358, to generate the RF signals 362 for transmission over the MIMO channel 356. As such, the CSI 368 may be combined with the input information 358 in the encoding network at the transmitter 352 to obtain an improved transmit representation to effectively utilize the MIMO channel model 356 given the current random channel state. Alternatively or additionally, in some implementations, the CSI 368 may be utilized to update the MIMO channel model 356, for example, during training to achieve improved training results. In some implementations, the CSI estimator 370 may itself implement a machine-learning network, for example as shown in FIG. 3B, including one or more neural network layers. The CSI machine-learning network in the CSI estimator 370 may be trained to learn a mapping from the received RF signals 364 into a CSI 368 that indicates the random state of the channel. For example, the CSI machine-learning network may be trained to generate the CSI as a representation of channel information, which may indicate a state of the MIMO channel or spatial information, or scheduling information regarding multiple users of the MIMO channel. In some implementations, the CSI 368 may represent a "full CSI" learned model indicating uncompressed channel information (but still typically estimated without indicating perfect knowledge of the channel state), or may represent a "compact CSI" learned model (e.g., using partial/lossy/discretized CSI). The latter scenario may be achieved, for example, by reducing the dimension or quantizing or classifying the channel information into one of a discrete number of states or a finite number of bits as the CSI, which may be referred to as "CSI embedding." In this scenario of CSI embedding, a wide range of different parametric machine-learning networks may be chosen for the CSI estimator 370 such that the CSI represents the channel information in an accurate manner using minimal bits. The CSI embedding may be optimized for certain SNR levels, numbers of antennas, or multi-antenna propagation conditions. In some cases, a hyper-parameter optimization method or system may be used in order to select the CSI machine-learning network in the CSI estimator 370 that best meets the engineering and performance needs of the resulting system in terms of bit error rate, information density, signal linearity, and computational complexity.
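One simple way to picture such a CSI embedding (the codebook size and construction below are assumptions for illustration, not the disclosed design) is nearest-neighbor quantization of an estimated channel state against a codebook shared by transmitter and receiver, so that only the index bits need to be fed back:

```python
# Sketch of compact CSI feedback by codebook quantization ("CSI embedding").
import numpy as np

rng = np.random.default_rng(5)
csi_full = rng.normal(size=8)           # estimated channel state (real-valued here)
codebook = rng.normal(size=(16, 8))     # 16 shared entries -> 4-bit embedding

index = int(np.argmin(np.linalg.norm(codebook - csi_full, axis=1)))
csi_bits = format(index, "04b")         # compact CSI fed back to the transmitter
csi_compact = codebook[index]           # transmitter-side reconstruction
```

A learned embedding would replace the random codebook with one optimized jointly with the encoder and decoder for the target SNR levels and antenna configurations.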
The system 400 includes a transmitter implementing an encoder network 402 and a receiver implementing a decoder network 404 that are trained to communicate over the MIMO channel 406. The training illustrated in FIG. 4 may be implemented prior to deployment, or in some scenarios may be incorporated as part of deployment, for example to further update and refine the encoder network 402 and/or the decoder network 404 based on real-world performance. In closed-loop scenarios, the system 400 may also include a CSI estimator 420, which generates CSI 418 that is additionally utilized by the encoder network 402, as was previously described with reference to FIG. 3B, above. In open-loop scenarios, however, the system 400 does not implement the CSI estimator 420. In closed-loop scenarios, the CSI estimator 420 may itself implement a machine-learning network that may be trained to generate the CSI 418. However, implementations are not limited to using machine-learning networks for all of the encoding, decoding, and CSI estimation, as some implementations may utilize machine-learning networks in only one of the encoding or decoding, or alternatively, may utilize machine-learning networks for only the CSI estimator 420, while using pre-designed components for the other functionality of system 400. In some implementations, the encoder network 402 and decoder network 404 may be utilized for training to learn suitable encoding and decoding mappings, and such mappings may be implemented in a deployed system using more simplified encoders and decoders. For example, a deployed system may utilize lookup tables at the encoder and distance-based metrics at the decoder, or other simplified forms of encoding and decoding, that are designed based on results of training the encoder network 402 and decoder network 404. Similarly, the CSI estimator 420 may implement a machine-learning network that is utilized for training to learn suitable CSI estimations, and such mappings may be implemented in a deployed system using more simplified CSI estimators. Further discussion of such simplified deployment scenarios is provided in regards to FIG. 7, below. The channel 406 that is implemented during training may be a model of an RF channel that is obtained via simulation and/or based on real-world RF channel data. For example, in some implementations, training may begin with a simulated channel model and train the encoder network 402, the decoder network 404, and/or the CSI estimator 420 based on simulated propagation models reflecting a real-world propagation environment or emitter data. The encoder network 402, the decoder network 404, and/or the CSI estimator 420 may then be further trained against a real channel where hardware is used with a training feedback loop. In some implementations, the model of the channel 406 may include effects of transmitter and receiver components, such as filtering, modulation, etc. For example, in scenarios where a simulated channel is used for training, an analytic channel impairment model may be utilized that fits a specific set of hardware/software and wireless deployment conditions. As such, the training in FIG. 4 may train the encoder network 402, the decoder network 404, and/or the CSI estimator 420 to operate under different channel conditions, as well as for different real-world transmitter, receiver, and CSI estimator scenarios. During training, the encoder network 402, the decoder network 404, and/or the CSI estimator 420 may either be jointly trained or iteratively trained.
For example, the encoder network402, the decoder network404, and/or the CSI estimator420may be jointly trained as an auto-encoder (as described in regards toFIG.2, above). In some implementations, the encoder network402, decoder network404, and/or the CSI estimator420may be separately trained. In such scenarios, one or some of the machine-learning networks in system400may be fixed, either by previous training or by a pre-designed transmission/reception/CSI scheme, while the other networks are trained to learn an encoding/decoding/CSI strategy that is appropriate for the fixed counterpart networks. For example, the encoder network402may be fixed to generate a particular mapping of input information408to transmitted RF signals412, and the CSI estimator420may be fixed to generate a particular mapping of received RF signals414to CSI418. Meanwhile, the decoder network404may be trained to learn a mapping from the received RF signals414to reconstructed information410that is best suited for the fixed encoder402and fixed CSI estimator420. In some implementations, the input information408may be represented by training data that is utilized for training purposes. The training data may have a different form than the input information408, but nonetheless may represent the input information408for purposes of training. In such scenarios, the encoder network402may process the training data that represents the first information, and the decoder network404may generate reconstructed information410as a reconstruction of the first information408represented by the training data. The system400may compute a loss function412between the original input information408and the reconstructed information410. The loss function412may be any suitable measure of distance between the input information408and reconstructed information410, such as cross-entropy, f-divergence, mean squared error, or another geometric distance metric (e.g., mean absolute error (MAE)). In some implementations, the loss function412may combine several geometric, entropy-based, and/or other classes of distance metrics into an aggregate expression for distance or loss. In some implementations, additional loss terms may be used in the loss function412in combination with such primary loss terms, for example to accomplish secondary objectives (e.g., to reduce interference imposed upon a secondary receiver, to improve favorable signal properties such as peak to average power ratio (PAPR), or to balance power between antennas). In addition to achieving an objective that includes the loss function412, the system400may also be configured to achieve an objective related to other performance measures, such as power, bandwidth, complexity, or other performance metrics that are relevant for communication. In some implementations, the system400may be configured to achieve a desired trade-off between different performance metrics. For example, achieving such a trade-off may be implemented using an objective function that combines different metrics, for example as a weighted combination of the metrics. In addition or as an alternative, this trade-off may be achieved by selecting a model according to user preferences or application specifications. In addition or as an alternative, the system400may implement one or more hard constraints on performance metrics, such as constraints on power, bandwidth, reconstruction error, etc.
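The weighted-combination idea can be sketched as a composite loss; the PAPR penalty and its 0.05 weight below are illustrative assumptions rather than values from the disclosure:

```python
import torch
import torch.nn.functional as F

def composite_loss(bits, recon_logits, tx_iq, papr_weight=0.05):
    """Primary reconstruction loss plus a weighted secondary PAPR penalty.
    The 0.05 weight is an assumption; it trades reconstruction accuracy
    against the secondary signal-property objective."""
    primary = F.binary_cross_entropy_with_logits(recon_logits, bits)
    power = tx_iq.pow(2).sum(dim=1)                    # instantaneous power per example
    papr = power.max() / power.mean().clamp_min(1e-9)  # peak-to-average power ratio
    return primary + papr_weight * papr
```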
In some implementations, a network update process416may update the encoder network402, the decoder network404, and/or the CSI estimator420based on the various performance metrics. This updating may include updates to the network architectures, parameters, or weights of the networks in the encoder network402, the decoder network404, and/or the CSI estimator420. For example, the updating may include updating weights or parameters in one or more layers of the network(s), selecting machine-learning models for the network(s), or selecting a specific network architecture, such as choice of layers, layer-hyperparameters, or other network features. As discussed, updating may be implemented on the encoder network402, the decoder network404, and/or the CSI estimator420in a joint or iterative manner, or individually (as in the case where one or some of the networks is fixed). As discussed above, the updates performed by network update process416may be performed during training to learn suitable encoding, decoding, and/or CSI estimation techniques prior to deployment, and/or may be performed during deployment (if a deployed encoder, decoder, or CSI estimator implements machine-learning networks) to further update the machine-learning networks based on real-world deployment performance results. In some implementations, the network update process416may update the encoder network402, the decoder network404, and/or the CSI estimator420to achieve a desired objective function, which may include the loss function412and other performance metrics discussed above. In some implementations, the network update process416may utilize an optimization method such as evolution, gradient descent, stochastic gradient descent, or another solution technique. As an example of gradient-based updates, the network update process416may calculate a rate of change of the objective function relative to variations in the encoder network402, the decoder network404, and/or the CSI estimator420, for example by calculating or approximating a gradient of the objective function. Such variations may include, for example, variations in the weights of one or more network layers, as shown in the example ofFIG.4, or other network architecture choices. In scenarios where the channel406is based on real RF channel data and does not have a closed-form gradient solution, an approximate method may be used to estimate the gradient of the objective function. Based on the calculated rate of change of the objective function, the network update process416may determine a first variation for the encoder network402, a second variation for the decoder network404, and/or a third variation for the CSI estimator420. These variations may be computed, for example, using Stochastic Gradient Descent (SGD) style optimizers, such as Adam, AdaGrad, Nesterov SGD, or others. In some implementations, these variations may be computed using other scalable methods for direct search, such as evolutionary algorithms or particle swarm optimizations. Once the variations have been determined, the network update process416then applies those variations to the appropriate machine-learning network. For example, the network update process416may update at least one encoding network weight in one or more layers of the encoder network402, at least one decoding network weight in one or more layers of the decoder network404, and/or at least one CSI estimation network weight in one or more layers of the CSI estimator420.
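For the case where the channel lacks a closed-form gradient, one hedged sketch of an approximate method is a two-sided SPSA-style estimate; the objective function named below is a hypothetical black box standing in for a loss evaluated over the real channel:

```python
import torch

def spsa_gradient(objective, params: torch.Tensor, eps: float = 1e-3) -> torch.Tensor:
    """Two-sided SPSA-style estimate of d(objective)/d(params) for use when
    the channel offers no closed-form gradient (e.g., real hardware in the
    loop). `objective` is a black-box scalar function of a flat parameter
    vector; its name and signature are assumptions for this sketch."""
    delta = torch.randint(0, 2, params.shape).float() * 2 - 1  # random +/-1 signs
    f_plus = objective(params + eps * delta)
    f_minus = objective(params - eps * delta)
    # Since each entry of delta is +/-1, dividing by delta equals multiplying.
    return (f_plus - f_minus) / (2 * eps) * delta

# Usage sketch: g = spsa_gradient(loss_over_real_channel, flat_weights),
# where loss_over_real_channel is a hypothetical hardware-in-the-loop objective.
```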
In general, updating the machine-learning networks of system400is not limited to updating network weights, and other types of updates may be implemented. For example, updating the machine-learning networks may include selecting a machine-learning model for the encoder network402from among a plurality of encoding models, selecting a machine-learning model for the decoder network404from among a plurality of decoding models, and/or selecting a machine-learning model for the CSI estimator420from among a plurality of CSI estimation models. In such implementations, selecting machine-learning models may include selecting a specific network architecture, such as choice of layers, layer-hyperparameters, or other network features. The encoder network402, the decoder network404, and/or the CSI estimator420may be trained over various training models of the MIMO channel406and/or CSI feedback channels, which may be of the same type or of different types of MIMO channel models. Depending on the composition of the set of models for channel406, at least one of the encoder network402, the decoder network404, or the CSI estimator420may be optimized to communicate over a certain type of MIMO channel and/or CSI feedback channel, or a wide range of different types of MIMO channels and/or CSI feedback channels. In some implementations, the model of MIMO channel406may be categorized into a number of different modes. During training, the encoder network402, the decoder network404, and/or the CSI estimator420may be trained on the different modes of the MIMO channel406. For each of these modes, the machine-learning network(s) may learn suitable encoding/decoding/CSI estimation techniques for the different channel modes (a minimal mode-selection sketch follows this passage). The different modes of the MIMO channel406may represent any suitable categorization of channel condition, such as level of noise, SNR, delay spread, rate of channel variations, bandwidth, etc. Similarly, the CSI estimator420may be trained on different modes of the CSI feedback channel, for example representing different levels of noise, bandwidth, etc. In some implementations, instead of the MIMO channel406being a simulated channel, a real channel may be used to train the encoder network402and/or decoder network404. In such implementations, additional transmission and reception components (either hardware or software) may be implemented to transmit and receive analog RF waveforms over the real channel. Such transmit and receive components may be implemented either in the encoder network402and decoder network404, or their effects may be included in the channel effects that are accounted for in the model of the MIMO channel406. As such, the training inFIG.4may be performed over any suitable MIMO channel406, whether simulated or real, to train the encoder network402, decoder network404, and/or the CSI estimator420to learn suitable encoding/decoding/CSI estimation techniques.
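A minimal sketch of selecting per-mode networks at run time might look as follows; the SNR boundaries, mode names, and the trained_encoders/trained_decoders registries are assumptions standing in for networks trained per mode as described above:

```python
# Illustrative SNR-based mode table; boundaries and names are assumptions.
MODES = {"low_snr": (float("-inf"), 5.0),
         "mid_snr": (5.0, 15.0),
         "high_snr": (15.0, float("inf"))}

def select_mode(snr_db: float) -> str:
    """Map an estimated SNR to the mode whose half-open interval contains it."""
    for mode, (lo, hi) in MODES.items():
        if lo <= snr_db < hi:
            return mode
    raise ValueError(f"no mode covers {snr_db} dB")

# Hypothetical per-mode registries, one trained network per mode:
# encoder = trained_encoders[select_mode(estimated_snr_db)]
# decoder = trained_decoders[select_mode(estimated_snr_db)]
```

In some implementations, measurements may be made of wireless channel propagation information for the MIMO channel model406during training of a MIMO communications system using reference sounding in a real world environment.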
In such a system, a MIMO sounding recorder (which may be integrated within a handset or mobile device, or may be integrated within mobile embedded devices such as on a drone or vehicle) may be used to characterize the effects of the wireless channel paths between cellular towers (or other similar access points, base stations, or RF transceivers/gateways) and a mobile device such as a phone, laptop, or Internet-of-Things (IoT) device, which would be in the same location as the MIMO sounding recorder. In this case, the cellular towers may use a reference signal generation process, such as transmission of a known P/N sequence, a preamble, or another reference signal; radio transmit hardware such as mixers, digital-to-analog converters, filters, amplifiers, etc.; and a set of transmit antennas. These signals emanate over a set of wireless channel paths in the real-world MIMO channel between transmitter antennas and receiver antennas at the sounding recorder. A radio tuning and analog-to-digital converter (ADC) stage receives and digitizes the transmitted signal at the MIMO sounding recorder. In some implementations, an optional synchronization algorithm is used to locate and perform estimation or synchronization tasks on the reference signal. Subsequently, a radio channel response, derived information, or raw receive signal is stored on the device to maintain a record of the conditions present on the wireless channel paths at the time of measurement. Location information reception and storage may also be performed in some implementations on the MIMO sounding recorder to correlate this information with spatial information about the MIMO channel environment, which can be used later during training or deployment of MIMO communications systems. This stored information may contribute to a large experiential data set of real measured channel propagation conditions which may be used to generate MIMO channel models406from recorded sounding data during the training of new radio communications systems (a numerical sketch of the correlation step performed at such a recorder follows this passage). During training, the encoder network402may be configured to learn a mapping from input information408into multiple transmitted RF signals412. Analogously, the decoder network404may be configured to learn a reverse mapping from multiple received RF signals414into reconstructed information410. As discussed above, the transmitted RF signals412and received RF signals414may represent analog RF waveforms that are transmitted and received over the MIMO channel, or may represent intermediate representations (e.g., samples of RF waveforms, coefficients of basis functions, distributions over RF waveforms, etc.) that are transformed to and from analog RF waveforms through processing by one or more other components, such as filters, modulators, equalizers, etc. For example, in the scenario where the encoder network402and decoder network404are implemented as a variational auto-encoder (as discussed in regards toFIG.2, above), the RF signals412and414may represent distributions over RF waveform values. In general, the transmitted RF signals412and received RF signals414may represent any suitable RF signal representations that are learned by the encoder network402and decoder network404for encoding and decoding information over a particular channel or class of channels.
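Returning to the sounding recorder described above, the correlation step it performs can be sketched numerically as follows; the sequence length, channel taps, and noise level are illustrative assumptions:

```python
import numpy as np

# Correlate the digitized receive signal against the known P/N reference to
# recover multipath taps; all numbers here are assumptions for the sketch.
rng = np.random.default_rng(0)
pn = rng.choice([-1.0, 1.0], size=127)            # known pseudo-noise reference
h_true = np.array([1.0, 0.5, 0.2])                # unknown channel taps
rx = np.convolve(pn, h_true)                      # signal after the channel paths
rx = rx + 0.05 * rng.standard_normal(rx.shape)    # received, digitized signal

# For a (near-)white P/N sequence, the cross-correlation peaks at each path
# delay, recovering the taps up to the reference energy scale.
corr = np.correlate(rx, pn, mode="full")
h_est = corr[len(pn) - 1 : len(pn) - 1 + len(h_true)] / np.sum(pn ** 2)
```

In some implementations, the encoding and decoding mappings may involve a set of basis functions.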
The basis functions may be used by the encoder network402to transform the input information408into the transmitted RF signals412, each of which may be a set of basis coefficients, or an RF waveform that is a weighted combination of basis functions, or another suitable representation using a particular set of basis functions. Analogously, the decoder network404may use the same set of basis functions to process the received RF signals414to generate the reconstructed information410, for example by taking projections of each of the RF signals414onto the set of basis functions to generate basis coefficients, or in the scenario where each of the RF signals414is itself a set of basis coefficients, by transforming the basis coefficients in each of the RF signals414into the reconstructed information410. The basis functions may be any suitable set of orthogonal or non-orthogonal basis functions. For example, the basis functions may be In-Phase and Quadrature-Phase (I/Q) signals, Fourier basis functions, polynomial basis functions, Gaussian basis functions, exponential basis functions, wavelet basis functions, or combinations of these and/or other suitable sets of basis functions that can be utilized to represent RF waveforms that are transmitted over a channel. The basis functions may have different phase, amplitude, and/or frequency components. In some implementations, the basis functions may be parameterized and the training may involve optimizing over parameters of the basis functions. Training the encoder network402and decoder network404may begin with any suitable set of initial conditions. For example, the training may begin with a random set of basis functions subject to certain conditions. Alternatively, the training may begin with a fixed set of basis functions, such as commonly used RF communication basis functions including Quadrature Phase-Shift Keying (QPSK) or Gaussian Frequency Shift Keying (GFSK), orthogonal frequency-division multiplexing (OFDM), a previously trained set of machine-learning networks, or another fixed set of basis functions. During training, the encoder network402and decoder network404attempt to learn improved basis functions, according to results of encoding and decoding. Training the encoder402and decoder404may involve optimizing over a set of basis functions or over different sets of basis functions, for example using a greedy search or other optimization-type algorithm. In some implementations, the input information408may be chosen from a training set of information. The input information408may, in some implementations, be limited to a particular class of information, such as binary information, discrete-time information, analog waveforms, or another class of information. In such scenarios, the system400will be trained to learn communication encoding and decoding techniques that are tuned to communicate that particular class of information (over a particular channel or class of channels). By training on different types of information408and different types of MIMO channels406, the system400may be trained to learn different encoding and decoding operations that are applicable to different communication scenarios.
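A minimal sketch of such a basis-function mapping, assuming a small discretely orthogonal cosine basis rather than any particular learned set, is:

```python
import numpy as np

# Encode symbols as a weighted sum of orthogonal cosine basis functions;
# decode by projection. The sizes are illustrative assumptions.
N_SAMPLES, N_BASIS = 64, 4
t = np.arange(N_SAMPLES) / N_SAMPLES
basis = np.stack([np.cos(2 * np.pi * (k + 1) * t) for k in range(N_BASIS)])

def encode(coeffs: np.ndarray) -> np.ndarray:
    return coeffs @ basis                               # weighted combination -> waveform

def decode(waveform: np.ndarray) -> np.ndarray:
    return basis @ waveform / (basis ** 2).sum(axis=1)  # projection onto each basis function

symbols = np.array([1.0, -1.0, 1.0, -1.0])              # e.g., symbols to send
assert np.allclose(decode(encode(symbols)), symbols)    # round trip over a clean channel
```

The loss function412may be any suitable measure, or combination of measures, of distance between the input information408and the reconstructed information410.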
For example, the loss function412may include cross-entropy, f-divergence, mean squared error (MSE), clipped MSE, which penalizes predicted values according to MSE but only for values that fall on the wrong side of a decision threshold, an exponential loss function that penalizes errors exponentially, or other suitable distance metric(s). In addition, as discussed above, other performance metrics may be incorporated into training, for example as part of the loss function412and/or as hard constraints, etc. For example, such performance metrics may include codeword error rate (CER), bit error rate (BER) as a function of the signal-to-noise ratio (SNR), communication bandwidth, communication power, and spectral efficiency (the number of bits per second that can be transmitted over a fixed bandwidth channel at a specific SNR). Any one or combinations of such metrics may be utilized during training as part of the loss function412(e.g., as a weighted combination) and/or as hard constraints in addition to the loss function412. FIG.5is a flowchart illustrating an example method500of training an RF system that implements at least one machine-learning network to learn to communicate over MIMO channels with feedback of CSI using multi-antenna transceivers. The training method500may be performed by one or more processors, such as one or more CPUs, GPUs, DSPs, FPGAs, ASICs, TPUs, neuromorphic chips, or vector accelerators that execute instructions encoded on a computer storage medium. The training method500includes determining a transmitter and a receiver, at least one of which is configured to implement at least one machine-learning network (502). As discussed above, the at least one machine-learning network may be an encoding machine-learning network, a decoding machine-learning network, and/or a CSI estimation machine-learning network. The method500further includes determining a MIMO channel model that represents transmission effects of a MIMO communication channel (504). As discussed above, the MIMO channel model may be implemented using analytical models, simulation models, or real-world propagation data. The method500further includes determining first information for transmission over the MIMO channel model (506). As discussed above, the first information may be any suitable discrete-time, analog, discrete-valued, or continuous-valued information. For example, in some instances, this input information may be whitened discrete bits or symbols, or in other cases, the input information may follow the distribution of a non-whitened information source. As previously discussed in regards toFIG.4, above, in some implementations, the first information may be represented by training data that is utilized for training purposes. In such scenarios, the training data may have a different form than the first information, but nonetheless may represent the first information for purposes of training. The method500further includes using the transmitter to process the first information and generate a plurality of first RF signals representing inputs to the MIMO channel model (508). As discussed above, in some implementations the first information may be represented by training data, in which case the transmitter processes the training data representing the first information.
Furthermore, as discussed above, the generated first RF signal may represent an analog RF waveform that is transmitted over a channel, or may be an intermediate representation (e.g., samples, basis coefficients, distributions over RF waveforms, etc.) that undergoes further processing (e.g., filtering, D/A conversion, modulation, etc.) to generate an analog RF waveform. This encoding process may utilize any suitable mapping from an input information space into an RF signal space, as discussed in regards toFIG.4, above. In closed-loop scenarios, processing the first information may also include processing CSI that is generated by a CSI estimator, as discussed with reference toFIG.3B, above. The method500further includes determining a plurality of second RF signals representing outputs of the MIMO channel model, each second RF signal of the plurality of second RF signals representing aggregated reception of the plurality of first RF signals having been altered by transmission through the MIMO channel model (510). In training scenarios, the effects of the communication channel may be implemented by a model of a channel obtained by simulation and/or real channel data, or may be implemented by a real-world communication channel. As discussed above, each of the second RF signals may represent an analog RF waveform that is received over a channel, or may be an intermediate representation (e.g., samples, basis coefficients, distributions over RF waveforms, etc.) that is a result of processing (e.g., filtering, sampling, equalizing, etc.) a received analog RF waveform. The method500further includes using the receiver to process the plurality of second RF signals and generate second information as a reconstruction of the first information (512). As previously discussed in regards toFIG.4above, in some implementations, the first information may have been represented by training data that is utilized for training purposes. In such scenarios, the input training data may have a different form than the original first information, but nonetheless the receiver may generate the second information as a reconstruction of the first information that is represented by the training data. This decoding process may utilize any suitable mapping from an RF signal space into reconstructed information space, as discussed in regards toFIG.4, above. The method500further includes calculating a measure of distance between the second information and the first information (514). This measure of distance may be implemented as a loss function (e.g., the loss function412inFIG.4) and may represent a difference or error between the original input information and the second (reconstructed) information. As examples, the measure of distance may include cross-entropy, mean squared error, or another geometric distance metric (e.g., MSE, MAE, KL divergence, f-divergence), or may combine several geometric and/or entropy-based distance metrics into an aggregate expression for distance. The method500further includes updating the at least one machine-learning network based on the measure of distance between the second information and the first information (516). This update may be applied to machine-learning networks in the transmitter and/or the receiver in a joint or iterative manner, or individually, as discussed above. In closed-loop scenarios, the update may be applied to a CSI estimator in the receiver.
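A compact sketch of steps506through516, reusing the networks and simulated channel from the earlier training-loop sketch and using the clipped MSE described above as the measure of distance (the 0.5 decision threshold is an assumption), is:

```python
import torch

def clipped_mse(pred: torch.Tensor, bits: torch.Tensor) -> torch.Tensor:
    """Clipped MSE: squared error is charged only where the prediction falls
    on the wrong side of the 0.5 decision threshold for the target bit."""
    wrong_side = (pred >= 0.5) != (bits >= 0.5)
    return ((pred - bits) ** 2 * wrong_side.float()).mean()

def train_step(encoder, channel_model, decoder, optimizer, bits):
    """One pass through steps 506-516 under the earlier sketch's assumptions."""
    tx = encoder(bits)                       # (508) first RF signals
    rx = channel_model(tx)                   # (510) outputs of the MIMO channel model
    recon = torch.sigmoid(decoder(rx))       # (512) reconstruction of the first information
    loss = clipped_mse(recon, bits)          # (514) measure of distance
    optimizer.zero_grad(); loss.backward(); optimizer.step()  # (516) network update
    return loss.item()
```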
The updates may generally include updating any suitable machine-learning network feature of the transmitter and/or receiver, such as network weights, architecture choice, machine-learning model, or other parameter or connectivity design, as discussed in regards toFIG.4, above. As an example, in some implementations, if the transmitter and/or receiver are trained to learn a set of basis functions for communicating over the MIMO channel, then the update process may include updating the set of basis functions that are utilized in the transmitter and/or receiver. FIGS.6A and6Billustrate examples of different types of transmit and receive RF signals that may be learned by machine-learning networks for communication over a MIMO channel without the help of CSI (i.e., the open-loop scenario). The signals may correspond to the previously discussed transmitted RF signals412and received RF signals414inFIG.4, above. In particular,FIGS.6A and6Billustrate open-loop transmit constellations (the upper two figures in each ofFIGS.6A and6B) and open-loop receive constellations (the lower two figures in each ofFIGS.6A and6B) for a two-input and two-output (2×2) MIMO channel. FIG.6A shows transmit and receive constellations over a number of random channel samples for the MIMO channel in which all the entries of the channel transition matrix H are selected at random, whileFIG.6Bshows the constellations for an all-ones H matrix. As shown, for an H matrix with roughly uniform power for each channel, the machine-learning decoder network learns a receive waveform which has nearly constant-amplitude phase encoding, while the encoder network learns transmit constellations that appear to be quite random arrangements of 2^k=16 points, forming a non-standard 16-QAM type arrangement. Therefore, based on such training, the encoder and/or decoder may be deployed to utilize different sets of transmit and receive constellations for different channel conditions (e.g., different channel transition matrices H) in an adaptive manner. For example, in scenarios where machine-learning networks are deployed in a real-world communication system, the system may obtain channel state information (CSI) and adjust the encoder network and/or decoder network according to the state of the channel. Depending on the state of the channel, the encoder network and/or decoder network may simply adjust parameters (e.g., transmission power) for the same set of constellations, or may change the set of constellations entirely (e.g., by switching between the constellations inFIGS.6A and6B). Such updates may also be performed during deployment based on simplified encoders and/or decoders that do not utilize full machine-learning networks, but instead utilize simplified encoding and/or decoding techniques based on results of training a corresponding encoder machine-learning network and decoder machine-learning network.
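As a hedged sketch of such adaptive switching (the two constellation sets below are stand-ins for sets learned during training, and the condition-number rule and its threshold of 10 are assumptions), a deployed system might select a constellation from an estimate of H:

```python
import numpy as np

# Two stand-in constellation sets (placeholders for learned sets,
# cf. FIGS. 6A and 6B) and an assumed condition-number switching rule.
const_a = np.exp(2j * np.pi * np.arange(16) / 16)                          # constant-modulus style
const_b = ((np.arange(16) % 4) - 1.5) + 1j * ((np.arange(16) // 4) - 1.5)  # 16-QAM style

def pick_constellation(H: np.ndarray) -> np.ndarray:
    """Select a set based on how evenly H distributes power across paths;
    the rule and threshold are illustrative assumptions."""
    return const_b if np.linalg.cond(H) < 10 else const_a

points = pick_constellation(np.ones((2, 2)) + 0.1 * np.eye(2))
```

FIGS.6C and6Dillustrate examples of different types of transmit and receive RF signals that may be learned by machine-learning networks for communication over a MIMO channel with the benefit of CSI (i.e., the closed-loop scenario). The signals may correspond to the previously discussed transmitted RF signals412and received RF signals414inFIG.4, above.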
In particular,FIGS.6C and6Dillustrate closed-loop transmit constellations (the upper row of four figures in each ofFIGS.6C and6D) and closed-loop receive constellations (the lower row of four figures in each ofFIGS.6C and6D) for a two-input and two-output (2×2) MIMO channel with 1-bit CSI and 2-bit CSI.FIG.6Cshows transmit and receive constellations over a number of random channel samples for the MIMO channel in which all the entries of the channel transition matrix H are selected at random, whileFIG.6Dshows the constellations for an all-ones H matrix. For the 1-bit CSI scenario shown inFIGS.6C and6D, we see that the transmitter and receiver learn a multi-level constellation scheme, where the first antenna and the second antenna transmit constant modulus encodings at two distinct power levels. For the 2-bit CSI scenario shown inFIGS.6C and6D, we can see that the transmitter learns a complex multi-level transmission scheme, similar to an irregular 16-QAM on each transmitter, and that the receiver learns, for roughly equal power paths, approximately constant modulus constellations. FIG.7illustrates an example of a system700with a transmitter and receiver that may be deployed for learned communication of information over a real-world MIMO communication channel. The system700includes a transmitter702and receiver704that are deployed to communicate over a real-world MIMO channel706. The transmitter702receives input information708to be communicated, and maps the input information708into multiple RF signals712for transmission over multiple transmit antennas. The encoding mapping that is utilized by the transmitter702may be designed based on previous training of a machine-learning network that learned how to encode information into RF signals, using the training described in regards toFIGS.2to4, above. For example, the transmitter702may implement a trained machine-learning network during deployment, or may implement a simplified encoding mapping that utilizes results of training a machine-learning network, as discussed further below. As previously discussed, in some implementations, the transmitter702may include processing (e.g., filtering, modulation, mixing, amplification, D/A conversion, etc.) that generates the RF signals712as analog RF waveforms for transmission. Alternatively, in other implementations, the transmitter702may generate the RF signals712as intermediate representations that are subsequently processed into analog RF waveforms by additional processing such as filtering or modulation for transmission over the MIMO channel706. The receiver704receives multiple RF signals714over multiple receive antennas over the MIMO channel706, and maps the multiple received RF signals714into reconstructed information710. The decoding mapping that is utilized by receiver704may be designed based on previous training of a machine-learning network that learned how to decode RF signals into reconstructed information, using the training described in regards toFIGS.2to4, above. For example, the receiver704may implement a trained machine-learning network during deployment, or may implement a simplified decoding mapping that utilizes results of training a machine-learning network, as discussed further below. In a closed-loop system, the receiver704may also implement a CSI estimator720that generates CSI718based on the received RF signals714, as discussed with reference toFIG.3B, above.
As previously discussed, in some implementations, the receiver704may include processing (e.g., filtering, modulation, amplification, mixing, A/D conversion, etc.) that directly accepts the multiple received RF signals714as analog RF waveforms received over the channel. Alternatively, in other implementations, the receiver704may process the RF signals714as intermediate representations that result from prior processing of multiple analog RF waveforms that were received from the MIMO channel706. In some implementations, the transmitter702and/or receiver704(including the CSI estimator720in closed-loop scenarios) may be updated during deployment, for example by update process716, based on results of communication. Such updates may be based on feedback information that is determined based on results of transmissions and reconstructions during deployment. In some implementations, the system700may be configured to collect information regarding the MIMO channel706and/or regarding performance metrics, for example using channel state and performance estimator722. The channel state and performance estimator722may be configured to detect such information, for example, by detecting a training signal that was transmitted as the transmitted RF signals712and/or based on the CSI generated by the CSI estimator720(which may be implemented as part of the channel state and performance estimator722in some implementations). The channel state and performance estimator722may provide such information via feedback to control various updates to the transmitter702and/or receiver704during deployment, as shown in the example ofFIG.7. Such updates may include updating one or more machine-learning features (in scenarios where the transmitter702and/or receiver704implement machine-learning networks during deployment), or may include updating simplified encoding/decoding/CSI estimation mappings that are utilized by the transmitter702and/or receiver704(in scenarios where the transmitter702and/or receiver704implement simplified encoding/decoding/CSI estimation techniques based on previously-trained machine-learning networks). The feedback that is sent from the CSI estimator720and/or the channel state and performance estimator722may take any suitable form and may be transmitted on any suitable time scale. For example, such feedback may be provided based on estimates obtained from the forward link (transmitter702to receiver704) or obtained from a reverse link (receiver704to transmitter702) to estimate the state of the channel and/or estimates of performance. The feedback information may vary in size depending on various factors, such as the number of modes to choose from, the available bandwidth and latency of the feedback channel, and other considerations. In some instances, this feedback information may be encoded into protocol messages within a wireless system. As an example, the feedback may be generated by the transmitter702transmitting a known training RF signal, and the receiver704(and/or other component in the RF receiver) determining the state of the channel and performance measurements based on comparing the received RF signal with the known transmitted training RF signal. In some implementations, the feedback may only be provided to the receiver to update the receiver704, without necessarily providing the feedback to the transmitter to update the transmitter702(half-model updates).
The conditions of MIMO channel706may change over different time scales, for example depending on the type of environment in which the RF signals are communicated. For instance, the time scale of variations in the MIMO channel706may depend on whether the environment is rural or urban, whether environmental objects are moving quickly or slowly (e.g., specifying coherence time, or correlation time of channel statistics), whether the environment is in an aircraft or spacecraft, or may depend on user density, band allocations, or whether other radio emitters are located nearby. In some instances where the channel coherence time is very long or static (e.g., fixed radios and reflectors), encodings may be specifically learned for these impairments over a long time scale. One example of this might be a fixed-geometry industrial or urban communications environment. In some implementations, the conditions present in the MIMO channel706may be categorized into a number of different modes. The different modes may represent any suitable categorization of channel conditions, such as level of noise, SNR, delay spread, time scale of channel variations, etc. For each of these modes, machine-learning networks in the transmitter702and/or receiver704may have been trained to learn a suitable set of encoding/decoding/CSI estimation techniques, as discussed in regards toFIG.4, above. During deployment, the transmitter702and/or receiver704may be adaptively updated based on the particular mode of the MIMO channel706that is estimated. As shown inFIG.7, in some implementations, a transmission mode controller724may be implemented to decide which mode configuration is to be utilized for the transmitter702and/or receiver704. The transmission mode controller724may utilize feedback from the channel state and performance estimator722. As discussed above, such feedback may be obtained from the forward and/or reverse link, and may be provided to the transmission mode controller724to help decide which mode of operation to select at any given time. In this way, the system700can learn suitable encoding and/or decoding techniques for a range of different channel conditions and then adaptively update the transmitter702and/or receiver704to select a suitable mode under any given channel condition. There are a number of scenarios in which learned communications may be used in real world applications. For example, during training, the transmitter702and/or receiver704may be trained on closed-form analytic models of the MIMO channel706. Given sufficiently accurate and stable analytic models of the channel of interest, efficient representations for communication across the MIMO channel706may be learned and used without any on-line adaptation. Such implementations may be suitable in environments where the real-world MIMO channel706sufficiently corresponds to analytic models, such as channels that vary slowly in time, or are otherwise more stable and predictable. As another example, in scenarios where channels vary more unpredictably in the real world, such as depending on deployment location, conditions, or nearby effects, the system700may perform on-line adaptation and on-line learning of specialized encoding and/or decoding techniques that perform well for the given real-world deployment scenario. In such implementations, updates to the transmitter702and/or receiver704may be performed during deployment, based on real-world measurements of the channel and/or system performance.
Such updates may be performed based on results of objective-achieving strategies that were learned during the training discussed in regards toFIGS.2to4, above. However, if the real-world feedback does not lend itself to an exact analytic expression for the channel transform, then the update process716may utilize approximations, rather than exact analytic solutions, to determine updates for the transmitter702and/or receiver704. For example, in implementations where gradients of an objective function are computed, approximate gradients may be computed, rather than exact derivative calculations. Furthermore, in real-world scenarios, the update process716may additionally consider real-world factors, such as communications cost, latency, and capacity of the feedback channel from the channel state and performance estimator722. In general, more accurate and more extensive feedback allows the update process716to make more effective updates, but at the cost of communications, latency, and bandwidth. Therefore, in deployment scenarios where the transmitter702and/or receiver704are updated based on feedback information, such additional considerations may be factored into the update process716. In some implementations, the transmitter702and/or receiver704may utilize simplified forms of encoding/decoding/CSI estimation mappings that were learned during training. For example, the transmitter702may utilize a simplified lookup table to generate the transmitted RF signals712based on the input information708. Analogously, in some implementations, the receiver704may generate the reconstructed information710and/or CSI718from the received RF signals714by utilizing a distance-based decoding technique, or another simplified decoding technique that is based on a more general decoding mapping learned during training, or that is based on an encoder mapping that was learned during training. As a specific example of such simplified deployment, in some implementations, during training, an encoder machine-learning network may learn a mapping from input information708to RF signals712. The mapping may be, for example, a signal constellation that represents different RF signals712as different points in the constellation corresponding to particular input information708. However, during deployment, the transmitter702may utilize a simplified lookup table (LUT) to map input information708to points on the constellation to generate the RF signals712, based on the training results of the encoder machine-learning network. Analogously, the receiver704may utilize simplified decoding algorithms (e.g., distance-based decoding algorithms) that are based on results of training a decoder machine-learning network, or based on a counterpart trained encoder machine-learning network. In such scenarios, the transmitter702and/or receiver704may be trained (e.g., as an autoencoder) for system design during training, but approximations or compact lookup tables may be utilized, in the transmitter702and/or the receiver704, to deploy and implement the system700in real-world applications. As such, in some implementations, the transmitter702and receiver704that are implemented in a deployed system may not implement a full machine-learning network, but instead may utilize results of encoding/decoding/CSI estimation mappings that were learned by machine-learning networks during training.
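A minimal sketch of this LUT-plus-distance-decoding deployment path (the unit-circle constellation below is an illustrative placeholder, not a trained result) is:

```python
import numpy as np

# A lookup table (standing in for one derived from a trained encoder network)
# maps 4-bit words to constellation points; a distance-based decoder maps a
# received point back to the nearest entry.
LUT = {word: np.exp(2j * np.pi * word / 16) for word in range(16)}

def lut_encode(word: int) -> complex:
    return LUT[word]

def nearest_neighbor_decode(rx_point: complex) -> int:
    # Minimum Euclidean distance over the table entries.
    return min(LUT, key=lambda w: abs(rx_point - LUT[w]))

word = 0b1011
rx = lut_encode(word) + (0.05 + 0.02j)   # received point after mild channel error
assert nearest_neighbor_decode(rx) == word
```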
In some cases, these learned mappings from a neural network may already form very compact and efficient tensor computation expressions which can be deployed efficiently into a baseband processor. FIG.8is a flowchart illustrating an example method800of deploying a system that utilizes techniques that have been learned based on results of training at least one machine-learning network to perform learned communication over a real-world RF channel using multi-antenna transceivers. Such deployment may utilize encoding, decoding, and/or CSI estimation techniques that were previously learned by machine-learning networks during training, for example by using training techniques discussed in regards toFIGS.2to4, above, or similar training techniques. The method800includes determining a transmitter and a receiver, at least one of which is configured to implement at least one machine-learning network that has been trained to communicate over a MIMO communication channel (802). In some scenarios, the deployed transmitter and/or receiver may implement machine-learning networks that were previously trained. Alternatively, in other scenarios, the transmitter and/or receiver may utilize simplified encoding/decoding mappings that are based on results of previously training an encoder machine-learning network and/or decoder machine-learning network and/or CSI machine-learning network, as discussed in regards toFIG.7, above. The method800further includes determining first information for transmission over the MIMO communication channel (804). As discussed above, the first information may be any suitable discrete-time, analog, discrete-valued, or continuous-valued information. The method800further includes using the transmitter to process the first information and generate a plurality of first RF signals (806). As discussed above, the first RF signals may represent analog RF waveforms that are transmitted over the MIMO channel, or may be intermediate representations (e.g., samples, basis coefficients, etc.) that undergo further processing (e.g., filtering, D/A conversion, amplification, modulation, etc.) to generate analog RF waveforms. This encoding process may utilize any suitable mapping, or simplified form of a mapping, from an input information space into an RF signal space that was learned during training an encoder machine-learning network, for example using training techniques discussed in regards toFIG.4, above. In closed-loop scenarios, the transmitter may process the first information by also processing CSI, which may have been generated by the transmitter itself or received via feedback from the receiver, as discussed in regards toFIG.3B, above. The method800further includes transmitting the plurality of first RF signals using respective ones of a plurality of transmit antennas through the MIMO communication channel (808). As discussed in regards to step806, above, transmission of the first RF signals may involve directly transmitting the first RF signals themselves (e.g., if the transmitter has generated the first RF signals as analog RF waveforms suitable for transmission over the channel), or may involve processing the first RF signals to convert them into respective analog RF waveforms for transmission (e.g., using filtering, D/A conversion, modulation, etc.). The transmission may utilize any suitable transmission technique, which may include other features or parameters, for example using multiple antennas, adaptive power control, etc.
The method800further includes receiving a plurality of second RF signals using respective ones of a plurality of receive antennas, each second RF signal of the plurality of second RF signals representing aggregated reception of the plurality of first RF signals having been altered by transmission through the MIMO communication channel (810). In deployment scenarios, the communication occurs over a real-world MIMO channel (in contrast to the training scenarios ofFIG.4, where the channel may be a simulated channel or a real-world channel). As discussed above, the second RF signals may represent analog RF waveforms that are received over the MIMO channel, or may be intermediate representations (e.g., samples, basis coefficients, etc.) that are results of processing (e.g., filtering, sampling, equalizing, etc.) received analog RF waveforms. The method800further includes using the receiver to process the plurality of second RF signals and generate second information as a reconstruction of the first information (812). This decoding process may utilize any suitable mapping, or simplified form of a mapping, from multiple RF signal spaces into reconstructed information space that was learned by a decoder machine-learning network during training, for example using training techniques discussed in regards toFIG.4, above. In closed-loop scenarios, the receiver may also implement a CSI estimator to generate CSI that is fed back to the transmitter. As discussed in regards toFIG.7, above, in some implementations, a deployed system may utilize the received second RF signals (and/or other information resulting from the communication) to generate feedback and update the transmitter and/or the receiver based on real-world channel information and/or performance results. Furthermore, as discussed in regards toFIG.7, above, in some implementations, the transmitter and/or receiver may utilize simplified forms of encoding/decoding/CSI estimation mappings that were learned during training. For example, the transmitter may utilize a simplified lookup table to generate the first RF signals based on the first information. Furthermore, in some implementations, the receiver may utilize a distance-based decoding technique, or another simplified decoding technique that is based on a more general decoding mapping that was learned during training, or that is based on the encoder mapping that was learned during training. As such, in some implementations, the transmitter and receiver that are implemented in a deployed system may not implement a full machine-learning network, but instead may utilize results of encoding/decoding/CSI estimation mappings that were learned by machine-learning networks during training. FIG.9Aillustrates an example of deploying a multi-user downlink system that implements a single machine-learning encoder network and multiple decoders to perform learned communication over a real-world RF channel with multi-antenna transceivers. In some implementations, techniques disclosed herein may be utilized to implement a multi-user MIMO system, wherein different information from multiple users (each utilizing multiple-antenna transceivers) is communicated over a common MIMO channel. The system may be trained to learn encoding and/or decoding techniques for each user that achieve a balance of competing objectives for the multiple users sharing the same MIMO channel.
The example inFIG.9Aillustrates one example of a multi-user implementation, namely a downlink scenario where a base station implements a single multi-user encoder902to encode input information908a,908b, . . . ,908ncorresponding to multiple mobile users and generate a plurality of RF signals912over the MIMO channel906. Multiple decoders904a,904b, . . . ,904nmay be trained and implemented corresponding to multiple devices at the multiple mobile users. Each of the decoders904a,904b, . . . ,904nimplements multiple receive antennas, which receive the RF signals914that are used to generate reconstructed information910a,910b, . . . ,910nfor each of the multiple users. In the general case, multiple base stations can also be combined using one or more multi-user encoder networks in order to implement a distributed multi-user MIMO downlink system. During training, an optimizer916may be implemented to update at least one machine-learning network in the encoder902and/or decoders904a,904b, . . . ,904nto learn suitable encoding/decoding/CSI estimation techniques for the MIMO channel906. In closed-loop scenarios, the CSI estimation may utilize CSI918that is either generated at the encoder902or fed back from the decoders904a,904b, . . . ,904nto the encoder902. FIG.9Billustrates another example of a multi-user implementation, namely an uplink scenario where multiple mobile users transmit to a single base station. The multi-user uplink system implements multiple encoders902a,902b, . . . ,902nat multiple devices of different users to encode input information908a,908b, . . . ,908n, where each device utilizes multiple antennas to generate a plurality of RF signals912over the MIMO channel906. A single machine-learning decoder904is implemented to receive RF signals914that are used to generate reconstructed information910a,910b, . . . ,910nfor each of the multiple users. In the general case, multiple base stations can also be combined using one or more multi-user decoder networks in order to implement a distributed multi-user MIMO uplink system. During training, an optimizer916may be implemented to update at least one machine-learning network in the encoders902a,902b, . . . ,902nand/or the decoder904to learn suitable encoding/decoding/CSI estimation techniques for the MIMO channel906. In closed-loop scenarios, the CSI estimation may utilize CSI918that is either generated at each of the encoders902a,902b, . . . ,902nor fed back from the decoder904to each of the encoders902a,902b, . . . ,902n.
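The downlink arrangement ofFIG.9Acan be sketched in terms of network shapes as follows; the user count, bit widths, antenna counts, and layer sizes are assumptions chosen only to make the structure concrete:

```python
import torch.nn as nn

# Shapes for the FIG. 9A downlink: one shared multi-user encoder consumes the
# concatenated per-user bits; each user device has its own decoder.
N_USERS, BITS_PER_USER, N_TX, N_RX = 2, 4, 4, 2
multiuser_encoder = nn.Sequential(
    nn.Linear(N_USERS * BITS_PER_USER, 64), nn.ReLU(), nn.Linear(64, 2 * N_TX))
user_decoders = nn.ModuleList(
    nn.Sequential(nn.Linear(2 * N_RX, 64), nn.ReLU(), nn.Linear(64, BITS_PER_USER))
    for _ in range(N_USERS))
```

FIG.10is a diagram illustrating an example of a computing system that may be used to implement one or more components of a system that performs learned communication over RF channels. The computing system includes computing device1000and a mobile computing device1050that can be used to implement the techniques described herein. For example, one or more parts of an encoder machine-learning network system or a decoder machine-learning network system could be an example of the system1000described here, such as a computer system implemented in any of the machine-learning networks, devices that access information from the machine-learning networks, or a server that accesses or stores information regarding the encoding and decoding performed by the machine-learning networks. The computing device1000is intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers.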
The mobile computing device1050is intended to represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smart-phones, mobile embedded radio systems, radio diagnostic computing devices, and other similar computing devices. The components shown here, their connections and relationships, and their functions, are meant to be examples only, and are not meant to be limiting. The computing device1000includes a processor1002, a memory1004, a storage device1006, a high-speed interface1008connecting to the memory1004and multiple high-speed expansion ports1010, and a low-speed interface1012connecting to a low-speed expansion port1014and the storage device1006. The processor1002, the memory1004, the storage device1006, the high-speed interface1008, the high-speed expansion ports1010, and the low-speed interface1012are interconnected using various buses, and may be mounted on a common motherboard or in other manners as appropriate. The processor1002can process instructions for execution within the computing device1000, including instructions stored in the memory1004or on the storage device1006to display graphical information for a GUI on an external input/output device, such as a display1016coupled to the high-speed interface1008. In other implementations, multiple processors and/or multiple buses may be used, as appropriate, along with multiple memories and types of memory. In addition, multiple computing devices may be connected, with each device providing portions of the operations (e.g., as a server bank, a group of blade servers, or a multi-processor system). In some implementations, the processor1002is a single-threaded processor. In some implementations, the processor1002is a multi-threaded processor. In some implementations, the processor1002is a quantum computer. The memory1004stores information within the computing device1000. In some implementations, the memory1004is a volatile memory unit or units. In some implementations, the memory1004is a non-volatile memory unit or units. The memory1004may also be another form of computer-readable medium, such as a magnetic or optical disk. The storage device1006is capable of providing mass storage for the computing device1000. In some implementations, the storage device1006may be or include a computer-readable medium, such as a floppy disk device, a hard disk device, an optical disk device, a tape device, a flash memory or other similar solid-state memory device, or an array of devices, including devices in a storage area network or other configurations. Instructions can be stored in an information carrier. The instructions, when executed by one or more processing devices (for example, processor1002), perform one or more methods, such as those described above. The instructions can also be stored by one or more storage devices such as computer- or machine-readable mediums (for example, the memory1004, the storage device1006, or memory on the processor1002). The high-speed interface1008manages bandwidth-intensive operations for the computing device1000, while the low-speed interface1012manages lower bandwidth-intensive operations. Such allocation of functions is an example only. In some implementations, the high-speed interface1008is coupled to the memory1004, the display1016(e.g., through a graphics processor or accelerator), and to the high-speed expansion ports1010, which may accept various expansion cards (not shown).
In this implementation, the low-speed interface1012is coupled to the storage device1006and the low-speed expansion port1014. The low-speed expansion port1014, which may include various communication ports (e.g., USB, Bluetooth, Ethernet, wireless Ethernet), may be coupled to one or more input/output devices, such as a keyboard, a pointing device, a scanner, or a networking device such as a switch or router, e.g., through a network adapter. The computing device1000may be implemented in a number of different forms, as shown in the figure. For example, it may be implemented as a standard server1020, or multiple times in a group of such servers. In addition, it may be implemented in a personal computer such as a laptop computer1022. It may also be implemented as part of a rack server system1024. Alternatively, components from the computing device1000may be combined with other components in a mobile device (not shown), such as a mobile computing device1050. Each of such devices may include one or more of the computing device1000and the mobile computing device1050, and an entire system may be made up of multiple computing devices communicating with each other. The mobile computing device1050includes a processor1052, a memory1064, an input/output device such as a display1054, a communication interface1066, and a transceiver1068, among other components. The mobile computing device1050may also be provided with a storage device, such as a micro-drive or other device, to provide additional storage. The processor1052, the memory1064, the display1054, the communication interface1066, and the transceiver1068are interconnected using various buses, and several of the components may be mounted on a common motherboard or in other manners as appropriate. The processor1052can execute instructions within the mobile computing device1050, including instructions stored in the memory1064. The processor1052may be implemented as a chipset of chips that include separate and multiple analog and digital processors. The processor1052may provide, for example, for coordination of the other components of the mobile computing device1050, such as control of user interfaces, applications run by the mobile computing device1050, and wireless communication by the mobile computing device1050. The processor1052may communicate with a user through a control interface1058and a display interface1056coupled to the display1054. The display1054may be, for example, a TFT (Thin-Film-Transistor Liquid Crystal Display) display or an OLED (Organic Light Emitting Diode) display, or other appropriate display technology. The display interface1056may include appropriate circuitry for driving the display1054to present graphical and other information to a user. The control interface1058may receive commands from a user and convert them for submission to the processor1052. In addition, an external interface1062may provide communication with the processor1052, so as to enable near area communication of the mobile computing device1050with other devices. The external interface1062may provide, for example, for wired communication in some implementations, or for wireless communication in other implementations, and multiple interfaces may also be used. The memory1064stores information within the mobile computing device1050. The memory1064can be implemented as one or more of a computer-readable medium or media, a volatile memory unit or units, or a non-volatile memory unit or units.
An expansion memory1074may also be provided and connected to the mobile computing device1050through an expansion interface1072, which may include, for example, a SIMM (Single In Line Memory Module) card interface. The expansion memory1074may provide extra storage space for the mobile computing device1050, or may also store applications or other information for the mobile computing device1050. Specifically, the expansion memory1074may include instructions to carry out or supplement the processes described above, and may include secure information also. Thus, for example, the expansion memory1074may be provided as a security module for the mobile computing device1050, and may be programmed with instructions that permit secure use of the mobile computing device1050. In addition, secure applications may be provided via the SIMM cards, along with additional information, such as placing identifying information on the SIMM card in a non-hackable manner. The memory may include, for example, flash memory and/or NVRAM memory (non-volatile random access memory), as discussed below. In some implementations, instructions are stored in an information carrier such that the instructions, when executed by one or more processing devices (for example, processor1052), perform one or more methods, such as those described above. The instructions can also be stored by one or more storage devices, such as one or more computer- or machine-readable mediums (for example, the memory1064, the expansion memory1074, or memory on the processor1052). In some implementations, the instructions can be received in a propagated signal, for example, over the transceiver1068or the external interface1062. The mobile computing device1050may communicate wirelessly through the communication interface1066, which may include digital signal processing circuitry where necessary. The communication interface1066may provide for communications under various modes or protocols, such as GSM (Global System for Mobile communications) voice calls, SMS (Short Message Service), EMS (Enhanced Messaging Service), or MMS messaging (Multimedia Messaging Service), CDMA (code division multiple access), TDMA (time division multiple access), PDC (Personal Digital Cellular), WCDMA (Wideband Code Division Multiple Access), CDMA2000, or GPRS (General Packet Radio Service), LTE, 5G/6G cellular, among others. Such communication may occur, for example, through the transceiver1068using a radio frequency. In addition, short-range communication may occur, such as using a Bluetooth, Wi-Fi, or other such transceiver (not shown). In addition, a GPS (Global Positioning System) receiver module1070may provide additional navigation- and location-related wireless data to the mobile computing device1050, which may be used as appropriate by applications running on the mobile computing device1050. The mobile computing device1050may also communicate audibly using an audio codec1060, which may receive spoken information from a user and convert it to usable digital information. The audio codec1060may likewise generate audible sound for a user, such as through a speaker, e.g., in a handset of the mobile computing device1050. Such sound may include sound from voice telephone calls, may include recorded sound (e.g., voice messages, music files, etc.) and may also include sound generated by applications operating on the mobile computing device1050. The mobile computing device1050may be implemented in a number of different forms, as shown in the figure.
For example, it may be implemented as a cellular telephone1080. It may also be implemented as part of a smart-phone1082, personal digital assistant, or other similar mobile device. The term “system” as used in this disclosure may encompass all apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, or multiple processors or computers. A processing system can include, in addition to hardware, code that creates an execution environment for the computer program in question, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, or a combination of one or more of them. A computer program (also known as a program, software, software application, script, executable logic, or code) can be written in any form of programming language, including compiled or interpreted languages, or declarative or procedural languages, and it can be deployed in any form, including as a standalone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A computer program does not necessarily correspond to a file in a file system. A program can be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language document), in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub programs, or portions of code). A computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network. Computer readable media suitable for storing computer program instructions and data include all forms of non-volatile or volatile memory, media and memory devices, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices; magnetic disks, e.g., internal hard disks or removable disks or magnetic tapes; magneto optical disks; and CD-ROM and DVD-ROM disks. The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry. Sometimes a server is a general-purpose computer, and sometimes it is a custom-tailored special purpose electronic device, and sometimes it is a combination of these things. Implementations can include a back end component, e.g., a data server, or a middleware component, e.g., an application server, or a front end component, e.g., a client computer having a graphical user interface or a Web browser through which a user can interact with an implementation of the subject matter described in this specification, or any combination of one or more such back end, middleware, or front end components. The components of the system can be interconnected by any form or medium of digital data communication, e.g., a communication network. Examples of communication networks include a local area network (“LAN”) and a wide area network (“WAN”), e.g., the Internet. The features described can be implemented in digital electronic circuitry, or in computer hardware, firmware, software, or in combinations of them.
The apparatus can be implemented in a computer program product tangibly embodied in an information carrier, e.g., in a machine-readable storage device, for execution by a programmable processor; and method steps can be performed by a programmable processor executing a program of instructions to perform functions of the described implementations by operating on input data and generating output. The described features can be implemented advantageously in one or more computer programs that are executable on a programmable system including at least one programmable processor coupled to receive data and instructions from, and to transmit data and instructions to, a data storage system, at least one input device, and at least one output device. A computer program is a set of instructions that can be used, directly or indirectly, in a computer to perform a certain activity or bring about a certain result. A computer program can be written in any form of programming language, including compiled or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. While this disclosure contains many specific implementation details, these should not be construed as limitations on the scope of any invention or of what may be claimed, but rather as descriptions of features that may be specific to particular implementations of particular inventions. Certain features that are described in this disclosure in the context of separate implementations can also be implemented in combination in a single implementation. Conversely, various features that are described in the context of a single implementation can also be implemented in multiple implementations separately or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination. Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In certain circumstances, multitasking and parallel processing may be advantageous. Moreover, the separation of various system modules and components in the implementations described above should not be understood as requiring such separation in all implementations, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products.
DETAILED DESCRIPTION As will be appreciated by one skilled in the art, aspects of the embodiments may be embodied as a system, apparatus, method, or program product. Accordingly, embodiments may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects. For example, the disclosed embodiments may be implemented as a hardware circuit comprising custom very-large-scale integration (“VLSI”) circuits or gate arrays, off-the-shelf semiconductors such as logic chips, transistors, or other discrete components. The disclosed embodiments may also be implemented in programmable hardware devices such as field programmable gate arrays, programmable array logic, programmable logic devices, or the like. As another example, the disclosed embodiments may include one or more physical or logical blocks of executable code which may, for instance, be organized as an object, procedure, or function. Furthermore, embodiments may take the form of a program product embodied in one or more computer readable storage devices storing machine readable code, computer readable code, and/or program code, referred to hereafter as code. The storage devices may be tangible, non-transitory, and/or non-transmission. The storage devices may not embody signals. In a certain embodiment, the storage devices only employ signals for accessing code. Any combination of one or more computer readable media may be utilized. The computer readable medium may be a computer readable storage medium. The computer readable storage medium may be a storage device storing the code. The storage device may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, holographic, micromechanical, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the storage device would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random-access memory (“RAM”), a read-only memory (“ROM”), an erasable programmable read-only memory (“EPROM” or Flash memory), a portable compact disc read-only memory (“CD-ROM”), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. Code for carrying out operations for embodiments may be any number of lines and may be written in any combination of one or more programming languages including an object-oriented programming language such as Python, Ruby, Java, Smalltalk, C++, or the like, and conventional procedural programming languages, such as the “C” programming language, or the like, and/or machine languages such as assembly languages. The code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server.
In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (“LAN”), wireless LAN (“WLAN”), or a wide area network (“WAN”), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider (“ISP”)). Furthermore, the described features, structures, or characteristics of the embodiments may be combined in any suitable manner. In the following description, numerous specific details are provided, such as examples of programming, software modules, user selections, network transactions, database queries, database structures, hardware modules, hardware circuits, hardware chips, etc., to provide a thorough understanding of embodiments. One skilled in the relevant art will recognize, however, that embodiments may be practiced without one or more of the specific details, or with other methods, components, materials, and so forth. In other instances, well-known structures, materials, or operations are not shown or described in detail to avoid obscuring aspects of an embodiment. Reference throughout this specification to “one embodiment,” “an embodiment,” or similar language means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment. Thus, appearances of the phrases “in one embodiment,” “in an embodiment,” and similar language throughout this specification may, but do not necessarily, all refer to the same embodiment, but mean “one or more but not all embodiments” unless expressly specified otherwise. The terms “including,” “comprising,” “having,” and variations thereof mean “including but not limited to,” unless expressly specified otherwise. An enumerated listing of items does not imply that any or all of the items are mutually exclusive, unless expressly specified otherwise. The terms “a,” “an,” and “the” also refer to “one or more” unless expressly specified otherwise. As used herein, a list with a conjunction of “and/or” includes any single item in the list or a combination of items in the list. For example, a list of A, B and/or C includes only A, only B, only C, a combination of A and B, a combination of B and C, a combination of A and C or a combination of A, B and C. As used herein, a list using the terminology “one or more of” includes any single item in the list or a combination of items in the list. For example, one or more of A, B and C includes only A, only B, only C, a combination of A and B, a combination of B and C, a combination of A and C or a combination of A, B and C. As used herein, a list using the terminology “one of” includes one and only one of any single item in the list. For example, “one of A, B and C” includes only A, only B or only C and excludes combinations of A, B and C. As used herein, “a member selected from the group consisting of A, B, and C” includes one and only one of A, B, or C, and excludes combinations of A, B, and C. As used herein, “a member selected from the group consisting of A, B, and C and combinations thereof” includes only A, only B, only C, a combination of A and B, a combination of B and C, a combination of A and C or a combination of A, B and C. Aspects of the embodiments are described below with reference to schematic flowchart diagrams and/or schematic block diagrams of methods, apparatuses, systems, and program products according to embodiments.
It will be understood that each block of the schematic flowchart diagrams and/or schematic block diagrams, and combinations of blocks in the schematic flowchart diagrams and/or schematic block diagrams, can be implemented by code. This code may be provided to a processor of a general-purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart diagrams and/or block diagrams. The code may also be stored in a storage device that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the storage device produce an article of manufacture including instructions which implement the function/act specified in the flowchart diagrams and/or block diagrams. The code may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus, or other devices to produce a computer implemented process such that the code which executes on the computer or other programmable apparatus provides processes for implementing the functions/acts specified in the flowchart diagrams and/or block diagrams. The flowchart diagrams and/or block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of apparatuses, systems, methods, and program products according to various embodiments. In this regard, each block in the flowchart diagrams and/or block diagrams may represent a module, segment, or portion of code, which includes one or more executable instructions of the code for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the Figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. Other steps and methods may be conceived that are equivalent in function, logic, or effect to one or more blocks, or portions thereof, of the illustrated Figures. Although various arrow types and line types may be employed in the flowchart and/or block diagrams, they are understood not to limit the scope of the corresponding embodiments. Indeed, some arrows or other connectors may be used to indicate only the logical flow of the depicted embodiment. For instance, an arrow may indicate a waiting or monitoring period of unspecified duration between enumerated steps of the depicted embodiment. It will also be noted that each block of the block diagrams and/or flowchart diagrams, and combinations of blocks in the block diagrams and/or flowchart diagrams, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and code. The description of elements in each figure may refer to elements of preceding figures. Like numbers refer to like elements in all figures, including alternate embodiments of like elements. Generally, the present disclosure describes systems, methods, and apparatus for determining a precoder for wireless communications.
In certain embodiments, the methods may be performed using computer code embedded on a computer-readable medium. In certain embodiments, an apparatus or system may include a computer-readable medium containing computer-readable code which, when executed by a processor, causes the apparatus or system to perform at least a portion of the below described solutions. Increased process automation, the proliferation of streaming services, and advances in immersive digital experiences expose wireless communications systems to heterogeneous and stringent traffic requirements in terms of reduced latency, higher spectral efficiency, and increased robustness. However, such requirements often compete for physical communications resources to deliver on the necessary quality of service demands of various applications. To this end, heterogeneous traffic streams originating from the same point of transmission become more and more apparent in modern communications systems. For instance, high-rate and low-to-medium latency data streams associated with multi-object audio/video content or immersive 360-degree/3D content are paired with low-rate, reduced-latency control and metadata streams within eXtended Reality (“XR”) applications. The latter are necessary for the receiver side to offer a high quality of experience to users irrespective of the wireless channel conditions. Similarly, in autonomous operation control and consensus loops, combinations of low-latency, highly reliable control streams are paired with high data rate streams of sensor inputs (e.g., point cloud data, 360° video, etc.) of a node. Alternatively, cognitive radio systems may exhibit traffic of high heterogeneity, where high data rate/massive machine type communications streams are paired with low latency traffic for control and metadata signaling among radio nodes and relays. Therefore, serving greater stream-level heterogeneity and stricter timing requirements with increased determinism on the radio link is a challenge. Furthermore, it is often the case that the data streams arrive already source encoded at the radio transmitter, such that potential further inter-stream correlations cannot be exploited in practical system realizations. Consequently, it is of high interest to consider methods and mechanisms to effectively multiplex multi-stream information for point-to-point (“P2P”) wireless transmissions (e.g., from a base station, e.g., a Node B (“NB”), a 5G Node B (“gNB”), an evolved Node B (“eNB”), and/or the like, to a UE/customer premises equipment (“CPE”), UE to UE, etc.) across heterogeneous traffic requirements, and to provide spectrally efficient tools for higher layer procedures to deliver consistent quality of service to various applications. In one embodiment, the subject matter herein proposes an effective generic precoding method for a transmitter terminal applicable to generic P2P links, jointly optimized for single-stream overloading or multi-stream non-orthogonal combining and for interference management. The proposed method unlocks additional degrees of freedom for multiplexing beyond the Nyquist rate for heterogeneous requirements of rate, latency, and reliability.
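To make the overloading idea concrete before the formal treatment that follows, the sketch below compresses N information symbols onto M < N physical resources with a wide precoding matrix and exposes the managed self-interference this induces. It is a minimal illustration only: the truncated, column-normalized DFT codebook, the dimensions, and all variable names are assumptions chosen for the example, not the disclosure's exact design.

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 8, 12  # physical resources M < multiplexed symbols N (illustrative)

# Illustrative spherical codebook: first M rows of an N-point DFT, unit-norm columns.
S = np.exp(-2j * np.pi * np.outer(np.arange(M), np.arange(N)) / N) / np.sqrt(M)

s = rng.choice(np.array([1 + 1j, -1 + 1j, 1 - 1j, -1 - 1j]), size=N) / np.sqrt(2)  # QPSK
x = S @ s  # overloaded transmit vector occupying only M resources

# Matched-filter view at the receiver: desired symbols plus managed self-interference.
z = S.conj().T @ x
print("overloading rate N/M:", N / M)
print("mean self-interference power:", np.mean(np.abs(z - s) ** 2))
```

The N/M ratio is the rate expansion the precoder buys; the nonzero residual term is exactly the interference that the codebook designs discussed later seek to minimize.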
The proposed solution, in one embodiment, also describes an efficient realization of the above method via complex-valued linear transformations as harmonic spherical codes, or as a generic realization via linear spike spherical codes, together with the associated signaling procedures necessary to communicate precoding configurations between a transmitter node/terminal and a receiver node/terminal for different scenario realizations. In one embodiment, the proposed solution in this disclosure targets flexible spherical codebooks in their overloaded (compressive) representations for heterogeneous communications requirements. In one embodiment, a spherical codebook design for joint synchronous P2P link-level overloading and self-interference management is disclosed in which information symbols are represented by superposed complex-valued codewords on a unit hypersphere, exploiting all available orthogonal physical degrees of freedom. The codewords are realized either as harmonic linear transforms based on truncated, power- and interference-optimized Discrete Fourier Transform (“DFT”) matrices, given the available signal space dimensionality and a desired overloading rate, and/or as approximate spherical linear codes yielded by linear transforms optimized with respect to the power spreading potential of the codewords and the overloading interference. In one embodiment, a spherical codebook is used as a precoder for multiplexing information symbols beyond the orthogonal capacity of the single communication link. The proposed method transforms the single P2P communication link into a multiple access channel for single/multiple data streams of information sourced at the same device. The method expands the available signal space with virtual degrees of freedom at the cost of managed, minimized interference, increasing link spectral efficiency as a sum-rate and exploiting existent orthogonal multi-user access schemes. In one embodiment, the additional degrees of freedom and the achievable capacity region gained may benefit single data stream transmissions (e.g., the proposed precoding by the spherical codebook in overloaded configuration for a single data stream acts as a faster-than-Nyquist (“FTN”) signaling mechanism) at higher rates, closer to the Shannon capacity than conventional orthogonal discrete coded realizations, as well as synchronous non-orthogonal multiplexing at one transmitter for multiple data streams whose rates are split to benefit heterogeneous constraints of minimal rate, maximum latency, and minimal reliability of individual streams, applicable to combinations such as enhanced mobile broadband (“eMBB”) and ultra-reliable low-latency communications (“URLLC”) data streams, massive machine-type communications (“mMTC”) and URLLC data streams, and eMBB and mMTC data streams. In one embodiment, an associated signaling apparatus of the precoder and single/multiple stream multiplexing configurations is disclosed to aid receivers, as part of the data channel demodulation reference signal (“DM-RS”) or as part of a generic control information signal (“xCI”) over control channels. FIG.1depicts a wireless communication system100for determining a precoder for wireless communications, according to embodiments of the disclosure. In one embodiment, the wireless communication system100includes at least one remote unit105, a Fifth-Generation Radio Access Network (“5G-RAN”)115, and a mobile core network140. The 5G-RAN115and the mobile core network140form a mobile communication network.
The 5G-RAN115may be composed of a 3GPP access network120containing at least one cellular base unit121and/or a non-3GPP access network130containing at least one access point131. The remote unit105communicates with the 3GPP access network120using 3GPP communication links123and/or communicates with the non-3GPP access network130using non-3GPP communication links133. Even though a specific number of remote units105, 3GPP access networks120, cellular base units121, 3GPP communication links123, non-3GPP access networks130, access points131, non-3GPP communication links133, and mobile core networks140are depicted inFIG.1, one of skill in the art will recognize that any number of remote units105, 3GPP access networks120, cellular base units121, 3GPP communication links123, non-3GPP access networks130, access points131, non-3GPP communication links133, and mobile core networks140may be included in the wireless communication system100. In one implementation, the RAN120is compliant with the 5G system specified in the Third Generation Partnership Project (“3GPP”) specifications. For example, the RAN120may be a NextGen RAN (“NG-RAN”), implementing New Radio (“NR”) Radio Access Technology (“RAT”) and/or Long Term Evolution (“LTE”) RAT. In another example, the RAN120may include non-3GPP RAT (e.g., Wi-Fi® or Institute of Electrical and Electronics Engineers (“IEEE”) 802.11-family compliant WLAN). In another implementation, the RAN120is compliant with the LTE system specified in the 3GPP specifications. More generally, however, the wireless communication system100may implement some other open or proprietary communication network, for example Worldwide Interoperability for Microwave Access (“WiMAX”) or IEEE 802.16-family standards, among other networks. The present disclosure is not intended to be limited to the implementation of any particular wireless communication system architecture or protocol. In one embodiment, the remote units105may include computing devices, such as desktop computers, laptop computers, personal digital assistants (“PDAs”), tablet computers, smart phones, smart televisions (e.g., televisions connected to the Internet), smart appliances (e.g., appliances connected to the Internet), set-top boxes, game consoles, security systems (including security cameras), vehicle on-board computers, network devices (e.g., routers, switches, modems), or the like. In some embodiments, the remote units105include wearable devices, such as smart watches, fitness bands, optical head-mounted displays, or the like. Moreover, the remote units105may be referred to as the UEs, subscriber units, mobiles, mobile stations, users, terminals, mobile terminals, fixed terminals, subscriber stations, user terminals, wireless transmit/receive unit (“WTRU”), a device, or by other terminology used in the art. In various embodiments, the remote unit105includes a subscriber identity and/or identification module (“SIM”) and the mobile equipment (“ME”) providing mobile termination functions (e.g., radio transmission, handover, speech encoding and decoding, error detection and correction, signaling and access to the SIM). In certain embodiments, the remote unit105may include a terminal equipment (“TE”) and/or be embedded in an appliance or device (e.g., a computing device, as described above). The remote units105may communicate directly with one or more of the cellular base units121in the 3GPP access network120via uplink (“UL”) and downlink (“DL”) communication signals. 
Furthermore, the UL and DL communication signals may be carried over the 3GPP communication links123. Similarly, the remote units105may communicate with one or more access points131in the non-3GPP access network(s)130via UL and DL communication signals carried over the non-3GPP communication links133. Here, the access networks120and130are intermediate networks that provide the remote units105with access to the mobile core network140. In some embodiments, the remote units105communicate with a remote host (e.g., in the data network150or in the data network160) via a network connection with the mobile core network140. For example, an application107(e.g., web browser, media client, telephone and/or Voice-over-Internet-Protocol (“VoIP”) application) in a remote unit105may trigger the remote unit105to establish a protocol data unit (“PDU”) session (or other data connection) with the mobile core network140via the 5G-RAN115(i.e., via the 3GPP access network120and/or non-3GPP network130). The mobile core network140then relays traffic between the remote unit105and the remote host using the PDU session. The PDU session represents a logical connection between the remote unit105and a User Plane Function (“UPF”)141. In order to establish the PDU session (or packet data network (“PDN”) connection), the remote unit105must be registered with the mobile core network140(also referred to as “attached to the mobile core network” in the context of a Fourth Generation (“4G”) system). Note that the remote unit105may establish one or more PDU sessions (or other data connections) with the mobile core network140. As such, the remote unit105may have at least one PDU session for communicating with the packet data network150. Additionally or alternatively, the remote unit105may have at least one PDU session for communicating with the packet data network160. The remote unit105may establish additional PDU sessions for communicating with other data networks and/or other communication peers. In the context of a 5G system (“5GS”), the term “PDU Session” refers to a data connection that provides end-to-end (“E2E”) user plane (“UP”) connectivity between the remote unit105and a specific Data Network (“DN”) through the UPF141. A PDU Session supports one or more Quality of Service (“QoS”) Flows. In certain embodiments, there may be a one-to-one mapping between a QoS Flow and a QoS profile, such that all packets belonging to a specific QoS Flow have the same 5G QoS Identifier (“5QI”). In the context of a 4G/LTE system, such as the Evolved Packet System (“EPS”), a PDN connection (also referred to as an EPS session) provides E2E UP connectivity between the remote unit and a PDN. The PDN connectivity procedure establishes an EPS Bearer, i.e., a tunnel between the remote unit105and a Packet Gateway (“PGW”, not shown) in the mobile core network140. In certain embodiments, there is a one-to-one mapping between an EPS Bearer and a QoS profile, such that all packets belonging to a specific EPS Bearer have the same QoS Class Identifier (“QCI”). As described in greater detail below, the remote unit105may use a first data connection (e.g., PDU Session) established with the first mobile core network130to establish a second data connection (e.g., part of a second PDU session) with the second mobile core network140. When establishing a data connection (e.g., PDU session) with the second mobile core network140, the remote unit105uses the first data connection to register with the second mobile core network140.
The cellular base units121may be distributed over a geographic region. In certain embodiments, a cellular base unit121may also be referred to as an access terminal, a base, a base station, a Node-B (“NB”), an Evolved Node B (abbreviated as eNodeB or “eNB,” also known as Evolved Universal Terrestrial Radio Access Network (“E-UTRAN”) Node B), a 5G/NR Node B (“gNB”), a Home Node-B, a relay node, a device, or by any other terminology used in the art. The cellular base units121are generally part of a radio access network (“RAN”), such as the 3GPP access network120, that may include one or more controllers communicably coupled to one or more corresponding cellular base units121. These and other elements of the radio access network are not illustrated but are generally well known by those having ordinary skill in the art. The cellular base units121connect to the mobile core network140via the 3GPP access network120. The cellular base units121may serve a number of remote units105within a serving area, for example, a cell or a cell sector, via a 3GPP wireless communication link123. The cellular base units121may communicate directly with one or more of the remote units105via communication signals. Generally, the cellular base units121transmit DL communication signals to serve the remote units105in the time, frequency, and/or spatial domain. Furthermore, the DL communication signals may be carried over the 3GPP communication links123. The 3GPP communication links123may be any suitable carrier in licensed or unlicensed radio spectrum. The 3GPP communication links123facilitate communication between one or more of the remote units105and/or one or more of the cellular base units121. Note that during NR operation on unlicensed spectrum (referred to as “NR-U”), the base unit121and the remote unit105communicate over unlicensed (i.e., shared) radio spectrum. The non-3GPP access networks130may be distributed over a geographic region. Each non-3GPP access network130may serve a number of remote units105within a serving area. An access point131in a non-3GPP access network130may communicate directly with one or more remote units105by receiving UL communication signals and transmitting DL communication signals to serve the remote units105in the time, frequency, and/or spatial domain. Both DL and UL communication signals are carried over the non-3GPP communication links133. The 3GPP communication links123and non-3GPP communication links133may employ different frequencies and/or different communication protocols. In various embodiments, an access point131may communicate using unlicensed radio spectrum. The mobile core network140may provide services to a remote unit105via the non-3GPP access networks130, as described in greater detail herein. In some embodiments, a non-3GPP access network130connects to the mobile core network140via an interworking entity135. The interworking entity135provides an interworking between the non-3GPP access network130and the mobile core network140. The interworking entity135supports connectivity via the “N2” and “N3” interfaces. As depicted, both the 3GPP access network120and the interworking entity135communicate with the AMF143using a “N2” interface. The 3GPP access network120and interworking entity135also communicate with the UPF141using a “N3” interface. While depicted as outside the mobile core network140, in other embodiments the interworking entity135may be a part of the core network.
While depicted as outside the non-3GPP RAN130, in other embodiments the interworking entity135may be a part of the non-3GPP RAN130. In certain embodiments, a non-3GPP access network130may be controlled by an operator of the mobile core network140and may have direct access to the mobile core network140. Such a non-3GPP AN deployment is referred to as a “trusted non-3GPP access network.” A non-3GPP access network130is considered as “trusted” when it is operated by the 3GPP operator, or a trusted partner, and supports certain security features, such as strong air-interface encryption. In contrast, a non-3GPP AN deployment that is not controlled by an operator (or trusted partner) of the mobile core network140, does not have direct access to the mobile core network140, or does not support the certain security features is referred to as a “non-trusted” non-3GPP access network. An interworking entity135deployed in a trusted non-3GPP access network130may be referred to herein as a Trusted Network Gateway Function (“TNGF”). An interworking entity135deployed in a non-trusted non-3GPP access network130may be referred to herein as a non-3GPP interworking function (“N3IWF”). While depicted as a part of the non-3GPP access network130, in some embodiments the N3IWF may be a part of the mobile core network140or may be located in the data network150. In one embodiment, the mobile core network140is a 5G core (“5GC”) or the evolved packet core (“EPC”), which may be coupled to a data network150, like the Internet and private data networks, among other data networks. A remote unit105may have a subscription or other account with the mobile core network140. Each mobile core network140belongs to a single public land mobile network (“PLMN”). The present disclosure is not intended to be limited to the implementation of any particular wireless communication system architecture or protocol. The mobile core network140includes several network functions (“NFs”). As depicted, the mobile core network140includes at least one UPF141. The mobile core network140also includes multiple control plane functions including, but not limited to, an Access and Mobility Management Function (“AMF”)143that serves the 5G-RAN115, a Session Management Function (“SMF”)145, a Policy Control Function (“PCF”)147, an Authentication Server Function (“AUSF”)148, a Unified Data Management (“UDM”) and Unified Data Repository function (“UDR”). The UPF(s)141is responsible for packet routing and forwarding, packet inspection, QoS handling, and external PDU session for interconnecting Data Network (“DN”), in the 5G architecture. The AMF143is responsible for termination of non-access stratum (“NAS”) signaling, NAS ciphering & integrity protection, registration management, connection management, mobility management, access authentication and authorization, and security context management. The SMF145is responsible for session management (i.e., session establishment, modification, release), remote unit (i.e., UE) IP address allocation & management, DL data notification, and traffic steering configuration for the UPF for proper traffic routing. The PCF147is responsible for the unified policy framework, providing policy rules to control plane (“CP”) functions, and access to subscription information for policy decisions in the UDR. The AUSF148acts as an authentication server. The UDM is responsible for generation of Authentication and Key Agreement (“AKA”) credentials, user identification handling, access authorization, and subscription management.
The UDR is a repository of subscriber information and can be used to service a number of network functions. For example, the UDR may store subscription data, policy-related data, subscriber-related data that is permitted to be exposed to third party applications, and the like. In some embodiments, the UDM is co-located with the UDR, depicted as combined entity “UDM/UDR”149. In various embodiments, the mobile core network140may also include a Network Exposure Function (“NEF”) (which is responsible for making network data and resources easily accessible to customers and network partners, e.g., via one or more APIs), a Network Repository Function (“NRF”) (which provides NF service registration and discovery, enabling NFs to identify appropriate services in one another and communicate with each other over Application Programming Interfaces (“APIs”)), or other NFs defined for the 5GC. In certain embodiments, the mobile core network140may include an authentication, authorization, and accounting (“AAA”) server. In various embodiments, the mobile core network140supports different types of mobile data connections and different types of network slices, wherein each mobile data connection utilizes a specific network slice. Here, a “network slice” refers to a portion of the mobile core network140optimized for a certain traffic type or communication service. A network instance may be identified by a single Network Slice Selection Assistance Information (“S-NSSAI”), while the set of network slices which the remote unit105is authorized to use is identified by NSSAI. In certain embodiments, the various network slices may include separate instances of network functions, such as the SMF and UPF141. In some embodiments, the different network slices may share some common network functions, such as the AMF143. The different network slices are not shown inFIG.1for ease of illustration, but their support is assumed. Although specific numbers and types of network functions are depicted inFIG.1, one of skill in the art will recognize that any number and type of network functions may be included in the mobile core network140. Moreover, where the mobile core network140comprises an EPC, the depicted network functions may be replaced with appropriate EPC entities, such as a Mobility Management Entity (“MME”), Serving Gateway (“S-GW”), PDN Gateway (“P-GW”), Home Subscriber Server (“HSS”), and the like. WhileFIG.1depicts components of a 5G RAN and a 5G core network, the described embodiments apply to other types of communication networks and RATs, including IEEE 802.11 variants, GSM, GPRS, UMTS, LTE variants, CDMA 2000, Bluetooth, ZigBee, Sigfox, and the like. For example, in a 4G/LTE variant involving an EPC, the AMF143may be mapped to an MME, the SMF mapped to a control plane portion of a P-GW and/or to an MME, the UPF141may be mapped to an S-GW and a user plane portion of the P-GW, the UDM/UDR149may be mapped to an HSS, etc.; this mapping is restated compactly in the sketch below. As depicted, a remote unit105(e.g., a UE) may connect to the mobile core network (e.g., to a 5G mobile communication network) via two types of accesses: (1) via 3GPP access network120and (2) via a non-3GPP access network130. The first type of access (e.g., 3GPP access network120) uses a 3GPP-defined type of wireless communication (e.g., NG-RAN) and the second type of access (e.g., non-3GPP access network130) uses a non-3GPP-defined type of wireless communication (e.g., WLAN).
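As a compact restatement of the 5GC-to-EPC mapping described above, the following lookup table may be handy; the labels are informal and illustrative, not standardized identifiers.

```python
# Informal restatement of the 5GC-to-EPC mapping described above.
# Keys and values are descriptive labels only, not standardized identifiers.
NF_5GC_TO_EPC = {
    "AMF": ["MME"],
    "SMF": ["P-GW (control plane)", "MME"],
    "UPF": ["S-GW", "P-GW (user plane)"],
    "UDM/UDR": ["HSS"],
}
for nf, epc_entities in NF_5GC_TO_EPC.items():
    print(f"{nf:8s} -> {', '.join(epc_entities)}")
```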
The 5G-RAN115refers to any type of 5G access network that can provide access to the mobile core network140, including the 3GPP access network120and the non-3GPP access network130. As background, general realizations of radio transmission multiplexing of information streams can be widely categorized as:
- interference-free: information symbols pertaining to potentially different data streams are separated over orthogonal degrees of freedom of the signaling domain to avoid self-interference effects;
- interference-managed: information symbols pertaining to potentially different data streams are separated over non-orthogonal degrees of freedom in a consciously designed manner to minimize or control the self-inflicted signaling interference and aid symbol detection at the receiver.
In one embodiment, conventional approaches to practical systems exploit the interference-free concept, e.g., as described in 3GPP TS 38.211 and TS 38.214, even though in an information-theoretic sense, e.g., see El Gamal, A., & Kim, Y. H. (2011), Network information theory, Cambridge University Press, it is known that superposition signaling is beneficial in offering higher spectral efficiency and signaling design degrees of freedom in the case of heterogeneous transmission scenarios. Regarding 3GPP signaling and multiplexing, in one embodiment, the 3GPP systems leverage the orthogonal frequency-domain multiplexing (“OFDM”) physical transport waveform in multiplexing symbols orthogonally across both time and frequency domains in a quest for an interference-free multiplex (e.g., TS 38.211). To this end, a physical layer cyclic prefix buffer may be prepended to time domain OFDM symbols (e.g., of duration T_info = 1/subcarrier-spacing) to counteract multipath effects and collect the interference inflicted by the propagation media; a short numerical sketch of this cyclic-prefix mechanism is given after the feature list below. However, these portions of signals cannot be further leveraged in transmitting additional information, as their main role is in fact to ensure no inter-symbol interference (“ISI”) over multiple time-domain multiplexing (“TDM”) OFDM transmissions. Regarding orthogonal methods, the NR Physical Layer (“PHY”) in 3GPP has been designed to offer signaling flexibility at fundamental levels to cater jointly for the eMBB, URLLC, and mMTC types of traffic that NR aimed to address. Features addressing this type of flexibility include:
- scalable OFDM numerology affecting frequency-domain subcarrier spacing (“SCS”) and time-domain OFDM symbol duration to aid heterogeneous traffic resource allocation in the time and frequency domains (e.g., (4.1-4.3, TS 38.211), (8.1, TR 38.912));
- mini-slots introducing 2- and 4-symbol long time allocations for UL/DL, primarily targeting lower-latency enhancements (e.g., 8.1, TR 38.912);
- bandwidth parts adding additional scalability enhancements to the frequency domain by allowing multiple BW components to be served in an interference-free manner across various ranges of devices (e.g., 4.5, TS 38.211).
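The cyclic-prefix mechanism referenced above can be sketched in a few lines: one OFDM symbol is modulated with an IFFT, and a copy of its tail is prepended as the CP, which the receiver discards before demodulating. This is a minimal sketch with arbitrary illustrative parameters (64 subcarriers, a 16-sample CP, QPSK), not a standards-compliant NR modulator.

```python
import numpy as np

rng = np.random.default_rng(0)
N, cp_len = 64, 16                       # subcarriers and cyclic-prefix length (illustrative)

bits = rng.integers(0, 2, size=(N, 2))
qpsk = ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)  # unit-energy QPSK

x = np.fft.ifft(qpsk) * np.sqrt(N)       # time-domain OFDM symbol (orthogonal multiplex)
x_cp = np.concatenate([x[-cp_len:], x])  # prepend CP: a copy of the symbol tail

# Receiver: drop the CP, FFT back; the subcarriers are recovered interference-free.
s_hat = np.fft.fft(x_cp[cp_len:]) / np.sqrt(N)
print("max recovery error:", np.max(np.abs(s_hat - qpsk)))
```

The cp_len samples carry no new information, which is precisely the overhead the non-orthogonal methods discussed next try to reclaim.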
Nevertheless, in one embodiment, these enhancements are inherently limited by basic propagation principles and resource availability, as follows:
- scalable OFDM numerology suffers generally from the uncertainty principle (as signals cannot be jointly well localized in time and frequency), as well as from the bandwidth (“BW”) scarcity in frequency range 1 (410 MHz-7125 MHz) (“FR1”) licensed bands;
- the absolute resolution of mini-slots is tightly linked to the SCS width, hence particular configurations may be infeasible in practice;
- bandwidth parts (“BWP”) are constrained again by spectrum availability as well as by UE capabilities to support various BWP configurations.
For non-orthogonal methods, in one embodiment, to overcome such physical and resource constraints, 3GPP has also previously considered transmission strategies based on the interference-managed approach to provide PHY tools to support heterogeneous traffic in the eMBB, URLLC, or mMTC directions. The following points are relevant:
- DL Multiuser Superposition Transmission (“DL-MUST”), studied in TR 36.859 and integrated in Release 14 of 3GPP given reduced complexity configurations, as detailed in clause 2 of R1-1613802;
- Non-orthogonal Multiple Access (“NOMA”) schemes and advanced receivers, studied in SI TR 38.812 to improve the multiple access techniques of NR and system spectral efficiency, which could be considered for next releases.
Despite significant performance gains reported in several contributions to TR 38.812 for link-level simulation (“LLS”)/system-level simulation (“SLS”) with non-sparse NOMA codebooks and spreading sequences, such as for instance RSMA (e.g., R1-1809434/Cao, Y., Sun, H., Soriaga, J., & Ji, T. (September 2017). Resource spread multiple access-A novel transmission scheme for 5G uplink. In 2017 IEEE 86th Vehicular Technology Conference (VTC-Fall)) or MUSA (e.g., R1-1805840), the schemes required may include:
- additional distributed synchronization mechanisms among UEs by configured grants and codeword orchestration;
- significant DM-RS and Random Access Channel (“RACH”) enhancements;
- receivers capable of resolving interference.
NOTE: The latter advanced receivers, in one embodiment, were shown to not be a bottleneck given reasonable complexity designs and complementary trade-offs between interference cancellation (“IC”) and complexity for linear receivers, i.e., linear minimum mean squared error (“LMMSE”), with both hard-IC and hybrid-IC for block- and non-block-based receiver processing (e.g., TR 38.812).
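The LMMSE receivers mentioned in the note above admit a compact sketch: for an effective (possibly overloaded) mixing matrix A and noise variance sigma^2, the linear estimate is (A^H A + sigma^2 I)^{-1} A^H y, on top of which hard- or hybrid-IC stages can be layered. The model below is synthetic (random Gaussian A, QPSK symbols) and is an assumption-laden illustration only, not the receiver of any specific TR 38.812 contribution.

```python
import numpy as np

rng = np.random.default_rng(1)
M, N, sigma2 = 8, 12, 0.01               # observations, symbols, noise variance (illustrative)

A = (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2 * M)
s = rng.choice(np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]), size=N) / np.sqrt(2)  # QPSK
noise = np.sqrt(sigma2 / 2) * (rng.standard_normal(M) + 1j * rng.standard_normal(M))
y = A @ s + noise                         # overloaded observation: M samples, N symbols

# LMMSE estimate: resolves the managed self-interference of the overload.
G = np.linalg.solve(A.conj().T @ A + sigma2 * np.eye(N), A.conj().T)
s_hat = G @ y
print("mean squared error:", np.mean(np.abs(s_hat - s) ** 2))
```

A hard-IC stage would slice s_hat to the nearest QPSK points, subtract the strongest detected symbols from y, and re-run the filter on the residual.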
Regarding cross-layered and configuration methods, in one embodiment, to support URLLC requirements under heterogeneous traffic, 3GPP introduced features within NR to this end, both at the PHY and at higher-level signaling and protocol stacks (e.g., RP-191584), such as:
- Physical Downlink Control Channel (“PDCCH”) enhancements:
  - configurable field sizes for Downlink Control Information (“DCI”) for improved reliability;
  - increased PDCCH monitoring configuration to minimize scheduling delay.
- Uplink Control Information (“UCI”) enhancements:
  - support for multiple Hybrid Automated Repeat-Request Acknowledgement (“HARQ-ACK”) feedback reports per slot to reduce latency;
  - construction of multiple HARQ-ACK codebooks for heterogeneous services.
- Physical Uplink Shared Channel (“PUSCH”) enhancements:
  - support for cross-slot-boundary scheduling;
  - scheduling enhancements for PUSCH dynamic grant;
  - scheduling enhancements for PUSCH configured grant.
- HARQ prioritization and scheduling enhancements in support of low-latency traffic over non-priority traffic.
- Inter-UE prioritization and multiplexing by:
  - UL pre-emption allowing the gNB to interrupt the UL of one user to accommodate URLLC for another user;
  - enhanced UL power control to enable a power increase of UL URLLC overlapping with eMBB traffic.
- Multiple active configurations for configured grants to accommodate heterogeneous traffic requirements.
The foregoing enhancements build upon existing features, e.g., Release 15 URLLC features, such as:
- high SCS for lower signaling latency over the radio interface;
- mini-slot configurations;
- configured-grant procedures enabling UEs to transmit UL autonomously without grant requests to the gNB;
- DL pre-emption by UL URLLC traffic;
- Packet Data Convergence Protocol (“PDCP”) duplication and multi-slot repetition;
- low-spectral-efficiency Modulation and Coding Scheme (“MCS”) configurations.
In one embodiment, despite all these features, there are still some radio access technology gaps in offering support to heterogeneous traffic over wireless communications. For instance, multiple UEs and advanced terminals are already capable of producing mixed streams of data that need to be transmitted with heterogeneous requirements over a network, such as:
- autonomous vehicles may emit in Vehicle-to-Everything Communications (“V2X”) both high-rate video streams as well as cooperative awareness messages as per the ETSI C-ITS specifications in support of dead-angle awareness and traffic safety (e.g., ETSI European Standard EN302637-2 (v1.4.1, April 2019). Intelligent Transportation Systems (ITS); Vehicular Communications; Basic Set of Applications; Part 2: Specification of Cooperative Awareness Basic Service);
- XR terminals or UE-tethered XR terminals may need to process both eMBB video-encoded traffic as well as URLLC control information for high-quality remote renderings of XR content;
- other examples include a terminal that generates and transmits simultaneously heterogeneous traffic including remote tele-operations, advanced sensors in digital twinning, holographic teleportation, etc.
In one embodiment, some open questions that remain to be addressed include:
- how can the transmission of information be optimized to simultaneously benefit heterogeneous multiple streams originating at the same transmission point under constrained physical multiplexing resources (time/frequency/layers)?
- how to code simultaneous transmissions of multiple uncorrelated streams in a spectrally efficient way?
One existing method is spectrally efficient frequency division multiplexing (“SEFDM”). In contrast to the 3GPP NOMA solutions discussed previously, which treat multiple access schemes for information at the multiuser level, SEFDM studies a fundamental spectrum-efficient non-orthogonal waveform capable of providing the additional degrees of freedom and virtual scheduling resources necessary to accommodate heterogeneous requirements at the individual information stream level (e.g., Xu, T., & Darwazeh, I. (November 2014). Spectrally efficient FDM: Spectrum saving technique for 5G? In 1st International Conference on 5G for Ubiquitous Connectivity (pp. 273-278). IEEE). SEFDM was also considered as a FTN waveform candidate for NR (e.g., Ahmadi, S. (2019). 5G NR: Architecture, technology, implementation, and operation of 3GPP new radio standards. Academic Press; Xu, T., & Darwazeh, I. (November 2014). Spectrally efficient FDM: Spectrum saving technique for 5G? In 1st International Conference on 5G for Ubiquitous Connectivity (pp. 273-278). IEEE). Concretely, SEFDM compresses the OFDM modulation, allowing for spectrum saving under the same number of data information subcarriers, or alternatively, for superimposing more data subcarriers than OFDM within the same BW constraints. This is achieved by an interference-managed linear transform on the information symbols s_n as

X[k] = \frac{1}{\sqrt{\rho N}} \sum_{n=0}^{N-1} \exp\!\left( j \, \frac{2\pi n k \alpha}{\rho N} \right) s_n. \quad (Eq. 1)

Thus, the regular OFDM modulation is scaled by the BW compression factor \alpha = \Delta f_{\mathrm{SEFDM}} \cdot T_S and by the oversampling integer factor \rho \geq 1, respectively. For \alpha = 1 and \rho = 1, the SEFDM modulation is identical to the regular OFDM Inverse Fast Fourier Transform (“IFFT”) based modulator. The SEFDM time-signal X[k], where k \in \{0, 1, \ldots, \rho N - 1\}, may be oversampled or not depending on the system realization. The interference introduced by the SEFDM modulation technique can be easily computed by grouping the modulator sub-bands and considering their Gram matrix representing the induced inter-carrier interference (“ICI”) when \alpha < 1. This yields the matrix C \in \mathbb{C}^{N \times N} with entries

C_{m,n}(\alpha, \rho) = \frac{1}{\rho N} \cdot \begin{cases} \rho N, & m = n \\[1ex] \dfrac{1 - \exp\left( j 2\pi (n - m)\alpha \right)}{1 - \exp\left( j 2\pi (n - m)\alpha / (\rho N) \right)}, & m \neq n. \end{cases} \quad (Eq. 2)

The knowledge of the self-interference levels of C can be leveraged at the receiver post-demodulation, which is performed by matched filtering with the conjugate operation (e.g., Xu, T., & Darwazeh, I. (November 2014). Spectrally efficient FDM: Spectrum saving technique for 5G? In 1st International Conference on 5G for Ubiquitous Connectivity (pp. 273-278). IEEE). Turbo and hybrid-IC receivers (e.g., Xu, T., & Darwazeh, I. (November 2014). Spectrally efficient FDM: Spectrum saving technique for 5G? In 1st International Conference on 5G for Ubiquitous Connectivity (pp. 273-278). IEEE) and advanced discrete-aware linear receivers with knowledge of C (e.g., Iimori, H., De Abreu, G. T. F., Hara, T., Ishibashi, K., Stoica, R. A., González, D., & Gonsa, O. (March 2021). Robust symbol detection in large-scale overloaded NOMA systems. IEEE Open Journal of the Communications Society, 2, 512-533) are proven to detect the information symbols close to orthogonal signaling performance in realistic scenarios, with at most cubic complexity for overloading levels of up to 40% (e.g., Xu, T., & Darwazeh, I. (November 2014). Spectrally efficient FDM: Spectrum saving technique for 5G? In 1st International Conference on 5G for Ubiquitous Connectivity (pp. 273-278). IEEE; Iimori, H., De Abreu, G. T. F., Hara, T., Ishibashi, K., Stoica, R. A., González, D., & Gonsa, O. (March 2021). Robust symbol detection in large-scale overloaded NOMA systems. IEEE Open Journal of the Communications Society, 2, 512-533), i.e., \alpha = 0.6.
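Eq. 1 and Eq. 2, as reconstructed above, translate directly into a numerical sanity check: build the SEFDM modulation matrix column by column and verify that its Gram matrix has a unit diagonal, exhibits off-diagonal ICI for alpha < 1, and collapses to the identity (plain OFDM) for alpha = 1. A minimal sketch with illustrative N, rho, and alpha; treat it as a check of the reconstructed equations rather than a reference SEFDM implementation.

```python
import numpy as np

def sefdm_matrix(N: int, rho: int, alpha: float) -> np.ndarray:
    """Columns are the N SEFDM subcarriers of Eq. 1, sampled at k = 0..rho*N-1."""
    k = np.arange(rho * N)[:, None]
    n = np.arange(N)[None, :]
    return np.exp(2j * np.pi * n * k * alpha / (rho * N)) / np.sqrt(rho * N)

N, rho, alpha = 16, 1, 0.8               # alpha < 1 compresses the subcarrier spacing
Phi = sefdm_matrix(N, rho, alpha)
C = Phi.conj().T @ Phi                   # induced ICI Gram matrix, matching Eq. 2

print("unit diagonal:", np.allclose(np.diag(C), 1.0))
print("worst off-diagonal ICI:", np.max(np.abs(C - np.diag(np.diag(C)))))
Phi_ofdm = sefdm_matrix(N, 1, 1.0)       # alpha = 1 recovers the orthogonal case
print("alpha = 1 gives plain OFDM:",
      np.allclose(Phi_ofdm.conj().T @ Phi_ofdm, np.eye(N)))
```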
Similar remarks apply to early neural receivers at lower complexity (e.g., Chorti, A., & Picard, D. (2021). Rate Analysis and Deep Neural Network Detectors for SEFDM FTN Systems. arXiv preprint arXiv:2103.02306). The efficient hardware realization of SEFDM modulation/demodulation can be linked to the IFFT/FFT transform as follows. The saving in BW offered by SEFDM given a total BW constraint allows an additional number of data subcarriers to be used for transmission. Assume thus the total number of SEFDM subcarriers to be M, such that the original BW of an N-dimensional OFDM modulation satisfies \Delta f_{\mathrm{OFDM}} \cdot N = \Delta f_{\mathrm{SEFDM}} \cdot M. It follows that \alpha = \rho N / M, or equivalently, M = \rho N / \alpha, with M, N, \rho integers; furthermore, M may be a power of 2 for efficient IFFT/FFT realizations. This fact limits \alpha to a particular set of values for scalable hardware implementations taking advantage of the radix-2 FFT. In general, the present disclosure introduces a scalable transformation-based precoder. In one embodiment, this has the effective role of compressing multiple information symbols in a spectrally efficient manner over the available physical communications resources. The features of the proposed design offer novel degrees of freedom for transmission encoding applicable to increasingly emerging heterogeneous communications requirements of information data streams originating from a single transmission point (e.g., network node, gNB, Transmission-Reception Point (“TRP”), UE). Information Theory defines the fundamental limit of capacity for a wireless communication link, determining its spectral efficiency upper bound as a function of the instantaneous Signal-to-Noise Ratio (“SNR”) over the channel. Attaining this limit given the inherent discrete processing and MCS of modern communications systems is practically infeasible. In addition, the increasing heterogeneous requirements of data in terms of high rate, low latency, and high reliability across more streams of information make this maximization problem difficult in practice. This is the case as competing requirements deplete the orthogonal resources currently exploited for communications, such as, for instance, frequency subbands, spatial degrees of freedom, and time slots. A flexible solution to increase the spectral efficiency of current communications systems under fixed physical resources (i.e., spectrum, spatial, and time degrees of freedom) is needed to effectively address this emerging issue and allow competing non-orthogonal requirements to be multiplexed over a single use of the channel. Information-theoretic results (e.g., El Gamal, A., & Kim, Y. H. (2011). Network information theory. Cambridge University Press.) also suggest a generic strategy to treat the problem by means of joint interference management and superposition to funnel more information simultaneously over a communication link. Consider, without loss of generality, the communications system ofFIG.2, which depicts a communications system based on OFDM modulation with a transmitter on top and a receiver at the bottom. In gray, the proposed transformation block precoder202is highlighted on the transmitter side, whereas the receiver's necessary knowledge of the codebook204is outlined. The proposed transformation pre-mapping of the information symbols to the orthogonal communication resources, e.g., OFDM modulation, is represented at the transmitter as a precoder202.
The codebook design of the precoder shall act as an information funnel to speed up the signaling over the waveform multiplexing. This is achieved, in one embodiment, by compression with an optimized codebook 204 whose interference is appropriately managed to reduce all the self-interference effects. Let such a codebook be denoted by S(M, N, t) such that: M is the dimensionality of the codewords; N is the dimensionality of the codebook; t represents the maximum cross-codeword interference (or, alternatively, cross-correlation) magnitude. In one example, M is the number of subcarriers/resource elements ("REs") occupied. Generally, M can be the number of modulation resources being multiplexed. For example, the modulation resource may be frequency subcarriers, time slots, spatial layers or antenna ports, or a combination thereof. In one example, N is the maximum number of information symbols that are precoded. Furthermore, consider the codeword dimensionality M constrained to the actual physical resources available for communication. As such, the question to answer is: "how to optimize the N-sized codebook design to simultaneously serve more information symbols than the available multiplexing degrees of freedom?" In one embodiment, the proposed design for the codebook is applicable to finite-energy discrete complex-valued inputs and encodes higher dimensional points onto non-orthogonal combinations of lower dimensional complex-valued codewords. The inherent non-orthogonality is a consequence of the dimensionality reduction in the inputs' representation as lower dimensional codewords, which compresses the information symbols at the price of induced interference. To optimize the transmission strategy and the link spectral efficiency, the induced interference can be reduced as part of the codebook design. As a result, the proposed design of the precoding method described can be reduced to the optimization problem:

$$t^2 = \min_{S} \max_{k \neq l} |i(s_k, s_l)|^2 \quad \text{s.t.} \quad k \neq l \in \{1, 2, \ldots, N\}, \quad SS^{*} = \frac{N}{M}\,\mathrm{Id}_M, \quad |i(s_j, s_j)|^2 = 1,\ \forall j \qquad \text{(Eq. 3)}$$

The codebook design may be further broken down as follows: the objective of the optimization is to minimize the maximum interference energy measure, e.g., $|i(s_k, s_l)|^2$, across any pair of different codewords from S(M, N, t). The second constraint of the codebook, $SS^{*} = \frac{N}{M}\,\mathrm{Id}_M$, imposes the uniform and uncorrelated ergodic power representation of any random input combinations onto the lower-dimensional physical resource space. The third constraint limits the codewords of the codebook to uniform unit energy. Considering the practical linear transform realization of the codebook design, the optimization from Eq. 3 above can be simplified to its linear inner product space formulation as finding the linear code S = S(M, N, t), such that:

$$t^2 = \min_{S} \max_{k \neq l} |\langle s_k, s_l \rangle|^2 \quad \text{s.t.} \quad k \neq l \in \{1, 2, \ldots, N\}, \quad SS^{H} = \frac{N}{M} I_M, \quad \|s_j\|^2 = 1\ \forall j, \quad S \in \mathbb{C}^{M \times N},\ M < N. \qquad \text{(Eq. 4)}$$

The cross-codeword interference energy measure of Eq. 3, $|i(s_k, s_l)|$, has been replaced in Eq. 4 by the L2 inner-product measure, $|\langle s_k, s_l \rangle|$, and the adjoint operator * has been replaced by the conjugate transposition of linear vector spaces, respectively. Similarly, the identity operator $\mathrm{Id}_M$ has been replaced by its linear counterpart, e.g., the identity matrix $I_M$, whereas the codebook can be compactly described by the complex matrix S.
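For concreteness, the objective and constraints of Eq. 4 can be checked numerically for any candidate codebook. The following Python sketch (an illustration under the stated definitions, not the disclosure's implementation; all names are hypothetical) evaluates the three quantities of interest:

```python
import numpy as np

def codebook_metrics(S: np.ndarray):
    """Evaluate a candidate codebook S (M x N complex) against Eq. 4:
    squared max pairwise cross-correlation, tight-frame error
    ||S S^H - (N/M) I_M||, and worst codeword-norm deviation from 1."""
    M, N = S.shape
    G = S.conj().T @ S                       # Gram matrix of cross-correlations
    off = G - np.diag(np.diag(G))            # zero the diagonal entries
    t2 = np.max(np.abs(off)) ** 2            # objective of Eq. 4
    frame_err = np.linalg.norm(S @ S.conj().T - (N / M) * np.eye(M))
    norm_err = np.max(np.abs(np.linalg.norm(S, axis=0) - 1.0))
    return t2, frame_err, norm_err

# Example: a random unit-norm codebook with M = 4 resources, N = 6 symbols
rng = np.random.default_rng(0)
S = rng.standard_normal((4, 6)) + 1j * rng.standard_normal((4, 6))
S /= np.linalg.norm(S, axis=0)               # enforce the unit-energy constraint
print(codebook_metrics(S))                   # random S is far from optimal
```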
The design optimization problem of Eq. 4 may attain its global minimum for Grassmannian code constructions (e.g., Casazza, P. G., & Kutyniok, G. (Eds.). (2012). Finite frames: Theory and applications. Springer Science & Business Media.), which is the well-known Welch Bound ("WB") for the maximum cross-codeword interference, such that:

$$t \geq \sqrt{\frac{N - M}{M(N - 1)}}. \qquad \text{(Eq. 5)}$$

The existence of such optimal Grassmannian codes, $S\left(M, N, \sqrt{\tfrac{N-M}{M(N-1)}}\right)$, is not guaranteed for any generic dimensionality pair (M, N) (e.g., Strohmer, T., & Heath Jr, R. W. (2003). Grassmannian frames with applications to coding and communication. Applied and Computational Harmonic Analysis, 14(3), 257-275.), as per the conjecture of Grassmannian frame existence in Representation Theory. On the other hand, the design condition imposed by the diagonalization of the operator $SS^H$ in the second constraint relates to bringing the codebook S, as an ensemble, as close as possible to an orthogonal basis in terms of power spreading and representation optimality. This also impacts the ensemble total interference energy as follows:

$$\|S^H S\|_F^2 = \mathrm{trace}(S^H S \cdot S^H S) = \mathrm{trace}(S S^H \cdot S S^H) = \mathrm{trace}\left(\frac{N}{M} I_M \cdot \frac{N}{M} I_M\right) = \frac{N^2}{M}, \qquad \text{(Eq. 6)}$$

where the symmetric Gramian operator $S^H S$ is a compact representation of all the possible cross-codeword interference levels, as its entries are all the $|\langle s_k, s_l \rangle|$, k, l ∈ {1, 2, . . . , N}. Eq. 6 additionally highlights that the proposed precoder design is constrained to a total sum of squared correlations among its codewords equal to $N^2/M$, and as such is itself a complex-valued WB Equality ("WBE") sequence. On the other hand, the precoder design jointly seeks a codebook such that not only this ensemble sum interference level is optimal, but also any pairwise codeword interference level is individually minimized as much as possible. In effect, this jointly lowers the interference induced by overloading. The unit-norm equality constraint on the individual codewords enforces a well-defined energy normalization relevant to practical systems, but simultaneously restricts the search space of the codewords to a unit hypersphere of dimensionality M. As a result, the outcome of the proposed precoder S(M, N, t) 302 is a spherical codebook, as displayed in FIG. 3. The spherical precoder S thus selects a finite set of N points on an M-dimensional unit hypersphere such that the cross-correlation of these N codewords is pairwise and ensemble-wise controlled and minimized. The application of the proposed precoder S(M, N, t) 302 to increase spectral efficiency and to provide additional degrees of freedom for the existing physical resources of the signal space is valid both for single data streams, as well as for multiple independent data streams with heterogeneous rate/latency/robustness requirements, as detailed in the following embodiments. In some embodiments, the precoding process is defined as a general non-linear function, covering non-linear precoding. The design can be done by parameterized training (e.g., like a deep neural network ("DNN"), within a function space constructed via a known reproducing kernel Hilbert space ("RKHS")), where other objective/constraint items can be added to Eq. 3. Some embodiments may further consider a constraint on a measure of receiver decoding complexity, or a constraint on keeping a precoder-agnostic receiver processing scheme (e.g., the receiver works with a fixed precoder assumption, but the transmitter precoders are tuned based on the required quality of service or objective function). The receiver-side detection and decoding of communications symbols precoded by S(M, N, t) 302 generally reduces to resolving the introduced interference.
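Referring back to Eq. 5 and Eq. 6, both quantities are directly computable; the following Python sketch (hypothetical helpers, shown only as an illustration of the stated formulas) evaluates the Welch bound and the WBE ensemble identity:

```python
import numpy as np

def welch_bound(M: int, N: int) -> float:
    """Lower bound of Eq. 5 on the maximum cross-correlation magnitude t."""
    return np.sqrt((N - M) / (M * (N - 1)))

def is_wbe(S: np.ndarray, tol: float = 1e-9) -> bool:
    """Check the ensemble identity of Eq. 6: ||S^H S||_F^2 == N^2 / M."""
    M, N = S.shape
    total = np.linalg.norm(S.conj().T @ S, 'fro') ** 2
    return abs(total - N ** 2 / M) < tol

# Any S with S S^H = (N/M) I_M satisfies Eq. 6 regardless of how the
# interference is distributed pairwise, which is why the design additionally
# minimizes the maximum pairwise level toward the bound of Eq. 5.
print(welch_bound(4, 6))
```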
To resolve this interference, in one embodiment, a receiver digital signal processor requires knowledge of the codebook or of the codebook interference pattern. Post-precoding, the compressed symbol to transmit is a linear combination of the codewords scaled by the complex discrete input points of the overloaded information symbols. As a result, in one embodiment, the compressed precoded symbol is an M-dimensional overloaded discrete signal which can be jointly unmapped to the discrete input symbols by decoding the codebook interference. The detection of the original information symbols can therefore be effectively achieved by approximate joint maximum likelihood detection (e.g., Iimori, H., De Abreu, G. T. F., Hara, T., Ishibashi, K., Stoica, R. A., González, D., & Gonsa, O. (March 2021). Robust symbol detection in large-scale overloaded NOMA systems. IEEE Open Journal of the Communications Society, 2, 512-533.), by linear iterative MMSE-IC with soft or hard decisions (e.g., Xu, T., & Darwazeh, I. (November 2014). Spectrally efficient FDM: Spectrum saving technique for 5G? In 1st International Conference on 5G for Ubiquitous Connectivity (pp. 273-278). IEEE.), by loopy belief propagation methods, or by neural receivers (e.g., Chorti, A., & Picard, D. (2021). Rate Analysis and Deep Neural Network Detectors for SEFDM FTN Systems. arXiv preprint arXiv:2103.02306.) with knowledge of the interference pattern compactly represented by $S^H S$. For increased detection accuracy, turbo receivers which employ joint detection and decoding given the MCS of the input symbols can also be applied, using the outer channel code redundancy to correct potential erroneous decisions of the symbol detector, in either soft-output or hard-output configuration. In one embodiment, regarding overloading by harmonic spherical codes, to prevent the signaling overhead required to transmit the full precoder information over a network, e.g., either the precoder codebook or its interference pattern, an embodiment of the proposed precoding method may consider codebook realizations where the transmitter and receiver share some knowledge of the family of the codebook. An implementation-efficient embodiment may consider quantized complex realizations of the N codewords associated with the information symbols to encode signals based on harmonic multiples of the N-th root of unity given the twiddle factor $\omega_N = \exp\left(-j\frac{2\pi}{N}\right)$. An efficient ensemble representation of the full N-sized harmonic orthonormal representation basis is provided by the unitary DFT matrix realization summarized below as:

$$w_N^k = \left[\omega_N^{k \cdot 0}\ \omega_N^{k \cdot 1}\ \ldots\ \omega_N^{k \cdot i}\ \ldots\ \omega_N^{k \cdot (N-1)}\right]^T, \quad 0 \leq i < N \qquad \text{(Eq. 7)}$$
$$W_N = \frac{1}{\sqrt{N}}\left[w_N^0\ w_N^1\ \ldots\ w_N^i\ \ldots\ w_N^{N-1}\right], \quad 0 \leq i < N.$$

The compression of the higher dimensional input space N to the available signal space of dimensionality M is achieved by critically pruning harmonics from the complete orthonormal basis $W_N$. As a result, N−M rows are removed from $W_N$ to compress the individual N codewords to the available M physical space dimensions. The resultant complex matrix of size M×N is denoted as $W_{N\setminus M}$. For M<N this implies that $W_{N\setminus M}$ is a rectangular matrix, such that:

$$W_{N\setminus M}\, W_{N\setminus M}^{H} = I_M \qquad \text{(Eq. 8)}$$

as the orthogonality of the remaining rows shall be preserved. On the other hand, the interference pattern introduced by the Gram matrix is obtained for each (k, l) entry as:

$$\left(W_{N\setminus M}^{H}\, W_{N\setminus M}\right)^{(k,l)} = G_{N\setminus M}^{(k,l)} = \frac{1}{N} \cdot \begin{cases} M, & k = l \\ 0 - \sum_{i \in R_{N\setminus M}} \omega_N^{*l \cdot i}\, \omega_N^{k \cdot i}, & k \neq l, \end{cases} \qquad \text{(Eq. 9)}$$

where the ordered set $R_{N\setminus M}$ of cardinality $|R_{N\setminus M}| = N-M$ contains the indices 0 ≤ i < N of the pruned rows from the original DFT matrix $W_N$.
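The truncated-DFT construction of Eq. 7 through Eq. 9 is straightforward to realize numerically. The following Python sketch (a hypothetical helper illustrating the construction, anticipating the scaling of Eq. 10 introduced next) builds such a harmonic codebook and inspects its interference pattern:

```python
import numpy as np

def harmonic_codebook(N: int, pruned_rows):
    """Illustrative sketch of the truncated-DFT construction: build the unitary
    N-DFT matrix W_N (Eq. 7), delete the rows in `pruned_rows`, and scale by
    sqrt(N/M) per Eq. 10 so that S S^H = (N/M) I_M and ||s_j|| = 1."""
    pruned = sorted(set(pruned_rows))
    M = N - len(pruned)
    W = np.exp(-2j * np.pi * np.outer(np.arange(N), np.arange(N)) / N) / np.sqrt(N)
    keep = [i for i in range(N) if i not in pruned]
    W_trunc = W[keep, :]                        # M x N; rows stay orthonormal (Eq. 8)
    return np.sqrt(N / M) * W_trunc             # Eq. 10 scaling

S = harmonic_codebook(8, pruned_rows=[3])       # N = 8, M = 7
G = S.conj().T @ S                              # interference pattern (cf. Eq. 9)
print(np.max(np.abs(G - np.diag(np.diag(G)))))  # -> 1/M ~ 0.1429 for N = M + 1
```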
To satisfy the normalization constraints for the ensemble and individual codewords, given Eq. 8 and Eq. 9, the M×N matrix $W_{N\setminus M}$ must be scaled by the factor $\sqrt{N/M}$, and so the codebook:

$$S = S\left(M, N, \max_{k \neq l} \left|G_{N\setminus M}^{(k,l)}\right|\right) = \sqrt{\frac{N}{M}} \cdot W_{N\setminus M} \qquad \text{(Eq. 10)}$$

is obtained. In one embodiment, the best selection of $R_{N\setminus M}$ for S based on Eq. 9 consists of finding the N−M rows that minimize the magnitude of any sum $\sum_{i \in R_{N\setminus M}} \omega_N^{*l \cdot i}\, \omega_N^{k \cdot i}$, ∀k ≠ l. The size of the search space necessary to find the optimal $R_{N\setminus M}$ is $C_N^{N-M} = \binom{N}{N-M}$, wherein N−M rows must be picked from the available N given the optimization objective. This combinatorial search problem is parallelizable and may always be solved effectively offline and its outputs tabulated. In another embodiment, offline optimized harmonic spherical codes S(M, N, t) for generic M, N can be tabulated based on the dimensionality, i.e., M, N, and on the set of selected or pruned rows, respectively. Efficient embodiments may tabulate either the full set of indices consisting of the selected rows of the original N-DFT matrix if M ≤ N−M, or alternatively, the full set of indices consisting of the pruned rows if M > N−M. Some realizations may additionally tabulate an N-bit bitmap with one bit of information, e.g., b_pruned, to indicate whether the rows set is pruned or not. In one embodiment, algebraically motivated, possibly non-optimal selections of the rows for $W_{N\setminus M}$ may be performed given the selection of M p-consecutive rows out of the N possible ones to form a geometric progression in the interference plane, or alternatively, the pruning of N−M p-consecutive rows out of the N possible ones to form a geometric progression in the interference plane. An ordered set of Z p-consecutive indices is defined as follows: $R_p^Z = \{i_{\mathrm{offset}},\ i_{\mathrm{offset}}+p,\ i_{\mathrm{offset}}+2p,\ \ldots,\ i_{\mathrm{offset}}+(Z-1)p\}$. The obtained geometric progressions defining the cross-codeword interferences become:

$$\left|G_{N\setminus M}^{(k,l)}\right| = \frac{1}{M} \cdot \left|\omega_N^{(k-l) \cdot i_{\mathrm{offset}}}\right| \cdot \left|\frac{1 - \omega_N^{(k-l) \cdot p \cdot Z}}{1 - \omega_N^{(k-l) \cdot p}}\right| = \frac{1}{M} \cdot \left|\frac{1 - \omega_N^{(k-l) \cdot p \cdot Z}}{1 - \omega_N^{(k-l) \cdot p}}\right| \quad \forall k \neq l \qquad \text{(Eq. 11)}$$

where Z can be either M or N−M, respectively. In certain embodiments where the number of input symbols exceeds the number of physical resources by unity, i.e., N = M+1, the selection of the pruned row for the codebook may be arbitrary. This is an outcome of the resultant interference levels for any k ≠ l and fixed pruned row i, which become:

$$t \geq \left|G_{N\setminus M}^{(k,l)}\right| = \frac{1}{M} \cdot \left|\omega_N^{*l \cdot i}\, \omega_N^{k \cdot i}\right| = \frac{1}{M}. \qquad \text{(Eq. 12)}$$

Therefore, the obtained codebook $S\left(M, M+1, \frac{1}{M}\right)$ is globally optimal in terms of Eq. 4, as the WB in this case is equal to t and the precoder is a complex-valued maximum WBE sequence. One embodiment may consider higher-layer configurations (e.g., Radio Resource Control ("RRC")) of the precoder dimensionality and design based on at least one of the data stream or multiple data streams and their associated rate, latency, and reliability requirements. The higher-layer configuration and codebook setup signaling may thus consist of a bit field (e.g., one 1 bit-width field) for at least one of dynamic switch-on, dimensionality information, i.e., M, N, and row selection/pruning information, as well as optional metadata on the constraints, if any, regarding the latency, rate or reliability guarantees that the radio link is required to fulfill.
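The combinatorial offline row-selection search described above can be illustrated by the following brute-force Python sketch (hypothetical names; it reuses the harmonic_codebook helper from the previous sketch and is meant only to show the search structure, not an optimized tabulation pipeline):

```python
import numpy as np
from itertools import combinations

def best_pruning(N: int, M: int):
    """Exhaustive offline search of size C(N, N-M) for the pruned row set
    minimizing the worst-case cross-codeword interference (cf. Eq. 9)."""
    best_rows, best_t = None, np.inf
    for rows in combinations(range(N), N - M):
        S = harmonic_codebook(N, rows)           # from the previous sketch
        G = np.abs(S.conj().T @ S)
        np.fill_diagonal(G, 0.0)                 # ignore the unit diagonal
        t = G.max()
        if t < best_t:
            best_rows, best_t = rows, t
    return best_rows, best_t

print(best_pruning(8, 6))   # e.g., prune 2 of 8 rows; in practice the results
                            # are computed offline and tabulated, per the text
```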
Some realizations may configure/indicate/provide/report the dimensionality based on an offset to the configured physical resources M, i.e., α = N−M, to minimize the reporting signaling size. The enablement flag and precoder dimensionality information, in one embodiment, are required at the receiver side to resolve the precoding-introduced interference and perform successful detection and subsequent decoding of information. The precoder information at the transmitter, in one embodiment, shall be indicated/reported/provided to the receiver (or the transmitter shall configure/indicate/provide/report to the receiver the precoder information) as part of a data channel embedded reference signal (e.g., based on a DM-RS sequence over Physical Downlink Shared Channel ("PDSCH")/PUSCH/Physical Sidelink Shared Channel ("PSSCH") realizations; the DM-RS may not be precoded) or via a control channel signaling mechanism (e.g., based on DCI for DL/UL scheduling over PDCCH, configured grant scheduling information, UCI for UL over Physical Uplink Control Channel ("PUCCH")/PUSCH, where, e.g., data-associated control UCI resources or OFDM symbols comprising the UCI on PUSCH may not be precoded with the overloading precoder, or SCI for Sidelink ("SL") over Physical Sidelink Control Channel ("PSCCH")). In another embodiment, an effective signaling may be considered where additional compression mechanisms for the DFT rows information may be applied. Source coding compression, such as algebraic or arithmetic lossless coding of the selected or pruned rows' indices, may be used to reduce the necessary signaling overhead. The compression indication may be reported front-loaded to the actual compressed bitstream as an explicit bit field identifying the selected source coding compression strategy, to allow the retrieval of the compressed information. The compressed information may be indicated/reported/provided to the receiver as part of the precoder information. A MIMO multi-layered embodiment may individually precode each layer with the proposed codebook. The precoding configuration may be common-layer/layer-common precoding or layer-independent precoding. Common-layer/layer-common precoding may be performed with the same precoder on each layer, and a common precoder information/configuration may be reported/indicated/provided. Layer-independent precoding may be performed with different precoders for each layer, with the precoder information reported/indicated/provided on a per-layer basis. In some embodiments, the initial coding space with dimension M includes elements of subcarriers, elements of time domain symbols, elements of spatial layers, or a combination thereof. In some embodiments, where strong prior channel knowledge is present (e.g., a static Line-of-Sight ("LoS") condition), the interference objective function can be defined at the transmitter from the point of view of the receiver, e.g., the precoding strategy is designed such that it minimizes the interference among the precoded sequences after the effect of the channel.
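As a purely hypothetical illustration of the configuration fields described above (enablement flag, dimensionality offset α = N−M, and the N-bit row bitmap with its b_pruned indicator), the following Python sketch packs such fields into a single word; the field widths and layout are assumptions for illustration, not a signaling format defined by the disclosure:

```python
# Hypothetical packing of the precoder configuration fields discussed above.
# Field widths are illustrative only: [enable:1][b_pruned:1][alpha:8][bitmap:n_bits].

def pack_precoder_config(enable: bool, alpha: int, bitmap: int, n_bits: int,
                         b_pruned: bool) -> int:
    """Pack the fields LSB-first into one integer word."""
    word = int(enable)
    word |= int(b_pruned) << 1
    word |= (alpha & 0xFF) << 2
    word |= (bitmap & ((1 << n_bits) - 1)) << 10
    return word

# Example: N = 8, M = 7, pruned row {3} -> alpha = 1, bitmap marks row 3
cfg = pack_precoder_config(True, alpha=1, bitmap=1 << 3, n_bits=8, b_pruned=True)
print(bin(cfg))
```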
Regarding overloading by general spike spherical codes, in one embodiment, alternate realizations may consider a codebook design which is unconstrained, e.g., no structural implicit assumptions are made, such as, for instance, a harmonically quantized representation. In this case the codebook design shall take full advantage of the entire M-dimensional entropy of the spherical topology and exploit it to freely arrange the displacement of the N codewords. Despite Eq. 4 being a highly non-trivial optimization problem, in one embodiment it is solvable in an approximate sense by stochastic numerical methods, algebraic alternating projections, or algebraic shrinkage methods that take advantage of the spectral structure of the Gram matrix associated with S. The approximate solutions yield matrices S which satisfy the second constraint in terms of representation energy tightness across the physical resource space, e.g., $SS^H = \frac{N}{M} I_M$, yet they are only approximate in terms of achieving the WB and maintaining unitary power of the associated codewords, i.e., $1-\delta \leq \|s_k\|^2 \leq 1+\delta$, $0 < \delta \ll 1$. This fact is a consequence of the polar decomposition in Representation Theory, which states that for any arbitrary M×N matrix W a WBE overloaded representation S can be obtained as:

$$S = \sqrt{\frac{N}{M}} \cdot \left(W W^H\right)^{-0.5}\, W, \quad \forall W \in \mathbb{C}^{M \times N}. \qquad \text{(Eq. 13)}$$

However, this transformation, albeit bounded in terms of codeword total energy since $\mathrm{trace}(SS^H) = N = \mathrm{trace}(S^H S)$, does not preserve exactly the individual unit-normality of the codewords; thus, the codewords are no longer exactly unit-norm but approximates thereof. This approximation is tight in practice given the spectral properties of the Gram matrix of S, which has rank M. Consequently, codebooks designed generally as such do not place the codewords on the M-unit hypersphere but outline spiky realizations thereof around the spherical surface, as highlighted in FIG. 4. In a first embodiment, offline, a priori optimized spike spherical codes S(M, N, t) 402 for generic M, N without structural constraints may be compactly represented as tabulated entries stored in memory processing units containing the codebook dimensionality, e.g., M and N respectively, or alternatively M and the offset α = N−M (or a quantity based on the ratio, e.g., α/M), and the values of the codebook's codewords. The latter may be stored in their full complex floating-point representation or in space-efficient realizations such as a quantized complex integer representation or a quantized normalized complex integer representation. In a second embodiment, higher-layer (e.g., RRC-triggered) configuration and selection of the precoder custom spike codebook realizations may be performed based on at least one of the available information regarding the current or predicted channel state information ("CSI"), the PDCP load, and the associated data stream or multiple data streams and their subsequent rate, latency, and reliability requirements. The higher-layer configuration and codebook setup signaling may thus consist of a bit field (e.g., one 1 bit-width field) for at least one of dynamic switch-on, one 1 bit-width field as a custom codebook flag, precoder information (dimensionality and codeword values or a table index of the tabulated precoder entry), as well as optional metadata on the constraints, if any, regarding the latency, rate or reliability guarantees that the radio link is required to fulfill. Some realizations may configure/indicate/provide/report the dimensionality based on an offset to the configured physical resources M, e.g., α = N−M, to minimize the reporting signaling size. The enablement flags and precoder information may be required at the receiver side to resolve the precoding-introduced interference and perform successful detection and subsequent decoding of information.
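The polar-decomposition construction of Eq. 13 can be reproduced directly; the following Python sketch (a minimal illustration under the stated formula, with hypothetical names) generates such a spike spherical codebook and checks its tight-frame and near-unit-norm properties:

```python
import numpy as np

def spike_codebook(M: int, N: int, seed: int = 0):
    """Sketch of Eq. 13: map an arbitrary W in C^{M x N} to a WBE codebook
    S = sqrt(N/M) * (W W^H)^{-1/2} W, so that S S^H = (N/M) I_M holds exactly
    while codeword norms are only approximately 1 (the 'spiky' sphere of FIG. 4)."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))
    WWH = W @ W.conj().T
    # inverse matrix square root via eigendecomposition (WWH is Hermitian PSD)
    vals, vecs = np.linalg.eigh(WWH)
    inv_sqrt = vecs @ np.diag(vals ** -0.5) @ vecs.conj().T
    return np.sqrt(N / M) * inv_sqrt @ W

S = spike_codebook(4, 6)
print(np.allclose(S @ S.conj().T, (6 / 4) * np.eye(4)))   # True: tight frame
print(np.linalg.norm(S, axis=0))                          # near, not exactly, 1
```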
The precoder information at the transmitter may be indicated/provided/reported to the receiver as part of a data channel embedded reference signal (e.g., based on DM-RS over PDSCH/PUSCH/PSSCH realizations) or via a control channel signaling mechanism (e.g., based on DCI for DL/UL scheduling over PDCCH, configured grant scheduling information, UCI for UL over PUCCH/PUSCH, or SCI for SL over PSCCH). A third embodiment may consider the effective signaling of the precoder information to the receiver side. A simplified schematic realization of signaling this information is shown in FIG. 5. It consists of an initial signal 506 from the transmitter 502 to the receiver 504 over an appropriate control channel (e.g., PDCCH, PUCCH, PSCCH) as xCI containing a tabular index identifying the codebook. This index may be a common predefined non-decreasing integer identifying the code within a given bit mask length or, alternatively, a hash signature of the custom stored codebook. The hash function is considered fixed (e.g., SHA-256, etc.) or is additionally signaled according to a commonly defined reference table. The hash content is generated by the hash function applied to the entire codebook and acts as both an index and a check mechanism for the receiver to determine whether its own memory storage unit contains the same realization of the codebook as the transmitter side, as inconsistencies may impact the receiver-side detection process. In case the hash is not verified by the receiver 504, or alternatively, the receiver 504 cannot identify the xCI-packed index of the used codebook sent by the transmitter 502, in one embodiment, the receiver 504 shall signal 508 this with a codebook NACK (e.g., the codebook corresponding to the hash is unknown). The receiver NACK may indicate that the codebook reference was not found and thus let the transmitter know that the codebook needs to be explicitly shared. To continue with the higher-layer selected precoder, in one embodiment, the transmitter 502 signals 512 the explicit codebook to the receiver either over a control channel as a configuration field of the corresponding control information or by means of a data transmission over a (e.g., front-loaded) reference signal, e.g., a DM-RS or set of DM-RSs. On the other hand, in one embodiment, if the receiver 504 can verify the hash or identify the codebook index based on the signaled common table of available codebooks, the receiver 504 shall simply acknowledge the codebook setup and reply 510 with a codebook ACK to the transmitter. In an embodiment where explicit signaling of the configured codebook is required, the codebook information may be additionally compressed. Source coding compression, such as algebraic or arithmetic lossless coding of the selected or pruned rows' indices, may be used to reduce the necessary signaling overhead. The compression indication, in one embodiment, must be reported front-loaded to the actual compressed bitstream as an explicit bit field identifying the selected source coding compression strategy, to allow the retrieval of the compressed information. In some embodiments, the precoder indicated by the transmitter is not completely known at the receiver, but related information (the general precoder type/structure, or a previously used precoder version with a different dimension, etc.) is known. Then, in one embodiment, the receiver requests incremental information to be sent by the transmitter, thereby avoiding the transmission of the complete precoder information.
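The hash-based handshake of FIG. 5 can be sketched as follows in Python (a toy illustration, not the disclosure's signaling stack; SHA-256 is used here as the one hash option the text names, and all function names and the dictionary-based codebook store are hypothetical):

```python
import hashlib
import numpy as np

def codebook_hash(S: np.ndarray) -> str:
    """Hash the full codebook content (SHA-256 over its raw bytes)."""
    return hashlib.sha256(np.ascontiguousarray(S).tobytes()).hexdigest()

def receiver_check(signaled_hash: str, local_store: dict) -> str:
    """Return 'ACK' if a locally stored codebook matches the signaled hash,
    else 'NACK' to request the explicit codebook from the transmitter."""
    for S in local_store.values():
        if codebook_hash(S) == signaled_hash:
            return "ACK"
    return "NACK"   # codebook corresponding to the hash is unknown

S = spike_codebook(4, 6)                              # from the previous sketch
print(receiver_check(codebook_hash(S), {"cb0": S}))   # -> ACK
```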
The compressed information may be indicated/reported/provided to the receiver as part of the precoder information. A MIMO multi-layered embodiment, in one embodiment, individually precodes each layer with the proposed codebook. Common-layer precoding configurations may be reported/indicated/provided for layer-common precoding, whereas layer-independent precoding designs may be reported/indicated/provided on a per-layer basis. In some embodiments, the initial coding space with dimension M includes elements of subcarriers, elements of time domain symbols, elements of spatial layers, or a combination thereof. In some embodiments, where strong prior channel knowledge is present (e.g., a static LoS condition), the interference objective function can be defined at the transmitter from the point of view of the receiver, e.g., the precoding strategy is designed such that it minimizes the interference among the precoded sequences after the effect of the channel. Regarding heterogeneous multiplexing, in one embodiment, the proposed precoder 602 has the potential to transform any P2P link into a virtual synchronous multiple access channel given the intrinsic superposition and coded combining of input information symbols. The virtual multiple access channel obtained, in one embodiment, consists of the multiplexing of overloaded symbols originating from a single data stream or multiple data streams belonging to a single user, as highlighted in FIG. 6. FIG. 6 depicts a communications system (top: transmitter, bottom: receiver) realization where two independent data streams of a transmitter are multiplexed non-orthogonally at symbol level via a linear precoder 602 transformation which virtualizes the communication link to a synchronous multiple access channel at the stream level. Without loss of generality of the extension to multiple data streams, in one embodiment, let two independent, possibly heterogeneous (e.g., in terms of rate, latency, reliability demands) data streams be transmitted over a P2P link. The symbols associated with the first data stream post encoding, rate matching, and modulation form the symbol vector:

$$x_1 = [x_1, x_2, \ldots, x_{m_1}]^T \in \mathbb{C}^{m_1}, \quad x_i \in Q_1 \subset \mathbb{C}, \quad 0 < i \leq m_1 = \left\lceil \frac{n_1}{\log_2(|Q_1|)} \right\rceil, \qquad \text{(Eq. 15)}$$

given the channel coded data stream of $n_1$ bits and the constellation set $Q_1$ selected by the higher-layer configured MCS. Similarly, the symbols associated with the second data stream form the symbol vector $x_2$ as:

$$x_2 = [x_1, x_2, \ldots, x_{m_2}]^T \in \mathbb{C}^{m_2}, \quad x_i \in Q_2 \subset \mathbb{C}, \quad 0 < i \leq m_2 = \left\lceil \frac{n_2}{\log_2(|Q_2|)} \right\rceil. \qquad \text{(Eq. 16)}$$

The combined symbol vector yielded by the proposed precoder design is $s = Sx$, $x \triangleq [x_1^T, x_2^T]^T$, for a fixed design S = S(M, N, t), $N = m_1 + m_2$, and M the number of orthogonal physical resources available to the system, e.g., frequency subcarriers, time slots, signal layers, and/or the like. Given the proposed precoding and its simultaneous and synchronous spreading of the $x_1$ and $x_2$ symbols over all the available physical resources, the achievable rates of the two data streams based on the joint maximum likelihood detection strategy are:

$$R_1 \leq I(X_1; Y \mid X_2 = x_2), \quad R_2 \leq I(X_2; Y \mid X_1 = x_1), \quad R_1 + R_2 \leq I(X_1, X_2; Y), \qquad \text{(Eq. 17)}$$

where y is the channel-transformed realization of s according to some channel distribution p(y|s).
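The stream-combining step $s = Sx$ described above can be illustrated numerically. The following Python sketch (illustrative only; it reuses the hypothetical harmonic_codebook helper from the earlier sketch and assumes QPSK for both streams) compresses $m_1 + m_2 = N$ modulation symbols onto M < N resources:

```python
import numpy as np

# Sketch of the two-stream overloaded multiplexing: N = m1 + m2 modulation
# symbols are compressed onto M < N resources as s = S x.

rng = np.random.default_rng(1)
m1, m2 = 3, 3
qpsk = np.array([1+1j, 1-1j, -1+1j, -1-1j]) / np.sqrt(2)
x1 = rng.choice(qpsk, m1)                   # stream 1 symbols (Q1 = QPSK)
x2 = rng.choice(qpsk, m2)                   # stream 2 symbols (Q2 = QPSK here)
x = np.concatenate([x1, x2])                # x = [x1^T, x2^T]^T, N = m1 + m2

S = harmonic_codebook(m1 + m2, pruned_rows=[2, 5])   # M = 4 resources, N = 6
s = S @ x                                   # M-dimensional compressed symbol
print(s.shape)                              # (4,) -> mapped onto M resources
```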
Following Shannon's channel capacity limit and its extension to the multiple access channel, the right-hand side upper bound of Eq. 17 is given by the capacity of the discrete memoryless Gaussian multiple access channel as:

$$R_1 \leq \log\det\left(I_M + \frac{P \cdot S_1 x_1 x_1^H S_1^H}{N_0}\right) = \log\det\left(I_M + \frac{P_{S,X_1}}{N_0}\right)$$
$$R_2 \leq \log\det\left(I_M + \frac{P \cdot S_2 x_2 x_2^H S_2^H}{N_0}\right) = \log\det\left(I_M + \frac{P_{S,X_2}}{N_0}\right)$$
$$R_1 + R_2 \leq \log\det\left(I_M + \frac{P \cdot S x x^H S^H}{N_0}\right) = \log\det\left(I_M + \frac{P_{S,X_1} + P_{S,X_2}}{N_0}\right) \qquad \text{(Eq. 18)}$$

where P represents the joint diagonal transmission power matrix and $S_1$, $S_2$ represent the codewords out of the codebook S 604 associated with the symbols of $x_1$ and $x_2$, respectively. The selected precoding strategy, in one embodiment, is ensemble-optimal as it diagonalizes the power-spread SNR, maximizes the determinant term, and thus also maximizes the sum rate in Eq. 18, given the design of S(M, N, t) such that $SS^H$ is uniformly diagonal. In one embodiment, this increases the overall spectral efficiency of the P2P communications link up to the Shannon limit. Furthermore, in one embodiment, additional power control and optimization schemes may be complementarily embedded, as the design of the power matrix P is decoupled from the codebook. The usual capacity gap due to discrete constellation points is additionally attenuated by the proposed precoder, as it acts as a combiner of input symbols in the IQ-space and, as such, performs a spatial convolution. This effect results in a constellation shaping in joint geometric and stochastic terms based on the precoder realization. Precoder realizations based on antipodal designs, such as the harmonic spherical codes S(M, N, t), preserve the symmetry of the combined discrete constellation result, as displayed for reference in FIG. 7. For typical Q-sized PSK and/or QAM discrete constellations, which in one embodiment are normalized such that the first two moments are 0 and σ², respectively, the precoder thus optimally combines and represents the signal powers of heterogeneous data streams given their MCS and associated rates. The capacity region achievable by the proposed overloaded multiplexing technique is displayed in FIG. 8. The joint maximum likelihood detection scheme for decoding the individual information streams attains any rate pair within the convex hull (region S 802) and its boundaries. Alternatively, the SIC detection strategy resolves interference hierarchically by first detecting and decoding symbols in decreasing order of SNRs. This strategy, in one embodiment, achieves the corner points of operation O_X1 804 (if user X1 is decoded first) and O_X2 806 (if user X2 is decoded first). On the other hand, conventional strategies, such as treating interference as noise, typical for CDMA systems and single-user receivers, in one embodiment, would reduce spectral efficiency to region 808. Similarly, in one embodiment, TDM without power control would operate constrained to the boundary of region 2 810. And lastly, in one embodiment, TDM with adaptive power control (region 3 812) may achieve the sum-rate capacity at the potential cost of increased power usage above an average power constraint, additional feedback signaling regarding the CSI, and a prospective latency increase, as at least 2 channel slots are necessary to multiplex the desired information in the time domain. One embodiment may consider the overloaded multiplexing of a single data stream at a faster-than-Nyquist signaling rate given a fixed MCS and precoding setup, and may be configured dynamically by higher layers (e.g., RRC).
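The log-det bounds of Eq. 18 are easy to evaluate for a given realization. The following Python sketch (illustrative; it reuses S and x from the previous sketch and substitutes a scalar power P for the diagonal power matrix, an assumption made purely for brevity) computes the sum-rate bound:

```python
import numpy as np

def sum_rate_bound(S: np.ndarray, x: np.ndarray, P: float, N0: float) -> float:
    """Evaluate the sum-rate right-hand side of Eq. 18 for one realization,
    with a scalar power P standing in for the diagonal power matrix."""
    M = S.shape[0]
    Sx = S @ x.reshape(-1, 1)
    cov = P * (Sx @ Sx.conj().T)                # P * S x x^H S^H
    sign, logdet = np.linalg.slogdet(np.eye(M) + cov / N0)
    return logdet / np.log(2)                   # bits per channel use

print(sum_rate_bound(S, x, P=1.0, N0=0.1))      # S, x from the sketch above
```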
The MCS and the M physical resources selected for communication, in one embodiment, shall determine the dimensionality of the precoder realization since, for any k = nR information bits transmitted at rate R with a discrete input constellation set Q, the number of overloaded symbols post rate-matching is $N \geq \lceil n / \log_2(|Q|) \rceil$. The N codewords, in one embodiment, shall be directly mapped to the information symbols for the precoding transformation. The precoder enablement flag and selected configuration may be reported periodically (with each data transmission) as part of data channel (e.g., front-loaded) reference signals (e.g., DM-RS) to aid the detection and decoding, or non-periodically (upon a toggle or refresh of the precoder setup) as part of control channel information signals (e.g., DCI, UCI, SCI). Such a communication setup, in one embodiment, shall provide additional degrees of freedom in terms of configuration to allow a more flexible rate adaptation based on a common code rate, where advantage may be taken of longer codewords and constellation shaping to increase the overall link spectral efficiency and achieve the Shannon capacity limit, or approach it more closely than conventional discrete communications systems, respectively. A second embodiment, which may consider overloaded multiplexing of a single data stream at a faster-than-Nyquist signaling rate given a set of MCS configurations and a common precoding setup, may be configured dynamically by the higher layers (e.g., RRC) and triggered by network-level metadata information (e.g., a UEP trigger). Therefore, in one embodiment, the higher layers may partition the k information bits into distinct bit sequences, each with its individually configured MCS. Without loss of generality, in one embodiment, consider the realization where two separate rates are desired such that two transport blocks ("TBs") of lengths $k_1 = n_1 R_1$ and $k_2 = n_2 R_2$ bits are formed, with $k = k_1 + k_2$ and $R_1 \leq R_2$. These sequences may be served by the same HARQ process or by different HARQ processes. The corresponding coded bits may be mapped post rate-matching to their individually configured discrete modulation symbol sets, $Q_1$, $Q_2$. The number of overloaded symbols corresponding to the more redundant component is $m_1 \geq \lceil n_1 / \log_2(|Q_1|) \rceil$, while the second component coded at rate $R_2$ maps to $m_2 \geq \lceil n_2 / \log_2(|Q_2|) \rceil$ symbols, such that the codebook size is $N = m_1 + m_2$. If the rate requirement $R_2 \leq \log\det\left(I_M + \frac{P_{S,X_2}}{N_0}\right)$ is fulfilled, in one embodiment, more resources may be allocated (e.g., by the RRC) to the first component to adaptively increase its reliability at the rate $R_1$ by decreasing the constellation size. For adaptive configurations, in one embodiment, the precoder setup and the codewords' mapping to the information symbols of the distinct TBs for the precoding transform may be reported as an additional field of (e.g., front-loaded) information over a data channel reference signal (e.g., DM-RS). For static configurations, this information may be reported via the control channel information signals (e.g., DCI, UCI, SCI) to reduce the signaling overhead with respect to the precoder configuration and its mapping. In a third embodiment, overloaded multiplexing of different data streams may be considered, where two or more possibly uncorrelated information sources are combined by the proposed precoding.
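The dimensioning rule above reduces to simple integer arithmetic; the following minimal Python sketch (hypothetical names, illustrative numbers) computes the overloaded symbol count and the resulting overloading factor for a single stream:

```python
import math

def overloaded_symbols(n_bits: int, constellation_size: int) -> int:
    """Minimum number of overloaded symbols N >= ceil(n / log2(|Q|)) for n
    coded bits and constellation Q, per the dimensioning rule above."""
    return math.ceil(n_bits / math.log2(constellation_size))

n, Q = 120, 16                     # 120 coded bits, 16-QAM (illustrative)
N = overloaded_symbols(n, Q)       # -> 30 symbols
M = 24                             # available physical resources (illustrative)
print(N, N / M)                    # overloading gain gamma = N / M = 1.25
```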
Higher-layer QoS, rate, latency, and reliability requirements of the data streams may be used by the RRC and lower layers to optimize and set up a joint configuration set of MCSs associated with the individual data streams and a precoder configuration, respectively. Without loss of generality, in one embodiment, consider two distinct data streams and their associated TBs, each containing $k_1 = n_1 R_1$ and $k_2 = n_2 R_2$ information bits to be transmitted at rates $R_1$ and $R_2$, where, based on the required QoS, minimum transmission rates $\rho_1 \leq R_1$ and $\rho_2 \leq R_2$ may be necessary for each data stream to avoid outage given the additional latency and reliability constraints associated with the TBs of the two streams. The instantaneous configuration optimized by the higher layers, in one embodiment, shall yield realizations where

$$\left\lceil \frac{k_1}{\rho_1 \cdot \log_2(|Q_1|)} \right\rceil \geq m_1 \geq \left\lceil \frac{n_1}{\log_2(|Q_1|)} \right\rceil \quad \text{and} \quad \left\lceil \frac{k_2}{\rho_2 \cdot \log_2(|Q_2|)} \right\rceil \geq m_2 \geq \left\lceil \frac{n_2}{\log_2(|Q_2|)} \right\rceil$$

symbols are non-orthogonally multiplexed over $M < m_1 + m_2$ physical resource elements. This setup, in one embodiment, shall provide additional degrees of freedom in terms of available physical resource elements to allow for a more flexible adaptation to heterogeneous demands on rate, latency, and reliability based on the precoded rate splits given the selected $m_1$ and $m_2$ for the two data streams. A fourth embodiment may implement the proposed overloaded multiplexing of two data streams, $x_1$ of a first type (e.g., eMBB, associated with a first priority level) and $x_2$ of a second type (e.g., URLLC, associated with a second priority level, where the second priority level may be higher than the first priority level), respectively. The expected rate pairs, in one embodiment, are thus $R_1 \leq R_2$, and the URLLC data symbols shall be prioritized in terms of latency and rate realization by the MAC layer relative to the eMBB data. As a result, in one embodiment, the rate splitting between the two streams may be selected such that $R_1$ is temporarily lowered to a minimum level $q_1 \leq R_1$, which may correspond to the eMBB QoS, to allow an effective sum-rate realization $R_1 + R_2$ under the maximum achievable total rate given the CSI constraints, with an $R_2$ rate fulfilling the reliability requirements of the URLLC stream decoding under the available CSI SNR levels. Once the transmission of the URLLC TB has concluded and the TB corresponding to the HARQ process has been acknowledged, in one embodiment, the eMBB stream, if not already completed, shall be brought back to a rate realization higher than $q_1$, if possible, given the updated channel conditions. In a fifth embodiment, the joint transmission via the proposed overloaded precoding of a set of streams may be realized, out of which at least one stream, $x_1$, is of the second type (e.g., URLLC) and the others, e.g., $x_2, x_3, \ldots, x_p$, represent a third type of traffic (e.g., mMTC, associated with a third priority level, where the third priority level may be lower than the second priority level). Upon prioritization of the URLLC data for latency and the rate $R_1$ necessary to fulfill the reliability requirements, the gap to the achievable sum rate given the CSI and available transmission power may be filled by mMTC symbols precoded and multiplexed spanning one or multiple coded blocks.
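The two-sided inequality above defines an admissible range for each stream's symbol count; the following Python sketch (illustrative numbers and hypothetical names only) evaluates that range for one stream:

```python
import math

def symbol_range(k_bits: int, n_bits: int, rho: float, q: int):
    """Admissible range for m_i per the inequality above:
    ceil(k / (rho * log2|Q|)) >= m >= ceil(n / log2|Q|)."""
    upper = math.ceil(k_bits / (rho * math.log2(q)))
    lower = math.ceil(n_bits / math.log2(q))
    return lower, upper

# Illustrative numbers: k1 = 100 info bits coded into n1 = 200 bits (R1 = 0.5),
# QoS rate floor rho1 = 0.4, QPSK (|Q1| = 4)
lo, hi = symbol_range(100, 200, 0.4, 4)
print(lo, hi)    # any m1 with lo <= m1 <= hi satisfies both rate constraints
```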
A sixth embodiment may treat the multiplexing realization by the proposed precoded overloading of a set of streams where one of the streams, $x_1$, corresponds to a first type (e.g., eMBB data), and the other symbols, e.g., $x_2, x_3, \ldots, x_p$, represent a third type (e.g., mMTC traffic, associated with a third priority level, where the third priority level may be lower than the first priority level). The total available sum rate of the link given the CSI conditions and the total available transmit power may be split accordingly between eMBB and mMTC TBs. The eMBB traffic may be adapted between a minimum QoS-based rate $\rho_1 \leq R_1$ and its maximum achievable rate to provide additional rate overhead for the transmission of mMTC TBs in case this is necessary. Otherwise, the eMBB rate, in one embodiment, shall be prioritized, and mMTC traffic may be overloaded sporadically as it occurs or in batches of symbols. Note that the realizations above, in one embodiment, depend on the selection of the precoder S(M, N, t), as the capacity gain obtained by combining the symbols is in fact an overloading gain defined by γ = N/M, since the precoded superposition of symbols increases the effective SNR given that the introduced interference can be decoded and thus resolved. This latter assumption, in one embodiment, is valid in practice with modern receivers at cubic complexity in the number of information symbols to be recovered, N. Thus, a performance-complexity trade-off policy may be considered in the selection of the precoder, where additional information on the receiver capabilities may be utilized, if available. FIG. 9 depicts a user equipment apparatus 900 that may be used for determining a precoder for wireless communications, according to embodiments of the disclosure. In various embodiments, the user equipment apparatus 900 is used to implement one or more of the solutions described above. The user equipment apparatus 900 may be one embodiment of the remote unit 105 and/or the UE, described above. Furthermore, the user equipment apparatus 900 may include a processor 905, a memory 910, an input device 915, an output device 920, and a transceiver 925. In some embodiments, the input device 915 and the output device 920 are combined into a single device, such as a touchscreen. In certain embodiments, the user equipment apparatus 900 may not include any input device 915 and/or output device 920. In various embodiments, the user equipment apparatus 900 may include one or more of the processor 905, the memory 910, and the transceiver 925, and may not include the input device 915 and/or the output device 920. As depicted, the transceiver 925 includes at least one transmitter 930 and at least one receiver 935. In some embodiments, the transceiver 925 communicates with one or more cells (or wireless coverage areas) supported by one or more base units 121. In various embodiments, the transceiver 925 is operable on unlicensed spectrum. Moreover, the transceiver 925 may include multiple UE panels supporting one or more beams. Additionally, the transceiver 925 may support at least one network interface 940 and/or application interface 945. The application interface(s) 945 may support one or more APIs. The network interface(s) 940 may support 3GPP reference points, such as Uu, N1, PC5, etc. Other network interfaces 940 may be supported, as understood by one of ordinary skill in the art. The processor 905, in one embodiment, may include any known controller capable of executing computer-readable instructions and/or capable of performing logical operations.
For example, the processor 905 may be a microcontroller, a microprocessor, a central processing unit ("CPU"), a graphics processing unit ("GPU"), an auxiliary processing unit, a field programmable gate array ("FPGA"), or a similar programmable controller. In some embodiments, the processor 905 executes instructions stored in the memory 910 to perform the methods and routines described herein. The processor 905 is communicatively coupled to the memory 910, the input device 915, the output device 920, and the transceiver 925. In certain embodiments, the processor 905 may include an application processor (also known as a "main processor") which manages application-domain and operating system ("OS") functions and a baseband processor (also known as a "baseband radio processor") which manages radio functions. In various embodiments, the processor 905 and the transceiver 925 control the user equipment apparatus 900 to implement the above-described UE behaviors. In one embodiment, the processor 905 determines a precoder for a set of modulation symbols, the determined precoder comprising a plurality of codewords and configured to reduce an ensemble interference and a pairwise interference among the plurality of codewords. In one embodiment, the processor 905 precodes the set of modulation symbols based on the determined precoder. In one embodiment, the processor 905 maps the precoded set of modulation symbols to a set of physical transmission resources for a transmission layer, wherein a number of modulation symbols in the set of modulation symbols is greater than a number of physical transmission resources in the set of physical transmission resources. In one embodiment, the transceiver 925 transmits, to a receiver node, an indication of the determined precoder and the set of physical transmission resources. In one embodiment, the precoder at least one of compresses, superposes, and combines the set of modulation symbols. In one embodiment, the processor 905 generates a non-orthogonally multiplexed channel for the physical transmission resources by overloading the set of modulation symbols at a transmission rate that is faster than the Nyquist transmission rate. In one embodiment, the precoder comprises a codebook of N codewords selected on the surface of a complex unit M-sphere, where M is the number of physical transmission resources and M < N. In one embodiment, the precoder codebook is based on a set of discrete harmonics on the sphere surface given by an N-th root of unity, and the precoder codebook comprises an M×N truncated N-DFT linear codebook. In one embodiment, the processor 905 selects M harmonics of the set of discrete harmonics for a compressed representation of a precoded signal space of the set of modulation symbols based on optimization criteria, wherein the optimization criteria comprise at least one of: an ensemble interference magnitude and a pairwise interference magnitude given the entire search space of the available discrete set of N harmonics; partially, the ensemble interference magnitude upon the selection of any M harmonics out of the available discrete set of N harmonics; and an ensemble interference magnitude and a pairwise interference magnitude over all possible harmonic and non-harmonic spherical codebook realizations by selecting any M harmonics out of an available set of N = M+1 harmonics.
In one embodiment, a configuration of the precoder codebook is based on a dimensionality and on the indices of at least one of the selected M harmonics and a pruned N−M set of discrete harmonics, the configuration of the determined precoder being transmitted to the receiver node. In one embodiment, the processor 905 determines the M×N linear precoding codebook based on the storage of a tabulated codebook entry comprising the set of selected or pruned harmonic indices and at least two codebook dimension parameters selected from the group comprising M, N, N/M, N−M, and any combination thereof. In one embodiment, the transceiver 925 transmits an indication of a codebook index corresponding to the tabulated codebook entry of the determined precoder to the receiver node. In one embodiment, the processor 905 determines the precoder configuration based on at least one of: a rate, a latency, and a reliability of at least a portion of the set of modulation symbols; CSI, wherein the CSI comprises at least a channel quality indicator ("CQI"); and a channel coder type and MCS. In one embodiment, the precoder comprises a codebook of N codewords based on complex-valued approximate spherical codes that reduce the ensemble and pairwise codeword interference magnitude, yielding an M×N linear precoding codebook, the codewords being approximately placed on the unit M-sphere. In one embodiment, the processor 905 determines an M×N configuration of the linear precoding codebook based on an indexed codebook entry within a memory processing unit, the index derived from at least one of a non-decreasing index counter and a hash function representation of the codebook. In one embodiment, the processor 905 determines the precoder configuration based on at least one of: a rate, a latency, and a reliability of at least a portion of the set of modulation symbols; CSI, wherein the CSI comprises at least a CQI; and a channel coder type and MCS. In one embodiment, the set of modulation symbols comprises an information stream comprising at least one of a fixed coding rate and modulation configuration across a TB, a dynamic coding rate and modulation configuration across the TB, and an additional non-negative number of information streams with distinct coding rates and modulation configurations for unequal error protection. In one embodiment, the precoded set of modulation symbols represents at least two distinct non-orthogonally and synchronously multiplexed streams of information. In one embodiment, the at least two multiplexed information streams comprise heterogeneous requirements with respect to at least one of a rate, a latency, and a reliability. In one embodiment, the transceiver 925 transmits at least a portion of the precoder codebook information corresponding to the indicated determined precoder in response to receiving an indication of at least one of an unknown and an unverified precoder corresponding to the indicated determined precoder. In one embodiment, the processor 905 uses one of a layer-common precoding with the same determined precoder for each transmission layer and a layer-independent precoding with different precoders determined for each transmission layer. The memory 910, in one embodiment, is a computer readable storage medium. In some embodiments, the memory 910 includes volatile computer storage media. For example, the memory 910 may include RAM, including dynamic RAM ("DRAM"), synchronous dynamic RAM ("SDRAM"), and/or static RAM ("SRAM"). In some embodiments, the memory 910 includes non-volatile computer storage media.
For example, the memory 910 may include a hard disk drive, a flash memory, or any other suitable non-volatile computer storage device. In some embodiments, the memory 910 includes both volatile and non-volatile computer storage media. In some embodiments, the memory 910 stores data related to determining a precoder for wireless communications. For example, the memory 910 may store various parameters, panel/beam configurations, resource assignments, policies, and the like, as described above. In certain embodiments, the memory 910 also stores program code and related data, such as an operating system or other controller algorithms operating on the user equipment apparatus 900. The input device 915, in one embodiment, may include any known computer input device including a touch panel, a button, a keyboard, a stylus, a microphone, or the like. In some embodiments, the input device 915 may be integrated with the output device 920, for example, as a touchscreen or similar touch-sensitive display. In some embodiments, the input device 915 includes a touchscreen such that text may be input using a virtual keyboard displayed on the touchscreen and/or by handwriting on the touchscreen. In some embodiments, the input device 915 includes two or more different devices, such as a keyboard and a touch panel. The output device 920, in one embodiment, is designed to output visual, audible, and/or haptic signals. In some embodiments, the output device 920 includes an electronically controllable display or display device capable of outputting visual data to a user. For example, the output device 920 may include, but is not limited to, an LCD display, an LED display, an OLED display, a projector, or a similar display device capable of outputting images, text, or the like to a user. As another, non-limiting, example, the output device 920 may include a wearable display separate from, but communicatively coupled to, the rest of the user equipment apparatus 900, such as a smart watch, smart glasses, a heads-up display, or the like. Further, the output device 920 may be a component of a smart phone, a personal digital assistant, a television, a tablet computer, a notebook (laptop) computer, a personal computer, a vehicle dashboard, or the like. In certain embodiments, the output device 920 includes one or more speakers for producing sound. For example, the output device 920 may produce an audible alert or notification (e.g., a beep or chime). In some embodiments, the output device 920 includes one or more haptic devices for producing vibrations, motion, or other haptic feedback. In some embodiments, all or portions of the output device 920 may be integrated with the input device 915. For example, the input device 915 and the output device 920 may form a touchscreen or similar touch-sensitive display. In other embodiments, the output device 920 may be located near the input device 915. The transceiver 925 communicates with one or more network functions of a mobile communication network via one or more access networks. The transceiver 925 operates under the control of the processor 905 to transmit messages, data, and other signals and also to receive messages, data, and other signals. For example, the processor 905 may selectively activate the transceiver 925 (or portions thereof) at particular times in order to send and receive messages. The transceiver 925 includes at least one transmitter 930 and at least one receiver 935. One or more transmitters 930 may be used to provide UL communication signals to a base unit 121, such as the UL transmissions described herein.
Similarly, one or more receivers 935 may be used to receive DL communication signals from the base unit 121, as described herein. Although only one transmitter 930 and one receiver 935 are illustrated, the user equipment apparatus 900 may have any suitable number of transmitters 930 and receivers 935. Further, the transmitter(s) 930 and the receiver(s) 935 may be any suitable type of transmitters and receivers. In one embodiment, the transceiver 925 includes a first transmitter/receiver pair used to communicate with a mobile communication network over licensed radio spectrum and a second transmitter/receiver pair used to communicate with a mobile communication network over unlicensed radio spectrum. In certain embodiments, the first transmitter/receiver pair used to communicate with a mobile communication network over licensed radio spectrum and the second transmitter/receiver pair used to communicate with a mobile communication network over unlicensed radio spectrum may be combined into a single transceiver unit, for example, a single chip performing functions for use with both licensed and unlicensed radio spectrum. In some embodiments, the first transmitter/receiver pair and the second transmitter/receiver pair may share one or more hardware components. For example, certain transceivers 925, transmitters 930, and receivers 935 may be implemented as physically separate components that access a shared hardware resource and/or software resource, such as, for example, the network interface 940. In various embodiments, one or more transmitters 930 and/or one or more receivers 935 may be implemented and/or integrated into a single hardware component, such as a multi-transceiver chip, a system-on-a-chip, an ASIC, or another type of hardware component. In certain embodiments, one or more transmitters 930 and/or one or more receivers 935 may be implemented and/or integrated into a multi-chip module. In some embodiments, other components such as the network interface 940 or other hardware components/circuits may be integrated with any number of transmitters 930 and/or receivers 935 into a single chip. In such embodiments, the transmitters 930 and receivers 935 may be logically configured as a transceiver 925 that uses one or more common control signals, or as modular transmitters 930 and receivers 935 implemented in the same hardware chip or in a multi-chip module. FIG. 10 depicts a network apparatus 1000 that may be used for determining a precoder for wireless communications, according to embodiments of the disclosure. In one embodiment, the network apparatus 1000 may be one implementation of a RAN node, such as the base unit 121, the RAN node 120, or a gNB, described above. Furthermore, the network apparatus 1000 may include a processor 1005, a memory 1010, an input device 1015, an output device 1020, and a transceiver 1025. In some embodiments, the input device 1015 and the output device 1020 are combined into a single device, such as a touchscreen. In certain embodiments, the network apparatus 1000 may not include any input device 1015 and/or output device 1020. In various embodiments, the network apparatus 1000 may include one or more of the processor 1005, the memory 1010, and the transceiver 1025, and may not include the input device 1015 and/or the output device 1020. As depicted, the transceiver 1025 includes at least one transmitter 1030 and at least one receiver 1035. Here, the transceiver 1025 communicates with one or more remote units 105. Additionally, the transceiver 1025 may support at least one network interface 1040 and/or application interface 1045.
The application interface(s) 1045 may support one or more APIs. The network interface(s) 1040 may support 3GPP reference points, such as Uu, N1, N2, and N3. Other network interfaces 1040 may be supported, as understood by one of ordinary skill in the art. The processor 1005, in one embodiment, may include any known controller capable of executing computer-readable instructions and/or capable of performing logical operations. For example, the processor 1005 may be a microcontroller, a microprocessor, a CPU, a GPU, an auxiliary processing unit, an FPGA, or a similar programmable controller. In some embodiments, the processor 1005 executes instructions stored in the memory 1010 to perform the methods and routines described herein. The processor 1005 is communicatively coupled to the memory 1010, the input device 1015, the output device 1020, and the transceiver 1025. In certain embodiments, the processor 1005 may include an application processor (also known as a "main processor") which manages application-domain and operating system ("OS") functions and a baseband processor (also known as a "baseband radio processor") which manages radio functions. In various embodiments, the network apparatus 1000 is a RAN node (e.g., a gNB) that includes a processor 1005 and a transceiver 1025. In one embodiment, the transceiver 1025 receives an indication of a determined precoder from a transmitter node and receives a set of physical transmission resources, the physical transmission resources mapped to a set of modulation symbols that are precoded using the determined precoder. In one embodiment, the processor 1005 uses the determined precoder and the set of physical transmission resources for transmissions between the receiver node and the transmitter node. The memory 1010, in one embodiment, is a computer readable storage medium. In some embodiments, the memory 1010 includes volatile computer storage media. For example, the memory 1010 may include RAM, including dynamic RAM ("DRAM"), synchronous dynamic RAM ("SDRAM"), and/or static RAM ("SRAM"). In some embodiments, the memory 1010 includes non-volatile computer storage media. For example, the memory 1010 may include a hard disk drive, a flash memory, or any other suitable non-volatile computer storage device. In some embodiments, the memory 1010 includes both volatile and non-volatile computer storage media. In some embodiments, the memory 1010 stores data related to determining a precoder for wireless communications. For example, the memory 1010 may store parameters, configurations, resource assignments, policies, and the like, as described above. In certain embodiments, the memory 1010 also stores program code and related data, such as an operating system or other controller algorithms operating on the network apparatus 1000. The input device 1015, in one embodiment, may include any known computer input device including a touch panel, a button, a keyboard, a stylus, a microphone, or the like. In some embodiments, the input device 1015 may be integrated with the output device 1020, for example, as a touchscreen or similar touch-sensitive display. In some embodiments, the input device 1015 includes a touchscreen such that text may be input using a virtual keyboard displayed on the touchscreen and/or by handwriting on the touchscreen. In some embodiments, the input device 1015 includes two or more different devices, such as a keyboard and a touch panel. The output device 1020, in one embodiment, is designed to output visual, audible, and/or haptic signals.
In some embodiments, the output device1020includes an electronically controllable display or display device capable of outputting visual data to a user. For example, the output device1020may include, but is not limited to, an LCD display, an LED display, an OLED display, a projector, or similar display device capable of outputting images, text, or the like to a user. As another non-limiting example, the output device1020may include a wearable display separate from, but communicatively coupled to, the rest of the network apparatus1000, such as a smart watch, smart glasses, a heads-up display, or the like. Further, the output device1020may be a component of a smart phone, a personal digital assistant, a television, a tablet computer, a notebook (laptop) computer, a personal computer, a vehicle dashboard, or the like. In certain embodiments, the output device1020includes one or more speakers for producing sound. For example, the output device1020may produce an audible alert or notification (e.g., a beep or chime). In some embodiments, the output device1020includes one or more haptic devices for producing vibrations, motion, or other haptic feedback. In some embodiments, all or portions of the output device1020may be integrated with the input device1015. For example, the input device1015and output device1020may form a touchscreen or similar touch-sensitive display. In other embodiments, the output device1020may be located near the input device1015. The transceiver1025includes at least one transmitter1030and at least one receiver1035. One or more transmitters1030may be used to communicate with the UE, as described herein. Similarly, one or more receivers1035may be used to communicate with network functions in the non-public network ("NPN"), PLMN, and/or RAN, as described herein. Although only one transmitter1030and one receiver1035are illustrated, the network apparatus1000may have any suitable number of transmitters1030and receivers1035. Further, the transmitter(s)1030and the receiver(s)1035may be any suitable type of transmitters and receivers. FIG.11is a flowchart diagram of a method1100for determining a precoder for wireless communications. The method1100may be performed by a transmitter node such as a UE as described herein, for example, the remote unit105, the UE and/or the user equipment apparatus900and/or a network entity such as a base node, a gNB, and/or the network equipment apparatus1000. In some embodiments, the method1100may be performed by a processor executing program code, for example, a microcontroller, a microprocessor, a CPU, a GPU, an auxiliary processing unit, a FPGA, or the like. In one embodiment, the method1100includes determining1105a precoder for a set of modulation symbols, the determined precoder comprising a plurality of codewords and configured to reduce an ensemble interference and a pairwise interference among the plurality of codewords. In one embodiment, the method1100includes precoding1110the set of modulation symbols based on the determined precoder. In one embodiment, the method1100includes mapping1115the precoded set of modulation symbols to a set of physical transmission resources for a transmission layer, wherein a number of modulation symbols in the set of modulation symbols is greater than a number of physical transmission resources in the set of physical transmission resources. 
In one embodiment, the method1100includes transmitting1120, to a receiver node, an indication of the determined precoder and the set of physical transmission resources, and the method1100ends. FIG.12is a flowchart diagram of a method1200for determining a precoder for wireless communications. The method1200may be performed by a receiver node such as a UE as described herein, for example, the remote unit105, the UE and/or the user equipment apparatus900and/or a network entity such as a base node, a gNB, and/or the network equipment apparatus1000. In some embodiments, the method1200may be performed by a processor executing program code, for example, a microcontroller, a microprocessor, a CPU, a GPU, an auxiliary processing unit, a FPGA, or the like. In one embodiment, the method1200includes receiving1205an indication of a determined precoder from a transmitter node. In one embodiment, the method1200includes receiving1210a set of physical transmission resources, the physical transmission resources mapped to a set of modulation symbols that are precoded using the determined precoder. In one embodiment, the method1200includes using1215the determined precoder and the set of physical transmission resources for transmissions between the receiver node and the transmitter node, and the method1200ends. A first apparatus is disclosed for determining a precoder for wireless communications. The first apparatus may include a transmitter node such as a UE as described herein, for example, the remote unit105, the UE and/or the user equipment apparatus900and/or a network entity such as a base node, a gNB, and/or the network equipment apparatus1000. In some embodiments, the first apparatus may include a processor executing program code, for example, a microcontroller, a microprocessor, a CPU, a GPU, an auxiliary processing unit, a FPGA, or the like. In one embodiment, the first apparatus includes a processor that determines a precoder for a set of modulation symbols, the determined precoder comprising a plurality of codewords and configured to reduce an ensemble interference and a pairwise interference among the plurality of codewords. In one embodiment, the processor precodes the set of modulation symbols based on the determined precoder. In one embodiment, the processor maps the precoded set of modulation symbols to a set of physical transmission resources for a transmission layer, wherein a number of modulation symbols in the set of modulation symbols is greater than a number of physical transmission resources in the set of physical transmission resources. In one embodiment, the first apparatus includes a transceiver that transmits, to a receiver node, an indication of the determined precoder and the set of physical transmission resources. In one embodiment, the precoder at least one of compresses, superpositions, and combines the set of modulation symbols. In one embodiment, the processor generates a non-orthogonally multiplexed channel for the physical transmission resources by overloading the set of modulation symbols at a transmission rate that is faster than a Nyquist transmission rate. In one embodiment, the precoder comprises a codebook of N codewords selected on a surface of a complex unit M-sphere, where M is the number of physical transmission resources and M<N. In one embodiment, the precoder codebook is based on a set of discrete harmonics on the sphere surface given by an N-th root of unity and the precoder codebook comprises an M×N truncated N-DFT linear codebook. 
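For illustration only, and not as a definitive implementation of the disclosed embodiments, the following Python sketch builds such an M×N truncated N-DFT linear codebook and applies it: M rows (harmonics) of the N-point DFT matrix are kept, the columns are scaled to unit norm so that each of the N codewords lies on the complex unit M-sphere, and N modulation symbols are compressed onto M<N physical transmission resources (an overloading factor of N/M). The dimensions, the QPSK symbols, and the minimum-norm recovery step are assumptions made for this sketch.

    import numpy as np

    def truncated_dft_codebook(M, N, rows=None):
        # M x N codebook: M selected harmonics (rows) of the N x N DFT
        # matrix; unit-norm columns place each codeword on the complex
        # unit M-sphere.
        rows = list(range(M)) if rows is None else list(rows)
        n = np.arange(N)
        C = np.exp(-2j * np.pi * np.outer(rows, n) / N)
        return C / np.sqrt(M)

    M, N = 4, 6                                   # overloading factor N/M = 1.5
    C = truncated_dft_codebook(M, N)
    rng = np.random.default_rng(0)
    s = (rng.choice([-1, 1], N) + 1j * rng.choice([-1, 1], N)) / np.sqrt(2)  # N QPSK symbols (assumed)
    y = C @ s                                     # precoded onto M physical resources
    s_hat = np.linalg.pinv(C) @ y                 # minimum-norm estimate only; an actual
                                                  # receiver would use a non-orthogonal detector

Because M<N, the N codewords cannot be mutually orthogonal, which is why the codeword interference metrics discussed next matter.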
In one embodiment, the processor selects M harmonics of the set of discrete harmonics for a compressed representation of a precoded signal space of the set of modulation symbols based on optimization criteria, wherein the optimization criteria comprise at least one of an ensemble interference magnitude and a pairwise interference magnitude given an entire search space of the available discrete set of N harmonics, partially to the ensemble interference magnitude upon the selection of any M harmonics out of the available discrete set of N harmonics, and an ensemble interference magnitude and a pairwise interference magnitude over all possible harmonic and non-harmonic spherical codebook realizations by selecting any M harmonics out of an available set of N=M+1 harmonics. In one embodiment, a configuration of the precoder codebook is based on a dimensionality and indices of at least one of the selected M harmonics and a pruned N−M set of discrete harmonics, the configuration of the determined precoder transmitted to the receiver node. In one embodiment, the processor determines the M×N linear precoding codebook based on storage of a tabulated codebook entry comprising the set of selected or pruned harmonic indices and at least two codebook dimension parameters selected from the group comprising M, N, NM, N−M, and any combination thereof. In one embodiment, the transceiver transmits an indication of a codebook index corresponding to the tabulated codebook entry of the determined precoder to the receiver node. In one embodiment, the processor determines the precoder configuration based on at least one of a rate, a latency, and a reliability of at least a portion of the set of modulation symbols, CSI, wherein the CSI comprises at least a CQI, and channel coder type and MCS. In one embodiment, the precoder comprises a codebook of N codewords based on complex-valued approximate spherical codes that reduce the ensemble and pairwise codeword interference magnitude yielding an M×N linear precoding codebook, the codewords approximately placed on the unit M-sphere. In one embodiment, the processor determines an M×N configuration of the linear precoding codebook based on an indexed codebook entry within a memory processing unit, the index derived based on at least one of a non-decreasing index counter and a hash function representation of the codebook. In one embodiment, the processor determines the precoder configuration based on at least one of a rate, a latency, and a reliability of at least a portion of the set of modulation symbols, CSI, wherein the CSI comprises at least a CQI, and channel coder type and MCS. In one embodiment, the set of modulation symbols comprises an information stream comprising at least one of a fixed coding rate and modulation configuration across a TB, a dynamic coding rate and modulation configuration across the TB, and an additional non-negative number of information streams with distinct coding rates and modulation configurations for unequal error protection. In one embodiment, the precoded set of modulation symbols represents at least two distinct non-orthogonally and synchronously multiplexed streams of information. In one embodiment, the at least two multiplexed information streams comprise heterogeneous requirements with respect to at least one of a rate, a latency, and a reliability. 
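To make the harmonic-selection criterion described above concrete, the sketch below exhaustively scores every choice of M harmonics out of the N available ones. As an illustrative assumption, the ensemble interference magnitude is modeled as the off-diagonal energy of the codebook Gram matrix and the pairwise interference magnitude as its largest off-diagonal entry, compared lexicographically; the criteria and search strategy of an actual embodiment may differ.

    import numpy as np
    from itertools import combinations

    def interference_metrics(C):
        # Gram matrix of the codewords; off-diagonal entries are the
        # pairwise inner products (cross-correlations) between codewords.
        G = C.conj().T @ C
        off = np.abs(G - np.diag(np.diag(G)))
        return np.sum(off ** 2), np.max(off)      # (ensemble, pairwise) magnitudes

    def select_harmonics(M, N):
        # Exhaustive search over all C(N, M) harmonic subsets.
        n = np.arange(N)
        best_score, best_rows = None, None
        for rows in combinations(range(N), M):
            C = np.exp(-2j * np.pi * np.outer(rows, n) / N) / np.sqrt(M)
            score = interference_metrics(C)
            if best_score is None or score < best_score:
                best_score, best_rows = score, rows
        return best_rows

    print(select_harmonics(4, 6))                 # indices of the M selected harmonics

Once a subset is chosen, only the selected (or pruned) harmonic indices and the codebook dimensions need to be stored or signaled, which is consistent with the tabulated codebook entries described above.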
In one embodiment, the transceiver transmits at least a portion of the precoder codebook information corresponding to the indicated determined precoder in response to receiving an indication of at least one of an unknown and an unverified precoder corresponding to the indicated determined precoder. In one embodiment, the processor uses one of a layer-common precoding with a same determined precoder for each transmission layer and a layer-independent precoding with different precoders determined for each transmission layer. A first method is disclosed for determining a precoder for wireless communications. The first method may be performed by a transmitter node such as a UE as described herein, for example, the remote unit105, the UE and/or the user equipment apparatus900and/or a network entity such as a base node, a gNB, and/or the network equipment apparatus1000. In some embodiments, the first method may be performed by a processor executing program code, for example, a microcontroller, a microprocessor, a CPU, a GPU, an auxiliary processing unit, a FPGA, or the like. In one embodiment, the first method includes determining a precoder for a set of modulation symbols, the determined precoder comprising a plurality of codewords and configured to reduce an ensemble interference and a pairwise interference among the plurality of codewords. In one embodiment, the first method includes precoding the set of modulation symbols based on the determined precoder. In one embodiment, the first method includes mapping the precoded set of modulation symbols to a set of physical transmission resources for a transmission layer, wherein a number of modulation symbols in the set of modulation symbols is greater than a number of physical transmission resources in the set of physical transmission resources. In one embodiment, the first method includes transmitting, to a receiver node, an indication of the determined precoder and the set of physical transmission resources. In one embodiment, the precoder at least one of compresses, superpositions, and combines the set of modulation symbols. In one embodiment, the first method includes generating a non-orthogonally multiplexed channel for the physical transmission resources by overloading the set of modulation symbols at a transmission rate that is faster than a Nyquist transmission rate. In one embodiment, the precoder comprises a codebook of N codewords selected on a surface of a complex unit M-sphere, where M is the number of physical transmission resources and M<N. In one embodiment, the precoder codebook is based on a set of discrete harmonics on the sphere surface given by an N-th root of unity and the precoder codebook comprises an M×N truncated N-DFT linear codebook. In one embodiment, the first method includes selecting M harmonics of the set of discrete harmonics for a compressed representation of a precoded signal space of the set of modulation symbols based on optimization criteria, wherein the optimization criteria comprise at least one of an ensemble interference magnitude and a pairwise interference magnitude given an entire search space of the available discrete set of N harmonics, partially to the ensemble interference magnitude upon the selection of any M harmonics out of the available discrete set of N harmonics, and an ensemble interference magnitude and a pairwise interference magnitude over all possible harmonic and non-harmonic spherical codebook realizations by selecting any M harmonics out of an available set of N=M+1 harmonics. 
In one embodiment, a configuration of the precoder codebook is based on a dimensionality and indices of at least one of the selected M harmonics and a pruned N−M set of discrete harmonics, the configuration of the determined precoder transmitted to the receiver node. In one embodiment, the first method includes determining the M×N linear precoding codebook based on storage of a tabulated codebook entry comprising the set of selected or pruned harmonic indices and at least two codebook dimension parameters selected from the group comprising M, N, NM, N−M, and any combination thereof. In one embodiment, the first method includes transmitting an indication of a codebook index corresponding to the tabulated codebook entry of the determined precoder to the receiver node. In one embodiment, the first method includes determining the precoder configuration based on at least one of a rate, a latency, and a reliability of at least a portion of the set of modulation symbols, CSI, wherein the CSI comprises at least a CQI, and channel coder type and MCS. In one embodiment, the precoder comprises a codebook of N codewords based on complex-valued approximate spherical codes that reduce the ensemble and pairwise codeword interference magnitude yielding an M×N linear precoding codebook, the codewords approximately placed on the unit M-sphere. In one embodiment, the first method includes determining an M×N configuration of the linear precoding codebook based on an indexed codebook entry within a memory processing unit, the index derived based on at least one of a non-decreasing index counter and a hash function representation of the codebook. In one embodiment, the first method includes determining the precoder configuration based on at least one of a rate, a latency, and a reliability of at least a portion of the set of modulation symbols, CSI, wherein the CSI comprises at least a CQI, and channel coder type and MCS. In one embodiment, the set of modulation symbols comprises an information stream comprising at least one of a fixed coding rate and modulation configuration across a TB, a dynamic coding rate and modulation configuration across the TB, and an additional non-negative number of information streams with distinct coding rates and modulation configurations for unequal error protection. In one embodiment, the precoded set of modulation symbols represents at least two distinct non-orthogonally and synchronously multiplexed streams of information. In one embodiment, the at least two multiplexed information streams comprise heterogeneous requirements with respect to at least one of a rate, a latency, and a reliability. In one embodiment, the first method includes transmitting at least a portion of the precoder codebook information corresponding to the indicated determined precoder in response to receiving an indication of at least one of an unknown and an unverified precoder corresponding to the indicated determined precoder. In one embodiment, the first method includes using one of a layer-common precoding with a same determined precoder for each transmission layer and a layer-independent precoding with different precoders determined for each transmission layer. A second apparatus is disclosed for determining a precoder for wireless communications. The second apparatus may include a receiver node such as a UE as described herein, for example, the remote unit105, the UE and/or the user equipment apparatus900and/or a network entity such as a base node, a gNB, and/or the network equipment apparatus1000. 
In some embodiments, the second apparatus may include a processor executing program code, for example, a microcontroller, a microprocessor, a CPU, a GPU, an auxiliary processing unit, a FPGA, or the like. In one embodiment, the second apparatus includes a transceiver that receives an indication of a determined precoder from a transmitter node and receives a set of physical transmission resources, the physical transmission resources mapped to a set of modulation symbols that are precoded using the determined precoder. In one embodiment, the second apparatus includes a processor that uses the determined precoder and the set of physical transmission resources for transmissions between the receiver node and the transmitter node. A second method is disclosed for determining a precoder for wireless communications. The second method may be performed by a receiver node such as a UE as described herein, for example, the remote unit105, the UE and/or the user equipment apparatus900and/or a network entity such as a base node, a gNB, and/or the network equipment apparatus1000. In some embodiments, the second method may be performed by a processor executing program code, for example, a microcontroller, a microprocessor, a CPU, a GPU, an auxiliary processing unit, a FPGA, or the like. In one embodiment, the second method includes receiving an indication of a determined precoder from a transmitter node. In one embodiment, the second method includes receiving a set of physical transmission resources, the physical transmission resources mapped to a set of modulation symbols that are precoded using the determined precoder. In one embodiment, the second method includes using the determined precoder and the set of physical transmission resources for transmissions between the receiver node and the transmitter node. Embodiments may be practiced in other specific forms. The described embodiments are to be considered in all respects only as illustrative and not restrictive. The scope of the invention is, therefore, indicated by the appended claims rather than by the foregoing description. All changes which come within the meaning and range of equivalency of the claims are to be embraced within their scope.
With respect to the description of the drawings, the same or similar reference signs may be used for the same or similar elements. Any of the embodiment(s) described herein may be used in conjunction with any other embodiment(s) described herein. DETAILED DESCRIPTION Hereinafter, various example embodiments disclosed in the present disclosure will be described with reference to the accompanying drawings. However, this is not intended to limit the present disclosure to the specific embodiments, and it is to be construed to include various modifications, equivalents, and/or alternatives of embodiments of the present disclosure. FIG.1is a block diagram illustrating an electronic device101in a network environment100according to various embodiments. Referring toFIG.1, the electronic device101in the network environment100may communicate with an electronic device102via a first network198(e.g., a short-range wireless communication network), or with at least one of an electronic device104or a server108via a second network199(e.g., a long-range wireless communication network). According to an embodiment, the electronic device101may communicate with the electronic device104via the server108. According to an embodiment, the electronic device101may include a processor120, memory130, an input module150, a sound output module155, a display module160, an audio module170, a sensor module176, an interface177, a connecting terminal178, a haptic module179, a camera module180, a power management module188, a battery189, a communication module190, a subscriber identification module (SIM)196, or an antenna module197. In some embodiments, at least one of the components (e.g., the connecting terminal178) may be omitted from the electronic device101, or one or more other components may be added in the electronic device101. In some embodiments, some of the components (e.g., the sensor module176, the camera module180, or the antenna module197) may be implemented as a single component (e.g., the display module160). The processor120may execute, for example, software (e.g., a program140) to control at least one other component (e.g., a hardware or software component) of the electronic device101coupled with the processor120, and may perform various data processing or computation. According to one embodiment, as at least part of the data processing or computation, the processor120may store a command or data received from another component (e.g., the sensor module176or the communication module190) in volatile memory132, process the command or the data stored in the volatile memory132, and store resulting data in non-volatile memory134. According to an embodiment, the processor120comprising processing circuitry may include a main processor121(e.g., a central processing unit (CPU) or an application processor (AP)), or an auxiliary processor123(e.g., a graphics processing unit (GPU), a neural processing unit (NPU), an image signal processor (ISP), a sensor hub processor, or a communication processor (CP)) that is operable independently from, or in conjunction with, the main processor121. For example, when the electronic device101includes the main processor121and the auxiliary processor123, the auxiliary processor123may be adapted to consume less power than the main processor121, or to be specific to a specified function. The auxiliary processor123may be implemented as separate from, or as part of, the main processor121. 
The auxiliary processor123comprising processing circuitry may control at least some of the functions or states related to at least one component (e.g., the display module160, the sensor module176, or the communication module190) among the components of the electronic device101, instead of the main processor121while the main processor121is in an inactive (e.g., sleep) state, or together with the main processor121while the main processor121is in an active state (e.g., executing an application). According to an embodiment, the auxiliary processor123(e.g., an image signal processor or a communication processor) may be implemented as part of another component (e.g., the camera module180or the communication module190) functionally related to the auxiliary processor123. According to an embodiment, the auxiliary processor123(e.g., the neural processing unit) may include a hardware structure specified for artificial intelligence model processing. An artificial intelligence model may be generated by machine learning. Such learning may be performed, e.g., by the electronic device101where the artificial intelligence is performed or via a separate server (e.g., the server108). Learning algorithms may include, but are not limited to, e.g., supervised learning, unsupervised learning, semi-supervised learning, or reinforcement learning. The artificial intelligence model may include a plurality of artificial neural network layers. The artificial neural network may be a deep neural network (DNN), a convolutional neural network (CNN), a recurrent neural network (RNN), a restricted Boltzmann machine (RBM), a deep belief network (DBN), a bidirectional recurrent deep neural network (BRDNN), a deep Q-network, or a combination of two or more thereof, but is not limited thereto. The artificial intelligence model may, additionally or alternatively, include a software structure other than the hardware structure. The memory130may store various data used by at least one component (e.g., the processor120or the sensor module176) of the electronic device101. The various data may include, for example, software (e.g., the program140) and input data or output data for a command related thereto. The memory130may include the volatile memory132or the non-volatile memory134. The program140may be stored in the memory130as software, and may include, for example, an operating system (OS)142, middleware144, or an application146. The input module150may receive a command or data to be used by another component (e.g., the processor120) of the electronic device101, from the outside (e.g., a user) of the electronic device101. The input module150may include, for example, a microphone, a mouse, a keyboard, a key (e.g., a button), or a digital pen (e.g., a stylus pen). The sound output module155may output sound signals to the outside of the electronic device101. The sound output module155may include, for example, a speaker or a receiver. The speaker may be used for general purposes, such as playing multimedia or playing recordings. The receiver may be used for receiving incoming calls. According to an embodiment, the receiver may be implemented as separate from, or as part of, the speaker. The display module160may visually provide information to the outside (e.g., a user) of the electronic device101. The display module160may include, for example, a display, a hologram device, or a projector and control circuitry to control a corresponding one of the display, hologram device, and projector. 
According to an embodiment, the display module160may include a touch sensor adapted to detect a touch, or a pressure sensor adapted to measure the intensity of force incurred by the touch. The audio module170may convert a sound into an electrical signal and vice versa. According to an embodiment, the audio module170may obtain the sound via the input module150, or output the sound via the sound output module155or a headphone of an external electronic device (e.g., an electronic device102) directly (e.g., wiredly) or wirelessly coupled with the electronic device101. The sensor module176may detect an operational state (e.g., power or temperature) of the electronic device101or an environmental state (e.g., a state of a user) external to the electronic device101, and then generate an electrical signal or data value corresponding to the detected state. According to an embodiment, the sensor module176may include, for example, a gesture sensor, a gyro sensor, an atmospheric pressure sensor, a magnetic sensor, an acceleration sensor, a grip sensor, a proximity sensor, a color sensor, an infrared (IR) sensor, a biometric sensor, a temperature sensor, a humidity sensor, or an illuminance sensor. The interface177may support one or more specified protocols to be used for the electronic device101to be coupled with the external electronic device (e.g., the electronic device102) directly (e.g., wiredly) or wirelessly. According to an embodiment, the interface177may include, for example, a high definition multimedia interface (HDMI), a universal serial bus (USB) interface, a secure digital (SD) card interface, or an audio interface. The connecting terminal178may include a connector via which the electronic device101may be physically connected with the external electronic device (e.g., the electronic device102). According to an embodiment, the connecting terminal178may include, for example, an HDMI connector, a USB connector, an SD card connector, or an audio connector (e.g., a headphone connector). The haptic module179may convert an electrical signal into a mechanical stimulus (e.g., a vibration or a movement) or electrical stimulus which may be recognized by a user via his or her tactile sensation or kinesthetic sensation. According to an embodiment, the haptic module179may include, for example, a motor, a piezoelectric element, or an electric stimulator. The camera module180may capture a still image or moving images. According to an embodiment, the camera module180may include one or more lenses, image sensors, image signal processors, or flashes. The power management module188may manage power supplied to the electronic device101. According to one embodiment, the power management module188may be implemented as at least part of, for example, a power management integrated circuit (PMIC). The battery189may supply power to at least one component of the electronic device101. According to an embodiment, the battery189may include, for example, a primary cell which is not rechargeable, a secondary cell which is rechargeable, or a fuel cell. The communication module190may support establishing a direct (e.g., wired) communication channel or a wireless communication channel between the electronic device101and the external electronic device (e.g., the electronic device102, the electronic device104, or the server108) and performing communication via the established communication channel. 
The communication module190may include one or more communication processors that are operable independently from the processor120(e.g., the application processor (AP)) and support a direct (e.g., wired) communication or a wireless communication. According to an embodiment, the communication module190may include a wireless communication module192(e.g., a cellular communication module, a short-range wireless communication module, or a global navigation satellite system (GNSS) communication module) or a wired communication module194(e.g., a local area network (LAN) communication module or a power line communication (PLC) module). A corresponding one of these communication modules may communicate with the external electronic device via the first network198(e.g., a short-range communication network, such as Bluetooth™, wireless-fidelity (Wi-Fi) direct, or infrared data association (IrDA)) or the second network199(e.g., a long-range communication network, such as a legacy cellular network, a 5G network, a next-generation communication network, the Internet, or a computer network (e.g., LAN or wide area network (WAN))). These various types of communication modules may be implemented as a single component (e.g., a single chip), or may be implemented as multiple components (e.g., multiple chips) separate from each other. The wireless communication module192may identify and authenticate the electronic device101in a communication network, such as the first network198or the second network199, using subscriber information (e.g., international mobile subscriber identity (IMSI)) stored in the subscriber identification module196. The antenna module197may transmit or receive a signal or power to or from the outside (e.g., the external electronic device) of the electronic device101. According to an embodiment, the antenna module197may include an antenna including a radiating element composed of or including a conductive material or a conductive pattern formed in or on a substrate (e.g., a printed circuit board (PCB)). According to an embodiment, the antenna module197may include a plurality of antennas (e.g., array antennas). In such a case, at least one antenna appropriate for a communication scheme used in the communication network, such as the first network198or the second network199, may be selected, for example, by the communication module190(e.g., the wireless communication module192) from the plurality of antennas. The signal or the power may then be transmitted or received between the communication module190and the external electronic device via the selected at least one antenna. According to an embodiment, another component (e.g., a radio frequency integrated circuit (RFIC)) other than the radiating element may be additionally formed as part of the antenna module197. At least some of the above-described components may be coupled mutually and communicate signals (e.g., commands or data) therebetween via an inter-peripheral communication scheme (e.g., a bus, general purpose input and output (GPIO), serial peripheral interface (SPI), or mobile industry processor interface (MIPI)). According to an embodiment, commands or data may be transmitted or received between the electronic device101and the external electronic device104via the server108coupled with the second network199. Each of the electronic devices102or104may be a device of the same type as, or a different type from, the electronic device101. 
According to an embodiment, all or some of the operations to be executed at the electronic device101may be executed at one or more of the external electronic devices102,104, or108. For example, if the electronic device101should perform a function or a service automatically, or in response to a request from a user or another device, the electronic device101, instead of, or in addition to, executing the function or the service, may request the one or more external electronic devices to perform at least part of the function or the service. The one or more external electronic devices receiving the request may perform the at least part of the function or the service requested, or an additional function or an additional service related to the request, and transfer an outcome of the performing to the electronic device101. The electronic device101may provide the outcome, with or without further processing of the outcome, as at least part of a reply to the request. To that end, a cloud computing, distributed computing, or client-server computing technology may be used, for example. The electronic device according to various embodiments may be one of various types of electronic devices. The electronic devices may include, for example, a portable communication device (e.g., a smartphone), a computer device, a portable multimedia device, a portable medical device, a camera, a wearable device, or a home appliance. According to an embodiment of the disclosure, the electronic devices are not limited to those described above. It should be appreciated that various embodiments of the present disclosure and the terms used therein are not intended to limit the technological features set forth herein to particular embodiments and include various changes, equivalents, or replacements for a corresponding embodiment. With regard to the description of the drawings, similar reference numerals may be used to refer to similar or related elements. It is to be understood that a singular form of a noun corresponding to an item may include one or more of the things, unless the relevant context clearly indicates otherwise. As used herein, each of such phrases as "A or B," "at least one of A and B," "at least one of A or B," "A, B, or C," "at least one of A, B, and C," and "at least one of A, B, or C," may include any one of, or all possible combinations of the items enumerated together in a corresponding one of the phrases. As used herein, such terms as "1st" and "2nd," or "first" and "second" may be used to simply distinguish a corresponding component from another, and do not limit the components in other aspects (e.g., importance or order). It is to be understood that if an element (e.g., a first element) is referred to, with or without the term "operatively" or "communicatively", as "coupled with," "coupled to," "connected with," or "connected to" another element (e.g., a second element), it means that the element may be coupled with the other element directly (e.g., wiredly), wirelessly, or via a third element(s). As used in connection with various embodiments of the disclosure, the term "module" may include a unit implemented in hardware, software, or firmware, and may interchangeably be used with other terms, for example, "logic," "logic block," "part," or "circuitry". A module may be a single integral component, or a minimum unit or part thereof, adapted to perform one or more functions. 
For example, according to an embodiment, the module may be implemented in the form of an application-specific integrated circuit (ASIC). Various embodiments as set forth herein may be implemented as software (e.g., the program140) including one or more instructions that are stored in a storage medium (e.g., internal memory136or external memory138) that is readable by a machine (e.g., the electronic device101). For example, a processor (e.g., the processor120) of the machine (e.g., the electronic device101) may invoke at least one of the one or more instructions stored in the storage medium, and execute it, with or without using one or more other components under the control of the processor. This allows the machine to be operated to perform at least one function according to the at least one instruction invoked. The one or more instructions may include a code generated by a compiler or a code executable by an interpreter. The machine-readable storage medium may be provided in the form of a non-transitory storage medium. Here, the term "non-transitory" simply means that the storage medium is a tangible device, and does not include a signal (e.g., an electromagnetic wave), but this term does not differentiate between where data is semi-permanently stored in the storage medium and where the data is temporarily stored in the storage medium. According to an embodiment, a method according to various embodiments of the disclosure may be included and provided in a computer program product. The computer program product may be traded as a product between a seller and a buyer. The computer program product may be distributed in the form of a machine-readable storage medium (e.g., compact disc read only memory (CD-ROM)), or be distributed (e.g., downloaded or uploaded) online via an application store (e.g., PlayStore™), or between two user devices (e.g., smart phones) directly. If distributed online, at least part of the computer program product may be temporarily generated or at least temporarily stored in the machine-readable storage medium, such as memory of the manufacturer's server, a server of the application store, or a relay server. According to various embodiments, each component (e.g., a module or a program) of the above-described components may include a single entity or multiple entities, and some of the multiple entities may be separately disposed in different components. According to various embodiments, one or more of the above-described components may be omitted, or one or more other components may be added. Alternatively or additionally, a plurality of components (e.g., modules or programs) may be integrated into a single component. In such a case, according to various embodiments, the integrated component may still perform one or more functions of each of the plurality of components in the same or similar manner as they are performed by a corresponding one of the plurality of components before the integration. According to various embodiments, operations performed by the module, the program, or another component may be carried out sequentially, in parallel, repeatedly, or heuristically, or one or more of the operations may be executed in a different order or omitted, or one or more other operations may be added. Any of the embodiment(s) described herein may be used in conjunction with any other embodiment(s) described herein. FIG.2is a block diagram200of the electronic device101supporting legacy network communication and 5G network communication, according to various embodiments. 
Referring toFIG.2, the electronic device101may include a first communication processor212comprising processing circuitry, a second communication processor214comprising processing circuitry, a first radio frequency integrated circuit (RFIC)222, a second RFIC224, a third RFIC226, a fourth RFIC228, a first radio frequency front end (RFFE)232, a second RFFE234, a first antenna module242comprising at least one antenna, a second antenna module244comprising at least one antenna, and an antenna(s)248. The electronic device101may further include the processor120comprising processing circuitry and the memory130. The second network199may include a first cellular network292and/or a second cellular network294. According to an example embodiment, the electronic device101may further include at least one of the components illustrated inFIG.1, and the second network199may further include at least one other network. According to an example embodiment, the first communication processor212, the second communication processor214, the first RFIC222, the second RFIC224, the fourth RFIC228, the first RFFE232, and the second RFFE234may be included in and/or make up at least a part of the wireless communication module192. According to an example embodiment, the fourth RFIC228may be omitted or may be included as a part of the third RFIC226. The first communication processor212may support the establishment of a communication channel of a band to be used for wireless communication with the first cellular network292and legacy network communication through the established communication channel. According to various example embodiments, the first cellular network292may be a legacy network including a 2nd generation (2G), 3rd generation (3G), 4th generation (4G), and/or long-term evolution (LTE) network. The second communication processor214may support the establishment of a communication channel corresponding to a specified band (e.g., approximately 6 GHz to 60 GHz) among bands to be used for wireless communication with the second cellular network294, and 5G network communication through the established communication channel. According to various example embodiments, the second cellular network294may be a 5G network defined by 3GPP. Additionally, according to an embodiment, the first communication processor212and/or the second communication processor214may support the establishment of a communication channel corresponding to another specified band (e.g., approximately 6 GHz or less) among bands to be used for wireless communication with the second cellular network294, and 5G network communication through the established communication channel. According to an example embodiment, the first communication processor212and the second communication processor214may be implemented in a single chip or a single package. According to various example embodiments, the first communication processor212and/or the second communication processor214may be formed with the processor120, the auxiliary processor123ofFIG.1, and/or the communication module190in a single chip or a single package. Upon transmission, the first RFIC222may convert a baseband signal generated by the first communication processor212into a radio frequency (RF) signal of approximately 700 MHz to approximately 3 GHz used in the first cellular network292(e.g., a legacy network). 
Upon reception, an RF signal may be obtained from the first cellular network292(e.g., a legacy network) through an antenna(s) (e.g., the first antenna module242), and may be preprocessed through an RFFE (e.g., the first RFFE232). The first RFIC222may convert the preprocessed RF signal into a baseband signal so as to be processed by the first communication processor212. Upon transmission, the second RFIC224may convert a baseband signal generated by the first communication processor212or the second communication processor214into an RF signal (hereinafter, referred to as a 5G Sub6 RF signal) of the Sub6 band (e.g., approximately 6 GHz or less) used in the second cellular network294(e.g., the 5G network). Upon reception, a 5G Sub6 RF signal may be obtained from the second cellular network294(e.g., the 5G network) through an antenna(s) (e.g., the second antenna module244), and may be preprocessed through an RFFE (e.g., the second RFFE234). The second RFIC224may convert the preprocessed 5G Sub6 RF signal into a baseband signal so as to be processed by a corresponding one of the first communication processor212and/or the second communication processor214. The third RFIC226may convert a baseband signal generated by the second communication processor214into an RF signal (hereinafter, referred to as a 5G Above6 RF signal) of the 5G Above6 band (e.g., approximately 6 GHz to approximately 60 GHz) to be used in the second cellular network294(e.g., the 5G network). Upon reception, a 5G Above6 RF signal may be obtained from the second cellular network294(e.g., the 5G network) through an antenna(s) (e.g., the antenna248), and may be preprocessed through the third RFFE236. For example, the third RFFE236may perform preprocessing of the signal using a phase shifter238. The third RFIC226may convert the preprocessed 5G Above6 RF signal into a baseband signal so as to be processed by the second communication processor214. According to an embodiment, the third RFFE236may be formed as a part of the third RFIC226. According to an embodiment, the electronic device101may include the fourth RFIC228separately from or at least as a part of the third RFIC226. In this case, the fourth RFIC228may convert the baseband signal generated by the second communication processor214into an RF signal (hereinafter, referred to as an intermediate frequency (IF) signal) of an intermediate frequency band (e.g., approximately 9 GHz to approximately 11 GHz), and then transmit the IF signal to the third RFIC226. The third RFIC226may convert the IF signal into a 5G Above6 RF signal. Upon reception, a 5G Above6 RF signal may be received from the second cellular network294(e.g., the 5G network) through an antenna (e.g., the antenna248), and may be converted into an IF signal by the third RFIC226. The fourth RFIC228may convert the IF signal into the baseband signal so as to be processed by the second communication processor214. According to an embodiment, the first RFIC222and the second RFIC224may be implemented as a single chip or at least a part of a single package. According to an embodiment, the first RFFE232and the second RFFE234may be implemented as a single chip or at least a part of a single package. According to an embodiment, at least one of the first antenna module242or the second antenna module244may be omitted or combined with another antenna module to process RF signals of a plurality of corresponding bands. 
According to an embodiment, the third RFIC226and the antenna248may be disposed on the same substrate to form a third antenna module246. For example, the wireless communication module192comprising communication circuitry and/or the processor120comprising processing circuitry may be disposed on a first substrate (e.g., a main PCB). In this case, the third RFIC226may be disposed in a partial region (e.g., the lower surface) of a second substrate (e.g., a sub PCB) separate and/or different from the first substrate, and the antenna248may be disposed in another partial region (e.g., the upper surface) to form the third antenna module246. According to an embodiment, the antenna248may include, for example, an antenna array that may be used for beamforming. By disposing the third RFIC226and the antenna248on the same substrate, it is possible to reduce the length of the transmission line therebetween. This, for example, may reduce the loss (e.g., attenuation) of a signal in a high frequency band (e.g., approximately 6 GHz to approximately 60 GHz) used for 5G network communication by the transmission line. Accordingly, the electronic device101may improve the quality and/or speed of communication with the second cellular network294(e.g., 5G network). The second cellular network294(e.g., 5G network) may be operated independently (e.g., Stand-Alone (SA)) from the first cellular network292(e.g., a legacy network) and/or operated to be connected to (e.g., Non-Stand Alone (NSA)) the first cellular network292(e.g., a legacy network). For example, in the 5G network, there may be only an access network (e.g., 5G radio access network (RAN) or next-generation RAN (NG RAN)) and no core network (e.g., next-generation core (NGC)). In this case, after accessing the access network of the 5G network, the electronic device101may access an external network (e.g., the Internet) under the control of a core network (e.g., evolved packet core (EPC)) of the legacy network. Protocol information for communication with the legacy network (e.g., LTE protocol information) and/or protocol information for communication with the 5G network (e.g., New Radio (NR) protocol information) may be stored in the memory130and may be accessed by other components (e.g., the processor120, the first communication processor212, or the second communication processor214). FIG.3illustrates a cross-section taken along line B-B′ of the third antenna module246of400aofFIG.4. A printed circuit board410of the illustrated embodiment may include an antenna layer311and a network layer313. The antenna layer311may include at least one dielectric layer337-1, and an antenna element(s)436and/or a feeding unit325formed on the outer surface of or inside of the dielectric layer. The feeding unit325may include a feeding point327and/or a feeding line329. The network layer313may include at least one dielectric layer337-2, at least one ground layer333formed on the outer surface of or inside of the dielectric layer337-2, at least one conductive via335, a transmission line323, and/or a signal line329. In addition, in the illustrated example embodiment, the third RFIC226may be electrically connected to the network layer313, for example, through first and/or second connecting portions (e.g., solder bumps)340-1and/or340-2. In certain example embodiments, various connecting structures (e.g., solder and/or ball grid array (BGA)) may be used instead of the solder bump connecting portions. 
The third RFIC226may be electrically connected to the antenna element436via a first connecting portion340-1, the transmission line323, and the feeding unit325. The third RFIC226may also be electrically connected to the ground layer333via the second connecting portion340-2and the conductive via335. Although not illustrated, the third RFIC226may also be electrically connected to the module interface described below via the signal line329. FIG.4illustrates an example embodiment of a structure of the third antenna module246described with reference toFIGS.2-3.400aofFIG.4is a perspective view of the third antenna module246as viewed from one side, and400bofFIG.4is a perspective view of the third antenna module246as viewed from the other side.400cofFIG.4is a cross-sectional view at A-A′ of the third antenna module246. Referring toFIG.4, in an example embodiment, the third antenna module246may include a printed circuit board410, an antenna array430, a radio frequency integrated circuit (RFIC)452, a power management integrated circuit (PMIC)454, and a module interface (not shown). The third antenna module246may further optionally include a shielding member490. In other embodiments, at least one of the aforementioned parts may be omitted, or at least two of the parts may be integrally formed. The printed circuit board410may include a plurality of conductive layers, and a plurality of non-conductive layers alternately stacked with the conductive layers. The printed circuit board410may provide electrical connection between various electronic components disposed on the printed circuit board410and/or outside, using wires and conductive vias formed on the conductive layer. The antenna array430(e.g.,248inFIG.2) may include a plurality of antenna elements432,434,436, and438arranged to form a directional beam. The antenna elements may be formed on a first surface of the printed circuit board410as illustrated. According to an example embodiment, the antenna array430may be formed inside the printed circuit board410. According to certain example embodiments, the antenna array430may include a plurality of antenna arrays (e.g., a dipole antenna array and/or a patch antenna array) of the same or different shape or type. The RFIC452(e.g., the third RFIC226inFIG.2) may be disposed in another region of the printed circuit board410(e.g., a second surface opposite to the first surface), spaced apart from the antenna array430. The RFIC452may be configured to process a signal of a selected frequency band that is transmitted and received via the antenna array430. According to an embodiment, upon transmission, the RFIC452may convert a baseband signal obtained from a communication processor comprising processing circuitry (not illustrated) into an RF signal of a specified band. Upon reception, the RFIC452may convert an RF signal received via the antenna array430into a baseband signal and transmit the converted signal to the communication processor. According to an example embodiment, upon transmission, the RFIC452may up-convert an IF signal (e.g., approximately 7 GHz to approximately 13 GHz) obtained from an intermediate frequency integrated circuit (IFIC) (e.g., the fourth RFIC228inFIG.2) into the RF signal of the selected band. Upon reception, the RFIC452may down-convert the RF signal obtained via the antenna array430into an IF signal, and transmit the converted signal to the IFIC. 
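As a rough numerical sketch of the up- and down-conversion just described, a complex baseband signal is first mixed up to an IF and then to the selected RF band, and reception reverses the chain. The 10 GHz IF falls within the ranges quoted above, while the 28 GHz final carrier, the 100 MHz tone, and the sample rate are assumptions made purely for illustration (real hardware would also filter at each stage).

    import numpy as np

    fs = 120e9                                    # illustrative sample rate (Nyquist > 28 GHz)
    t = np.arange(0, 1e-8, 1 / fs)                # 10 ns of signal
    baseband = np.exp(2j * np.pi * 100e6 * t)     # assumed 100 MHz complex baseband tone
    f_if, f_rf = 10e9, 28e9                       # IF within ~7-13 GHz; assumed Above6 carrier
    if_signal = baseband * np.exp(2j * np.pi * f_if * t)            # IFIC: baseband -> IF
    rf_signal = if_signal * np.exp(2j * np.pi * (f_rf - f_if) * t)  # RFIC: IF -> RF
    # Down-conversion mirrors the chain: mixing with the conjugate carrier
    # brings the RF signal back to baseband.
    recovered = rf_signal * np.exp(-2j * np.pi * f_rf * t)
    assert np.allclose(recovered, baseband)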
The PMIC454may be disposed in another partial region (e.g., the second surface) of the printed circuit board410, spaced apart from the antenna array. The PMIC454may receive a voltage from a main PCB (not shown) and provide power required for various components (e.g., the RFIC452) on antenna modules. The shielding member490may be disposed on a part (e.g., the second surface) of the printed circuit board410in order to electromagnetically shield at least one of the RFIC452and the PMIC454. According to an embodiment, the shielding member490may include a conductive shield can. Although not illustrated, in various embodiments, the third antenna module246may be electrically connected to another printed circuit board (e.g., the main circuit board) through the module interface. The module interface may include a connection member, for example, a coaxial cable connector, a board to board connector, an interposer, or a flexible printed circuit board (FPCB). Using the connection member, the RFIC452and/or the PMIC454of the third antenna module246may be electrically connected to the printed circuit board. FIG.5illustrates an example embodiment of an operation for wireless communication connection between a base station520and the electronic device101using a directional beam for wireless connection in the second network294(e.g., the 5G network) ofFIG.2. The base station (gNodeB (gNB), transmission reception point (TRP))520may perform a beam detection operation with the electronic device101for wireless communication connection. For beam detection, the base station520may sequentially transmit a plurality of transmit beams, for example, first to fifth transmit beams535-1to535-5having different directions, thereby making it possible to perform at least one transmit beam sweeping530. The first to fifth transmit beams535-1to535-5may include at least one synchronization signal (SS)/physical broadcast channel (PBCH) block (SS/PBCH Block). The SS/PBCH Block may be used by the electronic device101to periodically measure a channel or beam strength. In an example embodiment, the first to fifth transmit beams535-1to535-5may include at least one channel state information-reference signal (CSI-RS). The CSI-RS is a reference signal flexibly set by the base station520and may be transmitted periodically, semi-persistently and/or aperiodically. The electronic device101may measure a channel or beam strength using the CSI-RS. The transmit beams may form a radiation pattern having a selected beam width. For example, the transmit beams may have a broad radiation pattern having a first beam width or a sharp radiation pattern having a second beam width narrower than the first beam width. For example, transmit beams including the SS/PBCH Block may have a broader radiation pattern than transmit beams including the CSI-RS. The electronic device101may perform receive beam sweeping540while the base station520performs the transmit beam sweeping530. For example, while the base station520performs first transmit beam sweeping530, the electronic device101may fix a first receive beam545-1in a first direction to receive a signal of an SS/PBCH block transmitted in at least one of the first to fifth transmit beams535-1to535-5. While the base station520performs second transmit beam sweeping530, the electronic device101may fix a second receive beam545-2in a second direction to receive a signal of an SS/PBCH block transmitted in one or more of the first to fifth transmit beams535-1to535-5. 
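A minimal sketch, assuming idealized per-pair measurements, of the sweep-and-select procedure described above: the electronic device holds one receive beam fixed during each transmit beam sweep, records a signal strength (e.g., an SS/PBCH-based measurement) for every transmit/receive beam pair, and keeps the strongest pair. The beam counts and the fabricated strength table are illustrative only.

    import numpy as np

    rng = np.random.default_rng(1)
    n_tx, n_rx = 5, 4                     # e.g., transmit beams 535-1 to 535-5
    # Hypothetical measured strengths (dB) indexed by (receive beam, transmit beam);
    # each row is filled during one transmit beam sweep with the receive beam fixed.
    strength = rng.normal(-100.0, 6.0, size=(n_rx, n_tx))

    best_rx, best_tx = np.unravel_index(np.argmax(strength), strength.shape)
    print(f"select receive beam {best_rx + 1} and transmit beam {best_tx + 1}")

The same measurements can then drive the beam selection, monitoring, and re-sweeping described next.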
In this way, the electronic device101may select a communicable receive beam (e.g., the second receive beam545-2) and a communicable transmit beam (e.g., the third transmit beam535-3) based on the result of the signal reception operation through the receive beam sweeping540. As described above, after the communicable beams for receiving and transmitting are determined, the base station520and the electronic device101may transmit and/or receive basic information for cell setting and, based on the information, set information for additional beam operation. For example, the beam operation information may include detailed information on a set beam, the SS/PBCH Block, the CSI-RS, or setting information on an additional reference signal. In addition, the electronic device101may continuously monitor the channel and the strength of the beam using at least one of the SS/PBCH Block and the CSI-RS included in the transmit beam. The electronic device101may adaptively select a beam having good beam quality using the monitoring operation. Optionally, when communication is disconnected due to movement of the electronic device101or blocking of a beam, the above-mentioned beam sweeping operation may be performed again to determine a communicable beam. FIG.6illustrates a block diagram of the electronic device101for 5G network communication, according to an example embodiment. The electronic device101may include various components illustrated inFIG.2; however, for brief description,FIG.6illustrates the electronic device101as including the processor120, the second communication processor214, the fourth RFIC228, and at least one third antenna module246. In the illustrated embodiment, the third antenna module246may include first to fourth phase shifters613-1to613-4(e.g., the phase shifter238inFIG.2) and/or first to fourth antenna elements617-1to617-4(e.g., the antenna248inFIG.2). Each of the first to fourth antenna elements617-1to617-4may be electrically connected to one of the first to fourth phase shifters613-1to613-4individually. The first to fourth antenna elements617-1to617-4may form at least one antenna array615. The second communication processor214may control the first to fourth phase shifters613-1to613-4, thereby controlling the phases of signals transmitted and/or received through the first to fourth antenna elements617-1to617-4, which makes it possible to generate a transmit beam and/or a receive beam in a selected direction. According to an embodiment, the third antenna module246may form a beam651of the broad radiation pattern (hereinafter, referred to as a ‘broad beam’) or a beam653of the sharp radiation pattern (hereinafter, referred to as a ‘sharp beam’) as mentioned above, depending on the number of antenna elements used. For example, the third antenna module246may form the sharp beam653when all of the first to fourth antenna elements617-1to617-4are used, and form the broad beam651when only the first antenna element617-1and/or the second antenna element617-2is/are used. The broad beam651has a broader coverage than the sharp beam653but has a smaller antenna gain, and thus it may be more effective in searching for a beam. On the other hand, the sharp beam653has a narrower coverage than the broad beam651but has a higher antenna gain, and thus it may improve communication performance. According to an embodiment, the second communication processor214may utilize a sensor module176(e.g., a 9-axis sensor, grip sensor, or GPS) for beam search.
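The dependence of beamwidth on the number of driven elements can be checked with a textbook uniform-linear-array factor; this sketch (the half-wavelength spacing and element counts are assumptions) reproduces the broad-versus-sharp behavior of beams651and653:

```python
import numpy as np

def array_factor_db(n_elements: int, theta: np.ndarray, spacing_wl: float = 0.5) -> np.ndarray:
    """Normalized broadside array factor (dB) of an n-element uniform linear array."""
    psi = 2 * np.pi * spacing_wl * np.sin(theta)
    af = np.abs(np.exp(1j * np.outer(np.arange(n_elements), psi)).sum(axis=0)) / n_elements
    return 20 * np.log10(np.maximum(af, 1e-6))

theta = np.linspace(-np.pi / 2, np.pi / 2, 1801)
for n in (1, 2, 4):
    af_db = array_factor_db(n, theta)
    main_lobe = np.degrees(theta[af_db >= -3.0])  # angles within 3 dB of the peak
    print(f"{n} element(s): ~{main_lobe.max() - main_lobe.min():5.1f} deg 3 dB beamwidth")
```

With all four elements the 3 dB beamwidth shrinks markedly, matching the sharp-beam case; sensor input, as discussed next, can then narrow where such a search needs to look.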
For example, the electronic device101may adjust a beam search position and/or a beam search period based on the position and/or movement of the electronic device101using the sensor module176. For another example, when the electronic device101is gripped by a user, an antenna module having better communication performance may be selected from among the plurality of third antenna modules246by identifying the gripping part of the user using a grip sensor. FIG.7illustrates electronic devices of which the shapes change according to various example embodiments. According to various example embodiments, the form of the electronic device101may be physically changed with folding/unfolding. For example, the electronic device may include a housing and a display having flexibility in at least some portions. The electronic device may be folded (e.g., closed) or unfolded (e.g., open) around the flexible portion of the electronic device. For example, a portion of the electronic device having flexibility may be referred to as a folded portion. The folded portion refers to a portion (e.g., a hinge) or a region in which the form of the electronic device is to be changed, and is not limited to a specific structure. According to an embodiment, a first electronic device101A (e.g., the electronic device101ofFIG.1) may be folded left and right. For example, the first electronic device101A may be folded around at least a folded portion191A. For example, the first electronic device101A may include a first display161A (e.g., the display device160ofFIG.1) and a housing120A, which have flexibility at a portion corresponding to the folded portion191A. The first electronic device101A may be folded left and right around the folded portion191A. The first electronic device101A may include a second display162A (e.g., the display device160ofFIG.1) exposed to the outside in a folded state. InFIG.7, the first electronic device101A is illustrated as an in-fold electronic device in which the first display161A is folded inward; however, embodiments of the present disclosure are not limited thereto. For example, the first electronic device101A may be an out-fold electronic device or an electronic device that supports both in-fold and out-fold. For another example, the first display161A is illustrated as a single display; however, embodiments of the present disclosure are not limited thereto. The first electronic device101A may include a plurality of displays divided around the folded portion191A. The housing120A may also include a plurality of housings divided around the folded portion191A. For another example, the first electronic device101A may be a combination of a plurality of electronic devices coupled to be folded around the folded portion191A. In this case, a plurality of electronic devices may be coupled to each other by a separate structure (e.g., a housing or a hinge). According to an embodiment, a second electronic device101B (e.g., the electronic device101ofFIG.1) may be folded left and right around a plurality of axes. For example, the second electronic device101B may include a display160B (e.g., the display device160ofFIG.1) and a housing120B that have flexibility at least at portions corresponding to a second folded portion192B and a third folded portion193B. The second electronic device101B may be folded left and right around the second folded portion192B and the third folded portion193B.
InFIG.7, the second electronic device101B is illustrated as an out-fold electronic device in which the display160B is folded outward; however, embodiments of the present disclosure are not limited thereto. For example, the second electronic device101B may be in-folded at the second folded portion192B and/or the third folded portion193B. For another example, the display160B is illustrated as a single display; however, embodiments of the present disclosure are not limited thereto. The second electronic device101B may include a plurality of displays divided based on at least one of the second folded portion192B and the third folded portion193B. The housing120B may also include a plurality of housings divided based on at least one of the second folded portion192B and the third folded portion193B. For another example, the second electronic device101B may be a combination of a plurality of electronic devices coupled to be folded around the second folded portion192B and the third folded portion193B. In this case, for example, a plurality of electronic devices may be coupled to each other by a separate structure (e.g., a housing or a hinge). According to an example embodiment, a third electronic device101C (e.g., the electronic device101ofFIG.1) may be folded up and down. For example, the third electronic device101C may include a display160C (e.g., the display device160ofFIG.1) and a housing120C, which have flexibility at least at a portion corresponding to a fourth folded portion194C. The third electronic device101C may be folded up and down around the fourth folded portion194C. InFIG.7, the third electronic device101C is illustrated as an in-fold electronic device in which the display160C is folded inward; however, embodiments of the present disclosure are not limited thereto. For example, the third electronic device101C may be out-folded, or in-folded and out-folded, at the fourth folded portion194C. For another example, the display160C is illustrated as a single display; however, embodiments of the present disclosure are not limited thereto. The third electronic device101C may include a plurality of displays divided based on the fourth folded portion194C. The housing120C may also include a plurality of housings divided based on the folded portion194C. For another example, the third electronic device101C may be a combination of a plurality of electronic devices coupled to be folded around the folded portion194C. In this case, a plurality of electronic devices may be coupled to each other by a separate structure (e.g., a housing or a hinge). Changes in the physical shape of the electronic devices (e.g.,101A,101B, and101C) illustrated inFIG.7are exemplary, and embodiments of the present disclosure are not limited thereto. For example, the electronic device may be folded or unfolded about any axis. FIG.8illustrates electronic devices of which the shapes change according to various example embodiments. According to various example embodiments, the form of an electronic device may be physically changed with extension/retraction of the housing of the electronic device. For example, the electronic device may include a housing and/or a display of which at least a portion is able to extend. For example, a portion of the electronic device may be slid or rolled so that the electronic device may be extended (e.g., open) or retracted (e.g., closed).
When the shape of the electronic device is changed from a first shape to a second shape, an extension part refers to a portion or region corresponding to the difference between the first shape and the second shape, and is not limited to a specific structure. According to an example embodiment, a fourth electronic device101D (e.g., the electronic device101ofFIG.1) may include an extension part181D that extends/retracts up and down. For example, at least a portion of a housing120D of the fourth electronic device101D may include the extension part181D that is able to extend upward of the fourth electronic device101D. For example, the extension part181D is a part of the housing120D, and may extend the housing120D of the fourth electronic device101D by moving relatively upward with respect to the other part of the housing120D. The extension part181D may move independently of the display160D (e.g., the display device160ofFIG.1). For example, the extension part181D may be drawn upward relative to the display160D. For another example, the extension part181D may be drawn downward relative to the display160D. According to an embodiment, the extension part181D may include a camera module. For example, the camera module may be configured to rotate with the movement of the extension part181D. According to an example embodiment, a fifth electronic device101E (e.g., the electronic device101ofFIG.1) may include an extension part181E that extends/retracts left and right. For example, at least a portion of a housing120E of the fifth electronic device101E may include an extension part181E that is able to extend in the right direction of the fifth electronic device101E. For example, the extension part181E may move independently of the display160E (e.g., the display device160ofFIG.1). In this case, a portion of the housing120E may be drawn beyond one side relative to the display160E, thereby forming the extension part181E. For another example, the extension part181E may move together with the display160E. In this case, a portion of the housing120E and the display160E may relatively protrude beyond one side, thereby forming the extension part181E. According to an embodiment, the extension part181E may include a camera module. For example, the camera module may be configured to rotate with the movement of the extension part181E. According to an example embodiment, a sixth electronic device101F (e.g., the electronic device101ofFIG.1) may include an extension part181F that extends/retracts left and right. For example, a display160F (e.g., the display device160ofFIG.1) of the sixth electronic device101F may be a rollable display. For example, the display160F may be rolled and accommodated in a first housing121F. For example, the display160F may extend between the first housing121F and a second housing122F by being unrolled. The extension part181F may be generated by unrolling the display160F. Changes in the physical shape of the electronic devices (e.g.,101D,101E, and101F) illustrated inFIG.8are exemplary, and embodiments of the present disclosure are not limited thereto. For example, the electronic device may extend or retract in any direction. With regard to the first electronic device101A, the second electronic device101B, the third electronic device101C, the fourth electronic device101D, the fifth electronic device101E, or the sixth electronic device101F ofFIGS.7and8, changes in the shapes of various electronic devices have been described. 
The changes in shape are exemplary, and embodiments of the present disclosure are not limited thereto. For example, the electronic devices ofFIGS.7and8may include an antenna module for 5G mobile communication (e.g., the third antenna module246ofFIG.6). In 5G mobile communication using a frequency band of 6 GHz or higher, the change in the shape of an electronic device may affect characteristics (e.g., a radiation direction and/or a shielding area) of the antenna module. For example, the characteristics of the antenna module may be changed by changing the position or orientation of the antenna module with the change in the shape of the electronic device. For the electronic devices ofFIG.7, the characteristics of the antenna module may be changed with the open/close state of the electronic devices. For the electronic devices ofFIG.8, the characteristics of the antenna module may be changed with the extension/retraction of the electronic devices. For example, when the antenna module is positioned in the extension part, the characteristics of the antenna module may be changed with the extension/retraction of the extension part. For another example, the characteristics of the antenna module may be changed with the change in the internal environment of the electronic device by extension/retraction. Since the characteristics of the antenna module may be changed with the change in the shape of the electronic device, the electronic device may perform communication in consideration of the change in the characteristics of the antenna module. Hereinafter, various example embodiments will be described with a focus on the first electronic device101A, the second electronic device101B, the third electronic device101C, the fourth electronic device101D, the fifth electronic device101E, or the sixth electronic device101F ofFIGS.7and8. The following embodiments may be similarly applied to an electronic device (e.g., the electronic device101ofFIG.1). FIG.9illustrates an antenna module arrangement of an electronic device according to an example embodiment. FIG.9illustrates the arrangement of an antenna module (e.g., the third antenna module246ofFIG.2) of the first electronic device101A ofFIG.7, according to an example embodiment. In the example ofFIG.9, the rear surface of the first electronic device101A is illustrated. For example, on the rear surface of the first electronic device101A, a camera170A and the second display162A may be viewable through the housing120A. According to an embodiment, the first electronic device101A may include a plurality of antenna modules910,920, and930. For example, each of the first antenna module910, the second antenna module920, and the third antenna module930may correspond to the third antenna module246ofFIG.2. Each of the plurality of antenna modules910,920, and930may include at least one antenna array (e.g., the antenna array430ofFIG.4). Each of the at least one antenna array may include a plurality of conductive patterns (e.g., the antenna elements432,434,436, and438ofFIG.4). Each of the at least one antenna array may be operatively coupled to a communication circuit (e.g., the third RFIC226and the third RFFE236ofFIG.2) and at least one processor (e.g., the second communication processor214and/or the fourth RFIC228ofFIG.2). At least one processor comprising processing circuitry may perform beamforming using at least one antenna array of the antenna modules. According to an example embodiment, the first electronic device101A may perform beamforming based on a beam book.
For example, the beam book may include information on beams stored in a memory (e.g., the memory130ofFIG.1). The beam book may include beam information for operating the antenna modules of the first electronic device101A. For example, the beam book may include beam identification information (e.g., a beam ID) corresponding to each beam. The beam book may include, for example, polarization information (e.g., vertical polarization and/or horizontal polarization) and/or target angle information (e.g., a vertical plane angle and/or a horizontal plane angle) corresponding to each piece of beam identification information. The beam book may include, for example, phase shift information about the antenna module(s) and/or the antenna element(s) corresponding to each piece of beam identification information. According to a comparative example, in the unfolded state of the first electronic device101A, the first electronic device101A may include the first antenna module910configured to form first beam patterns911toward an upper portion (e.g., a +Y direction) of the first electronic device101A, the second antenna module920configured to form second beam patterns921to the left (e.g., a −X direction) of the first electronic device101A, and the third antenna module930configured to form third beam patterns931to the right (e.g., a +X direction) of the first electronic device101A. For example, the first electronic device101A may form the first beam patterns911, the second beam patterns921, and/or the third beam patterns931based on a specified beam book. According to the comparative example, in the folded state of the first electronic device101A, it may be assumed that the same beam book as that in the unfolded state of the first electronic device101A is used. In this case, the positions of the antenna modules (e.g.,910and920) of the first electronic device101A may be changed. For example, the left and right of the first beam patterns911of the first antenna module910may be reversed with respect to the unfolded state. The second beam patterns921of the second antenna module920may be formed to the right (e.g., the +X direction) in the folded state. In this case, a part of the beam coverage of the second antenna module920and a part of the beam coverage of the third antenna module930may overlap. According to the comparative example, depending on the change in the shape of the first electronic device101A, the correlation with the structures around the antenna modules (e.g.,910,920, and930) and/or a ground (GND) condition may be changed. In this case, the performance of each of the antenna modules (e.g.,910,920, and930) may be changed with the change in the shape of the first electronic device101A. According to the comparative example, when the same beam book is used in the unfolded state and the folded state of the first electronic device101A, the communication state of the first electronic device101A may be deteriorated due to the change in the position and performance of the antenna modules. According to an example embodiment, the first electronic device101A may perform communication using different beam books depending on the shape. For example, the first electronic device101A may perform communication using a first beam book in the unfolded state and may perform communication using a second beam book in the folded state.
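A minimal sketch of what such beam-book records might look like in software (all field names and values are illustrative assumptions, not the patent's data layout):

```python
from dataclasses import dataclass, field

@dataclass
class BeamEntry:
    beam_id: str                   # e.g., "2-3" = third beam of antenna module 920
    module: str                    # associated antenna module
    polarization: str              # e.g., "V", "H", or "V+H"
    azimuth_deg: float             # horizontal-plane target angle
    elevation_deg: float           # vertical-plane target angle
    phase_shifts_deg: list = field(default_factory=list)  # per antenna element

# Hypothetical books keyed by housing state; only a few entries are shown.
beam_books = {
    "unfolded": {  # "first beam book"
        "1-1": BeamEntry("1-1", "module_910", "V", 0.0, 60.0, [0, 45, 90, 135]),
        "2-3": BeamEntry("2-3", "module_920", "H", -90.0, 0.0, [0, -30, -60, -90]),
    },
    "folded": {    # "second beam book"
        "4-3": BeamEntry("4-3", "module_940", "H", 90.0, 0.0, [0, 30, 60, 90]),
    },
}
print(beam_books["folded"]["4-3"].module)  # module_940
```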
For example, the first beam book and the second beam book may include at least one different beam (e.g., a beam with at least one different piece of information among beam identification information, polarization information, beam-associated antenna module information, or phase shift information about an antenna array). According to an example embodiment, the first electronic device101A may identify its state (e.g., the unfolded state or the folded state) using at least one sensor (e.g., the sensor module176ofFIG.1). For example, the first electronic device101A may determine the state of the first electronic device101A using at least one of a hinge sensor positioned in the folded portion191A, an acceleration sensor positioned in the housing120A, and/or a magnetic force sensor positioned in the housing120A. FIG.10illustrates beam book operation of an electronic device according to an example embodiment. According to various example embodiments, the electronic device (e.g., the electronic device101ofFIG.1) may perform communication using different antenna module groups depending on the state of the electronic device. According to an example embodiment, the electronic device may perform communication using antenna modules of a first group in a first state (e.g., an open, unrolled, or unfolded state), and may perform communication using antenna modules of a second group in a second state (e.g., a closed, rolled, or folded state). For example, at least one of the antenna modules of the first group and at least one of the antenna modules of the second group may be different from each other. For example, some of the antenna modules of the electronic device may be in an on state in both the folded state and the unfolded state, and some of the antenna modules of the electronic device may be in an on state only in the folded state or the unfolded state. Referring toFIG.10, the rear surface of the first electronic device101A (e.g., the electronic device101ofFIG.1) is illustrated. For example, according to an embodiment, the first electronic device101A may include a plurality of antenna modules910,920,930, and940. For example, each of the first antenna module910, the second antenna module920, the third antenna module930, and the fourth antenna module940may correspond to the third antenna module246ofFIG.2. For example, the fourth antenna module940may be positioned adjacent (or close) to the folded portion191A. Each of the plurality of antenna modules910,920,930, and940may include at least one antenna array (e.g., the antenna array430ofFIG.4). Each of the at least one antenna array may include a plurality of conductive patterns (e.g., the antenna elements432,434,436, and438ofFIG.4). Each of the at least one antenna array may be operatively coupled to a communication circuit (e.g., the third RFIC226and the third RFFE236ofFIG.2) and at least one processor (e.g., the second communication processor214and/or the fourth RFIC228ofFIG.2). At least one processor may perform beamforming using at least one antenna array of the antenna modules. According to an example embodiment, in the unfolded state, the first electronic device101A may perform communication using the first beam book. For example, the first beam book may include beams associated with the antenna modules of the first group (e.g., the first antenna module910, the second antenna module920, and the third antenna module930). In this case, for example, the fourth antenna module940may be turned off.
For example, the first beam book may include a beam1-1, a beam1-2, a beam1-3, a beam1-4, and a beam1-5, which are associated with the first antenna module910, a beam2-1, a beam2-2, a beam2-3, a beam2-4, and a beam2-5, which are associated with the second antenna module920, and a beam3-1, a beam3-2, a beam3-3, a beam3-4, and a beam3-5, which are associated with the third antenna module930. According to an embodiment, in the folded state, the first electronic device101A may perform communication using the second beam book different from the first beam book. For example, the second beam book may include beams associated with the antenna modules of the second group (e.g., the first antenna module910, the third antenna module930, and the fourth antenna module940). In this case, the second antenna module920may be turned off (e.g., disconnected from the communication circuit). For example, the second beam book may include the beam1-1, the beam1-2, the beam1-3, the beam1-4, and the beam1-5, which are associated with the first antenna module910, the beam3-1, the beam3-2, the beam3-3, the beam3-4, and the beam3-5, which are associated with the third antenna module930, and a beam4-1, a beam4-2, a beam4-3, a beam4-4, and a beam4-5, which are associated with the fourth antenna module940. According to an example embodiment, the plurality of antenna modules910,920,930, and940may be connected to at least one communication circuit (e.g., the fourth RFIC228ofFIG.2). For example, the communication circuit may be connected to a limited number of antenna modules. For example, the plurality of antenna modules910,920,930, and940may be selectively connected to the communication circuit depending on the state of the first electronic device101A. For example, in the unfolded state, the first antenna module910, the second antenna module920, and the third antenna module930may be connected to the communication circuit, and the fourth antenna module940may be disconnected from the communication circuit. For example, the first electronic device101A may include a switch for selectively connecting at least some of the antenna modules to the communication circuit. For example, the first electronic device101A may include a switch circuit for selectively connecting the second antenna module920or the fourth antenna module940to the communication circuit. For example, the plurality of antenna modules910,920,930, and940may be selectively enabled depending on the state of the first electronic device101A. For example, the first electronic device101A may include at least one switch for selectively enabling or activating at least some of the plurality of antenna modules910,920,930, and940. Referring toFIG.15, a structure of an electronic device for selectively enabling antenna modules according to an embodiment may be described. For example, an electronic device (e.g., the electronic device101ofFIG.1) may include a plurality of antenna modules1511,1512,1513A,1513B,1514A, and1514B connected to the fourth RFIC228according to a connecting structure1501. Each of the plurality of antenna modules may correspond to, for example, the third antenna module246ofFIG.6. In the example ofFIG.15, the fourth RFIC228may be selectively connected to at least some of the plurality of antenna modules through a switching circuit. For example, the fourth RFIC228may be electrically connected to a third-third A antenna module1513A or a third-third B antenna module1513B through the first switching circuit1521.
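A rough software-level sketch of such routing (the class, method, and module names are illustrative assumptions, not the patent's implementation) might be:

```python
# Sketch of the routing implied by the first switching circuit 1521: the
# IFIC is connected to exactly one of two antenna modules at a time.
class SwitchingCircuit:
    def __init__(self, module_a: str, module_b: str):
        self._modules = {"A": module_a, "B": module_b}
        self.position = "A"

    def select(self, position: str) -> str:
        """Route the IFIC to the module at the given switch position."""
        assert position in self._modules
        self.position = position
        return self._modules[position]

sw_1521 = SwitchingCircuit(module_a="module_1513A", module_b="module_1513B")

def on_fold_state(folded: bool) -> str:
    # One switch position per housing state; this mapping is an assumption.
    return sw_1521.select("B" if folded else "A")

print(on_fold_state(folded=False))  # module_1513A
print(on_fold_state(folded=True))   # module_1513B
```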
For example, the second communication processor214may control the first switching circuit1521to selectively enable or activate the third-third A antenna module1513A or the third-third B antenna module1513B. For example, the fourth RFIC228may be electrically connected to a third-fourth A antenna module1514A or a third-fourth B antenna module1514B through the second switching circuit1522. For example, the second communication processor214may control the second switching circuit1522to selectively enable or activate the third-fourth A antenna module1514A or the third-fourth B antenna module1514B. In the connecting structure1501ofFIG.15, the electronic device101may selectively enable the antenna modules using a switching circuit with the change in the shape of the electronic device101. According to an example embodiment, the fourth RFIC228may be configured to be connected to a limited number of antenna modules. If the number of antenna modules of the electronic device101is increased in consideration of the change in the shape of the electronic device101, the number of antenna modules of the electronic device101may be greater than the number of antenna modules that may be simultaneously connected to the fourth RFIC228. Accordingly, by selectively connecting at least some of the antenna modules to the fourth RFIC228using the switching circuit, the electronic device101may satisfy the connection limitation of the fourth RFIC228. Referring toFIGS.10and15, for example, the first antenna module910may correspond to the third-first antenna module1511, and the third antenna module930may correspond to the third-second antenna module1512. The first antenna module910and the third antenna module930may be connected to an IFIC of the first electronic device101A (e.g., the fourth RFIC228ofFIG.15) regardless of the change in the shape of the first electronic device101A. For example, the second antenna module920may correspond to the third-third A antenna module1513A, and the fourth antenna module940may correspond to the third-third B antenna module1513B. The first electronic device101A may include a switching circuit corresponding to the first switching circuit1521and may enable or activate the second antenna module920or the fourth antenna module940using the switching circuit. In the example ofFIG.10, the second switching circuit1522, the third-fourth A antenna module1514A, and the third-fourth B antenna module1514B of the connecting structure1501ofFIG.15may be omitted. According to an example embodiment, the first electronic device101A may control the antenna modules910,920,930, and940based on the change in the shape of the electronic device. For example, in the unfolded state, the first antenna module910, the second antenna module920, and the third antenna module930may be enabled, and the fourth antenna module940may be disabled. For example, in the folded state, the first antenna module910, the third antenna module930, and the fourth antenna module940may be enabled, and the second antenna module920may be disabled. For example, the first electronic device101A may control enabling/disabling of the second antenna module920and/or the fourth antenna module940using the switching circuit. According to an example embodiment, the memory of the first electronic device101A may store mapping information between the beams of the first beam book and the beams of the second beam book. For example, the beam2-5of the first beam book may be mapped to the beam4-5of the second beam book.
Similarly, the beams2-1,2-2,2-3, and2-4of the first beam book may be mapped to the beams4-1,4-2,4-3, and4-4of the second beam book, respectively. According to an example embodiment, if the shape of the first electronic device101A is changed during communication using the first beam book, the first electronic device101A may perform communication using the second beam book. For example, the first electronic device101A may perform communication using the beam of the second beam book corresponding to the beam of the first beam book based on the mapping information about the first beam book and the second beam book. For example, if the first electronic device101A is changed from the unfolded state to the folded state during communication using the beam2-3, the first electronic device101A may perform communication using the beam4-3corresponding to the beam2-3. By performing communication using the mapped beam, the first electronic device101A may reduce the time required for beam searching. According to an example embodiment, if the shape of the first electronic device101A is changed during communication using the first beam book, the first electronic device101A may perform beam searching using the second beam book. For example, the first electronic device101A may attempt communication using the beam of the second beam book corresponding to the beam of the first beam book based on the mapping information about the first beam book and the second beam book. If the communication quality using the mapped beam of the second beam book is lower than a threshold value (e.g., if a reference signal reception power is lower than or equal to a threshold power and/or an error rate is equal to or higher than a threshold error rate), for example, the first electronic device101A may perform beam searching. In this case, the first electronic device101A may attempt beam searching from a beam adjacent to the mapped beam. For example, if the first electronic device101A is changed from the unfolded state to the folded state during communication using the beam2-3, the first electronic device101A may attempt communication using the beam4-3corresponding to the beam2-3. If beam searching is determined to be necessary, the first electronic device101A may perform the beam searching from a beam adjacent to the beam4-3of the second beam book. For example, the first electronic device101A may sequentially perform the beam searching from the beam4-2or the beam4-4of the second beam book. For example, the adjacent beam may be a beam that is physically adjacent to the mapped beam. For another example, the adjacent beam may be a beam having a beam index close to that of the mapped beam. By performing beam searching from the mapped beam, the first electronic device101A may reduce the time required for beam searching. FIG.11illustrates beam book operation of an electronic device according to an example embodiment. The beam book operation methods have been described with reference toFIG.10as including a plurality of communication circuits; however, embodiments of the present disclosure are not limited thereto. For example, the change of the beam book based on the change in shape described with reference toFIG.10may be similarly applied to the electronic device ofFIG.1, the second electronic device101B ofFIG.7, and the third electronic device101C. For example, the description with reference toFIG.10may be applied to the third electronic device101C ofFIG.11.
Referring toFIG.11, according to an example embodiment, in the unfolded state, the third electronic device101C may perform communication using the first beam book. For example, the first beam book may include beams associated with antenna modules of a first group (e.g., a first antenna module1110, a second antenna module1120, and a third antenna module1130). In this case, a fourth antenna module1140may be turned off. For example, the first beam book may include a beam1-1, a beam1-2, a beam1-3, a beam1-4, and a beam1-5, which are associated with the first antenna module1110, a beam2-1, a beam2-2, a beam2-3, a beam2-4, and a beam2-5, which are associated with the second antenna module1120, and a beam3-1, a beam3-2, a beam3-3, a beam3-4, and a beam3-5, which are associated with the third antenna module1130. According to an embodiment, in the folded state, the third electronic device101C may perform communication using the second beam book different from the first beam book. For example, the second beam book may include beams associated with antenna modules of a second group (e.g., the second antenna module1120, the third antenna module1130, and the fourth antenna module1140). In this case, the first antenna module1110may be turned off (e.g., disconnected from the communication circuit). For example, the second beam book may include the beam2-1, the beam2-2, the beam2-3, the beam2-4, and the beam2-5, which are associated with the second antenna module1120, the beam3-1, the beam3-2, the beam3-3, the beam3-4, and the beam3-5, which are associated with the third antenna module1130, and a beam4-1, a beam4-2, a beam4-3, a beam4-4, and a beam4-5, which are associated with the fourth antenna module1140. Referring toFIGS.11and15, for example, the second antenna module1120may correspond to the third-first antenna module1511, and the third antenna module1130may correspond to the third-second antenna module1512. The second antenna module1120and the third antenna module1130may be connected to an IFIC of the third electronic device101C (e.g., the fourth RFIC228ofFIG.15) regardless of the change in the shape of the third electronic device101C. For example, the first antenna module1110may correspond to the third-third A antenna module1513A, and the fourth antenna module1140may correspond to the third-third B antenna module1513B. The third electronic device101C may include a switching circuit corresponding to the first switching circuit1521and may enable or activate the first antenna module1110or the fourth antenna module1140using the switching circuit. In the example ofFIG.11, the second switching circuit1522, the third-fourth A antenna module1514A, and the third-fourth B antenna module1514B of the connecting structure1501ofFIG.15may be omitted. Referring toFIG.11, according to an embodiment, the plurality of antenna modules1110,1120,1130, and1140may be connected to one communication circuit (e.g., the fourth RFIC228ofFIG.2). For example, the communication circuit may be connected to a limited number of antenna modules. For example, the plurality of antenna modules1110,1120,1130, and1140may be selectively connected to the communication circuit depending on the state of the third electronic device101C. For example, in the unfolded state, the first antenna module1110, the second antenna module1120, and the third antenna module1130may be connected to the communication circuit, and the fourth antenna module1140may be disconnected from the communication circuit.
For example, the third electronic device101C may include a switch for selectively connecting at least some of the antenna modules to the communication circuit. For example, the third electronic device101C may include a switch circuit for selectively connecting the first antenna module1110or the fourth antenna module1140to the communication circuit. For example, the plurality of antenna modules1110,1120,1130, and1140may be selectively enabled depending on the state of the third electronic device101C. According to an embodiment, the third electronic device101C may control the antenna modules1110,1120,1130, and1140based on the change in the shape of the electronic device. For example, in the unfolded state, the first antenna module1110, the second antenna module1120, and the third antenna module1130may be enabled, and the fourth antenna module1140may be disabled. For example, in the folded state, the second antenna module1120, the third antenna module1130, and the fourth antenna module1140may be enabled, and the first antenna module1110may be disabled. According to an embodiment, the memory of the third electronic device101C may store mapping information between the beams of the first beam book and the beams of the second beam book. For example, the beam2-5of the first beam book may be mapped to the beam2-5of the second beam book. The third electronic device101C may perform communication or beam searching using the beam mapped based on the mapping information, as described above with reference toFIG.10. The change of the beam book based on the change in shape described with reference toFIGS.10and11may be similarly applied to the electronic device101ofFIG.1, the second electronic device101B ofFIG.7, the fourth electronic device101D, the fifth electronic device101E, and/or the sixth electronic device101F ofFIG.8. For example, in the case of the second electronic device101B, the antenna module configured to radiate to the left or right of the second electronic device101B may be changed depending on the open/closed state of the second electronic device101B. The communication may be performed using the mapping information about the first beam book and the second beam book set depending on the change in the antenna module of the second electronic device101B and/or a change in the orientation of the antenna module. For another example, in the case of an electronic device ofFIG.8(e.g., the fourth electronic device101D, the fifth electronic device101E, or the sixth electronic device101F), at least a portion of the antenna modules used may be changed with the retraction/extension of the extension part. For example, even if the position of the antenna module is independent of the extension/retraction of the extension part, the available antenna module may be changed as the structure of the electronic device is changed with the extension/retraction of the extension part. For another example, if the position of the antenna module is changed with the extension/retraction of the extension part (e.g., if the antenna module is included in the extension part), the antenna module may be put into a usable state with the change of the position of the antenna module. A beam book to be used may be changed with the change in the available antenna module.
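Combining the mapping and searching behavior described above forFIG.10, a compact sketch might look like the following (the threshold, the quality source, and the beam ordering are assumptions, not the patent's implementation):

```python
BEAM_MAP = {f"2-{i}": f"4-{i}" for i in range(1, 6)}  # first book -> second book
SECOND_BOOK_ORDER = [f"4-{i}" for i in range(1, 6)]
QUALITY_THRESHOLD_DBM = -100.0

def reselect_beam(current_beam: str, measure_quality) -> str:
    mapped = BEAM_MAP[current_beam]              # e.g., "2-3" -> "4-3"
    if measure_quality(mapped) > QUALITY_THRESHOLD_DBM:
        return mapped                            # mapped beam is good enough
    # Otherwise search outward from the mapped beam: 4-2, 4-4, 4-1, 4-5, ...
    idx = SECOND_BOOK_ORDER.index(mapped)
    candidates = sorted(SECOND_BOOK_ORDER,
                        key=lambda b: abs(SECOND_BOOK_ORDER.index(b) - idx))
    for beam in candidates[1:]:                  # skip the mapped beam itself
        if measure_quality(beam) > QUALITY_THRESHOLD_DBM:
            return beam
    return max(candidates, key=measure_quality)  # fall back to the best found

fake_rsrp = {"4-1": -112.0, "4-2": -97.0, "4-3": -108.0, "4-4": -103.0, "4-5": -115.0}
print(reselect_beam("2-3", fake_rsrp.__getitem__))  # -> "4-2"
```

Starting the search at the mapped beam rather than from scratch is what shortens the beam-searching time described above.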
According to various embodiments, an electronic device (e.g., the electronic device101ofFIG.1) may include a housing (e.g., the housings120A,120B,120C,120D,120E,121F, and122F ofFIGS.7and8) of which a shape is changeable, a plurality of antenna modules (e.g., the first to fourth antenna modules910,920,930, and940ofFIG.10or the first to fourth antenna modules1110,1120,1130, and1140ofFIG.11) positioned inside the housing, each of the plurality of antenna modules including at least one antenna array, at least one processor (e.g., the second communication processor214ofFIG.2) operatively connected to the plurality of antenna modules and configured to perform beamforming using the at least one antenna array, and a memory (e.g., the memory130ofFIG.1) connected to the at least one processor comprising processing circuitry. For example, each of the plurality of antenna modules may include at least one antenna array (e.g., the antenna array430ofFIG.4). At least one processor may perform beamforming using at least one antenna array. For example, the plurality of antenna modules may include at least one antenna module positioned adjacent to at least one folded portion of the flexible display and/or the housing of the electronic device. The remaining antenna modules except for the at least one antenna module positioned adjacent to the folded portion may be positioned adjacent to a periphery of the housing. The memory may store one or more instructions that cause the processor comprising processing circuitry of the electronic device to perform the operations of the electronic device to be described later with reference toFIGS.12to14. FIG.12is a flowchart1200of a communication method of an electronic device according to an embodiment. Referring toFIG.12, according to an embodiment, in operation1205, an electronic device (e.g., the electronic device101ofFIG.1) may perform communication based on the first beam book. For example, the electronic device may perform communication using one of the beams of the first beam book using an antenna array of at least some of the plurality of antenna modules of the electronic device. According to an embodiment, in operation1210, the electronic device may detect a change in the shape of the housing during communication based on the first beam book. For example, the electronic device may further include a flexible display that is viewable through at least a portion of the housing. The electronic device may detect the change in the shape of at least one of the housing or the flexible display (e.g., folded, unfolded, rolled out, rolled in, extended, or retracted). According to an embodiment, the electronic device may detect the change in shape by detecting an acceleration, a magnetic force, and/or a folding angle using a sensor circuit of the electronic device. The electronic device may detect the change in shape by comparing accelerations of regions of the electronic device divided around the folded portion. The electronic device may detect the change in shape by detecting a change in the magnitude of a magnetic force that changes depending on the open/close state. The electronic device may detect the change in shape using a hinge sensor connected to a hinge structure included in the folded portion of the electronic device. The electronic device may detect the change in shape based on whether the antenna module (comprising at least one antenna) positioned adjacent to the folded portion is shielded.
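As one way to turn such sensor readings into a stable open/close decision, the following minimal sketch assumes a hinge-angle input with illustrative thresholds and a simple hysteresis rule; a real device may instead fuse acceleration, magnetic force, and shielding information as described above:

```python
FOLDED_MAX_DEG = 30.0
UNFOLDED_MIN_DEG = 150.0

def housing_state(hinge_angle_deg: float, previous: str) -> str:
    if hinge_angle_deg <= FOLDED_MAX_DEG:
        return "folded"
    if hinge_angle_deg >= UNFOLDED_MIN_DEG:
        return "unfolded"
    return previous  # intermediate angles: keep the last stable state

state = "unfolded"
for angle in (170.0, 120.0, 20.0, 10.0, 160.0):
    state = housing_state(angle, state)
    print(f"{angle:5.1f} deg -> {state}")
```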
For example, an antenna module (comprising at least one antenna) that is shielded in the open state may become unshielded in the closed state. According to an embodiment, the electronic device may detect the change in shape using a sensor for detecting the change in the shape of the housing, which is disposed in the housing. For example, the electronic device may detect the change in shape using a sensor for detecting opening/closing of the housing (e.g., extension/retraction of the extension part). According to an embodiment, in operation1215, the electronic device may perform communication based on the second beam book in response to the change in the shape of the housing. For example, the second beam book may include information on beams associated with at least some of the plurality of antenna modules, and may include information on beams different from at least a portion of the first beam book. For example, when the change in shape is detected during communication using the first beam of the first beam book, the electronic device may perform communication based on the second beam of the second beam book mapped to the first beam. FIG.13is a flowchart1300of a beam book operation method of an electronic device according to an example embodiment. According to an embodiment, in operation1305, an electronic device (e.g., the electronic device101ofFIG.1) may perform communication based on the first beam book. For example, the electronic device may perform communication using one (e.g., a first beam) of the beams of the first beam book using an antenna array of at least some of the plurality of antenna modules of the electronic device. According to an embodiment, in operation1310, the electronic device may detect a change in the shape of the housing during communication based on the first beam book. For a description of operation1310, reference may be made to the description of operation1210ofFIG.12. According to an example embodiment, in operation1315, the electronic device may determine whether the antenna modules (comprising antennas) associated with the first beam book and the second beam book are the same. For example, the electronic device may determine that the antenna modules associated with the first beam book and the second beam book are different if at least some of the antenna modules associated with the first beam book and the antenna modules associated with the second beam book are different. According to an example embodiment, if the antenna modules associated with the first beam book and the second beam book are different (No in1315), in operation1318, the electronic device may control the antenna modules according to the setting of the antenna modules corresponding to the second beam book. For example, the electronic device may enable the antenna modules associated with the second beam book. For example, the electronic device may enable the antenna modules associated with the second beam book by controlling a switch that selectively connects the IFIC to some of the plurality of antenna modules. According to an example embodiment, the electronic device may perform operation1320if the antenna modules associated with the first beam book and the second beam book are the same (Yes in1315). According to an embodiment, in operation1320, the electronic device may perform communication using the beam mapped based on the second beam book in response to the change in the shape of the housing.
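Operations1315to1320can be summarized in a short sketch; the function names and dictionary shapes are assumptions, not the patent's implementation:

```python
def on_shape_change_1300(first_book, second_book, current_beam,
                         enable_modules, communicate):
    # Operation 1315: are the antenna-module sets of the two books the same?
    if set(first_book["modules"]) != set(second_book["modules"]):
        # Operation 1318: reconfigure modules for the second beam book,
        # e.g., by driving the switch between the IFIC and the modules.
        enable_modules(second_book["modules"])
    # Operation 1320: continue communication on the mapped beam.
    communicate(second_book["mapping"][current_beam])

first_book = {"modules": ("910", "920", "930")}
second_book = {"modules": ("910", "930", "940"), "mapping": {"2-3": "4-3"}}
on_shape_change_1300(first_book, second_book, "2-3",
                     enable_modules=lambda mods: print("enable:", mods),
                     communicate=lambda beam: print("communicate on:", beam))
```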
For a description of operation1320, reference may be made to the description of operation1215ofFIG.12. FIG.14is a flowchart1400of a beam searching method according to an example embodiment. According to an embodiment, in operation1405, an electronic device (e.g., the electronic device101ofFIG.1) may perform communication based on the first beam book. For example, the electronic device may perform communication using one (e.g., a first beam) of the beams of the first beam book using an antenna array of at least some of the plurality of antenna modules of the electronic device. According to an embodiment, in operation1410, the electronic device may detect a change in the shape of the housing during communication based on the first beam book. For a description of operation1410, reference may be made to the description of operation1210ofFIG.12. According to an embodiment, in operation1415, the electronic device may determine whether the antenna modules associated with the first beam book and the second beam book are the same. For a description of operation1415, reference may be made to the description of operation1315ofFIG.13. According to an embodiment, if the antenna modules associated with the first beam book and the second beam book are different (No in1415), in operation1418, the electronic device may control the antenna modules according to the setting of the antenna modules corresponding to the second beam book. For a description of operation1418, reference may be made to the description of operation1318ofFIG.13. According to an embodiment, the electronic device may perform operation1420if the antenna modules associated with the first beam book and the second beam book are the same (Yes in1415). According to an embodiment, in operation1420, the electronic device may perform beam searching from the beam mapped based on the second beam book in response to the change in the shape of the housing. For example, the electronic device may perform beam searching using beams sequentially adjacent to the second beam of the second beam book mapped to the first beam of the first beam book. For example, the electronic device may perform beam searching when the communication quality using the second beam is less than or equal to a threshold value. For another example, the electronic device may perform beam searching when the change in shape is detected, regardless of the communication quality. According to various example embodiments, an electronic device (e.g., the electronic device101ofFIG.1) may include a housing (e.g., the housings120A,120B,120C,120D,120E,121F, and122F ofFIGS.7and8) of which a shape is changeable, a plurality of antenna modules (e.g., the first to fourth antenna modules910,920,930, and940ofFIG.10or the first to fourth antenna modules1110,1120,1130, and1140ofFIG.11) positioned inside the housing, each of the plurality of antenna modules including at least one antenna array, at least one processor (e.g., the second communication processor214ofFIG.2) operatively connected to the plurality of antenna modules and configured to perform beamforming using the at least one antenna array, and a memory (e.g., the memory130ofFIG.1) connected to the at least one processor. For example, each of the plurality of antenna modules may include at least one antenna array (e.g., the antenna array430ofFIG.4). At least one processor may perform beamforming using at least one antenna array.
According to an embodiment, the memory may store instructions that, when executed, cause the at least one processor to perform communication based on a first beam book including information on beams associated with at least some of the plurality of antenna modules, using the plurality of antenna modules, detect a change in the shape of the housing during communication based on the first beam book, and perform communication based on a second beam book including information on beams associated with at least some of the plurality of antenna modules and including information on beams different from at least a portion of the first beam book, in response to the change in the shape. According to an embodiment, when executed, the one or more instructions may cause the at least one processor to perform communication based on a first beam of the first beam book, using the plurality of antenna modules, detect the change in the shape of the housing during communication using the first beam book, and perform communication based on a second beam of the second beam book mapped to the first beam, in response to the change in the shape. According to an embodiment, when executed, the one or more instructions may cause the at least one processor to perform beam searching from a beam adjacent to the second beam. According to an embodiment, when executed, the one or more instructions may cause the at least one processor to perform the beam searching if a communication quality of the communication based on the second beam is lower than or equal to a threshold value. For example, at least some of the antenna modules associated with the first beam book and the antenna modules associated with the second beam book may be different. According to an embodiment, when executed, the one or more instructions may cause the at least one processor to enable the antenna modules associated with the second beam book among the plurality of antenna modules, in response to the change in the shape. According to an embodiment, the at least one processor may be selectively connected to some of the plurality of antenna modules through an intermediate frequency integrated circuit and a switch (e.g., the first switching circuit1521and/or the second switching circuit1522ofFIG.15). When executed, the one or more instructions may cause the at least one processor to control the switch in response to the change in the shape to connect the antenna modules associated with the second beam book among the plurality of antenna modules to the intermediate frequency integrated circuit. According to an embodiment, the electronic device may further include a flexible display that is viewable through at least a portion of the housing. When executed, the one or more instructions may cause the at least one processor to detect a change in a shape of at least one of the housing or the flexible display. The change in the shape may include folding or unfolding. According to an embodiment, when executed, the one or more instructions may cause the at least one processor to detect the change in the shape by detecting at least one of an acceleration, a magnetic force, or a folding angle using the sensor circuit. According to an embodiment, the plurality of antenna modules may include at least one antenna module positioned adjacent to at least one folded portion of the flexible display and the housing, and the rest of the plurality of antenna modules except for the at least one antenna module may be positioned adjacent to a periphery of the housing.
According to various embodiments, a communication method of an electronic device including a housing of which a shape is changeable may include performing communication, using a plurality of antenna modules positioned in the housing, based on a first beam book including information on beams associated with at least some of the plurality of antenna modules, detecting a change in the shape of the housing during communication based on the first beam book, and performing communication based on a second beam book including information on beams associated with at least some of the plurality of antenna modules and including information on beams different from at least a portion of the first beam book, in response to the change in the shape. According to an embodiment, the performing of the communication based on the first beam book may include performing communication based on a first beam of the first beam book. The performing of the communication based on the second beam book may include performing communication based on a second beam of the second beam book mapped to the first beam in response to the change in the shape. According to an embodiment, the method may further include performing beam searching from a beam adjacent to the second beam. According to an embodiment, the performing of the beam searching may include performing the beam searching if a communication quality of communication based on the second beam is lower than or equal to a threshold value. According to an embodiment, at least some of the antenna modules associated with the first beam book and the antenna modules associated with the second beam book may be different. According to an embodiment, the method may include enabling the antenna modules associated with the second beam book among the plurality of antenna modules, in response to the change in the shape. For example, the enabling of the antenna modules associated with the second beam book may include selectively connecting the antenna modules associated with the second beam book among the plurality of antenna modules to an intermediate frequency integrated circuit of the electronic device, in response to the change in the shape. According to an embodiment, the detecting of the change in the shape may include detecting a change in a shape of at least one of the housing or a flexible display that is viewable through at least a portion of the housing. For example, the change in the shape may include folding or unfolding. According to an embodiment, the detecting of the change in the shape of at least one of the housing or the flexible display that is viewable through at least the portion of the housing may include detecting the change in the shape by detecting at least one of an acceleration, a magnetic force, or a folding angle of the electronic device. According to an embodiment, the plurality of antenna modules may include at least one antenna module positioned adjacent to at least one folded portion of the flexible display and the housing, and the rest of the plurality of antenna modules except for the at least one antenna module may be positioned adjacent to a periphery of the housing.
96,437
11863261
DESCRIPTION OF EMBODIMENTS High-Frequency-Band Small Cell FIG.1illustrates an example of a radio communication system. As illustrated inFIG.1, the radio communication system may include one or more first base stations20and a plurality of second base stations40. As one non-limiting example,FIG.1illustrates four base stations40-1,40-2,40-3, and40-4. Base station20exemplarily forms cell30. Base stations40(40-1,40-2,40-3, and40-4) form cells50(50-1,50-2,50-3, and50-4), respectively. Cells50may be encompassed within cell30or may partially overlap with cell30. Cell30may be, for example, a macro cell, and each of cells50may be, for example, a cell (e.g., a small cell or a semi-macro cell) having a smaller coverage than the macro cell. Base station20may be, for example, an aggregate node (central unit (CU)) and some or all of base stations40may be, for example, distributed nodes (distributed units (DUs)) connected to CU20by a fronthaul (FH) interface. For example, a common public radio interface (CPRI) may be applied as the FH interface. The CU may be referred to as “centralized baseband unit (CBBU)” or “BBU.” In the following description, a base station corresponding to the CU may be referred to as a “macro base station,” “macro cell,” or “core apparatus” for convenience. On the other hand, a base station corresponding to the DU may be referred to as a “small base station,” “small cell,” “base station cell,” or “radio apparatus” for convenience. Mobile station10connects with (accesses) at least one of macro base station20and small base stations40. In areas where macro cell30and small cells50overlap, mobile station10is capable of connecting with both macro base station20and small base stations40. Cells40may be assigned a high frequency band (e.g., a frequency band of several GHz to tens of GHz that is a frequency band used for 5th Generation New Radio (5G NR)) higher than the frequency band in cell30. Cell30may be assigned a low frequency band (e.g., a frequency band of several hundred MHz to several GHz that is a frequency band used for Long Term Evolution (LTE)). In the high frequency band of several GHz to tens of GHz, it is easier to secure a radio resource (hereinafter, referred to simply as “resource”) of a broader bandwidth as compared with the low frequency band, and thus, high-speed and large-capacity communication can be realized. On the other hand, a radio wave in the high frequency band has higher straightness and has a shorter wavelength than a radio wave in the low frequency band. Accordingly, the radio wave in the high frequency band is likely to suffer a greater radio wave propagation loss, and the communication distance thus tends to be shorter. Therefore, the coverages of cells50that can be formed by base stations40tend to be smaller than the coverage of macro cell30formed by base station20as illustrated in FIG. 1. Note that, a cell utilizing a high frequency band (e.g., small cell or semi-macro cell) may be referred to as “high-frequency-band cell” or “high-frequency-band small cell.” Massive MIMO Antenna Massive MIMO transmission using a massive MIMO antenna including, for example,100or more antenna elements may be applied to radio signal transmission in high-frequency-band cells40. The massive MIMO antenna includes a large number of antenna elements, so as to be capable of facilitating spatial multiplexing of transmission and reception streams to achieve high-speed and large-capacity radio communication. 
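To make the antenna-count argument concrete, the Python sketch below computes the array factor of a uniform linear array with half-wavelength element spacing. This is a generic textbook model, not a configuration from the disclosure; it simply shows that more elements give both a higher peak gain and a narrower half-power beamwidth.

import numpy as np

def array_factor_db(n_elements, theta_deg, steer_deg=0.0, spacing=0.5):
    # Normalized array factor (dB) of an n-element uniform linear array;
    # element spacing is given in wavelengths (0.5 = half-wavelength).
    theta = np.radians(theta_deg)
    steer = np.radians(steer_deg)
    psi = 2 * np.pi * spacing * (np.sin(theta) - np.sin(steer))
    k = np.arange(n_elements)
    af = np.abs(np.exp(1j * k[None, :] * psi[:, None]).sum(axis=1)) / n_elements
    return 20 * np.log10(np.maximum(af, 1e-12))

angles = np.linspace(-90.0, 90.0, 3601)
for n in (4, 64):  # a small array vs. a massive-MIMO-like array
    af = array_factor_db(n, angles)
    main_lobe = angles[af >= -3.0]  # no grating lobes at half-wavelength spacing
    print(f"N={n:3d}: coherent gain {10 * np.log10(n):4.1f} dB, "
          f"3 dB beamwidth = {main_lobe[-1] - main_lobe[0]:.1f} deg")

Going from 4 to 64 elements raises the coherent gain by 12 dB while narrowing the main lobe by roughly a factor of 16, which is the trade the following passage describes: the gain offsets the high-band propagation loss, at the cost of sharper beams.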
The massive MIMO antenna also makes it possible to realize enhanced beamforming (BF) as follows. FIG.2Ais an explanatory view illustrating an example of beamforming in a low frequency band.FIG.2Bis an explanatory view illustrating an example of beamforming in a high frequency band. Beam B2in the high frequency band illustrated inFIG.2Bsuffers a greater radio wave propagation loss than beam B1in the low frequency band illustrated inFIG.2A. Thus, the reachable distance of beam B2in the high frequency band is likely to be shorter than that of beam B1in the low frequency band having the same full width at half maximum illustrated inFIG.2A. In order to extend the reachable distance of beam B2in the high frequency band, (sharper) beam B3having a narrower full width at half maximum is generated, for example, by beamforming using the massive MIMO antenna. The beamforming allows an increase in beam gain (which may also be referred to as “beamforming gain,” hereinafter), to extend the reachable distance of beam B3. In other words, a decrease in reception strength due to the increase in the propagation loss of the radio wave in the high frequency band can be covered by the beamforming gain produced using the massive MIMO antenna. Further, beamforming using a large number of MIMO antenna elements allows a beam to be directed in a particular direction, which allows, for example, easier reduction of interference with other cells, thereby improving resource utilization efficiency. Further, the size of each of the MIMO antenna elements can be small because the size is proportional to the wavelength of the radio wave to be transmitted and received. The shorter the wavelength of the radio wave to be transmitted and received (i.e., the higher the frequency), the smaller the size of the antenna element can be. Therefore, the massive MIMO antenna in the high frequency band as a whole can be miniaturized relatively easily despite including a large number of antenna elements. High-Speed Movement Environment in High-Frequency Band FIGS.3A to3Care explanatory views illustrating examples of radio communication environments including mobile station10that is an example of the terminal. InFIGS.3A to3C, mobile station10moves across base station cells40-1and40-2. The movement of mobile station10may be exemplarily assumed to be high-speed movement of a transportation vehicle, such as a car, a railway train, or the like. For example, when mobile station10is moving at high speed, a change of channel state information (CSI) due to the Doppler effect is larger in the radio communication in the high frequency band than in the radio communication in the low frequency band. Depending on the moving speed of mobile station10, an abrupt change may occur in the CSI. Note that the CSI is estimated (or measured) at mobile station10, for example, based on a reference signal (RS) transmitted by base station40(or20) and is fed back (reported) to base station40(or20). The CSI report includes, for example, information on a beam suitable for reception at mobile station10(e.g., beam number (index), precoding weight index, and/or the like). Base station40(or20) controls DL transmission to mobile station10based on the CSI report from mobile station10. For example, the DL transmission control based on the CSI report may include adaptive determination of a modulation and coding scheme, control on the number of transmission streams, and determination of a precoding weight.
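The scale of the Doppler problem can be checked with the standard relation f_d = v * f_c / c (the textbook maximum Doppler shift, not a formula taken from the disclosure). The sketch below compares a low band of a few GHz with a high band of tens of GHz at walking and vehicular speeds; the speed and frequency values are arbitrary examples.

C = 299_792_458.0  # speed of light, m/s

def max_doppler_hz(speed_kmh, carrier_hz):
    # Maximum Doppler shift f_d = v * f_c / c for a terminal at the given speed.
    return (speed_kmh / 3.6) * carrier_hz / C

for fc in (2e9, 28e9):   # an LTE-like low band vs. a 5G-NR-like high band
    for v in (3, 120):   # walking vs. vehicular speed, km/h
        print(f"fc = {fc / 1e9:4.0f} GHz, v = {v:3d} km/h -> "
              f"f_d = {max_doppler_hz(v, fc):7.1f} Hz")

At 120 km/h the shift grows from roughly 0.2 kHz at 2 GHz to about 3 kHz at 28 GHz, which is why CSI measured at time t1 can already be stale at time t2 in the high band, as described next.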
When an abrupt temporal change occurs in the CSI with the movement of mobile station10, adaptive control on DL transmission cannot follow the change, and the transmission performance (e.g., throughput) deteriorates due to a channel estimation error between channels at the time of CSI measurement and at the time of data transmission. For example, channels H(t1), H(t2), and H(t3) estimated at times t1to t3illustrated inFIG.3AtoFIG.3Care different from one another. The higher the moving speed of mobile station10, the larger the change of channel H estimated at mobile station10becomes due to the effect of the Doppler shift. When the change of channel H estimated at mobile station10becomes large, the change of the CSI reported from mobile station10to base station40also becomes large. For example, base station cell40-1controls the DL transmission to mobile station10using the CSI report based on channel H(t1). However, when mobile station10has already moved at high speed to the position illustrated inFIG.3Bat time t2, channel H(t2) at time t2is significantly different from channel H(t1) at time t1. Thus, when base station cell40-1controls the DL transmission to mobile station10at time t2illustrated inFIG.3Bbased on the CSI report at time t1, the DL transmission control suitable for channel H(t2) is not performed, and the transmission performance is degraded depending on an error between channel H(t1) and channel H(t2). Further, for example, at time t3illustrated inFIG.3C, base station cell40to which mobile station10is connected is switched from base station cell40-1to base station cell40-2. As connection-target base station cell40switches, channel H(t2) changes to channel H(t3), and a beam selected based on the CSI report switches accordingly. Here, as described above, the full width at half maximum of the beam is made narrower by the BF using the massive MIMO antenna. That is, the beam is sharper, which increases the frequency with which the beam is switched as mobile station10moves. To address these issues, it is conceivable, for example, to increase the transmission frequency of a downlink (DL) reference signal, such as a beam reference signal (BRS) or a CSI measurement reference signal (in other words, to shorten the transmission periodicity). However, when the transmission frequency of the reference signal is increased, the radio overhead increases, and the communication performance (e.g., data throughput) is degraded. Note that, the DL reference signal (RS) may be any signal known between mobile station10and base station40(or20). For example, the BRS may be replaced by another reference signal, such as a synchronization signal (SS), a synchronization signal block (SS/PBCH block), or a channel-state information reference signal (CSI-RS). In addition, the CSI measurement reference signal is not limited to CSI measurement, for example, but may also be used for other uses such as phase noise compensation. In addition, the RS may be a demodulation reference signal (DMRS) or a phase tracking reference signal (PTRS). In the following, an embodiment relating to a radio communication control capable of reducing a decrease in communication performance caused by the movement of the terminal in the high frequency band will be described with reference to the drawings. One Example of Configuration according to Present Disclosure FIG.4illustrates an example of a configuration of base station40and core apparatus100according to the present disclosure.
For simplicity,FIG.4focuses on one base station40. Base station (base station cell or cell)40includes, for example, control section21, communication section22, physical layer processing section23, and radio processing section24. Physical layer processing section23includes, for example, Doppler shift estimation section25. Radio processing section24includes, for example, beamforming (BF) section26. For example, in accordance with a DL transmission instruction signal inputted from communication section22, control section (control circuit)21instructs physical layer processing section23to perform physical layer processing on a DL signal to mobile station10, and also instructs radio processing section24to perform DL radio transmission. Here, the DL transmission instruction signal is a signal for instructing base station40to perform transmission of the DL signal to mobile station10, and is one example of a control signal. Further, for example, in accordance with a reference signal (RS) transmission instruction signal inputted from communication section22, control section21instructs physical layer processing section23to perform physical layer processing on the DL signal in which an RS is mapped to time resources, and instructs radio processing section24to perform DL radio transmission. Here, the RS transmission instruction signal is a signal for instructing base station40to perform transmission of the DL signal in which the RS is mapped to the time resources, and is one example of a control signal. Communication section22includes, for example, a transmission circuit and a reception circuit, and transmits and receives a signal to and from core apparatus100. For example, an FH interface may be applied as communication section22. Communication section22receives Doppler information from Doppler shift estimation section25and transmits the Doppler information to core apparatus100. Further, communication section22, for example, transmits a UL signal inputted from physical layer processing section23to core apparatus100. Further, communication section22receives the RS transmission instruction signal and DL transmission instruction signal from core apparatus100, and outputs the RS transmission instruction signal and DL transmission instruction signal to control section21. Further, communication section22receives the DL signal from core apparatus100and outputs the DL signal to physical layer processing section23, for example. Physical layer processing section23exemplarily includes a signal processing circuit. For example, in accordance with an instruction from control section21, physical layer processing section23performs physical layer processing on the DL signal inputted from communication section22. Physical layer processing section23outputs the processed DL signal to radio processing section24. Further, physical layer processing section23performs physical layer processing on the UL signal inputted from radio processing section24, for example. Physical layer processing section23outputs the processed UL signal to communication section22. For example, radio processing section24performs radio processing on a radio signal received from mobile station10and outputs the processed UL signal to physical layer processing section23. The radio processing on the UL signal may include, for example, beamforming (BF) and Analog to Digital (A/D) conversion. 
Further, for example, in accordance with an instruction from control section21, radio processing section24performs radio processing on the DL signal inputted from physical layer processing section23, and transmits the processed signal to mobile station10. The radio processing on the DL signal may include, for example, Digital to Analog (D/A) conversion, and BF. Doppler shift estimation section25estimates a Doppler shift, for example, based on the UL signal inputted from physical layer processing section23(e.g., UL RS). The UL RS may be, for example, a sounding reference signal (SRS). The information indicating the estimated Doppler shift (Doppler information) is outputted to, for example, communication section22. BF section26performs beamforming processing on the DL signal. BF section26includes at least one of an analog beamforming circuit that performs beamforming processing on the DL signal having been subjected to the D/A conversion, and a digital beamforming circuit that performs beamforming processing on the DL signal yet to be subjected to the D/A conversion. Core apparatus100includes, for example, control section11, physical layer processing section12, and communication section13. For example, based on the Doppler information inputted from base station40, control section (control circuit)11determines a mapping method (hereinafter, referred to as “RS design”) for mapping the RS to the time resources of the DL signal, and a beam (e.g., beam number, precoding weight, and the like) used for transmission of the DL signal in which the RS is mapped. A target RS for the RS design is, for example, at least one of a beam reference signal (BRS) and a CSI measurement RS. The BRS and CSI measurement RS are referred to as “estimation reference signal (ERS)” for convenience. The BRS is a reference signal used for beam control (e.g., for specifying or identifying a reception beam by mobile station10). The BRS may be replaced by, for example, a synchronization signal (SS) or a channel-state information reference signal (CSI-RS) in 5th Generation (5G) New Radio (NR). The CSI measurement RS is not limited to CSI measurement, but may also be used for phase noise compensation, for example. The ERS may include, for example, a demodulation reference signal (DMRS) and a phase-tracking reference signal (PTRS) in 5G NR. Control section11instructs physical layer processing section12to generate an RS based on the determined RS design, for example. Further, control section11outputs, to communication section13, a control signal (e.g., RS transmission instruction signal) for instructing base station40to transmit the generated RS. Further, for example, based on a CSI report from mobile station10, control section11outputs, to communication section13, a control signal (for example, DL transmission instruction signal) for controlling transmission of the DL signal addressed to mobile station10. Physical layer processing section12includes, for example, a signal processing circuit. In accordance with an instruction of control section11, physical layer processing section12performs physical layer processing on the UL signal inputted from communication section13. Further, physical layer processing section12performs physical layer processing on the DL signal, for example, in accordance with an instruction of control section11. The processed DL signal is outputted to, for example, communication section13.
Communication section13includes, for example, a transmission circuit and a reception circuit, and transmits and receives signals to and from base stations40. For example, an FH interface may be applied as communication section13. For example, communication section13receives Doppler information from base station40and outputs the Doppler information to control section11. Further, communication section13receives the UL signal from base station40and outputs the UL signal to physical layer processing section12. Further, communication section13receives the DL signal from physical layer processing section12and transmits the DL signal to base station40. Further, communication section13receives the RS transmission instruction signal or DL transmission instruction signal from control section11and transmits the RS transmission instruction signal or DL transmission instruction signal to base station40. One Example of Operation according to Present Disclosure FIG.5is a sequence diagram illustrating an example of the operation of mobile station10, base station40, and core apparatus100according to the present disclosure. Note that, depending on exemplary embodiments according to the present disclosure, at least one step in the sequence illustrated inFIG.5may be omitted. In addition, the order of steps included in the sequence illustrated inFIG.5is merely an example of the order of the operation performed by mobile station10, base station40, and core apparatus100. The steps included in the sequence diagram may be performed in parallel with or concurrently with other steps, or the order of execution may be interchanged with other steps. The steps included in the sequence illustrated inFIG.5may also be divided into a plurality of steps. The same applies to the flowchart illustrated inFIGS.7,11,14, or16, and the sequence diagram illustrated inFIG.18. Reference is made toFIG.5. In step S12, radio processing section24of base station40receives a UL radio signal including a reference signal from mobile station10. The UL reference signal is used to estimate the Doppler shift. In one example, the UL reference signal is the SRS. The SRS is used, for example, for channel estimation for UL adaptive radio link control (e.g., for determining a precoding weight and/or determining a modulation scheme). In another example, the reference signal may be the demodulation reference signal (DMRS). Further, in still another example, the reference signal may be a known signal transmitted by mobile station10upon request of base station20or base station40. The known signal is a signal known between base station40and mobile station10. In step S14, Doppler shift estimation section25of base station40estimates the Doppler shift of the radio signal based on the reference signal received from mobile station10. In one example, Doppler shift estimation section25estimates the Doppler shift based on the correlation between orthogonal frequency division multiplexing (OFDM) symbols included in the reference signal. In another example, Doppler shift estimation section25estimates the Doppler shift based on the correlation between cyclic prefixes (CPs) of the OFDM symbols included in the radio signal. Doppler shift estimation section25may estimate the Doppler shift based on multiple types of reference signals. In step S16, communication section22of base station40transmits Doppler information indicating the estimated Doppler shift to core apparatus100.
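One way to picture the CP-based estimation in steps S12 to S16 is the sketch below: in an OFDM symbol, the cyclic prefix repeats the symbol tail, so the two copies differ only by a phase rotation proportional to the frequency offset, and the angle of their correlation recovers that offset. The numerology, noise level, and variable names here are assumptions chosen for illustration, not values from the disclosure.

import numpy as np

rng = np.random.default_rng(0)
N_FFT, N_CP, FS = 256, 32, 30.72e6  # assumed FFT size, CP length, sample rate
f_true = 1500.0                     # Hz; Doppler-like offset to be recovered

# One OFDM symbol with a cyclic prefix (the CP copies the symbol tail).
sym = np.fft.ifft(rng.standard_normal(N_FFT) + 1j * rng.standard_normal(N_FFT))
tx = np.concatenate([sym[-N_CP:], sym])

# Apply the frequency offset and mild noise.
n = np.arange(tx.size)
rx = tx * np.exp(2j * np.pi * f_true * n / FS)
rx = rx + 0.01 * (rng.standard_normal(tx.size) + 1j * rng.standard_normal(tx.size))

# Samples k and k + N_FFT carry the same data rotated by exp(j*2*pi*f*N_FFT/FS),
# so the angle of their correlation is proportional to the offset f.
corr = np.vdot(rx[:N_CP], rx[N_FFT:N_FFT + N_CP])
f_est = np.angle(corr) * FS / (2 * np.pi * N_FFT)
print(f"estimated offset = {f_est:.1f} Hz (true offset {f_true:.1f} Hz)")

To first order, a Doppler shift appears as such an offset, which is one reason the CP correlation of step S14 can serve as the speed indicator that is reported to core apparatus100.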
In core apparatus100, the Doppler information is inputted to control section11via communication section13. In step S18, based on the estimated Doppler information, control section11of core apparatus100performs mapping (or assignment) of the RS to the beam that is to be transmitted from base station40. Note that, in the following, designations “thick beam” and “sharp beam” with respect to beams transmitted by base station40may be used differentially for convenience. The “thick beam” means, for example, a beam that, by assignment of the same RS to a plurality of beams, virtually extends a spatial range within which mobile station10receives temporally the same RS, as compared to a case where different RSs are assigned to individual beams. In contrast to this “thick beam,” the “sharp beam” means an individual beam to which a different RS is assigned (in other words, to which the same RS is not assigned). FIG.6illustrates examples of the RS design. FIG.6illustrates a plurality of RS designs dependent on different magnitudes of the Doppler shift. In one example, the slower the moving speed of mobile station10(i.e., the smaller the magnitude of the estimated Doppler shift), the smaller the number of RSs mapped to time resources per unit time. In other words, the RS transmission frequency decreases, or the RS transmission periodicity (or RS insertion interval) becomes longer. Accordingly, the RS transmission periodicity is adjusted depending on the moving state of mobile station10. In one example, as illustrated inFIG.6, a signal of a data channel (which may be referred to as “data signal”) may be mapped to a time resource to which no RS is mapped. The longer the RS transmission periodicity, the more the time resources to which the data signal can be mapped. It is thus possible to increase the data throughput. Reference is made again toFIG.5. In step S20, communication section13of core apparatus100transmits, to base station40, for example, a control signal for mapping the RS to DL time resources in accordance with the RS design for transmission of the RS. Control section21of base station40receives the control signal from core apparatus100via communication section22. Control section21performs mapping of one or both of the RS and data in accordance with the received control signal. Code multiplexing for allowing individual beams to be identified (or specified) even when the same RS is assigned to a plurality of beams may be performed on the RS. An example of this code multiplexing will be described later. Note that, core apparatus100may transmit a DL signal obtained by mapping the RS in physical layer processing section12to base station40. In step S22, mobile station10receives the DL RS (e.g., BRS) transmitted by base station40. In step S24, based on the received BRS, mobile station10specifies (or identifies or selects) a beam suitable for reception by mobile station10. Depending on the moving speed of mobile station10, the reception beam specified is, for example, one sharp beam or a beam bundle (which may be referred to as “beam group”) including a plurality of sharp beams. In step S26, mobile station10transmits information on the specified reception beam (reception-beam specifying information) to base station40. In step S28, base station40transmits the reception-beam specifying information received from mobile station10to core apparatus100. 
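Before the sequence resumes at step S30, the RS-design choice of step S18 and FIG.6 can be sketched as a simple lookup from the estimated Doppler shift to an RS insertion interval, with data signals filling the slots an RS no longer occupies. The thresholds, the slot granularity, and the function names below are invented for illustration; the disclosure states only that a smaller Doppler shift maps to fewer RSs per unit time.

# Hypothetical mapping from estimated Doppler shift to RS periodicity;
# the thresholds and slot counts are illustrative assumptions.

def rs_periodicity_slots(doppler_hz):
    # Pick an RS insertion interval (in slots) from the estimated Doppler shift.
    if doppler_hz > 1000.0:   # fast-moving terminal: dense RS
        return 1
    if doppler_hz > 100.0:    # moderate speed
        return 4
    return 16                 # quasi-static terminal: sparse RS, more data slots

def build_slot_map(n_slots, period):
    # Mark each time slot as carrying an RS or a data signal.
    return ["RS" if slot % period == 0 else "DATA" for slot in range(n_slots)]

print(build_slot_map(16, rs_periodicity_slots(50.0)))    # slow: one RS per 16 slots
print(build_slot_map(16, rs_periodicity_slots(3000.0)))  # fast: RS in every slot

The longer the interval, the more slots carry data, which is exactly the throughput benefit FIG.6 illustrates.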
In step S30, based on the reception-beam specifying information received from mobile station10, core apparatus100determines a beam (transmission beam) to be used by base station40for transmitting the data to mobile station10. The transmission beam determined is, for example, one sharp beam or a beam bundle including a plurality of sharp beams. In step S32, communication section13of core apparatus100transmits, to base station40, a DL signal processed by physical layer processing section12and an RS (e.g., ERS) transmission instruction signal inputted from control section11. Base station40(e.g., physical layer processing section23) maps one or both of the ERS and DL signal to the DL radio resources. Note that, core apparatus100may transmit, to base station40, the DL signal in which the RS is mapped by physical layer processing section12. In step S34, mobile station10receives the ERS transmitted by base station40. In step S36, based on the received ERS, mobile station10estimates (or measures) the CSI between mobile station10and base station40being a transmitter of the ERS. In step S38, mobile station10transmits a report of the estimated CSI to base station40. In step S40, base station40transmits the CSI report received from mobile station10to core apparatus100. In step S42, core apparatus100generates a DL signal including a data signal addressed to mobile station10. In step S44, for example, control section11of core apparatus100instructs base station40via communication section13to assign the beam determined in step S30to the data transmission addressed to mobile station10. In accordance with the instruction from core apparatus100, control section21of base station40assigns the transmission beam determined in step S30, for example, to transmission of the DL signal addressed to mobile station10. Further, for example, in accordance with control of core apparatus100based on the CSI report, physical layer processing section23of base station40generates a transmission precoding weight for reducing mutual interference between terminals including mobile station10, and applies the transmission precoding weight to the data signal addressed to mobile station10. In step S46, base station40transmits the DL signal to mobile station10using the assigned beam in accordance with the DL transmission instruction signal. Mobile station10receives the DL signal transmitted using the beam by base station40. Embodiment 1 FIG.7is a flowchart illustrating an example of the operation of core apparatus100according to Embodiment 1. The flowchart illustrated inFIG.7illustrates an example of the operation performed in step S18ofFIG.5. In step S52, control section11of core apparatus100determines whether or not there is mobile station10moving at high speed. In one example, when the magnitude of the Doppler shift estimated by Doppler shift estimation section25exceeds a predetermined magnitude, control section11determines that mobile station10is moving at high speed. When there is no mobile station10moving at high speed (step S52: No), step S18ends. In this case, different BRSs are transmitted from base station40in a plurality of sharp beams, respectively. In other words, a common BRS is not shared among a plurality of sharp beams. FIG.8Aillustrates an example of beams generated by the base station according to Embodiment 1.FIG.8Billustrates an example of RS design D1determined for base station40according to Embodiment 1. 
As illustrated inFIG.8B, in RS design D1determined for base station40, the BRS is mapped to the DL time resources with a predetermined periodicity (first periodicity). In other words, the BRS is transmitted with a predetermined transmission periodicity. For example, when mobile station10illustrated inFIG.8Amoves at low speed, it is possible for mobile station10to specify sharp reception beams even when switching of the reception beams occurs, for example, at points P1to P6. It is thus possible for base station40to use sharp beams for data transmission to mobile station10, as illustrated inFIG.8A, to improve communication performance (e.g., throughput). On the other hand, when there is mobile station10moving at high speed (step S52: Yes), control section11determines, in step S54, an RS design for transmitting the same BRS in a plurality of sharp beams. In other words, the common BRS is shared among a plurality of sharp beams, and a plurality of sharp beams are virtually treated as one thick beam. In step S56, for example, control section11configures the transmission periodicity of BRS in the thick beam to a periodicity (second periodicity) longer than the predetermined periodicity (first periodicity). This periodicity configuration reduces the number of BRS transmissions. It is thus possible to reduce radio overhead. Note that, a reduced number of BRS transmissions may allow the data signal (or another RS as described later) to be mapped to the time resources for which the BRS transmission (or BRS mapping) is scheduled in the case of the first periodicity. FIG.9Aillustrates an example of beams generated by base station40according to Embodiment 1.FIG.9Billustrates an example of RS design D2determined for base station40according to Embodiment 1. According to Embodiment 1, for example, as illustrated inFIG.9A, when there is mobile station10moving at high speed, the common BRS is shared among a plurality of sharp beams included in each of beam bundles B1, B2, and B3. Accordingly, even when mobile station10moves at high speed, mobile station10is capable of reducing the switching frequency of the reception beam. For example, mobile station10is capable of reducing switching between a plurality of sharp beams included in any of beam bundles B1, B2, and B3. For example, among points P1to P6illustrated inFIG.9A, mobile station10does not have to switch between the reception beams at points P1, P3, P4, and P6. It is thus possible, for example, to improve the durability or reliability of communication between mobile station10and base station40. In addition, as illustrated inFIG.9B, sharing the common BRS among a plurality of sharp beams allows reduction in the BRS transmission frequency. Consequently, it is possible, for example, to map the data signal to the time resources for which the BRS was scheduled before the reduction, so as to improve the resource utilization efficiency and the data throughput. Modification 1 Embodiment 1 focuses on one base station40. Next, attention is paid to a case where mobile station10passes across multi-cells (a plurality of base station cells40). FIG.10illustrates an example of beams generated by the base station cells according to Embodiment 2. Two base stations40-1and40-2are connected to core apparatus100(not illustrated). As illustrated inFIG.10, mobile station10moves across the cells of two base stations40-1and40-2. 
In step S18ofFIG.5, core apparatus100may assign the same BRS to a plurality of beams (e.g., beam bundles B6and B7) that are transmitted, for example, toward a region where two base station cells40-1,40-2overlap. Thus, beam bundles B6and B7transmitted by two base stations40-1and40-2are virtually treated as one thick beam. In addition, in step S18ofFIG.5, core apparatus100may assign the same BRS to a plurality of beams (e.g., beam bundles B4and B9) of the plurality of beams (e.g., beam bundles B4to B9) which do not interfere with each other (or interfere the least with each other). This assignment makes it possible to reuse the same BRS between a plurality of base stations40. Accordingly, it is possible to improve the utilization efficiency of radio resources used for the BRS transmission. According to Modification 1, the common BRS is shared among a plurality of beams transmitted by a plurality of base stations40. It is thus possible to reduce the BRS transmission frequency for each of the plurality of base stations40. Further, a plurality of beams (e.g., beam bundles B6and B7) are treated as the same beam in the region where two base station cells40-1and40-2overlap. It is thus possible to achieve smooth cell switching (in other words, handover) within the region (e.g., at point P7). Furthermore, since a plurality of beams are treated as one beam in the region where two base station cells40-1and40-2overlap, it is possible to reduce the occurrence of inter-cell interference. Embodiment 2 Embodiment 1 and Modification 1 described above focus on mobile station10moving at high speed. Next, attention is paid to a case where mobile station10moving at high speed and mobile station10unmoving or moving at low speed exist together. FIG.11is a flowchart illustrating an example of the operation of core apparatus100according to Embodiment 2. The flowchart illustrated inFIG.11is an example of the operation performed in step S18ofFIG.5. In step S62, control section11of core apparatus100assigns the same BRS to a plurality of sharp beams. In other words, the common BRS is shared among the plurality of sharp beams, and the plurality of sharp beams are virtually treated as one thick beam. FIGS.12A to12Cillustrate examples of beam X-Y (X=1, 2, or 3 and Y=1, 2, or 3) generated by base station40according to Embodiment 2.FIGS.12A to12Cillustrate beams X-Y (X=1, 2, or 3 and Y=1, 2, or 3) transmitted at times t1, t2, and t3, respectively. When step S62is performed, for each of X=1, 2, and 3, the same BRS is assigned to sharp beams X-1, X-2, and X-3 illustrated inFIGS.12A to12C, for example. Consequently, for each of X=1, 2, and 3, the plurality of sharp beams X-1, X-2, and X-3 are virtually treated as one thick beam. Reference is made toFIG.11again. In step S64, control section11of core apparatus100determines code multiplexing of the BRS in the time axis (time domain). For example, control section11determines to multiplex the BRS, over different times, with codes that differ per sharp beam. By code multiplexing, for example, mobile station10is capable of selectively identifying the thick beam (beam bundle) and the sharp beams (individual beams included in the beam bundle). For example, the BRSs transmitted by the beams illustrated inFIGS.12A to12Care BRSs S1, S2, and S3in which BRSs for the sharp beams are code-multiplexed with codes in the time axis.
It is possible for mobile station10to specify, based on at least one of code-multiplexed BRSs S1, S2, and S3, X of reception beams X-Y (X=1, 2, or 3 and Y=1, 2, or 3) for identifying the beam bundle. It is also possible for mobile station10to specify, based on all of code-multiplexed BRSs S1, S2, and S3, X of reception beams X-Y (X=1, 2, or 3 and Y=1, 2, or 3) for identifying the beam bundle and Y for identifying the sharp beams in the beam bundle. For example, mobile station10moving at high speed specifies beam bundle (X) and mobile station10unmoving or moving at low speed specifies sharp beams (Y) in beam bundle (X). Reference is made toFIG.11again. In step S66, control section11of core apparatus100configures the transmission periodicity of the shared common BRS in the RS design to a periodicity (second periodicity) longer than the predetermined periodicity (first periodicity). This periodicity configuration reduces the number of BRS transmissions. It is thus possible to reduce the radio overhead. Note that, a reduced number of BRS transmissions may allow the data signal (or another RS as described later) to be mapped to the time resources for which the BRS transmission (or BRS mapping) is scheduled in the case of the first periodicity. FIG.13illustrates an example of the RS design at the time of generation of the beams illustrated inFIGS.12A to12C. As illustrated inFIG.13, in RS design D4, code-multiplexed BRSs S1to S3are transmitted with a longer periodicity (e.g., at times t1, t2, t3, t4, t5, and t6) as compared with RS design D3prior to assigning the common BRSs. Because of the longer periodicity of the BRSs, the time resources scheduled for transmission (mapping) before the periodicity was lengthened become free resources. In order to improve resource utilization efficiency, a data signal (or another RS) may be mapped to the free resources. FIG.14is a flowchart illustrating an example of the operation of mobile station10according to Embodiment 2. The flowchart illustrated inFIG.14is an example of the operation performed in step S24ofFIG.5. In step S72, mobile station10determines whether or not code separation is possible for a code-multiplexed received BRS. For example, a description will be given, with reference toFIGS.15A and15B, of how mobile station10unmoving or moving at low speed is capable of the code separation while mobile station10moving at high speed fails to perform the code separation. FIGS.15A and15Billustrate other examples of beam X-Y (X=1 to 4 and Y=1 or 2) generated by base station40according to Embodiment 2.FIGS.15A and15Billustrate beams X-Y (X=1 to 4 and Y=1 or 2) transmitted at times t1and t2, respectively. At time t1, base station40transmits, by beam1-1, a BRS code-multiplexed with value s and transmits, by beam1-2, BRS S1code-multiplexed with value s. In addition, at time t2, base station40transmits, by beam1-1, the BRS code-multiplexed with value s and transmits, by beam1-2, BRS S2code-multiplexed with value −s. In this case, when a reception signal received by mobile station10at time t is represented by y(t) and a channel of beam i-j at time t is represented by hi-j(t), the following Equations 1 and 2 hold: y(t1)=h1-1(t1)s+h1-2(t1)s(Equation 1) y(t2)=h1-1(t2)s−h1-2(t2)s(Equation 2). Reference is made toFIG.14again. When code separation of the BRS is possible (step S72: Yes), mobile station10specifies the sharp beam in step S74.
For example, when mobile station10is unmoving or moving at low speed, channel hi-j(t) does not change between times t1and t2, that is, hi-j(t1)=hi-j(t2) may hold true. Thus, h1-1(*) is obtained by addition of Equations 1 and 2. Further, h1-2(*) is obtained by subtracting Equation 2 from Equation 1. It is thus possible for mobile station10to specify, from h1-1(*) or h1-2(*), channel number i-j for the BRS, or in other words, beam number i-j (sharp beam). On the other hand, when the code separation of the BRS fails (step S72: No), mobile station10specifies one beam bundle based on the BRS in step S76. For example, when mobile station10moves at high speed, channel hi-j(t) changes between times t1and t2, that is, hi-j(t1)≠hi-j(t2). Thus, it is impossible to specify an individual channel number (i-j) by the addition or subtraction between Equations 1 and 2. However, from Equation 1 or 2, h1-1(*)+h1-2(*) or h1-1(*)−h1-2(*) is obtained in mobile station10. It is thus possible to distinguish index i of the beam bundle. Therefore, it is possible for mobile station10moving at high speed to specify beam bundle (i). After step S74or step S76is performed, mobile station10ends step S24ofFIG.5. FIG.16is a flowchart illustrating an example of the operation of core apparatus100according to Embodiment 2. The flowchart illustrated inFIG.16is an example of the operation performed in step S30ofFIG.5. In step S82, based on the reception-beam specifying information received from mobile station10, core apparatus100determines whether or not a sharp beam has been specified. When the sharp beam has been specified (step S82: Yes), core apparatus100assigns the specified sharp beam to the data transmission addressed to mobile station10in step S84. On the other hand, when the sharp beam has not been specified but the beam bundle is specified (step S82: No), control section11of core apparatus100assigns the specified beam bundle to the data transmission addressed to mobile station10in step S86. After step S84or S86is performed, core apparatus100ends step S30ofFIG.5. FIG.17illustrates yet another example of the beams generated by base station40according to Embodiment 2. At time t1, mobile station10-1moving at high speed and unmoving mobile stations10-2,10-3, and10-4exist together in base station cell40. In step S86ofFIG.16, beam bundle2(beams2-1,2-2, and2-3), which is a thick beam, is assigned to the data transmission addressed to mobile station10-1moving at high speed. On the other hand, in step S84ofFIG.16, sharp beams1-1,3-1, and3-3are assigned to the data transmissions addressed to unmoving mobile stations10-2,10-3, and10-4, respectively. Thus, it is possible for mobile station10-1moving at high speed to use thick beam (beam bundle)2. When, for X=1, 2, and 3, base station40illustrated inFIG.17treats beams X-1, X-2, and X-3as thick beams, the number of beams is 3. In contrast, sharp beams1-1,3-1, and3-3are used for data transmission to unmoving mobile stations10-2,10-3, and10-4, and accordingly, the number of beams transmitted from base station40is 4 or more. According to Embodiment 2, it is possible for mobile station10-1moving at high speed to use thick beam (beam bundle)2. Therefore, for mobile station10-1moving at high speed, the beam is virtually expanded, and it is thus possible to reduce switching between the beams with the movement of mobile station10-1.
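The add-and-subtract separation of Equations 1 and 2 can be reproduced numerically. In the sketch below, the channel values are arbitrary illustrations: when the channel is unchanged between t1 and t2, the receiver recovers h1-1 and h1-2 and hence the sharp beam (step S74), while a channel that changes between the two times corrupts the per-beam estimates and leaves only bundle-level information (step S76).

import numpy as np

s = 1.0 + 0.0j  # known BRS symbol value

def separate(h11_t1, h12_t1, h11_t2, h12_t2):
    # Receive the code-multiplexed BRS at t1 (codes +s, +s) and t2 (+s, -s),
    # then apply the add/subtract separation of Equations 1 and 2.
    y1 = h11_t1 * s + h12_t1 * s  # Equation 1
    y2 = h11_t2 * s - h12_t2 * s  # Equation 2
    return (y1 + y2) / (2 * s), (y1 - y2) / (2 * s)  # estimates of h1-1, h1-2

h11, h12 = 0.9 + 0.1j, 0.2 - 0.4j

# Static terminal: the channel is identical at t1 and t2, so the true
# per-beam channels (and thus the sharp beam) are recovered exactly.
print("static:", np.round(separate(h11, h12, h11, h12), 3))

# Fast terminal: the channel changes between t1 and t2, so the per-beam
# estimates are wrong; y1 and y2 still identify bundle index i, however.
print("fast:  ", np.round(separate(h11, h12, 0.1 - 0.7j, -0.5 + 0.3j), 3))

This mirrors why, in FIG.16, core apparatus100assigns a sharp beam when one is specified and otherwise falls back to the beam bundle.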
In addition, mobile stations10-2,10-3, and10-4moving at low speed can occupy the resources of sharp beams1-1,3-1, and3-3, and thus, the communication performance (e.g., throughput) of the entire system is improved. Moreover, the use of sharp beams for mobile stations10-2,10-3, and10-4can reduce interference between adjacent beams or interference with neighboring cells. Embodiment 3 In Embodiments 1 and 2 described above, by way of example, a data signal is assigned to the time resources in which the BRS is reduced. Unlike the above embodiments, in Embodiment 3, attention is paid, for example, to a case where the number of mobile stations10moving at high speed is greater than a threshold, and an ERS is assigned to a time resource in which the BRS is reduced. FIG.18is a sequence diagram illustrating an example of the operation of mobile station10, base station40, and core apparatus100according to Embodiment 3.FIG.18is different fromFIG.5in step S31. The same portions betweenFIG.18andFIG.5are steps common between the present embodiment and Embodiment 1 or 2, and therefore, the description of such portions inFIG.18is omitted. It is more preferable that base station40transmit, at a higher frequency than the frequency for the BRS used for beam switching, the ERS used for estimation of CSI that changes more quickly than beam switching. Accordingly, in step S31, control section11of core apparatus100may determine to map (add or insert) the ERS to free time resources generated by reduction in the BRS transmission frequency achieved by the RS design determined in step S18. FIG.19illustrates an example of RS design D8at the time of generation of the beams by base station40according to Embodiment 3. The RS design in step S18illustrated inFIG.18determines RS design D7in which the BRS transmission periodicity is longer than in RS design D6. Furthermore, by performing step S31illustrated inFIG.18, RS design D8is determined in which the ERS is mapped to the time resources to which no BRS transmission is assigned in RS design D7. According to Embodiment 3, the ERS is mapped to the free time resources generated owing to the longer BRS periodicity. It is thus possible to transmit a greater number of ERSs without changing the total resource amount for the RS. Therefore, it is possible for mobile station10to estimate the CSI with a higher frequency, and thus, core apparatus100can easily follow the high-speed movement of mobile station10. Other Embodiments In one exemplary configuration illustrated inFIG.4, physical layer processing section23of base station40includes Doppler shift estimation section25. Instead of this configuration, physical layer processing section12may include Doppler shift estimation section25depending on functional allotment between physical layer processing section12and physical layer processing section23. Further, according to a configuration change, control section21of base station40may implement at least a part of the function of control section11of core apparatus100, or control section11may implement at least a part of the function of control section21. Doppler shift estimation section25may be replaced with a “moving speed estimation section” that estimates the moving speed of mobile station10. The moving speed of mobile station10may be estimated, for example, based on a history of beamforming (e.g., weighting) for mobile station10. 
Further, the estimation of the moving speed of mobile station10may be performed by mobile station10, and base station40may receive information indicating the moving speed of mobile station10estimated by mobile station10and transmit the information to core apparatus100. In this case, core apparatus100may perform the RS design based on the information indicating the moving speed of mobile station10. Core apparatus100may be referred to as a communication control apparatus, aggregate node, aggregate base station, signal processing apparatus, BaseBand processing Unit (BBU), Centralized-BBU (C-BBU), or master station. In addition, base station40may be referred to as a distributed node, extension station, Radio Unit (RU), remote installation base station, transmission point, or slave station. Hardware Configuration The block diagrams used to describe the embodiments illustrate blocks on the basis of functions. These functional blocks (constituent sections) are implemented by any combination of hardware and/or software. A means for implementing the functional blocks is not particularly limited. That is, the functional blocks may be implemented by one physically and/or logically coupled apparatus. Two or more physically and/or logically separated apparatuses may be directly and/or indirectly (for example, via wires and/or wirelessly) connected, and the plurality of apparatuses may implement the functional blocks. For example, the base station, terminal, and the like according to an embodiment of the present disclosure may function as a computer that executes processing of a radio communication method of the present disclosure.FIG.20illustrates an example of a hardware configuration of mobile station10, base station20, base station40, and core apparatus100according to an embodiment of the present disclosure. Physically, mobile station10, base station20, base station40, and core apparatus100as described above may be a computer apparatus including processor1001, memory1002, storage1003, communication apparatus1004, input apparatus1005, output apparatus1006, bus1007, and the like. Note that the term “apparatus” in the following description can be replaced with a circuit, a device, a unit, or the like. The hardware configurations of mobile station10, base station20, base station40, and core apparatus100may include one apparatus or a plurality of apparatuses illustrated in the drawings or may not include part of the apparatuses. For example, although only one processor1001is illustrated, there may be a plurality of processors. The processing may be executed by one processor, or the processing may be executed by one or more processors at the same time, in succession, or in another manner. Note that processor1001may be implemented by one or more chips. The functions in base station20, base station40, and core apparatus100are implemented by predetermined software (program) loaded into hardware, such as processor1001, memory1002, and the like, according to which processor1001performs the arithmetic and controls communication performed by communication apparatus1004or reading and/or writing of data in memory1002and storage1003. Processor1001operates an operating system to entirely control the computer, for example. Processor1001may be composed of a central processing unit (CPU) including an interface with peripheral apparatuses, control apparatus, arithmetic apparatus, register, and the like. 
For example, control section11, physical layer processing section12, control section21, physical layer processing section23, and the like as described above may be implemented by processor1001. Processor1001reads out a program (program code), a software module, or data from storage1003and/or communication apparatus1004to memory1002and executes various types of processing according to the read-out program or the like. As the program, a program for causing the computer to perform at least a part of the operation described in the embodiments is used. For example, at least part of the functional blocks constituting base station40and core apparatus100may be implemented by a control program stored in memory1002and operated by processor1001, and the other functional blocks may also be implemented in the same way. While it has been described that the various types of processing as described above are performed by one processor1001, the various types of processing may be performed by two or more processors1001at the same time or in succession. Processor1001may be implemented by one or more chips. Note that the program may be transmitted from a network through a telecommunication line. Memory1002is a computer-readable recording medium and may be composed of, for example, at least one of a Read Only Memory (ROM), an Erasable Programmable ROM (EPROM), an Electrically Erasable Programmable ROM (EEPROM), and a Random Access Memory (RAM). Memory1002may be called a register, a cache, a main memory (main storage apparatus), or the like. Memory1002can save a program (program code), a software module, and the like that can be executed to carry out the radio communication method according to an embodiment of the present disclosure. Storage1003is a computer-readable recording medium and may be composed of, for example, at least one of an optical disk such as a Compact Disc ROM (CD-ROM), a hard disk drive, a flexible disk, a magneto-optical disk (for example, a compact disc, a digital versatile disc, or a Blu-ray (registered trademark) disc), a smart card, a flash memory (for example, a card, a stick, or a key drive), a floppy (registered trademark) disk, and a magnetic strip. Storage1003may also be called an auxiliary storage apparatus. The storage medium as described above may be, for example, a database, a server, or other appropriate media including memory1002and/or storage1003. Communication apparatus1004is hardware (transmission and reception device) for communication between computers through a wired and/or wireless network and is also called, for example, a network device, a network controller, a network card, or a communication module. For example, communication section22, radio processing section24, and the like as described above may be implemented by communication apparatus1004. Input apparatus1005is an input device (for example, a keyboard, a mouse, a microphone, a switch, a button, or a sensor) that receives input from the outside. Output apparatus1006is an output device (for example, a display, a speaker, or an LED lamp) which makes outputs to the outside. Note that input apparatus1005and output apparatus1006may be integrated (for example, a touch panel). The apparatuses, such as processor1001and memory1002, are connected by bus1007for communication of information. Bus1007may be composed of a single bus or of buses that differ among the apparatuses.
Furthermore, base station20, base station40, and core apparatus100may include hardware, such as a microprocessor, a digital signal processor (DSP), an Application Specific Integrated Circuit (ASIC), a Programmable Logic Device (PLD), and a Field Programmable Gate Array (FPGA), and the hardware may implement part or all of the functional blocks. For example, processor1001may be implemented by at least one of these pieces of hardware. Notification and Signaling of Information The notification of information is not limited to the aspects or embodiments described in the present disclosure, and the information may be notified by another method. For example, the notification of information may be carried out by one or a combination of physical layer signaling (for example, Downlink Control Information (DCI) and Uplink Control Information (UCI)), upper layer signaling (for example, Radio Resource Control (RRC) signaling, Medium Access Control (MAC) signaling, notification information (Master Information Block (MIB), and System Information Block (SIB))), and other signals. The RRC signaling may be called an RRC message and may be, for example, an RRC connection setup message, an RRC connection reconfiguration message, or the like. Applied System The aspects and embodiments described in the present specification may be applied to at least one of a system using Long Term Evolution (LTE), LTE-Advanced (LTE-A), SUPER 3G, IMT-Advanced, 4th generation mobile communication system (4G), 5th generation mobile communication system (5G), Future Radio Access (FRA), New Radio (NR), W-CDMA (registered trademark), GSM (registered trademark), CDMA2000, Ultra Mobile Broadband (UMB), IEEE 802.11 (Wi-Fi (registered trademark)), IEEE 802.16 (WiMAX (registered trademark)), IEEE 802.20, Ultra-WideBand (UWB), Bluetooth (registered trademark), or other appropriate systems and a next-generation system extended based on the above systems. Additionally or alternatively, a combination of two or more of the systems (e.g., a combination of at least LTE or LTE-A and 5G) may be applied. Processing Procedure and the like The orders of the processing procedures, the sequences, the flow charts, and the like of the aspects and embodiments described in the present disclosure may be changed as long as there is no contradiction. For example, elements of various steps are presented in exemplary orders in the methods described in the present disclosure, and the methods are not limited to the presented specific orders. Operation of Base Station Specific operations which are described in the present disclosure as being performed by the base station may sometimes be performed by an upper node depending on the situation. Various operations performed for communication with a terminal in a network constituted by one network node or a plurality of network nodes including a base station can be obviously performed by at least one of the base station and a network node other than the base station (examples include, but are not limited to, Mobility Management Entity (MME) or Serving Gateway (S-GW)). Although there is one network node in addition to the base station in the case illustrated above, a plurality of other network nodes may be combined (for example, MME and S-GW). Direction of Input and Output The information or the like (see the item of “Information and Signals”) can be output from a higher layer (or a lower layer) to a lower layer (or a higher layer).
The information, the signals, and the like may be input and output through a plurality of network nodes. Handling of Input and Output Information and the like The input and output information and the like may be saved in a specific place (for example, memory) or may be managed using a management table. The input and output information and the like can be overwritten, updated, or additionally written. The output information and the like may be deleted. The input information and the like may be transmitted to another apparatus. Determination Method The determination may be made based on a value expressed by one bit (0 or 1), based on a Boolean value (true or false), or based on comparison with a numerical value (for example, comparison with a predetermined value). Variations and the like of Aspects The aspects and embodiments described in the present disclosure may be independently used, may be used in combination, or may be switched and used during execution. Furthermore, notification of predetermined information (for example, notification indicating “it is X”) is not limited to explicit notification, and may be performed implicitly (for example, by not notifying the predetermined information). While the present disclosure has been described in detail, it is obvious to those skilled in the art that the present disclosure is not limited to the embodiments described in the present disclosure. Modifications and variations of the aspects of the present disclosure can be made without departing from the spirit and the scope of the present disclosure defined by the description of the appended claims. Therefore, the description of the present disclosure is intended for exemplary description and does not limit the present disclosure in any sense. Software Regardless of whether the software is called software, firmware, middleware, microcode, or a hardware description language, or by another name, the software should be broadly interpreted to mean an instruction, an instruction set, a code, a code segment, a program code, a program, a subprogram, a software module, an application, a software application, a software package, a routine, a subroutine, an object, an executable file, an execution thread, a procedure, a function, and the like. The software, the instruction, the information, and the like may be transmitted and received through a transmission medium. For example, when the software is transmitted from a website, a server, or another remote source by using at least one of a wired technique (e.g., a coaxial cable, an optical fiber cable, a twisted pair, and a digital subscriber line (DSL)) and a radio technique (e.g., an infrared ray and a microwave), the at least one of the wired technique and the radio technique is included in the definition of the transmission medium. Information and Signals The information, the signals, and the like described in the present disclosure may be expressed by using any of various different techniques. For example, data, instructions, commands, information, signals, bits, symbols, chips, and the like that may be mentioned throughout the entire description may be expressed by one or an arbitrary combination of voltage, current, electromagnetic waves, magnetic fields, magnetic particles, optical fields, and photons. Note that the terms described in the present disclosure and the terms necessary to understand the present disclosure may be replaced with terms with the same or similar meaning.
For example, at least one of the channel and the symbol may be a signal (signaling). The signal may be a message. The component carrier (CC) may be called a carrier frequency, a cell, a frequency carrier, or the like. "System" and "Network" The terms "system" and "network" used in the present disclosure can be interchangeably used. Names of Parameters and Channels The information, the parameters, and the like described in the present disclosure may be expressed using absolute values, using values relative to predetermined values, or using other corresponding information. For example, radio resources may be indicated by indices. The names used for the parameters are not limitative in any respect. Furthermore, the numerical formulas and the like using the parameters may be different from the ones explicitly disclosed in the present disclosure. Various channels (for example, PUCCH and PDCCH) and information elements can be identified by any suitable names, and various names assigned to these various channels and information elements are not limitative in any respect. Base Station The terms "Base Station (BS)," "radio base station," "fixed station," "NodeB," "eNodeB (eNB)," "gNodeB (gNB)," "access point," "transmission point," "reception point," "transmission/reception point," "cell," "sector," "cell group," "carrier," and "component carrier" may be used interchangeably in the present disclosure. The base station may be called a macro cell, a small cell, a femto cell, or a pico cell. The base station can accommodate one cell or a plurality of (for example, three) cells. When the base station accommodates a plurality of cells, the entire coverage area of the base station can be divided into a plurality of smaller areas, and each of the smaller areas can provide a communication service based on a base station subsystem (for example, a small base station for an indoor remote radio head (RRH)). The term "cell" or "sector" denotes part or all of the coverage area of at least one of the base station and the base station subsystem that performs the communication service in this coverage area. Mobile Station The terms "Mobile Station (MS)," "user terminal," "User Equipment (UE)," and "terminal" may be used interchangeably in the present disclosure. The mobile station may be called, by those skilled in the art, a subscriber station, a mobile unit, a subscriber unit, a wireless unit, a remote unit, a mobile device, a wireless device, a wireless communication device, a remote device, a mobile subscriber station, an access terminal, a mobile terminal, a wireless terminal, a remote terminal, a handset, a user agent, a mobile client, a client, or by some other appropriate terms. Base Station/Mobile Station At least one of the base station and the mobile station may be called a transmission apparatus, a reception apparatus, a communication apparatus, or the like. Note that, at least one of the base station and the mobile station may be a device mounted in a mobile entity, the mobile entity itself, or the like. The mobile entity may be a vehicle (e.g., an automobile or an airplane), an unmanned mobile entity (e.g., a drone or an autonomous vehicle), or a robot (a manned-type or unmanned-type robot). Note that, at least one of the base station and the mobile station also includes an apparatus that does not necessarily move during communication operation. For example, at least one of the base station and the mobile station may be Internet-of-Things (IoT) equipment such as a sensor.
The base station in the present disclosure may also be replaced with the user terminal. For example, the aspects and the embodiments of the present disclosure may find application in a configuration that results from replacing communication between the base station and the user terminal with communication between multiple user terminals (such communication may, e.g., be referred to as device-to-device (D2D), vehicle-to-everything (V2X), or the like). In this case, user equipment 10 may be configured to have the functions that the base station described above has. The wordings "uplink" and "downlink" may be replaced with a corresponding wording for inter-equipment communication (for example, "side"). For example, an uplink channel, a downlink channel, and the like may be replaced with a side channel. Similarly, the user terminal in the present disclosure may be replaced with the base station. In this case, the base station is configured to have the functions that user equipment 10 described above has. Meaning and Interpretation of Terms As used herein, the term "determining" may encompass a wide variety of actions. For example, "determining" may be regarded as judging, calculating, computing, processing, deriving, investigating, looking up, searching (or search or inquiry) (e.g., looking up in a table, a database, or another data structure), ascertaining, and the like. Furthermore, "determining" may be regarded as receiving (for example, receiving information), transmitting (for example, transmitting information), inputting, outputting, accessing (for example, accessing data in a memory), and the like. Also, "determining" may be regarded as resolving, selecting, choosing, establishing, comparing, and the like. That is, "determining" may be regarded as a certain type of action related to determining. Also, "determining" may be replaced with "assuming," "expecting," "considering," and the like. The terms "connected" and "coupled" as well as any modifications of the terms mean any direct or indirect connection and coupling between two or more elements, and the terms can include cases in which one or more intermediate elements exist between two "connected" or "coupled" elements. The coupling or the connection between elements may be physical or logical coupling or connection or may be a combination of physical and logical coupling or connection. For example, "connected" may be replaced with "accessed." When the terms are used in the present disclosure, two elements can be considered to be "connected" or "coupled" to each other using at least one of one or more electrical wires, cables, and printed electrical connections or using electromagnetic energy with a wavelength of a radio frequency domain, a microwave domain, an optical (both visible and invisible) domain, or the like, which are non-limiting and non-inclusive examples. Reference Signal The reference signal can also be abbreviated as an RS and may also be called a pilot depending on the applied standard. Meaning of "based on" The description "based on" used in the present disclosure does not mean "based only on," unless otherwise specified. In other words, the description "based on" means both "based only on" and "based at least on." Terms "first" and "second" Any reference to elements by using the terms "first," "second," and the like that are used in the present disclosure does not generally limit the quantities of or the order of these elements.
The terms can be used as a convenient method of distinguishing between two or more elements in the present disclosure. Therefore, reference to first and second elements does not mean that only two elements can be employed, or that the first element has to precede the second element somehow. "Means" The "means" in the configuration of each apparatus described above may be replaced with "section," "circuit," "device," or the like. Open-ended Format In a case where the terms "include," "including," and their modifications are used in the present disclosure, these terms are intended to be inclusive like the term "comprising." Further, the term "or" used in the present disclosure is not intended to be an exclusive or. Time Units such as a TTI, Frequency Units such as an RB, and a Radio Frame Configuration The radio frame may be constituted by one frame or a plurality of frames in the time domain. The one frame or each of the plurality of frames may be called a subframe in the time domain. The subframe may be further constituted by one slot or a plurality of slots in the time domain. The subframe may have a fixed time length (e.g., 1 ms) independent of numerology. The numerology may be a communication parameter that is applied to at least one of transmission and reception of a certain signal or channel. The numerology, for example, indicates at least one of SubCarrier Spacing (SCS), a bandwidth, a symbol length, a cyclic prefix length, a Transmission Time Interval (TTI), the number of symbols per TTI, a radio frame configuration, specific filtering processing that is performed by a transmission and reception apparatus in the frequency domain, specific windowing processing that is performed by the transmission and reception apparatus in the time domain, and the like. The slot may be constituted by one symbol or a plurality of symbols (e.g., an Orthogonal Frequency Division Multiplexing (OFDM) symbol, a Single Carrier-Frequency Division Multiple Access (SC-FDMA) symbol, or the like) in the time domain. The slot may also be a time unit based on the numerology. The slot may include a plurality of mini-slots. Each of the mini-slots may be constituted by one or more symbols in the time domain. Furthermore, the mini-slot may be referred to as a subslot. The mini-slot may be constituted by a smaller number of symbols than the slot. A PDSCH (or a PUSCH) that is transmitted in a time unit that is greater than the mini-slot may be referred to as a PDSCH (or PUSCH) mapping type A. A PDSCH (or PUSCH) that is transmitted using the mini-slot may be referred to as a PDSCH (or PUSCH) mapping type B. The radio frame, the subframe, the slot, the mini-slot, and the symbol indicate time units in transmitting signals. The radio frame, the subframe, the slot, the mini-slot, and the symbol may be called by other corresponding names. For example, one subframe, a plurality of continuous subframes, one slot, or one mini-slot may be called a Transmission Time Interval (TTI). That is, at least one of the subframe and the TTI may be a subframe (1 ms) in the existing LTE, a duration (for example, 1 to 13 symbols) that is shorter than 1 ms, or a duration that is longer than 1 ms. Note that a unit that represents the TTI may be referred to as a slot, a mini-slot, or the like instead of a subframe. Here, the TTI, for example, refers to a minimum time unit for scheduling in radio communication.
For example, in an LTE system, the base station performs scheduling for allocating a radio resource (a frequency bandwidth, a transmit power, and the like that are used in each user terminal) on a TTI-by-TTI basis to each user terminal. Note that the definition of the TTI is not limited to this. The TTI may be a time unit for transmitting a channel-coded data packet (a transport block), a code block, or a codeword, or may be a unit for processing such as scheduling and link adaptation. Note that, when the TTI is assigned, a time section (for example, the number of symbols) to which the transport block, the code block, the codeword, or the like is actually mapped may be shorter than the TTI. Note that, in a case where one slot or one mini-slot is referred to as the TTI, one or more TTIs (that is, one or more slots, or one or more mini-slots) may be a minimum time unit for the scheduling. Furthermore, the number of slots (the number of mini-slots) that make up the minimum time unit for the scheduling may be controlled. A TTI that has a time length of 1 ms may be referred to as a usual TTI (a TTI in LTE Rel. 8 to LTE Rel. 12), a normal TTI, a long TTI, a usual subframe, a normal subframe, a long subframe, a slot, or the like. A TTI that is shorter than the usual TTI may be referred to as a shortened TTI, a short TTI, a partial TTI (or a fractional TTI), a shortened subframe, a short subframe, a mini-slot, a subslot, a slot, or the like. Note that the long TTI (for example, the usual TTI, the subframe, or the like) may be replaced with a TTI that has a time length which exceeds 1 ms, and the short TTI (for example, the shortened TTI or the like) may be replaced with a TTI that has a TTI length which is less than the TTI length of the long TTI and is equal to or longer than 1 ms. A resource block (RB) is a resource allocation unit in the time domain and the frequency domain, and may include one or more contiguous subcarriers in the frequency domain. The number of subcarriers that are included in the RB may be identical regardless of the numerology, and may be 12, for example. The number of subcarriers that are included in the RB may be determined based on the numerology. In addition, the RB may include one symbol or a plurality of symbols in the time domain, and may have a length of one slot, one mini-slot, one subframe, or one TTI. One TTI and one subframe may each be constituted by one resource block or a plurality of resource blocks. Note that one or more RBs may be referred to as a Physical Resource Block (PRB), a Sub-Carrier Group (SCG), a Resource Element Group (REG), a PRB pair, an RB pair, or the like. In addition, the resource block may be constituted by one or more Resource Elements (REs). For example, one RE may be a radio resource region of one subcarrier and one symbol. A bandwidth part (BWP) (which may be referred to as a partial bandwidth or the like) may represent a subset of contiguous common resource blocks (RBs) for certain numerology in a certain carrier. Here, the common RBs may be identified by RB indices that use a common reference point of the carrier as a reference. The PRB may be defined by a certain BWP and may be numbered within the BWP. The BWP may include a UL BWP and a DL BWP. A UE may be configured with one or more BWPs within one carrier. At least one of the configured BWPs may be active, and the UE does not have to assume transmission/reception of a predetermined signal or channel outside the active BWP.
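As a rough numerical illustration of how these numerology-dependent time units fit together, the following sketch assumes NR-style scaling (a subcarrier spacing of 15 × 2^mu kHz and 14 symbols per slot with a normal cyclic prefix); these concrete values are assumptions made for illustration only, since the description above deliberately leaves them configurable.

```python
# Illustrative only: the scaling rule and all constants here are assumptions
# (NR-style numerology); the description above leaves these values open.
SYMBOLS_PER_SLOT = 14   # assumed: normal cyclic prefix
SUBFRAME_MS = 1.0       # the subframe has a fixed 1 ms length, as stated above

for mu in range(4):                                 # numerology index (assumed)
    scs_khz = 15 * 2**mu                            # subcarrier spacing
    slots_per_subframe = 2**mu                      # slots packed into 1 ms
    slot_ms = SUBFRAME_MS / slots_per_subframe
    symbol_us = slot_ms * 1000 / SYMBOLS_PER_SLOT   # approximate; CP ignored
    print(f"mu={mu}: SCS={scs_khz} kHz, {slots_per_subframe} slot(s)/subframe, "
          f"slot={slot_ms} ms, symbol~{symbol_us:.1f} us")
```

Under this assumed scaling, the slot shrinks as the subcarrier spacing grows while the subframe stays at 1 ms, which is one way a TTI shorter than 1 ms, such as the shortened TTI mentioned above, can arise.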
Note that, "cell," "carrier," and the like in the present disclosure may be replaced with "BWP." Structures of the radio frame, the subframe, the slot, the mini-slot, the symbol, and the like are described merely as examples. For example, the configuration such as the number of subframes that are included in the radio frame, the number of slots per subframe or radio frame, the number of mini-slots that are included within the slot, the numbers of symbols and RBs that are included in the slot or the mini-slot, the number of subcarriers that are included in the RB, the number of symbols within the TTI, the symbol length, the Cyclic Prefix (CP) length, and the like can be changed in various ways. Maximum Transmit Power The "maximum transmit power" described in the present disclosure may mean a maximum value of the transmit power, the nominal UE maximum transmit power, or the rated UE maximum transmit power. Article In a case where articles, such as "a," "an," and "the" in English, for example, are added in the present disclosure by translation, nouns following these articles may have the same meaning as used in the plural. "Different" In the present disclosure, the expression "A and B are different" may mean that "A and B are different from each other." Note that the expression may also mean that "A and B are different from C." The expressions "separated" and "coupled" may also be interpreted in the same manner as the expression "A and B are different." The present patent application claims the benefit of priority based on Japanese Patent Application No. 2019-085355 filed on April 26, 2019, and the entire content of Japanese Patent Application No. 2019-085355 is hereby incorporated by reference. INDUSTRIAL APPLICABILITY One aspect of the present disclosure is useful for mobile communication systems. REFERENCE SIGNS LIST 10 Mobile station 11 Control section 12 Physical layer processing section 13 Communication section 20 Base station 21 Control section 22 Communication section 23 Physical layer processing section 24 Radio processing section 25 Doppler shift estimation section 26 BF section 30 Macro cell 40, 40-1, 40-2, 40-3, 40-4 Base station cell 50, 50-1, 50-2, 50-3, 50-4 Small or semi-macro cell 100 Core apparatus
74,797
11863262
DESCRIPTION OF EMBODIMENTS The following describes technical solutions of this application with reference to the accompanying drawings. The technical solutions of embodiments of this application may be applied to various communications systems, such as a global system for mobile communications (GSM) system, a code division multiple access (CDMA) system, a wideband code division multiple access (WCDMA) system, a general packet radio service (GPRS) system, a long term evolution (LTE) system, an LTE frequency division duplex (FDD) system, an LTE time division duplex (TDD) system, a universal mobile telecommunications system (UMTS), a worldwide interoperability for microwave access (WiMAX) communications system, or a 5th generation (5G) system or new radio (NR) system. To facilitate understanding of the embodiments of this application, a communications system shown in FIG. 1 is first used as an example to describe in detail a communications system applicable to the embodiments of this application. FIG. 1 is a schematic diagram of a communications system 100 to which a method for indicating vectors used to construct a precoding vector according to an embodiment of this application is applicable. As shown in FIG. 1, the communications system 100 may include at least one network device, for example, a network device 110 shown in FIG. 1. The communications system 100 may further include at least one terminal device, for example, a terminal device 120 shown in FIG. 1. The network device 110 and the terminal device 120 may communicate with each other through a wireless link. Each communications device, such as the network device 110 or the terminal device 120, may be configured with a plurality of antennas. For each communications device in the communications system 100, the plurality of configured antennas may include at least one transmit antenna configured to transmit a signal and at least one receive antenna configured to receive a signal. Therefore, all communications devices, such as the network device 110 and the terminal device 120, in the communications system 100 may communicate with each other by using a multiple-antenna technology. It should be understood that the network device in the communications system may be any device having a wireless transceiver function. The network device includes but is not limited to an evolved NodeB (eNB), a radio network controller (RNC), a NodeB (NB), a base station controller (BSC), a base transceiver station (BTS), a home base station (for example, a home evolved NodeB or a home NodeB, HNB), a baseband unit (BBU), an access point (AP) in a wireless fidelity (WiFi) system, a wireless relay node, a wireless backhaul node, a transmission point (TP), a transmission reception point (TRP), or the like; or may be a gNB or a transmission point (TRP or TP) in a 5G system such as an NR system, or one antenna panel or one group of antenna panels (including a plurality of antenna panels) of a base station in a 5G system; or may be a network node that constitutes a gNB or a transmission point, for example, a baseband unit (BBU) or a distributed unit (DU). In some deployments, the gNB may include a centralized unit (CU) and a DU. The gNB may further include a radio frequency unit (RU). The CU implements some functions of the gNB, and the DU implements some functions of the gNB.
For example, the CU implements functions of a radio resource control (RRC) layer and a packet data convergence protocol (PDCP) layer, and the DU implements functions of a radio link control (RLC) layer, a media access control (MAC) layer, and a physical (PHY) layer. Information at the RRC layer is eventually converted into information at the PHY layer, or is converted from information at the PHY layer. Therefore, in this architecture, higher layer signaling such as RRC layer signaling may also be considered as being sent by the DU or sent by the DU and the RU. It may be understood that the network device may be a CU node, a DU node, or a device including a CU node and a DU node. In addition, the CU may be a network device in an access network (radio access network, RAN), or may be a network device in a core network (CN). This is not limited in this application. It should be further understood that the terminal device in the wireless communications system may also be referred to as user equipment (UE), an access terminal, a subscriber unit, a subscriber station, a mobile station, a remote station, a remote terminal, a mobile device, a user terminal, a terminal, a wireless communications device, a user agent, or a user apparatus. The terminal device in the embodiments of this application may be a mobile phone, a tablet computer (pad), a computer having a wireless transceiver function, a virtual reality (VR) terminal device, an augmented reality (AR) terminal device, a wireless terminal in industrial control, a wireless terminal in self-driving, a wireless terminal in telemedicine (remote medical), a wireless terminal in a smart grid, a wireless terminal in transportation safety, a wireless terminal in a smart city, a wireless terminal in a smart home, or the like. An application scenario is not limited in the embodiments of this application. It should be further understood that FIG. 1 is merely a simplified schematic diagram used as an example for ease of understanding. The communications system 100 may further include another network device or another terminal device, which is not shown in FIG. 1. To facilitate understanding of the embodiments of this application, the following briefly describes a processing process of a downlink signal at a physical layer before the downlink signal is sent. It should be understood that the processing process of the downlink signal described below may be performed by a network device, or may be performed by a chip disposed in a network device. For ease of description, the network device and the chip disposed in the network device are collectively referred to as a network device below. The network device may process a codeword on a physical channel. The codeword may be a coded bit on which coding (for example, including channel coding) is performed. Scrambling is performed on the codeword, to generate a scrambled bit. Modulation mapping is performed on the scrambled bit, to obtain a modulation symbol. The modulation symbol is mapped to a plurality of layers, which are referred to as transport layers, through layer mapping. Precoding is performed on the modulation symbol obtained through the layer mapping, to obtain a precoded signal. The precoded signal is mapped to a plurality of resource elements (REs) through RE mapping. These REs are then transmitted through an antenna port after orthogonal frequency division multiplexing (OFDM) modulation.
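The following is a minimal sketch of the processing chain just described, covering scrambling, modulation mapping, layer mapping, and precoding; the dimensions, the scrambling sequence, and the precoding matrix are hypothetical placeholders, and channel coding, RE mapping, and OFDM modulation are omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions, for illustration only.
n_bits, n_layers, n_ports = 64, 2, 4

bits = rng.integers(0, 2, n_bits)        # coded bits of one codeword (toy)
scr = rng.integers(0, 2, n_bits)         # placeholder scrambling sequence
scrambled = bits ^ scr                   # scrambling: bitwise XOR

# Modulation mapping (QPSK assumed): 2 scrambled bits -> 1 complex symbol.
pairs = scrambled.reshape(-1, 2)
symbols = ((1 - 2 * pairs[:, 0]) + 1j * (1 - 2 * pairs[:, 1])) / np.sqrt(2)

layers = symbols.reshape(n_layers, -1, order="F")   # layer mapping

# Toy precoding matrix W (ports x layers); in practice it would be chosen
# as described in the remainder of this application.
W = rng.standard_normal((n_ports, n_layers)) + 1j * rng.standard_normal((n_ports, n_layers))
W /= np.linalg.norm(W)
precoded = W @ layers                    # precoding: one row per antenna port

print(precoded.shape)                    # (4, 16); RE mapping/OFDM would follow
```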
It should be understood that the processing process of the downlink signal described above is merely an example for description, and should not constitute any limitation on this application. For a specific processing process of the downlink signal, refer to the current technology. For brevity, detailed descriptions of the specific process are omitted herein. To facilitate understanding of the embodiments of this application, the following first briefly describes terms in the embodiments of this application. 1. Precoding technology: When knowing a channel state, a transmitting device (for example, a network device) processes a to-be-sent signal by using a precoding matrix that matches a channel resource, so that a precoded to-be-sent signal adapts to a channel, and complexity of eliminating inter-channel impact by a receiving device (for example, a terminal device) is reduced. Therefore, after precoding processing is performed on the to-be-sent signal, received signal quality (for example, a signal to interference plus noise ratio (SINR)) is improved. In this way, the transmitting device and a plurality of receiving devices can perform transmission on a same time-frequency resource by using the precoding technology. In other words, multi-user multiple-input multiple-output (MU-MIMO) is implemented. It should be understood that the related descriptions about the precoding technology are merely an example for ease of understanding, and are not intended to limit the protection scope of the embodiments of this application. In a specific implementation process, the transmitting device may alternatively perform precoding in another manner. For example, when channel information (for example, but not limited to, a channel matrix) cannot be learned of, precoding is performed by using a preset precoding matrix or in a preset weighted processing manner. For brevity, specific content of the precoding technology is not described in detail in this specification. 2. Channel state information (CSI) report: The CSI report is information that is used to describe a channel attribute of a communications link and that is reported by a receiving device (for example, a terminal device) to a transmitting device (for example, a network device) in a wireless communications system. The CSI report may also be referred to as CSI for short. For example, the CSI report may include, but is not limited to, a precoding matrix indicator (PMI), a rank indication (RI), a channel quality indicator (CQI), a channel state information reference signal (CSI-RS) resource indicator (CRI), a layer indicator (LI), and the like. It should be understood that the specific content listed above and included in the CSI report is merely an example for description, and should not constitute any limitation on this application. The CSI report may include one or more of the items listed above, or may include other information than those listed above, where the other information is used to represent the CSI. This is not limited in this application. For example, the terminal device reports the CSI to the network device. The terminal device may report one or more CSI reports in one time unit (for example, a slot), and each CSI report may correspond to one CSI reporting configuration condition. For example, the CSI reporting configuration condition may be determined based on a CSI reporting setting.
The CSI reporting setting may be used to indicate a time domain behavior, a bandwidth, a format corresponding to a report quantity, and the like that are of CSI reporting. The time domain behavior includes, for example, periodic, semi-persistent, and aperiodic. The terminal device may generate one CSI report based on one CSI reporting setting. Reporting of one or more CSI reports by the terminal device in one time unit, such as one slot, may be referred to as one-time reporting of the CSI reports. In the embodiments of this application, when the terminal device generates the CSI report, information used to indicate a precoding vector may be divided into two parts. For example, the CSI report may include a first part and a second part. The first part and the second part may be independently encoded. A payload size of the first part may be predefined, and a payload size of the second part may be determined based on information carried in the first part. The network device may decode the first part based on the predefined payload size of the first part, to obtain the information carried in the first part. The network device may determine the payload size of the second part based on the information obtained from the first part, and then decode the second part to obtain information carried in the second part. It should be understood that functions of the first part and the second part may be similar to functions of a part 1 and a part 2 of CSI that are defined in Release 15 (R15) of the NR protocol TS 38.214. It should be further understood that, because the embodiments of this application mainly relate to reporting of the PMI, content that is of the first part and the second part of the CSI report and that is listed in the following embodiments relates only to related information of the PMI, and does not relate to other information. However, it should be understood that this should not constitute any limitation on this application. In addition to the information that is included or indicated in the first part and the second part of the CSI report and that is listed in the following embodiments, the first part of the CSI report may further include one or more of the RI, the CQI, and the LI, or may further include other information used to predefine feedback overheads, and the second part of the CSI report may further include other information. This is not limited in this application. 3. Precoding matrix and precoding matrix indicator (PMI): The PMI may be carried in a CSI report to indicate a precoding matrix. The precoding matrix may be, for example, a precoding matrix that corresponds to each frequency domain unit and that is determined by the terminal device based on a channel matrix of the frequency domain unit (for example, a subband). The channel matrix may be determined by the terminal device based on channel reciprocity, through channel estimation, or in another manner. However, it should be understood that a specific method for determining the channel matrix by the terminal device is not limited to the foregoing descriptions. For a specific implementation, refer to the current technology. For brevity, details are not listed herein. The precoding matrix may be obtained by performing singular value decomposition (SVD) on the channel matrix or on a covariance matrix of the channel matrix, or may be obtained by performing eigenvalue decomposition (EVD) on a covariance matrix of the channel matrix.
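As a small sketch of the SVD and EVD routes just mentioned, the code below uses a randomly generated toy channel; the antenna counts and the rank are assumptions, and the check at the end only illustrates that the leading right singular vectors of the channel matrix and the leading eigenvectors of its covariance matrix span the same subspace.

```python
import numpy as np

rng = np.random.default_rng(1)
n_rx, n_tx, rank = 2, 8, 2    # hypothetical antenna counts and rank

# Toy channel matrix for one frequency domain unit (assumed already known;
# in practice it would come from channel estimation or channel reciprocity).
H = rng.standard_normal((n_rx, n_tx)) + 1j * rng.standard_normal((n_rx, n_tx))

# SVD on the channel matrix: H = U diag(s) V^H; precode with leading columns of V.
U, s, Vh = np.linalg.svd(H)
W_svd = Vh.conj().T[:, :rank]

# EVD on the covariance matrix H^H H (eigh sorts eigenvalues in ascending order).
eigval, eigvec = np.linalg.eigh(H.conj().T @ H)
W_evd = eigvec[:, ::-1][:, :rank]   # reorder to descending eigenvalues

# The two precoders coincide up to a per-column phase, so |W_svd^H W_evd| ~ I.
print(np.allclose(np.abs(W_svd.conj().T @ W_evd), np.eye(rank), atol=1e-6))
```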
It should be understood that the foregoing listed methods for determining the precoding matrix are merely examples, and should not constitute any limitation on this application. For a method for determining the precoding matrix, refer to the current technology. For brevity, details are not listed herein. It should be noted that, according to the method for indicating vectors used to construct a precoding vector provided in the embodiments of this application, the network device may determine, based on a feedback of the terminal device, a space-frequency vector pair used to construct a precoding vector, and further determine the precoding matrix corresponding to each frequency domain unit. The precoding matrix may be directly used for downlink data transmission. Some beamforming methods, including, for example, zero-forcing (ZF), regularized zero-forcing (RZF), minimum mean-squared error (MMSE), and maximizing a signal-to-leakage-and-noise ratio (SLNR), may also be used to obtain a precoding matrix finally used for downlink data transmission. This is not limited in this application. Unless otherwise specified, all precoding matrices mentioned below may be determined according to the method provided in this application. 4. Precoding vector: A precoding matrix may include one or more vectors, for example, column vectors. One precoding matrix may be used to determine one or more precoding vectors. When a quantity of transport layers is 1 and a quantity of polarization directions of a transmit antenna is also 1, a precoding vector may be a precoding matrix. When a quantity of transport layers is greater than 1 and a quantity of polarization directions of a transmit antenna is 1, a precoding vector may be a component of a precoding matrix at a transport layer. When a quantity of transport layers is 1 and a quantity of polarization directions of a transmit antenna is greater than 1, a precoding vector may be a component of a precoding matrix in a polarization direction. When a quantity of transport layers is greater than 1 and a quantity of polarization directions of a transmit antenna is also greater than 1, a precoding vector may be a component of a precoding matrix at a transport layer in a polarization direction. It should be understood that the precoding vector may alternatively be determined based on a vector in the precoding matrix, for example, obtained by performing mathematical transformation on the vector in the precoding matrix. A mathematical transformation relationship between the precoding matrix and the precoding vector is not limited in this application. 5. Antenna port: The antenna port is referred to as a port for short. The antenna port may be understood as a virtual antenna identified by a receiving device, or a transmit antenna that can be distinguished in space. One antenna port may be configured for each virtual antenna, each virtual antenna may be a weighted combination of a plurality of physical antennas, and each antenna port may correspond to one reference signal port. Therefore, each antenna port may be referred to as a port of one reference signal. In the embodiments of this application, the antenna port may refer to an actually independent transmitting unit (transceiver unit, TxRU). 6. Spatial domain vector: The spatial domain vector is also referred to as a beam vector. Each element in the spatial domain vector may represent a weight of each antenna port.
Based on the weights that are of the antenna ports and that are represented by the elements in the spatial domain vector, linear superposition is performed on signals of the antenna ports, so that an area in which a signal is relatively strong may be formed in a direction of space. For ease of description below, it is assumed that the spatial domain vector is denoted as u_s. A length of the spatial domain vector u_s may be a quantity N_s of transmit antenna ports in a polarization direction, where N_s ≥ 1 and N_s is an integer. The spatial domain vector may be, for example, a column vector or a row vector whose length is N_s. This is not limited in this application. Optionally, the spatial domain vector is obtained from a discrete Fourier transform (DFT) matrix. Each column vector in the DFT matrix may be referred to as a DFT vector. In other words, the spatial domain vector may be a DFT vector. The spatial domain vector may be, for example, a DFT vector defined in a type II codebook in Release 15 (R15) of the NR protocol TS 38.214. 7. Spatial domain vector set: The spatial domain vector set may include a plurality of spatial domain vectors having different lengths, to correspond to different quantities of transmit antenna ports. In the embodiments of this application, a length of a spatial domain vector is N_s. Therefore, a length of each spatial domain vector in a spatial domain vector set to which a spatial domain vector reported by the terminal device belongs is N_s. In a possible design, the spatial domain vector set may include N_s spatial domain vectors, and the N_s spatial domain vectors may be orthogonal to each other. Each spatial domain vector in the spatial domain vector set may be obtained from a two-dimensional (2D)-DFT matrix. 2D may represent two different directions, for example, a horizontal direction and a vertical direction. The N_s spatial domain vectors may be denoted as, for example, b_s^1, b_s^2, . . . , and b_s^{N_s}. The N_s spatial domain vectors may be used to construct a matrix U_s, where U_s = [b_s^1, b_s^2, . . . , b_s^{N_s}]. In another possible design, the spatial domain vector set may be extended to O_s × N_s spatial domain vectors by using an oversampling factor O_s. In this case, the spatial domain vector set may include O_s subsets, and each subset may include N_s spatial domain vectors. The N_s spatial domain vectors in each subset may be orthogonal to each other. Each spatial domain vector in the spatial domain vector set may be obtained from an oversampled 2D-DFT matrix. The oversampling factor O_s is a positive integer. Specifically, O_s = O_1 × O_2, where O_1 may be an oversampling factor in the horizontal direction and O_2 may be an oversampling factor in the vertical direction. O_1 ≥ 1, O_2 ≥ 1, O_1 and O_2 are not both 1 at the same time, and both are integers. The N_s spatial domain vectors in an o_s-th (where 0 ≤ o_s ≤ O_s − 1 and o_s is an integer) subset in the spatial domain vector set may be denoted as, for example, b_{s,o_s}^1, b_{s,o_s}^2, . . . , and b_{s,o_s}^{N_s}. In this case, a matrix U_{s,o_s} may be constructed based on the N_s spatial domain vectors in the o_s-th subset, where U_{s,o_s} = [b_{s,o_s}^1, b_{s,o_s}^2, . . . , b_{s,o_s}^{N_s}]. A numerical sketch of this DFT-based construction is given after the following definition. 8. Frequency domain unit: The frequency domain unit is a unit of a frequency domain resource, and may represent different frequency domain resource granularities. For example, the frequency domain unit may include, but is not limited to, a subband, a resource block (RB), a subcarrier, a resource block group (RBG), or a precoding resource block group (PRG).
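The sketch referenced above: a minimal construction of an oversampled 2D-DFT spatial domain vector set. The port layout N1 × N2 = 4 × 2 (so N_s = 8) and the oversampling factors O_1 = O_2 = 4 are hypothetical values chosen for illustration, not values taken from this application.

```python
import numpy as np

# Hypothetical array geometry: N_s = N1 * N2 ports per polarization direction.
N1, N2, O1, O2 = 4, 2, 4, 4

def dft_vec(n, N, O):
    """Length-N oversampled DFT vector with index n (0 <= n < O * N)."""
    return np.exp(2j * np.pi * n * np.arange(N) / (O * N)) / np.sqrt(N)

def spatial_vector(n1, n2):
    """2D-DFT spatial domain vector of length N1 * N2, as a Kronecker product."""
    return np.kron(dft_vec(n1, N1, O1), dft_vec(n2, N2, O2))

# Two vectors drawn from the same oversampling subset (same shift o_s) but with
# different beam indices are orthogonal, as stated for each subset above.
b_a = spatial_vector(0 * O1 + 1, 0 * O2 + 1)
b_b = spatial_vector(1 * O1 + 1, 0 * O2 + 1)
print(abs(np.vdot(b_a, b_b)) < 1e-12)   # True: orthogonal within a subset
```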
In the embodiments of this application, a precoding matrix corresponding to the frequency domain unit may be a precoding matrix determined by performing channel measurement and feedback based on a reference signal on the frequency domain unit. The precoding matrix corresponding to the frequency domain unit may be used to precode data subsequently transmitted on the frequency domain unit. In the following descriptions, a precoding matrix or a precoding vector corresponding to a frequency domain unit may also be referred to as a precoding matrix or a precoding vector of the frequency domain unit for short. 9. Frequency domain vector: The frequency domain vector is a vector proposed in the embodiments of this application and used to represent a change rule of a channel in frequency domain. Each frequency domain vector may represent one change rule. When a signal is transmitted through a radio channel, the signal may arrive at a receive antenna through a plurality of paths from a transmit antenna. A multipath delay leads to frequency selective fading, namely, a change of a frequency domain channel. Therefore, different frequency domain vectors may be used to represent change rules that are of the channel in frequency domain and that are caused by delays on different transmission paths. Optionally, a length of the frequency domain vector is a quantity of some or all of the frequency domain units included in the bandwidth occupied by the CSI measurement resource in frequency domain. The bandwidth occupied by the CSI measurement resource in frequency domain may be bandwidth used to transmit a reference signal. The reference signal herein may be a reference signal used for channel measurement, for example, a CSI-RS used for downlink channel measurement. In the embodiments of this application, the length of the frequency domain vector may be a quantity of all frequency domain units included in the bandwidth occupied by the CSI measurement resource in frequency domain, or may be a quantity of some frequency domain units included in the bandwidth occupied by the CSI measurement resource in frequency domain. This is not limited in this application. For example, a rule of determining the length of the frequency domain vector based on the bandwidth occupied by the CSI measurement resource in frequency domain may be defined in a protocol. In NR, signaling used to indicate the bandwidth occupied by the CSI measurement resource in frequency domain may be, for example, a bandwidth range (CSI-Frequency Occupation) occupied by the CSI. It should be understood that the name "the bandwidth occupied by the CSI measurement resource in frequency domain" is used only for ease of description, and should not constitute any limitation on this application. This application does not exclude a possibility of expressing a same meaning by using another name. It should be further understood that, as an example of the signaling used to indicate the bandwidth occupied by the CSI measurement resource in frequency domain, the CSI-Frequency Occupation should not constitute any limitation on this application. This application does not exclude a possibility of defining other signaling in a future protocol to implement a same or similar function. Optionally, the length of the frequency domain vector is a length of signaling used to indicate positions and a quantity of to-be-reported frequency domain units. In NR, the signaling used to indicate the positions and the quantity of to-be-reported frequency domain units may be the reporting bandwidth (reporting band).
For example, the signaling may indicate the positions and the quantity of the to-be-reported frequency domain units by using a bitmap. Therefore, a dimension of the frequency domain vector may be a quantity of bits of the bitmap. It should be understood that the reporting band is merely an example of the signaling used to indicate the positions and the quantity of the to-be-reported frequency domain units, and should not constitute any limitation on this application. This application does not exclude a possibility of defining other signaling in a future protocol to implement a same or similar function. Optionally, the length of the frequency domain vector is a quantity of to-be-reported frequency domain units. The quantity of to-be-reported frequency domain units may be indicated by using, for example, the foregoing signaling, the reporting bandwidth. The quantity of to-be-reported frequency domain units may be a quantity of all frequency domain units in the bandwidth occupied by the CSI measurement resource in frequency domain, or may be a quantity of some frequency domain units in the bandwidth occupied by the CSI measurement resource in frequency domain. Alternatively, the quantity of to-be-reported frequency domain units may be the same as a signaling length of the reporting bandwidth, or may be less than a signaling length of the reporting bandwidth. This is not limited in this application. A length of the frequency domain vector may be specifically predefined in a protocol. The length of the frequency domain vector may be one of the items listed above, or may be defined by using another possible parameter. This is not limited in this application. When it is defined in the protocol that the length of the frequency domain vector is one of the items listed above, it may be considered that one of the signaling used to indicate the bandwidth occupied by the CSI measurement resource in frequency domain or the signaling used to indicate the positions and the quantity of the to-be-reported frequency domain units implicitly indicates the length of the frequency domain vector. For ease of description below, it is assumed that the frequency domain vector is denoted as u_f, and the length of the frequency domain vector u_f is N_f, where N_f ≥ 1 and N_f is an integer. The frequency domain vector may be a column vector or a row vector whose length is N_f. This is not limited in this application. 10. Frequency domain vector set: The frequency domain vector set may include a plurality of frequency domain vectors having different lengths. In the embodiments of this application, a length of a frequency domain vector is N_f. Therefore, a length of each frequency domain vector in a frequency domain vector set to which a frequency domain vector reported by the terminal device belongs is N_f. In a possible design, the frequency domain vector set may include N_f frequency domain vectors. The N_f frequency domain vectors may be orthogonal to each other. Each frequency domain vector in the frequency domain vector set may be obtained from a DFT matrix. The N_f frequency domain vectors may be denoted as, for example, b_f^1, b_f^2, . . . , and b_f^{N_f}. The N_f frequency domain vectors may be used to construct a matrix U_f, where U_f = [b_f^1, b_f^2, . . . , b_f^{N_f}]. In another possible design, the frequency domain vector set may be extended to O_f × N_f frequency domain vectors by using an oversampling factor O_f. In this case, the frequency domain vector set may include O_f subsets, and each subset may include N_f frequency domain vectors.
The N_f frequency domain vectors in each subset may be orthogonal to each other. Each frequency domain vector in the frequency domain vector set may be obtained from an oversampled DFT matrix. The oversampling factor O_f is a positive integer. The N_f frequency domain vectors in an o_f-th (where 0 ≤ o_f ≤ O_f − 1 and o_f is an integer) subset in the frequency domain vector set may be denoted as, for example, b_{f,o_f}^1, b_{f,o_f}^2, . . . , and b_{f,o_f}^{N_f}. In this case, a matrix U_{f,o_f} may be constructed based on the N_f frequency domain vectors in the o_f-th subset, where U_{f,o_f} = [b_{f,o_f}^1, b_{f,o_f}^2, . . . , b_{f,o_f}^{N_f}]. 11. Space-frequency component matrix: A space-frequency component matrix may be determined by using a spatial domain vector and a frequency domain vector. A space-frequency component matrix may be determined by using, for example, a spatial domain vector and a conjugate transpose of a frequency domain vector, for example, u_s × u_f^H, where a dimension of u_s × u_f^H may be N_s × N_f. It should be understood that the space-frequency component matrix may be a representation form of a space-frequency base unit determined by using a spatial domain vector and a frequency domain vector. The space-frequency base unit may alternatively be represented as, for example, a space-frequency component vector, and the space-frequency component vector may be determined by using, for example, a Kronecker product of a spatial domain vector and a frequency domain vector. The space-frequency base unit may alternatively be represented as, for example, a space-frequency vector pair. A specific representation form of the space-frequency base unit is not limited in this application. Various possible forms that are determined by a person skilled in the art by using a spatial domain vector and a frequency domain vector based on a same concept should fall within the protection scope of this application. In addition, if a defined form of the spatial domain vector or the frequency domain vector is different from that listed above, an operation relationship among the space-frequency component matrix, the spatial domain vector, and the frequency domain vector may also be different. The operation relationship among the space-frequency component matrix, the spatial domain vector, and the frequency domain vector is not limited in this application. 12. Space-frequency matrix: In the embodiments of this application, the space-frequency matrix is an intermediate variable used to determine a precoding matrix. For the terminal device, the space-frequency matrix may be determined by using a precoding matrix or a channel matrix. For the network device, the space-frequency matrix may be obtained by weighting a plurality of space-frequency component matrices, and is used to determine a downlink channel or a precoding matrix. The space-frequency matrix may be represented as a matrix whose dimension is N_s × N_f, or the space-frequency matrix may be represented as a matrix whose dimension is N_f × N_s. The matrix whose dimension is N_s × N_f may include N_f column vectors whose length is N_s. The N_f column vectors may correspond to N_f frequency domain units, and each column vector may be used to determine a precoding vector of a corresponding frequency domain unit. For example, the space-frequency matrix may be denoted as H, where H = [w_1, w_2, . . . , w_{N_f}]. Here, w_1 to w_{N_f} are the N_f column vectors corresponding to the N_f frequency domain units, and the length of each column vector may be N_s. The N_f column vectors may be used to determine the precoding vectors of the N_f frequency domain units.
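To make this notation concrete, the sketch below builds a space-frequency matrix H as a weighted sum of space-frequency component matrices u_s × u_f^H; the dimensions, the selected vector pairs, and the weighting coefficients are arbitrary placeholders rather than values taken from this application.

```python
import numpy as np

N_s, N_f = 4, 8   # hypothetical: 4 ports per polarization, 8 frequency domain units

def dft_col(n, N):
    """Column DFT vector of length N with index n."""
    return np.exp(2j * np.pi * n * np.arange(N) / N) / np.sqrt(N)

# A few selected (spatial, frequency) vector pairs with toy weighting
# coefficients of the form a * e^{j*theta} (placeholder values).
pairs = [(0, 0), (1, 0), (0, 2)]
coeffs = [1.0, 0.5 * np.exp(1j * np.pi / 4), 0.25 * np.exp(-1j * np.pi / 3)]

# Space-frequency matrix as a weighted sum of component matrices u_s x u_f^H.
H = np.zeros((N_s, N_f), dtype=complex)
for (i_s, i_f), c in zip(pairs, coeffs):
    u_s, u_f = dft_col(i_s, N_s), dft_col(i_f, N_f)
    H += c * np.outer(u_s, u_f.conj())   # u_s x u_f^H, dimension N_s x N_f

# Column n of H determines the precoding vector of the n-th frequency domain unit.
w = H[:, 3] / np.linalg.norm(H[:, 3])    # normalized per frequency domain unit
print(H.shape, np.round(w, 3))
```

Under the dual-domain compression described in the next item, the terminal device would feed back only the selected vector pairs and their weighting coefficients, and the network device would rebuild H in exactly this way.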
It should be understood that the space-frequency matrix is merely a representation form of the intermediate variable used to determine the precoding matrix, and should not constitute any limitation on this application. For example, the column vectors in the space-frequency matrix may be connected from left to right in sequence, where a tail of a vector is followed by a head of the vector that is on the right of and adjacent to the vector, or may be arranged according to another predefined rule, to obtain a vector whose length is N_s × N_f. The vector may be referred to as a space-frequency vector. It should be further understood that the dimension of the space-frequency matrix and the dimension of the space-frequency vector that are shown above are merely examples, and should not constitute any limitation on this application. For example, the space-frequency matrix may alternatively be a matrix whose dimension is N_f × N_s. Each row vector may correspond to one frequency domain unit, and is used to determine a precoding vector of a corresponding frequency domain unit. In addition, when the transmit antenna is configured with a plurality of polarization directions, the dimension of the space-frequency matrix may further be extended. For example, for a dual-polarized antenna, the dimension of the space-frequency matrix may be 2N_s × N_f or N_f × 2N_s. It should be understood that a quantity of polarization directions of the transmit antenna is not limited in this application. 13. Dual-domain compression: The dual-domain compression includes spatial domain compression and frequency domain compression. The spatial domain compression may mean that one or more spatial domain vectors are selected from a spatial domain vector set to construct a precoding vector. The frequency domain compression may mean that one or more frequency domain vectors are selected from a frequency domain vector set to construct a precoding vector. A matrix constructed by one spatial domain vector and one frequency domain vector may be, for example, the foregoing space-frequency component matrix. The selected one or more spatial domain vectors and the selected one or more frequency domain vectors may be used to construct one or more space-frequency component matrices. A weighted sum of the one or more space-frequency component matrices may be used to construct a space-frequency matrix corresponding to a transport layer. In other words, the space-frequency matrix may be approximately the weighted sum of the space-frequency component matrices that are constructed by the selected one or more spatial domain vectors and the selected one or more frequency domain vectors. Then, a precoding vector corresponding to each frequency domain unit may be determined. In the dual-domain compression, compression is performed in space domain and frequency domain separately. When providing a feedback, the terminal device may feed back the selected one or more spatial domain vectors and the selected one or more frequency domain vectors to the network device, and does not need to feed back a weighting coefficient (for example, including an amplitude and a phase) based on each frequency domain unit (such as the subband). Therefore, feedback overheads can be greatly reduced. In addition, because a frequency domain vector can represent a change rule of a channel in frequency domain, a change of the channel in frequency domain is simulated through linear superposition of one or more frequency domain vectors.
Therefore, relatively high feedback precision can still be maintained, so that a precoding matrix recovered by the network device based on the feedback of the terminal device can still well adapt to the channel. For specific content of the dual-domain compression, refer to the Patent Application No. 201811263110.1 entitled "METHOD FOR INDICATING AND DETERMINING PRECODING VECTOR AND COMMUNICATIONS APPARATUS". For brevity, detailed descriptions of the specific content are omitted herein. 14. Weighting coefficient, amplitude, and phase: The weighting coefficient is used to represent a weight of each space-frequency component matrix when the space-frequency component matrices are used to calculate a weighted sum to determine a space-frequency matrix. For example, the foregoing space-frequency matrix may be approximately a weighted sum of a plurality of space-frequency component matrices, and the weighting coefficient may represent a weight of each of the plurality of space-frequency component matrices. Each weighting coefficient may include an amplitude and a phase. For example, in a weighting coefficient αe^{jθ}, α is an amplitude, and θ is a phase. Among the weighting coefficients corresponding to the plurality of space-frequency component matrices, amplitudes (or amplitude values) of some weighting coefficients may be zero or close to zero, and quantization values corresponding to these weighting coefficients may be zero. A weighting coefficient whose amplitude is quantized by using a quantization value being zero may be referred to as a zero-amplitude weighting coefficient. Correspondingly, amplitudes of some weighting coefficients are relatively large, and quantization values corresponding to these weighting coefficients are not zero. A weighting coefficient whose amplitude is quantized by using a non-zero quantization value may be referred to as a non-zero-amplitude weighting coefficient. In other words, the plurality of weighting coefficients include one or more non-zero-amplitude weighting coefficients and one or more zero-amplitude weighting coefficients. It should be understood that a weighting coefficient may be indicated by using a quantization value, may be indicated by using an index of a quantization value, or may be indicated by using a non-quantization value. An indication manner of the weighting coefficient is not limited in this application, provided that a peer end is enabled to learn of the weighting coefficient. For ease of description below, information used to indicate the weighting coefficient is referred to as quantization information of the weighting coefficient. The quantization information may be, for example, a quantization value, an index, or any other information that may be used to indicate the weighting coefficient. 15. Transport layer: A quantity of transport layers is a rank of a channel matrix. The terminal device may determine a quantity of transport layers based on a channel matrix obtained through channel estimation. A precoding matrix may be determined by using the channel matrix. For example, the precoding matrix may be determined by performing SVD on the channel matrix or a covariance matrix of the channel matrix. In the SVD process, different transport layers may be distinguished based on eigenvalues.
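As a brief numerical sketch of this eigenvalue-based distinction between transport layers (the antenna counts and the rank-selection threshold below are assumptions for illustration, not rules taken from this application):

```python
import numpy as np

rng = np.random.default_rng(2)
n_rx, n_tx = 4, 4

# Toy channel matrix obtained, e.g., through channel estimation.
H = rng.standard_normal((n_rx, n_tx)) + 1j * rng.standard_normal((n_rx, n_tx))

# EVD of the channel covariance; eigh returns ascending order, so reverse it.
eigval, eigvec = np.linalg.eigh(H.conj().T @ H)
order = np.argsort(eigval)[::-1]           # descending eigenvalues
eigval, eigvec = eigval[order], eigvec[:, order]

# Assumed rank rule for illustration: keep layers within 20 dB of the strongest.
R = int(np.sum(eigval > eigval[0] * 1e-2))
layers = eigvec[:, :R]                     # column r-1 serves the rth transport layer
print("eigenvalues:", np.round(eigval, 2), "-> quantity of transport layers R =", R)
```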
For example, a precoding vector determined by using an eigenvector corresponding to a maximum eigenvalue may correspond to the first transport layer, and a precoding vector determined by using an eigenvector corresponding to a minimum eigenvalue may correspond to an Rth transport layer. That is, eigenvalues corresponding to the first transport layer to the Rth transport layer decrease in sequence. It should be understood that distinguishing the different transport layers based on the eigenvalues is merely a possible implementation, and should not constitute any limitation on this application. For example, another criterion for distinguishing the transport layers may also be predefined in a protocol. This is not limited in this application. In addition, to facilitate understanding of the embodiments of this application, the following descriptions are provided. 1: To facilitate understanding and description, main parameters in this application are described as follows:
R represents a quantity of transport layers. In the embodiments of this application, R ≥ 1, and R is an integer. The R transport layers may include, for example, the first transport layer to an Rth transport layer. For ease of description below, an rth transport layer is used as an example to describe the method for indicating vectors used to construct a precoding vector provided in the embodiments of this application. A value of r may be an integer value ranging from 1 to R.
R_m represents a predefined maximum value of the quantity of transport layers, that is, 1 ≤ R ≤ R_m. For example, a value of R_m may be defined in a protocol. Optionally, R_m is 4.
P represents a quantity of polarization directions of a transmit antenna, where P ≥ 1 and P is an integer.
L represents a maximum quantity in quantities of spatial domain vectors corresponding to the R transport layers, and is pre-configurable, where L ≥ 1, and L is an integer.
M represents a maximum quantity in quantities of frequency domain vectors corresponding to the R transport layers, and is pre-configurable, where M ≥ 1, and M is an integer.
L_r represents a quantity of spatial domain vectors that is configured for the rth transport layer when the quantity of transport layers is R, where L ≥ L_r ≥ 1, and L_r is an integer.
M_r represents a quantity of frequency domain vectors that is configured for the rth transport layer when the quantity of transport layers is R, where M ≥ M_r ≥ 1, and M_r is an integer.
K_r represents a quantity of to-be-reported space-frequency vector pairs that is configured for the rth transport layer when the quantity of transport layers is R, where K_r ≥ 1, and K_r is an integer. Because a space-frequency vector pair reported for each transport layer corresponds to a weighting coefficient, the quantity of to-be-reported space-frequency vector pairs that is configured for the rth transport layer may also be a quantity of to-be-reported weighting coefficients that is configured for the rth transport layer. The parameter K_r may be a quantity of all to-be-reported weighting coefficients (or space-frequency vector pairs) that is configured for the rth transport layer, or may be a quantity of some to-be-reported weighting coefficients (or space-frequency vector pairs) that is configured for the rth transport layer.
The quantity of some weighting coefficients may be indicated because a minimum quantity of to-be-reported weighting coefficients (or space-frequency vector pairs) may be predefined for each transport layer in a protocol, or a quantity of weighting coefficients (or space-frequency vector pairs) that need to be reported for each of the R transport layers may be predefined in a protocol. In this case, the parameter Kr may be a difference between a total quantity of weighting coefficients that is originally configured for the rth transport layer and a minimum quantity of to-be-reported weighting coefficients that is predefined for the rth transport layer. For example, if the total quantity of weighting coefficients that is configured for the rth transport layer is Q, and the minimum quantity of to-be-reported weighting coefficients that is predefined for the rth transport layer is ar, the parameter Kr may be Q−ar. Both Q and ar are positive integers. The minimum quantity of to-be-reported weighting coefficients that is predefined for the rth transport layer may include, for example, a quantity of normalized coefficients. The normalized coefficients may include, for example, a plurality of normalized coefficients corresponding to a plurality of polarization directions, or one normalized coefficient for a plurality of polarization directions. Specific weighting coefficients corresponding to the minimum quantity of reported space-frequency vectors are not limited in this application. In addition, when the quantity of transport layers is greater than 1, the minimum quantities of to-be-reported weighting coefficients that are predefined for the transport layers may be the same, partially different, or different from each other. This is not limited in this application.
K represents a maximum value that is determined by traversing R from 1 to Rm and that is in quantities of to-be-reported space-frequency vector pairs that are pre-configured for R transport layers, or a maximum value that is determined by traversing R from 1 to Rm and that is in quantities of to-be-reported weighting coefficients that are pre-configured for R transport layers. In other words, K is the maximum value of Kr that is determined by traversing r from 1 to R and traversing R from 1 to Rm. K≥1, and K is an integer.
Tr is a quantity of space-frequency vector pairs reported for the rth transport layer, where Tr≤Kr, and Tr is an integer.
2: In the embodiments, for ease of description, when numbering is used, consecutive numbering may start from 1. For example, the R transport layers may include the first transport layer to the Rth transport layer, the L beam vectors may include the first beam vector to an Lth beam vector, and so on. Examples are not described one by one herein. Certainly, specific implementation is not limited thereto. For example, consecutive numbering may alternatively start from 0. It should be understood that the foregoing settings are intended to facilitate description of the technical solutions provided in the embodiments of this application, but are not intended to limit the scope of this application.
3: In the embodiments of this application, transformations between matrices and vectors are involved in many places. For ease of understanding, unified descriptions are provided herein. A superscript T represents transposition. For example, A^T represents transposition of a matrix (or a vector) A. A superscript H represents a conjugate transpose.
For example, A^H represents a conjugate transpose of the matrix (or the vector) A. For brevity, descriptions of a same or similar case are omitted below.
4: In the embodiments shown below, an example in which both a beam vector and a frequency domain vector are column vectors is used to describe the embodiments provided in this application, but this should not constitute any limitation on this application. Based on a same idea, a person skilled in the art may further think of more possible representation manners.
5: In the embodiments of this application, "used to indicate" may include direct indication and indirect indication. For example, when a piece of indication information is described as being used to indicate information I, the indication information may directly indicate I or indirectly indicate I, but this does not necessarily mean that the indication information carries I. Information indicated by indication information is referred to as to-be-indicated information. In a specific implementation process, the to-be-indicated information may be indicated in a plurality of manners, for example, but not limited to, directly: the to-be-indicated information is indicated by using the to-be-indicated information itself or an index of the to-be-indicated information. Alternatively, the to-be-indicated information may be indirectly indicated by indicating other information, where there is an association relationship between the other information and the to-be-indicated information. Alternatively, only a part of the to-be-indicated information may be indicated, and the other part of the to-be-indicated information is known or agreed on in advance. For example, specific information may be indicated by using an arrangement sequence of various pieces of information that is agreed on in advance (for example, stipulated in a protocol), to reduce indication overheads to some extent. In addition, a common part of all pieces of information may be identified and indicated once, to reduce the indication overheads caused by separately indicating the same information. For example, a person skilled in the art should understand that the precoding matrix includes precoding vectors, and the precoding vectors in the precoding matrix may have a same part in composition or another attribute. In addition, a specific indication manner may alternatively be any of various existing indication manners, for example, but not limited to, the foregoing indication manners and various combinations thereof. For specific details of the various indication manners, refer to the current technology. Details are not described in this specification. It may be learned from the foregoing descriptions that, for example, when a plurality of pieces of information of a same type need to be indicated, different information may be indicated in different manners. In a specific implementation process, a required indication manner may be selected based on a specific requirement. A selected indication manner is not limited in the embodiments of this application. In this case, the indication manner in the embodiments of this application should be understood as covering various methods that can enable a to-be-indicated party to learn of the to-be-indicated information. In addition, the to-be-indicated information may have another equivalent form.
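Before turning to those equivalent forms, the following minimal Python sketch illustrates the direct and index-based indication manners described above; the codebook contents and the field names are hypothetical and not defined by this application.

```python
# Quantization values assumed to be known to both ends (hypothetical).
CODEBOOK = [0.25, 0.5, 0.75, 1.0]

def indicate_direct(value):
    """Directly carry the to-be-indicated information itself."""
    return {"value": value}

def indicate_by_index(value):
    """Indirectly indicate the information via an index into a codebook
    known to both ends, which may reduce indication overheads."""
    return {"index": CODEBOOK.index(value)}

def recover(indication):
    """The peer end learns of the to-be-indicated information either way."""
    if "value" in indication:
        return indication["value"]
    return CODEBOOK[indication["index"]]

assert recover(indicate_direct(0.5)) == recover(indicate_by_index(0.5))
```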
For example, a row vector may be represented as a column vector; a matrix may be represented by using its transposed matrix; or a matrix may be represented in a form of a vector or an array, where the vector or the array may be obtained by connecting the row vectors or the column vectors of the matrix. A Kronecker product of two vectors may be represented in a form such as a product of one vector and a transposed vector of the other vector. The technical solutions provided in the embodiments of this application should be understood as covering these various forms. For example, some or all features in the embodiments of this application should be understood as covering various representation forms of the features.
The to-be-indicated information may be sent as a whole, or may be divided into a plurality of pieces of sub-information and sent separately, and the transmitting periodicities and/or transmitting occasions of these pieces of sub-information may be the same or different. A specific transmitting method is not limited in this application. The transmitting periodicities and/or the transmitting occasions of these pieces of sub-information may be predefined, for example, predefined according to a protocol, or may be configured based on configuration information sent by a transmit end device to a receive end device. The configuration information may include, for example, but is not limited to, one of or a combination of at least two of: radio resource control signaling such as RRC signaling, MAC layer signaling such as MAC-CE signaling, and physical layer signaling such as downlink control information (DCI).
6: Definitions of many features (for example, a Kronecker product, a PMI, a frequency domain unit, a spatial domain vector, a frequency domain vector, and a weighting coefficient of a space-frequency vector pair) that are listed in this application are merely used to explain functions of the features by using examples. For detailed content of the features, refer to the current technology.
7: In the embodiments shown below, first, second, third, fourth, and various numbers are merely used for differentiation for convenient description, for example, to distinguish different fields and different indication information, and are not intended to limit the scope of the embodiments of this application.
8: In the embodiments shown below, "pre-configuration" may be indicated in advance by using signaling, or may be determined according to a preset rule. A specific implementation thereof is not limited in this application. Corresponding to "pre-configuration", "actual reporting" may refer to information that is actually reported by the terminal device to the network device based on channel measurement. For example, a quantity of to-be-reported spatial domain vectors that is pre-configured for a transport layer may be a quantity of spatial domain vectors that need to be reported for the transport layer; therefore, the quantity of to-be-reported spatial domain vectors that is configured for the transport layer may be greater than or equal to a quantity of actually reported spatial domain vectors. For another example, a quantity of to-be-reported frequency domain vectors that is pre-configured for a transport layer may be a quantity of frequency domain vectors that need to be reported for the transport layer.
Therefore, the quantity of to-be-reported frequency domain vectors that is configured for the transport layer may be greater than or equal to a quantity of actually reported frequency domain vectors. For still another example, a quantity of to-be-reported weighting coefficients that is pre-configured for a transport layer may be a quantity of space-frequency vector pairs that need to be reported for the transport layer. Therefore, the quantity of to-be-reported space-frequency vector pairs that is configured for the transport layer may be greater than or equal to a quantity of actually reported weighting coefficients.
9: "Predefinition" may be implemented by prestoring corresponding code, a table, or other related indication information in a device (for example, the terminal device and the network device). A specific implementation thereof is not limited in this application. "Store" may mean that the corresponding code, the table, or the other related indication information is stored in one or more memories. The one or more memories may be separately disposed, or may be integrated into an encoder, a decoder, a processor, or a communications apparatus. Alternatively, some of the one or more memories may be separately disposed, and some may be integrated into a decoder, a processor, or a communications apparatus. A type of the memory may be any form of storage medium. This is not limited in this application.
10: "Protocol" in the embodiments of this application may be a standard protocol in the communications field, for example, an LTE protocol, an NR protocol, or a related protocol applied to a future communications system. This is not limited in this application.
11: "At least one" means one or more, and "a plurality of" means two or more. The term "and/or" describes an association relationship between associated objects and may indicate three relationships. For example, A and/or B may indicate the following cases: only A exists, both A and B exist, and only B exists, where A and B may be singular or plural. The character "/" generally represents an "or" relationship between the associated objects. "At least one item (piece) of the following" or a similar expression thereof means any combination of these items, including a singular item (piece) or any combination of plural items (pieces). For example, at least one of a, b, and c may indicate: a; b; c; a and b; a and c; b and c; or a, b, and c, where a, b, and c may be singular or plural.
With reference to the accompanying drawings, the following describes in detail the method for indicating vectors used to construct a precoding vector provided in the embodiments of this application. It should be understood that the method provided in the embodiments of this application may be applied to a system performing communication by using a multiple-antenna technology, for example, the communications system 100 shown in FIG. 1. The communications system may include at least one network device and at least one terminal device. The network device and the terminal device may communicate with each other by using the multiple-antenna technology.
It should be further understood that a specific structure of an execution body of the method provided in the embodiments of this application is not specifically limited in the embodiments shown below, provided that a program that records code for the method provided in the embodiments of this application can be run to perform communication according to the method provided in the embodiments of this application. For example, the execution body of the method provided in the embodiments of this application may be the terminal device, the network device, or a function module that can invoke and execute the program in the terminal device or the network device. Without loss of generality, the following describes in detail, by using interaction between a network device and a terminal device as an example, the method for indicating vectors used to construct a precoding vector provided in the embodiments of this application.
FIG. 2 is a schematic flowchart of a method 200 for indicating vectors used to construct a precoding vector from a perspective of device interaction according to an embodiment of this application. As shown in the figure, the method 200 may include step 210 to step 230. The following describes the steps in the method 200 in detail.
In step 210, a terminal device generates a CSI report. The CSI report includes a bitmap, and a length of the bitmap is independent of a quantity R of transport layers. A plurality of indicator bits in the bitmap may correspond to a plurality of space-frequency vector pairs. Each indicator bit may be used to indicate whether a corresponding space-frequency vector pair is selected. When each indicator bit in the bitmap is used to indicate whether a corresponding space-frequency vector pair is selected, a space-frequency vector pair selected for each transport layer, or a space-frequency vector pair reported for each transport layer, is indicated. For example, when an indicator bit is set to "0", it indicates that the corresponding space-frequency vector pair is not selected; when an indicator bit is set to "1", it indicates that the corresponding space-frequency vector pair is selected. Therefore, a total quantity of indicator bits "1" in the bitmap may represent a quantity of space-frequency vector pairs reported for the R transport layers, and a total quantity of indicator bits "1" in the indicator bits corresponding to an rth transport layer in the bitmap may represent a quantity of space-frequency vector pairs reported for the rth transport layer. A correspondence between each indicator bit in the bitmap and each transport layer is described in detail below with reference to a specific embodiment. Detailed descriptions of the correspondence are temporarily omitted herein. It should be understood that the listed meanings expressed by the values of the indicator bit herein are merely examples, and should not constitute any limitation on this application. The selected space-frequency vector pair is a space-frequency vector pair used to construct a precoding vector. Each space-frequency vector pair may include one spatial domain vector and one frequency domain vector. In other words, the space-frequency vector pair is determined by using the spatial domain vector and the frequency domain vector. One or more space-frequency vector pairs may be reported for a same transport layer, as in the sketch below.
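The following sketch shows how the indicator bits could be read; the layer-major layout (one L×M sub-bitmap per transport layer) and all numeric values are assumptions for illustration only.

```python
# Assumed configuration: L = 2 spatial domain vectors, M = 3 frequency
# domain vectors, R = 2 transport layers, one L*M sub-bitmap per layer.
L, M, R = 2, 3, 2
bitmap = [1, 0, 1, 0, 0, 0,   # sub-bitmap of the first transport layer
          0, 1, 0, 0, 0, 1]   # sub-bitmap of the second transport layer

total_reported = sum(bitmap)  # pairs reported for all R transport layers
for r in range(R):
    sub = bitmap[r * L * M:(r + 1) * L * M]
    # Each "1" marks a selected space-frequency vector pair for layer r + 1.
    positions = [i for i, b in enumerate(sub) if b]
    print(f"layer {r + 1}: {sum(sub)} pair(s) at positions {positions}")
print(f"total reported pairs: {total_reported}")
```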
When a plurality of space-frequency vector pairs are reported for a same transport layer, any two of the space-frequency vector pairs differ in at least one of the spatial domain vector and the frequency domain vector included in the pairs. In this embodiment of this application, the space-frequency vector pair reported for each transport layer may be selected from a plurality of space-frequency vector pairs. For example, the plurality of space-frequency vector pairs may be predefined, for example, pre-agreed on by a network device and the terminal device, or defined in a protocol. The plurality of space-frequency vector pairs may alternatively be determined by the terminal device and reported to the network device. For example, for each transport layer, the terminal device determines and reports one or more spatial domain vectors and one or more frequency domain vectors. The one or more spatial domain vectors and the one or more frequency domain vectors may be used to determine one or more space-frequency vector pairs. Therefore, the bitmap may include sub-bitmaps corresponding to a plurality of transport layers. A plurality of indicator bits in each sub-bitmap may correspond to a plurality of space-frequency vector pairs at a transport layer. When each indicator bit is used to indicate whether a corresponding space-frequency vector pair is selected, this is equivalent to indicating a selected space-frequency vector pair in the plurality of space-frequency vector pairs corresponding to the plurality of indicator bits. In other words, this indicates the relative positions, in the plurality of space-frequency vector pairs, of the space-frequency vector pairs reported for the R transport layers. The network device may predetermine the plurality of space-frequency vector pairs that correspond to the plurality of indicator bits in the bitmap. For example, the space-frequency vector pairs are predefined or reported by the terminal device (for example, indicated by using a field in a second part below). Therefore, after the relative positions of the space-frequency vector pairs reported for the transport layers in the plurality of space-frequency vector pairs are indicated, the network device may determine the space-frequency vector pairs used to construct the precoding vector. For ease of description below, it is assumed that the terminal device may report one or more spatial domain vectors and one or more frequency domain vectors for each transport layer. A space-frequency vector pair reported by the terminal device for each transport layer may be selected from a plurality of space-frequency vector pairs determined by the spatial domain vectors and the frequency domain vectors. The rth transport layer in the R transport layers is used as an example, and it is assumed that the quantity of space-frequency vector pairs reported by the terminal device for the rth transport layer is Tr (where Tr≥1 and Tr is an integer). The Tr space-frequency vector pairs may be selected from Lr×Mr space-frequency vector pairs determined by Lr (where Lr≥1 and Lr is an integer) spatial domain vectors and Mr (where Mr≥1 and Mr is an integer) frequency domain vectors; one possible selection is sketched below.
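The sketch below illustrates one way a terminal might select the Tr strongest of the Lr×Mr candidate pairs, ranking them by weighting-coefficient amplitude; the coefficient matrix is a random placeholder and the selection criterion is an assumption, not a rule defined by this application.

```python
import numpy as np

# Assumed configuration and placeholder coefficients for the r-th layer.
rng = np.random.default_rng(1)
Lr, Mr, Tr = 2, 4, 3
W = rng.standard_normal((Lr, Mr)) + 1j * rng.standard_normal((Lr, Mr))

# Rank the (spatial index l, frequency index m) pairs by amplitude |W[l, m]|
# and keep the Tr strongest; flattening is row-major with Mr columns.
order = np.argsort(np.abs(W), axis=None)[::-1][:Tr]
selected_pairs = [divmod(int(i), Mr) for i in order]  # list of (l, m) pairs
print(selected_pairs)
```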
The Lr spatial domain vectors and the Mr frequency domain vectors may, for example, be determined by the terminal device and reported to the network device. In this embodiment of this application, the quantities of space-frequency vector pairs reported for at least two of the R transport layers may be different or the same. This is not limited in this application. A quantity of space-frequency vector pairs that need to be reported for each transport layer, for example, a quantity Kr of space-frequency vector pairs that need to be reported for the rth transport layer, may be predefined, or may be directly or indirectly indicated by the network device by using signaling. This is not limited in this application. Optionally, the method further includes: receiving first indication information. The first indication information is used to indicate a quantity of to-be-reported space-frequency vector pairs that is configured for each transport layer. Correspondingly, the network device transmits the first indication information. The network device may include the first indication information in higher layer signaling such as an RRC message or a MAC CE, or in physical layer signaling such as DCI, to indicate, to the terminal device, the quantity of to-be-reported space-frequency vector pairs that is configured for each transport layer. Specific signaling carrying the first indication information is not limited in this application. In a possible design, the first indication information may indicate a maximum value K in the quantities of to-be-reported space-frequency vector pairs that are configured for the R transport layers. The maximum value K may be replaced with a minimum value, an average value, or the like. The terminal device may determine, according to a predefined rule and based on the quantity of transport layers and the value indicated by the first indication information, the reporting quantity corresponding to each transport layer. In this case, the first indication information indirectly indicates the quantity of to-be-reported space-frequency vector pairs that is configured for each transport layer. For example, the first indication information indicates the maximum value K. The predefined rule may be, for example, as follows: when R is 1, a quantity K1 of to-be-reported space-frequency vector pairs that is configured for the transport layer is the maximum value K; when R is 2, quantities K1 and K2 of to-be-reported space-frequency vector pairs that are configured for the two transport layers are both the maximum value K; when R is 3, a quantity of to-be-reported space-frequency vector pairs that is configured for the first transport layer is the maximum value K, and quantities of to-be-reported space-frequency vector pairs that are configured for the second transport layer and the third transport layer are both a half of the maximum value, K/2; and when R is 4, a quantity of to-be-reported space-frequency vector pairs that is configured for each transport layer is a half of the maximum value, K/2. As described above, a minimum quantity of weighting coefficients reported for each transport layer may alternatively be predefined in a protocol.
Therefore, when the first indication information is used to indicate the maximum value K in the quantities of reported space-frequency vector pairs that are configured for the R transport layers, the maximum value K may be a maximum value in the total quantities of reported space-frequency vector pairs that are configured for the R transport layers (namely, a maximum value in the R total reporting quantities configured for the R transport layers), or may be a value obtained by subtracting a minimum reporting quantity from that maximum value. The total reporting quantity configured for the rth transport layer indicates a total quantity of to-be-reported space-frequency vector pairs (or weighting coefficients) that is configured for the rth transport layer. It should be noted that the minimum reporting quantity and the maximum value correspond to a same transport layer. For example, suppose a total quantity of to-be-reported weighting coefficients that is configured for the first transport layer in the R transport layers is the maximum value in the total quantities of to-be-reported weighting coefficients that are configured for the R transport layers, for example, K. In this case, the first indication information may indicate the total quantity of to-be-reported weighting coefficients that is configured for the first transport layer, or may indicate a value obtained by subtracting, from that total quantity, a minimum quantity of to-be-reported weighting coefficients that is predefined for the first transport layer, for example, K−a1, where a1 indicates the minimum quantity of to-be-reported weighting coefficients that is predefined for the first transport layer, and a1 is a positive integer. For example, it is assumed that the maximum value in the total quantities of to-be-reported space-frequency vector pairs that are configured for the R transport layers is 8, and that this maximum value is the total quantity of to-be-reported space-frequency vector pairs that is configured for the first transport layer. If the minimum quantity of to-be-reported space-frequency vector pairs that is predefined for the first transport layer is 2, the first indication information may indicate 8 or 6 (which is obtained from 8−2). A specific rule for indicating the maximum value K by the first indication information may be predefined in a protocol, or may be pre-negotiated by the network device and the terminal device. The network device and the terminal device may indicate and determine, according to a same rule, the maximum value K in the total quantities of to-be-reported space-frequency vector pairs that are configured for the R transport layers. It should be understood that the rules listed above are merely examples, and should not constitute any limitation on this application. The following describes in detail, with reference to a specific embodiment, a relationship between the maximum value and the quantity of to-be-reported space-frequency vector pairs that is configured for each transport layer; one possible reading of the example rule is first sketched below.
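The following sketch restates the example rule above in code: the per-layer configured quantities are derived from the maximum value K for R = 1 to 4, and the indicated value may be either K or K minus the predefined minimum a1 (mirroring the 8 or 6 example); all values are illustrative.

```python
# Example rule: K per layer for R = 1 or 2; K and K/2 for R = 3; K/2 each
# for R = 4 (integer division assumed for K/2).
def configured_quantities(K, R):
    rule = {1: [K],
            2: [K, K],
            3: [K, K // 2, K // 2],
            4: [K // 2] * 4}
    return rule[R]

K, a1 = 8, 2  # illustrative maximum value and predefined minimum
for R in range(1, 5):
    print(f"R = {R}: configured quantities {configured_quantities(K, R)}")
# The first indication information may carry K itself or K - a1.
print(f"indicated value: {K} or {K - a1}")
```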
It may be understood that, when the meaning represented by the value indicated by the first indication information changes, the predefined rule for determining the quantity of reported space-frequency vector pairs corresponding to each transport layer also changes. It should be further understood that the maximum value, the minimum value, and the average value listed above are merely several possible implementations, and should not constitute any limitation on this application. In another possible design, the first indication information may directly indicate, for each value of R, the quantity of to-be-reported space-frequency vector pairs that is configured for each transport layer. The terminal device may directly determine, based on the first indication information and the quantity of transport layers, the quantity of space-frequency vector pairs that need to be reported for each transport layer. When the first indication information is used to indicate the quantity of to-be-reported space-frequency vector pairs that is configured for each transport layer, the first indication information may directly indicate the total quantity of to-be-reported space-frequency vector pairs, or may indicate a difference between the total quantity of to-be-reported space-frequency vector pairs and a minimum quantity of to-be-reported space-frequency vector pairs corresponding to the transport layer. In other words, when the first indication information is used to indicate a quantity of to-be-reported weighting coefficients that is configured for each transport layer, the first indication information may directly indicate a total quantity of to-be-reported weighting coefficients, or may indicate a difference between the total quantity of to-be-reported weighting coefficients and a minimum quantity of to-be-reported weighting coefficients corresponding to the transport layer. For example, the first indication information indicates that the quantity of to-be-reported space-frequency vector pairs that is configured for the rth transport layer is Kr, where Kr is a positive integer. In this case, Kr may be the total quantity of to-be-reported space-frequency vector pairs that is configured for the rth transport layer, or may be a value obtained by subtracting, from that total quantity, a predefined minimum quantity of space-frequency vector pairs reported for the rth transport layer. A specific rule for indicating the quantity of to-be-reported space-frequency vector pairs that is configured for each transport layer may be predefined in a protocol, or may be pre-negotiated by the network device and the terminal device. The network device and the terminal device may indicate and determine, according to a same rule, the total quantity of to-be-reported space-frequency vector pairs that is configured for each of the R transport layers. In still another possible design, the first indication information and second indication information or third indication information listed below may be same indication information.
For example, a relationship between the quantity of to-be-reported space-frequency vector pairs that is configured for each transport layer and the quantity of to-be-reported spatial domain vectors that is configured for each transport layer may be predefined; a relationship between the quantity of to-be-reported space-frequency vector pairs and the quantity of to-be-reported frequency domain vectors may be predefined; or a relationship among the quantity of to-be-reported space-frequency vector pairs, the quantity of to-be-reported spatial domain vectors, and the quantity of to-be-reported frequency domain vectors may be predefined. In other words, there may be a correspondence between the quantity of to-be-reported space-frequency vector pairs and the quantity of to-be-reported spatial domain vectors, between the quantity of to-be-reported space-frequency vector pairs and the quantity of to-be-reported frequency domain vectors, or between the quantity of to-be-reported space-frequency vector pairs and both of the other two quantities. Therefore, when the network device indicates the quantities/quantity of to-be-reported spatial domain vectors and/or frequency domain vectors that are/is configured for each transport layer, the terminal device may determine, based on the predefined correspondence, the quantity of to-be-reported space-frequency vector pairs that is configured for each transport layer. It may be understood that, when the meaning represented by the value indicated by the first indication information changes, the predefined rule for determining the reporting quantity corresponding to each transport layer also changes. In addition, the quantity of to-be-reported space-frequency vector pairs that is configured for each transport layer may alternatively be predefined, for example, defined in a protocol. For example, the quantity of to-be-reported space-frequency vector pairs that is configured for each transport layer, or the maximum value K, for each value of R may be predefined in the protocol, or the quantities/quantity of to-be-reported spatial domain vectors and/or frequency domain vectors that are/is configured for each transport layer for each value of R may be predefined in the protocol. This is not limited in this application. It should be noted that, for weighting coefficients whose amplitude quantization values are zero, the amplitudes and phases corresponding to these weighting coefficients may not be reported. In other words, the terminal device may not report the weighting coefficients whose amplitude quantization values are zero. Therefore, the quantity of space-frequency vector pairs actually reported by the terminal device to the network device for the R transport layers may be less than or equal to the pre-configured reporting quantity, and may also be less than or equal to the maximum value of the pre-configured reporting quantity. For example, for the rth transport layer, Tr≤Kr≤K, as the sketch below illustrates.
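The sketch below illustrates why Tr may fall below Kr: coefficients whose quantized amplitude is zero are simply not reported. The 4-level amplitude quantizer and the random amplitudes are assumptions for illustration only.

```python
import numpy as np

# Kr configured coefficients for one layer; amplitudes are placeholders.
rng = np.random.default_rng(2)
Kr = 6
amplitudes = np.abs(rng.standard_normal(Kr)) * 0.4

# Quantize each amplitude to the nearest of a few assumed levels.
levels = np.array([0.0, 0.25, 0.5, 1.0])
quantized = levels[np.argmin(np.abs(amplitudes[:, None] - levels), axis=1)]

# Zero-amplitude coefficients are not reported, so T_r <= K_r.
Tr = int(np.count_nonzero(quantized))
assert Tr <= Kr
print(f"configured K_r = {Kr}, actually reported T_r = {Tr}")
```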
As described above, when the terminal device indicates, by using each indicator bit in the bitmap, whether the corresponding space-frequency vector pair is selected, this is equivalent to implicitly indicating the quantity and the positions of space-frequency vector pairs reported for each transport layer. The terminal device may calculate a weighted sum of the one or more space-frequency vector pairs reported for each transport layer, to construct a precoding vector corresponding to each frequency domain unit at the transport layer. Therefore, each space-frequency vector pair may correspond to one weighting coefficient. Therefore, that the terminal device indicates, by using the bitmap, the quantity and the positions of space-frequency vector pairs reported for the R transport layers may also be understood as that the terminal device indicates, by using the bitmap, the quantity and the positions of weighting coefficients reported for the R transport layers. The following describes in detail a specific process of indicating, by using the bitmap, the quantity and the positions of space-frequency vector pairs reported for the R transport layers. In this embodiment of this application, in addition to indicating the quantity of space-frequency vector pairs reported for the R transport layers, the terminal device may further indicate, by using the CSI report, the weighting coefficients corresponding to the space-frequency vector pairs, and the like. Therefore, when different implementations are described below, the information carried in each part of the CSI report is further described with reference to a first part and the second part of the CSI report. It should be understood that the various embodiments listed below are shown only for better understanding of the method provided in this application, and should not constitute any limitation on this application. It is assumed that a quantity of polarization directions of a transmit antenna is 1. For the rth transport layer in the R transport layers, a quantity of corresponding indicator bits in the sub-bitmap may be, for example, Lr×Mr, so that the Lr×Mr indicator bits correspond to the Lr×Mr space-frequency vector pairs determined by the Lr spatial domain vectors and the Mr frequency domain vectors. A length of the sub-bitmap corresponding to the rth transport layer may be related to the values of Lr and Mr configured for the rth transport layer. In other words, the length of the bitmap may be related to the quantities of to-be-reported spatial domain vectors and frequency domain vectors that are configured for each transport layer. For example, the length of the bitmap may be the maximum value, determined by traversing R from 1 to Rm, of the sum of Lr×Mr over r = 1 to R. If Lr=L and Mr=M, a correspondence between the L×M indicator bits in the sub-bitmap corresponding to the rth transport layer and the L×M space-frequency vector pairs may be related to a combination manner of spatial domain vectors and frequency domain vectors in the L×M space-frequency vector pairs. For example, the L×M space-frequency vector pairs corresponding to the L×M indicator bits may be arranged by first traversing the M frequency domain vectors and then traversing the L spatial domain vectors, or may be arranged by first traversing the L spatial domain vectors and then traversing the M frequency domain vectors. It is assumed that the L spatial domain vectors selected from a spatial domain vector set are denoted as vs1, . . . , and vsL, and the M frequency domain vectors selected from a frequency domain vector set are denoted as vf1, . . . , and vfM.
If the M frequency domain vectors are first traversed and then the L spatial domain vectors are traversed, an arrangement sequence of the L×M space-frequency vector pairs may be (vs1, vf1), (vs1, vf2), . . . , (vs1, vfM), (vs2, vf1), (vs2, vf2), . . . , and (vsL, vfM). There are a total of L×M space-frequency vector pairs. For brevity, the pairs are not listed herein one by one. The L×M bits in the bitmap are in a one-to-one correspondence with the L×M space-frequency vector pairs. If the L spatial domain vectors are first traversed and then the M frequency domain vectors are traversed, an arrangement sequence of the L×M space-frequency vector pairs may be (vs1, vf1), (vs2, vf1), . . . , (vsL, vf1), (vs1, vf2), (vs2, vf2), . . . , and (vsL, vfM). There are a total of L×M space-frequency vector pairs. For brevity, the pairs are not listed herein one by one. The L×M bits in the bitmap are in a one-to-one correspondence with the L×M space-frequency vector pairs. In the foregoing descriptions, the rth transport layer is used as an example to briefly describe a specific method for indicating the positions of the reported space-frequency vector pairs by using the sub-bitmap. For any one of the plurality of transport layers, the terminal device may indicate the position of a reported space-frequency vector pair in the same manner. The sub-bitmaps corresponding to the R transport layers may be concatenated together to form the bitmap used to indicate the space-frequency vector pairs reported for the R transport layers. For ease of differentiation and description, a bitmap used to indicate the positions of the space-frequency vector pairs reported for each of the R transport layers is referred to as a sub-bitmap below. The bitmap corresponding to the R transport layers may include R sub-bitmaps. In this embodiment of this application, the length of the bitmap may be a fixed value. In other words, the length of the bitmap may be independent of the quantity R of transport layers. In a possible design, the quantity of polarization directions of the transmit antenna is 1, and the length of the bitmap may be L×M×Rm. In other words, the length of the bitmap may be designed according to the predefined maximum quantity Rm of transport layers. For the R transport layers, the first L×M×R bits in the bitmap are valid. Herein, "valid" may mean that the bits may be used to indicate positions of space-frequency vector pairs. Specifically, the bitmap having the length of L×M×Rm may include Rm sub-bitmaps, and each sub-bitmap may correspond to a plurality of space-frequency vector pairs at one transport layer. For example, when the quantity R of transport layers is actually 1, the first L×M bits in the bitmap are valid, and may be referred to as indicator bits; the last L×M×3 bits have no effect. Compared with the first L×M indicator bits, the last L×M×3 bits may be referred to as invalid bits, and the invalid bits may be padded with any values, for example, padded with zeros. When the quantity R of transport layers is actually 2, the first L×M×2 bits in the bitmap are valid, and the last L×M×2 bits may be any bits. When the quantity R of transport layers is actually 3, the first L×M×3 bits in the bitmap are valid, and the last L×M bits may be any bits; and so on. For brevity, examples are not described one by one herein. The invalid bits are still considered as a part of the bitmap. In other words, the bitmap may include actually valid indicator bits and invalid bits, as the sketch below illustrates.
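The following sketch builds the fixed-length bitmap of this first design: length L×M×Rm, frequency-domain-first traversal within each sub-bitmap, and zero-padded invalid bits after the first L×M×R valid bits. The selected pairs and all dimensions are assumptions for illustration.

```python
# Assumed configuration; frequency-first traversal maps the pair of the
# l-th spatial and m-th frequency vector (0-based) of layer r to bit index
# r*L*M + l*M + m.
L, M, Rm, R = 2, 3, 4, 2
bitmap = [0] * (L * M * Rm)

# Hypothetical selections as (layer, spatial index, frequency index).
selected = [(0, 0, 0), (0, 1, 2), (1, 0, 1)]
for r, l, m in selected:
    bitmap[r * L * M + l * M + m] = 1  # mark the selected pair

valid = bitmap[:L * M * R]    # indicator bits for the R transport layers
invalid = bitmap[L * M * R:]  # zero-padded, not interpreted by the peer
print(f"valid bits:   {valid}")
print(f"invalid bits: {invalid}")
```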
The indicator bits and the invalid bits may be used as a whole. For example, the indicator bits and the invalid bits may belong to a code block, and may be encoded as a whole. It should be understood that, that the indicator bits and the invalid bits belong to a code block does not mean that the code block includes only the indicator bits and the invalid bits; the code block may further include more information bits. This is not limited in this application. For brevity, descriptions of a same or similar case are omitted below. It should be understood that the relative position of the indicator bits and the invalid bits in the foregoing example is merely an example, and should not constitute any limitation on this application. For example, the invalid bits may alternatively be located before the indicator bits. It should be noted that, in this embodiment of this application, the invalid bits are considered as a part of the bitmap. The bitmap may be used as an indication field to indicate the positions of the space-frequency vector pairs reported for the R transport layers. When the quantity R of transport layers changes, the length of the bitmap is a fixed value. In other words, the length of the indication field is a fixed value. However, this should not constitute any limitation on this application. The indication field may be understood differently. For example, the indication field may alternatively include only the actually valid indicator bits in the bitmap. Bits other than the actually valid indicator bits in the bitmap may be padded with any values, for example, padded with a string of invalid bits whose values are "0", to ensure that the total length of the indicator bits and the invalid bits remains unchanged when the value of R changes. In this case, the string of padded invalid bits whose values are "0" is a part external to the indication field. The network device does not need to interpret this part. The bits that may be padded with any values in this part are referred to as padding bits or supplementary bits. If the padding bits are considered as a part external to the indication field, the bitmap whose length is L×M×Rm defined above may include the indication field and the padding bits. In this case, only the actually valid indicator bits in the bitmap are considered as the indication field, and the length of the indication field may be related to the quantity of transport layers. For example, in the foregoing design, the length of the indication field may be L×M×R. In an embodiment, when Rm=4, the length of the bitmap is L×M×4. Optionally, for the R sub-bitmaps in the bitmap, a relationship among the quantity Lr of spatial domain vectors, the quantity Mr of frequency domain vectors, and the quantity Kr of space-frequency vector pairs may be configured as shown in Table 1:

TABLE 1
        First layer (r = 1)   Second layer (r = 2)   Third layer (r = 3)   Fourth layer (r = 4)   ΣKr (r = 1..R)
        L1   M1   K1          L2   M2   K2           L3   M3   K3          L4   M4   K4
R = 1   L    M    K                                                                               K
R = 2   L    M    K           L    M    K                                                         2K
R = 3   L    M    K           L    M    K/2          L    M    K/2                                2K
R = 4   L    M    K/2         L    M    K/2          L    M    K/2         L    M    K/2          2K

As shown in Table 1, when R=1, the quantity of reported space-frequency vector pairs is K. In other words, a total quantity of space-frequency vector pairs reported by the terminal device is K. The K space-frequency vector pairs are selected from L×M space-frequency vector pairs determined by L spatial domain vectors and M frequency domain vectors.
When R=2, a total quantity of space-frequency vector pairs reported by the terminal device for the two transport layers is 2K. A quantity of space-frequency vector pairs reported for each transport layer is K, and the K space-frequency vector pairs reported for each transport layer are selected from L×M space-frequency vector pairs determined by L spatial domain vectors and M frequency domain vectors. It should be understood that the L spatial domain vectors corresponding to the first transport layer may be the same as or different from the L spatial domain vectors corresponding to the second transport layer, and the M frequency domain vectors corresponding to the first transport layer may be the same as or different from the M frequency domain vectors corresponding to the second transport layer. This is not limited in this application. When R=3, a total quantity of space-frequency vector pairs reported by the terminal device for the three transport layers is 2K. A quantity of space-frequency vector pairs reported for the first transport layer is K, and the K space-frequency vector pairs may be selected from L×M space-frequency vector pairs determined by L spatial domain vectors and M frequency domain vectors. A quantity of space-frequency vector pairs reported for the second transport layer is K/2, and a quantity of space-frequency vector pairs reported for the third transport layer is also K/2; in each case, the K/2 space-frequency vector pairs may be selected from L×M space-frequency vector pairs determined by L spatial domain vectors and M frequency domain vectors. It should be understood that the L spatial domain vectors corresponding to the three transport layers may be the same or different, and the M frequency domain vectors corresponding to the three transport layers may be the same or different. This is not limited in this application. Herein, that the three transport layers have the same L corresponding spatial domain vectors may mean that any two of the three transport layers have the same corresponding spatial domain vectors; that is, the L spatial domain vectors may be shared by the three transport layers. That the three transport layers have different corresponding spatial domain vectors may mean that at least two of the three transport layers have different corresponding spatial domain vectors; that is, the L spatial domain vectors may not be shared by the three transport layers, and the spatial domain vectors corresponding to the three transport layers may be independent of each other. Similarly, that the three transport layers have the same M corresponding frequency domain vectors may mean that any two of the three transport layers have the same frequency domain vectors; that is, the M frequency domain vectors may be shared by the three transport layers. That the three transport layers have different corresponding frequency domain vectors may mean that at least two of the three transport layers have different corresponding frequency domain vectors; that is, the M frequency domain vectors may not be shared by the three transport layers, and the frequency domain vectors corresponding to the three transport layers may be independent of each other. For brevity, descriptions of a same or similar case are omitted below. When R=4, a total quantity of space-frequency vector pairs reported by the terminal device for the four transport layers is 2K.
A quantity of space-frequency vector pairs reported for each transport layer is K/2, and the K/2 space-frequency vector pairs reported for each transport layer may be selected from L×M space-frequency vector pairs determined by L spatial domain vectors and M frequency domain vectors. It should be understood that the L spatial domain vectors corresponding to the four transport layers may be the same or different, and the M frequency domain vectors corresponding to the four transport layers may be the same or different. This is not limited in this application. The cases in which transport layers have the same or different corresponding spatial domain vectors or frequency domain vectors are described above by using the three transport layers as an example. For brevity, details are not described herein again. In another possible design, the quantity of polarization directions of the transmit antenna is 1, and the length of the bitmap may be L×M×2. When Rm>2, the length of the bitmap is smaller than the length of the bitmap in the previous design. When R<2, that is, R=1, the first L×M bits in the bitmap are valid, and the last L×M bits may be padded with any values. The padding bits may be considered as a part of the bitmap. In other words, the bitmap may include indicator bits and padding bits. The indicator bits and the padding bits may be used as a whole, for example, may belong to a code block. It should be understood that the relative position of the indicator bits and the padding bits in the foregoing example is merely an example, and should not constitute any limitation on this application. For example, the padding bits may alternatively be located before the indicator bits. It should be noted that, in this embodiment of this application, the padding bits are considered as a part of the bitmap. The bitmap may be used as an indication field to indicate the positions of the space-frequency vector pairs reported for the R transport layers. When the quantity R of transport layers changes, the length of the bitmap is a fixed value. In other words, the length of the indication field is a fixed value. However, this should not constitute any limitation on this application. For example, the indication field may alternatively include only the valid bits in the bitmap, and the padding bits may be considered as a part external to the indication field. If the padding bits are considered as a part external to the indication field, when R=1, the bitmap whose length is L×M×2 defined above may include the indication field and the padding bits. In this case, the length of the indication field may be related to the quantity of transport layers. For example, in the foregoing design, when R=1, the length of the indication field may be L×M. When R≥2, all bits in the bitmap are valid. Specifically, when R=2, every L×M bits in the bitmap may correspond to one transport layer. For example, the first L×M bits in the bitmap correspond to the first transport layer, and may be a sub-bitmap corresponding to the first transport layer; and the last L×M bits may correspond to the second transport layer, and may be a sub-bitmap corresponding to the second transport layer.
When R=3, the first L×M bits in the bitmap may correspond to the first transport layer, and may be a sub-bitmap corresponding to the first transport layer; the intermediate L×M/2 bits may correspond to the second transport layer, and may be a sub-bitmap corresponding to the second transport layer; and the last L×M/2 bits may correspond to the third transport layer, and may be a sub-bitmap corresponding to the third transport layer. When R=4, every L×M/2 bits in the bitmap correspond to one transport layer. For example, the first L×M/2 bits in the bitmap may correspond to the first transport layer, and may be a sub-bitmap corresponding to the first transport layer; the L×M/2 bits after the first L×M/2 bits may correspond to the second transport layer, and may be a sub-bitmap corresponding to the second transport layer; the L×M/2 bits after the first L×M bits may correspond to the third transport layer, and may be a sub-bitmap corresponding to the third transport layer; and the last L×M/2 bits may correspond to the fourth transport layer, and may be a sub-bitmap corresponding to the fourth transport layer. It should be understood that the correspondences between the bits and the transport layers listed above are merely examples, and should not constitute any limitation on this application. For example, the bits in the bitmap may alternatively correspond to the first to the Rth transport layers from back to front. Optionally, for the R sub-bitmaps in the bitmap, a relationship among the quantity Lr of spatial domain vectors, the quantity Mr of frequency domain vectors, and the quantity Kr of space-frequency vector pairs may be configured as shown in Table 2:

TABLE 2
        First layer (r = 1)   Second layer (r = 2)   Third layer (r = 3)   Fourth layer (r = 4)   ΣKr (r = 1..R)
        L1   M1   K1          L2   M2   K2           L3   M3   K3          L4   M4   K4
R = 1   L    M    K                                                                               K
R = 2   L    M    K           L    M    K                                                         2K
R = 3   L    M    K           L    M/2  K/2          L    M/2  K/2                                2K
R = 4   L    M/2  K/2         L    M/2  K/2          L    M/2  K/2         L    M/2  K/2          2K

As shown in Table 2, when R=1, the quantity of reported space-frequency vector pairs is K. In other words, a total quantity of space-frequency vector pairs reported by the terminal device for the transport layer is K. The K space-frequency vector pairs may be selected from L×M space-frequency vector pairs determined by L spatial domain vectors and M frequency domain vectors. When R=2, a total quantity of space-frequency vector pairs reported by the terminal device for the two transport layers is 2K. A quantity of space-frequency vector pairs reported for each transport layer is K, and the K space-frequency vector pairs reported for each transport layer may be selected from L×M space-frequency vector pairs determined by L spatial domain vectors and M frequency domain vectors. It should be understood that the L spatial domain vectors corresponding to the first transport layer may be the same as or different from the L spatial domain vectors corresponding to the second transport layer, and the M frequency domain vectors corresponding to the first transport layer may be the same as or different from the M frequency domain vectors corresponding to the second transport layer. This is not limited in this application. When R=3, a total quantity of space-frequency vector pairs reported by the terminal device for the three transport layers is 2K. A quantity of space-frequency vector pairs reported for the first transport layer is K, and the K space-frequency vector pairs may be selected from L×M space-frequency vector pairs determined by L spatial domain vectors and M frequency domain vectors.
A quantity of space-frequency vector pairs reported for the second transport layer is K/2, and the K/2 space-frequency vector pairs may be selected from L×M/2 space-frequency vector pairs determined by L spatial domain vectors and M/2 frequency domain vectors. A quantity of space-frequency vector pairs reported for the third transport layer is K/2, and the K/2 space-frequency vector pairs may be selected from L×M/2 space-frequency vector pairs determined by L spatial domain vectors and M/2 frequency domain vectors. It should be understood that the L spatial domain vectors corresponding to the three transport layers may be the same or different; the M/2 frequency domain vectors corresponding to each of the second transport layer and the third transport layer may be a subset of the M frequency domain vectors corresponding to the first transport layer, or may not belong to the M frequency domain vectors; and the second transport layer and the third transport layer may have the same or different M/2 corresponding frequency domain vectors. This is not limited in this application. When R=4, a total quantity of space-frequency vector pairs reported by the terminal device for the four transport layers is 2K. A quantity of space-frequency vector pairs reported for each transport layer is K/2, and the K/2 space-frequency vector pairs reported for each transport layer may be selected from L×M/2 space-frequency vector pairs determined by L spatial domain vectors and M/2 frequency domain vectors. It should be understood that the L spatial domain vectors corresponding to the four transport layers may be the same or different, and the M/2 frequency domain vectors corresponding to the four transport layers may be the same or different. This is not limited in this application. Optionally, for the R sub-bitmaps in the bitmap, a relationship among the quantity Lr of spatial domain vectors, the quantity Mr of frequency domain vectors, and the quantity Kr of space-frequency vector pairs may be configured as shown in Table 3:

TABLE 3
        First layer (r = 1)   Second layer (r = 2)   Third layer (r = 3)   Fourth layer (r = 4)   ΣKr (r = 1..R)
        L1   M1   K1          L2   M2   K2           L3   M3   K3          L4   M4   K4
R = 1   L    M    K                                                                               K
R = 2   L    M    K/2         L    M    K/2                                                       K
R = 3   L    M    K/2         L    M/2  K/4          L    M/2  K/4                                K
R = 4   L    M/2  K/4         L    M/2  K/4          L    M/2  K/4         L    M/2  K/4          K

As shown in Table 3, when R=1, the quantity of reported space-frequency vector pairs is K. In other words, a total quantity of space-frequency vector pairs reported by the terminal device for the transport layer is K. The K space-frequency vector pairs may be selected from L×M space-frequency vector pairs determined by L spatial domain vectors and M frequency domain vectors. When R=2, a total quantity of space-frequency vector pairs reported by the terminal device for the two transport layers is K. A quantity of space-frequency vector pairs reported for each transport layer is K/2, and the K/2 space-frequency vector pairs reported for each transport layer may be selected from L×M space-frequency vector pairs determined by L spatial domain vectors and M frequency domain vectors. It should be understood that the L spatial domain vectors corresponding to the first transport layer may be the same as or different from the L spatial domain vectors corresponding to the second transport layer, and the M frequency domain vectors corresponding to the first transport layer may be the same as or different from the M frequency domain vectors corresponding to the second transport layer. This is not limited in this application.
When R=3, a total quantity of space-frequency vector pairs reported by the terminal device for the three transport layers is K. A quantity of space-frequency vector pairs reported for the first transport layer is K/2, and the K/2 space-frequency vector pairs may be selected from L×M space-frequency vector pairs determined by L spatial domain vectors and M frequency domain vectors. A quantity of space-frequency vector pairs reported for the second transport layer is K/4, and the K/4 space-frequency vector pairs may be selected from L×M/2 space-frequency vector pairs determined by L spatial domain vectors and M/2 frequency domain vectors. A quantity of space-frequency vector pairs reported for the third transport layer is K/4, and the K/4 space-frequency vector pairs may be selected from L×M/2 space-frequency vector pairs determined by L spatial domain vectors and M/2 frequency domain vectors.

It should be understood that the L spatial domain vectors corresponding to the three transport layers may be the same or different; the M/2 frequency domain vectors corresponding to each of the second transport layer and the third transport layer may be a subset of the M frequency domain vectors corresponding to the first transport layer, or may not belong to those M frequency domain vectors; and the M/2 frequency domain vectors corresponding to the second transport layer and the third transport layer may be the same or different. This is not limited in this application.

When R=4, a total quantity of space-frequency vector pairs reported by the terminal device for the four transport layers is K. The quantity of space-frequency vector pairs reported for each transport layer is K/4, and the K/4 space-frequency vector pairs may be selected from L×M/2 space-frequency vector pairs determined by L spatial domain vectors and M/2 frequency domain vectors. It should be understood that the L spatial domain vectors corresponding to the four transport layers may be the same or different, and the M/2 frequency domain vectors corresponding to the four transport layers may be the same or different. This is not limited in this application.

In still another possible design, the quantity of polarization directions of the transmit antenna is 1, and the length of the bitmap may be L×M. The length of the bitmap is smaller than the lengths of the bitmaps in the previous two designs. When R is any value, all bits in the bitmap are valid, and no padding bit is required to ensure a same length. Therefore, no matter how the indication field is defined, the length of the indication field in this design is a fixed value.

When R=1, all the bits in the bitmap may correspond to one transport layer.

When R=2, the first L×M/2 bits in the bitmap may correspond to the first transport layer, and may be a sub-bitmap corresponding to the first transport layer; and the last L×M/2 bits may correspond to the second transport layer, and may be a sub-bitmap corresponding to the second transport layer.

When R=3, the first L×M/2 bits in the bitmap may correspond to the first transport layer, and may be a sub-bitmap corresponding to the first transport layer; the L×M/4 bits after the first L×M/2 bits may correspond to the second transport layer, and may be a sub-bitmap corresponding to the second transport layer; and the last L×M/4 bits may correspond to the third transport layer, and may be a sub-bitmap corresponding to the third transport layer.

When R=4, every L×M/4 bits in the bitmap correspond to one transport layer.
For example, the first L×M/4 bits in the bitmap may correspond to the first transport layer, and may be a sub-bitmap corresponding to the first transport layer; the L×M/4 bits after the first L×M/4 bits may correspond to the second transport layer, and may be a sub-bitmap corresponding to the second transport layer; the L×M/4 bits after the first L×M/2 bits may correspond to the third transport layer, and may be a sub-bitmap corresponding to the third transport layer; and the last L×M/4 bits may correspond to the fourth transport layer, and may be a sub-bitmap corresponding to the fourth transport layer.

It should be understood that the correspondences between the bits and the transport layers listed above are merely examples, and should not constitute any limitation on this application. For example, the bits in the bitmap may alternatively correspond to the first to the R-th transport layers from back to front.

Optionally, for the R sub-bitmaps in the bitmap, the relationship among the quantity L_r of spatial domain vectors, the quantity M_r of frequency domain vectors, and the quantity K_r of space-frequency vector pairs may be configured as shown in Table 4:

TABLE 4

         First transport    Second transport   Third transport    Fourth transport
         layer (r = 1)      layer (r = 2)      layer (r = 3)      layer (r = 4)
         L_1   M_1   K_1    L_2   M_2   K_2    L_3   M_3   K_3    L_4   M_4   K_4    ∑K_r
R = 1    L     M     K                                                               K
R = 2    L     M/2   K/2    L     M/2   K/2                                          K
R = 3    L     M/2   K/2    L     M/4   K/4    L     M/4   K/4                       K
R = 4    L     M/4   K/4    L     M/4   K/4    L     M/4   K/4    L     M/4   K/4    K

As shown in Table 4, when R=1, the quantity of reported space-frequency vector pairs is K. In other words, a total quantity of space-frequency vector pairs reported by the terminal device is K. The K space-frequency vector pairs are selected from L×M space-frequency vector pairs determined by L spatial domain vectors and M frequency domain vectors.

When R=2, a total quantity of space-frequency vector pairs reported by the terminal device for the two transport layers is K. A quantity of space-frequency vector pairs reported for each transport layer is K/2, and the K/2 space-frequency vector pairs reported for each transport layer may be selected from L×M/2 space-frequency vector pairs determined by L spatial domain vectors and M/2 frequency domain vectors.

When R=3, a total quantity of space-frequency vector pairs reported by the terminal device for the three transport layers is K. A quantity of space-frequency vector pairs reported for the first transport layer is K/2, and the K/2 space-frequency vector pairs may be selected from L×M/2 space-frequency vector pairs determined by L spatial domain vectors and M/2 frequency domain vectors. A quantity of space-frequency vector pairs reported for the second transport layer is K/4, and the K/4 space-frequency vector pairs may be selected from L×M/4 space-frequency vector pairs determined by L spatial domain vectors and M/4 frequency domain vectors. A quantity of space-frequency vector pairs reported for the third transport layer is K/4, and the K/4 space-frequency vector pairs may be selected from L×M/4 space-frequency vector pairs determined by L spatial domain vectors and M/4 frequency domain vectors.
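To make the partitioning of this single-polarization L×M bitmap concrete, the following is a minimal sketch in Python. It assumes the Table 4 configuration, a front-to-back layer order, and a row-major mapping in which bit i of a layer's sub-bitmap indicates the space-frequency vector pair formed by the (i // M_r)-th spatial domain vector and the (i % M_r)-th frequency domain vector; none of these conventions is mandated by the text.

def layer_dims(R, L, M):
    """Per-layer (L_r, M_r) for the single-polarization L*M design (Table 4)."""
    if R == 1:
        return [(L, M)]
    if R == 2:
        return [(L, M // 2)] * 2
    if R == 3:
        return [(L, M // 2), (L, M // 4), (L, M // 4)]
    if R == 4:
        return [(L, M // 4)] * 4
    raise ValueError("R must be 1, 2, 3, or 4")

def decode(bitmap, R, L, M):
    """Split the bitmap into sub-bitmaps and list the reported (l, m) pairs per layer."""
    assert len(bitmap) == L * M        # in this design, all bits are valid
    reported, offset = [], 0
    for L_r, M_r in layer_dims(R, L, M):
        sub = bitmap[offset:offset + L_r * M_r]
        offset += L_r * M_r
        # bit i is taken to indicate the pair (i // M_r, i % M_r)
        reported.append([(i // M_r, i % M_r) for i, b in enumerate(sub) if b])
    return reported

# Example: L = 4, M = 4, R = 2 gives two sub-bitmaps of L*M/2 = 8 bits each.
bits = [0] * 16
bits[0] = bits[5] = 1                  # two pairs reported for the first layer
bits[8] = 1                            # one pair reported for the second layer
print(decode(bits, R=2, L=4, M=4))

Under this convention, the number of set bits in the r-th sub-bitmap equals the quantity K_r of space-frequency vector pairs reported for the r-th transport layer.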
It should be understood that the L spatial domain vectors corresponding to the three transport layers may be the same or different; the M/4 frequency domain vectors corresponding to each of the second transport layer and the third transport layer may be a subset of the M/2 frequency domain vectors corresponding to the first transport layer, or may not belong to those M/2 frequency domain vectors; and the M/4 frequency domain vectors corresponding to the second transport layer and the third transport layer may be the same or different. This is not limited in this application.

When R=4, a total quantity of space-frequency vector pairs reported by the terminal device for the four transport layers is K. A quantity of space-frequency vector pairs reported for each transport layer is K/4, and the K/4 space-frequency vector pairs reported for each transport layer may be selected from L×M/4 space-frequency vector pairs determined by L spatial domain vectors and M/4 frequency domain vectors. It should be understood that the L spatial domain vectors corresponding to the four transport layers may be the same or different, and the M/4 frequency domain vectors corresponding to the four transport layers may be the same or different. This is not limited in this application.

For ease of understanding, the foregoing describes the bitmaps in the three different designs in detail by using an example in which the quantity of polarization directions of the transmit antenna is 1. However, this should not constitute any limitation on this application. The bitmap is also applicable to a case in which there are a plurality of polarization directions.

In a possible design, the quantity of polarization directions of the transmit antenna is 2, and the length of the bitmap may be 2L×M×R_m. Similar to the case in which the quantity of polarization directions is 1, the length of the bitmap may be designed according to a predefined maximum quantity R_m of transport layers. A difference lies in that for different polarization directions, the positions of the selected space-frequency vector pairs may be indicated separately. Therefore, when the quantity of polarization directions is 2, for the R transport layers, the first 2L×M×R bits in the bitmap are valid. For the r-th transport layer, the (2L×M×(r−1)+1)-th bit to the (2L×M×r)-th bit in the bitmap are valid. These valid bits are 2L×M bits in total. In these valid bits, the first L×M bits may correspond to a first polarization direction, and the last L×M bits may correspond to a second polarization direction. Alternatively, in these valid bits, the first L×M bits may correspond to a second polarization direction, and the last L×M bits may correspond to a first polarization direction. This is not limited in this application.

In an embodiment, when R_m=4, the length of the bitmap is 2L×M×4.

Optionally, for the R sub-bitmaps in the bitmap, the relationship among the quantity L_r of spatial domain vectors, the quantity M_r of frequency domain vectors, and the quantity K_r of space-frequency vector pairs may be configured as shown in Table 5:

TABLE 5

         First transport     Second transport    Third transport     Fourth transport
         layer (r = 1)       layer (r = 2)       layer (r = 3)       layer (r = 4)
         L_1    M_1   K_1    L_2    M_2   K_2    L_3    M_3   K_3    L_4    M_4   K_4    ∑K_r
R = 1    L/2L   M     K                                                                  K
R = 2    L/2L   M     K      L/2L   M     K                                              2K
R = 3    L/2L   M     K      L/2L   M     K/2    L/2L   M     K/2                        2K
R = 4    L/2L   M     K/2    L/2L   M     K/2    L/2L   M     K/2    L/2L   M     K/2    2K

It should be noted that "L/2L" in Table 5 indicates L or 2L. The value of L_r shown in Table 5 for each transport layer takes the spatial domain vectors in the two polarization directions into account. Therefore, the value may be L or 2L.
If the two polarization directions share L same spatial domain vectors, the value of L_r may be L. If L spatial domain vectors are used in each of the two polarization directions, the value of L_r may be 2L. As described above, the L spatial domain vectors in one polarization direction may be the same as or different from those in the other polarization direction. That is, when the quantity of polarization directions is 2, the value of L_r may be L or 2L. This is not limited in this application.

As shown in Table 5, when R=1, the quantity of reported space-frequency vector pairs is K. In other words, a total quantity of space-frequency vector pairs reported by the terminal device is K. A space-frequency vector pair reported for the first polarization direction may be selected from L×M space-frequency vector pairs determined by L spatial domain vectors corresponding to the first polarization direction and M frequency domain vectors corresponding to the first polarization direction, and a space-frequency vector pair reported for the second polarization direction may be selected from L×M space-frequency vector pairs determined by L spatial domain vectors corresponding to the second polarization direction and M frequency domain vectors corresponding to the second polarization direction.

When R=2, a total quantity of space-frequency vector pairs reported by the terminal device for the two transport layers is 2K. A quantity of space-frequency vector pairs reported for each transport layer is K, and the K space-frequency vector pairs reported for each transport layer are selected from 2L×M space-frequency vector pairs determined by 2L spatial domain vectors and M frequency domain vectors. A space-frequency vector pair reported for the r-th transport layer in the first polarization direction may be selected from L×M space-frequency vector pairs determined by L spatial domain vectors corresponding to the first polarization direction and M frequency domain vectors corresponding to the first polarization direction, and a space-frequency vector pair reported for the r-th transport layer in the second polarization direction may be selected from L×M space-frequency vector pairs determined by L spatial domain vectors corresponding to the second polarization direction and M frequency domain vectors corresponding to the second polarization direction. When R=2, r is 1 or 2.

It should be understood that the 2L spatial domain vectors corresponding to the first transport layer may be the same as or different from the 2L spatial domain vectors corresponding to the second transport layer. The M frequency domain vectors corresponding to the first transport layer may be the same as or different from the M frequency domain vectors corresponding to the second transport layer. This is not limited in this application.

When R=3, a total quantity of space-frequency vector pairs reported by the terminal device for the three transport layers is 2K. A quantity of space-frequency vector pairs reported for the first transport layer is K, and the K space-frequency vector pairs may be selected from 2L×M space-frequency vector pairs determined by 2L spatial domain vectors and M frequency domain vectors. A quantity of space-frequency vector pairs reported for the second transport layer is K/2, and the K/2 space-frequency vector pairs may be selected from 2L×M space-frequency vector pairs determined by 2L spatial domain vectors and M frequency domain vectors.
A quantity of space-frequency vector pairs reported for the third transport layer is K/2, and the K/2 space-frequency vector pairs may be selected from 2L×M space-frequency vector pairs determined by 2L spatial domain vectors and M frequency domain vectors. A space-frequency vector pair reported for the r-th transport layer in the first polarization direction may be selected from L×M space-frequency vector pairs determined by L spatial domain vectors corresponding to the first polarization direction and M frequency domain vectors corresponding to the first polarization direction, and a space-frequency vector pair reported for the r-th transport layer in the second polarization direction may be selected from L×M space-frequency vector pairs determined by L spatial domain vectors corresponding to the second polarization direction and M frequency domain vectors corresponding to the second polarization direction. When R=3, r is 1, 2, or 3.

It should be understood that the L spatial domain vectors corresponding to the three transport layers may be the same or different, and the M frequency domain vectors corresponding to the three transport layers may be the same or different. This is not limited in this application.

When R=4, a total quantity of space-frequency vector pairs reported by the terminal device for the four transport layers is 2K. A quantity of space-frequency vector pairs reported for each transport layer is K/2, and the K/2 space-frequency vector pairs reported for each transport layer may be selected from 2L×M space-frequency vector pairs determined by 2L spatial domain vectors and M frequency domain vectors. A space-frequency vector pair reported for the r-th transport layer in the first polarization direction may be selected from L×M space-frequency vector pairs determined by L spatial domain vectors corresponding to the first polarization direction and M frequency domain vectors corresponding to the first polarization direction, and a space-frequency vector pair reported for the r-th transport layer in the second polarization direction may be selected from L×M space-frequency vector pairs determined by L spatial domain vectors corresponding to the second polarization direction and M frequency domain vectors corresponding to the second polarization direction. When R=4, r is 1, 2, 3, or 4.

It should be understood that the L spatial domain vectors corresponding to the four transport layers may be the same or different, and the M frequency domain vectors corresponding to the four transport layers may be the same or different. This is not limited in this application.

It should be further understood that, when R is 1, 2, 3, or 4 as described above, the L spatial domain vectors corresponding to a transport layer in the first polarization direction may be the same as or different from the L spatial domain vectors corresponding to the same transport layer in the second polarization direction. This is not limited in this application.

To adapt to more possible polarization directions, the length of the bitmap may be more generally represented as P×L×M×R_m, where P represents a quantity of polarization directions, P≥1, and P is an integer.

In another possible design, the quantity of polarization directions of the transmit antenna is 2, and the length of the bitmap may be 2L×M×2. Similar to the case in which the quantity of polarization directions is 1, the length of the bitmap in this design is less than the length of the bitmap in the previous design.
A difference lies in that for different polarization directions, the positions of the selected space-frequency vector pairs may be indicated separately. Therefore, when R < 2, that is, when R=1, the first 2L×M bits in the bitmap are valid, and the last 2L×M bits may be padded with any values. The padding bits may be considered as a part of the bitmap. In other words, the bitmap may include indicator bits and padding bits. The indicator bits and the padding bits may be used as a whole, for example, belong to a code block.

It should be understood that the relative position of the indicator bits and the padding bits in the foregoing example is merely an example, and should not constitute any limitation on this application. For example, the padding bits may alternatively be located before the indicator bits.

It should be noted that, in this embodiment of this application, the padding bits are considered as a part of the bitmap. The bitmap may be used as an indication field to indicate the positions of the space-frequency vector pairs reported for the R transport layers. When the quantity R of transport layers changes, the length of the bitmap is a fixed value. In other words, the length of the indication field is a fixed value. However, this should not constitute any limitation on this application. For example, the indication field may alternatively include only the valid bits in the bitmap, and the padding bits may be considered as a part external to the indication field. If the padding bits are considered as a part external to the indication field, when R=1, the bitmap whose length is 2L×M×2 defined above may include the indication field and the padding bits. In this case, the length of the indication field may be related to the quantity of transport layers. For example, in the foregoing design, when R=1, the length of the indication field may be 2L×M.

When R≥2, all bits in the bitmap are valid.

Specifically, when R=2, every 2L×M bits in the bitmap may correspond to one transport layer. For example, the first 2L×M bits in the bitmap correspond to the first transport layer, and may be a sub-bitmap corresponding to the first transport layer; and the last 2L×M bits may correspond to the second transport layer, and may be a sub-bitmap corresponding to the second transport layer.

When R=3, the first 2L×M bits in the bitmap may correspond to the first transport layer, and may be a sub-bitmap corresponding to the first transport layer; the intermediate 2L×M/2 (namely, L×M) bits may correspond to the second transport layer, and may be a sub-bitmap corresponding to the second transport layer; and the last 2L×M/2 (namely, L×M) bits may correspond to the third transport layer, and may be a sub-bitmap corresponding to the third transport layer.

When R=4, every 2L×M/2 (namely, L×M) bits in the bitmap correspond to one transport layer. For example, the first L×M bits in the bitmap may correspond to the first transport layer, and may be a sub-bitmap corresponding to the first transport layer; the L×M bits after the first L×M bits may correspond to the second transport layer, and may be a sub-bitmap corresponding to the second transport layer; the L×M bits after the first 2L×M bits may correspond to the third transport layer, and may be a sub-bitmap corresponding to the third transport layer; and the last L×M bits may correspond to the fourth transport layer, and may be a sub-bitmap corresponding to the fourth transport layer.
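As an illustration of the fixed-length, padded layout of this 2L×M×2 design, the following Python sketch assembles such a bitmap. The choice of 0 as the padding value and the placement of the padding after the indicator bits are assumptions made for the example; as noted above, the padding bits could equally precede the indicator bits.

def build_bitmap(sub_bitmaps, L, M):
    """Concatenate the per-layer sub-bitmaps and pad to the fixed length 2L*M*2."""
    total = 2 * L * M * 2
    bits = [b for sub in sub_bitmaps for b in sub]
    assert len(bits) <= total
    return bits + [0] * (total - len(bits))    # padding bits carry no information

# R = 1: only the first 2L*M bits are indicator bits; the rest are padding.
L, M = 2, 4
one_layer = [1, 0] * (2 * L * M // 2)          # any 2L*M indicator bits
bitmap = build_bitmap([one_layer], L, M)
print(len(bitmap))                             # 32: the length is fixed
print(sum(bitmap[2 * L * M:]))                 # 0: the last 2L*M bits are padding

For R≥2 the same function applies with no padding left over, since the sub-bitmap lengths listed above always sum to 2L×M×2.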
In addition, within the sub-bitmap corresponding to the r-th transport layer, a first half of the bits (for example, ½ of the bit length of the sub-bitmap) may correspond to the first polarization direction, and a second half of the bits may correspond to the second polarization direction. Alternatively, a first half of the bits may correspond to the second polarization direction, and a second half of the bits may correspond to the first polarization direction. This is not limited in this application.

It should be understood that the foregoing correspondences between the bits and the transport layers and the foregoing correspondences between the bits and the polarization directions are merely examples, and should not constitute any limitation on this application. For example, the bits in the bitmap may alternatively correspond to the first to the R-th transport layers from back to front.

Optionally, for the R sub-bitmaps in the bitmap, the relationship among the quantity L_r of spatial domain vectors, the quantity M_r of frequency domain vectors, and the quantity K_r of space-frequency vector pairs may be configured as shown in Table 6:

TABLE 6

         First transport     Second transport    Third transport     Fourth transport
         layer (r = 1)       layer (r = 2)       layer (r = 3)       layer (r = 4)
         L_1    M_1   K_1    L_2    M_2   K_2    L_3    M_3   K_3    L_4    M_4   K_4    ∑K_r
R = 1    L/2L   M     K                                                                  K
R = 2    L/2L   M     K      L/2L   M     K                                              2K
R = 3    L/2L   M     K      L/2L   M/2   K/2    L/2L   M/2   K/2                        2K
R = 4    L/2L   M/2   K/2    L/2L   M/2   K/2    L/2L   M/2   K/2    L/2L   M/2   K/2    2K

It should be noted that "L/2L" in Table 6 indicates L or 2L. The value of L_r shown in Table 6 for each transport layer takes the spatial domain vectors in the two polarization directions into account. Therefore, the value may be L or 2L. If the two polarization directions share L same spatial domain vectors, the value of L_r may be L. If L spatial domain vectors are used in each of the two polarization directions, the value of L_r may be 2L. As described above, the L spatial domain vectors in one polarization direction may be the same as or different from those in the other polarization direction. That is, when the quantity of polarization directions is 2, the value of L_r may be L or 2L. This is not limited in this application.

As shown in Table 6, when R=1, the quantity of reported space-frequency vector pairs is K. In other words, a total quantity of space-frequency vector pairs reported by the terminal device for the transport layer is K. The K space-frequency vector pairs may be selected from 2L×M space-frequency vector pairs determined by 2L spatial domain vectors and M frequency domain vectors. A space-frequency vector pair reported for the first polarization direction may be selected from L×M space-frequency vector pairs determined by L spatial domain vectors corresponding to the first polarization direction and M frequency domain vectors corresponding to the first polarization direction, and a space-frequency vector pair reported for the second polarization direction may be selected from L×M space-frequency vector pairs determined by L spatial domain vectors corresponding to the second polarization direction and M frequency domain vectors corresponding to the second polarization direction.

When R=2, a total quantity of space-frequency vector pairs reported by the terminal device for the two transport layers is 2K. A quantity of space-frequency vector pairs reported for each transport layer is K, and the K space-frequency vector pairs reported for each transport layer may be selected from 2L×M space-frequency vector pairs determined by 2L spatial domain vectors and M frequency domain vectors.
A space-frequency vector pair reported for the r-th transport layer in the first polarization direction may be selected from L×M space-frequency vector pairs determined by L spatial domain vectors corresponding to the first polarization direction and M frequency domain vectors corresponding to the first polarization direction, and a space-frequency vector pair reported for the r-th transport layer in the second polarization direction may be selected from L×M space-frequency vector pairs determined by L spatial domain vectors corresponding to the second polarization direction and M frequency domain vectors corresponding to the second polarization direction. When R=2, r is 1 or 2.

It should be understood that the 2L spatial domain vectors corresponding to the first transport layer may be the same as or different from the 2L spatial domain vectors corresponding to the second transport layer. The M frequency domain vectors corresponding to the first transport layer may be the same as or different from the M frequency domain vectors corresponding to the second transport layer. This is not limited in this application.

When R=3, a total quantity of space-frequency vector pairs reported by the terminal device for the three transport layers is 2K.

A quantity of space-frequency vector pairs reported for the first transport layer is K, and the K space-frequency vector pairs may be selected from 2L×M space-frequency vector pairs determined by 2L spatial domain vectors and M frequency domain vectors. In addition, a space-frequency vector pair reported for the first transport layer in the first polarization direction may be selected from L×M space-frequency vector pairs determined by L spatial domain vectors corresponding to the first polarization direction and M frequency domain vectors corresponding to the first polarization direction, and a space-frequency vector pair reported for the first transport layer in the second polarization direction may be selected from L×M space-frequency vector pairs determined by L spatial domain vectors corresponding to the second polarization direction and M frequency domain vectors corresponding to the second polarization direction.

A quantity of space-frequency vector pairs reported for the second transport layer is K/2, and the K/2 space-frequency vector pairs may be selected from 2L×M/2 (namely, L×M) space-frequency vector pairs determined by 2L spatial domain vectors and M/2 frequency domain vectors. In addition, a space-frequency vector pair reported for the second transport layer in the first polarization direction may be selected from L×M/2 space-frequency vector pairs determined by L spatial domain vectors corresponding to the first polarization direction and M/2 frequency domain vectors corresponding to the first polarization direction, and a space-frequency vector pair reported for the second transport layer in the second polarization direction may be selected from L×M/2 space-frequency vector pairs determined by L spatial domain vectors corresponding to the second polarization direction and M/2 frequency domain vectors corresponding to the second polarization direction.

A quantity of space-frequency vector pairs reported for the third transport layer is K/2, and the K/2 space-frequency vector pairs may be selected from 2L×M/2 (namely, L×M) space-frequency vector pairs determined by 2L spatial domain vectors and M/2 frequency domain vectors.
In addition, a space-frequency vector pair reported for the third transport layer in the first polarization direction may be selected from L×M/2 space-frequency vector pairs determined by L spatial domain vectors corresponding to the first polarization direction and M/2 frequency domain vectors corresponding to the first polarization direction, and a space-frequency vector pair reported for the third transport layer in the second polarization direction may be selected from L×M/2 space-frequency vector pairs determined by L spatial domain vectors corresponding to the second polarization direction and M/2 frequency domain vectors corresponding to the second polarization direction.

It should be understood that the L spatial domain vectors corresponding to the three transport layers may be the same or different; the M/2 frequency domain vectors corresponding to each of the second transport layer and the third transport layer may be a subset of the M frequency domain vectors corresponding to the first transport layer, or may not belong to those M frequency domain vectors; and the M/2 frequency domain vectors corresponding to the second transport layer and the third transport layer may be the same or different. This is not limited in this application.

When R=4, a total quantity of space-frequency vector pairs reported by the terminal device for the four transport layers is 2K. A quantity of space-frequency vector pairs reported for each transport layer is K/2, and the K/2 space-frequency vector pairs reported for each transport layer may be selected from 2L×M/2 (namely, L×M) space-frequency vector pairs determined by 2L spatial domain vectors and M/2 frequency domain vectors. A space-frequency vector pair reported for the r-th transport layer in the first polarization direction may be selected from L×M/2 space-frequency vector pairs determined by L spatial domain vectors corresponding to the first polarization direction and M/2 frequency domain vectors corresponding to the first polarization direction, and a space-frequency vector pair reported for the r-th transport layer in the second polarization direction may be selected from L×M/2 space-frequency vector pairs determined by L spatial domain vectors corresponding to the second polarization direction and M/2 frequency domain vectors corresponding to the second polarization direction. When R=4, r is 1, 2, 3, or 4.

It should be understood that the L spatial domain vectors corresponding to the four transport layers may be the same or different, and the M/2 frequency domain vectors corresponding to the four transport layers may be the same or different. This is not limited in this application.

It should be further understood that, when R is 1, 2, 3, or 4 as described above, the L spatial domain vectors corresponding to a transport layer in the first polarization direction may be the same as or different from the L spatial domain vectors corresponding to the same transport layer in the second polarization direction. This is not limited in this application.
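The per-polarization halves of a layer's sub-bitmap described above can be illustrated with a short Python sketch. Assigning the first half to the first polarization direction is only one of the two options the text allows, and the row-major bit-to-pair mapping is again an assumption made for illustration.

def split_polarizations(sub_bitmap):
    """Return the (first-polarization, second-polarization) halves of a sub-bitmap."""
    half = len(sub_bitmap) // 2
    return sub_bitmap[:half], sub_bitmap[half:]

def pairs_per_polarization(sub_bitmap, M_r):
    """List the reported (l, m) pairs separately for each polarization direction."""
    return [[(i // M_r, i % M_r) for i, b in enumerate(half) if b]
            for half in split_polarizations(sub_bitmap)]

# Example: L = 2, M = 2 gives a 2L*M = 8-bit sub-bitmap for one transport layer.
sub = [1, 0, 0, 1, 0, 1, 0, 0]
first, second = pairs_per_polarization(sub, M_r=2)
print(first)    # pairs reported in the first polarization direction
print(second)   # pairs reported in the second polarization direction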
Optionally, for the R sub-bitmaps in the bitmap, the relationship among the quantity L_r of spatial domain vectors, the quantity M_r of frequency domain vectors, and the quantity K_r of space-frequency vector pairs may be configured as shown in Table 7:

TABLE 7

         First transport     Second transport    Third transport     Fourth transport
         layer (r = 1)       layer (r = 2)       layer (r = 3)       layer (r = 4)
         L_1    M_1   K_1    L_2    M_2   K_2    L_3    M_3   K_3    L_4    M_4   K_4    ∑K_r
R = 1    L/2L   M     K                                                                  K
R = 2    L/2L   M     K/2    L/2L   M     K/2                                            K
R = 3    L/2L   M     K/2    L/2L   M/2   K/4    L/2L   M/2   K/4                        K
R = 4    L/2L   M/2   K/4    L/2L   M/2   K/4    L/2L   M/2   K/4    L/2L   M/2   K/4    K

It should be noted that "L/2L" in Table 7 indicates L or 2L. The value of L_r shown in Table 7 for each transport layer takes the spatial domain vectors in the two polarization directions into account. Therefore, the value may be L or 2L. If the two polarization directions share L same spatial domain vectors, the value of L_r may be L. If L spatial domain vectors are used in each of the two polarization directions, the value of L_r may be 2L. As described above, the L spatial domain vectors in one polarization direction may be the same as or different from those in the other polarization direction. That is, when the quantity of polarization directions is 2, the value of L_r may be L or 2L. This is not limited in this application.

As shown in Table 7, when R=1, the quantity of reported space-frequency vector pairs is K. In other words, a total quantity of space-frequency vector pairs reported by the terminal device for the transport layer is K. The K space-frequency vector pairs may be selected from 2L×M space-frequency vector pairs determined by 2L spatial domain vectors and M frequency domain vectors. A space-frequency vector pair reported for the first polarization direction may be selected from L×M space-frequency vector pairs determined by L spatial domain vectors corresponding to the first polarization direction and M frequency domain vectors corresponding to the first polarization direction, and a space-frequency vector pair reported for the second polarization direction may be selected from L×M space-frequency vector pairs determined by L spatial domain vectors corresponding to the second polarization direction and M frequency domain vectors corresponding to the second polarization direction.

When R=2, a total quantity of space-frequency vector pairs reported by the terminal device for the two transport layers is K. A quantity of space-frequency vector pairs reported for each transport layer is K/2, and the K/2 space-frequency vector pairs reported for each transport layer may be selected from L×M space-frequency vector pairs determined by L spatial domain vectors and M frequency domain vectors. A space-frequency vector pair reported for the r-th transport layer in the first polarization direction may be selected from L×M space-frequency vector pairs determined by L spatial domain vectors corresponding to the first polarization direction and M frequency domain vectors corresponding to the first polarization direction, and a space-frequency vector pair reported for the r-th transport layer in the second polarization direction may be selected from L×M space-frequency vector pairs determined by L spatial domain vectors corresponding to the second polarization direction and M frequency domain vectors corresponding to the second polarization direction. When R=2, r is 1 or 2.

It should be understood that the L spatial domain vectors corresponding to the first transport layer may be the same as or different from the L spatial domain vectors corresponding to the second transport layer.
The M frequency domain vectors corresponding to the first transport layer may be the same as or different from the M frequency domain vectors corresponding to the second transport layer. This is not limited in this application.

When R=3, a total quantity of space-frequency vector pairs reported by the terminal device for the three transport layers is K.

A quantity of space-frequency vector pairs reported for the first transport layer is K/2, and the K/2 space-frequency vector pairs may be selected from 2L×M space-frequency vector pairs determined by 2L spatial domain vectors and M frequency domain vectors. In addition, a space-frequency vector pair reported for the first transport layer in the first polarization direction may be selected from L×M space-frequency vector pairs determined by L spatial domain vectors corresponding to the first polarization direction and M frequency domain vectors corresponding to the first polarization direction, and a space-frequency vector pair reported for the first transport layer in the second polarization direction may be selected from L×M space-frequency vector pairs determined by L spatial domain vectors corresponding to the second polarization direction and M frequency domain vectors corresponding to the second polarization direction.

A quantity of space-frequency vector pairs reported for the second transport layer is K/4, and the K/4 space-frequency vector pairs may be selected from 2L×M/2 (namely, L×M) space-frequency vector pairs determined by 2L spatial domain vectors and M/2 frequency domain vectors. In addition, a space-frequency vector pair reported for the second transport layer in the first polarization direction may be selected from L×M/2 space-frequency vector pairs determined by L spatial domain vectors corresponding to the first polarization direction and M/2 frequency domain vectors corresponding to the first polarization direction, and a space-frequency vector pair reported for the second transport layer in the second polarization direction may be selected from L×M/2 space-frequency vector pairs determined by L spatial domain vectors corresponding to the second polarization direction and M/2 frequency domain vectors corresponding to the second polarization direction.

A quantity of space-frequency vector pairs reported for the third transport layer is K/4, and the K/4 space-frequency vector pairs may be selected from 2L×M/2 (namely, L×M) space-frequency vector pairs determined by 2L spatial domain vectors and M/2 frequency domain vectors. In addition, a space-frequency vector pair reported for the third transport layer in the first polarization direction may be selected from L×M/2 space-frequency vector pairs determined by L spatial domain vectors corresponding to the first polarization direction and M/2 frequency domain vectors corresponding to the first polarization direction, and a space-frequency vector pair reported for the third transport layer in the second polarization direction may be selected from L×M/2 space-frequency vector pairs determined by L spatial domain vectors corresponding to the second polarization direction and M/2 frequency domain vectors corresponding to the second polarization direction.
It should be understood that the L spatial domain vectors corresponding to the three transport layers may be the same or different; the M/2 frequency domain vectors corresponding to each of the second transport layer and the third transport layer may be a subset of the M frequency domain vectors corresponding to the first transport layer, or may not belong to those M frequency domain vectors; and the M/2 frequency domain vectors corresponding to the second transport layer and the third transport layer may be the same or different. This is not limited in this application.

When R=4, a total quantity of space-frequency vector pairs reported by the terminal device for the four transport layers is K. A quantity of space-frequency vector pairs reported for each transport layer is K/4, and the K/4 space-frequency vector pairs may be selected from L×M/2 space-frequency vector pairs determined by L spatial domain vectors and M/2 frequency domain vectors. A space-frequency vector pair reported for the r-th transport layer in the first polarization direction may be selected from L×M/2 space-frequency vector pairs determined by L spatial domain vectors corresponding to the first polarization direction and M/2 frequency domain vectors corresponding to the first polarization direction, and a space-frequency vector pair reported for the r-th transport layer in the second polarization direction may be selected from L×M/2 space-frequency vector pairs determined by L spatial domain vectors corresponding to the second polarization direction and M/2 frequency domain vectors corresponding to the second polarization direction. When R=4, r is 1, 2, 3, or 4.

It should be understood that the L spatial domain vectors corresponding to the four transport layers may be the same or different, and the M/2 frequency domain vectors corresponding to the four transport layers may be the same or different. This is not limited in this application.

It should be further understood that, when R is 1, 2, 3, or 4 as described above, the L spatial domain vectors corresponding to a transport layer in the first polarization direction may be the same as or different from the L spatial domain vectors corresponding to the same transport layer in the second polarization direction. This is not limited in this application.

To adapt to more possible polarization directions, the length of the bitmap may be more generally represented as P×L×M×2.

In still another possible design, the quantity of polarization directions of the transmit antenna is 2, and the length of the bitmap may be 2L×M. Similar to the case in which the quantity of polarization directions of the transmit antenna is 1, the length of the bitmap in this design is less than the lengths of the bitmaps in the previous two designs. When R is any value, all bits in the bitmap are valid, and no padding bit is required. Therefore, no matter how the indication field is defined, the length of the indication field in this design is a fixed value.

When R=1, all the bits in the bitmap may correspond to one transport layer.

When R=2, the first 2L×M/2 (namely, L×M) bits in the bitmap may correspond to the first transport layer, and may be a sub-bitmap corresponding to the first transport layer; and the last 2L×M/2 (namely, L×M) bits may correspond to the second transport layer, and may be a sub-bitmap corresponding to the second transport layer.
When R=3, the first 2L×M/2 (namely, L×M) bits in the bitmap may correspond to the first transport layer, and may be a sub-bitmap corresponding to the first transport layer; the 2L×M/4 (namely, L×M/2) bits after the L×M bits may correspond to the second transport layer, and may be a sub-bitmap corresponding to the second transport layer; and the last 2L×M/4 (namely, L×M/2) bits may correspond to the third transport layer, and may be a sub-bitmap corresponding to the third transport layer.

When R=4, every 2L×M/4 (namely, L×M/2) bits in the bitmap correspond to one transport layer. For example, the first 2L×M/4 (namely, L×M/2) bits in the bitmap may correspond to the first transport layer, and may be a sub-bitmap corresponding to the first transport layer; the 2L×M/4 (namely, L×M/2) bits after the first L×M/2 bits may correspond to the second transport layer, and may be a sub-bitmap corresponding to the second transport layer; the 2L×M/4 (namely, L×M/2) bits after the first L×M bits may correspond to the third transport layer, and may be a sub-bitmap corresponding to the third transport layer; and the last 2L×M/4 (namely, L×M/2) bits may correspond to the fourth transport layer, and may be a sub-bitmap corresponding to the fourth transport layer.

In addition, within the sub-bitmap corresponding to the r-th transport layer, a first half of the bits (for example, ½ of the bit length of the sub-bitmap) may correspond to the first polarization direction, and a second half of the bits may correspond to the second polarization direction. Alternatively, a first half of the bits may correspond to the second polarization direction, and a second half of the bits may correspond to the first polarization direction. This is not limited in this application.

It should be understood that the foregoing correspondences between the bits and the transport layers and the foregoing correspondences between the bits and the polarization directions are merely examples, and should not constitute any limitation on this application. For example, the bits in the bitmap may alternatively correspond to the first to the R-th transport layers from back to front.

Optionally, for the R sub-bitmaps in the bitmap, the relationship among the quantity L_r of spatial domain vectors, the quantity M_r of frequency domain vectors, and the quantity K_r of space-frequency vector pairs may be configured as shown in Table 8:

TABLE 8

         First transport     Second transport    Third transport     Fourth transport
         layer (r = 1)       layer (r = 2)       layer (r = 3)       layer (r = 4)
         L_1    M_1   K_1    L_2    M_2   K_2    L_3    M_3   K_3    L_4    M_4   K_4    ∑K_r
R = 1    L/2L   M     K                                                                  K
R = 2    L/2L   M/2   K/2    L/2L   M/2   K/2                                            K
R = 3    L/2L   M/2   K/2    L/2L   M/4   K/4    L/2L   M/4   K/4                        K
R = 4    L/2L   M/4   K/4    L/2L   M/4   K/4    L/2L   M/4   K/4    L/2L   M/4   K/4    K

It should be noted that "L/2L" in Table 8 indicates L or 2L. The value of L_r shown in Table 8 for each transport layer takes the spatial domain vectors in the two polarization directions into account. Therefore, the value may be L or 2L. If the two polarization directions share L same spatial domain vectors, the value of L_r may be L. If L spatial domain vectors are used in each of the two polarization directions, the value of L_r may be 2L.
As described above, the L spatial domain vectors in one polarization direction may be the same as or different from those in the other polarization direction. That is, when the quantity of polarization directions is 2, the value of L_r may be L or 2L. This is not limited in this application.

As shown in Table 8, when R=1, the quantity of reported space-frequency vector pairs is K. In other words, a total quantity of space-frequency vector pairs reported by the terminal device is K. The K space-frequency vector pairs are selected from 2L×M space-frequency vector pairs determined by 2L spatial domain vectors and M frequency domain vectors. A space-frequency vector pair reported for the first polarization direction may be selected from L×M space-frequency vector pairs determined by L spatial domain vectors corresponding to the first polarization direction and M frequency domain vectors corresponding to the first polarization direction, and a space-frequency vector pair reported for the second polarization direction may be selected from L×M space-frequency vector pairs determined by L spatial domain vectors corresponding to the second polarization direction and M frequency domain vectors corresponding to the second polarization direction.

When R=2, a total quantity of space-frequency vector pairs reported by the terminal device for the two transport layers is K. A quantity of space-frequency vector pairs reported for each transport layer is K/2, and the K/2 space-frequency vector pairs reported for each transport layer may be selected from 2L×M/2 (namely, L×M) space-frequency vector pairs determined by 2L spatial domain vectors and M/2 frequency domain vectors. A space-frequency vector pair reported for the r-th transport layer in the first polarization direction may be selected from L×M/2 space-frequency vector pairs determined by L spatial domain vectors corresponding to the first polarization direction and M/2 frequency domain vectors corresponding to the first polarization direction, and a space-frequency vector pair reported for the r-th transport layer in the second polarization direction may be selected from L×M/2 space-frequency vector pairs determined by L spatial domain vectors corresponding to the second polarization direction and M/2 frequency domain vectors corresponding to the second polarization direction. When R=2, r is 1 or 2.

When R=3, a total quantity of space-frequency vector pairs reported by the terminal device for the three transport layers is K. A quantity of space-frequency vector pairs reported for the first transport layer is K/2, and the K/2 space-frequency vector pairs may be selected from 2L×M/2 (namely, L×M) space-frequency vector pairs determined by 2L spatial domain vectors and M/2 frequency domain vectors. A space-frequency vector pair reported for the first transport layer in the first polarization direction may be selected from L×M/2 space-frequency vector pairs determined by L spatial domain vectors corresponding to the first polarization direction and M/2 frequency domain vectors corresponding to the first polarization direction, and a space-frequency vector pair reported for the first transport layer in the second polarization direction may be selected from L×M/2 space-frequency vector pairs determined by L spatial domain vectors corresponding to the second polarization direction and M/2 frequency domain vectors corresponding to the second polarization direction.
A quantity of space-frequency vector pairs reported for the second transport layer is K/4, and the K/4 space-frequency vector pairs may be selected from L×M/4 space-frequency vector pairs determined by L spatial domain vectors and M/4 frequency domain vectors. A space-frequency vector pair reported for the second transport layer in the first polarization direction may be selected from L×M/4 space-frequency vector pairs determined by L spatial domain vectors corresponding to the first polarization direction and M/4 frequency domain vectors corresponding to the first polarization direction, and a space-frequency vector pair reported for the second transport layer in the second polarization direction may be selected from L×M/4 space-frequency vector pairs determined by L spatial domain vectors corresponding to the second polarization direction and M/4 frequency domain vectors corresponding to the second polarization direction.

A quantity of space-frequency vector pairs reported for the third transport layer is K/4, and the K/4 space-frequency vector pairs are selected from L×M/4 space-frequency vector pairs determined by L spatial domain vectors and M/4 frequency domain vectors. A space-frequency vector pair reported for the third transport layer in the first polarization direction may be selected from L×M/4 space-frequency vector pairs determined by L spatial domain vectors corresponding to the first polarization direction and M/4 frequency domain vectors corresponding to the first polarization direction, and a space-frequency vector pair reported for the third transport layer in the second polarization direction may be selected from L×M/4 space-frequency vector pairs determined by L spatial domain vectors corresponding to the second polarization direction and M/4 frequency domain vectors corresponding to the second polarization direction.

It should be understood that the L spatial domain vectors corresponding to the three transport layers may be the same or different; the M/4 frequency domain vectors corresponding to each of the second transport layer and the third transport layer may be a subset of the M/2 frequency domain vectors corresponding to the first transport layer, or may not belong to those M/2 frequency domain vectors; and the M/4 frequency domain vectors corresponding to the second transport layer and the third transport layer may be the same or different. This is not limited in this application.

When R=4, a total quantity of space-frequency vector pairs reported by the terminal device for the four transport layers is K. A quantity of space-frequency vector pairs reported for each transport layer is K/4, and the K/4 space-frequency vector pairs reported for each transport layer may be selected from L×M/4 space-frequency vector pairs determined by L spatial domain vectors and M/4 frequency domain vectors. A space-frequency vector pair reported for the r-th transport layer in the first polarization direction may be selected from L×M/4 space-frequency vector pairs determined by L spatial domain vectors corresponding to the first polarization direction and M/4 frequency domain vectors corresponding to the first polarization direction, and a space-frequency vector pair reported for the r-th transport layer in the second polarization direction may be selected from L×M/4 space-frequency vector pairs determined by L spatial domain vectors corresponding to the second polarization direction and M/4 frequency domain vectors corresponding to the second polarization direction. When R=4, r is 1, 2, 3, or 4.
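Putting the rules of this 2L×M design together, the following Python sketch shows how a receiver might parse the bitmap into per-layer, per-polarization pair positions. The sub-bitmap lengths and the front-to-back layer order follow the partition described above; the shared-L assumption (the same L spatial domain vectors in both polarization directions) and the row-major bit-to-pair mapping are illustrative assumptions.

def sub_lengths(R, L, M):
    """Sub-bitmap lengths for the 2L*M design, following the partition above."""
    if R == 1:
        return [2 * L * M]
    if R == 2:
        return [2 * L * M // 2] * 2
    if R == 3:
        return [2 * L * M // 2, 2 * L * M // 4, 2 * L * M // 4]
    if R == 4:
        return [2 * L * M // 4] * 4
    raise ValueError("R must be 1, 2, 3, or 4")

def parse(bitmap, R, L, M):
    """Return, per layer, the (l, m) pairs reported in each polarization direction."""
    layers, offset = [], 0
    for n in sub_lengths(R, L, M):
        sub = bitmap[offset:offset + n]
        offset += n
        half = n // 2                  # one half per polarization direction
        M_r = half // L                # M_r under the shared-L assumption
        layers.append([[(i // M_r, i % M_r) for i, b in enumerate(h) if b]
                       for h in (sub[:half], sub[half:])])
    return layers

# Example: L = 2, M = 4 gives a 2L*M = 16-bit bitmap; parse it for R = 2.
bits = [0] * 16
bits[0] = bits[8] = bits[12] = 1
print(parse(bits, R=2, L=2, M=4))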
It should be understood that the L spatial domain vectors corresponding to the four transport layers may be the same or different, and the M/4 frequency domain vectors corresponding to the four transport layers may be the same or different. This is not limited in this application.

It should be further understood that, when R is 1, 2, 3, or 4 as described above, the L spatial domain vectors corresponding to a transport layer in the first polarization direction may be the same as or different from the L spatial domain vectors corresponding to the same transport layer in the second polarization direction. This is not limited in this application.

To adapt to more possible polarization directions, the length of the bitmap may be more generally represented as P×L×M.

Several possible designs of the bitmap provided in this embodiment of this application are described above in detail with reference to Table 1 to Table 8. This should not constitute any limitation on this application. For example, when the quantity of polarization directions of the transmit antenna is 1, the length of the bitmap may alternatively be designed as L×M×R_m/2. When the quantity of polarization directions of the transmit antenna is 2, the length of the bitmap may alternatively be designed as 2L×M×R_m/2 (namely, L×M×R_m). For the correspondences between the bits in the bitmap and the transport layers, refer to the correspondences listed above. For brevity, details are not described herein again.

It should be further understood that, when it is defined in a protocol that one of the plurality of designs of the bitmap is used, the terminal device may generate the bitmap based on the defined design, and the network device may also parse the bitmap based on the corresponding design.

It should be further understood that, when L and M are defined differently, the foregoing listed formulas used to calculate the length of the bitmap also change accordingly. A method that is used to calculate the length of the bitmap and that is determined by a person skilled in the art based on a same inventive concept shall fall within the protection scope of this application.

Optionally, the bitmap is located in the first part of the CSI report. As described above, the length of the first part of the CSI report is predefined. The length of the bitmap may be a fixed value, and is irrelevant to the quantity R of transport layers. Therefore, the bitmap may be designed in the first part of the CSI report. The overheads of the first part of the CSI report may be fixed, and do not change with the quantity R of transport layers. The overheads of the first part may be predefined in a protocol, so that after receiving the CSI report, the network device decodes the first part based on the predefined length.

Optionally, the CSI report further includes the second part, and the second part includes an indication of the weighting coefficients reported for each of the R transport layers. For example, for the r-th transport layer, the CSI report may be used to indicate T_r weighting coefficients. In the first part, the bitmap is used to indicate the quantity of space-frequency vector pairs reported for each transport layer, and the quantity of quantization bits of each weighting coefficient may be predetermined. Therefore, the indication overheads of the weighting coefficients reported for each of the R transport layers in the second part may be determined. The r-th transport layer is used as an example. The quantity of weighting coefficients reported by the terminal device is T_r.
If the quantity of amplitude quantization bits of each weighting coefficient is x, and the quantity of phase quantization bits of each weighting coefficient is y, the indication overheads of the weighting coefficients reported for the r-th transport layer may be, for example, (x+y)×T_r bits. In this case, the indication overheads of the weighting coefficients reported for the R transport layers may be, for example, ∑_{r=1}^{R}(x+y)×T_r bits.

As described above, because the weighting coefficients correspond to the space-frequency vector pairs, the CSI report indicates, by using the bitmap, both the quantity and the positions of the space-frequency vector pairs reported for each transport layer, and the quantity and the positions of the weighting coefficients reported for each transport layer.

For example, the terminal device may indicate a weighting coefficient in a normalized manner, or may indicate, by using a quantization value or an index of a quantization value, one or more weighting coefficients corresponding to a space-frequency vector pair.

If indicating the weighting coefficient in the normalized manner, the terminal device may determine a maximum coefficient within a predetermined normalized range, and normalize the maximum coefficient. Then, the terminal device may indicate a relative value of another coefficient relative to the maximum coefficient by using a quantization value or an index of a quantization value. If indicating the weighting coefficient in the normalized manner, the terminal device may further indicate a position of the normalized coefficient in the second part.

It should be understood that the normalization mentioned above may mean that a maximum weighting coefficient is determined in a unit of each polarization direction, each transport layer, or all transport layers, so that the normalization is performed in different ranges such as the polarization direction, the transport layer, or all the transport layers. It should be further understood that for a specific method for indicating the weighting coefficient in the normalized manner, refer to the current technology. For brevity, detailed descriptions of the specific process are omitted herein.

The terminal device may indicate, by using the quantization value or the index of the quantization value, the weighting coefficients corresponding to the space-frequency vector pairs in a pre-agreed sequence. For example, the terminal device may sequentially indicate the corresponding weighting coefficients according to the sequence of the reported space-frequency vector pairs in the bitmap.

Further, when a plurality of weighting coefficients are reported for a same transport layer, the plurality of weighting coefficients corresponding to the same transport layer may belong to at least two quantization levels. The at least two quantization levels include a first quantization level and a second quantization level, and the quantity of quantization bits of a weighting coefficient corresponding to the first quantization level is greater than the quantity of quantization bits of a weighting coefficient corresponding to the second quantization level. The quantity of quantization levels and the classification rule may be predefined, for example, defined in a protocol.

In an implementation, a plurality of quantization levels may be obtained through classification based on amplitude quantization values. In other words, different quantization levels may correspond to different amplitude quantization values.
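Before turning to the concrete correspondences in Table 9 and Table 10 below, the two ideas just described can be sketched in Python: the total indication overhead ∑_{r=1}^{R}(x+y)×T_r, and a classification of weighting coefficients into quantization levels by amplitude. The two-level boundary of 1/4 used here is an arbitrary illustrative assumption, not a value taken from the tables.

def indication_overhead(T, x, y):
    """Total overhead in bits for the weighting coefficients of R transport layers."""
    return sum((x + y) * T_r for T_r in T)     # T = [T_1, ..., T_R]

def quantization_level(amplitude):
    """Toy two-level classification by amplitude (assumed boundary: 1/4)."""
    return 1 if amplitude >= 1 / 4 else 2

print(indication_overhead(T=[4, 2], x=3, y=4))               # (3+4)*4 + (3+4)*2 = 42
print([quantization_level(a) for a in (1.0, 1 / 2, 1 / 8)])  # [1, 1, 2]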
Further, when a plurality of weighting coefficients are reported for a same transport layer, the plurality of weighting coefficients corresponding to the same transport layer may belong to at least two quantization levels. The at least two quantization levels include a first quantization level and a second quantization level, and the quantity of quantization bits of a weighting coefficient corresponding to the first quantization level is greater than the quantity of quantization bits of a weighting coefficient corresponding to the second quantization level. The quantity of quantization levels and a classification rule may be predefined, for example, defined in a protocol. In an implementation, a plurality of quantization levels may be obtained through classification based on amplitude quantization values. In other words, different quantization levels may correspond to different amplitude quantization values. Table 9 shows an example of a correspondence between a quantization level and an amplitude quantization value. As shown in Table 9, four quantization levels may be obtained through classification based on the amplitude quantization values of the weighting coefficients.

TABLE 9

Quantization level    Amplitude quantization value
1                     1, 1/2
2                     1/4, 1/8, 1/16
3                     1/32, 1/64
4                     0

Optionally, the quantity of phase quantization bits of the weighting coefficient corresponding to the first quantization level is greater than the quantity of phase quantization bits of the weighting coefficient corresponding to the second quantization level. Table 10 shows an example of a correspondence among a quantization level, an amplitude quantization value, and a quantity of phase quantization bits.

TABLE 10

Quantization level    Amplitude quantization value    Phase quantization bits
1                     1, 1/2                          4
2                     1/4, 1/8, 1/16                  3
3                     1/32, 1/64                      2
4                     0                               0

Certainly, correspondences between different quantization levels and quantities of amplitude quantization bits may further be defined. For brevity, examples are not provided one by one herein for description. The indication overheads of the weighting coefficients reported for each transport layer may be determined based on the quantities of amplitude quantization bits and the quantities of phase quantization bits that correspond to different quantization levels and the quantities of weighting coefficients corresponding to the quantization levels. For brevity, no example is provided herein for description. It should be understood that the foregoing correspondences among the quantity of quantization levels, the quantization levels, the amplitude quantization values, and the phase quantization bits are merely examples for ease of understanding, and should not constitute any limitation on this application. The quantity of quantization levels and the correspondence among the quantization level, the amplitude quantization value, and the phase quantization bit are not limited in this application. In another implementation, a plurality of quantization levels may be obtained through classification based on amplitude quantization values and a quantity of weighting coefficients. For example, the quantity of quantization levels is predefined as 2. In this case, in the weighting coefficients reported for each transport layer, the weighting coefficients having larger amplitude quantization values may be classified into a first quantization level, and the weighting coefficients having smaller amplitude quantization values may be classified into a second quantization level. For example, the quantity of weighting coefficients reported for the rth transport layer is Tr. In this case, the Tr/2 weighting coefficients having larger amplitude quantization values may be classified into the first quantization level, and are quantized by using a quantity of quantization bits corresponding to the first quantization level; and the Tr/2 weighting coefficients having smaller amplitude quantization values may be classified into the second quantization level, and are quantized by using a quantity of quantization bits corresponding to the second quantization level. It should be understood that the foregoing quantization level classification methods are merely two possible implementations, and should not constitute any limitation on this application. The quantity of quantization levels and the classification rule of the quantization levels are not limited in this application.
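Both classification rules above lend themselves to a short sketch. The following Python snippet is illustrative only: the level boundaries mirror the example values in Table 9 and Table 10, the half-and-half split mirrors the Tr/2 example, and the function names are assumptions.

    # Minimal sketch of the two classification rules. Boundaries follow the
    # example values in Table 9/Table 10; names are illustrative assumptions.
    import numpy as np

    PHASE_BITS = {1: 4, 2: 3, 3: 2, 4: 0}          # per Table 10

    def level_by_amplitude(a):
        """Rule 1: map a quantized amplitude to a level per Table 9."""
        if a >= 1 / 2:
            return 1
        if a >= 1 / 16:
            return 2
        if a > 0:
            return 3
        return 4

    def split_half_by_amplitude(coeffs):
        """Rule 2 (two predefined levels): the stronger half of the T_r
        coefficients goes to level 1, the weaker half to level 2."""
        order = np.argsort(-np.abs(coeffs))
        levels = np.empty(len(coeffs), dtype=int)
        half = len(coeffs) // 2
        levels[order[:half]] = 1
        levels[order[half:]] = 2
        return levels

    print(level_by_amplitude(1 / 8))                                # 2
    print(split_half_by_amplitude(np.array([0.9, 0.1, 0.5, 0.2])))  # [1 2 1 2]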
Optionally, the second part of the CSI report further includes an indication of the spatial domain vectors reported for each of the R transport layers. For example, for the rth transport layer, the CSI report may be used to indicate Lr spatial domain vectors. The quantity of spatial domain vectors reported for each transport layer may be predefined, or may be directly or indirectly indicated by the network device by using signaling. This is not limited in this application. Optionally, the method further includes: The terminal device receives second indication information. The second indication information is used to indicate the quantity of to-be-reported spatial domain vectors that is configured for each transport layer. Correspondingly, the network device transmits the second indication information. The network device may include the second indication information in higher layer signaling such as an RRC message or a MAC CE, or may transmit it by using physical layer signaling such as DCI, to indicate, to the terminal device, the quantity of to-be-reported spatial domain vectors that is configured for each transport layer. The specific signaling carrying the second indication information is not limited in this application. In a possible design, the second indication information may indicate a maximum value L of the quantities of to-be-reported spatial domain vectors that are configured for the R transport layers. The maximum value L may be replaced with a minimum value, an average value, or the like. The terminal device may determine, according to a predefined rule and based on the quantity of transport layers and the value indicated by the second indication information, the quantity of spatial domain vectors reported for each transport layer. In this case, the second indication information indirectly indicates the quantity of to-be-reported spatial domain vectors that is configured for each transport layer. In another possible design, the second indication information may directly indicate, for each value of R, the quantity of to-be-reported spatial domain vectors that is configured for each transport layer, and the terminal device and the network device may directly determine, based on the second indication information and the quantity of transport layers, the quantity of spatial domain vectors that need to be reported for each transport layer. It may be understood that, when the meaning represented by the value indicated by the second indication information changes, the predefined rule for determining the reporting quantity corresponding to each transport layer also changes, and the foregoing listed formulas used to calculate the length of the bitmap also change. In addition, the quantity of spatial domain vectors reported for each transport layer may alternatively be predefined, for example, defined in a protocol. For example, the quantity of to-be-reported spatial domain vectors that is configured for each transport layer for each value of R may be predefined in the protocol. Alternatively, the maximum value L may be predefined. This is not limited in this application. It should be understood that the quantity of transport layers may be determined by the terminal device. For example, the terminal device may determine the quantity of transport layers based on downlink channel measurement. For a specific method for determining the quantity of transport layers by the terminal device, refer to the current technology. For brevity, detailed descriptions of the specific process are omitted herein. In this embodiment of this application, optionally, a same quantity of spatial domain vectors is configured for each of the R transport layers.
For example, the quantity is L. After the quantity of spatial domain vectors that need to be reported for each transport layer is determined, the indication overheads used to indicate the spatial domain vectors may be determined. For example, when the spatial domain vectors reported for each transport layer are selected from a spatial domain vector set, for that transport layer, the indication overheads used to indicate the spatial domain vectors may be, for example, ⌈log2 C(Ns, Lr)⌉ bits. Ns represents the quantity of spatial domain vectors in the spatial domain vector set or the quantity of spatial domain vectors in an orthogonal group in the spatial domain vector set, and C(Ns, Lr) represents the quantity of combinations of Lr vectors selected from Ns vectors. Optionally, the second part of the CSI report further includes an indication of the frequency domain vectors reported for each of the R transport layers. For example, for the rth transport layer, the CSI report may be used to indicate Mr frequency domain vectors. The quantity of frequency domain vectors reported for each transport layer may be predefined, or may be directly or indirectly indicated by the network device by using signaling. This is not limited in this application. Optionally, the method further includes: The terminal device receives third indication information. The third indication information is used to indicate the quantity of to-be-reported frequency domain vectors that is configured for each transport layer. Correspondingly, the network device transmits the third indication information. The network device may include the third indication information in higher layer signaling such as an RRC message or a MAC CE, or may transmit it by using physical layer signaling such as DCI, to indicate, to the terminal device, the quantity of to-be-reported frequency domain vectors that is configured for each transport layer. The specific signaling carrying the third indication information is not limited in this application. In a possible design, the third indication information may indicate a maximum value M in the quantities of to-be-reported frequency domain vectors that are configured for the R transport layers. The maximum value M may be replaced with a minimum value, an average value, or the like. The terminal device may determine, according to a predefined rule and based on the quantity of transport layers and the value indicated by the third indication information, the reporting quantity corresponding to each transport layer. In this case, the third indication information indirectly indicates the quantity of to-be-reported frequency domain vectors that is configured for each transport layer. For example, the third indication information indicates the maximum value M. The predefined rule may be, for example, as follows: when R is 1, the quantity M1 of to-be-reported frequency domain vectors that is configured for the transport layer is the maximum value M; when R is 2, the quantities M1 and M2 of to-be-reported frequency domain vectors that are configured for the two transport layers are both the maximum value M or both a half of the maximum value, M/2; when R is 3, the quantity M1 of to-be-reported frequency domain vectors that is configured for the first transport layer is the maximum value M, and the quantities M2 and M3 of to-be-reported frequency domain vectors that are configured for the second transport layer and the third transport layer are both a half of the maximum value, M/2; and when R is 4, the quantities M1 to M4 of to-be-reported frequency domain vectors that are configured for the four transport layers are all a half of the maximum value, M/2.
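The example rule just described maps the rank R and the configured maximum M to a per-layer quantity, and can be written compactly as follows. This is a sketch of that one example rule only, with the function name chosen for illustration; the caveats that follow apply to it equally.

    # Minimal sketch of the example rule above: derive M_r for each layer
    # from the rank R and the configured maximum M (function name assumed).
    def frequency_vectors_per_layer(R, M):
        if R == 1:
            return [M]
        if R == 2:
            return [M, M]            # the variant [M // 2, M // 2] is also allowed
        if R == 3:
            return [M, M // 2, M // 2]
        if R == 4:
            return [M // 2] * 4
        raise ValueError("R must be 1..4 in this example rule")

    print(frequency_vectors_per_layer(3, M=8))   # [8, 4, 4]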
It should be understood that the rule listed above is merely an example, and should not constitute any limitation on this application. The foregoing has described in detail, with reference to a specific embodiment, the relationship between the maximum value and the quantity of to-be-reported frequency domain vectors that is configured for each transport layer. For brevity, examples are not described one by one herein. It may be understood that, when the meaning represented by the value indicated by the third indication information changes, the predefined rule for determining the quantity of reported frequency domain vectors corresponding to each transport layer also changes, and the foregoing listed formulas used to calculate the length of the bitmap also change. It should be further understood that the maximum value, the minimum value, and the average value listed above are merely several possible implementations, and should not constitute any limitation on this application. In another possible design, the third indication information may directly indicate, for each value of R, the quantity of to-be-reported frequency domain vectors that is configured for each transport layer, and the terminal device may directly determine, based on the third indication information and the quantity of transport layers, the quantity of frequency domain vectors that need to be reported for each transport layer. In still another possible design, the third indication information and the foregoing second indication information may be the same indication information. For example, a relationship between the quantity of to-be-reported frequency domain vectors that is configured for each transport layer and the quantity of to-be-reported spatial domain vectors that is configured for the transport layer may be predefined. In other words, there may be a correspondence between the quantity of to-be-reported frequency domain vectors and the quantity of to-be-reported spatial domain vectors. For example, when the quantity of to-be-reported spatial domain vectors is 4, the quantity of to-be-reported frequency domain vectors is 4; and when the quantity of to-be-reported spatial domain vectors is 8, the quantity of to-be-reported frequency domain vectors is 6. It should be understood that these examples are provided herein merely for ease of understanding, and should not constitute any limitation on this application. The quantity of to-be-reported spatial domain vectors, the quantity of to-be-reported frequency domain vectors, and the relationship therebetween are not limited in this application. In addition, the correspondence between the quantity of to-be-reported spatial domain vectors and the quantity of to-be-reported frequency domain vectors is not limited to a one-to-one correspondence. In this case, the foregoing second indication information indirectly indicates the quantity of to-be-reported frequency domain vectors that is configured for each transport layer. In addition, the quantity of to-be-reported frequency domain vectors that is configured for each transport layer may alternatively be predefined, for example, defined in a protocol. For example, the quantity of to-be-reported frequency domain vectors that is configured for each transport layer for each value of R may be predefined in the protocol. Alternatively, the maximum value M may be predefined. This is not limited in this application.
After the quantity of frequency domain vectors that need to be reported for each transport layer is determined, the indication overheads used to indicate the frequency domain vectors may be determined. For example, when the frequency domain vectors reported for each transport layer are selected from a frequency domain vector set, for that transport layer, the indication overheads used to indicate the frequency domain vectors may be, for example, ⌈log2 C(Nf, Mr)⌉ bits. Nf represents the quantity of frequency domain vectors in the frequency domain vector set or the quantity of frequency domain vectors in an orthogonal group in the frequency domain vector set. It should be understood that the first indication information, the second indication information, and the third indication information mentioned above may be carried by using same signaling, or may be carried by using different signaling. This is not limited in this application. In addition, in some cases, the first indication information, the second indication information, and the third indication information may also be combined into one piece of indication information or two pieces of indication information. This is not limited in this application. The spatial domain vector and the frequency domain vector that are reported for each transport layer and that are used to construct the precoding vector may be determined, for example, by the terminal device based on downlink channel measurement. Specifically, the spatial domain vector and the frequency domain vector that are used to construct the precoding vector may specifically be a spatial domain vector and a frequency domain vector that are included in a space-frequency vector pair reported by the terminal device. The spatial domain vector used to construct the precoding vector may be selected from a predefined spatial domain vector set, and the frequency domain vector used to construct the precoding vector may be selected from a predefined frequency domain vector set. The rth transport layer is used as an example. The spatial domain vectors included in the Tr space-frequency vector pairs reported by the terminal device may be selected from Lr spatial domain vectors, and the Lr spatial domain vectors may be determined in the predefined spatial domain vector set. The frequency domain vectors included in the Tr space-frequency vector pairs reported by the terminal device may be selected from Mr frequency domain vectors, and the Mr frequency domain vectors may be determined in the predefined frequency domain vector set. The Lr spatial domain vectors and the Mr frequency domain vectors may be predefined, or may be determined by the terminal device based on downlink channel measurement and reported to the network device. When the Lr spatial domain vectors and the Mr frequency domain vectors are determined by the terminal device based on downlink channel measurement, the Lr spatial domain vectors and the Mr frequency domain vectors may be determined, for example, based on the precoding vector corresponding to each frequency domain unit at the rth transport layer. Herein, the precoding vector corresponding to each frequency domain unit at the rth transport layer may be determined based on a channel matrix obtained through measurement in each frequency domain unit. The precoding vector corresponding to each frequency domain unit at the rth transport layer may be used to construct a space-frequency matrix corresponding to the rth transport layer.
The Lr spatial domain vectors and the Mr frequency domain vectors may be determined, for example, by performing spatial domain DFT and frequency domain DFT on the space-frequency matrix corresponding to the rth transport layer. For example, the space-frequency matrix constructed by using the precoding vector corresponding to each frequency domain unit at the rth transport layer may be denoted as Hr. A matrix Us may be constructed by using the predefined spatial domain vector set, and a matrix Uf may be constructed by using the predefined frequency domain vector set. In this case, after the spatial domain DFT and the frequency domain DFT are performed on the space-frequency matrix, a coefficient matrix may be obtained according to the following formula: C = Us^H × Hr × Uf, where the superscript H represents a conjugate transpose. C represents the coefficient matrix obtained by performing the DFT. The terminal device may determine the stronger Lr rows and the stronger Mr columns in the coefficient matrix C. For example, the terminal device may determine, based on a square sum of moduli of all elements in each row of the coefficient matrix C, the Lr rows in which the square sums of the moduli are larger, and may determine, based on a square sum of moduli of all elements in each column of the coefficient matrix C, the Mr columns in which the square sums of the moduli are larger. Therefore, the Lr spatial domain vectors and the Mr frequency domain vectors that are reported for the rth transport layer may be determined based on the space-frequency matrix corresponding to the rth transport layer. The terminal device may further determine the Kr elements having larger moduli in the stronger Lr rows and the stronger Mr columns, to determine the stronger Kr space-frequency vector pairs. If the terminal device can determine Kr non-zero elements having larger moduli in the stronger Lr rows and the stronger Mr columns, the quantity Tr of space-frequency vector pairs reported by the terminal device for the rth transport layer may be the same as the pre-configured reporting quantity Kr. If the quantity of non-zero elements that can be determined by the terminal device in the stronger Lr rows and the stronger Mr columns is less than Kr, the quantity Tr of space-frequency vector pairs reported by the terminal device for the rth transport layer may be less than the pre-configured reporting quantity Kr. In addition, the selected Lr rows and Mr columns in the coefficient matrix C may be used to construct a new coefficient matrix C′, and the coefficient matrix C′ may be a matrix whose dimension is Lr×Mr. The positions of the selected Tr elements in the coefficient matrix C′ correspond to the positions of the selected Tr space-frequency vector pairs in the Lr×Mr space-frequency vector pairs determined by the Lr spatial domain vectors and the Mr frequency domain vectors. The selected Tr elements in the coefficient matrix C′ are the weighting coefficients of the Tr corresponding space-frequency vector pairs. When the L spatial domain vectors or the M frequency domain vectors are shared at the R transport layers, the terminal device may determine the L spatial domain vectors or the M frequency domain vectors based on the space-frequency matrices corresponding to the transport layers. A specific method may be similar to that described above. For brevity, details are not described herein again. It should be understood that, for a specific method for determining the precoding matrix based on the channel matrix, refer to the current technology. For brevity, detailed descriptions of the specific process are omitted herein.
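The selection procedure just described can be pictured with a small NumPy sketch. The DFT bases, the dimensions, and the random stand-in for the channel below are placeholder assumptions used only to show the row, column, and element selection; they are not parameters of the embodiments.

    # Minimal sketch, assuming orthonormal DFT bases Us, Uf and a random
    # stand-in for the space-frequency matrix Hr; it computes C = Us^H Hr Uf,
    # keeps the strongest L_r rows / M_r columns by squared-modulus sums, and
    # picks the strongest T_r elements as reported pairs with their weights.
    import numpy as np

    Ns, Nf, Lr, Mr, Tr = 8, 16, 4, 4, 6
    rng = np.random.default_rng(0)
    Hr = rng.standard_normal((Ns, Nf)) + 1j * rng.standard_normal((Ns, Nf))
    Us = np.fft.fft(np.eye(Ns)) / np.sqrt(Ns)        # spatial DFT basis
    Uf = np.fft.fft(np.eye(Nf)) / np.sqrt(Nf)        # frequency DFT basis

    C = Us.conj().T @ Hr @ Uf                        # coefficient matrix

    rows = np.argsort(-np.sum(np.abs(C) ** 2, axis=1))[:Lr]   # stronger rows
    cols = np.argsort(-np.sum(np.abs(C) ** 2, axis=0))[:Mr]   # stronger columns
    C_prime = C[np.ix_(rows, cols)]                  # L_r x M_r matrix C'

    flat = np.argsort(-np.abs(C_prime).ravel())[:Tr] # stronger T_r elements
    positions = np.unravel_index(flat, C_prime.shape)
    weights = C_prime[positions]                     # weighting coefficients
    print(positions, np.round(np.abs(weights), 2))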
It should be further understood that, in the foregoing descriptions, for ease of understanding only, the rth transport layer is used as an example to describe in detail the specific process in which the terminal device determines the Lr spatial domain vectors, the Mr frequency domain vectors, the Tr space-frequency vector pairs, and the corresponding weighting coefficients. However, this should not constitute any limitation on this application. The specific method for determining, by the terminal device, the spatial domain vectors, the frequency domain vectors, the space-frequency vector pairs, and the corresponding weighting coefficients for each transport layer is not limited to that in the foregoing descriptions. For example, the terminal device may alternatively determine the spatial domain vectors, the frequency domain vectors, the space-frequency vector pairs, and the corresponding weighting coefficients for each transport layer by using an existing estimation algorithm such as the multiple signal classification (MUSIC) algorithm, the Bartlett algorithm, or the estimation of signal parameters via rotational invariance technique (ESPRIT) algorithm. When the terminal device reports the spatial domain vectors and the frequency domain vectors for each transport layer, the spatial domain vectors and the frequency domain vectors that are selected for each transport layer may be combined in a pairwise manner, to obtain a plurality of space-frequency vector pairs. For example, the Lr spatial domain vectors and the Mr frequency domain vectors that are selected for the rth transport layer may be combined to obtain Lr×Mr space-frequency vector pairs. However, this is only a possible implementation, and should not constitute any limitation on this application. In an embodiment, when reporting the spatial domain vectors and the frequency domain vectors for each transport layer, the terminal device may also report one or more corresponding frequency domain vectors based on each spatial domain vector. After the spatial domain vectors reported for each transport layer are determined, the quantity of frequency domain vectors that need to be reported may be determined based on strengths of the spatial domain vectors. The rth transport layer is used as an example: the terminal device may first determine the Lr spatial domain vectors. For example, the Lr spatial domain vectors may be determined by performing the foregoing DFT. The Lr spatial domain vectors may correspond to the stronger Lr rows that are in the coefficient matrix C and that are obtained by performing the DFT. Based on the amplitudes of the coefficients of the Lr spatial domain vectors, the quantity of frequency domain vectors reported for each spatial domain vector may further be determined. The amplitudes of the coefficients of the Lr spatial domain vectors may be determined by the square sum of the moduli of the elements in each of the stronger Lr rows in the coefficient matrix C. In a possible design, the Lr spatial domain vectors may be grouped into two groups. A first group of spatial domain vectors may include Lr1 spatial domain vectors, and a second group of spatial domain vectors may include Lr2 spatial domain vectors, where Lr1 ≥ 1, Lr2 ≥ 1, Lr = Lr1 + Lr2, and Lr1 and Lr2 are both integers. The first group of spatial domain vectors and the second group of spatial domain vectors may be obtained through grouping according to a preset rule, for example, based on amplitudes.
Specifically, an average amplitude of the coefficients of the first group of spatial domain vectors is greater than an average amplitude of the coefficients of the second group of spatial domain vectors, or a square sum of the coefficients of the first group of spatial domain vectors is greater than a square sum of the coefficients of the second group of spatial domain vectors, and so on. The specific rule for grouping the spatial domain vectors into the first group and the second group is not limited in this application. The quantity of frequency domain vectors reported for each spatial domain vector in the first group of spatial domain vectors may be less than or equal to a maximum value M of a pre-configured quantity of to-be-reported frequency domain vectors. The quantity Lr2 of spatial domain vectors in the second group may be predetermined, for example, defined in a protocol, or agreed on in advance by the network device and the terminal device, for example, Lr2 = 2. The quantity of frequency domain vectors reported for each spatial domain vector in the second group of spatial domain vectors may be a predefined value, for example, 1. Therefore, the quantity of space-frequency vector pairs that may be determined based on the frequency domain vectors reported for the first group of spatial domain vectors is less than or equal to Lr1×M, and the quantity of space-frequency vector pairs that may be determined based on the frequency domain vectors reported for the second group of spatial domain vectors is less than Lr2×M. The quantity of space-frequency vector pairs that may be determined based on the first group of spatial domain vectors, the frequency domain vectors corresponding to the first group of spatial domain vectors, the second group of spatial domain vectors, and the frequency domain vectors corresponding to the second group of spatial domain vectors is less than Lr×M. In the CSI report, the foregoing bitmap may still be used to indicate the space-frequency vector pairs reported for each transport layer. Further, when the terminal device indicates, by using the second part of the CSI report, the spatial domain vectors reported for each transport layer, Lr shared spatial domain vectors may be reported for the R transport layers, and the indication overheads thereof may be, for example, ⌈log2 C(Ns, Lr)⌉ bits. Optionally, the quantity of shared spatial domain vectors that are reported for the R transport layers is L, and the indication overheads may be, for example, ⌈log2 C(Ns, L)⌉ bits. Alternatively, the terminal device may separately report corresponding spatial domain vectors for different transport layers, and the indication overheads thereof may be, for example, ∑_{r=1}^{R} ⌈log2 C(Ns, Lr)⌉ bits. The specific method for indicating, by the terminal device, the spatial domain vectors reported for each transport layer has been described in detail above. For brevity, details are not described herein again.
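The two-group split described above can be sketched as follows. Ranking by the squared-modulus sums of the coefficient rows and the group size Lr2 = 2 follow the examples in the text, while the function name and the numbers are illustrative assumptions.

    # Minimal sketch of the two-group split: rank the spatial domain vectors
    # by the squared-modulus sum of their coefficient rows; the strongest
    # L_r1 vectors form the first group and the weakest L_r2 the second.
    import numpy as np

    def split_spatial_groups(row_powers, Lr2=2):
        """row_powers[i]: squared-modulus sum of row i of the coefficient
        matrix C; Lr2 = 2 follows the example in the text."""
        order = np.argsort(-np.asarray(row_powers))
        return order[:-Lr2], order[-Lr2:]      # (first group, second group)

    first, second = split_spatial_groups([9.0, 1.2, 4.5, 0.7])
    print(first, second)   # [0 2] [1 3]: up to M frequency vectors per vector
                           # in the first group, e.g. 1 per vector in the second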
When the terminal device indicates, by using the second part of the CSI report, the frequency domain vectors reported for each transport layer, the frequency domain vectors may be reported in the following manners. The frequency domain vectors reported for the first group of spatial domain vectors may be Mr frequency domain vectors shared by a plurality of spatial domain vectors in the first group of spatial domain vectors, and the indication overheads thereof may be, for example, ⌈log2 C(Nf, Mr)⌉ bits. Optionally, the quantity of shared frequency domain vectors that are reported for the plurality of spatial domain vectors in the first group of spatial domain vectors is M, and the indication overheads may be, for example, ⌈log2 C(Nf, M)⌉ bits. The frequency domain vectors reported for the first group of spatial domain vectors may alternatively be one or more frequency domain vectors reported for each spatial domain vector. If the quantity of frequency domain vectors reported for the lth spatial domain vector is denoted as Ml, where Ml ≥ 1 and Ml is an integer, the total quantity of frequency domain vectors reported for the first group of spatial domain vectors may be ∑_{l=1}^{Lr1} Ml. Therefore, the indication overheads thereof may be, for example, ∑_{l=1}^{Lr1} ⌈log2 C(Nf, Ml)⌉ bits. The quantity of frequency domain vectors reported for each spatial domain vector in the second group of spatial domain vectors may be predefined, for example, 1. In this case, the indication overheads brought by the frequency domain vectors reported for each spatial domain vector in the second group of spatial domain vectors may be ⌈log2 C(Nf, 1)⌉ bits. If the quantity of spatial domain vectors in the second group of spatial domain vectors is 2, the indication overheads of the frequency domain vectors reported for the second group of spatial domain vectors may be 2⌈log2 C(Nf, 1)⌉ bits. It should be understood that the listed quantity of spatial domain vectors in the second group of spatial domain vectors and the listed quantity of frequency domain vectors reported for each spatial domain vector in the second group of spatial domain vectors are merely examples, and should not constitute any limitation on this application. This is not limited in this application. It should be further understood that, for ease of understanding only, the foregoing shows a possible case of reporting the frequency domain vectors for the first group of spatial domain vectors and the corresponding indication overheads, and a possible case of reporting the frequency domain vectors for the second group of spatial domain vectors and the corresponding indication overheads. However, this should not constitute any limitation on this application. The specific manner in which the terminal device reports the frequency domain vectors and the overheads thereof are not limited in this application. When the specific manner in which the terminal device reports the frequency domain vectors is predefined in a protocol, the terminal device may generate the indications of the frequency domain vectors in the CSI report in that manner, and the network device may parse the indications of the frequency domain vectors in the CSI report in that manner. It should be further understood that the foregoing lists the indication overheads and the plurality of manners in which the spatial domain vectors, the frequency domain vectors, the space-frequency vector pairs, and the weighting coefficients corresponding to the space-frequency vector pairs that are reported for each transport layer are indicated. These are merely examples for ease of understanding, and should not constitute any limitation on this application.
The indication overheads and the manner in which the spatial domain vectors, the frequency domain vectors, the space-frequency vector pairs, and the weighting coefficients corresponding to the space-frequency vector pairs that are reported for each transport layer are indicated are not limited in this application. Based on the foregoing method, the indication overheads of the second part of the CSI report may be determined based on the first part of the CSI report. Further, the second part of the CSI report may include a first field, a second field, and a third field. In a possible design, the first field may include an indication of the spatial domain vectors reported for each transport layer, the second field may include an indication of the frequency domain vectors reported for each transport layer, and the third field may include an indication of the weighting coefficients reported for each transport layer. In another possible design, the first field may include an indication of the frequency domain vectors reported for each transport layer, the second field may include an indication of the spatial domain vectors reported for each transport layer, and the third field may include an indication of the weighting coefficients reported for each transport layer. When the plurality of fields in the second part are encoded, the encoding sequence of the plurality of fields may be as follows: the first field is located before the second field, and the second field is located before the third field. In addition, the information in each field is sequentially encoded in a sequence from the first transport layer to the Rth transport layer. It should be understood that the following shows schematic diagrams of arrangements of the plurality of fields in the second part with reference to the accompanying drawings. However, the accompanying drawings are shown merely for ease of understanding, and should not constitute any limitation on this application. The encoding sequence of the fields shown in each of the figures may be understood as a sequence of the bit sequences corresponding to the fields in a bit sequence generated based on the CSI report. The terminal device may encode the corresponding bit sequences according to the arrangement sequence of the foregoing listed information. Correspondingly, the network device may also decode the corresponding bit sequences according to the arrangement sequence of the foregoing listed information. It should be further understood that the foregoing describes the encoding sequence of the fields in the second part. This does not mean that the plurality of fields in the second part are independently encoded a plurality of times. The plurality of fields in the second part may be encoded as a whole. For example, the plurality of fields belong to one code block. The foregoing encoding sequence may be understood as, for example, the sequence in which the bit sequences corresponding to the fields in the second part are sequentially input into an encoder for encoding. The sequence in which the network device decodes the second part may be the same as the encoding sequence. It may be understood that the fields in the second part are sequentially parsed according to the decoding sequence. For brevity, descriptions of a same or similar case are omitted below. It should be further understood that, for a specific encoding process, refer to the current technology. For brevity, detailed descriptions of the specific process are omitted herein.
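The field ordering described above amounts to a simple concatenation rule, sketched below. The per-layer contents are placeholder bit strings rather than real indication encodings, and the function name is an assumption.

    # Minimal sketch of the encoding order of the second part: first field,
    # then second field, then third field, each traversed from layer 1 to
    # layer R. Field contents are placeholder bit strings, not real encodings.
    def assemble_second_part(first_field, second_field, third_field):
        """Each argument is a list of per-layer bit strings (index 0 = layer 1)."""
        bits = []
        for field in (first_field, second_field, third_field):
            bits.extend(field)                 # layer 1 .. layer R within a field
        return "".join(bits)

    # R = 2: spatial, frequency, and coefficient indications per layer
    print(assemble_second_part(["01", "10"], ["110", "001"], ["1010", "0101"]))
    # -> "011011000110100101": one bit sequence fed to the encoder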
FIG. 3 to FIG. 6 each show the fields in the second part of the CSI report according to the encoding/decoding sequence according to this embodiment of this application. In FIG. 3 and FIG. 5, the first field includes the indication of the spatial domain vectors for each transport layer, and the second field includes the indication of the frequency domain vectors for each transport layer. In FIG. 4 and FIG. 6, the first field includes the indication of the frequency domain vectors for each transport layer, and the second field includes the indication of the spatial domain vectors for each transport layer. It should be understood that FIG. 3 and FIG. 4 are merely examples for ease of understanding the encoding/decoding sequence of the fields, and do not indicate that the fields need to be arranged in the second part according to the sequences shown in the figures. In addition, the encoding/decoding sequence of the fields may correspond to a sequence of the priorities described below. Therefore, the encoding/decoding sequence of the fields may correspond to the arrangement sequence of the fields shown in each of FIG. 3 and FIG. 4. Optionally, L same spatial domain vectors are reported for the R transport layers. The L spatial domain vectors may be shared at the R transport layers. In this case, the indications of the spatial domain vectors reported for each transport layer in the figures may be combined. That is, the spatial domain vectors reported for the R transport layers need to be indicated only once. Optionally, M same frequency domain vectors are reported for the R transport layers. The M frequency domain vectors may be shared at the R transport layers. In this case, the indications of the frequency domain vectors reported for each transport layer in the figures may be combined. That is, the frequency domain vectors reported for the R transport layers need to be indicated only once. Therefore, FIG. 3 and FIG. 4 may be simplified into FIG. 5 and FIG. 6. Further, when the quantity R of transport layers is greater than 1, the frequency domain vectors reported for some transport layers may be a subset of the frequency domain vectors reported for the first transport layer, as listed in Table 2, Table 3, Table 4, Table 6, Table 7, and Table 8 above. Table 2 is used as an example. When the quantity R of transport layers is 3, the M/2 frequency domain vectors reported for the second transport layer and the M/2 frequency domain vectors reported for the third transport layer may be subsets of the M frequency domain vectors reported for the first transport layer. If the M/2 frequency domain vectors reported for the second transport layer are the same as the M/2 frequency domain vectors reported for the third transport layer, the relative positions of the M/2 frequency domain vectors in the M frequency domain vectors reported for the first transport layer may further be indicated, for example, by using a combined index. The indication overheads of the combined index may be, for example, ⌈log2 C(M, M/2)⌉ bits. In this case, in FIG. 3 or FIG. 4, the indication of the frequency domain vectors reported for the second transport layer and the indication of the frequency domain vectors reported for the third transport layer may be combined, and the relative positions of the M/2 frequency domain vectors in the M frequency domain vectors reported for the first transport layer are indicated.
If the M/2 frequency domain vectors reported for the second transport layer are different from the M/2 frequency domain vectors reported for the third transport layer, the relative positions of the M/2 frequency domain vectors for the second transport layer in the M frequency domain vectors reported for the first transport layer and the relative positions of the M/2 frequency domain vectors for the third transport layer in the M frequency domain vectors reported for the first transport layer may be separately indicated. The indication overheads thereof may be, for example, 2⌈log2 C(M, M/2)⌉ bits. The indications may be separately placed at the position of the indication of the frequency domain vectors reported for the second transport layer and the position of the indication of the frequency domain vectors reported for the third transport layer in FIG. 3 or FIG. 4. Certainly, the M/2 frequency domain vectors reported for the second transport layer and the M/2 frequency domain vectors reported for the third transport layer may alternatively not be subsets of the M frequency domain vectors reported for the first transport layer. In this case, the frequency domain vectors reported for each transport layer may be separately indicated. When a transmission resource scheduled by the network device for the terminal device is insufficient to transmit all content of the CSI report, some pieces of information in the second part may be discarded. Optionally, the method further includes: determining to-be-discarded information in the second part in ascending order of priorities. The priority of the third field is lower than the priority of the second field, the priority of the second field is lower than the priority of the first field, and the priorities of the information in each field are in descending order from the first transport layer to the Rth transport layer. In other words, when the transmission resource scheduled by the network device for the terminal device is insufficient to transmit all the content of the CSI report, it may be considered that the indications of the weighting coefficients in the second part are discarded first. In the indications of the weighting coefficients, it may be considered that the indication of the weighting coefficients of the Rth transport layer is discarded first. After all the indications of the weighting coefficients are discarded, the indications of the spatial domain vectors or the indications of the frequency domain vectors may further be discarded. The fields shown in each of FIG. 3 to FIG. 6 are encoded/decoded in descending order of the priorities. For brevity, no additional figure is provided herein for description. Further, in the third field, a plurality of weighting coefficients reported for a same transport layer may correspond to at least two priorities, and the at least two priorities may include a first priority and a second priority. An amplitude of a weighting coefficient corresponding to the first priority may be greater than or equal to an amplitude of a weighting coefficient corresponding to the second priority. For each transport layer, the priorities of the weighting coefficients corresponding to the first priority are higher than the priorities of the weighting coefficients of the same transport layer corresponding to the second priority. In addition, among the weighting coefficients that are of a plurality of transport layers and that correspond to a same priority, the priorities are in descending order from the first transport layer to the Rth transport layer.
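The discarding procedure can be sketched as a simple loop that drops the lowest-priority items until the payload fits the scheduled resource. The priority values, sizes, and labels below are illustrative assumptions, not values from the embodiments.

    # Minimal sketch of priority-ordered discarding: items are dropped in
    # ascending order of priority until the payload fits the scheduled
    # resource. Per the text, coefficient indications go first, starting
    # from the R-th layer. All numbers and labels are illustrative.
    def discard_to_fit(items, budget_bits):
        """items: (priority, size_bits, label); larger priority = kept longer."""
        kept = sorted(items, key=lambda it: it[0])     # lowest priority first
        total = sum(size for _, size, _ in kept)
        while total > budget_bits and kept:
            _, size, _ = kept.pop(0)                   # discard lowest priority
            total -= size
        return [label for _, _, label in kept]

    payload = [(5, 20, "spatial, layer 1"), (4, 20, "spatial, layer 2"),
               (3, 24, "frequency, layer 1"), (2, 24, "frequency, layer 2"),
               (1, 36, "coefficients, layer 1"), (0, 36, "coefficients, layer 2")]
    print(discard_to_fit(payload, budget_bits=100))
    # coefficients of layer 2 are dropped first, then those of layer 1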
For ease of understanding, FIG. 7 and FIG. 8 each show the fields that are arranged in the second part of the CSI report in a priority sequence according to this embodiment of this application. In FIG. 7, the first field includes the indication of the spatial domain vectors for each transport layer, and the second field includes the indication of the frequency domain vectors for each transport layer. Specific content in the first field and the second field may be, for example, shown in FIG. 3 or FIG. 5. For brevity, the specific content is not listed in the figure. In FIG. 8, the first field includes the indication of the frequency domain vectors for each transport layer, and the second field includes the indication of the spatial domain vectors for each transport layer. Specific content in the first field and the second field may be, for example, shown in FIG. 4 or FIG. 6. For brevity, the specific content is not listed in the figure. As shown in the figures, in the third field, for each transport layer, the priorities of the weighting coefficients corresponding to the first priority are higher than the priorities of the weighting coefficients corresponding to the second priority. In the figures, an ellipsis indicates that the priorities obtained through division based on the amplitudes of the weighting coefficients are not limited to the first priority and the second priority, and may further include more priorities. The weighting coefficients corresponding to the additional priorities may be discarded in ascending order of the priorities. Optionally, the plurality of weighting coefficients reported for the same transport layer may correspond to at least two quantization levels. The quantities of quantization bits of the plurality of weighting coefficients may be determined based on the at least two quantization levels. The at least two quantization levels may include a first quantization level and a second quantization level. The quantity of quantization bits of a weighting coefficient corresponding to the first quantization level may be greater than the quantity of quantization bits of a weighting coefficient corresponding to the second quantization level. In the third field, the priority of a weighting coefficient that is of each transport layer and that corresponds to the first quantization level is higher than the priority of a weighting coefficient that is of the same transport layer and that corresponds to the second quantization level. In addition, among the plurality of weighting coefficients that are of the transport layers and that correspond to a same quantization level, the priorities are in descending order from the first transport layer to the Rth transport layer. The at least two quantization levels may correspond to the at least two priorities described above. In other words, in FIG. 7 and FIG. 8, “first priority” may be replaced with “first quantization level”, and “second priority” may be replaced with “second quantization level”. It should be noted that “discard” described above may be understood as follows: before the second part is encoded/decoded, it is determined that the to-be-discarded information is not encoded/decoded. Therefore, the to-be-discarded information is not fed back to the network device. It therefore appears as if some pieces of information in the second part were discarded. Based on the foregoing method, the terminal device generates the CSI report. In step 220, the terminal device transmits the CSI report. Correspondingly, the network device receives the CSI report.
For example, the terminal device may transmit the CSI report to the network device on a physical uplink resource such as a physical uplink shared channel (PUSCH) or a physical uplink control channel (PUCCH), so that the network device determines, based on the CSI report, the space-frequency vector pairs reported for each transport layer, to recover the precoding vector corresponding to each frequency domain unit at each transport layer. A specific method for transmitting, by the terminal device, the CSI report to the network device on the physical uplink resource may be the same as that in the current technology. For brevity, detailed descriptions of a specific process thereof are omitted herein. In step 230, the network device determines, based on the CSI report, the space-frequency vector pairs used to construct the precoding vector. The specific process in which the terminal device indicates, by using the CSI report, the quantity and the positions of the space-frequency vector pairs reported for the R transport layers has been described in detail in step 210. The space-frequency vector pairs reported for the R transport layers are the space-frequency vector pairs used to construct the precoding vector. After receiving the CSI report, the network device may decode the first part of the CSI report based on the predefined length of the first part. After parsing the first part of the CSI report, the network device may determine the quantity and the positions of the space-frequency vector pairs reported for each transport layer, to determine the indication overheads of the second part of the CSI report, and then decode the second part. The specific process of parsing the CSI report by the network device is similar to the specific process of generating the CSI report by the terminal device. For brevity, detailed descriptions of the specific process are omitted herein. In addition, for a specific decoding process, refer to the current technology. For brevity, detailed descriptions of the specific process are omitted herein. As described above, the second part of the CSI report may include the indication of the weighting coefficients reported for each transport layer, the indication of the spatial domain vectors reported for each transport layer, and the indication of the frequency domain vectors reported for each transport layer. The rth transport layer is used as an example. The network device may determine, based on the CSI report, the Tr space-frequency vector pairs and the weighting coefficients that correspond to the rth transport layer. The Tr space-frequency vector pairs corresponding to the rth transport layer may be used to construct Tr space-frequency component matrices. The Tr space-frequency component matrices may be weighted and summed to obtain the space-frequency matrix corresponding to the rth transport layer, and then the precoding vector corresponding to each frequency domain unit at the rth transport layer may be determined. The specific method for determining, by the network device, the precoding vector corresponding to each frequency domain unit based on the space-frequency vector pairs and the weighting coefficients that correspond to the rth transport layer has been described in detail above. For brevity, detailed descriptions of the specific process are omitted herein. Based on the precoding vector that is determined for an nth frequency domain unit (where 1 ≤ n ≤ Nf and n is an integer) at each transport layer, the precoding matrix corresponding to the nth frequency domain unit may be constructed. For example, the precoding vector corresponding to the nth frequency domain unit at the first transport layer to the precoding vector corresponding to the nth frequency domain unit at the Rth transport layer are sequentially arranged in a sequence from the first transport layer to the Rth transport layer, and normalization processing is performed, to obtain the precoding matrix corresponding to the nth frequency domain unit.
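The reconstruction at the network device can be pictured with a short NumPy sketch: each reported pair contributes one weighted space-frequency component matrix, and the columns of their sum give the per-frequency-unit precoding vectors. The bases, the dimensions, and the example report below are assumptions for illustration only.

    # Minimal sketch, assuming orthonormal DFT bases: each reported pair
    # (spatial index s, frequency index f, weight w) contributes the
    # component matrix u_s * u_f^H; the weighted sum is the layer's
    # space-frequency matrix, whose n-th column (after normalization)
    # is the precoding vector of frequency domain unit n.
    import numpy as np

    Ns, Nf = 8, 16
    Us = np.fft.fft(np.eye(Ns)) / np.sqrt(Ns)
    Uf = np.fft.fft(np.eye(Nf)) / np.sqrt(Nf)

    # hypothetical report for one layer: (spatial idx, frequency idx, weight)
    report = [(0, 1, 0.9), (2, 5, 0.4 + 0.2j), (3, 9, 0.1j)]

    H = np.zeros((Ns, Nf), dtype=complex)
    for s, f, w in report:
        H += w * np.outer(Us[:, s], Uf[:, f].conj())   # component matrix

    n = 3                                    # frequency domain unit index
    w_n = H[:, n] / np.linalg.norm(H[:, n])  # normalized precoding vector
    print(np.round(w_n, 3))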
It should be understood that the foregoing method is merely an example. According to the method, the precoding vector corresponding to each frequency domain unit at each transport layer is determined based on the space-frequency vector pairs and the weighting coefficients that are indicated in the CSI report, and then the precoding matrix corresponding to the frequency domain unit is determined. This should not constitute any limitation on this application. The specific method for determining, by the network device, the precoding matrix based on the space-frequency vector pairs and the weighting coefficients is not limited in this application. The precoding vector constructed based on the space-frequency vector pairs and the weighting coefficients that are reported by the terminal device is determined based on downlink channels in a plurality of frequency domain units, and can well adapt to the downlink channels owing to frequency domain correlation, thereby ensuring relatively high feedback precision. In addition, compared with the feedback manner of the type II codebook in the current technology, in the method for indicating the vectors used to construct a precoding vector, the feedback overheads do not increase as the quantity of frequency domain units increases. This helps reduce the feedback overheads. In this embodiment of this application, the terminal device may generate a bitmap having a fixed length in the CSI report, so that the network device determines the indication overheads of the other indication information based on the bitmap having the fixed length. Therefore, the network device may determine, based on the CSI report, the space-frequency vector pairs reported by the terminal device for each transport layer and the weighting coefficients corresponding to the space-frequency vector pairs, and then construct the precoding vector corresponding to each frequency domain unit. It should be noted that, in step 210, the designs of the quantity of to-be-reported spatial domain vectors, the quantity of to-be-reported frequency domain vectors, and the quantity of to-be-reported space-frequency vector pairs that are configured for each transport layer are described in detail with reference to Table 1 to Table 8, and these designs are not limited to being used in the bitmap mentioned in the method 200. In addition, when the quantity R of transport layers changes, using a table (for example, any one of Table 1 to Table 8 above) to define the quantity of to-be-reported spatial domain vectors, the quantity of to-be-reported frequency domain vectors, and the quantity of to-be-reported space-frequency vector pairs that are configured for each transport layer is merely a possible implementation, and should not constitute any limitation on this application. In an implementation, the quantity Kr of to-be-reported space-frequency vector pairs that is configured for the rth transport layer in the R transport layers may be determined by Kr = ⌈βr × Lr × Mr⌉.
Lr represents the quantity of to-be-reported spatial domain vectors that is configured for the rth transport layer, Mr represents the quantity of to-be-reported frequency domain vectors that is configured for the rth transport layer, and βr is a coefficient. It should be noted that the coefficient βr is different from the weighting coefficients described above. The coefficient βr may be understood as a ratio of the quantity of to-be-reported space-frequency vector pairs to the product of the quantity of to-be-reported spatial domain vectors and the quantity of to-be-reported frequency domain vectors. In addition, it should be noted that L, M, and β listed below are constants. L, M, and β may all be predefined values. For example, L, M, and β may all be indicated by the network device by using signaling, or may be defined in a protocol. As described above, L may represent, for example, a maximum value, a minimum value, an average value, or the like in the quantities of to-be-reported spatial domain vectors corresponding to the R transport layers. When the quantity of polarization directions is 2, L may be replaced with 2L or may remain unchanged. When the quantity of polarization directions is 2 and L is replaced with 2L, the predefined value may still be L. For example, the network device indicates the value of L by using signaling, or the value of L is defined in a protocol. M may represent, for example, a maximum value, a minimum value, an average value, or the like in the quantities of reported frequency domain vectors corresponding to the R transport layers. The foregoing definitions of L, M, and β may be applicable to the embodiments in this application. However, it should be understood that the foregoing definitions of L, M, and β are merely examples, and should not constitute any limitation on this application. The definitions of L, M, and β are not limited in this application. For brevity, descriptions of a same or similar case are omitted below. Optionally, the coefficient βr configured for the rth transport layer is a variable, the quantity of to-be-reported spatial domain vectors that is configured for each of the R transport layers remains unchanged, and the quantity of to-be-reported frequency domain vectors that is configured for each transport layer remains unchanged. Therefore, the quantity Kr of to-be-reported space-frequency vector pairs that is configured for the rth transport layer may change as the value of the coefficient βr changes. For example, when the quantity of polarization directions is 1, the quantity of to-be-reported spatial domain vectors that is configured for each of the R transport layers is L, the quantity of to-be-reported frequency domain vectors that is configured for each of the R transport layers is M, and the coefficient configured for the rth transport layer is βr. βr may be separately configured. In this case, Kr = ⌈βr × L × M⌉. Table 1 above may be used as an example in which Kr satisfies Kr = ⌈βr × L × M⌉. For another example, when the quantity of polarization directions is 2, the quantity of to-be-reported spatial domain vectors that is configured for each of the R transport layers is 2L, the quantity of to-be-reported frequency domain vectors that is configured for each of the R transport layers is M, and the coefficient configured for the rth transport layer is βr. βr may be separately configured. In this case, Kr = ⌈βr × 2L × M⌉.
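This relationship is easy to sketch. The helper below evaluates Kr = ⌈βr × Lr × Mr⌉ per layer; the per-layer β values and the dimensions are example placements (the β, β/2, β/2 pattern mirrors the R = 3 row of Table 11 below), and the function name is an assumption.

    # Minimal sketch of K_r = ceil(beta_r * L_r * M_r); the per-layer betas
    # and the dimensions are illustrative values (see Table 11, R = 3).
    import math

    def pairs_to_report(betas, L, M, two_polarizations=True):
        Ldim = 2 * L if two_polarizations else L
        return [math.ceil(b * Ldim * M) for b in betas]

    beta = 0.5
    print(pairs_to_report([beta, beta / 2, beta / 2], L=4, M=4))  # [16, 8, 8]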
Certainly, when the quantity of polarization directions is 2, the quantity of to-be-reported spatial domain vectors that is configured for each transport layer may alternatively be L. In this case, Kr = ⌈βr × L × M⌉. Table 5 above may be used as an example in which Kr satisfies Kr = ⌈βr × L × M⌉ or Kr = ⌈βr × 2L × M⌉. For example, if Kr = ⌈βr × 2L × M⌉, K in Table 5 above may be replaced with ⌈β × 2LM⌉, and K/2 may be replaced with ⌈β × 2L × (M/2)⌉. In addition, Table 5 may also be transformed into Table 11.

TABLE 11

         First transport    Second transport    Third transport    Fourth transport
         layer (r = 1)      layer (r = 2)       layer (r = 3)      layer (r = 4)
         L1   M1   β1       L2   M2   β2        L3   M3   β3       L4   M4   β4
R = 1    2L   M    β
R = 2    2L   M    β        2L   M    β
R = 3    2L   M    β        2L   M    β/2       2L   M    β/2
R = 4    2L   M    β/2      2L   M    β/2       2L   M    β/2      2L   M    β/2

As described above, when the coefficient βr is a variable, the quantity Kr of to-be-reported space-frequency vector pairs that is configured for the rth transport layer may change as the value of βr changes. In this case, both Table 5 and Table 11 above may be used to indicate the relationship among Kr, L, M, and βr. It may be understood that Table 5 and Table 11 are merely two possible representation forms used to indicate the relationship among Kr, L, M, and βr. The relationship among Kr, L, M, and βr may alternatively be shown in Table 12.

TABLE 12

         First transport layer (r = 1)      Second transport layer (r = 2)
         L1   M1   β1     K1                L2   M2   β2     K2
R = 1    2L   M    β      ⌈β × 2LM⌉
R = 2    2L   M    β      ⌈β × 2LM⌉         2L   M    β      ⌈β × 2LM⌉
R = 3    2L   M    β      ⌈β × 2LM⌉         2L   M    β/2    ⌈(β/2) × 2LM⌉
R = 4    2L   M    β/2    ⌈(β/2) × 2LM⌉     2L   M    β/2    ⌈(β/2) × 2LM⌉

         Third transport layer (r = 3)      Fourth transport layer (r = 4)
         L3   M3   β3     K3                L4   M4   β4     K4
R = 3    2L   M    β/2    ⌈(β/2) × 2LM⌉
R = 4    2L   M    β/2    ⌈(β/2) × 2LM⌉     2L   M    β/2    ⌈(β/2) × 2LM⌉

Actually, it may be considered that Table 5, Table 11, and Table 12 above are equivalent. However, it should be understood that the relationship among Kr, L, M, and βr is not limited to those shown in Table 5, Table 11, and Table 12 above. For example, a ratio of βr to β may be directly configured. For brevity, tables are not listed herein one by one for description. With reference to Table 13 to Table 15, the following lists another example in which the quantity Kr of to-be-reported space-frequency vector pairs satisfies Kr = ⌈βr × 2L × M⌉. Similar to the foregoing tables, it may be considered that Table 13, Table 14, and Table 15 are equivalent.

TABLE 13

         First transport    Second transport    Third transport    Fourth transport
         layer (r = 1)      layer (r = 2)       layer (r = 3)      layer (r = 4)
         L1   M1   β1       L2   M2   β2        L3   M3   β3       L4   M4   β4
R = 1    2L   M    β
R = 2    2L   M    β        2L   M    β
R = 3    2L   M    β        2L   M    β/2       2L   M    β/2
R = 4    2L   M    β        2L   M    β/3       2L   M    β/3     2L   M    β/3

Table 13 may alternatively be represented as Table 14 or Table 15.

TABLE 14

         First transport layer (r = 1)      Second transport layer (r = 2)
         L1   M1   K1                       L2   M2   K2
R = 1    2L   M    ⌈β × 2LM⌉
R = 2    2L   M    ⌈β × 2LM⌉                2L   M    ⌈β × 2LM⌉
R = 3    2L   M    ⌈β × 2LM⌉                2L   M    ⌈(β/2) × 2LM⌉
R = 4    2L   M    ⌈β × 2LM⌉                2L   M    ⌈(β/3) × 2LM⌉

         Third transport layer (r = 3)      Fourth transport layer (r = 4)
         L3   M3   K3                       L4   M4   K4
R = 3    2L   M    ⌈(β/2) × 2LM⌉
R = 4    2L   M    ⌈(β/3) × 2LM⌉            2L   M    ⌈(β/3) × 2LM⌉

TABLE 15

         First transport layer (r = 1)      Second transport layer (r = 2)
         L1   M1   β1     K1                L2   M2   β2     K2
R = 1    2L   M    β      ⌈β × 2LM⌉
R = 2    2L   M    β      ⌈β × 2LM⌉         2L   M    β      ⌈β × 2LM⌉
R = 3    2L   M    β      ⌈β × 2LM⌉         2L   M    β/2    ⌈(β/2) × 2LM⌉
R = 4    2L   M    β      ⌈β × 2LM⌉         2L   M    β/3    ⌈(β/3) × 2LM⌉

         Third transport layer (r = 3)      Fourth transport layer (r = 4)
         L3   M3   β3     K3                L4   M4   β4     K4
R = 3    2L   M    β/2    ⌈(β/2) × 2LM⌉
R = 4    2L   M    β/3    ⌈(β/3) × 2LM⌉     2L   M    β/3    ⌈(β/3) × 2LM⌉

In the following descriptions, Table 16 to Table 23 show several examples in which the quantity Kr of to-be-reported space-frequency vector pairs satisfies Kr = ⌈βr × 2L × M⌉.
TABLE 16

        First transport    Second transport    Third transport    Fourth transport
        layer (r = 1)      layer (r = 2)       layer (r = 3)      layer (r = 4)
        L1   M1   β1       L2   M2   β2        L3   M3   β3       L4   M4   β4
R = 1   2L   M    β
R = 2   2L   M    β        2L   M    β
R = 3   2L   M    β        2L   M    β/2       2L   M    β/2
R = 4   2L   M    β        2L   M    β/2       2L   M    β/2      2L   M    β/4

TABLE 17

        First transport    Second transport    Third transport    Fourth transport
        layer (r = 1)      layer (r = 2)       layer (r = 3)      layer (r = 4)
        L1   M1   β1       L2   M2   β2        L3   M3   β3       L4   M4   β4
R = 1   2L   M    β
R = 2   2L   M    β        2L   M    β
R = 3   2L   M    β        2L   M    β/2       2L   M    β/2
R = 4   2L   M    β        2L   M    β/2       2L   M    β/2      2L   M    β/2

TABLE 18

        First transport    Second transport    Third transport    Fourth transport
        layer (r = 1)      layer (r = 2)       layer (r = 3)      layer (r = 4)
        L1   M1   β1       L2   M2   β2        L3   M3   β3       L4   M4   β4
R = 1   2L   M    β
R = 2   2L   M    β        2L   M    β
R = 3   2L   M    2β/3     2L   M    2β/3      2L   M    2β/3
R = 4   2L   M    β/2      2L   M    β/2       2L   M    β/2      2L   M    β/2

TABLE 19

        First transport    Second transport    Third transport    Fourth transport
        layer (r = 1)      layer (r = 2)       layer (r = 3)      layer (r = 4)
        L1   M1   β1       L2   M2   β2        L3   M3   β3       L4   M4   β4
R = 1   2L   M    β
R = 2   2L   M    β        2L   M    β
R = 3   2L   M    2β/3     2L   M    2β/3      2L   M    2β/3
R = 4   2L   M    2β/3     2L   M    2β/3      2L   M    β/3      2L   M    β/3

TABLE 20

        First transport    Second transport    Third transport    Fourth transport
        layer (r = 1)      layer (r = 2)       layer (r = 3)      layer (r = 4)
        L1   M1   β1       L2   M2   β2        L3   M3   β3       L4   M4   β4
R = 1   2L   M    β
R = 2   2L   M    β        2L   M    β
R = 3   2L   M    β/2      2L   M    β/2       2L   M    β/2
R = 4   2L   M    β/2      2L   M    β/2       2L   M    β/2      2L   M    β/2

TABLE 21

        First transport    Second transport    Third transport    Fourth transport
        layer (r = 1)      layer (r = 2)       layer (r = 3)      layer (r = 4)
        L1   M1   β1       L2   M2   β2        L3   M3   β3       L4   M4   β4
R = 1   2L   M    β
R = 2   2L   M    β        2L   M    β
R = 3   2L   M    β        2L   M    β         2L   M    β/2
R = 4   2L   M    β        2L   M    β         2L   M    β/2      2L   M    β/2

TABLE 22

        First transport    Second transport    Third transport    Fourth transport
        layer (r = 1)      layer (r = 2)       layer (r = 3)      layer (r = 4)
        L1   M1   β1       L2   M2   β2        L3   M3   β3       L4   M4   β4
R = 1   2L   M    β
R = 2   2L   M    β        2L   M    β
R = 3   2L   M    β        2L   M    β         2L   M    β/2
R = 4   2L   M    β        2L   M    β         2L   M    β/4      2L   M    β/4

TABLE 23

        First transport    Second transport    Third transport    Fourth transport
        layer (r = 1)      layer (r = 2)       layer (r = 3)      layer (r = 4)
        L1   M1   β1       L2   M2   β2        L3   M3   β3       L4   M4   β4
R = 1   2L   M    β
R = 2   2L   M    β        2L   M    β
R = 3   2L   M    β        2L   M    β         2L   M    β/2
R = 4   2L   M    β        2L   M    β/2       2L   M    β/2      2L   M    β/2

It should be understood that any one of Table 16 to Table 23 above may be transformed based on a relationship between Table 13 and Table 14 above and a relationship between Table 13 and Table 15 above, to obtain an equivalent table, to indicate the relationship among Kr, L, M, and βr.

It should be further understood that the relationship among Kr, L, M, and βr is described above with reference to Table 11 to Table 23 only for ease of understanding. This should not constitute any limitation on this application. As described above, when the quantity of polarization directions is 2, or when the quantity of polarization directions is 1, Kr may also satisfy Kr=⌈βr×LM⌉. In this case, values of L1 to L4 in the foregoing tables may all be replaced with L.

It should be further understood that Table 11 to Table 23 are merely examples, and should not constitute any limitation on this application. When Kr has the foregoing relationship with L, M, and βr, not all parameters shown in the tables need to be configured. For example, when values in the four columns L1 to L4 in the tables are the same, for example, are all 2L, only one column may alternatively be shown. For another example, when values in the four columns M1 to M4 in the tables are the same, for example, are all M, only one column may alternatively be shown. For still another example, values of L, M, and Kr may be directly shown, and β1 to β4 are not shown. Based on a same concept, a person skilled in the art may make proper transformations and adjustments based on the foregoing tables. These transformations and adjustments shall all fall within the protection scope of this application.
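As an aside, the equivalence between the β-form tables (such as Table 13) and the K-form tables (such as Table 14) can be illustrated with a short Python sketch that maps each (Lr, Mr, βr) cell to Kr=⌈βr×2LM⌉. The row below mirrors the R = 3 row of Table 13, and the concrete numbers are assumptions for illustration only.

    import math

    L, M, beta = 4, 8, 0.25
    # (L_r, M_r, beta_r) cells of the R = 3 row in the beta-form table
    row_beta = [(2 * L, M, beta), (2 * L, M, beta / 2), (2 * L, M, beta / 2)]
    # corresponding (L_r, M_r, K_r) cells of the K-form table
    row_K = [(l, m, math.ceil(b * l * m)) for l, m, b in row_beta]
    print(row_K)  # [(8, 8, 16), (8, 8, 8), (8, 8, 8)]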
Optionally, the quantity Mr of to-be-reported frequency domain vectors that is configured for the rth transport layer in the R transport layers is a variable, the quantity of to-be-reported spatial domain vectors that is configured for each of the R transport layers remains unchanged, and the coefficient configured for each transport layer remains unchanged. Therefore, the quantity Kr of to-be-reported space-frequency vector pairs that is configured for the rth transport layer may change as the quantity Mr of to-be-reported frequency domain vectors changes.

For example, when the quantity of polarization directions is 1, the quantity of to-be-reported spatial domain vectors that is configured for each of the R transport layers is L, the coefficient configured for each transport layer is β, and the quantity of to-be-reported frequency domain vectors that is configured for the rth transport layer is Mr. Mr may be separately configured. In this case, Kr=⌈β×LMr⌉. Table 2 above may be used as several examples in which Kr satisfies Kr=⌈β×LMr⌉.

For another example, when the quantity of polarization directions is 2, the quantity of to-be-reported spatial domain vectors that is configured for each of the R transport layers is 2L, the coefficient configured for each transport layer is β, and the quantity of to-be-reported frequency domain vectors that is configured for the rth transport layer is Mr. Mr may be separately configured. In this case, Kr=⌈β×2LMr⌉. Certainly, when the quantity of polarization directions is 2, the quantity of to-be-reported spatial domain vectors that is configured for each transport layer may alternatively be L. In this case, Kr=⌈β×LMr⌉.

Table 6 above may be used as two examples in which Kr satisfies Kr=⌈β×2LMr⌉ or Kr=⌈β×LMr⌉. For example, if Kr=⌈β×2LMr⌉, K in Table 6 above may be replaced with ⌈β×2LM⌉, and K/2 may be replaced with ⌈β×2L×(M/2)⌉. In addition, Table 6 may also be transformed into Table 24.

TABLE 24

        First transport    Second transport    Third transport    Fourth transport
        layer (r = 1)      layer (r = 2)       layer (r = 3)      layer (r = 4)
        L1   M1    β1      L2   M2    β2       L3   M3    β3      L4   M4    β4
R = 1   2L   M     β
R = 2   2L   M     β       2L   M     β
R = 3   2L   M     β       2L   M/2   β        2L   M/2   β
R = 4   2L   M/2   β       2L   M/2   β        2L   M/2   β       2L   M/2   β

As described above, when Mr is a variable, the quantity Kr of to-be-reported space-frequency vector pairs that is configured for the rth transport layer may change as a value of Mr changes. In this case, both Table 6 and Table 24 above may be used to indicate the relationship among Kr, L, Mr, and β. It may be understood that Table 6 and Table 24 are merely two possible representation forms used to indicate the relationship among Kr, L, Mr, and β. The relationship among Kr, L, Mr, and β may alternatively be represented in the representation form shown in Table 25.

TABLE 25

        First transport layer (r = 1)          Second transport layer (r = 2)
        L1   M1    β1   K1                     L2   M2    β2   K2
R = 1   2L   M     β    ⌈β × 2LM⌉
R = 2   2L   M     β    ⌈β × 2LM⌉              2L   M     β    ⌈β × 2LM⌉
R = 3   2L   M     β    ⌈β × 2LM⌉              2L   M/2   β    ⌈β × 2L × (M/2)⌉
R = 4   2L   M/2   β    ⌈β × 2L × (M/2)⌉       2L   M/2   β    ⌈β × 2L × (M/2)⌉

        Third transport layer (r = 3)          Fourth transport layer (r = 4)
        L3   M3    β3   K3                     L4   M4    β4   K4
R = 3   2L   M/2   β    ⌈β × 2L × (M/2)⌉
R = 4   2L   M/2   β    ⌈β × 2L × (M/2)⌉       2L   M/2   β    ⌈β × 2L × (M/2)⌉

Actually, it may be considered that Table 6, Table 24, and Table 25 above are equivalent. It should be understood that the relationship among Kr, L, Mr, and β is not limited to the representation forms shown in Table 6, Table 24, and Table 25 above. For example, a ratio of Mr to M may alternatively be directly configured in a table. For brevity, tables are not listed herein one by one for description.
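The following sketch illustrates the variant just described, in which Mr varies per layer while L and β are fixed, so that Kr=⌈β×2LMr⌉. The Mr values mirror the R = 4 row of Table 24 (Mr = M/2 for every layer), and the concrete numbers are assumptions for illustration only.

    import math

    L, M, beta = 4, 8, 0.25
    Ms = [M // 2] * 4                        # M_r for r = 1..4 when R = 4
    K = [math.ceil(beta * 2 * L * m) for m in Ms]
    print(K)  # [8, 8, 8, 8]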
It should be understood that the relationship among Kr, L, Mr, and β is described above with reference to Table 24 and Table 25 only for ease of understanding. However, this should not constitute any limitation on this application. As described above, when the quantity of polarization directions is 2, or when the quantity of polarization directions is 1, Kr may also satisfy Kr=⌈β×LMr⌉. In this case, values of L1 to L4 in the foregoing tables may all be replaced with L.

It should be further understood that Table 24 and Table 25 are merely examples, and should not constitute any limitation on this application. When Kr has the foregoing relationship with L, Mr, and β, not all parameters shown in the tables need to be configured. For example, when values in the four columns β1 to β4 in Table 24 are all β, only one column may alternatively be shown. For another example, values of L, Mr, and Kr may be directly shown, and β1 to β4 are not shown. Based on a same concept, a person skilled in the art may make proper transformations and adjustments based on the foregoing tables. These transformations and adjustments shall all fall within the protection scope of this application.

It should be noted that, in the examples described above, the quantity of to-be-reported frequency domain vectors that is configured for each transport layer is a variable, and the quantity of to-be-reported spatial domain vectors that is configured for each transport layer and the coefficient configured for each transport layer are both fixed values.

Based on a same concept, optionally, the quantity Lr of to-be-reported spatial domain vectors that is configured for the rth transport layer in the R transport layers is a variable, the quantity of to-be-reported frequency domain vectors that is configured for each of the R transport layers remains unchanged, and the coefficient configured for each transport layer remains unchanged. Therefore, the quantity Kr of to-be-reported space-frequency vector pairs that is configured for the rth transport layer may change as the quantity Lr of to-be-reported spatial domain vectors changes.

For example, when the quantity of polarization directions is 1, the quantity of to-be-reported frequency domain vectors that is configured for each of the R transport layers is M, the coefficient configured for each transport layer is β, and the quantity of to-be-reported spatial domain vectors that is configured for the rth transport layer is Lr. Lr may be separately configured. In this case, Kr=⌈β×LrM⌉.

For another example, when the quantity of polarization directions is 2, the quantity of to-be-reported frequency domain vectors that is configured for each of the R transport layers is M, the coefficient configured for each transport layer is β, and the quantity of to-be-reported spatial domain vectors that is configured for the rth transport layer is 2Lr. 2Lr may be separately configured. In this case, Kr=⌈β×2LrM⌉. Certainly, when the quantity of polarization directions is 2, the quantity of to-be-reported spatial domain vectors that is configured for each transport layer may alternatively be Lr. In this case, Kr=⌈β×LrM⌉.

It should be understood that the relationship among Kr, L, Mr, and β is described above with reference to Table 25. When Lr is a variable, and M and β remain unchanged, a relationship among Kr, Lr, M, and β is similar to that shown in Table 6, Table 24, or Table 25. For brevity, no example is provided herein for description.
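The case in which Lr varies can be sketched in the same way: Kr=⌈β×2LrM⌉ with 2 polarization directions (or Kr=⌈β×LrM⌉ with 1). The per-layer Lr values below are assumptions for illustration only. The joint case of the next paragraph, in which both Lr and Mr vary, follows by letting both factors change per layer.

    import math

    M, beta = 8, 0.25
    Ls = [4, 4, 2, 2]                        # L_r for r = 1..4
    K = [math.ceil(beta * 2 * l * M) for l in Ls]
    print(K)  # [16, 16, 8, 8]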
Optionally, the quantity of to-be-reported spatial domain vectors that is configured for the rth transport layer in the R transport layers and the quantity of to-be-reported frequency domain vectors that is configured for the rth transport layer are both variables, and the coefficient configured for each transport layer remains unchanged. Therefore, the quantity Kr of to-be-reported space-frequency vector pairs that is configured for the rth transport layer may change as the quantity Lr of to-be-reported spatial domain vectors and the quantity Mr of to-be-reported frequency domain vectors change. When either of the values of Lr and Mr changes, a value of Kr changes accordingly.

For example, the quantity of to-be-reported spatial domain vectors that is configured for the rth transport layer is Lr, the quantity of to-be-reported frequency domain vectors that is configured for the rth transport layer is Mr, and the coefficient configured for each transport layer is β. Lr and Mr may be separately configured for each transport layer. In this case, Kr=⌈β×LrMr⌉.

Table 26 to Table 28 below show an example in which Kr satisfies Kr=⌈β×LrMr⌉. In Table 26 to Table 28, an example in which the quantity of polarization directions is 2 is used to show the relationship among Kr, Lr, Mr, and β.

TABLE 26

        First transport    Second transport    Third transport    Fourth transport
        layer (r = 1)      layer (r = 2)       layer (r = 3)      layer (r = 4)
        L1   M1   β1       L2   M2   β2        L3   M3    β3      L4   M4    β4
R = 1   2L   M    β
R = 2   2L   M    β        2L   M    β
R = 3   2L   M    β        2L   M    β         L    M/2   β
R = 4   2L   M    β        2L   M    β         L    M/2   β       L    M/2   β

As described above, when Lr and Mr are variables, the quantity Kr of to-be-reported space-frequency vector pairs that is configured for the rth transport layer may change as the values of Lr and Mr change. In this case, Table 26 above may be used to indicate the relationship among Kr, Lr, Mr, and β. It may be understood that Table 26 is merely a possible representation form used to represent the relationship among Kr, Lr, Mr, and β. The relationship that is among Kr, Lr, Mr, and β and that is shown in Table 26 above may alternatively be represented in a form shown in Table 27 or Table 28.

TABLE 27

        First transport       Second transport      Third transport            Fourth transport
        layer (r = 1)         layer (r = 2)         layer (r = 3)              layer (r = 4)
        L1   M1   K1          L2   M2   K2          L3   M3    K3              L4   M4    K4
R = 1   2L   M    ⌈β × 2LM⌉
R = 2   2L   M    ⌈β × 2LM⌉   2L   M    ⌈β × 2LM⌉
R = 3   2L   M    ⌈β × 2LM⌉   2L   M    ⌈β × 2LM⌉   L    M/2   ⌈β × 2L × (M/2)⌉
R = 4   2L   M    ⌈β × 2LM⌉   2L   M    ⌈β × 2LM⌉   L    M/2   ⌈β × 2L × (M/2)⌉   L    M/2   ⌈β × 2L × (M/2)⌉

TABLE 28

        First transport layer (r = 1)      Second transport layer (r = 2)
        L1   M1    β1   K1                 L2   M2    β2   K2
R = 1   2L   M     β    ⌈β × 2LM⌉
R = 2   2L   M     β    ⌈β × 2LM⌉          2L   M     β    ⌈β × 2LM⌉
R = 3   2L   M     β    ⌈β × 2LM⌉          2L   M     β    ⌈β × 2LM⌉
R = 4   2L   M     β    ⌈β × 2LM⌉          2L   M     β    ⌈β × 2LM⌉

        Third transport layer (r = 3)      Fourth transport layer (r = 4)
        L3   M3    β3   K3                 L4   M4    β4   K4
R = 3   L    M/2   β    ⌈β × 2L × (M/2)⌉
R = 4   L    M/2   β    ⌈β × 2L × (M/2)⌉   L    M/2   β    ⌈β × 2L × (M/2)⌉

Table 26, Table 27, and Table 28 show an example of the relationship among Kr, Lr, Mr, and β. Similar to the relationship among Table 6, Table 24, and Table 25 listed above, it may be considered that Table 26, Table 27, and Table 28 are equivalent. However, it should be understood that the relationship among Kr, Lr, Mr, and β is not limited to the representation forms shown in Table 26, Table 27, and Table 28 above. For example, a ratio of Mr to M may alternatively be directly configured in a table, and/or a ratio of Lr to L may be directly configured in the table. For brevity, tables are not listed herein one by one for description.

In the following descriptions, Table 29 and Table 30 show several examples in which the quantity Kr of to-be-reported space-frequency vector pairs satisfies Kr=⌈β×LrMr⌉.
TABLE 29

        First transport    Second transport    Third transport    Fourth transport
        layer (r = 1)      layer (r = 2)       layer (r = 3)      layer (r = 4)
        L1   M1   β1       L2   M2    β2       L3   M3    β3      L4   M4    β4
R = 1   2L   M    β
R = 2   2L   M    β        2L   M     β
R = 3   2L   M    β        2L   M/2   β        2L   M/2   β
R = 4   2L   M    β        2L   M/2   β        L    M/2   β       L    M/2   β

TABLE 30

        First transport    Second transport    Third transport    Fourth transport
        layer (r = 1)      layer (r = 2)       layer (r = 3)      layer (r = 4)
        L1   M1   β1       L2   M2   β2        L3   M3    β3      L4   M4    β4
R = 1   2L   M    β
R = 2   2L   M    β        2L   M    β
R = 3   2L   M    β        L    M    β         L    M     β
R = 4   2L   M    β        L    M    β         L    M/2   β       L    M/2   β

It should be understood that either of Table 29 and Table 30 above may be transformed based on a relationship between Table 26 and Table 27 above and a relationship between Table 26 and Table 28 above, to obtain an equivalent table, to indicate the relationship among Kr, Lr, Mr, and β. It should be further understood that the relationship among Kr, Lr, Mr, and β is described above with reference to Table 26 to Table 30 only for ease of understanding. However, this should not constitute any limitation on this application. As described above, when the quantity of polarization directions is 2 or 1, all "L" in the foregoing tables may be replaced with "L/2".

It should be further understood that Table 26 to Table 30 are merely examples, and should not constitute any limitation on this application. When Kr has the foregoing relationship with Lr, Mr, and β, not all parameters shown in the tables need to be configured. For example, when values in the four columns β1 to β4 in the tables are all β, only one column may alternatively be shown. For another example, values of Lr, Mr, and Kr may be directly shown, and β1 to β4 are not shown. Based on a same concept, a person skilled in the art may make proper transformations and adjustments based on the foregoing tables. These transformations and adjustments shall all fall within the protection scope of this application.

Based on the foregoing technical solutions, the network device may flexibly configure, for the terminal device, the quantity of to-be-reported spatial domain vectors for each transport layer, the quantity of to-be-reported frequency domain vectors for each transport layer, and the quantity of to-be-reported space-frequency vector pairs for each transport layer.

FIG. 9 is a schematic flowchart of a method 300 for indicating vectors used to construct a precoding vector from a perspective of device interaction according to another embodiment of this application. As shown in the figure, the method 300 may include step 310 to step 330. The following describes the steps in the method 300 in detail.

In step 310, a terminal device generates a CSI report. The CSI report is used to indicate a quantity of space-frequency vector pairs reported for R transport layers.

In this embodiment of this application, when the terminal device indicates, by using a binary number, the quantity of space-frequency vector pairs reported for the R transport layers, the terminal device may indicate a quantity of space-frequency vector pairs reported for each transport layer, or indicate a total quantity of space-frequency vector pairs reported for the R transport layers. A field used to indicate the quantity of space-frequency vector pairs reported for the R transport layers may be an indication field having a fixed length, and the length is irrelevant to a quantity R of the transport layers.

Optionally, indication overheads of the quantity of space-frequency vector pairs reported for the R transport layers are a maximum value that is of ∑r=1R⌈log2 Kr⌉ and that is determined by traversing R from 1 to Rm.
In other words, the length of the indication field may be the maximum value that is of ∑r=1R⌈log2 Kr⌉ and that is determined by traversing R from 1 to Rm.

Kr may represent a quantity of to-be-reported space-frequency vector pairs that is pre-configured for an rth transport layer, where Kr≥1, and Kr is an integer. As described above, Kr may specifically represent a total quantity of to-be-reported space-frequency vector pairs that is pre-configured for the rth transport layer, or may represent a value obtained by subtracting a minimum quantity of to-be-reported space-frequency vector pairs that is predefined for the rth transport layer from a total quantity of to-be-reported space-frequency vector pairs that is pre-configured for the rth transport layer.

Alternatively, Kr may represent a quantity of to-be-reported weighting coefficients that is pre-configured for the rth transport layer, where Kr≥1, and Kr is an integer. As described above, Kr may specifically represent a total quantity of to-be-reported weighting coefficients that is pre-configured for the rth transport layer, or may represent a value obtained by subtracting a minimum quantity of to-be-reported weighting coefficients that is predefined for the rth transport layer from a total quantity of to-be-reported weighting coefficients that is pre-configured for the rth transport layer.

However, it should be understood that the definitions of the parameter Kr in this application are merely examples, and should not constitute any limitation on this application. For example, Kr may alternatively be defined as the total quantity of to-be-reported space-frequency vector pairs (or weighting coefficients) that is pre-configured for the rth transport layer. In this case, the indication overheads of the quantity of space-frequency vector pairs reported for the R transport layers are a maximum value that is of ∑r=1R⌈log2(Kr−ar)⌉ and that is determined by traversing R from 1 to Rm. In other words, the length of the indication field may be the maximum value that is of ∑r=1R⌈log2(Kr−ar)⌉ and that is determined by traversing R from 1 to Rm. ar represents the minimum quantity of to-be-reported space-frequency vector pairs (or weighting coefficients) that is predefined for the rth transport layer, and ar is a positive integer.

In this embodiment of this application, when the parameter Kr is used, Kr may be understood as the total quantity of to-be-reported space-frequency vector pairs that is pre-configured for the rth transport layer, or the value obtained by subtracting the minimum quantity of to-be-reported space-frequency vector pairs (or weighting coefficients) that is predefined for the rth transport layer from the total quantity of to-be-reported space-frequency vector pairs (or weighting coefficients) that is pre-configured for the rth transport layer.

Specifically, when the quantity R of the transport layers is specified, the terminal device may determine, based on the quantity of to-be-reported space-frequency vector pairs that is configured for each of the R transport layers, indication overheads used to indicate the quantity of space-frequency vector pairs reported for each transport layer. For example, if a quantity Tr of space-frequency vector pairs reported by the terminal device for the rth transport layer is less than or equal to the quantity Kr of to-be-reported space-frequency vector pairs that is configured for the rth transport layer, indication overheads of the quantity of space-frequency vector pairs reported for the rth transport layer may be ⌈log2 Kr⌉ bits.
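As a concrete illustration of the fixed-length design just described, the following sketch computes the field length as the maximum, over R from 1 to Rm, of the sum of ⌈log2 Kr⌉ across the first R transport layers. The mapping K_PER_R from each possible rank R to the configured Kr values is an assumption made only for illustration.

    import math

    Rm = 4
    # Assumed configuration: the K_r values for each possible rank R
    K_PER_R = {1: [32], 2: [32, 32], 3: [32, 16, 16], 4: [16, 16, 16, 16]}

    field_len = max(
        sum(math.ceil(math.log2(k)) for k in K_PER_R[R])
        for R in range(1, Rm + 1)
    )
    print(field_len)  # R = 4 gives 4 + 4 + 4 + 4 = 16 bits, the maximum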
The indication overheads of the total quantity of space-frequency vector pairs reported by the terminal device for the R transport layers may be ∑r=1R⌈log2 Kr⌉ bits. Because a value of R cannot be determined in advance, the quantity of to-be-reported space-frequency vector pairs that is configured for each transport layer may be different. In this case, the indication overheads of the quantity of space-frequency vector pairs reported for the R transport layers may be defined as the maximum value that is of ∑r=1R⌈log2 Kr⌉ and that is determined by traversing R from 1 to Rm. Therefore, the length of the indication field may be irrelevant to the quantity R of the transport layers.

Based on the foregoing design, the terminal device may indicate, by using the indication field, the quantity of space-frequency vector pairs reported for each of the R transport layers.

Optionally, indication overheads of the quantity of space-frequency vector pairs reported for the R transport layers are a maximum value that is of ⌈log2(∑r=1R Kr)⌉ and that is determined by traversing R from 1 to Rm.

As described above, Kr may specifically represent the total quantity of to-be-reported space-frequency vector pairs that is pre-configured for the rth transport layer, or may represent the value obtained by subtracting the minimum quantity of to-be-reported space-frequency vector pairs that is predefined for the rth transport layer from the total quantity of to-be-reported space-frequency vector pairs that is pre-configured for the rth transport layer. Alternatively, Kr may represent the quantity of to-be-reported weighting coefficients that is pre-configured for the rth transport layer, where Kr≥1, and Kr is an integer. As described above, Kr may specifically represent the total quantity of to-be-reported weighting coefficients that is pre-configured for the rth transport layer, or may represent the value obtained by subtracting the minimum quantity of to-be-reported weighting coefficients that is predefined for the rth transport layer from the total quantity of to-be-reported weighting coefficients that is pre-configured for the rth transport layer.

However, it should be understood that the definitions of the parameter Kr in this application are merely examples, and should not constitute any limitation on this application. For example, Kr may alternatively be defined as the total quantity of to-be-reported space-frequency vector pairs (or weighting coefficients) that is pre-configured for the rth transport layer. In this case, the indication overheads of the quantity of space-frequency vector pairs reported for the R transport layers are the maximum value that is of ⌈log2(∑r=1R(Kr−ar))⌉ and that is determined by traversing R from 1 to Rm. In other words, the length of the indication field may be the maximum value that is of ⌈log2(∑r=1R(Kr−ar))⌉ and that is determined by traversing R from 1 to Rm. ar represents the minimum quantity of to-be-reported space-frequency vector pairs (or weighting coefficients) that is predefined for the rth transport layer, and ar is a positive integer.
In this embodiment of this application, when the parameter Kr is used, Kr may be understood as the total quantity of to-be-reported space-frequency vector pairs that is pre-configured for the rth transport layer, or the value obtained by subtracting the minimum quantity of to-be-reported space-frequency vector pairs (or weighting coefficients) that is predefined for the rth transport layer from the total quantity of to-be-reported space-frequency vector pairs (or weighting coefficients) that is pre-configured for the rth transport layer.

For ease of description below, the maximum value that is of ⌈log2(∑r=1R Kr)⌉ and that is determined by traversing R from 1 to Rm is denoted as Z. In this case, the length of the indication field of the quantity of space-frequency vectors reported for the R transport layers is Z bits.

Specifically, when the quantity R of the transport layers is specified, the terminal device may determine, based on the quantity of to-be-reported space-frequency vector pairs that is configured for each of the R transport layers, the total quantity of to-be-reported space-frequency vector pairs that is configured for the R transport layers. For example, if the reporting quantity configured for the rth transport layer is Kr, the total reporting quantity configured for the R transport layers is ∑r=1R Kr. The terminal device may determine, based on the total quantity of to-be-reported space-frequency vector pairs that is configured for the R transport layers, the indication overheads of the space-frequency vector pairs reported for the R transport layers. For example, when the total reporting quantity is ∑r=1R Kr, the indication overheads thereof may be ⌈log2(∑r=1R Kr)⌉ bits.

Because a value of R cannot be determined in advance, the quantity of to-be-reported space-frequency vector pairs that is configured for each transport layer may be different. In this case, the indication overheads of the quantity of space-frequency vector pairs reported for the R transport layers may be defined as the maximum value that is of ⌈log2(∑r=1R Kr)⌉ and that is determined by traversing R from 1 to Rm. Therefore, the length of the indication field may be irrelevant to the quantity R of the transport layers.

In an implementation, when the indication field is not fully filled, the indication field may be padded with any bit to ensure that the length of the entire indication field is fixed. In this case, the indication field may include a valid bit and a padding bit. The valid bit is used to indicate the quantity of space-frequency vector pairs reported for the R transport layers, and a remaining bit may be padded with any value. The padding bit may be located before the valid bit, or may be located after the valid bit. This is not limited in this application.

In another implementation, the entire indication field may be used to indicate the quantity of space-frequency vector pairs reported for the R transport layers. The Z bits in the indication field may be used to indicate a value that is of ⌈log2(∑r=1R Kr)⌉ and that is obtained when R is any value from 1 to Rm.

Based on the foregoing designs, the terminal device may indicate, by using the indication field, the total quantity of space-frequency vector pairs reported for the R transport layers. It may be understood that, in this case, a network device cannot determine, based on the indication field, the quantity of space-frequency vector pairs reported for each transport layer.
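The second design and the padding rule can be sketched together as follows: Z is the maximum of ⌈log2(∑r=1R Kr)⌉ over R from 1 to Rm, and the valid bits of the indication field are padded up to Z bits (here the padding is placed after the valid bits; as noted above, either position is allowed). The K_PER_R configuration is an assumption made only for illustration.

    import math

    Rm = 4
    K_PER_R = {1: [32], 2: [32, 32], 3: [32, 16, 16], 4: [16, 16, 16, 16]}

    # Z = max over R of ceil(log2(sum of K_r over the first R layers))
    Z = max(math.ceil(math.log2(sum(K_PER_R[R]))) for R in range(1, Rm + 1))

    def indication_field(reported_total: int) -> str:
        valid = format(reported_total, "b")       # valid bits
        assert len(valid) <= Z, "reported quantity does not fit in Z bits"
        return valid + "0" * (Z - len(valid))     # padding bits, any values

    print(Z, indication_field(37))  # prints: 6 100101 (exactly Z bits here)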
It should be understood that, when it is defined in a protocol that one of the two designs of the overheads of the indication field that are listed above is used, the terminal device may generate a corresponding indication field based on the defined design, and the network device may also parse the indication field based on the corresponding design.

Kr may represent a quantity of to-be-reported space-frequency vector pairs that is configured for the rth transport layer when the quantity of transport layers is R. As described in the foregoing method 200, a value of Kr may be predefined, may be configured by the network device by using first indication information, or may be determined based on another parameter. This is not limited in this application. A specific method for determining the quantity of space-frequency vector pairs reported for each transport layer has been described in detail in the foregoing method 200. For brevity, details are not described herein again.

It should be noted that, when the definition of Kr changes, the formula listed above for calculating the indication overheads of the reporting quantity for the R transport layers also changes accordingly. A method that is used to calculate the indication overheads of the reporting quantity for the R transport layers and that is determined by a person skilled in the art based on a same inventive concept shall fall within the protection scope of this application.

Optionally, the indication of the quantity of space-frequency vector pairs reported for the R transport layers is located in a first part of the CSI report.

As described above, a length of the first part of the CSI report is predefined. A length of the indication of the quantity of space-frequency vector pairs reported for the R transport layers may be a fixed value, and is irrelevant to the quantity R of the transport layers. Therefore, the indication of the quantity of space-frequency vector pairs reported for the R transport layers may be designed in the first part of the CSI report. Overheads of the first part of the CSI report may be fixed, and do not change with the quantity R of the transport layers. The overheads of the first part may be predefined in a protocol, so that after receiving the CSI report, the network device decodes the first part based on the predefined length.

Further, the CSI report further includes a second part, and the second part includes position indications of the space-frequency vector pairs reported for the R transport layers.

Positions of the space-frequency vector pairs reported for the R transport layers may be relative positions of the space-frequency vector pairs reported for the R transport layers in a plurality of predetermined space-frequency vector pairs. The plurality of predetermined space-frequency vector pairs may be determined by one or more spatial domain vectors and one or more frequency domain vectors. For example, relative positions of the space-frequency vector pairs reported for each transport layer in the plurality of predetermined space-frequency vector pairs may be indicated by using the bitmap in the foregoing implementations, or may be indicated by using an index of a combination of the space-frequency vector pairs reported for each transport layer in the plurality of space-frequency vector pairs.

A specific method for indicating, by using the bitmap, positions of the space-frequency vector pairs reported for each transport layer has been described in detail in the foregoing method 200.
For brevity, details are not described herein again. A specific method for indicating, by using the index, the space-frequency vector pairs reported for each transport layer is described in detail below.

The rth transport layer is used as an example. As described above, the Tr space-frequency vector pairs reported by the terminal device for the rth transport layer may be one or more space-frequency vector pairs selected from Lr×Mr space-frequency vector pairs that are determined by Lr spatial domain vectors and Mr frequency domain vectors. The terminal device may indicate the Tr space-frequency vector pairs by using an index of a combination of the Tr space-frequency vector pairs in the Lr×Mr space-frequency vector pairs. To be specific, the terminal device may predetermine a plurality of combinations of a plurality of space-frequency vector pairs based on the Lr×Mr space-frequency vector pairs obtained by combining the Lr spatial domain vectors and the Mr frequency domain vectors. Each combination may correspond to one index. The Tr space-frequency vector pairs may be one of the plurality of combinations, or may be close to one of the plurality of combinations. The terminal device may indicate the Tr space-frequency vector pairs by indicating the index of the combination of the Tr space-frequency vector pairs. Indication overheads caused by the Tr space-frequency vector pairs may be, for example, ⌈log2 C(Lr×Mr, Tr)⌉ bits, where C(Lr×Mr, Tr) represents the quantity of combinations of Tr space-frequency vector pairs selected from the Lr×Mr space-frequency vector pairs.

In the same manner, the terminal device may indicate, by using R indexes corresponding to the R transport layers, the space-frequency vector pairs reported for each transport layer. Therefore, indication overheads thereof may be, for example, ∑r=1R⌈log2 C(Lr×Mr, Tr)⌉ bits.

Optionally, the second part of the CSI report further includes an indication of weighting coefficients reported for each of the R transport layers. An indication manner and indication overheads of the weighting coefficients are described in detail in the foregoing method 200. For brevity, details are not described herein again.

Optionally, the second part of the CSI report further includes an indication of spatial domain vectors reported for each of the R transport layers. An indication manner and indication overheads of the spatial domain vectors are described in detail in the foregoing method 200. For brevity, details are not described herein again.

Optionally, the second part of the CSI report further includes an indication of frequency domain vectors reported for each of the R transport layers. An indication manner and indication overheads of the frequency domain vectors are described in detail in the foregoing method 200. For brevity, details are not described herein again.

Based on the foregoing method, indication overheads of the second part of the CSI report may be determined based on the first part of the CSI report.

Further, the second part of the CSI report may include a first field, a second field, a third field, and a fourth field. In a possible design, the first field may include the indication of the spatial domain vectors reported for each transport layer, the second field may include the indication of the frequency domain vectors reported for each transport layer, the third field may include the indication of the weighting coefficients reported for each transport layer, and the fourth field may include the position indication of the space-frequency vector pairs reported for each transport layer.
In another possible design, the first field may include the indication of the frequency domain vectors reported for each transport layer, the second field may include the indication of the spatial domain vectors reported for each transport layer, the third field may include the indication of the weighting coefficients reported for each transport layer, and the fourth field may include the position indication of the space-frequency vector pairs reported for each transport layer. The fourth field may be a bitmap, or may be indexes of combinations of the space-frequency vector pairs. This is not limited in this application.

An encoding sequence of the plurality of fields in the second part may be as follows: The first field is located before the second field, the second field is located before the fourth field, and the fourth field is located before the third field. In addition, information in each field is sequentially encoded/decoded in a sequence from the first transport layer to the Rth transport layer.

FIG. 10 to FIG. 13 each show the fields in the second part of the CSI report in the encoding/decoding sequence according to this embodiment of this application. In FIG. 10 and FIG. 12, the first field includes the indication of the spatial domain vector for each transport layer, and the second field includes the indication of the frequency domain vector for each transport layer. In FIG. 11 and FIG. 13, the first field includes the indication of the frequency domain vector for each transport layer, and the second field includes the indication of the spatial domain vector for each transport layer.

It should be understood that FIG. 10 and FIG. 11 are merely examples for ease of understanding the encoding/decoding sequence of the fields, and do not indicate that the fields need to be arranged in the second part according to the sequences shown in the figures. In addition, the encoding/decoding sequence of the fields may correspond to a sequence of the foregoing priorities. Therefore, the encoding/decoding sequence of the fields may correspond to the arrangement sequence of the fields shown in each of FIG. 10 and FIG. 11.

Optionally, L same spatial domain vectors are reported for the R transport layers. The L spatial domain vectors may be shared at the R transport layers. In this case, the indication of the spatial domain vectors reported for each transport layer in the figures may be combined. That is, the spatial domain vectors reported for the R transport layers need to be indicated only once.

Optionally, M same frequency domain vectors are reported for the R transport layers. The M frequency domain vectors may be shared at the R transport layers. In this case, the indication of the frequency domain vectors reported for each transport layer in the figures may be combined. That is, the frequency domain vectors reported for the R transport layers need to be indicated only once. Therefore, FIG. 10 and FIG. 11 may be simplified into FIG. 12 and FIG. 13.

Further, when the quantity R of the transport layers is greater than 1, frequency domain vectors reported for some transport layers may be a subset of frequency domain vectors reported for the first transport layer. Therefore, the indication of the frequency domain vectors reported for some transport layers in FIG. 10 and FIG. 11 may be combined. In the foregoing implementations, detailed descriptions have been provided with reference to Table 2. For brevity, details are not described herein again.
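The combinatorial-index overhead mentioned above (⌈log2 C(Lr×Mr, Tr)⌉ bits for the rth transport layer) can be checked with a short sketch; the values of Lr, Mr, and Tr below are assumptions for illustration only.

    import math

    Lr, Mr, Tr = 4, 7, 6
    # Bits to index one combination of T_r pairs out of L_r * M_r candidates
    bits = math.ceil(math.log2(math.comb(Lr * Mr, Tr)))
    print(bits)  # C(28, 6) = 376740, so 19 bits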
When a transmission resource scheduled by the network device for the terminal device is insufficient to transmit all content of the CSI report, some pieces of information in the second part may be discarded.

Optionally, the method further includes: determining to-be-discarded information in the second part in ascending order of priorities. A priority of the third field is lower than a priority of the fourth field, the priority of the fourth field is lower than a priority of the second field, and the priority of the second field is lower than a priority of the first field. In addition, priorities of information in each field are in descending order from the first transport layer to the Rth transport layer. The fields shown in each of FIG. 10 to FIG. 13 are encoded/decoded in ascending order of the priorities. For brevity, no additional figure is provided herein for description.

Further, in the third field, a plurality of weighting coefficients reported for a same transport layer may correspond to at least two priorities, and the at least two priorities may include a first priority and a second priority. An amplitude of a weighting coefficient corresponding to the first priority may be greater than or equal to an amplitude of a weighting coefficient corresponding to the second priority. Priorities of weighting coefficients that are of each transport layer and that correspond to the first priority are higher than priorities of weighting coefficients that are of the transport layer and that correspond to the second priority. In addition, in weighting coefficients that are of a plurality of transport layers and that correspond to a same priority, priorities are in descending order from the first transport layer to the Rth transport layer.

For ease of understanding, FIG. 14 and FIG. 15 each show the fields that are arranged in the second part of the CSI report in a priority sequence according to this embodiment of this application. In FIG. 14, the first field includes the indication of the spatial domain vectors for each transport layer, and the second field includes the indication of the frequency domain vectors for each transport layer. Specific content in the first field and the second field may be, for example, shown in FIG. 10 or FIG. 12. For brevity, the specific content is not listed in the figure. In FIG. 15, the first field includes the indication of the frequency domain vectors for each transport layer, and the second field includes the indication of the spatial domain vectors for each transport layer. Specific content in the first field and the second field may be, for example, shown in FIG. 11 or FIG. 13. For brevity, the specific content is not listed in the figure.

As shown in the figures, in the third field, priorities of weighting coefficients that are of the transport layers and that correspond to the first priority are higher than priorities of weighting coefficients that are of the transport layers and that correspond to the second priority. In the figures, an ellipsis indicates that priorities obtained through division based on amplitudes of the weighting coefficients are not limited to the first priority and the second priority, and may further include more priorities. Weighting coefficients corresponding to the more priorities may be discarded in ascending order of the priorities.

Optionally, the plurality of weighting coefficients reported for the same transport layer may correspond to at least two quantization levels.
Quantities of quantization bits of the plurality of weighting coefficients may be determined based on the at least two quantization levels. The at least two quantization levels may include a first quantization level and a second quantization level. A quantity of quantization bits of a weighting coefficient corresponding to the first quantization level may be greater than a quantity of quantization bits of a weighting coefficient corresponding to the second quantization level. In the third field, a priority of a weighting coefficient that is of each transport layer and that corresponds to the first quantization level is higher than a priority of a weighting coefficient that is of the transport layer and that corresponds to the second quantization level. In addition, in a plurality of weighting coefficients that are of the transport layers and that correspond to a same quantization level, priorities are in descending order from the first transport layer to the Rth transport layer.

The at least two quantization levels may correspond to the at least two priorities described above. In other words, in FIG. 14 and FIG. 15, "first priority" may be replaced with "first quantization level", and "second priority" may be replaced with "second quantization level".

It should be noted that "discard" described above may be understood as follows: Before the second part is encoded/decoded, it is determined that the to-be-discarded information is not encoded/decoded. Therefore, the to-be-discarded information is not fed back to the network device, and it appears as if some pieces of information in the second part are discarded.

Based on the foregoing two different implementations, the terminal device may indicate, to the network device by using the CSI report, the quantity and positions of the space-frequency vector pairs reported for the R transport layers. It should be noted that, when it is defined in a protocol that an implementation is used to indicate the quantity and positions of the space-frequency vector pairs to the terminal device, the terminal device and the network device may generate the CSI report and parse the CSI report based on the same implementation.

Based on the foregoing method, the terminal device generates the CSI report.

In step 320, the terminal device transmits the CSI report. Correspondingly, the network device receives the CSI report. A specific process of step 320 may be the same as a specific process of step 220 in the foregoing method 200. For brevity, details are not described herein again.

In step 330, the network device determines, based on the CSI report, a quantity of space-frequency vector pairs used to construct a precoding vector.

A specific process in which the terminal device indicates, by using the CSI report, the quantity of the space-frequency vector pairs reported for the R transport layers has been described in detail in step 310. The space-frequency vector pairs reported for the R transport layers are the space-frequency vector pairs used to construct the precoding vector. After receiving the CSI report, the network device may decode the first part of the CSI report based on the predefined length of the first part. After parsing the first part of the CSI report, the network device may determine the quantity of the space-frequency vector pairs reported for the R transport layers, to determine the indication overheads of the second part of the CSI report, and then decode the second part and determine the space-frequency vector pairs reported for each transport layer.
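Returning to the discard mechanism described above, the following sketch enumerates weighting coefficients in ascending order of priority, that is, in the order in which they would be discarded: second-priority coefficients are dropped before first-priority coefficients, and within a same priority the Rth transport layer is dropped before the first. Splitting each layer's coefficients into two priorities by amplitude rank, half and half, is an assumption made only for illustration.

    def weighting_coeff_discard_order(amplitudes_per_layer):
        """Return (layer, coefficient index, priority) tuples, drop-first order."""
        drop_order = []
        R = len(amplitudes_per_layer)
        for level in (2, 1):                   # second priority dropped first
            for r in range(R, 0, -1):          # the Rth layer dropped first
                amps = amplitudes_per_layer[r - 1]
                ranked = sorted(range(len(amps)), key=lambda i: amps[i])
                half = len(ranked) // 2
                chosen = ranked[:half] if level == 2 else ranked[half:]
                drop_order += [(r, i, level) for i in chosen]
        return drop_order

    print(weighting_coeff_discard_order([[0.9, 0.2, 0.5, 0.7], [0.8, 0.1]]))
    # [(2, 1, 2), (1, 1, 2), (1, 2, 2), (2, 0, 1), (1, 3, 1), (1, 0, 1)]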
A specific process of parsing the CSI report by the network device is similar to the specific process of generating the CSI report by the terminal device. For brevity, detailed descriptions of the specific process are omitted herein. In addition, for a specific decoding process, refer to the current technology. For brevity, detailed descriptions of the specific process are omitted herein.

As described above, the second part of the CSI report may include the indication of the weighting coefficients reported for each transport layer, the indication of the spatial domain vectors reported for each transport layer, and the indication of the frequency domain vectors reported for each transport layer. Therefore, the network device may determine, based on the space-frequency vector pairs and the weighting coefficients that are reported for each transport layer, a precoding vector corresponding to each frequency domain unit at each transport layer, and further determine a precoding matrix corresponding to the frequency domain unit.

The foregoing has described a specific process in which the network device determines, based on the space-frequency vector pairs and the weighting coefficients, the precoding vector corresponding to each frequency domain unit, and further determines the precoding matrix corresponding to the frequency domain unit. In addition, for the specific process in which the network device determines, based on the space-frequency vector pairs and the weighting coefficients that are indicated in the CSI report, the precoding vector corresponding to each frequency domain unit at each transport layer, and further determines the precoding matrix corresponding to the frequency domain unit, refer to the current technology. For brevity, details are not described herein again.

In this embodiment of this application, the terminal device generates the indication field having the fixed length in the CSI report, so that the network device determines indication overheads of other indication information based on the indication field having the fixed length. Therefore, the network device may determine, based on the CSI report, the space-frequency vector pairs reported by the terminal device for each transport layer and the weighting coefficients corresponding to the space-frequency vector pairs, and then construct the precoding vector corresponding to each frequency domain unit. The precoding vector constructed based on the space-frequency vector pairs and the weighting coefficients that are reported by the terminal device is determined based on downlink channels in a plurality of frequency domain units, and, owing to frequency domain correlation, can adapt well to the downlink channels, thereby ensuring relatively high feedback precision. In addition, compared with a feedback manner of a type II codebook in the current technology, in the method for indicating vectors used to construct a precoding vector, feedback overheads do not increase as a quantity of frequency domain units increases. This helps reduce the feedback overheads.

It should be understood that, in the foregoing embodiments, sequence numbers of the foregoing processes do not mean execution sequences. The execution sequences of the processes should be determined based on functions and internal logic of the processes, and should not be construed as any limitation on the implementation processes of the embodiments of this application.
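For ease of understanding, the following sketch shows one way a precoding vector for a single frequency domain unit might be rebuilt from reported space-frequency vector pairs: each pair contributes its spatial domain vector scaled by the reported weighting coefficient and by the entry of its frequency domain vector for that unit. The construction and all numbers are assumptions made only for illustration; this application defers the exact procedure to the foregoing method 200 and the current technology.

    import numpy as np

    # Assumed reported vectors: 2 spatial domain vectors (4 antenna ports)
    # and 2 frequency domain vectors (8 frequency domain units)
    L_vecs = [np.ones(4), np.array([1, -1, 1, -1])]
    M_vecs = [np.ones(8), np.exp(-2j * np.pi * np.arange(8) / 8)]
    pairs = [(0, 0, 0.8), (1, 1, 0.3)]  # (spatial idx, frequency idx, coeff)

    def precoding_vector(unit: int) -> np.ndarray:
        w = np.zeros(4, dtype=complex)
        for s, f, c in pairs:
            w += c * M_vecs[f][unit] * L_vecs[s]
        return w / np.linalg.norm(w)    # normalize the per-unit vector

    print(precoding_vector(0))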
With reference to FIG. 2 to FIG. 15, the foregoing describes in detail the methods for indicating vectors used to construct a precoding vector provided in the embodiments of this application. Communications apparatuses provided in the embodiments of this application are described below in detail with reference to FIG. 16 to FIG. 18.

FIG. 16 is a schematic block diagram of a communications apparatus according to an embodiment of this application. As shown in the figure, the communications apparatus 1000 may include a communications unit 1100 and a processing unit 1200.

In a possible design, the communications apparatus 1000 may correspond to a terminal device in the foregoing method embodiments, for example, may be a terminal device or a chip disposed in a terminal device.

Specifically, the communications apparatus 1000 may correspond to the terminal device in the method 200 in the embodiments of this application, and the communications apparatus 1000 may include units configured to perform the method performed by the terminal device in the method 200 in FIG. 2. In addition, the units in the communications apparatus 1000 and the foregoing other operations and/or functions are intended to implement corresponding procedures of the method 200 in FIG. 2 or of the method 300 in FIG. 9.

When the communications apparatus 1000 is configured to perform the method 200 in FIG. 2, the communications unit 1100 may be configured to perform step 220 in the method 200, and the processing unit 1200 may be configured to perform step 210 in the method 200. It should be understood that a specific process of performing a corresponding step by each unit has been described in detail in the foregoing method embodiments. For brevity, details are not described herein.

When the communications apparatus 1000 is configured to perform the method 300 in FIG. 9, the communications unit 1100 may be configured to perform step 320 in the method 300, and the processing unit 1200 may be configured to perform step 310 in the method 300. It should be understood that a specific process of performing a corresponding step by each unit has been described in detail in the foregoing method embodiments. For brevity, details are not described herein.

It should be further understood that, when the communications apparatus 1000 is a terminal device, the communications unit 1100 in the communications apparatus 1000 may correspond to a transceiver 2020 in a terminal device 2000 shown in FIG. 17, and the processing unit 1200 in the communications apparatus 1000 may correspond to a processor 2010 in the terminal device 2000 shown in FIG. 17.

It should be further understood that, when the communications apparatus 1000 is a chip disposed in a terminal device, the communications unit 1100 in the communications apparatus 1000 may be an input/output interface.

In another possible design, the communications apparatus 1000 may correspond to a network device in the foregoing method embodiments, for example, may be a network device or a chip disposed in a network device.

Specifically, the communications apparatus 1000 may correspond to the network device in the method 200 in the embodiments of this application, and the communications apparatus 1000 may include units configured to perform the method performed by the network device in the method 200 in FIG. 2. In addition, the units in the communications apparatus 1000 and the foregoing other operations and/or functions are intended to implement corresponding procedures of the method 200 in FIG. 2 or of the method 300 in FIG. 9.
When the communications apparatus 1000 is configured to perform the method 200 in FIG. 2, the communications unit 1100 may be configured to perform step 220 in the method 200, and the processing unit 1200 may be configured to perform step 230 in the method 200. It should be understood that a specific process of performing a corresponding step by each unit has been described in detail in the foregoing method embodiments. For brevity, details are not described herein.

When the communications apparatus 1000 is configured to perform the method 300 in FIG. 9, the communications unit 1100 may be configured to perform step 320 in the method 300, and the processing unit 1200 may be configured to perform step 330 in the method 300. It should be understood that a specific process of performing a corresponding step by each unit has been described in detail in the foregoing method embodiments. For brevity, details are not described herein.

It should be further understood that, when the communications apparatus 1000 is a network device, the communications unit 1100 in the communications apparatus 1000 may correspond to a transceiver unit 3100 in a network device 3000 shown in FIG. 18, and the processing unit 1200 in the communications apparatus 1000 may correspond to a processing unit 3200 in the network device 3000 shown in FIG. 18.

It should be further understood that, when the communications apparatus 1000 is a chip disposed in a network device, the communications unit 1100 in the communications apparatus 1000 may be an input/output interface.

FIG. 17 is a schematic structural diagram of a terminal device 2000 according to an embodiment of this application. The terminal device 2000 may be applied to the system shown in FIG. 1, and perform a function of the terminal device in the foregoing method embodiments. As shown in FIG. 17, the terminal device 2000 includes a processor 2010 and a transceiver 2020. Optionally, the terminal device 2000 further includes a memory 2030. The processor 2010, the transceiver 2020, and the memory 2030 may communicate with each other through an internal connection path, to transfer a control signal and/or a data signal. The memory 2030 is configured to store a computer program. The processor 2010 is configured to: invoke the computer program from the memory 2030 and run the computer program, to control the transceiver 2020 to transmit/receive a signal. Optionally, the terminal device 2000 may further include an antenna 2040, configured to transmit, by using a radio signal, uplink data or uplink control signaling output by the transceiver 2020.

The processor 2010 and the memory 2030 may be integrated into one processing apparatus, and the processor 2010 is configured to execute program code stored in the memory 2030, to implement the foregoing function. During specific implementation, the memory 2030 may alternatively be integrated into the processor 2010, or may be independent of the processor 2010.

The processor 2010 may correspond to the processing unit in FIG. 16. The transceiver 2020 may correspond to the communications unit in FIG. 16, and may also be referred to as a transceiver unit. The transceiver 2020 may include a receiver (or referred to as a receiver machine or a receiver circuit) and a transmitter (or referred to as a transmitter machine or a transmitter circuit). The receiver is configured to receive a signal, and the transmitter is configured to transmit a signal.

It should be understood that the terminal device 2000 shown in FIG. 17 can implement each process of the terminal device in the method embodiment shown in FIG. 2 or FIG. 9.
Operations and/or functions of modules in the terminal device 2000 are intended to implement corresponding procedures in the foregoing method embodiments. For details, refer to the descriptions in the foregoing method embodiments. To avoid repetition, detailed descriptions are properly omitted herein.

The processor 2010 may be configured to perform an action that is implemented inside the terminal device and that is described in the foregoing method embodiments. The transceiver 2020 may be configured to perform a transmitting action performed by the terminal device for the network device in the foregoing method embodiments or a receiving action performed by the terminal device from the network device in the foregoing method embodiments. For details, refer to the descriptions in the foregoing method embodiments. Details are not described herein again.

Optionally, the terminal device 2000 may further include a power supply 2050, configured to supply power to various components or circuits in the terminal device. In addition, to make functions of the terminal device more complete, the terminal device 2000 may further include one or more of an input unit 2060, a display unit 2070, an audio circuit 2080, a camera 2090, a sensor 2100, and the like, and the audio circuit may further include a speaker 2082, a microphone 2084, and the like.

FIG. 18 is a schematic structural diagram of a network device according to an embodiment of this application, for example, may be a schematic structural diagram of a base station. The base station 3000 may be applied to the system shown in FIG. 1, and perform a function of the network device in the foregoing method embodiments. As shown in the figure, the base station 3000 may include one or more radio frequency units, for example, a remote radio unit (RRU) 3100, and one or more baseband units (BBU) (which may also be referred to as a distributed unit (DU)) 3200.

The RRU 3100 may be referred to as a transceiver unit, and corresponds to the communications unit 1100 in FIG. 16. Optionally, the transceiver unit 3100 may also be referred to as a transceiver machine, a transceiver circuit, a transceiver, or the like, and may include at least one antenna 3101 and a radio frequency unit 3102. Optionally, the transceiver unit 3100 may include a receiving unit and a transmitting unit. The receiving unit may correspond to a receiver (or referred to as a receiver machine or a receiver circuit), and the transmitting unit may correspond to a transmitter (or referred to as a transmitter machine or a transmitter circuit). The RRU 3100 is mainly configured to: receive and transmit a radio frequency signal, and perform conversion between the radio frequency signal and a baseband signal, for example, configured to transmit indication information to a terminal device. The BBU 3200 is mainly configured to: perform baseband processing, control the base station, and the like. The RRU 3100 and the BBU 3200 may be physically disposed together, or may be physically separated, namely, a distributed base station.

The BBU 3200 is a control center of the base station, or may be referred to as a processing unit. The BBU 3200 may correspond to the processing unit 1200 in FIG. 16, and is mainly configured to implement a baseband processing function such as channel coding, multiplexing, modulation, or spreading.
For example, the BBU (the processing unit) may be configured to control the base station to perform an operation procedure related to the network device in the foregoing method embodiments, for example, generating the foregoing indication information. In an example, the BBU 3200 may include one or more boards, and a plurality of boards may jointly support a radio access network (such as an LTE network) having a single access standard, or may separately support radio access networks (for example, an LTE network, a 5G network, or another network) having different access standards. The BBU 3200 further includes a memory 3201 and a processor 3202. The memory 3201 is configured to store a necessary instruction and necessary data. The processor 3202 is configured to control the base station to perform a necessary action, for example, configured to control the base station to perform an operation procedure related to the network device in the foregoing method embodiments. The memory 3201 and the processor 3202 may serve the one or more boards. In other words, a memory and a processor may be independently disposed on each board, or the plurality of boards may share the same memory and the same processor. In addition, each board may further be provided with a necessary circuit. It should be understood that the base station 3000 shown in FIG. 18 can implement each process of the network device in the method embodiment in FIG. 2 or FIG. 9. The operations and/or the functions of the modules in the base station 3000 are intended to implement the corresponding procedures in the foregoing method embodiments. For details, refer to the descriptions in the foregoing method embodiments. To avoid repetition, detailed descriptions are properly omitted herein. The BBU 3200 may be configured to perform an action that is implemented inside the network device and that is described in the foregoing method embodiments. The RRU 3100 may be configured to perform a transmitting action performed by the network device for the terminal device in the foregoing method embodiments, or a receiving action performed by the network device from the terminal device in the foregoing method embodiments. For details, refer to the descriptions in the foregoing method embodiments. Details are not described herein again. An embodiment of this application further provides a processing apparatus, including a processor and an interface. The processor is configured to perform the communication method in any one of the foregoing method embodiments. It should be understood that the processing apparatus may be a chip. For example, the processing apparatus may be a field programmable gate array (FPGA), an application-specific integrated circuit (ASIC), a system on chip (SoC), a central processing unit (CPU), a network processor (NP), a digital signal processor (DSP), a micro controller unit (MCU), a programmable logic device (PLD), or another integrated chip. In an implementation process, the steps in the foregoing methods can be implemented by using a hardware integrated logic circuit in the processor, or by using instructions in a form of software. The steps of the methods disclosed with reference to the embodiments of this application may be directly performed by a hardware processor, or may be performed by a combination of hardware and software modules in the processor.
The software module may be located in a mature storage medium in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory, and the processor reads information in the memory and completes the steps in the foregoing methods in combination with hardware of the processor. To avoid repetition, details are not described herein again. It should be noted that the processor in the embodiments of this application may be an integrated circuit chip and has a signal processing capability. In an implementation process, the steps in the foregoing method embodiments may be implemented by using a hardware integrated logic circuit in the processor, or by using instructions in a form of software. The processor may be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. The processor may implement or perform the methods, the steps, and the logical block diagrams that are disclosed in the embodiments of this application. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the methods disclosed with reference to the embodiments of this application may be directly performed by a hardware decoding processor, or may be performed by a combination of hardware and software modules in a decoding processor. The software module may be located in a mature storage medium in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory, and the processor reads information in the memory and completes the steps in the foregoing methods in combination with hardware of the processor. It may be understood that the memory in the embodiments of this application may be a volatile memory or a non-volatile memory, or may include both a volatile memory and a non-volatile memory. The non-volatile memory may be a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), or a flash memory. The volatile memory may be a random access memory (RAM), used as an external cache. By way of example rather than limitation, RAMs in many forms may be used, for example, a static random access memory (static RAM, SRAM), a dynamic random access memory (dynamic RAM, DRAM), a synchronous dynamic random access memory (synchronous DRAM, SDRAM), a double data rate synchronous dynamic random access memory (double data rate SDRAM, DDR SDRAM), an enhanced synchronous dynamic random access memory (enhanced SDRAM, ESDRAM), a synchlink dynamic random access memory (synchlink DRAM, SLDRAM), and a direct rambus random access memory (direct rambus RAM, DR RAM). It should be noted that the memory of the systems and methods described in this specification is intended to include but is not limited to these and any other memory of a proper type.
According to the methods provided in the embodiments of this application, this application further provides a computer program product, and the computer program product includes computer program code. When the computer program code is run on a computer, the computer is enabled to perform the method in either of the embodiments shown in FIG. 2 and FIG. 9. According to the methods provided in the embodiments of this application, this application further provides a computer-readable medium. The computer-readable medium stores program code. When the program code is run on a computer, the computer is enabled to perform the method in either of the embodiments shown in FIG. 2 and FIG. 9. According to the methods provided in the embodiments of this application, this application further provides a system. The system includes the foregoing one or more terminal devices and one or more network devices. All or some of the foregoing embodiments may be implemented by using software, hardware, firmware, or any combination thereof. When software is used for implementation, all or some of the embodiments may be implemented in a form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the procedures or functions according to the embodiments of this application are all or partially generated. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium, or may be transmitted from one computer-readable storage medium to another computer-readable storage medium. For example, the computer instructions may be transmitted from a website, computer, server, or data center to another website, computer, server, or data center in a wired (for example, a coaxial cable, an optical fiber, or a digital subscriber line (DSL)) or wireless (for example, infrared, radio, or microwave) manner. The computer-readable storage medium may be any usable medium accessible by a computer, or a data storage device, such as a server or a data center, integrating one or more usable media. The usable medium may be a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium (for example, a high-density digital video disc (DVD)), a semiconductor medium (for example, a solid-state drive (SSD)), or the like. The network device and the terminal device in the foregoing apparatus embodiments completely correspond to the terminal device or network device in the method embodiments, and a corresponding module or unit performs a corresponding step. For example, the communications unit (the transceiver) performs the transmitting or receiving steps in the method embodiments, and the processing unit (the processor) performs the other steps. For a function of a specific unit, refer to the corresponding method embodiment. There may be one or more processors. The terms such as "part", "module", and "system" used in this specification are used to indicate computer-related entities, hardware, firmware, combinations of hardware and software, software, or software being executed. For example, a part may be, but is not limited to, a process that runs on a processor, a processor, an object, an executable file, an execution thread, a program, and/or a computer.
As shown in the figures, both a computing device and an application running on the computing device may be parts. One or more parts may reside within a process and/or an execution thread, and a part may be located on one computer and/or distributed between two or more computers. In addition, these parts may be executed from various computer-readable media that store various data structures. For example, the parts may communicate by using a local and/or remote process and based on, for example, a signal having one or more data packets (for example, data exchanged between one part and another part in a local system, in a distributed system, and/or across a network such as the internet interacting with another system by using the signal). A person of ordinary skill in the art may be aware that the various illustrative logical blocks and steps described with reference to the embodiments disclosed in this specification may be implemented by electronic hardware or a combination of computer software and electronic hardware. Whether the functions are performed by hardware or software depends on the particular applications and design constraint conditions of the technical solutions. A person skilled in the art may use different methods to implement the described functions for each particular application, but it should not be considered that the implementation goes beyond the scope of this application. It may be clearly understood by a person skilled in the art that, for the purpose of convenient and brief description, for a detailed working process of the foregoing system, apparatus, and unit, refer to the corresponding process in the foregoing method embodiments; details are not described herein again. In the several embodiments provided in this application, it should be understood that the disclosed system, apparatus, and method may be implemented in other manners. For example, the described apparatus embodiments are merely examples. For example, division into the units is merely logical function division and may be other division during actual implementation. For example, a plurality of units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the displayed or discussed mutual couplings or direct couplings or communication connections may be implemented through some interfaces. The indirect couplings or communication connections between the apparatuses or units may be implemented in electronic, mechanical, or other forms. The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units; they may be located in one position, or may be distributed on a plurality of network units. Some or all of the units may be selected based on actual requirements to achieve the objectives of the solutions in the embodiments. In addition, the function units in the embodiments of this application may be integrated into one processing unit, each of the units may exist alone physically, or two or more units may be integrated into one unit. In the foregoing embodiments, all or some of the functions of the function units may be implemented by software, hardware, firmware, or any combination thereof. When software is used for implementation, all or some of the embodiments may be implemented in a form of a computer program product. The computer program product includes one or more computer instructions (programs).
When the computer program instructions (programs) are loaded and executed on a computer, the procedures or functions according to the embodiments of this application are all or partially generated. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium, or may be transmitted from one computer-readable storage medium to another computer-readable storage medium. For example, the computer instructions may be transmitted from a website, computer, server, or data center to another website, computer, server, or data center in a wired (for example, a coaxial cable, an optical fiber, or a digital subscriber line (DSL)) or wireless (for example, infrared, radio, or microwave) manner. The computer-readable storage medium may be any usable medium accessible by a computer, or a data storage device, such as a server or a data center, integrating one or more usable media. The usable medium may be a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium (for example, a DVD), a semiconductor medium (for example, a solid-state drive (SSD)), or the like. When the functions are implemented in a form of a software function unit and sold or used as an independent product, the functions may be stored in a computer-readable storage medium. Based on such an understanding, the technical solutions of this application essentially, or the part contributing to the current technology, or some of the technical solutions may be implemented in a form of a software product. The computer software product is stored in a storage medium, and includes several instructions for instructing a computer device (which may be a personal computer, a server, or a network device) to perform all or some of the steps of the methods described in the embodiments of this application. The foregoing storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc. The foregoing descriptions are merely specific implementations of this application, but are not intended to limit the protection scope of this application. Any variation or replacement readily figured out by a person skilled in the art within the technical scope disclosed in this application shall fall within the protection scope of this application. Therefore, the protection scope of this application shall be subject to the protection scope of the claims.
DESCRIPTION OF EMBODIMENTS The following describes embodiments of the present invention with reference to the drawings. Embodiment 1 The following describes the transmission scheme, transmission device, reception scheme, and reception device of the present embodiment. Prior to describing the present embodiment, an overview is provided of a transmission scheme and decoding scheme in a conventional spatial multiplexing MIMO system. FIG. 1 shows the structure of an N_t x N_r spatial multiplexing MIMO system. An information vector z is encoded and interleaved. As output of the interleaving, an encoded bit vector u = (u_1, ..., u_{N_t}) is acquired. Note that u_i = (u_{i1}, ..., u_{iM}) (where M is the number of transmission bits per symbol). Letting the transmission vector be s = (s_1, ..., s_{N_t})^T and the transmission signal from transmit antenna #i be represented as s_i = map(u_i), the normalized transmission energy is represented as E{|s_i|^2} = E_s/N_t (E_s being the total energy per channel). Furthermore, letting the received vector be y = (y_1, ..., y_{N_r})^T, the received vector is represented as in Equation 1.

Math 1
y = (y_1, \ldots, y_{N_r})^T = H_{N_t N_r} s + n    (Equation 1)

In this Equation, H_{N_t N_r} is the channel matrix, n = (n_1, ..., n_{N_r})^T is the noise vector, and n_i is i.i.d. complex Gaussian noise with mean 0 and variance \sigma^2. From the relationship between transmission symbols and reception symbols that is induced at the reception device, the probability for the received vector may be provided as a multi-dimensional Gaussian distribution, as in Equation 2.

Math 2
p(y|u) = \frac{1}{(2\pi\sigma^2)^{N_r}} \exp\left(-\frac{1}{2\sigma^2}\,\|y - Hs(u)\|^2\right)    (Equation 2)

Here, a reception device that performs iterative decoding composed of an outer soft-in/soft-out decoder and a MIMO detector, as in FIG. 1, is considered. The vector of a log-likelihood ratio (L-value) in FIG. 1 is represented as in Equations 3-5.

Math 3
L(u) = (L(u_1), \ldots, L(u_{N_t}))^T    (Equation 3)

Math 4
L(u_i) = (L(u_{i1}), \ldots, L(u_{iM}))    (Equation 4)

Math 5
L(u_{ij}) = \ln \frac{P(u_{ij} = +1)}{P(u_{ij} = -1)}    (Equation 5)

<Iterative Detection Scheme> The following describes iterative detection of MIMO signals in the N_t x N_r spatial multiplexing MIMO system. The log-likelihood ratio of u_{mn} is defined as in Equation 6.

Math 6
L(u_{mn}|y) = \ln \frac{P(u_{mn} = +1|y)}{P(u_{mn} = -1|y)}    (Equation 6)

From Bayes' theorem, Equation 6 can be expressed as Equation 7.

Math 7
L(u_{mn}|y) = \ln \frac{p(y|u_{mn}=+1)\,P(u_{mn}=+1)/p(y)}{p(y|u_{mn}=-1)\,P(u_{mn}=-1)/p(y)} = \ln \frac{P(u_{mn}=+1)}{P(u_{mn}=-1)} + \ln \frac{\sum_{U_{mn,+1}} p(y|u)\,p(u|u_{mn})}{\sum_{U_{mn,-1}} p(y|u)\,p(u|u_{mn})}    (Equation 7)

Let U_{mn,\pm 1} = \{u \mid u_{mn} = \pm 1\}. When approximating \ln \sum_j a_j \approx \max_j \ln a_j, an approximation of Equation 7 can be sought as Equation 8.

Math 8
L(u_{mn}|y) \approx \ln \frac{P(u_{mn}=+1)}{P(u_{mn}=-1)} + \max_{U_{mn,+1}}\{\ln p(y|u) + \ln P(u|u_{mn})\} - \max_{U_{mn,-1}}\{\ln p(y|u) + \ln P(u|u_{mn})\}    (Equation 8)

P(u|u_{mn}) and \ln P(u|u_{mn}) in Equation 8 are represented as follows.
Math 9
P(u|u_{mn}) = \prod_{(ij) \neq (mn)} P(u_{ij}) = \prod_{(ij) \neq (mn)} \frac{\exp\left(\frac{u_{ij} L(u_{ij})}{2}\right)}{\exp\left(\frac{L(u_{ij})}{2}\right) + \exp\left(-\frac{L(u_{ij})}{2}\right)}    (Equation 9)

Math 10
\ln P(u|u_{mn}) = \left(\sum_{ij} \ln P(u_{ij})\right) - \ln P(u_{mn})    (Equation 10)

Math 11
\ln P(u_{ij}) = \frac{1}{2} u_{ij} L(u_{ij}) - \ln\left(\exp\left(\frac{L(u_{ij})}{2}\right) + \exp\left(-\frac{L(u_{ij})}{2}\right)\right) \approx \left|\frac{L(u_{ij})}{2}\right|\left(u_{ij}\,\mathrm{sign}(L(u_{ij})) - 1\right) \quad \text{for } |L(u_{ij})| > 2    (Equation 11)

Incidentally, the logarithmic probability of the equation defined in Equation 2 is represented in Equation 12.

Math 12
\ln p(y|u) = -\frac{N_r}{2}\ln(2\pi\sigma^2) - \frac{1}{2\sigma^2}\,\|y - Hs(u)\|^2    (Equation 12)

Accordingly, from Equations 7 and 12, in MAP or A Posteriori Probability (APP) decoding, the a posteriori L-value is represented as follows.

Math 13
L(u_{mn}|y) = \ln \frac{\sum_{U_{mn,+1}} \exp\left\{-\frac{1}{2\sigma^2}\|y - Hs(u)\|^2 + \sum_{ij} \ln P(u_{ij})\right\}}{\sum_{U_{mn,-1}} \exp\left\{-\frac{1}{2\sigma^2}\|y - Hs(u)\|^2 + \sum_{ij} \ln P(u_{ij})\right\}}    (Equation 13)

Hereinafter, this is referred to as iterative APP decoding. From Equations 8 and 12, in the log-likelihood ratio utilizing the Max-Log approximation (Max-Log APP), the a posteriori L-value is represented as follows.

Math 14
L(u_{mn}|y) \approx \max_{U_{mn,+1}}\{\Psi(u, y, L(u))\} - \max_{U_{mn,-1}}\{\Psi(u, y, L(u))\}    (Equation 14)

Math 15
\Psi(u, y, L(u)) = -\frac{1}{2\sigma^2}\,\|y - Hs(u)\|^2 + \sum_{ij} \ln P(u_{ij})    (Equation 15)

Hereinafter, this is referred to as iterative Max-log APP decoding. The extrinsic information required in an iterative decoding system can be sought by subtracting the prior inputs from Equations 13 and 14. <System Model> FIG. 28 shows the basic structure of the system that is related to the subsequent description. This system is a 2x2 spatial multiplexing MIMO system with an outer encoder for each of streams A and B. The two outer encoders are identical LDPC encoders. (Here, a structure using LDPC encoders as the outer encoders is described as an example, but the error correction coding used by the outer encoders is not limited to LDPC coding. The present invention may similarly be embodied using other error correction coding such as turbo coding, convolutional coding, LDPC convolutional coding, and the like. Furthermore, each outer encoder is described as having a transmit antenna, but the outer encoders are not limited to this structure. A plurality of transmit antennas may be used, and the number of outer encoders may be one. Also, a greater number of outer encoders than transmit antennas may be used.) The streams A and B respectively have interleavers (\pi_a, \pi_b). Here, the modulation scheme is 2^h-QAM (with h bits transmitted in one symbol). The reception device performs iterative detection on the above MIMO signals (iterative APP (or iterative Max-log APP) decoding). Decoding of the LDPC codes is performed by, for example, sum-product decoding. FIG. 2 shows a frame structure and lists the order of symbols after interleaving. In this case, (i_a, j_a), (i_b, j_b) are represented by the following Equations.

Math 16
(i_a, j_a) = \pi_a(\Omega^a_{i_a, j_a})    (Equation 16)

Math 17
(i_b, j_b) = \pi_b(\Omega^b_{i_b, j_b})    (Equation 17)

In this case, i_a, i_b indicate the order of symbols after interleaving, j_a, j_b indicate the bit positions (j_a, j_b = 1, ..., h) in the modulation scheme, \pi_a, \pi_b indicate the interleavers for the streams A and B, and \Omega^a_{i_a, j_a}, \Omega^b_{i_b, j_b} indicate the order of data in streams A and B before interleaving. Note that FIG. 2 shows the frame structure for i_a = i_b.
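To make the relationship between Equations 13 and 14 concrete, the following sketch (not part of the patent; all names, seeds, and parameter values are illustrative assumptions) computes both the exact APP LLR and its Max-Log approximation for a small spatial multiplexing system by brute-force enumeration. BPSK per stream and flat a priori probabilities are assumed, so the sum of the ln P(u_ij) prior terms drops out.

```python
# Minimal sketch, assuming BPSK per stream (M = 1) and flat priors:
# exact APP LLR (Equation 13 form) vs. Max-Log LLR (Equation 14 form).
import itertools
import numpy as np

def detection_llrs(y, H, sigma2):
    """Return (exact APP LLRs, Max-Log LLRs) for each bit u_m in {+1, -1}."""
    Nt = H.shape[1]
    app, maxlog = np.zeros(Nt), np.zeros(Nt)
    for m in range(Nt):
        psis = {+1: [], -1: []}
        for u in itertools.product([+1, -1], repeat=Nt):
            s = np.array(u, dtype=float)
            # metric of Equations 12/15 (prior terms vanish for flat priors)
            psi = -np.linalg.norm(y - H @ s) ** 2 / (2 * sigma2)
            psis[u[m]].append(psi)
        lse = lambda v: max(v) + np.log(np.sum(np.exp(np.array(v) - max(v))))
        app[m] = lse(psis[+1]) - lse(psis[-1])      # Equation 13 shape
        maxlog[m] = max(psis[+1]) - max(psis[-1])   # Equation 14 shape
    return app, maxlog

rng = np.random.default_rng(0)
H = (rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))) / np.sqrt(2)
u_true = np.array([+1.0, -1.0])
sigma2 = 0.1
y = H @ u_true + np.sqrt(sigma2 / 2) * (rng.normal(size=2) + 1j * rng.normal(size=2))
print(detection_llrs(y, H, sigma2))  # LLR signs should typically match u_true
```

With only 2^{N_t} hypotheses the enumeration is trivial here; the point is that the Max-Log values track the exact LLRs whenever one hypothesis dominates each maximum.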
<Iterative Decoding> The following is a detailed description of the algorithms for sum-product decoding used in decoding of LDPC codes and for iterative detection of MIMO signals in the reception device. Sum-Product Decoding Let a two-dimensional M x N matrix H = {H_{mn}} be the check matrix for the LDPC codes that are targeted for decoding. Subsets A(m), B(n) of the set [1, N] = {1, 2, ..., N} are defined by the following Equations.

Math 18
A(m) \equiv \{n : H_{mn} = 1\}    (Equation 18)

Math 19
B(n) \equiv \{m : H_{mn} = 1\}    (Equation 19)

In these Equations, A(m) represents the set of column indices of 1's in the m-th row of the check matrix H, and B(n) represents the set of row indices of 1's in the n-th column of the check matrix H. The algorithm for sum-product decoding is as follows. Step A.1 (initialization): let the a priori value log-likelihood ratio \beta_{mn} = 0 for all combinations (m, n) satisfying H_{mn} = 1. Assume that the loop variable (the number of iterations) l_{sum} = 1 and the maximum number of loops is set to l_{sum,max}. Step A.2 (row processing): the extrinsic value log-likelihood ratio \alpha_{mn} is updated for all combinations (m, n) satisfying H_{mn} = 1 in the order of m = 1, 2, ..., M, using the following updating Equations.

Math 20
\alpha_{mn} = \left(\prod_{n' \in A(m)\setminus n} \mathrm{sign}(\lambda_{n'} + \beta_{mn'})\right) \times f\left(\sum_{n' \in A(m)\setminus n} f(\lambda_{n'} + \beta_{mn'})\right)    (Equation 20)

Math 21
\mathrm{sign}(x) \equiv \begin{cases} 1 & x \ge 0 \\ -1 & x < 0 \end{cases}    (Equation 21)

Math 22
f(x) \equiv \ln \frac{\exp(x) + 1}{\exp(x) - 1}    (Equation 22)

In these Equations, f represents a Gallager function. Furthermore, the scheme of seeking \lambda_n is described in detail later. Step A.3 (column processing): the extrinsic value log-likelihood ratio \beta_{mn} is updated for all combinations (m, n) satisfying H_{mn} = 1 in the order of n = 1, 2, ..., N, using the following updating Equation.

Math 23
\beta_{mn} = \sum_{m' \in B(n)\setminus m} \alpha_{m'n}    (Equation 23)

Step A.4 (calculating a log-likelihood ratio): the log-likelihood ratio L_n is sought for n \in [1, N] by the following Equation.

Math 24
L_n = \sum_{m' \in B(n)} \alpha_{m'n} + \lambda_n    (Equation 24)

Step A.5 (count of the number of iterations): if l_{sum} < l_{sum,max}, then l_{sum} is incremented, and processing returns to step A.2. If l_{sum} = l_{sum,max}, the sum-product decoding in this round is finished. The operations in one sum-product decoding have been described. Subsequently, iterative MIMO signal detection is performed. In the variables m, n, \alpha_{mn}, \beta_{mn}, \lambda_n, and L_n used in the above description of the operations of sum-product decoding, the variables in stream A are m_a, n_a, \alpha^a_{m_a n_a}, \beta^a_{m_a n_a}, \lambda_{n_a}, and L_{n_a}, and the variables in stream B are m_b, n_b, \alpha^b_{m_b n_b}, \beta^b_{m_b n_b}, \lambda_{n_b}, and L_{n_b}.
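As a concrete illustration of steps A.1 through A.5, the following is a minimal sum-product decoder sketch. It is not taken from the patent: the (7,4) Hamming parity-check matrix and the channel LLRs are illustrative assumptions chosen only to make the demo self-contained.

```python
# Minimal sum-product (belief propagation) sketch following steps A.1-A.5.
import numpy as np

def gallager_f(x):                      # Equation 22, guarded against overflow
    x = np.clip(x, 1e-9, 50.0)
    return np.log((np.exp(x) + 1.0) / (np.exp(x) - 1.0))

def sum_product(H_chk, lam, max_iter=20):
    M, N = H_chk.shape
    beta = np.zeros((M, N))             # step A.1: beta_mn = 0
    alpha = np.zeros((M, N))
    for _ in range(max_iter):           # step A.5 bounds the loop count
        for m in range(M):              # step A.2: row (check) processing
            cols = np.flatnonzero(H_chk[m])
            for n in cols:
                others = cols[cols != n]
                v = lam[others] + beta[m, others]
                sign = np.prod(np.where(v >= 0, 1.0, -1.0))
                alpha[m, n] = sign * gallager_f(np.sum(gallager_f(np.abs(v))))
        for n in range(N):              # step A.3: column (variable) processing
            rows = np.flatnonzero(H_chk[:, n])
            for m in rows:
                beta[m, n] = np.sum(alpha[rows[rows != m], n])
    # step A.4: posterior LLR L_n = lambda_n + sum of all incoming alpha
    L = lam + np.array([alpha[np.flatnonzero(H_chk[:, n]), n].sum()
                        for n in range(N)])
    return np.where(L >= 0, 1, -1)      # hard decision as in Equation 35

# toy (7,4) Hamming parity-check matrix and made-up channel LLRs (assumptions)
H_chk = np.array([[1, 1, 0, 1, 1, 0, 0],
                  [1, 0, 1, 1, 0, 1, 0],
                  [0, 1, 1, 1, 0, 0, 1]])
lam = np.array([2.5, -0.3, 4.0, 1.2, 3.0, 2.2, 0.5])
print(sum_product(H_chk, lam))          # expected: all +1; the weak bit is corrected
```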
<Iterative MIMO Signal Detection> The following describes the scheme of seeking \lambda_n in iterative MIMO signal detection in detail. The following Equation holds from Equation 1.

Math 25
y(t) = (y_1(t), y_2(t))^T = H_{22}(t)\,s(t) + n(t)    (Equation 25)

The following Equations are defined from the frame structures of FIG. 2 and from Equations 16 and 17.

Math 26
n_a = \Omega^a_{i_a, j_a}    (Equation 26)

Math 27
n_b = \Omega^b_{i_b, j_b}    (Equation 27)

In this case, n_a, n_b \in [1, N]. Hereinafter, \lambda_{n_a}, L_{n_a}, \lambda_{n_b}, and L_{n_b}, where the number of iterations of iterative MIMO signal detection is k, are represented as \lambda_{k,n_a}, L_{k,n_a}, \lambda_{k,n_b}, and L_{k,n_b}. Step B.1 (initial detection; k = 0): \lambda_{0,n_a} and \lambda_{0,n_b} are sought as follows in the case of initial detection. In iterative APP decoding:

Math 28
\lambda_{0,n_X} = \ln \frac{\sum_{U_{0,n_X,+1}} \exp\left\{-\frac{1}{2\sigma^2}\|y(i_X) - H_{22}(i_X)\,s(u(i_X))\|^2\right\}}{\sum_{U_{0,n_X,-1}} \exp\left\{-\frac{1}{2\sigma^2}\|y(i_X) - H_{22}(i_X)\,s(u(i_X))\|^2\right\}}    (Equation 28)

In iterative Max-log APP decoding:

Math 29
\lambda_{0,n_X} = \max_{U_{0,n_X,+1}}\{\Psi(u(i_X), y(i_X))\} - \max_{U_{0,n_X,-1}}\{\Psi(u(i_X), y(i_X))\}    (Equation 29)

Math 30
\Psi(u(i_X), y(i_X)) = -\frac{1}{2\sigma^2}\,\|y(i_X) - H_{22}(i_X)\,s(u(i_X))\|^2    (Equation 30)

Here, let X = a, b. Then, assume that the number of iterations of iterative MIMO signal detection is l_{mimo} = 0 and the maximum number of iterations is set to l_{mimo,max}. Step B.2 (iterative detection; the number of iterations k): \lambda_{k,n_a} and \lambda_{k,n_b}, where the number of iterations is k, are represented as in Equations 31-34, from Equations 11, 13-15, 16, and 17. Let (X, Y) = (a, b), (b, a). In iterative APP decoding:

Math 31
\lambda_{k,n_X} = L_{k-1,\Omega^X_{i_X,j_X}}(u^X_{\Omega_{i_X,j_X}}) + \ln \frac{\sum_{U_{k,n_X,+1}} \exp\left\{-\frac{1}{2\sigma^2}\|y(i_X) - H_{22}(i_X)\,s(u(i_X))\|^2 + \rho(u^X_{\Omega_{i_X,j_X}})\right\}}{\sum_{U_{k,n_X,-1}} \exp\left\{-\frac{1}{2\sigma^2}\|y(i_X) - H_{22}(i_X)\,s(u(i_X))\|^2 + \rho(u^X_{\Omega_{i_X,j_X}})\right\}}    (Equation 31)

Math 32
\rho(u^X_{\Omega_{i_X,j_X}}) = \sum_{\gamma=1,\,\gamma\neq j_X}^{h} \left|\frac{L_{k-1,\Omega^X_{i_X,\gamma}}(u^X_{\Omega_{i_X,\gamma}})}{2}\right|\left(u^X_{\Omega_{i_X,\gamma}}\,\mathrm{sign}(L_{k-1,\Omega^X_{i_X,\gamma}}(u^X_{\Omega_{i_X,\gamma}})) - 1\right) + \sum_{\gamma=1}^{h} \left|\frac{L_{k-1,\Omega^Y_{i_X,\gamma}}(u^Y_{\Omega_{i_X,\gamma}})}{2}\right|\left(u^Y_{\Omega_{i_X,\gamma}}\,\mathrm{sign}(L_{k-1,\Omega^Y_{i_X,\gamma}}(u^Y_{\Omega_{i_X,\gamma}})) - 1\right)    (Equation 32)

In iterative Max-log APP decoding:

Math 33
\lambda_{k,n_X} = L_{k-1,\Omega^X_{i_X,j_X}}(u^X_{\Omega_{i_X,j_X}}) + \max_{U_{k,n_X,+1}}\{\Psi(u(i_X), y(i_X), \rho(u^X_{\Omega_{i_X,j_X}}))\} - \max_{U_{k,n_X,-1}}\{\Psi(u(i_X), y(i_X), \rho(u^X_{\Omega_{i_X,j_X}}))\}    (Equation 33)

Math 34
\Psi(u(i_X), y(i_X), \rho(u^X_{\Omega_{i_X,j_X}})) = -\frac{1}{2\sigma^2}\,\|y(i_X) - H_{22}(i_X)\,s(u(i_X))\|^2 + \rho(u^X_{\Omega_{i_X,j_X}})    (Equation 34)

Step B.3 (counting the number of iterations and estimating a codeword): increment l_{mimo} if l_{mimo} < l_{mimo,max}, and return to step B.2. When l_{mimo} = l_{mimo,max}, the estimated codeword is sought as in the following Equation.

Math 35
\hat{u}_{n_X} = \begin{cases} 1 & L_{l_{mimo},n_X} \ge 0 \\ -1 & L_{l_{mimo},n_X} < 0 \end{cases}    (Equation 35)

Here, let X = a, b. FIG. 3 is an example of the structure of a transmission device 300 in the present embodiment. An encoder 302A receives information (data) 301A and a frame structure signal 313 as inputs and, in accordance with the frame structure signal 313, performs error correction coding such as convolutional coding, LDPC coding, turbo coding, or the like, outputting encoded data 303A. (The frame structure signal 313 includes information such as the error correction scheme used for error correction coding of the data, the coding rate, the block length, and the like. The encoder 302A uses the error correction scheme indicated by the frame structure signal 313. Furthermore, the error correction scheme may be hopped.) An interleaver 304A receives the encoded data 303A and the frame structure signal 313 as inputs and performs interleaving, i.e., changing the order of the data, to output interleaved data 305A. (The scheme of interleaving may be hopped based on the frame structure signal 313.) A mapping unit 306A receives the interleaved data 305A and the frame structure signal 313 as inputs, performs modulation such as Quadrature Phase Shift Keying (QPSK), 16 Quadrature Amplitude Modulation (16QAM), 64 Quadrature Amplitude Modulation (64QAM), or the like, and outputs a resulting baseband signal 307A. (The modulation scheme may be hopped based on the frame structure signal 313.) FIGS. 24A and 24B are an example of a mapping scheme over an I-Q plane, having an in-phase component I and a quadrature component Q, to form a baseband signal in QPSK modulation. For example, as shown in FIG. 24A, if the input data is "00", the output is I = 1.0, Q = 1.0. Similarly, for input data of "01", the output is I = -1.0, Q = 1.0, and so forth. FIG. 24B is an example of a scheme of mapping in an I-Q plane for QPSK modulation that differs from FIG. 24A. The difference between FIG. 24B and FIG. 24A is that the signal points in FIG. 24A have been rotated around the origin to yield the signal points of FIG. 24B. Non-Patent Literature 9 and Non-Patent Literature 10 describe such a constellation rotation scheme, and the Cyclic Q Delay described in Non-Patent Literature 9 and Non-Patent Literature 10 may also be adopted.
As another example apart from FIGS. 24A and 24B, FIGS. 25A and 25B show the signal point layout in the I-Q plane for 16QAM. The example corresponding to FIG. 24A is shown in FIG. 25A, and the example corresponding to FIG. 24B is shown in FIG. 25B. An encoder 302B receives information (data) 301B and the frame structure signal 313 as inputs and, in accordance with the frame structure signal 313, performs error correction coding such as convolutional coding, LDPC coding, turbo coding, or the like, outputting encoded data 303B. (The frame structure signal 313 includes information such as the error correction scheme used, the coding rate, the block length, and the like. The error correction scheme indicated by the frame structure signal 313 is used. Furthermore, the error correction scheme may be hopped.) An interleaver 304B receives the encoded data 303B and the frame structure signal 313 as inputs and performs interleaving, i.e., changing the order of the data, to output interleaved data 305B. (The scheme of interleaving may be hopped based on the frame structure signal 313.) A mapping unit 306B receives the interleaved data 305B and the frame structure signal 313 as inputs, performs modulation such as Quadrature Phase Shift Keying (QPSK), 16 Quadrature Amplitude Modulation (16QAM), 64 Quadrature Amplitude Modulation (64QAM), or the like, and outputs a resulting baseband signal 307B. (The modulation scheme may be hopped based on the frame structure signal 313.) A weighting information generating unit 314 receives the frame structure signal 313 as an input and outputs information 315 regarding a weighting scheme based on the frame structure signal 313. The weighting scheme is characterized by regular hopping between weights. A weighting unit 308A receives the baseband signal 307A, the baseband signal 307B, and the information 315 regarding the weighting scheme, and based on the information 315 regarding the weighting scheme, performs weighting on the baseband signal 307A and the baseband signal 307B and outputs a signal 309A resulting from the weighting. Details on the weighting scheme are provided later. A wireless unit 310A receives the signal 309A resulting from the weighting as an input and performs processing such as orthogonal modulation, band limiting, frequency conversion, amplification, and the like, outputting a transmission signal 311A. The transmission signal 311A is output as a radio wave from an antenna 312A. A weighting unit 308B receives the baseband signal 307A, the baseband signal 307B, and the information 315 regarding the weighting scheme, and based on the information 315 regarding the weighting scheme, performs weighting on the baseband signal 307A and the baseband signal 307B and outputs a signal 309B resulting from the weighting. FIG. 26 shows the structure of a weighting unit. The baseband signal 307A is multiplied by w11(t), yielding w11(t)s1(t), and is multiplied by w21(t), yielding w21(t)s1(t). Similarly, the baseband signal 307B is multiplied by w12(t) to generate w12(t)s2(t) and is multiplied by w22(t) to generate w22(t)s2(t). Next, z1(t) = w11(t)s1(t) + w12(t)s2(t) and z2(t) = w21(t)s1(t) + w22(t)s2(t) are obtained. Details on the weighting scheme are provided later. A wireless unit 310B receives the signal 309B resulting from the weighting as an input and performs processing such as orthogonal modulation, band limiting, frequency conversion, amplification, and the like, outputting a transmission signal 311B. The transmission signal 311B is output as a radio wave from an antenna 312B.
FIG. 4 shows an example of the structure of a transmission device 400 that differs from FIG. 3. The differences in FIG. 4 from FIG. 3 are described. An encoder 402 receives information (data) 401 and the frame structure signal 313 as inputs and, in accordance with the frame structure signal 313, performs error correction coding and outputs encoded data 403. A distribution unit 404 receives the encoded data 403 as an input, distributes the data 403, and outputs data 405A and data 405B. Note that in FIG. 4, one encoder is shown, but the number of encoders is not limited in this way. The present invention may similarly be embodied when the number of encoders is m (where m is an integer greater than or equal to one) and the distribution unit divides the encoded data generated by each encoder into two parts and outputs the divided data. FIG. 5 shows an example of a frame structure in the time domain for a transmission device according to the present embodiment. A symbol 500_1 is a symbol for notifying the reception device of the transmission scheme. For example, the symbol 500_1 conveys information such as the error correction scheme used for transmitting data symbols, the coding rate, and the modulation scheme used for transmitting data symbols. The symbol 501_1 is for estimating channel fluctuation for the modulated signal z1(t) (where t is time) transmitted by the transmission device. The symbol 502_1 is the data symbol transmitted as symbol number u (in the time domain) by the modulated signal z1(t), and the symbol 503_1 is the data symbol transmitted as symbol number u+1 by the modulated signal z1(t). The symbol 501_2 is for estimating channel fluctuation for the modulated signal z2(t) (where t is time) transmitted by the transmission device. The symbol 502_2 is the data symbol transmitted as symbol number u by the modulated signal z2(t), and the symbol 503_2 is the data symbol transmitted as symbol number u+1 by the modulated signal z2(t). The following describes the relationships between the modulated signals z1(t) and z2(t) transmitted by the transmission device and the received signals r1(t) and r2(t) received by the reception device. In FIG. 5, 504#1 and 504#2 indicate transmit antennas in the transmission device, and 505#1 and 505#2 indicate receive antennas in the reception device. The transmission device transmits the modulated signal z1(t) from the transmit antenna 504#1 and transmits the modulated signal z2(t) from the transmit antenna 504#2. In this case, the modulated signal z1(t) and the modulated signal z2(t) are assumed to occupy the same (a shared/common) frequency (bandwidth). Letting the channel fluctuations between the transmit antennas of the transmission device and the antennas of the reception device be h11(t), h12(t), h21(t), and h22(t), the signal received by the receive antenna 505#1 of the reception device be r1(t), and the signal received by the receive antenna 505#2 of the reception device be r2(t), the following relationship holds.

Math 36
\begin{pmatrix} r1(t) \\ r2(t) \end{pmatrix} = \begin{pmatrix} h_{11}(t) & h_{12}(t) \\ h_{21}(t) & h_{22}(t) \end{pmatrix} \begin{pmatrix} z1(t) \\ z2(t) \end{pmatrix}    (Equation 36)

FIG. 6 relates to the weighting scheme (precoding scheme) in the present embodiment. A weighting unit 600 integrates the weighting units 308A and 308B in FIG. 3. As shown in FIG. 6, a stream s1(t) and a stream s2(t) correspond to the baseband signals 307A and 307B in FIG. 3. In other words, the streams s1(t) and s2(t) are the baseband signal in-phase components I and quadrature components Q when mapped according to a modulation scheme such as QPSK, 16QAM, 64QAM, or the like.
As indicated by the frame structure of FIG. 6, the stream s1(t) is represented as s1(u) at symbol number u, as s1(u+1) at symbol number u+1, and so forth. Similarly, the stream s2(t) is represented as s2(u) at symbol number u, as s2(u+1) at symbol number u+1, and so forth. The weighting unit 600 receives the baseband signals 307A (s1(t)) and 307B (s2(t)) and the information 315 regarding the weighting scheme in FIG. 3 as inputs, performs weighting in accordance with the information 315 regarding the weighting scheme, and outputs the signals 309A (z1(t)) and 309B (z2(t)) after weighting in FIG. 3. In this case, z1(t) and z2(t) are represented as follows. For symbol number 4i (where i is an integer greater than or equal to zero):

Math 37
\begin{pmatrix} z1(4i) \\ z2(4i) \end{pmatrix} = \frac{1}{\sqrt{2}} \begin{pmatrix} e^{j0} & e^{j0} \\ e^{j0} & e^{j\frac{3}{4}\pi} \end{pmatrix} \begin{pmatrix} s1(4i) \\ s2(4i) \end{pmatrix}    (Equation 37)

Here, j is an imaginary unit. For symbol number 4i+1:

Math 38
\begin{pmatrix} z1(4i+1) \\ z2(4i+1) \end{pmatrix} = \frac{1}{\sqrt{2}} \begin{pmatrix} e^{j0} & e^{j0} \\ e^{j\frac{3}{4}\pi} & e^{j0} \end{pmatrix} \begin{pmatrix} s1(4i+1) \\ s2(4i+1) \end{pmatrix}    (Equation 38)

For symbol number 4i+2:

Math 39
\begin{pmatrix} z1(4i+2) \\ z2(4i+2) \end{pmatrix} = \frac{1}{\sqrt{2}} \begin{pmatrix} e^{j0} & e^{j\frac{3}{4}\pi} \\ e^{j0} & e^{j0} \end{pmatrix} \begin{pmatrix} s1(4i+2) \\ s2(4i+2) \end{pmatrix}    (Equation 39)

For symbol number 4i+3:

Math 40
\begin{pmatrix} z1(4i+3) \\ z2(4i+3) \end{pmatrix} = \frac{1}{\sqrt{2}} \begin{pmatrix} e^{j\frac{3}{4}\pi} & e^{j0} \\ e^{j0} & e^{j0} \end{pmatrix} \begin{pmatrix} s1(4i+3) \\ s2(4i+3) \end{pmatrix}    (Equation 40)

In this way, the weighting unit in FIG. 6 regularly hops between precoding weights over a four-slot period (cycle). (While the precoding weights have been described as being hopped regularly over four slots, the number of slots for regular hopping is not limited to four.) Incidentally, Non-Patent Literature 4 describes hopping the precoding weights for each slot. This hopping of precoding weights is characterized by being random. On the other hand, in the present embodiment, a certain period (cycle) is provided, and the precoding weights are hopped regularly. Furthermore, in each 2x2 precoding weight matrix composed of four precoding weights, the absolute value of each of the four precoding weights is equivalent to 1/sqrt(2), and hopping is regularly performed between precoding weight matrices having this characteristic. In an LOS environment, if a special precoding matrix is used, reception quality may greatly improve, yet the special precoding matrix differs depending on the conditions of the direct waves. In an LOS environment, however, a certain tendency exists, and if precoding matrices are hopped regularly in accordance with this tendency, the reception quality of data greatly improves. On the other hand, when precoding matrices are hopped at random, a precoding matrix other than the above-described special precoding matrix may exist, and the possibility of performing precoding only with biased precoding matrices that are not suitable for the LOS environment also exists. Therefore, in an LOS environment, excellent reception quality may not always be obtained. Accordingly, there is a need for a precoding hopping scheme suitable for an LOS environment. The present invention proposes such a precoding scheme.
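The four-slot regular hopping of Equations 37-40 can be summarized in a short sketch (illustrative only, not the patent's implementation; the QPSK input symbols and helper names are assumptions). Each matrix entry has magnitude 1/sqrt(2), matching the characteristic described above.

```python
# Minimal sketch of the four-slot precoding cycle of Equations 37-40.
import numpy as np

A = np.exp(1j * 0.0)                     # e^{j0}
B = np.exp(1j * 3.0 * np.pi / 4.0)       # e^{j(3/4)pi}
CYCLE = [np.array([[A, A], [A, B]]) / np.sqrt(2),   # slot 4i,   Equation 37
         np.array([[A, A], [B, A]]) / np.sqrt(2),   # slot 4i+1, Equation 38
         np.array([[A, B], [A, A]]) / np.sqrt(2),   # slot 4i+2, Equation 39
         np.array([[B, A], [A, A]]) / np.sqrt(2)]   # slot 4i+3, Equation 40

def precode(s1, s2):
    """Yield (z1(u), z2(u)) for each symbol number u, hopping regularly."""
    for u, (a, b) in enumerate(zip(s1, s2)):
        z = CYCLE[u % 4] @ np.array([a, b])
        yield z[0], z[1]

# illustrative QPSK streams (an assumption for the demo)
qpsk = (np.array([1, -1, 1, 1, -1]) + 1j * np.array([1, 1, -1, 1, -1])) / np.sqrt(2)
for u, (z1, z2) in enumerate(precode(qpsk, qpsk[::-1])):
    print(u, z1, z2)
```

The `u % 4` index is exactly the period (cycle) referred to above: the same four matrices recur in a fixed order rather than being drawn at random.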
FIG. 7 is an example of the structure of a reception device 700 in the present embodiment. A wireless unit 703_X receives, as an input, a received signal 702_X received by an antenna 701_X, performs processing such as frequency conversion, quadrature demodulation, and the like, and outputs a baseband signal 704_X. A channel fluctuation estimating unit 705_1 for the modulated signal z1 transmitted by the transmission device receives the baseband signal 704_X as an input, extracts the reference symbol 501_1 for channel estimation as in FIG. 5, estimates a value corresponding to h11 in Equation 36, and outputs a channel estimation signal 706_1. A channel fluctuation estimating unit 705_2 for the modulated signal z2 transmitted by the transmission device receives the baseband signal 704_X as an input, extracts the reference symbol 501_2 for channel estimation as in FIG. 5, estimates a value corresponding to h12 in Equation 36, and outputs a channel estimation signal 706_2. A wireless unit 703_Y receives, as an input, a received signal 702_Y received by an antenna 701_Y, performs processing such as frequency conversion, quadrature demodulation, and the like, and outputs a baseband signal 704_Y. A channel fluctuation estimating unit 707_1 for the modulated signal z1 transmitted by the transmission device receives the baseband signal 704_Y as an input, extracts the reference symbol 501_1 for channel estimation as in FIG. 5, estimates a value corresponding to h21 in Equation 36, and outputs a channel estimation signal 708_1. A channel fluctuation estimating unit 707_2 for the modulated signal z2 transmitted by the transmission device receives the baseband signal 704_Y as an input, extracts the reference symbol 501_2 for channel estimation as in FIG. 5, estimates a value corresponding to h22 in Equation 36, and outputs a channel estimation signal 708_2. A control information decoding unit 709 receives the baseband signal 704_X and the baseband signal 704_Y as inputs, detects the symbol 500_1 that indicates the transmission scheme as in FIG. 5, and outputs a signal 710 regarding information on the transmission scheme indicated by the transmission device. A signal processing unit 711 receives, as inputs, the baseband signals 704_X and 704_Y, the channel estimation signals 706_1, 706_2, 708_1, and 708_2, and the signal 710 regarding information on the transmission scheme indicated by the transmission device, performs detection and decoding, and outputs received data 712_1 and 712_2. Next, operations by the signal processing unit 711 in FIG. 7 are described in detail. FIG. 8 is an example of the structure of the signal processing unit 711 in the present embodiment. FIG. 8 shows an INNER MIMO detector, a soft-in/soft-out decoder, and a weighting coefficient generating unit as the main elements. Non-Patent Literature 2 and Non-Patent Literature 3 describe the scheme of iterative decoding with this structure. The MIMO system described in Non-Patent Literature 2 and Non-Patent Literature 3 is a spatial multiplexing MIMO system, whereas the present embodiment differs from Non-Patent Literature 2 and Non-Patent Literature 3 by describing a MIMO system that changes the precoding weights with time. Letting the (channel) matrix in Equation 36 be H(t), the precoding weight matrix in FIG. 6 be W(t) (where the precoding weight matrix changes over t), the received vector be R(t) = (r1(t), r2(t))^T, and the stream vector be S(t) = (s1(t), s2(t))^T, the following Equation holds.

Math 41
R(t) = H(t)W(t)S(t)    (Equation 41)

In this case, the reception device can apply the decoding scheme of Non-Patent Literature 2 and Non-Patent Literature 3 to the received vector R(t) by considering H(t)W(t) as the channel matrix.
Therefore, a weighting coefficient generating unit 819 in FIG. 8 receives, as an input, a signal 818 regarding information on the transmission scheme indicated by the transmission device (corresponding to 710 in FIG. 7) and outputs a signal 820 regarding information on the weighting coefficients. An INNER MIMO detector 803 receives the signal 820 regarding information on the weighting coefficients as an input and, using the signal 820, performs the calculation in Equation 41. Iterative detection and decoding is thus performed. The following describes the operations thereof. In the signal processing unit in FIG. 8, a processing scheme such as that shown in FIG. 10 is necessary for iterative decoding (iterative detection). First, one codeword (or one frame) of the modulated signal (stream) s1 and one codeword (or one frame) of the modulated signal (stream) s2 are decoded. As a result, the log-likelihood ratio (LLR) of each bit of the one codeword (or one frame) of the modulated signal (stream) s1 and of the one codeword (or one frame) of the modulated signal (stream) s2 is obtained from the soft-in/soft-out decoder. Detection and decoding are performed again using the LLR. These operations are performed multiple times (these operations being referred to as iterative decoding (iterative detection)). Hereinafter, the description focuses on the scheme of generating the log-likelihood ratio (LLR) of a symbol at a particular time in one frame. In FIG. 8, a storage unit 815 receives, as inputs, a baseband signal 801X (corresponding to the baseband signal 704_X in FIG. 7), a channel estimation signal group 802X (corresponding to the channel estimation signals 706_1 and 706_2 in FIG. 7), a baseband signal 801Y (corresponding to the baseband signal 704_Y in FIG. 7), and a channel estimation signal group 802Y (corresponding to the channel estimation signals 708_1 and 708_2 in FIG. 7). In order to achieve iterative decoding (iterative detection), the storage unit 815 calculates H(t)W(t) in Equation 41 and stores the calculated matrix as a transformed channel signal group. The storage unit 815 outputs the above signals when necessary as a baseband signal 816X, a transformed channel estimation signal group 817X, a baseband signal 816Y, and a transformed channel estimation signal group 817Y. Subsequent operations are described separately for initial detection and for iterative decoding (iterative detection). <Initial Detection> The INNER MIMO detector 803 receives, as inputs, the baseband signal 801X, the channel estimation signal group 802X, the baseband signal 801Y, and the channel estimation signal group 802Y. Here, the modulation scheme for the modulated signal (stream) s1 and the modulated signal (stream) s2 is described as 16QAM. The INNER MIMO detector 803 first calculates H(t)W(t) from the channel estimation signal group 802X and the channel estimation signal group 802Y to seek the candidate signal points corresponding to the baseband signal 801X. FIG. 11 shows such a calculation. In FIG. 11, each black dot (•) is a candidate signal point in the I-Q plane. Since the modulation scheme is 16QAM, there are 256 candidate signal points. (Since FIG. 11 is only for illustration, not all 256 candidate signal points are shown.) Here, letting the four bits transferred by the modulated signal s1 be b0, b1, b2, and b3, and the four bits transferred by the modulated signal s2 be b4, b5, b6, and b7, candidate signal points corresponding to (b0, b1, b2, b3, b4, b5, b6, b7) exist in FIG. 11.
The squared Euclidean distance is sought between a received signal point 1101 (corresponding to the baseband signal 801X) and each candidate signal point. Each squared Euclidean distance is divided by the noise variance σ². Accordingly, E_X(b0, b1, b2, b3, b4, b5, b6, b7), i.e., the value of the squared Euclidean distance between a candidate signal point corresponding to (b0, b1, b2, b3, b4, b5, b6, b7) and a received signal point, divided by the noise variance, is sought. Note that the baseband signals and the modulated signals s1 and s2 are each complex signals. Similarly, H(t)W(t) is calculated from the channel estimation signal group 802X and the channel estimation signal group 802Y, the candidate signal points corresponding to the baseband signal 801Y are sought, the squared Euclidean distance from the received signal point (corresponding to the baseband signal 801Y) is sought, and the squared Euclidean distance is divided by the noise variance σ². Accordingly, E_Y(b0, b1, b2, b3, b4, b5, b6, b7), i.e., the value of the squared Euclidean distance between a candidate signal point corresponding to (b0, b1, b2, b3, b4, b5, b6, b7) and a received signal point, divided by the noise variance, is sought. Then E_X(b0, b1, b2, b3, b4, b5, b6, b7) + E_Y(b0, b1, b2, b3, b4, b5, b6, b7) = E(b0, b1, b2, b3, b4, b5, b6, b7) is sought. The INNER MIMO detector 803 outputs E(b0, b1, b2, b3, b4, b5, b6, b7) as a signal 804. A log-likelihood calculating unit 805A receives the signal 804 as an input, calculates the log likelihood for bits b0, b1, b2, and b3, and outputs a log-likelihood signal 806A. Note that during calculation of the log likelihood, the log likelihood for "1" and the log likelihood for "0" are calculated. The calculation scheme is as shown in Equations 28, 29, and 30. Details can be found in Non-Patent Literature 2 and Non-Patent Literature 3. Similarly, a log-likelihood calculating unit 805B receives the signal 804 as an input, calculates the log likelihood for bits b4, b5, b6, and b7, and outputs a log-likelihood signal 806B. A deinterleaver (807A) receives the log-likelihood signal 806A as an input, performs deinterleaving corresponding to the interleaver (the interleaver (304A) in FIG. 3), and outputs a deinterleaved log-likelihood signal 808A. Similarly, a deinterleaver (807B) receives the log-likelihood signal 806B as an input, performs deinterleaving corresponding to the interleaver (the interleaver (304B) in FIG. 3), and outputs a deinterleaved log-likelihood signal 808B. A log-likelihood ratio calculating unit 809A receives the deinterleaved log-likelihood signal 808A as an input, calculates the log-likelihood ratio (LLR) of the bits encoded by the encoder 302A in FIG. 3, and outputs a log-likelihood ratio signal 810A. Similarly, a log-likelihood ratio calculating unit 809B receives the deinterleaved log-likelihood signal 808B as an input, calculates the log-likelihood ratio (LLR) of the bits encoded by the encoder 302B in FIG. 3, and outputs a log-likelihood ratio signal 810B. A soft-in/soft-out decoder 811A receives the log-likelihood ratio signal 810A as an input, performs decoding, and outputs a decoded log-likelihood ratio 812A. Similarly, a soft-in/soft-out decoder 811B receives the log-likelihood ratio signal 810B as an input, performs decoding, and outputs a decoded log-likelihood ratio 812B.
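The candidate-point search and the combined metric E = E_X + E_Y described above can be sketched as follows. This is a hedged illustration rather than the patent's implementation: the 16QAM normalization, the effective channel values, and the helper name candidate_metrics are all assumptions.

```python
# Sketch of the INNER MIMO detector's candidate scoring: enumerate all
# 256 16QAM pairs (s1, s2), pass them through the effective channel
# H(t)W(t), and score each against the received vector.
import itertools
import numpy as np

LEVELS = np.array([-3, -1, 1, 3]) / np.sqrt(10)           # normalized 16QAM
POINTS = np.array([i + 1j * q for i in LEVELS for q in LEVELS])

def candidate_metrics(r, HW, sigma2):
    """E(b0..b7) = E_X + E_Y over both receive branches for all 256
    candidate (s1, s2) pairs; r is the length-2 received vector."""
    E = {}
    for k1, k2 in itertools.product(range(16), repeat=2):
        cand = HW @ np.array([POINTS[k1], POINTS[k2]])
        # squared Euclidean distances divided by the noise variance,
        # summed over the X and Y branches
        E[(k1, k2)] = np.sum(np.abs(r - cand) ** 2) / sigma2
    return E

rng = np.random.default_rng(1)
HW = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))   # assumed H(t)W(t)
s = np.array([POINTS[5], POINTS[9]])
r = HW @ s + 0.05 * (rng.normal(size=2) + 1j * rng.normal(size=2))
E = candidate_metrics(r, HW, 0.005)
print(min(E, key=E.get))   # with small noise, typically recovers (5, 9)
```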
<Iterative Decoding (Iterative Detection), Number of Iterations k> An interleaver (813A) receives the log-likelihood ratio 812A decoded by the soft-in/soft-out decoder in the (k-1)th iteration as an input, performs interleaving, and outputs an interleaved log-likelihood ratio 814A. The interleaving pattern in the interleaver (813A) is similar to the interleaving pattern in the interleaver (304A) in FIG. 3. An interleaver (813B) receives the log-likelihood ratio 812B decoded by the soft-in/soft-out decoder in the (k-1)th iteration as an input, performs interleaving, and outputs an interleaved log-likelihood ratio 814B. The interleaving pattern in the interleaver (813B) is similar to the interleaving pattern in the interleaver (304B) in FIG. 3. The INNER MIMO detector 803 receives, as inputs, the baseband signal 816X, the transformed channel estimation signal group 817X, the baseband signal 816Y, the transformed channel estimation signal group 817Y, the interleaved log-likelihood ratio 814A, and the interleaved log-likelihood ratio 814B. The reason for using the baseband signal 816X, the transformed channel estimation signal group 817X, the baseband signal 816Y, and the transformed channel estimation signal group 817Y instead of the baseband signal 801X, the channel estimation signal group 802X, the baseband signal 801Y, and the channel estimation signal group 802Y is that a delay occurs due to iterative decoding. The difference between the operations of the INNER MIMO detector 803 for iterative decoding and for initial detection is the use of the interleaved log-likelihood ratios 814A and 814B during signal processing. The INNER MIMO detector 803 first seeks E(b0, b1, b2, b3, b4, b5, b6, b7), as during initial detection. Additionally, coefficients corresponding to Equations 11 and 32 are sought from the interleaved log-likelihood ratio 814A and the interleaved log-likelihood ratio 814B. The value E(b0, b1, b2, b3, b4, b5, b6, b7) is adjusted using the sought coefficients, and the resulting value E′(b0, b1, b2, b3, b4, b5, b6, b7) is output as the signal 804. The log-likelihood calculating unit 805A receives the signal 804 as an input, calculates the log likelihood for bits b0, b1, b2, and b3, and outputs the log-likelihood signal 806A. Note that during calculation of the log likelihood, the log likelihood for "1" and the log likelihood for "0" are calculated. The calculation scheme is as shown in Equations 31, 32, 33, 34, and 35. Details can be found in Non-Patent Literature 2 and Non-Patent Literature 3. Similarly, the log-likelihood calculating unit 805B receives the signal 804 as an input, calculates the log likelihood for bits b4, b5, b6, and b7, and outputs the log-likelihood signal 806B. Operations from the deinterleaver onwards are similar to those for initial detection. Note that while FIG. 8 shows the structure of the signal processing unit when performing iterative detection, iterative detection is not always essential for obtaining excellent reception quality, and a structure not including the interleavers 813A and 813B, which are necessary only for iterative detection, is possible. In such a case, the INNER MIMO detector 803 does not perform iterative detection. The main part of the present embodiment is the calculation of H(t)W(t). Note that, as shown in Non-Patent Literature 5 and the like, QR decomposition may be used to perform initial detection and iterative detection.
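A hedged sketch of how the prior term ρ of Equations 31-34 enters the detector metric during iterative detection follows. BPSK per stream is a simplifying assumption, and maxlog_update is a hypothetical helper, not the patent's E′ computation; the channel and prior-LLR values in the demo are likewise assumptions.

```python
# One Max-Log iterative-detection update in the shape of Equation 33:
# the metric Psi gains the prior contribution rho (Equation 32 shape)
# built from the decoder's previous-iteration LLRs, and the bit's own
# prior L_{k-1} is added back outside the maxima.
import itertools
import numpy as np

def maxlog_update(y, H, sigma2, prior_llrs, m):
    """Max-Log lambda_k for bit m of a 2-stream BPSK vector."""
    best = {+1: -np.inf, -1: -np.inf}
    for u in itertools.product([+1, -1], repeat=2):
        s = np.array(u, dtype=float)
        # rho: |L/2| (u sign(L) - 1) summed over the *other* bits
        rho = sum(abs(L / 2.0) * (u_i * np.sign(L) - 1.0)
                  for i, (u_i, L) in enumerate(zip(u, prior_llrs)) if i != m)
        psi = -np.linalg.norm(y - H @ s) ** 2 / (2 * sigma2) + rho
        best[u[m]] = max(best[u[m]], psi)
    return prior_llrs[m] + best[+1] - best[-1]

rng = np.random.default_rng(3)
H = (rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))) / np.sqrt(2)
y = H @ np.array([+1.0, -1.0]) + 0.1 * (rng.normal(size=2) + 1j * rng.normal(size=2))
print([maxlog_update(y, H, 0.02, [0.5, -0.8], m) for m in (0, 1)])
```

Note how rho is zero for a hypothesis that agrees with the sign of every prior LLR and penalizes disagreements by |L|, which is exactly the effect of the coefficient adjustment described above.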
Furthermore, as shown in Non-Patent Literature 11, based on H(t)W(t), a linear operation such as Minimum Mean Squared Error (MMSE) or Zero Forcing (ZF) filtering may be performed in order to perform initial detection. FIG. 9 is the structure of a signal processing unit that differs from FIG. 8 and is for the modulated signal transmitted by the transmission device in FIG. 4. The difference from FIG. 8 is the number of soft-in/soft-out decoders. A soft-in/soft-out decoder 901 receives, as inputs, the log-likelihood ratio signals 810A and 810B, performs decoding, and outputs a decoded log-likelihood ratio 902. A distribution unit 903 receives the decoded log-likelihood ratio 902 as an input and distributes the log-likelihood ratio 902. Other operations are similar to those in FIG. 8. FIGS. 12A and 12B show the BER characteristics for a transmission scheme using the precoding weights of the present embodiment under conditions similar to those of FIGS. 29A and 29B. FIG. 12A shows the BER characteristics of Max-log A Posteriori Probability (APP) without iterative detection (see Non-Patent Literature 1 and Non-Patent Literature 2), and FIG. 12B shows the BER characteristics of Max-log APP with iterative detection (see Non-Patent Literature 1 and Non-Patent Literature 2) (number of iterations: five). Comparing FIGS. 12A, 12B, 29A, and 29B shows that if the transmission scheme of the present embodiment is used, the BER characteristics when the Rician factor is large greatly improve over the BER characteristics when using a conventional spatial multiplexing MIMO system, thereby confirming the usefulness of the scheme in the present embodiment. As described above, when a transmission device transmits a plurality of modulated signals from a plurality of antennas in a MIMO system, the advantageous effect of improved transmission quality, as compared to a conventional spatial multiplexing MIMO system, is achieved in an LOS environment in which direct waves dominate by hopping between precoding weights regularly over time, as in the present embodiment. In the present embodiment, and in particular with regard to the structure of the reception device, operations have been described for a limited number of antennas, but the present invention may be embodied in the same way even if the number of antennas increases. In other words, the number of antennas in the reception device does not affect the operations or advantageous effects of the present embodiment. Furthermore, in the present embodiment, the example of LDPC coding has particularly been explained, but the present invention is not limited to LDPC coding. Furthermore, with regard to the decoding scheme, the soft-in/soft-out decoders are not limited to the example of sum-product decoding. Another soft-in/soft-out decoding scheme may be used, such as a BCJR algorithm, a SOVA algorithm, a Max-log-MAP algorithm, and the like. Details are provided in Non-Patent Literature 6. Additionally, in the present embodiment, the example of a single carrier scheme has been described, but the present invention is not limited in this way and may be similarly embodied for multi-carrier transmission. Accordingly, when using a scheme such as spread spectrum communication, Orthogonal Frequency-Division Multiplexing (OFDM), Single Carrier Frequency Division Multiple Access (SC-FDMA), Single Carrier Orthogonal Frequency-Division Multiplexing (SC-OFDM), or wavelet OFDM as described in Non-Patent Literature 7 and the like, the present invention may be similarly embodied.
Furthermore, in the present embodiment, symbols other than data symbols, such as pilot symbols (preamble, unique word, and the like), symbols for transmission of control information, and the like, may be arranged in the frame in any way.

The following describes the use of OFDM as an example of a multi-carrier scheme. FIG. 13 shows the structure of a transmission device when using OFDM. In FIG. 13, elements that operate in a similar way to FIG. 3 bear the same reference signs. An OFDM related processor 1301A receives, as an input, the weighted signal 309A, performs processing related to OFDM, and outputs a transmission signal 1302A. Similarly, an OFDM related processor 1301B receives, as an input, the weighted signal 309B, performs processing related to OFDM, and outputs a transmission signal 1302B.

FIG. 14 shows an example of the structure from the OFDM related processors 1301A and 1301B in FIG. 13 onwards. The part from 1401A through 1410A corresponds to the part from 1301A through 312A in FIG. 13, and the part from 1401B through 1410B corresponds to the part from 1301B through 312B in FIG. 13.

A serial/parallel converter 1402A performs serial/parallel conversion on a weighted signal 1401A (corresponding to the weighted signal 309A in FIG. 13) and outputs a parallel signal 1403A. A reordering unit 1404A receives the parallel signal 1403A as an input, performs reordering, and outputs a reordered signal 1405A. Reordering is described in detail later. An inverse fast Fourier transformer 1406A receives the reordered signal 1405A as an input, performs an inverse fast Fourier transform, and outputs the transformed signal 1407A. A wireless unit 1408A receives the transformed signal 1407A as an input, performs processing such as frequency conversion, amplification, and the like, and outputs a modulated signal 1409A. The modulated signal 1409A is output as a radio wave from an antenna 1410A.

A serial/parallel converter 1402B performs serial/parallel conversion on a weighted signal 1401B (corresponding to the weighted signal 309B in FIG. 13) and outputs a parallel signal 1403B. A reordering unit 1404B receives the parallel signal 1403B as an input, performs reordering, and outputs a reordered signal 1405B. Reordering is described in detail later. An inverse fast Fourier transformer 1406B receives the reordered signal 1405B as an input, performs an inverse fast Fourier transform, and outputs the transformed signal 1407B. A wireless unit 1408B receives the transformed signal 1407B as an input, performs processing such as frequency conversion, amplification, and the like, and outputs a modulated signal 1409B. The modulated signal 1409B is output as a radio wave from an antenna 1410B.

In the transmission device of FIG. 3, since the transmission scheme does not use multi-carrier transmission, precoding hops to form a four-slot period (cycle), as shown in FIG. 6, and the precoded symbols are arranged in the time domain. When using a multi-carrier transmission scheme such as the OFDM scheme shown in FIG. 13, it is of course possible to arrange the precoded symbols in the time domain as in FIG. 3 for each (sub)carrier. In the case of a multi-carrier transmission scheme, however, it is also possible to arrange symbols in the frequency domain, or in both the frequency and time domains. The following describes these arrangements.

FIGS. 15A and 15B show an example of a scheme of reordering symbols by the reordering units 1404A and 1404B in FIG. 14, with the horizontal axis representing frequency and the vertical axis representing time. The frequency domain runs from (sub)carrier 0 through (sub)carrier 9.
The modulated signals z1 and z2 use the same frequency bandwidth at the same time. FIG. 15A shows the reordering scheme for symbols of the modulated signal z1, and FIG. 15B shows the reordering scheme for symbols of the modulated signal z2. Numbers #0, #1, #2, #3, . . . are assigned in order to the symbols of the weighted signal 1401A, which is input into the serial/parallel converter 1402A. At this point, symbols are assigned regularly, as shown in FIG. 15A: the symbols #0, #1, #2, #3, . . . are arranged in order starting from carrier 0, with symbols #0 through #9 assigned to time $1 and symbols #10 through #19 subsequently assigned to time $2. Similarly, numbers #0, #1, #2, #3, . . . are assigned in order to the symbols of the weighted signal 1401B, which is input into the serial/parallel converter 1402B, and are assigned regularly as shown in FIG. 15B: the symbols #0, #1, #2, #3, . . . are arranged in order starting from carrier 0, with symbols #0 through #9 assigned to time $1 and symbols #10 through #19 assigned to time $2. Note that the modulated signals z1 and z2 are complex signals.

The symbol group 1501 and the symbol group 1502 shown in FIGS. 15A and 15B are the symbols for one period (cycle) when using the precoding weight hopping scheme shown in FIG. 6. Symbol #0 is the symbol when using the precoding weight of slot 4i in FIG. 6, symbol #1 that of slot 4i+1, symbol #2 that of slot 4i+2, and symbol #3 that of slot 4i+3. Accordingly, for any symbol #x: when x mod 4 is 0, symbol #x uses the precoding weight of slot 4i; when x mod 4 is 1, that of slot 4i+1; when x mod 4 is 2, that of slot 4i+2; and when x mod 4 is 3, that of slot 4i+3. In this way, when using a multi-carrier transmission scheme such as OFDM, symbols can be arranged in the frequency domain, unlike during single-carrier transmission. Furthermore, the ordering of symbols is not limited to the ordering shown in FIGS. 15A and 15B. Other examples are described with reference to FIGS. 16A, 16B, 17A, and 17B.

FIGS. 16A and 16B show an example of a scheme of reordering symbols by the reordering units 1404A and 1404B in FIG. 14 that differs from FIGS. 15A and 15B, with the horizontal axis representing frequency and the vertical axis representing time. FIG. 16A shows the reordering scheme for symbols of the modulated signal z1, and FIG. 16B shows the reordering scheme for symbols of the modulated signal z2. The difference from FIGS. 15A and 15B is that the reordering scheme for the symbols of the modulated signal z1 differs from that for the symbols of the modulated signal z2: in FIG. 16B, symbols #0 through #5 are assigned to carriers 4 through 9, symbols #6 through #9 are assigned to carriers 0 through 3, and symbols #10 through #19 are subsequently assigned regularly in the same way. At this point, as in FIGS. 15A and 15B, the symbol group 1601 and the symbol group 1602 shown in FIGS. 16A and 16B are the symbols for one period (cycle) when using the precoding weight hopping scheme shown in FIG. 6.
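The regular frequency-domain arrangement of FIGS. 15A and 15B can be stated compactly: symbol #x goes to carrier x mod 10 at time x div 10 and uses the precoding weight of slot 4i + (x mod 4). A small sketch, assuming the ten carriers and four-slot hopping period of this example:

```python
def arrange_symbols(num_symbols, num_carriers=10, hop_period=4):
    """Returns {(time, carrier): (symbol_number, weight_slot)} for the
    ordering of FIGS. 15A/15B; weight_slot u means the precoding weight
    of slot 4i+u in FIG. 6."""
    grid = {}
    for x in range(num_symbols):
        time, carrier = divmod(x, num_carriers)
        grid[(time, carrier)] = (x, x % hop_period)
    return grid

# e.g. arrange_symbols(20)[(1, 0)] == (10, 2): symbol #10 sits on carrier 0
# at time $2 and uses the precoding weight of slot 4i+2.
```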
FIGS. 17A and 17B show an example of a scheme of reordering symbols by the reordering units 1404A and 1404B in FIG. 14 that differs from FIGS. 15A and 15B, with the horizontal axis representing frequency and the vertical axis representing time. FIG. 17A shows the reordering scheme for symbols of the modulated signal z1, and FIG. 17B shows the reordering scheme for symbols of the modulated signal z2. The difference from FIGS. 15A and 15B is that, whereas the symbols are arranged in order by carrier in FIGS. 15A and 15B, they are not arranged in order by carrier in FIGS. 17A and 17B. It is obvious that, in FIGS. 17A and 17B, the reordering scheme for the symbols of the modulated signal z1 may differ from that for the symbols of the modulated signal z2, as in FIGS. 16A and 16B.

FIGS. 18A and 18B show an example of a scheme of reordering symbols by the reordering units 1404A and 1404B in FIG. 14 that differs from FIGS. 15A through 17B, with the horizontal axis representing frequency and the vertical axis representing time. FIG. 18A shows the reordering scheme for symbols of the modulated signal z1, and FIG. 18B shows the reordering scheme for symbols of the modulated signal z2. In FIGS. 15A through 17B, symbols are arranged in the frequency domain, whereas in FIGS. 18A and 18B, symbols are arranged in both the frequency and time domains.

In FIG. 6, an example has been described of hopping between precoding weights over four slots; here, however, an example of hopping over eight slots is described. The symbol groups 1801 and 1802 shown in FIGS. 18A and 18B are the symbols for one period (cycle) when using the precoding weight hopping scheme (and are therefore eight-symbol groups). Symbol #x uses the precoding weight of slot 8i + (x mod 8); that is, symbol #0 uses the precoding weight of slot 8i, symbol #1 that of slot 8i+1, symbol #2 that of slot 8i+2, and so on through symbol #7, which uses the precoding weight of slot 8i+7.

In the symbol ordering of FIGS. 18A and 18B, four slots in the time domain and two slots in the frequency domain, for a total of 4 × 2 = 8 slots, are used to arrange the symbols of one period (cycle). In general, letting the number of symbols in one period (cycle) be m × n (in other words, m × n precoding weights exist), the number of slots (the number of carriers) in the frequency domain used to arrange the symbols of one period (cycle) be n, and the number of slots used in the time domain be m, then m > n should be satisfied.
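The m > n guideline can be checked with a few lines. A sketch of one frequency-first filling, assuming the m = 4, n = 2 layout of FIGS. 18A and 18B and tiling the periods along the time axis of a single n-carrier strip; the figures also allow equivalent tilings over the full carrier set.

```python
def arrange_time_frequency(num_symbols, m=4, n=2):
    """Places symbol #x on an n-carrier-by-m-time block per period (cycle)
    of m*n precoding weights; the weight index is x mod (m*n)."""
    assert m > n, "the text recommends m > n (direct waves vary slowly in time)"
    placement = {}
    for x in range(num_symbols):
        period, offset = divmod(x, m * n)
        step, carrier = divmod(offset, n)      # fill carriers first, then time
        placement[(period * m + step, carrier)] = (x, x % (m * n))
    return placement
```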
This is because the phase of direct waves fluctuates more slowly in the time domain than in the frequency domain. Since the precoding weights are changed in the present embodiment to minimize the influence of steady direct waves, it is preferable to reduce the fluctuation of the direct waves within the period (cycle) for changing the precoding weights; accordingly, m > n should be satisfied. Furthermore, considering the above points, direct waves are more likely to remain stable when symbols are reordered in both the frequency and time domains, as in FIGS. 18A and 18B, than when they are reordered only in the frequency domain or only in the time domain, which makes it easier to achieve the advantageous effects of the present invention. When symbols are ordered in the frequency domain, however, fluctuations in the frequency domain are abrupt, which may yield diversity gain; therefore, reordering in both the frequency and time domains is not necessarily always the best scheme.

FIGS. 19A and 19B show an example of a scheme of reordering symbols by the reordering units 1404A and 1404B in FIG. 14 that differs from FIGS. 18A and 18B, with the horizontal axis representing frequency and the vertical axis representing time. FIG. 19A shows the reordering scheme for symbols of the modulated signal z1, and FIG. 19B shows the reordering scheme for symbols of the modulated signal z2. As in FIGS. 18A and 18B, FIGS. 19A and 19B show an arrangement of symbols using both the frequency and time axes; the difference is that, whereas symbols are arranged first in the frequency domain and then in the time domain in FIGS. 18A and 18B, they are arranged first in the time domain and then in the frequency domain in FIGS. 19A and 19B. In FIGS. 19A and 19B, the symbol group 1901 and the symbol group 1902 are the symbols for one period (cycle) when using the precoding hopping scheme.

Note that in FIGS. 18A, 18B, 19A, and 19B, as in FIGS. 16A and 16B, the present invention may be similarly embodied, and the advantageous effect of high reception quality achieved, with the symbol arranging scheme for the modulated signal z1 differing from that for the modulated signal z2. Furthermore, in FIGS. 18A, 18B, 19A, and 19B, as in FIGS. 17A and 17B, the present invention may be similarly embodied, and the advantageous effect of high reception quality achieved, without arranging the symbols in order.

FIG. 27 shows an example of a scheme of reordering symbols by the reordering units 1404A and 1404B in FIG. 14 that differs from the above examples, with the horizontal axis representing frequency and the vertical axis representing time. The case of regularly hopping between the precoding matrices of Equations 37-40 over four slots is considered. The characteristic feature of FIG. 27 is that symbols are arranged in order in the frequency domain but, with each step in the time domain, the pattern is cyclically shifted by n symbols (n = 1 in the example of FIG. 27). Among the four symbols in the symbol group 2710 in the frequency domain in FIG. 27, precoding hops between the precoding matrices of Equations 37-40: symbol #0 is precoded using the precoding matrix in Equation 37, symbol #1 using Equation 38, symbol #2 using Equation 39, and symbol #3 using Equation 40.
Similarly, for the symbol group 2720 in the frequency domain, symbol #4 is precoded using the precoding matrix in Equation 37, symbol #5 using Equation 38, symbol #6 using Equation 39, and symbol #7 using Equation 40. For the symbols at time $1, precoding thus hops between the above precoding matrices; in the time domain, however, the pattern is cyclically shifted, so that precoding hops between the precoding matrices for the symbol groups 2701, 2702, 2703, and 2704 as follows. In the symbol group 2701 in the time domain, symbol #0 is precoded using the precoding matrix in Equation 37, symbol #9 using Equation 38, symbol #18 using Equation 39, and symbol #27 using Equation 40. In the symbol group 2702 in the time domain, symbol #28 is precoded using Equation 37, symbol #1 using Equation 38, symbol #10 using Equation 39, and symbol #19 using Equation 40. In the symbol group 2703 in the time domain, symbol #20 is precoded using Equation 37, symbol #29 using Equation 38, symbol #2 using Equation 39, and symbol #11 using Equation 40. In the symbol group 2704 in the time domain, symbol #12 is precoded using Equation 37, symbol #21 using Equation 38, symbol #30 using Equation 39, and symbol #3 using Equation 40.

The characteristic of FIG. 27 is that, taking symbol #11 as an example, the symbols on either side in the frequency domain at the same time (symbols #10 and #12) are both precoded with a precoding matrix different from that of symbol #11, and the symbols on either side in the time domain on the same carrier (symbols #2 and #20) are likewise both precoded with a precoding matrix different from that of symbol #11. This holds not only for symbol #11: any symbol having neighbors on either side in the frequency domain and the time domain is characterized in the same way. As a result, the precoding matrices are effectively hopped between, and since the influence of stable direct-wave conditions is reduced, the possibility of improved reception quality of data increases.

In FIG. 27, the case of n = 1 has been described, but n is not limited in this way; the present invention may be similarly embodied with n = 3. Furthermore, in FIG. 27, the above characteristic is achieved by cyclically shifting the numbering of the arranged symbols as time progresses, but the characteristic may also be achieved by arranging the symbols randomly (or regularly).
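The cyclically shifted arrangement of FIG. 27 reduces to a single index rule. A minimal sketch, assuming nine data carriers per time slot as drawn in FIG. 27 and the four matrices of Equations 37-40:

```python
def fig27_matrix_index(symbol_number, num_carriers=9, n=1, period=4):
    """Index (0..3, standing for Equations 37..40) of the precoding matrix
    applied to a symbol under the FIG. 27 rule: in order along frequency,
    cyclically shifted by n with each time step."""
    time, carrier = divmod(symbol_number, num_carriers)
    return (carrier + n * time) % period

# Check against the text: symbols #10 and #12 (frequency neighbors of #11)
# and #2 and #20 (time neighbors of #11) all get an index different from #11.
neighbors = [fig27_matrix_index(x) for x in (10, 12, 2, 20)]
assert fig27_matrix_index(11) not in neighbors
```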
Embodiment 2

In Embodiment 1, regular hopping of the precoding weights as shown in FIG. 6 has been described. In the present embodiment, a scheme for designing specific precoding weights that differ from the precoding weights in FIG. 6 is described. In FIG. 6, the scheme for hopping between the precoding weights in Equations 37-40 has been described. By generalizing this scheme, the precoding weights may be changed as follows (the hopping period (cycle) for the precoding weights again has four slots, and the equations are listed similarly to Equations 37-40). For symbol number 4i+u, where i is an integer greater than or equal to zero, u = 0, 1, 2, 3, and j is an imaginary unit:

$$\begin{pmatrix} z_1(4i+u) \\ z_2(4i+u) \end{pmatrix} = \frac{1}{\sqrt{2}}\begin{pmatrix} e^{j\theta_{11}(4i+u)} & e^{j(\theta_{11}(4i+u)+\lambda)} \\ e^{j\theta_{21}(4i+u)} & e^{j(\theta_{21}(4i+u)+\lambda+\delta)}\end{pmatrix}\begin{pmatrix} s_1(4i+u) \\ s_2(4i+u) \end{pmatrix} \qquad \text{(Equations 42-45, for } u = 0, 1, 2, 3\text{)}$$

From Equations 36 and 41, the received vector R(t) = (r1(t), r2(t))^T can be represented as follows for symbol number 4i+u:

$$\begin{pmatrix} r_1(4i+u) \\ r_2(4i+u) \end{pmatrix} = \frac{1}{\sqrt{2}}\begin{pmatrix} h_{11}(4i+u) & h_{12}(4i+u) \\ h_{21}(4i+u) & h_{22}(4i+u)\end{pmatrix}\begin{pmatrix} e^{j\theta_{11}(4i+u)} & e^{j(\theta_{11}(4i+u)+\lambda)} \\ e^{j\theta_{21}(4i+u)} & e^{j(\theta_{21}(4i+u)+\lambda+\delta)}\end{pmatrix}\begin{pmatrix} s_1(4i+u) \\ s_2(4i+u) \end{pmatrix} \qquad \text{(Equations 46-49)}$$

In this case, it is assumed that only components of direct waves exist in the channel elements h11(t), h12(t), h21(t), and h22(t), that the amplitude components of the direct waves are all equal, and that fluctuations do not occur over time. With these assumptions, Equations 46-49 can be represented as follows:

$$\begin{pmatrix} r_1(4i+u) \\ r_2(4i+u) \end{pmatrix} = \frac{1}{\sqrt{2}}\begin{pmatrix} A e^{j0} & q \\ A e^{j0} & q \end{pmatrix}\begin{pmatrix} e^{j\theta_{11}(4i+u)} & e^{j(\theta_{11}(4i+u)+\lambda)} \\ e^{j\theta_{21}(4i+u)} & e^{j(\theta_{21}(4i+u)+\lambda+\delta)}\end{pmatrix}\begin{pmatrix} s_1(4i+u) \\ s_2(4i+u) \end{pmatrix} \qquad \text{(Equations 50-53)}$$

In Equations 50-53, let A be a positive real number and q be a complex number. The values of A and q are determined in accordance with the positional relationship between the transmission device and the reception device.
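As an illustration, the hopped precoding of Equations 42-45 can be applied to a symbol stream as follows. This is a sketch only: the phase tables and the choices of λ and δ are placeholders, with concrete values given in the examples below.

```python
import numpy as np

def precode(s1, s2, theta11, theta21, lam, delta):
    """Applies Equations 42-45: slot u = t mod 4 selects theta11[u] and
    theta21[u]; returns the transmitted streams z1, z2."""
    z1 = np.empty(len(s1), dtype=complex)
    z2 = np.empty(len(s2), dtype=complex)
    for t in range(len(s1)):
        u = t % 4
        W = np.array([
            [np.exp(1j * theta11[u]), np.exp(1j * (theta11[u] + lam))],
            [np.exp(1j * theta21[u]), np.exp(1j * (theta21[u] + lam + delta))],
        ]) / np.sqrt(2)
        z1[t], z2[t] = W @ np.array([s1[t], s2[t]])
    return z1, z2
```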
Equations 50-53 can further be represented as follows:

$$\begin{pmatrix} r_1(4i+u) \\ r_2(4i+u) \end{pmatrix} = \frac{1}{\sqrt{2}}\begin{pmatrix} e^{j0} \\ e^{j0} \end{pmatrix}\begin{pmatrix} A e^{j0} & q \end{pmatrix}\begin{pmatrix} e^{j\theta_{11}(4i+u)} & e^{j(\theta_{11}(4i+u)+\lambda)} \\ e^{j\theta_{21}(4i+u)} & e^{j(\theta_{21}(4i+u)+\lambda+\delta)}\end{pmatrix}\begin{pmatrix} s_1(4i+u) \\ s_2(4i+u) \end{pmatrix} \qquad \text{(Equations 54-57)}$$

As a result, when q is represented as follows, a signal component based on one of s1 and s2 is no longer included in r1 and r2, and therefore one of the signals s1 and s2 can no longer be obtained. For symbol number 4i+u (u = 0, 1, 2, 3):

$$q = -A e^{j(\theta_{11}(4i+u)-\theta_{21}(4i+u))}, \quad -A e^{j(\theta_{11}(4i+u)-\theta_{21}(4i+u)-\delta)} \qquad \text{(Equations 58-61)}$$

In this case, if q has the same solution for symbol numbers 4i, 4i+1, 4i+2, and 4i+3, then, since the channel elements of the direct waves do not greatly fluctuate, a reception device whose channel elements take a value of q equivalent to that same solution can no longer obtain excellent reception quality for any of the symbol numbers. Therefore, it is difficult to achieve the ability to correct errors, even if error correction codes are introduced. Accordingly, for q not to have the same solution, the following condition is necessary from Equations 58-61 when focusing on the one of the two solutions of q that does not include δ:

$$e^{j(\theta_{11}(4i+x)-\theta_{21}(4i+x))} \neq e^{j(\theta_{11}(4i+y)-\theta_{21}(4i+y))} \quad \text{for } \forall x, \forall y \;(x \neq y;\; x, y = 0, 1, 2, 3) \qquad \text{(Condition #1)}$$
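Condition #1 is easy to test numerically for a candidate phase set. A small sketch (the examples that follow all pass it):

```python
import numpy as np

def satisfies_condition_1(theta11, theta21, tol=1e-9):
    """True if e^{j(theta11[x]-theta21[x])} differs for every pair x != y,
    i.e. the delta-free poor-reception points are all distinct."""
    pts = np.exp(1j * (np.asarray(theta11) - np.asarray(theta21)))
    return all(abs(pts[x] - pts[y]) > tol
               for x in range(len(pts)) for y in range(x + 1, len(pts)))

# Example #1 below: satisfies_condition_1([0, 0, 0, 0],
#                                         [0, np.pi/2, np.pi, 3*np.pi/2])  -> True
```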
In an example fulfilling Condition #1, values are set as follows:

Example #1
(1) θ11(4i) = θ11(4i+1) = θ11(4i+2) = θ11(4i+3) = 0 radians,
(2) θ21(4i) = 0 radians,
(3) θ21(4i+1) = π/2 radians,
(4) θ21(4i+2) = π radians, and
(5) θ21(4i+3) = 3π/2 radians.
(The above is an example. It suffices for one each of 0 radians, π/2 radians, π radians, and 3π/2 radians to exist in the set (θ21(4i), θ21(4i+1), θ21(4i+2), θ21(4i+3)).) In this case, in particular under condition (1), there is no need to perform signal processing (rotation processing) on the baseband signal s1(t), which therefore offers the advantage of a reduction in circuit size.

Another example is to set values as follows:

Example #2
(6) θ11(4i) = 0 radians,
(7) θ11(4i+1) = π/2 radians,
(8) θ11(4i+2) = π radians,
(9) θ11(4i+3) = 3π/2 radians, and
(10) θ21(4i) = θ21(4i+1) = θ21(4i+2) = θ21(4i+3) = 0 radians.
(The above is an example. It suffices for one each of 0 radians, π/2 radians, π radians, and 3π/2 radians to exist in the set (θ11(4i), θ11(4i+1), θ11(4i+2), θ11(4i+3)).) In this case, in particular under condition (6), there is no need to perform signal processing (rotation processing) on the baseband signal s2(t), which therefore offers the advantage of a reduction in circuit size.

Yet another example is as follows:

Example #3
(11) θ11(4i) = θ11(4i+1) = θ11(4i+2) = θ11(4i+3) = 0 radians,
(12) θ21(4i) = 0 radians,
(13) θ21(4i+1) = π/4 radians,
(14) θ21(4i+2) = π/2 radians, and
(15) θ21(4i+3) = 3π/4 radians.
(The above is an example. It suffices for one each of 0 radians, π/4 radians, π/2 radians, and 3π/4 radians to exist in the set (θ21(4i), θ21(4i+1), θ21(4i+2), θ21(4i+3)).)

Example #4
(16) θ11(4i) = 0 radians,
(17) θ11(4i+1) = π/4 radians,
(18) θ11(4i+2) = π/2 radians,
(19) θ11(4i+3) = 3π/4 radians, and
(20) θ21(4i) = θ21(4i+1) = θ21(4i+2) = θ21(4i+3) = 0 radians.
(The above is an example. It suffices for one each of 0 radians, π/4 radians, π/2 radians, and 3π/4 radians to exist in the set (θ11(4i), θ11(4i+1), θ11(4i+2), θ11(4i+3)).)

While four examples have been shown, the scheme of satisfying Condition #1 is not limited to these examples.

Next, design requirements not only for θ11 and θ21 but also for λ and δ are described. It suffices to set λ to a certain value; it is then necessary to establish requirements for δ. The following describes the design scheme for δ when λ is set to zero radians. In this case, by defining δ so that π/2 radians ≤ |δ| ≤ π radians, excellent reception quality is achieved, particularly in an LOS environment.

Incidentally, for each of the symbol numbers 4i, 4i+1, 4i+2, and 4i+3, two points q exist where reception quality becomes poor, and therefore a total of 2 × 4 = 8 such points exist. In an LOS environment, in order to prevent reception quality from degrading in a specific reception terminal, these eight points should each have a different solution. In this case, in addition to Condition #1, Condition #2 is necessary:

$$e^{j(\theta_{11}(4i+x)-\theta_{21}(4i+x))} \neq e^{j(\theta_{11}(4i+y)-\theta_{21}(4i+y)-\delta)} \quad \text{for } \forall x, \forall y \;(x, y = 0, 1, 2, 3)$$
$$\text{and} \quad e^{j(\theta_{11}(4i+x)-\theta_{21}(4i+x)-\delta)} \neq e^{j(\theta_{11}(4i+y)-\theta_{21}(4i+y)-\delta)} \quad \text{for } \forall x, \forall y \;(x \neq y;\; x, y = 0, 1, 2, 3) \qquad \text{(Condition #2)}$$

Additionally, the phases of these eight points should be evenly distributed (since the phase of a direct wave is considered to have a high probability of being evenly distributed). The following describes the design scheme for δ to satisfy this requirement.

In the case of Example #1 and Example #2, the phases of the points at which reception quality is poor become evenly distributed by setting δ to ±3π/4 radians. For example, letting δ be 3π/4 radians in Example #1 (and letting A be a positive real number), then in each of the four slots, a point at which reception quality becomes poor exists once, as shown in FIG. 20. In the case of Example #3 and Example #4, the phases become evenly distributed by setting δ to ±π radians. For example, letting δ be π radians in Example #3, then in each of the four slots, a point at which reception quality becomes poor exists once, as shown in FIG. 21. (If the element q in the channel matrix H exists at the points shown in FIGS. 20 and 21, reception quality degrades.) With the above structure, excellent reception quality is achieved in an LOS environment.

Above, an example of changing the precoding weights in a four-slot period (cycle) has been described; below, changing the precoding weights in an N-slot period (cycle) is described.
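The even-phase claim above can be verified numerically. A sketch for Example #1 with λ = 0, δ = 3π/4, and A = 1 (A is illustrative; it only scales the points):

```python
import numpy as np

theta11 = np.zeros(4)
theta21 = np.array([0.0, np.pi / 2, np.pi, 3 * np.pi / 2])
delta, A = 3 * np.pi / 4, 1.0

# The two poor-reception points per slot, q = -A e^{j(th11-th21)} and
# q = -A e^{j(th11-th21-delta)} (Equations 58-61):
points = np.concatenate([-A * np.exp(1j * (theta11 - theta21)),
                         -A * np.exp(1j * (theta11 - theta21 - delta))])
phases = np.sort(np.angle(points))
print(np.round(np.diff(phases) / np.pi, 3))
# Uniform spacing of pi/4: the eight points are evenly distributed, as in FIG. 20.
```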
Making the same considerations as in Embodiment 1 and in the above description, the following processing is performed on each symbol number. For symbol number Ni+k, where i is an integer greater than or equal to zero, k = 0, 1, . . . , N−1, and j is an imaginary unit:

$$\begin{pmatrix} z_1(Ni+k) \\ z_2(Ni+k) \end{pmatrix} = \frac{1}{\sqrt{2}}\begin{pmatrix} e^{j\theta_{11}(Ni+k)} & e^{j(\theta_{11}(Ni+k)+\lambda)} \\ e^{j\theta_{21}(Ni+k)} & e^{j(\theta_{21}(Ni+k)+\lambda+\delta)}\end{pmatrix}\begin{pmatrix} s_1(Ni+k) \\ s_2(Ni+k) \end{pmatrix} \qquad \text{(Equations 62-65)}$$

(In this and each consolidated group below, the four equation numbers list the relation for k = 0, k = 1, general k, and k = N−1, respectively.)

Accordingly, r1 and r2 are represented as follows for symbol number Ni+k (k = 0, 1, . . . , N−1):

$$\begin{pmatrix} r_1(Ni+k) \\ r_2(Ni+k) \end{pmatrix} = \frac{1}{\sqrt{2}}\begin{pmatrix} h_{11}(Ni+k) & h_{12}(Ni+k) \\ h_{21}(Ni+k) & h_{22}(Ni+k)\end{pmatrix}\begin{pmatrix} e^{j\theta_{11}(Ni+k)} & e^{j(\theta_{11}(Ni+k)+\lambda)} \\ e^{j\theta_{21}(Ni+k)} & e^{j(\theta_{21}(Ni+k)+\lambda+\delta)}\end{pmatrix}\begin{pmatrix} s_1(Ni+k) \\ s_2(Ni+k) \end{pmatrix} \qquad \text{(Equations 66-69)}$$

In this case, it is assumed that only components of direct waves exist in the channel elements h11(t), h12(t), h21(t), and h22(t), that the amplitude components of the direct waves are all equal, and that fluctuations do not occur over time. With these assumptions, Equations 66-69 can be represented as follows:

$$\begin{pmatrix} r_1(Ni+k) \\ r_2(Ni+k) \end{pmatrix} = \frac{1}{\sqrt{2}}\begin{pmatrix} A e^{j0} & q \\ A e^{j0} & q \end{pmatrix}\begin{pmatrix} e^{j\theta_{11}(Ni+k)} & e^{j(\theta_{11}(Ni+k)+\lambda)} \\ e^{j\theta_{21}(Ni+k)} & e^{j(\theta_{21}(Ni+k)+\lambda+\delta)}\end{pmatrix}\begin{pmatrix} s_1(Ni+k) \\ s_2(Ni+k) \end{pmatrix} \qquad \text{(Equations 70-73)}$$

In Equations 70-73, let A be a real number and q be a complex number. The values of A and q are determined in accordance with the positional relationship between the transmission device and the reception device. Equations 70-73 can further be represented as follows:

$$\begin{pmatrix} r_1(Ni+k) \\ r_2(Ni+k) \end{pmatrix} = \frac{1}{\sqrt{2}}\begin{pmatrix} e^{j0} \\ e^{j0} \end{pmatrix}\begin{pmatrix} A e^{j0} & q \end{pmatrix}\begin{pmatrix} e^{j\theta_{11}(Ni+k)} & e^{j(\theta_{11}(Ni+k)+\lambda)} \\ e^{j\theta_{21}(Ni+k)} & e^{j(\theta_{21}(Ni+k)+\lambda+\delta)}\end{pmatrix}\begin{pmatrix} s_1(Ni+k) \\ s_2(Ni+k) \end{pmatrix} \qquad \text{(Equations 74-77)}$$
As a result, when q is represented as follows, a signal component based on one of s1 and s2 is no longer included in r1 and r2, and therefore one of the signals s1 and s2 can no longer be obtained. For symbol number Ni+k (k = 0, 1, . . . , N−1):

$$q = -A e^{j(\theta_{11}(Ni+k)-\theta_{21}(Ni+k))}, \quad -A e^{j(\theta_{11}(Ni+k)-\theta_{21}(Ni+k)-\delta)} \qquad \text{(Equations 78-81)}$$

In this case, if q has the same solution for symbol numbers Ni through Ni+N−1, then, since the channel elements of the direct waves do not greatly fluctuate, a reception device having channel elements in which the value of q is equivalent to this same solution can no longer obtain excellent reception quality for any of the symbol numbers. Therefore, it is difficult to achieve the ability to correct errors, even if error correction codes are introduced. Accordingly, for q not to have the same solution, the following condition is necessary from Equations 78-81 when focusing on the one of the two solutions of q that does not include δ:

$$e^{j(\theta_{11}(Ni+x)-\theta_{21}(Ni+x))} \neq e^{j(\theta_{11}(Ni+y)-\theta_{21}(Ni+y))} \quad \text{for } \forall x, \forall y \;(x \neq y;\; x, y = 0, 1, 2, \ldots, N-2, N-1) \qquad \text{(Condition #3)}$$

Next, design requirements not only for θ11 and θ21 but also for λ and δ are described. It suffices to set λ to a certain value; it is then necessary to establish requirements for δ. The following describes the design scheme for δ when λ is set to zero radians. In this case, similar to the scheme of changing the precoding weights in a four-slot period (cycle), by defining δ so that π/2 radians ≤ |δ| ≤ π radians, excellent reception quality is achieved, particularly in an LOS environment.

For each of the symbol numbers Ni through Ni+N−1, two points q exist where reception quality becomes poor, and therefore 2N such points exist. In an LOS environment, in order to achieve excellent characteristics, these 2N points should each have a different solution. In this case, in addition to Condition #3, Condition #4 is necessary:

$$e^{j(\theta_{11}(Ni+x)-\theta_{21}(Ni+x))} \neq e^{j(\theta_{11}(Ni+y)-\theta_{21}(Ni+y)-\delta)} \quad \text{for } \forall x, \forall y \;(x, y = 0, 1, 2, \ldots, N-2, N-1)$$
$$\text{and} \quad e^{j(\theta_{11}(Ni+x)-\theta_{21}(Ni+x)-\delta)} \neq e^{j(\theta_{11}(Ni+y)-\theta_{21}(Ni+y)-\delta)} \quad \text{for } \forall x, \forall y \;(x \neq y;\; x, y = 0, 1, 2, \ldots, N-2, N-1) \qquad \text{(Condition #4)}$$

Additionally, the phases of these 2N points should be evenly distributed (since the phase of a direct wave at each reception device is considered to have a high probability of being evenly distributed).

As described above, when a transmission device transmits a plurality of modulated signals from a plurality of antennas in a MIMO system, the advantageous effect of improved transmission quality, as compared to a conventional spatial multiplexing MIMO system, is achieved in an LOS environment in which direct waves dominate by regularly hopping between precoding weights over time.
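Conditions #3 and #4 together say that all 2N poor-reception points are mutually distinct. A sketch of a direct check for an arbitrary N-slot phase design (the trial values in the comment are illustrative, not taken from the text):

```python
import numpy as np

def check_conditions_3_and_4(theta11, theta21, delta, tol=1e-9):
    """theta11, theta21: length-N phase tables for one period (cycle).
    Returns True when the 2N points e^{j(th11-th21)} and
    e^{j(th11-th21-delta)} are all different, which is what Conditions #3
    and #4 jointly require; even distribution of their phases can then be
    inspected separately."""
    diff = np.asarray(theta11) - np.asarray(theta21)
    pts = np.concatenate([np.exp(1j * diff), np.exp(1j * (diff - delta))])
    return all(abs(pts[x] - pts[y]) > tol
               for x in range(len(pts)) for y in range(x + 1, len(pts)))

# e.g. N = 8 with theta11 = 0, theta21[k] = 2*pi*k/8, delta = 7*pi/8 passes.
```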
In the present embodiment, the structure of the reception device is as described in Embodiment 1; in particular, although its operations have been described for a limited number of antennas, the present invention may be embodied in the same way even if the number of antennas increases. In other words, the number of antennas in the reception device does not affect the operations or advantageous effects of the present embodiment. Furthermore, in the present embodiment, as in Embodiment 1, the error correction codes are not limited.

In the present embodiment, in contrast with Embodiment 1, the scheme of changing the precoding weights in the time domain has been described. As described in Embodiment 1, however, the present invention may be similarly embodied by changing the precoding weights using a multi-carrier transmission scheme and arranging symbols in the frequency domain or the frequency-time domain. Furthermore, in the present embodiment, symbols other than data symbols, such as pilot symbols (preamble, unique word, and the like), symbols for control information, and the like, may be arranged in the frame in any way.

Embodiment 3

In Embodiment 1 and Embodiment 2, the scheme of regularly hopping between precoding weights has been described for the case where the amplitudes of all the elements in the precoding weight matrix are equal. In the present embodiment, an example that does not satisfy this condition is described. For the sake of contrast with Embodiment 2, the case of changing the precoding weights over an N-slot period (cycle) is described. Making the same considerations as in Embodiment 1 and Embodiment 2, the following processing is performed on each symbol number. Let β be a positive real number with β ≠ 1. For symbol number Ni+k, where i is an integer greater than or equal to zero, k = 0, 1, . . . , N−1, and j is an imaginary unit:

$$\begin{pmatrix} z_1(Ni+k) \\ z_2(Ni+k) \end{pmatrix} = \frac{1}{\sqrt{\beta^2+1}}\begin{pmatrix} e^{j\theta_{11}(Ni+k)} & \beta\, e^{j(\theta_{11}(Ni+k)+\lambda)} \\ \beta\, e^{j\theta_{21}(Ni+k)} & e^{j(\theta_{21}(Ni+k)+\lambda+\delta)}\end{pmatrix}\begin{pmatrix} s_1(Ni+k) \\ s_2(Ni+k) \end{pmatrix} \qquad \text{(Equations 82-85)}$$

Accordingly, r1 and r2 are represented as follows:

$$\begin{pmatrix} r_1(Ni+k) \\ r_2(Ni+k) \end{pmatrix} = \frac{1}{\sqrt{\beta^2+1}}\begin{pmatrix} h_{11}(Ni+k) & h_{12}(Ni+k) \\ h_{21}(Ni+k) & h_{22}(Ni+k)\end{pmatrix}\begin{pmatrix} e^{j\theta_{11}(Ni+k)} & \beta\, e^{j(\theta_{11}(Ni+k)+\lambda)} \\ \beta\, e^{j\theta_{21}(Ni+k)} & e^{j(\theta_{21}(Ni+k)+\lambda+\delta)}\end{pmatrix}\begin{pmatrix} s_1(Ni+k) \\ s_2(Ni+k) \end{pmatrix} \qquad \text{(Equations 86-89)}$$
In this case, it is assumed that only components of direct waves exist in the channel elements h11(t), h12(t), h21(t), and h22(t), that the amplitude components of the direct waves are all equal, and that fluctuations do not occur over time. With these assumptions, Equations 86-89 can be represented as follows, where A is a real number and q is a complex number:

$$\begin{pmatrix} r_1(Ni+k) \\ r_2(Ni+k) \end{pmatrix} = \frac{1}{\sqrt{\beta^2+1}}\begin{pmatrix} A e^{j0} & q \\ A e^{j0} & q \end{pmatrix}\begin{pmatrix} e^{j\theta_{11}(Ni+k)} & \beta\, e^{j(\theta_{11}(Ni+k)+\lambda)} \\ \beta\, e^{j\theta_{21}(Ni+k)} & e^{j(\theta_{21}(Ni+k)+\lambda+\delta)}\end{pmatrix}\begin{pmatrix} s_1(Ni+k) \\ s_2(Ni+k) \end{pmatrix} \qquad \text{(Equations 90-93)}$$

Equations 90-93 can further be represented as follows:

$$\begin{pmatrix} r_1(Ni+k) \\ r_2(Ni+k) \end{pmatrix} = \frac{1}{\sqrt{\beta^2+1}}\begin{pmatrix} e^{j0} \\ e^{j0} \end{pmatrix}\begin{pmatrix} A e^{j0} & q \end{pmatrix}\begin{pmatrix} e^{j\theta_{11}(Ni+k)} & \beta\, e^{j(\theta_{11}(Ni+k)+\lambda)} \\ \beta\, e^{j\theta_{21}(Ni+k)} & e^{j(\theta_{21}(Ni+k)+\lambda+\delta)}\end{pmatrix}\begin{pmatrix} s_1(Ni+k) \\ s_2(Ni+k) \end{pmatrix} \qquad \text{(Equations 94-97)}$$

As a result, when q is represented as follows, one of the signals s1 and s2 can no longer be obtained. For symbol number Ni+k (k = 0, 1, . . . , N−1):

$$q = -\frac{A}{\beta}\, e^{j(\theta_{11}(Ni+k)-\theta_{21}(Ni+k))}, \quad -A\beta\, e^{j(\theta_{11}(Ni+k)-\theta_{21}(Ni+k)-\delta)} \qquad \text{(Equations 98-101)}$$

In this case, if q has the same solution for symbol numbers Ni through Ni+N−1, then, since the channel elements of the direct waves do not greatly fluctuate, excellent reception quality can no longer be obtained for any of the symbol numbers. Therefore, it is difficult to achieve the ability to correct errors, even if error correction codes are introduced.
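A sketch of one slot of this unequal-amplitude precoding, together with the two q points of Equations 98-101 at which reception quality collapses (all arguments are per-slot values; A is the direct-wave amplitude of the model above):

```python
import numpy as np

def weight_matrix(th11, th21, lam, delta, beta):
    """The slot matrix of Equations 82-85, with element amplitudes 1 and beta."""
    return np.array([
        [np.exp(1j * th11), beta * np.exp(1j * (th11 + lam))],
        [beta * np.exp(1j * th21), np.exp(1j * (th21 + lam + delta))],
    ]) / np.sqrt(beta ** 2 + 1)

def poor_reception_points(th11, th21, delta, A, beta):
    """The two q values of Equations 98-101; unlike the equal-amplitude
    case, their magnitudes A/beta and A*beta now differ."""
    return (-(A / beta) * np.exp(1j * (th11 - th21)),
            -A * beta * np.exp(1j * (th11 - th21 - delta)))
```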
Accordingly, for q not to have the same solution, the following condition is necessary from Equations 98-101 when focusing on the one of the two solutions of q that does not include δ:

$$e^{j(\theta_{11}(Ni+x)-\theta_{21}(Ni+x))} \neq e^{j(\theta_{11}(Ni+y)-\theta_{21}(Ni+y))} \quad \text{for } \forall x, \forall y \;(x \neq y;\; x, y = 0, 1, 2, \ldots, N-2, N-1) \qquad \text{(Condition #5)}$$

Next, design requirements not only for θ11 and θ21 but also for λ and δ are described. It suffices to set λ to a certain value; it is then necessary to establish requirements for δ. The following describes the design scheme for δ when λ is set to zero radians. In this case, similar to the scheme of changing the precoding weights in a four-slot period (cycle), by defining δ so that π/2 radians ≤ |δ| ≤ π radians, excellent reception quality is achieved, particularly in an LOS environment.

For each of the symbol numbers Ni through Ni+N−1, two points q exist where reception quality becomes poor, and therefore 2N such points exist. In an LOS environment, in order to achieve excellent characteristics, these 2N points should each have a different solution. In this case, in addition to Condition #5, and considering that β is a positive real number with β ≠ 1, Condition #6 is necessary:

$$e^{j(\theta_{11}(Ni+x)-\theta_{21}(Ni+x)-\delta)} \neq e^{j(\theta_{11}(Ni+y)-\theta_{21}(Ni+y)-\delta)} \quad \text{for } \forall x, \forall y \;(x \neq y;\; x, y = 0, 1, 2, \ldots, N-2, N-1) \qquad \text{(Condition #6)}$$

As described above, when a transmission device transmits a plurality of modulated signals from a plurality of antennas in a MIMO system, the advantageous effect of improved transmission quality, as compared to a conventional spatial multiplexing MIMO system, is achieved in an LOS environment in which direct waves dominate by regularly hopping between precoding weights over time.

In the present embodiment, the structure of the reception device is as described in Embodiment 1; in particular, although its operations have been described for a limited number of antennas, the present invention may be embodied in the same way even if the number of antennas increases. In other words, the number of antennas in the reception device does not affect the operations or advantageous effects of the present embodiment. Furthermore, in the present embodiment, as in Embodiment 1, the error correction codes are not limited.

In the present embodiment, in contrast with Embodiment 1, the scheme of changing the precoding weights in the time domain has been described. As described in Embodiment 1, however, the present invention may be similarly embodied by changing the precoding weights using a multi-carrier transmission scheme and arranging symbols in the frequency domain or the frequency-time domain. Furthermore, in the present embodiment, symbols other than data symbols, such as pilot symbols (preamble, unique word, and the like), symbols for control information, and the like, may be arranged in the frame in any way.

Embodiment 4

In Embodiment 3, the scheme of regularly hopping between precoding weights has been described for the example of two amplitude values, 1 and β, for the elements in the precoding weight matrix (here the normalization factor $1/\sqrt{\beta^2+1}$ is ignored). Next, an example in which the value of β changes by slot is described. For the sake of contrast with Embodiment 3, the case of changing the precoding weights over a 2N-slot period (cycle) is described. Making the same considerations as in Embodiment 1, Embodiment 2, and Embodiment 3, the following processing is performed on each symbol number.
Let β be a positive real number with β ≠ 1, and let α be a positive real number with α ≠ β. For symbol number 2Ni+k, where i is an integer greater than or equal to zero, k = 0, 1, . . . , N−1, and j is an imaginary unit:

$$\begin{pmatrix} z_1(2Ni+k) \\ z_2(2Ni+k) \end{pmatrix} = \frac{1}{\sqrt{\beta^2+1}}\begin{pmatrix} e^{j\theta_{11}(2Ni+k)} & \beta\, e^{j(\theta_{11}(2Ni+k)+\lambda)} \\ \beta\, e^{j\theta_{21}(2Ni+k)} & e^{j(\theta_{21}(2Ni+k)+\lambda+\delta)}\end{pmatrix}\begin{pmatrix} s_1(2Ni+k) \\ s_2(2Ni+k) \end{pmatrix} \qquad \text{(Equations 102-105)}$$

For symbol number 2Ni+N+k (k = 0, 1, . . . , N−1), the amplitude α is used instead:

$$\begin{pmatrix} z_1(2Ni+N+k) \\ z_2(2Ni+N+k) \end{pmatrix} = \frac{1}{\sqrt{\alpha^2+1}}\begin{pmatrix} e^{j\theta_{11}(2Ni+N+k)} & \alpha\, e^{j(\theta_{11}(2Ni+N+k)+\lambda)} \\ \alpha\, e^{j\theta_{21}(2Ni+N+k)} & e^{j(\theta_{21}(2Ni+N+k)+\lambda+\delta)}\end{pmatrix}\begin{pmatrix} s_1(2Ni+N+k) \\ s_2(2Ni+N+k) \end{pmatrix} \qquad \text{(Equations 106-109)}$$

Accordingly, r1 and r2 are represented as follows. For symbol number 2Ni+k (k = 0, 1, . . . , N−1):

$$\begin{pmatrix} r_1(2Ni+k) \\ r_2(2Ni+k) \end{pmatrix} = \frac{1}{\sqrt{\beta^2+1}}\begin{pmatrix} h_{11}(2Ni+k) & h_{12}(2Ni+k) \\ h_{21}(2Ni+k) & h_{22}(2Ni+k)\end{pmatrix}\begin{pmatrix} e^{j\theta_{11}(2Ni+k)} & \beta\, e^{j(\theta_{11}(2Ni+k)+\lambda)} \\ \beta\, e^{j\theta_{21}(2Ni+k)} & e^{j(\theta_{21}(2Ni+k)+\lambda+\delta)}\end{pmatrix}\begin{pmatrix} s_1(2Ni+k) \\ s_2(2Ni+k) \end{pmatrix} \qquad \text{(Equations 110-113)}$$

For symbol number 2Ni+N+k (k = 0, 1, . . . , N−1):

$$\begin{pmatrix} r_1(2Ni+N+k) \\ r_2(2Ni+N+k) \end{pmatrix} = \frac{1}{\sqrt{\alpha^2+1}}\begin{pmatrix} h_{11}(2Ni+N+k) & h_{12}(2Ni+N+k) \\ h_{21}(2Ni+N+k) & h_{22}(2Ni+N+k)\end{pmatrix}\begin{pmatrix} e^{j\theta_{11}(2Ni+N+k)} & \alpha\, e^{j(\theta_{11}(2Ni+N+k)+\lambda)} \\ \alpha\, e^{j\theta_{21}(2Ni+N+k)} & e^{j(\theta_{21}(2Ni+N+k)+\lambda+\delta)}\end{pmatrix}\begin{pmatrix} s_1(2Ni+N+k) \\ s_2(2Ni+N+k) \end{pmatrix} \qquad \text{(Equations 114-117)}$$
In this case, it is assumed that only components of direct waves exist in the channel elements h11(t), h12(t), h21(t), and h22(t), that the amplitude components of the direct waves are all equal, and that fluctuations do not occur over time. With these assumptions, Equations 110-117 can be represented as follows, where A is a real number and q is a complex number. For symbol number 2Ni+k (k = 0, 1, . . . , N−1):

$$\begin{pmatrix} r_1(2Ni+k) \\ r_2(2Ni+k) \end{pmatrix} = \frac{1}{\sqrt{\beta^2+1}}\begin{pmatrix} A e^{j0} & q \\ A e^{j0} & q \end{pmatrix}\begin{pmatrix} e^{j\theta_{11}(2Ni+k)} & \beta\, e^{j(\theta_{11}(2Ni+k)+\lambda)} \\ \beta\, e^{j\theta_{21}(2Ni+k)} & e^{j(\theta_{21}(2Ni+k)+\lambda+\delta)}\end{pmatrix}\begin{pmatrix} s_1(2Ni+k) \\ s_2(2Ni+k) \end{pmatrix} \qquad \text{(Equations 118-121)}$$

For symbol number 2Ni+N+k (k = 0, 1, . . . , N−1):

$$\begin{pmatrix} r_1(2Ni+N+k) \\ r_2(2Ni+N+k) \end{pmatrix} = \frac{1}{\sqrt{\alpha^2+1}}\begin{pmatrix} A e^{j0} & q \\ A e^{j0} & q \end{pmatrix}\begin{pmatrix} e^{j\theta_{11}(2Ni+N+k)} & \alpha\, e^{j(\theta_{11}(2Ni+N+k)+\lambda)} \\ \alpha\, e^{j\theta_{21}(2Ni+N+k)} & e^{j(\theta_{21}(2Ni+N+k)+\lambda+\delta)}\end{pmatrix}\begin{pmatrix} s_1(2Ni+N+k) \\ s_2(2Ni+N+k) \end{pmatrix} \qquad \text{(Equations 122-125)}$$

Equations 118-125 can further be represented as follows.
For symbol number 2Ni+k (k = 0, 1, . . . , N−1):

$$\begin{pmatrix} r_1(2Ni+k) \\ r_2(2Ni+k) \end{pmatrix} = \frac{1}{\sqrt{\beta^2+1}}\begin{pmatrix} e^{j0} \\ e^{j0} \end{pmatrix}\begin{pmatrix} A e^{j0} & q \end{pmatrix}\begin{pmatrix} e^{j\theta_{11}(2Ni+k)} & \beta\, e^{j(\theta_{11}(2Ni+k)+\lambda)} \\ \beta\, e^{j\theta_{21}(2Ni+k)} & e^{j(\theta_{21}(2Ni+k)+\lambda+\delta)}\end{pmatrix}\begin{pmatrix} s_1(2Ni+k) \\ s_2(2Ni+k) \end{pmatrix} \qquad \text{(Equations 126-129)}$$

For symbol number 2Ni+N+k (k = 0, 1, . . . , N−1):

$$\begin{pmatrix} r_1(2Ni+N+k) \\ r_2(2Ni+N+k) \end{pmatrix} = \frac{1}{\sqrt{\alpha^2+1}}\begin{pmatrix} e^{j0} \\ e^{j0} \end{pmatrix}\begin{pmatrix} A e^{j0} & q \end{pmatrix}\begin{pmatrix} e^{j\theta_{11}(2Ni+N+k)} & \alpha\, e^{j(\theta_{11}(2Ni+N+k)+\lambda)} \\ \alpha\, e^{j\theta_{21}(2Ni+N+k)} & e^{j(\theta_{21}(2Ni+N+k)+\lambda+\delta)}\end{pmatrix}\begin{pmatrix} s_1(2Ni+N+k) \\ s_2(2Ni+N+k) \end{pmatrix} \qquad \text{(Equations 130-133)}$$

As a result, when q is represented as follows, one of the signals s1 and s2 can no longer be obtained. For symbol number 2Ni+k (k = 0, 1, . . . , N−1):

$$q = -\frac{A}{\beta}\, e^{j(\theta_{11}(2Ni+k)-\theta_{21}(2Ni+k))}, \quad -A\beta\, e^{j(\theta_{11}(2Ni+k)-\theta_{21}(2Ni+k)-\delta)} \qquad \text{(Equations 134-137)}$$

For symbol number 2Ni+N+k (k = 0, 1, . . . , N−1):

$$q = -\frac{A}{\alpha}\, e^{j(\theta_{11}(2Ni+N+k)-\theta_{21}(2Ni+N+k))}, \quad -A\alpha\, e^{j(\theta_{11}(2Ni+N+k)-\theta_{21}(2Ni+N+k)-\delta)} \qquad \text{(Equations 138-141)}$$
In this case, if q has the same solution for symbol numbers 2Ni through 2Ni+2N−1, then, since the channel elements of the direct waves do not greatly fluctuate, excellent reception quality can no longer be obtained for any of the symbol numbers. Therefore, it is difficult to achieve the ability to correct errors, even if error correction codes are introduced. Accordingly, for q not to have the same solution, Condition #7 or Condition #8 becomes necessary from Equations 134-141 and from the fact that α ≠ β, when focusing on the one of the two solutions of q that does not include δ:

$$e^{j(\theta_{11}(2Ni+x)-\theta_{21}(2Ni+x))} \neq e^{j(\theta_{11}(2Ni+y)-\theta_{21}(2Ni+y))} \;\text{ and }\; e^{j(\theta_{11}(2Ni+N+x)-\theta_{21}(2Ni+N+x))} \neq e^{j(\theta_{11}(2Ni+N+y)-\theta_{21}(2Ni+N+y))} \quad \text{for } \forall x, \forall y \;(x \neq y;\; x, y = 0, 1, 2, \ldots, N-2, N-1) \qquad \text{(Condition #7)}$$

$$e^{j(\theta_{11}(2Ni+x)-\theta_{21}(2Ni+x))} \neq e^{j(\theta_{11}(2Ni+y)-\theta_{21}(2Ni+y))} \quad \text{for } \forall x, \forall y \;(x \neq y;\; x, y = 0, 1, 2, \ldots, 2N-2, 2N-1) \qquad \text{(Condition #8)}$$

In this case, Condition #8 is similar to the conditions described in Embodiment 1 through Embodiment 3. With regards to Condition #7, however, since α ≠ β, the solution of q not including δ takes different amplitudes (A/β and A/α) in the two halves of the period (cycle), and is therefore automatically a different solution across them.

Next, design requirements not only for θ11 and θ21 but also for λ and δ are described. It suffices to set λ to a certain value; it is then necessary to establish requirements for δ. The following describes the design scheme for δ when λ is set to zero radians. In this case, similar to the scheme of changing the precoding weights in a four-slot period (cycle), by defining δ so that π/2 radians ≤ |δ| ≤ π radians, excellent reception quality is achieved, particularly in an LOS environment.

For the symbol numbers 2Ni through 2Ni+2N−1, two points q exist where reception quality becomes poor, and therefore 4N such points exist. In an LOS environment, in order to achieve excellent characteristics, these 4N points should each have a different solution. In this case, focusing on amplitude, the following condition is necessary in addition to Condition #7 or Condition #8, since α ≠ β (the δ-free points have amplitudes A/β and A/α, while the δ-including points have amplitudes Aβ and Aα, and these must not coincide):

$$\alpha \neq \frac{1}{\beta} \qquad \text{(Condition #9)}$$

As described above, when a transmission device transmits a plurality of modulated signals from a plurality of antennas in a MIMO system, the advantageous effect of improved transmission quality, as compared to a conventional spatial multiplexing MIMO system, is achieved in an LOS environment in which direct waves dominate by regularly hopping between precoding weights over time.

In the present embodiment, the structure of the reception device is as described in Embodiment 1; in particular, although its operations have been described for a limited number of antennas, the present invention may be embodied in the same way even if the number of antennas increases. In other words, the number of antennas in the reception device does not affect the operations or advantageous effects of the present embodiment. Furthermore, in the present embodiment, as in Embodiment 1, the error correction codes are not limited.
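A sketch of the 2N-slot schedule of this embodiment, with a helper for Condition #9; the phase tables are placeholders to be chosen under Conditions #7 and #8.

```python
import numpy as np

def weight_for_slot(t, theta11, theta21, lam, delta, beta, alpha, N):
    """Precoding matrix for slot t in a 2N-slot period (cycle): the first N
    slots use amplitude beta (Equations 102-105), the next N use alpha
    (Equations 106-109). theta11/theta21 are length-2N phase tables."""
    k = t % (2 * N)
    amp = beta if k < N else alpha
    return np.array([
        [np.exp(1j * theta11[k]), amp * np.exp(1j * (theta11[k] + lam))],
        [amp * np.exp(1j * theta21[k]), np.exp(1j * (theta21[k] + lam + delta))],
    ]) / np.sqrt(amp ** 2 + 1)

def satisfies_condition_9(alpha, beta, tol=1e-9):
    """Condition #9 (alpha != beta is assumed throughout this embodiment)."""
    return abs(alpha - 1.0 / beta) > tol
```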
As described above, when a transmission device transmits a plurality of modulated signals from a plurality of antennas in a MIMO system, the advantageous effect of improved transmission quality in an LOS environment in which direct waves dominate, as compared to a conventional spatial multiplexing MIMO system, is achieved by regularly hopping between precoding weights over time. In the present embodiment, the structure of the reception device is as described in Embodiment 1; although its operations have been described for a limited number of antennas, the present invention may be embodied in the same way even if the number of antennas increases. In other words, the number of antennas in the reception device does not affect the operations or advantageous effects of the present embodiment. Furthermore, in the present embodiment, similar to Embodiment 1, the error correction codes are not limited.

In the present embodiment, in contrast with Embodiment 1, the scheme of changing the precoding weights in the time domain has been described. As described in Embodiment 1, however, the present invention may be similarly embodied by changing the precoding weights by using a multi-carrier transmission scheme and arranging symbols in the frequency domain and the frequency-time domain. Furthermore, in the present embodiment, symbols other than data symbols, such as pilot symbols (preamble, unique word, and the like), symbols for control information, and the like, may be arranged in the frame in any way.

Embodiment 5

In Embodiment 1 through Embodiment 4, the scheme of regularly hopping between precoding weights has been described. In the present embodiment, a modification of this scheme is described.

In Embodiment 1 through Embodiment 4, the scheme of regularly hopping between precoding weights as in FIG. 6 has been described. In the present embodiment, a scheme of regularly hopping between precoding weights that differs from FIG. 6 is described. As in FIG. 6, this scheme hops between four different precoding weights (matrices). FIG. 22 shows the hopping scheme that differs from FIG. 6. In FIG. 22, the four different precoding weights (matrices) are represented as W1, W2, W3, and W4. (For example, W1 is the precoding weight (matrix) in Equation 37, W2 is the precoding weight (matrix) in Equation 38, W3 is the precoding weight (matrix) in Equation 39, and W4 is the precoding weight (matrix) in Equation 40.) In FIG. 22, elements that operate in a similar way to FIG. 3 and FIG. 6 bear the same reference signs. The parts unique to FIG. 22 are as follows: the first period (cycle) 2201, the second period (cycle) 2202, the third period (cycle) 2203, . . . are all four-slot periods (cycles), and a different precoding weight matrix is used in each of the four slots, i.e. W1, W2, W3, and W4 are each used once. It is not necessary for W1, W2, W3, and W4 to be in the same order in the first period (cycle) 2201, the second period (cycle) 2202, the third period (cycle) 2203, and so forth. In order to implement this scheme, a precoding weight generating unit 2200 receives, as an input, a signal regarding a weighting scheme and outputs information 2210 regarding the precoding weights in order for each period (cycle). The weighting unit 600 receives, as inputs, this information, s1(t), and s2(t), performs weighting, and outputs z1(t) and z2(t).

FIG. 23 shows a weighting scheme that differs from FIG. 22 for the above precoding scheme. In FIG. 23, the difference from FIG. 22 is that a similar scheme to FIG. 22 is achieved by providing a reordering unit after the weighting unit and reordering the signals. In FIG. 23, the precoding weight generating unit 2200 receives, as an input, information 315 regarding a weighting scheme and outputs information 2210 on precoding weights in the order W1, W2, W3, W4, W1, W2, W3, W4, . . . . Accordingly, the weighting unit 600 uses the precoding weights in the order W1, W2, W3, W4, W1, W2, W3, W4, . . . and outputs precoded signals 2300A and 2300B. A reordering unit 2300 receives, as inputs, the precoded signals 2300A and 2300B, reorders them in the order of the first period (cycle) 2201, the second period (cycle) 2202, and the third period (cycle) 2203 in FIG. 23, and outputs z1(t) and z2(t).
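As an illustration of the hopping structure of FIG. 22, the following sketch cycles four weighting matrices over four-slot periods (cycles). Since Equations 37-40 are not reproduced in this section, the matrices W1-W4 below are stand-ins of the same general form, not the matrices of those equations.

    import numpy as np

    # Stand-in precoding weight matrices W1..W4 (assumption: unitary matrices
    # of the family used elsewhere in this description, not Equations 37-40).
    W = [(1 / np.sqrt(2)) * np.array([[1, 1],
                                      [np.exp(1j * k * np.pi / 2),
                                       -np.exp(1j * k * np.pi / 2)]])
         for k in range(4)]

    def precode_stream(s1, s2, order_per_cycle):
        # order_per_cycle: which of W1..W4 to use in each slot of the
        # four-slot period (cycle); the order may also differ per cycle.
        z1, z2 = [], []
        for t, (a, b) in enumerate(zip(s1, s2)):
            z = W[order_per_cycle[t % 4]] @ np.array([a, b])
            z1.append(z[0]); z2.append(z[1])
        return np.array(z1), np.array(z2)

    s1 = np.ones(8, dtype=complex)          # dummy mapped symbols
    s2 = -np.ones(8, dtype=complex)
    z1, z2 = precode_stream(s1, s2, [0, 1, 2, 3])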
Note that in the above description, the period (cycle) for hopping between precoding weights has been described as having four slots, for the sake of comparison with FIG. 6. As in Embodiment 1 through Embodiment 4, however, the present invention may be similarly embodied with a period (cycle) having other than four slots. Furthermore, in Embodiment 1 through Embodiment 4 and in the above precoding scheme, the values of δ and β have been described as being the same for each slot within the period (cycle), but the values of δ and β may change in each slot.

As described above, when a transmission device transmits a plurality of modulated signals from a plurality of antennas in a MIMO system, the advantageous effect of improved transmission quality in an LOS environment in which direct waves dominate, as compared to a conventional spatial multiplexing MIMO system, is achieved by regularly hopping between precoding weights over time. In the present embodiment, the structure of the reception device is as described in Embodiment 1; although its operations have been described for a limited number of antennas, the present invention may be embodied in the same way even if the number of antennas increases. In other words, the number of antennas in the reception device does not affect the operations or advantageous effects of the present embodiment. Furthermore, in the present embodiment, similar to Embodiment 1, the error correction codes are not limited.

In the present embodiment, in contrast with Embodiment 1, the scheme of changing the precoding weights in the time domain has been described. As described in Embodiment 1, however, the present invention may be similarly embodied by changing the precoding weights by using a multi-carrier transmission scheme and arranging symbols in the frequency domain and the frequency-time domain. Furthermore, in the present embodiment, symbols other than data symbols, such as pilot symbols (preamble, unique word, and the like), symbols for control information, and the like, may be arranged in the frame in any way.

Embodiment 6

In Embodiments 1-4, a scheme for regularly hopping between precoding weights has been described. In the present embodiment, a scheme for regularly hopping between precoding weights is again described, including the content that has been described in Embodiments 1-4.

First, out of consideration of an LOS environment, a scheme of designing a precoding matrix is described for a 2×2 spatial multiplexing MIMO system that adopts precoding in which feedback from a communication partner is not available. FIG. 30 shows a model of a 2×2 spatial multiplexing MIMO system that adopts precoding in which feedback from a communication partner is not available. An information vector z is encoded and interleaved. As output of the interleaving, an encoded bit vector u(p) = (u1(p), u2(p)) is acquired (where p is the slot time). Let ui(p) = (ui1(p), . . . , uih(p)) (where h is the number of transmission bits per symbol). Letting a signal after modulation (mapping) be s(p) = (s1(p), s2(p))^T and a precoding matrix be F(p), a precoded symbol x(p) = (x1(p), x2(p))^T is represented by the following equation.

Math 152: \( x(p) = (x_1(p), x_2(p))^T = F(p)\, s(p) \)  (Equation 142)

Accordingly, letting a received vector be y(p) = (y1(p), y2(p))^T, the received vector y(p) is represented by the following equation.

Math 153: \( y(p) = (y_1(p), y_2(p))^T = H(p)\, F(p)\, s(p) + n(p) \)  (Equation 143)

In this equation, H(p) is the channel matrix, n(p) = (n1(p), n2(p))^T is the noise vector, and ni(p) is i.i.d. complex Gaussian random noise with an average value 0 and variance σ².
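A minimal numerical sketch of Equations 142 and 143 follows; the symbol values, channel matrix, precoder, and noise variance used at the bottom are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    def transmit_slot(F, s, H, sigma2):
        # x(p) = F(p) s(p)                      (Equation 142)
        # y(p) = H(p) F(p) s(p) + n(p)          (Equation 143)
        x = F @ s
        n = np.sqrt(sigma2 / 2) * (rng.standard_normal(2) + 1j * rng.standard_normal(2))
        return H @ x + n

    # Illustrative values (assumptions): QPSK symbols, an identity channel,
    # and a fixed unitary precoder.
    s = np.array([1 + 1j, 1 - 1j]) / np.sqrt(2)
    F = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    y = transmit_slot(F, s, np.eye(2), sigma2=0.01)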
Letting the Rician factor be K, the above equation can be represented as follows.

Math 154: \( y(p) = (y_1(p), y_2(p))^T = \left( \sqrt{\tfrac{K}{K+1}}\, H_d(p) + \sqrt{\tfrac{1}{K+1}}\, H_s(p) \right) F(p)\, s(p) + n(p) \)  (Equation 144)

In this equation, Hd(p) is the channel matrix for the direct wave components, and Hs(p) is the channel matrix for the scattered wave components. Accordingly, the channel matrix H(p) is represented as follows.

Math 155: \( H(p) = \sqrt{\tfrac{K}{K+1}}\, H_d(p) + \sqrt{\tfrac{1}{K+1}}\, H_s(p) = \sqrt{\tfrac{K}{K+1}} \begin{pmatrix} h_{11,d} & h_{12,d} \\ h_{21,d} & h_{22,d} \end{pmatrix} + \sqrt{\tfrac{1}{K+1}} \begin{pmatrix} h_{11,s}(p) & h_{12,s}(p) \\ h_{21,s}(p) & h_{22,s}(p) \end{pmatrix} \)  (Equation 145)

In Equation 145, it is assumed that the direct wave environment is uniquely determined by the positional relationship between the transmission and reception devices, and that the channel matrix Hd(p) for the direct wave components does not fluctuate with time. Furthermore, in the channel matrix Hd(p) for the direct wave components, it is assumed that, as compared to the interval between transmit antennas, the distance between the transmission and reception devices is with high probability sufficiently long, and therefore that the channel matrix for the direct wave components can be treated as a non-singular matrix. Accordingly, the channel matrix Hd(p) is represented as follows.

Math 156: \( H_d(p) = \begin{pmatrix} h_{11,d} & h_{12,d} \\ h_{21,d} & h_{22,d} \end{pmatrix} = \begin{pmatrix} Ae^{j\psi} & q \\ Ae^{j\psi} & q \end{pmatrix} \)  (Equation 146)

In this equation, let A be a positive real number and q be a complex number. Subsequently, out of consideration of an LOS environment, a scheme of designing a precoding matrix is described for a 2×2 spatial multiplexing MIMO system that adopts precoding in which feedback from a communication partner is not available.

From Equations 144 and 145, it is difficult to seek a precoding matrix without appropriate feedback in conditions including scattered waves, since it is difficult to perform analysis under such conditions. Additionally, in an NLOS environment, little degradation in reception quality of data occurs as compared to an LOS environment. Therefore, the following describes a scheme of designing precoding matrices without appropriate feedback in an LOS environment (precoding matrices for a precoding scheme that hops between precoding matrices over time). As described above, since it is difficult to perform analysis under conditions including scattered waves, an appropriate precoding matrix for a channel matrix including components of only direct waves is sought from Equations 144 and 145. Therefore, in Equation 144, the case when the channel matrix includes components of only direct waves is considered. It follows that, from Equation 146, Equation 144 can be represented as follows.

Math 157: \( \begin{pmatrix} y_1(p) \\ y_2(p) \end{pmatrix} = H_d(p)\, F(p)\, s(p) + n(p) = \begin{pmatrix} Ae^{j\psi} & q \\ Ae^{j\psi} & q \end{pmatrix} F(p)\, s(p) + n(p) \)  (Equation 147)

In this equation, a unitary matrix is used as the precoding matrix. Accordingly, the precoding matrix is represented as follows.

Math 158: \( F(p) = \frac{1}{\sqrt{\alpha^2+1}} \begin{pmatrix} e^{j\theta_{11}(p)} & \alpha e^{j(\theta_{11}(p)+\lambda)} \\ \alpha e^{j\theta_{21}(p)} & e^{j(\theta_{21}(p)+\lambda+\pi)} \end{pmatrix} \)  (Equation 148)

In this equation, λ is a fixed value. Therefore, Equation 147 can be represented as follows.

Math 159: \( \begin{pmatrix} y_1(p) \\ y_2(p) \end{pmatrix} = \frac{1}{\sqrt{\alpha^2+1}} \begin{pmatrix} Ae^{j\psi} & q \\ Ae^{j\psi} & q \end{pmatrix} \begin{pmatrix} e^{j\theta_{11}(p)} & \alpha e^{j(\theta_{11}(p)+\lambda)} \\ \alpha e^{j\theta_{21}(p)} & e^{j(\theta_{21}(p)+\lambda+\pi)} \end{pmatrix} \begin{pmatrix} s_1(p) \\ s_2(p) \end{pmatrix} + n(p) \)  (Equation 149)
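The following sketch builds the direct-wave channel matrix of Equation 146 and the precoder of Equation 148, and confirms numerically that Hd has rank one (both of its rows are identical); the values of A, q, and the phases are illustrative assumptions.

    import numpy as np

    def direct_wave_channel(A, q, psi=0.0):
        # Hd of Equation 146: both rows equal, so the matrix is rank one
        return np.array([[A * np.exp(1j * psi), q],
                         [A * np.exp(1j * psi), q]])

    def precoder(theta11, theta21, alpha, lam):
        # F(p) of Equation 148 (unitary form, i.e. delta = pi)
        c = 1 / np.sqrt(alpha**2 + 1)
        return c * np.array([
            [np.exp(1j * theta11), alpha * np.exp(1j * (theta11 + lam))],
            [alpha * np.exp(1j * theta21), np.exp(1j * (theta21 + lam + np.pi))]])

    Hd = direct_wave_channel(A=1.0, q=0.3 + 0.4j)   # A, q: illustrative assumptions
    F = precoder(theta11=0.0, theta21=np.pi / 4, alpha=1.0, lam=0.0)
    print(np.linalg.matrix_rank(Hd))                # 1: direct waves alone are rank one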
As is clear from Equation 149, when the reception device performs a linear operation such as Zero Forcing (ZF) or Minimum Mean Squared Error (MMSE) detection, the bits transmitted by s1(p) and s2(p) cannot be determined, because the channel matrix of the direct waves alone is a rank-one matrix. Therefore, the iterative APP (or iterative Max-log APP) or APP (or Max-log APP) described in Embodiment 1 is performed (hereinafter referred to as Maximum Likelihood (ML) calculation), the log-likelihood ratio of each bit transmitted in s1(p) and s2(p) is sought, and decoding with error correction codes is performed. Accordingly, the following describes a scheme of designing a precoding matrix without appropriate feedback in an LOS environment for a reception device that performs ML calculation.

The precoding in Equation 149 is considered. The right-hand side and left-hand side of the first line are multiplied by e^{−jψ}, and similarly the right-hand side and left-hand side of the second line are multiplied by e^{−jψ}. The following equation represents the result.

Math 160: \( \begin{pmatrix} e^{-j\psi} y_1(p) \\ e^{-j\psi} y_2(p) \end{pmatrix} = e^{-j\psi} \left\{ \frac{1}{\sqrt{\alpha^2+1}} \begin{pmatrix} Ae^{j\psi} & q \\ Ae^{j\psi} & q \end{pmatrix} \begin{pmatrix} e^{j\theta_{11}(p)} & \alpha e^{j(\theta_{11}(p)+\lambda)} \\ \alpha e^{j\theta_{21}(p)} & e^{j(\theta_{21}(p)+\lambda+\pi)} \end{pmatrix} \begin{pmatrix} s_1(p) \\ s_2(p) \end{pmatrix} + n(p) \right\} = \frac{1}{\sqrt{\alpha^2+1}} \begin{pmatrix} Ae^{j0} & e^{-j\psi} q \\ Ae^{j0} & e^{-j\psi} q \end{pmatrix} \begin{pmatrix} e^{j\theta_{11}(p)} & \alpha e^{j(\theta_{11}(p)+\lambda)} \\ \alpha e^{j\theta_{21}(p)} & e^{j(\theta_{21}(p)+\lambda+\pi)} \end{pmatrix} \begin{pmatrix} s_1(p) \\ s_2(p) \end{pmatrix} + e^{-j\psi} n(p) \)  (Equation 150)

Then e^{−jψ}y1(p), e^{−jψ}y2(p), and e^{−jψ}q are respectively redefined as y1(p), y2(p), and q. Furthermore, since e^{−jψ}n(p) = (e^{−jψ}n1(p), e^{−jψ}n2(p))^T, and e^{−jψ}n1(p), e^{−jψ}n2(p) are independent identically distributed (i.i.d.) complex Gaussian random noise with an average value 0 and variance σ², e^{−jψ}n(p) is redefined as n(p). As a result, generality is not lost by restating Equation 150 as Equation 151.

Math 161: \( \begin{pmatrix} y_1(p) \\ y_2(p) \end{pmatrix} = \frac{1}{\sqrt{\alpha^2+1}} \begin{pmatrix} Ae^{j0} & q \\ Ae^{j0} & q \end{pmatrix} \begin{pmatrix} e^{j\theta_{11}(p)} & \alpha e^{j(\theta_{11}(p)+\lambda)} \\ \alpha e^{j\theta_{21}(p)} & e^{j(\theta_{21}(p)+\lambda+\pi)} \end{pmatrix} \begin{pmatrix} s_1(p) \\ s_2(p) \end{pmatrix} + n(p) \)  (Equation 151)

Next, Equation 151 is transformed into Equation 152 for the sake of clarity.

Math 162: \( \begin{pmatrix} y_1(p) \\ y_2(p) \end{pmatrix} = \frac{1}{\sqrt{\alpha^2+1}} \begin{pmatrix} e^{j0} \\ e^{j0} \end{pmatrix} \begin{pmatrix} Ae^{j0} & q \end{pmatrix} \begin{pmatrix} e^{j\theta_{11}(p)} & \alpha e^{j(\theta_{11}(p)+\lambda)} \\ \alpha e^{j\theta_{21}(p)} & e^{j(\theta_{21}(p)+\lambda+\pi)} \end{pmatrix} \begin{pmatrix} s_1(p) \\ s_2(p) \end{pmatrix} + n(p) \)  (Equation 152)

In this case, letting the minimum Euclidian distance between a received signal point and a received candidate signal point be dmin², two values of q exist at which reception is poor in that dmin² attains the minimum value of zero, that is, at which all of the bits transmitted by s1(p), or all of the bits transmitted by s2(p), are eliminated.

In Equation 152, when s1(p) does not exist:

Math 163: \( q = -\frac{A}{\alpha}\, e^{j(\theta_{11}(p)-\theta_{21}(p))} \)  (Equation 153)

In Equation 152, when s2(p) does not exist:

Math 164: \( q = -A\alpha\, e^{j(\theta_{11}(p)-\theta_{21}(p)-\pi)} \)  (Equation 154)

(Hereinafter, the values of q satisfying Equations 153 and 154 are respectively referred to as "poor reception points for s1 and s2".)

When Equation 153 is satisfied, since all of the bits transmitted by s1(p) are eliminated, the received log-likelihood ratio cannot be sought for any of the bits transmitted by s1(p). When Equation 154 is satisfied, since all of the bits transmitted by s2(p) are eliminated, the received log-likelihood ratio cannot be sought for any of the bits transmitted by s2(p).
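The poor reception points of Equations 153 and 154 can be computed directly, as in the following sketch; the parameter values at the bottom are illustrative assumptions.

    import numpy as np

    def poor_reception_points(A, alpha, theta11, theta21):
        # Equation 153: q at which all bits carried by s1(p) are eliminated
        q_s1 = -(A / alpha) * np.exp(1j * (theta11 - theta21))
        # Equation 154: q at which all bits carried by s2(p) are eliminated
        q_s2 = -A * alpha * np.exp(1j * (theta11 - theta21 - np.pi))
        return q_s1, q_s2

    # With alpha = 1 the two points lie diametrically opposite on the circle |q| = A
    q1, q2 = poor_reception_points(A=1.0, alpha=1.0, theta11=0.0, theta21=0.0)
    print(q1, q2)   # (-1+0j) and (1+0j), up to floating-point error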
A broadcast/multicast transmission system that does not change the precoding matrix is now considered. In this case, a system model is considered in which a base station transmits modulated signals using a precoding scheme that does not hop between precoding matrices, and a plurality of terminals (Γ terminals) receive the modulated signals transmitted by the base station. It is considered that the conditions of direct waves between the base station and the terminals change little over time. Therefore, from Equations 153 and 154, for a terminal that is in a position fitting the conditions of Equation 155 or Equation 156 and that is in an LOS environment where the Rician factor is large, the possibility of degradation in the reception quality of data exists. Accordingly, to resolve this problem, it is necessary to change the precoding matrix over time.

Math 165: \( q \approx -\frac{A}{\alpha}\, e^{j(\theta_{11}(p)-\theta_{21}(p))} \)  (Equation 155)

Math 166: \( q \approx -A\alpha\, e^{j(\theta_{11}(p)-\theta_{21}(p)-\pi)} \)  (Equation 156)

A scheme of regularly hopping between precoding matrices over a time period (cycle) with N slots (hereinafter referred to as a precoding hopping scheme) is considered. Since there are N slots in the time period (cycle), N varieties of precoding matrices F[i] based on Equation 148 are prepared (i = 0, 1, . . . , N−1). In this case, the precoding matrices F[i] are represented as follows.

Math 167: \( F[i] = \frac{1}{\sqrt{\alpha^2+1}} \begin{pmatrix} e^{j\theta_{11}[i]} & \alpha e^{j(\theta_{11}[i]+\lambda)} \\ \alpha e^{j\theta_{21}[i]} & e^{j(\theta_{21}[i]+\lambda+\pi)} \end{pmatrix} \)  (Equation 157)

In this equation, let α not change over time, and let λ also not change over time (though change over time may be allowed). As in Embodiment 1, F[i] is the precoding matrix used to obtain a precoded signal x(p = N×k+i) in Equation 142 for time N×k+i (where k is an integer equal to or greater than 0, and i = 0, 1, . . . , N−1). The same is true below as well. At this point, based on Equations 153 and 154, design conditions such as the following are important for the precoding matrices for precoding hopping.

Math 168 (Condition #10): \( e^{j(\theta_{11}[x]-\theta_{21}[x])} \neq e^{j(\theta_{11}[y]-\theta_{21}[y])} \) for ∀x, ∀y (x ≠ y; x, y = 0, 1, . . . , N−1)  (Equation 158)

Math 169 (Condition #11): \( e^{j(\theta_{11}[x]-\theta_{21}[x]-\pi)} \neq e^{j(\theta_{11}[y]-\theta_{21}[y]-\pi)} \) for ∀x, ∀y (x ≠ y; x, y = 0, 1, . . . , N−1)  (Equation 159)

From Condition #10, in all of the Γ terminals there is one slot or less having poor reception points for s1 among the N slots in a time period (cycle). Accordingly, the log-likelihood ratio for bits transmitted by s1(p) can be obtained for at least N−1 slots. Similarly, from Condition #11, in all of the Γ terminals there is one slot or less having poor reception points for s2 among the N slots in a time period (cycle). Accordingly, the log-likelihood ratio for bits transmitted by s2(p) can be obtained for at least N−1 slots. In this way, by providing the precoding matrix design model of Condition #10 and Condition #11, the number of bits for which the log-likelihood ratio is obtained among the bits transmitted by s1(p), and the number of bits for which the log-likelihood ratio is obtained among the bits transmitted by s2(p), are guaranteed to be equal to or greater than a fixed number in all of the Γ terminals. Therefore, in all of the Γ terminals, it is considered that degradation of data reception quality is moderated in an LOS environment where the Rician factor is large.
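The following sketch tests Condition #10 numerically. Condition #11 is equivalent as a check, since the extra factor e^{−jπ} rotates every point equally; the example design with evenly spaced phase differences is an assumption used only for demonstration.

    import numpy as np

    def satisfies_conditions_10_11(theta11, theta21):
        # Equations 158/159: the phase differences theta11[i] - theta21[i]
        # must map to pairwise-distinct points on the unit circle.
        p = np.exp(1j * (np.asarray(theta11) - np.asarray(theta21)))
        n = len(p)
        return all(not np.isclose(p[x], p[y])
                   for x in range(n) for y in range(x + 1, n))

    # Illustrative N = 4 design (assumption): evenly spaced phase differences
    N = 4
    theta11 = [2 * np.pi * i / N for i in range(N)]
    theta21 = [0.0] * N
    print(satisfies_conditions_10_11(theta11, theta21))   # True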
The following shows an example of a precoding matrix in the precoding hopping scheme. The probability density distribution of the phase of a direct wave can be considered to be evenly distributed over [0, 2π). Therefore, the probability density distribution of the phase of q in Equations 151 and 152 can also be considered to be evenly distributed over [0, 2π). Accordingly, the following is established as a condition for providing fair data reception quality insofar as possible for Γ terminals in the same LOS environment in which only the phase of q differs.

Condition #12
When using a precoding hopping scheme with an N-slot time period (cycle), among the N slots in the time period (cycle), the poor reception points for s1 are arranged to have an even distribution in terms of phase, and the poor reception points for s2 are arranged to have an even distribution in terms of phase.

The following describes an example of a precoding matrix in the precoding hopping scheme based on Condition #10 through Condition #12. Let α = 1.0 in the precoding matrix in Equation 157.

Example #5
Let the number of slots N in the time period (cycle) be 8. In order to satisfy Condition #10 through Condition #12, precoding matrices for a precoding hopping scheme with an N = 8 time period (cycle) are provided as in the following equation.

Math 170: \( F[i] = \frac{1}{\sqrt{2}} \begin{pmatrix} e^{j0} & e^{j0} \\ e^{ji\pi/4} & e^{j(i\pi/4+\pi)} \end{pmatrix} \)  (Equation 160)

Here, j is an imaginary unit, and i = 0, 1, . . . , 7. Instead of Equation 160, Equation 161 may be provided (where λ and θ11[i] do not change over time (though change may be allowed)).

Math 171: \( F[i] = \frac{1}{\sqrt{2}} \begin{pmatrix} e^{j\theta_{11}[i]} & e^{j(\theta_{11}[i]+\lambda)} \\ e^{j(\theta_{11}[i]+i\pi/4)} & e^{j(\theta_{11}[i]+i\pi/4+\lambda+\pi)} \end{pmatrix} \)  (Equation 161)

Accordingly, the poor reception points for s1 and s2 become as in FIGS. 31A and 31B. (In FIGS. 31A and 31B, the horizontal axis is the real axis, and the vertical axis is the imaginary axis.) Instead of Equations 160 and 161, Equations 162 and 163 may be provided (where i = 0, 1, . . . , 7, and where λ and θ11[i] do not change over time (though change may be allowed)).

Math 172: \( F[i] = \frac{1}{\sqrt{2}} \begin{pmatrix} e^{j0} & e^{j0} \\ e^{j(-i\pi/4)} & e^{j(-i\pi/4+\pi)} \end{pmatrix} \)  (Equation 162)

Math 173: \( F[i] = \frac{1}{\sqrt{2}} \begin{pmatrix} e^{j\theta_{11}[i]} & e^{j(\theta_{11}[i]+\lambda)} \\ e^{j(\theta_{11}[i]-i\pi/4)} & e^{j(\theta_{11}[i]-i\pi/4+\lambda+\pi)} \end{pmatrix} \)  (Equation 163)

Next, the following is established as a condition, different from Condition #12, for providing fair data reception quality insofar as possible for Γ terminals in the same LOS environment in which only the phase of q differs.

Condition #13
When using a precoding hopping scheme with an N-slot time period (cycle), in addition to the condition

Math 174: \( e^{j(\theta_{11}[x]-\theta_{21}[x])} \neq e^{j(\theta_{11}[y]-\theta_{21}[y]-\pi)} \) for ∀x, ∀y (x, y = 0, 1, . . . , N−1)  (Equation 164)

the poor reception points for s1 and the poor reception points for s2 are arranged to be in an even distribution with respect to phase in the N slots in the time period (cycle).

The following describes an example of a precoding matrix in the precoding hopping scheme based on Condition #10, Condition #11, and Condition #13. Let α = 1.0 in the precoding matrix in Equation 157.

Example #6
Let the number of slots N in the time period (cycle) be 4. Precoding matrices for a precoding hopping scheme with an N = 4 time period (cycle) are provided as in the following equation.

Math 175: \( F[i] = \frac{1}{\sqrt{2}} \begin{pmatrix} e^{j0} & e^{j0} \\ e^{ji\pi/4} & e^{j(i\pi/4+\pi)} \end{pmatrix} \)  (Equation 165)

Here, j is an imaginary unit, and i = 0, 1, 2, 3. Instead of Equation 165, Equation 166 may be provided (where λ and θ11[i] do not change over time (though change may be allowed)).

Math 176: \( F[i] = \frac{1}{\sqrt{2}} \begin{pmatrix} e^{j\theta_{11}[i]} & e^{j(\theta_{11}[i]+\lambda)} \\ e^{j(\theta_{11}[i]+i\pi/4)} & e^{j(\theta_{11}[i]+i\pi/4+\lambda+\pi)} \end{pmatrix} \)  (Equation 166)

Accordingly, the poor reception points for s1 and s2 become as in FIG. 32. (In FIG. 32, the horizontal axis is the real axis, and the vertical axis is the imaginary axis.) Instead of Equations 165 and 166, Equations 167 and 168 may be provided (where i = 0, 1, 2, 3, and where λ and θ11[i] do not change over time (though change may be allowed)).

Math 177: \( F[i] = \frac{1}{\sqrt{2}} \begin{pmatrix} e^{j0} & e^{j0} \\ e^{j(-i\pi/4)} & e^{j(-i\pi/4+\pi)} \end{pmatrix} \)  (Equation 167)

Math 178: \( F[i] = \frac{1}{\sqrt{2}} \begin{pmatrix} e^{j\theta_{11}[i]} & e^{j(\theta_{11}[i]+\lambda)} \\ e^{j(\theta_{11}[i]-i\pi/4)} & e^{j(\theta_{11}[i]-i\pi/4+\lambda+\pi)} \end{pmatrix} \)  (Equation 168)
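For Example #5, the following sketch generates the matrices of Equation 160 and lists the phases of the corresponding poor reception points, showing the even phase distribution required by Condition #12; A = 1 and α = 1 as in the example.

    import numpy as np

    def example5_matrix(i):
        # Equation 160: F[i] = (1/sqrt(2)) [[e^{j0}, e^{j0}],
        #                                   [e^{j i pi/4}, e^{j(i pi/4 + pi)}]]
        return (1 / np.sqrt(2)) * np.array([
            [1, 1],
            [np.exp(1j * i * np.pi / 4), np.exp(1j * (i * np.pi / 4 + np.pi))]])

    # Poor reception points (Equations 153/154 with A = 1, alpha = 1):
    # here theta11[i] = 0 and theta21[i] = i*pi/4, so the s1 points
    # -e^{-j i pi/4} and the s2 points e^{-j i pi/4} are each evenly
    # spaced in phase over the eight slots (Condition #12).
    for i in range(8):
        q_s1 = -np.exp(1j * (0 - i * np.pi / 4))
        q_s2 = -np.exp(1j * (0 - i * np.pi / 4 - np.pi))
        print(i, np.angle(q_s1), np.angle(q_s2))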
Next, a precoding hopping scheme using a non-unitary matrix is described. Based on Equation 148, the precoding matrices presently under consideration are represented as follows.

Math 179: \( F(p) = \frac{1}{\sqrt{\alpha^2+1}} \begin{pmatrix} e^{j\theta_{11}(p)} & \alpha e^{j(\theta_{11}(p)+\lambda)} \\ \alpha e^{j\theta_{21}(p)} & e^{j(\theta_{21}(p)+\lambda+\delta)} \end{pmatrix} \)  (Equation 169)

Equations corresponding to Equations 151 and 152 are represented as follows.

Math 180: \( \begin{pmatrix} y_1(p) \\ y_2(p) \end{pmatrix} = \frac{1}{\sqrt{\alpha^2+1}} \begin{pmatrix} Ae^{j0} & q \\ Ae^{j0} & q \end{pmatrix} \begin{pmatrix} e^{j\theta_{11}(p)} & \alpha e^{j(\theta_{11}(p)+\lambda)} \\ \alpha e^{j\theta_{21}(p)} & e^{j(\theta_{21}(p)+\lambda+\delta)} \end{pmatrix} \begin{pmatrix} s_1(p) \\ s_2(p) \end{pmatrix} + n(p) \)  (Equation 170)

Math 181: \( \begin{pmatrix} y_1(p) \\ y_2(p) \end{pmatrix} = \frac{1}{\sqrt{\alpha^2+1}} \begin{pmatrix} e^{j0} \\ e^{j0} \end{pmatrix} \begin{pmatrix} Ae^{j0} & q \end{pmatrix} \begin{pmatrix} e^{j\theta_{11}(p)} & \alpha e^{j(\theta_{11}(p)+\lambda)} \\ \alpha e^{j\theta_{21}(p)} & e^{j(\theta_{21}(p)+\lambda+\delta)} \end{pmatrix} \begin{pmatrix} s_1(p) \\ s_2(p) \end{pmatrix} + n(p) \)  (Equation 171)

In this case, there are two values of q at which the minimum value dmin² of the Euclidian distance between a received signal point and a received candidate signal point is zero.

In Equation 171, when s1(p) does not exist:

Math 182: \( q = -\frac{A}{\alpha}\, e^{j(\theta_{11}(p)-\theta_{21}(p))} \)  (Equation 172)

In Equation 171, when s2(p) does not exist:

Math 183: \( q = -A\alpha\, e^{j(\theta_{11}(p)-\theta_{21}(p)-\delta)} \)  (Equation 173)

In the precoding hopping scheme for an N-slot time period (cycle), by referring to Equation 169, N varieties of the precoding matrix F[i] are represented as follows.

Math 184: \( F[i] = \frac{1}{\sqrt{\alpha^2+1}} \begin{pmatrix} e^{j\theta_{11}[i]} & \alpha e^{j(\theta_{11}[i]+\lambda)} \\ \alpha e^{j\theta_{21}[i]} & e^{j(\theta_{21}[i]+\lambda+\delta)} \end{pmatrix} \)  (Equation 174)

In this equation, let α and δ not change over time. At this point, based on Equations 172 and 173, design conditions such as the following are provided for the precoding matrices for precoding hopping.

Math 185 (Condition #14): \( e^{j(\theta_{11}[x]-\theta_{21}[x])} \neq e^{j(\theta_{11}[y]-\theta_{21}[y])} \) for ∀x, ∀y (x ≠ y; x, y = 0, 1, . . . , N−1)  (Equation 175)

Math 186 (Condition #15): \( e^{j(\theta_{11}[x]-\theta_{21}[x]-\delta)} \neq e^{j(\theta_{11}[y]-\theta_{21}[y]-\delta)} \) for ∀x, ∀y (x ≠ y; x, y = 0, 1, . . . , N−1)  (Equation 176)

Example #7
Let α = 1.0 in the precoding matrix in Equation 174. Let the number of slots N in the time period (cycle) be 16. In order to satisfy Condition #12, Condition #14, and Condition #15, precoding matrices for a precoding hopping scheme with an N = 16 time period (cycle) are provided as in the following equations.

For i = 0, 1, . . . , 7:

Math 187: \( F[i] = \frac{1}{\sqrt{2}} \begin{pmatrix} e^{j0} & e^{j0} \\ e^{ji\pi/4} & e^{j(i\pi/4+7\pi/8)} \end{pmatrix} \)  (Equation 177)

For i = 8, 9, . . . , 15:

Math 188: \( F[i] = \frac{1}{\sqrt{2}} \begin{pmatrix} e^{ji\pi/4} & e^{j(i\pi/4+7\pi/8)} \\ e^{j0} & e^{j0} \end{pmatrix} \)  (Equation 178)

Furthermore, a precoding matrix that differs from Equations 177 and 178 can be provided as follows.

For i = 0, 1, . . . , 7:

Math 189: \( F[i] = \frac{1}{\sqrt{2}} \begin{pmatrix} e^{j\theta_{11}[i]} & e^{j(\theta_{11}[i]+\lambda)} \\ e^{j(\theta_{11}[i]+i\pi/4)} & e^{j(\theta_{11}[i]+i\pi/4+\lambda+7\pi/8)} \end{pmatrix} \)  (Equation 179)

For i = 8, 9, . . . , 15:

Math 190: \( F[i] = \frac{1}{\sqrt{2}} \begin{pmatrix} e^{j(\theta_{11}[i]+i\pi/4)} & e^{j(\theta_{11}[i]+i\pi/4+\lambda+7\pi/8)} \\ e^{j\theta_{11}[i]} & e^{j(\theta_{11}[i]+\lambda)} \end{pmatrix} \)  (Equation 180)

Accordingly, the poor reception points for s1 and s2 become as in FIGS. 33A and 33B. (In FIGS. 33A and 33B, the horizontal axis is the real axis, and the vertical axis is the imaginary axis.) Instead of Equations 177 and 178, and Equations 179 and 180, precoding matrices may be provided as below.

For i = 0, 1, . . . , 7:

Math 191: \( F[i] = \frac{1}{\sqrt{2}} \begin{pmatrix} e^{j0} & e^{j0} \\ e^{j(-i\pi/4)} & e^{j(-i\pi/4+7\pi/8)} \end{pmatrix} \)  (Equation 181)

For i = 8, 9, . . . , 15:

Math 192: \( F[i] = \frac{1}{\sqrt{2}} \begin{pmatrix} e^{j(-i\pi/4)} & e^{j(-i\pi/4+7\pi/8)} \\ e^{j0} & e^{j0} \end{pmatrix} \)  (Equation 182)

or

For i = 0, 1, . . . , 7:

Math 193: \( F[i] = \frac{1}{\sqrt{2}} \begin{pmatrix} e^{j\theta_{11}[i]} & e^{j(\theta_{11}[i]+\lambda)} \\ e^{j(\theta_{11}[i]-i\pi/4)} & e^{j(\theta_{11}[i]-i\pi/4+\lambda+7\pi/8)} \end{pmatrix} \)  (Equation 183)

For i = 8, 9, . . . , 15:

Math 194: \( F[i] = \frac{1}{\sqrt{2}} \begin{pmatrix} e^{j(\theta_{11}[i]-i\pi/4)} & e^{j(\theta_{11}[i]-i\pi/4+\lambda+7\pi/8)} \\ e^{j\theta_{11}[i]} & e^{j(\theta_{11}[i]+\lambda)} \end{pmatrix} \)  (Equation 184)

(In Equations 177-184, 7π/8 may be changed to −7π/8.)
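The following sketch constructs the Example #7 matrices of Equations 177 and 178, confirms that they are not unitary (δ = 7π/8 ≠ π), and evaluates the poor reception points given by Equations 172 and 173 for the first eight slots; restricting the evaluation to the first half of the period (cycle) is a simplification made here for brevity.

    import numpy as np

    delta = 7 * np.pi / 8   # the non-unitary phase used in Example #7

    def example7_matrix(i):
        # Equations 177 (i = 0..7) and 178 (i = 8..15), with alpha = 1
        row = np.array([np.exp(1j * i * np.pi / 4),
                        np.exp(1j * (i * np.pi / 4 + delta))])
        ones = np.array([1.0 + 0j, 1.0 + 0j])
        M = np.array([ones, row]) if i < 8 else np.array([row, ones])
        return M / np.sqrt(2)

    F = [example7_matrix(i) for i in range(16)]
    # delta != pi, so these matrices are not unitary:
    print(np.allclose(F[0] @ F[0].conj().T, np.eye(2)))   # False

    # Poor reception points for slots 0..7 (A = 1, alpha = 1), per
    # Equations 172/173 with theta11[i] = 0 and theta21[i] = i*pi/4:
    for i in range(8):
        q_s1 = -np.exp(-1j * i * np.pi / 4)               # s1 eliminated
        q_s2 = -np.exp(-1j * (i * np.pi / 4 + delta))     # s2 eliminated
        print(i, np.angle(q_s1), np.angle(q_s2))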
Next, the following is established as a condition, different from Condition #12, for providing fair data reception quality insofar as possible for Γ terminals in the same LOS environment in which only the phase of q differs.

Condition #16
When using a precoding hopping scheme with an N-slot time period (cycle), the following condition is set:

Math 195: \( e^{j(\theta_{11}[x]-\theta_{21}[x])} \neq e^{j(\theta_{11}[y]-\theta_{21}[y]-\delta)} \) for ∀x, ∀y (x ≠ y; x, y = 0, 1, . . . , N−1)  (Equation 185)

and the poor reception points for s1 and the poor reception points for s2 are arranged to be in an even distribution with respect to phase in the N slots in the time period (cycle).

The following describes an example of a precoding matrix in the precoding hopping scheme based on Condition #14, Condition #15, and Condition #16. Let α = 1.0 in the precoding matrix in Equation 174.

Example #8
Let the number of slots N in the time period (cycle) be 8. Precoding matrices for a precoding hopping scheme with an N = 8 time period (cycle) are provided as in the following equation.

Math 196: \( F[i] = \frac{1}{\sqrt{2}} \begin{pmatrix} e^{j0} & e^{j0} \\ e^{ji\pi/4} & e^{j(i\pi/4+7\pi/8)} \end{pmatrix} \)  (Equation 186)

Here, i = 0, 1, . . . , 7. Furthermore, a precoding matrix that differs from Equation 186 can be provided as follows (where i = 0, 1, . . . , 7, and where λ and θ11[i] do not change over time (though change may be allowed)).

Math 197: \( F[i] = \frac{1}{\sqrt{2}} \begin{pmatrix} e^{j\theta_{11}[i]} & e^{j(\theta_{11}[i]+\lambda)} \\ e^{j(\theta_{11}[i]+i\pi/4)} & e^{j(\theta_{11}[i]+i\pi/4+\lambda+7\pi/8)} \end{pmatrix} \)  (Equation 187)

Accordingly, the poor reception points for s1 and s2 become as in FIG. 34. Instead of Equations 186 and 187, precoding matrices may be provided as follows (where i = 0, 1, . . . , 7, and where λ and θ11[i] do not change over time (though change may be allowed)).

Math 198: \( F[i] = \frac{1}{\sqrt{2}} \begin{pmatrix} e^{j0} & e^{j0} \\ e^{j(-i\pi/4)} & e^{j(-i\pi/4+7\pi/8)} \end{pmatrix} \)  (Equation 188)

or

Math 199: \( F[i] = \frac{1}{\sqrt{2}} \begin{pmatrix} e^{j\theta_{11}[i]} & e^{j(\theta_{11}[i]+\lambda)} \\ e^{j(\theta_{11}[i]-i\pi/4)} & e^{j(\theta_{11}[i]-i\pi/4+\lambda+7\pi/8)} \end{pmatrix} \)  (Equation 189)

(In Equations 186-189, 7π/8 may be changed to −7π/8.)

Next, in the precoding matrix of Equation 174, a precoding hopping scheme that differs from Example #7 and Example #8 by letting α ≠ 1, and by taking into consideration the distance in the complex plane between poor reception points, is examined. In this case, the precoding hopping scheme for an N-slot time period (cycle) of Equation 174 is used; from Condition #14, in all of the Γ terminals there is one slot or less having poor reception points for s1 among the N slots in a time period (cycle), so the log-likelihood ratio for bits transmitted by s1(p) can be obtained for at least N−1 slots. Similarly, from Condition #15, in all of the Γ terminals there is one slot or less having poor reception points for s2 among the N slots in a time period (cycle), so the log-likelihood ratio for bits transmitted by s2(p) can be obtained for at least N−1 slots. Therefore, it is clear that a larger value for N in the N-slot time period (cycle) increases the number of slots in which the log-likelihood ratio can be obtained.

Incidentally, since the influence of scattered wave components is also present in an actual channel model, it is considered that, when the number of slots N in the time period (cycle) is fixed, there is a possibility of improved data reception quality if the minimum distance in the complex plane between poor reception points is as large as possible. Accordingly, in the context of Example #7 and Example #8, precoding hopping schemes in which α ≠ 1 and which improve on Example #7 and Example #8 are considered. The precoding scheme that improves on Example #8 is easier to understand and is therefore described first.

Example #9
From Equation 186, the precoding matrices in an N = 8 time period (cycle) precoding hopping scheme that improves on Example #8 are provided in the following equation.
Math 200: \( F[i] = \frac{1}{\sqrt{\alpha^2+1}} \begin{pmatrix} e^{j0} & \alpha e^{j0} \\ \alpha e^{ji\pi/4} & e^{j(i\pi/4+7\pi/8)} \end{pmatrix} \)  (Equation 190)

Here, i = 0, 1, . . . , 7. Furthermore, precoding matrices that differ from Equation 190 can be provided as follows (where i = 0, 1, . . . , 7, and where λ and θ11[i] do not change over time (though change may be allowed)).

Math 201: \( F[i] = \frac{1}{\sqrt{\alpha^2+1}} \begin{pmatrix} e^{j\theta_{11}[i]} & \alpha e^{j(\theta_{11}[i]+\lambda)} \\ \alpha e^{j(\theta_{11}[i]+i\pi/4)} & e^{j(\theta_{11}[i]+i\pi/4+\lambda+7\pi/8)} \end{pmatrix} \)  (Equation 191)

or

Math 202: \( F[i] = \frac{1}{\sqrt{\alpha^2+1}} \begin{pmatrix} e^{j0} & \alpha e^{j0} \\ \alpha e^{j(-i\pi/4)} & e^{j(-i\pi/4+7\pi/8)} \end{pmatrix} \)  (Equation 192)

or

Math 203: \( F[i] = \frac{1}{\sqrt{\alpha^2+1}} \begin{pmatrix} e^{j\theta_{11}[i]} & \alpha e^{j(\theta_{11}[i]+\lambda)} \\ \alpha e^{j(\theta_{11}[i]-i\pi/4)} & e^{j(\theta_{11}[i]-i\pi/4+\lambda+7\pi/8)} \end{pmatrix} \)  (Equation 193)

or

Math 204: \( F[i] = \frac{1}{\sqrt{\alpha^2+1}} \begin{pmatrix} e^{j0} & \alpha e^{j0} \\ \alpha e^{ji\pi/4} & e^{j(i\pi/4-7\pi/8)} \end{pmatrix} \)  (Equation 194)

or

Math 205: \( F[i] = \frac{1}{\sqrt{\alpha^2+1}} \begin{pmatrix} e^{j\theta_{11}[i]} & \alpha e^{j(\theta_{11}[i]+\lambda)} \\ \alpha e^{j(\theta_{11}[i]+i\pi/4)} & e^{j(\theta_{11}[i]+i\pi/4+\lambda-7\pi/8)} \end{pmatrix} \)  (Equation 195)

or

Math 206: \( F[i] = \frac{1}{\sqrt{\alpha^2+1}} \begin{pmatrix} e^{j0} & \alpha e^{j0} \\ \alpha e^{j(-i\pi/4)} & e^{j(-i\pi/4-7\pi/8)} \end{pmatrix} \)  (Equation 196)

or

Math 207: \( F[i] = \frac{1}{\sqrt{\alpha^2+1}} \begin{pmatrix} e^{j\theta_{11}[i]} & \alpha e^{j(\theta_{11}[i]+\lambda)} \\ \alpha e^{j(\theta_{11}[i]-i\pi/4)} & e^{j(\theta_{11}[i]-i\pi/4+\lambda-7\pi/8)} \end{pmatrix} \)  (Equation 197)

Therefore, the poor reception points for s1 and s2 are represented as in FIG. 35A when α < 1.0 and as in FIG. 35B when α > 1.0.

(i) When α < 1.0
When α < 1.0, the minimum distance in the complex plane between poor reception points is represented as min{d#1,#2, d#1,#3} when focusing on the distance (d#1,#2) between poor reception points #1 and #2 and the distance (d#1,#3) between poor reception points #1 and #3. In this case, the relationship between α and d#1,#2 and between α and d#1,#3 is shown in FIG. 36. The α which makes min{d#1,#2, d#1,#3} the largest is as follows.

Math 208: \( \alpha = \frac{1}{\sqrt{\cos(\pi/8) + \sqrt{3}\,\sin(\pi/8)}} \approx 0.7938 \)  (Equation 198)

The min{d#1,#2, d#1,#3} in this case is as follows.

Math 209: \( \min\{d_{\#1,\#2}, d_{\#1,\#3}\} = \frac{2A\sin(\pi/8)}{\sqrt{\cos(\pi/8) + \sqrt{3}\,\sin(\pi/8)}} \approx 0.6076A \)  (Equation 199)

Therefore, the precoding scheme using the value of α in Equation 198 for Equations 190-197 is effective. Setting the value of α as in Equation 198 is one appropriate scheme for obtaining excellent data reception quality. Setting α to a value near Equation 198, however, may similarly allow for excellent data reception quality. Accordingly, the value to which α is set is not limited to Equation 198.

(ii) When α > 1.0
When α > 1.0, the minimum distance in the complex plane between poor reception points is represented as min{d#4,#5, d#4,#6} when focusing on the distance (d#4,#5) between poor reception points #4 and #5 and the distance (d#4,#6) between poor reception points #4 and #6. In this case, the relationship between α and d#4,#5 and between α and d#4,#6 is shown in FIG. 37. The α which makes min{d#4,#5, d#4,#6} the largest is as follows.

Math 210: \( \alpha = \sqrt{\cos(\pi/8) + \sqrt{3}\,\sin(\pi/8)} \approx 1.2596 \)  (Equation 200)

The min{d#4,#5, d#4,#6} in this case is as follows.

Math 211: \( \min\{d_{\#4,\#5}, d_{\#4,\#6}\} = \frac{2A\sin(\pi/8)}{\sqrt{\cos(\pi/8) + \sqrt{3}\,\sin(\pi/8)}} \approx 0.6076A \)  (Equation 201)

Therefore, the precoding scheme using the value of α in Equation 200 for Equations 190-197 is effective. Setting the value of α as in Equation 200 is one appropriate scheme for obtaining excellent data reception quality. Setting α to a value near Equation 200, however, may similarly allow for excellent data reception quality. Accordingly, the value to which α is set is not limited to Equation 200.
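The values in Equations 198-201 can be reproduced numerically, as in the following sketch, which evaluates the minimum distance between the 16 poor reception points of the N = 8 scheme as a function of α; the brute-force search over all point pairs is an implementation choice made here, not part of the embodiment.

    import numpy as np

    A = 1.0
    angles = np.arange(8) * np.pi / 4            # i*pi/4, i = 0..7
    delta = 7 * np.pi / 8

    def min_poor_point_distance(alpha):
        # Poor reception points of the Equation 190 family (Equations 172/173):
        # s1 points -(A/alpha) e^{-j i pi/4}, s2 points -A*alpha e^{-j(i pi/4 + delta)}
        pts = np.concatenate([-(A / alpha) * np.exp(-1j * angles),
                              -A * alpha * np.exp(-1j * (angles + delta))])
        return min(abs(pts[x] - pts[y])
                   for x in range(len(pts)) for y in range(x + 1, len(pts)))

    alpha_star = 1 / np.sqrt(np.cos(np.pi / 8) + np.sqrt(3) * np.sin(np.pi / 8))
    print(alpha_star)                            # ~0.7938, matching Equation 198
    print(min_poor_point_distance(alpha_star))   # ~0.607*A, consistent with Equation 199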
Example #10
Based on consideration of Example #9, the precoding matrices in an N = 16 time period (cycle) precoding hopping scheme that improves on Example #7 are provided in the following equations (where λ and θ11[i] do not change over time (though change may be allowed)).

For i = 0, 1, . . . , 7:

Math 212: \( F[i] = \frac{1}{\sqrt{\alpha^2+1}} \begin{pmatrix} e^{j0} & \alpha e^{j0} \\ \alpha e^{ji\pi/4} & e^{j(i\pi/4+7\pi/8)} \end{pmatrix} \)  (Equation 202)

For i = 8, 9, . . . , 15:

Math 213: \( F[i] = \frac{1}{\sqrt{\alpha^2+1}} \begin{pmatrix} \alpha e^{ji\pi/4} & e^{j(i\pi/4+7\pi/8)} \\ e^{j0} & \alpha e^{j0} \end{pmatrix} \)  (Equation 203)

or

For i = 0, 1, . . . , 7:

Math 214: \( F[i] = \frac{1}{\sqrt{\alpha^2+1}} \begin{pmatrix} e^{j\theta_{11}[i]} & \alpha e^{j(\theta_{11}[i]+\lambda)} \\ \alpha e^{j(\theta_{11}[i]+i\pi/4)} & e^{j(\theta_{11}[i]+i\pi/4+\lambda+7\pi/8)} \end{pmatrix} \)  (Equation 204)

For i = 8, 9, . . . , 15:

Math 215: \( F[i] = \frac{1}{\sqrt{\alpha^2+1}} \begin{pmatrix} \alpha e^{j(\theta_{11}[i]+i\pi/4)} & e^{j(\theta_{11}[i]+i\pi/4+\lambda+7\pi/8)} \\ e^{j\theta_{11}[i]} & \alpha e^{j(\theta_{11}[i]+\lambda)} \end{pmatrix} \)  (Equation 205)

or

For i = 0, 1, . . . , 7:

Math 216: \( F[i] = \frac{1}{\sqrt{\alpha^2+1}} \begin{pmatrix} e^{j0} & \alpha e^{j0} \\ \alpha e^{j(-i\pi/4)} & e^{j(-i\pi/4+7\pi/8)} \end{pmatrix} \)  (Equation 206)

For i = 8, 9, . . . , 15:

Math 217: \( F[i] = \frac{1}{\sqrt{\alpha^2+1}} \begin{pmatrix} \alpha e^{j(-i\pi/4)} & e^{j(-i\pi/4+7\pi/8)} \\ e^{j0} & \alpha e^{j0} \end{pmatrix} \)  (Equation 207)

or

For i = 0, 1, . . . , 7:

Math 218: \( F[i] = \frac{1}{\sqrt{\alpha^2+1}} \begin{pmatrix} e^{j\theta_{11}[i]} & \alpha e^{j(\theta_{11}[i]+\lambda)} \\ \alpha e^{j(\theta_{11}[i]-i\pi/4)} & e^{j(\theta_{11}[i]-i\pi/4+\lambda+7\pi/8)} \end{pmatrix} \)  (Equation 208)

For i = 8, 9, . . . , 15:

Math 219: \( F[i] = \frac{1}{\sqrt{\alpha^2+1}} \begin{pmatrix} \alpha e^{j(\theta_{11}[i]-i\pi/4)} & e^{j(\theta_{11}[i]-i\pi/4+\lambda+7\pi/8)} \\ e^{j\theta_{11}[i]} & \alpha e^{j(\theta_{11}[i]+\lambda)} \end{pmatrix} \)  (Equation 209)

or

For i = 0, 1, . . . , 7:

Math 220: \( F[i] = \frac{1}{\sqrt{\alpha^2+1}} \begin{pmatrix} e^{j0} & \alpha e^{j0} \\ \alpha e^{ji\pi/4} & e^{j(i\pi/4-7\pi/8)} \end{pmatrix} \)  (Equation 210)

For i = 8, 9, . . . , 15:

Math 221: \( F[i] = \frac{1}{\sqrt{\alpha^2+1}} \begin{pmatrix} \alpha e^{ji\pi/4} & e^{j(i\pi/4-7\pi/8)} \\ e^{j0} & \alpha e^{j0} \end{pmatrix} \)  (Equation 211)

or

For i = 0, 1, . . . , 7:

Math 222: \( F[i] = \frac{1}{\sqrt{\alpha^2+1}} \begin{pmatrix} e^{j\theta_{11}[i]} & \alpha e^{j(\theta_{11}[i]+\lambda)} \\ \alpha e^{j(\theta_{11}[i]+i\pi/4)} & e^{j(\theta_{11}[i]+i\pi/4+\lambda-7\pi/8)} \end{pmatrix} \)  (Equation 212)

For i = 8, 9, . . . , 15:

Math 223: \( F[i] = \frac{1}{\sqrt{\alpha^2+1}} \begin{pmatrix} \alpha e^{j(\theta_{11}[i]+i\pi/4)} & e^{j(\theta_{11}[i]+i\pi/4+\lambda-7\pi/8)} \\ e^{j\theta_{11}[i]} & \alpha e^{j(\theta_{11}[i]+\lambda)} \end{pmatrix} \)  (Equation 213)

or

For i = 0, 1, . . . , 7:

Math 224: \( F[i] = \frac{1}{\sqrt{\alpha^2+1}} \begin{pmatrix} e^{j0} & \alpha e^{j0} \\ \alpha e^{j(-i\pi/4)} & e^{j(-i\pi/4-7\pi/8)} \end{pmatrix} \)  (Equation 214)

For i = 8, 9, . . . , 15:

Math 225: \( F[i] = \frac{1}{\sqrt{\alpha^2+1}} \begin{pmatrix} \alpha e^{j(-i\pi/4)} & e^{j(-i\pi/4-7\pi/8)} \\ e^{j0} & \alpha e^{j0} \end{pmatrix} \)  (Equation 215)

or

For i = 0, 1, . . . , 7:

Math 226: \( F[i] = \frac{1}{\sqrt{\alpha^2+1}} \begin{pmatrix} e^{j\theta_{11}[i]} & \alpha e^{j(\theta_{11}[i]+\lambda)} \\ \alpha e^{j(\theta_{11}[i]-i\pi/4)} & e^{j(\theta_{11}[i]-i\pi/4+\lambda-7\pi/8)} \end{pmatrix} \)  (Equation 216)

For i = 8, 9, . . . , 15:

Math 227: \( F[i] = \frac{1}{\sqrt{\alpha^2+1}} \begin{pmatrix} \alpha e^{j(\theta_{11}[i]-i\pi/4)} & e^{j(\theta_{11}[i]-i\pi/4+\lambda-7\pi/8)} \\ e^{j\theta_{11}[i]} & \alpha e^{j(\theta_{11}[i]+\lambda)} \end{pmatrix} \)  (Equation 217)

The value of α in Equation 198 and in Equation 200 is appropriate for obtaining excellent data reception quality. The poor reception points for s1 are represented as in FIGS. 38A and 38B when α < 1.0 and as in FIGS. 39A and 39B when α > 1.0.

In the present embodiment, the scheme of structuring N different precoding matrices for a precoding hopping scheme with an N-slot time period (cycle) has been described. In this case, as the N different precoding matrices, F[0], F[1], F[2], . . . , F[N−2], F[N−1] are prepared. In the present embodiment, an example of a single carrier transmission scheme has been described, and therefore the case of arranging symbols in the order F[0], F[1], F[2], . . . , F[N−2], F[N−1] in the time domain (or the frequency domain) has been described. The present invention is not, however, limited in this way, and the N different precoding matrices F[0], F[1], F[2], . . . , F[N−2], F[N−1] generated in the present embodiment may be adapted to a multi-carrier transmission scheme such as an OFDM transmission scheme or the like. As in Embodiment 1, as a scheme of adaptation in this case, precoding weights may be changed by arranging symbols in the frequency domain and in the frequency-time domain. Note that a precoding hopping scheme with an N-slot time period (cycle) has been described, but the same advantageous effects may be obtained by randomly using N different precoding matrices. In other words, the N different precoding matrices do not necessarily need to be used in a regular period (cycle).

Examples #5 through #10 have been shown based on Conditions #10 through #16.
However, in order to achieve a precoding matrix hopping scheme with a longer period (cycle), the period (cycle) for hopping between precoding matrices may be lengthened by, for example, selecting a plurality of examples from Examples #5 through #10 and using the precoding matrices indicated in the selected examples. For example, a precoding matrix hopping scheme with a longer period (cycle) may be achieved by using the precoding matrices indicated in Example #7 together with the precoding matrices indicated in Example #10. In this case, Conditions #10 through #16 are not necessarily observed. (In Equation 158 of Condition #10, Equation 159 of Condition #11, Equation 164 of Condition #13, Equation 175 of Condition #14, and Equation 176 of Condition #15, what becomes important for providing excellent reception quality is that the conditions hold "for some x and some y" rather than "for all x and all y".) When viewed from a different perspective, in a precoding matrix hopping scheme over an N-slot period (cycle) (where N is a large natural number), the probability of providing excellent reception quality increases when the precoding matrices of one of Examples #5 through #10 are included.

Embodiment 7

The present embodiment describes the structure of a reception device for receiving modulated signals transmitted by a transmission scheme that regularly hops between precoding matrices, as described in Embodiments 1-6.

In Embodiment 1, the following scheme has been described: a transmission device that transmits modulated signals, using a transmission scheme that regularly hops between precoding matrices, transmits information regarding the precoding matrices; based on this information, a reception device obtains information on the regular precoding matrix hopping used in the transmitted frames, decodes the precoding, performs detection, obtains the log-likelihood ratio for the transmitted bits, and subsequently performs error correction decoding. The present embodiment describes the structure of a reception device, and a scheme of hopping between precoding matrices, that differ from the above structure and scheme.

FIG. 40 is an example of the structure of a transmission device in the present embodiment. Elements that operate in a similar way to FIG. 3 bear the same reference signs. An encoder group (4002) receives transmission bits (4001) as input. The encoder group (4002), as described in Embodiment 1, includes a plurality of encoders for error correction coding, and based on the frame structure signal 313, a certain number of encoders operate, such as one encoder, two encoders, or four encoders.

When one encoder operates, the transmission bits (4001) are encoded to yield encoded transmission bits. The encoded transmission bits are allocated into two parts, and the encoder group (4002) outputs allocated bits (4003A) and allocated bits (4003B). When two encoders operate, the transmission bits (4001) are divided in two (referred to as divided bits A and B). The first encoder receives the divided bits A as input, encodes the divided bits A, and outputs the encoded bits as allocated bits (4003A). The second encoder receives the divided bits B as input, encodes the divided bits B, and outputs the encoded bits as allocated bits (4003B). When four encoders operate, the transmission bits (4001) are divided in four (referred to as divided bits A, B, C, and D). The first encoder receives the divided bits A as input, encodes the divided bits A, and outputs the encoded bits A.
The second encoder receives the divided bits B as input, encodes the divided bits B, and outputs the encoded bits B. The third encoder receives the divided bits C as input, encodes the divided bits C, and outputs the encoded bits C. The fourth encoder receives the divided bits D as input, encodes the divided bits D, and outputs the encoded bits D. The encoded bits A, B, C, and D are divided into allocated bits (4003A) and allocated bits (4003B).

The transmission device supports a transmission scheme such as, for example, the following Table 1 (Table 1A and Table 1B).

TABLE 1A (number of modulated transmission signals (number of transmit antennas): 1; no precoding matrix hopping scheme is used, denoted "—")

  Modulation   Number of   Transmission information
  scheme       encoders    (error correction coding scheme A / B / C)
  QPSK         1           00000000 / 00000001 / 00000010
  16QAM        1           00000011 / 00000100 / 00000101
  64QAM        1           00000110 / 00000111 / 00001000
  256QAM       1           00001001 / 00001010 / 00001011
  1024QAM      1           00001100 / 00001101 / 00001110

TABLE 1B (number of modulated transmission signals (number of transmit antennas): 2)

  Modulation scheme          Number of   Transmission information                     Precoding matrix
                             encoders    (error correction coding scheme A / B / C)  hopping scheme
  #1: QPSK, #2: QPSK         1           00001111 / 00010000 / 00010001               D
                             2           00010010 / 00010011 / 00010100               E
  #1: QPSK, #2: 16QAM        1           00010101 / 00010110 / 00010111               D
                             2           00011000 / 00011001 / 00011010               E
  #1: 16QAM, #2: 16QAM       1           00011011 / 00011100 / 00011101               D
                             2           00011110 / 00011111 / 00100000               E
  #1: 16QAM, #2: 64QAM       1           00100001 / 00100010 / 00100011               D
                             2           00100100 / 00100101 / 00100110               E
  #1: 64QAM, #2: 64QAM       1           00100111 / 00101000 / 00101001               F
                             2           00101010 / 00101011 / 00101100               G
  #1: 64QAM, #2: 256QAM      1           00101101 / 00101110 / 00101111               F
                             2           00110000 / 00110001 / 00110010               G
  #1: 256QAM, #2: 256QAM     1           00110011 / 00110100 / 00110101               F
                             2           00110110 / 00110111 / 00111000               G
                             4           00111001 / 00111010 / 00111011               H
  #1: 256QAM, #2: 1024QAM    1           00111100 / 00111101 / 00111110               F
                             2           00111111 / 01000000 / 01000001               G
                             4           01000010 / 01000011 / 01000100               H
  #1: 1024QAM, #2: 1024QAM   1           01000101 / 01000110 / 01000111               F
                             2           01001000 / 01001001 / 01001010               G
                             4           01001011 / 01001100 / 01001101               H

As shown in Table 1, transmission of a one-stream signal and transmission of a two-stream signal are supported as the number of transmission signals (number of transmit antennas). Furthermore, QPSK, 16QAM, 64QAM, 256QAM, and 1024QAM are supported as the modulation scheme. In particular, when the number of transmission signals is two, it is possible to set separate modulation schemes for stream #1 and stream #2. For example, "#1: 256QAM, #2: 1024QAM" in Table 1 indicates that the modulation scheme of stream #1 is 256QAM and the modulation scheme of stream #2 is 1024QAM (other entries in the table are similarly expressed). Three types of error correction coding schemes, A, B, and C, are supported. In this case, A, B, and C may all be different coding schemes; they may be different coding rates; or they may be coding schemes with different block sizes.

The pieces of transmission information in Table 1 are allocated to modes that define a "number of transmission signals", "modulation scheme", "number of encoders", and "error correction coding scheme". Accordingly, in the case of "number of transmission signals: 2", "modulation scheme: #1: 1024QAM, #2: 1024QAM", "number of encoders: 4", and "error correction coding scheme: C", for example, the transmission information is set to 01001101. In the frame, the transmission device transmits the transmission information and the transmission data.
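The following sketch suggests how the encoder group (4002) might divide the transmission bits among one, two, or four encoders. The exact way the single-encoder output is split in two, and the way the encoded bits A-D are merged, are assumptions here, since the text only states that the bits are allocated into two parts.

    def encoder_group(bits, num_encoders, encode):
        # Sketch of the encoder group (4002): divide the transmission bits
        # among 1, 2, or 4 encoders, then split the encoded output into the
        # two allocated-bit sequences (4003A)/(4003B). `encode` stands in
        # for the selected error correction coding scheme (A, B, or C).
        assert num_encoders in (1, 2, 4)
        chunk = len(bits) // num_encoders
        parts = [bits[k * chunk:(k + 1) * chunk] for k in range(num_encoders)]
        coded = [encode(p) for p in parts]
        if num_encoders == 1:
            half = len(coded[0]) // 2        # assumption: split output in halves
            return coded[0][:half], coded[0][half:]
        if num_encoders == 2:
            return coded[0], coded[1]
        joined = [b for c in coded for b in c]   # encoded bits A, B, C, D
        half = len(joined) // 2                  # assumption: split merged output
        return joined[:half], joined[half:]

    repeat3 = lambda bits: [b for b in bits for _ in range(3)]   # toy stand-in "code"
    a, b = encoder_group([0, 1, 1, 0], num_encoders=2, encode=repeat3)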
When transmitting the transmission data, in particular when the "number of transmission signals" is two, a "precoding matrix hopping scheme" is used in accordance with Table 1. In Table 1, five types of the "precoding matrix hopping scheme", D, E, F, G, and H, are prepared, and the precoding matrix hopping scheme is set to one of these five types in accordance with Table 1. The five different types may be implemented by, for example, preparing five different precoding matrices; using five different types of periods (cycles), such as a four-slot period (cycle) for D, an eight-slot period (cycle) for E, and so forth; or using both different precoding matrices and different periods (cycles).

FIG. 41 shows an example of a frame structure of a modulated signal transmitted by the transmission device in FIG. 40. The transmission device is assumed to support settings for both a mode to transmit two modulated signals, z1(t) and z2(t), and a mode to transmit one modulated signal. In FIG. 41, the symbol (4100) is a symbol for transmitting the "transmission information" shown in Table 1. The symbols (4101_1) and (4101_2) are reference (pilot) symbols for channel estimation. The symbols (4102_1, 4103_1) are data transmission symbols for transmitting the modulated signal z1(t), and the symbols (4102_2, 4103_2) are data transmission symbols for transmitting the modulated signal z2(t). The symbol (4102_1) and the symbol (4102_2) are transmitted at the same time along the same (shared/common) frequency, and the symbol (4103_1) and the symbol (4103_2) are transmitted at the same time along the same (shared/common) frequency. The symbols (4102_1, 4103_1) and the symbols (4102_2, 4103_2) are the symbols after precoding matrix calculation using the scheme of regularly hopping between precoding matrices described in Embodiments 1-4 and Embodiment 6 (therefore, as described in Embodiment 1, the structure of the streams s1(t) and s2(t) is as in FIG. 6).

Furthermore, in FIG. 41, the symbol (4104) is a symbol for transmitting the "transmission information" shown in Table 1. The symbol (4105) is a reference (pilot) symbol for channel estimation. The symbols (4106, 4107) are data transmission symbols for transmitting the modulated signal z1(t). The data transmission symbols for transmitting the modulated signal z1(t) are not precoded, since the number of transmission signals is one. Accordingly, the transmission device in FIG. 40 generates and transmits modulated signals in accordance with Table 1 and the frame structure in FIG. 41. In FIG. 40, the frame structure signal 313 includes information regarding the "number of transmission signals", "modulation scheme", "number of encoders", and "error correction coding scheme" set based on Table 1. The encoder (4002), the mapping units 306A, B, and the weighting units 308A, B receive the frame structure signal as an input and operate based on the "number of transmission signals", "modulation scheme", "number of encoders", and "error correction coding scheme" that are set based on Table 1. "Transmission information" corresponding to the set "number of transmission signals", "modulation scheme", "number of encoders", and "error correction coding scheme" is also transmitted to the reception device. The structure of the reception device may be represented similarly to FIG. 7 of Embodiment 1.
The difference with Embodiment 1 is as follows: since the transmission device and the reception device store the information in Table 1 in advance, the transmission device does not need to transmit information for regularly hopping between precoding matrices, but rather transmits "transmission information" corresponding to the "number of transmission signals", "modulation scheme", "number of encoders", and "error correction coding scheme", and the reception device obtains the information for regularly hopping between precoding matrices from Table 1 by receiving the "transmission information". Accordingly, by the control information decoding unit 709 obtaining the "transmission information" transmitted by the transmission device in FIG. 40, the reception device in FIG. 7 obtains, from the information corresponding to Table 1, a signal 710 regarding information on the transmission scheme, as notified by the transmission device, which includes information for regularly hopping between precoding matrices. Therefore, when the number of transmission signals is two, the signal processing unit 711 can perform detection based on the precoding matrix hopping pattern to obtain received log-likelihood ratios.

Note that in the above description, "transmission information" is set with respect to the "number of transmission signals", "modulation scheme", "number of encoders", and "error correction coding scheme" as in Table 1, and the precoding matrix hopping scheme is set with respect to the "transmission information". However, it is not necessary to set the "transmission information" with respect to all of the "number of transmission signals", "modulation scheme", "number of encoders", and "error correction coding scheme". For example, as in Table 2, the "transmission information" may be set with respect to the "number of transmission signals" and "modulation scheme" alone, and the precoding matrix hopping scheme may be set with respect to that "transmission information".

TABLE 2

  Number of modulated     Modulation scheme          Transmission   Precoding matrix
  transmission signals                               information    hopping scheme
  (number of transmit
  antennas)
  1                       QPSK                       00000          —
  1                       16QAM                      00001          —
  1                       64QAM                      00010          —
  1                       256QAM                     00011          —
  1                       1024QAM                    00100          —
  2                       #1: QPSK, #2: QPSK         10000          D
  2                       #1: QPSK, #2: 16QAM        10001          E
  2                       #1: 16QAM, #2: 16QAM       10010          E
  2                       #1: 16QAM, #2: 64QAM       10011          E
  2                       #1: 64QAM, #2: 64QAM       10100          F
  2                       #1: 64QAM, #2: 256QAM      10101          F
  2                       #1: 256QAM, #2: 256QAM     10110          G
  2                       #1: 256QAM, #2: 1024QAM    10111          G
  2                       #1: 1024QAM, #2: 1024QAM   11000          H

In this context, the "transmission information" and the scheme of setting the precoding matrix hopping scheme are not limited to Tables 1 and 2. As long as a rule is determined in advance for switching the precoding matrix hopping scheme based on transmission parameters such as the "number of transmission signals", "modulation scheme", "number of encoders", "error correction coding scheme", or the like (that is, as long as the transmission device and the reception device share a predetermined rule by which the precoding matrix hopping scheme is switched based on any one, or any plurality, of the transmission parameters), the transmission device does not need to transmit information regarding the precoding matrix hopping scheme. The reception device can identify the precoding matrix hopping scheme used by the transmission device by identifying the information on the transmission parameters and can therefore accurately perform decoding and detection.
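The inference described above amounts to a table lookup shared by both devices, as in the following sketch; only a few rows of Table 2 are reproduced, and the data structure itself is an illustrative assumption.

    # Shared table (a subset of Table 2): transmission information -> settings,
    # including the precoding matrix hopping scheme. "-" means no hopping
    # (single modulated transmission signal).
    TABLE2 = {
        "00000": (1, ("QPSK",), "-"),
        "10000": (2, ("QPSK", "QPSK"), "D"),
        "10010": (2, ("16QAM", "16QAM"), "E"),
        "10100": (2, ("64QAM", "64QAM"), "F"),
        "11000": (2, ("1024QAM", "1024QAM"), "H"),
    }

    def infer_hopping_scheme(transmission_information):
        # The reception device looks up the received "transmission information"
        # in the table it shares with the transmission device; no information
        # about the hopping scheme itself needs to be transmitted.
        n_signals, modulations, hopping = TABLE2[transmission_information]
        return hopping

    print(infer_hopping_scheme("10100"))   # F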
Note that in Tables 1 and 2, a transmission scheme that regularly hops between precoding matrices is used when the number of modulated transmission signals is two, but such a transmission scheme may be used whenever the number of modulated transmission signals is two or greater. Accordingly, if the transmission device and reception device share a table regarding transmission patterns that includes information on precoding hopping schemes, the transmission device need not transmit information regarding the precoding hopping scheme; it instead transmits control information that does not include information regarding the precoding hopping scheme, and the reception device can infer the precoding hopping scheme by acquiring this control information.

As described above, in the present embodiment, the transmission device does not transmit information directly related to the scheme of regularly hopping between precoding matrices. Rather, a scheme has been described wherein the reception device infers the information regarding precoding for the "scheme of regularly hopping between precoding matrices" used by the transmission device. This scheme yields the advantageous effect of improved transmission efficiency of data as a result of the transmission device not transmitting information directly related to the scheme of regularly hopping between precoding matrices. Note that the present embodiment has been described as changing precoding weights in the time domain, but as described in Embodiment 1, the present invention may be similarly embodied when using a multi-carrier transmission scheme such as OFDM or the like. In particular, when the precoding hopping scheme changes only depending on the number of transmission signals, the reception device can learn the precoding hopping scheme by acquiring the information, transmitted by the transmission device, on the number of transmission signals.

In the present description, it is considered that a communications/broadcasting device such as a broadcast station, a base station, an access point, a terminal, a mobile phone, or the like is provided with the transmission device, and that a communications device such as a television, radio, terminal, personal computer, mobile phone, access point, base station, or the like is provided with the reception device. Additionally, it is considered that the transmission device and the reception device in the present description have a communications function and are capable of being connected via some sort of interface to a device for executing applications for a television, radio, personal computer, mobile phone, or the like.

Furthermore, in the present embodiment, symbols other than data symbols, such as pilot symbols (preamble, unique word, postamble, reference symbol, and the like), symbols for control information, and the like may be arranged in the frame in any way. While the terms "pilot symbol" and "symbols for control information" have been used here, any term may be used, since the function itself is what is important. It suffices for a pilot symbol, for example, to be a known symbol modulated with PSK modulation in the transmission and reception devices (or for the reception device to be able to synchronize in order to know the symbol transmitted by the transmission device).
The reception device uses this symbol for frequency synchronization, time synchronization, channel estimation (estimation of Channel State Information (CSI) for each modulated signal), detection of signals, and the like. A symbol for control information is for transmitting information other than data (of applications or the like) that needs to be transmitted to the communication partner for achieving communication (for example, the modulation scheme, error correction coding scheme, coding rate of the error correction coding scheme, setting information in the upper layer, and the like).

Note that the present invention is not limited to the above Embodiments 1-5 and may be embodied with a variety of modifications. For example, the above embodiments describe communications devices, but the present invention is not limited to these devices and may be implemented as software for the corresponding communications scheme. Furthermore, a precoding hopping scheme used in a scheme of transmitting two modulated signals from two antennas has been described, but the present invention is not limited in this way. The present invention may also be embodied as a precoding hopping scheme for similarly changing precoding weights (matrices) in the context of a scheme whereby four mapped signals are precoded to generate four modulated signals that are transmitted from four antennas, or more generally, whereby N mapped signals are precoded to generate N modulated signals that are transmitted from N antennas. In the description, terms such as "precoding" and "precoding weight" are used, but any other terms may be used. What matters in the present invention is the actual signal processing. Different data may be transmitted in streams s1(t) and s2(t), or the same data may be transmitted.

Each of the transmit antennas of the transmission device and the receive antennas of the reception device shown in the figures may be formed by a plurality of antennas. Programs for executing the above transmission scheme may, for example, be stored in advance in Read Only Memory (ROM) and be caused to operate by a Central Processing Unit (CPU). Furthermore, the programs for executing the above transmission scheme may be stored in a computer-readable recording medium, the programs stored in the recording medium may be loaded in the Random Access Memory (RAM) of the computer, and the computer may be caused to operate in accordance with the programs. The components in the above embodiments may be typically assembled as a Large Scale Integration (LSI), a type of integrated circuit. Individual components may respectively be made into discrete chips, or part or all of the components in each embodiment may be made into one chip. While an LSI has been referred to, the terms Integrated Circuit (IC), system LSI, super LSI, or ultra LSI may be used depending on the degree of integration. Furthermore, the scheme for assembling integrated circuits is not limited to LSI, and a dedicated circuit or a general-purpose processor may be used. A Field Programmable Gate Array (FPGA), which is programmable after the LSI is manufactured, or a reconfigurable processor, which allows reconfiguration of the connections and settings of circuit cells inside the LSI, may be used. Furthermore, if technology for forming integrated circuits that replaces LSIs emerges, owing to advances in semiconductor technology or to another derivative technology, the integration of functional blocks may naturally be accomplished using such technology.
The application of biotechnology or the like is also possible.

Embodiment 8

The present embodiment describes an application of the scheme described in Embodiments 1-4 and Embodiment 6 for regularly hopping between precoding weights.

FIG. 6 relates to the weighting scheme (precoding scheme) in the present embodiment. The weighting unit 600 integrates the weighting units 308A and 308B in FIG. 3. As shown in FIG. 6, the stream s1(t) and the stream s2(t) correspond to the baseband signals 307A and 307B in FIG. 3. In other words, the streams s1(t) and s2(t) are the baseband signal in-phase components I and quadrature components Q when mapped according to a modulation scheme such as QPSK, 16QAM, 64QAM, or the like. As indicated by the frame structure of FIG. 6, the stream s1(t) is represented as s1(u) at symbol number u, as s1(u+1) at symbol number u+1, and so forth. Similarly, the stream s2(t) is represented as s2(u) at symbol number u, as s2(u+1) at symbol number u+1, and so forth. The weighting unit 600 receives the baseband signals 307A (s1(t)) and 307B (s2(t)) and the information 315 regarding weighting information in FIG. 3 as inputs, performs weighting in accordance with the information 315 regarding weighting, and outputs the signals 309A (z1(t)) and 309B (z2(t)) after weighting in FIG. 3. At this point, when for example a precoding matrix hopping scheme with an N = 8 period (cycle) as in Example #8 in Embodiment 6 is used, z1(t) and z2(t) are represented as follows.

For symbol number 8i (where i is an integer greater than or equal to zero):

Math 228: \( \begin{pmatrix} z_1(8i) \\ z_2(8i) \end{pmatrix} = \frac{1}{\sqrt{\alpha^2+1}} \begin{pmatrix} e^{j0} & \alpha e^{j0} \\ \alpha e^{jk\pi/4} & e^{j(k\pi/4+7\pi/8)} \end{pmatrix} \begin{pmatrix} s_1(8i) \\ s_2(8i) \end{pmatrix} \)  (Equation 218)

Here, j is an imaginary unit, and k = 0.

For symbol number 8i+1:

Math 229: \( \begin{pmatrix} z_1(8i+1) \\ z_2(8i+1) \end{pmatrix} = \frac{1}{\sqrt{\alpha^2+1}} \begin{pmatrix} e^{j0} & \alpha e^{j0} \\ \alpha e^{jk\pi/4} & e^{j(k\pi/4+7\pi/8)} \end{pmatrix} \begin{pmatrix} s_1(8i+1) \\ s_2(8i+1) \end{pmatrix} \)  (Equation 219)

Here, k = 1.

For symbol number 8i+2:

Math 230: \( \begin{pmatrix} z_1(8i+2) \\ z_2(8i+2) \end{pmatrix} = \frac{1}{\sqrt{\alpha^2+1}} \begin{pmatrix} e^{j0} & \alpha e^{j0} \\ \alpha e^{jk\pi/4} & e^{j(k\pi/4+7\pi/8)} \end{pmatrix} \begin{pmatrix} s_1(8i+2) \\ s_2(8i+2) \end{pmatrix} \)  (Equation 220)

Here, k = 2.

For symbol number 8i+3:

Math 231: \( \begin{pmatrix} z_1(8i+3) \\ z_2(8i+3) \end{pmatrix} = \frac{1}{\sqrt{\alpha^2+1}} \begin{pmatrix} e^{j0} & \alpha e^{j0} \\ \alpha e^{jk\pi/4} & e^{j(k\pi/4+7\pi/8)} \end{pmatrix} \begin{pmatrix} s_1(8i+3) \\ s_2(8i+3) \end{pmatrix} \)  (Equation 221)

Here, k = 3.

For symbol number 8i+4:

Math 232: \( \begin{pmatrix} z_1(8i+4) \\ z_2(8i+4) \end{pmatrix} = \frac{1}{\sqrt{\alpha^2+1}} \begin{pmatrix} e^{j0} & \alpha e^{j0} \\ \alpha e^{jk\pi/4} & e^{j(k\pi/4+7\pi/8)} \end{pmatrix} \begin{pmatrix} s_1(8i+4) \\ s_2(8i+4) \end{pmatrix} \)  (Equation 222)

Here, k = 4.

For symbol number 8i+5:

Math 233: \( \begin{pmatrix} z_1(8i+5) \\ z_2(8i+5) \end{pmatrix} = \frac{1}{\sqrt{\alpha^2+1}} \begin{pmatrix} e^{j0} & \alpha e^{j0} \\ \alpha e^{jk\pi/4} & e^{j(k\pi/4+7\pi/8)} \end{pmatrix} \begin{pmatrix} s_1(8i+5) \\ s_2(8i+5) \end{pmatrix} \)  (Equation 223)

Here, k = 5.

For symbol number 8i+6:

Math 234: \( \begin{pmatrix} z_1(8i+6) \\ z_2(8i+6) \end{pmatrix} = \frac{1}{\sqrt{\alpha^2+1}} \begin{pmatrix} e^{j0} & \alpha e^{j0} \\ \alpha e^{jk\pi/4} & e^{j(k\pi/4+7\pi/8)} \end{pmatrix} \begin{pmatrix} s_1(8i+6) \\ s_2(8i+6) \end{pmatrix} \)  (Equation 224)

Here, k = 6.

For symbol number 8i+7:

Math 235: \( \begin{pmatrix} z_1(8i+7) \\ z_2(8i+7) \end{pmatrix} = \frac{1}{\sqrt{\alpha^2+1}} \begin{pmatrix} e^{j0} & \alpha e^{j0} \\ \alpha e^{jk\pi/4} & e^{j(k\pi/4+7\pi/8)} \end{pmatrix} \begin{pmatrix} s_1(8i+7) \\ s_2(8i+7) \end{pmatrix} \)  (Equation 225)

Here, k = 7.

The symbol numbers shown here can be considered to indicate time. As described in other embodiments, in Equation 225, for example, z1(8i+7) and z2(8i+7) at time 8i+7 are signals at the same time, and the transmission device transmits z1(8i+7) and z2(8i+7) over the same (shared/common) frequency. In other words, letting the signals at time T be s1(T), s2(T), z1(T), and z2(T), then z1(T) and z2(T) are sought from some sort of precoding matrices and from s1(T) and s2(T), and the transmission device transmits z1(T) and z2(T) over the same (shared/common) frequency (at the same time).
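The following sketch computes z1(t) and z2(t) for the N = 8 period (cycle) of Equations 218-225; the mapped symbol values are dummies, and α is set to the Equation 198 value purely for illustration.

    import numpy as np

    def weight(alpha, k, delta=7 * np.pi / 8):
        # The matrix shared by Equations 218-225; k = symbol number mod 8
        c = 1 / np.sqrt(alpha**2 + 1)
        return c * np.array([
            [1, alpha],
            [alpha * np.exp(1j * k * np.pi / 4),
             np.exp(1j * (k * np.pi / 4 + delta))]])

    def precode(s1, s2, alpha):
        # z1(t), z2(t) for the N = 8 period (cycle) of Example #8
        z1 = np.empty(len(s1), dtype=complex)
        z2 = np.empty_like(z1)
        for t in range(len(s1)):
            z = weight(alpha, t % 8) @ np.array([s1[t], s2[t]])
            z1[t], z2[t] = z
        return z1, z2

    s1 = np.ones(16, dtype=complex)          # dummy mapped symbols
    s2 = 1j * np.ones(16, dtype=complex)
    z1, z2 = precode(s1, s2, alpha=0.7938)   # alpha per Equation 198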
Furthermore, in the case of using a multi-carrier transmission scheme such as OFDM or the like, and letting signals corresponding to s1, s2, z1, and z2 for (sub)carrier L and time T be s1(T, L), s2(T, L), z1(T, L), and z2(T, L), then z1(T, L) and z2(T, L) are sought from some sort of precoding matrices and from s1(T, L) and s2(T, L), and the transmission device transmits z1(T, L) and z2(T, L) over the same (shared/common) frequency (at the same time). In this case, the appropriate value of α is given by Equation 198 or Equation 200.

The present embodiment describes a precoding hopping scheme that increases the period (cycle) size, based on the above-described precoding matrices of Equation 190. Letting the period (cycle) of the precoding hopping scheme be 8M, the 8M different precoding matrices are represented as follows.

Math 236
\[
F[8 \times k + i]
= \frac{1}{\sqrt{\alpha^{2}+1}}
\begin{pmatrix}
e^{j0} & \alpha \times e^{j0} \\
\alpha \times e^{j\left(\frac{i\pi}{4}+\frac{k\pi}{4M}\right)} & e^{j\left(\frac{i\pi}{4}+\frac{k\pi}{4M}+\frac{7\pi}{8}\right)}
\end{pmatrix}
\]
(Equation 226)

In this case, i=0, 1, 2, 3, 4, 5, 6, 7, and k=0, 1, . . . , M−2, M−1.

For example, letting M=2 and α<1, the poor reception points for s1 (∘) and for s2 (□) at k=0 are represented as in FIG. 42A. Similarly, the poor reception points for s1 (∘) and for s2 (□) at k=1 are represented as in FIG. 42B. In this way, based on the precoding matrices in Equation 190, the poor reception points are as in FIG. 42A, and by using, as the precoding matrices, the matrices yielded by multiplying each term in the second line on the right-hand side of Equation 190 by e^{jX} (see Equation 226), the poor reception points are rotated with respect to FIG. 42A (see FIG. 42B). (Note that the poor reception points in FIG. 42A and FIG. 42B do not overlap. Even when multiplying by e^{jX}, the poor reception points should not overlap, as in this case. Furthermore, the matrices yielded by multiplying each term in the first line on the right-hand side of Equation 190, rather than in the second line on the right-hand side of Equation 190, by e^{jX} may be used as the precoding matrices.) In this case, the precoding matrices F[0]-F[15] are represented as follows.

Math 237
\[
F[8 \times k + i]
= \frac{1}{\sqrt{\alpha^{2}+1}}
\begin{pmatrix}
e^{j0} & \alpha \times e^{j0} \\
\alpha \times e^{j\left(\frac{i\pi}{4}+X_{k}\right)} & e^{j\left(\frac{i\pi}{4}+X_{k}+\frac{7\pi}{8}\right)}
\end{pmatrix}
\]
(Equation 227)

Here, i=0, 1, 2, 3, 4, 5, 6, 7, and k=0, 1. In this case, when M=2, precoding matrices F[0]-F[15] are generated (the precoding matrices F[0]-F[15] may be in any order, and the matrices F[0]-F[15] may each be different). Symbol number 16i may be precoded using F[0], symbol number 16i+1 may be precoded using F[1], . . . , and symbol number 16i+h may be precoded using F[h], for example (h=0, 1, 2, . . . , 14, 15). (In this case, as described in previous embodiments, precoding matrices need not be hopped between regularly.)

Summarizing the above considerations, with reference to Equations 82-85, N-period (cycle) precoding matrices are represented by the following equation.

Math 238
\[
F[i]
= \frac{1}{\sqrt{\alpha^{2}+1}}
\begin{pmatrix}
e^{j\theta_{11}(i)} & \alpha \times e^{j(\theta_{11}(i)+\lambda)} \\
\alpha \times e^{j\theta_{21}(i)} & e^{j(\theta_{21}(i)+\lambda+\delta)}
\end{pmatrix}
\]
(Equation 228)

Here, since the period (cycle) has N slots, i=0, 1, 2, . . . , N−2, N−1. Furthermore, the N×M period (cycle) precoding matrices based on Equation 228 are represented by the following equation.

Math 239
\[
F[N \times k + i]
= \frac{1}{\sqrt{\alpha^{2}+1}}
\begin{pmatrix}
e^{j\theta_{11}(i)} & \alpha \times e^{j(\theta_{11}(i)+\lambda)} \\
\alpha \times e^{j(\theta_{21}(i)+X_{k})} & e^{j(\theta_{21}(i)+X_{k}+\lambda+\delta)}
\end{pmatrix}
\]
(Equation 229)

In this case, i=0, 1, 2, . . . , N−2, N−1, and k=0, 1, . . . , M−2, M−1. Precoding matrices F[0]-F[N×M−1] are thus generated (the precoding matrices F[0]-F[N×M−1] may be in any order for the N×M slots in the period (cycle)). Symbol number N×M×i may be precoded using F[0], symbol number N×M×i+1 may be precoded using F[1], . . . , and symbol number N×M×i+h may be precoded using F[h], for example (h=0, 1, 2, . . . , N×M−2, N×M−1). (In this case, as described in previous embodiments, precoding matrices need not be hopped between regularly.)
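The construction of the N×M-period matrices in Equation 229 can be sketched in Python as follows. This is only an illustration under assumed inputs: the θ11(i) and θ21(i) phase sets, the offsets Xk, and the values λ, δ, and α are taken as given, and numpy is used merely to hold the 2×2 complex matrices.

import numpy as np

def matrices_eq229(theta11, theta21, X, lam, delta, alpha):
    """Generate the N*M precoding matrices F[N*k + i] of Equation 229.
    theta11, theta21: length-N phase lists (radians); X: length-M offsets X_k."""
    N, M = len(theta11), len(X)
    norm = 1.0 / np.sqrt(alpha ** 2 + 1.0)
    F = []
    for k in range(M):
        for i in range(N):
            F.append(norm * np.array([
                [np.exp(1j * theta11[i]),
                 alpha * np.exp(1j * (theta11[i] + lam))],
                [alpha * np.exp(1j * (theta21[i] + X[k])),
                 np.exp(1j * (theta21[i] + X[k] + lam + delta))]]))
    return F  # F[0]-F[N*M-1]; symbol number N*M*i + h is precoded with F[h]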
Generating the precoding matrices in this way achieves a precoding matrix hopping scheme with a large period (cycle), allowing for the position of poor reception points to be easily changed, which may lead to improved data reception quality. Note that while the N×M period (cycle) precoding matrices have been set to Equation 229, the N×M period (cycle) precoding matrices may be set to the following equation, as described above.

Math 240
\[
F[N \times k + i]
= \frac{1}{\sqrt{\alpha^{2}+1}}
\begin{pmatrix}
e^{j(\theta_{11}(i)+X_{k})} & \alpha \times e^{j(\theta_{11}(i)+X_{k}+\lambda)} \\
\alpha \times e^{j\theta_{21}(i)} & e^{j(\theta_{21}(i)+\lambda+\delta)}
\end{pmatrix}
\]
(Equation 230)

In this case, i=0, 1, 2, . . . , N−2, N−1, and k=0, 1, . . . , M−2, M−1.

In Equations 229 and 230, when 0 radians ≤ δ < 2π radians, the matrices are a unitary matrix when δ=π radians and are a non-unitary matrix when δ≠π radians. In the present scheme, use of a non-unitary matrix for π/2 radians ≤ |δ| < π radians is one characteristic structure (the conditions for δ being similar to other embodiments), and excellent data reception quality is obtained. Use of a unitary matrix is another structure, and as described in detail in Embodiment 10 and Embodiment 16, if N is an odd number in Equations 229 and 230, the probability of obtaining excellent data reception quality increases.

Embodiment 9

The present embodiment describes a scheme for regularly hopping between precoding matrices using a unitary matrix. As described in Embodiment 8, in the scheme of regularly hopping between precoding matrices over a period (cycle) with N slots, the precoding matrices prepared for the N slots with reference to Equations 82-85 are represented as follows.

Math 241
\[
F[i]
= \frac{1}{\sqrt{\alpha^{2}+1}}
\begin{pmatrix}
e^{j\theta_{11}(i)} & \alpha \times e^{j(\theta_{11}(i)+\lambda)} \\
\alpha \times e^{j\theta_{21}(i)} & e^{j(\theta_{21}(i)+\lambda+\delta)}
\end{pmatrix}
\]
(Equation 231)

In this case, i=0, 1, 2, . . . , N−2, N−1. (Let α>0.) Since a unitary matrix is used in the present embodiment, the precoding matrices in Equation 231 may be represented as follows.

Math 242
\[
F[i]
= \frac{1}{\sqrt{\alpha^{2}+1}}
\begin{pmatrix}
e^{j\theta_{11}(i)} & \alpha \times e^{j(\theta_{11}(i)+\lambda)} \\
\alpha \times e^{j\theta_{21}(i)} & e^{j(\theta_{21}(i)+\lambda+\pi)}
\end{pmatrix}
\]
(Equation 232)

In this case, i=0, 1, 2, . . . , N−2, N−1. (Let α>0.) From Condition #5 (Math 106) and Condition #6 (Math 107) in Embodiment 3, the following conditions are important for achieving excellent data reception quality.

Math 243
\[
e^{j(\theta_{11}(x)-\theta_{21}(x))} \neq e^{j(\theta_{11}(y)-\theta_{21}(y))} \text{ for } \forall x, \forall y \ (x \neq y;\ x, y = 0, 1, 2, \ldots, N-2, N-1)
\]
(Condition #17)

(x is 0, 1, 2, . . . , N−2, N−1; y is 0, 1, 2, . . . , N−2, N−1; and x≠y.)

Math 244
\[
e^{j(\theta_{11}(x)-\theta_{21}(x)-\pi)} \neq e^{j(\theta_{11}(y)-\theta_{21}(y)-\pi)} \text{ for } \forall x, \forall y \ (x \neq y;\ x, y = 0, 1, 2, \ldots, N-2, N-1)
\]
(Condition #18)

(x is 0, 1, 2, . . . , N−2, N−1; y is 0, 1, 2, . . . , N−2, N−1; and x≠y.)

Embodiment 6 describes the distance between poor reception points. In order to increase the distance between poor reception points, it is important for the number of slots N to be an odd number three or greater. The following explains this point. In order to distribute the poor reception points evenly with regard to phase in the complex plane, as described in Embodiment 6, Condition #19 and Condition #20 are provided.

Math 245
\[
\frac{e^{j(\theta_{11}(x+1)-\theta_{21}(x+1))}}{e^{j(\theta_{11}(x)-\theta_{21}(x))}} = e^{j\frac{2\pi}{N}} \text{ for } \forall x \ (x = 0, 1, 2, \ldots, N-2)
\]
(Condition #19)

Math 246
\[
\frac{e^{j(\theta_{11}(x+1)-\theta_{21}(x+1))}}{e^{j(\theta_{11}(x)-\theta_{21}(x))}} = e^{j\left(-\frac{2\pi}{N}\right)} \text{ for } \forall x \ (x = 0, 1, 2, \ldots, N-2)
\]
(Condition #20)

In other words, Condition #19 means that the difference in phase is 2π/N radians. On the other hand, Condition #20 means that the difference in phase is −2π/N radians.
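Conditions #17 and #19 lend themselves to a direct numerical check. The sketch below is a hypothetical helper, assuming the phase sets are supplied as Python lists in radians: it verifies that the N points e^{j(θ11(i)−θ21(i))} are all distinct on the unit circle (Condition #17) and that consecutive points advance by 2π/N (Condition #19).

import cmath
import math

def satisfies_17_and_19(theta11, theta21, tol=1e-9):
    N = len(theta11)
    pts = [cmath.exp(1j * (theta11[i] - theta21[i])) for i in range(N)]
    # Condition #17: all e^{j(theta11(x) - theta21(x))} must differ.
    distinct = all(abs(pts[x] - pts[y]) > tol
                   for x in range(N) for y in range(x + 1, N))
    # Condition #19: each consecutive ratio equals e^{j 2 pi / N}.
    step = cmath.exp(1j * 2 * math.pi / N)
    evenly_spaced = all(abs(pts[x + 1] / pts[x] - step) < tol
                        for x in range(N - 1))
    return distinct and evenly_spaced

For instance, choosing θ11(i) = 2πi/N and θ21(i) = 0 for i = 0, 1, . . . , N−1 satisfies both conditions.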
Letting θ11(0)−θ21(0)=0 radians, and letting α<1, the distribution of poor reception points for s1 and for s2 in the complex plane for an N=3 period (cycle) is shown in FIG. 43A, and the distribution of poor reception points for s1 and for s2 in the complex plane for an N=4 period (cycle) is shown in FIG. 43B. Letting θ11(0)−θ21(0)=0 radians, and letting α>1, the distribution of poor reception points for s1 and for s2 in the complex plane for an N=3 period (cycle) is shown in FIG. 44A, and the distribution of poor reception points for s1 and for s2 in the complex plane for an N=4 period (cycle) is shown in FIG. 44B. In this case, when considering the phase between a line segment from the origin to a poor reception point and a half line along the real axis defined by real ≥ 0 (see FIG. 43A), then for either α>1 or α<1, when N=4, the case always occurs wherein the phase for the poor reception points for s1 and the phase for the poor reception points for s2 are the same value. (See 4301 and 4302 in FIG. 43B, and 4401 and 4402 in FIG. 44B.) In this case, in the complex plane, the distance between poor reception points becomes small. On the other hand, when N=3, the phase for the poor reception points for s1 and the phase for the poor reception points for s2 are never the same value. Based on the above, considering how the case always occurs wherein the phase for the poor reception points for s1 and the phase for the poor reception points for s2 are the same value when the number of slots N in the period (cycle) is an even number, setting the number of slots N in the period (cycle) to an odd number increases the probability of a greater distance between poor reception points in the complex plane as compared to when the number of slots N in the period (cycle) is an even number. However, when the number of slots N in the period (cycle) is small, for example when N≤16, the minimum distance between poor reception points in the complex plane can be guaranteed to be a certain length, since the number of poor reception points is small. Accordingly, when N≤16, even if N is an even number, cases do exist where data reception quality can be guaranteed. Therefore, in the scheme for regularly hopping between precoding matrices based on Equation 232, when the number of slots N in the period (cycle) is set to an odd number, the probability of improving data reception quality is high. Precoding matrices F[0]-F[N−1] are generated based on Equation 232 (the precoding matrices F[0]-F[N−1] may be in any order for the N slots in the period (cycle)). Symbol number Ni may be precoded using F[0], symbol number Ni+1 may be precoded using F[1], . . . , and symbol number N×i+h may be precoded using F[h], for example (h=0, 1, 2, . . . , N−2, N−1). (In this case, as described in previous embodiments, precoding matrices need not be hopped between regularly.) Furthermore, when the modulation scheme for both s1 and s2 is 16QAM, if α is set as follows,

Math 247
\[
\alpha = \frac{\sqrt{2}+4}{\sqrt{2}+2}
\]
(Equation 233)

the advantageous effect of increasing the minimum distance between 16×16=256 signal points in the I-Q plane for a specific LOS environment may be achieved. In the present embodiment, the scheme of structuring N different precoding matrices for a precoding hopping scheme with an N-slot time period (cycle) has been described. In this case, as the N different precoding matrices, F[0], F[1], F[2], . . . , F[N−2], F[N−1] are prepared.
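The even-versus-odd behavior of N described above can also be checked numerically. Under the assumption, following the discussion of FIGS. 43A-44B, that the poor reception points for s1 lie at phases 2πi/N (taking θ11(0)−θ21(0)=0 radians) and those for s2 at the same phases shifted by π radians, the sketch below reports the minimum angular gap between the two phase sets; it is strictly positive for N=3 but zero for N=4.

import math

def min_phase_gap(N):
    """Minimum circular distance between the s1 poor-point phases (2*pi*i/N)
    and the s2 poor-point phases (2*pi*i/N + pi)."""
    s1 = [2 * math.pi * i / N for i in range(N)]
    s2 = [(p + math.pi) % (2 * math.pi) for p in s1]
    def circ(a, b):
        d = abs(a - b) % (2 * math.pi)
        return min(d, 2 * math.pi - d)
    return min(circ(a, b) for a in s1 for b in s2)

print(min_phase_gap(3))  # ~1.047 (pi/3): the phases never coincide
print(min_phase_gap(4))  # 0.0: the s1 and s2 phases overlap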
In the present embodiment, an example of a single carrier transmission scheme has been described, and therefore the case of arranging symbols in the order F[0], F[1], F[2], . . . , F[N−2], F[N−1] in the time domain (or the frequency domain) has been described. The present invention is not, however, limited in this way, and the N different precoding matrices F[0], F[1], F[2], . . . , F[N−2], F[N−1] generated in the present embodiment may be adapted to a multi-carrier transmission scheme such as an OFDM transmission scheme or the like. As in Embodiment 1, as a scheme of adaptation in this case, precoding weights may be changed by arranging symbols in the frequency domain and in the frequency-time domain. Note that a precoding hopping scheme with an N-slot time period (cycle) has been described, but the same advantageous effects may be obtained by randomly using N different precoding matrices. In other words, the N different precoding matrices do not necessarily need to be used in a regular period (cycle). Furthermore, in the precoding matrix hopping scheme over an H-slot period (cycle) (H being a natural number larger than the number of slots N in the period (cycle) of the above scheme of regularly hopping between precoding matrices), when the N different precoding matrices of the present embodiment are included, the probability of excellent reception quality increases. In this case, Condition #17 and Condition #18 can be replaced by the following conditions. (The number of slots in the period (cycle) is considered to be N.)

Math 248
\[
e^{j(\theta_{11}(x)-\theta_{21}(x))} \neq e^{j(\theta_{11}(y)-\theta_{21}(y))} \text{ for } \exists x, \exists y \ (x \neq y;\ x, y = 0, 1, 2, \ldots, N-2, N-1)
\]
(Condition #17′)

(x is 0, 1, 2, . . . , N−2, N−1; y is 0, 1, 2, . . . , N−2, N−1; and x≠y.)

Math 249
\[
e^{j(\theta_{11}(x)-\theta_{21}(x)-\pi)} \neq e^{j(\theta_{11}(y)-\theta_{21}(y)-\pi)} \text{ for } \exists x, \exists y \ (x \neq y;\ x, y = 0, 1, 2, \ldots, N-2, N-1)
\]
(Condition #18′)

(x is 0, 1, 2, . . . , N−2, N−1; y is 0, 1, 2, . . . , N−2, N−1; and x≠y.)

Embodiment 10

The present embodiment describes a scheme for regularly hopping between precoding matrices using a unitary matrix that differs from the example in Embodiment 9. In the scheme of regularly hopping between precoding matrices over a period (cycle) with 2N slots, the precoding matrices prepared for the 2N slots are represented as follows.

Math 250
\[
\text{for } i = 0, 1, 2, \ldots, N-2, N-1: \quad
F[i]
= \frac{1}{\sqrt{\alpha^{2}+1}}
\begin{pmatrix}
e^{j\theta_{11}(i)} & \alpha \times e^{j(\theta_{11}(i)+\lambda)} \\
\alpha \times e^{j\theta_{21}(i)} & e^{j(\theta_{21}(i)+\lambda+\pi)}
\end{pmatrix}
\]
(Equation 234)

Let α be a fixed value (not depending on i), where α>0.

Math 251
\[
\text{for } i = N, N+1, N+2, \ldots, 2N-2, 2N-1: \quad
F[i]
= \frac{1}{\sqrt{\alpha^{2}+1}}
\begin{pmatrix}
\alpha \times e^{j\theta_{11}(i)} & e^{j(\theta_{11}(i)+\lambda)} \\
e^{j\theta_{21}(i)} & \alpha \times e^{j(\theta_{21}(i)+\lambda+\pi)}
\end{pmatrix}
\]
(Equation 235)

Let α be a fixed value (not depending on i), where α>0. (Let the α in Equation 234 and the α in Equation 235 be the same value.)

From Condition #5 (Math 106) and Condition #6 (Math 107) in Embodiment 3, the following conditions are important in Equation 234 for achieving excellent data reception quality.

Math 252
\[
e^{j(\theta_{11}(x)-\theta_{21}(x))} \neq e^{j(\theta_{11}(y)-\theta_{21}(y))} \text{ for } \forall x, \forall y \ (x \neq y;\ x, y = 0, 1, 2, \ldots, N-2, N-1)
\]
(Condition #21)

(x is 0, 1, 2, . . . , N−2, N−1; y is 0, 1, 2, . . . , N−2, N−1; and x≠y.)

Math 253
\[
e^{j(\theta_{11}(x)-\theta_{21}(x)-\pi)} \neq e^{j(\theta_{11}(y)-\theta_{21}(y)-\pi)} \text{ for } \forall x, \forall y \ (x \neq y;\ x, y = 0, 1, 2, \ldots, N-2, N-1)
\]
(Condition #22)

(x is 0, 1, 2, . . . , N−2, N−1; y is 0, 1, 2, . . . , N−2, N−1; and x≠y.)

Addition of the following condition is considered.

Math 254
\[
\theta_{11}(x) = \theta_{11}(x+N) \text{ for } \forall x \ (x = 0, 1, 2, \ldots, N-2, N-1)
\quad \text{and} \quad
\theta_{21}(y) = \theta_{21}(y+N) \text{ for } \forall y \ (y = 0, 1, 2, \ldots, N-2, N-1)
\]
(Condition #23)

Next, in order to distribute the poor reception points evenly with regard to phase in the complex plane, as described in Embodiment 6, Condition #24 and Condition #25 are provided.
Math 255
\[
\frac{e^{j(\theta_{11}(x+1)-\theta_{21}(x+1))}}{e^{j(\theta_{11}(x)-\theta_{21}(x))}} = e^{j\frac{2\pi}{N}} \text{ for } \forall x \ (x = 0, 1, 2, \ldots, N-2)
\]
(Condition #24)

Math 256
\[
\frac{e^{j(\theta_{11}(x+1)-\theta_{21}(x+1))}}{e^{j(\theta_{11}(x)-\theta_{21}(x))}} = e^{j\left(-\frac{2\pi}{N}\right)} \text{ for } \forall x \ (x = 0, 1, 2, \ldots, N-2)
\]
(Condition #25)

In other words, Condition #24 means that the difference in phase is 2π/N radians. On the other hand, Condition #25 means that the difference in phase is −2π/N radians. Letting θ11(0)−θ21(0)=0 radians, and letting α>1, the distribution of poor reception points for s1 and for s2 in the complex plane when N=4 is shown in FIGS. 45A and 45B. As is clear from FIGS. 45A and 45B, in the complex plane, the minimum distance between poor reception points for s1 is kept large, and similarly, the minimum distance between poor reception points for s2 is also kept large. Similar conditions are created when α<1. Furthermore, making the same considerations as in Embodiment 9, the probability of a greater distance between poor reception points in the complex plane increases when N is an odd number as compared to when N is an even number. However, when N is small, for example when N≤16, the minimum distance between poor reception points in the complex plane can be guaranteed to be a certain length, since the number of poor reception points is small. Accordingly, when N≤16, even if N is an even number, cases do exist where data reception quality can be guaranteed. Therefore, in the scheme for regularly hopping between precoding matrices based on Equations 234 and 235, when N is set to an odd number, the probability of improving data reception quality is high. Precoding matrices F[0]-F[2N−1] are generated based on Equations 234 and 235 (the precoding matrices F[0]-F[2N−1] may be arranged in any order for the 2N slots in the period (cycle)). Symbol number 2Ni may be precoded using F[0], symbol number 2Ni+1 may be precoded using F[1], . . . , and symbol number 2N×i+h may be precoded using F[h], for example (h=0, 1, 2, . . . , 2N−2, 2N−1). (In this case, as described in previous embodiments, precoding matrices need not be hopped between regularly.) Furthermore, when the modulation scheme for both s1 and s2 is 16QAM, if α is set as in Equation 233, the advantageous effect of increasing the minimum distance between 16×16=256 signal points in the I-Q plane for a specific LOS environment may be achieved. The following conditions are possible as conditions differing from Condition #23:

Math 257
\[
e^{j(\theta_{11}(x)-\theta_{21}(x))} \neq e^{j(\theta_{11}(y)-\theta_{21}(y))} \text{ for } \forall x, \forall y \ (x \neq y;\ x, y = N, N+1, N+2, \ldots, 2N-2, 2N-1)
\]
(Condition #26)

(where x is N, N+1, N+2, . . . , 2N−2, 2N−1; y is N, N+1, N+2, . . . , 2N−2, 2N−1; and x≠y.)

Math 258
\[
e^{j(\theta_{11}(x)-\theta_{21}(x)-\pi)} \neq e^{j(\theta_{11}(y)-\theta_{21}(y)-\pi)} \text{ for } \forall x, \forall y \ (x \neq y;\ x, y = N, N+1, N+2, \ldots, 2N-2, 2N-1)
\]
(Condition #27)

(where x is N, N+1, N+2, . . . , 2N−2, 2N−1; y is N, N+1, N+2, . . . , 2N−2, 2N−1; and x≠y.)

In this case, by satisfying Condition #21, Condition #22, Condition #26, and Condition #27, the distance in the complex plane between poor reception points for s1 is increased, as is the distance between poor reception points for s2, thereby achieving excellent data reception quality. In the present embodiment, the scheme of structuring 2N different precoding matrices for a precoding hopping scheme with a 2N-slot time period (cycle) has been described. In this case, as the 2N different precoding matrices, F[0], F[1], F[2], . . . , F[2N−2], F[2N−1] are prepared.
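A minimal sketch of the 2N-matrix construction of Equations 234 and 235 follows, assuming Condition #23 (θ11(i+N)=θ11(i) and θ21(i+N)=θ21(i)) so that length-N phase lists suffice; δ is fixed at π radians because the matrices are unitary. The function name is illustrative only.

import numpy as np

def matrices_embodiment10(theta11, theta21, lam, alpha):
    """2N unitary precoding matrices: Equation 234 for slots 0..N-1 and
    Equation 235 (the rows with alpha swapped) for slots N..2N-1."""
    N = len(theta11)
    norm = 1.0 / np.sqrt(alpha ** 2 + 1.0)
    F = []
    for i in range(N):  # Equation 234
        F.append(norm * np.array([
            [np.exp(1j * theta11[i]),
             alpha * np.exp(1j * (theta11[i] + lam))],
            [alpha * np.exp(1j * theta21[i]),
             np.exp(1j * (theta21[i] + lam + np.pi))]]))
    for i in range(N):  # Equation 235, reusing theta via Condition #23
        F.append(norm * np.array([
            [alpha * np.exp(1j * theta11[i]),
             np.exp(1j * (theta11[i] + lam))],
            [np.exp(1j * theta21[i]),
             alpha * np.exp(1j * (theta21[i] + lam + np.pi))]]))
    return F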
In the present embodiment, an example of a single carrier transmission scheme has been described, and therefore the case of arranging symbols in the order F[0], F[1], F[2], . . . , F[2N−2], F[2N−1] in the time domain (or the frequency domain) has been described. The present invention is not, however, limited in this way, and the 2N different precoding matrices F[0], F[1], F[2], . . . , F[2N−2], F[2N−1] generated in the present embodiment may be adapted to a multi-carrier transmission scheme such as an OFDM transmission scheme or the like. As in Embodiment 1, as a scheme of adaptation in this case, precoding weights may be changed by arranging symbols in the frequency domain and in the frequency-time domain. Note that a precoding hopping scheme with a 2N-slot time period (cycle) has been described, but the same advantageous effects may be obtained by randomly using 2N different precoding matrices. In other words, the 2N different precoding matrices do not necessarily need to be used in a regular period (cycle). Furthermore, in the precoding matrix hopping scheme over an H-slot period (cycle) (H being a natural number larger than the number of slots 2N in the period (cycle) of the above scheme of regularly hopping between precoding matrices), when the 2N different precoding matrices of the present embodiment are included, the probability of excellent reception quality increases.

Embodiment 11

The present embodiment describes a scheme for regularly hopping between precoding matrices using a non-unitary matrix. In the scheme of regularly hopping between precoding matrices over a period (cycle) with 2N slots, the precoding matrices prepared for the 2N slots are represented as follows.

Math 259
\[
\text{for } i = 0, 1, 2, \ldots, N-2, N-1: \quad
F[i]
= \frac{1}{\sqrt{\alpha^{2}+1}}
\begin{pmatrix}
e^{j\theta_{11}(i)} & \alpha \times e^{j(\theta_{11}(i)+\lambda)} \\
\alpha \times e^{j\theta_{21}(i)} & e^{j(\theta_{21}(i)+\lambda+\delta)}
\end{pmatrix}
\]
(Equation 236)

Let α be a fixed value (not depending on i), where α>0. Furthermore, let δ≠π radians.

Math 260
\[
\text{for } i = N, N+1, N+2, \ldots, 2N-2, 2N-1: \quad
F[i]
= \frac{1}{\sqrt{\alpha^{2}+1}}
\begin{pmatrix}
\alpha \times e^{j(\theta_{11}(i)+\lambda)} & e^{j\theta_{11}(i)} \\
e^{j(\theta_{21}(i)+\lambda+\delta)} & \alpha \times e^{j\theta_{21}(i)}
\end{pmatrix}
\]
(Equation 237)

Let α be a fixed value (not depending on i), where α>0. (Let the α in Equation 236 and the α in Equation 237 be the same value.)

From Condition #5 (Math 106) and Condition #6 (Math 107) in Embodiment 3, the following conditions are important in Equation 236 for achieving excellent data reception quality.

Math 261
\[
e^{j(\theta_{11}(x)-\theta_{21}(x))} \neq e^{j(\theta_{11}(y)-\theta_{21}(y))} \text{ for } \forall x, \forall y \ (x \neq y;\ x, y = 0, 1, 2, \ldots, N-2, N-1)
\]
(Condition #28)

(x is 0, 1, 2, . . . , N−2, N−1; y is 0, 1, 2, . . . , N−2, N−1; and x≠y.)

Math 262
\[
e^{j(\theta_{11}(x)-\theta_{21}(x)-\delta)} \neq e^{j(\theta_{11}(y)-\theta_{21}(y)-\delta)} \text{ for } \forall x, \forall y \ (x \neq y;\ x, y = 0, 1, 2, \ldots, N-2, N-1)
\]
(Condition #29)

(x is 0, 1, 2, . . . , N−2, N−1; y is 0, 1, 2, . . . , N−2, N−1; and x≠y.)

Addition of the following condition is considered.

Math 263
\[
\theta_{11}(x) = \theta_{11}(x+N) \text{ for } \forall x \ (x = 0, 1, 2, \ldots, N-2, N-1)
\quad \text{and} \quad
\theta_{21}(y) = \theta_{21}(y+N) \text{ for } \forall y \ (y = 0, 1, 2, \ldots, N-2, N-1)
\]
(Condition #30)

Note that instead of Equation 237, the precoding matrices in the following equation may be provided.

Math 264
\[
\text{for } i = N, N+1, N+2, \ldots, 2N-2, 2N-1: \quad
F[i]
= \frac{1}{\sqrt{\alpha^{2}+1}}
\begin{pmatrix}
\alpha \times e^{j\theta_{11}(i)} & e^{j(\theta_{11}(i)+\lambda)} \\
e^{j\theta_{21}(i)} & \alpha \times e^{j(\theta_{21}(i)+\lambda-\delta)}
\end{pmatrix}
\]
(Equation 238)

Let α be a fixed value (not depending on i), where α>0. (Let the α in Equation 236 and the α in Equation 238 be the same value.)

As an example, in order to distribute the poor reception points evenly with regard to phase in the complex plane, as described in Embodiment 6, Condition #31 and Condition #32 are provided.

Math 265
\[
\frac{e^{j(\theta_{11}(x+1)-\theta_{21}(x+1))}}{e^{j(\theta_{11}(x)-\theta_{21}(x))}} = e^{j\frac{2\pi}{N}} \text{ for } \forall x \ (x = 0, 1, 2, \ldots, N-2)
\]
(Condition #31)

Math 266
\[
\frac{e^{j(\theta_{11}(x+1)-\theta_{21}(x+1))}}{e^{j(\theta_{11}(x)-\theta_{21}(x))}} = e^{j\left(-\frac{2\pi}{N}\right)} \text{ for } \forall x \ (x = 0, 1, 2, \ldots, N-2)
\]
(Condition #32)

In other words, Condition #31 means that the difference in phase is 2π/N radians.
On the other hand, Condition #32 means that the difference in phase is −2π/N radians. Letting θ11(0)−θ21(0)=0 radians, letting α>1, and letting δ=(3π)/4 radians, the distribution of poor reception points for s1 and for s2 in the complex plane when N=4 is shown in FIGS. 46A and 46B. With these settings, the period (cycle) for hopping between precoding matrices is increased, and the minimum distance between poor reception points for s1, as well as the minimum distance between poor reception points for s2, in the complex plane is kept large, thereby achieving excellent reception quality. An example in which α>1, δ=(3π)/4 radians, and N=4 has been described, but the present invention is not limited in this way. Similar advantageous effects may be obtained for π/2 radians ≤ |δ| < π radians, α>0, and α≠1. The following conditions are possible as conditions differing from Condition #30:

Math 267
\[
e^{j(\theta_{11}(x)-\theta_{21}(x))} \neq e^{j(\theta_{11}(y)-\theta_{21}(y))} \text{ for } \forall x, \forall y \ (x \neq y;\ x, y = N, N+1, N+2, \ldots, 2N-2, 2N-1)
\]
(Condition #33)

(where x is N, N+1, N+2, . . . , 2N−2, 2N−1; y is N, N+1, N+2, . . . , 2N−2, 2N−1; and x≠y.)

Math 268
\[
e^{j(\theta_{11}(x)-\theta_{21}(x)-\pi)} \neq e^{j(\theta_{11}(y)-\theta_{21}(y)-\pi)} \text{ for } \forall x, \forall y \ (x \neq y;\ x, y = N, N+1, N+2, \ldots, 2N-2, 2N-1)
\]
(Condition #34)

(where x is N, N+1, N+2, . . . , 2N−2, 2N−1; y is N, N+1, N+2, . . . , 2N−2, 2N−1; and x≠y.)

In this case, by satisfying Condition #28, Condition #29, Condition #33, and Condition #34, the distance in the complex plane between poor reception points for s1 is increased, as is the distance between poor reception points for s2, thereby achieving excellent data reception quality. In the present embodiment, the scheme of structuring 2N different precoding matrices for a precoding hopping scheme with a 2N-slot time period (cycle) has been described. In this case, as the 2N different precoding matrices, F[0], F[1], F[2], . . . , F[2N−2], F[2N−1] are prepared. In the present embodiment, an example of a single carrier transmission scheme has been described, and therefore the case of arranging symbols in the order F[0], F[1], F[2], . . . , F[2N−2], F[2N−1] in the time domain (or the frequency domain) has been described. The present invention is not, however, limited in this way, and the 2N different precoding matrices F[0], F[1], F[2], . . . , F[2N−2], F[2N−1] generated in the present embodiment may be adapted to a multi-carrier transmission scheme such as an OFDM transmission scheme or the like. As in Embodiment 1, as a scheme of adaptation in this case, precoding weights may be changed by arranging symbols in the frequency domain and in the frequency-time domain. Note that a precoding hopping scheme with a 2N-slot time period (cycle) has been described, but the same advantageous effects may be obtained by randomly using 2N different precoding matrices. In other words, the 2N different precoding matrices do not necessarily need to be used in a regular period (cycle). Furthermore, in the precoding matrix hopping scheme over an H-slot period (cycle) (H being a natural number larger than the number of slots 2N in the period (cycle) of the above scheme of regularly hopping between precoding matrices), when the 2N different precoding matrices of the present embodiment are included, the probability of excellent reception quality increases.

Embodiment 12

The present embodiment describes a scheme for regularly hopping between precoding matrices using a non-unitary matrix. In the scheme of regularly hopping between precoding matrices over a period (cycle) with N slots, the precoding matrices prepared for the N slots are represented as follows.
Math 269
\[
F[i]
= \frac{1}{\sqrt{\alpha^{2}+1}}
\begin{pmatrix}
e^{j\theta_{11}(i)} & \alpha \times e^{j(\theta_{11}(i)+\lambda)} \\
\alpha \times e^{j\theta_{21}(i)} & e^{j(\theta_{21}(i)+\lambda+\delta)}
\end{pmatrix}
\]
(Equation 239)

Let α be a fixed value (not depending on i), where α>0. Furthermore, let δ≠π radians (a fixed value not depending on i), and i=0, 1, 2, . . . , N−2, N−1.

From Condition #5 (Math 106) and Condition #6 (Math 107) in Embodiment 3, the following conditions are important in Equation 239 for achieving excellent data reception quality.

Math 270
\[
e^{j(\theta_{11}(x)-\theta_{21}(x))} \neq e^{j(\theta_{11}(y)-\theta_{21}(y))} \text{ for } \forall x, \forall y \ (x \neq y;\ x, y = 0, 1, 2, \ldots, N-2, N-1)
\]
(Condition #35)

(x is 0, 1, 2, . . . , N−2, N−1; y is 0, 1, 2, . . . , N−2, N−1; and x≠y.)

Math 271
\[
e^{j(\theta_{11}(x)-\theta_{21}(x)-\delta)} \neq e^{j(\theta_{11}(y)-\theta_{21}(y)-\delta)} \text{ for } \forall x, \forall y \ (x \neq y;\ x, y = 0, 1, 2, \ldots, N-2, N-1)
\]
(Condition #36)

(x is 0, 1, 2, . . . , N−2, N−1; y is 0, 1, 2, . . . , N−2, N−1; and x≠y.)

As an example, in order to distribute the poor reception points evenly with regard to phase in the complex plane, as described in Embodiment 6, Condition #37 and Condition #38 are provided.

Math 272
\[
\frac{e^{j(\theta_{11}(x+1)-\theta_{21}(x+1))}}{e^{j(\theta_{11}(x)-\theta_{21}(x))}} = e^{j\frac{2\pi}{N}} \text{ for } \forall x \ (x = 0, 1, 2, \ldots, N-2)
\]
(Condition #37)

Math 273
\[
\frac{e^{j(\theta_{11}(x+1)-\theta_{21}(x+1))}}{e^{j(\theta_{11}(x)-\theta_{21}(x))}} = e^{j\left(-\frac{2\pi}{N}\right)} \text{ for } \forall x \ (x = 0, 1, 2, \ldots, N-2)
\]
(Condition #38)

In other words, Condition #37 means that the difference in phase is 2π/N radians. On the other hand, Condition #38 means that the difference in phase is −2π/N radians. In this case, if π/2 radians ≤ |δ| < π radians, α>0, and α≠1, the distance in the complex plane between poor reception points for s1 is increased, as is the distance between poor reception points for s2, thereby achieving excellent data reception quality. Note that Condition #37 and Condition #38 are not always necessary.

In the present embodiment, the scheme of structuring N different precoding matrices for a precoding hopping scheme with an N-slot time period (cycle) has been described. In this case, as the N different precoding matrices, F[0], F[1], F[2], . . . , F[N−2], F[N−1] are prepared. In the present embodiment, an example of a single carrier transmission scheme has been described, and therefore the case of arranging symbols in the order F[0], F[1], F[2], . . . , F[N−2], F[N−1] in the time domain (or the frequency domain) has been described. The present invention is not, however, limited in this way, and the N different precoding matrices F[0], F[1], F[2], . . . , F[N−2], F[N−1] generated in the present embodiment may be adapted to a multi-carrier transmission scheme such as an OFDM transmission scheme or the like. As in Embodiment 1, as a scheme of adaptation in this case, precoding weights may be changed by arranging symbols in the frequency domain and in the frequency-time domain. Note that a precoding hopping scheme with an N-slot time period (cycle) has been described, but the same advantageous effects may be obtained by randomly using N different precoding matrices. In other words, the N different precoding matrices do not necessarily need to be used in a regular period (cycle). Furthermore, in the precoding matrix hopping scheme over an H-slot period (cycle) (H being a natural number larger than the number of slots N in the period (cycle) of the above scheme of regularly hopping between precoding matrices), when the N different precoding matrices of the present embodiment are included, the probability of excellent reception quality increases. In this case, Condition #35 and Condition #36 can be replaced by the following conditions. (The number of slots in the period (cycle) is considered to be N.)

Math 274
\[
e^{j(\theta_{11}(x)-\theta_{21}(x))} \neq e^{j(\theta_{11}(y)-\theta_{21}(y))} \text{ for } \exists x, \exists y \ (x \neq y;\ x, y = 0, 1, 2, \ldots, N-2, N-1)
\]
(Condition #35′)
(x is 0, 1, 2, . . . , N−2, N−1; y is 0, 1, 2, . . . , N−2, N−1; and x≠y.)

Math 275
\[
e^{j(\theta_{11}(x)-\theta_{21}(x)-\delta)} \neq e^{j(\theta_{11}(y)-\theta_{21}(y)-\delta)} \text{ for } \exists x, \exists y \ (x \neq y;\ x, y = 0, 1, 2, \ldots, N-2, N-1)
\]
(Condition #36′)

(x is 0, 1, 2, . . . , N−2, N−1; y is 0, 1, 2, . . . , N−2, N−1; and x≠y.)

Embodiment 13

The present embodiment describes a different example from Embodiment 8. In the scheme of regularly hopping between precoding matrices over a period (cycle) with 2N slots, the precoding matrices prepared for the 2N slots are represented as follows.

Math 276
\[
\text{for } i = 0, 1, 2, \ldots, N-2, N-1: \quad
F[i]
= \frac{1}{\sqrt{\alpha^{2}+1}}
\begin{pmatrix}
e^{j\theta_{11}(i)} & \alpha \times e^{j(\theta_{11}(i)+\lambda)} \\
\alpha \times e^{j\theta_{21}(i)} & e^{j(\theta_{21}(i)+\lambda+\delta)}
\end{pmatrix}
\]
(Equation 240)

Let α be a fixed value (not depending on i), where α>0. Furthermore, let δ≠π radians.

Math 277
\[
\text{for } i = N, N+1, N+2, \ldots, 2N-2, 2N-1: \quad
F[i]
= \frac{1}{\sqrt{\alpha^{2}+1}}
\begin{pmatrix}
\alpha \times e^{j(\theta_{11}(i)+\lambda)} & e^{j\theta_{11}(i)} \\
e^{j(\theta_{21}(i)+\lambda+\delta)} & \alpha \times e^{j\theta_{21}(i)}
\end{pmatrix}
\]
(Equation 241)

Let α be a fixed value (not depending on i), where α>0. (Let the α in Equation 240 and the α in Equation 241 be the same value.)

Furthermore, the 2×N×M period (cycle) precoding matrices based on Equations 240 and 241 are represented by the following equations.

Math 278
\[
\text{for } i = 0, 1, 2, \ldots, N-2, N-1: \quad
F[2 \times N \times k + i]
= \frac{1}{\sqrt{\alpha^{2}+1}}
\begin{pmatrix}
e^{j\theta_{11}(i)} & \alpha \times e^{j(\theta_{11}(i)+\lambda)} \\
\alpha \times e^{j(\theta_{21}(i)+X_{k})} & e^{j(\theta_{21}(i)+X_{k}+\lambda+\delta)}
\end{pmatrix}
\]
(Equation 242)

In this case, k=0, 1, . . . , M−2, M−1.

Math 279
\[
\text{for } i = N, N+1, N+2, \ldots, 2N-2, 2N-1: \quad
F[2 \times N \times k + i]
= \frac{1}{\sqrt{\alpha^{2}+1}}
\begin{pmatrix}
\alpha \times e^{j(\theta_{11}(i)+\lambda)} & e^{j\theta_{11}(i)} \\
e^{j(\theta_{21}(i)+\lambda+\delta+Y_{k})} & \alpha \times e^{j(\theta_{21}(i)+Y_{k})}
\end{pmatrix}
\]
(Equation 243)

In this case, k=0, 1, . . . , M−2, M−1. Furthermore, Xk=Yk may be true, or Xk≠Yk may be true.

Precoding matrices F[0]-F[2×N×M−1] are thus generated (the precoding matrices F[0]-F[2×N×M−1] may be in any order for the 2×N×M slots in the period (cycle)). Symbol number 2×N×M×i may be precoded using F[0], symbol number 2×N×M×i+1 may be precoded using F[1], . . . , and symbol number 2×N×M×i+h may be precoded using F[h], for example (h=0, 1, 2, . . . , 2×N×M−2, 2×N×M−1). (In this case, as described in previous embodiments, precoding matrices need not be hopped between regularly.) Generating the precoding matrices in this way achieves a precoding matrix hopping scheme with a large period (cycle), allowing for the position of poor reception points to be easily changed, which may lead to improved data reception quality.

The 2×N×M period (cycle) precoding matrices in Equation 242 may be changed to the following equation.

Math 280
\[
\text{for } i = 0, 1, 2, \ldots, N-2, N-1: \quad
F[2 \times N \times k + i]
= \frac{1}{\sqrt{\alpha^{2}+1}}
\begin{pmatrix}
e^{j(\theta_{11}(i)+X_{k})} & \alpha \times e^{j(\theta_{11}(i)+X_{k}+\lambda)} \\
\alpha \times e^{j\theta_{21}(i)} & e^{j(\theta_{21}(i)+\lambda+\delta)}
\end{pmatrix}
\]
(Equation 244)

In this case, k=0, 1, . . . , M−2, M−1.

The 2×N×M period (cycle) precoding matrices in Equation 243 may also be changed to any of Equations 245-247.

Math 281
\[
\text{for } i = N, N+1, N+2, \ldots, 2N-2, 2N-1: \quad
F[2 \times N \times k + i]
= \frac{1}{\sqrt{\alpha^{2}+1}}
\begin{pmatrix}
\alpha \times e^{j(\theta_{11}(i)+\lambda+Y_{k})} & e^{j(\theta_{11}(i)+Y_{k})} \\
e^{j(\theta_{21}(i)+\lambda+\delta)} & \alpha \times e^{j\theta_{21}(i)}
\end{pmatrix}
\]
(Equation 245)

In this case, k=0, 1, . . . , M−2, M−1.

Math 282
\[
\text{for } i = N, N+1, N+2, \ldots, 2N-2, 2N-1: \quad
F[2 \times N \times k + i]
= \frac{1}{\sqrt{\alpha^{2}+1}}
\begin{pmatrix}
\alpha \times e^{j\theta_{11}(i)} & e^{j(\theta_{11}(i)+\lambda)} \\
e^{j(\theta_{21}(i)+Y_{k})} & \alpha \times e^{j(\theta_{21}(i)+\lambda+\delta+Y_{k})}
\end{pmatrix}
\]
(Equation 246)

In this case, k=0, 1, . . . , M−2, M−1.

Math 283
\[
\text{for } i = N, N+1, N+2, \ldots, 2N-2, 2N-1: \quad
F[2 \times N \times k + i]
= \frac{1}{\sqrt{\alpha^{2}+1}}
\begin{pmatrix}
\alpha \times e^{j(\theta_{11}(i)+Y_{k})} & e^{j(\theta_{11}(i)+\lambda+Y_{k})} \\
e^{j\theta_{21}(i)} & \alpha \times e^{j(\theta_{21}(i)+\lambda-\delta)}
\end{pmatrix}
\]
(Equation 247)

In this case, k=0, 1, . . . , M−2, M−1.

Focusing on poor reception points, if Equations 242 through 247 satisfy the following conditions,

Math 284
\[
e^{j(\theta_{11}(x)-\theta_{21}(x))} \neq e^{j(\theta_{11}(y)-\theta_{21}(y))} \text{ for } \forall x, \forall y \ (x \neq y;\ x, y = 0, 1, 2, \ldots, N-2, N-1)
\]
(Condition #39)

(x is 0, 1, 2, . . . , N−2, N−1; y is 0, 1, 2, . . . , N−2, N−1; and x≠y.)

Math 285
\[
e^{j(\theta_{11}(x)-\theta_{21}(x)-\delta)} \neq e^{j(\theta_{11}(y)-\theta_{21}(y)-\delta)} \text{ for } \forall x, \forall y \ (x \neq y;\ x, y = 0, 1, 2, \ldots, N-2, N-1)
\]
(Condition #40)

(x is 0, 1, 2, . . . , N−2, N−1; y is 0, 1, 2, . . . , N−2, N−1; and x≠y.)

Math 286
\[
\theta_{11}(x) = \theta_{11}(x+N) \text{ for } \forall x \ (x = 0, 1, 2, \ldots, N-2, N-1)
\quad \text{and} \quad
\theta_{21}(y) = \theta_{21}(y+N) \text{ for } \forall y \ (y = 0, 1, 2, \ldots, N-2, N-1)
\]
(Condition #41)

then excellent data reception quality is achieved. Note that in Embodiment 8, Condition #39 and Condition #40 should be satisfied.
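The 2×N×M-period construction can be sketched by combining Equations 242 and 243. This is a minimal illustration, assuming θ lists of length N (with the periodicity of Condition #41), offset lists Xk and Yk of length M, and hypothetical helper names.

import numpy as np

def matrices_embodiment13(theta11, theta21, X, Y, lam, delta, alpha):
    """2*N*M precoding matrices F[2*N*k + i]: Equation 242 for i = 0..N-1
    and Equation 243 for i = N..2N-1, for each k = 0..M-1."""
    N, M = len(theta11), len(X)
    norm = 1.0 / np.sqrt(alpha ** 2 + 1.0)
    F = []
    for k in range(M):
        for i in range(N):  # Equation 242
            F.append(norm * np.array([
                [np.exp(1j * theta11[i]),
                 alpha * np.exp(1j * (theta11[i] + lam))],
                [alpha * np.exp(1j * (theta21[i] + X[k])),
                 np.exp(1j * (theta21[i] + X[k] + lam + delta))]]))
        for i in range(N):  # Equation 243 (theta(i+N) = theta(i), Condition #41)
            F.append(norm * np.array([
                [alpha * np.exp(1j * (theta11[i] + lam)),
                 np.exp(1j * theta11[i])],
                [np.exp(1j * (theta21[i] + lam + delta + Y[k])),
                 alpha * np.exp(1j * (theta21[i] + Y[k]))]]))
    return F  # F[0]-F[2*N*M-1]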
Focusing on Xk and Yk, if Equations 242 through 247 satisfy the following conditions,

Math 287
\[
X_{a} \neq X_{b} + 2 \times s \times \pi \text{ for } \forall a, \forall b \ (a \neq b;\ a, b = 0, 1, 2, \ldots, M-2, M-1)
\]
(Condition #42)

(a is 0, 1, 2, . . . , M−2, M−1; b is 0, 1, 2, . . . , M−2, M−1; and a≠b.) (Here, s is an integer.)

Math 288
\[
Y_{a} \neq Y_{b} + 2 \times u \times \pi \text{ for } \forall a, \forall b \ (a \neq b;\ a, b = 0, 1, 2, \ldots, M-2, M-1)
\]
(Condition #43)

(a is 0, 1, 2, . . . , M−2, M−1; b is 0, 1, 2, . . . , M−2, M−1; and a≠b.) (Here, u is an integer.)

then excellent data reception quality is achieved. Note that in Embodiment 8, Condition #42 should be satisfied. In Equations 242 through 247, when 0 radians ≤ δ < 2π radians, the matrices are a unitary matrix when δ=π radians and are a non-unitary matrix when δ≠π radians. In the present scheme, use of a non-unitary matrix for π/2 radians ≤ |δ| < π radians is one characteristic structure, and excellent data reception quality is obtained. Use of a unitary matrix is another structure, and as described in detail in Embodiment 10 and Embodiment 16, if N is an odd number in Equations 242 through 247, the probability of obtaining excellent data reception quality increases.

Embodiment 14

The present embodiment describes an example of differentiating between usage of a unitary matrix and a non-unitary matrix as the precoding matrix in the scheme for regularly hopping between precoding matrices. The following describes an example that uses a two-by-two precoding matrix (letting each element be a complex number), i.e. the case when two modulated signals (s1(t) and s2(t)) that are based on a modulation scheme are precoded, and the two precoded signals are transmitted by two antennas. When transmitting data using a scheme of regularly hopping between precoding matrices, the mapping units 306A and 306B in the transmission device in FIG. 3 and FIG. 13 hop the modulation scheme in accordance with the frame structure signal 313. The relationship between the modulation level (the number of signal points for the modulation scheme in the I-Q plane) of the modulation scheme and the precoding matrices is described. The advantage of the scheme of regularly hopping between precoding matrices is that, as described in Embodiment 6, excellent data reception quality is achieved in an LOS environment. In particular, when the reception device performs ML calculation or applies APP (or Max-log APP) based on ML calculation, the advantageous effect is considerable. Incidentally, ML calculation greatly impacts circuit scale (calculation scale) in accordance with the modulation level of the modulation scheme. For example, when two precoded signals are transmitted from two antennas, and the same modulation scheme is used for the two modulated signals (signals based on the modulation scheme before precoding), the number of candidate signal points in the I-Q plane (received signal points 1101 in FIG. 11) is 4×4=16 when the modulation scheme is QPSK, 16×16=256 when the modulation scheme is 16QAM, 64×64=4,096 when the modulation scheme is 64QAM, 256×256=65,536 when the modulation scheme is 256QAM, and 1024×1024=1,048,576 when the modulation scheme is 1024QAM.
In order to keep the calculation scale of the reception device down to a certain circuit size, when the modulation scheme is QPSK, 16QAM, or 64QAM, ML calculation ((Max-log) APP based on ML calculation) is used, and when the modulation scheme is 256QAM or 1024QAM, linear operation such as MMSE or ZF is used in the reception device. (In some cases, ML calculation may be used for 256QAM.) When such a reception device is assumed, consideration of the Signal-to-Noise Power Ratio (SNR) after separation of multiple signals indicates that a unitary matrix is appropriate as the precoding matrix when the reception device performs linear operation such as MMSE or ZF, whereas either a unitary matrix or a non-unitary matrix may be used when the reception device performs ML calculation. Taking any of the above embodiments into consideration, when two precoded signals are transmitted from two antennas and the same modulation scheme is used for the two modulated signals (signals based on the modulation scheme before precoding), if a non-unitary matrix is used as the precoding matrix in the scheme for regularly hopping between precoding matrices when the modulation level of the modulation scheme is equal to or less than 64 (or equal to or less than 256), and a unitary matrix is used when the modulation level is greater than 64 (or greater than 256), then for all of the modulation schemes supported by the transmission system, there is an increased probability of achieving the advantageous effect whereby excellent data reception quality is achieved for any of the modulation schemes while reducing the circuit scale of the reception device. When the modulation level of the modulation scheme is equal to or less than 64 (or equal to or less than 256) as well, in some cases use of a unitary matrix may be preferable. Based on this consideration, when a plurality of modulation schemes are supported in which the modulation level is equal to or less than 64 (or equal to or less than 256), it is important that in some cases, in some of the plurality of supported modulation schemes where the modulation level is equal to or less than 64, a non-unitary matrix is used as the precoding matrix in the scheme for regularly hopping between precoding matrices. The case of transmitting two precoded signals from two antennas has been described above as an example, but the present invention is not limited in this way. In the case when N precoded signals are transmitted from N antennas, and the same modulation scheme is used for the N modulated signals (signals based on the modulation scheme before precoding), a threshold βN may be established for the modulation level of the modulation scheme. When a plurality of modulation schemes for which the modulation level is equal to or less than βN are supported, in some of the plurality of supported modulation schemes where the modulation level is equal to or less than βN, a non-unitary matrix is used as the precoding matrices in the scheme for regularly hopping between precoding matrices, whereas for modulation schemes for which the modulation level is greater than βN, a unitary matrix is used. In this way, for all of the modulation schemes supported by the transmission system, there is an increased probability of achieving the advantageous effect whereby excellent data reception quality is achieved for any of the modulation schemes while reducing the circuit scale of the reception device.
(When the modulation level of the modulation scheme is equal to or less than βN, a non-unitary matrix may always be used as the precoding matrix in the scheme for regularly hopping between precoding matrices.) In the above description, the same modulation scheme has been described as being used in the modulation scheme for simultaneously transmitting N modulated signals. The following, however, describes the case in which two or more modulation schemes are used for simultaneously transmitting N modulated signals. As an example, the case in which two precoded signals are transmitted by two antennas is described. The two modulated signals (signals based on the modulation scheme before precoding) are either modulated with the same modulation scheme, or, when modulated with different modulation schemes, are modulated with a modulation scheme having a modulation level of 2^{a1} or a modulation level of 2^{a2}. In this case, when the reception device uses ML calculation ((Max-log) APP based on ML calculation), the number of candidate signal points in the I-Q plane (received signal points 1101 in FIG. 11) is 2^{a1} × 2^{a2} = 2^{a1+a2}. As described above, in order to achieve excellent data reception quality while reducing the circuit scale of the reception device, a threshold 2^{β} may be provided for 2^{a1+a2}, and when 2^{a1+a2} ≤ 2^{β}, a non-unitary matrix may be used as the precoding matrix in the scheme for regularly hopping between precoding matrices, whereas a unitary matrix may be used when 2^{a1+a2} > 2^{β}. Furthermore, when 2^{a1+a2} ≤ 2^{β}, in some cases use of a unitary matrix may be preferable. Based on this consideration, when a plurality of combinations of modulation schemes are supported for which 2^{a1+a2} ≤ 2^{β}, it is important that in some of the supported combinations of modulation schemes for which 2^{a1+a2} ≤ 2^{β}, a non-unitary matrix is used as the precoding matrix in the scheme for regularly hopping between precoding matrices. As an example, the case in which two precoded signals are transmitted by two antennas has been described, but the present invention is not limited in this way. For example, N modulated signals (signals based on the modulation scheme before precoding) may be either modulated with the same modulation scheme or, when modulated with different modulation schemes, the modulation level of the modulation scheme for the i-th modulated signal may be 2^{ai} (where i=1, 2, . . . , N−1, N). In this case, when the reception device uses ML calculation ((Max-log) APP based on ML calculation), the number of candidate signal points in the I-Q plane (received signal points 1101 in FIG. 11) is 2^{a1} × 2^{a2} × . . . × 2^{ai} × . . . × 2^{aN} = 2^{a1+a2+ . . . +ai+ . . . +aN}. As described above, in order to achieve excellent data reception quality while reducing the circuit scale of the reception device, a threshold 2^{β} may be provided for 2^{a1+a2+ . . . +ai+ . . . +aN}.

Math 289
\[
2^{a_{1}+a_{2}+\cdots+a_{i}+\cdots+a_{N}} = 2^{Y} \leq 2^{\beta}, \quad \text{where } Y = \sum_{i=1}^{N} a_{i}
\]
(Condition #44)

When a plurality of combinations of modulation schemes satisfying Condition #44 are supported, in some of the supported combinations of modulation schemes satisfying Condition #44, a non-unitary matrix is used as the precoding matrix in the scheme for regularly hopping between precoding matrices.

Math 290
\[
2^{a_{1}+a_{2}+\cdots+a_{i}+\cdots+a_{N}} = 2^{Y} > 2^{\beta}, \quad \text{where } Y = \sum_{i=1}^{N} a_{i}
\]
(Condition #45)

By using a unitary matrix in all of the combinations of modulation schemes satisfying Condition #45, then for all of the modulation schemes supported by the transmission system, there is an increased probability of achieving the advantageous effect whereby excellent data reception quality is achieved while reducing the circuit scale of the reception device for any of the combinations of modulation schemes. (A non-unitary matrix may be used as the precoding matrix in the scheme for regularly hopping between precoding matrices in all of the supported combinations of modulation schemes satisfying Condition #44.)
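The decision rule of Conditions #44 and #45 reduces to comparing the total number of bits per precoded symbol across the streams, Y = a1 + a2 + . . . + aN, against the threshold β. The following sketch is illustrative only; the function name is hypothetical, and β would be chosen from the circuit scale the reception device can accept.

import math

def matrix_family_for(modulation_levels, beta):
    """modulation_levels: the levels 2**a_i per stream, e.g. [16, 64]
    for 16QAM on one stream and 64QAM on the other."""
    Y = sum(math.log2(level) for level in modulation_levels)  # Y = sum of a_i
    if Y <= beta:
        # Condition #44: ML reception feasible; non-unitary matrices may be used
        return "non-unitary"
    # Condition #45: linear reception (MMSE/ZF) assumed; use unitary matrices
    return "unitary"

For example, with β=12, two 64QAM streams (Y=12) fall under Condition #44, while two 1024QAM streams (Y=20) fall under Condition #45.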
Embodiment 15

The present embodiment describes an example of a system that adopts a scheme for regularly hopping between precoding matrices using a multi-carrier transmission scheme such as OFDM.

FIGS. 47A and 47B show an example according to the present embodiment of frame structure in the time and frequency domains for a signal transmitted by a broadcast station (base station) in a system that adopts a scheme for regularly hopping between precoding matrices using a multi-carrier transmission scheme such as OFDM. (The frame structure is set to extend from time $1 to time $T.) FIG. 47A shows the frame structure in the time and frequency domains for the stream s1 described in Embodiment 1, and FIG. 47B shows the frame structure in the time and frequency domains for the stream s2 described in Embodiment 1. Symbols at the same time and the same (sub)carrier in stream s1 and stream s2 are transmitted by a plurality of antennas at the same time and the same frequency. In FIGS. 47A and 47B, the (sub)carriers used when using OFDM are divided as follows: a carrier group #A composed of (sub)carrier a through (sub)carrier a+Na, a carrier group #B composed of (sub)carrier b through (sub)carrier b+Nb, a carrier group #C composed of (sub)carrier c through (sub)carrier c+Nc, a carrier group #D composed of (sub)carrier d through (sub)carrier d+Nd, . . . . In each subcarrier group, a plurality of transmission schemes are assumed to be supported. By supporting a plurality of transmission schemes, it is possible to effectively capitalize on the advantages of the transmission schemes. For example, in FIGS. 47A and 47B, a spatial multiplexing MIMO system or a MIMO system with a fixed precoding matrix is used for carrier group #A, a MIMO system that regularly hops between precoding matrices is used for carrier group #B, only stream s1 is transmitted in carrier group #C, and space-time block coding is used to transmit carrier group #D.

FIGS. 48A and 48B show an example according to the present embodiment of frame structure in the time and frequency domains for a signal transmitted by a broadcast station (base station) in a system that adopts a scheme for regularly hopping between precoding matrices using a multi-carrier transmission scheme such as OFDM. FIGS. 48A and 48B show a frame structure at a different time than FIGS. 47A and 47B, from time $X to time $X+T′. In FIGS. 48A and 48B, as in FIGS. 47A and 47B, the (sub)carriers used when using OFDM are divided as follows: a carrier group #A composed of (sub)carrier a through (sub)carrier a+Na, a carrier group #B composed of (sub)carrier b through (sub)carrier b+Nb, a carrier group #C composed of (sub)carrier c through (sub)carrier c+Nc, a carrier group #D composed of (sub)carrier d through (sub)carrier d+Nd, . . . .
The difference between FIGS. 47A and 47B and FIGS. 48A and 48B is that in some carrier groups, the transmission scheme used in FIGS. 47A and 47B differs from the transmission scheme used in FIGS. 48A and 48B. In FIGS. 48A and 48B, space-time block coding is used to transmit carrier group #A, a MIMO system that regularly hops between precoding matrices is used for carrier group #B, a MIMO system that regularly hops between precoding matrices is used for carrier group #C, and only stream s1 is transmitted in carrier group #D. Next, the supported transmission schemes are described.

FIG. 49 shows a signal processing scheme when using a spatial multiplexing MIMO system or a MIMO system with a fixed precoding matrix. FIG. 49 bears the same numbers as FIG. 6. A weighting unit 600 receives, as inputs, a stream s1(t) (307A) and a stream s2(t) (307B), which are baseband signals in accordance with a certain modulation scheme, as well as information 315 regarding the weighting scheme, and outputs a modulated signal z1(t) (309A) after weighting and a modulated signal z2(t) (309B) after weighting. Here, when the information 315 regarding the weighting scheme indicates a spatial multiplexing MIMO system, the signal processing in scheme #1 of FIG. 49 is performed. Specifically, the following processing is performed.

Math 291
\[
\begin{pmatrix} z1(t) \\ z2(t) \end{pmatrix}
= \begin{pmatrix} e^{j0} & 0 \\ 0 & e^{j0} \end{pmatrix}
\begin{pmatrix} s1(t) \\ s2(t) \end{pmatrix}
= \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}
\begin{pmatrix} s1(t) \\ s2(t) \end{pmatrix}
= \begin{pmatrix} s1(t) \\ s2(t) \end{pmatrix}
\]
(Equation 250)

When a scheme for transmitting one modulated signal is supported, from the standpoint of transmission power, Equation 250 may be represented as Equation 251.

Math 292
\[
\begin{pmatrix} z1(t) \\ z2(t) \end{pmatrix}
= \frac{1}{\sqrt{2}} \begin{pmatrix} e^{j0} & 0 \\ 0 & e^{j0} \end{pmatrix}
\begin{pmatrix} s1(t) \\ s2(t) \end{pmatrix}
= \frac{1}{\sqrt{2}} \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}
\begin{pmatrix} s1(t) \\ s2(t) \end{pmatrix}
= \begin{pmatrix} \frac{1}{\sqrt{2}} s1(t) \\ \frac{1}{\sqrt{2}} s2(t) \end{pmatrix}
\]
(Equation 251)

When the information 315 regarding the weighting scheme indicates a MIMO system in which precoding matrices are regularly hopped between, signal processing in scheme #2, for example, of FIG. 49 is performed. Specifically, the following processing is performed.

Math 293
\[
\begin{pmatrix} z1(t) \\ z2(t) \end{pmatrix}
= \frac{1}{\sqrt{\alpha^{2}+1}}
\begin{pmatrix}
e^{j\theta_{11}} & \alpha \times e^{j(\theta_{11}+\lambda)} \\
\alpha \times e^{j\theta_{21}} & e^{j(\theta_{21}+\lambda+\delta)}
\end{pmatrix}
\begin{pmatrix} s1(t) \\ s2(t) \end{pmatrix}
\]
(Equation 252)

Here, θ11, θ21, λ, and δ are fixed values.

FIG. 50 shows the structure of modulated signals when using space-time block coding. A space-time block coding unit (5002) in FIG. 50 receives, as input, a baseband signal based on a certain modulation scheme. For example, the space-time block coding unit (5002) receives symbol s1, symbol s2, . . . as inputs. As shown in FIG. 50, space-time block coding is performed, z1 (5003A) becomes “s1 as symbol #0”, “−s2* as symbol #1”, “s3 as symbol #2”, “−s4* as symbol #3”, . . . , and z2 (5003B) becomes “s2 as symbol #0”, “s1* as symbol #1”, “s4 as symbol #2”, “s3* as symbol #3”, . . . . In this case, symbol #X in z1 and symbol #X in z2 are transmitted from the antennas at the same time, over the same frequency.

In FIGS. 47A, 47B, 48A, and 48B, only symbols transmitting data are shown. In practice, however, it is necessary to transmit information such as the transmission scheme, modulation scheme, error correction scheme, and the like. For example, as in FIG. 51, these pieces of information can be transmitted to a communication partner by regular transmission with only one modulated signal z1. It is also necessary to transmit symbols for estimation of channel fluctuation, i.e. for the reception device to estimate channel fluctuation (for example, a pilot symbol, reference symbol, preamble, a Phase Shift Keying (PSK) symbol known at the transmission and reception sides, and the like). In FIGS. 47A, 47B, 48A, and 48B, these symbols are omitted. In practice, however, symbols for estimating channel fluctuation are included in the frame structure in the time and frequency domains. Accordingly, each carrier group is not composed only of symbols for transmitting data. (The same is true for Embodiment 1 as well.)
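The space-time block coding of FIG. 50 described above pairs up the input symbols and transmits each pair over two symbol times with conjugation and sign inversion. The following is a minimal Alamouti-style sketch consistent with the symbol listing above; the function name is illustrative.

def stbc_fig50(symbols):
    """Map s1, s2, s3, s4, ... to the two streams of FIG. 50:
    z1 = s1, -s2*, s3, -s4*, ...   z2 = s2, s1*, s4, s3*, ..."""
    z1, z2 = [], []
    for k in range(0, len(symbols) - 1, 2):
        a, b = symbols[k], symbols[k + 1]
        z1 += [a, -b.conjugate()]
        z2 += [b, a.conjugate()]
    return z1, z2  # z1[t] and z2[t] are sent at the same time and frequency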
FIG. 52 is an example of the structure of a transmission device in a broadcast station (base station) according to the present embodiment. A transmission scheme determining unit (5205) determines the number of carriers, modulation scheme, error correction scheme, coding rate for error correction coding, transmission scheme, and the like for each carrier group and outputs a control signal (5206). A modulated signal generating unit #1 (5201_1) receives, as input, information (5200_1) and the control signal (5206) and, based on the information on the transmission scheme in the control signal (5206), outputs a modulated signal z1 (5202_1) and a modulated signal z2 (5203_1) in the carrier group #A of FIGS. 47A, 47B, 48A, and 48B. Similarly, a modulated signal generating unit #2 (5201_2) receives, as input, information (5200_2) and the control signal (5206) and, based on the information on the transmission scheme in the control signal (5206), outputs a modulated signal z1 (5202_2) and a modulated signal z2 (5203_2) in the carrier group #B of FIGS. 47A, 47B, 48A, and 48B. Similarly, a modulated signal generating unit #3 (5201_3) receives, as input, information (5200_3) and the control signal (5206) and, based on the information on the transmission scheme in the control signal (5206), outputs a modulated signal z1 (5202_3) and a modulated signal z2 (5203_3) in the carrier group #C of FIGS. 47A, 47B, 48A, and 48B. Similarly, a modulated signal generating unit #4 (5201_4) receives, as input, information (5200_4) and the control signal (5206) and, based on the information on the transmission scheme in the control signal (5206), outputs a modulated signal z1 (5202_4) and a modulated signal z2 (5203_4) in the carrier group #D of FIGS. 47A, 47B, 48A, and 48B. While not shown in the figures, the same is true for modulated signal generating unit #5 through modulated signal generating unit #M−1. Similarly, a modulated signal generating unit #M (5201_M) receives, as input, information (5200_M) and the control signal (5206) and, based on the information on the transmission scheme in the control signal (5206), outputs a modulated signal z1 (5202_M) and a modulated signal z2 (5203_M) in a certain carrier group. An OFDM related processor (5207_1) receives, as inputs, the modulated signal z1 (5202_1) in carrier group #A, the modulated signal z1 (5202_2) in carrier group #B, the modulated signal z1 (5202_3) in carrier group #C, the modulated signal z1 (5202_4) in carrier group #D, . . . , the modulated signal z1 (5202_M) in a certain carrier group #M, and the control signal (5206), performs processing such as reordering, inverse Fourier transform, frequency conversion, amplification, and the like, and outputs a transmission signal (5208_1). The transmission signal (5208_1) is output as a radio wave from an antenna (5209_1). Similarly, an OFDM related processor (5207_2) receives, as inputs, the modulated signal z2 (5203_1) in carrier group #A, the modulated signal z2 (5203_2) in carrier group #B, the modulated signal z2 (5203_3) in carrier group #C, the modulated signal z2 (5203_4) in carrier group #D, . . .
, the modulated signal z2 (5203_M) in a certain carrier group #M, and the control signal (5206), performs processing such as reordering, inverse Fourier transform, frequency conversion, amplification, and the like, and outputs a transmission signal (5208_2). The transmission signal (5208_2) is output as a radio wave from an antenna (5209_2).

FIG. 53 shows an example of a structure of the modulated signal generating units #1-#M in FIG. 52. An error correction encoder (5302) receives, as inputs, information (5300) and a control signal (5301) and, in accordance with the control signal (5301), sets the error correction coding scheme and the coding rate for error correction coding, performs error correction coding, and outputs data (5303) after error correction coding. (In accordance with the setting of the error correction coding scheme and the coding rate for error correction coding, when using LDPC coding, turbo coding, or convolutional coding, for example, depending on the coding rate, puncturing may be performed to achieve the coding rate.) An interleaver (5304) receives, as input, error correction coded data (5303) and the control signal (5301) and, in accordance with information on the interleaving scheme included in the control signal (5301), reorders the error correction coded data (5303) and outputs interleaved data (5305). A mapping unit (5306_1) receives, as input, the interleaved data (5305) and the control signal (5301) and, in accordance with the information on the modulation scheme included in the control signal (5301), performs mapping and outputs a baseband signal (5307_1). Similarly, a mapping unit (5306_2) receives, as input, the interleaved data (5305) and the control signal (5301) and, in accordance with the information on the modulation scheme included in the control signal (5301), performs mapping and outputs a baseband signal (5307_2). A signal processing unit (5308) receives, as input, the baseband signal (5307_1), the baseband signal (5307_2), and the control signal (5301) and, based on information on the transmission scheme (for example, in this embodiment, a spatial multiplexing MIMO system, a MIMO scheme using a fixed precoding matrix, a MIMO scheme for regularly hopping between precoding matrices, space-time block coding, or a transmission scheme for transmitting only stream s1) included in the control signal (5301), performs signal processing. The signal processing unit (5308) outputs a processed signal z1 (5309_1) and a processed signal z2 (5309_2). Note that when the transmission scheme for transmitting only stream s1 is selected, the signal processing unit (5308) does not output the processed signal z2 (5309_2). Furthermore, in FIG. 53, one error correction encoder is shown, but the present invention is not limited in this way. For example, as shown in FIG. 3, a plurality of encoders may be provided.

FIG. 54 shows an example of the structure of the OFDM related processors (5207_1 and 5207_2) in FIG. 52. Elements that operate in a similar way to FIG. 14 bear the same reference signs. A reordering unit (5402A) receives, as input, the modulated signal z1 (5400_1) in carrier group #A, the modulated signal z1 (5400_2) in carrier group #B, the modulated signal z1 (5400_3) in carrier group #C, the modulated signal z1 (5400_4) in carrier group #D, . . . , the modulated signal z1 (5400_M) in a certain carrier group, and a control signal (5403), performs reordering, and outputs reordered signals 1405A and 1405B.
Note that in FIGS. 47A, 47B, 48A, 48B, and 51, an example of allocation of the carrier groups is described as being formed by groups of subcarriers, but the present invention is not limited in this way. Carrier groups may be formed by discrete subcarriers at each time interval. Furthermore, in FIGS. 47A, 47B, 48A, 48B, and 51, an example has been described in which the number of carriers in each carrier group does not change over time, but the present invention is not limited in this way. This point will be described separately below.

FIGS. 55A and 55B show an example of frame structure in the time and frequency domains for a scheme of setting the transmission scheme for each carrier group, as in FIGS. 47A, 47B, 48A, 48B, and 51. In FIGS. 55A and 55B, control information symbols are labeled 5500, individual control information symbols are labeled 5501, data symbols are labeled 5502, and pilot symbols are labeled 5503. Furthermore, FIG. 55A shows the frame structure in the time and frequency domains for stream s1, and FIG. 55B shows the frame structure in the time and frequency domains for stream s2. The control information symbols are for transmitting control information shared by the carrier group and are composed of symbols for the transmission and reception devices to perform frequency and time synchronization, information regarding the allocation of (sub)carriers, and the like. The control information symbols are set to be transmitted from only stream s1 at time $1. The individual control information symbols are for transmitting control information on individual subcarrier groups and are composed of information on the transmission scheme, modulation scheme, error correction coding scheme, coding rate for error correction coding, block size of error correction codes, and the like for the data symbols, information on the insertion scheme of pilot symbols, information on the transmission power of pilot symbols, and the like. The individual control information symbols are set to be transmitted from only stream s1 at time $1. The data symbols are for transmitting data (information), and as described with reference to FIGS. 47A through 50, are symbols of one of the following transmission schemes, for example: a spatial multiplexing MIMO system, a MIMO scheme using a fixed precoding matrix, a MIMO scheme for regularly hopping between precoding matrices, space-time block coding, or a transmission scheme for transmitting only stream s1. Note that in carrier group #A, carrier group #B, carrier group #C, and carrier group #D, data symbols are shown in stream s2, but when the transmission scheme for transmitting only stream s1 is used, in some cases there are no data symbols in stream s2. The pilot symbols are for the reception device to perform channel estimation, i.e. to estimate fluctuation corresponding to h11(t), h12(t), h21(t), and h22(t) in Equation 36. (In this embodiment, since a multi-carrier transmission scheme such as an OFDM scheme is used, the pilot symbols are for estimating fluctuation corresponding to h11(t), h12(t), h21(t), and h22(t) in each subcarrier.) Accordingly, the PSK transmission scheme, for example, is used for the pilot symbols, which are structured to form a pattern known by the transmission and reception devices. Furthermore, the reception device may use the pilot symbols for estimation of frequency offset, estimation of phase distortion, and time synchronization.
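As an illustration of how the pilot symbols support channel estimation, the sketch below assumes the simplest possible pilot pattern, which is an assumption of this example rather than something specified by the embodiment: at one pilot time only transmit antenna 1 sends a known PSK symbol, and at the next only transmit antenna 2 does, so that each tap toward a given receive antenna can be estimated per subcarrier by a single division.

def estimate_taps_from_pilots(y_when_tx1, y_when_tx2, p1, p2):
    """Least-squares estimate of h11 and h12 at receive antenna 1 for one
    subcarrier, assuming pilot p1 is sent alone from TX antenna 1 (received
    as y_when_tx1) and p2 alone from TX antenna 2 (received as y_when_tx2)."""
    h11 = y_when_tx1 / p1  # y_when_tx1 ~ h11 * p1 + noise
    h12 = y_when_tx2 / p2  # y_when_tx2 ~ h12 * p2 + noise
    return h11, h12

Repeating the same estimate at receive antenna 2 yields h21 and h22; averaging over several pilot times reduces the effect of noise.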
FIG. 56 shows an example of the structure of a reception device for receiving modulated signals transmitted by the transmission device in FIG. 52. Elements that operate in a similar way to FIG. 7 bear the same reference signs. In FIG. 56, an OFDM related processor (5600_X) receives, as input, a received signal 702_X, performs predetermined processing, and outputs a processed signal 704_X. Similarly, an OFDM related processor (5600_Y) receives, as input, a received signal 702_Y, performs predetermined processing, and outputs a processed signal 704_Y. The control information decoding unit 709 in FIG. 56 receives, as inputs, the processed signals 704_X and 704_Y, extracts the control information symbols and individual control information symbols in FIGS. 55A and 55B to obtain the control information transmitted by these symbols, and outputs a control signal 710 that includes the obtained information. The channel fluctuation estimating unit 705_1 for the modulated signal z1 receives, as inputs, the processed signal 704_X and the control signal 710, performs channel estimation in the carrier group required by the reception device (the desired carrier group), and outputs a channel estimation signal 706_1. Similarly, the channel fluctuation estimating unit 705_2 for the modulated signal z2 receives, as inputs, the processed signal 704_X and the control signal 710, performs channel estimation in the desired carrier group, and outputs a channel estimation signal 706_2. Similarly, the channel fluctuation estimating unit 707_1 for the modulated signal z1 receives, as inputs, the processed signal 704_Y and the control signal 710, performs channel estimation in the desired carrier group, and outputs a channel estimation signal 708_1. Similarly, the channel fluctuation estimating unit 707_2 for the modulated signal z2 receives, as inputs, the processed signal 704_Y and the control signal 710, performs channel estimation in the desired carrier group, and outputs a channel estimation signal 708_2. The signal processing unit 711 receives, as inputs, the signals 706_1, 706_2, 708_1, 708_2, 704_X, 704_Y, and the control signal 710. Based on the information included in the control signal 710 on the transmission scheme, modulation scheme, error correction coding scheme, coding rate for error correction coding, block size of error correction codes, and the like for the data symbols transmitted in the desired carrier group, the signal processing unit 711 demodulates and decodes the data symbols and outputs received data 712. FIG. 57 shows the structure of the OFDM related processors (5600_X, 5600_Y) in FIG. 56. A frequency converter (5701) receives, as input, a received signal (5700), performs frequency conversion, and outputs a frequency converted signal (5702). A Fourier transformer (5703) receives, as input, the frequency converted signal (5702), performs a Fourier transform, and outputs a Fourier transformed signal (5704). As described above, when using a multi-carrier transmission scheme such as an OFDM scheme, carriers are divided into a plurality of carrier groups, and the transmission scheme is set for each carrier group, thereby allowing the reception quality and transmission speed to be set for each carrier group, which yields the advantageous effect of enabling construction of a flexible system.
In this case, as described in other embodiments, allowing for choice of a scheme of regularly hopping between precoding matrices offers the advantages of obtaining high reception quality, as well as high transmission speed, in an LOS environment. In the present embodiment, the transmission schemes to which a carrier group can be set are "a spatial multiplexing MIMO system, a MIMO scheme using a fixed precoding matrix, a MIMO scheme for regularly hopping between precoding matrices, space-time block coding, or a transmission scheme for transmitting only stream s1", but the transmission schemes are not limited in this way. Furthermore, the space-time coding is not limited to the scheme described with reference to FIG. 50, nor is the MIMO scheme using a fixed precoding matrix limited to scheme #2 in FIG. 49, as any structure with a fixed precoding matrix is acceptable. In the present embodiment, the case of two antennas in the transmission device has been described, but when the number of antennas is larger than two as well, the same advantageous effects may be achieved by allowing for selection of a transmission scheme for each carrier group from among "a spatial multiplexing MIMO system, a MIMO scheme using a fixed precoding matrix, a MIMO scheme for regularly hopping between precoding matrices, space-time block coding, or a transmission scheme for transmitting only stream s1". FIGS. 58A and 58B show a scheme of allocation into carrier groups that differs from FIGS. 47A, 47B, 48A, 48B, and 51. In FIGS. 47A, 47B, 48A, 48B, 51, 55A, and 55B, carrier groups have been described as being formed by groups of subcarriers. In FIGS. 58A and 58B, on the other hand, the carriers in a carrier group are arranged discretely. FIGS. 58A and 58B show an example of frame structure in the time and frequency domains that differs from FIGS. 47A, 47B, 48A, 48B, 51, 55A, and 55B. FIGS. 58A and 58B show the frame structure for carriers 1 through H, times $1 through $K. Elements that are similar to FIGS. 55A and 55B bear the same reference signs. Among the data symbols in FIGS. 58A and 58B, the "A" symbols are symbols in carrier group A, the "B" symbols are symbols in carrier group B, the "C" symbols are symbols in carrier group C, and the "D" symbols are symbols in carrier group D. The carrier groups can thus be similarly implemented by discrete arrangement along (sub)carriers, and the same carrier need not always be used in the time domain. This type of arrangement yields the advantageous effect of obtaining time and frequency diversity gain. In FIGS. 47A, 47B, 48A, 48B, 51, 58A, and 58B, the control information symbols and the individual control information symbols are allocated to the same time in each carrier group, but these symbols may be allocated to different times. Furthermore, the number of (sub)carriers used by a carrier group may change over time.

Embodiment 16

Like Embodiment 10, the present embodiment describes a scheme for regularly hopping between precoding matrices using a unitary matrix when N is an odd number. In the scheme of regularly hopping between precoding matrices over a period (cycle) with 2N slots, the precoding matrices prepared for the 2N slots are represented as follows.

Math 294
for i = 0, 1, 2, …, N−2, N−1:
F[i] = \frac{1}{\sqrt{\alpha^2+1}} \begin{pmatrix} e^{j\theta_{11}(i)} & \alpha \times e^{j(\theta_{11}(i)+\lambda)} \\ \alpha \times e^{j\theta_{21}(i)} & e^{j(\theta_{21}(i)+\lambda+\pi)} \end{pmatrix}   (Equation 253)

Let α be a fixed value (not depending on i), where α > 0.

Math 295
for i = N, N+1, N+2, …, 2N−2, 2N−1:
F[i] = \frac{1}{\sqrt{\alpha^2+1}} \begin{pmatrix} \alpha \times e^{j\theta_{11}(i)} & e^{j(\theta_{11}(i)+\lambda)} \\ e^{j\theta_{21}(i)} & \alpha \times e^{j(\theta_{21}(i)+\lambda+\pi)} \end{pmatrix}   (Equation 254)

Let α be a fixed value (not depending on i), where α > 0.
(Let the α in Equation 253 and the α in Equation 254 be the same value.) From Condition #5 (Math 106) and Condition #6 (Math 107) in Embodiment 3, the following conditions are important in Equation 253 for achieving excellent data reception quality.

Math 296
e^{j(\theta_{11}(x)-\theta_{21}(x))} \neq e^{j(\theta_{11}(y)-\theta_{21}(y))} for ∀x, ∀y (x ≠ y; x, y = 0, 1, 2, …, N−2, N−1)   (Condition #46)

Math 297
e^{j(\theta_{11}(x)-\theta_{21}(x)-\pi)} \neq e^{j(\theta_{11}(y)-\theta_{21}(y)-\pi)} for ∀x, ∀y (x ≠ y; x, y = 0, 1, 2, …, N−2, N−1)   (Condition #47)

Addition of the following condition is considered.

Math 298
\theta_{11}(x) = \theta_{11}(x+N) for ∀x (x = 0, 1, 2, …, N−2, N−1) and \theta_{21}(y) = \theta_{21}(y+N) for ∀y (y = 0, 1, 2, …, N−2, N−1)   (Condition #48)

Next, in order to distribute the poor reception points evenly with regards to phase in the complex plane, as described in Embodiment 6, Condition #49 and Condition #50 are provided.

Math 299
\frac{e^{j(\theta_{11}(x+1)-\theta_{21}(x+1))}}{e^{j(\theta_{11}(x)-\theta_{21}(x))}} = e^{j\frac{2\pi}{N}} for ∀x (x = 0, 1, 2, …, N−2)   (Condition #49)

Math 300
\frac{e^{j(\theta_{11}(x+1)-\theta_{21}(x+1))}}{e^{j(\theta_{11}(x)-\theta_{21}(x))}} = e^{-j\frac{2\pi}{N}} for ∀x (x = 0, 1, 2, …, N−2)   (Condition #50)

In other words, Condition #49 means that the difference in phase is 2π/N radians, while Condition #50 means that the difference in phase is −2π/N radians. Letting θ11(0) − θ21(0) = 0 radians, and letting α > 1, the distribution of poor reception points for s1 and for s2 in the complex plane for N = 3 is shown in FIGS. 60A and 60B. As is clear from FIGS. 60A and 60B, in the complex plane, the minimum distance between poor reception points for s1 is kept large, and similarly, the minimum distance between poor reception points for s2 is also kept large. Similar conditions are created when α < 1. Furthermore, upon comparison with FIGS. 45A and 45B in Embodiment 10, making the same considerations as in Embodiment 9, the probability of a greater distance between poor reception points in the complex plane increases when N is an odd number as compared to when N is an even number. However, when N is small, for example when N ≤ 16, the minimum distance between poor reception points in the complex plane can be guaranteed to be a certain length, since the number of poor reception points is small. Accordingly, when N ≤ 16, even if N is an even number, cases do exist where data reception quality can be guaranteed. Therefore, in the scheme for regularly hopping between precoding matrices based on Equations 253 and 254, when N is set to an odd number, the probability of improving data reception quality is high. Precoding matrices F[0]−F[2N−1] are generated based on Equations 253 and 254 (the precoding matrices F[0]−F[2N−1] may be in any order for the 2N slots in the period (cycle)). Symbol number 2Ni may be precoded using F[0], symbol number 2Ni+1 may be precoded using F[1], . . . , and symbol number 2N×i+h may be precoded using F[h], for example (h = 0, 1, 2, …, 2N−2, 2N−1). (In this case, as described in previous embodiments, precoding matrices need not be hopped between regularly.) Furthermore, when the modulation scheme for both s1 and s2 is 16QAM, if α is set as in Equation 233, the advantageous effect of increasing the minimum distance between 16×16 = 256 signal points in the I-Q plane for a specific LOS environment may be achieved. The following conditions are possible as conditions differing from Condition #48:

Math 301
e^{j(\theta_{11}(x)-\theta_{21}(x))} \neq e^{j(\theta_{11}(y)-\theta_{21}(y))} for ∀x, ∀y (x ≠ y; x, y = N, N+1, N+2, …, 2N−2, 2N−1)   (Condition #51)
Math 302
e^{j(\theta_{11}(x)-\theta_{21}(x)-\pi)} \neq e^{j(\theta_{11}(y)-\theta_{21}(y)-\pi)} for ∀x, ∀y (x ≠ y; x, y = N, N+1, N+2, …, 2N−2, 2N−1)   (Condition #52)

In this case, by satisfying Condition #46, Condition #47, Condition #51, and Condition #52, the distance in the complex plane between poor reception points for s1 is increased, as is the distance between poor reception points for s2, thereby achieving excellent data reception quality. In the present embodiment, the scheme of structuring 2N different precoding matrices for a precoding hopping scheme with a 2N-slot time period (cycle) has been described. In this case, as the 2N different precoding matrices, F[0], F[1], F[2], …, F[2N−2], F[2N−1] are prepared. In the present embodiment, an example of a single carrier transmission scheme has been described, and therefore the case of arranging symbols in the order F[0], F[1], F[2], …, F[2N−2], F[2N−1] in the time domain (or the frequency domain) has been described. The present invention is not, however, limited in this way, and the 2N different precoding matrices F[0], F[1], F[2], …, F[2N−2], F[2N−1] generated in the present embodiment may be adapted to a multi-carrier transmission scheme such as an OFDM transmission scheme or the like. As in Embodiment 1, as a scheme of adaptation in this case, precoding weights may be changed by arranging symbols in the frequency domain and in the frequency-time domain. Note that a precoding hopping scheme with a 2N-slot time period (cycle) has been described, but the same advantageous effects may be obtained by randomly using 2N different precoding matrices. In other words, the 2N different precoding matrices do not necessarily need to be used in a regular period (cycle). Furthermore, in the precoding matrix hopping scheme over an H-slot period (cycle) (H being a natural number larger than the number of slots 2N in the period (cycle) of the above scheme of regularly hopping between precoding matrices), when the 2N different precoding matrices of the present embodiment are included, the probability of excellent reception quality increases.

Embodiment 17

The present embodiment describes a concrete example of the scheme of regularly changing precoding weights, based on Embodiment 8. FIG. 6 relates to the weighting scheme (precoding scheme) in the present embodiment. The weighting unit 600 integrates the weighting units 308A and 308B in FIG. 3. As shown in FIG. 6, the stream s1(t) and the stream s2(t) correspond to the baseband signals 307A and 307B in FIG. 3. In other words, the streams s1(t) and s2(t) are the baseband signal in-phase components I and quadrature components Q when mapped according to a modulation scheme such as QPSK, 16QAM, 64QAM, or the like. As indicated by the frame structure of FIG. 6, in the stream s1(t), a signal at symbol number u is represented as s1(u), a signal at symbol number u+1 as s1(u+1), and so forth. Similarly, in the stream s2(t), a signal at symbol number u is represented as s2(u), a signal at symbol number u+1 as s2(u+1), and so forth. The weighting unit 600 receives the baseband signals 307A (s1(t)) and 307B (s2(t)) and the information 315 regarding weighting information in FIG. 3 as inputs, performs weighting in accordance with the information 315 regarding weighting, and outputs the signals 309A (z1(t)) and 309B (z2(t)) after weighting in FIG. 3.
At this point, when for example a precoding matrix hopping scheme with an N=8 period (cycle) as in Example #8 in Embodiment 6 is used, z1(t) and z2(t) are represented as follows. For symbol number 8i+k (where i is an integer greater than or equal to zero, j is an imaginary unit, and k = 0, 1, 2, …, 7; Equations 255 through 262 correspond to k = 0 through k = 7, respectively):

Math 303–Math 310
\begin{pmatrix} z1(8i+k) \\ z2(8i+k) \end{pmatrix} = \frac{1}{\sqrt{\alpha^2+1}} \begin{pmatrix} e^{j0} & \alpha \times e^{j0} \\ \alpha \times e^{j\frac{k\pi}{4}} & e^{j(\frac{k\pi}{4}+\frac{7\pi}{8})} \end{pmatrix} \begin{pmatrix} s1(8i+k) \\ s2(8i+k) \end{pmatrix}   (Equations 255–262)

The symbol numbers shown here can be considered to indicate time. As described in other embodiments, in Equation 262, for example, z1(8i+7) and z2(8i+7) at time 8i+7 are signals at the same time, and the transmission device transmits z1(8i+7) and z2(8i+7) over the same (shared/common) frequency. In other words, letting the signals at time T be s1(T), s2(T), z1(T), and z2(T), then z1(T) and z2(T) are sought from some sort of precoding matrices and from s1(T) and s2(T), and the transmission device transmits z1(T) and z2(T) over the same (shared/common) frequency (at the same time). Furthermore, in the case of using a multi-carrier transmission scheme such as OFDM or the like, and letting signals corresponding to s1, s2, z1, and z2 for (sub)carrier L and time T be s1(T, L), s2(T, L), z1(T, L), and z2(T, L), then z1(T, L) and z2(T, L) are sought from some sort of precoding matrices and from s1(T, L) and s2(T, L), and the transmission device transmits z1(T, L) and z2(T, L) over the same (shared/common) frequency (at the same time). In this case, the appropriate value of α is given by Equation 198 or Equation 200. Also, different values of α may be set in Equations 255–262. That is to say, when two equations (Equations X and Y) are extracted from Equations 255–262, the value of α given by Equation X may be different from the value of α given by Equation Y. The present embodiment describes a precoding hopping scheme that increases period (cycle) size, based on the above-described precoding matrices of Equation 190. Letting the period (cycle) of the precoding hopping scheme be 8M, the 8M different precoding matrices are represented as follows.

Math 311
F[8×k+i] = \frac{1}{\sqrt{\alpha^2+1}} \begin{pmatrix} e^{j0} & \alpha \times e^{j0} \\ \alpha \times e^{j(\frac{i\pi}{4}+\frac{k\pi}{4M})} & e^{j(\frac{i\pi}{4}+\frac{k\pi}{4M}+\frac{7\pi}{8})} \end{pmatrix}   (Equation 263)

In this case, i = 0, 1, 2, 3, 4, 5, 6, 7, and k = 0, 1, …, M−2, M−1. For example, letting M = 2 and α < 1, the poor reception points for s1 (∘) and for s2 (□) at k = 0 are represented as in FIG. 42A. Similarly, the poor reception points for s1 (∘) and for s2 (□) at k = 1 are represented as in FIG. 42B.
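The hopping rule above is mechanical enough to express compactly in code. The following is a minimal NumPy sketch, not taken from the patent, that builds the 8M-period matrix set of Equation 263 (Equations 255–262 being the M = 1 case) and precodes two toy symbol streams; the symbol values and the choices α = 0.8, M = 2 are illustrative assumptions only.

```python
import numpy as np

def hopping_matrices(alpha=1.0, M=1):
    """Precoding matrices of Equation 263: period 8*M (Equations 255-262 when M=1)."""
    mats = []
    for k in range(M):
        for i in range(8):
            phi = i * np.pi / 4 + k * np.pi / (4 * M)
            F = np.array([[1.0, alpha],
                          [alpha * np.exp(1j * phi), np.exp(1j * (phi + 7 * np.pi / 8))]])
            mats.append(F / np.sqrt(alpha**2 + 1))
    return mats

# precode two symbol streams, hopping regularly through the matrices
F = hopping_matrices(alpha=0.8, M=2)           # period 16
s1 = np.exp(1j * np.pi / 4) * np.ones(32)      # toy symbols, illustration only
s2 = -s1
z = [F[u % len(F)] @ np.array([s1[u], s2[u]]) for u in range(len(s1))]
# z[u][0] -> z1(u), z[u][1] -> z2(u); both transmitted at the same time/frequency
```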
In this way, based on the precoding matrices in Equation 190, the poor reception points are as in FIG. 42A, and by using, as the precoding matrices, the matrices yielded by multiplying each term in the second line on the right-hand side of Equation 190 by e^{jX} (see Equation 226), the poor reception points are rotated with respect to FIG. 42A (see FIG. 42B). (Note that the poor reception points in FIG. 42A and FIG. 42B do not overlap. Even when multiplying by e^{jX}, the poor reception points should not overlap, as in this case. Furthermore, the matrices yielded by multiplying each term in the first line on the right-hand side of Equation 190, rather than in the second line, by e^{jX} may be used as the precoding matrices.) In this case, the precoding matrices F[0]−F[15] are represented as follows.

Math 312
F[8×k+i] = \frac{1}{\sqrt{\alpha^2+1}} \begin{pmatrix} e^{j0} & \alpha \times e^{j0} \\ \alpha \times e^{j(\frac{i\pi}{4}+X_k)} & e^{j(\frac{i\pi}{4}+X_k+\frac{7\pi}{8})} \end{pmatrix}   (Equation 264)

Here, i = 0, 1, 2, 3, 4, 5, 6, 7, and k = 0, 1. In this case, when M = 2, precoding matrices F[0]−F[15] are generated (the precoding matrices F[0]−F[15] may be in any order; also, matrices F[0]−F[15] may be different matrices). Symbol number 16i may be precoded using F[0], symbol number 16i+1 may be precoded using F[1], . . . , and symbol number 16i+h may be precoded using F[h], for example (h = 0, 1, 2, …, 14, 15). (In this case, as described in previous embodiments, precoding matrices need not be hopped between regularly.) Summarizing the above considerations, with reference to Equations 82–85, N-period (cycle) precoding matrices are represented by the following equation.

Math 313
F[i] = \frac{1}{\sqrt{\alpha^2+1}} \begin{pmatrix} e^{j\theta_{11}(i)} & \alpha \times e^{j(\theta_{11}(i)+\lambda)} \\ \alpha \times e^{j\theta_{21}(i)} & e^{j(\theta_{21}(i)+\lambda+\delta)} \end{pmatrix}   (Equation 265)

Here, since the period (cycle) has N slots, i = 0, 1, 2, …, N−2, N−1. Furthermore, the N×M period (cycle) precoding matrices based on Equation 265 are represented by the following equation.

Math 314
F[N×k+i] = \frac{1}{\sqrt{\alpha^2+1}} \begin{pmatrix} e^{j\theta_{11}(i)} & \alpha \times e^{j(\theta_{11}(i)+\lambda)} \\ \alpha \times e^{j(\theta_{21}(i)+X_k)} & e^{j(\theta_{21}(i)+X_k+\lambda+\delta)} \end{pmatrix}   (Equation 266)

In this case, i = 0, 1, 2, …, N−2, N−1, and k = 0, 1, …, M−2, M−1. In this case, precoding matrices F[0]−F[N×M−1] are generated. (Precoding matrices F[0]−F[N×M−1] may be in any order for the N×M slots in the period (cycle).) Symbol number N×M×i may be precoded using F[0], symbol number N×M×i+1 may be precoded using F[1], . . . , and symbol number N×M×i+h may be precoded using F[h], for example (h = 0, 1, 2, …, N×M−2, N×M−1). (In this case, as described in previous embodiments, precoding matrices need not be hopped between regularly.) Generating the precoding matrices in this way achieves a precoding matrix hopping scheme with a large period (cycle), allowing for the position of poor reception points to be easily changed, which may lead to improved data reception quality. Note that while the N×M period (cycle) precoding matrices have been set to Equation 266, the N×M period (cycle) precoding matrices may be set to the following equation, as described above.

Math 315
F[N×k+i] = \frac{1}{\sqrt{\alpha^2+1}} \begin{pmatrix} e^{j(\theta_{11}(i)+X_k)} & \alpha \times e^{j(\theta_{11}(i)+X_k+\lambda)} \\ \alpha \times e^{j\theta_{21}(i)} & e^{j(\theta_{21}(i)+\lambda+\delta)} \end{pmatrix}   (Equation 267)

In this case, i = 0, 1, 2, …, N−2, N−1, and k = 0, 1, …, M−2, M−1. In Equations 265 and 266, when 0 radians ≤ δ < 2π radians, the matrices are a unitary matrix when δ = π radians and are a non-unitary matrix when δ ≠ π radians. In the present scheme, use of a non-unitary matrix for π/2 radians ≤ |δ| < π radians is one characteristic structure (the conditions for δ being similar to other embodiments), and excellent data reception quality is obtained.
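As a concrete illustration of Equation 266, the following sketch generates an N×M-period matrix set from an N-slot base design. It is an assumed example, not the patent's implementation; the particular θ11, θ21, α, and Xk values in the usage lines are illustrative choices.

```python
import numpy as np

def nxm_matrices(theta11, theta21, alpha, lam, delta, X):
    """Equation 266: N*M-period matrices F[N*k+i] from an N-slot base design.

    theta11, theta21 : length-N arrays of phases in radians
    X                : length-M array of per-k phase offsets Xk
    """
    N, M = len(theta11), len(X)
    mats = []
    for k in range(M):
        for i in range(N):
            F = np.array([
                [np.exp(1j * theta11[i]),
                 alpha * np.exp(1j * (theta11[i] + lam))],
                [alpha * np.exp(1j * (theta21[i] + X[k])),
                 np.exp(1j * (theta21[i] + X[k] + lam + delta))],
            ]) / np.sqrt(alpha**2 + 1)
            mats.append(F)
    return mats  # period N*M; use F[(N*M*i + h) % (N*M)] on symbol h

# example: N = 3, M = 2, unitary case (delta = pi)
N = 3
th11 = np.zeros(N)
th21 = 2 * np.pi * np.arange(N) / N      # evenly spaced, cf. Condition #49/#50
mats = nxm_matrices(th11, th21, alpha=0.8, lam=0.0, delta=np.pi, X=[0.0, np.pi / 2])
print(len(mats))  # 6 precoding matrices
```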
However, not limited to this, a unitary matrix may be used instead. In the present embodiment, as one example of the case where λ is treated as a fixed value, a case where λ = 0 radians is described. However, in view of the mapping according to the modulation scheme, λ may be set to a fixed value defined as λ = π/2 radians, λ = π radians, or λ = (3π)/2 radians. (For example, λ may be set to a fixed value defined as λ = π radians in the precoding matrices of the precoding scheme in which hopping between precoding matrices is performed regularly.) With this structure, as in the case where λ is set to a value defined as λ = 0 radians, a reduction in circuit size is achieved.

Embodiment 18

The present embodiment describes a scheme for regularly hopping between precoding matrices using a unitary matrix based on Embodiment 9. As described in Embodiment 8, in the scheme of regularly hopping between precoding matrices over a period (cycle) with N slots, the precoding matrices prepared for the N slots with reference to Equations 82–85 are represented as follows.

Math 316
F[i] = \frac{1}{\sqrt{\alpha^2+1}} \begin{pmatrix} e^{j\theta_{11}(i)} & \alpha \times e^{j(\theta_{11}(i)+\lambda)} \\ \alpha \times e^{j\theta_{21}(i)} & e^{j(\theta_{21}(i)+\lambda+\delta)} \end{pmatrix}   (Equation 268)

In this case, i = 0, 1, 2, …, N−2, N−1. (α > 0.) Since a unitary matrix is used in the present embodiment, the precoding matrices in Equation 268 may be represented as follows.

Math 317
F[i] = \frac{1}{\sqrt{\alpha^2+1}} \begin{pmatrix} e^{j\theta_{11}(i)} & \alpha \times e^{j(\theta_{11}(i)+\lambda)} \\ \alpha \times e^{j\theta_{21}(i)} & e^{j(\theta_{21}(i)+\lambda+\pi)} \end{pmatrix}   (Equation 269)

In this case, i = 0, 1, 2, …, N−2, N−1. (α > 0.) From Condition #5 (Math 106) and Condition #6 (Math 107) in Embodiment 3, the following conditions are important for achieving excellent data reception quality.

Math 318
e^{j(\theta_{11}(x)-\theta_{21}(x))} \neq e^{j(\theta_{11}(y)-\theta_{21}(y))} for ∀x, ∀y (x ≠ y; x, y = 0, 1, 2, …, N−2, N−1)   (Condition #53)

Math 319
e^{j(\theta_{11}(x)-\theta_{21}(x)-\pi)} \neq e^{j(\theta_{11}(y)-\theta_{21}(y)-\pi)} for ∀x, ∀y (x ≠ y; x, y = 0, 1, 2, …, N−2, N−1)   (Condition #54)

Embodiment 6 has described the distance between poor reception points. In order to increase the distance between poor reception points, it is important for the number of slots N to be an odd number three or greater. The following explains this point. In order to distribute the poor reception points evenly with regards to phase in the complex plane, as described in Embodiment 6, Condition #55 and Condition #56 are provided.

Math 320
\frac{e^{j(\theta_{11}(x+1)-\theta_{21}(x+1))}}{e^{j(\theta_{11}(x)-\theta_{21}(x))}} = e^{j\frac{2\pi}{N}} for ∀x (x = 0, 1, 2, …, N−2)   (Condition #55)

Math 321
\frac{e^{j(\theta_{11}(x+1)-\theta_{21}(x+1))}}{e^{j(\theta_{11}(x)-\theta_{21}(x))}} = e^{-j\frac{2\pi}{N}} for ∀x (x = 0, 1, 2, …, N−2)   (Condition #56)

Letting θ11(0) − θ21(0) = 0 radians, and letting α < 1, the distribution of poor reception points for s1 and for s2 in the complex plane for an N = 3 period (cycle) is shown in FIG. 43A, and the distribution of poor reception points for s1 and for s2 in the complex plane for an N = 4 period (cycle) is shown in FIG. 43B. Letting θ11(0) − θ21(0) = 0 radians, and letting α > 1, the distribution of poor reception points for s1 and for s2 in the complex plane for an N = 3 period (cycle) is shown in FIG. 44A, and the distribution of poor reception points for s1 and for s2 in the complex plane for an N = 4 period (cycle) is shown in FIG. 44B.
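Conditions #53, #55, and #56 are straightforward to verify numerically for a candidate phase design. The following small Python check is an illustration rather than anything from the patent; the N = 3 design θ11(i) = 0, θ21(i) = 2πi/N used at the end is an assumed example that satisfies both properties.

```python
import numpy as np

def check_conditions(theta11, theta21, tol=1e-9):
    """Check Condition #53 (all e^{j(theta11 - theta21)} distinct) and the
    even-spacing Conditions #55/#56 (consecutive ratios equal e^{+-j*2*pi/N})."""
    N = len(theta11)
    pts = np.exp(1j * (np.asarray(theta11) - np.asarray(theta21)))
    distinct = all(abs(pts[x] - pts[y]) > tol
                   for x in range(N) for y in range(x + 1, N))
    ratios = pts[1:] / pts[:-1]
    even = (np.allclose(ratios, np.exp(1j * 2 * np.pi / N)) or
            np.allclose(ratios, np.exp(-1j * 2 * np.pi / N)))
    return distinct, even

N = 3
print(check_conditions(np.zeros(N), 2 * np.pi * np.arange(N) / N))  # (True, True)
```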
In this case, when considering the phase between a line segment from the origin to a poor reception point and a half line along the real axis defined by real ≥ 0 (see FIG. 43A), then for either α > 1 or α < 1, when N = 4, the case always occurs wherein the phase for the poor reception points for s1 and the phase for the poor reception points for s2 are the same value. (See 4301 and 4302 in FIG. 43B, and 4401 and 4402 in FIG. 44B.) In this case, in the complex plane, the distance between poor reception points becomes small. On the other hand, when N = 3, the phase for the poor reception points for s1 and the phase for the poor reception points for s2 are never the same value. Based on the above, considering how the case always occurs wherein the phase for the poor reception points for s1 and the phase for the poor reception points for s2 are the same value when the number of slots N in the period (cycle) is an even number, setting the number of slots N in the period (cycle) to an odd number increases the probability of a greater distance between poor reception points in the complex plane as compared to when the number of slots N in the period (cycle) is an even number. However, when the number of slots N in the period (cycle) is small, for example when N ≤ 16, the minimum distance between poor reception points in the complex plane can be guaranteed to be a certain length, since the number of poor reception points is small. Accordingly, when N ≤ 16, even if N is an even number, cases do exist where data reception quality can be guaranteed. Therefore, in the scheme for regularly hopping between precoding matrices based on Equation 269, when the number of slots N in the period (cycle) is set to an odd number, the probability of improving data reception quality is high. Precoding matrices F[0]−F[N−1] are generated based on Equation 269 (the precoding matrices F[0]−F[N−1] may be in any order for the N slots in the period (cycle)). Symbol number Ni may be precoded using F[0], symbol number Ni+1 may be precoded using F[1], . . . , and symbol number N×i+h may be precoded using F[h], for example (h = 0, 1, 2, …, N−2, N−1). (In this case, as described in previous embodiments, precoding matrices need not be hopped between regularly.) Furthermore, when the modulation scheme for both s1 and s2 is 16QAM, if α is set as follows,

Math 322
\alpha = \frac{\sqrt{2}+4}{\sqrt{2}+2}   (Equation 270)

the advantageous effect of increasing the minimum distance between 16×16 = 256 signal points in the I-Q plane for a specific LOS environment may be achieved. FIG. 94 shows the signal point layout in the I-Q plane for 16QAM. In FIG. 94, signal point 9400 is the signal point when the bits to be transmitted (input bits) b0–b3 represent the value "(b0, b1, b2, b3) = (1, 0, 0, 0)" (as shown in FIG. 94), and its coordinates in the I-Q plane are (−3×g, 3×g). With regard to the signal points other than signal point 9400, the bits to be transmitted and the coordinates in the I-Q plane can be identified from FIG. 94. FIG. 95 shows the signal point layout in the I-Q plane for QPSK. In FIG. 95, signal point 9500 is the signal point when the bits to be transmitted (input bits) b0 and b1 represent the value "(b0, b1) = (1, 0)" (as shown in FIG. 95), and its coordinates in the I-Q plane are (−1×g, 1×g). With regard to the signal points other than signal point 9500, the bits to be transmitted and the coordinates in the I-Q plane can be identified from FIG. 95.
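The following short sketch evaluates α as reconstructed in Equation 270 (the radical placement (√2+4)/(√2+2) is an assumption recovered from the garbled source) and builds the normalized 16QAM grid of FIG. 94 with g = z/√10 (Equation 272); the target amplitude z = 1 is an illustrative choice.

```python
import numpy as np

# alpha per the reconstruction of Equation 270
alpha = (np.sqrt(2) + 4) / (np.sqrt(2) + 2)
print(round(alpha, 4))  # about 1.5858

z = 1.0
g = z / np.sqrt(10)                      # Equation 272: 16QAM scaling
levels = np.array([-3, -1, 1, 3]) * g
const16 = np.array([i + 1j * q for i in levels for q in levels])
print(np.isclose(np.mean(np.abs(const16) ** 2), z**2))  # unit average power: True
```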
Also, when the modulation scheme for s1 is QPSK and the modulation scheme for s2 is 16QAM, if α is set as follows,

Math 323
\alpha = \frac{\sqrt{2}+3+\sqrt{5}}{\sqrt{2}+3-\sqrt{5}}   (Equation 271)

the advantageous effect of increasing the minimum distance between candidate signal points in the I-Q plane for a specific LOS environment may be achieved. Note that the signal point layout in the I-Q plane for 16QAM is shown in FIG. 94, and the signal point layout in the I-Q plane for QPSK is shown in FIG. 95. Here, if g in FIG. 94 is set as follows,

Math 324
g = \frac{z}{\sqrt{10}}   (Equation 272)

h in FIG. 95 is obtained as follows.

Math 325
h = \frac{z}{\sqrt{2}}   (Equation 273)

As an example of the precoding matrices prepared for the N slots based on Equation 269, the following matrices are considered when N = 5 (Equations 274 through 278 correspond to i = 0 through i = 4):

Math 326–Math 330
F[i] = \frac{1}{\sqrt{\alpha^2+1}} \begin{pmatrix} e^{j0} & \alpha \times e^{j0} \\ \alpha \times e^{j\frac{2i}{5}\pi} & e^{j(\frac{2i}{5}\pi+\pi)} \end{pmatrix}   (Equations 274–278)

Note that, in order to restrict the calculation scale of the above precoding in the transmission device, θ11(i) = 0 radians and λ = 0 radians may be set in Equation 269. In this case, however, in Equation 269, λ may vary depending on i, or may be the same value. That is to say, in Equation 269, λ in F[i=x] and λ in F[i=y] (x ≠ y) may be the same value or may be different values. As the value to which α is set, the above-described value is one effective value. However, not limited to this, α may be set, for example, for each value of i in the precoding matrix F[i], as described in Embodiment 17. (That is to say, in F[i], α need not always be set to a constant value with respect to i.) In the present embodiment, the scheme of structuring N different precoding matrices for a precoding hopping scheme with an N-slot time period (cycle) has been described. In this case, as the N different precoding matrices, F[0], F[1], F[2], …, F[N−2], F[N−1] are prepared. In the single carrier transmission scheme, symbols are arranged in the order F[0], F[1], F[2], …, F[N−2], F[N−1] in the time domain (or the frequency domain in the case of the multi-carrier transmission scheme). The present invention is not, however, limited in this way, and the N different precoding matrices F[0], F[1], F[2], …, F[N−2], F[N−1] generated in the present embodiment may be adapted to a multi-carrier transmission scheme such as an OFDM transmission scheme or the like. As in Embodiment 1, as a scheme of adaptation in this case, precoding weights may be changed by arranging symbols in the frequency domain and in the frequency-time domain. Note that a precoding hopping scheme with an N-slot time period (cycle) has been described, but the same advantageous effects may be obtained by randomly using N different precoding matrices. In other words, the N different precoding matrices do not necessarily need to be used in a regular period (cycle). Furthermore, in the precoding matrix hopping scheme over an H-slot period (cycle) (H being a natural number larger than the number of slots N in the period (cycle) of the above scheme of regularly hopping between precoding matrices), when the N different precoding matrices of the present embodiment are included, the probability of excellent reception quality increases. In this case, Condition #55 and Condition #56 can be replaced by the following conditions. (The number of slots in the period (cycle) is considered to be N.)
Math 331
e^{j(\theta_{11}(x)-\theta_{21}(x))} \neq e^{j(\theta_{11}(y)-\theta_{21}(y))} for ∃x, ∃y (x ≠ y; x, y = 0, 1, 2, …, N−2, N−1)   (Condition #55′)

Math 332
e^{j(\theta_{11}(x)-\theta_{21}(x)-\pi)} \neq e^{j(\theta_{11}(y)-\theta_{21}(y)-\pi)} for ∃x, ∃y (x ≠ y; x, y = 0, 1, 2, …, N−2, N−1)   (Condition #56′)

In the present embodiment, as one example of the case where λ is treated as a fixed value, a case where λ = 0 radians is described. However, in view of the mapping according to the modulation scheme, λ may be set to a fixed value defined as λ = π/2 radians, λ = π radians, or λ = (3π)/2 radians. (For example, λ may be set to a fixed value defined as λ = π radians in the precoding matrices of the precoding scheme in which hopping between precoding matrices is performed regularly.) With this structure, as in the case where λ is set to a value defined as λ = 0 radians, a reduction in circuit size is achieved.

Embodiment 19

The present embodiment describes a scheme for regularly hopping between precoding matrices using a unitary matrix based on Embodiment 10. In the scheme of regularly hopping between precoding matrices over a period (cycle) with 2N slots, the precoding matrices prepared for the 2N slots are represented as follows.

Math 333
When i = 0, 1, 2, …, N−2, N−1:
F[i] = \frac{1}{\sqrt{\alpha^2+1}} \begin{pmatrix} e^{j\theta_{11}(i)} & \alpha \times e^{j(\theta_{11}(i)+\lambda)} \\ \alpha \times e^{j\theta_{21}(i)} & e^{j(\theta_{21}(i)+\lambda+\pi)} \end{pmatrix}   (Equation 279)

Here, α > 0, and α is a fixed value (regardless of i).

Math 334
When i = N, N+1, N+2, …, 2N−2, 2N−1:
F[i] = \frac{1}{\sqrt{\alpha^2+1}} \begin{pmatrix} \alpha \times e^{j\theta_{11}(i)} & e^{j(\theta_{11}(i)+\lambda)} \\ e^{j\theta_{21}(i)} & \alpha \times e^{j(\theta_{21}(i)+\lambda+\pi)} \end{pmatrix}   (Equation 280)

Here, α > 0, and α is a fixed value (regardless of i). (The value of α in Equation 279 is the same as the value of α in Equation 280.) (The value of α may also be set as α < 0.) From Condition #5 (Math 106) and Condition #6 (Math 107) in Embodiment 3, the following conditions are important for achieving excellent data reception quality.

Math 335
e^{j(\theta_{11}(x)-\theta_{21}(x))} \neq e^{j(\theta_{11}(y)-\theta_{21}(y))} for ∀x, ∀y (x ≠ y; x, y = 0, 1, 2, …, N−2, N−1)   (Condition #57)

Math 336
e^{j(\theta_{11}(x)-\theta_{21}(x)-\pi)} \neq e^{j(\theta_{11}(y)-\theta_{21}(y)-\pi)} for ∀x, ∀y (x ≠ y; x, y = 0, 1, 2, …, N−2, N−1)   (Condition #58)

Addition of the following condition is considered.

Math 337
\theta_{11}(x) = \theta_{11}(x+N) for ∀x (x = 0, 1, 2, …, N−2, N−1) and \theta_{21}(y) = \theta_{21}(y+N) for ∀y (y = 0, 1, 2, …, N−2, N−1)   (Condition #59)

Next, in order to distribute the poor reception points evenly with regards to phase in the complex plane, as described in Embodiment 6, Condition #60 and Condition #61 are provided.

Math 338
\frac{e^{j(\theta_{11}(x+1)-\theta_{21}(x+1))}}{e^{j(\theta_{11}(x)-\theta_{21}(x))}} = e^{j\frac{2\pi}{N}} for ∀x (x = 0, 1, 2, …, N−2)   (Condition #60)

Math 339
\frac{e^{j(\theta_{11}(x+1)-\theta_{21}(x+1))}}{e^{j(\theta_{11}(x)-\theta_{21}(x))}} = e^{-j\frac{2\pi}{N}} for ∀x (x = 0, 1, 2, …, N−2)   (Condition #61)

Letting θ11(0) − θ21(0) = 0 radians, and letting α > 1, the distribution of poor reception points for s1 and for s2 in the complex plane for N = 4 is shown in FIGS. 43A and 43B. As is clear from FIGS. 43A and 43B, in the complex plane, the minimum distance between poor reception points for s1 is kept large, and similarly, the minimum distance between poor reception points for s2 is also kept large. Similar conditions are created when α < 1. Furthermore, making the same considerations as in Embodiment 9, the probability of a greater distance between poor reception points in the complex plane increases when N is an odd number as compared to when N is an even number.
However, when N is small, for example when N ≤ 16, the minimum distance between poor reception points in the complex plane can be guaranteed to be a certain length, since the number of poor reception points is small. Accordingly, when N ≤ 16, even if N is an even number, cases do exist where data reception quality can be guaranteed. Therefore, in the scheme for regularly hopping between precoding matrices based on Equations 279 and 280, when N is set to an odd number, the probability of improving data reception quality is high. Note that precoding matrices F[0]−F[2N−1] have been generated based on Equations 279 and 280. (The precoding matrices F[0]−F[2N−1] may be in any order for the 2N slots in the period (cycle).) Symbol number 2Ni may be precoded using F[0], symbol number 2Ni+1 may be precoded using F[1], . . . , and symbol number 2N×i+h may be precoded using F[h], for example (h = 0, 1, 2, …, 2N−2, 2N−1). (In this case, as described in previous embodiments, precoding matrices need not be hopped between regularly.) Furthermore, when the modulation scheme for both s1 and s2 is 16QAM, if α is set as in Equation 270, the advantageous effect of increasing the minimum distance between 16×16 = 256 signal points in the I-Q plane for a specific LOS environment may be achieved. Also, when the modulation scheme for s1 is QPSK and the modulation scheme for s2 is 16QAM, if α is set as in Equation 271, the advantageous effect of increasing the minimum distance between candidate signal points in the I-Q plane for a specific LOS environment may be achieved. Note that the signal point layout in the I-Q plane for 16QAM is shown in FIG. 94, and the signal point layout in the I-Q plane for QPSK is shown in FIG. 95. Here, if "g" in FIG. 94 is set as in Equation 272, "h" in FIG. 95 is obtained as in Equation 273. The following conditions are possible as conditions differing from Condition #59:

Math 340
e^{j(\theta_{11}(x)-\theta_{21}(x))} \neq e^{j(\theta_{11}(y)-\theta_{21}(y))} for ∀x, ∀y (x ≠ y; x, y = N, N+1, N+2, …, 2N−2, 2N−1)   (Condition #62)

Math 341
e^{j(\theta_{11}(x)-\theta_{21}(x)-\pi)} \neq e^{j(\theta_{11}(y)-\theta_{21}(y)-\pi)} for ∀x, ∀y (x ≠ y; x, y = N, N+1, N+2, …, 2N−2, 2N−1)   (Condition #63)

In this case, by satisfying Condition #57, Condition #58, Condition #62, and Condition #63, the distance in the complex plane between poor reception points for s1 is increased, as is the distance between poor reception points for s2, thereby achieving excellent data reception quality.
As an example of the precoding matrices prepared for the 2N slots based on Equations 279 and 280, the following matrices are considered when N = 15 (Equations 281 through 295 correspond to i = 0 through i = 14, and Equations 296 through 310 correspond to i = 15 through i = 29):

Math 342–Math 356
For i = 0, 1, …, 14:
F[i] = \frac{1}{\sqrt{\alpha^2+1}} \begin{pmatrix} e^{j0} & \alpha \times e^{j0} \\ \alpha \times e^{j\frac{2i}{15}\pi} & e^{j(\frac{2i}{15}\pi+\pi)} \end{pmatrix}   (Equations 281–295)

Math 357–Math 371
For i = 15, 16, …, 29:
F[i] = \frac{1}{\sqrt{\alpha^2+1}} \begin{pmatrix} \alpha \times e^{j\frac{2(i-15)}{15}\pi} & e^{j(\frac{2(i-15)}{15}\pi+\pi)} \\ e^{j0} & \alpha \times e^{j0} \end{pmatrix}   (Equations 296–310)

Note that, in order to restrict the calculation scale of the above precoding in the transmission device, θ11(i) = 0 radians and λ = 0 radians may be set in Equation 279, and θ21(i) = 0 radians and λ = 0 radians may be set in Equation 280. In this case, however, in Equations 279 and 280, λ may be set as a value that varies depending on i, or may be set as the same value. That is to say, in Equations 279 and 280, λ in F[i=x] and λ in F[i=y] (x ≠ y) may be the same value or may be different values. As another scheme, λ is set as a fixed value in Equation 279, λ is set as a fixed value in Equation 280, and the fixed values of λ in Equations 279 and 280 are set as different values. (As still another scheme, the same fixed value of λ may be used in Equations 279 and 280.) As the value to which α is set, the above-described set value is one effective value. However, not limited to this, α may be set, for example, for each value of i in the precoding matrix F[i], as described in Embodiment 17. (That is to say, in F[i], α need not always be set to a constant value with respect to i.) In the present embodiment, the scheme of structuring 2N different precoding matrices for a precoding hopping scheme with a 2N-slot time period (cycle) has been described.
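The following sketch builds this 2N-slot set programmatically, directly in the form reconstructed above (first half per Equation 279, second half per Equation 280); it is an illustration, not the patent's implementation, and the value α = 1.2 is an arbitrary assumption.

```python
import numpy as np

def embodiment19_matrices(N, alpha):
    """2N-slot hopping set in the form of Equations 281-310 (N = 15 there)."""
    norm = 1.0 / np.sqrt(alpha**2 + 1)
    mats = []
    for i in range(N):                                   # Equation 279 half
        phi = 2 * np.pi * i / N
        mats.append(norm * np.array([[1.0, alpha],
                                     [alpha * np.exp(1j * phi),
                                      np.exp(1j * (phi + np.pi))]]))
    for i in range(N):                                   # Equation 280 half
        phi = 2 * np.pi * i / N
        mats.append(norm * np.array([[alpha * np.exp(1j * phi),
                                      np.exp(1j * (phi + np.pi))],
                                     [1.0, alpha]]))
    return mats

mats = embodiment19_matrices(N=15, alpha=1.2)
print(len(mats))                 # 30 matrices for the 2N-slot period (cycle)
print(np.round(mats[16], 3))     # F[i=16], cf. Equation 297
```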
In this case, as the 2N different precoding matrices, F[0], F[1], F[2], …, F[2N−2], F[2N−1] are prepared. In the single carrier transmission scheme, symbols are arranged in the order F[0], F[1], F[2], …, F[2N−2], F[2N−1] in the time domain (or the frequency domain in the case of the multi-carrier transmission scheme). The present invention is not, however, limited in this way, and the 2N different precoding matrices F[0], F[1], F[2], …, F[2N−2], F[2N−1] generated in the present embodiment may be adapted to a multi-carrier transmission scheme such as an OFDM transmission scheme or the like. As in Embodiment 1, as a scheme of adaptation in this case, precoding weights may be changed by arranging symbols in the frequency domain and in the frequency-time domain. Note that a precoding hopping scheme with a 2N-slot time period (cycle) has been described, but the same advantageous effects may be obtained by randomly using 2N different precoding matrices. In other words, the 2N different precoding matrices do not necessarily need to be used in a regular period (cycle). Furthermore, in the precoding matrix hopping scheme over an H-slot period (cycle) (H being a natural number larger than the number of slots 2N in the period (cycle) of the above scheme of regularly hopping between precoding matrices), when the 2N different precoding matrices of the present embodiment are included, the probability of excellent reception quality increases. In the present embodiment, as one example of the case where λ is treated as a fixed value, a case where λ = 0 radians is described. However, in view of the mapping according to the modulation scheme, λ may be set to a fixed value defined as λ = π/2 radians, λ = π radians, or λ = (3π)/2 radians. (For example, λ may be set to a fixed value defined as λ = π radians in the precoding matrices of the precoding scheme in which hopping between precoding matrices is performed regularly.) With this structure, as in the case where λ is set to a value defined as λ = 0 radians, a reduction in circuit size is achieved.

Embodiment 20

The present embodiment describes a scheme for regularly hopping between precoding matrices using a unitary matrix based on Embodiment 13. In the scheme of regularly hopping between precoding matrices over a period (cycle) with 2N slots, the precoding matrices prepared for the 2N slots are represented as follows.

Math 372
When i = 0, 1, 2, …, N−2, N−1:
F[i] = \frac{1}{\sqrt{\alpha^2+1}} \begin{pmatrix} e^{j\theta_{11}(i)} & \alpha \times e^{j(\theta_{11}(i)+\lambda)} \\ \alpha \times e^{j\theta_{21}(i)} & e^{j(\theta_{21}(i)+\lambda+\delta)} \end{pmatrix}   (Equation 311)

Let α be a fixed value (not depending on i), where α > 0.

Math 373
When i = N, N+1, N+2, …, 2N−2, 2N−1:
F[i] = \frac{1}{\sqrt{\alpha^2+1}} \begin{pmatrix} \alpha \times e^{j(\theta_{11}(i)+\lambda)} & e^{j\theta_{11}(i)} \\ e^{j(\theta_{21}(i)+\lambda+\delta)} & \alpha \times e^{j\theta_{21}(i)} \end{pmatrix}   (Equation 312)

Let α be a fixed value (not depending on i), where α > 0. (The value of α may also be set as α < 0.) Furthermore, the 2×N×M period (cycle) precoding matrices based on Equations 311 and 312 are represented by the following equations.

Math 374
When i = 0, 1, 2, …, N−2, N−1:
F[2×N×k+i] = \frac{1}{\sqrt{\alpha^2+1}} \begin{pmatrix} e^{j\theta_{11}(i)} & \alpha \times e^{j(\theta_{11}(i)+\lambda)} \\ \alpha \times e^{j(\theta_{21}(i)+X_k)} & e^{j(\theta_{21}(i)+X_k+\lambda+\delta)} \end{pmatrix}   (Equation 313)

In this case, k = 0, 1, …, M−2, M−1.

Math 375
When i = N, N+1, N+2, …, 2N−2, 2N−1:
F[2×N×k+i] = \frac{1}{\sqrt{\alpha^2+1}} \begin{pmatrix} \alpha \times e^{j(\theta_{11}(i)+\lambda)} & e^{j\theta_{11}(i)} \\ e^{j(\theta_{21}(i)+\lambda+\delta+Y_k)} & \alpha \times e^{j(\theta_{21}(i)+Y_k)} \end{pmatrix}   (Equation 314)

In this case, k = 0, 1, …, M−2, M−1. Furthermore, Xk = Yk may be true, or Xk ≠ Yk may be true. In this case, precoding matrices F[0]−F[2×N×M−1] are generated. (Precoding matrices F[0]−F[2×N×M−1] may be in any order for the 2×N×M slots in the period (cycle).) Symbol number 2×N×M×i may be precoded using F[0], symbol number 2×N×M×i+1 may be precoded using F[1], . . .
, and symbol number 2×N×M×i+h may be precoded using F[h], for example (h = 0, 1, 2, …, 2×N×M−2, 2×N×M−1). (In this case, as described in previous embodiments, precoding matrices need not be hopped between regularly.) Generating the precoding matrices in this way achieves a precoding matrix hopping scheme with a large period (cycle), allowing for the position of poor reception points to be easily changed, which may lead to improved data reception quality. The 2×N×M period (cycle) precoding matrices in Equation 313 may be changed to the following equation.

Math 376
When i = N, N+1, N+2, …, 2N−2, 2N−1:
F[2×N×k+i] = \frac{1}{\sqrt{\alpha^2+1}} \begin{pmatrix} \alpha \times e^{j(\theta_{11}(i)+\lambda)} & e^{j\theta_{11}(i)} \\ e^{j(\theta_{21}(i)+\lambda+\delta+Y_k)} & \alpha \times e^{j(\theta_{21}(i)+Y_k)} \end{pmatrix}   (Equation 315)

In this case, k = 0, 1, …, M−2, M−1. The 2×N×M period (cycle) precoding matrices in Equation 314 may also be changed to any of Equations 316–318.

Math 377
When i = N, N+1, N+2, …, 2N−2, 2N−1:
F[2×N×k+i] = \frac{1}{\sqrt{\alpha^2+1}} \begin{pmatrix} \alpha \times e^{j(\theta_{11}(i)+\lambda+Y_k)} & e^{j(\theta_{11}(i)+Y_k)} \\ e^{j(\theta_{21}(i)+\lambda+\delta)} & \alpha \times e^{j\theta_{21}(i)} \end{pmatrix}   (Equation 316)

In this case, k = 0, 1, …, M−2, M−1.

Math 378
When i = N, N+1, N+2, …, 2N−2, 2N−1:
F[2×N×k+i] = \frac{1}{\sqrt{\alpha^2+1}} \begin{pmatrix} \alpha \times e^{j\theta_{11}(i)} & e^{j(\theta_{11}(i)+\lambda)} \\ e^{j(\theta_{21}(i)+Y_k)} & \alpha \times e^{j(\theta_{21}(i)+\lambda-\delta+Y_k)} \end{pmatrix}   (Equation 317)

In this case, k = 0, 1, …, M−2, M−1.

Math 379
When i = N, N+1, N+2, …, 2N−2, 2N−1:
F[2×N×k+i] = \frac{1}{\sqrt{\alpha^2+1}} \begin{pmatrix} \alpha \times e^{j(\theta_{11}(i)+Y_k)} & e^{j(\theta_{11}(i)+\lambda+Y_k)} \\ e^{j\theta_{21}(i)} & \alpha \times e^{j(\theta_{21}(i)+\lambda-\delta)} \end{pmatrix}   (Equation 318)

In this case, k = 0, 1, …, M−2, M−1. Focusing on poor reception points, if Equations 313 through 318 satisfy the following conditions,

Math 380
e^{j(\theta_{11}(x)-\theta_{21}(x))} \neq e^{j(\theta_{11}(y)-\theta_{21}(y))} for ∀x, ∀y (x ≠ y; x, y = 0, 1, 2, …, N−2, N−1)   (Condition #64)

Math 381
e^{j(\theta_{11}(x)-\theta_{21}(x)-\delta)} \neq e^{j(\theta_{11}(y)-\theta_{21}(y)-\delta)} for ∀x, ∀y (x ≠ y; x, y = 0, 1, 2, …, N−2, N−1)   (Condition #65)

Math 382
\theta_{11}(x) = \theta_{11}(x+N) for ∀x (x = 0, 1, 2, …, N−2, N−1) and \theta_{21}(y) = \theta_{21}(y+N) for ∀y (y = 0, 1, 2, …, N−2, N−1)   (Condition #66)

then excellent data reception quality is achieved. Note that in Embodiment 8, Condition #39 and Condition #40 should be satisfied. Focusing on Xk and Yk, if Equations 313 through 318 satisfy the following conditions,

Math 383
X_a \neq X_b + 2 \times s \times \pi for ∀a, ∀b (a ≠ b; a, b = 0, 1, 2, …, M−2, M−1), where s is an integer   (Condition #67)

Math 384
Y_a \neq Y_b + 2 \times u \times \pi for ∀a, ∀b (a ≠ b; a, b = 0, 1, 2, …, M−2, M−1), where u is an integer   (Condition #68)

then excellent data reception quality is achieved. Note that in Embodiment 8, Condition #42 should be satisfied. In Equations 313 through 318, when 0 radians ≤ δ < 2π radians, the matrices are a unitary matrix when δ = π radians and are a non-unitary matrix when δ ≠ π radians. In the present scheme, use of a non-unitary matrix for π/2 radians ≤ |δ| < π radians is one characteristic structure, and excellent data reception quality is obtained, but use of a unitary matrix is also possible. The following provides an example of precoding matrices in the precoding hopping scheme of the present embodiment.
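Conditions #67 and #68 amount to requiring that no two of the Xk (or Yk) offsets differ by a multiple of 2π. The following small check, an illustration rather than anything from the patent, tests exactly that for a candidate list of offsets.

```python
import numpy as np

def distinct_mod_2pi(phases, tol=1e-9):
    """Conditions #67/#68: no two offsets may differ by a multiple of 2*pi."""
    p = np.asarray(phases, dtype=float)
    for a in range(len(p)):
        for b in range(a + 1, len(p)):
            d = (p[a] - p[b]) % (2 * np.pi)
            if min(d, 2 * np.pi - d) < tol:
                return False
    return True

print(distinct_mod_2pi([0.0, np.pi / 2]))   # True: valid choice of X0, X1
print(distinct_mod_2pi([0.0, 2 * np.pi]))   # False: differs by 2*pi
```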
The following matrices are considered when N = 5, M = 2 as an example of the 2×N×M period (cycle) precoding matrices based on Equations 313 through 318 (each group of five equations below corresponds to consecutive values of i):

Math 385–Math 389
For i = 0, 1, …, 4:
F[i] = \frac{1}{\sqrt{\alpha^2+1}} \begin{pmatrix} e^{j0} & \alpha \times e^{j0} \\ \alpha \times e^{j\frac{2i}{5}\pi} & e^{j(\frac{2i}{5}\pi+\pi)} \end{pmatrix}   (Equations 319–323)

Math 390–Math 394
For i = 5, 6, …, 9:
F[i] = \frac{1}{\sqrt{\alpha^2+1}} \begin{pmatrix} \alpha \times e^{j\frac{2(i-5)}{5}\pi} & e^{j(\frac{2(i-5)}{5}\pi+\pi)} \\ e^{j0} & \alpha \times e^{j0} \end{pmatrix}   (Equations 324–328)

Math 395–Math 399
For i = 10, 11, …, 14:
F[i] = \frac{1}{\sqrt{\alpha^2+1}} \begin{pmatrix} e^{j0} & \alpha \times e^{j0} \\ \alpha \times e^{j(\frac{2(i-10)}{5}\pi+\pi)} & e^{j(\frac{2(i-10)}{5}\pi+\pi+\pi)} \end{pmatrix}   (Equations 329–333)

Math 400–Math 404
For i = 15, 16, …, 19:
F[i] = \frac{1}{\sqrt{\alpha^2+1}} \begin{pmatrix} \alpha \times e^{j\frac{2(i-15)}{5}\pi} & e^{j(\frac{2(i-15)}{5}\pi+\pi)} \\ e^{j(0+\pi)} & \alpha \times e^{j(0+\pi)} \end{pmatrix}   (Equations 334–338)

In this way, in the above example, in order to restrict the calculation scale of the above precoding in the transmission device, λ = 0 radians, δ = π radians, X1 = 0 radians, and X2 = π radians are set in Equation 313, and λ = 0 radians, δ = π radians, Y1 = 0 radians, and Y2 = π radians are set in Equation 314. In this case, however, in Equations 313 and 314, λ may be set as a value that varies depending on i, or may be set as the same value. That is to say, in Equations 313 and 314, λ in F[i=x] and λ in F[i=y] (x ≠ y) may be the same value or may be different values. As another scheme, λ is set as a fixed value in Equation 313, λ is set as a fixed value in Equation 314, and the fixed values of λ in Equations 313 and 314 are set as different values. (As still another scheme, the same fixed value of λ may be used in Equations 313 and 314.) As the value to which α is set, the set value described in Embodiment 18 is one effective value. However, not limited to this, α may be set, for example, for each value of i in the precoding matrix F[i], as described in Embodiment 17. (That is to say, in F[i], α need not always be set to a constant value with respect to i.) In the present embodiment, as one example of the case where λ is treated as a fixed value, a case where λ = 0 radians is described. However, in view of the mapping according to the modulation scheme, λ may be set to a fixed value defined as λ = π/2 radians, λ = π radians, or λ = (3π)/2 radians. (For example, λ may be set to a fixed value defined as λ = π radians in the precoding matrices of the precoding scheme in which hopping between precoding matrices is performed regularly.) With this structure, as in the case where λ is set to a value defined as λ = 0 radians, a reduction in circuit size is achieved.

Embodiment 21

The present embodiment describes an example of the precoding scheme of Embodiment 18 in which hopping between precoding matrices is performed regularly.
As an example of the precoding matrices prepared for the N slots based on Equation 269, the following matrices are considered when N = 9:

Math 405–Math 413
For i = 0, 1, …, 8:
F[i] = \frac{1}{\sqrt{\alpha^2+1}} \begin{pmatrix} e^{j0} & \alpha \times e^{j0} \\ \alpha \times e^{j\frac{2i}{9}\pi} & e^{j(\frac{2i}{9}\pi+\pi)} \end{pmatrix}   (Equations 339–347)

In the above equations, there is a special case where α can be set to 1. In this case, Equations 339 through 347 are represented as follows.

Math 414–Math 422
For i = 0, 1, …, 8:
F[i] = \frac{1}{\sqrt{2}} \begin{pmatrix} e^{j0} & e^{j0} \\ e^{j\frac{2i}{9}\pi} & e^{j(\frac{2i}{9}\pi+\pi)} \end{pmatrix}   (Equations 348–356)

As another example of the precoding matrices prepared for the N slots based on Equation 269, the following matrices are considered when N = 15:

Math 423–Math 437
For i = 0, 1, …, 14:
F[i] = \frac{1}{\sqrt{\alpha^2+1}} \begin{pmatrix} e^{j0} & \alpha \times e^{j0} \\ \alpha \times e^{j\frac{2i}{15}\pi} & e^{j(\frac{2i}{15}\pi+\pi)} \end{pmatrix}   (Equations 357–371)

In the above equations, there is a special case where α can be set to 1. In this case, Equations 357 through 371 are represented as follows.
Math 438–Math 452
For i = 0, 1, …, 14:
F[i] = \frac{1}{\sqrt{2}} \begin{pmatrix} e^{j0} & e^{j0} \\ e^{j\frac{2i}{15}\pi} & e^{j(\frac{2i}{15}\pi+\pi)} \end{pmatrix}   (Equations 372–386)

In the present example, α is set to 1. However, the value to which α is set is not limited to this. For example, the value of α may be varied as follows. That is to say, as shown in FIG. 3 or the like, the encoder performs error correction coding, and the value of α may be varied depending on the coding rate used for the error correction coding. For example, there is considered a scheme in which α is set to 1 when the coding rate is 1/2, and to a value other than 1, such as a value satisfying the relationship α > 1 (or α < 1), when the coding rate is 2/3. With this structure, in the reception device, excellent data reception quality may be achieved regardless of the coding rate. (Excellent data reception quality may also be achieved when α is set as a fixed value.) As another example, as described in Embodiment 17, α may be set for each value of i in the precoding matrix F[i]. (That is to say, in F[i], α need not always be set to a constant value with respect to i.) In the present embodiment, the scheme of structuring N different precoding matrices for a precoding hopping scheme with an N-slot time period (cycle) has been described. In this case, as the N different precoding matrices, F[0], F[1], F[2], …, F[N−2], F[N−1] are prepared. In the single carrier transmission scheme, symbols are arranged in the order F[0], F[1], F[2], …, F[N−2], F[N−1] in the time domain (or the frequency domain in the case of the multi-carrier transmission scheme). The present invention is not, however, limited in this way, and the N different precoding matrices F[0], F[1], F[2], …, F[N−2], F[N−1] generated in the present embodiment may be adapted to a multi-carrier transmission scheme such as an OFDM transmission scheme or the like. As in Embodiment 1, as a scheme of adaptation in this case, precoding weights may be changed by arranging symbols in the frequency domain and in the frequency-time domain. Note that a precoding hopping scheme with an N-slot time period (cycle) has been described, but the same advantageous effects may be obtained by randomly using N different precoding matrices. In other words, the N different precoding matrices do not necessarily need to be used in a regular period (cycle).

Embodiment 22

The present embodiment describes an example of the precoding scheme of Embodiment 19 in which hopping between precoding matrices is performed regularly.
Embodiment 22

The present embodiment describes an example of the precoding scheme of Embodiment 19 in which hopping between precoding matrices is performed regularly.

As an example of the precoding matrices prepared for the 2N slots based on Equations 279 and 280, the following matrices are considered when N = 9. For i = 0, 1, 2, ..., 8 (Math 453 through Math 461):

$$F[i]=\frac{1}{\sqrt{\alpha^{2}+1}}\begin{pmatrix} e^{j0} & \alpha\times e^{j0} \\ \alpha\times e^{j\frac{2i\pi}{9}} & e^{j\left(\frac{2i\pi}{9}+\pi\right)} \end{pmatrix}\qquad\text{(Equations 387 through 395)}$$

For i = 9, 10, 11, ..., 17 (Math 462 through Math 470):

$$F[i]=\frac{1}{\sqrt{\alpha^{2}+1}}\begin{pmatrix} \alpha\times e^{j\frac{2(i-9)\pi}{9}} & e^{j\left(\frac{2(i-9)\pi}{9}+\pi\right)} \\ e^{j0} & \alpha\times e^{j0} \end{pmatrix}\qquad\text{(Equations 396 through 404)}$$

In the above equations, there is a special case where α can be set to 1. In this case, Equations 387 through 404 are represented as follows. For i = 0, 1, 2, ..., 8 (Math 471 through Math 479):

$$F[i]=\frac{1}{\sqrt{2}}\begin{pmatrix} e^{j0} & e^{j0} \\ e^{j\frac{2i\pi}{9}} & e^{j\left(\frac{2i\pi}{9}+\pi\right)} \end{pmatrix}\qquad\text{(Equations 405 through 413)}$$

For i = 9, 10, 11, ..., 17 (Math 480 through Math 488):

$$F[i]=\frac{1}{\sqrt{2}}\begin{pmatrix} e^{j\frac{2(i-9)\pi}{9}} & e^{j\left(\frac{2(i-9)\pi}{9}+\pi\right)} \\ e^{j0} & e^{j0} \end{pmatrix}\qquad\text{(Equations 414 through 422)}$$

Also, α may be set to 1 in Equations 281 through 310 presented in Embodiment 19. As the value to which α is set, the above-described set value is one of the effective values. However, α is not limited to this; for example, α may be set for each value of i in the precoding matrix F[i], as described in Embodiment 17. (That is to say, in F[i], α need not always be set to a constant value with respect to i.)

In the present embodiment, the scheme of structuring 2N different precoding matrices for a precoding hopping scheme with a 2N-slot time period (cycle) has been described. In this case, as the 2N different precoding matrices, F[0], F[1], F[2], ..., F[2N−2], F[2N−1] are prepared. In the single carrier transmission scheme, symbols are arranged in the order F[0], F[1], F[2], ..., F[2N−2], F[2N−1] in the time domain (or the frequency domain in the case of the multi-carrier transmission scheme).
The present invention is not, however, limited in this way, and the 2N different precoding matrices F[0], F[1], F[2], ..., F[2N−2], F[2N−1] generated in the present embodiment may be adapted to a multi-carrier transmission scheme such as an OFDM transmission scheme or the like. As in Embodiment 1, as a scheme of adaptation in this case, precoding weights may be changed by arranging symbols in the frequency domain or in the frequency-time domain. Note that a precoding hopping scheme with a 2N-slot time period (cycle) has been described, but the same advantageous effects may be obtained by randomly using 2N different precoding matrices. In other words, the 2N different precoding matrices do not necessarily need to be used in a regular period (cycle). Furthermore, in the precoding matrix hopping scheme over an H-slot period (cycle) (H being a natural number larger than the number of slots 2N in the period (cycle) of the above scheme of regularly hopping between precoding matrices), when the 2N different precoding matrices of the present embodiment are included, the probability of excellent reception quality increases.
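To make the 2N-slot structure concrete, the sketch below builds the full set of 2N matrices, using the row-swapped form for the second half of the period (cycle). It is a sketch under the assumptions that numpy is available and that the phases are the evenly spaced ones of Equations 387 through 404 with N = 9; the function name is an illustrative choice.

```python
import numpy as np

def precoding_matrices_2N(N: int, alpha: float = 1.0):
    """Sketch of the 2N-slot hopping pattern: slots 0..N-1 use one matrix
    structure, slots N..2N-1 the row-swapped structure."""
    scale = 1.0 / np.sqrt(alpha**2 + 1.0)
    F = []
    for i in range(2 * N):
        theta = 2.0 * (i % N) * np.pi / N
        if i < N:   # Equations 387-395: phase rotation in the second row
            M = [[np.exp(1j * 0.0),           alpha * np.exp(1j * 0.0)],
                 [alpha * np.exp(1j * theta), np.exp(1j * (theta + np.pi))]]
        else:       # Equations 396-404: phase rotation moved to the first row
            M = [[alpha * np.exp(1j * theta), np.exp(1j * (theta + np.pi))],
                 [np.exp(1j * 0.0),           alpha * np.exp(1j * 0.0)]]
        F.append(scale * np.array(M))
    return F

F = precoding_matrices_2N(N=9, alpha=1.0)
print(len(F))   # 18 matrices for the 2N-slot period (cycle)
```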
Embodiment 23

In Embodiment 9, a scheme for regularly hopping between precoding matrices with use of a unitary matrix has been described. In the present embodiment, a scheme for regularly hopping between precoding matrices with use of a matrix different from that in Embodiment 9 is described.

First, a precoding matrix F, a basic precoding matrix, is expressed by the following equation (Math 489):

$$F=\begin{pmatrix} A\times e^{j\mu_{11}} & B\times e^{j\mu_{12}} \\ C\times e^{j\mu_{21}} & 0 \end{pmatrix}\qquad\text{(Equation 423)}$$

In Equation 423, A, B, and C are real numbers, μ11, μ12, and μ21 are real numbers, and their units are radians. In the scheme of regularly hopping between precoding matrices over a period (cycle) with N slots, the precoding matrices prepared for the N slots are represented as follows (Math 490):

$$F[i]=\begin{pmatrix} A\times e^{j(\mu_{11}+\theta_{11}(i))} & B\times e^{j(\mu_{12}+\theta_{11}(i))} \\ C\times e^{j(\mu_{21}+\theta_{21}(i))} & 0 \end{pmatrix}\qquad\text{(Equation 424)}$$

In this case, i = 0, 1, 2, ..., N−2, N−1. Also, A, B, and C are fixed values regardless of i, and μ11, μ12, and μ21 are fixed values regardless of i. If a matrix represented by the format of Equation 424 is treated as a precoding matrix, "0" is present as one element of the precoding matrix, which has the advantageous effect that the poor reception points described in the other embodiments can be reduced.

Also, another basic precoding matrix different from that expressed by Equation 423 is expressed by the following equation (Math 491):

$$F=\begin{pmatrix} A\times e^{j\mu_{11}} & B\times e^{j\mu_{12}} \\ 0 & D\times e^{j\mu_{22}} \end{pmatrix}\qquad\text{(Equation 425)}$$

In Equation 425, A, B, and D are real numbers, μ11, μ12, and μ22 are real numbers, and their units are radians. In the scheme of regularly hopping between precoding matrices over a period (cycle) with N slots, the precoding matrices prepared for the N slots are represented as follows (Math 492):

$$F[i]=\begin{pmatrix} A\times e^{j(\mu_{11}+\theta_{11}(i))} & B\times e^{j(\mu_{12}+\theta_{11}(i))} \\ 0 & D\times e^{j(\mu_{22}+\theta_{21}(i))} \end{pmatrix}\qquad\text{(Equation 426)}$$

In this case, i = 0, 1, 2, ..., N−2, N−1. Also, A, B, and D are fixed values regardless of i, and μ11, μ12, and μ22 are fixed values regardless of i. If a matrix represented by the format of Equation 426 is treated as a precoding matrix, "0" is present as one element of the precoding matrix, which has the advantageous effect that the poor reception points described in the other embodiments can be reduced.

Also, another basic precoding matrix different from those expressed by Equations 423 and 425 is expressed by the following equation (Math 493):

$$F=\begin{pmatrix} A\times e^{j\mu_{11}} & 0 \\ C\times e^{j\mu_{21}} & D\times e^{j\mu_{22}} \end{pmatrix}\qquad\text{(Equation 427)}$$

In Equation 427, A, C, and D are real numbers, μ11, μ21, and μ22 are real numbers, and their units are radians. In the scheme of regularly hopping between precoding matrices over a period (cycle) with N slots, the precoding matrices prepared for the N slots are represented as follows (Math 494):

$$F[i]=\begin{pmatrix} A\times e^{j(\mu_{11}+\theta_{11}(i))} & 0 \\ C\times e^{j(\mu_{21}+\theta_{21}(i))} & D\times e^{j(\mu_{22}+\theta_{21}(i))} \end{pmatrix}\qquad\text{(Equation 428)}$$

In this case, i = 0, 1, 2, ..., N−2, N−1. Also, A, C, and D are fixed values regardless of i, and μ11, μ21, and μ22 are fixed values regardless of i. If a matrix represented by the format of Equation 428 is treated as a precoding matrix, "0" is present as one element of the precoding matrix, which has the advantageous effect that the poor reception points described in the other embodiments can be reduced.

Also, another basic precoding matrix different from those expressed by Equations 423, 425, and 427 is expressed by the following equation (Math 495):

$$F=\begin{pmatrix} 0 & B\times e^{j\mu_{12}} \\ C\times e^{j\mu_{21}} & D\times e^{j\mu_{22}} \end{pmatrix}\qquad\text{(Equation 429)}$$

In Equation 429, B, C, and D are real numbers, μ12, μ21, and μ22 are real numbers, and their units are radians. In the scheme of regularly hopping between precoding matrices over a period (cycle) with N slots, the precoding matrices prepared for the N slots are represented as follows (Math 496):

$$F[i]=\begin{pmatrix} 0 & B\times e^{j(\mu_{12}+\theta_{11}(i))} \\ C\times e^{j(\mu_{21}+\theta_{21}(i))} & D\times e^{j(\mu_{22}+\theta_{21}(i))} \end{pmatrix}\qquad\text{(Equation 430)}$$

In this case, i = 0, 1, 2, ..., N−2, N−1. Also, B, C, and D are fixed values regardless of i, and μ12, μ21, and μ22 are fixed values regardless of i. If a matrix represented by the format of Equation 430 is treated as a precoding matrix, "0" is present as one element of the precoding matrix, which has the advantageous effect that the poor reception points described in the other embodiments can be reduced.

From Condition #5 (Math 106) and Condition #6 (Math 107) in Embodiment 3, the following conditions are important for achieving excellent data reception quality.

Condition #69 (Math 497):
$$e^{j(\theta_{11}(x)-\theta_{21}(x))}\neq e^{j(\theta_{11}(y)-\theta_{21}(y))}\ \text{for}\ \forall x,\forall y\ (x\neq y;\ x,y=0,1,2,\ldots,N-2,N-1)$$

Condition #70 (Math 498):
$$e^{j(\theta_{11}(x)-\theta_{21}(x)-\pi)}\neq e^{j(\theta_{11}(y)-\theta_{21}(y)-\pi)}\ \text{for}\ \forall x,\forall y\ (x\neq y;\ x,y=0,1,2,\ldots,N-2,N-1)$$

In order to distribute the poor reception points evenly with regard to phase in the complex plane, as described in Embodiment 6, Condition #71 and Condition #72 are provided.

Condition #71 (Math 499):
$$\frac{e^{j(\theta_{11}(x+1)-\theta_{21}(x+1))}}{e^{j(\theta_{11}(x)-\theta_{21}(x))}}=e^{j\frac{2\pi}{N}}\ \text{for}\ \forall x\ (x=0,1,2,\ldots,N-2)$$

Condition #72 (Math 500):
$$\frac{e^{j(\theta_{11}(x+1)-\theta_{21}(x+1))}}{e^{j(\theta_{11}(x)-\theta_{21}(x))}}=e^{-j\frac{2\pi}{N}}\ \text{for}\ \forall x\ (x=0,1,2,\ldots,N-2)$$

With this structure, the reception device can avoid poor reception points in the LOS environment, and thus can obtain the advantageous effect of improving the data reception quality. Note that, as an example of the above-described scheme for regularly hopping between precoding matrices, there is a scheme for fixing θ11(i) to 0 radians (θ11(i) is set to a constant value regardless of i; the constant may also be a value other than 0 radians) so that θ11(i) and θ21(i) satisfy the above-described conditions. Also, there is a scheme for not fixing θ11(i) but instead fixing θ21(i) to 0 radians (θ21(i) is set to a constant value regardless of i; the constant may also be a value other than 0 radians) so that θ11(i) and θ21(i) satisfy the above-described conditions.
The present embodiment describes the scheme of structuring N different precoding matrices for a precoding hopping scheme with an N-slot time period (cycle). In this case, as the N different precoding matrices, F[0], F[1], F[2], ..., F[N−2], F[N−1] are prepared. In a single carrier transmission scheme, symbols are arranged in the order F[0], F[1], F[2], ..., F[N−2], F[N−1] in the time domain (or the frequency domain in the case of a multi-carrier transmission scheme). However, this is not the only example, and the N different precoding matrices F[0], F[1], F[2], ..., F[N−2], F[N−1] generated according to the present embodiment may be adapted to a multi-carrier transmission scheme such as an OFDM transmission scheme or the like. As in Embodiment 1, as a scheme of adaptation in this case, precoding weights may be changed by arranging symbols in the frequency domain or in the frequency-time domain. Note that a precoding hopping scheme with an N-slot time period (cycle) has been described, but the same advantageous effects may be obtained by randomly using N different precoding matrices. In other words, the N different precoding matrices do not necessarily need to be used in a regular period (cycle). Furthermore, in the precoding matrix hopping scheme over an H-slot period (cycle) (H being a natural number larger than the number of slots N in the period (cycle) of the above scheme of regularly hopping between precoding matrices), when the N different precoding matrices of the present embodiment are included, the probability of excellent reception quality increases. In this case, Condition #69 and Condition #70 can be replaced by the following conditions. (The number of slots in the period (cycle) is considered to be N.)

Condition #73 (Math 501):
$$e^{j(\theta_{11}(x)-\theta_{21}(x))}\neq e^{j(\theta_{11}(y)-\theta_{21}(y))}\ \text{for}\ \exists x,\exists y\ (x\neq y;\ x,y=0,1,2,\ldots,N-2,N-1)$$

Condition #74 (Math 502):
$$e^{j(\theta_{11}(x)-\theta_{21}(x)-\pi)}\neq e^{j(\theta_{11}(y)-\theta_{21}(y)-\pi)}\ \text{for}\ \exists x,\exists y\ (x\neq y;\ x,y=0,1,2,\ldots,N-2,N-1)$$
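Conditions #69 through #72 are simple to verify numerically for a candidate design. The sketch below is a hedged illustration rather than a prescribed procedure; the phase functions passed in the usage lines, with θ11(i) fixed to 0 radians as described above and θ21(i) evenly spaced, are one design that satisfies the conditions.

```python
import numpy as np

def check_conditions_69_72(theta11, theta21, N, tol=1e-9):
    """Check Conditions #69-#72 for candidate phase functions given as
    length-N arrays (radians). Condition #70 equals Condition #69 after
    multiplying both sides by e^{-j*pi}, so checking #69 suffices."""
    d = np.exp(1j * (theta11 - theta21))   # e^{j(theta11(i) - theta21(i))}
    # Conditions #69/#70: the N phase-difference points must all be distinct.
    distinct = all(abs(d[x] - d[y]) > tol
                   for x in range(N) for y in range(x + 1, N))
    # Conditions #71/#72: consecutive ratios all equal e^{+-j 2*pi/N}.
    ratios = d[1:] / d[:-1]
    uniform = (np.allclose(ratios, np.exp(1j * 2 * np.pi / N)) or
               np.allclose(ratios, np.exp(-1j * 2 * np.pi / N)))
    return distinct, uniform

N = 8
theta11 = np.zeros(N)                      # fix theta11(i) to 0 radians
theta21 = 2.0 * np.pi * np.arange(N) / N   # evenly spaced phase differences
print(check_conditions_69_72(theta11, theta21, N))   # (True, True)
```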
Embodiment 24

In Embodiment 10, the scheme for regularly hopping between precoding matrices using a unitary matrix is described. In contrast, the present embodiment describes a scheme for regularly hopping between precoding matrices using a matrix different from that used in Embodiment 10.

In the scheme of regularly hopping between precoding matrices over a period (cycle) with 2N slots, the precoding matrices prepared for the 2N slots are represented as follows.

For i = 0, 1, 2, ..., N−2, N−1 (Math 503):

$$F[i]=\begin{pmatrix} A\times e^{j(\mu_{11}+\theta_{11}(i))} & B\times e^{j(\mu_{12}+\theta_{11}(i))} \\ C\times e^{j(\mu_{21}+\theta_{21}(i))} & 0 \end{pmatrix}\qquad\text{(Equation 431)}$$

Here, let A, B, and C be real numbers, and μ11, μ12, and μ21 be real numbers expressed in radians. In addition, A, B, and C are fixed values not depending on i. Similarly, μ11, μ12, and μ21 are fixed values not depending on i.

For i = N, N+1, N+2, ..., 2N−2, 2N−1 (Math 504):

$$F[i]=\begin{pmatrix} \alpha\times e^{j(\nu_{11}+\psi_{11}(i))} & \beta\times e^{j(\nu_{12}+\psi_{11}(i))} \\ 0 & \delta\times e^{j(\nu_{22}+\psi_{21}(i))} \end{pmatrix}\qquad\text{(Equation 432)}$$

Here, let α, β, and δ be real numbers, and ν11, ν12, and ν22 be real numbers expressed in radians. In addition, α, β, and δ are fixed values not depending on i. Similarly, ν11, ν12, and ν22 are fixed values not depending on i.

The precoding matrices prepared for the 2N slots different from those in Equations 431 and 432 are represented by the following equations.

For i = 0, 1, 2, ..., N−2, N−1 (Math 505):

$$F[i]=\begin{pmatrix} A\times e^{j(\mu_{11}+\theta_{11}(i))} & B\times e^{j(\mu_{12}+\theta_{11}(i))} \\ C\times e^{j(\mu_{21}+\theta_{21}(i))} & 0 \end{pmatrix}\qquad\text{(Equation 433)}$$

Here, let A, B, and C be real numbers, and μ11, μ12, and μ21 be real numbers expressed in radians. In addition, A, B, and C are fixed values not depending on i. Similarly, μ11, μ12, and μ21 are fixed values not depending on i.

For i = N, N+1, N+2, ..., 2N−2, 2N−1 (Math 506):

$$F[i]=\begin{pmatrix} 0 & \beta\times e^{j(\nu_{12}+\psi_{11}(i))} \\ \gamma\times e^{j(\nu_{21}+\psi_{21}(i))} & \delta\times e^{j(\nu_{22}+\psi_{21}(i))} \end{pmatrix}\qquad\text{(Equation 434)}$$

Here, let β, γ, and δ be real numbers, and ν12, ν21, and ν22 be real numbers expressed in radians. In addition, β, γ, and δ are fixed values not depending on i. Similarly, ν12, ν21, and ν22 are fixed values not depending on i.

The precoding matrices prepared for the 2N slots different from those described above are represented by the following equations.

For i = 0, 1, 2, ..., N−2, N−1 (Math 507):

$$F[i]=\begin{pmatrix} A\times e^{j(\mu_{11}+\theta_{11}(i))} & 0 \\ C\times e^{j(\mu_{21}+\theta_{21}(i))} & D\times e^{j(\mu_{22}+\theta_{21}(i))} \end{pmatrix}\qquad\text{(Equation 435)}$$

Here, let A, C, and D be real numbers, and μ11, μ21, and μ22 be real numbers expressed in radians. In addition, A, C, and D are fixed values not depending on i. Similarly, μ11, μ21, and μ22 are fixed values not depending on i.

For i = N, N+1, N+2, ..., 2N−2, 2N−1 (Math 508):

$$F[i]=\begin{pmatrix} \alpha\times e^{j(\nu_{11}+\psi_{11}(i))} & \beta\times e^{j(\nu_{12}+\psi_{11}(i))} \\ 0 & \delta\times e^{j(\nu_{22}+\psi_{21}(i))} \end{pmatrix}\qquad\text{(Equation 436)}$$

Here, let α, β, and δ be real numbers, and ν11, ν12, and ν22 be real numbers expressed in radians. In addition, α, β, and δ are fixed values not depending on i. Similarly, ν11, ν12, and ν22 are fixed values not depending on i.

The precoding matrices prepared for the 2N slots different from those described above are represented by the following equations.

For i = 0, 1, 2, ..., N−2, N−1 (Math 509):

$$F[i]=\begin{pmatrix} A\times e^{j(\mu_{11}+\theta_{11}(i))} & 0 \\ C\times e^{j(\mu_{21}+\theta_{21}(i))} & D\times e^{j(\mu_{22}+\theta_{21}(i))} \end{pmatrix}\qquad\text{(Equation 437)}$$

Here, let A, C, and D be real numbers, and μ11, μ21, and μ22 be real numbers expressed in radians. In addition, A, C, and D are fixed values not depending on i. Similarly, μ11, μ21, and μ22 are fixed values not depending on i.

For i = N, N+1, N+2, ..., 2N−2, 2N−1 (Math 510):

$$F[i]=\begin{pmatrix} 0 & \beta\times e^{j(\nu_{12}+\psi_{11}(i))} \\ \gamma\times e^{j(\nu_{21}+\psi_{21}(i))} & \delta\times e^{j(\nu_{22}+\psi_{21}(i))} \end{pmatrix}\qquad\text{(Equation 438)}$$

Here, let β, γ, and δ be real numbers, and ν12, ν21, and ν22 be real numbers expressed in radians. In addition, β, γ, and δ are fixed values not depending on i. Similarly, ν12, ν21, and ν22 are fixed values not depending on i.

Making the same considerations as in Condition #5 (Math 106) and Condition #6 (Math 107) of Embodiment 3, the following conditions are important for achieving excellent data reception quality.

Condition #75 (Math 511):
$$e^{j(\theta_{11}(x)-\theta_{21}(x))}\neq e^{j(\theta_{11}(y)-\theta_{21}(y))}\ \text{for}\ \forall x,\forall y\ (x\neq y;\ x,y=0,1,2,\ldots,N-2,N-1)$$

Condition #76 (Math 512):
$$e^{j(\psi_{11}(x)-\psi_{21}(x))}\neq e^{j(\psi_{11}(y)-\psi_{21}(y))}\ \text{for}\ \forall x,\forall y\ (x\neq y;\ x,y=N,N+1,N+2,\ldots,2N-2,2N-1)$$

Next, in order to distribute the poor reception points evenly with regard to phase in the complex plane, as described in Embodiment 6, Condition #77 or Condition #78 is provided.

Condition #77 (Math 513):
$$\frac{e^{j(\theta_{11}(x+1)-\theta_{21}(x+1))}}{e^{j(\theta_{11}(x)-\theta_{21}(x))}}=e^{j\frac{2\pi}{N}}\ \text{for}\ \forall x\ (x=0,1,2,\ldots,N-2)$$

Condition #78 (Math 514):
$$\frac{e^{j(\theta_{11}(x+1)-\theta_{21}(x+1))}}{e^{j(\theta_{11}(x)-\theta_{21}(x))}}=e^{-j\frac{2\pi}{N}}\ \text{for}\ \forall x\ (x=0,1,2,\ldots,N-2)$$

Similarly, in order to distribute the poor reception points evenly with regard to phase in the complex plane, Condition #79 or Condition #80 is provided.
Condition #79 (Math 515):
$$\frac{e^{j(\psi_{11}(x+1)-\psi_{21}(x+1))}}{e^{j(\psi_{11}(x)-\psi_{21}(x))}}=e^{j\frac{2\pi}{N}}\ \text{for}\ \forall x\ (x=N,N+1,N+2,\ldots,2N-2)$$

Condition #80 (Math 516):
$$\frac{e^{j(\psi_{11}(x+1)-\psi_{21}(x+1))}}{e^{j(\psi_{11}(x)-\psi_{21}(x))}}=e^{-j\frac{2\pi}{N}}\ \text{for}\ \forall x\ (x=N,N+1,N+2,\ldots,2N-2)$$

The above arrangement reduces the number of poor reception points described in the other embodiments because one of the elements of each precoding matrix is "0". In addition, the reception device can improve reception quality because poor reception points are effectively avoided, especially in an LOS environment.

In an alternative to the above-described precoding scheme of regularly hopping between precoding matrices, θ11(i) is fixed, for example, to 0 radians (a fixed value not depending on i; a value other than 0 radians may also be applicable), and θ11(i) and θ21(i) satisfy the conditions described above. In another alternative scheme, θ21(i) instead of θ11(i) is fixed, for example, to 0 radians (a fixed value not depending on i; a value other than 0 radians may also be applicable), and θ11(i) and θ21(i) satisfy the conditions described above. Similarly, in another alternative scheme, ψ11(i) is fixed, for example, to 0 radians (a fixed value not depending on i; a value other than 0 radians may also be applicable), and ψ11(i) and ψ21(i) satisfy the conditions described above. Similarly, in another alternative scheme, ψ21(i) instead of ψ11(i) is fixed, for example, to 0 radians (a fixed value not depending on i; a value other than 0 radians may also be applicable), and ψ11(i) and ψ21(i) satisfy the conditions described above.

The present embodiment describes the scheme of structuring 2N different precoding matrices for a precoding hopping scheme with a 2N-slot time period (cycle). In this case, as the 2N different precoding matrices, F[0], F[1], F[2], ..., F[2N−2], F[2N−1] are prepared. In a single carrier transmission scheme, symbols are arranged in the order F[0], F[1], F[2], ..., F[2N−2], F[2N−1] in the time domain (or the frequency domain in the case of multi-carrier transmission). However, this is not the only example, and the 2N different precoding matrices F[0], F[1], F[2], ..., F[2N−2], F[2N−1] generated in the present embodiment may be adapted to a multi-carrier transmission scheme such as an OFDM transmission scheme or the like. As in Embodiment 1, as a scheme of adaptation in this case, precoding weights may be changed by arranging symbols in the frequency domain or in the frequency-time domain. Note that a precoding hopping scheme with a 2N-slot time period (cycle) has been described, but the same advantageous effects may be obtained by randomly using 2N different precoding matrices. In other words, the 2N different precoding matrices do not necessarily need to be used in a regular period (cycle). Furthermore, in the precoding matrix hopping scheme over an H-slot period (cycle) (H being a natural number larger than the number of slots 2N in the period (cycle) of the above scheme of regularly hopping between precoding matrices), when the 2N different precoding matrices of the present embodiment are included, the probability of excellent reception quality increases.
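A small sketch can make the two-half structure of Equations 431 and 432 concrete. In the code below, the amplitudes, the fixed phases μ and ν, and the evenly spaced phase functions are all illustrative assumptions; only the placement of the zero element in each half of the period (cycle) is taken from the equations.

```python
import numpy as np

def zero_element_matrices_2N(N, A, B, C, mu, alpha, beta, delta, nu):
    """Sketch of the 2N-slot family of Equations 431 and 432: slots 0..N-1
    carry a zero in the lower-right entry, slots N..2N-1 a zero in the
    lower-left entry. mu = (mu11, mu12, mu21); nu = (nu11, nu12, nu22).
    The evenly spaced phase functions below are one choice consistent with
    Conditions #75 through #80, not the only one."""
    mu11, mu12, mu21 = mu
    nu11, nu12, nu22 = nu
    F = []
    for i in range(N):                       # Equation 431
        th11, th21 = 0.0, 2.0 * np.pi * i / N
        F.append(np.array([
            [A * np.exp(1j * (mu11 + th11)), B * np.exp(1j * (mu12 + th11))],
            [C * np.exp(1j * (mu21 + th21)), 0.0],
        ]))
    for i in range(N, 2 * N):                # Equation 432
        ps11, ps21 = 0.0, 2.0 * np.pi * (i - N) / N
        F.append(np.array([
            [alpha * np.exp(1j * (nu11 + ps11)), beta * np.exp(1j * (nu12 + ps11))],
            [0.0, delta * np.exp(1j * (nu22 + ps21))],
        ]))
    return F

F = zero_element_matrices_2N(4, 1, 1, 1, (0, 0, 0), 1, 1, 1, (0, 0, 0))
print(len(F))   # 8 matrices for the 2N-slot period (cycle)
```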
Embodiment 25

The present embodiment describes a scheme for increasing the period (cycle) size of precoding hops between the precoding matrices, by applying Embodiment 17 to the precoding matrices described in Embodiment 23.

As described in Embodiment 23, in the scheme of regularly hopping between precoding matrices over a period (cycle) with N slots, the precoding matrices prepared for the N slots are represented as follows (Math 517):

$$F[i]=\begin{pmatrix} A\times e^{j(\mu_{11}+\theta_{11}(i))} & B\times e^{j(\mu_{12}+\theta_{11}(i))} \\ C\times e^{j(\mu_{21}+\theta_{21}(i))} & 0 \end{pmatrix}\qquad\text{(Equation 439)}$$

Here, i = 0, 1, 2, ..., N−2, N−1. In addition, A, B, and C are fixed values not depending on i. Similarly, μ11, μ12, and μ21 are fixed values not depending on i. Furthermore, the N×M period (cycle) precoding matrices based on Equation 439 are represented by the following equation (Math 518):

$$F[N\times k+i]=\begin{pmatrix} A\times e^{j(\mu_{11}+\theta_{11}(i))} & B\times e^{j(\mu_{12}+\theta_{11}(i))} \\ C\times e^{j(\mu_{21}+\theta_{21}(i)+X_{k})} & 0 \end{pmatrix}\qquad\text{(Equation 440)}$$

Here, i = 0, 1, 2, ..., N−2, N−1, and k = 0, 1, ..., M−2, M−1. Precoding matrices F[0] to F[N×M−1] are thus generated (the precoding matrices F[0] to F[N×M−1] may be in any order for the N×M slots in the period (cycle)). Symbol number N×M×i may be precoded using F[0], symbol number N×M×i+1 may be precoded using F[1], ..., and symbol number N×M×i+h may be precoded using F[h], for example (h = 0, 1, 2, ..., N×M−2, N×M−1). (In this case, as described in previous embodiments, precoding matrices need not be hopped between regularly.) Generating the precoding matrices in this way achieves a precoding matrix hopping scheme with a large period (cycle), allowing the positions of poor reception points to be easily changed, which may lead to improved data reception quality.

Note that while the N×M period (cycle) precoding matrices have been set to Equation 440, the N×M period (cycle) precoding matrices may be set to the following equation, as described above (Math 519):

$$F[N\times k+i]=\begin{pmatrix} A\times e^{j(\mu_{11}+\theta_{11}(i)+X_{k})} & B\times e^{j(\mu_{12}+\theta_{11}(i)+X_{k})} \\ C\times e^{j(\mu_{21}+\theta_{21}(i))} & 0 \end{pmatrix}\qquad\text{(Equation 441)}$$

Here, i = 0, 1, 2, ..., N−2, N−1, and k = 0, 1, ..., M−2, M−1.

As described in Embodiment 23, in the scheme of regularly hopping between precoding matrices over a period (cycle) with N slots that is different from the above-described N slots, the precoding matrices prepared for the N slots are represented as follows (Math 520):

$$F[i]=\begin{pmatrix} A\times e^{j(\mu_{11}+\theta_{11}(i))} & B\times e^{j(\mu_{12}+\theta_{11}(i))} \\ 0 & D\times e^{j(\mu_{22}+\theta_{21}(i))} \end{pmatrix}\qquad\text{(Equation 442)}$$

Here, i = 0, 1, 2, ..., N−2, N−1. In addition, A, B, and D are fixed values not depending on i. Similarly, μ11, μ12, and μ22 are fixed values not depending on i. Furthermore, the N×M period (cycle) precoding matrices based on Equation 442 are represented by the following equation (Math 521):

$$F[N\times k+i]=\begin{pmatrix} A\times e^{j(\mu_{11}+\theta_{11}(i))} & B\times e^{j(\mu_{12}+\theta_{11}(i))} \\ 0 & D\times e^{j(\mu_{22}+\theta_{21}(i)+X_{k})} \end{pmatrix}\qquad\text{(Equation 443)}$$

Here, i = 0, 1, 2, ..., N−2, N−1, and k = 0, 1, ..., M−2, M−1. Precoding matrices F[0] to F[N×M−1] are thus generated (the precoding matrices F[0] to F[N×M−1] may be in any order for the N×M slots in the period (cycle)). Symbol number N×M×i may be precoded using F[0], symbol number N×M×i+1 may be precoded using F[1], ..., and symbol number N×M×i+h may be precoded using F[h], for example (h = 0, 1, 2, ..., N×M−2, N×M−1). (In this case, as described in previous embodiments, precoding matrices need not be hopped between regularly.) Generating the precoding matrices in this way achieves a precoding matrix hopping scheme with a large period (cycle), allowing the positions of poor reception points to be easily changed, which may lead to improved data reception quality. Note that while the N×M period (cycle) precoding matrices have been set to Equation 443, the N×M period (cycle) precoding matrices may be set to the following equation, as described above.
For Math 522:

$$F[N\times k+i]=\begin{pmatrix} A\times e^{j(\mu_{11}+\theta_{11}(i)+X_{k})} & B\times e^{j(\mu_{12}+\theta_{11}(i)+X_{k})} \\ 0 & D\times e^{j(\mu_{22}+\theta_{21}(i))} \end{pmatrix}\qquad\text{(Equation 444)}$$

Here, i = 0, 1, 2, ..., N−2, N−1, and k = 0, 1, ..., M−2, M−1.

As described in Embodiment 23, in the scheme of regularly hopping between precoding matrices over a period (cycle) with N slots that is different from the above-described N slots, the precoding matrices prepared for the N slots are represented as follows (Math 523):

$$F[i]=\begin{pmatrix} A\times e^{j(\mu_{11}+\theta_{11}(i))} & 0 \\ C\times e^{j(\mu_{21}+\theta_{21}(i))} & D\times e^{j(\mu_{22}+\theta_{21}(i))} \end{pmatrix}\qquad\text{(Equation 445)}$$

Here, i = 0, 1, 2, ..., N−2, N−1. In addition, A, C, and D are fixed values not depending on i. Similarly, μ11, μ21, and μ22 are fixed values not depending on i. Furthermore, the N×M period (cycle) precoding matrices based on Equation 445 are represented by the following equation (Math 524):

$$F[N\times k+i]=\begin{pmatrix} A\times e^{j(\mu_{11}+\theta_{11}(i))} & 0 \\ C\times e^{j(\mu_{21}+\theta_{21}(i)+X_{k})} & D\times e^{j(\mu_{22}+\theta_{21}(i)+X_{k})} \end{pmatrix}\qquad\text{(Equation 446)}$$

Here, i = 0, 1, 2, ..., N−2, N−1, and k = 0, 1, ..., M−2, M−1. Precoding matrices F[0] to F[N×M−1] are thus generated (the precoding matrices F[0] to F[N×M−1] may be in any order for the N×M slots in the period (cycle)). Symbol number N×M×i may be precoded using F[0], symbol number N×M×i+1 may be precoded using F[1], ..., and symbol number N×M×i+h may be precoded using F[h], for example (h = 0, 1, 2, ..., N×M−2, N×M−1). (In this case, as described in previous embodiments, precoding matrices need not be hopped between regularly.) Generating the precoding matrices in this way achieves a precoding matrix hopping scheme with a large period (cycle), allowing the positions of poor reception points to be easily changed, which may lead to improved data reception quality. Note that while the N×M period (cycle) precoding matrices have been set to Equation 446, the N×M period (cycle) precoding matrices may be set to the following equation, as described above (Math 525):

$$F[N\times k+i]=\begin{pmatrix} A\times e^{j(\mu_{11}+\theta_{11}(i)+X_{k})} & 0 \\ C\times e^{j(\mu_{21}+\theta_{21}(i))} & D\times e^{j(\mu_{22}+\theta_{21}(i))} \end{pmatrix}\qquad\text{(Equation 447)}$$

Here, i = 0, 1, 2, ..., N−2, N−1, and k = 0, 1, ..., M−2, M−1.

As described in Embodiment 23, in the scheme of regularly hopping between precoding matrices over a period (cycle) with N slots that is different from the above-described N slots, the precoding matrices prepared for the N slots are represented as follows (Math 526):

$$F[i]=\begin{pmatrix} 0 & B\times e^{j(\mu_{12}+\theta_{11}(i))} \\ C\times e^{j(\mu_{21}+\theta_{21}(i))} & D\times e^{j(\mu_{22}+\theta_{21}(i))} \end{pmatrix}\qquad\text{(Equation 448)}$$

Here, i = 0, 1, 2, ..., N−2, N−1. In addition, B, C, and D are fixed values not depending on i. Similarly, μ12, μ21, and μ22 are fixed values not depending on i. Furthermore, the N×M period (cycle) precoding matrices based on Equation 448 are represented by the following equation (Math 527):

$$F[N\times k+i]=\begin{pmatrix} 0 & B\times e^{j(\mu_{12}+\theta_{11}(i))} \\ C\times e^{j(\mu_{21}+\theta_{21}(i)+X_{k})} & D\times e^{j(\mu_{22}+\theta_{21}(i)+X_{k})} \end{pmatrix}\qquad\text{(Equation 449)}$$

Here, i = 0, 1, 2, ..., N−2, N−1, and k = 0, 1, ..., M−2, M−1. Precoding matrices F[0] to F[N×M−1] are thus generated (the precoding matrices F[0] to F[N×M−1] may be in any order for the N×M slots in the period (cycle)). Symbol number N×M×i may be precoded using F[0], symbol number N×M×i+1 may be precoded using F[1], ..., and symbol number N×M×i+h may be precoded using F[h], for example (h = 0, 1, 2, ..., N×M−2, N×M−1). (In this case, as described in previous embodiments, precoding matrices need not be hopped between regularly.) Generating the precoding matrices in this way achieves a precoding matrix hopping scheme with a large period (cycle), allowing the positions of poor reception points to be easily changed, which may lead to improved data reception quality.
Note that while the N×M period (cycle) precoding matrices have been set to Equation 449, the N×M period (cycle) precoding matrices may be set to the following equation, as described above (Math 528):

$$F[N\times k+i]=\begin{pmatrix} 0 & B\times e^{j(\mu_{12}+\theta_{11}(i)+X_{k})} \\ C\times e^{j(\mu_{21}+\theta_{21}(i))} & D\times e^{j(\mu_{22}+\theta_{21}(i))} \end{pmatrix}\qquad\text{(Equation 450)}$$

Here, i = 0, 1, 2, ..., N−2, N−1, and k = 0, 1, ..., M−2, M−1.

The present embodiment describes the scheme of structuring N×M different precoding matrices for a precoding hopping scheme with N×M slots in the time period (cycle). In this case, as the N×M different precoding matrices, F[0], F[1], F[2], ..., F[N×M−2], F[N×M−1] are prepared. In a single carrier transmission scheme, symbols are arranged in the order F[0], F[1], F[2], ..., F[N×M−2], F[N×M−1] in the time domain (or the frequency domain in the case of multi-carrier transmission). However, this is not the only example, and the N×M different precoding matrices F[0], F[1], F[2], ..., F[N×M−2], F[N×M−1] generated in the present embodiment may be adapted to a multi-carrier transmission scheme such as an OFDM transmission scheme or the like. As in Embodiment 1, as a scheme of adaptation in this case, precoding weights may be changed by arranging symbols in the frequency domain or in the frequency-time domain. Note that a precoding hopping scheme with N×M slots in the time period (cycle) has been described, but the same advantageous effects may be obtained by randomly using N×M different precoding matrices. In other words, the N×M different precoding matrices do not necessarily need to be used in a regular period (cycle). Furthermore, in the precoding matrix hopping scheme over an H-slot period (cycle) (H being a natural number larger than the number of slots N×M in the period (cycle) of the above scheme of regularly hopping between precoding matrices), when the N×M different precoding matrices of the present embodiment are included, the probability of excellent reception quality increases.
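The enlargement from an N-slot period (cycle) to an N×M-slot period (cycle) in Equation 440 amounts to reusing the N base matrices M times while shifting the phase of the nonzero lower-left element by Xk. The sketch below illustrates exactly that operation; the helper name and the sample base matrices and offsets are assumptions for illustration.

```python
import numpy as np

def expand_period(F_base, X):
    """Sketch of Equation 440: enlarge an N-slot zero-element design to an
    N x M slot period by adding the offset X[k] to the phase of the nonzero
    lower-left entry. Per Condition #87, the offsets should be pairwise
    distinct modulo 2*pi."""
    F = []
    for Xk in X:
        rot = np.array([[1.0, 1.0],
                        [np.exp(1j * Xk), 1.0]])   # affects only the C entry
        for Fi in F_base:
            F.append(Fi * rot)                     # elementwise phase offset
    return F                                       # F[N*k + i] = offset F_base[i]

# Sample base design with N = 4 (amplitudes and phases are illustrative).
F_base = [np.array([[1.0, 1.0],
                    [np.exp(2j * np.pi * i / 4), 0.0]]) for i in range(4)]
X = [0.0, np.pi / 2, np.pi]      # M = 3 offsets, distinct modulo 2*pi
F = expand_period(F_base, X)
print(len(F))                    # 12 = N x M matrices in the period (cycle)
```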
Embodiment 26

The present embodiment describes a scheme for increasing the period (cycle) size of precoding hops between the precoding matrices, by applying Embodiment 20 to the precoding matrices described in Embodiment 24.

In the scheme of regularly hopping between precoding matrices over a period (cycle) with 2N slots, the precoding matrices prepared for the 2N slots are represented as follows.

For i = 0, 1, 2, ..., N−2, N−1 (Math 529):

$$F[i]=\begin{pmatrix} A\times e^{j(\mu_{11}+\theta_{11}(i))} & B\times e^{j(\mu_{12}+\theta_{11}(i))} \\ C\times e^{j(\mu_{21}+\theta_{21}(i))} & 0 \end{pmatrix}\qquad\text{(Equation 451)}$$

Here, let A, B, and C be real numbers, and μ11, μ12, and μ21 be real numbers expressed in radians. In addition, A, B, and C are fixed values not depending on i. Similarly, μ11, μ12, and μ21 are fixed values not depending on i.

For i = N, N+1, N+2, ..., 2N−2, 2N−1 (Math 530):

$$F[i]=\begin{pmatrix} \alpha\times e^{j(\nu_{11}+\psi_{11}(i))} & \beta\times e^{j(\nu_{12}+\psi_{11}(i))} \\ 0 & \delta\times e^{j(\nu_{22}+\psi_{21}(i))} \end{pmatrix}\qquad\text{(Equation 452)}$$

Here, let α, β, and δ be real numbers, and ν11, ν12, and ν22 be real numbers expressed in radians. In addition, α, β, and δ are fixed values not depending on i. Similarly, ν11, ν12, and ν22 are fixed values not depending on i.

Furthermore, the 2×N×M period (cycle) precoding matrices based on Equations 451 and 452 are represented by the following equations.

For i = 0, 1, 2, ..., N−2, N−1 (Math 531):

$$F[2\times N\times k+i]=\begin{pmatrix} A\times e^{j(\mu_{11}+\theta_{11}(i))} & B\times e^{j(\mu_{12}+\theta_{11}(i))} \\ C\times e^{j(\mu_{21}+\theta_{21}(i)+X_{k})} & 0 \end{pmatrix}\qquad\text{(Equation 453)}$$

Here, k = 0, 1, ..., M−2, M−1.

For i = N, N+1, N+2, ..., 2N−2, 2N−1 (Math 532):

$$F[2\times N\times k+i]=\begin{pmatrix} \alpha\times e^{j(\nu_{11}+\psi_{11}(i))} & \beta\times e^{j(\nu_{12}+\psi_{11}(i))} \\ 0 & \delta\times e^{j(\nu_{22}+\psi_{21}(i)+Y_{k})} \end{pmatrix}\qquad\text{(Equation 454)}$$

Here, k = 0, 1, ..., M−2, M−1. In addition, Xk = Yk may be true, or Xk ≠ Yk may be true.

Precoding matrices F[0] to F[2×N×M−1] are thus generated (the precoding matrices F[0] to F[2×N×M−1] may be in any order for the 2×N×M slots in the period (cycle)). Symbol number 2×N×M×i may be precoded using F[0], symbol number 2×N×M×i+1 may be precoded using F[1], ..., and symbol number 2×N×M×i+h may be precoded using F[h], for example (h = 0, 1, 2, ..., 2×N×M−2, 2×N×M−1). (In this case, as described in previous embodiments, precoding matrices need not be hopped between regularly.) Generating the precoding matrices in this way achieves a precoding matrix hopping scheme with a large period (cycle), allowing the positions of poor reception points to be easily changed, which may lead to improved data reception quality.

The 2×N×M period (cycle) precoding matrices in Equation 453 may be changed to the following equation. For i = 0, 1, 2, ..., N−2, N−1 (Math 533):

$$F[2\times N\times k+i]=\begin{pmatrix} A\times e^{j(\mu_{11}+\theta_{11}(i)+X_{k})} & B\times e^{j(\mu_{12}+\theta_{11}(i)+X_{k})} \\ C\times e^{j(\mu_{21}+\theta_{21}(i))} & 0 \end{pmatrix}\qquad\text{(Equation 455)}$$

Here, k = 0, 1, ..., M−2, M−1.

The 2×N×M period (cycle) precoding matrices in Equation 454 may be changed to the following equation. For i = N, N+1, N+2, ..., 2N−2, 2N−1 (Math 534):

$$F[2\times N\times k+i]=\begin{pmatrix} \alpha\times e^{j(\nu_{11}+\psi_{11}(i)+Y_{k})} & \beta\times e^{j(\nu_{12}+\psi_{11}(i)+Y_{k})} \\ 0 & \delta\times e^{j(\nu_{22}+\psi_{21}(i))} \end{pmatrix}\qquad\text{(Equation 456)}$$

Here, k = 0, 1, ..., M−2, M−1.

Another example is shown below. In the scheme of regularly hopping between precoding matrices over a period (cycle) with 2N slots, the precoding matrices prepared for the 2N slots are represented as follows.

For i = 0, 1, 2, ..., N−2, N−1 (Math 535):

$$F[i]=\begin{pmatrix} A\times e^{j(\mu_{11}+\theta_{11}(i))} & B\times e^{j(\mu_{12}+\theta_{11}(i))} \\ C\times e^{j(\mu_{21}+\theta_{21}(i))} & 0 \end{pmatrix}\qquad\text{(Equation 457)}$$

Here, let A, B, and C be real numbers, and μ11, μ12, and μ21 be real numbers expressed in radians. In addition, A, B, and C are fixed values not depending on i. Similarly, μ11, μ12, and μ21 are fixed values not depending on i.

For i = N, N+1, N+2, ..., 2N−2, 2N−1 (Math 536):

$$F[i]=\begin{pmatrix} 0 & \beta\times e^{j(\nu_{12}+\psi_{11}(i))} \\ \gamma\times e^{j(\nu_{21}+\psi_{21}(i))} & \delta\times e^{j(\nu_{22}+\psi_{21}(i))} \end{pmatrix}\qquad\text{(Equation 458)}$$

Here, let β, γ, and δ be real numbers, and ν12, ν21, and ν22 be real numbers expressed in radians. In addition, β, γ, and δ are fixed values not depending on i. Similarly, ν12, ν21, and ν22 are fixed values not depending on i.

Furthermore, the 2×N×M period (cycle) precoding matrices based on Equations 457 and 458 are represented by the following equations.

For i = 0, 1, 2, ..., N−2, N−1 (Math 537):

$$F[2\times N\times k+i]=\begin{pmatrix} A\times e^{j(\mu_{11}+\theta_{11}(i))} & B\times e^{j(\mu_{12}+\theta_{11}(i))} \\ C\times e^{j(\mu_{21}+\theta_{21}(i)+X_{k})} & 0 \end{pmatrix}\qquad\text{(Equation 459)}$$

Here, k = 0, 1, ..., M−2, M−1.

For i = N, N+1, N+2, ..., 2N−2, 2N−1 (Math 538):

$$F[2\times N\times k+i]=\begin{pmatrix} 0 & \beta\times e^{j(\nu_{12}+\psi_{11}(i))} \\ \gamma\times e^{j(\nu_{21}+\psi_{21}(i)+Y_{k})} & \delta\times e^{j(\nu_{22}+\psi_{21}(i)+Y_{k})} \end{pmatrix}\qquad\text{(Equation 460)}$$

Here, k = 0, 1, ..., M−2, M−1. Furthermore, Xk = Yk may be true, or Xk ≠ Yk may be true.

Precoding matrices F[0] to F[2×N×M−1] are thus generated (the precoding matrices F[0] to F[2×N×M−1] may be in any order for the 2×N×M slots in the period (cycle)). Symbol number 2×N×M×i may be precoded using F[0], symbol number 2×N×M×i+1 may be precoded using F[1], ..., and symbol number 2×N×M×i+h may be precoded using F[h], for example (h = 0, 1, 2, ..., 2×N×M−2, 2×N×M−1). (In this case, as described in previous embodiments, precoding matrices need not be hopped between regularly.) Generating the precoding matrices in this way achieves a precoding matrix hopping scheme with a large period (cycle), allowing the positions of poor reception points to be easily changed, which may lead to improved data reception quality.

The 2×N×M period (cycle) precoding matrices in Equation 459 may be changed to the following equation.
For i = 0, 1, 2, ..., N−2, N−1 (Math 539):

$$F[2\times N\times k+i]=\begin{pmatrix} A\times e^{j(\mu_{11}+\theta_{11}(i)+X_{k})} & B\times e^{j(\mu_{12}+\theta_{11}(i)+X_{k})} \\ C\times e^{j(\mu_{21}+\theta_{21}(i))} & 0 \end{pmatrix}\qquad\text{(Equation 461)}$$

Here, k = 0, 1, ..., M−2, M−1.

The 2×N×M period (cycle) precoding matrices in Equation 460 may be changed to the following equation. For i = N, N+1, N+2, ..., 2N−2, 2N−1 (Math 540):

$$F[2\times N\times k+i]=\begin{pmatrix} 0 & \beta\times e^{j(\nu_{12}+\psi_{11}(i)+Y_{k})} \\ \gamma\times e^{j(\nu_{21}+\psi_{21}(i))} & \delta\times e^{j(\nu_{22}+\psi_{21}(i))} \end{pmatrix}\qquad\text{(Equation 462)}$$

Here, k = 0, 1, ..., M−2, M−1.

Another example is shown below. In the scheme of regularly hopping between precoding matrices over a period (cycle) with 2N slots, the precoding matrices prepared for the 2N slots are represented as follows.

For i = 0, 1, 2, ..., N−2, N−1 (Math 541):

$$F[i]=\begin{pmatrix} A\times e^{j(\mu_{11}+\theta_{11}(i))} & 0 \\ C\times e^{j(\mu_{21}+\theta_{21}(i))} & D\times e^{j(\mu_{22}+\theta_{21}(i))} \end{pmatrix}\qquad\text{(Equation 463)}$$

Here, let A, C, and D be real numbers, and μ11, μ21, and μ22 be real numbers expressed in radians. In addition, A, C, and D are fixed values not depending on i. Similarly, μ11, μ21, and μ22 are fixed values not depending on i.

For i = N, N+1, N+2, ..., 2N−2, 2N−1 (Math 542):

$$F[i]=\begin{pmatrix} \alpha\times e^{j(\nu_{11}+\psi_{11}(i))} & \beta\times e^{j(\nu_{12}+\psi_{11}(i))} \\ 0 & \delta\times e^{j(\nu_{22}+\psi_{21}(i))} \end{pmatrix}\qquad\text{(Equation 464)}$$

Here, let α, β, and δ be real numbers, and ν11, ν12, and ν22 be real numbers expressed in radians. In addition, α, β, and δ are fixed values not depending on i. Similarly, ν11, ν12, and ν22 are fixed values not depending on i.

Furthermore, the 2×N×M period (cycle) precoding matrices based on Equations 463 and 464 are represented by the following equations.

For i = 0, 1, 2, ..., N−2, N−1 (Math 543):

$$F[2\times N\times k+i]=\begin{pmatrix} A\times e^{j(\mu_{11}+\theta_{11}(i))} & 0 \\ C\times e^{j(\mu_{21}+\theta_{21}(i)+X_{k})} & D\times e^{j(\mu_{22}+\theta_{21}(i)+X_{k})} \end{pmatrix}\qquad\text{(Equation 465)}$$

Here, k = 0, 1, ..., M−2, M−1.

For i = N, N+1, N+2, ..., 2N−2, 2N−1 (Math 544):

$$F[2\times N\times k+i]=\begin{pmatrix} \alpha\times e^{j(\nu_{11}+\psi_{11}(i))} & \beta\times e^{j(\nu_{12}+\psi_{11}(i))} \\ 0 & \delta\times e^{j(\nu_{22}+\psi_{21}(i)+Y_{k})} \end{pmatrix}\qquad\text{(Equation 466)}$$

Here, k = 0, 1, ..., M−2, M−1. Furthermore, Xk = Yk may be true, or Xk ≠ Yk may be true.

Precoding matrices F[0] to F[2×N×M−1] are thus generated (the precoding matrices F[0] to F[2×N×M−1] may be in any order for the 2×N×M slots in the period (cycle)). Symbol number 2×N×M×i may be precoded using F[0], symbol number 2×N×M×i+1 may be precoded using F[1], ..., and symbol number 2×N×M×i+h may be precoded using F[h], for example (h = 0, 1, 2, ..., 2×N×M−2, 2×N×M−1). (In this case, as described in previous embodiments, precoding matrices need not be hopped between regularly.) Generating the precoding matrices in this way achieves a precoding matrix hopping scheme with a large period (cycle), allowing the positions of poor reception points to be easily changed, which may lead to improved data reception quality.

The 2×N×M period (cycle) precoding matrices in Equation 465 may be changed to the following equation. For i = 0, 1, 2, ..., N−2, N−1 (Math 545):

$$F[2\times N\times k+i]=\begin{pmatrix} A\times e^{j(\mu_{11}+\theta_{11}(i)+X_{k})} & 0 \\ C\times e^{j(\mu_{21}+\theta_{21}(i))} & D\times e^{j(\mu_{22}+\theta_{21}(i))} \end{pmatrix}\qquad\text{(Equation 467)}$$

Here, k = 0, 1, ..., M−2, M−1.

The 2×N×M period (cycle) precoding matrices in Equation 466 may be changed to the following equation. For i = N, N+1, N+2, ..., 2N−2, 2N−1 (Math 546):

$$F[2\times N\times k+i]=\begin{pmatrix} \alpha\times e^{j(\nu_{11}+\psi_{11}(i)+Y_{k})} & \beta\times e^{j(\nu_{12}+\psi_{11}(i)+Y_{k})} \\ 0 & \delta\times e^{j(\nu_{22}+\psi_{21}(i))} \end{pmatrix}\qquad\text{(Equation 468)}$$

Here, k = 0, 1, ..., M−2, M−1.

Another example is shown below. In the scheme of regularly hopping between precoding matrices over a period (cycle) with 2N slots, the precoding matrices prepared for the 2N slots are represented as follows.

For i = 0, 1, 2, ..., N−2, N−1 (Math 547):

$$F[i]=\begin{pmatrix} A\times e^{j(\mu_{11}+\theta_{11}(i))} & 0 \\ C\times e^{j(\mu_{21}+\theta_{21}(i))} & D\times e^{j(\mu_{22}+\theta_{21}(i))} \end{pmatrix}\qquad\text{(Equation 469)}$$

Here, let A, C, and D be real numbers, and μ11, μ21, and μ22 be real numbers expressed in radians. In addition, A, C, and D are fixed values not depending on i. Similarly, μ11, μ21, and μ22 are fixed values not depending on i.
For i = N, N+1, N+2, ..., 2N−2, 2N−1 (Math 548):

$$F[i]=\begin{pmatrix} 0 & \beta\times e^{j(\nu_{12}+\psi_{11}(i))} \\ \gamma\times e^{j(\nu_{21}+\psi_{21}(i))} & \delta\times e^{j(\nu_{22}+\psi_{21}(i))} \end{pmatrix}\qquad\text{(Equation 470)}$$

Here, let β, γ, and δ be real numbers, and ν12, ν21, and ν22 be real numbers expressed in radians. In addition, β, γ, and δ are fixed values not depending on i. Similarly, ν12, ν21, and ν22 are fixed values not depending on i.

Furthermore, the 2×N×M period (cycle) precoding matrices based on Equations 469 and 470 are represented by the following equations.

For i = 0, 1, 2, ..., N−2, N−1 (Math 549):

$$F[2\times N\times k+i]=\begin{pmatrix} A\times e^{j(\mu_{11}+\theta_{11}(i))} & 0 \\ C\times e^{j(\mu_{21}+\theta_{21}(i)+X_{k})} & D\times e^{j(\mu_{22}+\theta_{21}(i)+X_{k})} \end{pmatrix}\qquad\text{(Equation 471)}$$

Here, k = 0, 1, ..., M−2, M−1.

For i = N, N+1, N+2, ..., 2N−2, 2N−1 (Math 550):

$$F[2\times N\times k+i]=\begin{pmatrix} 0 & \beta\times e^{j(\nu_{12}+\psi_{11}(i))} \\ \gamma\times e^{j(\nu_{21}+\psi_{21}(i)+Y_{k})} & \delta\times e^{j(\nu_{22}+\psi_{21}(i)+Y_{k})} \end{pmatrix}\qquad\text{(Equation 472)}$$

Here, k = 0, 1, ..., M−2, M−1. Furthermore, Xk = Yk may be true, or Xk ≠ Yk may be true.

Precoding matrices F[0] to F[2×N×M−1] are thus generated (the precoding matrices F[0] to F[2×N×M−1] may be in any order for the 2×N×M slots in the period (cycle)). Symbol number 2×N×M×i may be precoded using F[0], symbol number 2×N×M×i+1 may be precoded using F[1], ..., and symbol number 2×N×M×i+h may be precoded using F[h], for example (h = 0, 1, 2, ..., 2×N×M−2, 2×N×M−1). (In this case, as described in previous embodiments, precoding matrices need not be hopped between regularly.) Generating the precoding matrices in this way achieves a precoding matrix hopping scheme with a large period (cycle), allowing the positions of poor reception points to be easily changed, which may lead to improved data reception quality.

The 2×N×M period (cycle) precoding matrices in Equation 471 may be changed to the following equation. For i = 0, 1, 2, ..., N−2, N−1 (Math 551):

$$F[2\times N\times k+i]=\begin{pmatrix} A\times e^{j(\mu_{11}+\theta_{11}(i)+X_{k})} & 0 \\ C\times e^{j(\mu_{21}+\theta_{21}(i))} & D\times e^{j(\mu_{22}+\theta_{21}(i))} \end{pmatrix}\qquad\text{(Equation 473)}$$

Here, k = 0, 1, ..., M−2, M−1.

The 2×N×M period (cycle) precoding matrices in Equation 472 may be changed to the following equation. For i = N, N+1, N+2, ..., 2N−2, 2N−1 (Math 552):

$$F[2\times N\times k+i]=\begin{pmatrix} 0 & \beta\times e^{j(\nu_{12}+\psi_{11}(i)+Y_{k})} \\ \gamma\times e^{j(\nu_{21}+\psi_{21}(i))} & \delta\times e^{j(\nu_{22}+\psi_{21}(i))} \end{pmatrix}\qquad\text{(Equation 474)}$$

Here, k = 0, 1, ..., M−2, M−1.

Focusing on poor reception points in the above examples, the following conditions are important.

Condition #81 (Math 553):
$$e^{j(\theta_{11}(x)-\theta_{21}(x))}\neq e^{j(\theta_{11}(y)-\theta_{21}(y))}\ \text{for}\ \forall x,\forall y\ (x\neq y;\ x,y=0,1,2,\ldots,N-2,N-1)$$

Condition #82 (Math 554):
$$e^{j(\psi_{11}(x)-\psi_{21}(x))}\neq e^{j(\psi_{11}(y)-\psi_{21}(y))}\ \text{for}\ \forall x,\forall y\ (x\neq y;\ x,y=N,N+1,N+2,\ldots,2N-2,2N-1)$$

Condition #83 (Math 555):
$$\theta_{11}(x)=\theta_{11}(x+N)\ \text{for}\ \forall x\ (x=0,1,2,\ldots,N-2,N-1)\ \text{and}\ \theta_{21}(y)=\theta_{21}(y+N)\ \text{for}\ \forall y\ (y=0,1,2,\ldots,N-2,N-1)$$

Condition #84 (Math 556):
$$\psi_{11}(x)=\psi_{11}(x+N)\ \text{for}\ \forall x\ (x=N,N+1,N+2,\ldots,2N-2,2N-1)\ \text{and}\ \psi_{21}(y)=\psi_{21}(y+N)\ \text{for}\ \forall y\ (y=N,N+1,N+2,\ldots,2N-2,2N-1)$$

By satisfying the conditions shown above, excellent data reception quality is achieved. Furthermore, the following conditions should be satisfied (see Embodiment 24).

Condition #85 (Math 557):
$$e^{j(\theta_{11}(x)-\theta_{21}(x))}\neq e^{j(\theta_{11}(y)-\theta_{21}(y))}\ \text{for}\ \forall x,\forall y\ (x\neq y;\ x,y=0,1,2,\ldots,N-2,N-1)$$

Condition #86 (Math 558):
$$e^{j(\psi_{11}(x)-\psi_{21}(x))}\neq e^{j(\psi_{11}(y)-\psi_{21}(y))}\ \text{for}\ \forall x,\forall y\ (x\neq y;\ x,y=N,N+1,N+2,\ldots,2N-2,2N-1)$$

Focusing on Xk and Yk, the following conditions are noted.

Condition #87 (Math 559):
$$X_{a}\neq X_{b}+2\times s\times\pi\ \text{for}\ \forall a,\forall b\ (a\neq b;\ a,b=0,1,2,\ldots,M-2,M-1)$$

Here, s is an integer.
Condition #88 (Math 560):
$$Y_{a}\neq Y_{b}+2\times u\times\pi\ \text{for}\ \forall a,\forall b\ (a\neq b;\ a,b=0,1,2,\ldots,M-2,M-1)$$

Here, u is an integer.

By satisfying the two conditions shown above, excellent data reception quality is achieved. In Embodiment 25, Condition #87 should be satisfied.

The present embodiment describes the scheme of structuring 2×N×M different precoding matrices for a precoding hopping scheme with 2×N×M slots in the time period (cycle). In this case, as the 2×N×M different precoding matrices, F[0], F[1], F[2], ..., F[2×N×M−2], F[2×N×M−1] are prepared. In a single carrier transmission scheme, symbols are arranged in the order F[0], F[1], F[2], ..., F[2×N×M−2], F[2×N×M−1] in the time domain (or the frequency domain in the case of multi-carrier transmission). However, this is not the only example, and the 2×N×M different precoding matrices F[0], F[1], F[2], ..., F[2×N×M−2], F[2×N×M−1] generated in the present embodiment may be adapted to a multi-carrier transmission scheme such as an OFDM transmission scheme or the like. As in Embodiment 1, as a scheme of adaptation in this case, precoding weights may be changed by arranging symbols in the frequency domain or in the frequency-time domain. Note that a precoding hopping scheme with 2×N×M slots in the time period (cycle) has been described, but the same advantageous effects may be obtained by randomly using 2×N×M different precoding matrices. In other words, the 2×N×M different precoding matrices do not necessarily need to be used in a regular period (cycle). Furthermore, in the precoding matrix hopping scheme over an H-slot period (cycle) (H being a natural number larger than the number of slots 2×N×M in the period (cycle) of the above scheme of regularly hopping between precoding matrices), when the 2×N×M different precoding matrices of the present embodiment are included, the probability of excellent reception quality increases.
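Conditions #87 and #88 only require the offsets to be pairwise distinct modulo 2π, which is straightforward to check numerically. The following is a minimal sketch under that reading; the function name and the evenly spaced sample offsets are illustrative assumptions.

```python
import numpy as np

def distinct_mod_2pi(offsets, tol=1e-9):
    """Check Condition #87 (or #88): the offsets X[0..M-1] (or Y[0..M-1])
    must be pairwise distinct modulo 2*pi, i.e. Xa != Xb + 2*s*pi for any
    integer s whenever a != b."""
    r = np.mod(offsets, 2.0 * np.pi)
    # Residues must differ, including across the 0 / 2*pi wraparound.
    return all(abs(r[a] - r[b]) > tol and
               abs(abs(r[a] - r[b]) - 2.0 * np.pi) > tol
               for a in range(len(r)) for b in range(a + 1, len(r)))

M = 4
X = 2.0 * np.pi * np.arange(M) / M   # evenly spaced offsets: one valid choice
Y = X.copy()                          # Xk = Yk is permitted by the embodiment
print(distinct_mod_2pi(X), distinct_mod_2pi(Y))   # True True
```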
Embodiment A1

In the present embodiment, a detailed description is given of a scheme for adapting the above-described transmission schemes that regularly hop between precoding matrices to a communications system compliant with the DVB (Digital Video Broadcasting)-T2 (T: Terrestrial) standard, the DVB standard for a second-generation digital terrestrial television broadcasting system.

FIG. 61 is an overview of the frame structure of a signal transmitted by a broadcast station according to the DVB-T2 standard. According to the DVB-T2 standard, an OFDM scheme is employed; thus, frames are structured in the time and frequency domains. FIG. 61 shows the frame structure in the time and frequency domains. The frame is composed of P1 Signalling data (6101), L1 Pre-Signalling data (6102), L1 Post-Signalling data (6103), Common PLP (6104), and PLPs #1 to #N (6105_1 to 6105_N) (PLP: Physical Layer Pipe). (Here, the L1 Pre-Signalling data (6102) and L1 Post-Signalling data (6103) are referred to as P2 symbols.) The frame composed of the P1 Signalling data (6101), L1 Pre-Signalling data (6102), L1 Post-Signalling data (6103), Common PLP (6104), and PLPs #1 to #N (6105_1 to 6105_N) is referred to as a T2 frame, which is a unit of the frame structure.

The P1 Signalling data (6101) is a symbol used by a reception device for signal detection and frequency synchronization (including frequency offset estimation). It also transmits information including information indicating the FFT (Fast Fourier Transform) size and information indicating which of SISO (Single-Input Single-Output) and MISO (Multiple-Input Single-Output) is employed to transmit the modulated signal. (The SISO scheme transmits one modulated signal, whereas the MISO scheme transmits a plurality of modulated signals using space-time block coding.)

The L1 Pre-Signalling data (6102) transmits information including: information about the guard interval used in transmitted frames; information about the PAPR (Peak-to-Average Power Ratio) method; information about the modulation scheme, error correction scheme (FEC: Forward Error Correction), and coding rate of the error correction scheme, all used in transmitting the L1 Post-Signalling data; information about the size of the L1 Post-Signalling data and the information size; information about the pilot pattern; information about the cell (frequency region) unique number; and information indicating which of the normal mode and the extended mode (the two modes differ in the number of subcarriers used in data transmission) is used.

The L1 Post-Signalling data (6103) transmits information including: information about the number of PLPs; information about the frequency region used; information about the unique number of each PLP; information about the modulation scheme, error correction scheme, and coding rate of the error correction scheme, all used in transmitting the PLPs; and information about the number of blocks transmitted in each PLP.

The Common PLP (6104) and PLPs #1 to #N (6105_1 to 6105_N) are fields used for transmitting data.

In the frame structure shown in FIG. 61, the P1 Signalling data (6101), L1 Pre-Signalling data (6102), L1 Post-Signalling data (6103), Common PLP (6104), and PLPs #1 to #N (6105_1 to 6105_N) are illustrated as being transmitted by time-sharing. In practice, however, two or more of the signals are concurrently present. FIG. 62 shows such an example. As shown in FIG. 62, the L1 Pre-Signalling data, L1 Post-Signalling data, and Common PLP may be present at the same time, and PLP #1 and PLP #2 may be present at the same time. That is, the signals constitute a frame using both time-sharing and frequency-sharing.

FIG. 63 shows an example of the structure of a transmission device obtained by applying the above-described schemes of regularly hopping between precoding matrices to a transmission device compliant with the DVB-T2 standard (i.e., to a transmission device of a broadcast station). A PLP signal generating unit 6302 receives PLP transmission data (transmission data for a plurality of PLPs) 6301 and a control signal 6309 as input, performs mapping of each PLP according to the error correction scheme and modulation scheme indicated for the PLP by the information included in the control signal 6309, and outputs a (quadrature) baseband signal 6303 carrying a plurality of PLPs. A P2 symbol signal generating unit 6305 receives P2 symbol transmission data 6304 and the control signal 6309 as input, performs mapping according to the error correction scheme and modulation scheme indicated for each P2 symbol by the information included in the control signal 6309, and outputs a (quadrature) baseband signal 6306 carrying the P2 symbols.
A control signal generating unit 6308 receives P1 symbol transmission data 6307 and the P2 symbol transmission data 6304 as input, and then outputs, as the control signal 6309, information about the transmission scheme (the error correction scheme, coding rate of the error correction, modulation scheme, block length, frame structure, selected transmission schemes including a transmission scheme that regularly hops between precoding matrices, pilot symbol insertion scheme, IFFT (Inverse Fast Fourier Transform)/FFT, method of reducing PAPR, and guard interval insertion scheme) of each symbol group shown in FIG. 61 (P1 Signalling data (6101), L1 Pre-Signalling data (6102), L1 Post-Signalling data (6103), Common PLP (6104), and PLPs #1 to #N (6105_1 to 6105_N)).

A frame structuring unit 6310 receives, as input, the baseband signal 6303 carrying the PLPs, the baseband signal 6306 carrying the P2 symbols, and the control signal 6309. On receipt of the input, the frame structuring unit 6310 changes the order of the input data in the frequency domain and the time domain based on the information about the frame structure included in the control signal, and outputs a (quadrature) baseband signal 6311_1 corresponding to stream 1 and a (quadrature) baseband signal 6311_2 corresponding to stream 2, both in accordance with the frame structure.

A signal processing unit 6312 receives, as input, the baseband signal 6311_1 corresponding to stream 1, the baseband signal 6311_2 corresponding to stream 2, and the control signal 6309, and outputs a modulated signal 1 (6313_1) and a modulated signal 2 (6313_2), each obtained as a result of signal processing based on the transmission scheme indicated by the information included in the control signal 6309. The characteristic feature noted here lies in the following: when a transmission scheme that regularly hops between precoding matrices is selected, the signal processing unit hops between precoding matrices and performs weighting (precoding) in a manner similar to FIGS. 6, 22, 23, and 26. The precoded signals so obtained are the modulated signal 1 (6313_1) and the modulated signal 2 (6313_2) output as a result of the signal processing.

A pilot inserting unit 6314_1 receives, as input, the modulated signal 1 (6313_1) obtained as a result of the signal processing and the control signal 6309, inserts pilot symbols into the received modulated signal 1 (6313_1), and outputs a modulated signal 6315_1 obtained as a result of the pilot symbol insertion. Note that the pilot symbol insertion is carried out based on the information indicating the pilot symbol insertion scheme included in the control signal 6309. A pilot inserting unit 6314_2 receives, as input, the modulated signal 2 (6313_2) obtained as a result of the signal processing and the control signal 6309, inserts pilot symbols into the received modulated signal 2 (6313_2), and outputs a modulated signal 6315_2 obtained as a result of the pilot symbol insertion. Note that the pilot symbol insertion is carried out based on the information indicating the pilot symbol insertion scheme included in the control signal 6309.

An IFFT (Inverse Fast Fourier Transform) unit 6316_1 receives, as input, the modulated signal 6315_1 obtained as a result of the pilot symbol insertion and the control signal 6309, applies IFFT based on the information about the IFFT method included in the control signal 6309, and outputs a signal 6317_1 obtained as a result of the IFFT.
An IFFT unit 6316_2 receives, as input, the modulated signal 6315_2 obtained as a result of the pilot symbol insertion and the control signal 6309, applies IFFT based on the information about the IFFT method included in the control signal 6309, and outputs a signal 6317_2 obtained as a result of the IFFT.

A PAPR reducing unit 6318_1 receives, as input, the signal 6317_1 obtained as a result of the IFFT and the control signal 6309, performs processing to reduce the PAPR of the received signal 6317_1, and outputs a signal 6319_1 obtained as a result of the PAPR reduction processing. Note that the PAPR reduction processing is performed based on the information about the PAPR reduction included in the control signal 6309. A PAPR reducing unit 6318_2 receives, as input, the signal 6317_2 obtained as a result of the IFFT and the control signal 6309, performs processing to reduce the PAPR of the received signal 6317_2, and outputs a signal 6319_2 obtained as a result of the PAPR reduction processing. Note that the PAPR reduction processing is carried out based on the information about the PAPR reduction included in the control signal 6309.

A guard interval inserting unit 6320_1 receives, as input, the signal 6319_1 obtained as a result of the PAPR reduction processing and the control signal 6309, inserts guard intervals into the received signal 6319_1, and outputs a signal 6321_1 obtained as a result of the guard interval insertion. Note that the guard interval insertion is carried out based on the information about the guard interval insertion scheme included in the control signal 6309. A guard interval inserting unit 6320_2 receives, as input, the signal 6319_2 obtained as a result of the PAPR reduction processing and the control signal 6309, inserts guard intervals into the received signal 6319_2, and outputs a signal 6321_2 obtained as a result of the guard interval insertion. Note that the guard interval insertion is carried out based on the information about the guard interval insertion scheme included in the control signal 6309.

A P1 symbol inserting unit 6322 receives, as input, the signal 6321_1 obtained as a result of the guard interval insertion, the signal 6321_2 obtained as a result of the guard interval insertion, and the P1 symbol transmission data 6307, generates a P1 symbol signal from the P1 symbol transmission data 6307, adds the P1 symbol to the signal 6321_1, and adds the P1 symbol to the signal 6321_2. The P1 symbol inserting unit 6322 then outputs a signal 6323_1 and a signal 6323_2, each obtained as a result of the processing related to the P1 symbol. Note that the P1 symbol signal may be added to both of the signals 6323_1 and 6323_2, or to only one of them. In the case where the P1 symbol signal is added to only one of the signals, the following is noted. For purposes of description, the interval of the signal to which a P1 symbol is added is referred to as a P1 symbol interval. Then, the signal to which a P1 symbol is not added includes, as a baseband signal, a zero signal in the interval corresponding to the P1 symbol interval of the other signal.

A wireless processing unit 6324_1 receives the signal 6323_1 obtained as a result of the processing related to the P1 symbol, performs processing such as frequency conversion and amplification, and outputs a transmission signal 6325_1. The transmission signal 6325_1 is then output as a radio wave from an antenna 6326_1. Similarly, a wireless processing unit 6324_2 receives the signal 6323_2 obtained as a result of the processing related to the P1 symbol, performs processing such as frequency conversion and amplification, and outputs a transmission signal 6325_2. The transmission signal 6325_2 is then output as a radio wave from an antenna 6326_2.
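The per-branch ordering after precoding (pilot insertion, then IFFT, then guard interval insertion) can be illustrated with a toy OFDM branch. The sketch below is an assumption-laden illustration, not the DVB-T2 specification: the FFT size, pilot spacing, pilot value, and cyclic-prefix length are arbitrary sample values, and the PAPR reduction and P1 insertion steps are omitted.

```python
import numpy as np

def ofdm_branch(precoded, n_fft=64, pilot_every=8, pilot=1 + 0j, cp_len=16):
    """Toy sketch of one branch of FIG. 63 after precoding: pilot symbol
    insertion, the IFFT unit, and guard interval (cyclic prefix) insertion.
    All parameters are illustrative, not standard DVB-T2 values."""
    grid = np.zeros(n_fft, dtype=complex)
    data_bins = [k for k in range(n_fft) if k % pilot_every != 0]
    grid[::pilot_every] = pilot                  # pilot symbol insertion
    grid[data_bins] = precoded[: len(data_bins)]
    t = np.fft.ifft(grid)                        # IFFT unit
    return np.concatenate([t[-cp_len:], t])      # guard interval insertion

tx = ofdm_branch(np.ones(56, dtype=complex))
print(tx.shape)   # (80,) = 16 cyclic-prefix samples + 64 time samples
```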
Next, a detailed description is given of the frame structure of a transmission signal and the transmission scheme of control information (information carried by the P1 symbol and P2 symbols) employed by a broadcast station (base station) in the case where the scheme of regularly hopping between precoding matrices is adapted to a DVB-T2 system.

FIG. 64 shows an example of the frame structure in the time and frequency domains in the case where a plurality of PLPs are transmitted after transmission of the P1 symbol, P2 symbols, and Common PLP. In FIG. 64, stream s1 uses subcarriers #1 to #M in the frequency domain. Similarly, stream s2 uses subcarriers #1 to #M in the frequency domain. Therefore, when streams s1 and s2 both have a symbol in the same subcarrier and at the same time, symbols of the two streams are present at the same frequency. In the case where the precoding performed includes the precoding according to the scheme of regularly hopping between precoding matrices as described in the other embodiments, streams s1 and s2 are subjected to weighting performed using the precoding matrices, and z1 and z2 are output from the respective antennas.

As shown in FIG. 64, in interval 1, a symbol group 6401 of PLP #1 is transmitted using streams s1 and s2, and the data transmission is carried out using the spatial multiplexing MIMO system shown in FIG. 49 or the MIMO system with a fixed precoding matrix. In interval 2, a symbol group 6402 of PLP #2 is transmitted using stream s1, and the data transmission is carried out by transmitting one modulated signal. In interval 3, a symbol group 6403 of PLP #3 is transmitted using streams s1 and s2, and the data transmission is carried out using the precoding scheme of regularly hopping between precoding matrices. In interval 4, a symbol group 6404 of PLP #4 is transmitted using streams s1 and s2, and the data transmission is carried out using the space-time block coding shown in FIG. 50. Note that the symbol arrangement used in space-time block coding is not limited to an arrangement in the time domain; alternatively, the symbol arrangement may be in the frequency domain or in symbol groups formed in the time and frequency domains. In addition, the space-time block coding is not limited to the one shown in FIG. 50.

In the case where a broadcast station transmits PLPs in the frame structure shown in FIG. 64, a reception device receiving the transmission signal shown in FIG. 64 needs to know the transmission scheme used for each PLP. As has already been described above, it is therefore necessary to transmit information indicating the transmission scheme for each PLP, using the L1 Post-Signalling data (6103 shown in FIG. 61), which is a P2 symbol. The following describes an example of the scheme of structuring a P1 symbol and the scheme of structuring a P2 symbol used herein. Table 3 shows a specific example of control information transmitted using a P1 symbol.
TABLE 3
S1 (3 bits)
000: T2_SISO (one modulated signal transmission compliant with the DVB-T2 standard)
001: T2_MISO (transmission using space-time block coding compliant with the DVB-T2 standard)
010: NOT_T2 (compliant with a standard other than DVB-T2)

According to the DVB-T2 standard, the control information S1 (three bits) enables the reception device to determine whether or not the DVB-T2 standard is used and also to determine, if DVB-T2 is used, which transmission scheme is used. If the three bits are set to “000”, the S1 information indicates that the modulated signal transmitted is in accordance with “transmission of a modulated signal compliant with the DVB-T2 standard”. If the three bits are set to “001”, the S1 information indicates that the modulated signal transmitted is in accordance with “transmission using space-time block coding compliant with the DVB-T2 standard”.

In the DVB-T2 standard, the bit sets “010” to “111” are “Reserved” for future use. In order to adapt the present invention in a manner that establishes compatibility with DVB-T2, the three bits constituting the S1 information may be set to “010” (or any bit set other than “000” and “001”) to indicate that the modulated signal transmitted is compliant with a standard other than DVB-T2. On determining that the S1 information received is set to “010”, the reception device is informed that the modulated signal transmitted from the broadcast station is compliant with a standard other than DVB-T2.

Next, a description is given of examples of the scheme of structuring a P2 symbol in the case where a modulated signal transmitted by the broadcast station is compliant with a standard other than DVB-T2. The first example is directed to a scheme in which a P2 symbol compliant with the DVB-T2 standard is used. Table 4 shows a first example of control information transmitted using L1 Post-Signalling data, which is one of the P2 symbols.

TABLE 4
PLP_MODE (2 bits)
00: SISO/SIMO
01: MISO/MIMO (space-time block code)
10: MIMO (precoding scheme of regularly hopping between precoding matrices)
11: MIMO (MIMO system with a fixed precoding matrix or spatial multiplexing MIMO system)

SISO: Single-Input Single-Output (one modulated signal is transmitted and received with one antenna)
SIMO: Single-Input Multiple-Output (one modulated signal is transmitted and received with a plurality of antennas)
MISO: Multiple-Input Single-Output (a plurality of modulated signals are transmitted from a plurality of antennas and received with one antenna)
MIMO: Multiple-Input Multiple-Output (a plurality of modulated signals are transmitted from a plurality of antennas and received with a plurality of antennas)

The 2-bit information “PLP_MODE” shown in Table 4 is control information used to indicate the transmission scheme used for each PLP shown in FIG. 64 (PLPs #1 to #4 in FIG. 64). That is, a separate piece of “PLP_MODE” information is provided for each PLP. That is, in the example shown in FIG. 64, PLP_MODE for PLP #1, PLP_MODE for PLP #2, PLP_MODE for PLP #3, PLP_MODE for PLP #4, and so on are transmitted from the broadcast station. As a matter of course, by demodulating (and also performing error correction on) those pieces of information, the terminal at the receiving end is enabled to recognize the transmission scheme that the broadcast station used for transmitting each PLP.
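A receiving terminal's interpretation of these fields can be sketched as follows. This is a minimal illustration in Python whose table contents mirror Tables 3 and 4 above; the function names are assumptions, not identifiers from the standard.

# Table 3: the 3-bit S1 field carried by the P1 symbol.
S1_TABLE = {
    "000": "T2_SISO: one modulated signal, compliant with DVB-T2",
    "001": "T2_MISO: space-time block coding, compliant with DVB-T2",
    "010": "NOT_T2: compliant with a standard other than DVB-T2",
}

# Table 4: the 2-bit PLP_MODE field, one instance per PLP.
PLP_MODE_TABLE = {
    "00": "SISO/SIMO (one modulated signal)",
    "01": "MISO/MIMO (space-time block coding)",
    "10": "MIMO (regularly hopping between precoding matrices)",
    "11": "MIMO (fixed precoding matrix or spatial multiplexing)",
}

def interpret_s1(s1_bits):
    # Bit sets not assigned above remain "Reserved" in DVB-T2.
    return S1_TABLE.get(s1_bits, "Reserved")

def interpret_plp_modes(bits_per_plp):
    # One PLP_MODE is demodulated for each of PLP #1, PLP #2, ...
    return [PLP_MODE_TABLE[b] for b in bits_per_plp]

# Example: the frame of FIG. 64, PLP #1 to PLP #4 in order.
print(interpret_s1("010"))
print(interpret_plp_modes(["11", "00", "10", "01"]))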
When the PLP_MODE is set to “00”, the data transmission by a corresponding PLP is carried out by “transmitting one modulated signal”. When the PLP_MODE is set to “01”, the data transmission by a corresponding PLP is carried out by “transmitting a plurality of modulated signals obtained by space-time block coding”. When the PLP_MODE is set to “10”, the data transmission by a corresponding PLP is carried out using a “precoding scheme of regularly hopping between precoding matrices”. When the PLP_MODE is set to “11”, the data transmission by a corresponding PLP is carried out using a “MIMO system with a fixed precoding matrix or spatial multiplexing MIMO system”.

Note that when the PLP_MODE is set to “01” to “11”, the information indicating the specific processing conducted by the broadcast station (for example, the specific hopping scheme used in the scheme of regularly hopping between precoding matrices, the specific space-time block coding scheme used, and the structure of the precoding matrices used) needs to be notified to the terminal. The following describes a scheme of structuring control information that includes such information and that is different from the example shown in Table 4.

Table 5 shows a second example of control information transmitted using L1 Post-Signalling data, which is one of the P2 symbols. The second example shown in Table 5 is different from the first example shown in Table 4.

TABLE 5
PLP_MODE (1 bit)
0: SISO/SIMO
1: MISO/MIMO (space-time block coding, or precoding scheme of regularly hopping between precoding matrices, or MIMO system with a fixed precoding matrix, or spatial multiplexing MIMO system)
MIMO_MODE (1 bit)
0: precoding scheme of regularly hopping between precoding matrices - OFF
1: precoding scheme of regularly hopping between precoding matrices - ON
MIMO_PATTERN #1 (2 bits)
00: space-time block coding
01: MIMO system with a fixed precoding matrix, precoding matrix #1
10: MIMO system with a fixed precoding matrix, precoding matrix #2
11: spatial multiplexing MIMO system
MIMO_PATTERN #2 (2 bits)
00: precoding scheme of regularly hopping between precoding matrices, using precoding matrix hopping scheme #1
01: precoding scheme of regularly hopping between precoding matrices, using precoding matrix hopping scheme #2
10: precoding scheme of regularly hopping between precoding matrices, using precoding matrix hopping scheme #3
11: precoding scheme of regularly hopping between precoding matrices, using precoding matrix hopping scheme #4

As shown in Table 5, the control information includes “PLP_MODE”, which is one bit long, “MIMO_MODE”, which is one bit long, “MIMO_PATTERN #1”, which is two bits long, and “MIMO_PATTERN #2”, which is two bits long. As shown in FIG. 64, these four pieces of control information are used to notify the transmission scheme of a corresponding one of the PLPs (PLPs #1 to #4 in the example shown in FIG. 64). Thus, a set of the four pieces of information is provided for each PLP. That is, in the example shown in FIG. 64, the broadcast station transmits a set of PLP_MODE information, MIMO_MODE information, MIMO_PATTERN #1 information, and MIMO_PATTERN #2 information for each of PLP #1, PLP #2, PLP #3, PLP #4, and so on.
As a matter of course, by demodulating (and also performing error correction on) those pieces of information, the terminal at the receiving end is enabled to recognize the transmission scheme that the broadcast station used for transmitting each PLP.

With the PLP_MODE set to “0”, the data transmission by a corresponding PLP is carried out by “transmitting one modulated signal”. With the PLP_MODE set to “1”, the data transmission by a corresponding PLP is carried out by “transmitting a plurality of modulated signals obtained by space-time block coding”, a “precoding scheme of regularly hopping between precoding matrices”, a “MIMO system with a fixed precoding matrix”, or a “spatial multiplexing MIMO system”.

With the “PLP_MODE” set to “1”, the “MIMO_MODE” information is made effective. With “MIMO_MODE” set to “0”, data transmission is carried out by a scheme other than the “precoding scheme of regularly hopping between precoding matrices”. With “MIMO_MODE” set to “1”, on the other hand, data transmission is carried out by the “precoding scheme of regularly hopping between precoding matrices”.

With “PLP_MODE” set to “1” and “MIMO_MODE” set to “0”, the “MIMO_PATTERN #1” information is made effective. With “MIMO_PATTERN #1” set to “00”, data transmission is carried out using space-time block coding. With “MIMO_PATTERN #1” set to “01”, data transmission is carried out using a precoding scheme in which weighting is performed using the fixed precoding matrix #1. With “MIMO_PATTERN #1” set to “10”, data transmission is carried out using a precoding scheme in which weighting is performed using the fixed precoding matrix #2 (note that the precoding matrix #1 and the precoding matrix #2 are mutually different). With “MIMO_PATTERN #1” set to “11”, data transmission is carried out using the spatial multiplexing MIMO system (naturally, it may be construed that Scheme 1 shown in FIG. 49 is selected here).

With “PLP_MODE” set to “1” and “MIMO_MODE” set to “1”, the “MIMO_PATTERN #2” information is made effective. Then, with “MIMO_PATTERN #2” set to “00”, data transmission is carried out using the precoding matrix hopping scheme #1, according to which precoding matrices are regularly hopped. With “MIMO_PATTERN #2” set to “01”, data transmission is carried out using the precoding matrix hopping scheme #2, according to which precoding matrices are regularly hopped. With “MIMO_PATTERN #2” set to “10”, data transmission is carried out using the precoding matrix hopping scheme #3, according to which precoding matrices are regularly hopped. With “MIMO_PATTERN #2” set to “11”, data transmission is carried out using the precoding matrix hopping scheme #4, according to which precoding matrices are regularly hopped.

Note that the precoding matrix hopping schemes #1 to #4 are mutually different. Here, schemes #A and #B are considered mutually different when one of the following is true.
The precoding matrices used in #A include the same matrices used in #B, but the periods (cycles) of the matrices are different.
The precoding matrices used in #A include precoding matrices not used in #B.
None of the precoding matrices used in #A is used in #B.
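The hierarchy just described, in which MIMO_MODE is effective only when PLP_MODE is “1” and exactly one of MIMO_PATTERN #1 and MIMO_PATTERN #2 is effective depending on MIMO_MODE, can be captured in a short decoder. This Python sketch follows Table 5 directly; the function name and the returned strings are illustrative assumptions.

def decode_table5(plp_mode, mimo_mode=None, pattern1=None, pattern2=None):
    # PLP_MODE "0": one modulated signal; the remaining fields are not effective.
    if plp_mode == "0":
        return "SISO/SIMO (one modulated signal)"
    # PLP_MODE "1": MIMO_MODE becomes effective.
    if mimo_mode == "1":
        # MIMO_MODE "1": MIMO_PATTERN #2 selects the hopping scheme.
        hopping = {"00": "#1", "01": "#2", "10": "#3", "11": "#4"}
        return "regular precoding-matrix hopping, scheme " + hopping[pattern2]
    # MIMO_MODE "0": MIMO_PATTERN #1 selects among the remaining schemes.
    patterns = {
        "00": "space-time block coding",
        "01": "fixed precoding matrix #1",
        "10": "fixed precoding matrix #2",
        "11": "spatial multiplexing MIMO system",
    }
    return patterns[pattern1]

# Example: a PLP using precoding matrix hopping scheme #3.
print(decode_table5("1", mimo_mode="1", pattern2="10"))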
In the above description, the control information shown in Tables 4 and 5 is transmitted on L1 Post-Signalling data, which is one of the P2 symbols. According to the DVB-T2 standard, however, the amount of information that can be transmitted as P2 symbols is limited. Therefore, the addition of the information shown in Tables 4 and 5 to the information required by the DVB-T2 standard to be transmitted using P2 symbols may result in an amount exceeding the maximum amount that can be transmitted as P2 symbols. In such a case, a Signalling PLP (6501) may be provided as shown in FIG. 65 to transmit control information required by a standard other than the DVB-T2 standard (that is, data transmission is carried out using both L1 Post-Signalling data and the Signalling PLP).

In the example shown in FIG. 65, the same frame structure as shown in FIG. 61 is used. However, the frame structure is not limited to this specific example. For example, similarly to the L1 Pre-Signalling data and other data shown in FIG. 62, the Signalling PLP may be allocated to a specific carrier range in a specific time domain in the time and frequency domains. In short, the Signalling PLP may be allocated in the time and frequency domains in any way.

As described above, the present embodiment allows for the choice of a scheme of regularly hopping between precoding matrices while using a multi-carrier scheme, such as an OFDM scheme, without compromising compatibility with the DVB-T2 standard. This offers the advantages of obtaining high reception quality, as well as high transmission speed, in an LOS environment. While in the present embodiment the transmission schemes to which a carrier group can be set are “a spatial multiplexing MIMO system, a MIMO scheme using a fixed precoding matrix, a MIMO scheme for regularly hopping between precoding matrices, space-time block coding, or a transmission scheme for transmitting only stream s1”, the transmission schemes are not limited in this way. Furthermore, the MIMO scheme using a fixed precoding matrix is not limited to scheme #2 in FIG. 49, as any structure with a fixed precoding matrix is acceptable.

Furthermore, the above description is directed to a scheme in which the schemes selectable by the broadcast station are “a spatial multiplexing MIMO system, a MIMO scheme using a fixed precoding matrix, a MIMO scheme for regularly hopping between precoding matrices, space-time block coding, or a transmission scheme for transmitting only stream s1”. However, it is not necessary that all of the transmission schemes be selectable.
Any of the following examples is also possible.
A transmission scheme in which any of the following is selectable: a MIMO scheme using a fixed precoding matrix, a MIMO scheme for regularly hopping between precoding matrices, space-time block coding, and a transmission scheme for transmitting only stream s1.
A transmission scheme in which any of the following is selectable: a MIMO scheme using a fixed precoding matrix, a MIMO scheme for regularly hopping between precoding matrices, and space-time block coding.
A transmission scheme in which any of the following is selectable: a MIMO scheme using a fixed precoding matrix, a MIMO scheme for regularly hopping between precoding matrices, and a transmission scheme for transmitting only stream s1.
A transmission scheme in which any of the following is selectable: a MIMO scheme for regularly hopping between precoding matrices, space-time block coding, and a transmission scheme for transmitting only stream s1.
A transmission scheme in which any of the following is selectable: a MIMO scheme using a fixed precoding matrix and a MIMO scheme for regularly hopping between precoding matrices.
A transmission scheme in which any of the following is selectable: a MIMO scheme for regularly hopping between precoding matrices and space-time block coding.
A transmission scheme in which any of the following is selectable: a MIMO scheme for regularly hopping between precoding matrices and a transmission scheme for transmitting only stream s1.

As listed above, as long as a MIMO scheme for regularly hopping between precoding matrices is included as a selectable scheme, the advantageous effect of high-speed data transmission in an LOS environment is obtained, in addition to excellent reception quality for the reception device.

Here, it is necessary to set the control information S1 in P1 symbols as described above. In addition, as P2 symbols, the control information may be set differently from the scheme (the scheme for setting the transmission scheme of each PLP) shown in Table 4. Table 6 shows one example of such a scheme.

TABLE 6
PLP_MODE (2 bits)
00: SISO/SIMO
01: MISO/MIMO (space-time block code)
10: MIMO (precoding scheme of regularly hopping between precoding matrices)
11: Reserved

Table 6 differs from Table 4 in that the “PLP_MODE” set to “11” is “Reserved”. In this way, the number of bits constituting the “PLP_MODE” shown in Tables 4 and 6 may be increased or decreased depending on the number of selectable PLP transmission schemes, in the case where the selectable transmission schemes are as shown in the above examples. The same holds with respect to Table 5. For example, if the only MIMO scheme supported is a precoding scheme of regularly hopping between precoding matrices, the control information “MIMO_MODE” is no longer necessary. Furthermore, the control information “MIMO_PATTERN #1” may not be necessary in the case, for example, where a MIMO scheme using a fixed precoding matrix is not supported. Furthermore, the control information “MIMO_PATTERN #1” may be one bit long instead of two bits long in the case where, for example, no more than one precoding matrix is required for a MIMO scheme using a fixed precoding matrix, and it may be two bits long or more in the case where a plurality of precoding matrices are selectable. The same applies to “MIMO_PATTERN #2”: it may be one bit long instead of two bits long in the case where no more than one precoding scheme of regularly hopping between precoding matrices is available, and it may be two bits long or more in the case where a plurality of precoding schemes of regularly hopping between precoding matrices are selectable.
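As one concrete illustration of the “precoding scheme of regularly hopping between precoding matrices” that this control information signals, the following Python/NumPy sketch weights each slot of streams s1 and s2 with a precoding matrix drawn cyclically from a prepared set. The rotation matrices used here are placeholders chosen only so that the example runs; the actual matrices and period are those of the hopping schemes described in the other embodiments.

import numpy as np

def make_hopping_set(period=4):
    # Placeholder set of 2x2 unitary matrices; one such set together with
    # its period constitutes one "precoding matrix hopping scheme".
    mats = []
    for i in range(period):
        theta = np.pi * i / period
        mats.append(np.array([[np.cos(theta), -np.sin(theta)],
                              [np.sin(theta),  np.cos(theta)]]))
    return mats

def precode_hopping(s1, s2, matrices):
    # Slot i of streams s1 and s2 is weighted with matrix i mod period;
    # z1 and z2 are the signals output from the respective antennas.
    z1 = np.empty(len(s1), dtype=complex)
    z2 = np.empty(len(s2), dtype=complex)
    for i in range(len(s1)):
        w = matrices[i % len(matrices)]
        z1[i], z2[i] = w @ np.array([s1[i], s2[i]])
    return z1, z2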
In the present embodiment, the description is directed to a transmission device having two antennas, but the number of antennas is not limited to two. With a transmission device having more than two antennas, the control information may be transmitted in the same manner. Yet, to enable modulated signal transmission with the use of four antennas in addition to modulated signal transmission with the use of two antennas, there may be a case where the number of bits constituting the respective pieces of control information needs to be increased. In such a modification, it still holds that the control information is transmitted by the P1 symbol and that the control information is transmitted by P2 symbols, as set forth above.

The above description is directed to the frame structure of PLP symbol groups transmitted by a broadcast station in a time-sharing transmission scheme as shown in FIG. 64.

FIG. 66 shows another example of a symbol arranging scheme in the time and frequency domains, which is different from the symbol arranging scheme shown in FIG. 64. The symbols shown in FIG. 66 are of streams s1 and s2 and are to be transmitted after the transmission of the P1 symbol, P2 symbols, and Common PLP. In FIG. 66, each symbol denoted by “#1” represents one symbol of the symbol group of PLP #1 shown in FIG. 64. Similarly, each symbol denoted as “#2” represents one symbol of the symbol group of PLP #2 shown in FIG. 64, each symbol denoted as “#3” represents one symbol of the symbol group of PLP #3 shown in FIG. 64, and each symbol denoted as “#4” represents one symbol of the symbol group of PLP #4 shown in FIG. 64.

Similarly to FIG. 64, PLP #1 transmits data using the spatial multiplexing MIMO system shown in FIG. 49 or the MIMO system with a fixed precoding matrix. PLP #2 transmits data by transmitting one modulated signal. PLP #3 transmits data using a precoding scheme of regularly hopping between precoding matrices. PLP #4 transmits data using the space-time block coding shown in FIG. 50. Note that the symbol arrangement used in the space-time block coding is not limited to the arrangement in the time domain. Alternatively, the symbol arrangement may be in the frequency domain or in symbol groups formed in the time and frequency domains. In addition, the space-time block coding is not limited to the one shown in FIG. 50.

In FIG. 66, where streams s1 and s2 both have a symbol in the same subcarrier and at the same time, symbols of the two streams are present at the same frequency. In the case where the precoding performed includes the precoding according to the scheme for regularly hopping between precoding matrices as described in the other embodiments, streams s1 and s2 are subjected to weighting performed using the precoding matrices, and z1 and z2 are output from the respective antennas.

FIG. 66 differs from FIG. 64 in the following points. That is, the example shown in FIG. 64 is an arrangement of a plurality of PLPs using time-sharing, whereas the example shown in FIG. 66 is an arrangement of a plurality of PLPs using both time-sharing and frequency-sharing. That is, for example, at time 1, a symbol of PLP #1 and a symbol of PLP #2 are both present. Similarly, at time 3, a symbol of PLP #3 and a symbol of PLP #4 are both present. In this way, PLP symbols having different index numbers (#X; X=1, 2 . . . ) may be allocated on a symbol-by-symbol basis (for each symbol composed of one subcarrier per time). For the sake of simplicity, FIG. 66 only shows symbols denoted by “#1” and “#2” at time 1. However, this is not a limiting example, and PLP symbols having index numbers other than “#1” and “#2” may be present at time 1. In addition, the relation between the subcarriers present at time 1 and the PLP index numbers is not limited to that shown in FIG. 66. Alternatively, a PLP symbol having any index number may be allocated to any subcarrier. Similarly, a PLP symbol having any index number may be allocated to any subcarrier at any time other than time 1.
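Such an arrangement, with PLP index numbers assigned cell by cell over the time and frequency grid, can be modelled as a simple allocation map. The sketch below is an illustrative assumption of one such map in the spirit of FIG. 66; as stated above, any allocation of PLP index numbers to (time, subcarrier) cells is applicable.

import numpy as np

def allocate_plps(n_times, n_subcarriers, allocations):
    # allocations: iterable of (plp_index, time, subcarrier) triples;
    # 0 in the returned grid means "no PLP symbol allocated".
    grid = np.zeros((n_times, n_subcarriers), dtype=int)
    for plp, t, f in allocations:
        grid[t, f] = plp
    return grid

# At time 0, PLP #1 and PLP #2 share the band by frequency-sharing;
# at time 2, PLP #3 and PLP #4 do the same.
cells = ([(1, 0, f) for f in range(4)] + [(2, 0, f) for f in range(4, 8)]
         + [(3, 2, f) for f in range(4)] + [(4, 2, f) for f in range(4, 8)])
print(allocate_plps(4, 8, cells))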
FIG. 67 shows another example of a symbol arranging scheme in the time and frequency domains, which is different from the symbol arranging scheme shown in FIG. 64. The symbols shown in FIG. 67 are of streams s1 and s2 and are to be transmitted after the transmission of the P1 symbol, P2 symbols, and Common PLP. The characterizing feature of the example shown in FIG. 67 is that the “transmission scheme for transmitting only stream s1” is not selectable in the case where PLP transmission for T2 frames is carried out basically with a plurality of antennas.

Therefore, data transmission by the symbol group 6701 of PLP #1 shown in FIG. 67 is carried out by “a spatial multiplexing MIMO system or a MIMO scheme using a fixed precoding matrix”. Data transmission by the symbol group 6702 of PLP #2 is carried out using “a precoding scheme of regularly hopping between precoding matrices”. Data transmission by the symbol group 6703 of PLP #3 is carried out by “space-time block coding”. Note that data transmission by the symbol group 6703 of PLP #3 and the following symbol groups in the T2 frame is carried out by using one of “a spatial multiplexing MIMO system or a MIMO scheme using a fixed precoding matrix”, “a precoding scheme of regularly hopping between precoding matrices”, and “space-time block coding”.

FIG. 68 shows another example of a symbol arranging scheme in the time and frequency domains, which is different from the symbol arranging scheme shown in FIG. 66. The symbols shown in FIG. 68 are of streams s1 and s2 and are to be transmitted after the transmission of the P1 symbol, P2 symbols, and Common PLP. In FIG. 68, each symbol denoted by “#1” represents one symbol of the symbol group of PLP #1 shown in FIG. 67. Similarly, each symbol denoted as “#2” represents one symbol of the symbol group of PLP #2 shown in FIG. 67, each symbol denoted as “#3” represents one symbol of the symbol group of PLP #3 shown in FIG. 67, and each symbol denoted as “#4” represents one symbol of the symbol group of PLP #4 shown in FIG. 67.

Similarly to FIG. 67, PLP #1 transmits data using the spatial multiplexing MIMO system shown in FIG. 49 or the MIMO system with a fixed precoding matrix. PLP #2 transmits data using a precoding scheme of regularly hopping between precoding matrices. PLP #3 transmits data using the space-time block coding shown in FIG. 50. Note that the symbol arrangement used in the space-time block coding is not limited to the arrangement in the time domain. Alternatively, the symbol arrangement may be in the frequency domain or in symbol groups formed in the time and frequency domains. In addition, the space-time block coding is not limited to the one shown in FIG. 50.
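As a point of reference for the space-time block coding option above, the following Python/NumPy sketch shows an Alamouti-style encoder over symbol pairs in the time domain. This is only one example of such a code, consistent with the note that neither the coding nor the symbol arrangement is limited to that of FIG. 50.

import numpy as np

def stbc_encode(symbols):
    # Alamouti-style space-time block code: over two consecutive slots,
    # antenna 1 sends s0, -conj(s1) while antenna 2 sends s1, conj(s0).
    assert len(symbols) % 2 == 0
    ant1, ant2 = [], []
    for s0, s1 in zip(symbols[0::2], symbols[1::2]):
        ant1 += [s0, -np.conj(s1)]
        ant2 += [s1, np.conj(s0)]
    return np.array(ant1), np.array(ant2)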
In addition, the space-time block coding is not limited to the one shown inFIG.50. InFIG.68, where streams s1 and s2 both have a symbol in the same subcarrier and at the same time, symbols of the two streams are present at the same frequency. In the case where precoding performed includes the precoding according to the scheme for regularly hopping between precoding matrices as described in the other embodiments, streams s1 and s2 are subjected to weighting performed using the precoding matrices and z1 and z2 are output from the respective antennas. FIG.68differs fromFIG.67in the following points. That is, the example shown inFIG.67is an arrangement of a plurality of PLPs using time-sharing, whereas the example shown inFIG.68is an arrangement of a plurality of PLPs using both time-sharing and frequency-sharing. That is, for example, at time 1, a symbol of PLP #1 and a symbol of PLP #2 are both present. In this way, PLP symbols having different index numbers (#X; X=1, 2 . . . ) may be allocated on a symbol-by-symbol basis (for each symbol composed of one subcarrier per time). For the sake of simplicity,FIG.68only shows symbols denoted by “#1” and “#2” at time 1. However, this is not a limiting example, and PLP symbols having any index numbers other than “#1” and “#2” may be present at time 1. In addition, the relation between subcarriers present at time 1 and PLP index numbers are not limited to that shown inFIG.68. Alternatively, a PLP symbol having any index number may be allocated to any subcarrier. Similarly, in addition, a PLP symbol having any index number may be allocated to any subcarrier at any time other than time 1. Alternatively, on the other hand, only one PLP symbol may be allocated at a specific time as at time t3. That is, in a framing scheme of arranging PLP symbols in the time and frequency domains, any allocation is applicable. As set forth above, no PLPs using “a transmission scheme for transmitting only stream s1” exist in the T2 frame, so that the dynamic range of a signal received by the terminal is ensured to be narrow. As a result, the advantageous effect is achieved that the probability of excellent reception quality increases. Note that the description ofFIG.68is described using an example in which the transmission scheme selected is one of “spatial multiplexing MIMO system or a MIMO scheme using a fixed precoding matrix”, “a precoding scheme of regularly hopping between precoding matrices”, and “space-time block coding”. Yet, it is not necessary that all of these transmission schemes are selectable. For example, the following combinations of the transmission schemes may be made selectable.“a precoding scheme of regularly hopping between precoding matrices”, “space-time block coding”, and “a MIMO scheme using a fixed precoding matrix” are selectable.“a precoding scheme of regularly hopping between precoding matrices” and “space-time block coding” are selectable.“a precoding scheme of regularly hopping between precoding matrices” and “a MIMO scheme using a fixed precoding matrix” are selectable. The above description relates to an example in which the T2 frame includes a plurality of PLPs. The following describes an example in which T2 frame includes one PLP only. FIG.69shows an example of frame structure in the time and frequency domains for stream s1 and s2 in the case where only one PLP exits in T2 frame. InFIG.69, the denotation “control symbol” represents a symbol such as P1 symbol, P2 symbol, or the like. 
In the example shown in FIG. 69, the first T2 frame is transmitted using interval 1. Similarly, the second T2 frame is transmitted using interval 2, the third T2 frame is transmitted using interval 3, and the fourth T2 frame is transmitted using interval 4.

In the example shown in FIG. 69, in the first T2 frame, a symbol group 6801 for PLP #1-1 is transmitted, and the transmission scheme selected is the “spatial multiplexing MIMO system or MIMO scheme using a fixed precoding matrix”. In the second T2 frame, a symbol group 6802 for PLP #2-1 is transmitted, and the transmission scheme selected is “a scheme for transmitting one modulated signal”. In the third T2 frame, a symbol group 6803 for PLP #3-1 is transmitted, and the transmission scheme selected is “a precoding scheme of regularly hopping between precoding matrices”. In the fourth T2 frame, a symbol group 6804 for PLP #4-1 is transmitted, and the transmission scheme selected is “space-time block coding”. Note that the symbol arrangement used in the space-time block coding is not limited to the arrangement in the time domain. Alternatively, the symbol arrangement may be in the frequency domain or in symbol groups formed in the time and frequency domains. In addition, the space-time block coding is not limited to the one shown in FIG. 50.

In FIG. 69, where streams s1 and s2 both have a symbol in the same subcarrier and at the same time, symbols of the two streams are present at the same frequency. In the case where the precoding performed includes the precoding according to the scheme for regularly hopping between precoding matrices as described in the other embodiments, streams s1 and s2 are subjected to weighting performed using the precoding matrices, and z1 and z2 are output from the respective antennas.

In the above manner, a transmission scheme may be set for each PLP in consideration of the data transmission speed and the data reception quality at the receiving terminal, so that an increase in data transmission speed and excellent reception quality are both achieved. As an example scheme of structuring control information, the control information indicating, for example, the transmission scheme and other information of the P1 symbol and P2 symbols (and also the Signalling PLP where applicable) may be configured in a similar manner to Tables 3-6. The difference is as follows. In the frame structure shown, for example, in FIG. 64, one T2 frame includes a plurality of PLPs, and thus it is necessary to provide the control information indicating the transmission scheme and the like for each PLP. On the other hand, in the frame structure shown, for example, in FIG. 69, one T2 frame includes one PLP only, and thus it is sufficient to provide the control information indicating the transmission scheme and the like only for the one PLP.

Although the above description is directed to the scheme of transmitting information about the PLP transmission scheme using the P1 symbol and P2 symbols (and Signalling PLPs where applicable), the following describes in particular a scheme of transmitting information about the PLP transmission scheme without using P2 symbols.

FIG. 70 shows a frame structure in the time and frequency domains used in the case where a broadcast station supporting a standard other than the DVB-T2 standard transmits data broadcasting to a terminal at the receiving end. In FIG. 70, the same reference signs are used to denote the blocks that operate in a similar way to those shown in FIG. 61.
The frame shown in FIG. 70 is composed of P1 Signalling data (6101), first Signalling data (7001), second Signalling data (7002), Common PLP (6104), and PLPs #1 to #N (6105_1 to 6105_N) (PLP: Physical Layer Pipe). In this way, a frame composed of P1 Signalling data (6101), first Signalling data (7001), second Signalling data (7002), Common PLP (6104), and PLPs #1 to #N (6105_1 to 6105_N) constitutes one frame unit.

By the P1 Signalling data (6101), data indicating that the symbol is for a reception device to perform signal detection and frequency synchronization (including frequency offset estimation) is transmitted. In this example, in addition, data identifying whether or not the frame supports the DVB-T2 standard needs to be transmitted; for example, by S1 shown in Table 3, data indicating whether or not the signal supports the DVB-T2 standard needs to be transmitted.

By the first Signalling data (7001), the following information may be transmitted, for example: information about the guard interval used in the transmission frame; information about the method of reducing PAPR (Peak-to-Average Power Ratio); information about the modulation scheme, error correction scheme, and coding rate of the error correction scheme, all of which are used in transmitting the second Signalling data; information about the size of the second Signalling data and the information size; information about the pilot pattern; information about the cell (frequency domain) unique number; and information indicating which of the normal mode and the extended mode is used. Here, it is not necessary that the first Signalling data (7001) transmit data supporting the DVB-T2 standard.

By the second Signalling data (7002), the following information may be transmitted, for example: information about the number of PLPs; information about the frequency domain used; information about the unique number of each PLP; information about the modulation scheme, error correction scheme, and coding rate of the error correction scheme, all of which are used in transmitting the PLPs; and information about the number of blocks transmitted in each PLP.

In the frame structure shown in FIG. 70, the first Signalling data (7001), the second Signalling data (7002), the Common PLP (6104), and PLPs #1 to #N (6105_1 to 6105_N) appear to be transmitted by time-sharing. In practice, however, two or more of the signals are concurrently present. FIG. 71 shows such an example. As shown in FIG. 71, the first Signalling data, the second Signalling data, and the Common PLP may be present at the same time, and PLP #1 and PLP #2 may be present at the same time. That is, the signals constitute a frame using both time-sharing and frequency-sharing.

FIG. 72 shows an example of the structure of a transmission device obtained by applying the above-described schemes of regularly hopping between precoding matrices to a transmission device (of a broadcast station, for example) that is compliant with a standard other than the DVB-T2 standard. In FIG. 72, the same reference signs are used to denote the components that operate in a similar way to those shown in FIG. 63, and the description of such components is the same as above.

A control signal generating unit 6308 receives, as input, transmission data 7201 for the first and second Signalling data and transmission data 6307 for the P1 symbol. As output, the control signal generating unit 6308 outputs a control signal 6309 carrying information about the transmission scheme of each symbol group shown in FIG. 70. (The information about the transmission scheme output herein includes: the error correction coding, the coding rate of the error correction, the modulation scheme, the block length, the frame structure, the selected transmission schemes including a transmission scheme that regularly hops between precoding matrices, the pilot symbol insertion scheme, information about IFFT (Inverse Fast Fourier Transform)/FFT and the like, information about the method of reducing PAPR, and information about the guard interval insertion scheme.)
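The frame unit of FIG. 70 that this transmission device assembles can be written out as a simple ordered structure. The sketch below merely strings the parts together in their logical order; as FIG. 71 shows, two or more of these signals may in practice be present concurrently through frequency-sharing, and the field names here are assumptions for illustration only.

def build_frame_unit(p1, first_sig, second_sig, common_plp, plps):
    # One frame unit of FIG. 70: P1 Signalling data (6101), first
    # Signalling data (7001), second Signalling data (7002),
    # Common PLP (6104), and PLPs #1 to #N (6105_1 to 6105_N).
    return {
        "p1": p1,                        # signal detection, frequency sync, S1
        "first_signalling": first_sig,   # guard interval, PAPR method, etc.
        "second_signalling": second_sig, # number of PLPs, per-PLP control, etc.
        "common_plp": common_plp,
        "plps": list(plps),              # PLP #1 .. PLP #N
    }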
The control signal generating unit 7202 receives, as input, the control signal 6309 and the transmission data 7201 for the first and second Signalling data. The control signal generating unit 7202 then performs error correction coding and mapping based on the modulation scheme, according to the information carried in the control signal 6309 (namely, the information about the error correction of the first and second Signalling data and the information about the modulation scheme), and outputs a (quadrature) baseband signal 7203 of the first and second Signalling data.

Next, a detailed description is given of the frame structure of a transmission signal and the transmission scheme of control information (information carried by the P1 symbol and the first and second Signalling data) employed by a broadcast station (base station) in the case where the scheme of regularly hopping between precoding matrices is applied to a system compliant with a standard other than the DVB-T2 standard.

FIG. 64 shows an example of the frame structure in the time and frequency domains in the case where a plurality of PLPs are transmitted after transmission of the P1 symbol, the first and second Signalling data, and the Common PLP. In FIG. 64, stream s1 uses subcarriers #1 to #M in the frequency domain. Similarly, stream s2 uses subcarriers #1 to #M in the frequency domain. Therefore, when streams s1 and s2 both have a symbol in the same subcarrier and at the same time, symbols of the two streams are present at the same frequency. In the case where the precoding performed includes the precoding according to the scheme for regularly hopping between precoding matrices as described in the other embodiments, streams s1 and s2 are subjected to weighting performed using the precoding matrices, and z1 and z2 are output from the respective antennas.

As shown in FIG. 64, in interval 1, a symbol group 6401 of PLP #1 is transmitted using streams s1 and s2, and the data transmission is carried out using the spatial multiplexing MIMO system shown in FIG. 49 or the MIMO system with a fixed precoding matrix. In interval 2, a symbol group 6402 of PLP #2 is transmitted using stream s1, and the data transmission is carried out by transmitting one modulated signal. In interval 3, a symbol group 6403 of PLP #3 is transmitted using streams s1 and s2, and the data transmission is carried out using a precoding scheme of regularly hopping between precoding matrices. In interval 4, a symbol group 6404 of PLP #4 is transmitted using streams s1 and s2, and the data transmission is carried out using the space-time block coding shown in FIG. 50. Note that the symbol arrangement used in the space-time block coding is not limited to the arrangement in the time domain. Alternatively, the symbol arrangement may be in the frequency domain or in symbol groups formed in the time and frequency domains. In addition, the space-time block coding is not limited to the one shown in FIG. 50.
In the case where a broadcast station transmits PLPs in the frame structure shown in FIG. 64, a reception device receiving the transmission signal shown in FIG. 64 needs to know the transmission scheme used for each PLP. As has already been described above, it is therefore necessary to transmit information indicating the transmission scheme for each PLP, using the first and second Signalling data. The following describes an example of the scheme of structuring a P1 symbol used herein and the scheme of structuring the first and second Signalling data used herein.

Specific examples of control information transmitted using a P1 symbol are as shown in Table 3. According to the DVB-T2 standard, the control information S1 (three bits) enables the reception device to determine whether or not the DVB-T2 standard is used and also to determine, if DVB-T2 is used, which transmission scheme is used. If the three bits are set to “000”, the S1 information indicates that the modulated signal transmitted is in accordance with “transmission of a modulated signal compliant with the DVB-T2 standard”. If the three bits are set to “001”, the S1 information indicates that the modulated signal transmitted is in accordance with “transmission using space-time block coding compliant with the DVB-T2 standard”.

In the DVB-T2 standard, the bit sets “010” to “111” are “Reserved” for future use. In order to adapt the present invention in a manner that establishes compatibility with DVB-T2, the three bits constituting the S1 information may be set to “010” (or any bit set other than “000” and “001”) to indicate that the modulated signal transmitted is compliant with a standard other than DVB-T2. On determining that the S1 information received is set to “010”, the reception device is informed that the modulated signal transmitted from the broadcast station is compliant with a standard other than DVB-T2.

Next, a description is given of examples of the scheme of structuring the first and second Signalling data in the case where a modulated signal transmitted by the broadcast station is compliant with a standard other than DVB-T2. A first example of the control information for the first and second Signalling data is as shown in Table 4.

The 2-bit information “PLP_MODE” shown in Table 4 is control information used to indicate the transmission scheme used for each PLP shown in FIG. 64 (PLPs #1 to #4 in FIG. 64). That is, a separate piece of “PLP_MODE” information is provided for each PLP. That is, in the example shown in FIG. 64, PLP_MODE for PLP #1, PLP_MODE for PLP #2, PLP_MODE for PLP #3, PLP_MODE for PLP #4, and so on are transmitted from the broadcast station. As a matter of course, by demodulating (and also performing error correction on) those pieces of information, the terminal at the receiving end is enabled to recognize the transmission scheme that the broadcast station used for transmitting each PLP.

With the PLP_MODE set to “00”, the data transmission by a corresponding PLP is carried out by “transmitting one modulated signal”. When the PLP_MODE is set to “01”, the data transmission by a corresponding PLP is carried out by “transmitting a plurality of modulated signals obtained by space-time block coding”. When the PLP_MODE is set to “10”, the data transmission by a corresponding PLP is carried out using a “precoding scheme of regularly hopping between precoding matrices”.
When the PLP_MODE is set to “11”, the data transmission by a corresponding PLP is carried out using a “MIMO system with a fixed precoding matrix or spatial multiplexing MIMO system”.

Note that when the PLP_MODE is set to “01” to “11”, the information indicating the specific processing conducted by the broadcast station (for example, the specific hopping scheme used in the scheme of regularly hopping between precoding matrices, the specific space-time block coding scheme used, and the structure of the precoding matrices used) needs to be notified to the terminal. The following describes a scheme of structuring control information that includes such information and that is different from the example shown in Table 4.

A second example of the control information for the first and second Signalling data is as shown in Table 5. As shown in Table 5, the control information includes “PLP_MODE”, which is one bit long, “MIMO_MODE”, which is one bit long, “MIMO_PATTERN #1”, which is two bits long, and “MIMO_PATTERN #2”, which is two bits long. As shown in FIG. 64, these four pieces of control information are used to notify the transmission scheme of a corresponding one of the PLPs (PLPs #1 to #4 in the example shown in FIG. 64). Thus, a set of the four pieces of information is provided for each PLP. That is, in the example shown in FIG. 64, the broadcast station transmits a set of PLP_MODE information, MIMO_MODE information, MIMO_PATTERN #1 information, and MIMO_PATTERN #2 information for each of PLP #1, PLP #2, PLP #3, PLP #4, and so on.

As a matter of course, by demodulating (and also performing error correction on) those pieces of information, the terminal at the receiving end is enabled to recognize the transmission scheme that the broadcast station used for transmitting each PLP.

With the PLP_MODE set to “0”, the data transmission by a corresponding PLP is carried out by “transmitting one modulated signal”. With the PLP_MODE set to “1”, the data transmission by a corresponding PLP is carried out by “transmitting a plurality of modulated signals obtained by space-time block coding”, a “precoding scheme of regularly hopping between precoding matrices”, a “MIMO system with a fixed precoding matrix”, or a “spatial multiplexing MIMO system”.

With the “PLP_MODE” set to “1”, the “MIMO_MODE” information is made effective. With “MIMO_MODE” set to “0”, data transmission is carried out by a scheme other than the “precoding scheme of regularly hopping between precoding matrices”. With “MIMO_MODE” set to “1”, on the other hand, data transmission is carried out by the “precoding scheme of regularly hopping between precoding matrices”.

With “PLP_MODE” set to “1” and “MIMO_MODE” set to “0”, the “MIMO_PATTERN #1” information is made effective. With “MIMO_PATTERN #1” set to “00”, data transmission is carried out using space-time block coding. With “MIMO_PATTERN #1” set to “01”, data transmission is carried out using a precoding scheme in which weighting is performed using the fixed precoding matrix #1.
With “MIMO_PATTERN #1” set to “10”, data transmission is carried out using a precoding scheme in which weighting is performed using the fixed precoding matrix #2 (note that the precoding matrix #1 and the precoding matrix #2 are mutually different). With “MIMO_PATTERN #1” set to “11”, data transmission is carried out using the spatial multiplexing MIMO system (naturally, it may be construed that Scheme 1 shown in FIG. 49 is selected here).

With “PLP_MODE” set to “1” and “MIMO_MODE” set to “1”, the “MIMO_PATTERN #2” information is made effective. With “MIMO_PATTERN #2” set to “00”, data transmission is carried out using the precoding matrix hopping scheme #1, according to which precoding matrices are regularly hopped. With “MIMO_PATTERN #2” set to “01”, data transmission is carried out using the precoding matrix hopping scheme #2, according to which precoding matrices are regularly hopped. With “MIMO_PATTERN #2” set to “10”, data transmission is carried out using the precoding matrix hopping scheme #3, according to which precoding matrices are regularly hopped. With “MIMO_PATTERN #2” set to “11”, data transmission is carried out using the precoding matrix hopping scheme #4, according to which precoding matrices are regularly hopped.

Note that the precoding matrix hopping schemes #1 to #4 are mutually different. Here, schemes #A and #B are considered mutually different when one of the following is true.
The precoding matrices used in #A include the same matrices used in #B, but the periods (cycles) of the matrices are different.
The precoding matrices used in #A include precoding matrices not used in #B.
None of the precoding matrices used in #A is used in #B.

In the above description, the control information shown in Tables 4 and 5 is transmitted by the first and second Signalling data. In this case, the advantage of eliminating the need to use PLPs specifically for transmitting control information is achieved.

As described above, the present embodiment allows for the choice of a scheme of regularly hopping between precoding matrices while using a multi-carrier scheme, such as an OFDM scheme, and while allowing a standard other than DVB-T2 to be distinguished from DVB-T2. This offers the advantages of obtaining high reception quality, as well as high transmission speed, in an LOS environment. While in the present embodiment the transmission schemes to which a carrier group can be set are “a spatial multiplexing MIMO system, a MIMO scheme using a fixed precoding matrix, a MIMO scheme for regularly hopping between precoding matrices, space-time block coding, or a transmission scheme for transmitting only stream s1”, the transmission schemes are not limited in this way. Furthermore, the MIMO scheme using a fixed precoding matrix is not limited to scheme #2 in FIG. 49, as any structure with a fixed precoding matrix is acceptable.

Furthermore, the above description is directed to a scheme in which the schemes selectable by the broadcast station are “a spatial multiplexing MIMO system, a MIMO scheme using a fixed precoding matrix, a MIMO scheme for regularly hopping between precoding matrices, space-time block coding, or a transmission scheme for transmitting only stream s1”. However, it is not necessary that all of the transmission schemes be selectable.
Any of the following examples is also possible.
A transmission scheme in which any of the following is selectable: a MIMO scheme using a fixed precoding matrix, a MIMO scheme for regularly hopping between precoding matrices, space-time block coding, and a transmission scheme for transmitting only stream s1.
A transmission scheme in which any of the following is selectable: a MIMO scheme using a fixed precoding matrix, a MIMO scheme for regularly hopping between precoding matrices, and space-time block coding.
A transmission scheme in which any of the following is selectable: a MIMO scheme using a fixed precoding matrix, a MIMO scheme for regularly hopping between precoding matrices, and a transmission scheme for transmitting only stream s1.
A transmission scheme in which any of the following is selectable: a MIMO scheme for regularly hopping between precoding matrices, space-time block coding, and a transmission scheme for transmitting only stream s1.
A transmission scheme in which any of the following is selectable: a MIMO scheme using a fixed precoding matrix and a MIMO scheme for regularly hopping between precoding matrices.
A transmission scheme in which any of the following is selectable: a MIMO scheme for regularly hopping between precoding matrices and space-time block coding.
A transmission scheme in which any of the following is selectable: a MIMO scheme for regularly hopping between precoding matrices and a transmission scheme for transmitting only stream s1.

As listed above, as long as a MIMO scheme for regularly hopping between precoding matrices is included as a selectable scheme, the advantageous effect of high-speed data transmission in an LOS environment is obtained, in addition to excellent reception quality for the reception device.

Here, it is necessary to set the control information S1 in P1 symbols as described above. In addition, as the first and second Signalling data, the control information may be set differently from the scheme (the scheme for setting the transmission scheme of each PLP) shown in Table 4. Table 6 shows one example of such a scheme. Table 6 differs from Table 4 in that the “PLP_MODE” set to “11” is “Reserved”.

In this way, the number of bits constituting the “PLP_MODE” shown in Tables 4 and 6 may be increased or decreased depending on the number of selectable PLP transmission schemes, which varies as in the examples listed above. The same holds with respect to Table 5. For example, if the only MIMO scheme supported is a precoding scheme of regularly hopping between precoding matrices, the control information “MIMO_MODE” is no longer necessary. Furthermore, the control information “MIMO_PATTERN #1” may not be necessary in the case, for example, where a MIMO scheme using a fixed precoding matrix is not supported. Furthermore, the control information “MIMO_PATTERN #1” need not be two bits long and may alternatively be one bit long in the case where, for example, no more than one precoding matrix is required for a MIMO scheme using a fixed precoding matrix; it may be two bits long or more in the case where a plurality of precoding matrices are selectable. The same applies to “MIMO_PATTERN #2”: it may be one bit long instead of two bits long in the case where no more than one precoding scheme of regularly hopping between precoding matrices is available, and it may be two bits long or more in the case where a plurality of precoding schemes of regularly hopping between precoding matrices are selectable.
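Since the widths of these fields grow or shrink with the number of selectable schemes, the control information can be treated as a list of (name, width) pairs and packed bit by bit. The following Python sketch illustrates this under an assumed Table 5 layout; it is not a packing format defined by any standard.

def pack_control_fields(layout, values):
    # layout: list of (field_name, width_in_bits); values: name -> integer.
    # Changing a width (e.g. MIMO_PATTERN #1 as one bit when only one fixed
    # precoding matrix is supported) requires no change to this packer.
    bits = ""
    for name, width in layout:
        v = values[name]
        assert 0 <= v < (1 << width)
        bits += format(v, "0{}b".format(width))
    return bits

layout = [("PLP_MODE", 1), ("MIMO_MODE", 1),
          ("MIMO_PATTERN_1", 2), ("MIMO_PATTERN_2", 2)]
print(pack_control_fields(layout, {"PLP_MODE": 1, "MIMO_MODE": 1,
                                   "MIMO_PATTERN_1": 0, "MIMO_PATTERN_2": 2}))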
In the present embodiment, the description is directed to a transmission device having two antennas, but the number of antennas is not limited to two. With a transmission device having more than two antennas, the control information may be transmitted in the same manner. Yet, enabling modulated signal transmission with the use of four antennas in addition to modulated signal transmission with the use of two antennas may require that the number of bits constituting the respective pieces of control information be increased. In such a modification, it still holds that the control information is transmitted by the P1 symbol and that the control information is transmitted by the first and second Signalling data, as set forth above.

The above description is directed to the frame structure of PLP symbol groups transmitted by a broadcast station in a time-sharing transmission scheme as shown in FIG. 64.

FIG. 66 shows another example of a symbol arranging scheme in the time and frequency domains, which is different from the symbol arranging scheme shown in FIG. 64. The symbols shown in FIG. 66 are of streams s1 and s2 and are to be transmitted after the transmission of the P1 symbol, the first and second Signalling data, and the Common PLP. In FIG. 66, each symbol denoted by “#1” represents one symbol of the symbol group of PLP #1 shown in FIG. 64. Similarly, each symbol denoted as “#2” represents one symbol of the symbol group of PLP #2 shown in FIG. 64, each symbol denoted as “#3” represents one symbol of the symbol group of PLP #3 shown in FIG. 64, and each symbol denoted as “#4” represents one symbol of the symbol group of PLP #4 shown in FIG. 64.

Similarly to FIG. 64, PLP #1 transmits data using the spatial multiplexing MIMO system shown in FIG. 49 or the MIMO system with a fixed precoding matrix. PLP #2 transmits data by transmitting one modulated signal. PLP #3 transmits data using a precoding scheme of regularly hopping between precoding matrices. PLP #4 transmits data using the space-time block coding shown in FIG. 50. Note that the symbol arrangement used in the space-time block coding is not limited to the arrangement in the time domain. Alternatively, the symbol arrangement may be in the frequency domain or in symbol groups formed in the time and frequency domains. In addition, the space-time block coding is not limited to the one shown in FIG. 50.

In FIG. 66, where streams s1 and s2 both have a symbol in the same subcarrier and at the same time, symbols of the two streams are present at the same frequency. In the case where the precoding performed includes the precoding according to the scheme for regularly hopping between precoding matrices as described in the other embodiments, streams s1 and s2 are subjected to weighting performed using the precoding matrices, and z1 and z2 are output from the respective antennas.

FIG. 66 differs from FIG. 64 in the following points. That is, the example shown in FIG. 64 is an arrangement of a plurality of PLPs using time-sharing, whereas the example shown in FIG. 66 is an arrangement of a plurality of PLPs using both time-sharing and frequency-sharing. That is, for example, at time 1, a symbol of PLP #1 and a symbol of PLP #2 are both present. Similarly, at time 3, a symbol of PLP #3 and a symbol of PLP #4 are both present.
In this way, PLP symbols having different index numbers (#X; X=1, 2 . . . ) may be allocated on a symbol-by-symbol basis (for each symbol composed of one subcarrier per time). For the sake of simplicity, FIG. 66 only shows symbols denoted by “#1” and “#2” at time 1. However, this is not a limiting example, and PLP symbols having index numbers other than “#1” and “#2” may be present at time 1. In addition, the relation between the subcarriers present at time 1 and the PLP index numbers is not limited to that shown in FIG. 66. Alternatively, a PLP symbol having any index number may be allocated to any subcarrier. Similarly, a PLP symbol having any index number may be allocated to any subcarrier at any time other than time 1.

FIG. 67 shows another example of a symbol arranging scheme in the time and frequency domains, which is different from the symbol arranging scheme shown in FIG. 64. The symbols shown in FIG. 67 are of streams s1 and s2 and are to be transmitted after the transmission of the P1 symbol, the first and second Signalling data, and the Common PLP. The characterizing feature of the example shown in FIG. 67 is that the “transmission scheme for transmitting only stream s1” is not selectable in the case where PLP transmission for unit frames is carried out basically with a plurality of antennas.

Therefore, data transmission by the symbol group 6701 of PLP #1 shown in FIG. 67 is carried out by “a spatial multiplexing MIMO system or a MIMO scheme using a fixed precoding matrix”. Data transmission by the symbol group 6702 of PLP #2 is carried out using “a precoding scheme of regularly hopping between precoding matrices”. Data transmission by the symbol group 6703 of PLP #3 is carried out by “space-time block coding”. Note that data transmission by the symbol group 6703 of PLP #3 and the following symbol groups in a unit frame is carried out by using one of “a spatial multiplexing MIMO system or a MIMO scheme using a fixed precoding matrix”, “a precoding scheme of regularly hopping between precoding matrices”, and “space-time block coding”.

FIG. 68 shows another example of a symbol arranging scheme in the time and frequency domains, which is different from the symbol arranging scheme shown in FIG. 66. The symbols shown in FIG. 68 are of streams s1 and s2 and are to be transmitted after the transmission of the P1 symbol, the first and second Signalling data, and the Common PLP. In FIG. 68, each symbol denoted by “#1” represents one symbol of the symbol group of PLP #1 shown in FIG. 67. Similarly, each symbol denoted as “#2” represents one symbol of the symbol group of PLP #2 shown in FIG. 67, each symbol denoted as “#3” represents one symbol of the symbol group of PLP #3 shown in FIG. 67, and each symbol denoted as “#4” represents one symbol of the symbol group of PLP #4 shown in FIG. 67.

Similarly to FIG. 67, PLP #1 transmits data using the spatial multiplexing MIMO system shown in FIG. 49 or the MIMO system with a fixed precoding matrix. PLP #2 transmits data using a precoding scheme of regularly hopping between precoding matrices. PLP #3 transmits data using the space-time block coding shown in FIG. 50. Note that the symbol arrangement used in the space-time block coding is not limited to the arrangement in the time domain. Alternatively, the symbol arrangement may be in the frequency domain or in symbol groups formed in the time and frequency domains.
In FIG. 68, where streams s1 and s2 both have a symbol in the same subcarrier and at the same time, symbols of the two streams are present at the same frequency. In the case where the precoding performed includes the precoding according to the scheme of regularly hopping between precoding matrices as described in the other embodiments, streams s1 and s2 are subjected to weighting performed using the precoding matrices, and z1 and z2 are output from the respective antennas.

FIG. 68 differs from FIG. 67 in the following points. That is, the example shown in FIG. 67 is an arrangement of a plurality of PLPs using time-sharing, whereas the example shown in FIG. 68 is an arrangement of a plurality of PLPs using both time-sharing and frequency-sharing. That is, for example, at time 1, a symbol of PLP #1 and a symbol of PLP #2 are both present. In this way, PLP symbols having different index numbers (#X; X=1, 2, . . . ) may be allocated on a symbol-by-symbol basis (for each symbol composed of one subcarrier per time).

For the sake of simplicity, FIG. 68 only shows symbols denoted by “#1” and “#2” at time 1. However, this is not a limiting example, and PLP symbols having any index numbers other than “#1” and “#2” may be present at time 1. In addition, the relation between the subcarriers present at time 1 and the PLP index numbers is not limited to that shown in FIG. 68. Alternatively, a PLP symbol having any index number may be allocated to any subcarrier. Similarly, a PLP symbol having any index number may be allocated to any subcarrier at any time other than time 1. Alternatively, only one PLP symbol may be allocated at a specific time, as at time 3. That is, in a framing scheme of arranging PLP symbols in the time and frequency domains, any allocation is applicable.

As set forth above, no PLP using “a transmission scheme for transmitting only stream s1” exists in the unit frame, so that the dynamic range of the signal received by the terminal can be kept narrow. As a result, the advantageous effect is achieved that the probability of excellent reception quality increases.

Note that FIG. 68 has been described using an example in which the transmission scheme selected is one of “a spatial multiplexing MIMO system or a MIMO scheme using a fixed precoding matrix”, “a precoding scheme of regularly hopping between precoding matrices”, and “space-time block coding”. Yet, it is not necessary that all of these transmission schemes be selectable. For example, the following combinations of the transmission schemes may be made selectable:
A “precoding scheme of regularly hopping between precoding matrices”, “space-time block coding”, and a “MIMO scheme using a fixed precoding matrix” are selectable.
A “precoding scheme of regularly hopping between precoding matrices” and “space-time block coding” are selectable.
A “precoding scheme of regularly hopping between precoding matrices” and a “MIMO scheme using a fixed precoding matrix” are selectable.

The above description relates to an example in which a unit frame includes a plurality of PLPs. The following describes an example in which a unit frame includes one PLP only.

FIG. 69 shows an example of the frame structure in the time and frequency domains for streams s1 and s2 in the case where only one PLP exists in a unit frame. In FIG. 69, the denotation “control symbol” represents a symbol such as the P1 symbol, the first and second Signalling data, or the like. In the example shown in FIG. 69, the first unit frame is transmitted using interval 1.
Similarly, the second unit frame is transmitted using interval 2, the third unit frame is transmitted using interval 3, and the fourth unit frame is transmitted using interval 4. In the example shown in FIG. 69, in the first unit frame, a symbol group 6801 for PLP #1-1 is transmitted, and the transmission scheme selected is “a spatial multiplexing MIMO system or a MIMO scheme using a fixed precoding matrix”. In the second unit frame, a symbol group 6802 for PLP #2-1 is transmitted, and the transmission scheme selected is “a scheme for transmitting one modulated signal”. In the third unit frame, a symbol group 6803 for PLP #3-1 is transmitted, and the transmission scheme selected is “a precoding scheme of regularly hopping between precoding matrices”. In the fourth unit frame, a symbol group 6804 for PLP #4-1 is transmitted, and the transmission scheme selected is “space-time block coding”. Note that the symbol arrangement used in the space-time block coding is not limited to the arrangement in the time domain. Alternatively, the symbols may be arranged in the frequency domain or in symbol groups formed in the time and frequency domains. In addition, the space-time block coding is not limited to the one shown in FIG. 50.

In FIG. 69, where streams s1 and s2 both have a symbol in the same subcarrier and at the same time, symbols of the two streams are present at the same frequency. In the case where the precoding performed includes the precoding according to the scheme of regularly hopping between precoding matrices as described in the other embodiments, streams s1 and s2 are subjected to weighting performed using the precoding matrices, and z1 and z2 are output from the respective antennas.

In the above manner, a transmission scheme may be set for each PLP in consideration of the data transmission speed and the data reception quality at the receiving terminal, so that an increase in data transmission speed and excellent reception quality are both achieved. As an example scheme of structuring control information, the control information indicating the transmission scheme and other information for the P1 symbol and the first and second Signalling data may be configured in a similar manner to Tables 3-6 (a sketch of such per-PLP signalling is given below). The difference is as follows. In the frame structure shown, for example, in FIG. 64, one unit frame includes a plurality of PLPs. Thus, it is necessary to provide the control information indicating the transmission scheme and the like for each PLP. On the other hand, in the frame structure shown, for example, in FIG. 69, one unit frame includes one PLP only. Thus, it is sufficient to provide the control information indicating the transmission scheme and the like only for the one PLP.

The present embodiment has described how a precoding scheme of regularly hopping between precoding matrices is applied to a system compliant with the DVB standard. Embodiments 1 to 16 have described examples of the precoding scheme of regularly hopping between precoding matrices. However, the scheme of regularly hopping between precoding matrices is not limited to the schemes described in Embodiments 1 to 16. The present embodiment can be implemented in the same manner by using a scheme comprising the steps of (i) preparing a plurality of precoding matrices, (ii) selecting, from among the prepared plurality of precoding matrices, one precoding matrix for each slot, and (iii) performing the precoding while regularly hopping between the precoding matrices to be used for each slot.
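The following is a minimal Python sketch of the per-PLP transmission-scheme signalling just discussed. The 2-bit field values and the enum names are placeholders in the spirit of Tables 3-6; the actual field values defined in those tables are not reproduced here.

```python
# Illustrative sketch of per-PLP transmission-scheme signalling.
# Bit patterns are placeholder assumptions, not the values of Tables 3-6.

from enum import Enum

class TxScheme(Enum):
    SISO = 0b00                 # transmitting one modulated signal
    SM_OR_FIXED_MIMO = 0b01     # spatial multiplexing / fixed precoding matrix
    PRECODE_HOPPING = 0b10      # regularly hopping between precoding matrices
    STBC = 0b11                 # space-time block coding

def control_fields(plp_schemes):
    """One 2-bit scheme field per PLP in the unit frame.

    For a frame such as FIG. 64 (several PLPs) the list has several
    entries; for a frame such as FIG. 69 (one PLP) it has exactly one.
    """
    return [(plp_id, scheme.value) for plp_id, scheme in plp_schemes]

# Multi-PLP unit frame (FIG. 64 style): one field per PLP is needed.
print(control_fields([(1, TxScheme.SM_OR_FIXED_MIMO),
                      (2, TxScheme.SISO),
                      (3, TxScheme.PRECODE_HOPPING),
                      (4, TxScheme.STBC)]))

# Single-PLP unit frame (FIG. 69 style): only one field is needed.
print(control_fields([(1, TxScheme.PRECODE_HOPPING)]))
```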
Although the control information has unique names in the present embodiment, the names of the control information do not influence the present invention.

Embodiment A2

The present embodiment provides detailed descriptions of a reception scheme and the structure of a reception device used in a case where a scheme of regularly hopping between precoding matrices is applied to a communication system compliant with the DVB-T2 standard, which is described in Embodiment A1.

FIG. 73 shows, by way of example, the structure of a reception device of a terminal used in a case where the transmission device of the broadcast station shown in FIG. 63 has adopted a scheme of regularly hopping between precoding matrices. In FIG. 73, the elements that operate in the same manner as in FIGS. 7 and 56 have the same reference signs thereas.

Referring to FIG. 73, a P1 symbol detection/demodulation unit 7301 performs signal detection and time-frequency synchronization by receiving a signal transmitted by a broadcast station and detecting a P1 symbol based on the inputs, namely signals 704_X and 704_Y that have been subjected to signal processing. The P1 symbol detection/demodulation unit 7301 also obtains the control information included in the P1 symbol (by applying demodulation and error correction decoding) and outputs P1 symbol control information 7302. The P1 symbol control information 7302 is input to OFDM related processors 5600_X and 5600_Y. Based on the input information, the OFDM related processors 5600_X and 5600_Y change a signal processing scheme for the OFDM scheme (this is because, as described in Embodiment A1, the P1 symbol includes information on a scheme for transmitting the signal transmitted by the broadcast station).

The signals 704_X and 704_Y that have been subjected to signal processing, as well as the P1 symbol control information 7302, are input to a P2 symbol demodulation unit 7303 (note that a P2 symbol may include a signalling PLP). The P2 symbol demodulation unit 7303 performs signal processing and demodulation (including error correction decoding) based on the P1 symbol control information, and outputs P2 symbol control information 7304. The P1 symbol control information 7302 and the P2 symbol control information 7304 are input to a control signal generating unit 7305. The control signal generating unit 7305 forms a set of pieces of control information (relating to receiving operations) and outputs the set as a control signal 7306. As illustrated in FIG. 73, the control signal 7306 is input to each unit.

A signal processing unit 711 receives, as inputs, the signals 706_1, 706_2, 708_1, 708_2, 704_X, 704_Y, and the control signal 7306. Based on the information included in the control signal 7306 on the transmission scheme, modulation scheme, error correction coding scheme, coding rate for error correction coding, block size of error correction codes, and the like used to transmit each PLP, the signal processing unit 711 performs demodulation processing and decoding processing, and outputs received data 712. Here, the signal processing unit 711 may perform demodulation processing by using Equation 41 of Math 41 and Equation 143 of Math 153 in a case where any of the following transmission schemes is used to transmit each PLP: a spatial multiplexing MIMO system; a MIMO scheme employing a fixed precoding matrix; and a precoding scheme of regularly hopping between precoding matrices. Note that the channel matrix (H) can be obtained from the outputs of the channel fluctuation estimating units (705_1, 705_2, 707_1 and 707_2).
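To make the role of the channel matrix H and the precoding matrix F in this demodulation concrete, the following is a minimal Python sketch of maximum-likelihood detection over the effective channel H(i)F(i), assuming the received-signal model r(i) = H(i)F(i)s(i) + noise. It does not reproduce Equation 41 or Equation 143 themselves; the QPSK constellation and the rotation matrix used as one F[i] are assumptions made only for this sketch.

```python
# Minimal ML demodulation sketch for one 2x2 MIMO slot,
# assuming r = H F s + noise. H comes from channel estimation;
# F is the precoding matrix in use for this slot.

import itertools
import numpy as np

QPSK = np.array([1+1j, 1-1j, -1+1j, -1-1j]) / np.sqrt(2)

def ml_demodulate(r, H, F):
    """Return the QPSK pair (s1, s2) minimizing ||r - H F s||^2."""
    Heff = H @ F                        # effective channel for this slot
    best, best_metric = None, np.inf
    for s1, s2 in itertools.product(QPSK, repeat=2):
        s = np.array([s1, s2])
        metric = np.linalg.norm(r - Heff @ s) ** 2
        if metric < best_metric:
            best, best_metric = (s1, s2), metric
    return best

# With precoding hopping, F changes from slot to slot, so Heff must be
# recomputed for every slot index i.
rng = np.random.default_rng(0)
H = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
theta = 2 * np.pi / 8                  # one assumed matrix F[i]
F = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
s = np.array([QPSK[0], QPSK[3]])
r = H @ F @ s                          # noiseless for clarity
print(ml_demodulate(r, H, F))          # recovers the transmitted pair
```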
The matrix structure of the precoding matrix (F or W) differs depending on the transmission scheme actually used. In particular, when the precoding scheme of regularly hopping between precoding matrices is used, the precoding matrices to be used are hopped between, and demodulation is performed with the precoding matrix in use at each time. Also, when space-time block coding is used, demodulation is performed by using values obtained from channel estimation and a received (baseband) signal.

FIG. 74 shows, by way of example, the structure of a reception device of a terminal used in a case where the transmission device of the broadcast station shown in FIG. 72 has adopted a scheme of regularly hopping between precoding matrices. In FIG. 74, the elements that operate in the same manner as in FIGS. 7, 56 and 73 have the same reference signs thereas. The reception device shown in FIG. 74 and the reception device shown in FIG. 73 are different in that the reception device shown in FIG. 73 can obtain data by receiving signals conforming to the DVB-T2 standard and signals conforming to standards other than the DVB-T2 standard, whereas the reception device shown in FIG. 74 can obtain data by receiving only signals conforming to standards other than the DVB-T2 standard.

Referring to FIG. 74, a P1 symbol detection/demodulation unit 7301 performs signal detection and time-frequency synchronization by receiving a signal transmitted by a broadcast station and detecting a P1 symbol based on the inputs, namely signals 704_X and 704_Y that have been subjected to signal processing. The P1 symbol detection/demodulation unit 7301 also obtains the control information included in the P1 symbol (by applying demodulation and error correction decoding) and outputs P1 symbol control information 7302. The P1 symbol control information 7302 is input to OFDM related processors 5600_X and 5600_Y. Based on the input information, the OFDM related processors 5600_X and 5600_Y change a signal processing scheme for the OFDM scheme. (This is because, as described in Embodiment A1, the P1 symbol includes information on a scheme for transmitting the signal transmitted by the broadcast station.)

The signals 704_X and 704_Y that have been subjected to signal processing, as well as the P1 symbol control information 7302, are input to a first/second signalling data demodulation unit 7401. The first/second signalling data demodulation unit 7401 performs signal processing and demodulation (including error correction decoding) based on the P1 symbol control information, and outputs first/second signalling data control information 7402. The P1 symbol control information 7302 and the first/second signalling data control information 7402 are input to a control signal generating unit 7305. The control signal generating unit 7305 forms a set of pieces of control information (relating to receiving operations) and outputs the set as a control signal 7306. As illustrated in FIG. 74, the control signal 7306 is input to each unit.

A signal processing unit 711 receives, as inputs, the signals 706_1, 706_2, 708_1, 708_2, 704_X, 704_Y, and the control signal 7306. Based on the information included in the control signal 7306 on the transmission scheme, modulation scheme, error correction coding scheme, coding rate for error correction coding, block size of error correction codes, and the like used to transmit each PLP, the signal processing unit 711 performs demodulation processing and decoding processing, and outputs received data 712.
Here, the signal processing unit 711 may perform demodulation processing by using Equation 41 of Math 41 and Equation 143 of Math 153 in a case where any of the following transmission schemes is used to transmit each PLP: a spatial multiplexing MIMO system; a MIMO scheme employing a fixed precoding matrix; and a precoding scheme of regularly hopping between precoding matrices. Note that the channel matrix (H) can be obtained from the outputs of the channel fluctuation estimating units (705_1, 705_2, 707_1 and 707_2). The matrix structure of the precoding matrix (F or W) differs depending on the transmission scheme actually used. In particular, when the precoding scheme of regularly hopping between precoding matrices is used, the precoding matrices to be used are hopped between, and demodulation is performed with the precoding matrix in use at each time. Also, when space-time block coding is used, demodulation is performed by using values obtained from channel estimation and a received (baseband) signal.

FIG. 75 shows the structure of a reception device of a terminal compliant with both the DVB-T2 standard and standards other than the DVB-T2 standard. In FIG. 75, the elements that operate in the same manner as in FIGS. 7, 56 and 73 have the same reference signs thereas. The reception device shown in FIG. 75 is different from the reception devices shown in FIGS. 73 and 74 in that the reception device shown in FIG. 75 comprises a P2 symbol or first/second signalling data demodulation unit 7501 so as to be able to demodulate both signals compliant with the DVB-T2 standard and signals compliant with standards other than the DVB-T2 standard.

The signals 704_X and 704_Y that have been subjected to signal processing, as well as the P1 symbol control information 7302, are input to the P2 symbol or first/second signalling data demodulation unit 7501. Based on the P1 symbol control information, the P2 symbol or first/second signalling data demodulation unit 7501 judges whether the received signal is compliant with the DVB-T2 standard or with a standard other than the DVB-T2 standard (this judgment can be made with use of, for example, Table 3), performs signal processing and demodulation (including error correction decoding), and outputs control information 7502 that includes information indicating the standard with which the received signal is compliant. Other operations are similar to those described with reference to FIGS. 73 and 74.

As set forth above, the structure of the reception device described in the present embodiment allows obtaining data with high reception quality by receiving the signal transmitted by the transmission device of the broadcast station, which has been described in Embodiment A1, and by performing appropriate signal processing. In particular, when receiving a signal associated with a precoding scheme of regularly hopping between precoding matrices, both the data transmission efficiency and the data reception quality can be improved in an LOS environment.

As the present embodiment has described the structure of the reception device that corresponds to the transmission scheme used by the broadcast station described in Embodiment A1, the reception device is provided with two receive antennas in the present embodiment. However, the number of antennas provided in the reception device is not limited to two. The present embodiment can be implemented in the same manner when the reception device is provided with three or more antennas. In this case, the data reception quality can be improved due to an increase in the diversity gain.
Furthermore, when the transmission device of the broadcast station is provided with three or more transmit antennas and transmits three or more modulated signals, the present embodiment can be implemented in the same manner by increasing the number of receive antennas provided in the reception device of the terminal. In this case, it is preferable that the precoding scheme of regularly hopping between precoding matrices be used as a transmission scheme.

Note that Embodiments 1 to 16 have described examples of the precoding scheme of regularly hopping between precoding matrices. However, the scheme of regularly hopping between precoding matrices is not limited to the schemes described in Embodiments 1 to 16. The present embodiment can be implemented in the same manner by using a scheme comprising the steps of (i) preparing a plurality of precoding matrices, (ii) selecting, from among the prepared plurality of precoding matrices, one precoding matrix for each slot, and (iii) performing the precoding while regularly hopping between the precoding matrices to be used for each slot.

Embodiment A3

In the system described in Embodiment A1 where the precoding scheme of regularly hopping between precoding matrices is applied to the DVB-T2 standard, there is control information for designating a pilot insertion pattern in the L1 pre-signalling. The present embodiment describes how to apply the precoding scheme of regularly hopping between precoding matrices when the pilot insertion pattern is changed in the L1 pre-signalling.

FIGS. 76A, 76B, 77A and 77B show examples of a frame structure represented in the frequency-time domain for the DVB-T2 standard in a case where a plurality of modulated signals are transmitted from a plurality of antennas using the same frequency bandwidth. In each of FIGS. 76A to 77B, the horizontal axis represents frequency and carrier numbers are shown therealong, whereas the vertical axis represents time. FIGS. 76A and 77A each show a frame structure for a modulated signal z1 pertaining to the embodiments that have been described so far. FIGS. 76B and 77B each show a frame structure for a modulated signal z2 pertaining to the embodiments that have been described so far. Indexes “f0, f1, f2, . . . ” are assigned as carrier numbers, and indexes “t1, t2, t3, . . . ” are assigned as times. In FIGS. 76A to 77B, symbols that are assigned the same carrier number and the same time exist over the same frequency at the same time.

FIGS. 76A to 77B show examples of positions in which pilot symbols are inserted according to the DVB-T2 standard (when a plurality of modulated signals are transmitted by using a plurality of antennas according to DVB-T2, there are eight schemes regarding the positions in which pilots are inserted; FIGS. 76A to 77B show two of such schemes). FIGS. 76A to 77B show two types of symbols, namely, symbols as pilots and symbols for data transmission (“data transmission symbols”). As described in the other embodiments, when a precoding scheme of regularly hopping between precoding matrices or a precoding scheme employing a fixed precoding matrix is used, data transmission symbols in the modulated signal z1 are obtained as a result of performing weighting on the streams s1 and s2, and data transmission symbols in the modulated signal z2 are obtained as a result of performing weighting on the streams s1 and s2.
When the space-time block coding or the spatial multiplexing MIMO system is used, data transmission symbols in the modulated signal z1 are either for the stream s1 or for the stream s2, and data transmission symbols in the modulated signal z2 are either for the stream s1 or for the stream s2. In FIGS. 76A to 77B, the symbols as pilots are each assigned an index “PP1” or “PP2”. A pilot symbol with the index “PP1” and a pilot symbol with the index “PP2” are structured by using different schemes. As mentioned earlier, according to the DVB-T2 standard, the broadcast station can designate one of the eight pilot insertion schemes (that differ from one another in the frequency of insertion of pilot symbols in a frame). FIGS. 76A to 77B show two of the eight pilot insertion schemes. Information on the one of the eight pilot insertion schemes selected by the broadcast station is transmitted to the transmission destination (terminal) as the L1 pre-signalling data of the P2 symbols, which has been described in Embodiment A1.

Next, a description is given of how to apply the precoding scheme of regularly hopping between precoding matrices in association with a pilot insertion scheme. By way of example, it is assumed here that 10 different types of precoding matrices F are prepared for the precoding scheme of regularly hopping between precoding matrices, and these 10 different types of precoding matrices F are expressed as F[0], F[1], F[2], F[3], F[4], F[5], F[6], F[7], F[8], and F[9]. FIGS. 78A and 78B show the result of allocating the precoding matrices to the frame structure represented in the frequency-time domain shown in FIGS. 76A and 76B when the precoding scheme of regularly hopping between precoding matrices is applied. FIGS. 79A and 79B show the result of allocating the precoding matrices to the frame structure represented in the frequency-time domain shown in FIGS. 77A and 77B when the precoding scheme of regularly hopping between precoding matrices is applied. For example, in both the frame structure for the modulated signal z1 shown in FIG. 78A and the frame structure for the modulated signal z2 shown in FIG. 78B, the symbol at the carrier f1 and the time t1 shows “#1”. This means that precoding is performed on this symbol by using the precoding matrix F[1]. Likewise, in FIGS. 78A to 79B, a symbol at the carrier fx and the time ty showing “#Z” denotes that precoding is performed on this symbol by using the precoding matrix F[Z] (here, x=0, 1, 2, . . . , and y=1, 2, 3, . . . ).

It should be naturally appreciated that different schemes for inserting pilot symbols (different insertion intervals) are used for the frame structure represented in the frequency-time domain shown in FIGS. 78A and 78B and the frame structure represented in the frequency-time domain shown in FIGS. 79A and 79B. Furthermore, the precoding scheme of regularly hopping between precoding matrices is not applied to pilot symbols. For this reason, even if all of the signals shown in FIGS. 78A to 79B are subjected to the same precoding scheme that regularly hops between precoding matrices over a certain period (cycle) (i.e., the same number of different precoding matrices are prepared for this scheme applied to all of the signals shown in FIGS. 78A to 79B), a precoding matrix allocated to a symbol at a certain carrier and a certain time in FIGS. 78A and 78B may be different from the precoding matrix allocated to the corresponding symbol in FIGS. 79A and 79B. This is apparent from FIGS. 78A to 79B.
For example, in FIGS. 78A and 78B, the symbol at the carrier f5 and the time t2 shows “#7”, meaning that precoding is performed thereon by using the precoding matrix F[7]. On the other hand, in FIGS. 79A and 79B, the symbol at the carrier f5 and the time t2 shows “#8”, meaning that precoding is performed thereon by using the precoding matrix F[8].

Therefore, the broadcast station transmits control information indicating a pilot pattern (pilot insertion scheme) using the L1 pre-signalling data. Note that, when the broadcast station has selected the precoding scheme of regularly hopping between precoding matrices as a scheme for transmitting each PLP based on the control information shown in Table 4 or 5, the control information indicating the pilot pattern (pilot insertion scheme) may additionally indicate a scheme for allocating the precoding matrices (hereinafter, “precoding matrix allocation scheme”) prepared for the precoding scheme of regularly hopping between precoding matrices. Hence, the reception device of the terminal that receives the modulated signals transmitted by the broadcast station can acknowledge the precoding matrix allocation scheme used in the precoding scheme of regularly hopping between precoding matrices by obtaining the control information indicating the pilot pattern, which is included in the L1 pre-signalling data (on the premise that the broadcast station has selected the precoding scheme of regularly hopping between precoding matrices as a scheme for transmitting each PLP based on the control information shown in Table 4 or 5). Although the description of the present embodiment has been given with reference to the L1 pre-signalling data, in the case of the frame structure shown in FIG. 70 where no P2 symbol exists, the control information indicating the pilot pattern and the precoding matrix allocation scheme used in the precoding scheme of regularly hopping between precoding matrices is included in the first signalling data and the second signalling data.

The following describes another example. For example, the above description is also true of a case where the precoding matrices used in the precoding scheme of regularly hopping between precoding matrices are determined at the same time as the designation of a modulation scheme, as shown in Table 2. In this case, by transmitting only the pieces of control information indicating a pilot pattern, a scheme for transmitting each PLP and a modulation scheme from the P2 symbols, the reception device of the terminal can estimate, via obtainment of these pieces of control information, the precoding matrix allocation scheme used in the precoding scheme of regularly hopping between precoding matrices (note that the allocation is performed in the frequency-time domain). Assume a case where the precoding matrices used in the precoding scheme of regularly hopping between precoding matrices are determined at the same time as the designation of a modulation scheme and an error correction coding scheme, as shown in Table 1B. In this case also, by transmitting only the pieces of control information indicating a pilot pattern, a scheme for transmitting each PLP and a modulation scheme, as well as an error correction coding scheme, from the P2 symbols, the reception device of the terminal can estimate, via obtainment of these pieces of information, the precoding matrix allocation scheme used in the precoding scheme of regularly hopping between precoding matrices (note that the allocation is performed in the frequency-time domain).
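The dependence of the matrix allocation on the pilot insertion scheme can be made concrete with the following Python sketch: the ten matrices F[0] to F[9] are allocated cyclically to the data cells of the frame, and pilot cells are simply skipped. The two pilot patterns below are invented for illustration; they stand in for the patterns of FIGS. 76A/76B and 77A/77B, and the grid dimensions are likewise assumptions.

```python
# Sketch: the pilot insertion pattern changes how the N = 10 precoding
# matrices F[0]..F[9] land on the frame, because pilot cells are skipped.

import numpy as np

N_MATRICES = 10
CARRIERS, TIMES = 8, 4      # assumed toy frame dimensions

def allocate(pilot_cells):
    """Walk the frame cell by cell, giving each data cell the next
    precoding-matrix index in the cycle; pilot cells are skipped."""
    alloc = -np.ones((TIMES, CARRIERS), dtype=int)   # -1 marks a pilot
    k = 0
    for t in range(TIMES):
        for f in range(CARRIERS):
            if (t, f) in pilot_cells:
                continue                 # no precoding on pilots
            alloc[t, f] = k % N_MATRICES
            k += 1
    return alloc

pattern_a = {(0, 0), (0, 4), (2, 0), (2, 4)}   # assumed insertion pattern A
pattern_b = {(0, 0), (1, 2), (2, 4), (3, 6)}   # assumed insertion pattern B

a, b = allocate(pattern_a), allocate(pattern_b)
# The same data cell can receive different matrices under the two
# patterns, analogous to the "#7" versus "#8" example above:
print(a[2, 5], b[2, 5])    # prints: 7 8
```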
However, unlike the cases of Tables 1B and 2, a precoding matrix hopping scheme used in the precoding scheme of regularly hopping between precoding matrices is transmitted, as indicated by Table 5, in any of the following situations (i) to (iii): (i) when one of two or more different schemes of regularly hopping between precoding matrices can be selected even if the modulation scheme is determined (examples of such two or more different schemes include precoding schemes that regularly hop between precoding matrices over different periods (cycles), and precoding schemes that regularly hop between precoding matrices where the precoding matrices used in one scheme are different from those used in another); (ii) when one of two or more different schemes of regularly hopping between precoding matrices can be selected even if the modulation scheme and the error correction scheme are determined; and (iii) when one of two or more different schemes of regularly hopping between precoding matrices can be selected even if the error correction scheme is determined. In any of these situations (i) to (iii), it is permissible to transmit information on the precoding matrix allocation scheme used in the precoding scheme of regularly hopping between precoding matrices, in addition to the precoding matrix hopping scheme used in the precoding scheme of regularly hopping between precoding matrices (note that the allocation is performed in the frequency-time domain).

Table 7 shows an example of the structure of control information for the information on the precoding matrix allocation scheme used in the precoding scheme of regularly hopping between precoding matrices (note that the allocation is performed in the frequency-time domain).

TABLE 7
MATRIX_FRAME_ARRANGEMENT (2 bits)
00: Precoding matrix allocation scheme #1 in frames
01: Precoding matrix allocation scheme #2 in frames
10: Precoding matrix allocation scheme #3 in frames
11: Precoding matrix allocation scheme #4 in frames

By way of example, assume a case where the transmission device of the broadcast station has selected the pilot insertion pattern shown in FIGS. 76A and 76B, and has selected a scheme A as the precoding scheme of regularly hopping between precoding matrices. In this case, the transmission device of the broadcast station can select either the precoding matrix allocation scheme shown in FIGS. 78A and 78B or the precoding matrix allocation scheme shown in FIGS. 80A and 80B (note that the allocation is performed in the frequency-time domain). For example, when the transmission device of the broadcast station has selected the precoding matrix allocation scheme shown in FIGS. 78A and 78B, “MATRIX_FRAME_ARRANGEMENT” in Table 7 is set to “00”. On the other hand, when the transmission device has selected the precoding matrix allocation scheme shown in FIGS. 80A and 80B, “MATRIX_FRAME_ARRANGEMENT” in Table 7 is set to “01”. Then, the reception device of the terminal can acknowledge the precoding matrix allocation scheme by obtaining the control information shown in Table 7 (note that the allocation is performed in the frequency-time domain). Note that the control information shown in Table 7 can be transmitted by using the P2 symbols, or by using the first signalling data and the second signalling data.
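A minimal Python sketch of the 2-bit field of Table 7 follows. Only the field itself is modeled; how it is carried inside the P2 symbols or the first/second signalling data is outside the scope of the sketch, and the function names are invented for illustration.

```python
# Minimal encoder/decoder sketch for the 2-bit MATRIX_FRAME_ARRANGEMENT
# field of Table 7. Function names are illustrative, not standard APIs.

ALLOCATION_SCHEMES = {
    0b00: "Precoding matrix allocation scheme #1 in frames",
    0b01: "Precoding matrix allocation scheme #2 in frames",
    0b10: "Precoding matrix allocation scheme #3 in frames",
    0b11: "Precoding matrix allocation scheme #4 in frames",
}

def encode_matrix_frame_arrangement(scheme_number: int) -> int:
    """Map allocation scheme #1..#4 to its 2-bit field value."""
    assert 1 <= scheme_number <= 4
    return scheme_number - 1

def decode_matrix_frame_arrangement(bits: int) -> str:
    return ALLOCATION_SCHEMES[bits & 0b11]

# The broadcast station selects, e.g., the FIG. 78A/78B allocation (#1),
# so the field is set to 00; the terminal decodes it back:
bits = encode_matrix_frame_arrangement(1)
print(bits, decode_matrix_frame_arrangement(bits))
```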
As set forth above, by implementing the precoding matrix allocation scheme used in the precoding scheme of regularly hopping between precoding matrices based on the pilot insertion scheme, and by properly transmitting the information indicative of the precoding matrix allocation scheme to the transmission destination (terminal), the reception device of the terminal can achieve the advantageous effect of improving both the data transmission efficiency and the data reception quality.

The present embodiment has described a case where the broadcast station transmits two signals. However, the present embodiment can be implemented in the same manner when the transmission device of the broadcast station is provided with three or more transmit antennas and transmits three or more modulated signals. Embodiments 1 to 16 have described examples of the precoding scheme of regularly hopping between precoding matrices. However, the scheme of regularly hopping between precoding matrices is not limited to the schemes described in Embodiments 1 to 16. The present embodiment can be implemented in the same manner by using a scheme comprising the steps of (i) preparing a plurality of precoding matrices, (ii) selecting, from among the prepared plurality of precoding matrices, one precoding matrix for each slot, and (iii) performing the precoding while regularly hopping between the precoding matrices to be used for each slot.

Embodiment A4

In the present embodiment, a description is given of a repetition scheme used in a precoding scheme of regularly hopping between precoding matrices in order to improve the data reception quality.

FIGS. 3, 4, 13, 40 and 53 each show the structure of a transmission device employing the precoding scheme of regularly hopping between precoding matrices. The present embodiment, by contrast, describes examples where repetition is used in the precoding scheme of regularly hopping between precoding matrices.

FIG. 81 shows an example of the structure of the signal processing unit pertaining to a case where repetition is used in the precoding scheme of regularly hopping between precoding matrices. In light of FIG. 53, the structure of FIG. 81 corresponds to the signal processing unit 5308. A baseband signal 8101_1 shown in FIG. 81 corresponds to the baseband signal 5307_1 shown in FIG. 53. The baseband signal 8101_1 is obtained as a result of mapping, and constitutes the stream s1. Likewise, a baseband signal 8101_2 shown in FIG. 81 corresponds to the baseband signal 5307_2 shown in FIG. 53. The baseband signal 8101_2 is obtained as a result of mapping, and constitutes the stream s2.

The baseband signal 8101_1 and a control signal 8104 are input to a signal processing unit (duplicating unit) 8102_1. The signal processing unit (duplicating unit) 8102_1 generates duplicates of the baseband signal in accordance with the information on the number of repetitions included in the control signal 8104. For example, in a case where the information on the number of repetitions included in the control signal 8104 indicates four repetitions, provided that the baseband signal 8101_1 includes signals s11, s12, s13, s14, . . . arranged in the stated order along the time axis, the signal processing unit (duplicating unit) 8102_1 generates a duplicate of each signal four times, and outputs the resultant duplicates.
That is, after the four repetitions, the signal processing unit (duplicating unit) 8102_1 outputs, as the baseband signal 8103_1, four pieces of s11 (i.e., s11, s11, s11, s11), four pieces of s12 (i.e., s12, s12, s12, s12), four pieces of s13 (i.e., s13, s13, s13, s13), four pieces of s14 (i.e., s14, s14, s14, s14) and so on, in the stated order along the time axis.

The baseband signal 8101_2 and the control signal 8104 are input to a signal processing unit (duplicating unit) 8102_2. The signal processing unit (duplicating unit) 8102_2 generates duplicates of the baseband signal in accordance with the information on the number of repetitions included in the control signal 8104. For example, in a case where the information on the number of repetitions included in the control signal 8104 indicates four repetitions, provided that the baseband signal 8101_2 includes signals s21, s22, s23, s24, . . . arranged in the stated order along the time axis, the signal processing unit (duplicating unit) 8102_2 generates a duplicate of each signal four times, and outputs the resultant duplicates. That is, after the four repetitions, the signal processing unit (duplicating unit) 8102_2 outputs, as the baseband signal 8103_2, four pieces of s21 (i.e., s21, s21, s21, s21), four pieces of s22 (i.e., s22, s22, s22, s22), four pieces of s23 (i.e., s23, s23, s23, s23), four pieces of s24 (i.e., s24, s24, s24, s24) and so on, in the stated order along the time axis.

The baseband signals 8103_1 and 8103_2 obtained as a result of the repetitions, as well as the control signal 8104, are input to a weighting unit (precoding operation unit) 8105. The weighting unit (precoding operation unit) 8105 performs precoding based on the information on the precoding scheme of regularly hopping between precoding matrices, which is included in the control signal 8104. More specifically, the weighting unit (precoding operation unit) 8105 performs weighting on the baseband signals 8103_1 and 8103_2 obtained as a result of the repetitions, and outputs baseband signals 8106_1 and 8106_2 on which the precoding has been performed (here, the baseband signals 8106_1 and 8106_2 are respectively expressed as z1(i) and z2(i), where i represents the order (along time or frequency)). Provided that the baseband signals 8103_1 and 8103_2 obtained as a result of the repetitions are respectively y1(i) and y2(i) and the precoding matrix is F(i), the following relationship is satisfied.

Math 561

$$\begin{pmatrix} z_1(i) \\ z_2(i) \end{pmatrix} = F(i) \begin{pmatrix} y_1(i) \\ y_2(i) \end{pmatrix} \quad \text{(Equation 475)}$$

Provided that the N precoding matrices prepared for the precoding scheme of regularly hopping between precoding matrices are F[0], F[1], F[2], F[3], . . . , F[N−1] (where N is an integer larger than or equal to two), one of the precoding matrices F[0], F[1], F[2], F[3], . . . , F[N−1] is used as F(i) in Equation 475.

By way of example, assume that i=0, 1, 2, 3; y1(i) represents the four duplicated baseband signals s11, s11, s11, s11; and y2(i) represents the four duplicated baseband signals s21, s21, s21, s21. Under this assumption, it is important that the following condition be met.

Math 562

For ∀α, ∀β, the relationship F(α)≠F(β) is satisfied (for α, β=0, 1, 2, 3 and α≠β).

The following description is derived by generalizing the above. Assume that the number of repetitions is K; i=g0, g1, g2, . . . , gK−1 (i.e., gj, where j is an integer in a range of 0 to K−1); and y1(i) represents s11. Under this assumption, it is important that the following condition be met.
Math 563

For ∀α, ∀β, the relationship F(α)≠F(β) is satisfied (for α, β=gj (j being an integer in a range of 0 to K−1) and α≠β).

Likewise, assume that the number of repetitions is K; i=h0, h1, h2, . . . , hK−1 (i.e., hj, where j is an integer in a range of 0 to K−1); and y2(i) represents s21. Under this assumption, it is important that the following condition be met.

Math 564

For ∀α, ∀β, the relationship F(α)≠F(β) is satisfied (for α, β=hj (j being an integer in a range of 0 to K−1) and α≠β).

Here, the relationship gj=hj may or may not be satisfied. This way, the identical streams generated through the repetitions are transmitted while using different precoding matrices therefor, and thus the advantageous effect of improving the data reception quality is achieved.

The present embodiment has described a case where the broadcast station transmits two signals. However, the present embodiment can be implemented in the same manner when the transmission device of the broadcast station is provided with three or more transmit antennas and transmits three or more modulated signals. Assume that the number of transmitted signals is Q; the number of repetitions is K; i=g0, g1, g2, . . . , gK−1 (i.e., gj, where j is an integer in a range of 0 to K−1); and yb(i) represents sb1 (where b is an integer in a range of 1 to Q). Under this assumption, it is important that the following condition be met.

Math 565

For ∀α, ∀β, the relationship F(α)≠F(β) is satisfied (for α, β=gj (j being an integer in a range of 0 to K−1) and α≠β). Note that F(i) is a precoding matrix pertaining to a case where the number of transmitted signals is Q.

Next, an embodiment different from the embodiment illustrated in FIG. 81 is described with reference to FIG. 82. In FIG. 82, the elements that operate in the same manner as in FIG. 81 have the same reference signs thereas. The structure shown in FIG. 82 is different from the structure shown in FIG. 81 in that data pieces are reordered so as to transmit identical data pieces from different antennas.

A baseband signal 8101_1 shown in FIG. 82 corresponds to the baseband signal 5307_1 shown in FIG. 53. The baseband signal 8101_1 is obtained as a result of mapping, and constitutes the stream s1. Similarly, a baseband signal 8101_2 shown in FIG. 82 corresponds to the baseband signal 5307_2 shown in FIG. 53. The baseband signal 8101_2 is obtained as a result of mapping, and constitutes the stream s2.

The baseband signal 8101_1 and the control signal 8104 are input to a signal processing unit (duplicating unit) 8102_1. The signal processing unit (duplicating unit) 8102_1 generates duplicates of the baseband signal in accordance with the information on the number of repetitions included in the control signal 8104. For example, in a case where the information on the number of repetitions included in the control signal 8104 indicates four repetitions, provided that the baseband signal 8101_1 includes signals s11, s12, s13, s14, . . . arranged in the stated order along the time axis, the signal processing unit (duplicating unit) 8102_1 generates a duplicate of each signal four times, and outputs the resultant duplicates. That is, after the four repetitions, the signal processing unit (duplicating unit) 8102_1 outputs, as the baseband signal 8103_1, four pieces of s11 (i.e., s11, s11, s11, s11), four pieces of s12 (i.e., s12, s12, s12, s12), four pieces of s13 (i.e., s13, s13, s13, s13), four pieces of s14 (i.e., s14, s14, s14, s14) and so on, in the stated order along the time axis.
The baseband signal 8101_2 and the control signal 8104 are input to a signal processing unit (duplicating unit) 8102_2. The signal processing unit (duplicating unit) 8102_2 generates duplicates of the baseband signal in accordance with the information on the number of repetitions included in the control signal 8104. For example, in a case where the information on the number of repetitions included in the control signal 8104 indicates four repetitions, provided that the baseband signal 8101_2 includes signals s21, s22, s23, s24, . . . arranged in the stated order along the time axis, the signal processing unit (duplicating unit) 8102_2 generates a duplicate of each signal four times, and outputs the resultant duplicates. That is, after the four repetitions, the signal processing unit (duplicating unit) 8102_2 outputs, as the baseband signal 8103_2, four pieces of s21 (i.e., s21, s21, s21, s21), four pieces of s22 (i.e., s22, s22, s22, s22), four pieces of s23 (i.e., s23, s23, s23, s23), four pieces of s24 (i.e., s24, s24, s24, s24) and so on, in the stated order along the time axis.

The baseband signals 8103_1 and 8103_2 obtained as a result of the repetitions, as well as the control signal 8104, are input to a reordering unit 8201. The reordering unit 8201 reorders the data pieces in accordance with information on a repetition scheme included in the control signal 8104, and outputs baseband signals 8202_1 and 8202_2 obtained as a result of the reordering. For example, assume that the baseband signal 8103_1 obtained as a result of the repetitions is composed of four pieces of s11 (s11, s11, s11, s11) arranged along the time axis, and the baseband signal 8103_2 obtained as a result of the repetitions is composed of four pieces of s21 (s21, s21, s21, s21) arranged along the time axis. In FIG. 82, s11 is output as both y1(i) and y2(i) of Equation 475, and s21 is similarly output as both y1(i) and y2(i) of Equation 475. Likewise, reordering similar to the reordering performed on s11 is performed on s12, s13, . . . , and reordering similar to the reordering performed on s21 is performed on s22, s23, . . . .

Hence, the baseband signal 8202_1 obtained as a result of the reordering includes s11, s21, s11, s21, s12, s22, s12, s22, s13, s23, s13, s23, . . . arranged in the stated order, which are equivalent to y1(i) of Equation 475. Although the pieces of s11 and s21 are arranged in the order s11, s21, s11 and s21 in the above description, the pieces of s11 and s21 are not limited to being arranged in this way, but may be arranged in any order. Similarly, the pieces of s12 and s22, as well as the pieces of s13 and s23, may be arranged in any order. The baseband signal 8202_2 obtained as a result of the reordering includes s21, s11, s21, s11, s22, s12, s22, s12, s23, s13, s23, s13, . . . arranged in the stated order, which are equivalent to y2(i) of Equation 475. Although the pieces of s11 and s21 are arranged in the order s21, s11, s21 and s11 in the above description, the pieces of s11 and s21 are not limited to being arranged in this way, but may be arranged in any order. Similarly, the pieces of s12 and s22, as well as the pieces of s13 and s23, may be arranged in any order.

The baseband signals 8202_1 and 8202_2 obtained as a result of the reordering, as well as the control signal 8104, are input to a weighting unit (precoding operation unit) 8105. The weighting unit (precoding operation unit) 8105 performs precoding based on the information on the precoding scheme of regularly hopping between precoding matrices, which is included in the control signal 8104.
More specifically, the weighting unit (precoding operation unit) 8105 performs weighting on the baseband signals 8202_1 and 8202_2 obtained as a result of the reordering, and outputs baseband signals 8106_1 and 8106_2 on which the precoding has been performed (here, the baseband signals 8106_1 and 8106_2 are respectively expressed as z1(i) and z2(i), where i represents the order (along time or frequency)). As described earlier, under the assumption that the baseband signals 8202_1 and 8202_2 obtained as a result of the reordering are respectively y1(i) and y2(i) and the precoding matrix is F(i), the relationship in Equation 475 is satisfied. Provided that the N precoding matrices prepared for the precoding scheme of regularly hopping between precoding matrices are F[0], F[1], F[2], F[3], . . . , F[N−1] (where N is an integer larger than or equal to two), one of the precoding matrices F[0], F[1], F[2], F[3], . . . , F[N−1] is used as F(i) in Equation 475. Although it has been described above that four repetitions are performed, the number of repetitions is not limited to four.

As with the structure shown in FIG. 81, the structure shown in FIG. 82 also achieves high reception quality when the relationships set out in Math 304 to Math 307 are satisfied. The structure of the reception device is illustrated in FIGS. 7 and 56. By taking advantage of the fulfillment of the relationships set out in Equation 144 and Equation 475, the signal processing unit demodulates the bits transmitted by each of s11, s12, s13, s14, . . . , and the bits transmitted by each of s21, s22, s23, s24, . . . . Note that each bit may be calculated as a log-likelihood ratio or as a hard-decision value. Furthermore, by taking advantage of the fact that K repetitions are performed on s11, it is possible to obtain highly reliable estimate values for the bits transmitted by s11. Likewise, by taking advantage of the fact that K repetitions are performed on s12, s13, . . . , and on s21, s22, s23, . . . , it is possible to obtain highly reliable estimate values for the bits transmitted by s12, s13, . . . , and by s21, s22, s23, . . . .

The present embodiment has described a scheme for applying a precoding scheme of regularly hopping between precoding matrices in the case where repetitions are performed. When there are two types of slots, i.e., slots over which data is transmitted after performing the repetitions and slots over which data is transmitted without performing the repetitions, either a precoding scheme of regularly hopping between precoding matrices or a precoding scheme employing a fixed precoding matrix may be used as a transmission scheme for the slots over which data is transmitted without performing the repetitions. Put another way, in order for the reception device to achieve high data reception quality, it is important that the transmission scheme pertaining to the present embodiment be used for the slots over which data is transmitted after performing the repetitions.

In the systems associated with the DVB standard that have been described in Embodiments A1 through A3, it is necessary to secure higher reception qualities for the P2 symbols and the first and second signalling data than for the PLPs. When the P2 symbols and the first and second signalling data are transmitted by using the precoding scheme of regularly hopping between precoding matrices described in the present embodiment, which incorporates the repetitions, the reception quality of the control information improves in the reception device. This is important for stable operations of the systems.
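The core of the repetition scheme, namely that the K duplicates of one symbol pair must be precoded with mutually different matrices so that F(α)≠F(β) within a repetition group, can be sketched in a few lines of Python. The rotation matrices used here as F[0] to F[N−1] are illustrative assumptions and are not the matrices of Embodiments 1 to 16.

```python
# Sketch of the repetition scheme of FIG. 81: each mapped symbol pair is
# duplicated K times, and duplicate j is precoded with a different
# matrix F[j], satisfying F(alpha) != F(beta) within one group.

import numpy as np

N = 10  # number of prepared precoding matrices F[0]..F[N-1] (assumed)
F = [np.array([[np.cos(th), -np.sin(th)],
               [np.sin(th),  np.cos(th)]])
     for th in np.pi * np.arange(N) / N]   # illustrative matrices only

def repeat_and_precode(s1, s2, K=4):
    """Duplicate the symbol pair K times; precode duplicate j with F[j]."""
    z1, z2 = [], []
    for j in range(K):                  # K distinct matrices per group
        y = np.array([s1, s2])          # y1(i), y2(i) after duplication
        z = F[j % N] @ y                # Equation 475: z = F(i) y
        z1.append(z[0])
        z2.append(z[1])
    return z1, z2

# Four duplicates of (s11, s21), each sent through a different F[j]:
z1, z2 = repeat_and_precode(0.7 + 0.7j, -0.7 + 0.7j, K=4)
print(np.round(z1, 3))
print(np.round(z2, 3))
```

With K ≤ N, the index j never wraps around within a group, so the condition of Math 562 to Math 565 holds; the receiver can then combine the K differently precoded copies to obtain the highly reliable estimates described above.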
Embodiments 1 to 16 have provided examples of the precoding scheme of regularly hopping between precoding matrices described in the present embodiment. However, the scheme of regularly hopping between precoding matrices is not limited to the schemes described in Embodiments 1 to 16. The present embodiment can be implemented in the same manner by using a scheme comprising the steps of (i) preparing a plurality of precoding matrices, (ii) selecting, from among the prepared plurality of precoding matrices, one precoding matrix for each slot, and (iii) performing the precoding while regularly hopping between the precoding matrices for each slot.

Embodiment A5

The present embodiment describes a scheme for transmitting modulated signals by applying common amplification to the transmission scheme described in Embodiment A1.

FIG. 83 shows an example of the structure of a transmission device. In FIG. 83, the elements that operate in the same manner as in FIG. 52 have the same reference signs thereas. Modulated signal generating units #1 to #M (i.e., 5201_1 to 5201_M) shown in FIG. 83 generate the signals 6323_1 and 6323_2 from the input signals (input data), the signals 6323_1 and 6323_2 being the signals that have been subjected to the processing for a P1 symbol and are shown in FIG. 63 or 72. The modulated signal generating units #1 to #M output modulated signals z1 (5202_1 to 5202_M) and modulated signals z2 (5203_1 to 5203_M).

The modulated signals z1 (5202_1 to 5202_M) are input to a wireless processing unit 8301_1 shown in FIG. 83. The wireless processing unit 8301_1 performs signal processing (e.g., frequency conversion) and amplification, and outputs a modulated signal 8302_1. Thereafter, the modulated signal 8302_1 is output from an antenna 8303_1 as a radio wave. Similarly, the modulated signals z2 (5203_1 to 5203_M) are input to a wireless processing unit 8301_2. The wireless processing unit 8301_2 performs signal processing (e.g., frequency conversion) and amplification, and outputs a modulated signal 8302_2. Thereafter, the modulated signal 8302_2 is output from an antenna 8303_2 as a radio wave. As set forth above, it is permissible to use the transmission scheme described in Embodiment A1 while performing frequency conversion and amplification simultaneously on modulated signals having different frequency bandwidths.

Embodiment B1

The following describes a structural example of an application of the transmission schemes and reception schemes shown in the above embodiments and a system using the application.

FIG. 84 shows an example of the structure of a system that includes devices implementing the transmission schemes and reception schemes described in the above embodiments. The transmission schemes and reception schemes described in the above embodiments are implemented in a digital broadcasting system 8400, as shown in FIG. 84, that includes a broadcasting station 8401 and a variety of reception devices such as a television 8411, a DVD recorder 8412, a Set Top Box (STB) 8413, a computer 8420, an in-car television 8441, and a mobile phone 8430. Specifically, the broadcasting station 8401 transmits multiplexed data, in which video data, audio data, and the like are multiplexed, using the transmission schemes in the above embodiments over a predetermined broadcasting band. An antenna (for example, antennas 8560 and 8440) internal to each reception device, or provided externally and connected to the reception device, receives the signal transmitted from the broadcasting station 8401.
Each reception device obtains the multiplexed data by using the reception schemes in the above embodiments to demodulate the signal received by the antenna. In this way, the digital broadcasting system 8400 obtains the advantageous effects of the present invention described in the above embodiments.

The video data included in the multiplexed data has been coded with a moving picture coding method compliant with a standard such as Moving Picture Experts Group (MPEG)-2, MPEG-4 Advanced Video Coding (AVC), VC-1, or the like. The audio data included in the multiplexed data has been encoded with an audio coding method compliant with a standard such as Dolby Audio Coding (AC)-3, Dolby Digital Plus, Meridian Lossless Packing (MLP), Digital Theater Systems (DTS), DTS-HD, Linear Pulse-Code Modulation (PCM), or the like.

FIG. 85 is a schematic view illustrating an exemplary structure of a reception device 8500 for carrying out the reception schemes described in the above embodiments. As illustrated in FIG. 85, in one exemplary structure, the reception device 8500 may be composed of a modem portion implemented on a single LSI (or a single chip set) and a codec portion implemented on another single LSI (or another single chip set). The reception device 8500 shown in FIG. 85 corresponds to a component that is included, for example, in the television 8411, the DVD recorder 8412, the STB 8413, the computer 8420, the in-car television 8441, the mobile phone 8430, or the like illustrated in FIG. 84. The reception device 8500 includes a tuner 8501, for transforming a high-frequency signal received by an antenna 8560 into a baseband signal, and a demodulation unit 8502, for demodulating the multiplexed data from the baseband signal obtained by frequency conversion. The reception schemes described in the above embodiments are implemented in the demodulation unit 8502, thus obtaining the advantageous effects of the present invention described in the above embodiments.

The reception device 8500 includes a stream input/output unit 8503, a signal processing unit 8504, an audio output unit 8506, and a video display unit 8507. The stream input/output unit 8503 demultiplexes video and audio data from the multiplexed data obtained by the demodulation unit 8502. The signal processing unit 8504 decodes the demultiplexed video data into a video signal using an appropriate moving picture decoding method and decodes the demultiplexed audio data into an audio signal using an appropriate audio decoding scheme. The audio output unit 8506, such as a speaker, produces audio output according to the decoded audio signal. The video display unit 8507, such as a display monitor, produces video output according to the decoded video signal.

For example, the user may operate the remote control 8550 to select a channel (of a TV program or audio broadcast), so that information indicative of the selected channel is transmitted to an operation input unit 8510. In response, the reception device 8500 demodulates, from among the signals received with the antenna 8560, a signal carried on the selected channel and applies error correction decoding, so that reception data is extracted. At this time, the reception device 8500 receives control symbols included in the signal corresponding to the selected channel and containing information indicating the transmission scheme (the transmission scheme, modulation scheme, error correction scheme, and the like in the above embodiments) of the signal (exactly as described in Embodiments A1 through A4 and as shown in FIGS. 5 and 41).
With this information, the reception device 8500 is enabled to make appropriate settings for the receiving operations, demodulation scheme, scheme of error correction decoding, and the like to duly receive data included in data symbols transmitted from a broadcasting station (base station). Although the above description is directed to an example in which the user selects a channel using the remote control 8550, the same description applies to an example in which the user selects a channel using a selection key provided on the reception device 8500.

With the above structure, the user can view a broadcast program that the reception device 8500 receives by the reception schemes described in the above embodiments.

The reception device 8500 according to this embodiment may additionally include a recording unit (drive) 8508 for recording various data onto a recording medium, such as a magnetic disk, an optical disc, or a non-volatile semiconductor memory. Examples of data to be recorded by the recording unit 8508 include data contained in the multiplexed data that is obtained as a result of demodulation and error correction decoding by the demodulation unit 8502, data equivalent to such data (for example, data obtained by compressing the data), and data obtained by processing the moving pictures and/or audio. (Note here that there may be a case where no error correction decoding is applied to a signal obtained as a result of demodulation by the demodulation unit 8502 and a case where the reception device 8500 conducts further signal processing after error correction decoding. The same holds in the following description where similar wording appears.)

Note that the term “optical disc” used herein refers to a recording medium, such as a Digital Versatile Disc (DVD) or a BD (Blu-ray Disc), that is readable and writable with the use of a laser beam. Further, the term “magnetic disk” used herein refers to a recording medium, such as a floppy disk (FD, registered trademark) or a hard disk, that is writable by magnetizing a magnetic substance with magnetic flux. Still further, the term “non-volatile semiconductor memory” refers to a recording medium, such as flash memory or ferroelectric random access memory, composed of semiconductor element(s). Specific examples of non-volatile semiconductor memory include an SD card using flash memory and a flash Solid State Drive (SSD). It should be naturally appreciated that the specific types of recording media mentioned herein are merely examples, and any other types of recording media may be usable.

With the above structure, the user can record a broadcast program that the reception device 8500 receives with any of the reception schemes described in the above embodiments, and time-shift viewing of the recorded broadcast program is possible anytime after the broadcast.

In the above description of the reception device 8500, the recording unit 8508 records the multiplexed data obtained as a result of demodulation and error correction decoding by the demodulation unit 8502. However, the recording unit 8508 may record part of the data extracted from the data contained in the multiplexed data. For example, the multiplexed data obtained as a result of demodulation and error correction decoding by the demodulation unit 8502 may contain contents of a data broadcast service, in addition to video data and audio data.
In this case, new multiplexed data may be generated by multiplexing the video data and audio data, without the contents of data broadcast service, extracted from the multiplexed data demodulated by the demodulation unit 8502, and the recording unit 8508 may record the newly generated multiplexed data. Alternatively, new multiplexed data may be generated by multiplexing either of the video data and audio data contained in the multiplexed data obtained as a result of demodulation and error correction decoding by the demodulation unit 8502, and the recording unit 8508 may record the newly generated multiplexed data. The recording unit 8508 may also record the contents of data broadcast service included, as described above, in the multiplexed data.

The reception device 8500 described in this embodiment may be included in a television, a recorder (such as a DVD recorder, Blu-ray recorder, HDD recorder, SD card recorder, or the like), or a mobile telephone. In such a case, the multiplexed data obtained as a result of demodulation and error correction decoding by the demodulation unit 8502 may contain data for correcting errors (bugs) in software used to operate the television or recorder or in software used to prevent disclosure of personal or confidential information. If such data is contained, the data is installed on the television or recorder to correct the software errors. Further, if data for correcting errors (bugs) in software installed in the reception device 8500 is contained, such data is used to correct errors that the reception device 8500 may have. This arrangement ensures more stable operation of the TV, recorder, or mobile phone in which the reception device 8500 is implemented.

Note that it may be the stream input/output unit 8503 that handles the extraction of data from the whole data contained in the multiplexed data obtained as a result of demodulation and error correction decoding by the demodulation unit 8502, and the multiplexing of the extracted data. More specifically, under instructions given from a control unit not illustrated in the figures, such as a CPU, the stream input/output unit 8503 demultiplexes video data, audio data, contents of data broadcast service, etc. from the multiplexed data demodulated by the demodulation unit 8502, extracts specific pieces of data from the demultiplexed data, and multiplexes the extracted data pieces to generate new multiplexed data. The data pieces to be extracted from the demultiplexed data may be determined by the user or determined in advance for the respective types of recording media. With the above structure, the reception device 8500 is enabled to extract and record only the data necessary to view a recorded broadcast program, which is effective to reduce the size of data to be recorded.

In the above description, the recording unit 8508 records the multiplexed data obtained as a result of demodulation and error correction decoding by the demodulation unit 8502. Alternatively, however, the recording unit 8508 may record new multiplexed data generated by multiplexing video data newly yielded by encoding the original video data contained in the multiplexed data obtained as a result of demodulation and error correction decoding by the demodulation unit 8502. Here, the moving picture coding method to be employed may be different from that used to encode the original video data, so that the data size or bit rate of the new video data is smaller than that of the original video data.
Here, the moving picture coding method used to generate the new video data may be of a different standard from that used to generate the original video data. Alternatively, the same moving picture coding method may be used but with different parameters. Similarly, the recording unit 8508 may record new multiplexed data generated by multiplexing audio data newly obtained by encoding the original audio data contained in the multiplexed data obtained as a result of demodulation and error correction decoding by the demodulation unit 8502. Here, the audio coding method to be employed may be different from that used to encode the original audio data, such that the data size or bit rate of the new audio data is smaller than that of the original audio data.

The process of converting the original video or audio data contained in the multiplexed data obtained as a result of demodulation and error correction decoding by the demodulation unit 8502 into video or audio data of a different data size or bit rate is performed, for example, by the stream input/output unit 8503 and the signal processing unit 8504. More specifically, under instructions given from the control unit such as the CPU, the stream input/output unit 8503 demultiplexes video data, audio data, contents of data broadcast service, etc. from the multiplexed data obtained as a result of demodulation and error correction decoding by the demodulation unit 8502. Under instructions given from the control unit, the signal processing unit 8504 converts the demultiplexed video data and audio data respectively using a moving picture coding method and an audio coding method each different from the method that was used in the conversion applied to obtain the video and audio data. Under instructions given from the control unit, the stream input/output unit 8503 multiplexes the newly converted video data and audio data to generate new multiplexed data. Note that the signal processing unit 8504 may perform the conversion of either or both of the video and audio data according to instructions given from the control unit. In addition, the sizes of video data and audio data to be obtained by encoding may be specified by the user or determined in advance for the types of recording media.

With the above arrangement, the reception device 8500 is enabled to record video and audio data after converting the data to a size recordable on the recording medium or to a size or bit rate that matches the read or write rate of the recording unit 8508. This arrangement enables the recording unit to duly record a program, even if the size recordable on the recording medium is smaller than the data size of the multiplexed data obtained as a result of demodulation and error correction decoding by the demodulation unit 8502, or if the rate at which the recording unit records or reads is lower than the bit rate of the multiplexed data. Consequently, time-shift viewing of the recorded program by the user is possible anytime after the broadcast.

Furthermore, the reception device 8500 additionally includes a stream output interface (IF) 8509 for transmitting multiplexed data demodulated by the demodulation unit 8502 to an external device via a transport medium 8530.
In one example, the stream output IF 8509 may be a wireless communication device that transmits multiplexed data via a wireless medium (equivalent to the transport medium 8530) to an external device by modulating the multiplexed data in accordance with a wireless communication scheme compliant with a wireless communication standard such as Wi-Fi (registered trademark, a set of standards including IEEE 802.11a, IEEE 802.11b, IEEE 802.11g, and IEEE 802.11n), WiGig, WirelessHD, Bluetooth (registered trademark), ZigBee (registered trademark), or the like. The stream output IF 8509 may also be a wired communication device that transmits multiplexed data to an external device via a transmission line (equivalent to the transport medium 8530) physically connected to the stream output IF 8509, by modulating the multiplexed data using a communication scheme compliant with a wired communication standard such as Ethernet (registered trademark), Universal Serial Bus (USB), Power Line Communication (PLC), or High-Definition Multimedia Interface (HDMI). With the above structure, the user can use, on an external device, multiplexed data received by the reception device 8500 using the reception schemes described in the above embodiments. The usage of multiplexed data by the user mentioned herein includes use of the multiplexed data for real-time viewing on an external device, recording of the multiplexed data by a recording unit included in an external device, and transmission of the multiplexed data from an external device to yet another external device.

In the above description of the reception device 8500, the stream output IF 8509 outputs the multiplexed data obtained as a result of demodulation and error correction decoding by the demodulation unit 8502. However, the reception device 8500 may output data extracted from the data contained in the multiplexed data, rather than the whole data contained in the multiplexed data. For example, the multiplexed data obtained as a result of demodulation and error correction decoding by the demodulation unit 8502 may contain contents of data broadcast service, in addition to video data and audio data. In this case, the stream output IF 8509 may output multiplexed data newly generated by multiplexing video and audio data extracted from the multiplexed data obtained as a result of demodulation and error correction decoding by the demodulation unit 8502. In another example, the stream output IF 8509 may output multiplexed data newly generated by multiplexing either of the video data and audio data contained in the multiplexed data obtained as a result of demodulation and error correction decoding by the demodulation unit 8502. Note that it may be the stream input/output unit 8503 that handles the extraction of data from the whole data contained in the multiplexed data obtained as a result of demodulation and error correction decoding by the demodulation unit 8502, and the multiplexing of the extracted data. More specifically, under instructions given from a control unit not illustrated in the figures, such as a Central Processing Unit (CPU), the stream input/output unit 8503 demultiplexes video data, audio data, contents of data broadcast service, etc. from the multiplexed data demodulated by the demodulation unit 8502, extracts specific pieces of data from the demultiplexed data, and multiplexes the extracted data pieces to generate new multiplexed data (a simplified sketch of this flow is given below).
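By way of illustration only, the following Python fragment is a minimal sketch of the demultiplex/extract/remultiplex flow attributed above to the stream input/output unit 8503. It is not the patent's implementation: the stream names, the `remultiplex` helper, and the dictionary representation of multiplexed data are all hypothetical simplifications (real multiplexed data would consist of TS packets).

```python
# Hypothetical sketch of the demultiplex -> extract -> remultiplex flow.
# Stream names and the function name are illustrative assumptions only.
from typing import Dict, List

def remultiplex(mux: Dict[str, List[bytes]], keep: List[str]) -> Dict[str, List[bytes]]:
    """Demultiplex the input, keep only the selected elementary streams
    (e.g. video and audio), and rebuild new multiplexed data."""
    return {kind: packets for kind, packets in mux.items() if kind in keep}

# Example: forward only video and audio, dropping data-broadcast contents.
received = {"video": [b"..."], "audio": [b"..."], "data_broadcast": [b"..."]}
new_mux = remultiplex(received, keep=["video", "audio"])
assert "data_broadcast" not in new_mux
```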
The data pieces to be extracted from the demultiplexed data may be determined by the user or determined in advance for the respective types of the stream output IF 8509. With the above structure, the reception device 8500 is enabled to extract and output only the data necessary for an external device, which is effective to reduce the communication bandwidth used to output the multiplexed data.

In the above description, the stream output IF 8509 outputs the multiplexed data obtained as a result of demodulation and error correction decoding by the demodulation unit 8502. Alternatively, however, the stream output IF 8509 may output new multiplexed data generated by multiplexing video data newly yielded by encoding the original video data contained in the multiplexed data obtained as a result of demodulation and error correction decoding by the demodulation unit 8502. The new video data is encoded with a moving picture coding method different from that used to encode the original video data, so that the data size or bit rate of the new video data is smaller than that of the original video data. Here, the moving picture coding method used to generate the new video data may be of a different standard from that used to generate the original video data. Alternatively, the same moving picture coding method may be used but with different parameters. Similarly, the stream output IF 8509 may output new multiplexed data generated by multiplexing audio data newly obtained by encoding the original audio data contained in the multiplexed data obtained as a result of demodulation and error correction decoding by the demodulation unit 8502. The new audio data is encoded with an audio coding method different from that used to encode the original audio data, such that the data size or bit rate of the new audio data is smaller than that of the original audio data.

The process of converting the original video or audio data contained in the multiplexed data obtained as a result of demodulation and error correction decoding by the demodulation unit 8502 into video or audio data of a different data size or bit rate is performed, for example, by the stream input/output unit 8503 and the signal processing unit 8504. More specifically, under instructions given from the control unit, the stream input/output unit 8503 demultiplexes video data, audio data, contents of data broadcast service, etc. from the multiplexed data obtained as a result of demodulation and error correction decoding by the demodulation unit 8502. Under instructions given from the control unit, the signal processing unit 8504 converts the demultiplexed video data and audio data respectively using a moving picture coding method and an audio coding method each different from the method that was used in the conversion applied to obtain the video and audio data. Under instructions given from the control unit, the stream input/output unit 8503 multiplexes the newly converted video data and audio data to generate new multiplexed data. Note that the signal processing unit 8504 may perform the conversion of either or both of the video and audio data according to instructions given from the control unit. In addition, the sizes of video data and audio data to be obtained by conversion may be specified by the user or determined in advance for the types of the stream output IF 8509. With the above structure, the reception device 8500 is enabled to output video and audio data after converting the data to a bit rate that matches the transfer rate between the reception device 8500 and the external device.
This arrangement ensures that even if the multiplexed data obtained as a result of demodulation and error correction decoding by the demodulation unit 8502 is higher in bit rate than the data transfer rate to an external device, the stream output IF duly outputs new multiplexed data at an appropriate bit rate to the external device. Consequently, the user can use the new multiplexed data on another communication device.

Furthermore, the reception device 8500 also includes an audio and visual output interface (hereinafter, AV output IF) 8511 that outputs video and audio signals decoded by the signal processing unit 8504 to an external device via an external transport medium. In one example, the AV output IF 8511 may be a wireless communication device that transmits modulated video and audio signals via a wireless medium to an external device, using a wireless communication scheme compliant with a wireless communication standard such as Wi-Fi (registered trademark), which is a set of standards including IEEE 802.11a, IEEE 802.11b, IEEE 802.11g, and IEEE 802.11n, WiGig, WirelessHD, Bluetooth (registered trademark), ZigBee (registered trademark), or the like. In another example, the AV output IF 8511 may be a wired communication device that transmits modulated video and audio signals to an external device via a transmission line physically connected to the AV output IF 8511, using a communication scheme compliant with a wired communication standard such as Ethernet (registered trademark), USB, PLC, HDMI, or the like. In yet another example, the AV output IF 8511 may be a terminal for connecting a cable to output the video and audio signals in analog form. With the above structure, the user is allowed to use, on an external device, the video and audio signals decoded by the signal processing unit 8504.

Furthermore, the reception device 8500 additionally includes an operation input unit 8510 for receiving user operations. According to control signals indicative of user operations input to the operation input unit 8510, the reception device 8500 performs various operations, such as switching the power ON or OFF, switching the reception channel, switching the display of subtitle text ON or OFF, switching the display of subtitle text to another language, changing the volume of audio output of the audio output unit 8506, and changing the settings of channels that can be received.

Additionally, the reception device 8500 may have a function of displaying the antenna level indicating the quality of the signal being received by the reception device 8500. Note that the antenna level is an indicator of the reception quality calculated based on, for example, the Received Signal Strength Indicator (RSSI), received field strength, carrier-to-noise power ratio (C/N), Bit Error Rate (BER), packet error rate, frame error rate, and channel state information of the signal received by the reception device 8500. In other words, the antenna level is a signal indicating the level and quality of the received signal. In this case, the demodulation unit 8502 also includes a reception quality measuring unit for measuring the received signal characteristics, such as RSSI, received field strength, C/N, BER, packet error rate, frame error rate, and channel state information. In response to a user operation, the reception device 8500 displays the antenna level (i.e., the signal indicating the level and quality of the received signal) on the video display unit 8507 in a manner identifiable by the user.
The antenna level (i.e., the signal indicating the level and quality of the received signal) may be numerically displayed using a number that represents the RSSI, received field strength, C/N, BER, packet error rate, frame error rate, channel state information, or the like. Alternatively, the antenna level may be displayed using an image representing the RSSI, received field strength, C/N, BER, packet error rate, frame error rate, channel state information, or the like. Furthermore, the reception device 8500 may display a plurality of antenna levels (signals indicating the level and quality of the received signal) calculated for each of the plurality of streams s1, s2, . . . received and separated using the reception schemes shown in the above embodiments, or one antenna level (signal indicating the level and quality of the received signal) calculated from the plurality of streams s1, s2, . . . . When video data and audio data composing a program are transmitted hierarchically, the reception device 8500 may also display the signal level (signal indicating the level and quality of the received signal) for each hierarchical level. With the above structure, users are able to grasp the antenna level (signal indicating the level and quality of the received signal) numerically or visually during reception with the reception schemes shown in the above embodiments.

Although the reception device 8500 is described above as having the audio output unit 8506, video display unit 8507, recording unit 8508, stream output IF 8509, and AV output IF 8511, it is not necessary for the reception device 8500 to have all of these units. As long as the reception device 8500 is provided with at least one of the units described above, the user is enabled to use the multiplexed data obtained as a result of demodulation and error correction decoding by the demodulation unit 8502. The reception device 8500 may therefore include any combination of the above-described units depending on its intended use.

(Multiplexed Data)

The following is a detailed description of an exemplary structure of multiplexed data. The data structure typically used in broadcasting is an MPEG2 transport stream (TS), and therefore the following description is given by way of an example related to MPEG2-TS. It should be naturally appreciated, however, that the data structure of multiplexed data transmitted by the transmission and reception schemes described in the above embodiments is not limited to MPEG2-TS, and the advantageous effects of the above embodiments are achieved even if any other data structure is employed.

FIG. 86 is a view illustrating an exemplary multiplexed data structure. As illustrated in FIG. 86, multiplexed data is obtained by multiplexing one or more elementary streams, which are elements constituting a broadcast program (a program, or an event which is part of a program) currently provided through respective services. Examples of elementary streams include a video stream, an audio stream, a presentation graphics (PG) stream, and an interactive graphics (IG) stream. In the case where a broadcast program carried by multiplexed data is a movie, the video streams represent the main video and sub video of the movie, the audio streams represent the main audio of the movie and sub audio to be mixed with the main audio, and the PG stream represents the subtitles of the movie.
The term “main video” used herein refers to video images normally presented on a screen, whereas “sub video” refers to video images (for example, images of text explaining the outline of the movie) to be presented in a small window inserted within the video images. The IG stream represents an interactive display constituted by presenting GUI components on a screen. Each stream contained in multiplexed data is identified by an identifier called a PID uniquely assigned to the stream. For example, the video stream carrying the main video images of a movie is assigned “0x1011”, each audio stream is assigned a different one of “0x1100” to “0x111F”, each PG stream is assigned a different one of “0x1200” to “0x121F”, each IG stream is assigned a different one of “0x1400” to “0x141F”, each video stream carrying sub video images of the movie is assigned a different one of “0x1B00” to “0x1B1F”, and each audio stream of sub audio to be mixed with the main audio is assigned a different one of “0x1A00” to “0x1A1F”.

FIG. 87 is a schematic view illustrating an example of how the respective streams are multiplexed into multiplexed data. First, a video stream 8701 composed of a plurality of video frames is converted into a PES packet sequence 8702 and then into a TS packet sequence 8703, whereas an audio stream 8704 composed of a plurality of audio frames is converted into a PES packet sequence 8705 and then into a TS packet sequence 8706. Similarly, the PG stream 8711 is first converted into a PES packet sequence 8712 and then into a TS packet sequence 8713, whereas the IG stream 8714 is converted into a PES packet sequence 8715 and then into a TS packet sequence 8716. The multiplexed data 8717 is obtained by multiplexing the TS packet sequences (8703, 8706, 8713, and 8716) into one stream.

FIG. 88 illustrates the details of how a video stream is divided into a sequence of PES packets. In FIG. 88, the first tier shows a sequence of video frames included in a video stream. The second tier shows a sequence of PES packets. As indicated by the arrows yy1, yy2, yy3, and yy4 shown in FIG. 88, a plurality of video presentation units, namely I pictures, B pictures, and P pictures, of a video stream are separately stored into the payloads of PES packets on a picture-by-picture basis. Each PES packet has a PES header, and the PES header stores a Presentation Time-Stamp (PTS) and a Decoding Time-Stamp (DTS) indicating the display time and decoding time of a corresponding picture.

FIG. 89 illustrates the format of a TS packet to be eventually written as multiplexed data. The TS packet is a fixed-length packet of 188 bytes and has a 4-byte TS header containing such information as the PID identifying the stream, and a 184-byte TS payload carrying the actual data. The PES packets described above are divided to be stored into the TS payloads of TS packets. In the case of BD-ROM, each TS packet is attached with a 4-byte TP_Extra_Header to build a 192-byte source packet, which is to be written as multiplexed data. The TP_Extra_Header contains such information as an Arrival_Time_Stamp (ATS). The ATS indicates a time for starting transfer of the TS packet to the PID filter of a decoder. As shown in the lowest tier in FIG. 89, the multiplexed data includes a sequence of source packets each bearing a source packet number (SPN), which is a number incrementing sequentially from the start of the multiplexed data.
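The packet layout just described can be made concrete with a short parsing sketch. The following Python fragment is illustrative only: it assumes the standard MPEG-2 TS header layout and the BD-ROM source packet layout described above (a 4-byte TP_Extra_Header carrying the ATS in its lower 30 bits, followed by a 188-byte TS packet), ignores the optional adaptation field, and omits error handling.

```python
# Sketch: parse one 192-byte BD-ROM source packet into (ATS, PID, payload).
# Assumes the layouts described above; the adaptation field is ignored,
# so the payload shown is simply the 184 bytes following the TS header.

def parse_source_packet(sp: bytes):
    assert len(sp) == 192, "source packet = 4-byte TP_Extra_Header + 188-byte TS packet"
    tp_extra = int.from_bytes(sp[0:4], "big")
    ats = tp_extra & 0x3FFFFFFF            # Arrival_Time_Stamp: lower 30 bits
    ts = sp[4:]
    assert ts[0] == 0x47                   # TS sync byte
    pid = ((ts[1] & 0x1F) << 8) | ts[2]    # 13-bit PID from the 4-byte TS header
    return ats, pid, ts[4:]                # 184-byte TS payload

# Example: per the text above, PID 0x1011 would identify the main video stream.
```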
In addition to the TS packets storing streams such as video, audio, and PG streams, multiplexed data also includes TS packets storing a Program Association Table (PAT), a Program Map Table (PMT), and a Program Clock Reference (PCR). The PAT in multiplexed data indicates the PID of a PMT used in the multiplexed data, and the PID of the PAT itself is “0”. The PMT includes the PIDs identifying the respective streams, such as video, audio, and subtitles, contained in the multiplexed data and the attribute information (frame rate, aspect ratio, and the like) of the streams identified by the respective PIDs. In addition, the PMT includes various types of descriptors relating to the multiplexed data. One of such descriptors may be copy control information indicating whether or not copying of the multiplexed data is permitted. The PCR includes information for synchronizing the Arrival Time Clock (ATC), which is the time axis of the ATS, with the System Time Clock (STC), which is the time axis of the PTS and DTS. More specifically, the PCR packet includes information indicating an STC time corresponding to the ATS at which the PCR packet is to be transferred.

FIG. 90 is a view illustrating the data structure of the PMT in detail. The PMT starts with a PMT header indicating, for example, the length of the data contained in the PMT. Following the PMT header, descriptors relating to the multiplexed data are disposed. One example of a descriptor included in the PMT is the copy control information described above. Following the descriptors, pieces of stream information relating to the respective streams included in the multiplexed data are arranged. Each piece of stream information is composed of stream descriptors indicating a stream type identifying the compression codec employed for a corresponding stream, the PID of the stream, and the attribute information (frame rate, aspect ratio, and the like) of the stream. The PMT includes as many stream descriptors as the number of streams included in the multiplexed data.

When recorded onto a recording medium, for example, the multiplexed data is recorded along with a multiplexed data information file. FIG. 91 is a view illustrating the structure of the multiplexed data information file. As illustrated in FIG. 91, the multiplexed data information file is management information for the corresponding multiplexed data and is composed of multiplexed data information, stream attribute information, and an entry map. Note that multiplexed data information files and multiplexed data are in a one-to-one relationship. As illustrated in FIG. 91, the multiplexed data information is composed of a system rate, a playback start time, and a playback end time. The system rate indicates the maximum transfer rate of the multiplexed data to the PID filter of a system target decoder, which is described later. The multiplexed data includes ATSs at intervals set so as not to exceed the system rate. The playback start time is set to the time specified by the PTS of the first video frame in the multiplexed data, whereas the playback end time is set to the time calculated by adding the playback period of one frame to the PTS of the last video frame in the multiplexed data.

FIG. 92 illustrates the structure of the stream attribute information contained in the multiplexed data information file. As illustrated in FIG. 92, the stream attribute information includes pieces of attribute information of the respective streams included in the multiplexed data, and each piece of attribute information is registered with a corresponding PID.
That is, different pieces of attribute information are provided for different streams, namely the video stream, the audio stream, the PG stream, and the IG stream. The video stream attribute information indicates the compression codec employed to compress the video stream, the resolutions of the individual pictures constituting the video stream, the aspect ratio, the frame rate, and so on. The audio stream attribute information indicates the compression codec employed to compress the audio stream, the number of channels included in the audio stream, the language of the audio stream, the sampling frequency, and so on. These pieces of information are used to initialize a decoder before playback by a player.

In the present embodiment, from among the pieces of information included in the multiplexed data, the stream type included in the PMT is used. In the case where the multiplexed data is recorded on a recording medium, the video stream attribute information included in the multiplexed data information is used. More specifically, the moving picture coding method and device described in any of the above embodiments may be modified to additionally include a step or unit of setting a specific piece of information in the stream type included in the PMT or in the video stream attribute information. The specific piece of information is for indicating that the video data is generated by the moving picture coding method and device described in the embodiment. With the above structure, video data generated by the moving picture coding method and device described in any of the above embodiments is distinguishable from video data compliant with other standards.

FIG. 93 illustrates an exemplary structure of a video and audio output device 9300 that includes a reception device 9304 for receiving a modulated signal carrying video and audio data or data for data broadcasting from a broadcasting station (base station). Note that the structure of the reception device 9304 corresponds to the reception device 8500 illustrated in FIG. 85. The video and audio output device 9300 is installed with an Operating System (OS), for example, and also with a communication device 9306 (a communication device for a wireless Local Area Network (LAN) or Ethernet, for example) for establishing an Internet connection. With this structure, hypertext (World Wide Web (WWW)) 9303 provided over the Internet can be displayed on a display area 9301 simultaneously with images 9302 reproduced on the display area 9301 from the video and audio data or the data provided by data broadcasting. By operating a remote control (which may be a mobile phone or keyboard) 9307, the user can make a selection on the images 9302 reproduced from the data provided by data broadcasting or on the hypertext 9303 provided over the Internet to change the operation of the video and audio output device 9300. For example, by operating the remote control to make a selection on the hypertext 9303 provided over the Internet, the user can change the WWW site currently displayed to another site. Alternatively, by operating the remote control 9307 to make a selection on the images 9302 reproduced from the video or audio data or the data provided by data broadcasting, the user can transmit information indicating a selected channel (such as a selected broadcast program or audio broadcasting).
In response, an interface (IF) 9305 acquires the information transmitted from the remote control, so that the reception device 9304 operates to obtain reception data by demodulation and error correction decoding of the signal carried on the selected channel. At this time, the reception device 9304 receives control symbols included in the signal corresponding to the selected channel and containing information indicating the transmission scheme of the signal (exactly as described in Embodiments A1 through A4 and as shown in FIGS. 5 and 41). With this information, the reception device 9304 is enabled to make appropriate settings for the receiving operations, demodulation scheme, scheme of error correction decoding, and the like to duly receive data included in data symbols transmitted from a broadcasting station (base station). Although the above description is directed to an example in which the user selects a channel using the remote control 9307, the same description applies to an example in which the user selects a channel using a selection key provided on the video and audio output device 9300.

In addition, the video and audio output device 9300 may be operated via the Internet. For example, a terminal connected to the Internet may be used to make settings on the video and audio output device 9300 for pre-programmed recording (storing). (The video and audio output device 9300 would therefore have the recording unit 8508 illustrated in FIG. 85.) In this case, before starting the pre-programmed recording, the video and audio output device 9300 selects the channel, so that the reception device 9304 operates to obtain reception data by demodulation and error correction decoding of the signal carried on the selected channel. At this time, the reception device 9304 receives control symbols included in the signal corresponding to the selected channel and containing information indicating the transmission scheme (the transmission scheme, modulation scheme, error correction scheme, and the like in the above embodiments) of the signal (exactly as described in Embodiments A1 through A4 and as shown in FIGS. 5 and 41). With this information, the reception device 9304 is enabled to make appropriate settings for the receiving operations, demodulation scheme, scheme of error correction decoding, and the like to duly receive data included in data symbols transmitted from a broadcasting station (base station).

Embodiment C1

Embodiment 2 describes a precoding scheme of regularly hopping between precoding matrices, and (Example #1) and (Example #2) as schemes of setting precoding matrices in consideration of poor reception points. The present embodiment is directed to a generalization of (Example #1) and (Example #2) described in Embodiment 2. With respect to a scheme of regularly hopping between precoding matrices with an N-slot period (cycle), the precoding matrix prepared for an N-slot period (cycle) is represented as follows.

Math 566 (Equation #1):
$$F[i] = \frac{1}{\sqrt{\alpha^2+1}} \begin{pmatrix} e^{j\theta_{11}(i)} & \alpha \times e^{j(\theta_{11}(i)+\lambda)} \\ \alpha \times e^{j\theta_{21}(i)} & e^{j(\theta_{21}(i)+\lambda+\delta)} \end{pmatrix}$$

In this case, i = 0, 1, 2, . . . , N−2, N−1 (let α > 0). In the present embodiment, a unitary matrix is used, and the precoding matrix in Equation #1 is represented as follows.

Math 567 (Equation #2):
$$F[i] = \frac{1}{\sqrt{\alpha^2+1}} \begin{pmatrix} e^{j\theta_{11}(i)} & \alpha \times e^{j(\theta_{11}(i)+\lambda)} \\ \alpha \times e^{j\theta_{21}(i)} & e^{j(\theta_{21}(i)+\lambda+\pi)} \end{pmatrix}$$

In this case, i = 0, 1, 2, . . . , N−2, N−1 (let α > 0). (In order to simplify the mapping performed by the transmission device and the reception device, it is preferable that λ be one of the following fixed values: 0 radians, π/2 radians, π radians, and (3π)/2 radians.)
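To make the structure of Equation #2 concrete, the following numpy sketch (illustrative only; the helper name and the example phase functions are our assumptions, not part of the original description) builds F[i] from given phase functions θ11(i) and θ21(i), and confirms numerically that the matrix is unitary for any α > 0, which is the property this embodiment relies on.

```python
# Sketch of the Equation #2 precoding matrix family (the unitary case,
# delta = pi). Function name and phase choices are illustrative only.
import numpy as np

def F(i, theta11, theta21, alpha=1.0, lam=0.0):
    c = 1.0 / np.sqrt(alpha ** 2 + 1.0)
    return c * np.array([
        [np.exp(1j * theta11(i)),         alpha * np.exp(1j * (theta11(i) + lam))],
        [alpha * np.exp(1j * theta21(i)), np.exp(1j * (theta21(i) + lam + np.pi))],
    ])

# Unitarity check for one slot: the rows are orthonormal because the phase
# offsets of the two columns differ by lam and lam + pi respectively.
M = F(3, theta11=lambda i: 0.0, theta21=lambda i: np.pi * i / 8, alpha=2.0)
assert np.allclose(M @ M.conj().T, np.eye(2))
```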
Embodiment 2 is specifically implemented under the assumption that α = 1. In this case, Equation #2 is represented as follows.

Math 568 (Equation #3):
$$F[i] = \frac{1}{\sqrt{2}} \begin{pmatrix} e^{j\theta_{11}(i)} & e^{j(\theta_{11}(i)+\lambda)} \\ e^{j\theta_{21}(i)} & e^{j(\theta_{21}(i)+\lambda+\pi)} \end{pmatrix}$$

In order to distribute the poor reception points evenly with regard to phase in the complex plane, as described in Embodiment 2, Condition #101 or #102 is provided for Equation #1 or #2.

Math 569 (Condition #101):
$$\frac{e^{j(\theta_{11}(x+1)-\theta_{21}(x+1))}}{e^{j(\theta_{11}(x)-\theta_{21}(x))}} = e^{j\pi/N} \quad \text{for } \forall x \ (x = 0, 1, 2, \ldots, N-2)$$

Math 570 (Condition #102):
$$\frac{e^{j(\theta_{11}(x+1)-\theta_{21}(x+1))}}{e^{j(\theta_{11}(x)-\theta_{21}(x))}} = e^{-j\pi/N} \quad \text{for } \forall x \ (x = 0, 1, 2, \ldots, N-2)$$

Especially, when θ11(i) is a fixed value independent of i, Condition #103 or #104 may be provided.

Math 571 (Condition #103):
$$\frac{e^{j\theta_{21}(x+1)}}{e^{j\theta_{21}(x)}} = e^{j\pi/N} \quad \text{for } \forall x \ (x = 0, 1, 2, \ldots, N-2)$$

Math 572 (Condition #104):
$$\frac{e^{j\theta_{21}(x+1)}}{e^{j\theta_{21}(x)}} = e^{-j\pi/N} \quad \text{for } \forall x \ (x = 0, 1, 2, \ldots, N-2)$$

Similarly, when θ21(i) is a fixed value independent of i, Condition #105 or #106 may be provided.

Math 573 (Condition #105):
$$\frac{e^{j\theta_{11}(x+1)}}{e^{j\theta_{11}(x)}} = e^{j\pi/N} \quad \text{for } \forall x \ (x = 0, 1, 2, \ldots, N-2)$$

Math 574 (Condition #106):
$$\frac{e^{j\theta_{11}(x+1)}}{e^{j\theta_{11}(x)}} = e^{-j\pi/N} \quad \text{for } \forall x \ (x = 0, 1, 2, \ldots, N-2)$$

The following is an example of a precoding matrix using the above-mentioned unitary matrix for the scheme of regularly hopping between precoding matrices with an N-slot period (cycle). A precoding matrix that is based on Equation #2 and prepared for an N-slot period (cycle) is represented as follows (in Equation #2, λ is 0 radians and θ11(i) is 0 radians).

Math 575 (Equation #10):
$$F[i] = \frac{1}{\sqrt{\alpha^2+1}} \begin{pmatrix} e^{j0} & \alpha \times e^{j0} \\ \alpha \times e^{j\theta_{21}(i)} & e^{j(\theta_{21}(i)+\pi)} \end{pmatrix}$$

In this case, i = 0, 1, 2, . . . , N−2, N−1 (let α > 0), and Condition #103 or #104 is satisfied. In addition, θ21(i = 0) may be set to a certain value, such as 0 radians.

With respect to a scheme of regularly hopping between precoding matrices with an N-slot period (cycle), another example of a precoding matrix prepared for an N-slot period (cycle) is represented as follows (in Equation #2, λ is π radians and θ11(i) is 0 radians).

Math 576 (Equation #11):
$$F[i] = \frac{1}{\sqrt{\alpha^2+1}} \begin{pmatrix} e^{j0} & \alpha \times e^{j\pi} \\ \alpha \times e^{j\theta_{21}(i)} & e^{j\theta_{21}(i)} \end{pmatrix}$$

In this case, i = 0, 1, 2, . . . , N−2, N−1 (let α > 0), and Condition #103 or #104 is satisfied. In addition, θ21(i = 0) may be set to a certain value, such as 0 radians.

As yet another example, a precoding matrix prepared for an N-slot period (cycle) is represented as follows (in Equation #2, λ is 0 radians and θ21(i) is 0 radians).

Math 577 (Equation #12):
$$F[i] = \frac{1}{\sqrt{\alpha^2+1}} \begin{pmatrix} e^{j\theta_{11}(i)} & \alpha \times e^{j\theta_{11}(i)} \\ \alpha \times e^{j0} & e^{j\pi} \end{pmatrix}$$

In this case, i = 0, 1, 2, . . . , N−2, N−1 (let α > 0), and Condition #105 or #106 is satisfied. In addition, θ11(i = 0) may be set to a certain value, such as 0 radians.

As yet another example, a precoding matrix prepared for an N-slot period (cycle) is represented as follows (in Equation #2, λ is π radians and θ21(i) is 0 radians).

Math 578 (Equation #13):
$$F[i] = \frac{1}{\sqrt{\alpha^2+1}} \begin{pmatrix} e^{j\theta_{11}(i)} & \alpha \times e^{j(\theta_{11}(i)+\pi)} \\ \alpha \times e^{j0} & e^{j0} \end{pmatrix}$$

In this case, i = 0, 1, 2, . . . , N−2, N−1 (let α > 0), and Condition #105 or #106 is satisfied. In addition, θ11(i = 0) may be set to a certain value, such as 0 radians.

In view of the examples of Embodiment 2, yet another example of a precoding matrix prepared for an N-slot period (cycle) is represented as follows (in Equation #3, λ is 0 radians and θ11(i) is 0 radians).

Math 579 (Equation #14):
$$F[i] = \frac{1}{\sqrt{2}} \begin{pmatrix} e^{j0} & e^{j0} \\ e^{j\theta_{21}(i)} & e^{j(\theta_{21}(i)+\pi)} \end{pmatrix}$$

In this case, i = 0, 1, 2, . . . , N−2, N−1, and Condition #103 or #104 is satisfied. In addition, θ21(i = 0) may be set to a certain value, such as 0 radians.
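As a numeric illustration of how Condition #103 constrains the phase sequence, the sketch below (ours, not part of the original description) instantiates the Equation #14 family with the choice θ21(i) = iπ/N and checks that the ratio of successive e^{jθ21} values equals e^{jπ/N}, as the condition requires.

```python
# Sketch: one concrete Equation #14 family (alpha = 1, lambda = 0,
# theta11(i) = 0) with theta21(i) = i*pi/N, which satisfies Condition #103.
import numpy as np

def equation14_family(N):
    mats = []
    for i in range(N):
        t = i * np.pi / N                      # theta21(i)
        mats.append((1 / np.sqrt(2)) * np.array([
            [1.0, 1.0],
            [np.exp(1j * t), np.exp(1j * (t + np.pi))],
        ]))
    return mats

family = equation14_family(8)
ratios = [family[x + 1][1, 0] / family[x][1, 0] for x in range(7)]
assert np.allclose(ratios, np.exp(1j * np.pi / 8))   # Condition #103 holds
```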
With respect to a scheme of regularly hopping between precoding matrices with an N-slot period (cycle), yet another example of a precoding matrix prepared for an N-slot period (cycle) is represented as follows (in Equation #3, λ is π radians and θ11(i) is 0 radians).

Math 580 (Equation #15):
$$F[i] = \frac{1}{\sqrt{2}} \begin{pmatrix} e^{j0} & e^{j\pi} \\ e^{j\theta_{21}(i)} & e^{j\theta_{21}(i)} \end{pmatrix}$$

In this case, i = 0, 1, 2, . . . , N−2, N−1, and Condition #103 or #104 is satisfied. In addition, θ21(i = 0) may be set to a certain value, such as 0 radians.

As yet another example, a precoding matrix prepared for an N-slot period (cycle) is represented as follows (in Equation #3, λ is 0 radians and θ21(i) is 0 radians).

Math 581 (Equation #16):
$$F[i] = \frac{1}{\sqrt{2}} \begin{pmatrix} e^{j\theta_{11}(i)} & e^{j\theta_{11}(i)} \\ e^{j0} & e^{j\pi} \end{pmatrix}$$

In this case, i = 0, 1, 2, . . . , N−2, N−1, and Condition #105 or #106 is satisfied. In addition, θ11(i = 0) may be set to a certain value, such as 0 radians.

As yet another example, a precoding matrix prepared for an N-slot period (cycle) is represented as follows (in Equation #3, λ is π radians and θ21(i) is 0 radians; since this example is based on Equation #3, the coefficient is 1/√2).

Math 582 (Equation #17):
$$F[i] = \frac{1}{\sqrt{2}} \begin{pmatrix} e^{j\theta_{11}(i)} & e^{j(\theta_{11}(i)+\pi)} \\ e^{j0} & e^{j0} \end{pmatrix}$$

In this case, i = 0, 1, 2, . . . , N−2, N−1, and Condition #105 or #106 is satisfied. In addition, θ11(i = 0) may be set to a certain value, such as 0 radians.

As compared to the precoding scheme of regularly hopping between precoding matrices described in Embodiment 9, the precoding scheme pertaining to the present embodiment has a probability of achieving high data reception quality even if the length of the period (cycle) pertaining to the present embodiment is reduced to approximately half of the length of the period (cycle) pertaining to Embodiment 9. Therefore, the precoding scheme pertaining to the present embodiment can reduce the number of precoding matrices to be prepared, which brings about the advantageous effect of reducing the scale of circuits for the transmission device and the reception device. The above advantageous effect can be enhanced with a transmission device that is provided with one encoder and distributes encoded data as shown in FIG. 4, or with a reception device corresponding to such a transmission device. A preferable example of α appearing in the above examples can be obtained by using any of the schemes described in Embodiment 18. However, α is not limited to being obtained in this way.

In the present embodiment, the scheme of structuring N different precoding matrices for a precoding hopping scheme with an N-slot time period (cycle) has been described. In this case, the N different precoding matrices F[0], F[1], F[2], . . . , F[N−2], F[N−1] are prepared. In the case of a single-carrier transmission scheme, the order F[0], F[1], F[2], . . . , F[N−2], F[N−1] is maintained in the time domain (or the frequency domain), as sketched below. The present invention is not, however, limited in this way, and the N different precoding matrices F[0], F[1], F[2], . . . , F[N−2], F[N−1] generated in the present embodiment may be adapted to a multi-carrier transmission scheme such as an OFDM transmission scheme or the like. As in Embodiment 1, as a scheme of adaptation in this case, the precoding weights may be changed by arranging symbols in the frequency domain and in the frequency-time domain. Note that a precoding hopping scheme with an N-slot period (cycle) has been described, but the same advantageous effects may be obtained by randomly using N different precoding matrices. In other words, the N different precoding matrices do not necessarily need to be used in a regular period (cycle).
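The following sketch shows one way to apply the family cyclically in the time domain, as referenced above. It is an illustrative reading of the scheme, under the assumption (ours) that slot k simply uses F[k mod N]; s1 and s2 denote the already-mapped baseband symbols of the two streams.

```python
# Sketch: apply an N-matrix hopping family cyclically in the time domain.
# Assumes (our reading) that slot k uses F[k mod N].
import numpy as np

def precode(s1, s2, family):
    out = np.empty((len(s1), 2), dtype=complex)
    for k, (a, b) in enumerate(zip(s1, s2)):
        out[k] = family[k % len(family)] @ np.array([a, b])
    return out  # out[:, 0] -> z1, out[:, 1] -> z2

# Tiny demo with N = 4 and the Equation #14 form (theta21(i) = i*pi/N).
N = 4
fam = [(1 / np.sqrt(2)) * np.array([[1, 1],
                                    [np.exp(1j * i * np.pi / N),
                                     -np.exp(1j * i * np.pi / N)]]) for i in range(N)]
qpsk = np.array([1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j]) / np.sqrt(2)
z = precode(np.tile(qpsk, 2), np.tile(qpsk[::-1], 2), fam)
```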
Furthermore, in the precoding matrix hopping scheme over an H-slot period (cycle) (H being a natural number larger than the number of slots N in the period (cycle) of the above scheme of regularly hopping between precoding matrices), when the N different precoding matrices of the present embodiment are included, the probability of excellent reception quality increases.

Embodiment C2

The following describes a precoding scheme of regularly hopping between precoding matrices that is different from Embodiment C1, where Embodiment 9 is incorporated—i.e., a scheme of implementing Embodiment C1 in the case where the number of slots in a period (cycle) is an odd number in Embodiment 9. With respect to a scheme of regularly hopping between precoding matrices with an N-slot period (cycle), the precoding matrix prepared for an N-slot period (cycle) is represented as follows.

Math 583 (Equation #18):
$$F[i] = \frac{1}{\sqrt{\alpha^2+1}} \begin{pmatrix} e^{j\theta_{11}(i)} & \alpha \times e^{j(\theta_{11}(i)+\lambda)} \\ \alpha \times e^{j\theta_{21}(i)} & e^{j(\theta_{21}(i)+\lambda+\delta)} \end{pmatrix}$$

In this case, i = 0, 1, 2, . . . , N−2, N−1 (let α > 0). In the present embodiment, a unitary matrix is used, and the precoding matrix in Equation #18 is represented as follows.

Math 584 (Equation #19):
$$F[i] = \frac{1}{\sqrt{\alpha^2+1}} \begin{pmatrix} e^{j\theta_{11}(i)} & \alpha \times e^{j(\theta_{11}(i)+\lambda)} \\ \alpha \times e^{j\theta_{21}(i)} & e^{j(\theta_{21}(i)+\lambda+\pi)} \end{pmatrix}$$

In this case, i = 0, 1, 2, . . . , N−2, N−1 (let α > 0). (In order to simplify the mapping performed by the transmission device and the reception device, it is preferable that λ be one of the following fixed values: 0 radians, π/2 radians, π radians, and (3π)/2 radians.) Specifically, it is assumed here that α = 1. Here, Equation #19 is represented as follows.

Math 585 (Equation #20):
$$F[i] = \frac{1}{\sqrt{2}} \begin{pmatrix} e^{j\theta_{11}(i)} & e^{j(\theta_{11}(i)+\lambda)} \\ e^{j\theta_{21}(i)} & e^{j(\theta_{21}(i)+\lambda+\pi)} \end{pmatrix}$$

The precoding matrices used in the precoding scheme of regularly hopping between precoding matrices pertaining to the present embodiment are expressed in the above manner. The present embodiment is characterized in that the number of slots in an N-slot period (cycle) for the precoding scheme of regularly hopping between precoding matrices pertaining to the present embodiment is an odd number, i.e., expressed as N = 2n + 1. To realize an N-slot period (cycle) where N = 2n + 1, the number of different precoding matrices to be prepared is n + 1 (the description of these different precoding matrices will be given later). From among the n + 1 different precoding matrices, each of n precoding matrices is used twice in one period (cycle), and the remaining one precoding matrix is used once in one period (cycle), which results in an N-slot period (cycle) where N = 2n + 1.

The following is a detailed description of these precoding matrices. Assume that the n + 1 different precoding matrices, which are necessary to implement the precoding scheme of regularly hopping between precoding matrices with an N-slot period (cycle) where N = 2n + 1, are F[0], F[1], . . . , F[i], . . . , F[n−1], F[n] (i = 0, 1, 2, . . . , n−2, n−1, n). Here, the n + 1 different precoding matrices F[0], F[1], . . . , F[i], . . . , F[n−1], F[n] based on Equation #19 are represented as follows.

Math 586 (Equation #21):
$$F[i] = \frac{1}{\sqrt{\alpha^2+1}} \begin{pmatrix} e^{j\theta_{11}} & \alpha \times e^{j(\theta_{11}+\lambda)} \\ \alpha \times e^{j\left(\theta_{11}+\frac{2i\pi}{2n+1}\right)} & e^{j\left(\theta_{11}+\frac{2i\pi}{2n+1}+\lambda+\pi\right)} \end{pmatrix}$$

In this case, i = 0, 1, 2, . . . , n−2, n−1, n. Out of the n + 1 different precoding matrices according to Equation #21 (namely, F[0], F[1], . . . , F[i], . . . , F[n−1], F[n]), F[0] is used once, and each of F[1] through F[n] is used twice (i.e., F[1] is used twice, F[2] is used twice, . . . , F[n−1] is used twice, and F[n] is used twice); one such arrangement of a period is sketched below.
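As a concrete reading of the schedule just described (F[0] once, F[1] through F[n] twice each, giving N = 2n + 1 slots), the following sketch builds one possible ordering of matrix indices for a period. The text does not fix the order within the period, so the arrangement below is only one admissible choice.

```python
# Sketch: one admissible odd-period schedule of matrix indices.
# F[0] appears once; each of F[1]..F[n] appears twice; N = 2n + 1 slots.
def odd_period_schedule(n):
    order = [0] + [i for i in range(1, n + 1) for _ in range(2)]
    assert len(order) == 2 * n + 1
    return order

print(odd_period_schedule(3))  # [0, 1, 1, 2, 2, 3, 3] -> a 7-slot period
```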
As a result, the precoding scheme of regularly hopping between precoding matrices with an N-slot period (cycle) where N = 2n + 1 is achieved, and the reception device can achieve excellent data reception quality, similarly to the case where the number of slots in a period (cycle) for the precoding scheme of regularly hopping between precoding matrices is an odd number in Embodiment 9. In this case, high data reception quality may be achieved even if the length of the period (cycle) pertaining to the present embodiment is reduced to approximately half of the length of the period (cycle) pertaining to Embodiment 9. This can reduce the number of precoding matrices to be prepared, which brings about the advantageous effect of reducing the scale of circuits for the transmission device and the reception device. The above advantageous effect can be enhanced with a transmission device that is provided with one encoder and distributes encoded data as shown in FIG. 4, or with a reception device corresponding to such a transmission device.

Especially, when λ = 0 radians and θ11 = 0 radians, the above equation can be expressed as follows.

Math 587 (Equation #22):
$$F[i] = \frac{1}{\sqrt{\alpha^2+1}} \begin{pmatrix} e^{j0} & \alpha \times e^{j0} \\ \alpha \times e^{j\frac{2i\pi}{2n+1}} & e^{j\left(\frac{2i\pi}{2n+1}+\pi\right)} \end{pmatrix}$$

In this case, i = 0, 1, 2, . . . , n−2, n−1, n. Out of the n + 1 different precoding matrices according to Equation #22 (namely, F[0], F[1], . . . , F[i], . . . , F[n−1], F[n]), F[0] is used once, and each of F[1] through F[n] is used twice (i.e., F[1] is used twice, F[2] is used twice, . . . , F[n−1] is used twice, and F[n] is used twice). As a result, the precoding scheme of regularly hopping between precoding matrices with an N-slot period (cycle) where N = 2n + 1 is achieved, and the reception device can achieve excellent data reception quality, similarly to the case where the number of slots in a period (cycle) for the precoding scheme of regularly hopping between precoding matrices is an odd number in Embodiment 9. In this case, high data reception quality may be achieved even if the length of the period (cycle) pertaining to the present embodiment is reduced to approximately half of the length of the period (cycle) pertaining to Embodiment 9. This can reduce the number of precoding matrices to be prepared, which brings about the advantageous effect of reducing the scale of circuits for the transmission device and the reception device.

Especially, when λ = π radians and θ11 = 0 radians, the following equation holds.

Math 588 (Equation #23):
$$F[i] = \frac{1}{\sqrt{\alpha^2+1}} \begin{pmatrix} e^{j0} & \alpha \times e^{j\pi} \\ \alpha \times e^{j\frac{2i\pi}{2n+1}} & e^{j\frac{2i\pi}{2n+1}} \end{pmatrix}$$

In this case, i = 0, 1, 2, . . . , n−2, n−1, n. Out of the n + 1 different precoding matrices according to Equation #23 (namely, F[0], F[1], . . . , F[i], . . . , F[n−1], F[n]), F[0] is used once, and each of F[1] through F[n] is used twice (i.e., F[1] is used twice, F[2] is used twice, . . . , F[n−1] is used twice, and F[n] is used twice). As a result, the precoding scheme of regularly hopping between precoding matrices with an N-slot period (cycle) where N = 2n + 1 is achieved, and the reception device can achieve excellent data reception quality, similarly to the case where the number of slots in a period (cycle) for the precoding scheme of regularly hopping between precoding matrices is an odd number in Embodiment 9. In this case, high data reception quality may be achieved even if the length of the period (cycle) pertaining to the present embodiment is reduced to approximately half of the length of the period (cycle) pertaining to Embodiment 9.
This can reduce the number of precoding matrices to be prepared, which brings about the advantageous effect of reducing the scale of circuits for the transmission device and the reception device. Furthermore, when α = 1, as in the relationship between Equation #19 and Equation #20, Equation #21 can be expressed as follows.

Math 589 (Equation #24):
$$F[i] = \frac{1}{\sqrt{2}} \begin{pmatrix} e^{j\theta_{11}} & e^{j(\theta_{11}+\lambda)} \\ e^{j\left(\theta_{11}+\frac{2i\pi}{2n+1}\right)} & e^{j\left(\theta_{11}+\frac{2i\pi}{2n+1}+\lambda+\pi\right)} \end{pmatrix}$$

In this case, i = 0, 1, 2, . . . , n−2, n−1, n. Out of the n + 1 different precoding matrices according to Equation #24 (namely, F[0], F[1], . . . , F[i], . . . , F[n−1], F[n]), F[0] is used once, and each of F[1] through F[n] is used twice (i.e., F[1] is used twice, F[2] is used twice, . . . , F[n−1] is used twice, and F[n] is used twice). As a result, the precoding scheme of regularly hopping between precoding matrices with an N-slot period (cycle) where N = 2n + 1 is achieved, and the reception device can achieve excellent data reception quality, similarly to the case where the number of slots in a period (cycle) for the precoding scheme of regularly hopping between precoding matrices is an odd number in Embodiment 9. In this case, high data reception quality may be achieved even if the length of the period (cycle) pertaining to the present embodiment is reduced to approximately half of the length of the period (cycle) pertaining to Embodiment 9. This can reduce the number of precoding matrices to be prepared, which brings about the advantageous effect of reducing the scale of circuits for the transmission device and the reception device.

Similarly, when α = 1 in Equation #22, the following equation holds.

Math 590 (Equation #25):
$$F[i] = \frac{1}{\sqrt{2}} \begin{pmatrix} e^{j0} & e^{j0} \\ e^{j\frac{2i\pi}{2n+1}} & e^{j\left(\frac{2i\pi}{2n+1}+\pi\right)} \end{pmatrix}$$

In this case, i = 0, 1, 2, . . . , n−2, n−1, n. Out of the n + 1 different precoding matrices according to Equation #25 (namely, F[0], F[1], . . . , F[i], . . . , F[n−1], F[n]), F[0] is used once, and each of F[1] through F[n] is used twice (i.e., F[1] is used twice, F[2] is used twice, . . . , F[n−1] is used twice, and F[n] is used twice). As a result, the precoding scheme of regularly hopping between precoding matrices with an N-slot period (cycle) where N = 2n + 1 is achieved, and the reception device can achieve excellent data reception quality, similarly to the case where the number of slots in a period (cycle) for the precoding scheme of regularly hopping between precoding matrices is an odd number in Embodiment 9. In this case, high data reception quality may be achieved even if the length of the period (cycle) pertaining to the present embodiment is reduced to approximately half of the length of the period (cycle) pertaining to Embodiment 9. This can reduce the number of precoding matrices to be prepared, which brings about the advantageous effect of reducing the scale of circuits for the transmission device and the reception device.

Similarly, when α = 1 in Equation #23, the following equation holds.

Math 591 (Equation #26):
$$F[i] = \frac{1}{\sqrt{2}} \begin{pmatrix} e^{j0} & e^{j\pi} \\ e^{j\frac{2i\pi}{2n+1}} & e^{j\frac{2i\pi}{2n+1}} \end{pmatrix}$$

In this case, i = 0, 1, 2, . . . , n−2, n−1, n. Out of the n + 1 different precoding matrices according to Equation #26 (namely, F[0], F[1], . . . , F[i], . . . , F[n−1], F[n]), F[0] is used once, and each of F[1] through F[n] is used twice (i.e., F[1] is used twice, F[2] is used twice, . . . , F[n−1] is used twice, and F[n] is used twice).
As a result, the precoding scheme of regularly hopping between precoding matrices with an N-slot period (cycle) where N = 2n + 1 is achieved, and the reception device can achieve excellent data reception quality, similarly to the case where the number of slots in a period (cycle) for the precoding scheme of regularly hopping between precoding matrices is an odd number in Embodiment 9. In this case, high data reception quality may be achieved even if the length of the period (cycle) pertaining to the present embodiment is reduced to approximately half of the length of the period (cycle) pertaining to Embodiment 9. This can reduce the number of precoding matrices to be prepared, which brings about the advantageous effect of reducing the scale of circuits for the transmission device and the reception device. A preferable example of α appearing in the above examples can be obtained by using any of the schemes described in Embodiment 18. However, α is not limited to being obtained in this way.

According to the present embodiment, in the case of a single-carrier transmission scheme, the precoding matrices W[0], W[1], . . . , W[2n−1], W[2n] (which are constituted by F[0], F[1], F[2], . . . , F[n−1], F[n]) for a precoding hopping scheme with an N-slot period (cycle) where N = 2n + 1 (i.e., a precoding scheme of regularly hopping between precoding matrices with an N-slot period (cycle) where N = 2n + 1) are arranged in the order W[0], W[1], . . . , W[2n−1], W[2n] in the time domain (or the frequency domain). The present invention is not, however, limited in this way, and the precoding matrices W[0], W[1], . . . , W[2n−1], W[2n] may be applied to a multi-carrier transmission scheme such as an OFDM transmission scheme or the like. As in Embodiment 1, as a scheme of adaptation in this case, the precoding weights may be changed by arranging symbols in the frequency domain and in the frequency-time domain. Although the above has described the precoding hopping scheme with an N-slot period (cycle) where N = 2n + 1, the same advantageous effects may be obtained by randomly using W[0], W[1], . . . , W[2n−1], W[2n]. In other words, W[0], W[1], . . . , W[2n−1], W[2n] do not necessarily need to be used in a regular period (cycle). Furthermore, in the precoding matrix hopping scheme over an H-slot period (cycle) (H being a natural number larger than the number of slots N = 2n + 1 in the period (cycle) of the above scheme of regularly hopping between precoding matrices), when the N different precoding matrices of the present embodiment are included, the probability of excellent reception quality increases.

Embodiment C3

The present embodiment provides detailed descriptions of a case where, as shown in Non-Patent Literature 12 through Non-Patent Literature 15, a Quasi-Cyclic Low-Density Parity-Check (QC-LDPC) code (or an LDPC (block) code other than a QC-LDPC code) or another block code (e.g., a concatenated code consisting of an LDPC code and a Bose-Chaudhuri-Hocquenghem (BCH) code, or a turbo code) is used, especially when the scheme of regularly hopping between precoding matrices described in Embodiments 16 through 26 and C1 is employed. This embodiment describes an example of transmitting two streams, s1 and s2. For the case of coding using block codes, when control information or the like is not necessary, the number of bits in a coded block matches the number of bits composing the block code (the control information or the like listed below may, however, be included therein).
For the case of coding using block codes, when control information or the like (such as a cyclic redundancy check (CRC), transmission parameters, or the like) is necessary, the number of bits in a coded block is the sum of the number of bits composing the block code and the number of bits in the control information or the like.

FIG. 97 shows how the numbers of symbols and slots necessary for one coded block change with the modulation scheme when block coding is used, for the case when, for example as shown in the transmission device in FIG. 4, two streams, s1 and s2, are transmitted and the transmission device has one encoder. (In this case, the transmission scheme may be either single-carrier transmission or multi-carrier transmission such as OFDM.) As shown in FIG. 97, the number of bits constituting one block that has been encoded via block coding is set to 6,000. In order to transmit these 6,000 bits, 3,000 symbols are required when the modulation scheme is QPSK, 1,500 when the modulation scheme is 16QAM, and 1,000 when the modulation scheme is 64QAM. Since the transmission device in FIG. 4 simultaneously transmits two streams, 1,500 of the 3,000 symbols when the modulation scheme is QPSK are allocated to s1, and 1,500 to s2. Therefore, 1,500 slots (this unit is herein referred to as a “slot”) are required to transmit the 1,500 symbols transmitted in s1 and the 1,500 symbols transmitted in s2. By similar reasoning, when the modulation scheme is 16QAM, 750 slots are necessary to transmit all of the bits constituting one coded block, and when the modulation scheme is 64QAM, 500 slots are necessary to transmit all of the bits constituting one block.

The following describes the relationship between the slots defined above and the precoding matrices in the scheme of regularly hopping between precoding matrices. Here, the number of precoding matrices prepared for the scheme of regularly hopping between precoding matrices is set to five. In other words, five different precoding matrices are prepared for the weighting unit in the transmission device in FIG. 4 (the weighting unit selects one of the plurality of precoding matrices and performs precoding for each slot). These five different precoding matrices are represented as F[0], F[1], F[2], F[3], and F[4]. When the modulation scheme is QPSK, among the 1,500 slots described above for transmitting the 6,000 bits constituting one coded block, it is necessary for 300 slots to use the precoding matrix F[0], 300 slots to use F[1], 300 slots to use F[2], 300 slots to use F[3], and 300 slots to use F[4]. This is because, if use of the precoding matrices is biased, the reception quality of data is greatly influenced by the precoding matrix that was used the greater number of times. When the modulation scheme is 16QAM, among the 750 slots described above for transmitting the 6,000 bits constituting one coded block, it is necessary for 150 slots to use the precoding matrix F[0], 150 slots to use F[1], 150 slots to use F[2], 150 slots to use F[3], and 150 slots to use F[4].
When the modulation scheme is 64QAM, among the 500 slots described above for transmitting the 6,000 bits constituting one coded block, it is necessary for 100 slots to use the precoding matrix F[0], 100 slots to use the precoding matrix F[1], 100 slots to use the precoding matrix F[2], 100 slots to use the precoding matrix F[3], and 100 slots to use the precoding matrix F[4].

As described above, in the scheme of regularly hopping between precoding matrices, if there are N different precoding matrices (represented as F[0], F[1], F[2], . . . , F[N−2], and F[N−1]), when transmitting all of the bits constituting one coded block, Condition #107 should be satisfied, wherein K0 is the number of slots using the precoding matrix F[0], K1 is the number of slots using the precoding matrix F[1], Ki is the number of slots using the precoding matrix F[i] (i=0, 1, 2, . . . , N−1), and KN-1 is the number of slots using the precoding matrix F[N−1].

Condition #107
K0=K1= . . . =Ki= . . . =KN-1, i.e. Ka=Kb (for ∀a, ∀b, where a, b=0, 1, 2, . . . , N−1 (each of a and b being an integer in a range of 0 to N−1), and a≠b).

If the communications system supports a plurality of modulation schemes, and the modulation scheme that is used is selected from among the supported modulation schemes, then a modulation scheme for which Condition #107 is satisfied should be selected. When a plurality of modulation schemes are supported, it is typical for the number of bits that can be transmitted in one symbol to vary from modulation scheme to modulation scheme (although it is also possible for the number of bits to be the same), and therefore some modulation schemes may not be capable of satisfying Condition #107. In such a case, instead of Condition #107, the following condition should be satisfied.

Condition #108
The difference between Ka and Kb is 0 or 1, i.e. |Ka−Kb| is 0 or 1 (for ∀a, ∀b, where a, b=0, 1, 2, . . . , N−1 (each of a and b being an integer in a range of 0 to N−1), and a≠b).

FIG. 98 shows a modification of the number of symbols and of slots necessary for two coded blocks when using block coding for the case when, for example as shown in the transmission devices in FIG. 3 and FIG. 13, two streams, s1 and s2, are transmitted, and the transmission device has two encoders. (In this case, the transmission scheme may be either single-carrier transmission, or multi-carrier transmission such as OFDM.)

As shown in FIG. 98, the number of bits constituting one block that has been encoded via block coding is set to 6,000. In order to transmit these 6,000 bits, 3,000 symbols are required when the modulation scheme is QPSK, 1,500 when the modulation scheme is 16QAM, and 1,000 when the modulation scheme is 64QAM.

The transmission device in FIG. 3 or in FIG. 13 transmits two streams simultaneously, and since two encoders are provided, different coded blocks are transmitted in the two streams. Accordingly, when the modulation scheme is QPSK, two coded blocks are transmitted in s1 and s2 within the same interval. For example, a first coded block is transmitted in s1, and a second coded block is transmitted in s2, and therefore, 3,000 slots are required to transmit the first and second coded blocks.
By similar reasoning, when the modulation scheme is 16QAM, 1,500 slots are necessary to transmit all of the bits constituting two coded blocks, and when the modulation scheme is 64QAM, 1,000 slots are necessary to transmit all of the bits constituting two blocks.

The following describes the relationship between the slots defined above and the precoding matrices in the scheme of regularly hopping between precoding matrices. Here, the number of precoding matrices prepared for the scheme of regularly hopping between precoding matrices is set to five. In other words, five different precoding matrices are prepared for the weighting unit in the transmission device in FIG. 3 or in FIG. 13 (the weighting unit selects one of the plurality of precoding matrices and performs precoding for each slot). These five different precoding matrices are represented as F[0], F[1], F[2], F[3], and F[4].

When the modulation scheme is QPSK, among the 3,000 slots described above for transmitting the 6,000×2 bits constituting two coded blocks, it is necessary for 600 slots to use the precoding matrix F[0], 600 slots to use the precoding matrix F[1], 600 slots to use the precoding matrix F[2], 600 slots to use the precoding matrix F[3], and 600 slots to use the precoding matrix F[4]. This is because if use of the precoding matrices is biased, the reception quality of data is greatly influenced by the precoding matrix that was used a greater number of times.

To transmit the first coded block, it is necessary for the slot using the precoding matrix F[0] to occur 600 times, the slot using the precoding matrix F[1] to occur 600 times, the slot using the precoding matrix F[2] to occur 600 times, the slot using the precoding matrix F[3] to occur 600 times, and the slot using the precoding matrix F[4] to occur 600 times. To transmit the second coded block, the slot using the precoding matrix F[0] should occur 600 times, the slot using the precoding matrix F[1] should occur 600 times, the slot using the precoding matrix F[2] should occur 600 times, the slot using the precoding matrix F[3] should occur 600 times, and the slot using the precoding matrix F[4] should occur 600 times.

Similarly, when the modulation scheme is 16QAM, among the 1,500 slots described above for transmitting the 6,000×2 bits constituting two coded blocks, it is necessary for 300 slots to use the precoding matrix F[0], 300 slots to use the precoding matrix F[1], 300 slots to use the precoding matrix F[2], 300 slots to use the precoding matrix F[3], and 300 slots to use the precoding matrix F[4].

To transmit the first coded block, it is necessary for the slot using the precoding matrix F[0] to occur 300 times, the slot using the precoding matrix F[1] to occur 300 times, the slot using the precoding matrix F[2] to occur 300 times, the slot using the precoding matrix F[3] to occur 300 times, and the slot using the precoding matrix F[4] to occur 300 times. To transmit the second coded block, the slot using the precoding matrix F[0] should occur 300 times, the slot using the precoding matrix F[1] should occur 300 times, the slot using the precoding matrix F[2] should occur 300 times, the slot using the precoding matrix F[3] should occur 300 times, and the slot using the precoding matrix F[4] should occur 300 times.
Similarly, when the modulation scheme is 64QAM, among the 1,000 slots described above for transmitting the 6,000×2 bits constituting two coded blocks, it is necessary for 200 slots to use the precoding matrix F[0], 200 slots to use the precoding matrix F[1], 200 slots to use the precoding matrix F[2], 200 slots to use the precoding matrix F[3], and 200 slots to use the precoding matrix F[4].

To transmit the first coded block, it is necessary for the slot using the precoding matrix F[0] to occur 200 times, the slot using the precoding matrix F[1] to occur 200 times, the slot using the precoding matrix F[2] to occur 200 times, the slot using the precoding matrix F[3] to occur 200 times, and the slot using the precoding matrix F[4] to occur 200 times. To transmit the second coded block, the slot using the precoding matrix F[0] should occur 200 times, the slot using the precoding matrix F[1] should occur 200 times, the slot using the precoding matrix F[2] should occur 200 times, the slot using the precoding matrix F[3] should occur 200 times, and the slot using the precoding matrix F[4] should occur 200 times.

As described above, in the scheme of regularly hopping between precoding matrices, if there are N different precoding matrices (represented as F[0], F[1], F[2], . . . , F[N−2], and F[N−1]), when transmitting all of the bits constituting two coded blocks, Condition #109 should be satisfied, wherein K0 is the number of slots using the precoding matrix F[0], K1 is the number of slots using the precoding matrix F[1], Ki is the number of slots using the precoding matrix F[i] (i=0, 1, 2, . . . , N−1), and KN-1 is the number of slots using the precoding matrix F[N−1].

Condition #109
K0=K1= . . . =Ki= . . . =KN-1, i.e. Ka=Kb (for ∀a, ∀b, where a, b=0, 1, 2, . . . , N−1 (each of a and b being an integer in a range of 0 to N−1), and a≠b).

When transmitting all of the bits constituting the first coded block, Condition #110 should be satisfied, wherein K0,1 is the number of times the precoding matrix F[0] is used, K1,1 is the number of times the precoding matrix F[1] is used, Ki,1 is the number of times the precoding matrix F[i] is used (i=0, 1, 2, . . . , N−1), and KN-1,1 is the number of times the precoding matrix F[N−1] is used.

Condition #110
K0,1=K1,1= . . . =Ki,1= . . . =KN-1,1, i.e. Ka,1=Kb,1 (for ∀a, ∀b, where a, b=0, 1, 2, . . . , N−1 (each of a and b being an integer in a range of 0 to N−1), and a≠b).

When transmitting all of the bits constituting the second coded block, Condition #111 should be satisfied, wherein K0,2 is the number of times the precoding matrix F[0] is used, K1,2 is the number of times the precoding matrix F[1] is used, Ki,2 is the number of times the precoding matrix F[i] is used (i=0, 1, 2, . . . , N−1), and KN-1,2 is the number of times the precoding matrix F[N−1] is used.

Condition #111
K0,2=K1,2= . . . =Ki,2= . . . =KN-1,2, i.e. Ka,2=Kb,2 (for ∀a, ∀b, where a, b=0, 1, 2, . . . , N−1 (each of a and b being an integer in a range of 0 to N−1), and a≠b).

If the communications system supports a plurality of modulation schemes, and the modulation scheme that is used is selected from among the supported modulation schemes, the selected modulation scheme preferably satisfies Conditions #109, #110, and #111.
When a plurality of modulation schemes are supported, it is typical for the number of bits that can be transmitted in one symbol to vary from modulation scheme to modulation scheme (although it is also possible for the number of bits to be the same), and therefore some modulation schemes may not be capable of satisfying Conditions #109, #110, and #111. In such a case, instead of Conditions #109, #110, and #111, the following conditions should be satisfied.

Condition #112
The difference between Ka and Kb is 0 or 1, i.e. |Ka−Kb| is 0 or 1 (for ∀a, ∀b, where a, b=0, 1, 2, . . . , N−1 (each of a and b being an integer in a range of 0 to N−1), and a≠b).

Condition #113
The difference between Ka,1 and Kb,1 is 0 or 1, i.e. |Ka,1−Kb,1| is 0 or 1 (for ∀a, ∀b, where a, b=0, 1, 2, . . . , N−1 (each of a and b being an integer in a range of 0 to N−1), and a≠b).

Condition #114
The difference between Ka,2 and Kb,2 is 0 or 1, i.e. |Ka,2−Kb,2| is 0 or 1 (for ∀a, ∀b, where a, b=0, 1, 2, . . . , N−1 (each of a and b being an integer in a range of 0 to N−1), and a≠b).

Associating coded blocks with precoding matrices in this way eliminates bias in the precoding matrices that are used for transmitting coded blocks, thereby achieving the advantageous effect of improving reception quality of data by the reception device.

In the present embodiment, in the scheme of regularly hopping between precoding matrices, N different precoding matrices are necessary for a precoding hopping scheme with an N-slot period (cycle). In this case, F[0], F[1], F[2], . . . , F[N−2], F[N−1] are prepared as the N different precoding matrices. These precoding matrices may be arranged in the frequency domain in the order of F[0], F[1], F[2], . . . , F[N−2], F[N−1], but arrangement is not limited in this way. With the N different precoding matrices F[0], F[1], F[2], . . . , F[N−2], F[N−1] generated in the present embodiment, precoding weights may be changed by arranging symbols in the time domain or in the frequency-time domain as in Embodiment 1. Note that a precoding hopping scheme with an N-slot period (cycle) has been described, but the same advantageous effects may be obtained by randomly using N different precoding matrices. In other words, the N different precoding matrices do not necessarily need to be used in a regular period (cycle). Here, when the conditions provided in the present embodiment are satisfied, the reception device has a high possibility of achieving excellent data reception quality.

Furthermore, as described in Embodiment 15, a spatial multiplexing MIMO system, a MIMO system in which precoding matrices are fixed, a space-time block coding scheme, a one-stream-only transmission mode, and modes for schemes of regularly hopping between precoding matrices may exist, and the transmission device (broadcast station, base station) may select the transmission scheme from among these modes. In this case, in the spatial multiplexing MIMO system, the MIMO system in which precoding matrices are fixed, the space-time block coding scheme, the one-stream-only transmission mode, and the modes for schemes of regularly hopping between precoding matrices, it is preferable to implement the present embodiment in the (sub)carriers for which a scheme of regularly hopping between precoding matrices is selected.
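As a concrete illustration of Conditions #109 through #114, the following minimal Python sketch (the function name check_balance is ours) classifies a sequence of per-slot precoding-matrix indices as satisfying the exact conditions, the relaxed conditions, or neither:

    from collections import Counter

    def check_balance(slot_indices, num_matrices):
        # Count how often each precoding matrix index is used.
        counts = Counter(slot_indices)
        ks = [counts.get(m, 0) for m in range(num_matrices)]
        spread = max(ks) - min(ks)
        if spread == 0:
            return "exact"      # Ka = Kb for all a, b (Condition #109)
        if spread <= 1:
            return "relaxed"    # |Ka - Kb| is 0 or 1 (Condition #112)
        return "violated"

    # 750 slots (16QAM, one coded block) spread over five matrices:
    print(check_balance([i % 5 for i in range(750)], 5))   # -> "exact"

Applying the same check to the slot indices used by each coded block separately tests the per-block Conditions #110/#111 (or #113/#114) rather than the aggregate Condition #109 (or #112).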
Embodiment C4

The present embodiment provides detailed descriptions of a case where, as shown in Non-Patent Literature 12 through Non-Patent Literature 15, a QC-LDPC code (or an LDPC (block) code other than a QC-LDPC code) and a block code (e.g., a concatenated code consisting of an LDPC code and a BCH code, or a turbo code) are used, especially when the scheme of regularly hopping between precoding matrices described in Embodiment C2 is employed. This embodiment describes an example of transmitting two streams, s1 and s2. For the case of coding using block codes, when control information or the like is not necessary, the number of bits in a coded block matches the number of bits composing the block code (the control information or the like listed below may, however, be included therein). For the case of coding using block codes, when control information or the like (such as a cyclic redundancy check (CRC), transmission parameters, or the like) is necessary, the number of bits in a coded block is the sum of the number of bits composing the block code and the number of bits in the control information or the like.

FIG. 97 shows a modification of the number of symbols and of slots necessary for one coded block when using block coding for the case when, for example as shown in the transmission device in FIG. 4, two streams, s1 and s2, are transmitted, and the transmission device has one encoder. (In this case, the transmission scheme may be either single-carrier transmission, or multi-carrier transmission such as OFDM.)

As shown in FIG. 97, the number of bits constituting one block that has been encoded via block coding is set to 6,000. In order to transmit these 6,000 bits, 3,000 symbols are required when the modulation scheme is QPSK, 1,500 when the modulation scheme is 16QAM, and 1,000 when the modulation scheme is 64QAM.

Since the transmission device in FIG. 4 simultaneously transmits two streams, 1,500 of the 3,000 symbols when the modulation scheme is QPSK are allocated to s1, and 1,500 to s2. Therefore, 1,500 slots (the term “slot” is used here) are required to transmit the 1,500 symbols transmitted in s1 and the 1,500 symbols transmitted in s2. By similar reasoning, when the modulation scheme is 16QAM, 750 slots are necessary to transmit all of the bits constituting one coded block, and when the modulation scheme is 64QAM, 500 slots are necessary to transmit all of the bits constituting one block.

The following describes the relationship between the slots defined above and the precoding matrices in the scheme of regularly hopping between precoding matrices. Here, the five precoding matrices for realizing the precoding scheme of regularly hopping between precoding matrices with a five-slot period (cycle), as described in Embodiment C2, are expressed as W[0], W[1], W[2], W[3], and W[4] (the weighting unit of the transmission device selects one of a plurality of precoding matrices and performs precoding for each slot).

When the modulation scheme is QPSK, among the 1,500 slots described above for transmitting the 6,000 bits constituting one coded block, it is necessary for 300 slots to use the precoding matrix W[0], 300 slots to use the precoding matrix W[1], 300 slots to use the precoding matrix W[2], 300 slots to use the precoding matrix W[3], and 300 slots to use the precoding matrix W[4].
This is because if use of the precoding matrices is biased, the reception quality of data is greatly influenced by the precoding matrix that was used a greater number of times.

When the modulation scheme is 16QAM, among the 750 slots described above for transmitting the 6,000 bits constituting one coded block, it is necessary for 150 slots to use the precoding matrix W[0], 150 slots to use the precoding matrix W[1], 150 slots to use the precoding matrix W[2], 150 slots to use the precoding matrix W[3], and 150 slots to use the precoding matrix W[4]. When the modulation scheme is 64QAM, among the 500 slots described above for transmitting the 6,000 bits constituting one coded block, it is necessary for 100 slots to use the precoding matrix W[0], 100 slots to use the precoding matrix W[1], 100 slots to use the precoding matrix W[2], 100 slots to use the precoding matrix W[3], and 100 slots to use the precoding matrix W[4].

As described above, in the scheme of regularly hopping between precoding matrices pertaining to Embodiment C2, provided that the precoding matrices W[0], W[1], . . . , W[2n−1], and W[2n] (which are constituted by F[0], F[1], F[2], . . . , F[n−1], and F[n]; see Embodiment C2) are prepared to achieve an N-slot period (cycle) where N=2n+1, when transmitting all of the bits constituting one coded block, Condition #115 should be satisfied, wherein K0 is the number of slots using the precoding matrix W[0], K1 is the number of slots using the precoding matrix W[1], Ki is the number of slots using the precoding matrix W[i] (i=0, 1, 2, . . . , 2n−1, 2n), and K2n is the number of slots using the precoding matrix W[2n].

Condition #115
K0=K1= . . . =Ki= . . . =K2n, i.e. Ka=Kb (for ∀a, ∀b, where a, b=0, 1, 2, . . . , 2n−1, 2n (each of a and b being an integer in a range of 0 to 2n), and a≠b).

In the scheme of regularly hopping between precoding matrices pertaining to Embodiment C2, provided that the different precoding matrices F[0], F[1], F[2], . . . , F[n−1], and F[n] are prepared to achieve an N-slot period (cycle) where N=2n+1, when transmitting all of the bits constituting one coded block, Condition #115 can be expressed as follows, wherein G0 is the number of slots using the precoding matrix F[0], G1 is the number of slots using the precoding matrix F[1], Gi is the number of slots using the precoding matrix F[i] (i=0, 1, 2, . . . , n−1, n), and Gn is the number of slots using the precoding matrix F[n].

Condition #116
2×G0=G1= . . . =Gi= . . . =Gn, i.e. 2×G0=Ga (for ∀a, where a=1, 2, . . . , n−1, n (a being an integer in a range of 1 to n)).

If the communications system supports a plurality of modulation schemes, and the modulation scheme that is used is selected from among the supported modulation schemes, then a modulation scheme for which Condition #115 (#116) is satisfied should be selected. When a plurality of modulation schemes are supported, it is typical for the number of bits that can be transmitted in one symbol to vary from modulation scheme to modulation scheme (although it is also possible for the number of bits to be the same), and therefore some modulation schemes may not be capable of satisfying Condition #115 (#116). In such a case, instead of Condition #115, the following condition should be satisfied.

Condition #117
The difference between Ka and Kb is 0 or 1, i.e. |Ka−Kb| is 0 or 1 (for ∀a, ∀b, where a, b=0, 1, 2, . . . , 2n−1, 2n (each of a and b being an integer in a range of 0 to 2n), and a≠b).

Condition #117 can also be expressed as follows.
Condition #118
The difference between Ga and Gb is 0, 1 or 2, i.e. |Ga−Gb| is 0, 1 or 2 (for ∀a, ∀b, where a, b=1, 2, . . . , n−1, n (each of a and b being an integer in a range of 1 to n), and a≠b); and the difference between 2×G0 and Ga is 0, 1 or 2, i.e. |2×G0−Ga| is 0, 1 or 2 (for ∀a, where a=1, 2, . . . , n−1, n (a being an integer in a range of 1 to n)).

FIG. 98 shows a modification of the number of symbols and of slots necessary for two coded blocks when using block coding for the case when, for example as shown in the transmission devices in FIG. 3 and FIG. 13, two streams, s1 and s2, are transmitted, and the transmission device has two encoders. (In this case, the transmission scheme may be either single-carrier transmission, or multi-carrier transmission such as OFDM.)

As shown in FIG. 98, the number of bits constituting one block that has been encoded via block coding is set to 6,000. In order to transmit these 6,000 bits, 3,000 symbols are required when the modulation scheme is QPSK, 1,500 when the modulation scheme is 16QAM, and 1,000 when the modulation scheme is 64QAM.

The transmission device in FIG. 3 or in FIG. 13 transmits two streams simultaneously, and since two encoders are provided, different coded blocks are transmitted in the two streams. Accordingly, when the modulation scheme is QPSK, two coded blocks are transmitted in s1 and s2 within the same interval. For example, a first coded block is transmitted in s1, and a second coded block is transmitted in s2, and therefore, 3,000 slots are required to transmit the first and second coded blocks. By similar reasoning, when the modulation scheme is 16QAM, 1,500 slots are necessary to transmit all of the bits constituting two coded blocks, and when the modulation scheme is 64QAM, 1,000 slots are necessary to transmit all of the bits constituting two blocks.

The following describes the relationship between the slots defined above and the precoding matrices in the scheme of regularly hopping between precoding matrices. Below, the five precoding matrices prepared in Embodiment C2 to implement the precoding scheme of regularly hopping between precoding matrices with a five-slot period (cycle) are expressed as W[0], W[1], W[2], W[3], and W[4] (the weighting unit in the transmission device selects one of a plurality of precoding matrices and performs precoding for each slot).

When the modulation scheme is QPSK, among the 3,000 slots described above for transmitting the 6,000×2 bits constituting two coded blocks, it is necessary for 600 slots to use the precoding matrix W[0], 600 slots to use the precoding matrix W[1], 600 slots to use the precoding matrix W[2], 600 slots to use the precoding matrix W[3], and 600 slots to use the precoding matrix W[4]. This is because if use of the precoding matrices is biased, the reception quality of data is greatly influenced by the precoding matrix that was used a greater number of times.

To transmit the first coded block, it is necessary for the slot using the precoding matrix W[0] to occur 600 times, the slot using the precoding matrix W[1] to occur 600 times, the slot using the precoding matrix W[2] to occur 600 times, the slot using the precoding matrix W[3] to occur 600 times, and the slot using the precoding matrix W[4] to occur 600 times.
To transmit the second coded block, the slot using the precoding matrix W[0] should occur 600 times, the slot using the precoding matrix W[1] should occur 600 times, the slot using the precoding matrix W[2] should occur 600 times, the slot using the precoding matrix W[3] should occur 600 times, and the slot using the precoding matrix W[4] should occur 600 times.

Similarly, when the modulation scheme is 16QAM, among the 1,500 slots described above for transmitting the 6,000×2 bits constituting two coded blocks, it is necessary for 300 slots to use the precoding matrix W[0], 300 slots to use the precoding matrix W[1], 300 slots to use the precoding matrix W[2], 300 slots to use the precoding matrix W[3], and 300 slots to use the precoding matrix W[4].

To transmit the first coded block, it is necessary for the slot using the precoding matrix W[0] to occur 300 times, the slot using the precoding matrix W[1] to occur 300 times, the slot using the precoding matrix W[2] to occur 300 times, the slot using the precoding matrix W[3] to occur 300 times, and the slot using the precoding matrix W[4] to occur 300 times. To transmit the second coded block, the slot using the precoding matrix W[0] should occur 300 times, the slot using the precoding matrix W[1] should occur 300 times, the slot using the precoding matrix W[2] should occur 300 times, the slot using the precoding matrix W[3] should occur 300 times, and the slot using the precoding matrix W[4] should occur 300 times.

Similarly, when the modulation scheme is 64QAM, among the 1,000 slots described above for transmitting the 6,000×2 bits constituting two coded blocks, it is necessary for 200 slots to use the precoding matrix W[0], 200 slots to use the precoding matrix W[1], 200 slots to use the precoding matrix W[2], 200 slots to use the precoding matrix W[3], and 200 slots to use the precoding matrix W[4].

To transmit the first coded block, it is necessary for the slot using the precoding matrix W[0] to occur 200 times, the slot using the precoding matrix W[1] to occur 200 times, the slot using the precoding matrix W[2] to occur 200 times, the slot using the precoding matrix W[3] to occur 200 times, and the slot using the precoding matrix W[4] to occur 200 times. To transmit the second coded block, the slot using the precoding matrix W[0] should occur 200 times, the slot using the precoding matrix W[1] should occur 200 times, the slot using the precoding matrix W[2] should occur 200 times, the slot using the precoding matrix W[3] should occur 200 times, and the slot using the precoding matrix W[4] should occur 200 times.

As described above, in the scheme of regularly hopping between precoding matrices pertaining to Embodiment C2, provided that the precoding matrices W[0], W[1], . . . , W[2n−1], and W[2n] (which are constituted by F[0], F[1], F[2], . . . , F[n−1], and F[n]; see Embodiment C2) are prepared to achieve an N-slot period (cycle) where N=2n+1, when transmitting all of the bits constituting two coded blocks, Condition #119 should be satisfied, wherein K0 is the number of slots using the precoding matrix W[0], K1 is the number of slots using the precoding matrix W[1], Ki is the number of slots using the precoding matrix W[i] (i=0, 1, 2, . . . , 2n−1, 2n), and K2n is the number of slots using the precoding matrix W[2n].

Condition #119
K0=K1= . . . =Ki= . . . =K2n, i.e. Ka=Kb (for ∀a, ∀b, where a, b=0, 1, 2, . . . , 2n−1, 2n (each of a and b being an integer in a range of 0 to 2n), and a≠b).
When transmitting all of the bits constituting the first coded block, Condition #120 should be satisfied, wherein K0,1 is the number of times the precoding matrix W[0] is used, K1,1 is the number of times the precoding matrix W[1] is used, Ki,1 is the number of times the precoding matrix W[i] is used (i=0, 1, 2, . . . , 2n−1, 2n), and K2n,1 is the number of times the precoding matrix W[2n] is used.

Condition #120
K0,1=K1,1= . . . =Ki,1= . . . =K2n,1, i.e. Ka,1=Kb,1 (for ∀a, ∀b, where a, b=0, 1, 2, . . . , 2n−1, 2n (each of a and b being an integer in a range of 0 to 2n), and a≠b).

When transmitting all of the bits constituting the second coded block, Condition #121 should be satisfied, wherein K0,2 is the number of times the precoding matrix W[0] is used, K1,2 is the number of times the precoding matrix W[1] is used, Ki,2 is the number of times the precoding matrix W[i] is used (i=0, 1, 2, . . . , 2n−1, 2n), and K2n,2 is the number of times the precoding matrix W[2n] is used.

Condition #121
K0,2=K1,2= . . . =Ki,2= . . . =K2n,2, i.e. Ka,2=Kb,2 (for ∀a, ∀b, where a, b=0, 1, 2, . . . , 2n−1, 2n (each of a and b being an integer in a range of 0 to 2n), and a≠b).

In the scheme of regularly hopping between precoding matrices pertaining to Embodiment C2, provided that the different precoding matrices F[0], F[1], F[2], . . . , F[n−1], and F[n] are prepared to achieve an N-slot period (cycle) where N=2n+1, when transmitting all of the bits constituting two coded blocks, Condition #119 can be expressed as follows, wherein G0 is the number of slots using the precoding matrix F[0], G1 is the number of slots using the precoding matrix F[1], Gi is the number of slots using the precoding matrix F[i] (i=0, 1, 2, . . . , n−1, n), and Gn is the number of slots using the precoding matrix F[n].

Condition #122
2×G0=G1= . . . =Gi= . . . =Gn, i.e. 2×G0=Ga (for ∀a, where a=1, 2, . . . , n−1, n (a being an integer in a range of 1 to n)).

When transmitting all of the bits constituting the first coded block, Condition #123 should be satisfied, wherein G0,1 is the number of times the precoding matrix F[0] is used, G1,1 is the number of times the precoding matrix F[1] is used, Gi,1 is the number of times the precoding matrix F[i] is used (i=0, 1, 2, . . . , n−1, n), and Gn,1 is the number of times the precoding matrix F[n] is used.

Condition #123
2×G0,1=G1,1= . . . =Gi,1= . . . =Gn,1, i.e. 2×G0,1=Ga,1 (for ∀a, where a=1, 2, . . . , n−1, n (a being an integer in a range of 1 to n)).

When transmitting all of the bits constituting the second coded block, Condition #124 should be satisfied, wherein G0,2 is the number of times the precoding matrix F[0] is used, G1,2 is the number of times the precoding matrix F[1] is used, Gi,2 is the number of times the precoding matrix F[i] is used (i=0, 1, 2, . . . , n−1, n), and Gn,2 is the number of times the precoding matrix F[n] is used.

Condition #124
2×G0,2=G1,2= . . . =Gi,2= . . . =Gn,2, i.e. 2×G0,2=Ga,2 (for ∀a, where a=1, 2, . . . , n−1, n (a being an integer in a range of 1 to n)).

If the communications system supports a plurality of modulation schemes, and the modulation scheme that is used is selected from among the supported modulation schemes, then a modulation scheme for which Conditions #119, #120 and #121 (#122, #123 and #124) are satisfied should be selected.
When a plurality of modulation schemes are supported, it is typical for the number of bits that can be transmitted in one symbol to vary from modulation scheme to modulation scheme (although it is also possible for the number of bits to be the same), and therefore some modulation schemes may not be capable of satisfying Conditions #119, #120, and #121 (#122, #123 and #124). In such a case, instead of Conditions #119, #120, and #121, the following conditions should be satisfied.

Condition #125
The difference between Ka and Kb is 0 or 1, i.e. |Ka−Kb| is 0 or 1 (for ∀a, ∀b, where a, b=0, 1, 2, . . . , 2n−1, 2n (each of a and b being an integer in a range of 0 to 2n), and a≠b).

Condition #126
The difference between Ka,1 and Kb,1 is 0 or 1, i.e. |Ka,1−Kb,1| is 0 or 1 (for ∀a, ∀b, where a, b=0, 1, 2, . . . , 2n−1, 2n (each of a and b being an integer in a range of 0 to 2n), and a≠b).

Condition #127
The difference between Ka,2 and Kb,2 is 0 or 1, i.e. |Ka,2−Kb,2| is 0 or 1 (for ∀a, ∀b, where a, b=0, 1, 2, . . . , 2n−1, 2n (each of a and b being an integer in a range of 0 to 2n), and a≠b).

Conditions #125, #126 and #127 can also be expressed as follows.

Condition #128
The difference between Ga and Gb is 0, 1 or 2, i.e. |Ga−Gb| is 0, 1 or 2 (for ∀a, ∀b, where a, b=1, 2, . . . , n−1, n (each of a and b being an integer in a range of 1 to n), and a≠b); and the difference between 2×G0 and Ga is 0, 1 or 2, i.e. |2×G0−Ga| is 0, 1 or 2 (for ∀a, where a=1, 2, . . . , n−1, n (a being an integer in a range of 1 to n)).

Condition #129
The difference between Ga,1 and Gb,1 is 0, 1 or 2, i.e. |Ga,1−Gb,1| is 0, 1 or 2 (for ∀a, ∀b, where a, b=1, 2, . . . , n−1, n (each of a and b being an integer in a range of 1 to n), and a≠b); and the difference between 2×G0,1 and Ga,1 is 0, 1 or 2, i.e. |2×G0,1−Ga,1| is 0, 1 or 2 (for ∀a, where a=1, 2, . . . , n−1, n (a being an integer in a range of 1 to n)).

Condition #130
The difference between Ga,2 and Gb,2 is 0, 1 or 2, i.e. |Ga,2−Gb,2| is 0, 1 or 2 (for ∀a, ∀b, where a, b=1, 2, . . . , n−1, n (each of a and b being an integer in a range of 1 to n), and a≠b); and the difference between 2×G0,2 and Ga,2 is 0, 1 or 2, i.e. |2×G0,2−Ga,2| is 0, 1 or 2 (for ∀a, where a=1, 2, . . . , n−1, n (a being an integer in a range of 1 to n)).

Associating coded blocks with precoding matrices in this way eliminates bias in the precoding matrices that are used for transmitting coded blocks, thereby achieving the advantageous effect of improving reception quality of data by the reception device.

In the present embodiment, precoding matrices W[0], W[1], . . . , W[2n−1], W[2n] (note that W[0], W[1], . . . , W[2n−1], W[2n] are composed of F[0], F[1], F[2], . . . , F[n−1], F[n]) for the precoding hopping scheme with the period (cycle) of N=2n+1 slots as described in Embodiment C2 (the precoding scheme of regularly hopping between precoding matrices with the period (cycle) of N=2n+1 slots) are arranged in the order W[0], W[1], . . . , W[2n−1], W[2n] in the time domain (or the frequency domain) in the single-carrier transmission scheme. The present invention is not, however, limited in this way, and the precoding matrices W[0], W[1], . . . , W[2n−1], W[2n] may be adapted to a multi-carrier transmission scheme such as an OFDM transmission scheme or the like. As in Embodiment 1, as a scheme of adaptation in this case, precoding weights may be changed by arranging symbols in the frequency domain and in the frequency-time domain.
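To make the G-counts concrete: Embodiment C2 specifies exactly how W[0], W[1], . . . , W[2n] are built from F[0], F[1], . . . , F[n]. Purely for illustration, assume the assignment W[0]=F[0] and W[2i−1]=W[2i]=F[i] for i=1 to n, which is one assignment consistent with Condition #122 but is our assumption, not the construction given in Embodiment C2. Under that assumption, a short Python sketch (names ours) confirms the factor of two:

    from collections import Counter

    def f_usage_counts(n, num_periods):
        # Illustrative W-to-F assignment (an assumption, not the patent's
        # construction): W[0]=F[0], W[2i-1]=W[2i]=F[i] for i=1..n.
        w_to_f = [0] + [i for i in range(1, n + 1) for _ in range(2)]
        counts = Counter()
        for _ in range(num_periods):
            counts.update(w_to_f)       # one period (cycle) = 2n+1 slots
        return [counts[i] for i in range(n + 1)]

    # n=2 gives N=2n+1=5; 300 periods = 1,500 slots (the 16QAM two-block case).
    print(f_usage_counts(2, 300))       # [300, 600, 600]: 2*G0 equals each Ga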
Note that the precoding hopping scheme with the period (cycle) of N=2n+1 slots has been described, but the same advantageous effect may be obtained by randomly using the precoding matrices W[0], W[1], . . . , W[2n−1], W[2n]. In other words, the precoding matrices W[0], W[1], . . . , W[2n−1], W[2n] do not need to be used in a regular period (cycle). In this case, when the conditions described in the present embodiment are satisfied, the probability that the reception device achieves excellent data reception quality is high.

Furthermore, in the precoding matrix hopping scheme with an H-slot period (cycle) (H being a natural number larger than the number of slots N=2n+1 in the period (cycle) of the above-mentioned scheme of regularly hopping between precoding matrices), when the n+1 different precoding matrices of the present embodiment are included, the probability of providing excellent reception quality increases.

As described in Embodiment 15, there are modes such as the spatial multiplexing MIMO system, the MIMO system with a fixed precoding matrix, the space-time block coding scheme, the scheme of transmitting one stream and the scheme of regularly hopping between precoding matrices. The transmission device (broadcast station, base station) may select one transmission scheme from among these modes. In this case, from among the spatial multiplexing MIMO system, the MIMO system with a fixed precoding matrix, the space-time block coding scheme, the scheme of transmitting one stream and the scheme of regularly hopping between precoding matrices, the present embodiment may be implemented in the (sub)carrier group for which the scheme of regularly hopping between precoding matrices is selected.

Embodiment C5

As shown in Non-Patent Literature 12 through Non-Patent Literature 15, the present embodiment describes a case where Embodiment C3 and Embodiment C4 are generalized when using a Quasi-Cyclic Low-Density Parity-Check (QC-LDPC) code (or an LDPC (block) code other than a QC-LDPC code), a block code such as a concatenated code consisting of an LDPC code and a Bose-Chaudhuri-Hocquenghem (BCH) code, or a block code such as a turbo code. The following describes a case of transmitting two streams s1 and s2 as an example. Note that, when control information and the like are not required to perform encoding using the block code, the number of bits constituting the coded block is the same as the number of bits constituting the block code (however, the control information and the like described below may be included). When control information and the like (e.g. a CRC (cyclic redundancy check), a transmission parameter) are required to perform encoding using the block code, the number of bits constituting the coded block is the sum of the number of bits constituting the block code and the number of bits of the control information and the like.

FIG. 97 shows a change in the number of symbols and slots required for one coded block when the block code is used in a case where the two streams s1 and s2 are transmitted and the transmission device has a single encoder, as shown in the transmission device in FIG. 4 (note that, in this case, either single-carrier transmission or multi-carrier transmission such as OFDM may be used as a transmission system). As shown in FIG. 97, let the number of bits constituting one coded block in the block code be 6000 bits.
In order to transmit the 6000 bits, 3000 symbols, 1500 symbols and 1000 symbols are necessary when the modulation scheme is QPSK, 16QAM and 64QAM, respectively.

Since two streams are to be simultaneously transmitted in the transmission device shown in FIG. 4, when the modulation scheme is QPSK, 1500 symbols are allocated to s1 and the remaining 1500 symbols are allocated to s2 out of the above-mentioned 3000 symbols. Therefore, 1500 slots (this unit is referred to as a “slot”) are necessary to transmit the 1500 symbols by s1 and the 1500 symbols by s2. Making the same considerations, 750 slots are necessary to transmit all the bits constituting one coded block when the modulation scheme is 16QAM, and 500 slots are necessary to transmit all the bits constituting one block when the modulation scheme is 64QAM.

The following describes the relationship between the slots defined above and precoding matrices in the scheme of regularly hopping between precoding matrices. Here, let the precoding matrices for the scheme of regularly hopping between precoding matrices with a five-slot period (cycle) be W[0], W[1], W[2], W[3], W[4]. Note that at least two different precoding matrices are included in W[0], W[1], W[2], W[3], W[4] (the same precoding matrix may appear more than once in W[0], W[1], W[2], W[3], W[4]). In the weighting combination unit of the transmission device in FIG. 4, W[0], W[1], W[2], W[3], W[4] are used (the weighting combination unit selects one precoding matrix from among a plurality of precoding matrices in each slot, and performs precoding).

Out of the above-mentioned 1500 slots required to transmit the 6000 bits constituting one coded block, when the modulation scheme is QPSK, 300 slots are necessary for each of a slot using the precoding matrix W[0], a slot using the precoding matrix W[1], a slot using the precoding matrix W[2], a slot using the precoding matrix W[3] and a slot using the precoding matrix W[4]. This is because, if use of the precoding matrices is biased, data reception quality is greatly influenced by the precoding matrices that are used a larger number of times.

Similarly, out of the above-mentioned 750 slots required to transmit the 6000 bits constituting one coded block, when the modulation scheme is 16QAM, 150 slots are necessary for each of the slot using the precoding matrix W[0], the slot using the precoding matrix W[1], the slot using the precoding matrix W[2], the slot using the precoding matrix W[3] and the slot using the precoding matrix W[4].

Similarly, out of the above-mentioned 500 slots required to transmit the 6000 bits constituting one coded block, when the modulation scheme is 64QAM, 100 slots are necessary for each of the slot using the precoding matrix W[0], the slot using the precoding matrix W[1], the slot using the precoding matrix W[2], the slot using the precoding matrix W[3] and the slot using the precoding matrix W[4].

As described above, the precoding matrices in the scheme of regularly hopping between precoding matrices with an N-slot period (cycle) are represented as W[0], W[1], W[2], . . . , W[N−2], W[N−1]. Note that W[0], W[1], W[2], . . . , W[N−2], W[N−1] are composed of at least two different precoding matrices (the same precoding matrix may appear more than once in W[0], W[1], W[2], . . . , W[N−2], W[N−1]).
When all the bits constituting one coded block are transmitted, letting the number of slots using the precoding matrix W[0] be K0, letting the number of slots using the precoding matrix W[1] be K1, letting the number of slots using the precoding matrix W[i] be Ki (i=0, 1, 2, . . . , N−1), and letting the number of slots using the precoding matrix W[N−1] be KN-1, the following condition should be satisfied.

Condition #131
K0=K1= . . . =Ki= . . . =KN-1, i.e., Ka=Kb for ∀a, ∀b (a, b=0, 1, 2, . . . , N−1 (a, b are integers from 0 to N−1); a≠b)

When the communication system supports a plurality of modulation schemes, and a modulation scheme is selected and used from among the supported modulation schemes, Condition #131 should be satisfied. When the plurality of modulation schemes are supported, however, since the number of bits that one symbol can transmit is generally different depending on modulation schemes (in some cases, the number of bits can be the same), there can be a modulation scheme that is not able to satisfy Condition #131. In such a case, instead of satisfying Condition #131, the following condition may be satisfied.

Condition #132
The difference between Ka and Kb is 0 or 1, i.e., |Ka−Kb| is 0 or 1 for ∀a, ∀b (a, b=0, 1, 2, . . . , N−1 (a, b are integers from 0 to N−1); a≠b)

FIG. 98 shows a change in the number of symbols and slots required for two coded blocks when the block code is used in a case where the two streams s1 and s2 are transmitted and the transmission device has two encoders, as shown in the transmission device in FIG. 3 and the transmission device in FIG. 13 (note that, in this case, either single-carrier transmission or multi-carrier transmission such as OFDM may be used as a transmission system). As shown in FIG. 98, let the number of bits constituting one coded block in the block code be 6000 bits. In order to transmit the 6000 bits, 3000 symbols, 1500 symbols and 1000 symbols are necessary when the modulation scheme is QPSK, 16QAM and 64QAM, respectively.

Since two streams are to be simultaneously transmitted in the transmission device shown in FIG. 3 and in the transmission device in FIG. 13, and there are two encoders, different coded blocks are to be transmitted. Therefore, when the modulation scheme is QPSK, s1 and s2 transmit two coded blocks within the same interval. For example, s1 transmits a first coded block, and s2 transmits a second coded block. Therefore, 3000 slots are necessary to transmit the first coded block and the second coded block. Making the same considerations, 1500 slots are necessary to transmit all the bits constituting two coded blocks when the modulation scheme is 16QAM, and 1000 slots are necessary to transmit all the bits constituting two blocks when the modulation scheme is 64QAM.

The following describes the relationship between the slots defined above and precoding matrices in the scheme of regularly hopping between precoding matrices. Here, let the precoding matrices for the scheme of regularly hopping between precoding matrices with a five-slot period (cycle) be W[0], W[1], W[2], W[3], W[4]. Note that at least two different precoding matrices are included in W[0], W[1], W[2], W[3], W[4] (the same precoding matrix may appear more than once in W[0], W[1], W[2], W[3], W[4]).
In the weighting combination unit of the transmission device in FIG. 3 and the transmission device in FIG. 13, W[0], W[1], W[2], W[3], W[4] are used (the weighting combination unit selects one precoding matrix from among a plurality of precoding matrices in each slot, and performs precoding).

Out of the above-mentioned 3000 slots required to transmit the 6000×2 bits constituting two coded blocks, when the modulation scheme is QPSK, 600 slots are necessary for each of the slot using the precoding matrix W[0], the slot using the precoding matrix W[1], the slot using the precoding matrix W[2], the slot using the precoding matrix W[3] and the slot using the precoding matrix W[4]. This is because, if use of the precoding matrices is biased, data reception quality is greatly influenced by the precoding matrices that are used a larger number of times.

Also, in order to transmit the first coded block, 600 slots are necessary for each of the slot using the precoding matrix W[0], the slot using the precoding matrix W[1], the slot using the precoding matrix W[2], the slot using the precoding matrix W[3] and the slot using the precoding matrix W[4]. In order to transmit the second coded block, 600 slots are necessary for each of the slot using the precoding matrix W[0], the slot using the precoding matrix W[1], the slot using the precoding matrix W[2], the slot using the precoding matrix W[3] and the slot using the precoding matrix W[4].

Similarly, out of the above-mentioned 1500 slots required to transmit the 6000×2 bits constituting two coded blocks, when the modulation scheme is 16QAM, 300 slots are necessary for each of the slot using the precoding matrix W[0], the slot using the precoding matrix W[1], the slot using the precoding matrix W[2], the slot using the precoding matrix W[3] and the slot using the precoding matrix W[4]. Also, in order to transmit the first coded block, 300 slots are necessary for each of the slot using the precoding matrix W[0], the slot using the precoding matrix W[1], the slot using the precoding matrix W[2], the slot using the precoding matrix W[3] and the slot using the precoding matrix W[4]. In order to transmit the second coded block, 300 slots are necessary for each of the slot using the precoding matrix W[0], the slot using the precoding matrix W[1], the slot using the precoding matrix W[2], the slot using the precoding matrix W[3] and the slot using the precoding matrix W[4].

Similarly, out of the above-mentioned 1000 slots required to transmit the 6000×2 bits constituting two coded blocks, when the modulation scheme is 64QAM, 200 slots are necessary for each of the slot using the precoding matrix W[0], the slot using the precoding matrix W[1], the slot using the precoding matrix W[2], the slot using the precoding matrix W[3] and the slot using the precoding matrix W[4]. Also, in order to transmit the first coded block, 200 slots are necessary for each of the slot using the precoding matrix W[0], the slot using the precoding matrix W[1], the slot using the precoding matrix W[2], the slot using the precoding matrix W[3] and the slot using the precoding matrix W[4]. In order to transmit the second coded block, 200 slots are necessary for each of the slot using the precoding matrix W[0], the slot using the precoding matrix W[1], the slot using the precoding matrix W[2], the slot using the precoding matrix W[3] and the slot using the precoding matrix W[4].
As described above, the precoding matrices in the scheme of regularly hopping between precoding matrices with an N-slot period (cycle) are represented as W[0], W[1], W[2], . . . , W[N−2], W[N−1]. Note that W[0], W[1], W[2], . . . , W[N−2], W[N−1] are composed of at least two different precoding matrices (the same precoding matrix may appear more than once in W[0], W[1], W[2], . . . , W[N−2], W[N−1]).

When all the bits constituting two coded blocks are transmitted, letting the number of slots using the precoding matrix W[0] be K0, letting the number of slots using the precoding matrix W[1] be K1, letting the number of slots using the precoding matrix W[i] be Ki (i=0, 1, 2, . . . , N−1), and letting the number of slots using the precoding matrix W[N−1] be KN-1, the following condition should be satisfied.

Condition #133
K0=K1= . . . =Ki= . . . =KN-1, i.e., Ka=Kb for ∀a, ∀b (a, b=0, 1, 2, . . . , N−1 (a, b are integers from 0 to N−1); a≠b)

When all the bits constituting the first coded block are transmitted, letting the number of slots using the precoding matrix W[0] be K0,1, letting the number of slots using the precoding matrix W[1] be K1,1, letting the number of slots using the precoding matrix W[i] be Ki,1 (i=0, 1, 2, . . . , N−1), and letting the number of slots using the precoding matrix W[N−1] be KN-1,1, the following condition should be satisfied.

Condition #134
K0,1=K1,1= . . . =Ki,1= . . . =KN-1,1, i.e., Ka,1=Kb,1 for ∀a, ∀b (a, b=0, 1, 2, . . . , N−1 (a, b are integers from 0 to N−1); a≠b)

When all the bits constituting the second coded block are transmitted, letting the number of slots using the precoding matrix W[0] be K0,2, letting the number of slots using the precoding matrix W[1] be K1,2, letting the number of slots using the precoding matrix W[i] be Ki,2 (i=0, 1, 2, . . . , N−1), and letting the number of slots using the precoding matrix W[N−1] be KN-1,2, the following condition should be satisfied.

Condition #135
K0,2=K1,2= . . . =Ki,2= . . . =KN-1,2, i.e., Ka,2=Kb,2 for ∀a, ∀b (a, b=0, 1, 2, . . . , N−1 (a, b are integers from 0 to N−1); a≠b)

When the communication system supports a plurality of modulation schemes, and a modulation scheme is selected and used from among the supported modulation schemes, Condition #133, Condition #134 and Condition #135 should be satisfied. When the plurality of modulation schemes are supported, however, since the number of bits that one symbol can transmit is generally different depending on modulation schemes (in some cases, the number of bits can be the same), there can be a modulation scheme that is not able to satisfy Condition #133, Condition #134 and Condition #135. In such a case, instead of satisfying Condition #133, Condition #134 and Condition #135, the following conditions may be satisfied.

Condition #136
The difference between Ka and Kb is 0 or 1, i.e., |Ka−Kb| is 0 or 1 for ∀a, ∀b (a, b=0, 1, 2, . . . , N−1 (a, b are integers from 0 to N−1); a≠b)

Condition #137
The difference between Ka,1 and Kb,1 is 0 or 1, i.e., |Ka,1−Kb,1| is 0 or 1 for ∀a, ∀b (a, b=0, 1, 2, . . . , N−1 (a, b are integers from 0 to N−1); a≠b)

Condition #138
The difference between Ka,2 and Kb,2 is 0 or 1, i.e., |Ka,2−Kb,2| is 0 or 1 for ∀a, ∀b (a, b=0, 1, 2, . . . , N−1 (a, b are integers from 0 to N−1); a≠b)

By associating the coded blocks with precoding matrices as described above, the precoding matrices used to transmit the coded blocks are unbiased. Therefore, an effect of improving data reception quality in the reception device is obtained.
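One scheduler that meets these balance conditions by construction is a simple round-robin assignment of matrix indices to slots. The sketch below (helper name ours) illustrates this for the 16QAM case above, in which the two coded blocks share the same 1500 slots (the first block on s1, the second on s2), so the per-block counts of Conditions #134 and #135 coincide with the aggregate counts of Condition #133:

    def schedule_slots(total_slots, num_matrices):
        # Round-robin: per-matrix counts differ by at most 1 (Condition #136),
        # and are exactly equal when num_matrices divides total_slots
        # (Condition #133).
        return [slot % num_matrices for slot in range(total_slots)]

    slots = schedule_slots(1500, 5)
    counts = [slots.count(m) for m in range(5)]
    print(counts)                        # [300, 300, 300, 300, 300]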
In the present embodiment, in the scheme of regularly hopping between precoding matrices, N precoding matrices W[0], W[1], W[2], . . . , W[N−2], W[N−1] are prepared for the precoding hopping scheme with an N-slot period (cycle). One way is to arrange the precoding matrices in the order W[0], W[1], W[2], . . . , W[N−2], W[N−1] in the frequency domain. The present invention is not, however, limited in this way. As described in Embodiment 1, precoding weights may also be changed by arranging the N precoding matrices W[0], W[1], W[2], . . . , W[N−2], W[N−1] generated in the present embodiment in the time domain or in the frequency-time domain. Note that a precoding hopping scheme with the N-slot period (cycle) has been described, but the same advantageous effect may be obtained by randomly using N different precoding matrices. In other words, the N different precoding matrices do not need to be used in a regular period (cycle). In this case, when the conditions described in the present embodiment are satisfied, the probability that the reception device achieves excellent data reception quality is high.

As described in Embodiment 15, there are modes such as the spatial multiplexing MIMO system, the MIMO system with a fixed precoding matrix, the space-time block coding scheme, the scheme of transmitting one stream and the scheme of regularly hopping between precoding matrices. The transmission device (broadcast station, base station) may select one transmission scheme from among these modes. In this case, from among the spatial multiplexing MIMO system, the MIMO system with a fixed precoding matrix, the space-time block coding scheme, the scheme of transmitting one stream and the scheme of regularly hopping between precoding matrices, the present embodiment may be implemented in the (sub)carrier group for which the scheme of regularly hopping between precoding matrices is selected.

Supplementary Explanation

In the present description, it is considered that a communication/broadcasting device such as a broadcast station, a base station, an access point, a terminal, a mobile phone, or the like is provided with the transmission device, and that a communication device such as a television, radio, terminal, personal computer, mobile phone, access point, base station, or the like is provided with the reception device. Additionally, it is considered that the transmission device and the reception device in the present invention have a communication function and are capable of being connected via some sort of interface (such as a USB) to a device for executing applications for a television, radio, personal computer, mobile phone, or the like.

Furthermore, in the present embodiment, symbols other than data symbols, such as pilot symbols (preamble, unique word, postamble, reference symbol, and the like), symbols for control information, and the like may be arranged in the frame in any way. While the terms “pilot symbol” and “symbols for control information” have been used here, any term may be used, since the function itself is what is important. It suffices for a pilot symbol, for example, to be a known symbol modulated with PSK modulation in the transmission and reception devices (or for the reception device to be able to synchronize in order to know the symbol transmitted by the transmission device). The reception device uses this symbol for frequency synchronization, time synchronization, channel estimation (estimation of Channel State Information (CSI) for each modulated signal), detection of signals, and the like.
A symbol for control information is for transmitting information other than data (of applications or the like) that needs to be transmitted to the communication partner for achieving communication (for example, the modulation scheme, error correction coding scheme, coding rate of the error correction coding scheme, setting information in the upper layer, and the like).

Note that the present invention is not limited to the above Embodiments 1-5 and may be embodied with a variety of modifications. For example, the above embodiments describe communication devices, but the present invention is not limited to these devices and may be implemented as software for the corresponding communication scheme.

Furthermore, a precoding hopping scheme used in a scheme of transmitting two modulated signals from two antennas has been described, but the present invention is not limited in this way. The present invention may also be embodied as a precoding hopping scheme for similarly changing precoding weights (matrices) in the context of a scheme whereby four mapped signals are precoded to generate four modulated signals that are transmitted from four antennas, or more generally, whereby N mapped signals are precoded to generate N modulated signals that are transmitted from N antennas.

In the present description, the terms “precoding”, “precoding weight”, “precoding matrix” and the like are used, but any term may be used (such as “codebook”, for example) since the signal processing itself is what is important in the present invention.

Furthermore, in the present description, the reception device has been described as using ML calculation, APP, Max-log APP, ZF, MMSE, or the like, which yields soft decision results (log-likelihood, log-likelihood ratio) or hard decision results (“0” or “1”) for each bit of data transmitted by the transmission device. This process may be referred to as detection, demodulation, estimation, or separation.

Assume that precoded baseband signals z1(i), z2(i) (where i represents the order in terms of time or frequency (carrier)) are generated by precoding baseband signals s1(i) and s2(i) for two streams while regularly hopping between precoding matrices. Let the in-phase component I and the quadrature component Q of the precoded baseband signal z1(i) be I1(i) and Q1(i) respectively, and let the in-phase component I and the quadrature component Q of the precoded baseband signal z2(i) be I2(i) and Q2(i) respectively. In this case, the baseband components may be switched, and a modulated signal corresponding to the switched baseband signal r1(i) may be transmitted from transmit antenna 1 while a modulated signal corresponding to the switched baseband signal r2(i) is transmitted from transmit antenna 2, at the same time and over the same frequency. Baseband components may be switched as follows.

Let the in-phase component and the quadrature component of the switched baseband signal r1(i) be I1(i) and Q2(i) respectively, and the in-phase component and the quadrature component of the switched baseband signal r2(i) be I2(i) and Q1(i) respectively.

Let the in-phase component and the quadrature component of the switched baseband signal r1(i) be I1(i) and I2(i) respectively, and the in-phase component and the quadrature component of the switched baseband signal r2(i) be Q1(i) and Q2(i) respectively.
Let the in-phase component and the quadrature component of the switched baseband signal r1(i) be I2(i) and I1(i) respectively, and the in-phase component and the quadrature component of the switched baseband signal r2(i) be Q1(i) and Q2(i) respectively.
Let the in-phase component and the quadrature component of the switched baseband signal r1(i) be I1(i) and I2(i) respectively, and the in-phase component and the quadrature component of the switched baseband signal r2(i) be Q2(i) and Q1(i) respectively.
Let the in-phase component and the quadrature component of the switched baseband signal r1(i) be I2(i) and I1(i) respectively, and the in-phase component and the quadrature component of the switched baseband signal r2(i) be Q2(i) and Q1(i) respectively.
Let the in-phase component and the quadrature component of the switched baseband signal r1(i) be I1(i) and Q2(i) respectively, and the in-phase component and the quadrature component of the switched baseband signal r2(i) be Q1(i) and I2(i) respectively.
Let the in-phase component and the quadrature component of the switched baseband signal r1(i) be Q2(i) and I1(i) respectively, and the in-phase component and the quadrature component of the switched baseband signal r2(i) be I2(i) and Q1(i) respectively.
Let the in-phase component and the quadrature component of the switched baseband signal r1(i) be Q2(i) and I1(i) respectively, and the in-phase component and the quadrature component of the switched baseband signal r2(i) be Q1(i) and I2(i) respectively.
Let the in-phase component and the quadrature component of the switched baseband signal r2(i) be I1(i) and I2(i) respectively, and the in-phase component and the quadrature component of the switched baseband signal r1(i) be Q1(i) and Q2(i) respectively.
Let the in-phase component and the quadrature component of the switched baseband signal r2(i) be I2(i) and I1(i) respectively, and the in-phase component and the quadrature component of the switched baseband signal r1(i) be Q1(i) and Q2(i) respectively.
Let the in-phase component and the quadrature component of the switched baseband signal r2(i) be I1(i) and I2(i) respectively, and the in-phase component and the quadrature component of the switched baseband signal r1(i) be Q2(i) and Q1(i) respectively.
Let the in-phase component and the quadrature component of the switched baseband signal r2(i) be I2(i) and I1(i) respectively, and the in-phase component and the quadrature component of the switched baseband signal r1(i) be Q2(i) and Q1(i) respectively.
Let the in-phase component and the quadrature component of the switched baseband signal r2(i) be I1(i) and Q2(i) respectively, and the in-phase component and the quadrature component of the switched baseband signal r1(i) be I2(i) and Q1(i) respectively.
Let the in-phase component and the quadrature component of the switched baseband signal r2(i) be I1(i) and Q2(i) respectively, and the in-phase component and the quadrature component of the switched baseband signal r1(i) be Q1(i) and I2(i) respectively.
Let the in-phase component and the quadrature component of the switched baseband signal r2(i) be Q2(i) and I1(i) respectively, and the in-phase component and the quadrature component of the switched baseband signal r1(i) be I2(i) and Q1(i) respectively.
Let the in-phase component and the quadrature component of the switched baseband signal r2(i) be Q2(i) and I1(i) respectively, and the in-phase component and the quadrature component of the switched baseband signal r1(i) be Q1(i) and I2(i) respectively.
In the above description, signals in two streams are precoded and the in-phase components and quadrature components of the precoded signals are switched, but the present invention is not limited in this way. Signals in more than two streams may be precoded, and the in-phase components and quadrature components of the precoded signals may be switched.
In the above-mentioned examples, switching between baseband signals at the same time (at the same frequency ((sub)carrier)) has been described, but the present invention is not limited to switching between baseband signals at the same time. As examples, the following switching patterns are also possible:
Let the in-phase component and the quadrature component of the switched baseband signal r1(i) be I1(i+v) and Q2(i+w) respectively, and the in-phase component and the quadrature component of the switched baseband signal r2(i) be I2(i+w) and Q1(i+v) respectively.
Let the in-phase component and the quadrature component of the switched baseband signal r1(i) be I1(i+v) and I2(i+w) respectively, and the in-phase component and the quadrature component of the switched baseband signal r2(i) be Q1(i+v) and Q2(i+w) respectively.
Let the in-phase component and the quadrature component of the switched baseband signal r1(i) be I2(i+w) and I1(i+v) respectively, and the in-phase component and the quadrature component of the switched baseband signal r2(i) be Q1(i+v) and Q2(i+w) respectively.
Let the in-phase component and the quadrature component of the switched baseband signal r1(i) be I1(i+v) and I2(i+w) respectively, and the in-phase component and the quadrature component of the switched baseband signal r2(i) be Q2(i+w) and Q1(i+v) respectively.
Let the in-phase component and the quadrature component of the switched baseband signal r1(i) be I2(i+w) and I1(i+v) respectively, and the in-phase component and the quadrature component of the switched baseband signal r2(i) be Q2(i+w) and Q1(i+v) respectively.
Let the in-phase component and the quadrature component of the switched baseband signal r1(i) be I1(i+v) and Q2(i+w) respectively, and the in-phase component and the quadrature component of the switched baseband signal r2(i) be Q1(i+v) and I2(i+w) respectively.
Let the in-phase component and the quadrature component of the switched baseband signal r1(i) be Q2(i+w) and I1(i+v) respectively, and the in-phase component and the quadrature component of the switched baseband signal r2(i) be I2(i+w) and Q1(i+v) respectively.
Let the in-phase component and the quadrature component of the switched baseband signal r1(i) be Q2(i+w) and I1(i+v) respectively, and the in-phase component and the quadrature component of the switched baseband signal r2(i) be Q1(i+v) and I2(i+w) respectively.
Let the in-phase component and the quadrature component of the switched baseband signal r2(i) be I1(i+v) and I2(i+w) respectively, and the in-phase component and the quadrature component of the switched baseband signal r1(i) be Q1(i+v) and Q2(i+w) respectively.
Let the in-phase component and the quadrature component of the switched baseband signal r2(i) be I2(i+w) and I1(i+v) respectively, and the in-phase component and the quadrature component of the switched baseband signal r1(i) be Q1(i+v) and Q2(i+w) respectively.
Let the in-phase component and the quadrature component of the switched baseband signal r2(i) be I1(i+v) and I2(i+w) respectively, and the in-phase component and the quadrature component of the switched baseband signal r1(i) be Q2(i+w) and Q1(i+v) respectively.
Let the in-phase component and the quadrature component of the switched baseband signal r2(i) be I2(i+w) and I1(i+v) respectively, and the in-phase component and the quadrature component of the switched baseband signal r1(i) be Q2(i+w) and Q1(i+v) respectively.
Let the in-phase component and the quadrature component of the switched baseband signal r2(i) be I1(i+v) and Q2(i+w) respectively, and the in-phase component and the quadrature component of the switched baseband signal r1(i) be I2(i+w) and Q1(i+v) respectively.
Let the in-phase component and the quadrature component of the switched baseband signal r2(i) be I1(i+v) and Q2(i+w) respectively, and the in-phase component and the quadrature component of the switched baseband signal r1(i) be Q1(i+v) and I2(i+w) respectively.
Let the in-phase component and the quadrature component of the switched baseband signal r2(i) be Q2(i+w) and I1(i+v) respectively, and the in-phase component and the quadrature component of the switched baseband signal r1(i) be I2(i+w) and Q1(i+v) respectively.
Let the in-phase component and the quadrature component of the switched baseband signal r2(i) be Q2(i+w) and I1(i+v) respectively, and the in-phase component and the quadrature component of the switched baseband signal r1(i) be Q1(i+v) and I2(i+w) respectively.
FIG. 96 illustrates the above description. As shown in FIG. 96, let the in-phase component and the quadrature component of the precoded baseband signal z1(i) be I1(i) and Q1(i) respectively, and the in-phase component and the quadrature component of the precoded baseband signal z2(i) be I2(i) and Q2(i) respectively. Then, letting the in-phase component and the quadrature component of the switched baseband signal r1(i) be Ir1(i) and Qr1(i) respectively, and the in-phase component and the quadrature component of the switched baseband signal r2(i) be Ir2(i) and Qr2(i) respectively, the components Ir1(i), Qr1(i), Ir2(i), and Qr2(i) are given by any of the patterns described above. Note that, in this example, switching between precoded baseband signals at the same time (at the same frequency ((sub)carrier)) has been described, but, as described above, the switching may also be between precoded baseband signals at different times (at different frequencies ((sub)carriers)). In this case, a modulated signal corresponding to the switched baseband signal r1(i) may be transmitted from transmit antenna 1 while a modulated signal corresponding to the switched baseband signal r2(i) is transmitted from transmit antenna 2, at the same time and over the same frequency.
Each of the transmit antennas of the transmission device and the receive antennas of the reception device shown in the figures may be formed by a plurality of antennas.
In this description, the symbol "∀" represents the universal quantifier, and the symbol "∃" represents the existential quantifier.
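As a concrete illustration of the baseband switching listed above, the following Python fragment implements the first pattern (r1(i) takes (I1(i), Q2(i)) and r2(i) takes (I2(i), Q1(i))); the function name and test values are illustrative assumptions, not part of the embodiment:

```python
import numpy as np

def swap_quadrature(z1, z2):
    """Switch baseband components: r1 takes the in-phase part of z1 and the
    quadrature part of z2; r2 takes the in-phase part of z2 and the
    quadrature part of z1."""
    r1 = z1.real + 1j * z2.imag  # (I1(i), Q2(i))
    r2 = z2.real + 1j * z1.imag  # (I2(i), Q1(i))
    return r1, r2

# Example with two precoded symbol streams indexed by i:
z1 = np.array([1 + 1j, -1 + 1j]) / np.sqrt(2)
z2 = np.array([1 - 1j, -1 - 1j]) / np.sqrt(2)
r1, r2 = swap_quadrature(z1, z2)  # r1, r2 are then sent from antennas 1 and 2
```

The time-shifted variants correspond to indexing z1 at i+v and z2 at i+w before performing the same component swap.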
Furthermore, in this description, the units of phase, such as for the argument, in the complex plane are radians. When using the complex plane, complex numbers may be shown in polar form by polar coordinates. If a complex number z = a + jb (where a and b are real numbers and j is an imaginary unit) corresponds to the point (a, b) on the complex plane, and this point is represented in polar coordinates as [r, θ], then the following holds:
[Math 592]
a = r \cos\theta, \quad b = r \sin\theta, \quad r = \sqrt{a^2 + b^2}
Here, r is the absolute value of z (r = |z|), and θ is its argument. Furthermore, z = a + jb is represented as re^{j\theta}.
In the description of the present invention, the baseband signal, modulated signal s1, modulated signal s2, modulated signal z1, and modulated signal z2 are complex signals. Complex signals are represented as I + jQ (where j is an imaginary unit), I being the in-phase signal and Q being the quadrature signal. In this case, I may be zero, or Q may be zero.
FIG. 59 shows an example of a broadcasting system that uses the scheme of regularly hopping between precoding matrices described in this description. In FIG. 59, a video encoder 5901 receives video images as input, encodes the video images, and outputs encoded video images as data 5902. An audio encoder 5903 receives audio as input, encodes the audio, and outputs encoded audio as data 5904. A data encoder 5905 receives data as input, encodes the data (for example, by data compression), and outputs encoded data as data 5906. Together, these encoders are referred to as information source encoders 5900.
A transmission unit 5907 receives, as input, the data 5902 of the encoded video, the data 5904 of the encoded audio, and the data 5906 of the encoded data, sets some or all of these pieces of data as transmission data, and outputs transmission signals 5908_1 through 5908_N after performing processing such as error correction encoding, modulation, and precoding (for example, the signal processing of the transmission device in FIG. 3). The transmission signals 5908_1 through 5908_N are transmitted by antennas 5909_1 through 5909_N as radio waves.
A reception unit 5912 receives, as input, received signals 5911_1 through 5911_M received by antennas 5910_1 through 5910_M, performs processing such as frequency conversion, decoding of precoding, log-likelihood ratio calculation, and error correction decoding (processing by the reception device in FIG. 7, for example), and outputs received data 5913, 5915, and 5917. Information source decoders 5919 receive, as input, the received data 5913, 5915, and 5917. A video decoder 5914 receives, as input, the received data 5913, performs video decoding, and outputs a video signal; video images are then shown on a television or display monitor. Furthermore, an audio decoder 5916 receives, as input, the received data 5915, performs audio decoding, and outputs an audio signal; audio is then produced by a speaker. A data decoder 5918 receives, as input, the received data 5917, performs data decoding, and outputs the information in the data.
In the above embodiments describing the present invention, the number of encoders in the transmission device when using a multi-carrier transmission scheme such as OFDM may be any number, as described above. Therefore, as in FIG. 4, for example, it is of course possible for the transmission device to have one encoder and to adopt a scheme of distributing its output to a multi-carrier transmission scheme such as OFDM. In this case, the wireless units 310A and 310B in FIG. 4 are replaced by the OFDM related processors 1301A and 1301B in FIG. 13.
The description of the OFDM related processors is as per Embodiment 1.
The symbol arrangement scheme described in Embodiments A1 through A5 and in Embodiment 1 may be similarly implemented as a precoding scheme for regularly hopping between precoding matrices using a plurality of different precoding matrices, the precoding scheme differing from the "scheme for hopping between different precoding matrices" in the present description. The same holds true for other embodiments as well. The following is a supplementary explanation regarding a plurality of different precoding matrices.
Let N precoding matrices be represented as F[0], F[1], F[2], . . . , F[N−3], F[N−2], F[N−1] for a precoding scheme for regularly hopping between precoding matrices. In this case, the "plurality of different precoding matrices" referred to above are assumed to satisfy the following two conditions (Condition *1 and Condition *2).
[Math 593] (Condition *1)
F[x] \neq F[y] \quad \text{for } \forall x, \forall y \ (x, y = 0, 1, 2, \ldots, N-3, N-2, N-1; \ x \neq y)
Here, x is an integer from 0 to N−1, y is an integer from 0 to N−1, and x ≠ y. For all x and all y satisfying the above, the relationship F[x] ≠ F[y] holds.
[Math 594] (Condition *2)
F[x] = k \times F[y]
Letting x be an integer from 0 to N−1, y be an integer from 0 to N−1, and x ≠ y, for all x and all y, no real or complex number k satisfying the above equation exists.
The following is a supplementary explanation using a 2×2 matrix as an example. Let 2×2 matrices R and S be represented as follows:
[Math 595]
R = \begin{pmatrix} a & b \\ c & d \end{pmatrix}
[Math 596]
S = \begin{pmatrix} e & f \\ g & h \end{pmatrix}
Let a = Ae^{jδ11}, b = Be^{jδ12}, c = Ce^{jδ21}, and d = De^{jδ22}, and let e = Ee^{jγ11}, f = Fe^{jγ12}, g = Ge^{jγ21}, and h = He^{jγ22}. A, B, C, D, E, F, G, and H are real numbers equal to or greater than 0, and δ11, δ12, δ21, δ22, γ11, γ12, γ21, and γ22 are expressed in radians. In this case, R ≠ S means that at least one of the following holds: (1) a ≠ e, (2) b ≠ f, (3) c ≠ g, or (4) d ≠ h.
A precoding matrix may be the matrix R wherein one of a, b, c, and d is zero. In other words, the precoding matrix may be such that (1) a is zero, and b, c, and d are not zero; (2) b is zero, and a, c, and d are not zero; (3) c is zero, and a, b, and d are not zero; or (4) d is zero, and a, b, and c are not zero.
In the system example in the description of the present invention, a communication system using a MIMO scheme was described, wherein two modulated signals are transmitted from two antennas and are received by two antennas. The present invention may, however, of course also be adopted in a communication system using a MISO (Multiple Input Single Output) scheme. In the case of the MISO scheme, adoption of a precoding scheme for regularly hopping between a plurality of precoding matrices in the transmission device is the same as described above. On the other hand, the reception device is not provided with the antenna 701_Y, the wireless unit 703_Y, the channel fluctuation estimating unit 707_1 for the modulated signal z1, or the channel fluctuation estimating unit 707_2 for the modulated signal z2 in the structure shown in FIG. 7. In this case as well, however, the processing detailed in the present description may be performed to estimate the data transmitted by the transmission device. Note that it is widely known that a plurality of signals transmitted at the same frequency and the same time can be received by one antenna and decoded (for reception by one antenna, it suffices to perform calculation such as ML calculation (Max-log APP or the like)).
In the present invention, it suffices for the signal processing unit 711 in FIG. 7 to perform demodulation (detection) taking into consideration the precoding scheme for regular hopping used at the transmitting end.
Programs for executing the above communication scheme may, for example, be stored in advance in ROM (Read Only Memory) and be caused to operate by a CPU (Central Processing Unit). Furthermore, the programs for executing the above communication scheme may be stored in a computer-readable recording medium, the programs stored in the recording medium may be loaded into the RAM (Random Access Memory) of the computer, and the computer may be caused to operate in accordance with the programs.
The components in the above embodiments and the like may typically be assembled as an LSI (Large Scale Integration), a type of integrated circuit. Individual components may respectively be made into discrete chips, or part or all of the components in each embodiment may be made into one chip. While an LSI has been referred to, the terms IC (Integrated Circuit), system LSI, super LSI, or ultra LSI may be used depending on the degree of integration. Furthermore, the scheme for assembling integrated circuits is not limited to LSI, and a dedicated circuit or a general-purpose processor may be used. An FPGA (Field Programmable Gate Array), which is programmable after the LSI is manufactured, or a reconfigurable processor, which allows reconfiguration of the connections and settings of circuit cells inside the LSI, may be used. Furthermore, if technology for forming integrated circuits that replaces LSIs emerges, owing to advances in semiconductor technology or to another derivative technology, the integration of functional blocks may naturally be accomplished using such technology. The application of biotechnology or the like is possible.
With the symbol arranging scheme described in Embodiments A1 through A5 and Embodiment 1, the present invention may be similarly implemented by replacing the "scheme of hopping between different precoding matrices" with a "scheme of regularly hopping between precoding matrices using a plurality of different precoding matrices". Note that the "plurality of different precoding matrices" are as described above. As the "scheme of regularly hopping between precoding matrices using a plurality of different precoding matrices", a scheme of preparing the N different precoding matrices described above and hopping between them with an H-slot period (cycle) (H being a natural number larger than N) may be used (one example is the scheme described in Embodiment C2).
With the symbol arranging scheme described in Embodiment 1, the present invention may be similarly implemented using the precoding scheme of regularly hopping between precoding matrices described in Embodiments C1 through C5. Similarly, the precoding scheme of regularly hopping between precoding matrices described in Embodiments C1 through C5 may be used as the precoding scheme of regularly hopping between precoding matrices described in Embodiments A1 through A5.
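Before moving on, here is a small numerical sketch of how Conditions *1 and *2 above might be checked for a candidate matrix set; the function name and tolerance are illustrative assumptions:

```python
import numpy as np

def are_valid_hopping_matrices(F, tol=1e-9):
    """Check Conditions *1 and *2 for precoding matrices F[0..N-1]: no two
    matrices are equal, and no matrix is a (real or complex) scalar multiple
    of another."""
    N = len(F)
    for x in range(N):
        for y in range(N):
            if x == y:
                continue
            # Condition *1: F[x] != F[y].
            if np.allclose(F[x], F[y], atol=tol):
                return False
            # Condition *2: F[x] != k * F[y] for any scalar k. If such a k
            # existed, it would equal the ratio at any nonzero entry of F[y].
            idx = np.flatnonzero(np.abs(F[y]) > tol)
            if idx.size:
                k = F[x].ravel()[idx[0]] / F[y].ravel()[idx[0]]
                if np.allclose(F[x], k * F[y], atol=tol):
                    return False
    return True
```

Since Condition *2 with k = 1 implies Condition *1, the explicit equality test is redundant, but it is kept to mirror the two conditions as stated.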
Embodiment D1
The following describes the scheme of regularly hopping between precoding matrices described in Non-Patent Literatures 12 through 15 when using a block code, such as a Quasi-Cyclic Low-Density Parity-Check (QC-LDPC) code (or an LDPC code other than a QC-LDPC code), a concatenated code consisting of an LDPC code and a Bose-Chaudhuri-Hocquenghem (BCH) code, a turbo code, or a duo-binary turbo code using tail-biting. Note that the present embodiment may be implemented using either a scheme of regularly hopping between precoding matrices represented by complex numbers or a scheme of regularly hopping between precoding matrices represented by real numbers, which is described below. The following describes a case of transmitting two streams s1 and s2 as an example.
Note that, when control information and the like are not required to perform encoding using the block code, the number of bits constituting the coded block is the same as the number of bits constituting the block code (however, the control information and the like described below may be included). When control information and the like (e.g. CRC (cyclic redundancy check), transmission parameters) are required to perform encoding using the block code, the number of bits constituting the coded block is the sum of the number of bits constituting the block code and the number of bits of the control information and the like.
FIG. 97 shows the number of symbols and slots required for one coded block when the block code is used, in a case where the two streams s1 and s2 are transmitted and the transmission device has a single encoder, as shown in the transmission device in FIG. 4 (note that either single-carrier transmission or multi-carrier transmission such as OFDM may be used as the transmission system).
As shown in FIG. 97, let the number of bits constituting one coded block in the block code be 6000 bits. In order to transmit these 6000 bits, 3000 symbols, 1500 symbols, and 1000 symbols are necessary when the modulation scheme is QPSK, 16QAM, and 64QAM, respectively.
Since two streams are transmitted simultaneously in the transmission device shown in FIG. 4, when the modulation scheme is QPSK, 1500 of the above-mentioned 3000 symbols are allocated to s1 and the remaining 1500 symbols are allocated to s2. Therefore, 1500 slots (this unit is hereinafter referred to as a "slot") are necessary to transmit the 1500 symbols in s1 and the 1500 symbols in s2. By the same reasoning, 750 slots are necessary to transmit all the bits constituting one coded block when the modulation scheme is 16QAM, and 500 slots are necessary when the modulation scheme is 64QAM.
The present embodiment describes a scheme of initializing precoding matrices in a case where the transmission device in FIG. 4 is compatible with a multi-carrier scheme, such as the OFDM scheme, when the precoding scheme of regularly hopping between precoding matrices described in this description is used.
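The symbol and slot counts above follow from simple arithmetic: bits per block divided by bits per symbol, then halved because the two streams s1 and s2 share each slot. A short sketch reproducing the numbers in FIG. 97:

```python
# Reproducing the slot counts of FIG. 97: a 6000-bit coded block, two streams
# transmitted simultaneously, so each slot carries two symbols.
BLOCK_BITS = 6000
BITS_PER_SYMBOL = {"QPSK": 2, "16QAM": 4, "64QAM": 6}

for scheme, bits in BITS_PER_SYMBOL.items():
    symbols = BLOCK_BITS // bits   # total symbols for one coded block
    slots = symbols // 2           # two streams (s1, s2) per slot
    print(scheme, symbols, slots)
# Output: QPSK 3000 1500 / 16QAM 1500 750 / 64QAM 1000 500
```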
Next, a case where the transmission device transmits modulated signals each having the frame structure shown in FIGS. 99A and 99B is considered. FIG. 99A shows a frame structure in the time and frequency domain for a modulated signal z1 (transmitted by the antenna 312A). FIG. 99B shows a frame structure in the time and frequency domain for a modulated signal z2 (transmitted by the antenna 312B). In this case, the modulated signal z1 and the modulated signal z2 are assumed to occupy the same frequency (bandwidth) and to exist at the same time.
As shown in FIG. 99A, the transmission device transmits a preamble (control symbol) in an interval A. The preamble is a symbol for transmitting control information to the communication partner and is assumed to include information on the modulation scheme for transmitting the first coded block and the second coded block. The transmission device is to transmit the first coded block in an interval B and the second coded block in an interval C. The transmission device transmits the preamble (control symbol) in an interval D. This preamble is assumed to include information on the modulation scheme for transmitting the third coded block, the fourth coded block, and so on. The transmission device is to transmit the third coded block in an interval E and the fourth coded block in an interval F.
As shown in FIG. 99B, the transmission device likewise transmits a preamble (control symbol) in the interval A, which is assumed to include information on the modulation scheme for transmitting the first coded block and the second coded block. The transmission device is to transmit the first coded block in the interval B and the second coded block in the interval C. The transmission device transmits the preamble (control symbol) in the interval D, which is assumed to include information on the modulation scheme for transmitting the third coded block, the fourth coded block, and so on. The transmission device is to transmit the third coded block in the interval E and the fourth coded block in the interval F.
FIG. 100 shows the number of slots used when the coded blocks are transmitted as shown in FIG. 97, in particular when 16QAM is used as the modulation scheme for the first coded block: 750 slots are necessary to transmit the first coded block. Similarly, FIG. 100 shows the number of slots used when QPSK is used as the modulation scheme for the second coded block: 1500 slots are necessary to transmit the second coded block.
FIG. 101 shows the number of slots used when the coded block is transmitted as shown in FIG. 97, in particular when QPSK is used as the modulation scheme for the third coded block: 1500 slots are necessary to transmit the third coded block.
As described in this description, a case is considered where a phase shift is not performed for the modulated signal z1, i.e. the modulated signal transmitted by the antenna 312A, and is performed for the modulated signal z2, i.e. the modulated signal transmitted by the antenna 312B.
In this case, FIGS. 100 and 101 show the scheme of regularly hopping between precoding matrices. First, assume that seven precoding matrices are prepared to regularly hop between precoding matrices, and are referred to as #0, #1, #2, #3, #4, #5, and #6. The precoding matrices are used regularly and cyclically; that is to say, the precoding matrices are changed regularly and cyclically in the order #0, #1, #2, #3, #4, #5, #6, #0, #1, #2, #3, #4, #5, #6, #0, #1, #2, #3, #4, #5, #6, . . . .
First, as shown in FIG. 100, 750 slots exist in the first coded block. Therefore, starting from #0, the precoding matrices are used in the order #0, #1, #2, #3, #4, #5, #6, #0, #1, #2, . . . , #4, #5, #6, #0, ending with #0 used for the 750th slot.
Next, the precoding matrices are to be applied to each slot in the second coded block. Since this description assumes application to multicast communication and broadcasting, one possibility is that a reception terminal does not need the first coded block and extracts only the second coded block. In such a case, even though precoding matrix #0 is used to transmit the last slot in the first coded block, precoding matrix #1 would be used first to transmit the second coded block. In this case, the following two schemes are considered:
(a) The above-mentioned terminal monitors how the first coded block is transmitted, i.e. the terminal monitors the pattern of the precoding matrix used to transmit the last slot in the first coded block, and estimates the precoding matrix to be used to transmit the first slot in the second coded block; and
(b) The transmission device transmits information on the precoding matrix used to transmit the first slot in the second coded block, without performing (a).
In the case of (a), since the terminal has to monitor transmission of the first coded block, power consumption increases. In the case of (b), the transmission efficiency of data is reduced. Therefore, there is room for improvement in the allocation of precoding matrices as described above.
In order to address the above-mentioned problems, a scheme of fixing the precoding matrix used to transmit the first slot in each coded block is proposed. Therefore, as shown in FIG. 100, the precoding matrix used to transmit the first slot in the second coded block is set to #0, as with the precoding matrix used to transmit the first slot in the first coded block. Similarly, as shown in FIG. 101, the precoding matrix used to transmit the first slot in the third coded block is set not to #3 but to #0, as with the precoding matrix used to transmit the first slot in the first coded block and in the second coded block. With the above-mentioned scheme, an effect of suppressing the problems occurring in (a) and (b) is obtained.
Note that, in the present embodiment, the scheme of initializing the precoding matrices in each coded block, i.e. the scheme in which the precoding matrix used to transmit the first slot in each coded block is fixed to #0, is described. As a different scheme, however, the precoding matrices may be initialized in units of frames. For example, in the symbol for transmitting the preamble and information after transmission of the control symbol, the precoding matrix used in the first slot may be fixed to #0. For example, in FIG. 99, a frame is interpreted as starting from the preamble; the first coded block in the first frame is then the first coded block, and the first coded block in the second frame is the third coded block.
This exemplifies the case where "the precoding matrix used in the first slot may be fixed (to #0) in units of frames", as described above using FIGS. 100 and 101.
The following describes a case where the above-mentioned scheme is applied to a broadcasting system that uses the DVB-T2 standard. The frame structure of the broadcasting system that uses the DVB-T2 standard is as described in Embodiments A1 through A3. As described in Embodiments A1 through A3 using FIGS. 61 and 70, information on the transmission scheme of each PLP (for example, a transmission scheme of transmitting a single modulated signal, a transmission scheme using space-time block coding, or a transmission scheme of regularly hopping between precoding matrices) and on the modulation scheme being used is transmitted to a terminal by the P1 symbol, the P2 symbol, and the control symbol group. In this case, if the terminal extracts only the PLP that it needs and performs demodulation (including separation of signals and signal detection) and error correction decoding on it, the power consumption of the terminal is reduced. Therefore, as described using FIGS. 99 through 101, a scheme is proposed in which the precoding matrix used in the first slot of a PLP transmitted using, as the transmission scheme, the precoding scheme of regularly hopping between precoding matrices is fixed (to #0).
For example, assume that the broadcast station transmits each symbol in the frame structure shown in FIGS. 61 and 70. In this case, as an example, FIG. 102 shows a frame structure in the frequency-time domain when the broadcast station transmits PLP $1 (to avoid confusion, #1 is replaced by $1) and PLP $K using the precoding scheme of regularly hopping between precoding matrices.
Note that, in the following description, as an example, assume that seven precoding matrices are prepared in the precoding scheme of regularly hopping between precoding matrices, and are referred to as #0, #1, #2, #3, #4, #5, and #6. The precoding matrices are used regularly and cyclically, that is to say, changed regularly and cyclically in the order #0, #1, #2, #3, #4, #5, #6, #0, #1, #2, #3, #4, #5, #6, . . . .
As shown in FIG. 102, the slots (symbols) in PLP $1 start with the time T and carrier 3 (10201 in FIG. 102) and end with the time T+4 and carrier 4 (10202 in FIG. 102). That is to say, in PLP $1, the first slot is the time T and carrier 3, the second slot is the time T and carrier 4, the third slot is the time T and carrier 5, . . . , the seventh slot is the time T+1 and carrier 1, the eighth slot is the time T+1 and carrier 2, the ninth slot is the time T+1 and carrier 3, . . . , the fourteenth slot is the time T+1 and carrier 8, the fifteenth slot is the time T+2 and carrier 0, and so on.
The slots (symbols) in PLP $K start with the time S and carrier 4 (10203 in FIG. 102) and end with the time S+8 and carrier 4 (10204 in FIG. 102). That is to say, in PLP $K, the first slot is the time S and carrier 4, the second slot is the time S and carrier 5, the third slot is the time S and carrier 6, . . . , the fifth slot is the time S and carrier 8, the ninth slot is the time S+1 and carrier 1, the tenth slot is the time S+1 and carrier 2, . . . , the sixteenth slot is the time S+1 and carrier 8, the seventeenth slot is the time S+2 and carrier 0, and so on.
Note that slot information, including information on the first slot (symbol) and the last slot (symbol) used by each PLP, is transmitted by the control symbols, including the P1 symbol, the P2 symbol, and the control symbol group.
In this case, as described using FIGS. 99 through 101, the first slot in PLP $1, which is the time T and carrier 3 (10201 in FIG. 102), is precoded using the precoding matrix #0. Similarly, the first slot in PLP $K, which is the time S and carrier 4 (10203 in FIG. 102), is precoded using the precoding matrix #0, regardless of the number of the precoding matrix used in the last slot in PLP $K−1, which is the time S and carrier 3 (10205 in FIG. 102). The first slot in any other PLP transmitted using the precoding scheme of regularly hopping between precoding matrices is also precoded using the precoding matrix #0.
With the above-mentioned scheme, an effect of suppressing the above problems occurring in (a) and (b) is obtained. Naturally, the reception device extracts the necessary PLP, based on the slot information for each PLP included in the control symbols (including the P1 symbol, the P2 symbol, and the control symbol group), and performs demodulation (including separation of signals and signal detection) and error correction decoding. The reception device learns the rule of the precoding scheme of regularly hopping between precoding matrices in advance (when there are a plurality of rules, the transmission device transmits information on the rule to be used, and the reception device learns the rule being used by obtaining the transmitted information). By synchronizing the timing of the precoding matrix hopping rule with the number of the first slot in each PLP, the reception device can demodulate the information symbols (including separation of signals and signal detection).
Next, a case where the broadcast station (base station) transmits a modulated signal having the frame structure shown in FIG. 103 is considered (the frame composed of the symbol groups shown in FIG. 103 is referred to as a main frame). In FIG. 103, elements that operate in a similar way to FIG. 61 bear the same reference signs. The characteristic feature is that the main frame is separated into a subframe for transmitting a single modulated signal and a subframe for transmitting a plurality of modulated signals, so that gain control of received signals can easily be performed. Note that the expression "transmitting a single modulated signal" also covers the case where a plurality of modulated signals identical to the signal that would be transmitted from a single antenna are generated and transmitted from the respective antennas.
In FIG. 103, PLP #1 (6105_1) through PLP #N (6105_N) constitute a subframe 10300 for transmitting a single modulated signal. The subframe 10300 is composed only of such PLPs and does not include a PLP for transmitting a plurality of modulated signals. Also, PLP $1 (10302_1) through PLP $M (10302_M) constitute a subframe 10301 for transmitting a plurality of modulated signals. The subframe 10301 is composed only of such PLPs and does not include a PLP for transmitting a single modulated signal.
In this case, as described above, when the above-mentioned precoding scheme of regularly hopping between precoding matrices is used in the subframe 10301, the first slot in each PLP (PLP $1 (10302_1) through PLP $M (10302_M)) is assumed to be precoded using the precoding matrix #0 (this is referred to as initialization of the precoding matrices).
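A minimal sketch of this initialization rule follows (the function name and block lengths are illustrative): the matrix index simply restarts at #0 with each coded block or PLP, so a receiver that skips earlier blocks needs no knowledge of how they were transmitted.

```python
# Seven precoding matrices #0..#6 are used cyclically, and the cycle is reset
# to #0 at the first slot of every coded block (or PLP).
def matrix_indices(block_lengths, num_matrices=7):
    """Yield (block, slot, matrix index) with per-block re-initialization."""
    for block, length in enumerate(block_lengths):
        for slot in range(length):
            yield block, slot, slot % num_matrices

# First coded block: 750 slots (16QAM); second: 1500 slots (QPSK), as in
# FIGS. 100 and 101. Both blocks start with matrix #0.
schedule = list(matrix_indices([750, 1500]))
assert schedule[0][2] == 0     # first slot of the first block uses #0
assert schedule[749][2] == 0   # the 750th slot also uses #0, as in the text
assert schedule[750][2] == 0   # first slot of the second block is reset to #0
```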
The above-mentioned initialization of precoding matrices, however, is irrelevant to a PLP among PLP $1 (10302_1) through PLP $M (10302_M) that uses another transmission scheme, for example, the transmission scheme using a fixed precoding matrix, the transmission scheme using a spatial multiplexing MIMO system, or the transmission scheme using space-time block coding as described in Embodiments A1 through A3.
As shown in FIG. 104, PLP $1 is assumed to be the first PLP in the subframe for transmitting a plurality of modulated signals in the Xth main frame, and PLP $1′ is assumed to be the first PLP in the subframe for transmitting a plurality of modulated signals in the Yth main frame. Both PLP $1 and PLP $1′ are assumed to use the precoding scheme of regularly hopping between precoding matrices. Note that, in FIG. 104, elements that are similar to the elements shown in FIG. 102 bear the same reference signs.
In this case, the first slot in PLP $1 (10201 in FIG. 104 (the time T and carrier 3)), which is the first PLP in the subframe for transmitting a plurality of modulated signals in the Xth main frame, is precoded using the precoding matrix #0. Similarly, the first slot in PLP $1′ (10401 in FIG. 104 (the time T′ and carrier 7)), which is the first PLP in the subframe for transmitting a plurality of modulated signals in the Yth main frame, is precoded using the precoding matrix #0.
As described above, each main frame is characterized in that the first slot in the first PLP in the subframe for transmitting a plurality of modulated signals is precoded using the precoding matrix #0. This is also important to suppress the above-mentioned problems occurring in (a) and (b).
Note that, in the present embodiment, as shown in FIG. 97, the case where the two streams s1 and s2 are transmitted and the transmission device has a single encoder, as in the transmission device in FIG. 4, is taken as an example. The initialization of precoding matrices described in the present embodiment, however, is also applicable to the case where the two streams s1 and s2 are transmitted and the transmission device has two encoders, as in the transmission device in FIG. 3 and as shown in FIG. 98.
Supplementary Explanation 2
In each of the above-mentioned embodiments, the precoding matrices that the weighting combination unit uses for precoding are represented by complex numbers. The precoding matrices may also be represented by real numbers (referred to as a precoding scheme represented by real numbers).
For example, let the two mapped baseband signals (in the modulation scheme used) be s1(i) and s2(i) (where i represents time or frequency), and let the two precoded baseband signals obtained by the precoding be z1(i) and z2(i). Then, let the in-phase component and the quadrature component of the mapped baseband signal s1(i) be Is1(i) and Qs1(i) respectively, the in-phase component and the quadrature component of the mapped baseband signal s2(i) be Is2(i) and Qs2(i) respectively, the in-phase component and the quadrature component of the precoded baseband signal z1(i) be Iz1(i) and Qz1(i) respectively, and the in-phase component and the quadrature component of the precoded baseband signal z2(i) be Iz2(i) and Qz2(i) respectively. When the precoding matrix composed of real numbers (the precoding matrix represented by real numbers) Hr is used, the following relationship holds.
[Math 597]
\begin{pmatrix} I_{z1}(i) \\ Q_{z1}(i) \\ I_{z2}(i) \\ Q_{z2}(i) \end{pmatrix} = H_r \begin{pmatrix} I_{s1}(i) \\ Q_{s1}(i) \\ I_{s2}(i) \\ Q_{s2}(i) \end{pmatrix}
The precoding matrix composed of real numbers, Hr, is represented as follows:
[Math 598]
H_r = \begin{pmatrix} a_{11} & a_{12} & a_{13} & a_{14} \\ a_{21} & a_{22} & a_{23} & a_{24} \\ a_{31} & a_{32} & a_{33} & a_{34} \\ a_{41} & a_{42} & a_{43} & a_{44} \end{pmatrix}
Here, a11, a12, a13, a14, a21, a22, a23, a24, a31, a32, a33, a34, a41, a42, a43, and a44 are real numbers. However, no row of Hr may consist entirely of zeros: {a11 = 0, a12 = 0, a13 = 0, and a14 = 0} must not hold, {a21 = 0, a22 = 0, a23 = 0, and a24 = 0} must not hold, {a31 = 0, a32 = 0, a33 = 0, and a34 = 0} must not hold, and {a41 = 0, a42 = 0, a43 = 0, and a44 = 0} must not hold. Also, no column of Hr may consist entirely of zeros: {a11 = 0, a21 = 0, a31 = 0, and a41 = 0} must not hold, {a12 = 0, a22 = 0, a32 = 0, and a42 = 0} must not hold, {a13 = 0, a23 = 0, a33 = 0, and a43 = 0} must not hold, and {a14 = 0, a24 = 0, a34 = 0, and a44 = 0} must not hold.
The "scheme of hopping between different precoding matrices" as an application of the precoding scheme of the present invention, such as the symbol arranging scheme described in Embodiments A1 through A5 and Embodiments 1 and 7, may also naturally be implemented as the precoding scheme of regularly hopping between precoding matrices using a plurality of different precoding matrices represented by real numbers, as in the "precoding scheme represented by real numbers" described above. The usefulness of hopping between precoding matrices in the present invention is the same as in the case where the precoding matrices are represented by a plurality of different complex numbers. Note that the "plurality of different precoding matrices" are as described above. As the "precoding scheme of regularly hopping between precoding matrices using a plurality of different precoding matrices represented by real numbers", a scheme of preparing N different precoding matrices (represented by real numbers) and hopping between them with an H-slot period (cycle) (H being a natural number larger than N) may be used (one example is the scheme described in Embodiment C2).
With the symbol arranging scheme described in Embodiment 1, the present invention may be similarly implemented using the precoding scheme of regularly hopping between precoding matrices described in Embodiments C1 through C5. Similarly, the precoding scheme of regularly hopping between precoding matrices described in Embodiments C1 through C5 may be used as the precoding scheme of regularly hopping between precoding matrices described in Embodiments A1 through A5.
INDUSTRIAL APPLICABILITY
The present invention is widely applicable to wireless systems that transmit different modulated signals from a plurality of antennas, such as an OFDM-MIMO system.
Furthermore, in a wired communication system with a plurality of transmission locations (such as a Power Line Communication (PLC) system, optical communication system, or Digital Subscriber Line (DSL) system), the present invention may be adapted to MIMO, in which case a plurality of transmission locations are used to transmit a plurality of modulated signals as described by the present invention. A modulated signal may also be transmitted from a plurality of transmission locations.
REFERENCE SIGNS LIST
302A, 302B encoder
304A, 304B interleaver
306A, 306B mapping unit
314 weighting information generating unit
308A, 308B weighting unit
310A, 310B wireless unit
312A, 312B antenna
402 encoder
404 distribution unit
504#1, 504#2 transmit antenna
505#1, 505#2 receive antenna
600 weighting unit
703_X wireless unit
701_X antenna
705_1 channel fluctuation estimating unit
705_2 channel fluctuation estimating unit
707_1 channel fluctuation estimating unit
707_2 channel fluctuation estimating unit
709 control information decoding unit
711 signal processing unit
803 INNER MIMO detector
805A, 805B log-likelihood calculating unit
807A, 807B deinterleaver
809A, 809B log-likelihood ratio calculating unit
811A, 811B soft-in/soft-out decoder
813A, 813B interleaver
815 storage unit
819 weighting coefficient generating unit
901 soft-in/soft-out decoder
903 distribution unit
1301A, 1301B OFDM related processor
1402A, 1402B serial/parallel converter
1404A, 1404B reordering unit
1406A, 1406B inverse Fast Fourier transformer
1408A, 1408B wireless unit
2200 precoding weight generating unit
2300 reordering unit
4002 encoder group
DETAILED DESCRIPTION OF THE EMBODIMENTS
Hereinafter, embodiments according to the present disclosure will be described in detail with reference to the drawings.
Embodiment 1
A transmission method, transmission device, reception method, and reception device according to this embodiment will be described in detail.
FIG. 1 illustrates one example of a configuration of a transmission device according to this embodiment, such as a base station, access point, or broadcast station. Error correction encoder 102 receives inputs of data 101 and control signal 100, and based on information related to the error correction code included in control signal 100 (e.g., error correction code information, code length (block length), coding rate), performs error correction encoding and outputs encoded data 103. Note that error correction encoder 102 may include an interleaver; in such a case, error correction encoder 102 may rearrange the encoded data before outputting encoded data 103.
Mapper 104 receives inputs of encoded data 103 and control signal 100, and based on information on the modulation scheme included in control signal 100, performs mapping in accordance with the modulation scheme and outputs mapped signal (baseband signal) 105_1 and mapped signal (baseband signal) 105_2. Note that mapper 104 generates mapped signal 105_1 using a first sequence and generates mapped signal 105_2 using a second sequence; the first sequence and the second sequence are different.
Signal processor 106 receives inputs of mapped signals 105_1 and 105_2, signal group 110, and control signal 100, performs signal processing based on control signal 100, and outputs signal-processed signals 106_A and 106_B. Here, signal-processed signal 106_A is expressed as u1(i), and signal-processed signal 106_B is expressed as u2(i) (i is a symbol number; for example, i is an integer that is greater than or equal to 0). Details regarding the signal processing will be described later with reference to FIG. 2.
Radio unit 107_A receives inputs of signal-processed signal 106_A and control signal 100, and based on control signal 100, processes signal-processed signal 106_A and outputs transmission signal 108_A. Transmission signal 108_A is then output as radio waves from antenna unit #A (109_A). Similarly, radio unit 107_B receives inputs of signal-processed signal 106_B and control signal 100, and based on control signal 100, processes signal-processed signal 106_B and outputs transmission signal 108_B. Transmission signal 108_B is then output as radio waves from antenna unit #B (109_B).
Antenna unit #A (109_A) receives an input of control signal 100. Here, based on control signal 100, antenna unit #A (109_A) processes transmission signal 108_A and outputs the result as radio waves. However, antenna unit #A (109_A) need not receive an input of control signal 100. Similarly, antenna unit #B (109_B) receives an input of control signal 100; based on control signal 100, antenna unit #B (109_B) processes transmission signal 108_B and outputs the result as radio waves. However, antenna unit #B (109_B) need not receive an input of control signal 100.
Note that control signal 100 may be generated based on information transmitted by the device that is the communication partner in FIG. 1; alternatively, the device in FIG. 1 may include an input unit, and control signal 100 may be generated based on information input from the input unit.
FIG. 2 illustrates one example of a configuration of signal processor 106 illustrated in FIG. 1.
Weighting synthesizer (precoder) 203 receives inputs of mapped signal 201A (mapped signal 105_1 in FIG. 1), mapped signal 201B (mapped signal 105_2 in FIG. 1), and control signal 200 (control signal 100 in FIG. 1), performs weighting synthesis (precoding) based on control signal 200, and outputs weighted signal 204A and weighted signal 204B. Here, mapped signal 201A is expressed as s1(t), mapped signal 201B is expressed as s2(t), weighted signal 204A is expressed as z1(t), and weighted signal 204B is expressed as z2′(t). Note that one example of t is time (s1(t), s2(t), z1(t), and z2′(t) are defined as complex numbers (accordingly, they may be real numbers)).
Weighting synthesizer (precoder) 203 performs the following calculation:
[MATH. 1] (Equation (1))
\begin{pmatrix} z1(i) \\ z2'(i) \end{pmatrix} = \begin{pmatrix} a & b \\ c & d \end{pmatrix} \begin{pmatrix} s1(i) \\ s2(i) \end{pmatrix}
In Equation (1), a, b, c, and d can be defined as complex numbers (and may be real numbers). Note that i is a symbol number.
Phase changer 205B receives inputs of weighting synthesized signal 204B and control signal 200, applies a phase change to weighting synthesized signal 204B based on control signal 200, and outputs phase-changed signal 206B. Note that phase-changed signal 206B is expressed as z2(t), and z2(t) is defined as a complex number (and may be a real number).
Next, specific operations performed by phase changer 205B will be described. In phase changer 205B, for example, a phase change of y(i) is applied to z2′(i). Accordingly, z2(i) can be expressed as z2(i) = y(i) × z2′(i) (i is a symbol number (i is an integer that is greater than or equal to 0)).
For example, the phase change value is set as shown below (N is an integer that is greater than or equal to 2; N is the phase change cycle) (when N is set to an odd number greater than or equal to 3, data reception quality may improve):
[MATH. 2] (Equation (2))
y(i) = e^{j \frac{2 \pi i}{N}}
(j is an imaginary number unit.) However, Equation (2) is merely a non-limiting example. Here, the phase change value is expressed as y(i) = e^{j\delta(i)}.
Then, z1(i) and z2(i) can be expressed with the following equation:
[MATH. 3] (Equation (3))
\begin{pmatrix} z1(i) \\ z2(i) \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & y(i) \end{pmatrix} \begin{pmatrix} a & b \\ c & d \end{pmatrix} \begin{pmatrix} s1(i) \\ s2(i) \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & e^{j\delta(i)} \end{pmatrix} \begin{pmatrix} a & b \\ c & d \end{pmatrix} \begin{pmatrix} s1(i) \\ s2(i) \end{pmatrix}
Note that δ(i) is a real number. z1(i) and z2(i) are transmitted from the transmission device at the same time and using the same frequency (same frequency band). In Equation (3), the phase change value is not limited to the value used in Equation (2); for example, a method in which the phase is changed cyclically or regularly is conceivable.
The matrix (precoding matrix) in Equation (1) and Equation (3) is expressed as follows:
[MATH. 4] (Equation (4))
\begin{pmatrix} a & b \\ c & d \end{pmatrix} = F
For example, using one of the following matrices for matrix F is conceivable:
[MATH. 5] (Equation (5))
F = \begin{pmatrix} \beta e^{j0} & \beta \alpha e^{j0} \\ \beta \alpha e^{j0} & \beta e^{j\pi} \end{pmatrix}
or [MATH. 6] (Equation (6))
F = \frac{1}{\sqrt{\alpha^2 + 1}} \begin{pmatrix} e^{j0} & \alpha e^{j0} \\ \alpha e^{j0} & e^{j\pi} \end{pmatrix}
or [MATH. 7] (Equation (7))
F = \begin{pmatrix} \beta e^{j0} & \beta \alpha e^{j\pi} \\ \beta \alpha e^{j0} & \beta e^{j0} \end{pmatrix}
or [MATH. 8] (Equation (8))
F = \frac{1}{\sqrt{\alpha^2 + 1}} \begin{pmatrix} e^{j0} & \alpha e^{j\pi} \\ \alpha e^{j0} & e^{j0} \end{pmatrix}
or [MATH. 9] (Equation (9))
F = \begin{pmatrix} \beta \alpha e^{j0} & \beta e^{j\pi} \\ \beta e^{j0} & \beta \alpha e^{j0} \end{pmatrix}
or [MATH. 10] (Equation (10))
F = \frac{1}{\sqrt{\alpha^2 + 1}} \begin{pmatrix} \alpha e^{j0} & e^{j\pi} \\ e^{j0} & \alpha e^{j0} \end{pmatrix}
or [MATH. 11] (Equation (11))
F = \begin{pmatrix} \beta \alpha e^{j0} & \beta e^{j0} \\ \beta e^{j0} & \beta \alpha e^{j\pi} \end{pmatrix}
or [MATH. 12] (Equation (12))
F = \frac{1}{\sqrt{\alpha^2 + 1}} \begin{pmatrix} \alpha e^{j0} & e^{j0} \\ e^{j0} & \alpha e^{j\pi} \end{pmatrix}
Note that in Equations (5) through (12), α may be a real number or an imaginary number, and β may be a real number or an imaginary number. However, α is not 0 (zero), and β is also not 0 (zero).
or
[MATH. 13] (Equation (13))
F = \begin{pmatrix} \beta \cos\theta & \beta \sin\theta \\ \beta \sin\theta & -\beta \cos\theta \end{pmatrix}
or [MATH. 14] (Equation (14))
F = \begin{pmatrix} \cos\theta & \sin\theta \\ \sin\theta & -\cos\theta \end{pmatrix}
or [MATH. 15] (Equation (15))
F = \begin{pmatrix} \beta \cos\theta & -\beta \sin\theta \\ \beta \sin\theta & \beta \cos\theta \end{pmatrix}
or [MATH. 16] (Equation (16))
F = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}
or [MATH. 17] (Equation (17))
F = \begin{pmatrix} \beta \sin\theta & -\beta \cos\theta \\ \beta \cos\theta & \beta \sin\theta \end{pmatrix}
or [MATH. 18] (Equation (18))
F = \begin{pmatrix} \sin\theta & -\cos\theta \\ \cos\theta & \sin\theta \end{pmatrix}
or [MATH. 19] (Equation (19))
F = \begin{pmatrix} \beta \sin\theta & \beta \cos\theta \\ \beta \cos\theta & -\beta \sin\theta \end{pmatrix}
or [MATH. 20] (Equation (20))
F = \begin{pmatrix} \sin\theta & \cos\theta \\ \cos\theta & -\sin\theta \end{pmatrix}
Note that in Equation (13), Equation (15), Equation (17), and Equation (19), β may be a real number or an imaginary number. However, β is not 0 (zero) (θ is a real number).
or
[MATH. 21] (Equation (21))
F(i) = \begin{pmatrix} \beta e^{j\theta_{11}(i)} & \beta \alpha e^{j(\theta_{11}(i)+\lambda)} \\ \beta \alpha e^{j\theta_{21}(i)} & \beta e^{j(\theta_{21}(i)+\lambda+\pi)} \end{pmatrix}
or [MATH. 22] (Equation (22))
F(i) = \frac{1}{\sqrt{\alpha^2 + 1}} \begin{pmatrix} e^{j\theta_{11}(i)} & \alpha e^{j(\theta_{11}(i)+\lambda)} \\ \alpha e^{j\theta_{21}(i)} & e^{j(\theta_{21}(i)+\lambda+\pi)} \end{pmatrix}
or [MATH. 23] (Equation (23))
F(i) = \begin{pmatrix} \beta \alpha e^{j\theta_{21}(i)} & \beta e^{j(\theta_{21}(i)+\lambda+\pi)} \\ \beta e^{j\theta_{11}(i)} & \beta \alpha e^{j(\theta_{11}(i)+\lambda)} \end{pmatrix}
or [MATH. 24] (Equation (24))
F(i) = \frac{1}{\sqrt{\alpha^2 + 1}} \begin{pmatrix} \alpha e^{j\theta_{21}(i)} & e^{j(\theta_{21}(i)+\lambda+\pi)} \\ e^{j\theta_{11}(i)} & \alpha e^{j(\theta_{11}(i)+\lambda)} \end{pmatrix}
or [MATH. 25] (Equation (25))
F(i) = \begin{pmatrix} \beta e^{j\theta_{11}} & \beta \alpha e^{j(\theta_{11}+\lambda(i))} \\ \beta \alpha e^{j\theta_{21}} & \beta e^{j(\theta_{21}+\lambda(i)+\pi)} \end{pmatrix}
or [MATH. 26] (Equation (26))
F(i) = \frac{1}{\sqrt{\alpha^2 + 1}} \begin{pmatrix} e^{j\theta_{11}} & \alpha e^{j(\theta_{11}+\lambda(i))} \\ \alpha e^{j\theta_{21}} & e^{j(\theta_{21}+\lambda(i)+\pi)} \end{pmatrix}
or [MATH. 27] (Equation (27))
F(i) = \begin{pmatrix} \beta \alpha e^{j\theta_{21}} & \beta e^{j(\theta_{21}+\lambda(i)+\pi)} \\ \beta e^{j\theta_{11}} & \beta \alpha e^{j(\theta_{11}+\lambda(i))} \end{pmatrix}
or [MATH. 28] (Equation (28))
F(i) = \frac{1}{\sqrt{\alpha^2 + 1}} \begin{pmatrix} \alpha e^{j\theta_{21}} & e^{j(\theta_{21}+\lambda(i)+\pi)} \\ e^{j\theta_{11}} & \alpha e^{j(\theta_{11}+\lambda(i))} \end{pmatrix}
or [MATH. 29] (Equation (29))
F = \begin{pmatrix} \beta e^{j\theta_{11}} & \beta \alpha e^{j(\theta_{11}+\lambda)} \\ \beta \alpha e^{j\theta_{21}} & \beta e^{j(\theta_{21}+\lambda+\pi)} \end{pmatrix}
or [MATH. 30] (Equation (30))
F = \frac{1}{\sqrt{\alpha^2 + 1}} \begin{pmatrix} e^{j\theta_{11}} & \alpha e^{j(\theta_{11}+\lambda)} \\ \alpha e^{j\theta_{21}} & e^{j(\theta_{21}+\lambda+\pi)} \end{pmatrix}
or [MATH. 31] (Equation (31))
F = \begin{pmatrix} \beta \alpha e^{j\theta_{21}} & \beta e^{j(\theta_{21}+\lambda+\pi)} \\ \beta e^{j\theta_{11}} & \beta \alpha e^{j(\theta_{11}+\lambda)} \end{pmatrix}
or [MATH. 32] (Equation (32))
F = \frac{1}{\sqrt{\alpha^2 + 1}} \begin{pmatrix} \alpha e^{j\theta_{21}} & e^{j(\theta_{21}+\lambda+\pi)} \\ e^{j\theta_{11}} & \alpha e^{j(\theta_{11}+\lambda)} \end{pmatrix}
However, θ11(i), θ21(i), and λ(i) are functions (real numbers) of i (symbol number). λ is, for example, a fixed value (real number) (however, λ need not be a fixed value). α may be a real number or an imaginary number, and β may be a real number or an imaginary number. However, α is not 0 (zero), and β is also not 0 (zero). Moreover, θ11 and θ21 are real numbers. Moreover, each exemplary embodiment in the present specification can also be carried out by using a precoding matrix other than these matrices.
Or:
[MATH. 33] (Equation (33))
F(i) = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}
[MATH. 34] (Equation (34))
F(i) = \begin{pmatrix} \beta & 0 \\ 0 & \beta \end{pmatrix}
[MATH. 35] (Equation (35))
F(i) = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}
[MATH. 36] (Equation (36))
F(i) = \begin{pmatrix} \beta & 0 \\ 0 & -\beta \end{pmatrix}
Note that in Equation (34) and Equation (36), β may be a real number or, alternatively, an imaginary number. However, β is not 0 (zero).
Inserter 207A receives inputs of weighting synthesized signal 204A, pilot symbol signal pa(t) (t is time) (251A), preamble signal 252, control information symbol signal 253, and control signal 200, and based on the information on the frame configuration included in control signal 200, outputs baseband signal 208A in accordance with the frame configuration. Similarly, inserter 207B receives inputs of phase-changed signal 206B, pilot symbol signal pb(t) (251B), preamble signal 252, control information symbol signal 253, and control signal 200, and based on the information on the frame configuration included in control signal 200, outputs baseband signal 208B in accordance with the frame configuration.
Phase changer 209B receives inputs of baseband signal 208B and control signal 200, applies a phase change to baseband signal 208B based on control signal 200, and outputs phase-changed signal 210B. Baseband signal 208B is a function of symbol number i (i is an integer that is greater than or equal to 0), and is expressed as x′(i).
Then, phase-changed signal 210B (x(i)) can be expressed as x(i) = e^{j\epsilon(i)} × x′(i) (j is an imaginary number unit). Although details will be described later, note that the operation performed by phase changer 209B may be CDD (cyclic delay diversity) (CSD (cyclic shift diversity)) as disclosed in "Standard conformable antenna diversity techniques for OFDM and its application to the DVB-T system," IEEE Globecom 2001, pp. 3100-3105, November 2001, and in IEEE P802.11n (D3.00) Draft STANDARD for Information Technology-Telecommunications and information exchange between systems-Local and metropolitan area networks-Specific requirements-Part 11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) specifications, 2007. One characteristic of phase changer 209B is that it applies a phase change to symbols present along the frequency axis (i.e., it applies a phase change to, for example, a data symbol, a pilot symbol, and/or a control information symbol).
FIG. 3 illustrates one example of a configuration of radio units 107_A and 107_B illustrated in FIG. 1. Serial-parallel converter 302 receives inputs of signal 301 and control signal 300 (control signal 100 in FIG. 1), applies serial-parallel conversion based on control signal 300, and outputs serial-parallel converted signal 303. Inverse Fourier transform unit 304 receives inputs of serial-parallel converted signal 303 and control signal 300, and based on control signal 300, applies an inverse fast Fourier transform (IFFT) as one example of an inverse Fourier transform, and outputs inverse Fourier transformed signal 305. Processor 306 receives inputs of inverse Fourier transformed signal 305 and control signal 300, applies processing such as frequency conversion and amplification based on control signal 300, and outputs modulated signal 307. (For example, when signal 301 is signal-processed signal 106_A illustrated in FIG. 1, modulated signal 307 corresponds to transmission signal 108_A in FIG. 1; when signal 301 is signal-processed signal 106_B, modulated signal 307 corresponds to transmission signal 108_B.)
FIG. 4 illustrates a frame configuration of transmission signal 108_A illustrated in FIG. 1, with frequency (carriers) represented on the horizontal axis and time on the vertical axis. Since a multi-carrier transmission scheme such as OFDM is used, symbols are present in the carrier direction. In FIG. 4, symbols from carriers 1 to 36 are shown, for time $1 through time $11.
In FIG. 4, 401 is a pilot symbol (pilot signal 251A (pa(t)) in FIG. 2), 402 is a data symbol, and 403 is an other symbol. Here, a pilot symbol is, for example, a PSK (phase shift keying) symbol, and is a symbol used by the reception device that receives this frame to perform channel estimation (propagation path fluctuation estimation), frequency offset estimation, and phase fluctuation estimation. For example, the transmission device illustrated in FIG. 1 and the reception device that receives the frame illustrated in FIG. 4 may share the transmission method of the pilot symbol.
Note that mapped signal 201A (mapped signal 105_1 in FIG. 1) is referred to as "stream #1" and mapped signal 201B (mapped signal 105_2 in FIG. 1) is referred to as "stream #2". This also applies to subsequent descriptions.
Data symbol 402 is a symbol that corresponds to baseband signal 208A generated in the signal processing illustrated in FIG. 2.
Accordingly, data symbol402satisfies “a symbol including both the symbol “stream #1” and the symbol “stream #2””, “the symbol “stream #1””, or “the symbol “stream #2””, as determined by the configuration of the precoding matrix used by weighting synthesizer203.

Other symbols403are symbols corresponding to preamble signal252and control information symbol signal253illustrated inFIG.2(however, the other symbols may include symbols other than a preamble or control information symbol). Here, a preamble may transmit data (control data), and may be configured as, for example, a symbol for signal detection, a symbol for performing frequency and time synchronization, or a symbol for performing channel estimation (a symbol for performing propagation path fluctuation estimation). The control information symbol is a symbol including control information for the reception device that received the frame inFIG.4to demodulate and decode a data symbol.

For example, carriers 1 to 36 from time $1 to time $4 inFIG.4are other symbols403. Then, at time $5, carrier 1 through carrier 11 are data symbols402. At time $5, carrier 12 is pilot symbol401; at time $5, carriers 13 to 23 are data symbols402; at time $5, carrier 24 is pilot symbol401; . . . ; at time $6, carriers 1 and 2 are data symbols402; at time $6, carrier 3 is pilot symbol401; . . . ; at time $11, carrier 30 is pilot symbol401; and at time $11, carriers 31 to 36 are data symbols402.

FIG.5illustrates a frame configuration of transmission signal108_B illustrated inFIG.1. InFIG.5, frequency (carriers) is (are) represented on the horizontal axis and time is represented on the vertical axis. Since a multi-carrier transmission scheme such as OFDM is used, symbols are present in the carrier direction. InFIG.5, symbols from carriers 1 to 36 are shown. Moreover, inFIG.5, symbols for time $1 through time $11 are shown.

InFIG.5,501is a pilot symbol (pilot signal251B (pb(t) inFIG.2)),502is a data symbol, and503is an other symbol. Here, a pilot symbol is, for example, a PSK symbol, and is a symbol for the reception device that receives this frame to perform channel estimation (propagation path fluctuation estimation), frequency offset estimation, and phase fluctuation estimation. For example, the transmission device illustrated inFIG.1and the reception device that receives the frame illustrated inFIG.5may share the transmission method of the pilot symbol.

Data symbol502is a symbol that corresponds to baseband signal208B generated in the signal processing illustrated inFIG.2. Accordingly, data symbol502satisfies “a symbol including both the symbol “stream #1” and the symbol “stream #2””, “the symbol “stream #1””, or “the symbol “stream #2””, as determined by the configuration of the precoding matrix used by weighting synthesizer203.

Other symbols503are symbols corresponding to preamble signal252and control information symbol signal253illustrated inFIG.2(however, the other symbols may include symbols other than a preamble or control information symbol). Here, a preamble may transmit data (control data), and is configured as, for example, a symbol for signal detection, a symbol for performing frequency and time synchronization, or a symbol for performing channel estimation (a symbol for performing propagation path fluctuation estimation). The control information symbol is a symbol including control information for the reception device that received the frame inFIG.5to demodulate and decode a data symbol.
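The symbol arrangement described above forFIG.4(andFIG.5, described next, which uses the identical arrangement) can be reproduced programmatically. The following Python sketch (using NumPy) fills a 36-carrier by 11-time grid; the pilot stride of 12 carriers with a shift of 3 carriers per time is inferred here from the stated pilot positions and should be treated as an assumption for illustration, not as a definition taken from the embodiment.

import numpy as np

OTHER, DATA, PILOT = 0, 1, 2
N_CARRIERS, N_TIMES = 36, 11

frame = np.full((N_TIMES, N_CARRIERS), DATA)
frame[0:4, :] = OTHER  # times $1-$4: other symbols (preamble, control information)
for t in range(4, N_TIMES):
    # Assumed pilot pattern: e.g., pilots at carriers 12 and 24 at time $5,
    # and at carrier 3 (then every 12th carrier) at time $6.
    offset = (11 + 3 * (t - 4)) % 12
    frame[t, offset::12] = PILOT  # remaining carriers at these times stay data symbols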
For example, carriers 1 to 36 from time $1 to time $4 inFIG.5are other symbols503. Then, at time $5, carrier 1 through carrier 11 are data symbols502. At time $5, carrier 12 is pilot symbol501; at time $5, carriers 13 to 23 are data symbols502; at time $5, carrier 24 is pilot symbol501; . . . ; at time $6, carriers 1 and 2 are data symbols502; at time $6, carrier 3 is pilot symbol501; . . . ; at time $11, carrier 30 is pilot symbol501; and at time $11, carriers 31 to 36 are data symbols502.

When a symbol is present in carrier A at time $B inFIG.4and a symbol is present in carrier A at time $B inFIG.5, the symbol in carrier A at time $B inFIG.4and the symbol in carrier A at time $B inFIG.5are transmitted at the same time and same frequency. Note that the frame configuration is not limited to the configurations illustrated inFIG.4andFIG.5;FIG.4andFIG.5are mere examples of frame configurations.

The other symbols inFIG.4andFIG.5are symbols corresponding to “preamble signal252and control information symbol signal253inFIG.2”. Accordingly, when an other symbol503inFIG.5at the same time and same frequency (same carrier) as an other symbol403inFIG.4transmits control information, it transmits the same data (the same control information). Note that this is under the assumption that the frame ofFIG.4and the frame ofFIG.5are received at the same time by the reception device, but even when the frame ofFIG.4or the frame ofFIG.5has been received, the reception device can obtain the data transmitted by the transmission device.

FIG.6illustrates one example of components relating to control information generation for generating control information symbol signal253illustrated inFIG.2. Control information mapper602receives inputs of data601related to control information and control signal600, maps data601related to control information using a modulation scheme based on control signal600, and outputs control information mapped signal603. Note that control information mapped signal603corresponds to control information symbol signal253inFIG.2.

FIG.7illustrates one example of a configuration of antenna unit #A (109_A) and antenna unit #B (109_B) illustrated inFIG.1(antenna unit #A (109_A) and antenna unit #B (109_B) are exemplified as including a plurality of antennas). Splitter702receives an input of transmission signal701, performs splitting, and outputs transmission signals703_1,703_2,703_3, and703_4.

Multiplier704_1receives inputs of transmission signal703_1and control signal700, and based on the multiplication coefficient included in control signal700, multiplies transmission signal703_1by the multiplication coefficient, and outputs multiplied signal705_1. Multiplied signal705_1is output from antenna706_1as radio waves. When transmission signal703_1is expressed as Tx1(t) (t is time) and the multiplication coefficient is expressed as W1 (W1 can be defined as a complex number and thus may be a real number), multiplied signal705_1can be expressed as Tx1(t)×W1.

Multiplier704_2receives inputs of transmission signal703_2and control signal700, and based on the multiplication coefficient included in control signal700, multiplies transmission signal703_2by the multiplication coefficient, and outputs multiplied signal705_2. Multiplied signal705_2is output from antenna706_2as radio waves. When transmission signal703_2is expressed as Tx2(t) and the multiplication coefficient is expressed as W2 (W2 can be defined as a complex number and thus may be a real number), multiplied signal705_2can be expressed as Tx2(t)×W2.
Multiplier704_3receives inputs of transmission signal703_3and control signal700, and based on the multiplication coefficient included in control signal700, multiplies transmission signal703_3by the multiplication coefficient, and outputs multiplied signal705_3. Multiplied signal705_3is output from antenna706_3as radio waves. When transmission signal703_3is expressed as Tx3(t) and the multiplication coefficient is expressed as W3 (W3 can be defined as a complex number and thus may be a real number), multiplied signal705_3can be expressed as Tx3(t)×W3.

Multiplier704_4receives inputs of transmission signal703_4and control signal700, and based on the multiplication coefficient included in control signal700, multiplies transmission signal703_4by the multiplication coefficient, and outputs multiplied signal705_4. Multiplied signal705_4is output from antenna706_4as radio waves. When transmission signal703_4is expressed as Tx4(t) and the multiplication coefficient is expressed as W4 (W4 can be defined as a complex number and thus may be a real number), multiplied signal705_4can be expressed as Tx4(t)×W4.

Note that “the absolute value of W1, the absolute value of W2, the absolute value of W3, and the absolute value of W4 are equal” may be true. In that case, this is the equivalent of having performed a phase change (it goes without saying that the absolute value of W1, the absolute value of W2, the absolute value of W3, and the absolute value of W4 may be unequal). Moreover, inFIG.7, the antenna unit is exemplified as including four antennas (and four multipliers), but the number of antennas is not limited to four; the antenna unit may include two or more antennas.

When the configuration of antenna unit #A (109_A) inFIG.1is as illustrated inFIG.7, transmission signal701corresponds to transmission signal108_A inFIG.1. When the configuration of antenna unit #B (109_B) inFIG.1is as illustrated inFIG.7, transmission signal701corresponds to transmission signal108_B inFIG.1. However, antenna unit #A (109_A) and antenna unit #B (109_B) need not have the configurations illustrated inFIG.7; as previously described, the antenna units need not receive an input of control signal100.

FIG.8illustrates one example of a configuration of a reception device that receives a modulated signal upon the transmission device illustrated inFIG.1transmitting, for example, a transmission signal having the frame configuration illustrated inFIG.4orFIG.5. Radio unit803X receives an input of reception signal802X received by antenna unit #X (801X), applies processing such as frequency conversion and a Fourier transform, and outputs baseband signal804X. Similarly, radio unit803Y receives an input of reception signal802Y received by antenna unit #Y (801Y), applies processing such as frequency conversion and a Fourier transform, and outputs baseband signal804Y.

Note thatFIG.8illustrates a configuration in which antenna unit #X (801X) and antenna unit #Y (801Y) receive control signal810as an input, but antenna unit #X (801X) and antenna unit #Y (801Y) may be configured to not receive an input of control signal810. Operations performed when control signal810is present as an input will be described in detail later.

FIG.9illustrates the relationship between the transmission device and the reception device. Antennas901_1and901_2inFIG.9are transmitting antennas, and antenna901_1inFIG.9corresponds to antenna unit #A (109_A) inFIG.1. Antenna901_2inFIG.9corresponds to antenna unit #B (109_B) inFIG.1.
Antennas902_1and902_2inFIG.9are receiving antennas, and antenna902_1inFIG.9corresponds to antenna unit #X (801X) inFIG.8. Antenna902_2inFIG.9corresponds to antenna unit #Y (801Y) inFIG.8.

As illustrated inFIG.9, the signal transmitted from transmitting antenna901_1is u1(t), the signal transmitted from transmitting antenna901_2is u2(t), the signal received by receiving antenna902_1is r1(t), and the signal received by receiving antenna902_2is r2(t). Note that i is a symbol number, and, for example, is an integer that is greater than or equal to 0.

The propagation coefficient from transmitting antenna901_1to receiving antenna902_1is h11(t), the propagation coefficient from transmitting antenna901_1to receiving antenna902_2is h21(t), the propagation coefficient from transmitting antenna901_2to receiving antenna902_1is h12(t), and the propagation coefficient from transmitting antenna901_2to receiving antenna902_2is h22(t). In this case, the following relation equation holds true.

[MATH. 37]
\begin{pmatrix} r1(i) \\ r2(i) \end{pmatrix} = \begin{pmatrix} h11(i) & h12(i) \\ h21(i) & h22(i) \end{pmatrix} \begin{pmatrix} u1(i) \\ u2(i) \end{pmatrix} + \begin{pmatrix} n1(i) \\ n2(i) \end{pmatrix}   Equation (37)

Note that n1(i) and n2(i) are noise.

Channel estimation unit805_1of modulated signal u1 inFIG.8receives an input of baseband signal804X, and using the preamble and/or pilot symbol illustrated inFIG.4orFIG.5, performs channel estimation on modulated signal u1, that is to say, estimates h11(t) in Equation (37), and outputs channel estimated signal806_1. Channel estimation unit805_2of modulated signal u2 receives an input of baseband signal804X, and using the preamble and/or pilot symbol illustrated inFIG.4orFIG.5, performs channel estimation on modulated signal u2, that is to say, estimates h12(t) in Equation (37), and outputs channel estimated signal806_2.

Channel estimation unit807_1of modulated signal u1 receives an input of baseband signal804Y, and using the preamble and/or pilot symbol illustrated inFIG.4orFIG.5, performs channel estimation on modulated signal u1, that is to say, estimates h21(t) in Equation (37), and outputs channel estimated signal808_1. Channel estimation unit807_2of modulated signal u2 receives an input of baseband signal804Y, and using the preamble and/or pilot symbol illustrated inFIG.4orFIG.5, performs channel estimation on modulated signal u2, that is to say, estimates h22(t) in Equation (37), and outputs channel estimated signal808_2.

Control information decoder809receives inputs of baseband signals804X and804Y, demodulates and decodes control information including “other symbols” inFIG.4andFIG.5, and outputs control signal810including control information.

Signal processor811receives inputs of channel estimated signals806_1,806_2,808_1, and808_2, baseband signals804X and804Y, and control signal810, performs demodulation and decoding using the relationship in Equation (37) or based on control information (for example, information on a modulation scheme or a scheme relating to the error correction code) in control signal810, and outputs reception data812.

Note that control signal810need not be generated via the method illustrated inFIG.8. For example, control signal810inFIG.8may be generated based on information transmitted by a device that is the communication partner (FIG.1) inFIG.8, and, alternatively, the device inFIG.8may include an input unit, and control signal810may be generated based on information input from the input unit.
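The relation in Equation (37) can be reproduced numerically. The following Python sketch (using NumPy) forms r = H u + n for one symbol number i; the channel values, transmitted signals, and noise level are arbitrary assumptions for illustration.

import numpy as np

rng = np.random.default_rng(0)
H = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))  # h11(i)..h22(i), assumed values
u = np.array([1 + 0j, 0 + 1j])                              # u1(i), u2(i), assumed values
n = 0.1 * (rng.normal(size=2) + 1j * rng.normal(size=2))    # n1(i), n2(i), assumed noise
r = H @ u + n                                               # Equation (37)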
FIG.10illustrates one example of a configuration of antenna unit #X (801X) and antenna unit #Y (801Y) illustrated inFIG.8(antenna unit #X (801X) and antenna unit #Y (801Y) are exemplified as including a plurality of antennas).

Multiplier1003_1receives inputs of reception signal1002_1received by antenna1001_1and control signal1000, and based on information on a multiplication coefficient included in control signal1000, multiplies reception signal1002_1by the multiplication coefficient, and outputs multiplied signal1004_1. When reception signal1002_1is expressed as Rx1(t) (t is time) and the multiplication coefficient is expressed as D1 (D1 can be defined as a complex number and thus may be a real number), multiplied signal1004_1can be expressed as Rx1(t)×D1.

Multiplier1003_2receives inputs of reception signal1002_2received by antenna1001_2and control signal1000, and based on information on a multiplication coefficient included in control signal1000, multiplies reception signal1002_2by the multiplication coefficient, and outputs multiplied signal1004_2. When reception signal1002_2is expressed as Rx2(t) and the multiplication coefficient is expressed as D2 (D2 can be defined as a complex number and thus may be a real number), multiplied signal1004_2can be expressed as Rx2(t)×D2.

Multiplier1003_3receives inputs of reception signal1002_3received by antenna1001_3and control signal1000, and based on information on a multiplication coefficient included in control signal1000, multiplies reception signal1002_3by the multiplication coefficient, and outputs multiplied signal1004_3. When reception signal1002_3is expressed as Rx3(t) and the multiplication coefficient is expressed as D3 (D3 can be defined as a complex number and thus may be a real number), multiplied signal1004_3can be expressed as Rx3(t)×D3.

Multiplier1003_4receives inputs of reception signal1002_4received by antenna1001_4and control signal1000, and based on information on a multiplication coefficient included in control signal1000, multiplies reception signal1002_4by the multiplication coefficient, and outputs multiplied signal1004_4. When reception signal1002_4is expressed as Rx4(t) and the multiplication coefficient is expressed as D4 (D4 can be defined as a complex number and thus may be a real number), multiplied signal1004_4can be expressed as Rx4(t)×D4.

Synthesizer1005receives inputs of multiplied signals1004_1,1004_2,1004_3, and1004_4, synthesizes multiplied signals1004_1,1004_2,1004_3, and1004_4, and outputs synthesized signal1006. Note that synthesized signal1006is expressed as Rx1(t)×D1+Rx2(t)×D2+Rx3(t)×D3+Rx4(t)×D4.

InFIG.10, the antenna unit is exemplified as including four antennas (and four multipliers), but the number of antennas is not limited to four; the antenna unit may include two or more antennas.

When the configuration of antenna unit #X (801X) inFIG.8is as illustrated inFIG.10, reception signal802X corresponds to synthesized signal1006inFIG.10, and control signal810corresponds to control signal1000inFIG.10. When the configuration of antenna unit #Y (801Y) inFIG.8is as illustrated inFIG.10, reception signal802Y corresponds to synthesized signal1006inFIG.10, and control signal810corresponds to control signal1000inFIG.10. However, antenna unit #X (801X) and antenna unit #Y (801Y) need not have the configuration illustrated inFIG.10; as stated before, the antenna unit may not receive an input of control signal810.
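The per-antenna weighting on both sides, Txk(t)×Wk inFIG.7and Rxk(t)×Dk followed by synthesis inFIG.10, can be sketched together in Python (using NumPy). All coefficient values below are assumptions for illustration; as noted above, equal-magnitude complex weights amount to pure phase changes.

import numpy as np

def antenna_unit_tx(tx, weights):
    # FIG.7: split transmission signal 701, output Txk(t) * Wk per antenna k.
    return [w * np.asarray(tx) for w in weights]

def antenna_unit_rx(rx_branches, coeffs):
    # FIG.10: multiplied signals 1004_k = Rxk(t) * Dk, then synthesized signal 1006.
    return sum(d * np.asarray(rx) for rx, d in zip(rx_branches, coeffs))

tx = np.ones(16, dtype=complex)                                # stand-in transmission signal
W = np.exp(1j * np.array([0.0, np.pi/4, np.pi/2, 3*np.pi/4]))  # assumed W1..W4 (equal |Wk|)
branches = antenna_unit_tx(tx, W)                              # multiplied signals 705_1..705_4
D = np.conj(W)                                                 # assumed D1..D4
synthesized = antenna_unit_rx(branches, D)                     # synthesized signal 1006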
Note that control signal810may be generated based on information transmitted by a device that is the communication partner, and, alternatively, the device may include an input unit, and control signal810may be generated based on information input from the input unit.

Next, consider the configuration in which, as illustrated inFIG.2, phase changer205B and phase changer209B are inserted in signal processor106of the transmission device illustrated inFIG.1. The characteristics and advantageous effects of this configuration will be described.

As described with reference toFIG.4andFIG.5, weighting synthesizer203applies precoding (weighted synthesis) to mapped signal s1(i) (201A) (i is a symbol number; i is an integer greater than or equal to 0) obtained via mapping using the first sequence and mapped signal s2(i) (201B) obtained via mapping using the second sequence, and phase changer205B applies a phase change to one of the obtained weighting synthesized signals204A and204B. Weighting synthesized signal204A and phase-changed signal206B are then transmitted at the same frequency and at the same time.

Accordingly, inFIG.4andFIG.5, a phase change is applied to data symbol502inFIG.5(in the case ofFIG.2, since phase changer205B applies this to weighting synthesized signal204B, a phase change is applied to data symbol502inFIG.5; when a phase change is applied to weighting synthesized signal204A, a phase change is applied to data symbol402inFIG.4; this will be described later).

For example,FIG.11illustrates an extraction of carrier 1 through carrier 5 and time $4 through time $6 from the frame illustrated inFIG.5. Note that inFIG.11, similar toFIG.5,501is a pilot symbol,502is a data symbol, and503is an other symbol.

As described above, among the symbols illustrated inFIG.11, phase changer205B applies a phase change to the data symbols located at (carrier 1, time $5), (carrier 2, time $5), (carrier 3, time $5), (carrier 4, time $5), (carrier 5, time $5), (carrier 1, time $6), (carrier 2, time $6), (carrier 4, time $6), and (carrier 5, time $6).

Accordingly, the phase change values for the data symbols illustrated inFIG.11can be expressed as “e^{j×δ15(i)}” for (carrier 1, time $5), “e^{j×δ25(i)}” for (carrier 2, time $5), “e^{j×δ35(i)}” for (carrier 3, time $5), “e^{j×δ45(i)}” for (carrier 4, time $5), “e^{j×δ55(i)}” for (carrier 5, time $5), “e^{j×δ16(i)}” for (carrier 1, time $6), “e^{j×δ26(i)}” for (carrier 2, time $6), “e^{j×δ46(i)}” for (carrier 4, time $6), and “e^{j×δ56(i)}” for (carrier 5, time $6).

Among the symbols illustrated inFIG.11, the other symbols located at (carrier 1, time $4), (carrier 2, time $4), (carrier 3, time $4), (carrier 4, time $4), and (carrier 5, time $4), and the pilot symbol located at (carrier 3, time $6), are not subject to phase change by phase changer205B. This point is a characteristic of phase changer205B.

Note that, as illustrated inFIG.4, data symbols are arranged at “the same carriers and the same times” as the symbols subject to phase change inFIG.11, which are the data symbols located at (carrier 1, time $5), (carrier 2, time $5), (carrier 3, time $5), (carrier 4, time $5), (carrier 5, time $5), (carrier 1, time $6), (carrier 2, time $6), (carrier 4, time $6), and (carrier 5, time $6).
In other words, inFIG.4, the symbols located at (carrier 1, time $5), (carrier 2, time $5), (carrier 3, time $5), (carrier 4, time $5), (carrier 5, time $5), (carrier 1, time $6), (carrier 2, time $6), (carrier 4, time $6), and (carrier 5, time $6) are data symbols (in other words, the data symbols that perform MIMO transmission (transmit a plurality of streams) are subject to phase change by phase changer205B).

One example of the phase change that phase changer205B applies to the data symbols is the method given in Equation (2), in which a phase change is applied to the data symbols regularly (such as at each cycle N) (however, the phase change method implemented on the data symbols is not limited to this example). With this, when the environment is one in which the direct waves are dominant, such as in an LOS environment, it is possible to achieve improved data reception quality in the reception device with respect to the data symbols that perform MIMO transmission (transmit a plurality of streams). Next, the advantageous effects of this will be described.

For example, the modulation scheme used by mapper104inFIG.1is quadrature phase shift keying (QPSK) (mapped signal201A inFIG.2is a QPSK signal, and mapped signal201B is a QPSK signal; in other words, two QPSK streams are transmitted). Accordingly, for example, using channel estimated signals806_1and806_2, 16 candidate signal points are obtained by signal processor811illustrated inFIG.8(2-bit transmission is possible with QPSK; accordingly, since there are two streams, 4-bit transmission is achieved, and thus there are 2^4 = 16 candidate signal points) (note that 16 other candidate signal points are obtained from using channel estimated signals808_1and808_2as well, but since description thereof is the same as described above, the following description will focus on the 16 candidate signal points obtained by using channel estimated signals806_1and806_2).

FIG.12illustrates an example of the state resulting from such a case. In (A) and (B) inFIG.12, in-phase I is represented on the horizontal axis and quadrature Q is represented on the vertical axis, and 16 candidate signal points are present in the illustrated in-phase I-quadrature Q planes (among the 16 candidate signal points, one is a signal point that is transmitted by the transmission device; accordingly, this is referred to as “16 candidate signal points”).

When the environment is one in which the direct waves are dominant, such as in an LOS environment, consider a first case in which phase changer205B is omitted from the configuration illustrated inFIG.2(in other words, a case in which a phase change is not applied by phase changer205B inFIG.2). In the first case, since a phase change is not applied, there is a possibility that the state illustrated in (A) inFIG.12will be realized. When the state falls into the state illustrated in (A) inFIG.12, as illustrated by “signal points1201and1202”, “signal points1203,1204,1205, and1206”, and “signal points1207and1208”, the signal points become dense (the distances between some signal points shorten). Accordingly, in the reception device illustrated inFIG.8, data reception quality may deteriorate.

In order to remedy this phenomenon, inFIG.2, phase changer205B is inserted. When phase changer205B is inserted, due to symbol number i, there is a mix of symbol numbers whose signal points are dense (the distances between some signal points shorten), such as in (A) inFIG.12, and symbol numbers whose distance between signal points is long, such as in (B) inFIG.12.
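The 16 candidate signal points discussed above can be enumerated explicitly. The following Python sketch (using NumPy) precodes two QPSK streams with a matrix of the form of Equation (22) and forms the candidates H·F·s over all 2^4 bit patterns for an assumed channel; every numerical value here is an illustrative assumption, not taken from the embodiment. A small minimum distance among the candidates corresponds to the dense state in (A) inFIG.12.

import numpy as np
from itertools import product

def qpsk(b0, b1):
    # An assumed unit-power QPSK mapping.
    return ((1 - 2 * b0) + 1j * (1 - 2 * b1)) / np.sqrt(2)

def precoding_matrix_eq22(theta11, theta21, lam, alpha):
    # F(i) in the form of Equation (22).
    scale = 1.0 / np.sqrt(alpha ** 2 + 1.0)
    return scale * np.array([
        [np.exp(1j * theta11), alpha * np.exp(1j * (theta11 + lam))],
        [alpha * np.exp(1j * theta21), np.exp(1j * (theta21 + lam + np.pi))]])

F = precoding_matrix_eq22(0.0, 0.0, 0.0, 1.0)          # assumed parameter values
H = np.array([[1.0, 0.9], [0.9, 1.0]], dtype=complex)  # assumed LOS-like channel
points = [H @ (F @ np.array([qpsk(*b[0:2]), qpsk(*b[2:4])]))
          for b in product((0, 1), repeat=4)]          # 2^4 = 16 candidates
y = [p[0] for p in points]                             # as seen at receiving antenna 902_1
d_min = min(abs(a - c) for k, a in enumerate(y) for c in y[k + 1:])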
With respect to this mix of states, since an error correction code is introduced, high error correction capability is obtained, and in the reception device illustrated inFIG.8, high data reception quality can be achieved.

Note that inFIG.2, phase changer205B does not apply a phase change to symbols used for demodulating (wave detection of) data symbols and for channel estimation, such as pilot symbols and a preamble. With this, among data symbols, it is possible to realize the mix, depending on symbol number i, of symbol numbers whose signal points are dense (the distances between some signal points shorten), such as in (A) inFIG.12, and symbol numbers whose distance between signal points is long, such as in (B) inFIG.12.

However, even if a phase change is applied by phase changer205B inFIG.2to the pilot symbols and/or preamble used for demodulating (wave detection of) data symbols and for channel estimation, this mix among data symbols can still be realized. In such a case, a phase change must be applied to the pilot symbols and/or the preamble under some condition. For example, one conceivable method is to implement a rule which is separate from the rule for applying a phase change to a data symbol, and to apply a phase change to a pilot symbol and/or a preamble according to that rule. Another example is a method of regularly applying a phase change to a data symbol in a cycle N, and regularly applying a phase change to a pilot symbol and/or a preamble in a cycle M (N and M are integers that are greater than or equal to 2).

As described above, phase changer209B receives inputs of baseband signal208B and control signal200, applies a phase change to baseband signal208B based on control signal200, and outputs phase-changed signal210B. Baseband signal208B is a function of symbol number i (i is an integer that is greater than or equal to 0), and is expressed as x′(i). Then, phase-changed signal210B (x(i)) can be expressed as x(i) = e^{j×ε(i)} × x′(i) (j is the imaginary unit). Note that the operation performed by phase changer209B may be CDD (cyclic delay diversity) (CSD (cyclic shift diversity)) disclosed in “Standard conformable antenna diversity techniques for OFDM and its application to the DVB-T system,” IEEE Globecom 2001, pp. 3100-3105, November 2001, and in IEEE P802.11n (D3.00) Draft STANDARD for Information Technology-Telecommunications and information exchange between systems-Local and metropolitan area networks-Specific requirements-Part 11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) specifications, 2007.

One characteristic of phase changer209B is that it applies a phase change to a symbol present along the frequency axis (i.e., applies a phase change to, for example, a data symbol, a pilot symbol, and/or a control information symbol; accordingly, in such a case, symbols subject to symbol number i include data symbols, pilot symbols, control information symbols, and preambles (other symbols)). In the case ofFIG.2, since phase changer209B applies a phase change to baseband signal208B, a phase change is applied to each symbol inFIG.5; when a phase change is applied to baseband signal208A inFIG.2, a phase change is applied to each symbol inFIG.4; this will be described later.
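The two-rule arrangement mentioned above, in which data symbols receive a regular phase change of cycle N while pilot symbols and/or a preamble receive a separate regular phase change of cycle M, can be sketched as follows (Python/NumPy; the cycle values and the specific rule e^(j·2π·i/cycle) are assumptions for illustration).

import numpy as np

def regular_phase(i, cycle):
    # One periodic phase change rule: e^(j * 2*pi * i / cycle).
    return np.exp(1j * 2 * np.pi * i / cycle)

N, M = 9, 4  # assumed cycles: N for data symbols, M for pilot symbols/preamble
data_symbols  = np.ones(18, dtype=complex)   # stand-in data symbols
pilot_symbols = np.ones(18, dtype=complex)   # stand-in pilot symbols/preamble
data_out  = [regular_phase(i, N) * s for i, s in enumerate(data_symbols)]
pilot_out = [regular_phase(i, M) * s for i, s in enumerate(pilot_symbols)]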
Accordingly, given this frequency-axis characteristic, in the frame illustrated inFIG.5, phase changer209B illustrated inFIG.2applies a phase change to all symbols for all carriers 1 to 36 at every time: at each of times $1 through $4, the symbols subject to the phase change are all other symbols503, and at each of times $5 through $11, the symbols subject to the phase change are pilot symbols501or data symbols502.

FIG.13illustrates a frame configuration of transmission signal108_A illustrated inFIG.1that is different from the frame configuration illustrated inFIG.4. InFIG.13, objects that operate the same as inFIG.4share like reference marks. InFIG.13, frequency (carriers) is (are) represented on the horizontal axis and time is represented on the vertical axis. Similar toFIG.4, since a multi-carrier transmission scheme such as OFDM is used, symbols are present in the carrier direction. InFIG.13, similar toFIG.4, symbols for carriers 1 to 36 are shown. Moreover, similar toFIG.4, inFIG.13as well, symbols for time $1 through time $11 are shown.

InFIG.13, in addition to pilot symbols401(pilot signal251A (pa(t)) inFIG.2), data symbols402, and other symbols403, null symbols1301are also shown. Null symbol1301has an in-phase component I of zero (0) and a quadrature component Q of zero (0) (note that this symbol is referred to as a “null symbol” here, but this symbol may be referred to as something else). InFIG.13, null symbols are inserted in carrier 19 (note that the method in which the null symbols are inserted is not limited to the configuration illustrated inFIG.13; for example, a null symbol may be inserted at some certain time, a null symbol may be inserted at some certain frequency and time region, a null symbol may be inserted continuously at a time and frequency region, and a null symbol may be inserted discretely at a time and frequency region).

FIG.14illustrates a frame configuration of transmission signal108_B illustrated inFIG.1that is different from the frame configuration illustrated inFIG.5. InFIG.14, objects that operate the same as inFIG.5share like reference marks.
InFIG.14, frequency (carriers) is (are) represented on the horizontal axis and time is represented on the vertical axis. Similar toFIG.5, since a multi-carrier transmission scheme such as OFDM is used, symbols are present in the carrier direction. InFIG.14, similar toFIG.5, symbols for carriers 1 to 36 are shown. Moreover, similar toFIG.5, inFIG.14as well, symbols for time $1 through time $11 are shown.

InFIG.14, in addition to pilot symbols501(pilot signal251B (pb(t)) inFIG.2), data symbols502, and other symbols503, null symbols1301are also shown. Null symbol1301has an in-phase component I of zero (0) and a quadrature component Q of zero (0) (note that this symbol is referred to as a “null symbol” here, but this symbol may be referred to as something else). InFIG.14, null symbols are inserted in carrier 19 (note that the method in which the null symbols are inserted is not limited to the configuration illustrated inFIG.14; for example, a null symbol may be inserted at some certain time, a null symbol may be inserted at some certain frequency and time region, a null symbol may be inserted continuously at a time and frequency region, and a null symbol may be inserted discretely at a time and frequency region).

When a symbol is present in carrier A at time $B inFIG.13and a symbol is present in carrier A at time $B inFIG.14, the symbol in carrier A at time $B inFIG.13and the symbol in carrier A at time $B inFIG.14are transmitted at the same time and same frequency. Note that the frame configurations illustrated inFIG.13andFIG.14are merely examples.

The other symbols inFIG.13andFIG.14are symbols corresponding to “preamble signal252and control information symbol signal253inFIG.2”. Accordingly, when an other symbol403inFIG.13at the same time and same frequency (same carrier) as an other symbol503inFIG.14transmits control information, it transmits the same data (the same control information). Note that this is under the assumption that the frame ofFIG.13and the frame ofFIG.14are received at the same time by the reception device, but even when the frame ofFIG.13or the frame ofFIG.14has been received, the reception device can obtain the data transmitted by the transmission device.

Phase changer209B receives inputs of baseband signal208B and control signal200, applies a phase change to baseband signal208B based on control signal200, and outputs phase-changed signal210B. Baseband signal208B is a function of symbol number i (i is an integer that is greater than or equal to 0), and is expressed as x′(i). Then, phase-changed signal210B (x(i)) can be expressed as x(i) = e^{j×ε(i)} × x′(i) (j is the imaginary unit). Note that the operation performed by phase changer209B may be CDD (cyclic delay diversity) (CSD (cyclic shift diversity)) disclosed in “Standard conformable antenna diversity techniques for OFDM and its application to the DVB-T system,” IEEE Globecom 2001, pp. 3100-3105, November 2001, and in IEEE P802.11n (D3.00) Draft STANDARD for Information Technology-Telecommunications and information exchange between systems-Local and metropolitan area networks-Specific requirements-Part 11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) specifications, 2007. One characteristic of phase changer209B is that it applies a phase change to a symbol present along the frequency axis (i.e., applies a phase change to, for example, a data symbol, a pilot symbol, and/or a control information symbol).
Here, a null symbol may be considered as a target for application of a phase change (accordingly, in such a case, symbols subject to symbol number i include data symbols, pilot symbols, control information symbols, preambles (other symbols), and null symbols). However, even if a phase change is applied to a null symbol, the signals before and after the phase change are the same (the in-phase component I is zero (0) and the quadrature component Q is zero (0)). Accordingly, it is also possible to construe a null symbol as not a target for a phase change (in the case ofFIG.2, since phase changer209B applies a phase change to baseband signal208B, a phase change is applied to each symbol inFIG.14; when a phase change is applied to baseband signal208A inFIG.2, a phase change is applied to each symbol inFIG.13; this will be described later).

Accordingly, in the frame illustrated inFIG.14, phase changer209B illustrated inFIG.2applies a phase change to all symbols for all carriers 1 to 36 at every time: at each of times $1 through $4, the symbols subject to the phase change are all other symbols503, and at each of times $5 through $11, the symbols subject to the phase change are pilot symbols501or data symbols502. In every case, the handling of the phase change with respect to null symbol1301is as previously described.

The phase change value of phase changer209B is expressed as Ω(i). Baseband signal208B is x′(i) and phase-changed signal210B is x(i). Accordingly, x(i) = Ω(i) × x′(i) holds true. For example, the phase change value is set as follows (Q is an integer that is greater than or equal to 2, and represents the number of phase change cycles).

[MATH. 38]
\Omega(i) = e^{j\frac{2\pi i}{Q}}   Equation (38)

(j is the imaginary unit.) However, Equation (38) is merely a non-limiting example. For example, Ω(i) may be set so as to implement a phase change that yields a cycle Q.

Moreover, for example, inFIG.5andFIG.14, the same phase change value is applied to the same carriers, and the phase change value may be set on a per-carrier basis. For example, the following may be implemented. Regardless of time, the phase change value may be as follows for carrier 1 inFIG.5andFIG.14.

[MATH. 39]
e^{j \times 0 \times \pi}   Equation (39)

Regardless of time, the phase change value may be as follows for carrier 2 inFIG.5andFIG.14.

[MATH. 40]
e^{j\frac{1 \times \pi}{6}}   Equation (40)

Regardless of time, the phase change value may be as follows for carrier 3 inFIG.5andFIG.14.

[MATH. 41]
e^{j\frac{2 \times \pi}{6}}   Equation (41)

Regardless of time, the phase change value may be as follows for carrier 4 inFIG.5andFIG.14.

[MATH. 42]
e^{j\frac{3 \times \pi}{6}}   Equation (42)

This concludes the operational example of phase changer209B illustrated inFIG.2. Next, the advantageous effects obtained by phase changer209B illustrated inFIG.2will be described.

The other symbols403,503in “the frames ofFIG.4andFIG.5” or “the frames ofFIG.13andFIG.14” include a control information symbol. As previously described, when an other symbol503inFIG.5at the same time and same frequency (in the same carrier) as an other symbol403transmits control information, it transmits the same data (same control information). However, consider the following cases.

Case 2: transmitting a control information symbol using either antenna unit #A (109_A) or antenna unit #B (109_B) illustrated inFIG.1.

When transmission according to “case 2” is performed, since only one antenna is used to transmit the control information symbol, compared to when “transmitting a control information symbol using both antenna unit #A (109_A) and antenna unit #B (109_B)” is performed, spatial diversity gain is smaller. Accordingly, in “case 2”, data reception quality deteriorates even when received by the reception device illustrated inFIG.8. Accordingly, from the perspective of improving data reception quality, “transmitting a control information symbol using both antenna unit #A (109_A) and antenna unit #B (109_B)” is more beneficial.

Case 3: transmitting a control information symbol using both antenna unit #A (109_A) and antenna unit #B (109_B) illustrated inFIG.1, but without a phase change being performed by phase changer209B illustrated inFIG.2.

When transmission according to “case 3” is performed, since the modulated signal transmitted from antenna unit #A109_A and the modulated signal transmitted from antenna unit #B109_B are the same (or exhibit a specific phase shift), depending on the radio wave propagation environment, the reception device illustrated inFIG.8may receive an inferior reception signal, and both modulated signals may be subjected to the same multipath effect.
Accordingly, in the reception device illustrated inFIG.8, data reception quality deteriorates. In order to remedy this phenomenon, inFIG.2, phase changer209B is inserted. Since this changes the phase along the time or frequency axis, in the reception device illustrated inFIG.8, it is possible to reduce the probability of reception of an inferior reception signal. Moreover, since there is a high probability that there will be a difference between the multipath effect that the modulated signal transmitted from antenna unit #A109_A is subjected to and the multipath effect that the modulated signal transmitted from antenna unit #B109_B is subjected to, there is a high probability that diversity gain will result, and accordingly, that data reception quality in the reception device illustrated inFIG.8will improve. For these reasons, inFIG.2, phase changer209B is provided and a phase change is implemented.

Other symbols403and other symbols503include, in addition to control information symbols, for example, symbols for signal detection, symbols for performing frequency and time synchronization, and symbols for performing channel estimation (symbols for performing propagation path fluctuation estimation), for demodulating and decoding control information symbols. Moreover, “the frames ofFIG.4andFIG.5” or “the frames ofFIG.13andFIG.14” include pilot symbols401,501, and by using these, it is possible to demodulate and decode the control information symbols with high precision.

Moreover, “the frames ofFIG.4andFIG.5” or “the frames ofFIG.13andFIG.14” transmit a plurality of streams (perform MIMO transmission) at the same time and using the same frequency (frequency band) via data symbols402and data symbols502. In order to demodulate these data symbols, symbols for signal detection, symbols for frequency and time synchronization, and symbols for channel estimation (symbols for propagation path variation estimation), which are included in other symbols403and other symbols503, are used.

Here, “symbols for signal detection, symbols for frequency and time synchronization, and symbols for channel estimation (symbols for propagation path variation estimation), which are included in other symbols403and other symbols503” are applied with a phase change by phase changer209B, as described above. Under these circumstances, when this processing is not performed on data symbols402and data symbols502(on data symbols402in the example above), in the reception device, when data symbols402and data symbols502are demodulated and decoded, there is a need to perform demodulation and decoding that takes into account the phase change processing performed by phase changer209B, and this processing may be complicated (this is because “symbols for signal detection, symbols for frequency and time synchronization, and symbols for channel estimation (symbols for propagation path variation estimation), which are included in other symbols403and other symbols503” are applied with a phase change by phase changer209B).
However, as illustrated inFIG.2, in phase changer209B, when a phase change is applied to data symbols402and data symbols502(to data symbols502in the example above), in the reception device, there is the advantage that data symbols402and data symbols502can (easily) be demodulated and decoded using the channel estimation signal (propagation path fluctuation signal) estimated by using “symbols for signal detection, symbols for frequency and time synchronization, and symbols for channel estimation (symbols for propagation path variation estimation), which are included in other symbols403and other symbols503”. Additionally, as illustrated inFIG.2, in phase changer209B, when a phase change is applied to data symbols402and data symbols502(data symbols502in the example above), in multipath environments, it is possible to reduce the influence of sharp drops in electric field intensity along the frequency axis. Accordingly, it is possible to obtain the advantageous effect of an improvement in data reception quality of data symbols402and data symbols502. In this way, the point that “symbols that are targets for implementation of a phase change by phase changer205B” and “symbols that are targets for implementation of a phase change by phase changer209B” are different is a characteristic point. As described above, by applying a phase change using phase changer205B illustrated inFIG.2, it is possible to achieve the advantageous effect of an improvement in data reception quality of data symbols402and data symbols502in the reception device in, for example, LOS environments, and by applying a phase change using phase changer209B illustrated inFIG.2, for example, it is possible to achieve the advantageous effect of an improvement in data reception quality in the reception device of the control information symbols included in “the frames ofFIG.4andFIG.5” or “the frames ofFIG.13andFIG.14” and the advantageous effect that operations of demodulation and decoding of data symbols402and data symbols502become simple. Note that the advantageous effect of an improvement in data reception quality in the reception device of data symbols402and data symbols502in, for example, LOS environments, is achieved as a result of the phase change implemented by phase changer205B illustrated inFIG.2, and furthermore, the reception quality of data symbols402and data symbols502is improved by applying a phase change to data symbols402and data symbols502using phase changer209B illustrated inFIG.2. Note thatFIG.2illustrates an example of a configuration in which phase changer209B is arranged after inserter207B and phase changer209B applies a phase change to baseband signal208B, but a configuration for achieving both the above-described advantageous effects of the phase change by phase changer205B and the phase change by phase changer209B is not limited to the example illustrated inFIG.2. One example of an acceptable variation is one in which phase changer209B is removed from the configuration illustrated inFIG.2, baseband signal208B output from inserter207B becomes processed signal106_B, phase changer209A that performs the same operations as phase changer209B is inserted after inserter207A, and phase-changed signal210A, which is generated by phase changer209A implementing a phase change on baseband signal208A, becomes processed signal106_A. 
Even with such a configuration, similar to the example illustrated inFIG.2and described above, the advantageous effect of an improvement in data reception quality in the reception device of data symbols402and data symbols502in, for example, LOS environments, is achieved as a result of the phase change implemented by phase changer205B illustrated inFIG.2, and furthermore, the reception quality of data symbols402and data symbols502is improved by applying a phase change to data symbols402and data symbols502using phase changer209A. Furthermore, it is possible to achieve the advantageous effect of an improvement in data reception quality in the reception device of the control information symbols included in “the frames ofFIG.4andFIG.5” or “the frames ofFIG.13andFIG.14”.

(Supplemental Information 1)

In, for example, Embodiment 1, it is described that the operation performed by “phase changer B” may be CDD (CSD) disclosed in “Standard conformable antenna diversity techniques for OFDM and its application to the DVB-T system,” IEEE Globecom 2001, pp. 3100-3105, November 2001, and in IEEE P802.11n (D3.00) Draft STANDARD for Information Technology-Telecommunications and information exchange between systems-Local and metropolitan area networks-Specific requirements-Part 11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) specifications, 2007. Next, supplemental information regarding this point will be given.

FIG.15illustrates a configuration in the case that CDD (CSD) is used.1501is a modulated signal when cyclic delay is not implemented, and is expressed as X[n].

Cyclic delayer1502_1receives an input of modulated signal1501, applies a cyclic delay, and outputs cyclic-delayed signal1503_1. When cyclic-delayed signal1503_1is expressed as X1[n], X1[n] is given by the following equation.

[MATH. 43]
X1[n] = X[(n − δ1) mod N]   Equation (43)

Note that δ1 is the cyclic delay amount (δ1 is a real number), and X[n] is configured as N symbols (N is an integer that is greater than or equal to 2). Accordingly, n is an integer that is greater than or equal to 0 and less than or equal to N−1.

Cyclic delayer1502_M receives an input of modulated signal1501, applies a cyclic delay, and outputs cyclic-delayed signal1503_M. When cyclic-delayed signal1503_M is expressed as XM[n], XM[n] is given by the following equation.

[MATH. 44]
XM[n] = X[(n − δM) mod N]   Equation (44)

Note that δM is the cyclic delay amount (δM is a real number), and X[n] is configured as N symbols (N is an integer that is greater than or equal to 2). Accordingly, n is an integer that is greater than or equal to 0 and less than or equal to N−1.

Cyclic delayer1502_i (i is an integer that is greater than or equal to 1 and less than or equal to M (M is an integer that is greater than or equal to 1)) receives an input of modulated signal1501, applies a cyclic delay, and outputs cyclic-delayed signal1503_i. When cyclic-delayed signal1503_iis expressed as Xi[n], Xi[n] is given by the following equation.

[MATH. 45]
Xi[n] = X[(n − δi) mod N]   Equation (45)

Note that δi is the cyclic delay amount (δi is a real number), and X[n] is configured as N symbols (N is an integer that is greater than or equal to 2). Accordingly, n is an integer that is greater than or equal to 0 and less than or equal to N−1. Cyclic-delayed signal1503_iis then transmitted from antenna i (accordingly, cyclic-delayed signal1503_1, . . . , and cyclic-delayed signal1503_M are each transmitted from different antennas).
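Equations (43) through (45) all describe the same per-branch cyclic shift. A minimal Python sketch follows (using NumPy; the block length and delay amounts are assumed values). The final assertion anticipates the relationship described next: a cyclic delay of the time-domain signal is equivalent to a per-carrier phase change.

import numpy as np

def cyclic_delay(x, delta):
    # Equation (45): Xi[n] = X[(n - delta_i) mod N] over an N-symbol block.
    n = np.arange(len(x))
    return np.asarray(x)[(n - delta) % len(x)]

X = np.arange(8, dtype=complex)                     # modulated signal 1501, N = 8 (assumed)
branches = [cyclic_delay(X, d) for d in (0, 2, 5)]  # one cyclic-delayed signal per antenna

# If X is the IFFT of per-carrier symbols v (as in OFDM), a cyclic delay of delta
# corresponds to multiplying carrier i by e^(-j*2*pi*delta*i/N) under this convention.
v = np.fft.fft(X)
assert np.allclose(np.fft.fft(cyclic_delay(X, 3)),
                   v * np.exp(-2j * np.pi * 3 * np.arange(8) / 8))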
Transmitting such cyclic-delayed signals from different antennas makes it possible to achieve the diversity effect via cyclic delay (in particular, to reduce the adverse effects of delayed radio waves), and in the reception device, to achieve an advantageous effect of improved data reception quality. For example, phase changer209B inFIG.2may be replaced with the cyclic delayer illustrated inFIG.15, and may perform the same operations performed by phase changer209B. Accordingly, in phase changer209B inFIG.2, the cyclic delay amount δ (δ is a real number) is applied, and the input signal for phase changer209B is expressed as Y[n]. When the output signal for phase changer209B is expressed as Z[n], Z[n] is given by the following equation.

[MATH. 46]
Z[n] = Y[(n − δ) mod N]   Equation (46)

Note that Y[n] is configured as N samples (N is an integer that is greater than or equal to 2). Accordingly, n is an integer that is greater than or equal to 0 and less than or equal to N−1.

Next, the relationship between the cyclic delay amount and the phase change will be described. For example, consider a case in which CDD (CSD) is applied to OFDM. Note that the carrier arrangement when OFDM is used is as illustrated inFIG.16. InFIG.16,1601is a symbol, and frequency (carriers) is (are) represented on the horizontal axis, with increasing frequency from left to right and carriers arranged in ascending order. Accordingly, the carrier of the lowest frequency is “carrier 1”, and subsequent carriers are “carrier 2”, “carrier 3”, “carrier 4”, . . . .

For example, in phase changer209B illustrated inFIG.2, a cyclic delay amount τ is applied. In such a case, phase change value Ω[i] in “carrier i” is expressed as follows.

[MATH. 47]
\Omega[i] = e^{j \times \mu \times i}   Equation (47)

Note that μ is a value that can be calculated from the cyclic delay amount and/or the size of the fast Fourier transform (FFT). When the baseband signal for “carrier i”, time t, before being applied with a phase change (before cyclic delay processing) is expressed as v′[i][t], the signal v[i][t] for “carrier i”, time t, after being applied with a phase change can be expressed as v[i][t] = Ω[i] × v′[i][t].

(Supplemental Information 2)

As a matter of course, the embodiments may be carried out by combining a plurality of the exemplary embodiments and other contents described in the present specification. Moreover, each exemplary embodiment and the other contents are only examples. For example, while a “modulating method, an error correction coding method (an error correction code, a code length, a coding rate and the like to be used), control information and the like” are exemplified, it is possible to carry out the present disclosure with the same configuration even when other types of a “modulating method, an error correction coding method (an error correction code, a code length, a coding rate and the like to be used), control information and the like” are applied.

Regarding the modulation scheme, even when a modulation scheme other than the modulation schemes described in the present specification is used, it is possible to carry out the embodiments and the other subject matter described herein.
For example, amplitude phase shift keying (APSK) (such as 16APSK, 64APSK, 128APSK, 256APSK, 1024APSK and 4096APSK), pulse amplitude modulation (PAM) (such as 4PAM, 8PAM, 16PAM, 64PAM, 128PAM, 256PAM, 1024PAM and 4096PAM), phase shift keying (PSK) (such as BPSK, QPSK, 8PSK, 16PSK, 64PSK, 128PSK, 256PSK, 1024PSK and 4096PSK), and quadrature amplitude modulation (QAM) (such as 4QAM, 8QAM, 16QAM, 64QAM, 128QAM, 256QAM, 1024QAM and 4096QAM) may be applied, or in each modulation scheme, uniform mapping or non-uniform mapping may be performed.

Moreover, a method for arranging 2, 4, 8, 16, 64, 128, 256, 1024, etc., signal points on an I-Q plane (a modulation scheme having 2, 4, 8, 16, 64, 128, 256, 1024, etc., signal points) is not limited to a signal point arrangement method of the modulation schemes described in the present specification. Hence, a function of outputting an in-phase component and a quadrature component based on a plurality of bits is a function in a mapper, and performing precoding and a phase change thereafter is one effective function of the present disclosure.

In the present specification, when “∀” and/or “∃” is present, “∀” represents a universal quantifier, and “∃” represents an existential quantifier.

Moreover, in the present specification, when there is a complex plane, the unit of phase, such as that of an argument, is “radian”. When the complex plane is used, a complex number can be displayed in polar form using polar coordinates. When point (a, b) on the complex plane is associated with complex number z = a + jb (a and b are both real numbers, and j is the imaginary unit), and when this point is expressed by [r, θ] in polar coordinates, a = r × cos θ and b = r × sin θ,

[MATH. 48]
r = \sqrt{a^{2} + b^{2}}   Equation (48)

holds true, r is the absolute value of z (r = |z|), and θ is the argument. Then, z = a + jb is expressed by r × e^{jθ}.

In the present specification, the reception device in the terminal and the antennas may be configured as separate devices. For example, the reception device includes an interface that receives an input, via a cable, of a signal received by an antenna or a signal generated by applying a frequency conversion to a signal received by an antenna, and the reception device performs subsequent processing. Moreover, data/information obtained by the reception device is subsequently converted into a video or audio, and a display (monitor) displays the video or a speaker outputs the audio. Further, the data/information obtained by the reception device may be subjected to signal processing related to a video or a sound (signal processing may not be performed), and may be output from an RCA terminal (a video terminal or an audio terminal), a Universal Serial Bus (USB), or a High-Definition Multimedia Interface (registered trademark) (HDMI) of the reception device.

In the present specification, it can be considered that the apparatus which includes the transmission device is a communications and broadcast apparatus, such as a broadcast station, a base station, an access point, a terminal or a mobile phone. In such cases, it can be considered that the apparatus that includes the reception device is a communication apparatus such as a television, a radio, a terminal, a personal computer, a mobile phone, an access point, or a base station.
Moreover, it can also be considered that the transmission device and the reception device according to the present disclosure are each a device having communication functions, formed so as to be connectable via some interface to an apparatus for executing applications in, for example, a television, a radio, a personal computer or a mobile phone.

Moreover, in this embodiment, symbols other than data symbols, such as pilot symbols (preamble, unique word, post-amble, reference symbol, etc.) or symbols for control information, may be arranged in any way in a frame. Here, the terms "pilot symbol" and "control information" are used, but the naming of such symbols is not important; the functions that they perform are. A pilot symbol may be, for example, a known symbol that is modulated using PSK modulation in the transmitter and the receiver (alternatively, the receiver may come to know the symbol transmitted by the transmitter through, for example, synchronization), and the receiver uses the symbol to perform, for example, frequency synchronization, time synchronization, and channel estimation (channel state information (CSI) estimation) (of each modulated signal). Moreover, the symbol for control information is a symbol for transmitting information, other than data (such as application data), that is required to be transmitted to a communication partner in order to establish communication (this information is, for example, the modulation scheme, error correction encoding method, or coding rate of the error correction encoding method used in the communication, or settings information in an upper layer).

Note that the present disclosure is not limited to each exemplary embodiment, and can be carried out with various modifications. For example, in each embodiment, the present disclosure is described as being performed as a communications device. However, the present disclosure is not limited to this case, and the communications method may also be implemented as software.

Moreover, in the above description, precoding switching methods for a method in which two modulated signals are transmitted from two antennas are described, but these examples are not limiting. A precoding switching method that similarly changes the precoding weights (matrix) can also be applied to a method in which precoding is performed on four mapped signals to generate four modulated signals that are transmitted from four antennas, that is to say, a method in which precoding is performed on N mapped signals to generate N modulated signals that are transmitted from N antennas (a short sketch of this generalization is given after this passage).

The terms "precoding" and "precoding weight" are used in the present specification. The terms used to refer to such signal processing are not important per se; the signal processing itself is what is important to the present disclosure. Streams s1(t) and s2(t) may transmit different data, or may transmit the same data. The transmitting antenna in the transmission device, the receiving antenna in the reception device, and each antenna illustrated in the drawings may each be configured of a plurality of antennas. The transmission device needs to notify the reception device of the transmission method (MIMO, SISO, space-time block code, interleaving method), modulation scheme, and/or error correction encoding method (these notifications may be omitted depending on the embodiment); this information is present in the frame transmitted by the transmission device, and the reception device changes its operation upon receipt.
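As referenced above, the generalization from two to N antennas amounts to a single matrix operation. The following is a minimal sketch (Python with numpy); the choice of a unitary DFT matrix for W and the use of BPSK streams are assumptions made for the example, not requirements of the present disclosure:

```python
import numpy as np

def precode(S, W):
    """Precoding generalized to N streams: N mapped signals in (rows of S),
    N modulated signals out, one per transmitting antenna."""
    return W @ S

N, T = 4, 100                                              # 4 antennas, 100 symbols per stream
n = np.arange(N)
W = np.exp(-2j * np.pi * np.outer(n, n) / N) / np.sqrt(N)  # example unitary (DFT) matrix
rng = np.random.default_rng(2)
S = (2 * rng.integers(0, 2, (N, T)) - 1).astype(complex)   # N BPSK streams (illustrative)
Z = precode(S, W)                                          # N modulated streams to transmit
```

A precoding switching method would then change W regularly with the symbol number, exactly as in the two-antenna case.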
Note that a program for executing the above-described communications method may be stored in Read Only Memory (ROM) in advance, and a Central Processing Unit (CPU) may be caused to operate according to this program. Moreover, the program for executing the communications method may be stored in a computer-readable recording medium, the program stored in the recording medium may be loaded into Random Access Memory (RAM) in a computer, and the computer may be caused to operate according to this program.

Each configuration of each of the above-described embodiments, etc., may be realized as an LSI (large scale integration) circuit, which is typically an integrated circuit. These integrated circuits may be formed as separate chips, or may be formed as one chip so as to include the entire configuration or part of the configuration of each embodiment. LSI is described here, but the integrated circuit may also be referred to as an IC (integrated circuit), a system LSI circuit, a super LSI circuit or an ultra LSI circuit depending on the degree of integration. Moreover, the circuit integration technique is not limited to LSI, and may be realized by a dedicated circuit or a general purpose processor. A Field Programmable Gate Array (FPGA) that can be programmed after manufacturing of the LSI circuit, or a reconfigurable processor in which the connections and settings of circuit cells inside the LSI circuit are reconfigurable, may be used. Further, if a circuit integration technology that replaces LSI emerges from advances in semiconductor technology or from another derived technology, as a matter of course, the functional blocks may be integrated by using that technology. Adaptation of biotechnology, for example, is a possibility.

The present disclosure can be widely applied to radio systems that transmit different modulated signals from different antennas. Moreover, the present disclosure can also be applied when MIMO transmission is used in a wired communications system including a plurality of transmission points (for example, a power line communication (PLC) system, an optical transmission system, or a digital subscriber line (DSL) system).

Embodiment 2

In this embodiment, an implementation method will be described that is different from the configuration illustrated inFIG.2and described in Embodiment 1.

FIG.1illustrates one example of a configuration of a transmission device according to this embodiment, such as a base station, access point, or broadcast station. AsFIG.1is described in detail in Embodiment 1, description will be omitted from this embodiment. Signal processor106receives inputs of mapped signals105_1and105_2, signal group110, and control signal100, performs signal processing based on control signal100, and outputs signal-processed signals106_A and106_B. Here, signal-processed signal106_A is expressed as u1(i), and signal-processed signal106_B is expressed as u2(i) (i is a symbol number; for example, i is an integer that is greater than or equal to 0). Note that details regarding the signal processing will be described with reference toFIG.18later.

FIG.18illustrates one example of a configuration of signal processor106illustrated inFIG.1. Weighting synthesizer (precoder)203receives inputs of mapped signal201A (mapped signal105_1inFIG.1), mapped signal201B (mapped signal105_2inFIG.1), and control signal200(control signal100inFIG.1), performs weighting synthesis (precoding) based on control signal200, and outputs weighted signal204A and weighted signal204B.
Here, mapped signal201A is expressed as s1(t), mapped signal201B is expressed as s2(t), weighted signal204A is expressed as z1(t), and weighted signal204B is expressed as z2′(t). Note that one example of t is time (s1(t), s2(t), z1(t), and z2′(t) are defined as complex numbers (accordingly, they may be real numbers)). Here, these are given as functions of time, but they may be functions of a "frequency (carrier number)", and may be functions of "time and frequency". They may also be functions of a "symbol number". Note that this also applies to Embodiment 1. Weighting synthesizer (precoder)203performs the calculations indicated in Equation (1).

Phase changer205B receives inputs of weighting synthesized signal204B and control signal200, applies a phase change to weighting synthesized signal204B based on control signal200, and outputs phase-changed signal206B. Note that phase-changed signal206B is expressed as z2(t), and z2(t) is defined as a complex number (and may be a real number).

Next, specific operations performed by phase changer205B will be described. In phase changer205B, for example, a phase change of y(i) is applied to z2′(i). Accordingly, z2(i) can be expressed as z2(i)=y(i)×z2′(i) (i is a symbol number (i is an integer that is greater than or equal to 0)). For example, the phase change value is set as shown in Equation (2) (N is an integer that is greater than or equal to 2, and N is the phase change cycle) (when N is set to an odd number greater than or equal to 3, data reception quality may improve). However, Equation (2) is merely a non-limiting example. Here, phase change value y(i)=ej×δ(i). Then, z1(i) and z2(i) can be expressed with Equation (3). Note that δ(i) is a real number. z1(i) and z2(i) are transmitted from the transmission device at the same time and using the same frequency (same frequency band). In Equation (3), the phase change value is not limited to the value used in Equation (2); for example, a method in which the phase is changed cyclically or regularly is conceivable. As described in Embodiment 1, conceivable examples of the (precoding) matrix used in Equation (1) and Equation (3) are illustrated in Equation (5) through Equation (36) (however, the precoding matrix is not limited to these examples (the same applies to Embodiment 1)). A short sketch of the weighting synthesis and this phase change is given below, following the description of phase changer209A.

Inserter207A receives inputs of weighting synthesized signal204A, pilot symbol signal (pa(t)) (t is time) (251A), preamble signal252, control information symbol signal253, and control signal200, and based on information on the frame configuration included in control signal200, outputs baseband signal208A based on the frame configuration. Similarly, inserter207B receives inputs of phase-changed signal206B, pilot symbol signal (pb(t)) (251B), preamble signal252, control information symbol signal253, and control signal200, and based on information on the frame configuration included in control signal200, outputs baseband signal208B based on the frame configuration.

Phase changer209A receives inputs of baseband signal208A and control signal200, applies a phase change to baseband signal208A based on control signal200, and outputs phase-changed signal210A. Baseband signal208A is a function of symbol number i (i is an integer that is greater than or equal to 0), and is expressed as x′(i). Then, phase-changed signal210A (x(i)) can be expressed as x(i)=ej×ε(i)×x′(i) (j is an imaginary number unit).
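The weighting synthesis and the phase change of phase changer205B described above can be sketched minimally as follows (Python with numpy). The matrix W merely stands in for the precoding matrix of Equation (1), and the cycle-N phase rule stands in for Equation (2); both are assumptions for illustration:

```python
import numpy as np

def precode_and_phase_change(s1, s2, N=3):
    """Weighting synthesis of two mapped streams, then a regular phase
    change of cycle N on the second weighted stream (as phase changer
    205B does); W and the phase rule are illustrative stand-ins."""
    W = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    z1 = W[0, 0] * s1 + W[0, 1] * s2              # weighted signal 204A, z1(i)
    z2p = W[1, 0] * s1 + W[1, 1] * s2             # weighted signal 204B, z2'(i)
    i = np.arange(len(s1))                        # symbol number i
    y = np.exp(1j * 2 * np.pi * (i % N) / N)      # phase change value y(i), cycle N
    return z1, y * z2p                            # (z1(i), z2(i) = y(i) * z2'(i))

qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
rng = np.random.default_rng(3)
s1, s2 = rng.choice(qpsk, 12), rng.choice(qpsk, 12)
z1, z2 = precode_and_phase_change(s1, s2)         # sent at the same time/frequency
```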
As described in Embodiment 1, etc., note that the operation performed by phase changer209A may be CDD (cyclic delay diversity) (CSD (cycle shift diversity)) disclosed in “Standard conformable antenna diversity techniques for OFDM and its application to the DVB-T system,” IEEE Globecom 2001, pp. 3100-3105, November 2001, and in IEEE P802.11n (D3.00) Draft STANDARD for Information Technology-Telecommunications and information exchange between systems-Local and metropolitan area networks-Specific requirements-Part 11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) specifications, 2007. One characteristic of phase changer209A is that it applies a phase change to a symbol present along the frequency axis (i.e., applies a phase change to, for example, a data symbol, a pilot symbol, and/or a control information symbol). FIG.3illustrates one example of a configuration of radio units107_A and107_B illustrated inFIG.1.FIG.3is described in Embodiment 1. Accordingly, description will be omitted from this embodiment. FIG.4illustrates a frame configuration of transmission signal108_A illustrated inFIG.1.FIG.4is described in Embodiment 1. Accordingly, description will be omitted from this embodiment. FIG.5illustrates a frame configuration of transmission signal108_B illustrated inFIG.1.FIG.5is described in Embodiment 1. Accordingly, description will be omitted from this embodiment. When a symbol is present in carrier A at time $B inFIG.4and a symbol is present in carrier A at time $B inFIG.5, the symbol in carrier A at time $B inFIG.4and the symbol in carrier A at time $B inFIG.5are transmitted at the same time and same frequency. Note that the frame configuration is not limited to the configurations illustrated inFIG.4andFIG.5;FIG.4andFIG.5are mere examples of frame configurations. The other symbols inFIG.4andFIG.5are symbols corresponding to “preamble signal252and control information symbol signal253inFIG.2”. Accordingly, when an other symbol503inFIG.5at the same time and same frequency (same carrier) as an other symbol403inFIG.4transmits control information, it transmits the same data (the same control information). Note that this is under the assumption that the frame ofFIG.4and the frame ofFIG.5are received at the same time by the reception device, but even when the frame ofFIG.4or the frame ofFIG.5has been received, the reception device can obtain the data transmitted by the transmission device. FIG.6illustrates one example of components relating to control information generation for generating control information symbol signal253illustrated inFIG.2.FIG.6is described in Embodiment 1. Accordingly, description will be omitted from this embodiment. FIG.7illustrates one example of a configuration of antenna unit #A (109_A) and antenna unit #B (109_B) illustrated inFIG.1(in this example, antenna unit #A (109_A) and antenna unit #B (109_B) include a plurality of antennas).FIG.7is described in Embodiment 1. Accordingly, description will be omitted from this embodiment. FIG.8illustrates one example of a configuration of a reception device that receives a modulated signal upon the transmission device illustrated inFIG.1transmitting, for example, a transmission signal having the frame configuration illustrated inFIG.4orFIG.5.FIG.8is described in Embodiment 1. Accordingly, description will be omitted from this embodiment. 
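Since, as noted above, the operation of phase changer209A may be CDD (CSD), it is worth confirming numerically that a cyclic delay in the time domain corresponds to the per-carrier phase change of Equations (46) and (47). The following is a minimal sketch (Python with numpy; the FFT size and the delay amount are illustrative assumptions, not values from the specification):

```python
import numpy as np

N = 64        # number of samples per OFDM symbol (assumed FFT size)
delta = 3     # cyclic delay amount, in samples (illustrative value)

rng = np.random.default_rng(0)
V = rng.standard_normal(N) + 1j * rng.standard_normal(N)  # per-carrier symbols

y = np.fft.ifft(V)              # time-domain samples Y[n]
z = np.roll(y, delta)           # Z[n] = Y[(n - delta) mod N], as in Equation (46)

# The same result via a per-carrier phase change Omega[k] = e^{j*mu*k},
# with mu = -2*pi*delta/N: one way mu follows from the cyclic delay
# amount and the FFT size.
k = np.arange(N)
z_freq = np.fft.ifft(V * np.exp(-2j * np.pi * k * delta / N))

assert np.allclose(z, z_freq)   # cyclic delay == linear phase ramp per carrier
```

The assert confirms that cyclically delaying the time-domain samples and rotating each carrier by a frequency-proportional phase are the same operation.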
FIG.10illustrates one example of a configuration of antenna unit #X (801X) and antenna unit #Y (801Y) illustrated inFIG.8(antenna unit #X (801X) and antenna unit #Y (801Y) are exemplified as including a plurality of antennas).FIG.10is described in Embodiment 1. Accordingly, description will be omitted from this embodiment.

Next, consider the configuration in which phase changer205B and phase changer209A are inserted into signal processor106in the transmission device illustrated inFIG.1, as illustrated inFIG.18. The characteristics and advantageous effects of this configuration will be described.

As described with reference toFIG.4andFIG.5, precoding (weighting synthesis) is applied to mapped signal s1(i) (201A) (i is a symbol number; i is an integer greater than or equal to 0) obtained via mapping using the first sequence and to mapped signal s2(i) (201B) obtained via mapping using the second sequence, and phase changer205B applies a phase change to one of the obtained weighting synthesized signals204A and204B. Weighting synthesized signal204A and phase-changed signal206B are then transmitted at the same frequency and at the same time. Accordingly, inFIG.4andFIG.5, a phase change is applied to data symbol502inFIG.5(in the case ofFIG.18, since phase changer205B applies the phase change to weighting synthesized signal204B, a phase change is applied to data symbol502inFIG.5; when a phase change is applied to weighting synthesized signal204A, a phase change is applied to data symbol402inFIG.4; this will be described later).

For example,FIG.11illustrates an extraction of carrier 1 through carrier 5 and time $4 through time $6 from the frame illustrated inFIG.5. Note that inFIG.11, similar toFIG.5,501is a pilot symbol,502is a data symbol, and503is an other symbol.

As described above, among the symbols illustrated inFIG.11, phase changer205B applies a phase change to the data symbols located at (carrier 1, time $5), (carrier 2, time $5), (carrier 3, time $5), (carrier 4, time $5), (carrier 5, time $5), (carrier 1, time $6), (carrier 2, time $6), (carrier 4, time $6), and (carrier 5, time $6). Accordingly, the phase change values for the data symbols illustrated inFIG.11can be expressed as "ej×δ15(i)" for (carrier 1, time $5), "ej×δ25(i)" for (carrier 2, time $5), "ej×δ35(i)" for (carrier 3, time $5), "ej×δ45(i)" for (carrier 4, time $5), "ej×δ55(i)" for (carrier 5, time $5), "ej×δ16(i)" for (carrier 1, time $6), "ej×δ26(i)" for (carrier 2, time $6), "ej×δ46(i)" for (carrier 4, time $6), and "ej×δ56(i)" for (carrier 5, time $6).

Among the symbols illustrated inFIG.11, the other symbols located at (carrier 1, time $4), (carrier 2, time $4), (carrier 3, time $4), (carrier 4, time $4), and (carrier 5, time $4), and the pilot symbol located at (carrier 3, time $6) are not subject to phase change by phase changer205B. This point is a characteristic of phase changer205B. Note that, as illustrated inFIG.4, data carriers are arranged at "the same carriers and the same times" as the symbols subject to phase change inFIG.11, which are the data symbols located at (carrier 1, time $5), (carrier 2, time $5), (carrier 3, time $5), (carrier 4, time $5), (carrier 5, time $5), (carrier 1, time $6), (carrier 2, time $6), (carrier 4, time $6), and (carrier 5, time $6).
In other words, inFIG.4, the symbols located at (carrier 1, time $5), (carrier 2, time $5), (carrier 3, time $5), (carrier 4, time $5), (carrier 5, time $5), (carrier 1, time $6), (carrier 2, time $6), (carrier 4, time $6), and (carrier 5, time $6) are data symbols (in other words, the data symbols that perform MIMO transmission (transmit a plurality of streams) are subject to phase change by phase changer205B).

One example of the phase change that phase changer205B applies to the data symbols is the method given in Equation (2), in which the phase change is applied to the data symbols regularly (such as at each cycle N) (however, the phase change method implemented on the data symbols is not limited to this example). With this, when the environment is one in which the direct waves are dominant, such as in an LOS environment, it is possible to achieve improved data reception quality in the reception device with respect to the data symbols that perform MIMO transmission (transmit a plurality of streams).

Next, the advantageous effects of this will be described. For example, the modulation scheme used by mapper104inFIG.1is quadrature phase shift keying (QPSK) (mapped signal201A inFIG.18is a QPSK signal, and mapped signal201B is a QPSK signal; in other words, two QPSK streams are transmitted). Accordingly, for example, using channel estimated signals806_1and806_2, 16 candidate signal points are obtained by signal processor811illustrated inFIG.8(2-bit transmission is possible with QPSK; since there are two streams, 4-bit transmission is achieved, and thus there are 2^4=16 candidate signal points) (note that 16 other candidate signal points are obtained by using channel estimated signals808_1and808_2as well, but since the description thereof is the same as described above, the following description will focus on the 16 candidate signal points obtained by using channel estimated signals806_1and806_2).

FIG.12illustrates an example of the state resulting from such a case. In (A) and (B) inFIG.12, in-phase I is represented on the horizontal axis and quadrature Q is represented on the vertical axis, and 16 candidate signal points are present in the illustrated in-phase I-quadrature Q planes (among the 16 candidate signal points, one is the signal point that was transmitted by the transmission device; accordingly, this is referred to as "16 candidate signal points").

When the environment is one in which the direct waves are dominant, such as in an LOS environment, consider a first case in which phase changer205B is omitted from the configuration illustrated inFIG.18(in other words, a case in which a phase change is not applied by phase changer205B inFIG.18). In the first case, since a phase change is not applied, there is a possibility that the state illustrated in (A) inFIG.12will be realized. When the state falls into the state illustrated in (A) inFIG.12, as illustrated by "signal points1201and1202", "signal points1203,1204,1205, and1206", and "signal points1207,1208", the signal points become dense (the distances between some signal points shorten). Accordingly, in the reception device illustrated inFIG.8, data reception quality may deteriorate. In order to remedy this phenomenon, inFIG.18, phase changer205B is inserted.
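The way the candidate signal points collapse without a phase change, and spread apart with one, can be reproduced numerically. The following sketch (Python with numpy) assumes an LOS-like channel with h1 = h2 = 1 and an arbitrary phase change value; none of these values come from the specification:

```python
import numpy as np

# Two QPSK streams: the receiver sees r = h1*z1 + h2*z2, i.e. 2^4 = 16
# candidate signal points per channel-estimate pair.
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
h1, h2 = 1.0, 1.0      # direct-wave-dominant (LOS-like) channel, illustrative

def point_stats(delta):
    """Number of distinct candidate points and the minimum pairwise distance
    when the second stream is rotated by e^{j*delta}."""
    c = np.array([h1 * a + h2 * np.exp(1j * delta) * b
                  for a in qpsk for b in qpsk])
    d = np.abs(c[:, None] - c[None, :])[~np.eye(len(c), dtype=bool)]
    return len(np.unique(np.round(c, 9))), d.min()

print(point_stats(0.0))            # (9, 0.0): points coincide, i.e. become dense
print(point_stats(2 * np.pi / 9))  # (16, >0): all 16 points kept apart
```

Because phase changer205B varies the phase with symbol number i, symbols with well-separated candidate points are mixed in among the unfavorable ones, which is what the error correction code then exploits, as described next.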
When phase changer205B is inserted, due to symbol number i, there is a mix of symbol numbers whose signal points are dense (the distances between some signal points shorten), such as in (A) inFIG.12, and symbol numbers whose distances between signal points are long, such as in (B) inFIG.12. With respect to this state, since error correction code is introduced, high error correction performance is achieved, and in the reception device illustrated inFIG.8, high data reception quality can be achieved.

Note that inFIG.18, a phase change is not applied by phase changer205B to symbols used for demodulating (wave detection of) data symbols and for channel estimation, such as pilot symbols and a preamble. With this, among the data symbols, the above mix can be realized: due to symbol number i, there is a mix of symbol numbers whose signal points are dense (the distances between some signal points shorten), such as in (A) inFIG.12, and symbol numbers whose distances between signal points are long, such as in (B) inFIG.12.

However, this mix among the data symbols can also be realized even if a phase change is applied by phase changer205B inFIG.18to the symbols used for demodulating (wave detection of) data symbols and for channel estimation, such as pilot symbols and a preamble. In such a case, a phase change must be applied to the pilot symbols and/or the preamble under some condition. For example, one conceivable method is to implement a rule for "applying a phase change to a pilot symbol and/or a preamble" that is separate from the rule for applying a phase change to a data symbol. Another example is a method of regularly applying a phase change to the data symbols in a cycle N, and regularly applying a phase change to the pilot symbols and/or the preamble in a cycle M (N and M are integers that are greater than or equal to 2).

As described above, phase changer209A receives inputs of baseband signal208A and control signal200, applies a phase change to baseband signal208A based on control signal200, and outputs phase-changed signal210A. Baseband signal208A is a function of symbol number i (i is an integer that is greater than or equal to 0), and is expressed as x′(i). Then, phase-changed signal210A (x(i)) can be expressed as x(i)=ej×ε(i)×x′(i) (j is an imaginary number unit). Note that the operation performed by phase changer209A may be CDD (cyclic delay diversity) (CSD (cyclic shift diversity)) disclosed in "Standard conformable antenna diversity techniques for OFDM and its application to the DVB-T system," IEEE Globecom 2001, pp. 3100-3105, November 2001, and in IEEE P802.11n (D3.00) Draft STANDARD for Information Technology-Telecommunications and information exchange between systems-Local and metropolitan area networks-Specific requirements-Part 11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) specifications, 2007.
One characteristic of phase changer209A is that it applies a phase change to a symbol present along the frequency axis (i.e., applies a phase change to, for example, a data symbol, a pilot symbol, and/or a control information symbol (accordingly, in such a case, symbols subject to symbol number i include data symbols, pilot symbols, control information symbols, and preambles (other symbols)) (in the case ofFIG.18, since phase changer209A applies a phase change to baseband signal208A, a phase change is applied to each symbol inFIG.4). Accordingly, in the frame illustrated inFIG.4, phase changer209A illustrated inFIG.18applies a phase change to all symbols (in this case, all other symbols403) for all carriers 1 to 36 at time $1. Similarly, phase changer209A illustrated inFIG.18applies a phase change to all symbols (in this case, all other symbols403) for all carriers 1 to 36 at time $2, phase changer209A illustrated inFIG.18applies a phase change to all symbols (in this case, all other symbols403) for all carriers 1 to 36 at time $3, phase changer209A illustrated inFIG.18applies a phase change to all symbols (in this case, all other symbols403) for all carriers 1 to 36 at time $4, phase changer209A illustrated inFIG.18applies a phase change to all symbols (in this case, pilot symbols401or data symbols402) for all carriers 1 to 36 at time $5, phase changer209A illustrated inFIG.18applies a phase change to all symbols (in this case, pilot symbols401or data symbols402) for all carriers 1 to 36 at time $6, phase changer209A illustrated inFIG.18applies a phase change to all symbols (in this case, pilot symbols401or data symbols402) for all carriers 1 to 36 at time $7, phase changer209A illustrated inFIG.18applies a phase change to all symbols (in this case, pilot symbols401or data symbols402) for all carriers 1 to 36 at time $8, phase changer209A illustrated inFIG.18applies a phase change to all symbols (in this case, pilot symbols401or data symbols402) for all carriers 1 to 36 at time $9, phase changer209A illustrated inFIG.18applies a phase change to all symbols (in this case, pilot symbols401or data symbols402) for all carriers 1 to 36 at time $10, phase changer209A illustrated inFIG.18applies a phase change to all symbols (in this case, pilot symbols401or data symbols402) for all carriers 1 to 36 at time $11 . . . . FIG.13illustrates a frame configuration different from the frame configuration illustrated inFIG.4of transmission signal108_A illustrated inFIG.1.FIG.13is described in Embodiment 1. Accordingly, description will be omitted from this embodiment. FIG.14illustrates a frame configuration different from the frame configuration illustrated inFIG.5of transmission signal108_B illustrated inFIG.1.FIG.14is described in Embodiment 1. Accordingly, description will be omitted from this embodiment. When a symbol is present in carrier A at time $B inFIG.13and a symbol is present in carrier A at time $B inFIG.14, the symbol in carrier A at time $B inFIG.13and the symbol in carrier A at time $B inFIG.14are transmitted at the same time and same frequency. Note that the frame configurations illustrated inFIG.13andFIG.14are merely examples. The other symbols inFIG.13andFIG.14are symbols corresponding to “preamble signal252and control information symbol signal253inFIG.18”. Accordingly, when an other symbol403inFIG.13at the same time and same frequency (same carrier) as an other symbol503inFIG.14transmits control information, it transmits the same data (the same control information). 
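The contrast walked through above, phase changer209A rotating every symbol on every carrier while phase changer205B touches only the data symbols, can be condensed into a short sketch (Python with numpy). The frame dimensions, cycle values, and data mask below are assumptions for illustration only:

```python
import numpy as np

def phase_change_209A(frame, Q=5):
    """209A-style blanket phase change: every symbol on every carrier is
    rotated, with the phase depending on the carrier index (frequency axis)."""
    n_carrier = frame.shape[1]
    k = np.arange(n_carrier)
    omega = np.exp(1j * 2 * np.pi * (k % Q) / Q)   # per-carrier phase values
    return frame * omega[None, :]

def phase_change_205B(frame, is_data, N=3):
    """205B-style selective phase change: only data symbols are rotated;
    pilots, preambles, and other symbols pass through unchanged."""
    out = frame.astype(complex)                    # copy of the frame
    flat, mask = out.reshape(-1), is_data.reshape(-1)
    i = np.arange(mask.sum())                      # symbol number i over data only
    flat[mask] = flat[mask] * np.exp(1j * 2 * np.pi * (i % N) / N)
    return out

# Toy frame: 4 time slots x 8 carriers; the last two rows carry data symbols.
frame = np.ones((4, 8), dtype=complex)
is_data = np.zeros((4, 8), dtype=bool)
is_data[2:, :] = True
after_209A = phase_change_209A(frame)            # all 32 symbols rotated
after_205B = phase_change_205B(frame, is_data)   # only the 16 data symbols rotated
```

Note that a null symbol (whose in-phase and quadrature components are both zero) is left unchanged by either function, since rotating zero leaves zero; this matches the handling discussed next.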
Note that this is under the assumption that the frame ofFIG.13and the frame ofFIG.14are received at the same time by the reception device, but even when the frame ofFIG.13or the frame ofFIG.14has been received, the reception device can obtain the data transmitted by the transmission device. Phase changer209A receives inputs of baseband signal208A and control signal200, applies a phase change to baseband signal208A based on control signal200, and outputs phase-changed signal210A. Baseband signal208A is a function of symbol number i (i is an integer that is greater than or equal to 0), and is expressed as x′(i). Then, phase-changed signal210A (x(i)) can be expressed as x(i)=ej×ε(i)×x′(i) (j is an imaginary number unit). Note that the operation performed by phase changer209A may be CDD (cyclic delay diversity) (CSD (cycle shift diversity)) disclosed in “Standard conformable antenna diversity techniques for OFDM and its application to the DVB-T system,” IEEE Globecom 2001, pp. 3100-3105, November 2001, and in IEEE P802.11n (D3.00) Draft STANDARD for Information Technology-Telecommunications and information exchange between systems-Local and metropolitan area networks-Specific requirements-Part 11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) specifications, 2007. One characteristic of phase changer209A is that it applies a phase change to a symbol present along the frequency axis (i.e., applies a phase change to, for example, a data symbol, a pilot symbol, and/or a control information symbol). Here, a null symbol may be considered as a target for application of a phase change (accordingly, in such a case, symbols subject to symbol number i include data symbols, pilot symbols, control information symbols, preambles (other symbols), and null symbols). However, even if a phase change is applied to a null symbol, the signals before and after the phase change are the same (in-phase component I is zero (0) and the quadrature component Q is zero (0)). Accordingly, it is possible to construe a null symbol as not a target for a phase change (in the case ofFIG.18, since phase changer209A applies a phase change to baseband signal208A, a phase change is applied to each symbol inFIG.13). Accordingly, in the frame illustrated inFIG.13, phase changer209A illustrated inFIG.18applies a phase change to all symbols (in this case, all other symbols403) for all carriers 1 to 36 at time $1. However, the handling of the phase change with respect to null symbol1301is as previously described. 
Similarly, phase changer209A illustrated inFIG.18applies a phase change to all symbols (in this case, all other symbols403) for all carriers 1 to 36 at each of times $2, $3, and $4, and applies a phase change to all symbols (in this case, pilot symbols401or data symbols402) for all carriers 1 to 36 at each of times $5, $6, $7, $8, $9, $10, $11, . . . . In each case, the handling of the phase change with respect to null symbol1301is as previously described.

The phase change value of phase changer209A is expressed as Ω(i). Baseband signal208A is x′(i) and phase-changed signal210A is x(i). Accordingly, x(i)=Ω(i)×x′(i) holds true. For example, the phase change value is set as shown in Equation (38) (Q is an integer that is greater than or equal to 2, and represents the phase change cycle) (j is an imaginary number unit). However, Equation (38) is merely a non-limiting example. For example, Ω(i) may be set so as to implement a phase change that yields a cycle Q.

Moreover, for example, inFIG.4andFIG.13, the same phase change value may be applied to a given carrier regardless of time; in other words, the phase change value may be set on a per-carrier basis. For example, the following may be implemented. Regardless of time, the phase change value may be as in Equation (39) for carrier 1 inFIG.4andFIG.13.
Regardless of time, the phase change value may be as in Equation (40) for carrier 2 inFIG.4andFIG.13. Regardless of time, the phase change value may be as in Equation (41) for carrier 3 inFIG.4andFIG.13. Regardless of time, the phase change value may be as in Equation (42) for carrier 4 inFIG.4andFIG.13. . . . This concludes the operational example of phase changer209A illustrated inFIG.18.

Next, the advantageous effects obtained by phase changer209A illustrated inFIG.18will be described. The other symbols403,503in "the frames ofFIG.4andFIG.5" or "the frames ofFIG.13andFIG.14" include a control information symbol. As previously described, when an other symbol503inFIG.5at the same time and same frequency (in the same carrier) as an other symbol403transmits control information, it transmits the same data (same control information). However, consider the following cases.

Case 2: transmitting a control information symbol using either antenna unit #A (109_A) or antenna unit #B (109_B) illustrated inFIG.1.

When transmission according to "case 2" is performed, since only one antenna is used to transmit the control information symbol, spatial diversity gain is smaller than when "transmitting a control information symbol using both antenna unit #A (109_A) and antenna unit #B (109_B)" is performed. Accordingly, in "case 2", data reception quality deteriorates even when received by the reception device illustrated inFIG.8. Accordingly, from the perspective of improving data reception quality, "transmitting a control information symbol using both antenna unit #A (109_A) and antenna unit #B (109_B)" is more beneficial.

Case 3: transmitting a control information symbol using both antenna unit #A (109_A) and antenna unit #B (109_B) illustrated inFIG.1. However, a phase change is not performed by phase changer209A illustrated inFIG.18.

When transmission according to "case 3" is performed, since the modulated signal transmitted from antenna unit #A109_A and the modulated signal transmitted from antenna unit #B109_B are the same (or exhibit a specific phase shift), depending on the radio wave propagation environment, the reception device illustrated inFIG.8may receive an inferior reception signal, and both modulated signals may be subjected to the same multipath effect. Accordingly, in the reception device illustrated inFIG.8, data reception quality deteriorates.

In order to remedy this phenomenon, inFIG.18, phase changer209A is inserted. Since this changes the phase along the time or frequency axis, in the reception device illustrated inFIG.8, it is possible to reduce the probability of reception of an inferior reception signal. Moreover, since there is a high probability that there will be a difference between the multipath effect that the modulated signal transmitted from antenna unit #A109_A is subjected to and the multipath effect that the modulated signal transmitted from antenna unit #B109_B is subjected to, there is a high probability that diversity gain will result, and accordingly, that data reception quality in the reception device illustrated inFIG.8will improve. For these reasons, inFIG.18, phase changer209A is provided and phase change is implemented.
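A small numerical sketch (Python with numpy) of the "case 3" hazard and of how a per-carrier phase change mitigates it; the channel coefficients and the phase ramp below are illustrative assumptions:

```python
import numpy as np

# "Case 3" hazard: both antennas transmit the same control symbol c, so the
# receiver sees r = (h1 + h2) * c; if h2 is close to -h1, the symbol vanishes.
h1, h2 = 0.8 + 0.0j, -0.8 + 0.05j   # illustrative channel coefficients
c = 1 + 0j
print(abs((h1 + h2) * c))           # ~0.05: an inferior reception signal

# With a phase change on one branch, the effective channel becomes
# h1 + h2 * e^{j*phi(k)}, which varies across carriers k, so even when one
# carrier fades, the others do not: a diversity gain results.
k = np.arange(8)
phi = 2 * np.pi * k / 8             # an assumed per-carrier phase ramp
print(np.abs(h1 + h2 * np.exp(1j * phi)))
```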
Other symbols403and other symbols503include, in addition to control information symbols, for example, symbols for signal detection, symbols for performing frequency and time synchronization, and symbols for performing channel estimation (symbols for performing propagation path fluctuation estimation), which are used for demodulating and decoding the control information symbols. Moreover, "the frames ofFIG.4andFIG.5" or "the frames ofFIG.13andFIG.14" include pilot symbols401,501, and by using these, the control information symbols can be demodulated and decoded with high precision.

Moreover, "the frames ofFIG.4andFIG.5" or "the frames ofFIG.13andFIG.14" transmit a plurality of streams (perform MIMO transmission) at the same time and using the same frequency (frequency band) via data symbols402and data symbols502. In order to demodulate these data symbols, the symbols for signal detection, the symbols for frequency and time synchronization, and the symbols for channel estimation (symbols for propagation path fluctuation estimation), which are included in other symbols403and other symbols503, are used.

Here, the "symbols for signal detection, symbols for frequency and time synchronization, and symbols for channel estimation (symbols for propagation path fluctuation estimation), which are included in other symbols403and other symbols503" are applied with a phase change by phase changer209A, as described above. Under these circumstances, if this processing were not performed on data symbols402and data symbols502(on data symbols402in the example above), then when the reception device demodulates and decodes data symbols402and data symbols502, it would need to perform demodulation and decoding that take into account the phase change applied by phase changer209A, and there is a probability that this processing would be complicated (this is because the "symbols for signal detection, symbols for frequency and time synchronization, and symbols for channel estimation (symbols for propagation path fluctuation estimation), which are included in other symbols403and other symbols503" are applied with a phase change by phase changer209A).

However, as illustrated inFIG.18, when phase changer209A applies a phase change to data symbols402and data symbols502(to data symbols402in the example above), the reception device has the advantage that data symbols402and data symbols502can (easily) be demodulated and decoded using the channel estimation signal (propagation path fluctuation signal) estimated by using the "symbols for signal detection, symbols for frequency and time synchronization, and symbols for channel estimation (symbols for propagation path fluctuation estimation), which are included in other symbols403and other symbols503".

Additionally, as illustrated inFIG.18, when phase changer209A applies a phase change to data symbols402and data symbols502(to data symbols402in the example above), in multipath environments, it is possible to reduce the influence of sharp drops in electric field intensity along the frequency axis. Accordingly, it is possible to obtain the advantageous effect of an improvement in data reception quality of data symbols402and data symbols502.

In this way, the point that "symbols that are targets for implementation of a phase change by phase changer205B" and "symbols that are targets for implementation of a phase change by phase changer209A" are different is a characteristic point.
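The simplification for the reception device can be seen in a few lines (Python with numpy; the channel, phase value, and symbols are arbitrary illustrative numbers): because phase changer209A rotates pilots and data symbols alike, the channel estimate obtained from a pilot already contains the phase change, and no separate phase-compensation step is needed when demodulating the data symbols.

```python
import numpy as np

h = 0.7 + 0.3j                    # true channel on one carrier (illustrative)
omega = np.exp(1j * 0.8)          # 209A's phase change on that carrier
pilot = 1 + 0j                    # known pilot symbol
d = (1 - 1j) / np.sqrt(2)         # data symbol on the same carrier

h_est = (h * omega * pilot) / pilot   # estimate absorbs omega automatically
r_data = h * omega * d                # received data symbol
print(r_data / h_est)                 # recovers d without knowing omega
```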
As described above, by applying a phase change using phase changer205B illustrated inFIG.18, it is possible to achieve the advantageous effect of an improvement in data reception quality of data symbols402and data symbols502in the reception device in, for example, LOS environments, and by applying a phase change using phase changer209A illustrated inFIG.18, for example, it is possible to achieve the advantageous effect of an improvement in data reception quality in the reception device of the control information symbols included in "the frames ofFIG.4andFIG.5" or "the frames ofFIG.13andFIG.14", and the advantageous effect that the operations of demodulating and decoding data symbols402and data symbols502become simple. Note that the advantageous effect of an improvement in data reception quality in the reception device of data symbols402and data symbols502in, for example, LOS environments, is achieved as a result of the phase change implemented by phase changer205B illustrated inFIG.18, and furthermore, the reception quality of data symbols402and data symbols502is improved by applying a phase change to data symbols402and data symbols502using phase changer209A illustrated inFIG.18.

Note that Q in Equation (38) may be an integer of −2 or less. In such a case, the value of the phase change cycle is the absolute value of Q. This feature is applicable to Embodiment 1 as well.

Embodiment 3

In this embodiment, an implementation method will be described that is different from the configuration illustrated inFIG.2and described in Embodiment 1.

FIG.1illustrates one example of a configuration of a transmission device according to this embodiment, such as a base station, access point, or broadcast station. AsFIG.1is described in detail in Embodiment 1, description will be omitted from this embodiment. Signal processor106receives inputs of mapped signals105_1and105_2, signal group110, and control signal100, performs signal processing based on control signal100, and outputs signal-processed signals106_A and106_B. Here, signal-processed signal106_A is expressed as u1(i), and signal-processed signal106_B is expressed as u2(i) (i is a symbol number; for example, i is an integer that is greater than or equal to 0). Note that details regarding the signal processing will be described with reference toFIG.19later.

FIG.19illustrates one example of a configuration of signal processor106illustrated inFIG.1. Weighting synthesizer (precoder)203receives inputs of mapped signal201A (mapped signal105_1inFIG.1), mapped signal201B (mapped signal105_2inFIG.1), and control signal200(control signal100inFIG.1), performs weighting synthesis (precoding) based on control signal200, and outputs weighted signal204A and weighted signal204B.

Here, mapped signal201A is expressed as s1(t), mapped signal201B is expressed as s2(t), weighted signal204A is expressed as z1(t), and weighted signal204B is expressed as z2′(t). Note that one example of t is time (s1(t), s2(t), z1(t), and z2′(t) are defined as complex numbers (accordingly, they may be real numbers)). Here, these are given as functions of time, but they may be functions of a "frequency (carrier number)", and may be functions of "time and frequency". They may also be functions of a "symbol number". Note that this also applies to Embodiment 1. Weighting synthesizer (precoder)203performs the calculations indicated in Equation (1).
Phase changer205B receives inputs of weighting synthesized signal204B and control signal200, applies a phase change to weighting synthesized signal204B based on control signal200, and outputs phase-changed signal206B. Note that phase-changed signal206B is expressed as z2(t), and z2(t) is defined as a complex number (and may be a real number).

Next, specific operations performed by phase changer205B will be described. In phase changer205B, for example, a phase change of y(i) is applied to z2′(i). Accordingly, z2(i) can be expressed as z2(i)=y(i)×z2′(i) (i is a symbol number (i is an integer that is greater than or equal to 0)). For example, the phase change value is set as shown in Equation (2) (N is an integer that is greater than or equal to 2, and N is the phase change cycle) (when N is set to an odd number greater than or equal to 3, data reception quality may improve). However, Equation (2) is merely a non-limiting example. Here, phase change value y(i)=ej×δ(i). Then, z1(i) and z2(i) can be expressed with Equation (3). Note that δ(i) is a real number. z1(i) and z2(i) are transmitted from the transmission device at the same time and using the same frequency (same frequency band). In Equation (3), the phase change value is not limited to the value used in Equation (2); for example, a method in which the phase is changed cyclically or regularly is conceivable. As described in Embodiment 1, conceivable examples of the (precoding) matrix used in Equation (1) and Equation (3) are illustrated in Equation (5) through Equation (36) (however, the precoding matrix is not limited to these examples (the same applies to Embodiment 1)).

Inserter207A receives inputs of weighting synthesized signal204A, pilot symbol signal (pa(t)) (t is time) (251A), preamble signal252, control information symbol signal253, and control signal200, and based on information on the frame configuration included in control signal200, outputs baseband signal208A based on the frame configuration. Similarly, inserter207B receives inputs of phase-changed signal206B, pilot symbol signal (pb(t)) (251B), preamble signal252, control information symbol signal253, and control signal200, and based on information on the frame configuration included in control signal200, outputs baseband signal208B based on the frame configuration.

Phase changer209A receives inputs of baseband signal208A and control signal200, applies a phase change to baseband signal208A based on control signal200, and outputs phase-changed signal210A. Baseband signal208A is a function of symbol number i (i is an integer that is greater than or equal to 0), and is expressed as x′(i). Then, phase-changed signal210A (x(i)) can be expressed as x(i)=ej×ε(i)×x′(i) (j is an imaginary number unit). As described in Embodiment 1, etc., note that the operation performed by phase changer209A may be CDD (cyclic delay diversity) (CSD (cyclic shift diversity)) disclosed in "Standard conformable antenna diversity techniques for OFDM and its application to the DVB-T system," IEEE Globecom 2001, pp. 3100-3105, November 2001, and in IEEE P802.11n (D3.00) Draft STANDARD for Information Technology-Telecommunications and information exchange between systems-Local and metropolitan area networks-Specific requirements-Part 11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) specifications, 2007.
One characteristic of phase changer209A is that it applies a phase change to a symbol present along the frequency axis (i.e., applies a phase change to, for example, a data symbol, a pilot symbol, and/or a control information symbol). Phase changer209B receives inputs of baseband signal208B and control signal200, applies a phase change to baseband signal208B based on control signal200, and outputs phase-changed signal210B. Baseband signal208B is a function of symbol number i (i is an integer that is greater than or equal to 0), and is expressed as y′(i). Then, phase-changed signal210B (y(i)) can be expressed as y(i)=ej×τ(i)×y′(i) (j is an imaginary number unit). As described in Embodiment 1, etc., note that the operation performed by phase changer209B may be CDD (cyclic delay diversity) (CSD (cycle shift diversity)) disclosed in “Standard conformable antenna diversity techniques for OFDM and its application to the DVB-T system,” IEEE Globecom 2001, pp. 3100-3105, November 2001, and in IEEE P802.11n (D3.00) Draft STANDARD for Information Technology-Telecommunications and information exchange between systems-Local and metropolitan area networks-Specific requirements-Part 11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) specifications, 2007. One characteristic of phase changer209B is that it applies a phase change to a symbol present along the frequency axis (i.e., applies a phase change to, for example, a data symbol, a pilot symbol, and/or a control information symbol). The characteristic feature here is that the phase changing method via ε(i) and the phase changing method via τ(i) are different. Alternatively, the characteristic feature here is that the CDD (Cyclic Delay Diversity) (CSD (Cyclic Shift Diversity)) cyclic delay amount value set by phase changer209A and the CDD (Cyclic Delay Diversity) (CSD (Cyclic Shift Diversity)) cyclic delay amount value set by phase changer209B are different. FIG.3illustrates one example of a configuration of radio units107_A and107_B illustrated inFIG.1.FIG.3is described in Embodiment 1. Accordingly, description will be omitted from this embodiment. FIG.4illustrates a frame configuration of transmission signal108_A illustrated inFIG.1.FIG.4is described in Embodiment 1. Accordingly, description will be omitted from this embodiment. FIG.5illustrates a frame configuration of transmission signal108_B illustrated inFIG.1.FIG.5is described in Embodiment 1. Accordingly, description will be omitted from this embodiment. When a symbol is present in carrier A at time $B inFIG.4and a symbol is present in carrier A at time $B inFIG.5, the symbol in carrier A at time $B inFIG.4and the symbol in carrier A at time $B inFIG.5are transmitted at the same time and same frequency. Note that the frame configuration is not limited to the configurations illustrated inFIG.4andFIG.5;FIG.4andFIG.5are mere examples of frame configurations. The other symbols inFIG.4andFIG.5are symbols corresponding to “preamble signal252and control information symbol signal253inFIG.2”. Accordingly, when an other symbol503inFIG.5at the same time and same frequency (same carrier) as an other symbol403inFIG.4transmits control information, it transmits the same data (the same control information). Note that this is under the assumption that the frame ofFIG.4and the frame ofFIG.5are received at the same time by the reception device, but even when the frame ofFIG.4or the frame ofFIG.5has been received, the reception device can obtain the data transmitted by the transmission device. 
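A minimal sketch of this characteristic feature (Python with numpy): the two transmit branches apply CDD (CSD) with different cyclic delay amounts. The FFT size and the delay values 0 and 2 are assumptions made for illustration, not values from the specification.

```python
import numpy as np

def apply_csd(x, delay):
    """Cyclic shift diversity over one OFDM symbol of N time samples:
    out[n] = x[(n - delay) mod N]."""
    return np.roll(x, delay)

N = 64
rng = np.random.default_rng(1)
x_a = np.fft.ifft(rng.standard_normal(N) + 1j * rng.standard_normal(N))
x_b = np.fft.ifft(rng.standard_normal(N) + 1j * rng.standard_normal(N))

# The characteristic feature: the two branches use *different* delays,
# i.e., different phase changing rules via epsilon(i) and tau(i).
tx_a = apply_csd(x_a, 0)   # branch handled by phase changer 209A
tx_b = apply_csd(x_b, 2)   # branch handled by phase changer 209B
```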
FIG.6illustrates one example of components relating to control information generation for generating control information symbol signal253illustrated inFIG.2.FIG.6is described in Embodiment 1. Accordingly, description will be omitted from this embodiment.

FIG.7illustrates one example of a configuration of antenna unit #A (109_A) and antenna unit #B (109_B) illustrated inFIG.1(in this example, antenna unit #A (109_A) and antenna unit #B (109_B) include a plurality of antennas).FIG.7is described in Embodiment 1. Accordingly, description will be omitted from this embodiment.

FIG.8illustrates one example of a configuration of a reception device that receives a modulated signal upon the transmission device illustrated inFIG.1transmitting, for example, a transmission signal having the frame configuration illustrated inFIG.4orFIG.5.FIG.8is described in Embodiment 1. Accordingly, description will be omitted from this embodiment.

FIG.10illustrates one example of a configuration of antenna unit #X (801X) and antenna unit #Y (801Y) illustrated inFIG.8(antenna unit #X (801X) and antenna unit #Y (801Y) are exemplified as including a plurality of antennas).FIG.10is described in Embodiment 1. Accordingly, description will be omitted from this embodiment.

Next, consider the configuration in which phase changer205B and phase changers209A and209B are inserted into signal processor106in the transmission device illustrated inFIG.1, as illustrated inFIG.19. The characteristics and advantageous effects of this configuration will be described.

As described with reference toFIG.4andFIG.5, precoding (weighting synthesis) is applied to mapped signal s1(i) (201A) (i is a symbol number; i is an integer greater than or equal to 0) obtained via mapping using the first sequence and to mapped signal s2(i) (201B) obtained via mapping using the second sequence, and phase changer205B applies a phase change to one of the obtained weighting synthesized signals204A and204B. Weighting synthesized signal204A and phase-changed signal206B are then transmitted at the same frequency and at the same time. Accordingly, inFIG.4andFIG.5, a phase change is applied to data symbol502inFIG.5(in the case ofFIG.19, since phase changer205B applies the phase change to weighting synthesized signal204B, a phase change is applied to data symbol502inFIG.5; when a phase change is applied to weighting synthesized signal204A, a phase change is applied to data symbol402inFIG.4; this will be described later).

For example,FIG.11illustrates an extraction of carrier 1 through carrier 5 and time $4 through time $6 from the frame illustrated inFIG.5. Note that inFIG.11, similar toFIG.5,501is a pilot symbol,502is a data symbol, and503is an other symbol.

As described above, among the symbols illustrated inFIG.11, phase changer205B applies a phase change to the data symbols located at (carrier 1, time $5), (carrier 2, time $5), (carrier 3, time $5), (carrier 4, time $5), (carrier 5, time $5), (carrier 1, time $6), (carrier 2, time $6), (carrier 4, time $6), and (carrier 5, time $6). Accordingly, the phase change values for the data symbols illustrated inFIG.11can be expressed as "ej×δ15(i)" for (carrier 1, time $5), "ej×δ25(i)" for (carrier 2, time $5), "ej×δ35(i)" for (carrier 3, time $5), "ej×δ45(i)" for (carrier 4, time $5), "ej×δ55(i)" for (carrier 5, time $5), "ej×δ16(i)" for (carrier 1, time $6), "ej×δ26(i)" for (carrier 2, time $6), "ej×δ46(i)" for (carrier 4, time $6), and "ej×δ56(i)" for (carrier 5, time $6).
Among the symbols illustrated inFIG.11, the other symbols located at (carrier 1, time $4), (carrier 2, time $4), (carrier 3, time $4), (carrier 4, time $4), and (carrier 5, time $4), and the pilot symbol located at (carrier 3, time $6) are not subject to phase change by phase changer205B. This point is a characteristic of phase changer205B. Note that, as illustrated inFIG.4, data carriers are arranged at "the same carriers and the same times" as the symbols subject to phase change inFIG.11, which are the data symbols located at (carrier 1, time $5), (carrier 2, time $5), (carrier 3, time $5), (carrier 4, time $5), (carrier 5, time $5), (carrier 1, time $6), (carrier 2, time $6), (carrier 4, time $6), and (carrier 5, time $6).

In other words, inFIG.4, the symbols located at (carrier 1, time $5), (carrier 2, time $5), (carrier 3, time $5), (carrier 4, time $5), (carrier 5, time $5), (carrier 1, time $6), (carrier 2, time $6), (carrier 4, time $6), and (carrier 5, time $6) are data symbols (in other words, the data symbols that perform MIMO transmission (transmit a plurality of streams) are subject to phase change by phase changer205B).

One example of the phase change that phase changer205B applies to the data symbols is the method given in Equation (2), in which the phase change is applied to the data symbols regularly (such as at each cycle N) (however, the phase change method implemented on the data symbols is not limited to this example). With this, when the environment is one in which the direct waves are dominant, such as in an LOS environment, it is possible to achieve improved data reception quality in the reception device with respect to the data symbols that perform MIMO transmission (transmit a plurality of streams).

Next, the advantageous effects of this will be described. For example, the modulation scheme used by mapper104inFIG.1is quadrature phase shift keying (QPSK) (mapped signal201A inFIG.19is a QPSK signal, and mapped signal201B is a QPSK signal; in other words, two QPSK streams are transmitted). Accordingly, for example, using channel estimated signals806_1and806_2, 16 candidate signal points are obtained by signal processor811illustrated inFIG.8(2-bit transmission is possible with QPSK; since there are two streams, 4-bit transmission is achieved, and thus there are 2^4=16 candidate signal points) (note that 16 other candidate signal points are obtained by using channel estimated signals808_1and808_2as well, but since the description thereof is the same as described above, the following description will focus on the 16 candidate signal points obtained by using channel estimated signals806_1and806_2).

FIG.12illustrates an example of the state resulting from such a case. In (A) and (B) inFIG.12, in-phase I is represented on the horizontal axis and quadrature Q is represented on the vertical axis, and 16 candidate signal points are present in the illustrated in-phase I-quadrature Q planes (among the 16 candidate signal points, one is the signal point that was transmitted by the transmission device; accordingly, this is referred to as "16 candidate signal points").

When the environment is one in which the direct waves are dominant, such as in an LOS environment, consider a first case in which phase changer205B is omitted from the configuration illustrated inFIG.19(in other words, a case in which a phase change is not applied by phase changer205B inFIG.19).
In the first case, since a phase change is not applied, there is a possibility that the state illustrated in (A) inFIG.12will be realized. When the state falls into the state illustrated in (A) inFIG.12, as illustrated by "signal points1201and1202", "signal points1203,1204,1205, and1206", and "signal points1207,1208", the signal points become dense (the distances between some signal points shorten). Accordingly, in the reception device illustrated inFIG.8, data reception quality may deteriorate. In order to remedy this phenomenon, inFIG.19, phase changer205B is inserted.

When phase changer205B is inserted, due to symbol number i, there is a mix of symbol numbers whose signal points are dense (the distances between some signal points shorten), such as in (A) inFIG.12, and symbol numbers whose distances between signal points are long, such as in (B) inFIG.12. With respect to this state, since error correction code is introduced, high error correction performance is achieved, and in the reception device illustrated inFIG.8, high data reception quality can be achieved.

Note that inFIG.19, a phase change is not applied by phase changer205B to symbols used for demodulating (wave detection of) data symbols and for channel estimation, such as pilot symbols and a preamble. With this, among the data symbols, the above mix can be realized: due to symbol number i, there is a mix of symbol numbers whose signal points are dense (the distances between some signal points shorten), such as in (A) inFIG.12, and symbol numbers whose distances between signal points are long, such as in (B) inFIG.12.

However, this mix among the data symbols can also be realized even if a phase change is applied by phase changer205B inFIG.19to the symbols used for demodulating (wave detection of) data symbols and for channel estimation, such as pilot symbols and a preamble. In such a case, a phase change must be applied to the pilot symbols and/or the preamble under some condition. For example, one conceivable method is to implement a rule for "applying a phase change to a pilot symbol and/or a preamble" that is separate from the rule for applying a phase change to a data symbol. Another example is a method of regularly applying a phase change to the data symbols in a cycle N, and regularly applying a phase change to the pilot symbols and/or the preamble in a cycle M (N and M are integers that are greater than or equal to 2).

As described above, phase changer209A receives inputs of baseband signal208A and control signal200, applies a phase change to baseband signal208A based on control signal200, and outputs phase-changed signal210A. Baseband signal208A is a function of symbol number i (i is an integer that is greater than or equal to 0), and is expressed as x′(i). Then, phase-changed signal210A (x(i)) can be expressed as x(i)=ej×ε(i)×x′(i) (j is an imaginary number unit). Note that the operation performed by phase changer209A may be CDD (cyclic delay diversity) (CSD (cyclic shift diversity)) disclosed in "Standard conformable antenna diversity techniques for OFDM and its application to the DVB-T system," IEEE Globecom 2001, pp.
3100-3105, November 2001, and in IEEE P802.11n (D3.00) Draft STANDARD for Information Technology-Telecommunications and information exchange between systems-Local and metropolitan area networks-Specific requirements-Part 11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) specifications, 2007. One characteristic of phase changer209A is that it applies a phase change to a symbol present along the frequency axis (i.e., applies a phase change to, for example, a data symbol, a pilot symbol, and/or a control information symbol; accordingly, in such a case, symbols subject to symbol number i include data symbols, pilot symbols, control information symbols, and preambles (other symbols)) (in the case ofFIG.19, since phase changer209A applies a phase change to baseband signal208A, a phase change is applied to each symbol inFIG.4). Accordingly, in the frame illustrated inFIG.4, phase changer209A illustrated inFIG.19applies a phase change to all symbols for all carriers 1 to 36 at time $1 (in this case, all other symbols403). Similarly, phase changer209A applies a phase change to all symbols for all carriers 1 to 36 at each of times $2, $3, and $4 (in this case, all other symbols403), and at each of times $5, $6, $7, $8, $9, $10, $11, and so on (in this case, pilot symbols401or data symbols402). As described above, phase changer209B receives inputs of baseband signal208B and control signal200, applies a phase change to baseband signal208B based on control signal200, and outputs phase-changed signal210B. Baseband signal208B is a function of symbol number i (i is an integer that is greater than or equal to 0), and is expressed as y′(i). Then, phase-changed signal210B (y(i)) can be expressed as y(i)=ej×τ(i)×y′(i) (j is an imaginary number unit). Note that the operation performed by phase changer209B may be CDD (cyclic delay diversity) (CSD (cycle shift diversity)) disclosed in “Standard conformable antenna diversity techniques for OFDM and its application to the DVB-T system,” IEEE Globecom 2001, pp.
3100-3105, November 2001, and in IEEE P802.11n (D3.00) Draft STANDARD for Information Technology-Telecommunications and information exchange between systems-Local and metropolitan area networks-Specific requirements-Part 11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) specifications, 2007. One characteristic of phase changer209B is that it applies a phase change to a symbol present along the frequency axis (i.e., applies a phase change to, for example, a data symbol, a pilot symbol, and/or a control information symbol; accordingly, in such a case, symbols subject to symbol number i include data symbols, pilot symbols, control information symbols, and preambles (other symbols)) (in the case ofFIG.19, since phase changer209B applies a phase change to baseband signal208B, a phase change is applied to each symbol inFIG.5). Accordingly, in the frame illustrated inFIG.5, phase changer209B illustrated inFIG.19applies a phase change to all symbols for all carriers 1 to 36 at time $1 (in this case, all other symbols503). Similarly, phase changer209B applies a phase change to all symbols for all carriers 1 to 36 at each of times $2, $3, and $4 (in this case, all other symbols503), and at each of times $5, $6, $7, $8, $9, $10, $11, and so on (in this case, pilot symbols501or data symbols502). FIG.13illustrates a frame configuration different from the frame configuration illustrated inFIG.4of transmission signal108_A illustrated inFIG.1.FIG.13is described in Embodiment 1. Accordingly, description will be omitted from this embodiment. FIG.14illustrates a frame configuration different from the frame configuration illustrated inFIG.5of transmission signal108_B illustrated inFIG.1.FIG.14is described in Embodiment 1. Accordingly, description will be omitted from this embodiment. When a symbol is present in carrier A at time $B inFIG.13and a symbol is present in carrier A at time $B inFIG.14, the symbol in carrier A at time $B inFIG.13and the symbol in carrier A at time $B inFIG.14are transmitted at the same time and same frequency. Note that the frame configurations illustrated inFIG.13andFIG.14are merely examples.
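As a minimal sketch of the frame-wide operation just described for phase changers209A,209B (every symbol of every carrier is rotated, whatever its type), the following assumes an illustrative 36-carrier frame and an arbitrary phase rule. A zeroed position is included to show, ahead of the null-symbol discussion below, that rotating a null symbol leaves it unchanged.

```python
import numpy as np

def apply_phase_change(frame, phase_for):
    """Rotate every symbol in a (carriers x times) frame.

    frame:     complex array, e.g. 36 carriers x 11 times, mixing other
               symbols, pilot symbols, and data symbols.
    phase_for: function (carrier, time) -> phase in radians.
    """
    out = frame.copy()
    n_carriers, n_times = frame.shape
    for c in range(n_carriers):
        for t in range(n_times):
            out[c, t] *= np.exp(1j * phase_for(c, t))
    return out

rng = np.random.default_rng(0)
frame = rng.standard_normal((36, 11)) + 1j * rng.standard_normal((36, 11))
frame[5, :] = 0  # a null position: rotating zero leaves it zero
rotated = apply_phase_change(frame, lambda c, t: 2 * np.pi * t / 7)
assert np.allclose(rotated[5, :], 0)  # null symbols are unchanged
```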
The other symbols inFIG.13andFIG.14are symbols corresponding to “preamble signal252and control information symbol signal253inFIG.19”. Accordingly, when an other symbol403inFIG.13at the same time and same frequency (same carrier) as an other symbol503inFIG.14transmits control information, it transmits the same data (the same control information). Note that this is under the assumption that the frame ofFIG.13and the frame ofFIG.14are received at the same time by the reception device, but even when the frame ofFIG.13or the frame ofFIG.14has been received, the reception device can obtain the data transmitted by the transmission device. Phase changer209A receives inputs of baseband signal208A and control signal200, applies a phase change to baseband signal208A based on control signal200, and outputs phase-changed signal210A. Baseband signal208A is a function of symbol number i (i is an integer that is greater than or equal to 0), and is expressed as x′(i). Then, phase-changed signal210A (x(i)) can be expressed as x(i)=ej×ε(i)×x′(i) (j is an imaginary number unit). Note that the operation performed by phase changer209A may be CDD (cyclic delay diversity) (CSD (cycle shift diversity)) disclosed in “Standard conformable antenna diversity techniques for OFDM and its application to the DVB-T system,” IEEE Globecom 2001, pp. 3100-3105, November 2001, and in IEEE P802.11n (D3.00) Draft STANDARD for Information Technology-Telecommunications and information exchange between systems-Local and metropolitan area networks-Specific requirements-Part 11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) specifications, 2007. One characteristic of phase changer209A is that it applies a phase change to a symbol present along the frequency axis (i.e., applies a phase change to, for example, a data symbol, a pilot symbol, and/or a control information symbol). Here, a null symbol may be considered as a target for application of a phase change (accordingly, in such a case, symbols subject to symbol number i include data symbols, pilot symbols, control information symbols, preambles (other symbols), and null symbols). However, even if a phase change is applied to a null symbol, the signals before and after the phase change are the same (in-phase component I is zero (0) and the quadrature component Q is zero (0)). Accordingly, it is possible to construe a null symbol as not a target for a phase change (in the case ofFIG.19, since phase changer209A applies a phase change to baseband signal208A, a phase change is applied to each symbol inFIG.13). Accordingly, in the frame illustrated inFIG.13, phase changer209A illustrated inFIG.19applies a phase change to all symbols (in this case, all other symbols403) for all carriers 1 to 36 at time $1. However, the handling of the phase change with respect to null symbol1301is as previously described. 
Similarly, phase changer209A illustrated inFIG.19applies a phase change to all symbols for all carriers 1 to 36 at each of times $2, $3, and $4 (in this case, all other symbols403), and at each of times $5, $6, $7, $8, $9, $10, $11, and so on (in this case, pilot symbols401or data symbols402). In each case, the handling of the phase change with respect to null symbol1301is as previously described. The phase change value of phase changer209A is expressed as Ω(i). Baseband signal208A is x′(i) and phase-changed signal210A is x(i). Accordingly, x(i)=Ω(i)×x′(i) holds true. For example, the phase change value is set to Equation (38) (Q is an integer that is greater than or equal to 2, and represents the number of phase change cycles) (j is an imaginary number unit). However, Equation (38) is merely a non-limiting example. For example, Ω(i) may be set so as to implement a phase change that yields a cycle Q. Moreover, for example, inFIG.4andFIG.13, the same phase change value is applied to the same carriers, and the phase change value may be set on a per carrier basis. For example, the following may be implemented. Regardless of time, the phase change value may be as in Equation (39) for carrier 1 inFIG.4andFIG.13.
Regardless of time, the phase change value may be as in Equation (40) for carrier 2 inFIG.4andFIG.13. Regardless of time, the phase change value may be as in Equation (41) for carrier 3 inFIG.4andFIG.13. Regardless of time, the phase change value may be as in Equation (42) for carrier 4 inFIG.4andFIG.13. . . . This concludes the operational example of phase changer209A illustrated inFIG.19. Phase changer209B receives inputs of baseband signal208B and control signal200, applies a phase change to baseband signal208B based on control signal200, and outputs phase-changed signal210B. Baseband signal208B is a function of symbol number i (i is an integer that is greater than or equal to 0), and is expressed as y′(i). Then, phase-changed signal210B (y(i)) can be expressed as y(i)=ej×τ(i)×y′(i) (j is an imaginary number unit). Note that the operation performed by phase changer209B may be CDD (cyclic delay diversity) (CSD (cycle shift diversity)) disclosed in “Standard conformable antenna diversity techniques for OFDM and its application to the DVB-T system,” IEEE Globecom 2001, pp. 3100-3105, November 2001, and in IEEE P802.11n (D3.00) Draft STANDARD for Information Technology-Telecommunications and information exchange between systems-Local and metropolitan area networks-Specific requirements-Part 11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) specifications, 2007. One characteristic of phase changer209B is that it applies a phase change to a symbol present along the frequency axis (i.e., applies a phase change to, for example, a data symbol, a pilot symbol, and/or a control information symbol). Here, a null symbol may be considered as a target for application of a phase change (accordingly, in such a case, symbols subject to symbol number i include data symbols, pilot symbols, control information symbols, preambles (other symbols), and null symbols). However, even if a phase change is applied to a null symbol, the signals before and after the phase change are the same (in-phase component I is zero (0) and the quadrature component Q is zero (0)). Accordingly, it is possible to construe a null symbol as not a target for a phase change (in the case ofFIG.19, since phase changer209B applies a phase change to baseband signal208B, a phase change is applied to each symbol inFIG.14). Accordingly, in the frame illustrated inFIG.14, phase changer209B illustrated inFIG.19applies a phase change to all symbols (in this case, all other symbols503) for all carriers 1 to 36 at time $1. However, the handling of the phase change with respect to null symbol1301is as previously described. 
Similarly, phase changer209B illustrated inFIG.19applies a phase change to all symbols for all carriers 1 to 36 at each of times $2, $3, and $4 (in this case, all other symbols503), and at each of times $5, $6, $7, $8, $9, $10, $11, and so on (in this case, pilot symbols501or data symbols502). In each case, the handling of the phase change with respect to null symbol1301is as previously described. The phase change value of phase changer209B is expressed as Δ(i). Baseband signal208B is y′(i) and phase-changed signal210B is y(i). Accordingly, y(i)=Δ(i)×y′(i) holds true. For example, the phase change value is set as in the following equation (R is an integer that is greater than or equal to 2, and represents the number of phase change cycles; note that the value for Q in Equation (38) and the value for R may be different values):

[MATH. 49] Δ(i) = ej×(2×π×i/R)   Equation (49)

(j is an imaginary number unit.) However, Equation (49) is merely a non-limiting example. For example, Δ(i) may be set so as to implement a phase change that yields a cycle R. Note that the phase changing methods used by phase changer209A and phase changer209B may be different. For example, the cycle may be the same and, alternatively, may be different.
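A small sketch of this pair of independent cyclic phase changes follows. Equation (38) is not reproduced in this excerpt, so its form is inferred here as the cycle-Q analogue of Equation (49); the values of Q and R are arbitrary.

```python
import numpy as np

def omega(i, Q):  # assumed form of Equation (38): a cycle-Q phase change
    return np.exp(1j * 2 * np.pi * i / Q)

def delta(i, R):  # Equation (49): a cycle-R phase change
    return np.exp(1j * 2 * np.pi * i / R)

Q, R = 5, 7  # the cycles of phase changers 209A and 209B may differ
x_prime = np.ones(35, dtype=complex)  # placeholder baseband symbols x'(i)
y_prime = np.ones(35, dtype=complex)  # placeholder baseband symbols y'(i)
x = np.array([omega(i, xi_Q := Q) * xi for i, xi in enumerate(x_prime)])
y = np.array([delta(i, R) * yi for i, yi in enumerate(y_prime)])
assert np.allclose(x[:5], x[Q:Q + 5])  # the pattern repeats with cycle Q
```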
Moreover, for example, inFIG.5andFIG.14, the same phase change value is applied to the same carriers, and the phase change value may be set on a per carrier basis. For example, the following may be implemented. Regardless of time, the phase change value may be as in Equation (39) for carrier 1 inFIG.5andFIG.14. Regardless of time, the phase change value may be as in Equation (40) for carrier 2 inFIG.5andFIG.14. Regardless of time, the phase change value may be as in Equation (41) for carrier 3 inFIG.5andFIG.14. Regardless of time, the phase change value may be as in Equation (42) for carrier 4 inFIG.5andFIG.14. . . . Note that although the phase change values are described using Equations (39), (40), (41), and (42) for both, the phase changing methods of phase changer209A and phase changer209B are different. This concludes the operational example of phase changer209B illustrated inFIG.19. Next, the advantageous effects obtained by phase changers209A,209B illustrated inFIG.19will be described. The other symbols403,503in “the frames ofFIG.4andFIG.5” or “the frames ofFIG.13andFIG.14” include a control information symbol. As previously described, when an other symbol503inFIG.5at the same time and same frequency (in the same carrier) as an other symbol403transmits control information, it transmits the same data (same control information). However, consider the following cases. Case 2: transmitting a control information symbol using either antenna unit #A (109_A) or antenna unit #B (109_B) illustrated inFIG.1. When transmission according to “case 2” is performed, since only one antenna is used to transmit the control information symbol, compared to when “transmitting a control information symbol using both antenna unit #A (109_A) and antenna unit #B (109_B)” is performed, spatial diversity gain is less. Accordingly, in “case 2”, data reception quality deteriorates even when received by the reception device illustrated inFIG.8. Accordingly, from the perspective of improving data reception quality, “transmitting a control information symbol using both antenna unit #A (109_A) and antenna unit #B (109_B)” is more beneficial. Case 3: transmitting a control information symbol using both antenna unit #A (109_A) and antenna unit #B (109_B) illustrated inFIG.1. However, a phase change is not performed by phase changers209A and209B illustrated inFIG.19. When transmission according to “case 3” is performed, since the modulated signal transmitted from antenna unit #A109_A and the modulated signal transmitted from antenna unit #B109_B are the same (or exhibit a specific phase shift), depending on the radio wave propagation environment, the reception device illustrated inFIG.8may receive an inferior reception signal, and both modulated signals may be subjected to the same multipath effect. Accordingly, in the reception device illustrated inFIG.8, data reception quality deteriorates. In order to remedy this phenomenon, inFIG.19, phase changers209A and209B are inserted. Since this changes the phase along the time or frequency axis, in the reception device illustrated inFIG.8, it is possible to reduce the probability of reception of an inferior reception signal.
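The following numerical sketch illustrates the phenomenon and the remedy. The two-tap channels are arbitrary assumptions chosen so that the two branches cancel; with a cycle-N phase change on one branch (one simple stand-in for the operation of phase changers209A,209B), no carrier remains in a deep null for every symbol number.

```python
import numpy as np

n_carriers = 36
h1 = np.fft.fft(np.array([1.0, 0.5]), n_carriers)    # assumed channel, antenna #A
h2 = np.fft.fft(np.array([-1.0, -0.5]), n_carriers)  # assumed channel, antenna #B

# Same signal from both antennas, no phase change: the branches can
# cancel on every carrier, for every symbol (here they cancel exactly).
print(np.max(np.abs(h1 + h2)))  # 0.0 -- an "inferior reception signal"

# Cycle-N phase change on branch B: the combining phase varies with
# symbol number i, so each carrier is good for at least some symbols.
N = 4
best = [max(abs(h1[k] + np.exp(1j * 2 * np.pi * i / N) * h2[k]) for i in range(N))
        for k in range(n_carriers)]
print(min(best) > 0)  # True
```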
Moreover, since there is a high probability that there will be a difference in the multipath effect that the modulated signal transmitted from antenna unit #A109_A is subjected to with respect to the multipath effect that the modulated signal transmitted from antenna unit #B109_B is subjected to, there is a high probability that diversity gain will result, and accordingly, that data reception quality in the reception device illustrated inFIG.8will improve. For these reasons, inFIG.19, phase changers209A,209B are provided and phase change is implemented. Other symbols403and other symbols503include, in addition to control information symbols, for example, symbols for signal detection, symbols for performing frequency and time synchronization, and symbols for performing channel estimation (a symbol for performing propagation path fluctuation estimation), for demodulating and decoding control information symbols. Moreover, “the frames ofFIG.4andFIG.5” or “the frames ofFIG.13andFIG.14” include pilot symbols401,501, and by using these, it is possible to perform demodulation and decoding with high precision via control information symbols. Moreover, “the frames ofFIG.4andFIG.5” or “the frames ofFIG.13andFIG.14” transmit a plurality of streams (perform MIMO transmission) at the same time and using the same frequency (frequency band) via data symbols402and data symbols502. In order to demodulate these data symbols, symbols for signal detection, symbols for frequency and time synchronization, and symbols for channel estimation (symbols for propagation path variation estimation), which are included in other symbols403and other symbols503, are used. Here, “symbols for signal detection, symbols for frequency and time synchronization, and symbols for channel estimation (symbols for propagation path variation estimation), which are included in other symbols403and other symbols503” are applied with a phase change by phase changers209A,209B, as described above. Under these circumstances, when this processing is not performed on data symbols402and data symbols502, in the reception device, when data symbols402and data symbols502are demodulated and decoded, there is a need to perform the demodulation and decoding in which the processing for the phase change by phase changers209A and209B was performed, and there is a probability that this processing will be complicated (this is because “symbols for signal detection, symbols for frequency and time synchronization, and symbols for channel estimation (symbols for propagation path variation estimation), which are included in other symbols403and other symbols503” are applied with a phase change by phase changers209A and209B). However, as illustrated inFIG.19, in phase changers209A,209B, when a phase change is applied to data symbols402and data symbols502, in the reception device, there is the advantage that data symbols402and data symbols502can (easily) be demodulated and decoded using the channel estimation signal (propagation path fluctuation signal) estimated by using “symbols for signal detection, symbols for frequency and time synchronization, and symbols for channel estimation (symbol for estimating propagation path fluctuation), which are included in other symbols403and other symbols503”. Additionally, as illustrated inFIG.19, in phase changers209A,209B, when a phase change is applied to data symbols402and data symbols502, in multipath environments, it is possible to reduce the influence of sharp drops in electric field intensity along the frequency axis. 
Accordingly, it is possible to obtain the advantageous effect of an improvement in data reception quality of data symbols402and data symbols502. In this way, the point that “symbols that are targets for implementation of a phase change by phase changer205B” and “symbols that are targets for implementation of a phase change by phase changers209A,209B” are different is a characteristic point. As described above, by applying a phase change using phase changer205B illustrated inFIG.19, it is possible to achieve the advantageous effect of an improvement in data reception quality of data symbols402and data symbols502in the reception device in, for example, LOS environments, and by applying a phase change using phase changers209A,209B illustrated inFIG.19, for example, it is possible to achieve the advantageous effect of an improvement in data reception quality in the reception device of the control information symbols included in “the frames ofFIG.4andFIG.5” or “the frames ofFIG.13andFIG.14” and the advantageous effect that operations of demodulation and decoding of data symbols402and data symbols502become simple. Note that the advantageous effect of an improvement in data reception quality in the reception device of data symbols402and data symbols502in, for example, LOS environments, is achieved as a result of the phase change implemented by phase changer205B illustrated inFIG.19, and furthermore, the reception quality of data symbols402and data symbols502is improved by applying a phase change to data symbols402and data symbols502using phase changers209A,209B illustrated inFIG.19. Note that Q in Equation (38) may be an integer of −2 or less. In such a case, the value for the phase change cycle is the absolute value of Q. This feature is applicable to Embodiment 1 as well. Note that R in Equation (49) may be an integer of −2 or less. In such a case, the value for the phase change cycle is the absolute value of R. Moreover, taking into consideration the descriptions provided in Supplemental Information 1, the cyclic delay amount set in phase changer209A and the cyclic delay amount set in phase changer209B may be different values.

Embodiment 4

In this embodiment, an implementation method will be described that is different from the configuration illustrated inFIG.2and described in Embodiment 1. FIG.1illustrates one example of a configuration of a transmission device according to this embodiment, such as a base station, access point, or broadcast station. AsFIG.1is described in detail in Embodiment 1, description will be omitted from this embodiment. Signal processor106receives inputs of mapped signals105_1and105_2, signal group110, and control signal100, performs signal processing based on control signal100, and outputs signal-processed signals106_A and106_B. Here, signal-processed signal106_A is expressed as u1(i), and signal-processed signal106_B is expressed as u2(i) (i is a symbol number; for example, i is an integer that is greater than or equal to 0). Note that details regarding the signal processing will be described with reference toFIG.20later. FIG.20illustrates one example of a configuration of signal processor106illustrated inFIG.1. Weighting synthesizer (precoder)203receives inputs of mapped signal201A (mapped signal105_1inFIG.1), mapped signal201B (mapped signal105_2inFIG.1), and control signal200(control signal100inFIG.1), performs weighting synthesis (precoding) based on control signal200, and outputs weighted signal204A and weighted signal204B.
Here, mapped signal201A is expressed as s1(t), mapped signal201B is expressed as s2(t), weighted signal204A is expressed as z1′(t), and weighted signal204B is expressed as z2′(t). Note that one example of t is time (s1(t), s2(t), z1′(t), and z2′(t) are defined as complex numbers (accordingly, they may be real numbers)). Here, these are given as functions of time, but may be functions of a “frequency (carrier number)”, and may be functions of “time and frequency”. These may also be a function of a “symbol number”. Note that this also applies to Embodiment 1. Weighting synthesizer (precoder)203performs the following calculation, where the 2×2 precoding matrix has rows (a, b) and (c, d):

[MATH. 50] z1′(i) = a×s1(i) + b×s2(i), and z2′(i) = c×s1(i) + d×s2(i)   Equation (50)

Phase changer205A receives inputs of weighting synthesized signal204A and control signal200, applies a phase change to weighting synthesized signal204A based on control signal200, and outputs phase-changed signal206A. Note that phase-changed signal206A is expressed as z1(t), and z1(t) is defined as a complex number (and may be a real number). Next, specific operations performed by phase changer205A will be described. In phase changer205A, for example, a phase change of w(i) is applied to z1′(i). Accordingly, z1(i) can be expressed as z1(i)=w(i)×z1′(i) (i is a symbol number (i is an integer that is greater than or equal to 0)). For example, the phase change value is set as follows:

[MATH. 51] w(i) = ej×(2×π×i/M)   Equation (51)

(M is an integer that is greater than or equal to 2, M is a phase change cycle) (when M is set to an odd number greater than or equal to 3, data reception quality may improve). However, Equation (51) is merely a non-limiting example. Here, the phase change value is expressed as w(i)=ej×λ(i). Phase changer205B receives inputs of weighting synthesized signal204B and control signal200, applies a phase change to weighting synthesized signal204B based on control signal200, and outputs phase-changed signal206B. Note that phase-changed signal206B is expressed as z2(t), and z2(t) is defined as a complex number (and may be a real number). Next, specific operations performed by phase changer205B will be described. In phase changer205B, for example, a phase change of y(i) is applied to z2′(i). Accordingly, z2(i) can be expressed as z2(i)=y(i)×z2′(i) (i is a symbol number (i is an integer that is greater than or equal to 0)). For example, the phase change value is set as shown in Equation (2) (N is an integer that is greater than or equal to 2, N is a phase change cycle, N ≠ M) (when N is set to an odd number greater than or equal to 3, data reception quality may improve). However, Equation (2) is merely a non-limiting example. Here, the phase change value is expressed as y(i)=ej×δ(i). Here, z1(i) and z2(i) can be expressed with the following equation:

[MATH. 52] z1(i) = w(i)×(a×s1(i) + b×s2(i)) = ej×λ(i)×(a×s1(i) + b×s2(i)), and z2(i) = y(i)×(c×s1(i) + d×s2(i)) = ej×δ(i)×(c×s1(i) + d×s2(i))   Equation (52)

Note that δ(i) and λ(i) are real numbers. z1(i) and z2(i) are transmitted from the transmission device at the same time and using the same frequency (same frequency band). In Equation (52), the phase change value is not limited to the values used in Equation (51) and Equation (2); for example, a method in which the phase is changed cyclically or regularly is conceivable. As described in Embodiment 1, conceivable examples of the (precoding) matrix used in Equation (50) and Equation (52) are illustrated in Equation (5) through Equation (36) (however, the precoding matrix is not limited to these examples (the same applies to Embodiment 1)).
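A compact sketch of the chain in Equation (52) follows. The precoder entries a, b, c, d are arbitrary assumptions, w(i) follows Equation (51), and Equation (2), which is not reproduced in this excerpt, is taken to have the corresponding cycle-N exponential form.

```python
import numpy as np

a, b, c, d = 1, 1, 1, -1  # assumed precoding matrix entries (Equation (50))
F = np.array([[a, b], [c, d]]) / np.sqrt(2)
M, N = 3, 5               # cycles of phase changers 205A and 205B (N != M)

def z(i, s1, s2):
    """Equation (52): precode, then phase-change each branch."""
    w = np.exp(1j * 2 * np.pi * i / M)  # Equation (51)
    y = np.exp(1j * 2 * np.pi * i / N)  # assumed cycle-N form of Equation (2)
    return np.diag([w, y]) @ F @ np.array([s1, s2])

z1, z2 = z(0, 1 + 1j, 1 - 1j)  # symbol number 0, two mapped symbols
```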
Inserter207A receives inputs of phase-changed signal206A, pilot symbol signal (pa(t)) (t is time) (251A), preamble signal252, control information symbol signal253, and control signal200, and based on information on the frame configuration included in control signal200, outputs baseband signal208A based on the frame configuration. Similarly, inserter207B receives inputs of phase-changed signal206B, pilot symbol signal (pb(t)) (251B), preamble signal252, control information symbol signal253, and control signal200, and based on information on the frame configuration included in control signal200, outputs baseband signal208B based on the frame configuration. Phase changer209B receives inputs of baseband signal208B and control signal200, applies a phase change to baseband signal208B based on control signal200, and outputs phase-changed signal210B. Baseband signal208B is a function of symbol number i (i is an integer that is greater than or equal to 0), and is expressed as x′(i). Then, phase-changed signal210B (x(i)) can be expressed as x(i)=ej×ε(i)×x′(i) (j is an imaginary number unit). As described in Embodiment 1, etc., note that the operation performed by phase changer209B may be CDD (cyclic delay diversity) (CSD (cycle shift diversity)) disclosed in “Standard conformable antenna diversity techniques for OFDM and its application to the DVB-T system,” IEEE Globecom 2001, pp. 3100-3105, November 2001, and in IEEE P802.11n (D3.00) Draft STANDARD for Information Technology-Telecommunications and information exchange between systems-Local and metropolitan area networks-Specific requirements-Part 11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) specifications, 2007. One characteristic of phase changer209B is that it applies a phase change to a symbol present along the frequency axis (i.e., applies a phase change to, for example, a data symbol, a pilot symbol, and/or a control information symbol). FIG.3illustrates one example of a configuration of radio units107_A and107_B illustrated inFIG.1.FIG.3is described in Embodiment 1. Accordingly, description will be omitted from this embodiment. FIG.4illustrates a frame configuration of transmission signal108_A illustrated inFIG.1.FIG.4is described in Embodiment 1. Accordingly, description will be omitted from this embodiment. FIG.5illustrates a frame configuration of transmission signal108_B illustrated inFIG.1.FIG.5is described in Embodiment 1. Accordingly, description will be omitted from this embodiment. When a symbol is present in carrier A at time $B inFIG.4and a symbol is present in carrier A at time $B inFIG.5, the symbol in carrier A at time $B inFIG.4and the symbol in carrier A at time $B inFIG.5are transmitted at the same time and same frequency. Note that the frame configuration is not limited to the configurations illustrated inFIG.4andFIG.5;FIG.4andFIG.5are mere examples of frame configurations. The other symbols inFIG.4andFIG.5are symbols corresponding to “preamble signal252and control information symbol signal253inFIG.20”. Accordingly, when an other symbol503inFIG.5at the same time and same frequency (same carrier) as an other symbol403inFIG.4transmits control information, it transmits the same data (the same control information).
Note that this is under the assumption that the frame ofFIG.4and the frame ofFIG.5are received at the same time by the reception device, but even when the frame ofFIG.4or the frame ofFIG.5has been received, the reception device can obtain the data transmitted by the transmission device. FIG.6illustrates one example of components relating to control information generation for generating control information symbol signal253illustrated inFIG.20.FIG.6is described in Embodiment 1. Accordingly, description will be omitted from this embodiment. FIG.7illustrates one example of a configuration of antenna unit #A (109_A) and antenna unit #B (109_B) illustrated inFIG.1(in this example, antenna unit #A (109_A) and antenna unit #B (109_B) include a plurality of antennas).FIG.7is described in Embodiment 1. Accordingly, description will be omitted from this embodiment. FIG.8illustrates one example of a configuration of a reception device that receives a modulated signal upon the transmission device illustrated inFIG.1transmitting, for example, a transmission signal having the frame configuration illustrated inFIG.4orFIG.5.FIG.8is described in Embodiment 1. Accordingly, description will be omitted from this embodiment. FIG.10illustrates one example of a configuration of antenna unit #X (801X) and antenna unit #Y (801Y) illustrated inFIG.8(antenna unit #X (801X) and antenna unit #Y (801Y) are exemplified as including a plurality of antennas).FIG.10is described in Embodiment 1. Accordingly, description will be omitted from this embodiment. Next, phase changers205A,205B and phase changer209B are inserted in signal processor106in the transmission device illustrated inFIG.1, as illustrated inFIG.20. The characteristics and advantageous effects of this configuration will be described. As described with reference toFIG.4andFIG.5, weighting synthesis (precoding) is applied to mapped signal s1(i) (201A) (i is a symbol number; i is an integer greater than or equal to 0) obtained via mapping using the first sequence and to mapped signal s2(i) (201B) obtained via mapping using the second sequence, and phase changers205A,205B apply phase changes to the obtained weighting synthesized signals204A and204B, respectively. Phase-changed signal206A and phase-changed signal206B are then transmitted at the same frequency and at the same time. Accordingly, inFIG.4andFIG.5, a phase change is applied to data symbol402inFIG.4and data symbol502inFIG.5. For example,FIG.11illustrates an extraction of carrier 1 through carrier 5 and time $4 through time $6 from the frame illustrated inFIG.4. Note that inFIG.11, similar toFIG.4,401is a pilot symbol,402is a data symbol, and403is an other symbol. As described above, among the symbols illustrated inFIG.11, phase changer205A applies a phase change to the data symbols located at (carrier 1, time $5), (carrier 2, time $5), (carrier 3, time $5), (carrier 4, time $5), (carrier 5, time $5), (carrier 1, time $6), (carrier 2, time $6), (carrier 4, time $6), and (carrier 5, time $6). Accordingly, the phase change values for the data symbols illustrated inFIG.11can be expressed as “ej×λ15(i)” for (carrier 1, time $5), “ej×λ25(i)” for (carrier 2, time $5), “ej×λ35(i)” for (carrier 3, time $5), “ej×λ45(i)” for (carrier 4, time $5), “ej×λ55(i)” for (carrier 5, time $5), “ej×λ16(i)” for (carrier 1, time $6), “ej×λ26(i)” for (carrier 2, time $6), “ej×λ46(i)” for (carrier 4, time $6), and “ej×λ56(i)” for (carrier 5, time $6).
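The selective application just described, a phase change on the data symbols only, can be sketched as follows. The symbol-type map mirrors the FIG.11 extract (carriers 1 through 5, times $4 through $6), and the single value lam stands in for the per-(carrier, time) values λ15(i), λ25(i), and so on listed above.

```python
import numpy as np

OTHER, PILOT, DATA = 0, 1, 2
# Symbol-type map for the FIG.11 extract: rows are carriers 1-5,
# columns are times $4, $5, $6, per the layout described in the text.
kind = np.array([[OTHER, DATA, DATA],
                 [OTHER, DATA, DATA],
                 [OTHER, DATA, PILOT],
                 [OTHER, DATA, DATA],
                 [OTHER, DATA, DATA]])

frame = np.full(kind.shape, 1 + 0j)
lam = 2 * np.pi / 9  # illustrative value; the text allows one per (carrier, time)
phase_changed = np.where(kind == DATA, frame * np.exp(1j * lam), frame)
# Only the nine data-symbol positions were rotated; the other symbols
# and the pilot symbol keep their original values.
assert np.count_nonzero(phase_changed != frame) == 9
```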
Among the symbols illustrated inFIG.11, the other symbols located at (carrier 1, time $4), (carrier 2, time $4), (carrier 3, time $4), (carrier 4, time $4), and (carrier 5, time $4), and the pilot symbol located at (carrier 3, time $6) are not subject to phase change by phase changer205A. This point is a characteristic of phase changer205A. Note that, as illustrated inFIG.4, data symbols are arranged at “the same carriers and the same times” as the symbols subject to phase change inFIG.11, which are the data symbols located at (carrier 1, time $5), (carrier 2, time $5), (carrier 3, time $5), (carrier 4, time $5), (carrier 5, time $5), (carrier 1, time $6), (carrier 2, time $6), (carrier 4, time $6), and (carrier 5, time $6). In other words, inFIG.4, the symbols located at (carrier 1, time $5), (carrier 2, time $5), (carrier 3, time $5), (carrier 4, time $5), (carrier 5, time $5), (carrier 1, time $6), (carrier 2, time $6), (carrier 4, time $6), and (carrier 5, time $6) are data symbols (in other words, data symbols that perform MIMO transmission (transmit a plurality of streams) are subject to phase change by phase changer205A). One example of the phase change that phase changer205A applies to the data symbols is the method given in Equation (51) in which a phase change is applied to the data symbols regularly (such as at each cycle M) (however, the phase change method implemented on the data symbols is not limited to this example). For example,FIG.11illustrates an extraction of carrier 1 through carrier 5 and time $4 through time $6 from the frame illustrated inFIG.5. Note that inFIG.11, similar toFIG.5,501is a pilot symbol,502is a data symbol, and503is an other symbol. As described above, among the symbols illustrated inFIG.11, phase changer205B applies a phase change to the data symbols located at (carrier 1, time $5), (carrier 2, time $5), (carrier 3, time $5), (carrier 4, time $5), (carrier 5, time $5), (carrier 1, time $6), (carrier 2, time $6), (carrier 4, time $6), and (carrier 5, time $6). Accordingly, the phase change values for the data symbols illustrated inFIG.11can be expressed as “ej×δ15(i)” for (carrier 1, time $5), “ej×δ25(i)” for (carrier 2, time $5), “ej×δ35(i)” for (carrier 3, time $5), “ej×δ45(i)” for (carrier 4, time $5), “ej×δ55(i)” for (carrier 5, time $5), “ej×δ16(i)” for (carrier 1, time $6), “ej×δ26(i)” for (carrier 2, time $6), “ej×δ46(i)” for (carrier 4, time $6), and “ej×δ56(i)” for (carrier 5, time $6). Among the symbols illustrated inFIG.11, the other symbols located at (carrier 1, time $4), (carrier 2, time $4), (carrier 3, time $4), (carrier 4, time $4), and (carrier 5, time $4), and the pilot symbol located at (carrier 3, time $6) are not subject to phase change by phase changer205B. This point is a characteristic of phase changer205B. Note that, as illustrated inFIG.4, data symbols are arranged at “the same carriers and the same times” as the symbols subject to phase change inFIG.11, which are the data symbols located at (carrier 1, time $5), (carrier 2, time $5), (carrier 3, time $5), (carrier 4, time $5), (carrier 5, time $5), (carrier 1, time $6), (carrier 2, time $6), (carrier 4, time $6), and (carrier 5, time $6).
In other words, inFIG.4, the symbols located at (carrier 1, time $5), (carrier 2, time $5), (carrier 3, time $5), (carrier 4, time $5), (carrier 5, time $5), (carrier 1, time $6), (carrier 2, time $6), (carrier 4, time $6), and (carrier 5, time $6) are data symbols (in other words, data symbols that perform MIMO transmission (transmit a plurality of streams) are subject to phase change by phase changer205B). One example of the phase change that phase changer205B applies to the data symbols is the method given in Equation (2) in which phase change is applied to the data symbols regularly (such as at each cycle N) (however, the phase change method implemented on the data symbols is not limited to this example). With this, when the environment is one in which the direct waves are dominant, such as in an LOS environment, it is possible to achieve improved data reception quality in the reception device with respect to the data symbols that perform MIMO transmission (transmit a plurality of streams). Next, the advantageous effects of this will be described. For example, the modulation scheme used by mapper104inFIG.1is quadrature phase shift keying (QPSK) (mapped signal201A inFIG.20is a QPSK signal, and mapped signal201B is a QPSK signal; in other words, two QPSK streams are transmitted). Accordingly, for example, using channel estimated signals806_1and806_2, 16 candidate signal points are obtained by signal processor811illustrated inFIG.8(2-bit transmission is possible with QPSK. Accordingly, since there are two streams, 4-bit transmission is achieved. Thus, there are 2^4 = 16 candidate signal points) (note that 16 other candidate signal points are obtained from using channel estimated signals808_1and808_2as well, but since description thereof is the same as described above, the following description will focus on the 16 candidate signal points obtained by using channel estimated signals806_1and806_2). FIG.12illustrates an example of the state resulting from such a case. In (A) and (B) inFIG.12, in-phase I is represented on the horizontal axis and quadrature Q is represented on the vertical axis, and 16 candidate signal points are present in the illustrated in-phase I-quadrature Q planes (among the 16 candidate signal points, one is a signal point that is transmitted by the transmission device; accordingly, this is referred to as “16 candidate signal points”). When the environment is one in which the direct waves are dominant, such as in an LOS environment, consider a first case in which phase changers205A and205B are omitted from the configuration illustrated inFIG.20(in other words, a case in which phase change is not applied by phase changers205A and205B inFIG.20). In the first case, since phase change is not applied, there is a possibility that the state illustrated in (A) inFIG.12will be realized. When the state falls into the state illustrated in (A) inFIG.12, as illustrated by “signal points1201and1202”, “signal points1203,1204,1205, and1206”, and “signal points1207,1208”, the signal points become dense (the distances between some signal points shorten). Accordingly, in the reception device illustrated inFIG.8, data reception quality may deteriorate. In order to remedy this phenomenon, inFIG.20, phase changers205A,205B are inserted.
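The effect of inserting the phase changers can be checked numerically. In the following sketch the channel row and precoder are arbitrary assumptions, chosen so that without a phase change the candidate points collapse onto one another (an extreme instance of the dense state of (A) inFIG.12), and the phase change on one branch is taken to be the cycle-N exponential form described in the text.

```python
import itertools
import numpy as np

QPSK = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
F = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # assumed precoder
h = np.array([1.0 + 0.0j, 1.0 + 0.0j])        # assumed LOS-like channel row

def min_distance(i, N=5):
    y_i = np.exp(1j * 2 * np.pi * i / N)      # phase change on branch B
    pts = [h @ np.diag([1, y_i]) @ F @ np.array([s1, s2])
           for s1, s2 in itertools.product(QPSK, QPSK)]
    return min(abs(p - q) for p, q in itertools.combinations(pts, 2))

# For i = 0 (equivalently, no phase change) the candidate points overlap
# (minimum distance 0); as i varies, better-separated geometries appear,
# mirroring the mix of (A) and (B) in FIG.12.
print([round(min_distance(i), 3) for i in range(5)])
```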
When phase changers205A,205B are inserted, due to symbol number i, there is a mix of symbol numbers whose signal points are dense (the distances between some signal points shorten), such as in (A) inFIG.12, and symbol numbers whose “distance between signal points is long”, such as in (B) inFIG.12. With respect to this state, since error correction code is introduced, high error correction performance is achieved, and in the reception device illustrated inFIG.8, high data reception quality can be achieved. Note that a phase change is not applied by phase changers205A,205B inFIG.20to the symbols used for demodulating (wave detection of) data symbols and for channel estimation, such as pilot symbols and a preamble. With this, among data symbols, it can be realized that, due to symbol number i, there is a mix of symbol numbers whose signal points are dense (the distances between some signal points shorten), such as in (A) inFIG.12, and symbol numbers whose distance between signal points is long, such as in (B) inFIG.12. However, even if a phase change is applied by phase changers205A,205B inFIG.20to the pilot symbols and/or preamble used for demodulating (wave detection of) data symbols and for channel estimation, the same mix of dense and well-separated signal points among the data symbols can still be realized. In such a case, a phase change must be applied to pilot symbols and/or a preamble under some condition. For example, one conceivable method is to implement a rule, separate from the rule for applying a phase change to a data symbol, for applying a phase change to a pilot symbol and/or a preamble. Another example is a method of regularly applying a phase change to a data symbol in a cycle N, and regularly applying a phase change to a pilot symbol and/or a preamble in a cycle M (N and M are integers that are greater than or equal to 2). As described above, phase changer209B receives inputs of baseband signal208B and control signal200, applies a phase change to baseband signal208B based on control signal200, and outputs phase-changed signal210B. Baseband signal208B is a function of symbol number i (i is an integer that is greater than or equal to 0), and is expressed as x′(i). Then, phase-changed signal210B (x(i)) can be expressed as x(i)=ej×ε(i)×x′(i) (j is an imaginary number unit). Note that the operation performed by phase changer209B may be CDD (cyclic delay diversity) (CSD (cycle shift diversity)) disclosed in “Standard conformable antenna diversity techniques for OFDM and its application to the DVB-T system,” IEEE Globecom 2001, pp. 3100-3105, November 2001, and in IEEE P802.11n (D3.00) Draft STANDARD for Information Technology-Telecommunications and information exchange between systems-Local and metropolitan area networks-Specific requirements-Part 11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) specifications, 2007.
One characteristic of phase changer209B is that it applies a phase change to a symbol present along the frequency axis (i.e., applies a phase change to, for example, a data symbol, a pilot symbol, and/or a control information symbol; accordingly, in such a case, symbols subject to symbol number i include data symbols, pilot symbols, control information symbols, and preambles (other symbols)) (in the case ofFIG.20, since phase changer209B applies a phase change to baseband signal208B, a phase change is applied to each symbol inFIG.5). Accordingly, in the frame illustrated inFIG.5, phase changer209B illustrated inFIG.20applies a phase change to all symbols for all carriers 1 to 36 at time $1 (in this case, all other symbols503). Similarly, phase changer209B applies a phase change to all symbols for all carriers 1 to 36 at each of times $2, $3, and $4 (in this case, all other symbols503), and at each of times $5, $6, $7, $8, $9, $10, $11, and so on (in this case, pilot symbols501or data symbols502). FIG.13illustrates a frame configuration different from the frame configuration illustrated inFIG.4of transmission signal108_A illustrated inFIG.1.FIG.13is described in Embodiment 1. Accordingly, description will be omitted from this embodiment. FIG.14illustrates a frame configuration different from the frame configuration illustrated inFIG.5of transmission signal108_B illustrated inFIG.1.FIG.14is described in Embodiment 1. Accordingly, description will be omitted from this embodiment. When a symbol is present in carrier A at time $B inFIG.13and a symbol is present in carrier A at time $B inFIG.14, the symbol in carrier A at time $B inFIG.13and the symbol in carrier A at time $B inFIG.14are transmitted at the same time and same frequency. Note that the frame configurations illustrated inFIG.13andFIG.14are merely examples. The other symbols inFIG.13andFIG.14are symbols corresponding to “preamble signal252and control information symbol signal253inFIG.20”. Accordingly, when an other symbol403inFIG.13at the same time and same frequency (same carrier) as an other symbol503inFIG.14transmits control information, it transmits the same data (the same control information).
Note that this is under the assumption that the frame ofFIG.13and the frame ofFIG.14are received at the same time by the reception device, but even when the frame ofFIG.13or the frame ofFIG.14has been received, the reception device can obtain the data transmitted by the transmission device. Phase changer209B receives inputs of baseband signal208B and control signal200, applies a phase change to baseband signal208B based on control signal200, and outputs phase-changed signal210B. Baseband signal208B is a function of symbol number i (i is an integer that is greater than or equal to 0), and is expressed as x′(i). Then, phase-changed signal210B (x(i)) can be expressed as x(i)=ej×ε(i)×x′(i) (j is an imaginary number unit). Note that the operation performed by phase changer209B may be CDD (cyclic delay diversity) (CSD (cycle shift diversity)) disclosed in “Standard conformable antenna diversity techniques for OFDM and its application to the DVB-T system,” IEEE Globecom 2001, pp. 3100-3105, November 2001, and in IEEE P802.11n (D3.00) Draft STANDARD for Information Technology-Telecommunications and information exchange between systems-Local and metropolitan area networks-Specific requirements-Part 11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) specifications, 2007. One characteristic of phase changer209B is that it applies a phase change to a symbol present along the frequency axis (i.e., applies a phase change to, for example, a data symbol, a pilot symbol, and/or a control information symbol). Here, a null symbol may be considered as a target for application of a phase change (accordingly, in such a case, symbols subject to symbol number i include data symbols, pilot symbols, control information symbols, preambles (other symbols), and null symbols). However, even if a phase change is applied to a null symbol, the signals before and after the phase change are the same (in-phase component I is zero (0) and the quadrature component Q is zero (0)). Accordingly, it is possible to construe a null symbol as not a target for a phase change (in the case ofFIG.20, since phase changer209B applies a phase change to baseband signal208B, a phase change is applied to each symbol inFIG.14). Accordingly, in the frame illustrated inFIG.14, phase changer209B illustrated inFIG.20applies a phase change to all symbols (in this case, all other symbols503) for all carriers 1 to 36 at time $1. However, the handling of the phase change with respect to null symbol1301is as previously described. 
Similarly, phase changer209B illustrated inFIG.20applies a phase change to all symbols for all carriers 1 to 36 at each of times $2, $3, and $4 (in this case, all other symbols503), and at each of times $5, $6, $7, $8, $9, $10, $11, and so on (in this case, pilot symbols501or data symbols502). In each case, the handling of the phase change with respect to null symbol1301is as previously described. The phase change value of phase changer209B is expressed as Ω(i). Baseband signal208B is x′(i) and phase-changed signal210B is x(i). Accordingly, x(i)=Ω(i)×x′(i) holds true. For example, the phase change value is set to Equation (38) (Q is an integer that is greater than or equal to 2, and represents the number of phase change cycles) (j is an imaginary number unit). However, Equation (38) is merely a non-limiting example. For example, Ω(i) may be set so as to implement a phase change that yields a cycle Q. Moreover, for example, inFIG.5andFIG.14, the same phase change value is applied to the same carriers, and the phase change value may be set on a per carrier basis. For example, the following may be implemented. Regardless of time, the phase change value may be as in Equation (39) for carrier 1 inFIG.5andFIG.14.
Regardless of time, the phase change value may be as in Equation (40) for carrier 2 in FIG. 5 and FIG. 14. Regardless of time, the phase change value may be as in Equation (41) for carrier 3 in FIG. 5 and FIG. 14. Regardless of time, the phase change value may be as in Equation (42) for carrier 4 in FIG. 5 and FIG. 14, and so on for the remaining carriers. This concludes the operational example of phase changer 209B illustrated in FIG. 20.

Next, the advantageous effects obtained by phase changer 209B illustrated in FIG. 20 will be described.

The other symbols 403, 503 in "the frames of FIG. 4 and FIG. 5" or "the frames of FIG. 13 and FIG. 14" include a control information symbol. As previously described, when an other symbol 503 in FIG. 5 at the same time and same frequency (in the same carrier) as an other symbol 403 transmits control information, it transmits the same data (the same control information). However, consider the following cases.

Case 2: transmitting a control information symbol using either antenna unit #A (109_A) or antenna unit #B (109_B) illustrated in FIG. 1.

When transmission according to "case 2" is performed, since only one antenna is used to transmit the control information symbol, the spatial diversity gain is smaller than when "transmitting a control information symbol using both antenna unit #A (109_A) and antenna unit #B (109_B)" is performed. Accordingly, in "case 2", data reception quality deteriorates even when received by the reception device illustrated in FIG. 8. Accordingly, from the perspective of improving data reception quality, "transmitting a control information symbol using both antenna unit #A (109_A) and antenna unit #B (109_B)" is more beneficial.

Case 3: transmitting a control information symbol using both antenna unit #A (109_A) and antenna unit #B (109_B) illustrated in FIG. 1; however, a phase change is not performed by phase changer 209B illustrated in FIG. 20.

When transmission according to "case 3" is performed, since the modulated signal transmitted from antenna unit #A 109_A and the modulated signal transmitted from antenna unit #B 109_B are the same (or exhibit a specific phase shift), depending on the radio wave propagation environment, the reception device illustrated in FIG. 8 may receive an inferior reception signal, and both modulated signals may be subjected to the same multipath effect. Accordingly, in the reception device illustrated in FIG. 8, data reception quality deteriorates.

In order to remedy this phenomenon, phase changer 209B is inserted in FIG. 20. Since this changes the phase along the time or frequency axis, in the reception device illustrated in FIG. 8, it is possible to reduce the probability of reception of an inferior reception signal. Moreover, since there is a high probability that the multipath effect to which the modulated signal transmitted from antenna unit #A 109_A is subjected will differ from the multipath effect to which the modulated signal transmitted from antenna unit #B 109_B is subjected, there is a high probability that diversity gain will result, and accordingly, that data reception quality in the reception device illustrated in FIG. 8 will improve. For these reasons, phase changer 209B is provided in FIG. 20 and a phase change is implemented.
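The remedy described above can also be illustrated numerically. The following sketch (in Python) is a hypothetical example showing that, once phase changer 209B rotates the branch transmitted from antenna unit #B on a per-carrier basis, the two transmitted signals are no longer identical on every carrier, so it becomes unlikely that both are subjected to an identical multipath effect on all carriers simultaneously. The per-carrier phase function used here is an assumed example, not the values of Equations (39) through (42).

import numpy as np

K = 8                                  # assumed number of carriers
s = np.ones(K, dtype=complex)          # same control information symbol on every carrier

branch_a = s                           # antenna unit #A: transmitted as-is
k = np.arange(K)
branch_b = np.exp(1j * 2 * np.pi * k / K) * s   # antenna unit #B after phase changer 209B

# Without phase changer 209B, both branches would be identical on every
# carrier; with it, the branches differ carrier by carrier.
print(np.allclose(branch_a, branch_b))   # False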
Other symbols 403 and other symbols 503 include, in addition to control information symbols, for example, symbols for signal detection, symbols for performing frequency and time synchronization, and symbols for performing channel estimation (symbols for performing propagation path fluctuation estimation), which are used for demodulating and decoding control information symbols. Moreover, "the frames of FIG. 4 and FIG. 5" or "the frames of FIG. 13 and FIG. 14" include pilot symbols 401, 501, and by using these, it is possible to demodulate and decode the control information symbols with high precision.

Moreover, "the frames of FIG. 4 and FIG. 5" or "the frames of FIG. 13 and FIG. 14" transmit a plurality of streams (perform MIMO transmission) at the same time and using the same frequency (frequency band) via data symbols 402 and data symbols 502. In order to demodulate these data symbols, the symbols for signal detection, symbols for frequency and time synchronization, and symbols for channel estimation (symbols for propagation path variation estimation) included in other symbols 403 and other symbols 503 are used.

Here, the "symbols for signal detection, symbols for frequency and time synchronization, and symbols for channel estimation (symbols for propagation path variation estimation) included in other symbols 403 and other symbols 503" have a phase change applied to them by phase changer 209B, as described above. Under these circumstances, if the same processing were not performed on data symbols 402 and data symbols 502 (on data symbols 502 in the example above), then, when data symbols 402 and data symbols 502 are demodulated and decoded in the reception device, the demodulation and decoding would have to account for the processing of the phase change performed by phase changer 209B, and there is a probability that this processing would be complicated (this is because the "symbols for signal detection, symbols for frequency and time synchronization, and symbols for channel estimation (symbols for propagation path variation estimation) included in other symbols 403 and other symbols 503" have a phase change applied to them by phase changer 209B).

However, as illustrated in FIG. 20, when phase changer 209B applies a phase change to data symbols 402 and data symbols 502 (to data symbols 502 in the example above), there is the advantage that, in the reception device, data symbols 402 and data symbols 502 can (easily) be demodulated and decoded using the channel estimation signal (propagation path fluctuation signal) estimated by using the "symbols for signal detection, symbols for frequency and time synchronization, and symbols for channel estimation (symbols for propagation path variation estimation) included in other symbols 403 and other symbols 503". Additionally, as illustrated in FIG. 20, when phase changer 209B applies a phase change to data symbols 402 and data symbols 502 (to data symbols 502 in the example above), in multipath environments, it is possible to reduce the influence of sharp drops in electric field intensity along the frequency axis. Accordingly, it is possible to obtain the advantageous effect of an improvement in data reception quality of data symbols 402 and data symbols 502.

In this way, the point that "the symbols that are targets for implementation of a phase change by phase changers 205A, 205B" and "the symbols that are targets for implementation of a phase change by phase changer 209B" are different is a characteristic point.
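The characteristic point described above, namely that phase changers 205A, 205B and phase changer 209B target different symbols, can be summarized in the following sketch (in Python) of the processing order of FIG. 20 for one pair of branches. The precoding matrix, the phase functions, and the single "other symbol" used here are assumed placeholders for illustration only; they are not the values prescribed by Equations (49), (50), (2), or (38).

import numpy as np

def signal_processor_sketch(s1, s2, M, N, Q):
    i = np.arange(len(s1))
    # Weighting synthesis (precoding); an assumed fixed matrix.
    F = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    z1p = F[0, 0] * s1 + F[0, 1] * s2
    z2p = F[1, 0] * s1 + F[1, 1] * s2
    # Phase changers 205A/205B: applied to data symbols only
    # (every symbol at this stage is a data symbol).
    z1 = np.exp(1j * 2 * np.pi * i / M) * z1p
    z2 = np.exp(1j * 2 * np.pi * i / N) * z2p
    # Inserters 207A/207B: prepend an "other symbol" (preamble /
    # control information), identical on both branches.
    other = np.array([1 + 0j])
    x1 = np.concatenate([other, z1])
    x2 = np.concatenate([other, z2])
    # Phase changer 209B: applied to every symbol of branch B,
    # other symbols and data symbols alike.
    n = np.arange(len(x2))
    x2 = np.exp(1j * 2 * np.pi * n / Q) * x2
    return x1, x2

x1, x2 = signal_processor_sketch(np.array([1 + 1j, 1 - 1j]),
                                 np.array([-1 + 1j, -1 - 1j]),
                                 M=3, N=5, Q=4)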
As described above, by applying a phase change using phase changers 205A, 205B illustrated in FIG. 20, it is possible to achieve the advantageous effect of an improvement in data reception quality of data symbols 402 and data symbols 502 in the reception device in, for example, LOS environments, and by applying a phase change using phase changer 209B illustrated in FIG. 20, it is possible to achieve, for example, the advantageous effect of an improvement in data reception quality, in the reception device, of the control information symbols included in "the frames of FIG. 4 and FIG. 5" or "the frames of FIG. 13 and FIG. 14", as well as the advantageous effect that the operations of demodulating and decoding data symbols 402 and data symbols 502 become simple. Note that the advantageous effect of an improvement in data reception quality, in the reception device, of data symbols 402 and data symbols 502 in, for example, LOS environments, is achieved as a result of the phase change implemented by phase changers 205A, 205B illustrated in FIG. 20; furthermore, the reception quality of data symbols 402 and data symbols 502 is improved by applying a phase change to data symbols 402 and data symbols 502 using phase changer 209B illustrated in FIG. 20.

Note that Q in Equation (38) may be an integer of −2 or less. In such a case, the value of the phase change cycle is the absolute value of Q. This feature is applicable to Embodiment 1 as well.

Embodiment 5

In this embodiment, an implementation method will be described that is different from the configuration illustrated in FIG. 2 and described in Embodiment 1.

FIG. 1 illustrates one example of a configuration of a transmission device according to this embodiment, such as a base station, access point, or broadcast station. As FIG. 1 is described in detail in Embodiment 1, description will be omitted from this embodiment.

Signal processor 106 receives inputs of mapped signals 105_1 and 105_2, signal group 110, and control signal 100, performs signal processing based on control signal 100, and outputs signal-processed signals 106_A and 106_B. Here, signal-processed signal 106_A is expressed as u1(i), and signal-processed signal 106_B is expressed as u2(i) (i is a symbol number; for example, i is an integer that is greater than or equal to 0). Note that details regarding the signal processing will be described later with reference to FIG. 21.

FIG. 21 illustrates one example of a configuration of signal processor 106 illustrated in FIG. 1. Weighting synthesizer (precoder) 203 receives inputs of mapped signal 201A (mapped signal 105_1 in FIG. 1), mapped signal 201B (mapped signal 105_2 in FIG. 1), and control signal 200 (control signal 100 in FIG. 1), performs weighting synthesis (precoding) based on control signal 200, and outputs weighted signal 204A and weighted signal 204B. Here, mapped signal 201A is expressed as s1(t), mapped signal 201B is expressed as s2(t), weighted signal 204A is expressed as z1′(t), and weighted signal 204B is expressed as z2′(t). Note that one example of t is time (s1(t), s2(t), z1′(t), and z2′(t) are defined as complex numbers (accordingly, they may be real numbers)). Here, these are given as functions of time, but they may be functions of a "frequency (carrier number)", or functions of "time and frequency". They may also be functions of a "symbol number". Note that this also applies to Embodiment 1. Weighting synthesizer (precoder) 203 performs the calculations indicated in Equation (49).
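As a non-limiting illustration, the calculation of Equation (49) can be sketched as the per-symbol matrix multiplication shown below (in Python), in which (z1′(i), z2′(i)) is obtained by multiplying (s1(i), s2(i)) by a precoding matrix F. The value of F used here is an assumed example only; conceivable precoding matrices are those of Equation (5) through Equation (36), which are not reproduced here.

import numpy as np

def weighting_synthesis(s1, s2, F):
    # Per-symbol precoding: (z1'(i), z2'(i))^T = F (s1(i), s2(i))^T.
    s = np.vstack([s1, s2])      # 2 x (number of symbols)
    z = F @ s
    return z[0], z[1]            # z1'(i), z2'(i)

theta = np.pi / 4                # assumed parameter, for illustration
F = np.array([[np.cos(theta), np.sin(theta)],
              [np.sin(theta), -np.cos(theta)]])
z1p, z2p = weighting_synthesis(np.array([1 + 1j, -1 - 1j]),
                               np.array([1 - 1j, -1 + 1j]), F)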
Phase changer 205A receives inputs of weighted signal 204A and control signal 200, applies a phase change to weighted signal 204A based on control signal 200, and outputs phase-changed signal 206A. Note that phase-changed signal 206A is expressed as z1(i), and z1(i) is defined as a complex number (and may be a real number).

Next, specific operations performed by phase changer 205A will be described. In phase changer 205A, for example, a phase change of w(i) is applied to z1′(i). Accordingly, z1(i) can be expressed as z1(i) = w(i) × z1′(i) (i is a symbol number (i is an integer that is greater than or equal to 0)). For example, the phase change value is set as indicated in Equation (50) (M is an integer that is greater than or equal to 2, and M is the phase change cycle) (when M is set to an odd number greater than or equal to 3, data reception quality may improve). However, Equation (50) is merely a non-limiting example. Here, the phase change value is expressed as w(i) = e^(j×λ(i)).

Phase changer 205B receives inputs of weighted signal 204B and control signal 200, applies a phase change to weighted signal 204B based on control signal 200, and outputs phase-changed signal 206B. Note that phase-changed signal 206B is expressed as z2(i), and z2(i) is defined as a complex number (and may be a real number).

Next, specific operations performed by phase changer 205B will be described. In phase changer 205B, for example, a phase change of y(i) is applied to z2′(i). Accordingly, z2(i) can be expressed as z2(i) = y(i) × z2′(i) (i is a symbol number (i is an integer that is greater than or equal to 0)). For example, the phase change value is set as shown in Equation (2) (N is an integer that is greater than or equal to 2, N is the phase change cycle, and N ≠ M) (when N is set to an odd number greater than or equal to 3, data reception quality may improve). However, Equation (2) is merely a non-limiting example. Here, the phase change value is expressed as y(i) = e^(j×δ(i)).

Here, z1(i) and z2(i) can be expressed with Equation (51). Note that δ(i) and λ(i) are real numbers. z1(i) and z2(i) are transmitted from the transmission device at the same time and using the same frequency (same frequency band). In Equation (51), the phase change value is not limited to the values used in Equation (2) and Equation (50); for example, a method in which the phase is changed cyclically or regularly is conceivable. As described in Embodiment 1, conceivable examples of the (precoding) matrix used in Equation (49) and Equation (51) are illustrated in Equation (5) through Equation (36) (however, the precoding matrix is not limited to these examples (the same applies to Embodiment 1)).
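As a non-limiting illustration of the operations of phase changers 205A and 205B described above, the following sketch (in Python) applies z1(i) = w(i) × z1′(i) and z2(i) = y(i) × z2′(i), assuming, for illustration only, that w(i) and y(i) are cyclic values of the form e^(j×2πi/M) and e^(j×2πi/N) with M ≠ N; the actual values are those of Equation (50) and Equation (2), which are not reproduced here.

import numpy as np

def phase_changers_205(z1p, z2p, M, N):
    # z1(i) = w(i) * z1'(i) and z2(i) = y(i) * z2'(i), with assumed
    # cyclic phase values of cycles M and N (M != N).
    i = np.arange(len(z1p))
    w = np.exp(1j * 2 * np.pi * i / M)   # phase changer 205A
    y = np.exp(1j * 2 * np.pi * i / N)   # phase changer 205B
    return w * z1p, y * z2p

z1, z2 = phase_changers_205(np.array([1 + 0j, 0 + 1j]),
                            np.array([0 - 1j, -1 + 0j]), M=3, N=5)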
Inserter 207A receives inputs of phase-changed signal 206A, pilot symbol signal (pa(t)) (t is time) (251A), preamble signal 252, control information symbol signal 253, and control signal 200, and, based on information on the frame configuration included in control signal 200, outputs baseband signal 208A based on the frame configuration. Similarly, inserter 207B receives inputs of phase-changed signal 206B, pilot symbol signal (pb(t)) (251B), preamble signal 252, control information symbol signal 253, and control signal 200, and, based on information on the frame configuration included in control signal 200, outputs baseband signal 208B based on the frame configuration.

Phase changer 209B receives inputs of baseband signal 208B and control signal 200, applies a phase change to baseband signal 208B based on control signal 200, and outputs phase-changed signal 210B. Baseband signal 208B is a function of symbol number i (i is an integer that is greater than or equal to 0), and is expressed as x′(i). Then, phase-changed signal 210B (x(i)) can be expressed as x(i) = e^(j×ε(i)) × x′(i) (j is the imaginary unit). As described in Embodiment 1 and elsewhere, note that the operation performed by phase changer 209B may be CDD (cyclic delay diversity) (CSD (cyclic shift diversity)) as disclosed in "Standard conformable antenna diversity techniques for OFDM and its application to the DVB-T system," IEEE Globecom 2001, pp. 3100-3105, November 2001, and in IEEE P802.11n (D3.00) Draft STANDARD for Information Technology-Telecommunications and information exchange between systems-Local and metropolitan area networks-Specific requirements-Part 11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) specifications, 2007. One characteristic of phase changer 209B is that it applies a phase change to a symbol present along the frequency axis (i.e., applies a phase change to, for example, a data symbol, a pilot symbol, and/or a control information symbol).

FIG. 3 illustrates one example of a configuration of radio units 107_A and 107_B illustrated in FIG. 1. FIG. 3 is described in Embodiment 1. Accordingly, description will be omitted from this embodiment.

FIG. 4 illustrates a frame configuration of transmission signal 108_A illustrated in FIG. 1. FIG. 4 is described in Embodiment 1. Accordingly, description will be omitted from this embodiment.

FIG. 5 illustrates a frame configuration of transmission signal 108_B illustrated in FIG. 1. FIG. 5 is described in Embodiment 1. Accordingly, description will be omitted from this embodiment.

When a symbol is present in carrier A at time $B in FIG. 4 and a symbol is present in carrier A at time $B in FIG. 5, the symbol in carrier A at time $B in FIG. 4 and the symbol in carrier A at time $B in FIG. 5 are transmitted at the same time and same frequency. Note that the frame configuration is not limited to the configurations illustrated in FIG. 4 and FIG. 5; FIG. 4 and FIG. 5 are mere examples of frame configurations.

The other symbols in FIG. 4 and FIG. 5 are symbols corresponding to "preamble signal 252 and control information symbol signal 253 in FIG. 21". Accordingly, when an other symbol 503 in FIG. 5 at the same time and same frequency (same carrier) as an other symbol 403 in FIG. 4 transmits control information, it transmits the same data (the same control information). Note that this is under the assumption that the frame of FIG. 4 and the frame of FIG. 5 are received at the same time by the reception device; however, even when only the frame of FIG. 4 or only the frame of FIG. 5 has been received, the reception device can obtain the data transmitted by the transmission device.

FIG. 6 illustrates one example of components relating to control information generation for generating control information symbol signal 253 illustrated in FIG. 21. FIG. 6 is described in Embodiment 1. Accordingly, description will be omitted from this embodiment.

FIG. 7 illustrates one example of a configuration of antenna unit #A (109_A) and antenna unit #B (109_B) illustrated in FIG. 1 (in this example, antenna unit #A (109_A) and antenna unit #B (109_B) include a plurality of antennas). FIG. 7 is described in Embodiment 1. Accordingly, description will be omitted from this embodiment.
FIG. 8 illustrates one example of a configuration of a reception device that receives a modulated signal upon the transmission device illustrated in FIG. 1 transmitting, for example, a transmission signal having the frame configuration illustrated in FIG. 4 or FIG. 5. FIG. 8 is described in Embodiment 1. Accordingly, description will be omitted from this embodiment.

FIG. 10 illustrates one example of a configuration of antenna unit #X (801X) and antenna unit #Y (801Y) illustrated in FIG. 8 (antenna unit #X (801X) and antenna unit #Y (801Y) are exemplified as including a plurality of antennas). FIG. 10 is described in Embodiment 1. Accordingly, description will be omitted from this embodiment.

Next, consider the case in which phase changers 205A, 205B and phase changer 209B are inserted in signal processor 106 of the transmission device illustrated in FIG. 1, as illustrated in FIG. 21. The characteristics and advantageous effects of this configuration will be described.

As described with reference to FIG. 4 and FIG. 5, weighting synthesizer 203 applies precoding (weighting synthesis) to mapped signal s1(i) (201A) (i is a symbol number; i is an integer greater than or equal to 0) obtained via mapping using the first sequence and mapped signal s2(i) (201B) obtained via mapping using the second sequence, and phase changers 205A, 205B apply a phase change to the obtained weighted signals 204A and 204B, respectively. Phase-changed signal 206A and phase-changed signal 206B are then transmitted at the same frequency and at the same time. Accordingly, in FIG. 4 and FIG. 5, a phase change is applied to data symbols 402 in FIG. 4 and data symbols 502 in FIG. 5.

For example, FIG. 11 illustrates an extraction of carrier 1 through carrier 5 and time $4 through time $6 from the frame illustrated in FIG. 4. Note that in FIG. 11, similar to FIG. 4, 401 is a pilot symbol, 402 is a data symbol, and 403 is an other symbol. As described above, among the symbols illustrated in FIG. 11, phase changer 205A applies a phase change to the data symbols located at (carrier 1, time $5), (carrier 2, time $5), (carrier 3, time $5), (carrier 4, time $5), (carrier 5, time $5), (carrier 1, time $6), (carrier 2, time $6), (carrier 4, time $6), and (carrier 5, time $6). Accordingly, the phase change values for the data symbols illustrated in FIG. 11 can be expressed as "e^(j×λ15(i))" for (carrier 1, time $5), "e^(j×λ25(i))" for (carrier 2, time $5), "e^(j×λ35(i))" for (carrier 3, time $5), "e^(j×λ45(i))" for (carrier 4, time $5), "e^(j×λ55(i))" for (carrier 5, time $5), "e^(j×λ16(i))" for (carrier 1, time $6), "e^(j×λ26(i))" for (carrier 2, time $6), "e^(j×λ46(i))" for (carrier 4, time $6), and "e^(j×λ56(i))" for (carrier 5, time $6).

Among the symbols illustrated in FIG. 11, the other symbols located at (carrier 1, time $4), (carrier 2, time $4), (carrier 3, time $4), (carrier 4, time $4), and (carrier 5, time $4), and the pilot symbol located at (carrier 3, time $6), are not subject to phase change by phase changer 205A. This point is a characteristic of phase changer 205A.

Note that, as illustrated in FIG. 5, data symbols are arranged at "the same carriers and the same times" as the symbols subject to phase change in FIG. 11, namely the data symbols located at (carrier 1, time $5), (carrier 2, time $5), (carrier 3, time $5), (carrier 4, time $5), (carrier 5, time $5), (carrier 1, time $6), (carrier 2, time $6), (carrier 4, time $6), and (carrier 5, time $6).
In other words, in FIG. 5, the symbols located at (carrier 1, time $5), (carrier 2, time $5), (carrier 3, time $5), (carrier 4, time $5), (carrier 5, time $5), (carrier 1, time $6), (carrier 2, time $6), (carrier 4, time $6), and (carrier 5, time $6) are also data symbols (in other words, data symbols that perform MIMO transmission (transmit a plurality of streams) are subject to phase change by phase changer 205A). One example of the phase change that phase changer 205A applies to the data symbols is the method given in Equation (50), in which a phase change is applied to the data symbols regularly (such as at each cycle M) (however, the phase change method implemented on the data symbols is not limited to this example).

Likewise, FIG. 11 also illustrates an extraction of carrier 1 through carrier 5 and time $4 through time $6 from the frame illustrated in FIG. 5. Note that in FIG. 11, similar to FIG. 5, 501 is a pilot symbol, 502 is a data symbol, and 503 is an other symbol. As described above, among the symbols illustrated in FIG. 11, phase changer 205B applies a phase change to the data symbols located at (carrier 1, time $5), (carrier 2, time $5), (carrier 3, time $5), (carrier 4, time $5), (carrier 5, time $5), (carrier 1, time $6), (carrier 2, time $6), (carrier 4, time $6), and (carrier 5, time $6). Accordingly, the phase change values for the data symbols illustrated in FIG. 11 can be expressed as "e^(j×δ15(i))" for (carrier 1, time $5), "e^(j×δ25(i))" for (carrier 2, time $5), "e^(j×δ35(i))" for (carrier 3, time $5), "e^(j×δ45(i))" for (carrier 4, time $5), "e^(j×δ55(i))" for (carrier 5, time $5), "e^(j×δ16(i))" for (carrier 1, time $6), "e^(j×δ26(i))" for (carrier 2, time $6), "e^(j×δ46(i))" for (carrier 4, time $6), and "e^(j×δ56(i))" for (carrier 5, time $6).

Among the symbols illustrated in FIG. 11, the other symbols located at (carrier 1, time $4), (carrier 2, time $4), (carrier 3, time $4), (carrier 4, time $4), and (carrier 5, time $4), and the pilot symbol located at (carrier 3, time $6), are not subject to phase change by phase changer 205B. This point is a characteristic of phase changer 205B.

Note that, as illustrated in FIG. 4, data symbols are arranged at "the same carriers and the same times" as the symbols subject to phase change in FIG. 11, namely the data symbols located at (carrier 1, time $5), (carrier 2, time $5), (carrier 3, time $5), (carrier 4, time $5), (carrier 5, time $5), (carrier 1, time $6), (carrier 2, time $6), (carrier 4, time $6), and (carrier 5, time $6). In other words, in FIG. 4, the symbols located at (carrier 1, time $5), (carrier 2, time $5), (carrier 3, time $5), (carrier 4, time $5), (carrier 5, time $5), (carrier 1, time $6), (carrier 2, time $6), (carrier 4, time $6), and (carrier 5, time $6) are also data symbols (in other words, data symbols that perform MIMO transmission (transmit a plurality of streams) are subject to phase change by phase changer 205B). One example of the phase change that phase changer 205B applies to the data symbols is the method given in Equation (2), in which a phase change is applied to the data symbols regularly (such as at each cycle N) (however, the phase change method implemented on the data symbols is not limited to this example).

With this, when the environment is one in which the direct waves are dominant, such as in an LOS environment, it is possible to achieve improved data reception quality in the reception device with respect to the data symbols that perform MIMO transmission (transmit a plurality of streams).
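The selective application described above, in which phase changers 205A, 205B apply a phase change to data symbols but leave pilot symbols and other symbols unchanged, can be sketched as follows (in Python). The symbol-type layout and the phase function are assumed placeholders that loosely mirror one carrier of FIG. 11; they are not the actual frame of FIG. 4 or FIG. 5.

import numpy as np

# Symbol types along time for one carrier: 'O' = other symbol,
# 'D' = data symbol, 'P' = pilot symbol (assumed layout).
kinds = np.array(['O', 'D', 'D', 'P', 'D'])
z_prime = np.ones(len(kinds), dtype=complex)   # placeholder symbol values

i = np.arange(len(kinds))
N = 3                                          # assumed cycle
y = np.exp(1j * 2 * np.pi * i / N)             # assumed phase values

# Apply the phase change to data symbols only; pilot symbols and
# other symbols pass through unchanged.
z = np.where(kinds == 'D', y * z_prime, z_prime)
print(z)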
Next, the advantageous effects of this will be described.

For example, the modulation scheme used by mapper 104 in FIG. 1 is quadrature phase shift keying (QPSK) (mapped signal 201A in FIG. 21 is a QPSK signal, and mapped signal 201B is a QPSK signal; in other words, two QPSK streams are transmitted). Accordingly, for example, using channel estimated signals 806_1 and 806_2, 16 candidate signal points are obtained by signal processor 811 illustrated in FIG. 8 (2-bit transmission is possible with QPSK; accordingly, since there are two streams, 4-bit transmission is achieved, and there are thus 2^4 = 16 candidate signal points) (note that 16 other candidate signal points are obtained by using channel estimated signals 808_1 and 808_2 as well, but since the description thereof is the same as described above, the following description will focus on the 16 candidate signal points obtained by using channel estimated signals 806_1 and 806_2).

FIG. 12 illustrates an example of the state resulting from such a case. In (A) and (B) in FIG. 12, in-phase I is represented on the horizontal axis and quadrature Q is represented on the vertical axis, and 16 candidate signal points are present in each of the illustrated in-phase I-quadrature Q planes (among the 16 candidate signal points, one is the signal point that was actually transmitted by the transmission device; accordingly, these are referred to as "16 candidate signal points").

When the environment is one in which the direct waves are dominant, such as in an LOS environment, consider a first case in which phase changers 205A and 205B are omitted from the configuration illustrated in FIG. 21 (in other words, a case in which a phase change is not applied by phase changers 205A and 205B in FIG. 21). In the first case, since a phase change is not applied, there is a possibility that the state illustrated in (A) in FIG. 12 will be realized. When the state falls into the state illustrated in (A) in FIG. 12, as illustrated by "signal points 1201 and 1202", "signal points 1203, 1204, 1205, and 1206", and "signal points 1207 and 1208", the signal points become dense (the distances between some signal points shorten). Accordingly, in the reception device illustrated in FIG. 8, data reception quality may deteriorate.

In order to remedy this phenomenon, phase changers 205A, 205B are inserted in FIG. 21. When phase changers 205A, 205B are inserted, depending on symbol number i, there is a mix of symbol numbers whose signal points are dense (the distances between some signal points shorten), as in (A) in FIG. 12, and symbol numbers whose distances between signal points are long, as in (B) in FIG. 12. With respect to this state, since an error correction code is introduced, high error correction performance is achieved, and in the reception device illustrated in FIG. 8, high data reception quality can be achieved.
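The 16 candidate signal points described above can be illustrated with the following sketch (in Python), in which the reception device forms, for each receive antenna, the points H(s1, s2)^T over all pairs of QPSK symbols. The channel matrix used here is an assumed example, not an actual channel estimate such as channel estimated signals 806_1 and 806_2.

import numpy as np
from itertools import product

# QPSK constellation: two streams of 2 bits each give 2^4 = 16 candidates.
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)

# Assumed 2x2 channel estimate; the values are illustrative only.
H = np.array([[0.9 + 0.1j, 0.4 - 0.2j],
              [0.3 + 0.3j, 1.0 - 0.1j]])

candidates = np.array([H @ np.array([s1, s2])
                       for s1, s2 in product(qpsk, repeat=2)])
print(candidates.shape)   # (16, 2): 16 candidate points per receive antenna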
Note that in FIG. 21, a phase change is not applied by phase changers 205A, 205B to the "pilot symbols, preamble" (that is, the pilot symbols and the preamble used for demodulating (performing wave detection of) data symbols and for channel estimation). With this, among data symbols, the state in which, depending on symbol number i, there is a mix of symbol numbers whose signal points are dense (the distances between some signal points shorten), as in (A) in FIG. 12, and symbol numbers whose distances between signal points are long, as in (B) in FIG. 12, can be realized.

However, even if a phase change is applied by phase changers 205A, 205B in FIG. 21 to the "pilot symbols, preamble" used for demodulating (performing wave detection of) data symbols and for channel estimation, it is still possible to realize the state in which, among data symbols and depending on symbol number i, there is a mix of symbol numbers whose signal points are dense (the distances between some signal points shorten), as in (A) in FIG. 12, and symbol numbers whose distances between signal points are long, as in (B) in FIG. 12. In such a case, a phase change needs to be applied to the pilot symbols and/or the preamble according to some rule. For example, one conceivable method is to implement a rule, separate from the rule for applying a phase change to data symbols, for applying a phase change to the pilot symbols and/or the preamble. Another example is a method of regularly applying a phase change to data symbols in a cycle N, and regularly applying a phase change to the pilot symbols and/or the preamble in a cycle M (N and M are integers that are greater than or equal to 2).

Phase changer 209A, in the same manner as described above for phase changer 209B, receives inputs of baseband signal 208A and control signal 200, applies a phase change to baseband signal 208A based on control signal 200, and outputs phase-changed signal 210A. Baseband signal 208A is a function of symbol number i (i is an integer that is greater than or equal to 0), and is expressed as x′(i). Then, phase-changed signal 210A (x(i)) can be expressed as x(i) = e^(j×ε(i)) × x′(i) (j is the imaginary unit). Note that the operation performed by phase changer 209A may be CDD (cyclic delay diversity) (CSD (cyclic shift diversity)) as disclosed in "Standard conformable antenna diversity techniques for OFDM and its application to the DVB-T system," IEEE Globecom 2001, pp. 3100-3105, November 2001, and in IEEE P802.11n (D3.00) Draft STANDARD for Information Technology-Telecommunications and information exchange between systems-Local and metropolitan area networks-Specific requirements-Part 11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) specifications, 2007.

One characteristic of phase changer 209A is that it applies a phase change to a symbol present along the frequency axis (i.e., applies a phase change to, for example, a data symbol, a pilot symbol, and/or a control information symbol) (accordingly, in such a case, symbols subject to symbol number i include data symbols, pilot symbols, control information symbols, and preambles (other symbols)) (in the case of FIG. 21, since phase changer 209A applies a phase change to baseband signal 208A, a phase change is applied to each symbol in FIG. 4). Accordingly, in the frame illustrated in FIG. 4, phase changer 209A illustrated in FIG. 21 applies a phase change to all symbols (in this case, all other symbols 403) for all carriers 1 to 36 at time $1.
Similarly, phase changer 209A illustrated in FIG. 21 applies a phase change to all symbols for all carriers 1 to 36 at each of times $2, $3, and $4 (in this case, all other symbols 403), and likewise at each of times $5, $6, $7, $8, $9, $10, $11, and so on (in this case, pilot symbols 401 or data symbols 402).

FIG. 13 illustrates a frame configuration of transmission signal 108_A illustrated in FIG. 1 that is different from the frame configuration illustrated in FIG. 4. FIG. 13 is described in Embodiment 1. Accordingly, description will be omitted from this embodiment.

FIG. 14 illustrates a frame configuration of transmission signal 108_B illustrated in FIG. 1 that is different from the frame configuration illustrated in FIG. 5. FIG. 14 is described in Embodiment 1. Accordingly, description will be omitted from this embodiment.

When a symbol is present in carrier A at time $B in FIG. 13 and a symbol is present in carrier A at time $B in FIG. 14, the symbol in carrier A at time $B in FIG. 13 and the symbol in carrier A at time $B in FIG. 14 are transmitted at the same time and same frequency. Note that the frame configurations illustrated in FIG. 13 and FIG. 14 are merely examples.

The other symbols in FIG. 13 and FIG. 14 are symbols corresponding to "preamble signal 252 and control information symbol signal 253 in FIG. 21". Accordingly, when an other symbol 403 in FIG. 13 at the same time and same frequency (same carrier) as an other symbol 503 in FIG. 14 transmits control information, it transmits the same data (the same control information). Note that this is under the assumption that the frame of FIG. 13 and the frame of FIG. 14 are received at the same time by the reception device; however, even when only the frame of FIG. 13 or only the frame of FIG. 14 has been received, the reception device can obtain the data transmitted by the transmission device.

Phase changer 209A receives inputs of baseband signal 208A and control signal 200, applies a phase change to baseband signal 208A based on control signal 200, and outputs phase-changed signal 210A. Baseband signal 208A is a function of symbol number i (i is an integer that is greater than or equal to 0), and is expressed as x′(i). Then, phase-changed signal 210A (x(i)) can be expressed as x(i) = e^(j×ε(i)) × x′(i) (j is the imaginary unit).
Note that the operation performed by phase changer 209A may be CDD (cyclic delay diversity) (CSD (cyclic shift diversity)) as disclosed in "Standard conformable antenna diversity techniques for OFDM and its application to the DVB-T system," IEEE Globecom 2001, pp. 3100-3105, November 2001, and in IEEE P802.11n (D3.00) Draft STANDARD for Information Technology-Telecommunications and information exchange between systems-Local and metropolitan area networks-Specific requirements-Part 11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) specifications, 2007. One characteristic of phase changer 209A is that it applies a phase change to a symbol present along the frequency axis (i.e., applies a phase change to, for example, a data symbol, a pilot symbol, and/or a control information symbol). Here, a null symbol may be considered a target for application of a phase change (accordingly, in such a case, symbols subject to symbol number i include data symbols, pilot symbols, control information symbols, preambles (other symbols), and null symbols). However, even if a phase change is applied to a null symbol, the signals before and after the phase change are the same (the in-phase component I is zero (0) and the quadrature component Q is zero (0)). Accordingly, it is also possible to construe a null symbol as not being a target for a phase change (in the case of FIG. 21, since phase changer 209A applies a phase change to baseband signal 208A, a phase change is applied to each symbol in FIG. 13). Accordingly, in the frame illustrated in FIG. 13, phase changer 209A illustrated in FIG. 21 applies a phase change to all symbols (in this case, all other symbols 403) for all carriers 1 to 36 at time $1. However, the handling of the phase change with respect to null symbol 1301 is as previously described.
Similarly, phase changer 209A illustrated in FIG. 21 applies a phase change to all symbols for all carriers 1 to 36 at each of times $2, $3, and $4 (in this case, all other symbols 403), and likewise at each of times $5, $6, $7, $8, $9, $10, $11, and so on (in this case, pilot symbols 401 or data symbols 402). In every case, the handling of the phase change with respect to null symbol 1301 is as previously described.

The phase change value of phase changer 209A is expressed as Ω(i). Baseband signal 208A is x′(i) and phase-changed signal 210A is x(i). Accordingly, x(i) = Ω(i) × x′(i) holds true. For example, the phase change value is set to Equation (38) (Q is an integer that is greater than or equal to 2, and represents the number of phase change cycles) (j is the imaginary unit). However, Equation (38) is merely a non-limiting example. For example, Ω(i) may be set so as to implement a phase change that yields a cycle Q.

Moreover, although in FIG. 4 and FIG. 13 the same phase change value is applied within the same carrier, the phase change value may be set on a per-carrier basis. For example, the following may be implemented. Regardless of time, the phase change value may be as in Equation (39) for carrier 1 in FIG. 4 and FIG. 13.
Regardless of time, the phase change value may be as in Equation (40) for carrier 2 in FIG. 4 and FIG. 13. Regardless of time, the phase change value may be as in Equation (41) for carrier 3 in FIG. 4 and FIG. 13. Regardless of time, the phase change value may be as in Equation (42) for carrier 4 in FIG. 4 and FIG. 13, and so on for the remaining carriers. This concludes the operational example of phase changer 209A illustrated in FIG. 21.

Next, the advantageous effects obtained by phase changer 209A illustrated in FIG. 21 will be described.

The other symbols 403, 503 in "the frames of FIG. 4 and FIG. 5" or "the frames of FIG. 13 and FIG. 14" include a control information symbol. As previously described, when an other symbol 503 in FIG. 5 at the same time and same frequency (in the same carrier) as an other symbol 403 transmits control information, it transmits the same data (the same control information). However, consider the following cases.

Case 2: transmitting a control information symbol using either antenna unit #A (109_A) or antenna unit #B (109_B) illustrated in FIG. 1.

When transmission according to "case 2" is performed, since only one antenna is used to transmit the control information symbol, the spatial diversity gain is smaller than when "transmitting a control information symbol using both antenna unit #A (109_A) and antenna unit #B (109_B)" is performed. Accordingly, in "case 2", data reception quality deteriorates even when received by the reception device illustrated in FIG. 8. Accordingly, from the perspective of improving data reception quality, "transmitting a control information symbol using both antenna unit #A (109_A) and antenna unit #B (109_B)" is more beneficial.

Case 3: transmitting a control information symbol using both antenna unit #A (109_A) and antenna unit #B (109_B) illustrated in FIG. 1; however, a phase change is not performed by phase changer 209A illustrated in FIG. 21.

When transmission according to "case 3" is performed, since the modulated signal transmitted from antenna unit #A 109_A and the modulated signal transmitted from antenna unit #B 109_B are the same (or exhibit a specific phase shift), depending on the radio wave propagation environment, the reception device illustrated in FIG. 8 may receive an inferior reception signal, and both modulated signals may be subjected to the same multipath effect. Accordingly, in the reception device illustrated in FIG. 8, data reception quality deteriorates.

In order to remedy this phenomenon, phase changer 209A is inserted in FIG. 21. Since this changes the phase along the time or frequency axis, in the reception device illustrated in FIG. 8, it is possible to reduce the probability of reception of an inferior reception signal. Moreover, since there is a high probability that the multipath effect to which the modulated signal transmitted from antenna unit #A 109_A is subjected will differ from the multipath effect to which the modulated signal transmitted from antenna unit #B 109_B is subjected, there is a high probability that diversity gain will result, and accordingly, that data reception quality in the reception device illustrated in FIG. 8 will improve. For these reasons, phase changer 209A is provided in FIG. 21 and a phase change is implemented.
Other symbols 403 and other symbols 503 include, in addition to control information symbols, for example, symbols for signal detection, symbols for performing frequency and time synchronization, and symbols for performing channel estimation (symbols for performing propagation path fluctuation estimation), which are used for demodulating and decoding control information symbols. Moreover, "the frames of FIG. 4 and FIG. 5" or "the frames of FIG. 13 and FIG. 14" include pilot symbols 401, 501, and by using these, it is possible to demodulate and decode the control information symbols with high precision.

Moreover, "the frames of FIG. 4 and FIG. 5" or "the frames of FIG. 13 and FIG. 14" transmit a plurality of streams (perform MIMO transmission) at the same time and using the same frequency (frequency band) via data symbols 402 and data symbols 502. In order to demodulate these data symbols, the symbols for signal detection, symbols for frequency and time synchronization, and symbols for channel estimation (symbols for propagation path variation estimation) included in other symbols 403 and other symbols 503 are used.

Here, the "symbols for signal detection, symbols for frequency and time synchronization, and symbols for channel estimation (symbols for propagation path variation estimation) included in other symbols 403 and other symbols 503" have a phase change applied to them by phase changer 209A, as described above. Under these circumstances, if the same processing were not performed on data symbols 402 and data symbols 502 (on data symbols 402 in the example above), then, when data symbols 402 and data symbols 502 are demodulated and decoded in the reception device, the demodulation and decoding would have to account for the processing of the phase change performed by phase changer 209A, and there is a probability that this processing would be complicated (this is because the "symbols for signal detection, symbols for frequency and time synchronization, and symbols for channel estimation (symbols for propagation path variation estimation) included in other symbols 403 and other symbols 503" have a phase change applied to them by phase changer 209A).

However, as illustrated in FIG. 21, when phase changer 209A applies a phase change to data symbols 402 and data symbols 502 (to data symbols 402 in the example above), there is the advantage that, in the reception device, data symbols 402 and data symbols 502 can (easily) be demodulated and decoded using the channel estimation signal (propagation path fluctuation signal) estimated by using the "symbols for signal detection, symbols for frequency and time synchronization, and symbols for channel estimation (symbols for propagation path variation estimation) included in other symbols 403 and other symbols 503". Additionally, as illustrated in FIG. 21, when phase changer 209A applies a phase change to data symbols 402 and data symbols 502 (to data symbols 402 in the example above), in multipath environments, it is possible to reduce the influence of sharp drops in electric field intensity along the frequency axis. Accordingly, it is possible to obtain the advantageous effect of an improvement in data reception quality of data symbols 402 and data symbols 502.

In this way, the point that "the symbols that are targets for implementation of a phase change by phase changers 205A, 205B" and "the symbols that are targets for implementation of a phase change by phase changer 209A" are different is a characteristic point.
As described above, by applying a phase change using phase changers 205A, 205B illustrated in FIG. 21, it is possible to achieve the advantageous effect of an improvement in data reception quality of data symbols 402 and data symbols 502 in the reception device in, for example, LOS environments, and by applying a phase change using phase changer 209A illustrated in FIG. 21, it is possible to achieve, for example, the advantageous effect of an improvement in data reception quality, in the reception device, of the control information symbols included in "the frames of FIG. 4 and FIG. 5" or "the frames of FIG. 13 and FIG. 14", as well as the advantageous effect that the operations of demodulating and decoding data symbols 402 and data symbols 502 become simple. Note that the advantageous effect of an improvement in data reception quality, in the reception device, of data symbols 402 and data symbols 502 in, for example, LOS environments, is achieved as a result of the phase change implemented by phase changers 205A and 205B illustrated in FIG. 21; furthermore, the reception quality of data symbols 402 and data symbols 502 is improved by applying a phase change to data symbols 402 and data symbols 502 using phase changer 209A illustrated in FIG. 21.

Note that Q in Equation (38) may be an integer of −2 or less. In such a case, the value of the phase change cycle is the absolute value of Q. This feature is applicable to Embodiment 1 as well.

Embodiment 6

In this embodiment, an implementation method will be described that is different from the configuration illustrated in FIG. 2 and described in Embodiment 1.

FIG. 1 illustrates one example of a configuration of a transmission device according to this embodiment, such as a base station, access point, or broadcast station. As FIG. 1 is described in detail in Embodiment 1, description will be omitted from this embodiment.

Signal processor 106 receives inputs of mapped signals 105_1 and 105_2, signal group 110, and control signal 100, performs signal processing based on control signal 100, and outputs signal-processed signals 106_A and 106_B. Here, signal-processed signal 106_A is expressed as u1(i), and signal-processed signal 106_B is expressed as u2(i) (i is a symbol number; for example, i is an integer that is greater than or equal to 0). Note that details regarding the signal processing will be described later with reference to FIG. 22.

FIG. 22 illustrates one example of a configuration of signal processor 106 illustrated in FIG. 1. Weighting synthesizer (precoder) 203 receives inputs of mapped signal 201A (mapped signal 105_1 in FIG. 1), mapped signal 201B (mapped signal 105_2 in FIG. 1), and control signal 200 (control signal 100 in FIG. 1), performs weighting synthesis (precoding) based on control signal 200, and outputs weighted signal 204A and weighted signal 204B. Here, mapped signal 201A is expressed as s1(t), mapped signal 201B is expressed as s2(t), weighted signal 204A is expressed as z1′(t), and weighted signal 204B is expressed as z2′(t). Note that one example of t is time (s1(t), s2(t), z1′(t), and z2′(t) are defined as complex numbers (accordingly, they may be real numbers)). Here, these are given as functions of time, but they may be functions of a "frequency (carrier number)", or functions of "time and frequency". They may also be functions of a "symbol number". Note that this also applies to Embodiment 1. Weighting synthesizer (precoder) 203 performs the calculations indicated in Equation (49).
Phase changer 205A receives inputs of weighted signal 204A and control signal 200, applies a phase change to weighted signal 204A based on control signal 200, and outputs phase-changed signal 206A. Note that phase-changed signal 206A is expressed as z1(i), and z1(i) is defined as a complex number (and may be a real number).

Next, specific operations performed by phase changer 205A will be described. In phase changer 205A, for example, a phase change of w(i) is applied to z1′(i). Accordingly, z1(i) can be expressed as z1(i) = w(i) × z1′(i) (i is a symbol number (i is an integer that is greater than or equal to 0)). For example, the phase change value is set as indicated in Equation (50) (M is an integer that is greater than or equal to 2, and M is the phase change cycle) (when M is set to an odd number greater than or equal to 3, data reception quality may improve). However, Equation (50) is merely a non-limiting example. Here, the phase change value is expressed as w(i) = e^(j×λ(i)).

Phase changer 205B receives inputs of weighted signal 204B and control signal 200, applies a phase change to weighted signal 204B based on control signal 200, and outputs phase-changed signal 206B. Note that phase-changed signal 206B is expressed as z2(i), and z2(i) is defined as a complex number (and may be a real number).

Next, specific operations performed by phase changer 205B will be described. In phase changer 205B, for example, a phase change of y(i) is applied to z2′(i). Accordingly, z2(i) can be expressed as z2(i) = y(i) × z2′(i) (i is a symbol number (i is an integer that is greater than or equal to 0)). For example, the phase change value is set as shown in Equation (2) (N is an integer that is greater than or equal to 2, N is the phase change cycle, and N ≠ M) (when N is set to an odd number greater than or equal to 3, data reception quality may improve). However, Equation (2) is merely a non-limiting example. Here, the phase change value is expressed as y(i) = e^(j×δ(i)).

Here, z1(i) and z2(i) can be expressed with Equation (51). Note that δ(i) and λ(i) are real numbers. z1(i) and z2(i) are transmitted from the transmission device at the same time and using the same frequency (same frequency band). In Equation (51), the phase change value is not limited to the values used in Equation (2) and Equation (50); for example, a method in which the phase is changed cyclically or regularly is conceivable. As described in Embodiment 1, conceivable examples of the (precoding) matrix used in Equation (49) and Equation (51) are illustrated in Equation (5) through Equation (36) (however, the precoding matrix is not limited to these examples (the same applies to Embodiment 1)).

Inserter 207A receives inputs of phase-changed signal 206A, pilot symbol signal (pa(t)) (t is time) (251A), preamble signal 252, control information symbol signal 253, and control signal 200, and, based on information on the frame configuration included in control signal 200, outputs baseband signal 208A based on the frame configuration. Similarly, inserter 207B receives inputs of phase-changed signal 206B, pilot symbol signal (pb(t)) (251B), preamble signal 252, control information symbol signal 253, and control signal 200, and, based on information on the frame configuration included in control signal 200, outputs baseband signal 208B based on the frame configuration.

Phase changer 209B receives inputs of baseband signal 208B and control signal 200, applies a phase change to baseband signal 208B based on control signal 200, and outputs phase-changed signal 210B.
Baseband signal 208B is a function of symbol number i (i is an integer that is greater than or equal to 0), and is expressed as x′(i). Then, phase-changed signal 210B (x(i)) can be expressed as x(i) = e^(j×ε(i)) × x′(i) (j is the imaginary unit). As described in Embodiment 1 and elsewhere, note that the operation performed by phase changer 209B may be CDD (cyclic delay diversity) (CSD (cyclic shift diversity)) as disclosed in "Standard conformable antenna diversity techniques for OFDM and its application to the DVB-T system," IEEE Globecom 2001, pp. 3100-3105, November 2001, and in IEEE P802.11n (D3.00) Draft STANDARD for Information Technology-Telecommunications and information exchange between systems-Local and metropolitan area networks-Specific requirements-Part 11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) specifications, 2007. One characteristic of phase changer 209B is that it applies a phase change to a symbol present along the frequency axis (i.e., applies a phase change to, for example, a data symbol, a pilot symbol, and/or a control information symbol).

FIG. 3 illustrates one example of a configuration of radio units 107_A and 107_B illustrated in FIG. 1. FIG. 3 is described in Embodiment 1. Accordingly, description will be omitted from this embodiment.

FIG. 4 illustrates a frame configuration of transmission signal 108_A illustrated in FIG. 1. FIG. 4 is described in Embodiment 1. Accordingly, description will be omitted from this embodiment.

FIG. 5 illustrates a frame configuration of transmission signal 108_B illustrated in FIG. 1. FIG. 5 is described in Embodiment 1. Accordingly, description will be omitted from this embodiment.

When a symbol is present in carrier A at time $B in FIG. 4 and a symbol is present in carrier A at time $B in FIG. 5, the symbol in carrier A at time $B in FIG. 4 and the symbol in carrier A at time $B in FIG. 5 are transmitted at the same time and same frequency. Note that the frame configuration is not limited to the configurations illustrated in FIG. 4 and FIG. 5; FIG. 4 and FIG. 5 are mere examples of frame configurations.

The other symbols in FIG. 4 and FIG. 5 are symbols corresponding to "preamble signal 252 and control information symbol signal 253 in FIG. 22". Accordingly, when an other symbol 503 in FIG. 5 at the same time and same frequency (same carrier) as an other symbol 403 in FIG. 4 transmits control information, it transmits the same data (the same control information). Note that this is under the assumption that the frame of FIG. 4 and the frame of FIG. 5 are received at the same time by the reception device; however, even when only the frame of FIG. 4 or only the frame of FIG. 5 has been received, the reception device can obtain the data transmitted by the transmission device.

FIG. 6 illustrates one example of components relating to control information generation for generating control information symbol signal 253 illustrated in FIG. 22. FIG. 6 is described in Embodiment 1. Accordingly, description will be omitted from this embodiment.

FIG. 7 illustrates one example of a configuration of antenna unit #A (109_A) and antenna unit #B (109_B) illustrated in FIG. 1 (in this example, antenna unit #A (109_A) and antenna unit #B (109_B) include a plurality of antennas). FIG. 7 is described in Embodiment 1. Accordingly, description will be omitted from this embodiment.
FIG. 8 illustrates one example of a configuration of a reception device that receives a modulated signal upon the transmission device illustrated in FIG. 1 transmitting, for example, a transmission signal having the frame configuration illustrated in FIG. 4 or FIG. 5. FIG. 8 is described in Embodiment 1. Accordingly, description will be omitted from this embodiment.

FIG. 10 illustrates one example of a configuration of antenna unit #X (801X) and antenna unit #Y (801Y) illustrated in FIG. 8 (antenna unit #X (801X) and antenna unit #Y (801Y) are exemplified as including a plurality of antennas). FIG. 10 is described in Embodiment 1. Accordingly, description will be omitted from this embodiment.

Next, consider the case in which phase changers 205A, 205B and phase changer 209B are inserted in signal processor 106 of the transmission device illustrated in FIG. 1, as illustrated in FIG. 22. The characteristics and advantageous effects of this configuration will be described.

As described with reference to FIG. 4 and FIG. 5, weighting synthesizer 203 applies precoding (weighting synthesis) to mapped signal s1(i) (201A) (i is a symbol number; i is an integer greater than or equal to 0) obtained via mapping using the first sequence and mapped signal s2(i) (201B) obtained via mapping using the second sequence, and phase changers 205A, 205B apply a phase change to the obtained weighted signals 204A and 204B, respectively. Phase-changed signal 206A and phase-changed signal 206B are then transmitted at the same frequency and at the same time. Accordingly, in FIG. 4 and FIG. 5, a phase change is applied to data symbols 402 in FIG. 4 and data symbols 502 in FIG. 5.

For example, FIG. 11 illustrates an extraction of carrier 1 through carrier 5 and time $4 through time $6 from the frame illustrated in FIG. 4. Note that in FIG. 11, similar to FIG. 4, 401 is a pilot symbol, 402 is a data symbol, and 403 is an other symbol. As described above, among the symbols illustrated in FIG. 11, phase changer 205A applies a phase change to the data symbols located at (carrier 1, time $5), (carrier 2, time $5), (carrier 3, time $5), (carrier 4, time $5), (carrier 5, time $5), (carrier 1, time $6), (carrier 2, time $6), (carrier 4, time $6), and (carrier 5, time $6). Accordingly, the phase change values for the data symbols illustrated in FIG. 11 can be expressed as "e^(j×λ15(i))" for (carrier 1, time $5), "e^(j×λ25(i))" for (carrier 2, time $5), "e^(j×λ35(i))" for (carrier 3, time $5), "e^(j×λ45(i))" for (carrier 4, time $5), "e^(j×λ55(i))" for (carrier 5, time $5), "e^(j×λ16(i))" for (carrier 1, time $6), "e^(j×λ26(i))" for (carrier 2, time $6), "e^(j×λ46(i))" for (carrier 4, time $6), and "e^(j×λ56(i))" for (carrier 5, time $6).

Among the symbols illustrated in FIG. 11, the other symbols located at (carrier 1, time $4), (carrier 2, time $4), (carrier 3, time $4), (carrier 4, time $4), and (carrier 5, time $4), and the pilot symbol located at (carrier 3, time $6), are not subject to phase change by phase changer 205A. This point is a characteristic of phase changer 205A.

Note that, as illustrated in FIG. 5, data symbols are arranged at "the same carriers and the same times" as the symbols subject to phase change in FIG. 11, namely the data symbols located at (carrier 1, time $5), (carrier 2, time $5), (carrier 3, time $5), (carrier 4, time $5), (carrier 5, time $5), (carrier 1, time $6), (carrier 2, time $6), (carrier 4, time $6), and (carrier 5, time $6).
In other words, in FIG. 4, the symbols located at (carrier 1, time $5) through (carrier 5, time $5) and at (carrier 1, time $6), (carrier 2, time $6), (carrier 4, time $6), and (carrier 5, time $6) are data symbols (in other words, data symbols that perform MIMO transmission (transmit a plurality of streams) are subject to phase change by phase changer 205A). One example of the phase change that phase changer 205A applies to the data symbols is the method given in Equation (50), in which a phase change is applied to the data symbols regularly (such as at each cycle N) (however, the phase change method implemented on the data symbols is not limited to this example).

Similarly, FIG. 11 can be read as an extraction of carrier 1 through carrier 5 and time $4 through time $6 from the frame illustrated in FIG. 5. Note that in this reading, similar to FIG. 5, 501 is a pilot symbol, 502 is a data symbol, and 503 is an other symbol. As described above, among these symbols, phase changer 205B applies a phase change to the data symbols located at (carrier 1, time $5) through (carrier 5, time $5) and at (carrier 1, time $6), (carrier 2, time $6), (carrier 4, time $6), and (carrier 5, time $6). Accordingly, the phase change values for these data symbols can be expressed as e^(j×δ15(i)) for (carrier 1, time $5), e^(j×δ25(i)) for (carrier 2, time $5), e^(j×δ35(i)) for (carrier 3, time $5), e^(j×δ45(i)) for (carrier 4, time $5), e^(j×δ55(i)) for (carrier 5, time $5), e^(j×δ16(i)) for (carrier 1, time $6), e^(j×δ26(i)) for (carrier 2, time $6), e^(j×δ46(i)) for (carrier 4, time $6), and e^(j×δ56(i)) for (carrier 5, time $6).

Among these symbols, the other symbols located at (carrier 1, time $4) through (carrier 5, time $4) and the pilot symbol located at (carrier 3, time $6) are not subject to phase change by phase changer 205B. This point is a characteristic of phase changer 205B. Note that, as illustrated in FIG. 5, data symbols are arranged at "the same carriers and the same times" as the symbols subject to phase change, namely at (carrier 1, time $5) through (carrier 5, time $5) and at (carrier 1, time $6), (carrier 2, time $6), (carrier 4, time $6), and (carrier 5, time $6). In other words, in FIG. 5, the symbols located at these positions are data symbols (in other words, data symbols that perform MIMO transmission (transmit a plurality of streams) are subject to phase change by phase changer 205B). One example of the phase change that phase changer 205B applies to the data symbols is the method given in Equation (2), in which a phase change is applied to the data symbols regularly (such as at each cycle N) (however, the phase change method implemented on the data symbols is not limited to this example).

With this, when the environment is one in which the direct waves are dominant, such as in an LOS environment, it is possible to achieve improved data reception quality in the reception device with respect to the data symbols that perform MIMO transmission (transmit a plurality of streams).
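The following Python/NumPy sketch illustrates this combination of weighting synthesis (precoding) and a regular, cycle-N phase change applied only to data symbols while pilot and other symbols are skipped; the precoding matrix, the cycle length, and the symbol-type sequence are illustrative assumptions, not values fixed by this description (a separate cycle M for pilots/preamble, mentioned later, could be added the same way).

    import numpy as np

    N = 5  # illustrative phase change cycle for the data symbols
    W = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # illustrative precoding matrix

    def process_frame(s1, s2, kind):
        """Weighting-synthesize two mapped streams, then phase-change data symbols.

        s1, s2: mapped symbols per symbol number i.
        kind:   per-symbol type: 'data', 'pilot', or 'other'.
        Returns the two phase-changed transmit streams (206A, 206B analogues).
        """
        z = W @ np.vstack([s1, s2])           # weighted synthesis -> 204A, 204B
        out = z.copy()
        d = 0                                  # counts data symbols only
        for i, k in enumerate(kind):
            if k == "data":                    # pilots/other symbols keep their phase
                out[0, i] *= np.exp(1j * 2 * np.pi * d / N)      # phase changer 205A
                out[1, i] *= np.exp(1j * 2 * np.pi * 2 * d / N)  # phase changer 205B
                d += 1
        return out[0], out[1]

    s1 = np.ones(6, dtype=complex)
    s2 = 1j * np.ones(6, dtype=complex)
    kind = ["other", "data", "data", "pilot", "data", "data"]
    x1, x2 = process_frame(s1, s2, kind)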
Next, the advantageous effects of this will be described.

For example, the modulation scheme used by mapper 104 in FIG. 1 is quadrature phase shift keying (QPSK) (mapped signal 201A in FIG. 22 is a QPSK signal, and mapped signal 201B is a QPSK signal; in other words, two QPSK streams are transmitted). Accordingly, for example, using channel estimated signals 806_1 and 806_2, 16 candidate signal points are obtained by signal processor 811 illustrated in FIG. 8 (2-bit transmission is possible with QPSK; since there are two streams, 4-bit transmission is achieved, and thus there are 2^4 = 16 candidate signal points) (note that 16 other candidate signal points are obtained using channel estimated signals 808_1 and 808_2 as well, but since the description thereof is the same as above, the following description focuses on the 16 candidate signal points obtained by using channel estimated signals 806_1 and 806_2).

FIG. 12 illustrates an example of the state resulting from such a case. In (A) and (B) in FIG. 12, in-phase I is represented on the horizontal axis and quadrature Q is represented on the vertical axis, and 16 candidate signal points are present in the illustrated in-phase I-quadrature Q planes (among the 16 candidate signal points, one is the signal point that is transmitted by the transmission device; accordingly, these are referred to as "16 candidate signal points").

When the environment is one in which the direct waves are dominant, such as in an LOS environment, consider a first case in which phase changers 205A and 205B are omitted from the configuration illustrated in FIG. 22 (in other words, a case in which a phase change is not applied by phase changers 205A, 205B in FIG. 22). In the first case, since a phase change is not applied, there is a possibility that the state illustrated in (A) in FIG. 12 will be realized. When the state falls into the state illustrated in (A) in FIG. 12, as illustrated by "signal points 1201 and 1202", "signal points 1203, 1204, 1205, and 1206", and "signal points 1207 and 1208", the signal points become dense (the distances between some signal points shorten). Accordingly, in the reception device illustrated in FIG. 8, data reception quality may deteriorate.

In order to remedy this phenomenon, phase changers 205A, 205B are inserted in FIG. 22. When phase changers 205A, 205B are inserted, depending on symbol number i, there is a mix of symbol numbers whose signal points are dense (the distances between some signal points shorten), such as in (A) in FIG. 12, and symbol numbers whose distances between signal points are long, such as in (B) in FIG. 12. With respect to this state, since an error correction code is introduced, high error correction performance is achieved, and in the reception device illustrated in FIG. 8, high data reception quality can be achieved.

Note that in FIG. 22, a phase change is not applied by phase changers 205A, 205B to symbols used for demodulating (wave detection of) data symbols and for channel estimation, such as pilot symbols and a preamble. With this, it can be realized that, among data symbols, depending on symbol number i, there is a mix of symbol numbers whose signal points are dense (the distances between some signal points shorten), such as in (A) in FIG. 12, and symbol numbers whose distances between signal points are long, such as in (B) in FIG. 12.
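To make the 16-candidate-signal-point discussion above concrete, the following is a minimal sketch of how a receiver could enumerate the candidate points from channel estimates (corresponding to channel estimated signals 806_1 and 806_2) for two QPSK streams; the channel values here are illustrative assumptions.

    import numpy as np
    from itertools import product

    qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)

    # Illustrative channel estimates h1(i), h2(i) seen at one receive antenna.
    h1, h2 = 0.9 + 0.2j, 0.4 - 0.7j

    # 2 bits per QPSK stream x 2 streams = 4 bits, so 2**4 = 16 candidate points.
    candidates = np.array([h1 * s1 + h2 * s2 for s1, s2 in product(qpsk, qpsk)])
    assert len(candidates) == 16

    # Minimum distance between candidate points: when direct waves dominate and
    # no phase change is applied, this distance can stay small for every symbol
    # number, which is the dense state of (A) in FIG. 12.
    d_min = min(abs(a - b) for k, a in enumerate(candidates)
                for b in candidates[k + 1:])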
However, even if a phase change is applied by phase changers 205A, 205B in FIG. 22 to symbols used for demodulating (wave detection of) data symbols and for channel estimation, such as pilot symbols and a preamble, it is still possible to realize the state in which, among data symbols, depending on symbol number i, there is a mix of symbol numbers whose signal points are dense (the distances between some signal points shorten), such as in (A) in FIG. 12, and symbol numbers whose distances between signal points are long, such as in (B) in FIG. 12. In such a case, a phase change must be applied to the pilot symbols and/or the preamble under some condition. For example, one conceivable method is to implement a rule for applying a phase change to a pilot symbol and/or a preamble that is separate from the rule for applying a phase change to a data symbol. Another example is a method of regularly applying a phase change to a data symbol in a cycle N, and regularly applying a phase change to a pilot symbol and/or a preamble in a cycle M (N and M are integers that are greater than or equal to 2).

As described above, phase changer 209A receives inputs of baseband signal 208A and control signal 200, applies a phase change to baseband signal 208A based on control signal 200, and outputs phase-changed signal 210A. Baseband signal 208A is a function of symbol number i (i is an integer that is greater than or equal to 0), and is expressed as x′(i). Then, phase-changed signal 210A (x(i)) can be expressed as x(i) = e^(j×ε(i)) × x′(i) (j is an imaginary number unit). Note that the operation performed by phase changer 209A may be CDD (cyclic delay diversity) (CSD (cyclic shift diversity)) as disclosed in the references cited above. One characteristic of phase changer 209A is that it applies a phase change to a symbol present along the frequency axis, that is, to, for example, a data symbol, a pilot symbol, and/or a control information symbol (accordingly, in such a case, the symbols subject to symbol number i include data symbols, pilot symbols, control information symbols, and preambles (other symbols)). In the case of FIG. 22, since phase changer 209A applies a phase change to baseband signal 208A, a phase change is applied to each symbol in FIG. 4. Accordingly, in the frame illustrated in FIG. 4, phase changer 209A illustrated in FIG. 22 applies a phase change to all symbols (in this case, all other symbols 403) for all carriers 1 to 36 at time $1.
Similarly, phase changer 209A illustrated in FIG. 22 applies a phase change to all symbols for all carriers 1 to 36 at each of times $2, $3, and $4 (in these cases, all other symbols 403), and at each of times $5, $6, $7, $8, $9, $10, $11, and so on (in these cases, pilot symbols 401 or data symbols 402).

As described above, phase changer 209B receives inputs of baseband signal 208B and control signal 200, applies a phase change to baseband signal 208B based on control signal 200, and outputs phase-changed signal 210B. Baseband signal 208B is a function of symbol number i (i is an integer that is greater than or equal to 0), and is expressed as y′(i). Then, phase-changed signal 210B (y(i)) can be expressed as y(i) = e^(j×η(i)) × y′(i) (j is an imaginary number unit). Note that the operation performed by phase changer 209B may be CDD (cyclic delay diversity) (CSD (cyclic shift diversity)) as disclosed in the references cited above. One characteristic of phase changer 209B is that it applies a phase change to a symbol present along the frequency axis, that is, to, for example, a data symbol, a pilot symbol, and/or a control information symbol (accordingly, in such a case, the symbols subject to symbol number i include data symbols, pilot symbols, control information symbols, and preambles (other symbols)). In the case of FIG. 22, since phase changer 209B applies a phase change to baseband signal 208B, a phase change is applied to each symbol in FIG. 5. Accordingly, in the frame illustrated in FIG. 5, phase changer 209B illustrated in FIG. 22 applies a phase change to all symbols (in this case, all other symbols 503) for all carriers 1 to 36 at time $1.
Similarly, phase changer 209B illustrated in FIG. 22 applies a phase change to all symbols for all carriers 1 to 36 at each of times $2, $3, and $4 (in these cases, all other symbols 503), and at each of times $5, $6, $7, $8, $9, $10, $11, and so on (in these cases, pilot symbols 501 or data symbols 502).

FIG. 13 illustrates a frame configuration of transmission signal 108_A illustrated in FIG. 1 that is different from the frame configuration illustrated in FIG. 4, and FIG. 14 illustrates a frame configuration of transmission signal 108_B illustrated in FIG. 1 that is different from the frame configuration illustrated in FIG. 5. FIG. 13 and FIG. 14 are described in Embodiment 1, so their description is omitted from this embodiment.

When a symbol is present in carrier A at time $B in FIG. 13 and a symbol is present in carrier A at time $B in FIG. 14, those two symbols are transmitted at the same time and same frequency. Note that the frame configurations illustrated in FIG. 13 and FIG. 14 are merely examples. The other symbols in FIG. 13 and FIG. 14 are symbols corresponding to "preamble signal 252 and control information symbol signal 253 in FIG. 22". Accordingly, when an other symbol 403 in FIG. 13 at the same time and same frequency (same carrier) as an other symbol 503 in FIG. 14 transmits control information, it transmits the same data (the same control information). Note that this is under the assumption that the frame of FIG. 13 and the frame of FIG. 14 are received at the same time by the reception device, but even when only the frame of FIG. 13 or only the frame of FIG. 14 has been received, the reception device can obtain the data transmitted by the transmission device.

Phase changer 209A receives inputs of baseband signal 208A and control signal 200, applies a phase change to baseband signal 208A based on control signal 200, and outputs phase-changed signal 210A. Baseband signal 208A is a function of symbol number i (i is an integer that is greater than or equal to 0), and is expressed as x′(i). Then, phase-changed signal 210A (x(i)) can be expressed as x(i) = e^(j×ε(i)) × x′(i) (j is an imaginary number unit).
Note that the operation performed by phase changer 209A may be CDD (cyclic delay diversity) (CSD (cyclic shift diversity)) as disclosed in the references cited above. One characteristic of phase changer 209A is that it applies a phase change to a symbol present along the frequency axis (i.e., applies a phase change to, for example, a data symbol, a pilot symbol, and/or a control information symbol). Here, a null symbol may also be considered a target for application of a phase change (accordingly, in such a case, the symbols subject to symbol number i include data symbols, pilot symbols, control information symbols, preambles (other symbols), and null symbols). However, even if a phase change is applied to a null symbol, the signals before and after the phase change are the same (the in-phase component I is zero (0) and the quadrature component Q is zero (0)). Accordingly, it is also possible to construe a null symbol as not being a target for a phase change. In the case of FIG. 22, since phase changer 209A applies a phase change to baseband signal 208A, a phase change is applied to each symbol in FIG. 13. Accordingly, in the frame illustrated in FIG. 13, phase changer 209A illustrated in FIG. 22 applies a phase change to all symbols (in this case, all other symbols 403) for all carriers 1 to 36 at time $1. However, the handling of the phase change with respect to null symbol 1301 is as previously described.
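The null-symbol remark can be checked in one line: multiplying a zero-valued symbol by any unit-magnitude phasor leaves it unchanged, so applying the frame-wide phase change to null symbols is harmless. A tiny sketch with an illustrative phase value:

    import numpy as np

    theta = 0.7                      # any phase change value, in radians
    null_symbol = 0 + 0j             # null symbol: I = 0 and Q = 0
    assert np.exp(1j * theta) * null_symbol == null_symbol
    # Hence a null symbol may equally be treated as inside or outside the set of
    # phase change targets; the transmitted signal is identical either way.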
Similarly, phase changer 209A illustrated in FIG. 22 applies a phase change to all symbols for all carriers 1 to 36 at each of times $2, $3, and $4 (in these cases, all other symbols 403), and at each of times $5, $6, $7, $8, $9, $10, $11, and so on (in these cases, pilot symbols 401 or data symbols 402); in every case, the handling of the phase change with respect to null symbol 1301 is as previously described.

The phase change value of phase changer 209A is expressed as Ω(i). Baseband signal 208A is x′(i) and phase-changed signal 210A is x(i). For example, the phase change value is set as shown in Equation (38) (Q is an integer that is greater than or equal to 2, and represents the number of phase change cycles) (j is an imaginary number unit). However, Equation (38) is merely a non-limiting example. For example, Ω(i) may be set so as to implement a phase change that yields a cycle Q. Moreover, for example, in FIG. 4 and FIG. 13, the same phase change value is applied to the same carrier regardless of time; that is, the phase change value may be set on a per-carrier basis. For example, the following may be implemented. Regardless of time, the phase change value may be as in Equation (39) for carrier 1 in FIG. 4 and FIG. 13. Regardless of time, the phase change value may be as in Equation (40) for carrier 2 in FIG. 4 and FIG. 13.
Regardless of time, the phase change value may be as in Equation (41) for carrier 3 in FIG. 4 and FIG. 13, as in Equation (42) for carrier 4 in FIG. 4 and FIG. 13, and so on. This concludes the operational example of phase changer 209A illustrated in FIG. 22.

Phase changer 209B receives inputs of baseband signal 208B and control signal 200, applies a phase change to baseband signal 208B based on control signal 200, and outputs phase-changed signal 210B. Baseband signal 208B is a function of symbol number i (i is an integer that is greater than or equal to 0), and is expressed as y′(i). Then, phase-changed signal 210B (y(i)) can be expressed as y(i) = e^(j×η(i)) × y′(i) (j is an imaginary number unit). Note that the operation performed by phase changer 209B may be CDD (cyclic delay diversity) (CSD (cyclic shift diversity)) as disclosed in the references cited above. One characteristic of phase changer 209B is that it applies a phase change to a symbol present along the frequency axis (i.e., applies a phase change to, for example, a data symbol, a pilot symbol, and/or a control information symbol). Here, a null symbol may also be considered a target for application of a phase change (accordingly, in such a case, the symbols subject to symbol number i include data symbols, pilot symbols, control information symbols, preambles (other symbols), and null symbols). However, even if a phase change is applied to a null symbol, the signals before and after the phase change are the same (the in-phase component I is zero (0) and the quadrature component Q is zero (0)). Accordingly, it is also possible to construe a null symbol as not being a target for a phase change. In the case of FIG. 22, since phase changer 209B applies a phase change to baseband signal 208B, a phase change is applied to each symbol in FIG. 14. Accordingly, in the frame illustrated in FIG. 14, phase changer 209B illustrated in FIG. 22 applies a phase change to all symbols (in this case, all other symbols 503) for all carriers 1 to 36 at time $1. However, the handling of the phase change with respect to null symbol 1301 is as previously described.
Similarly, phase changer 209B illustrated in FIG. 22 applies a phase change to all symbols for all carriers 1 to 36 at each of times $2, $3, and $4 (in these cases, all other symbols 503), and at each of times $5, $6, $7, $8, $9, $10, $11, and so on (in these cases, pilot symbols 501 or data symbols 502); in every case, the handling of the phase change with respect to null symbol 1301 is as previously described.

The phase change value of phase changer 209B is expressed as Δ(i). Baseband signal 208B is y′(i) and phase-changed signal 210B is y(i). Accordingly, y(i) = Δ(i) × y′(i) holds true. For example, the phase change value is set as shown in Equation (49) (R is an integer that is greater than or equal to 2, and represents the number of phase change cycles; note that Q in Equation (38) and R in Equation (49) may be set to different values). For example, Δ(i) may be set so as to implement a phase change that yields a cycle R. Moreover, for example, in FIG. 5 and FIG. 14, the same phase change value is applied to the same carrier regardless of time; that is, the phase change value may be set on a per-carrier basis. For example, the following may be implemented. Regardless of time, the phase change value may be as in Equation (39) for carrier 1 in FIG. 5 and FIG. 14.
Regardless of time, the phase change value may be as in Equation (40) for carrier 2 in FIG. 5 and FIG. 14, as in Equation (41) for carrier 3 in FIG. 5 and FIG. 14, as in Equation (42) for carrier 4 in FIG. 5 and FIG. 14, and so on. This concludes the operational example of phase changer 209B illustrated in FIG. 22.

Next, the advantageous effects obtained by phase changers 209A, 209B illustrated in FIG. 22 will be described. The other symbols 403, 503 in "the frames of FIG. 4 and FIG. 5" or "the frames of FIG. 13 and FIG. 14" include a control information symbol. As previously described, when an other symbol 503 in FIG. 5 at the same time and same frequency (in the same carrier) as an other symbol 403 transmits control information, it transmits the same data (same control information). However, consider the following cases.

Case 2: transmitting a control information symbol using either antenna unit #A (109_A) or antenna unit #B (109_B) illustrated in FIG. 1.

When transmission according to "case 2" is performed, since only one antenna is used to transmit the control information symbol, spatial diversity gain is less than when the control information symbol is transmitted using both antenna unit #A (109_A) and antenna unit #B (109_B). Accordingly, in "case 2", data reception quality deteriorates even when received by the reception device illustrated in FIG. 8. Accordingly, from the perspective of improving data reception quality, transmitting a control information symbol using both antenna unit #A (109_A) and antenna unit #B (109_B) is more beneficial.

Case 3: transmitting a control information symbol using both antenna unit #A (109_A) and antenna unit #B (109_B) illustrated in FIG. 1, but without phase change by phase changers 209A and 209B illustrated in FIG. 22.

When transmission according to "case 3" is performed, since the modulated signal transmitted from antenna unit #A (109_A) and the modulated signal transmitted from antenna unit #B (109_B) are the same (or exhibit a specific phase shift), depending on the radio wave propagation environment, the reception device illustrated in FIG. 8 may receive an inferior reception signal, and both modulated signals may be subjected to the same multipath effect. Accordingly, in the reception device illustrated in FIG. 8, data reception quality deteriorates.

In order to remedy this phenomenon, phase changers 209A and 209B are inserted in FIG. 22. Since this changes the phase along the time or frequency axis, in the reception device illustrated in FIG. 8, it is possible to reduce the probability of reception of an inferior reception signal. Moreover, since there is a high probability that the multipath effect experienced by the modulated signal transmitted from antenna unit #A (109_A) will differ from the multipath effect experienced by the modulated signal transmitted from antenna unit #B (109_B), there is a high probability that diversity gain will result and, accordingly, that data reception quality in the reception device illustrated in FIG. 8 will improve. For these reasons, phase changers 209A, 209B are provided in FIG. 22 and a phase change is implemented.
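A minimal sketch of this frame-wide phase change along the frequency axis follows. It uses a per-carrier linear phase slope, which is one way to realize a CDD/CSD-style operation (a cyclic time delay appears as a linear phase across carriers); the slope values are illustrative assumptions, not Equations (38) through (42) themselves.

    import numpy as np

    def per_carrier_phase(frame, slope):
        """Apply a phase change to every symbol of an OFDM frame.

        frame: complex array of shape (num_carriers, num_times); includes data,
               pilot, other, and null symbols (null symbols are simply zeros).
        slope: phase increment per carrier in radians; a linear slope across
               carriers corresponds to a cyclic delay in the time domain.
        The phase value depends only on the carrier index, not on time.
        """
        carriers = np.arange(frame.shape[0])
        return frame * np.exp(1j * slope * carriers)[:, None]

    frame_a = np.ones((36, 11), dtype=complex)         # 36 carriers, times $1..$11
    frame_b = np.ones((36, 11), dtype=complex)
    tx_a = per_carrier_phase(frame_a, 0.0)             # phase changer 209A (here: none)
    tx_b = per_carrier_phase(frame_b, 2 * np.pi / 36)  # phase changer 209B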
Other symbols 403 and other symbols 503 include, in addition to control information symbols, for example, symbols for signal detection, symbols for performing frequency and time synchronization, and symbols for performing channel estimation (symbols for performing propagation path fluctuation estimation), which are used for demodulating and decoding the control information symbols. Moreover, "the frames of FIG. 4 and FIG. 5" or "the frames of FIG. 13 and FIG. 14" include pilot symbols 401, 501, and by using these, it is possible to perform demodulation and decoding with high precision.

Moreover, "the frames of FIG. 4 and FIG. 5" or "the frames of FIG. 13 and FIG. 14" transmit a plurality of streams (perform MIMO transmission) at the same time and using the same frequency (frequency band) via data symbols 402 and data symbols 502. In order to demodulate these data symbols, the symbols for signal detection, symbols for frequency and time synchronization, and symbols for channel estimation (symbols for propagation path fluctuation estimation) included in other symbols 403 and other symbols 503 are used.

Here, as described above, the "symbols for signal detection, symbols for frequency and time synchronization, and symbols for channel estimation (symbols for propagation path fluctuation estimation) included in other symbols 403 and other symbols 503" have a phase change applied to them by phase changers 209A, 209B. Under these circumstances, if the same processing were not applied to data symbols 402 and data symbols 502, the reception device, when demodulating and decoding data symbols 402 and data symbols 502, would need to demodulate and decode while separately compensating for the phase change applied by phase changers 209A, 209B, and this processing could be complicated (because the symbols used for channel estimation have a phase change applied to them by phase changers 209A and 209B while the data symbols would not).

However, as illustrated in FIG. 22, when phase changers 209A, 209B apply the phase change to data symbols 402 and data symbols 502 as well, the reception device has the advantage that data symbols 402 and data symbols 502 can (easily) be demodulated and decoded using the channel estimation signal (propagation path fluctuation signal) estimated by using the "symbols for signal detection, symbols for frequency and time synchronization, and symbols for channel estimation (symbols for estimating propagation path fluctuation) included in other symbols 403 and other symbols 503".

Additionally, as illustrated in FIG. 22, when phase changers 209A, 209B apply a phase change to data symbols 402 and data symbols 502, in multipath environments it is possible to reduce the influence of sharp drops in electric field intensity along the frequency axis. Accordingly, it is possible to obtain the advantageous effect of an improvement in the data reception quality of data symbols 402 and data symbols 502.

In this way, the point that "the symbols that are targets for implementation of a phase change by phase changers 205A, 205B" and "the symbols that are targets for implementation of a phase change by phase changers 209A, 209B" are different is a characteristic point.
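The advantage described here can be sketched at the receiver side: when the same per-carrier phase change is applied to the channel estimation symbols and to the data symbols, the phase change is absorbed into the channel estimate, so ordinary equalization removes it with no extra processing. A minimal noise-free, single-antenna sketch under these assumptions:

    import numpy as np

    rng = np.random.default_rng(1)
    num_carriers = 36
    h = rng.normal(size=num_carriers) + 1j * rng.normal(size=num_carriers)   # true channel
    phase = np.exp(1j * 2 * np.pi * np.arange(num_carriers) / num_carriers)  # 209-style change

    pilot = np.ones(num_carriers, dtype=complex)   # known estimation symbol
    data = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], num_carriers) / np.sqrt(2)

    rx_pilot = h * phase * pilot                   # phase change applied to the pilot, too
    rx_data = h * phase * data                     # and to the data symbol

    h_est = rx_pilot / pilot                       # estimate absorbs h * phase together
    data_hat = rx_data / h_est                     # equalization cancels the phase change
    assert np.allclose(data_hat, data)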
As described above, by applying a phase change using phase changers 205A, 205B illustrated in FIG. 22, it is possible to achieve the advantageous effect of an improvement in the data reception quality of data symbols 402 and data symbols 502 in the reception device in, for example, LOS environments. By applying a phase change using phase changers 209A, 209B illustrated in FIG. 22, it is possible, for example, to achieve the advantageous effect of an improvement in the data reception quality in the reception device of the control information symbols included in "the frames of FIG. 4 and FIG. 5" or "the frames of FIG. 13 and FIG. 14", and the advantageous effect that the operations of demodulating and decoding data symbols 402 and data symbols 502 become simple. Note that the advantageous effect of an improvement in the data reception quality of data symbols 402 and data symbols 502 in, for example, LOS environments is achieved as a result of the phase change implemented by phase changers 205A, 205B illustrated in FIG. 22, and furthermore, the reception quality of data symbols 402 and data symbols 502 is improved by also applying a phase change to data symbols 402 and data symbols 502 using phase changers 209A and 209B illustrated in FIG. 22.

Note that Q in Equation (38) may be an integer of −2 or less. In such a case, the value of the phase change cycle is the absolute value of Q. This feature is applicable to Embodiment 1 as well. Note that R in Equation (49) may be an integer of −2 or less. In such a case, the value of the phase change cycle is the absolute value of R. Moreover, taking into consideration the descriptions provided in Supplemental Information 1, the cyclic delay amount set in phase changer 209A and the cyclic delay amount set in phase changer 209B may be different values.

Embodiment 7

In this embodiment, an example of a communications system that employs the transmission method and reception method described in Embodiments 1 to 6 will be described.

FIG. 23 illustrates one example of a configuration of a base station (or access point or the like) according to this embodiment. Transmission device 2303 receives inputs of data 2301, signal group 2302, and control signal 2309, generates a modulated signal corresponding to data 2301 and signal group 2302, and transmits the modulated signal from an antenna. One example of a configuration of transmission device 2303 is as shown in FIG. 1, where data 2301 corresponds to data 101 in FIG. 1, signal group 2302 corresponds to signal group 110 in FIG. 1, and control signal 2309 corresponds to control signal 100 in FIG. 1.

Reception device 2304 receives a modulated signal transmitted by the communication partner, such as a terminal, performs signal processing, demodulation, and decoding on the modulated signal, and outputs control information signal 2305 from the communication partner and reception data 2306. One example of a configuration of reception device 2304 is as shown in FIG. 8, where reception data 2306 corresponds to reception data 812 in FIG. 8, and control information signal 2305 from the communication partner corresponds to control signal 810 in FIG. 8.

Control signal generator 2308 receives inputs of control information signal 2305 from the communication partner and settings signal 2307, and generates and outputs control signal 2309 based on these inputs.

FIG. 24 illustrates one example of a configuration of a terminal, which is the communication partner of the base station illustrated in FIG. 23.
Transmission device 2403 receives inputs of data 2401, signal group 2402, and control signal 2409, generates a modulated signal corresponding to data 2401 and signal group 2402, and transmits the modulated signal from an antenna. One example of a configuration of transmission device 2403 is as shown in FIG. 1, where data 2401 corresponds to data 101 in FIG. 1, signal group 2402 corresponds to signal group 110 in FIG. 1, and control signal 2409 corresponds to control signal 100 in FIG. 1.

Reception device 2404 receives a modulated signal transmitted by the communication partner, such as a base station, performs signal processing, demodulation, and decoding on the modulated signal, and outputs control information signal 2405 from the communication partner and reception data 2406. One example of a configuration of reception device 2404 is as shown in FIG. 8, where reception data 2406 corresponds to reception data 812 in FIG. 8, and control information signal 2405 from the communication partner corresponds to control signal 810 in FIG. 8.

Control signal generator 2408 receives inputs of control information signal 2405 from the communication partner and settings signal 2407, and generates and outputs control signal 2409 based on this information.

FIG. 25 illustrates one example of a frame configuration of a modulated signal transmitted by the terminal illustrated in FIG. 24. Time is represented on the horizontal axis. 2501 is a preamble, and is a symbol, such as a PSK symbol, for the communication partner (for example, a base station) to perform signal detection, frequency synchronization, time synchronization, frequency offset estimation, and/or channel estimation. Preamble 2501 may include a training symbol for directionality control. Note that, here, the terminology "preamble" is used, but different terminology may be used.

2502 is a control information symbol, and 2503 is a data symbol including data to be transmitted to the communication partner. Control information symbol 2502 includes, for example, information on the error correction encoding method used to generate data symbol 2503 (such as information on the code length (block length) and/or coding rate), modulation scheme information, and control information for notifying the communication partner. Note that FIG. 25 is merely one non-limiting example of a frame configuration; other symbols, such as a pilot symbol and/or a reference symbol, may be included among the symbols illustrated in FIG. 25, and frequency may be represented on the vertical axis, with symbols present along the frequency axis (carrier direction).

As examples of a frame configuration transmitted by the base station illustrated in FIG. 23 have been described with reference to FIG. 4, FIG. 5, FIG. 13, and FIG. 14, further description is omitted here. Note that other symbols 403, 503 may include a training symbol for performing directionality control. Accordingly, this embodiment covers a case in which the base station transmits a plurality of modulated signals using a plurality of antennas.

Next, operations performed by a base station in a communications system such as described above will be described in detail. Transmission device 2303 in the base station illustrated in FIG. 23 has the configuration illustrated in FIG. 1. Signal processor 106 illustrated in FIG. 1 has the configuration illustrated in any one of FIG. 2, FIG. 18, FIG. 19, FIG. 20, FIG. 21, FIG. 22, FIG. 28, FIG. 29, FIG. 30, FIG. 31, FIG. 32, and FIG. 33. Note that FIG. 28, FIG. 29, FIG. 30, FIG. 31, FIG. 32, and FIG. 33 will be described later.
Here, the operations performed by phase changers 205A, 205B may be switched depending on the communications environment or the settings. Control information relating to the operations performed by phase changers 205A, 205B is transmitted by the base station as a part of the control information transmitted via control information symbols, namely, other symbols 403, 503 in the frame configurations illustrated in FIG. 4, FIG. 5, FIG. 13, and FIG. 14. Here, the control information relating to the operations performed by phase changers 205A, 205B is expressed as u0, u1. The relationship between [u0 u1] and the operations of phase changers 205A and 205B is shown in Table 1 (note that u0, u1 are transmitted by the base station as some of the control information symbols, namely, other symbols 403, 503; the terminal obtains [u0 u1] included in the control information symbols, namely, other symbols 403, 503, learns from [u0 u1] the operations performed by phase changers 205A, 205B, and demodulates and decodes the data symbols).

TABLE 1

    u0 u1    phase changer operations
    0  0     no phase change
    0  1     change the phase change value on a per-symbol basis (cyclically/regularly)
    1  0     implement a phase change using a specified phase change value (set)
    1  1     reserved

Table 1 is interpreted as follows. When the settings in the base station are configured such that phase changers 205A, 205B do not implement a phase change, u0 is set to 0 (u0 = 0) and u1 is set to 0 (u1 = 0). Accordingly, phase changer 205A outputs signal 206A without implementing a phase change on input signal 204A. Similarly, phase changer 205B outputs signal 206B without implementing a phase change on input signal 204B.

When the settings in the base station are configured such that phase changers 205A, 205B implement a phase change cyclically/regularly on a per-symbol basis, u0 is set to 0 (u0 = 0) and u1 is set to 1 (u1 = 1). Note that since the method used by phase changers 205A, 205B to implement a phase change cyclically/regularly on a per-symbol basis is described in detail in Embodiments 1 through 6, detailed description thereof is omitted. When signal processor 106 illustrated in FIG. 1 is configured as illustrated in any one of FIG. 20, FIG. 21, and FIG. 22, u0 is also set to 0 (u0 = 0) and u1 is also set to 1 (u1 = 1) when the settings in the base station are configured such that phase changer 205A implements a phase change cyclically/regularly on a per-symbol basis and phase changer 205B does not, and when the settings are configured such that phase changer 205A does not implement a phase change cyclically/regularly on a per-symbol basis and phase changer 205B does.

When the settings in the base station are configured such that phase changers 205A, 205B implement a phase change using a specific phase change value, u0 is set to 1 (u0 = 1) and u1 is set to 0 (u1 = 0). Next, implementation of a phase change using a specific phase change value will be described. For example, in phase changer 205A, a phase change is implemented using a specific phase change value. Here, the input signal (204A) is expressed as z1(i) (i is a symbol number). Accordingly, when a phase change is implemented using a specific phase change value, the output signal (206A) is expressed as e^(jα) × z1(i) (α is the specific phase change value, and is a real number). Here, the amplitude may also be changed; in such a case, the output signal (206A) is expressed as A × e^(jα) × z1(i) (A is a real number).
Similarly, in phase changer 205B, a phase change is implemented using a specific phase change value. Here, the input signal (204B) is expressed as z2(i) (i is a symbol number). Accordingly, when a phase change is implemented using a specific phase change value, the output signal (206B) is expressed as e^(jβ) × z2(i) (β is the specific phase change value, and is a real number). Here, the amplitude may also be changed; in such a case, output signal 206B is expressed as B × e^(jβ) × z2(i) (B is a real number). Note that when signal processor 106 illustrated in FIG. 1 is configured as illustrated in any one of FIG. 20, FIG. 21, FIG. 22, FIG. 31, FIG. 32, and FIG. 33, u0 is also set to 1 (u0 = 1) and u1 is also set to 0 (u1 = 0) when the settings in the base station are configured such that phase changer 205A implements a phase change using a specific phase change value and phase changer 205B does not, and when the settings are configured such that phase changer 205A does not implement a phase change using a specific phase change value and phase changer 205B does.

Next, an example of a method for setting a specific phase change value will be described. Hereinafter, a first method and a second method will be described.

First Method:

The base station transmits a training symbol. The terminal, which is the communication partner, uses the training symbol to transmit information on the specific phase change value (set) to the base station. The base station implements a phase change based on the information on the specific phase change value (set) obtained from the terminal. Another alternative example is as follows. The base station transmits a training symbol. The terminal, which is the communication partner, transmits, to the base station, information relating to the reception result of the training symbol (e.g., information relating to a channel estimation value). Based on the information relating to the reception result of the training symbol from the terminal, the base station calculates a suitable value for the specific phase change value (set) and implements a phase change. Note that it is necessary for the base station to notify the terminal of the information relating to the specific phase change value (set) configured in the settings; in this case, the control information symbols, namely, other symbols 403, 503 illustrated in FIG. 4, FIG. 5, FIG. 13, and FIG. 14, transmit the information relating to the specific phase change value (set) configured by the base station.

Next, an implementation example of the first method will be described with reference to FIG. 26. In FIG. 26, (A) illustrates symbols transmitted by the base station arranged on the time axis, which is the horizontal axis, and (B) illustrates symbols transmitted by the terminal arranged on the time axis, which is the horizontal axis. Hereinafter, FIG. 26 will be described in detail.

First, the terminal requests communication with the base station. Then, the base station transmits at least training symbol 2601 for estimating the specific phase change value (set) to be used by the base station for the transmission of data symbol 2604. Note that the terminal may perform other estimation using training symbol 2601, and training symbol 2601 may use PSK modulation, for example. The training symbol is transmitted from a plurality of antennas, just like the pilot symbol described in Embodiments 1 through 6.
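The following sketch ties the u0, u1 signaling of Table 1 to the transmitter behavior for the "specific phase change value (set)" mode: both streams are multiplied by fixed phasors e^(jα) and e^(jβ), with optional amplitudes A and B. The mode strings, function names, and numeric values are illustrative assumptions.

    import numpy as np

    def phase_changer_mode(u0, u1):
        """Decode the 2-bit control field [u0 u1] of Table 1 (illustrative)."""
        return {(0, 0): "no phase change",
                (0, 1): "per-symbol cyclic/regular phase change",
                (1, 0): "specific phase change value (set)",
                (1, 1): "reserved"}[(u0, u1)]

    def apply_specific_set(z1, z2, alpha, beta, A=1.0, B=1.0):
        """Mode (u0,u1)=(1,0): 206A = A*e^(j*alpha)*z1(i), 206B = B*e^(j*beta)*z2(i)."""
        return A * np.exp(1j * alpha) * z1, B * np.exp(1j * beta) * z2

    z1 = np.ones(4, dtype=complex)
    z2 = 1j * np.ones(4, dtype=complex)
    assert phase_changer_mode(1, 0) == "specific phase change value (set)"
    out_a, out_b = apply_specific_set(z1, z2, alpha=np.pi / 8, beta=-np.pi / 3)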
The terminal receives training symbol 2601 transmitted by the base station, uses training symbol 2601 to calculate a suitable specific phase change value (set) for phase changer 205A and/or phase changer 205B in the base station to use when implementing a phase change, and transmits feedback information symbol 2602 including the calculated value. The base station receives feedback information symbol 2602 transmitted by the terminal, and demodulates and decodes it to obtain the information on the suitable specific phase change value (set). Based on this information, the phase change value (set) used in the implementation of the phase change by phase changer 205A and/or phase changer 205B in the base station is set. The base station then transmits control information symbol 2603 and data symbol 2604. Here, at least data symbol 2604 is implemented with a phase change using the set phase change value (set). Note that, regarding data symbol 2604, the base station transmits a plurality of modulated signals from a plurality of antennas, just as described in Embodiments 1 through 6. However, unlike Embodiments 1 through 6, phase changer 205A and/or phase changer 205B implements a phase change using the specific phase change value (set) described above.

The frame configurations of the base station and terminal illustrated in FIG. 26 are mere non-limiting examples; other symbols may be included, and training symbol 2601, feedback information symbol 2602, control information symbol 2603, and data symbol 2604 may each include another symbol such as a pilot symbol. Moreover, control information symbol 2603 includes information relating to the specific phase change value (set) used when transmitting data symbol 2604, and the terminal becomes capable of demodulating and decoding data symbol 2604 as a result of obtaining this information.

Similar to as described in Embodiments 1 through 6, for example, when the base station transmits a modulated signal having a frame configuration such as illustrated in FIG. 4, FIG. 5, FIG. 13, or FIG. 14, the subject of the phase change implemented using the specific phase change value (set) by phase changer 205A and/or phase changer 205B, as described above, is the data symbols (402, 502). The symbols that are subject to the phase change implemented by phase changer 209A and/or phase changer 209B are, just as described in Embodiments 1 through 6, "pilot symbols 401, 501" and "other symbols 403, 503". However, even if phase changer 205A and/or phase changer 205B applies a phase change to "pilot symbols 401, 501" and "other symbols 403, 503" as well, demodulating and decoding remain possible.

A note regarding the recitation "specific phase change value (set)" follows. In the examples illustrated in FIG. 2, FIG. 18, FIG. 19, FIG. 28, FIG. 29, and FIG. 30, phase changer 205A is omitted and phase changer 205B is included. Accordingly, in such a case, there is a need to prepare a specific phase change value to be used by phase changer 205B. On the other hand, in the examples illustrated in FIG. 20, FIG. 21, FIG. 22, FIG. 31, FIG. 32, and FIG. 33, phase changer 205A and phase changer 205B are both included. In such a case, there is a need to prepare a specific phase change value #A to be used by phase changer 205A and a specific phase change value #B to be used by phase changer 205B. Accordingly, the terminology "specific phase change value (set)" is used.
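As one hedged illustration of how the terminal in the first method might "calculate a suitable specific phase change value (set)" from the training symbol, the sketch below grid-searches a candidate phase β for phase changer 205B that maximizes the minimum distance between the receiver's candidate signal points, given a channel estimate obtained from the training symbol. The selection criterion and the grid are assumptions for illustration; this description does not fix a particular calculation method.

    import numpy as np
    from itertools import product

    qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)

    def min_distance(h1, h2, beta):
        """Minimum distance among the 16 candidate points when stream 2 is
        multiplied by e^(j*beta) (a 205B-style specific phase change value)."""
        pts = [h1 * s1 + h2 * np.exp(1j * beta) * s2
               for s1, s2 in product(qpsk, qpsk)]
        return min(abs(a - b) for k, a in enumerate(pts) for b in pts[k + 1:])

    # Channel estimated from the training symbol (illustrative values).
    h1, h2 = 1.0 + 0.0j, 0.8 + 0.1j
    grid = np.linspace(0, 2 * np.pi, 64, endpoint=False)
    beta_best = max(grid, key=lambda b: min_distance(h1, h2, b))
    # beta_best would be reported to the base station in feedback information
    # symbol 2602.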
Second Method:

The base station starts transmission of a frame to the terminal. In this case, for example, the base station sets the specific phase change value (set) based on a random value, implements a phase change using the specific phase change value (set), and transmits the modulated signal. Thereafter, the terminal transmits, to the base station, information indicating that the frame (or packet) could not be obtained, and the base station receives this information. The base station then, for example, newly sets the specific phase change value (set) based on a random value, and transmits the modulated signal. Here, at least a data symbol including the frame (packet) data that the terminal could not obtain is transmitted via a modulated signal implemented with a phase change based on the newly set specific phase change value (set). In other words, when the base station performs transmission two (or more) times as a result of, for example, retransmitting the first frame (packet) data, the specific phase change value (set) used for the first transmission and the specific phase change value (set) used for the second transmission may be different. This makes it possible to achieve the advantageous effect that the frame (or packet) is highly likely to be obtained by the terminal upon the second transmission when retransmission is performed. Thereafter, whenever the base station receives, from the terminal, information indicating that a frame (or packet) could not be obtained, the base station changes the specific phase change value (set) based on, for example, a random number.

Note that it is necessary for the base station to notify the terminal of the information relating to the specific phase change value (set) configured in the settings; in this case, the control information symbols, namely, other symbols 403, 503 illustrated in FIG. 4, FIG. 5, FIG. 13, and FIG. 14, transmit the information relating to the specific phase change value (set) configured by the base station.

Note that in the above description of the second method, the specific phase change value (set) is set by the base station based on a random value, but the method for setting the specific phase change value (set) is not limited to this example. So long as a new value is adopted each time the specific phase change value (set) is set, any method may be used: for example, the specific phase change value (set) may be set based on some rule, may be set randomly, or may be set based on information obtained from the communication partner (however, the method is not limited to these examples).

Next, an implementation example of the second method will be described with reference to FIG. 27. In FIG. 27, (A) illustrates symbols transmitted by the base station arranged on the time axis, which is the horizontal axis, and (B) illustrates symbols transmitted by the terminal arranged on the time axis, which is the horizontal axis. Hereinafter, FIG. 27 will be described in detail. Note that in order to describe FIG. 27, FIG. 28, FIG. 29, FIG. 30, FIG. 31, FIG. 32, and FIG. 33 will also be described. Examples of the configuration of signal processor 106 illustrated in FIG. 1 are given in FIG. 2, FIG. 18, FIG. 19, FIG. 20, FIG. 21, and FIG. 22, and variations on those configurations are illustrated in FIG. 28, FIG. 29, FIG. 30, FIG. 31, FIG. 32, and FIG. 33.
FIG. 28 is an example in which the configuration in FIG. 2 is modified by moving phase changer 205B in front of weighting synthesizer 203. The operations in FIG. 28 that differ from those in FIG. 2 are described next.

Phase changer 205B receives inputs of mapped signal 201B (s2(i)) and control signal 200, applies a phase change to mapped signal 201B based on control signal 200, and outputs phase-changed signal 2801B. In phase changer 205B, for example, a phase change of y(i) is applied to s2(i). Accordingly, when phase-changed signal 2801B is expressed as s2′(i), s2′(i) can be expressed as s2′(i) = y(i) × s2(i) (i is a symbol number (i is an integer that is greater than or equal to 0)). Note that the application method for phase change value y(i) is as described in Embodiment 1.

Weighting synthesizer 203 receives inputs of mapped signal 201A (s1(i)), phase-changed signal 2801B (s2′(i)), and control signal 200, performs weighting synthesis (precoding) based on control signal 200, and outputs weighting synthesized signal 204A and weighting synthesized signal 204B. More specifically, weighting synthesizer 203 multiplies a precoding matrix with the vector of mapped signal 201A (s1(i)) and phase-changed signal 2801B (s2′(i)) to obtain weighting synthesized signal 204A and weighting synthesized signal 204B. Note that the configuration example of the precoding matrix is as described in Embodiment 1 (the subsequent description is the same as that made with reference to FIG. 2, and as such, is omitted).

FIG. 29 is an example in which the configuration in FIG. 18 is modified by moving phase changer 205B in front of weighting synthesizer 203. The operations performed by phase changer 205B and weighting synthesizer 203 are the same as described with reference to FIG. 28, and the operations downstream of weighting synthesizer 203 are the same as those made with reference to FIG. 18, so description thereof is omitted.

FIG. 30 is an example in which the configuration in FIG. 19 is modified by moving phase changer 205B in front of weighting synthesizer 203. The operations performed by phase changer 205B and weighting synthesizer 203 are the same as described with reference to FIG. 28, and the operations downstream of weighting synthesizer 203 are the same as those made with reference to FIG. 19, so description thereof is omitted.

FIG. 31 is an example in which the configuration in FIG. 20 is modified by moving phase changer 205A and phase changer 205B in front of weighting synthesizer 203. Phase changer 205A receives inputs of mapped signal 201A (s1(i)) and control signal 200, applies a phase change to mapped signal 201A based on control signal 200, and outputs phase-changed signal 2801A. In phase changer 205A, for example, a phase change of w(i) is applied to s1(i). Accordingly, when phase-changed signal 2801A is expressed as s1′(i), s1′(i) can be expressed as s1′(i) = w(i) × s1(i) (i is a symbol number (i is an integer that is greater than or equal to 0)). Note that the application method for phase change value w(i) is as described in Embodiment 1. In phase changer 205B, for example, a phase change of y(i) is applied to s2(i). Accordingly, when phase-changed signal 2801B is expressed as s2′(i), s2′(i) can be expressed as s2′(i) = y(i) × s2(i) (i is a symbol number (i is an integer that is greater than or equal to 0)). Note that the application method for phase change value y(i) is as described in Embodiment 1.
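A minimal sketch of this reordered chain, in which the per-symbol phase change values w(i) and y(i) multiply the mapped signals before the weighting synthesis (as in the FIG. 31 style configuration), follows; the precoding matrix and the phase sequences are illustrative assumptions.

    import numpy as np

    W = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # illustrative precoding matrix

    def fig31_style(s1, s2, w, y):
        """s1'(i) = w(i)*s1(i), s2'(i) = y(i)*s2(i), then (204A, 204B) = W @ (s1', s2')."""
        s1p = w * s1                   # phase changer 205A, moved before synthesizer
        s2p = y * s2                   # phase changer 205B, moved before synthesizer
        z = W @ np.vstack([s1p, s2p])  # weighting synthesizer 203
        return z[0], z[1]

    i = np.arange(4)
    w = np.exp(1j * 0 * i)               # w(i): here, no phase change on stream 1
    y = np.exp(1j * 2 * np.pi * i / 4)   # y(i): cycle-4 phase change on stream 2
    s1 = np.ones(4, dtype=complex)
    s2 = 1j * np.ones(4, dtype=complex)
    z1, z2 = fig31_style(s1, s2, w, y)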
Note that the application method for phase change value y(i) is as described in Embodiment 1.

Weighting synthesizer 203 receives inputs of phase-changed signal 2801A (s1′(i)), phase-changed signal 2801B (s2′(i)), and control signal 200, performs weighting synthesis (precoding) based on control signal 200, and outputs weighting synthesized signal 204A and weighting synthesized signal 204B. More specifically, weighting synthesizer 203 multiplies a precoding matrix with the vector of phase-changed signal 2801A (s1′(i)) and phase-changed signal 2801B (s2′(i)) to obtain weighting synthesized signal 204A and weighting synthesized signal 204B. Note that the configuration example for the precoding matrix is as described in Embodiment 1 (the subsequent description is the same as made with reference to FIG. 20, and as such, is omitted).

FIG. 32 is an example in which the configuration in FIG. 21 is modified by moving phase changer 205A in front of weighting synthesizer 203 and moving phase changer 205B in front of weighting synthesizer 203. In this case, the operations performed by phase changer 205A, phase changer 205B, and weighting synthesizer 203 are the same as described with reference to FIG. 31, and as such, description will be omitted. Moreover, operations downstream of weighting synthesizer 203 are also the same as made with reference to FIG. 21, and as such, description thereof is omitted.

FIG. 33 is an example in which the configuration in FIG. 22 is modified by moving phase changer 205A in front of weighting synthesizer 203 and moving phase changer 205B in front of weighting synthesizer 203. In this case, the operations performed by phase changer 205A, phase changer 205B, and weighting synthesizer 203 are the same as described with reference to FIG. 31, and as such, description will be omitted. Moreover, operations downstream of weighting synthesizer 203 are also the same as made with reference to FIG. 22, and as such, description thereof is omitted.
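To make the FIG. 28 signal path concrete, the following Python sketch applies a per-symbol phase change y(i) to s2 before weighting synthesis, then multiplies the per-symbol vector by a precoding matrix. This is an illustration only: the function name, the example cyclic coefficient e^{j2πi/9}, and the example precoding matrix F are assumptions, not values prescribed by this description.

```python
import numpy as np

def transmit_chain(s1: np.ndarray, s2: np.ndarray, F: np.ndarray):
    """Phase change before precoding, as in the FIG. 28 arrangement."""
    i = np.arange(len(s2))                 # symbol numbers 0, 1, 2, ...
    y = np.exp(1j * 2 * np.pi * i / 9)     # example cyclic phase change y(i)
    s2p = y * s2                           # s2'(i) = y(i) * s2(i)
    # Weighting synthesis: [z1(i) z2(i)]^T = F [s1(i) s2'(i)]^T per symbol.
    z = F @ np.vstack([s1, s2p])
    return z[0], z[1]                      # weighting synthesized signals

# Example usage with QPSK symbols and a simple (assumed) precoding matrix.
F = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
s1 = (np.array([1, -1, 1]) + 1j * np.array([1, 1, -1])) / np.sqrt(2)
s2 = (np.array([-1, 1, -1]) + 1j * np.array([-1, -1, 1])) / np.sqrt(2)
z1, z2 = transmit_chain(s1, s2, F)
```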
In FIG. 27, the terminal requests communication with the base station. In this case, the base station determines the phase change value to be implemented by phase changer 205A and/or phase changer 205B to be a first specific phase change value (set) by using a random number, for example. Then, the base station implements a phase change via phase changer 205A and/or phase changer 205B based on the determined first specific phase change value (set). Here, control information symbol 2701_1 includes information on the first specific phase change value (set).

A note regarding the terminology "first specific phase change value (set)" follows. In the examples illustrated in FIG. 2, FIG. 18, FIG. 19, FIG. 28, FIG. 29, and FIG. 30, phase changer 205A is omitted, and phase changer 205B is included. Accordingly, in such a case, there is a need to prepare a first specific phase change value to be used by phase changer 205B. On the other hand, in the examples illustrated in FIG. 20, FIG. 21, FIG. 22, FIG. 31, FIG. 32, and FIG. 33, phase changer 205A and phase changer 205B are included. In such a case, there is a need to prepare a first specific phase change value #A to be used by phase changer 205A and a first specific phase change value #B to be used by phase changer 205B. Accordingly, the terminology "first specific phase change value (set)" is used.

The base station then transmits control information symbol 2701_1 and data symbol #1 (2702_1). Here, at least data symbol #1 (2702_1) is implemented with a phase change using the determined first specific phase change value (set).

The terminal receives control information symbol 2701_1 and data symbol #1 (2702_1) transmitted by the base station, and demodulates and decodes data symbol #1 (2702_1) based at least on the information on the first specific phase change value (set) included in control information symbol 2701_1. As a result, the terminal determines that the data included in data symbol #1 (2702_1) is obtained without error.

The terminal then transmits, to the base station, terminal transmission symbol 2750_1 including at least information indicating that the data included in data symbol #1 (2702_1) was obtained without error.

The base station receives terminal transmission symbol 2750_1 transmitted by the terminal, and based at least on the information that is included in terminal transmission symbol 2750_1 and indicates that the data included in data symbol #1 (2702_1) was obtained without error, determines the phase change (set) to be implemented by phase changer 205A and/or phase changer 205B to be the first specific phase change value (set), just as in the case where data symbol #1 (2702_1) was transmitted (since the terminal obtained the data included in data symbol #1 (2702_1) without error, the base station can determine that it is highly probable that the terminal can obtain data without error when the next data symbol is transmitted using the first specific phase change value (set); this makes it possible to achieve an advantageous effect that it is highly probable that the terminal can achieve a high data reception quality). Then, the base station implements a phase change via phase changer 205A and/or phase changer 205B based on the determined first specific phase change value (set). Here, control information symbol 2701_2 includes information on the first specific phase change value (set).

The base station then transmits control information symbol 2701_2 and data symbol #2 (2702_2). Here, at least data symbol #2 (2702_2) is implemented with a phase change using the determined first specific phase change value (set).

The terminal receives control information symbol 2701_2 and data symbol #2 (2702_2) transmitted by the base station, and demodulates and decodes data symbol #2 (2702_2) based at least on the information on the first specific phase change value (set) included in control information symbol 2701_2. As a result, the terminal determines that the data included in data symbol #2 (2702_2) was not successfully obtained.

The terminal then transmits, to the base station, terminal transmission symbol 2750_2 including at least information indicating that the data included in data symbol #2 (2702_2) was not successfully obtained.

The base station receives terminal transmission symbol 2750_2 transmitted by the terminal, and based at least on the information that is included in terminal transmission symbol 2750_2 and indicates that the data included in data symbol #2 (2702_2) was not successfully obtained, determines that the phase change (set) to be implemented by phase changer 205A and/or phase changer 205B is to be changed from the first specific phase change value (set) (since the terminal did not obtain the data included in data symbol #2 (2702_2) successfully, the base station can determine that it is highly probable that the terminal can obtain data without error when the next data symbol is transmitted with the phase change value changed from the first specific phase change value (set); this makes it possible to achieve an advantageous effect that it is highly probable that the terminal can achieve a high data reception quality).
Accordingly, the base station determines the phase change value (set) to be implemented by phase changer 205A and/or phase changer 205B to be changed from the first specific phase change value (set) to a second specific phase change value (set), by using a random number, for example. Then, the base station implements a phase change via phase changer 205A and/or phase changer 205B based on the determined second specific phase change value (set). Here, control information symbol 2701_3 includes information on the second specific phase change value (set).

A note regarding the terminology "second specific phase change value (set)" follows. In the examples illustrated in FIG. 2, FIG. 18, FIG. 19, FIG. 28, FIG. 29, and FIG. 30, phase changer 205A is omitted, and phase changer 205B is included. Accordingly, in such a case, there is a need to prepare a second specific phase change value to be used by phase changer 205B. On the other hand, in the examples illustrated in FIG. 20, FIG. 21, FIG. 22, FIG. 31, FIG. 32, and FIG. 33, phase changer 205A and phase changer 205B are included. In such a case, there is a need to prepare a second specific phase change value #A to be used by phase changer 205A and a second specific phase change value #B to be used by phase changer 205B. Accordingly, the terminology "second specific phase change value (set)" is used.

The base station then transmits control information symbol 2701_3 and data symbol #2 (2702_2-1). Here, at least data symbol #2 (2702_2-1) is implemented with a phase change using the determined second specific phase change value (set).

Note that regarding "data symbol #2 (2702_2) present immediately behind control information symbol 2701_2" and "data symbol #2 (2702_2-1) present immediately behind control information symbol 2701_3", the modulation scheme of the former and the modulation scheme of the latter may be the same or different. Moreover, all or some of the data included in "data symbol #2 (2702_2) present immediately behind control information symbol 2701_2" is included in "data symbol #2 (2702_2-1) present immediately behind control information symbol 2701_3" (because "data symbol #2 (2702_2-1) present immediately behind control information symbol 2701_3" is a retransmission symbol).

The terminal receives control information symbol 2701_3 and data symbol #2 (2702_2-1) transmitted by the base station, and demodulates and decodes data symbol #2 (2702_2-1) based at least on the information on the second specific phase change value (set) included in control information symbol 2701_3. As a result, the terminal determines that the data included in data symbol #2 (2702_2-1) was not successfully obtained.

The terminal then transmits, to the base station, terminal transmission symbol 2750_3 including at least information indicating that the data included in data symbol #2 (2702_2-1) was not successfully obtained.
The base station receives terminal transmission symbol 2750_3 transmitted by the terminal, and based at least on the information that is included in terminal transmission symbol 2750_3 and indicates that the data included in data symbol #2 (2702_2-1) was not successfully obtained, determines that the phase change (set) to be implemented by phase changer 205A and/or phase changer 205B is to be changed from the second specific phase change value (set) (since the terminal did not obtain the data included in data symbol #2 (2702_2-1) successfully, the base station can determine that it is highly probable that the terminal can obtain data without error when the next data symbol is transmitted with the phase change value changed from the second specific phase change value (set); this makes it possible to achieve an advantageous effect that it is highly probable that the terminal can achieve a high data reception quality).

Accordingly, the base station determines the phase change value (set) to be implemented by phase changer 205A and/or phase changer 205B to be changed from the second specific phase change value (set) to a third specific phase change value (set), by using a random number, for example. Then, the base station implements a phase change via phase changer 205A and/or phase changer 205B based on the determined third specific phase change value (set). Here, control information symbol 2701_4 includes information on the third specific phase change value (set).

A note regarding the terminology "third specific phase change value (set)" follows. In the examples illustrated in FIG. 2, FIG. 18, FIG. 19, FIG. 28, FIG. 29, and FIG. 30, phase changer 205A is omitted, and phase changer 205B is included. Accordingly, in such a case, there is a need to prepare a third specific phase change value to be used by phase changer 205B. On the other hand, in the examples illustrated in FIG. 20, FIG. 21, FIG. 22, FIG. 31, FIG. 32, and FIG. 33, phase changer 205A and phase changer 205B are included. In such a case, there is a need to prepare a third specific phase change value #A to be used by phase changer 205A and a third specific phase change value #B to be used by phase changer 205B. Accordingly, the terminology "third specific phase change value (set)" is used.

The base station then transmits control information symbol 2701_4 and data symbol #2 (2702_2-2). Here, at least data symbol #2 (2702_2-2) is implemented with a phase change using the determined third specific phase change value (set).

Note that regarding "data symbol #2 (2702_2-1) present immediately behind control information symbol 2701_3" and "data symbol #2 (2702_2-2) present immediately behind control information symbol 2701_4", the modulation scheme of the former and the modulation scheme of the latter may be the same or different. Moreover, all or some of the data included in "data symbol #2 (2702_2-1) present immediately behind control information symbol 2701_3" is included in "data symbol #2 (2702_2-2) present immediately behind control information symbol 2701_4" (because "data symbol #2 (2702_2-2) present immediately behind control information symbol 2701_4" is a retransmission symbol).

The terminal receives control information symbol 2701_4 and data symbol #2 (2702_2-2) transmitted by the base station, and demodulates and decodes data symbol #2 (2702_2-2) based at least on the information on the third specific phase change value (set) included in control information symbol 2701_4. As a result, the terminal determines that the data included in data symbol #2 (2702_2-2) is obtained without error.
The terminal then transmits, to the base station, terminal transmission symbol 2750_4 including at least information indicating that the data included in data symbol #2 (2702_2-2) was obtained without error.

The base station receives terminal transmission symbol 2750_4 transmitted by the terminal, and based at least on the information that is included in terminal transmission symbol 2750_4 and indicates that the data included in data symbol #2 (2702_2-2) was obtained without error, determines the phase change (set) to be implemented by phase changer 205A and/or phase changer 205B to be the third specific phase change value (set), just as in the case where data symbol #2 (2702_2-2) was transmitted (since the terminal obtained the data included in data symbol #2 (2702_2-2) without error, the base station can determine that it is highly probable that the terminal can obtain data without error when the next data symbol is transmitted using the third specific phase change value (set); this makes it possible to achieve an advantageous effect that it is highly probable that the terminal can achieve a high data reception quality). Then, the base station implements a phase change via phase changer 205A and/or phase changer 205B based on the determined third specific phase change value (set). Here, control information symbol 2701_5 includes information on the third specific phase change value (set).

The base station then transmits control information symbol 2701_5 and data symbol #3 (2702_3). Here, at least data symbol #3 (2702_3) is implemented with a phase change using the determined third specific phase change value (set).

The terminal receives control information symbol 2701_5 and data symbol #3 (2702_3) transmitted by the base station, and demodulates and decodes data symbol #3 (2702_3) based at least on the information on the third specific phase change value (set) included in control information symbol 2701_5. As a result, the terminal determines that the data included in data symbol #3 (2702_3) is obtained without error.

The terminal then transmits, to the base station, terminal transmission symbol 2750_5 including at least information indicating that the data included in data symbol #3 (2702_3) was obtained without error.

The base station receives terminal transmission symbol 2750_5 transmitted by the terminal, and based at least on the information that is included in terminal transmission symbol 2750_5 and indicates that the data included in data symbol #3 (2702_3) was obtained without error, determines the phase change (set) to be implemented by phase changer 205A and/or phase changer 205B to be the third specific phase change value (set), just as in the case where data symbol #3 (2702_3) was transmitted (since the terminal obtained the data included in data symbol #3 (2702_3) without error, the base station can determine that it is highly probable that the terminal can obtain data without error when the next data symbol is transmitted using the third specific phase change value (set); this makes it possible to achieve an advantageous effect that it is highly probable that the terminal can achieve a high data reception quality). Then, the base station implements a phase change via phase changer 205A and/or phase changer 205B based on the determined third specific phase change value (set). Here, control information symbol 2701_6 includes information on the third specific phase change value (set).

The base station then transmits control information symbol 2701_6 and data symbol #4 (2702_4).
Here, at least data symbol #4 (2702_4) is implemented with a phase change using the determined third specific phase change value (set).

The terminal receives control information symbol 2701_6 and data symbol #4 (2702_4) transmitted by the base station, and demodulates and decodes data symbol #4 (2702_4) based at least on the information on the third specific phase change value (set) included in control information symbol 2701_6. As a result, the terminal determines that the data included in data symbol #4 (2702_4) was not successfully obtained.

The terminal then transmits, to the base station, terminal transmission symbol 2750_6 including at least information indicating that the data included in data symbol #4 (2702_4) was not successfully obtained.

The base station receives terminal transmission symbol 2750_6 transmitted by the terminal, and based at least on the information that is included in terminal transmission symbol 2750_6 and indicates that the data included in data symbol #4 (2702_4) was not successfully obtained, determines that the phase change (set) to be implemented by phase changer 205A and/or phase changer 205B is to be changed from the third specific phase change value (set) (since the terminal did not obtain the data included in data symbol #4 (2702_4) successfully, the base station can determine that it is highly probable that the terminal can obtain data without error when the next data symbol is transmitted with the phase change value changed from the third specific phase change value (set); this makes it possible to achieve an advantageous effect that it is highly probable that the terminal can achieve a high data reception quality).

Accordingly, the base station determines the phase change value (set) to be implemented by phase changer 205A and/or phase changer 205B to be changed from the third specific phase change value (set) to a fourth specific phase change value (set), by using a random number, for example. Then, the base station implements a phase change via phase changer 205A and/or phase changer 205B based on the determined fourth specific phase change value (set). Here, control information symbol 2701_7 includes information on the fourth specific phase change value (set).

A note regarding the terminology "fourth specific phase change value (set)" follows. In the examples illustrated in FIG. 2, FIG. 18, FIG. 19, FIG. 28, FIG. 29, and FIG. 30, phase changer 205A is omitted, and phase changer 205B is included. Accordingly, in such a case, there is a need to prepare a fourth specific phase change value to be used by phase changer 205B. On the other hand, in the examples illustrated in FIG. 20, FIG. 21, FIG. 22, FIG. 31, FIG. 32, and FIG. 33, phase changer 205A and phase changer 205B are included. In such a case, there is a need to prepare a fourth specific phase change value #A to be used by phase changer 205A and a fourth specific phase change value #B to be used by phase changer 205B. Accordingly, the terminology "fourth specific phase change value (set)" is used.

Note that regarding "data symbol #4 (2702_4) present immediately behind control information symbol 2701_6" and "data symbol #4 (2702_4-1) present immediately behind control information symbol 2701_7", the modulation scheme of the former and the modulation scheme of the latter may be the same or different.
Moreover, "data symbol #4 (2702_4-1) present immediately behind control information symbol 2701_7" includes all or some of the data included in "data symbol #4 (2702_4) present immediately behind control information symbol 2701_6" (because "data symbol #4 (2702_4-1) present immediately behind control information symbol 2701_7" is a retransmission symbol).

The terminal receives control information symbol 2701_7 and data symbol #4 (2702_4-1) transmitted by the base station, and demodulates and decodes data symbol #4 (2702_4-1) based at least on the information on the fourth specific phase change value (set) included in control information symbol 2701_7.

Note that regarding data symbol #1 (2702_1), data symbol #2 (2702_2), data symbol #3 (2702_3), and data symbol #4 (2702_4), the base station transmits a plurality of modulated signals from a plurality of antennas, just as described in Embodiments 1 through 6. However, unlike Embodiments 1 through 6, phase changer 205A and/or phase changer 205B implement a phase change using the specific phase change value described above.

The frame configurations of the base station and terminal illustrated in FIG. 27 are mere non-limiting examples; other symbols may be included. Moreover, control information symbols 2701_1, 2701_2, 2701_3, 2701_4, 2701_5, and 2701_6, data symbol #1 (2702_1), data symbol #2 (2702_2), data symbol #3 (2702_3), and data symbol #4 (2702_4) may each include other symbols, such as a pilot symbol. Moreover, control information symbols 2701_1, 2701_2, 2701_3, 2701_4, 2701_5, and 2701_6 include information relating to the specific phase change value (set) used upon transmitting data symbol #1 (2702_1), data symbol #2 (2702_2), data symbol #3 (2702_3), and data symbol #4 (2702_4), and the terminal becomes capable of demodulating and decoding data symbol #1 (2702_1), data symbol #2 (2702_2), data symbol #3 (2702_3), and data symbol #4 (2702_4) as a result of obtaining this information.

Note that in the above description, the base station determines the value (set) for the specific phase change value (set) by using a "random number", but the determination of the value for the specific phase change value (set) is not limited to this method. For example, the base station may regularly change the value (set) for the specific phase change value (set); any method may be used to determine the value, so long as, when the specific phase change value (set) needs to be changed, the values before and after the change are different.

Similar to as described in Embodiments 1 through 6, for example, when the base station transmits a modulated signal having a frame configuration such as illustrated in FIG. 4, FIG. 5, FIG. 13, or FIG. 14, the subject of the phase change implemented using the specific phase change value by phase changer 205A and/or phase changer 205B, as described above, is the data symbols (402, 502). The symbols that are subject to the phase change implemented by phase changer 209A and/or phase changer 209B are, just as described in Embodiments 1 through 6, "pilot symbols 401, 501" and "other symbols 403, 503". However, even if phase changer 205A and/or phase changer 205B applies a phase change to "pilot symbols 401, 501" and "other symbols 403, 503" as well, demodulating and decoding remains possible.

Even if this transmission method is implemented independently, the above-described method of implementing a phase change using a specific phase change value can achieve an advantageous effect in that the terminal can achieve a high data reception quality.
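As a point of reference, the symbol-selective behavior just described can be sketched as follows. This is an assumed illustration, not the patent's implementation: the frame representation and symbol kinds are invented for the example. It shows only that the specific phase change value multiplies data symbols while pilot and other symbols may pass through unchanged.

```python
import numpy as np

# A toy frame as (kind, complex symbol) pairs; the kinds are assumptions.
frame = [("pilot", 1 + 0j), ("data", 0.7 + 0.7j),
         ("data", -0.7 + 0.7j), ("other", 1 + 0j)]

specific_phase = np.exp(1j * np.pi / 4)  # example specific phase change value

# Apply the specific phase change to data symbols only, leaving pilot and
# other symbols as-is, matching the frame configurations described above.
processed = [(kind, sym * specific_phase if kind == "data" else sym)
             for kind, sym in frame]
```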
Moreover, examples of the configuration of signal processor 106 illustrated in FIG. 1 and included in the transmission device of the base station are given in FIG. 2, FIG. 18, FIG. 19, FIG. 20, FIG. 21, FIG. 22, FIG. 23, FIG. 28, FIG. 29, FIG. 30, FIG. 31, FIG. 32, and FIG. 33, but a phase change need not be implemented in phase changer 209A and phase changer 209B. In other words, in FIG. 2, FIG. 18, FIG. 19, FIG. 20, FIG. 21, FIG. 22, FIG. 23, FIG. 28, FIG. 29, FIG. 30, FIG. 31, FIG. 32, and FIG. 33, phase changer 209A and phase changer 209B may be removed. In such cases, signal 208A corresponds to signal 106_A in FIG. 1, and signal 208B corresponds to signal 106_B in FIG. 1.

When [u0 u1], which is described above and used to control operations performed by phase changers 205A, 205B included in the base station, is set to [01] (i.e., u0=0, u1=1), that is to say, when phase changers 205A, 205B implement a phase change cyclically/regularly on a per-symbol basis, control information for setting the phase change in detail is set to u2, u3. The relationship between [u2 u3] and the phase change implemented by phase changers 205A, 205B in detail is illustrated in Table 2 (note that u2, u3 are, for example, transmitted by the base station as some of the control information symbols, namely, other symbols 403, 503. The terminal obtains [u2 u3] included in the control information symbols, namely, other symbols 403, 503, becomes aware of the operations performed by phase changers 205A, 205B from [u2 u3], and demodulates and decodes the data symbols. Also, the control information for "detailed phase change" is 2-bit information here, but the number of bits may be other than 2 bits).

TABLE 2
u2 u3    phase change method when [u0 u1] = [01]
00       method 01_1
01       method 01_2
10       method 01_3
11       method 01_4

A first example of an interpretation of Table 2 is as follows.

When [u0 u1]=[01] (i.e., u0=0, u1=1) and [u2 u3]=[00] (i.e., u2=0, u3=0), the base station causes phase changer 205A, phase changer 205B to implement a phase change cyclically/regularly on a per-symbol basis in accordance with method 01_1.

Method 01_1: Phase changer 205A sets the coefficient used in the multiplication for the phase change to y1(i) (i indicates a symbol number and is an integer that is greater than or equal to 0). Here, y1(i) is expressed as follows.

[MATH. 53]   y1(i) = e^{j2πi/9}   Equation (53)

Phase changer 205B does not implement a phase change.

When [u0 u1]=[01] (i.e., u0=0, u1=1) and [u2 u3]=[01] (i.e., u2=0, u3=1), the base station causes phase changer 205A, phase changer 205B to implement a phase change cyclically/regularly on a per-symbol basis in accordance with method 01_2.

Method 01_2: Phase changer 205A does not implement a phase change. Phase changer 205B sets the coefficient used in the multiplication for the phase change to y2(i) (i indicates a symbol number and is an integer that is greater than or equal to 0). Here, y2(i) is expressed as follows.

[MATH. 54]   y2(i) = e^{j2πi/9}   Equation (54)

When [u0 u1]=[01] (i.e., u0=0, u1=1) and [u2 u3]=[10] (i.e., u2=1, u3=0), the base station causes phase changer 205A, phase changer 205B to implement a phase change cyclically/regularly on a per-symbol basis in accordance with method 01_3.

Method 01_3: Phase changer 205A sets the coefficient used in the multiplication for the phase change to y1(i) (i indicates a symbol number and is an integer that is greater than or equal to 0). Here, y1(i) is expressed as follows.
[MATH. 55]   y1(i) = e^{j2πi/9}   Equation (55)

Phase changer 205B sets the coefficient used in the multiplication for the phase change to y2(i) (i indicates a symbol number and is an integer that is greater than or equal to 0). Here, y2(i) is expressed as follows.

[MATH. 56]   y2(i) = e^{-j2πi/7}   Equation (56)

When [u0 u1]=[01] (i.e., u0=0, u1=1) and [u2 u3]=[11] (i.e., u2=1, u3=1), the base station causes phase changer 205A, phase changer 205B to implement a phase change cyclically/regularly on a per-symbol basis in accordance with method 01_4.

Method 01_4: Phase changer 205A sets the coefficient used in the multiplication for the phase change to y1(i) (i indicates a symbol number and is an integer that is greater than or equal to 0). Here, y1(i) is expressed as follows.

[MATH. 57]   y1(i) = e^{-j2πi/7}   Equation (57)

Phase changer 205B sets the coefficient used in the multiplication for the phase change to y2(i) (i indicates a symbol number and is an integer that is greater than or equal to 0). Here, y2(i) is expressed as follows.

[MATH. 58]   y2(i) = e^{j2πi/9}   Equation (58)

A second example of an interpretation of Table 2 is as follows.

When [u0 u1]=[01] (i.e., u0=0, u1=1) and [u2 u3]=[00] (i.e., u2=0, u3=0), the base station causes phase changer 205A, phase changer 205B to implement a phase change cyclically/regularly on a per-symbol basis in accordance with method 01_1.

Method 01_1: Phase changer 205A sets the coefficient used in the multiplication for the phase change to y1(i) (i indicates a symbol number and is an integer that is greater than or equal to 0). Here, y1(i) is expressed as follows.

[MATH. 59]   y1(i) = e^{j2πi/3}   Equation (59)

Phase changer 205B does not implement a phase change.

When [u0 u1]=[01] (i.e., u0=0, u1=1) and [u2 u3]=[01] (i.e., u2=0, u3=1), the base station causes phase changer 205A, phase changer 205B to implement a phase change cyclically/regularly on a per-symbol basis in accordance with method 01_2.

Method 01_2: Phase changer 205A sets the coefficient used in the multiplication for the phase change to y1(i) (i indicates a symbol number and is an integer that is greater than or equal to 0). Here, y1(i) is expressed as follows.

[MATH. 60]   y1(i) = e^{j2πi/5}   Equation (60)

Phase changer 205B does not implement a phase change.

When [u0 u1]=[01] (i.e., u0=0, u1=1) and [u2 u3]=[10] (i.e., u2=1, u3=0), the base station causes phase changer 205A, phase changer 205B to implement a phase change cyclically/regularly on a per-symbol basis in accordance with method 01_3.

Method 01_3: Phase changer 205A sets the coefficient used in the multiplication for the phase change to y1(i) (i indicates a symbol number and is an integer that is greater than or equal to 0). Here, y1(i) is expressed as follows.

[MATH. 61]   y1(i) = e^{j2πi/7}   Equation (61)

Phase changer 205B does not implement a phase change.

When [u0 u1]=[01] (i.e., u0=0, u1=1) and [u2 u3]=[11] (i.e., u2=1, u3=1), the base station causes phase changer 205A, phase changer 205B to implement a phase change cyclically/regularly on a per-symbol basis in accordance with method 01_4.

Method 01_4: Phase changer 205A sets the coefficient used in the multiplication for the phase change to y1(i) (i indicates a symbol number and is an integer that is greater than or equal to 0). Here, y1(i) is expressed as follows.

[MATH. 62]   y1(i) = e^{j2πi/9}   Equation (62)

Phase changer 205B does not implement a phase change.
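For intuition, a worked note (added here for the reader, not part of the original description): a coefficient of the form e^{j2πi/N} is cyclic with period N, which is why the methods above are described as implementing a phase change "cyclically/regularly on a per-symbol basis".

```latex
% Periodicity of the cyclic coefficient y_1(i) = e^{j 2\pi i / N}:
y_1(i+N) = e^{j\,2\pi(i+N)/N} = e^{j\,2\pi i/N}\, e^{j\,2\pi} = y_1(i)
% e.g., for N = 3 as in Equation (59):
%   y_1(0) = 1,\quad y_1(1) = e^{j 2\pi/3},\quad y_1(2) = e^{j 4\pi/3}
```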
A third example of an interpretation of Table 2 is as follows.

When [u0 u1]=[01] (i.e., u0=0, u1=1) and [u2 u3]=[00] (i.e., u2=0, u3=0), the base station causes phase changer 205A, phase changer 205B to implement a phase change cyclically/regularly on a per-symbol basis in accordance with method 01_1.

Method 01_1: Phase changer 205A does not implement a phase change. Phase changer 205B sets the coefficient used in the multiplication for the phase change to y2(i) (i indicates a symbol number and is an integer that is greater than or equal to 0). Here, y2(i) is expressed as follows.

[MATH. 63]   y2(i) = e^{j2πi/3}   Equation (63)

When [u0 u1]=[01] (i.e., u0=0, u1=1) and [u2 u3]=[01] (i.e., u2=0, u3=1), the base station causes phase changer 205A, phase changer 205B to implement a phase change cyclically/regularly on a per-symbol basis in accordance with method 01_2.

Method 01_2: Phase changer 205A does not implement a phase change. Phase changer 205B sets the coefficient used in the multiplication for the phase change to y2(i) (i indicates a symbol number and is an integer that is greater than or equal to 0). Here, y2(i) is expressed as follows.

[MATH. 64]   y2(i) = e^{j2πi/5}   Equation (64)

When [u0 u1]=[01] (i.e., u0=0, u1=1) and [u2 u3]=[10] (i.e., u2=1, u3=0), the base station causes phase changer 205A, phase changer 205B to implement a phase change cyclically/regularly on a per-symbol basis in accordance with method 01_3.

Method 01_3: Phase changer 205A does not implement a phase change. Phase changer 205B sets the coefficient used in the multiplication for the phase change to y2(i) (i indicates a symbol number and is an integer that is greater than or equal to 0). Here, y2(i) is expressed as follows.

[MATH. 65]   y2(i) = e^{j2πi/7}   Equation (65)

When [u0 u1]=[01] (i.e., u0=0, u1=1) and [u2 u3]=[11] (i.e., u2=1, u3=1), the base station causes phase changer 205A, phase changer 205B to implement a phase change cyclically/regularly on a per-symbol basis in accordance with method 01_4.

Method 01_4: Phase changer 205A does not implement a phase change. Phase changer 205B sets the coefficient used in the multiplication for the phase change to y2(i) (i indicates a symbol number and is an integer that is greater than or equal to 0). Here, y2(i) is expressed as follows.

[MATH. 66]   y2(i) = e^{j2πi/9}   Equation (66)

A fourth example of an interpretation of Table 2 is as follows.

When [u0 u1]=[01] (i.e., u0=0, u1=1) and [u2 u3]=[00] (i.e., u2=0, u3=0), the base station causes phase changer 205A, phase changer 205B to implement a phase change cyclically/regularly on a per-symbol basis in accordance with method 01_1.

Method 01_1: Phase changer 205A sets the coefficient used in the multiplication for the phase change to y1(i) (i indicates a symbol number and is an integer that is greater than or equal to 0). Here, y1(i) is expressed as follows.

[MATH. 67]   y1(i) = e^{j2πi/5}   Equation (67)

Phase changer 205B sets the coefficient used in the multiplication for the phase change to y2(i) (i indicates a symbol number and is an integer that is greater than or equal to 0). Here, y2(i) is expressed as follows.

[MATH. 68]   y2(i) = e^{-j2πi/3}   Equation (68)

When [u0 u1]=[01] (i.e., u0=0, u1=1) and [u2 u3]=[01] (i.e., u2=0, u3=1), the base station causes phase changer 205A, phase changer 205B to implement a phase change cyclically/regularly on a per-symbol basis in accordance with method 01_2.

Method 01_2: Phase changer 205A sets the coefficient used in the multiplication for the phase change to y1(i) (i indicates a symbol number and is an integer that is greater than or equal to 0). Here, y1(i) is expressed as follows.
[MATH. 69]   y1(i) = e^{j2πi/7}   Equation (69)

Phase changer 205B sets the coefficient used in the multiplication for the phase change to y2(i) (i indicates a symbol number and is an integer that is greater than or equal to 0). Here, y2(i) is expressed as follows.

[MATH. 70]   y2(i) = e^{-j2πi/3}   Equation (70)

When [u0 u1]=[01] (i.e., u0=0, u1=1) and [u2 u3]=[10] (i.e., u2=1, u3=0), the base station causes phase changer 205A, phase changer 205B to implement a phase change cyclically/regularly on a per-symbol basis in accordance with method 01_3.

Method 01_3: Phase changer 205A sets the coefficient used in the multiplication for the phase change to y1(i) (i indicates a symbol number and is an integer that is greater than or equal to 0). Here, y1(i) is expressed as follows.

[MATH. 71]   y1(i) = e^{j2πi/7}   Equation (71)

Phase changer 205B sets the coefficient used in the multiplication for the phase change to y2(i) (i indicates a symbol number and is an integer that is greater than or equal to 0). Here, y2(i) is expressed as follows.

[MATH. 72]   y2(i) = e^{-j2πi/5}   Equation (72)

When [u0 u1]=[01] (i.e., u0=0, u1=1) and [u2 u3]=[11] (i.e., u2=1, u3=1), the base station causes phase changer 205A, phase changer 205B to implement a phase change cyclically/regularly on a per-symbol basis in accordance with method 01_4.

Method 01_4: Phase changer 205A sets the coefficient used in the multiplication for the phase change to y1(i) (i indicates a symbol number and is an integer that is greater than or equal to 0). Here, y1(i) is expressed as follows.

[MATH. 73]   y1(i) = e^{j2πi/9}   Equation (73)

Phase changer 205B sets the coefficient used in the multiplication for the phase change to y2(i) (i indicates a symbol number and is an integer that is greater than or equal to 0). Here, y2(i) is expressed as follows.

[MATH. 74]   y2(i) = e^{-j2πi/5}   Equation (74)

Although first through fourth examples are given above, the detailed phase change method employed by phase changer 205A, phase changer 205B is not limited to these examples.

<1> In phase changer 205A, a phase change is implemented cyclically/regularly on a per-symbol basis.
<2> In phase changer 205B, a phase change is implemented cyclically/regularly on a per-symbol basis.
<3> In phase changer 205A and phase changer 205B, a phase change is implemented cyclically/regularly on a per-symbol basis.

So long as a method according to one or more of <1>, <2>, and <3> is set in detail according to [u2 u3], it may be implemented in the same manner as described above (a dispatch sketch follows).
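The following Python sketch shows one way a transmitter could dispatch the per-symbol cyclic phase change on the control bits [u2 u3]. The dictionary contents mirror Equations (59) through (62) of the second example above; the structure, names, and everything else are assumptions made for illustration.

```python
import numpy as np

# Second example of Table 2: phase changer 205A uses y1(i) = e^{j*2*pi*i/N}
# with period N selected by [u2 u3]; phase changer 205B does nothing.
PERIOD_BY_U2U3 = {(0, 0): 3,   # method 01_1, Equation (59)
                  (0, 1): 5,   # method 01_2, Equation (60)
                  (1, 0): 7,   # method 01_3, Equation (61)
                  (1, 1): 9}   # method 01_4, Equation (62)

def y1(i: np.ndarray, u2: int, u3: int) -> np.ndarray:
    """Cyclic per-symbol phase coefficients for symbol numbers i."""
    n = PERIOD_BY_U2U3[(u2, u3)]
    return np.exp(1j * 2 * np.pi * i / n)

# Example: the first six coefficients when [u2 u3] = [10] (method 01_3).
coeffs = y1(np.arange(6), 1, 0)
```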
When [u0 u1], which is described above and used to control operations performed by phase changers 205A, 205B included in the base station, is set to [10] (i.e., u0=1, u1=0), that is to say, when phase changers 205A, 205B implement a phase change using a specific phase change value (set), control information for setting the phase change in detail is set to u4, u5. The relationship between [u4 u5] and the phase change implemented by phase changers 205A, 205B in detail is illustrated in Table 3 (note that u4, u5 are, for example, transmitted by the base station as some of the control information symbols, namely, other symbols 403, 503. The terminal obtains [u4 u5] included in the control information symbols, namely, other symbols 403, 503, becomes aware of the operations performed by phase changers 205A, 205B from [u4 u5], and demodulates and decodes the data symbols. Also, the control information for "detailed phase change" is 2-bit information here, but the number of bits may be other than 2 bits).

TABLE 3
u4 u5    phase change method when [u0 u1] = [10]
00       method 10_1
01       method 10_2
10       method 10_3
11       method 10_4

A first example of an interpretation of Table 3 is as follows.

When [u0 u1]=[10] (i.e., u0=1, u1=0) and [u4 u5]=[00] (i.e., u4=0, u5=0), the base station causes phase changer 205A, phase changer 205B to implement a phase change using a specific phase change value (set) in accordance with method 10_1.

Method 10_1: Phase changer 205A sets the coefficient used in the multiplication for the phase change to y1(i) (i indicates a symbol number and is an integer that is greater than or equal to 0). Here, y1(i) is expressed as follows (this acts as a fixed phase value independent of symbol number).

[MATH. 75]   y1(i) = e^{jπ/4}   Equation (75)

Phase changer 205B does not implement a phase change.

When [u0 u1]=[10] (i.e., u0=1, u1=0) and [u4 u5]=[01] (i.e., u4=0, u5=1), the base station causes phase changer 205A, phase changer 205B to implement a phase change using a specific phase change value (set) in accordance with method 10_2.

Method 10_2: Phase changer 205A does not implement a phase change. Phase changer 205B sets the coefficient used in the multiplication for the phase change to y2(i) (i indicates a symbol number and is an integer that is greater than or equal to 0). Here, y2(i) is expressed as follows (this acts as a fixed phase value independent of symbol number).

[MATH. 76]   y2(i) = e^{jπ/3}   Equation (76)

When [u0 u1]=[10] (i.e., u0=1, u1=0) and [u4 u5]=[10] (i.e., u4=1, u5=0), the base station causes phase changer 205A, phase changer 205B to implement a phase change using a specific phase change value (set) in accordance with method 10_3.

Method 10_3: Phase changer 205A sets the coefficient used in the multiplication for the phase change to y1(i) (i indicates a symbol number and is an integer that is greater than or equal to 0). Here, y1(i) is expressed as follows (this acts as a fixed phase value independent of symbol number).

[MATH. 77]   y1(i) = e^{jπ/4}   Equation (77)

Phase changer 205B sets the coefficient used in the multiplication for the phase change to y2(i) (i indicates a symbol number and is an integer that is greater than or equal to 0). Here, y2(i) is expressed as follows (this acts as a fixed phase value independent of symbol number).

[MATH. 78]   y2(i) = e^{-jπ/8}   Equation (78)

When [u0 u1]=[10] (i.e., u0=1, u1=0) and [u4 u5]=[11] (i.e., u4=1, u5=1), the base station causes phase changer 205A, phase changer 205B to implement a phase change using a specific phase change value (set) in accordance with method 10_4.

Method 10_4: Phase changer 205A sets the coefficient used in the multiplication for the phase change to y1(i) (i indicates a symbol number and is an integer that is greater than or equal to 0). Here, y1(i) is expressed as follows (this acts as a fixed phase value independent of symbol number).

[MATH. 79]   y1(i) = e^{-j2π/7}   Equation (79)

Phase changer 205B sets the coefficient used in the multiplication for the phase change to y2(i) (i indicates a symbol number and is an integer that is greater than or equal to 0). Here, y2(i) is expressed as follows (this acts as a fixed phase value independent of symbol number).

[MATH. 80]   y2(i) = e^{j2π/9}   Equation (80)

A second example of an interpretation of Table 3 is as follows.

When [u0 u1]=[10] (i.e., u0=1, u1=0) and [u4 u5]=[00] (i.e., u4=0, u5=0), the base station causes phase changer 205A, phase changer 205B to implement a phase change using a specific phase change value (set) in accordance with method 10_1.
Method 10_1: Phase changer 205A sets the coefficient used in the multiplication for the phase change to y1(i) (i indicates a symbol number and is an integer that is greater than or equal to 0). Here, y1(i) is expressed as follows (this acts as a fixed phase value independent of symbol number).

[MATH. 81]   y1(i) = e^{j0}   Equation (81)

(In the case of Equation (81), phase changer 205A does not implement a phase change.) Phase changer 205B does not implement a phase change.

When [u0 u1]=[10] (i.e., u0=1, u1=0) and [u4 u5]=[01] (i.e., u4=0, u5=1), the base station causes phase changer 205A, phase changer 205B to implement a phase change using a specific phase change value (set) in accordance with method 10_2.

Method 10_2: Phase changer 205A sets the coefficient used in the multiplication for the phase change to y1(i) (i indicates a symbol number and is an integer that is greater than or equal to 0). Here, y1(i) is expressed as follows (this acts as a fixed phase value independent of symbol number).

[MATH. 82]   y1(i) = e^{jπ/8}   Equation (82)

Phase changer 205B does not implement a phase change.

When [u0 u1]=[10] (i.e., u0=1, u1=0) and [u4 u5]=[10] (i.e., u4=1, u5=0), the base station causes phase changer 205A, phase changer 205B to implement a phase change using a specific phase change value (set) in accordance with method 10_3.

Method 10_3: Phase changer 205A sets the coefficient used in the multiplication for the phase change to y1(i) (i indicates a symbol number and is an integer that is greater than or equal to 0). Here, y1(i) is expressed as follows (this acts as a fixed phase value independent of symbol number).

[MATH. 83]   y1(i) = e^{jπ/4}   Equation (83)

Phase changer 205B does not implement a phase change.

When [u0 u1]=[10] (i.e., u0=1, u1=0) and [u4 u5]=[11] (i.e., u4=1, u5=1), the base station causes phase changer 205A, phase changer 205B to implement a phase change using a specific phase change value (set) in accordance with method 10_4.

Method 10_4: Phase changer 205A sets the coefficient used in the multiplication for the phase change to y1(i) (i indicates a symbol number and is an integer that is greater than or equal to 0). Here, y1(i) is expressed as follows (this acts as a fixed phase value independent of symbol number).

[MATH. 84]   y1(i) = e^{j3π/8}   Equation (84)

Phase changer 205B does not implement a phase change.

A third example of an interpretation of Table 3 is as follows.

When [u0 u1]=[10] (i.e., u0=1, u1=0) and [u4 u5]=[00] (i.e., u4=0, u5=0), the base station causes phase changer 205A, phase changer 205B to implement a phase change using a specific phase change value (set) in accordance with method 10_1.

Method 10_1: Phase changer 205B sets the coefficient used in the multiplication for the phase change to y2(i) (i indicates a symbol number and is an integer that is greater than or equal to 0). Here, y2(i) is expressed as follows (this acts as a fixed phase value independent of symbol number).

[MATH. 85]   y2(i) = e^{j0}   Equation (85)

(In the case of Equation (85), phase changer 205B does not implement a phase change.) Phase changer 205A does not implement a phase change.

When [u0 u1]=[10] (i.e., u0=1, u1=0) and [u4 u5]=[01] (i.e., u4=0, u5=1), the base station causes phase changer 205A, phase changer 205B to implement a phase change using a specific phase change value (set) in accordance with method 10_2.

Method 10_2: Phase changer 205B sets the coefficient used in the multiplication for the phase change to y2(i) (i indicates a symbol number and is an integer that is greater than or equal to 0).
Here, y2(i) is expressed as follows (this acts as a fixed phase value independent of symbol number).

[MATH. 86]   y2(i) = e^{jπ/8}   Equation (86)

Phase changer 205A does not implement a phase change.

When [u0 u1]=[10] (i.e., u0=1, u1=0) and [u4 u5]=[10] (i.e., u4=1, u5=0), the base station causes phase changer 205A, phase changer 205B to implement a phase change using a specific phase change value (set) in accordance with method 10_3.

Method 10_3: Phase changer 205B sets the coefficient used in the multiplication for the phase change to y2(i) (i indicates a symbol number and is an integer that is greater than or equal to 0). Here, y2(i) is expressed as follows (this acts as a fixed phase value independent of symbol number).

[MATH. 87]   y2(i) = e^{jπ/4}   Equation (87)

Phase changer 205A does not implement a phase change.

When [u0 u1]=[10] (i.e., u0=1, u1=0) and [u4 u5]=[11] (i.e., u4=1, u5=1), the base station causes phase changer 205A, phase changer 205B to implement a phase change using a specific phase change value (set) in accordance with method 10_4.

Method 10_4: Phase changer 205B sets the coefficient used in the multiplication for the phase change to y2(i) (i indicates a symbol number and is an integer that is greater than or equal to 0). Here, y2(i) is expressed as follows (this acts as a fixed phase value independent of symbol number).

[MATH. 88]   y2(i) = e^{j3π/8}   Equation (88)

Phase changer 205A does not implement a phase change.

A fourth example of an interpretation of Table 3 is as follows.

When [u0 u1]=[10] (i.e., u0=1, u1=0) and [u4 u5]=[00] (i.e., u4=0, u5=0), the base station causes phase changer 205A, phase changer 205B to implement a phase change using a specific phase change value (set) in accordance with method 10_1.

Method 10_1: Phase changer 205A sets the coefficient used in the multiplication for the phase change to y1(i) (i indicates a symbol number and is an integer that is greater than or equal to 0). Here, y1(i) is expressed as follows (this acts as a fixed phase value independent of symbol number).

[MATH. 89]   y1(i) = e^{jπ/8}   Equation (89)

Phase changer 205B sets the coefficient used in the multiplication for the phase change to y2(i) (i indicates a symbol number and is an integer that is greater than or equal to 0). Here, y2(i) is expressed as follows (this acts as a fixed phase value independent of symbol number).

[MATH. 90]   y2(i) = e^{j0}   Equation (90)

(In the case of Equation (90), phase changer 205B does not implement a phase change.)

When [u0 u1]=[10] (i.e., u0=1, u1=0) and [u4 u5]=[01] (i.e., u4=0, u5=1), the base station causes phase changer 205A, phase changer 205B to implement a phase change using a specific phase change value (set) in accordance with method 10_2.

Method 10_2: Phase changer 205A sets the coefficient used in the multiplication for the phase change to y1(i) (i indicates a symbol number and is an integer that is greater than or equal to 0). Here, y1(i) is expressed as follows (this acts as a fixed phase value independent of symbol number).

[MATH. 91]   y1(i) = e^{jπ/8}   Equation (91)

Phase changer 205B sets the coefficient used in the multiplication for the phase change to y2(i) (i indicates a symbol number and is an integer that is greater than or equal to 0). Here, y2(i) is expressed as follows (this acts as a fixed phase value independent of symbol number).
[MATH. 92]   y2(i) = e^{-jπ/8}   Equation (92)

When [u0 u1]=[10] (i.e., u0=1, u1=0) and [u4 u5]=[10] (i.e., u4=1, u5=0), the base station causes phase changer 205A, phase changer 205B to implement a phase change using a specific phase change value (set) in accordance with method 10_3.

Method 10_3: Phase changer 205A sets the coefficient used in the multiplication for the phase change to y1(i) (i indicates a symbol number and is an integer that is greater than or equal to 0). Here, y1(i) is expressed as follows (this acts as a fixed phase value independent of symbol number).

[MATH. 93]   y1(i) = e^{jπ/4}   Equation (93)

Phase changer 205B sets the coefficient used in the multiplication for the phase change to y2(i) (i indicates a symbol number and is an integer that is greater than or equal to 0). Here, y2(i) is expressed as follows (this acts as a fixed phase value independent of symbol number).

[MATH. 94]   y2(i) = e^{-jπ/8}   Equation (94)

When [u0 u1]=[10] (i.e., u0=1, u1=0) and [u4 u5]=[11] (i.e., u4=1, u5=1), the base station causes phase changer 205A, phase changer 205B to implement a phase change using a specific phase change value (set) in accordance with method 10_4.

Method 10_4: Phase changer 205A sets the coefficient used in the multiplication for the phase change to y1(i) (i indicates a symbol number and is an integer that is greater than or equal to 0). Here, y1(i) is expressed as follows (this acts as a fixed phase value independent of symbol number).

[MATH. 95]   y1(i) = e^{j0}   Equation (95)

(In the case of Equation (95), phase changer 205A does not implement a phase change.) Phase changer 205B sets the coefficient used in the multiplication for the phase change to y2(i) (i indicates a symbol number and is an integer that is greater than or equal to 0). Here, y2(i) is expressed as follows (this acts as a fixed phase value independent of symbol number).

[MATH. 96]   y2(i) = e^{-jπ/4}   Equation (96)

Although first through fourth examples are given above, the detailed phase change method employed by phase changer 205A, phase changer 205B is not limited to these examples.

<4> In phase changer 205A, a phase change is implemented using a specific phase change value.
<5> In phase changer 205B, a phase change is implemented using a specific phase change value.
<6> In phase changer 205A and phase changer 205B, a phase change is implemented using a specific phase change value.

So long as a method according to one or more of <4>, <5>, and <6> is set in detail according to [u4 u5], it may be implemented in the same manner as described above (a sketch of this fixed-value mode follows).
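To make the fixed-value mode concrete, here is a brief Python sketch following the second example of Table 3, in which only phase changer 205A acts. The mapping mirrors Equations (81) through (84); the function name and structure are assumptions for illustration. Unlike the cyclic mode, the coefficient here is constant across symbol numbers.

```python
import numpy as np

# Specific (fixed) phase change mode, [u0 u1] = [10], second example of
# Table 3: phase changer 205A multiplies every symbol by one constant
# coefficient selected by [u4 u5]; phase changer 205B does nothing.
PHASE_BY_U4U5 = {(0, 0): 0.0,            # Equation (81): e^{j0}, no change
                 (0, 1): np.pi / 8,      # Equation (82)
                 (1, 0): np.pi / 4,      # Equation (83)
                 (1, 1): 3 * np.pi / 8}  # Equation (84)

def apply_specific_phase(symbols: np.ndarray, u4: int, u5: int) -> np.ndarray:
    """Multiply all symbols by the fixed coefficient y1 = e^{j*theta}."""
    theta = PHASE_BY_U4U5[(u4, u5)]
    return symbols * np.exp(1j * theta)  # same value for every symbol number
```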
Moreover, in phase changers 205A, 205B included in the base station, a combination of the method of implementing a phase change cyclically/regularly on a per-symbol basis and the method of implementing a phase change using a specific phase change value may be used. A mode in which phase changers 205A, 205B use such a combination is indicated as "reserve" in Table 1, and is allotted as [u0 u1]=[11] (i.e., u0=1, u1=1).

When [u0 u1], which is described above and used to control operations performed by phase changers 205A, 205B included in the base station, is set to [11] (i.e., u0=1, u1=1), that is to say, when phase changers 205A, 205B implement a phase change using a combination of the method of implementing a phase change cyclically/regularly on a per-symbol basis and the method of implementing a phase change using a specific phase change value, control information for setting the phase change in detail is set to u6, u7. The relationship between [u6 u7] and the phase change implemented by phase changers 205A, 205B in detail is illustrated in Table 4 (note that u6, u7 are, for example, transmitted by the base station as some of the control information symbols, namely, other symbols 403, 503. The terminal obtains [u6 u7] included in the control information symbols, namely, other symbols 403, 503, becomes aware of the operations performed by phase changers 205A, 205B from [u6 u7], and demodulates and decodes the data symbols. Also, the control information for "detailed phase change" is 2-bit information here, but the number of bits may be other than 2 bits).

TABLE 4
u6 u7    phase change method when [u0 u1] = [11]
00       method 11_1
01       method 11_2
10       method 11_3
11       method 11_4

A first example of an interpretation of Table 4 is as follows.

When [u0 u1]=[11] (i.e., u0=1, u1=1) and [u6 u7]=[00] (i.e., u6=0, u7=0), the base station causes phase changer 205A, phase changer 205B to implement a phase change using a combination of the method of implementing a phase change cyclically/regularly on a per-symbol basis and the method of implementing a phase change using a specific phase change value in accordance with method 11_1.

Method 11_1: Phase changer 205A sets the coefficient used in the multiplication for the phase change to y1(i) (i indicates a symbol number and is an integer that is greater than or equal to 0). Here, y1(i) is expressed as follows.

[MATH. 97]   y1(i) = e^{j2πi/9}   Equation (97)

Phase changer 205B sets the coefficient used in the multiplication for the phase change to y2(i) (i indicates a symbol number and is an integer that is greater than or equal to 0). Here, y2(i) is expressed as follows.

[MATH. 98]   y2(i) = e^{j0}   Equation (98)

When [u0 u1]=[11] (i.e., u0=1, u1=1) and [u6 u7]=[01] (i.e., u6=0, u7=1), the base station causes phase changer 205A, phase changer 205B to implement a phase change using a combination of the method of implementing a phase change cyclically/regularly on a per-symbol basis and the method of implementing a phase change using a specific phase change value in accordance with method 11_2.

Method 11_2: Phase changer 205A sets the coefficient used in the multiplication for the phase change to y1(i) (i indicates a symbol number and is an integer that is greater than or equal to 0). Here, y1(i) is expressed as follows.

[MATH. 99]   y1(i) = e^{j2πi/9}   Equation (99)

Phase changer 205B sets the coefficient used in the multiplication for the phase change to y2(i) (i indicates a symbol number and is an integer that is greater than or equal to 0). Here, y2(i) is expressed as follows.

[MATH. 100]   y2(i) = e^{jπ/4}   Equation (100)

When [u0 u1]=[11] (i.e., u0=1, u1=1) and [u6 u7]=[10] (i.e., u6=1, u7=0), the base station causes phase changer 205A, phase changer 205B to implement a phase change using a combination of the method of implementing a phase change cyclically/regularly on a per-symbol basis and the method of implementing a phase change using a specific phase change value in accordance with method 11_3.
Method 11_3: Phase changer 205A sets the coefficient used in the multiplication for the phase change to y1(i) (i indicates a symbol number and is an integer that is greater than or equal to 0). Here, y1(i) is expressed as follows.

[MATH. 101]   y1(i) = e^{j0}   Equation (101)

Phase changer 205B sets the coefficient used in the multiplication for the phase change to y2(i) (i indicates a symbol number and is an integer that is greater than or equal to 0). Here, y2(i) is expressed as follows.

[MATH. 102]   y2(i) = e^{j2πi/9}   Equation (102)

When [u0 u1]=[11] (i.e., u0=1, u1=1) and [u6 u7]=[11] (i.e., u6=1, u7=1), the base station causes phase changer 205A, phase changer 205B to implement a phase change using a combination of the method of implementing a phase change cyclically/regularly on a per-symbol basis and the method of implementing a phase change using a specific phase change value in accordance with method 11_4.

Method 11_4: Phase changer 205A sets the coefficient used in the multiplication for the phase change to y1(i) (i indicates a symbol number and is an integer that is greater than or equal to 0). Here, y1(i) is expressed as follows.

[MATH. 103]   y1(i) = e^{jπ/4}   Equation (103)

Phase changer 205B sets the coefficient used in the multiplication for the phase change to y2(i) (i indicates a symbol number and is an integer that is greater than or equal to 0). Here, y2(i) is expressed as follows.

[MATH. 104]   y2(i) = e^{j2πi/9}   Equation (104)

A second example of an interpretation of Table 4 is as follows.

When [u0 u1]=[11] (i.e., u0=1, u1=1) and [u6 u7]=[00] (i.e., u6=0, u7=0), the base station causes phase changer 205A, phase changer 205B to implement a phase change using a combination of the method of implementing a phase change cyclically/regularly on a per-symbol basis and the method of implementing a phase change using a specific phase change value in accordance with method 11_1.

Method 11_1: Phase changer 205A sets the coefficient used in the multiplication for the phase change to y1(i) (i indicates a symbol number and is an integer that is greater than or equal to 0). Here, y1(i) is expressed as follows.

[MATH. 105]   y1(i) = e^{j2πi/9}   Equation (105)

Phase changer 205B sets the coefficient used in the multiplication for the phase change to y2(i) (i indicates a symbol number and is an integer that is greater than or equal to 0). Here, y2(i) is expressed as follows.

[MATH. 106]   y2(i) = e^{j0}   Equation (106)

When [u0 u1]=[11] (i.e., u0=1, u1=1) and [u6 u7]=[01] (i.e., u6=0, u7=1), the base station causes phase changer 205A, phase changer 205B to implement a phase change using a combination of the method of implementing a phase change cyclically/regularly on a per-symbol basis and the method of implementing a phase change using a specific phase change value in accordance with method 11_2.

Method 11_2: Phase changer 205A sets the coefficient used in the multiplication for the phase change to y1(i) (i indicates a symbol number and is an integer that is greater than or equal to 0). Here, y1(i) is expressed as follows.

[MATH. 107]   y1(i) = e^{j2πi/9}   Equation (107)

Phase changer 205B sets the coefficient used in the multiplication for the phase change to y2(i) (i indicates a symbol number and is an integer that is greater than or equal to 0). Here, y2(i) is expressed as follows.
[MATH.108]y⁢2⁢(i)=ej⁢π8Equation⁢(108) When [u0 u1]=[11] (i.e., u0=1, u1=1), and [u6 u7]=[10] (i.e., u6=1, u7=0), the base station causes phase changer205A, phase changer205B to implement a phase change using a combination the method of implementing a phase change cyclically/regularly on a per-symbol basis and the method of implementing a phase change using a specific phase change value in accordance with method 11_3. Method 11_3: Phase changer205A sets the coefficient used in the multiplication for the phase change to y1(i) (i indicates a symbol number and is an integer that is greater than or equal to 0). Here, y1(i) is expressed as follows. [MATH.109]y⁢1⁢(i)=ej⁢2×π×i9Equation⁢(109) Phase changer205B sets the coefficient used in the multiplication for the phase change to y2(i) (i indicates a symbol number and is an integer that is greater than or equal to 0). Here, y2(i) is expressed as follows. [MATH.110]y⁢2⁢(i)=ej⁢π4Equation⁢(110) When [u0 u1]=[11] (i.e., u0=1, u1=1), and [u6 u7]=[11] (i.e., u6=1, u7=1), the base station causes phase changer205A, phase changer205B to implement a phase change using a combination the method of implementing a phase change cyclically/regularly on a per-symbol basis and the method of implementing a phase change using a specific phase change value in accordance with method 11_4. Method 11_4: Phase changer205A sets the coefficient used in the multiplication for the phase change to y1(i) (i indicates a symbol number and is an integer that is greater than or equal to 0). Here, y1(i) is expressed as follows. [MATH.111]y⁢1⁢(i)=ej⁢2×π×i9Equation⁢(111) Phase changer205B sets the coefficient used in the multiplication for the phase change to y2(i) (i indicates a symbol number and is an integer that is greater than or equal to 0). Here, y2(i) is expressed as follows. [MATH.112]y⁢2⁢(i)=ej⁢3×π8Equation⁢(112) A third example of an interpretation of Table 4 is as follows. When [u0 u1]=[11] (i.e., u0=1, u1=1), and [u6 u7]=[00] (i.e., u6=0, u7=0), the base station causes phase changer205A, phase changer205B to implement a phase change using a combination the method of implementing a phase change cyclically/regularly on a per-symbol basis and the method of implementing a phase change using a specific phase change value in accordance with method 11_1. Method 11_1: Phase changer205A sets the coefficient used in the multiplication for the phase change to y1(i) (i indicates a symbol number and is an integer that is greater than or equal to 0). Here, y1(i) is expressed as follows. [MATH.113]y⁢1⁢(i)=ej⁢2×π×i8Equation⁢(113) Phase changer205B sets the coefficient used in the multiplication for the phase change to y2(i) (i indicates a symbol number and is an integer that is greater than or equal to 0). Here, y2(i) is expressed as follows. [MATH.114]y⁢2⁢(i)=ej⁢π4Equation⁢(114) When [u0 u1]=[11] (i.e., u0=1, u1=1), and [u6 u7]=[01] (i.e., u6=0, u7=1), the base station causes phase changer205A, phase changer205B to implement a phase change using a combination the method of implementing a phase change cyclically/regularly on a per-symbol basis and the method of implementing a phase change using a specific phase change value in accordance with method 11_2. Method 11_2: Phase changer205A sets the coefficient used in the multiplication for the phase change to y1(i) (i indicates a symbol number and is an integer that is greater than or equal to 0). Here, y1(i) is expressed as follows. 
[MATH.115]y⁢1⁢(i)=ej⁢2×π×i5Equation⁢(115) Phase changer205B sets the coefficient used in the multiplication for the phase change to y2(i) (i indicates a symbol number and is an integer that is greater than or equal to 0). Here, y2(i) is expressed as follows. [MATH.116]y⁢2⁢(i)=ej⁢π4Equation⁢(116) When [u0 u1]=[11] (i.e., u0=1, u1=1), and [u6 u7]=[10] (i.e., u6=1, u7=0), the base station causes phase changer205A, phase changer205B to implement a phase change using a combination the method of implementing a phase change cyclically/regularly on a per-symbol basis and the method of implementing a phase change using a specific phase change value in accordance with method 11_3. Method 11_3: Phase changer205A sets the coefficient used in the multiplication for the phase change to y1(i) (i indicates a symbol number and is an integer that is greater than or equal to 0). Here, y1(i) is expressed as follows. [MATH.117]y⁢1⁢(i)=ej⁢2×π×i7Equation⁢(117) Phase changer205B sets the coefficient used in the multiplication for the phase change to y2(i) (i indicates a symbol number and is an integer that is greater than or equal to 0). Here, y2(i) is expressed as follows. [MATH.118]y⁢2⁢(i)=ej⁢π4Equation⁢(118) When [u0 u1]=[11] (i.e., u0=1, u1=1), and [u6 u7]=[11] (i.e., u6=1, u7=1), the base station causes phase changer205A, phase changer205B to implement a phase change using a combination the method of implementing a phase change cyclically/regularly on a per-symbol basis and the method of implementing a phase change using a specific phase change value in accordance with method 11_4. Method 11_4: Phase changer205A sets the coefficient used in the multiplication for the phase change to y1(i) (i indicates a symbol number and is an integer that is greater than or equal to 0). Here, y1(i) is expressed as follows. [MATH.119]y⁢1⁢(i)=ej⁢2×π×i9Equation⁢(119) Phase changer205B sets the coefficient used in the multiplication for the phase change to y2(i) (i indicates a symbol number and is an integer that is greater than or equal to 0). Here, y2(i) is expressed as follows. [MATH.120]y⁢2⁢(i)=ej⁢π4Equation⁢(120) A fourth example of an interpretation of Table 4 is as follows. When [u0 u1]=[11] (i.e., u0=1, u1=1), and [u6 u7]=[00] (i.e., u6=0, u7=0), the base station causes phase changer205A, phase changer205B to implement a phase change using a combination the method of implementing a phase change cyclically/regularly on a per-symbol basis and the method of implementing a phase change using a specific phase change value in accordance with method 11_1. Method 11_1: Phase changer205A sets the coefficient used in the multiplication for the phase change to y1(i) (i indicates a symbol number and is an integer that is greater than or equal to 0). Here, y1(i) is expressed as follows. [MATH. 121] y1(i)=ej0Equation (121) Phase changer205B sets the coefficient used in the multiplication for the phase change to y2(i) (i indicates a symbol number and is an integer that is greater than or equal to 0). Here, y2(i) is expressed as follows. [MATH.122]y⁢2⁢(i)=ej⁢2×π×i9Equation⁢(122) When [u0 u1]=[11] (i.e., u0=1, u1=1), and [u6 u7]=[Oi] (i.e., u6=0, u7=1), the base station causes phase changer205A, phase changer205B to implement a phase change using a combination the method of implementing a phase change cyclically/regularly on a per-symbol basis and the method of implementing a phase change using a specific phase change value in accordance with method 11_2. 
Method 11_2: Phase changer205A sets the coefficient used in the multiplication for the phase change to y1(i) (i indicates a symbol number and is an integer that is greater than or equal to 0). Here, y1(i) is expressed as follows. [MATH.123]y⁢1⁢(i)=ej⁢π8Equation⁢(123) Phase changer205B sets the coefficient used in the multiplication for the phase change to y2(i) (i indicates a symbol number and is an integer that is greater than or equal to 0). Here, y2(i) is expressed as follows. [MATH.124]y⁢2⁢(i)=ej⁢2×π×i9Equation⁢(124) When [u0 u1]=[11] (i.e., u0=1, u1=1), and [u6 u7]=[10] (i.e., u6=1, u7=0), the base station causes phase changer205A, phase changer205B to implement a phase change using a combination the method of implementing a phase change cyclically/regularly on a per-symbol basis and the method of implementing a phase change using a specific phase change value in accordance with method 11_3. Method 11_3: Phase changer205A sets the coefficient used in the multiplication for the phase change to y1(i) (i indicates a symbol number and is an integer that is greater than or equal to 0). Here, y1(i) is expressed as follows. [MATH.125]y⁢1⁢(i)=ej⁢π4Equation⁢(125) Phase changer205B sets the coefficient used in the multiplication for the phase change to y2(i) (i indicates a symbol number and is an integer that is greater than or equal to 0). Here, y2(i) is expressed as follows. [MATH.126]y⁢2⁢(i)=ej⁢2×π×i9Equation⁢(126) When [u0 u1]=[11] (i.e., u0=1, u1=1), and [u6 u7]=[11] (i.e., u6=1, u7=1), the base station causes phase changer205A, phase changer205B to implement a phase change using a combination the method of implementing a phase change cyclically/regularly on a per-symbol basis and the method of implementing a phase change using a specific phase change value in accordance with method 11_4. Method 11_4: Phase changer205A sets the coefficient used in the multiplication for the phase change to y1(i) (i indicates a symbol number and is an integer that is greater than or equal to 0). Here, y1(i) is expressed as follows. [MATH.127]y⁢1⁢(i)=ej⁢3×π8Equation⁢(127) Phase changer205B sets the coefficient used in the multiplication for the phase change to y2(i) (i indicates a symbol number and is an integer that is greater than or equal to 0). Here, y2(i) is expressed as follows. [MATH.128]y⁢2⁢(i)=ej⁢2×π×i9Equation⁢(128) A fifth example of an interpretation of Table 4 is as follows. When [u0 u1]=[11] (i.e., u0=1, u1=1), and [u6 u7]=[00] (i.e., u6=0, u7=0), the base station causes phase changer205A, phase changer205B to implement a phase change using a combination the method of implementing a phase change cyclically/regularly on a per-symbol basis and the method of implementing a phase change using a specific phase change value in accordance with method 11_1. Method 11_1: Phase changer205A sets the coefficient used in the multiplication for the phase change to y1(i) (i indicates a symbol number and is an integer that is greater than or equal to 0). Here, y1(i) is expressed as follows. [MATH.129]y⁢1⁢(i)=ej⁢π4Equation⁢(129) Phase changer205B sets the coefficient used in the multiplication for the phase change to y2(i) (i indicates a symbol number and is an integer that is greater than or equal to 0). Here, y2(i) is expressed as follows. 
[MATH.130]y⁢2⁢(i)=ej⁢2×π×i3Equation⁢(130) When [u0 u1]=[11] (i.e., u0=1, u1=1), and [u6 u7]=[01] (i.e., u6=0, u7=1), the base station causes phase changer205A, phase changer205B to implement a phase change using a combination the method of implementing a phase change cyclically/regularly on a per-symbol basis and the method of implementing a phase change using a specific phase change value in accordance with method 11_2. Method 11_2: Phase changer205A sets the coefficient used in the multiplication for the phase change to y1(i) (i indicates a symbol number and is an integer that is greater than or equal to 0). Here, y1(i) is expressed as follows. [MATH.131]y⁢1⁢(i)=ej⁢π4Equation⁢(131) Phase changer205B sets the coefficient used in the multiplication for the phase change to y2(i) (i indicates a symbol number and is an integer that is greater than or equal to 0). Here, y2(i) is expressed as follows. [MATH.132]y⁢2⁢(i)=ej⁢2×π×i5Equation⁢(132) When [u0 u1]=[11] (i.e., u0=1, u1=1), and [u6 u7]=[10] (i.e., u6=1, u7=0), the base station causes phase changer205A, phase changer205B to implement a phase change using a combination the method of implementing a phase change cyclically/regularly on a per-symbol basis and the method of implementing a phase change using a specific phase change value in accordance with method 11_3. Method 11_3: Phase changer205A sets the coefficient used in the multiplication for the phase change to y1(i) (i indicates a symbol number and is an integer that is greater than or equal to 0). Here, y1(i) is expressed as follows. [MATH.133]y⁢1⁢(i)=ej⁢π4Equation⁢(133) Phase changer205B sets the coefficient used in the multiplication for the phase change to y2(i) (i indicates a symbol number and is an integer that is greater than or equal to 0). Here, y2(i) is expressed as follows. [MATH.134]y⁢2⁢(i)=ej⁢2×π×i7Equation⁢(134) When [u0 u1]=[11] (i.e., u0=1, u1=1), and [u6 u7]=[11] (i.e., u6=1, u7=1), the base station causes phase changer205A, phase changer205B to implement a phase change using a combination the method of implementing a phase change cyclically/regularly on a per-symbol basis and the method of implementing a phase change using a specific phase change value in accordance with method 11_4. Method 11_4: Phase changer205A sets the coefficient used in the multiplication for the phase change to y1(i) (i indicates a symbol number and is an integer that is greater than or equal to 0). Here, y1(i) is expressed as follows. [MATH.135]y⁢1⁢(i)=ej⁢π4Equation⁢(135) Phase changer205B sets the coefficient used in the multiplication for the phase change to y2(i) (i indicates a symbol number and is an integer that is greater than or equal to 0). Here, y2(i) is expressed as follows. [MATH.136]y⁢2⁢(i)=ej⁢2×π×i9Equation⁢(136) Although first through fifth examples are given above, the detailed phase change method employed by phase changer205A, phase changer205B is not limited to these examples. <7> In phase changer205A, phase change is implemented cyclically/regularly on a per-symbol basis, and in phase changer205B, phase change is implemented using a specific phase change value (set). <8> In phase changer205B, phase change is implemented using a specific phase change value (set), and in phase changer205B, phase change is implemented cyclically/regularly on a per-symbol basis. <3> In phase changer205A and phase changer205B, a phase change is implemented cyclically/regularly on a per-symbol basis. 
So long as a method according to one or more of <7> and <8> is set in detail according to [u6 u7], it may be implemented in the same manner as described above.

In weighting synthesizer203included in the base station, the matrix used for the weighting synthesis may be changed. Control information for setting the weighting synthesis matrix shall be referred to as u8, u9. The relationship between [u8 u9] and the weighting synthesis matrix used by weighting synthesizer203is given in detail in Table 5 (note that u8, u9 are, for example, transmitted by the base station as some of the control information symbols, namely, other symbols403,503. The terminal obtains [u8 u9] included in control information symbols, namely, other symbols403,503, becomes aware of operations performed by weighting synthesizer203from [u8 u9], and demodulates and decodes data symbols. Also, the control information for identifying the "detailed weighting matrix" is 2-bit information, but the number of bits may be other than 2 bits).

TABLE 5
u8 u9    precoding method
00       precoding using matrix 1
01       precoding using matrix 2
10       precoding using matrix 3
11       determine precoding method based on information from communication partner

When [u8 u9]=[00] (i.e., u8=0, u9=0), weighting synthesizer203in the base station performs precoding that uses matrix 1. When [u8 u9]=[01] (i.e., u8=0, u9=1), weighting synthesizer203in the base station performs precoding that uses matrix 2. When [u8 u9]=[10] (i.e., u8=1, u9=0), weighting synthesizer203in the base station performs precoding that uses matrix 3. When [u8 u9]=[11] (i.e., u8=1, u9=1), the base station obtains feedback information from the communication partner, calculates, based on the feedback information, the precoding matrix to be used by weighting synthesizer203of the base station, and performs precoding using the calculated (precoding) matrix.

As described above, weighting synthesizer203in the base station switches between precoding matrices. The terminal, which is the communication partner of the base station, obtains u8, u9 included in the control information symbol, and based on u8, u9, can demodulate and decode the data symbols. With this, since a suitable precoding matrix can be set based on the communications situation such as the state of the radio wave propagation environment, the terminal can achieve an advantageous effect of achieving a high data reception quality.
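The following is a minimal Python sketch of the Table 5 behavior. The concrete matrices 1 through 3 are not specified in this passage, so placeholder unitary matrices are used, and the feedback-based computation for [u8 u9] = [11] is reduced to a stub; all names are illustrative assumptions.

```python
import numpy as np

# Placeholder 2x2 precoding matrices; the actual matrices 1 to 3 are not
# given in this passage.
MATRIX_1 = np.eye(2, dtype=complex)
MATRIX_2 = (1 / np.sqrt(2)) * np.array([[1, 1], [1, -1]], dtype=complex)
MATRIX_3 = (1 / np.sqrt(2)) * np.array([[1, 1], [1j, -1j]], dtype=complex)

def matrix_from_feedback(feedback):
    # Stub for [u8 u9] = [11]: derive the matrix from feedback information
    # obtained from the communication partner (computation not specified here).
    return MATRIX_1 if feedback is None else feedback

def select_precoding_matrix(u8, u9, feedback=None):
    """Mirror Table 5: map control bits [u8 u9] to a precoding matrix."""
    table = {(0, 0): MATRIX_1, (0, 1): MATRIX_2, (1, 0): MATRIX_3}
    if (u8, u9) in table:
        return table[(u8, u9)]
    return matrix_from_feedback(feedback)  # [u8 u9] = [11]

def weighting_synthesis(F, s1, s2):
    """Per-symbol weighting synthesis: [z1(i) z2(i)]^T = F [s1(i) s2(i)]^T."""
    z = F @ np.vstack([s1, s2])
    return z[0], z[1]
```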
Although identification methods such as those for phase changers205A,205B in the base station indicated in Table 1 have been described, settings such as those in Table 6 may be used instead of those in Table 1.

Transmission device2303in the base station illustrated inFIG.23has the configuration illustrated inFIG.1. Signal processor106illustrated inFIG.1has the configuration illustrated in any one ofFIG.2,FIG.18,FIG.19,FIG.20,FIG.21,FIG.22,FIG.28,FIG.29,FIG.30,FIG.31,FIG.32, andFIG.33. Here, operations performed by phase changers205A,205B may be switched depending on the communications environment or the settings. Control information relating to operations performed by phase changers205A,205B is transmitted by the base station as a part of the control information transmitted via control information symbols, namely, other symbols403,503in the frame configurations illustrated inFIG.4,FIG.5,FIG.13, andFIG.14. Here, control information relating to operations performed by phase changers205A,205B is expressed as u10. The relationship between [u10] and phase changers205A,205B is illustrated in Table 6.

TABLE 6
u10    change phase change value on a per-symbol basis (cyclically/regularly)
0      OFF
1      ON

(Note that u10 is transmitted by the base station as some of the control information symbols, namely, other symbols403,503. The terminal obtains [u10] included in control information symbols, namely, other symbols403,503, becomes aware of operations performed by phase changers205A,205B from [u10], and demodulates and decodes data symbols.)

Interpretation of Table 6 is as follows. When the settings in the base station are configured such that phase changers205A,205B do not implement a phase change, u10 is set to 0 (u10=0). Accordingly, phase changer205A outputs signal (206A) without implementing a phase change on input signal (204A). Similarly, phase changer205B outputs signal (206B) without implementing a phase change on input signal (204B). When the settings in the base station are configured such that phase changers205A,205B implement a phase change cyclically/regularly on a per-symbol basis, u10 is set to 1 (u10=1). Note that since the method used by phase changers205A,205B to implement a phase change cyclically/regularly on a per-symbol basis is described in detail in Embodiments 1 through 6, detailed description thereof is omitted.

When signal processor106illustrated inFIG.1is configured as illustrated in any one ofFIG.20,FIG.21, andFIG.22, u10 is also set to 1 (u10=1) when the settings in the base station are configured such that phase changer205A implements a phase change cyclically/regularly on a per-symbol basis and phase changer205B does not, and when the settings are configured such that phase changer205A does not implement a phase change cyclically/regularly on a per-symbol basis and phase changer205B does.

With this, the terminal can achieve an advantageous effect of achieving a high data reception quality by turning the operation of the phase change performed by phase changers205A,205B on and off based on the communications situation such as the state of the radio wave propagation environment.
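A minimal sketch of the u10 on/off control in Table 6 follows; the per-symbol period and the function name are illustrative assumptions.

```python
import numpy as np

def phase_changer_205(s204, u10, period=9):
    """Table 6: when u10 = 0 the phase changer outputs its input signal
    (204A/204B) unchanged as 206A/206B; when u10 = 1 it implements a
    cyclic/regular per-symbol phase change (period chosen illustratively)."""
    if u10 == 0:
        return s204                                        # OFF
    i = np.arange(len(s204))
    return s204 * np.exp(1j * 2 * np.pi * i / period)      # ON
```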
Transmission device2303in the base station illustrated inFIG.23has the configuration illustrated inFIG.1. Signal processor106illustrated inFIG.1has the configuration illustrated in any one ofFIG.2,FIG.18,FIG.19,FIG.20,FIG.21,FIG.22,FIG.28,FIG.29,FIG.30,FIG.31,FIG.32, andFIG.33. Here, operations performed by phase changers209A,209B may be switched depending on the communications environment or the settings. Control information relating to operations performed by phase changers209A,209B is transmitted by the base station as a part of the control information transmitted via control information symbols, namely, other symbols403,503in the frame configurations illustrated inFIG.4,FIG.5,FIG.13, andFIG.14. Here, control information relating to operations performed by phase changers209A,209B is expressed as u11. The relationship between [u11] and phase changers209A,209B is illustrated in Table 7.

TABLE 7
u11    phase change (or cyclic delay diversity)
0      OFF
1      ON

(Note that u11 is transmitted by the base station as some of the control information symbols, namely, other symbols403,503. The terminal obtains [u11] included in control information symbols, namely, other symbols403,503, becomes aware of operations performed by phase changers209A,209B from [u11], and demodulates and decodes data symbols.)

Interpretation of Table 7 is as follows. When the settings in the base station are configured such that phase changers209A,209B do not implement a phase change, u11 is set to 0 (u11=0). Accordingly, phase changer209A outputs a signal (210A) without implementing a phase change on the input signal (208A). Similarly, phase changer209B outputs a signal (210B) without implementing a phase change on the input signal (208B). When the settings in the base station are configured such that phase changers209A,209B implement a phase change cyclically/regularly on a per-symbol basis (or apply cyclic delay diversity), u11 is set to 1 (u11=1). Note that since the method used by phase changers209A,209B to implement a phase change cyclically/regularly on a per-symbol basis is described in detail in Embodiments 1 through 6, detailed description thereof is omitted.

When signal processor106illustrated inFIG.1is configured as illustrated in any one ofFIG.19andFIG.22, u11 is also set to 1 (u11=1) when the settings in the base station are configured such that phase changer209A implements a phase change cyclically/regularly on a per-symbol basis and phase changer209B does not, and when the settings are configured such that phase changer209A does not implement a phase change cyclically/regularly on a per-symbol basis and phase changer209B does.

With this, the terminal can achieve an advantageous effect of achieving a high data reception quality by turning the operation of the phase change performed by phase changers209A,209B on and off based on the communications situation such as the state of the radio wave propagation environment.

Next, an example of switching the operations performed by phase changers205A,205B shown in Table 1 will be given. For example, the base station and the terminal may communicate as illustrated inFIG.27. Note that communication based onFIG.27has been described above, and as such, description will be partially omitted.

First, the terminal requests communication with the base station. The base station then selects "implement a phase change using a specific phase change value (set)" in Table 1, whereby phase changer205A and/or phase changer205B perform signal processing equivalent to "implement a phase change using a specific phase change value (set)", and the base station transmits control information symbol2701_1and data symbol #1 (2702_1).

The terminal receives control information symbol2701_1and data symbol #1 (2702_1) transmitted by the base station, and demodulates and decodes data symbol #1 (2702_1) based at least on the transmission method included in control information symbol2701_1. As a result, the terminal determines that the data included in data symbol #1 (2702_1) is obtained without error. The terminal then transmits, to the base station, terminal transmission symbol2750_1including at least information indicating that the data included in data symbol #1 (2702_1) was obtained without error.
The base station receives terminal transmission symbol2750_1transmitted by the terminal, and based at least on the information that is included in terminal transmission symbol2750_1and indicates that the data included in data symbol #1 (2702_1) was obtained without error, determines the phase change (set) to be implemented by phase changer205A and/or phase changer205B to be "implement a phase change using the specific phase change value (set)", just as in the case where data symbol #1 (2702_1) is transmitted (since the terminal obtained the data included in data symbol #1 (2702_1) without error, the base station can determine that it is highly probable that the terminal can obtain the data without error when the next data symbol is transmitted and "implement a phase change using the specific phase change value (set)" is used (this makes it possible to achieve an advantageous effect that it is highly probable that the terminal can achieve a high data reception quality)).

Then, the base station implements a phase change via phase changer205A and/or phase changer205B based on the determined "implement a phase change using the specific phase change value (set)". The base station then transmits control information symbol2701_2and data symbol #2 (2702_2). Here, a phase change in accordance with "implement a phase change using the specific phase change value (set)" is implemented on at least data symbol #2 (2702_2).

The terminal receives control information symbol2701_2and data symbol #2 (2702_2) transmitted by the base station, and demodulates and decodes data symbol #2 (2702_2) based at least on information on the transmission method included in control information symbol2701_2. As a result, the terminal determines that the data included in data symbol #2 (2702_2) is not successfully obtained. The terminal then transmits, to the base station, terminal transmission symbol2750_2including at least information indicating that the data included in data symbol #2 (2702_2) was not successfully obtained.

The base station receives terminal transmission symbol2750_2transmitted by the terminal, and based at least on the information that is included in terminal transmission symbol2750_2and indicates that the data included in data symbol #2 (2702_2) was not successfully obtained, determines that the phase change to be implemented by phase changer205A and/or phase changer205B is to be changed to "(cyclically/regularly) change the phase change value on a per symbol basis" (since the terminal did not successfully obtain the data included in data symbol #2 (2702_2), the base station can determine that it is highly probable that the terminal can obtain the data without error if the phase change method is changed to "(cyclically/regularly) change the phase change value on a per symbol basis" when the next data symbol is transmitted (this makes it possible to achieve an advantageous effect that it is highly probable that the terminal can achieve a high data reception quality)).

Accordingly, the base station implements a phase change via phase changer205A and/or phase changer205B based on "(cyclically/regularly) change the phase change value on a per symbol basis". Here, the base station transmits control information symbol2701_3and data symbol #2 (2702_2-1), and at least with respect to data symbol #2 (2702_2-1), a phase change is performed based on "(cyclically/regularly) change the phase change value on a per symbol basis".
The terminal receives control information symbol2701_3and data symbol #2 (2702_2-1) transmitted by the base station, and demodulates and decodes data symbol #2 (2702_2-1) based at least on information on the transmission method included in control information symbol2701_3. As a result, the terminal determines that the data included in data symbol #2 (2702_2-1) is not successfully obtained. The terminal then transmits, to the base station, terminal transmission symbol2750_3including at least information indicating that the data included in data symbol #2 (2702_2-1) was not successfully obtained.

The base station receives terminal transmission symbol2750_3transmitted by the terminal, and based at least on the information that is included in terminal transmission symbol2750_3and indicates that the data included in data symbol #2 (2702_2-1) was not successfully obtained, determines to set the phase change to be implemented by phase changer205A and/or phase changer205B to once again be "(cyclically/regularly) change the phase change value on a per symbol basis". Accordingly, the base station implements a phase change via phase changer205A and/or phase changer205B based on "(cyclically/regularly) change the phase change value on a per symbol basis". Here, the base station transmits control information symbol2701_4and data symbol #2 (2702_2-2), and at least with respect to data symbol #2 (2702_2-2), a phase change is performed based on "(cyclically/regularly) change the phase change value on a per symbol basis".

The terminal receives control information symbol2701_4and data symbol #2 (2702_2-2) transmitted by the base station, and demodulates and decodes data symbol #2 (2702_2-2) based at least on information on the transmission method included in control information symbol2701_4. As a result, the terminal determines that the data included in data symbol #2 (2702_2-2) is obtained without error. The terminal then transmits, to the base station, terminal transmission symbol2750_4including at least information indicating that the data included in data symbol #2 (2702_2-2) was obtained without error.

The base station receives terminal transmission symbol2750_4transmitted by the terminal, and based at least on the information that is included in terminal transmission symbol2750_4and indicates that the data included in data symbol #2 (2702_2-2) was obtained without error, determines the phase change (set) to be implemented by phase changer205A and/or phase changer205B to be "implement a phase change using a specific phase change value (set)". Then, the base station implements a phase change via phase changer205A and/or phase changer205B based on "implement a phase change using a specific phase change value (set)". The base station then transmits control information symbol2701_5and data symbol #3 (2702_3). Here, a phase change based on "implement a phase change using a specific phase change value (set)" is implemented on at least data symbol #3 (2702_3).

The terminal receives control information symbol2701_5and data symbol #3 (2702_3) transmitted by the base station, and demodulates and decodes data symbol #3 (2702_3) based at least on information on the transmission method included in control information symbol2701_5. As a result, the terminal determines that the data included in data symbol #3 (2702_3) is obtained without error.
The terminal then transmits, to the base station, terminal transmission symbol2750_5including at least information indicating that the data included in data symbol #3 (2702_3) was obtained without error.

The base station receives terminal transmission symbol2750_5transmitted by the terminal, and based at least on the information that is included in terminal transmission symbol2750_5and indicates that the data included in data symbol #3 (2702_3) was obtained without error, determines the method to be implemented by phase changer205A and/or phase changer205B to be "implement a phase change using a specific phase change value (set)". The base station then transmits control information symbol2701_6and data symbol #4 (2702_4) based on "implement a phase change using a specific phase change value (set)".

The terminal receives control information symbol2701_6and data symbol #4 (2702_4) transmitted by the base station, and demodulates and decodes data symbol #4 (2702_4) based at least on information on the transmission method included in control information symbol2701_6. As a result, the terminal determines that the data included in data symbol #4 (2702_4) is not successfully obtained. The terminal then transmits, to the base station, terminal transmission symbol2750_6including at least information indicating that the data included in data symbol #4 (2702_4) was not successfully obtained.

The base station receives terminal transmission symbol2750_6transmitted by the terminal, and based at least on the information that is included in terminal transmission symbol2750_6and indicates that the data included in data symbol #4 (2702_4) was not successfully obtained, determines that the phase change (set) to be implemented by phase changer205A and/or phase changer205B is to be changed to "(cyclically/regularly) change the phase change value on a per symbol basis". Accordingly, the base station implements a phase change via phase changer205A and/or phase changer205B based on "(cyclically/regularly) change the phase change value on a per symbol basis". Here, the base station transmits control information symbol2701_7and data symbol #4 (2702_4-1), and at least with respect to data symbol #4 (2702_4-1), a phase change is performed based on "(cyclically/regularly) change the phase change value on a per symbol basis".

The terminal receives control information symbol2701_7and data symbol #4 (2702_4-1) transmitted by the base station, and demodulates and decodes data symbol #4 (2702_4-1) based on information on the transmission method included in control information symbol2701_7. A minimal sketch of this feedback-driven switching is given after the notes below.

Note that regarding data symbol #1 (2702_1), data symbol #2 (2702_2), data symbol #3 (2702_3), and data symbol #4 (2702_4), the base station transmits a plurality of modulated signals from a plurality of antennas, just as described in Embodiments 1 through 6.

The frame configurations of the base station and terminal illustrated inFIG.27are mere non-limiting examples; other symbols may be included. Moreover, control information symbols2701_1,2701_2,2701_3,2701_4,2701_5,2701_6, data symbol #1 (2702_1), data symbol #2 (2702_2), data symbol #3 (2702_3), and data symbol #4 (2702_4) may each include other symbols, such as a pilot symbol.
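The switching behavior described above can be summarized in the following minimal sketch, where each terminal report drives the phase change method used for the next data symbol; the names and the boolean representation of the reports are illustrative assumptions.

```python
SPECIFIC = "implement a phase change using a specific phase change value (set)"
CYCLIC = "(cyclically/regularly) change the phase change value on a per symbol basis"

def next_phase_change_method(received_without_error):
    """After an error-free report the base station (re)selects the
    specific-value method; after a failed report it selects the
    cyclic/regular per-symbol method, as in the FIG. 27 exchange."""
    return SPECIFIC if received_without_error else CYCLIC

# The FIG. 27 sequence of terminal reports (True = obtained without error):
reports = [True, False, False, True, True, False]
methods = [next_phase_change_method(ok) for ok in reports]
# methods -> [SPECIFIC, CYCLIC, CYCLIC, SPECIFIC, SPECIFIC, CYCLIC]
```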
Moreover, control information symbols2701_1,2701_2,2701_3,2701_4,2701_5, and2701_6include information relating to the specific phase change value (set) used upon transmitting data symbol #1 (2702_1), data symbol #2 (2702_2), data symbol #3 (2702_3), and data symbol #4 (2702_4), and the terminal becomes capable of demodulating and decoding data symbol #1 (2702_1), data symbol #2 (2702_2), data symbol #3 (2702_3), and data symbol #4 (2702_4) as a result of obtaining this information.

Note that the switching of the transmission method based on Table 1, described in this embodiment with reference toFIG.27, is not limited to the above description; the above description is merely one example, and the switching of the transmission method based on Table 1 may be performed more flexibly. As described above, by switching the transmission method, switching the phase change method, and switching implementation of the phase change on or off in a more flexible manner in accordance with, for example, the communications environment, the reception device of the communication partner can achieve an advantageous effect of an improvement in data reception quality.

Note that a method for switching the precoding matrix based on, for example, information from the communication partner, may be allotted to "reserve" in Table 1 according to this embodiment, which is associated with u0=1 and u1=1. In other words, when the base station selects the MIMO transmission method, the base station may be allowed to also select a method for selecting a precoding matrix based on information from the communication partner.

In this embodiment, the configuration of signal processor106illustrated inFIG.1was exemplified usingFIG.28,FIG.29,FIG.30,FIG.31,FIG.32, andFIG.33, but for Embodiments 1 through 6 as well, signal processor106illustrated inFIG.1can be configured as illustrated inFIG.28,FIG.29,FIG.30,FIG.31,FIG.32, andFIG.33.

(Supplemental Information 3)

The method used to map each symbol in the mapper described in the present specification may be switched regularly/cyclically, for example. For example, a modulation scheme that has 16 signal points in an in-phase I-quadrature Q plane and transmits 4 bits per symbol may be implemented, and the arrangement of the 16 signal points in the in-phase I-quadrature Q plane may be changed on a per-symbol basis (a minimal sketch of such per-symbol remapping is given below). Moreover, in Embodiments 1 through 6, a case in which a multi-carrier scheme such as OFDM is implemented is described, but a single-carrier scheme may be implemented in the same manner. Moreover, the embodiments according to the present specification may be implemented in the same manner even when a spread spectrum communication method is implemented.
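As one way to realize the per-symbol change of the signal-point arrangement just described, the following sketch rotates a 16-point constellation by a symbol-number-dependent angle; the base constellation, the 4-bit labeling, and the rotation rule are illustrative assumptions.

```python
import numpy as np

# Base 16-point arrangement in the in-phase I-quadrature Q plane
# (an ordinary 16QAM grid; the 4-bit labeling is an arbitrary choice here).
levels = np.array([-3.0, -1.0, 1.0, 3.0])
BASE_16 = np.array([complex(a, b) for a in levels for b in levels])

def map_symbol(label4bit, i):
    """Map a 4-bit label (integer 0..15) of symbol number i to a signal
    point, changing the arrangement per symbol by a rotation."""
    return BASE_16[label4bit] * np.exp(1j * 2 * np.pi * i / 16)
```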
(Supplemental Information 4)

In each embodiment disclosed in the present specification, an example of the configuration of the transmission device is given inFIG.1, and examples of the configuration of signal processor106illustrated inFIG.1are given inFIG.2,FIG.18,FIG.19,FIG.20,FIG.21,FIG.22,FIG.28,FIG.29,FIG.30,FIG.31,FIG.32, andFIG.33. However, the configuration of the transmission device is not limited to the configuration illustrated inFIG.1, and the configuration of signal processor106is not limited to the examples illustrated inFIG.2,FIG.18,FIG.19,FIG.20,FIG.21,FIG.22,FIG.28,FIG.29,FIG.30,FIG.31,FIG.32, andFIG.33. In other words, the transmission device and signal processor106included in the transmission device may be configured in any manner so long as the transmission device can generate signals equivalent to processed signals106_A,106_B described in the above embodiments according to the present specification and transmit the signals using a plurality of antenna units. Hereinafter, different configuration examples of the transmission device and signal processor106included in the transmission device that meet this requirement will be given.

One example of a different configuration is one in which mapper104illustrated inFIG.1generates, as mapped signals105_1,105_2, signals equivalent to weighting synthesized signals204A,204B illustrated in any one ofFIG.2,FIG.18,FIG.19,FIG.20,FIG.21, andFIG.22, based on encoded data103and control signal100. Signal processor106then has a configuration in which weighting synthesizer203is removed from a configuration illustrated in any one ofFIG.2,FIG.18,FIG.19,FIG.20,FIG.21, andFIG.22. Mapped signal105_1is input into phase changer205A or inserter207A, and mapped signal105_2is input into phase changer205B or inserter207B.

Another example of a different configuration is one in which, when the weighting synthesis (precoding) processing is expressed as (precoding) matrix F illustrated in Equation (33) or Equation (34), weighting synthesizer203illustrated inFIG.2does not perform signal processing for weighting synthesis on mapped signals201A,201B, outputs mapped signal201A as weighting synthesized signal204A, and outputs mapped signal201B as weighting synthesized signal204B. In such a case, weighting synthesizer203performs, based on control signal200, control of switching between (i) performing signal processing corresponding to weighting synthesis to generate weighting synthesized signals204A,204B, and (ii) outputting mapped signal201A as weighting synthesized signal204A and outputting mapped signal201B as weighting synthesized signal204B without performing signal processing for weighting synthesis. Moreover, when the only weighting synthesis (precoding) processing that is performed is the processing expressed as (precoding) matrix F in Equation (33) or Equation (34), weighting synthesizer203may be omitted.

In the present specification, even if the specifics of the transmission device configuration are different, by generating a signal equivalent to any one of signal-processed signals106_A,106_B described above in any of the embodiments of the present specification and transmitting the signal using a plurality of antenna units, when the reception device is in an environment in which direct waves are dominant, in particular when in an LOS environment, it is possible to achieve an advantageous effect in which the reception quality of the reception device receiving the MIMO data symbol transmission (transfer via a plurality of streams) can be improved (other advantageous effects described in the present specification are also achievable).

Note that in signal processor106illustrated inFIG.1, a phase change may be provided both before and after weighting synthesizer203. More specifically, signal processor106includes, before weighting synthesizer203, one or both of phase changer205A_1, which generates phase-changed signal2801A by applying a phase change to mapped signal201A, and phase changer205B_1, which generates phase-changed signal2801B by applying a phase change to mapped signal201B. Signal processor106further includes, before inserters207A,207B, one or both of phase changer205A_2, which generates phase-changed signal206A by applying a phase change to weighting synthesized signal204A, and phase changer205B_2, which generates phase-changed signal206B by applying a phase change to weighting synthesized signal204B.

Here, when signal processor106includes phase changer205A_1, one input of weighting synthesizer203is phase-changed signal2801A, and when signal processor106does not include phase changer205A_1, one input of weighting synthesizer203is mapped signal201A. When signal processor106includes phase changer205B_1, the other input of weighting synthesizer203is phase-changed signal2801B, and when signal processor106does not include phase changer205B_1, the other input of weighting synthesizer203is mapped signal201B. When signal processor106includes phase changer205A_2, the input of inserter207A is phase-changed signal206A, and when signal processor106does not include phase changer205A_2, the input of inserter207A is weighting synthesized signal204A. When signal processor106includes phase changer205B_2, the input of inserter207B is phase-changed signal206B, and when signal processor106does not include phase changer205B_2, the input of inserter207B is weighting synthesized signal204B.
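A minimal sketch of this configurable chain follows, with each phase changer optional; matrix F and the per-symbol coefficient arrays are supplied by the caller, and the function name is an illustrative assumption.

```python
import numpy as np

def signal_processor_106(s201A, s201B, F=None,
                         y_pre_A=None, y_pre_B=None,
                         y_post_A=None, y_post_B=None):
    """Optional phase changers 205A_1/205B_1 before weighting synthesizer
    203 and 205A_2/205B_2 before inserters 207A/207B. Each y_* is a
    per-symbol coefficient array or None (phase changer absent)."""
    a = s201A * y_pre_A if y_pre_A is not None else s201A  # -> 2801A or 201A
    b = s201B * y_pre_B if y_pre_B is not None else s201B  # -> 2801B or 201B
    if F is not None:                                      # weighting synthesis 203
        z = F @ np.vstack([a, b])
        a, b = z[0], z[1]                                  # 204A, 204B
    if y_post_A is not None:
        a = a * y_post_A                                   # 206A
    if y_post_B is not None:
        b = b * y_post_B                                   # 206B
    return a, b                                            # to inserters 207A, 207B
```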
Moreover, the transmission device illustrated inFIG.1may include a second signal processor that implements different signal processing on processed signals106_A,106_B, i.e., the outputs of signal processor106. Here, radio unit107_A receives an input of signal A processed with the second signal processing and performs predetermined processing on the input signal, and radio unit107_B receives an input of signal B processed with the second signal processing and performs predetermined processing on the input signal, where signal A and signal B processed with the second signal processing are the two signals output from the second signal processor.

Embodiment A1

Hereinafter, a case in which the base station (AP) and the terminal communicate with each other will be described. Here, the base station (AP) can transmit a plurality of modulated signals including a plurality of streams of data using a plurality of antennas. For example, the base station (AP) includes the transmission device illustrated inFIG.1in order to transmit a plurality of modulated signals including a plurality of streams of data using a plurality of antennas. Moreover, the base station (AP) includes, as the configuration of signal processor106illustrated inFIG.1, a configuration illustrated in any one ofFIG.2,FIG.18,FIG.19,FIG.20,FIG.21,FIG.22,FIG.28,FIG.29,FIG.30,FIG.31,FIG.32, andFIG.33.

The following will describe a case in which the transmission device described above implements a phase change on at least one modulated signal after precoding. In this embodiment, the base station (AP) is capable of switching between implementing and not implementing a phase change, based on a control signal. Accordingly, the following holds true.

<When Phase Change is Implemented>

The base station (AP) implements a phase change on at least one modulated signal. A plurality of modulated signals are transmitted from a plurality of antennas (note that the transmission method of implementing a phase change on at least one modulated signal and transmitting a plurality of modulated signals using a plurality of antennas is as described in the plurality of embodiments according to the present specification).
<When Phase Change is not Implemented>

The base station (AP) performs precoding (weighting synthesis) described in the present specification on a plurality of streams of modulated signals (baseband signals), and transmits the generated plurality of modulated signals using a plurality of antennas (here, a phase change is not implemented). However, as described above in the present specification, the precoder (weighting synthesizer) is not required to perform precoding, and a configuration in which precoding is never performed and a precoder (weighting synthesizer) is not included is also acceptable.

Note that the base station (AP) transmits control information for notifying the terminal, which is the communication partner, whether or not a phase change is to be implemented, using a preamble, for example.

FIG.34illustrates one example of a system configuration in a state in which base station (AP)3401and terminal3402are communicating. As illustrated inFIG.34, base station (AP)3401transmits a modulated signal and terminal3402, which is the communication partner, receives the modulated signal. Terminal3402then transmits a modulated signal, and base station3401, which is the communication partner, receives the modulated signal.

FIG.35illustrates one example of communication between base station (AP)3401and terminal3402. InFIG.35, (A) illustrates the temporal state of a signal transmitted by base station (AP)3401, with time represented on the horizontal axis. InFIG.35, (B) illustrates the temporal state of a signal transmitted by terminal3402, with time represented on the horizontal axis.

First, base station (AP)3401transmits transmission request3501including requested information indicating a request to transmit a modulated signal, for example. Terminal3402receives transmission request3501transmitted by base station (AP)3401, which is requested information indicating a request to transmit a modulated signal, and, for example, transmits reception capability notification symbol3502including information indicating the reception capability of terminal3402(or a receivable scheme).

Base station (AP)3401receives reception capability notification symbol3502transmitted by terminal3402and, based on the information included in reception capability notification symbol3502, determines an error correction encoding method, a modulation scheme (or modulation scheme set), and a transmission method. Based on the determined schemes and methods, base station (AP)3401transmits modulated signal3503, which includes, for example, data symbols and is generated by performing error correction encoding, mapping in accordance with the modulation scheme, and other signal processing (such as precoding and phase change) on the information (data) to be transmitted.

Note that, for example, data symbols3503may include a control information symbol. In such a case, when transmitting the data symbols using a transmission method of transmitting a plurality of modulated signals including a plurality of streams of data using a plurality of antennas, a control symbol may be transmitted that includes information for notifying the communication partner of whether a phase change was implemented on at least one modulated signal or not (this allows the communication partner to easily change demodulation methods). Terminal3402obtains data upon receiving, for example, data symbols3503transmitted by base station3401.

FIG.36illustrates an example of data included in reception capability notification symbol3502transmitted by the terminal illustrated inFIG.35.
FIG.36illustrates data3601indicating information relating to support for demodulation of modulated signals with phase changes, and data3602indicating information relating to reception directionality control support. Note that in data3601indicating information relating to support for demodulation of modulated signals with phase changes, “supported” indicates, for example, the following state. “Demodulation of modulated signals with phase changes is supported” means, when base station (AP)3401applies a phase change to at least one modulated signal and a plurality of modulated signals are transmitted using a plurality of antennas (note that the transmission method of implementing a phase change on at least one modulated signal and transmitting a plurality of modulated signals using a plurality of antennas is as described in the plurality of embodiments according to the present specification), terminal3402can receive and demodulate the modulated signals (in other words, demodulation taking into consideration phase change can be performed to obtain data). In data3601indicating information relating to support for demodulation of modulated signals with phase changes, “not supported” indicates, for example, the following state. “Demodulation of modulated signals with phase changes is not supported” means, when base station (AP)3401applies a phase change to at least one modulated signal and a plurality of modulated signals are transmitted using a plurality of antennas (note that the transmission method of implementing a phase change on at least one modulated signal and transmitting a plurality of modulated signals using a plurality of antennas is as described in the plurality of embodiments according to the present specification), even if terminal3402receives the modulated signals, demodulation of the modulated signals is not possible (in other words, demodulation taking into consideration phase change cannot be performed). For example, when terminal3402supports phase change, as described above, data3601indicating information relating to support for demodulation of modulated signals with phase changes is set to “0”, and terminal3402transmits reception capability notification symbol3502. Moreover, when terminal3402does not support phase change, as described above, data3601indicating information relating to support for demodulation of modulated signals with phase changes is set to “1”, and terminal3402transmits reception capability notification symbol3502. Then, base station (AP)3401receives data3601transmitted by terminal3402indicating information relating to support for demodulation of modulated signals with phase changes. When the reception indicates “supported” with regard to phase change (in other words, “0” is received as data3601indicating information relating to support for demodulation of modulated signals with phase changes) and base station (AP)3401determines to transmit a plurality of streams of modulated signals using a plurality of antennas, base station (AP)3401may transmit the modulated signals using either <method #1> or <method #2> described below. Alternatively, base station (AP)3401transmits the modulated signals using <method #2>. <Method #1> Base station (AP)3401performs precoding (weighting synthesis) described in the present specification on a plurality of streams of modulated signals (baseband signals), and transmits the generated plurality of modulated signals using a plurality of antennas (here, a phase change is not implemented). 
However, as described in the present specification, the precoder (weighting synthesizer) need not perform a precoding process.

<Method #2>

Base station (AP)3401implements a phase change on at least one modulated signal. A plurality of modulated signals are transmitted from a plurality of antennas (note that the transmission method of implementing a phase change on at least one modulated signal and transmitting a plurality of modulated signals using a plurality of antennas is as described in the plurality of embodiments according to the present specification).

Here, what is important is that <method #2> is included as a transmission method selectable by base station (AP)3401. Accordingly, base station (AP)3401may transmit modulated signals using a method other than <method #1> and <method #2>.

Then, base station (AP)3401receives data3601transmitted by terminal3402indicating information relating to support for demodulation of modulated signals with phase changes. When the reception indicates "not supported" with regard to phase change (in other words, "1" is received as data3601indicating information relating to support for demodulation of modulated signals with phase changes) and base station (AP)3401determines to transmit a plurality of streams of modulated signals using a plurality of antennas, base station (AP)3401may transmit the modulated signals using <method #1>. Here, <method #2> is not included as a transmission method selectable by base station (AP)3401. Accordingly, base station (AP)3401may transmit modulated signals using a transmission method that is different from <method #1> and is not <method #2>.

Note that reception capability notification symbol3502may include data indicating information other than data3601indicating information relating to support for demodulation of modulated signals with phase changes. For example, reception capability notification symbol3502may include data3602indicating information relating to reception directionality control support. Accordingly, the configuration of reception capability notification symbol3502is not limited to the configuration illustrated inFIG.36. For example, when base station (AP)3401includes a function of transmitting a modulated signal using a method other than <method #1> and <method #2>, reception capability notification symbol3502may include data indicating information relating to support of that method other than <method #1> and <method #2>.

For example, when terminal3402can perform reception directionality control, "0" is set as data3602indicating information relating to reception directionality control support. When terminal3402cannot perform reception directionality control, "1" is set as data3602indicating information relating to reception directionality control support. Terminal3402transmits information on data3602relating to reception directionality control support. Base station (AP)3401receives this information, and when it is determined that terminal3402supports reception directionality control, base station (AP)3401and terminal3402transmit, for example, a training symbol, reference symbol, and/or control information symbol for reception directionality control for terminal3402.

FIG.37illustrates an example of data included in reception capability notification symbol3502transmitted by the terminal illustrated inFIG.35, different from the example illustrated inFIG.36. Note that components that perform the same operations as inFIG.36share like reference numerals.
Accordingly, since data3601indicating information relating to support for demodulation of modulated signals with phase changes inFIG.37has already been described, repeated description will be omitted. Next, data3702indicating information relating to support for reception for a plurality of streams inFIG.37will be described.

In data3702indicating information relating to support for reception for a plurality of streams, "supported" indicates, for example, the following state. When terminal3402supports reception for a plurality of streams and base station (AP)3401transmits a plurality of modulated signals from a plurality of antennas to transmit a plurality of streams, the terminal can receive and demodulate the plurality of modulated signals transmitted by the base station. However, for example, when base station (AP)3401transmits a plurality of modulated signals from a plurality of antennas, whether a phase change has been implemented or not is not distinguished. In other words, when base station (AP)3401defines a plurality of transmission methods for transmitting a plurality of modulated signals from a plurality of antennas to transmit a plurality of streams, it suffices that there is at least one such transmission method with which the terminal can perform demodulation.

In data3702indicating information relating to support for reception for a plurality of streams, "not supported" indicates, for example, the following state. When terminal3402does not support reception for a plurality of streams and a plurality of transmission methods are defined as transmission methods for transmitting, from a plurality of antennas, a plurality of modulated signals for transmitting a plurality of streams, terminal3402cannot demodulate the modulated signals even if the base station transmits using any one of the transmission methods.

For example, when terminal3402supports reception for a plurality of streams, data3702relating to support for reception for a plurality of streams is set to "0". When terminal3402does not support reception for a plurality of streams, data3702relating to support for reception for a plurality of streams is set to "1".

Accordingly, when terminal3402has data3702relating to support for reception for a plurality of streams set to "0", data3601relating to support for demodulation of modulated signals with phase changes is valid, and in such a case, base station (AP)3401determines the transmission method to use to transmit data based on data3601relating to support for demodulation of modulated signals with phase changes and data3702relating to support for reception for a plurality of streams. When terminal3402has data3702relating to support for reception for a plurality of streams set to "1", data3601indicating information relating to support for demodulation of modulated signals with phase changes is null, and in such a case, base station (AP)3401determines the transmission method to use to transmit data based on data3702relating to support for reception for a plurality of streams.
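A minimal sketch of this decision follows, using the bit conventions above (0 = supported, 1 = not supported); the function name and the returned labels are illustrative assumptions.

```python
def choose_transmission_method(data3702, data3601):
    """data3702: support for reception for a plurality of streams.
    data3601: support for demodulation of modulated signals with phase
    changes (only consulted when data3702 indicates support)."""
    if data3702 == 1:
        # data3601 is null in this case.
        return "transmit a single stream"
    if data3601 == 0:
        return "plural streams, phase change may be implemented (method #2 selectable)"
    return "plural streams, no phase change (method #1)"
```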
With this, as a result of terminal3402transmitting reception capability notification symbol3502and base station (AP)3401determining a transmission method to use to transmit data based on this symbol, there is an advantageous point that data can actually be transmitted to the terminal (since it is possible to reduce instances in which data is transmitted using a transmission method via which demodulation cannot be performed by terminal3402), and, accordingly, an advantageous effect that the data transfer efficiency of base station (AP)3401can be improved is achieved.
Moreover, when data3601indicating information relating to support for demodulation of modulated signals with phase changes is present in reception capability notification symbol3502and terminal3402that supports demodulation of modulated signals with phase changes and base station (AP)3401communicate, base station (AP)3401can accurately select the mode “transmit modulated signal using transmission method that implements a phase change”, whereby an advantageous effect that terminal3402can obtain a high reception quality even in an environment in which direct waves are dominant can be achieved. Moreover, when a terminal that does not support the demodulation of modulated signals with phase changes and base station (AP)3401communicate, base station (AP)3401can accurately select a transmission method via which reception is possible by terminal3402, which makes it possible to achieve an advantageous effect of improved data transfer efficiency.
Note that inFIG.35, (A) illustrates a signal transmitted by base station (AP)3401and (B) illustrates a signal transmitted by terminal3402, but these examples are not limiting. For example, (A) inFIG.35may illustrate a signal transmitted by terminal3402and (B) may illustrate a signal transmitted by base station (AP)3401. Moreover, inFIG.35, (A) may illustrate a signal transmitted by terminal #1 and (B) may illustrate a signal transmitted by terminal #2. In other words,FIG.35may illustrate communication between terminals. Moreover, inFIG.35, (A) may illustrate a signal transmitted by base station (AP) #1 and (B) may illustrate a signal transmitted by base station (AP) #2. In other words,FIG.35may illustrate communication between base stations (APs). Note that these are non-limiting examples; communication between communication devices is acceptable.
Moreover, the data symbol in the transmission of, for example, data symbol3503in (A) inFIG.35may be a multi-carrier scheme signal such as an OFDM signal, or may be a single-carrier scheme signal. Similarly, reception capability notification symbol3502inFIG.35may be a multi-carrier scheme signal such as an OFDM signal, or may be a single-carrier scheme signal. For example, when reception capability notification symbol3502inFIG.35is a single-carrier scheme symbol, in the case ofFIG.35, terminal3402can achieve an advantageous effect that power consumption can be reduced.
Embodiment A2
Next, a different example will be given.
FIG.38illustrates an example of data included in reception capability notification symbol3502transmitted by the terminal illustrated inFIG.35, different from the examples illustrated inFIG.36andFIG.37. Note that components that perform the same operations as inFIG.36andFIG.37share like reference numerals. Moreover, duplicate description of components that perform the same operations as inFIG.36andFIG.37will be omitted.
First, data3801relating to “supported scheme” inFIG.38will be described.
Transmission of a modulated signal from the base station (AP) to the terminal and transmission of a modulated signal from the terminal to the base station (AP) inFIG.34are performed under a communications scheme for a specific frequency (frequency band). Communications scheme #A and communications scheme #B are examples of such a specific frequency (frequency band) communications scheme.
For example, data3801relating to “supported scheme” is 2-bit data. When the terminal supports only “communications scheme #A”, data3801relating to “supported scheme” is set to “01” (when data3801relating to “supported scheme” is set to “01”, even if the base station (AP) transmits a “communications scheme #B” modulated signal, the terminal cannot demodulate it and obtain the data). When the terminal supports only “communications scheme #B”, data3801relating to “supported scheme” is set to “10” (when data3801relating to “supported scheme” is set to “10”, even if the base station (AP) transmits a “communications scheme #A” modulated signal, the terminal cannot demodulate it and obtain the data). When the terminal supports both communications scheme #A and communications scheme #B, data3801relating to “supported scheme” is set to “11”.
Note that communications scheme #A does not include support for a scheme that transmits a plurality of modulated signals including a plurality of streams using a plurality of antennas (there is no selection of “a scheme that transmits a plurality of modulated signals including a plurality of streams using a plurality of antennas” for communications scheme #A). Communications scheme #B does include support for a scheme that transmits a plurality of modulated signals including a plurality of streams using a plurality of antennas (selection of “a transmission method that transmits a plurality of modulated signals including a plurality of streams using a plurality of antennas” for communications scheme #B is possible).
Next, data3802relating to multi-carrier scheme support inFIG.38will be described.
“Single-carrier scheme” and “multi-carrier scheme such as OFDM” are selectable for communications scheme #A as a transmission method for a modulated signal. Likewise, “single-carrier scheme” and “multi-carrier scheme such as OFDM” are selectable for communications scheme #B as a transmission method for a modulated signal.
For example, data3802relating to multi-carrier scheme support is 2-bit data. When the terminal supports only “single-carrier scheme”, data3802relating to multi-carrier scheme support is set to “01” (when data3802relating to multi-carrier scheme support is set to “01”, even if the base station (AP) transmits a “multi-carrier scheme such as OFDM” modulated signal, the terminal cannot demodulate it and obtain the data). When the terminal supports only “multi-carrier scheme such as OFDM”, data3802relating to multi-carrier scheme support is set to “10” (when data3802relating to multi-carrier scheme support is set to “10”, even if the base station (AP) transmits a “single-carrier scheme” modulated signal, the terminal cannot demodulate it and obtain the data). When the terminal supports both a single-carrier scheme and a multi-carrier scheme such as OFDM, data3802relating to multi-carrier scheme support is set to “11”.
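The two 2-bit fields just described can be packed into a single capability word, as in the following minimal sketch. The helper names and the packing order are hypothetical; only the bit values (“01”, “10”, “11”) follow the text above.

```python
# Illustrative packing of the 2-bit capability fields data 3801
# ("supported scheme") and data 3802 ("multi-carrier scheme support").

SCHEME_A_ONLY, SCHEME_B_ONLY, SCHEME_A_AND_B = 0b01, 0b10, 0b11
SINGLE_CARRIER_ONLY, MULTI_CARRIER_ONLY, BOTH_CARRIERS = 0b01, 0b10, 0b11

def pack_capabilities(data_3801: int, data_3802: int) -> int:
    """Pack the two 2-bit fields into one integer, 2 bits per field."""
    return (data_3801 & 0b11) << 2 | (data_3802 & 0b11)

def unpack_capabilities(word: int) -> tuple[int, int]:
    """Recover (data 3801, data 3802) from the packed word."""
    return (word >> 2) & 0b11, word & 0b11

word = pack_capabilities(SCHEME_A_AND_B, SINGLE_CARRIER_ONLY)
assert unpack_capabilities(word) == (SCHEME_A_AND_B, SINGLE_CARRIER_ONLY)
print(f"packed capability word: {word:04b}")
```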
Next, data3803relating to “supported error correction encoding scheme” inFIG.38will be described.
For example, “error correction encoding scheme #C” is an error correction encoding method that supports one or more encode rates for a code length (block length) of c bits (c is an integer that is greater than or equal to 1), and “error correction encoding scheme #D” is an error correction encoding method that supports one or more encode rates for a code length (block length) of d bits (d is an integer that is greater than or equal to 1, and d is greater than c (d>c)). Note that the method that supports one or more encode rates may be a method that uses a different error correction code for each encode rate, or may be a method that supports one or more encode rates via puncturing. Moreover, a combination of these methods may be used to support one or more encode rates.
Note that the only selectable choice for communications scheme #A is error correction encoding scheme #C, whereas error correction encoding scheme #C and error correction encoding scheme #D are selectable choices for communications scheme #B.
For example, data3803relating to “supported error correction encoding scheme” is 2-bit data. When the terminal supports only “error correction encoding scheme #C”, data3803relating to “supported error correction encoding scheme” is set to “01” (when data3803relating to “supported error correction encoding scheme” is set to “01”, even if the base station (AP) uses error correction encoding scheme #D to generate and transmit a modulated signal, the terminal cannot demodulate and decode the modulated signal to obtain the data). When the terminal supports only “error correction encoding scheme #D”, data3803relating to “supported error correction encoding scheme” is set to “10” (when data3803relating to “supported error correction encoding scheme” is set to “10”, even if the base station (AP) uses error correction encoding scheme #C to generate and transmit a modulated signal, the terminal cannot demodulate and decode the modulated signal to obtain the data). When the terminal supports both error correction encoding scheme #C and error correction encoding scheme #D, data3803relating to “supported error correction encoding scheme” is set to “11”.
The base station (AP) receives, for example, reception capability notification symbol3502configured as illustrated inFIG.38and transmitted by the terminal, determines a method for generating a modulated signal including a data symbol for the terminal based on the information in reception capability notification symbol3502, and transmits a modulated signal to the terminal. Next, the characteristic points in such a case will be described.
Example 1
When the terminal performs transmission with data3801relating to “supported scheme” set to “01” (communications scheme #A), the base station (AP) that receives this data determines that data3803relating to “supported error correction encoding scheme” is null, and when the base station (AP) generates the modulated signal for the terminal, error correction encoding is performed using error correction encoding scheme #C (since “error correction encoding scheme #D” cannot be selected in communications scheme #A).
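A minimal sketch of the null determination in Example 1 follows; the function name choose_error_correction is hypothetical, and only the bit values and the rule (scheme #A forces scheme #C) come from the description above.

```python
# Illustrative sketch of Example 1: when data 3801 is "01" (communications
# scheme #A only), the base station treats data 3803 ("supported error
# correction encoding scheme") as null and always encodes with scheme #C.

def choose_error_correction(data_3801: int, data_3803: int) -> str:
    if data_3801 == 0b01:
        # Communications scheme #A: scheme #D is not selectable, so the
        # reported data 3803 is null and scheme #C is used unconditionally.
        return "error correction encoding scheme #C"
    if data_3803 == 0b01:
        return "error correction encoding scheme #C"
    if data_3803 == 0b10:
        return "error correction encoding scheme #D"
    return "error correction encoding scheme #C or #D"  # "11": both supported

# Even though data 3803 reports scheme #D, scheme #A overrides it.
print(choose_error_correction(data_3801=0b01, data_3803=0b10))
```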
Example 2
When the terminal performs transmission with data3801relating to “supported scheme” set to “01” (communications scheme #A), the base station (AP) that receives this data determines that data3601relating to support for demodulation of modulated signals with phase changes and data3702relating to support for reception for a plurality of streams are null, and when the base station (AP) generates the modulated signal for the terminal, a single stream of a modulated signal is generated and transmitted (since “a scheme that transmits a plurality of modulated signals including a plurality of streams using a plurality of antennas” is not supported in communications scheme #A).
In addition to the above examples, for example, consider a case in which the following constraints are in place.
[Constraint Condition 1]
In “communications scheme #B”, with a single-carrier scheme, in “a scheme that transmits a plurality of modulated signals including a plurality of streams using a plurality of antennas”, a scheme in which “among a plurality of modulated signals, a phase change is implemented on at least one modulated signal” is not supported (but another scheme may be supported). Additionally, in a multi-carrier scheme such as an OFDM scheme, at least a scheme in which “among a plurality of modulated signals, a phase change is implemented on at least one modulated signal” is supported (but another scheme may be supported).
The following applies in such a case.
Example 3
When the terminal performs transmission with data3802relating to multi-carrier scheme support set to “01” (single-carrier scheme), the base station (AP) that receives this data determines that data3601relating to support for demodulation of modulated signals with phase changes is null, and when the base station (AP) generates the modulated signal for the terminal, the base station (AP) does not use the scheme in which “among a plurality of modulated signals, a phase change is implemented on at least one modulated signal” (a minimal sketch of this decision is given at the end of this embodiment).
Note thatFIG.38is one example of reception capability notification symbol3502transmitted by the terminal. As described with reference toFIG.38, when the terminal transmits information on a plurality of reception capabilities (for example,3601,3702,3801,3802, and3803inFIG.38) and the base station (AP) determines a method for generating the modulated signal for the terminal based on reception capability notification symbol3502, there are cases in which the base station (AP) is required to determine whether a portion of the information on the plurality of reception capabilities is null or not. Taking this into consideration, when the terminal bundles and transmits the information on the plurality of reception capabilities as reception capability notification symbol3502, the base station (AP) can achieve an advantageous effect in which the method for generating the modulated signal for the terminal can be determined easily, with low delay.
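The following is the minimal sketch of the Example 3 decision under Constraint Condition 1 referenced above; the function name phase_change_allowed is hypothetical, and the rule encoded (single-carrier only implies data3601is null, so no phase change) comes from the description above.

```python
# Illustrative sketch of Example 3 under Constraint Condition 1: with a
# single-carrier scheme the phase-change scheme is not supported, so when
# data 3802 is "01" (single-carrier only) the base station treats
# data 3601 as null and never applies a phase change.

def phase_change_allowed(data_3802: int, data_3601: int) -> bool:
    if data_3802 == 0b01:
        # Single-carrier only: data 3601 is null under Constraint Condition 1.
        return False
    # Multi-carrier reception is possible; honour data 3601 (0 = supported).
    return data_3601 == 0

print(phase_change_allowed(data_3802=0b01, data_3601=0))  # False
print(phase_change_allowed(data_3802=0b11, data_3601=0))  # True
```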
Embodiment A3
In this embodiment, an operational example in which a single-carrier scheme is implemented in an embodiment described in the present specification will be given.
FIG.39illustrates an example of a frame configuration of transmission signal106_A illustrated inFIG.1. InFIG.39, time is represented on the horizontal axis. The frame configuration illustrated inFIG.39is an example of a frame configuration when a single-carrier scheme is used. Symbols are present along the time axis. InFIG.39, symbols from time t1 to t22 are shown.
Preamble3901inFIG.39corresponds to preamble signal252in, for example,FIG.2,FIG.18,FIG.19,FIG.20,FIG.21,FIG.28,FIG.29,FIG.30,FIG.31,FIG.32, andFIG.33. Here, a preamble may transmit data (for control purposes), and may be configured as, for example, a symbol for signal detection, a signal for performing frequency and time synchronization, a symbol for performing channel estimation, or a symbol for frame synchronization (a symbol for performing propagation path fluctuation estimation).
Control information symbol3902inFIG.39is a symbol that corresponds to control information symbol signal253in, for example,FIG.2,FIG.18,FIG.19,FIG.20,FIG.21,FIG.28,FIG.29,FIG.30,FIG.31,FIG.32, andFIG.33, and is a symbol including control information for realizing demodulation and decoding of data symbols by the reception device that received the frame illustrated inFIG.39.
Pilot symbol3904illustrated inFIG.39is a symbol corresponding to pilot signal251A (pa(t)) in, for example,FIG.2,FIG.18,FIG.19,FIG.20,FIG.21,FIG.28,FIG.29,FIG.30,FIG.31,FIG.32, andFIG.33. Pilot symbol3904is, for example, a PSK symbol, and is used by the reception device that receives the frame for, for example, channel estimation (propagation path variation estimation), frequency offset estimation, and phase variation estimation. For example, the transmission device illustrated inFIG.1and the reception device that receives the frame illustrated inFIG.39may share the pilot symbol transmission method.
Data symbol3903inFIG.39is a symbol for transmitting data. Note that mapped signal201A (mapped signal105_1inFIG.1) is referred to as “stream #1” and mapped signal201B (mapped signal105_2inFIG.1) is referred to as “stream #2”. Data symbol3903is a symbol corresponding to a data symbol included in baseband signal208A generated by the signal processing illustrated in, for example,FIG.2,FIG.18,FIG.19,FIG.20,FIG.21,FIG.28,FIG.29,FIG.30,FIG.31,FIG.32, andFIG.33. Accordingly, data symbol3903is either (i) a symbol including both the symbol of “stream #1” and the symbol of “stream #2”, or (ii) either one of the symbol of “stream #1” or the symbol of “stream #2”. This is determined by the precoding matrix configuration used by weighting synthesizer203(in other words, data symbol3903corresponds to weighting synthesized signal204A (z1(t))).
Note that, although not illustrated inFIG.39, the frame may include symbols other than a preamble, control information symbol, data symbol, and pilot symbol. Moreover, not each of preamble3901, control information symbol3902, and pilot symbol3904need be present in the frame.
For example, inFIG.39, the transmission device transmits preamble3901at time t1, transmits control information symbol3902at time t2, transmits data symbols3903from time t3 to time t11, transmits pilot symbol3904at time t12, transmits data symbols3903from time t13 to time t21, and transmits pilot symbol3904at time t22.
FIG.40illustrates an example of a frame configuration of transmission signal106_B illustrated inFIG.1. InFIG.40, time is represented on the horizontal axis. The frame configuration illustrated inFIG.40is an example of a frame configuration when a single-carrier scheme is used. Symbols are present along the time axis. InFIG.40, symbols from time t1 to t22 are shown.
Preamble4001inFIG.40corresponds to preamble signal252in, for example,FIG.2,FIG.18,FIG.19,FIG.20,FIG.21,FIG.28,FIG.29,FIG.30,FIG.31,FIG.32, andFIG.33. Here, a preamble may transmit data (for control purposes), and may be configured as, for example, a symbol for signal detection, a signal for performing frequency and time synchronization, a symbol for performing channel estimation, or a symbol for frame synchronization (a symbol for performing propagation path fluctuation estimation).
Control information symbol4002inFIG.40is a symbol that corresponds to control information symbol signal253in, for example,FIG.2,FIG.18,FIG.19,FIG.20,FIG.21,FIG.28,FIG.29,FIG.30,FIG.31,FIG.32, andFIG.33, and is a symbol including control information for realizing demodulation and decoding of data symbols by the reception device that received the frame illustrated inFIG.40.
Pilot symbol4004illustrated inFIG.40is a symbol corresponding to pilot signal251B (pb(t)) in, for example,FIG.2,FIG.18,FIG.19,FIG.20,FIG.21,FIG.28,FIG.29,FIG.30,FIG.31,FIG.32, andFIG.33. Pilot symbol4004is, for example, a PSK symbol, and is used by the reception device that receives the frame for, for example, channel estimation (propagation path variation estimation), frequency offset estimation, and phase variation estimation. For example, the transmission device illustrated inFIG.1and the reception device that receives the frame illustrated inFIG.40may share the pilot symbol transmission method.
Data symbol4003inFIG.40is a symbol for transmitting data. Note that mapped signal201A (mapped signal105_1inFIG.1) is referred to as “stream #1” and mapped signal201B (mapped signal105_2inFIG.1) is referred to as “stream #2”. Data symbol4003is a symbol corresponding to a data symbol included in baseband signal208B generated by the signal processing illustrated in, for example,FIG.2,FIG.18,FIG.19,FIG.20,FIG.21,FIG.28,FIG.29,FIG.30,FIG.31,FIG.32, andFIG.33. Accordingly, data symbol4003is either (i) a symbol including both the symbol of “stream #1” and the symbol of “stream #2”, or (ii) either one of the symbol of “stream #1” or the symbol of “stream #2”. This is determined by the precoding matrix configuration used by weighting synthesizer203(in other words, data symbol4003corresponds to phase-changed signal206B (z2(t))).
Note that, although not illustrated inFIG.40, the frame may include symbols other than a preamble, control information symbol, data symbol, and pilot symbol. Moreover, not each of preamble4001, control information symbol4002, and pilot symbol4004need be present in the frame.
For example, inFIG.40, the transmission device transmits preamble4001at time t1, transmits control information symbol4002at time t2, transmits data symbols4003from time t3 to time t11, transmits pilot symbol4004at time t12, transmits data symbols4003from time t13 to time t21, and transmits pilot symbol4004at time t22.
When a symbol is present at time tp inFIG.39and a symbol is present at time tp inFIG.40(where p is an integer that is greater than or equal to 1), the symbol at time tp inFIG.39and the symbol at time tp inFIG.40are transmitted at the same time and at the same frequency, or at the same time and in the same frequency band. For example, the data symbol at time t3 inFIG.39and the data symbol at time t3 inFIG.40are transmitted at the same time and at the same frequency, or at the same time and in the same frequency band.
Note that the frame configuration is not limited to the configurations illustrated inFIG.39andFIG.40;FIG.39andFIG.40are mere examples of frame configurations. Moreover, a method in which the preamble and control information symbol inFIG.39andFIG.40transmit the same data (same control information) may be used.
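The frame structure common toFIG.39andFIG.40can be summarized in a short sketch. This is illustrative only: the build_frame helper and the dictionary representation are hypothetical, and only the symbol layout (times t1 to t22) and the same-time/same-frequency pairing are taken from the description above.

```python
# Illustrative reconstruction of the single-carrier frames of FIG. 39 and
# FIG. 40: the same symbol layout on both transmission signals, with
# symbols at the same time index sent at the same time and frequency.

def build_frame() -> dict[int, str]:
    frame = {1: "preamble", 2: "control information symbol"}
    frame.update({t: "data symbol" for t in range(3, 12)})   # t3..t11
    frame[12] = "pilot symbol"
    frame.update({t: "data symbol" for t in range(13, 22)})  # t13..t21
    frame[22] = "pilot symbol"
    return frame

frame_a = build_frame()  # transmission signal 106_A (FIG. 39)
frame_b = build_frame()  # transmission signal 106_B (FIG. 40)

# A symbol at time tp in FIG. 39 pairs with the symbol at time tp in FIG. 40.
for t in sorted(frame_a):
    assert frame_b[t] == frame_a[t]
print(frame_a[3], "/", frame_b[3], "transmitted together at time t3")
```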
Note that this description assumes that the frame ofFIG.39and the frame ofFIG.40are received at the same time by the reception device, but even when only the frame ofFIG.39or only the frame ofFIG.40has been received, the reception device can obtain the data transmitted by the transmission device.
Note that the single-carrier scheme transmission method and transmission device described in this embodiment may be implemented in combination with the embodiments described in the present specification.
Embodiment A4
In this embodiment, using the example described in Embodiment A2, an operational example of the terminal will be given.
FIG.24illustrates one example of a configuration of a terminal. As this example has already been described, repeated description will be omitted.
FIG.41illustrates one example of a configuration of reception device2404in the terminal illustrated inFIG.24.
Radio unit4103receives an input of reception signal4102received by antenna unit4101, performs processing such as frequency conversion, and outputs baseband signal4104.
Control information decoder4107receives an input of baseband signal4104, demodulates the control information symbol, and outputs control information4108.
Channel estimator4105receives an input of baseband signal4104, extracts the preamble and pilot symbols, performs channel fluctuation estimation, and outputs channel estimation signal4106.
Signal processor4109receives inputs of baseband signal4104, channel estimation signal4106, and control information4108, demodulates and performs error correction decoding on a data symbol based on control information4108, and outputs reception data4110.
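As a structural summary of the reception device ofFIG.41, the following is a minimal sketch; the class and method names are hypothetical placeholders, and actual frequency conversion, demodulation, estimation, and decoding are elided.

```python
# Illustrative structural sketch of the reception device of FIG. 41:
# radio unit -> (control information decoder, channel estimator)
# -> signal processor.

class ReceptionDeviceSketch:
    def radio_unit(self, reception_signal):
        # Frequency conversion etc.; outputs the baseband signal (4104).
        return {"baseband": reception_signal}

    def control_info_decoder(self, baseband):
        # Demodulates the control information symbol (control info 4108).
        return {"scheme": "single-carrier"}

    def channel_estimator(self, baseband):
        # Extracts preamble/pilot symbols and estimates channel
        # fluctuation (channel estimation signal 4106).
        return {"h_est": 1.0}

    def signal_processor(self, baseband, h_est, control_info):
        # Demodulates and error-correction-decodes the data symbols
        # based on the control information (reception data 4110).
        return b"reception data"

    def receive(self, reception_signal):
        bb = self.radio_unit(reception_signal)
        info = self.control_info_decoder(bb)
        est = self.channel_estimator(bb)
        return self.signal_processor(bb, est["h_est"], info)

print(ReceptionDeviceSketch().receive(reception_signal=[0.0, 1.0]))
```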
FIG.42illustrates an example of a frame configuration upon single modulated signal transmission by a base station or AP, which is the communication partner of the terminal, using a multi-carrier transmission scheme such as OFDM. InFIG.42, components that operate the same as inFIG.4share like reference marks. InFIG.42, frequency is represented on the horizontal axis, and symbols for carrier 1 through carrier 36 are shown. Moreover, inFIG.42, time is represented on the vertical axis, and symbols for time $1 through time $11 are shown. For example, the transmission device in the base station illustrated inFIG.1may transmit a single stream modulated signal having the frame configuration illustrated inFIG.42.
FIG.43illustrates an example of a frame configuration upon single modulated signal transmission by a base station or AP, which is the communication partner of the terminal, using a single-carrier transmission scheme. InFIG.43, components that operate the same as inFIG.39share like reference marks. InFIG.43, time is represented on the horizontal axis, and symbols from time t1 to time t22 are shown. For example, the transmission device in the base station illustrated inFIG.1may transmit a single stream modulated signal having the frame configuration illustrated inFIG.43.
For example, the transmission device in the base station illustrated inFIG.1may transmit a plurality of streams of a plurality of modulated signals having the frame configuration illustrated inFIG.4and/orFIG.5. Furthermore, for example, the transmission device in the base station illustrated inFIG.1may transmit a plurality of streams of a plurality of modulated signals having the frame configuration illustrated inFIG.39and/orFIG.40.
As a first example, the reception device of the terminal has the configuration illustrated inFIG.41and, for example, supports the following.
The reception device of the terminal supports reception under “communications scheme #A” described in Embodiment A2. Accordingly, even if the communication partner transmits a plurality of streams of a plurality of modulated signals, the terminal does not support reception of such. Thus, when the communication partner transmits a plurality of streams of a plurality of modulated signals and a phase change is implemented, the terminal does not support reception of such. The terminal supports only single-carrier schemes. The terminal supports only decoding of “error correction encoding scheme #C” as an error correction encoding scheme.
Therefore, based on the rules described in Embodiment A2, a terminal having the configuration illustrated inFIG.41that supports the above generates reception capability notification symbol3502illustrated inFIG.38and, for example, transmits reception capability notification symbol3502in accordance with the sequence illustrated inFIG.35. Here, the terminal uses, for example, transmission device2403illustrated inFIG.24to generate reception capability notification symbol3502illustrated inFIG.38, and transmission device2403illustrated inFIG.24transmits reception capability notification symbol3502illustrated inFIG.38in accordance with the sequence illustrated inFIG.35.
Reception device2304in the base station or AP illustrated inFIG.23receives reception capability notification symbol3502transmitted by the terminal. Control signal generator2308in the base station illustrated inFIG.23then extracts data from reception capability notification symbol3502and knows, from data3801relating to “supported scheme”, that the terminal supports communications scheme #A.
Accordingly, based on information3601relating to support for demodulation of modulated signals with phase changes inFIG.38being null and communications scheme #A being supported, control signal generator2308in the base station determines not to transmit a phase-changed modulated signal, and outputs control signal2309including such information. This is because communications scheme #A does not support transmission or reception of a plurality of modulated signals for a plurality of streams.
Based on information3702relating to support for reception for a plurality of streams inFIG.38being null and communications scheme #A being supported, control signal generator2308in the base station determines not to transmit a plurality of modulated signals for a plurality of streams, and outputs control signal2309including such information. This is because communications scheme #A does not support transmission or reception of a plurality of modulated signals for a plurality of streams.
Based on information3803relating to supported error correction encoding scheme inFIG.38being null and communications scheme #A being supported, control signal generator2308in the base station determines to use error correction encoding scheme #C, and outputs control signal2309including such information. This is because communications scheme #A supports error correction encoding scheme #C.
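The base station's decisions in this first example can be summarized in a short sketch. The function name generate_control_signal and the dictionary keys are hypothetical; only the rule (scheme #A implies single stream, no phase change, error correction encoding scheme #C) comes from the description above, and the other cases are elided.

```python
# Illustrative sketch of the control signal generator's decisions for the
# first example: the terminal reports communications scheme #A only, so
# data 3601, 3702, and 3803 are null.

def generate_control_signal(capability: dict) -> dict:
    decisions = {}
    if capability["data_3801"] == 0b01:  # communications scheme #A only
        # Scheme #A has no multi-stream transmission, so data 3601 and
        # data 3702 are null: one stream, no phase change.
        decisions["streams"] = 1
        decisions["phase_change"] = False
        # Scheme #A only selects error correction encoding scheme #C,
        # so data 3803 is null as well.
        decisions["error_correction"] = "#C"
    return decisions  # other capability combinations elided

cap = {"data_3801": 0b01, "data_3601": 1, "data_3702": 1, "data_3803": 0b01}
print(generate_control_signal(cap))
```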
For example, since the terminal having the configuration illustrated inFIG.41supports communications scheme #A, the above-described operations are performed so that the base station or AP does not transmit a plurality of modulated signals for a plurality of streams, whereby the base station or AP can achieve an advantageous effect of an improvement in data transmission efficiency in the system including the base station or AP and the terminal, due to the communications scheme #A modulated signal being accurately transmitted.
As a second example, the reception device of the terminal has the configuration illustrated inFIG.41, and supports the following.
The reception device of the terminal supports reception under “communications scheme #B” described in Embodiment A2. However, since the reception device has the configuration illustrated inFIG.41, even if the communication partner transmits a plurality of streams of a plurality of modulated signals, the terminal does not support reception of such. Thus, when the communication partner transmits a plurality of streams of a plurality of modulated signals and a phase change is implemented, the terminal does not support reception of such. The terminal supports a single-carrier scheme and a multi-carrier scheme such as OFDM. The terminal supports decoding of “error correction encoding scheme #C” and “error correction encoding scheme #D” as error correction encoding schemes.
Therefore, based on the rules described in Embodiment A2, a terminal having the configuration illustrated inFIG.41that supports the above transmits reception capability notification symbol3502illustrated inFIG.38.
Reception device2304in the base station or AP illustrated inFIG.23receives reception capability notification symbol3502transmitted by the terminal. Control signal generator2308in the base station illustrated inFIG.23then extracts data from reception capability notification symbol3502and knows, from data3801relating to “supported scheme”, that the terminal supports communications scheme #B. Moreover, based on information3702relating to support for reception for a plurality of streams illustrated inFIG.38, control signal generator2308in the base station knows that the terminal, which is the communication partner, cannot demodulate a plurality of modulated signals for a plurality of streams.
Accordingly, based on information3601relating to support for demodulation of modulated signals with phase changes inFIG.38being null, control signal generator2308in the base station determines not to transmit a phase-changed modulated signal, and outputs control signal2309including such information. This is because the terminal does not support “reception for a plurality of streams”.
Based on information3802relating to multi-carrier scheme support inFIG.38, control signal generator2308in the base station outputs control signal2309including information indicating that the terminal, which is the communication partner, supports a multi-carrier scheme and/or a single-carrier scheme.
Then, based on information3803relating to supported error correction encoding scheme inFIG.38, control signal generator2308in the base station outputs control signal2309including information indicating that the terminal, which is the communication partner, supports error correction encoding scheme #C and/or error correction encoding scheme #D.
Accordingly, the above-described operations are performed so that the base station or AP does not transmit a plurality of modulated signals for a plurality of streams, whereby the base station or AP can achieve an advantageous effect of an improvement in data transmission efficiency in the system including the base station or AP and the terminal, due to the single stream modulated signal being accurately transmitted.
As a third example, the reception device of the terminal has the configuration illustrated inFIG.41, and, for example, supports the following.
The reception device of the terminal supports reception under “communications scheme #A” and “communications scheme #B” described in Embodiment A2. However, since the reception device has the configuration illustrated inFIG.41, even if the communication partner transmits a plurality of streams of a plurality of modulated signals using either one of “communications scheme #A” or “communications scheme #B”, the terminal does not support reception of such. Thus, when the communication partner transmits a plurality of streams of a plurality of modulated signals and a phase change is implemented, the terminal does not support reception of such. Single-carrier schemes are supported in both “communications scheme #A” and “communications scheme #B”. Regarding error correction encoding schemes, the terminal supports decoding of “error correction encoding scheme #C” for “communications scheme #A”, and of “error correction encoding scheme #C” and “error correction encoding scheme #D” for “communications scheme #B”.
Therefore, based on the rules described in Embodiment A2, a terminal having the configuration illustrated inFIG.41that supports the above generates reception capability notification symbol3502illustrated inFIG.38and, for example, transmits reception capability notification symbol3502in accordance with the sequence illustrated inFIG.35.
Reception device2304in the base station or AP illustrated inFIG.23receives reception capability notification symbol3502transmitted by the terminal. Control signal generator2308in the base station illustrated inFIG.23then extracts data from reception capability notification symbol3502and knows, from data3801relating to “supported scheme”, that the terminal supports communications scheme #A and communications scheme #B. Moreover, based on information3702relating to support for reception for a plurality of streams illustrated inFIG.38, control signal generator2308in the base station knows that the terminal does not support reception for a plurality of streams.
Accordingly, based on information3601relating to support for demodulation of modulated signals with phase changes inFIG.38being null and communications scheme #A being supported, control signal generator2308in the base station determines not to transmit a phase-changed modulated signal, and outputs control signal2309including such information. This is because the terminal does not support transmission or reception of a plurality of modulated signals for a plurality of streams.
Control signal generator2308in the base station knows whether the terminal supports a single-carrier scheme and whether the terminal supports a multi-carrier scheme such as OFDM from information3802relating to multi-carrier scheme support inFIG.38.
Then, based on information3803relating to supported error correction encoding scheme inFIG.38, control signal generator2308in the base station knows that the terminal supports decoding of error correction encoding scheme #C and error correction encoding scheme #D.
Accordingly, the above-described operations are performed so that the base station or AP does not transmit a plurality of modulated signals for a plurality of streams, whereby the base station or AP can achieve an advantageous effect of an improvement in data transmission efficiency in the system including the base station or AP and the terminal, due to the single stream modulated signal being accurately transmitted.
As a fourth example, the reception device of the terminal has the configuration illustrated inFIG.41, and, for example, supports the following.
The reception device of the terminal supports reception under “communications scheme #A” and “communications scheme #B” described in Embodiment A2. However, since the reception device has the configuration illustrated inFIG.41, even if the communication partner transmits a plurality of streams of a plurality of modulated signals using either one of “communications scheme #A” or “communications scheme #B”, the terminal does not support reception of such. Thus, when the communication partner transmits a plurality of streams of a plurality of modulated signals and a phase change is implemented, the terminal does not support reception of such. The terminal supports a single-carrier scheme for “communications scheme #A”, and supports both a single-carrier scheme and a multi-carrier scheme such as OFDM for “communications scheme #B”. Regarding error correction encoding schemes, the terminal supports decoding of “error correction encoding scheme #C” for “communications scheme #A”, and of “error correction encoding scheme #C” and “error correction encoding scheme #D” for “communications scheme #B”.
Therefore, based on the rules described in Embodiment A2, a terminal having the configuration illustrated inFIG.41that supports the above generates reception capability notification symbol3502illustrated inFIG.38and, for example, transmits reception capability notification symbol3502in accordance with the sequence illustrated inFIG.35.
Reception device2304in the base station or AP illustrated inFIG.23receives reception capability notification symbol3502transmitted by the terminal. Control signal generator2308in the base station illustrated inFIG.23then extracts data from reception capability notification symbol3502and knows, from data3801relating to “supported scheme”, that the terminal supports communications scheme #A and communications scheme #B. Moreover, based on information3702relating to support for reception for a plurality of streams illustrated inFIG.38, control signal generator2308in the base station knows that the terminal does not support reception for a plurality of streams.
Accordingly, based on information3601relating to support for demodulation of modulated signals with phase changes inFIG.38being null and communications scheme #A being supported, control signal generator2308in the base station determines not to transmit a phase-changed modulated signal, and outputs control signal2309including such information. This is because the terminal does not support transmission or reception of a plurality of modulated signals for a plurality of streams.
Control signal generator2308in the base station knows whether the terminal supports a single-carrier scheme and whether the terminal supports a multi-carrier scheme such as OFDM from information3802relating to multi-carrier scheme support inFIG.38. Here, information3802relating to multi-carrier scheme support is required to have a configuration such as the following.
Information3802relating to multi-carrier scheme support is 4-bit information, and the 4 bits are expressed as g0, g1, g2, and g3.
When the terminal supports single-carrier scheme demodulation for communications scheme #A, (g0, g1)=(0, 0) is transmitted. When the terminal supports multi-carrier scheme demodulation such as OFDM for communications scheme #A, (g0, g1)=(0, 1) is transmitted. When the terminal supports both single-carrier scheme demodulation and multi-carrier scheme demodulation such as OFDM for communications scheme #A, (g0, g1)=(1, 1) is transmitted.
When the terminal supports single-carrier scheme demodulation for communications scheme #B, (g2, g3)=(0, 0) is transmitted. When the terminal supports multi-carrier scheme demodulation such as OFDM for communications scheme #B, (g2, g3)=(0, 1) is transmitted. When the terminal supports both single-carrier scheme demodulation and multi-carrier scheme demodulation such as OFDM for communications scheme #B, (g2, g3)=(1, 1) is transmitted.
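A minimal sketch of this 4-bit encoding follows; the helper name encode_3802 and the string labels are hypothetical, and only the (g0, g1, g2, g3) bit patterns come from the description above.

```python
# Illustrative encoding of the 4-bit field (g0, g1, g2, g3) of
# information 3802: (g0, g1) reports carrier support for communications
# scheme #A and (g2, g3) for communications scheme #B.

PATTERNS = {
    "single-carrier": (0, 0),
    "multi-carrier": (0, 1),
    "single- and multi-carrier": (1, 1),
}

def encode_3802(support_a: str, support_b: str) -> tuple[int, int, int, int]:
    g0, g1 = PATTERNS[support_a]  # communications scheme #A
    g2, g3 = PATTERNS[support_b]  # communications scheme #B
    return (g0, g1, g2, g3)

# Fourth example: single-carrier for scheme #A, both for scheme #B.
print(encode_3802("single-carrier", "single- and multi-carrier"))
# -> (0, 0, 1, 1)
```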
Then, based on information3803relating to supported error correction encoding scheme inFIG.38, control signal generator2308in the base station knows that the terminal supports decoding of error correction encoding scheme #C and error correction encoding scheme #D.
Accordingly, the above-described operations are performed so that the base station or AP does not transmit a plurality of modulated signals for a plurality of streams, whereby the base station or AP can achieve an advantageous effect of an improvement in data transmission efficiency in the system including the base station or AP and the terminal, due to the single stream modulated signal being accurately transmitted.
As a fifth example, the reception device of the terminal has the configuration illustrated inFIG.8, and, for example, supports the following.
The reception device of the terminal supports reception under “communications scheme #A” and “communications scheme #B” described in Embodiment A2. Accordingly, in “communications scheme #B”, even if the communication partner transmits a plurality of streams of a plurality of modulated signals, the terminal supports reception of such. Moreover, in “communications scheme #A” and “communications scheme #B”, even if the communication partner transmits a single stream modulated signal, the terminal supports reception of such. Thus, when the communication partner transmits a plurality of streams of modulated signals and a phase change is implemented, the terminal supports reception of such. The terminal supports only single-carrier schemes. The terminal supports only decoding of “error correction encoding scheme #C” as an error correction encoding scheme.
Therefore, based on the rules described in Embodiment A2, a terminal having the configuration illustrated inFIG.8that supports the above generates reception capability notification symbol3502illustrated inFIG.38and, for example, transmits reception capability notification symbol3502in accordance with the sequence illustrated inFIG.35. Here, the terminal uses, for example, transmission device2403illustrated inFIG.24to generate reception capability notification symbol3502illustrated inFIG.38, and transmission device2403illustrated inFIG.24transmits reception capability notification symbol3502illustrated inFIG.38in accordance with the sequence illustrated inFIG.35.
Reception device2304in the base station or AP illustrated inFIG.23receives reception capability notification symbol3502transmitted by the terminal.
Control signal generator2308in the base station illustrated inFIG.23then extracts data from reception capability notification symbol3502and knows, from data3801relating to “supported scheme”, that the terminal supports communications scheme #A and communications scheme #B.
Accordingly, based on information3702relating to support for reception for a plurality of streams inFIG.38, control signal generator2308in the base station knows that, in “communications scheme #B”, even if the communication partner transmits a plurality of streams of a plurality of modulated signals, the terminal supports reception of such, and that, in “communications scheme #A” and “communications scheme #B”, even if the communication partner transmits a single stream modulated signal, the terminal supports reception of such.
Control signal generator2308in the base station then knows that the terminal supports demodulation of modulated signals with phase changes based on information3601relating to support for demodulation of modulated signals with phase changes inFIG.38.
Control signal generator2308in the base station knows that the terminal supports only single-carrier schemes based on information3802relating to multi-carrier scheme support inFIG.38.
Then, based on information3803relating to supported error correction encoding scheme inFIG.38, control signal generator2308in the base station knows that the terminal supports decoding of error correction encoding scheme #C.
Accordingly, the base station or AP takes into consideration the communications method supported by the terminal and the communications environment, for example, and accurately generates and transmits a modulated signal receivable by the terminal, achieving an advantageous effect of an improvement in data transmission efficiency in the system including the base station or AP and the terminal.
As a sixth example, the reception device of the terminal has the configuration illustrated inFIG.8, and, for example, supports the following.
The reception device of the terminal supports reception under “communications scheme #A” and “communications scheme #B” described in Embodiment A2. Accordingly, in “communications scheme #B”, even if the communication partner transmits a plurality of streams of a plurality of modulated signals, the terminal supports reception of such. Moreover, in “communications scheme #A” and “communications scheme #B”, even if the communication partner transmits a single stream modulated signal, the terminal supports reception of such. However, when the communication partner transmits a plurality of streams of modulated signals and a phase change is implemented, the terminal does not support reception of such. Only a single-carrier scheme is supported. The terminal supports decoding of “error correction encoding scheme #C” and decoding of “error correction encoding scheme #D” as error correction encoding schemes.
Therefore, based on the rules described in Embodiment A2, a terminal having the configuration illustrated inFIG.8that supports the above generates reception capability notification symbol3502illustrated inFIG.38and, for example, transmits reception capability notification symbol3502in accordance with the sequence illustrated inFIG.35.
Here, the terminal uses, for example, transmission device2403illustrated inFIG.24to generate reception capability notification symbol3502illustrated inFIG.38, and transmission device2403illustrated inFIG.24transmits reception capability notification symbol3502illustrated inFIG.38in accordance with the sequence illustrated inFIG.35.
Reception device2304in the base station or AP illustrated inFIG.23receives reception capability notification symbol3502transmitted by the terminal. Control signal generator2308in the base station illustrated inFIG.23then extracts data from reception capability notification symbol3502and knows, from data3801relating to “supported scheme”, that the terminal supports communications scheme #A and communications scheme #B.
Accordingly, based on information3702relating to support for reception for a plurality of streams inFIG.38, control signal generator2308in the base station knows that, in “communications scheme #B”, even if the communication partner transmits a plurality of streams of a plurality of modulated signals, the terminal supports reception of such, and that, in “communications scheme #A” and “communications scheme #B”, even if the communication partner transmits a single stream modulated signal, the terminal supports reception of such.
Control signal generator2308in the base station then knows that the terminal does not support demodulation of modulated signals with phase changes based on information3601relating to support for demodulation of modulated signals with phase changes inFIG.38. Accordingly, the base station or AP transmits a modulated signal without implementing a phase change upon transmission of a plurality of streams of modulated signals to the terminal.
Control signal generator2308in the base station knows that the terminal supports only single-carrier schemes based on information3802relating to multi-carrier scheme support inFIG.38.
Then, based on information3803relating to supported error correction encoding scheme inFIG.38, control signal generator2308in the base station knows that the terminal supports decoding of error correction encoding scheme #C and decoding of error correction encoding scheme #D.
Accordingly, the base station or AP takes into consideration the communications method supported by the terminal and the communications environment, for example, and accurately generates and transmits a modulated signal receivable by the terminal, achieving an advantageous effect of an improvement in data transmission efficiency in the system including the base station or AP and the terminal.
As a seventh example, the reception device of the terminal has the configuration illustrated inFIG.8, and, for example, supports the following.
The reception device of the terminal supports reception under “communications scheme #A” and “communications scheme #B” described in Embodiment A2. Accordingly, in “communications scheme #B”, even if the communication partner transmits a plurality of streams of a plurality of modulated signals, the terminal supports reception of such. Moreover, in “communications scheme #A” and “communications scheme #B”, even if the communication partner transmits a single stream modulated signal, the terminal supports reception of such. The terminal supports a single-carrier scheme for “communications scheme #A”, and supports both a single-carrier scheme and a multi-carrier scheme such as OFDM for “communications scheme #B”.
However, only in the case of a multi-carrier scheme such as OFDM of communications scheme #B can the communication partner implement a phase change upon transmitting a plurality of streams of modulated signals. Thus, when the communication partner transmits a plurality of streams of modulated signals and a phase change is implemented, the terminal supports reception of such. The terminal supports decoding of “error correction encoding scheme #C” and decoding of “error correction encoding scheme #D” as error correction encoding schemes.
Therefore, based on the rules described in Embodiment A2 and this embodiment, a terminal having the configuration illustrated inFIG.8that supports the above generates reception capability notification symbol3502illustrated inFIG.38and, for example, transmits reception capability notification symbol3502in accordance with the sequence illustrated inFIG.35. Here, the terminal uses, for example, transmission device2403illustrated inFIG.24to generate reception capability notification symbol3502illustrated inFIG.38, and transmission device2403illustrated inFIG.24transmits reception capability notification symbol3502illustrated inFIG.38in accordance with the sequence illustrated inFIG.35.
Reception device2304in the base station or AP illustrated inFIG.23receives reception capability notification symbol3502transmitted by the terminal. Control signal generator2308in the base station illustrated inFIG.23then extracts data from reception capability notification symbol3502and knows, from data3801relating to “supported scheme”, that the terminal supports communications scheme #A and communications scheme #B.
Accordingly, based on information3702relating to support for reception for a plurality of streams inFIG.38, control signal generator2308in the base station knows that, in “communications scheme #B”, even if the communication partner transmits a plurality of streams of a plurality of modulated signals, the terminal supports reception of such, and that, in “communications scheme #A” and “communications scheme #B”, even if the communication partner transmits a single stream modulated signal, the terminal supports reception of such.
Control signal generator2308in the base station then knows that the terminal supports demodulation of modulated signals with phase changes based on information3601relating to support for demodulation of modulated signals with phase changes inFIG.38. Accordingly, the base station or AP may implement a phase change upon transmission of a plurality of streams of modulated signals to the terminal using a multi-carrier scheme such as OFDM of “communications scheme #B”, and otherwise transmits modulated signals without implementing a phase change. Note that, as described above, when the base station obtains information indicating “demodulation of modulated signals with phase changes is supported” from information3601relating to “support for demodulation of modulated signals with phase changes”, the base station understands that this applies only when the scheme is “communications scheme #B” (a multi-carrier scheme such as OFDM).
Control signal generator2308in the base station knows that the terminal supports a single-carrier scheme for “communications scheme #A” and supports a single-carrier scheme and a multi-carrier scheme such as OFDM for “communications scheme #B” based on information3802relating to multi-carrier scheme support inFIG.38(here, as described above, a configuration is acceptable in which the terminal notifies the base station or AP of the status regarding (i) support of a single-carrier scheme and of a multi-carrier scheme such as OFDM for “communications scheme #A”, and (ii) support of a single-carrier scheme and of a multi-carrier scheme such as OFDM for “communications scheme #B”).
Then, based on information3803relating to supported error correction encoding scheme inFIG.38, control signal generator2308in the base station knows that the terminal supports decoding of error correction encoding scheme #C and decoding of error correction encoding scheme #D.
Accordingly, the base station or AP takes into consideration the communications method supported by the terminal and the communications environment, for example, and accurately generates and transmits a modulated signal receivable by the terminal, achieving an advantageous effect of an improvement in data transmission efficiency in the system including the base station or AP and the terminal.
As an eighth example, the reception device of the terminal has the configuration illustrated inFIG.8, and, for example, supports the following.
The reception device of the terminal supports reception under “communications scheme #A” and “communications scheme #B” described in Embodiment A2. Accordingly, in “communications scheme #B”, even if the communication partner transmits a plurality of streams of a plurality of modulated signals, the terminal supports reception of such. Moreover, in “communications scheme #A” and “communications scheme #B”, even if the communication partner transmits a single stream modulated signal, the terminal supports reception of such.
More specifically, in a single-carrier scheme of “communications scheme #B”, even if the communication partner transmits a plurality of streams of a plurality of modulated signals, the terminal supports reception of such. However, in a multi-carrier scheme such as OFDM of “communications scheme #B”, even if the communication partner transmits a plurality of streams of a plurality of modulated signals, the terminal does not support reception of such. Moreover, in the case of a single-carrier scheme of “communications scheme #A”, when the communication partner transmits a single stream, the terminal supports reception of such (but does not support reception of a multi-carrier scheme such as OFDM). Thus, when the communication partner transmits a plurality of streams of modulated signals and a phase change is implemented, the terminal supports reception of such. The terminal supports decoding of “error correction encoding scheme #C” and decoding of “error correction encoding scheme #D” as error correction encoding schemes.
Therefore, based on the rules described in Embodiment A2, a terminal having the configuration illustrated inFIG.8that supports the above generates reception capability notification symbol3502illustrated inFIG.38and, for example, transmits reception capability notification symbol3502in accordance with the sequence illustrated inFIG.35.
Here, the terminal uses, for example, transmission device2403illustrated inFIG.24to generate reception capability notification symbol3502illustrated inFIG.38, and transmission device2403illustrated inFIG.24transmits reception capability notification symbol3502illustrated inFIG.38in accordance with the sequence illustrated inFIG.35.
Reception device2304in the base station or AP illustrated inFIG.23receives reception capability notification symbol3502transmitted by the terminal. Control signal generator2308in the base station illustrated inFIG.23then extracts data from reception capability notification symbol3502and knows, from data3801relating to “supported scheme”, that the terminal supports communications scheme #A and communications scheme #B.
Moreover, based on information3702relating to support for reception for a plurality of streams inFIG.38, control signal generator2308in the base station knows that, even when the base station transmits a plurality of streams of a plurality of modulated signals, the terminal supports reception of such in the case of a single-carrier scheme of “communications scheme #B”, and that, even when the base station transmits a plurality of streams of a plurality of modulated signals, the terminal does not support reception of such in the case of a multi-carrier scheme such as OFDM of “communications scheme #B”. Moreover, based on information3702relating to support for reception for a plurality of streams inFIG.38, control signal generator2308in the base station knows that, in “communications scheme #A” and “communications scheme #B”, even if the base station transmits a single stream of a modulated signal, the terminal supports reception of such.
Here, information3702relating to support for reception for a plurality of streams is required to have a configuration such as the following.
Information3702relating to support for reception for a plurality of streams is 2-bit information, and the 2 bits are expressed as h0 and h1.
In the case of a single-carrier scheme of “communications scheme #B”, when the communication partner transmits a plurality of streams of modulated signals and the terminal supports demodulation, h0=1 is transmitted, and when the terminal does not support demodulation, h0=0 is transmitted.
In the case of a multi-carrier scheme such as OFDM of “communications scheme #B”, when the communication partner transmits a plurality of streams of modulated signals and the terminal supports demodulation, h1=1 is transmitted, and when the terminal does not support demodulation, h1=0 is transmitted.
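A minimal sketch of this 2-bit encoding follows; the helper name encode_3702 is hypothetical, and only the h0/h1 semantics come from the description above.

```python
# Illustrative encoding of the 2-bit field (h0, h1) of information 3702:
# h0 reports multi-stream demodulation support for the single-carrier
# case of communications scheme #B, h1 for the multi-carrier (e.g., OFDM)
# case of communications scheme #B.

def encode_3702(single_carrier_multistream: bool,
                multi_carrier_multistream: bool) -> tuple[int, int]:
    h0 = 1 if single_carrier_multistream else 0
    h1 = 1 if multi_carrier_multistream else 0
    return (h0, h1)

# Eighth example: multi-stream reception supported only for the
# single-carrier case of communications scheme #B.
print(encode_3702(True, False))  # -> (1, 0)
```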
Accordingly, the base station or AP takes into consideration the communications method supported by the terminal and the communications environment, for example, and accurately generates and transmits a modulated signal receivable by the terminal to achieve an advantageous effect of an improvement in data transmission efficiency in the system including the base station or AP and terminal. As a ninth example, the reception device of the terminal has the configuration illustrated in FIG. 8, and, for example, supports the following. For example, the reception device of the terminal supports reception under “communications scheme #A” and “communications scheme #B” described in Embodiment A2. Accordingly, in “communications scheme #B”, even if the communication partner transmits a plurality of streams of a plurality of modulated signals, the terminal supports reception of such. Moreover, in “communications scheme #A” and “communications scheme #B”, even if the communication partner transmits a single stream of a modulated signal, the terminal supports reception of such. In “communications scheme #B”, the base station or AP can transmit a plurality of modulated signals for a plurality of streams in the case of a single-carrier scheme and a multi-carrier scheme such as OFDM. However, only in the case of a multi-carrier scheme such as OFDM of “communications scheme #B” can the communication partner implement a phase change upon transmitting a plurality of streams of modulated signals. Thus, when the communication partner transmits a plurality of streams of modulated signals and a phase change is implemented, the terminal supports reception of such. The terminal supports decoding of “error correction encoding scheme #C” and decoding of “error correction encoding scheme #D” as an error correction encoding scheme. Therefore, based on the rules described in Embodiment A2, a terminal having the configuration illustrated in FIG. 8 that supports the above generates reception capability notification symbol 3502 illustrated in FIG. 38 and, for example, transmits reception capability notification symbol 3502 in accordance with the sequence illustrated in FIG. 35. Here, the terminal uses, for example, transmission device 2403 illustrated in FIG. 24 to generate reception capability notification symbol 3502 illustrated in FIG. 38, and transmission device 2403 illustrated in FIG. 24 transmits reception capability notification symbol 3502 illustrated in FIG. 38 in accordance with the sequence illustrated in FIG. 35. Reception device 2304 in the base station or AP illustrated in FIG. 23 receives reception capability notification symbol 3502 transmitted by the terminal. Control signal generator 2308 in the base station illustrated in FIG. 23 then extracts data from reception capability notification symbol 3502, and knows, from supported scheme 3801, that the terminal supports communications scheme #A and communications scheme #B. Based on information 3702 relating to support for reception for a plurality of streams in FIG. 38, control signal generator 2308 in the base station knows that in “communications scheme #B”, even if the communication partner transmits a plurality of streams of a plurality of modulated signals, the terminal supports reception of such, and that in “communications scheme #A” and “communications scheme #B”, even if the communication partner transmits a single-stream modulated signal, the terminal supports reception of such.
Moreover, based on information 3802 relating to multi-carrier scheme support in FIG. 38, control signal generator 2308 in the base station knows whether the terminal supports a single-carrier scheme, supports a multi-carrier scheme such as OFDM, or supports both a single-carrier scheme and a multi-carrier scheme such as OFDM. When the terminal supports only a single-carrier scheme, control signal generator 2308 in the base station, upon knowing this, ignores information 3601 relating to support for demodulation of modulated signals with phase changes in FIG. 38 and interprets the terminal as not supporting such demodulation (since, in the case of a single-carrier scheme, phase change is not supported). When the terminal supports a multi-carrier scheme such as OFDM or supports both a multi-carrier scheme such as OFDM and a single-carrier scheme, based on information 3601 relating to support for demodulation of modulated signals with phase changes in FIG. 38, control signal generator 2308 in the base station obtains information indicating that the terminal supports demodulation of modulated signals with phase changes, or information indicating that it does not. Then, based on information 3803 relating to supported error correction encoding scheme in FIG. 38, control signal generator 2308 in the base station knows that the terminal supports decoding of error correction encoding scheme #C and decoding of error correction encoding scheme #D. Accordingly, the base station or AP takes into consideration the communications method supported by the terminal and the communications environment, for example, and accurately generates and transmits a modulated signal receivable by the terminal to achieve an advantageous effect of an improvement in data transmission efficiency in the system including the base station or AP and terminal. As a tenth example, the reception device of the terminal has the configuration illustrated in FIG. 8, and, for example, supports the following. For example, the reception device of the terminal supports reception under “communications scheme #A” and “communications scheme #B” described in Embodiment A2. Accordingly, in “communications scheme #B”, even if the communication partner transmits a plurality of streams of a plurality of modulated signals, the terminal supports reception of such. Moreover, in “communications scheme #A” and “communications scheme #B”, even if the communication partner transmits a single-stream modulated signal, the terminal supports reception of such. In “communications scheme #B”, the base station or AP can transmit a plurality of modulated signals for a plurality of streams in the case of a single-carrier scheme and a multi-carrier scheme such as OFDM. Then, in the case of a single-carrier scheme, when the communication partner transmits a plurality of streams of modulated signals, whether to implement a phase change or not can be set, and in the case of a multi-carrier scheme such as OFDM, when the communication partner transmits a plurality of streams of modulated signals, whether to implement a phase change or not can be set. The terminal supports decoding of “error correction encoding scheme #C” and decoding of “error correction encoding scheme #D” as an error correction encoding scheme.
Therefore, based on the rules described in Embodiment A2, a terminal having the configuration illustrated in FIG. 8 that supports the above generates reception capability notification symbol 3502 illustrated in FIG. 38 and, for example, transmits reception capability notification symbol 3502 in accordance with the sequence illustrated in FIG. 35. Here, the terminal uses, for example, transmission device 2403 illustrated in FIG. 24 to generate reception capability notification symbol 3502 illustrated in FIG. 38, and transmission device 2403 illustrated in FIG. 24 transmits reception capability notification symbol 3502 illustrated in FIG. 38 in accordance with the sequence illustrated in FIG. 35. Reception device 2304 in the base station or AP illustrated in FIG. 23 receives reception capability notification symbol 3502 transmitted by the terminal. Control signal generator 2308 in the base station illustrated in FIG. 23 then extracts data from reception capability notification symbol 3502, and knows, from supported scheme 3801, that the terminal supports communications scheme #A and communications scheme #B. Based on information 3702 relating to support for reception for a plurality of streams in FIG. 38, control signal generator 2308 in the base station knows that in “communications scheme #B”, even if the communication partner transmits a plurality of streams of a plurality of modulated signals, the terminal supports reception of such, and that in “communications scheme #A” and “communications scheme #B”, even if the communication partner transmits a single-stream modulated signal, the terminal supports reception of such. Moreover, based on information 3802 relating to multi-carrier scheme support in FIG. 38, control signal generator 2308 in the base station knows whether the terminal supports a single-carrier scheme, supports a multi-carrier scheme such as OFDM, or supports both a single-carrier scheme and a multi-carrier scheme such as OFDM. Control signal generator 2308 in the base station then knows whether the terminal supports phase change, based on information 3601 relating to support for demodulation of modulated signals with phase changes in FIG. 38. Here, information 3601 relating to support for demodulation of modulated signals with phase changes is required to have a configuration such as the following. Information 3601 relating to support for demodulation of modulated signals with phase changes is 2-bit information, and the 2 bits are expressed as k0 and k1. In the case of a single-carrier scheme of “communications scheme #B”, when the communication partner transmits a plurality of modulated signals for a plurality of streams and a phase change has been implemented, k0=1 is transmitted when the terminal supports demodulation, and k0=0 is transmitted when the terminal does not support demodulation. In the case of a multi-carrier scheme such as OFDM of “communications scheme #B”, when the communication partner transmits a plurality of modulated signals for a plurality of streams and a phase change has been implemented, k1=1 is transmitted when the terminal supports demodulation, and k1=0 is transmitted when the terminal does not support demodulation. Then, based on information 3803 relating to supported error correction encoding scheme in FIG. 38, control signal generator 2308 in the base station knows that the terminal supports decoding of error correction encoding scheme #C and decoding of error correction encoding scheme #D.
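As an illustration of how the base station side could act on the k0/k1 bits of information 3601, here is a minimal sketch (the function name, the packing order with k1 in the high bit, and the string labels are assumptions for illustration):

    # Sketch: base-station interpretation of the k0/k1 bits in information 3601.
    def phase_change_receivable(bits: int, carrier_scheme: str) -> bool:
        """True if the terminal can demodulate phase-changed plural-stream
        modulated signals of scheme #B under the given carrier scheme."""
        k0 = bits & 0b01         # single-carrier scheme of "communications scheme #B"
        k1 = (bits >> 1) & 0b01  # multi-carrier scheme such as OFDM of scheme #B
        return bool(k0 if carrier_scheme == "single-carrier" else k1)

    # A terminal reporting k0 = 0, k1 = 1: the base station may apply a phase
    # change to plural streams only when transmitting with OFDM.
    assert not phase_change_receivable(0b10, "single-carrier")
    assert phase_change_receivable(0b10, "OFDM")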
Accordingly, the base station or AP takes into consideration the communications method supported by the terminal and the communications environment, for example, and accurately generates and transmits a modulated signal receivable by the terminal to achieve an advantageous effect of an improvement in data transmission efficiency in the system including the base station or AP and terminal. As described above, the base station or AP obtains, from the terminal, which is the communication partner of the base station or AP, information relating to the schemes whose demodulation is supported by the terminal, and based on that information, determines the number of modulated signals, the communications method of the modulated signals, and the signal processing method of the modulated signals, for example. As a result, the base station or AP can accurately generate and transmit a modulated signal receivable by the terminal, which makes it possible to achieve an advantageous effect of an improvement in data transmission efficiency in the system including the base station or AP and terminal. Here, for example, as illustrated in FIG. 38, by configuring a reception capability notification symbol from a plurality of items of information, the base station or AP can easily determine the validity of information included in the reception capability notification symbol, and as a result, it is possible to rapidly determine, for example, a modulated signal scheme and signal processing method to be used for transmission. Then, based on the information in the reception capability notification symbols transmitted by the terminals, the base station or AP can improve data transmission efficiency by transmitting modulated signals to each terminal using a suitable transmission method. Note that the method of configuring the information on the reception capability notification symbol described in this embodiment is merely one non-limiting example. Moreover, the order in which and timing at which the terminal transmits the reception capability notification symbols to the base station or AP described in this embodiment are merely non-limiting examples.

Embodiment A5

In the present specification, one example of a configuration of a transmission device, such as a base station, access point, or broadcast station, illustrated in FIG. 1 was described. In this embodiment, another example of a configuration of a transmission device, such as a base station, access point, or broadcast station, that is illustrated in FIG. 44 and differs from FIG. 1 will be described. In FIG. 44, components that operate the same as in FIG. 1 share like reference marks. Accordingly, repeated description will be omitted. In FIG. 44, the point of difference from FIG. 1 is the inclusion of a plurality of error correction encoders. In FIG. 44, there are two error correction encoders (note that the number of error correction encoders is not limited to one in the case of FIG. 1 or two in the case of FIG. 44; for example, three or more may be provided, and the mappers may use the data output by each of the error correction encoders to perform mapping). In FIG. 44, error correction encoder 102_1 receives inputs of first data 101_1 and control signal 100, error correction encodes first data 101_1 based on information on the error correction encoding method included in control signal 100, and outputs encoded data 103_1.
Mapper 104_1 receives inputs of encoded data 103_1 and control signal 100, and based on information on the modulation scheme included in control signal 100, performs mapping on encoded data 103_1, and outputs mapped signal 105_1. Error correction encoder 102_2 receives inputs of second data 101_2 and control signal 100, error correction encodes second data 101_2 based on information on the error correction encoding method included in control signal 100, and outputs encoded data 103_2. Mapper 104_2 receives inputs of encoded data 103_2 and control signal 100, and based on information on the modulation scheme included in control signal 100, performs mapping on encoded data 103_2, and outputs mapped signal 105_2. Then, even when the operations described in this embodiment are performed with respect to the configuration of the transmission device illustrated in FIG. 44, implementation just like in FIG. 1 is possible and the same advantageous effects are also obtainable. Note that, for example, the transmission device such as a base station, AP, or broadcast station may switch between transmitting a modulated signal with the configuration illustrated in FIG. 1 and transmitting a modulated signal with the configuration illustrated in FIG. 44.

Embodiment A6

Examples of configurations of signal processor 106 described with reference to, for example, FIG. 1 are illustrated in FIG. 20, FIG. 21, and FIG. 22. Next, an example of operations performed by phase changers 205A, 205B illustrated in FIG. 20, FIG. 21, and FIG. 22 will be given. As described in Embodiment 4, the phase change value of phase changer 205A is expressed as w(i), and the phase change value of phase changer 205B is expressed as y(i). Here, z1(i) and z2(i) are expressed as in Equation (52). The phase change cycle of phase changer 205A is N, and the phase change cycle of phase changer 205B is N. However, N is an integer that is greater than or equal to 3; in other words, N is an integer greater than 2, where 2 is the number of transmission streams or the number of transmission modulated signals. Here, phase change value w(i) and phase change value y(i) are applied as follows.

$$w(i) = e^{j\left(\frac{\pi \times i}{N} + \Delta\right)} \qquad \text{Equation (137)}$$

$$y(i) = e^{j\left(-\frac{\pi \times i}{N} + \Omega\right)} \qquad \text{Equation (138)}$$

Note that Δ in Equation (137) and Ω in Equation (138) are real numbers (in one extremely simple, non-limiting example, Δ and Ω are both zero). When set in this manner, the peak-to-average power ratio (PAPR) of signal z1(t) (or z1(i)) and the PAPR of signal z2(t) (or z2(i)) in FIG. 20, FIG. 21, and FIG. 22 are, in the case of a single-carrier scheme, the same. Accordingly, the phase noise in radio units 107_A and 107_B in, for example, FIG. 1 and the linearity requirements for the transmission power units are the same, which is advantageous since low power consumption is easily achievable and a common radio unit configuration can be used (note that there is a high probability that the same advantageous effects can be achieved when a multi-carrier scheme such as OFDM is used). Phase change values w(i) and y(i) may be applied in the following manner.

$$w(i) = e^{j\left(-\frac{\pi \times i}{N} + \Delta\right)} \qquad \text{Equation (139)}$$

$$y(i) = e^{j\left(\frac{\pi \times i}{N} + \Omega\right)} \qquad \text{Equation (140)}$$

Even when applied as in Equation (139) and Equation (140), the same advantageous effects as above can be achieved. Phase change values w(i) and y(i) may also be applied in the following manner.

$$w(i) = e^{j\left(\frac{k \times \pi \times i}{N} + \Delta\right)} \qquad \text{Equation (141)}$$

$$y(i) = e^{j\left(-\frac{k \times \pi \times i}{N} + \Omega\right)} \qquad \text{Equation (142)}$$

Note that k is an integer other than 0 (for example, k may be 1, −1, 2, or −2; these are non-limiting examples).
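The following is a numerical sketch of Equations (141) and (142) (Equations (137) through (140) are the special cases k = 1 and k = −1); NumPy is assumed, and N, k, Δ, and Ω are freely chosen example values:

    # Numerical sketch of the phase change values in Equations (141)/(142).
    import numpy as np

    N = 5                    # phase change cycle (integer >= 3)
    k = 1                    # any nonzero integer
    delta, omega = 0.0, 0.0  # the real offsets Delta and Omega

    i = np.arange(4 * N)
    w = np.exp(1j * (k * np.pi * i / N + delta))   # Equation (141)
    y = np.exp(1j * (-k * np.pi * i / N + omega))  # Equation (142)

    # Both multipliers have unit magnitude, so they do not change the magnitude
    # of any symbol they are applied to (the basis of the equal-PAPR property).
    assert np.allclose(np.abs(w), 1.0) and np.allclose(np.abs(y), 1.0)

    # The phase difference between the two streams repeats every N symbols.
    rel = w * np.conj(y)
    assert np.allclose(rel[:N], rel[N:2 * N])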
Even when applied as in Equation (141) and Equation (142), the same advantageous effects as above can be achieved.

Embodiment A7

Examples of configurations of signal processor 106 described with reference to, for example, FIG. 1 are illustrated in FIG. 31, FIG. 32, and FIG. 33. Next, an example of operations performed by phase changers 205A, 205B illustrated in FIG. 31, FIG. 32, and FIG. 33 will be given. As described in Embodiment 7, in phase changer 205B, for example, a phase change of y(i) is applied to s2(i). Accordingly, when phase-changed signal 2801B is expressed as s2′(i), s2′(i) can be expressed as s2′(i) = y(i) × s2(i) (i is a symbol number (i is an integer that is greater than or equal to 0)). In phase changer 205A, for example, a phase change of w(i) is applied to s1(i). Accordingly, when phase-changed signal 2901A is expressed as s1′(i), s1′(i) can be expressed as s1′(i) = w(i) × s1(i) (i is a symbol number (i is an integer that is greater than or equal to 0)). The phase change cycle of phase changer 205A is N, and the phase change cycle of phase changer 205B is N. However, N is an integer that is greater than or equal to 3; in other words, N is an integer greater than 2, where 2 is the number of transmission streams or the number of transmission modulated signals. Here, phase change value w(i) and phase change value y(i) are applied as follows.

$$w(i) = e^{j\left(\frac{\pi \times i}{N} + \Delta\right)} \qquad \text{Equation (143)}$$

$$y(i) = e^{j\left(-\frac{\pi \times i}{N} + \Omega\right)} \qquad \text{Equation (144)}$$

Note that Δ in Equation (143) and Ω in Equation (144) are real numbers (in one extremely simple, non-limiting example, Δ and Ω are both zero). When set in this manner, the peak-to-average power ratio (PAPR) of signal z1(t) (or z1(i)) and the PAPR of signal z2(t) (or z2(i)) in FIG. 31, FIG. 32, and FIG. 33 are, in the case of a single-carrier scheme, the same. Accordingly, the phase noise in radio units 107_A and 107_B in, for example, FIG. 1 and the linearity requirements for the transmission power units are the same, which is advantageous since low power consumption is easily achievable and a common radio unit configuration can be used (note that there is a high probability that the same advantageous effects can be achieved when a multi-carrier scheme such as OFDM is used). Phase change values w(i) and y(i) may be applied in the following manner.

$$w(i) = e^{j\left(-\frac{\pi \times i}{N} + \Delta\right)} \qquad \text{Equation (145)}$$

$$y(i) = e^{j\left(\frac{\pi \times i}{N} + \Omega\right)} \qquad \text{Equation (146)}$$

Even when applied as in Equation (145) and Equation (146), the same advantageous effects as above can be achieved. Phase change values w(i) and y(i) may also be applied in the following manner.

$$w(i) = e^{j\left(\frac{k \times \pi \times i}{N} + \Delta\right)} \qquad \text{Equation (147)}$$

$$y(i) = e^{j\left(-\frac{k \times \pi \times i}{N} + \Omega\right)} \qquad \text{Equation (148)}$$

Note that k is an integer other than 0 (for example, k may be 1, −1, 2, or −2; these are non-limiting examples). Even when applied as in Equation (147) and Equation (148), the same advantageous effects as above can be achieved.

(Supplemental Information 5)

The embodiments of the present specification may be implemented for multi-carrier schemes such as OFDM and may be implemented for single-carrier schemes. Hereinafter, additional information will be given for cases in which a single-carrier scheme is applied.
For example, in Embodiment 1, using, for example, Equation (1) to Equation (36) and FIG. 2, or in other embodiments, using FIG. 18 to FIG. 22 and FIG. 28 to FIG. 33, signal z1(i) and signal z2(i) (or signal z1′(i) and signal z2′(i)) are generated, and signal z1(i) and signal z2(i) (or signal z1′(i) and signal z2′(i)) are transmitted from the transmission device at the same time and at the same frequency (same frequency band). Note that i is a symbol number. Here, for example, in cases in which a multi-carrier scheme such as OFDM is used, as described in Embodiments 1 through 6, signal z1(i) and signal z2(i) (or signal z1′(i) and signal z2′(i)) are taken as functions of frequency (carrier number), functions of time and frequency, or functions of time, and, for example, are arranged as follows. Signal z1(i) and signal z2(i) (or signal z1′(i) and signal z2′(i)) are arranged along the frequency axis. Signal z1(i) and signal z2(i) (or signal z1′(i) and signal z2′(i)) are arranged along the time axis. Signal z1(i) and signal z2(i) (or signal z1′(i) and signal z2′(i)) are arranged along both the frequency and time axes. Next, a specific example will be given. FIG. 45 illustrates an example of a method of arranging symbols on the time axis for signal z1(i) and signal z2(i) (or signal z1′(i) and signal z2′(i)). In FIG. 45, for example, zq(0) is shown. Here, q is 1 or 2. Accordingly, zq(0) in FIG. 45 indicates “z1(0) and z2(0), that is, z1(i) and z2(i) when symbol number i = 0”. Similarly, zq(1) indicates “z1(1) and z2(1), that is, z1(i) and z2(i) when symbol number i = 1” (in other words, zq(X) indicates “z1(X) and z2(X), that is, z1(i) and z2(i) when symbol number i = X”). Note that this also applies to FIG. 46, FIG. 47, FIG. 48, FIG. 49, and FIG. 50. As illustrated in FIG. 45, symbol zq(0) whose symbol number i = 0 is arranged at time 0, symbol zq(1) whose symbol number i = 1 is arranged at time 1, symbol zq(2) whose symbol number i = 2 is arranged at time 2, symbol zq(3) whose symbol number i = 3 is arranged at time 3, and so on. With this, symbols are arranged on the time axis for signal z1(i) and signal z2(i) (or signal z1′(i) and signal z2′(i)). However, FIG. 45 merely illustrates one example; the relationship between time and symbol number is not limited to this example. FIG. 46 illustrates an example of a method of arranging symbols on the frequency axis for signal z1(i) and signal z2(i) (or signal z1′(i) and signal z2′(i)). As illustrated in FIG. 46, symbol zq(0) whose symbol number i = 0 is arranged at carrier 0, symbol zq(1) whose symbol number i = 1 is arranged at carrier 1, symbol zq(2) whose symbol number i = 2 is arranged at carrier 2, symbol zq(3) whose symbol number i = 3 is arranged at carrier 3, and so on. With this, symbols are arranged on the frequency axis for signal z1(i) and signal z2(i) (or signal z1′(i) and signal z2′(i)). However, FIG. 46 merely illustrates one example; the relationship between frequency and symbol number is not limited to this example. FIG. 47 illustrates an example of a method of arranging symbols on the time and frequency axes for signal z1(i) and signal z2(i) (or signal z1′(i) and signal z2′(i)). As illustrated in FIG. 47, symbol zq(0) whose symbol number i = 0 is arranged at time 0 and carrier 0, symbol zq(1) whose symbol number i = 1 is arranged at time 0 and carrier 1, symbol zq(2) whose symbol number i = 2 is arranged at time 1 and carrier 0, symbol zq(3) whose symbol number i = 3 is arranged at time 1 and carrier 1, and so on.
With this, symbols are arranged on the time and frequency axes for signal z1(i) and signal z2(i) (or signal z1′(i) and signal z2′(i)). However, FIG. 47 merely illustrates one example; the relationship between time, frequency, and symbol number is not limited to this example. FIG. 48 illustrates a second example of an arrangement of symbols on the time axis for signal z1(i) and signal z2(i) (or signal z1′(i) and signal z2′(i)). As illustrated in FIG. 48, symbol zq(0) whose symbol number i = 0 is arranged at time 0, symbol zq(1) whose symbol number i = 1 is arranged at time 16, symbol zq(2) whose symbol number i = 2 is arranged at time 12, symbol zq(3) whose symbol number i = 3 is arranged at time 5, and so on. With this, symbols are arranged on the time axis for signal z1(i) and signal z2(i) (or signal z1′(i) and signal z2′(i)). However, FIG. 48 merely illustrates one example; the relationship between time and symbol number is not limited to this example. FIG. 49 illustrates a second example of an arrangement of symbols on the frequency axis for signal z1(i) and signal z2(i) (or signal z1′(i) and signal z2′(i)). As illustrated in FIG. 49, symbol zq(0) whose symbol number i = 0 is arranged at carrier 0, symbol zq(1) whose symbol number i = 1 is arranged at carrier 16, symbol zq(2) whose symbol number i = 2 is arranged at carrier 12, symbol zq(3) whose symbol number i = 3 is arranged at carrier 5, and so on. With this, symbols are arranged on the frequency axis for signal z1(i) and signal z2(i) (or signal z1′(i) and signal z2′(i)). However, FIG. 49 merely illustrates one example; the relationship between frequency and symbol number is not limited to this example. FIG. 50 illustrates an example of an arrangement of symbols on the time and frequency axes for signal z1(i) and signal z2(i) (or signal z1′(i) and signal z2′(i)). As illustrated in FIG. 50, symbol zq(0) whose symbol number i = 0 is arranged at time 1 and carrier 1, symbol zq(1) whose symbol number i = 1 is arranged at time 3 and carrier 3, symbol zq(2) whose symbol number i = 2 is arranged at time 1 and carrier 0, symbol zq(3) whose symbol number i = 3 is arranged at time 1 and carrier 3, and so on. With this, symbols are arranged on the time and frequency axes for signal z1(i) and signal z2(i) (or signal z1′(i) and signal z2′(i)). However, FIG. 50 merely illustrates one example; the relationship between time, frequency, and symbol number is not limited to this example. Moreover, in cases where a single-carrier scheme is used, after signal z1(i) and signal z2(i) (or signal z1′(i) and signal z2′(i)) are generated, the symbols are arranged along the time axis, as illustrated in, for example, FIG. 45 and FIG. 48. However, FIG. 45 and FIG. 48 merely illustrate examples; the relationship between time and symbol number is not limited to these examples. Moreover, various frame configurations are described in the present specification. The modulated signals having a frame configuration described in the present specification are transmitted by a base station or AP using a multi-carrier scheme such as OFDM.
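To make the symbol arrangements of FIG. 45 through FIG. 50 above concrete, the following is a minimal sketch that places symbol number i at a time index, a carrier index, or a (time, carrier) pair (the function names and tables are illustrative assumptions; as noted above, the actual mappings are not limited to these):

    # Sketch: mapping symbol number i to time/carrier positions.
    def arrange_time(i: int) -> int:
        return i                        # FIG. 45: symbol i at time i

    def arrange_carrier(i: int) -> int:
        return i                        # FIG. 46: symbol i at carrier i

    def arrange_time_carrier(i: int, num_carriers: int = 2) -> tuple:
        # FIG. 47: fill carriers first, then advance in time.
        return i // num_carriers, i % num_carriers

    # FIG. 48/FIG. 49-style orders are simply other bijections, e.g., a table
    # reproducing the first entries of FIG. 48 (i -> time):
    permuted_time = {0: 0, 1: 16, 2: 12, 3: 5}

    assert arrange_time_carrier(3) == (1, 1)  # matches FIG. 47: time 1, carrier 1
    assert permuted_time[2] == 12             # matches FIG. 48: i = 2 at time 12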
Here, when a terminal communicating with the base station (AP) transmits a modulated signal, the modulated signal to be transmitted by the terminal is preferably a single-carrier scheme modulated signal (as a result of the base station or AP using the OFDM scheme, it is possible to concurrently transmit a data symbol group to a plurality of terminals; moreover, as a result of the terminal using a single-carrier scheme, power consumption can be reduced). Using part of the frequency band used by the modulated signal transmitted by the base station or AP, the terminal may implement a time division duplex (TDD) scheme for modulated signal transmission. In the present specification, phase changer 205A and/or phase changer 205B are described as implementing a phase change. Here, when the phase change cycle of phase changer 205A is expressed as NA, and NA is an integer that is greater than or equal to 3, that is, an integer greater than 2, where 2 is the number of transmission streams or the number of modulated signals, there is a high probability that the reception device in the communication partner can achieve a beneficial data reception quality. Similarly, when the phase change cycle of phase changer 205B is expressed as NB, and NB is an integer that is greater than or equal to 3, that is, an integer greater than 2, where 2 is the number of transmission streams or the number of modulated signals, there is a high probability that the reception device in the communication partner can achieve a beneficial data reception quality. As a matter of course, the embodiments may be carried out by combining a plurality of the exemplary embodiments and other contents described in the present specification.

Embodiment A8

In this embodiment, an operational example of a communications device based on the operations described in, for example, Embodiment 7 and Supplemental Information 1 will be given.

First Example

FIG. 51 illustrates one example of a configuration of a modulated signal transmitted by a base station or AP according to this embodiment. In FIG. 51, time is represented on the horizontal axis. As illustrated in FIG. 51, the transmission device in the base station or AP performs “single stream modulated signal transmission 5101” and subsequently performs “multi-stream multi-modulated-signal transmission 5102”. FIG. 52 illustrates one example of a frame configuration when single stream modulated signal transmission 5101 in FIG. 51 is performed. In FIG. 52, time is represented on the horizontal axis. As illustrated in FIG. 52, the base station or AP transmits preamble 5201 and subsequently transmits control information symbol 5202. Note that preamble 5201 conceivably includes a symbol for the terminal, which is the communication partner of the base station or AP, to perform signal detection, time synchronization, frequency synchronization, frequency offset estimation, channel estimation, and/or frame synchronization. For example, preamble 5201 is conceivably a PSK (phase shift keying) scheme symbol. Control information symbol 5202 is a symbol including, for example, information relating to the communications method of the modulated signal transmitted by the base station or AP and/or information required by the terminal to demodulate a data symbol. However, the information included in control information symbol 5202 is not limited to this example; control information symbol 5202 may include data (a data symbol), and may include other control information.
Moreover, the configuration of the symbols included in the single stream modulated signal is not limited to the example illustrated inFIG.52, and the symbols included in the single stream modulated signal are not limited to the example illustrated inFIG.52. FIG.53illustrates one example of a frame configuration when multi-stream multi-modulated-signal transmission5102inFIG.51is performed. InFIG.53, time is represented on the horizontal axis. As illustrated inFIG.53, the base station or AP transmits preamble5301, subsequently transmits control information symbol5302, and subsequently transmits, for example, data symbol5303. Note that regarding at least data symbols, a plurality of modulated signals for a plurality of streams are transmitted at the same time and at the same frequency. Note that preamble5301conceivably includes a symbol for the terminal, which is the communication partner of the base station or AP, to perform signal detection, time synchronization, frequency synchronization, frequency offset estimation, channel estimation, and/or frame synchronization. For example, preamble5301is conceivably a PSK scheme symbol. Moreover, as a result of a symbol for channel estimation being transmitted from a plurality of antennas, demodulation of a data symbol included in, for example, data symbol5303becomes possible. Control information symbol5302is a symbol including, for example, information relating to the communications method of the modulated signal transmitted by the base station and AP and/or information required by the terminal to demodulate a data symbol. However, the information included in control information symbol5302is not limited to this example; control information symbol5302may include data (a data symbol), and may include other control information. Moreover, the symbols included in the plurality of modulated signals for plurality of streams are not limited to the example illustrated inFIG.53. Note that hereinafter, the scheme used for “single stream modulated signal transmission5101” inFIG.51may be a single-carrier scheme, and the scheme used for “multi-stream multi-modulated-signal transmission5102” inFIG.51may be a single-carrier scheme or a multi-carrier scheme. Note that in the following description, the multi-carrier scheme is exemplified as the OFDM scheme (however, note that the multi-carrier scheme used is not limited to the OFDM scheme). One characteristic of this embodiment is that CDD(CSD) as described in Supplemental Information 1 is implemented upon performing single stream modulated signal transmission5101using a single-carrier scheme inFIG.51. Then, upon performing multi-stream multi-modulated-signal transmission5102inFIG.51, phase change is switched between implementation and non-implementation. Next, operations performed by the transmission device in the base station will be described with reference toFIG.54. FIG.54illustrates one example of a configuration of signal processor106in, for example, the transmission device in the base station illustrated inFIG.1orFIG.44. Multi-stream multi-modulated-signal generator5402has the configuration illustrated in, for example,FIG.2,FIG.18,FIG.19,FIG.20,FIG.21,FIG.22,FIG.28,FIG.29,FIG.30,FIG.31,FIG.32, orFIG.33. Multi-stream multi-modulated-signal generator5402receives inputs of mapped signal5401A (s1(t)), mapped signal5401B (s2(t)), and control signal5400. 
Here, mapped signal5401A (s1(t)) corresponds to mapped signal201A, mapped signal5401B (s2(t)) corresponds to mapped signal201B, and control signal5400corresponds to control signal200. Multi-stream multi-modulated-signal generator5402performs processing described with reference to, for example,FIG.2,FIG.18,FIG.19,FIG.20,FIG.21,FIG.22,FIG.28,FIG.29,FIG.30,FIG.31,FIG.32, andFIG.33, and outputs signals5403A,5403B. Note that signal5403A corresponds to208A inFIG.2, and5403Bcorresponds to210B inFIG.2. Signal5403A corresponds to210A inFIG.18, and5403B corresponds to208B inFIG.18. Signal5403A corresponds to210A inFIG.19, and5403Bcorresponds to210B inFIG.19. Signal5403A corresponds to208A inFIG.20, and5403Bcorresponds to210B inFIG.20. Signal5403A corresponds to210A inFIG.21, and5403Bcorresponds to208B inFIG.21. Signal5403A corresponds to210A inFIG.22, and5403Bcorresponds to210B inFIG.22. Signal5403A corresponds to208A inFIG.28, and5403B corresponds to210B inFIG.28. Signal5403A corresponds to210A inFIG.29, and5403Bcorresponds to208B inFIG.29. Signal5403A corresponds to210A inFIG.30, and5403Bcorresponds to210B inFIG.30. Signal5403A corresponds to208A inFIG.31, and5403Bcorresponds to210B inFIG.31. Signal5403A corresponds to210A inFIG.32, and5403Bcorresponds to208B inFIG.32. Signal5403A corresponds to208A inFIG.33, and5403B corresponds to210B inFIG.33. Then, based on information included in control signal200relating to whether it is time to perform single stream modulated signal transmission or time to perform multi-stream multi-modulated-signal transmission, when multi-stream multi-modulated-signal generator5402determines that it is time to perform multi-stream multi-modulated-signal transmission, each signal processor operates, and signals5403A,5403B are generated and output. Inserter5405receives inputs of mapped signal5401A, preamble and control symbol signal5404, and control signal5400, and based on information included in control signal5400relating to whether it is time to perform single stream modulated signal transmission or time to perform multi-stream multi-modulated-signal transmission, when inserter5405determines that it is time to perform single stream modulated signal transmission, for example, inserter5405generates and outputs (single-carrier scheme) signal5406in accordance with the frame configuration illustrated inFIG.52, based on mapped signal5401A and preamble and control symbol signal5404. Note that inFIG.54, inserter5405is illustrated as receiving an input of mapped signal5401A, but when generating a signal in accordance with the frame configuration illustrated inFIG.52, mapped signal5401A is not used. CDD (CSD) processor5407receives inputs of (single-carrier scheme) signal5406in accordance with the frame configuration and control signal5400, and when control signal5400indicates that it is time to perform single stream modulated signal transmission, performs CDD(CSD) processing on (single-carrier scheme) signal5406in accordance with the frame configuration and outputs CDD (CSD) processed signal5408in accordance with the frame configuration. Selector5409A receives inputs of signal5403A, signal5406in accordance with the frame configuration, and control signal5400, and based on control signal5400, selects either signal5403A or signal5406in accordance with frame configuration, and outputs selected signal5410A. 
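The switching just described for FIG. 54 can be summarized in a short sketch (the function names are illustrative, and the CDD (CSD) processing is abstracted as a cyclic shift of the transmit samples; this is a sketch under those assumptions, not the exact processing of the embodiment):

    # Compact sketch of the selection logic around FIG. 54.
    import numpy as np

    def cdd_csd(samples: np.ndarray, shift: int = 1) -> np.ndarray:
        """Cyclic delay diversity: cyclically delay the samples for the second branch."""
        return np.roll(samples, shift)

    def signal_processor(signal_5403a, signal_5403b, signal_5406, multi_stream: bool):
        """Return the pair (5410A, 5410B) chosen by selectors 5409A/5409B."""
        if multi_stream:
            # Multi-stream multi-modulated-signal transmission 5102:
            # pass generator 5402's outputs through unchanged.
            return signal_5403a, signal_5403b
        # Single stream modulated signal transmission 5101: one framed signal
        # (inserter 5405's output 5406) plus its CDD (CSD) processed copy 5408.
        return signal_5406, cdd_csd(signal_5406)

Which processing generator 5402's phase changers apply in multi-stream operation is then governed by the control information discussed in the examples below.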
For example, in single stream modulated signal transmission5101inFIG.51, selector5409A outputs signal5406in accordance with the frame configuration as selected signal5410A, and in multi-stream multi-modulated-signal transmission5102inFIG.51, selector5409A outputs signal5403A as selected signal5410A. Selector5409B receives inputs of signal5403B, CDD (CSD) processed signal5408in accordance with the frame configuration, and control signal5400, and based on control signal5400, selects either signal5403B or CDD (CSD) processed signal5408in accordance with the frame configuration, and outputs selected signal5410B. For example, in single stream modulated signal transmission5101inFIG.51, selector5409B outputs CDD (CSD) processed signal5408in accordance with the frame configuration as selected signal5410B, and in multi-stream multi-modulated-signal transmission5102inFIG.51, selector5409B outputs signal5403B as selected signal5410B. Note that selected signal5410A corresponds to processed signal106_A inFIG.1,FIG.44, and selected signal5410B corresponds to processed signal106_B inFIG.1,FIG.44. FIG.55illustrates one example of a configuration of radio units107_A,107_B inFIG.1,FIG.44. OFDM scheme radio unit5502receives inputs of processed signal5501and control signal5500, and when information included in control signal5500relating to whether either OFDM scheme or single-carrier scheme has been selected indicates that OFDM scheme has been selected, processes processed signal5501and outputs OFDM scheme modulated signal5503. Note that OFDM is presented as an example, but another multi-carrier scheme may be used. Single-carrier scheme radio unit5504receives inputs of processed signal5501and control signal5500, and when information included in control signal5500relating to whether either OFDM scheme or single-carrier scheme has been selected indicates that single-carrier scheme has been selected, processes processed signal5501and outputs single-carrier scheme modulated signal5505. Selector5506receives inputs of OFDM scheme modulated signal5503, single-carrier scheme modulated signal5505, and control signal5500, and when information included in control signal5500relating to whether either OFDM scheme or single-carrier scheme has been selected indicates that OFDM scheme has been selected, outputs OFDM scheme modulated signal5503as selected signal5507, and when information included in control signal5500relating to whether either OFDM scheme or single-carrier scheme has been selected indicates that single-carrier scheme has been selected, outputs single-carrier scheme modulated signal5505as selected signal5507. Note that when radio unit107_A has the configuration illustrated inFIG.55, processed signal5501corresponds to signal106_A, control signal5500corresponds to control signal100, and selected signal5507corresponds to108_A. Moreover, when radio unit107_B has the configuration illustrated inFIG.55, processed signal5501corresponds to signal106_B, control signal5500corresponds to control signal100, and selected signal5507corresponds to108_B. Hereinafter, the operations described above will be described further with reference to the description of Embodiment 7. Example 1-1 InFIG.51, in “multi-stream multi-modulated-signal transmission5102”, CDD (CSD) processing is not performed, and in “multi-stream multi-modulated-signal transmission5102”, a single-carrier scheme or OFDM scheme can be selected. 
Accordingly, for example, phase changer209A and/or209B in, for example,FIG.2,FIG.18,FIG.19,FIG.20,FIG.21,FIG.22,FIG.28,FIG.29,FIG.30,FIG.31,FIG.32, orFIG.33do not implement a phase change. Accordingly, control information (u11) for (controlling ON/OFF of) cyclic delay diversity (CDD (CSD)) described in Embodiment 7 is ignored in “multi-stream multi-modulated-signal transmission5102”. Note that in such cases, phase changer209A and/or209B need not be included in multi-stream multi-modulated-signal generator5402illustrated inFIG.54. In “multi-stream multi-modulated-signal transmission5102”, the switching between ON/OFF of operation for (cyclically/regularly) changing the phase change value on a per-symbol basis is possible. Accordingly, phase changer205A and/or205B in, for example,FIG.2,FIG.18,FIG.19,FIG.20,FIG.21,FIG.22,FIG.28,FIG.29,FIG.30,FIG.31,FIG.32, orFIG.33can control the ON/OFF of a phase change operation. Accordingly, the ON/OFF of the phase change operation by phase changer205A and/or phase changer205B is controlled via the control information (u10) for switching between ON/OFF of operation for (cyclically/regularly) changing the phase change value on a per-symbol basis described in Embodiment 7. Moreover, inFIG.51, in “single stream modulated signal transmission5101”, cyclic delay diversity (CDD (CSD)) processing is always performed. In such cases, control information (u11) for (controlling ON/OFF of) cyclic delay diversity (CDD (CSD)) described in Embodiment 7 is not necessary. Example 1-2 InFIG.51, in “multi-stream multi-modulated-signal transmission5102”, CDD (CSD) processing is not performed, and in “multi-stream multi-modulated-signal transmission5102”, a single-carrier scheme or OFDM scheme can be selected. Accordingly, for example, phase changer209A and/or209B in, for example,FIG.2,FIG.18,FIG.19,FIG.20,FIG.21,FIG.22,FIG.28,FIG.29,FIG.30,FIG.31,FIG.32, orFIG.33do not implement a phase change. Accordingly, control information (u11) for (controlling ON/OFF of) cyclic delay diversity (CDD (CSD)) described in Embodiment 7 is ignored in “multi-stream multi-modulated-signal transmission5102”. Note that in such cases, phase changer209A and/or209B need not be included in multi-stream multi-modulated-signal generator5402illustrated inFIG.54. In “multi-stream multi-modulated-signal transmission5102”, the switching between ON/OFF of operation for (cyclically/regularly) changing the phase change value on a per-symbol basis is possible. Accordingly, phase changer205A and/or205B in, for example,FIG.2,FIG.18,FIG.19,FIG.20,FIG.21,FIG.22,FIG.28,FIG.29,FIG.30,FIG.31,FIG.32, orFIG.33can control the ON/OFF of a phase change operation. Accordingly, the ON/OFF of the phase change operation by phase changer205A and/or phase changer205B is controlled via the control information (u10) for switching between ON/OFF of operation for (cyclically/regularly) changing the phase change value on a per-symbol basis described in Embodiment 7. Moreover, in “single stream modulated signal transmission”, cyclic delay diversity (CDD (CSD)) processing is controlled via control information (u11) for (controlling ON/OFF of) cyclic delay diversity (CDD (CSD)) described in Embodiment 7. 
However, as described above, when the base station or AP transmits a modulated signal in accordance withFIG.51,FIG.52, and/orFIG.53, in “single stream modulated signal transmission5101” inFIG.51, control information (u11) for (controlling ON/OFF of) cyclic delay diversity (CDD (CSD)) indicates “ON”, and in “single stream modulated signal transmission5101” inFIG.51, CDD (CSD) processing is performed. Example 1-3 InFIG.51, in “multi-stream multi-modulated-signal transmission5102”, CDD (CSD) processing is performed, and in “multi-stream multi-modulated-signal transmission5102”, a single-carrier scheme or OFDM scheme can be selected. Accordingly, for example, phase changer209A and/or209B in, for example,FIG.2,FIG.18,FIG.19,FIG.20,FIG.21,FIG.22,FIG.28,FIG.29,FIG.30,FIG.31,FIG.32, orFIG.33implements a phase change or performs CDD (CSD) processing. Accordingly, control information (u11) for (controlling ON/OFF of) cyclic delay diversity (CDD (CSD)) described in Embodiment 7 is ignored in “multi-stream multi-modulated-signal transmission5102”. In “multi-stream multi-modulated-signal transmission5102”, the switching between ON/OFF of operation for (cyclically/regularly) changing the phase change value on a per-symbol basis is possible. Accordingly, phase changer205A and/or205B in, for example,FIG.2,FIG.18,FIG.19,FIG.20,FIG.21,FIG.22,FIG.28,FIG.29,FIG.30,FIG.31,FIG.32, orFIG.33can control the ON/OFF of a phase change operation. Accordingly, the ON/OFF of the phase change operation by phase changer205A and/or phase changer205B is controlled via the control information (u10) for switching between ON/OFF of operation for (cyclically/regularly) changing the phase change value on a per-symbol basis described in Embodiment 7. Moreover, inFIG.51, in “single stream modulated signal transmission5101”, cyclic delay diversity (CDD (CSD)) processing is always performed. In such cases, control information (u11) for (controlling ON/OFF of) cyclic delay diversity (CDD (CSD)) described in Embodiment 7 is not necessary. Example 1-4 InFIG.51, in “multi-stream multi-modulated-signal transmission5102”, CDD (CSD) processing is performed, and in “multi-stream multi-modulated-signal transmission5102”, a single-carrier scheme or OFDM scheme can be selected. Accordingly, for example, phase changer209A and/or209B in, for example,FIG.2,FIG.18,FIG.19,FIG.20,FIG.21,FIG.22,FIG.28,FIG.29,FIG.30,FIG.31,FIG.32, orFIG.33implements a phase change or performs CDD (CSD) processing. Accordingly, control information (u11) for (controlling ON/OFF of) cyclic delay diversity (CDD (CSD)) described in Embodiment 7 is ignored in “multi-stream multi-modulated-signal transmission5102”. In “multi-stream multi-modulated-signal transmission5102”, the switching between ON/OFF of operation for (cyclically/regularly) changing the phase change value on a per-symbol basis is possible. Accordingly, phase changer205A and/or205B in, for example,FIG.2,FIG.18,FIG.19,FIG.20,FIG.21,FIG.22,FIG.28,FIG.29,FIG.30,FIG.31,FIG.32, orFIG.33can control the ON/OFF of a phase change operation. Accordingly, the ON/OFF of the phase change operation by phase changer205A and/or phase changer205B is controlled via the control information (u10) for switching between ON/OFF of operation for (cyclically/regularly) changing the phase change value on a per-symbol basis described in Embodiment 7. 
Moreover, in “single stream modulated signal transmission”, cyclic delay diversity (CDD (CSD)) processing is controlled via control information (u11) for (controlling ON/OFF of) cyclic delay diversity (CDD (CSD)) described in Embodiment 7. However, as described above, when the base station or AP transmits a modulated signal in accordance with FIG. 51, FIG. 52, and/or FIG. 53, in “single stream modulated signal transmission 5101” in FIG. 51, control information (u11) for (controlling ON/OFF of) cyclic delay diversity (CDD (CSD)) indicates “ON”, and in “single stream modulated signal transmission 5101” in FIG. 51, CDD (CSD) processing is performed.

Example 1-5

In FIG. 51, in “multi-stream multi-modulated-signal transmission 5102”, whether CDD (CSD) processing is performed or not is selectable, and in “multi-stream multi-modulated-signal transmission 5102”, a single-carrier scheme or OFDM scheme can be selected. Accordingly, based on control information (u11) for (controlling ON/OFF of) cyclic delay diversity (CDD (CSD)) described in Embodiment 7, phase changer 209A and/or 209B in, for example, FIG. 2, FIG. 18, FIG. 19, FIG. 20, FIG. 21, FIG. 22, FIG. 28, FIG. 29, FIG. 30, FIG. 31, FIG. 32, or FIG. 33 selects whether to (i) implement a phase change or perform CDD (CSD) processing, or (ii) neither implement a phase change nor perform CDD (CSD) processing. In “multi-stream multi-modulated-signal transmission 5102”, the switching between ON/OFF of operation for (cyclically/regularly) changing the phase change value on a per-symbol basis is possible. Accordingly, phase changer 205A and/or 205B in, for example, FIG. 2, FIG. 18, FIG. 19, FIG. 20, FIG. 21, FIG. 22, FIG. 28, FIG. 29, FIG. 30, FIG. 31, FIG. 32, or FIG. 33 can control the ON/OFF of a phase change operation. Accordingly, the ON/OFF of the phase change operation by phase changer 205A and/or phase changer 205B is controlled via the control information (u10) for switching between ON/OFF of operation for (cyclically/regularly) changing the phase change value on a per-symbol basis described in Embodiment 7. Moreover, in FIG. 51, in “single stream modulated signal transmission 5101”, cyclic delay diversity (CDD (CSD)) processing is always performed. In such cases, control information (u11) for (controlling ON/OFF of) cyclic delay diversity (CDD (CSD)) described in Embodiment 7 is not necessary.

Example 1-6

In FIG. 51, in “multi-stream multi-modulated-signal transmission 5102”, whether CDD (CSD) processing is performed or not is selectable, and in “multi-stream multi-modulated-signal transmission 5102”, a single-carrier scheme or OFDM scheme can be selected. Accordingly, based on control information (u11) for (controlling ON/OFF of) cyclic delay diversity (CDD (CSD)) described in Embodiment 7, phase changer 209A and/or 209B in, for example, FIG. 2, FIG. 18, FIG. 19, FIG. 20, FIG. 21, FIG. 22, FIG. 28, FIG. 29, FIG. 30, FIG. 31, FIG. 32, or FIG. 33 selects whether to (i) implement a phase change or perform CDD (CSD) processing, or (ii) neither implement a phase change nor perform CDD (CSD) processing. In “multi-stream multi-modulated-signal transmission 5102”, the switching between ON/OFF of operation for (cyclically/regularly) changing the phase change value on a per-symbol basis is possible. Accordingly, phase changer 205A and/or 205B in, for example, FIG. 2, FIG. 18, FIG. 19, FIG. 20, FIG. 21, FIG. 22, FIG. 28, FIG. 29, FIG. 30, FIG. 31, FIG. 32, or FIG. 33 can control the ON/OFF of a phase change operation.
Accordingly, the ON/OFF of the phase change operation by phase changer205A and/or phase changer205B is controlled via the control information (u10) for switching between ON/OFF of operation for (cyclically/regularly) changing the phase change value on a per-symbol basis described in Embodiment 7. Moreover, in “single stream modulated signal transmission”, cyclic delay diversity (CDD (CSD)) processing is controlled via control information (u11) for (controlling ON/OFF of) cyclic delay diversity (CDD (CSD)) described in Embodiment 7. However, as described above, when the base station or AP transmits a modulated signal in accordance withFIG.51,FIG.52, and/orFIG.53, in “single stream modulated signal transmission5101” inFIG.51, control information (u11) for (controlling ON/OFF of) cyclic delay diversity (CDD (CSD)) indicates “ON”, and in “single stream modulated signal transmission5101” inFIG.51, CDD (CSD) processing is performed. Second Example FIG.51illustrates one example of a configuration of a modulated signal transmitted by a base station or AP according to this embodiment. AsFIG.51has already been described, repeated description will be omitted. FIG.52illustrates one example of a frame configuration when single stream modulated signal transmission5101inFIG.51is performed. AsFIG.52has already been described, repeated description will be omitted. FIG.53illustrates one example of a frame configuration when multi-stream multi-modulated-signal transmission5102inFIG.51is performed. AsFIG.53has already been described, repeated description will be omitted. Note that hereinafter, the scheme used for “single stream modulated signal transmission5101” inFIG.51may be a single-carrier scheme, and the scheme used for “multi-stream multi-modulated-signal transmission5102” inFIG.51may be a single-carrier scheme or a multi-carrier scheme. Note that in the following description, the multi-carrier scheme is exemplified as the OFDM scheme (however, note that the multi-carrier scheme used is not limited to the OFDM scheme). One characteristic of this embodiment is that CDD(CSD) as described in Supplemental Information 1 is implemented upon performing single stream modulated signal transmission5101using a single-carrier scheme inFIG.51. Then, upon performing multi-stream multi-modulated-signal transmission5102inFIG.51, phase change is switched between implementation and non-implementation. Next, operations performed by the transmission device in the base station will be described with reference toFIG.56. FIG.56illustrates one example of a configuration of signal processor106in, for example, the transmission device in the base station illustrated inFIG.1orFIG.44. InFIG.56, components that operate the same as inFIG.54share like reference marks. Accordingly, repeated description will be omitted. CDD (CSD) processor5601receives inputs of (single-carrier scheme) signal5406in accordance with the frame configuration and control signal5400, and when control signal5400indicates that it is time to perform single stream modulated signal transmission, performs CDD(CSD) processing on (single-carrier scheme) signal5406in accordance with the frame configuration and outputs CDD (CSD) processed signal5602in accordance with the frame configuration. 
Selector 5409A receives inputs of signal 5403A, CDD (CSD) processed signal 5602 in accordance with the frame configuration, and control signal 5400, and based on control signal 5400, selects either signal 5403A or CDD (CSD) processed signal 5602 in accordance with the frame configuration, and outputs selected signal 5410A. For example, in single stream modulated signal transmission 5101 in FIG. 51, selector 5409A outputs CDD (CSD) processed signal 5602 in accordance with the frame configuration as selected signal 5410A, and in multi-stream multi-modulated-signal transmission 5102 in FIG. 51, selector 5409A outputs signal 5403A as selected signal 5410A. FIG. 55 illustrates one example of a configuration of radio units 107_A, 107_B in FIG. 1, FIG. 44. As FIG. 55 has already been described, repeated description will be omitted.

Example 2-1

In FIG. 51, in “multi-stream multi-modulated-signal transmission 5102”, CDD (CSD) processing is not performed, and in “multi-stream multi-modulated-signal transmission 5102”, a single-carrier scheme or OFDM scheme can be selected. Accordingly, for example, phase changer 209A and/or 209B in, for example, FIG. 2, FIG. 18, FIG. 19, FIG. 20, FIG. 21, FIG. 22, FIG. 28, FIG. 29, FIG. 30, FIG. 31, FIG. 32, or FIG. 33 do not implement a phase change. Accordingly, control information (u11) for (controlling ON/OFF of) cyclic delay diversity (CDD (CSD)) described in Embodiment 7 is ignored in “multi-stream multi-modulated-signal transmission 5102”. Note that in such cases, phase changer 209A and/or 209B need not be included in multi-stream multi-modulated-signal generator 5402 illustrated in FIG. 56. In “multi-stream multi-modulated-signal transmission 5102”, the switching between ON/OFF of operation for (cyclically/regularly) changing the phase change value on a per-symbol basis is possible. Accordingly, phase changer 205A and/or 205B in, for example, FIG. 2, FIG. 18, FIG. 19, FIG. 20, FIG. 21, FIG. 22, FIG. 28, FIG. 29, FIG. 30, FIG. 31, FIG. 32, or FIG. 33 can control the ON/OFF of a phase change operation. Accordingly, the ON/OFF of the phase change operation by phase changer 205A and/or phase changer 205B is controlled via the control information (u10) for switching between ON/OFF of operation for (cyclically/regularly) changing the phase change value on a per-symbol basis described in Embodiment 7. Moreover, in FIG. 51, in “single stream modulated signal transmission 5101”, cyclic delay diversity (CDD (CSD)) processing is always performed. In such cases, control information (u11) for (controlling ON/OFF of) cyclic delay diversity (CDD (CSD)) described in Embodiment 7 is not necessary.

Example 2-2

In FIG. 51, in “multi-stream multi-modulated-signal transmission 5102”, CDD (CSD) processing is not performed, and in “multi-stream multi-modulated-signal transmission 5102”, a single-carrier scheme or OFDM scheme can be selected. Accordingly, for example, phase changer 209A and/or 209B in, for example, FIG. 2, FIG. 18, FIG. 19, FIG. 20, FIG. 21, FIG. 22, FIG. 28, FIG. 29, FIG. 30, FIG. 31, FIG. 32, or FIG. 33 do not implement a phase change. Accordingly, control information (u11) for (controlling ON/OFF of) cyclic delay diversity (CDD (CSD)) described in Embodiment 7 is ignored in “multi-stream multi-modulated-signal transmission 5102”. Note that in such cases, phase changer 209A and/or 209B need not be included in multi-stream multi-modulated-signal generator 5402 illustrated in FIG. 56.
In “multi-stream multi-modulated-signal transmission5102”, the switching between ON/OFF of operation for (cyclically/regularly) changing the phase change value on a per-symbol basis is possible. Accordingly, phase changer205A and/or205B in, for example,FIG.2,FIG.18,FIG.19,FIG.20,FIG.21,FIG.22,FIG.28,FIG.29,FIG.30,FIG.31,FIG.32, orFIG.33can control the ON/OFF of a phase change operation. Accordingly, the ON/OFF of the phase change operation by phase changer205A and/or phase changer205B is controlled via the control information (u10) for switching between ON/OFF of operation for (cyclically/regularly) changing the phase change value on a per-symbol basis described in Embodiment 7. Moreover, in “single stream modulated signal transmission”, cyclic delay diversity (CDD (CSD)) processing is controlled via control information (u11) for (controlling ON/OFF of) cyclic delay diversity (CDD (CSD)) described in Embodiment 7. However, as described above, when the base station or AP transmits a modulated signal in accordance withFIG.51,FIG.52, and/orFIG.53, in “single stream modulated signal transmission5101” inFIG.51, control information (u11) for (controlling ON/OFF of) cyclic delay diversity (CDD (CSD)) indicates “ON”, and in “single stream modulated signal transmission5101” inFIG.51, CDD (CSD) processing is performed. Example 2-3 InFIG.51, in “multi-stream multi-modulated-signal transmission5102”, CDD (CSD) processing is performed, and in “multi-stream multi-modulated-signal transmission5102”, a single-carrier scheme or OFDM scheme can be selected. Accordingly, for example, phase changer209A and/or209B in, for example,FIG.2,FIG.18,FIG.19,FIG.20,FIG.21,FIG.22,FIG.28,FIG.29,FIG.30,FIG.31,FIG.32, orFIG.33implements a phase change or performs CDD (CSD) processing. Accordingly, control information (u11) for (controlling ON/OFF of) cyclic delay diversity (CDD (CSD)) described in Embodiment 7 is ignored in “multi-stream multi-modulated-signal transmission5102”. In “multi-stream multi-modulated-signal transmission5102”, the switching between ON/OFF of operation for (cyclically/regularly) changing the phase change value on a per-symbol basis is possible. Accordingly, phase changer205A and/or205B in, for example,FIG.2,FIG.18,FIG.19,FIG.20,FIG.21,FIG.22,FIG.28,FIG.29,FIG.30,FIG.31,FIG.32, orFIG.33can control the ON/OFF of a phase change operation. Accordingly, the ON/OFF of the phase change operation by phase changer205A and/or phase changer205B is controlled via the control information (u10) for switching between ON/OFF of operation for (cyclically/regularly) changing the phase change value on a per-symbol basis described in Embodiment 7. Moreover, inFIG.51, in “single stream modulated signal transmission5101”, cyclic delay diversity (CDD (CSD)) processing is always performed. In such cases, control information (u11) for (controlling ON/OFF of) cyclic delay diversity (CDD (CSD)) described in Embodiment 7 is not necessary. Example 2-4 InFIG.51, in “multi-stream multi-modulated-signal transmission5102”, CDD (CSD) processing is performed, and in “multi-stream multi-modulated-signal transmission5102”, a single-carrier scheme or OFDM scheme can be selected. Accordingly, for example, phase changer209A and/or209B in, for example,FIG.2,FIG.18,FIG.19,FIG.20,FIG.21,FIG.22,FIG.28,FIG.29,FIG.30,FIG.31,FIG.32, orFIG.33implements a phase change or performs CDD (CSD) processing. 
Accordingly, control information (u11) for (controlling ON/OFF of) cyclic delay diversity (CDD (CSD)) described in Embodiment 7 is ignored in “multi-stream multi-modulated-signal transmission5102”. In “multi-stream multi-modulated-signal transmission5102”, the switching between ON/OFF of operation for (cyclically/regularly) changing the phase change value on a per-symbol basis is possible. Accordingly, phase changer205A and/or205B in, for example,FIG.2,FIG.18,FIG.19,FIG.20,FIG.21,FIG.22,FIG.28,FIG.29,FIG.30,FIG.31,FIG.32, orFIG.33can control the ON/OFF of a phase change operation. Accordingly, the ON/OFF of the phase change operation by phase changer205A and/or phase changer205B is controlled via the control information (u10) for switching between ON/OFF of operation for (cyclically/regularly) changing the phase change value on a per-symbol basis described in Embodiment 7. Moreover, in “single stream modulated signal transmission”, cyclic delay diversity (CDD (CSD)) processing is controlled via control information (u11) for (controlling ON/OFF of) cyclic delay diversity (CDD (CSD)) described in Embodiment 7. However, as described above, when the base station or AP transmits a modulated signal in accordance withFIG.51,FIG.52, and/orFIG.53, in “single stream modulated signal transmission5101” inFIG.51, control information (u11) for (controlling ON/OFF of) cyclic delay diversity (CDD (CSD)) indicates “ON”, and in “single stream modulated signal transmission5101” inFIG.51, CDD (CSD) processing is performed. Example 2-5 InFIG.51, in “multi-stream multi-modulated-signal transmission5102”, whether CDD (CSD) processing is performed or not is selectable, and in “multi-stream multi-modulated-signal transmission5102”, a single-carrier scheme or OFDM scheme can be selected. Accordingly, based on control information (u11) for (controlling ON/OFF of) cyclic delay diversity (CDD (CSD)) described in Embodiment 7, phase changer209A and/or209B in, for example,FIG.2,FIG.18,FIG.19,FIG.20,FIG.21,FIG.22,FIG.28,FIG.29,FIG.30,FIG.31,FIG.32, orFIG.33selects whether to (i) implement a phase change or perform CDD (CSD) or (ii) neither implement a phase change nor perform CDD (CSD). In “multi-stream multi-modulated-signal transmission5102”, the switching between ON/OFF of operation for (cyclically/regularly) changing the phase change value on a per-symbol basis is possible. Accordingly, phase changer205A and/or205B in, for example,FIG.2,FIG.18,FIG.19,FIG.20,FIG.21,FIG.22,FIG.28,FIG.29,FIG.30,FIG.31,FIG.32, orFIG.33can control the ON/OFF of a phase change operation. Accordingly, the ON/OFF of the phase change operation by phase changer205A and/or phase changer205B is controlled via the control information (u10) for switching between ON/OFF of operation for (cyclically/regularly) changing the phase change value on a per-symbol basis described in Embodiment 7. Moreover, inFIG.51, in “single stream modulated signal transmission5101”, cyclic delay diversity (CDD (CSD)) processing is always performed. In such cases, control information (u11) for (controlling ON/OFF of) cyclic delay diversity (CDD (CSD)) described in Embodiment 7 is not necessary. Example 2-6 InFIG.51, in “multi-stream multi-modulated-signal transmission5102”, whether CDD (CSD) processing is performed or not is selectable, and in “multi-stream multi-modulated-signal transmission5102”, a single-carrier scheme or OFDM scheme can be selected.
Accordingly, based on control information (u11) for (controlling ON/OFF of) cyclic delay diversity (CDD (CSD)) described in Embodiment 7, phase changer209A and/or209B in, for example,FIG.2,FIG.18,FIG.19,FIG.20,FIG.21,FIG.22,FIG.28,FIG.29,FIG.30,FIG.31,FIG.32, orFIG.33selects whether to (i) implement a phase change or perform CDD (CSD) or (ii) neither implement a phase change nor perform CDD (CSD). In “multi-stream multi-modulated-signal transmission5102”, the switching between ON/OFF of operation for (cyclically/regularly) changing the phase change value on a per-symbol basis is possible. Accordingly, phase changer205A and/or205B in, for example,FIG.2,FIG.18,FIG.19,FIG.20,FIG.21,FIG.22,FIG.28,FIG.29,FIG.30,FIG.31,FIG.32, orFIG.33can control the ON/OFF of a phase change operation. Accordingly, the ON/OFF of the phase change operation by phase changer205A and/or phase changer205B is controlled via the control information (u10) for switching between ON/OFF of operation for (cyclically/regularly) changing the phase change value on a per-symbol basis described in Embodiment 7. Moreover, in “single stream modulated signal transmission”, cyclic delay diversity (CDD (CSD)) processing is controlled via control information (u11) for (controlling ON/OFF of) cyclic delay diversity (CDD (CSD)) described in Embodiment 7. However, as described above, when the base station or AP transmits a modulated signal in accordance withFIG.51,FIG.52, and/orFIG.53, in “single stream modulated signal transmission5101” inFIG.51, control information (u11) for (controlling ON/OFF of) cyclic delay diversity (CDD (CSD)) indicates “ON”, and in “single stream modulated signal transmission5101” inFIG.51, CDD (CSD) processing is performed. Third Example FIG.57illustrates one example of a configuration of a modulated signal transmitted by a base station or AP according to this embodiment. InFIG.57, time is represented on the horizontal axis. Operations that are the same as inFIG.51share like reference marks. As illustrated inFIG.57, the transmission device in the base station or AP performs “single stream modulated signal transmission5101” and subsequently performs single stream modulated signal transmission again (“single stream modulated signal transmission5701”). FIG.52illustrates one example of a frame configuration when single stream modulated signal transmission5101inFIG.57is performed. AsFIG.52has already been described, repeated description will be omitted. FIG.58illustrates one example of a frame configuration when single stream modulated signal transmission5701inFIG.57is performed. InFIG.58, time is represented on the horizontal axis. As illustrated inFIG.58, the base station or AP transmits preamble5801, subsequently transmits control information symbol5802, and subsequently transmits, for example, data symbol5803. Note that preamble5801, control information symbol5802, and, for example, data symbol5803are each transmitted via a single stream. Preamble5801conceivably includes a symbol for the terminal, which is the communication partner of the base station or AP, to perform signal detection, time synchronization, frequency synchronization, frequency offset estimation, channel estimation, and/or frame synchronization. For example, preamble5801is conceivably a PSK scheme symbol. Control information symbol5802is a symbol including, for example, information relating to the communications method of the modulated signal transmitted by the base station or AP and/or information required by the terminal to demodulate a data symbol.
However, the information included in control information symbol5802is not limited to this example; control information symbol5802may include other control information. Note that hereinafter, the scheme used for “single stream modulated signal transmission5101” inFIG.57may be a single-carrier scheme, and the scheme used for “single stream modulated signal transmission5701” inFIG.57may be a single-carrier scheme or a multi-carrier scheme. Note that in the following description, the multi-carrier scheme is exemplified as the OFDM scheme (however, note that the multi-carrier scheme used is not limited to the OFDM scheme). One characteristic of this embodiment is that CDD (CSD) as described in Supplemental Information 1 is implemented upon performing single stream modulated signal transmission5101using a single-carrier scheme inFIG.57. Example 3-1 InFIG.57, in “single stream modulated signal transmission5701”, CDD (CSD) processing is not performed, and in “single stream modulated signal transmission5701”, a single-carrier scheme or OFDM scheme can be selected. When “single stream modulated signal transmission5701” is performed, it is possible to select “multi-stream multi-modulated-signal transmission” instead of “single stream modulated signal transmission”. Note that since “multi-stream multi-modulated-signal transmission” has already been described, repeated description will be omitted. Next, operations performed by the transmission device in the base station will be described with reference toFIG.54. FIG.54illustrates one example of a configuration of signal processor106in, for example, the transmission device in the base station illustrated inFIG.1orFIG.44. As the general operations illustrated inFIG.54have already been described, repeated description will be omitted. In this example, the characteristic feature is that, inFIG.57, when “single stream modulated signal transmission5101” is performed, CDD (CSD) processing is performed, and when “single stream modulated signal transmission5701” is performed, CDD (CSD) processing is not performed. As the operations performed by inserter5405have already been described, repeated description will be omitted. CDD (CSD) unit5407switches the CDD (CSD) processing ON and OFF based on control signal5400. CDD (CSD) unit5407knows the timing of the “single stream modulated signal transmission5101” inFIG.57from information included in control signal5400indicating whether it is time to transmit a plurality of modulated signals for a plurality of streams or time to transmit a single stream modulated signal. In such cases, CDD (CSD) unit5407determines to perform cyclic delay diversity based on control information (u11) included in control signal5400for (controlling ON/OFF of) cyclic delay diversity (CDD (CSD)) described in Embodiment 7. Accordingly, when “single stream modulated signal transmission5101” inFIG.57is performed, CDD (CSD) unit5407performs signal processing for cyclic delay diversity, and outputs CDD (CSD) processed signal5408in accordance with the frame configuration. CDD (CSD) unit5407knows the timing of the “single stream modulated signal transmission5701” inFIG.57from information included in the control signal indicating whether it is time to transmit a plurality of modulated signals for a plurality of streams or time to transmit a single stream modulated signal.
CDD (CSD) unit5407determines not to perform cyclic delay diversity based on control information (u11) included in control signal5400for (controlling ON/OFF of) cyclic delay diversity (CDD (CSD)) described in Embodiment 7. Accordingly, when “single stream modulated signal transmission5701” inFIG.57is performed, CDD (CSD) unit5407does not perform signal processing for cyclic delay diversity, and, for example, does not output a signal. Selector5409A receives inputs of signal5403A, signal5406in accordance with the frame configuration, and control signal5400, and based on control signal5400, selects either signal5403A or signal5406in accordance with the frame configuration, and outputs selected signal5410A. Accordingly, when “single stream modulated signal transmission5101” is performed and when “single stream modulated signal transmission5701” is performed, in either case, selector5409A outputs signal5406in accordance with the frame configuration as selected signal5410A. When “single stream modulated signal transmission5101” is performed, selector5409B outputs CDD (CSD) processed signal5408in accordance with the frame configuration as selected signal5410B, and when “single stream modulated signal transmission5701” is performed, selector5409B, for example, does not output selected signal5410B. As the operations performed by radio units107_A,107_B in the base station illustrated inFIG.1,FIG.44have already been described, repeated description will be omitted. Example 3-2 InFIG.57, in “single stream modulated signal transmission5701”, whether CDD (CSD) processing is performed or not is selectable, and in “single stream modulated signal transmission5701”, a single-carrier scheme or OFDM scheme can be selected. When “single stream modulated signal transmission5701” is performed, it is possible to select “multi-stream multi-modulated-signal transmission” instead of “single stream modulated signal transmission”. Note that since “multi-stream multi-modulated-signal transmission” has already been described, repeated description will be omitted. Next, operations performed by the transmission device in the base station will be described with reference toFIG.54. FIG.54illustrates one example of a configuration of signal processor106in, for example, the transmission device in the base station illustrated inFIG.1orFIG.44. As the general operations illustrated inFIG.54have already been described, repeated description will be omitted. In this example, the characteristic feature is that, inFIG.57, when “single stream modulated signal transmission5101” is performed, CDD (CSD) processing is performed, and when “single stream modulated signal transmission5701” is performed, whether to perform CDD (CSD) processing or not is selectable. As the operations performed by inserter5405have already been described, repeated description will be omitted. CDD (CSD) unit5407switches the CDD (CSD) processing ON and OFF based on control signal5400. CDD (CSD) unit5407knows the timing of the “single stream modulated signal transmission5101” inFIG.57from information included in control signal5400indicating whether it is time to transmit a plurality of modulated signals for a plurality of streams or time to transmit a single stream modulated signal. In such cases, CDD (CSD) unit5407determines to perform cyclic delay diversity based on control information (u11) included in control signal5400for (controlling ON/OFF of) cyclic delay diversity (CDD (CSD)) described in Embodiment 7.
Accordingly, when “single stream modulated signal transmission5101” inFIG.57is performed, CDD (CSD) unit5407performs signal processing for cyclic delay diversity, and outputs CDD (CSD) processed signal5408in accordance with the frame configuration. CDD (CSD) unit5407knows the timing of the “single stream modulated signal transmission5701” inFIG.57from information included in the control signal indicating whether it is time to transmit a plurality of modulated signals for a plurality of streams or time to transmit a single stream modulated signal. When “single stream modulated signal transmission5701” is performed, CDD (CSD) unit5407determines not to perform cyclic delay diversity based on control information (u11) included in control signal5400for (controlling the ON/OFF of) cyclic delay diversity (CDD (CSD)) described in Embodiment 7. Accordingly, when “single stream modulated signal transmission5701” inFIG.57is performed, CDD (CSD) unit5407does not perform signal processing for cyclic delay diversity, and, for example, does not output a signal. Next, operations different from the above will be described. CDD (CSD) unit5407knows the timing of the “single stream modulated signal transmission5701” inFIG.57from information included in the control signal indicating whether it is time to transmit a plurality of modulated signals for a plurality of streams or time to transmit a single stream modulated signal. When “single stream modulated signal transmission5701” is performed, CDD (CSD) unit5407determines to perform cyclic delay diversity based on control information (u11) included in control signal5400for controlling (the ON/OFF of) cyclic delay diversity (CDD (CSD)) described in Embodiment 7. Accordingly, when “single stream modulated signal transmission5701” inFIG.57is performed, CDD (CSD) unit5407performs signal processing for cyclic delay diversity, and outputs CDD (CSD) processed signal5408in accordance with the frame configuration. Selector5409A receives inputs of signal5403A, signal5406in accordance with the frame configuration, and control signal5400, and based on control signal5400, selects either signal5403A or signal5406in accordance with the frame configuration, and outputs selected signal5410A. Accordingly, when “single stream modulated signal transmission5101” is performed and when “single stream modulated signal transmission5701” is performed, in either case, selector5409A outputs signal5406in accordance with the frame configuration as selected signal5410A. When “single stream modulated signal transmission5101” is performed, selector5409B outputs CDD (CSD) processed signal5408in accordance with the frame configuration as selected signal5410B. When “single stream modulated signal transmission5701” is performed and it is determined that CDD (CSD) processing is not to be performed in “single stream modulated signal transmission5701”, selector5409B, for example, does not output selected signal5410B. When “single stream modulated signal transmission5701” is performed and it is determined that CDD (CSD) processing is to be performed in “single stream modulated signal transmission5701”, selector5409B outputs CDD (CSD) processed signal5408in accordance with the frame configuration as selected signal5410B. As the operations performed by radio units107_A,107_B in the base station illustrated inFIG.1,FIG.44have already been described, repeated description will be omitted.
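The selector behavior in Examples 3-1 and 3-2 can be summarized as control flow. The following hypothetical Python sketch shows how selectors5409A,5409B might be driven by control signal5400; the argument names are illustrative, and the CDD (CSD) operation is again reduced to a cyclic shift for brevity.

```python
import numpy as np

def drive_selectors(single_stream: bool, u11_cdd_on: bool,
                    multi_a, multi_b, framed_single, delay: int = 4):
    """Control-flow sketch of selectors5409A,5409B: multi-stream passes the
    multi-stream signals through, while single stream outputs the framed
    signal on branch A and, only when u11 indicates ON, a CDD (CSD)
    processed copy on branch B."""
    if not single_stream:
        return multi_a, multi_b                  # selected signals 5410A, 5410B
    if u11_cdd_on:                               # e.g. transmission5101
        return framed_single, np.roll(framed_single, delay)
    return framed_single, None                   # e.g. transmission5701: no 5410B output

frame = np.ones(32, dtype=complex)
out_a, out_b = drive_selectors(True, True, None, None, frame)
```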
As described above, control over whether to implement a phase change or not and over whether to perform CDD (CSD) or not can be performed in an appropriate manner based on, for example, the number of transmission streams and/or the transmission method. This makes it possible to achieve an advantageous effect in which it is possible to improve data reception quality of the communication partner. An advantageous characteristic is that, by performing CDD (CSD), the probability that data reception quality of the communication partner will improve increases, and, in particular, when single stream transmission is performed, it is possible to effectively use the plurality of transmitting antennas of the transmission device. Another advantageous characteristic is that, when performing multi-stream transmission, based on the propagation/communications environment and/or phase change support by the communication partner, for example, it is possible to achieve favorable data reception quality by controlling whether a phase change is implemented or not. Note that althoughFIG.54is used as an example of a portion of the configuration of signal processor106illustrated inFIG.1and/orFIG.44, the configurations illustrated in any one ofFIG.2,FIG.18,FIG.19,FIG.20,FIG.21,FIG.22,FIG.28,FIG.29,FIG.30,FIG.31,FIG.32, andFIG.33may be implemented. For example, in the configurations illustrated inFIG.2,FIG.18,FIG.19,FIG.20,FIG.21,FIG.22,FIG.28,FIG.29,FIG.30,FIG.31,FIG.32, andFIG.33, when single stream transmission is performed, mapped signal201B of s2(t) is nullified. In weighting synthesizer203, as precoding matrix F, for example, any of the following can be applied.

[Math.149]
F(i) = \begin{pmatrix} \beta & 0 \\ \beta & 0 \end{pmatrix}   Equation (149)

[Math.150]
F(i) = \begin{pmatrix} 0 & \beta \\ 0 & \beta \end{pmatrix}   Equation (150)

[Math.151]
F(i) = \begin{pmatrix} \alpha & 0 \\ \beta & 0 \end{pmatrix}   Equation (151)

[Math.152]
F(i) = \begin{pmatrix} 0 & \alpha \\ 0 & \beta \end{pmatrix}   Equation (152)

Note that α may be a real number, and, alternatively, may be an imaginary number. Note that β may be a real number, and, alternatively, may be an imaginary number. However, α is not zero, and β is not zero. Although the above was described in terms of expressions, the signal may be split instead of implementing the weighting synthesis (calculation using a matrix) as per the expressions above. In single stream cases, phase changers205A,205B do not implement a phase change (the input signal is output as-is). Moreover, in single stream cases, phase changers209A,209B may perform signal processing for CDD (CSD) instead of implementing a phase change. Embodiment A9 In Supplemental Information 4, for example, it is stated that phase changers may be included before and after weighting synthesizer203in the configurations illustrated in, for example,FIG.2,FIG.18,FIG.19,FIG.20,FIG.21,FIG.22,FIG.28,FIG.29,FIG.30,FIG.31,FIG.32, andFIG.33. In this embodiment, supplemental information regarding this point will be given. A first example of how phase changers are arranged before and after weighting synthesizer203is illustrated inFIG.59. InFIG.59, components that operate the same as in, for example,FIG.2share like reference marks. Accordingly, descriptions that overlap with, for example,FIG.2will be omitted. As illustrated inFIG.59, phase changer5901A receives inputs of mapped signal201A (s1(t)) and control signal200, and, for example, based on information on the phase change method included in control signal200, implements a phase change on mapped signal201A (s1(t)) and outputs phase-changed signal5902A.
Similarly, phase changer5901B receives inputs of mapped signal201B (s2(t)) and control signal200, and, for example, based on information on the phase change method included in control signal200, implements a phase change on mapped signal201B (s2(t)) and outputs phase-changed signal5902B. Then, phase-changed signal206A is input into inserter207A illustrated in, for example,FIG.2, and phase-changed signal206B is input into inserter207B illustrated in, for example,FIG.2. A second example of how phase changers are arranged before and after weighting synthesizer203is illustrated inFIG.60. InFIG.60, components that operate the same as in, for example,FIG.2share like reference marks. Accordingly, descriptions that overlap with, for example,FIG.2will be omitted. Moreover, components that operate the same as inFIG.59share like reference marks. Accordingly, descriptions that overlap withFIG.59will be omitted. UnlikeFIG.59, inFIG.60, only phase changer205B is inserted after weighting synthesizer203. Then, weighting synthesized signal204A is input into inserter207A illustrated in, for example,FIG.2, and phase-changed signal206B is input into inserter207B illustrated in, for example,FIG.2. A third example of how phase changers are arranged before and after weighting synthesizer203is illustrated inFIG.61. InFIG.61, components that operate the same as in, for example,FIG.2share like reference marks. Accordingly, descriptions that overlap with, for example,FIG.2will be omitted. Moreover, components that operate the same as inFIG.59share like reference marks. Accordingly, descriptions that overlap withFIG.59will be omitted. UnlikeFIG.60, inFIG.61, phase changer205A is inserted after weighting synthesizer203on the top line. Then, phase-changed signal206A is input into inserter207A illustrated in, for example,FIG.2, and weighting synthesized signal204B is input into inserter207B illustrated in, for example,FIG.2. A fourth example of how phase changers are arranged before and after weighting synthesizer203is illustrated inFIG.62. InFIG.62, components that operate the same as in, for example,FIG.2share like reference marks. Accordingly, descriptions that overlap with, for example,FIG.2will be omitted. Moreover, components that operate the same as inFIG.59share like reference marks. Accordingly, descriptions that overlap withFIG.59will be omitted. UnlikeFIG.59, inFIG.62, only phase changer5901B is inserted before the weighting synthesizer. Then, phase-changed signal206A is input into inserter207A illustrated in, for example,FIG.2, and phase-changed signal206B is input into inserter207B illustrated in, for example,FIG.2. A fifth example of how phase changers are arranged before and after weighting synthesizer203is illustrated inFIG.63. InFIG.63, components that operate the same as in, for example,FIG.2share like reference marks. Accordingly, descriptions that overlap with, for example,FIG.2will be omitted. Moreover, components that operate the same as inFIG.59share like reference marks. Accordingly, descriptions that overlap withFIG.59will be omitted. UnlikeFIG.62, inFIG.63, phase changer5901A is inserted before weighting synthesizer203on the top line. Then, phase-changed signal206A is input into inserter207A illustrated in, for example,FIG.2, and phase-changed signal206B is input into inserter207B illustrated in, for example,FIG.2. A sixth example of how phase changers are arranged before and after weighting synthesizer203is illustrated inFIG.64.
InFIG.64, components that operate the same as in, for example,FIG.2share like reference marks. Accordingly, descriptions that overlap with, for example,FIG.2will be omitted. Moreover, components that operate the same as inFIG.59share like reference marks. Accordingly, descriptions that overlap withFIG.59will be omitted. InFIG.64, phase changers5901B,205B are present before and after weighting synthesizer203, on the bottom line. Then, weighting synthesized signal204A is input into inserter207A illustrated in, for example,FIG.2, and phase-changed signal206B is input into inserter207B illustrated in, for example,FIG.2. A seventh example of how phase changers are arranged before and after weighting synthesizer203is illustrated inFIG.65. InFIG.65, components that operate the same as in, for example,FIG.2share like reference marks. Accordingly, descriptions that overlap with, for example,FIG.2will be omitted. Moreover, components that operate the same as inFIG.59share like reference marks. Accordingly, descriptions that overlap withFIG.59will be omitted. InFIG.65, phase changers5901B,205A are present before and after weighting synthesizer203, on the bottom and top lines, respectively. Then, phase-changed signal206A is input into inserter207A illustrated in, for example,FIG.2, and weighting synthesized signal204B is input into inserter207B illustrated in, for example,FIG.2. An eighth example of how phase changers are arranged before and after weighting synthesizer203is illustrated inFIG.66. InFIG.66, components that operate the same as in, for example,FIG.2share like reference marks. Accordingly, descriptions that overlap with, for example,FIG.2will be omitted. Moreover, components that operate the same as inFIG.59share like reference marks. Accordingly, descriptions that overlap withFIG.59will be omitted. InFIG.66, phase changers5901A,205B are present before and after weighting synthesizer203, on the top and bottom lines, respectively. Then, weighting synthesized signal204A is input into inserter207A illustrated in, for example,FIG.2, and phase-changed signal206B is input into inserter207B illustrated in, for example,FIG.2. A ninth example of how phase changers are arranged before and after weighting synthesizer203is illustrated inFIG.67. InFIG.67, components that operate the same as in, for example,FIG.2share like reference marks. Accordingly, descriptions that overlap with, for example,FIG.2will be omitted. Moreover, components that operate the same as inFIG.59share like reference marks. Accordingly, descriptions that overlap withFIG.59will be omitted. InFIG.67, phase changers5901A,205A are present before and after weighting synthesizer203, on the top line. Then, phase-changed signal206A is input into inserter207A illustrated in, for example,FIG.2, and weighting synthesized signal204B is input into inserter207B illustrated in, for example,FIG.2. The embodiments described in the present specification may be implemented using these configurations. The phase change method used by phase changers5901A,5901B,205A, and205B inFIG.59,FIG.60,FIG.61,FIG.62,FIG.63,FIG.64,FIG.65,FIG.66, andFIG.67is, for example, set according to control signal200. Embodiment A10 In this embodiment, an example of a robust communications method will be given. First Example FIG.68illustrates operations performed by, for example, mapper104inFIG.1of a base station or AP.
Mapper6802receives inputs of encoded data6801and control signal6800, and when a robust transmission method is specified by control signal6800, performs mapping processes such as those described below, and outputs mapped signals6803A,6803B. Note that control signal6800corresponds to100inFIG.1, encoded data6801corresponds to103inFIG.1, mapper6802corresponds to104inFIG.1, mapped signal6803A corresponds to105_1inFIG.1, and mapped signal6803B corresponds to105_2inFIG.1. For example, mapper6802receives inputs of bit c0(k), bit c1(k), bit c2(k), and bit c3(k) as encoded data6801. Note that k is an integer that is greater than or equal to 0. For example, mapper6802performs QPSK modulation on c0(k) and c1(k) to obtain mapped signal a(k). For example, mapper6802performs QPSK modulation on c2(k) and c3(k) to obtain mapped signal b(k). For example, mapper6802performs QPSK modulation on c0(k) and c1(k) to obtain mapped signal a′(k). For example, mapper6802performs QPSK modulation on c2(k) and c3(k) to obtain mapped signal b′(k). Mapped signal6803A whose symbol number i=2k is represented as s1(i=2k), mapped signal6803B whose symbol number i=2k is represented as s2(i=2k), mapped signal6803A whose symbol number i=2k+1 is represented as s1(i=2k+1), and mapped signal6803B whose symbol number i=2k+1 is represented as s2(i=2k+1). s1(i=2k), i.e., mapped signal6803A whose symbol number i=2k, is expressed as a(k), s2(i=2k), i.e., mapped signal6803B whose symbol number i=2k, is expressed as b(k), s1(i=2k+1), i.e., mapped signal6803A whose symbol number i=2k+1, is expressed as b′(k), and s2(i=2k+1), i.e., mapped signal6803B whose symbol number i=2k+1, is expressed as a′(k). Next, the relationship between “a(k) and a′(k)” and “b(k) and b′(k)” will be described. FIG.69illustrates an example of a distribution of signal points in an in-phase I-quadrature Q plane when QPSK is used, and illustrates the relationship between signal points for the values for bit x0 and bit x1. When bits [x0 x1]=[0 0] (i.e., when x0 is 0 and x1 is 0), in-phase component I is set to z and quadrature component Q is set to z (which matches signal point6901). Note that z is a real number that is greater than 0. When bits [x0 x1]=[0 1] (i.e., when x0 is 0 and x1 is 1), in-phase component I is set to −z and quadrature component Q is set to z (which matches signal point6902). When bits [x0 x1]=[1 0] (i.e., when x0 is 1 and x1 is 0), in-phase component I is set to z and quadrature component Q is set to −z (which matches signal point6903). When bits [x0 x1]=[1 1] (i.e., when x0 is 1 and x1 is 1), in-phase component I is set to −z and quadrature component Q is set to −z (which matches signal point6904). FIG.70illustrates an example of a distribution of signal points in an in-phase I-quadrature Q plane when QPSK is used, and illustrates the relationship between signal points for the values for bit x0 and bit x1. However, “the relationship between signal points for the values for bit x0 and bit x1” inFIG.69and “the relationship between signal points for the values for bit x0 and bit x1” inFIG.70are different. When bits [x0 x1]=[0 0] (i.e., when x0 is 0 and x1 is 0), in-phase component I is set to z and quadrature component Q is set to −z (which matches signal point7003). Note that z is a real number that is greater than 0.
When bits [x0 x1]=[0 1] (i.e., when x0 is 0 and x1 is 1), in-phase component I is set to −z and quadrature component Q is set to −z (which matches signal point7004). When bits [x0 x1]=[1 0] (i.e., when x0 is 1 and x1 is 0), in-phase component I is set to z and quadrature component Q is set to z (which matches signal point7001). When bits [x0 x1]=[1 1] (i.e., when x0 is 1 and x1 is 1), in-phase component I is set to −z and quadrature component Q is set to z (which matches signal point7002). FIG.71illustrates an example of a distribution of signal points in an in-phase I-quadrature Q plane when QPSK is used, and illustrates the relationship between signal points for the values for bit x0 and bit x1. However, “the relationship between signal points for the values for bit x0 and bit x1” inFIG.71is different from “the relationship between signal points for the values for bit x0 and bit x1” inFIG.69and “the relationship between signal points for the values for bit x0 and bit x1” inFIG.70. When bits [x0 x1]=[0 0] (i.e., when x0 is 0 and x1 is 0), in-phase component I is set to −z and quadrature component Q is set to z (which matches signal point7102). Note that z is a real number that is greater than 0. When bits [x0 x1]=[0 1] (i.e., when x0 is 0 and x1 is 1), in-phase component I is set to z and quadrature component Q is set to z (which matches signal point7101). When bits [x0 x1]=[1 0] (i.e., when x0 is 1 and x1 is 0), in-phase component I is set to −z and quadrature component Q is set to −z (which matches signal point7104). When bits [x0 x1]=[1 1] (i.e., when x0 is 1 and x1 is 1), in-phase component I is set to z and quadrature component Q is set to −z (which matches signal point7103). FIG.72illustrates an example of a distribution of signal points in an in-phase I-quadrature Q plane when QPSK is used, and illustrates the relationship between signal points for the values for bit x0 and bit x1. However, “the relationship between signal points for the values for bit x0 and bit x1” inFIG.72is different from “the relationship between signal points for the values for bit x0 and bit x1” inFIG.69, “the relationship between signal points for the values for bit x0 and bit x1” inFIG.70, and “the relationship between signal points for the values for bit x0 and bit x1” inFIG.71. When bits [x0 x1]=[0 0] (i.e., when x0 is 0 and x1 is 0), in-phase component I is set to −z and quadrature component Q is set to −z (which matches signal point7204). Note that z is a real number that is greater than 0. When bits [x0 x1]=[0 1] (i.e., when x0 is 0 and x1 is 1), in-phase component I is set to z and quadrature component Q is set to −z (which matches signal point7203). When bits [x0 x1]=[1 0] (i.e., when x0 is 1 and x1 is 0), in-phase component I is set to −z and quadrature component Q is set to z (which matches signal point7202). When bits [x0 x1]=[1 1] (i.e., when x0 is 1 and x1 is 1), in-phase component I is set to z and quadrature component Q is set to z (which matches signal point7201). For example, in order to generate a(k), the mapping illustrated inFIG.69is used. For example, when c0(k)=0 and c1(k)=0, signal point6901is mapped using the mapping illustrated inFIG.69, and signal point6901corresponds to a(k). In order to generate a′(k), the mapping to be used is set to any one of the mapping illustrated inFIG.69, the mapping illustrated inFIG.70, the mapping illustrated inFIG.71, or the mapping illustrated inFIG.72.
<1> In order to generate a′(k) when the mapping to be used is set to the mapping illustrated inFIG.69, since c0(k)=0 and c1(k)=0, signal point6901is mapped using the mapping illustrated inFIG.69, and signal point6901corresponds to a′(k). <2> In order to generate a′(k) when the mapping to be used is set to the mapping illustrated inFIG.70, since c0(k)=0 and c1(k)=0, signal point7003is mapped using the mapping illustrated inFIG.70, and signal point7003corresponds to a′(k). <3> In order to generate a′(k) when the mapping to be used is set to the mapping illustrated inFIG.71, since c0(k)=0 and c1(k)=0, signal point7102is mapped using the mapping illustrated inFIG.71, and signal point7102corresponds to a′(k). <4> In order to generate a′(k) when the mapping to be used is set to the mapping illustrated inFIG.72, since c0(k)=0 and c1(k)=0, signal point7204is mapped using the mapping illustrated inFIG.72, and signal point7204corresponds to a′(k). As described above, the relationship between “bits (for example x0, x1) to be transmitted for generation of a(k) and the distribution of signal points” and the relationship between “bits (for example x0, x1) to be transmitted for generation of a′(k) and the distribution of signal points” may be the same, and, alternatively, may be different. An example of a case in which the relationships are the same is one in whichFIG.69is used to generate a(k) andFIG.69is used to generate a′(k) as described above. Examples of cases in which the relationships are different include those in whichFIG.69is used to generate a(k) andFIG.70is used to generate a′(k),FIG.69is used to generate a(k) andFIG.71is used to generate a′(k), andFIG.69is used to generate a(k) andFIG.72is used to generate a′(k), as described above. Other examples include “the modulation scheme for generating a(k) and the modulation scheme for generating a′(k) are different” and “the signal point distribution in the in-phase I-quadrature Q plane for generating a(k) and the signal point distribution in the in-phase I-quadrature Q plane for generating a′(k) are different”. For example, as described above, QPSK may be used as the modulation scheme for generating a(k), and a modulation scheme with a signal point distribution different from that of QPSK may be used as the modulation scheme for generating a′(k). Moreover, the signal point distribution in the in-phase I-quadrature Q plane for generating a(k) may be the distribution illustrated inFIG.69, and the signal point distribution in the in-phase I-quadrature Q plane for generating a′(k) may be a distribution different from that illustrated inFIG.69. Note that “different signal point distributions in the in-phase I-quadrature Q plane” means, for example, when the coordinates of four signal points in the in-phase I-quadrature Q plane for generating a(k) are distributed as illustrated inFIG.69, at least one of the four signal points in the in-phase I-quadrature Q plane for generating a′(k) does not overlap with any one of the four signal points inFIG.69. For example, in order to generate b(k), the mapping illustrated inFIG.69is used. For example, when c2(k)=0 and c3(k)=0, signal point6901is mapped using the mapping illustrated inFIG.69, and signal point6901corresponds to b(k). In order to generate b′(k), the mapping to be used is set to any one of the mapping illustrated inFIG.69, the mapping illustrated inFIG.70, the mapping illustrated inFIG.71, or the mapping illustrated inFIG.72.
<5> In order to generate b′(k) when the mapping to be used is set to the mapping illustrated inFIG.69, since c2(k)=0 and c3(k)=0, signal point6901is mapped using the mapping illustrated inFIG.69, and signal point6901corresponds to b′(k). <6> In order to generate b′(k) when the mapping to be used is set to the mapping illustrated inFIG.70, since c2(k)=0 and c3(k)=0, signal point7003is mapped using the mapping illustrated inFIG.70, and signal point7003corresponds to b′(k). <7> In order to generate b′(k) when the mapping to be used is set to the mapping illustrated inFIG.71, since c2(k)=0 and c3(k)=0, signal point7102is mapped using the mapping illustrated inFIG.71, and signal point7102corresponds to b′(k). <8> In order to generate b′(k) when the mapping to be used is set to the mapping illustrated inFIG.72, since c2(k)=0 and c3(k)=0, signal point7204is mapped using the mapping illustrated inFIG.72, and signal point7204corresponds to b′(k). As described above, the relationship between “bits (for example x0, x1) to be transmitted for generation of b(k) and the distribution of signal points” and the relationship between “bits (for example x0, x1) to be transmitted for generation of b′(k) and the distribution of signal points” may be the same, and, alternatively, may be different. An example of a case in which the relationships are the same is one in whichFIG.69is used to generate b(k) andFIG.69is used to generate b′(k) as described above. Examples of cases in which the relationships are different include those in whichFIG.69is used to generate b(k) andFIG.70is used to generate b′(k),FIG.69is used to generate b(k) andFIG.71is used to generate b′(k), andFIG.69is used to generate b(k) andFIG.72is used to generate b′(k), as described above. Other examples include “the modulation scheme for generating b(k) and the modulation scheme for generating b′(k) are different” and “the signal point distribution in the in-phase I-quadrature Q plane for generating b(k) and the signal point distribution in the in-phase I-quadrature Q plane for generating b′(k) are different”. For example, as described above, QPSK may be used as the modulation scheme for generating b(k), and a modulation scheme with a signal point distribution different from that of QPSK may be used as the modulation scheme for generating b′(k). Moreover, the signal point distribution in the in-phase I-quadrature Q plane for generating b(k) may be the distribution illustrated inFIG.69, and the signal point distribution in the in-phase I-quadrature Q plane for generating b′(k) may be a distribution different from that illustrated inFIG.69. Note that “different signal point distributions in the in-phase I-quadrature Q plane” means, for example, when the coordinates of four signal points in the in-phase I-quadrature Q plane for generating b(k) are distributed as illustrated inFIG.69, at least one of the four signal points in the in-phase I-quadrature Q plane for generating b′(k) does not overlap with any one of the four signal points inFIG.69. As described above, since mapped signal6803A corresponds to105_1inFIG.1and mapped signal6803B corresponds to105_2inFIG.1, mapped signal6803A and mapped signal6803B are subjected to a phase change and/or weighting synthesis processing based on, for example,FIG.2,FIG.18,FIG.19,FIG.20,FIG.21,FIG.22,FIG.28,FIG.29,FIG.30,FIG.31,FIG.32,FIG.33,FIG.59,FIG.60,FIG.61,FIG.62,FIG.63,FIG.64,FIG.65,FIG.66, andFIG.67, which correspond to signal processor106illustrated inFIG.1.
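To make the first example concrete, the following is a minimal numpy sketch of the robust mapping: each group of four coded bits is mapped twice and the resulting symbols are arranged so that s1(2k)=a(k), s2(2k)=b(k), s1(2k+1)=b′(k), and s2(2k+1)=a′(k), so every bit pair is carried on both streams. The sketch uses the FIG.69mapping for all four symbols (the case in which the relationships are the same); the function and variable names are illustrative.

```python
import numpy as np

# FIG.69 mapping: (x0, x1) -> signal point, scaled by z (here z = 1/sqrt(2)).
QPSK_FIG69 = {(0, 0): 1 + 1j, (0, 1): -1 + 1j, (1, 0): 1 - 1j, (1, 1): -1 - 1j}

def robust_map(bits):
    """Arrange doubly mapped symbols as s1(2k)=a(k), s2(2k)=b(k),
    s1(2k+1)=b'(k), s2(2k+1)=a'(k)."""
    z = 1.0 / np.sqrt(2.0)
    s1, s2 = [], []
    for k in range(0, len(bits), 4):
        c0, c1, c2, c3 = bits[k:k + 4]
        a = z * QPSK_FIG69[(c0, c1)]     # a(k)
        b = z * QPSK_FIG69[(c2, c3)]     # b(k)
        a_p = z * QPSK_FIG69[(c0, c1)]   # a'(k); a FIG.70-FIG.72 mapping may be used instead
        b_p = z * QPSK_FIG69[(c2, c3)]   # b'(k); likewise
        s1 += [a, b_p]
        s2 += [b, a_p]
    return np.array(s1), np.array(s2)

s1, s2 = robust_map([0, 0, 1, 0, 1, 1, 0, 1])
```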
Second Example Hereinbefore, the transmission device included in the base station or AP was exemplified as having the configuration inFIG.1, but here operations for when the transmission device in the base station or AP has the configuration illustrated inFIG.73, which differs fromFIG.1, will be described. InFIG.73, components that operate the same as inFIG.1,FIG.44share like reference marks. Accordingly, repeated description thereof will be omitted. Mapper7301illustrated inFIG.73receives inputs of encoded data103_1,103_2, and control signal100, performs mapping based on information relating to a mapping method included in control signal100, and outputs mapped signals105_1,105_2. FIG.74illustrates operations performed by mapper7301illustrated inFIG.73. InFIG.74, components that operate the same as inFIG.68share like reference marks. Accordingly, repeated description thereof will be omitted. Mapper6802receives inputs of encoded data7401_1,7401_2, and control signal6800, and when a robust transmission method is specified by control signal6800, performs mapping processes such as those described below, and outputs mapped signals6803A,6803B. Note that control signal6800corresponds to100inFIG.73, encoded data7401_1corresponds to103_1inFIG.73, encoded data7401_2corresponds to103_2inFIG.73, mapper6802corresponds to7301inFIG.73, mapped signal6803A corresponds to105_1inFIG.73, and mapped signal6803B corresponds to105_2inFIG.73. For example, mapper6802receives inputs of bit c0(k) and bit c1(k) as encoded data7401_1, and bit c2(k) and bit c3(k) as encoded data7401_2. Note that k is an integer that is greater than or equal to 0. For example, mapper6802performs QPSK modulation on c0(k) and c1(k) to obtain mapped signal a(k). For example, mapper6802performs QPSK modulation on c2(k) and c3(k) to obtain mapped signal b(k). For example, mapper6802performs QPSK modulation on c0(k) and c1(k) to obtain mapped signal a′(k). For example, mapper6802performs QPSK modulation on c2(k) and c3(k) to obtain mapped signal b′(k). Mapped signal6803A whose symbol number i=2k is represented as s1(i=2k), mapped signal6803B whose symbol number i=2k is represented as s2(i=2k), mapped signal6803A whose symbol number i=2k+1 is represented as s1(i=2k+1), and mapped signal6803B whose symbol number i=2k+1 is represented as s2(i=2k+1). s1(i=2k), i.e., mapped signal6803A whose symbol number i=2k, is expressed as a(k), s2(i=2k), i.e., mapped signal6803B whose symbol number i=2k, is expressed as b(k), s1(i=2k+1), i.e., mapped signal6803A whose symbol number i=2k+1, is expressed as b′(k), and s2(i=2k+1), i.e., mapped signal6803B whose symbol number i=2k+1, is expressed as a′(k). Next, the relationship between “a(k) and a′(k)” and “b(k) and b′(k)” will be described with reference toFIG.69,FIG.70,FIG.71, andFIG.72. As described above, since mapped signal6803A corresponds to105_1inFIG.73and mapped signal6803B corresponds to105_2inFIG.73, mapped signal6803A and mapped signal6803B are subjected to a phase change and/or weighting synthesis processing based on, for example,FIG.2,FIG.18,FIG.19,FIG.20,FIG.21,FIG.22,FIG.28,FIG.29,FIG.30,FIG.31,FIG.32,FIG.33,FIG.59,FIG.60,FIG.61,FIG.62,FIG.63,FIG.64,FIG.65,FIG.66, andFIG.67, which correspond to signal processor106illustrated inFIG.73.
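The second example differs from the first only in where the coded bits come from: encoded data7401_1supplies c0(k), c1(k) and encoded data7401_2supplies c2(k), c3(k), so a(k)/a′(k) carry only encoder-1 bits and b(k)/b′(k) carry only encoder-2 bits. A minimal sketch of this bit feed follows; the bit values are hypothetical.

```python
import numpy as np

QPSK = {(0, 0): 1 + 1j, (0, 1): -1 + 1j, (1, 0): 1 - 1j, (1, 1): -1 - 1j}

enc1 = [0, 1, 1, 0]   # c0(0), c1(0), c0(1), c1(1) from encoded data7401_1
enc2 = [1, 1, 0, 0]   # c2(0), c3(0), c2(1), c3(1) from encoded data7401_2

symbols = []
for k in range(2):
    c0, c1 = enc1[2 * k], enc1[2 * k + 1]
    c2, c3 = enc2[2 * k], enc2[2 * k + 1]
    a_k = QPSK[(c0, c1)] / np.sqrt(2)   # a(k): encoder-1 bits only
    b_k = QPSK[(c2, c3)] / np.sqrt(2)   # b(k): encoder-2 bits only
    symbols.append((a_k, b_k))
```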
Third Example Hereinbefore, the transmission device included in the base station or AP was exemplified as having the configuration inFIG.1, but here operations for when the transmission device in the base station or AP has the configuration illustrated inFIG.73, which differs fromFIG.1, will be described. InFIG.73, components that operate the same as inFIG.1,FIG.44share like reference marks. Accordingly, repeated description thereof will be omitted. Mapper7301illustrated inFIG.73receives inputs of encoded data103_1,103_2, and control signal100, performs mapping based on information relating to a mapping method included in control signal100, and outputs mapped signals105_1,105_2. FIG.75illustrates operations performed by mapper7301illustrated inFIG.73. InFIG.75, components that operate the same as inFIG.68,FIG.74share like reference marks. Accordingly, repeated description thereof will be omitted. Mapper6802receives inputs of encoded data7401_1,7401_2, and control signal6800, and when a robust transmission method is specified by control signal6800, performs mapping processes such as those described below, and outputs mapped signals6803A,6803B. Note that control signal6800corresponds to100inFIG.73, encoded data7401_1corresponds to103_1inFIG.73, encoded data7401_2corresponds to103_2inFIG.73, mapper6802corresponds to7301inFIG.73, mapped signal6803A corresponds to105_1inFIG.73, and mapped signal6803B corresponds to105_2inFIG.73. For example, mapper6802receives inputs of bit c0(k) and bit c2(k) as encoded data7401_1, and bit c1(k) and bit c3(k) as encoded data7401_2. Note that k is an integer that is greater than or equal to 0. For example, mapper6802performs QPSK modulation on c0(k) and c1(k) to obtain mapped signal a(k). For example, mapper6802performs QPSK modulation on c2(k) and c3(k) to obtain mapped signal b(k). For example, mapper6802performs QPSK modulation on c0(k) and c1(k) to obtain mapped signal a′(k). For example, mapper6802performs QPSK modulation on c2(k) and c3(k) to obtain mapped signal b′(k). Mapped signal6803A whose symbol number i=2k is represented as s1(i=2k), mapped signal6803B whose symbol number i=2k is represented as s2(i=2k), mapped signal6803A whose symbol number i=2k+1 is represented as s1(i=2k+1), and mapped signal6803B whose symbol number i=2k+1 is represented as s2(i=2k+1). s1(i=2k), i.e., mapped signal6803A whose symbol number i=2k, is expressed as a(k), s2(i=2k), i.e., mapped signal6803B whose symbol number i=2k, is expressed as b(k), s1(i=2k+1), i.e., mapped signal6803A whose symbol number i=2k+1, is expressed as b′(k), and s2(i=2k+1), i.e., mapped signal6803B whose symbol number i=2k+1, is expressed as a′(k). Next, the relationship between “a(k) and a′(k)” and “b(k) and b′(k)” will be described with reference toFIG.69,FIG.70,FIG.71, andFIG.72. As described above, since mapped signal6803A corresponds to105_1inFIG.73and mapped signal6803B corresponds to105_2inFIG.73, mapped signal6803A and mapped signal6803B are subjected to a phase change and/or weighting synthesis processing based on, for example,FIG.2,FIG.18,FIG.19,FIG.20,FIG.21,FIG.22,FIG.28,FIG.29,FIG.30,FIG.31,FIG.32,FIG.33,FIG.59,FIG.60,FIG.61,FIG.62,FIG.63,FIG.64,FIG.65,FIG.66, andFIG.67, which correspond to signal processor106illustrated inFIG.73. Fourth Example FIG.76illustrates operations performed by mapper104inFIG.1of a base station or AP. InFIG.76, components that operate the same as inFIG.68share like reference marks.
Mapper6802receives inputs of encoded data6801and control signal6800, and when a robust transmission method is specified by control signal6800, performs mapping processes such as those described below, and outputs mapped signals6803A,6803B. Note that control signal6800corresponds to100inFIG.1, encoded data6801corresponds to103inFIG.1, mapper6802corresponds to104inFIG.1, mapped signal6803A corresponds to105_1inFIG.1, and mapped signal6803B corresponds to105_2inFIG.1. For example, mapper6802receives inputs of bit c0(k), bit c1(k), bit c2(k), bit c3(k), bit c4(k), bit c5(k), bit c6(k), and bit c7(k) as encoded data6801. Note that k is an integer that is greater than or equal to 0. Mapper6802performs modulation using a modulation scheme that uses 16 signal points, such as 16QAM, on, for example, bit c0(k), bit c1(k), bit c2(k), and bit c3(k), to obtain mapped signal a(k). Mapper6802performs modulation using a modulation scheme that uses 16 signal points, such as 16QAM, on, for example, bit c4(k), bit c5(k), bit c6(k), and bit c7(k), to obtain mapped signal b(k). Mapper6802performs modulation using a modulation scheme that uses 16 signal points, such as 16QAM, on, for example, bit c0(k), bit c1(k), bit c2(k), and bit c3(k), to obtain mapped signal a′(k). Mapper6802performs modulation using a modulation scheme that uses 16 signal points, such as 16QAM, on, for example, bit c4(k), bit c5(k), bit c6(k), and bit c7(k), to obtain mapped signal b′(k). Mapped signal6803A whose symbol number i=2k is represented as s1(i=2k), mapped signal6803B whose symbol number i=2k is represented as s2(i=2k), mapped signal6803A whose symbol number i=2k+1 is represented as s1(i=2k+1), and mapped signal6803B whose symbol number i=2k+1 is represented as s2(i=2k+1). s1(i=2k), i.e., mapped signal6803A whose symbol number i=2k, is expressed as a(k), s2(i=2k), i.e., mapped signal6803B whose symbol number i=2k, is expressed as b(k), s1(i=2k+1), i.e., mapped signal6803A whose symbol number i=2k+1, is expressed as b′(k), and s2(i=2k+1), i.e., mapped signal6803B whose symbol number i=2k+1, is expressed as a′(k). Regarding the relationship between “a(k) and a′(k)” and “b(k) and b′(k)”, as described above, for example, the relationship between “bits (for example x0, x1, x2, x3 (x2 and x3 are added since there are 16 signal points)) to be transmitted for generation of a(k) and the distribution of signal points” and the relationship between “bits (for example x0, x1, x2, x3) to be transmitted for generation of a′(k) and the distribution of signal points” may be the same, and, alternatively, may be different. Other examples include “the modulation scheme for generating a(k) and the modulation scheme for generating a′(k) are different” and “the signal point distribution in the in-phase I-quadrature Q plane for generating a(k) and the signal point distribution in the in-phase I-quadrature Q plane for generating a′(k) are different”. Note that “different signal point distributions in the in-phase I-quadrature Q plane” means, for example, that when the coordinates of the 16 signal points in the in-phase I-quadrature Q plane for generating a(k) are determined, at least one of the 16 signal points in the in-phase I-quadrature Q plane for generating a′(k) does not overlap with any one of the 16 signal points in the in-phase I-quadrature Q plane for generating a(k).
Regarding the relationship between “a(k) and a′(k)” and “b(k) and b′(k)”, as described above, for example, the relationship between “bits (for example x0, x1, x2, and x3 (x2 and x3 are added since there are 16 signal points)) to be transmitted for generation of b(k) and the distribution of signal points” and the relationship between “bits (for example x0, x1, x2, x3) to be transmitted for generation of b′(k) and the distribution of signal points” may be the same, and, alternatively, may be different. Other examples include “the modulation scheme for generating b(k) and the modulation scheme for generating b′(k) are different” and “the signal point distribution in the in-phase I-quadrature Q plane for generating b(k) and the signal point distribution in the in-phase I-quadrature Q plane for generating b′(k) are different”. Note that “different signal point distributions in the in-phase I-quadrature Q plane” means, for example, that when the coordinates of the 16 signal points in the in-phase I-quadrature Q plane for generating b(k) are determined, at least one of the 16 signal points in the in-phase I-quadrature Q plane for generating b′(k) does not overlap with any one of the 16 signal points in the in-phase I-quadrature Q plane for generating b(k). As described above, since mapped signal6803A corresponds to105_1inFIG.1and mapped signal6803B corresponds to105_2inFIG.1, mapped signal6803A and mapped signal6803B are subjected to a phase change and/or weighting synthesis processing based on, for example,FIG.2,FIG.18,FIG.19,FIG.20,FIG.21,FIG.22,FIG.28,FIG.29,FIG.30,FIG.31,FIG.32,FIG.33,FIG.59,FIG.60,FIG.61,FIG.62,FIG.63,FIG.64,FIG.65,FIG.66, andFIG.67, which correspond to signal processor106illustrated inFIG.1. Fifth Example Hereinbefore, the transmission device included in the base station or AP was exemplified as having the configuration inFIG.1, but here operations for when the transmission device in the base station or AP has the configuration illustrated inFIG.73, which differs fromFIG.1, will be described. InFIG.73, components that operate the same as inFIG.1,FIG.44share like reference marks. Accordingly, repeated description thereof will be omitted. Mapper7301illustrated inFIG.73receives inputs of encoded data103_1,103_2, and control signal100, performs mapping based on information relating to a mapping method included in control signal100, and outputs mapped signals105_1,105_2. FIG.77illustrates operations performed by mapper7301illustrated inFIG.73. InFIG.77, components that operate the same as inFIG.68,FIG.74share like reference marks. Accordingly, repeated description thereof will be omitted. Mapper6802receives inputs of encoded data7401_1,7401_2, and control signal6800, and when a robust transmission method is specified by control signal6800, performs mapping processes such as those described below, and outputs mapped signals6803A,6803B. Note that control signal6800corresponds to100inFIG.73, encoded data7401_1corresponds to103_1inFIG.73, encoded data7401_2corresponds to103_2inFIG.73, mapper6802corresponds to7301inFIG.73, mapped signal6803A corresponds to105_1inFIG.73, and mapped signal6803B corresponds to105_2inFIG.73. For example, mapper6802receives inputs of bits c0(k), c1(k), c2(k), and c3(k) as encoded data7401_1, and bits c4(k), c5(k), c6(k), and c7(k) as encoded data7401_2. Note that k is an integer that is greater than or equal to 0.
Mapper 6802 performs modulation using a modulation scheme that uses 16 signal points, such as 16QAM, on, for example, bit c0(k), bit c1(k), bit c2(k), and bit c3(k), to obtain mapped signal a(k). Mapper 6802 performs modulation using a modulation scheme that uses 16 signal points, such as 16QAM, on, for example, bit c4(k), bit c5(k), bit c6(k), and bit c7(k), to obtain mapped signal b(k). Mapper 6802 performs modulation using a modulation scheme that uses 16 signal points, such as 16QAM, on, for example, bit c0(k), bit c1(k), bit c2(k), and bit c3(k), to obtain mapped signal a′(k). Mapper 6802 performs modulation using a modulation scheme that uses 16 signal points, such as 16QAM, on, for example, bit c4(k), bit c5(k), bit c6(k), and bit c7(k), to obtain mapped signal b′(k). Mapped signal 6803A whose symbol number i=2k is represented as s1(i=2k), mapped signal 6803B whose symbol number i=2k is represented as s2(i=2k), mapped signal 6803A whose symbol number i=2k+1 is represented as s1(i=2k+1), and mapped signal 6803B whose symbol number i=2k+1 is represented as s2(i=2k+1). s1(i=2k), i.e., mapped signal 6803A whose symbol number i=2k, is expressed as a(k), s2(i=2k), i.e., mapped signal 6803B whose symbol number i=2k, is expressed as b(k), s1(i=2k+1), i.e., mapped signal 6803A whose symbol number i=2k+1, is expressed as b′(k), and s2(i=2k+1), i.e., mapped signal 6803B whose symbol number i=2k+1, is expressed as a′(k). For the relationship between “a(k) and a′(k)” and “b(k) and b′(k)”, refer to the fourth example. As described above, since mapped signal 6803A corresponds to 105_1 in FIG. 73 and mapped signal 6803B corresponds to 105_2 in FIG. 73, mapped signal 6803A and mapped signal 6803B are applied with a phase change and/or weighting synthesis processing based on, for example, FIG. 2, FIG. 18, FIG. 19, FIG. 20, FIG. 21, FIG. 22, FIG. 28, FIG. 29, FIG. 30, FIG. 31, FIG. 32, FIG. 33, FIG. 59, FIG. 60, FIG. 61, FIG. 62, FIG. 63, FIG. 64, FIG. 65, FIG. 66, and FIG. 67, which correspond to signal processor 106 illustrated in FIG. 73.

Sixth Example

Hereinbefore, the transmission device included in the base station or AP was exemplified as having the configuration in FIG. 1, but here, operations for when the transmission device in the base station or AP has the configuration illustrated in FIG. 73, which differs from FIG. 1, will be described. In FIG. 73, components that operate the same as in FIG. 1 and FIG. 44 share like reference marks. Accordingly, repeated description thereof will be omitted. Mapper 7301 illustrated in FIG. 73 receives inputs of encoded data 103_1, 103_2, and control signal 100, performs mapping based on information relating to a mapping method included in control signal 100, and outputs mapped signals 105_1, 105_2. FIG. 78 illustrates operations performed by mapper 7301 illustrated in FIG. 73. In FIG. 78, components that operate the same as in FIG. 68 and FIG. 74 share like reference marks. Accordingly, repeated description thereof will be omitted. Mapper 6802 receives inputs of encoded data 7401_1, 7401_2, and control signal 6800, and when a robust transmission method is specified by control signal 6800, performs mapping processes such as those described below, and outputs mapped signals 6803A, 6803B. Note that control signal 6800 corresponds to 100 in FIG. 73, encoded data 7401_1 corresponds to 103_1 in FIG. 73, encoded data 7401_2 corresponds to 103_2 in FIG. 73, mapper 6802 corresponds to 7301 in FIG. 73, mapped signal 6803A corresponds to 105_1 in FIG. 73, and mapped signal 6803B corresponds to 105_2 in FIG. 73.
For example, mapper 6802 receives inputs of bits c0(k), c1(k), c4(k), and c5(k) as encoded data 7401_1, and bits c2(k), c3(k), c6(k), and c7(k) as encoded data 7401_2. Note that k is an integer that is greater than or equal to 0. Mapper 6802 performs modulation using a modulation scheme that uses 16 signal points, such as 16QAM, on, for example, bit c0(k), bit c1(k), bit c2(k), and bit c3(k), to obtain mapped signal a(k). Mapper 6802 performs modulation using a modulation scheme that uses 16 signal points, such as 16QAM, on, for example, bit c4(k), bit c5(k), bit c6(k), and bit c7(k), to obtain mapped signal b(k). Mapper 6802 performs modulation using a modulation scheme that uses 16 signal points, such as 16QAM, on, for example, bit c0(k), bit c1(k), bit c2(k), and bit c3(k), to obtain mapped signal a′(k). Mapper 6802 performs modulation using a modulation scheme that uses 16 signal points, such as 16QAM, on, for example, bit c4(k), bit c5(k), bit c6(k), and bit c7(k), to obtain mapped signal b′(k). Mapped signal 6803A whose symbol number i=2k is represented as s1(i=2k), mapped signal 6803B whose symbol number i=2k is represented as s2(i=2k), mapped signal 6803A whose symbol number i=2k+1 is represented as s1(i=2k+1), and mapped signal 6803B whose symbol number i=2k+1 is represented as s2(i=2k+1). s1(i=2k), i.e., mapped signal 6803A whose symbol number i=2k, is expressed as a(k), s2(i=2k), i.e., mapped signal 6803B whose symbol number i=2k, is expressed as b(k), s1(i=2k+1), i.e., mapped signal 6803A whose symbol number i=2k+1, is expressed as b′(k), and s2(i=2k+1), i.e., mapped signal 6803B whose symbol number i=2k+1, is expressed as a′(k). For the relationship between “a(k) and a′(k)” and “b(k) and b′(k)”, refer to the fourth example. As described above, since mapped signal 6803A corresponds to 105_1 in FIG. 73 and mapped signal 6803B corresponds to 105_2 in FIG. 73, mapped signal 6803A and mapped signal 6803B are applied with a phase change and/or weighting synthesis processing based on, for example, FIG. 2, FIG. 18, FIG. 19, FIG. 20, FIG. 21, FIG. 22, FIG. 28, FIG. 29, FIG. 30, FIG. 31, FIG. 32, FIG. 33, FIG. 59, FIG. 60, FIG. 61, FIG. 62, FIG. 63, FIG. 64, FIG. 65, FIG. 66, and FIG. 67, which correspond to signal processor 106 illustrated in FIG. 73. As described above, as a result of the transmission device transmitting a modulated signal in this way, advantageous effects are achievable, such as the reception device being able to achieve high data reception quality and, for example, favorable data reception quality being realized in environments in which direct waves are dominant. Note that a configuration in which the communications method (transmission method) described in this embodiment is selectable by the base station or AP and a configuration in which the terminal described in Embodiments A1, A2, and A4 transmits a reception capability notification symbol may be combined. For example, when the terminal notifies the base station or AP that it supports demodulation of modulated signals with phase changes via information 3601 relating to support for demodulation of modulated signals with phase changes in FIG. 38, or notifies the base station or AP that it supports the transmission method (communications method) described in this embodiment via information 3702 relating to support for reception for a plurality of streams, the base station or AP can determine to transmit a plurality of modulated signals for a plurality of streams via the transmission method (communications method) described in this embodiment and then transmit the modulated signals.
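The symbol arrangement shared by the fourth through sixth examples above can be summarized in a short sketch. The following Python fragment is a minimal illustration only: it assumes one particular Gray-mapped 16QAM labeling with 1/√10 normalization (the examples above do not fix a labeling), and it generates a′(k) and b′(k) with the same mapping as a(k) and b(k), although, as noted above, a different signal point distribution may be used for them.

```python
import numpy as np

def map_16qam(b0, b1, b2, b3):
    # Assumed Gray labeling; the specification does not fix a bit-to-point mapping.
    i = (1 - 2 * b0) * (3 - 2 * b2)    # in-phase amplitude in {-3, -1, +1, +3}
    q = (1 - 2 * b1) * (3 - 2 * b3)    # quadrature amplitude in {-3, -1, +1, +3}
    return (i + 1j * q) / np.sqrt(10)  # average symbol power normalized to 1

def robust_symbols(c):
    """c: the eight bits c0(k)..c7(k) for one index k.
    Returns ((s1(2k), s1(2k+1)), (s2(2k), s2(2k+1)))."""
    a  = map_16qam(c[0], c[1], c[2], c[3])   # a(k)  from c0(k)..c3(k)
    b  = map_16qam(c[4], c[5], c[6], c[7])   # b(k)  from c4(k)..c7(k)
    a2 = map_16qam(c[0], c[1], c[2], c[3])   # a'(k): same bits; a different signal
    b2 = map_16qam(c[4], c[5], c[6], c[7])   # b'(k)  point distribution may be used
    # s1(2k) = a(k), s2(2k) = b(k), s1(2k+1) = b'(k), s2(2k+1) = a'(k)
    return (a, b2), (b, a2)

s1_pair, s2_pair = robust_symbols([0, 1, 0, 0, 1, 1, 0, 1])
```

The point of the arrangement is visible in the return statement: each of the two streams carries both a(k)-derived and b(k)-derived symbols across symbol numbers 2k and 2k+1, which is what gives the method its robustness.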
As a result of such signaling, the terminal can achieve high data reception quality. Moreover, the base station or AP takes into consideration the communications method supported by the terminal and the communications environment, for example, and accurately generates and transmits a modulated signal receivable by the terminal, which makes it possible to achieve an advantageous effect of an improvement in data transmission efficiency in the system including the base station or AP and the terminal.

Embodiment A11

In this embodiment, using the examples described in Embodiment A1, Embodiment A2, and Embodiment A4, another implementation method for operations performed by the terminal will be given. FIG. 24 illustrates one example of a configuration of a terminal. As this example has already been described, repeated description will be omitted. FIG. 41 illustrates one example of a configuration of reception device 2404 in the terminal illustrated in FIG. 24. As operations have already been described in Embodiment A4 in detail, description will be omitted from this embodiment. FIG. 42 illustrates an example of a frame configuration upon single modulated signal transmission by a base station or AP, which is the communication partner of the terminal, using a multi-carrier transmission scheme such as OFDM. In FIG. 42, components that operate the same as in FIG. 4 share like reference marks. As operations have already been described in Embodiment A4 in detail, description will be omitted from this embodiment. For example, the transmission device in the base station illustrated in FIG. 1 may transmit a single stream modulated signal having the frame configuration illustrated in FIG. 42. FIG. 43 illustrates an example of a frame configuration upon single modulated signal transmission by a base station or AP, which is the communication partner of the terminal, using a single-carrier transmission scheme. In FIG. 43, components that operate the same as in FIG. 39 share like reference marks. For example, the transmission device in the base station illustrated in FIG. 1 may transmit a single stream modulated signal having the frame configuration illustrated in FIG. 43. For example, the transmission device in the base station illustrated in FIG. 1 may transmit a plurality of streams of a plurality of modulated signals having the frame configuration illustrated in FIG. 4 and/or FIG. 5. Furthermore, for example, the transmission device in the base station illustrated in FIG. 1 may transmit a plurality of streams of a plurality of modulated signals having the frame configuration illustrated in FIG. 39 and/or FIG. 40. FIG. 79 illustrates an example of data included in the reception capability notification symbol (3502) transmitted by the terminal illustrated in FIG. 35, different from the examples illustrated in FIG. 36, FIG. 37, and FIG. 38. Note that in FIG. 79, operations that are the same as in FIG. 36, FIG. 37, and FIG. 38 share like reference marks. Moreover, duplicate description of components that perform the same operations as in FIG. 36, FIG. 37, and FIG. 38 will be omitted. Data 7901 relating to “supported precoding method” in FIG. 79 will be described. When the base station or AP transmits a plurality of modulated signals for a plurality of streams, a single precoding method is selected from among a plurality of precoding methods, and weighted synthesis is performed according to the selected precoding method (by, for example, weighting synthesizer 203 illustrated in FIG. 2) to generate a modulated signal to be transmitted. Note that, as described in the present specification, the base station or AP may perform a phase change.
Here, data for the terminal to notify the base station or AP of “whether the terminal is capable of demodulating the modulated signals when any one of the precoding methods is implemented” is data 7901 related to “supported precoding method”. For example, assume that the base station or AP may possibly support “Equation (33) or Equation (34)” as precoding method #A and support “θ=π/4 radians in Equation (15) or Equation (16)” as precoding method #B upon generating a plurality of streams of modulated signals. Upon generating a plurality of streams of modulated signals, assume the base station or AP selects one of precoding method #A and precoding method #B, implements precoding (weighted synthesis) based on the selected precoding method, and transmits the modulated signals. Here, the terminal transmits modulated signals including “information on whether, upon the base station or AP transmitting a plurality of modulated signals using precoding method #A, the terminal is capable of receiving the modulated signals, demodulating the modulated signals, and obtaining data” and “information on whether, upon the base station or AP transmitting a plurality of modulated signals using precoding method #B, the terminal is capable of receiving the modulated signals, demodulating the modulated signals, and obtaining data”, and by receiving these modulated signals, the base station or AP can know “whether the terminal, which is the communication partner, supports precoding method #A and/or precoding method #B and can demodulate the modulated signals”. For example, information 7901 on supported precoding method illustrated in FIG. 79 and included in reception capability notification symbol (3502) that is transmitted by the terminal is configured as follows. Information 7901 on supported precoding method is configured of two bits, bit m0 and bit m1, and the terminal transmits bit m0 and bit m1 to the base station or AP, which is the communication partner, as information 7901 on supported precoding method. If the terminal receives modulated signals generated using precoding method #A by the base station or AP and can demodulate (supports demodulation of) the received modulated signals, the terminal sets m0 to 1, and transmits, to the base station or AP, which is the communication partner, bit m0 as part of information 7901 on supported precoding method. Moreover, if the terminal receives modulated signals generated using precoding method #A by the base station or AP but does not support demodulation of the received modulated signals, the terminal sets m0 to 0, and transmits, to the base station or AP, which is the communication partner, bit m0 as part of information 7901 on supported precoding method. If the terminal receives modulated signals generated using precoding method #B by the base station or AP and can demodulate (supports demodulation of) the received modulated signals, the terminal sets m1 to 1, and transmits, to the base station or AP, which is the communication partner, bit m1 as part of information 7901 on supported precoding method. Moreover, if the terminal receives modulated signals generated using precoding method #B by the base station or AP but does not support demodulation of the received modulated signals, the terminal sets m1 to 0, and transmits, to the base station or AP, which is the communication partner, bit m1 as part of information 7901 on supported precoding method. Next, a specific operational example will be given.
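Before turning to the operational examples, the following minimal sketch illustrates one way the two-bit field just described might be built by the terminal and read by the base station or AP. This is only an illustration; the specification defines the meanings of m0 and m1 but not a concrete encoding or selection policy, so the function names and the preference for method #A over #B below are assumptions.

```python
def encode_precoding_support(demod_method_a: bool, demod_method_b: bool):
    """Terminal side: build (m0, m1) for information 7901."""
    m0 = 1 if demod_method_a else 0  # 1: demodulation under precoding method #A supported
    m1 = 1 if demod_method_b else 0  # 1: demodulation under precoding method #B supported
    return m0, m1

def select_precoding(m0: int, m1: int):
    """Base station / AP side: pick a usable precoding method, if any.
    Preferring #A over #B here is an arbitrary illustrative choice."""
    if m0 == 1:
        return "#A"
    if m1 == 1:
        return "#B"
    return None  # neither supported: do not precode multi-stream signals for this terminal
```

With this reading, the first operational example below corresponds to (m0, m1) = (1, 1) and the third to (m0, m1) = (1, 0).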
As a first example, the reception device of the terminal has the configuration illustrated in FIG. 8, and, for example, supports the following. For example, the reception device of the terminal supports reception under “communications scheme #A” and “communications scheme #B” described in Embodiment A2. Accordingly, in “communications scheme #B”, even if the communication partner transmits a plurality of streams of a plurality of modulated signals, the terminal supports reception of such. Moreover, in “communications scheme #A” and “communications scheme #B”, even if the communication partner transmits a single stream modulated signal, the terminal supports reception of such. Thus, when the communication partner transmits a plurality of streams of modulated signals and a phase change is implemented, the terminal supports reception of such. The reception device of the terminal supports a single-carrier scheme and an OFDM scheme. The reception device of the terminal supports decoding of “error correction encoding scheme #C” and decoding of “error correction encoding scheme #D” as an error correction encoding scheme. The reception device of the terminal supports reception under “precoding method #A” and “precoding method #B” described above. Therefore, based on the rules described in Embodiment A2, a terminal having the configuration illustrated in FIG. 8 that supports the above generates reception capability notification symbol 3502 illustrated in FIG. 79 and, for example, transmits reception capability notification symbol 3502 in accordance with the sequence illustrated in FIG. 35. Here, the terminal uses, for example, transmission device 2403 illustrated in FIG. 24 to generate reception capability notification symbol 3502 illustrated in FIG. 79, and transmission device 2403 illustrated in FIG. 24 transmits reception capability notification symbol 3502 illustrated in FIG. 79 in accordance with the sequence illustrated in FIG. 35. Note that in the case of the first example, bit m0 and bit m1 of information 7901 on supported precoding method are set to 1 and 1, respectively. Reception device 2304 in the base station or AP illustrated in FIG. 23 receives reception capability notification symbol 3502 transmitted by the terminal. Control signal generator 2308 in the base station illustrated in FIG. 23 then extracts data from reception capability notification symbol 3502, and knows, from supported scheme 3801, that the terminal supports communications scheme #A and communications scheme #B. Accordingly, based on information 3702 relating to support for reception for a plurality of streams in FIG. 79, control signal generator 2308 in the base station knows that even if the communication partner transmits a plurality of streams of a plurality of modulated signals, the terminal supports reception of such, and that in “communications scheme #A” and “communications scheme #B”, even if the communication partner transmits a single stream modulated signal, the terminal supports reception of such. Control signal generator 2308 in the base station then knows that the terminal supports demodulation of modulated signals with phase changes based on information 3601 relating to support for demodulation of modulated signals with phase changes in FIG. 79. Control signal generator 2308 in the base station knows that the terminal supports a single-carrier scheme and an OFDM scheme based on information 3802 relating to multi-carrier scheme support in FIG. 79.
Then, based on information 3803 relating to supported error correction encoding scheme in FIG. 79, control signal generator 2308 in the base station knows that the terminal supports decoding of error correction encoding scheme #C and decoding of error correction encoding scheme #D. Based on information 7901 relating to supported precoding method in FIG. 79, control signal generator 2308 in the base station knows that the terminal supports reception under precoding method #A and reception under precoding method #B. Accordingly, the base station or AP takes into consideration the communications method supported by the terminal and the communications environment, for example, and accurately generates and transmits a modulated signal receivable by the terminal to achieve an advantageous effect of an improvement in data transmission efficiency in the system including the base station or AP and terminal. As a second example, the reception device of the terminal has the configuration illustrated in FIG. 41, and, for example, supports the following. For example, the reception device of the terminal supports reception under “communications scheme #A” and “communications scheme #B” described in Embodiment A2. However, even if the communication partner transmits a plurality of streams of a plurality of modulated signals, the reception device of the terminal does not support reception of such. Thus, when the communication partner transmits a plurality of streams of a plurality of modulated signals and a phase change is implemented, the terminal does not support reception of such. The reception device of the terminal supports a single-carrier scheme and an OFDM scheme. The reception device of the terminal supports decoding of “error correction encoding scheme #C” and decoding of “error correction encoding scheme #D” as an error correction encoding scheme. The reception device of the terminal does not support reception under “precoding method #A” and “precoding method #B” described above. Therefore, based on the rules described in Embodiment A2, a terminal having the configuration illustrated in FIG. 41 that supports the above generates reception capability notification symbol 3502 illustrated in FIG. 79 and, for example, transmits reception capability notification symbol 3502 in accordance with the sequence illustrated in FIG. 35. Here, the terminal uses, for example, transmission device 2403 illustrated in FIG. 24 to generate reception capability notification symbol 3502 illustrated in FIG. 79, and transmission device 2403 illustrated in FIG. 24 transmits reception capability notification symbol 3502 illustrated in FIG. 79 in accordance with the sequence illustrated in FIG. 35. Reception device 2304 in the base station or AP illustrated in FIG. 23 receives reception capability notification symbol 3502 transmitted by the terminal. Control signal generator 2308 in the base station illustrated in FIG. 23 then extracts data from reception capability notification symbol 3502, and knows, from supported scheme 3801, that the terminal supports communications scheme #A and communications scheme #B. Accordingly, based on information 3702 relating to support for reception for a plurality of streams in FIG. 79, control signal generator 2308 in the base station knows that “even if the communication partner transmits a plurality of streams of a plurality of modulated signals, the terminal does not support reception of such”.
Accordingly, based on information 3601 relating to support for demodulation of modulated signals with phase changes in FIG. 79 being null, control signal generator 2308 in the base station determines to not transmit a phase-changed modulated signal, and outputs control signal 2309 including such information. Control signal generator 2308 in the base station determines that information 7901 related to supported precoding method in FIG. 79 is null and that a plurality of modulated signals for a plurality of streams will not be transmitted, and outputs control signal 2309 including such information. Control signal generator 2308 in the base station knows that the terminal supports a single-carrier scheme and an OFDM scheme based on information 3802 relating to multi-carrier scheme support in FIG. 79. Then, based on information 3803 relating to supported error correction encoding scheme in FIG. 79, control signal generator 2308 in the base station knows that the terminal supports decoding of error correction encoding scheme #C and decoding of error correction encoding scheme #D. For example, since the terminal has the configuration illustrated in FIG. 41, it operates as described above so that a plurality of modulated signals for a plurality of streams is not transmitted by the base station or AP, which allows the base station or AP to accurately transmit modulated signals that can be demodulated and decoded by the terminal. This makes it possible to achieve an advantageous effect of an improvement in data transmission efficiency in the system including the base station or AP and terminal. As a third example, the reception device of the terminal has the configuration illustrated in FIG. 8, and, for example, supports the following. For example, the reception device of the terminal supports reception under “communications scheme #A” and “communications scheme #B” described in Embodiment A2. Accordingly, in “communications scheme #B”, even if the communication partner transmits a plurality of streams of a plurality of modulated signals, the terminal supports reception of such. Moreover, in “communications scheme #A” and “communications scheme #B”, even if the communication partner transmits a single stream modulated signal, the terminal supports reception of such. Thus, when the communication partner transmits a plurality of streams of modulated signals and a phase change is implemented, the terminal supports reception of such. The reception device of the terminal supports a single-carrier scheme and an OFDM scheme. The reception device of the terminal supports decoding of “error correction encoding scheme #C” and decoding of “error correction encoding scheme #D” as an error correction encoding scheme. The reception device of the terminal supports reception of “precoding method #A” described above. Therefore, based on the rules described in Embodiment A2, a terminal having the configuration illustrated in FIG. 8 that supports the above generates reception capability notification symbol 3502 illustrated in FIG. 79 and, for example, transmits reception capability notification symbol 3502 in accordance with the sequence illustrated in FIG. 35. Here, the terminal uses, for example, transmission device 2403 illustrated in FIG. 24 to generate reception capability notification symbol 3502 illustrated in FIG. 79, and transmission device 2403 illustrated in FIG. 24 transmits reception capability notification symbol 3502 illustrated in FIG. 79 in accordance with the sequence illustrated in FIG. 35.
Note that in the case of the third example, bit m0 and bit m1 of information 7901 on supported precoding method are set to 1 and 0, respectively. Reception device 2304 in the base station or AP illustrated in FIG. 23 receives reception capability notification symbol 3502 transmitted by the terminal. Control signal generator 2308 in the base station illustrated in FIG. 23 then extracts data from reception capability notification symbol 3502, and knows, from supported scheme 3801, that the terminal supports communications scheme #A and communications scheme #B. Accordingly, based on information 3702 relating to support for reception for a plurality of streams in FIG. 79, control signal generator 2308 in the base station knows that even if the communication partner transmits a plurality of streams of a plurality of modulated signals, the terminal supports reception of such, and that in “communications scheme #A” and “communications scheme #B”, even if the communication partner transmits a single stream modulated signal, the terminal supports reception of such. Control signal generator 2308 in the base station then knows that the terminal supports demodulation of modulated signals with phase changes based on information 3601 relating to support for demodulation of modulated signals with phase changes in FIG. 79. Control signal generator 2308 in the base station knows that the terminal supports a single-carrier scheme and an OFDM scheme based on information 3802 relating to multi-carrier scheme support in FIG. 79. Then, based on information 3803 relating to supported error correction encoding scheme in FIG. 79, control signal generator 2308 in the base station knows that the terminal supports decoding of error correction encoding scheme #C and decoding of error correction encoding scheme #D. Then, based on information 7901 relating to supported precoding method in FIG. 79, control signal generator 2308 in the base station knows that the terminal supports reception under precoding method #A. Accordingly, the base station or AP takes into consideration the communications method supported by the terminal and the communications environment, for example, and accurately generates and transmits a modulated signal receivable by the terminal to achieve an advantageous effect of an improvement in data transmission efficiency in the system including the base station or AP and terminal. As a fourth example, the reception device of the terminal has the configuration illustrated in FIG. 8, and, for example, supports the following. For example, the reception device of the terminal supports reception under “communications scheme #A” and “communications scheme #B” described in Embodiment A2. Accordingly, in “communications scheme #B”, even if the communication partner transmits a plurality of streams of a plurality of modulated signals, the terminal supports reception of such. Moreover, in “communications scheme #A” and “communications scheme #B”, even if the communication partner transmits a single stream modulated signal, the terminal supports reception of such. The reception device of the terminal supports single-carrier schemes. Note that in a single-carrier scheme, the base station, which is the communication partner, does not support “implementation of a phase change for a plurality of streams of a plurality of modulated signals”, and does not support “implementation of precoding”.
Thus, when the communication partner transmits a plurality of streams of a plurality of modulated signals and a phase change is implemented, the terminal does not support reception of such. The reception device of the terminal supports decoding of “error correction encoding scheme #C” and decoding of “error correction encoding scheme #D” as an error correction encoding scheme. The reception device of the terminal supports reception of “precoding method #A” described above. Therefore, based on the rules described in Embodiment A2, a terminal having the configuration illustrated in FIG. 8 that supports the above generates reception capability notification symbol 3502 illustrated in FIG. 79 and, for example, transmits reception capability notification symbol 3502 in accordance with the sequence illustrated in FIG. 35. Here, the terminal uses, for example, transmission device 2403 illustrated in FIG. 24 to generate reception capability notification symbol 3502 illustrated in FIG. 79, and transmission device 2403 illustrated in FIG. 24 transmits reception capability notification symbol 3502 illustrated in FIG. 79 in accordance with the sequence illustrated in FIG. 35. Reception device 2304 in the base station or AP illustrated in FIG. 23 receives reception capability notification symbol 3502 transmitted by the terminal, and control signal generator 2308 in the base station then extracts data from reception capability notification symbol 3502. Accordingly, based on information 3702 relating to support for reception for a plurality of streams in FIG. 79, control signal generator 2308 in the base station knows that even if the communication partner transmits a plurality of streams of a plurality of modulated signals, the terminal supports reception of such, and that in “communications scheme #A” and “communications scheme #B”, even if the communication partner transmits a single stream modulated signal, the terminal supports reception of such. Control signal generator 2308 in the base station knows that the terminal supports single-carrier schemes based on information 3802 relating to multi-carrier scheme support in FIG. 79. Accordingly, based on information 3601 relating to support for demodulation of modulated signals with phase changes in FIG. 79 being null, control signal generator 2308 in the base station determines to not transmit a phase-changed modulated signal, and outputs control signal 2309 including such information. Control signal generator 2308 in the base station treats information 7901 related to supported precoding method in FIG. 79 as null, even though it indicates that precoding method #A is supported, and outputs control signal 2309 including such information. Then, based on information 3803 relating to supported error correction encoding scheme in FIG. 79, control signal generator 2308 in the base station knows that the terminal supports decoding of error correction encoding scheme #C and decoding of error correction encoding scheme #D. Accordingly, the base station or AP takes into consideration the communications method supported by the terminal and the communications environment, for example, and accurately generates and transmits a modulated signal receivable by the terminal to achieve an advantageous effect of an improvement in data transmission efficiency in the system including the base station or AP and terminal. As a fifth example, the reception device of the terminal has the configuration illustrated in FIG. 41, and, for example, supports the following. For example, the reception device of the terminal supports reception under “communications scheme #A” described in Embodiment A2. Accordingly, even if the communication partner transmits a plurality of streams of a plurality of modulated signals, the terminal does not support reception of such.
Thus, when the communication partner transmits a plurality of streams of a plurality of modulated signals and a phase change is implemented, the terminal does not support reception of such. Furthermore, even if the communication partner transmits a plurality of streams of a plurality of modulated signals generated using “precoding method #A”, the terminal does not support reception of such, and even if the communication partner transmits a plurality of streams of a plurality of modulated signals generated using “precoding method #B”, the terminal does not support reception of such. Only a single-carrier scheme is supported. The terminal supports only decoding of “error correction encoding scheme #C” as an error correction encoding scheme. Therefore, based on the rules described in Embodiment A2, a terminal having the configuration illustrated in FIG. 41 that supports the above generates reception capability notification symbol 3502 illustrated in FIG. 79 and, for example, transmits reception capability notification symbol 3502 in accordance with the sequence illustrated in FIG. 35. Here, the terminal uses, for example, transmission device 2403 illustrated in FIG. 24 to generate reception capability notification symbol 3502 illustrated in FIG. 79, and transmission device 2403 illustrated in FIG. 24 transmits reception capability notification symbol 3502 illustrated in FIG. 79 in accordance with the sequence illustrated in FIG. 35. Reception device 2304 in the base station or AP illustrated in FIG. 23 receives reception capability notification symbol 3502 transmitted by the terminal. Control signal generator 2308 in the base station illustrated in FIG. 23 then extracts data from reception capability notification symbol 3502, and knows, from supported scheme 3801, that the terminal supports communications scheme #A. Based on information 3601 related to support for demodulation of modulated signals with phase changes in FIG. 79 being null and communications scheme #A being supported, control signal generator 2308 in the base station determines to not transmit modulated signals implemented with a phase change, and outputs control signal 2309 including such information. This is because communications scheme #A does not support multi-stream multi-modulated-signal transmission or reception. Based on information 3702 relating to support for reception for a plurality of streams in FIG. 79 being null and communications scheme #A being supported, control signal generator 2308 in the base station determines to not transmit a plurality of modulated signals for a plurality of streams, and outputs control signal 2309 including such information. This is because communications scheme #A does not support transmission or reception of a plurality of modulated signals for a plurality of streams. Control signal generator 2308 in the base station determines that information 7901 related to supported precoding method in FIG. 79 is null since communications scheme #A is supported, determines not to transmit a plurality of modulated signals for a plurality of streams, and outputs control signal 2309 including such information. Based on information 3803 relating to supported error correction encoding scheme in FIG. 79 being null and communications scheme #A being supported, control signal generator 2308 in the base station determines to use error correction encoding scheme #C, and outputs control signal 2309 including such information. This is because communications scheme #A supports error correction encoding scheme #C.
For example, since the terminal has the configuration illustrated in FIG. 41 and supports communications method #A, the above-described operations are performed so that the base station or AP does not transmit a plurality of modulated signals for a plurality of streams; as a result, the communications method #A modulated signal is accurately transmitted, whereby the base station or AP can achieve an advantageous effect of an improvement in data transmission efficiency in the system including the base station or AP and terminal. As described above, the base station or AP obtains, from the terminal, which is the communication partner of the base station or AP, information relating to the schemes for which demodulation is supported by the terminal, and based on that information, determines the number of modulated signals, the communications method of the modulated signals, and the signal processing method of the modulated signals, for example. As a result, the base station or AP can accurately generate and transmit a modulated signal receivable by the terminal, which makes it possible to achieve an advantageous effect of an improvement in data transmission efficiency in the system including the base station or AP and terminal. Here, for example, as illustrated in FIG. 79, by configuring a reception capability notification symbol from a plurality of items of information, the base station or AP can easily determine the validity of information included in the reception capability notification symbol, and as a result, it is possible to rapidly determine, for example, a modulated signal scheme and signal processing method to be used for transmission. Then, based on information in the reception capability notification symbols transmitted by the terminals, the base station or AP can improve data transmission efficiency by transmitting modulated signals to each terminal using a suitable transmission method. Note that the method of configuring the information in the reception capability notification symbol described in this embodiment is merely one non-limiting example. Moreover, the order in which and timing at which the terminal transmits the reception capability notification symbols to the base station or AP described in this embodiment are merely non-limiting examples.

Embodiment B1

In this embodiment, an example of a specific phase change method used under a single-carrier (SC) scheme will be described. In this embodiment, a case in which the base station or AP and the terminal communicate with each other will be supposed. Here, one example of the configuration of the transmission device in the base station or AP is as illustrated in FIG. 1. Since this configuration has been described in other embodiments, repeated description will be omitted. FIG. 81 illustrates an example of a frame configuration of transmission signal 108_A illustrated in FIG. 1. In FIG. 81, time is represented on the horizontal axis (accordingly, this relates to a single-carrier scheme signal). As illustrated in FIG. 81, in transmission signal 108_A, the base station or AP transmits preamble 8101 from time t1 to time t20, transmits guard 8102 using time t21 through time t30, transmits data symbol 8103 using time t31 through time t60, transmits guard 8104 using time t61 through time t70, and transmits data symbol 8105 using time t71 through time t100. FIG. 82 illustrates an example of a frame configuration of transmission signal 108_B illustrated in FIG. 1. In FIG. 82, time is represented on the horizontal axis (accordingly, this relates to a single-carrier scheme signal).
As illustrated in FIG. 82, in transmission signal 108_B, the base station or AP transmits preamble 8201 from time t1 to time t20, transmits guard 8202 using time t21 through time t30, transmits data symbol 8203 using time t31 through time t60, transmits guard 8204 using time t61 through time t70, and transmits data symbol 8205 using time t71 through time t100. Note that preambles 8101 and 8201 are symbols for channel estimation by the terminal, which is the communication partner of the base station or AP, and, for example, the mapping method is PSK (phase shift keying) known to the base station and terminal. Preambles 8101 and 8201 are transmitted at the same time using the same frequency. Guards 8102 and 8202 are symbols that are inserted upon generation of single-carrier scheme modulated signals. Guards 8102 and 8202 are transmitted at the same time using the same frequency. Data symbols 8103 and 8203 are data symbols for the base station or AP to transmit data to the terminal. Data symbols 8103 and 8203 are transmitted at the same time using the same frequency. Guards 8104 and 8204 are symbols that are inserted upon generation of single-carrier scheme modulated signals. Guards 8104 and 8204 are transmitted at the same time using the same frequency. Data symbols 8105 and 8205 are data symbols for the base station or AP to transmit data to the terminal. Data symbols 8105 and 8205 are transmitted at the same time using the same frequency. Similar to Embodiment 1, the base station or AP generates mapped signal s1(t) and mapped signal s2(t). When data symbols 8103 and 8105 include only mapped signal s1(t), data symbols 8203 and 8205 include only mapped signal s2(t). Moreover, when data symbols 8103 and 8105 include only mapped signal s2(t), data symbols 8203 and 8205 include only mapped signal s1(t). When data symbols 8103 and 8105 include both mapped signal s1(t) and mapped signal s2(t), data symbols 8203 and 8205 include both mapped signal s1(t) and mapped signal s2(t). As this has already been described in, for example, Embodiment 1, detailed description will be omitted. For example, the configuration of signal processor 106 illustrated in FIG. 1 is as illustrated in FIG. 2. Hereinafter, two suitable examples of when a single-carrier scheme is used will be given.

Suitable Example 1

As a first measure in the first example, a phase change is implemented in phase changer 205B, and a phase change is not implemented in phase changer 209B. Note that control of this is performed by control signal 200. Here, the signal corresponding to transmission signal 108_A in FIG. 1 is signal 208A in FIG. 2, and the signal corresponding to transmission signal 108_B in FIG. 1 is signal 210B in FIG. 2. As a second measure in the first example, a phase change is implemented in phase changer 205B, and phase changer 209B is omitted. Here, the signal corresponding to transmission signal 108_A in FIG. 1 is signal 208A in FIG. 2, and the signal corresponding to transmission signal 108_B in FIG. 1 is signal 208B in FIG. 2. In suitable Example 1, either one of the first and second measures may be implemented. Next, operations performed by phase changer 205B will be described. Similar to the description given in Embodiment 1, in phase changer 205B, a phase change is implemented on a data symbol. Similar to Embodiment 1, the phase change value of symbol number i in phase changer 205B is expressed as y(i). y(i) is given by the following equation.

[Math. 153]
y(i) = e^{jλ(i)}   Equation (153)

In FIG. 81 and FIG. 82, data symbols are present at i=t31, t32, t33 . . . t58, t59, and t60, and at i=t71, t72, t73 . . . t98, t99, and t100.
Here, one important condition is that either one of Equation (154) and Equation (155) is satisfied.

[Math. 154]
π/2 radians < λ(i) − λ(i−1) < π radians   Equation (154)

[Math. 155]
π radians < λ(i) − λ(i−1) < 3π/2 radians   Equation (155)

Note that in Equation (154) and Equation (155), i=t32, t33, t34 . . . t58, t59, and t60, or i=t72, t73, t74 . . . t98, t99, and t100. To rephrase “either one of Equation (154) and Equation (155) is satisfied”: when λ(i) − λ(i−1) is greater than or equal to 0 radians and less than 2π radians, the value is as close to π as possible. Taking into consideration the transmission spectrum, λ(i) − λ(i−1) needs to be a fixed value. As described in other embodiments, in environments in which direct waves are dominant, it is important that λ(i) be switched regularly in order for the reception device in the terminal, which is the communication partner of the base station or AP, to achieve good data reception quality. The cycle of λ(i) may be increased as needed. For example, consider a case in which the cycle is set to 5 or higher. When cycle X = 2×n+1 (note that n is an integer that is greater than or equal to 2), it is sufficient if the following condition is satisfied: when i satisfies i=t32, t33, t34 . . . t58, t59, and t60, or i=t72, t73, t74 . . . t98, t99, and t100, in any instance of i, Equation (156) is satisfied.

[Math. 156]
λ(i) − λ(i−1) = π − π/(2×n+1) radians   Equation (156)

When cycle X = 2×m (note that m is an integer that is greater than or equal to 3), it is sufficient if the following condition is satisfied: when i satisfies i=t32, t33, t34 . . . t58, t59, and t60, or i=t72, t73, t74 . . . t98, t99, and t100, in any instance of i, Equation (157) is satisfied.

[Math. 157]
λ(i) − λ(i−1) = π − π/m radians   Equation (157)

It was stated that “when λ(i) − λ(i−1) is greater than or equal to 0 radians and less than 2π radians, the value is as close to π as possible”. This will be described next. In FIG. 83, solid line 8301 illustrates the spectrum of transmission signal 108_A in FIG. 1 (signal 208A in FIG. 2), on which a phase change is not implemented. In FIG. 83, frequency is represented on the horizontal axis and amplitude is represented on the vertical axis. In phase changer 205B illustrated in FIG. 2, when λ(i) − λ(i−1) is set to π radians and a phase change is implemented, the spectrum of transmission signal 108_B in FIG. 1 is expressed by dotted line 8302 in FIG. 83. As illustrated in FIG. 83, spectrum 8301 and spectrum 8302 effectively partially overlap. When transmission is performed to achieve this state, when the propagation environment between the base station and the terminal, which is the communication partner, is a multi-path environment, the multi-path effect on transmission signal 108_A and the multi-path effect on transmission signal 108_B are different, thereby improving the possibility that spatial diversity can be achieved. The effect of spatial diversity decreases as λ(i) − λ(i−1) nears 0. Accordingly, “when λ(i) − λ(i−1) is greater than or equal to 0 radians and less than 2π radians, the value is as close to π as possible”. However, when a phase change is implemented in phase changer 205B in FIG. 2, as described in the present specification, in an environment in which direct waves are dominant, it is possible to achieve the advantageous effect that data reception quality will improve.
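The conditions in Equations (153) through (157) are simple to realize; the following Python sketch is one illustration (λ(0) = 0 is assumed, and the function is not a normative implementation):

```python
import numpy as np

def phase_change_sequence(num_symbols: int, cycle: int):
    """Return y(i) = exp(j*lambda(i)) for i = 0..num_symbols-1, where the fixed
    increment lambda(i) - lambda(i-1) follows Equation (156) for an odd cycle
    X = 2*n+1 (n >= 2) and Equation (157) for an even cycle X = 2*m (m >= 3)."""
    if cycle % 2 == 1:                        # X = 2*n+1
        delta = np.pi - np.pi / cycle         # pi - pi/(2*n+1)
    else:                                     # X = 2*m
        delta = np.pi - np.pi / (cycle // 2)  # pi - pi/m
    lam = delta * np.arange(num_symbols)      # lambda(0) = 0 assumed
    return np.exp(1j * lam)

y = phase_change_sequence(10, 5)  # cycle 5: increment 4*pi/5; values repeat every 5 symbols
```

With cycle X = 5, for example, the increment is 4π/5 radians, which is both close to π (the diversity consideration above) and periodic with period 5 (the regular switching consideration above).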
Accordingly, when λ(i)−λ(i−1) is set to satisfy the above-described conditions, in a multi-path environment, an environment in which direct waves are dominant, or in both environments, it is possible to achieve a superior advantageous effect, namely that high data reception quality can be achieved by the terminal, which is the communication partner. Suitable Example 2 In Example 2, phase changer205B does not implement a phase change, and phase changer209B does implement a phase change. Note that control of this is performed by control signal200. Here, the signal corresponding to transmission signal108A inFIG.1is signal208A inFIG.2, and the signal corresponding to transmission signal108B inFIG.1is signal210B inFIG.2. Next, operations performed by phase changer209B will be described. In phase changer209B, in the frame configuration illustrated inFIG.82, a phase change is implemented on at least guards8202and8204and data symbols8203and8205. Note that a phase change may or may not be applied to preamble8201. The phase change value of phase changer209B is expressed as g(i). g(i) is applied with the following equation. [Math.158]g⁡(i)=ej⁢ρ⁡(i)Equation⁢(158) InFIG.81andFIG.82, data symbols and guards are present at i=t21, t22, t23 . . . t98, t99, and t100. Here, one important condition is that either one of Equation (159) and Equation (160) is satisfied. [Math.159]π2⁢radians<ρ⁡(i)-ρ⁡(i-1)<π⁢radiansEquation⁢(159)[Math.160]x⁢radians<ρ⁡(i)-ρ⁡(i-1)<3⁢π2⁢radiansEquation⁢(160) Note that in Equation (159) and Equation (160), i=t22, t23, t24 . . . t98, t99, and t100. To rephrase “either one of Equation (159) and Equation (160) is satisfied”, when ρ(i)−ρ(i−1) is greater than or equal to 0 radians and less than 2n radians, the value is as close to n as possible. Taking into consideration the transmission spectrum, ρ(i)−ρ(i−1) need be a fixed value. As described in other embodiments, in environments in which direct waves are dominant, it is important ρ(i) be switched regularly by the reception device in the terminal, which is the communication partner of the base station or AP, in order to achieve good data reception quality. The cycle of ρ(i) may be increased as needed. For example, consider a case in which the cycle is set to 5 or higher. When cycle X=2×n+1 (note that n is an integer that is greater than or equal to 2), it is sufficient if the following conditions are satisfied. When i satisfies i=t22, t23, t24 . . . t98, t99, t100, in any instance of i, Equation (161) is satisfied. [Math.161]ρ⁡(i)-ρ⁡(i-1)=π-π2×n+1⁢radiansEquation⁢(161) When cycle X=2×m (note that m is an integer that is greater than or equal to 3), it is sufficient if the following conditions are satisfied. When i satisfies i=t22, t23, t24 . . . t98, t99, t100, in any instance of i, Equation (162) is satisfied. [Math.162]ρ⁡(i)-ρ⁡(i-1)=π-πm⁢radiansEquation⁢(162) It was stated that “when ρ(i)−ρ(i−1) is greater than or equal to 0 radians and less than 2π radians, the value is as close to n as possible”. This will be described next. InFIG.83, a phase change is not implemented, that is to say, the spectrum of transmission signal108A inFIG.1(signal208A inFIG.2) is illustrated by solid line8301inFIG.83. InFIG.83, frequency is represented on the horizontal axis and amplitude is represented on the vertical axis. In phase changer209B illustrated inFIG.2, when ρ(i)−ρ(i−1) is set to n radians and a phase change is implemented, the spectrum of transmission signal108B inFIG.1is expressed by dotted line8302inFIG.83. 
As illustrated in FIG. 83, spectrum 8301 and spectrum 8302 effectively partially overlap. When transmission is performed to achieve this state, when the propagation environment between the base station and the terminal, which is the communication partner, is a multi-path environment, the multi-path effect on transmission signal 108_A and the multi-path effect on transmission signal 108_B are different, thereby improving the possibility that spatial diversity can be achieved. The effect of spatial diversity decreases as ρ(i) − ρ(i−1) nears 0. Accordingly, “when ρ(i) − ρ(i−1) is greater than or equal to 0 radians and less than 2π radians, the value is as close to π as possible”. However, when a phase change is implemented in phase changer 209B in FIG. 2, as described in the present specification, in an environment in which direct waves are dominant, it is possible to achieve the advantageous effect that data reception quality will improve. Accordingly, when ρ(i) − ρ(i−1) is set to satisfy the above-described conditions, in a multi-path environment, an environment in which direct waves are dominant, or in both environments, it is possible to achieve a superior advantageous effect, namely that high data reception quality can be achieved by the terminal, which is the communication partner. By setting the phase change value as described in the present embodiment, in both an environment including multiple paths and an environment in which direct waves are dominant, it is possible to achieve the advantageous effect of an improvement in data reception quality in the terminal, which is the communication partner. Note that one conceivable configuration for the reception device in the terminal is a configuration like the one illustrated in FIG. 8, for example. However, as the operations illustrated in FIG. 8 have already been described in other embodiments, description will be omitted. There are many methods for generating single-carrier scheme modulated signals, and this embodiment can be implemented with any of them. Examples of single-carrier schemes include DFT (Discrete Fourier Transform)-Spread OFDM (Orthogonal Frequency Division Multiplexing), Trajectory Constrained DFT-Spread OFDM, OFDM based SC (Single Carrier), SC (Single Carrier)-FDMA (Frequency Division Multiple Access), and Guard interval DFT-Spread OFDM. Moreover, the phase change method according to this embodiment achieves the same advantageous effects even when applied to a multi-carrier scheme such as OFDM. Note that when applied to a multi-carrier scheme, symbols may be arranged along the time axis, along the frequency axis (carrier axis), or along both the time and frequency axes. This is also explained in other embodiments.

Embodiment B2

In this embodiment, preferable examples of the precoding method used in the transmission device in the base station or AP will be given. In this embodiment, a case in which the base station or AP and the terminal communicate with each other will be supposed. Here, one example of the configuration of the transmission device in the base station or AP is as illustrated in FIG. 1. Since this configuration has been described in other embodiments, repeated description will be omitted.
Examples of the configuration of signal processor 106 in FIG. 1 are illustrated in FIG. 2, FIG. 18, FIG. 19, FIG. 20, FIG. 21, FIG. 22, FIG. 28, FIG. 29, FIG. 30, FIG. 31, FIG. 32, and FIG. 33, and examples of configurations including the stages before and after weighting synthesizer 203 are illustrated in FIG. 59, FIG. 60, FIG. 61, FIG. 62, FIG. 63, FIG. 64, FIG. 65, FIG. 66, and FIG. 67. In this embodiment, preferable examples will be given of the weighting synthesis method used in weighting synthesizer 203 based on the modulation scheme (set) of mapped signal 201A (s1(t)) and mapped signal 201B (s2(t)) in FIG. 2, FIG. 18, FIG. 19, FIG. 20, FIG. 21, FIG. 22, FIG. 28, FIG. 29, FIG. 30, FIG. 31, FIG. 32, FIG. 33, FIG. 59, FIG. 60, FIG. 61, FIG. 62, FIG. 63, FIG. 64, FIG. 65, FIG. 66, and FIG. 67. As a first example, the precoding method used in weighting synthesizer 203 when mapped signal 201A (s1(t)) is BPSK (Binary Phase Shift Keying) and mapped signal 201B (s2(t)) is BPSK, or when mapped signal 201A (s1(t)) is π/2 shift BPSK and mapped signal 201B (s2(t)) is π/2 shift BPSK, will be described. First, a simple description of BPSK will be given. FIG. 84 illustrates an arrangement of signal points in an in-phase I-quadrature Q plane in the case of BPSK. In FIG. 84, 8401 and 8402 indicate signal points. For example, at symbol number i=0, when “x0=0” is transmitted in a BPSK symbol, the signal point is 8401, i.e., I=z, Q=0. Note that z is a real number that is greater than 0. When “x0=1” is transmitted in a BPSK symbol, the signal point is 8402, i.e., I=−z, Q=0. However, the relationship between x0 and the signal points is not limited to the example illustrated in FIG. 84. Next, a simple description of π/2 shift BPSK will be given. The symbol number is expressed as i. Note that i is an integer. When symbol number i is an odd number, the signal points are arranged as illustrated in FIG. 84. When symbol number i is an even number, the signal points are arranged as illustrated in FIG. 85. However, the relationship between x0 and the signal points is not limited to the examples illustrated in FIG. 84 and FIG. 85. Next, FIG. 85 will be described. In FIG. 85, 8501 and 8502 indicate signal points. At symbol number i=1, when “x0=0” is transmitted, the signal point is 8501, i.e., I=0, Q=z. When “x0=1” is transmitted, the signal point is 8502, i.e., I=0, Q=−z. However, the relationship between x0 and the signal points is not limited to the example illustrated in FIG. 85. As a different example of π/2 shift BPSK, when symbol number i is an odd number, the signal points are arranged as illustrated in FIG. 85, and when symbol number i is an even number, the signal points are arranged as illustrated in FIG. 84. However, the relationship between x0 and the signal points is not limited to the examples illustrated in FIG. 84 and FIG. 85. When the configuration of signal processor 106 in FIG. 1 is any one of the configurations illustrated in FIG. 2, FIG. 18, FIG. 19, FIG. 20, FIG. 21, FIG. 22, FIG. 59, or FIG. 60, consider a case in which, for example, precoding matrix F or F(i) used in weighting synthesizer 203 contains only real-valued entries. For example, precoding matrix F is expressed with the following equation.

[Math. 163]
F = \begin{pmatrix} 1 & -1 \\ 1 & 1 \end{pmatrix}   Equation (163)

For example, in the case of BPSK, the signal points of the signal after precoding in the in-phase I-quadrature Q plane include only three points, namely, signal points 8601, 8602, and 8603 illustrated in FIG. 86 (two of the four input combinations overlap at one signal point).
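The three-point collapse described above is easy to verify numerically. The following sketch applies Equation (163) and, for contrast, Equation (164) introduced below to all four BPSK input combinations (z = 1 is assumed) and counts the distinct points in the first output stream:

```python
import numpy as np

F_real = np.array([[1, -1], [1, 1]])    # Equation (163): real-valued entries only
F_cplx = np.array([[1, 1j], [1j, 1]])   # Equation (164): entries include j

bpsk = [1.0, -1.0]                       # BPSK with z = 1 assumed
pairs = np.array([[s1, s2] for s1 in bpsk for s2 in bpsk]).T  # all (s1, s2) inputs

first_real = (F_real @ pairs)[0]         # first output stream under Equation (163)
first_cplx = (F_cplx @ pairs)[0]         # first output stream under Equation (164)

print(len(set(np.round(first_real, 6))))  # 3: two input combinations map to the same point
print(len(set(np.round(first_cplx, 6))))  # 4: all four combinations remain distinguishable
```

Under the real-valued matrix, the combinations (s1, s2) = (1, 1) and (−1, −1) both map to 0 on the first stream, which is exactly the overlap that degrades reception quality when only that stream is received with sufficient power.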
In this state, consider a case in which, as illustrated in FIG. 1, transmission signals 108_A and 108_B are transmitted and, in the terminal, which is the communication partner, the reception power of either transmission signal 108_A or transmission signal 108_B is low. Here, as illustrated in FIG. 86, since there are only three signal points, a problem arises in which data reception quality is poor. Taking this into consideration, a method is proposed in which precoding matrix F is not composed only of real numbers. In one example, precoding matrix F can be applied as follows.

[Math. 164] F = \begin{pmatrix} 1 & j \\ j & 1 \end{pmatrix}   Equation (164), or

[Math. 165] F = \frac{α}{\sqrt{2}} \begin{pmatrix} 1 & j \\ j & 1 \end{pmatrix}   Equation (165), or

[Math. 166] F = \begin{pmatrix} α×1 & α×j \\ α×j & α×1 \end{pmatrix}   Equation (166), or

[Math. 167] F = \begin{pmatrix} j & 1 \\ 1 & j \end{pmatrix}   Equation (167), or

[Math. 168] F = \frac{α}{\sqrt{2}} \begin{pmatrix} j & 1 \\ 1 & j \end{pmatrix}   Equation (168), or

[Math. 169] F = \begin{pmatrix} α×j & α×1 \\ α×1 & α×j \end{pmatrix}   Equation (169), or

[Math. 170] F = \begin{pmatrix} 1 & j \\ 1 & -j \end{pmatrix}   Equation (170), or

[Math. 171] F = \frac{α}{\sqrt{2}} \begin{pmatrix} 1 & j \\ 1 & -j \end{pmatrix}   Equation (171), or

[Math. 172] F = \begin{pmatrix} α×1 & α×j \\ α×1 & -α×j \end{pmatrix}   Equation (172), or

[Math. 173] F = \begin{pmatrix} 1 & -j \\ 1 & j \end{pmatrix}   Equation (173), or

[Math. 174] F = \frac{α}{\sqrt{2}} \begin{pmatrix} 1 & -j \\ 1 & j \end{pmatrix}   Equation (174), or

[Math. 175] F = \begin{pmatrix} α×1 & -α×j \\ α×1 & α×j \end{pmatrix}   Equation (175), or

[Math. 176] F = \begin{pmatrix} j & 1 \\ -j & 1 \end{pmatrix}   Equation (176), or

[Math. 177] F = \frac{α}{\sqrt{2}} \begin{pmatrix} j & 1 \\ -j & 1 \end{pmatrix}   Equation (177), or

[Math. 178] F = \begin{pmatrix} α×j & α×1 \\ -α×j & α×1 \end{pmatrix}   Equation (178), or

[Math. 179] F = \begin{pmatrix} -j & 1 \\ j & 1 \end{pmatrix}   Equation (179), or

[Math. 180] F = \frac{α}{\sqrt{2}} \begin{pmatrix} -j & 1 \\ j & 1 \end{pmatrix}   Equation (180), or

[Math. 181] F = \begin{pmatrix} -α×j & α×1 \\ α×j & α×1 \end{pmatrix}   Equation (181)

Note that α may be a real number, and, alternatively, may be an imaginary number. However, α is not 0 (zero). In weighting synthesizer 203, when precoding is performed using any one of the precoding matrices expressed in Equations (164) through (181), the signal points in the in-phase I-quadrature Q plane of weighting synthesized signals 204A, 204B are arranged like signal points 8701, 8702, 8703, and 8704 illustrated in FIG. 87. Accordingly, when the base station or AP transmits transmission signals 108_A and 108_B and, in the terminal, which is the communication partner, the reception power of either transmission signal 108_A or transmission signal 108_B is low, taking into consideration the state illustrated in FIG. 87, it is possible to achieve the advantageous effect of an improvement in data reception quality by the terminal. Note that in the above description, the configuration of signal processor 106 in the transmission device in FIG. 1 included in the base station or AP is described as being any one of the configurations illustrated in FIG. 2, FIG. 18, FIG. 19, FIG. 20, FIG. 21, FIG. 22, FIG. 59, and FIG. 60, but in phase changer 205A, phase changer 205B, phase changer 209A, and phase changer 209B in FIG. 2, FIG. 18, FIG. 19, FIG. 20, FIG. 21, FIG. 22, FIG. 59, and FIG. 60, a phase change need not be implemented. In that case, a phase change is not implemented on input signals, and the signals are output as-is. For example, in FIG. 2, when phase changer 205B does not implement a phase change, signal 204B becomes signal 206B. When phase changer 209B does not perform a phase change, signal 208B becomes signal 210B. Phase changer 205A, phase changer 205B, phase changer 209A, and/or phase changer 209B may be omitted. For example, in FIG. 2, when phase changer 205B is omitted, input 206B of inserter 207B corresponds to signal 204B. Moreover, when phase changer 209B is omitted, signal 210B corresponds to signal 208B. Next, as a second example, the precoding method used in weighting synthesizer 203 when mapped signal 201A (s1(t)) is QPSK (Quadrature Phase Shift Keying) and mapped signal 201B (s2(t)) is QPSK will be described. First, a simple description of QPSK will be given. FIG. 87 illustrates an arrangement of signal points in an in-phase I-quadrature Q plane in the case of QPSK.
Next, as a second example, the precoding method used in weighting synthesizer 203 when mapped signal 201A (s1(t)) is QPSK (Quadrature Phase Shift Keying) and mapped signal 201B (s2(t)) is QPSK will be described. First, a simple description of QPSK will be given. FIG. 87 illustrates an arrangement of signal points in an in-phase I-quadrature Q plane in the case of QPSK. In FIG. 87, 8701, 8702, 8703, and 8704 indicate signal points. In a QPSK symbol, mapping to any one of signal points 8701, 8702, 8703, and 8704 is performed on the two-bit input x0, x1 to obtain in-phase component I and quadrature component Q.

When the configuration of signal processor 106 in FIG. 1 is any one of the configurations illustrated in FIG. 2, FIG. 18 to FIG. 22, FIG. 59, or FIG. 60, for example, the following is applied as the precoding matrix F used in weighting synthesizer 203.

[Math. 182]
F = \begin{pmatrix} 1 & 2 \\ -2 & 1 \end{pmatrix}, or   Equation (182)

[Math. 183]
F = \frac{\beta}{\sqrt{5}} \begin{pmatrix} 1 & 2 \\ -2 & 1 \end{pmatrix}, or   Equation (183)

[Math. 184]
F = \begin{pmatrix} \beta \times 1 & \beta \times 2 \\ -\beta \times 2 & \beta \times 1 \end{pmatrix}, or   Equation (184)

[Math. 185]
F = \begin{pmatrix} 2 & 1 \\ 1 & -2 \end{pmatrix}, or   Equation (185)

[Math. 186]
F = \frac{\beta}{\sqrt{5}} \begin{pmatrix} 2 & 1 \\ 1 & -2 \end{pmatrix}, or   Equation (186)

[Math. 187]
F = \begin{pmatrix} \beta \times 2 & \beta \times 1 \\ \beta \times 1 & -\beta \times 2 \end{pmatrix}, or   Equation (187)

[Math. 188]
F = \begin{pmatrix} 1 & -2 \\ 2 & 1 \end{pmatrix}, or   Equation (188)

[Math. 189]
F = \frac{\beta}{\sqrt{5}} \begin{pmatrix} 1 & -2 \\ 2 & 1 \end{pmatrix}, or   Equation (189)

[Math. 190]
F = \begin{pmatrix} \beta \times 1 & -\beta \times 2 \\ \beta \times 2 & \beta \times 1 \end{pmatrix}, or   Equation (190)

[Math. 191]
F = \begin{pmatrix} -2 & 1 \\ 1 & 2 \end{pmatrix}, or   Equation (191)

[Math. 192]
F = \frac{\beta}{\sqrt{5}} \begin{pmatrix} -2 & 1 \\ 1 & 2 \end{pmatrix}, or   Equation (192)

[Math. 193]
F = \begin{pmatrix} -\beta \times 2 & \beta \times 1 \\ \beta \times 1 & \beta \times 2 \end{pmatrix}, or   Equation (193)

[Math. 194]
F = \begin{pmatrix} 1 & 2 \\ 2 & -1 \end{pmatrix}, or   Equation (194)

[Math. 195]
F = \frac{\beta}{\sqrt{5}} \begin{pmatrix} 1 & 2 \\ 2 & -1 \end{pmatrix}, or   Equation (195)

[Math. 196]
F = \begin{pmatrix} \beta \times 1 & \beta \times 2 \\ \beta \times 2 & -\beta \times 1 \end{pmatrix}, or   Equation (196)

[Math. 197]
F = \begin{pmatrix} 2 & 1 \\ -1 & 2 \end{pmatrix}, or   Equation (197)

[Math. 198]
F = \frac{\beta}{\sqrt{5}} \begin{pmatrix} 2 & 1 \\ -1 & 2 \end{pmatrix}, or   Equation (198)

[Math. 199]
F = \begin{pmatrix} \beta \times 2 & \beta \times 1 \\ -\beta \times 1 & \beta \times 2 \end{pmatrix}, or   Equation (199)

[Math. 200]
F = \begin{pmatrix} -1 & 2 \\ 2 & 1 \end{pmatrix}, or   Equation (200)

[Math. 201]
F = \frac{\beta}{\sqrt{5}} \begin{pmatrix} -1 & 2 \\ 2 & 1 \end{pmatrix}, or   Equation (201)

[Math. 202]
F = \begin{pmatrix} -\beta \times 1 & \beta \times 2 \\ \beta \times 2 & \beta \times 1 \end{pmatrix}, or   Equation (202)

[Math. 203]
F = \begin{pmatrix} 2 & -1 \\ 1 & 2 \end{pmatrix}, or   Equation (203)

[Math. 204]
F = \frac{\beta}{\sqrt{5}} \begin{pmatrix} 2 & -1 \\ 1 & 2 \end{pmatrix}, or   Equation (204)

[Math. 205]
F = \begin{pmatrix} \beta \times 2 & -\beta \times 1 \\ \beta \times 1 & \beta \times 2 \end{pmatrix}   Equation (205)

β may be a real number, and, alternatively, may be an imaginary number. However, β is not 0 (zero). In weighting synthesizer 203, when precoding is performed using any one of the precoding matrices expressed in Equations (182) through (205), the signal points in the in-phase I-quadrature Q plane of weighting synthesized signals 204A, 204B do not overlap and are widely spread apart (see the sketch below). Accordingly, when the base station or AP transmits transmission signals 108_A and 108_B and, in the terminal, which is the communication partner, the reception power of either transmission signal 108_A or transmission signal 108_B is low, taking into consideration the state of the signal points described above, it is possible to achieve the advantageous effect of an improvement in data reception quality by the terminal.

Note that in the above description, the configuration of signal processor 106 in the transmission device in FIG. 1 included in the base station or AP is described as being any one of the configurations illustrated in FIG. 2, FIG. 18 to FIG. 22, FIG. 59, or FIG. 60, but phase changer 205A, phase changer 205B, phase changer 209A, and phase changer 209B in FIG. 2, FIG. 18 to FIG. 22, FIG. 59, or FIG. 60 need not apply a phase change. In that case, a phase change is not implemented on input signals, and the signals are output as-is. For example, in FIG. 2, when phase changer 205B does not implement a phase change, signal 204B corresponds to signal 206B. When phase changer 209B does not implement a phase change, signal 208B corresponds to signal 210B. When phase changer 205A does not implement a phase change, signal 204A corresponds to signal 206A. When phase changer 209A does not implement a phase change, signal 208A corresponds to signal 210A.

Phase changer 205A, phase changer 205B, phase changer 209A, and/or phase changer 209B may be omitted. For example, in FIG. 2, when phase changer 205B is omitted, input 206B of inserter 207B corresponds to signal 204B. When phase changer 209B is omitted, signal 210B corresponds to signal 208B. When phase changer 205A is omitted, input 206A of inserter 207A corresponds to signal 204A. When phase changer 209A is omitted, signal 210A corresponds to signal 208A.
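The claim above, that the precoded QPSK points do not overlap, can also be checked by enumeration. A minimal sketch (not part of the patent text), assuming a unit-amplitude QPSK mapping and Equation (182):

```python
import numpy as np
from itertools import product

# Unit-amplitude QPSK mapping (an assumed example mapping)
qpsk = {(0, 0): 1 + 1j, (0, 1): 1 - 1j, (1, 0): -1 + 1j, (1, 1): -1 - 1j}

# Equation (182)
F = np.array([[1.0, 2.0],
              [-2.0, 1.0]])

points = set()
for bits1, bits2 in product(qpsk, repeat=2):
    s = np.array([qpsk[bits1], qpsk[bits2]])
    z1, _ = F @ s
    points.add((round(z1.real, 9), round(z1.imag, 9)))

print(len(points))  # 16 distinct, non-overlapping points
```

All 16 combinations of the two QPSK symbols map to distinct points whose I and Q components take the values ±1 and ±3, which is the "widely spread apart" arrangement described above.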
When the precoding matrices are set as described above, it is possible to achieve an advantageous effect of an improvement in data reception quality in the terminal, which is the communication partner of the base station or AP. Note that this embodiment may be combined with other embodiments, including Embodiment B1.

Embodiment B3

In this embodiment, the configuration method of the preamble and control information symbol transmitted by the base station or AP, and the operations performed by the terminal, which is the communication partner of the base station or AP, will be described. In Embodiment A8, the base station or AP is described as being able to selectively transmit a multi-carrier scheme modulated signal, such as an OFDM modulated signal, and a single-carrier scheme modulated signal (in particular, in the second example). In this embodiment, the configuration method and transmission method of preambles and control information symbols in such a case will be described.

As described in Embodiment A8, the configuration of the transmission device in the base station or AP is the configuration illustrated in FIG. 1 or FIG. 44. However, the transmission device in the base station may be configured so as to include the one error correction encoder illustrated in FIG. 1, and may be configured so as to include the plurality of error correction encoders illustrated in FIG. 44. Radio unit 107_A and radio unit 107_B illustrated in FIG. 1 and FIG. 44 have the configuration illustrated in FIG. 55, and are characterized in that they can selectively switch between a single-carrier scheme and an OFDM scheme. Note that since operations pertaining to FIG. 55 have already been described in detail in Embodiment A8, description will be omitted from this embodiment.

FIG. 88 illustrates one example of a frame configuration of a transmission signal transmitted by the base station or AP. Time is represented on the horizontal axis. The base station or AP first transmits preamble 8801, and subsequently transmits control information symbol (header block) 8802 and data symbol 8803. Preamble 8801 is a symbol for the reception device in the terminal, which is the communication partner of the base station or AP, to perform, for example, signal detection of a modulated signal transmitted by the base station or AP, frame synchronization, time synchronization, frequency synchronization, frequency offset estimation, and/or channel estimation. For example, preamble 8801 is configured as a PSK symbol known to the base station and terminal.

Control information symbol (also referred to as a header block) 8802 is a symbol for transmitting control information related to data symbol 8803, and includes, for example, information on the transmission method of data symbol 8803, such as information on whether the transmission method is a single-carrier scheme or an OFDM scheme, information on whether the transmission method is single stream transmission or multi-stream transmission, information on the modulation scheme, and/or information on the error correction encoding method used upon generating the data symbols (for example, error correction code information, code length information, and information on the coding rate of the error correction code). Moreover, control information symbol (also referred to as a header block) 8802 may include, for example, information on the data length to be transmitted.
Data symbol 8803 is a symbol for the base station or AP to transmit data, and its transmission method is switched as described above. Note that FIG. 88 is merely one non-limiting example of a frame configuration. Moreover, not every one of preamble 8801, control information symbol 8802, and data symbol 8803 need be present in the frame. For example, a pilot symbol or reference symbol may be included in the data symbol.

In this embodiment, when a MIMO scheme (multi-stream transmission) and a single-carrier scheme are selected as the transmission method for the data symbol, and signal processor 106 has any one of the configurations illustrated in FIG. 2, FIG. 18 to FIG. 22, FIG. 28 to FIG. 33, and FIG. 59 to FIG. 67, a phase change is not implemented by phase changer 205A, phase changer 205B, phase changer 5901A, and phase changer 5901B. Then, when a MIMO scheme (multi-stream transmission) and an OFDM scheme are selected as the transmission method for the data symbol, and signal processor 106 has any one of the configurations illustrated in FIG. 2, FIG. 18 to FIG. 22, FIG. 28 to FIG. 33, and FIG. 59 to FIG. 67, switching can be performed for whether a phase change is implemented by phase changer 205A, phase changer 205B, phase changer 5901A, and phase changer 5901B.

Next, information v1, v2, v3, and v4 included in control information symbol (header block) 8802 illustrated in FIG. 88 and transmitted by the base station or AP will be described.

TABLE 8

v1    transmission method
0     single-carrier scheme
1     OFDM scheme

Interpretation of Table 8 is as follows. When the transmission scheme of data symbol 8803 in FIG. 88 is a single-carrier scheme, v1 is set to 0 (v1 = 0), and the base station or AP transmits v1. When the transmission scheme of data symbol 8803 in FIG. 88 is an OFDM scheme, v1 is set to 1 (v1 = 1), and the base station or AP transmits v1.

TABLE 9

v2    stream(s) to be transmitted
0     single stream
1     plural streams (MIMO)

Interpretation of Table 9 is as follows. When single stream transmission is to be used upon transmitting data symbol 8803 illustrated in FIG. 88, v2 is set to 0 (v2 = 0), and the base station or AP transmits v2. When a plurality of modulated signals are to be transmitted at the same frequency and time using a plurality of antennas upon transmitting data symbol 8803 illustrated in FIG. 88, v2 is set to 1 (v2 = 1), and the base station or AP transmits v2. However, in Table 9, the meaning of v2 = 1 may be interpreted as "transmission other than single stream transmission".

Moreover, a configuration method of information that can be interpreted the same as in Table 9 includes a method of preparing a plurality of bits and transmitting information on the number of transmission streams. For example, when two bits v21 and v22 are prepared: when v21 = 0 and v22 = 0, the base station or AP transmits a single stream; when v21 = 1 and v22 = 0, the base station or AP transmits two streams; when v21 = 0 and v22 = 1, the base station or AP transmits four streams; and when v21 = 1 and v22 = 1, the base station or AP transmits eight streams. Then, the base station or AP transmits v21 and v22 as control information (a sketch of this mapping is given below).
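As a minimal illustration (not part of the patent text; the names are illustrative assumptions), the v21/v22 signaling described above can be expressed as a lookup table:

```python
# Number of transmission streams signaled by the (v21, v22) bit pair,
# per the example in the text (illustrative helper, not patent text).
STREAMS_BY_V21_V22 = {
    (0, 0): 1,  # single stream
    (1, 0): 2,  # two streams
    (0, 1): 4,  # four streams
    (1, 1): 8,  # eight streams
}

def num_streams(v21: int, v22: int) -> int:
    return STREAMS_BY_V21_V22[(v21, v22)]

assert num_streams(0, 1) == 4
```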
TABLE 10

v3    phase changer operation
0     phase change not implemented cyclically/regularly (OFF)
1     phase change implemented cyclically/regularly (ON)

Interpretation of Table 10 is as follows. When a plurality of modulated signals are transmitted at the same frequency and time using a plurality of antennas upon transmitting data symbol 8803 illustrated in FIG. 88, signal processor 106 has any one of the configurations illustrated in FIG. 2, FIG. 18 to FIG. 22, FIG. 28 to FIG. 33, and FIG. 59 to FIG. 67, and a phase change is not implemented in phase changer 205A, phase changer 205B, phase changer 5901A, and phase changer 5901B, v3 is set to 0 (v3 = 0), and the base station or AP transmits v3. When a plurality of modulated signals are transmitted at the same frequency and time using a plurality of antennas upon transmitting data symbol 8803 illustrated in FIG. 88, signal processor 106 has any one of the configurations illustrated in FIG. 2, FIG. 18 to FIG. 22, FIG. 28 to FIG. 33, and FIG. 59 to FIG. 67, and a phase change is implemented in phase changer 205A, phase changer 205B, phase changer 5901A, and phase changer 5901B, v3 is set to 1 (v3 = 1), and the base station or AP transmits v3.

TABLE 11

v4    precoding method when phase change is implemented cyclically/regularly
0     use precoding matrix #1
1     use precoding matrix #2

Interpretation of Table 11 is as follows. When a plurality of modulated signals are transmitted at the same frequency and time using a plurality of antennas upon transmitting data symbol 8803 illustrated in FIG. 88, signal processor 106 has any one of the configurations illustrated in FIG. 2, FIG. 18 to FIG. 22, FIG. 28 to FIG. 33, and FIG. 59 to FIG. 67, and a phase change is implemented in phase changer 205A, phase changer 205B, phase changer 5901A, and phase changer 5901B, if precoding is to be performed using precoding matrix #1 in weighting synthesizer 203, v4 is set to 0 (v4 = 0), and the base station or AP transmits v4; if precoding is to be performed using precoding matrix #2 in weighting synthesizer 203, v4 is set to 1 (v4 = 1), and the base station or AP transmits v4.

Hereinbefore, v1, v2 (or v21 and v22), v3, and v4 have been described (a sketch of how these fields might be set is given below).
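The following is a minimal sketch (not part of the patent text) of the base-station-side setting of v1 through v4 per Tables 8 to 11; the function and parameter names are illustrative assumptions:

```python
def encode_control_bits(ofdm: bool, multi_stream: bool,
                        phase_change: bool, matrix2: bool) -> dict:
    """Set v1-v4 per Tables 8-11 (illustrative; names are assumptions).

    ofdm:         True -> OFDM scheme, False -> single-carrier (Table 8)
    multi_stream: True -> plural streams (MIMO), False -> single stream (Table 9)
    phase_change: phase change implemented cyclically/regularly (Table 10)
    matrix2:      precoding matrix #2 (True) or #1 (False) (Table 11)
    """
    v1 = 1 if ofdm else 0
    v2 = 1 if multi_stream else 0
    # With single-carrier MIMO transmission no phase change is implemented,
    # so v3/v4 carry no information there ("null" per the text; the null
    # bits may be set to either 0 or 1 -- 0 is used here).
    v3 = 1 if (ofdm and multi_stream and phase_change) else 0
    v4 = 1 if (v3 == 1 and matrix2) else 0
    return {"v1": v1, "v2": v2, "v3": v3, "v4": v4}

print(encode_control_bits(ofdm=True, multi_stream=True,
                          phase_change=True, matrix2=False))
# {'v1': 1, 'v2': 1, 'v3': 1, 'v4': 0}
```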
Hereinafter, details regarding v3 and v4 in particular will be described.

As described above, when a MIMO scheme (multi-stream transmission) and a single-carrier scheme are selected as the transmission method for the data symbol, and signal processor 106 has any one of the configurations illustrated in FIG. 2, FIG. 18 to FIG. 22, FIG. 28 to FIG. 33, and FIG. 59 to FIG. 67, a phase change is not implemented by phase changer 205A, phase changer 205B, phase changer 5901A, and phase changer 5901B. Accordingly, when the base station or AP sets v1 to 0 (v1 = 0) and the transmission scheme used for the data symbol in FIG. 88 is a single-carrier scheme, regardless of whether v2 indicates 0 or 1, the information on v3 is null (v3 may be set to 0 or may be set to 1). In this case, when the data symbol in FIG. 88 is a single stream modulated signal, or when signal processor 106 has any one of the configurations illustrated in FIG. 2, FIG. 18 to FIG. 22, FIG. 28 to FIG. 33, and FIG. 59 to FIG. 67 and a plurality of modulated signals are transmitted using a MIMO scheme, a phase change is not implemented by phase changer 205A, phase changer 205B, phase changer 5901A, and phase changer 5901B. Note that the base station or AP may have a configuration in which phase changer 205A, phase changer 205B, and phase changer 5901A are omitted.

On the other hand, when a MIMO scheme (multi-stream transmission) and an OFDM scheme are selected as the transmission method for the data symbol, and signal processor 106 has any one of the configurations illustrated in FIG. 2, FIG. 18 to FIG. 22, FIG. 28 to FIG. 33, and FIG. 59 to FIG. 67, switching can be performed for whether a phase change is implemented by phase changer 205A, phase changer 205B, phase changer 5901A, and phase changer 5901B.

Accordingly, when the base station or AP sets v1 to 1 (v1 = 1), the transmission scheme of the data symbol in FIG. 88 is OFDM, v2 is set to 0 (v2 = 0) (or v21 = 0 and v22 = 0), and data symbol 8803 in FIG. 88 is transmitted as a single stream, the information on v3 is null (v3 may be set to 0 or 1) (here, the base station or AP transmits a single stream modulated signal).

When the base station or AP sets v1 to 1 (v1 = 1), the transmission scheme of the data symbol in FIG. 88 is OFDM, v2 is set to 1 (v2 = 1) (or v21 and v22 are set to something other than v21 = 0, v22 = 0), and data symbol 8803 in FIG. 88 is transmitted as a plurality of modulated signals at the same frequency and time using a plurality of antennas, the information on v3 is valid, provided that the base station or AP supports phase changes and that the terminal, which is the communication partner of the base station or AP, is capable of reception even when a phase change has been performed. Then, when the setting for v3 is valid, if the base station or AP does not implement a phase change in phase changer 205A, phase changer 205B, phase changer 5901A, or phase changer 5901B, v3 is set to 0 (v3 = 0), and the base station or AP transmits v3. If the base station or AP does implement a phase change in phase changer 205A, phase changer 205B, phase changer 5901A, and/or phase changer 5901B, v3 is set to 1 (v3 = 1), and the base station or AP transmits v3.

Note that since the determination of whether the terminal, which is the communication partner of the base station or AP, is capable of reception even when a phase change is implemented has already been described in another embodiment, repeated description will be omitted in this embodiment. Moreover, when the base station or AP does not support implementation of a phase change, the base station or AP does not include phase changer 205A, phase changer 205B, phase changer 5901A, or phase changer 5901B.

Next, v4 will be described.
As described above, when a MIMO scheme (multi-stream transmission) and a single-carrier scheme are selected as the transmission method for the data symbol, and signal processor 106 has any one of the configurations illustrated in FIG. 2, FIG. 18 to FIG. 22, FIG. 28 to FIG. 33, and FIG. 59 to FIG. 67, a phase change is not implemented by phase changer 205A, phase changer 205B, phase changer 5901A, and phase changer 5901B. Accordingly, when the base station or AP sets v1 to 0 (v1 = 0) and the transmission scheme used for the data symbol in FIG. 88 is a single-carrier scheme, regardless of whether v2 indicates 0 or 1, the information on v4 is null (v4 may be set to 0 or may be set to 1). In this case, when the data symbol in FIG. 88 is a single-carrier scheme modulated signal, or when signal processor 106 has any one of the configurations illustrated in FIG. 2, FIG. 18 to FIG. 22, FIG. 28 to FIG. 33, and FIG. 59 to FIG. 67 and a plurality of modulated signals are transmitted using a MIMO scheme, a phase change is not implemented by phase changer 205A, phase changer 205B, phase changer 5901A, and phase changer 5901B. Note that the base station or AP may have a configuration in which phase changer 205A, phase changer 205B, and phase changer 5901A are omitted.

On the other hand, when a MIMO scheme (multi-stream transmission) and an OFDM scheme are selected as the transmission method for the data symbol, and signal processor 106 has any one of the configurations illustrated in FIG. 2, FIG. 18 to FIG. 22, FIG. 28 to FIG. 33, and FIG. 59 to FIG. 67, switching can be performed for whether a phase change is implemented by phase changer 205A, phase changer 205B, phase changer 5901A, and phase changer 5901B.

Accordingly, when the base station or AP sets v1 to 1 (v1 = 1), the transmission scheme of the data symbol in FIG. 88 is OFDM, v2 is set to 0 (v2 = 0) (or v21 = 0 and v22 = 0), and data symbol 8803 in FIG. 88 is transmitted as a single stream, the information on v4 is null (v4 may be set to 0 or 1) (here, the base station or AP transmits a single stream modulated signal).

When the base station or AP sets v1 to 1 (v1 = 1), the transmission scheme of the data symbol in FIG. 88 is OFDM, v2 is set to 1 (v2 = 1) (or v21 and v22 are set to something other than v21 = 0, v22 = 0), and data symbol 8803 in FIG. 88 is transmitted as a plurality of modulated signals at the same frequency and time using a plurality of antennas, there is a possibility that the information on v4 is valid, provided that the base station or AP supports phase changes and that the terminal, which is the communication partner of the base station or AP, is capable of reception even when a phase change has been performed. When the base station or AP does not implement a phase change in phase changer 205A, phase changer 205B, phase changer 5901A, and/or phase changer 5901B, v4 is null and may be set to 0 or 1 (and the base station or AP transmits the v4 information). When the base station or AP does implement a phase change in phase changer 205A, phase changer 205B, phase changer 5901A, and/or phase changer 5901B, the v4 information is valid: in weighting synthesizer 203, if precoding is to be performed using precoding matrix #1, v4 is set to 0 (v4 = 0), and the base station or AP transmits v4; if precoding is to be performed using precoding matrix #2, v4 is set to 1 (v4 = 1), and the base station or AP transmits v4. The conditions under which v3 and v4 carry valid information are summarized in the sketch below.
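A minimal sketch of these validity conditions (not part of the patent text; the helper names are illustrative assumptions):

```python
def v3_is_valid(v1: int, v2: int) -> bool:
    """v3 is meaningful only for OFDM (v1 = 1) multi-stream (v2 = 1)
    transmission; otherwise it is null (illustrative helper)."""
    return v1 == 1 and v2 == 1

def v4_is_valid(v1: int, v2: int, v3: int) -> bool:
    """v4 is meaningful only when v3 is valid and a phase change is
    actually implemented (v3 = 1); otherwise it is null."""
    return v3_is_valid(v1, v2) and v3 == 1

assert not v3_is_valid(v1=0, v2=1)      # single-carrier: v3 is null
assert v4_is_valid(v1=1, v2=1, v3=1)    # OFDM MIMO with phase change
```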
Note that since the determination of whether the terminal, which is the communication partner of the base station or AP, is capable of reception even when a phase change is implemented has already been described in another embodiment, repeated description will be omitted in this embodiment. Moreover, when the base station or AP does not support implementation of a phase change, the base station or AP does not include phase changer 205A, phase changer 205B, phase changer 5901A, or phase changer 5901B.

Although an example is given above in which control information symbol 8802 includes information v1, v2, v3, and v4, the base station or AP need not transmit all of information v1, v2, v3, and v4 in control information symbol 8802. For example, when at least some of the signals in preamble 8801 in FIG. 88 differ depending on whether the transmission method of data symbol 8803 is a single-carrier scheme or an OFDM scheme, the base station or AP need not transmit information v1 in the control information symbol. In such cases, based on the signal transmitted as preamble 8801, the terminal determines whether the transmission scheme of data symbol 8803 is a single-carrier scheme or an OFDM scheme.

Note that, even when at least some of the signals in preamble 8801 in FIG. 88 differ depending on whether the transmission method of data symbol 8803 is a single-carrier scheme or an OFDM scheme, the base station or AP may still transmit information v1 in control information symbol 8802. In such cases, based on one or both of (i) the signal transmitted as preamble 8801 and (ii) information v1 included in control information symbol 8802, the terminal determines whether the transmission scheme of data symbol 8803 is a single-carrier scheme or an OFDM scheme.

In the above description, an example is given in which the terminal can determine the information conveyed by information v1 based on a signal other than control information symbol 8802; regarding information v2, v3, and v4 as well, when the terminal can make the determination based on a signal other than control information symbol 8802, information enabling that determination need not be transmitted in control information symbol 8802. However, similar to the example given regarding information v1, even information that the terminal can determine based on a signal other than control information symbol 8802 may additionally be transmitted in control information symbol 8802.

Moreover, for example, when, depending on whether the transmission scheme of data symbol 8803 is a single-carrier scheme or an OFDM scheme, control information symbol 8802 includes other control information whose possible values differ, this other control information may serve as information v1. In such cases, based on this other control information, the terminal determines whether the transmission scheme of data symbol 8803 is a single-carrier scheme or an OFDM scheme.

In the above description, when the transmission device in the base station or AP has any one of the configurations illustrated in FIG. 2, FIG. 18 to FIG. 22, FIG. 28 to FIG. 33, and FIG. 59 to FIG. 67, a phase change need not be implemented in phase changer 209A and phase changer 209B. In that case, a phase change is not implemented on input signals, and the signals are output as-is.
For example, in FIG. 2, when phase changer 209B does not implement a phase change, signal 208B corresponds to signal 210B. Moreover, when phase changer 209A does not implement a phase change, signal 208A corresponds to signal 210A. As another configuration, phase changer 209A and phase changer 209B may be omitted. For example, in FIG. 2, when phase changer 209B is omitted, signal 210B corresponds to signal 208B. When phase changer 209A is omitted, signal 210A corresponds to signal 208A.

Next, operations performed by the reception device of the terminal, which is the communication partner of the base station or AP, will be described. The configuration of the reception device of the terminal is illustrated in FIG. 89. In FIG. 89, components that operate the same as in FIG. 8 share like reference marks. Accordingly, repeated description thereof will be omitted.

Signal detector/synchronizer 8901 receives inputs of baseband signals 804X and 804Y, detects preamble 8801 included in baseband signals 804X and 804Y, performs signal detection, frame synchronization, time synchronization, frequency synchronization, frequency offset estimation, etc., and outputs the result as system control signal 8902. Channel estimation units 805_1 and 807_1 of modulated signal u1 and channel estimation units 805_2 and 807_2 of modulated signal u2 receive an input of system control signal 8902 and, based on system control signal 8902, for example, detect preamble 8801 and perform channel estimation.

Control information decoder (control information detector) 809 receives inputs of baseband signals 804X and 804Y and system control signal 8902, detects control information symbol (header block) 8802 illustrated in FIG. 88 and included in baseband signals 804X and 804Y, performs demodulation and decoding to obtain control information, and outputs the result as control signal 810. Then, signal processor 811, radio units 803X and 803Y, antenna unit #X (801X), and antenna unit #Y (801Y) receive an input of control signal 810 and may switch the operations to be performed based on control signal 810. Note that details will be described later.

Control information decoder (control information detector) 809 receives inputs of baseband signals 804X and 804Y and system control signal 8902, detects control information symbol (header block) 8802 illustrated in FIG. 88 and included in baseband signals 804X and 804Y, performs demodulation and decoding, and at least obtains v1 in Table 8, v2 in Table 9, v3 in Table 10, and v4 in Table 11 transmitted by the base station or AP. Hereinafter, a detailed example of operations performed by control information decoder (control information detector) 809 will be given.

Consider a terminal capable of demodulating only a single-carrier scheme modulated signal. In such a case, the terminal determines that the v3 information (v3 bit) obtained by control information decoder (control information detector) 809 is null (the v3 information (v3 bit) is not necessary). Accordingly, since a modulated signal generated by the base station or AP with a phase change implemented in phase changer 205A, phase changer 205B, phase changer 5901A, and/or phase changer 5901B is not transmitted to this terminal, signal processor 811 does not perform the corresponding signal processing, but instead performs demodulation and/or decoding corresponding to signal processing under a different scheme to obtain and output reception data 812.
More specifically, when the terminal receives a signal transmitted from another communications device such as the base station or AP, the terminal determines, based on preamble 8801 and control information symbol 8802, whether data symbol 8803 is an OFDM scheme modulated signal or a single-carrier scheme modulated signal. When it is determined to be an OFDM scheme modulated signal, since the terminal is not functionally equipped to demodulate data symbol 8803, data symbol 8803 is not demodulated. On the other hand, when it is determined to be a single-carrier scheme modulated signal, the terminal demodulates data symbol 8803. Here, the terminal determines a demodulation method for data symbol 8803 based on information obtained by control information decoder (control information detector) 809. Since a phase change is not implemented cyclically/regularly on a single-carrier scheme modulated signal, the terminal uses, among the control information obtained by control information decoder (control information detector) 809, control information excluding at least the bit corresponding to the v3 information to determine the demodulation method for data symbol 8803.

Consider a terminal capable of demodulating only a single stream modulated signal. In such a case, the terminal determines that the v3 information (v3 bit) obtained by control information decoder (control information detector) 809 is null (the v3 information (v3 bit) is not necessary). Accordingly, since a modulated signal generated by the base station or AP with a phase change implemented in phase changer 205A, phase changer 205B, phase changer 5901A, and/or phase changer 5901B is not transmitted to this terminal, signal processor 811 does not perform the corresponding signal processing, but instead performs demodulation and/or decoding corresponding to signal processing under a different scheme to obtain and output reception data 812.

More specifically, when the terminal receives a signal transmitted from another communications device such as the base station or AP, the terminal determines, based on preamble 8801 and control information symbol 8802, whether data symbol 8803 is a single stream modulated signal or a multi-stream modulated signal. When it is determined to be a multi-stream modulated signal, since the terminal is not functionally equipped to demodulate data symbol 8803, data symbol 8803 is not demodulated. On the other hand, when it is determined to be a single stream modulated signal, the terminal demodulates data symbol 8803. Here, the terminal determines a demodulation method for data symbol 8803 based on information obtained by control information decoder (control information detector) 809. Since a phase change is not implemented cyclically/regularly on a single stream modulated signal, the terminal uses, among the control information obtained by control information decoder (control information detector) 809, control information excluding at least the bit corresponding to the v3 information to determine the demodulation method for data symbol 8803.

Even if the base station or AP transmits a modulated signal generated with a phase change implemented in phase changer 205A, phase changer 205B, phase changer 5901A, and/or phase changer 5901B, a terminal that does not support demodulation of such a modulated signal determines that the v3 information (v3 bit) obtained by control information decoder (control information detector) 809 is null (the v3 information (v3 bit) is not necessary).
Accordingly, since a modulated signal generated by the base station or AP with a phase change implemented in phase changer 205A, phase changer 205B, phase changer 5901A, and/or phase changer 5901B is not transmitted to this terminal, signal processor 811 does not perform the corresponding signal processing, but instead performs demodulation and/or decoding corresponding to signal processing under a different scheme to obtain and output reception data 812.

More specifically, when the terminal receives a signal transmitted from another communications device such as the base station or AP, the terminal demodulates and decodes data symbol 8803 based on preamble 8801 and control information symbol 8802; however, since the terminal does not support demodulation of a modulated signal generated with a phase change implemented in phase changer 205A, phase changer 205B, phase changer 5901A, and/or phase changer 5901B, even if the base station or AP transmits such a modulated signal, a phase change is not implemented cyclically/regularly on the signal the terminal demodulates, and the terminal determines a demodulation method for data symbol 8803 using, from among the control information obtained by control information decoder (control information detector) 809, control information excluding at least the bit corresponding to the v3 information.

When the base station or AP transmits a modulated signal generated with a phase change implemented in phase changer 205A, phase changer 205B, phase changer 5901A, and/or phase changer 5901B, and a terminal that supports demodulation of such a modulated signal determines in control information decoder (control information detector) 809 from v1 that the modulated signal is an OFDM scheme modulated signal, the v3 information (v3 bit) is determined to be valid. Here, control information decoder (control information detector) 809 determines a demodulation method for data symbol 8803 based on control information including the v3 information (v3 bit). Then, signal processor 811 performs demodulation and decoding using a method based on the determined demodulation method.

When the base station or AP transmits a modulated signal generated with a phase change implemented in phase changer 205A, phase changer 205B, phase changer 5901A, and/or phase changer 5901B, and a terminal that supports demodulation of such a modulated signal determines in control information decoder (control information detector) 809 from v1 that the modulated signal is a single-carrier scheme modulated signal, the v3 information (v3 bit) is determined to be null (the v3 information (v3 bit) is not necessary). Here, control information decoder (control information detector) 809 determines a demodulation method for data symbol 8803 using control information excluding at least the bit corresponding to the v3 information. Then, signal processor 811 performs demodulation and decoding using a method based on the determined demodulation method.

When the base station or AP transmits a modulated signal generated with a phase change implemented in phase changer 205A, phase changer 205B, phase changer 5901A, and/or phase changer 5901B, and a terminal that supports demodulation of such a modulated signal determines in control information decoder (control information detector) 809 from v2 (or v21, v22) that the modulated signal is a single stream modulated signal, the v3 information (v3 bit) is determined to be null (the v3 information (v3 bit) is not necessary).
Here, control information decoder (control information detector) 809 determines a demodulation method for data symbol 8803 using control information excluding at least the bit corresponding to the v3 information. Then, signal processor 811 performs demodulation and decoding using a method based on the determined demodulation method.

Consider a terminal capable of demodulating only a single-carrier scheme modulated signal. In such a case, the terminal determines that the v4 information (v4 bit) obtained by control information decoder (control information detector) 809 is null (the v4 information (v4 bit) is not necessary). Accordingly, since a modulated signal generated by the base station or AP with a phase change implemented in phase changer 205A, phase changer 205B, phase changer 5901A, and/or phase changer 5901B is not transmitted to this terminal, signal processor 811 does not perform the corresponding signal processing, but instead performs demodulation and/or decoding corresponding to signal processing under a different scheme to obtain and output reception data 812.

More specifically, when the terminal receives a signal transmitted from another communications device such as the base station or AP, the terminal determines, based on preamble 8801 and control information symbol 8802, whether data symbol 8803 is an OFDM scheme modulated signal or a single-carrier scheme modulated signal. When it is determined to be an OFDM scheme modulated signal, since the terminal is not functionally equipped to demodulate data symbol 8803, data symbol 8803 is not demodulated. On the other hand, when it is determined to be a single-carrier scheme modulated signal, the terminal demodulates data symbol 8803. Here, the terminal determines a demodulation method for data symbol 8803 based on information obtained by control information decoder (control information detector) 809. Since a phase change is not implemented cyclically/regularly on a single-carrier scheme modulated signal, the terminal uses, among the control information obtained by control information decoder (control information detector) 809, control information excluding at least the bit corresponding to (the v3 information and) the v4 information to determine the demodulation method for data symbol 8803.

Consider a terminal capable of demodulating only a single stream modulated signal. In such a case, the terminal determines that the v4 information (v4 bit) obtained by control information decoder (control information detector) 809 is null (the v4 information (v4 bit) is not necessary). Accordingly, since a modulated signal generated by the base station or AP with a phase change implemented in phase changer 205A, phase changer 205B, phase changer 5901A, and/or phase changer 5901B is not transmitted to this terminal, signal processor 811 does not perform the corresponding signal processing, but instead performs demodulation and/or decoding corresponding to signal processing under a different scheme to obtain and output reception data 812.

More specifically, when the terminal receives a signal transmitted from another communications device such as the base station or AP, the terminal determines, based on preamble 8801 and control information symbol 8802, whether data symbol 8803 is a single stream modulated signal or a multi-stream modulated signal. When it is determined to be a multi-stream modulated signal, since the terminal is not functionally equipped to demodulate data symbol 8803, data symbol 8803 is not demodulated. On the other hand, when it is determined to be a single stream modulated signal, the terminal demodulates data symbol 8803.
Here, the terminal determines a demodulation method for data symbol 8803 based on information obtained by control information decoder (control information detector) 809. Since a phase change is not implemented cyclically/regularly on a single stream modulated signal, the terminal uses, among the control information obtained by control information decoder (control information detector) 809, control information excluding at least the bit corresponding to (the v3 information and) the v4 information to determine the demodulation method for data symbol 8803.

Even if the base station or AP transmits a modulated signal generated with a phase change implemented in phase changer 205A, phase changer 205B, phase changer 5901A, and/or phase changer 5901B, a terminal that does not support demodulation of such a modulated signal determines that the v4 information (v4 bit) obtained by control information decoder (control information detector) 809 is null (the v4 information (v4 bit) is not necessary). Accordingly, since a modulated signal generated by the base station or AP with a phase change implemented in phase changer 205A, phase changer 205B, phase changer 5901A, and/or phase changer 5901B is not transmitted to this terminal, signal processor 811 does not perform the corresponding signal processing, but instead performs demodulation and/or decoding corresponding to signal processing under a different scheme to obtain and output reception data 812.

More specifically, when the terminal receives a signal transmitted from another communications device such as the base station or AP, the terminal demodulates and decodes data symbol 8803 based on preamble 8801 and control information symbol 8802; however, since the terminal does not support demodulation of a modulated signal generated with a phase change implemented in phase changer 205A, phase changer 205B, phase changer 5901A, and/or phase changer 5901B, even if the base station or AP transmits such a modulated signal, a phase change is not implemented cyclically/regularly on the signal the terminal demodulates, and the terminal determines a demodulation method for data symbol 8803 using, from among the control information obtained by control information decoder (control information detector) 809, control information excluding at least the bit corresponding to (the v3 information and) the v4 information.

When the base station or AP transmits a modulated signal generated with a phase change implemented in phase changer 205A, phase changer 205B, phase changer 5901A, and/or phase changer 5901B, and a terminal that supports demodulation of such a modulated signal determines in control information decoder (control information detector) 809 from v1 that the modulated signal is an OFDM scheme modulated signal, the v4 information (v4 bit) is determined to be valid. Here, control information decoder (control information detector) 809 determines a demodulation method for data symbol 8803 based on control information including the v4 information (v4 bit). Then, signal processor 811 performs demodulation and decoding using a method based on the determined demodulation method.

When the base station or AP transmits a modulated signal generated with a phase change implemented in phase changer 205A, phase changer 205B, phase changer 5901A, and/or phase changer 5901B, and a terminal that supports demodulation of such a modulated signal determines in control information decoder (control information detector) 809 from v1 that the modulated signal is a single-carrier scheme modulated signal, the v4 information (v4 bit) is determined to be null (the v4 information (v4 bit) is not necessary). Here, control information decoder (control information detector) 809 determines a demodulation method for data symbol 8803 using control information excluding at least the bit corresponding to (the v3 information and) the v4 information. Then, signal processor 811 performs demodulation and decoding using a method based on the determined demodulation method.

When the base station or AP transmits a modulated signal generated with a phase change implemented in phase changer 205A, phase changer 205B, phase changer 5901A, and/or phase changer 5901B, and a terminal that supports demodulation of such a modulated signal determines in control information decoder (control information detector) 809 from v2 (or v21, v22) that the modulated signal is a single stream modulated signal, the v4 information (v4 bit) is determined to be null (the v4 information (v4 bit) is not necessary). Here, control information decoder (control information detector) 809 determines a demodulation method for data symbol 8803 using control information excluding at least the bit corresponding to (the v3 information and) the v4 information. Then, signal processor 811 performs demodulation and decoding using a method based on the determined demodulation method. These determinations are summarized in the sketch below.
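The case analysis above can be condensed into a short decision routine. The following is a minimal sketch (not part of the patent text); the capability flags and names are illustrative assumptions:

```python
def demodulation_plan(caps: dict, v1: int, v2: int, v3: int, v4: int):
    """Decide whether, and with which control bits, a terminal
    demodulates data symbol 8803.

    caps: illustrative capability flags of the terminal, e.g.
          {"ofdm": True, "multi_stream": True, "phase_change": True}
    Returns None when the terminal cannot demodulate the data symbol,
    otherwise the set of control bits it actually uses.
    """
    if v1 == 1 and not caps["ofdm"]:
        return None            # OFDM signal, single-carrier-only terminal
    if v2 == 1 and not caps["multi_stream"]:
        return None            # multi-stream signal, single-stream terminal
    used = {"v1": v1, "v2": v2}
    # v3/v4 matter only for OFDM multi-stream signals received by a
    # terminal that supports demodulation of phase-changed signals;
    # otherwise those bits are treated as null and excluded.
    if v1 == 1 and v2 == 1 and caps["phase_change"]:
        used["v3"] = v3
        if v3 == 1:
            used["v4"] = v4    # precoding matrix #1 or #2
    return used

caps = {"ofdm": True, "multi_stream": True, "phase_change": True}
print(demodulation_plan(caps, v1=1, v2=1, v3=1, v4=0))
# {'v1': 1, 'v2': 1, 'v3': 1, 'v4': 0}
```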
By the base station or AP and the terminal, which is the communication partner of the base station or AP, operating as described in the present embodiment, the base station or AP and the terminal can communicate accurately, and as a result, it is possible to achieve the advantageous effects that data reception quality is improved and data transmission speed is improved. Moreover, when the base station or AP uses an OFDM scheme and implements a phase change upon transmitting a plurality of streams, in an environment in which direct waves are dominant, the terminal, which is the communication partner, can achieve an advantageous effect of an improvement in data reception quality.

Embodiment C1

In this embodiment, an example of a specific phase change method used under a single-carrier (SC) scheme that differs from the example described in Embodiment B1 will be described. In this embodiment, a case in which the base station or AP and the terminal communicate with each other is supposed. Here, one example of the configuration of the transmission device in the base station or AP is as illustrated in FIG. 1. Since this configuration has been described in other embodiments, repeated description will be omitted.

FIG. 81 illustrates an example of a frame configuration of transmission signal 108_A illustrated in FIG. 1. In FIG. 81, time is represented on the horizontal axis (accordingly, this relates to a single-carrier scheme signal). As illustrated in FIG. 81, in transmission signal 108_A, the base station or AP transmits preamble 8101 from time t1 to time t20, transmits guard 8102 using time t21 through time t30, transmits data symbol 8103 using time t31 through time t60, transmits guard 8104 using time t61 through time t70, and transmits data symbol 8105 using time t71 through time t100.
FIG. 82 illustrates an example of a frame configuration of transmission signal 108_B illustrated in FIG. 1. In FIG. 82, time is represented on the horizontal axis (accordingly, this relates to a single-carrier scheme signal). As illustrated in FIG. 82, in transmission signal 108_B, the base station or AP transmits preamble 8201 from time t1 to time t20, transmits guard 8202 using time t21 through time t30, transmits data symbol 8203 using time t31 through time t60, transmits guard 8204 using time t61 through time t70, and transmits data symbol 8205 using time t71 through time t100.

Note that preambles 8101 and 8201 are symbols for channel estimation by the terminal, which is the communication partner of the base station or AP, and, for example, the mapping method is PSK (phase shift keying) known to the base station and terminal. Preambles 8101 and 8201 are transmitted at the same time using the same frequency. Guards 8102 and 8202 are symbols that are inserted upon generation of single-carrier scheme modulated signals; guards 8102 and 8202 are transmitted at the same time using the same frequency. Data symbols 8103 and 8203 are data symbols for the base station or AP to transmit data to the terminal; data symbols 8103 and 8203 are transmitted at the same time using the same frequency. Guards 8104 and 8204 are symbols that are inserted upon generation of single-carrier scheme modulated signals; guards 8104 and 8204 are transmitted at the same time using the same frequency. Data symbols 8105 and 8205 are data symbols for the base station or AP to transmit data to the terminal; data symbols 8105 and 8205 are transmitted at the same time using the same frequency.

Similar to Embodiment 1, the base station or AP generates mapped signal s1(t) and mapped signal s2(t). When data symbols 8103 and 8105 include only mapped signal s1(t), data symbols 8203 and 8205 include only mapped signal s2(t). Moreover, when data symbols 8103 and 8105 include only mapped signal s2(t), data symbols 8203 and 8205 include only mapped signal s1(t). When data symbols 8103 and 8105 include both mapped signal s1(t) and mapped signal s2(t), data symbols 8203 and 8205 include both mapped signal s1(t) and mapped signal s2(t). As this has already been described in, for example, Embodiment 1, detailed description will be omitted.

For example, the configuration of signal processor 106 illustrated in FIG. 1 is as illustrated in FIG. 2. Hereinafter, two suitable examples of when a single-carrier scheme is used will be given.

Suitable Example 1

As a first measure in the first example, a phase change is implemented in phase changer 205B, and a phase change is not implemented in phase changer 209B. Note that control of this is performed by control signal 200. Here, the signal corresponding to transmission signal 108_A in FIG. 1 is signal 208A in FIG. 2, and the signal corresponding to transmission signal 108_B in FIG. 1 is signal 210B in FIG. 2. As a second measure in the first example, a phase change is implemented in phase changer 205B, and phase changer 209B is omitted. Here, the signal corresponding to transmission signal 108_A in FIG. 1 is signal 208A in FIG. 2, and the signal corresponding to transmission signal 108_B in FIG. 1 is signal 208B in FIG. 2. In Suitable Example 1, either one of the first and second measures may be implemented.

Next, operations performed by phase changer 205B will be described. Similar to the description given in Embodiment 1, in phase changer 205B, a phase change is implemented on a data symbol. Similar to Embodiment 1, the phase change value at symbol number i in phase changer 205B is expressed as y(i).
y(i) is applied with the following equation.

[Math. 206]
y(i) = e^{j\lambda(i)}   Equation (206)

In FIG. 81 and FIG. 82, data symbols are present at i = t31, t32, t33, . . . , t58, t59, and t60, and at i = t71, t72, t73, . . . , t98, t99, and t100. Here, one important condition is that either one of Equation (207) and Equation (208) is satisfied.

[Math. 207]
\frac{\pi}{2} \ \text{radians} < \lambda(i) - \lambda(i-1) < \pi \ \text{radians}   Equation (207)

[Math. 208]
\pi \ \text{radians} < \lambda(i) - \lambda(i-1) < \frac{3\pi}{2} \ \text{radians}   Equation (208)

Note that in Equation (207) and Equation (208), i = t32, t33, t34, . . . , t58, t59, and t60, or i = t72, t73, t74, . . . , t98, t99, and t100. To rephrase "either one of Equation (207) and Equation (208) is satisfied": when λ(i) − λ(i−1) is expressed as a value greater than or equal to 0 radians and less than 2π radians, the value should be as close to π as possible. Taking into consideration the transmission spectrum, λ(i) − λ(i−1) needs to be a fixed value. As described in other embodiments, in environments in which direct waves are dominant, it is important that λ(i) be switched regularly in order for the reception device in the terminal, which is the communication partner of the base station or AP, to achieve good data reception quality. The cycle of λ(i) may be increased as needed. For example, consider a case in which the cycle is set to 5 or higher.

When cycle X = 2×n + 1 (note that n is an integer that is greater than or equal to 2), it is sufficient if the following condition is satisfied: when i satisfies i = t32, t33, t34, . . . , t58, t59, and t60, or i = t72, t73, t74, . . . , t98, t99, and t100, in any instance of i, Equation (209) is satisfied.

[Math. 209]
\lambda(i) - \lambda(i-1) = \pi + \frac{\pi}{2 \times n + 1} \ \text{radians}   Equation (209)

When cycle X = 2×m (note that m is an integer that is greater than or equal to 3), it is sufficient if the following condition is satisfied: when i satisfies i = t32, t33, t34, . . . , t58, t59, and t60, or i = t72, t73, t74, . . . , t98, t99, and t100, in any instance of i, Equation (210) is satisfied.

[Math. 210]
\lambda(i) - \lambda(i-1) = \pi + \frac{\pi}{m} \ \text{radians}   Equation (210)

(A numerical check of this cycle behavior is given at the end of this example.)

It was stated that "when λ(i) − λ(i−1) is expressed as a value greater than or equal to 0 radians and less than 2π radians, the value should be as close to π as possible". This will be described next. The spectrum of transmission signal 108_A in FIG. 1 (signal 208A in FIG. 2) when a phase change is not implemented is illustrated by solid line 8301 in FIG. 83. In FIG. 83, frequency is represented on the horizontal axis and amplitude is represented on the vertical axis. In phase changer 205B illustrated in FIG. 2, when λ(i) − λ(i−1) is set to π radians and a phase change is implemented, the spectrum of transmission signal 108_B in FIG. 1 is expressed by dotted line 8302 in FIG. 83. As illustrated in FIG. 83, spectrum 8301 and spectrum 8302 only partially overlap. When transmission is performed so as to achieve this state, and the propagation environment between the base station and the terminal, which is the communication partner, is a multi-path environment, the multi-path effect on transmission signal 108_A and the multi-path effect on transmission signal 108_B are different, thereby improving the possibility that spatial diversity can be achieved. The effect of spatial diversity decreases as λ(i) − λ(i−1) nears 0. Accordingly, "when λ(i) − λ(i−1) is expressed as a value greater than or equal to 0 radians and less than 2π radians, the value should be as close to π as possible". Moreover, when a phase change is implemented in phase changer 205B in FIG. 2, as described in the present specification, in an environment in which direct waves are dominant, it is possible to achieve the advantageous effect that data reception quality will improve. Accordingly, when λ(i) − λ(i−1) is set so as to satisfy the above-described conditions, in a multi-path environment, in an environment in which direct waves are dominant, or in both environments, it is possible to achieve a superior advantageous effect, namely that high data reception quality can be achieved by the terminal, which is the communication partner.
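As the numerical check promised above (not part of the patent text), the following sketch generates y(i) per Equation (209), assuming λ(0) = 0, and confirms that the phase change values repeat with cycle X = 2n + 1:

```python
import numpy as np

def phase_change_values(n: int, num_symbols: int) -> np.ndarray:
    """y(i) = exp(j*lam(i)) with lam(i) - lam(i-1) = pi + pi/(2n+1)
    (Equation (209)); lam(0) = 0 is an assumed starting value."""
    delta = np.pi + np.pi / (2 * n + 1)
    lam = delta * np.arange(num_symbols)
    return np.exp(1j * lam)

y = phase_change_values(n=2, num_symbols=10)   # cycle X = 2*2 + 1 = 5
print(np.allclose(y[:5], y[5:]))               # True: repeats every 5 symbols
print(len({round(v, 6) for v in np.angle(y[:5]) % (2 * np.pi)}))  # 5 values
```

With n = 2, the per-symbol difference π + π/5 walks the phase through five distinct values before returning to the start, matching cycle X = 5.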
Suitable Example 2

In Example 2, phase changer 205B does not implement a phase change, and phase changer 209B does implement a phase change. Note that control of this is performed by control signal 200. Here, the signal corresponding to transmission signal 108_A in FIG. 1 is signal 208A in FIG. 2, and the signal corresponding to transmission signal 108_B in FIG. 1 is signal 210B in FIG. 2.

Next, operations performed by phase changer 209B will be described. In phase changer 209B, in the frame configuration illustrated in FIG. 82, a phase change is implemented on at least guards 8202 and 8204 and data symbols 8203 and 8205. Note that a phase change may or may not be applied to preamble 8201. The phase change value of phase changer 209B is expressed as g(i), and g(i) is applied with the following equation.

[Math. 211]
g(i) = e^{j\rho(i)}   Equation (211)

In FIG. 81 and FIG. 82, data symbols and guards are present at i = t21, t22, t23, . . . , t98, t99, and t100. Here, one important condition is that either one of Equation (212) and Equation (213) is satisfied.

[Math. 212]
\frac{\pi}{2} \ \text{radians} < \rho(i) - \rho(i-1) < \pi \ \text{radians}   Equation (212)

[Math. 213]
\pi \ \text{radians} < \rho(i) - \rho(i-1) < \frac{3\pi}{2} \ \text{radians}   Equation (213)

Note that in Equation (212) and Equation (213), i = t22, t23, t24, . . . , t98, t99, and t100. To rephrase "either one of Equation (212) and Equation (213) is satisfied": when ρ(i) − ρ(i−1) is expressed as a value greater than or equal to 0 radians and less than 2π radians, the value should be as close to π as possible. Taking into consideration the transmission spectrum, ρ(i) − ρ(i−1) needs to be a fixed value. As described in other embodiments, in environments in which direct waves are dominant, it is important that ρ(i) be switched regularly in order for the reception device in the terminal, which is the communication partner of the base station or AP, to achieve good data reception quality. The cycle of ρ(i) may be increased as needed. For example, consider a case in which the cycle is set to 5 or higher.

When cycle X = 2×n + 1 (note that n is an integer that is greater than or equal to 2), it is sufficient if the following condition is satisfied: when i satisfies i = t22, t23, t24, . . . , t98, t99, and t100, in any instance of i, Equation (214) is satisfied.

[Math. 214]
\rho(i) - \rho(i-1) = \pi + \frac{\pi}{2 \times n + 1} \ \text{radians}   Equation (214)

When cycle X = 2×m (note that m is an integer that is greater than or equal to 3), it is sufficient if the following condition is satisfied: when i satisfies i = t22, t23, t24, . . . , t98, t99, and t100, in any instance of i, Equation (215) is satisfied.

[Math. 215]
\rho(i) - \rho(i-1) = \pi + \frac{\pi}{m} \ \text{radians}   Equation (215)

It was stated that "when ρ(i) − ρ(i−1) is expressed as a value greater than or equal to 0 radians and less than 2π radians, the value should be as close to π as possible". This will be described next. The spectrum of transmission signal 108_A in FIG. 1 (signal 208A in FIG. 2) when a phase change is not implemented is illustrated by solid line 8301 in FIG. 83. In FIG. 83, frequency is represented on the horizontal axis and amplitude is represented on the vertical axis. In phase changer 209B illustrated in FIG. 2, when ρ(i) − ρ(i−1) is set to π radians and a phase change is implemented, the spectrum of transmission signal 108_B in FIG. 1 is expressed by dotted line 8302 in FIG. 83.
As illustrated in FIG. 83, spectrum 8301 and spectrum 8302 only partially overlap (a numerical illustration of this frequency shift is given at the end of this embodiment). When transmission is performed so as to achieve this state, and the propagation environment between the base station and the terminal, which is the communication partner, is a multi-path environment, the multi-path effect on transmission signal 108_A and the multi-path effect on transmission signal 108_B are different, thereby improving the possibility that spatial diversity can be achieved. The effect of spatial diversity decreases as ρ(i) − ρ(i−1) nears 0. Accordingly, "when ρ(i) − ρ(i−1) is expressed as a value greater than or equal to 0 radians and less than 2π radians, the value should be as close to π as possible". Moreover, when a phase change is implemented in phase changer 209B in FIG. 2, as described in the present specification, in an environment in which direct waves are dominant, it is possible to achieve the advantageous effect that data reception quality will improve. Accordingly, when ρ(i) − ρ(i−1) is set so as to satisfy the above-described conditions, in a multi-path environment, in an environment in which direct waves are dominant, or in both environments, it is possible to achieve a superior advantageous effect, namely that high data reception quality can be achieved by the terminal, which is the communication partner.

By setting the phase change value as described in the present embodiment, both in an environment including multiple paths and in an environment in which direct waves are dominant, it is possible to achieve the advantageous effect of an improvement in data reception quality in the terminal, which is the communication partner. Note that one conceivable configuration for the reception device in the terminal is a configuration like the one illustrated in FIG. 8, for example. However, as the operations illustrated in FIG. 8 have already been described in other embodiments, description will be omitted.

There are many methods for generating single-carrier scheme modulated signals, and this embodiment can be implemented with any of them. Examples of single-carrier schemes include DFT (Discrete Fourier Transform)-Spread OFDM (Orthogonal Frequency Division Multiplexing), Trajectory Constrained DFT-Spread OFDM, OFDM based SC (Single Carrier), SC (Single Carrier)-FDMA (Frequency Division Multiple Access), and Guard interval DFT-Spread OFDM. Moreover, the phase change method according to this embodiment achieves the same advantageous effects even when applied to a multi-carrier scheme such as OFDM. Note that when applied to a multi-carrier scheme, symbols may be aligned along the temporal axis, may be aligned along the frequency axis (carrier axis), or may be aligned along both the temporal and frequency axes. This is also explained in other embodiments.
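As the numerical illustration promised above (not part of the patent text), the following sketch shows that a fixed per-symbol phase increment of π shifts a single-carrier signal's spectrum by half the symbol rate, which is why spectra 8301 and 8302 only partially overlap:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1024
# Random QPSK single-carrier baseband sequence (illustrative)
bits = rng.integers(0, 2, (N, 2))
sym = ((2 * bits - 1) @ np.array([1.0, 1.0j])) / np.sqrt(2)

# Phase change with rho(i) - rho(i-1) = pi, i.e. g(i) = (-1)^i
g = np.exp(1j * np.pi * np.arange(N))
shifted = sym * g

# Shift theorem: multiplying by exp(j*pi*i) moves the spectrum by
# half the symbol rate (N/2 FFT bins, circularly).
S0 = np.fft.fft(sym)
S1 = np.fft.fft(shifted)
print(np.allclose(S1, np.roll(S0, N // 2)))  # True
```

Because the two transmitted signals then occupy partially different frequency bands, a multi-path channel affects them differently, which is the spatial-diversity argument made above.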
For example, in FIG. 8, when channel estimation units 805_2 and 807_2 for modulated signal u2 do not operate, channel estimation is performed for a single modulated signal, and even with such a configuration, a single stream modulated signal can be received. Accordingly, in the description in the present specification, an embodiment described with reference to FIG. 41 may be replaced with the reception device configuration described above, and can operate in the same manner and thus achieve the same advantageous effects.

Moreover, in the present specification, examples of configurations of a reception capability notification symbol transmitted by the terminal are given in FIG. 38 and FIG. 79. Here, advantageous effects related to the inclusion of a plurality of items of information were described. Hereinafter, a transmission method for the "plurality of items of information" included in the reception capability notification symbol transmitted by the terminal will be described.

Configuration Example 1

For example, from among "information 3601 related to support for demodulation of modulated signals with phase changes", "information 3702 related to support for reception of a plurality of streams", "information 3801 related to supported schemes", "information 3802 related to multi-carrier scheme support", and "information 3803 related to supported error correction encoding scheme" illustrated in FIG. 38, at least two of these items of information are transmitted in the same frame or in the same sub-frame.

Configuration Example 2

For example, from among "information 3601 related to support for demodulation of modulated signals with phase changes", "information 3702 related to support for reception of a plurality of streams", "information 3801 related to supported schemes", "information 3802 related to multi-carrier scheme support", "information 3803 related to supported error correction encoding scheme", and "information 7901 related to supported precoding method" illustrated in FIG. 79, at least two of these items of information are transmitted in the same frame or in the same sub-frame.

Next, "frame" and "sub-frame" will be described. FIG. 80 illustrates an example of a frame configuration. In FIG. 80, time is represented on the horizontal axis. For example, in FIG. 80, the frame includes preamble 8001, control information symbol 8002, and data symbol 8003 (for example, the frame may: include at least preamble 8001; include at least control information symbol 8002; include at least preamble 8001 and data symbol 8003; include at least preamble 8001 and control information symbol 8002; include at least control information symbol 8002 and data symbol 8003; or include at least preamble 8001, control information symbol 8002, and data symbol 8003). The terminal transmits a reception capability notification symbol using any one of preamble 8001, control information symbol 8002, or data symbol 8003. Note that FIG. 80 may be referred to as a sub-frame. FIG. 80 may also be referred to as something other than a frame or sub-frame.

As described above, as a result of the terminal transmitting the at least two items of information included in the reception capability notification symbol, the advantageous effects described in Embodiments A1, A2, A4, A11, etc., can be achieved.
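One hypothetical way to picture Configuration Examples 1 and 2, bundling two or more capability items into a single frame or sub-frame, is the bitfield sketch below; the field widths, layout, and helper names are assumptions made for illustration and are not defined by the specification.

from dataclasses import dataclass

@dataclass
class ReceptionCapability:
    supports_phase_change_demod: bool   # information 3601
    supports_multi_stream: bool         # information 3702
    supported_schemes: int              # information 3801 (assumed 4-bit code)

def pack_capability_symbol(cap: ReceptionCapability) -> int:
    # Pack at least two capability items into one notification field so
    # that they travel together in the same frame or sub-frame.
    bits = int(cap.supports_phase_change_demod)
    bits |= int(cap.supports_multi_stream) << 1
    bits |= (cap.supported_schemes & 0xF) << 2
    return bits

frame_payload = pack_capability_symbol(
    ReceptionCapability(True, True, supported_schemes=0b0011))
print(f"{frame_payload:06b}")  # all items carried in one frame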
Configuration Example 3

For example, from among "information 3601 related to support for demodulation of modulated signals with phase changes", "information 3702 related to support for reception of a plurality of streams", "information 3801 related to supported schemes", "information 3802 related to multi-carrier scheme support", and "information 3803 related to supported error correction encoding scheme" illustrated in FIG. 38, at least two of these items of information are transmitted in the same packet.

Configuration Example 4

For example, from among "information 3601 related to support for demodulation of modulated signals with phase changes", "information 3702 related to support for reception of a plurality of streams", "information 3801 related to supported schemes", "information 3802 related to multi-carrier scheme support", "information 3803 related to supported error correction encoding scheme", and "information 7901 related to supported precoding method" illustrated in FIG. 79, at least two of these items of information are transmitted in the same packet.

Consider the frame illustrated in FIG. 80. Assume the frame: includes at least preamble 8001 and data symbol 8003; includes at least control information symbol 8002 and data symbol 8003; or includes at least preamble 8001, control information symbol 8002, and data symbol 8003. In such cases, there are two types of methods for transmitting packets.

First Method: Data symbol 8003 includes a plurality of packets. In such a case, at least the two items of information included in the reception capability notification symbol are transmitted via data symbol 8003.

Second Method: The packet is transmitted via a plurality of frames of data symbols. In such a case, at least the two items of information included in the reception capability notification symbol are transmitted via a plurality of frames. (A schematic sketch of these two methods is given after the terminology notes below.)

As described above, as a result of the terminal transmitting the at least two items of information included in the reception capability notification symbol, the advantageous effects described in Embodiments A1, A2, A4, A11, etc., can be achieved.

Note that although the terminology "preamble" is used in FIG. 80, this element may be referred to as something else. The "preamble" includes at least one of the following symbols or signals: a symbol or signal for the communication partner to detect a modulated signal; a symbol or signal for the communication partner to perform channel estimation (propagation environment estimation); a symbol or signal for the communication partner to perform time synchronization; a symbol or signal for the communication partner to perform frequency synchronization; and a symbol or signal for the communication partner to perform frequency offset estimation.

Moreover, although the terminology "control information symbol" is used in FIG. 80, this element may be referred to as something else. The "control information symbol" is a symbol that includes at least one of the following items of information: information on the error correction encoding scheme for generating a data symbol; information on the modulation scheme for generating a data symbol; information on the number of symbols in a data symbol; information related to the transmission method of a data symbol; information required for transmitting things other than a data symbol to the communication partner; and information other than a data symbol.
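The First Method and Second Method above can be pictured with the following hypothetical sketch; the packet contents, capacities, and function names are assumptions for illustration only.

def first_method(packets, symbol_capacity):
    # First Method: one data symbol 8003 carries a plurality of packets.
    payload = b"".join(packets)
    assert len(payload) <= symbol_capacity, "all packets share one data symbol"
    return [payload]                       # a single data symbol

def second_method(packet, frame_capacity):
    # Second Method: one packet is split across data symbols of several frames.
    return [packet[i:i + frame_capacity]
            for i in range(0, len(packet), frame_capacity)]

print(first_method([b"cap-info-A", b"cap-info-B"], symbol_capacity=64))
print(second_method(b"one-long-reception-capability-packet", frame_capacity=10))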
Note that the order in which preamble 8001, control information symbol 8002, and data symbol 8003 are transmitted, i.e., the frame configuration method, is not limited to the example illustrated in FIG. 80.

Embodiments A1, A2, A4, A11, etc., describe an example in which the terminal transmits a reception capability notification symbol and the communication partner of the terminal is the base station or AP, but these are non-limiting examples. For example, the base station or AP may transmit a reception capability notification symbol, and the communication partner of the base station or AP may be the terminal. Moreover, the terminal may transmit a reception capability notification symbol and the communication partner of the terminal may be a terminal. Moreover, the base station or AP may transmit a reception capability notification symbol, and the communication partner of the base station or AP may be a base station or AP.

Note that in the phase change processing implemented on a precoded (weighting synthesized) signal, there are instances in which different values are used for the phase change cycle N depending on whether a single-carrier scheme frame is to be transmitted or an OFDM scheme frame is to be transmitted. This is because, for example, when the number of data symbols arranged in a frame differs between a single-carrier scheme and an OFDM scheme, there is a possibility that the preferred phase change cycle differs between a single-carrier scheme and an OFDM scheme. In the above description, a cycle in the phase change processing implemented on a precoded (weighting synthesized) signal is described, but when precoding (weighting synthesis) is not performed, a different value may be used for the cycle in the phase change processing implemented on the mapped signal depending on whether the scheme is a single-carrier scheme or an OFDM scheme.

Embodiment C2

A variation of Embodiment B3 will be described. The configuration method of the preamble and control information symbol transmitted by the base station or AP, and the operations performed by the terminal, which is the communication partner of the base station or AP, will be described.

As described in Embodiment A8, the configuration of the transmission device in the base station or AP is the configuration illustrated in FIG. 1 or FIG. 44. However, the transmission device in the base station may be configured so as to include the one error correction encoder illustrated in FIG. 1, and may be configured so as to include the plurality of error correction encoders illustrated in FIG. 44. Radio unit 107_A and radio unit 107_B illustrated in FIG. 1 and FIG. 44 have the configuration illustrated in FIG. 55, and are characterized in that they can selectively switch between a single-carrier scheme and an OFDM scheme. Note that since operations pertaining to FIG. 55 have already been described in Embodiment A8 in detail, description will be omitted from this embodiment.

FIG. 88 illustrates one example of a frame configuration of a transmission signal transmitted by the base station or AP. Time is represented on the horizontal axis. The base station or AP first transmits preamble 8801, and subsequently transmits control information symbol (header block) 8802 and data symbol 8803.
Preamble 8801 is a symbol for the reception device in the terminal, which is the communication partner of the base station or AP, to perform, for example, signal detection of a modulated signal transmitted by the base station or AP, frame synchronization, time synchronization, frequency synchronization, frequency offset estimation, and/or channel estimation. For example, preamble 8801 is configured as a PSK symbol known to the base station and terminal.

Control information symbol (also referred to as a header block) 8802 is a symbol for transmitting control information related to data symbol 8803, and includes, for example, information on the transmission method of data symbol 8803, such as information on whether the transmission method is a single-carrier scheme or an OFDM scheme, information on whether the transmission method is single stream transmission or multi-stream transmission, information on the modulation scheme, and/or information on the error correction encoding method used upon generating the data symbols (for example, error correction code information, code length information, and information on the coding rate of the error correction code). Moreover, control information symbol (also referred to as a header block) 8802 may include, for example, information on the data length to be transmitted.

Data symbol 8803 is a symbol for the base station or AP to transmit data. Regarding the transmission method, data symbol 8803 is transmitted under either a single-carrier scheme or an OFDM scheme and under either SISO or MIMO transmission, and the modulation scheme and error correction encoding method of data symbol 8803 may be switched. Note that FIG. 88 is merely one non-limiting example of a frame configuration. Moreover, not each of preamble 8801, control information symbol 8802, and data symbol 8803 need be present in the frame. For example, a pilot symbol or reference symbol may be included in the data symbol.

As described in Embodiment B3, in the data symbol, when signal processor 106 includes any one of the configurations illustrated in FIG. 2, FIG. 18 to FIG. 22, FIG. 28 to FIG. 33, and FIG. 59 to FIG. 67, switching can be performed for whether a phase change is implemented by phase changer 205A, phase changer 205B, phase changer 5901A, and phase changer 5901B. Accordingly, information included in control information symbol (header block) 8802 illustrated in FIG. 88 and transmitted by the base station or AP includes the v3 bits illustrated in Table 10 and the v4 bits illustrated in Table 11. Additionally, v5 bits defined as follows are also included in control information symbol (header block) 8802 illustrated in FIG. 88 and transmitted by the base station or AP.

TABLE 12
v5    phase change value when phase change is implemented cyclically/regularly
0     use phase change method #1
1     use phase change method #2

Interpretation of Table 12 is as follows.
When a plurality of modulated signals are transmitted at the same frequency and time using a plurality of antennas upon transmitting data symbol 8803 illustrated in FIG. 88, signal processor 106 has any one of the configurations illustrated in FIG. 2, FIG. 18 to FIG. 22, FIG. 28 to FIG. 33, and FIG. 59 to FIG. 67, and a phase change is implemented, in phase changer 205A, phase changer 205B, phase changer 5901A, and/or phase changer 5901B, on the signals output from weighting synthesizer 203, v5 is set to 0 (v5=0) if the phase change is to be implemented using phase change method #1, and the base station transmits v5.

When a plurality of modulated signals are transmitted at the same frequency and time using a plurality of antennas upon transmitting data symbol 8803 illustrated in FIG. 88, signal processor 106 has any one of the configurations illustrated in FIG. 2, FIG. 18 to FIG. 22, FIG. 28 to FIG. 33, and FIG. 59 to FIG. 67, and a phase change is implemented, in phase changer 205A, phase changer 205B, phase changer 5901A, and/or phase changer 5901B, on the signals output from weighting synthesizer 203, v5 is set to 1 (v5=1) if the phase change is to be implemented using phase change method #2, and the base station transmits v5.

One example will be described using Embodiment B1. As a first example, phase change method #1 is when λ(i)−λ(i−1) indicated in Equation (209) is set as follows.

[MATH. 216] λ(i) − λ(i−1) = 9π/8 radians  Equation (216)

Moreover, phase change method #2 is when λ(i)−λ(i−1) indicated in Equation (209) is set as follows.

[MATH. 217] λ(i) − λ(i−1) = π radians  Equation (217)

As a second example, phase change method #1 is when ρ(i)−ρ(i−1) indicated in Equation (214) is set as follows.

[MATH. 218] ρ(i) − ρ(i−1) = 9π/8 radians  Equation (218)

Moreover, phase change method #2 is when ρ(i)−ρ(i−1) indicated in Equation (214) is set as follows.

[MATH. 219] ρ(i) − ρ(i−1) = π radians  Equation (219)

Note that the schemes for phase change method #1 and phase change method #2 are not limited to the above examples; it is sufficient so long as the phase change methods differ between phase change method #1 and phase change method #2. Moreover, in the above examples, the phase change is implemented in one location, but a phase change may be implemented in two or more phase changers.

In the above examples, phase change method #1 is a method that improves the reception quality of the terminal, which is the communication partner, both in radio wave propagation environments in which direct waves are dominant and in multi-path environments, and phase change method #2 is a method that improves the reception quality of the terminal, which is the communication partner, when the radio wave propagation environment is, in particular, a multi-path environment. Accordingly, by the base station changing the phase change method appropriately for the radio wave propagation environment in accordance with the set value for v5, the terminal, which is the communication partner, is capable of achieving the advantageous effect of improved reception quality.

Hereinafter, an operational example in which the base station transmits v1, v2, v3, and v4 described in Embodiment B3 and transmits the above-described v5 will be given.
For example, in the base station, when MIMO transmission is performed, i.e., when v2 is set to 1 (v2=1), and a phase change is not to be implemented cyclically/regularly, i.e., v3 is set to 0 (v3=0), the v5 information is null (v5 may be set to 0 and may be set to 1).

In the base station, when MIMO transmission is performed, i.e., when v2 is set to 1 (v2=1), and a phase change is to be implemented cyclically/regularly, i.e., v3 is set to 1 (v3=1), the v5 information is valid. Note that v5 may be interpreted as illustrated in Table 12.

Accordingly, when the terminal, which is the communication partner of the base station, obtains v2 and recognizes that v2=0, i.e., that it is single stream transmission, the terminal uses control information excluding at least the bit corresponding to v5, and determines the demodulation method for data symbol 8803. Moreover, when the terminal, which is the communication partner of the base station, obtains v2 and recognizes that v2=1, i.e., that it is MIMO transmission, and obtains v3 and v3=0, i.e., a phase change is not implemented cyclically/regularly, the terminal uses control information excluding at least the bit corresponding to v5, and determines the demodulation method for data symbol 8803. When the terminal, which is the communication partner of the base station, obtains v2 and recognizes that v2=1, i.e., that it is MIMO transmission, and obtains v3 and v3=1, i.e., a phase change is implemented cyclically/regularly, the terminal uses control information including the bit corresponding to v5, and determines the demodulation method for data symbol 8803.

By the base station or AP and the terminal, which is the communication partner of the base station or AP, operating as described in the present embodiment, the base station or AP and the terminal can perform communication accurately, and as a result, it is possible to achieve the advantageous effects that data reception quality is improved and data transmission speed is improved.

Embodiment C3

In this embodiment, a variation of Embodiment C2 will be described. In this embodiment, when a MIMO scheme (multi-stream transmission) and a single-carrier scheme are selected as the transmission method for the data symbol, and signal processor 106 includes any one of the configurations illustrated in FIG. 2, FIG. 18 to FIG. 22, FIG. 28 to FIG. 33, and FIG. 59 to FIG. 67, a phase change is not implemented by phase changer 205A, phase changer 205B, phase changer 5901A, and phase changer 5901B. Then, when a MIMO scheme (multi-stream transmission) and an OFDM scheme are selected as the transmission method for the data symbol, and signal processor 106 includes any one of the configurations illustrated in FIG. 2, FIG. 18 to FIG. 22, FIG. 28 to FIG. 33, and FIG. 59 to FIG. 67, switching can be performed for whether a phase change is implemented by phase changer 205A, phase changer 205B, phase changer 5901A, and phase changer 5901B. How v5 is handled in such situations will be described next.
When a MIMO scheme (multi-stream transmission) and a single-carrier scheme are selected as the transmission method for the data symbol, and signal processor 106 includes any one of the configurations illustrated in FIG. 2, FIG. 18 to FIG. 22, FIG. 28 to FIG. 33, and FIG. 59 to FIG. 67, a phase change is not implemented by phase changer 205A, phase changer 205B, phase changer 5901A, and phase changer 5901B. Accordingly, when the base station or AP sets v1 to 0 (v1=0) and the transmission scheme used for the data symbol in FIG. 88 is a single-carrier scheme (regardless of whether v2 indicates 0 or 1), the information on v5 is null (v5 may be set to 0 and may be set to 1). In this case, when the data symbol in FIG. 88 is a single-carrier scheme modulated signal and signal processor 106 includes any one of the configurations illustrated in FIG. 2, FIG. 18 to FIG. 22, FIG. 28 to FIG. 33, and FIG. 59 to FIG. 67, a phase change is not implemented by phase changer 205A, phase changer 205B, phase changer 5901A, and phase changer 5901B, and a plurality of modulated signals are transmitted using a MIMO scheme. Note that the base station or AP may have a configuration in which phase changer 205A, phase changer 205B, phase changer 5901A, and phase changer 5901B are omitted.

On the other hand, when a MIMO scheme (multi-stream transmission) and an OFDM scheme are selected as the transmission method for the data symbol, and signal processor 106 includes any one of the configurations illustrated in FIG. 2, FIG. 18 to FIG. 22, FIG. 28 to FIG. 33, and FIG. 59 to FIG. 67, switching can be performed for whether a phase change is implemented by phase changer 205A, phase changer 205B, phase changer 5901A, and phase changer 5901B.

Accordingly, when the base station or AP sets v1 to 1 (v1=1), the transmission scheme of the data symbol in FIG. 88 is OFDM, v2 is set to 0 (v2=0) (or v21 and v22 are set to 0 (v21=0, v22=0)), and data symbol 8803 in FIG. 88 is transmitted as a single stream, the information on v5 is null (v5 may be set to 0 or 1) (here, the base station or AP transmits a single stream modulated signal).

When the base station or AP sets v1 to 1 (v1=1), the transmission scheme of the data symbol in FIG. 88 is OFDM, v2 is set to 1 (v2=1) (or v21 and v22 are set to something other than v21=0, v22=0), and data symbol 8803 in FIG. 88 is transmitted as a plurality of modulated signals at the same frequency and time using a plurality of antennas, there is a possibility that the information on v5 is valid, namely that "the base station or AP supports a phase change" and that "reception is possible by the terminal, which is the communication partner of the base station or AP, even when a phase change has been implemented".

When the base station or AP does not implement a phase change in phase changer 205A, phase changer 205B, phase changer 5901A, and/or phase changer 5901B, the v5 information is null, and v5 may be set to 0 or 1 (the base station then transmits the v5 information). When the base station or AP does implement a phase change in phase changer 205A, phase changer 205B, phase changer 5901A, and/or phase changer 5901B, the v5 information is valid, and if the phase change is to be implemented in the phase changer using phase change method #1, v5 is set to 0 (v5=0), and the base station transmits v5.
Moreover, if the phase change is to be implemented in the phase changer using phase change method #2, v5 is set to 1 (v5=1), and the base station transmits v5. Note that since the determination of whether the terminal, which is the communication partner of the base station or AP, is capable of reception even when a phase change is implemented has already been described in another embodiment, repeated description will be omitted in this embodiment. Moreover, when the base station or AP does not support implementation of a phase change, the base station or AP does not include phase changer 205A, phase changer 205B, phase changer 5901A, or phase changer 5901B.

Next, an example of operations performed by the terminal, which is the communication partner of the base station, will be given.

Consider a terminal capable of demodulating only a single-carrier scheme modulated signal. In such a case, the terminal determines that the v5 information (v5 bit) obtained by control information decoder (control information detector) 809 is null (the v5 information (v5 bit) is not necessary). Accordingly, since a modulated signal generated by the base station or AP with a phase change implemented in phase changer 205A, phase changer 205B, phase changer 5901A, and/or phase changer 5901B is not transmitted, signal processor 811 does not perform the corresponding signal processing, but instead performs demodulation and/or decoding corresponding to signal processing under a different scheme to obtain and output reception data 812.

More specifically, when the terminal receives a signal transmitted from another communications device such as the base station or AP, the terminal determines, based on preamble 8801 and control information symbol 8802, whether data symbol 8803 is an OFDM scheme modulated signal or a single-carrier scheme modulated signal. When determined to be an OFDM scheme modulated signal, since the terminal is not functionally equipped to demodulate data symbol 8803, data symbol 8803 is not demodulated. On the other hand, when determined to be a single-carrier scheme modulated signal, the terminal demodulates data symbol 8803. Here, the terminal determines a demodulation method for data symbol 8803 based on information obtained by control information decoder (control information detector) 809. Here, since a phase change is not implemented cyclically/regularly on a single-carrier scheme modulated signal, the terminal uses, from among the control information obtained by control information decoder (control information detector) 809, control information excluding at least the bit corresponding to (the v3 information and) the v5 information to determine the demodulation method for data symbol 8803.

When the base station or AP transmits a modulated signal generated with a phase change implemented in phase changer 205A, phase changer 205B, phase changer 5901A, and/or phase changer 5901B, and a terminal that supports demodulation of such a modulated signal determines in control information decoder (control information detector) 809 from v1 that the modulated signal is an OFDM scheme modulated signal, the v5 information (v5 bit) is determined to be valid. Here, control information decoder (control information detector) 809 determines a demodulation method for data symbol 8803 based on control information including the v5 information (v5 bit). Then, signal processor 811 performs operations for demodulation and decoding using a method based on the determined demodulation method.
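The null/valid handling of v5 described above can be condensed into a small decision rule. The following is a hypothetical sketch; the boolean field names and the dictionary representation are assumptions, and the rule simply restates that v5 participates only for an OFDM scheme data symbol, MIMO transmission, and a cyclic/regular phase change.

def v5_is_valid(v1_is_ofdm, v2_is_mimo, v3_phase_change):
    # v5 (Table 12) participates in selecting the demodulation method only
    # when the data symbol uses OFDM, transmission is MIMO, and a phase
    # change is implemented cyclically/regularly; otherwise v5 is null.
    return v1_is_ofdm and v2_is_mimo and v3_phase_change

def demod_control_bits(v1_is_ofdm, v2_is_mimo, v3_phase_change, v5):
    bits = {"v1": v1_is_ofdm, "v2": v2_is_mimo, "v3": v3_phase_change}
    if v5_is_valid(v1_is_ofdm, v2_is_mimo, v3_phase_change):
        bits["v5"] = v5          # phase change method #1 (0) or #2 (1)
    return bits                  # the v5 bit is excluded when null

print(demod_control_bits(True, True, True, v5=0))   # OFDM + MIMO + phase change
print(demod_control_bits(False, True, True, v5=0))  # single-carrier: v5 ignored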
When the base station or AP transmits a modulated signal generated with a phase change implemented in phase changer 205A, phase changer 205B, phase changer 5901A, and/or phase changer 5901B, and a terminal that supports demodulation of such a modulated signal determines in control information decoder (control information detector) 809 from v1 that the modulated signal is a single-carrier scheme modulated signal, the v5 information (v5 bit) is determined to be null (the v5 information (v5 bit) is not necessary). Here, control information decoder (control information detector) 809 determines a demodulation method for data symbol 8803 using control information excluding at least the bit corresponding to (the v3 information and) the v5 information. Then, signal processor 811 performs operations for demodulation and decoding using a method based on the determined demodulation method.

By the base station or AP and the terminal, which is the communication partner of the base station or AP, operating as described in the present embodiment, the base station or AP and the terminal can perform communication accurately, and as a result, it is possible to achieve the advantageous effects that data reception quality is improved and data transmission speed is improved. Moreover, when the base station or AP uses an OFDM scheme and implements a phase change upon transmitting a plurality of streams, in an environment in which direct waves are dominant, the terminal, which is the communication partner, can achieve the advantageous effect of an improvement in data reception quality.

Embodiment C4

Next, a variation of Embodiment B2 will be described. The precoding method used in weighting synthesizer 203 when mapped signal 201A (s1(t)) is QPSK (or π/2 shift QPSK) and mapped signal 201B (s2(t)) is QPSK (or π/2 shift QPSK) will be described (note that in Embodiment B2, π/2 shift QPSK may be used instead of QPSK).

When the configuration of signal processor 106 in FIG. 1 is any one of the configurations illustrated in FIG. 2, FIG. 18, FIG. 19, FIG. 20, FIG. 21, FIG. 22, FIG. 59, or FIG. 60, for example, the following is applied as the precoding matrix F used in weighting synthesizer 203.

[MATH. 220] F = \begin{pmatrix} 1 & e^{j\pi/3} \\ 1 & -e^{j\pi/3} \end{pmatrix}  Equation (220), or

[MATH. 221] F = \frac{\beta}{\sqrt{2}} \begin{pmatrix} 1 & e^{j\pi/3} \\ 1 & -e^{j\pi/3} \end{pmatrix}  Equation (221), or

[MATH. 222] F = \begin{pmatrix} \beta \times 1 & \beta \times e^{j\pi/3} \\ \beta \times 1 & -\beta \times e^{j\pi/3} \end{pmatrix}  Equation (222), or

[MATH. 223] F = \begin{pmatrix} e^{j\theta_{11}} & e^{j(\theta_{11}+\pi/3)} \\ e^{j\theta_{21}} & e^{j(\theta_{21}+\pi+\pi/3)} \end{pmatrix}  Equation (223), or

[MATH. 224] F = \frac{\beta}{\sqrt{2}} \begin{pmatrix} e^{j\theta_{11}} & e^{j(\theta_{11}+\pi/3)} \\ e^{j\theta_{21}} & e^{j(\theta_{21}+\pi+\pi/3)} \end{pmatrix}  Equation (224), or

[MATH. 225] F = \begin{pmatrix} \beta \times e^{j\theta_{11}} & \beta \times e^{j(\theta_{11}+\pi/3)} \\ \beta \times e^{j\theta_{21}} & \beta \times e^{j(\theta_{21}+\pi+\pi/3)} \end{pmatrix}  Equation (225)

β may be a real number, and, alternatively, may be an imaginary number. However, β is not 0 (zero). Moreover, θ11 and θ21 are real numbers.

In weighting synthesizer 203, when precoding is performed using any one of the precoding matrices expressed in Equation (220) through Equation (225), the signal points in the in-phase I-quadrature Q plane of weighting synthesized signals 204A and 204B do not overlap and are widely spread apart. Accordingly, when the base station or AP transmits transmission signals 108_A and 108_B, and in the terminal, which is the communication partner, the reception power of either transmission signal 108_A or transmission signal 108_B is low, taking into consideration the state of the signal points described above, it is possible to achieve the advantageous effect of an improvement in data reception quality by the terminal.

Precoding matrix F may be applied as follows.

[MATH. 226] F = \begin{pmatrix} a & b \\ c & d \end{pmatrix}  Equation (226)

Note that a, b, c, and d can be defined as complex numbers (and thus may be real numbers).
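As a numerical check of the non-overlap property stated above, the following sketch applies the matrix of Equation (221) (assuming β = 1 and a unit-power QPSK alphabet; both are illustrative choices) to all pairs of mapped symbols s1 and s2 and counts the distinct signal points of weighting synthesized signals 204A and 204B.

import numpy as np
from itertools import product

beta = 1.0                                    # assumed value; beta must be nonzero
F = (beta / np.sqrt(2)) * np.array([[1, np.exp(1j * np.pi / 3)],
                                    [1, -np.exp(1j * np.pi / 3)]])  # Equation (221)

qpsk = [(i_ + 1j * q) / np.sqrt(2) for i_ in (1, -1) for q in (1, -1)]
pairs = list(product(qpsk, repeat=2))         # all 16 (s1, s2) combinations

z1_points = {complex(np.round(F[0, 0] * s1 + F[0, 1] * s2, 9)) for s1, s2 in pairs}
z2_points = {complex(np.round(F[1, 0] * s1 + F[1, 1] * s2, 9)) for s1, s2 in pairs}
print(len(z1_points), len(z2_points))         # 16 16: no overlapping signal points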
Here, in Equation (220) through Equation (225), since the absolute values of a, b, c, and d are equal, it is possible to achieve the advantageous effect that there is a high possibility of achieving diversity gain.

Note that in the above description, the configuration of signal processor 106 in the transmission device that is illustrated in FIG. 1 and included in the base station or AP is exemplified as being any one of the configurations illustrated in FIG. 2, FIG. 18, FIG. 19, FIG. 20, FIG. 21, FIG. 22, FIG. 59, and FIG. 60, but a phase change need not be implemented by phase changer 205A, phase changer 205B, phase changer 209A, and/or phase changer 209B illustrated in FIG. 2, FIG. 18, FIG. 19, FIG. 20, FIG. 21, FIG. 22, FIG. 59, and FIG. 60. In that case, a phase change is not implemented on the input signals, and the signals are output as-is. For example, (in FIG. 2,) when a phase change is not implemented in phase changer 205B, signal 204B corresponds to signal 206B. When a phase change is not implemented in phase changer 209B, signal 208B corresponds to signal 210B. When a phase change is not implemented in phase changer 205A, signal 204A corresponds to signal 206A. When a phase change is not implemented in phase changer 209A, signal 208A corresponds to signal 210A.

Phase changer 205A, phase changer 205B, phase changer 209A, and/or phase changer 209B may be omitted. For example, (in FIG. 2,) when phase changer 205B is omitted, input 206B of inserter 207B corresponds to signal 204B. When phase changer 209B is omitted, signal 210B corresponds to signal 208B. When phase changer 205A is omitted, input 206A of inserter 207A corresponds to signal 204A. When phase changer 209A is omitted, signal 210A corresponds to signal 208A.

When the precoding matrices are set as described above, it is possible to achieve the advantageous effect of an improvement in data reception quality in the terminal, which is the communication partner of the base station or AP. Note that this embodiment may be combined with other embodiments, including Embodiment B1.

Embodiment C5

Next, a variation of Embodiment B2 will be described. The precoding method used in weighting synthesizer 203 when mapped signal 201A (s1(t)) is 16QAM (or π/2 shift 16QAM) and mapped signal 201B (s2(t)) is 16QAM (or π/2 shift 16QAM) will be described.

When the configuration of signal processor 106 in FIG. 1 is any one of the configurations illustrated in FIG. 2, FIG. 18, FIG. 19, FIG. 20, FIG. 21, FIG. 22, FIG. 59, or FIG. 60, for example, the following is applied as the precoding matrix F used in weighting synthesizer 203.

[MATH. 227] F = \begin{pmatrix} e^{j\theta_{11}} & \alpha \times e^{j(\theta_{11}+\delta)} \\ \alpha \times e^{j\theta_{21}} & e^{j(\theta_{21}+\pi+\delta)} \end{pmatrix}  Equation (227), or

[MATH. 228] F = \frac{\beta}{\sqrt{\alpha^2+1}} \begin{pmatrix} e^{j\theta_{11}} & \alpha \times e^{j(\theta_{11}+\delta)} \\ \alpha \times e^{j\theta_{21}} & e^{j(\theta_{21}+\pi+\delta)} \end{pmatrix}  Equation (228), or

[MATH. 229] F = \begin{pmatrix} \beta \times e^{j\theta_{11}} & \beta \times \alpha \times e^{j(\theta_{11}+\delta)} \\ \beta \times \alpha \times e^{j\theta_{21}} & \beta \times e^{j(\theta_{21}+\pi+\delta)} \end{pmatrix}  Equation (229)

As a first method, in Equation (227), Equation (228), and Equation (229), α is defined as follows.

[MATH. 230] α = 5/4  Equation (230)

β may be a real number, and, alternatively, may be an imaginary number. θ11 is a real number, θ21 is a real number, and δ is a real number.

As a second method, in Equation (227), Equation (228), and Equation (229), α is defined as follows.

[MATH. 231] α = 4/5  Equation (231)

β may be a real number, and, alternatively, may be an imaginary number. θ11 is a real number, θ21 is a real number, and δ is a real number.
In weighting synthesizer 203, when precoding is performed using any one of the precoding matrices according to the first method using Equation (227), the first method using Equation (228), the first method using Equation (229), the second method using Equation (227), the second method using Equation (228), and the second method using Equation (229), the signal points in the in-phase I-quadrature Q plane of weighting synthesized signals 204A and 204B do not overlap and are widely spread apart. Accordingly, when the base station or AP transmits transmission signals 108_A and 108_B, and in the terminal, which is the communication partner, the reception power of either transmission signal 108_A or transmission signal 108_B is low, taking into consideration the state of the signal points described above, it is possible to achieve the advantageous effect of an improvement in data reception quality by the terminal.

Precoding matrix F may be applied as shown in Equation (226). Here, in the first method using Equation (227), the first method using Equation (228), the first method using Equation (229), the second method using Equation (227), the second method using Equation (228), and the second method using Equation (229), since the absolute values of a, b, c, and d do not differ greatly, it is possible to achieve the advantageous effect that there is a high possibility of achieving diversity gain.

Note that in the above description, the configuration of signal processor 106 in the transmission device in FIG. 1 included in the base station or AP is described as being any one of the configurations illustrated in FIG. 2, FIG. 18, FIG. 19, FIG. 20, FIG. 21, FIG. 22, FIG. 59, and FIG. 60, but a phase change need not be implemented in phase changer 205A, phase changer 205B, phase changer 209A, and phase changer 209B in FIG. 2, FIG. 18, FIG. 19, FIG. 20, FIG. 21, FIG. 22, FIG. 59, and FIG. 60. In that case, a phase change is not implemented on the input signals, and the signals are output as-is. For example, (in FIG. 2,) when a phase change is not implemented in phase changer 205B, signal 204B corresponds to signal 206B. When a phase change is not implemented in phase changer 209B, signal 208B corresponds to signal 210B. When a phase change is not implemented in phase changer 205A, signal 204A corresponds to signal 206A. When a phase change is not implemented in phase changer 209A, signal 208A corresponds to signal 210A.

Phase changer 205A, phase changer 205B, phase changer 209A, and/or phase changer 209B may be omitted. For example, (in FIG. 2,) when phase changer 205B is omitted, input 206B of inserter 207B corresponds to signal 204B. When phase changer 209B is omitted, signal 210B corresponds to signal 208B. When phase changer 205A is omitted, input 206A of inserter 207A corresponds to signal 204A. When phase changer 209A is omitted, signal 210A corresponds to signal 208A.

When the precoding matrices are set as described above, it is possible to achieve the advantageous effect of an improvement in data reception quality in the terminal, which is the communication partner of the base station or AP. Note that this embodiment may be combined with other embodiments, including Embodiment B1.

Embodiment C6

Next, a variation of Embodiment B2 will be described. The precoding method used in weighting synthesizer 203 when mapped signal 201A (s1(t)) is 64QAM (or π/2 shift 64QAM) and mapped signal 201B (s2(t)) is 64QAM (or π/2 shift 64QAM) will be described.
When the configuration of signal processor 106 in FIG. 1 is any one of the configurations illustrated in FIG. 2, FIG. 18, FIG. 19, FIG. 20, FIG. 21, FIG. 22, FIG. 59, or FIG. 60, for example, the following is applied as the precoding matrix F used in weighting synthesizer 203.

[MATH. 232] F = \begin{pmatrix} e^{j\theta_{11}} & \alpha \times e^{j(\theta_{11}+\delta)} \\ \alpha \times e^{j\theta_{21}} & e^{j(\theta_{21}+\pi+\delta)} \end{pmatrix}  Equation (232), or

[MATH. 233] F = \frac{\beta}{\sqrt{\alpha^2+1}} \begin{pmatrix} e^{j\theta_{11}} & \alpha \times e^{j(\theta_{11}+\delta)} \\ \alpha \times e^{j\theta_{21}} & e^{j(\theta_{21}+\pi+\delta)} \end{pmatrix}  Equation (233), or

[MATH. 234] F = \begin{pmatrix} \beta \times e^{j\theta_{11}} & \beta \times \alpha \times e^{j(\theta_{11}+\delta)} \\ \beta \times \alpha \times e^{j\theta_{21}} & \beta \times e^{j(\theta_{21}+\pi+\delta)} \end{pmatrix}  Equation (234)

As a first method, in Equation (232), Equation (233), and Equation (234), α is defined as follows.

[MATH. 235] α = 9/8  Equation (235)

β may be a real number, and, alternatively, may be an imaginary number. θ11 is a real number, θ21 is a real number, and δ is a real number.

As a second method, in Equation (232), Equation (233), and Equation (234), α is defined as follows.

[MATH. 236] α = 8/9  Equation (236)

β may be a real number, and, alternatively, may be an imaginary number. θ11 is a real number, θ21 is a real number, and δ is a real number.

In weighting synthesizer 203, when precoding is performed using any one of the precoding matrices according to the first method using Equation (232), the first method using Equation (233), the first method using Equation (234), the second method using Equation (232), the second method using Equation (233), and the second method using Equation (234), the signal points in the in-phase I-quadrature Q plane of weighting synthesized signals 204A and 204B do not overlap and are widely spread apart. Accordingly, when the base station or AP transmits transmission signals 108_A and 108_B, and in the terminal, which is the communication partner, the reception power of either transmission signal 108_A or transmission signal 108_B is low, taking into consideration the state of the signal points described above, it is possible to achieve the advantageous effect of an improvement in data reception quality by the terminal.

Precoding matrix F may be applied as shown in Equation (226). Here, in the first method using Equation (232), the first method using Equation (233), the first method using Equation (234), the second method using Equation (232), the second method using Equation (233), and the second method using Equation (234), since the absolute values of a, b, c, and d do not differ greatly, it is possible to achieve the advantageous effect that there is a high possibility of achieving diversity gain.

Note that in the above description, the configuration of signal processor 106 in the transmission device in FIG. 1 included in the base station or AP is described as being any one of the configurations illustrated in FIG. 2, FIG. 18, FIG. 19, FIG. 20, FIG. 21, FIG. 22, FIG. 59, and FIG. 60, but a phase change need not be implemented in phase changer 205A, phase changer 205B, phase changer 209A, and phase changer 209B in FIG. 2, FIG. 18, FIG. 19, FIG. 20, FIG. 21, FIG. 22, FIG. 59, and FIG. 60. In that case, a phase change is not implemented on the input signals, and the signals are output as-is. For example, (in FIG. 2,) when a phase change is not implemented in phase changer 205B, signal 204B corresponds to signal 206B. When a phase change is not implemented in phase changer 209B, signal 208B corresponds to signal 210B. When a phase change is not implemented in phase changer 205A, signal 204A corresponds to signal 206A. When a phase change is not implemented in phase changer 209A, signal 208A corresponds to signal 210A.

Phase changer 205A, phase changer 205B, phase changer 209A, and/or phase changer 209B may be omitted.
For example, (in FIG. 2,) when phase changer 205B is omitted, input 206B of inserter 207B corresponds to signal 204B. When phase changer 209B is omitted, signal 210B corresponds to signal 208B. When phase changer 205A is omitted, input 206A of inserter 207A corresponds to signal 204A. When phase changer 209A is omitted, signal 210A corresponds to signal 208A.

When the precoding matrices are set as described above, it is possible to achieve the advantageous effect of an improvement in data reception quality in the terminal, which is the communication partner of the base station or AP. Note that this embodiment may be combined with other embodiments, including Embodiment B1.

Embodiment C7

Next, a variation of Embodiment B2 will be described. The precoding method used in weighting synthesizer 203 when mapped signal 201A (s1(t)) is 16QAM (or π/2 shift 16QAM) and mapped signal 201B (s2(t)) is 16QAM (or π/2 shift 16QAM) will be described.

When the configuration of signal processor 106 in FIG. 1 is any one of the configurations illustrated in FIG. 2, FIG. 18, FIG. 19, FIG. 20, FIG. 21, FIG. 22, FIG. 59, or FIG. 60, for example, the following is applied as the precoding matrix F used in weighting synthesizer 203.

[MATH. 237] F = \begin{pmatrix} e^{j\theta_{11}} & \alpha \times e^{j(\theta_{11}+\delta)} \\ \alpha \times e^{j\theta_{21}} & e^{j(\theta_{21}+\pi+\delta)} \end{pmatrix}  Equation (237), or

[MATH. 238] F = \frac{\beta}{\sqrt{\alpha^2+1}} \begin{pmatrix} e^{j\theta_{11}} & \alpha \times e^{j(\theta_{11}+\delta)} \\ \alpha \times e^{j\theta_{21}} & e^{j(\theta_{21}+\pi+\delta)} \end{pmatrix}  Equation (238), or

[MATH. 239] F = \begin{pmatrix} \beta \times e^{j\theta_{11}} & \beta \times \alpha \times e^{j(\theta_{11}+\delta)} \\ \beta \times \alpha \times e^{j\theta_{21}} & \beta \times e^{j(\theta_{21}+\pi+\delta)} \end{pmatrix}  Equation (239)

As a first method, in Equation (237), Equation (238), and Equation (239), α is defined as follows.

[MATH. 240] α = 4  Equation (240)

β may be a real number, and, alternatively, may be an imaginary number. θ11 is a real number, θ21 is a real number, and δ is a real number.

As a second method, in Equation (237), Equation (238), and Equation (239), α is defined as follows.

[MATH. 241] α = 1/4  Equation (241)

β may be a real number, and, alternatively, may be an imaginary number. θ11 is a real number, θ21 is a real number, and δ is a real number.

In weighting synthesizer 203, when precoding is performed using any one of the precoding matrices according to the first method using Equation (237), the first method using Equation (238), the first method using Equation (239), the second method using Equation (237), the second method using Equation (238), and the second method using Equation (239), the signal points in the in-phase I-quadrature Q plane of weighting synthesized signals 204A and 204B do not overlap and are widely spread apart. Accordingly, when the base station or AP transmits transmission signals 108_A and 108_B, and in the terminal, which is the communication partner, the reception power of either transmission signal 108_A or transmission signal 108_B is low, taking into consideration the state of the signal points described above, it is possible to achieve the advantageous effect of an improvement in data reception quality by the terminal.

Note that in the above description, the configuration of signal processor 106 in the transmission device in FIG. 1 included in the base station or AP is described as being any one of the configurations illustrated in FIG. 2, FIG. 18, FIG. 19, FIG. 20, FIG. 21, FIG. 22, FIG. 59, and FIG. 60, but a phase change need not be implemented in phase changer 205A, phase changer 205B, phase changer 209A, and phase changer 209B in FIG. 2, FIG. 18, FIG. 19, FIG. 20, FIG. 21, FIG. 22, FIG. 59, and FIG. 60. In that case, a phase change is not implemented on the input signals, and the signals are output as-is. For example, (in FIG. 2,) when a phase change is not implemented in phase changer 205B, signal 204B corresponds to signal 206B.
When a phase change is not implemented in phase changer 209B, signal 208B corresponds to signal 210B. When a phase change is not implemented in phase changer 205A, signal 204A corresponds to signal 206A. When a phase change is not implemented in phase changer 209A, signal 208A corresponds to signal 210A.

Phase changer 205A, phase changer 205B, phase changer 209A, and/or phase changer 209B may be omitted. For example, (in FIG. 2,) when phase changer 205B is omitted, input 206B of inserter 207B corresponds to signal 204B. When phase changer 209B is omitted, signal 210B corresponds to signal 208B. When phase changer 205A is omitted, input 206A of inserter 207A corresponds to signal 204A. When phase changer 209A is omitted, signal 210A corresponds to signal 208A.

When the precoding matrices are set as described above, it is possible to achieve the advantageous effect of an improvement in data reception quality in the terminal, which is the communication partner of the base station or AP. Note that this embodiment may be combined with other embodiments, including Embodiment B1.

Embodiment C8

Next, a variation of Embodiment B2 will be described. The precoding method used in weighting synthesizer 203 when mapped signal 201A (s1(t)) is 64QAM (or π/2 shift 64QAM) and mapped signal 201B (s2(t)) is 64QAM (or π/2 shift 64QAM) will be described.

When the configuration of signal processor 106 in FIG. 1 is any one of the configurations illustrated in FIG. 2, FIG. 18, FIG. 19, FIG. 20, FIG. 21, FIG. 22, FIG. 59, or FIG. 60, for example, the following is applied as the precoding matrix F used in weighting synthesizer 203.

[MATH. 242] F = \begin{pmatrix} e^{j\theta_{11}} & \alpha \times e^{j(\theta_{11}+\delta)} \\ \alpha \times e^{j\theta_{21}} & e^{j(\theta_{21}+\pi+\delta)} \end{pmatrix}  Equation (242), or

[MATH. 243] F = \frac{\beta}{\sqrt{\alpha^2+1}} \begin{pmatrix} e^{j\theta_{11}} & \alpha \times e^{j(\theta_{11}+\delta)} \\ \alpha \times e^{j\theta_{21}} & e^{j(\theta_{21}+\pi+\delta)} \end{pmatrix}  Equation (243), or

[MATH. 244] F = \begin{pmatrix} \beta \times e^{j\theta_{11}} & \beta \times \alpha \times e^{j(\theta_{11}+\delta)} \\ \beta \times \alpha \times e^{j\theta_{21}} & \beta \times e^{j(\theta_{21}+\pi+\delta)} \end{pmatrix}  Equation (244)

As a first method, in Equation (242), Equation (243), and Equation (244), α is defined as follows.

[MATH. 245] α = 8  Equation (245)

β may be a real number, and, alternatively, may be an imaginary number. θ11 is a real number, θ21 is a real number, and δ is a real number.

As a second method, in Equation (242), Equation (243), and Equation (244), α is defined as follows.

[MATH. 246] α = 1/8  Equation (246)

β may be a real number, and, alternatively, may be an imaginary number. θ11 is a real number, θ21 is a real number, and δ is a real number.

In weighting synthesizer 203, when precoding is performed using any one of the precoding matrices according to the first method using Equation (242), the first method using Equation (243), the first method using Equation (244), the second method using Equation (242), the second method using Equation (243), and the second method using Equation (244), the signal points in the in-phase I-quadrature Q plane of weighting synthesized signals 204A and 204B do not overlap and are widely spread apart. Accordingly, when the base station or AP transmits transmission signals 108_A and 108_B, and in the terminal, which is the communication partner, the reception power of either transmission signal 108_A or transmission signal 108_B is low, taking into consideration the state of the signal points described above, it is possible to achieve the advantageous effect of an improvement in data reception quality by the terminal.
Note that in the above description, the configuration of signal processor 106 in the transmission device in FIG. 1 included in the base station or AP is described as being any one of the configurations illustrated in FIG. 2, FIG. 18, FIG. 19, FIG. 20, FIG. 21, FIG. 22, FIG. 59, and FIG. 60, but a phase change need not be implemented in phase changer 205A, phase changer 205B, phase changer 209A, and phase changer 209B in FIG. 2, FIG. 18, FIG. 19, FIG. 20, FIG. 21, FIG. 22, FIG. 59, and FIG. 60. In that case, a phase change is not implemented on the input signals, and the signals are output as-is. For example, (in FIG. 2,) when a phase change is not implemented in phase changer 205B, signal 204B corresponds to signal 206B. When a phase change is not implemented in phase changer 209B, signal 208B corresponds to signal 210B. When a phase change is not implemented in phase changer 205A, signal 204A corresponds to signal 206A. When a phase change is not implemented in phase changer 209A, signal 208A corresponds to signal 210A.

Phase changer 205A, phase changer 205B, phase changer 209A, and/or phase changer 209B may be omitted. For example, (in FIG. 2,) when phase changer 205B is omitted, input 206B of inserter 207B corresponds to signal 204B. When phase changer 209B is omitted, signal 210B corresponds to signal 208B. When phase changer 205A is omitted, input 206A of inserter 207A corresponds to signal 204A. When phase changer 209A is omitted, signal 210A corresponds to signal 208A.

When the precoding matrices are set as described above, it is possible to achieve the advantageous effect of an improvement in data reception quality in the terminal, which is the communication partner of the base station or AP. Note that this embodiment may be combined with other embodiments, including Embodiment B1.

Embodiment D1

In this embodiment, preferable examples of the precoding method used in the transmission device in the base station or AP, based on Embodiment B2, will be given. Consider a case in which the base station or AP and the terminal communicate with each other. FIG. 90 illustrates an example of the configuration of the transmission device in the base station or AP in such a case. In FIG. 90, objects that operate the same as in FIG. 1 share like reference marks, and repeated description of such objects will be omitted.

Error correction encoder 102 receives inputs of data 101 and control signal 100, and based on information related to the error correction code included in control signal 100, performs error correction encoding, and outputs encoded data 103.

Mapper 104 receives inputs of encoded data 103 and control signal 100, and based on information on the modulation scheme included in control signal 100, performs mapping in accordance with the modulation scheme, and outputs mapped signal (baseband signal) 105_1.

Signal processor 106 receives inputs of mapped signal 105_1, signal group 110, and control signal 100, performs signal processing based on control signal 100, and outputs signal-processed signal 106_A. Radio unit 107_A receives inputs of signal-processed signal 106_A and control signal 100, and based on control signal 100, processes signal-processed signal 106_A and outputs transmission signal 108_A. Transmission signal 108_A is then output as radio waves from antenna unit #A (109_A).

FIG. 91 illustrates one example of a configuration of signal processor 106 illustrated in FIG. 90. Note that in FIG. 91, operations that are the same as in FIG. 2 share like reference marks, and duplicate description thereof is omitted.
Weighting synthesizer (precoder) 203 receives inputs of mapped signal 201A (corresponding to mapped signal 105_1 in FIG. 90) and control signal 200 (corresponding to control signal 100 in FIG. 90), performs weighting synthesis (precoding) based on control signal 200, and outputs weighted signal 204A. Here, mapped signal 201A is expressed as s1(t) and weighted signal 204A is expressed as z1(t). Note that one example of t is time (s1(t) and z1(t) are defined as complex numbers (accordingly, they may be real numbers)).

Weighting synthesizer 203 then performs weighting synthesis on the two symbols s1(2i−1) and s1(2i) in mapped signal 201A s1(t), and outputs the two symbols z1(2i−1) and z1(2i) in weighted signal 204A z1(t). More specifically, the following calculation is performed.

[MATH. 247] \begin{pmatrix} z1(2i-1) \\ z1(2i) \end{pmatrix} = \begin{pmatrix} a & b \\ c & d \end{pmatrix} \begin{pmatrix} s1(2i-1) \\ s1(2i) \end{pmatrix} = F \begin{pmatrix} s1(2i-1) \\ s1(2i) \end{pmatrix}  Equation (247)

Note that F is a matrix for weighting synthesis, and a, b, c, and d can be defined as complex numbers (and thus may be real numbers). Note that i is a symbol number (here, i is an integer that is greater than or equal to 1).

Inserter 207A receives inputs of weighting synthesized signal 204A, pilot symbol signal (pa(t)) (t is time) 251A, preamble signal 252, control information symbol signal 253, and control signal 200, and based on information on the frame configuration included in control signal 200, outputs baseband signal 208A based on the frame configuration.

FIG. 92 illustrates one example of a frame configuration of a modulated signal transmitted by the transmission device illustrated in FIG. 90. Time is represented on the horizontal axis. 9201 is a preamble, and is, for example, a symbol for the reception device that receives the modulated signal transmitted by the transmission device illustrated in FIG. 90 to implement time synchronization, frame synchronization, signal detection, frequency synchronization, frequency offset estimation, etc. 9202 is a control information symbol, and is, for example, a symbol for transmitting control information, such as the modulation scheme, error correction encoding scheme, and/or transmission method of a data symbol.

9203 is a data symbol, and is a symbol for transmitting z1(2i−1) and z1(2i) described above. Since the frame configuration illustrated in FIG. 92 is a single-carrier scheme frame configuration, z1(2i−1) and z1(2i) are arranged in order along the time axis; for example, symbols are arranged along the time axis in the order z1(2i−1), z1(2i). Note that the transmission device illustrated in FIG. 90 may include an interleaver for shifting the order of the symbols, and depending on the shifting of the order of the symbols, z1(2i−1) and z1(2i) need not be temporally adjacent. Moreover, in FIG. 92, a pilot symbol is not included, but a pilot symbol may be included in the frame. Moreover, symbols other than those illustrated in FIG. 92 may be included in the frame.

FIG. 93 illustrates one example of a frame configuration, different from FIG. 92, of a modulated signal transmitted by the transmission device illustrated in FIG. 90. Frequency is represented on the horizontal axis, and time is represented on the vertical axis. 9301 is a pilot symbol, and is, for example, a symbol for the reception device that receives the modulated signal transmitted by the transmission device illustrated in FIG. 90 to implement channel estimation, etc. 9303 is some other type of symbol, including, for example, a preamble and control information symbol.
The preamble is a symbol for the reception device that receives the modulated signal transmitted by the transmission device illustrated in FIG. 90 to implement time synchronization, frame synchronization, signal detection, frequency synchronization, frequency offset estimation, etc., and the control information symbol is a symbol for transmitting control information on the modulation scheme, error correction encoding scheme, transmission method, etc., of a data symbol.

9302 is a data symbol, and is a symbol for transmitting z1(2i−1) and z1(2i) described above. Since the frame configuration illustrated in FIG. 93 is a multi-carrier transmission scheme frame configuration such as an OFDM frame configuration, z1(2i−1) and z1(2i) may be arranged in order along the time axis, and may be arranged in order along the frequency axis. Note that the transmission device illustrated in FIG. 90 may include an interleaver for shifting the order of the symbols, and depending on the shifting of the order of the symbols, z1(2i−1) and z1(2i) need not be temporally adjacent, and need not be adjacent on the frequency axis. Moreover, the frame may include symbols other than those illustrated in FIG. 93.

A suitable example of a weighting synthesis method for weighting synthesizer 203 in FIG. 91 when signal processor 106 in FIG. 90 has the configuration illustrated in FIG. 91 will be described.

As a first example, the weighting synthesis method used in weighting synthesizer 203 in FIG. 91 when mapped signal 201A (s1(t)) is BPSK (Binary Phase Shift Keying) or π/2 shift BPSK will be described. Consider a case in which the matrix F or F(i) for the weighting synthesis to be used in weighting synthesizer 203 in FIG. 91 includes only real numbers. For example, the matrix F for weighting synthesis is expressed as shown in the following equation.

[MATH. 248] F = \begin{pmatrix} 1 & -1 \\ 1 & 1 \end{pmatrix}  Equation (248)

For example, in the case of BPSK, the signal points of the signal after precoding in the in-phase I-quadrature Q plane include three points, namely, signal points 8601, 8602, and 8603 illustrated in FIG. 86 (one point overlaps with another signal point). Consider a case in which, under the conditions above, z1(2i−1) and z1(2i) are transmitted as illustrated in FIG. 1, and in the terminal, which is the communication partner, the reception power of z1(2i−1) or z1(2i) is low. Here, as illustrated in FIG. 86, since there are only three signal points, a problem arises in that data reception quality degrades.

Taking this into consideration, a method is proposed in which precoding matrix F for weighting synthesis is not comprised solely of real numbers. As an example, matrix F for weighting synthesis is applied as follows.

[MATH. 249] F = \begin{pmatrix} 1 & j \\ j & 1 \end{pmatrix}  Equation (249), or

[MATH. 250] F = \frac{\alpha}{\sqrt{2}} \begin{pmatrix} 1 & j \\ j & 1 \end{pmatrix}  Equation (250), or

[MATH. 251] F = \begin{pmatrix} \alpha \times 1 & \alpha \times j \\ \alpha \times j & \alpha \times 1 \end{pmatrix}  Equation (251), or

[MATH. 252] F = \begin{pmatrix} j & 1 \\ 1 & j \end{pmatrix}  Equation (252), or

[MATH. 253] F = \frac{\alpha}{\sqrt{2}} \begin{pmatrix} j & 1 \\ 1 & j \end{pmatrix}  Equation (253), or

[MATH. 254] F = \begin{pmatrix} \alpha \times j & \alpha \times 1 \\ \alpha \times 1 & \alpha \times j \end{pmatrix}  Equation (254), or

[MATH. 255] F = \begin{pmatrix} 1 & j \\ 1 & -j \end{pmatrix}  Equation (255), or

[MATH. 256] F = \frac{\alpha}{\sqrt{2}} \begin{pmatrix} 1 & j \\ 1 & -j \end{pmatrix}  Equation (256), or

[MATH. 257] F = \begin{pmatrix} \alpha \times 1 & \alpha \times j \\ \alpha \times 1 & -\alpha \times j \end{pmatrix}  Equation (257), or

[MATH. 258] F = \begin{pmatrix} 1 & -j \\ 1 & j \end{pmatrix}  Equation (258), or

[MATH. 259] F = \frac{\alpha}{\sqrt{2}} \begin{pmatrix} 1 & -j \\ 1 & j \end{pmatrix}  Equation (259), or

[MATH. 260] F = \begin{pmatrix} \alpha \times 1 & -\alpha \times j \\ \alpha \times 1 & \alpha \times j \end{pmatrix}  Equation (260), or

[MATH. 261] F = \begin{pmatrix} j & 1 \\ -j & 1 \end{pmatrix}  Equation (261), or

[MATH. 262] F = \frac{\alpha}{\sqrt{2}} \begin{pmatrix} j & 1 \\ -j & 1 \end{pmatrix}  Equation (262), or

[MATH. 263] F = \begin{pmatrix} \alpha \times j & \alpha \times 1 \\ -\alpha \times j & \alpha \times 1 \end{pmatrix}  Equation (263), or

[MATH. 264] F = \begin{pmatrix} -j & 1 \\ j & 1 \end{pmatrix}  Equation (264), or

[MATH. 265] F = \frac{\alpha}{\sqrt{2}} \begin{pmatrix} -j & 1 \\ j & 1 \end{pmatrix}  Equation (265), or

[MATH. 266] F = \begin{pmatrix} -\alpha \times j & \alpha \times 1 \\ \alpha \times j & \alpha \times 1 \end{pmatrix}  Equation (266)

Note that α may be a real number, and, alternatively, may be an imaginary number. However, α is not 0 (zero).
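The contrast between the real-valued matrix of Equation (248) and a complex-valued matrix such as Equation (249) can be checked numerically. The sketch below counts the distinct signal points of the first weighted output for BPSK input, with normalization omitted as in the equations themselves; the helper name is an assumption for illustration.

import numpy as np
from itertools import product

def distinct_points(F):
    # Count the distinct values of the first weighted output
    # z1(2i-1) = F[0,0]*s1(2i-1) + F[0,1]*s1(2i) over all BPSK pairs.
    bpsk = (1.0, -1.0)
    pts = {complex(np.round(F[0, 0] * a + F[0, 1] * b, 9))
           for a, b in product(bpsk, repeat=2)}
    return len(pts)

F_real = np.array([[1, -1], [1, 1]])      # Equation (248)
F_cplx = np.array([[1, 1j], [1j, 1]])     # Equation (249)
print(distinct_points(F_real))  # 3: values -2, 0, 2, with 0 occurring twice (FIG. 86)
print(distinct_points(F_cplx))  # 4: values +-1 +- j, no overlap (FIG. 87)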
When weighting synthesis using any of the matrices illustrated in Equation (249) through Equation (266) is performed in weighting synthesizer203illustrated inFIG.91, the signal points in the in-phase I-quadrature Q plane of weighting synthesized signal204A are arranged at the four signal points8701,8702,8703, and8704inFIG.87. Accordingly, when the base station or AP transmits transmission signal108_A and, in the terminal, which is the communication partner, the reception power of either z1(2i−1) or z1(2i) is low, taking into consideration the state illustrated inFIG.87, it is possible to achieve the advantageous effect of an improvement in data reception quality by the terminal.

Next, as a second example, a suitable example of a weighting synthesis method to be used in weighting synthesizer203when mapped signal201A (s1(t)) is QPSK (Quadrature Phase Shift Keying) will be described. When signal processor106inFIG.90has the configuration illustrated inFIG.91, as one example of matrix F for weighting synthesis to be used by weighting synthesizer203, the following may be applied.

[MATH. 267]
F = \begin{pmatrix} 1 & 2 \\ -2 & 1 \end{pmatrix}   Equation (267)
or
[MATH. 268]
F = \frac{\beta}{\sqrt{5}} \begin{pmatrix} 1 & 2 \\ -2 & 1 \end{pmatrix}   Equation (268)
or
[MATH. 269]
F = \begin{pmatrix} \beta \times 1 & \beta \times 2 \\ -\beta \times 2 & \beta \times 1 \end{pmatrix}   Equation (269)
or
[MATH. 270]
F = \begin{pmatrix} 2 & 1 \\ 1 & -2 \end{pmatrix}   Equation (270)
or
[MATH. 271]
F = \frac{\beta}{\sqrt{5}} \begin{pmatrix} 2 & 1 \\ 1 & -2 \end{pmatrix}   Equation (271)
or
[MATH. 272]
F = \begin{pmatrix} \beta \times 2 & \beta \times 1 \\ \beta \times 1 & -\beta \times 2 \end{pmatrix}   Equation (272)
or
[MATH. 273]
F = \begin{pmatrix} 1 & -2 \\ 2 & 1 \end{pmatrix}   Equation (273)
or
[MATH. 274]
F = \frac{\beta}{\sqrt{5}} \begin{pmatrix} 1 & -2 \\ 2 & 1 \end{pmatrix}   Equation (274)
or
[MATH. 275]
F = \begin{pmatrix} \beta \times 1 & -\beta \times 2 \\ \beta \times 2 & \beta \times 1 \end{pmatrix}   Equation (275)
or
[MATH. 276]
F = \begin{pmatrix} -2 & 1 \\ 1 & 2 \end{pmatrix}   Equation (276)
or
[MATH. 277]
F = \frac{\beta}{\sqrt{5}} \begin{pmatrix} -2 & 1 \\ 1 & 2 \end{pmatrix}   Equation (277)
or
[MATH. 278]
F = \begin{pmatrix} -\beta \times 2 & \beta \times 1 \\ \beta \times 1 & \beta \times 2 \end{pmatrix}   Equation (278)
or
[MATH. 279]
F = \begin{pmatrix} 1 & 2 \\ 2 & -1 \end{pmatrix}   Equation (279)
or
[MATH. 280]
F = \frac{\beta}{\sqrt{5}} \begin{pmatrix} 1 & 2 \\ 2 & -1 \end{pmatrix}   Equation (280)
or
[MATH. 281]
F = \begin{pmatrix} \beta \times 1 & \beta \times 2 \\ \beta \times 2 & -\beta \times 1 \end{pmatrix}   Equation (281)
or
[MATH. 282]
F = \begin{pmatrix} 2 & 1 \\ -1 & 2 \end{pmatrix}   Equation (282)
or
[MATH. 283]
F = \frac{\beta}{\sqrt{5}} \begin{pmatrix} 2 & 1 \\ -1 & 2 \end{pmatrix}   Equation (283)
or
[MATH. 284]
F = \begin{pmatrix} \beta \times 2 & \beta \times 1 \\ -\beta \times 1 & \beta \times 2 \end{pmatrix}   Equation (284)
or
[MATH. 285]
F = \begin{pmatrix} -1 & 2 \\ 2 & 1 \end{pmatrix}   Equation (285)
or
[MATH. 286]
F = \frac{\beta}{\sqrt{5}} \begin{pmatrix} -1 & 2 \\ 2 & 1 \end{pmatrix}   Equation (286)
or
[MATH. 287]
F = \begin{pmatrix} -\beta \times 1 & \beta \times 2 \\ \beta \times 2 & \beta \times 1 \end{pmatrix}   Equation (287)
or
[MATH. 288]
F = \begin{pmatrix} 2 & -1 \\ 1 & 2 \end{pmatrix}   Equation (288)
or
[MATH. 289]
F = \frac{\beta}{\sqrt{5}} \begin{pmatrix} 2 & -1 \\ 1 & 2 \end{pmatrix}   Equation (289)
or
[MATH. 290]
F = \begin{pmatrix} \beta \times 2 & -\beta \times 1 \\ \beta \times 1 & \beta \times 2 \end{pmatrix}   Equation (290)

β may be a real number, and, alternatively, may be an imaginary number. However, β is not 0 (zero). When weighting synthesis using any of the matrices illustrated in Equation (267) through Equation (290) is performed in weighting synthesizer203illustrated inFIG.91, the signal points in the in-phase I-quadrature Q plane of weighting synthesized signal204A do not overlap and are widely spread apart. Accordingly, when the base station or AP transmits transmission signal108_A and, in the terminal, which is the communication partner, the reception power of either z1(2i−1) or z1(2i) is low, taking into consideration the state of the signal points described above, it is possible to achieve the advantageous effect of an improvement in data reception quality by the terminal. When the matrices for weighting synthesis are set as described above, it is possible to achieve an advantageous effect of an improvement in data reception quality in the terminal, which is the communication partner of the base station or AP. Note that this embodiment may be combined with other embodiments.
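The non-overlap property claimed above for the QPSK case can be checked numerically. The sketch below is illustrative only (the 1/√2 QPSK normalization is an assumption); it applies the matrix of Equation (267) to all 16 pairs of QPSK symbols and confirms that the 16 weighted signal points z1 are all distinct.

```python
# Illustrative sketch: Equation (267) applied to all QPSK symbol pairs.
import itertools

import numpy as np

F_267 = np.array([[1, 2], [-2, 1]], dtype=complex)  # Equation (267)

# QPSK mapping normalized to unit average power (an assumption).
qpsk = [(i + 1j * q) / np.sqrt(2) for i in (1, -1) for q in (1, -1)]

z1_points = {
    complex(np.round((F_267 @ np.array([s_a, s_b]))[0], 9))
    for s_a, s_b in itertools.product(qpsk, repeat=2)
}
print(len(z1_points))  # 16: the weighted signal points do not overlap
```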
Embodiment D2

Next, a variation of Embodiment D1 will be described. A weighting synthesis method used in weighting synthesizer203inFIG.91when mapped signal201A (s1(t)) is QPSK (or π/2 shift QPSK) will be described (note that in Embodiment D1, π/2 shift QPSK may be used instead of QPSK). When signal processor106inFIG.90has the configuration illustrated inFIG.91, as one example of matrix F for weighting synthesis to be used by weighting synthesizer203, the following may be applied.

[MATH. 291]
F = \begin{pmatrix} 1 & e^{j\pi/3} \\ 1 & -e^{j\pi/3} \end{pmatrix}   Equation (291)
or
[MATH. 292]
F = \frac{\beta}{\sqrt{2}} \begin{pmatrix} 1 & e^{j\pi/3} \\ 1 & -e^{j\pi/3} \end{pmatrix}   Equation (292)
or
[MATH. 293]
F = \begin{pmatrix} \beta \times 1 & \beta \times e^{j\pi/3} \\ \beta \times 1 & -\beta \times e^{j\pi/3} \end{pmatrix}   Equation (293)
or
[MATH. 294]
F = \begin{pmatrix} e^{j\theta_{11}} & e^{j(\theta_{11}+\pi/3)} \\ e^{j\theta_{21}} & e^{j(\theta_{21}+\pi+\pi/3)} \end{pmatrix}   Equation (294)
or
[MATH. 295]
F = \frac{\beta}{\sqrt{2}} \begin{pmatrix} e^{j\theta_{11}} & e^{j(\theta_{11}+\pi/3)} \\ e^{j\theta_{21}} & e^{j(\theta_{21}+\pi+\pi/3)} \end{pmatrix}   Equation (295)
or
[MATH. 296]
F = \begin{pmatrix} \beta \times e^{j\theta_{11}} & \beta \times e^{j(\theta_{11}+\pi/3)} \\ \beta \times e^{j\theta_{21}} & \beta \times e^{j(\theta_{21}+\pi+\pi/3)} \end{pmatrix}   Equation (296)

β may be a real number, and, alternatively, may be an imaginary number. However, β is not 0 (zero). Moreover, θ11 and θ21 are real numbers. When weighting synthesis using any of the matrices illustrated in Equation (291) through Equation (296) is performed in weighting synthesizer203illustrated inFIG.91, the signal points in the in-phase I-quadrature Q plane of weighting synthesized signal204A do not overlap and are widely spread apart. Accordingly, when the base station or AP transmits transmission signal108_A and, in the terminal, which is the communication partner, the reception power of either z1(2i−1) or z1(2i) is low, taking into consideration the state of the signal points described above, it is possible to achieve the advantageous effect of an improvement in data reception quality by the terminal. Matrix F for weighting synthesis is expressed as follows.

[MATH. 297]
F = \begin{pmatrix} a & b \\ c & d \end{pmatrix}   Equation (297)

Note that a, b, c, and d can be defined as complex numbers (and thus may be real numbers). Here, in Equation (291) through Equation (296), since the absolute values of a, b, c, and d are equal, it is possible to achieve the advantageous effect that it is highly likely that diversity gain can be achieved. When the matrices for weighting synthesis are set as described above, it is possible to achieve an advantageous effect of an improvement in data reception quality in the terminal, which is the communication partner of the base station or AP. Note that this embodiment may be combined with other embodiments.

Embodiment D3

Next, a variation of Embodiment D1 will be described. A weighting synthesis method used in weighting synthesizer203inFIG.91when mapped signal201A (s1(t)) is 16QAM (or π/2 shift 16QAM) will be described. When signal processor106inFIG.90has the configuration illustrated inFIG.91, as one example of matrix F for weighting synthesis to be used by weighting synthesizer203, the following may be applied.

[MATH. 298]
F = \begin{pmatrix} e^{j\theta_{11}} & \alpha \times e^{j(\theta_{11}+\delta)} \\ \alpha \times e^{j\theta_{21}} & e^{j(\theta_{21}+\pi+\delta)} \end{pmatrix}   Equation (298)
or
[MATH. 299]
F = \frac{\beta}{\sqrt{\alpha^2+1}} \begin{pmatrix} e^{j\theta_{11}} & \alpha \times e^{j(\theta_{11}+\delta)} \\ \alpha \times e^{j\theta_{21}} & e^{j(\theta_{21}+\pi+\delta)} \end{pmatrix}   Equation (299)
or
[MATH. 300]
F = \begin{pmatrix} \beta \times e^{j\theta_{11}} & \beta \times \alpha \times e^{j(\theta_{11}+\delta)} \\ \beta \times \alpha \times e^{j\theta_{21}} & \beta \times e^{j(\theta_{21}+\pi+\delta)} \end{pmatrix}   Equation (300)

As a first method, in Equation (298), Equation (299), and Equation (300), α is defined as follows.

[MATH. 301] α = 5/4   Equation (301)

β may be a real number, and, alternatively, may be an imaginary number. θ11 is a real number, θ21 is a real number, and δ is a real number. As a second method, in Equation (298), Equation (299), and Equation (300), α is defined as follows.

[MATH. 302] α = 4/5   Equation (302)

β may be a real number, and, alternatively, may be an imaginary number. θ11 is a real number, θ21 is a real number, and δ is a real number.
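As a sanity check of the parameterization just given, the following sketch (illustrative; θ11 = θ21 = δ = 0 is an arbitrary choice, and the 1/√10 16QAM normalization is an assumption) builds F per Equation (298) with α = 5/4 from Equation (301) and confirms that the 256 possible weighted signal points do not overlap, which is the property discussed next.

```python
# Illustrative sketch: F per Equation (298) with alpha = 5/4 (Equation (301)).
import itertools

import numpy as np

def f_eq298(theta11, theta21, delta, alpha):
    """Matrix F of Equation (298)."""
    return np.array([
        [np.exp(1j * theta11), alpha * np.exp(1j * (theta11 + delta))],
        [alpha * np.exp(1j * theta21), np.exp(1j * (theta21 + np.pi + delta))],
    ])

# 16QAM mapping with the common 1/sqrt(10) normalization (an assumption).
levels = (-3, -1, 1, 3)
qam16 = [(i + 1j * q) / np.sqrt(10) for i in levels for q in levels]

F = f_eq298(theta11=0.0, theta21=0.0, delta=0.0, alpha=5 / 4)
z1_points = {
    complex(np.round((F @ np.array([s_a, s_b]))[0], 9))
    for s_a, s_b in itertools.product(qam16, repeat=2)
}
print(len(z1_points))  # 256: all 16 x 16 symbol pairs map to distinct points
```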
In weighting synthesizer203, when precoding using any one of the precoding matrices according to the first method using Equation (298), the first method using Equation (299), the first method using Equation (300), the second method using Equation (298), the second method using Equation (299), and the second method using Equation (300) is performed, the signal points in the in-phase I-quadrature Q plane of weighting synthesized signal204A do not overlap and are widely spread apart. Accordingly, when the base station or AP transmits transmission signal108_A and, in the terminal, which is the communication partner, the reception power of either z1(2i−1) or z1(2i) is low, taking into consideration the state of the signal points described above, it is possible to achieve the advantageous effect of an improvement in data reception quality by the terminal.

Matrix F for weighting synthesis is expressed as shown in Equation (297). Here, in the first method using Equation (298), the first method using Equation (299), the first method using Equation (300), the second method using Equation (298), the second method using Equation (299), and the second method using Equation (300), since there is no large difference between the absolute values of a, b, c, and d, it is possible to achieve the advantageous effect that it is highly likely that diversity gain can be achieved. When the matrices for weighting synthesis are set as described above, it is possible to achieve an advantageous effect of an improvement in data reception quality in the terminal, which is the communication partner of the base station or AP. Note that this embodiment may be combined with other embodiments.

Embodiment D4

Next, a variation of Embodiment D1 will be described. A weighting synthesis method used in weighting synthesizer203inFIG.91when mapped signal201A (s1(t)) is 64QAM (or π/2 shift 64QAM) will be described. When signal processor106inFIG.90has the configuration illustrated inFIG.91, as one example of matrix F for weighting synthesis to be used by weighting synthesizer203, the following may be applied.

[MATH. 303]
F = \begin{pmatrix} e^{j\theta_{11}} & \alpha \times e^{j(\theta_{11}+\delta)} \\ \alpha \times e^{j\theta_{21}} & e^{j(\theta_{21}+\pi+\delta)} \end{pmatrix}   Equation (303)
or
[MATH. 304]
F = \frac{\beta}{\sqrt{\alpha^2+1}} \begin{pmatrix} e^{j\theta_{11}} & \alpha \times e^{j(\theta_{11}+\delta)} \\ \alpha \times e^{j\theta_{21}} & e^{j(\theta_{21}+\pi+\delta)} \end{pmatrix}   Equation (304)
or
[MATH. 305]
F = \begin{pmatrix} \beta \times e^{j\theta_{11}} & \beta \times \alpha \times e^{j(\theta_{11}+\delta)} \\ \beta \times \alpha \times e^{j\theta_{21}} & \beta \times e^{j(\theta_{21}+\pi+\delta)} \end{pmatrix}   Equation (305)

As a first method, in Equation (303), Equation (304), and Equation (305), α is defined as follows.

[MATH. 306] α = 9/8   Equation (306)

β may be a real number, and, alternatively, may be an imaginary number. θ11 is a real number, θ21 is a real number, and δ is a real number. As a second method, in Equation (303), Equation (304), and Equation (305), α is defined as follows.

[MATH. 307] α = 8/9   Equation (307)

β may be a real number, and, alternatively, may be an imaginary number. θ11 is a real number, θ21 is a real number, and δ is a real number. In weighting synthesizer203, when weighting synthesis using any one of the matrices for weighting synthesis according to the first method using Equation (303), the first method using Equation (304), the first method using Equation (305), the second method using Equation (303), the second method using Equation (304), and the second method using Equation (305) is performed, the signal points in the in-phase I-quadrature Q plane of weighting synthesized signal204A do not overlap and are widely spread apart.
Accordingly, when the base station or AP transmits transmission signal108_A and, in the terminal, which is the communication partner, the reception power of either z1(2i−1) or z1(2i) is low, taking into consideration the state of the signal points described above, it is possible to achieve the advantageous effect of an improvement in data reception quality by the terminal.

Matrix F for weighting synthesis is expressed as shown in Equation (297). Here, in the first method using Equation (303), the first method using Equation (304), the first method using Equation (305), the second method using Equation (303), the second method using Equation (304), and the second method using Equation (305), since there is no large difference between the absolute values of a, b, c, and d, it is possible to achieve the advantageous effect that it is highly likely that diversity gain can be achieved. When the matrices for weighting synthesis are set as described above, it is possible to achieve an advantageous effect of an improvement in data reception quality in the terminal, which is the communication partner of the base station or AP.

Embodiment D5

Next, a variation of Embodiment D1 will be described. A weighting synthesis method used in weighting synthesizer203inFIG.91when mapped signal201A (s1(t)) is 16QAM (or π/2 shift 16QAM) will be described. When signal processor106inFIG.90has the configuration illustrated inFIG.91, as one example of matrix F for weighting synthesis to be used by weighting synthesizer203, the following may be applied.

[MATH. 308]
F = \begin{pmatrix} e^{j\theta_{11}} & \alpha \times e^{j(\theta_{11}+\delta)} \\ \alpha \times e^{j\theta_{21}} & e^{j(\theta_{21}+\pi+\delta)} \end{pmatrix}   Equation (308)
or
[MATH. 309]
F = \frac{\beta}{\sqrt{\alpha^2+1}} \begin{pmatrix} e^{j\theta_{11}} & \alpha \times e^{j(\theta_{11}+\delta)} \\ \alpha \times e^{j\theta_{21}} & e^{j(\theta_{21}+\pi+\delta)} \end{pmatrix}   Equation (309)
or
[MATH. 310]
F = \begin{pmatrix} \beta \times e^{j\theta_{11}} & \beta \times \alpha \times e^{j(\theta_{11}+\delta)} \\ \beta \times \alpha \times e^{j\theta_{21}} & \beta \times e^{j(\theta_{21}+\pi+\delta)} \end{pmatrix}   Equation (310)

As a first method, in Equation (308), Equation (309), and Equation (310), α is defined as follows.

[MATH. 311] α = 4   Equation (311)

β may be a real number, and, alternatively, may be an imaginary number. θ11 is a real number, θ21 is a real number, and δ is a real number. As a second method, in Equation (308), Equation (309), and Equation (310), α is defined as follows.

[MATH. 312] α = 1/4   Equation (312)

β may be a real number, and, alternatively, may be an imaginary number. θ11 is a real number, θ21 is a real number, and δ is a real number. In weighting synthesizer203, when weighting synthesis using any one of the matrices for weighting synthesis according to the first method using Equation (308), the first method using Equation (309), the first method using Equation (310), the second method using Equation (308), the second method using Equation (309), and the second method using Equation (310) is performed, the signal points in the in-phase I-quadrature Q plane of weighting synthesized signal204A do not overlap and are widely spread apart. Accordingly, when the base station or AP transmits transmission signal108_A and, in the terminal, which is the communication partner, the reception power of either z1(2i−1) or z1(2i) is low, taking into consideration the state of the signal points described above, it is possible to achieve the advantageous effect of an improvement in data reception quality by the terminal. When the matrices for weighting synthesis are set as described above, it is possible to achieve an advantageous effect of an improvement in data reception quality in the terminal, which is the communication partner of the base station or AP.
Note that this embodiment may be combined with other embodiments.

Embodiment D6

Next, a variation of Embodiment D1 will be described. A weighting synthesis method used in weighting synthesizer203inFIG.91when mapped signal201A (s1(t)) is 64QAM (or π/2 shift 64QAM) will be described. When signal processor106inFIG.90has the configuration illustrated inFIG.91, as one example of matrix F for weighting synthesis to be used by weighting synthesizer203, the following may be applied.

[MATH. 313]
F = \begin{pmatrix} e^{j\theta_{11}} & \alpha \times e^{j(\theta_{11}+\delta)} \\ \alpha \times e^{j\theta_{21}} & e^{j(\theta_{21}+\pi+\delta)} \end{pmatrix}   Equation (313)
or
[MATH. 314]
F = \frac{\beta}{\sqrt{\alpha^2+1}} \begin{pmatrix} e^{j\theta_{11}} & \alpha \times e^{j(\theta_{11}+\delta)} \\ \alpha \times e^{j\theta_{21}} & e^{j(\theta_{21}+\pi+\delta)} \end{pmatrix}   Equation (314)
or
[MATH. 315]
F = \begin{pmatrix} \beta \times e^{j\theta_{11}} & \beta \times \alpha \times e^{j(\theta_{11}+\delta)} \\ \beta \times \alpha \times e^{j\theta_{21}} & \beta \times e^{j(\theta_{21}+\pi+\delta)} \end{pmatrix}   Equation (315)

As a first method, in Equation (313), Equation (314), and Equation (315), α is defined as follows.

[MATH. 316] α = 8   Equation (316)

β may be a real number, and, alternatively, may be an imaginary number. θ11 is a real number, θ21 is a real number, and δ is a real number. As a second method, in Equation (313), Equation (314), and Equation (315), α is defined as follows.

[MATH. 317] α = 1/8   Equation (317)

β may be a real number, and, alternatively, may be an imaginary number. θ11 is a real number, θ21 is a real number, and δ is a real number. In weighting synthesizer203, when weighting synthesis using any one of the matrices for weighting synthesis according to the first method using Equation (313), the first method using Equation (314), the first method using Equation (315), the second method using Equation (313), the second method using Equation (314), and the second method using Equation (315) is performed, the signal points in the in-phase I-quadrature Q plane of weighting synthesized signal204A do not overlap and are widely spread apart. Accordingly, when the base station or AP transmits transmission signal108_A and, in the terminal, which is the communication partner, the reception power of either z1(2i−1) or z1(2i) is low, taking into consideration the state of the signal points described above, it is possible to achieve the advantageous effect of an improvement in data reception quality by the terminal. When the matrices for weighting synthesis are set as described above, it is possible to achieve an advantageous effect of an improvement in data reception quality in the terminal, which is the communication partner of the base station or AP. Note that this embodiment may be combined with other embodiments.
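Since the matrices of Equations (303), (308), and (313) share the form of Equation (298), one small sketch can check the non-overlap property for Embodiments D4 through D6 together. Everything below is illustrative (θ11 = θ21 = δ = 0 and the QAM normalizations are assumptions); it loops over the α values of Equations (306), (307), (311), (312), (316), and (317).

```python
# Illustrative sketch: one non-overlap check covering Embodiments D4 to D6.
import itertools

import numpy as np

def f_shared(alpha, theta11=0.0, theta21=0.0, delta=0.0):
    """Shared form of Equations (298), (303), (308), and (313)."""
    return np.array([
        [np.exp(1j * theta11), alpha * np.exp(1j * (theta11 + delta))],
        [alpha * np.exp(1j * theta21), np.exp(1j * (theta21 + np.pi + delta))],
    ])

def qam(levels):
    """Square QAM alphabet normalized to unit average power (assumed)."""
    pts = [complex(i, q) for i in levels for q in levels]
    norm = np.sqrt(np.mean([abs(p) ** 2 for p in pts]))
    return [p / norm for p in pts]

qam16 = qam((-3, -1, 1, 3))
qam64 = qam((-7, -5, -3, -1, 1, 3, 5, 7))

cases = [("D4", qam64, 9 / 8), ("D4", qam64, 8 / 9),  # Equations (306), (307)
         ("D5", qam16, 4.0), ("D5", qam16, 1 / 4),    # Equations (311), (312)
         ("D6", qam64, 8.0), ("D6", qam64, 1 / 8)]    # Equations (316), (317)

for name, alphabet, alpha in cases:
    F = f_shared(alpha)
    pts = {complex(np.round((F @ np.array([a, b]))[0], 9))
           for a, b in itertools.product(alphabet, repeat=2)}
    print(name, alpha, len(pts))  # 4096 distinct for 64QAM, 256 for 16QAM
```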
Embodiment E1

In this embodiment, a configuration will be described for a transmission device that supports both the transmission method described in the present specification of transmitting, from a plurality of antennas at the same time and frequency, a plurality of signals generated by precoding a plurality of modulated signals, and the transmission method described in Embodiments D1 through D6 of transmitting, from at least one antenna, a plurality of weighting synthesized signals generated by performing weighting synthesis on a plurality of modulated signals, the weighting synthesized signals differing in at least one of frequency and time. As described in Embodiment A8, the configuration of the transmission device in the base station or AP is the configuration illustrated inFIG.1orFIG.44.

Note that the transmission device in the base station may be configured to be capable of implementing both the method of generating a plurality of signals from data encoded by the single error correction encoder illustrated inFIG.1and the method of generating a plurality of signals from data encoded by the plurality of error correction encoders illustrated inFIG.44. Radio unit107_A and radio unit107_B inFIG.1andFIG.44include, for example, the configurations illustrated inFIG.3orFIG.55. When radio unit107_A and radio unit107_B have the configuration illustrated inFIG.55, they can selectively switch between a single-carrier scheme and an OFDM scheme. Note that since operations pertaining toFIG.3have already been described in an embodiment in detail and operations pertaining toFIG.55have already been described in Embodiment A8 in detail, description will be omitted from this embodiment.

The transmission device in the base station or AP switches between transmission using the transmission method described in the present specification of transmitting, from a plurality of antennas at the same time and frequency, a plurality of signals generated by precoding a plurality of modulated signals, and transmission using the transmission method described in Embodiments D1 through D6 of transmitting, from at least one antenna, a plurality of weighting synthesized signals generated by performing weighting synthesis on a plurality of modulated signals, the weighting synthesized signals differing in at least one of frequency and time. For example, upon single stream modulated signal transmission described in Embodiment A8, the transmission device in the base station or AP performs transmission using the transmission method described in Embodiments D1 through D6 of transmitting, from at least one antenna, a plurality of weighting synthesized signals generated by performing weighting synthesis on a plurality of modulated signals, the weighting synthesized signals differing in at least one of frequency and time. Since operations performed by the transmission device in the base station or AP for transmitting a plurality of modulated signals for a plurality of streams have already been described in Embodiment A8, description will be omitted from this embodiment.

The transmission device in the base station or AP may use, as precoding processes to be implemented in transmission of a plurality of modulated signals for a plurality of streams, the precoding processes expressed by the matrix F that represents the weighting synthesis processes implemented in single stream modulated signal transmission. For example, the transmission device in the base station or AP performs the precoding processes illustrated in Equation (248) in transmission of a plurality of modulated signals for a plurality of streams, and performs the weighting synthesis processes illustrated in Equation (248) in single stream modulated signal transmission. With such a configuration, since the precoding processes implemented in transmission of a plurality of modulated signals for a plurality of streams and the weighting synthesis processes implemented in single stream modulated signal transmission are the same, the transmission device in the base station or AP can reduce the scale of circuitry used compared to when different matrices F are used for the precoding processes and the weighting synthesis.
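The circuit-sharing point above can be pictured as a single routine parameterized by the matrix F and invoked by both transmission modes. The sketch below is a rough illustration under assumed names and example symbol values, not an implementation of the devices inFIG.1orFIG.44.

```python
# Sketch (illustrative assumptions): one weighting/precoding routine,
# parameterized by the same matrix F (here Equation (248)), serves both the
# multi-stream precoding mode and the single-stream weighting synthesis mode.
import numpy as np

F_248 = np.array([[1, -1], [1, 1]], dtype=complex)  # shared matrix F

def weight(F, s_pair):
    """Single shared routine: z = F s (the Equation (247) operation)."""
    return F @ np.asarray(s_pair, dtype=complex)

# Multi-stream mode: the two outputs are sent from two antennas at the same
# time and frequency.
z1_antenna1, z2_antenna2 = weight(F_248, [0.7 + 0.7j, -0.7 + 0.7j])

# Single-stream mode (Embodiments D1 through D6): the same routine produces
# z1(2i-1) and z1(2i), sent at different times (or frequencies) instead.
z1_time_a, z1_time_b = weight(F_248, [0.7 + 0.7j, -0.7 + 0.7j])
```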
Moreover, in the above description, an example is given in which the matrix F representing the precoding processes and the weighting synthesis processes is exemplified as the matrix F illustrated in Equation (248), but even if the matrix F representing the precoding processes and the weighting synthesis processes is another matrix F described in the present disclosure, it can be implemented in the same manner, as a matter of course. Moreover, operations performed by the transmission device in the base station or AP in transmission of a plurality of modulated signals for a plurality of streams are not limited to the examples in Embodiment A8. The transmission device included in the base station or AP can implement transmission of a plurality of modulated signals for a plurality of streams using arbitrary configurations and operations described in other embodiments for transmitting a plurality of transmission signals generated from the plurality of modulated signals from a plurality of antennas at the same frequency and time. For example, the transmission device in the base station or AP may include the configuration illustrated inFIG.73and described in Embodiment A10.

Next, the reception device included in the terminal will be described. The reception device in the terminal that receives the signal transmitted by the transmission device in the base station or AP using transmission of a plurality of modulated signals for a plurality of streams performs operations for reception and demodulation of received signals that support the method of transmission of a plurality of modulated signals for a plurality of streams described in other embodiments, and obtains the transmitted data. The reception device in the terminal that receives the signal transmitted by the transmission device in the base station or AP using single stream modulated signal transmission includes, for example, the configuration illustrated inFIG.41. Signal processor4109uses both or at least one of the received plurality of weighting synthesized signals, performs demodulation and error correction decoding according to the weighting synthesis processes implemented on the signal(s), and obtains the transmitted data. As operations have already been described in Embodiment A4 in detail, description will be omitted from this embodiment. The reception device in the terminal described here can be applied in the same manner as described in Embodiments D1 through D6.

Note that the transmission device in the base station or AP may use, as precoding processes to be implemented in transmission of a plurality of modulated signals for a plurality of streams, a single precoding method selected from among a plurality of precoding methods expressed by mutually different matrices F. Similarly, the transmission device in the base station or AP may use, as weighting synthesis processes to be implemented in single stream modulated signal transmission, a single weighting synthesis method selected from among a plurality of weighting synthesis methods expressed by mutually different matrices F. Here, if the matrix F expressing at least one of the precoding methods selectable by the transmission device in the base station or AP is the same as the matrix F expressing a weighting synthesis method selectable by the transmission device in the base station or AP, the transmission device in the base station or AP can reduce the scale of the circuitry used.
A first transmission device according to one aspect of the present embodiment described above performs transmission in a transmission mode selected from among a plurality of transmission modes including a first transmission mode and a second transmission mode. In the first transmission mode, a first transmission signal and a second transmission signal generated by implementing first signal processing on a first modulated signal and a second modulated signal are transmitted from a plurality of antennas at the same frequency and same time. In the second transmission mode, a third transmission signal and a fourth transmission signal generated by implementing second signal processing on a third modulated signal and a fourth modulated signal are transmitted from at least one antenna at different frequencies, different times, or different frequencies and times. The first signal processing and the second signal processing include weighting synthesis defined by the same matrix F. A second transmission device according to another aspect of the present embodiment generates a first transmission signal and a second transmission signal by implementing predetermined signal processing including weighting synthesis defined by a matrix F on a first modulated signal and a second modulated signal. In a first transmission mode, the first transmission signal and the second transmission signal are transmitted from a plurality of antennas at the same frequency and the same time, and in a second transmission mode, the first transmission signal and the second transmission signal are transmitted from at least one antenna at different frequencies, different times, or different frequencies and times.

Embodiment F1

In this embodiment, using the examples described in Embodiment A1, Embodiment A2, Embodiment A4, and Embodiment A11, another implementation method for operations performed by the terminal will be given. FIG.23illustrates one example of a configuration of the base station or AP. As this example has already been described, repeated description will be omitted. FIG.24illustrates one example of a configuration of a terminal, which is the communication partner of the base station or AP. As this example has already been described, repeated description will be omitted. FIG.34illustrates one example of a system configuration in a state in which base station or AP3401and terminal3402are communicating. As this example has already been described in Embodiment A1, Embodiment A2, Embodiment A4, and Embodiment A11, repeated description will be omitted. FIG.35illustrates an example of communication between the base station or AP3401and terminal3402illustrated inFIG.34. As this example has already been described in Embodiment A1, Embodiment A2, Embodiment A4, and Embodiment A11, repeated description will be omitted. FIG.94illustrates a specific example of a configuration of reception capability notification symbol3502transmitted by the terminal illustrated inFIG.35. Before moving on to the description ofFIG.94, first, the types of terminals that may be provided to communicate with the base station or AP will be described. In this embodiment, there is a possibility that the following types of terminals exist.

Terminal Type #1: Terminal Type #1 can demodulate single-carrier scheme and single stream transmission modulated signals.

Terminal Type #2: Terminal Type #2 can demodulate single-carrier scheme and single stream transmission modulated signals.
Additionally, Terminal Type #2 can receive and demodulate single-carrier scheme modulated signals transmitted from a plurality of antennas by the communication partner.

Terminal Type #3: Terminal Type #3 can demodulate single-carrier scheme and single stream transmission modulated signals. Additionally, Terminal Type #3 can demodulate OFDM scheme and single stream transmission modulated signals.

Terminal Type #4: Terminal Type #4 can demodulate single-carrier scheme and single stream transmission modulated signals. Additionally, Terminal Type #4 can receive and demodulate single-carrier scheme modulated signals transmitted from a plurality of antennas by the communication partner. Additionally, Terminal Type #4 can demodulate OFDM scheme and single stream transmission modulated signals. Additionally, Terminal Type #4 can receive and demodulate OFDM scheme modulated signals transmitted from a plurality of antennas by the communication partner.

Terminal Type #5: Terminal Type #5 can demodulate OFDM scheme and single stream transmission modulated signals.

Terminal Type #6: Terminal Type #6 can demodulate OFDM scheme and single stream transmission modulated signals. Additionally, Terminal Type #6 can receive and demodulate OFDM scheme modulated signals transmitted from a plurality of antennas by the communication partner.

In this embodiment, for example, Terminal Type #1 through Terminal Type #6 are capable of communicating with the base station or AP and vice versa. However, the base station or AP may communicate with a type of terminal other than Terminal Type #1 through Terminal Type #6.
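For reference, the reception capabilities of the six terminal types listed above can be tabulated as follows (a hypothetical representation; the flag names are not from the patent text).

```python
# Hypothetical tabulation of the six terminal types described above:
# sc = single-carrier single stream, sc_mimo = single-carrier multi-antenna,
# ofdm = OFDM single stream, ofdm_mimo = OFDM multi-antenna reception.
TERMINAL_TYPES = {
    1: dict(sc=True,  sc_mimo=False, ofdm=False, ofdm_mimo=False),
    2: dict(sc=True,  sc_mimo=True,  ofdm=False, ofdm_mimo=False),
    3: dict(sc=True,  sc_mimo=False, ofdm=True,  ofdm_mimo=False),
    4: dict(sc=True,  sc_mimo=True,  ofdm=True,  ofdm_mimo=True),
    5: dict(sc=False, sc_mimo=False, ofdm=True,  ofdm_mimo=False),
    6: dict(sc=False, sc_mimo=False, ofdm=True,  ofdm_mimo=True),
}
```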
In view of this, disclosed is a reception capability notification symbol such as the one illustrated inFIG.94. FIG.94illustrates a specific example of a configuration of reception capability notification symbol3502transmitted by the terminal illustrated inFIG.35. As illustrated inFIG.94, reception capability notification symbols include reception capability notification symbol9401related to single-carrier scheme and OFDM scheme, reception capability notification symbol9402related to single-carrier scheme, and reception capability notification symbol9403related to OFDM scheme. Note that reception capability notification symbols other than those illustrated inFIG.94may be included. Reception capability notification symbol9401related to single-carrier scheme and OFDM scheme includes data for notifying the communication partner (in this case, for example, the base station or AP) of the reception capability of both the single-carrier scheme modulated signal and the OFDM scheme modulated signal. Reception capability notification symbol9402related to single-carrier scheme includes data for notifying the communication partner (in this case, for example, the base station or AP) of the reception capability of the single-carrier scheme modulated signal. Reception capability notification symbol9403related to OFDM scheme includes data for notifying the communication partner (in this case, for example, the base station or AP) of the reception capability of the OFDM scheme modulated signal. FIG.95illustrates an example of reception capability notification symbol9401related to single-carrier scheme and OFDM scheme illustrated inFIG.94.

Reception capability notification symbol9401related to single-carrier scheme and OFDM scheme illustrated inFIG.94includes data related to SISO or MIMO (MISO) support9501, data related to supported error correction encoding scheme9502, and data related to single-carrier scheme and OFDM scheme support status9503. When data related to SISO or MIMO (MISO) support9501is indicated by g0 and g1, for example, when the communication partner of the terminal transmits a single stream modulated signal and the terminal can demodulate such a modulated signal, the terminal sets g0 to 1 (g0=1) and sets g1 to 0 (g1=0), and transmits a reception capability notification symbol including g0 and g1. When the communication partner of the terminal transmits a plurality of different modulated signals from a plurality of antennas and the terminal can demodulate such modulated signals, the terminal sets g0 to 0 (g0=0) and sets g1 to 1 (g1=1), and transmits a reception capability notification symbol including g0 and g1. When the communication partner of the terminal transmits a single stream modulated signal and the terminal can demodulate such a modulated signal, and when the communication partner of the terminal transmits a plurality of different modulated signals from a plurality of antennas and the terminal can demodulate such modulated signals, the terminal sets g0 to 1 (g0=1) and sets g1 to 1 (g1=1), and transmits a reception capability notification symbol including g0 and g1.

When data related to supported error correction encoding scheme9502is g2, for example, when the terminal is capable of error correction decoding first error correction encoding scheme data, the terminal sets g2 to 0 (g2=0), and transmits a reception capability notification symbol including g2. When the terminal is capable of error correction decoding first error correction encoding scheme data and capable of error correction decoding second error correction encoding scheme data, the terminal sets g2 to 1 (g2=1), and transmits a reception capability notification symbol including g2. As another example, assume that each of the terminals is capable of error correction decoding first error correction encoding scheme data. Furthermore, when the terminal is capable of error correction decoding second error correction encoding scheme data, the terminal sets g2 to 1 (g2=1), and when the terminal is not capable of error correction decoding second error correction encoding scheme data, the terminal sets g2 to 0 (g2=0). Note that the terminal transmits a reception capability notification symbol including g2. Note that the first error correction encoding scheme and the second error correction encoding scheme are different schemes. For example, assume that the block length (code length) of the first error correction encoding scheme is A bits (A is an integer that is greater than or equal to 2) and the block length (code length) of the second error correction encoding scheme is B bits (B is an integer that is greater than or equal to 2), and that A ≠ B. However, the example of different schemes is not limited to this example; it is sufficient if the error correction code used in the first error correction encoding scheme and the error correction code used in the second error correction encoding scheme are different.
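A hypothetical encoder for the g0, g1, and g2 bits described above might look like the following sketch (the function and parameter names are assumptions, and the g2 rule shown follows the second example, in which every terminal decodes the first error correction encoding scheme).

```python
# Sketch (hypothetical encoder, not from the patent): setting the g0/g1/g2
# bits of reception capability notification symbol 9401 (FIG. 95).
def g_bits(single_stream_ok: bool, multi_antenna_ok: bool,
           second_fec_ok: bool) -> dict:
    """Return the g0, g1, g2 bit values for symbol 9401."""
    return {
        "g0": 1 if single_stream_ok else 0,  # single stream demodulation
        "g1": 1 if multi_antenna_ok else 0,  # multi-antenna demodulation
        # Each terminal is assumed to decode the first FEC scheme; g2 flags
        # additional support for the second FEC scheme.
        "g2": 1 if second_fec_ok else 0,
    }

print(g_bits(True, False, True))  # e.g. {'g0': 1, 'g1': 0, 'g2': 1}
```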
When the data related to single-carrier scheme and OFDM scheme support status9503is expressed as g3 and g4, for example, when the terminal is capable of demodulating a single-carrier scheme modulated signal, the terminal sets g3 to 1 (g3=1) and sets g4 to 0 (g4=0) (here, the terminal does not support demodulation of an OFDM modulated signal), and the terminal transmits a reception capability notification symbol including g3 and g4. When the terminal is capable of demodulating an OFDM scheme modulated signal, the terminal sets g3 to 0 (g3=0) and sets g4 to 1 (g4=1) (in this case, the terminal does not support demodulation of a single-carrier scheme modulated signal), and the terminal transmits a reception capability notification symbol including g3 and g4. When the terminal is capable of demodulating a single-carrier scheme modulated signal and capable of demodulating an OFDM scheme modulated signal, the terminal sets g3 to 1 (g3=1) and sets g4 to 1 (g4=1), and transmits a reception capability notification symbol including g3 and g4.

FIG.96illustrates an example of a configuration of reception capability notification symbol9402related to a single-carrier scheme illustrated inFIG.94. Reception capability notification symbol9402related to a single-carrier scheme illustrated inFIG.94includes data related to scheme9601supported by a single-carrier scheme. When data related to scheme9601supported by a single-carrier scheme is expressed as h0 and h1, for example, when the communication partner of the terminal performs channel bonding and transmits a modulated signal, if the terminal is capable of demodulating such a modulated signal, the terminal sets h0 to 1 (h0=1), and if the terminal does not support demodulation of such a modulated signal, the terminal sets h0 to 0 (h0=0), and then the terminal transmits a reception capability notification symbol including h0. When the communication partner of the terminal performs channel aggregation and transmits a modulated signal, if the terminal is capable of demodulating such a modulated signal, the terminal sets h1 to 1 (h1=1), and if the terminal does not support demodulation of such a modulated signal, the terminal sets h1 to 0 (h1=0), and then the terminal transmits a reception capability notification symbol including h1.

Note that when the terminal sets g3 described above to 0 and sets g4 described above to 1, since the terminal does not support demodulation of a single-carrier scheme modulated signal, the bit (field) indicated by h0 becomes a null bit (field), and the bit (field) indicated by h1 becomes a null bit (field). Note that when the terminal sets g3 to 0 and sets g4 to 1, h0 and h1 described above may be predefined as reserved (held for future use) bits (fields), and the terminal may determine h0 and h1 described above to be null bits (fields) (may determine h0 or h1 described above to be null bits (fields)), and the base station or AP may obtain h0 and h1 described above but determine h0 and h1 to be null bits (fields) (determine h0 or h1 to be null bits (fields)). In the above description, it is described that the terminal may set g3 to 0 and set g4 to 1, in other words, the terminal may not support demodulation of a single-carrier scheme modulated signal, but an embodiment in which each of the terminals supports single-carrier scheme demodulation is possible. In such cases, the bit (field) expressed by g3 described above is not required.
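The null-field rule just described can be sketched as follows (hypothetical names; this is an illustration of the rule, not a specified format): h0 and h1 carry channel bonding and channel aggregation capability only when single-carrier demodulation is supported, and are otherwise treated as null (or reserved) fields.

```python
# Sketch (hypothetical): h0/h1 of symbol 9402 are meaningful only when the
# terminal can demodulate single-carrier signals (g3 = 1); otherwise they
# are treated as null (or reserved) fields and ignored by the base station.
from typing import Optional

def single_carrier_fields(g3: int, bonding_ok: bool,
                          aggregation_ok: bool) -> dict:
    h0: Optional[int] = None  # null field by default
    h1: Optional[int] = None
    if g3 == 1:  # single-carrier demodulation supported
        h0 = 1 if bonding_ok else 0      # channel bonding receivable
        h1 = 1 if aggregation_ok else 0  # channel aggregation receivable
    return {"h0": h0, "h1": h1}

print(single_carrier_fields(g3=1, bonding_ok=True, aggregation_ok=False))
print(single_carrier_fields(g3=0, bonding_ok=True, aggregation_ok=True))
# The second call yields null fields, which the base station or AP ignores.
```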
FIG.97illustrates an example of a configuration of reception capability notification symbol9403related to OFDM scheme illustrated inFIG.94. Reception capability notification symbol9403related to an OFDM scheme illustrated inFIG.94includes data related to scheme9701supported by an OFDM scheme. Data related to scheme9701supported by an OFDM scheme includes data3601related to support for demodulation of modulated signals with phase changes illustrated in, for example,FIG.36,FIG.38, andFIG.79. Note that since data3601related to support for demodulation of modulated signals with phase changes has already been described in Embodiments A1, A2, A4, A11, etc., repeated description herein will be omitted.

When data3601related to support for demodulation of modulated signals with phase changes is expressed as k0, for example, when the communication partner of the terminal generates modulated signals, implements phase change processing, and transmits the generated modulated signals from a plurality of antennas, if the terminal is capable of demodulating such modulated signals, the terminal sets k0 to 1 (k0=1), and if the terminal does not support demodulation of such modulated signals, the terminal sets k0 to 0 (k0=0), and then the terminal transmits a reception capability notification symbol including k0. Note that when the terminal sets g3 described above to 1 and sets g4 described above to 0, since the terminal does not support demodulation of an OFDM scheme modulated signal, the bit (field) indicated by k0 becomes a null bit (field). When the terminal sets g3 to 1 and sets g4 to 0, k0 described above may be predefined as a reserved (held for future use) bit (field), and the terminal may determine k0 described above to be a null bit (field), and the base station or AP may obtain k0 described above but determine k0 to be a null bit (field). In the above description, an embodiment is possible in which each of the terminals supports single-carrier scheme demodulation. In such cases, the bit (field) expressed by g3 described above is not required.

The base station that receives the reception capability notification symbol transmitted by the terminal in the above description generates and transmits modulated signals based on the received reception capability notification symbol so that the terminal can receive a transmission signal that can be demodulated. Note that specific examples of operations performed by the base station can be found in, for example, Embodiment A1, Embodiment A2, Embodiment A4, and Embodiment A11.
If the above is implemented, the following exemplary features can be achieved.

Feature #1:

A first reception device, characterized in that:
the first reception device generates control information indicating a signal that is receivable by the first reception device and including first, second, third, and fourth regions;
the first region is configured to store information indicating whether a signal for transmitting data generated using a single-carrier scheme is receivable or not, and information indicating whether a signal generated using a multi-carrier scheme is receivable or not;
the second region is configured to store information for each of one or more schemes that can be used when the signal is generated using the single-carrier scheme, can be used when the signal is generated using the multi-carrier scheme, or can be used in both cases, the information indicating whether the signal generated using said scheme is receivable;
the third region:
is configured to, when the first region stores information indicating that the signal for transmitting data generated using the single-carrier scheme is receivable, store information for each of one or more schemes that can be used when the signal is generated using the single-carrier scheme, the information indicating whether the signal generated using said scheme is receivable; and
is configured to be a null or reserved region when the first region stores information indicating that the signal for transmitting data generated using the single-carrier scheme is not receivable;
the fourth region:
is configured to, when the first region stores information indicating that the signal for transmitting data generated using the multi-carrier scheme is receivable, store information for each of one or more schemes that can be used when the signal is generated using the multi-carrier scheme, the information indicating whether the signal generated using said scheme is receivable; and
is configured to be a null or reserved region when the first region stores information indicating that the signal for transmitting data generated using the multi-carrier scheme is not receivable; and
the first reception device is configured to generate a control signal based on the control information and transmit the control signal to a transmission device.

The first reception device described above, characterized in that:
the second region includes a fifth region configured to store information indicating whether a signal generated using a multiple-input multiple-output (MIMO) scheme is receivable or not;
the second or fourth region includes a sixth region configured to store information indicating whether a signal generated using a phase change scheme that implements a phase change while regularly changing a phase change value is receivable or not, for at least one of transmission system signals that transmit data; and
the first reception device is configured to set a bit in the sixth region to a predetermined value when (i) the first region stores information indicating that the signal for transmitting data generated using the multi-carrier scheme is not receivable or when (ii) the first region stores information indicating that the signal for transmitting data generated using the multi-carrier scheme is receivable and the fifth region stores information indicating that the signal generated using the MIMO scheme is not receivable.

A first transmission device, configured to:
receive the control signal from the first reception device described above;
demodulate the received control signal to obtain the control information; and
based on the control information, determine a scheme to be used to generate a signal to be transmitted to the first reception device.

The first transmission device described above, characterized in that:
the second region includes a fifth region configured to store information indicating whether a signal generated using a multiple-input multiple-output (MIMO) scheme is receivable or not;
the second or fourth region includes a sixth region configured to store information indicating whether a signal generated using a phase change scheme that implements a phase change while regularly changing a phase change value is receivable or not, for at least one of transmission system signals that transmit data; and
the first transmission device is configured to determine a scheme to be used to generate a signal to be transmitted to the first reception device, without using a value of a bit in the sixth region, when (i) the first region includes information indicating that the signal for transmitting data generated using the multi-carrier scheme is not receivable or when (ii) the first region includes information indicating that the signal for transmitting data generated using the multi-carrier scheme is receivable and the fifth region includes information indicating that the signal generated using the MIMO scheme is not receivable.

Feature #2:

A second reception device, characterized in that:
the second reception device generates control information indicating a signal that is receivable by the second reception device and including first, second, third, and fourth regions;
the first region is configured to store information indicating whether a signal for transmitting data generated using a multi-carrier scheme is receivable or not;
the second region is configured to store information for each of one or more schemes that can be used when the signal is generated using the single-carrier scheme, can be used when the signal is generated using the multi-carrier scheme, or can be used in both cases, the information indicating whether the signal generated using said scheme is receivable;
the third region is configured to store information for each of one or more schemes that can be used when the signal is generated using the single-carrier scheme, the information indicating whether the signal generated using said scheme is receivable;
the fourth region:
is configured to, when the first region stores information indicating that the signal for transmitting data generated using the multi-carrier scheme is receivable, store information for each of one or more schemes that can be used when the signal is generated using the multi-carrier scheme, the information indicating whether the signal generated using said scheme is receivable; and
is configured to be a null or reserved region when the first region stores information indicating that the signal for transmitting data generated using the multi-carrier scheme is not receivable; and
the second reception device is configured to generate a control signal based on the control information and transmit the control signal to a transmission device.

The second reception device described above, characterized in that:
the second region includes a fifth region configured to store information indicating whether a signal generated using a multiple-input multiple-output (MIMO) scheme is receivable or not;
the second or fourth region includes a sixth region configured to store information indicating whether a signal generated using a phase change scheme that implements a phase change while regularly changing a phase change value is receivable or not, for at least one of transmission system signals that transmit data; and
the second reception device is configured to set a bit in the sixth region to a predetermined value when (i) the first region stores information indicating that the signal for transmitting data generated using the multi-carrier scheme is not receivable or when (ii) the first region stores information indicating that the signal for transmitting data generated using the multi-carrier scheme is receivable and the fifth region stores information indicating that the signal generated using the MIMO scheme is not receivable.

A second transmission device, configured to:
receive the control signal from the second reception device described above;
demodulate the received control signal to obtain the control information; and
based on the control information, determine a scheme to be used to generate a signal to be transmitted to the second reception device.

The second transmission device described above, characterized in that:
the second region includes a fifth region configured to store information indicating whether a signal generated using a multiple-input multiple-output (MIMO) scheme is receivable or not;
the second or fourth region includes a sixth region configured to store information indicating whether a signal generated using a phase change scheme that implements a phase change while regularly changing a phase change value is receivable or not, for at least one of transmission system signals that transmit data; and
the second transmission device is configured to determine a scheme to be used to generate a signal to be transmitted to the second reception device, without using a value of a bit in the sixth region, when (i) the first region includes information indicating that the signal for transmitting data generated using the multi-carrier scheme is not receivable or when (ii) the first region includes information indicating that the signal for transmitting data generated using the multi-carrier scheme is receivable and the fifth region includes information indicating that the signal generated using the MIMO scheme is not receivable.

Note that in this embodiment, the configuration of reception capability notification symbol3502inFIG.35is exemplified as the configuration illustrated inFIG.94, but the configuration is not limited to this example; for example, a different reception capability notification symbol may be included inFIG.94. For example, the configuration may be the one illustrated inFIG.98. InFIG.98, components that operate the same as inFIG.94share like reference marks. Accordingly, repeated description thereof will be omitted. InFIG.98, other reception capability notification symbol9801is added as a reception capability notification symbol. Other reception capability notification symbol9801is, for example, a reception capability notification symbol that does not correspond to reception capability notification symbol9401related to a single-carrier scheme and an OFDM scheme, does not correspond to reception capability notification symbol9402related to a single-carrier scheme, and does not correspond to reception capability notification symbol9403related to an OFDM scheme. Even such a reception capability notification symbol can be implemented in the same manner as described above.
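Returning to Feature #1 above, the dependency between the first region and the third and fourth regions can be modeled compactly, as in the following hypothetical sketch (all names are assumptions): the third region is populated only when single-carrier reception is indicated, the fourth only when multi-carrier reception is indicated, and each becomes a null or reserved region otherwise.

```python
# Sketch (hypothetical model of Feature #1's control information).
from dataclasses import dataclass
from typing import Dict, Optional

@dataclass
class ControlInformation:
    sc_receivable: bool   # first region: single-carrier signal receivable?
    mc_receivable: bool   # first region: multi-carrier signal receivable?
    common_schemes: Dict[str, bool]        # second region
    sc_schemes: Optional[Dict[str, bool]]  # third region (None = null/reserved)
    mc_schemes: Optional[Dict[str, bool]]  # fourth region (None = null/reserved)

def build(sc_ok: bool, mc_ok: bool, common: Dict[str, bool],
          sc: Dict[str, bool], mc: Dict[str, bool]) -> ControlInformation:
    return ControlInformation(
        sc_receivable=sc_ok,
        mc_receivable=mc_ok,
        common_schemes=common,
        sc_schemes=sc if sc_ok else None,  # null/reserved when not receivable
        mc_schemes=mc if mc_ok else None,  # null/reserved when not receivable
    )
```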
Moreover, inFIG.94, the order of the reception capability notification symbols is exemplified as: reception capability notification symbol9401related to a single-carrier scheme and an OFDM scheme, reception capability notification symbol9402related to a single-carrier scheme, and reception capability notification symbol9403related to an OFDM scheme, but the order is not limited to this example. An alternative example will be given next. InFIG.94, suppose bits r0, r1, r2, and r3 are provided as reception capability notification symbol9401related to a single-carrier scheme and an OFDM scheme. InFIG.94, suppose bits r4, r5, r6, and r7 are provided as reception capability notification symbol9402related to a single-carrier scheme. InFIG.94, suppose bits r8, r9, r10, and r11 are provided as reception capability notification symbol9403related to an OFDM scheme. In this example, inFIG.94, assume bits r1, r2, r3, r4, r5, r6, r7, r8, r9, r10, and r11 are arranged in the stated order, and, for example, are arranged in the stated order in a frame. As one alternative example, bits r1, r2, r3, r4, r5, r6, r7, r8, r9, r10, and r11 may be reorganized, such as in the order of bits r7, r2, r4, r6, r1, r8, r9, r5, r10, r3, and r11, and arranged in the stated order in a frame. Note that the order in which the bits are arranged is not limited to these arrangements. Moreover, inFIG.94, suppose fields s0, s1, s2, and s3 are provided as reception capability notification symbol9401related to a single-carrier scheme and an OFDM scheme. InFIG.94, suppose fields s4, s5, s6, and s7 are provided as reception capability notification symbol9402related to a single-carrier scheme. InFIG.94, suppose fields s8, s9, s10, and s11 are provided as reception capability notification symbol9403related to an OFDM scheme. Note that a field is configured of one or more bits. In this example, inFIG.94, assume fields s1, s2, s3, s4, s5, s6, s7, s8, s9, s10, and s11 are arranged in the stated order, and, for example, are arranged in the stated order in a frame. As one alternative example, fields s1, s2, s3, s4, s5, s6, s7, s8, s9, s10, and s11 may be reorganized, such as in the order of fields s7, s2, s4, s6, s1, s8, s9, s5, s10, s3, and s11, and arranged in the stated order in a frame. Note that the order in which the fields are arranged is not limited to these arrangements. Moreover, inFIG.98, the order of the reception capability notification symbols is exemplified as: reception capability notification symbol9401related to a single-carrier scheme and an OFDM scheme, reception capability notification symbol9402related to a single-carrier scheme, reception capability notification symbol9403related to an OFDM scheme, and other reception capability notification symbol9801, but the order is not limited to this example. An alternative example will be given next. InFIG.98, suppose bits r0, r1, r2, and r3 are provided as reception capability notification symbol9401related to a single-carrier scheme and an OFDM scheme. InFIG.98, suppose bits r4, r5, r6, and r7 are provided as reception capability notification symbol9402related to a single-carrier scheme. InFIG.98, suppose bits r8, r9, r10, and r11 are provided as reception capability notification symbol9403related to an OFDM scheme, and suppose bits r12, r13, r14, and r15 are provided as other reception capability notification symbol9801. 
In this example, inFIG.98, assume bits r1, r2, r3, r4, r5, r6, r7, r8, r9, r10, r11, r12, r13, r14, and r15 are arranged in the stated order, and, for example, are arranged in the stated order in a frame. As one alternative example, bits r1, r2, r3, r4, r5, r6, r7, r8, r9, r10, r11, r12, r13, r14, and r15 may be reorganized, such as in the order of bits r7, r2, r4, r6, r13, r1, r8, r12, r9, r5, r10, r3, r15, r11, and r14, and arranged in the stated order in a frame. Note that the order in which the bits are arranged is not limited to these arrangements. Moreover, inFIG.98, suppose fields s0, s1, s2, and s3 are provided as reception capability notification symbol9401related to a single-carrier scheme and an OFDM scheme. InFIG.98, suppose fields s4, s5, s6, and s7 are provided as reception capability notification symbol9402related to a single-carrier scheme. InFIG.98, suppose fields s8, s9, s10, and s11 are provided as reception capability notification symbol9403related to an OFDM scheme, and suppose fields s12, s13, s14, and s15 are provided as other reception capability notification symbol9801. Note that a field is configured of one or more bits. In this example, inFIG.98, assume fields s1, s2, s3, s4, s5, s6, s7, s8, s9, s10, s11, s12, s13, s14, and s15 are arranged in the stated order, and, for example, are arranged in the stated order in a frame. As one alternative example, fields s1, s2, s3, s4, s5, s6, s7, s8, s9, s10, s11, s12, s13, s14, and s15 may be reorganized, such as in the order of fields s7, s2, s4, s6, s13, s1, s8, s12, s9, s5, s10, s3, s15, s11, and s14, and arranged in the stated order in a frame. Note that the order in which the fields are arranged is not limited to these arrangements.

Note that information transmitted in a reception capability notification symbol related to a single-carrier scheme may not be explicitly indicated as information for a single-carrier scheme. The information transmitted in a reception capability notification symbol related to a single-carrier scheme described in this embodiment is, for example, information for notifying a selectable scheme when the transmission device transmits a signal via a single-carrier scheme. In another example, the information transmitted in a reception capability notification symbol related to a single-carrier scheme described in this embodiment is, in the case that the transmission device transmits signals using a scheme other than a single-carrier scheme, such as an OFDM scheme, not used (i.e., ignored) in the selection of a scheme to be used for signal transmission. In yet another example, the information transmitted in a reception capability notification symbol related to a single-carrier scheme described in this embodiment is, in the case that, for example, the reception device does not support reception of a single-carrier scheme signal (in the case that the transmission device is notified that the reception device does not support such reception), information that is transmitted in a region determined to be a null or reserved region by the transmission device or the reception device. As described above, although such a reception capability notification symbol is referred to as reception capability notification symbol9402related to a single-carrier scheme, this is merely one non-limiting example; such a reception capability notification symbol may be referred to as something else. For example, such a symbol may be referred to as a symbol for indicating reception ability of a (first) terminal.
Moreover, reception capability notification symbol 9402 related to a single-carrier scheme may include information other than information for notifying of a receivable signal.

Similarly, information transmitted in a reception capability notification symbol related to an OFDM scheme may not be explicitly indicated as information for an OFDM scheme. The information transmitted in a reception capability notification symbol related to an OFDM scheme described in this embodiment is, for example, information for notifying a selectable scheme when the transmission device transmits a signal via an OFDM scheme. In another example, when the transmission device transmits signals using a scheme other than an OFDM scheme, such as a single-carrier scheme, this information is not used (i.e., is ignored) in the selection of a scheme to be used for signal transmission. In yet another example, when the reception device does not support reception of an OFDM scheme signal, this information is transmitted in a region determined to be a null or reserved region by the transmission device or the reception device.

As described above, although such a reception capability notification symbol is referred to as reception capability notification symbol 9403 related to an OFDM scheme, this is merely one non-limiting example; such a reception capability notification symbol may be referred to as something else. For example, such a symbol may be referred to as a symbol for indicating reception ability of a (second) terminal. Moreover, reception capability notification symbol 9403 related to an OFDM scheme may include information other than information for notifying of a receivable signal.

Although reception capability notification symbol 9401 related to a single-carrier scheme and an OFDM scheme is referred to as such, this is merely one non-limiting example; such a reception capability notification symbol may be referred to as something else. For example, such a symbol may be referred to as a symbol for indicating reception ability of a (third) terminal. Moreover, reception capability notification symbol 9401 related to a single-carrier scheme and an OFDM scheme may include information other than information for notifying of a receivable signal.

As described above, when the terminal forms and transmits a reception capability notification symbol, and the base station receives that symbol, refers to the validity indicated by its values, and generates and transmits a modulated signal accordingly, the terminal can receive a modulated signal that it is capable of demodulating. This makes it possible to accurately obtain data, achieving the advantageous effect of improved data reception quality. Moreover, the terminal can determine the validity indicated by each of the bits (fields) of the reception capability notification symbol while generating data for each of the bits (fields), making it possible to transmit the reception capability notification symbol to the base station with certainty, which achieves the advantageous effect of improved communication quality.
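As a concrete illustration of the bit and field reorganization described above for FIG. 94 and FIG. 98, the following is a minimal sketch (the patent defines no code; the bit values, the function name, and the frame container here are illustrative assumptions):

```python
# Minimal sketch (illustrative, not from the disclosure): placing reception
# capability notification bits into a frame region in a reorganized order.
# The permutation is the example order given above for bits r1 through r11.

def arrange_bits_in_frame(bits: dict) -> list:
    """Return the capability bits in the reorganized transmission order."""
    order = ["r7", "r2", "r4", "r6", "r1", "r8", "r9", "r5", "r10", "r3", "r11"]
    return [bits[name] for name in order]

# r1-r3 belong to symbol 9401, r4-r7 to symbol 9402, r8-r11 to symbol 9403.
bits = {f"r{i}": i % 2 for i in range(1, 12)}  # toy bit values
frame_region = arrange_bits_in_frame(bits)
print(frame_region)
```

The same pattern applies to fields s1 through s11 (or bits r1 through r15 in FIG. 98), since a field is simply one or more bits; only the permutation table changes.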
Embodiment G1

In this embodiment, additional information pertaining to Embodiment A1, Embodiment A2, Embodiment A4, and Embodiment A11 will be given.

As illustrated in FIG. 37 and FIG. 38, the terminal transmits, to the base station or AP, which is the communication partner of the terminal, data related to information 3702 related to support for reception of a plurality of streams, as a part of the reception capability notification symbol. In Embodiments A1, A2, A4, A11, etc., the terminology "data related to information 3702 related to support for reception of a plurality of streams" is used, but this is merely a non-limiting example; any reception capability notification symbol that can identify whether there is support for reception of a plurality of streams or not can be implemented in the same manner. This will be discussed below.

For example, consider a modulation and coding scheme (MCS), such as the ones described below.

MCS #1: Data symbol transmission via error correction encoding scheme #A, modulation scheme QPSK, and single stream transmission. This makes it possible to realize a transmission speed of 10 Mbps (bps: bits per second).

MCS #2: Data symbol transmission via error correction encoding scheme #A, modulation scheme 16QAM, and single stream transmission. This makes it possible to realize a transmission speed of 20 Mbps.

MCS #3: Data symbol transmission via error correction encoding scheme #B, modulation scheme QPSK, and single stream transmission. This makes it possible to realize a transmission speed of 15 Mbps.

MCS #4: Data symbol transmission via error correction encoding scheme #B, modulation scheme 16QAM, and single stream transmission. This makes it possible to realize a transmission speed of 30 Mbps.

MCS #5: Data symbol transmission via error correction encoding scheme #A, modulation scheme QPSK, and transmission of a plurality of streams from a plurality of antennas. This makes it possible to realize a transmission speed of 20 Mbps.

MCS #6: Data symbol transmission via error correction encoding scheme #A, modulation scheme 16QAM, and transmission of a plurality of streams from a plurality of antennas. This makes it possible to realize a transmission speed of 40 Mbps.

MCS #7: Data symbol transmission via error correction encoding scheme #B, modulation scheme QPSK, and transmission of a plurality of streams from a plurality of antennas. This makes it possible to realize a transmission speed of 30 Mbps.

MCS #8: Data symbol transmission via error correction encoding scheme #B, modulation scheme 16QAM, and transmission of a plurality of streams from a plurality of antennas. This makes it possible to realize a transmission speed of 60 Mbps.

Here, the terminal transmits information, via the reception capability notification symbol, to the base station or AP, which is the communication partner, indicating that demodulation for MCS #1, MCS #2, MCS #3, and MCS #4 is possible, or that demodulation for MCS #1, MCS #2, MCS #3, MCS #4, MCS #5, MCS #6, MCS #7, and MCS #8 is possible. In such cases, the communication partner is notified either that demodulation for single stream transmission is possible, or that demodulation for single stream transmission is possible and demodulation for transmission of a plurality of streams from a plurality of antennas is possible, which achieves the same function as the notification via information 3702 related to support for reception of a plurality of streams.
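As a concrete reading of the above, here is a minimal sketch (hypothetical table and function names, not from the disclosure) of how reporting a demodulatable MCS set carries the same information as information 3702: reporting any MCS that uses a plurality of streams implicitly notifies the communication partner that reception of a plurality of streams is supported.

```python
# Minimal sketch (assumed names/values): each MCS entry records whether it
# uses a single stream or a plurality of streams. A terminal reports the MCS
# set it can demodulate; the report implies stream-reception capability.

MCS_TABLE = {
    1: {"coding": "#A", "mod": "QPSK",  "streams": 1, "mbps": 10},
    2: {"coding": "#A", "mod": "16QAM", "streams": 1, "mbps": 20},
    3: {"coding": "#B", "mod": "QPSK",  "streams": 1, "mbps": 15},
    4: {"coding": "#B", "mod": "16QAM", "streams": 1, "mbps": 30},
    5: {"coding": "#A", "mod": "QPSK",  "streams": 2, "mbps": 20},
    6: {"coding": "#A", "mod": "16QAM", "streams": 2, "mbps": 40},
    7: {"coding": "#B", "mod": "QPSK",  "streams": 2, "mbps": 30},
    8: {"coding": "#B", "mod": "16QAM", "streams": 2, "mbps": 60},
}

def supports_multi_stream(reported_mcs: set) -> bool:
    # Reporting any MCS that uses a plurality of streams implies support for
    # reception of a plurality of streams (same function as information 3702).
    return any(MCS_TABLE[m]["streams"] > 1 for m in reported_mcs)

def selectable_mcs(reported_mcs: set) -> dict:
    # The base station or AP limits its choice to what the terminal reported.
    return {m: MCS_TABLE[m] for m in sorted(reported_mcs)}

print(supports_multi_stream({1, 2, 3, 4}))              # False: single stream only
print(supports_multi_stream({1, 2, 3, 4, 5, 6, 7, 8}))  # True
```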
However, when the terminal notifies, via a reception capability notification symbol, the base station or AP, which is the communication partner, of an MCS set that the terminal can demodulate, there is an advantage that the terminal can notify the base station or AP of details regarding the MCS set that the terminal can demodulate.

Moreover, FIG. 35 illustrates an example of communication between base station or AP 3401 and terminal 3402 in FIG. 34, but the configuration of communication between base station or AP 3401 and terminal 3402 is not limited to the example illustrated in FIG. 35. For example, in Embodiments A1, A2, A4, A11, F1, etc., the transmission of a reception capability notification symbol by a terminal to a communication partner (for example, a base station or AP) is a critical aspect of the present disclosure, and it is this that allows the advantageous effects described in the embodiments to be achieved. Here, communication between the terminal and the communication partner of the terminal before transmission of the reception capability notification symbol by the terminal to the communication partner is not limited to the example illustrated in FIG. 35.

Other Variations, Etc.

Note that in the present specification, processed signal 106_A illustrated in, for example, FIG. 1, FIG. 44, and FIG. 73 may be transmitted from a plurality of antennas, and processed signal 106_B illustrated in, for example, FIG. 1, FIG. 44, and FIG. 73 may be transmitted from a plurality of antennas. Note that a configuration in which processed signal 106_A includes any one of, for example, signals 204A, 206A, 208A, and 210A is conceivable. Moreover, a configuration in which processed signal 106_B includes any one of, for example, signals 204B, 206B, 208B, and 210B is conceivable.

For example, assume there are N transmitting antennas, i.e., transmitting antennas 1 through N are provided. Note that N is an integer that is greater than or equal to 2. Here, the modulated signal transmitted from transmitting antenna k is expressed as $c_k$. Note that k is an integer that is greater than or equal to 1 and less than or equal to N. Moreover, assume that vector C including $c_1$ through $c_N$ is expressed as $C = (c_1, c_2, \ldots, c_N)^T$. Note that the transpose of vector A is expressed as $A^T$. Here, when the precoding matrix (weighting matrix) is G, the following expression holds true:

$$C = G \begin{pmatrix} d_a(i) \\ d_b(i) \end{pmatrix} \qquad \text{Equation (318)}$$

Note that $d_a(i)$ is processed signal 106_A, $d_b(i)$ is processed signal 106_B, and i is a symbol number. Moreover, G is a matrix having N rows and 2 columns, and may be a function of i. Moreover, G may be switched at some given timing (i.e., may be a function of frequency or time).

Moreover, "processed signal 106_A is transmitted from a plurality of transmitting antennas and processed signal 106_B is also transmitted from a plurality of transmitting antennas" and "processed signal 106_A is transmitted from a single transmitting antenna and processed signal 106_B is also transmitted from a single transmitting antenna" may be switched in the transmission device. Regarding the timing of the switching, the switching may be performed per frame, or in accordance with the decision to transmit a modulated signal (i.e., at any arbitrary timing).
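The following is a minimal numerical sketch of Equation (318) under assumed values (N = 4 transmitting antennas and a randomly chosen example G; neither is prescribed by the text):

```python
# Minimal sketch of Equation (318) with assumed values: N = 4 transmitting
# antennas, so G has 4 rows and 2 columns. da(i) and db(i) stand for
# processed signals 106_A and 106_B at symbol number i.
import numpy as np

N = 4
rng = np.random.default_rng(0)

# Example precoding (weighting) matrix G; in general G may depend on the
# symbol number i (i.e., on time and/or frequency).
G = (rng.standard_normal((N, 2)) + 1j * rng.standard_normal((N, 2))) / np.sqrt(2)

def precode(da_i: complex, db_i: complex) -> np.ndarray:
    """C = G (da(i), db(i))^T; C[k-1] is the signal for transmitting antenna k."""
    d = np.array([da_i, db_i])
    return G @ d

C = precode(1 + 1j, 1 - 1j)  # toy QPSK-like symbols
print(C.shape)  # (4,): one modulated sample per transmitting antenna
```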
Although only some exemplary embodiments of the present disclosure have been described in detail above, those skilled in the art will readily appreciate that many modifications are possible in the exemplary embodiments without materially departing from the novel teachings and advantages of the present disclosure. Accordingly, all such modifications are intended to be included within the scope of the present disclosure. INDUSTRIAL APPLICABILITY The present disclosure can be widely applied to communications systems that transmit modulated signals from a plurality of antennas.
841,689
11863265
To facilitate understanding, identical reference numerals have been used, where possible, to designate identical elements that are common to the figures. It is contemplated that elements disclosed in one aspect may be beneficially utilized on other aspects without specific recitation.

DETAILED DESCRIPTION

Aspects of the present disclosure provide apparatus, methods, processing systems, and computer readable mediums for precoding for uplink transmission, such as multi-panel uplink transmission. In certain systems, for codebook-based uplink transmission, the base station (BS) chooses the precoder for uplink transmission and signals the selected precoder to the user equipment (UE). The precoder maps one or more layers at the UE to one or more antenna ports. A layer may refer to a multiple-input multiple-output (MIMO) layer (e.g., a data stream). The precoders may be designed for single panel uplink transmission. However, in certain systems, such as new radio (NR) systems, the UE may support multi-panel uplink transmission. Accordingly, aspects of the present disclosure provide techniques for precoding for multi-panel uplink transmission. In some examples, an expanded codebook is provided. In some examples, the UE can indicate preferred or selected precoders to the BS. In some examples, the UE can indicate different precoders for different scenarios, such as depending on single or multi-panel uplink transmission. For codebook-based transmission, one or more precoders are selected from a defined codebook. For non-codebook based transmission, the one or more precoders are computed (e.g., based on measurements of reference signals (RSs)). In some examples, for non-codebook based transmission, the BS can transmit simultaneous reference signals (RSs) to the UE for the UE to compute precoders for multi-panel uplink transmission.

The following description provides examples of precoders for multi-panel uplink transmission, and is not limiting of the scope, applicability, or examples set forth in the claims. Changes may be made in the function and arrangement of elements discussed without departing from the scope of the disclosure. Various examples may omit, substitute, or add various procedures or components as appropriate. For instance, the methods described may be performed in an order different from that described, and various steps may be added, omitted, or combined. Also, features described with respect to some examples may be combined in some other examples. For example, an apparatus may be implemented or a method may be practiced using any number of the aspects set forth herein. In addition, the scope of the disclosure is intended to cover such an apparatus or method which is practiced using other structure, functionality, or structure and functionality in addition to, or other than, the various aspects of the disclosure set forth herein. It should be understood that any aspect of the disclosure disclosed herein may be embodied by one or more elements of a claim.

The word "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any aspect described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other aspects.

In general, any number of wireless networks may be deployed in a given geographic area. Each wireless network may support a particular radio access technology (RAT) and may operate on one or more frequencies. A RAT may also be referred to as a radio technology, an air interface, etc.
A frequency may also be referred to as a carrier, a subcarrier, a frequency channel, a tone, a subband, etc. Each frequency may support a single RAT in a given geographic area in order to avoid interference between wireless networks of different RATs. In some cases, NR or 5G RAT networks may be deployed.

The techniques described herein may be used for various wireless networks and radio technologies. While aspects may be described herein using terminology commonly associated with 3G, 4G, and/or 5G wireless technologies, aspects of the present disclosure can be applied in other generation-based communication systems, such as later technologies.

5G NR may support various wireless communication services, such as enhanced mobile broadband (eMBB) targeting wide bandwidth (e.g., 80 MHz or beyond), millimeter wave (mmW) targeting high carrier frequency (e.g., 25 GHz or beyond), massive machine type communications (mMTC) targeting non-backward compatible MTC techniques, and/or mission critical targeting ultra-reliable low-latency communications (URLLC). These services may include latency and reliability requirements. These services may also have different transmission time intervals (TTI) to meet respective quality of service (QoS) requirements. In addition, these services may co-exist in the same subframe. Beamforming may be supported and beam direction may be dynamically configured. MIMO transmissions with precoding may also be supported. MIMO configurations in the DL may support up to 8 transmit antennas with multi-layer DL transmissions of up to 8 streams and up to 2 streams per UE. Aggregation of multiple cells may be supported with up to 8 serving cells.

FIG. 1 illustrates an example wireless communication network 100 in which aspects of the present disclosure may be performed. For example, the wireless communication network 100 may be a New Radio (NR) or 5G network. As shown in FIG. 1, the wireless communication network 100 may be in communication with a core network 132. The core network 132 may be in communication with one or more base stations (BSs) 110 and/or user equipment (UE) 120 in the wireless communication network 100 via one or more interfaces.

As illustrated in FIG. 1, the wireless communication network 100 may include a number of base stations (BSs) 110a-z (each also individually referred to herein as BS 110 or collectively as BSs 110) and other network entities. A BS 110 may provide communication coverage for a particular geographic area, sometimes referred to as a "cell", which may be stationary or may move according to the location of a mobile BS 110. In some examples, the BSs 110 may be interconnected to one another and/or to one or more other base stations or network nodes (not shown) in wireless communication network 100 through various types of backhaul interfaces (e.g., a direct physical connection, a wireless connection, a virtual network, or the like) using any suitable transport network. In the example shown in FIG. 1, the BSs 110a, 110b and 110c may be macro BSs for the macro cells 102a, 102b and 102c, respectively.
The BS 110x may be a pico BS for a pico cell 102x. The BSs 110y and 110z may be femto BSs for the femto cells 102y and 102z, respectively. A BS may support one or multiple cells. A network controller 130 may couple to a set of BSs and provide coordination and control for these BSs. The network controller 130 may communicate with the BSs 110 via a backhaul. The BSs 110 communicate with UEs 120a-y (each also individually referred to herein as UE 120 or collectively as UEs 120) that may be dispersed throughout the wireless communication network 100. Each UE 120 may be stationary or mobile. Wireless communication network 100 may also include relay stations (e.g., relay station 110r) that receive a transmission of data and/or other information from an upstream station (e.g., a BS 110a or a UE 120r) and send a transmission of the data and/or other information to a downstream station (e.g., a UE 120 or a BS 110), or that relay transmissions between UEs 120.

According to certain aspects, the UEs 120 may be configured with multiple transmission configurations (e.g., antenna arrays/panels and/or beams) for uplink transmission to the BSs 110. For example, as shown in FIG. 1, the UE 120a has an uplink precoder determination manager that may be configured for determining the precoders according to aspects described herein, such as preferred or selected precoders that may be determined from an expanded codebook for multi-panel uplink transmission or computed based on simultaneous reference signals (RSs) transmitted from the BS 110a. The UE 120a may send an indication of the uplink precoders to the BS 110a. The BS 110a may receive the indication of uplink precoders from the UE 120a and may determine uplink precoders based on the indication. For example, as shown in FIG. 1, the BS 110a has an uplink precoder determination manager that may be configured for determining the uplink precoders for the UE 120a, according to aspects described herein.

FIG. 2 illustrates example components of BS 110a and UE 120a (as depicted in FIG. 1), which may be used to implement aspects of the present disclosure.

At the BS 110a, a transmit processor 220 may receive data from a data source 212 and control information from a controller/processor 240. The control information may be for the physical broadcast channel (PBCH), physical control format indicator channel (PCFICH), physical hybrid ARQ indicator channel (PHICH), physical downlink control channel (PDCCH), group common PDCCH (GC PDCCH), etc. The data may be for the physical downlink shared channel (PDSCH), etc. The processor 220 may process (e.g., encode and symbol map) the data and control information to obtain data symbols and control symbols, respectively. The processor 220 may also generate reference symbols, e.g., for the primary synchronization signal (PSS), secondary synchronization signal (SSS), and channel state information reference signal (CSI-RS). A transmit (TX) multiple-input multiple-output (MIMO) processor 230 may perform spatial processing (e.g., precoding) on the data symbols, the control symbols, and/or the reference symbols, if applicable, and may provide output symbol streams to the modulators (MODs) 232a through 232t. Each modulator 232 may process a respective output symbol stream (e.g., for OFDM, etc.) to obtain an output sample stream. Each modulator may further process (e.g., convert to analog, amplify, filter, and upconvert) the output sample stream to obtain a downlink signal. Downlink signals from modulators 232a through 232t may be transmitted via the antennas 234a through 234t, respectively.
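As a rough illustration of the per-modulator OFDM processing mentioned above (map subcarrier symbols to time-domain samples, then prepend a cyclic prefix before analog conversion), here is a minimal sketch; the FFT size and cyclic-prefix length are illustrative assumptions, not values from this disclosure.

```python
# Minimal OFDM modulation sketch (assumed parameters: 64-point FFT,
# 16-sample cyclic prefix). Each modulator would apply processing like this
# to its output symbol stream to obtain an output sample stream.
import numpy as np

N_FFT, N_CP = 64, 16

def ofdm_modulate(freq_symbols: np.ndarray) -> np.ndarray:
    """Map one OFDM symbol's subcarrier values to time-domain samples with CP."""
    assert freq_symbols.shape == (N_FFT,)
    time_samples = np.fft.ifft(freq_symbols) * np.sqrt(N_FFT)
    return np.concatenate([time_samples[-N_CP:], time_samples])  # prepend CP

rng = np.random.default_rng(1)
qpsk = (rng.choice([-1, 1], N_FFT) + 1j * rng.choice([-1, 1], N_FFT)) / np.sqrt(2)
samples = ofdm_modulate(qpsk)
print(samples.shape)  # (80,) = cyclic prefix + useful part
```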
At the UE 120a, the antennas 252a through 252r may receive the downlink signals from the BS 110a and may provide received signals to the demodulators (DEMODs) in transceivers 254a through 254r, respectively. Each demodulator 254 may condition (e.g., filter, amplify, downconvert, and digitize) a respective received signal to obtain input samples. Each demodulator may further process the input samples (e.g., for OFDM, etc.) to obtain received symbols. A MIMO detector 256 may obtain received symbols from all the demodulators 254a through 254r, perform MIMO detection on the received symbols if applicable, and provide detected symbols. A receive processor 258 may process (e.g., demodulate, deinterleave, and decode) the detected symbols, provide decoded data for the UE 120a to a data sink 260, and provide decoded control information to a controller/processor 280.

On the uplink, at UE 120a, a transmit processor 264 may receive and process data (e.g., for the physical uplink shared channel (PUSCH)) from a data source 262 and control information (e.g., for the physical uplink control channel (PUCCH)) from the controller/processor 280. The transmit processor 264 may also generate reference symbols for a reference signal (e.g., for the sounding reference signal (SRS)). The symbols from the transmit processor 264 may be precoded by a TX MIMO processor 266 if applicable, further processed by the demodulators in transceivers 254a through 254r (e.g., for SC-FDM, etc.), and transmitted to the BS 110a. As shown in FIG. 2, the transmit processor 264 has an uplink precoder determination module that may be configured for determining one or more uplink precoders according to aspects described herein.

At the BS 110a, the uplink signals from the UE 120a may be received by the antennas 234, processed by the demodulators 232, detected by a MIMO detector 236 if applicable, and further processed by a receive processor 238 to obtain decoded data and control information sent by the UE 120a. The receive processor 238 may provide the decoded data to a data sink 239 and the decoded control information to the controller/processor 240.

Antennas 252, processors 266, 258, 264, and/or controller/processor 280 of the UE 120a and/or antennas 234, processors 220, 230, 238, and/or controller/processor 240 of the BS 110a may be used to perform the various techniques and methods described herein for precoding for multi-panel uplink transmission. The controllers/processors 240 and 280 may direct the operation at the BS 110a and the UE 120a, respectively. For example, as shown in FIG. 2, the processor 240 has an uplink precoder determination manager 241 and the processor 280 has an uplink precoder determination manager 281 that may be configured for uplink precoders for multi-panel uplink transmission, according to aspects described herein. The memories 242 and 282 may store data and program codes for BS 110a and UE 120a, respectively. A scheduler 244 may schedule UEs for data transmission on the downlink and/or uplink.

NR may utilize orthogonal frequency division multiplexing (OFDM) and/or single-carrier frequency division multiplexing (SC-FDM) on the uplink and/or downlink. OFDM and SC-FDM partition the system bandwidth into multiple orthogonal subcarriers, which are also commonly referred to as tones, bins, etc. Each subcarrier may be modulated with data. Modulation symbols may be sent in the frequency domain with OFDM and in the time domain with SC-FDM. The spacing between adjacent subcarriers may be fixed, and the total number of subcarriers may be dependent on the system bandwidth. The system bandwidth may also be partitioned into subbands.
For example, a subband may cover multiple resource blocks (RBs).

FIG. 3 is a diagram showing an example of a frame format 300 for NR. The transmission timeline for each of the downlink and uplink may be partitioned into units of radio frames. Each radio frame may have a predetermined duration (e.g., 10 ms) and may be partitioned into 10 subframes, each of 1 ms, with indices of 0 through 9. Each subframe may include a variable number of slots depending on the subcarrier spacing (SCS). Each slot may include a variable number of symbol periods (e.g., 7 or 14 symbols) depending on the SCS. The symbol periods in each slot may be assigned indices. A mini-slot, which may be referred to as a sub-slot structure, refers to a transmit time interval having a duration less than a slot (e.g., 2, 3, or 4 symbols). Each symbol in a slot may indicate a link direction (e.g., DL, UL, or flexible) for data transmission and the link direction for each subframe may be dynamically switched. The link directions may be based on the slot format. Each slot may include DL/UL data as well as DL/UL control information.

In some examples, access to the air interface may be scheduled. A scheduling entity (e.g., a BS) allocates resources for communication among some or all devices and equipment within its service area or cell. The scheduling entity may be responsible for scheduling, assigning, reconfiguring, and releasing resources for one or more subordinate entities. That is, for scheduled communication, subordinate entities utilize resources allocated by the scheduling entity. Base stations are not the only entities that may function as a scheduling entity. In some examples, a UE may function as a scheduling entity and may schedule resources for one or more subordinate entities (e.g., one or more other UEs), and the other UEs may utilize the resources scheduled by the UE for wireless communication. In some examples, a UE may function as a scheduling entity in a peer-to-peer (P2P) network, and/or in a mesh network. In a mesh network example, UEs may communicate directly with one another in addition to communicating with a scheduling entity.

In certain systems, a user equipment (UE) may be able to transmit uplink signals with different transmission configurations. The uplink transmissions with different transmission configurations may be simultaneous (e.g., actually simultaneous or near simultaneous, such as in a same transmission time interval (TTI)), and may use the same frequency band. The uplink transmissions may be to the serving base station (BS). As used herein, a transmission configuration may be associated with, but not limited to, transmission reception points (TRPs), antennas, antenna arrays/panels, beams, channels, links, and/or quasi co-location (QCL) groups. In some cases, the UE can transmit simultaneous uplink transmissions using different transmission configurations. Such transmissions may be referred to as multi-panel uplink transmissions. In some examples, a UE may have up to sixteen antennas in one array/panel, and the UE may have multiple arrays/panels which may be located at various locations of the UE. In some examples, different arrays may use different beams to form multiple links. The different antennas, antenna panels, and/or beams may cover different spatial directions.
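To make the notion of a transmission configuration concrete, here is a minimal sketch (all type and field names are illustrative assumptions, not from this disclosure): each uplink transmission is tagged with the panel, beam, and QCL group it uses, and two transmissions form a multi-panel pair when they occupy the same TTI on different panels.

```python
# Minimal sketch (assumed field names): a transmission configuration groups
# the panel/beam/QCL attributes an uplink transmission is associated with.
from dataclasses import dataclass

@dataclass(frozen=True)
class TxConfig:
    panel_id: int   # antenna array/panel
    beam_id: int    # beam formed from that panel
    qcl_group: int  # quasi co-location group

@dataclass
class UplinkTx:
    config: TxConfig
    tti: int        # transmission time interval index

def simultaneous_multi_panel(a: UplinkTx, b: UplinkTx) -> bool:
    """Multi-panel uplink: different panels used in the same TTI."""
    return a.tti == b.tti and a.config.panel_id != b.config.panel_id

tx1 = UplinkTx(TxConfig(panel_id=0, beam_id=2, qcl_group=0), tti=7)
tx2 = UplinkTx(TxConfig(panel_id=1, beam_id=5, qcl_group=1), tti=7)
print(simultaneous_multi_panel(tx1, tx2))  # True
```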
Simultaneous uplink transmissions may allow increased throughput (e.g., by simultaneously transmitting data to the BS using the multiple antennas, beams, and/or panels) and/or increased reliability (e.g., by sending the same information from the multiple antennas, beams, and/or panels).

In certain systems, such as NR (new radio or 5G) systems, multi-panel uplink transmission may be configured for physical uplink shared channel (PUSCH) and/or sounding reference signal (SRS) transmissions by the UE. In some examples, the UE is configured with one or more SRS resource sets configuring SRS resources for SRS transmission. Each SRS resource set may be associated with a UE antenna panel for both codebook-based (e.g., beamformed) and non-codebook based (e.g., non-beamformed) PUSCH transmission. In some examples, the SRS resource indicator (SRI) field in downlink control information (DCI) may be used to indicate (by the BS) and select (by the UE) SRS resources from the configured SRS resource sets. For example, the BS and UE may be configured with a table or mapping of the SRI field (e.g., SRI values) to which SRS resource from which SRS resource set is to be used for a multi-panel uplink transmission. In some examples, the SRI in the DCI may indicate multiple SRS resources from one SRS resource set. In some examples, of the multiple SRS resources indicated by the BS, the UE may select one to use for uplink transmission.

FIG. 4 illustrates an example multi-panel uplink transmission scenario leading to different signal paths, which may cause different amounts of interference by different uplink signals transmitted from the UE via different transmission configurations, in accordance with certain aspects of the present disclosure. As shown in FIG. 4, the UE 404 can send a first uplink transmission 410 to the BS 402 with a first transmission configuration (e.g., a first antenna, beam, and/or antenna panel). As shown in FIG. 4, the first uplink transmission with the first transmission configuration may be oriented generally towards the serving BS 402. The UE can send a second uplink transmission 408 using a second transmission configuration (e.g., a second antenna, beam, and/or antenna panel). In some examples, the first uplink transmission 410 and the second uplink transmission 408 may be transmitted simultaneously or concurrently. As shown in FIG. 4, the second uplink transmission 408 may be oriented generally in a different direction than the first uplink transmission 410, which may be toward a reflector 406 and/or a neighboring BS 412. Thus, in the example shown in FIG. 4, the uplink signal using the first transmission configuration may cause relatively little or no interference to the neighbor BS 412, while an uplink signal using the second transmission configuration may cause relatively higher interference to the neighbor BS 412.

The uplink transmission scenario illustrated in FIG. 4 is merely illustrative. It should be appreciated that many different scenarios are possible, and may lead to different signal paths and different amounts of interference by different uplink signals transmitted from the UE via different transmission configurations. For example, as shown in FIG. 5, the UE 504 can transmit multiple uplink transmissions 510, 512, 514, 516 using different transmission configurations for each of the uplink transmissions.
For example, the different transmission configurations may use different antenna panels, different beams in beamformed directions from one antenna panel, or both different antenna panels and different beamformed directions. As shown in FIG. 5, uplink transmissions 510, 512 may be oriented generally towards the serving BS 502, while uplink transmissions 514, 516 may be oriented generally towards a neighboring BS 518. The uplink transmissions can use only a single panel/array and/or beam at a time, or the UE can transmit simultaneous uplink transmissions using multiple different antenna panels/arrays and/or beams.

The serving BS 502, UE 504, and neighbor BSs 518, 520 may include any number of arrays, each array including any number of antennas. The antennas and/or antenna panels/arrays may be at any location on the front, sides, or back of the UE, and there may be any number of uplink transmissions transmitted via the multiple antennas and/or antenna panels. There may be various numbers of neighboring BSs and/or other UEs interfered with by uplink transmissions from the UE 504. Further, there could be various numbers of signal reflectors, at multiple different possible locations in the system, that reflect signals in any of various directions, and any one signal could be reflected via multiple signal reflectors, which can result in various levels of interference and/or potential interference caused by uplink transmissions via the different antenna panels/arrays and/or beams to one or more neighboring BSs.

As will be discussed in more detail below, the UE can perform precoder selection for uplink transmission. As shown in FIG. 4 and FIG. 5, the UE 404 and the UE 504 can include a precoder manager 405 and 505, respectively. The precoder managers 405 and 505 may be configured to select precoders for uplink transmission. As shown in FIGS. 4 and 5, and discussed herein, the precoder selection may be codebook-based or non-codebook-based. Further, codebook-based precoder selection may be based on an expanded codebook to accommodate multi-panel uplink transmission.

Precoding is a preprocessing technique to support multi-layer (e.g., multi-stream) transmission. Precoding may exploit transmit diversity and increase throughput in multi-antenna wireless communications by applying weighting to the information stream (e.g., layer). For example, information bits may be encoded to produce one or more codewords. After scrambling and modulation, each codeword may be mapped to one or more layers (e.g., streams). The number of layers may be based in part on a rank indicator. Precoding applies precoders to map each layer to one or more UE antenna ports (e.g., logical channel ports that can be spread across a single or multiple antennas). The precoded layers can then be mapped to resource elements (REs) and the signal may be generated and transmitted via the corresponding antenna ports. A precoder may refer to a precoding matrix.

In certain systems, for codebook-based uplink transmission, the base station (BS) chooses the precoder for uplink transmission and signals the selected precoder to the user equipment (UE). The precoders may be designed for single panel uplink transmission. A codebook may include vectors and matrices that may correspond to precoders. Examples of codebooks may be found in the IEEE wireless standards. However, as discussed above, certain systems, such as NR systems, may support multi-panel uplink transmission.
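Before turning to the multi-panel extensions, the following is a minimal sketch of codebook-based precoding as just described (the toy codebook entries and the index-based selection are illustrative assumptions, not matrices from any standard): a precoder selected from a small codebook maps two layers to four UE antenna ports.

```python
# Minimal sketch of codebook-based precoding (toy codebook; entries are
# illustrative, not taken from any standardized codebook). A precoder W
# maps `layers` (2 streams) to 4 antenna ports: ports = W @ layers.
import numpy as np

CODEBOOK = {
    # index -> 4x2 precoding matrix (4 antenna ports, 2 layers)
    0: np.array([[1, 0], [0, 1], [0, 0], [0, 0]]) / np.sqrt(2),   # uses 2 ports
    1: np.array([[1, 0], [0, 1], [1, 0], [0, 1]]) / 2,            # repeats over port pairs
    2: np.array([[1, 0], [0, 1], [1j, 0], [0, -1j]]) / 2,         # adds a phase rotation
}

def apply_precoder(index: int, layers: np.ndarray) -> np.ndarray:
    W = CODEBOOK[index]  # index would be signaled by the BS, e.g., in DCI
    return W @ layers    # one output value per antenna port

layers = np.array([1 + 1j, 1 - 1j]) / np.sqrt(2)  # two modulation symbols
ports = apply_precoder(1, layers)
print(ports.shape)  # (4,)
```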
Example Precoders for Multi-Panel Uplink Transmission

Aspects of the present disclosure provide techniques for precoding for multi-panel uplink transmission. In some examples, an expanded codebook is provided for multi-panel uplink transmission. In some examples, a user equipment (UE) can indicate preferred or selected precoders to a base station (BS). In some examples, the UE can indicate different precoders for different scenarios, such as depending on single or multi-panel uplink transmission. In some examples, for non-codebook based transmission, the BS can transmit simultaneous reference signals (RSs) to the UE for the UE to compute precoders for multi-panel uplink transmission.

Example Expanded UE Codebook for Multi-Panel Uplink Transmission

According to certain aspects, an expanded UE codebook may be used at the UE, for example, to support multi-panel uplink transmission. In certain systems, the UE codebook contains precoding matrices mapping up to four layers to up to four UE antenna ports. For example, the UE codebook may contain precoding matrices mapping one or two layers to two or four UE antenna ports (e.g., 1-to-2, 1-to-4, 2-to-2, 2-to-4 layers to UE antenna ports), mapping three layers to four UE antenna ports (3-to-4), and/or mapping four layers to four UE antenna ports (4-to-4). In some examples, a multi-panel uplink transmission may use two panels, each panel transmitting one layer. In this case, the UE codebook may be sufficient. However, in other cases, it may be desirable to map more than four layers and/or more than four UE antenna ports. Thus, an expanded UE codebook may be used for multi-panel uplink transmission. For example, the expanded UE codebook may contain precoding matrices mapping to five, six, or seven (or more) UE antenna ports. With a higher number of UE antenna ports, the number of layers mapped to the UE antenna ports can also be higher, such as five or more layers.

FIG. 6 is a flow diagram illustrating example operations 600 for wireless communication, in accordance with certain aspects of the present disclosure. The operations 600 may be performed, for example, by a UE (e.g., such as a UE 120a in the wireless communication network 100). Operations 600 may be implemented as software components that are executed and run on one or more processors (e.g., processor 280 of FIG. 2). Further, the transmission and reception of signals by the UE in operations 600 may be enabled, for example, by one or more antennas (e.g., antennas 252 of FIG. 2). In certain aspects, the transmission and/or reception of signals by the UE may be implemented via a bus interface of one or more processors (e.g., processor 280) obtaining and/or outputting signals.

The operations 600 may begin, at 605, by determining one or more precoders to use for an uplink transmission (e.g., a multi-panel uplink transmission). The one or more precoders map a first number of transmit layers at the UE to a second number of UE antenna ports, where the first number of transmit layers, the second number of UE antenna ports, or both is associated with multiple transmit panels at the UE. In some examples, the first number of transmit layers, the second number of UE antenna ports, or both is greater than four. At 610, the UE sends an uplink transmission using the determined one or more precoders.

In some examples, the precoding matrix can be one of three different types: fully coherent, partially coherent, and non-coherent. The type of precoder used for uplink transmission may be dependent on UE capability.
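One way to picture such an expanded mapping, sketched minimally below under assumed panel sizes (two panels of four ports each; nothing here is prescribed by the disclosure), is a block-diagonal composition of per-panel precoders: each panel's layers map only to that panel's ports, and together the panels address more than four layers and more than four ports. As discussed next, whether this structure (or a single fully coherent large matrix) is usable depends on the UE's coherence capability.

```python
# Minimal sketch (assumed structure): composing per-panel precoders into one
# large multi-panel precoder. Two panels with 4 antenna ports each give
# 8 ports total; panel 1 carries 3 layers and panel 2 carries 2 layers.
import numpy as np

def block_diag2(A: np.ndarray, B: np.ndarray) -> np.ndarray:
    """Block-diagonal composition: each panel's layers map only to its ports."""
    Za = np.zeros((A.shape[0], B.shape[1]), dtype=complex)
    Zb = np.zeros((B.shape[0], A.shape[1]), dtype=complex)
    return np.block([[A, Za], [Zb, B]])

rng = np.random.default_rng(3)

def unit_cols(rows: int, cols: int) -> np.ndarray:
    W = rng.standard_normal((rows, cols)) + 1j * rng.standard_normal((rows, cols))
    return W / np.linalg.norm(W, axis=0)  # normalize each layer's column

W1 = unit_cols(4, 3)     # panel 1: 3 layers -> 4 ports
W2 = unit_cols(4, 2)     # panel 2: 2 layers -> 4 ports
W = block_diag2(W1, W2)  # 5 layers -> 8 ports (both greater than four)

layers = (rng.standard_normal(5) + 1j * rng.standard_normal(5)) / np.sqrt(2)
ports = W @ layers
print(W.shape, ports.shape)  # (8, 5) (8,)
```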
In certain systems (e.g., systems not using multi-panel uplink transmission), if the UE can transmit coherently over all antenna ports (e.g., the UE can control relative phases between all transmit chains), then a fully coherent precoding matrix can be used; if the UE can transmit coherently only over pairs of antenna ports, then a partially coherent precoding matrix can be used; and if the UE cannot transmit coherently over any antenna ports, then a non-coherent precoding matrix can be used.

According to certain aspects, for multi-panel uplink transmission, the types of precoders may be expanded to include the concept of panels. For example, if the UE can transmit coherently over all panels, then a fully coherent precoding matrix can be used; if the UE can transmit coherently over all antenna ports in a panel, but antenna ports over different panels cannot be transmitted coherently, then a partially coherent precoding matrix may be used; and if the UE cannot transmit coherently over any antenna ports, then a non-coherent precoding matrix can be used. The UE may transmit an indication of the UE capability to the BS. In some examples, the BS may determine (e.g., select) a precoding matrix to indicate to the UE based at least in part on the UE capability.

According to certain aspects, precoding matrix selection in downlink control information (DCI) may be expanded to accommodate an increased number of bits to support the expanded UE codebook. In some examples, the DCI specifies one large precoding matrix for all UE antenna panels. In some examples, the DCI specifies multiple smaller precoding matrices, such as one precoding matrix for each UE antenna panel. The larger matrix or multiple smaller matrices may be used in the DCI depending on the codebook (e.g., the size of the codebook). The UE may determine the uplink precoder based on the indication in the DCI from the BS.

Example UE Preferred/Selected Uplink Precoder Indication for Multi-Panel Uplink Transmission

According to certain aspects, the UE may signal a preferred precoder subset or one or more selected precoders for codebook-based uplink transmission.

FIG. 7 is another flow diagram illustrating example operations 700 for wireless communication that may be performed by a UE, in accordance with certain aspects of the present disclosure. Operations 700 may be implemented as software components that are executed and run on one or more processors (e.g., processor 280 of FIG. 2). Further, the transmission and reception of signals by the UE in operations 700 may be enabled, for example, by one or more antennas (e.g., antennas 252 of FIG. 2). In certain aspects, the transmission and/or reception of signals by the UE may be implemented via a bus interface of one or more processors (e.g., processor 280) obtaining and/or outputting signals.

The operations 700 may begin, at 705, by determining one or more precoders for uplink transmission. The one or more precoders map a first number of transmit layers at the UE to a second number of UE antenna ports. The one or more precoders may be from an expanded UE codebook, for example, the expanded UE codebook described above. At 710, the UE sends an indication to a BS of the one or more precoders.

According to certain aspects, the UE indicates a subset of preferred precoders to the BS. In some examples, the UE may indicate to the BS whether the UE prefers single-panel or multiple-panel transmission. The UE may indicate which UE antenna panels are preferred.
The UE may indicate different subsets of preferred precoders for single-panel and multi-panel transmission. The UE may indicate preferred precoders for each panel (e.g., for each of the indicated preferred panels). The UE preferences may be to avoid interference to neighbors, to avoid MPE (maximum permissible exposure) issues, and/or due to UE capabilities. In some examples, the UE may determine the preferred UE antenna panels based on a level of interference caused by transmission using the one or more UE antenna panels to one or more neighbor cells, based on a level of radiation to a user caused by transmission using the one or more UE antenna panels, based on a capability of the UE to transmit coherently within or across the one or more UE antenna panels, and/or based on a combination thereof.

The BS may take the indicated UE preferences into account when scheduling uplink transmission. For example, the BS may schedule the UE for single- or multi-panel transmission in accordance with the indication from the UE. The BS may schedule particular panels and/or indicate particular precoders for the uplink transmission in accordance with the indication from the UE. The BS may send DCI scheduling the uplink transmission and indicating the panels and/or precoders for the UE to use.

According to certain aspects, the UE selects the precoders to use for uplink transmissions and indicates (e.g., signals) the selected precoders to the BS. The UE may know the transmit power imbalance between multiple panels and can better select precoder matrices to use at the time of uplink transmission. For example, the power imbalance may be due to MPE issues, such as which panels are pointing towards the user (e.g., toward the human body) and may need reduced transmit power (e.g., to meet specific absorption rate (SAR) limits). In this case, the BS may not need to indicate precoders to the UE in DCI, because the UE decides the precoders for uplink transmission. In some examples, the UE indicates the selected precoders to the BS in the scheduling request.

FIG. 8 is a flow diagram illustrating example operations 800 for wireless communication, in accordance with certain aspects of the present disclosure. The operations 800 may be performed, for example, by a BS (e.g., such as a BS 110a in the wireless communication network 100). The operations 800 may be complementary operations by the BS to the operations 700 performed by the UE. Operations 800 may be implemented as software components that are executed and run on one or more processors (e.g., processor 240 of FIG. 2). Further, the transmission and reception of signals by the BS in operations 800 may be enabled, for example, by one or more antennas (e.g., antennas 234 of FIG. 2). In certain aspects, the transmission and/or reception of signals by the BS may be implemented via a bus interface of one or more processors (e.g., processor 240) obtaining and/or outputting signals.

The operations 800 may begin, at 805, by receiving an indication from a UE of one or more precoders for uplink transmission. The one or more precoders map a first number of transmit layers at the UE to a second number of UE antenna ports. The indicated one or more precoders may be preferred or selected precoders. At 810, the BS schedules the UE for uplink transmission based, at least in part, on the indication.
For example, as discussed above, the BS may schedule single-panel or multi-panel uplink transmission, the BS may schedule particular UE antenna panels, the BS may schedule the UE to use the indicated preferred precoders, etc.

Example Uplink Precoder Determination for Non-Codebook Based Multi-Panel Uplink Transmission

In some cases, uplink transmission is non-codebook based. The UE computes the precoders for the uplink transmission, for example, by estimating the channel based on reference signals from the BS. For example, the UE estimates the channel between the UE's transmit panel and the BS's receive panel based on a channel reciprocity assumption (assuming the UL channel is the same as the DL channel). According to certain aspects, for multi-panel uplink transmission, the UE estimates the channel for multiple panels, for example, based on RSs transmitted simultaneously by the BS.

FIG. 9 is another flow diagram illustrating example operations 900 for wireless communication that may be performed by a UE, in accordance with certain aspects of the present disclosure. Operations 900 may be implemented as software components that are executed and run on one or more processors (e.g., processor 280 of FIG. 2). Further, the transmission and reception of signals by the UE in operations 900 may be enabled, for example, by one or more antennas (e.g., antennas 252 of FIG. 2). In certain aspects, the transmission and/or reception of signals by the UE may be implemented via a bus interface of one or more processors (e.g., processor 280) obtaining and/or outputting signals.

The operations 900 may begin, at 905, by receiving a first RS, from a first port of a BS, with a first UE antenna panel. In some examples, the first RS is a channel state information RS (CSI-RS). At 910, the UE receives a second RS (e.g., a second CSI-RS), from a second port of the BS, with a second UE antenna panel. In some examples, the BS transmits the first and second RSs simultaneously (or near simultaneously). At 915, the UE computes one or more precoders to use for uplink transmission based on the first and second RSs.

For a multi-panel uplink transmission, the UE may rely on cross-link interference to compute the precoders. The cross-link interference can be estimated from the CSI-RS (e.g., the CSI-RS associated with all sounding reference signal (SRS) resource sets corresponding to multi-panel uplink transmission). The cross-link interference can be efficiently estimated if the CSI-RSs are transmitted simultaneously.

FIG. 10 is a diagram illustrating example CSI-RS transmissions from two BS ports received at two UE antenna panels for computing precoders for non-codebook based uplink transmission, in accordance with certain aspects of the present disclosure. As shown in FIG. 10, the first UE antenna panel (Panel 1) is associated with a first SRS resource set (SRSResourceSet 1) associated with CSI-RS 1 transmitted from the first port of the BS (CSI-RS port 1) to the UE, and the second UE antenna panel (Panel 2) is associated with a second SRS resource set (SRSResourceSet 2) associated with CSI-RS 2 transmitted from the second port of the BS to the UE. If CSI-RS 1 and CSI-RS 2 are transmitted simultaneously, the UE can estimate the cross-link from Panel 1 to the CSI-RS 2 port at the BS, without Panel 1 being explicitly associated with CSI-RS 2, and similarly the UE can estimate the cross-link from Panel 2 to the CSI-RS 1 port at the BS, without Panel 2 being explicitly associated with CSI-RS 1.
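As a minimal sketch of this non-codebook computation (the shapes and the SVD-based beam choice are illustrative assumptions; the disclosure does not prescribe a particular algorithm), a panel estimates its own downlink channel from its CSI-RS, assumes by reciprocity that the uplink channel equals it, and can additionally use the simultaneously received cross-link CSI-RS to gauge the leakage its precoder would cause toward the other port:

```python
# Minimal sketch (assumed shapes/values): reciprocity-based precoder
# computation at one UE panel, plus a cross-link leakage estimate.
import numpy as np

rng = np.random.default_rng(2)

def estimate_dl_channel(n_panel_ants: int, n_bs_ports: int) -> np.ndarray:
    # Stand-in for channel estimation from a measured CSI-RS.
    return (rng.standard_normal((n_bs_ports, n_panel_ants))
            + 1j * rng.standard_normal((n_bs_ports, n_panel_ants))) / np.sqrt(2)

def panel_precoder(H_dl: np.ndarray) -> np.ndarray:
    """Reciprocity: take UL channel ~= DL channel, then beam along the
    dominant right singular vector to maximize power at the BS port."""
    _, _, Vh = np.linalg.svd(H_dl)
    return Vh.conj()[0]  # unit-norm precoding vector for this panel

# Panel 1 measures CSI-RS 1 (own link) and CSI-RS 2 (cross link) because the
# two CSI-RSs are transmitted simultaneously; Panel 2 proceeds symmetrically.
H_own_p1 = estimate_dl_channel(4, 1)    # CSI-RS port 1 -> Panel 1 (4 antennas)
H_cross_p1 = estimate_dl_channel(4, 1)  # CSI-RS port 2 -> Panel 1

w1 = panel_precoder(H_own_p1)
# Cross-link leakage Panel 1 would cause toward CSI-RS port 2 using w1:
leakage = np.linalg.norm(H_cross_p1 @ w1)
print(w1.shape, float(leakage))
```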
FIG. 11 is another flow diagram illustrating example operations 1100 for wireless communication, in accordance with certain aspects of the present disclosure. The operations 1100 may be performed, for example, by a BS (e.g., such as a BS 110a in the wireless communication network 100). The operations 1100 may be complementary operations by the BS to the operations 900 performed by the UE. Operations 1100 may be implemented as software components that are executed and run on one or more processors (e.g., processor 240 of FIG. 2). Further, the transmission and reception of signals by the BS in operations 1100 may be enabled, for example, by one or more antennas (e.g., antennas 234 of FIG. 2). In certain aspects, the transmission and/or reception of signals by the BS may be implemented via a bus interface of one or more processors (e.g., processor 240) obtaining and/or outputting signals.

The operations 1100 may begin, at 1105, by transmitting a first RS (e.g., CSI-RS) from a first port of the BS to a first antenna panel at a UE. At 1110, the BS transmits a second RS (e.g., CSI-RS) from a second port of the BS to a second antenna panel at the UE. In some examples, the BS transmits the first and second RSs simultaneously (or near simultaneously). At 1115, the BS schedules the UE for an uplink transmission (e.g., a multi-panel uplink transmission) using the first and second antenna panels.

FIG. 12 illustrates a communications device 1200 that may include various components (e.g., corresponding to means-plus-function components) configured to perform operations for the techniques disclosed herein, such as the operations illustrated in FIG. 6, FIG. 7, and/or FIG. 9. The communications device 1200 includes a processing system 1202 coupled to a transceiver 1208. The transceiver 1208 is configured to transmit and receive signals for the communications device 1200 via an antenna 1210, such as the various signals as described herein. The processing system 1202 may be configured to perform processing functions for the communications device 1200, including processing signals received and/or to be transmitted by the communications device 1200.

The processing system 1202 includes a processor 1204 coupled to a computer-readable medium/memory 1212 via a bus 1206. In certain aspects, the computer-readable medium/memory 1212 is configured to store instructions (e.g., computer-executable code) that when executed by the processor 1204, cause the processor 1204 to perform the operations illustrated in FIG. 6, FIG. 7, and/or FIG. 9, or other operations for performing the various techniques discussed herein for precoding for multi-panel uplink transmission. In certain aspects, computer-readable medium/memory 1212 stores code 1214 for determining precoders from a codebook for uplink transmission; code 1216 for sending an uplink transmission using the determined precoder; code 1218 for indicating the precoders to a BS; code 1220 for receiving RSs from different BS ports at different UE antenna panels; and/or code 1222 for computing precoders for uplink transmission based on the RSs. In certain aspects, the processor 1204 has circuitry configured to implement the code stored in the computer-readable medium/memory 1212.
The processor 1204 includes circuitry 1224 for determining precoders from a codebook for uplink transmission; circuitry 1226 for sending an uplink transmission using the determined precoder; circuitry 1228 for indicating the precoders to a BS; circuitry 1230 for receiving RSs from different BS ports at different UE antenna panels; and/or circuitry 1232 for computing precoders for uplink transmission based on the RSs.

FIG. 13 illustrates a communications device 1300 that may include various components (e.g., corresponding to means-plus-function components) configured to perform operations for the techniques disclosed herein, such as the operations illustrated in FIG. 8 and/or FIG. 11. The communications device 1300 includes a processing system 1302 coupled to a transceiver 1308. The transceiver 1308 is configured to transmit and receive signals for the communications device 1300 via an antenna 1310, such as the various signals as described herein. The processing system 1302 may be configured to perform processing functions for the communications device 1300, including processing signals received and/or to be transmitted by the communications device 1300.

The processing system 1302 includes a processor 1304 coupled to a computer-readable medium/memory 1312 via a bus 1306. In certain aspects, the computer-readable medium/memory 1312 is configured to store instructions (e.g., computer-executable code) that when executed by the processor 1304, cause the processor 1304 to perform the operations illustrated in FIG. 8 and/or FIG. 11, or other operations for performing the various techniques discussed herein for precoding for multi-panel uplink transmission. In certain aspects, computer-readable medium/memory 1312 stores code 1314 for receiving an indication from a UE of one or more precoders for uplink transmission; code 1316 for scheduling the UE based on the indication; code 1318 for sending a first RS to a first antenna panel at the UE from a first port of the BS; code 1320 for sending a second RS to a second antenna panel at the UE from a second port of the BS; and/or code 1322 for scheduling the UE for uplink transmission using the first and second antenna panels. In certain aspects, the processor 1304 has circuitry configured to implement the code stored in the computer-readable medium/memory 1312. The processor 1304 includes circuitry 1324 for receiving an indication from a UE of one or more precoders for uplink transmission; circuitry 1326 for scheduling the UE based on the indication; circuitry 1328 for sending a first RS to a first antenna panel at the UE from a first port of the BS; circuitry 1330 for sending a second RS to a second antenna panel at the UE from a second port of the BS; and/or circuitry 1332 for scheduling the UE for uplink transmission using the first and second antenna panels.

Example Aspects

In a first aspect, a method for wireless communication by a user equipment (UE) includes determining one or more precoders to use for one or more uplink transmissions, the one or more precoders mapping a first number of transmit layers at the UE to a second number of UE antenna ports, and the first number of transmit layers, the second number of UE antenna ports, or both being associated with multiple uplink transmit panels at the UE. The UE sends the one or more uplink transmissions using the determined one or more precoders.
In a second aspect, in combination with the first aspect, a type of the one or more precoders is based on a UE capability to transmit coherently over at least some UE antennas, over pairs of UE antennas, over all UE antennas within a UE antenna panel, or over all antennas within all UE antenna panels.

In a third aspect, in combination with one or more of the first and second aspects, the type of the one or more precoders includes fully coherent, partially coherent, or non-coherent.

In a fourth aspect, in combination with one or more of the first through third aspects, the UE transmits an indication of the UE capability to a base station (BS).

In a fifth aspect, in combination with one or more of the first through fourth aspects, the UE receives downlink control information (DCI) from a base station (BS) indicating the one or more precoders, the determination of the one or more precoders being based on the indication in the DCI.

In a sixth aspect, in combination with one or more of the first through fifth aspects, the first number of transmit layers, the second number of UE antenna ports, or both is greater than four.

In a seventh aspect, a method for wireless communication by a user equipment (UE) includes determining one or more precoders for uplink transmission, the one or more precoders mapping a first number of transmit layers at the UE to a second number of UE antenna ports. The UE sends an indication to a base station (BS) of the one or more precoders.

In an eighth aspect, in combination with the seventh aspect, the UE sends the BS an indication of whether single-panel or multi-panel uplink transmission is preferred.

In a ninth aspect, in combination with one or more of the seventh and eighth aspects, the UE sends the BS an indication of one or more preferred UE antenna panels.

In a tenth aspect, in combination with one or more of the seventh through ninth aspects, the UE determines the one or more preferred UE antenna panels based on a level of interference caused by transmission using the one or more UE antenna panels to one or more neighbor cells, a level of radiation to a user caused by transmission using the one or more UE antenna panels, a capability of the UE to transmit coherently within or across the one or more UE antenna panels, or a combination thereof.

In an eleventh aspect, in combination with one or more of the seventh through tenth aspects, the UE sends the BS an indication of one or more preferred precoders for each of the one or more UE antenna panels.

In a twelfth aspect, in combination with one or more of the seventh through eleventh aspects, the UE determines the one or more preferred precoders based on a capability of the UE, a type of the precoders, or both.

In a thirteenth aspect, in combination with one or more of the seventh through twelfth aspects, the one or more preferred precoders for each of the one or more UE antenna panels includes at least a first preferred precoder for single-panel uplink transmission and a second preferred precoder for multi-panel uplink transmission.

In a fourteenth aspect, in combination with one or more of the seventh through thirteenth aspects, determining the one or more precoders includes selecting the one or more precoders for uplink transmission using one or more UE antenna panels and sending the BS an indication of the selected one or more precoders.
In a fifteenth aspect, in combination with one or more of the seventh through fourteenth aspects, the one or more precoders are selected based, at least in part, on a transmit power level associated with the one or more UE antenna panels. In a sixteenth aspect, in combination with one or more of the seventh through fifteenth aspects, the one or more precoders are indicated in a scheduling request to the BS. In a seventeenth aspect, in combination with one or more of the seventh through sixteenth aspects, the one or more precoders are selected from a codebook. In an eighteenth aspect, a method for wireless communication by a user equipment (UE) includes receiving a first reference signal (RS), from a first port of a base station (BS), with a first UE antenna panel. The UE receives a second RS, from a second port of the BS, with a second UE antenna panel and computes one or more precoders to use for uplink transmission based on the first and second RSs. In a nineteenth aspect, in combination with the eighteenth aspect, the first and second RSs include channel state information RSs (CSI-RS). In a twentieth aspect, in combination with one or more of the eighteenth and nineteenth aspects, the first and second RSs are received concurrently. In a twenty-first aspect, in combination with one or more of the eighteenth through twentieth aspects, the first UE antenna panel is associated with a first sounding reference signal (SRS) resource set, the first SRS resource set being associated with the first port of the BS; and the second UE antenna panel is associated with a second SRS resource set, the second SRS resource set being associated with the second port of the BS. In a twenty-second aspect, in combination with one or more of the eighteenth through twenty-first aspects, the UE estimates interference caused to a link between the second UE antenna panel and the second port of the BS based on the first RS; and the UE estimates interference caused to a link between the first UE antenna panel and the first port of the BS based on the second RS, the one or more precoders being computed based on the estimated interference. In a twenty-third aspect, in combination with one or more of the eighteenth through twenty-second aspects, estimating the interference includes measuring the second RS at the first UE antenna panel; estimating the uplink channel between the first UE antenna panel and the second port of the BS based on channel reciprocity; measuring the first RS at the second UE antenna panel; and estimating the uplink channel between the second UE antenna panel and the first port of the BS based on channel reciprocity. In a twenty-fourth aspect, in combination with one or more of the eighteenth through twenty-third aspects, the one or more precoders are computed for each UE antenna panel for a simultaneous multi-panel uplink transmission. The methods disclosed herein comprise one or more steps or actions for achieving the methods. The method steps and/or actions may be interchanged with one another without departing from the scope of the claims. In other words, unless a specific order of steps or actions is specified, the order and/or use of specific steps and/or actions may be modified without departing from the scope of the claims. As used herein, a phrase referring to "at least one of" a list of items refers to any combination of those items, including single members. 
As an example, “at least one of: a, b, or c” is intended to cover a, b, c, a-b, a-c, b-c, and a-b-c, as well as any combination with multiples of the same element (e.g., a-a, a-a-a, a-a-b, a-a-c, a-b-b, a-c-c, b-b, b-b-b, b-b-c, c-c, and c-c-c or any other ordering of a, b, and c). As used herein, the term “determining” encompasses a wide variety of actions. For example, “determining” may include calculating, computing, processing, deriving, investigating, looking up (e.g., looking up in a table, a database or another data structure), ascertaining and the like. Also, “determining” may include receiving (e.g., receiving information), accessing (e.g., accessing data in a memory) and the like. Also, “determining” may include resolving, selecting, choosing, establishing and the like. The techniques described herein may be used for various wireless communication technologies, such as LTE, CDMA, TDMA, FDMA, OFDMA, SC-FDMA and other networks. The terms “network” and “system” are often used interchangeably. A CDMA network may implement a radio technology such as Universal Terrestrial Radio Access (UTRA), cdma2000, etc. UTRA includes Wideband CDMA (WCDMA) and other variants of CDMA. cdma2000 covers IS-2000, IS-95 and IS-856 standards. A TDMA network may implement a radio technology such as Global System for Mobile Communications (GSM). An OFDMA network may implement a radio technology such as NR (e.g. 5G RA), Evolved UTRA (E-UTRA), Ultra Mobile Broadband (UMB), IEEE 802.11 (Wi-Fi), IEEE 802.16 (WiMAX), IEEE 802.20, Flash-OFDMA, etc. UTRA and E-UTRA are part of Universal Mobile Telecommunication System (UMTS). NR is an emerging wireless communications technology under development in conjunction with the 5G Technology Forum (5GTF). 3GPP Long Term Evolution (LTE) and LTE-Advanced (LTE-A) are releases of UMTS that use E-UTRA. UTRA, E-UTRA, UMTS, LTE, LTE-A and GSM are described in documents from an organization named “3rd Generation Partnership Project” (3GPP). cdma2000 and UMB are described in documents from an organization named “3rd Generation Partnership Project 2” (3GPP2). In an LTE or LTE-A network, a set of one or more base stations may define an eNodeB (eNB). In other examples (e.g., in a next generation, a new radio (NR), or 5G network), a wireless multiple access communication system may include a number of distributed units (DUs) (e.g., edge units (EUs), edge nodes (ENs), radio heads (RHs), smart radio heads (SRHs), transmission reception points (TRPs), etc.) in communication with a number of central units (CUs) (e.g., central nodes (CNs), access node controllers (ANCs), etc.), where a set of one or more DUs, in communication with a CU, may define an access node (e.g., which may be referred to as a BS, 5G NB, next generation NodeB (gNB or gNodeB), transmission reception point (TRP), etc.). 
A BS or DU may communicate with a set of UEs on downlink channels (e.g., for transmissions from a BS or DU to a UE) and uplink channels (e.g., for transmissions from a UE to a BS or DU). A UE may also be referred to as a mobile station, a terminal, an access terminal, a subscriber unit, a station, a Customer Premises Equipment (CPE), a cellular phone, a smart phone, a personal digital assistant (PDA), a wireless modem, a wireless communication device, a handheld device, a laptop computer, a cordless phone, a wireless local loop (WLL) station, a tablet computer, a camera, a gaming device, a netbook, a smartbook, an ultrabook, an appliance, a medical device or medical equipment, a biometric sensor/device, a wearable device such as a smart watch, smart clothing, smart glasses, a smart wrist band, smart jewelry (e.g., a smart ring, a smart bracelet, etc.), an entertainment device (e.g., a music device, a video device, a satellite radio, etc.), a vehicular component or sensor, a smart meter/sensor, industrial manufacturing equipment, a global positioning system device, or any other suitable device that is configured to communicate via a wireless or wired medium. Some UEs may be considered machine-type communication (MTC) devices or evolved MTC (eMTC) devices. MTC and eMTC UEs include, for example, robots, drones, remote devices, sensors, meters, monitors, location tags, etc., that may communicate with a BS, another device (e.g., remote device), or some other entity. A wireless node may provide, for example, connectivity for or to a network (e.g., a wide area network such as the Internet or a cellular network) via a wired or wireless communication link. Some UEs may be considered Internet-of-Things (IoT) devices, which may be narrowband IoT (NB-IoT) devices. In some circumstances, two or more subordinate entities (e.g., UEs) may communicate with each other using sidelink signals. Real-world applications of such sidelink communications may include public safety, proximity services, UE-to-network relaying, vehicle-to-vehicle (V2V) communications, Internet of Everything (IoE) communications, IoT communications, mission-critical mesh, and/or various other suitable applications. Generally, a sidelink signal may refer to a signal communicated from one subordinate entity (e.g., UE1) to another subordinate entity (e.g., UE2) without relaying that communication through the scheduling entity (e.g., UE or BS), even though the scheduling entity may be utilized for scheduling and/or control purposes. In some examples, the sidelink signals may be communicated using a licensed spectrum (unlike wireless local area networks, which typically use an unlicensed spectrum). The previous description is provided to enable any person skilled in the art to practice the various aspects described herein. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects. Thus, the claims are not intended to be limited to the aspects shown herein, but are to be accorded the full scope consistent with the language of the claims, wherein reference to an element in the singular is not intended to mean "one and only one" unless specifically so stated, but rather "one or more." Unless specifically stated otherwise, the term "some" refers to one or more. 
All structural and functional equivalents to the elements of the various aspects described throughout this disclosure that are known or later come to be known to those of ordinary skill in the art are expressly incorporated herein by reference and are intended to be encompassed by the claims. Moreover, nothing disclosed herein is intended to be dedicated to the public regardless of whether such disclosure is explicitly recited in the claims. No claim element is to be construed under the provisions of 35 U.S.C. § 112(f) unless the element is expressly recited using the phrase “means for” or, in the case of a method claim, the element is recited using the phrase “step for.” The various operations of methods described above may be performed by any suitable means capable of performing the corresponding functions. The means may include various hardware and/or software component(s) and/or module(s), including, but not limited to a circuit, an application specific integrated circuit (ASIC), or processor. Generally, where there are operations illustrated in figures, those operations may have corresponding counterpart means-plus-function components with similar numbering. The various illustrative logical blocks, modules and circuits described in connection with the present disclosure may be implemented or performed with a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device (PLD), discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but in the alternative, the processor may be any commercially available processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. If implemented in hardware, an example hardware configuration may comprise a processing system in a wireless node. The processing system may be implemented with a bus architecture. The bus may include any number of interconnecting buses and bridges depending on the specific application of the processing system and the overall design constraints. The bus may link together various circuits including a processor, machine-readable media, and a bus interface. The bus interface may be used to connect a network adapter, among other things, to the processing system via the bus. The network adapter may be used to implement the signal processing functions of the PHY layer. In the case of a user terminal (seeFIG.1), a user interface (e.g., keypad, display, mouse, joystick, etc.) may also be connected to the bus. The bus may also link various other circuits such as timing sources, peripherals, voltage regulators, power management circuits, and the like, which are well known in the art, and therefore, will not be described any further. The processor may be implemented with one or more general-purpose and/or special-purpose processors. Examples include microprocessors, microcontrollers, DSP processors, and other circuitry that can execute software. 
Those skilled in the art will recognize how best to implement the described functionality for the processing system depending on the particular application and the overall design constraints imposed on the overall system. If implemented in software, the functions may be stored on, or transmitted over, a computer-readable medium as one or more instructions or code. Software shall be construed broadly to mean instructions, data, or any combination thereof, whether referred to as software, firmware, middleware, microcode, hardware description language, or otherwise. Computer-readable media include both computer storage media and communication media, including any medium that facilitates transfer of a computer program from one place to another. The processor may be responsible for managing the bus and general processing, including the execution of software modules stored on the machine-readable storage media. A computer-readable storage medium may be coupled to a processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. By way of example, the machine-readable media may include a transmission line, a carrier wave modulated by data, and/or a computer-readable storage medium with instructions stored thereon separate from the wireless node, all of which may be accessed by the processor through the bus interface. Alternatively, or in addition, the machine-readable media, or any portion thereof, may be integrated into the processor, as may be the case with cache and/or general register files. Machine-readable storage media may include, by way of example, RAM (Random Access Memory), flash memory, ROM (Read Only Memory), PROM (Programmable Read-Only Memory), EPROM (Erasable Programmable Read-Only Memory), EEPROM (Electrically Erasable Programmable Read-Only Memory), registers, magnetic disks, optical disks, hard drives, or any other suitable storage medium, or any combination thereof. The machine-readable media may be embodied in a computer-program product. A software module may comprise a single instruction, or many instructions, and may be distributed over several different code segments, among different programs, and across multiple storage media. The computer-readable media may comprise a number of software modules. The software modules include instructions that, when executed by an apparatus such as a processor, cause the processing system to perform various functions. The software modules may include a transmission module and a receiving module. Each software module may reside in a single storage device or be distributed across multiple storage devices. By way of example, a software module may be loaded into RAM from a hard drive when a triggering event occurs. During execution of the software module, the processor may load some of the instructions into cache to increase access speed. One or more cache lines may then be loaded into a general register file for execution by the processor. When referring to the functionality of a software module below, it will be understood that such functionality is implemented by the processor when executing instructions from that software module. Also, any connection is properly termed a computer-readable medium. 
For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared (IR), radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray® disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Thus, in some aspects computer-readable media may comprise non-transitory computer-readable media (e.g., tangible media). In addition, for other aspects computer-readable media may comprise transitory computer-readable media (e.g., a signal). Combinations of the above should also be included within the scope of computer-readable media. Thus, certain aspects may comprise a computer program product for performing the operations presented herein. For example, such a computer program product may comprise a computer-readable medium having instructions stored (and/or encoded) thereon, the instructions being executable by one or more processors to perform the operations described herein, for example, instructions for performing the operations described herein and illustrated inFIGS.6-9andFIG.11. Further, it should be appreciated that modules and/or other appropriate means for performing the methods and techniques described herein can be downloaded and/or otherwise obtained by a user terminal and/or base station as applicable. For example, such a device can be coupled to a server to facilitate the transfer of means for performing the methods described herein. Alternatively, various methods described herein can be provided via storage means (e.g., RAM, ROM, a physical storage medium such as a compact disc (CD) or floppy disk, etc.), such that a user terminal and/or base station can obtain the various methods upon coupling or providing the storage means to the device. Moreover, any other suitable technique for providing the methods and techniques described herein to a device can be utilized. It is to be understood that the claims are not limited to the precise configuration and components illustrated above. Various modifications, changes and variations may be made in the arrangement, operation and details of the methods and apparatus described above without departing from the scope of the claims.
DETAILED DESCRIPTION FIG.1throughFIG.15, discussed below, and the various embodiments used to describe the principles of the present disclosure in this patent document are by way of illustration only and should not be construed in any way to limit the scope of the disclosure. Those skilled in the art will understand that the principles of the present disclosure may be implemented in any suitably arranged system or device. To meet the demand for wireless data traffic having increased since deployment of 4G communication systems, efforts have been made to develop an improved 5G or pre-5G communication system. Therefore, the 5G or pre-5G communication system is also called a "beyond 4G network" or a "post LTE system." The 5G communication system is considered to be implemented in higher frequency (mmWave) bands, e.g., 60 GHz bands, so as to accomplish higher data rates. To decrease propagation loss of the radio waves and increase the transmission coverage, beamforming, massive multiple-input multiple-output (MIMO), full-dimensional MIMO (FD-MIMO), array antenna, analog beamforming, large-scale antenna techniques, and the like are discussed in 5G communication systems. In addition, in 5G communication systems, development for system network improvement is under way based on advanced small cells, cloud radio access networks (RANs), ultra-dense networks, device-to-device (D2D) communication, wireless backhaul communication, moving network, cooperative communication, coordinated multi-points (CoMP) transmission and reception, interference mitigation and cancelation, and the like. The discussion of 5G systems and frequency bands associated therewith is for reference as certain embodiments of the present disclosure may be implemented in 5G systems. However, the present disclosure is not limited to 5G systems, or the frequency bands associated therewith, and embodiments of the present disclosure may be utilized in connection with any frequency band. For example, aspects of the present disclosure may also be applied to deployment of 5G communication systems, 6G, or even later releases, which may use terahertz (THz) bands. In 5G or other wireless systems, mmWave is a major band on which high throughputs can be achieved. Propagation on the mmWave bands suffers more significant path loss than that in the sub-6 GHz bands. To compensate for the increased path loss, multiple antenna elements can be simultaneously activated to form a narrow beam with high gain. The narrow beamwidth, however, incurs an overhead for aligning the beam direction to the best signal transmission/reception directions. Embodiments of the present disclosure present a method of generating a beam codebook including wide beams when narrow beams are given, creating wide beams to cover those narrow beams, which would be advantageous in terms of power, speed, efficiency, etc. Embodiments of the present disclosure also recognize that formulating a concave utility maximization problem and adopting the cyclic coordinate descent algorithm can be used to design the wide beam, where determining to output the wide beam can be predicated on the wide beam pattern meeting the design specifications based on such criteria as peak gain and half-power beamwidth (HPBW). These design specifications can be requirements of the system, requirements for the base station to maintain contact with the UE, or any other feature that would be advantageous to the design of the WBs or codebook. 
Moreover, embodiments of the present disclosure permit the codebook to offer different types, shapes, and powers of wide beams to accomplish the objectives of using wide beams to cover narrow beam areas or volumes. Additionally, embodiments of the present disclosure also provide for a method of using smaller WB for the cell-center area and larger WB for the cell-edge area and adopting a WB codebook with less WB switching frequency to support mobile UE. FIGS.1through5below describe various embodiments implemented in wireless communications systems and with the use of orthogonal frequency division multiplexing (OFDM) or orthogonal frequency division multiple access (OFDMA) communication techniques. The descriptions thereof are not meant to imply physical or architectural limitations to the manner in which different embodiments may be implemented. Different embodiments of the present disclosure may be implemented in any suitably arranged communications system. FIG.1illustrates an example wireless network according to embodiments of the present disclosure. The embodiment of the wireless network shown inFIG.1is for illustration only. Other embodiments of the wireless network100could be used without departing from the scope of this disclosure. As shown inFIG.1, the wireless network includes a gNB101, a gNB102, and a gNB103. The gNB101communicates with the gNB102and the gNB103. The gNB101can also communicate with at least one network130, such as the Internet, a proprietary Internet Protocol (IP) network, or other data network. The gNB102provides wireless broadband access to the network130for a first plurality of user equipments (UEs) within a coverage area120of the gNB102. The first plurality of UEs includes a UE111, which may be located in a small business; a UE112, which may be located in an enterprise (E); a UE113, which may be located in a WiFi hotspot (HS); a UE114, which may be located in a first residence (R); a UE115, which may be located in a second residence (R); and a UE116, which may be a mobile device (M), such as a cell phone, a wireless laptop, a wireless PDA, or the like. The gNB103provides wireless broadband access to the network130for a second plurality of UEs within a coverage area125of the gNB103. The second plurality of UEs includes the UE115and the UE116. In some embodiments, one or more of the gNBs101-103may communicate with each other and with the UEs111-116using 5G, LTE, LTE-A, WiMAX, WiFi, or other wireless communication techniques. Depending on the network type, the term “base station” or “BS” can refer to any component (or collection of components) configured to provide wireless access to a network, such as transmit point (TP), transmit-receive point (TRP), an enhanced base station (eNodeB or eNB), a 5G base station (gNB), a macrocell, a femtocell, a WiFi access point (AP), or other wirelessly enabled devices. Base stations may provide wireless access in accordance with one or more wireless communication protocols, e.g., 5G 3GPP new radio interface/access (NR), long term evolution (LTE), LTE advanced (LTE-A), high speed packet access (HSPA), Wi-Fi 802.11a/b/g/n/ac, etc. For the sake of convenience, the terms “BS” and “TRP” are used interchangeably in this patent document to refer to network infrastructure components that provide wireless access to remote terminals. 
Also, depending on the network type, the term "user equipment" or "UE" can refer to any component such as "mobile station," "subscriber station," "remote terminal," "wireless terminal," "receive point," or "user device." For the sake of convenience, the terms "user equipment" and "UE" are used in this patent document to refer to remote wireless equipment that wirelessly accesses a BS, whether the UE is a mobile device (such as a mobile telephone or smartphone) or is normally considered a stationary device (such as a desktop computer or vending machine). Dotted lines show the approximate extents of the coverage areas120and125, which are shown as approximately circular for the purposes of illustration and explanation only. It should be clearly understood that the coverage areas associated with gNBs, such as the coverage areas120and125, may have other shapes, including irregular shapes, depending upon the configuration of the gNBs and variations in the radio environment associated with natural and man-made obstructions. As described in more detail below, one or more of gNB101, gNB102and gNB103include one or more two-dimensional (2D) antenna arrays as described in embodiments of the present disclosure. In some embodiments, one or more of gNB101, gNB102and gNB103support the codebook design and structure for systems having 2D antenna arrays. In some embodiments, the network130facilitates communications between at least one server134and various client devices, such as client device136. Server134includes any suitable computing or processing device that can provide computing services for one or more client devices. Server134could, for example, include one or more processing devices, one or more memories storing instructions and data, and one or more network interfaces facilitating communication over the network130. Client device136represents any suitable computing or processing device that interacts with at least one server or other computing device(s) over the network130. In this example, the client device is represented as a desktop computer, but other non-limiting examples of client devices can include a mobile telephone, laptop computer, or tablet computer. However, any other or additional client devices could be used in the wireless network100. In this example, client devices can communicate indirectly with the network130. For example, some client devices can communicate via one or more base stations, such as cellular base stations or eNodeBs. Also, client devices can communicate via one or more wireless access points (not shown), such as IEEE 802.11 wireless access points. Note that these are for illustration only and that each client device could communicate directly with the network130or indirectly with the network130via any suitable intermediate device(s) or network(s). As described in more detail below, wireless network100can be a 5G communication system in which a UE116and/or BS102can adapt and/or use the codebook(s) described herein for use in wireless communication with each other. In addition, wireless network100can enable a computing device, such as server134, to design codebooks or elements thereof, which can be done offline without instantaneous computation and dissemination, or simply when convenient for the system in order to save power at the BSs, and to disseminate the codebooks or some subset of the information used for the codebooks to electronic devices, such as UE116and BS102, for wireless communication. 
AlthoughFIG.1illustrates one example of a wireless network, various changes may be made toFIG.1. For example, the wireless network could include any number of gNBs and any number of UEs in any suitable arrangement. Also, the gNB101could communicate directly with any number of UEs and provide those UEs with wireless broadband access to the network130. Similarly, each gNB102-103could communicate directly with the network130and provide UEs with direct wireless broadband access to the network130. Further, the gNBs101,102, and/or103could provide access to other or additional external networks, such as external telephone networks or other types of data networks. FIG.2illustrates an example gNB102according to embodiments of the present disclosure. The embodiment of the gNB102illustrated inFIG.2is for illustration only, and the gNBs101and103ofFIG.1could have the same or similar configuration. However, gNBs come in a wide variety of configurations, andFIG.2does not limit the scope of this disclosure to any particular implementation of a gNB. As shown inFIG.2, the gNB102includes multiple antennas205a-205n, multiple radio frequency (RF) transceivers210a-210n, transmit (TX) processing circuitry215, and receive (RX) processing circuitry220. The gNB102also includes a controller/processor225, a memory230, and a backhaul or network interface235. The RF transceivers210a-210nreceive, from the antennas205a-205n, incoming RF signals, such as signals transmitted by UEs in the network100. The RF transceivers210a-210ndown-convert the incoming RF signals to generate intermediate frequency (IF) or baseband signals. The IF or baseband signals are sent to the RX processing circuitry220, which generates processed baseband signals by filtering, decoding, and/or digitizing the baseband or IF signals. The RX processing circuitry220transmits the processed baseband signals to the controller/processor225for further processing. The TX processing circuitry215receives analog or digital data (such as voice data, web data, e-mail, or interactive video game data) from the controller/processor225. The TX processing circuitry215encodes, multiplexes, and/or digitizes the outgoing baseband data to generate processed baseband or IF signals. The RF transceivers210a-210nreceive the outgoing processed baseband or IF signals from the TX processing circuitry215and up-convert the baseband or IF signals to RF signals that are transmitted via the antennas205a-205n. The controller/processor225can include one or more processors or other processing devices that control the overall operation of the gNB102. For example, the controller/processor225could control the reception of forward channel signals and the transmission of reverse channel signals by the RF transceivers210a-210n, the RX processing circuitry220, and the TX processing circuitry215in accordance with well-known principles. The controller/processor225could support additional functions as well, such as more advanced wireless communication functions. That is, the controller/processor225can perform, in part or in whole, the method described herein, identify inputs, and create the codebook. Also, any of a wide variety of other functions can be supported in the gNB102by the controller/processor225. In some embodiments, the controller/processor225includes at least one microprocessor or microcontroller. 
In certain embodiments, the controller/processor225could support beam forming or directional routing operations in which outgoing signals from multiple antennas205a-205nare weighted differently to effectively steer the outgoing signals in a desired direction and to create and control narrow beams and wide beams. Any of a wide variety of other functions could be supported in the gNB102by the controller/processor225. The controller/processor225is also capable of executing programs and other processes resident in the memory230, such as an operating system (OS). The controller/processor225can move data into or out of the memory230as used by an executing process. The controller/processor225is also capable of supporting channel quality measurement and reporting for systems having 2D antenna arrays as described in embodiments of the present disclosure. In some embodiments, the controller/processor225supports communications between entities, such as web real-time communication (RTC). The controller/processor225is also coupled to the backhaul or network interface235. The backhaul or network interface235allows the gNB102to communicate with other devices or systems over a backhaul connection or over a network. The interface235could support communications over any suitable wired or wireless connection(s). For example, when the gNB102is implemented as part of a cellular communication system (such as one supporting 5G, LTE, or LTE-A), the interface235could allow the gNB102to communicate with other gNBs over a wired or wireless backhaul connection. When the gNB102is implemented as an access point, the interface235could allow the gNB102to communicate over a wired or wireless local area network or over a wired or wireless connection to a larger network (such as the Internet). The interface235includes any suitable structure supporting communications over a wired or wireless connection, such as an Ethernet or RF transceiver. The memory230is coupled to the controller/processor225. Part of the memory230could include a RAM, and another part of the memory230could include a Flash memory or other read only memory (ROM). In certain embodiments, a plurality of instructions, such as a BIS algorithm, is stored in memory230. The plurality of instructions is configured to cause the controller/processor225to perform the BIS process and to decode a received signal after subtracting out at least one interfering signal determined by the BIS algorithm. As described in more detail below, the transmit and receive paths of the gNB102(implemented using the RF transceivers210a-210n, TX processing circuitry215, and/or RX processing circuitry220) support the generation and use of WB codebooks. AlthoughFIG.2illustrates one example of gNB102, various changes may be made toFIG.2. For example, the gNB102could include any number of each component shown inFIG.2. As a particular example, an access point could include a number of interfaces235, and the controller/processor225could support routing functions to route data between different network addresses. As another particular example, while shown as including a single instance of TX processing circuitry215and a single instance of RX processing circuitry220, the gNB102could include multiple instances of each (such as one per RF transceiver). 
Also, various components inFIG.2could be combined, further subdivided, or omitted and additional components could be added according to particular needs. FIG.3illustrates an example UE116according to embodiments of the present disclosure. The embodiment of the UE116illustrated inFIG.3is for illustration only, and the UEs111-115ofFIG.1could have the same or similar configuration. However, UEs come in a wide variety of configurations, andFIG.3does not limit the scope of this disclosure to any particular implementation of a UE. As shown inFIG.3, the UE116includes an antenna305, a radio frequency (RF) transceiver310, TX processing circuitry315, a microphone320, and receive (RX) processing circuitry325. The UE116also includes a speaker330, a controller/processor340, an input/output (I/O) interface (IF)345, a touchscreen350(or keypad), a display355, and a memory360. The memory360includes an operating system (OS)361and one or more applications362. The RF transceiver310receives from the antenna305, an incoming RF signal transmitted by a gNB of the network100. The RF transceiver310down-converts the incoming RF signal to generate an intermediate frequency (IF) or baseband signal. The IF or baseband signal is sent to the RX processing circuitry325, which generates a processed baseband signal by filtering, decoding, and/or digitizing the baseband or IF signal. The RX processing circuitry325transmits the processed baseband signal to the speaker330(such as for voice data) or to the controller/processor340for further processing (such as for web browsing data). The TX processing circuitry315receives analog or digital voice data from the microphone320or other outgoing baseband data (such as web data, e-mail, or interactive video game data) from the controller/processor340. The TX processing circuitry315encodes, multiplexes, and/or digitizes the outgoing baseband data to generate a processed baseband or IF signal. The RF transceiver310receives the outgoing processed baseband or IF signal from the TX processing circuitry315and up-converts the baseband or IF signal to an RF signal that is transmitted via the antenna305. The controller/processor340can include one or more processors or other processing devices and execute the OS361stored in the memory360in order to control the overall operation of the UE116. For example, the controller/processor340could control the reception of forward channel signals and the transmission of reverse channel signals by the RF transceiver310, the RX processing circuitry325, and the TX processing circuitry315in accordance with well-known principles. In some embodiments, the controller/processor340includes at least one microprocessor or microcontroller. The controller/processor340is also capable of executing other processes and programs resident in the memory360. The controller/processor340can move data into or out of the memory360as used by an executing process. In some embodiments, the controller/processor340is configured to execute the applications362based on the OS361or in response to signals received from gNBs or an operator. The controller/processor340is also coupled to the I/O interface345, which provides the UE116with the ability to connect to other devices, such as laptop computers and handheld computers. The I/O interface345is the communication path between these accessories and the controller/processor340. The controller/processor340is also coupled to the touchscreen350and the display355. The operator of the UE116can use the touchscreen350to enter data into the UE116. 
The display355may be a liquid crystal display, light emitting diode display, or other display capable of rendering text and/or at least limited graphics, such as from web sites. The memory360is coupled to the controller/processor340. Part of the memory360could include a random-access memory (RAM), and another part of the memory360could include a Flash memory or other read-only memory (ROM). AlthoughFIG.3illustrates one example of UE116, various changes may be made toFIG.3. For example, various components inFIG.3could be combined, further subdivided, or omitted and additional components could be added according to particular needs. As a particular example, the controller/processor340could be divided into multiple processors, such as one or more central processing units (CPUs) and one or more graphics processing units (GPUs). Also, whileFIG.3illustrates the UE116configured as a mobile telephone or smartphone, UEs could be configured to operate as other types of mobile or stationary devices. FIG.4illustrates an exemplary networked computing device in accordance with various embodiments of this disclosure. In one embodiment, the networked computing device400is a server, such as server134inFIG.1. As shown inFIG.4, the computing device400includes a bus system405, which supports communication between at least one processor410, at least one storage device415, at least one communications unit420, and at least one input/output (I/O) unit425. The processor410executes instructions that may be loaded into a memory430. The processor410may include any suitable number(s) and type(s) of processors or other devices in any suitable arrangement. Example types of processor410include microprocessors, microcontrollers, digital signal processors, field programmable gate arrays, application specific integrated circuits, and discrete circuitry. The memory430and a persistent storage435are examples of storage devices415, which represent any structure(s) capable of storing and facilitating retrieval of information (such as data, program code, and/or other suitable information on a temporary or permanent basis). The memory430may represent a random-access memory or any other suitable volatile or non-volatile storage device(s). The persistent storage435may contain one or more components or devices supporting longer-term storage of data, such as a read-only memory, hard drive, flash memory, or optical disc. The communications unit420supports communications with other systems or devices. For example, the communications unit420could include a network interface card or a wireless transceiver facilitating communications over the network130. The communications unit420may support communications through any suitable physical or wireless communication link(s). The I/O unit425allows for input and output of data. For example, the I/O unit425may provide a connection for user input through a keyboard, mouse, keypad, touchscreen, or other suitable input device. The I/O unit425may also send output to a display, printer, or other suitable output device. Note that whileFIG.4is described as representing the server134ofFIG.1, the same or similar structure could be used in one or more client devices. For example, client device136can have the same or similar structure as shown inFIG.4. 
As described in more detail below, a computing device such as server134inFIG.1can be used to design and disseminate codebooks for use by an electronic device, such as UE116and/or BS102for communicating over network100, or may be used to store and calculate data necessary for the implementation of the method described herein, especially in situations where real-time data is not necessary or, on the other hand, where the calculations are more efficiently or effectively done by the networked computing device400. The networked computing device400could also maintain or determine any data or calculations that can be done offline and then transmitted to another component in network100. FIG.5illustrates an example antenna block500according to embodiments of the present disclosure. The embodiment of the antenna500illustrated inFIG.5is for illustration only.FIG.5does not limit the scope of this disclosure to any particular implementation of the antenna500. In certain embodiments, one or more of gNB102or UE116include the antenna500. For example, one or more of antenna205and its associated systems or antenna305and its associated systems can be configured the same as antenna500. Rel.14 LTE and Rel.15 NR support up to 32 Channel State Information Reference Signal (CSI-RS) antenna ports, which enable an eNB to be equipped with a large number of antenna elements (such as 64 or 128). In this case, a plurality of antenna elements is mapped onto one CSI-RS port. For mmWave bands, although the number of antenna elements can be larger for a given form factor, the number of CSI-RS ports, which can correspond to the number of digitally precoded ports, tends to be limited due to hardware constraints (such as the feasibility of installing a large number of analog-to-digital converters (ADCs) and digital-to-analog converters (DACs) at mmWave frequencies). In the example shown inFIG.5, the antenna500includes analog phase shifters505, an analog beamformer (BF)510, a hybrid BF515, a digital BF520, and one or more antenna arrays525. In this case, one CSI-RS port is mapped onto a large number of antenna elements in antenna arrays525, which can be controlled by the bank of analog phase shifters505. One CSI-RS port can then correspond to one sub-array, which produces a narrow analog beam through analog beamforming by analog BF510. The analog beam can be configured to sweep530across a wider range of angles by varying the phase shifter bank505across symbols or subframes. The number of sub-arrays (equal to the number of RF chains) is the same as the number of CSI-RS ports NCSI-PORT. The digital BF520performs a linear combination across NCSI-PORT analog beams to further increase precoding gain. While analog beams are wideband (hence not frequency-selective), digital precoding can be varied across frequency sub-bands or resource blocks. Since the above system utilizes multiple analog beams for transmission and reception (wherein one or a small number of analog beams are selected out of a large number, for instance, after a training duration that is performed from time to time), the term "multi-beam operation" is used to refer to the overall system aspect. This includes, for the purpose of illustration, indicating the assigned DL or UL transmit (TX) beam (also termed "beam indication"), measuring at least one reference signal for calculating and performing beam reporting (also termed "beam measurement" and "beam reporting", respectively), and receiving a DL or UL transmission via a selection of a corresponding receive (RX) beam. 
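To make the hybrid beamforming structure ofFIG.5concrete, the following is a minimal, non-limiting Python/NumPy sketch (illustrative only, not part of the original disclosure): it forms several analog sub-array beams with b-bit quantized phase shifters and applies a digital combining vector across the resulting CSI-RS ports. The array geometry, beam directions, and all names are assumptions.

import numpy as np

def steering_vector(n_elem, spacing, phi_deg):
    # Uniform linear sub-array response at azimuth phi (broadside = 0 deg);
    # spacing is in wavelengths.
    n = np.arange(n_elem)
    return np.exp(2j * np.pi * spacing * n * np.sin(np.deg2rad(phi_deg)))

def analog_beam(n_elem, spacing, phi_deg, bits=5):
    # Phase-shifter-only weights, quantized to 2**bits phase levels.
    phase = np.angle(steering_vector(n_elem, spacing, phi_deg))
    step = 2 * np.pi / 2**bits
    return np.exp(1j * np.round(phase / step) * step) / np.sqrt(n_elem)

# Assumed example: 4 CSI-RS ports, each an 8-element sub-array with its own
# analog beam (one RF chain per sub-array).
n_ports, n_elem, spacing = 4, 8, 0.47
angles = [-45.0, -15.0, 15.0, 45.0]
W_analog = np.stack([analog_beam(n_elem, spacing, a) for a in angles])

# Digital BF: a linear combination across the n_ports analog beams; unlike
# the wideband analog beams, this vector could differ per sub-band.
w_digital = np.ones(n_ports, dtype=complex) / np.sqrt(n_ports)

probe = steering_vector(n_elem, spacing, 10.0)   # probe direction
port_outputs = W_analog.conj() @ probe           # one output per CSI-RS port
gain = np.abs(w_digital.conj() @ port_outputs) ** 2
print(f"composite gain toward 10 deg: {gain:.3f}")

In this sketch, varying the quantized phases of each sub-array across symbols would correspond to the sweep530described above, while w_digital plays the role of the digital BF520.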
Additionally, the antenna500system is also applicable to higher frequency bands, such as >52.6 GHz (also termed FR4). In this case, the system can employ only analog beams. Due to the O2 absorption loss around the 60 GHz frequency (˜10 decibels (dB) additional loss @100 m distance), a larger number of sharper analog beams (and hence a larger number of radiators in the array) will be needed to compensate for the additional path loss. An antenna port is defined such that the channel over which a symbol on the antenna port is conveyed can be inferred from the channel over which another symbol on the same antenna port is conveyed. Two antenna ports are said to be quasi co-located (QCL) if the large-scale properties of the channel over which a symbol on one antenna port is conveyed can be inferred from the channel over which a symbol on the other antenna port is conveyed. The large-scale properties include one or more of delay spread, Doppler spread, Doppler shift, average gain, average delay, and spatial Rx parameters. As operating frequency bands in NR become higher, the UE is evolving to accommodate a plurality of antenna arrays525or panels (each panel is able to transmit via one analog beam, e.g., analog BF510) to enhance aspects of multi-beam operation such as coverage enhancement, beam failure event minimization, fast beam switching, and the like. By utilizing the capability of multiple panels, UE116is able to obtain a variety of diversity gains, which come from dynamic selection of panel(s) with the best quality in terms of the performance that systems want to optimize. For example, in 3GPP 5G NR Rel-17, features to facilitate UL beam/panel selection for UEs equipped with multiple panels are identified and specified under a unified transmission configuration indicator (TCI) framework, in order to mitigate UL coverage loss from several aspects such as maximum permissible exposure (MPE) issues on UE116. For example, a beam corresponds to a spatial transmission/reception filter that is used by the UE116and/or gNB102. In one example, a beam can correspond to a spatial reception filter that is used by the UE116to receive a reference signal, such as synchronization signals (SS) and physical broadcast channel (PBCH) (SS/PBCH block (SSB)) and/or a CSI-RS and so on. In another example, a beam can correspond to a spatial transmission filter that is used by the UE116to transmit a reference signal, such as an UL sounding reference signal (SRS) and so on. A beam training and measurement procedure can include, for example, a procedure wherein the gNB102configures the UE116with a set of reference signal (RS) resources, such as SSB resources and/or CSI-RS resources, as well as a configuration for report settings, such that the UE can report beam quality metric(s) measurement(s), such as Reference Signal Received Power (RSRP), Reference Signal Received Quality (RSRQ), Received Signal Strength Indicator (RSSI), Signal to Noise Ratio (SNR), Signal to Interference and Noise Ratio (SINR), and so on, each of which can be, e.g., an L1 measurement or a filtered L3 measurement. 
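As a non-limiting illustration of such a beam training and measurement procedure, the sketch below (hypothetical Python/NumPy; the channel model, the DFT receive codebook, and the repetition scheme are assumptions, not the disclosure's procedure) measures an L1-RSRP-like metric for each candidate spatial reception filter across repeated RS occasions and selects the best beam.

import numpy as np

rng = np.random.default_rng(0)

def rsrp(channel, rx_beam):
    # L1-RSRP-like metric: received power after the spatial RX filter.
    return np.abs(rx_beam.conj() @ channel) ** 2

# Assumed setup: 8 RX antennas and 16 candidate RX beams (DFT codebook);
# the RS is repeated once per candidate beam with the same TX spatial filter.
n_rx, n_beams = 8, 16
grid = np.outer(np.arange(n_rx), np.arange(n_beams))
codebook = np.exp(-2j * np.pi * grid / n_beams) / np.sqrt(n_rx)
channel = (rng.standard_normal(n_rx)
           + 1j * rng.standard_normal(n_rx)) / np.sqrt(2)

l1 = np.array([rsrp(channel, codebook[:, b]) for b in range(n_beams)])
best = int(np.argmax(l1))   # a filtered L3 value would average several sweeps
print(f"best beam index: {best}, L1-RSRP: {l1[best]:.3f}")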
In one example, a UE116and/or a gNB102can transmit a reference signal (RS), such as a Synchronization Signal Block (SSB) or a CSI-RS or an SRS, with a number of repetitions using a same spatial transmission filter in multiple occasions, so that the gNB102and/or UE116, respectively, can receive the RS with different spatial reception filters, in order to facilitate beam sweeping and identification of a candidate/best beam based on a quality metric, such as L1/L3 RSRP or SINR. In one example, a selection of different spatial reception filters and/or quality metric and/or selection procedure can be per UE/gNB implementation. A beam indication procedure can include, for example, a procedure wherein the gNB102can indicate to the UE116to transmit an uplink channel (and/or a second uplink signal) with a same spatial filter that was used to receive a (first) reference signal. In another example, the gNB102can indicate to the UE116to receive a downlink channel (and/or a second downlink signal) with a same spatial filter that was used to receive a (first) reference signal. Such indication can be, e.g., a Downlink Control Information (DCI) and/or MAC Control Element (MAC-CE), and/or radio resource control (RRC) signaling. In one example, an antenna panel or, simply, a panel can refer to an antenna array525or an antenna sub-array connected to one or multiple RF chains. In one example, a panel can be referred to as a transmission-reception entity (TRE), which can virtualize multiple physical panels into a single virtual panel, based on a transparent UE/gNB implementation, such as MIMO diversity scheme(s). A millimeter-wave (mmWave) beam codebook design is very important and challenging for 5G mmWave base stations. Different from the low frequency bands, the mmWave antenna is inherently directional, and the mmWave signal can be very sensitive to blockage, reflection, etc. Certain codebook designs are constructed in the angular domain so as to cover the required angular region, for example, a 120° horizontal scan range (−60°≤ϕ≤60°) and a 60° vertical scan range (70°≤θ≤130°). It is noted that angular-domain designs ignore a few important factors in real deployment. In certain angular designs, a path-loss difference is ignored. For example, when UEs are distributed on the ground, the θ=95° (i.e., cell-edge) region has much weaker channel gain than the θ≥120° (i.e., cell-center) region. In certain angular designs, blockage is not considered; for example, the mmWave signal cannot penetrate an obstacle well, and therefore a beam shooting towards a building is not able to serve the UE behind the building. Additionally, certain angular designs ignore reflection and multi-path, which can be utilized to serve the non-line-of-sight (nLoS) UEs. Certain embodiments in this disclosure provide methods to generate a beam codebook for a base station. Certain embodiments obtain input information including the antenna element radiation pattern, antenna spacing and antenna size, and UE channel information to generate a site-specific and dynamic codebook design. Although mmWave bands are used as an example in this disclosure, the embodiments in this disclosure can also be applied to other frequency bands as well. The site-specific codebook design corresponds to a codebook designed for one or more localized attributes or environmental conditions for a particular base station utilizing the site-specific codebook. 
That is, the site-specific codebook is uniquely designed for use by a particular base station. In certain embodiments, a site-specific BS beam codebook is designed based on ray tracing data and by an iterative algorithm. A ray tracing simulation tool, for instance, Wireless InSite, can simulate multiple rays (up to 250 rays) for each transmitter-receiver pair. Assuming that the strongest L rays are generated, the data extracted from the ray tracing simulation tool is as follows:

1. $P_{k,l}$: ray gain;
2. $\theta_{k,l}$: elevation angle of departure from the BS; and
3. $\phi_{k,l}$: azimuth angle of departure from the BS,

where $l$ ($1 \le l \le L$) represents the $l$-th strongest path and $k$ ($1 \le k \le K$) stands for the index of the UE. In a non-limiting embodiment, the wide beam design is a data-driven method. If the design objective is assumed to be separable, such that it can be expressed as a concave utility function of the form

$h(\mathbf{w}) = \sum_{i \in I} f_i(\mathbf{w}^H \mathbf{M}_i \mathbf{w})$  (1)

where each $f_i(x)$ is a continuous concave function that has a gradient or subgradient, then an iterative algorithm is proposed to find the beam. Because the utility function is a concave non-decreasing function, there is a diminishing return as the beamforming gain increases. $f_i$ can be dependent on or independent of the index $i$, and $f_i$ can be non-differentiable. Note, too, that many metrics of interest are actually concave functions of the beamforming gains. For example, if $f_i(x) = \ln x,\ \forall i$, then it is similar to the idea of "proportional fairness" in scheduling. For another example, if $f_i(x) = \log_2(1 + \gamma x),\ \forall i$, then it maximizes the mean data rate. A cyclic coordinate descent algorithm can sequentially update each entry $w_\ell$ until convergence. The iterative algorithm is guaranteed to converge to a stationary point (a local optimum), but is not guaranteed to reach the global optimum. The resultant beam satisfies the KKT condition and is a locally optimal solution. Advantages of this proposed method are as follows: no assumption of (1) an omni-directional element pattern or (2) a regular array layout, e.g., half-wavelength spacing; computational efficiency (by using a coordinate descent algorithm); and a flexible choice of the design metric, i.e., mean gain, mean data rate, or detection probability. A numerical sketch of evaluating this utility is given below. FIG.6illustrates a base station with two wide beams and fourteen narrow beams according to embodiments of the present disclosure. The embodiment of the base station beams600shown inFIG.6is for illustration only. Other embodiments could be used without departing from the scope of the present disclosure. In the example shown inFIG.6, gNB102is configured with two wide beams (WBs)605and fourteen narrow beams (NBs)610. In certain embodiments, the generation of a codebook is site-specific.FIG.6shows an example of 2 wide beams and 14 narrow beams, where each wide beam has 7 child narrow beams. The example only shows the beam distribution in one dimension. The wide beam shape in one dimension is determined only by the center direction $\phi_c$ and the beam width $\Delta\phi$. The coverage region of the wide beam is $[\phi_c - \Delta\phi/2,\ \phi_c + \Delta\phi/2]$. In a deployment of a cellular network, the beam codebook may cover a 2D angular region to serve UEs of different heights and at different locations of a cell, for example, −60°≤ϕ≤60°, 80°≤θ≤120°. There is more flexibility, and there are more challenges, in determining the wide beam shape and the whole wide beam codebook, and such determinations could be performed by any of the processors and/or memories described in more detail with reference toFIG.11, below. 
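As a non-limiting illustration of the separable utility in equation (1), the sketch below (hypothetical Python/NumPy) builds rank-one matrices $\mathbf{M}_i = P_i \mathbf{a}_i \mathbf{a}_i^H$ from stand-in ray tuples $(P_{k,l}, \theta_{k,l}, \phi_{k,l})$ and evaluates $h(\mathbf{w})$ for a candidate beam under the logarithmic utility. The planar-array response model and the ray data are assumptions, not output of any ray tracing tool.

import numpy as np

def upa_response(theta_deg, phi_deg, nh=8, nv=8, d=0.47):
    # Assumed uniform planar array response for departure angles (theta, phi),
    # with theta measured from the vertical axis and spacing d in wavelengths.
    th, ph = np.deg2rad(theta_deg), np.deg2rad(phi_deg)
    ih, iv = np.meshgrid(np.arange(nh), np.arange(nv), indexing="ij")
    phase = 2 * np.pi * d * (ih * np.sin(th) * np.sin(ph) + iv * np.cos(th))
    return np.exp(1j * phase).ravel()

def utility(w, rays, f=np.log):
    # h(w) = sum_i f(w^H M_i w); with rank-one M_i = P_i * a_i a_i^H, the
    # quadratic form reduces to P_i * |a_i^H w|^2.
    return sum(f(P * np.abs(upa_response(th, ph).conj() @ w) ** 2)
               for P, th, ph in rays)

# Stand-in strongest-ray data (P, theta, phi) for a few UEs.
rays = [(1.0, 95.0, -20.0), (0.6, 100.0, 10.0), (0.3, 110.0, 35.0)]

w = np.ones(64, dtype=complex) / np.sqrt(64)   # quasi-omni candidate beam
print(f"log-utility h(w) = {utility(w, rays):.3f}")

With $f = \ln$, improving the weakest quadratic form raises the utility the most, which is the "proportional fairness" behavior noted above.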
FIG.7shows, in one non-limiting embodiment, examples of different possibilities of wide beams (WBs) being designed to cover different combinations of narrow beams, thus offering a flexible wide beam design of different shapes, for example, circle, triangle, diamond, square, parallelogram, trapezoid, hexagon, bar, etc.; such designs could be produced by any of the processors and/or memories described in more detail with reference toFIG.11, below. InFIG.7, which illustrates the various ways of clustering narrow beams and the corresponding wide beam shapes, each small circle stands for the main lobe of a narrow beam (NB). The narrow beams are usually placed close to each other to avoid coverage holes.FIG.7shows several different shapes of wide beams, which could be produced in accordance with the descriptions herein and utilizing the units described in the discussion underFIG.11and antennas500and600. The square WB could cover 4 NBs that are placed on a grid, while a parallelogram WB could cover 4 NBs placed in a zig-zag manner. InFIG.7, example701shows 4 narrow beams covered by a circular wide beam; example702shows 3 narrow beams covered by a triangular wide beam; example703shows four narrow beams covered by a square wide beam; example704shows four narrow beams covered by a rhomboid wide beam; example705shows four narrow beams covered by a rectangle that is longer in the horizontal direction; example706shows four narrow beams covered by a parallelogram-shaped wide beam; example707shows four narrow beams covered by a rectangle that is longer in the vertical direction; example708shows four narrow beams covered by a diamond-shaped wide beam; example709shows seven narrow beams covered by a hexagonal wide beam; and example710shows six narrow beams covered by a parallelogram-shaped wide beam. FIG.8shows, in another non-limiting embodiment, that the WB coverage region could be defined based on the contour of the composite radiation pattern of NBs; this determination could be performed by any of the processors and/or memories described in more detail with reference toFIG.11, below.FIG.8illustrates an example of the composite radiation pattern of 4 NBs. The coverage region of a WB, which is supposed to cover these 4 NBs, is defined as the area enclosed by the 9 dB contour. FIG.9shows, in yet another non-limiting embodiment, that the covered region could be determined by equations, which could be evaluated by any of the processors and/or memories described in more detail with reference toFIG.11, below. For example, the three coverage regions given inFIG.9are centered at the direction $(\theta_c, \phi_c)$ and represented as:

Diamond: $|\theta - \theta_c| + |\phi - \phi_c| \le d$  (2)

Circle: $\sqrt{(\theta - \theta_c)^2 + (\phi - \phi_c)^2} \le d$  (3)

Square: $\max(|\theta - \theta_c|,\ |\phi - \phi_c|) \le d$  (4)

The parameter $d$ is used to adjust the beamwidth. In another non-limiting embodiment, a wide beam design method is shown: given the coverage region, the concave utility function method is used to generate the WB, and this could be performed by any of the processors and/or memories described in more detail with reference toFIG.11, below. The method first samples the angular directions from the specified coverage region. That specified coverage region could be a requirement to maintain coverage of a UE, a requirement to attain or maintain a certain physical characteristic for the WB, or some other requirement or specification concerning the equipment, the beams, or both, or even some external standard or specification; a sketch of sampling such a region follows. 
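The following non-limiting sketch (hypothetical Python/NumPy; the grid, the center direction, and the beamwidth parameter are assumptions) implements the region tests of equations (2) through (4) and keeps the sampled directions that pass the chosen test, yielding the set of directions fed to the utility maximization.

import numpy as np

def in_diamond(theta, phi, tc, pc, d):
    return abs(theta - tc) + abs(phi - pc) <= d          # equation (2)

def in_circle(theta, phi, tc, pc, d):
    return np.hypot(theta - tc, phi - pc) <= d           # equation (3)

def in_square(theta, phi, tc, pc, d):
    return max(abs(theta - tc), abs(phi - pc)) <= d      # equation (4)

# Sample a grid over the scan range and keep directions inside a circular WB
# region centered at (theta_c, phi_c) = (100, 0) with d = 11 degrees.
thetas = np.arange(80.0, 121.0, 2.0)
phis = np.arange(-60.0, 61.0, 2.0)
C = [(t, p) for t in thetas for p in phis
     if in_circle(t, p, 100.0, 0.0, 11.0)]
print(f"{len(C)} sampled directions in the coverage region")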
Assuming that the array response at the direction (θ, ϕ) is a(θ, ϕ) and the beamforming weight vector of the WB is w, the beam gain pattern is represented by the following equation: P(θ, ϕ) = p(θ, ϕ) w^H a(θ, ϕ) a(θ, ϕ)^H w  (5) The beam is designed to maximize the sum of a concave utility of the beamforming gain, as shown below: max_w Σ_{(θ,ϕ)∈C} f(p(θ, ϕ) w^H a(θ, ϕ) a(θ, ϕ)^H w)  (6) where C is the angular coverage region; a(θ, ϕ) is the array response; p(θ, ϕ) is the antenna element pattern; and f(x) is a non-decreasing concave utility function. The utility function should be a non-decreasing concave function. In one non-limiting option, the utility function f(x) could be set as log(x), where log(x) is a concave function. In regard to a logarithmic utility function, by Jensen's inequality, the equation becomes the following: (1/N) Σ_{i=1}^{N} log P_i ≤ log((1/N) Σ_{i=1}^{N} P_i)  (7) For example, log 10 + log 90 ≈ 6.80 < 7.78 ≈ log 40 + log 60. Note that w^H a(θ, ϕ) a(θ, ϕ)^H w is the radiation power at the direction (θ, ϕ). There is a total radiation constraint (Σ_i P_i < P_total) because of the law of conservation of energy. When maximizing Σ_{i=1}^{N} log P_i subject to a total power constraint (Σ_i P_i < P_total), a fair allocation of the power is preferred. Fair allocation of the radiation power over the specified coverage region yields a wide beam. Thus, an efficient iterative algorithm is created to find a local optimum of w. FIG. 10 illustrates a cyclic coordinate descent algorithm, which sequentially updates one of the beamforming weights at a time. Such an algorithm could be adopted to solve the above optimization problem and could be performed by any of the processors and/or memories described in more detail in FIG. 11, below. Assuming that there are L+1 antenna elements, the coordinate descent algorithm first optimizes w_0 while keeping the other weights unchanged. It then optimizes w_1, w_2, . . ., w_L, and then returns to optimize w_0. The algorithm stops when it converges to a locally optimal w. Moreover, an iterative algorithm to update the beam could take the form of the following equation: max Σ_{i∈I} f_i(w^H M_i w)  (8) s.t.: w_ℓ* w_ℓ = P/L, ∀ℓ; arg(w_ℓ) ∈ {0, 2π/2^b, 4π/2^b, . . ., 2(2^b−1)π/2^b}, ∀ℓ, where arg(x) denotes the phase of x. As such, the cyclic coordinate descent algorithm, which sequentially updates each entry w_ℓ (the update is derived from the KKT condition), can be expressed with the following equation: w_ℓ^(n+1) = √(P/L) exp{j Q_b(arg(Σ_{i∈I} f_i′((w^(n))^H M_i w^(n)) [M_i]_{ℓ,:} w^(n)))}  (9) where f_i′(x) is the gradient if f_i(x) is differentiable (a subgradient if f_i(x) is not differentiable),  (10) [M_i]_{ℓ,:} is the ℓ-th row of the matrix M_i, and  (11) Q_b(x) is a function that quantizes the angle from [0, 2π) to {0, 2π/2^b, 4π/2^b, . . ., 2(2^b−1)π/2^b},  (12) where the iterative algorithm is guaranteed to converge to a stationary point. In FIG. 11 another non-limiting embodiment is shown detailing a procedure for wide beam design. This embodiment can employ a processor in an electronic device, such as processor 410 in device 400 or a module within such a processor, which can be in combination with a memory such as memory 430 and/or 435 in storage device 415, or the embodiment could be a non-transitory, computer-readable medium that stores instructions that, when executed by a processor (such as processor 225 or 410), cause the electronic device to run method 1100. Such functions could also be provided in other embodiments, including any of base stations 101-103 using processor 225 and/or memory 230. Moreover, calculations could be performed using one or more units described in network 100, such as network 130, server 134, or client device 136. Even units such as 115 could relay or perform calculations.
Any of these units could be used in concert with the antenna 500 to perform the method 1100. Furthermore, all the calculations described in this disclosure could be performed by the units described above either singularly or in any combination. In FIG. 11, method 1100 first accepts inputs in step 1101 as to the array size, antenna element spacing, phase-shifter resolution, specified coverage region, antenna element pattern, initial beam, etc. Then the processor formulates a concave utility maximization problem in step 1102 and could adopt the cyclic coordinate descent algorithm to design the wide beam. In step 1103, it outputs a beamforming vector and a beam pattern. Next, in step 1104, it checks the wide beam pattern to determine whether the wide beam pattern meets the design specification(s), for example, whether the peak gain is large enough, the beamwidth is acceptable, there is no coverage hole, etc. If the specifications are met, then in step 1105, it outputs a beam candidate. Otherwise, in step 1106, it chooses another initialization and runs the cyclic coordinate descent algorithm again to solve the concave utility maximization problem. The iterations could repeat several times before finding a beam meeting all the specifications. Once candidates are output, then a codebook can be compiled. Inputs into this framework of the disclosed wide beam design method can include the following inputs in step 1101 of the method 1100: N_h, N_v: size of the uniform planar array, e.g., N_h = N_v = 8  (13) d_h, d_v: element spacing (unit: λ), e.g., d_h = d_v = 0.47λ  (14) b: phase-shifter resolution, e.g., b = 5  (15) specified coverage region, e.g., √((θ−θ_c)² + (ϕ−ϕ_c)²) ≤ 11°  (16) p(θ, ϕ): antenna element pattern, e.g., HQ element pattern (default value) or isotropic pattern  (17) w_0: initial beam, e.g., the narrow beam pointing to the boresight direction  (18) When processed through the concave utility function in step 1102, the outputs in step 1103 are the following: w: an N_h N_v × 1 beamforming vector  (19) and P(θ, ϕ): the beam pattern.  (20) After this step, the results are checked in step 1104 to determine whether they meet the specifications on peak gain and half-power beamwidth (HPBW). If they do meet the specifications, then the beam is output in step 1105. If not, then the initialization w_0 is changed as shown in step 1106 (a sketch of this design loop is given below, after the discussion of FIGS. 12A-12B). FIGS. 12A-12B illustrate, in another non-limiting embodiment, that a single WB codebook could include various shapes of WBs, each figure showing in a key that the circles represent the NBs while the angular shapes represent the WBs. One benefit of having various shapes of WBs within the same codebook is to support different distributions of NBs; such codebooks are produced in accordance with the above descriptions and utilize the units described in the discussion under FIG. 11 and antennas 500 and 600. FIG. 12A, for example, illustrates a grid distribution of the NBs and the corresponding WBs. The NB codebook in FIG. 12A includes 3×8 NBs. It is impossible to use, for instance, 3 WBs of square shape to cover those NBs without overlapping. Instead, FIG. 12A uses square-shape WBs to cover the second and third rows, and bar-shape WBs to cover the first row. FIG. 12B illustrates a zig-zag distribution of the NBs and the corresponding WBs. In such a case, it would not be appropriate to adopt square-shape WBs, as shown by the dotted-line square in FIG. 12B. Instead, parallelogram-shape, diamond-shape, and even larger circular-shape (not shown in the key) WBs are adopted in FIG. 12B.
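The following Python sketch combines the quantized coordinate descent update of Equation (9) with the initialize-design-check loop of method 1100. It is a minimal, non-limiting illustration in which the steering model, the utility gradient (f′(x) = 1/x, i.e., a log utility), and the specification check are all simplifying assumptions rather than the disclosed implementation.

    import numpy as np

    def design_beam(A, w0, b=5, P=1.0, iters=100):
        # Cyclic coordinate descent for max sum_i f(w^H M_i w), M_i = a_i a_i^H,
        # per Equation (9), with f'(x) = 1/x (log utility) and b-bit phases.
        n_ant = A.shape[1]
        amp = np.sqrt(P / n_ant)          # per-element power constraint |w_l|^2 = P/L
        step = 2.0 * np.pi / 2**b         # Q_b quantization step
        w = w0.copy()
        for _ in range(iters):
            for l in range(n_ant):
                g = np.abs(A.conj() @ w) ** 2 + 1e-12   # gains w^H M_i w
                # [M_i]_{l,:} w = a_i[l] * (a_i^H w)
                c = np.sum((1.0 / g) * A[:, l] * (A.conj() @ w))
                w[l] = amp * np.exp(1j * step * np.round(np.angle(c) / step))
        return w

    def method_1100(A, inits, meets_specs):
        # Steps 1102-1106: redesign from new initializations until the
        # resulting beam pattern satisfies the design specification check.
        for w0 in inits:
            w = design_beam(A, w0)
            if meets_specs(np.abs(A.conj() @ w) ** 2):
                return w                   # step 1105: output a beam candidate
        return None                        # no initialization met the specs

    # Illustrative use: an 8-element ULA, directions sampled from a coverage
    # region, random initial beams, and a crude minimum-gain specification.
    rng = np.random.default_rng(0)
    n_ant = 8
    angles = np.deg2rad(np.arange(-10, 11, 2))
    A = np.exp(1j * np.pi * np.outer(np.sin(angles), np.arange(n_ant)))
    inits = [np.exp(1j * rng.uniform(0, 2 * np.pi, n_ant)) / np.sqrt(n_ant)
             for _ in range(5)]
    w = method_1100(A, inits, meets_specs=lambda gains: gains.min() > 0.5)

Because np.angle is 2π-periodic, rounding it to the nearest multiple of 2π/2^b realizes the quantizer Q_b of Equation (12) up to an equivalent phase wrap.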
In another non-limiting embodiment, the WBs in a single WB codebook could cover different numbers of narrow beams. In the example shown in FIG. 12B, a WB could cover 4 or 7 NBs. Moreover, if the total number of the NBs is a prime number, for example, 29, then it is impossible to have a WB codebook with the same number of NBs per WB. In another non-limiting embodiment, the WB shape and size are determined by the deployment scenario. In a cell with high buildings, the WBs could have a tall shape, which is wide in the vertical direction but narrow in the horizontal direction. In a cell covering a plaza area without high buildings, the WBs could have a broad shape, which is wide in the horizontal direction but narrow in the vertical direction. In another non-limiting embodiment, the WB beam shape and size could be determined through a simulation of UE dropping and UE movement in the cellular network. The UEs could be dropped on a street and/or in a building, and their movement simulated. Checking the performance includes, but would not be limited to: the average received WB signal strength of the UE; the WB switch frequency as the UE moves; the hierarchical beam search accuracy; and the received WB signal strength for the UE on the cell edge. FIG. 13 illustrates an example of the WB RSRP distribution of a 3-sector cell. In this example, the BS is located at the point (0, 0), and the UEs are dropped on the street. The average WB RSRP and cell-edge WB RSRP could be used as the performance metrics to determine the best WB shape and size. In yet another non-limiting embodiment, the NBs pointing to the cell-center could be covered by a large WB, thus reducing the number of WBs, while the NBs pointing to the cell-edge could be covered by a small WB, thus compensating for the high path-loss. FIG. 14 illustrates an example where there are three rows of NBs. Assume that the top row beams are serving the far cell-edge UEs and the bottom row beams are for the close cell-center UEs. The WBs are designed in such a way that the bottom row WBs cover 6 NBs, the center row WBs cover 4 NBs, and the top row WBs cover 2 NBs. In one non-limiting embodiment, the WB codebook design takes into account the road directions and the WB switching frequency. WB switching usually incurs signaling overhead and/or triggers a prohibit timer that freezes the beam tracking operation for a while (e.g., 100 ms). Frequent WB switching can degrade the UE throughput, and a WB codebook with less frequent WB switching is preferred. FIG. 15A and FIG. 15B illustrate two different WB codebooks, 1500A and 1500B, generated as described above, coming from a base station such as BS 102 or as shown by FIGS. 2, 4, 5, and 6, serving a 120-degree cell sector, with a UE, such as any shown in FIG. 1, moving along the path indicated by the arrow in each figure, namely, 1501A and 1501B. In FIG. 15A, the WB codebook 1500A results in a higher WB switching frequency than the WB codebook 1500B of FIG. 15B. FIG. 15A shows three wide beams covering the cell edge 1502A, cell middle 1503A, and cell center area 1504A. FIG. 15B shows 4 wide beams, each of them covering a 30-degree region (1502B, 1503B, 1504B, and 1505B). The two designs are likely to result in different frequencies of WB switching. For example, consider that a UE moves along a road from west to east in FIG. 15A: the UE switches the WB around 2 times, as the UE path 1501A passes through areas 1502A, 1503A, and 1504A. In FIG. 15B, however, there are no WB switches when the UE moves along the same route, as the UE path 1501B passes only through area 1503B.
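As a minimal, non-limiting sketch of comparing candidate WB codebooks by switching frequency, the following Python snippet assumes a simulated drive route for which the WB RSRP of every wide beam has been pre-computed at each path sample; the serving WB is taken to be the strongest one, and a switch is counted whenever it changes. The random RSRP traces stand in for the output of a real drop-and-move simulation.

    import numpy as np

    def count_wb_switches(path_rsrp):
        # path_rsrp: (num_samples, num_wbs) simulated WB RSRP along the route.
        serving = np.argmax(path_rsrp, axis=1)           # serving WB per sample
        return int(np.sum(serving[1:] != serving[:-1]))  # number of WB switches

    # Hypothetical RSRP traces for two candidate codebooks along one route;
    # the codebook with the smaller switch count would be preferred.
    rng = np.random.default_rng(1)
    rsrp_codebook_a = rng.normal(-90.0, 5.0, size=(200, 3))
    rsrp_codebook_b = rng.normal(-90.0, 5.0, size=(200, 4))
    print(count_wb_switches(rsrp_codebook_a), count_wb_switches(rsrp_codebook_b))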
In another particular non-limiting embodiment, if the NB codebook is site-specific, then the wide beam codebook, which builds on top of the NB codebook, should also be site-specific. For example, the NB and WB codebooks are generated based on the ray-tracing data and are thus site-specific. In yet another non-limiting embodiment, the WB codebook could be dependent on the mechanical tilting or electrical tilting angles. As the tilting angle changes, the region served by the NBs could change. For example, the NBs previously serving cell-edge UEs now serve the cell-center UEs. The WB codebook thus should change as well. In one non-limiting option, the BS could design multiple WB codebooks for different tilting angles and save them in the memory. In an online deployment, the BS retrieves from its memory the WB codebook corresponding to its tilting angle. If the tilting angle changes with time, the WB codebook should also change accordingly. In still another non-limiting embodiment, the beamwidth of the WB is dependent on the statistics of the online UE reports. In one non-limiting option, the BS examines the best beam index and signal strength (e.g., CSI-RS L1-RSRP, CSI-RS CQI, etc.) reported by UEs. If the BS finds that some NBs generally show relatively weak signal strength, the BS could infer that the channels of those NBs are bad, for example, where there is large path-loss or penetration loss for those NBs. Therefore, the BS could design or adopt a narrower WB to cover those NBs. On the other hand, if there are some NBs showing relatively strong signal strength, the BS could design or adopt a wider WB to cover those NBs. In still yet another non-limiting embodiment, the WB codebook design takes into account the chance of WB switching during the site-specific UE movement. The street network of each cell could be significantly different, and the WB switching chance could be quite different when applying the same WB codebook. Therefore, a site-specific WB codebook is needed to reduce the WB switching chance. In one non-limiting option, this can be done by simulating the UE movement in the cell and comparing the resulting WB switching chances. The WB codebook resulting in a small WB switching chance could be finally chosen. In yet another non-limiting embodiment, a non-transitory computer-readable medium includes a plurality of instructions. The plurality of instructions, when executed by at least one processor, is configured to cause the at least one processor to perform any of the functions or features of any, several, or all of the various non-limiting embodiments described hereinabove and further described in specifics hereinbelow. Such processor(s) could also be considered as controllers of various devices. A non-limiting embodiment, which could be labeled as example 1, is a method comprising: identifying input data including at least one of an array size, an antenna element spacing, a phase-shifter resolution, a specified coverage region, or an antenna element pattern; processing the input data and an initial beam through a non-decreasing concave utility function using a cyclic coordinate descent algorithm to generate a wide beam meeting one or more design specifications; and producing a codebook including the wide beam.
Another non-limiting embodiment, which could be labeled as example 2, is the method of example 1, wherein processing the input data to generate the wide beam further comprises: maximizing the non-decreasing concave utility function with multiple random initial beamforming weights, wherein the non-decreasing concave utility function has a gradient or a subgradient and wherein the cyclic coordinate descent algorithm sequentially updates each beamforming weight until convergence. Another non-limiting embodiment, which could be labeled as example 3, is the method of example 2, wherein maximizing the non-decreasing concave utility function further comprises: using angular directions from a specified coverage region, with an array response at direction a(θ, ϕ) and beamforming weights of the wide beam as w, where the beam gain pattern is P(θ, ϕ) = p(θ, ϕ) w^H a(θ, ϕ) a(θ, ϕ)^H w; and identifying an equation: max_w Σ_{(θ,ϕ)∈C} f(p(θ, ϕ) w^H a(θ, ϕ) a(θ, ϕ)^H w), wherein C is an angular coverage region, a(θ, ϕ) is the array response, p(θ, ϕ) is the antenna element pattern, and f(x) is the non-decreasing concave utility function. Another non-limiting embodiment, which could be labeled as example 4, is the method of example 1, wherein processing the input data to generate the wide beam further comprises one or more of: selecting the wide beam as having a largest minimal gain over the specified coverage region, wherein the largest minimal gain meets the one or more design specifications; varying the input data; or choosing a design requirement comprising one or more of mean gain and mean data rate. Another non-limiting embodiment, which could be labeled as example 5, is the method of example 1, further comprising generating the wide beam to cover a region comprising combinations of narrow beams by at least one of: shaping the wide beam based on the location of a main lobe for each of the narrow beams; shaping the wide beam based on a contour of a composite radiation pattern of the narrow beams; or shaping the wide beam based on one of three coverage regions centered at a direction (θ_c, ϕ_c) and represented by: Diamond: |θ−θ_c| + |ϕ−ϕ_c| ≤ d, Circle: √((θ−θ_c)² + (ϕ−ϕ_c)²) ≤ d, or Square: max(|θ−θ_c|, |ϕ−ϕ_c|) ≤ d, wherein d is a parameter used to adjust beamwidth; and wherein the generated wide beam comprises a beamforming vector and a beam pattern. Another non-limiting embodiment, which could be labeled as example 6, is the method of example 1, wherein: the one or more design specifications include at least one of a peak gain, a half-power beamwidth (HPBW), or lack of coverage holes, and the codebook comprises a plurality of wide beams of various shapes and can cover a different number of narrow beams. Another non-limiting embodiment, which could be labeled as example 7, is the method of example 1, further comprising using the codebook to at least one of: depend on the size of the wide beam by applying a smaller size wide beam for a cell-center area and a larger size wide beam for a cell-edge area, wherein size is a function of a number of narrow beams covered by the generated wide beam; or favor a lesser wide beam switching frequency to support at least one mobile user equipment (UE).
Yet another non-limiting embodiment, which could be labeled as example 8, is an electronic device comprising: a memory configured to store a hierarchical codebook; and a processor operably connected to the memory, the processor configured to: identify input data including at least one of an array size, an antenna element spacing, a phase-shifter resolution, a specified coverage region, or an antenna element pattern; process the input data and an initial beam through a non-decreasing concave utility function using a cyclic coordinate descent algorithm to generate a wide beam meeting one or more design specifications; and produce a codebook including the wide beam. Yet another non-limiting embodiment, which could be labeled as example 9, is the electronic device of example 8, wherein to process the input data to generate the wide beam, the processor is further configured to maximize the non-decreasing concave utility function with multiple random initial beamforming weights, wherein the non-decreasing concave utility function has a gradient or a subgradient and wherein the cyclic coordinate descent algorithm sequentially updates each beamforming weight until convergence. Yet another non-limiting embodiment, which could be labeled as example 10, is the electronic device of example 9, wherein to maximize the non-decreasing concave utility function, the processor is further configured to: use angular directions from a specified coverage region, with an array response at direction a(θ, ϕ) and beamforming weights of the wide beam as w, where the beam gain pattern is P(θ, ϕ) = p(θ, ϕ) w^H a(θ, ϕ) a(θ, ϕ)^H w; and identify an equation: max_w Σ_{(θ,ϕ)∈C} f(p(θ, ϕ) w^H a(θ, ϕ) a(θ, ϕ)^H w), wherein C is an angular coverage region, a(θ, ϕ) is the array response, p(θ, ϕ) is the antenna element pattern, and f(x) is the non-decreasing concave utility function. Yet another non-limiting embodiment, which could be labeled as example 11, is the electronic device of example 8, wherein to process the input data to generate the wide beam, the processor is further configured to one or more of: select the wide beam as having a largest minimal gain over the specified coverage region, wherein the largest minimal gain meets the one or more design specifications; vary the input data; or choose a design requirement comprising one or more of mean gain and mean data rate. Yet another non-limiting embodiment, which could be labeled as example 12, is the electronic device of example 8, wherein the processor is further configured to generate the wide beam to cover a region comprising combinations of narrow beams by at least one of: shaping the wide beam based on the location of a main lobe for each of the narrow beams; shaping the wide beam based on a contour of a composite radiation pattern of the narrow beams; or shaping the wide beam based on one of three coverage regions centered at a direction (θ_c, ϕ_c) and represented by: Diamond: |θ−θ_c| + |ϕ−ϕ_c| ≤ d, Circle: √((θ−θ_c)² + (ϕ−ϕ_c)²) ≤ d, or Square: max(|θ−θ_c|, |ϕ−ϕ_c|) ≤ d, wherein d is a parameter used to adjust beamwidth; and wherein the generated wide beam comprises a beamforming vector and a beam pattern. Yet another non-limiting embodiment, which could be labeled as example 13, is the electronic device of example 8, wherein: the one or more design specifications include at least one of a peak gain, a half-power beamwidth (HPBW), or lack of coverage holes, and the codebook comprises a plurality of wide beams of various shapes and can cover a different number of narrow beams.
Yet another non-limiting embodiment, which could be labeled as example 14, is the electronic device of example 13, wherein the processor is further configured to use the codebook to at least one of: depend on the size of the wide beam by applying a smaller size wide beam for a cell-center area and a larger size wide beam for a cell-edge area, wherein size is a function of a number of narrow beams covered by the generated wide beam; or favor a lesser wide beam switching frequency to support at least one mobile user equipment (UE). Still another non-limiting embodiment, which could be labeled as example 15, is a non-transitory, computer-readable medium storing instructions that, when executed by a processor of an electronic device, cause the electronic device to: identify input data including at least one of an array size, an antenna element spacing, a phase-shifter resolution, a specified coverage region, or an antenna element pattern; process the input data and an initial beam through a non-decreasing concave utility function using a cyclic coordinate descent algorithm to generate a wide beam meeting one or more design specifications; and produce a codebook including the wide beam. Still another non-limiting embodiment, which could be labeled as example 16, is the non-transitory, computer-readable medium storing instructions of example 15, wherein the instructions to process the input data to generate the wide beam further comprise instructions that, when executed by the processor, cause the electronic device to maximize the non-decreasing concave utility function with multiple random initial beamforming weights, wherein the non-decreasing concave utility function has a gradient or a subgradient and wherein the cyclic coordinate descent algorithm sequentially updates each beamforming weight until convergence. Still another non-limiting embodiment, which could be labeled as example 17, is the non-transitory, computer-readable medium storing instructions of example 16, wherein the instructions to maximize the non-decreasing concave utility function further comprise instructions that, when executed by the processor, cause the electronic device to: use angular directions from a specified coverage region, with an array response at direction a(θ, ϕ) and beamforming weights of the wide beam as w, where the beam gain pattern is P(θ, ϕ) = p(θ, ϕ) w^H a(θ, ϕ) a(θ, ϕ)^H w; and identify an equation: max_w Σ_{(θ,ϕ)∈C} f(p(θ, ϕ) w^H a(θ, ϕ) a(θ, ϕ)^H w), wherein C is an angular coverage region, a(θ, ϕ) is the array response, p(θ, ϕ) is the antenna element pattern, and f(x) is the non-decreasing concave utility function. Still another non-limiting embodiment, which could be labeled as example 18, is the non-transitory, computer-readable medium storing instructions of example 15, wherein the instructions to process the input data to generate the wide beam further comprise instructions that, when executed by the processor, cause the electronic device to one or more of: select the wide beam as having a largest minimal gain over the specified coverage region, wherein the largest minimal gain meets the one or more design specifications; vary the input data; or choose a design requirement comprising one or more of mean gain and mean data rate.
Still another non-limiting embodiment, which could be labeled as example 19, is the non-transitory, computer-readable medium storing instructions of example 15, further comprising instructions that, when executed by the processor, cause the electronic device to generate the wide beam to cover a region comprising combinations of narrow beams by at least one of: shaping the wide beam based on the location of a main lobe for each of the narrow beams; shaping the wide beam based on a contour of a composite radiation pattern of the narrow beams; or shaping the wide beam based on one of three coverage regions centered at a direction (θ_c, ϕ_c) and represented by: Diamond: |θ−θ_c| + |ϕ−ϕ_c| ≤ d, Circle: √((θ−θ_c)² + (ϕ−ϕ_c)²) ≤ d, or Square: max(|θ−θ_c|, |ϕ−ϕ_c|) ≤ d, wherein d is a parameter used to adjust beamwidth; and wherein the generated wide beam comprises a beamforming vector and a beam pattern. Still another non-limiting embodiment, which could be labeled as example 20, is the non-transitory, computer-readable medium storing instructions of example 15, wherein: the one or more design specifications include at least one of a peak gain, a half-power beamwidth (HPBW), or lack of coverage holes, and the codebook comprises a plurality of wide beams of various shapes and can cover a different number of narrow beams. And, lastly, still another non-limiting embodiment, which could be labeled as example 21, is the non-transitory, computer-readable medium storing instructions of example 15, further comprising instructions that, when executed by the processor, cause the electronic device to: depend on the size of the wide beam by applying a smaller size wide beam for a cell-center area and a larger size wide beam for a cell-edge area, wherein size is a function of a number of narrow beams covered by the generated wide beam; and favor a lesser wide beam switching frequency to support at least one mobile user equipment (UE). Although the present disclosure has been described with exemplary embodiments, various changes and modifications may be suggested to one skilled in the art. It is intended that the present disclosure encompass such changes and modifications as fall within the scope of the appended claims. None of the description in this application should be read as implying that any particular element, step, or function is an essential element that must be included in the claims' scope. The scope of patented subject matter is defined by the claims.
67,381
11863267
MODE FOR CARRYING OUT THE INVENTION Hereinafter, embodiments of the present invention will be described in detail with reference to the accompanying drawings so that those of ordinary skill in the art may easily implement the present invention. However, the present invention may be implemented in various forms and is not limited to the embodiments described herein. In the drawings, parts irrelevant to the description are omitted in order to clearly describe the present invention, and like reference numerals refer to like elements throughout the specification. Terms including ordinal numbers, such as first and second, may be used to describe various elements, but the elements are not limited by the terms. The above terms are used only to distinguish one component from another. For example, a first component may be referred to as a second component and vice versa without departing from the scope of the inventive concept. The terms used in the present application are used only to describe specific embodiments, and are not intended to limit the present invention. Terms of a singular form may include plural forms unless otherwise specified. Hereinafter, the present invention will be described in detail with reference to the drawings. 1. Channel Estimation Apparatus for Performing Beamforming According to Present Invention A channel estimation apparatus according to the present invention is an apparatus for applying beamforming in a communication system using a standard based on a single reception antenna in which beamforming is not considered, and includes a transmitter and a receiver. FIG. 1 is a view showing the overall configuration of the transmitter according to the present invention, and FIG. 2 is a view showing the detailed configuration of each of the pilot signal generation module (120) and the transmission control module (160). With reference to these drawings, the transmitter according to the present invention will be described. 1.1. Transmitter 100 The transmitter 100 of the present invention is provided with at least two or more transmission antennas, and is configured to generate a number of transmission pilot signals equal to the number of the provided transmission antennas and transmit them to the receiver, receive the phase shift information corresponding to the pilot signals as feedback from the receiver, and adjust the phase for each transmission antenna by using the fed-back phase shift information. Here, the phase shift information means a codebook number corresponding to an optimal precoding vector. This transmitter 100 may be configured to include the following components. 1.1.1. Transmission Communication Module 110 The transmission communication module 110 generates a frame including predetermined beamforming information, and transmits the generated frame to the receiver 200. More specifically, it is configured to generate a frame including predetermined beamforming information. In this case, the predetermined beamforming information is included in the beginning of the frame. Here, the predetermined beamforming information may include whether the transmitter uses beamforming, the number of antennas used for beamforming (i.e., the number of transmission antennas), and a parameter for generating a Zadoff-Chu sequence (hereinafter, a signal generation parameter).
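As a minimal, non-limiting illustration only, the beamforming information listed above could be serialized at the beginning of a frame as in the following Python sketch; the field widths and byte order are hypothetical assumptions, not part of the disclosed standard.

    import struct

    def pack_beamforming_info(uses_beamforming, n_tx, n_zc, q):
        # Flag for beamforming use, number of transmission antennas, and the
        # Zadoff-Chu signal generation parameters (sequence length N_ZC and
        # characteristic variable q), packed big-endian (illustrative layout).
        return struct.pack(">BBHH", int(uses_beamforming), n_tx, n_zc, q)

    header = pack_beamforming_info(True, 4, 80, 7)   # illustrative values
    print(header.hex())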
The operation of generating a frame including such beamforming information and then transmitting the generated frame to the receiver 200 may be performed when a transmission communication module connection signal for connecting the transmission communication module 110 to the phase shift network 140 is output from the transmission control module 160 to be described later. This transmission communication module 110 transmits the frame including the generated beamforming information to the phase shift network 140 to be described later, and enables transmission to the receiver 200 through the multiple transmission antennas 150 connected to the phase shift network 140. Meanwhile, the frame is a unit of information transmitted as one block or packet in a data communication network, and refers to a normal data frame. 1.1.2. Pilot Signal Generation Module 120 The pilot signal generation module 120 is configured to generate a Zadoff-Chu sequence based on a predetermined signal generation parameter from the transmission control module 160 to be described later, and to generate the same number of transmission pilot signals as the number of transmission antennas provided in the transmitter 100 using the generated Zadoff-Chu sequence. The pilot signal generation module 120 may be configured to include the following detailed components. A. Zadoff-Chu Sequence Generation Module 122 The Zadoff-Chu sequence generation module 122 may receive a predetermined signal generation parameter from the transmission control module 160 and, based on this, may generate a Zadoff-Chu sequence that is a basis for generating a transmission pilot signal to be transmitted to the receiver 200. Generating the Zadoff-Chu sequence may be performed by the following (Equation 1): z_q(k) = e^(−jπqk(k+1)/N_ZC), k = 0, 1, 2, . . ., N_ZC−1 (Equation 1) (z_q(k): Zadoff-Chu sequence generated based on the signal generation parameters, k: sample order of the sequence, N_ZC: length of the generated sequence (signal), q: a variable that determines the characteristics of the sequence using a prime number smaller than N_ZC) Here, the predetermined signal parameter received from the transmission control module 160, that is, the signal generation parameter for generating the Zadoff-Chu sequence, consists of the length N_ZC of the generated sequence (signal) and a variable q that determines the characteristics of the sequence. In an embodiment of the present invention, a value obtained by adding 1 to the largest value among the prime factors of N_ZC is used to set the value of q. B. Transmission Pilot Signal Generation Module 124 The transmission pilot signal generation module 124 may generate the same number of transmission pilot signals as the number of transmission antennas connected to the phase shift network 140 by Equation 2 below, based on the Zadoff-Chu sequence generated in the Zadoff-Chu sequence generation module 122: p_n(k) = z_q({k − n⌊N_ZC/N_tx⌋} mod N_ZC), k = 0, 1, . . ., N_ZC−1, n = 0, 1, . . ., N_tx−1 (Equation 2) (p_n(k): transmission pilot signal of the n-th transmission antenna generated based on the Zadoff-Chu sequence, ⌊ ⌋: rounding-down operation, N_tx: total number of transmission antennas connected to the phase shift network) Such an operation of the pilot signal generation module 120 may be performed when the pilot signal generation module connection signal for connecting the pilot signal generation module 120 to the phase shift network 140 is output from the transmission control module 160.
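As a non-limiting illustration of (Equation 1) and (Equation 2), the following Python sketch generates the Zadoff-Chu sequence and the cyclically shifted per-antenna pilots, and then checks the correlation property discussed with reference to FIG. 3 below; the values N_ZC = 80, N_tx = 4, and q = 7 are illustrative (q chosen coprime with N_ZC).

    import numpy as np

    def zadoff_chu(n_zc, q):
        # Equation 1: z_q(k) = exp(-j*pi*q*k*(k+1)/N_ZC)
        k = np.arange(n_zc)
        return np.exp(-1j * np.pi * q * k * (k + 1) / n_zc)

    def tx_pilots(n_zc, q, n_tx):
        # Equation 2: p_n(k) = z_q((k - n*floor(N_ZC/N_tx)) mod N_ZC),
        # i.e., each antenna transmits a cyclic shift of the same sequence.
        z = zadoff_chu(n_zc, q)
        shift = n_zc // n_tx
        return np.stack([np.roll(z, n * shift) for n in range(n_tx)])

    p = tx_pilots(80, 7, 4)
    # Circular cross-correlation between antenna 1's and antenna 0's pilots
    # peaks only at the relative shift N_ZC/N_tx = 20 and is ~0 elsewhere.
    xcorr = np.fft.ifft(np.fft.fft(p[1]) * np.conj(np.fft.fft(p[0])))
    print(np.argmax(np.abs(xcorr)))   # -> 20

The single sharp correlation peak is what allows the receiver to separate the per-antenna pilots even though they all share one Zadoff-Chu root sequence.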
FIG. 3 is a diagram illustrating a result of a correlation operation between transmission pilot signals generated by a transmitter of the present invention. In general, the Zadoff-Chu sequence may generate a signal with a constant amplitude, and has the characteristic of outputting a correlation value very close to zero when the correlation is measured by delaying the sequence by one or more samples. FIG. 3 is an output obtained by generating 4 transmission pilot signals having a sequence length N_ZC of 80 samples using the channel estimation apparatus according to the present invention, utilizing the characteristics of this Zadoff-Chu sequence, and performing a correlation operation between two transmission pilot signals. Through the results shown in FIG. 3, it may be confirmed that the highest value is output at a delay equal to the signal length N_ZC divided by the number of antennas. 1.1.3. Signal Switching Switch 130 As shown in FIG. 2, the signal switching switch 130 is configured between, on the one hand, the transmission communication module 110 and the pilot signal generation module 120 and, on the other hand, the phase shift network 140, to switch the connection between the transmission communication module 110 and the pilot signal generation module 120 with respect to the phase shift network according to the control of the transmission control module 160. More specifically, for example, one end is connected to the phase shift network 140, and the other end is located between the transmission communication module 110 and the pilot signal generation module 120, so that the other end may be implemented in a way that it is connected to any one of the transmission communication module 110 and the pilot signal generation module 120 according to the control of the transmission control module 160. That is, the signal switching switch 130 connects the transmission communication module 110 or the pilot signal generation module 120 to the phase shift network 140. 1.1.4. Phase Shift Network 140 The phase shift network 140 is configured to transmit an output signal from the transmission communication module 110 or the pilot signal generation module 120 connected through the signal switching switch 130 to the multiple transmission antennas 150 connected thereto. The phase shift network 140 is configured to include phase shifters (not shown) corresponding to each of the connected transmission antennas 150. A phase shifter is provided corresponding to each transmission antenna 150 in the phase shift network 140, and is configured to convert the phase of the signal routed to the corresponding transmission antenna 150 according to the phase delay value for each transmission antenna 150 set by the transmission control module 160. That is, the phase shifters generate a phase delay of a signal of the same phase from the transmission communication module 110 or the pilot signal generation module 120 according to each phase delay value set by the transmission control module 160. Here, when the communication protocol between the transmitter 100 and the receiver 200 is started, all phase shifters (not shown) are set to a predetermined initial phase delay value under the control of the transmission control module 160. 1.1.5. Transmission Antenna 150 The transmitter 100 of the present invention includes a plurality of transmission antennas 150. The multiple transmission antennas 150 are connected to the phase shift network 140 as shown in FIG. 1 to radiate the output signal transmitted from the phase shift network 140 toward the receiver 200 or to receive a signal radiated from the single reception antenna 260 of the receiver 200.
Unlike a conventional transmitter using the existing single antenna-based communication standard, the transmitter of the present invention, which enables beamforming in a single antenna-based communication standard system, includes a plurality of transmission antennas 150. 1.1.6. Transmission Control Module 160 The transmission control module 160 is configured to obtain phase shift information corresponding to the transmission pilot signals generated by the pilot signal generation module 120 from the receiver 200 through the control of the transmission communication module 110, the pilot signal generation module 120 and the signal switching switch 130, and to adjust the phase delay of each signal of the phase shift network 140 by using the obtained phase shift information. The transmission control module 160 may be configured to include the following detailed components. A. Signal Generation Parameter Generation Module 162 The signal generation parameter generation module 162 may generate a predetermined signal generation parameter for generating the Zadoff-Chu sequence and transmit the generated predetermined signal generation parameter to the Zadoff-Chu sequence generation module 122 of the pilot signal generation module 120. Here, the predetermined signal generation parameter for generating the Zadoff-Chu sequence includes the length N_ZC of the generated sequence and the variable q determining the characteristics of the sequence. B. Signal Switching Switch Control Module 164 The signal switching switch control module 164 outputs, to the signal switching switch 130, a signal for switching the connection between the transmission communication module 110 and the pilot signal generation module 120 with respect to the phase shift network 140, and controls the switching operation of the switch 130. More specifically, in order to transmit a transmission pilot signal to the receiver 200 when the communication protocol is started, the pilot signal generation module 120 may be connected to the phase shift network 140, for example, by outputting a pilot signal generation module connection signal to the signal switching switch 130. Then, when the transmission of the transmission pilot signals from the pilot signal generation module 120 to the receiver 200 is completed, in order to transmit a frame including beamforming information to the receiver 200, the transmission communication module 110 may be connected to the phase shift network 140, for example, by outputting a transmission communication module connection signal to the signal switching switch 130. C. Precoding Vector Extraction Module 166 The precoding vector extraction module 166 receives the corresponding phase shift information from the receiver 200, which has received the transmission pilot signals and beamforming information transmitted by the pilot signal generation module 120 and the transmission communication module 110, and extracts the corresponding precoding vector from the pre-stored codebook using the phase shift information. Here, the phase shift information means a codebook number corresponding to the optimal precoding vector detected by the receiver 200. D. Phase Delay Value Setting Module 168 The phase delay value setting module 168 sets the phase delay value of each of the phase shifters (not shown) configured in the phase shift network 140 described above to the phase delay value corresponding to each transmission antenna 150 included in the precoding vector extracted by the precoding vector extraction module 166.
Here, the phase delay value setting module 168 may set the phase delay values of all phase shifters (not shown) of the phase shift network 140 to a predetermined initial phase delay value in the initial state when the communication protocol is started between the transmitter 100 and the receiver 200. That is, when the communication protocol is started, all phase shifters (not shown) of the phase shift network 140 are first set to a predetermined initial phase delay value; thereafter, the phase delay value of each phase shifter (not shown) is set differently using the phase shift information fed back from the receiver 200 in response to the transmission pilot signals and the frame including beamforming information. Here, setting the phase delay values of the phase shifters means setting how the phase shift is to be performed at each transmission antenna. After the signal phase delay values of the phase shift network are set according to the feedback of the codebook number from the receiver 200 in this way, the communication between the transmitter 100 and the receiver 200 may be performed using the communication modules 110 and 210. 1.1.7. Transmission Memory Module 170 The transmission memory module 170 is a component in which a codebook including a plurality of pieces of precoding vector information is stored in advance. The codebook of the present invention includes phase delay value information for the phase shifters (not shown) of the phase shift network 140 and, more specifically, includes a phase delay value for each transmission antenna 150 connected to the phase shift network 140 and information on a plurality of precoding vectors to which corresponding codebook numbers are assigned. The precoding vector is given by the following (Equation 3): w_i = [1, e^(jθ_2(i)), . . ., e^(jθ_Ntx(i))]^T (Equation 3) (w_i: precoding vector, i: codebook number) Here, each element constituting the above precoding vector represents a phase delay value of each antenna for delaying a signal. The phase delay values, which are the elements constituting the precoding vector, may be values designed based on actual physical angles through a predetermined experiment. FIG. 4 is a diagram showing the overall configuration of a receiver according to the present invention. The receiver of the present invention will be described with reference to FIG. 4. 1.2. Receiver 200 The receiver 200 according to the present invention is provided with a single reception antenna 260 as shown in FIG. 4, and is configured to estimate a channel for beamforming based on the transmission pilot signals transmitted from the transmitter 100, derive an optimal precoding vector to be applied by the transmitter 100 based on the estimated channel information and a pre-stored codebook, and feed back the corresponding phase shift information to the transmitter 100. Here, the optimal precoding vector means a precoding vector that allows the receiver 200 to receive the transmission signal from the transmitter 100 with the highest power. The receiver 200 may be configured to include the following components. 1.2.1. Reception Communication Module 210 The reception communication module 210 may obtain the beamforming information included in the frame generated and transmitted by the transmitter 100 described above through frame synchronization, and may receive the transmission pilot signals using the obtained beamforming information.
Frame synchronization may be performed by sampling over an interval longer than the sum of the maximum frame length and the maximum length of the transmission pilot signal, using a conventional synchronization technique. In this way, the synchronization, in the reception communication module 210, of the frame transmitted from the transmission communication module 110 of the transmitter 100 may be described as finding the beamforming information included in the beginning of the frame. FIG. 6 is a diagram illustrating the division and sequence of signals transmitted over time in the communication system of the present invention. As shown in FIG. 6, before communication using a communication module between the transmitter 100 and the receiver 200 proceeds, pilot signals for beamforming are transmitted. After transmission of the pilot signals, the transmitter transmits information for beamforming to the receiver through the first frame of the communication module, and the receiver performs channel estimation and estimation of an optimal precoding vector based on the beamforming information included in the first frame. The beamforming information obtained through synchronization may include, as described above, whether the transmitter uses beamforming, the number of antennas used for beamforming (the number of transmission antennas), and a signal generation parameter for generating a Zadoff-Chu sequence. On the other hand, in receiving the transmission pilot signals from the transmitter 100 using this beamforming information, the receiver may identify the number and length of the transmission pilot signals, and collect/receive the transmission pilot signals, through the signal generation parameter for generating the Zadoff-Chu sequence included in the beamforming information. Here, information on the collected/received transmission pilot signals may be separately stored in the reception memory module 270. On the other hand, the reception communication module 210 is configured to include a receiving circuit (not shown) and a transmitting circuit (not shown), so that it may transmit or receive a signal to or from the single reception antenna 260 through a transmission/reception switching switch 240 to be described later. More specifically, the reception communication module 210 may receive a frame and transmission pilot signals from the transmitter 100 in a state in which the transmission/reception switching switch 240 is connected to the receiving circuit (not shown), and may transmit the codebook number of the optimal precoding vector detected by the optimal precoding vector detection module 230, to be described later, through the single reception antenna 260 to the transmitter 100 in a state in which the transmission/reception switching switch 240 is connected to the transmitting circuit (not shown). 1.2.2. Channel Estimation Module 220 When the beamforming use of the transmitter is confirmed using the beamforming information from the transmitter 100 obtained by the reception communication module 210, the channel estimation module 220 may generate reception pilot signals based on the received transmission pilot signals and estimate a channel using the transmission pilot signals and the reception pilot signals. Here, as described above, since the beamforming information from the transmitter 100 includes whether the transmitter uses beamforming, it is possible to check the beamforming use of the transmitter from this information. The channel estimation module 220 may be configured to include the following detailed components, as shown in FIG. 5.
A. Reception Pilot Signal Generation Module 222 The reception pilot signal generation module 222 generates reception pilot signals by using the beamforming information from the transmitter 100 obtained by the reception communication module 210. More specifically, the reception pilot signals may be generated using the signal generation parameter for generating a Zadoff-Chu sequence included in the obtained beamforming information. B. Calculation Module 224 The calculation module 224 may perform a channel estimation operation by the following (Equation 5) using the transmission pilot signals from the transmitter 100 received by the reception communication module 210 and the reception pilot signals generated by the reception pilot signal generation module 222. First, the received transmission pilot signal from the transmitter 100 is expressed by an equation as shown in Equation 4 below: y(n) = Σ_{l=0}^{L−1} h_l^T p(n−l), h_l ∈ ℂ^(Ntx×1), p(n) ∈ ℂ^(Ntx×1) (Equation 4) (p(n): pilot vector including the samples of the transmission pilot signal of each transmission antenna, h_l: channel vector of the l-th path among the multipath channels) An operation for channel estimation may be performed using (Equation 4) expressed as above. The operation for channel estimation is performed as a sum over all samples after multiplying y(n) and p^H(n) in (Equation 4), as shown in Equation 5 below: Σ_{n=0}^{N−1} (1/ρ) y(n) p^H(n) = ĥ (Equation 5) (ĥ: estimated channel, ρ: constant for normalization, p^H(n): value obtained by performing a Hermitian operation on p(n)) Here, the estimated channel ĥ means a vector including amplitude/phase distortion information for each transmission antenna. Considering the characteristics of the Zadoff-Chu sequence, ĥ may be approximated as a vector corresponding to the first channel among the multipath channels. In general, in a multipath channel, since the path that receives the highest power is most likely the one that arrives first, beamforming using ĥ is more efficient than performing beamforming toward another path. Therefore, ĥ, representing the estimated channel, may be approximated by a vector corresponding to the first channel among the multipath channels. 1.2.3. Optimal Precoding Vector Detection Module 230 The optimal precoding vector detection module 230 detects an optimal precoding vector to be applied by the transmitter 100 among the precoding vectors included in a pre-stored codebook using the channel information estimated by the channel estimation module 220. Detecting the optimal precoding vector from among the precoding vectors included in the codebook using the estimated channel information is performed by the following (Equation 6): i_opt = argmax_i |ĥ^H w_i| (Equation 6) (i_opt: codebook number indicating the optimal precoding vector, ĥ: estimated channel, w_i: precoding vector, i: codebook number) Here, the optimal precoding vector means a precoding vector capable of receiving the transmission signal from the transmitter 100 in the receiver 200 with the highest power, as described above. When the optimal precoding vector is detected as described above, the codebook number i_opt indicating it is transmitted to the reception communication module 210 so that the transmitter 100 may receive it as feedback. 1.2.4. Transmission/Reception Switching Switch 240 The transmission/reception switching switch 240 is configured between the reception communication module 210 and the single reception antenna 260 to switch the transmission/reception state of the reception communication module 210 with respect to the single reception antenna 260.
More specifically, one end is connected to the single reception antenna 260, and the other end is located between the receiving circuit (not shown) and the transmitting circuit (not shown) of the reception communication module 210, so that the receiving circuit (not shown) and the single reception antenna 260 may be connected, or the transmitting circuit (not shown) and the single reception antenna 260 may be connected, under the control of the reception control module 250 to be described later. For example, when a reception signal is output from the reception control module 250, the other end may be connected to the receiving circuit (not shown) to connect the single reception antenna 260 and the receiving circuit (not shown) of the reception communication module 210. In addition, when a transmission signal is output from the reception control module 250, the switch may be implemented so as to connect the single reception antenna 260 and the transmitting circuit (not shown) by switching the other end to the transmitting circuit (not shown). 1.2.5. Reception Control Module 250 The reception control module 250 may control the switching operation of the transmission/reception switching switch 240 to switch the transmission/reception state of the reception communication module 210. First, in order for the reception communication module 210 to receive the beamforming information and transmission pilot signals from the transmitter 100, the other end of the transmission/reception switching switch 240 may be positioned at the receiving circuit (not shown) of the reception communication module 210, for example, by outputting a reception signal to the transmission/reception switching switch 240. Thereafter, in order for the reception communication module 210 to transmit the codebook number of the optimal precoding vector detected by the optimal precoding vector detection module 230 to the transmitter 100, the other end of the transmission/reception switching switch 240 may be positioned at the transmitting circuit (not shown) of the reception communication module 210, for example, by outputting a transmission signal to the transmission/reception switching switch 240. Meanwhile, although the drawing shows the channel estimation module 220 and the optimal precoding vector detection module 230 as separate hardware components, the present invention is not limited thereto, and they may be implemented in software in the reception control module 250. 1.2.6. Reception Antenna 260 The receiver 200 is provided with a single reception antenna 260 which, as shown in FIG. 4, is connected to the transmission/reception switching switch 240 to radiate an output signal transmitted from the reception communication module 210 through the transmission/reception switching switch 240 toward the transmitter 100, or to receive a signal radiated from the multiple transmission antennas 150 of the transmitter 100. 1.2.7. Reception Memory Module 270 The reception memory module 270 stores in advance the same codebook as the codebook including the plurality of pieces of precoding vector information stored in the transmission memory module 170 of the transmitter 100. In addition, information on the transmission pilot signals of the transmitter 100 received by the reception communication module 210 may be additionally stored. FIG. 7 is a diagram illustrating the maximum frequency efficiency achievable by using a channel estimation apparatus for beamforming according to the present invention, measured according to the signal-to-noise ratio (SNR).
In the legend in FIG. 7, the number of transmission antennas and the size of the codebook for beamforming may be confirmed. The vertical axis of FIG. 7 represents the limiting data rate theoretically achievable without error at a single frequency, and the horizontal axis represents the signal-to-noise ratio (SNR) of the reception signal. From the results shown in FIG. 7, it may be confirmed that the frequency efficiency is improved as the codebook is designed using more antennas and allocating more bits. FIG. 8 is a diagram illustrating the bit error performance achievable by using a channel estimation apparatus for beamforming according to the present invention, measured according to the signal-to-noise ratio (SNR). The vertical axis of FIG. 8 represents the bit error ratio, and the horizontal axis represents the signal-to-noise ratio (SNR) of the reception signal. As in FIG. 7, the result of FIG. 8 also confirms that fewer bit errors occur as the number of transmission antennas and the number of bits allocated for codebook design increase. 2. Channel Estimation Method for Performing Beamforming According to Present Invention FIG. 9 is a flowchart illustrating a channel estimation method for beamforming according to the present invention. Referring to FIG. 9, the method may be configured to include the following steps. 2.1. Initial Phase Delay Value Setting Step S100 First, when a communication protocol is started between the transmitter 100 and the receiver 200, in the transmitter 100, the phase delay values of all phase shifters (not shown) configured in the phase shift network 140 are set to a predetermined initial phase delay value. Here, the phase shifters (not shown) are configured to correspond to each transmission antenna 150 connected to the phase shift network 140. 2.2. Pilot Signal Generation Step S200 In the transmitter 100 provided with multiple transmission antennas 150, this is a step of generating a Zadoff-Chu sequence based on a predetermined signal generation parameter, and generating a number of transmission pilot signals equal to the number of the multiple transmission antennas 150 by using the generated Zadoff-Chu sequence. 2.2.1. Switch Control Step S210 First, a step of controlling the signal switching switch 130, configured between the transmission communication module 110 and the pilot signal generation module 120 on one side and the phase shift network 140 on the other, and connecting the pilot signal generation module 120 to the phase shift network 140, is performed. This operation is performed by the transmission control module 160 of the transmitter 100. 2.2.2. Zadoff-Chu Sequence Generation Step S220 When the pilot signal generation module 120 is connected to the phase shift network 140 through the switch control step S210, the Zadoff-Chu sequence generation step S220 for generating a Zadoff-Chu sequence by the following (Equation 1) is performed based on a predetermined signal generation parameter: z_q(k) = e^(−jπqk(k+1)/N_ZC), k = 0, 1, 2, . . ., N_ZC−1 (Equation 1) (z_q(k): Zadoff-Chu sequence generated based on the signal generation parameter, k: sample order of the sequence, N_ZC: length of the generated sequence, q: a variable that determines the characteristics of the sequence using a prime number smaller than N_ZC) Here, the predetermined signal parameter, that is, the signal generation parameter for generating the Zadoff-Chu sequence, includes the length N_ZC of the sequence generated in Equation 1 above, and a variable q that determines the characteristics of the sequence.
This signal generation parameter may be provided from the transmission control module160of the transmitter100. In an embodiment of the present invention, a value obtained by adding 1 to the largest value among the prime factors of $N_{ZC}$ is used to set the value of $q$.

2.2.3. Transmission Pilot Signal Generation Step S230

When the Zadoff-Chu sequence is generated in the Zadoff-Chu sequence generation step S220, the transmission pilot signal generation step S230is a step of generating the same number of transmission pilot signals as the number of multiple transmission antennas150provided in the transmitter100by (Equation 2) below, based on the generated Zadoff-Chu sequence.

$p_n(k) = z_q(\{k - n \lfloor N_{ZC}/N_{tx} \rfloor\} \bmod N_{ZC}), \quad k = 0, 1, \ldots, N_{ZC}-1, \quad n = 0, 1, \ldots, N_{tx}-1$ (Equation 2)

($p_n(k)$: transmission pilot signal of the $n$-th transmission antenna generated based on the Zadoff-Chu sequence, $\lfloor \cdot \rfloor$: rounding-down operation, $N_{tx}$: total number of transmission antennas connected to the phase shift network)

Here, the predetermined signal generation parameter refers to the parameter for generating the Zadoff-Chu sequence.

2.3. Pilot Signal Transmission Step S300

The transmitter100transmits the transmission pilot signals generated in the pilot signal generation step S200to the receiver200. More specifically, the pilot signal generation module120of the transmitter100transmits the transmission pilot signals, generated in the same number as the multiple transmission antennas150, to each transmission antenna150through the phase shift network140, and each transmission antenna150may be configured to transmit the transmission pilot signals to the receiver200by radiating the received signal.

2.4. Frame Generation Step S400

In the transmitter100, a frame generation step S400of generating a frame including predetermined beamforming information and transmitting the generated frame to the multiple transmission antenna150through the phase shift network140is performed. Here, the frame generation step S400may be configured to include a switch control step S410of switching the state of the signal switching switch130, which has connected the phase shift network140and the pilot signal generation module120through the pilot signal generation step S200, to a state in which the transmission communication module110is connected to the phase shift network140. That is, by controlling the switching operation of the signal switching switch130so that a frame including predetermined beamforming information is generated in a state in which the transmission communication module110is connected to the phase shift network140, the generated frame is transmitted to the multiple transmission antenna150through the phase shift network140. At this time, the beamforming information is included in the beginning of the frame and configured to be transmitted to the receiver200through the multiple transmission antenna150. In addition, the predetermined beamforming information may include whether the transmitter uses beamforming, the number of transmission antennas, and the signal generation parameter for generating the Zadoff-Chu sequence.

2.5. Frame and Pilot Signal Reception Step S500

The frame and pilot signal reception step S500is a step in which the receiver200provided with a single reception antenna260obtains, through frame synchronization, the beamforming information included in the frame generated by the transmitter100in the frame generation step S400, and receives the transmission pilot signals using the obtained beamforming information.
Here, the reception control module250of the receiver200has placed the reception communication module210in a reception state by connecting its reception circuit (not shown) to the single reception antenna260through the control of the transmission/reception switching switch240configured between the reception communication module210and the single reception antenna260. Such a frame and pilot signal reception step S500may be configured to include the following detailed steps.

2.5.1. Beamforming Information Acquisition Step S510

The receiver200may acquire the beamforming information included in the frame by performing frame synchronization to estimate the start part of the frame from the transmitter100. Such an operation is performed by the reception communication module210of the receiver200described above.

2.5.2. Transmission Pilot Signal Reception Step S520

In the receiver200, after acquiring the beamforming information in the beamforming information acquisition step S510, the transmission pilot signals transmitted in the pilot signal transmission step S300are received using the acquired beamforming information. In receiving the transmission pilot signals, the number and length of the transmission pilot signals may be recognized through the signal generation parameter for generating the Zadoff-Chu sequence included in the beamforming information acquired in the beamforming information acquisition step S510.

2.6. Channel Estimation Step S600

When the beamforming use of the transmitter100is confirmed using the beamforming information obtained in the frame and pilot signal reception step S500, the receiver200generates reception pilot signals based on the received transmission pilot signals and estimates a channel using the transmission and reception pilot signals.

2.6.1. Reception Pilot Signal Generation Step S610

First, reception pilot signals may be generated using the signal generation parameter for generating the Zadoff-Chu sequence included in the beamforming information obtained from the frame of the transmitter100.

2.6.2. Calculation Step S620

After generating the reception pilot signals, a channel estimation operation may be performed by (Equation 5) below using the received transmission pilot signals and the generated reception pilot signals. The transmission pilot signal received from the transmitter100is expressed by the following (Equation 4), and using this, the operation for estimating the channel may be performed by the following (Equation 5).

$y(n) = \sum_{l=0}^{L-1} h_l^T p(n-l), \quad h_l \in \mathbb{C}^{N_{tx} \times 1}, \quad p(n) \in \mathbb{C}^{N_{tx} \times 1}$ (Equation 4)

($p(n)$: pilot vector including the samples of the transmission pilot signal of each transmission antenna, $h_l$: channel vector of the $l$-th path among the multipath channels)

The operation for channel estimation is performed as a sum over all samples after multiplying $y(n)$ and $p^H(n)$ in (Equation 4), as shown in (Equation 5) below.

$\hat{h} = \frac{1}{\rho} \sum_{n=0}^{N-1} y(n) p^H(n)$ (Equation 5)

($\hat{h}$: estimated channel, $\rho$: constant for normalization, $p^H(n)$: value obtained by performing the Hermitian operation on $p(n)$)

Here, the estimated channel $\hat{h}$ means a vector including amplitude/phase distortion information for each transmission antenna. Here, considering the characteristics of the Zadoff-Chu sequence, $\hat{h}$ may be approximated as a vector corresponding to the first channel among the multipath channels.
In general, in a multipath channel, since the path received with the highest power is most likely the one that arrives first, beamforming using $\hat{h}$ is more efficient than performing beamforming toward another path. Therefore, $\hat{h}$ representing the estimated channel may be approximated by a vector corresponding to the first channel among the multipath channels.

2.7. Optimal Precoding Vector Detection Step S700

In the receiver200, an optimal precoding vector detection step S700of detecting the optimal precoding vector to be applied by the transmitter100, among the precoding vectors included in a pre-stored codebook, using the channel information estimated in the channel estimation step S600may be performed. Detecting the optimal precoding vector from among the precoding vectors included in the codebook using the estimated channel information is performed by the following (Equation 6).

$i_{opt} = \underset{i}{\arg\max} \; \left| \hat{h}^T w_i \right|^2$ (Equation 6)

($i_{opt}$: codebook number indicating the optimal precoding vector, $\hat{h}$: estimated channel, $w_i$: precoding vector, $i$: codebook number)

Here, as described above, the optimal precoding vector means a precoding vector enabling the receiver200to receive the transmission signal from the transmitter100with the highest power. Meanwhile, the codebook includes a plurality of precoding vector information, each containing a predetermined phase delay value for each transmission antenna150connected to the phase shift network140and assigned a corresponding codebook number. The precoding vectors included in this codebook are calculated by the following (Equation 3).

$w_i = \begin{bmatrix} 1 \\ e^{j\theta_2^{(i)}} \\ \vdots \\ e^{j\theta_{N_{tx}}^{(i)}} \end{bmatrix}$ (Equation 3)

($w_i$: precoding vector, $i$: codebook number)

Here, each element constituting the above precoding vector represents a phase delay value of each antenna for delaying a signal.

2.8. Codebook Number Feedback Step S800

The receiver200performs a step of feeding back the codebook number corresponding to the optimal precoding vector detected in the optimal precoding vector detection step S700to the transmitter100through the single reception antenna260. Here, the reception control module250of the receiver200has placed the reception communication module210in a transmission state by connecting its transmission circuit (not shown) to the single reception antenna260through the control of the transmission/reception switching switch240configured between the reception communication module210and the single reception antenna260.

2.9. Phase Delay Value Conversion Setting Step S900

In the transmitter100, the corresponding precoding vector is extracted from the pre-stored codebook using the codebook number fed back from the receiver200through the codebook number feedback step S800, and the signal phase for each transmission antenna150is converted and set to the phase delay value corresponding to each transmission antenna150included in the extracted precoding vector. More specifically, the phase delay value corresponding to each transmission antenna150included in the extracted precoding vector may be converted into, and set as, the phase delay value of each of the phase shifters (not shown) of the phase shift network140. After the signal phase delay values of the phase shift network are set according to the feedback of the codebook number from the receiver200in this way, the subsequent communication between the transmitter100and the receiver200may be performed using the communication modules110and210.
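To make steps S220 through S700 concrete, the following is a minimal NumPy sketch of the processing chain described above: Zadoff-Chu generation (Equation 1), per-antenna pilot shifts (Equation 2), correlation-based channel estimation (Equations 4 and 5, single-path noise-free case), and the codebook search (Equations 3 and 6). The parameter values ($N_{ZC}=63$, $q=5$, four antennas) and the DFT-style phase codebook are illustrative assumptions, not values taken from this disclosure.

```python
import numpy as np

def zadoff_chu(n_zc: int, q: int) -> np.ndarray:
    """Equation 1: z_q(k) = exp(-j*pi*q*k*(k+1)/N_ZC)."""
    k = np.arange(n_zc)
    return np.exp(-1j * np.pi * q * k * (k + 1) / n_zc)

def transmit_pilots(z: np.ndarray, n_tx: int) -> np.ndarray:
    """Equation 2: antenna n sends z cyclically shifted by
    n * floor(N_ZC / N_tx) samples; row n of the result is p_n(k)."""
    n_zc = len(z)
    shift = n_zc // n_tx
    k = np.arange(n_zc)
    return np.stack([z[(k - n * shift) % n_zc] for n in range(n_tx)])

def estimate_channel(y: np.ndarray, p: np.ndarray) -> np.ndarray:
    """Equation 5: h_hat = (1/rho) * sum_k y(k) p^H(k), with rho = N_ZC
    normalizing the unit-modulus Zadoff-Chu samples."""
    return (y @ p.conj().T) / p.shape[1]

def dft_codebook(n_tx: int, n_bits: int) -> np.ndarray:
    """Vectors of the Equation 3 form (first element 1, phase-only).
    A DFT codebook is assumed here as one concrete choice of the
    per-antenna phase delays theta_m^(i)."""
    n_codes = 2 ** n_bits
    i = np.arange(n_codes)[:, None]   # codebook number
    m = np.arange(n_tx)[None, :]      # antenna index
    return np.exp(1j * 2 * np.pi * i * m / n_codes)

def optimal_codebook_number(h_hat: np.ndarray, w: np.ndarray) -> int:
    """Equation 6: the codebook number whose precoding vector gives
    the highest received power |h_hat^T w_i|^2."""
    return int(np.argmax(np.abs(w @ h_hat) ** 2))

# Illustrative parameters: N_ZC = 63, q = 5, four transmission antennas.
rng = np.random.default_rng(0)
n_tx = 4
pilots = transmit_pilots(zadoff_chu(63, 5), n_tx)

# Single-path channel (Equation 4 with L = 1): y(k) = h^T p(k).
h = (rng.standard_normal(n_tx) + 1j * rng.standard_normal(n_tx)) / np.sqrt(2)
y = h @ pilots

h_hat = estimate_channel(y, pilots)
assert np.allclose(h_hat, h)        # exact in the noise-free case

codebook = dft_codebook(n_tx, n_bits=4)            # 16 precoding vectors
i_opt = optimal_codebook_number(h_hat, codebook)   # fed back in step S800
```

In an actual receiver, the pilots would be regenerated locally from the signaled parameters (step S610) rather than shared with the transmitter, and noise would make the recovery approximate rather than exact.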
On the other hand, when the power of the signal received by the reception communication module of the receiver falls below a certain standard, the above-described steps S100to S900are repeatedly performed to estimate the channel and the optimal precoding vector anew and to adjust the phase of the phase shift network140again. Meanwhile, although the technical idea of the present invention has been specifically described according to the above embodiments, it should be noted that the above embodiments are for the purpose of explanation and not of limitation. In addition, those skilled in the technical field of the present invention will understand that various embodiments are possible within the scope of the spirit of the present invention.
To facilitate understanding, identical reference numerals have been used, where possible, to designate identical elements that are common to the figures. It is contemplated that elements disclosed in one aspect may be beneficially utilized on other aspects without specific recitation.

DETAILED DESCRIPTION

Aspects of the present disclosure provide apparatus, methods, processing systems, and computer readable mediums for providing quasi-colocation (QCL) signaling for groups of non-zero power channel state information reference signal (NZP CSI-RS) ports across scenarios involving multiple cells and/or multiple panels (multi-panel).

The following description provides examples, and is not limiting of the scope, applicability, or examples set forth in the claims. Changes may be made in the function and arrangement of elements discussed without departing from the scope of the disclosure. Various examples may omit, substitute, or add various procedures or components as appropriate. For instance, the methods described may be performed in an order different from that described, and various steps may be added, omitted, or combined. Also, features described with respect to some examples may be combined in some other examples. For example, an apparatus may be implemented or a method may be practiced using any number of the aspects set forth herein. In addition, the scope of the disclosure is intended to cover such an apparatus or method which is practiced using other structure, functionality, or structure and functionality in addition to, or other than, the various aspects of the disclosure set forth herein. It should be understood that any aspect of the disclosure disclosed herein may be embodied by one or more elements of a claim. The word “exemplary” is used herein to mean “serving as an example, instance, or illustration.” Any aspect described herein as “exemplary” is not necessarily to be construed as preferred or advantageous over other aspects.

The techniques described herein may be used for various wireless communication technologies, such as LTE, CDMA, TDMA, FDMA, OFDMA, SC-FDMA and other networks. The terms “network” and “system” are often used interchangeably. A CDMA network may implement a radio technology such as Universal Terrestrial Radio Access (UTRA), cdma2000, etc. UTRA includes Wideband CDMA (WCDMA) and other variants of CDMA. cdma2000 covers IS-2000, IS-95 and IS-856 standards. A TDMA network may implement a radio technology such as Global System for Mobile Communications (GSM). An OFDMA network may implement a radio technology such as NR (e.g., 5G RA), Evolved UTRA (E-UTRA), Ultra Mobile Broadband (UMB), IEEE 802.11 (Wi-Fi), IEEE 802.16 (WiMAX), IEEE 802.20, Flash-OFDMA, etc. UTRA and E-UTRA are part of Universal Mobile Telecommunication System (UMTS). New Radio (NR) is an emerging wireless communications technology under development in conjunction with the 5G Technology Forum (5GTF). 3GPP Long Term Evolution (LTE) and LTE-Advanced (LTE-A) are releases of UMTS that use E-UTRA. UTRA, E-UTRA, UMTS, LTE, LTE-A and GSM are described in documents from an organization named “3rd Generation Partnership Project” (3GPP). cdma2000 and UMB are described in documents from an organization named “3rd Generation Partnership Project 2” (3GPP2). The techniques described herein may be used for the wireless networks and radio technologies mentioned above as well as other wireless networks and radio technologies.
For clarity, while aspects may be described herein using terminology commonly associated with 3G and/or 4G wireless technologies, aspects of the present disclosure can be applied in other generation-based communication systems, such as 5G and later, including NR technologies. New radio (NR) access (e.g., 5G technology) may support various wireless communication services, such as enhanced mobile broadband (eMBB) targeting wide bandwidth (e.g., 80 MHz or beyond), millimeter wave (mmW) targeting high carrier frequency (e.g., 25 GHz or beyond), massive machine type communications (mMTC) targeting non-backward compatible MTC techniques, and/or mission critical targeting ultra-reliable low-latency communications (URLLC). These services may include latency and reliability requirements. These services may also have different transmission time intervals (TTI) to meet respective quality of service (QoS) requirements. In addition, these services may co-exist in the same subframe.

Example Wireless Communications System

FIG.1illustrates an example wireless communication network100in which aspects of the present disclosure may be performed. For example, the wireless communication network100may be a New Radio (NR) or 5G network that provides quasi-colocation (QCL) signaling for groups of non-zero power channel state information reference signal (NZP CSI-RS) ports across scenarios involving multiple cells and/or multiple panels (multi-panel). As illustrated inFIG.1, the wireless network100may include a number of base stations (BSs)110and other network entities. A BS may be a station that communicates with user equipments (UEs). Each BS110may provide communication coverage for a particular geographic area. In 3GPP, the term “cell” can refer to a coverage area of a Node B (NB) and/or a Node B subsystem serving this coverage area, depending on the context in which the term is used. In NR systems, the terms “cell” and next generation NodeB (gNB), new radio base station (NR BS), 5G NB, access point (AP), or transmission reception point (TRP) may be interchangeable. In some examples, a cell may not necessarily be stationary, and the geographic area of the cell may move according to the location of a mobile BS. In some examples, the base stations may be interconnected to one another and/or to one or more other base stations or network nodes (not shown) in the wireless communication network100through various types of backhaul interfaces, such as a direct physical connection, a wireless connection, a virtual network, or the like using any suitable transport network.

In general, any number of wireless networks may be deployed in a given geographic area. Each wireless network may support a particular radio access technology (RAT) and may operate on one or more frequencies. A RAT may also be referred to as a radio technology, an air interface, etc. A frequency may also be referred to as a carrier, a subcarrier, a frequency channel, a tone, a subband, etc. Each frequency may support a single RAT in a given geographic area in order to avoid interference between wireless networks of different RATs. In some cases, NR or 5G RAT networks may be deployed.

A base station (BS) may provide communication coverage for a macro cell, a pico cell, a femto cell, and/or other types of cells. A macro cell may cover a relatively large geographic area (e.g., several kilometers in radius) and may allow unrestricted access by UEs with service subscription.
A pico cell may cover a relatively small geographic area and may allow unrestricted access by UEs with service subscription. A femto cell may cover a relatively small geographic area (e.g., a home) and may allow restricted access by UEs having an association with the femto cell (e.g., UEs in a Closed Subscriber Group (CSG), UEs for users in the home, etc.). A BS for a macro cell may be referred to as a macro BS. A BS for a pico cell may be referred to as a pico BS. A BS for a femto cell may be referred to as a femto BS or a home BS. In the example shown inFIG.1, the BSs110a,110band110cmay be macro BSs for the macro cells102a,102band102c, respectively. The BS110xmay be a pico BS for a pico cell102x. The BSs110yand110zmay be femto BSs for the femto cells102yand102z, respectively. A BS may support one or multiple (e.g., three) cells.

Wireless communication network100may also include relay stations. A relay station is a station that receives a transmission of data and/or other information from an upstream station (e.g., a BS or a UE) and sends a transmission of the data and/or other information to a downstream station (e.g., a UE or a BS). A relay station may also be a UE that relays transmissions for other UEs. In the example shown inFIG.1, a relay station110rmay communicate with the BS110aand a UE120rin order to facilitate communication between the BS110aand the UE120r. A relay station may also be referred to as a relay BS, a relay, etc.

Wireless network100may be a heterogeneous network that includes BSs of different types, e.g., macro BSs, pico BSs, femto BSs, relays, etc. These different types of BSs may have different transmit power levels, different coverage areas, and different impact on interference in the wireless network100. For example, a macro BS may have a high transmit power level (e.g., 20 Watts) whereas pico BSs, femto BSs, and relays may have a lower transmit power level (e.g., 1 Watt).

Wireless communication network100may support synchronous or asynchronous operation. For synchronous operation, the BSs may have similar frame timing, and transmissions from different BSs may be approximately aligned in time. For asynchronous operation, the BSs may have different frame timing, and transmissions from different BSs may not be aligned in time. The techniques described herein may be used for both synchronous and asynchronous operation.

A network controller130may couple to a set of BSs and provide coordination and control for these BSs. The network controller130may communicate with the BSs110via a backhaul. The BSs110may also communicate with one another (e.g., directly or indirectly) via wireless or wireline backhaul.

The UEs120(e.g.,120x,120y, etc.) may be dispersed throughout the wireless network100, and each UE may be stationary or mobile.
A UE may also be referred to as a mobile station, a terminal, an access terminal, a subscriber unit, a station, a Customer Premises Equipment (CPE), a cellular phone, a smart phone, a personal digital assistant (PDA), a wireless modem, a wireless communication device, a handheld device, a laptop computer, a cordless phone, a wireless local loop (WLL) station, a tablet computer, a camera, a gaming device, a netbook, a smartbook, an ultrabook, an appliance, a medical device or medical equipment, a biometric sensor/device, a wearable device such as a smart watch, smart clothing, smart glasses, a smart wrist band, smart jewelry (e.g., a smart ring, a smart bracelet, etc.), an entertainment device (e.g., a music device, a video device, a satellite radio, etc.), a vehicular component or sensor, a smart meter/sensor, industrial manufacturing equipment, a global positioning system device, or any other suitable device that is configured to communicate via a wireless or wired medium. Some UEs may be considered machine-type communication (MTC) devices or evolved MTC (eMTC) devices. MTC and eMTC UEs include, for example, robots, drones, remote devices, sensors, meters, monitors, location tags, etc., that may communicate with a BS, another device (e.g., remote device), or some other entity. A wireless node may provide, for example, connectivity for or to a network (e.g., a wide area network such as Internet or a cellular network) via a wired or wireless communication link. Some UEs may be considered Internet-of-Things (IoT) devices, which may be narrowband IoT (NB-IoT) devices.

Certain wireless networks (e.g., LTE) utilize orthogonal frequency division multiplexing (OFDM) on the downlink and single-carrier frequency division multiplexing (SC-FDM) on the uplink. OFDM and SC-FDM partition the system bandwidth into multiple (K) orthogonal subcarriers, which are also commonly referred to as tones, bins, etc. Each subcarrier may be modulated with data. In general, modulation symbols are sent in the frequency domain with OFDM and in the time domain with SC-FDM. The spacing between adjacent subcarriers may be fixed, and the total number of subcarriers (K) may be dependent on the system bandwidth. For example, the spacing of the subcarriers may be 15 kHz and the minimum resource allocation (called a “resource block” (RB)) may be 12 subcarriers (or 180 kHz). Consequently, the nominal Fast Fourier Transform (FFT) size may be equal to 128, 256, 512, 1024 or 2048 for system bandwidth of 1.25, 2.5, 5, 10, or 20 megahertz (MHz), respectively. The system bandwidth may also be partitioned into subbands. For example, a subband may cover 1.08 MHz (i.e., 6 resource blocks), and there may be 1, 2, 4, 8, or 16 subbands for system bandwidth of 1.25, 2.5, 5, 10 or 20 MHz, respectively (this numerology is sketched in code below).

While aspects of the examples described herein may be associated with LTE technologies, aspects of the present disclosure may be applicable with other wireless communications systems, such as NR. NR may utilize OFDM with a cyclic prefix (CP) on the uplink and downlink and include support for half-duplex operation using TDD. Beamforming may be supported and beam direction may be dynamically configured. MIMO transmissions with precoding may also be supported. MIMO configurations in the DL may support up to 8 transmit antennas with multi-layer DL transmissions up to 8 streams and up to 2 streams per UE. Aggregation of multiple cells may be supported with up to 8 serving cells.
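Purely as an illustration of the LTE numerology referenced above, the short Python sketch below tabulates the bandwidths, nominal FFT sizes, and subband counts from the preceding paragraph; the derived sample rates are simple products of FFT size and the 15 kHz subcarrier spacing.

```python
# LTE numerology from the description above: 15 kHz subcarriers, 12
# subcarriers (180 kHz) per resource block, 1.08 MHz (6 RB) subbands.
LTE_NUMEROLOGY = {  # system bandwidth (MHz) -> nominal FFT size, subbands
    1.25: {"fft_size": 128,  "subbands": 1},
    2.5:  {"fft_size": 256,  "subbands": 2},
    5.0:  {"fft_size": 512,  "subbands": 4},
    10.0: {"fft_size": 1024, "subbands": 8},
    20.0: {"fft_size": 2048, "subbands": 16},
}

for bw, cfg in LTE_NUMEROLOGY.items():
    # Sample rate = FFT size * 15 kHz subcarrier spacing.
    sample_rate_mhz = cfg["fft_size"] * 15e3 / 1e6
    print(f"{bw:>5} MHz: FFT {cfg['fft_size']:>4}, "
          f"{cfg['subbands']:>2} subbands, fs = {sample_rate_mhz} MHz")
```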
In some examples, access to the air interface may be scheduled, wherein a scheduling entity (e.g., a base station) allocates resources for communication among some or all devices and equipment within its service area or cell. The scheduling entity may be responsible for scheduling, assigning, reconfiguring, and releasing resources for one or more subordinate entities. That is, for scheduled communication, subordinate entities utilize resources allocated by the scheduling entity. Base stations are not the only entities that may function as a scheduling entity. In some examples, a UE may function as a scheduling entity and may schedule resources for one or more subordinate entities (e.g., one or more other UEs), and the other UEs may utilize the resources scheduled by the UE for wireless communication. In some examples, a UE may function as a scheduling entity in a peer-to-peer (P2P) network, and/or in a mesh network. In a mesh network example, UEs may communicate directly with one another in addition to communicating with a scheduling entity.

InFIG.1, a solid line with double arrows indicates desired transmissions between a UE and a serving BS, which is a BS designated to serve the UE on the downlink and/or uplink. A finely dashed line with double arrows indicates interfering transmissions between a UE and a BS.

FIG.2illustrates an example logical architecture of a distributed Radio Access Network (RAN)200, which may be implemented in the wireless communication network100illustrated inFIG.1. A 5G access node206may include an access node controller (ANC)202. ANC202may be a central unit (CU) of the distributed RAN200. The backhaul interface to the Next Generation Core Network (NG-CN)204may terminate at ANC202. The backhaul interface to neighboring next generation access Nodes (NG-ANs)210may terminate at ANC202. ANC202may include one or more transmission reception points (TRPs)208(e.g., cells, BSs, gNBs, etc.). The TRPs208may be distributed units (DUs). TRPs208may be connected to a single ANC (e.g., ANC202) or more than one ANC (not illustrated). For example, for RAN sharing, radio as a service (RaaS), and service specific AND deployments, TRPs208may be connected to more than one ANC. TRPs208may each include one or more antenna ports. TRPs208may be configured to individually (e.g., dynamic selection) or jointly (e.g., joint transmission) serve traffic to a UE.

The logical architecture of distributed RAN200may support fronthauling solutions across different deployment types. For example, the logical architecture may be based on transmit network capabilities (e.g., bandwidth, latency, and/or jitter). The logical architecture of distributed RAN200may share features and/or components with LTE. For example, next generation access node (NG-AN)210may support dual connectivity with NR and may share a common fronthaul for LTE and NR. The logical architecture of distributed RAN200may enable cooperation between and among TRPs208, for example, within a TRP and/or across TRPs via ANC202. An inter-TRP interface may not be used. Logical functions may be dynamically distributed in the logical architecture of distributed RAN200. As will be described in more detail with reference toFIG.5, the Radio Resource Control (RRC) layer, Packet Data Convergence Protocol (PDCP) layer, Radio Link Control (RLC) layer, Medium Access Control (MAC) layer, and Physical (PHY) layer may be adaptably placed at the DU (e.g., TRP208) or CU (e.g., ANC202).
FIG.3illustrates an example physical architecture of a distributed Radio Access Network (RAN)300, according to aspects of the present disclosure. A centralized core network unit (C-CU)302may host core network functions. C-CU302may be centrally deployed. C-CU302functionality may be offloaded (e.g., to advanced wireless services (AWS)), in an effort to handle peak capacity. A centralized RAN unit (C-RU)304may host one or more ANC functions. Optionally, the C-RU304may host core network functions locally. The C-RU304may have distributed deployment. The C-RU304may be close to the network edge. A DU306may host one or more TRPs (Edge Node (EN), an Edge Unit (EU), a Radio Head (RH), a Smart Radio Head (SRH), or the like). The DU may be located at edges of the network with radio frequency (RF) functionality. FIG.4illustrates example components of BS110and UE120(as depicted inFIG.1), which may be used to implement aspects of the present disclosure. For example, antennas452, processors466,458,464, and/or controller/processor480of the UE120and/or antennas434, processors420,430,438, and/or controller/processor440of the BS110may be used to perform the various techniques and methods described herein (such as the operations illustrated inFIGS.9and10). At the BS110, a transmit processor420may receive data from a data source412and control information from a controller/processor440. The control information may be for the physical broadcast channel (PBCH), physical control format indicator channel (PCFICH), physical hybrid ARQ indicator channel (PHICH), physical downlink control channel (PDCCH), group common PDCCH (GC PDCCH), etc. The data may be for the physical downlink shared channel (PDSCH), etc. The processor420may process (e.g., encode and symbol map) the data and control information to obtain data symbols and control symbols, respectively. The processor420may also generate reference symbols, e.g., for the primary synchronization signal (PSS), secondary synchronization signal (SSS), and cell-specific reference signal (CRS). A transmit (TX) multiple-input multiple-output (MIMO) processor430may perform spatial processing (e.g., precoding) on the data symbols, the control symbols, and/or the reference symbols, if applicable, and may provide output symbol streams to the modulators (MODs)432athrough432t. Each modulator432may process a respective output symbol stream (e.g., for OFDM, etc.) to obtain an output sample stream. Each modulator may further process (e.g., convert to analog, amplify, filter, and upconvert) the output sample stream to obtain a downlink signal. Downlink signals from modulators432athrough432tmay be transmitted via the antennas434athrough434t, respectively. At the UE120, the antennas452athrough452rmay receive the downlink signals from the base station110and may provide received signals to the demodulators (DEMODs) in transceivers454athrough454r, respectively. Each demodulator454may condition (e.g., filter, amplify, downconvert, and digitize) a respective received signal to obtain input samples. Each demodulator may further process the input samples (e.g., for OFDM, etc.) to obtain received symbols. A MIMO detector456may obtain received symbols from all the demodulators454athrough454r, perform MIMO detection on the received symbols if applicable, and provide detected symbols. 
A receive processor458may process (e.g., demodulate, deinterleave, and decode) the detected symbols, provide decoded data for the UE120to a data sink460, and provide decoded control information to a controller/processor480.

On the uplink, at UE120, a transmit processor464may receive and process data (e.g., for the physical uplink shared channel (PUSCH)) from a data source462and control information (e.g., for the physical uplink control channel (PUCCH)) from the controller/processor480. The transmit processor464may also generate reference symbols for a reference signal (e.g., for the sounding reference signal (SRS)). The symbols from the transmit processor464may be precoded by a TX MIMO processor466if applicable, further processed by the modulators in transceivers454athrough454r(e.g., for SC-FDM, etc.), and transmitted to the base station110. At the BS110, the uplink signals from the UE120may be received by the antennas434, processed by the demodulators432, detected by a MIMO detector436if applicable, and further processed by a receive processor438to obtain decoded data and control information sent by the UE120. The receive processor438may provide the decoded data to a data sink439and the decoded control information to the controller/processor440.

The controllers/processors440and480may direct the operation at the base station110and the UE120, respectively. The processor440and/or other processors and modules at the BS110may perform or direct the execution of processes for the techniques described herein. The memories442and482may store data and program codes for BS110and UE120, respectively. A scheduler444may schedule UEs for data transmission on the downlink and/or uplink.

FIG.5illustrates a diagram500showing examples for implementing a communications protocol stack, according to aspects of the present disclosure. The illustrated communications protocol stacks may be implemented by devices operating in a wireless communication system, such as a 5G system (e.g., a system that supports uplink-based mobility). Diagram500illustrates a communications protocol stack including a Radio Resource Control (RRC) layer510, a Packet Data Convergence Protocol (PDCP) layer515, a Radio Link Control (RLC) layer520, a Medium Access Control (MAC) layer525, and a Physical (PHY) layer530. In various examples, the layers of a protocol stack may be implemented as separate modules of software, portions of a processor or ASIC, portions of non-collocated devices connected by a communications link, or various combinations thereof. Collocated and non-collocated implementations may be used, for example, in a protocol stack for a network access device (e.g., ANs, CUs, and/or DUs) or a UE.

A first option505-ashows a split implementation of a protocol stack, in which implementation of the protocol stack is split between a centralized network access device (e.g., an ANC202inFIG.2) and distributed network access device (e.g., DU208inFIG.2). In the first option505-a, an RRC layer510and a PDCP layer515may be implemented by the central unit, and an RLC layer520, a MAC layer525, and a PHY layer530may be implemented by the DU. In various examples the CU and the DU may be collocated or non-collocated. The first option505-amay be useful in a macro cell, micro cell, or pico cell deployment. A second option505-bshows a unified implementation of a protocol stack, in which the protocol stack is implemented in a single network access device. In the second option505-b, the
RRC layer510, PDCP layer515, RLC layer520, MAC layer525, and PHY layer530may each be implemented by the AN. The second option505-bmay be useful in, for example, a femto cell deployment. Regardless of whether a network access device implements part or all of a protocol stack, a UE may implement an entire protocol stack as shown in505-c(e.g., the RRC layer510, the PDCP layer515, the RLC layer520, the MAC layer525, and the PHY layer530).

In LTE, the basic transmission time interval (TTI) or packet duration is the 1 ms subframe. In NR, a subframe is still 1 ms, but the basic TTI is referred to as a slot. A subframe contains a variable number of slots (e.g., 1, 2, 4, 8, 16, . . . slots) depending on the subcarrier spacing. The NR RB is 12 consecutive frequency subcarriers. NR may support a base subcarrier spacing of 15 kHz and other subcarrier spacing may be defined with respect to the base subcarrier spacing, for example, 30 kHz, 60 kHz, 120 kHz, 240 kHz, etc. The symbol and slot lengths scale with the subcarrier spacing. The CP length also depends on the subcarrier spacing.

FIG.6is a diagram showing an example of a frame format600for NR. The transmission timeline for each of the downlink and uplink may be partitioned into units of radio frames. Each radio frame may have a predetermined duration (e.g., 10 ms) and may be partitioned into 10 subframes, each of 1 ms, with indices of 0 through 9. Each subframe may include a variable number of slots depending on the subcarrier spacing. Each slot may include a variable number of symbol periods (e.g., 7, 12, or 14 symbols) depending on the subcarrier spacing. The symbol periods in each slot may be assigned indices. A mini-slot, which may be referred to as a sub-slot structure, refers to a transmit time interval having a duration less than a slot (e.g., 2, 3, or 4 symbols). Each symbol in a slot may indicate a link direction (e.g., DL, UL, or flexible) for data transmission and the link direction for each subframe may be dynamically switched. The link directions may be based on the slot format. Each slot may include DL/UL data as well as DL/UL control information.

In NR, a synchronization signal (SS) block is transmitted. The SS block includes a PSS, a SSS, and a two symbol PBCH. The SS block can be transmitted in a fixed slot location, such as the symbols 0-3 as shown inFIG.6. The PSS and SSS may be used by UEs for cell search and acquisition. The PSS may provide half-frame timing, and the SSS may provide the CP length and frame timing. The PSS and SSS may provide the cell identity. The PBCH carries some basic system information, such as downlink system bandwidth, timing information within a radio frame, SS burst set periodicity, system frame number, etc. The SS blocks may be organized into SS bursts to support beam sweeping. Further system information such as remaining minimum system information (RMSI), system information blocks (SIBs), other system information (OSI) can be transmitted on a physical downlink shared channel (PDSCH) in certain subframes.

In some circumstances, two or more subordinate entities (e.g., UEs) may communicate with each other using sidelink signals. Real-world applications of such sidelink communications may include public safety, proximity services, UE-to-network relaying, vehicle-to-vehicle (V2V) communications, Internet of Everything (IoE) communications, IoT communications, mission-critical mesh, and/or various other suitable applications.
Generally, a sidelink signal may refer to a signal communicated from one subordinate entity (e.g., UE1) to another subordinate entity (e.g., UE2) without relaying that communication through the scheduling entity (e.g., UE or BS), even though the scheduling entity may be utilized for scheduling and/or control purposes. In some examples, the sidelink signals may be communicated using a licensed spectrum (unlike wireless local area networks, which typically use an unlicensed spectrum).

A UE may operate in various radio resource configurations, including a configuration associated with transmitting pilots using a dedicated set of resources (e.g., a radio resource control (RRC) dedicated state, etc.) or a configuration associated with transmitting pilots using a common set of resources (e.g., an RRC common state, etc.). When operating in the RRC dedicated state, the UE may select a dedicated set of resources for transmitting a pilot signal to a network. When operating in the RRC common state, the UE may select a common set of resources for transmitting a pilot signal to the network. In either case, a pilot signal transmitted by the UE may be received by one or more network access devices, such as an AN, or a DU, or portions thereof. Each receiving network access device may be configured to receive and measure pilot signals transmitted on the common set of resources, and also receive and measure pilot signals transmitted on dedicated sets of resources allocated to the UEs for which the network access device is a member of a monitoring set of network access devices for the UE. One or more of the receiving network access devices, or a CU to which receiving network access device(s) transmit the measurements of the pilot signals, may use the measurements to identify serving cells for the UEs, or to initiate a change of serving cell for one or more of the UEs.

Example Quasi-Colocation Indication for Non-Zero Power Channel State Information Reference Signal Port Group

Aspects of the present disclosure provide techniques for providing quasi-colocation (QCL) signaling for groups of non-zero power channel state information reference signal (NZP CSI-RS) ports across scenarios involving multiple cells and/or multiple panels (multi-panel), such as coordinated multipoint (CoMP) scenarios in which a UE is connected to multiple transmit receive points (TRPs).

In wireless communications, CSI may refer to known channel properties of a communication link. The CSI may represent the combined effects of, for example, scattering, fading, and power decay with distance between a transmitter and receiver. Channel and interference measurements may be performed to determine these effects on the channel. CSI may be used to adapt transmissions based on the current channel conditions, which is useful for achieving reliable communication, in particular, with high data rates in multi-antenna systems. CSI is typically estimated at the receiver, quantized, and fed back to the transmitter.

QCL assumptions generally refer to assumptions that, for a set of signals or channels considered to be QCL related (or simply “QCL'd” for short), certain characteristics derived for (measured from) one of the signals or channels may be applied to the other. As an example, if a NZP CSI-RS transmission is QCL'd with other DL RS, the Doppler shift, Doppler spread, average delay spread, average delay, or spatial Rx parameters used for measuring the NZP CSI-RS can be inferred from those used for measuring the other DL RS.
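As a minimal sketch of this inference rule, the Python below maps each QCL type to the large-scale properties a UE may carry over from a source RS to a QCL'd target RS. The typeA, typeB, and typeD sets follow the FIG.8 discussion below; the typeC set shown is the one defined in the NR specifications (the text only notes that it differs). The function and field names are illustrative, not taken from this disclosure.

```python
# Large-scale properties a UE may reuse from a QCL source RS, by QCL type.
QCL_PARAMS = {
    "typeA": {"doppler_shift", "doppler_spread", "average_delay", "delay_spread"},
    "typeB": {"doppler_shift", "doppler_spread"},
    "typeC": {"doppler_shift", "average_delay"},   # per NR specs (assumption)
    "typeD": {"spatial_rx"},
}

def infer_from_source(source_measurements: dict, qcl_type: str) -> dict:
    """Return only the properties the given QCL type lets the UE reuse
    for the target (e.g., an NZP CSI-RS QCL'd with another DL RS)."""
    allowed = QCL_PARAMS[qcl_type]
    return {k: v for k, v in source_measurements.items() if k in allowed}

# Example: measurements on one DL RS reused for a QCL'd NZP CSI-RS (typeA).
dl_rs_meas = {"doppler_shift": 120.0, "doppler_spread": 30.0,
              "average_delay": 1.2e-6, "delay_spread": 0.3e-6,
              "spatial_rx": "beam-3"}
csi_rs_assumptions = infer_from_source(dl_rs_meas, "typeA")
# typeA carries over the delay/Doppler properties but not the Rx beam.
assert "spatial_rx" not in csi_rs_assumptions
```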
In some cases, QCL assumptions for receptions/transmissions of signals and channels may be signaled via a mechanism referred to as Transmission Configuration Indicator (TCI) states.FIG.7illustrates an example TCI state used to configure a DM-RS port group via control signaling, in accordance with certain aspects of the present disclosure. In this example, the TCI state includes a single QCL configuration having at least two types of QCL information. In some cases, a UE may be configured with various TCI states via radio resource control (RRC) signaling, while one of the actual TCI states may be indicated by an N bit DCI field. In some other cases, a UE may be configured with a subset of various TCI states (e.g., up to 8 TCI states) via MAC control signaling (e.g., a MAC control element (MAC-CE)), and downlink control signaling (e.g., DCI) may be used to select a TCI state out of the subset (e.g., 3 bits may be used to identify which TCI state is enabled). For CSI-RS, RRC signaling may configure a list of CSI trigger states, and each trigger state may have one or more CSI report configurations. Each CSI report configuration may link to up to three CSI-RS resources (NZP channel measurement resource (CMR), CSI-IM, and NZP interference measurement resource (IMR)). QCL information may be provided per NZP CMR in the corresponding CSI report configuration of the corresponding trigger state. FIG.8illustrates an example of QCL information that may be included in a QCL configuration, in accordance with certain aspects of the present disclosure. The QCL assumptions may be grouped into different types that correspond to the parameters that may be assumed QCL'd for a set of QCL'd signals. For example, for a set of QCL'd signals, Type A may indicate that Doppler shift, Doppler spread, average delay, delay spread can be assumed QCL'd, while Type B may indicate only Doppler shift and Doppler spread, Type C may indicate a still different set of parameters. In some cases, spatial QCL assumptions may be indicated, for example, by Type D. Spatial QCL may mean a (Tx or Rx) beam selected based on a certain signal measurement may be applied to the QCL related signal. As an example, the QCL assumptions may provide a QCL relationship between a NZP CSI-RS and at least one of another CSI-RS or a synchronization signal (SS). As used herein, a set of QCL'd signals refers to the QCL relationship between those signals (e.g., Doppler shift, Doppler spread, average delay, and/or delay spread). One limitation of the current QCL configuration is that only one TCI state consisting of a single QCL assumption is provided per CSI-RS resource. That is, all the CSI-RS ports have the same QCL assumptions. Aspects of the present disclosure, however, extend the QCL configuration to allow signaling of QCL assumptions linked to multiple antenna port groups. As such, the QCL signaling provided herein may be applied in CSI-RS port with different beamforming, or multi-TRP/multi-panel scenarios, such as CoMP deployments where multiple transmission reception points (TRPs) communicate with a UE. FIG.9is a flow diagram illustrating example operations900that may be performed, for example, by a base station (e.g., BS110), for configuring NZP CSI-RS transmissions with QCL information that supports multi-TRP transmissions, in accordance with certain aspects of the present disclosure. 
Operations900may begin, at902, where the BS determines channel state information reference signal (CSI-RS) port groups associated with one or more non-zero power (NZP) CSI-RS resources for channel measurement (CM) or interference measurement (IM). At904, the BS transmits an indication of the CSI-RS port groups to at least one UE. At906, the BS generates quasi-colocation (QCL) information indicating QCL assumptions for the CSI-RS port groups. At908, the BS transmits the QCL information to the at least one UE.

FIG.10is a flow diagram illustrating example operations1000that may be performed, for example, by a user equipment (e.g., UE120), for configuring NZP CSI-RS transmissions with QCL information that supports multi-TRP transmissions, in accordance with certain aspects of the present disclosure. Operations1000may begin, at1002, where the UE obtains an indication of channel state information reference signal (CSI-RS) port groups associated with one or more non-zero power (NZP) CSI-RS resources for channel measurement (CM) or interference measurement (IM). At1004, the UE obtains quasi-colocation (QCL) information indicating QCL assumptions for the CSI-RS port groups. At1006, the UE performs at least one of a channel measurement or an interference measurement using the QCL information. At1008, the UE reports CSI feedback (e.g., to a base station) based on the at least one of the channel measurement or the interference measurement.

In certain aspects, each NZP CSI-RS resource may be linked to one or more CSI-RS port groups. Also, the number of CSI-RS port groups associated with a resource may be different across the different resources. For example, one NZP CSI-RS resource may have two CSI-RS port groups, whereas another NZP CSI-RS resource may have only one CSI-RS port group.

The indication of the CSI-RS port groups and/or the QCL information may be transmitted to the UE via control signaling such as radio resource control (RRC) signaling (e.g., RRC element), medium access control (MAC) signaling (e.g., MAC control element (MAC-CE)), or downlink control signaling (e.g., downlink control information (DCI)). The indication of the CSI-RS port groups may be transmitted with a configuration of the NZP CSI-RS resources or in a resource mapping configuration. As an example, the UE may be initially configured with CSI report configurations having the CSI-RS port groups and various TCI states having QCL assumptions linked to the CSI-RS port groups via RRC signaling, and DCI signaling may be used to select the configured TCI states associated with the CSI-RS port groups. The QCL information for CSI-RS port groups may be indicated via RRC signaling. For example, the QCL information may be provided per CSI resource via RRC signaling in the CSI report configuration associated with a CSI trigger state.

In certain aspects, the indication of the CSI-RS port groups provides grouping information for each port of the CSI-RS port groups. The grouping information may be a bit string or a portion of a bit string associated with each of the CSI-RS port groups. In aspects, the grouping information may be a bit map of CSI-RS ports corresponding to CSI-RS port groups. For instance, a first bit string may indicate the CSI-RS ports that belong to a first CSI-RS port group, and a second bit string may indicate the other CSI-RS ports that belong to a second port group. The total number of bit strings may be equal to the total number of CSI-RS port groups.
The total number of bits in each bit string may be equal to the total number of CSI-RS ports associated with the one or more NZP CSI-RS resources. Each bit of a bit string may indicate whether a corresponding CSI-RS port associated with the bit belongs to the respective CSI-RS port group associated with the bit string. As an example, assuming a UE is configured with a NZP CSI-RS resource of 32 ports, a 32-bit bit string may be linked to the first CSI-RS port group, and a second 32-bit bit string may be linked to the second CSI-RS port group. If no grouping information is provided, the UE may assume that all ports belong to the same CSI-RS group.

In aspects, the indication of the CSI-RS port groups may be based on code division multiplexing (CDM) groups. The grouping information may be a bit map of CDM groups having CSI-RS ports corresponding to CSI-RS port groups. For example, a first bit string may indicate the CDM groups that belong to a first port group, and a second bit string may indicate other CDM groups that belong to a second port group. The total number of bit strings may be equal to a total number of CDM groups. The total number of bits in each bit string may be equal to the total number of the CDM groups associated with the NZP CSI-RS resources. Each bit of a bit string may indicate whether a corresponding CDM group associated with the bit belongs to the respective CSI-RS port group associated with the bit string. As an example, assuming a UE is configured with a NZP CSI-RS resource of 32 ports and CDM-8 is used, then there are four CDM groups available for mapping to a CSI-RS port group. A 4-bit bit string may be linked to each CSI-RS port group.

FIG.11illustrates a diagram of example CDM groups partitioned into CSI-RS port groups, in accordance with certain aspects of the present disclosure. As shown, the first CSI-RS port group may be linked to four CDM-4 groups, and the second CSI-RS port group may be linked to four CDM-4 groups. FIG.12illustrates a diagram of other example CDM groups partitioned into CSI-RS port groups, in accordance with certain aspects of the present disclosure. As shown, each CSI-RS port group may be linked to two CDM-8 groups.

In aspects, the indication of the CSI-RS port groups may be based on component patterns. The grouping information may be a bit map of component patterns having CSI-RS ports corresponding to CSI-RS port groups. As an example, a first bit string may indicate component patterns that belong to a first CSI-RS port group, and a second bit string may indicate other component patterns that belong to a second CSI-RS port group. The total number of the bit strings may be equal to the total number of the component patterns. The total number of bits in each bit string may be equal to the total number of component patterns associated with the NZP CSI-RS resources. Each bit of a bit string may indicate whether a corresponding component pattern associated with the bit belongs to the respective port group associated with the bit string.

The bit strings identifying the CSI-RS port group mapping may be included in a NZP CSI-RS resource configuration of an RRC message. For example, the NZP-CSI-RS-Resource information element of an RRC message may include a field having a bit string identifying the ports for a first CSI-RS port group (e.g., csi-rs-portGroup1) and a second field having a bit string identifying the ports for a second CSI-RS port group (e.g., csi-rs-portGroup2).
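As a minimal sketch of this bit-string convention (the eight-port split and field values below are hypothetical illustrations, not values from the disclosure):

```python
def ports_in_group(bit_string: str) -> list[int]:
    """Expand a port-group bit string into CSI-RS port indices.

    Bit k set to '1' means port k belongs to this group; the string
    length equals the number of ports in the NZP CSI-RS resource.
    """
    return [k for k, bit in enumerate(bit_string) if bit == "1"]

# Hypothetical 8-port resource split into two groups (e.g., one per TRP).
csi_rs_port_group1 = "11110000"
csi_rs_port_group2 = "00001111"
assert ports_in_group(csi_rs_port_group1) == [0, 1, 2, 3]
assert ports_in_group(csi_rs_port_group2) == [4, 5, 6, 7]

# The same scheme applies at CDM-group granularity: with 32 ports and
# CDM-8 there are four CDM groups, so a 4-bit string such as "1100"
# would assign the first two CDM groups (ports 0-15) to a port group.
```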
As another example, the bit string fields may be included in the CSI-RS-ResourceMapping information element.

The QCL information may be indicated via a plurality of TCI states, where each of the TCI states comprises a QCL configuration (e.g., QCL-info ofFIG.8) associated with one of the CSI-RS port groups. For instance, the TCI state shown inFIG.7may be used as one of the plurality of TCI states. The TCI state may be linked to a CSI-RS resource and one of the CSI-RS port groups. As an example, the UE may assume that the first TCI state provides the first QCL assumption for the first CSI-RS port group, and a second TCI state provides the second QCL assumption for the second CSI-RS port group.

FIG.13illustrates an example CSI report configuration, in accordance with certain aspects of the present disclosure. The CSI report configuration may be transmitted to the UE, for example via RRC signaling, and provide the indication of the CSI-RS port groups as described herein. In this example, the CSI report configuration may have a field resourcesForChannel providing a CSI-RS resource set for channel measurements. The UE may obtain the NZP CSI-RS resource included in the set by another RRC configuration of the NZP CSI-RS resource set. Then, the UE may obtain the NZP CSI-RS port groups associated with each NZP CSI-RS resource by another RRC configuration of the NZP CSI-RS resource. Next, the field resourcesForChannel may also include a field qcl-info providing QCL information for each CSI-RS port group (e.g., field qcl-info-PortGroup1 is linked to a TCI state with the provided TCI-StateId, and field qcl-info-PortGroup2 is linked to another TCI state with the provided TCI-StateId) of each NZP CSI-RS resource. The CSI report configuration links to one resource set, which has one or more resources. The field qcl-info identifies one or more TCI states, and each of the TCI states is linked to a CSI-RS port group of a resource. The QCL information is provided in the TCI states (e.g., TCI state ofFIG.7) identified by a TCI state ID in the CSI report configuration as shown inFIG.13.

The payload size of the field qcl-info is linked to the number of CSI-RS resources per set. That is, the length of the sequence qcl-info is equal to the number of resources per set in the CSI report configuration. For example, the CSI-RS resource set included in the CSI report configuration may have two resources, resulting in a sequence of QCL information with two elements of qcl-info that identify the TCI states associated with the two CSI-RS resources. The first qcl-info identifies two TCI states associated with the first CSI-RS resource, where the first TCI state is for the first CSI-RS port group of the first resource, and the second TCI state is for the second CSI-RS port group of the first resource. The second qcl-info identifies two TCI states associated with the second CSI-RS resource, where the first TCI state is for the first CSI-RS port group of the second resource, and the second TCI state is for the second CSI-RS port group of the second resource.

FIG.14illustrates another example CSI report configuration, in accordance with certain aspects of the present disclosure. In this example, the CSI report configuration may identify the QCL assumptions associated with each CSI-RS port group and provide the CSI-RS port groups used for NZP CSI-RS interference measurements. In this example, the CSI report configuration may have a field nzp-CSI-RS-ResourcesForInterference providing one or more CSI-RS resources for interference measurements.
The UE may obtain the NZP CSI-RS port groups associated with each NZP CSI-RS resource by another RRC configuration of the NZP CSI-RS resource. Next, the field qcl-info-nzp-CSI-RS-ResourceforInterference provides a sequence that identifies the QCL configuration for the CSI-RS port groups of each NZP CSI-RS resource used for interference measurement. The CSI report configuration links to one resource set, which has one or more resources. The field qcl-info-nzp-CSI-RS-ResourceforInterference is similar to the field qcl-info ofFIG.13and identifies one or more TCI states, and each of the TCI states is linked to a CSI-RS port group of a resource. The QCL information is provided in the TCI states (e.g., TCI state ofFIG.7) identified by a TCI state ID in the CSI report configuration as shown inFIG.14.

The payload size of the field qcl-info-nzp-CSI-RS-ResourceforInterference is linked to the number of NZP CSI-RS resources for interference measurement associated with the corresponding CSI report configuration. That is, the length of the sequence qcl-info-nzp-CSI-RS-ResourceforInterference is equal to the number of NZP CSI-RS resources for interference measurement associated with the corresponding CSI report configuration. For example, the CSI report configuration may have two NZP CSI-RS resources used for interference measurement, resulting in a sequence of QCL information with two elements of qcl-info-nzp-CSI-RS-ResourceforInterference that identify the TCI states associated with the two CSI-RS resources. The first qcl-info-nzp-CSI-RS-ResourceforInterference identifies two TCI states associated with the first CSI-RS resource, where the first TCI state is for the first CSI-RS port group of the first resource for interference measurement, and the second TCI state is for the second CSI-RS port group of the first resource for interference measurement. The second qcl-info-nzp-CSI-RS-ResourceforInterference identifies two TCI states associated with the second CSI-RS resource for interference measurement, where the first TCI state is for the first CSI-RS port group of the second resource for interference measurement, and the second TCI state is for the second CSI-RS port group of the second resource for interference measurement.
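A small sketch of how a UE might resolve the TCI state for a given (resource, port group) pair from the qcl-info sequences walked through above; the dictionary layout and TCI state ids are illustrative stand-ins for the ASN.1 structures, not the actual RRC encoding:

```python
# Each list entry corresponds to one resource (per set for channel
# measurement, per configuration for interference measurement), and
# each entry names one TCI state per CSI-RS port group of its resource.
qcl_info_cm = [                            # channel measurement (FIG.13 style)
    {"portGroup1": 5, "portGroup2": 9},    # CM resource 0
    {"portGroup1": 6, "portGroup2": 10},   # CM resource 1
]
qcl_info_im = [                            # interference measurement (FIG.14 style)
    {"portGroup1": 21, "portGroup2": 22},  # IM resource 0
    {"portGroup1": 23, "portGroup2": 24},  # IM resource 1
]

def tci_state_id(qcl_info: list, resource: int, group: int) -> int:
    """Look up the TCI state id for (resource index, port group number)."""
    return qcl_info[resource][f"portGroup{group}"]

# Second port group of the first channel-measurement resource -> state 9.
assert tci_state_id(qcl_info_cm, 0, 2) == 9
```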
The CSI report configuration may identify the TCI state associated with the CSI-RS port groups via the field TCI-StateID, which may correspond to a TCI state (e.g., TCI-State ofFIG.15) provided to the UE having QCL assumptions for the CSI-RS port groups. For interference measurement, the QCL information is provided in the TCI state (e.g., TCI-state ofFIG.15) identified by a TCI state ID in the CSI report configuration as shown inFIG.16. For example, the CSI report configuration may have two NZP CSI-RS resources used for interference measurement, resulting in a sequence of QCL information with two elements of qcl-info-nzp-CSI-RS-ResourceforInterference that identify the TCI states associated with the two CSI-RS resources. The first qcl-info-nzp-CSI-RS-ResourceforInterference identifies a first TCI state associated with the first CSI-RS resource, where the first TCI state may have two QCL configurations, as shown inFIG.15. The first QCL configuration may be applied to the first CSI-RS port group of the first resource used for interference measurement, and the second QCL configuration may be applied to the second CSI-RS port group of the first resource. The second qcl-info-nzp-CSI-RS-ResourceforInterference identifies a second TCI state associated with the second CSI-RS resource used for interference measurement, where the second TCI state may have two QCL configurations, as shown inFIG.15. The first QCL configuration may be applied to the first CSI-RS port group of the second resource, and the second QCL configuration may be applied to the second CSI-RS port group of the second resource used for interference measurement. In certain aspects, the CSI report configuration ofFIG.16may identify the TCI state (e.g., TCI-state ofFIG.15) linked to a resource used for channel measurement and the corresponding CSI-RS port groups. For channel measurement, the CSI report configuration ofFIG.16may have a field qcl-info similar to the field shown inFIG.13, but in this case the qcl-info may be linked to a single TCI state per CSI-RS resource. The CSI report configuration may identify one TCI state per CSI-RS resource for channel measurements. As an example, the first QCL configuration in a TCI state may be for the first CSI-RS port group of the resource, and the second QCL configuration in the TCI state may be for the second CSI-RS port group of the resource. The QCL information is provided in the TCI state (e.g., TCI state ofFIG.15) identified by a TCI state ID in the CSI report configuration as shown inFIG.16. The payload size of the field qcl-info is linked to the number of NZP CSI-RS resources per set associated with the corresponding CSI report configuration. That is, the length of the sequence qcl-info is equal to the number of resources per set in the CSI report configuration. For example, the CSI-RS resource set included in the CSI report configuration may have two resources, resulting in a sequence of QCL information with two elements of qcl-info that identify the TCI states associated with the two CSI-RS resources. The first qcl-info identifies a first TCI state associated with the first CSI-RS resource used for channel measurement, where the first TCI state may have two QCL configurations, as shown inFIG.15. The first QCL configuration may be applied to the first CSI-RS port group of the first resource, and the second QCL configuration may be applied to the second CSI-RS port group of the first resource.
The second qcl-info identifies a second TCI state associated with the second CSI-RS resource used for channel measurement, where the second TCI state may have two QCL configurations, as shown inFIG.15. The first QCL configuration may be applied to the first CSI-RS port group of the second resource, and the second QCL configuration may be applied to the second CSI-RS port group of the second resource. As examples, the UE may assume that the first QCL configuration (e.g., qcl-Config1 ofFIG.15) provides the QCL assumptions for the first group of CSI-RS ports, and that the second QCL configuration (e.g., qcl-Config2 ofFIG.15) provides the QCL assumptions for the second group of CSI-RS ports. In situations where one of the QCL configurations provides no QCL information (i.e., the field is reserved), the first QCL configuration may be applied to the QCL assumptions for the first and second groups of CSI-RS ports, or vice versa. In other aspects, the first QCL configuration may be applied to the QCL assumptions for the first group of CSI-RS ports, and a default QCL configuration may be applied to the QCL assumptions for the second group of CSI-RS ports, or vice versa. If the UE is configured with only one CSI-RS port group, all the ports may be QCL'd with the same QCL information in the TCI state. Where the UE receives two QCL configurations and is configured with only one CSI-RS port group, the UE may use either of the first or second QCL configurations or apply the QCL configuration based on a group index. In certain aspects, the BS may identify that all CSI-RS ports associated with one of the one or more NZP CSI-RS resources belong to one CSI-RS port group. The BS may transmit a default CSI-RS port group configuration to the UE for indicating that the CSI-RS ports are associated with a single CSI-RS group. The QCL information may indicate QCL assumptions for the single CSI-RS port group. The UE may not expect to be configured with different ‘QCL-TypeD’ assumptions for CSI-RS port groups in one resource. That is, the UE may apply the same spatial QCL assumptions (e.g., QCL-TypeD) for CSI port groups linked to the same resource. In certain aspects, the BS may generate the QCL information with a single spatial QCL assumption or the same spatial QCL assumptions for the CSI-RS port groups associated with one of the NZP CSI-RS resources. That is, the BS may not provide QCL information with different spatial QCL assumptions for the CSI-RS port groups associated with one of the NZP CSI-RS resources. If no QCL information is provided for a CSI-RS port group used for interference measurement, the UE may assume that each NZP CSI-RS port group for the interference measurement has the same QCL information as the respective NZP CSI-RS port group for channel measurement. That is, the UE may assume that the CSI-RS resource(s) for channel measurement and the NZP CSI-RS resource(s) for interference measurement configured for one CSI reporting are quasi co-located, resource-wise, with respect to ‘QCL-TypeA’, ‘QCL-TypeB’, or ‘QCL-TypeC’, if applicable. For example, the UE may identify that the QCL information does not provide QCL assumptions for a NZP CSI-RS port group associated with a NZP CSI-RS resource used for interference measurement. Based on this, the UE may identify an association between the NZP CSI-RS port group used for interference measurement and another NZP CSI-RS port group used for channel measurement. The UE may apply the QCL information configured for the NZP CSI-RS port group for channel measurement to the NZP CSI-RS port group used for interference measurement. If there is more than one group configured per resource, then the association is made on a port group basis. If one group is configured per resource, then the association is made on a resource basis.
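The fallback rules in the preceding paragraphs can be summarized in a short sketch. The following Python function is one possible reading of that UE-side behavior, under stated assumptions: the function and argument names are invented, a None value models a reserved/absent qcl-Config field, and QCL configurations are represented as opaque dictionaries.

```python
from typing import List, Optional

def resolve_qcl_for_port_groups(
    qcl_config_1: Optional[dict],
    qcl_config_2: Optional[dict],
    num_port_groups: int,
    default_qcl: Optional[dict] = None,
) -> List[Optional[dict]]:
    """Map the (up to two) QCL configurations of a TCI state onto the
    configured CSI-RS port groups, following the fallbacks described above."""
    if num_port_groups == 1:
        # All ports are QCL'd with the same QCL information; with two
        # configurations present, either may be used (here: the first).
        return [qcl_config_1 if qcl_config_1 is not None else qcl_config_2]
    if qcl_config_1 is not None and qcl_config_2 is not None:
        # Normal case: first configuration -> first group, second -> second.
        return [qcl_config_1, qcl_config_2]
    present = qcl_config_1 if qcl_config_1 is not None else qcl_config_2
    if default_qcl is not None:
        # One option: the present configuration serves one group and a
        # default QCL configuration serves the other.
        return [present, default_qcl]
    # Other option: the present configuration is applied to both groups.
    return [present, present]
```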
FIG.17illustrates another example CSI report configuration, in accordance with certain aspects of the present disclosure. The CSI report configuration may identify QCL information in a corresponding TCI state for NZP CSI-RS interference measurements. The QCL information may be linked to one or more CSI-RS port groups. FIG.18illustrates a communications device1800(such as a BS110or a UE120) that may include various components (e.g., corresponding to means-plus-function components) configured to perform operations for the techniques disclosed herein, such as the operations illustrated inFIGS.9and10. The communications device1800includes a processing system1802coupled to a transceiver1808. The transceiver1808is configured to transmit and receive signals for the communications device1800via an antenna1810, such as the various signals described herein. The processing system1802may be configured to perform processing functions for the communications device1800, including processing signals received and/or to be transmitted by the communications device1800. The processing system1802includes a processor1804coupled to a computer-readable medium/memory1812via a bus1806. In certain aspects, the computer-readable medium/memory1812is configured to store instructions that, when executed by processor1804, cause the processor1804to perform the operations illustrated inFIGS.9and10, or other operations for performing the various techniques discussed herein. In certain aspects, the processing system1802may include a transmit/receive component1814, a determining component1816, a generating component1818, an obtaining component1820, a performing component1822, and a reporting component1824for performing the operations illustrated inFIGS.9and10. The transmit/receive component1814, determining component1816, generating component1818, obtaining component1820, performing component1822, and reporting component1824may be coupled to the processor1804via bus1806. In certain aspects, these components may be hardware circuits. In other aspects, these components may be software components that are executed and run on processor1804. The methods disclosed herein comprise one or more steps or actions for achieving the methods. The method steps and/or actions may be interchanged with one another without departing from the scope of the claims.
In other words, unless a specific order of steps or actions is specified, the order and/or use of specific steps and/or actions may be modified without departing from the scope of the claims. As used herein, a phrase referring to “at least one of” a list of items refers to any combination of those items, including single members. As an example, “at least one of: a, b, or c” is intended to cover a, b, c, a-b, a-c, b-c, and a-b-c, as well as any combination with multiples of the same element (e.g., a-a, a-a-a, a-a-b, a-a-c, a-b-b, a-c-c, b-b, b-b-b, b-b-c, c-c, and c-c-c or any other ordering of a, b, and c). As used herein, the term “determining” encompasses a wide variety of actions. For example, “determining” may include calculating, computing, processing, deriving, investigating, looking up (e.g., looking up in a table, a database or another data structure), ascertaining and the like. Also, “determining” may include receiving (e.g., receiving information), accessing (e.g., accessing data in a memory) and the like. Also, “determining” may include identifying, resolving, selecting, choosing, establishing and the like. The previous description is provided to enable any person skilled in the art to practice the various aspects described herein. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects. Thus, the claims are not intended to be limited to the aspects shown herein, but are to be accorded the full scope consistent with the language of the claims, wherein reference to an element in the singular is not intended to mean “one and only one” unless specifically so stated, but rather “one or more.” Unless specifically stated otherwise, the term “some” refers to one or more. All structural and functional equivalents to the elements of the various aspects described throughout this disclosure that are known or later come to be known to those of ordinary skill in the art are expressly incorporated herein by reference and are intended to be encompassed by the claims. Moreover, nothing disclosed herein is intended to be dedicated to the public regardless of whether such disclosure is explicitly recited in the claims. No claim element is to be construed under the provisions of 35 U.S.C. § 112(f) unless the element is expressly recited using the phrase “means for” or, in the case of a method claim, the element is recited using the phrase “step for.” The various operations of methods described above may be performed by any suitable means capable of performing the corresponding functions. The means may include various hardware and/or software component(s) and/or module(s), including, but not limited to a circuit, an application specific integrated circuit (ASIC), or processor. Generally, where there are operations illustrated in figures, those operations may have corresponding counterpart means-plus-function components. The various illustrative logical blocks, modules and circuits described in connection with the present disclosure may be implemented or performed with a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device (PLD), discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein.
A general-purpose processor may be a microprocessor, but in the alternative, the processor may be any commercially available processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. If implemented in hardware, an example hardware configuration may comprise a processing system in a wireless node. The processing system may be implemented with a bus architecture. The bus may include any number of interconnecting buses and bridges depending on the specific application of the processing system and the overall design constraints. The bus may link together various circuits including a processor, machine-readable media, and a bus interface. The bus interface may be used to connect a network adapter, among other things, to the processing system via the bus. The network adapter may be used to implement the signal processing functions of the PHY layer. In the case of a user terminal120(seeFIG.1), a user interface (e.g., keypad, display, mouse, joystick, etc.) may also be connected to the bus. The bus may also link various other circuits such as timing sources, peripherals, voltage regulators, power management circuits, and the like, which are well known in the art, and therefore, will not be described any further. The processor may be implemented with one or more general-purpose and/or special-purpose processors. Examples include microprocessors, microcontrollers, DSP processors, and other circuitry that can execute software. Those skilled in the art will recognize how best to implement the described functionality for the processing system depending on the particular application and the overall design constraints imposed on the overall system. If implemented in software, the functions may be stored or transmitted as one or more instructions or code on a computer-readable medium. Software shall be construed broadly to mean instructions, data, or any combination thereof, whether referred to as software, firmware, middleware, microcode, hardware description language, or otherwise. Computer-readable media include both computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. The processor may be responsible for managing the bus and general processing, including the execution of software modules stored on the machine-readable storage media. A computer-readable storage medium may be coupled to a processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. By way of example, the machine-readable media may include a transmission line, a carrier wave modulated by data, and/or a computer readable storage medium with instructions stored thereon separate from the wireless node, all of which may be accessed by the processor through the bus interface. Alternatively, or in addition, the machine-readable media, or any portion thereof, may be integrated into the processor, such as the case may be with cache and/or general register files. Examples of machine-readable storage media may include, by way of example, RAM (Random Access Memory), flash memory, ROM (Read Only Memory), PROM (Programmable Read-Only Memory), EPROM (Erasable Programmable Read-Only Memory),
EEPROM (Electrically Erasable Programmable Read-Only Memory), registers, magnetic disks, optical disks, hard drives, or any other suitable storage medium, or any combination thereof. The machine-readable media may be embodied in a computer-program product. A software module may comprise a single instruction, or many instructions, and may be distributed over several different code segments, among different programs, and across multiple storage media. The computer-readable media may comprise a number of software modules. The software modules include instructions that, when executed by an apparatus such as a processor, cause the processing system to perform various functions. The software modules may include a transmission module and a receiving module. Each software module may reside in a single storage device or be distributed across multiple storage devices. By way of example, a software module may be loaded into RAM from a hard drive when a triggering event occurs. During execution of the software module, the processor may load some of the instructions into cache to increase access speed. One or more cache lines may then be loaded into a general register file for execution by the processor. When referring to the functionality of a software module below, it will be understood that such functionality is implemented by the processor when executing instructions from that software module. Also, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared (IR), radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray® disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Thus, in some aspects computer-readable media may comprise non-transitory computer-readable media (e.g., tangible media). In addition, for other aspects computer-readable media may comprise transitory computer-readable media (e.g., a signal). Combinations of the above should also be included within the scope of computer-readable media. Thus, certain aspects may comprise a computer program product for performing the operations presented herein. For example, such a computer program product may comprise a computer-readable medium having instructions stored (and/or encoded) thereon, the instructions being executable by one or more processors to perform the operations described herein. For example, the instructions may include instructions for performing the operations described herein and illustrated inFIGS.9and10. Further, it should be appreciated that modules and/or other appropriate means for performing the methods and techniques described herein can be downloaded and/or otherwise obtained by a user terminal and/or base station as applicable. For example, such a device can be coupled to a server to facilitate the transfer of means for performing the methods described herein. Alternatively, various methods described herein can be provided via storage means (e.g., RAM,
ROM, a physical storage medium such as a compact disc (CD) or floppy disk, etc.), such that a user terminal and/or base station can obtain the various methods upon coupling or providing the storage means to the device. Moreover, any other suitable technique for providing the methods and techniques described herein to a device can be utilized. It is to be understood that the claims are not limited to the precise configuration and components illustrated above. Various modifications, changes and variations may be made in the arrangement, operation and details of the methods and apparatus described above without departing from the scope of the claims.
69,253
11863269
DETAILED DESCRIPTION Various aspects of the disclosure are described more fully hereinafter with reference to the accompanying drawings. This disclosure may, however, be embodied in many different forms and should not be construed as limited to any specific structure or function presented throughout this disclosure. Rather, these aspects are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art. Based on the teachings herein, one skilled in the art should appreciate that the scope of the disclosure is intended to cover any aspect of the disclosure disclosed herein, whether implemented independently of or combined with any other aspect of the disclosure. For example, an apparatus may be implemented or a method may be practiced using any number of the aspects set forth herein. In addition, the scope of the disclosure is intended to cover such an apparatus or method which is practiced using other structure, functionality, or structure and functionality in addition to or other than the various aspects of the disclosure set forth herein. It should be understood that any aspect of the disclosure disclosed herein may be embodied by one or more elements of a claim. Several aspects of telecommunication systems will now be presented with reference to various apparatuses and techniques. These apparatuses and techniques will be described in the following detailed description and illustrated in the accompanying drawings by various blocks, modules, components, circuits, steps, processes, algorithms, or the like (collectively referred to as “elements”). These elements may be implemented using hardware, software, or combinations thereof. Whether such elements are implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. It should be noted that while aspects may be described herein using terminology commonly associated with a 5G or NR radio access technology (RAT), aspects of the present disclosure can be applied to other RATs, such as a 3G RAT, a 4G RAT, and/or a RAT subsequent to 5G (e.g., 6G). FIG.1is a diagram illustrating an example of a wireless network100, in accordance with the present disclosure. The wireless network100may be or may include elements of a 5G (NR) network and/or an LTE network, among other examples. The wireless network100may include a number of base stations110(shown as BS110a, BS110b, BS110c, and BS110d) and other network entities. A base station (BS) is an entity that communicates with user equipment (UEs) and may also be referred to as an NR BS, a Node B, a gNB, a 5G node B (NB), an access point, a transmit receive point (TRP), or the like. Each BS may provide communication coverage for a particular geographic area. In 3GPP, the term “cell” can refer to a coverage area of a BS and/or a BS subsystem serving this coverage area, depending on the context in which the term is used. A BS may provide communication coverage for a macro cell, a pico cell, a femto cell, and/or another type of cell. A macro cell may cover a relatively large geographic area (e.g., several kilometers in radius) and may allow unrestricted access by UEs with service subscription. A pico cell may cover a relatively small geographic area and may allow unrestricted access by UEs with service subscription. 
A femto cell may cover a relatively small geographic area (e.g., a home) and may allow restricted access by UEs having association with the femto cell (e.g., UEs in a closed subscriber group (CSG)). A BS for a macro cell may be referred to as a macro BS. A BS for a pico cell may be referred to as a pico BS. A BS for a femto cell may be referred to as a femto BS or a home BS. In the example shown inFIG.1, a BS110amay be a macro BS for a macro cell102a, a BS110bmay be a pico BS for a pico cell102b, and a BS110cmay be a femto BS for a femto cell102c. A BS may support one or multiple (e.g., three) cells. The terms “eNB”, “base station”, “NR BS”, “gNB”, “TRP”, “AP”, “node B”, “5G NB”, and “cell” may be used interchangeably herein. In some aspects, a cell may not necessarily be stationary, and the geographic area of the cell may move according to the location of a mobile BS. In some aspects, the BSs may be interconnected to one another and/or to one or more other BSs or network nodes (not shown) in the wireless network100through various types of backhaul interfaces, such as a direct physical connection or a virtual network, using any suitable transport network. Wireless network100may also include relay stations. A relay station is an entity that can receive a transmission of data from an upstream station (e.g., a BS or a UE) and send a transmission of the data to a downstream station (e.g., a UE or a BS). A relay station may also be a UE that can relay transmissions for other UEs. In the example shown inFIG.1, a relay BS110dmay communicate with macro BS110aand a UE120din order to facilitate communication between BS110aand UE120d. A relay BS may also be referred to as a relay station, a relay base station, a relay, or the like. Wireless network100may be a heterogeneous network that includes BSs of different types, such as macro BSs, pico BSs, femto BSs, relay BSs, or the like. These different types of BSs may have different transmit power levels, different coverage areas, and different impacts on interference in wireless network100. For example, macro BSs may have a high transmit power level (e.g., 5 to 40 watts) whereas pico BSs, femto BSs, and relay BSs may have lower transmit power levels (e.g., 0.1 to 2 watts). A network controller130may couple to a set of BSs and may provide coordination and control for these BSs. Network controller130may communicate with the BSs via a backhaul. The BSs may also communicate with one another, e.g., directly or indirectly via a wireless or wireline backhaul. UEs120(e.g.,120a,120b,120c) may be dispersed throughout wireless network100, and each UE may be stationary or mobile. A UE may also be referred to as an access terminal, a terminal, a mobile station, a subscriber unit, a station, or the like.
A UE may be a cellular phone (e.g., a smart phone), a personal digital assistant (PDA), a wireless modem, a wireless communication device, a handheld device, a laptop computer, a cordless phone, a wireless local loop (WLL) station, a tablet, a camera, a gaming device, a netbook, a smartbook, an ultrabook, a medical device or equipment, biometric sensors/devices, wearable devices (smart watches, smart clothing, smart glasses, smart wrist bands, smart jewelry (e.g., smart ring, smart bracelet)), an entertainment device (e.g., a music or video device, or a satellite radio), a vehicular component or sensor, smart meters/sensors, industrial manufacturing equipment, a global positioning system device, or any other suitable device that is configured to communicate via a wireless or wired medium. Some UEs may be considered machine-type communication (MTC) or evolved or enhanced machine-type communication (eMTC) UEs. MTC and eMTC UEs include, for example, robots, drones, remote devices, sensors, meters, monitors, and/or location tags that may communicate with a base station, another device (e.g., remote device), or some other entity. A wireless node may provide, for example, connectivity for or to a network (e.g., a wide area network such as Internet or a cellular network) via a wired or wireless communication link. Some UEs may be considered Internet-of-Things (IoT) devices, and/or may be implemented as NB-IoT (narrowband internet of things) devices. Some UEs may be considered a Customer Premises Equipment (CPE). UE120may be included inside a housing that houses components of UE120, such as processor components and/or memory components. In some aspects, the processor components and the memory components may be coupled together. For example, the processor components (e.g., one or more processors) and the memory components (e.g., a memory) may be operatively coupled, communicatively coupled, electronically coupled, and/or electrically coupled. In general, any number of wireless networks may be deployed in a given geographic area. Each wireless network may support a particular RAT and may operate on one or more frequencies. A RAT may also be referred to as a radio technology, an air interface, or the like. A frequency may also be referred to as a carrier, a frequency channel, or the like. Each frequency may support a single RAT in a given geographic area in order to avoid interference between wireless networks of different RATs. In some cases, NR or 5G RAT networks may be deployed. In some aspects, two or more UEs120(e.g., shown as UE120aand UE120e) may communicate directly using one or more sidelink channels (e.g., without using a base station110as an intermediary to communicate with one another). For example, the UEs120may communicate using peer-to-peer (P2P) communications, device-to-device (D2D) communications, a vehicle-to-everything (V2X) protocol (e.g., which may include a vehicle-to-vehicle (V2V) protocol or a vehicle-to-infrastructure (V2I) protocol), and/or a mesh network. In this case, the UE120may perform scheduling operations, resource selection operations, and/or other operations described elsewhere herein as being performed by the base station110. Devices of wireless network100may communicate using the electromagnetic spectrum, which may be subdivided based on frequency or wavelength into various classes, bands, channels, or the like.
For example, devices of wireless network100may communicate using an operating band having a first frequency range (FR1), which may span from 410 MHz to 7.125 GHz, and/or may communicate using an operating band having a second frequency range (FR2), which may span from 24.25 GHz to 52.6 GHz. The frequencies between FR1 and FR2 are sometimes referred to as mid-band frequencies. Although a portion of FR1 is greater than 6 GHz, FR1 is often referred to as a “sub-6 GHz” band. Similarly, FR2 is often referred to as a “millimeter wave” band despite being different from the extremely high frequency (EHF) band (30 GHz-300 GHz) which is identified by the International Telecommunications Union (ITU) as a “millimeter wave” band. Thus, unless specifically stated otherwise, it should be understood that the term “sub-6 GHz” or the like, if used herein, may broadly represent frequencies less than 6 GHz, frequencies within FR1, and/or mid-band frequencies (e.g., greater than 7.125 GHz). Similarly, unless specifically stated otherwise, it should be understood that the term “millimeter wave” or the like, if used herein, may broadly represent frequencies within the EHF band, frequencies within FR2, and/or mid-band frequencies (e.g., less than 24.25 GHz). It is contemplated that the frequencies included in FR1 and FR2 may be modified, and techniques described herein are applicable to those modified frequency ranges. In some aspects, the UE120may include a communication manager140. As described in more detail elsewhere herein, the communication manager140may receive (e.g., from the base station110) an indication of a change in a non-linearity model associated with a power amplifier of the base station110, update a model associated with the power amplifier based at least in part on the indication, and update at least one parameter associated with slicing received signals based at least in part on the indication. Accordingly, as shown inFIG.1, the UE120may perform digital post-distortion (also referred to as “DPoD”) on signals received from the base station110. For example, as described in more detail elsewhere herein, the communication manager140may receive (e.g., from the base station110) a signal that was amplified using a power amplifier that is at least partially non-linear, estimate a portion of the received signal that includes an original data signal using slicing with at least two coefficients, estimate a portion of the signal that includes a distortion using a model associated with the power amplifier, and generate a reconstructed signal based at least in part on the received signal, the estimated portion including the original data signal, and the estimated portion including the distortion. Additionally, or alternatively, the communication manager140may perform one or more other operations described herein. In some aspects, the base station110may include a communication manager150. As described in more detail elsewhere herein, the communication manager150may determine a change in a non-linearity model associated with a power amplifier of the base station110; and transmit (e.g., to the UE120) an indication of the change. Additionally, or alternatively, the communication manager150may perform one or more other operations described herein. As indicated above,FIG.1is provided as an example. Other examples may differ from what is described with regard toFIG.1. 
FIG.2is a diagram illustrating an example200of a base station110in communication with a UE120in a wireless network100, in accordance with the present disclosure. Base station110may be equipped with T antennas234athrough234t, and UE120may be equipped with R antennas252athrough252r, where in general T≥1 and R≥1. At base station110, a transmit processor220may receive data from a data source212for one or more UEs, select one or more modulation and coding schemes (MCS) for each UE based at least in part on channel quality indicators (CQIs) received from the UE, process (e.g., encode and modulate) the data for each UE based at least in part on the MCS(s) selected for the UE, and provide data symbols for all UEs. Transmit processor220may also process system information (e.g., for semi-static resource partitioning information (SRPI)) and control information (e.g., CQI requests, grants, and/or upper layer signaling) and provide overhead symbols and control symbols. Transmit processor220may also generate reference symbols for reference signals (e.g., a cell-specific reference signal (CRS) or a demodulation reference signal (DMRS)) and synchronization signals (e.g., a primary synchronization signal (PSS) or a secondary synchronization signal (SSS)). A transmit (TX) multiple-input multiple-output (MIMO) processor230may perform spatial processing (e.g., precoding) on the data symbols, the control symbols, the overhead symbols, and/or the reference symbols, if applicable, and may provide T output symbol streams to T modulators (MODs)232athrough232t. Each modulator232may process a respective output symbol stream (e.g., for OFDM) to obtain an output sample stream. Each modulator232may further process (e.g., convert to analog, amplify, filter, and upconvert) the output sample stream to obtain a downlink signal. T downlink signals from modulators232athrough232tmay be transmitted via T antennas234athrough234t, respectively. At UE120, antennas252athrough252rmay receive the downlink signals from base station110and/or other base stations and may provide received signals to demodulators (DEMODs)254athrough254r, respectively. Each demodulator254may condition (e.g., filter, amplify, downconvert, and digitize) a received signal to obtain input samples. Each demodulator254may further process the input samples (e.g., for OFDM) to obtain received symbols. A MIMO detector256may obtain received symbols from all R demodulators254athrough254r, perform MIMO detection on the received symbols if applicable, and provide detected symbols. A receive processor258may process (e.g., demodulate and decode) the detected symbols, provide decoded data for UE120to a data sink260, and provide decoded control information and system information to a controller/processor280. The term “controller/processor” may refer to one or more controllers, one or more processors, or a combination thereof. A channel processor may determine a reference signal received power (RSRP) parameter, a received signal strength indicator (RSSI) parameter, a reference signal received quality (RSRQ) parameter, and/or a CQI parameter, among other examples. In some aspects, one or more components of UE120may be included in a housing284. Network controller130may include communication unit294, controller/processor290, and memory292. Network controller130may include, for example, one or more devices in a core network. Network controller130may communicate with base station110via communication unit294. 
Antennas (e.g., antennas234athrough234tand/or antennas252athrough252r) may include, or may be included within, one or more antenna panels, antenna groups, sets of antenna elements, and/or antenna arrays, among other examples. An antenna panel, an antenna group, a set of antenna elements, and/or an antenna array may include one or more antenna elements. An antenna panel, an antenna group, a set of antenna elements, and/or an antenna array may include a set of coplanar antenna elements and/or a set of non-coplanar antenna elements. An antenna panel, an antenna group, a set of antenna elements, and/or an antenna array may include antenna elements within a single housing and/or antenna elements within multiple housings. An antenna panel, an antenna group, a set of antenna elements, and/or an antenna array may include one or more antenna elements coupled to one or more transmission and/or reception components, such as one or more components ofFIG.2. On the uplink, at UE120, a transmit processor264may receive and process data from a data source262and control information (e.g., for reports that include RSRP, RSSI, RSRQ, and/or CQI) from controller/processor280. Transmit processor264may also generate reference symbols for one or more reference signals. The symbols from transmit processor264may be precoded by a TX MIMO processor266if applicable, further processed by modulators254athrough254r(e.g., for DFT-s-OFDM or CP-OFDM), and transmitted to base station110. In some aspects, a modulator and a demodulator (e.g., MOD/DEMOD254) of the UE120may be included in a modem of the UE120. In some aspects, the UE120includes a transceiver. The transceiver may include any combination of antenna(s)252, modulators and/or demodulators254, MIMO detector256, receive processor258, transmit processor264, and/or TX MIMO processor266. The transceiver may be used by a processor (e.g., controller/processor280) and memory282to perform aspects of any of the methods described herein (for example, with reference toFIGS.3-7). At base station110, the uplink signals from UE120and other UEs may be received by antennas234, processed by demodulators232, detected by a MIMO detector236if applicable, and further processed by a receive processor238to obtain decoded data and control information sent by UE120. Receive processor238may provide the decoded data to a data sink239and the decoded control information to controller/processor240. Base station110may include communication unit244and communicate to network controller130via communication unit244. Base station110may include a scheduler246to schedule UEs120for downlink and/or uplink communications. In some aspects, a modulator and a demodulator (e.g., MOD/DEMOD232) of the base station110may be included in a modem of the base station110. In some aspects, the base station110includes a transceiver. The transceiver may include any combination of antenna(s)234, modulators and/or demodulators232, MIMO detector236, receive processor238, transmit processor220, and/or TX MIMO processor230. The transceiver may be used by a processor (e.g., controller/processor240) and memory242to perform aspects of any of the methods described herein (for example, with reference toFIGS.3-7). Controller/processor240of base station110, controller/processor280of UE120, and/or any other component(s) ofFIG.2may perform one or more techniques associated with performing high-order digital post-distortion, as described in more detail elsewhere herein. 
For example, controller/processor240of base station110, controller/processor280of UE120, and/or any other component(s) ofFIG.2may perform or direct operations of, for example, process500ofFIG.5, process600of FIG.6, process700ofFIG.7, and/or other processes as described herein. Memories242and282may store data and program codes for base station110and UE120, respectively. In some aspects, memory242and/or memory282may include a non-transitory computer-readable medium storing one or more instructions (e.g., code and/or program code) for wireless communication. For example, the one or more instructions, when executed (e.g., directly, or after compiling, converting, and/or interpreting) by one or more processors of the base station110and/or the UE120, may cause the one or more processors, the UE120, and/or the base station110to perform or direct operations of, for example, process500ofFIG.5, process600ofFIG.6, process700ofFIG.7, and/or other processes as described herein. In some aspects, executing instructions may include running the instructions, converting the instructions, compiling the instructions, and/or interpreting the instructions, among other examples. In some aspects, a UE (e.g., the UE120) may include means for receiving, from a base station (e.g., the base station110), an indication of a change in a non-linearity model associated with a power amplifier of the base station; means for updating a model associated with the power amplifier based at least in part on the indication; and/or means for updating at least one parameter associated with slicing received signals based at least in part on the indication. The means for the UE to perform operations described herein may include, for example, one or more of communication manager140, antenna252, demodulator254, MIMO detector256, receive processor258, transmit processor264, TX MIMO processor266, modulator254, controller/processor280, or memory282. Additionally, or alternatively, the UE120may include means for receiving, from a base station, a signal that was amplified using a power amplifier that is at least partially non-linear; means for estimating a portion of the received signal that includes an original data signal using slicing with at least two coefficients; means for estimating a portion of the signal that includes a distortion using a model associated with the power amplifier; and/or means for generating a reconstructed signal based at least in part on the received signal, the estimated portion including the original data signal, and the estimated portion including the distortion. The means for the UE to perform operations described herein may include, for example, one or more of communication manager140, antenna252, demodulator254, MIMO detector256, receive processor258, transmit processor264, TX MIMO processor266, modulator254, controller/processor280, or memory282. In some aspects, a base station (e.g., the base station110) may include means for determining a change in a non-linearity model associated with a power amplifier of the base station; and/or means for transmitting, to a UE (e.g., the UE120), an indication of the change. The means for the base station to perform operations described herein may include, for example, one or more of communication manager150, transmit processor220, TX MIMO processor230, modulator232, antenna234, demodulator232, MIMO detector236, receive processor238, controller/processor240, memory242, or scheduler246. 
While blocks inFIG.2are illustrated as distinct components, the functions described above with respect to the blocks may be implemented in a single hardware, software, or combination component or in various combinations of components. For example, the functions described with respect to the transmit processor264, the receive processor258, and/or the TX MIMO processor266may be performed by or under the control of controller/processor280. As indicated above,FIG.2is provided as an example. Other examples may differ from what is described with regard toFIG.2. When transmitting, a base station generally uses a power amplifier (PA) to increase a magnitude associated with a downlink signal before transmitting the amplified downlink signal to a UE. However, the PA introduces some non-linear components to the amplified downlink signal. Generally, the strongest non-linear components are odd-ordered kernels (e.g., represented by forms similar to $x|x|^2$, $x|x|^4$, and so on, where the downlink signal may be represented by $x$). In some cases, the UE may process a received signal (that is based on the amplified downlink signal transmitted by the base station and channel conditions between the base station and the UE) by applying digital post-distortion to remove at least some of the non-linear components introduced by the PA. Accordingly, the UE may perform a slicing operation on the received signal to estimate a portion of the received signal that includes the non-linear components as well as a modelling operation that estimates a portion of the received signal that includes residual distortion. The UE may perform slicing and modelling iteratively until convergence to estimate a portion of the received signal that includes the downlink signal. Generally, the UE uses one coefficient during the slicing operation. For example, the UE may use a first odd-ordered kernel (e.g., which may be represented by $x$) with a first coefficient (e.g., which may be represented by $a_1$) to perform slicing. Accordingly, remaining non-linear components introduced by the PA are included in the residual distortion (e.g., which may be represented by $d$). However, larger residual distortions result in less accurate digital post-distortion by the UE, which results in wasted power and processing resources when the UE is unable to decode the received signal even after digital post-distortion. Additionally, larger residual distortions may cause the UE to waste processing resources by hindering convergence of the digital post-distortion (e.g., when convergence is dynamically determined rather than statically determined). For time-domain modulated symbols, the non-linearities introduced by the PA result in shrinkage of the modulation constellation. In contrast, for OFDM symbols, the constellation is mapped to the frequency domain before the base station applies an inverse fast Fourier transform (IFFT) and amplification using the PA, such that the non-linearities introduced by the PA are not applied directly on the constellation. In some cases, the base station may transmit to the UE using a single-carrier waveform, which uses time-domain modulated symbols, rather than OFDM.
The base station may use the single-carrier waveform in higher frequencies (e.g., included in FR4, which may include frequencies between 52.6 GHz and 71 GHz, and/or included in FR5, which may include frequencies between 95 GHz and 325 GHz, among other examples) in order to reduce phase noise and processing overhead associated with using OFDM in the higher frequencies. Additionally, or alternatively, the single-carrier waveform may have a lower peak-to-average power ratio (PAPR) as compared with OFDM, which increases quality and/or reliability of transmissions to the UE. Generally, the constellation distortion for time-domain modulated symbols results in reduced accuracy of digital post-distortion. Some techniques and apparatuses described herein enable a UE (e.g., UE120) to perform high-order digital post-distortion on a received signal. In some aspects, the UE120may use at least two coefficients during a slicing operation. For example, the UE120may use a first odd-ordered kernel (e.g., which may be represented by $x$) with a first coefficient (e.g., which may be represented by $a_1$), a second odd-ordered kernel (e.g., represented by a form similar to $x|x|^2$) with a second coefficient (e.g., which may be represented by $a_3$), and so on, to perform slicing. These higher odd-ordered kernels account for constellation shrinkage (e.g., when using time-domain modulated symbols), and therefore decrease residual distortion and increase accuracy of digital post-distortion. As a result, the UE120performs more accurate digital post-distortion, which conserves power and processing resources as compared with the UE120being unable to decode the received signal even after digital post-distortion. To enable high-order digital post-distortion, a base station (e.g., base station110) may transmit an indication of a change in a non-linearity model associated with a PA of the base station110. For example, environmental changes (e.g., temperature changes and/or humidity changes), transmission property changes (e.g., frequency changes, beam changes, phase changes, and/or other physical changes associated with transmitted signals from the base station110), power changes associated with the PA (e.g., voltage changes and/or current changes), and/or other changes detected by the base station110may result in the change in the non-linearity model. Accordingly, based at least in part on the indication, the UE120may update the model associated with the PA (e.g., used to estimate a portion of the received signal that includes residual distortion) and update at least one parameter associated with the slicing (e.g., at least one coefficient and/or at least one kernel). As a result, the UE120performs more accurate digital post-distortion, which conserves power and processing resources as compared with the UE120being unable to decode the received signal even after digital post-distortion. The UE120may request additional information from the base station110to use for updating the model and/or updating the at least one parameter, as described herein. Additionally, or alternatively, the UE120may determine the update for the model and/or the update for the at least one parameter, as described herein. FIG.3is a diagram illustrating an example300associated with performing high-order digital post-distortion, in accordance with the present disclosure. As shown inFIG.3, example300includes communication between a gNB110and a UE120. In some aspects, the gNB110and the UE120may be included in a wireless network, such as wireless network100.
As shown inFIG.3, the gNB110may encode and modulate one or more downlink symbols for the UE120, using a transmission baseband (Tx BB)301, into an analog signal (e.g., represented by $x$ in example300). The gNB110may further amplify the signal using a PA303(e.g., with an associated amplification function that may be represented by $G$ in example300). The PA303may be at least partially non-linear. For example, imperfections in the PA303may introduce non-linear components to the amplified signal (e.g., represented by $G(x)$ in example300). Accordingly, an antenna305of the gNB110may transmit the amplified signal to the UE120. An antenna307of the UE120may receive an analog signal (e.g., represented by $y$ in example300) based at least in part on the signal transmitted by the gNB110and a channel from the gNB110to the UE120. The UE120may perform digital post-distortion to try to estimate the original data signal (e.g., represented by $x$ in example300). Accordingly, the UE120may estimate a portion of the received signal that includes the original data signal, using a slicer309, with at least two coefficients. The estimated portion including the original data signal may be represented by $\hat{x}$ in example300. The UE120may use at least two coefficients (e.g., represented by $a_1$, $a_3$, and so on) to estimate the portion including the original data signal. For example, the UE120may use a plurality of odd powers (e.g., odd powered kernels, which may be represented by forms similar to $x$, $x|x|^2$, and so on) with the at least two coefficients to estimate the portion including the original data signal. In some aspects, the digital post-distortion includes a Bussgang decomposition. For example, the UE120may estimate the original data signal using an expression similar to the form $y(x) = \sum_{i=1,\, i\ \mathrm{odd}}^{K} a_i\, x|x|^{i-1} + d$, where $y$ may represent the received signal, $x$ may represent the analog data signal generated by the gNB110, $a_i$ may represent a coefficient associated with the $i$th powered kernel, $x|x|^{i-1}$ may represent the $i$th powered kernel, $K$ may determine a quantity of coefficients and odd powered kernels that are used, and $d$ may represent residual distortion. Accordingly, the UE120may apply slicing with at least $K = 3$. For example, the UE120may apply slicing using an expression similar to the form $\hat{x} = \arg\min_{x} \left| y - \sum_{i=1,\, i\ \mathrm{odd}}^{K} a_i\, x|x|^{i-1} \right|^{2}$, where $\hat{x}$ may represent the estimated portion including the original data signal and is based on minimization of the expression by searching over all possible constellation points associated with the transmitted signal. The coefficients may be estimated by the UE120and/or the gNB110(e.g., as described below in connection withFIG.4). In some aspects, the kernels and the coefficients may be represented by a matrix expression similar to the form $y = \left[\, x,\ x|x|^{2},\ \ldots,\ x|x|^{K-1} \right] \begin{bmatrix} a_1 \\ a_3 \\ \vdots \\ a_K \end{bmatrix} = H\theta$, where $y$ may represent the received signal, $x$ may represent the analog signal generated by the gNB110, $a_i$ may represent a coefficient associated with the $i$th powered kernel, $x|x|^{i-1}$ may represent the $i$th powered kernel, and $K$ may determine a quantity of coefficients and odd powered kernels that are used. Accordingly, $H$ may represent the kernel matrix and $\theta$ may represent the coefficient vector.
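As one concrete reading of the slicing expression above, the following Python sketch picks, for each received sample, the constellation point minimizing $\left| y - \sum_i a_i\, x|x|^{i-1} \right|^2$ over the candidate points by brute force. The function name, the 16-QAM constellation, and the coefficient values are assumptions for illustration, not values from the disclosure.

```python
import numpy as np

def slice_high_order(y, constellation, coeffs):
    """High-order slicing: for each sample of y, return the constellation
    point x minimizing |y - sum_i a_i * x|x|^(i-1)|^2, where coeffs holds
    the odd-order coefficients [a1, a3, a5, ...]."""
    # Non-linear mapping of every candidate point under the kernel model.
    mapped = sum(a * constellation * np.abs(constellation) ** (2 * k)
                 for k, a in enumerate(coeffs))
    # Nearest mapped candidate per received sample (brute-force argmin).
    idx = np.argmin(np.abs(y[:, None] - mapped[None, :]) ** 2, axis=1)
    return constellation[idx]

# Example: 16-QAM constellation and a mildly compressive third-order term.
pts = np.array([-3.0, -1.0, 1.0, 3.0])
qam16 = (pts[:, None] + 1j * pts[None, :]).ravel() / np.sqrt(10)
a = [1.0, -0.05]                                   # [a1, a3]
rng = np.random.default_rng(0)
x = qam16[rng.integers(0, 16, size=1000)]
y = a[0] * x + a[1] * x * np.abs(x) ** 2           # noiseless kernel model
assert np.allclose(slice_high_order(y, qam16, a), x)
```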
The UE120and/or the gNB110may estimate the coefficients according to an expression similar to the form $\hat{\theta}_{LS} = (H^{H}H)^{-1}H^{H}y$, where $\hat{\theta}_{LS}$ may represent the estimated coefficients (e.g., estimated via a least squares estimation), $H$ may represent the kernel matrix, $H^{H}$ may represent the Hermitian conjugate of the kernel matrix, and $y$ may represent the received signal as a vector. The UE120may further estimate a portion of the received signal that includes a distortion using a model311associated with the PA. The distortion may be represented by $d$ in example300. For example, the UE120may apply non-linear distortion estimation using an expression similar to the form $d = \mathrm{PA\_model}(\hat{x}) = \sum_{i=1,\, i\ \mathrm{odd}}^{K} a_i\, \hat{x}|\hat{x}|^{i-1}$, where $d$ may represent the distortion, $\mathrm{PA\_model}$ may represent the model311that accepts an estimated data signal as input and that outputs an estimated non-linearly distorted signal that would be generated by the PA, $\hat{x}$ may represent the estimated portion including the original data signal, $a_i$ may represent a coefficient associated with the $i$th powered kernel, $\hat{x}|\hat{x}|^{i-1}$ may represent the $i$th powered kernel, and $K$ may determine a quantity of coefficients and odd powered kernels that are used. Accordingly, as shown in connection with reference number315, the UE120may generate a reconstructed signal that may be represented by a form similar to $y - d - \sum_{i=3,\, i\ \mathrm{odd}}^{K} a_i\, \hat{x}|\hat{x}|^{i-1}$. As shown in connection with reference number313, the portion including the original data signal and the portion including the distortion may be estimated iteratively. For example, in a second iteration, the UE120may re-estimate the portion including the original data signal based at least in part on a corrected signal output from a first iteration. The corrected signal output from the first iteration may be represented by a form similar to $y_{\mathrm{corrected}} = y - \hat{d}$, where $y_{\mathrm{corrected}}$ may represent the corrected signal, $y$ may represent the received signal, and $\hat{d}$ may represent the residual distortion from the first iteration. In the second iteration, the UE120may further re-estimate the distortion based at least in part on the re-estimated portion including the original data signal. Although described with respect to two iterations, the UE120may use additional iterations, such as three, four, and so on. In some aspects, the UE120may determine convergence statically (e.g., based at least in part on a preconfigured quantity of iterations). Additionally, or alternatively, the UE120may determine convergence dynamically (e.g., based at least in part on a change in reconstructed signals, whether in magnitude and/or direction, across two or more iterations satisfying a threshold). In a combinatory example, the UE120may determine convergence dynamically subject to a preconfigured maximum quantity of iterations. The UE120may thus generate a reconstructed signal based at least in part on the received signal, the estimated portion including the original data signal, and the estimated portion including the distortion. Accordingly, a reception baseband (Rx BB)317may demodulate and decode the reconstructed signal. By using techniques as described in connection withFIG.3, the UE120performs more accurate digital post-distortion. Increasing the accuracy in turn increases chances of successfully decoding the reconstructed signal. Successfully decoded signals do not need to be re-transmitted by the gNB110, which conserves power and processing resources at the gNB110, as well as conserving networking overhead. Additionally, the UE120conserves power and processing resources as compared with having to receive and decode re-transmitted signals. As indicated above,FIG.3is provided as an example. Other examples may differ from what is described with respect toFIG.3.
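Continuing the sketch after the slicer above, a least-squares coefficient estimate and the iterative slice-and-correct loop might look as follows. This reuses slice_high_order from the previous snippet; the function names, the pilot-based estimation setup, and the fixed iteration count are assumptions for illustration, and the loop is only one possible reading of the iterative scheme described here.

```python
import numpy as np

def estimate_coeffs_ls(x_pilot, y_pilot, K=3):
    """Least-squares kernel coefficients, theta_LS = (H^H H)^{-1} H^H y,
    with H = [x, x|x|^2, ..., x|x|^(K-1)] built from known pilot symbols.
    np.linalg.lstsq solves the same problem with better conditioning than
    forming the normal equations explicitly."""
    H = np.stack([x_pilot * np.abs(x_pilot) ** (i - 1)
                  for i in range(1, K + 1, 2)], axis=1)
    theta, *_ = np.linalg.lstsq(H, y_pilot, rcond=None)
    return theta

def dpod(y, constellation, coeffs, iterations=2):
    """Iterative digital post-distortion: slice with the full kernel model,
    predict the higher-order distortion for the sliced estimate, subtract
    it from y, and re-slice the (approximately linear) corrected signal."""
    x_hat = slice_high_order(y, constellation, coeffs)
    for _ in range(iterations - 1):
        # Higher-order (i >= 3) distortion predicted by the PA model.
        d_hat = sum(a * x_hat * np.abs(x_hat) ** (2 * k)
                    for k, a in enumerate(coeffs) if k > 0)
        y_corrected = y - d_hat
        # The corrected signal is approximately a1 * x, so re-slice
        # against the linear term only.
        x_hat = slice_high_order(y_corrected, constellation, coeffs[:1])
    return x_hat
```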
Successfully decoded signals do not need to be re-transmitted by the gNB110, which conserves power and processing resources at the gNB110, as well as conserving networking overhead. Additionally, the UE120conserves power and processing resources as compared with having to receive and decode re-transmitted signals. As indicated above,FIG.3is provided as an example. Other examples may differ from what is described with respect toFIG.3.

FIG.4is a diagram illustrating an example400associated with signaling related to high-order digital post-distortion, in accordance with the present disclosure. As shown inFIG.4, a base station110and a UE120may communicate with one another. In some aspects, the base station110and the UE120may communicate using a wireless network, such as wireless network100.

As shown in connection with reference number405, the base station110may transmit, and the UE120may receive, an indication of a change in a non-linearity model associated with a PA of the base station110. For example, the base station110may determine the change based at least in part on an environmental change (e.g., a temperature change and/or a humidity change), a transmission property change (e.g., a frequency change, a beam change, a phase change, and/or another physical change associated with transmitted signals from the base station110), a power change associated with the PA (e.g., a voltage change and/or a current change), and/or other changes detected by the base station110that are associated with the PA. The indication may be one or more bits indicating that the change exists or may include one or more indicators associated with a cause of the change.

In some aspects, and as shown in connection with reference number410, the UE120may transmit, and the base station110may receive, a request associated with updating a model that is associated with the PA (e.g., used to estimate distortion as described above in connection withFIG.3). For example, the UE120may request a new model that will replace the original model. As an alternative, the UE120may request kernels associated with the model such that the UE120may update the model using the kernels. The request may be one or more bits indicating that the UE120requests information associated with updating the model or may include one or more indicators of whether the UE120is requesting the new model or only the kernels.

As shown in connection with reference number415, the base station110may transmit, and the UE120may receive, an indication of the new model. The base station110may transmit the indication of the new model based at least in part on the request described above in connection with reference number410. As an alternative, the base station110may transmit the indication of the new model in combination with or after transmitting the indication of the change as described above in connection with reference number405. In some aspects, the indication may include a look-up table (LUT) such that the UE120may map the indication of the change to kernels and coefficients that the UE120may use to update the model. Similarly, the indication may include one or more vectors such that the UE120may map the indication of the change to kernels and coefficients (e.g., using LUTs and/or other indices) that the UE120may use to update the model. As an alternative, the base station110may transmit, and the UE120may receive, an indication of a kernel series and coefficients used to update the model.
For example, the indication may be associated with a Volterra polynomial and/or another kernel series, as well as the coefficients for use with the kernel series. The base station110may transmit the indication of the kernel series and the coefficients based at least in part on the request described above in connection with reference number410. As an alternative, the base station110may transmit the indication of the kernel series and the coefficients in combination with or after transmitting the indication of the change as described above in connection with reference number405. As an alternative, the base station110may transmit, and the UE120may receive, an indication of the kernels used to update the model, and the UE120may determine the coefficients (e.g., as described below) when updating the model. The base station110may transmit the indication of the kernels based at least in part on the request described above in connection with reference number410. As an alternative, the base station110may transmit the indication of the kernels in combination with or after transmitting the indication of the change as described above in connection with reference number405.

As shown in connection with reference number420, the UE120may update the model associated with the PA based at least in part on the indication described above in connection with reference number405. In some aspects, the UE120replaces the model with a new model indicated by the base station110as described above in connection with reference number415. As an alternative, the UE120updates the model using a kernel series and coefficients indicated by the base station110as described above in connection with reference number415. In some aspects, the UE120updates the model based at least in part on one or more pilots received from the base station110. For example, the UE120may receive kernels used to update the model from the base station110, as described above in connection with reference number415, and may estimate coefficients associated with the model based at least in part on the pilot(s). A minimal sketch of this branching is provided after this discussion.

As shown in connection with reference number425, the UE120may transmit, and the base station110may receive, a request associated with updating at least one parameter associated with slicing received signals (e.g., as described above in connection withFIG.3). For example, the UE120may request at least one new parameter to replace at least one original parameter.

In some aspects, and as shown in connection with reference number430, the base station110may transmit, and the UE120may receive, an indication of the at least one new parameter. The base station110may transmit the indication of the at least one new parameter based at least in part on the request described above in connection with reference number425. As an alternative, the base station110may transmit the indication of the at least one new parameter in combination with or after transmitting the indication of the change as described above in connection with reference number405. In some aspects, the indication may include an LUT such that the UE120may map the indication of the change to at least one new kernel such that the UE120may use the at least one new kernel as the at least one new parameter. As an alternative, the indication may include an indication of a kernel series such that the UE120may use the kernel series as the at least one new parameter.
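To make the branching at reference numbers415and420concrete, the following is a minimal Python sketch of how a UE might apply such an indication. The message fields (new_model, kernels, coefficients) and the least-squares fit from pilots are illustrative assumptions, not fields or procedures defined by the disclosure or by any standard.

    from dataclasses import dataclass
    from typing import Callable, Optional, Sequence
    import numpy as np

    @dataclass
    class ModelUpdateIndication:
        new_model: Optional[Callable] = None          # complete replacement model
        kernels: Optional[Sequence[Callable]] = None  # e.g., lambda x: x * np.abs(x)**2
        coefficients: Optional[np.ndarray] = None     # coefficients for the kernels

    def update_pa_model(indication, pilots_tx=None, pilots_rx=None):
        # Replace the model outright if a complete new model is indicated.
        if indication.new_model is not None:
            return indication.new_model
        kernels, theta = indication.kernels, indication.coefficients
        if theta is None:
            # Kernels only: fit theta_LS = (H^H H)^{-1} H^H y from known pilots,
            # computed stably here with a least-squares solver.
            H = np.stack([k(pilots_tx) for k in kernels], axis=-1)
            theta, *_ = np.linalg.lstsq(H, pilots_rx, rcond=None)
        return lambda x: sum(t * k(x) for t, k in zip(theta, kernels))

Applying the returned model to an estimated data signal then plays the role of PA_model in the distortion-estimation step ofFIG.3; the same least-squares fit applies when the UE estimates slicing coefficients from pilots, as discussed next.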
Additionally, in some aspects, the base station110may transmit, and the UE120may receive, an indication of coefficients associated with the kernel series. Accordingly, the UE120may additionally use the coefficients as the at least one new parameter.

As shown in connection with reference number435, the UE120may update the at least one parameter associated with slicing received signals based at least in part on the indication described above in connection with reference number405. For example, the UE120may replace the at least one parameter with the at least one new parameter indicated by the base station110(e.g., as described above in connection with reference number430). As described above in connection withFIG.3, the at least one parameter may be associated with a Bussgang decomposition. In some aspects, the received signals may include single-carrier waveforms from the base station110.

In some aspects, the UE120updates the at least one parameter based at least in part on one or more pilots received from the base station110. For example, the UE120may receive kernels associated with the model from the base station110, as described above in connection with reference number430, and may estimate coefficients associated with the model based at least in part on the pilot(s). Accordingly, in one example, the UE120may solve for a coefficient vector (e.g., represented by $\hat{\theta}_{LS}$) according to an expression of a form similar to $(H^H H)^{-1} H^H y$, where H may represent a matrix of the kernels indicated by the base station110, $H^H$ may represent the Hermitian conjugate of the kernel matrix, and y may represent the pilot(s) received by the UE120. Additionally, or alternatively, the UE120updates the at least one parameter based at least in part on the model associated with the PA. For example, the UE120may use the model associated with the PA to estimate kernels and coefficients to use when slicing received signals. In some aspects, the model used to perform slicing may include a truncated model based at least in part on a full non-linearity model associated with the PA. For example, the full model may include memory elements that are omitted from the truncated model.

Accordingly, the UE120may perform digital post-distortion (e.g., as described above in connection withFIG.3) using the updated model associated with the PA and the updated parameter(s) associated with slicing. By using techniques as described in connection withFIG.4, the UE120can perform more accurate digital post-distortion (e.g., as described above in connection withFIG.3). Increasing the accuracy in turn increases the chances of successfully decoding the reconstructed signal. Successfully decoded signals do not need to be re-transmitted by the base station110, which conserves power and processing resources at the base station110, as well as conserving networking overhead. Additionally, the UE120conserves power and processing resources as compared with having to receive and decode re-transmitted signals. As indicated above,FIG.4is provided as an example. Other examples may differ from what is described with respect toFIG.4.

FIG.5is a diagram illustrating an example process500performed, for example, by a UE, in accordance with the present disclosure. Example process500is an example where the UE (e.g., UE120and/or apparatus800ofFIG.8) performs operations associated with performing high-order DPoD.
As shown inFIG.5, in some aspects, process500may include receiving, from a base station (e.g., base station110and/or apparatus900ofFIG.9), an indication of a change in a non-linearity model associated with a power amplifier of the base station (block510). For example, the UE (e.g., using communication manager140and/or reception component802, depicted inFIG.8) may receive, from a base station, an indication of a change in a non-linearity model associated with a power amplifier of the base station, as described herein.

As further shown inFIG.5, in some aspects, process500may include updating a model associated with the power amplifier based at least in part on the indication (block520). For example, the UE (e.g., using communication manager140and/or determination component808, depicted inFIG.8) may update a model associated with the power amplifier based at least in part on the indication, as described herein.

As further shown inFIG.5, in some aspects, process500may include updating at least one parameter associated with slicing received signals based at least in part on the indication (block530). For example, the UE (e.g., using communication manager140and/or determination component808) may update at least one parameter associated with slicing received signals based at least in part on the indication, as described herein.

Process500may include additional aspects, such as any single aspect or any combination of aspects described below and/or in connection with one or more other processes described elsewhere herein.

In a first aspect, the model and the at least one parameter are updated based at least in part on one or more pilots received from the base station. In a second aspect, alone or in combination with the first aspect, process500further includes receiving (e.g., using communication manager140and/or reception component802), from the base station, an indication of a new model, where the model is updated based at least in part on the new model. In a third aspect, alone or in combination with one or more of the first and second aspects, the indication includes one or more of a look-up table, one or more vectors of look-up tables, or an indication of a kernel series and coefficients. In a fourth aspect, alone or in combination with one or more of the first through third aspects, process500further includes transmitting (e.g., using communication manager140and/or transmission component804, depicted inFIG.8), to the base station, a request for the new model, where the indication of the new model is received based at least in part on the request. In a fifth aspect, alone or in combination with one or more of the first through fourth aspects, process500further includes receiving (e.g., using communication manager140and/or reception component802), from the base station, an indication of kernels, where the model is updated based at least in part on the kernels, and receiving (e.g., using communication manager140and/or reception component802), from the base station, one or more pilots, where coefficients associated with the model are updated based at least in part on the one or more pilots. In a sixth aspect, alone or in combination with one or more of the first through fifth aspects, process500further includes transmitting (e.g., using communication manager140and/or transmission component804), to the base station, a request for the kernels, where the indication of the kernels is received based at least in part on the request.
In a seventh aspect, alone or in combination with one or more of the first through sixth aspects, the at least one parameter is updated based at least in part on the updated model. In an eighth aspect, alone or in combination with one or more of the first through seventh aspects, process500further includes receiving (e.g., using communication manager140and/or reception component802), from the base station, an indication of at least one new parameter, where the at least one parameter is updated based at least in part on the at least one new parameter. In a ninth aspect, alone or in combination with one or more of the first through eighth aspects, the indication includes one or more of a look-up table or an indication of a kernel series. In a tenth aspect, alone or in combination with one or more of the first through ninth aspects, process500further includes transmitting (e.g., using communication manager140and/or transmission component804), to the base station, a request for the at least one new parameter, where the indication of the at least one new parameter is received based at least in part on the request. In an eleventh aspect, alone or in combination with one or more of the first through tenth aspects, the at least one parameter is associated with a Bussgang decomposition. In a twelfth aspect, alone or in combination with one or more of the first through eleventh aspects, the received signals include single-carrier waveforms. AlthoughFIG.5shows example blocks of process500, in some aspects, process500may include additional blocks, fewer blocks, different blocks, or differently arranged blocks than those depicted inFIG.5. Additionally, or alternatively, two or more of the blocks of process500may be performed in parallel. FIG.6is a diagram illustrating an example process600performed, for example, by a base station, in accordance with the present disclosure. Example process600is an example where the base station (e.g., base station110and/or apparatus900ofFIG.9) performs operations associated with performing high-order DPoD. As shown inFIG.6, in some aspects, process600may include determining a change in a non-linearity model associated with a power amplifier of the base station (block610). For example, the base station (e.g., using communication manager150and/or determination component908, depicted inFIG.9) may determine a change in a non-linearity model associated with a power amplifier of the base station, as described herein. As further shown inFIG.6, in some aspects, process600may include transmitting, to a UE (e.g., UE120and/or apparatus800ofFIG.8), an indication of the change (block620). For example, the base station (e.g., using communication manager150and/or transmission component904, depicted inFIG.9) may transmit, to a UE, an indication of the change, as described herein. Process600may include additional aspects, such as any single aspect or any combination of aspects described below and/or in connection with one or more other processes described elsewhere herein. In a first aspect, process600further includes transmitting (e.g., using communication manager150and/or transmission component904), to the UE, one or more pilots for updating a model associated with the power amplifier. In a second aspect, alone or in combination with the first aspect, process600further includes transmitting (e.g., using communication manager150and/or transmission component904), to the UE, an indication of a model associated with the power amplifier. 
In a third aspect, alone or in combination with one or more of the first and second aspects, the indication includes one or more of a look-up table, one or more vectors of look-up tables, or an indication of a kernel series and coefficients. In a fourth aspect, alone or in combination with one or more of the first through third aspects, process600further includes receiving (e.g., using communication manager150and/or reception component902, depicted inFIG.9), from the UE, a request for the model, where the indication of the model is transmitted based at least in part on the request. In a fifth aspect, alone or in combination with one or more of the first through fourth aspects, process600further includes transmitting (e.g., using communication manager150and/or transmission component904), to the UE, an indication of kernels for updating a model associated with the power amplifier, and transmitting (e.g., using communication manager150and/or transmission component904), to the UE, one or more pilots for updating coefficients associated with the model. In a sixth aspect, alone or in combination with one or more of the first through fifth aspects, process600further includes receiving (e.g., using communication manager150and/or reception component902), from the UE, a request for the kernels, where the indication of the kernels is transmitted based at least in part on the request. In a seventh aspect, alone or in combination with one or more of the first through sixth aspects, process600further includes transmitting (e.g., using communication manager150and/or transmission component904), to the UE, an indication of at least one parameter to use for slicing signals from the base station. In an eighth aspect, alone or in combination with one or more of the first through seventh aspects, the indication includes one or more of a look-up table or an indication of a kernel series. In a ninth aspect, alone or in combination with one or more of the first through eighth aspects, process600further includes receiving (e.g., using communication manager150and/or reception component902), from the UE, a request for the at least one parameter, where the indication of the at least one parameter is transmitted based at least in part on the request. In a tenth aspect, alone or in combination with one or more of the first through ninth aspects, the at least one parameter is associated with a Bussgang decomposition. In an eleventh aspect, alone or in combination with one or more of the first through tenth aspects, the signals from the base station include single-carrier waveforms. AlthoughFIG.6shows example blocks of process600, in some aspects, process600may include additional blocks, fewer blocks, different blocks, or differently arranged blocks than those depicted inFIG.6. Additionally, or alternatively, two or more of the blocks of process600may be performed in parallel. FIG.7is a diagram illustrating an example process700performed, for example, by a UE, in accordance with the present disclosure. Example process700is an example where the UE (e.g., UE120and/or apparatus800ofFIG.8) performs operations associated with performing high-order DPoD. As shown inFIG.7, in some aspects, process700may include receiving, from a base station (e.g., base station110and/or apparatus900ofFIG.9), a signal that was amplified using a power amplifier that is at least partially non-linear (block710). 
For example, the UE (e.g., using communication manager140and/or reception component802, depicted inFIG.8) may receive, from a base station, a signal that was amplified using a power amplifier that is at least partially non-linear, as described herein. As further shown inFIG.7, in some aspects, process700may include estimating a portion of the received signal that includes an original data signal using slicing with at least two coefficients (block720). For example, the UE (e.g., using communication manager140and/or estimation component810, depicted inFIG.8) may estimate a portion of the received signal that includes an original data signal using slicing with at least two coefficients, as described herein. As further shown inFIG.7, in some aspects, process700may include estimating a portion of the signal that includes a distortion using a model associated with the power amplifier (block730). For example, the UE (e.g., using communication manager140and/or estimation component810) may estimate a portion of the signal that includes a distortion using a model associated with the power amplifier, as described herein. As further shown inFIG.7, in some aspects, process700may include generating a reconstructed signal based at least in part on the received signal, the estimated portion including the original data signal, and the estimated portion including the distortion (block740). For example, the UE (e.g., using communication manager140and/or reconstruction component812, depicted inFIG.8) may generate a reconstructed signal based at least in part on the received signal, the estimated portion including the original data signal, and the estimated portion including the distortion, as described herein. Process700may include additional aspects, such as any single aspect or any combination of aspects described below and/or in connection with one or more other processes described elsewhere herein. In a first aspect, the portion including the original data signal and the portion including the distortion are estimated iteratively. In a second aspect, alone or in combination with the first aspect, the signal includes single-carrier waveforms. In a third aspect, alone or in combination with one or more of the first and second aspects, the slicing uses a plurality of odd powers with the at least two coefficients. In a fourth aspect, alone or in combination with one or more of the first through third aspects, the slicing and estimation of the portion including the distortion are based at least in part on a Bussgang decomposition. AlthoughFIG.7shows example blocks of process700, in some aspects, process700may include additional blocks, fewer blocks, different blocks, or differently arranged blocks than those depicted inFIG.7. Additionally, or alternatively, two or more of the blocks of process700may be performed in parallel. FIG.8is a block diagram of an example apparatus800for wireless communication. The apparatus800may be a UE, or a UE may include the apparatus800. In some aspects, the apparatus800includes a reception component802and a transmission component804, which may be in communication with one another (for example, via one or more buses and/or one or more other components). As shown, the apparatus800may communicate with another apparatus806(such as a UE, a base station, or another wireless communication device) using the reception component802and the transmission component804. As further shown, the apparatus800may include the communication manager140. 
The communication manager140may include one or more of a determination component808, an estimation component810, or a reconstruction component812, among other examples. In some aspects, the apparatus800may be configured to perform one or more operations described herein in connection withFIGS.3-4. Additionally, or alternatively, the apparatus800may be configured to perform one or more processes described herein, such as process500ofFIG.5, process700ofFIG.7, or a combination thereof. In some aspects, the apparatus800and/or one or more components shown inFIG.8may include one or more components of the UE described in connection withFIG.2. Additionally, or alternatively, one or more components shown inFIG.8may be implemented within one or more components described in connection withFIG.2. Additionally, or alternatively, one or more components of the set of components may be implemented at least in part as software stored in a memory. For example, a component (or a portion of a component) may be implemented as instructions or code stored in a non-transitory computer-readable medium and executable by a controller or a processor to perform the functions or operations of the component. The reception component802may receive communications, such as reference signals, control information, data communications, or a combination thereof, from the apparatus806. The reception component802may provide received communications to one or more other components of the apparatus800. In some aspects, the reception component802may perform signal processing on the received communications (such as filtering, amplification, demodulation, analog-to-digital conversion, demultiplexing, deinterleaving, de-mapping, equalization, interference cancellation, or decoding, among other examples), and may provide the processed signals to the one or more other components of the apparatus800. In some aspects, the reception component802may include one or more antennas, a demodulator, a MIMO detector, a receive processor, a controller/processor, a memory, or a combination thereof, of the UE described in connection withFIG.2. The transmission component804may transmit communications, such as reference signals, control information, data communications, or a combination thereof, to the apparatus806. In some aspects, one or more other components of the apparatus800may generate communications and may provide the generated communications to the transmission component804for transmission to the apparatus806. In some aspects, the transmission component804may perform signal processing on the generated communications (such as filtering, amplification, modulation, digital-to-analog conversion, multiplexing, interleaving, mapping, or encoding, among other examples), and may transmit the processed signals to the apparatus806. In some aspects, the transmission component804may include one or more antennas, a modulator, a transmit MIMO processor, a transmit processor, a controller/processor, a memory, or a combination thereof, of the UE described in connection withFIG.2. In some aspects, the transmission component804may be co-located with the reception component802in a transceiver. In some aspects, the reception component802may receive (e.g., from the apparatus806) an indication of a change in a non-linearity model associated with a power amplifier of the apparatus806. Accordingly, the determination component808may update a model associated with the power amplifier based at least in part on the indication. 
The determination component808may include a MIMO detector, a receive processor, a transmit MIMO processor, a transmit processor, a controller/processor, a memory, or a combination thereof, of the UE described in connection withFIG.2. In some aspects, the reception component802may receive (e.g., from the apparatus806) an indication of a new model, such that the determination component808updates the model based at least in part on the new model. Additionally, in some aspects, the transmission component804may transmit (e.g., to the apparatus806) a request for the new model, such that the reception component802receives the indication of the new model based at least in part on the request. As an alternative, the reception component802may receive (e.g., from the apparatus806) an indication of kernels, such that the determination component808updates the model based at least in part on the kernels. Additionally, the reception component802may receive (e.g., from the apparatus806) one or more pilots, such that the determination component808updates coefficients associated with the model based at least in part on the one or more pilots. In some aspects, the transmission component804may transmit (e.g., to the apparatus806) a request for the kernels, such that the reception component802receives the indication of the kernels based at least in part on the request. Additionally, the determination component808may update at least one parameter associated with slicing received signals based at least in part on the indication. In some aspects, the reception component802may receive (e.g., from the apparatus806) an indication of at least one new parameter, such that the determination component808updates the at least one parameter based at least in part on the at least one new parameter. Additionally, in some aspects, the transmission component804may transmit (e.g., to the apparatus806) a request for the at least one new parameter, such that the reception component802receives the indication of the at least one new parameter based at least in part on the request. Accordingly, in some aspects, the reception component802may receive (e.g., from the apparatus806), a signal that was amplified using a power amplifier that is at least partially non-linear. The estimation component810may estimate a portion of the received signal that includes an original data signal using slicing with at least two coefficients and may estimate a portion of the signal that includes a distortion using a model associated with the power amplifier. The estimation component810may include a MIMO detector, a receive processor, a transmit MIMO processor, a transmit processor, a controller/processor, a memory, or a combination thereof, of the UE described in connection withFIG.2. Additionally, the reconstruction component812may generate a reconstructed signal based at least in part on the received signal, the estimated portion including the original data signal, and the estimated portion including the distortion. The reconstruction component812may include a MIMO detector, a receive processor, a demodulator, a controller/processor, a memory, or a combination thereof, of the UE described in connection withFIG.2. The number and arrangement of components shown inFIG.8are provided as an example. In practice, there may be additional components, fewer components, different components, or differently arranged components than those shown inFIG.8. 
Furthermore, two or more components shown inFIG.8may be implemented within a single component, or a single component shown inFIG.8may be implemented as multiple, distributed components. Additionally, or alternatively, a set of (one or more) components shown inFIG.8may perform one or more functions described as being performed by another set of components shown inFIG.8.

FIG.9is a block diagram of an example apparatus900for wireless communication. The apparatus900may be a base station, or a base station may include the apparatus900. In some aspects, the apparatus900includes a reception component902and a transmission component904, which may be in communication with one another (for example, via one or more buses and/or one or more other components). As shown, the apparatus900may communicate with another apparatus906(such as a UE, a base station, or another wireless communication device) using the reception component902and the transmission component904. As further shown, the apparatus900may include the communication manager150. The communication manager150may include a determination component908, among other examples.

In some aspects, the apparatus900may be configured to perform one or more operations described herein in connection withFIGS.3-4. Additionally, or alternatively, the apparatus900may be configured to perform one or more processes described herein, such as process600ofFIG.6, or a combination thereof. In some aspects, the apparatus900and/or one or more components shown inFIG.9may include one or more components of the base station described in connection withFIG.2. Additionally, or alternatively, one or more components shown inFIG.9may be implemented within one or more components described in connection withFIG.2. Additionally, or alternatively, one or more components of the set of components may be implemented at least in part as software stored in a memory. For example, a component (or a portion of a component) may be implemented as instructions or code stored in a non-transitory computer-readable medium and executable by a controller or a processor to perform the functions or operations of the component.

The reception component902may receive communications, such as reference signals, control information, data communications, or a combination thereof, from the apparatus906. The reception component902may provide received communications to one or more other components of the apparatus900. In some aspects, the reception component902may perform signal processing on the received communications (such as filtering, amplification, demodulation, analog-to-digital conversion, demultiplexing, deinterleaving, de-mapping, equalization, interference cancellation, or decoding, among other examples), and may provide the processed signals to the one or more other components of the apparatus900. In some aspects, the reception component902may include one or more antennas, a demodulator, a MIMO detector, a receive processor, a controller/processor, a memory, or a combination thereof, of the base station described in connection withFIG.2.

The transmission component904may transmit communications, such as reference signals, control information, data communications, or a combination thereof, to the apparatus906. In some aspects, one or more other components of the apparatus900may generate communications and may provide the generated communications to the transmission component904for transmission to the apparatus906.
In some aspects, the transmission component904may perform signal processing on the generated communications (such as filtering, amplification, modulation, digital-to-analog conversion, multiplexing, interleaving, mapping, or encoding, among other examples), and may transmit the processed signals to the apparatus906. In some aspects, the transmission component904may include one or more antennas, a modulator, a transmit MIMO processor, a transmit processor, a controller/processor, a memory, or a combination thereof, of the base station described in connection withFIG.2. In some aspects, the transmission component904may be co-located with the reception component902in a transceiver. In some aspects, the determination component908may determine a change in a non-linearity model associated with a power amplifier of the apparatus900. The determination component908may include a transmit MIMO processor, a transmit processor, a MIMO detector, a receive processor, a controller/processor, a memory, or a combination thereof, of the base station described in connection withFIG.2. Accordingly, the transmission component904may transmit (e.g., to the apparatus906) an indication of the change. In some aspects, the transmission component904may additionally transmit (e.g., to the apparatus906) one or more pilots for updating a model associated with the power amplifier. Additionally, or alternatively, the transmission component904may transmit (e.g., to the apparatus906) an indication of kernels for updating a model associated with the power amplifier. Accordingly, the one or more pilots may be for updating coefficients associated with the model. Additionally, in some aspects, the reception component902may receive (e.g., from the apparatus906) a request for the kernels, such that the transmission component904transmits the indication of the kernels based at least in part on the request. As an alternative, the transmission component904may transmit (e.g., to the apparatus906) an indication of a model associated with the power amplifier. Additionally, in some aspects, the reception component902may receive (e.g., from the apparatus906) a request for the model, such that the transmission component904transmits the indication of the model based at least in part on the request. Additionally, or alternatively, in some aspects, the transmission component904may transmit (e.g., to the apparatus906) an indication of at least one parameter to use for slicing signals from the apparatus900. In some aspects, the reception component902may receive (e.g., from the apparatus906) a request for the at least one parameter, such that the transmission component904transmits the indication of the at least one parameter based at least in part on the request. The number and arrangement of components shown inFIG.9are provided as an example. In practice, there may be additional components, fewer components, different components, or differently arranged components than those shown inFIG.9. Furthermore, two or more components shown inFIG.9may be implemented within a single component, or a single component shown inFIG.9may be implemented as multiple, distributed components. Additionally, or alternatively, a set of (one or more) components shown inFIG.9may perform one or more functions described as being performed by another set of components shown inFIG.9. 
The following provides an overview of some Aspects of the present disclosure:

Aspect 1: A method of wireless communication performed by a user equipment (UE), comprising: receiving, from a base station, an indication of a change in a non-linearity model associated with a power amplifier of the base station; updating a model associated with the power amplifier based at least in part on the indication; and updating at least one parameter associated with slicing received signals based at least in part on the indication.

Aspect 2: The method of Aspect 1, wherein the model and the at least one parameter are updated based at least in part on one or more pilots received from the base station.

Aspect 3: The method of Aspect 1, further comprising: receiving, from the base station, an indication of a new model, wherein the model is updated based at least in part on the new model.

Aspect 4: The method of Aspect 3, wherein the indication includes one or more of a look-up table, one or more vectors of look-up tables, or an indication of a kernel series and coefficients.

Aspect 5: The method of any of Aspects 3 through 4, further comprising: transmitting, to the base station, a request for the new model, wherein the indication of the new model is received based at least in part on the request.

Aspect 6: The method of any of Aspects 1 through 2, further comprising: receiving, from the base station, an indication of kernels, wherein the model is updated based at least in part on the kernels; and receiving, from the base station, one or more pilots, wherein coefficients associated with the model are updated based at least in part on the one or more pilots.

Aspect 7: The method of Aspect 6, further comprising: transmitting, to the base station, a request for the kernels, wherein the indication of the kernels is received based at least in part on the request.

Aspect 8: The method of any of Aspects 1 through 7, wherein the at least one parameter is updated based at least in part on the updated model.

Aspect 9: The method of any of Aspects 1 through 7, further comprising: receiving, from the base station, an indication of at least one new parameter, wherein the at least one parameter is updated based at least in part on the at least one new parameter.

Aspect 10: The method of Aspect 9, wherein the indication includes one or more of a look-up table or an indication of a kernel series.

Aspect 11: The method of any of Aspects 9 through 10, further comprising: transmitting, to the base station, a request for the at least one new parameter, wherein the indication of the at least one new parameter is received based at least in part on the request.

Aspect 12: The method of any of Aspects 1 through 11, wherein the at least one parameter is associated with a Bussgang decomposition.

Aspect 13: The method of any of Aspects 1 through 12, wherein the received signals include single-carrier waveforms.

Aspect 14: A method of wireless communication performed by a base station, comprising: determining a change in a non-linearity model associated with a power amplifier of the base station; and transmitting, to a user equipment (UE), an indication of the change.

Aspect 15: The method of Aspect 14, further comprising: transmitting, to the UE, one or more pilots for updating a model associated with the power amplifier.

Aspect 16: The method of Aspect 14, further comprising: transmitting, to the UE, an indication of a model associated with the power amplifier.

Aspect 17: The method of Aspect 16, wherein the indication includes one or more of a look-up table, one or more vectors of look-up tables, or an indication of a kernel series and coefficients.

Aspect 18: The method of any of Aspects 16 through 17, further comprising: receiving, from the UE, a request for the model, wherein the indication of the model is transmitted based at least in part on the request.

Aspect 19: The method of any of Aspects 14 through 15, further comprising: transmitting, to the UE, an indication of kernels for updating a model associated with the power amplifier; and transmitting, to the UE, one or more pilots for updating coefficients associated with the model.

Aspect 20: The method of Aspect 19, further comprising: receiving, from the UE, a request for the kernels, wherein the indication of the kernels is transmitted based at least in part on the request.

Aspect 21: The method of any of Aspects 14 through 20, further comprising: transmitting, to the UE, an indication of at least one parameter to use for slicing signals from the base station.

Aspect 22: The method of Aspect 21, wherein the indication includes one or more of a look-up table or an indication of a kernel series.

Aspect 23: The method of any of Aspects 21 through 22, further comprising: receiving, from the UE, a request for the at least one parameter, wherein the indication of the at least one parameter is transmitted based at least in part on the request.

Aspect 24: The method of any of Aspects 14 through 23, wherein the at least one parameter is associated with a Bussgang decomposition.

Aspect 25: The method of any of Aspects 14 through 24, wherein the signals from the base station include single-carrier waveforms.

Aspect 26: A method of wireless communication performed by a user equipment (UE), comprising: receiving, from a base station, a signal that was amplified using a power amplifier that is at least partially non-linear; estimating a portion of the received signal that includes an original data signal using slicing with at least two coefficients; estimating a portion of the signal that includes a distortion using a model associated with the power amplifier; and generating a reconstructed signal based at least in part on the received signal, the estimated portion including the original data signal, and the estimated portion including the distortion.

Aspect 27: The method of Aspect 26, wherein the portion including the original data signal and the portion including the distortion are estimated iteratively.

Aspect 28: The method of any of Aspects 26 through 27, wherein the signal includes single-carrier waveforms.

Aspect 29: The method of any of Aspects 26 through 28, wherein the slicing uses a plurality of odd powers with the at least two coefficients.

Aspect 30: The method of any of Aspects 26 through 29, wherein the slicing and estimation of the portion including the distortion are based at least in part on a Bussgang decomposition.

Aspect 31: An apparatus for wireless communication at a device, comprising a processor; memory coupled with the processor; and instructions stored in the memory and executable by the processor to cause the apparatus to perform the method of one or more of Aspects 1-13.

Aspect 32: A device for wireless communication, comprising a memory and one or more processors coupled to the memory, the one or more processors configured to perform the method of one or more of Aspects 1-13.

Aspect 33: An apparatus for wireless communication, comprising at least one means for performing the method of one or more of Aspects 1-13.

Aspect 34: A non-transitory computer-readable medium storing code for wireless communication, the code comprising instructions executable by a processor to perform the method of one or more of Aspects 1-13.

Aspect 35: A non-transitory computer-readable medium storing a set of instructions for wireless communication, the set of instructions comprising one or more instructions that, when executed by one or more processors of a device, cause the device to perform the method of one or more of Aspects 1-13.

Aspect 36: An apparatus for wireless communication at a device, comprising a processor; memory coupled with the processor; and instructions stored in the memory and executable by the processor to cause the apparatus to perform the method of one or more of Aspects 14-25.

Aspect 37: A device for wireless communication, comprising a memory and one or more processors coupled to the memory, the one or more processors configured to perform the method of one or more of Aspects 14-25.

Aspect 38: An apparatus for wireless communication, comprising at least one means for performing the method of one or more of Aspects 14-25.

Aspect 39: A non-transitory computer-readable medium storing code for wireless communication, the code comprising instructions executable by a processor to perform the method of one or more of Aspects 14-25.

Aspect 40: A non-transitory computer-readable medium storing a set of instructions for wireless communication, the set of instructions comprising one or more instructions that, when executed by one or more processors of a device, cause the device to perform the method of one or more of Aspects 14-25.

Aspect 41: An apparatus for wireless communication at a device, comprising a processor; memory coupled with the processor; and instructions stored in the memory and executable by the processor to cause the apparatus to perform the method of one or more of Aspects 26-30.

Aspect 42: A device for wireless communication, comprising a memory and one or more processors coupled to the memory, the one or more processors configured to perform the method of one or more of Aspects 26-30.

Aspect 43: An apparatus for wireless communication, comprising at least one means for performing the method of one or more of Aspects 26-30.

Aspect 44: A non-transitory computer-readable medium storing code for wireless communication, the code comprising instructions executable by a processor to perform the method of one or more of Aspects 26-30.

Aspect 45: A non-transitory computer-readable medium storing a set of instructions for wireless communication, the set of instructions comprising one or more instructions that, when executed by one or more processors of a device, cause the device to perform the method of one or more of Aspects 26-30.

The foregoing disclosure provides illustration and description, but is not intended to be exhaustive or to limit the aspects to the precise forms disclosed. Modifications and variations may be made in light of the above disclosure or may be acquired from practice of the aspects. As used herein, the term "component" is intended to be broadly construed as hardware and/or a combination of hardware and software.
“Software” shall be construed broadly to mean instructions, instruction sets, code, code segments, program code, programs, subprograms, software modules, applications, software applications, software packages, routines, subroutines, objects, executables, threads of execution, procedures, and/or functions, among other examples, whether referred to as software, firmware, middleware, microcode, hardware description language, or otherwise. As used herein, a processor is implemented in hardware and/or a combination of hardware and software. It will be apparent that systems and/or methods described herein may be implemented in different forms of hardware and/or a combination of hardware and software. The actual specialized control hardware or software code used to implement these systems and/or methods is not limiting of the aspects. Thus, the operation and behavior of the systems and/or methods were described herein without reference to specific software code—it being understood that software and hardware can be designed to implement the systems and/or methods based, at least in part, on the description herein. As used herein, satisfying a threshold may, depending on the context, refer to a value being greater than the threshold, greater than or equal to the threshold, less than the threshold, less than or equal to the threshold, equal to the threshold, not equal to the threshold, or the like. Even though particular combinations of features are recited in the claims and/or disclosed in the specification, these combinations are not intended to limit the disclosure of various aspects. In fact, many of these features may be combined in ways not specifically recited in the claims and/or disclosed in the specification. Although each dependent claim listed below may directly depend on only one claim, the disclosure of various aspects includes each dependent claim in combination with every other claim in the claim set. As used herein, a phrase referring to “at least one of” a list of items refers to any combination of those items, including single members. As an example, “at least one of: a, b, or c” is intended to cover a, b, c, a-b, a-c, b-c, and a-b-c, as well as any combination with multiples of the same element (e.g., a-a, a-a-a, a-a-b, a-a-c, a-b-b, a-c-c, b-b, b-b-b, b-b-c, c-c, and c-c-c or any other ordering of a, b, and c). No element, act, or instruction used herein should be construed as critical or essential unless explicitly described as such. Also, as used herein, the articles “a” and “an” are intended to include one or more items and may be used interchangeably with “one or more.” Further, as used herein, the article “the” is intended to include one or more items referenced in connection with the article “the” and may be used interchangeably with “the one or more.” Furthermore, as used herein, the terms “set” and “group” are intended to include one or more items (e.g., related items, unrelated items, or a combination of related and unrelated items), and may be used interchangeably with “one or more.” Where only one item is intended, the phrase “only one” or similar language is used. Also, as used herein, the terms “has,” “have,” “having,” or the like are intended to be open-ended terms. Further, the phrase “based on” is intended to mean “based, at least in part, on” unless explicitly stated otherwise. 
Also, as used herein, the term “or” is intended to be inclusive when used in a series and may be used interchangeably with “and/or,” unless explicitly stated otherwise (e.g., if used in combination with “either” or “only one of”).
89,000
11863270
DESCRIPTION OF EXEMPLARY EMBODIMENTS In what follows, preferred embodiments of the present invention will be described in detail with reference to the appended drawings. The detailed descriptions disclosed below with reference to the appended drawings are intended to describe illustrative embodiments of the present invention and are not intended to represent the sole embodiments of the present invention. The detailed descriptions below include specific details to provide a complete understanding of the present invention. However, it should be understood by those skilled in the art that the present invention may be embodied without these specific details. In some cases, to avoid obscuring the gist of the present invention, well-known structures and devices may be omitted or may be depicted in the form of a block diagram centered on the core functions of each structure and device.

A base station in this document is regarded as a terminal node of a network that performs communication directly with a UE. In this document, particular operations described as being performed by the base station may, depending on the situation, be performed by an upper node of the base station. In other words, it is apparent that, in a network consisting of a plurality of network nodes including a base station, various operations performed for communication with a UE can be performed by the base station or by network nodes other than the base station. The term Base Station (BS) may be replaced with a term such as fixed station, Node B, evolved-NodeB (eNB), Base Transceiver System (BTS), Access Point (AP), or general NB (gNB). Also, a terminal can be fixed or mobile, and the term may be replaced with a term such as User Equipment (UE), Mobile Station (MS), User Terminal (UT), Mobile Subscriber Station (MSS), Subscriber Station (SS), Advanced Mobile Station (AMS), Wireless Terminal (WT), Machine-Type Communication (MTC) device, Machine-to-Machine (M2M) device, or Device-to-Device (D2D) device.

In what follows, downlink (DL) refers to communication from a base station to a terminal, while uplink (UL) refers to communication from a terminal to a base station. In downlink transmission, a transmitter may be part of the base station, and a receiver may be part of the terminal. Similarly, in uplink transmission, a transmitter may be part of the terminal, and a receiver may be part of the base station.

Specific terms used in the following descriptions are introduced to aid understanding of the present invention, and these specific terms may be used in different ways as long as they do not depart from the technical scope of the present invention.

The technology described below may be used for various types of wireless access systems based on Code Division Multiple Access (CDMA), Frequency Division Multiple Access (FDMA), Time Division Multiple Access (TDMA), Orthogonal Frequency Division Multiple Access (OFDMA), Single Carrier Frequency Division Multiple Access (SC-FDMA), or Non-Orthogonal Multiple Access (NOMA). CDMA may be implemented by a radio technology such as Universal Terrestrial Radio Access (UTRA) or CDMA2000. TDMA may be implemented by a radio technology such as Global System for Mobile communications (GSM), General Packet Radio Service (GPRS), or Enhanced Data rates for GSM Evolution (EDGE). OFDMA may be implemented by a radio technology such as IEEE 802.11 (Wi-Fi), IEEE 802.16 (WiMAX), IEEE 802.20, or Evolved UTRA (E-UTRA). UTRA is part of the Universal Mobile Telecommunications System (UMTS).
The 3rd Generation Partnership Project (3GPP) Long Term Evolution (LTE) is part of the Evolved UMTS (E-UMTS), which uses the E-UTRA, employing OFDMA for downlink and SC-FDMA for uplink transmission. LTE-A (Advanced) is an evolved version of the 3GPP LTE system.

5G NR defines enhanced Mobile Broadband (eMBB), massive Machine Type Communication (mMTC), Ultra-Reliable and Low Latency Communications (URLLC), and vehicle-to-everything (V2X) usage scenarios. The 5G NR standard is divided into standalone (SA) and non-standalone (NSA) modes according to the co-existence of the NR system and the LTE system. 5G NR supports various subcarrier spacings, and supports CP-OFDM for downlink transmission and both CP-OFDM and DFT-s-OFDM (SC-OFDM) for uplink transmission.

The embodiments of the present invention may be supported by the standard documents disclosed for at least one of the wireless access systems IEEE 802, 3GPP, and 3GPP2. In other words, those steps or portions of the embodiments of the present invention that are not described, so as to clearly illustrate the technical principles of the present invention, may be supported by the aforementioned documents. Also, all of the terms disclosed in this document may be explained by the aforementioned standard documents. For the purpose of clarity, descriptions are given mainly with respect to 3GPP LTE/LTE-A, but the technical features of the present invention are not limited to any specific system.

Definition of Terms

eLTE eNB: An evolution of an eNB that supports a connection to an EPC and an NGC.

gNB: A node supporting NR in addition to a connection with an NGC.

New RAN: A radio access network that supports NR or E-UTRA or interacts with an NGC.

Network slice: A network defined by an operator so as to provide a solution optimized for a specific market scenario that has specific requirements, together with an inter-terminal range.

Network function: A logical node in a network infrastructure that has a well-defined external interface and a well-defined functional operation.

NG-C: A control plane interface used for the NG2 reference point between new RAN and an NGC.

NG-U: A user plane interface used for the NG3 reference point between new RAN and an NGC.

Non-standalone NR: A deployment configuration in which a gNB requires an LTE eNB as an anchor for a control plane connection to an EPC, or requires an eLTE eNB as an anchor for a control plane connection to an NGC.

Non-standalone E-UTRA: A deployment configuration in which an eLTE eNB requires a gNB as an anchor for a control plane connection to an NGC.

User plane gateway: A termination point of the NG-U interface.

Numerology: Corresponds to one subcarrier spacing in the frequency domain. Different numerologies may be defined by scaling a reference subcarrier spacing by an integer N.

NR: NR Radio Access or New Radio.

General System

FIG.1is a diagram illustrating an example of the overall structure of a new radio (NR) system to which a method proposed by the present disclosure may be implemented. Referring toFIG.1, an NG-RAN is composed of gNBs that provide NG-RA user plane (new AS sublayer/PDCP/RLC/MAC/PHY) and control plane (RRC) protocol terminations for a UE (User Equipment). The gNBs are connected to each other via an Xn interface. The gNBs are also connected to an NGC via an NG interface. More specifically, the gNBs are connected to an Access and Mobility Management Function (AMF) via an N2 interface and to a User Plane Function (UPF) via an N3 interface.
NR (New RAT) Numerology and Frame Structure

In the NR system, multiple numerologies may be supported. The numerologies may be defined by subcarrier spacing and a CP (Cyclic Prefix) overhead. Spacing between the plurality of subcarriers may be derived by scaling a basic subcarrier spacing by an integer N (or μ). In addition, although a very low subcarrier spacing is assumed not to be used at a very high carrier frequency, a numerology to be used may be selected independent of a frequency band.

In addition, in the NR system, a variety of frame structures according to the multiple numerologies may be supported.

Hereinafter, an Orthogonal Frequency Division Multiplexing (OFDM) numerology and a frame structure, which may be considered in the NR system, will be described.

A plurality of OFDM numerologies supported in the NR system may be defined as in Table 1.

TABLE 1
μ    Δf = 2^μ · 15 [kHz]    Cyclic prefix
0    15                     Normal
1    30                     Normal
2    60                     Normal, Extended
3    120                    Normal
4    240                    Normal
5    480                    Normal

Regarding a frame structure in the NR system, a size of various fields in the time domain is expressed as a multiple of a time unit of T_s = 1/(Δf_max · N_f). In this case, Δf_max = 480 · 10^3 Hz and N_f = 4096. DL and UL transmission is configured as a radio frame having a duration of T_f = (Δf_max · N_f / 100) · T_s = 10 ms. The radio frame is composed of ten subframes each having a duration of T_sf = (Δf_max · N_f / 1000) · T_s = 1 ms. In this case, there may be a set of UL frames and a set of DL frames.

FIG. 2 illustrates a relationship between a UL frame and a DL frame in a wireless communication system to which a method proposed by the present disclosure may be implemented.

As illustrated in FIG. 2, a UL frame number i from a User Equipment (UE) needs to be transmitted T_TA = N_TA · T_s before the start of a corresponding DL frame in the UE.

Regarding the numerology μ, slots are numbered in ascending order of n_s^μ ∈ {0, ..., N_slot^{subframe,μ} - 1} in a subframe, and in ascending order of n_{s,f}^μ ∈ {0, ..., N_slot^{frame,μ} - 1} in a radio frame. One slot is composed of N_symb^μ continuous OFDM symbols, and N_symb^μ is determined depending on the numerology in use and the slot configuration. The start of slot n_s^μ in a subframe is temporally aligned with the start of OFDM symbol n_s^μ · N_symb^μ in the same subframe.

Not all UEs are able to transmit and receive at the same time, and this means that not all OFDM symbols in a DL slot or an UL slot are available to be used.

Table 2 shows the number of OFDM symbols per slot (N_symb^μ), the number of slots per radio frame (N_slot^{frame,μ}), and the number of slots per subframe (N_slot^{subframe,μ}) for a normal CP in the numerology μ, and Table 3 shows the same quantities for an extended CP in the numerology μ.

TABLE 2
                  Slot configuration 0                       Slot configuration 1
μ    N_symb^μ   N_slot^{frame,μ}   N_slot^{subframe,μ}    N_symb^μ   N_slot^{frame,μ}   N_slot^{subframe,μ}
0    14         10                 1                      7          20                 2
1    14         20                 2                      7          40                 4
2    14         40                 4                      7          80                 8
3    14         80                 8                      -          -                  -
4    14         160                16                     -          -                  -
5    14         320                32                     -          -                  -

TABLE 3
                  Slot configuration 0                       Slot configuration 1
μ    N_symb^μ   N_slot^{frame,μ}   N_slot^{subframe,μ}    N_symb^μ   N_slot^{frame,μ}   N_slot^{subframe,μ}
0    12         10                 1                      6          20                 2
1    12         20                 2                      6          40                 4
2    12         40                 4                      6          80                 8
3    12         80                 8                      -          -                  -
4    12         160                16                     -          -                  -
5    12         320                32                     -          -                  -
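As an informal cross-check of the numerology relationships above, the following Python sketch (illustrative only; the function name and structure are chosen here and are not taken from any standard implementation) derives the subcarrier spacing of Table 1 and the slot counts of Table 2 for slot configuration 0 with a normal CP.

```python
def numerology_params(mu: int):
    """Derive NR numerology parameters (slot configuration 0, normal CP).

    Based on: delta_f = 2**mu * 15 kHz, 14 OFDM symbols per slot,
    2**mu slots per 1 ms subframe, ten subframes per 10 ms radio frame.
    """
    scs_khz = 15 * (2 ** mu)                   # subcarrier spacing, Table 1
    symbols_per_slot = 14                      # N_symb^mu for a normal CP
    slots_per_subframe = 2 ** mu               # N_slot^{subframe,mu}
    slots_per_frame = 10 * slots_per_subframe  # N_slot^{frame,mu}
    return scs_khz, symbols_per_slot, slots_per_frame, slots_per_subframe

for mu in range(6):
    print(mu, numerology_params(mu))
# mu=0 -> (15, 14, 10, 1); mu=3 -> (120, 14, 80, 8), matching Table 2.
```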
NR Physical Resource

Regarding physical resources in the NR system, an antenna port, a resource grid, a resource element, a resource block, a carrier part, etc. may be considered.

Hereinafter, the above physical resources possible to be considered in the NR system will be described in more detail.

First, regarding an antenna port, the antenna port is defined such that a channel over which a symbol on one antenna port is transmitted can be inferred from another channel over which a symbol on the same antenna port is transmitted. When large-scale properties of a channel over which a symbol on one antenna port is transmitted can be inferred from another channel over which a symbol on another antenna port is transmitted, the two antenna ports may be in a QC/QCL (quasi co-located or quasi co-location) relationship. Herein, the large-scale properties may include at least one of delay spread, Doppler spread, Doppler shift, average gain, and average delay.

FIG. 3 illustrates an example of a resource grid supported in a wireless communication system to which a method proposed by the present disclosure may be implemented.

Referring to FIG. 3, a resource grid is composed of N_RB^μ · N_sc^RB subcarriers in a frequency domain, and each subframe is composed of 14 · 2^μ OFDM symbols, but the present disclosure is not limited thereto.

In the NR system, a transmitted signal is described by one or more resource grids composed of N_RB^μ · N_sc^RB subcarriers and 2^μ · N_symb^{(μ)} OFDM symbols. Herein, N_RB^μ ≤ N_RB^{max,μ}. The above N_RB^{max,μ} indicates the maximum transmission bandwidth, and it may change not just between numerologies, but between UL and DL.

In this case, as illustrated in FIG. 3, one resource grid may be configured for the numerology μ and an antenna port p.

Each element of the resource grid for the numerology μ and the antenna port p is indicated as a resource element, and may be uniquely identified by an index pair (k, l). Herein, k = 0, ..., N_RB^μ · N_sc^RB - 1 is an index in the frequency domain, and l = 0, ..., 2^μ · N_symb^{(μ)} - 1 indicates the location of a symbol in a subframe. To indicate a resource element in a slot, the index pair (k, l) is used. Herein, l = 0, ..., N_symb^μ - 1.

The resource element (k, l) for the numerology μ and the antenna port p corresponds to a complex value a_{k,l}^{(p,μ)}. When there is no risk of confusion or when a specific antenna port or numerology is specified, the indexes p and μ may be dropped and thereby the complex value may become a_{k,l}^{(p)} or a_{k,l}.

In addition, a physical resource block is defined as N_sc^RB = 12 continuous subcarriers in the frequency domain. In the frequency domain, physical resource blocks may be numbered from 0 to N_RB^μ - 1. At this point, a relationship between the physical resource block number n_PRB and the resource element (k, l) may be given as in Equation 1.

n_PRB = ⌊ k / N_sc^RB ⌋    [Equation 1]
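Equation 1 amounts to integer division of the subcarrier index by the resource block size. A minimal sketch, assuming nothing beyond N_sc^RB = 12:

```python
N_SC_RB = 12  # subcarriers per physical resource block

def prb_of_subcarrier(k: int) -> int:
    """Equation 1: physical resource block number for subcarrier index k."""
    return k // N_SC_RB

assert prb_of_subcarrier(0) == 0
assert prb_of_subcarrier(11) == 0
assert prb_of_subcarrier(12) == 1   # the 13th subcarrier opens PRB 1
```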
In addition, regarding a carrier part, a UE may be configured to receive or transmit the carrier part using only a subset of a resource grid. At this point, the set of resource blocks which the UE is configured to receive or transmit are numbered from 0 to N_URB^μ - 1 in the frequency domain.

Self-Contained Subframe Structure

FIG. 4 is a diagram illustrating an example of a self-contained subframe structure in a wireless communication system to which the present disclosure may be implemented.

In order to minimize data transmission latency in a TDD system, 5G new RAT considers a self-contained subframe structure as shown in FIG. 4.

In FIG. 4, a diagonal line area (symbol index 0) represents a DL control area, and a black area (symbol index 13) represents a UL control area. A non-shaded area may be used for DL data transmission or for UL data transmission. This structure is characterized in that DL transmission and UL transmission are performed sequentially in one subframe, and therefore transmission of DL data and reception of UL ACK/NACK may be performed in the same subframe. In conclusion, it is possible to reduce the time for retransmitting data upon occurrence of a data transmission error and thereby minimize the latency of final data transmission.

In this self-contained subframe structure, a time gap is necessary for a base station or a UE to switch from a transmission mode to a reception mode or to switch from the reception mode to the transmission mode. To this end, some OFDM symbols at the point in time of switching from DL to UL in the self-contained subframe structure are configured as a guard period (GP).

Analog Beamforming

Since a wavelength is short in the Millimeter Wave (mmW) range, a plurality of antenna elements may be installed in the same size of area. That is, the wavelength in the 30 GHz frequency band is 1 cm, and thus 64 (8×8) antenna elements may be installed in a two-dimensional arrangement with a spacing of 0.5 lambda (that is, half the wavelength) in a 4×4 (4 by 4) cm panel. Therefore, in the mmW range, the coverage may be enhanced or the throughput may be increased by increasing a beamforming (BF) gain with a plurality of antenna elements.
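The 8×8-elements-in-4-cm figure above follows directly from the half-wavelength pitch. A short sketch of this back-of-the-envelope sizing (the function below is a hypothetical helper written for illustration, not part of any specification):

```python
C = 3.0e8  # speed of light, m/s

def panel_side_cm(freq_hz: float, elements_per_side: int,
                  spacing_in_lambda: float = 0.5) -> float:
    """Approximate side length of a square antenna panel in centimeters.

    Element pitch is spacing_in_lambda * wavelength; the panel spans
    elements_per_side pitches (a rough estimate, edge effects ignored).
    """
    wavelength_m = C / freq_hz
    return elements_per_side * spacing_in_lambda * wavelength_m * 100

# At 30 GHz the wavelength is 1 cm, so an 8x8 array at 0.5-lambda pitch
# fits in roughly a 4 cm x 4 cm panel, as stated above.
print(panel_side_cm(30e9, 8))  # -> 4.0
```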
In this case, if a transceiver unit (TXRU) is included for each antenna element so that transmission power and phase can be adjusted per element, independent beamforming for each frequency resource is possible. However, it is not cost-efficient to install a TXRU at each of about 100 antenna elements. Thus, a method is considered in which a plurality of antenna elements is mapped to one TXRU and the direction of the beam is adjusted with an analog phase shifter. Such an analog BF method is able to make only one beam direction over the entire frequency band, and there is a disadvantage that frequency-selective BF is not allowed.

A hybrid BF may be considered which is an intermediate between digital BF and analog BF, and which has B TXRUs, fewer than the Q antenna elements. In this case, although varying depending upon the method of connecting the B TXRUs and the Q antenna elements, the beam directions capable of being transmitted at the same time are restricted to be fewer than B.

Hereinafter, typical examples of a method of connecting TXRUs and antenna elements will be described with reference to the drawings.

FIG. 5 is an example of a transceiver unit model in a wireless communication system to which the present disclosure may be implemented.

A TXRU virtualization model represents a relationship between output signals from TXRUs and output signals from antenna elements. Depending on the relationship between antenna elements and TXRUs, the TXRU virtualization model may be classified as a TXRU virtualization model option-1: sub-array partition model, as shown in FIG. 5(a), or as a TXRU virtualization model option-2: full-connection model.

Referring to FIG. 5(a), in the sub-array partition model, the antenna elements are divided into multiple antenna element groups, and each TXRU may be connected to one of the multiple antenna element groups. In this case, each antenna element is connected to only one TXRU.

Referring to FIG. 5(b), in the full-connection model, signals from multiple TXRUs are combined and transmitted to a single antenna element (or arrangement of antenna elements). That is, this shows a method in which a TXRU is connected to all antenna elements. In this case, each antenna element is connected to all the TXRUs.

In FIG. 5, q represents a transmitted signal vector of M co-polarized antenna elements in one column, W represents a wideband TXRU virtualization weight vector (that is, a phase vector to be multiplied by an analog phase shifter), and x represents a signal vector of M_TXRU TXRUs. That is, the direction of analog beamforming is decided by W.

Herein, mapping of the antenna ports and TXRUs may be performed on a 1-to-1 or 1-to-many basis.

The TXRU-to-element mapping in FIG. 5 is merely an example, and the present disclosure is not limited thereto and may be equivalently applied even to mapping of TXRUs and antenna elements which can be implemented in a variety of hardware forms.

Channel State Information (CSI) Feedback

In most cellular systems including an LTE system, a UE receives a pilot signal (or a reference signal) for estimating a channel from a base station, calculates channel state information (CSI), and reports the CSI to the base station. The base station transmits a data signal based on the CSI information fed back from the UE. The CSI information fed back from the UE in the LTE system includes channel quality information (CQI), a precoding matrix index (PMI), and a rank indicator (RI).

CQI feedback is wireless channel quality information which is provided to the base station for the purpose (link adaptation purpose) of providing guidance as to which modulation & coding scheme (MCS) is to be applied when the base station transmits data. When the wireless channel quality between the base station and the UE is high, the UE may feed back a high CQI value and the base station may transmit data by applying a relatively high modulation order and a low channel coding rate. In the opposite case, the UE may feed back a low CQI value and the base station may transmit data by applying a relatively low modulation order and a high channel coding rate.

PMI feedback is preferred precoding matrix information which is provided to a base station in order to provide guidance as to which MIMO precoding scheme is to be applied when the base station has installed multiple antennas. A UE estimates a downlink MIMO channel between the base station and the UE from a pilot signal, and recommends, through PMI feedback, which MIMO precoding is desired to be applied by the base station. In the LTE system, only linear MIMO precoding capable of expressing the PMI configuration in a matrix form is considered. The base station and the UE share a codebook composed of a plurality of precoding matrices, and each MIMO precoding matrix in the codebook has a unique index. Accordingly, by feeding back an index corresponding to the most preferred MIMO precoding matrix in the codebook as PMI, the UE minimizes the amount of its feedback information. A PMI value is not necessarily composed of one index. For example, in the case where there are eight transmitter antenna ports in the LTE system, a final 8tx MIMO precoding matrix may be derived only when two indexes (first PMI & second PMI) are combined.

RI feedback is information on the number of preferred transmission layers, which is provided to the base station in order to provide guidance as to the number of the UE's preferred transmission layers when the base station and the UE have installed multiple antennas to thereby enable multi-layer transmission through spatial multiplexing. The RI and the PMI are very closely correlated to each other. This is because the base station is able to know which precoding needs to be applied to which layer depending on the number of transmission layers.
Regarding configuration of PMI/RI feedback, a PMI codebook may be configured with respect to single-layer transmission and then PMI may be defined for each layer and fed back, but this method has a disadvantage that the amount of PMI/RI feedback information increases remarkably in accordance with an increase in the number of transmission layers. Accordingly, in the LTE system, a PMI codebook is defined depending on the number of transmission layers. That is, for R-layer transmission, N Nt×R matrices are defined (herein, R represents the number of layers, Nt represents the number of transmitter antenna ports, and N represents the size of the codebook). Accordingly, in LTE, the size N of a PMI codebook is defined irrespective of the number of transmission layers. As a result, since PMI/RI is defined in this structure, the number of transmission layers (R) conforms to the rank value of the precoding matrix (Nt×R matrix), and, for this reason, the term "rank indicator (RI)" is used.

Unlike PMI/RI in the LTE system, PMI/RI described in the present disclosure is not restricted to mean an index value of a precoding matrix Nt×R and a rank value of the precoding matrix. PMI described in the present disclosure indicates information on a preferred MIMO precoder from among the MIMO precoders capable of being applied by a transmitter, and the form of the precoder is not limited to a linear precoder which is able to be expressed in a matrix form, unlike in the LTE system. In addition, RI described in the present disclosure has a wider meaning than RI in LTE and includes feedback information indicating the number of preferred transmission layers.
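As a rough illustration of the codebook-based PMI selection described above, the sketch below picks the codebook index that maximizes a log-det rate metric for a given channel estimate. The toy codebook and the capacity metric are assumptions made for illustration; they are not the LTE/NR codebooks or any standardized selection rule.

```python
import numpy as np

def select_pmi(H: np.ndarray, codebook: list, snr: float = 10.0) -> int:
    """Return the codebook index whose precoder maximizes a rate metric.

    H: Nr x Nt channel estimate; each codebook entry: Nt x R precoding matrix.
    """
    best_idx, best_rate = 0, -np.inf
    for i, W in enumerate(codebook):
        HW = H @ W                          # effective channel after precoding
        R = W.shape[1]                      # number of transmission layers
        # log-det achievable-rate metric for the effective MIMO channel
        rate = np.log2(np.linalg.det(
            np.eye(R) + (snr / R) * (HW.conj().T @ HW))).real
        if rate > best_rate:
            best_idx, best_rate = i, rate
    return best_idx

# Toy 2x2 channel with a rank-1 codebook of two beam directions.
H = np.array([[1.0, 0.2], [0.1, 0.9]])
codebook = [np.array([[1.0], [1.0]]) / np.sqrt(2),
            np.array([[1.0], [-1.0]]) / np.sqrt(2)]
print(select_pmi(H, codebook))  # index of the preferred precoder
```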
The CSI information may be obtained in all system frequency domains or in some of the frequency domains. In particular, in a broad bandwidth system, it may be useful to obtain CSI information on some frequency domains (e.g., a subband) preferred by each UE and then feed back the obtained CSI information.

In the LTE system, CSI feedback is performed via an UL channel, and, in general, periodic CSI feedback is performed via a physical uplink control channel (PUCCH) and aperiodic CSI feedback is performed via a physical uplink shared channel (PUSCH), which is a UL data channel. The aperiodic CSI feedback means transmitting feedback temporarily, only when a base station needs CSI feedback information, and the base station triggers the CSI feedback via a DL control channel such as a PDCCH/ePDCCH. Which information a UE needs to feed back in response to the triggering of CSI feedback is defined as a PUSCH CSI reporting mode, as shown in FIG. 8, and the PUSCH CSI reporting mode in which the UE needs to operate is informed to the UE in advance via a higher layer message.

Channel State Information (CSI)-Related Procedure

In the new radio (NR) system, a channel state information-reference signal (CSI-RS) is used for time/frequency tracking, CSI computation, layer 1 (L1)-reference signal received power (RSRP) computation, or mobility. Throughout the present disclosure, "A and/or B" may be interpreted as the same as "including at least one of A or B".

The CSI computation is related to CSI acquisition, and the L1-RSRP computation is related to beam management (BM). The CSI indicates all types of information indicative of a quality of a radio channel (or link) formed between a UE and an antenna port.

Hereinafter, the operation of a UE with respect to the CSI-related procedure will be described.

FIG. 6 is a flowchart illustrating an example of a CSI-related procedure.

To perform one of the above purposes of a CSI-RS, a terminal (e.g., a UE) receives CSI-related configuration information from a base station (e.g., a general node B (gNB)) through radio resource control (RRC) signaling (S610).

The CSI-related configuration information may include at least one of CSI interference management (IM) resource-related information, CSI measurement configuration-related information, CSI resource configuration-related information, CSI-RS resource-related information, or CSI report configuration-related information.

The CSI-IM resource-related information may include CSI-IM resource information, CSI-IM resource set information, etc. The CSI-IM resource set is identified by a CSI-IM resource set ID (identifier), and one resource set includes at least one CSI-IM resource. Each CSI-IM resource is identified by a CSI-IM resource ID.

The CSI resource configuration-related information defines a group including at least one of a non-zero power (NZP) CSI-RS resource set, a CSI-IM resource set, or a CSI-SSB resource set. That is, the CSI resource configuration-related information includes a CSI-RS resource set list, and the CSI-RS resource set list may include at least one of an NZP CSI-RS resource set list, a CSI-IM resource set list, or a CSI-SSB resource set list. The CSI resource configuration-related information may be expressed as the CSI-ResourceConfig IE.

The CSI-RS resource set is identified by a CSI-RS resource set ID, and one resource set includes at least one CSI-RS resource. Each CSI-RS resource is identified by a CSI-RS resource ID.

As shown in Table 4, parameters indicative of (or indicating) a purpose of a CSI-RS (e.g., the BM-related parameter repetition and the tracking-related parameter trs-Info) may be set for each NZP CSI-RS resource set.

Table 4 shows an example of the NZP-CSI-RS-ResourceSet IE.

TABLE 4
-- ASN1START
-- TAG-NZP-CSI-RS-RESOURCESET-START
NZP-CSI-RS-ResourceSet ::= SEQUENCE {
    nzp-CSI-ResourceSetId        NZP-CSI-RS-ResourceSetId,
    nzp-CSI-RS-Resources         SEQUENCE (SIZE (1..maxNrofNZP-CSI-RS-ResourcesPerSet)) OF NZP-CSI-RS-ResourceId,
    repetition                   ENUMERATED { on, off },
    aperiodicTriggeringOffset    INTEGER (0..4),
    trs-Info                     ENUMERATED { true },
    ...
}
-- TAG-NZP-CSI-RS-RESOURCESET-STOP
-- ASN1STOP

In Table 4, the parameter repetition is a parameter indicative of whether to repeatedly transmit the same beam, and indicates whether repetition is set to "ON" or "OFF" for each NZP CSI-RS resource set. The term "transmission (Tx) beam" used in the present disclosure may be interpreted as the same as a spatial domain transmission filter, and the term "reception (Rx) beam" used in the present disclosure may be interpreted as the same as a spatial domain reception filter.

For example, when the parameter repetition in Table 4 is set to "OFF", a UE does not assume that an NZP CSI-RS resource(s) in a resource set is transmitted with the same DL spatial domain transmission filter and the same nrofPorts in all symbols.

In addition, the parameter repetition, which is a higher layer parameter, corresponds to the L1 parameter "CSI-RS-ResourceRep".

The CSI report configuration-related information includes the parameter reportConfigType indicative of a time domain behavior and the parameter reportQuantity indicative of a CSI-related quantity to be reported. The time domain behavior may be periodic, aperiodic, or semi-persistent.

In addition, the CSI report configuration-related information may be represented as the CSI-ReportConfig IE, and Table 5 shows an example of the CSI-ReportConfig IE.
TABLE 5
-- ASN1START
-- TAG-CSI-RESOURCECONFIG-START
CSI-ReportConfig ::= SEQUENCE {
    reportConfigId                          CSI-ReportConfigId,
    carrier                                 ServCellIndex                   OPTIONAL, -- Need S
    resourcesForChannelMeasurement          CSI-ResourceConfigId,
    csi-IM-ResourcesForInterference         CSI-ResourceConfigId            OPTIONAL, -- Need R
    nzp-CSI-RS-ResourcesForInterference     CSI-ResourceConfigId            OPTIONAL, -- Need R
    reportConfigType                        CHOICE {
        periodic                            SEQUENCE {
            reportSlotConfig                CSI-ReportPeriodicityAndOffset,
            pucch-CSI-ResourceList          SEQUENCE (SIZE (1..maxNrofBWPs)) OF PUCCH-CSI-Resource
        },
        semiPersistentOnPUCCH               SEQUENCE {
            reportSlotConfig                CSI-ReportPeriodicityAndOffset,
            pucch-CSI-ResourceList          SEQUENCE (SIZE (1..maxNrofBWPs)) OF PUCCH-CSI-Resource
        },
        semiPersistentOnPUSCH               SEQUENCE {
            reportSlotConfig                ENUMERATED {sl5, sl10, sl20, sl40, sl80, sl160, sl320},
            reportSlotOffsetList            SEQUENCE (SIZE (1..maxNrofUL-Allocations)) OF INTEGER (0..32),
            p0alpha                         P0-PUSCH-AlphaSetId
        },
        aperiodic                           SEQUENCE {
            reportSlotOffsetList            SEQUENCE (SIZE (1..maxNrofUL-Allocations)) OF INTEGER (0..32)
        }
    },
    reportQuantity                          CHOICE {
        none                                NULL,
        cri-RI-PMI-CQI                      NULL,
        cri-RI-i1                           NULL,
        cri-RI-i1-CQI                       SEQUENCE {
            pdsch-BundleSizeForCSI          ENUMERATED {n2, n4}             OPTIONAL
        },
        cri-RI-CQI                          NULL,
        cri-RSRP                            NULL,
        ssb-Index-RSRP                      NULL,
        cri-RI-LI-PMI-CQI                   NULL
    },

In addition, the UE measures CSI based on the configuration information related to the CSI (S620). Measuring the CSI may include (1) receiving a CSI-RS by the UE (S621) and (2) computing CSI based on the received CSI-RS (S622).

A sequence for the CSI-RS is generated by Equation 2, and an initialization value of the pseudo-random sequence c(i) is defined by Equation 3.

r(m) = (1/√2) · (1 - 2·c(2m)) + j · (1/√2) · (1 - 2·c(2m+1))    [Equation 2]

c_init = (2^10 · (N_symb^slot · n_{s,f}^μ + l + 1) · (2·n_ID + 1) + n_ID) mod 2^31    [Equation 3]

In Equations 2 and 3, n_{s,f}^μ is the slot number within a radio frame, and the pseudo-random sequence generator is initialized with c_init at the start of each OFDM symbol. In addition, l indicates the OFDM symbol number in a slot, and n_ID indicates the higher-layer parameter scramblingID.
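Equations 2 and 3 can be exercised with a short sketch. The generator below assumes the standard length-31 Gold sequence construction used for 3GPP pseudo-random sequences (with offset N_C = 1600); it is an illustrative rendering of the equations written here, not a validated implementation.

```python
import math

def gold_sequence(c_init: int, length: int, nc: int = 1600) -> list:
    """Length-31 Gold sequence c(n); x2 is initialized from c_init bits."""
    x1 = [0] * (nc + length + 31)
    x2 = [0] * (nc + length + 31)
    x1[0] = 1
    for i in range(31):
        x2[i] = (c_init >> i) & 1
    for n in range(nc + length):
        x1[n + 31] = (x1[n + 3] + x1[n]) % 2
        x2[n + 31] = (x2[n + 3] + x2[n + 2] + x2[n + 1] + x2[n]) % 2
    return [(x1[n + nc] + x2[n + nc]) % 2 for n in range(length)]

def csi_rs_sequence(slot_number: int, symbol_l: int, n_id: int,
                    num_symbols_per_slot: int, length: int) -> list:
    """Equations 2 and 3: QPSK CSI-RS sequence r(m) for one OFDM symbol."""
    c_init = ((2 ** 10) * (num_symbols_per_slot * slot_number + symbol_l + 1)
              * (2 * n_id + 1) + n_id) % (2 ** 31)
    c = gold_sequence(c_init, 2 * length)
    s = 1 / math.sqrt(2)
    return [s * (1 - 2 * c[2 * m]) + 1j * s * (1 - 2 * c[2 * m + 1])
            for m in range(length)]

print(csi_rs_sequence(slot_number=0, symbol_l=0, n_id=0,
                      num_symbols_per_slot=14, length=4))
```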
In addition, regarding the CSI-RS, resource element (RE) mapping of the CSI-RS resources is performed in the time and frequency domains by the higher layer parameter CSI-RS-ResourceMapping.

Table 6 shows an example of the CSI-RS-ResourceMapping IE.

TABLE 6
-- ASN1START
-- TAG-CSI-RS-RESOURCEMAPPING-START
CSI-RS-ResourceMapping ::= SEQUENCE {
    frequencyDomainAllocation       CHOICE {
        row1                        BIT STRING (SIZE (4)),
        row2                        BIT STRING (SIZE (12)),
        row4                        BIT STRING (SIZE (3)),
        other                       BIT STRING (SIZE (6))
    },
    nrofPorts                       ENUMERATED {p1, p2, p4, p8, p12, p16, p24, p32},
    firstOFDMSymbolInTimeDomain     INTEGER (0..13),
    firstOFDMSymbolInTimeDomain2    INTEGER (2..12),
    cdm-Type                        ENUMERATED {noCDM, fd-CDM2, cdm4-FD2-TD2, cdm8-FD2-TD4},
    density                         CHOICE {
        dot5                        ENUMERATED {evenPRBs, oddPRBs},
        one                         NULL,
        three                       NULL,
        spare                       NULL
    },
    freqBand                        CSI-FrequencyOccupation,
    ...
}

In Table 6, density (D) indicates the density of CSI-RS resources measured in RE/port/physical resource block (PRB), and nrofPorts indicates the number of antenna ports.

In addition, the UE reports the measured CSI to the base station (S630).

Herein, when the quantity in CSI-ReportConfig of Table 5 is set to "none (or No report)", the UE may skip the reporting. However, even when the quantity is set to "none (or No report)", the UE may report the measured CSI to the base station. The case where the quantity is set to "none" is when an aperiodic TRS is triggered or when repetition is set. Herein, it may be defined such that reporting by the UE is omitted only when repetition is set to "ON".

To put it briefly, depending on whether repetition is set to "ON" or "OFF", a CSI report may indicate any one of "No report", "SSB Resource Indicator (SSBRI) and L1-RSRP", and "CSI-RS Resource Indicator (CRI) and L1-RSRP".

Alternatively, it may be defined such that a CSI report indicative of "SSBRI and L1-RSRP" or "CRI and L1-RSRP" is transmitted when repetition is set to "OFF", and a CSI report indicative of "No report", "SSBRI and L1-RSRP", or "CRI and L1-RSRP" is transmitted when repetition is set to "ON".

CSI Measurement and Reporting Procedure

The NR system supports more flexible and dynamic CSI measurement and reporting. The CSI measurement may include receiving a CSI-RS and acquiring CSI by computing the received CSI-RS.

As time domain behaviors of CSI measurement and reporting, aperiodic/semi-persistent/periodic channel measurement (CM) and interference measurement (IM) are supported. To configure CSI-IM, four-port NZP CSI-RS RE patterns are used.

The CSI-IM-based IMR of NR has a design similar to the CSI-IM of LTE and is configured independently of the ZP CSI-RS resources for PDSCH rate matching. In addition, each port in the NZP CSI-RS-based IMR emulates an interference layer having (a desirable channel and) a pre-coded NZP CSI-RS. This is about intra-cell interference measurement of a multi-user case, and it primarily targets MU interference.

At each port of the configured NZP CSI-RS-based IMR, the base station transmits the pre-coded NZP CSI-RS to the UE. The UE assumes a channel/interference layer for each port in a resource set, and measures interference.

If there is no PMI or RI feedback for a channel, a plurality of resources are configured in a set, and the base station or network indicates, through DCI, a subset of the NZP CSI-RS resources for channel/interference measurement.

Resource setting and resource setting configuration will be described in more detail.

Resource Setting

Each CSI resource setting "CSI-ResourceConfig" includes a configuration of S ≥ 1 CSI resource sets (which is given by the higher layer parameter "csi-RS-ResourceSetList"). Herein, a CSI resource setting corresponds to the CSI-RS-resourcesetlist. Herein, S represents the number of configured CSI-RS resource sets. Herein, the configuration of S ≥ 1 CSI resource sets includes each CSI resource set including CSI-RS resources (composed of NZP CSI-RS or CSI-IM) and an SS/PBCH block (SSB) resource used for L1-RSRP computation.

Each CSI resource setting is positioned at a DL bandwidth part (BWP) identified by the higher layer parameter bwp-id. In addition, all CSI resource settings linked to a CSI reporting setting have the same DL BWP.

In a CSI resource setting included in the CSI-ResourceConfig IE, the time domain behavior of a CSI-RS resource may be indicated by the higher layer parameter resourceType and may be configured to be aperiodic, periodic, or semi-persistent. The number S of CSI-RS resource sets configured for periodic and semi-persistent CSI resource settings is restricted to "1". The periodicity and slot offset configured for periodic and semi-persistent CSI resource settings are given in the numerology of the related DL BWP, as given by bwp-id.

When the UE is configured with a plurality of CSI-ResourceConfig including the same NZP CSI-RS resource ID, the same time domain behavior is configured for the CSI-ResourceConfig. When the UE is configured with a plurality of CSI-ResourceConfig having the same CSI-IM resource ID, the same time domain behavior is configured for the CSI-ResourceConfig.
Then, one or more CSI resource settings for channel measurement (CM) and interference measurement (IM) are configured through higher layer signaling:
- A CSI-IM resource for interference measurement.
- An NZP CSI-RS resource for interference measurement.
- An NZP CSI-RS resource for channel measurement.

That is, a channel measurement resource (CMR) may be an NZP CSI-RS for CSI acquisition, and an interference measurement resource (IMR) may be a CSI-IM or an NZP CSI-RS for IM. Herein, the CSI-IM (or a ZP CSI-RS for IM) is primarily used for inter-cell interference measurement. In addition, an NZP CSI-RS for IM is primarily used for intra-cell interference measurement from multi-user.

The UE may assume that a CSI-RS resource(s) for channel measurement and a CSI-IM/NZP CSI-RS resource(s) for interference measurement configured for one CSI reporting are "QCL-TypeD" for each resource.

Resource Setting Configuration

As described above, a resource setting may represent a resource set list.

Regarding aperiodic CSI, each trigger state configured using the higher layer parameter "CSI-AperiodicTriggerState" is associated with one or multiple CSI-ReportConfig, where each CSI-ReportConfig is linked to a periodic, semi-persistent, or aperiodic resource setting.

One reporting setting may be connected to three resource settings at maximum, as summarized in the sketch after the lists below.
- When one resource setting is configured, the resource setting (given by the higher layer parameter resourcesForChannelMeasurement) is about channel measurement for L1-RSRP computation.
- When two resource settings are configured, the first resource setting (given by the higher layer parameter resourcesForChannelMeasurement) is for channel measurement and the second resource setting (given by csi-IM-ResourcesForInterference or nzp-CSI-RS-ResourcesForInterference) is for CSI-IM or for interference measurement performed on an NZP CSI-RS.
- When three resource settings are configured, the first resource setting (given by resourcesForChannelMeasurement) is for channel measurement, the second resource setting (given by csi-IM-ResourcesForInterference) is for CSI-IM-based interference measurement, and the third resource setting (given by nzp-CSI-RS-ResourcesForInterference) is for NZP CSI-RS-based interference measurement.

Regarding semi-persistent or periodic CSI, each CSI-ReportConfig is linked to a periodic or semi-persistent resource setting.
- When one resource setting (given by resourcesForChannelMeasurement) is configured, the resource setting is about channel measurement for L1-RSRP computation.
- When two resource settings are configured, the first resource setting (given by resourcesForChannelMeasurement) is for channel measurement, and the second resource setting (given by the higher layer parameter "csi-IM-ResourcesForInterference") is used for interference measurement performed on CSI-IM.
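The mapping from the number of configured resource settings to their roles, for the aperiodic case in the first list above, can be summarized in a small sketch; the dictionary keys are descriptive labels chosen here for illustration, not standardized parameter names.

```python
def resource_setting_roles(settings: list) -> dict:
    """Assign roles to 1..3 CSI resource settings for aperiodic CSI,
    following the ordering described above (channel measurement first,
    then CSI-IM-based, then NZP-CSI-RS-based interference measurement)."""
    if len(settings) == 1:
        return {"channel_measurement": settings[0]}
    if len(settings) == 2:
        return {"channel_measurement": settings[0],
                "interference_measurement": settings[1]}  # CSI-IM or NZP CSI-RS
    if len(settings) == 3:
        return {"channel_measurement": settings[0],
                "csi_im_interference": settings[1],
                "nzp_csi_rs_interference": settings[2]}
    raise ValueError("one reporting setting links to at most three resource settings")

print(resource_setting_roles(["resourcesForChannelMeasurement",
                              "csi-IM-ResourcesForInterference",
                              "nzp-CSI-RS-ResourcesForInterference"]))
```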
CSI computation regarding CSI measurement will be described in more detail.

If interference measurement is performed on CSI-IM, each CSI-RS resource for channel measurement is associated with a CSI-IM resource in the corresponding resource set by the ordering of the CSI-RS resources and CSI-IM resources. The number of CSI-RS resources for channel measurement is the same as the number of CSI-IM resources.

In addition, when interference measurement is performed on an NZP CSI-RS, the UE is not expected to be configured with more than one NZP CSI-RS resource in the associated resource set within the resource setting for channel measurement.

A UE configured with the higher layer parameter nzp-CSI-RS-ResourcesForInterference is not expected to be configured with more than 18 NZP CSI-RS ports in an NZP CSI-RS resource set.

For CSI measurement, the UE assumes the following.
- Each NZP CSI-RS port configured for interference measurement corresponds to an interference transmission layer.
- Every interference transmission layer of the NZP CSI-RS ports for interference measurement takes an energy per resource element (EPRE) ratio into account.
- A different interference signal is assumed on the RE(s) of an NZP CSI-RS resource for channel measurement, an NZP CSI-RS resource for interference measurement, or a CSI-IM resource for interference measurement.

A CSI reporting procedure will be described in more detail.

For CSI reporting, the time and frequency resources available for a UE are controlled by a base station. CSI may include at least one of a channel quality indicator (CQI), a precoding matrix indicator (PMI), a CSI-RS resource indicator (CRI), an SS/PBCH block resource indicator (SSBRI), a layer indicator (LI), a rank indicator (RI), or L1-RSRP.

Regarding the CQI, the PMI, the CRI, the SSBRI, the LI, the RI, and the L1-RSRP, the UE may be configured by a higher layer with N ≥ 1 CSI-ReportConfig reporting settings, M ≥ 1 CSI-ResourceConfig resource settings, and a list of one or two trigger states (provided by aperiodicTriggerStateList and semiPersistentOnPUSCH-TriggerStateList). In the aperiodicTriggerStateList, each trigger state includes a channel and a list of associated CSI-ReportConfigs selectively indicating resource set IDs for interference. In the semiPersistentOnPUSCH-TriggerStateList, each trigger state includes one associated CSI-ReportConfig.

In addition, the time domain behavior of CSI reporting supports periodic, semi-persistent, and aperiodic CSI reporting. Hereinafter, periodic, semi-persistent, and aperiodic CSI reporting will be described.

The periodic CSI reporting is performed on a short PUCCH and a long PUCCH. The periodicity and slot offset of the periodic CSI reporting may be configured by RRC; refer to the CSI-ReportConfig IE.

Then, SP CSI reporting is performed on a short PUCCH, a long PUCCH, or a PUSCH. In the case of SP CSI on a short/long PUCCH, the periodicity and slot offset are configured by RRC, and CSI reporting is activated/deactivated by an additional MAC CE.

In the case of SP CSI on a PUSCH, the periodicity of the SP CSI reporting is configured by RRC, but the slot offset thereof is not configured by RRC, and the SP CSI reporting is activated/deactivated by DCI (format 0_1). The first CSI reporting timing follows a PUSCH time domain allocation value indicated by DCI, and subsequent CSI reporting timings follow the periodicity configured by RRC. For SP CSI reporting on a PUSCH, a separate RNTI (SP-CSI C-RNTI) is used.

DCI format 0_1 may include a CSI request field and may activate/deactivate a specific configured SP-CSI trigger state. In addition, SP CSI reporting is activated/deactivated identically or similarly to the mechanism of data transmission on an SPS PUSCH.

Next, aperiodic CSI reporting is performed on a PUSCH and is triggered by DCI. In the case of AP CSI having an AP CSI-RS, the AP CSI-RS timing is configured by RRC. Herein, the timing of AP CSI reporting is dynamically controlled by DCI.

A reporting method (e.g., transmitting in order of RI, WB PMI/CQI, and SB PMI/CQI) by which CSI is divided and reported in a plurality of reporting instances, which is applied for PUCCH-based CSI reporting in LTE, is not applied in NR.
Instead, NR restricts configuring specific CSI reporting on a short/long PUCCH, and a CSI omission rule is defined. Regarding the AP CSI reporting timing, the PUSCH symbol/slot location is dynamically indicated by DCI. In addition, candidate slot offsets are configured by RRC. Regarding CSI reporting, a slot offset (Y) is configured for each reporting setting. Regarding the UL-SCH, a slot offset K2 is configured separately.

Two CSI latency classes (a low latency class and a high latency class) are defined in terms of CSI computation complexity. The low latency CSI is WB CSI that uses up to a 4-port Type-I codebook or up to 4-port non-PMI feedback CSI. The high latency CSI is CSI other than the low latency CSI. Regarding a normal UE, (Z, Z′) is defined in units of OFDM symbols. Z represents the minimum CSI processing time after receiving the CSI-triggering DCI and before performing CSI reporting. Z′ represents the minimum CSI processing time after receiving the CSI-RS for a channel/interference and before performing CSI reporting. Additionally, the UE reports the number of CSIs which can be calculated at the same time.

A-CSI or AP CSI used in the present specification indicates aperiodic CSI, which is CSI reported aperiodically by the UE. Also, the terms CSI report and CSI reporting used in the present specification may be regarded to have the same meaning.

To inform the eNB of the UE capability for A-CSI computation or calculation time, the UE reports a set of supported Z values and the CSI configuration which may be supported for each Z value to the eNB. Here, Z is defined as the minimum required number of symbols for CSI computation for a given CSI configuration. More specifically, Z refers to the minimum amount of time required for the calculations related to AP CSI processing, such as the decoding time, channel measurement, CSI calculation, and TX preparation.

A CSI configuration includes information indicating wideband (WB)-only CSI or sub-band (SB) and WB CSI; information about the maximum number of CSI-RS ports; and information about a type 1 codebook or a type 2 codebook. When the UE supports a plurality of numerologies, the information about CSI may be reported for each numerology.

When an A-CSI report is triggered in slot n on the PUSCH, the UE drops the A-CSI report in the following cases (a sketch of this check follows below):
- The case where the time gap between the last symbol of the PDCCH and the start symbol of the PUSCH in slot n is smaller than the reported value of Z with respect to a given CSI configuration; and
- The case where an AP CSI-RS resource is transmitted in slot n, and the time gap between the last symbol of the CSI-RS resource and the start symbol of the PUSCH is smaller than the reported value of Z with respect to a given CSI configuration.

And those symbols between the Z symbols before the start symbol of the PUSCH and the start symbol of the PUSCH are not valid as (CSI) reference resources.
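The two drop conditions above reduce to simple symbol-gap comparisons against the reported Z. A minimal sketch, assuming all timestamps are expressed as symbol indices on one common timeline:

```python
from typing import Optional

def should_drop_a_csi(pdcch_last_sym: int, pusch_start_sym: int, z: int,
                      csi_rs_last_sym: Optional[int] = None) -> bool:
    """Drop an A-CSI report when the time left for CSI processing is below Z.

    z is the UE-reported minimum number of symbols needed for the given
    CSI configuration; csi_rs_last_sym applies only when an AP CSI-RS
    resource is transmitted for this report.
    """
    if pusch_start_sym - pdcch_last_sym < z:        # gap after triggering DCI
        return True
    if csi_rs_last_sym is not None and pusch_start_sym - csi_rs_last_sym < z:
        return True                                  # gap after the AP CSI-RS
    return False

# DCI ends at symbol 10, PUSCH starts at symbol 20, Z = 14 -> report dropped.
print(should_drop_a_csi(10, 20, 14))  # True
```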
In what follows, an A-CSI report trigger and a CSI report related thereto will be described.

When the eNB triggers an A-CSI report through downlink control information (DCI) transmission in slot n, the UE operates as follows. The A-CSI is transmitted by the UE through the PUSCH allocated as a resource by the DCI. The transmission timing of the PUSCH is indicated by a specific field (which is defined as a Y value) of the DCI. More specifically, the PUSCH is transmitted in the (n+Y)-th slot (slot n+Y) with reference to slot n, which corresponds to the trigger time of the A-CSI report.

For example, when the DCI field for the Y value is defined by 2 bits, the Y values for 00, 01, 10, and 11 are defined respectively by RRC signaling and, more specifically, are defined within a report setting defined through RRC signaling. The report setting may also be expressed as a reporting setting or CSI-ReportConfig. An A-CSI report trigger may trigger one or more specific report settings, and the values 00, 01, 10, and 11 of the DCI field are defined according to the Y values defined within the triggered report setting.

As described above, when the time gap or timing gap between the last symbol of the PDCCH and the start symbol of the PUSCH is smaller than the Z value corresponding to the CSI configuration of the triggered A-CSI, the UE transmits the triggered A-CSI to the eNB without dropping or updating the A-CSI. Since the amount of time allocated for the actual calculation is smaller than the minimum amount of time Z required for calculation of the A-CSI, the UE is unable to calculate the A-CSI. As a result, the UE does not drop or update the triggered CSI.

When a Non-Zero Power (NZP) CSI-RS or Zero Power (ZP) CSI-RS used for channel estimation or interference estimation of the triggered A-CSI is an aperiodic CSI-RS, the UE estimates the channel or interference through one-shot measurement from the corresponding RS. In other words, the UE estimates the channel or interference by using the corresponding RS (NZP CSI-RS or ZP CSI-RS) only. At this time, if the time gap between the very last symbol of the CSI-RS resource and the start symbol of the PUSCH is smaller than the Z value corresponding to the CSI configuration of the triggered A-CSI, in the same way as the UE's operation described above, the UE transmits the corresponding A-CSI to the eNB without dropping or updating the corresponding A-CSI.

And when the UE calculates CSI, the UE does so by assuming reception of data for a specific frequency and/or time resource area, which is called a CSI reference resource. The CSI reference resource may be simply referred to as a reference resource. Since the UE starts CSI calculation from the CSI reference resource time, the UE may calculate CSI only when an amount of time as long as Z symbols from the CSI reference resource time is secured. Therefore, the reference resource time has to be defined at least z symbols (or z+1 symbols) before the CSI report time. To this end, when the validity of a reference resource is checked, symbols or slots at least z symbols (or z+1 symbols) before the CSI report time are determined to be valid, but invalid otherwise.

Here, the (CSI) reference resource is defined in units of slots. Also, the slot whose number is less than or equal to n-n_CQI_REF (namely, slot n-n_CQI_REF) is determined as the (CSI) reference resource with reference to the slot for CSI reporting (for example, slot n). The statement above, which says that 'symbols or slots at least z symbols (or z+1 symbols) before the CSI report time are determined to be valid, but invalid otherwise', may indicate that n_CQI_REF is configured by Eq. 4 below.

n_CQI_REF = ⌊ z / N_symb^slot ⌋ + 1    [Eq. 4]

In Eq. 4, N_symb^slot denotes the number of OFDM symbols comprising one slot, and floor, denoted by the symbol ⌊·⌋, discards the digits after the decimal point.

The UE sets the most recent slot which satisfies the validity condition for a reference resource, among the slots whose numbers are less than or equal to n-n_CQI_REF, as the reference resource. Similarly, the UE may simply set the slot n-n_CQI_REF as the reference resource.
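Eq. 4 and the reference-resource selection can be illustrated as follows; validity checking of the candidate slot is deliberately omitted in this sketch.

```python
import math

def n_cqi_ref(z_symbols: int, symbols_per_slot: int = 14) -> int:
    """Eq. 4: slot offset of the CSI reference resource derived from z."""
    return math.floor(z_symbols / symbols_per_slot) + 1

def reference_resource_slot(report_slot_n: int, z_symbols: int,
                            symbols_per_slot: int = 14) -> int:
    """Reference resource is slot n - n_CQI_REF (or the most recent valid
    slot at or before it; the validity check is omitted here)."""
    return report_slot_n - n_cqi_ref(z_symbols, symbols_per_slot)

# z = 20 symbols, 14-symbol slots: n_CQI_REF = 1 + 1 = 2 -> slot n - 2.
print(reference_resource_slot(report_slot_n=100, z_symbols=20))  # 98
```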
And the time offset of the CSI reference resource may be determined on the basis of the proposal 3 to be described later, and detailed descriptions about how the time offset of the CSI reference resource is determined will be given in the proposal 3.

The A-CSI report trigger field included in the DCI may be interpreted as follows.

When an eNB instructs a UE to perform an A-CSI trigger for a plurality of report settings simultaneously, and the definition of the Y value is different for each report setting, a problem occurs as described below, and UE operations to solve the problem through various methods will be described.

For example, suppose a report setting 1 is defined as Y={0, 1, 2, 3}, and a report setting 2 is defined as Y={1, 2, 3, 4}. In this case, an ambiguity occurs as to how the (2-bit) DCI field indicating the Y value has to be interpreted. Therefore, to remove the ambiguity, it is proposed that the UE operate according to the following methods.

(Method 1) The UE newly generates Y′ as an intersection between the two different Ys and interprets the DCI field according to the Y′ value. In other words, in the example above, the intersection of the two different Ys is {1, 2, 3}, and the UE interprets 00, 01, 10, and 11 of the DCI field as 1, 2, 3, and 3, respectively. If the intersection between two different Ys is {1}, the UE interprets 00, 01, 10, and 11 as 1, 1, 1, and 1, respectively. If the intersection between two different Ys is {1, 2}, the UE interprets 00, 01, 10, and 11 as 1, 2, 2, and 2, respectively.

In the example above, when the number of elements belonging to the intersection between the two different Ys is smaller than the number of states (for example, 00, 01, 10, and 11) of the DCI field, the remaining states are defined by repeating the last intersection value. However, differently from the definition above, the remaining states may be defined as reserved.

(Method 2) The UE interprets the DCI field according to the Y value defined in one of the plurality of report settings. For example, among the plurality of report settings, the UE interprets the DCI field by using the Y value for the report setting having a low report setting index. Similarly, among the plurality of report settings, the UE interprets the DCI field by using the Y value for the report setting having a low index for a component carrier (CC). The UE puts priorities between the report setting index and the CC index and determines the Y value for a report setting by using the CC index. If the CC index is the same, the UE may then determine the Y value according to the report setting index. Or, as described above, the priority may be reversed (a high priority is set for the report setting index).

(Method 3) The UE may expect that a plurality of report settings always have the same Y value. In other words, the eNB configures the report settings 1 and 2 to have the same Y value through RRC signaling. For example, the eNB may configure the report setting 1 by using Y={1, 2, 3, 4} and the report setting 2 by using Y={1, 2, 3, 4}.

(Method 4) The UE determines the time offset of aperiodic CSI reporting by using the larger value of two different Y values. For example, the report setting 1 may be defined by Y1={0, 1, 2, 3}, and the report setting 2 may be defined by Y2={1, 2, 3, 4}. When the DCI field for Y (for example, 2 bits) is '00', Y1=0 and Y2=1; therefore, the Y value is determined to be '1', which is the larger of the two values.
When the DCI field for Y (for example, 2 bits) is '01', Y1=1 and Y2=2; therefore, the Y value is determined to be '2', which is the larger of the two values. The Y value may be defined in the same way when the DCI field value is '10' and '11', and the Y value for the DCI field values of '10' and '11' is determined as '3' and '4', respectively. If three Y values are defined, the largest one among the three values may be determined as the time offset by applying the same method as described above.

As described above, the eNB may instruct the UE to perform an AP CSI reporting trigger through one DCI and determine the time offset of aperiodic CSI reporting according to the methods described above (Methods 1 to 4) by using the Y values defined for the respective N triggered AP CSI reporting settings.

In addition, the eNB may indicate the data transmission time through the PUSCH while simultaneously performing an AP CSI reporting trigger through the same DCI. At this time, the data transmission time through the PUSCH is defined as a 'K2' value, and a plurality of candidate values are set to the UE through upper layer signaling in advance. One of the candidate values is determined (or selected) as the final K2 value through the DCI field (which is also called a 'timing offset field'). Also, the DCI field for selecting the K2 value and the DCI field for selecting the Y value are not defined by separate fields but are defined by the same DCI field. When an AP CSI reporting trigger occurs, the UE uses the corresponding DCI field to select the Y value, and when scheduling of PUSCH data occurs, the corresponding DCI field is used to select the K2 value.

When PUSCH data scheduling occurs while an AP CSI reporting trigger is performed simultaneously through the DCI, an ambiguity arises as to whether to define each value of the timing offset field as a candidate for the Y value or a candidate for the K2 value. To resolve the ambiguity, it is possible to directly extend and apply the aforementioned methods (Methods 1 to 4). In other words, the proposed methods (Methods 1 to 4) above are related to how to define the value of the timing offset field when a plurality of Y candidate sets are given, and Methods 1 to 4 may also be applied to the K2 candidate set by treating the K2 candidate set as one Y candidate set.

For example, Method 4 may be extended and applied as described below. The UE defines the timing offset field by using the largest of the different Y and K2 values. For example, suppose a report setting 1 is defined as Y1={0, 1, 2, 3}, a report setting 2 is defined as Y2={1, 2, 3, 4}, and K2={3, 4, 5, 6}. If the DCI field of the timing offset is '00', Y1=0, Y2=1, and K2=3; therefore, the timing offset field is determined by the largest value, '3'. If the DCI field is '01', Y1=1, Y2=2, and K2=4; therefore, the timing offset field is determined by the largest value, '4'. The DCI field values for '10' and '11' may be determined in the same manner, and in this case, the DCI field values for '10' and '11' are determined as '5' and '6', respectively.

The UE may multiplex the PUSCH data and CSI in the slot (n+timing offset) with respect to the slot n which has received the DCI according to an indicated DCI value and report (or transmit) the multiplexed data and CSI to the eNB simultaneously.
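Methods 1 and 4 above, including the extension of Method 4 to a K2 candidate set, can be expressed compactly; the function names below are chosen here for illustration only.

```python
def method1_intersection(y_sets: list, n_bits: int = 2) -> list:
    """Method 1: interpret the DCI field over the sorted intersection of the
    Y candidate sets, repeating the last value to fill remaining states."""
    common = sorted(set.intersection(*map(set, y_sets)))
    states = 2 ** n_bits
    return [common[min(i, len(common) - 1)] for i in range(states)]

def method4_max(candidate_sets: list) -> list:
    """Method 4: per DCI state, take the largest value across the sets
    (a K2 candidate set may be included as just another set)."""
    return [max(vals) for vals in zip(*candidate_sets)]

y1, y2, k2 = [0, 1, 2, 3], [1, 2, 3, 4], [3, 4, 5, 6]
print(method1_intersection([y1, y2]))  # [1, 2, 3, 3] for states 00,01,10,11
print(method4_max([y1, y2]))           # [1, 2, 3, 4]
print(method4_max([y1, y2, k2]))       # [3, 4, 5, 6], the extended example above
```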
Now, other methods for interpreting the A-CSI report trigger-related DCI field, in addition to the aforementioned methods (Methods 1 to 4), will be described.

(Method 5) In another method, the UE constructs a union set by combining the candidate sets of the different Ys and the K2 candidate set and defines the values of an n-bit timing offset DCI field as the values ranging from the largest element to the 2^n-th largest element of the union set. The UE multiplexes the PUSCH data and CSI in the slot (n+timing offset) with respect to the slot n which has received the DCI according to an indicated DCI value and reports (or transmits) the multiplexed data and CSI to the eNB simultaneously.

(Method 6) In yet another method, after constructing one set from the candidate sets of the Ys through Methods 1 to 4, a union set is constructed by combining that Y candidate set and the candidate set of K2. And the values of the n-bit timing offset DCI field are defined as the values ranging from the largest element to the 2^n-th largest element of the union set.

(Method 7) Method 7 constructs one set from the candidate sets of the Ys through Methods 1 to 4 and defines the i-th value of the timing offset DCI field by using the sum of the i-th element of that Y candidate set and the i-th element of the K2 candidate set. For example, when the Y candidate set is {1, 2, 3, 4} and the K2 candidate set is {5, 6, 7, 8}, the respective values of the 2-bit timing offset DCI field for 00, 01, 10, and 11 may be defined as 1+5=6, 2+6=8, 3+7=10, and 4+8=12.

(Method 8) Method 8 constructs one set from the candidate sets of the Ys through Methods 1 to 4 and defines the i-th value of the timing offset DCI field as the i-th element of that Y candidate set while ignoring the candidate set of K2.
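Similarly, Methods 5 and 7 can be sketched as follows (Method 6 differs from Method 5 only in how the Y set is pre-combined, and Method 8 simply ignores K2); this is an illustrative reading of the "largest to 2^n-th largest" ordering described above.

```python
def method5_union(y_sets: list, k2_set: list, n_bits: int = 2) -> list:
    """Method 5: take the union of all Y and K2 candidates and keep the
    2**n_bits largest elements, ordered from the largest downward."""
    union = sorted(set().union(*y_sets, k2_set), reverse=True)
    return union[: 2 ** n_bits]

def method7_sum(y_set: list, k2_set: list) -> list:
    """Method 7: the i-th DCI state maps to y_set[i] + k2_set[i]."""
    return [y + k2 for y, k2 in zip(y_set, k2_set)]

print(method5_union([[0, 1, 2, 3], [1, 2, 3, 4]], [3, 4, 5, 6]))  # [6, 5, 4, 3]
print(method7_sum([1, 2, 3, 4], [5, 6, 7, 8]))  # [6, 8, 10, 12], as above
```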
Next, a relaxation method for AP CSI calculation will be described. The UE reports a Z value, as defined below, to the eNB as one of the UE capabilities for AP CSI calculation. Assuming a CSI-only PUSCH (no HARQ ACK/NACK) for a given numerology and CSI complexity, Z is defined as the minimum required number of symbols covering the PDCCH detection/decoding time for receiving the DCI triggering a CSI report, the channel estimation time, and the CSI calculation time. For low complexity CSI, one Z value for a given numerology is defined as shown in Table 7 below. And for high complexity CSI, one Z value for a given numerology is also defined as shown in Table 7 below.

TABLE 7
CSI complexity           Units     15 kHz SCS   30 kHz SCS   60 kHz SCS   120 kHz SCS
Low complexity CSI       Symbols   Z1,1         Z1,2         Z1,3         Z1,4
High complexity CSI 1    Symbols   Z2,1         Z2,2         Z2,3         Z2,4
High complexity CSI 2    Symbols   ZN+1,1       ZN+1,2       ZN+1,3       ZN+1,4

As described above, Z is defined as a sum of the amount of time required for DCI decoding (which means the decoding time of the DCI holding the AP CSI trigger information), the amount of time required for channel estimation, and the amount of time required for CSI calculation. The eNB indicates a Y value according to the complexity of the CSI triggered with respect to the Z value (in other words, according to whether it is low complexity CSI or high complexity CSI).

If it is assumed that the DCI holding an AP CSI trigger (namely, the AP CSI triggering DCI) is transmitted in slot n, the UE reports the corresponding CSI to the eNB in slot (n + timing offset Y). If the time allocated to the UE for CSI calculation is insufficient for the UE's capability for AP CSI calculation, the UE, instead of updating (or calculating) CSI, transmits the most recently reported CSI or arbitrary CSI (or predefined, specific CSI, for example, CQI=0, PMI=0, and RI=1) to the eNB. FIG. 7 illustrates the aforementioned situation.

In other words, FIG. 7 illustrates the timing at which a periodic CSI-RS is received. More specifically, FIG. 7 illustrates a situation in which the most recent periodic (P) CSI-RS which has been received at or before the reference resource time exists within a time period T. In FIG. 7, the UE measures CSI through a periodic CSI-RS (P CSI-RS), and it may be noticed that the P CSI-RS and the CSI reference resource exist within the time T. In this case, within the time T, the UE performs all of the DCI decoding, channel estimation, and CSI calculation. Therefore, the UE compares T and Z, and if T < Z, does not calculate (or update) CSI but transmits the most recently reported CSI or arbitrary CSI. If T >= Z, the UE calculates CSI on the basis of the periodic CSI-RS and reports the calculated CSI to the eNB.

FIGS. 8 and 9 illustrate another example of the timing at which a periodic CSI-RS is received. Put differently, FIGS. 8 and 9 illustrate a situation in which the most recent P CSI-RS received at or before the reference resource time exists before the period T. That is, a P CSI-RS does not exist within the period T, but the P CSI-RS exists before the period T. In other words, referring to FIGS. 8 and 9, the UE has already performed channel measurement from a (periodic) CSI-RS before a CSI report trigger occurs. Therefore, in this case, the UE performs the DCI decoding and CSI calculation within the period T. The UE compares T and Z-(channel estimation time), and if T < Z-(channel estimation time), does not calculate (or update) CSI but transmits the most recently reported CSI or arbitrary CSI to the eNB. Here, the UE may report the channel estimation time to the eNB by using a separate capability. If T >= Z-(channel estimation time), the UE calculates CSI and reports the calculated CSI to the eNB. Here, Z-(channel estimation time) may be defined as a third variable Z′, and the UE may report Z and Z′ to the eNB, respectively.
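The decisions illustrated by FIG. 7 and FIGS. 8 and 9 can be condensed into one rule. The sketch below treats the channel-estimation time as a UE-reported capability, as the text suggests; it is a sketch of the described behavior, not a definitive implementation.

```python
def a_csi_action(t_symbols: int, z_symbols: int,
                 channel_already_measured: bool = False,
                 channel_est_symbols: int = 0) -> str:
    """Decide whether the UE computes fresh CSI or falls back to stale CSI.

    If the channel was already measured from an earlier periodic CSI-RS
    (the FIGS. 8 and 9 case), only Z minus the channel-estimation time
    must fit within T; otherwise all of Z must fit (the FIG. 7 case).
    """
    budget = (z_symbols - channel_est_symbols
              if channel_already_measured else z_symbols)
    if t_symbols < budget:
        return "send most recently reported (or arbitrary) CSI"
    return "calculate CSI and report it"

print(a_csi_action(t_symbols=10, z_symbols=14))            # stale CSI
print(a_csi_action(10, 14, channel_already_measured=True,
                   channel_est_symbols=6))                 # fresh CSI
```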
FIG. 10 illustrates one example of a method for measuring CSI by using an AP CSI-RS. First, the AP CSI-RS is defined to always exist within the time period T. In this case, within the time T, the UE performs all of the DCI decoding, channel estimation, and CSI calculation. Therefore, the UE compares T and Z, and if T < Z, does not calculate (or update) CSI but transmits the most recently reported CSI or arbitrary CSI. If T >= Z, the UE calculates CSI and reports the calculated CSI to the eNB.

FIG. 11 illustrates one example of another method for measuring CSI by using an AP CSI-RS. More specifically, FIG. 11 illustrates a situation in which the AP CSI-RS is transmitted long after the UE finishes decoding of the DCI. In this case, the UE has to perform all of the DCI decoding, channel estimation, and CSI calculation within the time period T. However, since the AP CSI-RS is transmitted long after the DCI decoding is finished, the UE is unable to perform channel measurement and CSI calculation during the portion of the time period T from when the DCI decoding is finished until the AP CSI-RS is transmitted. Therefore, the UE compares T and Z, and if T < Z, does not calculate (or update) CSI but may transmit the most recently reported CSI or arbitrary CSI to the eNB; however, even if T >= Z, the UE may be unable to calculate CSI and thus unable to report valid CSI to the eNB. Therefore, to make the method as shown in FIG. 11 effective, the eNB has to transmit the AP CSI-RS within the DCI decoding time after the last OFDM symbol of the triggering DCI.

Or the eNB has to transmit the AP CSI-RS at least Z-(decoding time) symbols before the first OFDM symbol on which the AP CSI is reported. The UE may report the decoding time to the eNB through a separate capability. Here, Z-(decoding time) may be defined as a third variable Z′, and the UE may report Z and Z′ to the eNB, respectively.

In other words, if T′, the time between the time at which the AP CSI-RS used for channel measurement or interference measurement is last received and the start time at which the CSI is reported, is smaller than Z′, the UE determines that the time for calculating CSI is not sufficient and does not calculate CSI. Therefore, the UE does not report valid CSI but reports a predefined dummy CSI value (for example, RI=1, PMI=1, and CQI=1) to the eNB.

Or, if T′ between the last OFDM symbol on which the AP CSI-RS is transmitted and the first OFDM symbol on which the AP-CSI is reported is smaller than Z-(decoding time), the UE does not calculate (or update) CSI but transmits the most recently reported CSI or arbitrary CSI to the eNB. And if T′ >= Z-(decoding time) and T < Z, the UE does not calculate (or update) CSI but transmits the most recently reported CSI or arbitrary CSI. If T′ >= Z-(decoding time) and T >= Z, the UE calculates CSI and reports the calculated CSI to the eNB. The UE may report the decoding time to the eNB through a separate capability.

Differently from the proposals to be described later, if Z′ is introduced, the Z in the proposals 2 and 3 may be replaced with the Z′. As described above, the Z indicates the minimum required time for all of the calculations related to AP CSI processing, such as the DCI decoding time, channel measurement, CSI calculation, and TX preparation. And the Z′ indicates the minimum required time for channel measurement, CSI calculation, and TX preparation. Therefore, it may be preferable to set the time provided for the UE, spanning from the last reception time of the CSI-RS used for channel measurement or interference measurement to the start time at which the CSI is transmitted, with reference to the Z′, which does not include the decoding time.

The proposals 2 and 3 below may be limited (or restricted) to the case where CSI is reported within a short time period after the A-CSI report triggering. For example, the proposals 2 and 3 to be described later may be applied only to the case of a small Y value such as Y=0 (or Y=1). If Y=0, it may be related to the operation for self-contained CSI feedback which is operated in one slot, including CSI report triggering, channel measurement, and up to CSI reporting. For the self-contained structure, the descriptions given above may be referenced. To this purpose, a reference resource is defined to be as close as possible to slot n, and the UE is made to measure the channel by using a CSI-RS within the time period between the CSI report triggering and the CSI reporting.

Or, even if Y is a non-zero, small value (for example, Y=1), since the eNB is intended to trigger CSI reporting and to receive a fresh (or new) CSI report within a short time period, a reference resource may be defined to be as close as possible to slot n, and the UE may be made to perform channel measurement by using a fresh CSI-RS close to the CSI reporting time. On the other hand, if Y is a large value, since it already takes a long time from the triggering time to the report time, the time at which the channel is measured from a CSI-RS does not cause a critical problem compared to the case where Y is small.
Returning to the case where Y is large: in this case, the proposal 3 to be described later is not applied, and the time offset of the reference resource is configured by one of the following options. First, the option 1 is described. When a P/SP/AP CSI-RS is used to calculate CSI for A-CSI reporting, the time offset of a CSI reference resource is derived from the Z value with respect to a given CSI latency and numerology as described below. In other words, n_CQI_ref is equal to ⌈Z/N_symb^slot⌉, or is the smallest value greater than or equal to ⌈Z/N_symb^slot⌉ such that slot n-n_CQI_ref corresponds to a valid downlink slot. The description above may be applied to P/SP CSI reporting in the same way. Next, the option 2 will be described. When a P/SP/AP CSI-RS is used to calculate CSI for A-CSI reporting, the time offset of a CSI reference resource is derived from the Z value with respect to a given CSI latency and numerology as described below. n_CQI_ref is equal to ⌈Z/N_symb^slot⌉+1, or is the smallest value greater than or equal to ⌈Z/N_symb^slot⌉+1 such that slot n-n_CQI_ref corresponds to a valid downlink slot. The description above may be applied to P/SP CSI reporting in the same way. In the case of the option 2, the reference resource never includes symbols located from 0 to Z symbols before the CSI report start time. According to the current standard, since channel measurement or interference measurement is not allowed to be performed after the reference resource, only the option 2 already satisfies the condition of the proposal 2. Next, particulars related to aperiodic CSI report timing and CSI relaxation will be described briefly. Candidates of the CSI calculation time Z are defined in Table 7 above. When only CSI is transmitted on the PUSCH, if A-CSI reporting is triggered on slot n, the UE does not have to update the CSI with respect to A-CSI reporting in the following cases:
- the case where M-L-N<Z for the given CSI complexity and numerology, and
- the case where an AP CSI-RS resource is transmitted on slot n for the given CSI complexity and numerology, and M-O-N<Z.
Here, L represents the last symbol of the PDCCH on slot n, M represents the start symbol of the PUSCH, and N represents the TA value (for example, TA=1.4 symbols) in units of symbols. And O represents the later symbol between the last symbol of the AP CSI-RS resource for a channel measurement resource (CMR) and the last symbol of the AP CSI-RS resource for an interference measurement resource (IMR). (A sketch of these update conditions is given below.) And the PUSCH timing offset for A-CSI reporting may be determined as follows. When the PUSCH is scheduled for only a single A-CSI report, the DCI field for the PUSCH timing offset is defined from the Y in the report setting. And when the PUSCH is scheduled for only a plurality of A-CSI reports, the DCI field for the PUSCH timing offset is defined as the entry-wise maximum among the Y values in the report settings. For example, when Y={1, 2, 3, 6} in a report setting 1 and Y={2, 3, 4, 5} in a report setting 2, Y may be defined as Y={2, 3, 4, 6}.
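As a minimal sketch of the two update conditions above (the helper name and the treatment of all quantities as plain symbol counts are illustrative assumptions, not part of the standard text):

```python
# Illustrative sketch: the two cases in which the UE need not update A-CSI
# when the report is carried on a CSI-only PUSCH triggered on slot n.

def acsi_update_required(M, L, N, Z, O=None):
    """M: PUSCH start symbol; L: last symbol of the triggering PDCCH;
    N: TA value in symbols (may be fractional, e.g. 1.4);
    O: later of the last CMR/IMR AP CSI-RS symbols, or None when no
    AP CSI-RS is transmitted on slot n; Z: required CSI calculation time."""
    if M - L - N < Z:
        return False          # not enough time after the triggering DCI
    if O is not None and M - O - N < Z:
        return False          # not enough time after the AP CSI-RS
    return True
```

Note that the TA value may be fractional (for example, TA=1.4 symbols), which is why the arguments are not restricted to integers.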
Other particulars defined in the standard will be described. The terms low complexity CSI and high complexity CSI may be replaced with low latency CSI and high latency CSI, respectively. Two CSI latency classes are supported for CSI computation capability. The low latency CSI class is defined as WB CSI including a maximum of four antenna ports, which may be applied only when a Type-I codebook or no PMI is configured. The high latency CSI class is defined as a superset of all of the CSI supported by the UE, and the descriptions given above are not applied to L1 RSRP. And when CSI is transmitted through the PUSCH, the start and length indicator value (SLIV) and the PUSCH mapping type are determined by pusch-symbolAllocation in the same way as for the PUSCH without CSI. The PUSCH slot offset when CSI is multiplexed with the UL-SCH on the PUSCH is determined solely by the K2 value indicated by pusch-symbolAllocation rather than by aperiodicReportSlotOffset. The descriptions given above are applied only to the case where CSI is multiplexed with data. Here, the numbers of candidate values for aperiodicReportSlotOffset and K2 are the same as each other. Particulars related to the A-CSI reporting will be further described. The conditions under which the UE does not need to update CSI for A-CSI reporting will be described again on the basis of the descriptions given above. First, an A-CSI report trigger with respect to a plurality of CSI will be described, keeping the A-CSI report trigger with respect to a single CSI in mind. FIG. 12 illustrates one example of an A-CSI report trigger for single CSI proposed by the present specification. More specifically, FIG. 12 illustrates an example of an A-CSI report trigger with respect to single CSI, where a periodic CSI-RS and a CSI reference resource exist within a time window T. In this case, the UE has to perform DCI decoding, channel estimation, CSI calculation, and Tx preparation within the time window T. Therefore, when T<Z, the UE does not need to update the CSI. FIG. 13 illustrates one example of an A-CSI report trigger for single CSI having a periodic CSI-RS proposed by the present specification. (Proposal 1) In the case of an A-CSI report trigger for single CSI, the UE does not update the CSI when T<Z. Here, T is the time duration between the reception time of the last OFDM symbol of the triggering DCI and the transmission time of the first OFDM symbol of the AP CSI reporting. Differently from FIG. 12, FIG. 13 illustrates the case in which the P CSI-RS and the reference resource come late in the time window T. In this case, even though T>Z, the UE is unable to complete the CSI calculation since it starts channel estimation too late. Therefore, to prevent such a case from happening, the UE has to perform channel/interference measurement on a ZP/NZP CSI-RS that is located at least Z symbols before the first OFDM symbol of the AP CSI report. (Proposal 2) The UE does not need to measure channel or interference through a ZP/NZP CSI-RS received from 0 to Z symbols before the transmission time of the first OFDM symbol of the AP CSI report. The time offset of the CSI reference resource has to be derived properly from Z so that it matches the proposal 2. FIGS. 14 and 15 illustrate examples of a method for determining the time offset of a CSI reference resource proposed by the present specification. More specifically, FIGS. 14 and 15 illustrate two options for determining the time offset where Z=5, N_symb^slot=14, and a CSI report starts at the 10th symbol of slot n. FIG. 14 illustrates one example of valid CSI-RS locations for the CSI reference resource and channel measurement when n_CQI_ref=⌈Z/N_symb^slot⌉. In FIG. 14, since the reference resource is slot n-1, the UE is unable to use a potential CSI-RS resource at symbol 1, 2, 3, or 4 of slot n for channel measurement. The UE measures the channel from a CSI-RS one or a few slots before the slot n.
However, this operation incurs too much delay between the channel measurement and the CSI report. As a result, self-contained A-CSI feedback, in which CSI triggering, channel measurement, and CSI reporting are performed in the same single slot, may not be supported. To solve the aforementioned problem, as shown in FIG. 15, n_CQI_ref may be defined as ⌊Z/N_symb^slot⌋. In other words, FIG. 15 illustrates another example of valid CSI-RS locations for the CSI reference resource and channel measurement when n_CQI_ref=⌊Z/N_symb^slot⌋. In FIG. 15, the reference resource is slot n, and the slot n includes a few symbols beyond Z. As a result, when the CSI-RS is transmitted on the 1st, 2nd, 3rd, or 4th symbol of the slot n, the UE may measure the channel by using the transmitted CSI-RS and calculate the CSI from the new channel measurement. (Proposal 3) When a P/SP/AP CSI-RS is used for CSI calculation for A-CSI reporting, the time offset of the CSI reference resource is derived from the Z value with respect to the CSI latency and numerology as given below. Here, n_CQI_ref is the smallest value greater than or equal to ⌊Z/N_symb^slot⌋ such that slot n-n_CQI_ref corresponds to a valid downlink slot. Here, a specific slot may be regarded as a valid downlink slot when the following conditions are satisfied:
- when the specific slot includes a downlink or flexible symbol set on at least one upper layer,
- when the specific slot is not located within a measurement gap set for the UE,
- when the active DL BWP in the slot is the same as the DL BWP for which the CSI report is conducted, and
- when at least one CSI-RS transmission occasion for channel measurement and a CSI-RS for interference measurement and/or CSI-IM occasion is located in the DRS active time no later than the CSI reference resource for which the CSI report is conducted.
The description above may be applied to the P/SP CSI reporting in the same way. (The sketch below works the ceiling and floor options through the numerical example of FIGS. 14 and 15.)
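As a purely numerical illustration of the two options, using the values from FIGS. 14 and 15 (Z=5, N_symb^slot=14, and a report starting at the 10th symbol of slot n):

```python
import math

# Illustrative numerical sketch of FIGS. 14 and 15 (values from the text).
Z, n_symb_slot, report_symbol = 5, 14, 10

ceil_offset = math.ceil(Z / n_symb_slot)    # option of FIG. 14 -> 1 (slot n-1)
floor_offset = math.floor(Z / n_symb_slot)  # option of FIG. 15 -> 0 (slot n)

print(ceil_offset, floor_offset)  # 1 0
# With the floor-based offset the reference resource stays in slot n, so a
# CSI-RS on symbol 1-4 of slot n still leaves at least Z = 5 symbols before
# the report start at symbol 10 and can be used for fresh channel measurement.
assert report_symbol - 4 >= Z
```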
When an AP CSI-RS is transmitted, a problem similar to what has been described with reference to FIG. 13 may occur, which will be described with reference to FIG. 16. As shown in FIG. 16, the AP CSI-RS may come late in the time window T. In this case, even though T>Z, the UE is unable to complete the CSI calculation since it starts channel estimation too late. A simple method to solve this problem is to compare T′ and Z instead of T and Z. Here, T′ represents the time gap between the most recent AP CSI-RS reception time and the transmission time of the first OFDM symbol of the AP CSI report. In particular, if T′<Z, the UE does not update the CSI and reports the lowest CQI. In a case which requires a more precise mechanism, Z′, which is smaller than Z, is defined, and T′ may be compared with Z′ instead of Z. In other words, Z′ indicates the amount of time required for channel measurement, CSI calculation, and TX preparation, excluding DCI decoding, whereas Z indicates the time which includes DCI decoding in addition to the channel measurement, CSI calculation, and TX preparation. Since the decoding time of the DCI does not necessarily have to be considered in T′, the time actually required for T′ may be smaller than Z. If sufficient time is not provided by T′, the UE does not have a valid measurement of the channel under consideration, and thus the UE may report the lowest CQI in a specific UCI field. FIG. 16 illustrates one example of an A-CSI report trigger for single CSI having an aperiodic CSI-RS proposed by the present specification. (Proposal 4) In the case of an A-CSI report trigger for single CSI which uses an AP CSI-RS, if T′<Z, the UE does not need to calculate CSI and reports the lowest CQI. Here, T′ represents the time duration between the most recent CSI-RS reception time and the transmission time of the first OFDM symbol of the AP CSI report. In the case of an A-CSI report trigger for a plurality of N CSI, if the UE is equipped with N parallel processors, the UE may use the same mechanism as in the single CSI trigger. However, if more than N CSIs are triggered, the UE is unable to complete the calculation of all of the triggered CSIs. In this case, the CSI relaxation method supported by the LTE system may be used again. (Proposal 5) In other words, the proposal 5 reuses a relaxation method supported by the LTE system in the case of an A-CSI report trigger for a plurality of CSI. Now, UE capability for CSI calculation will be described. According to the proposals 1 to 3 described above, the amount of time required for CSI processing is determined, which may be summarized as shown in Tables 8 and 9. In other words, Table 8 provides Z values for normal UEs, which are reference values that have to be supported by all of the UEs. And Table 9 provides Z values for advanced UEs; therefore, for a given numerology and CSI latency, UE capability is employed to report whether the Z values of Table 9 are supported. Also, for the given numerology and CSI latency, the Z values of Table 9 have to be the same as or smaller than the Z values of Table 8. Also, the value of Z′_i,j needs to be added with respect to Z′. The Z′_i,j value represents the required time duration between the reception time of the most recent CSI-RS and the transmission time of the first OFDM symbol of the AP CSI report. Table 8 illustrates one example of the CSI calculation time Z for normal UEs.

TABLE 8
CSI complexity      Units     15 kHz SCS   30 kHz SCS   60 kHz SCS   120 kHz SCS
Low latency CSI     Symbols   Z_1,1        Z_1,2        Z_1,3        Z_1,4
High latency CSI    Symbols   Z_2,1        Z_2,2        Z_2,3        Z_2,4

Table 9 illustrates one example of the CSI calculation time Z for advanced UEs.

TABLE 9
CSI complexity      Units     15 kHz SCS   30 kHz SCS   60 kHz SCS   120 kHz SCS
Low latency CSI     Symbols   Z_1,1        Z_1,2        Z_1,3        Z_1,4
High latency CSI    Symbols   Z_2,1        Z_2,2        Z_2,3        Z_2,4

The proposals described above are summarized briefly as follows. First, according to the proposal 1, if T<Z for an A-CSI report trigger with respect to single CSI, the UE does not need to update CSI. Here, T represents the time duration between the reception time of the last OFDM symbol of the triggering DCI and the transmission time of the first OFDM symbol of the AP CSI reporting. And according to the proposal 2, the UE does not need to measure a channel or interference from a ZP/NZP CSI-RS received from 0 to Z symbols before the transmission time of the first OFDM symbol of the AP CSI reporting. And according to the proposal 3, when a P/SP/AP CSI-RS is used to conduct CSI calculation for A-CSI reporting, the time offset of a CSI reference resource is derived from Z with respect to the given CSI latency and numerology as follows. In other words, n_CQI_ref is the smallest value greater than or equal to ⌊Z/N_symb^slot⌋ such that slot n-n_CQI_ref corresponds to a valid downlink slot. This property may be applied in the same way to P/SP CSI reporting. And according to the proposal 4, in the case of an A-CSI report trigger with respect to single CSI which uses an AP CSI-RS, if T′<Z, the UE does not need to calculate CSI and reports the lowest channel quality indicator (CQI) to the eNB.
Here, T′ represents the time duration between the reception time of the most recent AP CSI-RS and the transmission time of the first OFDM symbol of the AP CSI report. And the proposal 5 reuses a relaxation method supported by the LTE system in the case of an A-CSI report trigger for a plurality of CSI. Next, another embodiment will be described. The time offset of a CSI reference resource is derived from Z′ with respect to the CSI latency and numerology given as follows. n_CQI_ref is the smallest value greater than or equal to ⌊Z/N_symb^slot⌋ such that slot n-n_CQI_ref corresponds to a valid downlink slot. Or, n_CQI_ref may be interpreted to be equal to ⌊Z/N_symb^slot⌋, or to be the smallest value among those values larger than ⌊Z/N_symb^slot⌋ such that slot n-n_CQI_ref corresponds to a valid downlink slot. This property may also be applied to at least the aperiodic CSI report, and it is applied when an AP/P/SP CSI-RS is used for CSI calculation. When a P/SP CSI-RS and/or CSI-IM is used for channel or interference measurement, the UE is not expected to measure a channel and/or interference on a CSI-RS and/or CSI-IM whose last OFDM symbol is received from 0 to Z′ symbols before the transmission time of the first OFDM symbol of the AP CSI reporting. The aforementioned property is not the only condition, and the CSI-RS has to be defined at or before the CSI reference resource. This property also covers the case of the AP CSI-RS. In the case of the AP CSI report, when the P/SP CSI-RS is used for channel and/or interference measurement, the UE does not expect the most recent CSI-RS to be received later than the CSI reference resource before the triggering PDCCH. In Table 10 below, the (Z, Z′) values are reference values that have to be supported by all of the UEs. For normal UEs, it has not yet been determined whether the (Z, Z′) values with respect to low latency CSI and high latency CSI of Table 10 below are the same as each other for a given numerology. If the two values are the same for all of the numerologies, low latency and high latency are combined for normal UEs. Whether the (Z, Z′) values of Table 11 below are supported for a given numerology and CSI latency is reported to the eNB through UE capability. For the given numerology and CSI latency, the (Z, Z′) values of Table 11 have to be equal to or smaller than the (Z, Z′) values of Table 10. Table 10 illustrates the CSI calculation time Z for normal UEs.

TABLE 10
CSI latency    Units     15 kHz SCS        30 kHz SCS        60 kHz SCS        120 kHz SCS
Low latency    Symbols   (Z_1,1, Z′_1,1)   (Z_1,2, Z′_1,2)   (Z_1,3, Z′_1,3)   (Z_1,4, Z′_1,4)
High latency   Symbols   (Z_2,1, Z′_2,1)   (Z_2,2, Z′_2,2)   (Z_2,3, Z′_2,3)   (Z_2,4, Z′_2,4)

Table 11 illustrates the CSI calculation time Z for advanced UEs.

TABLE 11
CSI latency    Units     15 kHz SCS        30 kHz SCS        60 kHz SCS        120 kHz SCS
Low latency    Symbols   (Z_1,1, Z′_1,1)   (Z_1,2, Z′_1,2)   (Z_1,3, Z′_1,3)   (Z_1,4, Z′_1,4)
High latency   Symbols   (Z_2,1, Z′_2,1)   (Z_2,2, Z′_2,2)   (Z_2,3, Z′_2,3)   (Z_2,4, Z′_2,4)

As yet another embodiment, a mechanism related to CSI reporting will be described further. More specifically, CSI reporting timing and the UE capability related thereto will be described. In what follows, through Tables 12 and 13, specific values of (Z, Z′) for a normal UE and an advanced UE will be examined. For the Z′ value of a normal UE, it is assumed that the UE performs CSI measurement/calculation, channel multiplexing, and CSI encoding and modulation within the Z′ symbols.
Part of the CSI measurement and calculation depends on the numerology and requires 6*2^(μ-2) symbols; the remaining portion plus channel multiplexing/CSI encoding/modulation uses 20 symbols for high latency and 13 symbols for low latency, respectively. As a result, Z′ for low latency and high latency is 13+6*2^(μ-2) and 20+6*2^(μ-2), respectively (with fractional symbol counts rounded up). For the Z value of a normal UE, it is assumed that a CSI-RS is located at the symbol next to the final PDCCH symbol. Also, it is assumed that CSI processing may start after DCI decoding. The DCI decoding time requires 4+10*2^(μ-2) symbols, including a portion that depends on the numerology, such as PDCCH CE/demultiplexing/decoding, and a portion independent of the numerology. As a result, Z is determined by the DCI decoding time plus the CSI processing time, namely 4+10*2^(μ-2)+Z′. In the case of an advanced UE, since DCI decoding is conducted in 5 symbols, Z′ is 7 symbols and 14 symbols for low latency and high latency, respectively, and Z is Z′+5. Table 12 represents the CSI calculation time (Z, Z′) for a normal UE.

TABLE 12
CSI latency    Units     15 kHz SCS (μ=0)   30 kHz SCS (μ=1)   60 kHz SCS (μ=2)   120 kHz SCS (μ=3)
Low latency    Symbols   (22, 15)           (25, 16)           (33, 19)           (49, 25)
High latency   Symbols   (29, 22)           (32, 23)           (40, 26)           (56, 32)

Table 13 represents the CSI calculation time (Z, Z′) for an advanced UE.

TABLE 13
CSI latency    Units     15 kHz SCS (μ=0)   30 kHz SCS (μ=1)   60 kHz SCS (μ=2)   120 kHz SCS (μ=3)
Low latency    Symbols   (12, 7)            (12, 7)            (12, 7)            (12, 7)
High latency   Symbols   (19, 14)           (19, 14)           (19, 14)           (19, 14)

(The sketch below reproduces the normal-UE values of Table 12 from the formulas above.)
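As a sanity check, the normal-UE values of Table 12 can be reproduced from the formulas above; the sketch below assumes that fractional symbol counts are rounded up to whole symbols, which matches the tabulated values.

```python
import math

# Illustrative sketch: reproducing the normal-UE values of Table 12 from the
# formulas in the text, rounding fractional symbol counts up to whole symbols.
def z_prime(mu, latency):
    """CSI processing time without DCI decoding, in symbols."""
    base = 13 if latency == "low" else 20
    return math.ceil(base + 6 * 2 ** (mu - 2))

def z_total(mu, latency):
    """Z = DCI decoding time + CSI processing time, in symbols."""
    return math.ceil(4 + 10 * 2 ** (mu - 2) + z_prime(mu, latency))

table12 = {(mu, lat): (z_total(mu, lat), z_prime(mu, lat))
           for mu in range(4) for lat in ("low", "high")}
assert table12[(0, "low")] == (22, 15) and table12[(3, "high")] == (56, 32)
```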
Various proposals related to the descriptions above will be examined. The proposals to be described later may be applied separately from the proposals described above or together with the aforementioned proposals. (Proposal 1′) As the minimum required CSI processing times for a normal UE and an advanced UE, the (Z, Z′) values of Tables 12 and 13 above are selected, respectively. Regarding CSI and data multiplexing, one remaining problem is the number of symbols required for a UE to complete CSI processing and data encoding simultaneously. When CSI and data are multiplexed, the allocation of data resource elements (REs) depends on the CSI payload; however, the CSI payload size varies according to the CRI/RI, the number of amplitude coefficients other than 0, or the amount of CSI omission. As a result, CSI processing and data encoding may not be performed in a fully parallel manner. More specifically, in the case of type I CSI, the CRI/RI of Part 1 determines the payload size of the Part 2 CSI such as the PMI and CQI. In the case of type II CSI, the number of non-zero amplitude coefficients of the RI/Part 1 CSI determines the payload size of the Part 2 CSI such as the PMI and CQI. Therefore, when CSI and data are multiplexed, instead of (Z, Z′), the UE requires at least (Z+C, Z′+C) symbols to prepare the CSI and the data simultaneously. Here, C is smaller than or equal to N2. (Proposal 2′) When AP CSI and data for a PUSCH are multiplexed, the UE is not expected to receive scheduling DCI having a symbol offset such that M-L-N<Z+C. Here, L represents the last symbol of the PDCCH triggering the A-CSI report, M is the start symbol of the PUSCH, N is the TA value in symbol units, and C is equal to or smaller than N2. (Proposal 3′) When AP CSI and data for a PUSCH are multiplexed, and an AP CSI-RS is used for channel measurement, the UE is not expected to receive scheduling DCI having a symbol offset such that M-O-N<Z′+C. Here, N represents the TA value in symbol units; O represents the latest among the last symbol of the AP CSI-RS resource for a CMR, the last symbol of an aperiodic NZP CSI-RS for IM (if it exists), and the last symbol of an aperiodic CSI-IM (if it exists); and C is equal to or smaller than N2. Also, when AP CSI and data for a PUSCH are multiplexed, although the time position of the CSI reference resource is determined in the same manner as for the AP-CSI-only case, the time position is determined based on Z′+C instead of Z′. (Proposal 4′) When AP CSI and data for a PUSCH are multiplexed, the time offset of the CSI reference resource is derived from Z′+C with respect to a given CSI latency and numerology, as follows: n_CQI_ref is the smallest value greater than or equal to ⌊(Z′+C)/N_symb^slot⌋ such that slot n-n_CQI_ref corresponds to a valid downlink slot. When a P/SP CSI-RS and/or CSI-IM is used for channel measurement and/or interference measurement, the UE is not expected to measure a channel and/or interference on a CSI-RS and/or CSI-IM whose last OFDM symbol is received from 0 to Z′+C symbols before the transmission time of the first OFDM symbol of the AP CSI report. Another issue is the calculation time for a beam report, namely the CRI and the layer 1 reference signal received power (L1 RSRP). Since L1 RSRP is a power measurement of a single port, and the same calculated power is used for a CSI report and a beam report, it is preferable to regard the L1 RSRP as low latency CSI. Also, to reduce calculation complexity, the number of CSI-RS resources for a beam report may be limited. (Proposal 5′) The same (Z, Z′) as for low latency CSI is applied to a beam report. Next, in the case of an A-CSI report trigger for a plurality of N CSI, if the UE is equipped with X parallel processors and X>=N, the same mechanism as for a single CSI report trigger may be used without relaxation. However, if more than X CSIs are triggered, the UE is unable to complete the calculation for all of the triggered CSIs. In this case, a relaxation method supported in the LTE system may be reused. In particular, if the UE has N unreported CSI(s) and N>X, the UE does not necessarily have to calculate N-X CSI(s). (Proposal 6′) In the case of an A-CSI report trigger for a plurality of CSI, a relaxation method supported in the LTE system may be reused. More specifically, if the UE is equipped with X parallel CSI processors and has N unreported CSI(s), and N>X, the UE does not necessarily have to update the N-X most recent CSI(s). Regarding the time position of the reference resource for P/SP CSI reporting, the same method as for the time position of the reference resource for AP CSI reporting may be applied. (Proposal 7′) The reference resource time position for P/SP CSI reporting may be determined by the same method as the reference resource time position for AP CSI reporting. (A short sketch of the relaxation rule of the proposal 6′ follows.)
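A minimal sketch of the relaxation rule of the proposal 6′ follows; the data model and the choice of which X reports to keep (the oldest ones, so that the N-X most recent go un-updated) are illustrative assumptions.

```python
# Illustrative sketch of proposal 6': with X parallel CSI processors and
# N > X overlapping unreported CSI reports, the UE may skip updating N - X
# of them (here, the N - X most recent ones).

def reports_to_update(unreported_reports, X):
    """unreported_reports: overlapping, unreported CSI reports ordered from
    oldest to most recent; X: number of parallel CSI processors."""
    N = len(unreported_reports)
    if N <= X:
        return unreported_reports       # all N can be updated in parallel
    return unreported_reports[:X]       # the N - X most recent may be skipped

# Example: five overlapping reports, two processors -> three may be skipped.
print(reports_to_update(["r0", "r1", "r2", "r3", "r4"], X=2))  # ['r0', 'r1']
```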
Particulars related to CSI relaxation will be described in more detail. X represents the capability for the maximum number of CSIs that may be updated simultaneously. If the CSI processing time intervals of N (>X) CSI reports overlap with each other in the time domain, the UE does not need to update N-X CSI reports. A CSI processing time interval is a time interval which ranges from the start of a symbol S to the end of a symbol E. Here, regarding periodic and semi-persistent CSI reporting:
(1) In the case of Alt. 1, S is the start symbol of the CQI reference resource slot.
(2) In the case of Alt. 2, S is E-Z′ (or E-(Z′+1)), and E is the start symbol of the CSI report. Since the NR sets the location of a channel-measurable CSI-RS at the symbol level (in other words, a CSI-RS located at or before symbol E-Z′, or at or before symbol E-(Z′+1), is measured), Alt. 2 proposes the latest time at which CSI processing may be started. In other words, the UE may start CSI processing at the time S of Alt. 2 at the latest.
(3) In the case of Alt. 3, S is the location given by (start symbol of the CSI report)-Z′ (or (start symbol of the CSI report)-(Z′+1)), or the last symbol of the CSI-RS (which is used for the calculation of the corresponding CSI) received at the most recent time point among the time points before that start symbol. Since the UE starts the CSI calculation by using the CSI-RS at the aforementioned time point, that time point is appropriate for S, and E=S+Z′ is satisfied.
Next, regarding an aperiodic CSI report with a periodic or semi-persistent CSI-RS and CSI-IM:
(1) In the case of Alt. 1, if the reference resource is located before the PDCCH with aperiodic CSI triggering, S becomes the last symbol of the PDCCH with aperiodic CSI triggering, and E=S+Z. Otherwise, S=E-Z′, and E is the start symbol of the CSI report.
(2) In the case of Alt. 2, if (start symbol of the CSI report)-Z′ (or (start symbol of the CSI report)-(Z′+1)) is located before the PDCCH with aperiodic CSI triggering, S is the last symbol of the PDCCH with aperiodic CSI triggering (or that last symbol +1), and E=S+Z. In other words, if a measurable CSI-RS is received before the PDCCH, the UE may start the CSI calculation after receiving the PDCCH. Since the minimum required time until a CSI report is completed after reception of the PDCCH is Z, the time at which the CSI calculation is finished becomes S+Z. Otherwise, S is E-Z′ (or E-(Z′+1)), and E is the start symbol of the CSI report. In other words, if a measurable CSI-RS is received after the PDCCH, the UE may start the CSI calculation after receiving the CSI-RS. Since the minimum required time until the CSI report is completed after reception of the CSI-RS is Z′, the time at which the CSI calculation is finished becomes S+Z′.
(3) In the case of Alt. 3, suppose the most recent CSI-RS received at or before (start symbol of the CSI report)-Z′ (or (start symbol of the CSI report)-(Z′+1)) is a 'reference CSI-RS'. If the last symbol of the reference CSI-RS is located before the PDCCH with aperiodic CSI triggering, S becomes the last symbol of the PDCCH with aperiodic CSI triggering (or that last symbol +1), and E=S+Z. In other words, if a measurable CSI-RS is received before the PDCCH, the UE may start the CSI calculation after receiving the PDCCH. Since the minimum required time until a CSI report is completed after reception of the PDCCH is Z, the time at which the CSI calculation is finished becomes S+Z. Otherwise, S=E-Z′ (or E-(Z′+1)), and E is the start symbol of the CSI report. In other words, if a measurable CSI-RS is received after the PDCCH, the UE may start the CSI calculation after receiving the CSI-RS. Since the minimum required time until a CSI report is completed after receiving the CSI-RS is Z′, the time at which the CSI calculation is finished becomes S+Z′.
(4) In the case of Alt. 4, S is E-Z′ (or E-(Z′+1)), and E is the start symbol of the CSI report.
Next, regarding an aperiodic CSI report with an aperiodic CSI-RS and CSI-IM, S1 is the last symbol of the PDCCH with aperiodic CSI triggering.
S2 is the latest symbol among the last symbol of the aperiodic CSI-RS for a CMR, the last symbol of the aperiodic CSI-RS for an IMR, and the last symbol of the aperiodic CSI-IM.
(1) In the case of Alt. 1, if S1+Z>S2+Z′ (in other words, if the OFDM symbol located Z symbols after S1 lies after the OFDM symbol located Z′ symbols after S2), S=S1 and E=S1+Z. Otherwise, S=S2 and E=S2+Z′. The UE terminates CSI processing at the later time between S1+Z and S2+Z′. Therefore, E is set to the later of the two, and the start time of the branch that completes later is assumed to be the start of CSI processing.
(2) In the case of Alt. 2, S=S2 is set. If S1+Z>S2+Z′ (in other words, if the OFDM symbol located Z symbols after S1 lies after the OFDM symbol located Z′ symbols after S2), E=S1+Z. Otherwise, E=S2+Z′. Here, the end time of CSI processing in Alt. 2 is the same as that of Alt. 1, but the start time is fixed to S2, which is used for channel and/or interference estimation. This is so because an AP CSI-RS is always restricted to be received after reception of the PDCCH, and in this case, the UE is able to start CSI processing as soon as the reception of the CSI-RS is completed.
(3) In the case of Alt. 3, S is E-Z′ (or E-(Z′+1)), and E is the start symbol of the CSI report.
When CSI is calculated by using a P/SP CSI-RS and/or CSI-interference measurement (CSI-IM), a plurality of measurable CSI-RSs may exist in the time domain. The UE may calculate CSI by measuring a CSI-RS received as recently as possible with respect to the CSI reporting time, thereby obtaining fresh CSI. At this time, too, a CSI-RS located before (reporting time)-Z′ has to be measured, taking into account the CSI calculation time of the UE. However, if the calculation time of a CSI (called 'CSI 1') overlaps with the calculation time of another CSI (called 'CSI 2'), and the number of CSIs that may be calculated at the same time is exceeded, the UE is unable to calculate some of the CSIs. To solve the problem above, the calculation time of the CSI 1 may be moved to an earlier time so that it does not overlap with that of the CSI 2. This is possible since the CSI 1 is calculated by using a P/SP CSI-RS and/or CSI-IM, a plurality of P/SP CSI-RSs and/or CSI-IMs exist along the time axis, and thereby the CSI 1 may be calculated in advance by using a P/SP CSI-RS and/or CSI-IM received previously. However, it should be noted that the CSI becomes outdated if the CSI 1 is calculated too early; a potential interval is therefore introduced to avoid such a situation, and the CSI 1 may be calculated in advance by using the P/SP CSI-RS and/or CSI-IM received within the potential interval. The potential interval (namely, the N value proposed below) may be determined by the eNB and indicated to the UE, or the UE may determine the potential interval and report the determined potential interval to the eNB. The potential interval is terminated at (reporting time)-Z′ and starts at (end time)-N. When a plurality of CSIs are reported through the same PUSCH, channel multiplexing/encoding/modulation is performed simultaneously for the plurality of corresponding CSIs, and therefore a smaller amount of processing time is required than in the case where the plurality of CSIs are reported through different PUSCHs.
Therefore, when a plurality of CSIs are reported through the same PUSCH, one of the CSIs requires the CSI processing time T, but the remaining CSI(s) require only the time needed for T-(channel multiplexing/encoding/modulation time). Therefore, when the processing time is defined for CSI relaxation, that of the remaining CSI is defined as T-(channel multiplexing/encoding/modulation time), and as a result, the possibility that the processing time overlaps with that of another CSI may be reduced. And when channel and/or interference is measured by using a periodic or semi-persistent CSI-RS, a plurality of measurable CSI-RSs may exist along the time axis. In this case, the UE calculates CSI by measuring a CSI-RS existing before Z′ (or Z′+1) symbols with reference to the first OFDM symbol at which the CSI reporting starts. Therefore, the latest time at which the UE may measure a CSI-RS for the CSI calculation becomes the symbol located Z′ (or Z′+1) symbols before the first OFDM symbol at which the CSI reporting starts. Therefore, it is preferable to set the start time of CSI processing to the symbol located Z′ (or Z′+1) symbols before the first OFDM symbol at which the CSI reporting starts, and it is preferable to set the end time of CSI processing to the first OFDM symbol at which the CSI reporting starts. On the other hand, when channel and/or interference is measured by using an aperiodic CSI-RS, one measurable CSI-RS may exist along the time axis. Therefore, it is preferable to set the start time of CSI processing to the very last symbol at which the AP CSI-RS and/or AP CSI-IM is received. In the case of periodic or semi-persistent CSI reporting, the reporting time is defined in advance. Therefore, the UE knows the location of the most recent CSI-RS existing before Z′ (or Z′+1) symbols with reference to the first OFDM symbol at which the CSI reporting starts. Therefore, since the calculation may be started from the corresponding CSI-RS, S becomes the last OFDM symbol of the corresponding CSI-RS, and E becomes S+Z′. In the case of AP CSI reporting, when an AP CSI-RS is used, one CSI-RS used for the CSI calculation exists along the time axis. It should be noted that since a CSI-RS for CMR use is different from a CSI-RS for IMR use, there exists one CSI-RS for each use along the time axis. Therefore, since the calculation may be started from the corresponding CSI-RS, S becomes the last OFDM symbol of the corresponding CSI-RS, and E becomes S+Z′. In the case of AP CSI reporting, when a P/SP CSI-RS is used, the most recent CSI-RS used for the CSI calculation may be received before the DCI. Therefore, if the last OFDM symbol of the corresponding CSI-RS is set to S, the UE starts to calculate CSI at a time at which it is uncertain whether the corresponding CSI will be triggered or not. If the corresponding CSI is not triggered, the UE wastes computation power, and a problem may arise in that the corresponding computation power is not used for another CSI calculation. To solve the problem above, S is defined such that S=E-Z′, and E is defined as the first symbol of the PUSCH CSI reporting. Various combinations are possible for the S and E proposed in the different Alts above, and the corresponding combinations are also applicable to a method proposed by the present specification. For example, S and E may be determined by the S of Alt. 1 and the E of Alt. 2. And in the proposals 2 and 3 above, Z′ may be replaced with Z′-1. Since the UE may still be able to calculate CSI even when exactly Z′ symbols are available from the CSI-RS and/or CSI-IM to the start symbol of the CSI reporting, Z′ may be replaced with Z′-1; for the same reason, in the proposal 4 above, Z′ may be replaced with Z′-1. (The sketch below illustrates how S and E may be derived for an aperiodic CSI report with an aperiodic CSI-RS.)
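The derivation of the interval [S, E] for an aperiodic CSI report with an aperiodic CSI-RS, per the Alt. 1 and Alt. 2 described above, may be sketched as follows (absolute symbol indices; the function names are illustrative assumptions, not from the specification):

```python
# Illustrative sketch of the CSI processing interval [S, E] for an aperiodic
# CSI report with an aperiodic CSI-RS, following Alt. 1 and Alt. 2 above.

def processing_interval_alt1(S1, S2, Z, Z_prime):
    """S1: last symbol of the triggering PDCCH; S2: latest last symbol of
    the AP CSI-RS/CSI-IM; returns (S, E)."""
    if S1 + Z > S2 + Z_prime:
        return S1, S1 + Z          # DCI decoding is the bottleneck
    return S2, S2 + Z_prime        # CSI-RS reception is the bottleneck

def processing_interval_alt2(S1, S2, Z, Z_prime):
    """Same end time as Alt. 1, but S is fixed to the CSI-RS symbol S2."""
    return S2, max(S1 + Z, S2 + Z_prime)

# Example with the normal-UE Table 12 values for 30 kHz SCS, low latency
# (Z=25, Z'=16) and a CSI-RS arriving shortly after the PDCCH:
print(processing_interval_alt1(S1=3, S2=5, Z=25, Z_prime=16))  # (3, 28)
print(processing_interval_alt2(S1=3, S2=5, Z=25, Z_prime=16))  # (5, 28)
```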
Next, the calculation of the CSI will be described in more detail from the viewpoint of implementation. Two implementations of the CSI processor that is in charge of the calculation of the CSI are available. Type A corresponds to a serial processor. The UE may have X (type A) CSI processing units, and the minimum time required for calculating one CSI may be defined as (Z, Z′). The UE may simultaneously calculate X CSIs or fewer; in this case, when the required times need to be computed sequentially for the X CSIs one by one, a minimum time equal to the sum (e.g., the Z′ sum) of the (Z, Z′) values corresponding to each of the X CSIs is required. When the locations of the X CSI-RSs or CSI-IMs are the same as each other, whether the given calculation time is sufficient is determined according to whether the time obtained by adding the Z′ sum to the location of the CSI-RS and/or CSI-IM (the last symbol of the CSI-RS and/or CSI-IM, or the first symbol of the CSI-RS and/or CSI-IM) falls before or after the reporting. When sufficient time is given, the UE updates the CSI and reports the CSI to the eNB. Otherwise, the UE does not update the CSI, transmits a dummy CSI, or ignores the trigger and does not transmit anything to the eNB. When the locations of the X CSI-RSs or CSI-IMs are different from each other, whether the given calculation time is sufficient is determined according to whether the time obtained by adding the Z′ sum to the symbol location of the most recently received CSI-RS and/or CSI-IM falls before or after the reporting. Since the eNB has a degree of freedom by which the locations of the X CSI-RSs/CSI-IMs may be configured differently, whether the given calculation time is sufficient needs to be determined by the latter scheme.

Operation Methods of UE and eNB

Hereinafter, operations of the UE and the eNB for performing the method proposed by the present invention will be described with reference to FIGS. 17 to 22. FIG. 17 is a flowchart illustrating an example of an operation method of a UE that performs a CSI report proposed by the present invention. First, the UE receives from the eNB a radio resource control (RRC) signaling including one or more reporting settings. Here, each reporting setting includes a list of first values indicating a time offset for the CSI report. The first value may be expressed as Y. The CSI report may be an aperiodic CSI report. In addition, the UE receives from the eNB downlink control information (DCI) for triggering the CSI report (S1720). The DCI includes control information for a transmission time point of a physical uplink shared channel (PUSCH). The control information may be represented by n bit(s); here, n is a natural number or a non-negative integer. For example, when the control information is represented by 2 bits, each state value may be 00, 01, 10, or 11. In addition, when a plurality of reporting settings is triggered by the DCI, the UE determines, as a second value, the largest value among the first values corresponding to the control information in the lists of first values of the plurality of reporting settings (S1730).
The 00 may correspond to a first entry in the list of first values, the 01 may correspond to a second entry in the list of first values, the 10 may correspond to a third entry in the list of first values, and the 11 may correspond to a fourth entry in the list of first values. In addition, the UE reports the CSI to the eNB on the PUSCH based on the second value (S1740). The DCI may be received on slot n, and the CSI may be reported on slot (n+second value). The operation of the UE of FIG. 17 may be interpreted as follows. The UE receives, from a base station, a radio resource control (RRC) signaling that comprises a plurality of reporting settings, wherein each reporting setting comprises a corresponding list of first values representing time offsets for transmitting a CSI report, forming a plurality of lists of first values. And, the UE receives, from the base station, downlink control information (DCI) triggering the CSI report, wherein the DCI comprises an index value related to a time at which to transmit the CSI report on a physical uplink shared channel (PUSCH). And, the UE determines, based on the DCI, a plurality of list entries by determining, for each list in the plurality of lists of first values, a corresponding list entry that is indexed in the list based on the index value. And, the UE determines a second value that is largest among the plurality of list entries. And, the UE transmits, to the base station, the CSI report on the PUSCH based on the second value. Here, the CSI report comprises an aperiodic CSI report. Additionally, the UE may receive the DCI on a slot n and transmit the CSI report on a slot n+(second value). The index value is represented by 2 bits, and the index value is represented by one of 00, 01, 10, or 11. More specifically, the index value of 00 corresponds to a first entry in each of the plurality of lists of first values, the index value of 01 corresponds to a second entry in each of the plurality of lists of first values, the index value of 10 corresponds to a third entry in each of the plurality of lists of first values, and the index value of 11 corresponds to a fourth entry in each of the plurality of lists of first values. Here, the index value may be greater than or equal to zero, and each list entry is indexed in the corresponding list of first values at a position corresponding to 1+(index value) in the list. (A short sketch of this determination is given below.)
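A minimal sketch of this determination, assuming a 0-based index value and one list of first values per triggered reporting setting (the data model is an illustrative assumption):

```python
# Illustrative sketch: determining the slot offset ("second value") when one
# DCI triggers several reporting settings, each with its own list of first
# values (Y).

def csi_report_slot(n, report_settings, dci_index):
    """n: slot carrying the triggering DCI; report_settings: one list of
    first values per triggered reporting setting; dci_index: index value
    signaled in the DCI (0-based, i.e. position 1+(index value) in each list)."""
    entries = [setting[dci_index] for setting in report_settings]
    second_value = max(entries)      # largest among the indexed entries
    return n + second_value          # CSI is reported on slot n + offset

# Example with the Y lists from the text: {1, 2, 3, 6} and {2, 3, 4, 5}.
settings = [[1, 2, 3, 6], [2, 3, 4, 5]]
assert csi_report_slot(n=10, report_settings=settings, dci_index=0) == 12
assert csi_report_slot(n=10, report_settings=settings, dci_index=3) == 16
```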
FIG. 18 is a flowchart illustrating an example of an operation method of an eNB that receives a CSI report proposed by the present invention. First, the eNB transmits to the UE the radio resource control (RRC) signaling including one or more reporting settings. Here, each reporting setting includes the list of first values indicating the time offset for the CSI report. The first value may be expressed as Y. The CSI report may be the aperiodic CSI report. In addition, the eNB transmits to the UE the downlink control information (DCI) for triggering the plurality of reporting settings (S1820). The DCI includes control information for a transmission time point of a physical uplink shared channel (PUSCH). The control information may be represented by n bit(s); here, n is a natural number or a non-negative integer. For example, when the control information is represented by 2 bits, each state value may be 00, 01, 10, or 11. In addition, the eNB receives the CSI report from the UE on the PUSCH (S1830). The CSI report may be associated with the second value, and the second value may be the largest value among the first values corresponding to the control information in the lists of first values of the plurality of reporting settings. The 00 may correspond to a first entry in the list of first values, the 01 may correspond to a second entry in the list of first values, the 10 may correspond to a third entry in the list of first values, and the 11 may correspond to a fourth entry in the list of first values. The DCI may be transmitted on slot n, and the CSI may be reported on slot (n+second value). The operation of the eNB of FIG. 18 may be interpreted as follows. The eNB transmits, to a UE, a radio resource control (RRC) signaling that comprises a plurality of reporting settings, wherein each reporting setting comprises a corresponding list of first values representing time offsets for transmitting a CSI report, forming a plurality of lists of first values. And, the eNB transmits, to the UE, downlink control information (DCI) triggering the CSI report, wherein the DCI comprises an index value related to a time at which to transmit the CSI report on a physical uplink shared channel (PUSCH). And, the eNB receives, from the UE, the CSI report on the PUSCH based on the second value. Here, the second value is the largest among a plurality of list entries, and the plurality of list entries may be determined by identifying, for each list in the plurality of lists of first values, the corresponding list entry that is indexed in the list by the index value. Here, the CSI report comprises an aperiodic CSI report. Additionally, the DCI may be transmitted on a slot n, and the CSI report may be received on a slot n+(second value). The index value is represented by 2 bits, and the index value is represented by one of 00, 01, 10, or 11. More specifically, the index value of 00 corresponds to a first entry in each of the plurality of lists of first values, the index value of 01 corresponds to a second entry in each of the plurality of lists of first values, the index value of 10 corresponds to a third entry in each of the plurality of lists of first values, and the index value of 11 corresponds to a fourth entry in each of the plurality of lists of first values. Here, the index value may be greater than or equal to zero, and each list entry is indexed in the corresponding list of first values at a position corresponding to 1+(index value) in the list. Referring to FIGS. 19 to 22 to be described below, a process of implementing the method for reporting the CSI proposed by the present invention in the UE will be described in more detail.
That is, the UE comprises a radio frequency (RF) module, at least one processor, and at least one computer memory operably connectable to the at least one processor and storing instructions that, when executed, cause the at least one processor to perform operations comprising: receiving, from a base station, a radio resource control (RRC) signaling that comprises a plurality of reporting settings, wherein each reporting setting comprises a corresponding list of first values representing time offsets for transmitting a CSI report, forming a plurality of lists of first values; receiving, from the base station, downlink control information (DCI) triggering the CSI report, wherein the DCI comprises an index value related to a time at which to transmit the CSI report on a physical uplink shared channel (PUSCH); determining, based on the DCI, a plurality of list entries by determining, for each list in the plurality of lists of first values, a corresponding list entry that is indexed in the list based on the index value; determining a second value that is largest among the plurality of list entries; and transmitting, to the base station, the CSI report on the PUSCH based on the second value. Here, the CSI report comprises an aperiodic CSI report. Additionally, the UE may receive the DCI on a slot n and transmit the CSI report on a slot n+(second value). The index value is represented by 2 bits, and the index value is represented by one of 00, 01, 10, or 11. More specifically, the index value of 00 corresponds to a first entry in each of the plurality of lists of first values, the index value of 01 corresponds to a second entry in each of the plurality of lists of first values, the index value of 10 corresponds to a third entry in each of the plurality of lists of first values, and the index value of 11 corresponds to a fourth entry in each of the plurality of lists of first values. Here, the index value may be greater than or equal to zero, and each list entry is indexed in the corresponding list of first values at a position corresponding to 1+(index value) in the list. Referring to FIGS. 19 to 22 to be described below, a process of implementing the method for reporting the CSI proposed by the present invention in the eNB will be described in more detail. That is, the eNB comprises a radio frequency (RF) module, at least one processor, and at least one computer memory operably connectable to the at least one processor and storing instructions that, when executed, cause the at least one processor to perform operations comprising: transmitting, to a UE, a radio resource control (RRC) signaling that comprises a plurality of reporting settings, wherein each reporting setting comprises a corresponding list of first values representing time offsets for transmitting a CSI report, forming a plurality of lists of first values; transmitting, to the UE, downlink control information (DCI) triggering the CSI report, wherein the DCI comprises an index value related to a time at which to transmit the CSI report on a physical uplink shared channel (PUSCH); and receiving, from the UE, the CSI report on the PUSCH based on a second value, wherein the second value is the largest among a plurality of list entries determined, for each list in the plurality of lists of first values, as the list entry that is indexed in the list by the index value. Here, the CSI report comprises an aperiodic CSI report. Additionally, the DCI may be transmitted on a slot n, and the CSI report may be received on a slot n+(second value). The index value is represented by 2 bits, and the index value is represented by one of 00, 01, 10, or 11.
More specifically, the index value of 00 corresponds to a first entry in each of the plurality of lists of first values, the index value of 01 corresponds to a second entry in each of the plurality of lists of first values, the index value of 10 corresponds to a third entry in each of the plurality of lists of first values, and the index value of 11 corresponds to a fourth entry in each of the plurality of lists of first values. Here, the index value may be greater than or equal to zero, and each list entry is indexed in the corresponding list of first values at a position corresponding to 1+(index value) in the list.

Overview of Devices to which Present Invention is Applicable

FIG. 19 illustrates a block diagram of a wireless communication device to which the methods proposed in the present invention may be applied. Referring to FIG. 19, a wireless communication system includes an eNB 1910 and multiple UEs 1920 positioned within the area of the eNB. Each of the eNB and the UE may be expressed as a wireless device. The eNB includes a processor 1911, a memory 1912, and a radio frequency (RF) module 1913. The processor 1911 implements the functions, processes, and/or methods proposed in FIGS. 1 to 18 above. Layers of a radio interface protocol may be implemented by the processor. The memory is connected with the processor and stores various information for driving the processor. The RF module is connected with the processor and transmits and/or receives a radio signal. The UE includes a processor 1921, a memory 1922, and an RF module 1923. The processor implements the functions, processes, and/or methods proposed in FIGS. 1 to 18 above. Layers of a radio interface protocol may be implemented by the processor. The memory is connected with the processor and stores various information for driving the processor. The RF module is connected with the processor and transmits and/or receives a radio signal. The memories 1912 and 1922 may be positioned inside or outside the processors 1911 and 1921 and connected with the processors by various well-known means. Further, the eNB and/or the UE may have a single antenna or multiple antennas. The antennas 1914 and 1924 serve to transmit and receive the radio signals. FIG. 20 illustrates a block diagram of a communication device according to an embodiment of the present invention. In particular, FIG. 20 is a diagram illustrating the UE of FIG. 19 above in more detail. Referring to FIG. 20, the UE may be configured to include a processor (or a digital signal processor (DSP)) 2010, an RF module (or RF unit) 2035, a power management module 2005, an antenna 2040, a battery 2055, a display 2015, a keypad 2020, a memory 2030, a subscriber identification module (SIM) card 2025 (this component is optional), a speaker 2045, and a microphone 2050. The UE may also include a single antenna or multiple antennas. The processor 2010 implements the functions, processes, and/or methods proposed in FIGS. 1 to 18 above. Layers of a radio interface protocol may be implemented by the processor. The memory 2030 is connected with the processor and stores information related to the operation of the processor. The memory may be positioned inside or outside the processor and connected with the processor by various well-known means. A user inputs command information, such as a telephone number or the like, by, for example, pressing (or touching) a button on the keypad 2020 or by voice activation using the microphone 2050.
The processor receives such command information and processes it to perform the appropriate function, including dialing the telephone number. Operational data may be extracted from the SIM card 2025 or the memory 2030. In addition, the processor may display the command information or drive information on the display 2015 for the user's recognition and convenience. The RF module 2035 is connected with the processor and transmits and/or receives an RF signal. The processor transfers the command information to the RF module to initiate communication, for example, to transmit radio signals constituting voice communication data. The RF module is constituted by a receiver and a transmitter for receiving and transmitting the radio signals. The antenna 2040 functions to transmit and receive the radio signals. Upon receiving the radio signals, the RF module may convert the signals to baseband and transfer them for processing by the processor. The processed signal may be converted into audible or readable information output via the speaker 2045. FIG. 21 is a diagram illustrating an example of the RF module of the wireless communication device to which the method proposed in the present invention may be applied. Specifically, FIG. 21 illustrates an example of an RF module that may be implemented in a frequency division duplex (FDD) system. First, in the transmission path, the processors described in FIGS. 19 and 20 process the data to be transmitted and provide an analog output signal to the transmitter 2110. Within the transmitter 2110, the analog output signal is filtered by a low pass filter (LPF) 2111 to remove images caused by the digital-to-analog conversion (DAC), up-converted from baseband to RF by an up-converter (mixer) 2112, and amplified by a variable gain amplifier (VGA) 2113; the amplified signal is filtered by a filter 2114, additionally amplified by a power amplifier (PA) 2115, routed through a duplexer(s) 2150/antenna switch(es) 2160, and transmitted through an antenna 2170. In addition, in the reception path, the antenna receives signals from the outside, and the received signals are routed through the antenna switch(es) 2160/duplexer(s) 2150 and provided to the receiver 2120. In the receiver 2120, the received signals are amplified by a low noise amplifier (LNA) 2123, filtered by a band pass filter 2124, and down-converted from RF to baseband by a down-converter (mixer) 2125. The down-converted signal is filtered by a low pass filter (LPF) 2127 and amplified by a VGA 1127 to obtain an analog input signal, which is provided to the processors described in FIGS. 19 and 20. Further, a local oscillator (LO) generator 2140 provides transmit and receive LO signals to the up-converter 2112 and the down-converter 2125, respectively. In addition, a phase locked loop (PLL) 2130 receives control information from the processor to generate the transmit and receive LO signals at the appropriate frequencies and provides control signals to the LO generator 2140. Further, the circuits illustrated in FIG. 21 may be arranged differently from the arrangement illustrated in FIG. 21. FIG. 22 is a diagram illustrating another example of the RF module of the wireless communication device to which the method proposed in the present invention may be applied. Specifically, FIG. 22 illustrates an example of an RF module that may be implemented in a time division duplex (TDD) system. A transmitter 2210 and a receiver 2220 of the RF module in the TDD system are identical in structure to the transmitter and the receiver of the RF module in the FDD system.
Hereinafter, only the structure of the RF module of the TDD system that differs from the RF module of the FDD system will be described; for the common structure, refer to the description of FIG. 21. A signal amplified by a power amplifier (PA) 2215 of the transmitter is routed through a band select switch 2250, a band pass filter (BPF) 2260, and an antenna switch(es) 2270 and transmitted via an antenna 2280. In addition, in the reception path, the antenna receives signals from the outside, and the received signals are routed through the antenna switch(es) 2270, the band pass filter 2260, and the band select switch 2250 and provided to the receiver 2220. In the embodiments described above, the components and the features of the present invention are combined in a predetermined form. Each component or feature should be considered as optional unless otherwise expressly stated. Each component or feature may be implemented without being associated with other components or features. Further, an embodiment of the present invention may be configured by associating some components and/or features. The order of the operations described in the embodiments of the present invention may be changed. Some components or features of any embodiment may be included in another embodiment or replaced with a corresponding component or feature of another embodiment. It is apparent that claims that are not expressly dependent on each other may be combined to form an embodiment, or may be included in a new claim by amendment after the application is filed. The embodiments of the present invention may be implemented by hardware, firmware, software, or combinations thereof. In the case of implementation by hardware, the exemplary embodiments described herein may be implemented by using one or more application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), processors, controllers, micro-controllers, microprocessors, and the like. In the case of implementation by firmware or software, the embodiments of the present invention may be implemented in the form of a module, a procedure, a function, and the like that performs the functions or operations described above. A software code may be stored in the memory and executed by the processor. The memory may be positioned inside or outside the processor and may transmit and receive data to/from the processor by various well-known means. It is apparent to those skilled in the art that the present invention may be embodied in other specific forms without departing from the essential characteristics of the present invention. Accordingly, the aforementioned detailed description should not be construed as restrictive in all respects and should be considered exemplary. The scope of the present invention should be determined by rational construction of the appended claims, and all modifications within an equivalent scope of the present invention are included in the scope of the present invention. Although the method for reporting CSI in a wireless communication system of the present invention has been described with reference to an example applied to a 3GPP LTE/LTE-A system or a 5G system (New RAT system), the scheme may be applied to various wireless communication systems in addition to the 3GPP LTE/LTE-A system or the 5G system.
According to the present invention, when a plurality of reporting settings is triggered by DCI, the largest value among the slot offset values (associated with the CSI report included in each reporting setting) corresponding to the DCI is defined as the slot offset associated with the CSI report, and as a result, a UE can normally perform the CSI report. Advantages which can be obtained from the present invention are not limited to the aforementioned effects, and other unmentioned advantages will be clearly understood by those skilled in the art from the following description.
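The slot-offset rule stated above reduces to taking a maximum over the triggered settings. The sketch below illustrates that rule only; the dictionary of reporting settings and their offset values are illustrative stand-ins, not configuration taken from the specification.

```python
# Illustrative stand-in for RRC-configured CSI reporting settings.
reporting_settings = {
    "reportSetting1": {"slot_offset": 4},
    "reportSetting2": {"slot_offset": 5},
    "reportSetting3": {"slot_offset": 6},
}

def csi_slot_offset(triggered: list[str]) -> int:
    """Slot offset for a CSI report when one DCI triggers several settings:
    the largest of the per-setting slot offsets is used."""
    return max(reporting_settings[name]["slot_offset"] for name in triggered)

# A DCI that triggers settings 1 and 3 yields offset 6, the largest of {4, 6}.
print(csi_slot_offset(["reportSetting1", "reportSetting3"]))  # 6
```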
DETAILED DESCRIPTION Generally, all terms used herein are to be interpreted according to their ordinary meaning in the relevant technical field, unless a different meaning is clearly given and/or is implied from the context in which it is used. All references to a/an/the element, apparatus, component, means, step, etc. are to be interpreted openly as referring to at least one instance of the element, apparatus, component, means, step, etc., unless explicitly stated otherwise. The steps of any methods disclosed herein do not have to be performed in the exact order disclosed, unless a step is explicitly described as following or preceding another step and/or where it is implicit that a step must follow or precede another step. Any feature of any of the embodiments disclosed herein may be applied to any other embodiment, wherever appropriate. Likewise, any advantage of any of the embodiments may apply to any other embodiments, and vice versa. Other objectives, features and advantages of the enclosed embodiments will be apparent from the following description. As described above, a number of options for the basis subset selection have been discussed. According to one option, common selection is used for all the 2L beams, but only a size-K0(K0<2LM) subset of coefficients is reported (coefficients that are not reported are treated as zero). With this approach for basis subset selection, a common basis is used for all beams but only K0out of the 2LM total coefficients are reported. It is an open problem how the size-K0coefficient subset is to be indicated in the CSI report considering overhead constraints. Certain aspects of the present disclosure and its embodiments may provide solutions to these or other challenges. In certain embodiments, feedback of at least three different quantities is considered as follows: a set of reported coefficients; an indicator of how to interpret the set of reported coefficients (e.g., as a subset of a set of candidate reported coefficients); and an indicator of the payload size of the set of the reported coefficients and optionally the payload size of the above-described indicator of how to interpret the set of reported coefficients. The indicator of the payload size may be separately encoded from the set of reported coefficients. According to one example embodiment, a method performed by a wireless device for reporting CSI for a DL channel is disclosed. The wireless device transmits a CSI report for the DL channel to a network node, the CSI report comprising: a set of reported coefficients; an indication of how the network node is to interpret the set of reported coefficients; and an indication of a payload size of the set of the reported coefficients. In certain embodiments, the wireless device may receive a CSI report configuration, the CSI report configuration indicating a maximum number of non-zero coefficients that the wireless device can include in the CSI report. In certain embodiments, the wireless device may estimate the DL channel. The wireless device may determine, based on the estimated DL channel, a plurality of coefficients. The wireless device may determine that a subset of the plurality of coefficients are quantized to zero. The wireless device may omit the determined subset of coefficients from the CSI report. In certain embodiments, the CSI report may comprise a CSI Part 1 and a CSI Part 2. According to other example embodiments, a corresponding wireless device, computer program, and computer program product are also disclosed.
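The three feedback quantities listed above can be pictured as fields of one report. The following is a minimal data-structure sketch under stated assumptions: the class and field names are illustrative, and this is not the 3GPP bit-level encoding.

```python
from dataclasses import dataclass

@dataclass
class CsiReport:
    # The K1 coefficients actually reported (non-zero after quantization).
    reported_coefficients: list[complex]
    # How to interpret them, e.g. a combinatorial index identifying which
    # candidate coefficient positions the reported values belong to.
    subset_indicator: int
    # Payload-size indication, e.g. the number K1 of reported coefficients,
    # encoded separately (CSI Part 1) from the coefficients themselves
    # (CSI Part 2) so the receiver can size Part 2 before decoding it.
    payload_size_indicator: int

report = CsiReport(
    reported_coefficients=[0.5 + 0.2j, -0.1 + 0.7j],
    subset_indicator=11,
    payload_size_indicator=2,
)
```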
According to another example embodiment, a method performed by a network node for decoding CSI for a DL channel is disclosed. The network node receives a CSI report for the DL channel from a wireless device, the CSI report comprising: a set of reported coefficients; an indication of how the network node is to interpret the set of reported coefficients; and an indication of a payload size of the set of the reported coefficients. In certain embodiments, the network node may decode the indication of how the network node is to interpret the set of reported coefficients. The network node may determine, based on the indication of how the network node is to interpret the set of reported coefficients, a number of non-zero coefficients included in the set of reported coefficients. In certain embodiments, the network node may determine a payload size of the set of reported coefficients. In certain embodiments, the network node may decode the set of reported coefficients. In certain embodiments, the network node may send a CSI report configuration to the wireless device, the CSI report configuration indicating a maximum number of non-zero coefficients that the wireless device can include in the CSI report. According to other example embodiments, a corresponding network node, computer program, and computer program product are also disclosed. Some of the embodiments contemplated herein will now be described more fully with reference to the accompanying drawings. Other embodiments, however, are contained within the scope of the subject matter disclosed herein; the disclosed subject matter should not be construed as limited to only the embodiments set forth herein. Rather, these embodiments are provided by way of example to convey the scope of the subject matter to those skilled in the art. FIG.5illustrates an example wireless network, in accordance with certain embodiments. Although the subject matter described herein may be implemented in any appropriate type of system using any suitable components, the embodiments disclosed herein are described in relation to a wireless network, such as the example wireless network illustrated inFIG.5. For simplicity, the wireless network ofFIG.5only depicts network506, network nodes560and560b, and wireless devices510,510b, and510c. In practice, a wireless network may further include any additional elements suitable to support communication between wireless devices or between a wireless device and another communication device, such as a landline telephone, a service provider, or any other network node or end device. Of the illustrated components, network node560and wireless device510are depicted with additional detail. The wireless network may provide communication and other types of services to one or more wireless devices to facilitate the wireless devices' access to and/or use of the services provided by, or via, the wireless network. The wireless network may comprise and/or interface with any type of communication, telecommunication, data, cellular, and/or radio network or other similar type of system. In some embodiments, the wireless network may be configured to operate according to specific standards or other types of predefined rules or procedures.
Thus, particular embodiments of the wireless network may implement communication standards, such as Global System for Mobile Communications (GSM), Universal Mobile Telecommunications System (UMTS), Long Term Evolution (LTE), and/or other suitable 2G, 3G, 4G, or 5G standards; wireless local area network (WLAN) standards, such as the IEEE 802.11 standards; and/or any other appropriate wireless communication standard, such as the Worldwide Interoperability for Microwave Access (WiMax), Bluetooth, Z-Wave and/or ZigBee standards. Network506may comprise one or more backhaul networks, core networks, IP networks, public switched telephone networks (PSTNs), packet data networks, optical networks, wide-area networks (WANs), local area networks (LANs), wireless local area networks (WLANs), wired networks, wireless networks, metropolitan area networks, and other networks to enable communication between devices. Network node560and wireless device510comprise various components described in more detail below. These components work together in order to provide network node and/or wireless device functionality, such as providing wireless connections in a wireless network. In different embodiments, the wireless network may comprise any number of wired or wireless networks, network nodes, base stations, controllers, wireless devices, relay stations, and/or any other components or systems that may facilitate or participate in the communication of data and/or signals whether via wired or wireless connections. As used herein, network node refers to equipment capable, configured, arranged and/or operable to communicate directly or indirectly with a wireless device and/or with other network nodes or equipment in the wireless network to enable and/or provide wireless access to the wireless device and/or to perform other functions (e.g., administration) in the wireless network. Examples of network nodes include, but are not limited to, access points (APs) (e.g., radio access points), base stations (BSs) (e.g., radio base stations, Node Bs, evolved Node Bs (eNBs) and NR NodeBs (gNBs)). Base stations may be categorized based on the amount of coverage they provide (or, stated differently, their transmit power level) and may then also be referred to as femto base stations, pico base stations, micro base stations, or macro base stations. A base station may be a relay node or a relay donor node controlling a relay. A network node may also include one or more (or all) parts of a distributed radio base station such as centralized digital units and/or remote radio units (RRUs), sometimes referred to as Remote Radio Heads (RRHs). Such remote radio units may or may not be integrated with an antenna as an antenna integrated radio. Parts of a distributed radio base station may also be referred to as nodes in a distributed antenna system (DAS). Yet further examples of network nodes include multi-standard radio (MSR) equipment such as MSR BSs, network controllers such as radio network controllers (RNCs) or base station controllers (BSCs), base transceiver stations (BTSs), transmission points, transmission nodes, multi-cell/multicast coordination entities (MCEs), core network nodes (e.g., MSCs, MMEs), O&M nodes, OSS nodes, SON nodes, positioning nodes (e.g., E-SMLCs), and/or MDTs. As another example, a network node may be a virtual network node as described in more detail below. 
More generally, however, network nodes may represent any suitable device (or group of devices) capable, configured, arranged, and/or operable to enable and/or provide a wireless device with access to the wireless network or to provide some service to a wireless device that has accessed the wireless network. InFIG.5, network node560includes processing circuitry570, device readable medium580, interface590, auxiliary equipment584, power source586, power circuitry587, and antenna562. Although network node560illustrated in the example wireless network ofFIG.5may represent a device that includes the illustrated combination of hardware components, other embodiments may comprise network nodes with different combinations of components. It is to be understood that a network node comprises any suitable combination of hardware and/or software needed to perform the tasks, features, functions and methods disclosed herein. Moreover, while the components of network node560are depicted as single boxes located within a larger box, or nested within multiple boxes, in practice, a network node may comprise multiple different physical components that make up a single illustrated component (e.g., device readable medium580may comprise multiple separate hard drives as well as multiple RAM modules). Similarly, network node560may be composed of multiple physically separate components (e.g., a NodeB component and an RNC component, or a BTS component and a BSC component, etc.), which may each have their own respective components. In certain scenarios in which network node560comprises multiple separate components (e.g., BTS and BSC components), one or more of the separate components may be shared among several network nodes. For example, a single RNC may control multiple NodeBs. In such a scenario, each unique NodeB and RNC pair may in some instances be considered a single separate network node. In some embodiments, network node560may be configured to support multiple radio access technologies (RATs). In such embodiments, some components may be duplicated (e.g., separate device readable medium580for the different RATs) and some components may be reused (e.g., the same antenna562may be shared by the RATs). Network node560may also include multiple sets of the various illustrated components for different wireless technologies integrated into network node560, such as, for example, GSM, WCDMA, LTE, NR, WiFi, or Bluetooth wireless technologies. These wireless technologies may be integrated into the same or different chip or set of chips and other components within network node560. Processing circuitry570is configured to perform any determining, calculating, or similar operations (e.g., certain obtaining operations) described herein as being provided by a network node. These operations performed by processing circuitry570may include processing information obtained by processing circuitry570by, for example, converting the obtained information into other information, comparing the obtained information or converted information to information stored in the network node, and/or performing one or more operations based on the obtained information or converted information, and as a result of said processing making a determination.
Processing circuitry570may comprise a combination of one or more of a microprocessor, controller, microcontroller, central processing unit, digital signal processor, application-specific integrated circuit, field programmable gate array, or any other suitable computing device, resource, or combination of hardware, software and/or encoded logic operable to provide, either alone or in conjunction with other network node560components, such as device readable medium580, network node560functionality. For example, processing circuitry570may execute instructions stored in device readable medium580or in memory within processing circuitry570. Such functionality may include providing any of the various wireless features, functions, or benefits discussed herein. In some embodiments, processing circuitry570may include a system on a chip (SOC). In some embodiments, processing circuitry570may include one or more of radio frequency (RF) transceiver circuitry572and baseband processing circuitry574. In some embodiments, radio frequency (RF) transceiver circuitry572and baseband processing circuitry574may be on separate chips (or sets of chips), boards, or units, such as radio units and digital units. In alternative embodiments, part or all of RF transceiver circuitry572and baseband processing circuitry574may be on the same chip or set of chips, boards, or units. In certain embodiments, some or all of the functionality described herein as being provided by a network node, base station, eNB or other such network device may be performed by processing circuitry570executing instructions stored on device readable medium580or memory within processing circuitry570. In alternative embodiments, some or all of the functionality may be provided by processing circuitry570without executing instructions stored on a separate or discrete device readable medium, such as in a hard-wired manner. In any of those embodiments, whether executing instructions stored on a device readable storage medium or not, processing circuitry570can be configured to perform the described functionality. The benefits provided by such functionality are not limited to processing circuitry570alone or to other components of network node560, but are enjoyed by network node560as a whole, and/or by end users and the wireless network generally. Device readable medium580may comprise any form of volatile or non-volatile computer readable memory including, without limitation, persistent storage, solid-state memory, remotely mounted memory, magnetic media, optical media, random access memory (RAM), read-only memory (ROM), mass storage media (for example, a hard disk), removable storage media (for example, a flash drive, a Compact Disk (CD) or a Digital Video Disk (DVD)), and/or any other volatile or non-volatile, non-transitory device readable and/or computer-executable memory devices that store information, data, and/or instructions that may be used by processing circuitry570. Device readable medium580may store any suitable instructions, data or information, including a computer program, software, an application including one or more of logic, rules, code, tables, etc. and/or other instructions capable of being executed by processing circuitry570and utilized by network node560. Device readable medium580may be used to store any calculations made by processing circuitry570and/or any data received via interface590. In some embodiments, processing circuitry570and device readable medium580may be considered to be integrated.
Interface590is used in the wired or wireless communication of signalling and/or data between network node560, network506, and/or wireless devices510. As illustrated, interface590comprises port(s)/terminal(s)594to send and receive data, for example to and from network506over a wired connection. Interface590also includes radio front end circuitry592that may be coupled to, or in certain embodiments a part of, antenna562. Radio front end circuitry592comprises filters598and amplifiers596. Radio front end circuitry592may be connected to antenna562and processing circuitry570. Radio front end circuitry may be configured to condition signals communicated between antenna562and processing circuitry570. Radio front end circuitry592may receive digital data that is to be sent out to other network nodes or wireless devices via a wireless connection. Radio front end circuitry592may convert the digital data into a radio signal having the appropriate channel and bandwidth parameters using a combination of filters598and/or amplifiers596. The radio signal may then be transmitted via antenna562. Similarly, when receiving data, antenna562may collect radio signals which are then converted into digital data by radio front end circuitry592. The digital data may be passed to processing circuitry570. In other embodiments, the interface may comprise different components and/or different combinations of components. In certain alternative embodiments, network node560may not include separate radio front end circuitry592; instead, processing circuitry570may comprise radio front end circuitry and may be connected to antenna562without separate radio front end circuitry592. Similarly, in some embodiments, all or some of RF transceiver circuitry572may be considered a part of interface590. In still other embodiments, interface590may include one or more ports or terminals594, radio front end circuitry592, and RF transceiver circuitry572, as part of a radio unit (not shown), and interface590may communicate with baseband processing circuitry574, which is part of a digital unit (not shown). Antenna562may include one or more antennas, or antenna arrays, configured to send and/or receive wireless signals. Antenna562may be coupled to radio front end circuitry592and may be any type of antenna capable of transmitting and receiving data and/or signals wirelessly. In some embodiments, antenna562may comprise one or more omni-directional, sector or panel antennas operable to transmit/receive radio signals between, for example, 2 GHz and 66 GHz. An omni-directional antenna may be used to transmit/receive radio signals in any direction, a sector antenna may be used to transmit/receive radio signals from devices within a particular area, and a panel antenna may be a line of sight antenna used to transmit/receive radio signals in a relatively straight line. In some instances, the use of more than one antenna may be referred to as MIMO. In certain embodiments, antenna562may be separate from network node560and may be connectable to network node560through an interface or port. Antenna562, interface590, and/or processing circuitry570may be configured to perform any receiving operations and/or certain obtaining operations described herein as being performed by a network node. Any information, data and/or signals may be received from a wireless device, another network node and/or any other network equipment.
Similarly, antenna562, interface590, and/or processing circuitry570may be configured to perform any transmitting operations described herein as being performed by a network node. Any information, data and/or signals may be transmitted to a wireless device, another network node and/or any other network equipment. Power circuitry587may comprise, or be coupled to, power management circuitry and is configured to supply the components of network node560with power for performing the functionality described herein. Power circuitry587may receive power from power source586. Power source586and/or power circuitry587may be configured to provide power to the various components of network node560in a form suitable for the respective components (e.g., at a voltage and current level needed for each respective component). Power source586may either be included in, or external to, power circuitry587and/or network node560. For example, network node560may be connectable to an external power source (e.g., an electricity outlet) via input circuitry or an interface such as an electrical cable, whereby the external power source supplies power to power circuitry587. As a further example, power source586may comprise a source of power in the form of a battery or battery pack which is connected to, or integrated in, power circuitry587. The battery may provide backup power should the external power source fail. Other types of power sources, such as photovoltaic devices, may also be used. Alternative embodiments of network node560may include additional components beyond those shown inFIG.5that may be responsible for providing certain aspects of the network node's functionality, including any of the functionality described herein and/or any functionality necessary to support the subject matter described herein. For example, network node560may include user interface equipment to allow input of information into network node560and to allow output of information from network node560. This may allow a user to perform diagnostic, maintenance, repair, and other administrative functions for network node560. As used herein, wireless device refers to a device capable, configured, arranged and/or operable to communicate wirelessly with network nodes and/or other wireless devices. Unless otherwise noted, the term wireless device may be used interchangeably herein with user equipment (UE). Communicating wirelessly may involve transmitting and/or receiving wireless signals using electromagnetic waves, radio waves, infrared waves, and/or other types of signals suitable for conveying information through air. In some embodiments, a wireless device may be configured to transmit and/or receive information without direct human interaction. For instance, a wireless device may be designed to transmit information to a network on a predetermined schedule, when triggered by an internal or external event, or in response to requests from the network. Examples of a wireless device include, but are not limited to, a smart phone, a mobile phone, a cell phone, a voice over IP (VoIP) phone, a wireless local loop phone, a desktop computer, a personal digital assistant (PDA), a wireless camera, a gaming console or device, a music storage device, a playback appliance, a wearable terminal device, a wireless endpoint, a mobile station, a tablet, a laptop, a laptop-embedded equipment (LEE), a laptop-mounted equipment (LME), a smart device, a wireless customer-premise equipment (CPE), a vehicle-mounted wireless terminal device, etc.
A wireless device may support device-to-device (D2D) communication, for example by implementing a 3GPP standard for sidelink communication, vehicle-to-vehicle (V2V), vehicle-to-infrastructure (V2I), vehicle-to-everything (V2X) and may in this case be referred to as a D2D communication device. As yet another specific example, in an Internet of Things (IoT) scenario, a wireless device may represent a machine or other device that performs monitoring and/or measurements, and transmits the results of such monitoring and/or measurements to another wireless device and/or a network node. The wireless device may in this case be a machine-to-machine (M2M) device, which may in a 3GPP context be referred to as an MTC device. As one particular example, the wireless device may be a UE implementing the 3GPP narrow band internet of things (NB-IoT) standard. Particular examples of such machines or devices are sensors, metering devices such as power meters, industrial machinery, or home or personal appliances (e.g., refrigerators, televisions, etc.), personal wearables (e.g., watches, fitness trackers, etc.). In other scenarios, a wireless device may represent a vehicle or other equipment that is capable of monitoring and/or reporting on its operational status or other functions associated with its operation. A wireless device as described above may represent the endpoint of a wireless connection, in which case the device may be referred to as a wireless terminal. Furthermore, a wireless device as described above may be mobile, in which case it may also be referred to as a mobile device or a mobile terminal. As illustrated, wireless device510includes antenna511, interface514, processing circuitry520, device readable medium530, user interface equipment532, auxiliary equipment534, power source536and power circuitry537. Wireless device510may include multiple sets of one or more of the illustrated components for different wireless technologies supported by wireless device510, such as, for example, GSM, WCDMA, LTE, NR, WiFi, WiMAX, or Bluetooth wireless technologies, just to mention a few. These wireless technologies may be integrated into the same or different chips or set of chips as other components within wireless device510. Antenna511may include one or more antennas or antenna arrays, configured to send and/or receive wireless signals, and is connected to interface514. In certain alternative embodiments, antenna511may be separate from wireless device510and be connectable to wireless device510through an interface or port. Antenna511, interface514, and/or processing circuitry520may be configured to perform any receiving or transmitting operations described herein as being performed by a wireless device. Any information, data and/or signals may be received from a network node and/or another wireless device. In some embodiments, radio front end circuitry and/or antenna511may be considered an interface. As illustrated, interface514comprises radio front end circuitry512and antenna511. Radio front end circuitry512comprises one or more filters518and amplifiers516. Radio front end circuitry512is connected to antenna511and processing circuitry520, and is configured to condition signals communicated between antenna511and processing circuitry520. Radio front end circuitry512may be coupled to or a part of antenna511. In some embodiments, wireless device510may not include separate radio front end circuitry512; rather, processing circuitry520may comprise radio front end circuitry and may be connected to antenna511.
Similarly, in some embodiments, some or all of RF transceiver circuitry522may be considered a part of interface514. Radio front end circuitry512may receive digital data that is to be sent out to other network nodes or wireless devices via a wireless connection. Radio front end circuitry512may convert the digital data into a radio signal having the appropriate channel and bandwidth parameters using a combination of filters518and/or amplifiers516. The radio signal may then be transmitted via antenna511. Similarly, when receiving data, antenna511may collect radio signals which are then converted into digital data by radio front end circuitry512. The digital data may be passed to processing circuitry520. In other embodiments, the interface may comprise different components and/or different combinations of components. Processing circuitry520may comprise a combination of one or more of a microprocessor, controller, microcontroller, central processing unit, digital signal processor, application-specific integrated circuit, field programmable gate array, or any other suitable computing device, resource, or combination of hardware, software, and/or encoded logic operable to provide, either alone or in conjunction with other wireless device510components, such as device readable medium530, wireless device510functionality. Such functionality may include providing any of the various wireless features or benefits discussed herein. For example, processing circuitry520may execute instructions stored in device readable medium530or in memory within processing circuitry520to provide the functionality disclosed herein. As illustrated, processing circuitry520includes one or more of RF transceiver circuitry522, baseband processing circuitry524, and application processing circuitry526. In other embodiments, the processing circuitry may comprise different components and/or different combinations of components. In certain embodiments, processing circuitry520of wireless device510may comprise a SOC. In some embodiments, RF transceiver circuitry522, baseband processing circuitry524, and application processing circuitry526may be on separate chips or sets of chips. In alternative embodiments, part or all of baseband processing circuitry524and application processing circuitry526may be combined into one chip or set of chips, and RF transceiver circuitry522may be on a separate chip or set of chips. In still alternative embodiments, part or all of RF transceiver circuitry522and baseband processing circuitry524may be on the same chip or set of chips, and application processing circuitry526may be on a separate chip or set of chips. In yet other alternative embodiments, part or all of RF transceiver circuitry522, baseband processing circuitry524, and application processing circuitry526may be combined in the same chip or set of chips. In some embodiments, RF transceiver circuitry522may be a part of interface514. RF transceiver circuitry522may condition RF signals for processing circuitry520. In certain embodiments, some or all of the functionality described herein as being performed by a wireless device may be provided by processing circuitry520executing instructions stored on device readable medium530, which in certain embodiments may be a computer-readable storage medium.
In alternative embodiments, some or all of the functionality may be provided by processing circuitry520without executing instructions stored on a separate or discrete device readable storage medium, such as in a hard-wired manner. In any of those particular embodiments, whether executing instructions stored on a device readable storage medium or not, processing circuitry520can be configured to perform the described functionality. The benefits provided by such functionality are not limited to processing circuitry520alone or to other components of wireless device510, but are enjoyed by wireless device510as a whole, and/or by end users and the wireless network generally. Processing circuitry520may be configured to perform any determining, calculating, or similar operations (e.g., certain obtaining operations) described herein as being performed by a wireless device. These operations, as performed by processing circuitry520, may include processing information obtained by processing circuitry520by, for example, converting the obtained information into other information, comparing the obtained information or converted information to information stored by wireless device510, and/or performing one or more operations based on the obtained information or converted information, and as a result of said processing making a determination. Device readable medium530may be operable to store a computer program, software, an application including one or more of logic, rules, code, tables, etc. and/or other instructions capable of being executed by processing circuitry520. Device readable medium530may include computer memory (e.g., Random Access Memory (RAM) or Read Only Memory (ROM)), mass storage media (e.g., a hard disk), removable storage media (e.g., a Compact Disk (CD) or a Digital Video Disk (DVD)), and/or any other volatile or non-volatile, non-transitory device readable and/or computer executable memory devices that store information, data, and/or instructions that may be used by processing circuitry520. In some embodiments, processing circuitry520and device readable medium530may be considered to be integrated. User interface equipment532may provide components that allow for a human user to interact with wireless device510. Such interaction may be of many forms, such as visual, audial, tactile, etc. User interface equipment532may be operable to produce output to the user and to allow the user to provide input to wireless device510. The type of interaction may vary depending on the type of user interface equipment532installed in wireless device510. For example, if wireless device510is a smart phone, the interaction may be via a touch screen; if wireless device510is a smart meter, the interaction may be through a screen that provides usage (e.g., the number of gallons used) or a speaker that provides an audible alert (e.g., if smoke is detected). User interface equipment532may include input interfaces, devices and circuits, and output interfaces, devices and circuits. User interface equipment532is configured to allow input of information into wireless device510, and is connected to processing circuitry520to allow processing circuitry520to process the input information. User interface equipment532may include, for example, a microphone, a proximity or other sensor, keys/buttons, a touch display, one or more cameras, a USB port, or other input circuitry.
User interface equipment532is also configured to allow output of information from wireless device510, and to allow processing circuitry520to output information from wireless device510. User interface equipment532may include, for example, a speaker, a display, vibrating circuitry, a USB port, a headphone interface, or other output circuitry. Using one or more input and output interfaces, devices, and circuits of user interface equipment532, wireless device510may communicate with end users and/or the wireless network, and allow them to benefit from the functionality described herein. Auxiliary equipment534is operable to provide more specific functionality which may not be generally performed by wireless devices. This may comprise specialized sensors for doing measurements for various purposes, interfaces for additional types of communication such as wired communications, etc. The inclusion and type of components of auxiliary equipment534may vary depending on the embodiment and/or scenario. Power source536may, in some embodiments, be in the form of a battery or battery pack. Other types of power sources, such as an external power source (e.g., an electricity outlet), photovoltaic devices or power cells, may also be used. Wireless device510may further comprise power circuitry537for delivering power from power source536to the various parts of wireless device510which need power from power source536to carry out any functionality described or indicated herein. Power circuitry537may in certain embodiments comprise power management circuitry. Power circuitry537may additionally or alternatively be operable to receive power from an external power source, in which case wireless device510may be connectable to the external power source (such as an electricity outlet) via input circuitry or an interface such as an electrical power cable. Power circuitry537may also in certain embodiments be operable to deliver power from an external power source to power source536. This may be, for example, for the charging of power source536. Power circuitry537may perform any formatting, converting, or other modification to the power from power source536to make the power suitable for the respective components of wireless device510to which power is supplied. As described above, for CSI measurement and feedback, CSI-RS are defined. A CSI-RS is transmitted on each transmit antenna (or antenna port) and is used by wireless device510to measure the DL channel between each of the transmit antenna ports and each of its receive antenna ports. By measuring the received CSI-RS, wireless device510can estimate the channel that the CSI-RS is traversing, including the radio propagation channel and antenna gains. The CSI-RS for the above-described purpose is also referred to as Non-Zero Power CSI-RS. Wireless device510can be configured with multiple CSI reporting settings and multiple CSI-RS resource settings. Each resource setting can contain multiple resource sets, and each resource set can contain up to 8 CSI-RS resources. For each CSI reporting setting, wireless device510feeds back a CSI report. The CSI report may include, among other things, CSI parameters to be reported such as RI, PMI, CQI, and CRI (e.g., in cases where multiple CSI-RS resources are included in a resource set). As described above, a number of options for the basis subset selection have been discussed.
According to one option, common selection is used for all the 2L beams, but only a size-K0(K0<2LM) subset of coefficients is reported (coefficients that are not reported are treated as zero). With this approach for basis subset selection, a common basis is used for all beams but only K0out of the 2LM total coefficients are reported. It is an open problem how the size-K0coefficient subset is to be indicated in the CSI report considering overhead constraints. Certain aspects of the present disclosure and its embodiments may provide solutions to these or other challenges. It should be understood, however, that this is just one scenario in which the CSI reporting format described herein may be useful. The present disclosure contemplates that the various embodiments described herein may be used in other situations as well. Some of the coefficients of the size-K0coefficient subset may be quantized to zero in the amplitude domain. The present disclosure recognizes that, in such a scenario, phase information corresponding to those coefficients is redundant and could be omitted from the CSI report. Certain embodiments of the present disclosure utilize this aspect to reduce the CSI report payload (and thereby reduce the amount of resources required for transmission and/or improve the decoding reliability of the CSI report). For example, in certain embodiments some part of the CSI may be omitted. In certain embodiments, the size of the remaining CSI payload (i.e., how many parameters are omitted) may be known by the receiver (typically, a network node such as network node560, which may be a gNB) prior to decoding the CSI, so that the receiver of the CSI is not required to perform multiple blind decoding hypotheses for different candidate CSI payloads. In certain embodiments, an indication of which parameters have been omitted may be conveyed in the report, so that the receiver of the CSI can correctly interpret the remaining parameters. In certain embodiments, one or more types of feedback may be used. For example, in certain embodiments the feedback may include one or more of the following quantities: a set of reported coefficients; an indicator of how to interpret the set of reported coefficients (e.g., as a subset of a set of candidate reported coefficients); and an indicator of the payload size of the set of the reported coefficients. In certain embodiments, the feedback may also include the payload size of the indicator of how to interpret the set of reported coefficients. In certain embodiments, the indicator of the payload size may be separately encoded from the set of reported coefficients. One problem with the NR Rel-15 approach described above in the background section is that the wideband amplitude coefficients pl,i(1)are always transmitted, even if they are zero-valued. Unlike such an existing approach, in which wideband amplitude coefficients cannot be omitted (even if they are zero-valued), the present disclosure presents at least some embodiments in which the feedback (e.g., a CSI report) provided by a wireless device to a network node comprises an indicator of how to interpret the set of reported coefficients (e.g., as a subset of a set of candidate reported coefficients). This may allow the receiving network node to correctly interpret the feedback, even if zero-valued coefficients (such as zero-valued wideband coefficients) are omitted in the feedback. Wireless device510may be configured to perform a method for reporting CSI for a DL channel.
In certain embodiments, network node560sends a CSI report configuration to wireless device510. The CSI report configuration may indicate a maximum number of non-zero coefficients that wireless device510can include in a CSI report. In certain embodiments, wireless device510receives the CSI report configuration, which may indicate the maximum number of non-zero coefficients that wireless device510can include in the CSI report. Wireless device510may estimate the DL channel and determine, based on the estimated DL channel, a plurality of coefficients. In certain embodiments, the plurality of coefficients may comprise one or more of amplitude coefficients and phase coefficients. Wireless device510may determine that a subset of the plurality of coefficients are quantized to zero and omit the determined subset of coefficients from the CSI report. Wireless device510transmits the CSI report for the DL channel to network node560. The CSI report may comprise: a set of reported coefficients; an indication of how network node560is to interpret the set of reported coefficients; and an indication of a payload size of the set of the reported coefficients. The set of reported coefficients may comprise a subset K1 of the plurality of coefficients that are quantized to a non-zero value. In certain embodiments, only non-zero coefficients may be included in the CSI report. In certain embodiments, zero amplitude may not be included in a quantization range for the set of reported coefficients. The indication of how network node560is to interpret the set of reported coefficients may indicate the set of reported coefficients as a subset of a set of candidate reported coefficients. The indication of how network node560is to interpret the set of reported coefficients may comprise an indication of a number of non-zero coefficients K1 included in the CSI report. In certain embodiments, the CSI report may further comprise an indication of a payload size of the indication of how network node560is to interpret the set of reported coefficients. The indication of the payload size of the set of the reported coefficients may be encoded separately from the set of reported coefficients. In certain embodiments, the CSI report may comprise a CSI Part 1 and a CSI Part 2. The set of reported coefficients may be included in the CSI Part 2. In certain embodiments, the CSI report may further comprise an indication of a size K0 subset of non-zero coefficients. The indication of the size K0 subset of non-zero coefficients may be included in the CSI Part 2. The CSI report may indicate a plurality of precoder vectors. The precoder vectors may be expressed as linear combinations of spatial-domain vectors and frequency-domain vectors. The reported coefficients may be coefficients of the linear combinations. In certain embodiments, network node560is configured to perform a method for decoding CSI for the DL channel. Network node560receives the CSI report for the DL channel from wireless device510. As described above, the CSI report may comprise: the set of reported coefficients; the indication of how network node560is to interpret the set of reported coefficients; and an indication of the payload size of the set of the reported coefficients. It should be understood that the CSI report received by network node560may comprise any of the features described above with respect to the CSI report sent by wireless device510. 
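The wireless-device-side steps just described (estimate, quantize, drop zero-quantized coefficients, report the rest) can be sketched as follows. This is an illustration under stated assumptions only: the function name is invented, and the rounding to a fixed step is a crude illustrative quantizer, not the 3GPP amplitude/phase codebook.

```python
import numpy as np

def build_csi_feedback(coeffs: np.ndarray, k0: int, step: float = 0.25):
    """Select up to k0 coefficients, quantize, and omit those that quantize to zero."""
    # Keep the K0 strongest of the 2LM candidate coefficients.
    order = np.argsort(np.abs(coeffs))[::-1]
    kept = order[:k0]
    # Illustrative amplitude quantization; some kept coefficients may
    # quantize to zero and are then omitted from the report.
    quantized = np.round(coeffs[kept] / step) * step
    nonzero = quantized != 0
    positions = kept[nonzero]        # basis for the size-K1 subset indication
    values = quantized[nonzero]      # the K1 reported coefficients (CSI Part 2)
    k1 = int(nonzero.sum())          # payload-size indication (CSI Part 1)
    return k1, positions.tolist(), values.tolist()

rng = np.random.default_rng(0)
coeffs = rng.normal(scale=0.3, size=16)   # 2LM = 16 candidates, illustrative
print(build_csi_feedback(coeffs, k0=8))
```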
In certain embodiments, network node560may decode the indication of how network node560is to interpret the set of reported coefficients. Network node560may determine, based on the indication of how network node560is to interpret the set of reported coefficients, a number of non-zero coefficients included in the set of reported coefficients. Network node560may determine a payload size of the set of reported coefficients. Network node560may decode the set of reported coefficients. To illustrate, two example cases of coefficient quantization strategies are described in more detail below. Although the present disclosure describes certain coefficient quantization strategies, these are for purposes of example only and it should be understood that the present disclosure is not limited to the example coefficient quantization strategies described herein. The present disclosure contemplates that the various embodiments described herein may be applicable to any other suitable quantization strategy. According to a first example coefficient quantization strategy, the amplitude and phase components of {tilde over (W)}2-coefficients ci,mare the reported quantities (note that these coefficients are also denoted by {tilde over (C)}Fin some places in the background section). For instance, ci,m=pi,mφi,m, where pi,mis the amplitude coefficient and φi,mis the phase coefficient. Assume that wireless device510(e.g., a UE) is configured to only report K0out of the 2LM coefficients, with the remaining coefficients set to zero and not reported. Depending on the channel of wireless device510, some of the K0coefficients in the subset to be reported may be quantized to zero anyway. The CSI payload could be reduced if they are not reported. Assume further that only K1≤K0of the coefficients are quantized by wireless device510to a non-zero value. In the scenario described above, the coefficients for all SD-components (i.e., beams) may be considered jointly and one value of K1may be reported and applied to a plurality of beams. In another embodiment, one K1may be signaled for each beam, hence for each value of i. Generally, in the example embodiments of the first example coefficient quantization strategy described below, the indication is applied per-layer unless otherwise noted (i.e., for rank-2 PMI feedback, there is a separate set of coefficients and subset selection for each rank, and the indication is given separately for each layer). In the following example embodiments related to the first example coefficient quantization strategy, different variants implementing the above-described approach are given. According to a first example embodiment of the first coefficient quantization strategy, the maximum number of non-zero coefficients K0may be configured to wireless device510as part of the CSI report configuration. In such a scenario, the indication of the payload size of the set of the reported coefficients may be an indicator of the number of non-zero coefficients K1included in CSI Part 1, using ⌈log2(K0)⌉ bits. Additionally, a size-K1subset (where K1≤K0) of non-zero coefficients is indicated in CSI Part 2, along with the K1actual coefficients (i.e., the set of reported coefficients). In certain embodiments, the CSI report includes an indication of how the network node is to interpret the set of reported coefficients. As one example, this subset of coefficients may be indicated to network node560using a combinatorial signaling scheme that requires ⌈log2 C(2LM, K1)⌉ bits to be transmitted, where C(n, k) denotes the binomial coefficient (the number of size-k subsets of n elements).
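To make the bit counts of this first example embodiment concrete, the following worked example evaluates ⌈log2(K0)⌉ and ⌈log2 C(2LM, K1)⌉ with illustrative values of L, M, K0, and K1 (none of which come from the specification):

```python
from math import ceil, comb, log2

L, M = 4, 7                  # illustrative; 2LM = 56 candidate coefficients
two_lm = 2 * L * M
K0 = 16                      # configured maximum number of non-zero coefficients
K1 = 10                      # number actually quantized to a non-zero value

bits_k1_indicator = ceil(log2(K0))           # K1 indicator, carried in CSI Part 1
bits_subset = ceil(log2(comb(two_lm, K1)))   # combinatorial subset index, CSI Part 2
print(bits_k1_indicator, bits_subset)        # 4, 36

# Because the K1 indicator sits in CSI Part 1, which is decoded first, the
# receiver can size the CSI Part 2 payload (subset index plus the K1
# quantized coefficients) before attempting to decode CSI Part 2.
```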
Each non-zero coefficient may be quantized and represented using N bits. In this example embodiment, wireless device510may be instructed to only include up to K0coefficients in the CSI report (e.g., through configuration as part of the CSI report configuration), and to force (or assume) the other, non-reported, coefficients to be zero. In one example implementation, wireless device510would select the K0non-zero coefficients as those with the strongest amplitude after applying the amplitude quantization. However, according to the assumption in this example, there may be 2LM−K1coefficients that have been quantized to zero. In such a scenario, wireless device510then indicates, in the other quantities separately encoded in CSI Part 1, how many of the K0coefficients are actually non-zero. As this information is encoded in CSI Part 1, which is decoded prior to CSI Part 2 by network node560, network node560knows how many actual non-zero coefficients are included in CSI Part 2. This information advantageously enables network node560to determine both the payload size of the indicator of the size-K1subset and the payload size of the set of K1coefficients, both of which are encoded in CSI Part 2. Hence, based on this information, possibly along with other information in CSI Part 1, network node560knows the payload of CSI Part 2 and can decode CSI Part 2. After decoding CSI Part 2, network node560reads the indication of the size-K1subset of non-zero coefficients and hence knows which K1out of the total 2LM coefficients have been reported by wireless device510(or, equivalently, which 2LM−K1coefficients have not been reported and have been assumed to be zero). According to a second example embodiment of the first coefficient quantization strategy, a configuration of a minimum number of reported non-zero coefficients is enabled. In this second example embodiment of the first coefficient quantization strategy, the approach is as described above for the first example embodiment, but both a minimum number K2and a maximum number K0of non-zero coefficients are configured to wireless device510(e.g., as part of the CSI report configuration). In certain embodiments, an indicator of the number of non-zero coefficients K1is included in CSI Part 1, using ⌈log2(K0−K2)⌉ bits. With the approach described in this second example embodiment of the first coefficient quantization strategy, the fact that it is unlikely that wireless device510would report many coefficients to be non-zero is utilized to reduce the overhead of the payload size indicator. Therefore, a minimum number of non-zero coefficients is configured for wireless device510, so that wireless device510only has the possibility to indicate between K2and K0non-zero coefficients. Hence, a smaller number of bits ⌈log2(K0−K2)⌉ can be used. In a variant of this second example embodiment, K1may instead be constrained to belong to a set of values such that K1∈S, where S is a set of integers. In such a scenario, an indicator of the number of non-zero coefficients K1may then be represented using ⌈log2(|S|)⌉ bits, where |S| denotes the cardinality of the set. As an example of such a set, consider the case S={1, 2, 4, K0}, which would require 2 bits for signaling K1. According to a third example embodiment of the first coefficient quantization strategy, separate indication of the K0-subset and the K1-subset is enabled, with the indication of the K0-subset in CSI Part 2.
According to this third example embodiment of the first coefficient quantization strategy, the maximum number of non-zero coefficients K0may be configured for wireless device510(e.g., as part of the CSI report configuration). An indicator of the number of non-zero coefficients K1may be included in CSI Part 1, using ⌈log2(K0)⌉ bits. A size-K0subset of non-zero coefficients may be indicated in CSI Part 2, and additionally a size-K1subset of the K0coefficients may be indicated as non-zero in CSI Part 2, along with the K1actual coefficients in CSI Part 2. In certain embodiments, this can be indicated with combinatorial signaling using, for example, ⌈log2 C(2LM, K0)⌉+⌈log2 C(K0, K1)⌉ bits, where C(n, k) denotes the binomial coefficient. This indication via combinatorial signaling is an example of an indication of how the network node is to interpret the set of reported coefficients as a subset of a larger set of candidate reported coefficients. According to a fourth example embodiment of the first coefficient quantization strategy, separate indication of the K0-subset and the K1-subset is facilitated, with the indication of the K0-subset in CSI Part 1. According to this fourth example embodiment of the first coefficient quantization strategy, the maximum number of non-zero coefficients K0may be configured for wireless device510(e.g., as part of the CSI report configuration). A size-K0subset of initial non-zero coefficients may be indicated in CSI Part 1, and an indicator of the number of non-zero coefficients K1may additionally be included in CSI Part 1, using ⌈log2(K0)⌉ bits. An indicator of a size-K1subset of the K0coefficients may be reported in CSI Part 2, along with the K1actual coefficients. In this fourth example embodiment of the first coefficient quantization strategy, the indication of the size-K0subset of coefficients out of the 2LM coefficients may be reported as a separate quantity, the payload of which is constant irrespective of the selection of K1. Therefore, the indication of the size-K0subset may be reported in CSI Part 1. The selection of which size-K1subset out of the size-K0subset is actually non-zero is given separately. By this, the number of parameters that depend on wireless device510's selection of K1is minimized, which may be beneficial from a wireless device implementation viewpoint. According to a fifth example embodiment of the first coefficient quantization strategy, the amplitude quantization does not include a "zero" state. According to this fifth example embodiment, any of the four example embodiments of the first coefficient quantization strategy described above may be used and, additionally, zero amplitude is not included in the quantization range for the reported coefficients. In this fifth example embodiment of the first coefficient quantization strategy, the quantization range does not need to include a "zero" value, because only non-zero coefficients are reported. Instead, another non-zero value could be added, which would improve the quantization granularity. An example of this is given in the table below, where the value √(1/128) has replaced the value "0".
kl,i(1)    Rel-15 amplitude quantization    Potential new amplitude quantization
0          0                                √(1/128)
1          √(1/64)                          √(1/64)
2          √(1/32)                          √(1/32)
3          √(1/16)                          √(1/16)
4          √(1/8)                           √(1/8)
5          √(1/4)                           √(1/4)
6          √(1/2)                           √(1/2)
7          1                                1
According to a sixth example embodiment of the first coefficient quantization strategy, layer-dependent reporting is provided. According to this sixth example embodiment, K1is a function of the MIMO layer index. Hence, it may be that for one layer K1(l1) is used whereas for another layer K1(l2) is used. In certain embodiments, K1=K1(l1)=K1(l2)=… for all layers. According to a second coefficient quantization strategy, the {tilde over (W)}2coefficients (which are also denoted by {tilde over (C)}Fin some places in the background section) may be expressed as: ci,m=pi(1){tilde over (c)}i,m=pi(1)pi,m(2)φi,m, where pi(1)is a wideband amplitude coefficient that is reported separately from {tilde over (c)}i,m, pi,m(2)is a (differential) amplitude coefficient (relative to the wideband amplitude), and φi,mis a phase coefficient. The six example embodiments described above with respect to the first coefficient quantization strategy are generally applicable for the second coefficient quantization strategy as well. Some additional details are described below with respect to the utilization of the specific quantization structure of the second coefficient quantization strategy. In certain embodiments, the quantization range for pi(1)does not include zero; if wireless device510wishes to indicate pi(1)=0 for some beam i, wireless device510instead indicates this by signaling pi,m(2)=0, m=0, …, M−1. In certain embodiments, where a beam-specific subset indication is used, a reported value pi(1)=0 will imply that K1=0 for that beam. Hence, there may be no additional information reported for this beam. Thus, the indication of the wideband amplitude and the beam-specific K1-subset indication may be jointly encoded to conserve overhead. In certain embodiments, K1is instead constrained to belong to a set of values such that K1∈S(p1(1), p2(1), …, pL(1)), where the set is a function of the reported wideband coefficients. Hence, the wideband coefficients may be part of CSI Part 1; they then determine the set of potential values of K1and, implicitly, also the number of bits used to signal K1. In certain embodiments, the cardinality of S(p1(1), p2(1), …, pL(1)) is one; hence K1is a function of the set of wideband amplitude coefficients, or a subset thereof. Although various example embodiments have been described above, this is for purposes of example only. The present disclosure is not limited to the particular example embodiments set forth above. It should be understood that the various aspects of the above-described example embodiments may be combined in any suitable manner. FIG.6is a flowchart illustrating an example of a method600performed by a wireless device, in accordance with certain embodiments. More particularly,FIG.6illustrates a method600performed by a wireless device for reporting CSI for a DL channel.
FIG.6is a flowchart illustrating an example of a method600performed by a wireless device, in accordance with certain embodiments. More particularly,FIG.6illustrates a method600performed by a wireless device for reporting CSI for a DL channel. Method600begins at step601, where the wireless device transmits a CSI report for the DL channel to a network node, the CSI report comprising: a set of reported coefficients; an indication of how the network node is to interpret the set of reported coefficients; and an indication of a payload size of the set of the reported coefficients. In certain embodiments, the indication of how the network node is to interpret the set of reported coefficients may indicate the set of reported coefficients as a subset of a set of candidate reported coefficients. In certain embodiments, the method may further comprise: estimating the DL channel; determining, based on the estimated DL channel, a plurality of coefficients; determining that a subset of the plurality of coefficients are quantized to zero; and omitting the determined subset of coefficients from the CSI report. In certain embodiments, the CSI report may comprise a CSI Part 1 and a CSI Part 2. In certain embodiments, the CSI report may indicate a plurality of precoder vectors. The precoder vectors may be expressed as linear combinations of spatial-domain vectors and frequency-domain vectors. The reported coefficients may be coefficients of the linear combinations. In certain embodiments, the method may further comprise receiving a CSI report configuration, the CSI report configuration indicating a maximum number of non-zero coefficients that the wireless device can include in the CSI report. In certain embodiments, the CSI report may further comprise an indication of a payload size of the indication of how the network node is to interpret the set of reported coefficients. In certain embodiments, the indication of the payload size of the set of the reported coefficients may be encoded separately from the set of reported coefficients. In certain embodiments, the CSI report may comprise a CSI Part 1 and a CSI Part 2. The set of reported coefficients may be included in the CSI Part 2. In certain embodiments, the set of reported coefficients may comprise a subset K1 of the plurality of coefficients that are quantized to a non-zero value. In certain embodiments, the indication of how the network node is to interpret the set of reported coefficients may comprise an indication of a number of non-zero coefficients K1 included in the CSI report. In certain embodiments, the CSI report may further comprise an indication of a size K0 subset of non-zero coefficients. The indication of the size K0 subset of non-zero coefficients may be included in the CSI Part 2. In certain embodiments, only non-zero coefficients may be included in the CSI report. In certain embodiments, the plurality of coefficients may comprise one or more of amplitude coefficients and phase coefficients. In certain embodiments, zero amplitude may not be included in a quantization range for the set of reported coefficients. FIG.7is a flowchart illustrating an example of a method700performed by a wireless device, in accordance with certain embodiments. More particularly,FIG.7illustrates a method700performed by a wireless device for reporting CSI for a DL channel. Method700begins at step701, where the wireless device estimates a DL channel. At step702, the wireless device determines, based on the estimated DL channel, a plurality of coefficients. At step703, the wireless device determines that a subset of the plurality of coefficients are quantized to zero. At step704, the wireless device omits the determined subset of coefficients from a CSI report for the DL channel.
At step705, the wireless device transmits the CSI report for the DL channel to a network node, the CSI report comprising: a set of reported coefficients; an indication of how the network node is to interpret the set of reported coefficients; and an indication of a payload size of the set of the reported coefficients. In certain embodiments, the CSI report may comprise a CSI Part 1 and a CSI Part 2. In certain embodiments, the set of reported coefficients may be included in the CSI Part 2. In certain embodiments, the set of reported coefficients may comprise a subset K1 of the plurality of coefficients that are quantized to a non-zero value. In certain embodiments, the CSI report may further comprise an indication of a size K0 subset of non-zero coefficients. The indication of the size K0 subset of non-zero coefficients may be included in the CSI Part 2. In certain embodiments, only non-zero coefficients may be included in the CSI report. In certain embodiments, the plurality of coefficients may comprise one or more of amplitude coefficients and phase coefficients. In certain embodiments, zero amplitude may not be included in a quantization range for the set of reported coefficients. In certain embodiments, the indication of how the network node is to interpret the set of reported coefficients may indicate the set of reported coefficients as a subset of a set of candidate reported coefficients. In certain embodiments, the indication of how the network node is to interpret the set of reported coefficients may comprise an indication of a number of non-zero coefficients K1 included in the CSI report. In certain embodiments, the CSI report may further comprise an indication of a payload size of the indication of how the network node is to interpret the set of reported coefficients. In certain embodiments, the indication of the payload size of the set of the reported coefficients may be encoded separately from the set of reported coefficients. In certain embodiments, the CSI report may indicate a plurality of precoder vectors. The precoder vectors may be expressed as linear combinations of spatial-domain vectors and frequency-domain vectors. The reported coefficients may be coefficients of the linear combinations. In certain embodiments, the method may further comprise receiving a CSI report configuration, the CSI report configuration indicating a maximum number of non-zero coefficients that the wireless device can include in the CSI report.
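As a rough end-to-end illustration of steps 701 through 705, the sketch below builds such a two-part report in Python. The dictionary structure, the field names, and the strongest-coefficient selection rule are assumptions chosen for illustration, not requirements of this disclosure.

import numpy as np

def build_csi_report(coeffs: np.ndarray, K0: int):
    # Sketch of steps 702-705: determine coefficients from the channel
    # estimate, find the subset quantized to zero, omit it, and assemble a
    # two-part CSI report. Structure and names are illustrative only.
    mags = np.abs(coeffs)
    # Keep at most the K0 strongest coefficients; anything outside that set,
    # or with zero quantized amplitude, is omitted from the report (step 704).
    order = np.argsort(mags)[::-1]
    nonzero_mask = np.zeros(coeffs.size, dtype=bool)
    nonzero_mask[order[:K0]] = mags[order[:K0]] > 0
    K1 = int(nonzero_mask.sum())

    # CSI Part 1 carries the indication of how to interpret Part 2 -- here
    # the number of non-zero coefficients K1 -- which fixes Part 2's payload
    # size before Part 2 is decoded.
    part1 = {"num_nonzero": K1}
    # CSI Part 2 carries the subset indication and the K1 actual coefficients.
    part2 = {"subset_mask": nonzero_mask, "coefficients": coeffs[nonzero_mask]}
    return part1, part2

# Stand-in for steps 701-702: a fake channel estimate yielding 2LM = 56 candidates.
rng = np.random.default_rng(0)
candidates = rng.standard_normal(56) * (rng.random(56) > 0.7)
part1, part2 = build_csi_report(candidates, K0=14)   # step 705: transmit both parts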
FIG.8is a block diagram illustrating an example of a virtual apparatus, in accordance with certain embodiments. More particularly,FIG.8illustrates a schematic block diagram of an apparatus800in a wireless network (for example, the wireless network shown inFIG.5). The apparatus may be implemented in a wireless device (e.g., wireless device510shown inFIG.5). Apparatus800is operable to carry out the example methods described above with reference toFIGS.6and7and possibly any other processes or methods disclosed herein. It is also to be understood that the methods ofFIGS.6and7are not necessarily carried out solely by apparatus800. At least some operations of the method can be performed by one or more other entities. Virtual Apparatus800may comprise processing circuitry, which may include one or more microprocessors or microcontrollers, as well as other digital hardware, which may include digital signal processors (DSPs), special-purpose digital logic, and the like. The processing circuitry may be configured to execute program code stored in memory, which may include one or several types of memory such as read-only memory (ROM), random-access memory, cache memory, flash memory devices, optical storage devices, etc. Program code stored in memory includes program instructions for executing one or more telecommunications and/or data communications protocols as well as instructions for carrying out one or more of the techniques described herein, in several embodiments. In some implementations, the processing circuitry may be used to cause receiving unit802, determining unit804, communication unit806, and any other suitable units of apparatus800to perform corresponding functions according to one or more embodiments of the present disclosure. In certain embodiments, apparatus800may be a UE. As illustrated inFIG.8, apparatus800includes receiving unit802, determining unit804, and communication unit806. Receiving unit802may be configured to perform the receiving functions of apparatus800. For example, receiving unit802may be configured to receive one or more signals. As another example, receiving unit802may be configured to receive a DL channel. As another example, receiving unit802may be configured to receive a CSI report configuration. In certain embodiments, the CSI report configuration may indicate a maximum number of non-zero coefficients that the wireless device can include in the CSI report. Receiving unit802may receive any suitable information (e.g., from another wireless device or a network node). Receiving unit802may include a receiver and/or a transceiver, such as RF transceiver circuitry522described above in relation toFIG.5. Receiving unit802may include circuitry configured to receive messages and/or signals (wireless or wired). In particular embodiments, receiving unit802may communicate received messages and/or signals to determining unit804and/or any other suitable unit of apparatus800. The functions of receiving unit802may, in certain embodiments, be performed in one or more distinct units. Determining unit804may perform the processing functions of apparatus800. For example, determining unit804may be configured to generate a CSI report for a DL channel. In certain embodiments, the CSI report may comprise: a set of reported coefficients; an indication of how the network node is to interpret the set of reported coefficients; and an indication of a payload size of the set of the reported coefficients. In certain embodiments, the CSI report may comprise a CSI Part 1 and a CSI Part 2. In certain embodiments, determining unit804may be configured to include the set of reported coefficients in the CSI Part 2. In certain embodiments, the CSI report may further comprise an indication of a size K0 subset of non-zero coefficients. The indication of the size K0 subset of non-zero coefficients may be included in the CSI Part 2. In certain embodiments, determining unit804may be configured to only include non-zero coefficients in the CSI report. In certain embodiments, determining unit804may be configured to not include zero amplitude in a quantization range for the set of reported coefficients. In certain embodiments, determining unit804may be configured to estimate a DL channel.
Determining unit804may be configured to determine, based on the estimated DL channel, a plurality of coefficients. In certain embodiments, the plurality of coefficients may comprise one or more of amplitude coefficients and phase coefficients. Determining unit804may be configured to determine that a subset of the plurality of coefficients are quantized to zero. Determining unit804may be configured to omit the determined subset of coefficients from the CSI report. In certain embodiments, determining unit804may be configured to determine the set of reported coefficients. In certain embodiments, the set of reported coefficients may comprise a subset K1 of the plurality of coefficients that are quantized to a non-zero value. In certain embodiments, determining unit804may be configured to determine how the network node is to interpret the set of reported coefficients. In certain embodiments, determining unit804may be configured to determine the indication of how the network node is to interpret the set of reported coefficients. In certain embodiments, the indication of how the network node is to interpret the set of reported coefficients may indicate the set of reported coefficients as a subset of a set of candidate reported coefficients. In certain embodiments, the indication of how the network node is to interpret the set of reported coefficients may comprise an indication of a number of non-zero coefficients K1 included in the CSI report. In certain embodiments, determining unit804may be configured to determine a payload size of the indication of how the network node is to interpret the set of reported coefficients. In certain embodiments, the CSI report may further comprise an indication of a payload size of the indication of how the network node is to interpret the set of reported coefficients. In certain embodiments, determining unit804may be configured to determine the payload size of the set of the reported coefficients. In certain embodiments, determining unit804may be configured to determine an indication of the payload size of the set of the reported coefficients. In certain embodiments, determining unit804may be configured to encode the indication of the payload size of the set of reported coefficients separately from the set of reported coefficients. In certain embodiments, determining unit804may determine a plurality of precoder vectors. In certain embodiments, the CSI report may indicate a plurality of precoder vectors. The precoder vectors may be expressed as linear combinations of spatial-domain vectors and frequency-domain vectors. The reported coefficients may be coefficients of the linear combinations. As yet another example, determining unit804may be configured to provide user data. Determining unit804may include or be included in one or more processors, such as processing circuitry520described above in relation toFIG.5. Determining unit804may include analog and/or digital circuitry configured to perform any of the functions of determining unit804and/or processing circuitry520described above. The functions of determining unit804may, in certain embodiments, be performed in one or more distinct units. Communication unit806may be configured to perform the transmission functions of apparatus800. For example, communication unit806may be configured to transmit the CSI report for the DL channel to a network node. Communication unit806may transmit messages (e.g., to another wireless device and/or a network node). 
Communication unit806may include a transmitter and/or a transceiver, such as RF transceiver circuitry522described above in relation toFIG.5. Communication unit806may include circuitry configured to transmit messages and/or signals (e.g., through wireless or wired means). In particular embodiments, communication unit806may receive messages and/or signals for transmission from determining unit804or any other unit of apparatus800. The functions of communication unit806may, in certain embodiments, be performed in one or more distinct units. FIG.9is a flowchart illustrating an example of a method900performed by a network node, in accordance with certain embodiments. More particularly,FIG.9illustrates a method900performed by a network node for decoding CSI for a DL channel. Method900begins at step901, where the network node receives a CSI report for the DL channel from a wireless device, the CSI report comprising: a set of reported coefficients; an indication of how the network node is to interpret the set of reported coefficients; and an indication of a payload size of the set of the reported coefficients. In certain embodiments, the method may further comprise: decoding the indication of how the network node is to interpret the set of reported coefficients; and determining, based on the indication of how the network node is to interpret the set of reported coefficients, a number of non-zero coefficients included in the set of reported coefficients. In certain embodiments, the indication of how the network node is to interpret the set of reported coefficients may indicate the set of reported coefficients as a subset of a set of candidate reported coefficients. In certain embodiments, the CSI report may indicate a plurality of precoder vectors. The precoder vectors may be expressed as linear combinations of spatial-domain vectors and frequency-domain vectors. The reported coefficients may be coefficients of the linear combinations. In certain embodiments, the method may further comprise determining a payload size of the set of reported coefficients. In certain embodiments, the method may further comprise decoding the set of reported coefficients. In certain embodiments, the method may further comprise sending a CSI report configuration to the wireless device, the CSI report configuration indicating a maximum number of non-zero coefficients that the wireless device can include in the CSI report. In certain embodiments, the CSI report may further comprise an indication of a payload size of the indication of how the network node is to interpret the set of reported coefficients. In certain embodiments, the indication of the payload size of the set of the reported coefficients may be encoded separately from the set of reported coefficients. In certain embodiments, the CSI report may comprise a CSI Part 1 and a CSI Part 2. The set of reported coefficients may be included in the CSI Part 2. In certain embodiments, the indication of how the network node is to interpret the set of reported coefficients may comprise an indication of a number of non-zero coefficients K1 included in the CSI report. In certain embodiments, the CSI report may further comprise an indication of a size K0 subset of non-zero coefficients. The indication of the size K0 subset of non-zero coefficients may be included in the CSI Part 2. In certain embodiments, only non-zero coefficients may be included in the CSI report.
In certain embodiments, the set of reported coefficients may comprise one or more of amplitude coefficients and phase coefficients. In certain embodiments, zero amplitude may not be included in a quantization range for the set of reported coefficients. FIG.10is a flowchart illustrating an example of a method1000performed by a network node, in accordance with certain embodiments. More particularly,FIG.10illustrates a method1000performed by a network node for decoding CSI for a DL channel. Method1000begins at step1001, where the network node receives a CSI report for the DL channel from a wireless device, the CSI report comprising: a set of reported coefficients; an indication of how the network node is to interpret the set of reported coefficients; and an indication of a payload size of the set of the reported coefficients. In certain embodiments, the CSI report may comprise a CSI Part 1 and a CSI Part 2. The set of reported coefficients may be included in the CSI Part 2. In certain embodiments, only non-zero coefficients may be included in the CSI report. In certain embodiments, the set of reported coefficients may comprise one or more of amplitude coefficients and phase coefficients. In certain embodiments, zero amplitude may not be included in a quantization range for the set of reported coefficients. In certain embodiments, the indication of how the network node is to interpret the set of reported coefficients may indicate the set of reported coefficients as a subset of a set of candidate reported coefficients. In certain embodiments, the indication of how the network node is to interpret the set of reported coefficients may comprise an indication of a number of non-zero coefficients K1 included in the CSI report. In certain embodiments, the CSI report may further comprise an indication of a payload size of the indication of how the network node is to interpret the set of reported coefficients. In certain embodiments, the indication of the payload size of the set of the reported coefficients may be encoded separately from the set of reported coefficients. In certain embodiments, the CSI report may further comprise an indication of a size K0 subset of non-zero coefficients. The indication of the size K0 subset of non-zero coefficients may be included in the CSI Part 2. In certain embodiments, the CSI report may indicate a plurality of precoder vectors. The precoder vectors may be expressed as linear combinations of spatial-domain vectors and frequency-domain vectors. The reported coefficients may be coefficients of the linear combinations. In certain embodiments, the method may further comprise sending a CSI report configuration to the wireless device, the CSI report configuration indicating a maximum number of non-zero coefficients that the wireless device can include in the CSI report. At step1002, the network node decodes the indication of how the network node is to interpret the set of reported coefficients. At step1003, the network node determines, based on the indication of how the network node is to interpret the set of reported coefficients, a number of non-zero coefficients included in the set of reported coefficients. In certain embodiments, the method may further comprise determining a payload size of the set of reported coefficients. In certain embodiments, the method may further comprise decoding the set of reported coefficients.
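A small decoder-side sketch of steps 1002 and 1003 may help. The bit layout assumed below (a K1 indicator in Part 1, then a bitmap subset indication followed by fixed-width coefficient fields in Part 2) is a hypothetical layout chosen for illustration, not one mandated by this disclosure.

def decode_csi(part1_bits: str, part2_bits: str, K0: int, coeff_bits: int):
    # Sketch of steps 1002-1003: decode the interpretation indication (K1)
    # from CSI Part 1, then use it to determine the payload size of the
    # reported-coefficient field in CSI Part 2. Assumes K0 >= 2.
    # Step 1002: the K1 indicator occupies ceil(log2(K0)) bits of Part 1.
    k1_field_len = (K0 - 1).bit_length()
    K1 = int(part1_bits[:k1_field_len], 2)

    # Step 1003: K1 determines how many coefficient values follow the subset
    # indication, i.e. the payload size of the set of reported coefficients.
    subset_bits = part2_bits[:K0]                      # assumed bitmap over the K0 subset
    coeff_field = part2_bits[K0:K0 + K1 * coeff_bits]
    return K1, subset_bits, coeff_field

# Example: K0 = 14 (4 bits for K1), 6 bits per quantized coefficient.
K1, subset, coeffs = decode_csi("1001", "0" * 200, K0=14, coeff_bits=6)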
FIG.11is a block diagram illustrating an example of a virtual apparatus, in accordance with certain embodiments. More particularly,FIG.11illustrates a schematic block diagram of an apparatus1100in a wireless network (for example, the wireless network shown inFIG.5). The apparatus may be implemented in a network node (e.g., network node560shown inFIG.5). Apparatus1100is operable to carry out the example methods described above with reference toFIGS.9and10and possibly any other processes or methods disclosed herein. It is also to be understood that the methods ofFIGS.9and10are not necessarily carried out solely by apparatus1100. At least some operations of the method can be performed by one or more other entities. Virtual Apparatus1100may comprise processing circuitry, which may include one or more microprocessors or microcontrollers, as well as other digital hardware, which may include digital signal processors (DSPs), special-purpose digital logic, and the like. The processing circuitry may be configured to execute program code stored in memory, which may include one or several types of memory such as read-only memory (ROM), random-access memory, cache memory, flash memory devices, optical storage devices, etc. Program code stored in memory includes program instructions for executing one or more telecommunications and/or data communications protocols as well as instructions for carrying out one or more of the techniques described herein, in several embodiments. In some implementations, the processing circuitry may be used to cause receiving unit1102, determining unit1104, communication unit1106, and any other suitable units of apparatus1100to perform corresponding functions according to one or more embodiments of the present disclosure. In certain embodiments, apparatus1100may be a gNB. As illustrated inFIG.11, apparatus1100includes receiving unit1102, determining unit1104, and communication unit1106. Receiving unit1102may be configured to perform the receiving functions of apparatus1100. For example, receiving unit1102may be configured to receive a CSI report for a DL channel from a wireless device. In certain embodiments, the CSI report may comprise: a set of reported coefficients; an indication of how the network node is to interpret the set of reported coefficients; and an indication of a payload size of the set of the reported coefficients. In certain embodiments, the CSI report may comprise a CSI Part 1 and a CSI Part 2. In certain embodiments, the set of reported coefficients may be included in the CSI Part 2. In certain embodiments, the indication of the payload size of the set of the reported coefficients may be encoded separately from the set of reported coefficients. In certain embodiments, the CSI report may indicate a plurality of precoder vectors. The precoder vectors may be expressed as linear combinations of spatial-domain vectors and frequency-domain vectors. The reported coefficients may be coefficients of the linear combinations. In certain embodiments, the CSI report may comprise an indication of a payload size of the indication of how the network node is to interpret the set of reported coefficients. In certain embodiments, only non-zero coefficients may be included in the CSI report. In certain embodiments, the set of reported coefficients may comprise one or more of amplitude coefficients and phase coefficients. In certain embodiments, zero amplitude may not be included in a quantization range for the set of reported coefficients. Receiving unit1102may receive any suitable information (e.g., from a wireless device or another network node).
Receiving unit1102may include a receiver and/or a transceiver, such as RF transceiver circuitry572described above in relation toFIG.5. Receiving unit1102may include circuitry configured to receive messages and/or signals (wireless or wired). In particular embodiments, receiving unit1102may communicate received messages and/or signals to determining unit1104and/or any other suitable unit of apparatus1100. The functions of receiving unit1102may, in certain embodiments, be performed in one or more distinct units. Determining unit1104may perform the processing functions of apparatus1100. For example, determining unit1104may be configured to decode the indication of how the network node is to interpret the set of reported coefficients. In certain embodiments, the indication of how the network node is to interpret the set of reported coefficients may indicate the set of reported coefficients as a subset of a set of candidate reported coefficients. As another example, determining unit1104may be configured to determine, based on the indication of how the network node is to interpret the set of reported coefficients, a number of non-zero coefficients included in the set of reported coefficients. In certain embodiments, the indication of how the network node is to interpret the set of reported coefficients may comprise an indication of a number of non-zero coefficients K1 included in the CSI report. In certain embodiments, the CSI report may comprise an indication of a size K0 subset of non-zero coefficients. The indication of the size K0 subset of non-zero coefficients may be included in the CSI Part 2. As still another example, determining unit1104may be configured to determine a payload size of the set of reported coefficients. As yet another example, determining unit1104may be configured to determine a payload size of the indication of the payload size of the set of the reported coefficients. As another example, determining unit1104may be configured to decode the set of reported coefficients. As another example, determining unit1104may be configured to determine a CSI report configuration for the wireless device. In certain embodiments, determining unit1104may be configured to determine a maximum number of non-zero coefficients that the wireless device can include in the CSI report. In certain embodiments, determining unit1104may be configured to indicate the maximum number of non-zero coefficients that the wireless device can include in the CSI report in the CSI report configuration. Determining unit1104may include or be included in one or more processors, such as processing circuitry570described above in relation toFIG.5. Determining unit1104may include analog and/or digital circuitry configured to perform any of the functions of determining unit1104and/or processing circuitry570described above. The functions of determining unit1104may, in certain embodiments, be performed in one or more distinct units. Communication unit1106may be configured to perform the transmission functions of apparatus1100. For example, communication unit1106may be configured to send a CSI report configuration to the wireless device. The CSI report configuration may indicate a maximum number of non-zero coefficients that the wireless device can include in the CSI report. As another example, communication unit1106may be configured to forward the user data to a host computer or the wireless device. Communication unit1106may transmit messages (e.g., to a wireless device and/or another network node). 
Communication unit1106may include a transmitter and/or a transceiver, such as RF transceiver circuitry572described above in relation toFIG.5. Communication unit1106may include circuitry configured to transmit messages and/or signals (e.g., through wireless or wired means). In particular embodiments, communication unit1106may receive messages and/or signals for transmission from determining unit1104or any other unit of apparatus1100. The functions of communication unit1106may, in certain embodiments, be performed in one or more distinct units. The term unit may have conventional meaning in the field of electronics, electrical devices and/or electronic devices and may include, for example, electrical and/or electronic circuitry, devices, modules, processors, memories, logic, solid state and/or discrete devices, computer programs or instructions for carrying out respective tasks, procedures, computations, outputs, and/or displaying functions, and so on, such as those that are described herein. In some embodiments a computer program, computer program product or computer readable storage medium comprises instructions which when executed on a computer perform any of the embodiments disclosed herein. In further examples, the instructions are carried on a signal or carrier and are executable on a computer such that, when executed, they perform any of the embodiments disclosed herein. FIG.12illustrates an example user equipment, in accordance with certain embodiments. As used herein, a user equipment or UE may not necessarily have a user in the sense of a human user who owns and/or operates the relevant device. Instead, a UE may represent a device that is intended for sale to, or operation by, a human user but which may not, or which may not initially, be associated with a specific human user (e.g., a smart sprinkler controller). Alternatively, a UE may represent a device that is not intended for sale to, or operation by, an end user but which may be associated with or operated for the benefit of a user (e.g., a smart power meter). UE1200may be any UE identified by the 3rdGeneration Partnership Project (3GPP), including a NB-IoT UE, a machine type communication (MTC) UE, and/or an enhanced MTC (eMTC) UE. UE1200, as illustrated inFIG.12, is one example of a wireless device configured for communication in accordance with one or more communication standards promulgated by the 3rdGeneration Partnership Project (3GPP), such as 3GPP's GSM, UMTS, LTE, and/or 5G standards. As mentioned previously, the terms wireless device and UE may be used interchangeably. Accordingly, althoughFIG.12illustrates a UE, the components discussed herein are equally applicable to a wireless device, and vice-versa. InFIG.12, UE1200includes processing circuitry1201that is operatively coupled to input/output interface1205, radio frequency (RF) interface1209, network connection interface1211, memory1215including random access memory (RAM)1217, read-only memory (ROM)1219, and storage medium1221or the like, communication subsystem1231, power source1213, and/or any other component, or any combination thereof. Storage medium1221includes operating system1223, application program1225, and data1227. In other embodiments, storage medium1221may include other similar types of information. Certain UEs may utilize all of the components shown inFIG.12, or only a subset of the components. The level of integration between the components may vary from one UE to another UE.
Further, certain UEs may contain multiple instances of a component, such as multiple processors, memories, transceivers, transmitters, receivers, etc. InFIG.12, processing circuitry1201may be configured to process computer instructions and data. Processing circuitry1201may be configured to implement any sequential state machine operative to execute machine instructions stored as machine-readable computer programs in the memory, such as one or more hardware-implemented state machines (e.g., in discrete logic, FPGA, ASIC, etc.); programmable logic together with appropriate firmware; one or more stored program, general-purpose processors, such as a microprocessor or Digital Signal Processor (DSP), together with appropriate software; or any combination of the above. For example, the processing circuitry1201may include two central processing units (CPUs). Data may be information in a form suitable for use by a computer. In the depicted embodiment, input/output interface1205may be configured to provide a communication interface to an input device, output device, or input and output device. UE1200may be configured to use an output device via input/output interface1205. An output device may use the same type of interface port as an input device. For example, a USB port may be used to provide input to and output from UE1200. The output device may be a speaker, a sound card, a video card, a display, a monitor, a printer, an actuator, an emitter, a smartcard, another output device, or any combination thereof. UE1200may be configured to use an input device via input/output interface1205to allow a user to capture information into UE1200. The input device may include a touch-sensitive or presence-sensitive display, a camera (e.g., a digital camera, a digital video camera, a web camera, etc.), a microphone, a sensor, a mouse, a trackball, a directional pad, a trackpad, a scroll wheel, a smartcard, and the like. The presence-sensitive display may include a capacitive or resistive touch sensor to sense input from a user. A sensor may be, for instance, an accelerometer, a gyroscope, a tilt sensor, a force sensor, a magnetometer, an optical sensor, a proximity sensor, another like sensor, or any combination thereof. For example, the input device may be an accelerometer, a magnetometer, a digital camera, a microphone, and an optical sensor. InFIG.12, RF interface1209may be configured to provide a communication interface to RF components such as a transmitter, a receiver, and an antenna. Network connection interface1211may be configured to provide a communication interface to network1243a. Network1243amay encompass wired and/or wireless networks such as a local-area network (LAN), a wide-area network (WAN), a computer network, a wireless network, a telecommunications network, another like network or any combination thereof. For example, network1243amay comprise a Wi-Fi network. Network connection interface1211may be configured to include a receiver and a transmitter interface used to communicate with one or more other devices over a communication network according to one or more communication protocols, such as Ethernet, TCP/IP, SONET, ATM, or the like. Network connection interface1211may implement receiver and transmitter functionality appropriate to the communication network links (e.g., optical, electrical, and the like). The transmitter and receiver functions may share circuit components, software or firmware, or alternatively may be implemented separately. 
RAM1217may be configured to interface via bus1202to processing circuitry1201to provide storage or caching of data or computer instructions during the execution of software programs such as the operating system, application programs, and device drivers. ROM1219may be configured to provide computer instructions or data to processing circuitry1201. For example, ROM1219may be configured to store invariant low-level system code or data for basic system functions such as basic input and output (I/O), startup, or reception of keystrokes from a keyboard that are stored in a non-volatile memory. Storage medium1221may be configured to include memory such as RAM, ROM, programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), magnetic disks, optical disks, floppy disks, hard disks, removable cartridges, or flash drives. In one example, storage medium1221may be configured to include operating system1223, application program1225such as a web browser application, a widget or gadget engine or another application, and data file1227. Storage medium1221may store, for use by UE1200, any of a variety of various operating systems or combinations of operating systems. Storage medium1221may be configured to include a number of physical drive units, such as redundant array of independent disks (RAID), floppy disk drive, flash memory, USB flash drive, external hard disk drive, thumb drive, pen drive, key drive, high-density digital versatile disc (HD-DVD) optical disc drive, internal hard disk drive, Blu-Ray optical disc drive, holographic digital data storage (HDDS) optical disc drive, external mini-dual in-line memory module (DIMM), synchronous dynamic random access memory (SDRAM), external micro-DIMM SDRAM, smartcard memory such as a subscriber identity module or a removable user identity (SIM/RUIM) module, other memory, or any combination thereof. Storage medium1221may allow UE1200to access computer-executable instructions, application programs or the like, stored on transitory or non-transitory memory media, to off-load data, or to upload data. An article of manufacture, such as one utilizing a communication system may be tangibly embodied in storage medium1221, which may comprise a device readable medium. InFIG.12, processing circuitry1201may be configured to communicate with network1243busing communication subsystem1231. Network1243aand network1243bmay be the same network or networks or different network or networks. Communication subsystem1231may be configured to include one or more transceivers used to communicate with network1243b. For example, communication subsystem1231may be configured to include one or more transceivers used to communicate with one or more remote transceivers of another device capable of wireless communication such as another wireless device, UE, or base station of a radio access network (RAN) according to one or more communication protocols, such as IEEE 802.11, CDMA, WCDMA, GSM, LTE, UTRAN, WiMax, or the like. Each transceiver may include transmitter1233and/or receiver1235to implement transmitter or receiver functionality, respectively, appropriate to the RAN links (e.g., frequency allocations and the like). Further, transmitter1233and receiver1235of each transceiver may share circuit components, software or firmware, or alternatively may be implemented separately. 
In the illustrated embodiment, the communication functions of communication subsystem1231may include data communication, voice communication, multimedia communication, short-range communications such as Bluetooth, near-field communication, location-based communication such as the use of the global positioning system (GPS) to determine a location, another like communication function, or any combination thereof. For example, communication subsystem1231may include cellular communication, Wi-Fi communication, Bluetooth communication, and GPS communication. Network1243bmay encompass wired and/or wireless networks such as a local-area network (LAN), a wide-area network (WAN), a computer network, a wireless network, a telecommunications network, another like network or any combination thereof. For example, network1243bmay be a cellular network, a Wi-Fi network, and/or a near-field network. Power source1213may be configured to provide alternating current (AC) or direct current (DC) power to components of UE1200. The features, benefits and/or functions described herein may be implemented in one of the components of UE1200or partitioned across multiple components of UE1200. Further, the features, benefits, and/or functions described herein may be implemented in any combination of hardware, software or firmware. In one example, communication subsystem1231may be configured to include any of the components described herein. Further, processing circuitry1201may be configured to communicate with any of such components over bus1202. In another example, any of such components may be represented by program instructions stored in memory that when executed by processing circuitry1201perform the corresponding functions described herein. In another example, the functionality of any of such components may be partitioned between processing circuitry1201and communication subsystem1231. In another example, the non-computationally intensive functions of any of such components may be implemented in software or firmware and the computationally intensive functions may be implemented in hardware. FIG.13illustrates an example virtualization environment, in accordance with certain embodiments. More particularly,FIG.13is a schematic block diagram illustrating a virtualization environment1300in which functions implemented by some embodiments may be virtualized. In the present context, virtualizing means creating virtual versions of apparatuses or devices which may include virtualizing hardware platforms, storage devices and networking resources. As used herein, virtualization can be applied to a node (e.g., a virtualized base station or a virtualized radio access node) or to a device (e.g., a UE, a wireless device or any other type of communication device) or components thereof and relates to an implementation in which at least a portion of the functionality is implemented as one or more virtual components (e.g., via one or more applications, components, functions, virtual machines or containers executing on one or more physical processing nodes in one or more networks). In some embodiments, some or all of the functions described herein may be implemented as virtual components executed by one or more virtual machines implemented in one or more virtual environments1300hosted by one or more of hardware nodes1330. Further, in embodiments in which the virtual node is not a radio access node or does not require radio connectivity (e.g., a core network node), then the network node may be entirely virtualized. 
The functions may be implemented by one or more applications1320(which may alternatively be called software instances, virtual appliances, network functions, virtual nodes, virtual network functions, etc.) operative to implement some of the features, functions, and/or benefits of some of the embodiments disclosed herein. Applications1320are run in virtualization environment1300which provides hardware1330comprising processing circuitry1360and memory1390. Memory1390contains instructions1395executable by processing circuitry1360whereby application1320is operative to provide one or more of the features, benefits, and/or functions disclosed herein. Virtualization environment1300comprises general-purpose or special-purpose network hardware devices1330comprising a set of one or more processors or processing circuitry1360, which may be commercial off-the-shelf (COTS) processors, dedicated Application Specific Integrated Circuits (ASICs), or any other type of processing circuitry including digital or analog hardware components or special purpose processors. Each hardware device may comprise memory1390-1which may be non-persistent memory for temporarily storing instructions1395or software executed by processing circuitry1360. Each hardware device may comprise one or more network interface controllers (NICs)1370, also known as network interface cards, which include physical network interface1380. Each hardware device may also include non-transitory, persistent, machine-readable storage media1390-2having stored therein software1395and/or instructions executable by processing circuitry1360. Software1395may include any type of software including software for instantiating one or more virtualization layers1350(also referred to as hypervisors), software to execute virtual machines1340as well as software allowing it to execute functions, features and/or benefits described in relation with some embodiments described herein. Virtual machines1340comprise virtual processing, virtual memory, virtual networking or interface and virtual storage, and may be run by a corresponding virtualization layer1350or hypervisor. Different embodiments of the instance of virtual appliance1320may be implemented on one or more of virtual machines1340, and the implementations may be made in different ways. During operation, processing circuitry1360executes software1395to instantiate the hypervisor or virtualization layer1350, which may sometimes be referred to as a virtual machine monitor (VMM). Virtualization layer1350may present a virtual operating platform that appears like networking hardware to virtual machine1340. As shown inFIG.13, hardware1330may be a standalone network node with generic or specific components. Hardware1330may comprise antenna13225and may implement some functions via virtualization. Alternatively, hardware1330may be part of a larger cluster of hardware (e.g., such as in a data center or customer premise equipment (CPE)) where many hardware nodes work together and are managed via management and orchestration (MANO)13100, which, among others, oversees lifecycle management of applications1320. Virtualization of the hardware is in some contexts referred to as network function virtualization (NFV). NFV may be used to consolidate many network equipment types onto industry standard high volume server hardware, physical switches, and physical storage, which can be located in data centers, and customer premise equipment.
In the context of NFV, virtual machine1340may be a software implementation of a physical machine that runs programs as if they were executing on a physical, non-virtualized machine. Each of virtual machines1340, and that part of hardware1330that executes that virtual machine, be it hardware dedicated to that virtual machine and/or hardware shared by that virtual machine with others of the virtual machines1340, forms a separate virtual network element (VNE). Still in the context of NFV, a Virtual Network Function (VNF) is responsible for handling specific network functions that run in one or more virtual machines1340on top of hardware networking infrastructure1330and corresponds to application1320inFIG.13. In some embodiments, one or more radio units13200that each include one or more transmitters13220and one or more receivers13210may be coupled to one or more antennas13225. Radio units13200may communicate directly with hardware nodes1330via one or more appropriate network interfaces and may be used in combination with the virtual components to provide a virtual node with radio capabilities, such as a radio access node or a base station. In some embodiments, some signalling can be effected with the use of control system13230which may alternatively be used for communication between the hardware nodes1330and radio units13200. FIG.14illustrates an example telecommunication network connected via an intermediate network to a host computer, in accordance with certain embodiments. With reference toFIG.14, in accordance with an embodiment, a communication system includes telecommunication network1410, such as a 3GPP-type cellular network, which comprises access network1411, such as a radio access network, and core network1414. Access network1411comprises a plurality of base stations1412a,1412b,1412c, such as NBs, eNBs, gNBs or other types of wireless access points, each defining a corresponding coverage area1413a,1413b,1413c. Each base station1412a,1412b,1412cis connectable to core network1414over a wired or wireless connection1415. A first UE1491located in coverage area1413cis configured to wirelessly connect to, or be paged by, the corresponding base station1412c. A second UE1492in coverage area1413ais wirelessly connectable to the corresponding base station1412a. While a plurality of UEs1491,1492are illustrated in this example, the disclosed embodiments are equally applicable to a situation where a sole UE is in the coverage area or where a sole UE is connecting to the corresponding base station1412. Telecommunication network1410is itself connected to host computer1430, which may be embodied in the hardware and/or software of a standalone server, a cloud-implemented server, a distributed server or as processing resources in a server farm. Host computer1430may be under the ownership or control of a service provider, or may be operated by the service provider or on behalf of the service provider. Connections1421and1422between telecommunication network1410and host computer1430may extend directly from core network1414to host computer1430or may go via an optional intermediate network1420. Intermediate network1420may be one of, or a combination of more than one of, a public, private or hosted network; intermediate network1420, if any, may be a backbone network or the Internet; in particular, intermediate network1420may comprise two or more sub-networks (not shown). The communication system ofFIG.14as a whole enables connectivity between the connected UEs1491,1492and host computer1430.
The connectivity may be described as an over-the-top (OTT) connection1450. Host computer1430and the connected UEs1491,1492are configured to communicate data and/or signaling via OTT connection1450, using access network1411, core network1414, any intermediate network1420and possible further infrastructure (not shown) as intermediaries. OTT connection1450may be transparent in the sense that the participating communication devices through which OTT connection1450passes are unaware of routing of UL and DL communications. For example, base station1412may not or need not be informed about the past routing of an incoming DL communication with data originating from host computer1430to be forwarded (e.g., handed over) to a connected UE1491. Similarly, base station1412need not be aware of the future routing of an outgoing UL communication originating from the UE1491towards the host computer1430. FIG.15illustrates an example host computer communicating via a base station with a user equipment over a partially wireless connection, in accordance with certain embodiments. Example implementations, in accordance with an embodiment, of the UE, base station and host computer discussed in the preceding paragraphs will now be described with reference toFIG.15. In communication system1500, host computer1510comprises hardware1515including communication interface1516configured to set up and maintain a wired or wireless connection with an interface of a different communication device of communication system1500. Host computer1510further comprises processing circuitry1518, which may have storage and/or processing capabilities. In particular, processing circuitry1518may comprise one or more programmable processors, application-specific integrated circuits, field programmable gate arrays or combinations of these (not shown) adapted to execute instructions. Host computer1510further comprises software1511, which is stored in or accessible by host computer1510and executable by processing circuitry1518. Software1511includes host application1512. Host application1512may be operable to provide a service to a remote user, such as UE1530connecting via OTT connection1550terminating at UE1530and host computer1510. In providing the service to the remote user, host application1512may provide user data which is transmitted using OTT connection1550. Communication system1500further includes base station1520provided in a telecommunication system and comprising hardware1525enabling it to communicate with host computer1510and with UE1530. Hardware1525may include communication interface1526for setting up and maintaining a wired or wireless connection with an interface of a different communication device of communication system1500, as well as radio interface1527for setting up and maintaining at least wireless connection1570with UE1530located in a coverage area (not shown inFIG.15) served by base station1520. Communication interface1526may be configured to facilitate connection1560to host computer1510. Connection1560may be direct or it may pass through a core network (not shown inFIG.15) of the telecommunication system and/or through one or more intermediate networks outside the telecommunication system. In the embodiment shown, hardware1525of base station1520further includes processing circuitry1528, which may comprise one or more programmable processors, application-specific integrated circuits, field programmable gate arrays or combinations of these (not shown) adapted to execute instructions. 
Base station1520further has software1521stored internally or accessible via an external connection. Communication system1500further includes UE1530already referred to. Its hardware1535may include radio interface1537configured to set up and maintain wireless connection1570with a base station serving a coverage area in which UE1530is currently located. Hardware1535of UE1530further includes processing circuitry1538, which may comprise one or more programmable processors, application-specific integrated circuits, field programmable gate arrays or combinations of these (not shown) adapted to execute instructions. UE1530further comprises software1531, which is stored in or accessible by UE1530and executable by processing circuitry1538. Software1531includes client application1532. Client application1532may be operable to provide a service to a human or non-human user via UE1530, with the support of host computer1510. In host computer1510, an executing host application1512may communicate with the executing client application1532via OTT connection1550terminating at UE1530and host computer1510. In providing the service to the user, client application1532may receive request data from host application1512and provide user data in response to the request data. OTT connection1550may transfer both the request data and the user data. Client application1532may interact with the user to generate the user data that it provides. It is noted that host computer1510, base station1520and UE1530illustrated inFIG.15may be similar or identical to host computer1430, one of base stations1412a,1412b,1412cand one of UEs1491,1492ofFIG.14, respectively. This is to say, the inner workings of these entities may be as shown inFIG.15and independently, the surrounding network topology may be that ofFIG.14. InFIG.15, OTT connection1550has been drawn abstractly to illustrate the communication between host computer1510and UE1530via base station1520, without explicit reference to any intermediary devices and the precise routing of messages via these devices. Network infrastructure may determine the routing, which it may be configured to hide from UE1530or from the service provider operating host computer1510, or both. While OTT connection1550is active, the network infrastructure may further take decisions by which it dynamically changes the routing (e.g., on the basis of load balancing consideration or reconfiguration of the network). Wireless connection1570between UE1530and base station1520is in accordance with the teachings of the embodiments described throughout this disclosure. One or more of the various embodiments improve the performance of OTT services provided to UE1530using OTT connection1550, in which wireless connection1570forms the last segment. More precisely, the teachings of these embodiments may reduce the signaling overhead. A measurement procedure may be provided for the purpose of monitoring data rate, latency and other factors on which the one or more embodiments improve. There may further be an optional network functionality for reconfiguring OTT connection1550between host computer1510and UE1530, in response to variations in the measurement results. The measurement procedure and/or the network functionality for reconfiguring OTT connection1550may be implemented in software1511and hardware1515of host computer1510or in software1531and hardware1535of UE1530, or both.
In embodiments, sensors (not shown) may be deployed in or in association with communication devices through which OTT connection1550passes; the sensors may participate in the measurement procedure by supplying values of the monitored quantities exemplified above, or supplying values of other physical quantities from which software1511,1531may compute or estimate the monitored quantities. The reconfiguring of OTT connection1550may include changes to message format, retransmission settings, preferred routing, etc.; the reconfiguring need not affect base station1520, and it may be unknown or imperceptible to base station1520. Such procedures and functionalities may be known and practiced in the art. In certain embodiments, measurements may involve proprietary UE signaling facilitating host computer1510's measurements of throughput, propagation times, latency and the like. The measurements may be implemented in that software1511and1531causes messages to be transmitted, in particular empty or ‘dummy’ messages, using OTT connection1550while it monitors propagation times, errors etc. FIG.16is a flowchart illustrating an example method implemented in a communication system, in accordance with certain embodiments. More particularly,FIG.16illustrates an example method implemented in a communication system including a host computer, a base station and a user equipment. The communication system includes a host computer, a base station and a UE which may be those described with reference toFIGS.14and15. For simplicity of the present disclosure, only drawing references toFIG.16will be included in this section. In step1610, the host computer provides user data. In substep1611(which may be optional) of step1610, the host computer provides the user data by executing a host application. In step1620, the host computer initiates a transmission carrying the user data to the UE. In step1630(which may be optional), the base station transmits to the UE the user data which was carried in the transmission that the host computer initiated, in accordance with the teachings of the embodiments described throughout this disclosure. In step1640(which may also be optional), the UE executes a client application associated with the host application executed by the host computer. FIG.17is a flowchart illustrating a second example method implemented in a communication system, in accordance with certain embodiments. More particularly,FIG.17illustrates an example method implemented in a communication system including a host computer, a base station and a user equipment. The communication system includes a host computer, a base station and a UE which may be those described with reference toFIGS.14and15. For simplicity of the present disclosure, only drawing references toFIG.17will be included in this section. In step1710of the method, the host computer provides user data. In an optional substep (not shown) the host computer provides the user data by executing a host application. In step1720, the host computer initiates a transmission carrying the user data to the UE. The transmission may pass via the base station, in accordance with the teachings of the embodiments described throughout this disclosure. In step1730(which may be optional), the UE receives the user data carried in the transmission. FIG.18is a flowchart illustrating a third method implemented in a communication system, in accordance with certain embodiments.
More particularly,FIG.18illustrates an example method implemented in a communication system including a host computer, a base station and a user equipment. The communication system includes a host computer, a base station and a UE which may be those described with reference toFIGS.14and15. For simplicity of the present disclosure, only drawing references toFIG.18will be included in this section. In step1810(which may be optional), the UE receives input data provided by the host computer. Additionally or alternatively, in step1820, the UE provides user data. In substep1821(which may be optional) of step1820, the UE provides the user data by executing a client application. In substep1811(which may be optional) of step1810, the UE executes a client application which provides the user data in reaction to the received input data provided by the host computer. In providing the user data, the executed client application may further consider user input received from the user. Regardless of the specific manner in which the user data was provided, the UE initiates, in substep1830(which may be optional), transmission of the user data to the host computer. In step1840of the method, the host computer receives the user data transmitted from the UE, in accordance with the teachings of the embodiments described throughout this disclosure. FIG.19is a flowchart illustrating a fourth method implemented in a communication system, in accordance with certain embodiments. More particularly,FIG.19illustrates an example method implemented in a communication system including a host computer, a base station and a user equipment. The communication system includes a host computer, a base station and a UE which may be those described with reference toFIGS.14and15. For simplicity of the present disclosure, only drawing references toFIG.19will be included in this section. In step1910(which may be optional), in accordance with the teachings of the embodiments described throughout this disclosure, the base station receives user data from the UE. In step1920(which may be optional), the base station initiates transmission of the received user data to the host computer. In step1930(which may be optional), the host computer receives the user data carried in the transmission initiated by the base station.
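As a concrete, simplified illustration of the ‘dummy’ message measurement procedure described above, in which software1511,1531causes empty probe messages to be sent over OTT connection1550while propagation times are monitored, the following minimal sketch measures round-trip times over a plain TCP connection. The echo behavior of the peer, the host, and the port are illustrative assumptions, not part of the described embodiments.

```python
# Minimal sketch of a 'dummy message' propagation-time probe. Assumes the
# peer echoes one byte back for every probe byte sent; all names here are
# illustrative and not part of the described embodiments.
import socket
import time

def probe_rtt(host: str, port: int, num_probes: int = 5) -> list:
    """Send one-byte dummy probes and return observed round-trip times (s)."""
    rtts = []
    with socket.create_connection((host, port), timeout=2.0) as sock:
        for _ in range(num_probes):
            start = time.monotonic()
            sock.sendall(b"\x00")   # dummy payload carrying no user data
            sock.recv(1)            # assumed echo from the peer
            rtts.append(time.monotonic() - start)
    return rtts

# A reconfiguration decision could then compare, e.g., max(rtts) against a
# latency target for the OTT connection.
```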
119,403
11863272
DETAILED DESCRIPTION

A wireless communications system may support communication between a base station and a user equipment (UE). Specifically, the wireless communications system may support downlink transmissions from the base station to the UE and uplink transmissions from the UE to a base station. Downlink transmissions may include data, control signals, and reference signals. Different reference signal waveforms may be multiplexed over a set of frequency resources (i.e., using frequency division multiplexing (FDM) and/or time division multiplexing (TDM)) for a given uplink transmission on an antenna. For example, a base station may identify respective single-carrier reference signal streams to be transmitted to a UE, and these streams may be precoded for the transmission.

Transmission schemes utilized in a wireless communications system may be classified into open-loop, semi-open-loop, and closed-loop schemes. For example, if there is no precoding matrix indicator (PMI) feedback, a transmission scheme may be considered open-loop, whereas if there is at least partial PMI feedback, the scheme may be considered semi-open-loop. In open-loop and semi-open-loop transmissions, data and reference signals may or may not be restricted to be transmitted with the same precoding matrix. Further, different reporting schemes may be used for open-loop PMI reporting, including full PMI reporting, partial PMI reporting, and no PMI reporting. A method may thus provide for indicating whether channel quality information (CQI) is based on a closed-loop or open-loop transmission scheme, and for performing CQI computations using consistent assumptions based on the type of PMI reporting.

As described herein, a UE may select a transmission scheme, and may report an indication of a time offset between multiple virtual antennas and a precoder cycling granularity value that may be associated with the transmission scheme. The selected transmission scheme may be a closed-loop scheme or an open-loop (or semi-open-loop) scheme, where open-loop schemes may further include small cyclic delay diversity (SCDD) and resource block group (RBG) level precoder cycling schemes, or a combination thereof. Then, using the selected transmission scheme, the UE may compute an associated CQI. The CQI may further be derived based on a PMI matrix, depending on whether the PMI reporting technique is a full PMI reporting technique, a partial PMI reporting technique, or a no PMI reporting technique. The UE may indicate the calculated CQI and the respective time offset value (e.g., τ) and precoder cycling granularity value (e.g., M) associated with the transmission scheme to a base station in, for example, a channel state information (CSI) report. The base station may adjust its transmission scheme and perform link adaptation accordingly. In some cases, joint coding may be used for coding of the time offset value and the precoder cycling granularity value in the CSI reporting.

Aspects of the disclosure are initially described in the context of a wireless communications system. These and other features are further illustrated by and described with reference to various block diagrams, transmission schemes, and process flows. Aspects of the disclosure are further illustrated by and described with reference to apparatus diagrams, system diagrams, and flowcharts that relate to CSI feedback for semi-open-loop and open-loop schemes.
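As a hedged sketch of the joint coding mentioned above, the following packs index values for τ, M, and a rank indicator (RI) into a single bit field before channel encoding. The field widths and index values are illustrative assumptions, not values defined by this disclosure.

```python
# Illustrative joint coding of tau, M, and RI indices into one bit field.
# The widths A=2, B=2, C=2 are assumptions for illustration only.

def jointly_code(tau_idx: int, m_idx: int, ri_idx: int,
                 a_bits: int = 2, b_bits: int = 2, c_bits: int = 2) -> int:
    """Pack tau, M, and RI indices into a single (A+B+C)-bit codeword."""
    assert tau_idx < (1 << a_bits)
    assert m_idx < (1 << b_bits)
    assert ri_idx < (1 << c_bits)
    return (tau_idx << (b_bits + c_bits)) | (m_idx << c_bits) | ri_idx

# Example: tau index 1, M index 2, RI index 0 -> 0b011000, a 6-bit input
# to the channel encoder.
codeword = jointly_code(1, 2, 0)
assert codeword == 0b011000
```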
FIG.1illustrates an example of a wireless communications system100in accordance with aspects of the present disclosure. The wireless communications system100includes base stations105, UEs115, and a core network130. In some examples, the wireless communications system100may be a Long Term Evolution (LTE) network, an LTE-Advanced (LTE-A) network, or a New Radio (NR) network. In some cases, wireless communications system100may support enhanced broadband communications, ultra-reliable (i.e., mission critical) communications, low latency communications, and communications with low-cost and low-complexity devices. Wireless communications system100may enable or support methods for indicating whether CQI is based on a closed-loop or open-loop transmission scheme, and for performing CQI computations using assumptions according to a corresponding scheme for PMI reporting.

Base stations105may wirelessly communicate with UEs115via one or more base station antennas. Each base station105may provide communication coverage for a respective geographic coverage area110. Communication links125shown in wireless communications system100may include uplink transmissions from a UE115to a base station105, or downlink transmissions from a base station105to a UE115. Control information and data may be multiplexed on an uplink or downlink channel according to various techniques. Control information and data may be multiplexed on a downlink channel, for example, using TDM techniques, FDM techniques, or hybrid TDM-FDM techniques. In some examples, the control information transmitted during a transmission time interval (TTI) of a downlink channel may be distributed between different control regions in a cascaded manner (e.g., between a common control region and one or more UE-specific control regions).

UEs115may be dispersed throughout the wireless communications system100, and each UE115may be stationary or mobile. A UE115may also be referred to as a mobile station, a subscriber station, a mobile unit, a subscriber unit, a wireless unit, a remote unit, a mobile device, a wireless device, a wireless communications device, a remote device, a mobile subscriber station, an access terminal, a mobile terminal, a wireless terminal, a remote terminal, a handset, a user agent, a mobile client, a client, or some other suitable terminology. A UE115may also be a cellular phone, a personal digital assistant (PDA), a wireless modem, a wireless communications device, a handheld device, a tablet computer, a laptop computer, a cordless phone, a personal electronic device, a handheld device, a personal computer, a wireless local loop (WLL) station, an Internet of Things (IoT) device, an Internet of Everything (IoE) device, a machine type communication (MTC) device, an appliance, an automobile, or the like.

In some cases, a UE115may also be able to communicate directly with other UEs115(e.g., using a peer-to-peer (P2P) or device-to-device (D2D) protocol). One or more of a group of UEs115utilizing D2D communications may be within the coverage area110of a cell. Other UEs115in such a group may be outside the coverage area110of a cell, or otherwise unable to receive transmissions from a base station105. In some cases, groups of UEs115communicating via D2D communications may utilize a one-to-many (1:M) system in which each UE115transmits to every other UE115in the group. In some cases, a base station105facilitates the scheduling of resources for D2D communications. In other cases, D2D communications are carried out independently of a base station105.
Some UEs115, such as MTC or IoT devices, may be low cost or low complexity devices, and may provide for automated communication between machines, i.e., Machine-to-Machine (M2M) communication. M2M or MTC may refer to data communication technologies that allow devices to communicate with one another or a base station without human intervention. For example, M2M or MTC may refer to communications from devices that integrate sensors or meters to measure or capture information and relay that information to a central server or application program that can make use of the information or present the information to humans interacting with the program or application. Some UEs115may be designed to collect information or enable automated behavior of machines. Examples of applications for MTC devices include smart metering, inventory monitoring, water level monitoring, equipment monitoring, healthcare monitoring, wildlife monitoring, weather and geological event monitoring, fleet management and tracking, remote security sensing, physical access control, and transaction-based business charging. In some cases, an MTC device may operate using half-duplex (one-way) communications at a reduced peak rate. MTC devices may also be configured to enter a power saving “deep sleep” mode when not engaging in active communications. In some cases, MTC or IoT devices may be designed to support mission critical functions, and the wireless communications system may be configured to provide ultra-reliable communications for these functions.

Base stations105may communicate with the core network130and with one another. For example, base stations105may interface with the core network130through backhaul links132(e.g., S1, S2, etc.). Base stations105may communicate with one another over backhaul links134(e.g., X1, X2, etc.) either directly or indirectly (e.g., through core network130). Base stations105may perform radio configuration and scheduling for communication with UEs115, or may operate under the control of a base station controller (not shown). In some examples, base stations105may be macro cells, small cells, hot spots, or the like. Base stations105may also be referred to as evolved NodeBs (eNBs)105.

A base station105may be connected by an S1 interface to the core network130. The core network may be an evolved packet core (EPC), which may include at least one mobility management entity (MME), at least one serving gateway (S-GW), and at least one Packet Data Network (PDN) gateway (P-GW). The MME may be the control node that processes the signaling between the UE115and the EPC. All user Internet Protocol (IP) packets may be transferred through the S-GW, which itself may be connected to the P-GW. The P-GW may provide IP address allocation as well as other functions. The P-GW may be connected to the network operator's IP services. The operator's IP services may include the Internet, the Intranet, an IP Multimedia Subsystem (IMS), and a Packet-Switched (PS) Streaming Service. The core network130may provide user authentication, access authorization, tracking, IP connectivity, and other access, routing, or mobility functions.

At least some of the network devices, such as base station105, may include subcomponents such as an access network entity, which may be an example of an access node controller (ANC). Each access network entity may communicate with a number of UEs115through a number of other access network transmission entities, each of which may be an example of a smart radio head, or a transmission/reception point (TRP).
In some configurations, various functions of each access network entity or base station105may be distributed across various network devices (e.g., radio heads and access network controllers) or consolidated into a single network device (e.g., a base station105).

Wireless communications system100may operate in an ultra-high frequency (UHF) frequency region using frequency bands from 700 MHz to 2600 MHz (2.6 GHz), although some networks (e.g., a wireless local area network (WLAN)) may use frequencies as high as 5 GHz. This region may also be known as the decimeter band, since the wavelengths range from approximately one decimeter to one meter in length. UHF waves may propagate mainly by line of sight, and may be blocked by buildings and environmental features. However, the waves may penetrate walls sufficiently to provide service to UEs115located indoors. Transmission of UHF waves is characterized by smaller antennas and shorter range (e.g., less than 100 km) compared to transmission using the lower frequencies (and longer waves) of the high frequency (HF) or very high frequency (VHF) portion of the spectrum.

In some cases, wireless communications system100may also utilize extremely high frequency (EHF) portions of the spectrum (e.g., from 25 GHz to 300 GHz). This region may also be known as the millimeter band, since the wavelengths range from approximately one millimeter to one centimeter in length. Thus, EHF antennas may be even smaller and more closely spaced than UHF antennas. In some cases, this may facilitate use of antenna arrays within a UE115(e.g., for directional beamforming). However, EHF transmissions may be subject to even greater atmospheric attenuation and shorter range than UHF transmissions. Thus, wireless communications system100may support millimeter wave (mmW) communications between UEs115and base stations105. Devices operating in mmW or EHF bands may have multiple antennas to allow beamforming. That is, a base station105may use multiple antennas or antenna arrays to conduct beamforming operations for directional communications with a UE115.

Beamforming (which may also be referred to as spatial filtering or directional transmission) is a signal processing technique that may be used at a transmitter (e.g., a base station105) to shape and/or steer an overall antenna beam in the direction of a target receiver (e.g., a UE115). This may be achieved by combining elements in an antenna array in such a way that transmitted signals at particular angles experience constructive interference while others experience destructive interference.

Multiple-input multiple-output (MIMO) wireless systems use a transmission scheme between a transmitter (e.g., a base station105) and a receiver (e.g., a UE115), where both transmitter and receiver are equipped with multiple antennas. Some portions of wireless communications system100may use beamforming. For example, a base station105may have an antenna array with a number of rows and columns of antenna ports that the base station105may use for beamforming in its communication with a UE115. Signals may be transmitted multiple times in different directions (e.g., each transmission may be beamformed differently). A mmW receiver (e.g., a UE115) may try multiple beams (e.g., antenna subarrays) while receiving the synchronization signals. In some cases, the antennas of a base station105or UE115may be located within one or more antenna arrays, which may support beamforming or MIMO operation.
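As a rough numerical illustration of the constructive-interference idea described above, the following sketch steers a uniform linear array toward a target angle using matched (conjugate) weights. The array size, element spacing, and angles are illustrative assumptions, not parameters defined by this disclosure.

```python
# Numerical sketch of beamforming with a uniform linear array: conjugate
# (matched) weights make signals add constructively toward the target angle.
# Array size, spacing, and angles are illustrative assumptions.
import numpy as np

def array_response(num_antennas: int, spacing_wl: float, angle_rad: float):
    """Response vector of a uniform linear array toward angle_rad."""
    n = np.arange(num_antennas)
    return np.exp(1j * 2 * np.pi * spacing_wl * n * np.sin(angle_rad))

target = np.deg2rad(30)
a0 = array_response(8, 0.5, target)
w = a0 / np.linalg.norm(a0)   # unit-norm matched beamforming weights

gain_on = abs(np.vdot(w, a0))                                   # sqrt(8) ~ 2.83
gain_off = abs(np.vdot(w, array_response(8, 0.5, np.deg2rad(-50))))
# gain_off is far below gain_on: off-angle contributions add destructively.
```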
One or more base station antennas or antenna arrays may be collocated at an antenna assembly, such as an antenna tower. In some cases, antennas or antenna arrays associated with a base station105may be located in diverse geographic locations. A base station105may use multiple antennas or antenna arrays to conduct beamforming operations for directional communications with a UE115.

In some cases, wireless communications system100may be a packet-based network that operates according to a layered protocol stack. In the user plane, communications at the bearer or Packet Data Convergence Protocol (PDCP) layer may be IP-based. A radio link control (RLC) layer may in some cases perform packet segmentation and reassembly to communicate over logical channels. A medium access control (MAC) layer may perform priority handling and multiplexing of logical channels into transport channels. The MAC layer may also use hybrid automatic repeat request (HARQ) to provide retransmission at the MAC layer to improve link efficiency. In the control plane, the radio resource control (RRC) protocol layer may provide establishment, configuration, and maintenance of an RRC connection between a UE115and a network device or core network130supporting radio bearers for user plane data. At the physical (PHY) layer, transport channels may be mapped to physical channels.

Time intervals in LTE or NR may be expressed in multiples of a basic time unit (which may be a sampling period of Ts=1/30,720,000 seconds). Time resources may be organized according to radio frames of length of 10 ms (Tf=307200Ts), which may be identified by a system frame number (SFN) ranging from 0 to 1023. Each frame may include ten 1 ms subframes numbered from 0 to 9. A subframe may be further divided into two 0.5 ms slots, each of which contains 6 or 7 modulation symbol periods (depending on the length of the cyclic prefix prepended to each symbol). Excluding the cyclic prefix, each symbol contains 2048 sample periods. In some cases the subframe may be the smallest scheduling unit, also known as a TTI. In other cases, a TTI may be shorter than a subframe or may be dynamically selected (e.g., in short TTI bursts or in selected component carriers using short TTIs). A resource element may consist of one symbol period and one subcarrier (e.g., a 15 kHz frequency range). A resource block may contain 12 consecutive subcarriers in the frequency domain and, for a normal cyclic prefix in each orthogonal frequency division multiplexing (OFDM) symbol, 7 consecutive OFDM symbols in the time domain (1 slot), or 84 resource elements. The number of bits carried by each resource element may depend on the modulation scheme (the configuration of symbols that may be selected during each symbol period). Thus, the more resource blocks that a UE115receives and the higher the modulation scheme, the higher the data rate may be. A short worked example of this arithmetic is provided after this passage.

Wireless communications system100may support operation on multiple cells or carriers, a feature which may be referred to as carrier aggregation (CA) or multi-carrier operation. A carrier may also be referred to as a component carrier (CC), a layer, a channel, etc. The terms “carrier,” “component carrier,” “cell,” and “channel” may be used interchangeably herein. A UE115may be configured with multiple downlink CCs and one or more uplink CCs for carrier aggregation. Carrier aggregation may be used with both frequency division duplexed (FDD) and time division duplexed (TDD) component carriers.
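The following short worked example checks the frame-structure arithmetic above (basic time unit, frame, subframe, slot, and the 84-resource-element block), assuming the normal cyclic prefix values given in the text.

```python
# Worked example of the LTE time-unit arithmetic given above (normal cyclic
# prefix case). All constants are taken directly from the text.
TS = 1 / 30_720_000                   # basic time unit, seconds
frame = 307_200 * TS                  # 10 ms radio frame (Tf)
subframe = frame / 10                 # 1 ms subframe
slot = subframe / 2                   # 0.5 ms slot
symbol_samples = 2048                 # sample periods per symbol, excluding CP
symbols_per_slot = 7                  # normal cyclic prefix
subcarriers_per_rb = 12
res_elems_per_rb = subcarriers_per_rb * symbols_per_slot

assert abs(frame - 0.010) < 1e-12     # 10 ms
assert res_elems_per_rb == 84         # 12 subcarriers x 7 symbols
```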
In some cases, wireless communications system100may utilize enhanced component carriers (eCCs). An eCC may be characterized by one or more features including: wider bandwidth, shorter symbol duration, shorter TTIs, and modified control channel configuration. In some cases, an eCC may be associated with a carrier aggregation configuration or a dual connectivity configuration (e.g., when multiple serving cells have a suboptimal or non-ideal backhaul link). An eCC may also be configured for use in unlicensed spectrum or shared spectrum (where more than one operator is allowed to use the spectrum). An eCC characterized by wide bandwidth may include one or more segments that may be utilized by UEs115that are not capable of monitoring the whole bandwidth or prefer to use a limited bandwidth (e.g., to conserve power). In some cases, an eCC may utilize a different symbol duration than other CCs, which may include use of a reduced symbol duration as compared with symbol durations of the other CCs. A shorter symbol duration is associated with increased subcarrier spacing. A device, such as a UE115or base station105, utilizing eCCs may transmit wideband signals (e.g., 20, 40, 60, 80 MHz, etc.) at reduced symbol durations (e.g., 16.67 microseconds). A TTI in eCC may consist of one or multiple symbols. In some cases, the TTI duration (that is, the number of symbols in a TTI) may be variable.

A shared radio frequency spectrum band may be utilized in an NR shared spectrum system. For example, an NR shared spectrum may utilize any combination of licensed, shared, and unlicensed spectrums, among others. The flexibility of eCC symbol duration and subcarrier spacing may allow for the use of eCC across multiple spectrums. In some examples, NR shared spectrum may increase spectrum utilization and spectral efficiency, specifically through dynamic vertical (e.g., across frequency) and horizontal (e.g., across time) sharing of resources.

In some cases, wireless communications system100may utilize both licensed and unlicensed radio frequency spectrum bands. For example, wireless communications system100may employ LTE License Assisted Access (LTE-LAA) or LTE Unlicensed (LTE-U) radio access technology or NR technology in an unlicensed band such as the 5 GHz Industrial, Scientific, and Medical (ISM) band. When operating in unlicensed radio frequency spectrum bands, wireless devices such as base stations105and UEs115may employ listen-before-talk (LBT) procedures to ensure the channel is clear before transmitting data. In some cases, operations in unlicensed bands may be based on a CA configuration in conjunction with CCs operating in a licensed band. Operations in unlicensed spectrum may include downlink transmissions, uplink transmissions, or both. Duplexing in unlicensed spectrum may be based on FDD, TDD, or a combination of both.

Transmission schemes in wireless communications system100may be classified into open-loop, semi-open-loop, and closed-loop schemes. Open-loop may refer to transmission performed by a transmitter without feedback from a receiver. Closed-loop may refer to a scheme by which the transmitter receives feedback from the receiver and accordingly performs transmission. In some cases, a wireless communications system (e.g., LTE, enhanced full-dimension MIMO (eFD-MIMO) and/or New Radio (NR) wireless communications system) may or may not support particular transmission schemes for unicast transmissions via physical downlink shared channel (PDSCH).
A base station105-aof wireless communications system100may adopt different techniques for such PDSCH transmissions. A first type of transmission scheme may include closed-loop transmission where data and reference signals may be transmitted with the same precoding matrix. In the first type of scheme, demodulation of data at UE115-amay not require knowledge of the precoding matrix used at the transmitter. A second type of transmission scheme may include open-loop and semi-open-loop transmission, where data and demodulation reference signal (DMRS) may or may not be restricted to be transmitted with the same precoding matrix. In the second type of transmission scheme, demodulation of data at UE115-amay or may not require knowledge of a relationship between reference signal ports and data layers. The second type of transmission scheme may include, for example, RBG level precoder cycling, SCDD, and a combination of RBG level cycling and SCDD.

According to the different transmission schemes, different techniques may be used for open-loop PMI reporting, including full PMI, partial PMI, and no PMI reporting. In some cases, precoding information fed back by a receiver may be indicated by a combination of two PMIs. A first PMI may have a long-term or wideband property, and may be referred to as W1, and the second PMI may have a short-term or partial-band property, and may be referred to as W2. For a full PMI reporting technique, both W1and W2may be reported, similarly to a closed-loop technique. In a partial PMI reporting technique, W1may be reported (i.e., i1), while precoder cycling may be performed over W2(i.e., i2). In another case, a portion of W1and a portion of W2may be reported (e.g., a W11or W12part of W1), and precoder cycling may then be performed over (W12, W2) or (W11, W2). In this case, UE115-amay assume some values, as the PMI would have been only partially reported. In the third technique, in which no PMI is reported, base station105-amay indicate a subset of beams, for example, via a codebook subset restriction.

A method may provide for indicating whether CQI is based on a closed-loop, open-loop, or semi-open-loop transmission scheme, and for performing CQI computation using assumptions based on a corresponding type of PMI reporting. That is, in some cases, a method establishing efficient CQI computation assumptions associated with full PMI, partial PMI, and no PMI reporting techniques may be desirable. For example, for a full PMI reporting technique, SCDD may be applied based on selected W1and W2. Alternatively, in partial PMI and no PMI reporting cases, CQI computation may be based on an assumption of the precoder usage of the non-reported part, for example, based on RBG level cycling, SCDD, or a combination of RBG level cycling and SCDD.

Wireless communications system100may support CSI feedback for semi-open-loop and open-loop schemes. For instance, a UE115may determine an open-loop, semi-open-loop, or closed-loop transmission scheme for deriving CQI. In the case of a determined open-loop transmission scheme, the UE115may select a transmission scheme corresponding to a time offset value and a precoder cycling granularity value. For example, the transmission scheme may be a SCDD scheme, a RBG-level precoder cycling scheme, or a combination thereof. The UE115may then, depending on its chosen transmission scheme, determine to use one or more of a time offset value, a precoder cycling granularity value, and a PMI matrix for CSI, and generate CQI accordingly.
Additionally, the UE115may transmit the CQI to a base station105in a CSI report, and may include the determined values to indicate the transmission scheme used for the CQI derivation. Based on the CQI, the base station105may determine the transmission scheme and perform link adaptation.

FIG.2illustrates an example of a wireless communications system200in accordance with aspects of the present disclosure. Wireless communications system200includes base station105-a, which may be an example of a base station105as described with reference toFIG.1. Wireless communications system200also includes UE115-a, which may be an example of a UE115as described with reference toFIG.1. UE115-amay be configured with a transmitter205used to transmit signals to base station105-a, and base station105-amay be configured with a receiver210used to receive signals from UE115-a.

UE115-amay select a transmission scheme based on a time offset between multiple antennas (e.g., virtual antennas). For example, UE115-amay report parameters τ and M to indicate to base station105-aa preferred transmission scheme (such as TS1or TS2). τ may refer to a time offset between virtual antennas, as applied in a SCDD scheme, and M may represent a precoder cycling granularity value. The precoder cycling granularity value may correspond to a cycling granularity of precoders, that is, a number of resource blocks (RBs) for which the same precoder (e.g., precoding matrix) is used. For instance, in RBG-level precoder cycling, two precoders may be used for a transmission, with one precoder used in, for example, odd-indexed RBs and another precoder used in even-indexed RBs. Accordingly, the precoder cycling granularity value may indicate a number of RBs for which a same precoder is used in precoder cycling over resources. In some cases, the antennas may include a first group of reference signal ports (e.g., CSI-reference signal (CSI-RS) ports) and a second group of CSI-RS ports. In some cases, a first value of τ may be applied to the first group of CSI-RS ports and a second value of τ may be applied to the second group of CSI-RS ports.

UE115-amay select τ and M from corresponding sets of values configured by base station105-avia, for example, RRC signaling, MAC control element (CE) signaling, or downlink control information (DCI). In some cases, τ and M may be predefined according to specification, or may be based on capabilities of UE115-a. For example, UE115-amay select τ∈{0, 0.4} μsec, or, alternatively, τ∈{0, 0.2, 0.4, 0.8} μsec. UE115-amay select M from a set of defined values, where the set may be defined, for example, according to one of two methods. In the first method, M∈{1, 2, 4, CSI feedback size} RBs, where the CSI feedback size value may also be referred to as a non-cycling indicator. These candidate values for M may represent a number of RBs. Additionally or alternatively, granularity values may include multiples of RBs and a non-cycling indicator, where a maximum multiple of RBs may, for example, be less than or equal to the smallest frequency granularity for CSI feedback, and the non-cycling indicator may be equal to zero. In such cases, a set of precoder cycling granularity values (e.g., {0, 1, 2, 4} (in units of RBs)) may be determined regardless of CSI feedback size. In some cases, if M is equal to a number of RBs in the CSI feedback resolution, then there may not be precoder cycling.
For example, the cycling granularity values may include portions of the number of RBs for CSI feedback, where the non-cycling indicator is equal to one, indicating that the cycling granularity is the number of RBs for CSI feedback. In this case, the set of precoder cycling granularity values may be equal to {1/16, 1/4, 1/2, 1}, where the unit is the CSI feedback size. Accordingly, the cycling granularity may be equal to a selected portion times a configured CSI feedback size (e.g., M ∈ {1/16, 1/4, 1/2, 1} × (CSI feedback size)). In this case, the candidate values for M may be portions of the CSI feedback resolution, where 1 refers to the case of no precoder cycling. Additionally or alternatively, determining the time offset and precoder cycling granularity values may be based on other configurations or measurements. For example, the time offset and precoder cycling granularity values may be based on a precoder granularity of a data channel, and/or measurements including UE mobility (e.g., a UE mobility parameter) or a delay spread.

Based on respective values of τ and M, UE115-amay generate and correspondingly indicate CQI using a selected transmission scheme. For example, if τ=0 and M is equal to a non-cycling indicator (CSI feedback size), the UE115-amay use a closed-loop transmission scheme. If τ>0 and M is equal to the CSI feedback size, the UE115-amay use a SCDD scheme. If τ=0 and M is less than (or, generally, unequal to) the CSI feedback size, the UE115-amay use a RBG level precoder cycling scheme. If τ>0 and M is less than (or, generally, unequal to) the CSI feedback size, the UE115-amay use a scheme including a combination of SCDD and RBG level precoder cycling.

Further based on the respective values of τ and M, UE115-amay perform CQI computation based on whether the PMI reporting technique is a full PMI reporting technique, a partial PMI reporting technique, or a no PMI reporting technique. For full PMI reporting, CQI may be derived based on the PMI matrix W1. A PMI matrix W2may be reported with a value for τ, and where M may be equal to the CSI feedback size. For partial PMI, CQI may be derived from W1, or the portions or component matrices of W1(e.g., W11or W12), with a reported value for τ and for M. For no PMI reporting, the CQI may be derived based on codebook subset restriction, with a reported value for τ and for M. In some cases, a codebook subset restriction may be applied in any of the cases of full PMI reporting, partial PMI reporting, and no PMI reporting. In such cases, CQI may be derived based on the codebook subset restriction (e.g., when configured). In some cases, such as for partial PMI reporting and no PMI reporting cases, there may be a varied cycling order of unreported PMIs. For example, if W1is reported, and there are 16 vectors for W2to cycle, the cycling order may be [0, 1, 2, 3 . . . 15], or, for example: [0, 4, 8, 12, 1, 5, 9, 13, 2, 6, 10, 14, 3, 7, 11, 15].

Joint coding may further be used for coding τ and M in the CSI reporting. In a first technique for coding of τ and M in the CSI reporting, τ and M may be jointly coded with a rank indicator (RI). In this case, if there are A bits for τ, B bits for M, and C bits for the RI, then the corresponding A+B+C bits may be input to the channel encoder. In some cases, the first technique may be performed similarly, but without jointly coding τ and M with PMI or CQI. Additionally or alternatively, in a second technique, τ and M may be jointly coded without a RI.
In this case, values for the payload sizes for τ, M, and the RI may be fixed. In some cases, the values for each of these parameters may impact the payload sizes of the PMI and CQI.

A first example is provided for the case of full PMI reporting. In this example, UE115-amay use a particular precoder for CQI computation, where D represents an SCDD matrix which gradually changes at the subcarrier level. The phase change may be a function of the time offset τ:

\Delta\theta = 2\pi \times \tau \times f_{scs},  (1)

to produce:

x(k) = D(k) \times W_1 \times W_2 \times s_k,  (2)

where

D(k) = \begin{bmatrix} 1 & 0 \\ 0 & \exp(jk\Delta\theta) \end{bmatrix}.  (3)

Here, W1 is the beam matrix (i.e., PMI matrix), given by:

W_1 = \begin{bmatrix} b_0\, b_1\, b_2\, b_3 & 0 \\ 0 & b_0\, b_1\, b_2\, b_3 \end{bmatrix}.  (4)

Here, W2 is the beam selection and co-phase matrix, as may be determined through selection by UE115-a, given by:

W_2 = e_i \otimes \begin{bmatrix} 1 \\ \exp(jk\theta_{ini}) \end{bmatrix}.  (5)

Here, θ_ini may be considered an initial phase of the SCDD for the given sub-band. The value of θ_ini may be given by 0, \pi/2, \pi, or 3\pi/2. In some cases (e.g., type II codebook), W2 may be given by:

W_2 = \begin{bmatrix} p_{r,i} \exp(jk\theta_{r,i}) \end{bmatrix}^{T},  (6)

containing amplitude term p_{r,i} for a beam i and a polarization r, and a phase term θ_{r,i} for a beam i and polarization r. In some cases, the value of θ_{r,i} may be based on phase modulation (e.g., 8PSK) symbols.

A second example is provided for the case of partial PMI reporting or no PMI reporting. In this example, UE115-amay use a particular precoder for CQI computation, where D represents an SCDD matrix. The phase change across tones may be a function of the time offset τ:

\Delta\theta = 2\pi \times \tau \times f_{scs},  (7)

to produce:

x(k) = D(k) \times W_1 \times W_2(n, i) \times s_k,  (8)

where

D(k) = \begin{bmatrix} 1 & 0 \\ 0 & \exp(jk\Delta\theta) \end{bmatrix}.  (9)

Here, W1 may be based on a wideband PMI report, and may be given by:

W_1 = \begin{bmatrix} b_0\, b_1\, b_2\, b_3 & 0 \\ 0 & b_0\, b_1\, b_2\, b_3 \end{bmatrix}.  (10)

Here, W2(n, i) may change, for example, every M RBs. In this example, only W2, given by:

W_2 = e_i \otimes \begin{bmatrix} 1 \\ \exp(jk\theta_{ini}) \end{bmatrix},  (11)

may cycle, where \theta_{ini} = \theta_n = \pi(n-1)/2 for n = 1, 2, 3, 4, and i = 1, 2, 3, 4. An example is provided here for M=1. Linking k with (n, i):

i = \lfloor \operatorname{mod}(\lfloor k/(12M) \rfloor, 16)/4 \rfloor + 1,  (12)

and

n = \operatorname{mod}(\operatorname{mod}(\lfloor k/(12M) \rfloor, 16), 4) + 1,  (13)

may produce the resulting precoding:

RB        0  1  2  3  4  5  6  7  . . .
i of e_i  1  1  1  1  2  2  2  2  . . .
n         1  2  3  4  1  2  3  4  . . .

(A numerical sketch reproducing this cycling pattern follows this passage.) Additionally or alternatively, a precoder cycling may further be based on codebook subset restriction.
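The following minimal sketch reproduces the cycling pattern in the table above directly from equations (9), (12), and (13). The values chosen for τ and the subcarrier spacing are illustrative assumptions, and 12 subcarriers per RB follows the frame structure described earlier.

```python
# Sketch reproducing the precoder cycling of equations (12)-(13) and the
# SCDD matrix D(k) of equation (9). tau and f_scs are illustrative values;
# 12 subcarriers per RB follows the frame structure described earlier.
import numpy as np

def d_matrix(k: int, tau: float, f_scs: float) -> np.ndarray:
    """SCDD matrix D(k) = diag(1, exp(j*k*delta_theta)), per eq. (9)."""
    delta_theta = 2 * np.pi * tau * f_scs
    return np.diag([1.0, np.exp(1j * k * delta_theta)])

def cycling_indices(k: int, m: int):
    """Beam index i and co-phase index n for subcarrier k, eqs. (12)-(13)."""
    g = (k // (12 * m)) % 16        # position within the 16 cycled states
    return g // 4 + 1, g % 4 + 1    # i in {1..4}, n in {1..4}

# First subcarrier of each RB for M = 1 reproduces the table above:
# RB 0..7 -> i: 1 1 1 1 2 2 2 2 and n: 1 2 3 4 1 2 3 4.
for rb in range(8):
    i, n = cycling_indices(12 * rb, m=1)
    assert i == rb // 4 + 1 and n == rb % 4 + 1

D0 = d_matrix(0, tau=0.4e-6, f_scs=15e3)   # identity at k = 0
```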
FIG.3illustrates an example of a process flow300in accordance with aspects of the present disclosure. In some examples, process flow300may implement aspects of wireless communications system100. For example, process flow300includes a UE115-band base station105-b, which may be examples of the corresponding devices described with reference toFIGS.1through4.

At305, base station105-bmay transmit to UE115-b, and UE115-bmay receive from base station105-b, predetermined values. In some cases, the predetermined set of precoder cycling granularity values may include a multiple of a number of RBs for CSI feedback, or a portion of the number of RBs for CSI feedback. Additionally or alternatively, the predetermined set of precoder cycling granularity values may include multiples of a number of RBs and a non-cycling indicator (i.e., a CSI feedback size), if, for example, a maximum multiple of the number of RBs is less than or equal to a smallest frequency granularity for CSI feedback, and the non-cycling indicator is equal to zero. The predetermined set of time offset values and the predetermined set of precoder cycling granularity values may be transmitted and received via any of a MAC CE, an RRC message, and DCI.

At310, UE115-bmay determine a transmission scheme for CQI. In some cases, the transmission scheme may include a first transmission scheme or a second transmission scheme. In some cases, the first transmission scheme may be a closed-loop transmission scheme and the second transmission scheme may be a semi-open-loop transmission scheme or an open-loop transmission scheme. In some cases, the second transmission scheme may be determined to be a SCDD scheme, a RBG level precoder cycling scheme, or a combination of both.

At315, UE115-bmay determine a time offset value and a precoder cycling granularity value based on the transmission scheme determined at310. The time offset value may be selected from a predetermined set of time offset values, as may have been received from base station105-bat305. The precoder cycling granularity value may be selected from a predetermined set of precoder cycling granularity values, as may have been received from base station105-bat305. In some cases, based on the first transmission scheme determined at310, UE115-bmay determine that the time offset value is, for example, equal to zero and the precoder cycling granularity value is equal to a non-cycling indicator. In some cases, based on the second transmission scheme being determined at310to be a SCDD scheme, UE115-bmay determine that the time offset value is, for example, greater than zero and the precoder cycling granularity value is equal to a non-cycling indicator. In some cases, based on the second transmission scheme being determined at310to be a RBG level precoder cycling scheme, UE115-bmay determine that the time offset value is, for example, equal to zero and the precoder cycling granularity value is not equal to a non-cycling indicator. In some cases, based on the second transmission scheme being determined at310to be a combination of a SCDD scheme and a RBG level precoder cycling scheme, UE115-bmay determine that the time offset value is, for example, greater than zero and the precoder cycling granularity value is not equal to a non-cycling indicator.

At320, UE115-bmay generate CQI based on the transmission scheme determined at310. Generating CQI may include determining a PMI reporting scheme, where the PMI reporting scheme includes at least one of full PMI reporting, partial PMI reporting, or no PMI reporting. In some cases, determining the PMI reporting scheme may be based on a quantity associated with the CSI report. In some cases, generating the CQI may include applying a first precoder for a first RBG and a second precoder for a second RBG, where a size of the first RBG and a size of the second RBG are equal to the determined precoder cycling granularity value. CQI may then be derived based on the determined PMI reporting scheme. If the PMI reporting scheme includes full PMI reporting, the CQI may be derived based on, for example, a first PMI matrix, a second PMI matrix, the determined time offset value, and the precoder cycling granularity value being equal to a non-cycling indicator. If the PMI reporting scheme includes partial PMI reporting, the CQI may be derived based on, for example, a first PMI matrix or a component of the first PMI matrix and the determined time offset value and the precoder cycling granularity value.
If the PMI reporting scheme includes no PMI reporting, the CQI may be derived based on, for example, the determined time offset value and the precoder cycling granularity value. In some cases, UE115-bmay further determine that a codebook subset restriction (CSR) is configured, and derive the CQI based on the CSR.

At325, UE115-bmay transmit to base station105-b, and base station105-bmay receive from UE115-b, a CSI report with CQI. The CSI report may include a PMI, the time offset value, and the precoder cycling granularity value, as UE115-bmay have determined, for example, at315. In some cases, the time offset value and/or the precoder cycling granularity value may be jointly coded within the CSI report. In some cases, the time offset value and/or the precoder cycling granularity value may further be jointly coded with a RI within the CSI report.

At330, base station105-bmay determine a transmission schedule. The transmission schedule may include a first transmission scheme and a second transmission scheme. The transmission schedule may be used by UE115-bto generate further CQI, for example, based on the received time offset value and the precoder cycling granularity value. Base station105-bmay identify the first transmission scheme and second transmission scheme to be, for example, the values as would be determined by a UE115according to step315, corresponding to a SCDD scheme, a RBG level precoder cycling scheme, or a combination thereof.

At335, base station105-bmay perform link adaptation. The link adaptation may be based on the transmission scheme determined at330and the CSI report received at325.

FIG.4shows a block diagram400of a wireless device405in accordance with aspects of the present disclosure. Wireless device405may be an example of aspects of a UE115as described herein. Wireless device405may include receiver410, UE communications manager415, and transmitter420. Wireless device405may also include a processor. Each of these components may be in communication with one another (e.g., via one or more buses).

Receiver410may receive information such as packets, user data, or control information associated with various information channels (e.g., control channels, data channels, and information related to CSI feedback for semi-open-loop and open-loop schemes, etc.). Information may be passed on to other components of the device. The receiver410may be an example of aspects of the transceiver735as described with reference toFIG.7. The receiver410may utilize a single antenna or a set of antennas.

UE communications manager415may be an example of aspects of the UE communications manager715as described with reference toFIG.7. UE communications manager415and/or at least some of its various sub-components may be implemented in hardware, software executed by a processor, firmware, or any combination thereof. If implemented in software executed by a processor, the functions of the UE communications manager415and/or at least some of its various sub-components may be executed by a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described in the present disclosure.
The UE communications manager415and/or at least some of its various sub-components may be physically located at various positions, including being distributed such that portions of functions are implemented at different physical locations by one or more physical devices. In some examples, UE communications manager415and/or at least some of its various sub-components may be a separate and distinct component in accordance with aspects of the present disclosure. In other examples, UE communications manager415and/or at least some of its various sub-components may be combined with one or more other hardware components, including but not limited to an I/O component, a transceiver, a network server, another computing device, one or more other components described in the present disclosure, or a combination thereof in accordance with aspects of the present disclosure. UE communications manager415may determine at least one of a time offset value between two or more antenna ports or a precoder cycling granularity value associated with the determined transmission scheme, generate the CQI based on at least one of the determined time offset value and the precoder cycling granularity value, and transmit, in a CSI report, the generated CQI. In some cases, UE communications manager415may transmit, in the CSI report, at least one of the time offset value or the precoder cycling granularity value with the generated CQI. In some cases, UE communications manager415may determine the transmission scheme for CQI, where the transmission scheme for CQI may include a first transmission scheme or a second transmission scheme. Transmitter420may transmit signals generated by other components of the device. In some examples, the transmitter420may be collocated with a receiver410in a transceiver module. For example, the transmitter420may be an example of aspects of the transceiver735as described with reference toFIG.7. The transmitter420may utilize a single antenna or a set of antennas. FIG.5shows a block diagram500of a wireless device505in accordance with aspects of the present disclosure. Wireless device505may be an example of aspects of a wireless device405or a UE115as described with reference toFIG.4. Wireless device505may include receiver510, UE communications manager515, and transmitter520. Wireless device505may also include a processor. Each of these components may be in communication with one another (e.g., via one or more buses). Receiver510may receive information such as packets, user data, or control information associated with various information channels (e.g., control channels, data channels, and information related to CSI feedback for semi-open-loop and open-loop schemes, etc.). Information may be passed on to other components of the device. The receiver510may be an example of aspects of the transceiver735as described with reference toFIG.7. The receiver510may utilize a single antenna or a set of antennas. UE communications manager515may be an example of aspects of the UE communications manager715as described with reference toFIG.7. UE communications manager515may also include transmission scheme component525, time offset and precoder granularity manager530, CQI component535, and UE CSI report manager540. Transmission scheme component525may determine a transmission scheme for CQI, where the transmission scheme may include a first transmission scheme or a second transmission scheme. 
In some cases, the first transmission scheme may be a closed-loop transmission scheme (e.g., TS1) and the second transmission scheme may be a semi-open-loop transmission scheme or an open-loop transmission scheme (e.g., TS2). Time offset and precoder granularity manager530may determine at least one of a time offset value or a precoder cycling granularity value, select the time offset value from a predetermined set of time offset values, and select the precoder cycling granularity value from a predetermined set of precoder cycling granularity values. In some examples, the predetermined set of precoder cycling granularity values may include a portion of a number of RBs for CSI feedback and a non-cycling indicator, where the non-cycling indicator is equal to one. In some cases, determining the time offset value and the precoder cycling granularity value may include determining that the time offset value is equal to zero and the precoder cycling granularity value is equal to a non-cycling indicator, where the determined transmission scheme is the first transmission scheme. In some cases, determining at least one of the time offset value or the precoder cycling granularity value may include determining that the time offset value is greater than zero and the precoder cycling granularity value is equal to a non-cycling indicator, where the determined transmission scheme is the second transmission scheme. Additionally or alternatively, determining at least one of the time offset value or the precoder cycling granularity value may include determining that the time offset value is equal to zero and the precoder cycling granularity value is not equal to a non-cycling indicator, where the determined transmission scheme is the second transmission scheme. In some examples, determining at least one of the time offset value or the precoder cycling granularity value includes determining that the time offset value is greater than zero and the precoder cycling granularity value is not equal to a non-cycling indicator, where the determined transmission scheme is the second transmission scheme. CQI component535may generate the CQI based on at least one of the determined time offset value or the determined precoder cycling granularity value. In some cases, generating the CQI includes determining a PMI reporting scheme, where the PMI reporting scheme may include full PMI reporting, partial PMI reporting, or no PMI reporting. In some examples, CQI component535may derive the CQI based on the determined PMI reporting scheme, or may derive the CQI based on a RI, a first PMI matrix, a second PMI matrix, the determined time offset value, and the determined precoder cycling granularity value being equal to a non-cycling indicator, where the PMI reporting scheme includes full PMI reporting. In other examples, CQI component535may derive the CQI based on a CSR. In other examples, CQI component535may derive the CQI based on a RI, a first PMI matrix or a component of the first PMI matrix, the determined time offset value, and the determined precoder cycling granularity value, where the PMI reporting scheme includes partial PMI reporting. Additionally or alternatively, CQI component535may derive the CQI based on a RI, the determined time offset value, and the precoder cycling granularity value, where the PMI reporting scheme includes no PMI reporting. 
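The four cases above pair a reported time offset with a cycling granularity to identify the scheme in use; the following minimal sketch restates that mapping in code. The sentinel value, function name, and numeric examples are illustrative assumptions, not an API defined by this disclosure.

```python
# Restatement of the four (tau, M) cases above as a selection rule. The
# sentinel NON_CYCLING stands for the non-cycling indicator (e.g., the CSI
# feedback size); names and values are illustrative, not a specified API.
NON_CYCLING = "non-cycling"

def transmission_scheme(tau: float, m) -> str:
    """Map a reported (tau, M) pair to the indicated transmission scheme."""
    if tau == 0 and m == NON_CYCLING:
        return "closed-loop"                    # first transmission scheme
    if tau > 0 and m == NON_CYCLING:
        return "SCDD"                           # second scheme, SCDD only
    if tau == 0:
        return "RBG-level precoder cycling"     # second scheme, cycling only
    return "SCDD + RBG-level precoder cycling"  # second scheme, combined

assert transmission_scheme(0.4e-6, NON_CYCLING) == "SCDD"
assert transmission_scheme(0.0, 2) == "RBG-level precoder cycling"
```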
In some cases, generating the CQI may include applying a first precoder for a first RBG and a second precoder for a second RBG, where a size of the first RBG and a size of the second RBG are equal to the determined precoder cycling granularity value. UE CSI report manager540may transmit, in a CSI report, the generated CQI. In some cases, UE CSI report manager540may transmit, in the CSI report, at least one of the time offset value or the precoder cycling granularity value with the generated CQI.

Transmitter520may transmit signals generated by other components of the device. In some examples, the transmitter520may be collocated with a receiver510in a transceiver module. For example, the transmitter520may be an example of aspects of the transceiver735as described with reference toFIG.7. The transmitter520may utilize a single antenna or a set of antennas.

FIG.6shows a block diagram600of a UE communications manager615in accordance with aspects of the present disclosure. The UE communications manager615may be an example of aspects of a UE communications manager415, a UE communications manager515, or a UE communications manager715described with reference toFIGS.4,5, and7. The UE communications manager615may include transmission scheme component620, time offset and precoder granularity manager625, CQI component630, UE CSI report manager635, predetermined value manager640, CSR component645, and coding manager650. Each of these modules may communicate, directly or indirectly, with one another (e.g., via one or more buses).

Transmission scheme component620may determine the transmission scheme for CQI, where the transmission scheme for CQI may include a first transmission scheme or a second transmission scheme. In some cases, the first transmission scheme includes a closed-loop transmission scheme and the second transmission scheme includes a semi-open-loop transmission scheme or an open-loop transmission scheme.

Time offset and precoder granularity manager625may determine at least one of a time offset value between two or more antenna ports or a precoder cycling granularity value associated with a transmission scheme for CQI. In some cases, the two or more antenna ports include a first group of CSI-RS ports and a second group of CSI-RS ports. In some cases, the time offset value and/or the precoder cycling granularity value may be determined based on a precoder granularity associated with a data channel, a UE mobility parameter, a delay spread, or a combination thereof. In some cases, the time offset value and/or the precoder cycling granularity value are configured via a MAC CE, an RRC message, or DCI. Time offset and precoder granularity manager625may select the time offset value from a predetermined set of time offset values and select the precoder cycling granularity value from a predetermined set of precoder cycling granularity values. In some examples, the predetermined set of precoder cycling granularity values includes a portion of a number of RBs for CSI feedback and a non-cycling indicator, and the non-cycling indicator is equal to one. In some cases, determining at least one of the time offset value or the precoder cycling granularity value includes determining that the time offset value is equal to zero and the precoder cycling granularity value is equal to a non-cycling indicator, where the determined transmission scheme is the first transmission scheme.
In some cases, determining at least one of the time offset value or the precoder cycling granularity value may include determining that the time offset value is greater than zero and the precoder cycling granularity value is equal to a non-cycling indicator, where the determined transmission scheme is the second transmission scheme. Additionally or alternatively, determining at least one of the time offset value or the precoder cycling granularity value may include determining that the time offset value is equal to zero and the precoder cycling granularity value is not equal to a non-cycling indicator, where the determined transmission scheme is the second transmission scheme including RBG level precoder cycling. In some examples, determining at least one of the time offset value or the precoder cycling granularity value includes determining that the time offset value is greater than zero and the precoder cycling granularity value is not equal to a non-cycling indicator, where the determined transmission scheme is the second transmission scheme including SCDD and RBG level precoder cycling. CQI component630may generate the CQI based on at least one of the determined time offset value and the determined precoder cycling granularity value. In some cases, generating the CQI may include applying the time offset value to the second group of CSI-RS ports relative to the first group of CSI-RS ports. In some cases, generating the CQI includes determining a PMI reporting scheme based on a quantity associated with the CSI report, where the PMI reporting scheme may include full PMI reporting, partial PMI reporting, or no PMI reporting. In some examples, CQI component630may derive the CQI based on the determined PMI reporting scheme, or may derive the CQI based on a RI, a first PMI matrix, a second PMI matrix, the determined time offset value, and the determined precoder cycling granularity value being equal to a non-cycling indicator, where the PMI reporting scheme includes full PMI reporting. In other examples, CQI component630may derive the CQI based on a CSR. In other examples, CQI component630may derive the CQI based on a RI, a first PMI matrix or a component of the first PMI matrix, the determined time offset value, and the determined precoder cycling granularity value, where the PMI reporting scheme includes partial PMI reporting. Additionally or alternatively, CQI component630may derive the CQI based on a RI, the determined time offset value, and the precoder cycling granularity value, where the PMI reporting scheme includes no PMI reporting. In some cases, generating the CQI may include applying a first precoder for RBG and a second precoder for a second RBG, where a size of the first RBG and a size of the second RBG are equal to the determined precoder cycling granularity value. UE CSI report manager635may transmit, in a CSI report, the generated CQI. In some cases, UE CSI report manager635may transmit, in the CSI report, at least one of the time offset value or the precoder cycling granularity value with the generated CQI. Predetermined value manager640may receive a predetermined set of time offset values and a predetermined set of precoder cycling granularity values via a MAC CE, or an RRC message, or DCI. 
In some cases, determining at least one of the time offset value or the precoder cycling granularity value includes selecting the time offset value from the predetermined set of time offset values and selecting the precoder cycling granularity value from the predetermined set of precoder cycling granularity values. In some cases, the predetermined set of time offset values and the predetermined set of precoder cycling granularity values may be determined based on a capability associated with the UE. In some cases, the predetermined set of precoder cycling granularity values includes multiples of a number of RBs and a non-cycling indicator, and a maximum multiple of the number of RBs is less than or equal to a smallest frequency granularity for CSI feedback and the non-cycling indicator is equal to zero. In some examples, the predetermined set of precoder cycling granularity values includes a portion of a number of RBs for CSI feedback and a non-cycling indicator, and where the non-cycling indicator is equal to one. In some cases, the predetermined set of precoder cycling granularity values include a multiple of a number of RBs for CSI feedback or a portion of the number of RBs for CSI feedback. CSR component645may determine that the CSR is configured. Coding manager650may jointly code the time offset value or the precoder cycling granularity value with an RI within the CSI report. In some cases, coding manager650may jointly code the time offset value or the precoder cycling granularity value within the CSI report. FIG.7shows a diagram of a system700including a device705in accordance with aspects of the present disclosure. Device705may be an example of or include the components of wireless device405, wireless device505, or a UE115as described above, e.g., with reference toFIGS.4and5. Device705may include components for bi-directional voice and data communications including components for transmitting and receiving communications, including UE communications manager715, processor720, memory725, software730, transceiver735, antenna740, and I/O controller745. These components may be in electronic communication via one or more buses (e.g., bus710). Device705may communicate wirelessly with one or more base stations105. Processor720may include an intelligent hardware device (e.g., a general-purpose processor, a DSP, a central processing unit (CPU), a microcontroller, an ASIC, an FPGA, a programmable logic device, a discrete gate or transistor logic component, a discrete hardware component, or any combination thereof). In some cases, processor720may be configured to operate a memory array using a memory controller. In other cases, a memory controller may be integrated into processor720. Processor720may be configured to execute computer-readable instructions stored in a memory to perform various functions (e.g., functions or tasks supporting CSI feedback for semi-open-loop and open-loop schemes). Memory725may include random access memory (RAM) and read only memory (ROM). The memory725may store computer-readable, computer-executable software730including instructions that, when executed, cause the processor to perform various functions described herein. In some cases, the memory725may contain, among other things, a basic input/output system (BIOS) which may control basic hardware or software operation such as the interaction with peripheral components or devices. 
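Because the time offset value and the precoder cycling granularity value may be jointly coded, possibly together with the RI, a single code point can carry all three quantities. The sketch below shows one hypothetical mixed-radix packing; the candidate sets and the field layout are assumptions made for illustration, not a format taken from the disclosure or from any specification.

```python
# Illustrative sketch of jointly coding the time offset and precoder
# cycling granularity with the RI inside one CSI report field. The
# candidate sets and the packing are hypothetical; the disclosure only
# states that the values may be jointly coded, not how.

TIME_OFFSETS = [0, 1, 2]          # assumed predetermined set
GRANULARITIES = [0, 2, 4]         # assumed set; 0 = non-cycling indicator

def encode_joint_field(ri: int, offset_idx: int, gran_idx: int) -> int:
    """Pack RI and the two parameter indices into a single code point."""
    n_off, n_gran = len(TIME_OFFSETS), len(GRANULARITIES)
    return (ri * n_off + offset_idx) * n_gran + gran_idx

def decode_joint_field(code: int) -> tuple[int, int, int]:
    """Recover (RI, time offset value, granularity value) from the code."""
    n_off, n_gran = len(TIME_OFFSETS), len(GRANULARITIES)
    code, gran_idx = divmod(code, n_gran)
    ri, offset_idx = divmod(code, n_off)
    return ri, TIME_OFFSETS[offset_idx], GRANULARITIES[gran_idx]

code = encode_joint_field(ri=1, offset_idx=2, gran_idx=1)
assert decode_joint_field(code) == (1, 2, 2)
```

A joint code of this kind keeps the report compact: the field needs only enough bits for the product of the set sizes rather than separate fields per quantity.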
Software730may include code to implement aspects of the present disclosure, including code to support CSI feedback for semi-open-loop and open-loop schemes. Software730may be stored in a non-transitory computer-readable medium such as system memory or other memory. In some cases, the software730may not be directly executable by the processor but may cause a computer (e.g., when compiled and executed) to perform functions described herein. Transceiver735may communicate bi-directionally, via one or more antennas, wired, or wireless links as described above. For example, the transceiver735may represent a wireless transceiver and may communicate bi-directionally with another wireless transceiver. The transceiver735may also include a modem to modulate the packets and provide the modulated packets to the antennas for transmission, and to demodulate packets received from the antennas. In some cases, the wireless device may include a single antenna740. However, in some cases the device may have more than one antenna740, which may be capable of concurrently transmitting or receiving multiple wireless transmissions. I/O controller745may manage input and output signals for device705. I/O controller745may also manage peripherals not integrated into device705. In some cases, I/O controller745may represent a physical connection or port to an external peripheral. In some cases, I/O controller745may utilize an operating system such as iOS®, ANDROID®, MS-DOS®, MS-WINDOWS®, OS/2®, UNIX®, LINUX®, or another known operating system. In other cases, I/O controller745may represent or interact with a modem, a keyboard, a mouse, a touchscreen, or a similar device. In some cases, I/O controller745may be implemented as part of a processor. In some cases, a user may interact with device705via I/O controller745or via hardware components controlled by I/O controller745. FIG.8shows a block diagram800of a wireless device805in accordance with aspects of the present disclosure. Wireless device805may be an example of aspects of a base station105as described herein. Wireless device805may include receiver810, base station communications manager815, and transmitter820. Wireless device805may also include a processor. Each of these components may be in communication with one another (e.g., via one or more buses). Receiver810may receive information such as packets, user data, or control information associated with various information channels (e.g., control channels, data channels, and information related to CSI feedback for semi-open-loop and open-loop schemes, etc.). Information may be passed on to other components of the device. The receiver810may be an example of aspects of the transceiver1135as described with reference toFIG.11. The receiver810may utilize a single antenna or a set of antennas. Base station communications manager815may be an example of aspects of the base station communications manager1115as described with reference toFIG.11. Base station communications manager815and/or at least some of its various sub-components may be implemented in hardware, software executed by a processor, firmware, or any combination thereof. 
If implemented in software executed by a processor, the functions of the base station communications manager815and/or at least some of its various sub-components may be executed by a general-purpose processor, a DSP, an ASIC, an FPGA or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described in the present disclosure. The base station communications manager815and/or at least some of its various sub-components may be physically located at various positions, including being distributed such that portions of functions are implemented at different physical locations by one or more physical devices. In some examples, base station communications manager815and/or at least some of its various sub-components may be a separate and distinct component in accordance with aspects of the present disclosure. In other examples, base station communications manager815and/or at least some of its various sub-components may be combined with one or more other hardware components, including but not limited to an I/O component, a transceiver, a network server, another computing device, one or more other components described in the present disclosure, or a combination thereof in accordance with aspects of the present disclosure. Base station communications manager815may receive CQI in a CSI report, the CQI based on at least one of a time offset value between two or more antenna ports and a precoder cycling granularity value. Base station communications manager815may determine a transmission scheme used for generating the CQI based on at least one of the received time offset value or the precoder cycling granularity value, and perform link adaptation based on the determined transmission scheme and the CSI report. In some cases, the transmission scheme may include a first transmission scheme or a second transmission scheme. Transmitter820may transmit signals generated by other components of the device. In some examples, the transmitter820may be collocated with a receiver810in a transceiver module. For example, the transmitter820may be an example of aspects of the transceiver1135as described with reference toFIG.11. The transmitter820may utilize a single antenna or a set of antennas. FIG.9shows a block diagram900of a wireless device905in accordance with aspects of the present disclosure. Wireless device905may be an example of aspects of a wireless device805or a base station105as described with reference toFIG.8. Wireless device905may include receiver910, base station communications manager915, and transmitter920. Wireless device905may also include a processor. Each of these components may be in communication with one another (e.g., via one or more buses). Receiver910may receive information such as packets, user data, or control information associated with various information channels (e.g., control channels, data channels, and information related to CSI feedback for semi-open-loop and open-loop schemes, etc.). Information may be passed on to other components of the device. The receiver910may be an example of aspects of the transceiver1135as described with reference toFIG.11. The receiver910may utilize a single antenna or a set of antennas. Base station communications manager915may be an example of aspects of the base station communications manager1115as described with reference toFIG.11. 
Base station communications manager915may also include base station CSI report manager925, transmission scheme manager930, and link adaptation component935. Base station CSI report manager925may receive CQI in a CSI report, the CQI based on at least one of a time offset value between two or more antenna ports and a precoder cycling granularity value. In some cases, the CQI may be based on a PMI reporting scheme and the time offset value and the precoder cycling granularity value, and the CSI report may include a RI, a PMI, the time offset value, the precoder cycling granularity value, or a combination thereof. In some cases, the time offset value or the precoder cycling granularity value may be jointly coded within the CSI report. In some cases, the time offset value or the precoder cycling granularity value may be jointly coded with the RI within the CSI report. In some cases, receiving the CQI may include receiving the CQI having a first precoder applied for a first RBG and a second precoder applied for a second RBG, where a size of the first RBG and a size of the second RBG are equal to the precoder cycling granularity value. In some cases, base station CSI report manager925may receive at least one of the time offset value or the precoder cycling granularity value, where the CQI may be assumed to be associated with the received time offset value or the received precoder cycling granularity value. Transmission scheme manager930may determine a transmission scheme used for generating the CQI based on at least one of the time offset value or the precoder cycling granularity value. In some cases, the transmission scheme may include a first transmission scheme or a second transmission scheme. In some cases, the first transmission scheme includes a closed-loop transmission scheme and the second transmission scheme includes a semi-open-loop transmission scheme or an open-loop transmission scheme. Determining the transmission scheme used for generating the CQI may include identifying that the time offset value is greater than zero and the precoder cycling granularity value is equal to a non-cycling indicator, where the determined transmission scheme is the second transmission scheme. In some examples, determining the transmission scheme used for generating the CQI may include identifying that the time offset value is equal to zero and the precoder cycling granularity value is not equal to a non-cycling indicator, where the determined transmission scheme is the second transmission scheme. In some cases, generating the CQI may include determining a PMI reporting scheme, where the PMI reporting scheme may include full PMI reporting, partial PMI reporting, or no PMI reporting, and deriving the CQI based on the determined PMI reporting scheme. In some cases, the two or more antenna ports include a first group of CSI-RS ports and a second group of CSI-RS ports, and generating the CQI includes applying the time offset value to the second group of CSI-RS ports relative to the first group of CSI-RS ports. In some examples, determining the transmission scheme used for generating the CQI may include identifying that the time offset value is greater than zero and the precoder cycling granularity value is not equal to a non-cycling indicator, where the determined transmission scheme is the second transmission scheme. In some cases, the CQI is based on a PMI reporting scheme and the time offset value and the precoder cycling granularity value. 
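To illustrate the SCDD hypothesis described above, in which the time offset value is applied to the second group of CSI-RS ports relative to the first, the following numpy sketch applies the equivalent frequency-domain phase ramp (a cyclic delay in time is a linear phase across subcarriers). The FFT size, port grouping, and equal-power combining are illustrative assumptions, not parameters taken from the disclosure.

```python
# Minimal numpy sketch of the SCDD-style hypothesis: a small cyclic delay
# (the time offset value) is applied to the second group of CSI-RS ports
# relative to the first, appearing as a linear phase ramp across
# subcarriers. FFT size and port grouping are assumed for illustration.
import numpy as np

N_FFT = 512          # assumed FFT size
time_offset = 2      # cyclic delay in samples, i.e., the reported offset

subcarriers = np.arange(N_FFT)
phase_ramp = np.exp(-2j * np.pi * subcarriers * time_offset / N_FFT)

# h_group1/h_group2: per-subcarrier channel estimates for the two
# CSI-RS port groups (random placeholders here).
rng = np.random.default_rng(0)
h_group1 = rng.standard_normal(N_FFT) + 1j * rng.standard_normal(N_FFT)
h_group2 = rng.standard_normal(N_FFT) + 1j * rng.standard_normal(N_FFT)

# Effective channel hypothesis used when deriving the CQI: the second
# group is phase-ramped (delayed) relative to the first, then combined.
h_effective = (h_group1 + phase_ramp * h_group2) / np.sqrt(2)
```

The frequency selectivity deliberately introduced by the ramp is what makes the CQI robust when the precoder cannot track the channel, which is the motivation for the semi-open-loop scheme.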
Link adaptation component935may perform link adaptation based on the determined transmission scheme and the CSI report. Transmitter920may transmit signals generated by other components of the device. In some examples, the transmitter920may be collocated with a receiver910in a transceiver module. For example, the transmitter920may be an example of aspects of the transceiver1135as described with reference toFIG.11. The transmitter920may utilize a single antenna or a set of antennas. FIG.10shows a block diagram1000of a base station communications manager1015in accordance with aspects of the present disclosure. The base station communications manager1015may be an example of aspects of a base station communications manager1115described with reference toFIGS.8,9, and11. The base station communications manager1015may include base station CSI report manager1020, transmission scheme manager1025, link adaptation component1030, transmission scheme component1035, and predetermined value indicator1040. Each of these modules may communicate, directly or indirectly, with one another (e.g., via one or more buses). Base station CSI report manager1020may receive CQI in a CSI report, the CQI based on at least one of a time offset value between two or more antenna ports and a precoder cycling granularity value. In some cases, the CQI may be based on a PMI reporting scheme and the time offset value and the precoder cycling granularity value, and the CSI report may include a RI, a PMI, the time offset value, the precoder cycling granularity value, or a combination thereof. In some cases, the time offset value or the precoder cycling granularity value may be jointly coded within the CSI report. In some cases, the time offset value or the precoder cycling granularity value may be jointly coded with the RI within the CSI report. In some cases, receiving the CQI may include receiving the CQI having a first precoder applied for a first RBG and a second precoder applied for a second RBG, where a size of the first RBG and a size of the second RBG are equal to the precoder cycling granularity value. In some cases, base station CSI report manager1020may receive at least one of the time offset value or the precoder cycling granularity value, where the CQI is assumed to be associated with the received time offset value or the received precoder cycling granularity value. Transmission scheme manager1025may determine a transmission scheme used for generating the CQI based on at least one of the time offset value and the precoder cycling granularity value. In some cases, the transmission scheme may include a first transmission scheme or a second transmission scheme. In some cases, the first transmission scheme includes a closed-loop transmission scheme and the second transmission scheme includes a semi-open-loop transmission scheme or an open-loop transmission scheme. Determining the transmission scheme used for generating the CQI may include identifying that the time offset value is greater than zero and the precoder cycling granularity value is equal to a non-cycling indicator, where the determined transmission scheme is the second transmission scheme. In some examples, determining the transmission scheme used for generating the CQI may include identifying that the time offset value is equal to zero and the precoder cycling granularity value is not equal to a non-cycling indicator, where the determined transmission scheme is the second transmission scheme. 
In some cases, generating the CQI may include determining a PMI reporting scheme, where the PMI reporting scheme may include full PMI reporting, partial PMI reporting, or no PMI reporting, and deriving the CQI based on the determined PMI reporting scheme. In some cases, the two or more antenna ports include a first group of CSI-RS ports and a second group of CSI-RS ports, and generating the CQI includes applying the time offset value to the second group of CSI-RS ports relative to the first group of CSI-RS ports. In some examples, determining the transmission scheme used for generating the CQI may include identifying that the time offset value is greater than zero and the precoder cycling granularity value is not equal to a non-cycling indicator, where the determined transmission scheme is the second transmission scheme, which may include SCDD and RBG level precoder cycling. In some cases, the CQI is based on a PMI reporting scheme and the time offset value and the precoder cycling granularity value. Link adaptation component1030may perform link adaptation based on the determined transmission scheme and the CSI report. Transmission scheme component1035may identify that the time offset value is equal to zero and the precoder cycling granularity value is equal to a non-cycling indicator, where the determined transmission scheme is the first transmission scheme. Predetermined value indicator1040may transmit an indication of a predetermined set of time offset values and an indication of a predetermined set of precoder cycling granularity values via a MAC CE, or an RRC message, or DCI. In some cases, the predetermined set of precoder cycling granularity values includes multiples of a number of RBs and a non-cycling indicator, and a maximum multiple of the number of RBs is less than or equal to a smallest frequency granularity for CSI feedback and the non-cycling indicator is equal to zero. In some examples, the predetermined set of precoder cycling granularity values include a multiple of a number of RBs for CSI feedback or a portion of the number of RBs for CSI feedback. In some cases, predetermined value indicator1040may configure (e.g., directly to the UE) at least one of the time offset value or the precoder cycling granularity value via a MAC CE, or an RRC message, or DCI. For instance, a base station105may directly configure parameters to the UE115, then the UE115may generate CSI using the configuration, and subsequently report the CSI without transmitting the time offset and precoder cycling granularity. In other examples, the base station105may configure a predefined set of parameters for the UE115, then the UE115may select the preferred parameters and generate the CSI. The UE115may then report the CSI together with the preferred parameter selected from the set of parameters. Accordingly, the base station105may know which parameters the CSI is associated with. FIG.11shows a diagram of a system1100including a device1105in accordance with aspects of the present disclosure. Device1105may be an example of or include the components of base station105, for example, as described with reference toFIG.1. Device1105may include components for bi-directional voice and data communications including components for transmitting and receiving communications, including base station communications manager1115, processor1120, memory1125, software1130, transceiver1135, antenna1140, network communications manager1145, and inter-station communications manager1150. 
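The constraints stated above on a predetermined set of precoder cycling granularity values (cycling entries that are multiples of a number of RBs, a maximum not exceeding the smallest frequency granularity for CSI feedback, and a zero-valued non-cycling indicator) can be checked mechanically. The sketch below is one hypothetical consistency check; the function and parameter names are invented for illustration.

```python
# Hedged sketch of the consistency checks implied above for a predetermined
# set of precoder cycling granularity values: every non-zero entry is a
# multiple of the RB unit, the largest entry does not exceed the smallest
# frequency granularity configured for CSI feedback, and zero serves as
# the non-cycling indicator. Parameter names are illustrative only.

def validate_granularity_set(values: list[int],
                             rb_unit: int,
                             smallest_csi_granularity: int) -> bool:
    """Return True if the candidate set satisfies the stated constraints."""
    if 0 not in values:                      # non-cycling indicator present
        return False
    cycling = [v for v in values if v != 0]
    if any(v % rb_unit for v in cycling):    # multiples of the RB number
        return False
    return max(cycling, default=0) <= smallest_csi_granularity

assert validate_granularity_set([0, 2, 4, 8], rb_unit=2,
                                smallest_csi_granularity=8)
assert not validate_granularity_set([0, 3], rb_unit=2,
                                    smallest_csi_granularity=8)
```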
These components may be in electronic communication via one or more buses (e.g., bus1110). Device1105may communicate wirelessly with one or more UEs115. Processor1120may include an intelligent hardware device (e.g., a general-purpose processor, a DSP, a CPU, a microcontroller, an ASIC, an FPGA, a programmable logic device, a discrete gate or transistor logic component, a discrete hardware component, or any combination thereof). In some cases, processor1120may be configured to operate a memory array using a memory controller. In other cases, a memory controller may be integrated into processor1120. Processor1120may be configured to execute computer-readable instructions stored in a memory to perform various functions (e.g., functions or tasks supporting CSI feedback for semi-open-loop and open-loop schemes). Memory1125may include RAM and ROM. The memory1125may store computer-readable, computer-executable software1130including instructions that, when executed, cause the processor to perform various functions described herein. In some cases, the memory1125may contain, among other things, a BIOS which may control basic hardware or software operation such as the interaction with peripheral components or devices. Software1130may include code to implement aspects of the present disclosure, including code to support CSI feedback for semi-open-loop and open-loop schemes. Software1130may be stored in a non-transitory computer-readable medium such as system memory or other memory. In some cases, the software1130may not be directly executable by the processor but may cause a computer (e.g., when compiled and executed) to perform functions described herein. Transceiver1135may communicate bi-directionally, via one or more antennas, wired, or wireless links as described above. For example, the transceiver1135may represent a wireless transceiver and may communicate bi-directionally with another wireless transceiver. The transceiver1135may also include a modem to modulate the packets and provide the modulated packets to the antennas for transmission, and to demodulate packets received from the antennas. In some cases, the wireless device may include a single antenna1140. However, in some cases the device may have more than one antenna1140, which may be capable of concurrently transmitting or receiving multiple wireless transmissions. Network communications manager1145may manage communications with the core network (e.g., via one or more wired backhaul links). For example, the network communications manager1145may manage the transfer of data communications for client devices, such as one or more UEs115. Inter-station communications manager1150may manage communications with other base stations105, and may include a controller or scheduler for controlling communications with UEs115in cooperation with other base stations105. For example, the inter-station communications manager1150may coordinate scheduling for transmissions to UEs115for various interference mitigation techniques such as beamforming or joint transmission. In some examples, inter-station communications manager1150may provide an X2 interface within an LTE/LTE-A wireless communications network technology to provide communication between base stations105. FIG.12shows a flowchart illustrating a method1200in accordance with aspects of the present disclosure. The operations of method1200may be implemented by a UE115or its components as described herein. 
For example, the operations of method1200may be performed by a UE communications manager as described with reference toFIGS.4through7. In some examples, a UE115may execute a set of codes to control the functional elements of the device to perform the functions described below. Additionally or alternatively, the UE115may perform aspects of the functions described below using special-purpose hardware. At block1205the UE115may determine at least one of a time offset value between two or more antenna ports or a precoder cycling granularity value associated with a transmission scheme for CQI. The operations of block1205may be performed according to the methods described herein. In certain examples, aspects of the operations of block1205may be performed by a time offset and precoder granularity manager as described with reference toFIGS.4through7. At block1210the UE115may generate the CQI based on at least one of the determined time offset value or the determined precoder cycling granularity value. The operations of block1210may be performed according to the methods described herein. In certain examples, aspects of the operations of block1210may be performed by a CQI component as described with reference toFIGS.4through7. At block1215the UE115may transmit, in a CSI report, the generated CQI. The operations of block1215may be performed according to the methods described herein. In certain examples, aspects of the operations of block1215may be performed by a UE CSI report manager as described with reference toFIGS.4through7. FIG.13shows a flowchart illustrating a method1300in accordance with aspects of the present disclosure. The operations of method1300may be implemented by a UE115or its components as described herein. For example, the operations of method1300may be performed by a UE communications manager as described with reference toFIGS.4through7. In some examples, a UE115may execute a set of codes to control the functional elements of the device to perform the functions described below. Additionally or alternatively, the UE115may perform aspects of the functions described below using special-purpose hardware. At block1305the UE115may determine a transmission scheme for CQI, where the transmission scheme includes a first transmission scheme or a second transmission scheme. The operations of block1305may be performed according to the methods described herein. In certain examples, aspects of the operations of block1305may be performed by a transmission scheme component as described with reference toFIGS.4through7. At block1310the UE115may optionally select a time offset value from a predetermined set of time offset values, where the time offset value is associated with the determined transmission scheme. The operations of block1310may be performed according to the methods described herein. In certain examples, aspects of the operations of block1310may be performed by a time offset and precoder granularity manager as described with reference toFIGS.4through7. At block1315the UE115may optionally select a precoder cycling granularity value from a predetermined set of precoder cycling granularity values, where the precoder cycling granularity value is associated with the determined transmission scheme. The operations of block1315may be performed according to the methods described herein. In certain examples, aspects of the operations of block1315may be performed by a time offset and precoder granularity manager as described with reference toFIGS.4through7. 
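Blocks1310and1315leave open how the UE picks its preferred values from the predetermined sets. One plausible heuristic, keyed to the UE mobility parameter and delay spread mentioned earlier as inputs to this determination, is sketched below; the thresholds and the decision rule are assumptions for illustration, not part of the disclosure.

```python
# Illustrative selection heuristic for blocks1310and1315: the UE picks a
# time offset and a cycling granularity from the network-configured sets
# using channel statistics. The thresholds and the idea of keying the
# choice to Doppler and delay spread are assumptions; the disclosure
# leaves the selection criteria open.

def select_parameters(offsets: list[int], granularities: list[int],
                      doppler_hz: float, delay_spread_us: float):
    # Higher mobility (Doppler) favors a larger time offset between the
    # two CSI-RS port groups; a rich delay spread favors finer cycling.
    offset = max(offsets) if doppler_hz > 100.0 else min(offsets)
    cycling = [g for g in granularities if g != 0]  # 0 = non-cycling
    if delay_spread_us > 1.0 and cycling:
        granularity = min(cycling)   # cycle precoders on small RBGs
    else:
        granularity = 0              # stay with the non-cycling indicator
    return offset, granularity

assert select_parameters([0, 2], [0, 2, 4], 250.0, 2.0) == (2, 2)
```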
At block1320the UE115may generate the CQI based on at least one of the time offset value and the precoder cycling granularity value. The operations of block1320may be performed according to the methods described herein. In certain examples, aspects of the operations of block1320may be performed by a CQI component as described with reference toFIGS.4through7. At block1325the UE115may transmit, in a CSI report, the generated CQI. The operations of block1325may be performed according to the methods described herein. In certain examples, aspects of the operations of block1325may be performed by a UE CSI report manager as described with reference toFIGS.4through7. FIG.14shows a flowchart illustrating a method1400in accordance with aspects of the present disclosure. The operations of method1400may be implemented by a UE115or its components as described herein. For example, the operations of method1400may be performed by a UE communications manager as described with reference toFIGS.4through7. In some examples, a UE115may execute a set of codes to control the functional elements of the device to perform the functions described below. Additionally or alternatively, the UE115may perform aspects of the functions described below using special-purpose hardware. At block1405the UE115may determine a transmission scheme for CQI, where the transmission scheme comprises a first transmission scheme or a second transmission scheme. The operations of block1405may be performed according to the methods described herein. In certain examples, aspects of the operations of block1405may be performed by a transmission scheme component as described with reference toFIGS.4through7. At block1410the UE115may determine a PMI reporting scheme based on a quantity associated with the CSI report, where the PMI reporting scheme may include full PMI reporting, partial PMI reporting, or no PMI reporting. The operations of block1410may be performed according to the methods described herein. In certain examples, aspects of the operations of block1410may be performed by a CQI component as described with reference toFIGS.4through7. At block1415the UE115may determine at least one of a time offset value between two or more antenna ports or a precoder cycling granularity value associated with a transmission scheme for CQI. The operations of block1415may be performed according to the methods described herein. In certain examples, aspects of the operations of block1415may be performed by a time offset and precoder granularity manager as described with reference toFIGS.4through7. At block1420the UE115may derive the CQI based on the determined PMI reporting scheme. The operations of block1420may be performed according to the methods described herein. In certain examples, aspects of the operations of block1420may be performed by a CQI component as described with reference toFIGS.4through7. At block1425the UE115may transmit, in a CSI report, the generated CQI. The operations of block1425may be performed according to the methods described herein. In certain examples, aspects of the operations of block1425may be performed by a UE CSI report manager as described with reference toFIGS.4through7. FIG.15shows a flowchart illustrating a method1500in accordance with aspects of the present disclosure. The operations of method1500may be implemented by a base station105or its components as described herein. 
For example, the operations of method1500may be performed by a base station communications manager as described with reference toFIGS.8through11. In some examples, a base station105may execute a set of codes to control the functional elements of the device to perform the functions described below. Additionally or alternatively, the base station105may perform aspects of the functions described below using special-purpose hardware. At block1505the base station105may receive CQI in a CSI report, the CQI based on at least one of a time offset value between two or more antenna ports or a precoder cycling granularity value. The operations of block1505may be performed according to the methods described herein. In certain examples, aspects of the operations of block1505may be performed by a base station CSI report manager as described with reference toFIGS.8through11. At block1510the base station105may determine a transmission scheme used for generating the CQI based on at least one of the time offset value or the precoder cycling granularity value. The operations of block1510may be performed according to the methods described herein. In certain examples, aspects of the operations of block1510may be performed by a transmission scheme manager as described with reference toFIGS.8through11. At block1515the base station105may perform link adaptation based on the determined transmission scheme and the CSI report. The operations of block1515may be performed according to the methods described herein. In certain examples, aspects of the operations of block1515may be performed by a link adaptation component as described with reference toFIGS.8through11. FIG.16shows a flowchart illustrating a method1600in accordance with aspects of the present disclosure. The operations of method1600may be implemented by a base station105or its components as described herein. For example, the operations of method1600may be performed by a base station communications manager as described with reference toFIGS.8through11. In some examples, a base station105may execute a set of codes to control the functional elements of the device to perform the functions described below. Additionally or alternatively, the base station105may perform aspects of the functions described below using special-purpose hardware. At block1605the base station105may transmit an indication of a predetermined set of time offset values and an indication of a predetermined set of precoder cycling granularity values via a MAC CE, or an RRC message, or DCI. The operations of block1605may be performed according to the methods described herein. In certain examples, aspects of the operations of block1605may be performed by a predetermined value indicator as described with reference toFIGS.8through11. At block1610the base station105may receive CQI in a CSI report, the CQI based on at least one of a time offset value between two or more antenna ports or a precoder cycling granularity value. The operations of block1610may be performed according to the methods described herein. In certain examples, aspects of the operations of block1610may be performed by a base station CSI report manager as described with reference toFIGS.8through11. At block1615the base station105may determine a transmission scheme used for generating the CQI based on at least one of the time offset value or the precoder cycling granularity value. The operations of block1615may be performed according to the methods described herein. 
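Block1515above (and block1620below) culminates in link adaptation driven by the determined transmission scheme and the reported CQI. The following sketch shows one simplified CQI-to-MCS selection with an assumed back-off for open-loop operation; the table and the margin are toy values chosen for illustration, not the 3GPP CQI-to-MCS mapping.

```python
# Simplified sketch of blocks1515/1620: after the transmission scheme is
# determined, the base station adapts the link by mapping the reported CQI
# to a modulation and coding scheme. The table below is a toy stand-in.

MCS_TABLE = {  # CQI index -> (modulation order, coding rate)
    1: (2, 0.08), 4: (2, 0.30), 7: (4, 0.45),
    10: (6, 0.55), 13: (6, 0.75), 15: (8, 0.93),
}

def adapt_link(cqi: int, scheme: str) -> tuple[int, float]:
    """Pick the highest MCS whose CQI threshold is met, with an assumed
    margin applied for open-loop schemes where no PMI is reported."""
    if scheme != "closed-loop":
        cqi = max(1, cqi - 1)  # assumed back-off for open-loop operation
    eligible = [k for k in MCS_TABLE if k <= cqi]
    return MCS_TABLE[max(eligible)] if eligible else MCS_TABLE[1]

assert adapt_link(10, "closed-loop") == (6, 0.55)
assert adapt_link(10, "semi-open-loop (SCDD)") == (4, 0.45)
```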
In certain examples, aspects of the operations of block1615may be performed by a transmission scheme manager as described with reference toFIGS.8through11. At block1620the base station105may perform link adaptation based on the determined transmission scheme and the CSI report. The operations of block1620may be performed according to the methods described herein. In certain examples, aspects of the operations of block1620may be performed by a link adaptation component as described with reference toFIGS.8through11. In some examples, aspects from two or more of the described methods may be combined. It should be noted that the methods are just example implementations, and that the operations of the methods may be rearranged or otherwise modified such that other implementations are possible. Techniques described herein may be used for various wireless communications systems such as code division multiple access (CDMA), time division multiple access (TDMA), frequency division multiple access (FDMA), orthogonal frequency division multiple access (OFDMA), single carrier frequency division multiple access (SC-FDMA), and other systems. The terms “system” and “network” are often used interchangeably. A CDMA system may implement a radio technology such as CDMA2000, Universal Terrestrial Radio Access (UTRA), etc. CDMA2000 covers IS-2000, IS-95, and IS-856 standards. IS-2000 Releases may be commonly referred to as CDMA2000 1×, 1×, etc. IS-856 (TIA-856) is commonly referred to as CDMA2000 1×EV-DO, High Rate Packet Data (HRPD), etc. UTRA includes Wideband CDMA (WCDMA) and other variants of CDMA. A TDMA system may implement a radio technology such as Global System for Mobile Communications (GSM). An OFDMA system may implement a radio technology such as Ultra Mobile Broadband (UMB), Evolved UTRA (E-UTRA), Institute of Electrical and Electronics Engineers (IEEE) 802.11 (Wi-Fi), IEEE 802.16 (WiMAX), IEEE 802.20, Flash-OFDM, etc. UTRA and E-UTRA are part of Universal Mobile Telecommunications System (UMTS). LTE and LTE-A are releases of UMTS that use E-UTRA. UTRA, E-UTRA, UMTS, LTE, LTE-A, NR, and GSM are described in documents from the organization named “3rd Generation Partnership Project” (3GPP). CDMA2000 and UMB are described in documents from an organization named “3rd Generation Partnership Project 2” (3GPP2). The techniques described herein may be used for the systems and radio technologies mentioned above as well as other systems and radio technologies. While aspects of an LTE or an NR system may be described for purposes of example, and LTE or NR terminology may be used in much of the description, the techniques described herein are applicable beyond LTE or NR applications. In LTE/LTE-A networks, including such networks described herein, the term evolved node B (eNB) may be generally used to describe the base stations. The wireless communications system or systems described herein may include a heterogeneous LTE/LTE-A or NR network in which different types of eNBs provide coverage for various geographical regions. For example, each eNB, next generation NodeB (gNB), or base station may provide communication coverage for a macro cell, a small cell, or other types of cell. The term “cell” may be used to describe a base station, a carrier or component carrier associated with a base station, or a coverage area (e.g., sector, etc.) of a carrier or base station, depending on context. 
Base stations may include or may be referred to by those skilled in the art as a base transceiver station, a radio base station, an access point, a radio transceiver, a NodeB, eNodeB (eNB), gNB, Home NodeB, a Home eNodeB, or some other suitable terminology. The geographic coverage area for a base station may be divided into sectors making up only a portion of the coverage area. The wireless communications system or systems described herein may include base stations of different types (e.g., macro or small cell base stations). The UEs described herein may be able to communicate with various types of base stations and network equipment including macro eNBs, small cell eNBs, gNBs, relay base stations, and the like. There may be overlapping geographic coverage areas for different technologies. A macro cell generally covers a relatively large geographic area (e.g., several kilometers in radius) and may allow unrestricted access by UEs with service subscriptions with the network provider. A small cell is a lower-powered base station, as compared with a macro cell, that may operate in the same or different (e.g., licensed, unlicensed, etc.) frequency bands as macro cells. Small cells may include pico cells, femto cells, and micro cells according to various examples. A pico cell, for example, may cover a small geographic area and may allow unrestricted access by UEs with service subscriptions with the network provider. A femto cell may also cover a small geographic area (e.g., a home) and may provide restricted access by UEs having an association with the femto cell (e.g., UEs in a closed subscriber group (CSG), UEs for users in the home, and the like). An eNB for a macro cell may be referred to as a macro eNB. An eNB for a small cell may be referred to as a small cell eNB, a pico eNB, a femto eNB, or a home eNB. An eNB may support one or multiple (e.g., two, three, four, and the like) cells (e.g., component carriers). The wireless communications system or systems described herein may support synchronous or asynchronous operation. For synchronous operation, the base stations may have similar frame timing, and transmissions from different base stations may be approximately aligned in time. For asynchronous operation, the base stations may have different frame timing, and transmissions from different base stations may not be aligned in time. The techniques described herein may be used for either synchronous or asynchronous operations. The downlink transmissions described herein may also be called forward link transmissions while the uplink transmissions may also be called reverse link transmissions. Each communication link described herein—including, for example, wireless communications system100and200as described with reference toFIGS.1and2—may include one or more carriers, where each carrier may be a signal made up of multiple sub-carriers (e.g., waveform signals of different frequencies). The description set forth herein, in connection with the appended drawings, describes example configurations and does not represent all the examples that may be implemented or that are within the scope of the claims. The term “exemplary” used herein means “serving as an example, instance, or illustration,” and not “preferred” or “advantageous over other examples.” The detailed description includes specific details for the purpose of providing an understanding of the described techniques. These techniques, however, may be practiced without these specific details. 
In some instances, well-known structures and devices are shown in block diagram form in order to avoid obscuring the concepts of the described examples. In the appended figures, similar components or features may have the same reference label. Further, various components of the same type may be distinguished by following the reference label by a dash and a second label that distinguishes among the similar components. If just the first reference label is used in the specification, the description is applicable to any one of the similar components having the same first reference label irrespective of the second reference label. Information and signals described herein may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof. The various illustrative blocks and modules described in connection with the disclosure herein may be implemented or performed with a general-purpose processor, a DSP, an ASIC, an FPGA or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices (e.g., a combination of a DSP and a microprocessor, multiple microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration). The functions described herein may be implemented in hardware, software executed by a processor, firmware, or any combination thereof. If implemented in software executed by a processor, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Other examples and implementations are within the scope and spirit of the disclosure and appended claims. For example, due to the nature of software, functions described above can be implemented using software executed by a processor, hardware, firmware, hardwiring, or combinations of any of these. Features implementing functions may be physically located at various positions, including being distributed such that portions of functions are implemented at different physical locations. As used herein, including in the claims, the term “and/or,” when used in a list of two or more items, means that any one of the listed items can be employed by itself, or any combination of two or more of the listed items can be employed. For example, if a composition is described as containing components A, B, and/or C, the composition can contain A alone; B alone; C alone; A and B in combination; A and C in combination; B and C in combination; or A, B, and C in combination. Also, as used herein, including in the claims, “or” as used in a list of items (for example, a list of items prefaced by a phrase such as “at least one of” or “one or more of”) indicates an inclusive list such that, for example, a phrase referring to “at least one of” a list of items refers to any combination of those items, including single members. 
As an example, “at least one of: A, B, or C” is intended to cover A, B, C, A-B, A-C, B-C, and A-B-C, as well as any combination with multiples of the same element (e.g., A-A, A-A-A, A-A-B, A-A-C, A-B-B, A-C-C, B-B, B-B-B, B-B-C, C-C, and C-C-C or any other ordering of A, B, and C). As used herein, the phrase “based on” shall not be construed as a reference to a closed set of conditions. For example, an exemplary feature that is described as “based on condition A” may be based on both a condition A and a condition B without departing from the scope of the present disclosure. In other words, as used herein, the phrase “based on” shall be construed in the same manner as the phrase “based at least in part on.” Computer-readable media includes both non-transitory computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A non-transitory storage medium may be any available medium that can be accessed by a general purpose or special purpose computer. By way of example, and not limitation, non-transitory computer-readable media can comprise RAM, ROM, electrically erasable programmable read only memory (EEPROM), compact disk (CD) ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other non-transitory medium that can be used to carry or store desired program code means in the form of instructions or data structures and that can be accessed by a general-purpose or special-purpose computer, or a general-purpose or special-purpose processor. Also, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, include CD, laser disc, optical disc, digital versatile disc (DVD), floppy disk and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above are also included within the scope of computer-readable media. The description herein is provided to enable a person skilled in the art to make or use the disclosure. Various modifications to the disclosure will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other variations without departing from the scope of the disclosure. Thus, the disclosure is not limited to the examples and designs described herein, but is to be accorded the broadest scope consistent with the principles and novel features disclosed herein.
DETAILED DESCRIPTION The detailed description set forth below in connection with the appended drawings is intended as a description of various configurations and is not intended to represent the only configurations in which the concepts described herein may be practiced. The detailed description includes specific details for the purpose of providing a thorough understanding of various concepts. However, it will be apparent to those skilled in the art that these concepts may be practiced without these specific details. In some instances, well known structures and components are shown in block diagram form in order to avoid obscuring such concepts. Several aspects of telecommunication systems will now be presented with reference to various apparatus and methods. These apparatus and methods will be described in the following detailed description and illustrated in the accompanying drawings by various blocks, components, circuits, processes, algorithms, etc. (collectively referred to as “elements”). These elements may be implemented using electronic hardware, computer software, or any combination thereof. Whether such elements are implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. By way of example, an element, or any portion of an element, or any combination of elements may be implemented as a “processing system” that includes one or more processors. Examples of processors include microprocessors, microcontrollers, graphics processing units (GPUs), central processing units (CPUs), application processors, digital signal processors (DSPs), reduced instruction set computing (RISC) processors, systems on a chip (SoC), baseband processors, field programmable gate arrays (FPGAs), programmable logic devices (PLDs), state machines, gated logic, discrete hardware circuits, and other suitable hardware configured to perform the various functionality described throughout this disclosure. One or more processors in the processing system may execute software. Software shall be construed broadly to mean instructions, instruction sets, code, code segments, program code, programs, subprograms, software components, applications, software applications, software packages, routines, subroutines, objects, executables, threads of execution, procedures, functions, etc., whether referred to as software, firmware, middleware, microcode, hardware description language, or otherwise. Accordingly, in one or more example embodiments, the functions described may be implemented in hardware, software, or any combination thereof. If implemented in software, the functions may be stored on or encoded as one or more instructions or code on a computer-readable medium. Computer-readable media includes computer storage media. Storage media may be any available media that can be accessed by a computer. By way of example, and not limitation, such computer-readable media can comprise a random-access memory (RAM), a read-only memory (ROM), an electrically erasable programmable ROM (EEPROM), optical disk storage, magnetic disk storage, other magnetic storage devices, combinations of the types of computer-readable media, or any other medium that can be used to store computer executable code in the form of instructions or data structures that can be accessed by a computer. 
While aspects and implementations are described in this application by illustration to some examples, those skilled in the art will understand that additional implementations and use cases may come about in many different arrangements and scenarios. Innovations described herein may be implemented across many differing platform types, devices, systems, shapes, sizes, and packaging arrangements. For example, implementations and/or uses may come about via integrated chip implementations and other non-module-component based devices (e.g., end-user devices, vehicles, communication devices, computing devices, industrial equipment, retail/purchasing devices, medical devices, artificial intelligence (AI)-enabled devices, etc.). While some examples may or may not be specifically directed to use cases or applications, a wide assortment of applicability of described innovations may occur. Implementations may range in spectrum from chip-level or modular components to non-modular, non-chip-level implementations and further to aggregate, distributed, or original equipment manufacturer (OEM) devices or systems incorporating one or more aspects of the described innovations. In some practical settings, devices incorporating described aspects and features may also include additional components and features for implementation and practice of claimed and described aspects. For example, transmission and reception of wireless signals necessarily includes a number of components for analog and digital purposes (e.g., hardware components including antenna, RF-chains, power amplifiers, modulators, buffer, processor(s), interleaver, adders/summers, etc.). It is intended that innovations described herein may be practiced in a wide variety of devices, chip-level components, systems, distributed arrangements, aggregated or disaggregated components, end-user devices, etc. of varying sizes, shapes, and constitution. FIG.1is a diagram illustrating an example of a wireless communications system and an access network100. The wireless communications system (also referred to as a wireless wide area network (WWAN)) includes base stations102, UEs104, an Evolved Packet Core (EPC)160, and another core network190(e.g., a 5G Core (5GC)). The base stations102may include macrocells (high power cellular base station) and/or small cells (low power cellular base station). The macrocells include base stations. The small cells include femtocells, picocells, and microcells. Aspects presented herein may improve the performance and the accuracy of RF sensing. Aspects presented herein may enable RF sensing to be adaptive to the environment to improve the sensing performance and/or the spectrum efficiency of cellular systems and the power efficiency of RF sensing nodes (e.g., base station and/or UE). For example, in one aspect of the present disclosure, if a network is able to determine the direction of a target, the network may guide an RF sensing node (or a radar transmission (Tx)) to beamform toward the target to enhance the signal-to-noise ratio (SNR). In another example, if a network is able to determine that there is no small object to be sensed, the network may reduce the Tx power and/or the waveform repetitions of an RF sensing node to achieve power saving. In some examples, one or more features (e.g., parameters to be reported to a network to enable adaptive RF sensing) may be extracted/derived based on non-RF method(s), such as via a camera, an ultra-sound sensor, a lidar sensor, and/or a barometric sensor, etc. 
The non-RF based feature extraction/derivation may be implemented on an RF sensing node based on a real-time setting. Also, the non-RF based feature extraction/derivation may have fewer interference issues compared with RF-based feature extraction/derivation. In certain aspects, an RF sensing node, which may be a UE104or a base station102/180, may include an RF sensing component198configured to transmit RRS based at least in part on environmental conditions to improve the performance and the accuracy of RF sensing. In one configuration, the RF sensing component198may be configured to extract one or more features for a set of objects of an area via at least one non-RF sensor. In such configuration, the RF sensing component198may transmit, to a network entity, the one or more features or at least one non-RF measurement derived from the one or more features. In such configuration, the RF sensing component198may receive, from the network entity, an RRS transmission configuration derived based on the one or more features or the at least one non-RF measurement transmitted to the network entity. In certain aspects, a network entity, which may be a server for sensing, a location server, or an LMF, may include an RRS configuration component199configured to configure RRS transmission for an RF sensing node based at least in part on environmental conditions around the RF sensing node to improve the performance and the accuracy of RF sensing. In one configuration, the RRS configuration component199may be configured to receive, from an RF sensing node, one or more features for a set of objects of an area or at least one non-RF measurement derived from the one or more features. In such configuration, the RRS configuration component199may transmit, to the RF sensing node, an RRS transmission configuration derived based on the one or more features or the at least one non-RF measurement received from the RF sensing node. The base stations102configured for 4G LTE (collectively referred to as Evolved Universal Mobile Telecommunications System (UMTS) Terrestrial Radio Access Network (E-UTRAN)) may interface with the EPC160through first backhaul links132(e.g., S1 interface). The base stations102configured for 5G NR (collectively referred to as Next Generation RAN (NG-RAN)) may interface with core network190through second backhaul links184. In addition to other functions, the base stations102may perform one or more of the following functions: transfer of user data, radio channel ciphering and deciphering, integrity protection, header compression, mobility control functions (e.g., handover, dual connectivity), inter-cell interference coordination, connection setup and release, load balancing, distribution for non-access stratum (NAS) messages, NAS node selection, synchronization, radio access network (RAN) sharing, multimedia broadcast multicast service (MBMS), subscriber and equipment trace, RAN information management (RIM), paging, positioning, and delivery of warning messages. The base stations102may communicate directly or indirectly (e.g., through the EPC160or core network190) with each other over third backhaul links134(e.g., X2 interface). The first backhaul links132, the second backhaul links184, and the third backhaul links134may be wired or wireless. In some aspects, a base station102or180may be referred to as a RAN and may include aggregated or disaggregated components. 
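The exchange between the RF sensing component198and the RRS configuration component199can be pictured as a report/response pair. The sketch below mirrors the adaptation examples given earlier (beamforming toward a known target and relaxing transmit power and waveform repetitions when no small object must be sensed); the message shapes, field names, and thresholds are invented for illustration and are not taken from the disclosure.

```python
# Sketch of the exchange implied by components198and199: an RF sensing
# node reports non-RF features (e.g., from a camera or lidar) and the
# network returns an RRS transmission configuration.
from dataclasses import dataclass

@dataclass
class FeatureReport:
    target_bearing_deg: float | None   # direction of a detected target
    smallest_object_m: float | None    # size of the smallest sensed object

@dataclass
class RrsConfig:
    beam_bearing_deg: float | None     # beamform toward the target, if any
    tx_power_dbm: float
    waveform_repetitions: int

def derive_rrs_config(report: FeatureReport) -> RrsConfig:
    """Network-side derivation: beamform toward a known target to raise
    SNR; relax power and repetitions when no small object needs sensing."""
    if report.smallest_object_m is None or report.smallest_object_m > 1.0:
        return RrsConfig(report.target_bearing_deg, tx_power_dbm=10.0,
                         waveform_repetitions=1)   # power saving
    return RrsConfig(report.target_bearing_deg, tx_power_dbm=23.0,
                     waveform_repetitions=8)       # small target: max effort

cfg = derive_rrs_config(FeatureReport(target_bearing_deg=42.0,
                                      smallest_object_m=0.2))
assert cfg.waveform_repetitions == 8
```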
As an example of a disaggregated RAN, a base station may include a central unit (CU)103, one or more distributed units (DU)105, and/or one or more remote units (RU)109, as illustrated inFIG.1. A RAN may be disaggregated with a split between an RU109and an aggregated CU/DU. A RAN may be disaggregated with a split between the CU103, the DU105, and the RU109. A RAN may be disaggregated with a split between the CU103and an aggregated DU/RU. The CU103and the one or more DUs105may be connected via an F1 interface. A DU105and an RU109may be connected via a fronthaul interface. A connection between the CU103and a DU105may be referred to as a midhaul, and a connection between a DU105and an RU109may be referred to as a fronthaul. The connection between the CU103and the core network may be referred to as the backhaul. The RAN may be based on a functional split between various components of the RAN, e.g., between the CU103, the DU105, or the RU109. The CU may be configured to perform one or more aspects of a wireless communication protocol, e.g., handling one or more layers of a protocol stack, and the DU(s) may be configured to handle other aspects of the wireless communication protocol, e.g., other layers of the protocol stack. In different implementations, the split between the layers handled by the CU and the layers handled by the DU may occur at different layers of a protocol stack. As one non-limiting example, a DU105may provide a logical node to host a radio link control (RLC) layer, a medium access control (MAC) layer, and at least a portion of a physical (PHY) layer based on the functional split. An RU may provide a logical node configured to host at least a portion of the PHY layer and radio frequency (RF) processing. A CU103may host higher layer functions, e.g., above the RLC layer, such as a service data adaptation protocol (SDAP) layer and a packet data convergence protocol (PDCP) layer. In other implementations, the split between the layer functions provided by the CU, DU, or RU may be different. An access network may include one or more integrated access and backhaul (IAB) nodes111that exchange wireless communication with a UE104or other IAB node111to provide access and backhaul to a core network. In an IAB network of multiple IAB nodes, an anchor node may be referred to as an IAB donor. The IAB donor may be a base station102or180that provides access to a core network190or EPC160and/or control to one or more IAB nodes111. The IAB donor may include a CU103and a DU105. IAB nodes111may include a DU105and a mobile termination (MT)113. The DU105of an IAB node111may operate as a parent node, and the MT113may operate as a child node. The base stations102may wirelessly communicate with the UEs104. Each of the base stations102may provide communication coverage for a respective geographic coverage area110. There may be overlapping geographic coverage areas110. For example, the small cell102′ may have a coverage area110′ that overlaps the coverage area110of one or more macro base stations102. A network that includes both small cell and macrocells may be known as a heterogeneous network. A heterogeneous network may also include Home Evolved Node Bs (eNBs) (HeNBs), which may provide service to a restricted group known as a closed subscriber group (CSG).
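To make the CU/DU/RU functional split described above concrete, the following is a minimal, illustrative Python sketch. The layer-to-unit mapping follows the example split given above (CU above RLC; DU hosting RLC, MAC, and part of the PHY; RU hosting the remaining PHY and RF); the dictionary layout and all names are assumptions for illustration only, not an implementation of any particular standardized interface.

    # Illustrative mapping of protocol layers to CU/DU/RU, following the
    # example functional split described above. All names are hypothetical.
    FUNCTIONAL_SPLIT = {
        "CU": ["RRC", "SDAP", "PDCP"],      # higher layers, above RLC
        "DU": ["RLC", "MAC", "PHY-high"],   # RLC, MAC, and part of the PHY
        "RU": ["PHY-low", "RF"],            # remaining PHY and RF processing
    }

    def unit_hosting(layer: str) -> str:
        """Return which logical unit hosts a given protocol layer."""
        for unit, layers in FUNCTIONAL_SPLIT.items():
            if layer in layers:
                return unit
        raise ValueError(f"unknown layer: {layer}")

    print(unit_hosting("PDCP"))  # CU
    print(unit_hosting("MAC"))   # DU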
The communication links120between the base stations102and the UEs104may include uplink (UL) (also referred to as reverse link) transmissions from a UE104to a base station102and/or downlink (DL) (also referred to as forward link) transmissions from a base station102to a UE104. The communication links120may use multiple-input and multiple-output (MIMO) antenna technology, including spatial multiplexing, beamforming, and/or transmit diversity. The communication links may be through one or more carriers. The base stations102/UEs104may use spectrum up to Y MHz (e.g., 5, 10, 15, 20, 100, 400, etc. MHz) bandwidth per carrier allocated in a carrier aggregation of up to a total of Yx MHz (x component carriers) used for transmission in each direction. The carriers may or may not be adjacent to each other. Allocation of carriers may be asymmetric with respect to DL and UL (e.g., more or fewer carriers may be allocated for DL than for UL). The component carriers may include a primary component carrier and one or more secondary component carriers. A primary component carrier may be referred to as a primary cell (PCell) and a secondary component carrier may be referred to as a secondary cell (SCell). Certain UEs104may communicate with each other using device-to-device (D2D) communication link158. The D2D communication link158may use the DL/UL WWAN spectrum. The D2D communication link158may use one or more sidelink channels, such as a physical sidelink broadcast channel (PSBCH), a physical sidelink discovery channel (PSDCH), a physical sidelink shared channel (PSSCH), and a physical sidelink control channel (PSCCH). D2D communication may be through a variety of wireless D2D communications systems, such as for example, WiMedia, Bluetooth, ZigBee, Wi-Fi based on the Institute of Electrical and Electronics Engineers (IEEE) 802.11 standard, LTE, or NR. The wireless communications system may further include a Wi-Fi access point (AP)150in communication with Wi-Fi stations (STAs)152via communication links154, e.g., in a 5 GHz unlicensed frequency spectrum or the like. When communicating in an unlicensed frequency spectrum, the STAs152/AP150may perform a clear channel assessment (CCA) prior to communicating in order to determine whether the channel is available. The small cell102′ may operate in a licensed and/or an unlicensed frequency spectrum. When operating in an unlicensed frequency spectrum, the small cell102′ may employ NR and use the same unlicensed frequency spectrum (e.g., 5 GHz, or the like) as used by the Wi-Fi AP150. The small cell102′, employing NR in an unlicensed frequency spectrum, may boost coverage to and/or increase capacity of the access network. The electromagnetic spectrum is often subdivided, based on frequency/wavelength, into various classes, bands, channels, etc. In 5G NR, two initial operating bands have been identified as frequency range designations FR1 (410 MHz-7.125 GHz) and FR2 (24.25 GHz-52.6 GHz). Although a portion of FR1 is greater than 6 GHz, FR1 is often referred to (interchangeably) as a “sub-6 GHz” band in various documents and articles. A similar nomenclature issue sometimes occurs with regard to FR2, which is often referred to (interchangeably) as a “millimeter wave” band in documents and articles, despite being different from the extremely high frequency (EHF) band (30 GHz-300 GHz) which is identified by the International Telecommunications Union (ITU) as a “millimeter wave” band. The frequencies between FR1 and FR2 are often referred to as mid-band frequencies. 
Recent 5G NR studies have identified an operating band for these mid-band frequencies as frequency range designation FR3 (7.125 GHz-24.25 GHz). Frequency bands falling within FR3 may inherit FR1 characteristics and/or FR2 characteristics, and thus may effectively extend features of FR1 and/or FR2 into mid-band frequencies. In addition, higher frequency bands are currently being explored to extend 5G NR operation beyond 52.6 GHz. For example, three higher operating bands have been identified as frequency range designations FR2-2 (52.6 GHz-71 GHz), FR4 (52.6 GHz-114.25 GHz), and FR5 (114.25 GHz-300 GHz). Each of these higher frequency bands falls within the EHF band. With the above aspects in mind, unless specifically stated otherwise, it should be understood that the term “sub-6 GHz” or the like if used herein may broadly represent frequencies that may be less than 6 GHz, may be within FR1, or may include mid-band frequencies. Further, unless specifically stated otherwise, it should be understood that the term “millimeter wave” or the like if used herein may broadly represent frequencies that may include mid-band frequencies, may be within FR2, FR4, FR2-2, and/or FR5, or may be within the EHF band. A base station102, whether a small cell102′ or a large cell (e.g., macro base station), may include and/or be referred to as an eNB, gNodeB (gNB), or another type of base station. Some base stations, such as gNB180, may operate in a traditional sub-6 GHz spectrum, in millimeter wave frequencies, and/or near millimeter wave frequencies in communication with the UE104. When the gNB180operates in millimeter wave or near millimeter wave frequencies, the gNB180may be referred to as a millimeter wave base station. The millimeter wave base station180may utilize beamforming182with the UE104to compensate for the path loss and short range. The base station180and the UE104may each include a plurality of antennas, such as antenna elements, antenna panels, and/or antenna arrays to facilitate the beamforming. The base station180may transmit a beamformed signal to the UE104in one or more transmit directions182′. The UE104may receive the beamformed signal from the base station180in one or more receive directions182″. The UE104may also transmit a beamformed signal to the base station180in one or more transmit directions. The base station180may receive the beamformed signal from the UE104in one or more receive directions. The base station180/UE104may perform beam training to determine the best receive and transmit directions for each of the base station180/UE104. The transmit and receive directions for the base station180may or may not be the same. The transmit and receive directions for the UE104may or may not be the same. The EPC160may include a Mobility Management Entity (MME)162, other MMEs164, a Serving Gateway166, a Multimedia Broadcast Multicast Service (MBMS) Gateway168, a Broadcast Multicast Service Center (BM-SC)170, and a Packet Data Network (PDN) Gateway172. The MME162may be in communication with a Home Subscriber Server (HSS)174. The MME162is the control node that processes the signaling between the UEs104and the EPC160. Generally, the MME162provides bearer and connection management. All user Internet protocol (IP) packets are transferred through the Serving Gateway166, which itself is connected to the PDN Gateway172. The PDN Gateway172provides UE IP address allocation as well as other functions. The PDN Gateway172and the BM-SC170are connected to the IP Services176.
The IP Services176may include the Internet, an intranet, an IP Multimedia Subsystem (IMS), a PS Streaming Service, and/or other IP services. The BM-SC170may provide functions for MBMS user service provisioning and delivery. The BM-SC170may serve as an entry point for content provider MBMS transmission, may be used to authorize and initiate MBMS Bearer Services within a public land mobile network (PLMN), and may be used to schedule MBMS transmissions. The MBMS Gateway168may be used to distribute MBMS traffic to the base stations102belonging to a Multicast Broadcast Single Frequency Network (MBSFN) area broadcasting a particular service, and may be responsible for session management (start/stop) and for collecting eMBMS related charging information. The core network190may include an Access and Mobility Management Function (AMF)192, other AMFs193, a Session Management Function (SMF)194, and a User Plane Function (UPF)195. The AMF192may be in communication with a Unified Data Management (UDM)196. The AMF192is the control node that processes the signaling between the UEs104and the core network190. Generally, the AMF192provides QoS flow and session management. All user Internet protocol (IP) packets are transferred through the UPF195. The UPF195provides UE IP address allocation as well as other functions. The UPF195is connected to the IP Services197. The IP Services197may include the Internet, an intranet, an IP Multimedia Subsystem (IMS), a Packet Switch (PS) Streaming (PSS) Service, and/or other IP services. The base station may include and/or be referred to as a gNB, Node B, eNB, an access point, a base transceiver station, a radio base station, a radio transceiver, a transceiver function, a basic service set (BSS), an extended service set (ESS), a transmit reception point (TRP), or some other suitable terminology. The base station102provides an access point to the EPC160or core network190for a UE104. Examples of UEs104include a cellular phone, a smart phone, a session initiation protocol (SIP) phone, a laptop, a personal digital assistant (PDA), a satellite radio, a global positioning system, a multimedia device, a video device, a digital audio player (e.g., MP3 player), a camera, a game console, a tablet, a smart device, a wearable device, a vehicle, an electric meter, a gas pump, a large or small kitchen appliance, a healthcare device, an implant, a sensor/actuator, a display, or any other similar functioning device. Some of the UEs104may be referred to as IoT devices (e.g., parking meter, gas pump, toaster, vehicles, heart monitor, etc.). The UE104may also be referred to as a station, a mobile station, a subscriber station, a mobile unit, a subscriber unit, a wireless unit, a remote unit, a mobile device, a wireless device, a wireless communications device, a remote device, a mobile subscriber station, an access terminal, a mobile terminal, a wireless terminal, a remote terminal, a handset, a user agent, a mobile client, a client, or some other suitable terminology. In some scenarios, the term UE may also apply to one or more companion devices such as in a device constellation arrangement. One or more of these devices may collectively access the network and/or individually access the network. 
FIG.2Ais a diagram200illustrating an example of a first subframe within a 5G NR frame structure.FIG.2Bis a diagram230illustrating an example of DL channels within a 5G NR subframe.FIG.2Cis a diagram250illustrating an example of a second subframe within a 5G NR frame structure.FIG.2Dis a diagram280illustrating an example of UL channels within a 5G NR subframe. The 5G NR frame structure may be frequency division duplexed (FDD) in which for a particular set of subcarriers (carrier system bandwidth), subframes within the set of subcarriers are dedicated for either DL or UL, or may be time division duplexed (TDD) in which for a particular set of subcarriers (carrier system bandwidth), subframes within the set of subcarriers are dedicated for both DL and UL. In the examples provided byFIGS.2A,2C, the 5G NR frame structure is assumed to be TDD, with subframe 4 being configured with slot format 28 (with mostly DL), where D is DL, U is UL, and F is flexible for use between DL/UL, and subframe 3 being configured with slot format 1 (with all UL). While subframes 3, 4 are shown with slot formats 1, 28, respectively, any particular subframe may be configured with any of the various available slot formats 0-61. Slot formats 0, 1 are all DL, all UL, respectively. Other slot formats 2-61 include a mix of DL, UL, and flexible symbols. UEs are configured with the slot format (dynamically through DL control information (DCI), or semi-statically/statically through radio resource control (RRC) signaling) through a received slot format indicator (SFI). Note that the description infra applies also to a 5G NR frame structure that is TDD. FIGS.2A-2Dillustrate a frame structure, and the aspects of the present disclosure may be applicable to other wireless communication technologies, which may have a different frame structure and/or different channels. A frame (10 ms) may be divided into 10 equally sized subframes (1 ms). Each subframe may include one or more time slots. Subframes may also include mini-slots, which may include 7, 4, or 2 symbols. Each slot may include 14 or 12 symbols, depending on whether the cyclic prefix (CP) is normal or extended. For normal CP, each slot may include 14 symbols, and for extended CP, each slot may include 12 symbols. The symbols on DL may be CP orthogonal frequency division multiplexing (OFDM) (CP-OFDM) symbols. The symbols on UL may be CP-OFDM symbols (for high throughput scenarios) or discrete Fourier transform (DFT) spread OFDM (DFT-s-OFDM) symbols (also referred to as single carrier frequency-division multiple access (SC-FDMA) symbols) (for power limited scenarios; limited to a single stream transmission). The number of slots within a subframe is based on the CP and the numerology. The numerology defines the subcarrier spacing (SCS) and, effectively, the symbol length/duration, which is equal to 1/SCS.

    μ    SCS Δf = 2^μ · 15 [kHz]    Cyclic prefix
    0    15                         Normal
    1    30                         Normal
    2    60                         Normal, Extended
    3    120                        Normal
    4    240                        Normal

For normal CP (14 symbols/slot), different numerologies μ 0 to 4 allow for 1, 2, 4, 8, and 16 slots, respectively, per subframe. For extended CP, the numerology 2 allows for 4 slots per subframe. Accordingly, for normal CP and numerology μ, there are 14 symbols/slot and 2^μ slots/subframe. The subcarrier spacing may be equal to 2^μ · 15 kHz, where μ is the numerology 0 to 4. As such, the numerology μ=0 has a subcarrier spacing of 15 kHz and the numerology μ=4 has a subcarrier spacing of 240 kHz.
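These relationships can be summarized in a short Python sketch. The function name and return layout are illustrative; the quantities follow directly from the table above and the 1/SCS relation, with the cyclic prefix duration ignored:

    # Basic 5G NR numerology quantities: SCS = 2^mu * 15 kHz, 2^mu slots per
    # 1 ms subframe (normal CP, 14 symbols/slot), symbol duration ~ 1/SCS.
    def numerology_params(mu: int, normal_cp: bool = True):
        scs_khz = (2 ** mu) * 15
        slots_per_subframe = 2 ** mu
        symbols_per_slot = 14 if normal_cp else 12
        symbol_us = 1e3 / scs_khz            # 1/SCS, in microseconds
        slot_ms = 1.0 / slots_per_subframe   # a subframe is 1 ms
        return scs_khz, slots_per_subframe, symbols_per_slot, symbol_us, slot_ms

    # mu = 2: 60 kHz SCS, 4 slots/subframe, ~16.67 us symbols, 0.25 ms slots
    print(numerology_params(2))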
The symbol length/duration is inversely related to the subcarrier spacing.FIGS.2A-2Dprovide an example of normal CP with 14 symbols per slot and numerology μ=2 with 4 slots per subframe. The slot duration is 0.25 ms, the subcarrier spacing is 60 kHz, and the symbol duration is approximately 16.67 μs. Within a set of frames, there may be one or more different bandwidth parts (BWPs) (seeFIG.2B) that are frequency division multiplexed. Each BWP may have a particular numerology and CP (normal or extended). A resource grid may be used to represent the frame structure. Each time slot includes a resource block (RB) (also referred to as physical RBs (PRBs)) that extends 12 consecutive subcarriers. The resource grid is divided into multiple resource elements (REs). The number of bits carried by each RE depends on the modulation scheme. As illustrated inFIG.2A, some of the REs carry reference (pilot) signals (RS) for the UE. The RS may include demodulation RS (DM-RS) (indicated as R for one particular configuration, but other DM-RS configurations are possible) and channel state information reference signals (CSI-RS) for channel estimation at the UE. The RS may also include beam measurement RS (BRS), beam refinement RS (BRRS), and phase tracking RS (PT-RS). FIG.2Billustrates an example of various DL channels within a subframe of a frame. The physical downlink control channel (PDCCH) carries DCI within one or more control channel elements (CCEs) (e.g., 1, 2, 4, 8, or 16 CCEs), each CCE including six RE groups (REGs), each REG including 12 consecutive REs in an OFDM symbol of an RB. A PDCCH within one BWP may be referred to as a control resource set (CORESET). A UE is configured to monitor PDCCH candidates in a PDCCH search space (e.g., common search space, UE-specific search space) during PDCCH monitoring occasions on the CORESET, where the PDCCH candidates have different DCI formats and different aggregation levels. Additional BWPs may be located at greater and/or lower frequencies across the channel bandwidth. A primary synchronization signal (PSS) may be within symbol 2 of particular subframes of a frame. The PSS is used by a UE104to determine subframe/symbol timing and a physical layer identity. A secondary synchronization signal (SSS) may be within symbol 4 of particular subframes of a frame. The SSS is used by a UE to determine a physical layer cell identity group number and radio frame timing. Based on the physical layer identity and the physical layer cell identity group number, the UE can determine a physical cell identifier (PCI). Based on the PCI, the UE can determine the locations of the DM-RS. The physical broadcast channel (PBCH), which carries a master information block (MIB), may be logically grouped with the PSS and SSS to form a synchronization signal (SS)/PBCH block (also referred to as SS block (SSB)). The MIB provides a number of RBs in the system bandwidth and a system frame number (SFN). The physical downlink shared channel (PDSCH) carries user data, broadcast system information not transmitted through the PBCH such as system information blocks (SIBs), and paging messages. As illustrated inFIG.2C, some of the REs carry DM-RS (indicated as R for one particular configuration, but other DM-RS configurations are possible) for channel estimation at the base station. The UE may transmit DM-RS for the physical uplink control channel (PUCCH) and DM-RS for the physical uplink shared channel (PUSCH). The PUSCH DM-RS may be transmitted in the first one or two symbols of the PUSCH.
The PUCCH DM-RS may be transmitted in different configurations depending on whether short or long PUCCHs are transmitted and depending on the particular PUCCH format used. The UE may transmit sounding reference signals (SRS). The SRS may be transmitted in the last symbol of a subframe. The SRS may have a comb structure, and a UE may transmit SRS on one of the combs. The SRS may be used by a base station for channel quality estimation to enable frequency-dependent scheduling on the UL. FIG.2Dillustrates an example of various UL channels within a subframe of a frame. The PUCCH may be located as indicated in one configuration. The PUCCH carries uplink control information (UCI), such as scheduling requests, a channel quality indicator (CQI), a precoding matrix indicator (PMI), a rank indicator (RI), and hybrid automatic repeat request (HARQ) acknowledgment (ACK) (HARQ-ACK) feedback (i.e., one or more HARQ ACK bits indicating one or more ACK and/or negative ACK (NACK)). The PUSCH carries data, and may additionally be used to carry a buffer status report (BSR), a power headroom report (PHR), and/or UCI. FIG.3is a block diagram of a base station310in communication with a UE350in an access network. In the DL, IP packets from the EPC160may be provided to a controller/processor375. The controller/processor375implements layer 3 and layer 2 functionality. Layer 3 includes a radio resource control (RRC) layer, and layer 2 includes a service data adaptation protocol (SDAP) layer, a packet data convergence protocol (PDCP) layer, a radio link control (RLC) layer, and a medium access control (MAC) layer. The controller/processor375provides RRC layer functionality associated with broadcasting of system information (e.g., MIB, SIBs), RRC connection control (e.g., RRC connection paging, RRC connection establishment, RRC connection modification, and RRC connection release), inter radio access technology (RAT) mobility, and measurement configuration for UE measurement reporting; PDCP layer functionality associated with header compression/decompression, security (ciphering, deciphering, integrity protection, integrity verification), and handover support functions; RLC layer functionality associated with the transfer of upper layer packet data units (PDUs), error correction through ARQ, concatenation, segmentation, and reassembly of RLC service data units (SDUs), re-segmentation of RLC data PDUs, and reordering of RLC data PDUs; and MAC layer functionality associated with mapping between logical channels and transport channels, multiplexing of MAC SDUs onto transport blocks (TBs), demultiplexing of MAC SDUs from TBs, scheduling information reporting, error correction through HARQ, priority handling, and logical channel prioritization. The transmit (TX) processor316and the receive (RX) processor370implement layer 1 functionality associated with various signal processing functions. Layer 1, which includes a physical (PHY) layer, may include error detection on the transport channels, forward error correction (FEC) coding/decoding of the transport channels, interleaving, rate matching, mapping onto physical channels, modulation/demodulation of physical channels, and MIMO antenna processing. The TX processor316handles mapping to signal constellations based on various modulation schemes (e.g., binary phase-shift keying (BPSK), quadrature phase-shift keying (QPSK), M-phase-shift keying (M-PSK), M-quadrature amplitude modulation (M-QAM)). The coded and modulated symbols may then be split into parallel streams.
Each stream may then be mapped to an OFDM subcarrier, multiplexed with a reference signal (e.g., pilot) in the time and/or frequency domain, and then combined together using an Inverse Fast Fourier Transform (IFFT) to produce a physical channel carrying a time domain OFDM symbol stream. The OFDM stream is spatially precoded to produce multiple spatial streams. Channel estimates from a channel estimator374may be used to determine the coding and modulation scheme, as well as for spatial processing. The channel estimate may be derived from a reference signal and/or channel condition feedback transmitted by the UE350. Each spatial stream may then be provided to a different antenna320via a separate transmitter318TX. Each transmitter318TX may modulate a radio frequency (RF) carrier with a respective spatial stream for transmission. At the UE350, each receiver354RX receives a signal through its respective antenna352. Each receiver354RX recovers information modulated onto an RF carrier and provides the information to the receive (RX) processor356. The TX processor368and the RX processor356implement layer 1 functionality associated with various signal processing functions. The RX processor356may perform spatial processing on the information to recover any spatial streams destined for the UE350. If multiple spatial streams are destined for the UE350, they may be combined by the RX processor356into a single OFDM symbol stream. The RX processor356then converts the OFDM symbol stream from the time-domain to the frequency domain using a Fast Fourier Transform (FFT). The frequency domain signal comprises a separate OFDM symbol stream for each subcarrier of the OFDM signal. The symbols on each subcarrier, and the reference signal, are recovered and demodulated by determining the most likely signal constellation points transmitted by the base station310. These soft decisions may be based on channel estimates computed by the channel estimator358. The soft decisions are then decoded and deinterleaved to recover the data and control signals that were originally transmitted by the base station310on the physical channel. The data and control signals are then provided to the controller/processor359, which implements layer 3 and layer 2 functionality. The controller/processor359can be associated with a memory360that stores program codes and data. The memory360may be referred to as a computer-readable medium. In the UL, the controller/processor359provides demultiplexing between transport and logical channels, packet reassembly, deciphering, header decompression, and control signal processing to recover IP packets from the EPC160. The controller/processor359is also responsible for error detection using an ACK and/or NACK protocol to support HARQ operations. 
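As an illustration of the IFFT/FFT steps in the transmit and receive chains described above, the following minimal Python/NumPy sketch round-trips QPSK symbols through an OFDM modulator and demodulator. It deliberately omits the cyclic prefix, channel, spatial precoding, and coding, so it is a sketch of the transform steps only, not of the full layer 1 processing:

    import numpy as np

    n_sc = 64                                  # subcarriers (illustrative)
    bits = np.random.randint(0, 2, 2 * n_sc)
    # QPSK mapping: each bit pair becomes one complex constellation point
    symbols = ((1 - 2 * bits[0::2]) + 1j * (1 - 2 * bits[1::2])) / np.sqrt(2)

    tx_time = np.fft.ifft(symbols)             # transmitter: frequency -> time
    rx_freq = np.fft.fft(tx_time)              # receiver: time -> frequency

    # Hard-decision demodulation recovers the original bits
    rx_bits = np.empty_like(bits)
    rx_bits[0::2] = (rx_freq.real < 0).astype(int)
    rx_bits[1::2] = (rx_freq.imag < 0).astype(int)
    assert np.array_equal(bits, rx_bits)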
Similar to the functionality described in connection with the DL transmission by the base station310, the controller/processor359provides RRC layer functionality associated with system information (e.g., MIB, SIBs) acquisition, RRC connections, and measurement reporting; PDCP layer functionality associated with header compression/decompression, and security (ciphering, deciphering, integrity protection, integrity verification); RLC layer functionality associated with the transfer of upper layer PDUs, error correction through ARQ, concatenation, segmentation, and reassembly of RLC SDUs, re-segmentation of RLC data PDUs, and reordering of RLC data PDUs; and MAC layer functionality associated with mapping between logical channels and transport channels, multiplexing of MAC SDUs onto TBs, demultiplexing of MAC SDUs from TBs, scheduling information reporting, error correction through HARQ, priority handling, and logical channel prioritization. Channel estimates derived by a channel estimator358from a reference signal or feedback transmitted by the base station310may be used by the TX processor368to select the appropriate coding and modulation schemes, and to facilitate spatial processing. The spatial streams generated by the TX processor368may be provided to different antennas352via separate transmitters354TX. Each transmitter354TX may modulate an RF carrier with a respective spatial stream for transmission. The UL transmission is processed at the base station310in a manner similar to that described in connection with the receiver function at the UE350. Each receiver318RX receives a signal through its respective antenna320. Each receiver318RX recovers information modulated onto an RF carrier and provides the information to an RX processor370. The controller/processor375can be associated with a memory376that stores program codes and data. The memory376may be referred to as a computer-readable medium. In the UL, the controller/processor375provides demultiplexing between transport and logical channels, packet reassembly, deciphering, header decompression, and control signal processing to recover IP packets from the UE350. IP packets from the controller/processor375may be provided to the EPC160. The controller/processor375is also responsible for error detection using an ACK and/or NACK protocol to support HARQ operations. In some examples, at least one of the TX processor368, the RX processor356, and the controller/processor359may be configured to perform aspects in connection with the RF sensing component198ofFIG.1. In other examples, at least one of the TX processor316, the RX processor370, and the controller/processor375may be configured to perform aspects in connection with the RF sensing component198ofFIG.1. A network may support a number of cellular network-based positioning technologies, such as downlink-based, uplink-based, and/or downlink-and-uplink-based positioning methods. Downlink-based positioning methods may include an observed time difference of arrival (OTDOA) (e.g., in LTE), a downlink time difference of arrival (DL-TDOA) (e.g., in NR), and/or a downlink angle-of-departure (DL-AoD) (e.g., in NR).
In an OTDOA or DL-TDOA positioning procedure, a UE may measure the differences between the times of arrival (ToAs) of reference signals (e.g., positioning reference signals (PRSs)) received from pairs of base stations, referred to as reference signal time difference (RSTD) measurements or time difference of arrival (TDOA) measurements, and report them to a positioning entity (e.g., a location management function (LMF)). For example, the UE may receive identifiers (IDs) of a reference base station (which may also be referred to as a reference cell or a reference gNB) and at least one non-reference base station in assistance data (AD). The UE may then measure the RSTD between the reference base station and each of the non-reference base stations. Based on the known locations of the involved base stations and the RSTD measurements, the positioning entity may estimate a location of the UE. In other words, a position of the UE may be estimated based on measuring reference signals transmitted between the UE and one or more base stations and/or transmission-reception points (TRPs) of the one or more base stations. As such, the PRSs may enable UEs to detect and measure neighbor TRPs, and to perform positioning based on the measurement. For purposes of the present disclosure, the suffixes “-based” and “-assisted” may refer respectively to the node that is responsible for making the positioning calculation (and which may also provide measurements) and a node that provides measurements (but which may not make the positioning calculation). For example, an operation in which measurements are provided by a UE to a base station/positioning entity to be used in the computation of a position estimate may be described as “UE-assisted,” “UE-assisted positioning,” and/or “UE-assisted position calculation” while an operation in which a UE computes its own position may be described as “UE-based,” “UE-based positioning,” and/or “UE-based position calculation.” In some examples, the term “TRP” may refer to one or more antennas of a base station whereas the term “base station” may refer to a complete unit (e.g., the base station102/180) that includes aggregated or disaggregated components, such as described in connection withFIG.1. For example, as an example of a disaggregated RAN, a base station may include a CU, one or more DUs, one or more RUs, and/or one or more TRPs. One or more disaggregated components may be located at different locations. For example, different TRPs may be located at different geographic locations. In another example, a TRP may refer to a set of geographically co-located antennas (e.g., antenna array (with one or more antenna elements)) supporting transmission point (TP) and/or reception point (RP) functionality. Thus, a base station may transmit signals to and/or receive signals from other wireless devices (e.g., a UE, another base station, etc.) via one or more TRPs. For purposes of the present disclosure, in some examples, the term “TRP” may be used interchangeably with the term “base station.” For DL-AoD positioning, the positioning entity may use a beam report from the UE of received signal strength measurements of multiple downlink transmit beams to determine the angle(s) between the UE and the transmitting base station(s). The positioning entity may then estimate the location of the UE based on the determined angle(s) and the known location(s) of the transmitting base station(s). Uplink-based positioning methods may include UL-TDOA and UL-AoA.
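Before turning to those uplink-based methods, the RSTD computation described above can be sketched in a few lines of Python; the ToA values, cell names, and function name are purely illustrative assumptions:

    # RSTD = ToA(neighbor PRS) - ToA(reference PRS), per neighbor cell.
    def rstd_measurements(toa_reference_s, toas_neighbors):
        return {cell: toa - toa_reference_s
                for cell, toa in toas_neighbors.items()}

    toa_ref = 12.00e-6                         # seconds (illustrative)
    toas = {"gNB-1": 13.10e-6, "gNB-2": 12.45e-6}
    # These differences would be reported to the positioning entity (LMF)
    print(rstd_measurements(toa_ref, toas))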
UL-TDOA is similar to DL-TDOA, but is based on uplink reference signals (e.g., sounding reference signals (SRSs)) transmitted by the UE. For UL-AoA positioning, one or more base stations may measure the received signal strength of one or more uplink reference signals (e.g., SRSs) received from a UE on one or more uplink receive beams. The positioning entity may use the signal strength measurements and the angle(s) of the receive beam(s) to determine the angle(s) between the UE and the base station(s). Based on the determined angle(s) and the known location(s) of the base station(s), the positioning entity can then estimate the location of the UE. Downlink-and-uplink-based positioning methods may include enhanced cell-ID (E-CID) positioning and multi-round-trip-time (RTT) positioning (also referred to as “multi-cell RTT”). In an RTT procedure, an initiator (a base station or a UE) transmits an RTT measurement signal (e.g., a PRS or SRS) to a responder (a UE or a base station), which transmits an RTT response signal (e.g., an SRS or a PRS) back to the initiator. The RTT response signal may include the difference between the ToA of the RTT measurement signal and the transmission time of the RTT response signal, referred to as the reception-to-transmission (Rx-Tx) time difference. The initiator may calculate the difference between the transmission time of the RTT measurement signal and the ToA of the RTT response signal, referred to as the transmission-to-reception (Tx-Rx) time difference. The propagation time (also referred to as the “time of flight”) between the initiator and the responder may be calculated from the Tx-Rx and Rx-Tx time differences. Based on the propagation time and the known speed of light, the distance between the initiator and the responder may be determined. For multi-RTT positioning, a UE may perform an RTT procedure with multiple base stations to enable its location to be determined (e.g., using multilateration) based on the known locations of the base stations. RTT and multi-RTT methods may be combined with other positioning techniques, such as UL-AoA and DL-AoD, to improve location accuracy. The E-CID positioning method may be based on radio resource management (RRM) measurements. In E-CID, the UE may report the serving cell ID and the timing advance (TA), as well as the identifiers, estimated timing, and signal strength of detected neighbor base stations. The location of the UE is then estimated based on this information and the known locations of the base station(s). To assist positioning operations, a location server (e.g., an LMF, or an SLP) may provide assistance data (AD) to the UE. For example, the assistance data may include identifiers of the base stations (or the cells/TRPs of the base stations) from which to measure reference signals, the reference signal configuration parameters (e.g., the number of consecutive positioning subframes, periodicity of positioning subframes, muting sequence, frequency hopping sequence, reference signal identifier, reference signal bandwidth, etc.), and/or other parameters applicable to the particular positioning method. Alternatively, the assistance data may originate directly from the base stations (e.g., in periodically broadcasted overhead messages, etc.). In some cases, the UE may be able to detect neighbor network nodes without the use of assistance data. 
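Returning to the RTT procedure described above, the arithmetic reduces to a one-line computation. The following hedged sketch (illustrative names and values) derives the one-way propagation time from the initiator's Tx-Rx time difference and the responder's Rx-Tx time difference, and converts it to a distance:

    C = 299_792_458.0  # speed of light, m/s

    def rtt_distance(tx_rx_initiator_s: float, rx_tx_responder_s: float) -> float:
        # Subtracting the responder's turnaround time from the initiator's
        # round-trip measurement leaves twice the one-way time of flight.
        time_of_flight = (tx_rx_initiator_s - rx_tx_responder_s) / 2.0
        return C * time_of_flight

    # 10.2 us at the initiator, 10.0 us turnaround -> 0.1 us one way -> ~30 m
    print(rtt_distance(10.2e-6, 10.0e-6))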
In the case of an OTDOA or DL-TDOA positioning procedure, the assistance data may further include an expected RSTD value and an associated uncertainty (e.g., a search space window) around the expected RSTD. In some cases, the value range of the expected RSTD may be plus-minus (+/−) 500 microseconds (μs). In some cases, when any of the resources used for the positioning measurement are in FR1, the value range for the uncertainty of the expected RSTD may be +/−32 μs. In other cases, when all of the resources used for the positioning measurement(s) are in FR2, the value range for the uncertainty of the expected RSTD may be +/−8 μs. In this context, “RSTD” may refer to one or more measurements indicative of a difference in time of arrival between a PRS transmitted by a base station, referred to herein as a “neighbor base station” or a “measuring base station,” and a PRS transmitted by a reference base station. A reference base station may be selected by a location server and/or by a UE to provide good or sufficient signal strength observed at a UE, such that a PRS may be more accurately and/or more quickly acquired and/or measured, such as without any special assistance from a serving base station. A location estimate may also be referred to as a position estimate, location, position, position fix, fix, or the like. A location estimate may be geodetic and include coordinates (e.g., latitude, longitude, and possibly altitude) or may be civic and include a street address, postal address, or some other verbal description of a location. A location estimate may further be defined relative to some other known location or defined in absolute terms (e.g., using latitude, longitude, and possibly altitude). A location estimate may include an expected error or uncertainty (e.g., by including an area or volume within which the location is expected to be included with some specified or default level of confidence). For purposes of the present disclosure, reference signals may include PRS, tracking reference signals (TRS), phase tracking reference signals (PTRS), cell-specific reference signals (CRS), CSI-RS, demodulation reference signals (DMRS), PSS, SSS, SSBs, SRS, etc., depending on whether the illustrated frame structure is used for uplink or downlink communication. In some examples, a collection of resource elements (REs) that are used for transmission of PRS may be referred to as a “PRS resource.” The collection of resource elements may span multiple PRBs in the frequency domain and one or more consecutive symbol(s) within a slot in the time domain. In a given OFDM symbol in the time domain, a PRS resource may occupy consecutive PRBs in the frequency domain. In other examples, a “PRS resource set” may refer to a set of PRS resources used for the transmission of PRS signals, where each PRS resource may have a PRS resource ID. In addition, the PRS resources in a PRS resource set may be associated with a same TRP. A PRS resource set may be identified by a PRS resource set ID and may be associated with a particular TRP (e.g., identified by a TRP ID). In addition, the PRS resources in a PRS resource set may have a same periodicity, a common muting pattern configuration, and/or a same repetition factor across slots. The periodicity may be a time from a first repetition of a first PRS resource of a first PRS instance to the same first repetition of the same first PRS resource of the next PRS instance. 
For example, the periodicity may have a length selected from 2^μ·{4, 5, 8, 10, 16, 20, 32, 40, 64, 80, 160, 320, 640, 1280, 2560, 5120, 10240} slots, where μ=0, 1, 2, 3. The repetition factor may have a length selected from {1, 2, 4, 6, 8, 16, 32} slots. A PRS resource ID in a PRS resource set may be associated with a single beam (or beam ID) transmitted from a single TRP (where a TRP may transmit one or more beams). That is, each PRS resource of a PRS resource set may be transmitted on a different beam, and as such, a “PRS resource,” or simply “resource,” also can be referred to as a “beam.” In some examples, a “PRS instance” or “PRS occasion” may be one instance of a periodically repeated time window (such as a group of one or more consecutive slots) where PRS are expected to be transmitted. A PRS occasion also may be referred to as a “PRS positioning occasion,” a “PRS positioning instance,” a “positioning occasion,” “a positioning instance,” a “positioning repetition,” or simply an “occasion,” an “instance,” and/or a “repetition,” etc. A positioning frequency layer (PFL) (which may also be referred to as a “frequency layer”) may be a collection of one or more PRS resource sets across one or more TRPs that have the same values for certain parameters. Specifically, the collection of PRS resource sets may have a same subcarrier spacing and cyclic prefix (CP) type (e.g., meaning all numerologies supported for PDSCHs are also supported for PRS), the same Point A, the same value of the downlink PRS bandwidth, the same start PRB (and center frequency), and/or the same comb-size, etc. The Point A parameter may take the value of a parameter ARFCN-ValueNR (where “ARFCN” stands for “absolute radio-frequency channel number”) and may be an identifier/code that specifies a pair of physical radio channels used for transmission and reception. In some examples, a downlink PRS bandwidth may have a granularity of four PRBs, with a minimum of 24 PRBs and a maximum of 272 PRBs. In other examples, up to four frequency layers may be configured, and up to two PRS resource sets may be configured per TRP per frequency layer. The concept of a frequency layer may be similar to a component carrier (CC) and a BWP, where CCs and BWPs may be used by one base station (or a macro cell base station and a small cell base station) to transmit data channels, while frequency layers may be used by multiple (e.g., three or more) base stations to transmit PRS. A UE may indicate the number of frequency layers it is capable of supporting when the UE sends the network its positioning capabilities, such as during a positioning protocol session. For example, a UE may indicate whether it is capable of supporting one or four PFLs. FIG.4is a diagram400illustrating an example of a UE positioning based on reference signal measurements in accordance with various aspects of the present disclosure. In one example, a location of UE404may be estimated based on multi-cell round trip time (multi-RTT) measurements, where multiple TRPs402may perform round trip time (RTT) measurements for signals transmitted to and received from the UE404to determine the approximate distance of UE404with respect to each of the multiple TRPs402. Similarly, the UE404may perform RTT measurements for signals transmitted to and received from the TRPs402to determine the approximate distance of each TRP with respect to the UE404.
Then, based at least in part on the approximate distances of UE404with respect to the multiple TRPs402, a location management function (LMF) that is associated with the TRPs402and/or the UE404may estimate the position of UE404. For example, a TRP406may transmit at least one downlink positioning reference signal (DL-PRS)410to the UE404, and may receive at least one uplink sounding reference signal (UL-SRS)412transmitted from the UE404. Based at least in part on measuring an RTT414between the DL-PRS410transmitted and the UL-SRS412received, a serving base station associated with the TRP406or an LMF associated with the TRP406may identify the position of UE404(e.g., distance) with respect to the TRP406. Similarly, the UE404may transmit UL-SRS412to the TRP406, and may receive DL-PRS410transmitted from the TRP406. Based at least in part on measuring the RTT414between the UL-SRS412transmitted and the DL-PRS410received, the UE404or an LMF associated with the UE404may identify the position of TRP406with respect to the UE404. The multi-RTT measurement mechanism may be initiated by the LMF that is associated with the TRP406/408and/or the UE404. A TRP may configure UL-SRS resources to a UE via radio resource control (RRC) signaling. In some examples, the UE and the TRP may report the multi-RTT measurements to the LMF, and the LMF may estimate the position of the UE based on the reported multi-RTT measurements. In other examples, a position of a UE may be estimated based on multiple antenna beam measurements, where a downlink angle of departure (DL-AoD) and/or uplink angle of arrival (UL-AoA) of transmissions between a UE and one or more TRPs may be used to estimate the position of the UE and/or the distance of the UE with respect to each TRP. For example, referring back toFIG.4, with regard to the DL-AoD, the UE404may perform reference signal received power (RSRP) measurements for a set of DL-PRS416transmitted from multiple transmitting beams (e.g., DL-PRS beams) of a TRP408, and the UE404may provide the DL-PRS beam measurements to a serving base station (or to the LMF associated with the base station). Based on the DL-PRS beam measurements, the serving TRP or the LMF may derive the azimuth angle (e.g., Φ) of departure and the zenith angle (e.g., θ) of departure for DL-PRS beams of the TRP408. Then, the serving TRP or the LMF may estimate the position of UE404with respect to the TRP408based on the azimuth angle of departure and the zenith angle of departure of the DL-PRS beams. Similarly, for the UL-AoA, a position of a UE may be estimated based on UL-SRS beam measurements measured at different TRPs, such as at the TRPs402. Based on the UL-SRS beam measurements, a serving base station or an LMF associated with the serving base station may derive the azimuth angle of arrival and the zenith angle of arrival for UL-SRS beams from the UE, and the serving base station or the LMF may estimate the position of the UE and/or the UE distance with respect to each of the TRPs based on the azimuth angle of arrival and the zenith angle of arrival of the UL-SRS beams. FIG.5Ais a diagram500A illustrating an example of DL-PRS transmitted from multiple TRPs in accordance with various aspects of the present disclosure. In one example, a serving base station may configure DL-PRS to be transmitted from one or more TRPs within a slot or across multiple slots.
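Before turning to the DL-PRS configuration details, the position-estimation step described above (combining per-TRP distances, e.g., from multi-RTT measurements) can be illustrated with a standard linearized least-squares multilateration. This NumPy sketch uses made-up coordinates and is not tied to any particular LMF implementation:

    import numpy as np

    def multilaterate(trp_xy: np.ndarray, dists: np.ndarray) -> np.ndarray:
        # Subtract the range equation of the first TRP from the others to
        # obtain a linear system A @ p = b in the unknown position p = (x, y).
        x0, y0 = trp_xy[0]
        d0 = dists[0]
        A, b = [], []
        for (xi, yi), di in zip(trp_xy[1:], dists[1:]):
            A.append([2 * (xi - x0), 2 * (yi - y0)])
            b.append(d0**2 - di**2 + xi**2 - x0**2 + yi**2 - y0**2)
        p, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
        return p

    trps = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
    d = np.linalg.norm(trps - np.array([30.0, 40.0]), axis=1)
    print(multilaterate(trps, d))              # ~[30. 40.]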
If the DL-PRS is configured to be transmitted within a slot, the serving base station may configure the starting resource element in time and frequency from each of the one or more TRPs. If the DL-PRS is configured to be transmitted across multiple slots, the serving base station may configure gaps between DL-PRS slots, periodicity of the DL-PRS, and/or density of the DL-PRS within a period. The serving base station may also configure the DL-PRS to start at any physical resource block (PRB) in the system bandwidth. In one example, the system bandwidth may range from 24 to 276 PRBs in steps of 4 PRBs (e.g., 24, 28, 32, 36, etc.). The serving base station may transmit the DL-PRS in PRS beams, where a PRS beam may be referred to as a “PRS resource” and a full set of PRS beams transmitted from a TRP on a same frequency may be referred to as a “PRS resource set” or a “resource set of PRS,” such as described in connection withFIG.4. As shown byFIG.5A, the DL-PRS transmitted from different TRPs and/or from different PRS beams may be multiplexed across symbols or slots. In some examples, each symbol of the DL-PRS may be configured with a comb-structure in frequency, where the DL-PRS from a TRP of a base station may occupy every Nth subcarrier. The comb value N may be configured to be 2, 4, 6, or 12. The length of the PRS within one slot may be a multiple of N symbols and the position of the first symbol within a slot may be flexible as long as the slot consists of at least N PRS symbols. The diagram500A shows an example of a comb-6 DL-PRS configuration, where the pattern for the DL-PRS from different TRPs may be repeated after six (6) symbols. FIG.5Bis a diagram500B illustrating an example of UL-SRS transmitted from a UE in accordance with various aspects of the present disclosure. In one example, the UL-SRS from a UE may be configured with a comb-4 pattern, where the pattern for UL-SRS may be repeated after four (4) symbols. Similarly, the UL-SRS may be configured in an SRS resource of an SRS resource set, where each SRS resource may correspond to an SRS beam, and the SRS resource sets may correspond to a collection of SRS resources (e.g., beams) configured for a TRP. In some examples, the SRS resources may span 1, 2, 4, 8, or 12 consecutive OFDM symbols. In other examples, the comb size for the UL-SRS may be configured to be 2, 4, or 8. FIG.6is a diagram600illustrating an example of estimating a position of a UE based on multi-RTT measurements from multiple TRPs in accordance with various aspects of the present disclosure. A UE602may be configured by a serving base station to decode DL-PRS resources612that correspond to and are transmitted from a first TRP604(TRP-1), a second TRP606(TRP-2), a third TRP608(TRP-3), and a fourth TRP610(TRP-4). The UE602may also be configured to transmit UL-SRSs on a set of UL-SRS resources, which may include a first SRS resource614, a second SRS resource616, a third SRS resource618, and a fourth SRS resource620, such that the serving cell(s), e.g., the first TRP604, the second TRP606, the third TRP608, and the fourth TRP610, as well as other neighbor cell(s), may be able to measure the set of the UL-SRS resources transmitted from the UE602.
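Returning briefly to the comb structure described above, the following sketch enumerates which subcarriers a comb-N PRS occupies in each symbol. The per-symbol frequency offsets are configuration-specific, so the offset sequence used here is an illustrative assumption:

    # Comb-N PRS pattern: within a symbol, PRS occupies every Nth subcarrier,
    # with a per-symbol offset so successive symbols fill the other combs.
    def comb_pattern(comb_n, n_subcarriers, symbol_offsets):
        return {sym: list(range(off, n_subcarriers, comb_n))
                for sym, off in enumerate(symbol_offsets)}

    # Comb-6 over one PRB (12 subcarriers), assumed offsets 0, 3, 1, 4, 2, 5:
    for sym, res in comb_pattern(6, 12, [0, 3, 1, 4, 2, 5]).items():
        print(sym, res)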
For multi-RTT measurements based on DL-PRS and UL-SRS, as there may be an association between a measurement of a UE for the DL-PRS and a measurement of a TRP for the UL-SRS, the smaller the gap is between the DL-PRS measurement of the UE and the UL-SRS transmission of the UE, the better the accuracy may be for estimating the position of the UE and/or the distance of the UE with respect to each TRP. Note that the terms “positioning reference signal” and “PRS” generally refer to specific reference signals that are used for positioning in NR and LTE systems. However, as used herein, the terms “positioning reference signal” and “PRS” may also refer to any type of reference signal that can be used for positioning, such as but not limited to, PRS as defined in LTE and NR, TRS, PTRS, CRS, CSI-RS, DMRS, PSS, SSS, SSB, SRS, UL-PRS, etc. In addition, the terms “positioning reference signal” and “PRS” may refer to downlink or uplink positioning reference signals, unless otherwise indicated by the context. To further distinguish the type of PRS, a downlink positioning reference signal may be referred to as a “DL-PRS,” and an uplink positioning reference signal (e.g., an SRS-for-positioning, PTRS) may be referred to as an “UL-PRS.” In addition, for signals that may be transmitted in both the uplink and downlink (e.g., DMRS, PTRS), the signals may be prepended with “UL” or “DL” to distinguish the direction. For example, “UL-DMRS” may be differentiated from “DL-DMRS.” In addition to network-based UE positioning technologies, a wireless device (e.g., a base station, a UE, etc.) may also be configured to include radar capabilities, which may be referred to as “radio frequency (RF) sensing” and/or “cellular-based RF sensing.” For example, a wireless device may transmit radar reference signals (RRSs) and measure the RRSs reflected from one or more objects. Based at least in part on the measurement, the wireless device may determine or estimate a distance between the wireless device and the one or more objects. In another example, a first wireless device may also receive RRSs transmitted from one or more wireless devices, where the first wireless device may determine or estimate a distance between the first wireless device and one or more wireless devices based at least in part on the received RRS. As such, in some examples, RF sensing techniques may be used for UE positioning and/or for assisting UE positioning. For purposes of the present disclosure, a device that is capable of performing RF sensing (e.g., transmitting and/or receiving RRS for detecting an object) may be referred to as an “RF sensing node.” For example, an RF sensing node may be a UE, a base station, a TRP, a device capable of transmitting RRS, and/or a device configured to perform radar functions, etc. FIG.7is a diagram700illustrating an example radar signal (e.g., RRS) generated from an RF sensing node in accordance with various aspects of the present disclosure. An RF sensing node703may detect an object720(e.g., the location, the distance, and/or the speed of the object720with respect to the RF sensing node703) by transmitting RRS towards the object720and receiving the RRS reflected (e.g., bounce off) from the object720. In some examples, the object720may be a radar receiver or have a capability to receive and process RRS. In one example, the RRS may be a chirp signal that includes a frequency that varies linearly (e.g., has a frequency sweeping) over a fixed period of time (e.g., over a sweep time) by a modulating signal.
For example, as shown by the diagram700, a transmitted chirp signal702may have a starting frequency at704of a sinusoid. Then, the frequency may gradually (e.g., linearly) increase on the sinusoid until it reaches an ending (or highest) frequency at706of the sinusoid, and then the frequency of the signal may return to the starting frequency as shown at708and another chirp signal710may be transmitted in the same way. In other words, each chirp signal may include an increase in frequency (e.g., linearly) and a drop in frequency or vice versa (e.g., including a decrease in frequency and then an increase in frequency), such that the RF sensing node703may transmit chirp signals sweeping in frequency. In some examples, such chirp signal may also be referred to as a frequency modulated continuous wave (FMCW). After a chirp signal (e.g., chirp signal702,710,712, etc.) is transmitted by the RF sensing node703, the transmitted chirp signal may reach the object720and reflect back to the RF sensing node703, such as shown by the reflected chirp signals714,716, and718, which may correspond to the transmitted chirp signals702,710, and712, respectively. As there may be a distance between the RF sensing node703and the object720and/or it may take time for a transmitted chirp signal to reach the object720and reflect back to the RF sensing node703, a delay may exist between a transmitted chirp signal and its corresponding reflected chirp signal. As the delay may be proportional to a range between the RF sensing node703and the object720(e.g., the further the target, the larger the delay and vice versa), the RF sensing node703may be able to measure or estimate a distance between the RF sensing node703and the object720based on the delay. In some examples, the RF sensing node703may also measure a difference in frequency between the transmitted chirp signal and the reflected chirp signal, which may also be proportional to the distance between the RF sensing node703and the object720. In other words, as the frequency difference between the reflected chirp signal and the transmitted chirp signal increases with the delay, and the delay is linearly proportional to the range, the distance of the object720from the RF sensing node703may also be determined based on the difference in frequency. Thus, the reflected chirp signal from the object720may be mixed with the transmitted chirp signal and down-converted to produce a beat signal (fb), which may be linearly proportional to the range after demodulation. For example, the RF sensing node703may determine a beat signal722by mixing the transmitted chirp signal702and its corresponding reflected chirp signal714. While examples in the diagram illustrate using an FMCW waveform for the RRS, other types of radar waveforms may also be used by the RF sensing node703for the RRS. Due to an increased amount of bandwidth (BW) being allocated for cellular communications systems (e.g., 5G and beyond) and an increasing number of applications (e.g., use cases) being introduced with cellular communications systems, joint communication and RF sensing may become an important feature for cellular systems. For example, a wireless device (e.g., a base station, a UE, an RF sensing node, etc.) may be configured to transmit communication signals (e.g., PDSCH, PUSCH, etc.) with radar signals (e.g., RRS) together or simultaneously.
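Returning to the beat-signal relationship described above: since the beat frequency is linearly proportional to the round-trip delay, the range computation can be sketched as follows (the sweep bandwidth, sweep time, and beat frequency are illustrative values):

    C = 299_792_458.0  # speed of light, m/s

    def fmcw_range(beat_hz: float, sweep_s: float, bandwidth_hz: float) -> float:
        # Chirp slope is B/T; delay = f_b / slope; one-way range = c*delay/2.
        slope = bandwidth_hz / sweep_s
        delay = beat_hz / slope
        return C * delay / 2.0

    # 150 MHz sweep over 100 us with a 1 MHz beat frequency -> ~100 m
    print(fmcw_range(1e6, 100e-6, 150e6))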
In addition, the OFDM waveform (or its variants) may likely be considered as the waveform for joint communication/RF sensing, as the OFDM waveform may enable in-band multiplexing with other cellular reference signals and physical channels. As such, the radar signals may be multiplexed with communication signals based on the OFDM waveform. For purposes of the present disclosure, a wireless device that performs RF sensing based on OFDM waveform(s) or transmits RRS based on OFDM waveform(s) may be referred to as an "OFDM radar."

For some RF sensing designs, the waveform configuration and/or the resource allocation may be based on upper bound(s) of a performance metric, such as based on the maximum range for an RF signal to be transmitted by an RF sensing node, or the range/Doppler resolution to be provided by the RF sensing node, etc. In some scenarios, such an RF sensing design approach may not be flexible enough to be adaptive to a dynamic environment, such as when the RF sensing is to be deployed over a wide outdoor area. For example, as many objects (e.g., animals, vehicles, humans) may be moving in an outdoor area, an RF sensing design that is based on the upper bound(s) of a performance metric may sometimes be unable to provide accurate object detection, as the design may not have taken the moving objects into consideration.

Aspects presented herein may improve the performance and the accuracy of RF sensing. Aspects presented herein may enable RF sensing to be adaptive to the environment to improve the sensing performance and/or the spectrum efficiency of cellular systems and the power efficiency of RF sensing nodes (e.g., base stations and/or UEs). For example, in one aspect of the present disclosure, if a network is able to determine the direction of a target, the network may guide an RF sensing node (or a radar transmission (Tx)) to beamform toward the target to enhance the signal-to-noise ratio (SNR). In another example, if a network is able to determine that there is no small object to be sensed, the network may reduce the Tx power and/or the waveform repetitions of an RF sensing node to achieve power saving. In some examples, one or more features (e.g., parameters to be reported to a network to enable adaptive RF sensing) may be extracted/derived based on non-RF method(s), such as via a camera, an ultra-sound sensor, a light detection and ranging (lidar) sensor, and/or a barometric sensor, etc. The non-RF based feature extraction/derivation may be implemented on an RF sensing node based on a real-time setting. Also, the non-RF based feature extraction/derivation may have fewer interference issues compared with RF-based feature extraction/derivation.

FIG.8is a communication flow800illustrating an example of RF sensing based on non-RF measurement in accordance with various aspects of the present disclosure. The numberings associated with the communication flow800do not specify a particular temporal order and are merely used as references for the communication flow800. An RF sensing node802(e.g., a UE, a base station, a device capable of RF sensing, etc.) may be configured by a network entity (e.g., a server for sensing, which may include or may be a location server and/or an LMF, etc.) to perform RF sensing for an area. The RF sensing node802may include at least one non-RF sensor810. For purposes of the present disclosure, a non-RF sensor may refer to a sensor that does not transmit RF signals.
For example, a non-RF sensor may be a camera, an ultra-sound sensor, a light detection and ranging (lidar) sensor, and/or a barometric sensor, etc. In addition, a "server for sensing" or a "sensing server" may refer to an entity that is at least partially responsible for managing or controlling the RF sensing performed by an RF sensing node. Thus, the server for sensing or the sensing server may be a location server/LMF or be part of a location server/LMF depending on the network implementation.

As shown at816, during an RF sensing session, the RF sensing node802may use the at least one non-RF sensor810to extract one or more features806for one or more objects808of the area. In one example, the one or more features806may include the RCS, the size, the shape, the classification, the material, the orientation, the location, and/or the speed of the one or more objects808.

FIG.9is a diagram900illustrating an example of extracting features for one or more objects based on a non-RF sensor in accordance with various aspects of the present disclosure. An RF sensing node (e.g., the RF sensing node802) with a camera (e.g., the non-RF sensor810) may be used to capture images of an area that is to be sensed by the RF sensing node, such as shown at902. Then, as shown at904, the captured images may be processed by an artificial intelligence (AI) processing unit associated with the camera, such as by a neural signal processor (NSP), to generate a category map (which may be based on a segmentation process) that identifies objects in the captured images, such as shown at906. For example, the category map may show that the captured images include multiple objects (e.g., the one or more objects808inFIG.8), such as clouds, trees, a human, an aircraft, and a building, etc. Then, as shown at908, the category map may further be processed by an image signal processor (ISP) unit/hardware associated with the camera, where the ISP unit/hardware may adjust the color of different segments of the captured images based on the category map and offline tuning data to produce a processed image (e.g., an image with extracted features). As such, the RF sensing node may extract one or more features of an area to be sensed using a non-RF sensor.

Referring back toFIG.8, at818, after the RF sensing node802extracts the one or more features806for the one or more objects808of the area, the RF sensing node802may derive or compute one or more non-RF measurements812for the one or more objects808based on the one or more features806extracted. For example, in one aspect of the present disclosure, the RF sensing node802may derive a radar cross-section (RCS) for the one or more objects808based on the classification, size, shape, material, and/or orientation of the one or more objects808. The RCS may refer to a measurement indicating how detectable an object may be by a radar device, which may also be referred to as an electromagnetic signature of the object in some examples. For example, a larger RCS may indicate that an object may be more easily detected by a radar compared to an object with a smaller RCS. As such, the RCS of an object (σ) may depend on the object's shape and material, and also on the wavelength and/or incident angle of the electromagnetic wave.

FIG.10is a diagram1000illustrating example RCS values for different objects in accordance with various aspects of the present disclosure. Different types of objects may have different ranges of RCS.
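As a rough illustration of how the derivation at818might map camera-derived features to an RCS value, consider the following Python sketch. The class names, baseline values, and adjustment terms are hypothetical placeholders rather than values taken from this disclosure; a practical implementation would calibrate them against measured signatures.

    import math

    # Hypothetical per-class baseline RCS values in dBsm (placeholders only).
    BASE_RCS_DBSM = {"human": 0.0, "car": 15.0, "ship": 30.0, "aircraft": 10.0}

    def estimate_rcs_dbsm(obj_class: str, size_m2: float,
                          incidence_deg: float) -> float:
        """Toy RCS estimate from classification, size, and orientation.

        Starts from a class baseline, scales with apparent size, and applies
        a crude penalty for oblique incidence (energy reflected away from
        the sensor).
        """
        base = BASE_RCS_DBSM.get(obj_class, 5.0)  # default for unknown classes
        size_term = 10.0 * math.log10(max(size_m2, 0.1))
        angle_term = -6.0 * (1.0 - math.cos(math.radians(incidence_deg)))
        return base + size_term + angle_term

    # Example: a car of ~8 m^2 apparent size seen 30 degrees off boresight.
    print(f"{estimate_rcs_dbsm('car', 8.0, 30.0):.1f} dBsm")

Representative ranges for common object types are noted next.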
For example, a typical RCS for cars may be approximately 10-20 decibels per square meter (dBsm), and a typical RCS for ships may be approximately 25-35 dBsm, etc. As an object may reflect a limited amount of radar energy (e.g., the RRS) back to the source, factors that may influence the amount of energy reflected from the object may include: the material of which the object is made; the size of the object relative to the wavelength of the illuminating radar signal; the absolute size of the object; the incident angle (the angle at which the radar beam hits a particular portion of the object, which may depend upon the shape of the object and its orientation to the radar source); the reflected angle (the angle at which the reflected beam leaves the part of the object that is hit; it depends upon the incident angle); and/or the polarization of the transmitted and the received radiation with respect to the orientation of the object, etc. In some examples, the one or more features806(e.g., the size, shape, material, and/or orientation of the one or more objects808) may be derived by other sensors (or fused with other sensors, in addition to the camera). For example, the material of an object may be sensed by an ultra-sound sensor, and the speed and orientation of an object may be sensed by a lidar, etc.

Referring back toFIG.8, at820, after the RF sensing node802derives the one or more non-RF measurements812, the RF sensing node802may transmit the one or more non-RF measurements812to the network entity804. In one example, as an alternative, the RF sensing node802may skip deriving the non-RF measurements812based on the one or more features806, and the RF sensing node802may transmit the one or more features806to the network entity804at820. Then, as shown at822, the network entity804may derive or compute the one or more non-RF measurements812for the one or more objects808based on the one or more features806received. In other words, the non-RF measurements derivation may be performed by the network entity804instead of the RF sensing node802.

At824, after either receiving the non-RF measurements812from the RF sensing node802or deriving the one or more non-RF measurements812based on the one or more features806, the network entity may determine an RRS transmission configuration814based at least in part on the non-RF measurements812. For example, after the RF sensing node802derives RCS values for the one or more objects808and reports the RCS values to the network entity804, the network entity804may configure (or reconfigure) an RRS transmission for the RF sensing node802based on the non-RF measurements812to optimize the RF sensing. For example, if the RCS value for an object to be detected is below a threshold (e.g., is small or hard to detect), the network entity804may increase the Tx power for one or more antennas, the number of Tx antennas for the RRS transmission, the periodicity for transmitting the RRS, and/or the RRS repetition factor, etc., of the RF sensing node802; whereas if the RCS value for an object to be detected is above a threshold (e.g., is large or easy to detect), the network entity804may decrease the Tx power for one or more antennas, the number of Tx antennas for the RRS transmission, and/or the RRS repetition factor, etc., of the RF sensing node802. In another example, if the non-RF measurements812or the extracted features806indicate that a target object is static, the network entity804may increase the RRS periodicity to save power.
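The threshold rules just described can be condensed into a short sketch. The following Python snippet is a minimal illustration assuming hypothetical field names, default values, and thresholds; it is not the disclosed configuration format.

    from dataclasses import dataclass

    @dataclass
    class RrsConfig:
        """Hypothetical stand-in for the RRS transmission configuration (814)."""
        tx_power_dbm: float = 20.0
        num_tx_antennas: int = 2
        periodicity_ms: int = 20
        repetition_factor: int = 2

    def configure_rrs(rcs_dbsm: float, is_static: bool,
                      rcs_low: float = 5.0, rcs_high: float = 25.0) -> RrsConfig:
        """Apply the RCS-threshold rules described above (thresholds assumed)."""
        cfg = RrsConfig()
        if rcs_dbsm < rcs_low:      # small / hard-to-detect target: spend more
            cfg.tx_power_dbm += 3.0
            cfg.num_tx_antennas *= 2
            cfg.repetition_factor *= 2
        elif rcs_dbsm > rcs_high:   # large / easy-to-detect target: save power
            cfg.tx_power_dbm -= 3.0
            cfg.repetition_factor = max(1, cfg.repetition_factor // 2)
        if is_static:               # static target: sense less often
            cfg.periodicity_ms *= 4
        return cfg

    print(configure_rrs(rcs_dbsm=2.0, is_static=True))

Whether a target is static is itself a feature that can come from the non-RF sensor, as discussed next.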
For example, the non-RF sensor810(e.g., a camera) may be used for detecting the mobility of the target. As such, in one aspect of the present disclosure, the RRS transmission configuration may include a transmission power for one or more antennas of the RF sensing node for transmitting an RRS, a number of transmission antennas for transmitting the RRS, a periodicity for transmitting the RRS, an RRS repetition factor associated with transmitting the RRS, or a combination thereof, etc.

In some scenarios, the non-RF sensor810(e.g., a camera capturing images) may consume more power than an RF sensor (e.g., for performing RF measurement). Thus, in another aspect of the present disclosure, the non-RF sensor810or the RF sensing node802itself may be configured to operate based on an on-demand request or to trigger RF-based sensing/measurement. For example, the non-RF sensor810or the RF sensing node802may be triggered to perform the feature extraction and/or the non-RF measurement based on a request from the network entity804, and/or based on an RF measurement being specified. In another example, for triggered RF-based sensing, the non-RF measurement module or the sensing node itself may request on demand or trigger RF-based measurement, such as by reporting its power consumption status explicitly or implicitly. In some examples, the reporting may or may not be related to an object to be detected. For example, if the non-RF measurement module knows an object is static, it may request on demand that the RF measurement module take over and shut down the non-RF measurement module. In other words, the RF sensing may be triggered by the non-RF sensing component in the RF sensing node (e.g., gNB or UE), where the condition for the trigger may be based on power consumption, memory cost, and/or feature(s) associated with the object(s) to be detected or measured.

In some examples, at820, the RF sensing node802may also report its power consumption status, either explicitly or implicitly, to the network entity804, such that the network entity804may also configure the RRS transmission based at least in part on the power consumption status of the RF sensing node802. For example, if the RF sensing node802reports that the power consumption is high or above a power threshold, the network entity804may decrease the Tx power for one or more antennas, the number of Tx antennas for the RRS transmission, and/or the RRS repetition factor, etc., of the RF sensing node802; whereas if the RF sensing node802reports that the power consumption is low, moderate, or below a power threshold, the network entity804may increase the Tx power for one or more antennas, the number of Tx antennas for the RRS transmission, the periodicity for transmitting the RRS, and/or the RRS repetition factor, etc., of the RF sensing node802. As such, in some examples, the reporting at820may or may not be related to the one or more objects808. For example, if the non-RF sensor810detects that an object is static, the non-RF sensor810may request on demand that the RF-based measurement be triggered (e.g., by an RF sensor) and shut down the non-RF sensor-based measurement. This may also be an example of triggered RF-based sensing.

At826, the network entity804may transmit the RRS transmission configuration814to the RF sensing node802to configure or modify the RRS transmission of the RF sensing node802.
At828, after receiving the RRS transmission configuration814, the RF sensing node802may configure/reconfigure its RRS transmission, and the RF sensing node802may transmit its RRS based on the new configuration. As illustrated by the communication flow800, aspects presented herein may enable a network entity to adjust RF sensing parameters of an RF sensing node dynamically based on one or more features or non-RF measurements extracted or derived from one or more objects of an area to be sensed, thereby improving the accuracy and performance of the RF sensing.

In another aspect of the present disclosure, a network entity may configure the waveform and resources of the RRS for an RF sensing node based on non-RF sensing and non-RF measurement (e.g., based on using at least one non-RF sensor, such as the non-RF sensor810). For example, a network entity may configure a cyclic prefix (CP) duration of a transmission based on the size of one or more objects to be detected if the RRS transmitted from an RF sensing node is based on an OFDM waveform, which may also be referred to as a CP-OFDM waveform (e.g., an OFDM waveform with CP). For purposes of the present disclosure, an RF sensing node that transmits RRS based on an OFDM waveform may be referred to as an "OFDM radar." An OFDM radar may be a UE, a base station, or a device capable of transmitting OFDM signals. An OFDM radar may provide a large degree of flexibility in waveform choices, which may enable communication and radar capabilities to be combined by embedding communication information into the radar waveform. In some examples, OFDM waveforms may be used for digital or software-defined radar that may be independent of the communication aspect. In addition, for many OFDM radar applications, unlike the OFDM waveforms used by a UE or a base station for communications, the OFDM waveforms used by OFDM radar applications may not include a cyclic prefix (CP) or a sufficiently long CP. Thus, these OFDM radar waveforms may sometimes be treated as different kinds of radar waveforms by a receiver, and the receiver may receive or monitor these OFDM radar waveforms based on matched filtering. Matched filtering may refer to a process for detecting a known piece of signal or wavelet that is embedded in noise. As such, an OFDM waveform may be a natural waveform utilized for joint communication and RF sensing for future wireless communications, as it may enable in-band multiplexing with other cellular reference signals and physical (PHY) channels.

A CP may refer to a set of samples that are duplicated (e.g., copied and pasted) from the end of each transmitted symbol to its beginning. In addition, the CP may function as a guard interval that may be used for eliminating inter-symbol interference (ISI) (e.g., interference between transmitting data via multiple symbols), such as without using additional hardware. Thus, when there is sufficient CP insertion (or CP duration) in an OFDM waveform, an ISI channel may be converted into multiple ISI-free subchannels in a wireless communications system. Similarly, or analogously, a sufficient CP insertion may also enable inter-range-cell interference (IRCI)-free (high range resolution) RF sensing for radar systems. For example, by using a sufficient CP, IRCI-free range reconstruction with ideally zero range sidelobes may be obtained, which may provide an opportunity for high range resolution synthetic aperture radar (SAR) imaging.
In other words, OFDM signals with a sufficient CP may be used for solving IRCI-related problems. For purposes of the present disclosure, a range resolution may refer to the capability of a radar system to distinguish or resolve nearby adjacent target(s) or different parts of one target in the range. The degree of range resolution may depend on the width of the transmitted pulse, the types and sizes of targets, and the efficiency of the receiver and indicator, etc.

To achieve IRCI-free RF sensing for an OFDM waveform, the CP length Tcp for the OFDM waveform may be specified to be greater than or equal to the time delay difference (To) from a first range cell of a tracking zone to a last range cell of the tracking zone (e.g., satisfy Tcp ≥ To). A range cell may refer to the smallest range increment a radar is capable of detecting, and a range (for a radar) may refer to the length of a straight line between the radar and a target. For example, if a radar has a range resolution of 1 yard and a total range of 100 yards, then there may be 100 range cells (e.g., 100/1=100).

FIG.11is a diagram1100illustrating an example of range cells in accordance with various aspects of the present disclosure. The time delay difference (To) may be calculated based on: To = 2(M−1)R/c = (M−1)/B, where c may be the speed of light, B may be the bandwidth of a radar signal, M may be the number of range cells in the tracking zone, and R may be the range resolution that is obtained based on R = c/(2B). In one example, to minimize unnecessary transmission energy, and without loss of generality, the CP length of an OFDM waveform may be chosen to be equal to the time delay difference (e.g., Tcp = To). As such, when an object to be detected is larger (e.g., has a higher RCS), the network entity may configure a longer CP for the RRS transmission, whereas when an object to be detected is smaller (e.g., has a lower RCS), the network entity may configure a shorter CP for the OFDM waveform. For example, for wide-angle beam-based sensing, the duration of the CP may be configured to cover the delay spread across the sensing area. Thus, if the sensing area is larger, a longer CP may be specified. On the other hand, for narrow-beam-based sensing (e.g., based on a narrow beam operation), there may be additional consideration(s) where the duration of the CP may be specified to cover just the size of the target to avoid inter-symbol interference. Thus, if the target is smaller, a shorter CP may be specified. In other words, referring back toFIG.8, the network entity804may configure the CP duration of a CP-OFDM waveform for the RF sensing node802based on the one or more features806or the one or more non-RF measurements812. In one example, to determine the size of an object, the RF sensing node802may use a camera to capture the image of an RF sensing area that has the object, and then the AI processor associated with the RF sensing node802may estimate the size of the object based on the captured image, such as described in connection withFIG.9.

In another aspect of the present disclosure, a network entity may use one or more features extracted from one or more objects of an RF sensing area, or one or more non-RF measurements derived based on the extracted one or more features, to determine and optimize the resource allocation for the RRS. For example, the speed of an object may be detected/calculated based on a Doppler estimation of the object, and different ranges of the Doppler estimation may specify different time domain allocations for the RRS.
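The CP arithmetic above is compact enough to sketch directly. In the following Python snippet, the bandwidth and the tracking-zone size are assumed example values; only the relations R = c/(2B) and Tcp = To = (M−1)/B come from the text.

    C = 3e8  # speed of light (m/s)

    def range_resolution_m(bandwidth_hz: float) -> float:
        """Range resolution R = c / (2B)."""
        return C / (2 * bandwidth_hz)

    def cp_length_s(num_range_cells: int, bandwidth_hz: float) -> float:
        """Minimum CP duration for IRCI-free sensing: Tcp = To = (M - 1) / B."""
        return (num_range_cells - 1) / bandwidth_hz

    # Illustrative numbers: 100 MHz of bandwidth and a 150 m tracking zone.
    B = 100e6
    R = range_resolution_m(B)   # 1.5 m per range cell
    M = int(150 / R)            # 100 range cells in the zone
    print(f"R = {R} m, M = {M}, Tcp >= {cp_length_s(M, B) * 1e6:.2f} us")

A sketch of the Doppler-driven periodicity choice introduced above appears below.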
For example, if the Doppler estimation of an object is high, it may indicate/infer that the object is moving at a higher speed. Thus, a network entity may configure a shorter RRS transmission periodicity for the RF sensing node, such that two adjacent RRS transmissions may be configured to be closer together in the time domain. On the other hand, if the Doppler estimation of an object is low, it may indicate/infer that the object is moving at a lower speed. Thus, a network entity may configure a longer RRS transmission periodicity for the RF sensing node, such that two adjacent RRS transmissions may be configured to be farther apart in the time domain. In another example, the speed of an object may be estimated based on non-RF measurements, such as via images provided by a camera, the echo signal measured from an ultra-sound sensor, and/or measurements by a lidar, etc.

In some scenarios, RF measurement-based beam management may be challenging because: non-line-of-sight (NLOS) channel conditions may exist (e.g., there is an obstacle between an RF sensing node and an object to be sensed); the high frequency RF signal may be attenuated by objects, which may impact the beam determination for the RF sensing; the beamforming range and/or the beam resolution may be limited by implementation; or the latency of signaling may impact real-time RF sensing requirements. As such, in another aspect of the present disclosure, a network entity may configure the radar transmission (Tx) beam and/or reception (Rx) beam for an RF sensing node based on non-RF sensing and non-RF measurement (e.g., based on using at least one non-RF sensor, such as the non-RF sensor810). For example, the location of an object, the incident angle of an object (e.g., the angle at which the radar beam hits a particular portion of the object), and/or the reflected angle of an object (the angle at which the reflected beam leaves the part of the object that is hit) may be used by a network entity or an RF sensing node to guide the Tx/Rx beamforming for the RF sensing. In other words, an RF sensing node and/or a network entity may use non-RF sensor measurements for detecting a potential target object and/or the location (or the angle) relative to the Tx/Rx beam/direction of the RF sensing node. For example, the RF sensing node may use a camera to capture the image of an RF sensing area, and the AI processor associated with the camera or the RF sensing node may identify the target object, the location of the target object, and/or the relative angle of the target object, etc. In some examples, the RF sensing node may use multiple cameras to more accurately estimate the target object's location and the range information. After the network entity or the RF sensing node has the relative location or relative angle information of the target object, the network entity or the RF sensing node may configure Tx/Rx beam(s) of the RF sensing node based on the relative location or angle information. For example, the network entity or the RF sensing node may beamform a Tx/Rx beam toward the direction of the target object.

In some examples, configuring Tx/Rx beam(s) of an RF sensing node based on non-RF sensing/measurement may be useful for bistatic RF sensing. Bistatic RF sensing may refer to an RF sensing technique where the RRS is transmitted and received by different devices. In some examples, a device or system that performs bistatic RF sensing may be referred to as a bistatic radar.
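As a concrete, purely illustrative rendering of the periodicity and beam-pointing rules above, the following Python sketch assumes hypothetical Doppler thresholds and a simple 2D coordinate frame for the sensing node and target; none of these values comes from the disclosure.

    import math

    def rrs_periodicity_ms(doppler_hz: float,
                           high: float = 500.0, low: float = 50.0) -> int:
        """Map a Doppler estimate to an RRS periodicity (thresholds assumed)."""
        if abs(doppler_hz) >= high:  # fast-moving target: sense more often
            return 5
        if abs(doppler_hz) <= low:   # slow or static target: sense less often
            return 40
        return 20

    def beam_azimuth_deg(node_xy: tuple, target_xy: tuple) -> float:
        """Point the Tx/Rx beam at a target located by a non-RF sensor."""
        dx = target_xy[0] - node_xy[0]
        dy = target_xy[1] - node_xy[1]
        return math.degrees(math.atan2(dy, dx))

    print(rrs_periodicity_ms(620.0))                        # -> 5 ms
    print(f"{beam_azimuth_deg((0, 0), (30, 40)):.1f} deg")  # -> 53.1 deg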
A bistatic radar may thus be a radar system that includes a transmitter and a receiver that are separated by a distance comparable to an expected target distance. When bistatic RF sensing is employed, the receiver (or the Rx beam(s)) of the bistatic radar system may be configured to chase the RRS (or pulse) propagating from the corresponding transmitter of the bistatic radar system, which may be referred to as "pulse chasing." For example, the receiver may use an Rx beam to rapidly scan the volume/area covered by the Tx beam. As such, when non-RF sensing and/or non-RF measurements are used for assisting the Tx/Rx beam configuration for an RF sensing node, such as a bistatic radar, it may be easier for a receiver to perform the pulse chasing if the location, direction, incident angle, and/or reflected angle of a target object can be determined based on non-RF measurements.

FIG.12is a flowchart1200of a method of wireless communication. The method may be performed by an RF sensing node or a component of an RF sensing node (e.g., the RF sensing node703,802; the base station102,180,310; the UE104,350,404,602; the TRP402,604,606,608,610; the apparatus1302). The method may enable the RF sensing node to transmit RRS based at least in part on environmental conditions to improve the performance and the accuracy of RF sensing.

At1202, the RF sensing node may extract one or more features for a set of objects of an area via at least one non-RF sensor, such as described in connection withFIGS.8and9. For example, at816, the RF sensing node802may extract feature(s)806for one or more objects808of an area via at least one non-RF sensor810. The extraction of one or more features for a set of objects may be performed by, e.g., the feature extraction component1340of the apparatus1302inFIG.13. In one example, the RF sensing node may be a base station or a UE, and the network entity may be a server for sensing (or may be referred to as a sensing server). In another example, the at least one non-RF sensor may include one or more of: a camera, an ultra-sound sensor, a lidar sensor, or a barometric sensor. In another example, the one or more features for the set of objects may include an RCS of an object, a size of the object, a shape of the object, a classification of the object, a material of the object, an orientation of the object, a location of the object, a speed of the object, or a combination thereof.

At1204, the RF sensing node may transmit, to a network entity, the one or more features or at least one non-RF measurement derived from the one or more features, such as described in connection withFIG.8. For example, at820, the RF sensing node802may transmit, to the network entity804, the extracted features806or the non-RF measurements812that are derived from the extracted features806. The transmission of the one or more features or the at least one non-RF measurement may be performed by, e.g., the feature/non-RF measurement report component1342and/or the transmission component1334of the apparatus1302inFIG.13. In one example, the one or more features for the set of objects or the at least one non-RF measurement derived from the one or more features may be processed by an AI processor of the RF sensing node.

At1206, the RF sensing node may receive, from the network entity, an RRS transmission configuration derived based on the one or more features or the at least one non-RF measurement transmitted to the network entity, such as described in connection withFIG.8.
For example, at826, the RF sensing node802may receive, from the network entity804, an RRS transmission configuration814that is derived based on the extracted features806or the non-RF measurement812. The reception of the RRS transmission configuration may be performed by, e.g., the RRS transmission process component1344and/or the reception component1330of the apparatus1302inFIG.13. In one example, the RRS transmission configuration may include a transmission power for one or more antennas of the RF sensing node for transmitting an RRS, a number of transmission antennas for transmitting the RRS, a periodicity for transmitting the RRS, an RRS repetition factor associated with transmitting the RRS, or a combination thereof. In another example, the RF sensing node may receive, from the network entity, a CP duration configuration for a CP-OFDM waveform based on the one or more features or the at least one non-RF measurement transmitted to the network entity. In another example, the RF sensing node may receive, from the network entity, a resource allocation for transmitting an RRS based on the one or more features or the at least one non-RF measurement transmitted to the network entity. In another example, the RF sensing node may receive, from the network entity, a beam configuration for transmitting an RRS or receiving a reflected RRS based on the one or more features or the at least one non-RF measurement transmitted to the network entity. In another example, the RF sensing node may transmit, towards the area, one or more RRSs based on the RRS transmission configuration.

FIG.13is a diagram1300illustrating an example of a hardware implementation for an apparatus1302. The apparatus1302may be an RF sensing node, a component of an RF sensing node, or may implement RF sensing node functionality. In some aspects, the apparatus1302may include a baseband unit1304. The baseband unit1304may communicate through at least one transceiver1322(e.g., one or more RF transceivers and/or antennas) with the UE104or with an object720(e.g., an object that receives and/or reflects RF sensing signals). The at least one transceiver1322may be associated with or include a reception component1330and/or a transmission component1334. The baseband unit1304may include a computer-readable medium/memory (e.g., a memory1326). The baseband unit1304and/or the at least one processor1328may be responsible for general processing, including the execution of software stored on the computer-readable medium/memory. The software, when executed by the baseband unit1304and/or the at least one processor1328, causes the baseband unit1304and/or the at least one processor1328to perform the various functions described supra. The computer-readable medium/memory may also be used for storing data that is manipulated by the baseband unit1304when executing software. The baseband unit1304further includes the reception component1330, a communication manager1332, and the transmission component1334. The reception component1330and the transmission component1334may, in a non-limiting example, include at least one transceiver and/or at least one antenna subsystem. The communication manager1332includes the one or more illustrated components. The components within the communication manager1332may be stored in the computer-readable medium/memory and/or configured as hardware within the baseband unit1304.
The baseband unit1304may be a component of the RF sensing node and may include the memory376and/or at least one of the TX processor316, the RX processor370, and the controller/processor375. The communication manager1332includes a feature extraction component1340that may be configured to extract one or more features for a set of objects of an area via at least one non-RF sensor, e.g., as described in connection with1202ofFIG.12. The communication manager1332further includes a feature/non-RF measurement report component1342that may be configured to transmit, to a network entity, the one or more features or at least one non-RF measurement derived from the one or more features, e.g., as described in connection with1204ofFIG.12. The communication manager1332further includes an RRS transmission process component1344that may be configured to receive, from the network entity, an RRS transmission configuration derived based on the one or more features or the at least one non-RF measurement transmitted to the network entity, e.g., as described in connection with1206ofFIG.12. The apparatus may include additional components that perform each of the blocks of the algorithm in the flowchart ofFIG.12. As such, each block in the flowchart ofFIG.12may be performed by a component and the apparatus may include one or more of those components. The components may be one or more hardware components specifically configured to carry out the stated processes/algorithm, implemented by a processor configured to perform the stated processes/algorithm, stored within a computer-readable medium for implementation by a processor, or some combination thereof. As shown, the apparatus1302may include a variety of components configured for various functions. In one configuration, the apparatus1302, and in particular the baseband unit1304, includes means for extracting one or more features for a set of objects of an area via at least one non-RF sensor (e.g., the feature extraction component1340). The apparatus1302includes means for transmitting, to a network entity, the one or more features or at least one non-RF measurement derived from the one or more features (e.g., the feature/non-RF measurement report component1342and/or the transmission component1334). The apparatus1302includes means for receiving, from the network entity, an RRS transmission configuration derived based on the one or more features or the at least one non-RF measurement transmitted to the network entity (e.g., the RRS transmission process component1344and/or the reception component1330). In one configuration, the RF sensing node may be a base station or a UE, and the network entity may be a server for sensing. In another configuration, the at least one non-RF sensor may include one or more of: a camera, an ultra-sound sensor, a lidar sensor, or a barometric sensor. In another configuration, the one or more features for the set of objects may include an RCS of an object, a size of the object, a shape of the object, a classification of the object, a material of the object, an orientation of the object, a location of the object, a speed of the object, or a combination thereof. In another configuration, the one or more features for the set of objects or the at least one non-RF measurement derived from the one or more features may be processed by an AI processor of the RF sensing node. 
In another configuration, the RRS transmission configuration may include a transmission power for one or more antennas of the RF sensing node for transmitting an RRS, a number of transmission antennas for transmitting the RRS, a periodicity for transmitting the RRS, an RRS repetition factor associated with transmitting the RRS, or a combination thereof. In another configuration, the apparatus1302includes means for receiving, from the network entity, a CP duration configuration for a CP-OFDM waveform based on the one or more features or the at least one non-RF measurement transmitted to the network entity. In another configuration, the apparatus1302includes means for receiving, from the network entity, a resource allocation for transmitting an RRS based on the one or more features or the at least one non-RF measurement transmitted to the network entity. In another configuration, the apparatus1302includes means for receiving, from the network entity, a beam configuration for transmitting an RRS or receiving a reflected RRS based on the one or more features or the at least one non-RF measurement transmitted to the network entity. In another configuration, the apparatus1302includes means for transmitting, towards the area, one or more RRSs based on the RRS transmission configuration. The means may be one or more of the components of the apparatus1302configured to perform the functions recited by the means. As described supra, the apparatus1302may include the TX Processor316, the RX Processor370, and the controller/processor375. As such, in one configuration, the means may be the TX Processor316, the RX Processor370, and the controller/processor375configured to perform the functions recited by the means.

FIG.14is a flowchart1400of a method of wireless communication. The method may be performed by a network entity or a component of a network entity (e.g., the network entity804). The method may enable the network entity to configure RRS transmission for an RF sensing node based at least in part on environmental conditions around the RF sensing node to improve the performance and the accuracy of RF sensing.

At1402, the network entity may receive, from an RF sensing node, one or more features for a set of objects of an area or at least one non-RF measurement derived from the one or more features, such as described in connection withFIG.8. For example, at820, the network entity804may receive, from the RF sensing node802, extracted features806for one or more objects808of an area or at least one non-RF measurement812derived based on the extracted features806. The reception of the one or more features or the at least one non-RF measurement may be performed by, e.g., the feature/non-RF measurement process component1540and/or the reception component1530of the apparatus1502inFIG.15. In one example, the RF sensing node may be a base station or a UE, and the network entity may be a server for sensing. In another example, the one or more features for the set of objects may include an RCS of an object, a size of the object, a shape of the object, a classification of the object, a material of the object, an orientation of the object, a location of the object, a speed of the object, or a combination thereof.

At1404, the network entity may transmit, to the RF sensing node, an RRS transmission configuration derived based on the one or more features or the at least one non-RF measurement received from the RF sensing node, such as described in connection withFIG.8.
For example, at826, the network entity804may transmit, to the RF sensing node802, an RRS transmission configuration814that is derived based on the extracted features806or the non-RF measurement812received from the RF sensing node802. The transmission of the RRS transmission configuration may be performed by, e.g., the RRS configuration component1542and/or the transmission component1534of the apparatus1502inFIG.15. In one example, the RRS transmission configuration may include a transmission power for one or more antennas of the RF sensing node for transmitting an RRS, a number of transmission antennas for transmitting the RRS, an RRS repetition factor associated with transmitting the RRS, or a combination thereof. In another example, the network entity may transmit, to the RF sensing node, a CP duration configuration for a CP-OFDM waveform based on the one or more features or the at least one non-RF measurement received from the RF sensing node. In another example, the network entity may transmit, to the RF sensing node, a resource allocation for transmitting an RRS based on the one or more features or the at least one non-RF measurement received from the RF sensing node. In another example, the network entity may transmit, to the RF sensing node, a beam configuration for transmitting an RRS or receiving a reflected RRS based on the one or more features or the at least one non-RF measurement received from the RF sensing node. In another example, the network entity may derive RCS measurements for the set of objects of the area based on the one or more features or the at least one non-RF measurement received from the RF sensing node, where the RRS transmission configuration is further based on the RCS measurements for the set of objects. FIG.15is a diagram1500illustrating an example of a hardware implementation for an apparatus1502. The apparatus1502may be a network entity (e.g., a location server, an LMF, etc.), a component of a network entity, or may implement the network entity functionality. In some aspects, the apparatus1502may include a baseband unit1504. The baseband unit1504may communicate through at least one transceiver1522(e.g., one or more RF transceivers and/or antennas) with the RF sensing node802(e.g., which may be UE104or base station102/180). The at least one transceiver1522may be associated with or include a reception component1530and/or a transmission component1534. The baseband unit1504may include a computer-readable medium/memory (e.g., a memory1526). The baseband unit1504and/or the at least one processor1528may be responsible for general processing, including the execution of software stored on the computer-readable medium/memory. The software, when executed by the baseband unit1504and/or the at least one processor1528, causes the baseband unit1504and/or the at least one processor1528to perform the various functions described supra. The computer-readable medium/memory may also be used for storing data that is manipulated by the baseband unit1504when executing software. The baseband unit1504further includes the reception component1530, a communication manager1532, and the transmission component1534. The reception component1530and the transmission component1534may, in a non-limiting example, include at least one transceiver and/or at least one antenna subsystem. The communication manager1532includes the one or more illustrated components. 
The components within the communication manager1532may be stored in the computer-readable medium/memory and/or configured as hardware within the baseband unit1504. The baseband unit1504may be a component of the network entity and may include the memory376and/or at least one of the TX processor316, the RX processor370, and the controller/processor375. The communication manager1532includes a feature/non-RF measurement process component1540that may be configured to receive, from an RF sensing node, one or more features for a set of objects of an area or at least one non-RF measurement derived from the one or more features, e.g., as described in connection with1402ofFIG.14. The communication manager1532further includes an RRS configuration component1542that may be configured to transmit, to the RF sensing node, an RRS transmission configuration derived based on the one or more features or the at least one non-RF measurement received from the RF sensing node, e.g., as described in connection with1404ofFIG.14. The apparatus may include additional components that perform each of the blocks of the algorithm in the flowchart ofFIG.14. As such, each block in the flowchart ofFIG.14may be performed by a component and the apparatus may include one or more of those components. The components may be one or more hardware components specifically configured to carry out the stated processes/algorithm, implemented by a processor configured to perform the stated processes/algorithm, stored within a computer-readable medium for implementation by a processor, or some combination thereof. As shown, the apparatus1502may include a variety of components configured for various functions. In one configuration, the apparatus1502, and in particular the baseband unit1504, includes means for receiving, from an RF sensing node, one or more features for a set of objects of an area or at least one non-RF measurement derived from the one or more features (e.g., the feature/non-RF measurement process component1540and/or the reception component1530). The apparatus1502includes means for transmitting, to the RF sensing node, an RRS transmission configuration derived based on the one or more features or the at least one non-RF measurement received from the RF sensing node (e.g., the RRS configuration component1542and/or the transmission component1534). In one configuration, the RF sensing node may be a base station or a UE, and the network entity may be a server for sensing. In another configuration, the one or more features for the set of objects may include an RCS of an object, a size of the object, a shape of the object, a classification of the object, a material of the object, an orientation of the object, a location of the object, a speed of the object, or a combination thereof. In another configuration, the RRS transmission configuration may include a transmission power for one or more antennas of the RF sensing node for transmitting an RRS, a number of transmission antennas for transmitting the RRS, an RRS repetition factor associated with transmitting the RRS, or a combination thereof. In another configuration, the apparatus1502includes means for transmitting, to the RF sensing node, a CP duration configuration for a CP-OFDM waveform based on the one or more features or the at least one non-RF measurement received from the RF sensing node. 
In another configuration, the apparatus1502includes means for transmitting, to the RF sensing node, a resource allocation for transmitting an RRS based on the one or more features or the at least one non-RF measurement received from the RF sensing node. In another configuration, the apparatus1502includes means for transmitting, to the RF sensing node, a beam configuration for transmitting an RRS or receiving a reflected RRS based on the one or more features or the at least one non-RF measurement received from the RF sensing node. In another configuration, the apparatus1502includes means for deriving RCS measurements for the set of objects of the area based on the one or more features or the at least one non-RF measurement received from the RF sensing node, where the RRS transmission configuration is further based on the RCS measurements for the set of objects. The means may be one or more of the components of the apparatus1502configured to perform the functions recited by the means. As described supra, the apparatus1502may include the TX Processor316, the RX Processor370, and the controller/processor375. As such, in one configuration, the means may be the TX Processor316, the RX Processor370, and the controller/processor375configured to perform the functions recited by the means.

It is understood that the specific order or hierarchy of blocks in the processes/flowcharts disclosed is an illustration of example approaches. Based upon design preferences, it is understood that the specific order or hierarchy of blocks in the processes/flowcharts may be rearranged. Further, some blocks may be combined or omitted. The accompanying method claims present elements of the various blocks in a sample order, and are not meant to be limited to the specific order or hierarchy presented.

The previous description is provided to enable any person skilled in the art to practice the various aspects described herein. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects. Thus, the claims are not intended to be limited to the aspects shown herein, but are to be accorded the full scope consistent with the language of the claims, wherein reference to an element in the singular is not intended to mean "one and only one" unless specifically so stated, but rather "one or more." Terms such as "if," "when," and "while" should be interpreted to mean "under the condition that" rather than imply an immediate temporal relationship or reaction. That is, these phrases, e.g., "when," do not imply an immediate action in response to or during the occurrence of an action, but simply imply that if a condition is met then an action will occur, but without requiring a specific or immediate time constraint for the action to occur. The word "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any aspect described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other aspects. Unless specifically stated otherwise, the term "some" refers to one or more. Combinations such as "at least one of A, B, or C," "one or more of A, B, or C," "at least one of A, B, and C," "one or more of A, B, and C," and "A, B, C, or any combination thereof" include any combination of A, B, and/or C, and may include multiples of A, multiples of B, or multiples of C.
Specifically, combinations such as “at least one of A, B, or C,” “one or more of A, B, or C,” “at least one of A, B, and C,” “one or more of A, B, and C,” and “A, B, C, or any combination thereof” may be A only, B only, C only, A and B, A and C, B and C, or A and B and C, where any such combinations may contain one or more member or members of A, B, or C. All structural and functional equivalents to the elements of the various aspects described throughout this disclosure that are known or later come to be known to those of ordinary skill in the art are expressly incorporated herein by reference and are intended to be encompassed by the claims. Moreover, nothing disclosed herein is intended to be dedicated to the public regardless of whether such disclosure is explicitly recited in the claims. The words “module,” “mechanism,” “element,” “device,” and the like may not be a substitute for the word “means.” As such, no claim element is to be construed as a means plus function unless the element is expressly recited using the phrase “means for.” The following aspects are illustrative only and may be combined with other aspects or teachings described herein, without limitation. Aspect 1 is an apparatus for wireless communication including a memory; at least one transceiver; and at least one processor communicatively connected to the memory and the at least one transceiver, the at least one processor configured to: extract one or more features for a set of objects of an area via at least one non-RF sensor; transmit, to a network entity, the one or more features or at least one non-RF measurement derived from the one or more features; and receive, from the network entity, an RRS transmission configuration derived based on the one or more features or the at least one non-RF measurement transmitted to the network entity. Aspect 2 is the apparatus of aspect 1, where the RF sensing node is a base station or a UE and the network entity is a server for sensing. Aspect 3 is the apparatus of any of aspects 1 and 2, where the at least one non-RF sensor includes one or more of: a camera, an ultra-sound sensor, a lidar sensor, or a barometric sensor. Aspect 4 is the apparatus of any of aspects 1 to 3, where the one or more features for the set of objects include an RCS of an object, a size of the object, a shape of the object, a classification of the object, a material of the object, an orientation of the object, a location of the object, a speed of the object, or a combination thereof. Aspect 5 is the apparatus of any of aspects 1 to 4, where the RRS transmission configuration includes a transmission power for one or more antennas of the RF sensing node for transmitting an RRS, a number of transmission antennas for transmitting the RRS, a periodicity for transmitting the RRS, an RRS repetition factor associated with transmitting the RRS, or a combination thereof. Aspect 6 is the apparatus of any of aspects 1 to 5, where the one or more features for the set of objects or the at least one non-RF measurement derived from the one or more features is processed by an AI processor of the RF sensing node. Aspect 7 is the apparatus of any of aspects 1 to 6, where the at least one processor is further configured to: receive, from the network entity, a CP duration configuration for a CP-OFDM waveform based on the one or more features or the at least one non-RF measurement transmitted to the network entity. 
Aspect 8 is the apparatus of any of aspects 1 to 7, where the at least one processor is further configured to: receive, from the network entity, a resource allocation for transmitting an RRS based on the one or more features or the at least one non-RF measurement transmitted to the network entity. Aspect 9 is the apparatus of any of aspects 1 to 8, where the at least one processor is further configured to: receive, from the network entity, a beam configuration for transmitting an RRS or receiving a reflected RRS based on the one or more features or the at least one non-RF measurement transmitted to the network entity. Aspect 10 is the apparatus of any of aspects 1 to 9, where the at least one processor is further configured to: transmit, towards the area, one or more RRSs based on the RRS transmission configuration. Aspect 11 is a method of wireless communication for implementing any of aspects 1 to 10. Aspect 12 is an apparatus for wireless communication including means for implementing any of aspects 1 to 10. Aspect 13 is a computer-readable medium storing computer executable code, where the code when executed by a processor causes the processor to implement any of aspects 1 to 10. Aspect 14 is an apparatus for wireless communication including a memory; at least one transceiver; and at least one processor communicatively connected to the memory and the at least one transceiver, the at least one processor configured to: receive, from an RF sensing node, one or more features for a set of objects of an area or at least one non-RF measurement derived from the one or more features; and transmit, to the RF sensing node, an RRS transmission configuration derived based on the one or more features or the at least one non-RF measurement received from the RF sensing node. Aspect 15 is the apparatus of aspect 14, where the RF sensing node is a base station or a UE and the network entity is a server for sensing. Aspect 16 is the apparatus of any of aspects 14 and 15, where the one or more features for the set of objects include an RCS of an object, a size of the object, a shape of the object, a classification of the object, a material of the object, an orientation of the object, a location of the object, a speed of the object, or a combination thereof. Aspect 17 is the apparatus of any of aspects 14 to 16, where the RRS transmission configuration includes a transmission power for one or more antennas of the RF sensing node for transmitting an RRS, a number of transmission antennas for transmitting the RRS, an RRS repetition factor associated with transmitting the RRS, or a combination thereof. Aspect 18 is the apparatus of any of aspects 14 to 17, where the at least one processor is further configured to: transmit, to the RF sensing node, a CP duration configuration for a CP-OFDM waveform based on the one or more features or the at least one non-RF measurement received from the RF sensing node. Aspect 19 is the apparatus of any of aspects 14 to 18, where the at least one processor is further configured to: transmit, to the RF sensing node, a resource allocation for transmitting an RRS based on the one or more features or the at least one non-RF measurement received from the RF sensing node. 
Aspect 20 is the apparatus of any of aspects 14 to 19, where the at least one processor is further configured to: transmit, to the RF sensing node, a beam configuration for transmitting an RRS or receiving a reflected RRS based on the one or more features or the at least one non-RF measurement received from the RF sensing node. Aspect 21 is the apparatus of any of aspects 14 to 20, where the at least one processor is further configured to: derive RCS measurements for the set of objects of the area based on the one or more features or the at least one non-RF measurement received from the RF sensing node, where the RRS transmission configuration is further based on the RCS measurements for the set of objects. Aspect 22 is a method of wireless communication for implementing any of aspects 14 to 21. Aspect 23 is an apparatus for wireless communication including means for implementing any of aspects 14 to 21. Aspect 24 is a computer-readable medium storing computer executable code, where the code when executed by a processor causes the processor to implement any of aspects 14 to 21.
Like reference numbers and designations in the various drawings indicate like elements.

DETAILED DESCRIPTION

The following description is directed to some implementations for the purposes of describing the innovative aspects of this disclosure. However, a person having ordinary skill in the art will readily recognize that the teachings herein can be applied in a multitude of different ways. The described implementations may be implemented in any device, system or network that is capable of transmitting and receiving radio frequency (RF) signals according to any of the Institute of Electrical and Electronics Engineers (IEEE) 16.11 standards, or any of the IEEE 802.11 standards, the Bluetooth® standard, code division multiple access (CDMA), frequency division multiple access (FDMA), time division multiple access (TDMA), Global System for Mobile communications (GSM), GSM/General Packet Radio Service (GPRS), Enhanced Data GSM Environment (EDGE), Terrestrial Trunked Radio (TETRA), Wideband-CDMA (W-CDMA), Evolution Data Optimized (EV-DO), 1×EV-DO, EV-DO Rev A, EV-DO Rev B, High Speed Packet Access (HSPA), High Speed Downlink Packet Access (HSDPA), High Speed Uplink Packet Access (HSUPA), Evolved High Speed Packet Access (HSPA+), Long Term Evolution (LTE), AMPS, or other known signals that are used to communicate within a wireless, cellular or internet of things (IOT) network, such as a system utilizing 3G, 4G or 5G technology, or further implementations thereof.

In some systems, an access device may function as a customer premises equipment (CPE) which forwards data between a cellular network, such as a 5G network, and one or more stations (STAs) wirelessly connected to the access device over a wireless local area network (WLAN), such as a WiFi network. In such systems, the access device may include or otherwise feature a WLAN access point (AP) and interface with the cellular network via a cellular modem. The WLAN AP and the cellular modem of the access device may be located at a same physical location (within a single box) or may be located at different physical locations (such as outside of a building and inside of the building). The WLAN AP of the access device may be unaware of power saving mode events of the cellular network, and vice versa. For example, a WiFi scheduler of the WLAN AP may perform power conservation procedures, such as, for example, reducing a number of spatial streams (NSS) on a WiFi interface to reduce a power consumption of the WLAN AP. On the other hand, the cellular network (for example, a 5G NR network) may dynamically change a bandwidth part (BWP) configuration of the access device, including a quantity of BWPs used by the access device to communicate over the cellular network, to accommodate an increased amount of downlink traffic from the cellular network. If the WiFi scheduler of the WLAN AP is unaware of the dynamic BWP changes at the cellular modem, the power optimization and conservation practices may be sub-optimal.

In some implementations of the present disclosure, the access device may look ahead at an expected amount of downlink data from the cellular network in order to determine an NSS to use for upcoming communications by the WLAN AP. The access device may obtain, from the cellular modem, a number of communication parameters, such as a number of BWPs configured at the cellular modem, or a start and end marker of downlink data at the cellular modem.
The access device (for example, the WiFi scheduler of the access device) may use these communication parameters to look ahead to predict an amount of incoming data that is expected to be transmitted by the WLAN AP. The access device may select a NSS to be used at the WLAN AP according to the predicted amount of incoming data, and may communicate with one or more STAs over the WLAN using the selected NSS. Particular implementations of the subject matter described in this disclosure can be implemented to realize one or more of the following potential advantages. The described techniques may be implemented to achieve improved communications between WiFi communications systems and other communications systems, including systems operating according to 5G or other cellular systems. The described techniques may decrease latency by making more efficient use of available dataflow. The described techniques also may save power and battery usage by operating based on a global view of incoming data in an access point (AP) system. The described techniques also may improve user experience by decreasing oscillations in the selected NSS in a WiFi system. FIG.1illustrates an example wireless communications system100that supports spatial stream optimization using dynamic bandwidth. The wireless communications system100may include one or more base stations (BSs)105, one or more UEs115, and a core network130. In some implementations, the wireless communications system100may be a Long Term Evolution (LTE) network, an LTE-Advanced (LTE-A) network, an LTE-A Pro network, or a New Radio (NR) network. In some implementations, the wireless communications system100may support enhanced broadband communications, ultra-reliable (for example, mission critical) communications, low latency communications, communications with low-cost and low-complexity devices, or any combination thereof. The BSs105may be dispersed throughout a geographic area to form the wireless communications system100and may be devices in different forms or having different capabilities. The BSs105and the UEs115may wirelessly communicate via one or more communication links125. Each BS105may provide a geographic coverage area110over which the UEs115and the BS105may establish one or more communication links125. The geographic coverage area110may be an example of a geographic area over which a BS105and a UE115may support the communication of signals according to one or more radio access technologies. The UEs115may be dispersed throughout a geographic coverage area110of the wireless communications system100, and each UE115may be stationary, or mobile, or both at different times. The UEs115may be devices in different forms or having different capabilities. Some example UEs115are illustrated inFIG.1. The UEs115described herein may be able to communicate with various types of devices, such as other UEs115, the BSs105, or network equipment (for example, core network nodes, relay devices, integrated access and backhaul (IAB) nodes, or other network equipment), as shown inFIG.1. The BSs105may communicate with the core network130, or with one another, or both. For example, the BSs105may interface with the core network130through one or more backhaul links120(for example, via an S1, N2, N3, or another interface). The BSs105may communicate with one another over the backhaul links120(for example, via an X2, Xn, or another interface) either directly (for example, directly between BSs105), or indirectly (for example, via core network130), or both. 
In some implementations, the backhaul links120may be or include one or more wireless links. One or more of the BSs105described herein may include or may be referred to by a person having ordinary skill in the art as a base transceiver station, a radio BS, an access point, a radio transceiver, a NodeB, an eNodeB (eNB), a next-generation NodeB or a giga-NodeB (either of which may be referred to as a gNB), a Home NodeB, a Home eNodeB, or other suitable terminology. A UE115may include or may be referred to as a mobile device, a wireless device, a remote device, a handheld device, or a subscriber device, or some other suitable terminology, where the “device” also may be referred to as a unit, a station, a terminal, or a client, among other examples. A UE115also may include or may be referred to as a personal electronic device such as a cellular phone, a personal digital assistant (PDA), a tablet computer, a laptop computer, or a personal computer. In some implementations, a UE115may include or be referred to as a wireless local loop (WLL) station, an Internet of Things (IoT) device, an Internet of Everything (IoE) device, or a machine type communications (MTC) device, among other examples, which may be implemented in various objects such as appliances, vehicles, or meters, among other implementations. The UEs115described herein may be able to communicate with various types of devices, such as other UEs115that may sometimes act as relays as well as the BSs105and the network equipment including macro eNBs or gNBs, small cell eNBs or gNBs, or relay BSs, among other implementations, as shown inFIG.1. The UEs115and the BSs105may wirelessly communicate with one another via one or more communication links125over one or more carriers. The term “carrier” may refer to a set of radio frequency spectrum resources having a defined physical layer structure for supporting the communication links125. For example, a carrier used for a communication link125may include a portion of a radio frequency spectrum band (for example, a bandwidth part (BWP)) that is operated according to one or more physical layer channels for a given radio access technology (for example, LTE, LTE-A, LTE-A Pro, NR). Each physical layer channel may carry acquisition signaling (for example, synchronization signals, system information), control signaling that coordinates operation for the carrier, user data, or other signaling. The wireless communications system100may support communication with a UE115using carrier aggregation or multi-carrier operation. A UE115may be configured with multiple downlink component carriers and one or more uplink component carriers according to a carrier aggregation configuration. Carrier aggregation may be used with both frequency division duplexing (FDD) and time division duplexing (TDD) component carriers. In some implementations (for example, in a carrier aggregation configuration), a carrier also may have acquisition signaling or control signaling that coordinates operations for other carriers. A carrier may be associated with a frequency channel (for example, an evolved universal mobile telecommunication system terrestrial radio access (E-UTRA) absolute radio frequency channel number (EARFCN)) and may be positioned according to a channel raster for discovery by the UEs115. 
A carrier may be operated in a standalone mode where initial acquisition and connection may be conducted by the UEs115via the carrier, or the carrier may be operated in a non-standalone mode where a connection is anchored using a different carrier (for example, of the same or a different radio access technology). The communication links125shown in the wireless communications system100may include uplink transmissions from a UE115to a BS105, or downlink transmissions from a BS105to a UE115. Carriers may carry downlink or uplink communications (for example, in an FDD mode) or may be configured to carry downlink and uplink communications (for example, in a TDD mode). A carrier may be associated with a particular bandwidth of the radio frequency spectrum, and in some implementations the carrier bandwidth may be referred to as a “system bandwidth” of the carrier or the wireless communications system100. For example, the carrier bandwidth may be one of a number of determined bandwidths for carriers of a particular radio access technology (for example, 1.4, 3, 5, 10, 15, 20, 40, or 80 megahertz (MHz)). Devices of the wireless communications system100(for example, the BSs105, the UEs115, or both) may have hardware configurations that support communications over a particular carrier bandwidth or may be configurable to support communications over one of a set of carrier bandwidths. In some implementations, the wireless communications system100may include BSs105or UEs115that support simultaneous communications via carriers associated with multiple carrier bandwidths. In some implementations, each served UE115may be configured for operating over portions (for example, a sub-band, a BWP) or all of a carrier bandwidth. Signal waveforms transmitted over a carrier may be made up of multiple subcarriers (for example, using multi-carrier modulation (MCM) techniques such as orthogonal frequency division multiplexing (OFDM) or discrete Fourier transform spread OFDM (DFT-S-OFDM)). In a system employing MCM techniques, a resource element may consist of one symbol period (for example, a duration of one modulation symbol) and one subcarrier, where the symbol period and subcarrier spacing are inversely related. The quantity of bits carried by each resource element may depend on the modulation scheme (for example, the order of the modulation scheme, the coding rate of the modulation scheme, or both). Thus, the more resource elements that a UE115receives and the higher the order of the modulation scheme, the higher the data rate may be for the UE115. A wireless communications resource may refer to a combination of a radio frequency spectrum resource, a time resource, and a spatial resource (for example, spatial layers or beams), and the use of multiple spatial layers may further increase the data rate or data integrity for communications with a UE115. One or more numerologies for a carrier may be supported, where a numerology may include a subcarrier spacing (Δf) and a cyclic prefix. A carrier may be divided into one or more BWPs having the same or different numerologies. In some implementations, a UE115may be configured with multiple BWPs. In some implementations, a single BWP for a carrier may be active at a given time and communications for the UE115may be restricted to one or more active BWPs. 
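As a rough, hypothetical illustration of the relationship noted above between the number of resource elements, the order of the modulation scheme, and the resulting data rate, the following Python sketch estimates the information bits carried in one slot. The resource counts, modulation order, and coding rate shown are illustrative assumptions, not values taken from this disclosure or mandated by any standard.

    # Illustrative sketch: bits carried per slot as a function of the number of
    # resource elements (REs) and the modulation and coding scheme.
    # All numbers below are example assumptions, not standardized values.
    from math import log2

    def bits_per_slot(num_res: int, modulation_order: int, coding_rate: float) -> float:
        """Each RE carries log2(M) raw bits; the coding rate scales them to information bits."""
        return num_res * log2(modulation_order) * coding_rate

    # Example: 100 resource blocks x 12 subcarriers x 14 symbols, 64-QAM, rate 0.75
    res = 100 * 12 * 14
    print(bits_per_slot(res, 64, 0.75))  # ~75,600 information bits in the slot

Under these assumptions, doubling either the number of resource elements or the bits per modulation symbol doubles the per-slot payload, which is the qualitative point made above.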
The time intervals for the BSs105or the UEs115may be expressed in multiples of a basic time unit which may, for example, refer to a sampling period of Ts=1/(Δfmax·Nf) seconds, where Δfmax may represent the maximum supported subcarrier spacing, and Nf may represent the maximum supported discrete Fourier transform (DFT) size. Time intervals of a communications resource may be organized according to radio frames each having a specified duration (for example, 10 milliseconds (ms)). Each radio frame may be identified by a system frame number (SFN) (for example, ranging from 0 to 1023). Each frame may include multiple consecutively numbered subframes or slots, and each subframe or slot may have the same duration. In some implementations, a frame may be divided (for example, in the time domain) into subframes, and each subframe may be further divided into a number of slots. Alternatively, each frame may include a variable number of slots, and the number of slots may depend on subcarrier spacing. Each slot may include a number of symbol periods (for example, depending on the length of the cyclic prefix prepended to each symbol period). In some wireless communications systems100, a slot may further be divided into multiple mini-slots containing one or more symbols. Excluding the cyclic prefix, each symbol period may contain one or more (for example, Nf) sampling periods. The duration of a symbol period may depend on the subcarrier spacing or frequency band of operation. A subframe, a slot, a mini-slot, or a symbol may be the smallest scheduling unit (for example, in the time domain) of the wireless communications system100and may be referred to as a transmission time interval (TTI). In some implementations, the TTI duration (for example, the number of symbol periods in a TTI) may be variable. Additionally, or alternatively, the smallest scheduling unit of the wireless communications system100may be dynamically selected (for example, in bursts of shortened TTIs (sTTIs)). Physical channels may be multiplexed on a carrier according to various techniques. A physical control channel and a physical data channel may be multiplexed on a downlink carrier, for example, using one or more of time division multiplexing (TDM) techniques, frequency division multiplexing (FDM) techniques, or hybrid TDM-FDM techniques. A control region (for example, a control resource set (CORESET)) for a physical control channel may be defined by a number of symbol periods and may extend across the system bandwidth or a subset of the system bandwidth of the carrier. One or more control regions (for example, CORESETs) may be configured for a set of the UEs115. For example, one or more of the UEs115may monitor or search control regions for control information according to one or more search space sets, and each search space set may include one or multiple control channel candidates in one or more aggregation levels arranged in a cascaded manner. An aggregation level for a control channel candidate may refer to a number of control channel resources (for example, control channel elements (CCEs)) associated with encoded information for a control information format having a given payload size. Search space sets may include common search space sets configured for sending control information to multiple UEs115and UE-specific search space sets for sending control information to a specific UE115. 
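To make the timing relationships described above concrete, the sketch below evaluates the basic time unit Ts=1/(Δfmax·Nf) and per-numerology slot durations. The Δfmax and Nf values are assumptions patterned on typical NR parameters, and a 10 ms frame whose slot count scales with subcarrier spacing is assumed, consistent with the description above.

    # Illustrative sketch of the basic time unit and slot durations described above.
    # The specific parameter values are assumptions, not taken from this disclosure.
    delta_f_max = 480e3   # assumed maximum supported subcarrier spacing, in Hz
    n_f = 4096            # assumed maximum supported DFT size

    t_s = 1.0 / (delta_f_max * n_f)    # basic time unit, in seconds
    print(f"Ts = {t_s * 1e9:.3f} ns")  # ~0.509 ns under these assumptions

    # With a 10 ms frame and slot counts that scale with subcarrier spacing (SCS),
    # the slot duration shrinks as the SCS grows:
    for mu, scs_khz in enumerate([15, 30, 60, 120]):
        slots_per_frame = 10 * (2 ** mu)   # 10 subframes, 2^mu slots per subframe
        print(f"SCS {scs_khz} kHz: {slots_per_frame} slots/frame, "
              f"{10.0 / slots_per_frame} ms per slot")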
Each BS105may provide communication coverage via one or more cells, for example a macro cell, a small cell, a hot spot, or other types of cells, or any combination thereof. The term “cell” may refer to a logical communication entity used for communication with a BS105(for example, over a carrier) and may be associated with an identifier for distinguishing neighboring cells (for example, a physical cell identifier (PCID), a virtual cell identifier (VCID), or others). In some implementations, a cell also may refer to a geographic coverage area110or a portion of a geographic coverage area110(for example, a sector) over which the logical communication entity operates. Such cells may range from smaller areas (for example, a structure, a subset of a structure) to larger areas depending on various factors such as the capabilities of the BS105. For example, a cell may be or include a building, a subset of a building, or exterior spaces between or overlapping with geographic coverage areas110, among other implementations. A macro cell generally covers a relatively large geographic area (for example, several kilometers in radius) and may allow unrestricted access by the UEs115with service subscriptions with the network provider supporting the macro cell. A small cell may be associated with a lower-powered BS105, as compared with a macro cell, and a small cell may operate in the same or different (for example, licensed, unlicensed) frequency bands as macro cells. Small cells may provide unrestricted access to the UEs115with service subscriptions with the network provider or may provide restricted access to the UEs115having an association with the small cell (for example, the UEs115in a closed subscriber group (CSG), the UEs115associated with users in a home or office). A BS105may support one or multiple cells and also may support communications over the one or more cells using one or multiple component carriers. In some implementations, a carrier may support multiple cells, and different cells may be configured according to different protocol types (for example, MTC, narrowband IoT (NB-IoT), enhanced mobile broadband (eMBB)) that may provide access for different types of devices. In some implementations, a BS105may be movable and therefore provide communication coverage for a moving geographic coverage area110. In some implementations, different geographic coverage areas110associated with different technologies may overlap, but the different geographic coverage areas110may be supported by the same BS105. In some other implementations, the overlapping geographic coverage areas110associated with different technologies may be supported by different BSs105. The wireless communications system100may include, for example, a heterogeneous network in which different types of the BSs105provide coverage for various geographic coverage areas110using the same or different radio access technologies. The wireless communications system100may support synchronous or asynchronous operation. For synchronous operation, the BSs105may have similar frame timings, and transmissions from different BSs105may be approximately aligned in time. For asynchronous operation, the BSs105may have different frame timings, and transmissions from different BSs105may, in some implementations, not be aligned in time. The techniques described herein may be used for either synchronous or asynchronous operations. 
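As one concrete example of the cell identifiers mentioned above, NR-style deployments commonly derive a physical cell identifier from two synchronization-signal indices; the sketch below assumes that convention, which is not stated in this disclosure and is used here only for illustration.

    # Illustrative sketch: deriving a physical cell identifier (PCID) from two
    # synchronization-signal indices, as done in some NR deployments (assumed here).

    def physical_cell_id(n_id_1: int, n_id_2: int) -> int:
        """Combine a cell-identity group (0..335) and a sector index (0..2)."""
        assert 0 <= n_id_1 <= 335 and 0 <= n_id_2 <= 2
        return 3 * n_id_1 + n_id_2   # yields one of 1008 distinct PCIDs

    print(physical_cell_id(100, 2))  # 302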
Some UEs115, such as MTC or IoT devices, may be low cost or low complexity devices and may provide for automated communication between machines (for example, via Machine-to-Machine (M2M) communication). M2M communication or MTC may refer to data communication technologies that allow devices to communicate with one another or a BS105without human intervention. In some implementations, M2M communication or MTC may include communications from devices that integrate sensors or meters to measure or capture information and relay such information to a central server or application program that makes use of the information or presents the information to humans interacting with the application program. Some UEs115may be designed to collect information or enable automated behavior of machines or other devices. Examples of applications for MTC devices include smart metering, inventory monitoring, water level monitoring, equipment monitoring, healthcare monitoring, wildlife monitoring, weather and geological event monitoring, fleet management and tracking, remote security sensing, physical access control, and transaction-based business charging. Some UEs115may be configured to employ operating modes that reduce power consumption, such as half-duplex communications (for example, a mode that supports one-way communication via transmission or reception, but not transmission and reception simultaneously). In some implementations, half-duplex communications may be performed at a reduced peak rate. Other power conservation techniques for the UEs115include entering a power saving deep sleep mode when not engaging in active communications, operating over a limited bandwidth (for example, according to narrowband communications), or a combination of these techniques. For example, some UEs115may be configured for operation using a narrowband protocol type that is associated with a defined portion or range (for example, set of subcarriers or resource blocks (RBs)) within a carrier, within a guard-band of a carrier, or outside of a carrier. The wireless communications system100may be configured to support ultra-reliable communications or low-latency communications, or various combinations thereof. For example, the wireless communications system100may be configured to support ultra-reliable low-latency communications (URLLC) or mission critical communications. The UEs115may be designed to support ultra-reliable, low-latency, or critical functions (for example, mission critical functions). Ultra-reliable communications may include private communication or group communication and may be supported by one or more mission critical services such as mission critical push-to-talk (MCPTT), mission critical video (MCVideo), or mission critical data (MCData). Support for mission critical functions may include prioritization of services, and mission critical services may be used for public safety or general commercial applications. The terms ultra-reliable, low-latency, mission critical, and ultra-reliable low-latency may be used interchangeably herein. In some implementations, a UE115also may be able to communicate directly with other UEs115over a device-to-device (D2D) communication link135(for example, using a peer-to-peer (P2P) or D2D protocol). One or more UEs115utilizing D2D communications may be within the geographic coverage area110of a BS105. Other UEs115in such a group may be outside the geographic coverage area110of a BS105or be otherwise unable to receive transmissions from a BS105. 
In some implementations, groups of the UEs115communicating via D2D communications may utilize a one-to-many (1:M) system in which each UE115transmits to every other UE115in the group. In some implementations, a BS105facilitates the scheduling of resources for D2D communications. In some other implementations, D2D communications are carried out between the UEs115without the involvement of a BS105. In some implementations, the D2D communication link135may be an example of a communication channel, such as a sidelink communication channel, between vehicles (for example, UEs115). In some implementations, vehicles may communicate using vehicle-to-everything (V2X) communications, vehicle-to-vehicle (V2V) communications, or some combination of these. A vehicle may signal information related to traffic conditions, signal scheduling, weather, safety, emergencies, or any other information relevant to a V2X system. In some implementations, vehicles in a V2X system may communicate with roadside infrastructure, such as roadside units, or with the network via one or more network nodes (for example, BSs105) using vehicle-to-network (V2N) communications, or with both. The core network130may provide user authentication, access authorization, tracking, Internet Protocol (IP) connectivity, and other access, routing, or mobility functions. The core network130may be an evolved packet core (EPC) or 5G core (5GC), which may include at least one control plane entity that manages access and mobility (for example, a mobility management entity (MME), an access and mobility management function (AMF)) and at least one user plane entity that routes packets or interconnects to external networks (for example, a serving gateway (S-GW), a Packet Data Network (PDN) gateway (P-GW), or a user plane function (UPF)). The control plane entity may manage non-access stratum (NAS) functions such as mobility, authentication, and bearer management for the UEs115served by the BSs105associated with the core network130. User IP packets may be transferred through the user plane entity, which may provide IP address allocation as well as other functions. The user plane entity may be connected to IP services150for one or more network operators. The IP services150may include access to the Internet, Intranet(s), an IP Multimedia Subsystem (IMS), or a Packet-Switched Streaming Service. Some of the network devices, such as a BS105, may include subcomponents such as an access network entity140, which may be an example of an access node controller (ANC). Each access network entity140may communicate with the UEs115through one or more other access network transmission entities145, which may be referred to as radio heads, smart radio heads, or transmission/reception points (TRPs). Each access network transmission entity145may include one or more antenna panels. In some configurations, various functions of each access network entity140or BS105may be distributed across various network devices (for example, radio heads and ANCs) or consolidated into a single network device (for example, a BS105). In some cases, a network device, such as a BS105, may distribute different layers of functionality across physically separated components. For example, one or more of the BSs105described herein may operate as or otherwise implement a disaggregated radio access network (D-RAN) or an open radio access network (O-RAN). 
The wireless communications system100may operate using one or more frequency bands, typically in the range of 300 megahertz (MHz) to 300 gigahertz (GHz). Generally, the region from 300 MHz to 3 GHz is known as the ultra-high frequency (UHF) region or decimeter band because the wavelengths range from approximately one decimeter to one meter in length. The UHF waves may be blocked or redirected by buildings and environmental features, but the waves may penetrate structures sufficiently for a macro cell to provide service to the UEs115located indoors. The transmission of UHF waves may be associated with smaller antennas and shorter ranges (for example, less than 100 kilometers) compared to transmission using the smaller frequencies and longer waves of the high frequency (HF) or very high frequency (VHF) portion of the spectrum below 300 MHz. The wireless communications system100also may operate in a super high frequency (SHF) region using frequency bands from 3 GHz to 30 GHz, also known as the centimeter band, or in an extremely high frequency (EHF) region of the spectrum (for example, from 30 GHz to 300 GHz), also known as the millimeter band. In some implementations, the wireless communications system100may support millimeter wave (mmW) communications between the UEs115and the BSs105, and EHF antennas of the respective devices may be smaller and more closely spaced than UHF antennas. In some implementations, this may facilitate use of antenna arrays within a device. The propagation of EHF transmissions, however, may be subject to even greater atmospheric attenuation and shorter range than SHF or UHF transmissions. The techniques disclosed herein may be employed across transmissions that use one or more different frequency regions, and designated use of bands across these frequency regions may differ by country or regulating body. The wireless communications system100may utilize both licensed and unlicensed radio frequency spectrum bands. For example, the wireless communications system100may employ License Assisted Access (LAA), LTE-Unlicensed (LTE-U) radio access technology, or NR technology in an unlicensed band such as the 5 GHz industrial, scientific, and medical (ISM) band. When operating in unlicensed radio frequency spectrum bands, devices such as the BSs105and the UEs115may employ carrier sensing for collision detection and avoidance. In some implementations, operations in unlicensed bands may be based on a carrier aggregation configuration in conjunction with component carriers operating in a licensed band (for example, LAA). Operations in unlicensed spectrum may include downlink transmissions, uplink transmissions, P2P transmissions, or D2D transmissions, among other transmissions. A BS105or a UE115may be equipped with multiple antennas, which may be used to employ techniques such as transmit diversity, receive diversity, multiple-input multiple-output (MIMO) communications, or beamforming. The antennas of a BS105or a UE115may be located within one or more antenna arrays or antenna panels, which may support MIMO operations or transmit or receive beamforming. For example, one or more BS antennas or antenna arrays may be co-located at an antenna assembly, such as an antenna tower. In some implementations, antennas or antenna arrays associated with a BS105may be located in diverse geographic locations. A BS105may have an antenna array with a number of rows and columns of antenna ports that the BS105may use to support beamforming of communications with a UE115. 
Likewise, a UE115may have one or more antenna arrays that may support various MIMO or beamforming operations. Additionally, or alternatively, an antenna panel may support radio frequency beamforming for a signal transmitted via an antenna port. The BSs105or the UEs115may use MIMO communications to exploit multipath signal propagation and increase the spectral efficiency by transmitting or receiving multiple signals via different spatial layers. Such techniques may be referred to as spatial multiplexing. The multiple signals may, for example, be transmitted by the transmitting device via different antennas or different combinations of antennas. Likewise, the multiple signals may be received by the receiving device via different antennas or different combinations of antennas. Each of the multiple signals may be referred to as a separate spatial stream and may carry bits associated with the same data stream (for example, the same codeword) or different data streams (for example, different codewords). Different spatial layers may be associated with different antenna ports used for channel measurement and reporting. MIMO techniques include single-user MIMO (SU-MIMO), where multiple spatial layers are transmitted to the same receiving device, and multiple-user MIMO (MU-MIMO), where multiple spatial layers are transmitted to multiple devices. Beamforming, which also may be referred to as spatial filtering, directional transmission, or directional reception, is a signal processing technique that may be used at a transmitting device or a receiving device (for example, a BS105, a UE115) to shape or steer an antenna beam (for example, a transmit beam, a receive beam) along a spatial path between the transmitting device and the receiving device. Beamforming may be achieved by combining the signals communicated via antenna elements of an antenna array such that some signals propagating at particular orientations with respect to an antenna array experience constructive interference while others experience destructive interference. The adjustment of signals communicated via the antenna elements may include a transmitting device or a receiving device applying amplitude offsets, phase offsets, or both to signals carried via the antenna elements associated with the device. The adjustments associated with each of the antenna elements may be defined by a beamforming weight set associated with a particular orientation (for example, with respect to the antenna array of the transmitting device or receiving device, or with respect to some other orientation). A BS105or a UE115may use beam sweeping techniques as part of beamforming operations. For example, a BS105may use multiple antennas or antenna arrays (for example, antenna panels) to conduct beamforming operations for directional communications with a UE115. Some signals (for example, synchronization signals, reference signals, beam selection signals, or other control signals) may be transmitted by a BS105multiple times in different directions. For example, the BS105may transmit a signal according to different beamforming weight sets associated with different directions of transmission. Transmissions in different beam directions may be used to identify (for example, by a transmitting device, such as one or more components of a BS105, or by a receiving device, such as a UE115) a beam direction for later transmission or reception by the BS105. 
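A minimal sketch of the beamforming weight sets described above follows, assuming a uniform linear array with half-wavelength element spacing; the array size, carrier frequency, and steering angle are illustrative assumptions rather than parameters drawn from this disclosure.

    # Illustrative sketch: a beamforming weight set (per-element phase offsets) that
    # steers a uniform linear array toward a chosen direction. The array geometry,
    # carrier frequency, and angle below are example assumptions.
    import numpy as np

    def steering_weights(num_elements: int, spacing_m: float,
                         wavelength_m: float, angle_deg: float) -> np.ndarray:
        """One complex weight per antenna element; equal amplitude, linear phase ramp."""
        k = 2 * np.pi / wavelength_m                  # wavenumber
        n = np.arange(num_elements)
        phase = k * spacing_m * n * np.sin(np.radians(angle_deg))
        return np.exp(-1j * phase) / np.sqrt(num_elements)

    wavelength = 3e8 / 28e9          # ~10.7 mm at an assumed 28 GHz mmW carrier
    w = steering_weights(8, wavelength / 2, wavelength, angle_deg=20.0)
    print(np.round(np.angle(w, deg=True), 1))  # per-element phase offsets in degrees

Sweeping angle_deg over a set of candidate directions and transmitting with each resulting weight set is one plausible way to picture the beam sweeping described above.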
Some signals, such as data signals associated with a particular receiving device, may be transmitted by a BS105in a single beam direction (for example, a direction associated with the receiving device, such as a UE115). In some implementations, the beam direction associated with transmissions along a single beam direction may be determined based on a signal that was transmitted in one or more beam directions. For example, a UE115may receive one or more of the signals transmitted by the BS105in different directions and may report to the BS105an indication of the signal that the UE115received with a highest signal quality or an otherwise acceptable signal quality. In some implementations, transmissions by a device (for example, by a BS105or a UE115) may be performed using multiple beam directions, and the device may use a combination of digital precoding or radio frequency beamforming to generate a combined beam for transmission (for example, from a BS105to a UE115). The UE115may report feedback that indicates precoding weights for one or more beam directions, and the feedback may correspond to a configured number of beams across a system bandwidth or one or more sub-bands. The BS105may transmit a reference signal (for example, a cell-specific reference signal (CRS), a channel state information reference signal (CSI-RS)), which may be precoded or unprecoded. The UE115may provide feedback for beam selection, which may be a precoding matrix indicator (PMI) or codebook-based feedback (for example, a multi-panel type codebook, a linear combination type codebook, a port selection type codebook). Although these techniques are described with reference to signals transmitted in one or more directions by a BS105, a UE115may employ similar techniques for transmitting signals multiple times in different directions (for example, for identifying a beam direction for subsequent transmission or reception by the UE115) or for transmitting a signal in a single direction (for example, for transmitting data to a receiving device). A receiving device (for example, a UE115) may try multiple receive configurations (for example, directional listening) when receiving various signals from the BS105, such as synchronization signals, reference signals, beam selection signals, or other control signals. For example, a receiving device may try multiple receive directions by receiving via different antenna subarrays, by processing received signals according to different antenna subarrays, by receiving according to different receive beamforming weight sets (for example, different directional listening weight sets) applied to signals received at multiple antenna elements of an antenna array, or by processing received signals according to different receive beamforming weight sets applied to signals received at multiple antenna elements of an antenna array, any of which may be referred to as “listening” according to different receive configurations or receive directions. In some implementations, a receiving device may use a single receive configuration to receive along a single beam direction (for example, when receiving a data signal). The single receive configuration may be aligned in a beam direction determined based on listening according to different receive configuration directions (for example, a beam direction determined to have a highest signal strength, highest signal-to-noise ratio (SNR), or otherwise acceptable signal quality based on listening according to multiple beam directions). 
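The beam-report step described above can be pictured with a small sketch: given per-beam quality measurements gathered during a sweep, the receiving device reports the index with the highest SNR. The measurement values below are invented for illustration only.

    # Illustrative sketch: selecting the beam direction with the highest measured
    # SNR after a beam sweep. The SNR values below are invented example numbers.

    def best_beam(snr_db_per_beam: dict[int, float], min_snr_db: float = 0.0) -> int | None:
        """Return the beam index with the highest SNR, or None if none is acceptable."""
        index, snr = max(snr_db_per_beam.items(), key=lambda item: item[1])
        return index if snr >= min_snr_db else None

    measurements = {0: -2.5, 1: 7.1, 2: 12.4, 3: 9.8}  # beam index -> SNR in dB
    print(best_beam(measurements))  # 2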
The wireless communications system100may be a packet-based network that operates according to a layered protocol stack. In the user plane, communications at the bearer or Packet Data Convergence Protocol (PDCP) layer may be IP-based. A Radio Link Control (RLC) layer may perform packet segmentation and reassembly to communicate over logical channels. A Medium Access Control (MAC) layer may perform priority handling and multiplexing of logical channels into transport channels. The MAC layer also may use error detection techniques, error correction techniques, or both to support retransmissions at the MAC layer to improve link efficiency. In the control plane, the Radio Resource Control (RRC) protocol layer may provide establishment, configuration, and maintenance of an RRC connection between a UE115and a BS105or a core network130supporting radio bearers for user plane data. At the physical layer, transport channels may be mapped to physical channels. The UEs115and the BSs105may support retransmissions of data to increase the likelihood that data is received successfully. Hybrid automatic repeat request (HARQ) feedback is one technique for increasing the likelihood that data is received correctly over a communication link125. HARQ may include a combination of error detection (for example, using a cyclic redundancy check (CRC)), forward error correction (FEC), and retransmission (for example, automatic repeat request (ARQ)). HARQ may improve throughput at the MAC layer in poor radio conditions (for example, low signal-to-noise conditions). In some implementations, a device may support same-slot HARQ feedback, where the device may provide HARQ feedback in a specific slot for data received in a previous symbol in the slot. In some other implementations, the device may provide HARQ feedback in a subsequent slot, or according to some other time interval. An access device as described with respect toFIG.1may connect with a communication device, such as one or more components of a BS105or a sidelink device, via a cellular modem associated with the access device. In some implementations, the BS105ofFIG.1may implement a disaggregated architecture, such as an open radio access network (O-RAN) architecture, in which components implementing different layers of network functionality are virtualized, physically separated, or both. The access device may obtain, from the cellular modem associated with the access device, one or more communication parameters associated with a connection state of the cellular modem with the communication device. The one or more communication parameters may include an indication of one or more bandwidth parts, or an indication of a start marker and an end marker of data, or both. The access device may select a number of spatial streams for a WLAN AP associated with the access device in accordance with the one or more communication parameters associated with the connection state of the cellular modem. The access device may communicate with one or more STAs served by the access device using the WLAN AP in accordance with the connection state of the cellular modem and the NSS for the WLAN AP. FIG.2illustrates an example network environment200that supports spatial stream optimization using dynamic bandwidth. The network environment200may implement or be implemented to realize aspects of the wireless communications system100. 
For example, the network environment200may illustrate communication between multiple STAs205, an access device210, and a BS105-a, which may be examples of corresponding devices described herein, including with reference toFIG.1. BS105-amay be an example of a BS105, or another communication device. In some examples, the access device210may be an example of a 5G-CPE including a WLAN AP (such as a WiFi modem or AP) and a cellular modem (such as a 5G modem) and may include a quality of service (QoS) manager interfacing the WLAN AP and the cellular modem. In some implementations, the QoS manager may select a power mode for the WLAN AP functionality of the access device210in accordance with various events or communication parameters associated with a connection state between the cellular modem and the BS105-a. As such, the QoS manager may coordinate or synchronize power management actions (such as sleep schedules and connection states) between both the WLAN AP functionality and the cellular modem functionality of the access device210. The network environment200may include various air or area networks or interfaces, including a WiFi network255, an air interface260(which may be equivalently referred to as an edge interface), and a core network265. The core network265may include several entities which may buffer, schedule, and transmit data provided from a set of content providers230to the BS105-a(for example, communication device). In some implementations, the core network265may include a radio access network (RAN)215, a UPF220, a data network225, and the content providers230. The data network225may transfer data between the content providers230and the UPF220through a process of data switching, system control, interconnection transmission lines, or a combination thereof. The UPF220may receive the data from the data network225and transmit the data to the RAN215. In some examples, the UPF220may select or otherwise determine which elements of the data received from the data network225that the UPF220may buffer and may schedule the data transmission of the data to the RAN215to mitigate or otherwise reduce latency (for example, according to some scheduling timeline) or in accordance with current radio conditions. Upon receiving the data from the UPF220, the RAN215may schedule different sets of information (for example, different portions of the received data) to corresponding STAs205. For instance, the RAN215may receive a first portion of data from a content provider230-a, a second portion of data from a content provider230-b, and a third portion of data from a content provider230-cand may direct the first portion of data to the STA205-a, the second portion of data to the STA205-b, and the third portion of data to the STA205-cin accordance with which content provider230each STA205is associated with (or which content provider230an application running at each STA205is associated with). As shown inFIG.2, the associations between the content providers230and the STAs205are illustrated by dashed lines between the content providers230and the STAs205. The RAN215may configure the scheduling policy, signal quality, or signal strength, or a combination thereof, to configure a time and a power at which the BS105-amay transmit the data to the access device210. The various elements of the core network265may work together to minimize latency and improve the overall QoS for the STAs205. 
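The provider-to-STA routing just described can be pictured with a minimal sketch. All identifiers below (provider_a, sta_a, and so on) are hypothetical placeholders, and the grouping logic is only one plausible way to realize the associations shown by the dashed lines ofFIG.2.

    # Illustrative sketch: directing portions of downlink data to the STA whose
    # application is associated with the originating content provider.
    # All identifiers here are hypothetical placeholders.
    associations = {            # content provider -> recipient STA
        "provider_a": "sta_a",
        "provider_b": "sta_b",
        "provider_c": "sta_c",
    }

    def route(portions: list[tuple[str, bytes]]) -> dict[str, list[bytes]]:
        """Group received data portions by the STA associated with each provider."""
        queues: dict[str, list[bytes]] = {}
        for provider, payload in portions:
            queues.setdefault(associations[provider], []).append(payload)
        return queues

    print(route([("provider_a", b"video"), ("provider_c", b"telemetry")]))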
The RAN215may send the data to the BS105-a, which may serve as a transition or interface between the core network265and the air interface260. The BS105-a, upon receiving the data from the content providers230via the RAN215, may transmit the data to the access device210via a communication link235(such as an over-the-air (OTA) cellular connection). In some examples, cellular communication may rely on a dynamic bandwidth that is dependent on one or more communication factors or variables and may experience dynamic or frequently changing radio conditions. For example, the BS105-amay communicate with the access device210over the communication link235and, in some implementations, communication over the communication link235may be sensitive to various environmental conditions. For instance, communication over the communication link235may be adversely affected by weather (such as rain or snow, among other examples) or physical barriers (such as cars, buildings, or trees, among other examples). In some examples, such environmental conditions, or the magnitude of their influence on the communication between the BS105-aand the access device210, may vary in a dynamic nature or according to relatively short timelines. In some aspects, for example, the BS105-aand the access device210may communicate over the communication link235using a millimeter wave (mmW) radio frequency band, which may be sensitive to such various environmental conditions. As a result of such dynamic radio conditions, the BS105-amay adjust one or more communication parameters relatively frequently to increase the likelihood for successful communication between the BS105-aand the access device210(for example, to provide a sufficiently strong signal to the access device210). In some examples, the access device210may communicate with the BS105-avia the cellular modem of (or associated with) the access device210. For example, the cellular modem may receive signaling sent over the communication link235from the BS105-ausing an antenna240-b. In some implementations, the cellular modem of the access device210may be co-located with the WLAN AP of the access device210(and, as such, the access device210may be understood as one box). In some other implementations, the cellular modem of the access device210may be located at a different physical location than the WLAN AP of the access device210(and, as such, the access device210may be understood as two boxes). In such implementations in which the cellular modem and the WLAN AP of the access device210are separately located, the cellular modem may be located at an exterior of a building and the WLAN AP may be located within the building. Accordingly, the cellular modem may be referred to or otherwise understood as an outdoor data unit (ODU) and the WLAN AP may be referred to or otherwise understood as an indoor data unit (IDU). The WLAN AP and the cellular modem may communicate via any signaling mechanism, such as via a wired connection (for example, via an Ethernet cable) or a wireless connection. Regardless of whether the cellular modem and the WLAN AP are co-located or located at different physical locations, the cellular modem may transmit the data received from the BS105-ato the WLAN AP, which may send the data to the STAs205. The WLAN AP may communicate with the STAs205in various ways, such as via a wired connection or via a wireless connection. 
For example, the WLAN AP may wirelessly communicate with the STA205-band the STA205-cusing the antenna240-avia a wireless communication link245-aand a wireless communication link245-b, respectively. Additionally, or alternatively, the WLAN AP may communicate with the STA205-avia a communication link250(such as an Ethernet or fiber optic connection). Further, although the network environment200shows one access device210and three STAs205, the network environment200may include any number of access devices210that communicate with any number of STAs205without exceeding the scope of the present disclosure. In some examples, one or more components of the BS105-amay dynamically change the bandwidth of the air interface260between the access device210and the BS105-aby adding or removing BWPs. For example, one or more components of the BS105-amay dynamically change bandwidth by changing the number of BWPs used by the BS105-ato transmit data to the access device210. In some implementations, dynamically adjustable BWP configurations may improve low latency communications by increasing the number of BWPs commensurate with the amount of data to be communicated or the latency requirements associated with the data. In some cases, the WLAN AP of the access device210may not automatically be made aware of such changes at the cellular modem. In the absence of communication and coordination between the cellular modem and the WLAN AP, the WLAN AP may take power optimization steps which may not align with the objectives associated with dynamic BWP changes at the cellular modem. For example, the WLAN AP may switch to a lower NSS for WiFi communications as a power saving measure without knowing that the BS105-ahas recently added BWPs to the cellular connection in anticipation of a large downlink transmission to the cellular modem. In such an example, the power saving measures taken by the WLAN AP may reduce a WiFi throughput capability of the access device210and actually reduce power efficiency. To improve coordination between the WLAN AP and the cellular modem of the access device210, the WLAN AP may receive from the cellular modem indications of dynamic BWP changes or other communication parameters associated with cellular communications, such as start and end markers of downlink data transmissions. Using this information, the WLAN AP may align NSS adjustments for WiFi transmissions with the dynamic BWP changes or communication parameters associated with the cellular modem. In one example, the WLAN AP of the access device210may obtain an indication of the number of BWPs configured at the cellular modem of the access device210, and use this information to predict a short-term bandwidth requirement at the WLAN AP for the wireless communication link245-aor the wireless communication link245-b. One or more components of the WLAN AP, such as a WiFi scheduler, may switch to a higher NSS for transmissions by the WLAN AP over the wireless communication link245-aor the wireless communication link245-bbased on the predicted bandwidth requirement. Thus, the information provided by the cellular modem may aid in achieving increased network efficiency, lower latency, and power savings by the WLAN AP of the access device210. In another example, for each data transfer initiated with the RAN215, the cellular modem may mark incoming data and convey data start and end markers to one or more components of the WLAN AP, such as the WiFi scheduler. 
The data start and end markers may provide a reasonably accurate basis for approximating an amount of incoming data to be forwarded over the wireless communication link245-aor the wireless communication link245-b. The WiFi scheduler may thus be aware of a global view of the data coming into the WiFi communications systems across clients by receiving communication parameters from the modem. The cellular modem may determine whether to provide the indication of the BWP configuration or the start and end markers based on power optimization requirements of the WLAN AP or another component of the access device210. In either implementation, the WiFi scheduler may switch to a higher NSS based on the communications parameters obtained from the cellular modem in order to decrease latency, improve efficiency, and optimize battery performance. The WiFi scheduler may change the NSS for a period of time, and may return to an original NSS (for example, after a number of TTIs). The WLAN AP of the access device210may communicate with one or more STAs205according to the selected NSS. FIG.3illustrates an example communication flow300that supports spatial stream optimization using dynamic bandwidth. Devices operating in a wireless communications system incorporating WiFi procedures may operate and communicate according to the communication flow300. For example, cellular components of an access device, such as a cellular modem of the access device210described with respect toFIG.2, may initiate procedures, and communicate with a WiFi Scheduler305of the access device (for example, a WiFi scheduler305of a WLAN portion of the access device). At310, a cellular modem of the access device may communicate with a WAN, via one or more components of a BS105as described inFIG.1orFIG.2, using an initial BWP. At335, the network may communicate an indication of the initial BWP to the WiFi scheduler305. This communication may include respective indications of an initial uplink bandwidth and an initial downlink bandwidth used to communicate with the WAN. At315, the WAN may dynamically configure the BWP used by the cellular modem of the access device, for example by signaling an increase or decrease in the number of BWPs to be used by the cellular modem of the access device in communicating with the WAN. This signaling of an increase or decrease in the number of BWPs may be communicated by the WAN via one or more components of the BS105. At320, the cellular modem may reconfigure communications according to the dynamically updated BWPs at315. The reconfiguration may include updating the number of BWPs used by the cellular modem in accordance with the signaled change in the BWP configuration at the network (for example, the BWP configuration can be changed by one or more components of a BS105) at315. At340, the cellular modem may communicate an indication of the new uplink and downlink WAN bandwidths to the WiFi scheduler305. At325, an inactivity timer may expire. For example, the inactivity timer may expire based on a lack of data transmitted or received during a period of time. At330, the modem (for example, wireless cellular modem) may return to a default number of BWPs. Returning to the default number of BWPs may be indicated back to the network (for example, to one or more components of a BS105). At345, the cellular modem may communicate a return to a default number of BWPs to the WiFi scheduler305, including an indication of the new uplink and downlink WAN bandwidth. 
In some implementations, the new uplink and downlink WAN bandwidths may be the same as the initial uplink and downlink bandwidths at335. The WiFi scheduler305may therefore receive multiple communications, such as at335,340, and345, from the cellular modem indicating changes to uplink and downlink WAN bandwidth. As a result of receiving this information, the WiFi scheduler305may expect changes in WiFi throughput, and may dynamically update the NSS used for WiFi communications based on the BWP changes for WAN communications, as well as data start and end marker indications, as described herein with respect toFIG.4. FIG.4illustrates an example decision flow400that supports spatial stream optimization using dynamic bandwidth. A WiFi scheduler may be a part of an access device as described herein. The WiFi scheduler may perform the decision flow400outlined inFIG.4. The WiFi scheduler performing decision flow400may be an example of a WiFi scheduler305of a WLAN AP portion of an access device, such as a CPE, as described with respect toFIG.3. At410, the WiFi scheduler may receive (or otherwise obtain via a wired or wireless interface) an indication of configured BWPs for cellular communications between the access device and a WAN, or an indication of start and end markers for downlink data received from the WAN, or both. The WiFi scheduler may receive the indication of the configured BWPs, the indication of the start and end markers, or both, from a cellular modem of the access device, as described with respect toFIG.3. From this information, the WiFi scheduler may expect to forward the downlink data received from the WAN to one or more recipient STAs over WiFi. At415, the WiFi scheduler may identify the one or more recipient STAs to be scheduled to receive the upcoming data over WiFi. For example, the WiFi scheduler may identify the one or more recipient STAs from one or more recipient addresses associated with the data received from the WAN. At420, the WiFi scheduler may determine whether a queue depth is low for each of the one or more recipient STAs. A low queue depth in a recipient STA may be determined by comparing a number of requests in the request queue for the recipient STA to a depth threshold. At425, the WiFi scheduler may, if the queue depth for the recipient STA is not low (indicating a large amount of incoming data for the recipient STA), schedule the recipient STA for communications with an increased NSS. In some implementations, the recipient STA may be scheduled with a maximum NSS. At430, if the queue depth for a recipient STA is low, the WiFi scheduler may check whether the configured BWP, or the start markers and end markers, or both, indicate an increased amount of upcoming data. At425, if the start and end markers or the configured BWP, or both, indicate a high amount of upcoming data for the recipient STA, the WiFi scheduler may schedule the STA with an increased NSS, possibly a maximum NSS. At435, if the start and end markers or the configured BWP, or both, indicate a lower amount of upcoming data for the recipient STA, the WiFi scheduler may schedule the recipient STA with a lower NSS, possibly a minimum NSS, in an effort to conserve power. The decision flow400may cycle as a loop, and may restart after a number of TTIs, a timer expiration, or according to some other defined trigger. FIG.5illustrates an example process flow500that supports spatial stream optimization using dynamic bandwidth. The process flow500may be implemented by an access device505. 
The access device505may communicate with a STA510over a WLAN, such as a WiFi network. The access device505may be an example of or otherwise implement an access device210or a WiFi scheduler305as described with respect toFIGS.2-4, respectively. In some implementations, at515, the access device505may communicate with a cellular communication device535via a cellular modem. The cellular communication device535may be an example of one or more components of a BS105(including one or more virtualized or physically separated components of a BS105implementing a disaggregated architecture, such as an O-RAN architecture), a sidelink device, or another device capable of communicating with the access device505over the cellular network. At520, the access device505may obtain one or more communication parameters associated with a connection state of a cellular modem associated with the access device505. In some implementations, the one or more communication parameters may include an indication of one or more BWPs configured for the cellular modem to communicate with the cellular communication device535. In these implementations, selecting the number of spatial streams at525may be based on the one or more BWPs. In some other implementations, the one or more communication parameters may include an indication of a data start marker and a data end marker associated with data transmission to the cellular modem. In these implementations, selecting the NSS at525may be based on the data start marker and the data end marker. At525, the access device505may select a NSS for a WLAN connection between a WLAN AP of the access device505and the STA510in accordance with the one or more communication parameters associated with the connection state of the cellular modem. For example, the access device505may selectively increase the NSS of the WLAN connection for a number of TTIs when the one or more communication parameters of the cellular connection indicate that more downlink data is expected over the cellular connection. Additionally, or alternatively, the access device505may selectively decrease the NSS of the WLAN connection when the one or more communication parameters of the cellular connection indicate that less downlink data is expected over the cellular connection. The access device505may return to a default or initial NSS setting after a termination of the number of TTIs. In some cases, the number of TTIs may be fixed or standardized. Additionally, or alternatively, the access device505may dynamically select the number of TTIs according to the one or more communication parameters of the cellular connection. In implementations where the one or more communication parameters of the cellular connection obtained at520include the number of BWPs configured for the cellular connection, the access device505may selectively increase or decrease the NSS in accordance with a change in a number of the one or more BWPs configured for the cellular modem. In some implementations, the access device505may detect an increase in bandwidth associated with the change in the number of the one or more BWPs configured for the cellular modem. In some implementations, the access device505may receive an indication of a change in the number of the one or more BWPs configured for the cellular modem. An increase in BWPs configured for the cellular modem of an access device505may indicate that more bandwidth may be used to communicate data to the access device505over a cellular connection. 
Because the access device505may forward data received over the cellular connections to WLAN STAs, an increase in a number of BWPs configured for the cellular modem of the access device505may correlate with an increased bandwidth for the upcoming WLAN transmissions by the access device505. In these implementations, the access device505may selectively increase the NSS based on the detected increase. In some other implementations, the access device505may detect a decrease in bandwidth associated with the change in the number of the one or more BWPs configured for the cellular modem. In some implementations, the access device505may receive an indication of a change in the number of the one or more BWPs configured for the cellular modem. A decrease in BWPs configured for the cellular modem of an access device505may indicate that less bandwidth may be used to communicate data to the access device505over a cellular connection. Because the access device505may forward data received over the cellular connections to WLAN STAs, a decrease in a number of BWPs configured for the cellular modem of the access device505may correlate with a decreased bandwidth for the upcoming WLAN transmissions by the access device505. In these implementations, the access device505may selectively decrease the NSS based on the detected decrease. In implementations where the one or more communication parameters of the cellular connection obtained at520include a data start marker and a data end marker or an amount of downlink data received by the cellular modem, the NSS selected at525may be in proportion to the amount of downlink data received or expected to be received by the cellular modem. For example, the access device505may increase the NSS based on obtaining the data start marker, and decrease the NSS based on obtaining the data end marker. At530, the access device505may communicate with one or more STAs including the STA510served by the access device505, using the WLAN AP, in accordance with the connection state of the cellular modem and the NSS for the WLAN AP. FIG.6shows a diagram of an example access system600including a device605that supports spatial stream optimization using dynamic bandwidth. The device605may communicate wirelessly with one or more BSs105, UEs115, or any combination thereof. The device605may include components for bi-directional voice and data communications including components for transmitting and receiving communications, such as a communications manager620, an input/output (I/O) controller610, a transceiver615, an antenna625, a memory630, code635, and a processor640. These components may be in electronic communication or otherwise coupled (for example, operatively, communicatively, functionally, electronically, electrically) via one or more buses (for example, a bus645). The I/O controller610may manage input and output signals for the device605. The I/O controller610also may manage peripherals not integrated into the device605. In some implementations, the I/O controller610may represent a physical connection or port to an external peripheral. In some implementations, the I/O controller610may utilize an operating system such as iOS®, ANDROID®, MS-DOS®, MS-WINDOWS®, OS/2®, UNIX®, LINUX®, or another known operating system. Additionally or alternatively, the I/O controller610may represent or interact with a modem, a keyboard, a mouse, a touchscreen, or a similar device. In some implementations, the I/O controller610may be implemented as part of a processor, such as the processor640. 
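By way of illustration, and not limitation, the TTI-bounded NSS adaptation described above may be sketched as follows; the class name NssController, the boost_ttis window, and the specific NSS bounds are illustrative assumptions rather than details drawn from this disclosure.

class NssController:
    def __init__(self, default_nss=2, max_nss=4, min_nss=1, boost_ttis=16):
        self.default_nss = default_nss
        self.max_nss = max_nss
        self.min_nss = min_nss
        self.boost_ttis = boost_ttis
        self.nss = default_nss
        self.ttis_left = 0

    def on_bwp_change(self, old_count, new_count):
        # More configured BWPs may correlate with more upcoming WLAN data;
        # fewer configured BWPs may correlate with less upcoming WLAN data.
        if new_count > old_count:
            self.nss, self.ttis_left = self.max_nss, self.boost_ttis
        elif new_count < old_count:
            self.nss = self.min_nss

    def on_data_marker(self, start):
        # Increase the NSS on a data start marker; decrease it on an
        # end marker to conserve power.
        if start:
            self.nss, self.ttis_left = self.max_nss, self.boost_ttis
        else:
            self.nss = self.min_nss

    def on_tti(self):
        # Return to the default NSS after the boost window terminates.
        if self.ttis_left > 0:
            self.ttis_left -= 1
            if self.ttis_left == 0:
                self.nss = self.default_nss
        return self.nss

In this sketch, a BWP-count increase or a data start marker raises the NSS for a bounded number of TTIs, a BWP-count decrease or a data end marker lowers it, and on_tti() restores the default NSS once the window terminates.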
In some implementations, a user may interact with the device605via the I/O controller610or via hardware components controlled by the I/O controller610. In some implementations, the device605may include a single antenna625. However, in some other implementations, the device605may have more than one antenna625, which may be capable of concurrently transmitting or receiving multiple wireless transmissions. The transceiver615may communicate bi-directionally, via the one or more antennas625, wired, or wireless links as described herein. For example, the transceiver615may represent a wireless transceiver and may communicate bi-directionally with another wireless transceiver. The transceiver615also may include a modem to modulate the packets, to provide the modulated packets to one or more antennas625for transmission, and to demodulate packets received from the one or more antennas625. The memory630may include random access memory (RAM) and read-only memory (ROM). The memory630may store computer-readable, computer-executable code635including instructions that, when executed by the processor640, cause the device605to perform various functions described herein. The code635may be stored in a non-transitory computer-readable medium such as system memory or another type of memory. In some implementations, the code635may not be directly executable by the processor640but may cause a computer (for example, when compiled and executed) to perform functions described herein. In some implementations, the memory630may contain, among other things, a basic I/O system (BIOS) which may control basic hardware or software operation such as the interaction with peripheral components or devices. The processor640may include an intelligent hardware device (for example, a general-purpose processor, a digital signal processor (DSP), a central processing unit (CPU), a microcontroller, an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), a programmable logic device, a discrete gate or transistor logic component, a discrete hardware component, or any combination thereof). In some implementations, the processor640may be configured to operate a memory array using a memory controller. In some other implementations, a memory controller may be integrated into the processor640. The processor640may be configured to execute computer-readable instructions stored in a memory (for example, the memory630) to cause the device605to perform various functions (for example, functions or tasks supporting spatial stream optimization using dynamic bandwidth). For example, the device605or a component of the device605may include a processor640and memory630coupled to the processor640, the processor640and memory630configured to perform various functions described herein. For example, the communications manager620may be configured as or otherwise support a means for obtaining one or more communication parameters associated with a connection state of a cellular modem associated with the access device. The communications manager620may be configured as or otherwise support a means for selecting a NSS for a WLAN AP associated with the access device in accordance with the one or more communication parameters associated with the connection state of the cellular modem. 
The communications manager620may be configured as or otherwise support a means for communicating with one or more stations (STAs) served by the access device using the WLAN AP in accordance with the connection state of the cellular modem and the NSS for the WLAN AP. In some examples, the communications manager620may be configured to perform various operations (for example, receiving, monitoring, transmitting) using or otherwise in cooperation with the transceiver615, the one or more antennas625, or any combination thereof. Although the communications manager620is illustrated as a separate component, in some implementations, one or more functions described with reference to the communications manager620may be supported by or performed by the processor640, the memory630, the code635, or any combination thereof. For example, the code635may include instructions executable by the processor640to cause the device605to perform various aspects of spatial stream optimization using dynamic bandwidth as described herein, or the processor640and the memory630may be otherwise configured to perform or support such operations. FIG.7shows a flowchart illustrating an example method700that supports spatial stream optimization using dynamic bandwidth. The operations of the method700may be implemented by a UE115or its components as described herein. For example, the operations of the method700may be performed by a UE115as described with reference toFIGS.1-6. In some examples, a UE may execute a set of instructions to control the functional elements of the UE to perform the described functions. Additionally, or alternatively, the UE may perform aspects of the described functions using special-purpose hardware. At705, the method may include obtaining one or more communication parameters associated with a connection state of a cellular modem associated with the access device. The operations of705may be performed in accordance with examples as disclosed herein. At710, the method may include selecting a NSS for a WLAN AP associated with the access device in accordance with the one or more communication parameters associated with the connection state of the cellular modem. The operations of710may be performed in accordance with examples as disclosed herein. At715, the method may include communicating with one or more stations (STAs) served by the access device using the WLAN AP in accordance with the connection state of the cellular modem and the NSS for the WLAN AP. The operations of715may be performed in accordance with examples as disclosed herein. The following provides an overview of some aspects of the present disclosure: Aspect 1: A method of wireless communications by an access device, including: obtaining one or more communication parameters associated with a connection state of a cellular modem associated with the access device; selecting a NSS for a WLAN AP associated with the access device in accordance with the one or more communication parameters associated with the connection state of the cellular modem; and communicating with one or more STAs served by the access device using the WLAN AP in accordance with the connection state of the cellular modem and the NSS for the WLAN AP. Aspect 2: The method of aspect 1, where obtaining the one or more communication parameters includes obtaining an indication of one or more BWPs configured for the cellular modem; and selecting the NSS is based at least in part on the one or more BWPs.
Aspect 3: The method of aspect 2, where selecting the NSS based at least in part on the one or more BWPs includes: selectively increasing or decreasing the NSS in accordance with a change in a number of the one or more BWPs configured for the cellular modem. Aspect 4: The method of aspect 3, further including: detecting an increase in bandwidth associated with the change in the number of the one or more BWPs configured for the cellular modem; and selectively increasing the NSS based at least in part on the detected increase. Aspect 5: The method of any of aspects 3 through 4, further including: detecting a decrease in bandwidth associated with the change in the number of the one or more BWPs configured for the cellular modem; and selectively decreasing the NSS based at least in part on the detected decrease. Aspect 6: The method of any of aspects 1 through 5, where obtaining the one or more communication parameters includes obtaining an indication of a data start marker and a data end marker associated with data transmission to the cellular modem; and selecting the NSS is based at least in part on the data start marker and the data end marker. Aspect 7: The method of aspect 6, further including: determining, based at least in part on the data start marker and the data end marker, an amount of downlink data received by the cellular modem, where the selected NSS is in proportion to the amount of downlink data received by the cellular modem. Aspect 8: The method of any of aspects 6 through 7, further including: increasing the NSS based at least in part on obtaining the data start marker. Aspect 9: The method of any of aspects 6 through 8, further including: decreasing the NSS based at least in part on obtaining the data end marker. Aspect 10: The method of any of aspects 1 through 9, further including: selectively increasing or decreasing the NSS for a number of TTIs; and using an original NSS after a termination of the number of TTIs. Aspect 11: An apparatus including a processor; memory coupled with the processor; and instructions stored in the memory and executable by the processor to cause the apparatus to perform a method of any of aspects 1 through 10. Aspect 12: An apparatus including at least one means for performing a method of any of aspects 1 through 10. Aspect 13: A non-transitory computer-readable medium storing code, the code including instructions executable by a processor to perform a method of any of aspects 1 through 10. As used herein, the term “determine” or “determining” encompasses a wide variety of actions and, therefore, “determining” can include calculating, computing, processing, deriving, investigating, looking up (such as via looking up in a table, a database or another data structure), ascertaining and the like. Also, “determining” can include receiving (such as receiving information), accessing (such as accessing data in a memory) and the like. Also, “determining” can include resolving, selecting, choosing, establishing and other such similar actions. As used herein, a phrase referring to “at least one of” a list of items refers to any combination of those items, including single members. As an example, “at least one of: a, b, or c” is intended to cover: a, b, c, a-b, a-c, b-c, and a-b-c. The various illustrative logics, logical blocks, modules, circuits and algorithm processes described in connection with the implementations disclosed herein may be implemented as electronic hardware, computer software, or combinations of both.
The interchangeability of hardware and software has been described generally, in terms of functionality, and illustrated in the various illustrative components, blocks, modules, circuits and processes described above. Whether such functionality is implemented in hardware or software depends upon the particular application and design constraints imposed on the overall system. The hardware and data processing apparatus used to implement the various illustrative logics, logical blocks, modules and circuits described in connection with the aspects disclosed herein may be implemented or performed with a general purpose single- or multi-chip processor, a DSP, an ASIC, a FPGA or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, or any processor, controller, microcontroller, or state machine. A processor also may be implemented as a combination of computing devices, such as a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. In some implementations, particular processes and methods may be performed by circuitry that is specific to a given function. In one or more aspects, the functions described may be implemented in hardware, digital electronic circuitry, computer software, firmware, including the structures disclosed in this specification and their structural equivalents thereof, or in any combination thereof. Implementations of the subject matter described in this specification also can be implemented as one or more computer programs, such as one or more modules of computer program instructions, encoded on a computer storage media for execution by, or to control the operation of, data processing apparatus. If implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. The processes of a method or algorithm disclosed herein may be implemented in a processor-executable software module which may reside on a computer-readable medium. Computer-readable media includes both computer storage media and communication media including any medium that can be enabled to transfer a computer program from one place to another. A storage media may be any available media that may be accessed by a computer. By way of example, and not limitation, such computer-readable media may include RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that may be used to store desired program code in the form of instructions or data structures and that may be accessed by a computer. Also, any connection can be properly termed a computer-readable medium. Disk and disc, as used herein, includes compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media. Additionally, the operations of a method or algorithm may reside as one or any combination or set of codes and instructions on a machine readable medium and computer-readable medium, which may be incorporated into a computer program product. 
Various modifications to the implementations described in this disclosure may be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other implementations without departing from the spirit or scope of this disclosure. Thus, the claims are not intended to be limited to the implementations shown herein, but are to be accorded the widest scope consistent with this disclosure, the principles and the features disclosed herein. Additionally, a person having ordinary skill in the art will readily appreciate that the terms “upper” and “lower” are sometimes used for ease of describing the figures, and indicate relative positions corresponding to the orientation of the figure on a properly oriented page, and may not reflect the proper orientation of any device as implemented. Certain features that are described in this specification in the context of separate implementations also can be implemented in combination in a single implementation. Conversely, various features that are described in the context of a single implementation also can be implemented in multiple implementations separately or in any suitable subcombination. Moreover, although features may be described above as acting in some combinations and even initially claimed as such, one or more features from a claimed combination can be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination. Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. Further, the drawings may schematically depict one or more example processes in the form of a flow diagram. However, other operations that are not depicted can be incorporated in the example processes that are schematically illustrated. For example, one or more additional operations can be performed before, after, simultaneously, or between any of the illustrated operations. In some circumstances, multitasking and parallel processing may be advantageous. Moreover, the separation of various system components in the implementations described above should not be understood as requiring such separation in all implementations, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products. Additionally, other implementations are within the scope of the following claims. In some implementations, the actions recited in the claims can be performed in a different order and still achieve desirable results.
82,361
11863276
DETAILED DESCRIPTION Some wireless communications systems may support communications in shared radio frequency spectrum bands (e.g., unlicensed radio frequency bands). For example, wireless communications in millimeter wave (mmW) and sub-terahertz (THz) frequencies may support a number of shared radio frequency bands for communications between devices. Transmission and reception of wireless signals over these bands may be directional (e.g., using directional beamforming techniques). To mitigate interference by multiple devices communicating in unlicensed radio frequency spectrum bands, channel access procedures (e.g., listen-before-talk (LBT), long-term (LT) sensing, or the like) may be performed to determine whether a channel is clear prior to transmitting a message to another device in shared radio frequency spectrum bands. Additionally or alternatively, the message may be transmitted over a particular directional beam that may support reduced interference to one or more unintended receivers in the system. For instance, one or more devices that are nearby (e.g., in relatively close proximity to) an intended receiver may unintentionally receive some portion of a directional transmission to the intended receiver, thereby causing interference. As such, some beamforming techniques may instead be used to form relatively narrow beams that may reduce or minimize interference to nearby devices. In some cases, it may be beneficial to communicate with another device in the shared radio frequency spectrum band without performing channel access procedures (e.g., to reduce latency), but such techniques may be performed by a device only if some minimum antenna gain requirements are satisfied, for example, to avoid interference to other devices when skipping channel access procedures. As such, it may be beneficial to define beams that may be used for communications in shared radio frequency bands without using channel access procedures. As described herein, techniques may be used for determining which beams may be used for shared radio frequency spectrum communications without first performing channel access procedures. Specifically, aspects of the present disclosure provide definitions for narrow beam-based channel access that enables a device to communicate in a shared radio frequency spectrum band (e.g., unlicensed radio frequency spectrum band) without performing channel access procedures. For example, a particular beam may be determined to be relatively narrow based on one or more measurement values, metrics derived from the one or more measurement values, thresholds, a number of spatial streams associated with the beam, or any combination thereof. In some aspects, the narrowness of a beam may be determined by a per-beam spherical measurement test. The spherical measurement test may be based on forming multiple beams each having a single spatial stream, and measuring effective isotropic radiated power (EIRP) values for various directions of the beams. Additionally or alternatively, the beams may be formed using multiple reference precoders (e.g., that may be from a predefined set of reference precoders) for the spherical measurement test, and the EIRP measurements may be recorded for each beam and each reference precoder. That is, the per-beam spherical measurement test may be performed on a number of beams each associated with a number of spatial streams.
In any case, a particular beam may be associated with a set of EIRP measurement values obtained via the spherical measurement test, and one or more metrics may be identified based on the set of EIRP values for the directional beam. The one or more metrics may then be compared to one or more EIRP threshold values (e.g., based on a cumulative distribution function (CDF) for a set of EIRP values) to determine whether the beam may be used to communicate in the shared radio frequency spectrum band without performing channel access procedures. For instance, if the metrics satisfy at least one EIRP threshold based on the number of spatial streams of the beam, then the device may use the beam for transmitting or receiving messages in the shared radio frequency spectrum band while skipping channel access procedures (e.g., LBT, LT sensing). Particular aspects of the subject matter described herein may be implemented to realize one or more advantages. For example, the described techniques may support definitions of beams that may be used without channel access procedures when communicating in shared radio frequency spectrum bands, such that a device (e.g., a UE) may efficiently identify some beams that enable reduced latency when transmitting messages on that beam. This may result in the device skipping channel access procedures when communicating with another device, while also reducing or minimizing interference to other devices in the system (e.g., based on a relative “narrowness” of the particular beam), which may improve communications efficiency for one or multiple devices. As such, the described techniques may promote device and system efficiencies, enhanced access to a channel, relatively increased data rates, and improved spectral efficiency, among other benefits. Aspects of the disclosure are initially described in the context of wireless communications systems. Aspects of the disclosure are further described by a CDF plot, spherical measurement tests, and a process flow. Aspects of the disclosure are further illustrated by and described with reference to apparatus diagrams, system diagrams, and flowcharts that relate to directional channel access using a narrow beam with multiple spatial streams. FIG.1illustrates an example of a wireless communications system100that supports directional channel access using a narrow beam with multiple spatial streams in accordance with aspects of the present disclosure. The wireless communications system100may include one or more base stations105, one or more UEs115, and a core network130. In some examples, the wireless communications system100may be a Long Term Evolution (LTE) network, an LTE-Advanced (LTE-A) network, an LTE-A Pro network, or a New Radio (NR) network. In some examples, the wireless communications system100may support enhanced broadband communications, ultra-reliable communications, low latency communications, communications with low-cost and low-complexity devices, or any combination thereof. The base stations105may be dispersed throughout a geographic area to form the wireless communications system100and may be devices in different forms or having different capabilities. The base stations105and the UEs115may wirelessly communicate via one or more communication links125. Each base station105may provide a coverage area110over which the UEs115and the base station105may establish one or more communication links125.
The coverage area110may be an example of a geographic area over which a base station105and a UE115may support the communication of signals according to one or more radio access technologies. The UEs115may be dispersed throughout a coverage area110of the wireless communications system100, and each UE115may be stationary, or mobile, or both at different times. The UEs115may be devices in different forms or having different capabilities. Some example UEs115are illustrated inFIG.1. The UEs115described herein may be able to communicate with various types of devices, such as other UEs115, the base stations105, or network equipment (e.g., core network nodes, relay devices, integrated access and backhaul (IAB) nodes, or other network equipment), as shown inFIG.1. The base stations105may communicate with the core network130, or with one another, or both. For example, the base stations105may interface with the core network130through one or more backhaul links120(e.g., via an S1, N2, N3, or other interface). The base stations105may communicate with one another over the backhaul links120(e.g., via an X2, Xn, or other interface) either directly (e.g., directly between base stations105), or indirectly (e.g., via core network130), or both. In some examples, the backhaul links120may be or include one or more wireless links. One or more of the base stations105described herein may include or may be referred to by a person having ordinary skill in the art as a base transceiver station, a radio base station, an access point, a radio transceiver, a NodeB, an eNodeB (eNB), a next-generation NodeB or a giga-NodeB (either of which may be referred to as a gNB), a Home NodeB, a Home eNodeB, or other suitable terminology. A UE115may include or may be referred to as a mobile device, a wireless device, a remote device, a handheld device, or a subscriber device, or some other suitable terminology, where the “device” may also be referred to as a unit, a station, a terminal, or a client, among other examples. A UE115may also include or may be referred to as a personal electronic device such as a cellular phone, a personal digital assistant (PDA), a tablet computer, a laptop computer, or a personal computer. In some examples, a UE115may include or be referred to as a wireless local loop (WLL) station, an Internet of Things (IoT) device, an Internet of Everything (IoE) device, or a machine type communications (MTC) device, among other examples, which may be implemented in various objects such as appliances, or vehicles, meters, among other examples. The UEs115described herein may be able to communicate with various types of devices, such as other UEs115that may sometimes act as relays as well as the base stations105and the network equipment including macro eNBs or gNBs, small cell eNBs or gNBs, or relay base stations, among other examples, as shown inFIG.1. The UEs115and the base stations105may wirelessly communicate with one another via one or more communication links125over one or more carriers. The term “carrier” may refer to a set of radio frequency spectrum resources having a defined physical layer structure for supporting the communication links125. For example, a carrier used for a communication link125may include a portion of a radio frequency spectrum band (e.g., a bandwidth part (BWP)) that is operated according to one or more physical layer channels for a given radio access technology (e.g., LTE, LTE-A, LTE-A Pro, NR). 
Each physical layer channel may carry acquisition signaling (e.g., synchronization signals, system information), control signaling that coordinates operation for the carrier, user data, or other signaling. The wireless communications system100may support communication with a UE115using carrier aggregation or multi-carrier operation. A UE115may be configured with multiple downlink component carriers and one or more uplink component carriers according to a carrier aggregation configuration. Carrier aggregation may be used with both frequency division duplexing (FDD) and time division duplexing (TDD) component carriers. Signal waveforms transmitted over a carrier may be made up of multiple subcarriers (e.g., using multi-carrier modulation (MCM) techniques such as orthogonal frequency division multiplexing (OFDM) or discrete Fourier transform spread OFDM (DFT-S-OFDM)). In a system employing MCM techniques, a resource element may consist of one symbol period (e.g., a duration of one modulation symbol) and one subcarrier, where the symbol period and subcarrier spacing are inversely related. The number of bits carried by each resource element may depend on the modulation scheme (e.g., the order of the modulation scheme, the coding rate of the modulation scheme, or both). Thus, the more resource elements that a UE115receives and the higher the order of the modulation scheme, the higher the data rate may be for the UE115. A wireless communications resource may refer to a combination of a radio frequency spectrum resource, a time resource, and a spatial resource (e.g., spatial layers or beams), and the use of multiple spatial layers may further increase the data rate or data integrity for communications with a UE115. The time intervals for the base stations105or the UEs115may be expressed in multiples of a basic time unit which may, for example, refer to a sampling period of Ts = 1/(Δfmax·Nf) seconds, where Δfmax may represent the maximum supported subcarrier spacing, and Nf may represent the maximum supported discrete Fourier transform (DFT) size. Time intervals of a communications resource may be organized according to radio frames each having a specified duration (e.g., 10 milliseconds (ms)). Each radio frame may be identified by a system frame number (SFN) (e.g., ranging from 0 to 1023). Each frame may include multiple consecutively numbered subframes or slots, and each subframe or slot may have the same duration. In some examples, a frame may be divided (e.g., in the time domain) into subframes, and each subframe may be further divided into a number of slots. Alternatively, each frame may include a variable number of slots, and the number of slots may depend on subcarrier spacing. Each slot may include a number of symbol periods (e.g., depending on the length of the cyclic prefix prepended to each symbol period). In some wireless communications systems100, a slot may further be divided into multiple mini-slots containing one or more symbols. Excluding the cyclic prefix, each symbol period may contain one or more (e.g., Nf) sampling periods. The duration of a symbol period may depend on the subcarrier spacing or frequency band of operation. A subframe, a slot, a mini-slot, or a symbol may be the smallest scheduling unit (e.g., in the time domain) of the wireless communications system100and may be referred to as a transmission time interval (TTI). In some examples, the TTI duration (e.g., the number of symbol periods in a TTI) may be variable.
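By way of illustration, and not limitation, the basic time unit Ts = 1/(Δfmax·Nf) defined above may be evaluated with assumed example values; the particular numbers below (Δfmax = 480 kHz and Nf = 4096) are illustrative assumptions and are not specified by this disclosure.

dfmax = 480e3   # assumed maximum supported subcarrier spacing, in Hz
Nf = 4096       # assumed maximum supported DFT size

Ts = 1.0 / (dfmax * Nf)    # basic time unit, in seconds
print(f"Ts = {Ts:.3e} s")  # prints Ts = 5.086e-10 s (about 0.509 ns)

Other assumed subcarrier spacings and DFT sizes yield correspondingly different sampling periods.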
Additionally or alternatively, the smallest scheduling unit of the wireless communications system100may be dynamically selected (e.g., in bursts of shortened TTIs (sTTIs)). Physical channels may be multiplexed on a carrier according to various techniques. A physical control channel and a physical data channel may be multiplexed on a downlink carrier, for example, using one or more of time division multiplexing (TDM) techniques, frequency division multiplexing (FDM) techniques, or hybrid TDM-FDM techniques. A control region (e.g., a control resource set (CORESET)) for a physical control channel may be defined by a number of symbol periods and may extend across the system bandwidth or a subset of the system bandwidth of the carrier. One or more control regions (e.g., CORESETs) may be configured for a set of the UEs115. For example, one or more of the UEs115may monitor or search control regions for control information according to one or more search space sets, and each search space set may include one or multiple control channel candidates in one or more aggregation levels arranged in a cascaded manner. An aggregation level for a control channel candidate may refer to a number of control channel resources (e.g., control channel elements (CCEs)) associated with encoded information for a control information format having a given payload size. Search space sets may include common search space sets configured for sending control information to multiple UEs115and UE-specific search space sets for sending control information to a specific UE115. In some examples, a base station105may be movable and therefore provide communication coverage for a moving geographic coverage area110. In some examples, different geographic coverage areas110associated with different technologies may overlap, but the different geographic coverage areas110may be supported by the same base station105. In other examples, the overlapping geographic coverage areas110associated with different technologies may be supported by different base stations105. The wireless communications system100may include, for example, a heterogeneous network in which different types of the base stations105provide coverage for various geographic coverage areas110using the same or different radio access technologies. The wireless communications system100may be configured to support ultra-reliable communications or low-latency communications, or various combinations thereof. For example, the wireless communications system100may be configured to support ultra-reliable low-latency communications (URLLC). The UEs115may be designed to support ultra-reliable, low-latency, or critical functions. Ultra-reliable communications may include private communication or group communication and may be supported by one or more services such as push-to-talk, video, or data. Support for ultra-reliable, low-latency functions may include prioritization of services, and such services may be used for public safety or general commercial applications. The terms ultra-reliable, low-latency, and ultra-reliable low-latency may be used interchangeably herein. In some examples, a UE115may also be able to communicate directly with other UEs115over a device-to-device (D2D) communication link135(e.g., using a peer-to-peer (P2P) or D2D protocol). One or more UEs115utilizing D2D communications may be within the geographic coverage area110of a base station105. 
Other UEs115in such a group may be outside the geographic coverage area110of a base station105or be otherwise unable to receive transmissions from a base station105. In some examples, groups of the UEs115communicating via D2D communications may utilize a one-to-many (1:M) system in which each UE115transmits to every other UE115in the group. In some examples, a base station105facilitates the scheduling of resources for D2D communications. In other cases, D2D communications are carried out between the UEs115without the involvement of a base station105. In some systems, the D2D communication link135may be an example of a communication channel, such as a sidelink communication channel, between vehicles (e.g., UEs115). In some examples, vehicles may communicate using vehicle-to-everything (V2X) communications, vehicle-to-vehicle (V2V) communications, or some combination of these. A vehicle may signal information related to traffic conditions, signal scheduling, weather, safety, emergencies, or any other information relevant to a V2X system. In some examples, vehicles in a V2X system may communicate with roadside infrastructure, such as roadside units, or with the network via one or more network nodes (e.g., base stations105) using vehicle-to-network (V2N) communications, or with both. The core network130may provide user authentication, access authorization, tracking, Internet Protocol (IP) connectivity, and other access, routing, or mobility functions. The core network130may be an evolved packet core (EPC) or 5G core (5GC), which may include at least one control plane entity that manages access and mobility (e.g., a mobility management entity (MME), an access and mobility management function (AMF)) and at least one user plane entity that routes packets or interconnects to external networks (e.g., a serving gateway (S-GW), a Packet Data Network (PDN) gateway (P-GW), or a user plane function (UPF)). The control plane entity may manage non-access stratum (NAS) functions such as mobility, authentication, and bearer management for the UEs115served by the base stations105associated with the core network130. User IP packets may be transferred through the user plane entity, which may provide IP address allocation as well as other functions. The user plane entity may be connected to IP services150for one or more network operators. The IP services150may include access to the Internet, Intranet(s), an IP Multimedia Subsystem (IMS), or a Packet-Switched Streaming Service. Some of the network devices, such as a base station105, may include subcomponents such as an access network entity140, which may be an example of an access node controller (ANC). Each access network entity140may communicate with the UEs115through one or more other access network transmission entities145, which may be referred to as radio heads, smart radio heads, or transmission/reception points (TRPs). Each access network transmission entity145may include one or more antenna panels. In some configurations, various functions of each access network entity140or base station105may be distributed across various network devices (e.g., radio heads and ANCs) or consolidated into a single network device (e.g., a base station105). The wireless communications system100may operate using one or more frequency bands, typically in the range of 300 megahertz (MHz) to 300 gigahertz (GHz). 
Generally, the region from 300 MHz to 3 GHz is known as the ultra-high frequency (UHF) region or decimeter band because the wavelengths range from approximately one decimeter to one meter in length. The UHF waves may be blocked or redirected by buildings and environmental features, but the waves may penetrate structures sufficiently for a macro cell to provide service to the UEs115located indoors. The transmission of UHF waves may be associated with smaller antennas and shorter ranges (e.g., less than 100 kilometers) compared to transmission using the smaller frequencies and longer waves of the high frequency (HF) or very high frequency (VHF) portion of the spectrum below 300 MHz. The wireless communications system100may also operate in a super high frequency (SHF) region using frequency bands from 3 GHz to 30 GHz, also known as the centimeter band, or in an extremely high frequency (EHF) region of the spectrum (e.g., from 30 GHz to 300 GHz), also known as the millimeter band. In some examples, the wireless communications system100may support millimeter wave (mmW) communications between the UEs115and the base stations105, and EHF antennas of the respective devices may be smaller and more closely spaced than UHF antennas. In some examples, this may facilitate use of antenna arrays within a device. The propagation of EHF transmissions, however, may be subject to even greater atmospheric attenuation and shorter range than SHF or UHF transmissions. The techniques disclosed herein may be employed across transmissions that use one or more different frequency regions, and designated use of bands across these frequency regions may differ by country or regulating body. The wireless communications system100may utilize both licensed and unlicensed radio frequency spectrum bands. For example, the wireless communications system100may employ License Assisted Access (LAA), LTE-Unlicensed (LTE-U) radio access technology, or NR technology in an unlicensed band such as the 5 GHz industrial, scientific, and medical (ISM) band. When operating in unlicensed radio frequency spectrum bands, devices such as the base stations105and the UEs115may employ carrier sensing for collision detection and avoidance. In some examples, operations in unlicensed bands may be based on a carrier aggregation configuration in conjunction with component carriers operating in a licensed band (e.g., LAA). Operations in unlicensed spectrum may include downlink transmissions, uplink transmissions, P2P transmissions, or D2D transmissions, among other examples. A base station105or a UE115may be equipped with multiple antennas, which may be used to employ techniques such as transmit diversity, receive diversity, multiple-input multiple-output (MIMO) communications, or beamforming. The antennas of a base station105or a UE115may be located within one or more antenna arrays or antenna panels, which may support MIMO operations or transmit or receive beamforming. For example, one or more base station antennas or antenna arrays may be co-located at an antenna assembly, such as an antenna tower. In some examples, antennas or antenna arrays associated with a base station105may be located in diverse geographic locations. A base station105may have an antenna array with a number of rows and columns of antenna ports that the base station105may use to support beamforming of communications with a UE115. Likewise, a UE115may have one or more antenna arrays that may support various MIMO or beamforming operations. 
Additionally or alternatively, an antenna panel may support radio frequency beamforming for a signal transmitted via an antenna port. The base stations105or the UEs115may use MIMO communications to exploit multipath signal propagation and increase the spectral efficiency by transmitting or receiving multiple signals via different spatial layers. Such techniques may be referred to as spatial multiplexing. The multiple signals may, for example, be transmitted by the transmitting device via different antennas or different combinations of antennas. Likewise, the multiple signals may be received by the receiving device via different antennas or different combinations of antennas. Each of the multiple signals may be referred to as a separate spatial stream and may carry bits associated with the same data stream (e.g., the same codeword) or different data streams (e.g., different codewords). Different spatial layers may be associated with different antenna ports used for channel measurement and reporting. MIMO techniques include single-user MIMO (SU-MIMO), where multiple spatial layers are transmitted to the same receiving device, and multiple-user MIMO (MU-MIMO), where multiple spatial layers are transmitted to multiple devices. Beamforming, which may also be referred to as spatial filtering, directional transmission, or directional reception, is a signal processing technique that may be used at a transmitting device or a receiving device (e.g., a base station105, a UE115) to shape or steer an antenna beam (e.g., a transmit beam, a receive beam) along a spatial path between the transmitting device and the receiving device. Beamforming may be achieved by combining the signals communicated via antenna elements of an antenna array such that some signals propagating at particular orientations with respect to an antenna array experience constructive interference while others experience destructive interference. The adjustment of signals communicated via the antenna elements may include a transmitting device or a receiving device applying amplitude offsets, phase offsets, or both to signals carried via the antenna elements associated with the device. The adjustments associated with each of the antenna elements may be defined by a beamforming weight set associated with a particular orientation (e.g., with respect to the antenna array of the transmitting device or receiving device, or with respect to some other orientation). A base station105or a UE115may use beam sweeping techniques as part of beam forming operations. For example, a base station105may use multiple antennas or antenna arrays (e.g., antenna panels) to conduct beamforming operations for directional communications with a UE115. Some signals (e.g., synchronization signals, reference signals, beam selection signals, or other control signals) may be transmitted by a base station105multiple times in different directions. For example, the base station105may transmit a signal according to different beamforming weight sets associated with different directions of transmission. Transmissions in different beam directions may be used to identify (e.g., by a transmitting device, such as a base station105, or by a receiving device, such as a UE115) a beam direction for later transmission or reception by the base station105. Some signals, such as data signals associated with a particular receiving device, may be transmitted by a base station105in a single beam direction (e.g., a direction associated with the receiving device, such as a UE115). 
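By way of illustration, and not limitation, a beamforming weight set of the kind described above may be sketched for a uniform linear array as follows; the element count, the half-wavelength element spacing, and the function name steering_weights are illustrative assumptions rather than details from this disclosure.

import numpy as np

def steering_weights(num_elements, theta_deg, spacing_wavelengths=0.5):
    # Per-element phase offsets that steer a beam toward theta_deg for a
    # uniform linear array; the weights are normalized to unit power.
    n = np.arange(num_elements)
    phase = -2j * np.pi * spacing_wavelengths * n * np.sin(np.radians(theta_deg))
    return np.exp(phase) / np.sqrt(num_elements)

# Signals combined with these weights add constructively toward +20 degrees,
# while signals toward other orientations may experience destructive
# interference, consistent with the description above.
w = steering_weights(num_elements=8, theta_deg=20.0)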
In some examples, the beam direction associated with transmissions along a single beam direction may be determined based on a signal that was transmitted in one or more beam directions. For example, a UE115may receive one or more of the signals transmitted by the base station105in different directions and may report to the base station105an indication of the signal that the UE115received with a highest signal quality or an otherwise acceptable signal quality. In some examples, transmissions by a device (e.g., by a base station105or a UE115) may be performed using multiple beam directions, and the device may use a combination of digital precoding or radio frequency beamforming to generate a combined beam for transmission (e.g., from a base station105to a UE115). The UE115may report feedback that indicates precoding weights for one or more beam directions, and the feedback may correspond to a configured number of beams across a system bandwidth or one or more sub-bands. The base station105may transmit a reference signal (e.g., a cell-specific reference signal (CRS), a channel state information reference signal (CSI-RS)), which may be precoded or unprecoded. The UE115may provide feedback for beam selection, which may be a precoding matrix indicator (PMI) or codebook-based feedback (e.g., a multi-panel type codebook, a linear combination type codebook, a port selection type codebook). Although these techniques are described with reference to signals transmitted in one or more directions by a base station105, a UE115may employ similar techniques for transmitting signals multiple times in different directions (e.g., for identifying a beam direction for subsequent transmission or reception by the UE115) or for transmitting a signal in a single direction (e.g., for transmitting data to a receiving device). A receiving device (e.g., a UE115) may try multiple receive configurations (e.g., directional listening) when receiving various signals from the base station105, such as synchronization signals, reference signals, beam selection signals, or other control signals. For example, a receiving device may try multiple receive directions by receiving via different antenna subarrays, by processing received signals according to different antenna subarrays, by receiving according to different receive beamforming weight sets (e.g., different directional listening weight sets) applied to signals received at multiple antenna elements of an antenna array, or by processing received signals according to different receive beamforming weight sets applied to signals received at multiple antenna elements of an antenna array, any of which may be referred to as “listening” according to different receive configurations or receive directions. In some examples, a receiving device may use a single receive configuration to receive along a single beam direction (e.g., when receiving a data signal). The single receive configuration may be aligned in a beam direction determined based on listening according to different receive configuration directions (e.g., a beam direction determined to have a highest signal strength, highest signal-to-noise ratio (SNR), or otherwise acceptable signal quality based on listening according to multiple beam directions). Wireless communications system100may utilize relatively high frequency bands (e.g., millimeter wave (mmWave) and sub-THz frequencies) to offer an abundance of shared (e.g., unlicensed) radio frequency spectrum bands to one or more UEs115. 
In such cases, transmission and reception over the aforementioned bands may be directional, which may result in an interference-limited wireless environment (e.g., due to the directional nature of the beams, occurrences of interference may be decreased). Depending on an operating scenario associated with some frequency bands (e.g., relatively high frequency bands), LBT and/or LT sensing by a UE115may be avoided, since the directional nature of the beams may reduce the occurrence of interference. In some examples, to resolve potential beam collisions, LBT and LT sensing may be combined with other coexistence methods (e.g., utilizing a narrow beam). Wireless communications system100may support narrow beam-based channel access and associated channel access criteria that are based on statistics of EIRP-based measurements captured in beam-based full-spherical measurements when the UE115sends K spatial streams. The relative narrowness of directional beams may be defined in the context of interference (e.g., instead of a geometric context). Aspects of the disclosure may apply to both uplink and downlink channel access, and the definition of a narrow beam may apply to both transmitters and receivers (e.g., the base station105and/or the UE115) in the wireless communications system100. In some cases, aspects of the disclosure may provide techniques for determining whether a UE115performs LBT in various radio frequency spectrum bands (e.g., a 60 GHz radio frequency spectrum band in accordance with European Telecommunications Standards Institute (ETSI) regulatory requirements, among other radio frequency spectrum bands) in one or more operating modes. For example, operating mode C1may correspond to a defined, relatively less restrictive (e.g., by device) requirement. Operating mode C1may be applicable to both mobile and fixed devices, where each device performs LBT to initiate communications with another device. As another example, operating mode C2may correspond to relatively new and evolving standards applicable to mobile and fixed devices. Notably, in operating mode C2, LBT may be skipped at either the transmitting side or the receiving side, subject to a minimum antenna gain requirement. However, some mitigation techniques may be leveraged in the absence of sufficient antenna gain. As yet another example, operating mode C3may correspond to an evolving standard applicable to fixed networks (e.g., backhaul). In some cases, C3may implement automatic transmit power control and link adaptation, which may impact frequency range 2 (FR2) integrated access and backhaul (IAB). Aspects of the present disclosure, while generally applicable to a multitude of operating modes corresponding to wireless communications, may be especially suited for operating mode C2, where the definition of narrow beams, coupled with one or more metrics, may allow one or more UEs115to perform channel access without performing LBT, LT sensing, or both.
While signaling operations may be discussed below as being performed by particular wireless devices, it is important to note that the operations, techniques, and computations may be performed by any number of wireless devices, as well as by wireless devices different from those discussed below. The wireless device205and the UEs115-a,115-b,115-c,115-d, and115-emay use relatively narrow beams for channel access in accordance with aspects of the present disclosure. Specifically, the wireless device205and the UEs115-a,115-b,115-c,115-d, and115-emay specify metrics and criteria for narrow beam channel access based on a number of spatial streams and EIRP measurements (e.g., obtained from one or more spherical measurement tests). In some examples, metrics and criteria for narrow beam channel access may be obtained for a node (e.g., the wireless device205) sending one spatial stream (e.g., no additional digital precoding). In other examples, metrics and criteria for narrow beam channel access may be obtained for a node sending more than one spatial stream (e.g., including digital precoding). By using the aforementioned metrics and criteria, devices in the wireless communications system200may reduce interference at unintended receivers when a transmitter (e.g., a base station105, a UE115) accesses channels without performing LBT and sends one or more spatial streams within a directional transmission. For example,FIG.2illustrates the wireless device205performing one or more directional transmissions to the UEs115-a,115-b,115-c,115-d, and115-e. The wireless device205may determine to transmit information over one or more spatial streams to the UE115-band the UE115-dvia directional transmissions utilizing a shared radio frequency spectrum band. In some cases, communications in the shared radio frequency spectrum band used by the wireless device205, the UE115-b, and the UE115-dmay exclude channel access procedures (e.g., based on a number of spatial streams associated with a directional beam). Additionally, the wireless device205may determine to transmit using narrow beam definitions based on one or more metrics and criteria in accordance with aspects of the present disclosure. The wireless device205may obtain one or more measurements (e.g., EIRP measurements) and metrics to determine that a directional beam215is sufficiently narrow to transmit information to the UE115-bwithout interfering with nearby UEs115(e.g., UE115-aand UE115-c). That is, the wireless device205may transmit the directional beam215such that the directional beam215(or a sidelobe210and a sidelobe220) may not interfere with the UEs115-aand115-c. In some examples, the wireless device205may utilize different spatial streams within the directional beam215to perform single user multiple-input multiple-output (e.g., for SU-MIMO) with the UE115-b. For instance, there may be two different spatial streams within the same directional beam215(e.g., for polarization MIMO). The wireless device205may additionally or alternatively perform directional transmissions to the UE115-d. The wireless device205may perform similar measurements, along with utilizing similar metrics and thresholds, to determine if a directional beam225is a narrow beam such that the directional beam225may not interfere with the UE115-e, for example, when the wireless device skips channel access procedures. In some examples, the wireless device205may utilize different spatial streams on respective beams (e.g., for SU-MIMO) to support communications with both the UE115-dand the UE115-b.
That is, different directional beams (e.g., directional beam215, directional beam225) may each include a single spatial stream. In other cases, the wireless device205may utilize different spatial streams within the same beam (e.g., for MU-MIMO) to communicate with multiple UEs115(e.g., the UE115-dand the UE115-b). In some cases, the wireless device205may utilize different spatial streams within multiple directional beams (e.g., for MU-MIMO) to communicate with the UE115-dand the UE115-b. In such examples, one directional beam may include a first spatial stream, a second directional beam may include two or more spatial streams, and a third directional beam may include one or more spatial streams. In some cases, the metrics and criteria utilized by the wireless device205to determine if a directional beam (e.g., directional beam215, directional beam225) is a narrow beam may be based on transmissions with a single spatial stream or multiple spatial streams. For example, the wireless device205may utilize a spherical measurement test with analog beams utilizing a single spatial stream, where metrics are obtained based on a beam and corresponding EIRP values. In other examples, such as where the wireless device205uses multiple spatial streams (e.g., K spatial streams), the wireless device205may perform a similar spherical measurement test to obtain a set of EIRP values while also accounting for precoding parameters (e.g., for a set of reference precoders) corresponding to the K spatial streams. That is, the wireless device205may derive metrics based on a directional beam for multiple different directions (e.g., respective directions having an azimuth and elevation), where a spherical measurement test may result in recording corresponding sets of EIRP values for each directional beam. Based on the measurements and metrics obtained for the wireless device205, the wireless device205may compare some metrics to one or more threshold values (e.g., EIRP threshold values, spatial stream thresholds) based on a number of spatial streams associated with a directional beam to determine whether one or more directional beams are a narrow beam (e.g., relative to some amount of interference generated by transmissions using that beam). In some cases, when a directional beam is determined to be a narrow beam in accordance with the aspects described herein, the wireless device205may perform directional communication with one or more of the UEs115-a,115-b,115-c,115-d, or115-ewithout performing channel access procedures (e.g., in a shared radio frequency spectrum band). FIG.3illustrates an example of a CDF plot300that supports directional channel access using a narrow beam with multiple spatial streams in accordance with aspects of the present disclosure. The CDF plot300may be used by one or more wireless devices, such as a base station and a UE, which may be examples of a base station105and a UE115as described with reference toFIG.1. In some examples, the CDF plot300may be associated with one or more operations, signals, and procedures associated with a wireless device, such as a wireless device205described with reference toFIG.2. For example, the CDF plot300may be based on one or more spherical measurement tests and a measured set of EIRP values for respective directional beams.
In some examples, such as those described by the present disclosure, a wireless device (e.g., a base station, a UE) may perform one or more measurements and determine one or more metrics in support of performing directional communications such that a directional beam carrying a number of spatial streams is as interference-limited as possible. For example, the wireless device may use a spherical measurement test to record various measurements for one or more directional beams. In some examples, the spherical measurement test may be used to ensure composite beam patterns associated with one or more beams are as wide as possible. In other examples, however, the spherical measurement test may allow definitions for narrow beam criteria (e.g., on a per-beam basis). Here, the spherical measurement test may ensure tested beam patterns, along with any potential analog beamforming with a single spatial stream, are as interference-limited as possible. In some examples of the spherical measurement test performed on a per-beam basis, for each beam j in B, where B is a set of analog beams (e.g., formed using an analog beamforming codebook) corresponding to a wireless device (e.g., a UE), a spherical measurement test may be performed to obtain measurements, which may in turn be used to determine a narrowness of a beam in accordance with aspects of the present disclosure (e.g., comparing to one or more thresholds). In some cases, B may correspond to an input to the spherical measurement test. For example, the wireless device may be configured to form a directional beam j (e.g., from a codebook, setting of delay lines, etc.) and use one spatial stream for the spherical measurement test. The spherical measurement test may be conducted, for example, in a far field region to record EIRPs in possible spherical directions, Ej ← EIRP(ϕi, θi), where Ej is the set of EIRPs recorded for beam j. The granularity of EIRP measurements recorded for each beam may be in terms of angular azimuth and elevation (e.g., δϕ, δθ). In some cases, directions of measurements may be decided based on regulatory requirements. Additionally or alternatively, δϕ, δθ may be uniform or non-uniform, for example, based on regulatory requirements. Each EIRP measurement obtained during the test may be annotated with an azimuth ϕi and elevation θi. Such measurements, along with associated annotations, may be collected in a set of EIRP measurements (e.g., represented by a matrix Ē), where the entries of Ē may correspond to directional measurements associated with each beam j. For example, Ē may be expressed as follows with respect to Equation 1,

Ē = {E1, . . . , Ej, . . . , EB}  (1)

where B is the set of analog beams for which measurements are taken to classify a narrowness associated with each beam j in B. Additionally, determining narrowness of a beam with a single spatial stream may be transparent to precoder conditions. As such, the determination of a beam narrowness may depend on a given beam j and associated EIRP values. To elucidate one or more aspects of the disclosure,FIG.3serves as an illustrative example in which a narrow beam is used for directional transmissions without performing channel access procedures (e.g., LBT, LT sensing). The wireless device may configure different analog beams and conduct spherical measurement tests to characterize radiation patterns associated with the analog beams. Additionally, the measurements may be performed within the context of a maximum transmit power allowed by a regulator (e.g., Pmax).
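To make the per-beam measurement procedure concrete, the following is a minimal sketch of how the annotated EIRP sets Ej might be collected over a spherical grid. The probe function measure_eirp, the 5-degree granularity, and the dictionary layout are illustrative assumptions rather than part of the disclosure; a real test would follow the regulator-specified directions and granularity.

```python
import numpy as np

def spherical_measurement_test(beams, measure_eirp, d_phi=5.0, d_theta=5.0):
    """Collect the annotated EIRP set E_j for each analog beam j in B.

    beams          -- iterable of beam identifiers (the set B)
    measure_eirp   -- hypothetical far-field probe:
                      (beam, azimuth_deg, elevation_deg) -> EIRP in dBm
    d_phi, d_theta -- angular granularity; assumed uniform here, although
                      the granularity may be non-uniform per regulation
    """
    azimuths = np.arange(0.0, 360.0, d_phi)                 # phi_i grid
    elevations = np.arange(-90.0, 90.0 + d_theta, d_theta)  # theta_i grid
    E = {}
    for j in beams:
        # Each recorded EIRP is annotated with its azimuth and elevation.
        E[j] = [(phi, theta, measure_eirp(j, phi, theta))
                for phi in azimuths for theta in elevations]
    return E  # corresponds to E-bar = {E1, ..., EB} in Equation 1
```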
The measurements may be performed to obtain a set of EIRPs (e.g., Ē = {E1, . . . , Ej, . . . , EN}) recorded for N different analog beams (i.e., beam 1, . . . , beam N). Based on the recorded measurements, multiple (e.g., two) metrics may be obtained, where the metrics may be used to determine a narrowness of a beam within the context of interference (e.g., by comparing the metrics to one or more thresholds). For example, a first metric, Mj,1, may be associated with a difference in EIRP values corresponding to two different percentiles (e.g., k1, k2) of EIRP measurements for beam j. In one example, Mj,1 may be expressed as follows, with reference to Equation 2,

Mj,1 = k1 percentile({EIRPi − b : i ∈ Ej}) − k2 percentile({EIRPi − b : i ∈ Ej})  (2)

where Ej may be the set of EIRPs captured in spherical measurements for analog beam j, k2 < k1 (e.g., a special case may correspond to k1 = 100 (the peak value)), and b may be a constant corresponding to Pmax or another constant value. In some examples, the largest value of Mj,1 (e.g., the largest EIRP projection value) may correspond to the narrowest beam in the set of beams. As an example, the Mj,1 metrics may be found via the CDF plot300(e.g., representing a CDF vs. EIRP-b plot) to determine the narrowest beam (e.g., in terms of interference associated with that beam), where the EIRP-b may correspond to the dB gain of an antenna array or a device antenna array. For example, the CDF plot300illustrates CDFs as a function of EIRP-b for a first beam305(e.g., j=1), a second beam310(e.g., j=2), and a third beam315(e.g., j=3). For each beam j, a plot of CDF vs. EIRP-b may be constructed to determine a first metric, Mj,1. For example, the metric M1,1320may be obtained by finding the projection of the first beam305(e.g., j=1) at the k1 percentile (e.g., a range corresponding to an EIRP-b value associated with a kth percentile of all recorded EIRP values for a beam) subtracted by the projection of the first beam305on the k2 percentile. Similarly, M2,1325and M3,1330may be obtained by finding the difference in projections of the second beam310on the k1 and k2 percentiles and the third beam315on the k1 and k2 percentiles, respectively. Based on the metrics, it may be determined that the projection corresponding to M1,1320(e.g., 10 dB) is larger than M2,1325and M3,1330. The second beam310may correspond to high gain in some directions (e.g., relatively higher than the first beam305). The third beam315may correspond to moderate to high gain in most directions (e.g., a high possibility of interference). As such, the first beam305may be narrower than the second beam310and the third beam315. By utilizing the spherical measurement test, along with the first metric, the wireless device may determine that the first beam305is narrow enough to perform directional communications (e.g., by comparing the metric M1,1320with one or more thresholds) with a second device without performing channel access procedures. In some cases, a second metric, Mj,2, may be used to classify the narrowness of beams. For example, the second metric Mj,2may be defined as a discounted EIRP value that corresponds to a target percentile (e.g., k3) of EIRP measurements for beam j. Mj,2may be expressed as follows, with reference to Equation 3,

Mj,2 = k3 percentile({EIRPi − b : i ∈ Ej})  (3)

where Ej may be the set of EIRPs captured in spherical measurements for analog beam j and b may correspond to Pmax, among other constants. Put another way, the Mj,2 metrics may correspond to projections on a kth percentile (e.g., k3) of multiple EIRP measurements.
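As a rough sketch, Equations 2 and 3 reduce to percentile arithmetic over a beam's recorded EIRP set. The percentile choices below are illustrative assumptions; the disclosure only fixes k2 < k1 and b as a constant (e.g., Pmax).

```python
import numpy as np

def metric_m1(eirps_dbm, b, k1=100.0, k2=50.0):
    """M_{j,1} (Equation 2): k1-percentile minus k2-percentile of EIRP - b."""
    x = np.asarray(eirps_dbm) - b
    return np.percentile(x, k1) - np.percentile(x, k2)

def metric_m2(eirps_dbm, b, k3=50.0):
    """M_{j,2} (Equation 3): discounted EIRP at the target percentile k3."""
    return np.percentile(np.asarray(eirps_dbm) - b, k3)

# The narrowest beam has the largest M_{j,1} and the smallest M_{j,2}; e.g.,
# beams in E (from the earlier sketch) could be ranked by descending M_{j,1}:
# ranked = sorted(E, key=lambda j: metric_m1([e for _, _, e in E[j]], b),
#                 reverse=True)
```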
In this case, the smallest value of Mj,2 may correspond to the narrowest beam in the set of beams. For example, the metric M1,2335may be obtained by finding the projection of the first beam305on k3. Similarly, the metric M2,2340corresponding to the second beam310may be obtained by finding the projection of the second beam310on k3. Lastly, the metric M3,2345may be obtained by finding the projection of the third beam315on k3. As illustrated inFIG.3, the projection corresponding to M1,2335may be the smallest (e.g., compared to M2,2340and M3,2345), which may indicate that the first beam305is the narrowest beam. Based on the metrics, the wireless device may determine (e.g., by comparing M1,1320, or M1,2335, or both, with one or more thresholds) to use the first beam305to perform directional communications without performing channel access procedures. In some cases, the first metric may be used by the wireless device to determine whether or not to perform directional communications with more than one spatial stream without performing channel access procedures (e.g., based on beam narrowness). Additionally or alternatively, the second metric may be used to determine the narrowness of beams associated with the wireless device, which may enable the wireless device to determine whether to perform directional communications with more than one spatial stream without performing channel access procedures. In some examples, the wireless device may determine if the first metric meets a first condition, if the second metric meets a second condition, or both. If either metric meets an associated condition (e.g., or both meet the conditions), the wireless device may perform directional communications without performing channel access procedures. Otherwise, the wireless device may fall back on one or more channel access procedures (e.g., LBT, LT sensing). For example, the wireless device may use a first criterion, where the wireless device accesses a channel without performing LBT and sends l spatial streams within beam j if Mj,1 is greater than a first threshold (e.g., Xj(l)), or Mj,2 is less than a second threshold (e.g., Zj(l)). In some examples, the first threshold Xj(l) and the second threshold Zj(l) may depend on the number of spatial streams sent. Additionally or alternatively, there may exist different thresholds for different scenarios, which may influence how many spatial streams may be supported based on metrics obtained using a single spatial stream. In some examples, such as in a device-based power limitation scenario, thresholds may be relaxed when the number of spatial streams is increased (e.g., Xj(l1) < Xj(l2) and Zj(l1) > Zj(l2) when l1 > l2). In other examples, such as for antenna-port-based power limitation scenarios, thresholds may be made more stringent when the number of spatial streams is increased (e.g., Xj(l1) > Xj(l2) and Zj(l1) < Zj(l2) when l1 > l2). For example, the wireless device may determine that the first beam305is associated with M1,1320, where M1,1320has a value of 10 dB. Additionally, the device may be capable of transmitting up to 4 spatial streams. For a device-based power limitation scenario, one or more thresholds may be determined to be X1(1)=12 dB, X1(2)=11 dB, X1(3)=8 dB, and X1(4)=4 dB. Based on the determined thresholds, the device may access the channel without performing LBT or LT sensing with beam 1 if the device transmits 3 or more spatial streams.
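The first criterion thus reduces to a lookup of per-stream-count thresholds. Below is a small sketch of that check using the device-based example above (M1,1 = 10 dB, X1(l) = 12, 11, 8, 4 dB for l = 1 through 4); no numeric Zj(l) values are given in the text, so the second-metric check is left optional.

```python
def passes_first_criterion(m1, num_streams, x_thr, m2=None, z_thr=None):
    """First criterion: skip LBT for l streams on beam j if M_{j,1} > X_j(l),
    or, when a second-metric threshold is supplied, if M_{j,2} < Z_j(l)."""
    if m1 > x_thr[num_streams]:
        return True
    return m2 is not None and z_thr is not None and m2 < z_thr[num_streams]

# Device-based power limitation example from the text: thresholds relax as
# the stream count grows, so beam 1 (M_{1,1} = 10 dB) qualifies for l >= 3.
x1 = {1: 12.0, 2: 11.0, 3: 8.0, 4: 4.0}  # X_1(l) in dB, from the text
for l in (1, 2, 3, 4):
    print(l, passes_first_criterion(10.0, l, x1))  # False, False, True, True
```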
For an antenna-based power limitation scenario, where M1,1320has a value of 10 dB, one or more thresholds may be determined to be X1(1)=4 dB, X1(2)=8 dB, X1(3)=11 dB, and X1(4)=12 dB. In such cases, the device may access the channel without performing LBT or LT sensing with beam 1 if the device sends no more than 2 spatial streams. In some cases, the wireless device may use a second criterion for device-based power limitation scenarios. A device may access a channel without performing LBT and send l spatial streams within a beam j if the number of spatial streams (e.g., lj) is greater than a predefined threshold (e.g., lth,j). Otherwise, if the number of spatial streams lj is not greater than the predefined threshold lth,j, then the device accesses the channel without LBT using beam j based on the first criterion for device-based power limitation scenarios. For example, M1,1320may have a value of 10 dB and lth,1 may have a value of 2. The device may send up to 4 spatial streams, where power limitations are specified per device. If the device seeks to access a channel with beam 1 and send 3 spatial streams, then the device accesses the channel without LBT because l1=3 and l1>lth,1. If the device seeks to access the channel with beam 1 and send two spatial streams, then the device may check to determine if the first criterion is met since l1=2 and l1 is not greater than lth,1. In some other cases, the wireless device may use a third criterion for antenna port-based power limitation scenarios. A device may access a channel without performing LBT and send l spatial streams within a beam j if the number of spatial streams (e.g., lj) is smaller than a predefined threshold (e.g., lth,j). Otherwise, if the number of spatial streams lj is greater than the predefined threshold lth,j, then the device accesses the channel without LBT using beam j based on the first criterion for antenna port-based power limitation scenarios. For example, M1,1320may have a value of 10 dB and lth,1 may have a value of 3. A device may send up to four spatial streams, where power limitations are specified per antenna port. If the device seeks to access the channel with beam 1 and send four spatial streams, then the device may check the first criterion because l1=4, which is not smaller than lth,1. If the device seeks to access the channel with beam 1 and send two spatial streams, then the device may access the channel without LBT because l1=2 and l1<lth,1. FIG.4illustrates an example of a spherical measurement test400that supports directional channel access using a narrow beam with multiple spatial streams in accordance with aspects of the present disclosure. The spherical measurement test400may be implemented by one or more wireless devices, such as a base station and a UE115-f, which may be examples of a base station105and a UE115as described with reference toFIG.1. In some examples, the spherical measurement test400may include one or more operations and procedures associated with the base station105and the UE115-f, which may be examples of those discussed with reference toFIGS.2and3. While specific signaling operations, mathematical techniques, and metrics may be discussed below, the signaling operations between various devices may be performed in a different order than the example order shown, or the operations performed by the devices may be performed by different devices or at different times. In some examples, the spherical measurement test400may be performed with analog beams including K spatial streams (e.g., instead of a single spatial stream).
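A compact way to view the second and third criteria is as a stream-count gate with the first criterion as a fallback. The sketch below assumes the two scenarios are distinguished by a flag; the structure, names, and strict inequalities follow the worked examples above rather than any normative text.

```python
def may_skip_lbt(num_streams, l_th, power_limit, first_criterion_met):
    """Second/third criteria: gate on the stream count, falling back to the
    first (metric-based) criterion when the gate does not apply."""
    if power_limit == "device":          # second criterion: many streams pass
        return True if num_streams > l_th else first_criterion_met
    if power_limit == "antenna_port":    # third criterion: few streams pass
        return True if num_streams < l_th else first_criterion_met
    raise ValueError("unknown power limitation scenario")

# Examples from the text: l_th,1 = 2 (device-based), l_th,1 = 3 (antenna port-based).
assert may_skip_lbt(3, 2, "device", first_criterion_met=False)        # skip LBT
assert not may_skip_lbt(2, 2, "device", first_criterion_met=False)    # falls back
assert may_skip_lbt(2, 3, "antenna_port", first_criterion_met=False)  # skip LBT
```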
That is, for each beam j in B and precoder l (e.g., corresponding to a digital precoder410), where B is a set of analog beams (e.g., an analog beamforming codebook) corresponding to a UE and l∈{1, . . . , L} indexes a set of precoders, a spherical measurement test may be performed to obtain measurements, which may in turn be used to determine a narrowness of a beam in accordance with aspects of the present disclosure. In such examples, both B and L may correspond to inputs to the spherical measurement test. For example, the UE may be configured to form beam j (e.g., from a codebook, setting of delay lines, etc.) and send K spatial streams with precoder l. The spherical measurement test400may be conducted in a far field region to record EIRPs in possible spherical directions, Ej,l ← EIRP(ϕi, θi), where Ej,l is the set of EIRPs recorded for beam j and precoder l. The granularity of measurements may be in terms of angular azimuth and elevation (e.g., δϕ, δθ) on a surface of a sphere405. In some cases, directions of measurements may be decided based on regulatory requirements. Additionally or alternatively, δϕ, δθ may be uniform or non-uniform, depending on regulatory requirements. Each EIRP measurement may be annotated with its given azimuth ϕi and elevation θi. For example, the UE115-fmay transmit a beam415with an associated precoder index. The EIRP value may be obtained at a point420on the surface of the sphere405. Such measurements, along with associated annotations, may be collected in a matrix Ē, where the entries of Ē may correspond to directional measurements associated with beam j and precoder l. For example, Ē may be expressed as follows with respect to Equation 4,

Ē = {E1,1, . . . , Ej,l, . . . , EB,L}  (4)

In some cases, precoders used in previous EIRP tests can be picked from a reference list of precoders that is specified by regulators or standardization bodies (e.g., 3GPP). The options for constructing the reference list are numerous, but may be restricted to some baseline options for the sake of comparison and facilitating beam narrowness tests. For example, one option for a reference list may be composed by choosing an orthonormal matrix OKmax×Kmax(e.g., a Discrete Fourier Transform (DFT) matrix) and selecting K columns from O to represent a precoder, where Kmax may be the maximum number of antenna ports and K is the number of spatial streams intended for the EIRP spherical test. Following the column selection, the possible column combinations may be obtained (i.e., P = Kmax choose K). From the possible column combinations, a subset of possible combinations may be obtained such that the resultant precoder has a rank of K (i.e., P < Kmax choose K), where the rank of each precoder l is K. It should be noted that the digital precoder or configuration of sending multiple spatial streams in the spherical measurement test400may not be accounted for when the wireless device is using single spatial streams (e.g., as inFIG.3). In the spherical measurement test400, new metrics and criteria may be defined while specifying EIRP measurements for joint predefined reference digital precoder and analog beam configurations. The wireless device may configure a beam j and a predefined reference precoder l (e.g., or a number of spatial streams NSS) and conduct the spherical measurement test400to obtain EIRPs. The spherical measurement test400may be repeated for analog beam and precoder combinations. The measurements may be performed to obtain a set of EIRPs (e.g., Ē = {E1,1, . . . , EN,L}) recorded for N different analog beams (i.e., beam 1, . . . , beam N) and predefined precoder configurations using the spherical measurement test400, where Ej,l is a set of EIRPs for an analog beam j and predefined reference precoder l. Based on the recorded measurements, two metrics may be obtained, where the two metrics may be used to determine a narrowness of a beam within the context of interference. For example, a first metric Mj,l,1 may be defined as a difference in EIRP values corresponding to two different percentiles (e.g., k1, k2) of EIRP measurements for beam j and predefined reference precoder l. Mj,l,1 may be expressed as follows, with reference to Equation 5,

Mj,l,1 = k1 percentile({EIRPi − b : i ∈ Ej,l}) − k2 percentile({EIRPi − b : i ∈ Ej,l})  (5)

where Ej,l may be the set of EIRPs captured in spherical measurements for analog beam j and predefined reference precoder l, k2 < k1 (e.g., a special case may correspond to k1 = 100 (the peak value)), and b may correspond to Pmax, among other constants. In some cases, a second metric may be used to classify the narrowness of beams. For example, consider a second metric Mj,l,2 as a discounted EIRP value that corresponds to a target percentile (e.g., k3) of EIRP measurements for beam j and predefined reference precoder l. Mj,l,2 may be expressed as follows, with reference to Equation 6,

Mj,l,2 = k3 percentile({EIRPi − b : i ∈ Ej,l})  (6)

where Ej,l may correspond to a set of EIRPs captured in the spherical measurement test400for beam j and precoder l. In some cases, b may be fixed (e.g., correspond to Pmax). The spherical measurement test400may be used to generate a CDF plot (e.g., CDF vs. EIRP-b, similar to the CDF plot300) that also accounts for K spatial streams. Similar to the CDF plot300, one or more metrics may be obtained by projecting the CDF of a given beam and a given precoder on a percentile (e.g., k1, k2, k3). For example, a first beam with a first precoder may be projected onto k1 and k2 to obtain a first metric M1,1,1, which may correspond to the first metric described above. Similarly, the first beam with a second precoder may be projected onto k1 and k2 to obtain a second metric M1,2,1. In some cases, M1,1,1 may be larger than M1,2,1. In such a case, the larger value may indicate the first beam with the first precoder is narrower than the first beam with the second precoder. While M1,1,1 may correspond to a relatively narrower beam than M1,2,1, the wireless device may use one or more criteria or thresholds to determine if a beam is narrow enough to perform directional communications over a channel without performing channel access procedures. For example, a device may use a first criterion to determine if a device passes a narrow beam condition for beam j with a predefined reference precoder l if Mj,l,1 is greater than a predefined threshold (e.g., Xj,l). The threshold Xj,l may be specified based on a device class, a regulatory requirement, or the like. In some examples, a wireless device may have multiple configurations (e.g., four configurations). For instance, the multiple configurations may include a first configuration of the first beam and a first precoder, a second configuration of the first beam and a second precoder, a third configuration of the second beam and the first precoder, and a fourth configuration of the second beam and the second precoder.
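As one illustration of the DFT-based option for constructing the reference precoder list described above, the sketch below enumerates K-column selections of a Kmax×Kmax unitary DFT matrix and keeps the rank-K results. This is a sketch of that one stated option only; actual reference lists would come from regulators or standardization bodies.

```python
import itertools
import numpy as np

def reference_precoders(k_max, k):
    """Build candidate reference precoders by selecting K columns of an
    orthonormal K_max x K_max matrix O (here a unitary DFT matrix)."""
    n = np.arange(k_max)
    O = np.exp(-2j * np.pi * np.outer(n, n) / k_max) / np.sqrt(k_max)
    precoders = []
    for cols in itertools.combinations(range(k_max), k):  # P <= (K_max choose K)
        p = O[:, list(cols)]
        if np.linalg.matrix_rank(p) == k:  # keep only rank-K precoders
            precoders.append(p)
    return precoders

# e.g., K_max = 4 antenna ports and K = 2 streams gives up to C(4, 2) = 6 precoders.
plist = reference_precoders(4, 2)
```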
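Under the same percentile assumptions as the single-stream sketch, the multi-stream metrics of Equations 5 and 6 differ only in that the EIRP sets are keyed by both beam and precoder. A minimal sketch, including the narrow beam condition check against a threshold Xj,l (with the optional Zj,l check of the second criterion described below):

```python
import numpy as np

def per_configuration_metrics(E, b, k1=100.0, k2=50.0, k3=50.0):
    """Compute (M_{j,l,1}, M_{j,l,2}) for each (analog beam j, precoder l).

    E maps (j, l) -> iterable of recorded EIRP values in dBm; b is the
    constant discount (e.g., P_max). Percentile choices are illustrative.
    """
    metrics = {}
    for (j, l), eirps in E.items():
        x = np.asarray(list(eirps), dtype=float) - b
        m1 = np.percentile(x, k1) - np.percentile(x, k2)  # Equation 5
        m2 = np.percentile(x, k3)                          # Equation 6
        metrics[(j, l)] = (m1, m2)
    return metrics

def passes_narrow_beam(m1, x_jl, m2=None, z_jl=None):
    """Narrow beam condition for (j, l): M_{j,l,1} > X_{j,l}, or optionally
    M_{j,l,2} < Z_{j,l}."""
    return m1 > x_jl or (m2 is not None and z_jl is not None and m2 < z_jl)
```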
Based on the metrics and the one or more thresholds, the first configuration may be associated with the narrowest beam (e.g., having a relatively smallest amount of interference), whereas the second configuration may be associated with a relatively less narrow beam (e.g., compared to the first configuration), followed by the third and fourth configurations. However, other orderings of configurations may be possible, for example, based on the metrics and thresholds used for determining whether a beam may be defined as a narrow beam. In other examples, the device may use a second criterion, where a device passes the narrow beam condition for beam j with a predefined reference precoder l if Mj,l,1 is greater than a predefined threshold (e.g., Xj,l) or Mj,l,2 is less than a threshold (e.g., Zj,l). The thresholds Xj,l and Zj,l may also be specified based on a device class, a regulatory requirement, and the like. Based on the device passing one or more criteria, the device may access a channel without performing channel access procedures. FIG.5illustrates an example of a process flow500in a system that supports directional channel access using a narrow beam with multiple spatial streams in accordance with aspects of the present disclosure. The process flow500may be implemented by one or more wireless devices, such as a first wireless device505and a second wireless device510. In some examples, the first wireless device505, the second wireless device510, or both, may be an example of a base station or a UE, which may be examples of a base station105and a UE115as described with reference toFIG.1. In some examples, the process flow500may include one or more operations and procedures associated with the first wireless device505and the second wireless device510, which may be examples of those discussed with reference toFIGS.2-4. While specific signaling operations, mathematical techniques, and metrics may be discussed below, the signaling operations between various devices may be performed in a different order than the example order shown, or the operations performed by the devices may be performed by different devices or at different times. For instance, the operations performed by the first wireless device505may be additionally or alternatively performed by the second wireless device510, and vice versa. At515, the first wireless device505may select a directional beam from a set of one or more directional beams for transmitting a message to the second wireless device510in a shared radio frequency spectrum band (e.g., in an unlicensed radio frequency spectrum band). In some examples, one or both of the first wireless device505or the second wireless device510may access a channel in the shared radio frequency band by skipping channel access procedures. For instance, using the techniques described herein, the first wireless device505and the second wireless device510may communicate with each other without using some defined channel access procedures, such as LBT, LT sensing, or the like, based on a directional beam that qualifies as a narrow beam (e.g., based on a definition of a narrow beam associated with a level of interference). At520, the first wireless device505may determine a quantity of spatial streams associated with the selected directional beam.
For instance, there may be one or more different spatial streams within the same directional beam (e.g., in accordance with SU-MIMO), or there may be multiple different spatial streams within the same directional beam (e.g., in accordance with MU-MIMO in cases where the first wireless device505is communicating with multiple other wireless devices including the second wireless device510). In other examples, the selected directional beam may include a first spatial stream and another, different directional beam may include a second spatial stream. In some cases, the first wireless device505may select multiple directional beams, where each directional beam may include one or more spatial streams. In any case, the number of spatial streams identified by the first wireless device505may be used for determining whether the first wireless device505may communicate with the second wireless device510in the shared radio frequency spectrum band without using channel access procedures. For example, at525, the first wireless device505may determine one or more threshold values (e.g., threshold EIRP values, a threshold number of spatial streams) for transmitting the message without performing channel access procedures. Here, the first wireless device505may use the number of spatial streams, as well as a power limitation scenario (e.g., device-based, antenna port-based), to determine whether one or more thresholds are satisfied. In some examples, one or more thresholds may be based on the number of spatial streams. In an illustrative example, the first wireless device505may determine that two spatial streams are used for communicating with the second wireless device510over the directional beam and a threshold value (e.g., from a set of threshold values) corresponding to the two spatial streams may be satisfied, thereby enabling the first wireless device505to skip channel access procedures before transmitting (e.g., the directional beam may be a “narrow beam”). In another example, if the determined number of spatial streams satisfies some threshold number of spatial streams, the first wireless device505may skip channel access procedures before transmitting based on the satisfied threshold. In some examples, if one threshold is not satisfied based on the number of spatial streams, the first wireless device505may fall back to another threshold to determine whether the directional beam is a narrow beam that is associated with skipping channel access procedures. As an example, if the number of spatial streams does not satisfy a spatial stream threshold, an EIRP threshold may be used (e.g., based on a power limitation scenario) to determine that channel access procedures may be skipped. Thus, at530, the first wireless device505may determine whether a channel access procedure may be skipped or used for communications (e.g., transmitting/receiving messages) using the directional beam in the shared radio frequency spectrum band. Specifically, based on the thresholds, the first wireless device505may determine not to perform channel access procedures by leveraging a narrowness associated with the directional beam. In such cases, if the directional beam is determined to be a “narrow beam,” then the channel access procedures may be skipped. In some cases, a spherical measurement test may be used to determine one or more metrics, and the one or more thresholds may be based on the determined metrics.
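Putting the pieces together, the determination at 525 and 530 can be read as a short decision routine. The sketch below assumes the device-based scenario and reuses the illustrative values from the earlier sketches; it is one plausible arrangement of the described checks, not a normative procedure.

```python
def channel_access_decision(m1, num_streams, x_thr, stream_thr):
    """Device-based sketch of steps 525-530: skip channel access when either
    the spatial-stream threshold or the per-stream EIRP threshold is met."""
    if num_streams > stream_thr:        # spatial-stream threshold satisfied
        return "skip_channel_access"
    if m1 > x_thr[num_streams]:         # fall back to the EIRP metric check
        return "skip_channel_access"
    return "perform_lbt"                # otherwise perform LBT/LT sensing

# With M_{1,1} = 10 dB, X_1 = {1: 12, 2: 11, 3: 8, 4: 4} dB, l_th = 2:
x1 = {1: 12.0, 2: 11.0, 3: 8.0, 4: 4.0}
print(channel_access_decision(10.0, 2, x1, 2))  # perform_lbt (10 <= 11)
print(channel_access_decision(10.0, 3, x1, 2))  # skip_channel_access
```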
In such cases, the spherical measurement test may be used to record sets of EIRP values for each directional beam of a set of directional beams supported by the first wireless device505, which may be based on one or more precoders (e.g., a predetermined precoder, a set of reference precoders, or the like). In some aspects, the second wireless device510may perform one or more procedures that are similar or complementary to those described with reference to the first wireless device505. For example, the second wireless device510may select a directional beam (e.g., a receive beam) from a set of one or more directional beams for receiving a message from a first wireless device505in the shared radio frequency spectrum band. The second wireless device510may determine that the directional beam is associated with communications in the shared radio frequency spectrum band that exclude a defined set of channel access procedures (e.g., that the directional beam is a narrow beam). In such cases, the determination may be based on a number of spatial streams associated with the directional beam and one or more metrics for the directional beam satisfying one or more thresholds (e.g., EIRP thresholds, spatial stream thresholds). At535, the first wireless device505may transmit, and the second wireless device510may receive, the message in the shared radio frequency spectrum band using the directional beam. In cases where the directional beam(s) used by the first wireless device505and/or the second wireless device510are defined as a narrow beam using the techniques described herein, the message may be transmitted and received without performing one or more defined channel access procedures. FIG.6shows a block diagram600of a device605that supports directional channel access using a narrow beam with multiple spatial streams in accordance with aspects of the present disclosure. The device605may be an example of aspects of a UE115as described herein. The device605may include a receiver610, a transmitter615, and a communications manager620. The device605may also include a processor. Each of these components may be in communication with one another (e.g., via one or more buses). The receiver610may provide a means for receiving information such as packets, user data, control information, or any combination thereof associated with various information channels (e.g., control channels, data channels, information channels related to directional channel access using a narrow beam with multiple spatial streams). Information may be passed on to other components of the device605. The receiver610may utilize a single antenna or a set of multiple antennas. The transmitter615may provide a means for transmitting signals generated by other components of the device605. For example, the transmitter615may transmit information such as packets, user data, control information, or any combination thereof associated with various information channels (e.g., control channels, data channels, information channels related to directional channel access using a narrow beam with multiple spatial streams). In some examples, the transmitter615may be co-located with a receiver610in a transceiver module. The transmitter615may utilize a single antenna or a set of multiple antennas.
The communications manager620, the receiver610, the transmitter615, or various combinations thereof or various components thereof may be examples of means for performing various aspects of directional channel access using a narrow beam with multiple spatial streams as described herein. For example, the communications manager620, the receiver610, the transmitter615, or various combinations or components thereof may support a method for performing one or more of the functions described herein. In some examples, the communications manager620, the receiver610, the transmitter615, or various combinations or components thereof may be implemented in hardware (e.g., in communications management circuitry). The hardware may include a processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic, discrete hardware components, or any combination thereof configured as or otherwise supporting a means for performing the functions described in the present disclosure. In some examples, a processor and memory coupled with the processor may be configured to perform one or more of the functions described herein (e.g., by executing, by the processor, instructions stored in the memory). Additionally or alternatively, in some examples, the communications manager620, the receiver610, the transmitter615, or various combinations or components thereof may be implemented in code (e.g., as communications management software or firmware) executed by a processor. If implemented in code executed by a processor, the functions of the communications manager620, the receiver610, the transmitter615, or various combinations or components thereof may be performed by a general-purpose processor, a DSP, a central processing unit (CPU), an ASIC, an FPGA, or any combination of these or other programmable logic devices (e.g., configured as or otherwise supporting a means for performing the functions described in the present disclosure). In some examples, the communications manager620may be configured to perform various operations (e.g., receiving, monitoring, transmitting) using or otherwise in cooperation with the receiver610, the transmitter615, or both. For example, the communications manager620may receive information from the receiver610, send information to the transmitter615, or be integrated in combination with the receiver610, the transmitter615, or both to receive information, transmit information, or perform various other operations as described herein. The communications manager620may support wireless communication at a first wireless device in accordance with examples as disclosed herein. For example, the communications manager620may be configured as or otherwise support a means for selecting a directional beam from a set of one or more directional beams for transmitting a message to a second wireless device in a shared radio frequency spectrum band. The communications manager620may be configured as or otherwise support a means for determining that the directional beam is associated with communications in the shared radio frequency spectrum band that exclude a defined set of channel access procedures (e.g., LBT procedures, LT procedures, or the like), where the determination is based on a number of spatial streams associated with the directional beam and one or more metrics for the directional beam satisfying one or more EIRP thresholds. 
The communications manager620may be configured as or otherwise support a means for transmitting, to the second wireless device, the message in the shared radio frequency spectrum band using the directional beam based on the determination. Additionally or alternatively, the communications manager620may support wireless communication at a first wireless device in accordance with examples as disclosed herein. For example, the communications manager620may be configured as or otherwise support a means for selecting a directional beam from a set of one or more directional beams for receiving a message from a second wireless device in a shared radio frequency spectrum band. The communications manager620may be configured as or otherwise support a means for determining that the directional beam is associated with communications in the shared radio frequency spectrum band that exclude a defined set of channel access procedures, where the determination is based on a number of spatial streams associated with the directional beam and one or more metrics for the directional beam satisfying one or more EIRP thresholds. The communications manager620may be configured as or otherwise support a means for receiving, from the second wireless device, the message in the shared radio frequency spectrum band using the directional beam based on the determination. By including or configuring the communications manager620in accordance with examples as described herein, the device605(e.g., a processor controlling or otherwise coupled to the receiver610, the transmitter615, the communications manager620, or a combination thereof) may support techniques for reduced processing and reduced power consumption by avoiding performing channel access procedures. Similarly, the device605(e.g., a processor controlling or otherwise coupled to the receiver610, the transmitter615, the communications manager620, or a combination thereof) may support reduced latency and improvements in communications efficiency, throughput, and channel access. FIG.7shows a block diagram700of a device705that supports directional channel access using a narrow beam with multiple spatial streams in accordance with aspects of the present disclosure. The device705may be an example of aspects of a device605or a UE115as described herein. The device705may include a receiver710, a transmitter715, and a communications manager720. The device705may also include a processor. Each of these components may be in communication with one another (e.g., via one or more buses). The receiver710may provide a means for receiving information such as packets, user data, control information, or any combination thereof associated with various information channels (e.g., control channels, data channels, information channels related to directional channel access using a narrow beam with multiple spatial streams). Information may be passed on to other components of the device705. The receiver710may utilize a single antenna or a set of multiple antennas. The transmitter715may provide a means for transmitting signals generated by other components of the device705. For example, the transmitter715may transmit information such as packets, user data, control information, or any combination thereof associated with various information channels (e.g., control channels, data channels, information channels related to directional channel access using a narrow beam with multiple spatial streams). In some examples, the transmitter715may be co-located with a receiver710in a transceiver module. 
The transmitter715may utilize a single antenna or a set of multiple antennas. The device705, or various components thereof, may be an example of means for performing various aspects of directional channel access using a narrow beam with multiple spatial streams as described herein. For example, the communications manager720may include a beam selection component725, a beam determination component730, a beam transmitter735, a beam receiver740, or any combination thereof. The communications manager720may be an example of aspects of a communications manager620as described herein. In some examples, the communications manager720, or various components thereof, may be configured to perform various operations (e.g., receiving, monitoring, transmitting) using or otherwise in cooperation with the receiver710, the transmitter715, or both. For example, the communications manager720may receive information from the receiver710, send information to the transmitter715, or be integrated in combination with the receiver710, the transmitter715, or both to receive information, transmit information, or perform various other operations as described herein. The communications manager720may support wireless communication at a first wireless device in accordance with examples as disclosed herein. The beam selection component725may be configured as or otherwise support a means for selecting a directional beam from a set of one or more directional beams for transmitting a message to a second wireless device in a shared radio frequency spectrum band. The beam determination component730may be configured as or otherwise support a means for determining that the directional beam is associated with communications in the shared radio frequency spectrum band that exclude a defined set of channel access procedures, where the determination is based on a number of spatial streams associated with the directional beam and one or more metrics for the directional beam satisfying one or more EIRP thresholds. The beam transmitter735may be configured as or otherwise support a means for transmitting, to the second wireless device, the message in the shared radio frequency spectrum band using the directional beam based on the determination. Additionally or alternatively, the communications manager720may support wireless communication at a first wireless device in accordance with examples as disclosed herein. The beam selection component725may be configured as or otherwise support a means for selecting a directional beam from a set of one or more directional beams for receiving a message from a second wireless device in a shared radio frequency spectrum band. The beam determination component730may be configured as or otherwise support a means for determining that the directional beam is associated with communications in the shared radio frequency spectrum band that exclude a defined set of channel access procedures, where the determination is based on a number of spatial streams associated with the directional beam and one or more metrics for the directional beam satisfying one or more EIRP thresholds. The beam receiver740may be configured as or otherwise support a means for receiving, from the second wireless device, the message in the shared radio frequency spectrum band using the directional beam based on the determination. FIG.8shows a block diagram800of a communications manager820that supports directional channel access using a narrow beam with multiple spatial streams in accordance with aspects of the present disclosure. 
The communications manager820may be an example of aspects of a communications manager620, a communications manager720, or both, as described herein. The communications manager820, or various components thereof, may be an example of means for performing various aspects of directional channel access using a narrow beam with multiple spatial streams as described herein. For example, the communications manager820may include a beam selection component825, a beam determination component830, a beam transmitter835, a beam receiver840, a beam measurement component845, a metric identification component850, a threshold component855, a threshold identification component860, a channel access component865, or any combination thereof. Each of these components may communicate, directly or indirectly, with one another (e.g., via one or more buses). The communications manager820may support wireless communication at a first wireless device in accordance with examples as disclosed herein. The beam selection component825may be configured as or otherwise support a means for selecting a directional beam from a set of one or more directional beams for transmitting a message to a second wireless device in a shared radio frequency spectrum band. The beam determination component830may be configured as or otherwise support a means for determining that the directional beam is associated with communications in the shared radio frequency spectrum band that exclude a defined set of channel access procedures, where the determination is based on a number of spatial streams associated with the directional beam and one or more metrics for the directional beam satisfying one or more EIRP thresholds. The beam transmitter835may be configured as or otherwise support a means for transmitting, to the second wireless device, the message in the shared radio frequency spectrum band using the directional beam based on the determination. In some examples, the beam measurement component845may be configured as or otherwise support a means for generating a set of EIRP measurement values for the set of one or more directional beams based on a spherical measurement test for each directional beam of the set of one or more directional beams, where the set of EIRP measurement values includes at least a subset of EIRP measurement values for the directional beam. In some examples, the metric identification component850may be configured as or otherwise support a means for identifying the one or more metrics based on the subset of EIRP measurement values for the directional beam. In some examples, the threshold component855may be configured as or otherwise support a means for comparing the one or more metrics with the one or more EIRP thresholds based on the number of spatial streams associated with the directional beam, where determining that the directional beam is associated with the communications in the shared radio frequency spectrum band that excludes the defined set of channel access procedures is based on the comparison. 
In some examples, to support identifying the one or more metrics for the directional beam, the metric identification component850may be configured as or otherwise support a means for identifying a first metric of the one or more metrics based on a difference between a first EIRP measurement value and a second EIRP measurement value from the subset of EIRP measurement values, where the first EIRP measurement value corresponds to a first percentile of EIRP measurements for the directional beam and the second EIRP measurement value corresponds to a second percentile of EIRP measurements for the directional beam. In some examples, to support determining that the directional beam is associated with the communications in the shared radio frequency spectrum band that excludes the defined set of channel access procedures, the beam determination component830may be configured as or otherwise support a means for determining that the directional beam is associated with the communications in the shared radio frequency spectrum band that excludes the defined set of channel access procedures based on the first metric being greater than or equal to at least one EIRP threshold of the one or more EIRP thresholds based on the number of spatial streams. In some examples, the number of spatial streams does not satisfy a threshold number of spatial streams. In some examples, the first wireless device operates in accordance with a device-based power limitation or an antenna port-based power limitation, or any combination thereof. In some examples, to support determining that the directional beam is associated with the communications in the shared radio frequency spectrum band that excludes the defined set of channel access procedures, the beam determination component830may be configured as or otherwise support a means for determining that the directional beam is associated with the communications in the shared radio frequency spectrum band that excludes the defined set of channel access procedures based on the first metric being greater than a first EIRP threshold of the one or more EIRP thresholds based on the reference precoder. In some examples, the first percentile is greater than the second percentile. In some examples, to support identifying the one or more metrics for the directional beam, the metric identification component850may be configured as or otherwise support a means for identifying a second metric of the one or more metrics based on a third EIRP measurement value that corresponds to a predefined percentile of EIRP measurements for the directional beam. In some examples, to support determining that the directional beam is associated with the communications in the shared radio frequency spectrum band that excludes the defined set of channel access procedures, the beam determination component830may be configured as or otherwise support a means for determining that the directional beam is associated with the communications in the shared radio frequency spectrum band that excludes the defined set of channel access procedures based on the second metric being less than or equal to at least one EIRP threshold of the one or more EIRP thresholds based on the number of spatial streams. In some examples, the number of spatial streams does not satisfy a threshold number of spatial streams. In some examples, the first wireless device operates in accordance with a device-based power limitation or an antenna port-based power limitation, or any combination thereof. 
In some examples, to support determining that the directional beam is associated with the communications in the shared radio frequency spectrum band that excludes the defined set of channel access procedures, the beam determination component830may be configured as or otherwise support a means for determining that the directional beam is associated with the communications in the shared radio frequency spectrum band that excludes the defined set of channel access procedures based on the second metric being less than a second EIRP threshold of the one or more EIRP thresholds based on the reference precoder. In some examples, the second metric is identified based on a transmission power value. In some examples, to support determining that the directional beam is associated with the communications in the shared radio frequency spectrum band that excludes the defined set of channel access procedures, the threshold identification component860may be configured as or otherwise support a means for identifying a threshold number of spatial streams associated with the directional beam. In some examples, to support determining that the directional beam is associated with the communications in the shared radio frequency spectrum band that excludes the defined set of channel access procedures, the threshold component855may be configured as or otherwise support a means for comparing the number of spatial streams with the threshold number of spatial streams. In some examples, to support determining that the directional beam is associated with the communications in the shared radio frequency spectrum band that excludes the defined set of channel access procedures, the beam determination component830may be configured as or otherwise support a means for determining that the directional beam is associated with the communications in the shared radio frequency spectrum band that excludes the defined set of channel access procedures based on the number of spatial streams satisfying the threshold number of spatial streams. In some examples, to support performing the spherical measurement test, the beam selection component825may be configured as or otherwise support a means for forming each directional beam of the set of one or more directional beams, where each directional beam is formed using a single spatial stream. In some examples, to support performing the spherical measurement test, the beam measurement component845may be configured as or otherwise support a means for measuring a set of multiple EIRP measurement values for each directional beam of the set of one or more directional beams, the set of multiple EIRP measurement values including respective EIRP measurement values for each direction of a set of multiple directions. In some examples, to support performing the spherical measurement test, the beam measurement component845may be configured as or otherwise support a means for recording the set of EIRP measurement values for the set of one or more directional beams including a subset of EIRP measurement values for each directional beam based on the set of multiple EIRP measurement values. In some examples, each directional beam of the set of one or more directional beams is formed based on a predetermined beamforming codebook. In some examples, each direction of the set of multiple directions includes a non-uniform azimuth and a non-uniform elevation. In some examples, each EIRP measurement value of the set of EIRP measurement values is associated with an azimuth value and an elevation value.
In some examples, each direction of the set of multiple directions includes a uniform azimuth and a uniform elevation. In some examples, each EIRP measurement value of the set of EIRP measurement values is associated with an azimuth value and an elevation value. In some examples, to support performing the spherical measurement test, the beam selection component825may be configured as or otherwise support a means for forming each directional beam of the set of one or more directional beams based on a reference precoder from a set of reference precoders, where each directional beam is formed using multiple spatial streams. In some examples, to support performing the spherical measurement test, the beam measurement component845may be configured as or otherwise support a means for measuring a set of multiple EIRP measurement values for each directional beam of the set of one or more directional beams and for each reference precoder from the set of reference precoders, the set of multiple EIRP measurement values including respective EIRP measurement values for each direction of a set of multiple directions. In some examples, to support performing the spherical measurement test, the beam measurement component845may be configured as or otherwise support a means for recording the set of EIRP measurement values for the set of one or more directional beams including a subset of EIRP measurement values for each directional beam based on the set of multiple EIRP measurement values. In some examples, each directional beam of the set of one or more directional beams is formed based on a predetermined analog beamforming codebook and a predefined digital precoding codebook. In some examples, each direction of the set of multiple directions includes a non-uniform azimuth and a non-uniform elevation. In some examples, each EIRP measurement value of the set of EIRP measurement values is associated with an azimuth value and an elevation value. In some examples, each direction of the set of multiple directions includes a uniform azimuth and a uniform elevation. In some examples, each EIRP measurement value of the set of EIRP measurement values is associated with an azimuth value and an elevation value. In some examples, the set of reference precoders is based on a number of columns selected from an orthonormal matrix, the number of columns being based on a number of antenna ports and the multiple spatial streams. In some examples, the threshold identification component860may be configured as or otherwise support a means for determining the one or more EIRP thresholds based on the number of spatial streams and one or more parameters associated with a power limitation at the first wireless device. In some examples, the power limitation includes a device-based power limitation. In some examples, a first EIRP threshold associated with a first number of spatial streams is less than a second EIRP threshold associated with a second number of spatial streams, the first number of spatial streams being greater than the second number of spatial streams. In some examples, the one or more parameters associated with the power limitation are associated with a device-based power threshold or an antenna port-based power threshold. In some examples, a first EIRP threshold associated with a first number of spatial streams is greater than a second EIRP threshold associated with a second number of spatial streams, the first number of spatial streams being greater than the second number of spatial streams.
In some examples, the channel access component 865 may be configured as or otherwise support a means for refraining from performing the channel access procedures prior to transmitting the message based on determining that the directional beam is associated with the communications in the shared radio frequency spectrum band that excludes the defined set of channel access procedures, the channel access procedures including one or more listen-before-talk procedures. Additionally or alternatively, the communications manager 820 may support wireless communication at a first wireless device in accordance with examples as disclosed herein. In some examples, the beam selection component 825 may be configured as or otherwise support a means for selecting a directional beam from a set of one or more directional beams for receiving a message from a second wireless device in a shared radio frequency spectrum band. In some examples, the beam determination component 830 may be configured as or otherwise support a means for determining that the directional beam is associated with communications in the shared radio frequency spectrum band that exclude a defined set of channel access procedures, where the determination is based on a number of spatial streams associated with the directional beam and one or more metrics for the directional beam satisfying one or more EIRP thresholds. The beam receiver 840 may be configured as or otherwise support a means for receiving, from the second wireless device, the message in the shared radio frequency spectrum band using the directional beam based on the determination. In some examples, the beam measurement component 845 may be configured as or otherwise support a means for generating a set of EIRP measurement values for the set of one or more directional beams based on a spherical measurement test for each directional beam of the set of one or more directional beams, where the set of EIRP measurement values includes at least a subset of EIRP measurement values for the directional beam. In some examples, the metric identification component 850 may be configured as or otherwise support a means for identifying the one or more metrics based on the subset of EIRP measurement values for the directional beam. In some examples, the threshold component 855 may be configured as or otherwise support a means for comparing the one or more metrics with the one or more EIRP thresholds based on the number of spatial streams associated with the directional beam, where determining that the directional beam is associated with the communications in the shared radio frequency spectrum band that excludes the defined set of channel access procedures is based on the comparison.

FIG. 9 shows a diagram of a system 900 including a device 905 that supports directional channel access using a narrow beam with multiple spatial streams in accordance with aspects of the present disclosure. The device 905 may be an example of or include the components of a device 605, a device 705, or a UE 115 as described herein. The device 905 may communicate wirelessly with one or more base stations 105, UEs 115, or any combination thereof. The device 905 may include components for bi-directional voice and data communications including components for transmitting and receiving communications, such as a communications manager 920, an input/output (I/O) controller 910, a transceiver 915, an antenna 925, a memory 930, code 935, and a processor 940.
These components may be in electronic communication or otherwise coupled (e.g., operatively, communicatively, functionally, electronically, electrically) via one or more buses (e.g., a bus 945). The I/O controller 910 may manage input and output signals for the device 905. The I/O controller 910 may also manage peripherals not integrated into the device 905. In some cases, the I/O controller 910 may represent a physical connection or port to an external peripheral. In some cases, the I/O controller 910 may utilize an operating system such as iOS®, ANDROID®, MS-DOS®, MS-WINDOWS®, OS/2®, UNIX®, LINUX®, or another known operating system. Additionally or alternatively, the I/O controller 910 may represent or interact with a modem, a keyboard, a mouse, a touchscreen, or a similar device. In some cases, the I/O controller 910 may be implemented as part of a processor, such as the processor 940. In some cases, a user may interact with the device 905 via the I/O controller 910 or via hardware components controlled by the I/O controller 910. In some cases, the device 905 may include a single antenna 925. However, in some other cases, the device 905 may have more than one antenna 925, which may be capable of concurrently transmitting or receiving multiple wireless transmissions. The transceiver 915 may communicate bi-directionally, via the one or more antennas 925, wired, or wireless links as described herein. For example, the transceiver 915 may represent a wireless transceiver and may communicate bi-directionally with another wireless transceiver. The transceiver 915 may also include a modem to modulate the packets, to provide the modulated packets to one or more antennas 925 for transmission, and to demodulate packets received from the one or more antennas 925. The transceiver 915, or the transceiver 915 and one or more antennas 925, may be an example of a transmitter 615, a transmitter 715, a receiver 610, a receiver 710, or any combination thereof or component thereof, as described herein. The memory 930 may include random access memory (RAM) and read-only memory (ROM). The memory 930 may store computer-readable, computer-executable code 935 including instructions that, when executed by the processor 940, cause the device 905 to perform various functions described herein. The code 935 may be stored in a non-transitory computer-readable medium such as system memory or another type of memory. In some cases, the code 935 may not be directly executable by the processor 940 but may cause a computer (e.g., when compiled and executed) to perform functions described herein. In some cases, the memory 930 may contain, among other things, a basic I/O system (BIOS) which may control basic hardware or software operation such as the interaction with peripheral components or devices. The processor 940 may include an intelligent hardware device (e.g., a general-purpose processor, a DSP, a CPU, a microcontroller, an ASIC, an FPGA, a programmable logic device, a discrete gate or transistor logic component, a discrete hardware component, or any combination thereof). In some cases, the processor 940 may be configured to operate a memory array using a memory controller. In some other cases, a memory controller may be integrated into the processor 940. The processor 940 may be configured to execute computer-readable instructions stored in a memory (e.g., the memory 930) to cause the device 905 to perform various functions (e.g., functions or tasks supporting directional channel access using a narrow beam with multiple spatial streams).
For example, the device 905 or a component of the device 905 may include a processor 940 and memory 930 coupled to the processor 940, the processor 940 and memory 930 configured to perform various functions described herein. The communications manager 920 may support wireless communication at a first wireless device in accordance with examples as disclosed herein. For example, the communications manager 920 may be configured as or otherwise support a means for selecting a directional beam from a set of one or more directional beams for transmitting a message to a second wireless device in a shared radio frequency spectrum band. The communications manager 920 may be configured as or otherwise support a means for determining that the directional beam is associated with communications in the shared radio frequency spectrum band that exclude a defined set of channel access procedures, where the determination is based on a number of spatial streams associated with the directional beam and one or more metrics for the directional beam satisfying one or more EIRP thresholds. The communications manager 920 may be configured as or otherwise support a means for transmitting, to the second wireless device, the message in the shared radio frequency spectrum band using the directional beam based on the determination. Additionally or alternatively, the communications manager 920 may support wireless communication at a first wireless device in accordance with examples as disclosed herein. For example, the communications manager 920 may be configured as or otherwise support a means for selecting a directional beam from a set of one or more directional beams for receiving a message from a second wireless device in a shared radio frequency spectrum band. The communications manager 920 may be configured as or otherwise support a means for determining that the directional beam is associated with communications in the shared radio frequency spectrum band that exclude a defined set of channel access procedures, where the determination is based on a number of spatial streams associated with the directional beam and one or more metrics for the directional beam satisfying one or more EIRP thresholds. The communications manager 920 may be configured as or otherwise support a means for receiving, from the second wireless device, the message in the shared radio frequency spectrum band using the directional beam based on the determination. By including or configuring the communications manager 920 in accordance with examples as described herein, the device 905 may support techniques for communication reliability, reduced latency, more efficient utilization of communication resources, improved coordination between devices, and enhanced communications efficiency while avoiding unnecessary interference to other devices when performing directional communication over a channel without performing channel access procedures. In some examples, the communications manager 920 may be configured to perform various operations (e.g., receiving, monitoring, transmitting) using or otherwise in cooperation with the transceiver 915, the one or more antennas 925, or any combination thereof. Although the communications manager 920 is illustrated as a separate component, in some examples, one or more functions described with reference to the communications manager 920 may be supported by or performed by the processor 940, the memory 930, the code 935, or any combination thereof.
For example, the code 935 may include instructions executable by the processor 940 to cause the device 905 to perform various aspects of directional channel access using a narrow beam with multiple spatial streams as described herein, or the processor 940 and the memory 930 may be otherwise configured to perform or support such operations.

FIG. 10 shows a block diagram 1000 of a device 1005 that supports directional channel access using a narrow beam with multiple spatial streams in accordance with aspects of the present disclosure. The device 1005 may be an example of aspects of a base station 105 as described herein. The device 1005 may include a receiver 1010, a transmitter 1015, and a communications manager 1020. The device 1005 may also include a processor. Each of these components may be in communication with one another (e.g., via one or more buses). The receiver 1010 may provide a means for receiving information such as packets, user data, control information, or any combination thereof associated with various information channels (e.g., control channels, data channels, information channels related to directional channel access using a narrow beam with multiple spatial streams). Information may be passed on to other components of the device 1005. The receiver 1010 may utilize a single antenna or a set of multiple antennas. The transmitter 1015 may provide a means for transmitting signals generated by other components of the device 1005. For example, the transmitter 1015 may transmit information such as packets, user data, control information, or any combination thereof associated with various information channels (e.g., control channels, data channels, information channels related to directional channel access using a narrow beam with multiple spatial streams). In some examples, the transmitter 1015 may be co-located with a receiver 1010 in a transceiver module. The transmitter 1015 may utilize a single antenna or a set of multiple antennas. The communications manager 1020, the receiver 1010, the transmitter 1015, or various combinations thereof or various components thereof may be examples of means for performing various aspects of directional channel access using a narrow beam with multiple spatial streams as described herein. For example, the communications manager 1020, the receiver 1010, the transmitter 1015, or various combinations or components thereof may support a method for performing one or more of the functions described herein. In some examples, the communications manager 1020, the receiver 1010, the transmitter 1015, or various combinations or components thereof may be implemented in hardware (e.g., in communications management circuitry). The hardware may include a processor, a DSP, an ASIC, an FPGA or other programmable logic device, a discrete gate or transistor logic, discrete hardware components, or any combination thereof configured as or otherwise supporting a means for performing the functions described in the present disclosure. In some examples, a processor and memory coupled with the processor may be configured to perform one or more of the functions described herein (e.g., by executing, by the processor, instructions stored in the memory). Additionally or alternatively, in some examples, the communications manager 1020, the receiver 1010, the transmitter 1015, or various combinations or components thereof may be implemented in code (e.g., as communications management software or firmware) executed by a processor.
If implemented in code executed by a processor, the functions of the communications manager 1020, the receiver 1010, the transmitter 1015, or various combinations or components thereof may be performed by a general-purpose processor, a DSP, a CPU, an ASIC, an FPGA, or any combination of these or other programmable logic devices (e.g., configured as or otherwise supporting a means for performing the functions described in the present disclosure). In some examples, the communications manager 1020 may be configured to perform various operations (e.g., receiving, monitoring, transmitting) using or otherwise in cooperation with the receiver 1010, the transmitter 1015, or both. For example, the communications manager 1020 may receive information from the receiver 1010, send information to the transmitter 1015, or be integrated in combination with the receiver 1010, the transmitter 1015, or both to receive information, transmit information, or perform various other operations as described herein. The communications manager 1020 may support wireless communication at a first wireless device in accordance with examples as disclosed herein. For example, the communications manager 1020 may be configured as or otherwise support a means for selecting a directional beam from a set of one or more directional beams for transmitting a message to a second wireless device in a shared radio frequency spectrum band. The communications manager 1020 may be configured as or otherwise support a means for determining that the directional beam is associated with communications in the shared radio frequency spectrum band that exclude a defined set of channel access procedures, where the determination is based on a number of spatial streams associated with the directional beam and one or more metrics for the directional beam satisfying one or more EIRP thresholds. The communications manager 1020 may be configured as or otherwise support a means for transmitting, to the second wireless device, the message in the shared radio frequency spectrum band using the directional beam based on the determination. Additionally or alternatively, the communications manager 1020 may support wireless communication at a first wireless device in accordance with examples as disclosed herein. For example, the communications manager 1020 may be configured as or otherwise support a means for selecting a directional beam from a set of one or more directional beams for receiving a message from a second wireless device in a shared radio frequency spectrum band. The communications manager 1020 may be configured as or otherwise support a means for determining that the directional beam is associated with communications in the shared radio frequency spectrum band that exclude a defined set of channel access procedures, where the determination is based on a number of spatial streams associated with the directional beam and one or more metrics for the directional beam satisfying one or more EIRP thresholds. The communications manager 1020 may be configured as or otherwise support a means for receiving, from the second wireless device, the message in the shared radio frequency spectrum band using the directional beam based on the determination.
By including or configuring the communications manager 1020 in accordance with examples as described herein, the device 1005 (e.g., a processor controlling or otherwise coupled to the receiver 1010, the transmitter 1015, the communications manager 1020, or a combination thereof) may support techniques for reduced processing and reduced power consumption by avoiding performing channel access procedures.

FIG. 11 shows a block diagram 1100 of a device 1105 that supports directional channel access using a narrow beam with multiple spatial streams in accordance with aspects of the present disclosure. The device 1105 may be an example of aspects of a device 1005 or a base station 105 as described herein. The device 1105 may include a receiver 1110, a transmitter 1115, and a communications manager 1120. The device 1105 may also include a processor. Each of these components may be in communication with one another (e.g., via one or more buses). The receiver 1110 may provide a means for receiving information such as packets, user data, control information, or any combination thereof associated with various information channels (e.g., control channels, data channels, information channels related to directional channel access using a narrow beam with multiple spatial streams). Information may be passed on to other components of the device 1105. The receiver 1110 may utilize a single antenna or a set of multiple antennas. The transmitter 1115 may provide a means for transmitting signals generated by other components of the device 1105. For example, the transmitter 1115 may transmit information such as packets, user data, control information, or any combination thereof associated with various information channels (e.g., control channels, data channels, information channels related to directional channel access using a narrow beam with multiple spatial streams). In some examples, the transmitter 1115 may be co-located with a receiver 1110 in a transceiver module. The transmitter 1115 may utilize a single antenna or a set of multiple antennas. The device 1105, or various components thereof, may be an example of means for performing various aspects of directional channel access using a narrow beam with multiple spatial streams as described herein. For example, the communications manager 1120 may include a beam selection component 1125, a beam determination component 1130, a beam transmitter 1135, a beam receiver 1140, or any combination thereof. The communications manager 1120 may be an example of aspects of a communications manager 1020 as described herein. In some examples, the communications manager 1120, or various components thereof, may be configured to perform various operations (e.g., receiving, monitoring, transmitting) using or otherwise in cooperation with the receiver 1110, the transmitter 1115, or both. For example, the communications manager 1120 may receive information from the receiver 1110, send information to the transmitter 1115, or be integrated in combination with the receiver 1110, the transmitter 1115, or both to receive information, transmit information, or perform various other operations as described herein. The communications manager 1120 may support wireless communication at a first wireless device in accordance with examples as disclosed herein. The beam selection component 1125 may be configured as or otherwise support a means for selecting a directional beam from a set of one or more directional beams for transmitting a message to a second wireless device in a shared radio frequency spectrum band.
The beam determination component 1130 may be configured as or otherwise support a means for determining that the directional beam is associated with communications in the shared radio frequency spectrum band that exclude a defined set of channel access procedures, where the determination is based on a number of spatial streams associated with the directional beam and one or more metrics for the directional beam satisfying one or more EIRP thresholds. The beam transmitter 1135 may be configured as or otherwise support a means for transmitting, to the second wireless device, the message in the shared radio frequency spectrum band using the directional beam based on the determination. Additionally or alternatively, the communications manager 1120 may support wireless communication at a first wireless device in accordance with examples as disclosed herein. The beam selection component 1125 may be configured as or otherwise support a means for selecting a directional beam from a set of one or more directional beams for receiving a message from a second wireless device in a shared radio frequency spectrum band. The beam determination component 1130 may be configured as or otherwise support a means for determining that the directional beam is associated with communications in the shared radio frequency spectrum band that exclude a defined set of channel access procedures, where the determination is based on a number of spatial streams associated with the directional beam and one or more metrics for the directional beam satisfying one or more EIRP thresholds. The beam receiver 1140 may be configured as or otherwise support a means for receiving, from the second wireless device, the message in the shared radio frequency spectrum band using the directional beam based on the determination.

FIG. 12 shows a block diagram 1200 of a communications manager 1220 that supports directional channel access using a narrow beam with multiple spatial streams in accordance with aspects of the present disclosure. The communications manager 1220 may be an example of aspects of a communications manager 1020, a communications manager 1120, or both, as described herein. The communications manager 1220, or various components thereof, may be an example of means for performing various aspects of directional channel access using a narrow beam with multiple spatial streams as described herein. For example, the communications manager 1220 may include a beam selection component 1225, a beam determination component 1230, a beam transmitter 1235, a beam receiver 1240, a beam measurement component 1245, a metric identification component 1250, a threshold component 1255, a threshold identification component 1260, a channel access component 1265, or any combination thereof. Each of these components may communicate, directly or indirectly, with one another (e.g., via one or more buses). The communications manager 1220 may support wireless communication at a first wireless device in accordance with examples as disclosed herein. The beam selection component 1225 may be configured as or otherwise support a means for selecting a directional beam from a set of one or more directional beams for transmitting a message to a second wireless device in a shared radio frequency spectrum band.
The beam determination component 1230 may be configured as or otherwise support a means for determining that the directional beam is associated with communications in the shared radio frequency spectrum band that exclude a defined set of channel access procedures, where the determination is based on a number of spatial streams associated with the directional beam and one or more metrics for the directional beam satisfying one or more EIRP thresholds. The beam transmitter 1235 may be configured as or otherwise support a means for transmitting, to the second wireless device, the message in the shared radio frequency spectrum band using the directional beam based on the determination. In some examples, the beam measurement component 1245 may be configured as or otherwise support a means for generating a set of EIRP measurement values for the set of one or more directional beams based on a spherical measurement test for each directional beam of the set of one or more directional beams, where the set of EIRP measurement values includes at least a subset of EIRP measurement values for the directional beam. In some examples, the metric identification component 1250 may be configured as or otherwise support a means for identifying the one or more metrics based on the subset of EIRP measurement values for the directional beam. In some examples, the threshold component 1255 may be configured as or otherwise support a means for comparing the one or more metrics with the one or more EIRP thresholds based on the number of spatial streams associated with the directional beam, where determining that the directional beam is associated with the communications in the shared radio frequency spectrum band that excludes the defined set of channel access procedures is based on the comparison. In some examples, to support identifying the one or more metrics for the directional beam, the metric identification component 1250 may be configured as or otherwise support a means for identifying a first metric of the one or more metrics based on a difference between a first EIRP measurement value and a second EIRP measurement value from the subset of EIRP measurement values, where the first EIRP measurement value corresponds to a first percentile of EIRP measurements for the directional beam and the second EIRP measurement value corresponds to a second percentile of EIRP measurements for the directional beam. In some examples, to support determining that the directional beam is associated with the communications in the shared radio frequency spectrum band that excludes the defined set of channel access procedures, the beam determination component 1230 may be configured as or otherwise support a means for determining that the directional beam is associated with the communications in the shared radio frequency spectrum band that excludes the defined set of channel access procedures based on the first metric being greater than or equal to at least one EIRP threshold of the one or more EIRP thresholds based on the number of spatial streams. In some examples, the number of spatial streams does not satisfy a threshold number of spatial streams. In some examples, the first wireless device operates in accordance with a device-based power limitation or an antenna port-based power limitation, or any combination thereof.
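The percentile-based metrics in the preceding examples might be derived from a beam's recorded EIRP values as sketched below: the first metric as the difference between a higher-percentile and a lower-percentile EIRP value (the first percentile being greater than the second), and a companion second metric, described in the examples that follow, as the EIRP value at a predefined percentile offset by a transmission power term. The specific percentiles (100th/90th and 90th) and the back-off parameter are assumptions, not values fixed by the disclosure.

```python
# Sketch of the two percentile-based beam metrics (percentile choices and the
# transmission-power back-off are illustrative assumptions).
import numpy as np

def beam_metrics(eirp_dbm, p_high=100.0, p_low=90.0, p_fixed=90.0,
                 tx_power_backoff_db=0.0):
    values = np.asarray(eirp_dbm, dtype=float)
    # First metric: drop from a high percentile (here the peak) to a lower one;
    # a large value suggests energy is concentrated in a narrow main lobe.
    metric1 = np.percentile(values, p_high) - np.percentile(values, p_low)
    # Second metric: EIRP at a predefined percentile, adjusted by a transmission
    # power value; a small value suggests limited off-peak radiation.
    metric2 = np.percentile(values, p_fixed) - tx_power_backoff_db
    return metric1, metric2

m1, m2 = beam_metrics([30.1, 28.7, 22.4, 18.9, 15.2, 12.8, 10.3])
```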
In some examples, to support determining that the directional beam is associated with the communications in the shared radio frequency spectrum band that excludes the defined set of channel access procedures, the beam determination component 1230 may be configured as or otherwise support a means for determining that the directional beam is associated with the communications in the shared radio frequency spectrum band that excludes the defined set of channel access procedures based on the first metric being greater than a first EIRP threshold of the one or more EIRP thresholds based on the reference precoder. In some examples, the first percentile is greater than the second percentile. In some examples, to support identifying the one or more metrics for the directional beam, the metric identification component 1250 may be configured as or otherwise support a means for identifying a second metric of the one or more metrics based on a third EIRP measurement value that corresponds to a predefined percentile of EIRP measurements for the directional beam. In some examples, to support determining that the directional beam is associated with the communications in the shared radio frequency spectrum band that excludes the defined set of channel access procedures, the beam determination component 1230 may be configured as or otherwise support a means for determining that the directional beam is associated with the communications in the shared radio frequency spectrum band that excludes the defined set of channel access procedures based on the second metric being less than or equal to at least one EIRP threshold of the one or more EIRP thresholds based on the number of spatial streams. In some examples, the number of spatial streams does not satisfy a threshold number of spatial streams. In some examples, the first wireless device operates in accordance with a device-based power limitation or an antenna port-based power limitation, or any combination thereof. In some examples, to support determining that the directional beam is associated with the communications in the shared radio frequency spectrum band that excludes the defined set of channel access procedures, the beam determination component 1230 may be configured as or otherwise support a means for determining that the directional beam is associated with the communications in the shared radio frequency spectrum band that excludes the defined set of channel access procedures based on the second metric being less than a second EIRP threshold of the one or more EIRP thresholds based on the reference precoder. In some examples, the second metric is identified based on a transmission power value. In some examples, to support determining that the directional beam is associated with the communications in the shared radio frequency spectrum band that excludes the defined set of channel access procedures, the threshold identification component 1260 may be configured as or otherwise support a means for identifying a threshold number of spatial streams associated with the directional beam. In some examples, to support determining that the directional beam is associated with the communications in the shared radio frequency spectrum band that excludes the defined set of channel access procedures, the threshold component 1255 may be configured as or otherwise support a means for comparing the number of spatial streams with the threshold number of spatial streams.
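Putting the stream-count comparison together with the metric-versus-threshold comparisons described above (and with the refrain-from-listen-before-talk behavior described earlier), the overall exemption decision might be sketched as follows. Every numeric threshold, the choice to require both metric conditions rather than either one, and the perform_lbt() stand-in are assumptions for illustration.

```python
# Sketch of the exemption decision and the resulting conditional LBT
# (thresholds, the "and" between metric conditions, and perform_lbt() are
# illustrative assumptions).
import random

def beam_is_exempt(m1: float, m2: float, num_streams: int,
                   max_streams: int = 2,
                   t1_db: float = 8.0, t2_dbm: float = 23.0) -> bool:
    if num_streams > max_streams:
        return False              # too many spatial streams: treat the beam as wide
    narrow = m1 >= t1_db          # steep EIRP drop away from the peak
    contained = m2 <= t2_dbm      # bounded EIRP away from the main lobe
    return narrow and contained

def perform_lbt() -> bool:
    """Stand-in clear-channel assessment: True when the channel is sensed idle."""
    return random.random() > 0.1

def send_message(message: str, exempt: bool) -> bool:
    if not exempt and not perform_lbt():
        return False              # channel busy: defer the transmission
    # ... the message would be transmitted on the directional beam here ...
    return True

send_message("msg", exempt=beam_is_exempt(m1=10.5, m2=20.0, num_streams=2))
```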
In some examples, to support determining that the directional beam is associated with the communications in the shared radio frequency spectrum band that excludes the defined set of channel access procedures, the beam determination component 1230 may be configured as or otherwise support a means for determining that the directional beam is associated with the communications in the shared radio frequency spectrum band that excludes the defined set of channel access procedures based on the number of spatial streams satisfying the threshold number of spatial streams. In some examples, to support performing the spherical measurement test, the beam selection component 1225 may be configured as or otherwise support a means for forming each directional beam of the set of one or more directional beams, where each directional beam is formed using a single spatial stream. In some examples, to support performing the spherical measurement test, the beam measurement component 1245 may be configured as or otherwise support a means for measuring a set of multiple EIRP measurement values for each directional beam of the set of one or more directional beams, the set of multiple EIRP measurement values including respective EIRP measurement values for each direction of a set of multiple directions. In some examples, to support performing the spherical measurement test, the beam measurement component 1245 may be configured as or otherwise support a means for recording the set of EIRP measurement values for the set of one or more directional beams including a subset of EIRP measurement values for each directional beam based on the set of multiple EIRP measurement values. In some examples, each directional beam of the set of one or more directional beams is formed based on a predetermined beamforming codebook. In some examples, each direction of the set of multiple directions includes a non-uniform azimuth and a non-uniform elevation. In some examples, each EIRP measurement value of the set of EIRP measurement values is associated with an azimuth value and an elevation value. In some examples, each direction of the set of multiple directions includes a uniform azimuth and a uniform elevation. In some examples, each EIRP measurement value of the set of EIRP measurement values is associated with an azimuth value and an elevation value. In some examples, to support performing the spherical measurement test, the beam selection component 1225 may be configured as or otherwise support a means for forming each directional beam of the set of one or more directional beams based on a reference precoder from a set of reference precoders, where each directional beam is formed using multiple spatial streams. In some examples, to support performing the spherical measurement test, the beam measurement component 1245 may be configured as or otherwise support a means for measuring a set of multiple EIRP measurement values for each directional beam of the set of one or more directional beams and for each reference precoder from the set of reference precoders, the set of multiple EIRP measurement values including respective EIRP measurement values for each direction of a set of multiple directions.
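As noted earlier, the set of reference precoders may be based on a number of columns selected from an orthonormal matrix, with the column count tied to the number of antenna ports and spatial streams. A minimal sketch of one such construction follows; the use of a normalized DFT matrix and the enumeration of column subsets with itertools.combinations are assumptions, not the disclosed method.

```python
# Sketch of reference precoders built from orthonormal columns (the DFT basis
# and subset enumeration are illustrative assumptions).
from itertools import combinations
import numpy as np

def reference_precoders(num_ports: int, num_streams: int):
    """Yield num_ports x num_streams precoders with orthonormal columns."""
    dft = np.fft.fft(np.eye(num_ports)) / np.sqrt(num_ports)  # unitary DFT matrix
    for cols in combinations(range(num_ports), num_streams):
        yield dft[:, list(cols)]

precoders = list(reference_precoders(num_ports=4, num_streams=2))
# Orthonormal columns: W^H W equals the 2x2 identity for every precoder.
assert all(np.allclose(w.conj().T @ w, np.eye(2)) for w in precoders)
```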
In some examples, to support performing the spherical measurement test, the beam measurement component 1245 may be configured as or otherwise support a means for recording the set of EIRP measurement values for the set of one or more directional beams including a subset of EIRP measurement values for each directional beam based on the set of multiple EIRP measurement values. In some examples, each directional beam of the set of one or more directional beams is formed based on a predetermined analog beamforming codebook and a predefined digital precoding codebook. In some examples, each direction of the set of multiple directions includes a non-uniform azimuth and a non-uniform elevation. In some examples, each EIRP measurement value of the set of EIRP measurement values is associated with an azimuth value and an elevation value. In some examples, each direction of the set of multiple directions includes a uniform azimuth and a uniform elevation. In some examples, each EIRP measurement value of the set of EIRP measurement values is associated with an azimuth value and an elevation value. In some examples, the set of reference precoders is based on a number of columns selected from an orthonormal matrix, the number of columns being based on a number of antenna ports and the multiple spatial streams. In some examples, the threshold identification component 1260 may be configured as or otherwise support a means for determining the one or more EIRP thresholds based on the number of spatial streams and one or more parameters associated with a power limitation at the first wireless device. In some examples, the power limitation includes a device-based power limitation. In some examples, a first EIRP threshold associated with a first number of spatial streams is less than a second EIRP threshold associated with a second number of spatial streams, the first number of spatial streams being greater than the second number of spatial streams. In some examples, the one or more parameters associated with the power limitation are associated with a device-based power threshold or an antenna port-based power threshold. In some examples, a first EIRP threshold associated with a first number of spatial streams is greater than a second EIRP threshold associated with a second number of spatial streams, the first number of spatial streams being greater than the second number of spatial streams. In some examples, the channel access component 1265 may be configured as or otherwise support a means for refraining from performing the channel access procedures prior to transmitting the message based on determining that the directional beam is associated with the communications in the shared radio frequency spectrum band that excludes the defined set of channel access procedures, the channel access procedures including one or more listen-before-talk procedures. Additionally or alternatively, the communications manager 1220 may support wireless communication at a first wireless device in accordance with examples as disclosed herein. In some examples, the beam selection component 1225 may be configured as or otherwise support a means for selecting a directional beam from a set of one or more directional beams for receiving a message from a second wireless device in a shared radio frequency spectrum band.
In some examples, the beam determination component 1230 may be configured as or otherwise support a means for determining that the directional beam is associated with communications in the shared radio frequency spectrum band that exclude a defined set of channel access procedures, where the determination is based on a number of spatial streams associated with the directional beam and one or more metrics for the directional beam satisfying one or more EIRP thresholds. The beam receiver 1240 may be configured as or otherwise support a means for receiving, from the second wireless device, the message in the shared radio frequency spectrum band using the directional beam based on the determination. In some examples, the beam measurement component 1245 may be configured as or otherwise support a means for generating a set of EIRP measurement values for the set of one or more directional beams based on a spherical measurement test for each directional beam of the set of one or more directional beams, where the set of EIRP measurement values includes at least a subset of EIRP measurement values for the directional beam. In some examples, the metric identification component 1250 may be configured as or otherwise support a means for identifying the one or more metrics based on the subset of EIRP measurement values for the directional beam. In some examples, the threshold component 1255 may be configured as or otherwise support a means for comparing the one or more metrics with the one or more EIRP thresholds based on the number of spatial streams associated with the directional beam, where determining that the directional beam is associated with the communications in the shared radio frequency spectrum band that excludes the defined set of channel access procedures is based on the comparison.

FIG. 13 shows a diagram of a system 1300 including a device 1305 that supports directional channel access using a narrow beam with multiple spatial streams in accordance with aspects of the present disclosure. The device 1305 may be an example of or include the components of a device 1005, a device 1105, or a base station 105 as described herein. The device 1305 may communicate wirelessly with one or more base stations 105, UEs 115, or any combination thereof. The device 1305 may include components for bi-directional voice and data communications including components for transmitting and receiving communications, such as a communications manager 1320, a network communications manager 1310, a transceiver 1315, an antenna 1325, a memory 1330, code 1335, a processor 1340, and an inter-station communications manager 1345. These components may be in electronic communication or otherwise coupled (e.g., operatively, communicatively, functionally, electronically, electrically) via one or more buses (e.g., a bus 1350). The network communications manager 1310 may manage communications with a core network 130 (e.g., via one or more wired backhaul links). For example, the network communications manager 1310 may manage the transfer of data communications for client devices, such as one or more UEs 115. In some cases, the device 1305 may include a single antenna 1325. However, in some other cases, the device 1305 may have more than one antenna 1325, which may be capable of concurrently transmitting or receiving multiple wireless transmissions. The transceiver 1315 may communicate bi-directionally, via the one or more antennas 1325, wired, or wireless links as described herein.
For example, the transceiver 1315 may represent a wireless transceiver and may communicate bi-directionally with another wireless transceiver. The transceiver 1315 may also include a modem to modulate the packets, to provide the modulated packets to one or more antennas 1325 for transmission, and to demodulate packets received from the one or more antennas 1325. The transceiver 1315, or the transceiver 1315 and one or more antennas 1325, may be an example of a transmitter 1015, a transmitter 1115, a receiver 1010, a receiver 1110, or any combination thereof or component thereof, as described herein. The memory 1330 may include RAM and ROM. The memory 1330 may store computer-readable, computer-executable code 1335 including instructions that, when executed by the processor 1340, cause the device 1305 to perform various functions described herein. The code 1335 may be stored in a non-transitory computer-readable medium such as system memory or another type of memory. In some cases, the code 1335 may not be directly executable by the processor 1340 but may cause a computer (e.g., when compiled and executed) to perform functions described herein. In some cases, the memory 1330 may contain, among other things, a BIOS which may control basic hardware or software operation such as the interaction with peripheral components or devices. The processor 1340 may include an intelligent hardware device (e.g., a general-purpose processor, a DSP, a CPU, a microcontroller, an ASIC, an FPGA, a programmable logic device, a discrete gate or transistor logic component, a discrete hardware component, or any combination thereof). In some cases, the processor 1340 may be configured to operate a memory array using a memory controller. In some other cases, a memory controller may be integrated into the processor 1340. The processor 1340 may be configured to execute computer-readable instructions stored in a memory (e.g., the memory 1330) to cause the device 1305 to perform various functions (e.g., functions or tasks supporting directional channel access using a narrow beam with multiple spatial streams). For example, the device 1305 or a component of the device 1305 may include a processor 1340 and memory 1330 coupled to the processor 1340, the processor 1340 and memory 1330 configured to perform various functions described herein. The inter-station communications manager 1345 may manage communications with other base stations 105, and may include a controller or scheduler for controlling communications with UEs 115 in cooperation with other base stations 105. For example, the inter-station communications manager 1345 may coordinate scheduling for transmissions to UEs 115 for various interference mitigation techniques such as beamforming or joint transmission. In some examples, the inter-station communications manager 1345 may provide an X2 interface within an LTE/LTE-A wireless communications network technology to provide communication between base stations 105. The communications manager 1320 may support wireless communication at a first wireless device in accordance with examples as disclosed herein. For example, the communications manager 1320 may be configured as or otherwise support a means for selecting a directional beam from a set of one or more directional beams for transmitting a message to a second wireless device in a shared radio frequency spectrum band.
The communications manager 1320 may be configured as or otherwise support a means for determining that the directional beam is associated with communications in the shared radio frequency spectrum band that exclude a defined set of channel access procedures, where the determination is based on a number of spatial streams associated with the directional beam and one or more metrics for the directional beam satisfying one or more EIRP thresholds. The communications manager 1320 may be configured as or otherwise support a means for transmitting, to the second wireless device, the message in the shared radio frequency spectrum band using the directional beam based on the determination. Additionally or alternatively, the communications manager 1320 may support wireless communication at a first wireless device in accordance with examples as disclosed herein. For example, the communications manager 1320 may be configured as or otherwise support a means for selecting a directional beam from a set of one or more directional beams for receiving a message from a second wireless device in a shared radio frequency spectrum band. The communications manager 1320 may be configured as or otherwise support a means for determining that the directional beam is associated with communications in the shared radio frequency spectrum band that exclude a defined set of channel access procedures, where the determination is based on a number of spatial streams associated with the directional beam and one or more metrics for the directional beam satisfying one or more EIRP thresholds. The communications manager 1320 may be configured as or otherwise support a means for receiving, from the second wireless device, the message in the shared radio frequency spectrum band using the directional beam based on the determination. By including or configuring the communications manager 1320 in accordance with examples as described herein, the device 1305 may support techniques for improved communication reliability, reduced latency, more efficient utilization of communication resources, improved coordination between devices, and enhanced communications efficiency while avoiding unnecessary interference to other devices when performing directional communication over a channel without performing channel access procedures. In some examples, the communications manager 1320 may be configured to perform various operations (e.g., receiving, monitoring, transmitting) using or otherwise in cooperation with the transceiver 1315, the one or more antennas 1325, or any combination thereof. Although the communications manager 1320 is illustrated as a separate component, in some examples, one or more functions described with reference to the communications manager 1320 may be supported by or performed by the processor 1340, the memory 1330, the code 1335, or any combination thereof. For example, the code 1335 may include instructions executable by the processor 1340 to cause the device 1305 to perform various aspects of directional channel access using a narrow beam with multiple spatial streams as described herein, or the processor 1340 and the memory 1330 may be otherwise configured to perform or support such operations.

FIG. 14 shows a flowchart illustrating a method 1400 that supports directional channel access using a narrow beam with multiple spatial streams in accordance with aspects of the present disclosure. The operations of the method 1400 may be implemented by a UE or a base station or its components as described herein.
For example, the operations of the method 1400 may be performed by a UE 115 as described with reference to FIGS. 1 through 9 or a base station 105 as described with reference to FIGS. 1 through 5 and 10 through 13. In some examples, a UE or a base station may execute a set of instructions to control the functional elements of the UE or the base station to perform the described functions. Additionally or alternatively, the UE or the base station may perform aspects of the described functions using special-purpose hardware.

At 1405, the method may include selecting a directional beam from a set of one or more directional beams for transmitting a message to a second wireless device in a shared radio frequency spectrum band. The operations of 1405 may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of 1405 may be performed by a beam selection component 825 or a beam selection component 1225 as described with reference to FIGS. 8 and 12.

At 1410, the method may include determining that the directional beam is associated with communications in the shared radio frequency spectrum band that exclude a defined set of channel access procedures, where the determination is based on a number of spatial streams associated with the directional beam and one or more metrics for the directional beam satisfying one or more EIRP thresholds. The operations of 1410 may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of 1410 may be performed by a beam determination component 830 or a beam determination component 1230 as described with reference to FIGS. 8 and 12.

At 1415, the method may include transmitting, to the second wireless device, the message in the shared radio frequency spectrum band using the directional beam based on the determination. The operations of 1415 may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of 1415 may be performed by a beam transmitter 835 or a beam transmitter 1235 as described with reference to FIGS. 8 and 12.

FIG. 15 shows a flowchart illustrating a method 1500 that supports directional channel access using a narrow beam with multiple spatial streams in accordance with aspects of the present disclosure. The operations of the method 1500 may be implemented by a UE or a base station or its components as described herein. For example, the operations of the method 1500 may be performed by a UE 115 as described with reference to FIGS. 1 through 9 or a base station 105 as described with reference to FIGS. 1 through 5 and 10 through 13. In some examples, a UE or a base station may execute a set of instructions to control the functional elements of the UE or the base station to perform the described functions. Additionally or alternatively, the UE or the base station may perform aspects of the described functions using special-purpose hardware.

At 1505, the method may include selecting a directional beam from a set of one or more directional beams for transmitting a message to a second wireless device in a shared radio frequency spectrum band. The operations of 1505 may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of 1505 may be performed by a beam selection component 825 or a beam selection component 1225 as described with reference to FIGS. 8 and 12.
At 1510, the method may include generating a set of EIRP measurement values for the set of one or more directional beams based on a spherical measurement test for each directional beam of the set of one or more directional beams, where the set of EIRP measurement values includes at least a subset of EIRP measurement values for the directional beam. The operations of 1510 may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of 1510 may be performed by a beam measurement component 845 or a beam measurement component 1245 as described with reference to FIGS. 8 and 12.

At 1515, the method may include identifying the one or more metrics based on the subset of EIRP measurement values for the directional beam. The operations of 1515 may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of 1515 may be performed by a metric identification component 850 or a metric identification component 1250 as described with reference to FIGS. 8 and 12.

At 1520, the method may include comparing the one or more metrics with the one or more EIRP thresholds based on the number of spatial streams associated with the directional beam, where determining that the directional beam is associated with the communications in the shared radio frequency spectrum band that excludes the defined set of channel access procedures is based on the comparison. The operations of 1520 may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of 1520 may be performed by a threshold component 855 or a threshold component 1255 as described with reference to FIGS. 8 and 12.

At 1525, the method may include determining that the directional beam is associated with communications in the shared radio frequency spectrum band that exclude a defined set of channel access procedures, where the determination is based on a number of spatial streams associated with the directional beam and one or more metrics for the directional beam satisfying one or more EIRP thresholds. The operations of 1525 may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of 1525 may be performed by a beam determination component 830 or a beam determination component 1230 as described with reference to FIGS. 8 and 12.

At 1530, the method may include transmitting, to the second wireless device, the message in the shared radio frequency spectrum band using the directional beam based on the determination. The operations of 1530 may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of 1530 may be performed by a beam transmitter 835 or a beam transmitter 1235 as described with reference to FIGS. 8 and 12.

FIG. 16 shows a flowchart illustrating a method 1600 that supports directional channel access using a narrow beam with multiple spatial streams in accordance with aspects of the present disclosure. The operations of the method 1600 may be implemented by a base station or its components as described herein. For example, the operations of the method 1600 may be performed by a base station 105 as described with reference to FIGS. 1 through 5 and 10 through 13. In some examples, a base station may execute a set of instructions to control the functional elements of the base station to perform the described functions. Additionally or alternatively, the base station may perform aspects of the described functions using special-purpose hardware.
At 1605, the method may include selecting a directional beam from a set of one or more directional beams for receiving a message from a second wireless device in a shared radio frequency spectrum band. The operations of 1605 may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of 1605 may be performed by a beam selection component 1225 as described with reference to FIG. 12.

At 1610, the method may include determining that the directional beam is associated with communications in the shared radio frequency spectrum band that exclude a defined set of channel access procedures, where the determination is based on a number of spatial streams associated with the directional beam and one or more metrics for the directional beam satisfying one or more EIRP thresholds. The operations of 1610 may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of 1610 may be performed by a beam determination component 1230 as described with reference to FIG. 12.

At 1615, the method may include receiving, from the second wireless device, the message in the shared radio frequency spectrum band using the directional beam based on the determination. The operations of 1615 may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of 1615 may be performed by a beam receiver 1240 as described with reference to FIG. 12.

The following provides an overview of aspects of the present disclosure:

Aspect 1: A method for wireless communication at a first wireless device, comprising: selecting a directional beam from a set of one or more directional beams for transmitting a message to a second wireless device in a shared radio frequency spectrum band; determining that the directional beam is associated with communications in the shared radio frequency spectrum band that exclude a defined set of channel access procedures, wherein the determination is based at least in part on a number of spatial streams associated with the directional beam and one or more metrics for the directional beam satisfying one or more effective isotropic radiated power thresholds; and transmitting, to the second wireless device, the message in the shared radio frequency spectrum band using the directional beam based at least in part on the determination.

Aspect 2: The method of aspect 1, further comprising: generating a set of effective isotropic radiated power measurement values for the set of one or more directional beams based at least in part on a spherical measurement test for each directional beam of the set of one or more directional beams, wherein the set of effective isotropic radiated power measurement values comprises at least a subset of effective isotropic radiated power measurement values for the directional beam; identifying the one or more metrics based at least in part on the subset of effective isotropic radiated power measurement values for the directional beam; and comparing the one or more metrics with the one or more effective isotropic radiated power thresholds based at least in part on the number of spatial streams associated with the directional beam, wherein determining that the directional beam is associated with the communications in the shared radio frequency spectrum band that excludes the defined set of channel access procedures is based at least in part on the comparison.
Aspect 3: The method of aspect 2, wherein identifying the one or more metrics for the directional beam comprises: identifying a first metric of the one or more metrics based at least in part on a difference between a first effective isotropic radiated power measurement value and a second effective isotropic radiated power measurement value from the subset of effective isotropic radiated power measurement values, wherein the first effective isotropic radiated power measurement value corresponds to a first percentile of effective isotropic radiated power measurements for the directional beam and the second effective isotropic radiated power measurement value corresponds to a second percentile of effective isotropic radiated power measurements for the directional beam. Aspect 4: The method of aspect 3, wherein determining that the directional beam is associated with the communications in the shared radio frequency spectrum band that excludes the defined set of channel access procedures comprises: determining that the directional beam is associated with the communications in the shared radio frequency spectrum band that excludes the defined set of channel access procedures based at least in part on the first metric being greater than or equal to at least one effective isotropic radiated power threshold of the one or more effective isotropic radiated power thresholds based at least in part on the number of spatial streams. Aspect 5: The method of aspect 4, wherein the number of spatial streams does not satisfy a threshold number of spatial streams, and the first wireless device operates in accordance with a device-based power limitation or an antenna port-based power limitation, or any combination thereof. Aspect 6: The method of any of aspects 3 through 5, wherein the first metric is based at least in part on a reference precoder associated with generating the directional beam, the reference precoder being from a set of one or more predefined reference precoders, wherein determining that the directional beam is associated with the communications in the shared radio frequency spectrum band that excludes the defined set of channel access procedures comprises: determining that the directional beam is associated with the communications in the shared radio frequency spectrum band that excludes the defined set of channel access procedures based at least in part on the first metric being greater than a first effective isotropic radiated power threshold of the one or more effective isotropic radiated power thresholds based at least in part on the reference precoder. Aspect 7: The method of any of aspects 3 through 6, wherein the first percentile is greater than the second percentile. Aspect 8: The method of any of aspects 2 through 7, wherein identifying the one or more metrics for the directional beam comprises: identifying a second metric of the one or more metrics based at least in part on a third effective isotropic radiated power measurement value that corresponds to a predefined percentile of effective isotropic radiated power measurements for the directional beam. 
Aspect 9: The method of aspect 8, wherein determining that the directional beam is associated with the communications in the shared radio frequency spectrum band that excludes the defined set of channel access procedures comprises: determining that the directional beam is associated with the communications in the shared radio frequency spectrum band that excludes the defined set of channel access procedures based at least in part on the second metric being less than or equal to at least one effective isotropic radiated power threshold of the one or more effective isotropic radiated power thresholds based at least in part on the number of spatial streams. Aspect 10: The method of aspect 9, wherein the number of spatial streams does not satisfy a threshold number of spatial streams, and the first wireless device operates in accordance with a device-based power limitation or an antenna port-based power limitation, or any combination thereof. Aspect 11: The method of any of aspects 8 through 10, wherein the second metric is based at least in part on a reference precoder associated with generating the directional beam, the reference precoder being from a set of one or more predefined reference precoders, wherein determining that the directional beam is associated with the communications in the shared radio frequency spectrum band that excludes the defined set of channel access procedures comprises: determining that the directional beam is associated with the communications in the shared radio frequency spectrum band that excludes the defined set of channel access procedures based at least in part on the second metric being less than a second effective isotropic radiated power threshold of the one or more effective isotropic radiated power thresholds based at least in part on the reference precoder. Aspect 12: The method of any of aspects 8 through 11, wherein the second metric is identified based at least in part on a transmission power value. Aspect 13: The method of any of aspects 2 through 12, wherein determining that the directional beam is associated with the communications in the shared radio frequency spectrum band that excludes the defined set of channel access procedures comprises: identifying a threshold number of spatial streams associated with the directional beam; comparing the number of spatial streams with the threshold number of spatial streams; and determining that the directional beam is associated with the communications in the shared radio frequency spectrum band that excludes the defined set of channel access procedures based at least in part on the number of spatial streams satisfying the threshold number of spatial streams. 
Aspect 14: The method of any of aspects 2 through 13, wherein performing the spherical measurement test comprises: forming each directional beam of the set of one or more directional beams, wherein each directional beam is formed using a single spatial stream; measuring a plurality of effective isotropic radiated power measurement values for each directional beam of the set of one or more directional beams, the plurality of effective isotropic radiated power measurement values comprising respective effective isotropic radiated power measurement values for each direction of a plurality of directions; and recording the set of effective isotropic radiated power measurement values for the set of one or more directional beams comprising a subset of effective isotropic radiated power measurement values for each directional beam based at least in part on the plurality of effective isotropic radiated power measurement values. Aspect 15: The method of aspect 14, wherein each directional beam of the set of one or more directional beams is formed based at least in part on a predetermined beamforming codebook. Aspect 16: The method of any of aspects 14 through 15, wherein each direction of the plurality of directions comprises a non-uniform azimuth and a non-uniform elevation, and each effective isotropic radiated power measurement value of the set of effective isotropic radiated power measurement values is associated with an azimuth value and an elevation value. Aspect 17: The method of any of aspects 14 through 16, wherein each direction of the plurality of directions comprises a uniform azimuth and a uniform elevation, and each effective isotropic radiated power measurement value of the set of effective isotropic radiated power measurement values is associated with an azimuth value and an elevation value. Aspect 18: The method of any of aspects 2 through 17, wherein performing the spherical measurement test comprises: forming each directional beam of the set of one or more directional beams based at least in part on a reference precoder from a set of reference precoders, wherein each directional beam is formed using multiple spatial streams; measuring a plurality of effective isotropic radiated power measurement values for each directional beam of the set of one or more directional beams and for each reference precoder from the set of reference precoders, the plurality of effective isotropic radiated power measurement values comprising respective effective isotropic radiated power measurement values for each direction of a plurality of directions; and recording the set of effective isotropic radiated power measurement values for the set of one or more directional beams comprising a subset of effective isotropic radiated power measurement values for each directional beam based at least in part on the plurality of effective isotropic radiated power measurement values. Aspect 19: The method of aspect 18, wherein each directional beam of the set of one or more directional beams is formed based at least in part on a predetermined analog beamforming codebook and a predefined digital precoding codebook. Aspect 20: The method of any of aspects 18 through 19, wherein each direction of the plurality of directions comprises a non-uniform azimuth and a non-uniform elevation, and each effective isotropic radiated power measurement value of the set of effective isotropic radiated power measurement values is associated with an azimuth value and an elevation value.
Aspect 21: The method of any of aspects 18 through 19, wherein each direction of the plurality of directions comprises a uniform azimuth and a uniform elevation, and each effective isotropic radiated power measurement value of the set of effective isotropic radiated power measurement values is associated with an azimuth value and an elevation value. Aspect 22: The method of any of aspects 18 through 21, wherein the set of reference precoders is based at least in part on a number of columns selected from an orthonormal matrix, the number of columns being based at least in part on a number of antenna ports and the multiple spatial streams. Aspect 23: The method of any of aspects 1 through 22, further comprising: determining the one or more effective isotropic radiated power thresholds based at least in part on the number of spatial streams and one or more parameters associated with a power limitation at the first wireless device. Aspect 24: The method of aspect 23, wherein the power limitation comprises a device-based power limitation, and a first effective isotropic radiated power threshold associated with a first number of spatial streams is less than a second effective isotropic radiated power threshold associated with a second number of spatial streams, the first number of spatial streams being greater than the second number of spatial streams. Aspect 25: The method of any of aspects 23 through 24, wherein the one or more parameters associated with the power limitation are associated with a device-based power threshold or an antenna port-based power threshold, and a first effective isotropic radiated power threshold associated with a first number of spatial streams is greater than a second effective isotropic radiated power threshold associated with a second number of spatial streams, the first number of spatial streams being greater than the second number of spatial streams. Aspect 26: The method of any of aspects 1 through 25, further comprising: refraining from performing the channel access procedures prior to transmitting the message based at least in part on determining that the directional beam is associated with the communications in the shared radio frequency spectrum band that excludes the defined set of channel access procedures, the channel access procedures comprising one or more listen-before-talk procedures. Aspect 27: A method for wireless communication at a first wireless device, comprising: selecting a directional beam from a set of one or more directional beams for receiving a message from a second wireless device in a shared radio frequency spectrum band; determining that the directional beam is associated with communications in the shared radio frequency spectrum band that exclude a defined set of channel access procedures, wherein the determination is based at least in part on a number of spatial streams associated with the directional beam and one or more metrics for the directional beam satisfying one or more effective isotropic radiated power thresholds; and receiving, from the second wireless device, the message in the shared radio frequency spectrum band using the directional beam based at least in part on the determination. 
Aspect 28: The method of aspect 27, further comprising: generating a set of effective isotropic radiated power measurement values for the set of one or more directional beams based at least in part on a spherical measurement test for each directional beam of the set of one or more directional beams, wherein the set of effective isotropic radiated power measurement values comprises at least a subset of effective isotropic radiated power measurement values for the directional beam; identifying the one or more metrics based at least in part on the subset of effective isotropic radiated power measurement values for the directional beam; and comparing the one or more metrics with the one or more effective isotropic radiated power thresholds based at least in part on the number of spatial streams associated with the directional beam, wherein determining that the directional beam is associated with the communications in the shared radio frequency spectrum band that excludes the defined set of channel access procedures is based at least in part on the comparison. Aspect 29: The method of aspect 28, wherein identifying the one or more metrics for the directional beam comprises: identifying a first metric of the one or more metrics based at least in part on a difference between a first effective isotropic radiated power measurement value and a second effective isotropic radiated power measurement value from the subset of effective isotropic radiated power measurement values, wherein the first effective isotropic radiated power measurement value corresponds to a first percentile of effective isotropic radiated power measurements for the directional beam and the second effective isotropic radiated power measurement value corresponds to a second percentile of effective isotropic radiated power measurements for the directional beam. Aspect 30: The method of aspect 29, wherein determining that the directional beam is associated with the communications in the shared radio frequency spectrum band that excludes the defined set of channel access procedures comprises: determining that the directional beam is associated with the communications in the shared radio frequency spectrum band that excludes the defined set of channel access procedures based at least in part on the first metric being greater than or equal to at least one effective isotropic radiated power threshold of the one or more effective isotropic radiated power thresholds based at least in part on the number of spatial streams. Aspect 31: The method of aspect 30, wherein the number of spatial streams does not satisfy a threshold number of spatial streams, and the first wireless device operates in accordance with a device-based power limitation or an antenna port-based power limitation, or any combination thereof. 
Aspect 32: The method of any of aspects 29 through 31, wherein the first metric is based at least in part on a reference precoder associated with generating the directional beam, the reference precoder being from a set of one or more predefined reference precoders, wherein determining that the directional beam is associated with the communications in the shared radio frequency spectrum band that excludes the defined set of channel access procedures comprises: determining that the directional beam is associated with the communications in the shared radio frequency spectrum band that excludes the defined set of channel access procedures based at least in part on the first metric being greater than a first effective isotropic radiated power threshold of the one or more effective isotropic radiated power thresholds based at least in part on the reference precoder. Aspect 33: The method of any of aspects 29 through 32, wherein the first percentile is greater than the second percentile. Aspect 34: The method of any of aspects 28 through 33, wherein identifying the one or more metrics for the directional beam comprises: identifying a second metric of the one or more metrics based at least in part on a third effective isotropic radiated power measurement value that corresponds to a predefined percentile of effective isotropic radiated power measurements for the directional beam. Aspect 35: The method of aspect 34, wherein determining that the directional beam is associated with the communications in the shared radio frequency spectrum band that excludes the defined set of channel access procedures comprises: determining that the directional beam is associated with the communications in the shared radio frequency spectrum band that excludes the defined set of channel access procedures based at least in part on the second metric being less than or equal to at least one effective isotropic radiated power threshold of the one or more effective isotropic radiated power thresholds based at least in part on the number of spatial streams. Aspect 36: The method of aspect 35, wherein the number of spatial streams does not satisfy a threshold number of spatial streams, and the first wireless device operates in accordance with a device-based power limitation or an antenna port-based power limitation, or any combination thereof. Aspect 37: The method of any of aspects 34 through 36, wherein the second metric is based at least in part on a reference precoder associated with generating the directional beam, the reference precoder being from a set of one or more predefined reference precoders, wherein determining that the directional beam is associated with the communications in the shared radio frequency spectrum band that excludes the defined set of channel access procedures comprises: determining that the directional beam is associated with the communications in the shared radio frequency spectrum band that excludes the defined set of channel access procedures based at least in part on the second metric being less than a second effective isotropic radiated power threshold of the one or more effective isotropic radiated power thresholds based at least in part on the reference precoder. Aspect 38: The method of any of aspects 34 through 37, wherein the second metric is identified based at least in part on a transmission power value. 
Aspect 39: The method of any of aspects 28 through 38, wherein determining that the directional beam is associated with the communications in the shared radio frequency spectrum band that excludes the defined set of channel access procedures comprises: identifying a threshold number of spatial streams associated with the directional beam; comparing the number of spatial streams with the threshold number of spatial streams; and determining that the directional beam is associated with the communications in the shared radio frequency spectrum band that excludes the defined set of channel access procedures based at least in part on the number of spatial streams satisfying the threshold number of spatial streams. Aspect 40: The method of any of aspects 28 through 39, wherein performing the spherical measurement test comprises: forming each directional beam of the set of one or more directional beams, wherein each directional beam is formed using a single spatial stream; measuring a plurality of effective isotropic radiated power measurement values for each directional beam of the set of one or more directional beams, the plurality of effective isotropic radiated power measurement values comprising respective effective isotropic radiated power measurement values for each direction of a plurality of directions; and recording the set of effective isotropic radiated power measurement values for the set of one or more directional beams comprising a subset of effective isotropic radiated power measurement values for each directional beam based at least in part on the plurality of effective isotropic radiated power measurement values. Aspect 41: The method of aspect 40, wherein each directional beam of the set of one or more directional beams is formed based at least in part on a predetermined beamforming codebook. Aspect 42: The method of any of aspects 40 through 41, wherein each direction of the plurality of directions comprises a non-uniform azimuth and a non-uniform elevation, and each effective isotropic radiated power measurement value of the set of effective isotropic radiated power measurement values is associated with an azimuth value and an elevation value. Aspect 43: The method of any of aspects 40 through 42, wherein each direction of the plurality of directions comprises a uniform azimuth and a uniform elevation, and each effective isotropic radiated power measurement value of the set of effective isotropic radiated power measurement values is associated with an azimuth value and an elevation value.
Aspect 44: The method of any of aspects 28 through 43, wherein performing the spherical measurement test comprises: forming each directional beam of the set of one or more directional beams based at least in part on a reference precoder from a set of reference precoders, wherein each directional beam is formed using multiple spatial streams; measuring a plurality of effective isotropic radiated power measurement values for each directional beam of the set of one or more directional beams and for each reference precoder from the set of reference precoders, the plurality of effective isotropic radiated power measurement values comprising respective effective isotropic radiated power measurement values for each direction of a plurality of directions; and recording the set of effective isotropic radiated power measurement values for the set of one or more directional beams comprising a subset of effective isotropic radiated power measurement values for each directional beam based at least in part on the plurality of effective isotropic radiated power measurement values. Aspect 45: The method of aspect 44, wherein each directional beam of the set of one or more directional beams is formed based at least in part on a predetermined analog beamforming codebook and a predefined digital precoding codebook. Aspect 46: The method of any of aspects 44 through 45, wherein each direction of the plurality of directions comprises a non-uniform azimuth and a non-uniform elevation, and each effective isotropic radiated power measurement value of the set of effective isotropic radiated power measurement values is associated with an azimuth value and an elevation value. Aspect 47: The method of any of aspects 44 through 45, wherein each direction of the plurality of directions comprises a uniform azimuth and a uniform elevation, and each effective isotropic radiated power measurement value of the set of effective isotropic radiated power measurement values is associated with an azimuth value and an elevation value. Aspect 48: The method of any of aspects 44 through 47, wherein the set of reference precoders is based at least in part on a number of columns selected from an orthonormal matrix, the number of columns being based at least in part on a number of antenna ports and the multiple spatial streams. Aspect 49: The method of any of aspects 27 through 48, further comprising: determining the one or more effective isotropic radiated power thresholds based at least in part on the number of spatial streams and one or more parameters associated with a power limitation at the first wireless device. Aspect 50: The method of aspect 49, wherein the power limitation comprises a device-based power limitation, and a first effective isotropic radiated power threshold associated with a first number of spatial streams is less than a second effective isotropic radiated power threshold associated with a second number of spatial streams, the first number of spatial streams being greater than the second number of spatial streams.
Aspect 51: The method of any of aspects 49 through 50, wherein the one or more parameters associated with the power limitation are associated with a device-based power threshold or an antenna port-based power threshold, and a first effective isotropic radiated power threshold associated with a first number of spatial streams is greater than a second effective isotropic radiated power threshold associated with a second number of spatial streams, the first number of spatial streams being greater than the second number of spatial streams. Aspect 52: The method of any of aspects 27 through 51, further comprising: refraining from performing the channel access procedures prior to receiving the message based at least in part on determining that the directional beam is associated with the communications in the shared radio frequency spectrum band that excludes the defined set of channel access procedures, the channel access procedures comprising one or more listen-before-talk procedures. Aspect 53: An apparatus for wireless communication at a first wireless device, comprising a processor; memory coupled with the processor; and instructions stored in the memory and executable by the processor to cause the apparatus to perform a method of any of aspects 1 through 26. Aspect 54: An apparatus for wireless communication at a first wireless device, comprising at least one means for performing a method of any of aspects 1 through 26. Aspect 55: A non-transitory computer-readable medium storing code for wireless communication at a first wireless device, the code comprising instructions executable by a processor to perform a method of any of aspects 1 through 26. Aspect 56: An apparatus for wireless communication at a first wireless device, comprising a processor; memory coupled with the processor; and instructions stored in the memory and executable by the processor to cause the apparatus to perform a method of any of aspects 27 through 52. Aspect 57: An apparatus for wireless communication at a first wireless device, comprising at least one means for performing a method of any of aspects 27 through 52. Aspect 58: A non-transitory computer-readable medium storing code for wireless communication at a first wireless device, the code comprising instructions executable by a processor to perform a method of any of aspects 27 through 52. It should be noted that the methods described herein describe possible implementations, and that the operations and the steps may be rearranged or otherwise modified and that other implementations are possible. Further, aspects from two or more of the methods may be combined. Although aspects of an LTE, LTE-A, LTE-A Pro, or NR system may be described for purposes of example, and LTE, LTE-A, LTE-A Pro, or NR terminology may be used in much of the description, the techniques described herein are applicable beyond LTE, LTE-A, LTE-A Pro, or NR networks. For example, the described techniques may be applicable to various other wireless communications systems such as Ultra Mobile Broadband (UMB), Institute of Electrical and Electronics Engineers (IEEE) 802.11 (Wi-Fi), IEEE 802.16 (WiMAX), IEEE 802.20, Flash-OFDM, as well as other systems and radio technologies not explicitly mentioned herein. Information and signals described herein may be represented using any of a variety of different technologies and techniques. 
For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof. The various illustrative blocks and components described in connection with the disclosure herein may be implemented or performed with a general-purpose processor, a DSP, an ASIC, a CPU, an FPGA or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but in the alternative, the processor may be any processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices (e.g., a combination of a DSP and a microprocessor, multiple microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration). The functions described herein may be implemented in hardware, software executed by a processor, firmware, or any combination thereof. If implemented in software executed by a processor, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Other examples and implementations are within the scope of the disclosure and appended claims. For example, due to the nature of software, functions described herein may be implemented using software executed by a processor, hardware, firmware, hardwiring, or combinations of any of these. Features implementing functions may also be physically located at various positions, including being distributed such that portions of functions are implemented at different physical locations. Computer-readable media includes both non-transitory computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A non-transitory storage medium may be any available medium that may be accessed by a general-purpose or special-purpose computer. By way of example, and not limitation, non-transitory computer-readable media may include RAM, ROM, electrically erasable programmable ROM (EEPROM), flash memory, compact disk (CD) ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other non-transitory medium that may be used to carry or store desired program code means in the form of instructions or data structures and that may be accessed by a general-purpose or special-purpose computer, or a general-purpose or special-purpose processor. Also, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of computer-readable medium. Disk and disc, as used herein, include CD, laser disc, optical disc, digital versatile disc (DVD), floppy disk and Blu-ray disc where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above are also included within the scope of computer-readable media. 
As used herein, including in the claims, “or” as used in a list of items (e.g., a list of items prefaced by a phrase such as “at least one of” or “one or more of”) indicates an inclusive list such that, for example, a list of at least one of A, B, or C means A or B or C or AB or AC or BC or ABC (i.e., A and B and C). Also, as used herein, the phrase “based on” shall not be construed as a reference to a closed set of conditions. For example, an example step that is described as “based on condition A” may be based on both a condition A and a condition B without departing from the scope of the present disclosure. In other words, as used herein, the phrase “based on” shall be construed in the same manner as the phrase “based at least in part on.” The term “determine” or “determining” encompasses a wide variety of actions and, therefore, “determining” can include calculating, computing, processing, deriving, investigating, looking up (such as via looking up in a table, a database or another data structure), ascertaining and the like. Also, “determining” can include receiving (such as receiving information), accessing (such as accessing data in a memory) and the like. Also, “determining” can include resolving, selecting, choosing, establishing and other such similar actions. In the appended figures, similar components or features may have the same reference label. Further, various components of the same type may be distinguished by following the reference label by a dash and a second label that distinguishes among the similar components. If just the first reference label is used in the specification, the description is applicable to any one of the similar components having the same first reference label irrespective of the second reference label, or other subsequent reference label. The description set forth herein, in connection with the appended drawings, describes example configurations and does not represent all the examples that may be implemented or that are within the scope of the claims. The term “example” used herein means “serving as an example, instance, or illustration,” and not “preferred” or “advantageous over other examples.” The detailed description includes specific details for the purpose of providing an understanding of the described techniques. These techniques, however, may be practiced without these specific details. In some instances, known structures and devices are shown in block diagram form in order to avoid obscuring the concepts of the described examples. The description herein is provided to enable a person having ordinary skill in the art to make or use the disclosure. Various modifications to the disclosure will be apparent to a person having ordinary skill in the art, and the generic principles defined herein may be applied to other variations without departing from the scope of the disclosure. Thus, the disclosure is not limited to the examples and designs described herein but is to be accorded the broadest scope consistent with the principles and novel features disclosed herein.
DETAILED DESCRIPTION
A method and apparatus of a device that measures a reference signal and manages Rx beams for communication between a user equipment device and a base station is described. In the following description, numerous specific details are set forth to provide a thorough explanation of embodiments of the present invention. It will be apparent, however, to one skilled in the art, that embodiments of the present invention may be practiced without these specific details. In other instances, well-known components, structures, and techniques have not been shown in detail in order not to obscure the understanding of this description. Reference in the specification to “some embodiments” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the invention. The appearances of the phrase “in some embodiments” in various places in the specification do not necessarily all refer to the same embodiment. In the following description and claims, the terms “coupled” and “connected,” along with their derivatives, may be used. It should be understood that these terms are not intended as synonyms for each other. “Coupled” is used to indicate that two or more elements, which may or may not be in direct physical or electrical contact with each other, co-operate or interact with each other. “Connected” is used to indicate the establishment of communication between two or more elements that are coupled with each other. The processes depicted in the figures that follow are performed by processing logic that comprises hardware (e.g., circuitry, dedicated logic, etc.), software (such as software run on a general-purpose computer system or a dedicated machine), or a combination of both. Although the processes are described below in terms of some sequential operations, it should be appreciated that some of the operations described may be performed in a different order. Moreover, some operations may be performed in parallel rather than sequentially. The terms “server,” “client,” and “device” are intended to refer generally to data processing systems rather than specifically to a particular form factor for the server, client, and/or device. A method and apparatus of a device that measures a reference signal used for the downlink between a user equipment device and a base station is described. In some embodiments, the device is a user equipment device that has a wireless link with a base station. In some embodiments, the wireless link is a fifth generation (5G) link. The device further groups and selects component carriers (CCs) from the wireless link and determines a virtual CC from the group of selected CCs. The device additionally can perform a physical downlink resource mapping based on aggregate resource matching patterns of groups of CCs. FIG.1illustrates a simplified example wireless communication system, according to some embodiments. It is noted that the system ofFIG.1is merely one example of a possible system, and that features of this disclosure may be implemented in any of various systems, as desired. As shown, the example wireless communication system includes a base station102A which communicates over a transmission medium with one or more user devices106A,106B, etc., through106N. Each of the user devices may be referred to herein as a “user equipment” (UE). Thus, the user devices106are referred to as UEs or UE devices.
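As a rough illustration of the CC handling summarized above (grouping CCs, deriving a virtual CC, and mapping resources against the group's aggregate matching patterns), the sketch below models a group of CCs and collapses it to a single virtual CC whose matching pattern is the union of the group's patterns. The class layout, the union semantics, and the choice of the lowest CC index as the virtual CC identifier are all assumptions made for illustration, not definitions taken from this disclosure.

```python
from dataclasses import dataclass

@dataclass
class ComponentCarrier:
    cc_id: int
    band: str
    rate_match_pattern: set  # resource indices excluded from PDSCH mapping

def virtual_cc(ccs):
    # Hypothetical reduction of a non-empty, same-band CC group to one
    # "virtual CC": its pattern is the aggregate (union) of the group's
    # patterns, so the resource mapping can be done once for the group.
    aggregate = set().union(*(cc.rate_match_pattern for cc in ccs))
    return ComponentCarrier(cc_id=min(cc.cc_id for cc in ccs),
                            band=ccs[0].band,
                            rate_match_pattern=aggregate)
```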
The base station (BS)102A may be a base transceiver station (BTS) or cell site (a “cellular base station”) and may include hardware that enables wireless communication with the UEs106A through106N. The communication area (or coverage area) of the base station may be referred to as a “cell.” The base station102A and the UEs106may be configured to communicate over the transmission medium using any of various radio access technologies (RATs), also referred to as wireless communication technologies, or telecommunication standards, such as GSM, UMTS (associated with, for example, WCDMA or TD-SCDMA air interfaces), LTE, LTE-Advanced (LTE-A), 5G new radio (5G NR), HSPA, 3GPP2 CDMA2000 (e.g., 1×RTT, 1×EV-DO, HRPD, eHRPD), etc. Note that if the base station102A is implemented in the context of LTE, it may alternately be referred to as an ‘eNodeB’ or ‘eNB’. Note that if the base station102A is implemented in the context of 5G NR, it may alternately be referred to as ‘gNodeB’ or ‘gNB’. As shown, the base station102A may also be equipped to communicate with a network100(e.g., a core network of a cellular service provider, a telecommunication network such as a public switched telephone network (PSTN), and/or the Internet, among various possibilities). Thus, the base station102A may facilitate communication between the user devices and/or between the user devices and the network100. In particular, the cellular base station102A may provide UEs106with various telecommunication capabilities, such as voice, SMS and/or data services. Base station102A and other similar base stations (such as base stations102B through102N) operating according to the same or a different cellular communication standard may thus be provided as a network of cells, which may provide continuous or nearly continuous overlapping service to UEs106A-N and similar devices over a geographic area via one or more cellular communication standards. Thus, while base station102A may act as a “serving cell” for UEs106A-N as illustrated inFIG.1, each UE106may also be capable of receiving signals from (and possibly within communication range of) one or more other cells (which might be provided by base stations102B-N and/or any other base stations), which may be referred to as “neighboring cells”. Such cells may also be capable of facilitating communication between user devices and/or between user devices and the network100. Such cells may include “macro” cells, “micro” cells, “pico” cells, and/or cells which provide any of various other granularities of service area size. For example, base stations102A-B illustrated inFIG.1might be macro cells, while base station102N might be a micro cell. Other configurations are also possible. In some embodiments, base station102A may be a next generation base station, e.g., a 5G New Radio (5G NR) base station, or “gNB”. In some embodiments, a gNB may be connected to a legacy evolved packet core (EPC) network and/or to an NR core (NRC) network. In addition, a gNB cell may include one or more transmission and reception points (TRPs). In addition, a UE capable of operating according to 5G NR may be connected to one or more TRPs within one or more gNBs. Note that a UE106may be capable of communicating using multiple wireless communication standards. For example, the UE106may be configured to communicate using a wireless networking (e.g., Wi-Fi) and/or peer-to-peer wireless communication protocol (e.g., Bluetooth, Wi-Fi peer-to-peer, etc.)
in addition to at least one cellular communication protocol (e.g., GSM, UMTS (associated with, for example, WCDMA or TD-SCDMA air interfaces), LTE, LTE-A, 5G NR, HSPA, 3GPP2 CDMA2000 (e.g., 1×RTT, 1×EV-DO, HRPD, eHRPD), etc.). The UE106may also or alternatively be configured to communicate using one or more global navigational satellite systems (GNSS, e.g., GPS or GLONASS), one or more mobile television broadcasting standards (e.g., ATSC-M/H or DVB-H), and/or any other wireless communication protocol, if desired. Other combinations of wireless communication standards (including more than two wireless communication standards) are also possible. FIG.2illustrates user equipment106(e.g., one of the devices106A through106N) in communication with a base station102, according to some embodiments. The UE106may be a device with cellular communication capability such as a mobile phone, a hand-held device, a computer or a tablet, or virtually any type of wireless device. The UE106may include a processor that is configured to execute program instructions stored in memory. The UE106may perform any of the method embodiments described herein by executing such stored instructions. Alternatively, or in addition, the UE106may include a programmable hardware element such as an FPGA (field-programmable gate array) that is configured to perform any of the method embodiments described herein, or any portion of any of the method embodiments described herein. The UE106may include one or more antennas for communicating using one or more wireless communication protocols or technologies. In some embodiments, the UE106may be configured to communicate using, for example, CDMA2000 (1×RTT/1×EV-DO/HRPD/eHRPD) or LTE using a single shared radio and/or GSM or LTE using the single shared radio. The shared radio may couple to a single antenna, or may couple to multiple antennas (e.g., for MIMO) for performing wireless communications. An antenna array (e.g., for MIMO) can be used to implement beamforming at the UE end to increase signal to noise ratio (SNR) and reduce channel interference of a single data stream. Rx beams can be generated by the antenna array, each of the Rx beams having a predefined spatial location and/or direction relative to the user equipment device. An appropriate Rx beam can be selected that is optimally aligned to receive a transmitted beam from a base station or a neighboring cell to provide improved communication quality. User equipment can use conventional or adaptive beamformers to generate a plurality of Rx beams. The beams can be generated by applying a spatial filter (e.g., phase shifts and amplitude weights) or other equivalent beamforming algorithms to each antenna in the antenna array. In general, a radio may include any combination of a baseband processor, analog RF signal processing circuitry (e.g., including filters, mixers, oscillators, amplifiers, etc.), or digital processing circuitry (e.g., for digital modulation as well as other digital processing). Similarly, the radio may implement one or more receive and transmit chains using the aforementioned hardware. For example, the UE106may share one or more parts of a receive and/or transmit chain between multiple wireless communication technologies, such as those discussed above. In some embodiments, the UE106may include separate transmit and/or receive chains (e.g., including separate antennas and other radio components) for each wireless communication protocol with which it is configured to communicate.
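The spatial filter described above (per-antenna phase shifts and amplitude weights) can be made concrete with a small example. The sketch below forms conventional delay-and-sum Rx beam weights for a hypothetical uniform linear array and applies them to one snapshot of per-antenna samples; the array geometry, the half-wavelength spacing, and the uniform amplitude weighting are illustrative assumptions, not requirements of this disclosure.

```python
import numpy as np

def rx_beam_weights(num_antennas, steer_deg, spacing_wavelengths=0.5):
    # Steering vector of a uniform linear array for a plane wave
    # arriving from steer_deg: uniform amplitude, per-antenna phase shift.
    n = np.arange(num_antennas)
    phase = 2 * np.pi * spacing_wavelengths * n * np.sin(np.deg2rad(steer_deg))
    return np.exp(1j * phase) / np.sqrt(num_antennas)

def apply_rx_beam(weights, snapshot):
    # Spatial filtering: one complex sample per antenna in, one
    # beamformed output sample out (w^H x; vdot conjugates weights).
    return np.vdot(weights, snapshot)
```

When the snapshot actually contains a plane wave from the steered direction, the combined output amplitude grows by a factor of sqrt(N) relative to a single antenna while unit-norm weights leave the noise power unchanged, which is the SNR gain the passage refers to.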
As a further possibility, the UE106may include one or more radios which are shared between multiple wireless communication protocols, and one or more radios which are used exclusively by a single wireless communication protocol. For example, the UE106might include a shared radio for communicating using either of LTE or 5G NR (or LTE or 1×RTT or LTE or GSM), and separate radios for communicating using each of Wi-Fi and Bluetooth. Other configurations are also possible. FIG.3illustrates an example simplified block diagram of a communication device106, according to some embodiments. It is noted that the block diagram of the communication device ofFIG.3is only one example of a possible communication device. According to embodiments, communication device106may be a user equipment (UE) device, a mobile device or mobile station, a wireless device or wireless station, a desktop computer or computing device, a mobile computing device (e.g., a laptop, notebook, or portable computing device), a tablet and/or a combination of devices, among other devices. As shown, the communication device106may include a set of components300configured to perform core functions. For example, this set of components may be implemented as a system on chip (SOC), which may include portions for various purposes. Alternatively, this set of components300may be implemented as separate components or groups of components for the various purposes. The set of components300may be coupled (e.g., communicatively; directly or indirectly) to various other circuits of the communication device106. For example, the communication device106may include various types of memory (e.g., including NAND flash310), an input/output interface such as connector I/F320(e.g., for connecting to a computer system; dock; charging station; input devices, such as a microphone, camera, keyboard; output devices, such as speakers; etc.), the display360, which may be integrated with or external to the communication device106, and cellular communication circuitry330such as for 5G NR, LTE, GSM, etc., and short to medium range wireless communication circuitry329(e.g., Bluetooth™ and WLAN circuitry). In some embodiments, communication device106may include wired communication circuitry (not shown), such as a network interface card, e.g., for Ethernet. The cellular communication circuitry330may couple (e.g., communicatively; directly or indirectly) to one or more antennas, such as antennas335and336as shown. The short to medium range wireless communication circuitry329may also couple (e.g., communicatively; directly or indirectly) to one or more antennas, such as antennas337and338as shown. Alternatively, the short to medium range wireless communication circuitry329may couple (e.g., communicatively; directly or indirectly) to the antennas335and336in addition to, or instead of, coupling (e.g., communicatively; directly or indirectly) to the antennas337and338. The short to medium range wireless communication circuitry329and/or cellular communication circuitry330may include multiple receive chains and/or multiple transmit chains for receiving and/or transmitting multiple spatial streams, such as in a multiple-input multiple output (MIMO) configuration. In some embodiments, as further described below, cellular communication circuitry330may include dedicated receive chains (including and/or coupled to, e.g., communicatively; directly or indirectly,
dedicated processors and/or radios) for multiple radio access technologies (RATs) (e.g., a first receive chain for LTE and a second receive chain for 5G NR). In addition, in some embodiments, cellular communication circuitry330may include a single transmit chain that may be switched between radios dedicated to specific RATs. For example, a first radio may be dedicated to a first RAT, e.g., LTE, and may be in communication with a dedicated receive chain and a transmit chain shared with an additional radio, e.g., a second radio that may be dedicated to a second RAT, e.g., 5G NR, and may be in communication with a dedicated receive chain and the shared transmit chain. The communication device106may also include and/or be configured for use with one or more user interface elements. The user interface elements may include any of various elements, such as display360(which may be a touchscreen display), a keyboard (which may be a discrete keyboard or may be implemented as part of a touchscreen display), a mouse, a microphone and/or speakers, one or more cameras, one or more buttons, and/or any of various other elements capable of providing information to a user and/or receiving or interpreting user input. The communication device106may further include one or more smart cards345that include SIM (Subscriber Identity Module) functionality, such as one or more UICC(s) (Universal Integrated Circuit Card(s)) cards345. As shown, the SOC300may include processor(s)302, which may execute program instructions for the communication device106and display circuitry304, which may perform graphics processing and provide display signals to the display360. The processor(s)302may also be coupled to memory management unit (MMU)340, which may be configured to receive addresses from the processor(s)302and translate those addresses to locations in memory (e.g., memory306, read only memory (ROM)350, NAND flash memory310) and/or to other circuits or devices, such as the display circuitry304, short range wireless communication circuitry329, cellular communication circuitry330, connector I/F320, and/or display360. The MMU340may be configured to perform memory protection and page table translation or set up. In some embodiments, the MMU340may be included as a portion of the processor(s)302. As noted above, the communication device106may be configured to communicate using wireless and/or wired communication circuitry. The communication device106may also be configured to determine a physical downlink shared channel scheduling resource for a user equipment device and a base station. Further, the communication device106may be configured to group and select CCs from the wireless link and determine a virtual CC from the group of selected CCs. The wireless device may also be configured to perform a physical downlink resource mapping based on aggregate resource matching patterns of groups of CCs. As described herein, the communication device106may include hardware and software components for implementing the above features for measuring reference signals (e.g., CSI-RS signals), managing Rx beams, and determining a physical downlink shared channel scheduling resource for a communication device106and a base station. The processor302of the communication device106may be configured to implement part or all of the features described herein, e.g., by executing program instructions stored on a memory medium (e.g., a non-transitory computer-readable memory medium).
Alternatively (or in addition), processor302may be configured as a programmable hardware element, such as an FPGA (Field Programmable Gate Array), or as an ASIC (Application Specific Integrated Circuit). Alternatively (or in addition) the processor302of the communication device106, in conjunction with one or more of the other components300,304,306,310,320,329,330,340,345,350,360may be configured to implement part or all of the features described herein. In addition, as described herein, processor302may include one or more processing elements. Thus, processor302may include one or more integrated circuits (ICs) that are configured to perform the functions of processor302. In addition, each integrated circuit may include circuitry (e.g., first circuitry, second circuitry, etc.) configured to perform the functions of processor(s)302. Further, as described herein, cellular communication circuitry330and short range wireless communication circuitry329may each include one or more processing elements. In other words, one or more processing elements may be included in cellular communication circuitry330and, similarly, one or more processing elements may be included in short range wireless communication circuitry329. Thus, cellular communication circuitry330may include one or more integrated circuits (ICs) that are configured to perform the functions of cellular communication circuitry330. In addition, each integrated circuit may include circuitry (e.g., first circuitry, second circuitry, etc.) configured to perform the functions of cellular communication circuitry330. Similarly, the short range wireless communication circuitry329may include one or more ICs that are configured to perform the functions of short range wireless communication circuitry329. In addition, each integrated circuit may include circuitry (e.g., first circuitry, second circuitry, etc.) configured to perform the functions of short range wireless communication circuitry329. FIG.4illustrates an example block diagram of a base station102, according to some embodiments. It is noted that the base station ofFIG.4is merely one example of a possible base station. As shown, the base station102may include processor(s)404which may execute program instructions for the base station102. The processor(s)404may also be coupled to memory management unit (MMU)440, which may be configured to receive addresses from the processor(s)404and translate those addresses to locations in memory (e.g., memory460and read only memory (ROM)450) or to other circuits or devices. The base station102may include at least one network port470. The network port470may be configured to couple to a telephone network and provide a plurality of devices, such as UE devices106, access to the telephone network as described above inFIGS.1and2. The network port470(or an additional network port) may also or alternatively be configured to couple to a cellular network, e.g., a core network of a cellular service provider. The core network may provide mobility related services and/or other services to a plurality of devices, such as UE devices106. In some cases, the network port470may couple to a telephone network via the core network, and/or the core network may provide a telephone network (e.g., among other UE devices serviced by the cellular service provider). In some embodiments, base station102may be a next generation base station, e.g., a 5G New Radio (5G NR) base station, or “gNB”.
In such embodiments, base station102may be connected to a legacy evolved packet core (EPC) network and/or to an NR core (NRC) network. In addition, base station102may be considered a 5G NR cell and may include one or more transmission and reception points (TRPs). In addition, a UE capable of operating according to 5G NR may be connected to one or more TRPs within one or more gNBs. The base station102may include at least one antenna434, and possibly multiple antennas. The at least one antenna434may be configured to operate as a wireless transceiver and may be further configured to communicate with UE devices106via radio430. The antenna434communicates with the radio430via communication chain432. Communication chain432may be a receive chain, a transmit chain or both. The radio430may be configured to communicate via various wireless communication standards, including, but not limited to, 5G NR, LTE, LTE-A, GSM, UMTS, CDMA2000, Wi-Fi, etc. The base station102may be configured to communicate wirelessly using multiple wireless communication standards. In some instances, the base station102may include multiple radios, which may enable the base station102to communicate according to multiple wireless communication technologies. For example, as one possibility, the base station102may include an LTE radio for performing communication according to LTE as well as a 5G NR radio for performing communication according to 5G NR. In such a case, the base station102may be capable of operating as both an LTE base station and a 5G NR base station. As another possibility, the base station102may include a multi-mode radio which is capable of performing communications according to any of multiple wireless communication technologies (e.g., 5G NR and Wi-Fi, LTE and Wi-Fi, LTE and UMTS, LTE and CDMA2000, UMTS and GSM, etc.). As described further subsequently herein, the BS102may include hardware and software components for implementing or supporting implementation of features described herein. The processor404of the base station102may be configured to implement or support implementation of part or all of the methods described herein, e.g., by executing program instructions stored on a memory medium (e.g., a non-transitory computer-readable memory medium). Alternatively, the processor404may be configured as a programmable hardware element, such as an FPGA (Field Programmable Gate Array), or as an ASIC (Application Specific Integrated Circuit), or a combination thereof. Alternatively (or in addition) the processor404of the BS102, in conjunction with one or more of the other components430,432,434,440,450,460,470may be configured to implement or support implementation of part or all of the features described herein. In addition, as described herein, processor(s)404may be comprised of one or more processing elements. In other words, one or more processing elements may be included in processor(s)404. Thus, processor(s)404may include one or more integrated circuits (ICs) that are configured to perform the functions of processor(s)404. In addition, each integrated circuit may include circuitry (e.g., first circuitry, second circuitry, etc.) configured to perform the functions of processor(s)404. Further, as described herein, radio430may be comprised of one or more processing elements. In other words, one or more processing elements may be included in radio430. Thus, radio430may include one or more integrated circuits (ICs) that are configured to perform the functions of radio430.
In addition, each integrated circuit may include circuitry (e.g., first circuitry, second circuitry, etc.) configured to perform the functions of radio430. FIG.5illustrates an example simplified block diagram of cellular communication circuitry, according to some embodiments. It is noted that the block diagram of the cellular communication circuitry ofFIG.5is only one example of a possible cellular communication circuit. According to embodiments, cellular communication circuitry330may be included in a communication device, such as communication device106described above. As noted above, communication device106may be a user equipment (UE) device, a mobile device or mobile station, a wireless device or wireless station, a desktop computer or computing device, a mobile computing device (e.g., a laptop, notebook, or portable computing device), a tablet and/or a combination of devices, among other devices. The cellular communication circuitry330may couple (e.g., communicatively; directly or indirectly) to one or more antennas, such as antennas335a-band336as shown inFIG.3. In some embodiments, cellular communication circuitry330may include dedicated receive chains (including and/or coupled to, e.g., communicatively, directly or indirectly, dedicated processors and/or radios) for multiple RATs (e.g., a first receive chain for LTE and a second receive chain for 5G NR). For example, as shown inFIG.5, cellular communication circuitry330may include a modem510and a modem520. Modem510may be configured for communications according to a first RAT, e.g., such as LTE or LTE-A, and modem520may be configured for communications according to a second RAT, e.g., such as 5G NR. As shown, modem510may include one or more processors512and a memory516in communication with processors512. Modem510may be in communication with a radio frequency (RF) front end530. RF front end530may include circuitry for transmitting and receiving radio signals. For example, RF front end530may include receive circuitry (RX)532and transmit circuitry (TX)534. In some embodiments, receive circuitry532may be in communication with downlink (DL) front end550, which may include circuitry for receiving radio signals via antenna335a. Similarly, modem520may include one or more processors522and a memory526in communication with processors522. Modem520may be in communication with an RF front end540. RF front end540may include circuitry for transmitting and receiving radio signals. For example, RF front end540may include receive circuitry542and transmit circuitry544. In some embodiments, receive circuitry542may be in communication with DL front end560, which may include circuitry for receiving radio signals via antenna335b. In some embodiments, a switch570may couple transmit circuitry534to uplink (UL) front end572. In addition, switch570may couple transmit circuitry544to UL front end572. UL front end572may include circuitry for transmitting radio signals via antenna336. Thus, when cellular communication circuitry330receives instructions to transmit according to the first RAT (e.g., as supported via modem510), switch570may be switched to a first state that allows modem510to transmit signals according to the first RAT (e.g., via a transmit chain that includes transmit circuitry534and UL front end572).
Similarly, when cellular communication circuitry330receives instructions to transmit according to the second RAT (e.g., as supported via modem520), switch570may be switched to a second state that allows modem520to transmit signals according to the second RAT (e.g., via a transmit chain that includes transmit circuitry544and UL front end572). As described herein, the modem510may include hardware and software components for implementing the above features or for measuring one or more reference signals (e.g., CSI-RS signals) and determining a physical downlink shared channel scheduling resource for a user equipment device and a base station, as well as the various other techniques described herein. The processors512may be configured to implement part or all of the features described herein, e.g., by executing program instructions stored on a memory medium (e.g., a non-transitory computer-readable memory medium). Alternatively (or in addition), processor512may be configured as a programmable hardware element, such as an FPGA (Field Programmable Gate Array), or as an ASIC (Application Specific Integrated Circuit). Alternatively (or in addition) the processor512, in conjunction with one or more of the other components530,532,534,550,570,572,335and336may be configured to implement part or all of the features described herein. In addition, as described herein, processors512may include one or more processing elements. Thus, processors512may include one or more integrated circuits (ICs) that are configured to perform the functions of processors512. In addition, each integrated circuit may include circuitry (e.g., first circuitry, second circuitry, etc.) configured to perform the functions of processors512. As described herein, the modem520may include hardware and software components for implementing the above features for measuring reference signals (e.g., CSI-RS signals), managing Rx beams, and determining a physical downlink shared channel scheduling resource for a user equipment device and a base station, as well as the various other techniques described herein. The processors522may be configured to implement part or all of the features described herein, e.g., by executing program instructions stored on a memory medium (e.g., a non-transitory computer-readable memory medium). Alternatively (or in addition), processor522may be configured as a programmable hardware element, such as an FPGA (Field Programmable Gate Array), or as an ASIC (Application Specific Integrated Circuit). Alternatively (or in addition) the processor522, in conjunction with one or more of the other components540,542,544,550,570,572,335and336may be configured to implement part or all of the features described herein. In addition, as described herein, processors522may include one or more processing elements. Thus, processors522may include one or more integrated circuits (ICs) that are configured to perform the functions of processors522. In addition, each integrated circuit may include circuitry (e.g., first circuitry, second circuitry, etc.) configured to perform the functions of processors522. FIG.6illustrates a UE device602in communication with a serving cell and neighbor cell, according to some embodiments. The UE602can include any or all of the features described in relation to UE106. UE602can generate multiple local receiving (Rx) beams608. These Rx beams can be formed at different positions around the UE to pick up wireless communication signals, e.g., electro-magnetic signals, from serving cell604and neighboring cell606. 
Wireless signals can include channel state information reference signals (CSI-RS). These are downlink signals that are used to estimate the channel and report channel quality information back to the gNB. A CSI-RS signal can be periodic, semi-persistent, or aperiodic. The CSI-RS can be a CSI-RS layer3mobility signal, used during mobility and beam management. The serving cell604communicates CSI-RS1 (a first CSI-RS signal) to the UE. The CSI-RS1 can be quasi co-located (QCL) with a synchronization signal block (SSB1) or another CSI-RS signal transmitted from the serving cell. This QCL information can be used to determine which of the Rx beams608should be used to receive CSI-RS1. Similarly, the neighboring cell606can communicate CSI-RS2 to the UE. CSI-RS2 can also be quasi co-located with SSB2 or another CSI-RS signal transmitted from the neighbor cell. This QCL information can be used to determine which of the Rx beams should be used to receive CSI-RS2. In some cases, however, QCL information may not be available. The UE may need to determine which of the Rx beams608to use for performing CSI-RS1 and CSI-RS2 measurements. In addition, when there is overlap between the CSI-RS signals (e.g., if the CSI-RS signals are on the same time occasion and require pickup by different Rx beams), the UE may need to prioritize one CSI-RS over another. The UE should have the capability to adapt under different scenarios to sufficiently measure CSI-RS signals from the serving cell and the neighbor cell. FIG.6shows a first scenario where CSI-RS1 (a first CSI-RS signal) and CSI-RS2 (a second CSI-RS signal) are communicated with respective QCL information. Respective QCL information can include quasi co-location (QCL) between a) the first CSI-RS signal and a first synchronization signal block from the first cell, b) the first CSI-RS signal and another CSI-RS signal from the first cell, c) the second CSI-RS signal and a second synchronization signal block from the second cell, and/or d) the second CSI-RS signal and another CSI-RS signal from the second cell. A first Rx beam can be determined based on the QCL information associated with CSI-RS1 and a second Rx beam can be determined based on the QCL information associated with CSI-RS2. For example, based on QCL between CSI-RS1 and SSB1, the UE can determine that Rx1is appropriate to receive CSI-RS1. In other words, the signal strength of CSI-RS1 received through this beam can be higher than if received through other beams. The same holds true for determining an Rx beam for CSI-RS2 communicated from neighboring cell606. Signals from different antenna ports of the same cell are said to be quasi co-located if properties of the channel over which a symbol on one antenna port is conveyed can be inferred from the channel over which a symbol on the other antenna port is conveyed. InFIG.6, the Rx beam that is selected by the UE to receive CSI-RS1 can be different from the Rx beam selected to receive CSI-RS2, because one Rx beam might be better suited to receive CSI-RS1 while another Rx beam might be better suited to receive CSI-RS2. If the CSI-RS1 and CSI-RS2 signals overlap in the time domain (e.g., as shown inFIG.7), the UE cannot measure those CSI-RS signals simultaneously by using different Rx beams, because the UE is limited to one active Rx beam at a given time. Under these conditions, two sub-scenarios are considered. FIG.7shows a first sub-scenario where CSI-RS1 and CSI-RS2 are fully overlapped in the time domain with the same time offset and same periodicity.
In other words, the signals are arriving and occurring over the same time at the UE, periodically. In this sub-scenario, the UE can opt to receive and measure the signals in the following manners. In a first option to address this first sub-scenario, the UE can determine or be provided a sharing factor X (e.g., 10%, 20%, 30%, 40%, 50%) through the network to allocate measurement resources. For example, if the sharing factor is 40% for CSI-RS1, then in four out of ten periods, the UE can receive and measure CSI-RS1 through Rx1, and in six out of ten periods, the UE can receive and measure CSI-RS2 through Rx2. Under a second option and third option of the first sub-scenario, the UE can always prioritize receiving and measuring either CSI-RS1 or CSI-RS2. For example, the UE can receive and measure only CSI-RS1 (e.g., through Rx1). Alternatively, the UE can receive and measure only CSI-RS2 (e.g., through Rx7). FIG.8shows a second sub-scenario, where the CSI-RS1 and CSI-RS2 are partially overlapped in the time domain, e.g., they can have the same time offset and different periodicity. In this example, CSI-RS1 has period T and CSI-RS2 has a periodicity T/2. Thus, some of the CSI-RS2 occasions (periods of signal transmission) are not overlapped with CSI-RS1. Given that some of these occasions of CSI-RS2 occur alone, the UE can opt to receive and measure the signals in the following manners. In a first option for the second sub-scenario, the UE performs CSI-RS2 measurement through the Rx beam determined based on the QCL information (in this example, Rx7), when the CSI-RS2 is not overlapped with CSI-RS1. The UE performs CSI-RS1 measurement through the Rx beam determined based on the CSI-RS1 QCL information (in this example Rx1) when the signals are overlapped. In a second option for the second sub-scenario, the UE performs CSI-RS2 measurement (with Rx7) when the signals are not overlapped. The UE can use sharing factor X to allocate measurement resources for CSI-RS1 and CSI-RS2 on the overlapped occasions. In other words, with this option, the CSI-RS2 measurement will be taken when the signals do not overlap, and when the signals do overlap, the measurements can alternate between receiving and measuring CSI-RS1 (with Rx1) and CSI-RS2 (with Rx7). It should be understood that, although this example and others are illustrated with Rx1used to receive CSI-RS1 and Rx7used to receive CSI-RS2, any of the beams can be selected for pickup of a respective CSI-RS based on the QCL information associated with the respective CSI-RS signal, or based on beam sweeping measurements. In some cases, the CSI-RS1 and CSI-RS2 can use the same Rx beam, in which case, both signals can be received and measured with the same Rx beam. It should further be understood that although the Rx beams are shown as Rx0through Rx7in illustrated examples, the number, location, and directionality of beams can vary depending on application (e.g., capacity of the antenna array of the UE) without departing from the scope of the present disclosure. FIG.9shows a second scenario where a CSI-RS (e.g., CSI-RS1) of the serving cell has available QCL information, but a CSI-RS (e.g., CSI-RS2) of the neighbor cell has no available QCL information.
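The fully-overlapped options described above reduce to a simple per-period decision rule. The sketch below is a minimal illustration only, not an implementation from this disclosure: the function and option names are hypothetical, the 10-period granularity mirrors the 40%/60% example, and the beam labels follow the Rx1/Rx2 example given for the sharing-factor option.

```python
# Minimal sketch (assumptions noted above) of the three fully-overlapped
# options: a network-provided sharing factor, or always prioritizing one
# of the two CSI-RS signals.
def fully_overlapped_choice(period_index, option, sharing_factor=0.4):
    """Return which CSI-RS (and which Rx beam) to measure in one period."""
    if option == "prioritize_csi_rs1":           # second option: only CSI-RS1
        return ("CSI-RS1", "Rx1")
    if option == "prioritize_csi_rs2":           # third option: only CSI-RS2
        return ("CSI-RS2", "Rx2")
    # First option: sharing factor X. With X = 40% for CSI-RS1, four of
    # every ten periods go to CSI-RS1 (Rx1) and six to CSI-RS2 (Rx2).
    if period_index % 10 < int(sharing_factor * 10):
        return ("CSI-RS1", "Rx1")
    return ("CSI-RS2", "Rx2")

schedule = [fully_overlapped_choice(n, "sharing") for n in range(10)]
assert sum(1 for signal, _ in schedule if signal == "CSI-RS1") == 4
```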
The lack of QCL information associated with a CSI-RS signal can be due to different factors, such as but not limited to a) the network does not indicate this QCL information to the UE or it is physically blocked, b) a previous measurement based on QCL information times out and is no longer relevant or useful for QCL, and/or c) a source reference signal in the QCL chain is not available. In this example, an Rx beam (a first Rx beam) for receiving CSI-RS1 is known or determined through QCL information. CSI-RS2 from the neighboring cell, however, does not have QCL information available. In this case, the UE can perform beam sweeping to find an Rx beam (a second Rx beam) that is best suited to receive CSI-RS2. For beam sweeping, the UE can activate different beams with predefined locations and directions around the UE and measure a CSI-RS signal strength through each beam to determine which beam receives the CSI-RS with the greatest signal strength. In this second scenario, CSI-RS1 may overlap with CSI-RS2 on some or all occasions in the time domain, for example, depending on the periodicity and time offset of each signal. The UE can decide on which occasions it can perform the CSI-RS1 measurement, and on which occasions it can sweep Rx beams for CSI-RS2. If all occasions of CSI-RS1 and CSI-RS2 are fully overlapped in the second scenario, as shown inFIG.10, the UE can prioritize Rx beam sweeping for CSI-RS2 measurement. The UE receives and measures CSI-RS1 on the occasions where the Rx beam of CSI-RS1 and the index of beam sweeping for CSI-RS2 are the same. For example, as beam sweeping is performed over Rx0, Rx1, Rx2, etc., CSI-RS2 is measured with each beam. When beam sweeping is indexed at Rx1, both CSI-RS1 and CSI-RS2 are measured through Rx1. If some occasions of CSI-RS1 are not overlapped (but others are overlapped) with CSI-RS2 as shown inFIG.11, CSI-RS1 may have a shorter period than CSI-RS2 (e.g., CSI-RS1 has period T and CSI-RS2 has period2T). The UE can beam sweep for all occasions of CSI-RS2. In such a case, the UE can perform CSI-RS1 measurements on the occasions where the beam sweeping index falls on the Rx beam that is associated with CSI-RS1 (e.g., Rx1in this example). The UE can also perform the CSI-RS1 measurement on the occasions of CSI-RS1 that are not overlapped with CSI-RS2, using the known Rx beam for CSI-RS1 (Rx1). If some occasions of CSI-RS2 are not overlapped (but others are overlapped) with CSI-RS1, as shown inFIG.12, then the UE can perform Rx beam sweeping for CSI-RS2 measurement only on the non-overlapped occasions of CSI-RS2. The UE may perform the beam sweeping for CSI-RS2 but skip the Rx beam selected to receive CSI-RS1 in the sweep sequence (Rx1in this example), which improves efficiency and reduces redundancy. On the overlapped occasions, the UE can use the Rx beam associated with CSI-RS1 to receive and measure both CSI-RS1 and CSI-RS2. Under a third scenario shown inFIG.13, the CSI-RS of the serving cell and the CSI-RS of the neighboring cell both lack QCL information to determine which Rx beam should be used for receiving the CSI-RS signals, respectively. If neither CSI-RS1 from cell1nor CSI-RS2 from cell2has available QCL information, the UE can perform the beam sweeping for both CSI-RS1 and CSI-RS2. For each time period, a single Rx beam can be used for measuring both CSI-RS1 and CSI-RS2. In this case, the UE can use a finer (narrower) beam than typically used for the SSB associated with a CSI-RS L3 signal.
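The beam sweeping described in the second and third scenarios can be sketched as a simple search over candidate beams. This is an illustration under stated assumptions: measure(beam) stands in for whatever RF measurement routine the UE uses (it is not an API from this disclosure), and the optional skip parameter mirrors the variant above in which the Rx beam already committed to CSI-RS1 is left out of the sweep sequence.

```python
# Illustrative beam-sweeping sketch; `measure(beam)` is a hypothetical
# callable returning the received CSI-RS signal strength on one Rx beam.
def beam_sweep(measure, beams, skip=None):
    """Sweep the candidate Rx beams, optionally skipping one beam that is
    already committed to another CSI-RS, and return the strongest beam."""
    candidates = [beam for beam in beams if beam != skip]
    strengths = {beam: measure(beam) for beam in candidates}
    return max(strengths, key=strengths.get)

# Toy usage with fabricated strengths (beam 7 happens to be strongest):
fake_strengths = {0: -95.0, 1: -84.0, 2: -99.0, 7: -78.5}
best = beam_sweep(lambda b: fake_strengths.get(b, -120.0), range(8), skip=1)
assert best == 7   # Rx1 was skipped in the sweep; Rx7 wins
```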
For example, as shown inFIG.14, beam sweeping can be used to measure both CSI-RS1 and CSI-RS2 by measuring each signal over each Rx beam. CSI-RS1 and CSI-RS2 do not necessarily have to overlap completely, although shown as such in this example. FIG.15shows a process that describes a measurement algorithm1500for CSI-RS signals according to some embodiments, for example, in response to the first scenario shown inFIG.6. At operation1501, the process includes receiving a first CSI-RS signal through a first cell and a second CSI-RS signal through a second cell. The first CSI-RS signal and the second CSI-RS signal can be periodic, e.g., transmitted periodically over time. At operation1502, if respective QCL information is available to determine a first Rx beam to measure the first CSI-RS signal and a second Rx beam to measure the second CSI-RS signal, then the process can proceed to operation1503or operation1507. It should be noted that although the process is shown as proceeding sequentially from operation1503to operation1507, this is not required. The process proceeds depending on the situation of the CSI-RS signals as described. At operation1503, if the first CSI-RS signal and the second CSI-RS signal are fully overlapped, then the process can proceed to any one of three options. At option1504, the process includes sharing resources by alternating between measuring the first CSI-RS signal with the first Rx beam and measuring the second CSI-RS signal with the second Rx beam. At option1505, the process includes measuring only the first CSI-RS signal with the first Rx beam. At option1506, the process includes measuring only the second CSI-RS signal with the second Rx beam. At operation1507, if some occasions of the second CSI-RS signal are not overlapped with the first CSI-RS signal (but others are overlapped), then the process can proceed to either of two options. At option1508, the process includes measuring the second CSI-RS signal when not overlapped, and measuring the first CSI-RS signal when overlapped. At option1509, the process includes measuring the second CSI-RS signal when not overlapped, and alternating between measuring the first CSI-RS signal and the second CSI-RS signal when overlapped. It should be understood that the options can be selected based on application and/or network behavior or network conditions. FIG.16shows a process according to some embodiments that describes a measurement and sweeping algorithm1600for CSI-RS signals, for example, in response to the second scenario shown inFIG.9. A first CSI-RS signal and a second CSI-RS signal are received by the UE. At operation1602, if the respective QCL information is available to determine the first Rx beam to measure the first CSI-RS signal, and the respective QCL information is not available to determine the second Rx beam, then the process proceeds to operation1603,1606, or1609, depending on the condition. The operations1603,1606, and1609need not be performed sequentially as shown. At operation1603, if the first CSI-RS signal and the second CSI-RS signal are fully overlapped, then the process can proceed to operation1604. Full overlap can occur when both signals have the same period and same time offset. Thus, the CSI-RS signals are received at the UE at the same time and the UE must resolve how to measure both signals. At operation1604, the process includes beam sweeping over a plurality of Rx beams that includes the first Rx beam, to measure the second CSI-RS signal over each of the plurality of Rx beams.
At operation1605, the process includes measuring the first CSI-RS signal (and the second CSI-RS signal together) when the beam sweeping is indexed on the first Rx beam. The second Rx beam can be determined based on the sweep measurements of the second CSI-RS signal over the plurality of Rx beams (e.g., based on which Rx beam receives the CSI-RS signal with the highest strength). Operations1604and1605are also described in other sections in relation toFIG.10. At operation1606, if some occasions of the first CSI-RS signal are not overlapped with the second CSI-RS signal (and others are overlapped), then the process can proceed to operation1607. At operation1607, the process includes beam sweeping over a plurality of Rx beams (e.g., Rx1, Rx2, Rx3, etc. as shown inFIGS.6-14) that includes the first Rx beam, to measure the second CSI-RS signal over each of the plurality of Rx beams. At operation1608, the process includes measuring the first CSI-RS signal with the first Rx beam on a) non-overlapped occasions of the first CSI-RS signal, and/or b) when the beam sweeping is indexed on the first Rx beam. Operations1607and1608are discussed in other sections in relation toFIG.11. As discussed, the second Rx beam can be determined based on sweep measurements of the second CSI-RS signal over the plurality of Rx beams. At operation1609, if some occasions of the second CSI-RS signal are not overlapped with the first CSI-RS signal (but others are), then the process can proceed to operation1610. At operation1610, the process includes measuring, on overlapped occasions, the first CSI-RS signal and the second CSI-RS signal with the first Rx beam. At operation1611, the process includes beam sweeping over the plurality of Rx beams that does not include the first Rx beam, on non-overlapped occasions of the second CSI-RS signal, to measure the second CSI-RS signal over each of the plurality of Rx beams. In other words, the first Rx beam is skipped during the sweep because the second CSI-RS signal is measured at the first Rx beam at operation1610. Operations1610and1611are further described in relation toFIG.12. The second Rx beam can be determined based on the sweep measurements of the second CSI-RS signal over the plurality of Rx beams. Thus, based on the above, although the second CSI-RS signal lacked respective QCL information, the UE can manage beams and measurements to determine which Rx beam to use to receive the second CSI-RS signal while also measuring the first CSI-RS signal. FIG.17shows a process according to some embodiments that describes a measurement and sweeping algorithm1700for CSI-RS signals, for example, in response to the third scenario shown inFIG.13. At operation1702, if the respective QCL information is not available to determine the first Rx beam and the second Rx beam, then the process can proceed to operation1703. At operation1703, the process includes beam sweeping over a plurality of Rx beams that includes the first Rx beam and the second Rx beam. At operation1704, each of the first CSI-RS signal and the second CSI-RS signal is measured over each of the plurality of Rx beams. The first Rx beam and the second Rx beam can be determined based on measurements of the first CSI-RS signal and the second CSI-RS signal over the plurality of Rx beams. In other words, the Rx beam that yields the highest signal strength for the first CSI-RS can be designated as a first Rx beam to use to receive the first CSI-RS signal.
Similarly, the Rx beam that yields the highest signal strength for the second CSI-RS signal can be designated as a second Rx beam to use to receive the second CSI-RS signal. Operations1703and1704are discussed in other sections, for example, relative toFIGS.13and14. It should be understood that a UE can implement different combinations of the strategies discussed under varying conditions of CSI-RS signals.FIG.18shows a combination of strategies according to some embodiments. At operation1501, a first and second CSI-RS signal are received, as discussed in other sections. At operation1502, if both CSI-RS signals have available QCL information, the process proceeds to process1500, which is described in other sections. At operation1602, if the first CSI-RS signal has available QCL information but the second CSI-RS signal does not, then the process proceeds to process1600, which is described in other sections. At operation1702, if the first CSI-RS signal and the second CSI-RS signal both lack respective QCL information, then the process proceeds to process1700, which is described in other sections. In such a manner, the UE can implement a comprehensive and adaptive CSI-RS measurement and sweeping strategy for a serving cell and a neighboring cell under the different conditions described. Portions of what was described above may be implemented with logic circuitry such as a dedicated logic circuit or with a microcontroller or other form of processing core that executes program code instructions. Thus, processes taught by the discussion above may be performed with program code such as machine-executable instructions that cause a machine that executes these instructions to perform certain functions. In this context, a “machine” may be a machine that converts intermediate form (or “abstract”) instructions into processor specific instructions (e.g., an abstract execution environment such as a “virtual machine” (e.g., a Java Virtual Machine), an interpreter, a Common Language Runtime, a high-level language virtual machine, etc.), and/or, electronic circuitry disposed on a semiconductor chip (e.g., “logic circuitry” implemented with transistors) designed to execute instructions such as a general-purpose processor and/or a special-purpose processor. Processes taught by the discussion above may also be performed by (in the alternative to a machine or in combination with a machine) electronic circuitry designed to perform the processes (or a portion thereof) without the execution of program code. The present invention also relates to an apparatus for performing the operations described herein. This apparatus may be specially constructed for the required purpose, or it may comprise a general-purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, and magneto-optical disks, read-only memories (ROMs), RAMs, EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions, and each coupled to a computer system bus. A machine readable medium includes any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computer). For example, a machine readable medium includes read only memory (“ROM”); random access memory (“RAM”); magnetic disk storage media; optical storage media; flash memory devices; etc.
An article of manufacture may be used to store program code. An article of manufacture that stores program code may be embodied as, but is not limited to, one or more memories (e.g., one or more flash memories, random access memories (static, dynamic or other)), optical disks, CD-ROMs, DVD ROMs, EPROMs, EEPROMs, magnetic or optical cards or other type of machine-readable media suitable for storing electronic instructions. Program code may also be downloaded from a remote computer (e.g., a server) to a requesting computer (e.g., a client) by way of data signals embodied in a propagation medium (e.g., via a communication link (e.g., a network connection)). The preceding detailed descriptions are presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the tools used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of operations leading to a desired result. The operations are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like. It should be kept in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise as apparent from the above discussion, it is appreciated that throughout the description, discussions utilizing terms such as “selecting,” “determining,” “receiving,” “forming,” “grouping,” “aggregating,” “generating,” “removing,” or the like, refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices. The processes and displays presented herein are not inherently related to any particular computer or other apparatus. Various general-purpose systems may be used with programs in accordance with the teachings herein, or it may prove convenient to construct a more specialized apparatus to perform the operations described. The required structure for a variety of these systems will be evident from the description below. In addition, the present invention is not described with reference to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of the invention as described herein. It is well understood that the use of personally identifiable information should follow privacy policies and practices that are generally recognized as meeting or exceeding industry or governmental requirements for maintaining the privacy of users. 
In particular, personally identifiable information data should be managed and handled so as to minimize risks of unintentional or unauthorized access or use, and the nature of authorized use should be clearly indicated to users. The foregoing discussion merely describes some exemplary embodiments of the present invention. One skilled in the art will readily recognize from such discussion, the accompanying drawings and the claims that various modifications can be made without departing from the spirit and scope of the invention.
55,970
11863278
DETAILED DESCRIPTION The present invention will now be described more fully hereinafter. The present invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided by way of example so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those persons skilled in the relevant art. Like reference numbers refer to like elements throughout the description. In one of its aspects, the disclosure presented herein concerns a method for sequential receive combining in a radio stripe system. With reference toFIGS.1and2, a first embodiment will now be described.FIG.1illustrates a message sequence chart of a process for sequential receive combining in a radio stripe system.FIG.2illustrates a method100, performed by a first Antenna Processing Unit (APU), for sequential receive combining in a radio stripe system. The radio stripe system comprises at least two APUs and a Central Processing Unit (CPU). The at least two APUs are connected in series from the CPU. The radio stripe system serves at least two User Equipments (UEs), or terminals.FIG.3illustrates an example of such a radio stripe system800. As seen inFIG.3, each radio stripe may comprise multiple APUs, or Access Points (APs), deployed along the same fronthaul connection to a central unit, a cloud-processor, also called the CPU600. In some embodiments, the CPU600may be called a stripe station or a network node. The radio stripe system800may be comprised in a cell-free distributed (massive) Multiple-Input Multiple-Output (MIMO) network. The method100starts at step110with the first APU300obtaining channel estimates for channels to the served UEs. The obtained channel estimates may be Channel State Information (CSI) obtained from reference pilot signals transmitted by the at least two UEs700served by the radio stripe system800. Based on the obtained channel estimates, a receive combining filter is determined in step120. The receive combining filter is going to be applied to received data signals. The receive combining filter may, for example, be generated by a method selected from the group comprised of: Maximum-Ratio Combining (MRC), Zero-Forcing (ZF) combining and Minimum-Mean Squared Error (MMSE) combining. Alternatively, the receive combining filter may be generated by another method. Thereafter the method100continues at step130with determining effective channels from said served UEs700based on the obtained channel estimates and the determined receive combining filter. The effective channels represent the effective channel created after the receive combining filter is applied to each channel for said served UEs700. At step140, the determined effective channels from said served UEs700are transmitted to at least one subsequent second APU400. The at least one subsequent second APU400may be located closer to the CPU600than the first APU300. If it is assumed that the radio stripe system800serves K UEs700, K² scalar coefficients will be transmitted to the at least one subsequent second APU400. The K² scalar coefficients represent the effective K×K channel created after application of the receive combining filter for each of the K served UEs700.
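As a rough numerical illustration of steps110-140, the sketch below builds unit-norm maximum-ratio combining filters from an M×K matrix of channel estimates and forms the K×K effective-channel matrix, i.e. the K² scalar coefficients forwarded to the next APU. MRC is only one of the filter choices named above, and the array shapes and function name are assumptions made for the example, not part of this disclosure.

```python
import numpy as np

# Sketch of method 100 at the first APU, assuming M antennas and K served
# UEs. H_hat is an M x K matrix of channel estimates, one column per UE.
def first_apu_processing(H_hat):
    V = H_hat / np.linalg.norm(H_hat, axis=0)  # unit-norm MRC filter per UE
    G = V.conj().T @ H_hat                     # K x K effective channels
    return V, G                                # G holds the K^2 coefficients

rng = np.random.default_rng(1)
M, K = 4, 2
H_hat = rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))
V, G = first_apu_processing(H_hat)
assert G.shape == (K, K)                       # K^2 scalar coefficients
```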
In some embodiments, the method100may further comprise step150of receiving data signals from the served UEs700and step160of determining improved estimates of the received data signals by applying the determined receive combining filter to the received data signals from said served UEs700. If it is assumed that the radio stripe system800serves K UEs700, K improved estimates, one per UE, may be determined. Additionally, the method may further comprise step170of transmitting the determined improved estimates of the received data signals to at least one subsequent second APU400. The proposed method100is a scalable distributed method for coherent signal combining and interference suppression in distributed MIMO systems. The method100considers uplink transmission where the served UEs700transmit both payload data and reference pilot signals to enable channel estimation. The aim is to decode the signals from at least two served UEs jointly by using the signals from multiple APUs simultaneously to suppress interference, and to perform this without gathering all the received signals from data and pilots at one location. The proposed method100achieves this by using the sequential topology of the fronthaul in radio stripe systems800to implement interference suppression in a sequential manner. The first APU300makes a local decision based on its locally received signals and thereafter transmits this information to a second APU400. This will enable suppression of interference between the first APU300and the second APU400. The proposed method100is contrary to the existing solutions that are either distributed, but lack interference suppression capability, or support interference suppression, but require a centralized implementation with heavy fronthaul traffic. The proposed method100may greatly increase the system capacity (achievable rates), since cell-free networks typically operate at high Signal-to-Noise-Ratio (SNR) where the system performance is interference limited. In some embodiments, each of the at least two APUs300,400in the radio stripe system800may be equipped with one antenna. In other embodiments, at least one of the at least two APUs300,400may have multiple antennas. According to a second aspect, there is provided a method, performed by a second APU400, for sequential receive combining in a radio stripe system800. The method200is now going to be described with reference to theFIGS.1and4. As previously mentioned,FIG.1illustrates a message sequence chart of a process for sequential receive combining in a radio stripe system800.FIG.4illustrates the method200, performed by the second APU400, for sequential receive combining in a radio stripe system800. The radio stripe system800comprises at least two APUs300,400and a CPU600. The at least two APUs300,400are connected in series from the CPU600. The radio stripe system800serves at least two UEs700.FIG.3illustrates an example of such a radio stripe system800. The method200starts at step210with obtaining channel estimates for channels to said served UEs700. The obtained channel estimates may be CSI obtained from reference pilot signals transmitted by said served UEs700. The proposed method continues with step220of receiving, from at least one preceding first APU300, effective channels from said served UEs700to said preceding first APU300. The at least one preceding first APU300may be located further away from the CPU600than the second APU400.
Thereafter, at step230, the proposed method200determines a receive combining filter based on the obtained channel estimates and the received effective channels from said at least one preceding first APU300. The receive combining filter is going to be applied to received data signals. The receive combining filter may be generated by a method selected from the group comprised of: MRC, ZF combining and MMSE combining. Alternatively, the receive combining filter may be generated by some other method. The proposed method200enables the second APU400to process its received signals locally and combine them with signals from at least one preceding first APU300in a sequential manner. If it is assumed that the second APU400comprises M antennas, the second APU400may apply a newly determined receive combining filter, but the processing will consider an M+1 antenna system. The additional dimension is created based on the input provided by the at least one preceding first APU300. In some embodiments, the method200may further comprise the step240of determining effective channels from said served UEs700based on the obtained channel estimates, the determined receive combining filter and the received effective channels from said at least one preceding first APU300. The effective channels represent the effective channel created after the receive combining filter is applied to each channel for said served UEs. If it is assumed that the radio stripe system serves K UEs, K² effective channels representing the effective channel created after the receive combining filter is applied to each channel for said served UEs700will be determined. In some embodiments, the proposed method200may further comprise step250of receiving data signals from said served UEs700and step260of receiving, from said at least one preceding first APU300, improved estimates of data signals received by the preceding first APU300. Thereafter, the method may comprise step270of determining augmented received signals based on the received data signals and the received improved estimates of the data signals received by the preceding first APU300. The method200thereafter may comprise step280of determining improved estimates of the received data signals by applying the determined receive combining filter to the augmented received data signals from said served UEs700. If it is assumed that the radio stripe system serves K UEs700, K improved estimates, one per UE, may be determined. According to the proposed method200, the second APU400will make a local decision based on its locally received signals and fuse it with the information received from the at least one preceding first APU300. This will enable suppression of interference between the first APU300and the second APU400. The proposed method200is contrary to existing solutions that are either distributed, but lack interference suppression capability, or support interference suppression, but require a centralized implementation with heavy fronthaul traffic. The proposed method200may greatly increase the system capacity (achievable rates), since cell-free networks typically operate at high Signal-to-Noise-Ratio (SNR) where the system performance is interference limited. A key benefit of the proposed method200is that the fronthaul capacity requirement between two APUs300,400on the same stripe may be the same, irrespective of how many APUs there may be on the stripe, while the APUs300,400may nonetheless provide interference suppression.
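The augmentation step described above can be condensed into a few lines: the second APU stacks the scalar estimate received from the preceding APU under its own M-dimensional received vector and applies an (M+1)-dimensional combining filter. The sketch below is illustrative only; the names and shapes are assumptions, not part of this disclosure.

```python
import numpy as np

# Sketch of the second APU's local fusion (method 200): augment the local
# received signal with the forwarded scalar estimate, then combine.
def second_apu_estimate(y2, s_hat_prev, v2):
    """y2: local M-dim received vector; s_hat_prev: scalar estimate from
    the preceding APU; v2: (M+1)-dim combining filter. Returns v2^H y_aug."""
    y_aug = np.concatenate([y2, [s_hat_prev]])  # (M+1)-dimensional vector
    return np.vdot(v2, y_aug)                   # improved scalar estimate
```

With B preceding inputs, the same pattern stacks B scalars under the local vector to form the (M+B)-dimensional system discussed next for tree-structured deployments.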
In some embodiments, the radio stripe system800may have a tree structure as illustrated inFIG.3; in that case, the second APU400may receive effective channels and improved estimates of data signals from more than one preceding first APU300. All of these received inputs, i.e. the received effective channels and improved estimates, will then be included in the second APU's400processing. If it is assumed that the radio stripe system800serves K UEs, that the second APU400comprises M antennas and that the second APU400receives input from B APUs, the second APU400may combine the input with its local information to effectively achieve an M+B antenna system. The second APU400may then apply the determined receive combining filter to determine K improved estimates, one per UE. This enables the use of the proposed method100with flexible radio stripe systems800, which may take several different forms. In some embodiments, the method200may further comprise the step290of transmitting the determined effective channels from said served UEs700and the improved estimates of the received data signals to at least one subsequent third APU500. The at least one subsequent third APU500may be located closer to the CPU600than the second APU400. In another embodiment, the method200may further comprise the step290of transmitting the determined effective channels from said served UEs and the improved estimates of the received data signals to the CPU600. This may, for example, happen when no subsequent APU is connected between the second APU400and the CPU600. The determined effective channels from said served UEs700and the improved estimates of the received data signals may then be transmitted to the CPU600for final decoding. It may be appreciated that the number of variables that may be sent from one APU, e.g. the second APU400, to another APU, e.g. the third APU500, along a radio stripe is the same, irrespective of how many APUs there may be on the radio stripe. If a sequence of data symbols is received in the same transmission block, they may be combined in the same way. Hence, the K² effective desired and interfering scalar channels are identical for all those symbols. Only the K improved estimates may need to be sent once per data symbol. Accordingly, the proposed method200provides a scalable solution that may be used in different types of radio stripe systems800. In some embodiments, only a subset of the APUs in the radio stripe system800may participate in decoding the signal of each UE700. In some embodiments, each of the at least two APUs300,400in the radio stripe system800may be equipped with one antenna. In other embodiments, at least one of the at least two APUs300,400in the radio stripe system800may be equipped with more than one antenna. The above proposed methods100,200are now going to be described together with a non-limiting example arrangement. The example arrangement is illustrated inFIG.5. The arrangement inFIG.5comprises two M-antenna APUs, the first APU300and the second APU400. The arrangement further comprises two single-antenna UEs, the first UE710and the second UE720. The first UE710transmits the unit-power signal s1and the second UE720transmits the unit-power signal s2. The received signal at the first APU300is the M-dimensional vector

$$y_1 = h_{11} s_1 + h_{12} s_2 + n_1$$

where h11is the M-dimensional channel, i.e. the channel estimates, from the first UE710to the first APU300and h12is the M-dimensional channel vector from the second UE720to the first APU300, while n1is the M-dimensional noise vector.
As previously described, the first APU300may learn the channels h11, h12from uplink pilots that are transmitted from the UEs710,720. It is here assumed that the first APU300may learn these channels without error. The first APU300thereafter uses this information to determine the M-dimensional receive combining filter, or vector, v11for the first UE710and v12for the second UE720. The first APU300applies the receive combining filters to the received signal y1, wherein improved estimates ŝ11and ŝ12of the data signals s1and s2are determined:

$$\hat{s}_{11} = v_{11}^H y_1 = v_{11}^H h_{11} s_1 + v_{11}^H h_{12} s_2 + v_{11}^H n_1 = g_{11}^1 s_1 + g_{12}^1 s_2 + v_{11}^H n_1$$

$$\hat{s}_{12} = v_{12}^H y_1 = v_{12}^H h_{11} s_1 + v_{12}^H h_{12} s_2 + v_{12}^H n_1 = g_{21}^1 s_1 + g_{22}^1 s_2 + v_{12}^H n_1$$

The notation $g_{ik}^1 = v_{1i}^H h_{1k}$ is introduced for i,k=1, 2. The first APU300may thereafter transmit the improved estimates ŝ11, ŝ12of the data signals and the effective channels $g_{11}^1, g_{12}^1, g_{21}^1, g_{22}^1$ to the second APU400. From the same uplink transmission, the received signal at the second APU400is the M-dimensional vector

$$y_2 = h_{21} s_1 + h_{22} s_2 + n_2$$

where h21is the M-dimensional channel from the first UE710to the second APU400and h22is the M-dimensional channel vector from the second UE720to the second APU400, while n2is the M-dimensional noise vector. In order for the second APU400to determine an improved estimate of s1, it first creates an augmented received signal based on the received data signals and the received improved estimates of the data signals received by the preceding first APU300:

$$\begin{bmatrix} y_2 \\ \hat{s}_{11} \end{bmatrix} = \begin{bmatrix} h_{21} \\ g_{11}^1 \end{bmatrix} s_1 + \begin{bmatrix} h_{22} \\ g_{12}^1 \end{bmatrix} s_2 + \begin{bmatrix} n_2 \\ v_{11}^H n_1 \end{bmatrix}$$

which is an (M+1)-dimensional vector. The second APU400then applies the (M+1)-dimensional receive combining filter v21to determine the improved estimate of the received data signal s1:

$$\hat{s}_{21} = v_{21}^H \begin{bmatrix} y_2 \\ \hat{s}_{11} \end{bmatrix} = v_{21}^H \begin{bmatrix} h_{21} \\ g_{11}^1 \end{bmatrix} s_1 + v_{21}^H \begin{bmatrix} h_{22} \\ g_{12}^1 \end{bmatrix} s_2 + v_{21}^H \begin{bmatrix} n_2 \\ v_{11}^H n_1 \end{bmatrix} = g_{11}^2 s_1 + g_{12}^2 s_2 + v_{21}^H \begin{bmatrix} n_2 \\ v_{11}^H n_1 \end{bmatrix}$$

where the notation $g_{ik}^2 = v_{2i}^H \begin{bmatrix} h_{2k} \\ g_{ik}^1 \end{bmatrix}$ is introduced for i,k=1, 2. Similarly, the second APU400determines the augmented received signal

$$\begin{bmatrix} y_2 \\ \hat{s}_{12} \end{bmatrix} = \begin{bmatrix} h_{21} \\ g_{21}^1 \end{bmatrix} s_1 + \begin{bmatrix} h_{22} \\ g_{22}^1 \end{bmatrix} s_2 + \begin{bmatrix} n_2 \\ v_{12}^H n_1 \end{bmatrix}$$

and then determines an improved estimate of s2by applying the (M+1)-dimensional receive combining vector v22to the augmented received signal as

$$\hat{s}_{22} = v_{22}^H \begin{bmatrix} y_2 \\ \hat{s}_{12} \end{bmatrix} = v_{22}^H \begin{bmatrix} h_{21} \\ g_{21}^1 \end{bmatrix} s_1 + v_{22}^H \begin{bmatrix} h_{22} \\ g_{22}^1 \end{bmatrix} s_2 + v_{22}^H \begin{bmatrix} n_2 \\ v_{12}^H n_1 \end{bmatrix} = g_{21}^2 s_1 + g_{22}^2 s_2 + v_{22}^H \begin{bmatrix} n_2 \\ v_{12}^H n_1 \end{bmatrix}$$

If the second APU400is the “last” APU in the radio stripe system, i.e. there is no other APU between the second APU400and the CPU600, the second APU400transmits the improved estimates ŝ21, ŝ22of the data signals and the effective channels $g_{11}^2, g_{12}^2, g_{21}^2, g_{22}^2$ to the CPU600for final decoding. If the combining filter has norm one, then the variance of the noise term $v_{12}^H n_1$ is the same as for the entries in the noise vector n2, and these noise terms are also independent; thus the noise variance need not be shared between the at least two APUs300,400. Hence, ŝ21, ŝ22and $g_{11}^2, g_{12}^2, g_{21}^2, g_{22}^2$ are all the information that the second APU400sends to the CPU600. Thus, the fronthaul capacity requirement is the same between every pair of APUs, or between the last APU and the CPU600, irrespective of how many APUs may exist along the same fronthaul connection (i.e., the same radio stripe). If there are more than two APUs, APU l for l>2 will operate in the same way as the second APU400, but based on the input from the previous APU, i.e. APU l−1. The proposed methods100,200are particularly useful when implementing signal combining methods that offer interference suppression.
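To make the algebra above concrete, the following noise-free check verifies numerically that the augmented vector [y2; ŝ11] equals [h21; g11^1]s1 + [h22; g12^1]s2 when n1 = n2 = 0. All values are synthetic, and MRC is used for v11 purely for simplicity; the MMSE filters given next replace it when interference suppression is desired.

```python
import numpy as np

# Noise-free numeric check of the augmented-signal identity derived above.
rng = np.random.default_rng(0)
M = 4
h11, h12, h21, h22 = (rng.standard_normal(M) + 1j * rng.standard_normal(M)
                      for _ in range(4))
s1, s2 = 1.0 + 0.5j, -0.3 + 0.8j                     # data symbols

y1 = h11 * s1 + h12 * s2                             # received at first APU
v11 = h11 / np.linalg.norm(h11)                      # unit-norm MRC for UE 1
s_hat11 = np.vdot(v11, y1)                           # improved estimate
g11_1, g12_1 = np.vdot(v11, h11), np.vdot(v11, h12)  # effective channels

y2 = h21 * s1 + h22 * s2                             # received at second APU
y_aug = np.concatenate([y2, [s_hat11]])              # (M+1)-dim signal
rhs = (np.concatenate([h21, [g11_1]]) * s1
       + np.concatenate([h22, [g12_1]]) * s2)
assert np.allclose(y_aug, rhs)                       # matches block equation
```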
For example, the first APU300can use the unit-norm MMSE combining vectors, or filters,

$$v_{11} = \frac{\left( h_{11} h_{11}^H + h_{12} h_{12}^H + \sigma^2 I_M \right)^{-1} h_{11}}{\left\| \left( h_{11} h_{11}^H + h_{12} h_{12}^H + \sigma^2 I_M \right)^{-1} h_{11} \right\|}, \qquad v_{12} = \frac{\left( h_{11} h_{11}^H + h_{12} h_{12}^H + \sigma^2 I_M \right)^{-1} h_{12}}{\left\| \left( h_{11} h_{11}^H + h_{12} h_{12}^H + \sigma^2 I_M \right)^{-1} h_{12} \right\|}$$

which maximize the SINRs for the first UE710and the second UE720, respectively, based on only the received signals at the first APU300. In this expression, $I_M$ denotes the identity matrix of size M and $\sigma^2$ is the noise variance. Next, the second APU400can select the unit-norm MMSE combining vectors

$$v_{21} = \frac{\left( \begin{bmatrix} h_{21} \\ g_{11}^1 \end{bmatrix} \begin{bmatrix} h_{21} \\ g_{11}^1 \end{bmatrix}^H + \begin{bmatrix} h_{22} \\ g_{12}^1 \end{bmatrix} \begin{bmatrix} h_{22} \\ g_{12}^1 \end{bmatrix}^H + \sigma^2 I_{M+1} \right)^{-1} \begin{bmatrix} h_{21} \\ g_{11}^1 \end{bmatrix}}{\left\| \left( \begin{bmatrix} h_{21} \\ g_{11}^1 \end{bmatrix} \begin{bmatrix} h_{21} \\ g_{11}^1 \end{bmatrix}^H + \begin{bmatrix} h_{22} \\ g_{12}^1 \end{bmatrix} \begin{bmatrix} h_{22} \\ g_{12}^1 \end{bmatrix}^H + \sigma^2 I_{M+1} \right)^{-1} \begin{bmatrix} h_{21} \\ g_{11}^1 \end{bmatrix} \right\|}$$

$$v_{22} = \frac{\left( \begin{bmatrix} h_{21} \\ g_{21}^1 \end{bmatrix} \begin{bmatrix} h_{21} \\ g_{21}^1 \end{bmatrix}^H + \begin{bmatrix} h_{22} \\ g_{22}^1 \end{bmatrix} \begin{bmatrix} h_{22} \\ g_{22}^1 \end{bmatrix}^H + \sigma^2 I_{M+1} \right)^{-1} \begin{bmatrix} h_{22} \\ g_{22}^1 \end{bmatrix}}{\left\| \left( \begin{bmatrix} h_{21} \\ g_{21}^1 \end{bmatrix} \begin{bmatrix} h_{21} \\ g_{21}^1 \end{bmatrix}^H + \begin{bmatrix} h_{22} \\ g_{22}^1 \end{bmatrix} \begin{bmatrix} h_{22} \\ g_{22}^1 \end{bmatrix}^H + \sigma^2 I_{M+1} \right)^{-1} \begin{bmatrix} h_{22} \\ g_{22}^1 \end{bmatrix} \right\|}$$

which maximize the SINR for the first UE710and the second UE720, respectively, based on the available signals at the second APU400. Observe that in this example the elementary form is used, where the residual noise term is thermal noise only. In some embodiments, this simplification may be used. In some embodiments, the first APU300may also transmit an improved estimate of the residual noise term. In some embodiments, the second APU400may use a covariance matrix combined with said residual noise term, that is, the last term is a block matrix with one M×M block and one 1×1 block. In an alternative embodiment of the proposed solution, the first APU300may determine log-likelihood ratios (“L-values”) for each information bit associated with each UE700instead of determining effective channels from said served UEs. These determined L-values may then be transmitted to the at least one subsequent second APU400. The at least one subsequent APU400may then add up its locally obtained log-likelihood ratio values with those obtained from the first APU300. This process may continue until the signal reaches the CPU600. In some embodiments, each APU may use a threshold to decide if its impact on the log-likelihood is sufficiently large for it to have a non-negligible impact on the final decoding. If not, the APU may forward the log-likelihood ratios without changing them. According to a third aspect, there is provided a first APU300for performing the method100according to the first aspect. The first APU300may be used in, but is not limited to, a radio stripe system800such as illustrated inFIG.3. The first APU300is now going to be described with reference toFIG.6. The first APU300is configured for sequential receive combining in a radio stripe system800. The radio stripe system800comprises at least two APUs300,400and a CPU600. The at least two APUs300,400are connected in series to the CPU600. The radio stripe system800serves at least two UEs700. As illustrated inFIG.6, the first APU300comprises a processing circuitry310and a memory circuitry320. Additionally, or alternatively, the first APU300may further comprise a transmitter, or a transmitting circuitry340, configured to transmit data to other apparatuses, such as the at least one subsequent second APU400. Additionally, or alternatively, the first APU300may further comprise a receiver, or a receiving circuitry330, configured to receive data from other apparatuses, such as the at least one subsequent second APU400. The memory circuitry320stores computer program code which, when run in the processing circuitry310, causes the first APU300to obtain channel estimates for channels to said served UEs.
The obtained channel estimates may be CSI obtained from reference pilot signals transmitted by said served UEs700. The first APU300is further caused to determine a receive combining filter based on the obtained channel estimates. The receive combining filter is going to be applied to received data signals. The memory circuitry320further stores computer program code which, when run in the processing circuitry310, causes the first APU300to determine effective channels from the served UEs700based on the obtained channel estimates and the determined receive combining filter. The effective channels represent the effective channel created after the receive combining filter is applied to each channel for said served UEs. The first APU300is further caused to transmit the effective channels from said served UEs700to at least one subsequent second APU400. The at least one subsequent second APU400may be located closer to the CPU600than the first APU300. In some embodiments, the memory circuitry320may store computer program code which, when run in the processing circuitry310, further causes the first APU300to receive data signals from said served UEs, and to determine improved estimates of the received data signals by applying the determined receive combining filter to the received data signals from said served UEs700. In some embodiments, the memory circuitry320may store computer program code which, when run in the processing circuitry310, further causes the first APU300to transmit the determined improved estimates of the received data signals to at least one subsequent second APU400. In some embodiments, the receive combining filter may be generated by a method selected from the group comprised of MRC, ZF combining and MMSE combining. In some embodiments, each of the at least two APUs300,400in the radio stripe system may be equipped with one antenna. In other embodiments, at least one of the at least two APUs300,400may be equipped with more than one antenna. According to a fourth aspect, there is provided a second APU400for implementing the method according to the second aspect. The second APU400is now going to be described with reference toFIG.7. The second APU400may be used in, but is not limited to, a radio stripe system800such as illustrated inFIG.3. The second APU400is configured for sequential receive combining in a radio stripe system800. The radio stripe system800comprises at least two APUs300,400and a CPU600, the at least two APUs300,400being connected in series to the CPU600. The radio stripe system800serves at least two UEs700. As illustrated inFIG.7, the second APU400comprises a processor, or a processing circuitry410, and a memory, or a memory circuitry420. Additionally, or alternatively, the second APU400may further comprise a transmitter, or a transmitting circuitry440, configured to transmit data to other apparatuses, such as the at least one preceding first APU300. Additionally, or alternatively, the second APU400may further comprise a receiver, or a receiving circuitry430, configured to receive data from other apparatuses, such as the at least one preceding first APU300. The memory circuitry420stores computer program code which, when run in the processing circuitry410, causes the second APU400to obtain channel estimates for channels to said served UEs. The obtained channel estimates may be CSI obtained from reference pilot signals transmitted by the served UEs700.
The second APU400is further caused to receive, from at least one preceding first APU300, effective channels from said served UEs700to said preceding first APU300. The at least one preceding first APU300may be located further away from the CPU600than the second APU400. The memory circuitry420further stores computer program code which, when run in the processing circuitry410, causes the second APU400to determine a receive combining filter based on the obtained channel estimates and the received effective channels from said at least one preceding first APU300. The receive combining filter is going to be applied to received data signals. In some embodiments, the memory circuitry420may store computer program code which, when run in the processing circuitry410, further causes the second APU400to determine effective channels from said served UEs700based on the obtained channel estimates, the determined receive combining filter and the received effective channels from said at least one preceding first APU300. The effective channels represent the effective channel created after the receive combining filter is applied to each channel for said served UEs. In some embodiments, the memory circuitry420may store computer program code which, when run in the processing circuitry410, further causes the second APU400to receive data signals from said served UEs700and to receive, from said at least one preceding first APU300, improved estimates of data signals received by the preceding first APU300. The second APU400may further be caused to determine augmented received signals based on the received data signals and the received improved estimates of the data signals received by the preceding first APU300. The second APU400may further be caused to determine improved estimates of the received data signals by applying the determined receive combining filter to the augmented received data signals from said served UEs. In some embodiments, the memory circuitry420may store computer program code which, when run in the processing circuitry410, further causes the second APU400to transmit the determined effective channels from said served UEs700and the improved estimates of the received data signals to at least one subsequent third APU500. The at least one subsequent third APU500may be located closer to the CPU600than the second APU400. In other embodiments, the memory circuitry420may store computer program code which, when run in the processing circuitry410, further causes the second APU400to transmit the determined effective channels from said served UEs and the improved estimates of the received data signals to the CPU600. In some embodiments, the receive combining filter is generated by a method selected from the group comprised of: MRC, ZF combining and MMSE combining. According to a fifth aspect, there is provided a computer program comprising instructions which, when executed on a processing circuitry, cause the processing circuitry to carry out the method according to the first aspect and/or the second aspect. According to a sixth aspect, there is provided a carrier containing the computer program of the fifth aspect, wherein the carrier is one of an electronic signal, optical signal, radio signal, or computer readable storage medium. A Numerical Linear Processing Example In the following, a non-limiting example of the proposed solution will now be described. A radio stripe with10single-antenna, equally spaced APUs may be deployed on a 20-meter-long wall.
The CPU may be located at the right end of the stripe such that the fronthaul connection goes from APU1-APU2-APU3-...-APU9-APU10-CPU (the setup may be symmetric, so a mirrored deployment would be equivalent). Three UEs may be located 5 meters from the wall and 3 or 6 meters from each other, as shown inFIG.8. The UEs may be transmitting orthogonal pilot sequences, which may enable each APU to estimate its channel to each UE locally, using MMSE estimation. The propagation channels may be computed using free-space line-of-sight propagation modelling at a 3 GHz carrier frequency. In the following, the proposed solution is compared with two baseline methods during uplink data transmission. These two baseline methods are known in the art and are:
(1) Distributed MRC, where each APU may process its signal locally and the signals may be accumulated along the radio stripe. The CPU may use the accumulated scalar signals, one per UE, to decode the signals.
(2) Centralized MMSE combining, where each APU may send its channel estimates and received signals from the data transmission to the CPU, which may perform the decoding using a state-of-the-art MMSE receiver, in a cellular-system fashion.
These baseline methods may be compared with the proposed solution, wherein, in this example, each APU may use MMSE combining based on its locally available information. The proposed solution is here referred to as “sequential MMSE” or “Seq-MMSE”. FIG.9shows the achievable rates with the proposed solution and the two previously described known methods, as a function of the average Signal-to-Noise Ratio (SNR) per UE. The performance with the distributed MRC method may saturate at high SNR, due to its lack of interference suppression capabilities. In this short-range scenario, an SNR of 20-30 dB may be achieved even when the transmit power is very low. In contrast, the centralized MMSE method may provide an achievable rate that grows linearly with the SNR in dB-scale, as expected from a method that provides interference suppression and has good CSI. However, its fronthaul requirements are not scalable. The proposed solution, described herein, may combine the benefits of the two baseline methods. The proposed solution may have a distributed implementation but may also provide achievable rates that grow linearly with the number of antennas. The performance loss compared to centralized MMSE is due to the fact that only two antennas are considered at a time in the interference suppression, as compared to the joint processing of all antennas that is done in the centralized case. This loss will diminish if each APU has multiple antennas as well. Although the subject matter described herein may be implemented in any appropriate type of system using any suitable components, the embodiments described herein relate to a wireless network, such as the example wireless communication network illustrated inFIG.10. For simplicity, the wireless communication network ofFIG.10only depicts network1006, network nodes1060and1060b, and Wireless Devices (WDs)1010,1010b, and1010c. The wireless communication network may further include any additional elements suitable to support communication between wireless devices or between a wireless device and another communication device, such as a landline telephone. Of the illustrated components, network node1060and wireless device (WD)1010are depicted with additional detail.
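Returning briefly to the numerical linear processing example above, the free-space line-of-sight channels and the sequential sweep along the stripe may be simulated along the following lines, reusing the sequential_mmse_step sketch given earlier. The exact APU and UE coordinates, the noise power, the random symbol model and the random seed are illustrative assumptions; the actual geometry is the one shown inFIG.8.

import numpy as np

# Assumed geometry loosely following FIG.8: 10 APUs on a 20 m wall,
# 3 UEs located 5 m from the wall with 3 m spacing
c = 3e8
fc = 3e9                                        # 3 GHz carrier frequency
wavelength = c / fc
apu_x = np.linspace(1.0, 19.0, 10)              # APU positions along the wall
ue_pos = np.array([[7.0, 5.0], [10.0, 5.0], [13.0, 5.0]])

def los_channels(apu_x, ue_pos):
    # Free-space LOS model: amplitude lambda/(4*pi*d), phase from path length
    d = np.sqrt((apu_x[:, None] - ue_pos[None, :, 0]) ** 2
                + ue_pos[None, :, 1] ** 2)
    return (wavelength / (4 * np.pi * d)) * np.exp(-2j * np.pi * d / wavelength)

H = los_channels(apu_x, ue_pos)                 # (10 APUs, 3 UEs)
sigma2 = 1e-12                                  # assumed noise power
rng = np.random.default_rng(0)
x = (rng.standard_normal(3) + 1j * rng.standard_normal(3)) / np.sqrt(2)

s = H_eff = C = None
for m in range(10):                             # sweep APU1 -> ... -> APU10
    noise = np.sqrt(sigma2 / 2) * (rng.standard_normal()
                                   + 1j * rng.standard_normal())
    y_m = H[m] @ x + noise                      # sample received at APU m
    s, H_eff, C = sequential_mmse_step(H[m], y_m, sigma2, H_eff, s, C)
# the CPU decodes from s; H_eff and C characterize the combined observation

Replacing the MMSE step with a plain accumulation of conjugate-channel-weighted local signals would roughly reproduce the distributed MRC baseline that FIG.9 contrasts against.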
The illustrated wireless communication network may provide communication and other types of services to one or more wireless devices to facilitate the wireless devices' access to and/or use of the services provided by the wireless communication network. The wireless communication network may comprise and/or interface with any type of communication, telecommunication, data, cellular, and/or radio network or other similar type of system. In some embodiments, the wireless communication network may be configured to operate according to specific standards or other types of predefined rules or procedures. Thus, particular embodiments of the wireless communication network may implement communication standards, such as Global System for Mobile Communications (GSM), Universal Mobile Telecommunications System (UMTS), Long Term Evolution (LTE), and/or other suitable 2G, 3G, 4G, or 5G standards; wireless local area network (WLAN) standards, such as the IEEE 802.11 standards; and/or any other appropriate wireless communication standard, such as the Worldwide Interoperability for Microwave Access (WiMax), Bluetooth, and/or ZigBee standards. Network1006may comprise one or more backhaul networks, core networks, IP networks, public switched telephone networks (PSTNs), packet data networks, optical networks, wide-area networks (WANs), local area networks (LANs), wireless local area networks (WLANs), wired networks, wireless networks, metropolitan area networks, and other networks to enable communication between devices. Network node1060and WD1010comprise various components described in more detail below. These components may work together in order to provide network node and/or wireless device functionality, such as providing wireless connections in a wireless network. In different embodiments, the wireless network may comprise any number of wired or wireless networks, network nodes, base stations, controllers, wireless devices, relay stations, and/or any other components that may facilitate or participate in the communication of data and/or signals whether via wired or wireless connections. As used herein, network node refers to equipment capable, configured, arranged and/or operable to communicate directly or indirectly with a wireless device and/or with other network nodes or equipment in the wireless communication network to enable and/or provide wireless access to the wireless device and/or to perform other functions (e.g., administration) in the wireless communication network. Examples of network nodes include, but are not limited to, access points (APs) (e.g., radio access points), base stations (BSs) (e.g., radio base stations, Node Bs, and evolved Node Bs (eNBs)). Base stations may be categorized based on the amount of coverage they provide (or, stated differently, their transmit power level) and may then also be referred to as femto base stations, pico base stations, micro base stations, or macro base stations. A base station may be a relay node or a relay donor node controlling a relay. A network node may also include one or more (or all) parts of a distributed radio base station such as centralized digital units and/or remote radio units (RRUs), sometimes referred to as Remote Radio Heads (RRHs). Such remote radio units may or may not be integrated with an antenna as an antenna integrated radio. Parts of a distributed radio base station may also be referred to as nodes in a distributed antenna system (DAS). 
Yet further examples of network nodes include multi-standard radio (MSR) equipment such as MSR BSs, network controllers such as radio network controllers (RNCs) or base station controllers (BSCs), base transceiver stations (BTSs), transmission points, transmission nodes, multi-cell/multicast coordination entities (MCEs), core network nodes (e.g., MSCs, MMEs), O&M nodes, OSS nodes, SON nodes, positioning nodes (e.g., E-SMLCs), and/or MDTs. As another example, network node1060may be a virtual network node as described in more detail below. More generally, however, network nodes may represent any suitable device (or group of devices) capable, configured, arranged, and/or operable to enable and/or provide a wireless device with access to the wireless communication network or to provide some service to a wireless device that has accessed the wireless communication network. InFIG.10, network node1060includes processing circuitry1070, device readable medium1080, interface1090, user interface equipment1082, auxiliary equipment1084, power source1086, power circuitry1087, and antenna1062. Although network node1060illustrated in the example wireless communication network ofFIG.10may represent a device that includes the illustrated combination of hardware components, other embodiments may comprise network nodes with different combinations of components. It is to be understood that a network node may comprise any suitable combination of hardware and/or software needed to perform the tasks, features, functions and methods disclosed herein. Moreover, while the components of network node1060are depicted as single boxes located within a larger box, or nested within multiple boxes, in practice, a network node may comprise multiple different physical components that make up a single illustrated component (e.g., device readable medium1080may comprise multiple separate hard drives as well as multiple RAM modules). Similarly, network node1060may be composed of multiple physically separate components (e.g., a NodeB component and an RNC component, or a BTS component and a BSC component, etc.), which may each have their own respective components. In certain scenarios in which network node1060comprises multiple separate components (e.g., BTS and BSC components), one or more of the separate components may be shared among several network nodes. For example, a single RNC may control multiple NodeBs. In such a scenario, each unique NodeB and RNC pair may in some instances be considered a single separate network node. In some embodiments, network node1060may be configured to support multiple radio access technologies (RATs). In such embodiments, some components may be duplicated (e.g., separate device readable medium1080for the different RATs) and some components may be reused (e.g., the same antenna1062may be shared by the RATs). Network node1060may also include multiple sets of the various illustrated components for different wireless technologies integrated into network node1060, such as, for example, GSM, WCDMA, LTE, NR, WiFi, or Bluetooth wireless technologies. These wireless technologies may be integrated into the same or different chip or set of chips and other components within network node1060. Processing circuitry1070is configured to perform any determining, calculating, or similar operations (e.g., certain obtaining operations) described herein as being provided by a network node.
These operations performed by processing circuitry1070may include processing information obtained by processing circuitry1070by, for example, converting the obtained information into other information, comparing the obtained information or converted information to information stored in the network node, and/or performing one or more operations based on the obtained information or converted information, and as a result of said processing making a determination. Processing circuitry1070may comprise a combination of one or more of a microprocessor, controller, microcontroller, central processing unit, digital signal processor, application-specific integrated circuit, field programmable gate array, or any other suitable computing device, resource, or combination of hardware, software and/or encoded logic operable to provide, either alone or in conjunction with other network node1060components, such as device readable medium1080, network node1060functionality. For example, processing circuitry1070may execute instructions stored in device readable medium1080or in memory within processing circuitry1070. Such functionality may include providing any of the various wireless features or benefits discussed herein. In some embodiments, processing circuitry1070may include a system on a chip (SOC). In some embodiments, processing circuitry1070may include one or more of radio frequency (RF) transceiver circuitry1072and baseband processing circuitry1074. In some embodiments, radio frequency (RF) transceiver circuitry1072and baseband processing circuitry1074may be on separate chips (or sets of chips), boards, or units, such as radio units and digital units. In alternative embodiments, part or all of RF transceiver circuitry1072and baseband processing circuitry1074may be on the same chip or set of chips, boards, or units. In certain embodiments, some or all of the functionality described herein as being provided by a network node, base station, eNB or other such network device may be provided by processing circuitry1070executing instructions stored on device readable medium1080or memory within processing circuitry1070. In alternative embodiments, some or all of the functionality may be provided by processing circuitry1070without executing instructions stored on a separate or discrete device readable medium, such as in a hard-wired manner. In any of those embodiments, whether executing instructions stored on a device readable storage medium or not, processing circuitry1070can be configured to perform the described functionality. The benefits provided by such functionality are not limited to processing circuitry1070alone or to other components of network node1060, but are enjoyed by network node1060as a whole, and/or by end users and the wireless network generally. Device readable medium1080may comprise any form of volatile or non-volatile computer readable memory including, without limitation, persistent storage, solid-state memory, remotely mounted memory, magnetic media, optical media, random access memory (RAM), read-only memory (ROM), mass storage media (for example, a hard disk), removable storage media (for example, a flash drive, a Compact Disk (CD) or a Digital Video Disk (DVD)), and/or any other volatile or non-volatile, non-transitory device readable and/or computer-executable memory devices that store information, data, and/or instructions that may be used by processing circuitry1070. 
Device readable medium1080may store any suitable instructions, data or information, including a computer program, software, an application including one or more of logic, rules, code, tables, etc. and/or other instructions capable of being executed by processing circuitry1070and utilized by network node1060. Device readable medium1080may be used to store any calculations made by processing circuitry1070and/or any data received via interface1090. In some embodiments, processing circuitry1070and device readable medium1080may be considered to be integrated. Interface1090is used in the wired or wireless communication of signaling and/or data between network node1060, network1006, and/or WDs1010. As illustrated, interface1090comprises port(s)/terminal(s)1094to send and receive data, for example to and from network1006over a wired connection. Interface1090also includes radio front end circuitry1092that may be coupled to, or in certain embodiments a part of, antenna1062. Radio front end circuitry1092comprises filters1098and amplifiers1096. Radio front end circuitry1092may be connected to antenna1062and processing circuitry1070. Radio front end circuitry may be configured to condition signals communicated between antenna1062and processing circuitry1070. Radio front end circuitry1092may receive digital data that is to be sent out to other network nodes or WDs via a wireless connection. Radio front end circuitry1092may convert the digital data into a radio signal having the appropriate channel and bandwidth parameters using a combination of filters1098and/or amplifiers1096. The radio signal may then be transmitted via antenna1062. Similarly, when receiving data, antenna1062may collect radio signals which are then converted into digital data by radio front end circuitry1092. The digital data may be passed to processing circuitry1070. In other embodiments, the interface may comprise different components and/or different combinations of components. In certain alternative embodiments, network node1060may not include separate radio front end circuitry1092; instead, processing circuitry1070may comprise radio front end circuitry and may be connected to antenna1062without separate radio front end circuitry1092. Similarly, in some embodiments, all or some of RF transceiver circuitry1072may be considered a part of interface1090. In still other embodiments, interface1090may include one or more ports or terminals1094, radio front end circuitry1092, and RF transceiver circuitry1072, as part of a radio unit (not shown), and interface1090may communicate with baseband processing circuitry1074, which is part of a digital unit (not shown). Antenna1062may include one or more antennas, or antenna arrays, configured to send and/or receive wireless signals. Antenna1062may be coupled to radio front end circuitry1092and may be any type of antenna capable of transmitting and receiving data and/or signals wirelessly. In some embodiments, antenna1062may comprise one or more omni-directional, sector or panel antennas operable to transmit/receive radio signals between, for example, 2 GHz and 66 GHz. An omni-directional antenna may be used to transmit/receive radio signals in any direction, a sector antenna may be used to transmit/receive radio signals from devices within a particular area, and a panel antenna may be a line of sight antenna used to transmit/receive radio signals in a relatively straight line. In some instances, the use of more than one antenna may be referred to as MIMO.
In certain embodiments, antenna1062may be separate from network node1060and may be connectable to network node1060through an interface or port. Antenna1062, interface1090, and/or processing circuitry1070may be configured to perform any receiving operations and/or certain obtaining operations described herein as being performed by a network node. Any information, data and/or signals may be received from a wireless device, another network node and/or any other network equipment. Similarly, antenna1062, interface1090, and/or processing circuitry1070may be configured to perform any transmitting operations described herein as being performed by a network node. Any information, data and/or signals may be transmitted to a wireless device, another network node and/or any other network equipment. Power circuitry1087may comprise, or be coupled to, power management circuitry and is configured to supply the components of network node1060with power for performing the functionality described herein. Power circuitry1087may receive power from power source1086. Power source1086and/or power circuitry1087may be configured to provide power to the various components of network node1060in a form suitable for the respective components (e.g., at a voltage and current level needed for each respective component). Power source1086may either be included in, or external to, power circuitry1087and/or network node1060. For example, network node1060may be connectable to an external power source (e.g., an electricity outlet) via an input circuitry or interface such as an electrical cable, whereby the external power source supplies power to power circuitry1087. As a further example, power source1086may comprise a source of power in the form of a battery or battery pack which is connected to, or integrated in, power circuitry1087. The battery may provide backup power should the external power source fail. Other types of power sources, such as photovoltaic devices, may also be used. Alternative embodiments of network node1060may include additional components beyond those shown inFIG.10that may be responsible for providing certain aspects of the network node's functionality, including any of the functionality described herein and/or any functionality necessary to support the subject matter described herein. For example, network node1060may include user interface equipment to allow input of information into network node1060and to allow output of information from network node1060. This may allow a user to perform diagnostic, maintenance, repair, and other administrative functions for network node1060. As used herein, wireless device (WD) refers to a device capable, configured, arranged and/or operable to communicate wirelessly with network nodes and/or other wireless devices. Unless otherwise noted, the term WD may be used interchangeably herein with user equipment (UE). Communicating wirelessly may involve transmitting and/or receiving wireless signals using electromagnetic waves, radio waves, infrared waves, and/or other types of signals suitable for conveying information through air. In some embodiments, a WD may be configured to transmit and/or receive information without direct human interaction. For instance, a WD may be designed to transmit information to a network on a predetermined schedule, when triggered by an internal or external event, or in response to requests from the network. 
Examples of a WD include, but are not limited to, a smart phone, a mobile phone, a cell phone, a voice over IP (VoIP) phone, a wireless local loop phone, a desktop computer, a personal digital assistant (PDA), a wireless camera, a gaming console or device, a music storage device, a playback appliance, a wearable terminal device, a wireless endpoint, a mobile station, a tablet, a laptop, a laptop-embedded equipment (LEE), a laptop-mounted equipment (LME), a smart device, a wireless customer-premise equipment (CPE), a vehicle-mounted wireless terminal device, etc. A WD may support device-to-device (D2D) communication, for example by implementing a 3GPP standard for sidelink communication, vehicle-to-vehicle (V2V), vehicle-to-infrastructure (V2I), vehicle-to-everything (V2X), and may in this case be referred to as a D2D communication device. As yet another specific example, in an Internet of Things (IoT) scenario, a WD may represent a machine or other device that performs monitoring and/or measurements, and transmits the results of such monitoring and/or measurements to another WD and/or a network node. The WD may in this case be a machine-to-machine (M2M) device, which may in a 3GPP context be referred to as an MTC device. As one particular example, the WD may be a UE implementing the 3GPP narrow band internet of things (NB-IoT) standard. Particular examples of such machines or devices are sensors, metering devices such as power meters, industrial machinery, or home or personal appliances (e.g., refrigerators, televisions, etc.), and personal wearables (e.g., watches, fitness trackers, etc.). In other scenarios, a WD may represent a vehicle or other equipment that is capable of monitoring and/or reporting on its operational status or other functions associated with its operation. A WD as described above may represent the endpoint of a wireless connection, in which case the device may be referred to as a wireless terminal. Furthermore, a WD as described above may be mobile, in which case it may also be referred to as a mobile device or a mobile terminal. As illustrated, wireless device1010includes antenna1011, interface1014, processing circuitry1020, device readable medium1030, user interface equipment1032, auxiliary equipment1034, power source1036and power circuitry1037. WD1010may include multiple sets of one or more of the illustrated components for different wireless technologies supported by WD1010, such as, for example, GSM, WCDMA, LTE, NR, WiFi, WiMAX, or Bluetooth wireless technologies, just to mention a few. These wireless technologies may be integrated into the same or different chips or set of chips as other components within WD1010. Antenna1011may include one or more antennas or antenna arrays, configured to send and/or receive wireless signals, and is connected to interface1014. In certain alternative embodiments, antenna1011may be separate from WD1010and be connectable to WD1010through an interface or port. Antenna1011, interface1014, and/or processing circuitry1020may be configured to perform any receiving or transmitting operations described herein as being performed by a WD. Any information, data and/or signals may be received from a network node and/or another WD. In some embodiments, radio front end circuitry and/or antenna1011may be considered an interface. As illustrated, interface1014comprises radio front end circuitry1012and antenna1011. Radio front end circuitry1012comprises one or more filters1013and amplifiers1016.
Radio front end circuitry1012is connected to antenna1011and processing circuitry1020, and is configured to condition signals communicated between antenna1011and processing circuitry1020. Radio front end circuitry1012may be coupled to or a part of antenna1011. In some embodiments, WD1010may not include separate radio front end circuitry1012; rather, processing circuitry1020may comprise radio front end circuitry and may be connected to antenna1011. Similarly, in some embodiments, some or all of RF transceiver circuitry1022may be considered a part of interface1014. Radio front end circuitry1012may receive digital data that is to be sent out to other network nodes or WDs via a wireless connection. Radio front end circuitry1012may convert the digital data into a radio signal having the appropriate channel and bandwidth parameters using a combination of filters1013and/or amplifiers1016. The radio signal may then be transmitted via antenna1011. Similarly, when receiving data, antenna1011may collect radio signals which are then converted into digital data by radio front end circuitry1012. The digital data may be passed to processing circuitry1020. In other embodiments, the interface may comprise different components and/or different combinations of components. Processing circuitry1020may comprise a combination of one or more of a microprocessor, controller, microcontroller, central processing unit, digital signal processor, application-specific integrated circuit, field programmable gate array, or any other suitable computing device, resource, or combination of hardware, software, and/or encoded logic operable to provide, either alone or in conjunction with other WD1010components, such as device readable medium1030, WD1010functionality. Such functionality may include providing any of the various wireless features or benefits discussed herein. For example, processing circuitry1020may execute instructions stored in device readable medium1030or in memory within processing circuitry1020to provide the functionality disclosed herein. As illustrated, processing circuitry1020includes one or more of RF transceiver circuitry1022, baseband processing circuitry1024, and application processing circuitry1026. In other embodiments, the processing circuitry may comprise different components and/or different combinations of components. In certain embodiments, processing circuitry1020of WD1010may comprise a SOC. In some embodiments, RF transceiver circuitry1022, baseband processing circuitry1024, and application processing circuitry1026may be on separate chips or sets of chips. In alternative embodiments, part or all of baseband processing circuitry1024and application processing circuitry1026may be combined into one chip or set of chips, and RF transceiver circuitry1022may be on a separate chip or set of chips. In still alternative embodiments, part or all of RF transceiver circuitry1022and baseband processing circuitry1024may be on the same chip or set of chips, and application processing circuitry1026may be on a separate chip or set of chips. In yet other alternative embodiments, part or all of RF transceiver circuitry1022, baseband processing circuitry1024, and application processing circuitry1026may be combined in the same chip or set of chips. In some embodiments, RF transceiver circuitry1022may be a part of interface1014. RF transceiver circuitry1022may condition RF signals for processing circuitry1020.
In certain embodiments, some or all of the functionality described herein as being performed by a WD may be provided by processing circuitry1020executing instructions stored on device readable medium1030, which in certain embodiments may be a computer-readable storage medium. In alternative embodiments, some or all of the functionality may be provided by processing circuitry1020without executing instructions stored on a separate or discrete device readable storage medium, such as in a hard-wired manner. In any of those particular embodiments, whether executing instructions stored on a device readable storage medium or not, processing circuitry1020can be configured to perform the described functionality. The benefits provided by such functionality are not limited to processing circuitry1020alone or to other components of WD1010, but are enjoyed by WD1010as a whole, and/or by end users and the wireless network generally. Processing circuitry1020may be configured to perform any determining, calculating, or similar operations (e.g., certain obtaining operations) described herein as being performed by a WD. These operations, as performed by processing circuitry1020, may include processing information obtained by processing circuitry1020by, for example, converting the obtained information into other information, comparing the obtained information or converted information to information stored by WD1010, and/or performing one or more operations based on the obtained information or converted information, and as a result of said processing making a determination. Device readable medium1030may be operable to store a computer program, software, an application including one or more of logic, rules, code, tables, etc. and/or other instructions capable of being executed by processing circuitry1020. Device readable medium1030may include computer memory (e.g., Random Access Memory (RAM) or Read Only Memory (ROM)), mass storage media (e.g., a hard disk), removable storage media (e.g., a Compact Disk (CD) or a Digital Video Disk (DVD)), and/or any other volatile or non-volatile, non-transitory device readable and/or computer executable memory devices that store information, data, and/or instructions that may be used by processing circuitry1020. In some embodiments, processing circuitry1020and device readable medium1030may be considered to be integrated. User interface equipment1032may provide components that allow for a human user to interact with WD1010. Such interaction may be of many forms, such as visual, audial, tactile, etc. User interface equipment1032may be operable to produce output to the user and to allow the user to provide input to WD1010. The type of interaction may vary depending on the type of user interface equipment1032installed in WD1010. For example, if WD1010is a smart phone, the interaction may be via a touch screen; if WD1010is a smart meter, the interaction may be through a screen that provides usage (e.g., the number of gallons used) or a speaker that provides an audible alert (e.g., if smoke is detected). User interface equipment1032may include input interfaces, devices and circuits, and output interfaces, devices and circuits. User interface equipment1032is configured to allow input of information into WD1010, and is connected to processing circuitry1020to allow processing circuitry1020to process the input information. 
User interface equipment1032may include, for example, a microphone, a proximity or other sensor, keys/buttons, a touch display, one or more cameras, a USB port, or other input circuitry. User interface equipment1032is also configured to allow output of information from WD1010, and to allow processing circuitry1020to output information from WD1010. User interface equipment1032may include, for example, a speaker, a display, vibrating circuitry, a USB port, a headphone interface, or other output circuitry. Using one or more input and output interfaces, devices, and circuits, of user interface equipment1032, WD1010may communicate with end users and/or the wireless network, and allow them to benefit from the functionality described herein. Auxiliary equipment1034is operable to provide more specific functionality which may not be generally performed by WDs. This may comprise specialized sensors for doing measurements for various purposes, interfaces for additional types of communication such as wired communications etc. The inclusion and type of components of auxiliary equipment1034may vary depending on the embodiment and/or scenario. Power source1036may, in some embodiments, be in the form of a battery or battery pack. Other types of power sources, such as an external power source (e.g., an electricity outlet), photovoltaic devices or power cells, may also be used. WD1010may further comprise power circuitry1037for delivering power from power source1036to the various parts of WD1010which need power from power source1036to carry out any functionality described or indicated herein. Power circuitry1037may in certain embodiments comprise power management circuitry. Power circuitry1037may additionally or alternatively be operable to receive power from an external power source; in which case WD1010may be connectable to the external power source (such as an electricity outlet) via input circuitry or an interface such as an electrical power cable. Power circuitry1037may also in certain embodiments be operable to deliver power from an external power source to power source1036. This may be, for example, for the charging of power source1036. Power circuitry1037may perform any formatting, converting, or other modification to the power from power source1036to make the power suitable for the respective components of WD1010to which power is supplied. FIG.11illustrates one embodiment of a UE in accordance with various aspects described herein. As used herein, a user equipment or UE may not necessarily have a user in the sense of a human user who owns and/or operates the relevant device. Instead, a UE may represent a device that is intended for sale to, or operation by, a human user but which may not, or which may not initially, be associated with a specific human user (e.g., a smart sprinkler controller). Alternatively, a UE may represent a device that is not intended for sale to, or operation by, an end user but which may be associated with or operated for the benefit of a user (e.g., a smart power meter). UE1100may be any UE identified by the 3rd Generation Partnership Project (3GPP), including an NB-IoT UE, a machine type communication (MTC) UE, and/or an enhanced MTC (eMTC) UE. UE1100, as illustrated inFIG.11, is one example of a WD configured for communication in accordance with one or more communication standards promulgated by the 3rd Generation Partnership Project (3GPP), such as 3GPP's GSM, UMTS, LTE, and/or 5G standards. As mentioned previously, the terms WD and UE may be used interchangeably.
Accordingly, althoughFIG.11depicts a UE, the components discussed herein are equally applicable to a WD, and vice-versa. InFIG.11, UE1100includes processing circuitry1101that is operatively coupled to input/output interface1105, radio frequency (RF) interface1109, network connection interface1111, memory1115including random access memory (RAM)1117, read-only memory (ROM)1114, and storage medium1121or the like, communication subsystem1131, power source1113, and/or any other component, or any combination thereof. Storage medium1121includes operating system1123, application program1125, and data1127. In other embodiments, storage medium1121may include other similar types of information. Certain UEs may utilize all of the components shown inFIG.11, or only a subset of the components. The level of integration between the components may vary from one UE to another UE. Further, certain UEs may contain multiple instances of a component, such as multiple processors, memories, transceivers, transmitters, receivers, etc. InFIG.11, processing circuitry1101may be configured to process computer instructions and data. Processing circuitry1101may be configured to implement any sequential state machine operative to execute machine instructions stored as machine-readable computer programs in the memory, such as one or more hardware-implemented state machines (e.g., in discrete logic, FPGA, ASIC, etc.); programmable logic together with appropriate firmware; one or more stored program, general-purpose processors, such as a microprocessor or Digital Signal Processor (DSP), together with appropriate software; or any combination of the above. For example, the processing circuitry1101may include two central processing units (CPUs). Data may be information in a form suitable for use by a computer. In the depicted embodiment, input/output interface1105may be configured to provide a communication interface to an input device, output device, or input and output device. UE1100may be configured to use an output device via input/output interface1105. An output device may use the same type of interface port as an input device. For example, a USB port may be used to provide input to and output from UE1100. The output device may be a speaker, a sound card, a video card, a display, a monitor, a printer, an actuator, an emitter, a smartcard, another output device, or any combination thereof. UE1100may be configured to use an input device via input/output interface1105to allow a user to capture information into UE1100. The input device may include a touch-sensitive or presence-sensitive display, a camera (e.g., a digital camera, a digital video camera, a web camera, etc.), a microphone, a sensor, a mouse, a trackball, a directional pad, a trackpad, a scroll wheel, a smartcard, and the like. The presence-sensitive display may include a capacitive or resistive touch sensor to sense input from a user. A sensor may be, for instance, an accelerometer, a gyroscope, a tilt sensor, a force sensor, a magnetometer, an optical sensor, a proximity sensor, another like sensor, or any combination thereof. For example, the input device may be an accelerometer, a magnetometer, a digital camera, a microphone, and an optical sensor. InFIG.11, RF interface1109may be configured to provide a communication interface to RF components such as a transmitter, a receiver, and an antenna. Network connection interface1111may be configured to provide a communication interface to network1143a.
Network1143amay encompass wired and/or wireless networks such as a local-area network (LAN), a wide-area network (WAN), a computer network, a wireless network, a telecommunications network, another like network or any combination thereof. For example, network1143amay comprise a Wi-Fi network. Network connection interface1111may be configured to include a receiver and a transmitter interface used to communicate with one or more other devices over a communication network according to one or more communication protocols, such as Ethernet, TCP/IP, SONET, ATM, or the like. Network connection interface1111may implement receiver and transmitter functionality appropriate to the communication network links (e.g., optical, electrical, and the like). The transmitter and receiver functions may share circuit components, software or firmware, or alternatively may be implemented separately. RAM1117may be configured to interface via bus1102to processing circuitry1101to provide storage or caching of data or computer instructions during the execution of software programs such as the operating system, application programs, and device drivers. ROM1114may be configured to provide computer instructions or data to processing circuitry1101. For example, ROM1114may be configured to store invariant low-level system code or data for basic system functions such as basic input and output (I/O), startup, or reception of keystrokes from a keyboard that are stored in a non-volatile memory. Storage medium1121may be configured to include memory such as RAM, ROM, programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), magnetic disks, optical disks, floppy disks, hard disks, removable cartridges, or flash drives. In one example, storage medium1121may be configured to include operating system1123, application program1125such as a web browser application, a widget or gadget engine or another application, and data file1127. Storage medium1121may store, for use by UE1100, any of a variety of various operating systems or combinations of operating systems. Storage medium1121may be configured to include a number of physical drive units, such as redundant array of independent disks (RAID), floppy disk drive, flash memory, USB flash drive, external hard disk drive, thumb drive, pen drive, key drive, high-density digital versatile disc (HD-DVD) optical disc drive, internal hard disk drive, Blu-Ray optical disc drive, holographic digital data storage (HDDS) optical disc drive, external mini-dual in-line memory module (DIMM), synchronous dynamic random access memory (SDRAM), external micro-DIMM SDRAM, smartcard memory such as a subscriber identity module or a removable user identity (SIM/RUIM) module, other memory, or any combination thereof. Storage medium1121may allow UE1100to access computer-executable instructions, application programs or the like, stored on transitory or non-transitory memory media, to off-load data, or to upload data. An article of manufacture, such as one utilizing a communication system, may be tangibly embodied in storage medium1121, which may comprise a device readable medium. InFIG.11, processing circuitry1101may be configured to communicate with network1143busing communication subsystem1131. Network1143aand network1143bmay be the same network or networks, or different networks. Communication subsystem1131may be configured to include one or more transceivers used to communicate with network1143b.
For example, communication subsystem1131may be configured to include one or more transceivers used to communicate with one or more remote transceivers of another device capable of wireless communication such as another WD, UE, or base station of a radio access network (RAN) according to one or more communication protocols, such as IEEE 802.11, CDMA, WCDMA, GSM, LTE, UTRAN, WiMax, or the like. Each transceiver may include transmitter1133and/or receiver1135to implement transmitter or receiver functionality, respectively, appropriate to the RAN links (e.g., frequency allocations and the like). Further, transmitter1133and receiver1135of each transceiver may share circuit components, software or firmware, or alternatively may be implemented separately. In the illustrated embodiment, the communication functions of communication subsystem1131may include data communication, voice communication, multimedia communication, short-range communications such as Bluetooth, near-field communication, location-based communication such as the use of the global positioning system (GPS) to determine a location, another like communication function, or any combination thereof. For example, communication subsystem1131may include cellular communication, Wi-Fi communication, Bluetooth communication, and GPS communication. Network1143bmay encompass wired and/or wireless networks such as a local-area network (LAN), a wide-area network (WAN), a computer network, a wireless network, a telecommunications network, another like network or any combination thereof. For example, network1143bmay be a cellular network, a Wi-Fi network, and/or a near-field network. Power source1113may be configured to provide alternating current (AC) or direct current (DC) power to components of UE1100. The features, benefits and/or functions described herein may be implemented in one of the components of UE1100or partitioned across multiple components of UE1100. Further, the features, benefits, and/or functions described herein may be implemented in any combination of hardware, software or firmware. In one example, communication subsystem1131may be configured to include any of the components described herein. Further, processing circuitry1101may be configured to communicate with any of such components over bus1102. In another example, any of such components may be represented by program instructions stored in memory that when executed by processing circuitry1101perform the corresponding functions described herein. In another example, the functionality of any of such components may be partitioned between processing circuitry1101and communication subsystem1131. In another example, the non-computationally intensive functions of any of such components may be implemented in software or firmware and the computationally intensive functions may be implemented in hardware. FIG.12is a schematic block diagram illustrating a virtualization environment1200in which functions implemented by some embodiments may be virtualized. In the present context, virtualizing means creating virtual versions of apparatuses or devices which may include virtualizing hardware platforms, storage devices and networking resources.
As used herein, virtualization can be applied to a node (e.g., a virtualized base station or a virtualized radio access node) or to a device (e.g., a UE, a wireless device or any other type of communication device) or components thereof and relates to an implementation in which at least a portion of the functionality is implemented as one or more virtual components (e.g., via one or more applications, components, functions, virtual machines or containers executing on one or more physical processing nodes in one or more networks). In some embodiments, some or all of the functions described herein may be implemented as virtual components executed by one or more virtual machines implemented in one or more virtual environments1200hosted by one or more of hardware nodes1230. Further, in embodiments in which the virtual node is not a radio access node or does not require radio connectivity (e.g., a core network node), the network node may be entirely virtualized. The functions may be implemented by one or more applications1220(which may alternatively be called software instances, virtual appliances, network functions, virtual nodes, virtual network functions, etc.) operative to implement some of the features, functions, and/or benefits of some of the embodiments disclosed herein. Applications1220are run in virtualization environment1200which provides hardware1230comprising processing circuitry1260and memory1290. Memory1290contains instructions1295executable by processing circuitry1260whereby application1220is operative to provide one or more of the features, benefits, and/or functions disclosed herein. Virtualization environment1200comprises general-purpose or special-purpose network hardware devices1230comprising a set of one or more processors or processing circuitry1260, which may be commercial off-the-shelf (COTS) processors, dedicated Application Specific Integrated Circuits (ASICs), or any other type of processing circuitry including digital or analogue hardware components or special purpose processors. Each hardware device may comprise memory1290-1which may be non-persistent memory for temporarily storing instructions1295or software executed by processing circuitry1260. Each hardware device may comprise one or more network interface controllers (NICs)1270, also known as network interface cards, which include physical network interface1280. Each hardware device may also include non-transitory, persistent, machine-readable storage media1290-2having stored therein software1295and/or instructions executable by processing circuitry1260. Software1295may include any type of software including software for instantiating one or more virtualization layers1250(also referred to as hypervisors), software to execute virtual machines1240as well as software allowing it to execute functions, features and/or benefits described in relation with some embodiments described herein. Virtual machines1240comprise virtual processing, virtual memory, virtual networking or interface and virtual storage, and may be run by a corresponding virtualization layer1250or hypervisor. Different embodiments of the instance of virtual appliance1220may be implemented on one or more of virtual machines1240, and the implementations may be made in different ways. During operation, processing circuitry1260executes software1295to instantiate the hypervisor or virtualization layer1250, which may sometimes be referred to as a virtual machine monitor (VMM).
Virtualization layer1250may present a virtual operating platform that appears like networking hardware to virtual machine1240. As shown inFIG.12, hardware1230may be a standalone network node with generic or specific components. Hardware1230may comprise antenna12225and may implement some functions via virtualization. Alternatively, hardware1230may be part of a larger cluster of hardware (e.g., in a data center or customer premise equipment (CPE)) where many hardware nodes work together and are managed via management and orchestration (MANO)12100, which, among others, oversees lifecycle management of applications1220. Virtualization of the hardware is in some contexts referred to as network function virtualization (NFV). NFV may be used to consolidate many network equipment types onto industry standard high-volume server hardware, physical switches, and physical storage, which can be located in data centers, and customer premise equipment. In the context of NFV, virtual machine1240may be a software implementation of a physical machine that runs programs as if they were executing on a physical, non-virtualized machine. Each of virtual machines1240, and that part of hardware1230that executes that virtual machine, be it hardware dedicated to that virtual machine and/or hardware shared by that virtual machine with others of the virtual machines1240, forms a separate virtual network element (VNE). Still in the context of NFV, a Virtual Network Function (VNF) is responsible for handling specific network functions that run in one or more virtual machines1240on top of hardware networking infrastructure1230and corresponds to application1220inFIG.12. In some embodiments, one or more radio units12200that each include one or more transmitters12220and one or more receivers12210may be coupled to one or more antennas12225. Radio units12200may communicate directly with hardware nodes1230via one or more appropriate network interfaces and may be used in combination with the virtual components to provide a virtual node with radio capabilities, such as a radio access node or a base station. In some embodiments, some signaling can be effected with the use of control system12230which may alternatively be used for communication between the hardware nodes1230and radio units12200. With reference toFIG.13, in accordance with an embodiment, a communication system includes telecommunication network1310, such as a 3GPP-type cellular network, which comprises access network1311, such as a radio access network, and core network1314. Access network1311comprises a plurality of base stations1312a,1312b,1312c, such as NBs, eNBs, gNBs or other types of wireless access points, each defining a corresponding coverage area1313a,1313b,1313c. Each base station1312a,1312b,1312cis connectable to core network1314over a wired or wireless connection1315. A first UE1391located in coverage area1313cis configured to wirelessly connect to, or be paged by, the corresponding base station1312c. A second UE1392in coverage area1313ais wirelessly connectable to the corresponding base station1312a. While a plurality of UEs1391,1392are illustrated in this example, the disclosed embodiments are equally applicable to a situation where a sole UE is in the coverage area or where a sole UE is connecting to the corresponding base station1312.
Telecommunication network1310is itself connected to host computer1330, which may be embodied in the hardware and/or software of a standalone server, a cloud-implemented server, a distributed server or as processing resources in a server farm. Host computer1330may be under the ownership or control of a service provider, or may be operated by the service provider or on behalf of the service provider. Connections1316and1322between telecommunication network1310and host computer1330may extend directly from core network1314to host computer1330or may go via an optional intermediate network1320. Intermediate network1320may be one of, or a combination of more than one of, a public, private or hosted network; intermediate network1320, if any, may be a backbone network or the Internet; in particular, intermediate network1320may comprise two or more sub-networks (not shown). The communication system ofFIG.13as a whole enables connectivity between the connected UEs1391,1392and host computer1330. The connectivity may be described as an over-the-top (OTT) connection1350. Host computer1330and the connected UEs1391,1392are configured to communicate data and/or signaling via OTT connection1350, using access network1311, core network1314, any intermediate network1320and possible further infrastructure (not shown) as intermediaries. OTT connection1350may be transparent in the sense that the participating communication devices through which OTT connection1350passes are unaware of routing of uplink and downlink communications. For example, base station1312may not or need not be informed about the past routing of an incoming downlink communication with data originating from host computer1330to be forwarded (e.g., handed over) to a connected UE1391. Similarly, base station1312need not be aware of the future routing of an outgoing uplink communication originating from the UE1391towards the host computer1330. Example implementations, in accordance with an embodiment, of the UE, base station and host computer discussed in the preceding paragraphs will now be described with reference toFIG.14. In communication system1400, host computer1410comprises hardware1415including communication interface1416configured to set up and maintain a wired or wireless connection with an interface of a different communication device of communication system1400. Host computer1410further comprises processing circuitry1418, which may have storage and/or processing capabilities. In particular, processing circuitry1418may comprise one or more programmable processors, application-specific integrated circuits, field programmable gate arrays or combinations of these (not shown) adapted to execute instructions. Host computer1410further comprises software1411, which is stored in or accessible by host computer1410and executable by processing circuitry1418. Software1411includes host application1412. Host application1412may be operable to provide a service to a remote user, such as UE1430connecting via OTT connection1450terminating at UE1430and host computer1410. In providing the service to the remote user, host application1412may provide user data which is transmitted using OTT connection1450. Communication system1400further includes base station1420provided in a telecommunication system and comprising hardware1425enabling it to communicate with host computer1410and with UE1430. 
Hardware1425may include communication interface1426for setting up and maintaining a wired or wireless connection with an interface of a different communication device of communication system1400, as well as radio interface1427for setting up and maintaining at least wireless connection1470with UE1430located in a coverage area (not shown inFIG.14) served by base station1420. Communication interface1426may be configured to facilitate connection1460to host computer1410. Connection1460may be direct, or it may pass through a core network (not shown inFIG.14) of the telecommunication system and/or through one or more intermediate networks outside the telecommunication system. In the embodiment shown, hardware1425of base station1420further includes processing circuitry1428, which may comprise one or more programmable processors, application-specific integrated circuits, field programmable gate arrays or combinations of these (not shown) adapted to execute instructions. Base station1420further has software1421stored internally or accessible via an external connection. Communication system1400further includes UE1430already referred to. Its hardware1435may include radio interface1437configured to set up and maintain wireless connection1470with a base station serving a coverage area in which UE1430is currently located. Hardware1435of UE1430further includes processing circuitry1438, which may comprise one or more programmable processors, application-specific integrated circuits, field programmable gate arrays or combinations of these (not shown) adapted to execute instructions. UE1430further comprises software1431, which is stored in or accessible by UE1430and executable by processing circuitry1438. Software1431includes client application1432. Client application1432may be operable to provide a service to a human or non-human user via UE1430, with the support of host computer1410. In host computer1410, an executing host application1412may communicate with the executing client application1432via OTT connection1450terminating at UE1430and host computer1410. In providing the service to the user, client application1432may receive request data from host application1412and provide user data in response to the request data. OTT connection1450may transfer both the request data and the user data. Client application1432may interact with the user to generate the user data that it provides. It is noted that host computer1410, base station1420and UE1430illustrated inFIG.14may be similar or identical to host computer1330, one of base stations1312a,1312b,1312cand one of UEs1391,1392ofFIG.13, respectively. This is to say, the inner workings of these entities may be as shown inFIG.14and, independently, the surrounding network topology may be that ofFIG.13. InFIG.14, OTT connection1450has been drawn abstractly to illustrate the communication between host computer1410and UE1430via base station1420, without explicit reference to any intermediary devices and the precise routing of messages via these devices. Network infrastructure may determine the routing, which it may be configured to hide from UE1430or from the service provider operating host computer1410, or both. While OTT connection1450is active, the network infrastructure may further take decisions by which it dynamically changes the routing (e.g., on the basis of load balancing considerations or reconfiguration of the network).
Wireless connection 1470 between UE 1430 and base station 1420 is in accordance with the teachings of the embodiments described throughout this disclosure. One or more of the various embodiments improve the performance of OTT services provided to UE 1430 using OTT connection 1450, in which wireless connection 1470 forms the last segment. More precisely, the teachings of these embodiments may improve the data rate and thereby provide benefits such as better responsiveness.

A measurement procedure may be provided for the purpose of monitoring the data rate, latency and other factors on which the one or more embodiments improve. There may further be an optional network functionality for reconfiguring OTT connection 1450 between host computer 1410 and UE 1430, in response to variations in the measurement results. The measurement procedure and/or the network functionality for reconfiguring OTT connection 1450 may be implemented in software 1411 and hardware 1415 of host computer 1410 or in software 1431 and hardware 1435 of UE 1430, or both. In embodiments, sensors (not shown) may be deployed in or in association with communication devices through which OTT connection 1450 passes; the sensors may participate in the measurement procedure by supplying values of the monitored quantities exemplified above, or by supplying values of other physical quantities from which software 1411, 1431 may compute or estimate the monitored quantities. The reconfiguring of OTT connection 1450 may include changes to the message format, retransmission settings, preferred routing etc.; the reconfiguring need not affect base station 1420, and it may be unknown or imperceptible to base station 1420. Such procedures and functionalities may be known and practiced in the art. In certain embodiments, measurements may involve proprietary UE signaling facilitating host computer 1410's measurements of throughput, propagation times, latency and the like. The measurements may be implemented in that software 1411 and 1431 causes messages to be transmitted, in particular empty or ‘dummy’ messages, using OTT connection 1450 while monitoring propagation times, errors, etc.

FIG. 15 is a flowchart illustrating a method implemented in a communication system, in accordance with one embodiment. The communication system includes a host computer, a base station and a UE which may be those described with reference to FIGS. 13 and 14. For simplicity of the present disclosure, only drawing references to FIG. 15 will be included in this section. In step 1510, the host computer provides user data. In substep 1511 (which may be optional) of step 1510, the host computer provides the user data by executing a host application. In step 1520, the host computer initiates a transmission carrying the user data to the UE. In step 1530 (which may be optional), the base station transmits to the UE the user data which was carried in the transmission that the host computer initiated, in accordance with the teachings of the embodiments described throughout this disclosure. In step 1540 (which may also be optional), the UE executes a client application associated with the host application executed by the host computer.

FIG. 16 is a flowchart illustrating a method implemented in a communication system, in accordance with one embodiment. The communication system includes a host computer, a base station and a UE which may be those described with reference to FIGS. 13 and 14. For simplicity of the present disclosure, only drawing references to FIG. 16 will be included in this section. In step 1610 of the method, the host computer provides user data.
In an optional substep (not shown), the host computer provides the user data by executing a host application. In step 1620, the host computer initiates a transmission carrying the user data to the UE. The transmission may pass via the base station, in accordance with the teachings of the embodiments described throughout this disclosure. In step 1630 (which may be optional), the UE receives the user data carried in the transmission.

FIG. 17 is a flowchart illustrating a method implemented in a communication system, in accordance with one embodiment. The communication system includes a host computer, a base station and a UE which may be those described with reference to FIGS. 13 and 14. For simplicity of the present disclosure, only drawing references to FIG. 17 will be included in this section. In step 1710 (which may be optional), the UE receives input data provided by the host computer. Additionally or alternatively, in step 1720, the UE provides user data. In substep 1721 (which may be optional) of step 1720, the UE provides the user data by executing a client application. In substep 1711 (which may be optional) of step 1710, the UE executes a client application which provides the user data in reaction to the received input data provided by the host computer. In providing the user data, the executed client application may further consider user input received from the user. Regardless of the specific manner in which the user data was provided, the UE initiates, in substep 1730 (which may be optional), transmission of the user data to the host computer. In step 1740 of the method, the host computer receives the user data transmitted from the UE, in accordance with the teachings of the embodiments described throughout this disclosure.

FIG. 18 is a flowchart illustrating a method implemented in a communication system, in accordance with one embodiment. The communication system includes a host computer, a base station and a UE which may be those described with reference to FIGS. 13 and 14. For simplicity of the present disclosure, only drawing references to FIG. 18 will be included in this section. In step 1810 (which may be optional), in accordance with the teachings of the embodiments described throughout this disclosure, the base station receives user data from the UE. In step 1820 (which may be optional), the base station initiates transmission of the received user data to the host computer. In step 1830 (which may be optional), the host computer receives the user data carried in the transmission initiated by the base station.

Any appropriate steps, methods, features, functions, or benefits disclosed herein may be performed through one or more functional units or modules of one or more virtual apparatuses. Each virtual apparatus may comprise a number of these functional units. These functional units may be implemented via processing circuitry, which may include one or more microprocessors or microcontrollers, as well as other digital hardware, which may include Digital Signal Processors (DSPs), special-purpose digital logic, and the like. The processing circuitry may be configured to execute program code stored in memory, which may include one or several types of memory such as Read-Only Memory (ROM), Random-Access Memory (RAM), cache memory, flash memory devices, optical storage devices, etc. Program code stored in memory includes program instructions for executing one or more telecommunications and/or data communications protocols as well as instructions for carrying out one or more of the techniques described herein.
In some implementations, the processing circuitry may be used to cause the respective functional unit to perform corresponding functions according to one or more embodiments of the present disclosure. The term unit may have a conventional meaning in the field of electronics, electrical devices and/or electronic devices and may include, for example, electrical and/or electronic circuitry, devices, modules, processors, memories, logic, solid state and/or discrete devices, computer programs or instructions for carrying out respective tasks, procedures, computations, outputs, and/or displaying functions, and so on, such as those that are described herein.

Numbered Embodiments in Particular Related to FIGS. 10-18

1. A first APU configured to communicate with a User Equipment (UE), the first APU comprising a radio interface and processing circuitry configured to: obtain channel estimates for channels to said served UEs; determine a receive combining filter based on the obtained channel estimates, wherein the receive combining filter is to be applied to received data signals; determine effective channels from said served UEs based on the obtained channel estimates and the determined receive combining filter, wherein the effective channels represent the effective channel created after the receive combining filter is applied to each channel for said served UEs; and transmit the effective channels from said served UEs to at least one subsequent second APU.
2. The first APU according to embodiment 1, wherein the first APU further is configured to: receive data signals from said served UEs; and determine improved estimates of the received data signals by applying the determined receive combining filter to the received data signals from said served UEs.
3. The first APU according to embodiment 2, wherein the first APU further is configured to: transmit the determined improved estimates of the received data signals to at least one subsequent second APU.
4. The first APU according to any of embodiments 1 to 3, wherein the at least one subsequent second APU is located closer to the CPU than the first APU.
5. The first APU according to any of embodiments 1 to 4, wherein the obtained channel estimates are Channel State Information, CSI, obtained from reference pilot signals transmitted by said served UEs.
6. The first APU according to any of embodiments 1 to 5, wherein the receive combining filter is generated by a method selected from the group comprised of: Maximum-Ratio Combining, MRC, Zero-Forcing, ZF, combining and Minimum-Mean Squared Error, MMSE, combining.
7. The first APU according to any of embodiments 1 to 6, wherein each of the at least two APUs (300, 400) in the radio stripe system is equipped with one antenna.
8.
A communication system including a host computer comprising: processing circuitry configured to provide user data; and a communication interface configured to forward the user data to a cellular network for transmission to a User Equipment (UE), wherein the cellular network comprises a first APU having a radio interface and processing circuitry, the first APU's processing circuitry configured to: obtain channel estimates for channels to said served UEs; determine a receive combining filter based on the obtained channel estimates, wherein the receive combining filter is to be applied to received data signals; determine effective channels from said served UEs based on the obtained channel estimates and the determined receive combining filter, wherein the effective channels represent the effective channel created after the receive combining filter is applied to each channel for said served UEs; and transmit the effective channels from said served UEs to at least one subsequent second APU.
9. The communication system of embodiment 8, further including the first APU.
10. The communication system of embodiment 9, further including the UE, wherein the UE is configured to communicate with the first APU.
11. The communication system of embodiment 10, wherein: the processing circuitry of the host computer is configured to execute a host application, thereby providing the user data; and the UE comprises processing circuitry configured to execute a client application associated with the host application.
12. A method implemented in a first APU, comprising: obtaining channel estimates for channels to said served UEs; determining a receive combining filter based on the obtained channel estimates, wherein the receive combining filter is to be applied to received data signals; determining effective channels from said served UEs based on the obtained channel estimates and the determined receive combining filter, wherein the effective channels represent the effective channel created after the receive combining filter is applied to each channel for said served UEs; and transmitting the effective channels from said served UEs to at least one subsequent second APU.
13. A method implemented in a communication system including a host computer, a first APU and a User Equipment (UE), the method comprising: at the host computer, providing user data; and at the host computer, initiating a transmission carrying the user data to the UE via a cellular network comprising the first APU, wherein the first APU performs: obtaining channel estimates for channels to said served UEs; determining a receive combining filter based on the obtained channel estimates, wherein the receive combining filter is to be applied to received data signals; determining effective channels from said served UEs based on the obtained channel estimates and the determined receive combining filter, wherein the effective channels represent the effective channel created after the receive combining filter is applied to each channel for said served UEs; and transmitting the effective channels from said served UEs to at least one subsequent second APU.
14. The method of embodiment 13, further comprising: at the first APU, transmitting the user data.
15. The method of embodiment 14, wherein the user data is provided at the host computer by executing a host application, the method further comprising: at the UE, executing a client application associated with the host application.
16.
A User Equipment (UE) configured to communicate with a first APU, the UE comprising a radio interface and processing circuitry configured to transmit and receive data to and from the first APU.
17. A communication system including a host computer comprising: processing circuitry configured to provide user data; and a communication interface configured to forward user data to a cellular network for transmission to a User Equipment (UE), wherein the UE comprises a radio interface and processing circuitry, the UE's processing circuitry configured to transmit and receive data to and from a first APU.
18. The communication system of embodiment 17, further including the UE.
19. The communication system of embodiment 17, wherein the cellular network further includes a first APU configured to communicate with the UE.
20. The communication system of embodiment 18 or 19, wherein: the processing circuitry of the host computer is configured to execute a host application, thereby providing the user data; and the UE's processing circuitry is configured to execute a client application associated with the host application.
21. A method implemented in a communication system including a host computer, a first APU and a User Equipment (UE), the method comprising: at the host computer, providing user data; and at the host computer, initiating a transmission carrying the user data to the UE via a cellular network comprising the first APU, wherein the UE transmits and receives data to and from the first APU.
22. The method of embodiment 21, further comprising: at the UE, receiving the user data from the first APU.
23. A communication system including a host computer comprising: a communication interface configured to receive user data originating from a transmission from a User Equipment (UE) to a first APU, wherein the UE comprises a radio interface and processing circuitry, the UE's processing circuitry configured to transmit and receive data to and from the first APU.
24. The communication system of embodiment 23, further including the UE.
25. The communication system of embodiment 24, further including the first APU, wherein the first APU comprises a radio interface configured to communicate with the UE and a communication interface configured to forward to the host computer the user data carried by a transmission from the UE to the first APU.
26. The communication system of embodiment 24 or 25, wherein: the processing circuitry of the host computer is configured to execute a host application; and the UE's processing circuitry is configured to execute a client application associated with the host application, thereby providing the user data.
27. The communication system of embodiment 24 or 25, wherein: the processing circuitry of the host computer is configured to execute a host application, thereby providing request data; and the UE's processing circuitry is configured to execute a client application associated with the host application, thereby providing the user data in response to the request data.
28. A method implemented in a User Equipment (UE), comprising transmitting and receiving data to and from a first APU.
29. The method of embodiment 28, further comprising: providing user data; and forwarding the user data to a host computer via the transmission to the first APU.
30.
A method implemented in a communication system including a host computer, a first APU and a User Equipment (UE), the method comprising: at the host computer, receiving user data transmitted to the first APU from the UE, wherein the UE transmits and receives data to and from the first APU.
31. The method of embodiment 30, further comprising: at the UE, providing the user data to the first APU.
32. The method of embodiment 31, further comprising: at the UE, executing a client application, thereby providing the user data to be transmitted; and at the host computer, executing a host application associated with the client application.
33. The method of embodiment 32, further comprising: at the UE, executing a client application; and at the UE, receiving input data to the client application, the input data being provided at the host computer by executing a host application associated with the client application, wherein the user data to be transmitted is provided by the client application in response to the input data.
34. A communication system including a host computer comprising a communication interface configured to receive user data originating from a transmission from a User Equipment (UE) to a first APU, wherein the first APU comprises a radio interface and processing circuitry, the first APU's processing circuitry configured to: obtain channel estimates for channels to said served UEs; determine a receive combining filter based on the obtained channel estimates, wherein the receive combining filter is to be applied to received data signals; determine effective channels from said served UEs based on the obtained channel estimates and the determined receive combining filter, wherein the effective channels represent the effective channel created after the receive combining filter is applied to each channel for said served UEs; and transmit the effective channels from said served UEs to at least one subsequent second APU.
35. The communication system of embodiment 34, further including the first APU.
36. The communication system of embodiment 35, further including the UE, wherein the UE is configured to communicate with the first APU.
37. The communication system of embodiment 36, wherein: the processing circuitry of the host computer is configured to execute a host application; and the UE is configured to execute a client application associated with the host application, thereby providing the user data to be received by the host computer.
38. A method implemented in a communication system including a host computer, a first APU and a User Equipment (UE), the method comprising: at the host computer, receiving, from the first APU, user data originating from a transmission which the first APU has received from the UE, wherein the UE transmits and receives data to and from the first APU.
39. The method of embodiment 38, further comprising: at the first APU, receiving the user data from the UE.
40. The method of embodiment 39, further comprising: at the first APU, initiating a transmission of the received user data to the host computer.
41.
A second APU configured to communicate with a User Equipment (UE), the second APU comprising a radio interface and processing circuitry configured to: obtain channel estimates for channels to said served UEs; receive, from at least one preceding first APU, effective channels from said served UEs to said preceding first APU; and determine a receive combining filter based on the obtained channel estimates and the received effective channels from said at least one preceding first APU, wherein the receive combining filter is to be applied to received data signals.
42. The second APU according to embodiment 41, wherein the second APU further is configured to: determine effective channels from said served UEs based on the obtained channel estimates, the determined receive combining filter and the received effective channels from said at least one preceding first APU, wherein the effective channels represent the effective channel created after the receive combining filter is applied to each channel for said served UEs.
43. The second APU according to embodiment 42, wherein the second APU further is configured to: receive data signals from said served UEs; receive, from said at least one preceding first APU, improved estimates of data signals received by the preceding first APU; determine augmented received signals based on the received data signals and the received improved estimates of the data signals received by the preceding first APU; and determine improved estimates of the received data signals by applying the determined receive combining filter to the augmented received data signals from said served UEs.
44. The second APU according to embodiment 43, wherein the second APU further is configured to: transmit the determined effective channels from said served UEs and the improved estimates of the received data signals to at least one subsequent third APU, wherein the at least one subsequent third APU is located closer to the CPU than the second APU.
45. The second APU according to embodiment 43, wherein the second APU further is configured to: transmit the determined effective channels from said served UEs and the improved estimates of the received data signals to the CPU.
46. The second APU according to any of embodiments 41 to 45, wherein the at least one preceding first APU is located further away from the CPU than the second APU.
47. The second APU according to any of embodiments 41 to 46, wherein the obtained channel estimates are Channel State Information, CSI, obtained from reference pilot signals transmitted by said served UEs.
48. The second APU according to any of embodiments 41 to 47, wherein the receive combining filter is generated by a method selected from the group comprised of: Maximum-Ratio Combining, MRC, Zero-Forcing, ZF, combining and Minimum-Mean Squared Error, MMSE, combining.
49. The second APU according to any of embodiments 41 to 48, wherein each of the at least two APUs (300, 400) in the radio stripe system is equipped with one antenna.
50.
A communication system including a host computer comprising: processing circuitry configured to provide user data; and a communication interface configured to forward the user data to a cellular network for transmission to a User Equipment (UE), wherein the cellular network comprises a second APU having a radio interface and processing circuitry, the second APU's processing circuitry configured to: obtain channel estimates for channels to said served UEs; receive, from at least one preceding first APU, effective channels from said served UEs to said preceding first APU; and determine a receive combining filter based on the obtained channel estimates and the received effective channels from said at least one preceding first APU, wherein the receive combining filter is to be applied to received data signals.
51. The communication system of embodiment 50, further including the second APU.
52. The communication system of embodiment 51, further including the UE, wherein the UE is configured to communicate with the second APU.
53. The communication system of embodiment 52, wherein: the processing circuitry of the host computer is configured to execute a host application, thereby providing the user data; and the UE comprises processing circuitry configured to execute a client application associated with the host application.
54. A method implemented in a second APU, comprising: obtaining channel estimates for channels to said served UEs; receiving, from at least one preceding first APU, effective channels from said served UEs to said preceding first APU; and determining a receive combining filter based on the obtained channel estimates and the received effective channels from said at least one preceding first APU, wherein the receive combining filter is to be applied to received data signals.
55. A method implemented in a communication system including a host computer, a second APU and a User Equipment (UE), the method comprising: at the host computer, providing user data; and at the host computer, initiating a transmission carrying the user data to the UE via a cellular network comprising the second APU, wherein the second APU performs: obtaining channel estimates for channels to said served UEs; receiving, from at least one preceding first APU, effective channels from said served UEs to said preceding first APU; and determining a receive combining filter based on the obtained channel estimates and the received effective channels from said at least one preceding first APU, wherein the receive combining filter is to be applied to received data signals.
56. The method of embodiment 55, further comprising: at the second APU, transmitting the user data.
57. The method of embodiment 56, wherein the user data is provided at the host computer by executing a host application, the method further comprising: at the UE, executing a client application associated with the host application.
58. A User Equipment (UE) configured to communicate with a second APU, the UE comprising a radio interface and processing circuitry configured to transmit and receive data to and from the second APU.
59. A communication system including a host computer comprising: processing circuitry configured to provide user data; and a communication interface configured to forward user data to a cellular network for transmission to a User Equipment (UE), wherein the UE comprises a radio interface and processing circuitry, the UE's processing circuitry configured to transmit and receive data to and from a second APU.
60.
The communication system of embodiment 59, further including the UE.
61. The communication system of embodiment 59, wherein the cellular network further includes a second APU configured to communicate with the UE.
62. The communication system of embodiment 60 or 61, wherein: the processing circuitry of the host computer is configured to execute a host application, thereby providing the user data; and the UE's processing circuitry is configured to execute a client application associated with the host application.
63. A method implemented in a communication system including a host computer, a second APU and a User Equipment (UE), the method comprising: at the host computer, providing user data; and at the host computer, initiating a transmission carrying the user data to the UE via a cellular network comprising the second APU, wherein the UE transmits and receives data to and from the second APU.
64. The method of embodiment 63, further comprising: at the UE, receiving the user data from the second APU.
65. A communication system including a host computer comprising: a communication interface configured to receive user data originating from a transmission from a User Equipment (UE) to a second APU, wherein the UE comprises a radio interface and processing circuitry, the UE's processing circuitry configured to transmit and receive data to and from the second APU.
66. The communication system of embodiment 65, further including the UE.
67. The communication system of embodiment 66, further including the second APU, wherein the second APU comprises a radio interface configured to communicate with the UE and a communication interface configured to forward to the host computer the user data carried by a transmission from the UE to the second APU.
68. The communication system of embodiment 66 or 67, wherein: the processing circuitry of the host computer is configured to execute a host application; and the UE's processing circuitry is configured to execute a client application associated with the host application, thereby providing the user data.
69. The communication system of embodiment 66 or 67, wherein: the processing circuitry of the host computer is configured to execute a host application, thereby providing request data; and the UE's processing circuitry is configured to execute a client application associated with the host application, thereby providing the user data in response to the request data.
70. A method implemented in a User Equipment (UE), comprising transmitting and receiving data to and from a second APU.
71. The method of embodiment 70, further comprising: providing user data; and forwarding the user data to a host computer via the transmission to the second APU.
72. A method implemented in a communication system including a host computer, a second APU and a User Equipment (UE), the method comprising: at the host computer, receiving user data transmitted to the second APU from the UE, wherein the UE transmits and receives data to and from the second APU.
73. The method of embodiment 72, further comprising: at the UE, providing the user data to the second APU.
74. The method of embodiment 73, further comprising: at the UE, executing a client application, thereby providing the user data to be transmitted; and at the host computer, executing a host application associated with the client application.
75.
The method of embodiment 74, further comprising: at the UE, executing a client application; and at the UE, receiving input data to the client application, the input data being provided at the host computer by executing a host application associated with the client application, wherein the user data to be transmitted is provided by the client application in response to the input data.
76. A communication system including a host computer comprising a communication interface configured to receive user data originating from a transmission from a User Equipment (UE) to a second APU, wherein the second APU comprises a radio interface and processing circuitry, the second APU's processing circuitry configured to: obtain channel estimates for channels to said served UEs; receive, from at least one preceding first APU, effective channels from said served UEs to said preceding first APU; and determine a receive combining filter based on the obtained channel estimates and the received effective channels from said at least one preceding first APU, wherein the receive combining filter is to be applied to received data signals.
77. The communication system of embodiment 76, further including the second APU.
78. The communication system of embodiment 77, further including the UE, wherein the UE is configured to communicate with the second APU.
79. The communication system of embodiment 78, wherein: the processing circuitry of the host computer is configured to execute a host application; and the UE is configured to execute a client application associated with the host application, thereby providing the user data to be received by the host computer.
80. A method implemented in a communication system including a host computer, a second APU and a User Equipment (UE), the method comprising: at the host computer, receiving, from the second APU, user data originating from a transmission which the second APU has received from the UE, wherein the UE transmits and receives data to and from the second APU.
81. The method of embodiment 80, further comprising: at the second APU, receiving the user data from the UE.
82. The method of embodiment 81, further comprising: at the second APU, initiating a transmission of the received user data to the host computer.

The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting. As used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises,” “comprising,” “includes” and/or “including,” when used herein, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood. It will be further understood that terms used herein should be interpreted as having a meaning that is consistent with their meaning in the context of this specification and the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein. Modifications and other variants of the described embodiments will come to mind to one skilled in the art having the benefit of the teachings presented in the foregoing description and associated drawings.
Therefore, it is to be understood that the embodiments are not limited to the specific example embodiments described in this disclosure and that modifications and other variants are intended to be included within the scope of this disclosure. Furthermore, although specific terms may be employed herein, they are used in a generic and descriptive sense only and not for purposes of limitation. Therefore, a person skilled in the art would recognize numerous variations to the described embodiments that would still fall within the scope of the appended claims. As used herein, the terms “comprise/comprises” or “include/includes” do not exclude the presence of other elements or steps. Furthermore, although individual features may be included in different claims, these may possibly advantageously be combined, and the inclusion of different claims does not imply that a combination of features is not feasible and/or advantageous. In addition, singular references do not exclude a plurality.
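For illustration only, the following is a minimal sketch of the sequential per-APU processing summarized in numbered embodiments 1-3 and 41-45 above. It is not part of the claimed subject matter: it assumes Maximum-Ratio Combining (the embodiments equally allow ZF or MMSE combining), it neglects inter-UE interference when forming the effective channels, and all names (e.g., `data_est`, `eff_channel`) are hypothetical.

```python
# Hypothetical sketch of daisy-chained APU processing (cf. embodiments 1-3,
# 41-45): each APU combines its own received signal, adds the result to the
# estimate passed on by the preceding APU, and forwards the improved estimate
# and effective channels toward the CPU. MRC is used purely for illustration.
import numpy as np

rng = np.random.default_rng(0)
K = 4          # number of served UEs
M = 8          # antennas per APU (embodiment 7 uses one; 8 shown here)
n_apus = 5     # APUs in the radio stripe

x = (rng.integers(0, 2, K) * 2 - 1).astype(complex)   # UE data symbols (BPSK)

data_est = np.zeros(K, dtype=complex)   # running estimate passed along the stripe
eff_channel = np.zeros(K)               # running effective channels

for apu in range(n_apus):
    # channel seen by this APU, and its noisy received signal
    H = (rng.normal(size=(M, K)) + 1j * rng.normal(size=(M, K))) / np.sqrt(2)
    noise = 0.1 * (rng.normal(size=M) + 1j * rng.normal(size=M))
    y = H @ x + noise
    W = H                                  # MRC receive combining filter
    data_est += W.conj().T @ y             # improved estimate forwarded downstream
    eff_channel += np.sum(np.abs(H) ** 2, axis=0)  # effective channel after combining

x_hat = np.sign((data_est / eff_channel).real)  # final detection (e.g., at the CPU)
print("detected:", x_hat, " sent:", x.real)
```

Each loop iteration stands for one APU in the stripe: it adds its locally combined contribution to the running estimate and effective channels and passes both toward the CPU, in the spirit of embodiments 43-45.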
11863279
DETAILED DESCRIPTION OF THE FIGURES

The figures and the following description illustrate specific exemplary embodiments. It will thus be appreciated that those skilled in the art will be able to devise various arrangements that, although not explicitly described or shown herein, embody certain principles and are included within the scope of the embodiments. Furthermore, any examples described herein are intended to aid in understanding the embodiments and are to be construed as being without limitation to such specifically recited examples and conditions. As a result, the embodiments are not limited to any of the examples described below.

The systems and methods herein leverage quantum computing (QC) to perform part of the computation for a MIMO processor (e.g., a decoder). One approach includes quantum annealing. A quantum annealer is an analog computer that uses a metaheuristic for finding a global minimum of an objective function over a given set of candidate solutions/states via quantum fluctuations (e.g., temporary changes in the amount of energy at a point in space). In the exemplary embodiments disclosed herein, a quantum annealer is operable to embed a Maximum Likelihood (ML) algorithm onto quantum bits (qubits) of the quantum annealer to increase the probability and speed of detection of UE data streams in increasingly complex MIMO communications. In some embodiments, the quantum annealer may perform this “ML-MIMO” detection via a D-Wave 2000Q quantum annealer provided by D-Wave Systems, Inc. In these embodiments, the quantum annealer may achieve a target bit error rate (BER) (e.g., 10⁻⁶) and a frame error rate (FER) (e.g., 10⁻⁴) within a computation time limit (e.g., 10 to 20 μs of computation time) for a specific number of users and antennas (e.g., a 48 user by 48 access point antenna (48×48) MIMO system) employing a modulation scheme (e.g., binary phase shift keyed (BPSK) modulation). Irrespective of this example, the embodiments herein will show that other modulations and signaling types may be used as well. In fact, the embodiments herein may be used with virtually any ML detection algorithm.

In this regard, FIG. 1 is a block diagram of a MIMO processing system 10 operable to process a spatially multiplexed MIMO data stream (e.g., from a plurality of UEs). The MIMO system 10, in this embodiment, includes a receiver 12 that is operable to receive a plurality of spatially multiplexed data streams. For example, the receiver 12 may be part of an MU-MIMO communication system configured with a WCP (e.g., a WAP, a RAN, a cloud RAN, an eNodeB, or the like) to receive such signaling. Thus, the receiver 12 may be configured with or communicatively coupled to a plurality of antennas (e.g., configured as an array). In some embodiments, the receiver 12 may even be configured to receive such signaling over a coaxial cable, such as that found in cable television headends and networks (e.g., cellular backhaul communications and/or landline signaling). The receiver 12 may be operable to receive/demodulate/demultiplex a variety of modulation and multiplexing schemes including BPSK, quadrature-phase shift keyed (QPSK), quadrature amplitude modulation (QAM), orthogonal frequency division multiplex (OFDM), and the like.
The MIMO processing system 10 also includes a processor 14 that is operable to embed a maximum likelihood (ML) detection algorithm onto a quantum annealer or other quantum optimizer 16, and to decode the spatially multiplexed data streams via the embedded ML to detect a plurality of streams of data bits of a plurality of users (i.e., user data streams 1-N, where the reference “N” indicates an integer greater than “1” and not necessarily equal to any other “N” reference designated herein). Examples of the quantum optimizer 16 include a coherent optical machine, a complementary metal-oxide-semiconductor (CMOS) based digital annealer, a gate model quantum computer, a superconducting based quantum annealer, or the like. Based on the foregoing, the receiver 12 is any system, device, software, firmware, or combination thereof operable to receive a multiplexed signal. And, the processor 14 is any system, device, software, firmware, or combination thereof operable to perform an ML detection of bit streams within the multiplexed signal.

Before delving into the aspects of compiling (e.g., minor embedding for quantum annealers) the ML detection algorithm onto the quantum optimizer 16 of the processor 14, an introduction into ML detection is provided. For example, in FIG. 2, the WCP 20 may be configured with the receiver 12 of FIG. 1. In this example, suppose there are $N_T$ UEs 15-1T to 15-NT ($N_T$ being an integer greater than “1”), each of which comprises one antenna and sends data bits to antennas 20-1r to 20-Nr via OFDM, where $N_r \ge N_T$ and $N_r$ is the number of receiving antennas. Now, consider all of the data bits from the UEs 15 in a vector whose elements each comprise a single UE 15's data bits. The data bits are mapped into a complex valued symbol vector $v = [v_1, v_2, \ldots, v_{N_t}]^T \in \mathcal{O}^{N_t}$ that is transmitted over a radio channel. Each UE 15 may send symbols from a constellation $\mathcal{O}$ of size $|\mathcal{O}| = 2^Q$ (i.e., Q bits per symbol). A MIMO decoding problem with an optimal solution is called the “ML solution” and comprises a search over the sets of transmitted symbols, looking for the set that minimizes the error with respect to what has been received by the WCP 20. The solution may be represented as:

$$\hat{v} = \arg\min_{v \in \mathcal{O}^{N_t}} \| y - Hv \|^2 \qquad \text{(Eq. 1)}$$

The processor 14 de-maps the decoded symbols $\hat{v}$ to decoded bits $\hat{b}$. In Eq. 1, $H \in \mathbb{C}^{N_r \times N_t}$, $H = H^I + jH^Q$, is the wireless channel on each OFDM subcarrier and $y \in \mathbb{C}^{N_r}$ ($y = Hv + n$) is a received set of symbols perturbed by $n \in \mathbb{C}^{N_r}$ (i.e., additive white Gaussian noise, or “AWGN”). A solution thus minimizes detection errors and maximizes throughput (e.g., via throughput optimal decoding).

As an example, a sphere decoder is an ML detector algorithm that reduces complexity with respect to a brute force search by constraining its search to possible sets that lie within a hypersphere of radius $\sqrt{C}$ centered around $y$ (e.g., Eq. 1 with the constraint $\|y - Hv\|^2 \le C$). This transforms Eq. 1 into a tree search by QR decomposition of $H = QR$, where $Q$ is orthogonal and $R$ is upper triangular, resulting in $\hat{v} = \arg\min_{v \in \mathcal{O}^{N_t}} \|\bar{y} - Rv\|^2$, with $\bar{y} = Q^{*} y$. The resulting tree has a height of $N_T$, a branching factor of $|\mathcal{O}|$, and $1 + \sum_{i=1}^{N_t} |\mathcal{O}|^i$ nodes. ML detection thus becomes the problem of finding a single leaf among $|\mathcal{O}|^{N_T}$ with a minimum metric, and the corresponding tree path is the ML solution. Thus, the minimization of Eq. 1 is a search in an exponentially large space of transmitted symbol vectors $\{v\}$, despite sphere decoder reductions in the search space size.
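As a point of reference for the exponential cost just described, the following is a minimal sketch of exhaustive ML detection per Eq. 1 for BPSK ($\mathcal{O} = \{\pm 1\}$). The function name `ml_detect_bpsk` is illustrative, not from the disclosure.

```python
# Minimal sketch of exhaustive Maximum Likelihood detection (Eq. 1) for BPSK:
# enumerate every candidate symbol vector v in O^Nt and keep the one that
# minimizes ||y - Hv||^2.  Cost is |O|^Nt, i.e., exponential in Nt.
import itertools
import numpy as np

def ml_detect_bpsk(H, y):
    Nt = H.shape[1]
    best_v, best_err = None, np.inf
    for bits in itertools.product([-1.0, 1.0], repeat=Nt):  # all |O|^Nt candidates
        v = np.array(bits)
        err = np.linalg.norm(y - H @ v) ** 2
        if err < best_err:
            best_v, best_err = v, err
    return best_v

rng = np.random.default_rng(1)
Nt, Nr = 4, 4
H = (rng.normal(size=(Nr, Nt)) + 1j * rng.normal(size=(Nr, Nt))) / np.sqrt(2)
v = rng.choice([-1.0, 1.0], Nt)                   # transmitted BPSK symbols
y = H @ v + 0.1 * (rng.normal(size=Nr) + 1j * rng.normal(size=Nr))
print("ML solution:", ml_detect_bpsk(H, y), " sent:", v)
```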
The following table illustrates the average number of tree nodes visited to perform ML sphere decoding with UEs 15 transmitting modulation symbols on 50 subcarriers over a 20 MHz, 13 dB SNR (signal to noise ratio) Rayleigh channel.

    BPSK      QPSK      16-QAM    Complexity (visited nodes)
    12 × 12   7 × 7     4 × 4     ≈40 (feasible)
    21 × 21   11 × 11   6 × 6     ≈270 (possible)
    30 × 30   15 × 15   8 × 8     ≈1,900 (unfeasible)

The table is parameterized by the number of clients, the number of antennas at the WCP 20, and the modulation, to highlight the exponential increase in computation. For example, for eight UEs 15 with 16 QAM symbols, 15 UEs 15 with QPSK symbols, or 30 UEs 15 sending BPSK symbols, the sphere decoder visits close to 2,000 tree nodes, saturating previous architectures, such as those implemented in silicon.

The quantum optimizer 16 improves upon the ML detection of the sphere decoder. Again, the quantum optimizer 16 could be a specialized analog computer that computes continuously (i.e., rather than in discrete clock cycles) and represents numerical quantities in analog instead of digital quantities. Generally, the quantum optimizer 16 exploits quantum effects such as tunneling, “many-body” delocalization, and quantum relaxation to circumvent computational bottlenecks that may otherwise “trap” Monte Carlo methods in local minima of a solution landscape. The quantum optimizer 16 may be operable to solve non-deterministic polynomial (NP) complete and NP hard optimization problems. In some embodiments, the quantum optimizer 16 can formulate NP hard problems in an Ising model. In some embodiments, the quantum optimizer 16 could be a quantum circuit algorithm that is run in a gate-model quantum computer to perform optimization, for example by implementing the Quantum Approximate Optimization Algorithm or the Quantum Alternate Operator Ansatz or a Quantum Neural Network. In some embodiments, the quantum optimizer 16 could be a physical analog machine that is based on coherent optical effects such as those leveraged in degenerate optical parametric oscillators (coherent Ising machines), as built by NTT Corporation or Stanford University. In some embodiments, the quantum optimizer 16 could be an emulator of a quantum annealing or other quantum optimization system, such as a digital annealer built with CMOS technology, such as those built by Fujitsu.

The quantum optimizer 16 may initialize each of its N constituent qubits to begin in a superposition state of $\frac{1}{\sqrt{2}}(|0\rangle + |1\rangle)$ that has no classical counterpart. In the D-Wave quantum annealer, these qubits are metallic circuits in a chip that are maintained in a superconducting state by low temperature and subject to the influence of tailored magnetic fluxes. The collection of N qubits generally encodes all possible $2^N$ outputs in a single state. The initial setting may be achieved by exposing all of the qubits in the chip to a signal A(t) whose magnitude in time is maximal. The system may then implement an objective function which is represented by another signal B(t) ramped up from zero while A(t) is decreased progressively at the same time. The synchronized sequences of the signals A and B and their time dependence is the annealing schedule. The annealing schedule is essentially the algorithm of the quantum annealer 16 that needs to be optimized so that, at the end of a run (i.e., when B(t) = max and A(t) = 0), each qubit assumes a value of either $|0\rangle$ or $|1\rangle$, corresponding to classical bit values of 0 or 1, respectively.
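The following toy sketch illustrates the synchronized A(t)/B(t) schedule just described. The linear ramps and the 20 μs anneal time are illustrative assumptions only; real hardware schedules are generally nonlinear and device-specific.

```python
# Illustrative (linear) annealing schedule: the quantum-fluctuation signal A(t)
# is ramped down from its maximum while the objective-function signal B(t) is
# ramped up from zero, ending with B = max and A = 0.
import numpy as np

T_anneal = 20e-6                       # assumed anneal time, e.g., 20 microseconds
t = np.linspace(0.0, T_anneal, 5)
A = 1.0 - t / T_anneal                 # quantum fluctuation term, max -> 0
B = t / T_anneal                       # objective (Ising) term, 0 -> max
for ti, ai, bi in zip(t, A, B):
    print(f"t = {ti:.1e} s  A(t) = {ai:.2f}  B(t) = {bi:.2f}")
```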
And, the final state of these qubits may collectively represent a candidate solution of the problem, ideally the ground state of the quantum annealer 16 (e.g., the minimum of the optimization objective function). In some other quantum annealing embodiments, the quantum annealer could be built with non-superconducting technologies, leveraging cold atoms instead. This includes ion-trap architectures such as the quantum computers by IonQ, or the neutral atom architectures such as the quantum computers built by ColdQuanta, the University of Wisconsin-Madison, or the University of Colorado.

With this in mind, the processor 14 may first define an objective function of an ML detection algorithm that is to be minimized. This objective function may comprise a quadratic polynomial over binary variables and exists in two equivalent forms: an Ising spin glass form and a quadratic unconstrained binary optimization (QUBO) form. In the Ising spin glass form, solution variables may be referred to as “spins” $s_i \in \{+1, -1\}$ such that:

$$\hat{s}_1, \ldots, \hat{s}_N = \arg\min_{\{s_1, \ldots, s_N\}} \left( \sum_{i<j}^{N} g_{ij}\, s_i s_j + \sum_{i}^{N} f_i\, s_i \right) \qquad \text{(Eq. 2)}$$

where N is the number of spin variables, and $g_{ij}$ and $f_i$ are the Ising model parameters that characterize a problem. The $f_i$ parameter may characterize the preference for each spin to be +1 or −1. A positive $f_i$ indicates a preference for −1, while a negative $f_i$ indicates a preference for +1, with the magnitude corresponding to the magnitude of the preference for either state. The $g_{ij}$ parameter may capture preferred correlations between spins. For example, a positive $g_{ij}$ may cause the quantum optimizer 16 to prefer $s_i \ne s_j$, while a negative $g_{ij}$ may cause the quantum annealer 16 to prefer $s_i = s_j$ in its optimization outcome. Analogous to $f_i$, the magnitude of $g_{ij}$ may correspond to the magnitude of its preference.

In the QUBO form, the optimization may have solution variables $q_i$ that are classical binary bits (i.e., logical “0” or “1”) that may be represented as:

$$\hat{q}_1, \ldots, \hat{q}_N = \arg\min_{\{q_1, \ldots, q_N\}} \sum_{i \le j}^{N} Q_{ij}\, q_i q_j \qquad \text{(Eq. 3)}$$

where N is the qubit count and $Q \in \mathbb{R}^{N \times N}$ is upper triangular. The off-diagonal matrix elements $Q_{ij}$ ($i \ne j$) correspond to $g_{ij}$ in Eq. 2 and the diagonal elements correspond to $f_i$. The two forms are equivalent, and their solutions are related by

$$q_i \leftrightarrow \tfrac{1}{2}(s_i + 1), \quad \text{leading to} \quad g_{ij} \leftrightarrow \tfrac{1}{4} Q_{ij} \quad \text{and} \quad f_i \leftrightarrow \tfrac{1}{2} Q_{ii} + \tfrac{1}{4} \sum_{k=1}^{i-1} Q_{ki} + \tfrac{1}{4} \sum_{k=i+1}^{N} Q_{ik} \qquad \text{(Eq. 4)}$$

With the Ising spin glass and QUBO forms established, the ML detection algorithm can be transformed for the quantum annealer 16. As an example, an OFDM signaling technique is assumed where the wireless channel is subdivided into multiple “flat-fading” orthogonal subcarriers. The ML to Ising reduction may be required at each subcarrier. It should be noted, however, that the embodiment is not explicitly limited to OFDM techniques.

In transforming the ML problem for compilation in the quantum optimizer 16, the QUBO form is first considered. With the ML transformation to QUBO form, a variable-to-symbol transform function T(·) is first sought that represents a candidate vector v in the ML search process (e.g., Eq. 1) with a number of QUBO solution variables. More specifically, the processor 14 may represent each of the $N_T$ UE 15 candidate symbols $v_i \in \mathcal{O}$ ($1 \le i \le N_T$) with $\log_2(|\mathcal{O}|)$ QUBO solution variables, naturally requiring $N_T \log_2(|\mathcal{O}|)$ QUBO variables for $N_T$ transmitters. The processor 14 may form these QUBO variables into a vector $q_i$ for each UE 15 as i: $q_i = [q_{(i-1)\cdot\log_2(|\mathcal{O}|)+1}, \ldots, q_{i\cdot\log_2(|\mathcal{O}|)}]$. For example, T may recast a 2×2 QPSK ($|\mathcal{O}| = 4$) problem into a QUBO form with four solution variables split into two vectors, $q_1 = [q_1\ q_2]$ and $q_2 = [q_3\ q_4]$.
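Returning briefly to the equivalence of the two forms, a minimal sketch of the Eq. 4 mapping follows; the helper name `qubo_to_ising` is illustrative.

```python
# Sketch of the QUBO -> Ising parameter mapping of Eq. 4: q_i = (s_i + 1)/2
# gives g_ij = Q_ij/4 and f_i = Q_ii/2 + (sum_{k<i} Q_ki + sum_{k>i} Q_ik)/4.
import numpy as np

def qubo_to_ising(Q):
    """Q: upper-triangular N x N QUBO matrix (Eq. 3). Returns (f, g) of Eq. 2."""
    N = Q.shape[0]
    g = np.triu(Q, k=1) / 4.0                       # off-diagonal couplings
    f = np.zeros(N)
    for i in range(N):
        f[i] = Q[i, i] / 2.0 + Q[:i, i].sum() / 4.0 + Q[i, i + 1:].sum() / 4.0
    return f, g

Q = np.array([[1.0, -2.0], [0.0, 3.0]])             # toy upper-triangular QUBO
f, g = qubo_to_ising(Q)
print("f =", f, " g =\n", g)
```

For this toy Q, the returned parameters (f = [0, 1], g01 = −0.5) reproduce, up to a constant offset, the same energy landscape as the original QUBO, which can be checked by direct substitution of $s_i = 2q_i - 1$.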
Generally, the transformation T recasts the ML problem of Eq. 1 into the form:

$$\hat{q}_1, \ldots, \hat{q}_{N_t} = \arg\min_{q_1, \ldots, q_{N_t}} \| y - He \|^2 \qquad \text{(Eq. 5)}$$

where $e = [T(q_1), \ldots, T(q_{N_T})]^T$. Then, the resulting $N_T$ vectors $\hat{q}_1, \ldots, \hat{q}_{N_t}$ correspond to the N QUBO solution variables $\hat{q}_1, \ldots, \hat{q}_N$. Continuing with this 2×2 QPSK example, $e = [T(q_1), T(q_2)]^T = [T([q_1\ q_2]), T([q_3\ q_4])]^T$. Then, Eq. 5 results in two ML decoded vectors $\hat{q}_1, \hat{q}_2$ (e.g., noting that $T(\hat{q}_1), T(\hat{q}_2)$ corresponds to the ML solution $\hat{v} = [\hat{v}_1, \hat{v}_2]^T$ in Eq. 1, the nearest symbol vector around a received y). The decoded vectors $\hat{q}_1, \hat{q}_2$ correspond to the four decoding QUBO variables $\hat{q}_1, \hat{q}_2, \hat{q}_3, \hat{q}_4$ in Eq. 3. If the transmitter's bit-to-symbol mapping and the variable-to-symbol transform of the quantum annealer 16 are equivalent, then the decoded $\hat{q}_1, \hat{q}_2, \hat{q}_3, \hat{q}_4$ are the directly de-mapped bits $\hat{b}$ from the ML solution in Eq. 1.

When the transform T is linear, expansion of the norm in Eq. 5 yields a quadratic polynomial objective function, since $q_i^2 = q_i$ for any 0 or 1 valued $q_i$. Then, the ML problem (e.g., Eq. 1) transforms directly into the QUBO form (e.g., Eqs. 3 and 5). The processor 14 then finds variable-to-symbol linear transform functions for each of BPSK, QPSK, and 16 QAM modulation.

For BPSK modulation, if two UEs 15 send two signals simultaneously (i.e., each with one of two possible information symbols), their transmissions can be described with a two-vector of symbols $\hat{v} = [\hat{v}_1, \hat{v}_2]^T \in [\{\pm 1\}, \{\pm 1\}]^T$. The ML problem applied to the BPSK case, where symbols $v_i$ are represented by $v_i = T(q_i) = 2q_i - 1$, results in a QUBO form. An example of such is illustrated in FIGS. 3A-3D. For example, in FIG. 3A, a 2×2 BPSK MIMO ML detection that solves Eq. 1 is converted into the QUBO form. The norm expansion in Eq. 1 can be expressed as illustrated in FIG. 3B. In the case of BPSK, the symbol $v_i \in \{-1, 1\}$ is represented by QUBO variable $q_i$. One possible transform is $2q_i - 1$, where $q_i = 0$ corresponds to $v_i = -1$ and $q_i = 1$ to $v_i = 1$. This generally leads to $[v_1, v_2]^T = [T(q_1), T(q_2)]^T$, where $T(q_1) = 2q_1 - 1$ and $T(q_2) = 2q_2 - 1$. Using these relationships, the norm may be expressed as that shown in FIG. 3C. Then, the objective function of the ML problem with QUBO variables can be obtained. For example, using $q_i^2 = q_i$, the minimization of the objective function becomes the QUBO form illustrated in FIG. 3D.

Higher-order modulations, which may send one of M possible information symbols with each channel use (e.g., where “M” is an integer greater than “2” and not necessarily equal to any other “M” reference designated herein) and result in higher communication rates, are now considered. For example, in QPSK, each UE 15 transmits one of four possible symbols $v_i \in \{\pm 1, \pm 1j\}$. Since this can be viewed as a two-dimensional BPSK signal of $v_i = v_i^I + jv_i^Q$, the processor 14 may represent each possibly transmitted QPSK information symbol with a linear combination of one QUBO variable plus the other QUBO variable multiplied by the imaginary unit. Transforming $q_{2i-1}$ and $q_{2i}$ to $v_i^I$ and $v_i^Q$, respectively, leads to the transform $v_i = T(q_i) = (2q_{2i-1} - 1) + j(2q_{2i} - 1)$.
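The following sketch carries out the norm expansion of Eq. 5 for BPSK with $T(q_i) = 2q_i - 1$, collecting the result into an upper-triangular Q matrix of Eq. 3. The closed-form coefficients are derived here for illustration and are not quoted from the disclosure.

```python
# Sketch of the ML -> QUBO recast of Eq. 5 for BPSK: expanding
# ||y - H(2q - 1)||^2 with R = Re(H^H H), c = Re(H^H y), and using q_i^2 = q_i,
# gives couplings Q_ij = 8 R_ij (i < j) and linear terms
# Q_ii = 4 R_ii - 4 sum_j R_ij - 4 c_i (up to an additive constant).
import itertools
import numpy as np

def bpsk_ml_to_qubo(H, y):
    R = (H.conj().T @ H).real
    c = (H.conj().T @ y).real
    N = H.shape[1]
    Q = np.zeros((N, N))
    for i in range(N):
        Q[i, i] = 4.0 * R[i, i] - 4.0 * R[i, :].sum() - 4.0 * c[i]
        for j in range(i + 1, N):
            Q[i, j] = 8.0 * R[i, j]
    return Q

def qubo_energy(Q, q):
    q = np.asarray(q, dtype=float)
    return q @ Q @ q        # equals sum_{i<=j} Q_ij q_i q_j since Q is upper triangular

# sanity check: the QUBO minimizer matches the transmitted bits via v = 2q - 1
rng = np.random.default_rng(2)
H = (rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))) / np.sqrt(2)
v = rng.choice([-1.0, 1.0], 3)
y = H @ v + 0.05 * (rng.normal(size=3) + 1j * rng.normal(size=3))
Q = bpsk_ml_to_qubo(H, y)
best = min(itertools.product([0, 1], repeat=3), key=lambda q: qubo_energy(Q, q))
print("QUBO bits:", best, "-> symbols:", 2 * np.array(best) - 1, " sent:", v)
```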
With 16 QAM and higher modulations, spectral efficiencies increase, but they utilize multiple amplitudes (e.g., levels), so they require a T that inputs more than one binary solution variable per I or Q channel. For example, consider a transform for a multilevel one-dimensional constellation of [00, 01, 10, 11]. In this example, $T = 4q_1 + 2q_2 - 3$ maps these bits to the values of −3, −1, +1, +3. The processor 14 may generalize this into a two-dimensional problem by letting the first two arguments of T (e.g., $q_{4i-3}$, $q_{4i-2}$) represent the I channel and the next two arguments represent the Q channel. Generally, this is the 16 QAM transform for the quantum optimizer 16. This transform maps solution variables to symbols linearly as $v_i = T(q_i) = (4q_{4i-3} + 2q_{4i-2} - 3) + j(4q_{4i-1} + 2q_{4i} - 3)$, thereby resulting in a QUBO form.

Transmitters, however, typically use different bit-to-symbol mappings than would be used in the quantum optimizer 16. For example, in a 16 QAM signal, the constellation may be represented with the Gray coded bits illustrated in FIG. 4. This means that the quantum annealer's bit-to-symbol mapping differs from that of the UE 15. Thus, the processor 14 may map the decoding QUBO variables into the correct Gray coded transmitted bits. In this embodiment, the Gray coding of the UE 15 may be retained and corrected by the processor 14. In doing so, the processor may perform a bitwise post-translation that operates on the solution bits output by the quantum annealer 16 (e.g., FIG. 3), essentially translating them back into Gray coded bits via the transition from FIGS. 4 to 7. For example, if the second bit $\hat{q}_{4i-2}$ of the QUBO solution bits $\hat{q}_{4i-3}, \hat{q}_{4i-2}, \hat{q}_{4i-1}, \hat{q}_{4i}$ is 1, then the processor 14 translates and flips the third bit $\hat{q}_{4i-1}$ and the fourth bit $\hat{q}_{4i}$ (e.g., 1100 to 1111). Otherwise, the processor 14 leaves the solution as-is. This translation can be generalized to $2^{2n}$-QAM (e.g., where n ≥ 2) as an operation that flips even-numbered columns in the constellation upside down. The result b′ is an intermediate code illustrated in FIG. 6. From there, the processor 14 may apply the differential bit encoding transformation of FIG. 5 to the intermediate code b′ to obtain the Gray coded bits in FIG. 4 (e.g., via the translation from 1111 to 1000).

To illustrate, a UE 15 may map a bit string $b_1, b_2, b_3, b_4$ onto one of the Gray coded 16 QAM symbols in FIG. 4 and send $\tilde{v} = [v_1]$ to the WCP 20 through a wireless channel H. The WCP may receive $y = Hv + n$, the transmitted signal perturbed by AWGN. From there, the quantum annealer 16 may decode the ML QUBO equation using H, y, and $v = [v_1] = [T(q_1)]$, where $T(q_1) = (4q_1 + 2q_2 - 3) + j(4q_3 + 2q_4 - 3)$, a linear transform based on the transform of the quantum optimizer 16, as illustrated in FIG. 7. The quantum optimizer 16 may then solve the Ising/QUBO form of the ML detection problem, resulting in an ML decoded vector $\hat{q}_1$ that includes Ising/QUBO variables $\hat{q}_1, \hat{q}_2, \hat{q}_3, \hat{q}_4$. Afterwards, the processor 14 may apply the bitwise translation from the decoding QUBO solution output $\hat{q}_1, \hat{q}_2, \hat{q}_3, \hat{q}_4$ to the transform illustrated in FIG. 4.
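A minimal sketch of the first step of this post-translation (the conditional flip of the third and fourth bits) follows. The subsequent differential bit encoding step of FIG. 5 is not shown, and the helper name is hypothetical.

```python
# Sketch of the 16 QAM bitwise post-translation described above: for each group
# of four decoded QUBO bits, if the second bit is 1, flip the third and fourth
# bits (e.g., 1100 -> 1111); otherwise leave the group as-is.
def gray_post_translate(bits):
    out = list(bits)
    for i in range(0, len(out), 4):       # one group of 4 bits per UE symbol
        if out[i + 1] == 1:
            out[i + 2] ^= 1
            out[i + 3] ^= 1
    return out

print(gray_post_translate([1, 1, 0, 0]))  # -> [1, 1, 1, 1]
print(gray_post_translate([1, 0, 0, 1]))  # unchanged -> [1, 0, 0, 1]
```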
Then, if $\hat{b}_1, \hat{b}_2, \hat{b}_3, \hat{b}_4 = b_1, b_2, b_3, b_4$, the quantum optimizer 16 has decoded successfully, and the Gray coding of FIG. 4 is preserved in case of symbol error.

Now turning to the Ising spin glass form, the ML detection algorithm can be obtained by transforming the resulting QUBO form into an Ising form via Eq. 4. As mentioned, in one embodiment, the quantum annealer 16 is a D-Wave 2000Q quantum annealing machine. In such an embodiment, the quantum annealer 16 may implement the Ising model by using generalized Ising model parameters. For example, for BPSK modulation and given a channel matrix and vector of received signals, the quantum annealer 16 may obtain the following Ising model parameters:

$$f_i(H, y) = -2(H_{(:,i)}^I \cdot y^I) - 2(H_{(:,i)}^Q \cdot y^Q)$$
$$g_{ij}(H) = 2(H_{(:,i)}^I \cdot H_{(:,j)}^I) + 2(H_{(:,i)}^Q \cdot H_{(:,j)}^Q) \qquad \text{(Eq. 6)}$$

where $H_{(:,i)}$ denotes the i-th column of the channel matrix H. For QPSK modulation, the quantum annealer 16 may obtain the Ising model parameter $f_i$ as follows:

$$f_i(H, y) = \begin{cases} -2(H_{(:,i/2)}^I \cdot y^Q) + 2(H_{(:,i/2)}^Q \cdot y^I), & \text{if } i = 2n \\ -2(H_{(:,[i/2])}^I \cdot y^I) - 2(H_{(:,[i/2])}^Q \cdot y^Q), & \text{otherwise} \end{cases} \qquad \text{(Eq. 7)}$$

Since the real and imaginary terms of each symbol are independent, the coupler strength between $s_{2n-1}$ and $s_{2n}$ (e.g., $q_{2n-1}$ and $q_{2n}$) is logical “0”. For other $s_i$ and $s_j$, the Ising coupler strength for QPSK may be defined as:

$$g_{ij}(H) = \begin{cases} 2(H_{(:,[i/2])}^I \cdot H_{(:,[j/2])}^I) + 2(H_{(:,[i/2])}^Q \cdot H_{(:,[j/2])}^Q), & \text{if } i + j = 2n \\ \pm 2(H_{(:,[i/2])}^I \cdot H_{(:,[j/2])}^Q) \mp 2(H_{(:,[i/2])}^Q \cdot H_{(:,[j/2])}^I), & \text{otherwise} \end{cases} \qquad \text{(Eq. 8)}$$

where $i < j$ and the sign of the latter case of Eq. 8 is determined by whether $i = 2n$ (i.e., when $i = 2n$, then “+” and “−”). In the case of 16 QAM modulation, the Ising model parameters may follow the same structure as the BPSK and QPSK cases. These are illustrated in FIGS. 8A and 8B. For example, FIG. 8A illustrates the $f_i$ parameters for 16 QAM. Since the real and imaginary terms of each symbol are independent, the coupler strength between $s_{4n-3}$, $s_{4n-2}$, $s_{4n-1}$, $s_{4n}$ is “0”. And, for the other $s_i$ and $s_j$, the Ising coupler strength $g_{ij}$ for 16 QAM is illustrated in FIG. 8B.

The Ising spin glass form may be generalized using Ising model parameters. In this regard, the quantum optimizer 16 may insert the given channel H and the signal y received by the receiver 12 without requiring any computationally expensive operations. For example, the quantum optimizer 16 may directly consider the expansion of the norm in Eq. 5. Thus, the computational time and resources required for the ML to quantum optimizer conversion can be neglected, as they are generally insignificant.

Once the ML detection algorithm is in quadratic form, the processor 14 may compile the corresponding Ising model onto the quantum annealer 16. Again, in one embodiment, the quantum optimizer 16 is a D-Wave 2000Q quantum annealer. In this regard, the quantum optimizer 16 may implement an Ising model objective function energetically hardcoded so that Eq. 2 can support a certain coefficient $g_{ij}$ being nonzero if the variables $s_i$ and $s_j$ are associated to physical variables (i.e., physical qubits) in such a way that the qubits are energetically coupled. In this regard, the quantum optimizer 16 may use a coupling matrix, such as the Chimera graph 100 illustrated in FIG. 9. In some embodiments, the coupling matrix could be a Pegasus architecture as developed by D-Wave. In some embodiments, the compilation from Ising form could require a mapping known as the LHZ scheme. The Chimera graph 100 illustrates qubit connections for a 32×32 BPSK problem being embedded onto a quantum annealer, which could be the optimizer 16.
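Before turning to the embedding, note that the BPSK Ising parameters of Eq. 6 reduce to dot products of the I and Q parts of the channel columns and the received vector, as in the following sketch (function name illustrative):

```python
# Sketch of the BPSK Ising parameters of Eq. 6: f_i and g_ij are built directly
# from dot products of the real (I) and imaginary (Q) parts of the channel
# columns H_(:,i) and of the received vector y.
import numpy as np

def bpsk_ising_params(H, y):
    HI, HQ = H.real, H.imag
    yI, yQ = y.real, y.imag
    N = H.shape[1]
    f = np.array([-2 * HI[:, i] @ yI - 2 * HQ[:, i] @ yQ for i in range(N)])
    g = np.zeros((N, N))
    for i in range(N):
        for j in range(i + 1, N):
            g[i, j] = 2 * HI[:, i] @ HI[:, j] + 2 * HQ[:, i] @ HQ[:, j]
    return f, g

rng = np.random.default_rng(3)
H = (rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))) / np.sqrt(2)
y = H @ rng.choice([-1.0, 1.0], 4)
f, g = bpsk_ising_params(H, y)
print("f =", f)
```

As the text notes, this construction involves no computationally expensive operations: it is linear algebra on H and y that can be evaluated per subcarrier.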
The Chimera graph100comprises a plurality of Chimera unit cells103, with each cell103comprising a set of eight physical qubits104, as illustrated in the expanded section102of the Chimera graph100. Each edge in the Chimera graph100is a coupler105. Once the Ising coefficients are passed to the quantum annealer16, the coefficients are assigned to the edges/couplers105of the Chimera graph100. The coefficients (e.g., the nonzero gijalong with their connected nodes) are divided into unit cells103. While the Ising problem from Eq. 1 is almost fully connected (e.g., gij≠0 for most (i, j) pairs), the Chimera graph100allows for the embedding of the Ising problem. One method of embedding is to clone variables in such a way that a binary variable becomes associated not with a single qubit104, but with a connected linear chain of qubits104.FIG.10illustrates this cloning process as a mapping of logical qubits111onto physical qubits104of unit cells103in the Chimera graph100ofFIG.9. Each unit cell103comprises four logical qubits, and the other unit cells103are employed in order to interconnect diagonal cells. More specifically, suppose the unit cell103with the coordinate value [1, 1] includes logical qubits1-4and the unit cell103with the coordinate value [2, 2] includes logical qubits5-8. The left side of the unit cell103with the coordinate value [2, 1] has a vertical clone of logical qubits5-8and the right side has a horizontal clone of logical qubits1-4. Then, the logical qubits1-4and5-8are connected by a single unit cell103with the coordinate value [2, 1]. The unit cell103hosting the next four logical qubits9-12is placed at the coordinate value [3, 3]. The two unit cells below, with the coordinate values [3, 1] and [3, 2], are used for connections between the logical qubits9-12and1-4, and the logical qubits9-12and5-8, respectively. Given a number N of spin variables (i.e., logical qubits111) in Ising form, the embedding may represent each with a chain of ⌈N/4⌉+1 qubits, for a total of N(⌈N/4⌉+1) qubits (e.g., with N=Nt·log2(|O|), where |O| is the constellation size). The following table summarizes the size of the embedding, in logical and (parenthesized) physical qubits, as a function of the MIMO detection problem's parameters (i.e., the number of users, the number of antennas, and the modulation type):

Configuration  BPSK         QPSK          16 QAM          64 QAM
10 × 10        10 (40)      20 (120)      40 (440)        60 (1000)
20 × 20        20 (120)     40 (440)      80 (2000)       120 (4000)
40 × 40        40 (440)     80 (2000)     160 (7000)      240 (15,000)
60 × 60        60 (1000)    120 (4000)    240 (15,000)    360 (33,000)

After embedding onto the Chimera graph100, the Ising problem needs to be recast into an equivalent problem that has the same ground state but also satisfies the Chimera graph100constraints. A constant penalty term (JF) may need to be introduced to quantify the relatively large coupling that constrains the physical qubits belonging to the same logical qubit to prefer the same state. FIG.11illustrates additional details regarding this embedding. For example, the embedding maps the Ising problem to an equivalent one that has the same ground state but also satisfies the Chimera graph constraints (e.g., of the Chimera graph100ofFIG.9). The compiled objective function is represented inFIG.10, where the original logical variables siare associated to a chain i of c=1 . . . (⌈N/4⌉+1) qubits, indexed with spins sic. |JF| is operable to penalize the condition sic≠sic′, in that it enforces all qubits in the chain to assume the same value (±1). This enforcement may be more likely to happen for large values of |JF|; however, the maximum negative energy value may be set to −1 by design.
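The chain-length arithmetic above is easy to check programmatically. The sketch below (illustrative only, using the N(⌈N/4⌉+1) rule from the text with N = Nt·log2|O|) reproduces the logical and physical qubit counts in the table, up to the table's rounding of the physical counts:

```python
import math

def embedding_size(num_users, bits_per_symbol):
    """Logical and physical qubit counts for the Chimera embedding:
    N logical spins, each cloned into a chain of ceil(N/4)+1 physical qubits."""
    N = num_users * bits_per_symbol          # N = Nt * log2(|O|)
    chain = math.ceil(N / 4) + 1
    return N, N * chain

for users in (10, 20, 40, 60):
    row = []
    for bits, name in ((1, "BPSK"), (2, "QPSK"), (4, "16QAM"), (6, "64QAM")):
        logical, physical = embedding_size(users, bits)
        row.append(f"{name}: {logical} ({physical})")
    print(f"{users}x{users}", *row)
# e.g., 40x40 16 QAM -> 160 logical, 6560 physical (~7000 after rounding)
```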
|JF| effectively re-normalizes the terms in the objective function by a factor of |JF|−1. The linear term value fiis additionally divided by the number of qubits in the chain (⌈N/4⌉+1). The duplication of variables ensures the existence of a pair of qubits in the chains such that a physical coupler in the Chimera graph (e.g., the Chimera graph100ofFIG.9) exists, where δijis the set of pairs of qubits that are connected by a physical bond once the chains i and j are specified. The bit string that the quantum optimizer16returns may be expressed in terms of the embedded Ising problem and is therefore decoded (e.g., “un-embedded”) in order to have the values of the bits expressed in terms of the ML Ising problem. This may be performed by checking that all the qubits of a logical chain are either +1 or −1. If not all spins are concordant, the value of the corresponding logical variable may be obtained by majority voting (e.g., in case of a vote tie, the value is randomized). Once the logical variables are determined, each configuration yields a corresponding energy of the Ising objective function by substituting it into the original Ising spin glass equation of Eq. 2. An exemplary application programming interface (API) between the control plane and the quantum substrate, as well as the machine parameters of the quantum annealer16and their tuning, are now explained. Each independent cycle (e.g., an anneal cycle in a quantum annealer) on the quantum optimizer16generally yields a configuration of spins (e.g., one decoded bit string). The quantum optimizer16may be programmed to run a batch of Nacycles (e.g., one quantum annealing run) with the same parameters to accumulate statistics, which generally implies that there is a set of Naconfigurations from a job submission. The lowest energy configuration among the Naanneals is generally the best answer found. Multiple instances (e.g., identical or not) can be run physically alongside each other, reducing runtime by a parallelization factor Pf≅Ntot/(N(⌈N/4⌉+1)). For example, a small 16-qubit problem (e.g., 16-user BPSK, 8-user QPSK, or 4-user 16 QAM) employs about 80 physical qubits and could be run more than 20 times in parallel on the quantum annealer16. If the quantum optimizer16is an analog device, the desired embedded Ising coefficients (e.g.,FIG.11) do not perfectly match real energy values once hardcoded in the quantum annealer16. Accordingly, these coefficients may give rise to intrinsic control errors (ICE), an uncontrollable shift in the actual program values of the objective function. ICE may be modeled as noise fluctuating at a timescale of the order of the anneal time. For example, on each anneal, Ising coefficients may be perturbed as fi→fi+δfi, gij→gij+δgij, where the noise is Gaussian with means and variances of δfi≅0.008±0.02 and δgij≅−0.015±0.025, respectively, measured in a delicate phase of the annealing run. The impact of ICE on performance may depend on the problem, but precision issues may arise if the largest energy scale squeezes the values of the coefficients inFIG.11to a level where ICE is likely to erase significant information about the problem's ground state configuration. As mentioned, the value of |JF|, which enforces a chain of qubits to return a series of values that are in agreement (e.g., all +1 or all −1), and the annealing time Tamay both be important performance parameters that determine the net time to find a solution and characterize the overall performance of the quantum annealer16.
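A minimal sketch of the un-embedding step described above (illustrative only; the function name is hypothetical, and the tie randomization follows the text):

```python
import random

def unembed_chain(chain_spins, rng=random.Random(0)):
    """Collapse one logical chain of +-1 physical spins to a logical value.
    Concordant chains pass through; discordant chains are majority-voted,
    with a vote tie broken at random, as described in the text."""
    total = sum(chain_spins)
    if total > 0:
        return +1
    if total < 0:
        return -1
    return rng.choice((-1, +1))  # vote tie -> randomized

# Example: three chains of physical qubits.
chains = [[+1, +1, +1, +1, +1],   # concordant chain
          [+1, -1, +1, +1, -1],   # discordant -> majority vote gives +1
          [+1, -1, -1, +1]]       # even-length tie -> randomized
print([unembed_chain(c) for c in chains])
```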
Pause times Tp(e.g., 1, 10 and 100 μs) in the middle of the annealing (e.g., with Taequal to 1 μs), with various pause positions Sp, may be introduced to illustrate the effect of pausing on the problems. Setting |JF| too large may “wash out” the problem information due to the ICE. But |JF| on average may increase with the number of logical chains and fully connected problems in the absence of ICE. In one embodiment, the dynamic range of coupler strengths may be defined as the ratio between the maximum and minimum values that can be set (e.g., for gijin Eq. 2). To strengthen interactions between embedding qubits, the quantum annealer16may be able to double the magnitude of valid negative coupler values, effectively increasing the precision of embedded problems and reducing ICE. However, this improved range option, when enabled, may break the symmetry of the Ising objective function by substituting the opposite signs for connected coefficients and their couplings into the same problem. Accordingly, the improved range option may preclude averaging over symmetrical instances, which the quantum optimizer16might otherwise do without the improved range option to mitigate leakage errors. In one embodiment, the quantum annealer used as the quantum optimizer16was evaluated considering the same number of antennas at the UEs15and the WCP20(e.g., where Nt=Nrabove). In these evaluations, certain metrics were determined, such as the time to solution (TTS), the BER, and the time to BER (TTB). In determining the time to solution, the ground state, corresponding to the minimum energy solution within the search space of 2^N bit strings (where N is the variable count), may be found on any given anneal with a probability p0. In the absence of channel noise, the ground state corresponds to a correct decoding. Each anneal may be considered an independent identically distributed random process, meaning that the expected time to solution for a target probability P (i.e., TTS(P)) is the anneal time of each anneal multiplied by the expected number of samples needed to find the ML solution with probability P. In this embodiment, TTS(P)=Ta·log(1−P)/log(1−p0), where P is routinely set at 0.99. In evaluating the BER and the TTB, the TTS reflects the expected time to find the ground state but does not characterize the expected time that the quantum annealer16takes to achieve a certain BER. This quantity may differ from the TTS because the TTS generally only considers the ground state. Solutions with energy greater than the ground state may have no or relatively few bit errors, even though wireless channel noise may induce bit errors in the ground state solution itself. Accordingly, a metric TTB(p) is introduced to characterize the time required to obtain a certain BER. In this embodiment, the TTB(p) for a single channel use is illustrated. Since one run of the quantum annealer16may include multiple (Na) anneals, the annealing solution with minimum energy among all anneals in that run may be returned. This process included one instance with a channel use comprised of certain transmitted bits and a certain wireless channel. In this embodiment, the quantum annealer16found different solutions with different Ising energies, ranking them in order of their energy.
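To make the TTS relationship concrete, here is a small Python sketch (illustrative only; the Ta and p0 values are made-up placeholders, not measurements from this text):

```python
import math

def time_to_solution(Ta_us, p0, P=0.99):
    """Expected time to find the ground state with probability P, given a
    per-anneal ground-state probability p0 and an anneal time Ta:
    TTS(P) = Ta * log(1 - P) / log(1 - p0)."""
    n_anneals = math.log(1 - P) / math.log(1 - p0)
    return Ta_us * n_anneals

# Hypothetical example: 1 us anneals with a 10% chance per anneal of hitting
# the ground state -> about 44 anneals (~44 us) for 99% confidence.
print(time_to_solution(Ta_us=1.0, p0=0.10))
```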
Generally, the quantum optimizer16returns only the best solution found by all anneals in a run, such that the expected BER of an instance I after Naanneals can be expressed as:

E(BER(Na)) = Σk=1..L [ (Σr=k..L pI(r))^Na − (Σr=k+1..L pI(r))^Na ] · FI(k)/N (Eq. 9)

where N is the qubit count, L(≤Na) is the number of distinct solutions, r(1≤r≤L) is the rank index of each solution, pI(r) is the probability of obtaining the rth-ranked solution, and FI(k) is the number of bit errors of the kth-ranked solution against ground truth. To compute the TTB(p), the left-hand side of Eq. 9 is replaced with p, such that Namay be solved for and TTB(p)=Na·Ta/Pf(with Pfthe parallelization factor discussed above) may be computed. FIG.12is a flowchart of an exemplary process200of the system10ofFIG.1. In this embodiment, an ML detection algorithm is embedded on the quantum annealer16, in the process element202. For example, the processor14may reduce the ML detection algorithm into a quadratic form such that it may be embedded onto the qubits of the quantum annealer16. With the ML detection algorithm embedded onto the quantum annealer16, the receiver12may receive a plurality of spatially multiplexed data streams, in the process element204(e.g., via Wi-Fi, cellular telephony, coaxial cable, etc.). Then, the quantum optimizer16may decode the spatially multiplexed data streams via the embedded ML detection algorithm to detect the data bits of a plurality of users, such as the UEs15illustrated inFIG.2, in the process element206. Any of the above embodiments herein may be rearranged and/or combined with other embodiments. Accordingly, the concepts herein are not to be limited to any particular embodiment disclosed herein. Additionally, the embodiments can take the form of entirely hardware embodiments or embodiments comprising both hardware and software elements. Portions of the embodiments may be implemented in software, which includes but is not limited to firmware, resident software, microcode, etc.FIG.13illustrates a computing system300in which a computer readable medium306may provide instructions for performing any of the methods disclosed herein. Furthermore, the embodiments can take the form of a computer program product accessible from the computer readable medium306providing program code for use by or in connection with a computer or any instruction execution system. For the purposes of this description, the computer readable medium306can be any apparatus that can tangibly store the program for use by or in connection with the instruction execution system, apparatus, or device, including the computer system300. The medium306can be any tangible electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system (or apparatus or device). Examples of a computer readable medium306include a semiconductor or solid state memory, magnetic tape, a removable computer diskette, a random access memory (RAM), NAND flash memory, a read-only memory (ROM), a rigid magnetic disk, and an optical disk. Some examples of optical disks include compact disk-read only memory (CD-ROM), compact disk-read/write (CD-R/W), and digital versatile disc (DVD). The computing system300, suitable for storing and/or executing program code, can include one or more processors302coupled directly or indirectly to memory308through a system bus310. The memory308can include local memory employed during actual execution of the program code, bulk storage, and cache memories which provide temporary storage of at least some program code in order to reduce the number of times code is retrieved from bulk storage during execution.
Input/output or I/O devices304(including but not limited to keyboards, displays, pointing devices, etc.) can be coupled to the system either directly or through intervening I/O controllers. Network adapters may also be coupled to the system to enable the computing system300to become coupled to other data processing systems, such as through host systems interfaces312, or to remote printers or storage devices through intervening private or public networks. Modems, cable modems, and Ethernet cards are just a few of the currently available types of network adapters.
38,468
11863280
The drawings are for illustrative purposes only and are not to scale. DETAILED DESCRIPTION With reference toFIG.1, in a first embodiment of the method of determining the directional frequency response of an arrangement of transducer elements, a simulation of the locations of an arrangement of transducer elements is provided101as a periodic spatial function. The transducer elements, having a two-dimensional arrangement, are located in an xy plane of the spatial domain, having spatial coordinates x, y. The transducer elements could equally have a one-dimensional or a three-dimensional arrangement. The periodic spatial function is determined by sampling the sensor space with an infinite grid of lattice points. A sinc filter is used as an anti-aliasing filter to determine appropriate gain values at each of the lattice points due to the proximity, or otherwise, of any transducer elements. A beamforming direction and frequency range are provided appropriate to the intended application102, being to determine the directional frequency response of an arrangement of microphones for detecting audio signals. The beamforming direction is selected in the positive x direction and the frequency range is selected as 0 Hz to 24 kHz. A two-dimensional spatial Fast Fourier Transform is applied to the periodic spatial function to convert the periodic spatial function from the spatial domain to the spatial frequency domain103. The spatial frequencies are converted into equivalent temporal frequencies by multiplication by the speed of propagation of sound in air. The step of determining the directional frequency response104is achieved by applying a transformation to the frequency response values for the selected beamforming direction and frequency range. For each of the frequencies in the frequency range 0 Hz to 24 kHz, there is a locus of points, in the spatial frequency domain, having coordinates kx, ky, kz. Each point has a respective frequency response value corresponding to the magnitude, in decibels, of the directional response of the arrangement of transducer elements at this frequency. The locus of points for each of the frequencies defines a three-dimensional spherical spatial frequency contour passing through the origin k=0 of the spatial frequency domain. The spatial frequency domain is translated into the modified frequency domain by applying the transformation: kx=gx−√(gx²+gy²+gz²), ky=gy, and kz=gz, where gx, gy, gzare the modified frequency coordinates of the resulting modified frequency domain. Each of the spatial frequency contours is translated such that, when mapped into the coordinates in the modified frequency domain, the modified frequency contours are arranged as a nested family of spherical contours, each being centred on the origin. The directional frequency response of the arrangement of transducer elements is outputted in three-dimensional polar coordinates derived from the Cartesian coordinates gx, gy, gz. With reference toFIG.2, in a second embodiment of the method of determining the frequency response of an arrangement of transducer elements, 48 microphone transducer elements206are arranged in two dimensions in the xy plane of the spatial domain207. Each of the 48 microphones is configured to receive sound in the selected frequency range of 0 Hz to 24 kHz. The beamforming direction is specified relative to the arrangement of transducer elements, in the positive x direction, to detect signals travelling in the negative x direction as indicated by arrow A.
The frequency range of 0 Hz to 24 kHz is selected. Equally, other beamforming directions and frequency ranges may be selected. The 48 microphones are equidistantly spaced at 36 mm in the x direction and 39 mm in the y direction, and, for the purposes of determining the simulation of their locations, are defined within a sensor space having dimensions of 3.6 m×3.6 m. The resulting periodic spatial function of the arrangement of 48 microphones206is determined by application of a sinc anti-aliasing filter. With reference toFIG.3, by means of the application of a spatial Fast Fourier Transform to the periodic spatial function, the simulated locations of the arrangement of microphones206are converted from the spatial domain207into corresponding frequency response values in the spatial frequency domain319, having coordinates kx, ky, kz. The spatial frequencies are converted to equivalent temporal frequencies in Hertz. The locus of points for each of the frequencies defines a spherical spatial frequency contour. For example, the sets of points corresponding to the frequencies 24 kHz, 18 kHz, 12 kHz, 6 kHz define frequency contours313,314,315,316respectively. All of the frequency contours313,314,315,316pass through the origin k=0. Each point within the spatial frequency domain319has a respective frequency response value in decibels, as indicated by the degree of shading inFIG.3. The straight lines B, C, D and E correspond to the angles 45°, 90°, 135° and 180° with respect to the beamforming direction. With reference toFIG.4, the step of determining the directional frequency response is achieved by translating the spatial frequency domain319, and the associated spatial frequency contours, into a modified frequency domain419according to the transformation: kx=gx−√(gx²+gy²+gz²), ky=gy, and kz=gz, where gx, gy, gzare the coordinates of the resulting modified frequency domain419. Respective frequency response values, associated with the spatial frequency contours313,314,315,316in the spatial frequency domain319, are translated such that the resulting contours413,414,415,416for each of the plurality of frequencies in the modified frequency domain419are arranged as a nested family of spherical contours, each centred on the origin. Frequency contours413,414,415,416are the translations of the spatial frequency contours313,314,315,316respectively. The straight lines G, H, I and J correspond to the directional frequency response at angles 45°, 90°, 135° and 180° with respect to the beamforming direction. Following the application of the transformation, the directional frequency response of the arrangement of microphones206is outputted in Cartesian coordinates (gx, gy), but could equally be outputted in polar coordinates (φ, f), as illustrated by the circular gridlines. With reference toFIGS.5aand5b, the translated frequency contours513,515of the modified frequency domain520are illustrated in three-dimensional Cartesian coordinates (gx, gy, gz). Frequency contours513and515correspond to frequency contours413and415as illustrated inFIG.4in two dimensions. Thereby, the directional frequency response of the arrangement of microphones is outputted, as the modified frequency domain419, as illustrated in two dimensions inFIG.4, and in three dimensions inFIGS.5aand5b.
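The geometric effect of this transformation can be checked numerically. The Python sketch below (a minimal illustration, not the patent's implementation) samples, for each frequency, a spatial frequency contour that passes through the origin, maps it into the modified domain, and confirms that each translated contour is a circle of radius f centred on the origin:

```python
import numpy as np

def contour_points(f, phi):
    """2D spatial-frequency contour for temporal frequency f (Hz):
    kx = f (cos(phi) - 1), ky = f sin(phi); passes through the origin."""
    return f * (np.cos(phi) - 1.0), f * np.sin(phi)

def to_modified_domain(kx, ky, f):
    """Invert kx = gx - sqrt(gx^2 + gy^2), ky = gy on the contour of
    frequency f, where the radical equals f."""
    return kx + f, ky

phi = np.linspace(0.0, 2.0 * np.pi, 361)
for f in (6e3, 12e3, 18e3, 24e3):            # the contour frequencies of FIG.3
    kx, ky = contour_points(f, phi)
    gx, gy = to_modified_domain(kx, ky, f)
    # Each translated contour is a circle of radius f centred on the origin.
    assert np.allclose(np.hypot(gx, gy), f)
print("all contours map to origin-centred circles of radius f")
```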
With reference toFIG.6, in a third embodiment of the method of determining the directional frequency response of an arrangement of transducer elements, the modified frequency domain622is outputted as the determined directional frequency response. The third embodiment is similar to the second embodiment, but the step of determining the directional frequency response is achieved by translating the spatial frequency domain319, and the associated spatial frequency contours313,314,315,316, into a modified frequency domain622according to the transformation: kx=f(cos φ−1) and ky=f sin φ, where φ, f are the coordinates of the modified frequency domain622, wherein φ is the angle, with respect to the beamforming direction, and f is the frequency, of the resulting directional frequency response. The locus of points, and the respective frequency response values, associated with the spatial frequency contours313,314,315,316are translated such that the frequency contours for each of the plurality of frequencies are arranged as respective parallel linear contours613,614,615,616. The straight lines K, L, M and N correspond to the angles 45°, 90°, 135° and 180° with respect to the beamforming direction. With reference toFIGS.7aand7b, in a fourth embodiment of the method of determining the frequency response of an arrangement of transducer elements, seventeen microphone transducer elements706are arranged in two-dimensional concentric rings in the xy plane of the spatial domain707. Each of the seventeen microphones is configured to receive sound in the selected frequency range of 0 Hz to 12 kHz. The beamforming direction is specified relative to the arrangement of transducer elements706, in the positive x direction, to detect signals travelling in the negative x direction as indicated by arrow Z. The frequency range of 0 Hz to 12 kHz is selected. Equally, other beamforming directions and frequency ranges may be selected. The seventeen microphones706are arranged in three concentric rings, and, for the purposes of determining the simulation of their locations, are defined within a sensor space having dimensions of 7.2 m×7.2 m. The resulting periodic spatial function of the arrangement of seventeen microphones706is determined by application of a sinc anti-aliasing filter. The simulated locations of the arrangement of microphones are converted from the spatial domain707into corresponding frequency response values in the spatial frequency domain using a spatial Fourier Transform. The spatial frequencies are converted to equivalent temporal frequencies in Hertz. The locus of points for each of the frequencies defines a frequency contour. The step of determining the directional frequency response is achieved by application of the following transformation to the spatial frequency domain: kx=gx−√(gx²+gy²+gz²), ky=gy, and kz=gz, where gx, gy, gzare the coordinates of the resulting modified frequency domain719. Respective frequency response values, associated with the spatial frequency contours, are translated such that the spatial frequency contours for each of the plurality of frequencies are arranged as a nested family of spherical contours, each centred on the origin. Following the application of the transformation to each of the sets of points and the respective frequency response values, the frequency response of the arrangement of microphones706is outputted.
The directional frequency response of the arrangement of transducer elements is outputted in three-dimensional polar coordinates φ, θ, f derived from the Cartesian coordinates gx, gy, gzaccording to the transformation: gx=f cos φ cos θ, gy=f sin φ cos θ, and gz=f sin θ. With reference toFIG.8, in a fifth embodiment of the method of determining the directional frequency response of an arrangement of transducer elements, 16 microphone transducer elements806are arranged in two dimensions in the xy plane of the spatial domain807. Each of the 16 microphones is configured to receive sound in the selected frequency range of 0 Hz to 24 kHz. The beamforming direction is specified relative to the arrangement of the transducer elements806, in the positive x direction, to detect signals travelling in the negative x direction. The frequency range of 0 Hz to 24 kHz is selected. The simulation of the locations of the arrangement of 16 microphones806is provided by a periodic spatial function. By means of the application of a spatial Fast Fourier Transform to the periodic spatial function, the simulated locations of the arrangement of microphones806are converted from the spatial domain807into corresponding frequency response values in the spatial frequency domain, having coordinates kx, ky, kz. With reference toFIG.9a, the step of determining the directional frequency response is achieved by translating the spatial frequency domain, and the associated frequency contours, into a modified frequency domain919according to the transformation: kx=gx−√(gx²+gy²+gz²), ky=gy, and kz=gz, where gx, gy, gzare the coordinates of the resulting modified frequency domain919.FIG.9ashows a representation of the resulting directional frequency response in polar coordinates φ, θ, f, which are derived from the Cartesian coordinates gx, gy, gzaccording to the transformation: gx=f cos φ cos θ, gy=f sin φ cos θ, and gz=f sin θ.FIG.9bshows the measured directional frequency response of the arrangement of microphones806. The array of microphones was rotated on a turntable in an anechoic chamber in the presence of test signals from a loudspeaker, from which the directional frequency response was calculated. In bothFIG.9aandFIG.9b, the degree of shading represents the magnitude of the response of the arrangement of microphones806. As can be seen, there is a good correlation between the magnitude of the directional frequency response in bothFIGS.9aand9b. For example, the main lobe923,922is clearly discernible and shows a good degree of similarity in both the measured and the determined frequency response. With reference toFIGS.10aand10b, in a sixth embodiment of the method of determining the directional frequency response of an arrangement of transducer elements, a measured directional frequency response and a determined directional frequency response of the arrangement of transducer elements806are outputted. The sixth embodiment is similar to the fifth embodiment, but the step of determining the directional frequency response is achieved by translating the spatial frequency domain into a modified frequency domain according to the transformation: kx=f(cos φ−1) and ky=f sin φ, where φ, f are the coordinates of the modified frequency domain, wherein φ is the angle, with respect to the beamforming direction, and f is the frequency, of the resulting directional frequency response.FIG.10ashows a representation of the resulting directional frequency response as a Cartesian plot.
FIG.10bshows a Cartesian representation of the measured directional frequency response of the arrangement of microphones806, measured as described in the fifth embodiment and used to produce the response illustrated inFIG.9b. In bothFIG.10aandFIG.10b, the degree of shading represents the magnitude of the response of the arrangement of microphones806. As can be seen, there is a good correlation between the magnitude of the directional frequency response in bothFIGS.10aand10b. For example, the main lobe1023,1022is clearly discernible and shows a good degree of similarity in both the measured and the determined directional frequency response.
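As a rough end-to-end illustration of the method described in these embodiments, the following Python sketch (a simplified stand-in, not the patent's implementation: it places ideal point microphones on a grid, uses a 2D FFT for the spatial transform, and samples the result along the contour kx=(f/c)(cos φ−1), ky=(f/c)sin φ; the sinc anti-aliasing filter and decibel scaling are omitted, and the 8×6 array spacing mimics the second embodiment) computes an approximate directional frequency response on a (φ, f) grid:

```python
import numpy as np

c = 343.0                       # speed of propagation of sound in air, m/s
L, n = 3.6, 1024                # sensor space (m) and grid resolution
dx = L / n

# Place a rectangular 48-microphone array on the grid (delta functions).
grid = np.zeros((n, n))
for ix in range(8):
    for iy in range(6):
        x, y = ix * 0.036, iy * 0.039          # 36 mm x 39 mm spacing
        grid[int(round(y / dx)), int(round(x / dx))] = 1.0

# 2D spatial FFT -> array response over spatial frequencies kx, ky (cycles/m).
spectrum = np.fft.fftshift(np.fft.fft2(grid))
k_axis = np.fft.fftshift(np.fft.fftfreq(n, d=dx))

def response(f_hz, phi):
    """Sample |spectrum| at kx = (f/c)(cos(phi) - 1), ky = (f/c) sin(phi)
    by nearest-neighbour lookup (crude but adequate for a sketch)."""
    kx = f_hz / c * (np.cos(phi) - 1.0)
    ky = f_hz / c * np.sin(phi)
    i = np.argmin(np.abs(k_axis - ky))
    j = np.argmin(np.abs(k_axis - kx))
    return np.abs(spectrum[i, j])

# Directional frequency response at a few angles, as in the third embodiment.
for f in (6e3, 12e3, 24e3):
    row = [response(f, np.deg2rad(a)) for a in (0, 45, 90, 135, 180)]
    print(f"{f/1e3:>4.0f} kHz:", np.round(row, 1))
```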
14,487
11863281
DETAILED DESCRIPTION The detailed description set forth below in connection with the appended drawings is intended as a description of various configurations and is not intended to represent the only configurations in which the concepts described herein may be practiced. The detailed description includes specific details for the purpose of providing a thorough understanding of various concepts. However, it will be apparent to those skilled in the art that these concepts may be practiced without these specific details. In some instances, well known structures and components are shown in block diagram form in order to avoid obscuring such concepts. Several aspects of telecommunication systems will now be presented with reference to various apparatus and methods. These apparatus and methods will be described in the following detailed description and illustrated in the accompanying drawings by various blocks, components, circuits, processes, algorithms, etc. (collectively referred to as “elements”). These elements may be implemented using electronic hardware, computer software, or any combination thereof. Whether such elements are implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. By way of example, an element, or any portion of an element, or any combination of elements may be implemented as a “processing system” that includes one or more processors. Examples of processors include microprocessors, microcontrollers, graphics processing units (GPUs), central processing units (CPUs), application processors, digital signal processors (DSPs), reduced instruction set computing (RISC) processors, systems on a chip (SoC), baseband processors, field programmable gate arrays (FPGAs), programmable logic devices (PLDs), state machines, gated logic, discrete hardware circuits, and other suitable hardware configured to perform the various functionality described throughout this disclosure. One or more processors in the processing system may execute software. Software shall be construed broadly to mean instructions, instruction sets, code, code segments, program code, programs, subprograms, software components, applications, software applications, software packages, routines, subroutines, objects, executables, threads of execution, procedures, functions, etc., whether referred to as software, firmware, middleware, microcode, hardware description language, or otherwise. Accordingly, in one or more example aspects, the functions described may be implemented in hardware, software, or any combination thereof. If implemented in software, the functions may be stored on or encoded as one or more instructions or code on a computer-readable medium. Computer-readable media includes computer storage media. Storage media may be any available media that can be accessed by a computer. By way of example, and not limitation, such computer-readable media can comprise a random-access memory (RAM), a read-only memory (ROM), an electrically erasable programmable ROM (EEPROM), optical disk storage, magnetic disk storage, other magnetic storage devices, combinations of the aforementioned types of computer-readable media, or any other medium that can be used to store computer executable code in the form of instructions or data structures that can be accessed by a computer. 
As used herein, the term computer-readable medium is expressly defined to include any type of computer readable storage device and/or storage disk and to exclude propagating signals and to exclude transmission media. As used herein, “computer-readable medium,” “machine-readable medium,” “computer-readable memory,” and “machine-readable memory” may be used interchangeably. In some examples, a wireless device (e.g., a UE) may communicate using multiple carriers. For example, a UE configured for dual connectivity may be connected to two different base stations, and may be able to simultaneously transmit and receive data on multiple component carriers of the two different base stations. A UE configured for carrier aggregation may be able to simultaneously transmit and receive data on multiple component carriers from a same base station. In some examples in which different carriers are used for communication, a synchronization signal block (SSB) burst may be assigned to each carrier. An SSB burst may include one or more SSBs, and each SSB associated with an SSB burst may be allocated to a beam. In some examples, the different carriers may be intra-band carriers. In some examples, the different carriers may be inter-band carriers within a same frequency range. In some examples, the different carriers may be inter-band carriers in different frequency ranges. Example techniques disclosed herein enable associating cross-carrier beams based on, for example, one or more similar characteristics. For example, one or more beams of a first carrier may be cross-carrier associated with one or more beams of a second carrier. In some examples, when a first beam of a first carrier is indicated as being cross-carrier associated with a second beam of a second carrier, then characteristics associated with the first beam may be applied to the second beam. For example, if the first beam and the second beam are indicated as having a similar delay spread, then a UE can determine the delay spread for the first beam and apply the determined delay spread to the second beam without separately determining the delay spread for the second beam. FIG.1is a diagram illustrating an example of a wireless communications system and an access network100that includes UEs104in communication with base stations102or base stations180. As an example, the UE104may be configured to manage one or more aspects of wireless communication by processing cross-carrier beam associations. As an example, inFIG.1, the UE104may include a UE cross-carrier beam association handling component198. In certain aspects, the UE cross-carrier beam association handling component198may be configured to receive, from a base station, an indication of a cross-carrier beam association associated with a first carrier and a second carrier different than the first carrier. The example UE cross-carrier beam association handling component198may also be configured to determine an association between a first set of beams of the first carrier and a second set of beams of the second carrier based on the indication of the cross-carrier beam association. Additionally, the example UE cross-carrier beam association handling component198may be configured to receive on the first set of beams and the second set of beams based on the determined cross-carrier beam association. Still referring toFIG.1, in certain aspects, the base station102/180may be configured to manage one or more aspects of wireless communication via cross-carrier beam associations. 
As an example, inFIG.1, the base station102/180may include a base station cross-carrier beam association handling component199. The example base station cross-carrier beam association handling component199may be configured to determine a cross-carrier beam association between a first set of beams of a first carrier and a second set of beams of a second carrier, the first carrier being different than the second carrier. The example base station cross-carrier beam association handling component199may also be configured to transmit, to a UE, an indication of the cross-carrier beam association associated with the first carrier and the second carrier. Further, example base station cross-carrier beam association handling component199may be configured to transmit, to the UE, on the first set of beams and the second set of beams based on the determined cross-carrier beam association. Although the following description provides examples of cross-carrier beam associations directed to instances including two carriers, the concepts described herein may be applicable to any suitable quantity of carriers. Furthermore, while the following description provides examples directed to 5G NR, the concepts described herein may be applicable to other similar areas, such as LTE, LTE-A, CDMA, GSM, and/or other wireless technologies, in which cross-carrier beam associations may be beneficial. The wireless communications system (also referred to as a wireless wide area network (WWAN)) includes the base stations102, the UEs104, an Evolved Packet Core (EPC)160, and another core network190(e.g., a 5G Core (5GC)). The base stations102may include macrocells (high power cellular base station) and/or small cells (low power cellular base station). The macrocells include base stations. The small cells include femtocells, picocells, and microcells. The base stations102configured for 4G LTE (collectively referred to as Evolved Universal Mobile Telecommunications System (UMTS) Terrestrial Radio Access Network (E-UTRAN)) may interface with the EPC160through first backhaul links132(e.g., S1 interface). The base stations102configured for 5G NR (collectively referred to as Next Generation RAN (NG-RAN)) may interface with core network190through second backhaul links184. In addition to other functions, the base stations102may perform one or more of the following functions: transfer of user data, radio channel ciphering and deciphering, integrity protection, header compression, mobility control functions (e.g., handover, dual connectivity), inter-cell interference coordination, connection setup and release, load balancing, distribution for non-access stratum (NAS) messages, NAS node selection, synchronization, radio access network (RAN) sharing, multimedia broadcast multicast service (MBMS), subscriber and equipment trace, RAN information management (RIM), paging, positioning, and delivery of warning messages. The base stations102may communicate directly or indirectly (e.g., through the EPC160or core network190) with each other over third backhaul links134(e.g., X2 interface). The first backhaul links132, the second backhaul links184, and the third backhaul links134may be wired or wireless. The base stations102may wirelessly communicate with the UEs104. Each of the base stations102may provide communication coverage for a respective geographic coverage area110. There may be overlapping geographic coverage areas110. 
For example, the small cell102′ may have a coverage area110′ that overlaps the coverage area110of one or more macro base stations102. A network that includes both small cells and macrocells may be known as a heterogeneous network. A heterogeneous network may also include Home Evolved Node Bs (eNBs) (HeNBs), which may provide service to a restricted group known as a closed subscriber group (CSG). The communication links120between the base stations102and the UEs104may include uplink (UL) (also referred to as reverse link) transmissions from a UE104to a base station102and/or downlink (DL) (also referred to as forward link) transmissions from a base station102to a UE104. The communication links120may use multiple-input and multiple-output (MIMO) antenna technology, including spatial multiplexing, beamforming, and/or transmit diversity. The communication links may be through one or more carriers. The base stations102/UEs104may use spectrum up to Y MHz (e.g., 5, 10, 15, 20, 100, 400, etc. MHz) bandwidth per carrier allocated in a carrier aggregation of up to a total of Yx MHz (x component carriers) used for transmission in each direction. The carriers may or may not be adjacent to each other. Allocation of carriers may be asymmetric with respect to DL and UL (e.g., more or fewer carriers may be allocated for DL than for UL). The component carriers may include a primary component carrier and one or more secondary component carriers. A primary component carrier may be referred to as a primary cell (PCell) and a secondary component carrier may be referred to as a secondary cell (SCell). Certain UEs104may communicate with each other using device-to-device (D2D) communication link158. The D2D communication link158may use the DL/UL WWAN spectrum. The D2D communication link158may use one or more sidelink channels, such as a physical sidelink broadcast channel (PSBCH), a physical sidelink discovery channel (PSDCH), a physical sidelink shared channel (PSSCH), and a physical sidelink control channel (PSCCH). D2D communication may be through a variety of wireless D2D communications systems, such as, for example, WiMedia, Bluetooth, ZigBee, Wi-Fi based on the Institute of Electrical and Electronics Engineers (IEEE) 802.11 standard, LTE, or NR. The wireless communications system may further include a Wi-Fi access point (AP)150in communication with Wi-Fi stations (STAs)152via communication links154, e.g., in a 5 GHz unlicensed frequency spectrum or the like. When communicating in an unlicensed frequency spectrum, the STAs152/AP150may perform a clear channel assessment (CCA) prior to communicating in order to determine whether the channel is available. The small cell102′ may operate in a licensed and/or an unlicensed frequency spectrum. When operating in an unlicensed frequency spectrum, the small cell102′ may employ NR and use the same unlicensed frequency spectrum (e.g., 5 GHz, or the like) as used by the Wi-Fi AP150. The small cell102′, employing NR in an unlicensed frequency spectrum, may boost coverage to and/or increase capacity of the access network. The electromagnetic spectrum is often subdivided, based on frequency/wavelength, into various classes, bands, channels, etc. In 5G NR, two initial operating bands have been identified as frequency range designations FR1 (410 MHz-7.125 GHz) and FR2 (24.25 GHz-52.6 GHz). The frequencies between FR1 and FR2 are often referred to as mid-band frequencies.
Although a portion of FR1 is greater than 6 GHz, FR1 is often referred to (interchangeably) as a “sub-6 GHz” band in various documents and articles. A similar nomenclature issue sometimes occurs with regard to FR2, which is often referred to (interchangeably) as a “millimeter wave” band in documents and articles, despite being different from the extremely high frequency (EHF) band (30 GHz-300 GHz) which is identified by the International Telecommunications Union (ITU) as a “millimeter wave” band. With the above aspects in mind, unless specifically stated otherwise, the term “sub-6 GHz” or the like if used herein may broadly represent frequencies that may be less than 6 GHz, may be within FR1, or may include mid-band frequencies. Further, unless specifically stated otherwise, the term “millimeter wave” or the like if used herein may broadly represent frequencies that may include mid-band frequencies, may be within FR2, or may be within the EHF band. A base station102, whether a small cell102′ or a large cell (e.g., macro base station), may include and/or be referred to as an eNB, gNodeB (gNB), or another type of base station. Some base stations, such as the gNB180, may operate in a traditional sub-6 GHz spectrum, in millimeter wave frequencies, and/or near millimeter wave frequencies in communication with the UE104. When the gNB180operates in millimeter wave or near millimeter wave frequencies, the gNB180may be referred to as a millimeter wave base station. The millimeter wave base station180may utilize beamforming182with the UE104to compensate for the path loss and short range. The base station180and the UE104may each include a plurality of antennas, such as antenna elements, antenna panels, and/or antenna arrays to facilitate the beamforming. The base station180may transmit a beamformed signal to the UE104in one or more transmit directions182′. The UE104may receive the beamformed signal from the base station180in one or more receive directions182″. The UE104may also transmit a beamformed signal to the base station180in one or more transmit directions. The base station180may receive the beamformed signal from the UE104in one or more receive directions. The base station180/UE104may perform beam training to determine the best receive and transmit directions for each of the base station180/UE104. The transmit and receive directions for the base station180may or may not be the same. The transmit and receive directions for the UE104may or may not be the same. The EPC160may include a Mobility Management Entity (MME)162, other MMEs164, a Serving Gateway166, a Multimedia Broadcast Multicast Service (MBMS) Gateway168, a Broadcast Multicast Service Center (BM-SC)170, and a Packet Data Network (PDN) Gateway172. The MME162may be in communication with a Home Subscriber Server (HSS)174. The MME162is the control node that processes the signaling between the UEs104and the EPC160. Generally, the MME162provides bearer and connection management. All user Internet protocol (IP) packets are transferred through the Serving Gateway166, which itself is connected to the PDN Gateway172. The PDN Gateway172provides UE IP address allocation as well as other functions. The PDN Gateway172and the BM-SC170are connected to the IP Services176. The IP Services176may include the Internet, an intranet, an IP Multimedia Subsystem (IMS), a PS Streaming Service, and/or other IP services. The BM-SC170may provide functions for MBMS user service provisioning and delivery.
The BM-SC170may serve as an entry point for content provider MBMS transmission, may be used to authorize and initiate MBMS Bearer Services within a public land mobile network (PLMN), and may be used to schedule MBMS transmissions. The MBMS Gateway168may be used to distribute MBMS traffic to the base stations102belonging to a Multicast Broadcast Single Frequency Network (MBSFN) area broadcasting a particular service, and may be responsible for session management (start/stop) and for collecting eMBMS related charging information. The core network190may include an Access and Mobility Management Function (AMF)192, other AMFs193, a Session Management Function (SMF)194, and a User Plane Function (UPF)195. The AMF192may be in communication with a Unified Data Management (UDM)196. The AMF192is the control node that processes the signaling between the UEs104and the core network190. Generally, the AMF192provides QoS flow and session management. All user Internet protocol (IP) packets are transferred through the UPF195. The UPF195provides UE IP address allocation as well as other functions. The UPF195is connected to the IP Services197. The IP Services197may include the Internet, an intranet, an IP Multimedia Subsystem (IMS), a Packet Switch (PS) Streaming (PSS) Service, and/or other IP services. The base station may include and/or be referred to as a gNB, Node B, eNB, an access point, a base transceiver station, a radio base station, a radio transceiver, a transceiver function, a basic service set (BSS), an extended service set (ESS), a transmit reception point (TRP), or some other suitable terminology. The base station102provides an access point to the EPC160or core network190for a UE104. Examples of UEs104include a cellular phone, a smart phone, a session initiation protocol (SIP) phone, a laptop, a personal digital assistant (PDA), a satellite radio, a global positioning system, a multimedia device, a video device, a digital audio player (e.g., MP3 player), a camera, a game console, a tablet, a smart device, a wearable device, a vehicle, an electric meter, a gas pump, a large or small kitchen appliance, a healthcare device, an implant, a sensor/actuator, a display, or any other similar functioning device. Some of the UEs104may be referred to as IoT devices (e.g., parking meter, gas pump, toaster, vehicles, heart monitor, etc.). The UE104may also be referred to as a station, a mobile station, a subscriber station, a mobile unit, a subscriber unit, a wireless unit, a remote unit, a mobile device, a wireless device, a wireless communications device, a remote device, a mobile subscriber station, an access terminal, a mobile terminal, a wireless terminal, a remote terminal, a handset, a user agent, a mobile client, a client, or some other suitable terminology. FIG.2Ais a diagram200illustrating an example of a first subframe within a 5G NR frame structure.FIG.2Bis a diagram230illustrating an example of DL channels within a 5G NR subframe.FIG.2Cis a diagram250illustrating an example of a second subframe within a 5G NR frame structure.FIG.2Dis a diagram280illustrating an example of UL channels within a 5G NR subframe. 
The 5G NR frame structure may be frequency division duplexed (FDD), in which, for a particular set of subcarriers (carrier system bandwidth), subframes within the set of subcarriers are dedicated for either DL or UL, or may be time division duplexed (TDD), in which, for a particular set of subcarriers (carrier system bandwidth), subframes within the set of subcarriers are dedicated for both DL and UL. In the examples provided byFIGS.2A,2C, the 5G NR frame structure is assumed to be TDD, with subframe 4 being configured with slot format 28 (with mostly DL), where D is DL, U is UL, and F is flexible for use between DL/UL, and subframe 3 being configured with slot format 1 (with all UL). While subframes 3, 4 are shown with slot formats 1, 28, respectively, any particular subframe may be configured with any of the various available slot formats 0-61. Slot formats 0, 1 are all DL, all UL, respectively. Other slot formats 2-61 include a mix of DL, UL, and flexible symbols. UEs are configured with the slot format (dynamically through DL control information (DCI), or semi-statically/statically through radio resource control (RRC) signaling) through a received slot format indicator (SFI). Note that the description infra applies also to a 5G NR frame structure that is FDD. Other wireless communication technologies may have a different frame structure and/or different channels. A frame (10 ms) may be divided into 10 equally sized subframes (1 ms). Each subframe may include one or more time slots. Subframes may also include mini-slots, which may include 7, 4, or 2 symbols. Each slot may include 7 or 14 symbols, depending on the slot configuration. For slot configuration 0, each slot may include 14 symbols, and for slot configuration 1, each slot may include 7 symbols. The symbols on DL may be cyclic prefix (CP) orthogonal frequency division multiplexing (OFDM) (CP-OFDM) symbols. The symbols on UL may be CP-OFDM symbols (for high throughput scenarios) or discrete Fourier transform (DFT) spread OFDM (DFT-s-OFDM) symbols (also referred to as single carrier frequency-division multiple access (SC-FDMA) symbols) (for power limited scenarios; limited to a single stream transmission). The number of slots within a subframe is based on the slot configuration and the numerology. For slot configuration 0, different numerologies μ 0 to 4 allow for 1, 2, 4, 8, and 16 slots, respectively, per subframe. For slot configuration 1, different numerologies μ 0 to 2 allow for 2, 4, and 8 slots, respectively, per subframe. Accordingly, for slot configuration 0 and numerology μ, there are 14 symbols/slot and 2^μ slots/subframe. The subcarrier spacing and symbol length/duration are a function of the numerology. The subcarrier spacing may be equal to 2^μ·15 kHz, where μ is the numerology 0 to 4. As such, the numerology μ=0 has a subcarrier spacing of 15 kHz and the numerology μ=4 has a subcarrier spacing of 240 kHz. The symbol length/duration is inversely related to the subcarrier spacing.FIGS.2A-2Dprovide an example of slot configuration 0 with 14 symbols per slot and numerology μ=2 with 4 slots per subframe. The slot duration is 0.25 ms, the subcarrier spacing is 60 kHz, and the symbol duration is approximately 16.67 μs. Within a set of frames, there may be one or more different bandwidth parts (BWPs) (seeFIG.2B) that are frequency division multiplexed. Each BWP may have a particular numerology. A resource grid may be used to represent the frame structure.
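The numerology arithmetic above can be summarized in a few lines of Python (an illustrative check of the stated relationships, not part of the specification):

```python
def nr_numerology(mu, slot_config=0):
    """5G NR numerology relationships described above: the subcarrier
    spacing is 2^mu * 15 kHz; slot configuration 0 has 14 symbols per slot
    with 2^mu slots per 1 ms subframe (configuration 1: 7 symbols and
    2^(mu+1) slots); the symbol duration is the inverse of the subcarrier
    spacing (cyclic prefix overhead ignored in this sketch)."""
    scs_khz = (2 ** mu) * 15
    if slot_config == 0:
        symbols_per_slot, slots_per_subframe = 14, 2 ** mu
    else:
        symbols_per_slot, slots_per_subframe = 7, 2 ** (mu + 1)
    slot_ms = 1.0 / slots_per_subframe
    symbol_us = 1000.0 / scs_khz     # ~16.67 us at 60 kHz
    return scs_khz, symbols_per_slot, slots_per_subframe, slot_ms, symbol_us

# mu = 2 reproduces the example: 60 kHz SCS, 4 slots/subframe,
# 0.25 ms slots, and ~16.67 us symbols.
print(nr_numerology(2))
```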
Each time slot includes a resource block (RB) (also referred to as physical RBs (PRBs)) that extends 12 consecutive subcarriers. The resource grid is divided into multiple resource elements (REs). The number of bits carried by each RE depends on the modulation scheme. As illustrated inFIG.2A, some of the REs carry reference (pilot) signals (RS) for the UE. The RS may include demodulation RS (DM-RS) (indicated as R for one particular configuration, but other DM-RS configurations are possible) and channel state information reference signals (CSI-RS) for channel estimation at the UE. The RS may also include beam measurement RS (BRS), beam refinement RS (BRRS), and phase tracking RS (PT-RS). FIG.2Billustrates an example of various DL channels within a subframe of a frame. The physical downlink control channel (PDCCH) carries DCI within one or more control channel elements (CCEs) (e.g., 1, 2, 4, 8, or 16 CCEs), each CCE including six RE groups (REGs), each REG including 12 consecutive REs in an OFDM symbol of an RB. A PDCCH within one BWP may be referred to as a control resource set (CORESET). A UE is configured to monitor PDCCH candidates in a PDCCH search space (e.g., common search space, UE-specific search space) during PDCCH monitoring occasions on the CORESET, where the PDCCH candidates have different DCI formats and different aggregation levels. Additional BWPs may be located at greater and/or lower frequencies across the channel bandwidth. A primary synchronization signal (PSS) may be within symbol 2 of particular subframes of a frame. The PSS is used by a UE104to determine subframe/symbol timing and a physical layer identity. A secondary synchronization signal (SSS) may be within symbol 4 of particular subframes of a frame. The SSS is used by a UE to determine a physical layer cell identity group number and radio frame timing. Based on the physical layer identity and the physical layer cell identity group number, the UE can determine a physical cell identifier (PCI). Based on the PCI, the UE can determine the locations of the aforementioned DM-RS. The physical broadcast channel (PBCH), which carries a master information block (MIB), may be logically grouped with the PSS and SSS to form a synchronization signal (SS)/PBCH block (also referred to as SS block (SSB)). The MIB provides a number of RBs in the system bandwidth and a system frame number (SFN). The physical downlink shared channel (PDSCH) carries user data, broadcast system information not transmitted through the PBCH such as system information blocks (SIBs), and paging messages. As illustrated inFIG.2C, some of the REs carry DM-RS (indicated as R for one particular configuration, but other DM-RS configurations are possible) for channel estimation at the base station. The UE may transmit DM-RS for the physical uplink control channel (PUCCH) and DM-RS for the physical uplink shared channel (PUSCH). The PUSCH DM-RS may be transmitted in the first one or two symbols of the PUSCH. The PUCCH DM-RS may be transmitted in different configurations depending on whether short or long PUCCHs are transmitted and depending on the particular PUCCH format used. The UE may transmit sounding reference signals (SRS). The SRS may be transmitted in the last symbol of a subframe. The SRS may have a comb structure, and a UE may transmit SRS on one of the combs. The SRS may be used by a base station for channel quality estimation to enable frequency-dependent scheduling on the UL. 
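As a small illustration of the PSS/SSS relationship described above, the sketch below derives the physical cell identifier using the standard NR identity arithmetic, PCI = 3·N_ID(1) + N_ID(2); this formula is a known 3GPP convention assumed here rather than something spelled out in this text:

```python
def nr_pci(nid1, nid2):
    """Physical cell identifier from the SSS group number (N_ID(1), 0..335)
    and the PSS identity (N_ID(2), 0..2): PCI = 3 * N_ID(1) + N_ID(2)."""
    assert 0 <= nid1 <= 335 and 0 <= nid2 <= 2
    return 3 * nid1 + nid2

# A UE that reads N_ID(2) = 1 from the PSS and N_ID(1) = 100 from the SSS
# concludes it is camped on PCI 301 (of the 1008 possible NR PCIs).
print(nr_pci(100, 1))
```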
FIG.2Dillustrates an example of various UL channels within a subframe of a frame. The PUCCH may be located as indicated in one configuration. The PUCCH carries uplink control information (UCI), such as scheduling requests, a channel quality indicator (CQI), a precoding matrix indicator (PMI), a rank indicator (RI), and hybrid automatic repeat request (HARQ) acknowledgment (ACK) (HARQ-ACK) information (ACK/negative ACK (NACK)) feedback. The PUSCH carries data, and may additionally be used to carry a buffer status report (BSR), a power headroom report (PHR), and/or UCI. FIG.3is a block diagram of a base station310in communication with a UE350in an access network. In the DL, IP packets from the EPC160may be provided to a controller/processor375. The controller/processor375implements layer 3 and layer 2 functionality. Layer 3 includes a radio resource control (RRC) layer, and layer 2 includes a service data adaptation protocol (SDAP) layer, a packet data convergence protocol (PDCP) layer, a radio link control (RLC) layer, and a medium access control (MAC) layer. The controller/processor375provides RRC layer functionality associated with broadcasting of system information (e.g., MIB, SIBs), RRC connection control (e.g., RRC connection paging, RRC connection establishment, RRC connection modification, and RRC connection release), inter radio access technology (RAT) mobility, and measurement configuration for UE measurement reporting; PDCP layer functionality associated with header compression/decompression, security (ciphering, deciphering, integrity protection, integrity verification), and handover support functions; RLC layer functionality associated with the transfer of upper layer packet data units (PDUs), error correction through ARQ, concatenation, segmentation, and reassembly of RLC service data units (SDUs), re-segmentation of RLC data PDUs, and reordering of RLC data PDUs; and MAC layer functionality associated with mapping between logical channels and transport channels, multiplexing of MAC SDUs onto transport blocks (TBs), demultiplexing of MAC SDUs from TBs, scheduling information reporting, error correction through HARQ, priority handling, and logical channel prioritization. The transmit (TX) processor (e.g., a TX processor316) and the receive (RX) processor (e.g., an RX processor370) implement layer 1 functionality associated with various signal processing functions. Layer 1, which includes a physical (PHY) layer, may include error detection on the transport channels, forward error correction (FEC) coding/decoding of the transport channels, interleaving, rate matching, mapping onto physical channels, modulation/demodulation of physical channels, and MIMO antenna processing. The TX processor316handles mapping to signal constellations based on various modulation schemes (e.g., binary phase-shift keying (BPSK), quadrature phase-shift keying (QPSK), M-phase-shift keying (M-PSK), M-quadrature amplitude modulation (M-QAM)). The coded and modulated symbols may then be split into parallel streams. Each stream may then be mapped to an OFDM subcarrier, multiplexed with a reference signal (e.g., pilot) in the time and/or frequency domain, and then combined together using an Inverse Fast Fourier Transform (IFFT) to produce a physical channel carrying a time domain OFDM symbol stream. The OFDM stream is spatially precoded to produce multiple spatial streams. Channel estimates from a channel estimator374may be used to determine the coding and modulation scheme, as well as for spatial processing. 
The channel estimate may be derived from a reference signal and/or channel condition feedback transmitted by the UE350. Each spatial stream may then be provided to a different antenna320via a separate transmitter318TX. Each transmitter318TX may modulate an RF carrier with a respective spatial stream for transmission. At the UE350, each receiver354RX receives a signal through its respective antenna352. Each receiver354RX recovers information modulated onto an RF carrier and provides the information to an RX processor356. A TX processor368and the RX processor356implement layer 1 functionality associated with various signal processing functions. The RX processor356may perform spatial processing on the information to recover any spatial streams destined for the UE350. If multiple spatial streams are destined for the UE350, they may be combined by the RX processor356into a single OFDM symbol stream. The RX processor356then converts the OFDM symbol stream from the time-domain to the frequency domain using a Fast Fourier Transform (FFT). The frequency domain signal comprises a separate OFDM symbol stream for each subcarrier of the OFDM signal. The symbols on each subcarrier, and the reference signal, are recovered and demodulated by determining the most likely signal constellation points transmitted by the base station310. These soft decisions may be based on channel estimates computed by a channel estimator358. The soft decisions are then decoded and deinterleaved to recover the data and control signals that were originally transmitted by the base station310on the physical channel. The data and control signals are then provided to a controller/processor359, which implements layer 3 and layer 2 functionality. The controller/processor359can be associated with a memory360that stores program codes and data. The memory360may be referred to as a computer-readable medium. In the UL, the controller/processor359provides demultiplexing between transport and logical channels, packet reassembly, deciphering, header decompression, and control signal processing to recover IP packets from the EPC160. The controller/processor359is also responsible for error detection using an ACK and/or NACK protocol to support HARQ operations. Similar to the functionality described in connection with the DL transmission by the base station310, the controller/processor359provides RRC layer functionality associated with system information (e.g., MIB, SIBs) acquisition, RRC connections, and measurement reporting; PDCP layer functionality associated with header compression/decompression, and security (ciphering, deciphering, integrity protection, integrity verification); RLC layer functionality associated with the transfer of upper layer PDUs, error correction through ARQ, concatenation, segmentation, and reassembly of RLC SDUs, re-segmentation of RLC data PDUs, and reordering of RLC data PDUs; and MAC layer functionality associated with mapping between logical channels and transport channels, multiplexing of MAC SDUs onto TBs, demultiplexing of MAC SDUs from TBs, scheduling information reporting, error correction through HARQ, priority handling, and logical channel prioritization. Channel estimates derived by the channel estimator358from a reference signal or feedback transmitted by the base station310may be used by the TX processor368to select the appropriate coding and modulation schemes, and to facilitate spatial processing. 
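The receive-side counterpart, again as a non-limiting sketch, inverts that step with an FFT and a nearest-constellation-point decision; a practical receiver would instead retain soft metrics for the decoder and would equalize using the channel estimates discussed above:

```python
# Minimal sketch of the time-to-frequency OFDM step with hard decisions
# (assumptions: ideal channel, the same hypothetical 64-subcarrier QPSK
# signal as the transmitter sketch above; no equalization, no soft decoding).
import numpy as np

QPSK = np.array([1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j]) / np.sqrt(2)

def ofdm_demodulate(waveform: np.ndarray) -> np.ndarray:
    """FFT each received OFDM symbol back to subcarriers, then pick the
    closest constellation point per subcarrier and unpack its two bits."""
    grid = np.fft.fft(waveform, axis=1)                   # time -> frequency
    idx = np.abs(grid[..., None] - QPSK).argmin(axis=-1)  # nearest point
    return np.stack([idx // 2, idx % 2], axis=-1).reshape(-1)

# Round trip with the transmitter sketch: over an ideal channel,
# ofdm_demodulate(tx_waveform) returns tx_bits exactly.
```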
The spatial streams generated by the TX processor368may be provided to different antennas352via separate transmitters354TX. Each transmitter354TX may modulate an RF carrier with a respective spatial stream for transmission. The UL transmission is processed at the base station310in a manner similar to that described in connection with the receiver function at the UE350. Each receiver318RX receives a signal through its respective antenna320. Each receiver318RX recovers information modulated onto an RF carrier and provides the information to the RX processor370. The controller/processor375can be associated with a memory376that stores program codes and data. The memory376may be referred to as a computer-readable medium. In the UL, the controller/processor375provides demultiplexing between transport and logical channels, packet reassembly, deciphering, header decompression, and control signal processing to recover IP packets from the UE350. IP packets from the controller/processor375may be provided to the EPC160. The controller/processor375is also responsible for error detection using an ACK and/or NACK protocol to support HARQ operations. At least one of the TX processor368, the RX processor356, and the controller/processor359may be configured to perform aspects in connection with the example UE cross-carrier beam association handling component198ofFIG.1. At least one of the TX processor316, the RX processor370, and the controller/processor375may be configured to perform aspects in connection with the example base station cross-carrier beam association handling component199ofFIG.1. FIG.4Ais a diagram illustrating an example of cross-carrier communication400between a base station402and a UE404, in accordance with one or more aspects of this disclosure.FIG.4Bis a diagram illustrating another example of cross-carrier communication450between a first base station402a, a second base station402b, and a UE404, in accordance with one or more aspects of this disclosure. Aspects of the base station402may be implemented by the base station102/180and/or the base station310. Aspects of the UE404may be implemented by the UE104and/or the UE350. In the illustrated example ofFIG.4A, the UE404and the base station402are in communication. For example, the base station402and the UE404may transmit and/or receive messages through a first carrier410(“Carrier A”). The base station402and the UE404may additionally or alternatively transmit and/or receive messages through a second carrier420(“Carrier B”). Although not shown in the example ofFIG.4A, it may be appreciated that each of the respective carriers410,420may be associated with a set of beams. In some examples, the cross-carrier communication400ofFIG.4Amay correspond to examples of carrier aggregation. In the illustrated example ofFIG.4B, the UE404may be in communication with the first base station402athrough a first carrier (e.g., the first carrier410). The UE404may additionally or alternatively be in communication with the second base station402bthrough a second carrier (e.g., the second carrier420). In some examples, the cross-carrier communication450ofFIG.4Bmay correspond to examples of dual connectivity. In some examples, a carrier (e.g., the carriers410,420) may be associated with an operating band and a frequency range. For example, a first frequency range (FR1) may include frequencies between 410 MHz and 7125 MHz, and a second frequency range (FR2) may include frequencies between 24250 MHz and 52600 MHz.
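A trivial sketch of that classification, using the frequency bounds just quoted (whether the bounds are inclusive is an assumption of the sketch):

```python
# Minimal sketch: classify a carrier frequency into FR1 or FR2 using the
# ranges quoted above (410-7125 MHz and 24250-52600 MHz, assumed inclusive).
FR1_MHZ = (410, 7125)
FR2_MHZ = (24250, 52600)

def frequency_range(freq_mhz: float) -> str:
    if FR1_MHZ[0] <= freq_mhz <= FR1_MHZ[1]:
        return "FR1"
    if FR2_MHZ[0] <= freq_mhz <= FR2_MHZ[1]:
        return "FR2"
    return "outside FR1/FR2"

print(frequency_range(3500))    # "FR1" (a mid-band carrier)
print(frequency_range(28000))   # "FR2" (a millimeter-wave carrier)
```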
The respective frequency ranges may be further divided into operating bands that define a subset of frequencies. For example, a first operating band within FR1 may include frequencies between 1920 MHz and 1980 MHz for uplink transmissions and frequencies between 2110 MHz and 2170 MHz for downlink transmissions. A second operating band within FR1 may include frequencies between 1850 MHz and 1910 MHz for uplink transmissions and frequencies between 1930 MHz and 1990 MHz for downlink transmissions. The second frequency range (FR2) may also include operating bands. For example, a first operating band within FR2 may include frequencies between 26500 MHz and 29500 MHz for uplink transmissions and downlink transmissions, and a second operating band within FR2 may include frequencies between 24500 MHz and 27500 MHz for uplink transmissions and downlink transmissions. Although the following description provides examples of SSB bursts associated with the cross-carrier communication450ofFIG.4B, it may be appreciated that the concepts may be applicable to the cross-carrier communication400ofFIG.4A. An SSB may be used during a cell search procedure. For example, the UE404may search for synchronization signals when scanning for a cell to camp on. An SSB burst may be used to facilitate beamforming and/or beam sweeping. An SSB burst may include one or more SSBs and may be contained within a time window (e.g., 5 milliseconds). In some examples, the quantity and/or shape of a beam transmitting an SSB within an SSB burst may vary based on the operating band associated with the carrier. In the illustrated example ofFIG.4B, a first SSB burst412in the first carrier410includes four SSBs (or beams) A1, A2, A3, and A4. A second SSB burst422in the second carrier420includes eight SSBs (or beams) B1, B2, B3, B4, B5, B6, B7, and B8. The beams of a carrier may be associated with one or more characteristics (sometimes referred to as “beam characteristics” or “radio channel characteristics”). For example, a beam may be associated with a Doppler shift, a Doppler spread, an average delay, a delay spread, and/or spatial information. While the beams associated with the first carrier410and the second carrier420may be associated with different operating bands and/or frequency ranges, in some examples, one or more beam(s) of the first carrier410may be related to one or more beam(s) of the second carrier420. For example, in the illustrated example ofFIG.4B, the beam A1of the first carrier410has similar spatial characteristics to the beams B1and B2of the second carrier420, the beam A2of the first carrier410has similar spatial characteristics to the beams B3and B4of the second carrier420, the beam A3of the first carrier410has similar spatial characteristics to the beams B5and B6of the second carrier420, and the beam A4of the first carrier410has similar spatial characteristics to the beams B7and B8of the second carrier420. Example techniques disclosed herein enable a base station to indicate a cross-carrier beam association between a first set of beams of a first carrier and a second set of beams of a second carrier. The cross-carrier beam association may indicate cross-carrier beams that share similar characteristics. For example, the cross-carrier beam association may indicate that the beam A1of the first carrier410has similar spatial characteristics to the beams B1and B2of the second carrier420. 
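By way of a non-limiting sketch, the one-to-many association of FIG. 4B (four beams A1 through A4 and eight beams B1 through B8) can be expressed as simple index arithmetic; the contiguous grouping of M narrow beams under each wide beam is an assumption of the sketch, the disclosure only requiring that associated beams share similar characteristics:

```python
# Minimal sketch: derive the carrier-A-to-carrier-B beam mapping when each
# of N carrier-A beams is associated with M contiguous carrier-B beams
# (an illustrative grouping rule, not mandated by the disclosure).
def build_association(n: int, m: int) -> dict:
    return {f"A{i + 1}": [f"B{i * m + j + 1}" for j in range(m)]
            for i in range(n)}

print(build_association(n=4, m=2))
# {'A1': ['B1', 'B2'], 'A2': ['B3', 'B4'], 'A3': ['B5', 'B6'], 'A4': ['B7', 'B8']}
```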
In some examples, a UE may use the cross-carrier beam association to assist with channel estimation, frequency offset estimations, and/or synchronization procedures. For example, if the beams A1, B1, and B2are indicated as being cross-carrier associated, then the UE may measure a characteristic of the beam A1and apply the value measured for the beam A1of the first carrier410to the beams B1and B2of the second carrier420. FIG.5Ais a diagram illustrating cross-carrier beam association500for intra-band carriers or inter-band carriers within a same frequency range, in accordance with one or more aspects of this disclosure. In the illustrated example ofFIG.5A, a first set of SSBs512are transmitted in a first carrier510(“Carrier A”) and a second set of SSBs522are transmitted in a second carrier520(“Carrier B”). In some examples, the first carrier510may be referred to as a primary cell (PCell) and the second carrier520may be referred to as a secondary cell (SCell). Aspects of the first carrier510may be implemented by the first carrier410ofFIGS.4A and4B. Aspects of the second carrier520may be implemented by the second carrier420ofFIGS.4A and4B. Aspects of the first set of SSBs512may be implemented by the first SSB burst412ofFIG.4B. Aspects of the second set of SSBs522may be implemented by the second SSB burst422ofFIG.4B. In some examples, the first carrier510and the second carrier520ofFIG.5Amay be intra-band carriers (e.g., carriers associated with a same operating band). In some examples, the first carrier510and the second carrier520ofFIG.5Amay be inter-band carriers within a same frequency range (e.g., carriers associated with different operating bands within a same frequency range). For example, the first carrier510may be associated with a first operating band within a first frequency range (FR1) and the second carrier520may be associated with a second operating band within the first frequency range (FR1). As shown in the example ofFIG.5A, the SSBs of the respective sets of SSBs512,522are transmitted on respective beams514,524. For example, with respect to the first set of SSBs512, an SSB A1is transmitted on a beam A1, an SSB A2is transmitted on a beam A2, an SSB A3is transmitted on a beam A3, and an SSB A4is transmitted on a beam A4. With respect to the second set of SSBs522, an SSB B1is transmitted on a beam B1, an SSB B2is transmitted on a beam B2, an SSB B3is transmitted on a beam B3, and an SSB B4is transmitted on a beam B4. In the illustrated example ofFIG.5A, beams of the first set of beams514may have one or more characteristics that are similar to beams of the second set of beams524. For example, the beams A1, A2, A3, A4of the first set of beams514may be associated with a first beam width and the beams B1, B2, B3, B4of the second set of beams524may be associated with a second beam width that is the same as the first beam width. In some such examples, it may be appreciated that the beams of the second set of beams524may overlap with the beams of the first set of beams514on the respective same beam widths. For example, the beam A1of the first set of beams514may have a same beam width as the beam B1of the second set of beams524. Accordingly, the beam A1of the first set of beams514and the beam B1of the second set of beams524may be referred to as being cross-carrier associated. For example, a UE that measures a characteristic of the beam A1may apply the measured value to the beam B1. 
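A sketch of that reuse, assuming the measured value is copied verbatim to the associated beam(s) (the disclosure says only that the UE may "apply" the value, so the copy is an illustrative choice):

```python
# Minimal sketch: propagate a characteristic measured on carrier-A beams to
# their associated carrier-B beams so the UE need not measure carrier B.
association = {"A1": ["B1"], "A2": ["B2"], "A3": ["B3"], "A4": ["B4"]}  # FIG. 5A

def apply_cross_carrier(measured: dict, association: dict) -> dict:
    return {beam_b: value
            for beam_a, value in measured.items()
            for beam_b in association.get(beam_a, [])}

delay_spread_ns = {"A1": 90.0}                 # measured on carrier A only
print(apply_cross_carrier(delay_spread_ns, association))   # {'B1': 90.0}
# The same code covers the one-to-many case of FIG. 5B: with
# association = {"A1": ["B1", "B2"], ...} it yields {'B1': 90.0, 'B2': 90.0}.
```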
In some examples, based on the beams that are determined to be cross-carrier associated, a base station may transmit a message identifying cross-carrier beam associations. For example, the base station may transmit a message identifying a first cross-carrier beam association530abetween the beam A1of the first set of beams514and the beam B1of the second set of beams524, a second cross-carrier beam association530bbetween the beam A2of the first set of beams514and the beam B2of the second set of beams524, a third cross-carrier beam association530cbetween the beam A3of the first set of beams514and the beam B3of the second set of beams524, and a fourth cross-carrier beam association530dbetween the beam A4of the first set of beams514and the beam B4of the second set of beams524. FIG.5Bis a diagram illustrating cross-carrier beam association550for inter-band carriers across different frequency ranges, in accordance with one or more aspects of this disclosure. In the illustrated example ofFIG.5B, a first set of SSBs562are transmitted in a first carrier560(“Carrier A”) and a second set of SSBs572are transmitted in a second carrier570(“Carrier B”). In some examples, the first carrier560may be referred to as a primary cell (PCell) and the second carrier570may be referred to as a secondary cell (SCell). Aspects of the first carrier560may be implemented by the first carrier410ofFIGS.4A and4Band/or the first carrier510ofFIG.5A. Aspects of the second carrier570may be implemented by the second carrier420ofFIGS.4A and4Band/or the second carrier520ofFIG.5A. Aspects of the first set of SSBs562may be implemented by the first SSB burst412ofFIG.4Band/or the first set of SSBs512ofFIG.5A. Aspects of the second set of SSBs572may be implemented by the second SSB burst422ofFIG.4Band/or the second set of SSBs522ofFIG.5A. In the illustrated example ofFIG.5B, the first carrier560and the second carrier570are inter-band carriers across different frequency ranges. For example, the first carrier560may be associated with a first operating band within a first frequency range (FR1) and the second carrier570may be associated with a second operating band within a second frequency range (FR2). As shown in the example ofFIG.5B, the SSBs of the respective sets of SSBs562,572are transmitted on respective beams564,574. For example, with respect to the first set of SSBs562, an SSB A1is transmitted on a beam A1, an SSB A2is transmitted on a beam A2, an SSB A3is transmitted on a beam A3, and an SSB A4is transmitted on a beam A4. With respect to the second set of SSBs572, an SSB B1is transmitted on a beam B1, an SSB B2is transmitted on a beam B2, an SSB B3is transmitted on a beam B3, an SSB B4is transmitted on a beam B4, an SSB B5is transmitted on a beam B5, an SSB B6is transmitted on a beam B6, an SSB B7is transmitted on a beam B7, and an SSB B8is transmitted on a beam B8. Similar to the example ofFIG.5A, in the illustrated example ofFIG.5B, the beams of the first set of beams564may have one or more characteristics that are similar to the beams of the second set of beams574. For example, the beams A1, A2, A3, A4of the first set of beams564may be associated with a first beam width and the beams B1, B2, B3, B4, B5, B6, B7, B8of the second set of beams574may be associated with a second beam width that is different than the first beam width.
For example, beams within FR1 may be “wide beams” and beams within FR2 may be “narrow beams.” In some such examples, it may be appreciated that the beams of the second set of beams574may overlap with the beams of the first set of beams564on the respective beam widths. For example, the beam A1of the first set of beams564may have a same beam width as the combined beam widths of the beams B1and B2of the second set of beams574. Accordingly, the beam A1of the first set of beams564may be referred to as being cross-carrier associated with the beams B1and B2of the second set of beams574. For example, a UE that measures a characteristic of the beam A1may apply the measured value to the beams B1and B2. In some examples, based on the beams that are determined to be cross-carrier associated, a base station may transmit a message identifying cross-carrier beam associations. For example, the base station may transmit a message identifying a first cross-carrier beam association580abetween the beam A1of the first set of beams564and the beams B1and B2of the second set of beams574, a second cross-carrier beam association580bbetween the beam A2of the first set of beams564and the beams B3and B4of the second set of beams574, a third cross-carrier beam association580cbetween the beam A3of the first set of beams564and the beams B5and B6of the second set of beams574, and a fourth cross-carrier beam association580dbetween the beam A4of the first set of beams564and the beams B7and B8of the second set of beams574. Although not shown in the illustrated example ofFIGS.5A and5B, it may be appreciated that if an SSB in one carrier is indicated as being a single frequency network SSB, the associated SSBs in the other carrier(s) also operate as a single frequency network. While the example ofFIG.5Bprovides examples in which two beams of the second set of beams574overlap with one beam of the first set of beams564, it may be appreciated that in other examples, any suitable quantity of beams of the second set of beams574may overlap with one or more beams of the first set of beams564. For example, if the first set of beams includes N beams, the second set of beams may include N*M beams, where the value “M” is an integer. Additionally, in some such examples, the second beam width may be approximately equal to 1/M the first beam width. While the examples ofFIGS.5A and5Bprovide examples of cross-carrier beam associations based on spatial information, it may be appreciated that in additional or alternative examples, the cross-carrier beam associations may be based on other suitable characteristics. For example, a first type of cross-carrier beam association may be based on Doppler shift, Doppler spread, average delay, and delay spread, a second type of cross-carrier beam association may be based on Doppler shift and Doppler spread, a third type of cross-carrier beam association may be based on Doppler shift and average delay, and a fourth type of cross-carrier beam association may be based on spatial characteristics. Furthermore, in some examples, the cross-carrier beam association between the beams of the first carrier and the beams of the second carrier may be determined based on a quasi co-location (QCL) relationship of the first set of beams and the second set of beams. FIG.6illustrates an example communication flow600between a base station602and a UE604, in accordance with one or more techniques disclosed herein.
Aspects of the base station602may be implemented by the base station102, the base station180, the base station310, and/or the base stations402,402a,402b. Aspects of the UE604may be implemented by the UE104, the UE350, and/or the UE404. Although not shown in the illustrated example ofFIG.6, it may be appreciated that in additional or alternative examples, the base station602may be in communication with one or more other base stations or UEs, and/or the UE604may be in communication with one or more other base stations or UEs. In the illustrated example ofFIG.6, the UE604may transmit a UE capability610that is received by the base station602. The UE capability610may include information regarding UE capabilities of the UE604. For example, the UE capability610may indicate whether the UE604supports cross-carrier beam association. At620, the base station602determines a cross-carrier beam association associated with a first carrier and a second carrier. In some examples, the first carrier and the second carrier may be intra-band carriers. For example, the first carrier and the second carrier may be associated with a same operating band within a same frequency range. In some examples, the first carrier and the second carrier may be inter-band carriers within a same frequency range. For example, the first carrier and the second carrier may be associated with different operating bands within a first frequency range (FR1). In some examples, the first carrier and the second carrier may be inter-band carriers across different frequency ranges. For example, the first carrier may be associated with a first operating band within the first frequency range (FR1) and the second carrier may be associated with a second operating band within the second frequency range (FR2). In some examples, the cross-carrier beam associations may be based on spatial information, as described above in connection withFIGS.5A and5B. In some examples, the cross-carrier beam associations may be based on one or more beam characteristics, such as Doppler shift, Doppler spread, average delay, delay spread, and/or spatial characteristics. For example, a first type of cross-carrier beam association may be based on Doppler shift, Doppler spread, average delay, and delay spread, a second type of cross-carrier beam association may be based on Doppler shift and Doppler spread, a third type of cross-carrier beam association may be based on Doppler shift and average delay, and a fourth type of cross-carrier beam association may be based on spatial characteristics. Furthermore, in some examples, the cross-carrier beam association between the first carrier and the second carrier may be determined based on a QCL relationship of the first set of beams and the second set of beams. The base station602transmits a cross-carrier beam associations indication630that is received by the UE604. The cross-carrier beam associations indication630may include information identifying one or more beams of a first carrier that are cross-carrier associated with one or more beams of a second carrier. For example, the cross-carrier beam associations indication630may include, or otherwise indicate, the cross-carrier beam associations530a-530dofFIG.5Aand/or the cross-carrier beam associations580a-580dofFIG.5B. The base station602may transmit the cross-carrier beam associations indication630via system information and/or RRC signaling.
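The disclosure does not define a concrete message format for the indication 630; purely for illustration, its content could be held in a structure such as the following, where the field names and the per-association "basis" tag (mirroring the Doppler/delay/spatial association types above) are assumptions of the sketch:

```python
# Hypothetical in-memory shape for the cross-carrier beam associations
# indication (illustrative only; not a 3GPP or disclosed message format).
from dataclasses import dataclass, field
from typing import List

@dataclass
class BeamAssociation:
    carrier_a_beam: str          # e.g., "A1"
    carrier_b_beams: List[str]   # e.g., ["B1", "B2"]
    basis: str = "spatial"       # or "doppler", "delay", ...

@dataclass
class AssociationsIndication:
    carrier_a: str
    carrier_b: str
    associations: List[BeamAssociation] = field(default_factory=list)

indication = AssociationsIndication(
    carrier_a="Carrier A", carrier_b="Carrier B",
    associations=[BeamAssociation("A1", ["B1", "B2"]),
                  BeamAssociation("A2", ["B3", "B4"])],
)
```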
In some examples, if the UE capability610indicates that the UE604supports cross-carrier beam association, the base station602may transmit the cross-carrier beam associations indication630in an RRC message. At640, the UE604determines an association between a first set of beams of the first carrier and a second set of beams of the second carrier based on the cross-carrier beam associations indication630. In some examples, the UE604may determine, based on the cross-carrier beam associations indication630, which one or more beams of the first carrier are cross-carrier associated with one or more beams of the second carrier. For example, and with respect to the example ofFIG.5A, the UE604may determine, based on the first cross-carrier beam association530aindicated by the cross-carrier beam associations indication630, that the beam A1of the first carrier510is cross-carrier associated with the beam B1of the second carrier520, and, thus, that the respective beams A1, B1have one or more similar characteristics. With respect to the illustrated example ofFIG.5B, the UE604may determine, based on the first cross-carrier beam association580aindicated by the cross-carrier beam associations indication630, that the beam A1of the first carrier560is cross-carrier associated with the beams B1and B2of the second carrier570, and, thus, that the respective beams A1, B1, B2have one or more similar characteristics. The base station602may transmit transmissions using a first set of beams650that are received by the UE604. The base station602may additionally or alternatively transmit transmissions using a second set of beams652that are received by the UE604. The first set of beams650may be of the first carrier (e.g., the first set of beams514of the first carrier510ofFIG.5Aand/or the first set of beams564of the first carrier560ofFIG.5B). The second set of beams652may be of the second carrier (e.g., the second set of beams524of the second carrier520ofFIG.5Aand/or the second set of beams574of the second carrier570ofFIG.5B). In some examples, the UE604may use the cross-carrier beam associations to improve performance at the UE. For example, the UE604may use the cross-carrier beam associations to help with channel estimation, frequency offset estimation, and/or synchronization procedures. In some examples, the cross-carrier beam associations indication630may indicate that an SSB of the first set of beams is associated with a random access channel (RACH) occasion (RO). In some examples, measuring SSBs on a first carrier (e.g., in the first frequency range (FR1)) may be more power efficient than measuring SSBs on a second carrier (e.g., in the second frequency range (FR2)). However, it may be more beneficial for the UE to use the second carrier to transmit a RACH message. At660, the UE604may determine a subset of beams of the second set of beams to use for transmitting a RACH message. For example, while operating in an idle mode or an inactive mode, the UE604may measure SSBs received on a first carrier and transmit a RACH message on a second carrier. For example, and referring to the illustrated example ofFIG.5B, the UE604may measure the first set of SSBs562received on the first set of beams564of the first carrier560. The UE604may select a first subset of beams of the first set of beams564of the first carrier560(e.g., the beams A2and A3) based on the measurements of the first set of SSBs562.
The UE604may then determine a second subset of beams of the second set of beams574of the second carrier570that correspond to the selected first subset of beams of the first set of beams564(e.g., the beams A2and A3) based on the cross-carrier beam associations. For example, the UE604may determine, based on the second cross-carrier beam association580b, that the beams B3and B4of the second set of beams574correspond to the beam A2of the first set of beams564. The UE604may also determine, based on the third cross-carrier beam association580c, that the beams B5and B6of the second set of beams574correspond to the beam A3of the first set of beams564. The UE604may then transmit a RACH message680using the second subset of beams of the second set of beams574(e.g., using the beams B3, B4, B5, and B6of the second set of beams574). In some examples, the UE604may further refine the beams used for transmitting the RACH message680. At670, the UE604may refine the subset of beams of the second set of beams. For example, the UE604may measure SSBs received on the second subset of beams of the second set of beams574of the second carrier570. For example, the UE604may measure SSBs B3, B4, B5, and B6received on the beams B3, B4, B5, and B6of the second carrier570. The UE604may then refine the beams in the second subset of beams of the second set of beams574based on the measured SSBs received through the respective beams. For example, based on the measured SSBs B3, B4, B5, and B6, the UE604may determine to transmit the RACH message680through the beams B5and B6. In some such examples, the UE604may perform the refining of the beams used for transmitting the RACH message680without measuring SSBs received on all of the beams of the second set of beams574of the second carrier570. It may be appreciated that by employing one or more of the techniques disclosed herein, the UE604may conserve resources (e.g., processing resources) by applying one or more characteristics associated with a beam of a first carrier to one or more beams of a second carrier. In some such examples, the UE604may avoid performing measurements on the one or more beams of the second carrier. FIG.7is a flowchart700of a method of wireless communication. The method may be performed by a UE (e.g., the UE104, the UE350, and/or an apparatus802ofFIG.8). Optional aspects are illustrated with a dashed line. The method may enable a UE to determine cross-carrier beams that may share similar characteristics, which may assist the UE with channel estimation, frequency offset estimations, and/or synchronization procedures. At702, the UE may transmit, to a base station, a capability indicating support of cross-carrier beam association, as described in connection with the UE capability610ofFIG.6. For example,702may be performed by a capabilities component840of the apparatus802ofFIG.8. At704, the UE receives, from the base station, an indication of a cross-carrier beam association associated with a first carrier and a second carrier different than the first carrier, as described in connection with the cross-carrier beam associations indication630ofFIG.6. For example,704may be performed by an associations reception component842of the apparatus802ofFIG.8. In some examples, the UE may receive the indication via SI. In some examples, the UE may receive the indication via RRC signaling. In some examples, if the UE transmits a capability indicating support of cross-carrier beam association (e.g., at702), the UE may receive the indication in an RRC message. 
In some examples, the first carrier may be different than the second carrier. In some examples, the first carrier and the second carrier may be intra-band carriers, as described above in connection withFIG.5A. For example, the first carrier and the second carrier may be associated with a same operating band of a frequency range. In some examples, the first carrier and the second carrier may be inter-band carriers. For example, the first carrier may be associated with a first operating band of a first example frequency range and the second carrier may be associated with a second operating band of a second example frequency range. In some examples, the first frequency range may be the same as the second frequency range (e.g., the first operating band and the second operating band are within a same frequency range, such as FR1 or FR2), as described above in connection withFIG.5A. In some examples, the first frequency range may be different than the second frequency range (e.g., the first operating band is within the first frequency range (FR1) and the second operating band is within the second frequency range (FR2)), as described above in connection withFIG.5B. At706, the UE determines an association between a first set of beams of the first carrier and a second set of beams of the second carrier based on the indication of the cross-carrier beam association, as described in connection with640ofFIG.6. For example,706may be performed by an associations handler component844of the apparatus802ofFIG.8. In some examples, the first set of beams may include a first quantity of beams and the second set of beams may include a second quantity of beams that is the same as the first quantity of beams, as described above in connection withFIG.5A. In some examples, the first set of beams may include a first quantity of beams and the second set of beams may include a second quantity of beams that is different than the first quantity of beams, as described above in connection withFIG.5B. In some examples, the first set of beams comprises N beams and the second set of beams comprises N*M beams, as described above in connection withFIG.5B. In some examples, the cross-carrier beam association between the first carrier and the second carrier may be determined based on a quasi co-location (QCL) relationship of the first set of beams and the second set of beams, as described above in connection withFIGS.5A and5B. In some examples, the first set of beams may each be associated with a first beam width, the second set of beams may each be associated with a second beam width, and the second set of beams based on the second beam width may overlap the first set of beams based on the first beam width, as described above in connection withFIGS.5A and5B. In some examples, the second beam width may be the same as the first beam width, as described above in connection withFIG.5A. In some examples, the second beam width may be approximately equal to 1/M the first beam width, as described above in connection withFIG.5B. At708, the UE receives on the first set of beams and the second set of beams based on the determined cross-carrier beam association, as described in connection with the transmissions using the first set of beams650and the transmissions using the second set of beams652ofFIG.6. For example,708may be performed by a beams reception component846of the apparatus802ofFIG.8. In some examples, the indication (e.g., received at704) may indicate that an SSB of the first set of beams is associated with a RACH occasion (RO).
In some such examples, at710, the UE may select a first subset of beams of the first set of beams based on measurements of an SSB received on the first carrier, as described above in connection with660ofFIG.6. For example,710may be performed by a beam selection component848of the apparatus802ofFIG.8. For example, and with respect to the example ofFIG.5B, the UE may select a first subset of beams A2and A3of the first set of beams564of the first carrier560. The UE may select the beams of the first subset of beams based on, for example, measurements associated with the SSBs of the first carrier. At712, the UE may determine a second subset of beams of the second set of beams where the second subset of beams correspond to the first subset of beams based on the cross-carrier beam association, as described in connection with660ofFIG.6. For example,712may be performed by the beam selection component848of the apparatus802ofFIG.8. For example, and referring to the example ofFIG.5B, the UE may select a second subset of beams B3, B4, B5, and B6of the second set of beams574of the second carrier570. The UE may use the cross-carrier beam associations to determine the second subset of beams of the second set of beams. For example, the UE may use the second cross-carrier beam association580bto map the beam A2of the first set of beams564to the beams B3and B4of the second set of beams574. The UE may use the third cross-carrier beam association580cto map the beam A3of the first set of beams564to the beams B5and B6of the second set of beams574. At718, the UE may transmit, to the base station, a RACH message through the second subset of beams at the RO, as described in connection with the RACH message680ofFIG.6. For example,718may be performed by a RACH transmission component850of the apparatus802ofFIG.8. For example, the UE may use the second subset of beams B3, B4, B5, and B6(e.g., determined at712) to transmit the RACH message. In some examples, the UE may employ techniques for refining the one or more beams used for transmitting the RACH message. For example, at714, the UE may measure SSBs received through the second subset of beams, as described above in connection with670ofFIG.6. In some such examples, when the UE is performing the refining of the one or more beams used for transmitting the RACH message, the UE may avoid measuring SSBs received through the second set of beams that are not included in the second subset of beams. For example, in the example ofFIG.5B, the UE may measure the SSBs received through the beams B3, B4, B5, and B6, while avoiding (or foregoing) the measuring of SSBs received through the beams B1, B2, B7, and B8. At716, the UE may refine beams in the second subset of beams based on the measured SSBs received through the second set of beams, as described above in connection with670ofFIG.6. For example,716may be performed by a beam refinement component854of the apparatus802ofFIG.8. For example, based on the measurements of the SSBs received through the beams B3, B4, B5, and B6, the UE may determine that the beams B5and B6are more suitable for transmitting the RACH message. The UE may then, at718, transmit a RACH message through the refined beams in the second subset of beams based on the measured SSBs, as described above in connection with the RACH message680ofFIG.6. For example,718may be performed by the RACH transmission component850of the apparatus802ofFIG.8. 
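Putting blocks 710 through 718 together, a non-limiting sketch of the select-map-refine flow follows; the RSRP values, the top-k selection rule, and the association table are invented for illustration:

```python
# Minimal sketch of blocks 710-718: select carrier-A beams from carrier-A
# SSB measurements, map them to carrier-B candidates via the indicated
# association, refine among the candidates, then RACH on the survivors.
association = {"A1": ["B1", "B2"], "A2": ["B3", "B4"],
               "A3": ["B5", "B6"], "A4": ["B7", "B8"]}

def top_k(rsrp: dict, k: int) -> list:
    return sorted(rsrp, key=rsrp.get, reverse=True)[:k]

# 710: measure SSBs on carrier A (cheaper, e.g., FR1) and pick the best.
rsrp_a = {"A1": -95.0, "A2": -82.0, "A3": -84.0, "A4": -101.0}
subset_a = top_k(rsrp_a, k=2)                              # ['A2', 'A3']

# 712: map to the carrier-B candidates via the cross-carrier association.
subset_b = [b for a in subset_a for b in association[a]]   # B3, B4, B5, B6

# 714/716: measure SSBs only on the candidates (never the full carrier-B
# set), then keep the best beams for the RACH message.
rsrp_b = {"B3": -93.0, "B4": -96.0, "B5": -88.0, "B6": -86.0}
rach_beams = top_k(rsrp_b, k=2)                            # ['B6', 'B5']
# 718: transmit the RACH message through rach_beams at the indicated RO.
```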
Referring to the example ofFIG.5B, the UE may determine to transmit the RACH message through the beams B5and B6based on the measurements of the SSBs received through the beams B3, B4, B5, and B6. FIG.8is a diagram800illustrating an example of a hardware implementation for an apparatus802. The apparatus802is a UE and includes a cellular baseband processor804(also referred to as a modem) coupled to a cellular RF transceiver822and one or more subscriber identity modules (SIM) cards820, an application processor806coupled to a secure digital (SD) card808and a screen810, a Bluetooth module812, a wireless local area network (WLAN) module814, a Global Positioning System (GPS) module816, and a power supply818. The cellular baseband processor804communicates through the cellular RF transceiver822with the UE104and/or base station102/180. The cellular baseband processor804may include a computer-readable medium/memory. The computer-readable medium/memory may be non-transitory. The cellular baseband processor804is responsible for general processing, including the execution of software stored on the computer-readable medium/memory. The software, when executed by the cellular baseband processor804, causes the cellular baseband processor804to perform the various functions described supra. The computer-readable medium/memory may also be used for storing data that is manipulated by the cellular baseband processor804when executing software. The cellular baseband processor804further includes a reception component830, a communication manager832, and a transmission component834. The communication manager832includes the one or more illustrated components. The components within the communication manager832may be stored in the computer-readable medium/memory and/or configured as hardware within the cellular baseband processor804. The cellular baseband processor804may be a component of the UE350and may include the memory360and/or at least one of the TX processor368, the RX processor356, and the controller/processor359. In one configuration, the apparatus802may be a modem chip and include just the baseband processor804, and in another configuration, the apparatus802may be the entire UE (e.g., see the UE350ofFIG.3) and include the additional modules of the apparatus802. The communication manager832includes a capabilities component840that is configured to transmit, to a base station, a capability indicating support of cross-carrier beam association, for example, as described in connection with702ofFIG.7. The communication manager832also includes an associations reception component842that is configured to receive, from the base station, an indication of a cross-carrier beam association associated with a first carrier and a second carrier different than the first carrier, for example, as described in connection with704ofFIG.7. The communication manager832also includes an associations handler component844that is configured to determine an association between a first set of beams of the first carrier and a second set of beams of the second carrier based on the indication of the cross-carrier beam association, for example, as described in connection with706ofFIG.7. The communication manager832also includes a beams reception component846that is configured to receive on the first set of beams and the second set of beams based on the determined cross-carrier beam association, for example, as described in connection with708ofFIG.7. 
The communication manager832also includes a beam selection component848that is configured to select a first subset of beams of the first set of beams based on measurements of an SSB received on the first carrier, for example, as described in connection with710ofFIG.7. The example beam selection component848may also be configured to determine a second subset of beams of the second set of beams where the second subset of beams correspond to the first subset of beams based on the cross-carrier beam association, for example, as described in connection with712ofFIG.7. The communication manager832also includes a RACH transmission component850that is configured to transmit, to the base station, a RACH message through the second subset of beams at the RO and/or transmit a RACH message through the refined beams in the second subset of beams based on the measured SSBs, for example, as described in connection with718ofFIG.7. The communication manager832also includes a measurement component852that is configured to measure SSBs received through the second subset of beams, for example, as described in connection with714ofFIG.7. The communication manager832also includes a beam refinement component854that is configured to refine beams in the second subset of beams based on the measured SSBs received through the second set of beams, for example, as described in connection with716ofFIG.7. The apparatus may include additional components that perform each of the blocks of the algorithm in the aforementioned flowchart ofFIG.7. As such, each block in the aforementioned flowchart ofFIG.7may be performed by a component and the apparatus may include one or more of those components. The components may be one or more hardware components specifically configured to carry out the stated processes/algorithm, implemented by a processor configured to perform the stated processes/algorithm, stored within a computer-readable medium for implementation by a processor, or some combination thereof. In one configuration, the apparatus802, and in particular the cellular baseband processor804, includes means for receiving, from a base station, an indication of a cross-carrier beam association associated with a first carrier and a second carrier different than the first carrier. The example apparatus802also includes means for determining an association between a first set of beams of the first carrier and a second set of beams of the second carrier based on the indication of the cross-carrier beam association. The example apparatus802also includes means for receiving on the first set of beams and the second set of beams based on the determined cross-carrier beam association. In another configuration, the example apparatus802may also include means for selecting a first subset of beams of the first set of beams based on measurements of an SSB received on the first carrier. The example apparatus802also includes means for determining a second subset of beams of the second set of beams, the second subset of beams corresponding to the first subset of beams based on the cross-carrier beam association. The example apparatus802also includes means for transmitting, to the base station, a RACH message through the second subset of beams at the RO. In another configuration, the example apparatus802may also include means for measuring SSBs received through the second subset of beams. The example apparatus802also includes means for refining beams in the second subset of beams based on the measured SSBs received through the second set of beams. 
In another configuration, the example apparatus802may also include means for receiving the indication via system information (SI) or radio resource control (RRC) signaling. In another configuration, the example apparatus802may also include means for transmitting, to the base station, a capability indicating support of cross-carrier beam association, and where the indication is received in a radio resource control (RRC) message. The aforementioned means may be one or more of the aforementioned components of the apparatus802configured to perform the functions recited by the aforementioned means. As described supra, the apparatus802may include the TX processor368, the RX processor356, and the controller/processor359. As such, in one configuration, the aforementioned means may be the TX processor368, the RX processor356, and the controller/processor359configured to perform the functions recited by the aforementioned means. FIG.9is a flowchart900of a method of wireless communication. The method may be performed by a base station (e.g., the base station102/180, the base station310, and/or an apparatus1002ofFIG.10). Optional aspects are illustrated with a dashed line. The method may enable a base station to determine cross-carrier beams that may share similar characteristics, which may assist a UE with channel estimation, frequency offset estimations, and/or synchronization procedures. At902, the base station determines a cross-carrier beam association between a first set of beams of a first carrier and a second set of beams of a second carrier, as described in connection with620ofFIG.6. For example,902may be performed by an association determination component1040of the apparatus1002ofFIG.10. In some examples, the first carrier may be different than the second carrier. In some examples, the first carrier and the second carrier may be intra-band carriers, as described above in connection withFIG.5A. For example, the first carrier and the second carrier may be associated with a same operating band of a frequency range. In some examples, the first carrier and the second carrier may be inter-band carriers. For example, the first carrier may be associated with a first operating band of a first example frequency range and the second carrier may be associated with a second operating band of a second example frequency range. In some examples, the first frequency range may be the same as the second frequency range (e.g., the first operating band and the second operating band are within a same frequency range, such as FR1 or FR2), as described above in connection withFIG.5A. In some examples, the first frequency range may be different than the second frequency range (e.g., the first operating band is within the first frequency range (FR1) and the second operating band is within the second frequency range (FR2)), as described above in connection withFIG.5B. In some examples, the first set of beams may include a first quantity of beams and the second set of beams may include a second quantity of beams that is the same as the first quantity of beams, as described above in connection withFIG.5A. In some examples, the first set of beams may include a first quantity of beams and the second set of beams may include a second quantity of beams that is different than the first quantity of beams, as described above in connection withFIG.5B. In some examples, the first set of beams comprises N beams and the second set of beams comprises N*M beams, as described above in connection withFIG.5B. 
In some examples, the cross-carrier beam association between the first carrier and the second carrier may be determined based on a QCL relationship of the first set of beams and the second set of beams, as described above in connection withFIGS.5A and5B. In some examples, the first set of beams may each be associated with a first beam width, the second set of beams may each be associated with a second beam width, and the second set of beams based on the second beam width may overlap the first set of beams based on the first beam width, as described above in connection withFIGS.5A and5B. In some examples, the second beam width may be the same as the first beam width, as described above in connection withFIG.5A. In some examples, the second beam width may be approximately equal to 1/M the first beam width, as described above in connection withFIG.5B. At904, the base station may receive, from a UE, a capability indicating support of cross-carrier beam association, as described in connection with the UE capability610ofFIG.6. For example,904may be performed by a capabilities component1042of the apparatus1002ofFIG.10. At906, the base station transmits, to the UE, an indication of the cross-carrier beam association associated with the first carrier and the second carrier, as described above in connection with the cross-carrier beam associations indication630ofFIG.6. For example,906may be performed by an indication transmission component1044of the apparatus1002ofFIG.10. In some examples, the base station may transmit the indication via SI. In some examples, the base station may transmit the indication via RRC signaling. In some examples, if the received capability indication (e.g., at904) indicates that the UE supports cross-carrier beam association, the base station may transmit the indication to the UE in an RRC message. At908, the base station transmits, to the UE, on the first set of beams and the second set of beams based on the determined cross-carrier beam association, as described above in connection with the transmissions using the first set of beams650and the transmissions using the second set of beams652ofFIG.6. For example,908may be performed by a beams transmission component1046of the apparatus1002ofFIG.10. In some examples, the indication (e.g., transmitted at906) may indicate that an SSB of the first set of beams is associated with a RACH occasion (RO). In some such examples, at910, the base station may receive, from the UE, a RACH message through a subset of beams of the second set of beams at an RO, as described above in connection with the RACH message680ofFIG.6. For example, the base station may receive the RACH message through the example second subset of beams B3, B4, B5, and B6ofFIG.5B. In some examples, the base station may receive the RACH message through a refined subset of beams, such as the example beams B5and B6ofFIG.5B. FIG.10is a diagram1000illustrating an example of a hardware implementation for an apparatus1002. The apparatus1002is a base station and includes a baseband unit1004. The baseband unit1004may communicate through a cellular RF transceiver1022with the UE104. The baseband unit1004may include a computer-readable medium/memory. The baseband unit1004is responsible for general processing, including the execution of software stored on the computer-readable medium/memory. The software, when executed by the baseband unit1004, causes the baseband unit1004to perform the various functions described supra.
The computer-readable medium/memory may also be used for storing data that is manipulated by the baseband unit1004when executing software. The baseband unit1004further includes a reception component1030, a communication manager1032, and a transmission component1034. The communication manager1032includes the one or more illustrated components. The components within the communication manager1032may be stored in the computer-readable medium/memory and/or configured as hardware within the baseband unit1004. The baseband unit1004may be a component of the base station310and may include the memory376and/or at least one of the TX processor316, the RX processor370, and the controller/processor375. The communication manager1032includes an association determination component1040that is configured to determine a cross-carrier beam association between a first set of beams of a first carrier and a second set of beams of a second carrier, for example, as described in connection with902ofFIG.9. The communication manager1032also includes a capabilities component1042that is configured to receive, from a UE, a capability indicating support of cross-carrier beam association, for example, as described in connection with904ofFIG.9. The communication manager1032also includes an indication transmission component1044that is configured to transmit, to the UE, an indication of the cross-carrier beam association associated with the first carrier and the second carrier, for example, as described in connection with906ofFIG.9. The communication manager1032also includes a beams transmission component1046that is configured to transmit, to the UE, on the first set of beams and the second set of beams based on the determined cross-carrier beam association, for example, as described in connection with908ofFIG.9. The communication manager1032also includes a RACH handling component1048that is configured to receive, from the UE, a RACH message through a subset of beams of the second set of beams at an RO, for example, as described in connection with910ofFIG.9. The apparatus may include additional components that perform each of the blocks of the algorithm in the aforementioned flowchart ofFIG.9. As such, each block in the aforementioned flowchart ofFIG.9may be performed by a component and the apparatus may include one or more of those components. The components may be one or more hardware components specifically configured to carry out the stated processes/algorithm, implemented by a processor configured to perform the stated processes/algorithm, stored within a computer-readable medium for implementation by a processor, or some combination thereof. In one configuration, the apparatus1002, and in particular the baseband unit1004, includes means for determining a cross-carrier beam association between a first set of beams of a first carrier and a second set of beams of a second carrier, the first carrier being different than the second carrier. The example apparatus1002also includes means for transmitting, to a user equipment (UE), an indication of the cross-carrier beam association associated with the first carrier and the second carrier. The example apparatus1002also includes means for transmitting, to the UE, on the first set of beams and the second set of beams based on the determined cross-carrier beam association. In another configuration, the example apparatus1002may also include means for receiving, from the UE, a RACH message through a subset of beams of the second set of beams at the RO.
In another configuration, the example apparatus1002may also include means for transmitting the indication via system information (SI) or radio resource control (RRC) signaling. In another configuration, the example apparatus1002may also include means for receiving, from the UE, a capability indicating support of cross-carrier beam association, and where the indication is transmitted in a radio resource control (RRC) message. The aforementioned means may be one or more of the aforementioned components of the apparatus1002configured to perform the functions recited by the aforementioned means. As described supra, the apparatus1002may include the TX processor316, the RX processor370, and the controller/processor375. As such, in one configuration, the aforementioned means may be the TX processor316, the RX processor370, and the controller/processor375configured to perform the functions recited by the aforementioned means. Example techniques disclosed herein enable associating cross-carrier beams based on, for example, one or more similar characteristics. For example, one or more beams of a first carrier may be cross-carrier associated with one or more beams of a second carrier. In some examples, when a first beam of a first carrier is indicated as being cross-carrier associated with a second beam of a second carrier, then characteristics associated with the first beam may be applied to the second beam. For example, if the first beam and the second beam are indicated as having a similar delay spread, then a UE can determine the delay spread for the first beam and apply the determined delay spread to the second beam without separately determining the delay spread for the second beam. It may be appreciated that by employing one or more of the techniques disclosed herein, the UE may conserve resources (e.g., processing resources) by applying one or more characteristics associated with a beam of a first carrier to one or more beams of a second carrier. In some such examples, the UE may avoid performing measurements on the one or more beams of the second carrier. It is understood that the specific order or hierarchy of blocks in the processes/flowcharts disclosed is an illustration of example approaches. Based upon design preferences, it is understood that the specific order or hierarchy of blocks in the processes/flowcharts may be rearranged. Further, some blocks may be combined or omitted. The accompanying method claims present elements of the various blocks in a sample order, and are not meant to be limited to the specific order or hierarchy presented. The previous description is provided to enable any person skilled in the art to practice the various aspects described herein. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects. Thus, the claims are not intended to be limited to the aspects shown herein, but are to be accorded the full scope consistent with the language of the claims, wherein reference to an element in the singular is not intended to mean “one and only one” unless specifically so stated, but rather “one or more.” Terms such as “if,” “when,” and “while” should be interpreted to mean “under the condition that” rather than imply an immediate temporal relationship or reaction.
That is, these phrases, e.g., “when,” do not imply an immediate action in response to or during the occurrence of an action, but simply imply that if a condition is met then an action will occur, but without requiring a specific or immediate time constraint for the action to occur. The word “exemplary” is used herein to mean “serving as an example, instance, or illustration.” Any aspect described herein as “exemplary” is not necessarily to be construed as preferred or advantageous over other aspects. Unless specifically stated otherwise, the term “some” refers to one or more. Combinations such as “at least one of A, B, or C,” “one or more of A, B, or C,” “at least one of A, B, and C,” “one or more of A, B, and C,” and “A, B, C, or any combination thereof” include any combination of A, B, and/or C, and may include multiples of A, multiples of B, or multiples of C. Specifically, combinations such as “at least one of A, B, or C,” “one or more of A, B, or C,” “at least one of A, B, and C,” “one or more of A, B, and C,” and “A, B, C, or any combination thereof” may be A only, B only, C only, A and B, A and C, B and C, or A and B and C, where any such combinations may contain one or more member or members of A, B, or C. All structural and functional equivalents to the elements of the various aspects described throughout this disclosure that are known or later come to be known to those of ordinary skill in the art are expressly incorporated herein by reference and are intended to be encompassed by the claims. Moreover, nothing disclosed herein is intended to be dedicated to the public regardless of whether such disclosure is explicitly recited in the claims. The words “module,” “mechanism,” “element,” “device,” and the like may not be a substitute for the word “means.” As such, no claim element is to be construed as a means plus function unless the element is expressly recited using the phrase “means for.” The following aspects are illustrative only and may be combined with other aspects or teachings described herein, without limitation. Aspect 1 is a method of wireless communication at a user equipment (UE), comprising: receiving, from a base station, an indication of a cross-carrier beam association associated with a first carrier and a second carrier different than the first carrier; determining an association between a first set of beams of the first carrier and a second set of beams of the second carrier based on the indication of the cross-carrier beam association; and receiving on the first set of beams and the second set of beams based on the determined cross-carrier beam association. Aspect 2 is the method of aspect 1, further including that the indication indicates that a synchronization signal block (SSB) of the first set of beams is associated with a random access channel (RACH) occasion (RO), the method further comprising: selecting a first subset of beams of the first set of beams based on measurements of an SSB received on the first carrier; determining a second subset of beams of the second set of beams, the second subset of beams corresponding to the first subset of beams based on the cross-carrier beam association; and transmitting, to the base station, a RACH message through the second subset of beams at the RO. Aspect 3 is the method of any of aspect 1 or aspect 2, further including: measuring SSBs received through the second subset of beams; and refining beams in the second subset of beams based on the measured SSBs received through the second subset of beams. 
Aspect 4 is the method of any of aspects 1 to 3, further including that the first set of beams comprises a first quantity of beams and the second set of beams comprises a second quantity of beams that is a same quantity as the first quantity of beams or that is a different quantity than the first quantity of beams. Aspect 5 is the method of any of aspects 1 to 4, further including that the first set of beams comprises N beams and the second set of beams comprises N*M beams. Aspect 6 is the method of any of aspects 1 to 5, further including that the first carrier and the second carrier are inter-band carriers. Aspect 7 is the method of any of aspects 1 to 6, further including that the first carrier is associated with a first frequency range and the second carrier is associated with a second frequency range that is a same frequency range as the first frequency range or that is a different frequency range than the first frequency range. Aspect 8 is the method of any of aspects 1 to 5, further including that the first carrier and the second carrier are intra-band carriers. Aspect 9 is the method of any of aspects 1 to 8, further including that the first set of beams are each associated with a first beam width, where the second set of beams are each associated with a second beam width, and where the second set of beams based on the second beam width overlap the first set of beams based on the first beam width. Aspect 10 is the method of any of aspects 1 to 9, further including that the second beam width is a same beam width as the first beam width or that is a different beam width than the first beam width. Aspect 11 is the method of any of aspects 1 to 10, further including that the second beam width is approximately equal to 1/M the first beam width. Aspect 12 is the method of any of aspects 1 to 11, further including that the cross-carrier beam association between the first carrier and the second carrier is determined based on a quasi co-location (QCL) relationship of the first set of beams and the second set of beams. Aspect 13 is the method of any of aspects 1 to 12, further including that the indication is received via system information (SI) or radio resource control (RRC) signaling. Aspect 14 is the method of any of aspects 1 to 13, further including: transmitting, to the base station, a capability indicating support of cross-carrier beam association, and where the indication is received in a radio resource control (RRC) message. Aspect 15 is an apparatus for wireless communication including a memory and at least one processor coupled to a memory, the memory and the at least one processor configured to implement a method as in any of aspects 1 to 14. Aspect 16 is an apparatus for wireless communication including means for implementing a method as in any of aspects 1 to 14. Aspect 17 is a non-transitory computer-readable storage medium storing computer executable code, where the code, when executed, causes a processor to implement a method as in any of aspects 1 to 14. 
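Before turning to the corresponding base station aspects, the UE-side flow of Aspects 1, 2, and 5 can be illustrated in code. The following Python sketch is illustrative only: the function names, the contiguous index mapping from each of the N first-carrier beams to M second-carrier beams, the parameter k, and the RSRP values are all assumptions introduced for this example and are not part of any standardized API.

```python
# A minimal, hypothetical sketch of the UE-side procedure of Aspects 1, 2, and 5:
# each of the N first-carrier beams is assumed to map to M contiguous
# second-carrier beams (so the second set holds N*M beams).

def build_association(n_beams: int, m: int) -> dict:
    """Map first-carrier beam i to second-carrier beams [i*m, ..., i*m + m - 1]."""
    return {i: [i * m + j for j in range(m)] for i in range(n_beams)}

def select_rach_beams(ssb_rsrp, association, k=1):
    """Select the k strongest first-carrier SSB beams, then return the
    second-carrier beams associated with them for the RACH message (Aspect 2)."""
    first_subset = sorted(range(len(ssb_rsrp)),
                          key=lambda i: ssb_rsrp[i], reverse=True)[:k]
    second_subset = []
    for beam in first_subset:
        second_subset.extend(association[beam])
    return second_subset

association = build_association(n_beams=4, m=2)   # N = 4, M = 2 -> 8 narrow beams
rsrp = [-95.0, -87.5, -101.2, -90.3]              # measured SSB RSRP per wide beam
print(select_rach_beams(rsrp, association))       # -> [2, 3]
```

Refinement in the sense of Aspect 3 would then measure SSBs on the returned second-carrier subset and narrow that subset further.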
Aspect 18 is a method of wireless communication at a base station, comprising: determining a cross-carrier beam association between a first set of beams of a first carrier and a second set of beams of a second carrier, the first carrier being different than the second carrier; transmitting, to a user equipment (UE), an indication of the cross-carrier beam association associated with the first carrier and the second carrier; and transmitting, to the UE, on the first set of beams and the second set of beams based on the determined cross-carrier beam association. Aspect 19 is the method of aspect 18, further including that the indication indicates that a synchronization signal block (SSB) of the first set of beams is associated with a random access channel (RACH) occasion (RO), the method further comprising: receiving, from the UE, a RACH message through a subset of beams of the second set of beams at the RO. Aspect 20 is the method of any of aspect 18 or aspect 19, further including that the first set of beams comprises a first quantity of beams and the second set of beams comprises a second quantity of beams that is a same quantity as the first quantity of beams or that is a different quantity than the first quantity of beams. Aspect 21 is the method of any of aspects 18 to 20, further including that the first set of beams comprises N beams and the second set of beams comprises N*M beams. Aspect 22 is the method of any of aspects 18 to 21, further including that the first carrier and the second carrier are inter-band carriers. Aspect 23 is the method of any of aspects 18 to 22, further including that the first carrier is associated with a first frequency range and the second carrier is associated with a second frequency range that is a same frequency range as the first frequency range or that is a different frequency range than the first frequency range. Aspect 24 is the method of any of aspects 18 to 21, further including that the first carrier and the second carrier are intra-band carriers. Aspect 25 is the method of any of aspects 18 to 24, further including that the first set of beams are each associated with a first beam width, where the second set of beams are each associated with a second beam width, and where the second set of beams based on the second beam width overlap the first set of beams based on the first beam width. Aspect 26 is the method of any of aspects 18 to 25, further including that the second beam width is a same beam width as the first beam width or that is a different beam width than the first beam width. Aspect 27 is the method of any of aspects 18 to 26, further including that the second beam width is approximately equal to 1/M the first beam width. Aspect 28 is the method of any of aspects 18 to 27, further including that the cross-carrier beam association between the first carrier and the second carrier is determined based on a quasi co-location (QCL) relationship of the first set of beams and the second set of beams. Aspect 29 is the method of any of aspects 18 to 28, further including that the indication is transmitted via system information (SI) or radio resource control (RRC) signaling. Aspect 30 is the method of any of aspects 18 to 29, further including: receiving, from the UE, a capability indicating support of cross-carrier beam association, and where the indication is transmitted in a radio resource control (RRC) message. 
Aspect 31 is an apparatus for wireless communication including a memory and at least one processor coupled to a memory, the memory and the at least one processor configured to implement a method as in any of aspects 18 to 30. Aspect 32 is an apparatus for wireless communication including means for implementing a method as in any of aspects 18 to 30. Aspect 33 is a non-transitory computer-readable storage medium storing computer executable code, where the code, when executed, causes a processor to implement a method as in any of aspects 18 to 30.
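As noted earlier, a key benefit of the cross-carrier beam association is that a characteristic measured on a first-carrier beam, such as delay spread, may be reused for its associated second-carrier beams so that those beams need not be measured separately. A minimal sketch of that reuse follows; the data layout and names are assumptions made for illustration only.

```python
# Hypothetical sketch: a per-beam characteristic measured on the first carrier
# (here, delay spread) is copied to the associated second-carrier beams, so the
# UE avoids re-measuring the second-carrier beams.

def apply_cross_carrier(first_carrier_delay_spread_ns, association):
    """Copy each first-carrier beam's delay spread to its associated
    second-carrier beams."""
    second = {}
    for beam, value in first_carrier_delay_spread_ns.items():
        for assoc_beam in association.get(beam, []):
            second[assoc_beam] = value
    return second

carrier1 = {0: 120.0, 1: 45.0}    # measured delay spread per first-carrier beam (ns)
assoc = {0: [0, 1], 1: [2, 3]}    # cross-carrier beam association
print(apply_cross_carrier(carrier1, assoc))
# -> {0: 120.0, 1: 120.0, 2: 45.0, 3: 45.0}
```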
11863282
DETAILED DESCRIPTION FIGS.1through12, discussed below, and the various embodiments used to describe the principles of the present disclosure in this patent document are by way of illustration only and should not be construed in any way to limit the scope of the disclosure. Those skilled in the art will understand that the principles of the present disclosure may be implemented in any suitably arranged system or device. Hereinafter, in various embodiments of the present disclosure, hardware approaches will be described as an example. However, various embodiments of the present disclosure include a technology that uses both hardware and software and thus, the various embodiments of the present disclosure may not exclude the perspective of software. Hereinafter, various embodiments of the present disclosure will be described with reference to the accompanying drawings. The embodiments and the terms used therein are not intended to limit the technology disclosed herein to specific forms, and should be understood to include various modifications, equivalents, and/or alternatives to the corresponding embodiments. In describing the drawings, similar reference numerals may be used to designate similar constituent elements. A singular expression may include a plural expression unless they are definitely different in a context. As used herein, singular forms may include plural forms as well unless the context clearly indicates otherwise. The expression “a first,” “a second,” “the first,” or “the second” used in various embodiments of the present disclosure may modify various components regardless of the order and/or the importance but does not limit the corresponding components. When an element (e.g., first element) is referred to as being “(functionally or communicatively) connected,” or “directly coupled” to another element (second element), the element may be connected directly to the other element or connected to the other element through yet another element (e.g., third element). The expression “configured to” as used in various embodiments of the present disclosure may be interchangeably used with, for example, “suitable for,” “having the capacity to,” “designed to,” “adapted to,” “made to,” or “capable of” in terms of hardware or software, according to circumstances. Alternatively, in some situations, the expression “device configured to” may mean that the device, together with other devices or components, “is able to.” For example, the phrase “processor adapted (or configured) to perform A, B, and C” may mean a dedicated processor (e.g., embedded processor) only for performing the corresponding operations or a generic-purpose processor (e.g., central processing unit (CPU) or Application Processor (AP)) that can perform the corresponding operations by executing one or more software programs stored in a memory device. The present disclosure relates to an apparatus and a method for performing a beam search in a wireless communication system. More specifically, the present disclosure describes technology for performing a beam search through a procedure corresponding to an antenna structure in a wireless communication system. The term referring to channel information used in the following description, the term referring to network entities, and the term referring to an element of the device are employed for convenience of description. Accordingly, the present disclosure is not limited to the following terms and other terms having the same technical meaning may be used. 
In the following description, channel reciprocity refers to the property that an uplink channel and a downlink channel have similar characteristics, in other words, a channel attribute that allows an uplink channel response to be treated as equal to a downlink channel response. Through the channel reciprocity, a downlink channel response can be acquired using an uplink channel response or an uplink channel response can be acquired using a downlink channel response. Similarly, beam reciprocity refers to the property that an uplink beam and a downlink beam have correspondence (i.e., similar characteristics), in other words, a beam attribute that allows an uplink beam direction to be handled as corresponding (i.e., equal) to a downlink beam direction. The beam reciprocity may be referred to as beam correspondence. Through the beam correspondence, a beam used in the uplink can be used in the downlink or a beam used in the downlink can be used in the uplink. In channel estimation or beam tracking, when antennas used for data transmission and antennas used for data reception are the same as each other, the antennas are regarded as the same spatial resources and thus using the reciprocity is considered. In this specification, whether the antennas of a base station or a terminal are transmission/reception common antennas, and how this relates to channel reciprocity and beam correspondence, will be described. FIG.1illustrates a wireless network environment according to various embodiments of the present disclosure. Referring toFIG.1, the wireless network environment may be a wireless network environment100. The wireless network environment100may include a base station (BS)110and a terminal120. According to the type of network, the BS110may be referred to as “access point (AP),” “eNodeB (eNB),” “5th generation nodeB (5G node),” or “transmission reception point (TRP),” as well as “base station (BS).” Hereinafter, for convenience of description, the BS110may be used for referring to network infrastructure elements that provide radio access to remote terminals in this patent document. Further, according to the type of network, the terminal120may be referred to as “mobile station,” “subscriber station,” “remote terminal,” “wireless terminal,” or “user device,” as well as “user equipment (UE).” The wireless network environment100includes a downlink corresponding to a link from the BS110to the terminal120and an uplink corresponding to a link from the terminal120to the BS110. The BS110may include a plurality of antennas. The BS110may include an antenna112. Through the antenna112, the BS110may perform both transmission through the downlink and reception through the uplink. That is, the antenna112may be a transmission/reception (TX/RX) common antenna. The terminal120may include at least one antenna. The terminal120may include a reception antenna122and a transmission antenna124. In the following description, although the reception antenna122and the transmission antenna124are spaced apart from each other by a predetermined distance, the present disclosure is not limited thereto. The BS110may transmit downlink data to the terminal120through the antenna112. In order to transmit the downlink data, the BS110may select a downlink transmission beam116among a plurality of BS beams115. The terminal120may receive the downlink data through the reception antenna122. In order to receive the downlink data, the terminal120may select a downlink reception beam126among a plurality of terminal beams125. 
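As a concrete illustration of the beam selection just described, a downlink beam search can be viewed as picking the pair of transmission and reception beams with the best measured quality. The following Python sketch is a minimal illustration under assumed names and example RSRP values; it is not taken from the specification.

```python
# Minimal sketch (hypothetical names) of a downlink beam pair search: the BS
# sweeps its transmission beams and the terminal sweeps its reception beams,
# and the pair with the highest measured RSRP is selected.

def select_beam_pair(rsrp_by_pair):
    """rsrp_by_pair maps (bs_tx_beam, terminal_rx_beam) to measured RSRP in dBm."""
    return max(rsrp_by_pair, key=rsrp_by_pair.get)

measurements = {
    (0, 0): -102.4, (0, 1): -97.8,
    (1, 0): -91.3,  (1, 1): -99.0,
}
tx_beam, rx_beam = select_beam_pair(measurements)
print(tx_beam, rx_beam)   # -> 1 0, the strongest pair
```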
The terminal120may transmit uplink data through the transmission antenna124. In order to transmit the uplink data, the terminal120may determine an uplink transmission beam among the plurality of terminal beams125. When the transmission antenna124and the reception antenna122are located within a predetermined distance from each other, the terminal120may determine the beam, which has been used for reception of the downlink data, as the uplink transmission beam. When the transmission antenna124and the reception antenna122are located within a predetermined distance from each other and implemented as one antenna, the terminal120may also determine the beam, which has been used for reception of the downlink data, as the uplink transmission beam. The terminal120may use a beam search result in the downlink for the beam search procedure. In other words, the terminal120may use beam correspondence (beam reciprocity) to determine the uplink transmission beam. However, when the transmission antenna124and the reception antenna122are spaced apart from each other by a predetermined distance or longer, the terminal120may determine that it is difficult to consider the downlink reception beam as the uplink transmission beam. That is, the terminal120may determine that it is difficult to use the beam correspondence. Accordingly, as illustrated in the example ofFIG.1, the terminal120may determine the uplink transmission beam127among the plurality of terminal beams by performing the beam search procedure for the uplink. The BS110may receive the uplink data through the antenna112. In order to receive the uplink data, the BS110may determine the uplink reception beam among the plurality of BS beams115. At this time, as the transmission antenna and the reception antenna are implemented as one antenna112, the BS110may determine that beam correspondence can be used. The BS110may determine the downlink transmission beam116as the uplink reception beam. Accordingly, the BS110may omit the beam search procedure for determining the uplink reception beam. Meanwhile, according to a communication environment, the BS110may convert an estimation result of an uplink channel to estimate a downlink channel or convert an estimation result of a downlink channel to estimate an uplink channel. In other words, according to the communication environment, the BS110may determine whether to use the channel reciprocity. For example, the communication environment may include a time division duplex (TDD) communication environment or a frequency division duplex (FDD) communication environment, and partial resources (for example, antennas) used for data transmission. When downlink antennas (the transmission antenna of the BS110and the reception antenna of the terminal) and uplink antennas (the reception antenna of the BS and the transmission antenna of the terminal) are not the same, it may be determined that the use of channel reciprocity is not possible due to spatial division of the antennas. For example, the antenna112is the transmission/reception common antenna but the antennas in the terminal side are divided into the reception antenna122and the transmission antenna124as illustrated inFIG.1, so that the BS110may determine that it is difficult to use channel reciprocity. After performing the uplink channel estimation, the BS110may have difficulty in applying the converted result of the uplink channel estimation to downlink data transmission. 
Accordingly, the BS110is required to perform separate compensation or transmit a separate reference signal (for example, a cell-specific reference signal (CRS), channel state information-reference signal (CSI-RS), or demodulation-RS (DM-RS)) for the downlink channel estimation. Although not illustrated inFIG.1, when the downlink antennas and the uplink antennas are the same, the channel reciprocity can be used. When uplink transmission and downlink transmission are performed using different time resources in the same frequency band, the BS may apply the channel estimation result in the uplink to precoding in downlink data transmission. That is, the BS may transmit downlink data without transmitting a separate reference signal, receiving feedback on channel state information (CSI), or performing a compensation procedure. AlthoughFIG.1illustrates that the antenna of the BS110corresponds to the transmission/reception common antenna and the antennas of the terminal120are divided into the transmission antenna and the reception antenna, this is only an example for describing beam correspondence and channel reciprocity and the present disclosure is not limited thereto. Accordingly, the BS110according to various embodiments of the present disclosure may not include the transmission/reception common antenna. In other words, the BS110may include a transmission antenna and a reception antenna that are physically separated from each other. Further, the terminal120may include a transmission/reception common antenna instead of the divided reception antenna and transmission antenna. The operations of the BS110and the terminal120may be divided into a beam search step using beam correspondence and a channel estimation step using channel reciprocity.FIG.2illustrates the general operation of the BS110,FIG.3illustrates the general operation of the terminal120, andFIG.4illustrates the operations before the beam search by the BS110and the terminal120. Thereafter, the beam search procedure using beam correspondence will be described with reference toFIGS.5to9. The channel estimation using channel reciprocity will be described with reference toFIGS.10to12. Hereinafter, for convenience of description, the case in which the beam, which has been used in the downlink, is used in the uplink will be described as an example of the beam search procedure based on beam correspondence, but the case in which the beam, which has been used in the uplink, is used in the downlink can also be applied. FIG.2illustrates an example of the functional configuration of the BS according to various embodiments of the present disclosure. The BS may be the BS110ofFIG.1. The term “ . . . unit” or the ending of a word, such as “ . . . or,” “ . . . er,” or the like may indicate a unit of processing at least one function or operation, and this may be embodied by hardware, software, or a combination of hardware and software. Hereinafter, terms required for describing various embodiments will be defined in the description ofFIG.2. The terms referring to control information used in the following description, the terms for calculation states (for example, a mode and an operation), the terms referring to data (for example, information or a value), the terms referring to network entities, the terms referring to messages (for example, an enquiry and a signal), and the terms referring to elements of the device are employed for convenience of description. 
Accordingly, the present disclosure is not limited to the following terms and other terms having the same technical meaning may be used. Referring toFIG.2, the BS110may include a communication interface210, a controller220, and a storage230. The communication interface210may perform functions for transmitting a signal through a wireless channel. For example, the communication interface210may perform a function of conversion between a baseband signal and a bitstream according to a physical layer standard of a system. For example, when data is transmitted, the communication interface210may encode and modulate a transmission bitstream so as to generate complex symbols. Also, when data is received, the communication interface210may decode and demodulate a baseband signal so as to restore a reception bitstream. The communication interface210may up-convert the baseband signal into a radio frequency (RF) band signal and transmit the up-converted signal through an antenna. The communication interface210may down-convert an RF band signal received through an antenna to a baseband signal. For example, the communication interface210may include a transmission filter, a reception filter, an amplifier, a mixer, an oscillator, a digital analog converter (DAC), an analog digital converter (ADC), and the like. When a plurality of antennas is provided, the communication interface210may include a plurality of RF chains. According to some embodiments, the communication interface210may include a transmission/reception common antenna. In this case, the communication interface210may use the same antenna configuration in the transmission operation and the reception operation. The antenna configuration may include elements required for analog control of the antenna, such as a low noise amplifier (LNA), a coupler, and an antenna type (for example, antenna array). In other embodiments, the communication interface210may include a transmission antenna and a reception antenna separated from each other. In this case, the communication interface210may include both the antenna configuration for the transmission antenna and the antenna configuration for the reception antenna. The communication interface210may transmit various reference signals. For example, the communication interface210may perform the beam search procedure in order to discover a beam having good transmission or reception performance in beamforming. For the beam search procedure, the communication interface210may transmit a plurality of reference signals. The reference signals may be transmitted after being beamformed, and accordingly, referred to as beam reference signals or beam refinement reference signals. In another example, the communication interface210may transmit at least one of a CRS, a CSI-RS, and a DM-RS in the channel estimation operation. The communication interface210may receive feedback information from the terminal120in response to the various reference signals. The communication interface210may receive various reference signals. For example, when searching for a beam used in the uplink, that is, an optimal or preferred uplink beam, the communication interface210may receive reference signals. The reference signals may be transmitted after being transmission-beamformed by the terminal120, and may be reception-beamformed by the communication interface210. In another example, when the uplink channel estimation is performed, the communication interface210may receive an uplink DM-RS. 
In yet another example, the communication interface210may receive an uplink sounding reference signal (SRS) in the channel estimation for scheduling and link adaptation according to the uplink channel. The communication interface210may transmit feedback information to the terminal120in response to the various reference signals as necessary. The communication interface210may be used to negotiate UE capability information with the terminal120. The communication interface210may be used to request the UE capability information in an initial access procedure with the terminal120. That is, the communication interface210may transmit a UE capability information enquiry message to the terminal120. The communication interface210may receive the UE capability information from the terminal120in response to the enquiry message. The UE capability information may be UE capability information231. In some embodiments, the UE capability information231may contain information for determining whether beam correspondence or channel reciprocity can be used. For example, the UE capability information231may contain at least one of first information indicating whether the antennas of the terminal120are divided into the transmission antenna and the reception antenna, second information indicating whether the terminal120can use beam correspondence, and third information indicating whether the terminal120can use channel reciprocity. The communication interface210may transmit a control message to the terminal120. The control message may indicate an operation mode determined by the controller220. Through the control message, the communication interface210may indicate, to the terminal120, the operation mode in which the terminal120may perform the beam search and the method by which the uplink transmission beam of the terminal120may be determined. The control message may be transmitted through high layer signaling with the terminal120or transmitted through downlink control information (DCI). The communication interface210may receive a compensation value for antenna separation from the terminal120. The compensation value for the antenna separation may be a compensation value for compensating for a reciprocity error (for example, path error) attributable to the separation between the transmission antenna124of the terminal120and the reception antenna122of the terminal120. The controller220may include a reciprocity determination circuitry221, an operation mode determination circuitry223, a beam determination circuitry225, and a channel estimation circuitry227. In the beam search procedure using beam correspondence, the controller220may use the reciprocity determination circuitry221, the operation mode determination circuitry223, and the beam determination circuitry225. In the channel estimation using channel reciprocity, the controller220may use the reciprocity determination circuitry221and the channel estimation circuitry227. Hereinafter, the operation of the controller220for channel reciprocity will be described after the operation of the controller220for beam correspondence. The reciprocity determination circuitry221may determine whether the use of beam correspondence is possible. The reciprocity determination circuitry221may determine whether the BS110can use beam correspondence and whether the terminal120can use beam correspondence. The reciprocity determination circuitry221may determine whether the BS110can use beam correspondence. 
The reciprocity determination circuitry221may determine whether the BS110can use beam correspondence according to whether the antenna of the BS110used for communication with the terminal120is the transmission and/or reception common antenna. In some embodiments, when the antenna of the BS110is the transmission/reception common antenna, there is no difference in the location between the antenna for transmission and the antenna for reception, so that the reciprocity determination circuitry221may determine that beam correspondence can be used. In other embodiments, when the antenna of the BS110is not the transmission/reception common antenna, the reciprocity determination circuitry221may determine whether beam correspondence can be used according to the location of the antenna for transmission and the location of the antenna for reception. For example, the reciprocity determination circuitry221may determine whether beam correspondence can be used according to the distance between the antenna used for the downlink transmission beam and the antenna used for the uplink reception beam. When the antenna used for the downlink transmission beam and the antenna used for the uplink reception beam are separated from each other by a predetermined distance or more, the reciprocity determination circuitry221may determine that beam correspondence cannot be used in determination of the uplink reception beam. On the other hand, when the antenna used for the downlink transmission beam and the antenna used for the uplink reception beam are separated from each other by a distance shorter than the predetermined distance, the reciprocity determination circuitry221may determine that beam correspondence can be used in determination of the uplink reception beam. The location of each of the antennas may be considered at the moment when the BS110is designed, and thus the BS110may store in advance information related to the location of each of the antennas. The reciprocity determination circuitry221may determine whether the terminal120can use beam correspondence. The reciprocity determination circuitry221may determine whether the terminal120can use beam correspondence based on UE capability information received from the terminal120. In some embodiments, the UE capability information may contain first information indicating whether the antenna of the terminal120is the transmission/reception common antenna. When the first information indicates that the antenna of the terminal120is the transmission/reception common antenna, the reciprocity determination circuitry221may determine that beam correspondence can be used in the beam determination of the terminal120. On the other hand, when the first information indicates that the transmission antenna of the terminal120and the reception antenna of the terminal120are separated from each other, the reciprocity determination circuitry221may determine whether beam correspondence can be used in the beam determination of the terminal120through a separate procedure. In other embodiments, the UE capability information may contain second information indicating whether beam correspondence can be used in the beam determination of the terminal120. That is, the reciprocity determination circuitry221may determine whether the terminal120can use beam correspondence according to the content contained in the second information regardless of the first information. 
When whether the terminal120can use beam correspondence cannot be determined based on the first information, it is useful for the reciprocity determination circuitry221to analyze the second information. The operation mode determination circuitry223may determine the operation mode for a beam determination process of the BS110and the terminal120based on the determination result of the reciprocity determination circuitry221. When beam correspondence can be used for both the BS110and the terminal120, the operation mode determination circuitry223may determine the downlink transmission beam as the uplink reception beam of the BS110and control the terminal120to determine the downlink reception beam as the uplink transmission beam of the terminal120. Accordingly, the BS110may not be required to perform a separate beam search procedure for determining the uplink beam. As described above, the operation mode in which the uplink reception beam and transmission beam are determined without the beam search procedure may be referred to as a reciprocity operation mode. When the terminal120can use beam correspondence but the BS110cannot use beam correspondence, the operation mode determination circuitry223may determine that the BS110and the terminal120operate in a first operation mode for searching for the uplink reception beam of the BS110. When the terminal120cannot use beam correspondence but the BS110can use beam correspondence, the operation mode determination circuitry223may determine that the BS110and the terminal120operate in a second operation mode for searching for the uplink transmission beam of the terminal120. When beam correspondence can be used for neither the terminal120nor the BS110, the operation mode determination circuitry223may determine that the BS110and the terminal120operate in a third operation mode for searching for the uplink transmission beam of the terminal120and the uplink reception beam of the BS110. After determining the operation mode of the BS110and the terminal120, the operation mode determination circuitry223may generate a control message in order to inform the terminal120of the determined operation mode. The operation mode determination circuitry223may control the communication interface210to transmit the control message. The beam determination circuitry225may determine the beam according to the operation mode determined by the operation mode determination circuitry223. When the determined operation mode is a reciprocity operation mode, the beam determination circuitry225may determine that the beam used for downlink transmission of the BS110is the beam to be used for uplink reception of the BS110. More specifically, the beam determination circuitry225may use the beam having the same direction or the same index as that of the beam used for downlink data transmission for uplink data reception. When the determined operation mode is the first operation mode, the beam determination circuitry225may perform the beam search procedure for selecting the uplink reception beam of the BS110. At this time, the terminal120may determine the uplink transmission beam as the downlink reception beam according to a control message indicating the first operation mode. The terminal120may fix the determined uplink transmission beam. The beam determination circuitry225may perform a procedure for searching for an uplink reception beam based on the fixed uplink transmission beam. 
Through different uplink reception beams, the beam determination circuitry225may receive a plurality of reference signals transmitted through the fixed uplink transmission beam. The beam determination circuitry225may select one beam according to a reference signal quality of the plurality of reference signals. For example, the signal quality may be a received signal strength indicator (RSSI), reference signal received power (RSRP), or a reference signal received quality (RSRQ). The beam determination circuitry225may determine the selected one beam as the uplink reception beam. When the determined operation mode is the second operation mode, the beam determination circuitry225may determine the beam used for downlink transmission of the BS110as the beam to be used for uplink reception of the BS110. Thereafter, the beam determination circuitry225may receive reference signals from the terminal120and transmit a feedback to the terminal120so as to perform an uplink transmission beam determination procedure of the terminal120. When the determined operation mode is the third operation mode, the beam determination circuitry225may perform the beam search procedure for selecting the uplink reception beam of the BS110. At this time, unlike the first operation mode, the terminal120may transmit a plurality of reference signals through different uplink transmission beams according to the control message. Through different uplink reception beams, the beam determination circuitry225may receive the plurality of reference signals transmitted through different uplink transmission beams. The beam determination circuitry225may measure a quality of each of the plurality of received reference signals and generate feedback information. The beam determination circuitry225may select one beam among the plurality of reception beams according to the measured quality. The beam determination circuitry225may determine the selected one beam as the uplink reception beam. The beam determination circuitry225may control the communication interface210to transmit the feedback information. The terminal120may determine the uplink transmission beam based on the feedback information. The reciprocity determination circuitry221may determine whether channel reciprocity can be used. Whether the channel reciprocity can be used may be determined based on whether the terminal120uses the transmission and/or reception common antenna and the BS110uses the transmission/reception common antenna. Further, whether the channel reciprocity can be used may be determined further based on the distance between the transmission antenna and the reception antenna that are separated from each other. When the antenna of the BS110used for communication with the terminal120is the transmission/reception common antenna, the reciprocity determination circuitry221may determine that the BS110can use channel reciprocity. On the other hand, when the antenna of the BS110is not the transmission/reception common antenna, the reciprocity determination circuitry221may determine that the BS110cannot use channel reciprocity. When the UE capability information received from the terminal120indicates that the antenna of the terminal120is the transmission and/or reception common antenna, the reciprocity determination circuitry221may determine that the terminal120can use channel reciprocity. 
On the other hand, when the UE capability information indicates that the antennas of the terminal120are divided into the antenna for transmission and the antenna for reception, the reciprocity determination circuitry221may determine that the terminal120cannot use channel reciprocity. When both the BS110and the terminal120can use channel reciprocity, the reciprocity determination circuitry221may determine that channel reciprocity is satisfied. On the other hand, when at least one of the BS110and the terminal120cannot use channel reciprocity, the reciprocity determination circuitry221may determine that channel reciprocity is not satisfied. When both the BS110and the terminal120can use channel reciprocity, the channel estimation circuitry227may perform precoding based on an uplink channel estimation result without a separate downlink channel estimation operation and transmit downlink data. However, when at least one of the BS110and the terminal120cannot use channel reciprocity, the BS110may be required to perform an operation of transmitting a reference signal (for example, a CRS, a CSI-RS, or a DM-RS) or an operation of calibrating the uplink channel estimation result for separate downlink channel estimation to transmit downlink data. For example, the channel estimation circuitry227may control the communication interface210to transmit the reference signal for the downlink channel estimation. In another example, the channel estimation circuitry227may perform a compensation operation for the downlink channel estimation. The compensation operation may include an operation of determining a BS compensation value for compensating for channel reciprocity when the BS110cannot use channel reciprocity, and an operation of receiving a terminal compensation value for compensating for channel reciprocity from the terminal120when the terminal120cannot use channel reciprocity. The storage230may store the UE capability information231. The storage230may store the UE capability information231received from the terminal120. The UE capability information231may contain at least one of first information indicating whether the terminal120includes the transmission/reception common antenna, second information indicating whether the terminal120can use beam correspondence, and third information indicating whether the terminal120can use channel reciprocity. The storage230may store beam information233. The storage230may store the beam information233that is information on the beam used by the BS110for downlink communication with the terminal120. For example, the beam information233may be an index indicating the used beam. In another example, the beam information233may be a parameter indicating the used beam. The parameter may be at least one of a parameter (for example, a precoding matrix indicator (PMI)) for digital beamforming and a parameter (for example, a phase value) for analog beamforming. The storage230may store channel information235. The storage230may store the channel information235that is the channel estimation result acquired by the BS110in uplink communication with the terminal120. For example, the channel information235may be a channel matrix for the uplink channel. In another example, the channel information235may be a parameter (for example, channel state information (CSI) or a precoding matrix) related to precoding applied to the uplink channel. FIG.3illustrates an example of the functional configuration of the terminal according to various embodiments of the present disclosure. 
The terminal may be the terminal120ofFIG.1. Referring toFIG.3, the terminal120may include a communication interface310, a controller320, and a storage330. The communication interface310may perform functions similar to those of the communication interface210ofFIG.2. According to some embodiments, the communication interface310may include a transmission/reception common antenna. In this case, the communication interface310may use the same antenna configuration in the transmission operation and the reception operation. In other embodiments, the communication interface310may include each of a transmission antenna and a reception antenna. In this case, the communication interface310may use separate antenna configurations in the transmission operation and the reception operation. The communication interface310may transmit UE capability information to the BS110. When the terminal120performs an initial access procedure with the BS110, the communication interface310may transmit the UE capability information. The communication interface310may transmit the UE capability information to the BS110in response to a request for the UE capability information from the BS110. The communication interface310may receive a control message from the BS110. The communication interface310may transmit uplink reference signals for a beam search procedure according to an operation mode indicated by the received control message. The communication interface310may transmit uplink data through a transmission beam of the terminal120determined according to the operation mode indicated by the control message. The communication interface310may transmit a compensation value for antenna separation of the terminal120to the BS110. The compensation value for the antenna separation may be a compensation value for compensating for a reciprocity error according to the separation between the transmission antenna124of the terminal120and the reception antenna122of the terminal120. The controller320may include a UE capability information generation circuitry321, a reciprocity determination circuitry323, and a beam determination circuitry325. The UE capability information generation circuitry321may generate UE capability information in response to a UE capability information enquiry request received from the BS110. The UE capability information generation circuitry321may generate the UE capability information based on whether the transmission antenna and the reception antenna of the terminal120are separated from each other, whether the terminal120can use beam correspondence, and whether the terminal120can use channel reciprocity. According to various embodiments, the terminal120may provide information on whether beam correspondence is satisfied to the BS110through the UE capability information. Whether the transmission antenna and the reception antenna of the terminal120are separated from each other may be determined at the moment when the terminal120is designed or manufactured, so that the corresponding information may be stored in advance. The reciprocity determination circuitry323may determine whether beam correspondence can be used in beam determination of the terminal120. The reciprocity determination circuitry323may determine whether beam correspondence can be used regardless of reception of the control message indicating the operation mode. 
For example, when the antenna of the terminal120is the transmission and/or reception common antenna, the reciprocity determination circuitry323may determine that the terminal120can use beam correspondence. In another example, when the antennas of the terminal120are implemented to be separated into the transmission antenna and the reception antenna, the reciprocity determination circuitry323may determine whether the terminal120can use beam correspondence according to the distance between the transmission antenna and the reception antenna. The reciprocity determination circuitry323may determine whether the terminal120can use channel reciprocity in the channel estimation. When the antenna of the terminal120is the transmission and/or reception common antenna, the reciprocity determination circuitry323may determine that the terminal120can use channel reciprocity. In this case, the terminal120may not determine a compensation value for antenna separation in the channel estimation. On the other hand, when the antenna of the terminal120is implemented to be separated into the transmission antenna and the reception antenna, the reciprocity determination circuitry323may determine that the terminal120cannot use channel reciprocity. In this case, the terminal120may determine the compensation value for antenna separation in the channel estimation. The beam determination circuitry325may determine the beam according to the operation mode indicated by the control message received from the BS110. When the determined operation mode is a reciprocity operation mode, the beam determination circuitry325may determine the beam used in the downlink of the terminal120as the uplink transmission beam of the terminal120. More specifically, the beam determination circuitry325may control the communication interface310to use the beam having the same direction or the same index as that of the beam used for downlink data reception for uplink data transmission. When the determined operation mode is a first operation mode, the beam determination circuitry325may determine the beam selected by the downlink beam search procedure of the terminal120as the uplink transmission beam. Thereafter, the beam determination circuitry325may transmit a plurality of reference signals so that the BS110determines the uplink reception beam. Here, the reference signals are transmitted after being beamformed by the communication interface310. When the determined operation mode is a second operation mode, the beam determination circuitry325may perform a beam search procedure for selecting the uplink transmission beam of the terminal120. The terminal120may transmit a plurality of reference signals to select the uplink transmission beam. At this time, the reference signals are transmitted after being beamformed using different transmission beams by the communication interface310. The BS110may receive the plurality of reference signals based on the uplink reception beam having the same direction as that of the downlink transmission beam and transmit feedback information to the terminal120in response to the reception. The terminal120may determine the uplink transmission beam of the terminal120based on the feedback information received from the BS110. When the determined operation mode is a third operation mode, the beam determination circuitry325may perform the beam search procedure for selecting the uplink transmission beam of the terminal120. 
At this time, unlike the second operation mode, the terminal120may transmit a plurality of reference signals without fixing the uplink transmission beam of the terminal120according to the control message. That is, the reference signals are transmitted after being beamformed using different transmission beams by the communication interface310. Through different uplink reception beams, the BS110may receive the plurality of reference signals transmitted through different uplink transmission beams. The BS110may select one of the plurality of received reference signals and determine the uplink reception beam corresponding to the selected reference signal as the uplink reception beam to be used for data transmission. The beam determination circuitry325may determine the uplink transmission beam of the terminal120based on feedback information corresponding to the plurality of transmitted reference signals. AlthoughFIG.3illustrates that the controller320of the terminal120does not include a channel estimation circuitry, the present disclosure is not limited thereto. That is, for channel estimation, the terminal120may transmit reference signals for uplink channel estimation according to a request from the BS110and transmit feedback information to the BS110in response to the reference signals received from the BS110. The storage330may store beam information333. The storage330may store the beam information333that is information on the beam used by the terminal120for downlink communication with the BS110. The beam information333may be an index indicating the used beam or a beamforming parameter indicating the beam. The storage330may store channel information335. The storage330may store the channel information335that is the channel estimation result acquired by the terminal120in uplink communication with the BS110. The channel information335may be a parameter indicating an uplink channel response, channel state information for the uplink channel, or a precoding matrix. FIG.4illustrates a negotiation process of UE capability information according to various embodiments of the present disclosure. Referring toFIG.4, in step410, the BS110may transmit a UE capability enquiry message that makes a request for UE capability information to the terminal120. The UE capability enquiry message may be a message that makes a request for capability information of the terminal120related to a radio access network (RAN) between the BS110and the terminal120. The UE capability enquiry message may be transmitted through a logical channel of a dedicated control channel (DCCH). In step420, the terminal120may transmit UE capability information to the BS110. To this end, the terminal120may generate the UE capability information. More specifically, the terminal120may generate the UE capability information containing at least one of information on antenna capability and reciprocity capability as well as information on RF capability, transport channel capability, physical channel capability, security capability, and measurement capability. The information on the antenna capability may include information on the antenna configuration of the terminal120. In some embodiments, the information on the antenna configuration may include information on whether the terminal120uses the transmission/reception common antenna or information on whether the antenna of the terminal120is implemented to be separated into the transmission antenna and the reception antenna. 
In other embodiments, the information on the antenna configuration may include information on the distance between the antenna used for uplink transmission of the terminal120and the antenna used for downlink reception. The information on the reciprocity capability may include information on beam correspondence or information on channel reciprocity. The information on the beam correspondence may be information indicating whether the terminal120can use beam correspondence. In other words, the information on the beam correspondence may indicate whether the downlink beam can be used as the uplink beam. In some embodiments, the information on the beam correspondence may be configured by 1 bit and may directly indicate whether the terminal120can use beam correspondence (in other words, whether beam correspondence is satisfied). In other embodiments, the information on the beam correspondence may be configured by multiple bits and may include information indicating candidate beams related to the used beam. The candidate beams may be beams adjacent to the used beam. In this case, the BS110may determine various operation modes in consideration of a range of the adjacent beams. The information on the channel reciprocity may be information indicating whether the terminal120can use channel reciprocity. In other words, the information on the channel reciprocity may indicate whether an estimation value for the downlink channel may be used as an estimation value for the uplink channel. More specifically, the information on the channel reciprocity may be information indicating whether a transmission path within the terminal120, that is, an analog control path before the signal is transmitted to the air, is different for the uplink channel and the downlink channel. In some embodiments, the information on the channel reciprocity may be configured by 1 bit and may directly indicate whether the terminal120can use channel reciprocity. In other embodiments, the information on the channel reciprocity may be configured by multiple bits and may indicate whether the terminal120can use channel reciprocity and also include information indicating a parameter for determining a compensation value when channel reciprocity cannot be used. In step430, the BS110may store the UE capability information in response to reception of the UE capability information and transmit an acknowledgement message of the UE capability information to the terminal120. The UE capability information and the acknowledgement message may be transmitted through a logical channel of a DCCH. In step440, the BS110and the terminal120may perform the reciprocity determination and the beam search procedure. The BS110may determine the operation mode for the beam search procedure based on the UE capability information. Herein, the operation mode represents the configuration of the beam search procedure, such as a target of beam sweeping required for the beam search, the number of beam sweepings, and whether a separate beam search for the uplink is performed. The BS110and the terminal120may select each of the uplink reception beam and the uplink transmission beam according to the determined operation mode. According to various embodiments, the terminal120may transmit the UE capability information after a radio resource control (RRC) connection procedure as illustrated inFIG.4. For example, after a random access procedure, the terminal120may report the UE capability information containing information on beam correspondence to the BS110. 
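The capability fields described above might be represented as in the following Python sketch. The structure, field names, and bit layout are assumptions introduced purely for illustration; they do not correspond to the actual UE capability information encoding or to any RRC message format.

```python
# Illustrative packing of the three capability indications into 1 bit each
# (the field layout is an assumption, not the actual encoding).

from dataclasses import dataclass

@dataclass
class UECapability:
    common_tx_rx_antenna: bool   # first information: antenna configuration
    beam_correspondence: bool    # second information: beam correspondence usable
    channel_reciprocity: bool    # third information: channel reciprocity usable

def encode(cap: UECapability) -> int:
    """Pack the three indications into the low three bits of an integer."""
    return (int(cap.common_tx_rx_antenna) << 2
            | int(cap.beam_correspondence) << 1
            | int(cap.channel_reciprocity))

def decode(bits: int) -> UECapability:
    return UECapability(bool(bits >> 2 & 1), bool(bits >> 1 & 1), bool(bits & 1))

cap = UECapability(False, False, False)   # separated antennas, as in FIG. 1
assert decode(encode(cap)) == cap
```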
Although not illustrated in FIG. 4, the BS 110 and the terminal 120 may perform a downlink beam search procedure for determining the downlink transmission beam and the downlink reception beam. For example, the downlink beam search procedure may be performed before a capability negotiation procedure (for example, step 410, step 420, or step 430). Alternatively, the downlink beam search procedure may be performed after the capability negotiation procedure of the terminal 120. Here, the downlink beam search procedure may include steps in which the BS 110 transmits reference signals along with transmission beamforming and the terminal 120 receives the reference signals along with reception beamforming and transmits feedback information. Hereinafter, a detailed process of the beam correspondence determination, the operation mode determination, and the beam search procedures that are performed in step 440 will be described with reference to FIGS. 5 to 9.

FIG. 5 is a flowchart illustrating a process of the beam determination by the BS according to various embodiments of the present disclosure. The BS may be the BS 110 of FIG. 1. Referring to FIG. 5, in step 510, the BS 110 may determine whether the BS and the terminal use beam correspondence, based on the UE capability information and on whether the antenna of the BS 110 is the transmission/reception common antenna. The BS 110 may determine whether the terminal 120 can use beam correspondence based on the UE capability information. The UE capability information may contain first information on the antenna configuration of the terminal 120 or second information on beam correspondence of the terminal 120. In some embodiments, when the first information indicates that the antenna of the terminal 120 is the transmission/reception common antenna, the BS 110 may determine that the terminal 120 can use beam correspondence. In other embodiments, when the second information directly indicates that the terminal 120 cannot use beam correspondence, the BS 110 may immediately determine that the terminal 120 cannot use beam correspondence. The BS 110 may determine whether beam correspondence can be used based on the type of system used for communication with the terminal 120. In some embodiments, when the communication system with the terminal 120 is TDD, the BS 110 may determine that beam correspondence can be used. In other embodiments, even when the communication type with the terminal 120 is FDD, the BS 110 may determine that beam correspondence can be used. For example, when the bandwidth between the DL and the UL is not larger than a predetermined threshold value, the BS 110 may determine the best uplink reception beam based on the best downlink transmission beam. The bandwidth may be a guard bandwidth. The BS 110 may determine whether the BS 110 can use beam correspondence based on whether the antenna of the BS 110 used for communication (for example, downlink) with the terminal 120 is the transmission/reception common antenna. For example, when the antenna of the BS 110 is the transmission/reception common antenna, there is no distance difference between the antenna used for transmission and the antenna used for reception, so that it may be determined that the BS 110 can use beam correspondence. In another example, when the antenna of the BS 110 used in the downlink is a transmission-dedicated antenna, the BS 110 may determine whether the BS 110 can use beam correspondence through a separate procedure. The BS 110 may determine whether beam correspondence can be used according to the distance between the transmission antenna and the reception antenna.
The distance between the transmission antenna and the reception antenna may be acquired from information recorded at the time of manufacturing or from a parameter according to the design, and the BS 110 may determine whether beam correspondence can be used according to the pre-stored values. In step 520, when the BS 110 determines that the BS 110 or the terminal 120 does not use beam correspondence, the BS 110 may determine an operation mode for the uplink beam search. According to whether the BS 110 can use beam correspondence and whether the terminal 120 can use beam correspondence, four cases may be derived. When both the BS 110 and the terminal 120 can use beam correspondence, the BS 110 may determine the downlink transmission beam as the uplink reception beam, and the terminal 120 may determine the downlink reception beam as the uplink transmission beam. That is, the beam search procedure for searching for the transmission beam and the reception beam used in the uplink may not be required. The BS 110 may determine the operation mode of the BS 110 and the terminal 120 as a reciprocity operation mode. When it is determined that the BS 110 or the terminal 120 cannot use beam correspondence, the uplink beam search is required, and thus the BS 110 may determine the operation mode for the uplink beam search. For example, when only the BS 110 cannot use beam correspondence, the BS 110 may determine a first operation mode for selecting the uplink reception beam. In another example, when only the terminal 120 cannot use beam correspondence, the BS 110 may determine a second operation mode for selecting the uplink transmission beam. In yet another example, when neither the BS 110 nor the terminal 120 can use beam correspondence, the BS 110 may determine a third operation mode for determining both the uplink transmission beam and the uplink reception beam.

In step 530, the BS 110 may determine the uplink reception beam of the BS 110 to be used in the uplink. In the case of an operation mode in which the BS 110 can use beam correspondence (for example, the second operation mode), the BS 110 may determine the downlink transmission beam used in the downlink as the uplink reception beam. On the other hand, in the case of an operation mode in which the BS 110 cannot use beam correspondence (for example, the first operation mode or the third operation mode), the BS 110 may determine the uplink reception beam based on reference signals received from the terminal 120. The reference signals are transmitted from the terminal 120 to the BS 110 in order to determine the uplink beam. The BS 110 may receive the reference signals through a plurality of reception beams, respectively. The BS 110 may select one beam among the plurality of reception beams based on the reception quality or reception power of the corresponding reference signal. The BS 110 may determine the selected beam as the uplink reception beam. The BS 110 may perform uplink communication with the terminal 120 through the uplink reception beam.

FIG. 6 is a flowchart illustrating a process of the beam determination by the terminal according to various embodiments of the present disclosure. The terminal may be the terminal 120 of FIG. 1. Referring to FIG. 6, in step 610, the terminal 120 may receive a message indicating an operation mode for a beam search procedure from the BS 110. The operation mode may indicate one of a first operation mode, a second operation mode, and a third operation mode that require the beam search procedure, or a reciprocity operation mode that determines the beam without any beam search procedure.
In step 620, the terminal 120 may determine an uplink transmission beam of the terminal 120 to be used in the uplink based on the determined operation mode. For example, in the case of the first operation mode or the reciprocity operation mode, the terminal 120 may determine a downlink reception beam as the uplink transmission beam. In another example, in the case of the second operation mode or the third operation mode, the terminal 120 may determine the uplink transmission beam through the beam search procedure. More specifically, in the case of the second operation mode or the third operation mode, the terminal 120 may transmit reference signals to the BS 110. At this time, the reference signals are beamformed using a plurality of transmission beams. The terminal 120 transmits the reference signals to the BS 110 in order to determine the uplink beam. The terminal 120 may receive feedback information from the BS 110 and determine the uplink transmission beam based on the feedback information. The feedback information may contain measurement results of a plurality of transmission beams or indicate the selection of the best transmission beam by the BS 110.

FIG. 7 illustrates an example of the determination of operation modes according to various embodiments of the present disclosure. Hereinafter, although FIG. 7 illustrates that the BS 110 first determines whether the terminal 120 can use beam correspondence and then determines whether the BS 110 can use beam correspondence, the present disclosure is not limited thereto. That is, unlike the order illustrated in FIG. 7, it may first be determined whether the BS 110 can use beam correspondence. Referring to FIG. 7, in step 710, the BS 110 may determine whether the terminal 120 can use beam correspondence in the beam selection of the terminal 120. Whether the terminal 120 can use beam correspondence may be determined based on UE capability information received from the terminal 120. Thereafter, in steps 720 and 725, the BS 110 may determine whether the BS 110 can use beam correspondence in the beam selection of the BS 110. Whether the BS 110 can use beam correspondence may be determined according to predetermined information (for example, antenna design, distance, separation range, and antenna array) and the antenna used for communication with the terminal 120. The BS 110 may determine four operation modes according to the determination on whether the BS 110 can use beam correspondence and whether the terminal 120 can use beam correspondence. When both the BS 110 and the terminal 120 can use beam correspondence, the BS 110 may determine the operation mode as the reciprocity operation mode. When only the terminal 120 can use beam correspondence, the BS 110 may determine the operation mode as the first operation mode. When only the BS 110 can use beam correspondence, the BS 110 may determine the operation mode as the second operation mode. When neither the BS 110 nor the terminal 120 can use beam correspondence, the BS 110 may determine the operation mode as the third operation mode. The four operation modes are shown in Table 1 below.

TABLE 1
                                            BS can use beam           BS cannot use beam
                                            correspondence            correspondence
Terminal can use beam correspondence        Reciprocity operation     First operation
                                            mode                      mode
Terminal cannot use beam correspondence     Second operation          Third operation
                                            mode                      mode

The BS 110 may inform the terminal 120 of the determined operation mode among the four operation modes through a control message.
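The mapping in Table 1 is a simple two-input decision. The following is a minimal sketch of that selection logic; the mode names follow Table 1, but all identifiers are hypothetical, since the source defines only the four modes themselves and not any implementation.

```python
from enum import Enum

class OperationMode(Enum):
    RECIPROCITY = 0  # no uplink beam search needed
    FIRST = 1        # only the BS sweeps its uplink reception beams
    SECOND = 2       # only the terminal sweeps its uplink transmission beams
    THIRD = 3        # both sides sweep

def determine_operation_mode(bs_beam_correspondence: bool,
                             ue_beam_correspondence: bool) -> OperationMode:
    """Select the operation mode of Table 1 from the two capability flags."""
    if bs_beam_correspondence and ue_beam_correspondence:
        return OperationMode.RECIPROCITY
    if not bs_beam_correspondence and ue_beam_correspondence:
        return OperationMode.FIRST
    if bs_beam_correspondence and not ue_beam_correspondence:
        return OperationMode.SECOND
    return OperationMode.THIRD
```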
In some embodiments, the BS 110 may transmit the control message to the terminal 120 through higher layer signaling. In other embodiments, the BS 110 may transmit the control message through separate DCI. That is, the control message may be transmitted through a data channel or a control channel. For example, the control message may contain 2 bits of information in order to indicate the four operation modes. When the reciprocity operation mode is determined, the BS 110 and the terminal 120 are not required to perform a separate beam search procedure. Accordingly, in the uplink, the BS 110 and the terminal 120 may use the beams which have been used in the downlink. When one of the first operation mode to the third operation mode is determined, the BS 110 and the terminal 120 may be required to perform the beam search procedure. At this time, the BS 110 may allocate resources (for example, subframes) for the uplink beam search procedure to the terminal 120. In some embodiments, the BS 110 may allocate a semi-static or periodic subframe to the terminal 120 through higher layer signaling. In other embodiments, the BS 110 may dynamically or aperiodically allocate the subframe to the terminal 120 through DCI. The BS 110 may allocate resources for the uplink beam search procedure according to the determined operation mode. For example, when the first operation mode is determined as the operation mode, the BS 110 may allocate N subframes (or symbols) for reference signal transmission to the terminal 120. On the other hand, in the case of the third operation mode, reference signal transmission for searching a larger number of beams is required than in the case of the first operation mode, so the BS 110 may allocate M (M>N) subframes (or symbols) for reference signal transmission to the terminal 120. Although one of the two beams (for example, the uplink transmission beam or the uplink reception beam) is determined according to beam correspondence, the BS 110 and the terminal 120 may transmit and/or receive a control message or feedback information to determine the other beam. In the first operation mode or the third operation mode, the BS 110 may determine the reception beam to be used in the uplink based on reference signals received from the terminal 120 in the beam search procedure. In the second operation mode, the terminal 120 may determine the transmission beam to be used in the uplink based on feedback information received from the BS 110 on the reference signals transmitted in the beam search procedure. Although FIG. 7 illustrates that the control message indicates one of the four operation modes, the control message may instead indicate one of the three operation modes other than the reciprocity operation mode. That is, since the terminal 120 may perform the operation of the terminal 120 corresponding to the reciprocity operation mode regardless of the control message, the control message may be generated to indicate only the three operation modes.

Hereinafter, the first operation mode to the third operation mode will be described in detail with reference to FIGS. 8A to 8C. As described above, although FIGS. 8A to 8C illustrate an example in which beams used in the downlink are used in the uplink, the present disclosure is not limited thereto. In an embodiment in which beams that have been used in the uplink are used in the downlink, the entities that perform the operations for the beam search between the BS and the terminal may be exchanged. FIG. 8A illustrates a beam search procedure of the BS and the terminal in the first operation mode according to various embodiments of the present disclosure. The BS may be the BS 110 of FIG. 1. The terminal may be the terminal 120 of FIG. 1.
Referring to FIG. 8A, in step 810, the terminal 120 may receive a control message indicating the first operation mode. The terminal 120 may perform a beam search procedure according to an instruction of the control message. For example, the terminal 120 may be pre-configured to perform a particular beam search procedure when receiving the control message. In another example, the control message specifically indicates an individual operation of the beam search procedure to be performed by the terminal 120, and the terminal 120 may perform the beam search procedure according to the control message. As the terminal 120 receives the control message, the terminal 120 may determine an uplink transmission beam and fix the determined beam in step 821. More specifically, the terminal 120 may determine a downlink reception beam used in the downlink as the uplink transmission beam of the terminal 120 and fix the uplink transmission beam. The fixed beam may be used for reference signal transmission. In step 823, the terminal 120 may transmit reference signals to the BS 110 through the fixed uplink transmission beam. The BS 110 may receive the reference signals from the terminal 120 through each of a plurality of reception beams. More specifically, the terminal 120 may continuously transmit the reference signals, and the BS 110 may receive the reference signals through the plurality of reception beams while continuously changing the direction of the reception beams. That is, the BS 110 may receive the reference signals through a beam sweep operation. When the BS 110 has received the reference signals for all of the plurality of reception beams, the BS 110 may end the beam sweep operation. At this time, the transmission beam of the terminal 120 is the fixed uplink transmission beam, and the BS 110 may determine the number of received reference signals in consideration of only the number of reception beams. For example, when the number of reception beams of the BS 110 is N_B, the number of used transmission beams is 1, so that the BS 110 may receive N_B (N_B×1) reference signals in order to select the best beam. In step 825, the BS 110 may determine the uplink reception beam based on the received reference signals. For example, the BS 110 may determine the uplink reception beam according to at least one of an RSSI indicating the received signal strength of the received reference signals, an RSRQ indicating the reception quality, and an RSRP indicating the reception power. The RSRP may be referred to as beam reference signal received power (BRSRP).
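In the first operation mode, the BS effectively reduces step 825 to an argmax over per-reception-beam measurements. The following is a minimal sketch under that reading, using hypothetical names; the source does not prescribe any implementation.

```python
def select_uplink_rx_beam(brsrp_per_rx_beam: dict[int, float]) -> int:
    """Pick the uplink reception beam with the highest BRSRP.

    brsrp_per_rx_beam maps a reception beam index to the BRSRP (dBm)
    measured on the reference signal received through that beam; with the
    terminal's transmission beam fixed there are N_B entries, one per sweep step.
    """
    return max(brsrp_per_rx_beam, key=brsrp_per_rx_beam.get)

# Example: N_B = 4 reception beams swept against one fixed transmission beam.
measurements = {0: -92.5, 1: -88.1, 2: -95.0, 3: -90.3}
best_rx_beam = select_uplink_rx_beam(measurements)  # -> 1
```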
FIG. 8B illustrates a beam search procedure of the BS and the terminal in the second operation mode according to various embodiments of the present disclosure. The BS may be the BS 110 of FIG. 1. The terminal may be the terminal 120 of FIG. 1. Referring to FIG. 8B, in step 830, the BS 110 may transmit a control message indicating the second operation mode. The terminal 120 may perform a beam search procedure according to an instruction of the control message. In step 841, the BS 110 may determine an uplink reception beam of the BS 110 and fix the determined beam. More specifically, the BS 110 may determine a downlink transmission beam as the uplink reception beam and fix the uplink reception beam. Although FIG. 8B illustrates that the control message is transmitted and then the uplink reception beam is determined, the operation order may be changed. That is, the BS 110 may transmit the control message after determining the uplink reception beam. In step 843, the terminal 120 may transmit a plurality of reference signals to the BS 110. More specifically, the terminal 120 may transmit the reference signals while changing the transmission beam as well as continuously transmitting the reference signals. That is, the terminal 120 may transmit the plurality of reference signals through the beam sweep operation. The BS 110 may receive the plurality of reference signals from the terminal 120 through the fixed uplink reception beam. Since the reception beam of the BS 110 is the fixed uplink reception beam, the BS 110 may determine the number of received reference signals in consideration of only the number of transmission beams. For example, when the number of transmission beams of the terminal 120 is N_UE, the number of used reception beams is 1, so that the BS 110 may receive N_UE (N_UE×1) reference signals in order to select the best beam. In step 845, the BS 110 may transmit feedback information generated based on the received reference signals to the terminal 120. The BS 110 may generate the feedback information based on at least one piece of power information, quality information, and channel information of the received reference signals. In some embodiments, the BS 110 may select one of the received reference signals based on at least one piece of the power information, the quality information, and the channel information. The BS 110 may generate the feedback information containing an index corresponding to the selected reference signal. For example, the index may correspond to 9 bits. In other embodiments, the BS 110 may generate the feedback information containing power information (BRSRP) corresponding to each reference signal. For example, the BRSRP may correspond to 7 bits. The BS 110 may transmit the generated feedback information before the next data transmission (for example, PUSCH transmission) scheduled for the terminal 120. In step 847, the terminal 120 may determine the uplink transmission beam of the terminal 120 based on the feedback information. In some embodiments, the terminal 120 may determine the beam corresponding to one of the at least one index indicated by the feedback information as the uplink transmission beam. In other embodiments, the terminal 120 may determine the beam corresponding to the BRSRP having the largest value among the BRSRPs contained in the feedback information as the uplink transmission beam.
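Step 847 can be read as resolving the feedback into a single beam index, whether the BS reports a selected index or per-beam BRSRP values. The following is an illustrative sketch under that reading; the field names are assumptions, not taken from the source.

```python
def select_uplink_tx_beam(feedback: dict) -> int:
    """Resolve the terminal's uplink transmission beam from BS feedback.

    The feedback may either name the selected beam directly (an index,
    e.g. 9 bits) or report a BRSRP value per swept transmission beam
    (e.g. 7 bits each); both forms are handled here.
    """
    if "selected_index" in feedback:          # the BS already chose the best beam
        return feedback["selected_index"]
    brsrp_per_tx_beam = feedback["brsrp"]     # {tx_beam_index: BRSRP in dBm}
    return max(brsrp_per_tx_beam, key=brsrp_per_tx_beam.get)

# Example: N_UE = 3 swept transmission beams, BS reports per-beam BRSRP.
print(select_uplink_tx_beam({"brsrp": {0: -97.0, 1: -89.4, 2: -93.2}}))  # -> 1
```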
FIG. 8C illustrates a beam search procedure of the BS and the terminal in the third operation mode according to various embodiments of the present disclosure. The BS may be the BS 110 of FIG. 1. The terminal may be the terminal 120 of FIG. 1. Referring to FIG. 8C, in step 850, the terminal 120 may receive a control message indicating the third operation mode. The terminal 120 may perform a beam search procedure according to an instruction of the control message. As the terminal 120 receives the control message indicating the third operation mode, the terminal 120 may transmit a plurality of reference signals to the BS 110 in step 863. The terminal 120 may transmit the plurality of reference signals through a plurality of transmission beams. The BS 110 may receive the plurality of reference signals through a plurality of reception beams. More specifically, the terminal 120 may transmit the plurality of reference signals while continuously changing each of the plurality of transmission beams. The BS 110 may receive the plurality of reference signals while continuously changing each of the plurality of reception beams. At this time, unlike the first operation mode and the second operation mode, the third operation mode has no fixed beam, so that a sweep over all the beams of the BS 110 and all the beams of the terminal 120 may be required. For example, when the total number of beams supported by the BS 110 is N_B and the total number of beams supported by the terminal 120 is N_UE, the BS 110 may receive at least N_B×N_UE reference signals in order to select the best beam. In step 865, the BS 110 may determine the uplink reception beam based on the received reference signals. For example, the BS 110 may determine one of the reference signals according to the magnitude of the BRSRP of the N_B×N_UE reference signals and determine the reception beam corresponding to the determined reference signal as the uplink reception beam. In step 867, the BS 110 may transmit feedback information generated based on the received reference signals to the terminal 120. The feedback information may contain an index for each of the reference signals or values for the received power. The index may be the index of the beam corresponding to each reference signal. The value for the power may be the RSRP (that is, the BRSRP) corresponding to each reference signal. In step 869, the terminal 120 may determine the uplink transmission beam of the terminal 120 based on the feedback information. The terminal 120 may determine the uplink transmission beam in a similar way to that of step 847 in the second operation mode.

As described above, the BS 110 and the terminal 120 according to various embodiments may simplify the beam search procedure through the beam correspondence determination. For example, when the BS 110 does not determine the possibility of the use of beam correspondence, the BS 110 is required to receive N_UE×N_B reference signals. That is, by receiving reference signals for all available beam combinations, the BS 110 and the terminal 120 may determine each of the transmission beam and the reception beam for data transmission. However, when the BS 110 uses beam correspondence, the beam search procedure for discovering the best beam of the BS 110 may be omitted. In the case of the second operation mode, the BS 110 may be required to receive only N_UE reference signals in order to discover the best beam of the terminal 120. In other words, the BS 110 and the terminal 120 may save the time spent determining the beams for data transmission through the operation modes using beam correspondence.
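The savings can be made concrete by counting the reference signals the BS must receive per mode. The following is a small illustrative calculation; the mode names follow Table 1, and the counting rule is the one stated above.

```python
def reference_signal_count(mode: str, n_b: int, n_ue: int) -> int:
    """Number of reference signals the BS must receive per operation mode."""
    counts = {
        "reciprocity": 0,       # both uplink beams are reused from the downlink
        "first": n_b,           # BS sweeps N_B reception beams, terminal beam fixed
        "second": n_ue,         # terminal sweeps N_UE transmission beams, BS beam fixed
        "third": n_b * n_ue,    # full sweep over every beam combination
    }
    return counts[mode]

# Example with N_B = 32 BS beams and N_UE = 8 terminal beams.
for mode in ("reciprocity", "first", "second", "third"):
    print(mode, reference_signal_count(mode, 32, 8))
# reciprocity 0, first 32, second 8, third 256
```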
FIG. 9 illustrates an example of another beam search procedure of the BS and the terminal according to various embodiments of the present disclosure. FIG. 9 illustrates the case in which the BS sweeps reception beams and the terminal sweeps transmission beams, that is, the case in which both the BS and the terminal search for the uplink beam. The BS may be the BS 110 of FIG. 1. The terminal may be the terminal 120 of FIG. 1. Referring to FIG. 9, in step 910, the BS 110 may transmit a control message to the terminal 120. The control message may contain indication information indicating a range of available beams as well as information on the possibility of the use of beam correspondence (for example, the operation mode). Although the case in which neither the BS 110 nor the terminal 120 can use beam correspondence (for example, the third operation mode) is described below, the present disclosure is not limited thereto. In step 920, the BS 110 may determine uplink reception beam candidates of the BS 110. When it is determined that the BS 110 cannot use beam correspondence, the BS 110 may determine the uplink reception beam candidates. The BS 110 may determine beams adjacent to the downlink transmission beam as the uplink reception beam candidates. Although the downlink transmission beam itself is not determined as the uplink reception beam, the BS 110 may determine one of the beams adjacent to the downlink transmission beam as the uplink reception beam. More specifically, when downlink data transmission is performed using a fifth beam among a total of 32 beams, the BS 110 may determine a beam near the fifth beam as the uplink reception beam even though beam correspondence is not used. This is because, when no movement of the terminal 120 beyond a certain range is detected, little change in the beam direction may be predicted. In another example, when the distance between the transmission antenna and the reception antenna of the BS 110 is within a predetermined range, little change in the beam direction may be predicted, so that the BS 110 may select one of the beams adjacent to the downlink transmission beam and determine the selected beam as the uplink reception beam. The adjacent beams may be referred to as reception beam candidates. In some embodiments, the range of the reception beam candidates may be set as a predetermined value. For example, the BS 110 may determine a fixed set of two adjacent beams on each side as the adjacent beams. When the BS 110 transmits downlink data to the terminal 120 through the no. 7 beam among twelve beams ranging from the no. 1 to the no. 12 beam, the BS 110 may determine the nos. 5, 6, 8, and 9 beams as candidate beams for uplink data reception, as sketched below. In other embodiments, the range of the reception beam candidates may be adaptively set. For example, the range of the reception beam candidates may be determined according to the distance between the transmission antenna and the reception antenna of the BS 110. The range of the reception beam candidates may be determined according to the distance between the transmission antenna used for downlink data transmission by the BS 110 and the reception antenna to be used for uplink data reception by the BS 110. When the distance between the transmission antenna and a first reception antenna of the BS 110 is 3 and the distance between the transmission antenna and a second reception antenna of the BS 110 is 7, the BS 110 may determine different numbers of reception beam candidates according to whether the first reception antenna or the second reception antenna is used with the terminal 120. In step 930, the terminal 120 may determine uplink transmission beam candidates of the terminal 120. When it is determined that the terminal 120 cannot use beam correspondence, the terminal 120 may perform the beam search procedure for determining the uplink beam. When a predetermined condition is satisfied, the terminal 120 may determine one of the beams adjacent to the downlink reception beam as the uplink transmission beam even though the downlink reception beam itself is not determined as the uplink transmission beam. The adjacent beams may be referred to as the uplink transmission beam candidates. The terminal 120 may determine the transmission beam candidates of the terminal 120 in a similar way to the procedure in which the BS 110 determines the reception beam candidates in step 920. For example, the terminal 120 may determine the transmission beam candidates according to the distance between the transmission antenna and the reception antenna included in the terminal 120 and the beam used in the downlink.
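The no. 7 beam example above amounts to taking a window of indices around the downlink beam and excluding the beam itself. The following is a minimal sketch of that candidate selection, assuming a linear beam numbering (an assumption; the source does not specify the beam geometry).

```python
def candidate_beams(dl_beam: int, total_beams: int, half_width: int = 2) -> list[int]:
    """Beams within half_width of the downlink beam, excluding it, clipped to range."""
    lo = max(1, dl_beam - half_width)
    hi = min(total_beams, dl_beam + half_width)
    return [b for b in range(lo, hi + 1) if b != dl_beam]

# The example from the text: downlink beam no. 7 of 12 -> candidates 5, 6, 8, 9.
print(candidate_beams(dl_beam=7, total_beams=12))  # [5, 6, 8, 9]
```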
In step 943, the terminal 120 may transmit a plurality of reference signals to the BS 110. More specifically, the terminal 120 may continuously transmit the plurality of reference signals while changing the uplink transmission beam candidates of the terminal 120 one by one. The BS 110 may continuously receive the plurality of reference signals while changing the uplink reception beam candidates one by one. When the number of uplink reception beam candidates is N_B,candidate and the number of uplink transmission beam candidates is N_UE,candidate, the BS 110 may receive at least N_B,candidate×N_UE,candidate reference signals in order to select the best beam. That is, the BS 110 may determine the best uplink reception beam by receiving a smaller number of reference signals compared to the third operation mode of FIG. 8C. In step 950, the BS 110 may determine the uplink reception beam among the uplink reception beam candidates based on the received reference signals. For example, the BS 110 may determine the beam corresponding to the largest BRSRP among the uplink reception beam candidates as the uplink reception beam. In step 960, the BS 110 may transmit feedback information generated based on the received reference signals to the terminal 120. The feedback information may contain an index for each of the reference signals or values for the received power. In step 970, the terminal 120 may determine the uplink transmission beam of the terminal 120 among the uplink transmission beam candidates based on the received feedback information. The terminal 120 may determine the uplink transmission beam in a similar way to that of step 847 in FIG. 8B.

FIG. 9 illustrates the case in which both the BS and the terminal determine the uplink beam, that is, the case of the third operation mode. However, according to other embodiments of the present disclosure, the embodiment of reducing the beam candidates can be applied to the first operation mode or the second operation mode. In other words, when the BS 110 can use beam correspondence but the terminal 120 cannot use beam correspondence (for example, the second operation mode), the terminal 120 may set the range of the uplink transmission beams of the terminal 120 to be discovered according to the control message as a particular range. When the BS 110 cannot use beam correspondence (for example, the first operation mode), the BS 110 can set the range of the uplink reception beams of the BS 110 as a predetermined range. As described above, even when the BS 110 and the terminal 120 cannot use beam correspondence, one of the beams adjacent to the beam used in the downlink may be determined as the beam to be used in the uplink. The BS 110 and the terminal 120 may reduce the time spent discovering the beam by setting candidates of beam combinations rather than performing reference signal transmission and the feedback procedure for all beam combinations. FIGS. 5 to 9 have described that it is possible to decrease the transmission of unnecessary reference signals and reduce the time spent selecting the best beam for reference signal and data transmission by considering the antenna design, array, and configuration of the BS 110 and the terminal 120. Hereinafter, FIGS. 10 to 12 describe channel estimation using channel reciprocity. Although FIGS. 10 to 12 describe, as an example, the case in which a result of uplink channel estimation is used for downlink data transmission when channel reciprocity is used, the case in which a result of downlink channel estimation is used for uplink data transmission can also be applied.
FIG. 10 is a flowchart illustrating a process of channel estimation by the BS according to various embodiments of the present disclosure. The BS may be the BS 110 of FIG. 1. Referring to FIG. 10, in step 1010, the BS 110 may perform uplink channel estimation. More specifically, the BS 110 may transmit a request for an uplink SRS to the terminal 120 for the uplink channel estimation. In response to the request, the terminal 120 may transmit the uplink SRS to the BS 110. The BS 110 may estimate the channel from the terminal 120 to the BS 110 in response to the reception of the uplink SRS. The BS 110 may determine a parameter for the channel based on the result of the channel estimation. The parameter for the channel may include the attenuation occurring in signal transmission, a phase shift, or a time delay. The parameter for the channel may include a precoding vector (or matrix). The BS 110 may store the result of the uplink channel estimation. In step 1020, the BS 110 may determine whether channel reciprocity with the terminal 120 is satisfied. When the BS 110 can use channel reciprocity and the use of channel reciprocity for the terminal 120 is also possible, the BS 110 may determine that channel reciprocity is satisfied. In some embodiments, when the antenna of the BS 110 used for uplink channel estimation is the transmission/reception common antenna in a communication system (for example, TDD) in which there is little frequency change according to uplink/downlink switching, the BS 110 may determine that channel reciprocity for the BS 110 can be used. The BS 110 may receive channel capability information from the terminal 120 and determine whether channel reciprocity for the terminal 120 can be used based on the channel capability information. In step 1030, the BS 110 may perform calibration in order to determine the result of the downlink channel estimation. When the reception antenna used for uplink channel estimation by the BS 110 is different from the transmission antenna to be used for downlink transmission, the BS 110 may determine a compensation value for the antenna separation of the BS 110. The BS 110 may determine the compensation value according to a difference in the physical configuration between the transmission antenna and the reception antenna. The difference in the physical configuration may be a path difference between a transmission side line and a reception side line, a difference or an error in the delay time occurring in each of the lines, or an impulse response of each of the lines. The BS 110 may perform the calibration by transmitting a calibration signal and receiving the calibration signal. The BS 110 may compensate for the gap between the uplink channel and the downlink channel through the channel response to the calibration signal. The calibration signal may be referred to as a training signal or a training sequence. The BS 110 may receive a compensation value for the terminal 120 from the terminal 120. The compensation value for the terminal 120 may be a compensation value for the antenna separation of the terminal 120. The compensation value for the antenna separation of the terminal 120 may be determined in a similar way to that of the compensation value for the antenna separation of the BS 110. The terminal 120 may determine the compensation value for the terminal 120 by transmitting and receiving a calibration signal. In step 1040, the BS 110 may determine the result of the downlink channel estimation based on the calibration result.
That is, the BS 110 may determine the result of the downlink channel estimation by reflecting the compensation value of step 1030 in the result of the uplink channel estimation. For example, the compensation value may be at least one of the compensation value for the antenna separation of the BS 110 and the compensation value for the antenna separation of the terminal 120. When the BS 110 determines that channel reciprocity is satisfied, the BS 110 may determine the result of the downlink channel estimation based on the result of the uplink channel estimation in step 1050. For example, the BS 110 may transpose the uplink channel response as shown in the following equation to acquire the downlink channel response.

H_DL = (H_UL)^T    Equation (1)

H_DL denotes the downlink channel response, and H_UL denotes the uplink channel response. The uplink channel response and the downlink channel response may be acquired in a matrix form. The BS 110 may determine a precoding matrix to be used for downlink data transmission based on the acquired downlink channel response. In another example, the BS 110 may determine the precoding matrix to be used for downlink data transmission based on a precoding matrix used for uplink channel estimation. Since channel reciprocity is satisfied, the BS 110 may determine a precoding scheme without separate downlink reference signal transmission.
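When Equation (1) holds, the downlink channel estimate follows from the uplink estimate by a plain matrix transpose. The following is a minimal numerical sketch; the matrix values and dimensions are purely illustrative.

```python
import numpy as np

# Hypothetical 2x4 uplink channel estimate (4 terminal tx antennas -> 2 BS rx antennas),
# complex-valued as channel responses generally are.
H_ul = np.array([[0.8 + 0.1j, 0.2 - 0.3j, 0.5 + 0.0j, 0.1 + 0.4j],
                 [0.3 - 0.2j, 0.9 + 0.2j, 0.4 - 0.1j, 0.2 + 0.1j]])

# Equation (1): H_DL = (H_UL)^T, a plain transpose, not a conjugate transpose.
H_dl = H_ul.T

assert H_dl.shape == (4, 2)  # downlink: 2 BS tx antennas -> 4 terminal rx antennas
```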
FIG. 11 is a flowchart illustrating a process of channel estimation by the terminal according to various embodiments of the present disclosure. The terminal may be the terminal 120 of FIG. 1. Referring to FIG. 11, in step 1110, the terminal 120 may determine whether reciprocity for the terminal 120 can be used. In some embodiments, the terminal 120 may determine whether channel reciprocity for the terminal 120 can be used according to whether the antenna used for uplink channel estimation is the transmission/reception common antenna or a transmission-dedicated antenna. For example, when the antenna used for uplink channel estimation is the transmission antenna and the reception antenna is separately provided, the terminal 120 may determine that channel reciprocity for the terminal 120 cannot be used. In step 1120, the terminal 120 may determine a compensation value for the channel reciprocity error. The channel reciprocity error may refer to a difference between the channel response in the uplink channel estimation and the channel response in the downlink channel estimation. The channel reciprocity error may occur according to the separated design of the reception antenna. The compensation value may be a compensation value for the antenna separation of the terminal 120. The terminal 120 may determine the compensation value for the reciprocity error within the terminal 120 by transmitting a calibration signal through the transmission antenna and receiving the calibration signal through the reception antenna. For example, the terminal 120 may calculate an impulse response within a transmission side for transmitting the calibration signal and an impulse response within a reception side for receiving the calibration signal. The transmission side performs an analog control on the transmission antenna of the terminal 120, and the reception side performs an analog control on the reception antenna of the terminal 120. The terminal 120 may acquire the channel response, which is the result of the downlink channel estimation, by reflecting the impulse responses in the channel response of the uplink channel estimation. In step 1130, the terminal 120 may transmit the compensation value determined in step 1120 to the BS 110.

In some embodiments, the terminal 120 may transmit the compensation value through UE capability information. The compensation value may be the compensation value for the antenna separation of the terminal 120. Whether the transmission antenna and the reception antenna of the terminal 120 are separated from each other, and the line impulse response within each of the transmission side and the reception side of the terminal 120, may be set in the process of manufacturing the terminal 120. Accordingly, the terminal 120 may store the compensation value for the antenna separation of the terminal 120 in advance. The terminal 120 may transmit the compensation value for the antenna separation to the BS 110 on initial access to the BS 110. In other embodiments, the terminal 120 may transmit the compensation value through a medium access control (MAC) control element (CE). The MAC CE is used for MAC layer control signaling between the BS 110 and the terminal 120. The terminal 120 may set a logical channel identifier (LCID) for the compensation value to configure the MAC CE. The terminal 120 may transmit the MAC CE to the BS 110 through an uplink shared channel (UL-SCH). In other embodiments, the terminal 120 may transmit the compensation value through a separate procedure. For example, the BS 110 may transmit DCI indicating transmission of the compensation value to the terminal 120 through a physical downlink control channel (PDCCH). The terminal 120 may transmit the compensation value through a physical uplink control channel (PUCCH) or a physical uplink shared channel (PUSCH) after decoding the DCI. That is, the terminal 120 may transmit the compensation value in a similar way to the general data transmission operation.

FIG. 12 illustrates an example of compensation according to various embodiments of the present disclosure. Referring to FIG. 12, the terminal 120 may transmit an uplink signal to the BS 110. The uplink signal passes through a transmission path 1226 of the terminal 120, is transmitted through an uplink wireless channel, and then passes through a reception path 1216 of the BS 110. That is, the uplink signal may pass through three channels while being transmitted from the terminal 120 to the BS 110. Similarly, when the BS 110 transmits a downlink signal to the terminal 120, the downlink signal may pass through three channels, namely a transmission path 1213 of the BS 110, a downlink wireless channel, and a reception path 1223 of the terminal 120. For the overall uplink channel, the channel through which the uplink signal passes may be expressed as Equation (2) below.

H_UL = H_B,Rx × H_UL,air × H_UE,Tx    Equation (2)

H_UL denotes the channel response of the overall uplink channel through which the uplink signal passes. H_UE,Tx denotes the impulse response in the transmission path 1226 of the terminal 120. H_UL,air denotes the wireless channel over the air from the terminal 120 to the BS 110. H_B,Rx denotes the impulse response in the reception path 1216 of the BS 110. For the overall downlink channel, the channel through which the downlink signal passes may be expressed as the following equation.

H_DL = H_UE,Rx × H_DL,air × H_B,Tx    Equation (3)

H_DL denotes the channel response of the overall downlink channel through which the downlink signal passes. H_B,Tx denotes the impulse response in the transmission path 1213 of the BS 110. H_DL,air denotes the channel response of the wireless channel over the air from the BS 110 to the terminal 120. H_UE,Rx denotes the impulse response in the reception path 1223 of the terminal 120.
In a case in which there is little frequency change between the uplink channel and the downlink channel, as in a TDD communication system, the wireless channel over the air may satisfy channel reciprocity. That is, H_DL,air and H_UL,air may establish a transposed relation therebetween. However, as the transmission path 1213 and the reception path 1216 are separately implemented in the BS 110, the transposed relation may not be established between the impulse response in the transmission path 1213 and the impulse response in the reception path 1216. Accordingly, the BS 110 may determine that channel reciprocity for the BS 110 cannot be used. Similarly, as the transmission path 1226 and the reception path 1223 of the terminal 120 are separated from each other, the terminal 120 may determine that channel reciprocity for the terminal 120 cannot be used. Since channel reciprocity for the BS 110 cannot be used, the BS 110 may compensate for the reciprocity error of the corresponding channel. The BS 110 may transmit a calibration signal 1219 in order to compensate for the reciprocity error of the corresponding channel. The BS 110 may acquire the impulse response H_B,Tx of the transmission path 1213 and the impulse response H_B,Rx of the reception path 1216 without the channel response of the wireless channel over the air by transmitting and receiving the calibration signal 1219. More specifically, the BS 110 may transmit the calibration signal 1219 through the transmission path 1213 and receive the calibration signal 1219 through the reception path 1216. The BS 110 may transmit and receive the calibration signal 1219 without passing through the wireless channel with the terminal 120. The BS 110 may acquire a first compensation response H_B,Rx×H_B,Tx of the transmission path 1213 and the reception path 1216 through the transmission of the calibration signal 1219. Similarly, the terminal 120 may acquire a second compensation response H_UE,Rx×H_UE,Tx of the transmission path 1226 and the reception path 1223 through the transmission of the calibration signal 1229. When determining the downlink channel estimation result (for example, H_DL) based on the uplink channel estimation result (for example, H_UL), the BS 110 may transpose the uplink channel estimate and then reflect the compensation value (for example, H_B,Rx×H_B,Tx) of the BS 110 and the compensation value (for example, H_UE,Rx×H_UE,Tx) of the terminal 120 so as to determine the downlink channel estimation result.
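Combining Equations (2) and (3) with the calibrated path responses, one plausible way to form the compensated downlink estimate is sketched below with scalar (single-antenna-path) values for readability. The exact compensation formula is not spelled out in the source, so this is an illustrative reading, not the definitive method.

```python
import numpy as np

# Scalar (single-path) illustration of Equations (2) and (3).
# Impulse responses obtained via the calibration signals 1219 and 1229:
H_B_tx, H_B_rx = 0.9 + 0.1j, 0.8 - 0.2j      # BS transmission / reception paths
H_UE_tx, H_UE_rx = 0.7 + 0.3j, 0.6 + 0.1j    # terminal transmission / reception paths

H_air = 0.4 - 0.5j                            # over-the-air channel (reciprocal in TDD)

# Equation (2): what the BS actually measures from the uplink SRS.
H_ul = H_B_rx * H_air * H_UE_tx

# Remove the uplink path responses, then apply the downlink path responses
# to predict Equation (3) without any downlink reference signal.
H_air_est = H_ul / (H_B_rx * H_UE_tx)
H_dl_est = H_UE_rx * H_air_est * H_B_tx

assert np.isclose(H_dl_est, H_UE_rx * H_air * H_B_tx)  # matches Equation (3)
```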
Although FIG. 12 illustrates that the channel reciprocity error is compensated for both the BS 110 and the terminal 120, the present disclosure is not limited thereto. That is, when the transmission antenna and the reception antenna of the BS 110 are separated from each other but the terminal 120 transmits and receives data to and from the BS 110 through the transmission/reception common antenna, the BS 110 may determine the downlink channel estimation result by determining only the compensation value for the reciprocity error of the BS 110. According to whether the antennas used for uplink channel estimation with the terminal 120 are transmission/reception common antennas, the BS 110 may omit the channel estimation procedure. For example, when it is determined that the BS 110 and the terminal 120 can use channel reciprocity because all the antennas are transmission/reception common antennas, the BS 110 may omit the procedure for transmitting reference signals for downlink channel estimation. The BS 110 may determine the downlink channel estimation result based on the uplink channel estimation result without the reference signal transmission and the feedback procedure. The BS 110 may efficiently use the resources for channel estimation and the resources for data transmission in a resource block by omitting the procedure for transmitting the reference signals. In another example, when the antenna of the terminal 120 is not the transmission/reception common antenna, the BS 110 may determine the downlink channel estimation result based on the compensation value for the antenna separation acquired from the terminal 120 and the uplink channel estimation value. Similarly, the BS 110 may promote the efficient use of resources by omitting the reference signal transmission and the feedback procedure. That is, the BS 110 and the terminal 120 may decrease unnecessary reference signal transmission and reduce the time for data transmission by determining whether channel reciprocity can be used according to the design and array of the used antennas.

While the present disclosure has been shown and described with reference to certain embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the scope of the present disclosure. Therefore, the scope of the present disclosure should not be defined as being limited to the embodiments, but should be defined by the appended claims and equivalents thereof. Although the present disclosure has been described with an exemplary embodiment, various changes and modifications may be suggested to one skilled in the art. It is intended that the present disclosure encompass such changes and modifications as fall within the scope of the appended claims.
95,566
11863283
DESCRIPTION OF EMBODIMENTS Embodiments of the present disclosure will be described below in detail with reference to the drawings. In each of the following embodiments, the same parts are denoted by the same reference numerals, and a repetitive description thereof will be omitted. Moreover, in the present specification and the drawings, a plurality of components having substantially the same functional configuration will be distinguished by attaching different numbers after the same reference numerals. However, when it is not particularly necessary to distinguish between the plurality of components having substantially the same functional configuration, only the same reference numeral is given. The present disclosure will be described in the following order.
1. Introduction
1.1. System configuration
1.2. Related technologies
1.3. Outline of proposed technology
2. Configuration examples
2.1. Configuration example of base station
2.2. Configuration example of terminal device
3. Embodiments
4. Application examples
4.1. Application examples related to base station
4.2. Application examples related to terminal devices
5. Modifications
6. Summary
1. Introduction
1.1. System Configuration
FIG. 1 is a diagram illustrating an example of the entire configuration of a communication system 1 according to an embodiment of the present disclosure. As illustrated in FIG. 1, the communication system 1 includes base stations 100 (100A and 100B), terminal devices 200 (200A and 200B), a core network 20, and a packet data network (PDN) (or simply referred to as a data network (DN)) 30. The base station 100 is a base station device installed in a base station, which is a communication device that manages cells 11 (11A and 11B) and provides radio services to one or more terminal devices located inside the cell 11. For example, the base station 100A provides a radio service to the terminal device 200A, while the base station 100B provides a radio service to the terminal device 200B. The cell 11 can be managed according to a certain radio communication system such as LTE or New Radio (NR). The base station 100 may be any of eNodeB, ng-eNodeB, gNodeB, or en-gNodeB. In addition to or instead of this, the base station 100 may be referred to as EUTRAN when the base station 100 is either eNodeB or en-gNodeB. In addition to or instead of this, the base station 100 may be referred to as NGRAN when the base station 100 is either gNodeB or ng-eNodeB. The base station 100 is connected to the core network 20. The core network 20 is connected to the PDN 30. When working as an EPC in LTE, for example, the core network 20 can include a Mobility Management Entity (MME), a Serving Gateway (S-GW), a PDN Gateway (P-GW), a Policy and Charging Rule Function (PCRF), and a Home Subscriber Server (HSS). The MME is a control node that handles control plane signals and manages the moving state of the terminal device. The S-GW is a control node that handles user plane signals and is implemented as a gateway device that switches user information transfer routing. The P-GW is a control node that handles user plane signals and is implemented as a gateway device that serves as a connection point between the core network 20 and the PDN 30. The PCRF is a control node that controls policies such as Quality of Service (QoS) for bearers and billing. The HSS is a control node that handles subscriber data and controls services.
Meanwhile, when working as a 5GC in NR, the core network 20 can include an Access and Mobility Management Function (AMF), a Session Management Function (SMF), a User Plane Function (UPF), a Policy Control Function (PCF), and a Unified Data Management (UDM). The AMF is a control node that handles control plane signals and manages the moving state of the terminal device. The SMF is a control node that handles control plane signals and manages data transfer routing. The UPF is a control node that handles user plane signals and manages user information transfer routing. The PCF is a control node that controls policies. The UDM is a control node that handles subscriber data. The terminal device 200 is a communication device that performs radio communication with the base station 100 under the control of the base station 100. The terminal device 200 may be a terminal referred to as User Equipment (UE). For example, the terminal device 200 transmits an uplink signal to the base station 100 and receives a downlink signal from the base station 100.
1.2. Related Technologies
(1) Bandwidth Part (BWP)
FIG. 2 is a diagram illustrating a BWP. In the example of FIG. 2, Component Carrier (CC) #1 contains a plurality of BWPs (#1 and #2), and CC #2 contains a plurality of BWPs (#1 and #2). In the present specification, the number following the mark # represents an index (or an identifier). BWPs contained in different CCs represent different BWPs even with an identical index. A BWP is obtained by dividing the CC, which is one operating bandwidth, into a plurality of frequency bandwidths. In each of the BWPs, a different subcarrier spacing (e.g. numerology) can be set. Note that one CC may include a downlink component carrier and an uplink component carrier, or may be either a downlink component carrier or an uplink component carrier. Moreover, one CC may correspond to one cell. That is, a plurality of BWPs may be included in one cell. The BWP has been standardized in the NR feature of 3GPP Rel. 15. The BWP can also be defined as a subset of the total cell bandwidth of one cell. In the Orthogonal Frequency Division Multiplexing (OFDM) modulation method standardized for LTE in Rel. 8, the subcarrier spacing is fixed at 15 kHz. By contrast, in the NR feature of Rel. 15, the subcarrier spacing can be set to 15 kHz, 30 kHz, 60 kHz, 120 kHz, or 240 kHz. The wider the subcarrier spacing, the shorter the OFDM symbol length. For example, the subcarrier spacing is 15 kHz in LTE, which has enabled transmission of two slots per 1 ms (millisecond) (i.e. 1 subframe), in other words, transmission of 14 OFDM symbols. By contrast, in NR, a subcarrier spacing of 60 kHz enables transmission of four slots per 1 ms, while a subcarrier spacing of 120 kHz enables transmission of eight slots per 1 ms, and a subcarrier spacing of 240 kHz enables transmission of 16 slots per 1 ms. In this manner, extending the subcarrier spacing shortens the OFDM symbol length. This makes it possible to provide a frame configuration suitable for low-latency communication. The NR makes it possible to set BWPs with different subcarrier spacing settings for the terminal at the same time. Accordingly, the NR can provide a plurality of BWPs for different use cases at the same time.
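The slot counts quoted above follow from the NR numerology rule that the subcarrier spacing scales as 15 kHz × 2^μ while each NR slot carries 14 OFDM symbols. The following is a small sketch of that arithmetic, for illustration only.

```python
def nr_slots_per_ms(scs_khz: int) -> int:
    """Slots per millisecond for an NR numerology: 2**mu, where scs = 15 * 2**mu kHz."""
    mu = {15: 0, 30: 1, 60: 2, 120: 3, 240: 4}[scs_khz]
    return 2 ** mu

for scs in (15, 30, 60, 120, 240):
    # Slot-averaged symbol duration (cyclic prefix included), in microseconds.
    symbol_us = 1000 / (nr_slots_per_ms(scs) * 14)
    print(f"{scs} kHz -> {nr_slots_per_ms(scs)} slot(s)/ms, ~{symbol_us:.1f} us/symbol")
# 60 kHz -> 4 slots/ms, 120 kHz -> 8, 240 kHz -> 16, matching the text.
```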
(2) Number of Active BWPs
The BWP that can be used for transmission and reception is also referred to as an active BWP. In 3GPP, the active BWP is also defined as a UE operating bandwidth within a cell operating bandwidth. The number of BWPs that the base station 100 can use for transmission and reception at the same time is also referred to as the number of active BWPs. The number of active BWPs of the base station 100 may be plural. In contrast, the number of active BWPs of the terminal device 200 is one in the case of a UE of 3GPP Rel. 15. However, in the present specification, the number of active BWPs of the terminal device 200 may be plural. In the technique according to the present disclosure, the number of active BWPs of the terminal device 200 is assumed to be one.
(3) Relationship Between Cell (or CC), Carrier, and BWP
In the present disclosure, a plurality of cells may be allowed to overlap each other in the frequency direction in one carrier. For example, a plurality of Synchronization Signal/PBCH blocks (SSBs) may be transmitted at a plurality of frequency spans in one carrier. However, from the viewpoint of the UE (that is, the terminal device 200), each cell (serving cell) is associated with at most one SSB (that is, a Cell-defining SSB). The UE (terminal device 200) uses the BWP associated with the Cell-defining SSB as an Initial BWP. Furthermore, the UE (terminal device 200) may use a Dedicated BWP constituted with one or more frequency spans in the same carrier as the Initial BWP, in addition to the Initial BWP. From the UE (terminal device 200) perspective, the Initial BWP and the additional Dedicated BWP are associated with one cell. The present embodiment may include a case where the terminal device 200 uses a plurality of BWPs at the same time point.
(4) Codebook Based Beamforming
By performing beamforming when communicating with the terminal device 200, the base station 100 can improve the communication quality, for example. Beamforming methods include a method of generating a beam that tracks the terminal device 200 and a method of selecting a beam that tracks the terminal device 200 from among candidate beams. The former method might not be adopted in cellular radio communication systems (for example, 5G) because of the computational cost of generating a beam each time. By contrast, the latter method is adopted in Full Dimension Multiple Input Multiple Output (FD-MIMO) in Release 13 of the Third Generation Partnership Project (3GPP). The latter method is also referred to as codebook based beamforming. In codebook based beamforming, the base station 100 prepares (that is, generates) beams in all directions in advance and selects the beam suitable for the target terminal device 200 from among the prepared beams so as to communicate with the terminal device 200 using the selected beam. For example, when capable of communicating over 360 degrees in the horizontal direction, the base station 100 prepares 360 types of beams in increments of 1 degree. When allowing the beams to half overlap with each other, the base station 100 prepares 720 types of beams. In the vertical direction, the base station 100 prepares beams for 180 degrees ranging from −90 degrees to +90 degrees, for example. The terminal device 200 only monitors the beams, and thus has little need to be aware of the existence of the codebook on the base station 100 side. In the following, the plurality of beams prepared in advance by the base station 100 is also referred to as a beam group. The beam group can be defined for each frequency band, for example. The beam group can also be defined for each of the Rx/Tx beams, or for each of the downlinks/uplinks. The plurality of beams prepared or managed by the base station 100 may be associated with one cell (i.e.
the plurality of beams may constitute one cell). Alternatively, the plurality of beams prepared or managed by the base station 100 may be associated with a plurality of cells (i.e. the plurality of beams may constitute a plurality of cells).

(5) Beam Sweeping

In NR, in order to select the optimum beam to be used for communication, beam sweeping, which transmits or receives a measurement signal (known signal) by using each of a plurality of beams belonging to a beam group, has been examined. The measurement signal is also referred to as a reference signal in some cases. When the measurement signal is a downlink signal, the measurement signal may include a Synchronization Signal block (SSB)/Physical Broadcast Channel (PBCH) block, or a Channel State Information-Reference Signal (CSI-RS). Based on the measurement result of the measurement signal (i.e. the measurement signal of each of the beams) transmitted from the base station with beam sweeping, the terminal can select the optimum transmission-oriented beam (hereinafter also referred to as a transmitting beam). An example of this will be described with reference to FIG. 3.

FIG. 3 is a diagram illustrating beam sweeping. In the example illustrated in FIG. 3, the base station 100 transmits a measurement signal with beam sweeping (that is, switching the transmitting beam) using the beam group 40. In addition, transmission with beam sweeping is also referred to as beam sweeping transmission below. Thereafter, the terminal device 200 measures the measurement signal obtained by beam sweeping transmission and determines which of the transmitting beams is received best (which is the best beam(s) for the terminal device 200). In this manner, the optimum transmitting beam of the base station 100 for the terminal device 200 is selected. By exchanging the roles of the base station 100 and the terminal device 200 and executing a similar procedure, the base station 100 can select the optimum transmitting beam of the terminal device 200. On the other hand, the optimum reception-oriented beam (hereinafter also referred to as a receiving beam, or simply a beam) can be selected based on the measurement result obtained by receiving the measurement signal with beam sweeping. For example, the terminal device 200 transmits a measurement signal by an uplink. Thereafter, the base station 100 receives the measurement signal with beam sweeping (that is, switching the receiving beams), and determines which of the receiving beams receives the signal best. In this manner, the optimum receiving beam of the base station 100 is selected. By exchanging the roles of the base station 100 and the terminal device 200 and executing a similar procedure, the terminal device 200 can select the optimum receiving beam of the terminal device 200. In addition, reception with beam sweeping is also referred to as beam sweeping reception below. The reception and measurement side of a measurement signal transmitted by beam sweeping transmission reports the measurement result to the transmitting side of the measurement signal. The measurement result may include information indicating which of the transmitting beams is optimal (e.g. a beam identifier, a time, a preamble, or the like). The optimum transmitting beam is a transmitting beam having the highest reception power, for example. The measurement result may include information indicating the one transmitting beam having the highest reception power, or may include information indicating the top K transmitting beams in order from the one having the highest reception power.
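The selection and reporting just described can be pictured with a minimal sketch; the RSRP values and the function name below are illustrative assumptions, not part of the specification.

    # Minimal sketch of the terminal-side selection in beam sweeping: given a
    # reception power (RSRP) measured for each transmitting beam, report the
    # top-K beams in descending order of power. Illustrative only.
    def top_k_beams(rsrp_dbm_by_beam: dict[int, float], k: int = 1) -> list[tuple[int, float]]:
        # Sort beam indices by measured RSRP, strongest first.
        ranked = sorted(rsrp_dbm_by_beam.items(), key=lambda kv: kv[1], reverse=True)
        return ranked[:k]

    measurements = {0: -92.5, 1: -85.0, 2: -79.3, 3: -88.1}  # hypothetical values
    print(top_k_beams(measurements, k=2))  # [(2, -79.3), (1, -85.0)]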
The measurement result includes, for example, identification information of the transmitting beam (for example, the index of the beam) and information indicating the magnitude of the reception power of the transmitting beam (for example, Reference Signal Received Power (RSRP)) in association with each other. Each beam used in beam sweeping is transmitted by giving directivity to a reference signal, which is a known signal. Therefore, the terminal device 200 can discriminate the beams by using the resources carrying the reference signals. The base station 100 can provide one beam using the resource of one reference signal. That is, with preparation of ten resources, the base station 100 can perform beam sweeping corresponding to ten different directions. The ten resources can be collectively referred to as a resource set. One resource set formed with ten resources can provide beam sweeping corresponding to ten directions.

(6) CSI Acquisition Procedure

A Channel State Information (CSI) acquisition procedure is executed after the optimum beam selection performed by the beam selection procedure including the beam sweeping described above. The CSI acquisition procedure acquires the channel quality in communication using the selected beam. For example, the CSI acquisition procedure includes acquisition of a Channel Quality Indicator (CQI). The channel quality is used to determine communication parameters such as the modulation method. Adopting a modulation method that can transmit only a few bits, for example Quadrature Phase Shift Keying (QPSK), even when the channel quality is good would cause a low throughput. On the other hand, adopting a modulation method that can transmit a large number of bits, such as 256 Quadrature Amplitude Modulation (QAM), even when the channel quality is poor would lead to a failure in data reception (i.e. decoding) on the receiving side, resulting in a low throughput as well. In this manner, proper acquisition of the channel quality is important in order to improve the throughput (see the sketch following the description of FIG. 4 below).

FIG. 4 is a sequence diagram illustrating an example of a flow of a typical beam selection procedure and CSI acquisition procedure executed by a base station and a terminal device. As illustrated in FIG. 4, the base station uses beam sweeping to transmit a measurement signal (e.g. SSB) for beam selection (step S11). Next, the terminal device measures the measurement signal for beam selection and reports a beam measurement result (beam report) to the base station (step S12). The measurement result includes, for example, information (e.g. the index associated with the best beam) indicating the selection result of the optimum transmitting beam of the base station. The base station then transmits a measurement signal (e.g. CSI-RS) for channel quality acquisition using the selected optimum beam (step S13). Next, the terminal device reports the acquired channel quality to the base station based on the measurement result of the measurement signal (step S14). Thereafter, the base station transmits user information to the terminal device by using the communication parameters based on the reported channel quality (step S15). As described above, a beam report, which includes the measurement result of the measurement signal for beam selection, is transmitted from the terminal to the base station. Downlink channel quality is measured based on the measurement signal transmitted over the downlink.
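Returning to the relationship between channel quality and the modulation method discussed above, the following minimal sketch maps a measured SINR to a modulation order. The thresholds are hypothetical round numbers chosen only for illustration; they are not the 3GPP CQI/MCS tables.

    # Minimal sketch of why channel quality drives the modulation choice:
    # map a measured SINR to a modulation order using simplified, hypothetical
    # thresholds (not the actual 3GPP CQI/MCS tables).
    def select_modulation(sinr_db: float) -> str:
        if sinr_db < 5.0:
            return "QPSK"      # robust, few bits per symbol
        elif sinr_db < 15.0:
            return "16QAM"
        elif sinr_db < 22.0:
            return "64QAM"
        else:
            return "256QAM"    # many bits per symbol, needs good quality

    for sinr in (2.0, 10.0, 25.0):
        print(sinr, "dB ->", select_modulation(sinr))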
In addition to the downlink measurement signal, the downlink channel quality can also be measured based on the measurement signal transmitted over the uplink. This is because the uplink channels and the downlink channels have reversibility and basically have the same channel quality. Such reversibility is also referred to as channel reciprocity. When measuring the downlink channel quality based on the downlink measurement signal, the measurement result of the measurement signal for channel quality acquisition is reported as illustrated in step S14 of FIG. 4. Reporting this measurement result can involve a significant amount of overhead. A channel can be represented by an N×M matrix when the number of transmitting antennas is M and the number of receiving antennas is N. Each element of the matrix is a complex number corresponding to an IQ value. For example, in a case where each of I and Q is represented by 10 bits, the number of transmitting antennas is 100, and the number of receiving antennas is eight, the report of the channel quality measurement result would use 8×100×2×10=16000 bits, which would be a significant amount of overhead. In comparison, when measuring the downlink channel quality based on the uplink measurement signal, it is not necessary to report the measurement result because the measurement subject is the base station. Therefore, by measuring the downlink channel quality based on the uplink measurement signal, it is possible to reduce the overhead related to reporting the measurement result and improve the throughput. The flow of the process of measuring the channel quality of the downlink based on the uplink measurement signal will be described with reference to FIG. 5.

FIG. 5 is a sequence diagram illustrating another example of a flow of a typical beam selection procedure and CSI acquisition procedure executed by a base station and a terminal device. As illustrated in FIG. 5, the terminal device transmits the measurement signal for beam selection by using beam sweeping transmission, and the base station receives the measurement signal by using beam sweeping (step S21). At that time, the base station selects the optimum transmitting beam of the terminal device and the optimum receiving beam of the base station based on the measurement result. Next, the base station reports the beam measurement result (beam report) to the terminal device (step S22). This measurement result includes information indicating the selection result of the optimum transmitting beam of the terminal device. Next, the terminal device transmits a measurement signal for channel quality acquisition by using the selected transmitting beam (step S23). The base station acquires the uplink channel quality based on the measurement result, and acquires the downlink channel quality based on the uplink channel quality. Thereafter, the base station transmits user information to the terminal device using the communication parameters based on the acquired downlink channel quality (step S24). As described above, a beam report, which includes the measurement result of the measurement signal for beam selection, is transmitted from the base station to the terminal.

(7) Analogue-Digital Hybrid Antenna Architecture

In order to control the directivity of the antenna, there is a conceivable architecture in which all processes are performed by a digital circuit. Such an architecture is also referred to as a fully digital architecture.
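Before continuing with the antenna architecture, the reporting overhead estimated above can be reproduced with a short calculation. This is a minimal sketch using the example values from the preceding discussion (eight receiving antennas, 100 transmitting antennas, 10 bits per I/Q component).

    # Minimal sketch reproducing the channel report overhead estimate above.
    # An N x M complex channel matrix has two quantized components (I and Q)
    # per element.
    def csi_report_bits(n_rx: int, n_tx: int, bits_per_component: int) -> int:
        return n_rx * n_tx * 2 * bits_per_component

    print(csi_report_bits(n_rx=8, n_tx=100, bits_per_component=10))  # 16000 bits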
In a fully digital architecture, as many antenna weights as there are antennas (that is, antenna elements) are applied in a digital domain (that is, by a digital circuit) to control the directivity of the antenna. An antenna weight is a weight for controlling the amplitude and phase. Unfortunately, however, the fully digital architecture has a drawback of enlargement of the digital circuit. Examples of an architecture to overcome such a drawback of the fully digital architecture include an analogue-digital hybrid antenna architecture.

FIG. 6A is a diagram illustrating an example of an analogue-digital hybrid antenna architecture. The architecture illustrated in FIG. 6A includes a digital circuit 50, analogue circuits 60 (60A and 60B), and antenna panels 70 (70A and 70B). The digital circuit can apply a plurality of antenna weights 51 (51A and 51B). The analogue circuits 60 and the antenna panels 70 are provided in the same number as the number of antenna weights 51 applicable in the digital circuit 50. Each antenna panel 70 includes a plurality of antennas 72 (72A to 72F) and as many phase shifters 71 (71A to 71F) as the number of antennas 72. The phase shifter 71 is a device that applies, in an analogue domain, an antenna weight that can control the phase alone. The characteristics of the antenna weight in the digital domain and the antenna weight in the analogue domain are illustrated in Table 1 below.

TABLE 1

Item                                          Analogue domain    Digital domain
Controllable target                           Phase              Amplitude and phase
Analogue or digital                           Analogue           Digital
Arrangement position (time domain             Time domain        Frequency domain when the OFDM
or frequency domain)                                             modulation method is used; applied
                                                                 before IFFT on the transmitting side
                                                                 and after FFT on the receiving side
Different beams in different                  Impossible         Possible
frequencies at the same time
Different beams in the same                   Impossible         Possible
frequency at the same time

Antenna weights in the digital domain are applied in a frequency domain when the OFDM modulation method is used. For example, the antenna weight in the digital domain is applied before Inverse Fast Fourier Transform (IFFT) at the time of transmission and applied after Fast Fourier Transform (FFT) at the time of reception. Antenna weights in the digital domain are applied in the frequency domain. Therefore, by applying the antenna weights in the digital domain, it is possible to transmit beams in different directions using different frequency resources even when the time resources are the same. On the other hand, the antenna weights in the analogue domain are applied in a time domain. Therefore, when only the antenna weight in the analogue domain is applied, the beam can be directed only in the same direction over all frequency resources within the same time resource. That is, the plurality of antenna panels 70 can transmit beams in different directions using different frequency resources even with the same time resource, whereas one antenna panel 70 can direct its beam in only one direction within the same time resource. Therefore, in the analogue-digital hybrid antenna architecture, the number of directions of the beams that can be transmitted and received in the same time resource corresponds to the number of antenna panels 70. Furthermore, in the analogue-digital hybrid antenna architecture, the number of beam groups that can be handled by beam sweeping transmission or beam sweeping reception in the same time resource corresponds to the number of antenna panels 70.
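The constraint just described can be pictured with a minimal sketch: within one time resource, each panel is committed to one analogue beam direction, while the digital weights let different frequency resources be served by different panels. The data structure and names below are illustrative assumptions, not part of the specification.

    # Minimal sketch of the scheduling constraint in an analogue-digital
    # hybrid architecture: within one time resource, each antenna panel is
    # committed to a single analogue beam direction, while the digital
    # weights let different frequency resources be served by different
    # panels. Panel/beam naming is illustrative.
    from dataclasses import dataclass

    @dataclass
    class Panel:
        panel_id: int
        analogue_direction_deg: float  # one direction per time resource

    def assign_frequency_resources(panels: list[Panel], wanted_directions: list[float]):
        # Each wanted direction needs a panel already steered (analogue) that
        # way; the digital weights then map that panel onto the frequency
        # resource.
        assignment = {}
        for freq_idx, direction in enumerate(wanted_directions):
            panel = next((p for p in panels if p.analogue_direction_deg == direction), None)
            if panel is None:
                raise ValueError(f"no panel steered toward {direction} deg in this time resource")
            assignment[freq_idx] = panel.panel_id
        return assignment

    panels = [Panel(0, 30.0), Panel(1, 120.0)]
    print(assign_frequency_resources(panels, [30.0, 120.0, 30.0]))  # {0: 0, 1: 1, 2: 0}

The sketch also shows why the number of beam directions available in one time resource equals the number of panels: a direction with no correspondingly steered panel cannot be served.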
Such an analogue-digital hybrid antenna architecture can be adopted in both the base station 100 and the terminal device 200.

(8) Antenna Panel

In FIG. 6A, three analogue domain phase shifters are connected to one digital domain weight. The one digital domain weight and the three analogue domain phase shifters can be arranged as a set as an antenna panel. FIG. 6A illustrates an example in which two antenna panels are provided, each of the antenna panels being formed with three antenna elements. As illustrated in Table 1, it would usually not be possible, with one panel, to form beams in different directions at the same time using different frequencies. However, it is possible, with two panels, to form beams in different directions even at the same time. This antenna panel configuration is used on both the base station side and the terminal side.

FIG. 6B is a diagram illustrating an example of arranging eight antenna panels in the terminal device 200. FIG. 6B illustrates an example of arranging a total of eight antenna panels, specifically four on the front surface and four on the back surface of the terminal device 200. The number of antenna elements mounted on one antenna panel is not limited to a specific number. Still, four antenna elements are mounted on one antenna panel, for example. Since the four antenna panels arranged on the front surface, or the four antenna panels arranged on the back surface, are arranged so as to face the same direction, these panels are referred to as coherent antenna panels. In contrast, an antenna panel on the front surface and an antenna panel on the back surface are referred to as non-coherent antenna panels.

(9) Reference Signal and User Information Resource

In order to implement beam sweeping and the CSI acquisition procedure, it is necessary to transmit and receive the reference signal between the base station device 100 and the terminal device 200. Furthermore, when user information is transmitted and received between the base station device 100 and the terminal device 200, it is also necessary to transmit and receive the reference signal. These reference signals are basically designated by frequency and time resources, and in some cases the resources are designated by using orthogonal sequences. In contrast, as for the user information, scheduling information included in the control signal designates the frequency and time resources of the user information. In the case of user information, orthogonal sequences are not assigned as resources; only frequency and time resources are designated.

(10) Selecting Antenna Panel and Beam on the Receiving Side

(10-1) Selecting Antenna Panel and Beam at Beam Management Stage

During beam management, the terminal device 200 determines, by trial and error (e.g. trying each combination of beam and antenna panel one by one) on the beams coming from the base station 100, which beam and which antenna panel are to be used for reception. Basically, different antenna panels can operate at the same time. Therefore, when four resource areas in a resource block are set as reference signal resources for the same downlink beam, the terminal device 200 can use four different receiving beams on each of the antenna panels to determine which is the desired receiving beam for the terminal device 200. Such an operation is performed for the number of downlink beams corresponding to the different directions on the base station 100 side.
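The trial-and-error search described in (10-1) can be pictured with a minimal sketch; the measurement function here is a stand-in for an actual RSRP measurement, and all names are illustrative assumptions.

    # Minimal sketch of the beam management search in (10-1): for each
    # downlink transmitting beam, try every (receiving panel, receiving beam)
    # combination and keep the best one. measure_rsrp is a stand-in for the
    # actual measurement.
    import random

    def best_reception_environment(num_dl_beams, num_panels, beams_per_panel, measure_rsrp):
        best = {}  # downlink beam index -> (panel, rx beam, rsrp)
        for dl in range(num_dl_beams):
            candidates = (
                (panel, rx, measure_rsrp(dl, panel, rx))
                for panel in range(num_panels)
                for rx in range(beams_per_panel)
            )
            best[dl] = max(candidates, key=lambda c: c[2])
        return best

    random.seed(0)
    print(best_reception_environment(3, 2, 4, lambda dl, panel, rx: random.random()))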
When the number of downlink beams is ten, the terminal device 200 monitors the receiving beams using 10×4=40 resources, thereby enabling determination of the desired beam from the base station 100 as well as the antenna panel and the desired beam on the terminal device 200 side. In the present specification, for convenience of explanation, the combination of the receiving antenna panel and the receiving beam used by the terminal for reception is also referred to as a reception environment.

(10-2) Selecting Antenna Panel and Beam at CSI Procedure Stage

The CSI procedure stage is the stage where the base station 100 uses precoding for transmission (finer antenna control) and then confirms the quality of the channel in more detail. At the CSI procedure stage, the reference signal (CSI-RS) for the CSI procedure is received by using the antenna panel of the terminal device 200 identified in the previous beam management stage and using the beam determined to be the most desirable within that antenna panel.

(10-3) Selection of Antenna Panel and Beam at User Information Reception Stage

At the user information reception stage, the terminal device 200 may only be required to receive user information using the antenna panel and the receiving beam determined at the time of beam management, similarly to the CSI procedure stage. However, when there are two such combinations of an antenna panel and a beam, the terminal device 200 cannot determine which antenna panel and which beam to select.

FIG. 7 is a diagram illustrating two beam sets. When the terminal device 200 has performed the beam management process twice and has determined the antenna panel and the beam of the terminal device 200 suitable for each of the beams transmitted from the two different antenna panels of the base station 100, there are two beam sets as illustrated in FIG. 7. Specifically, the two beam sets include a first beam set "Beam set (0): transmitting beam (i) in transmitting antenna panel (0) + receiving beam (j) in receiving antenna panel (0)", and a second beam set "Beam set (1): transmitting beam (m) in transmitting antenna panel (1) + receiving beam (n) in receiving antenna panel (1)". A beam set refers to a beam link constituted with a combination of antenna panels and beams on the transmitting side and the receiving side. Furthermore, since control information (e.g. scheduling information), which is a control signal that designates a resource of user information, is transmitted using a beam, it is important for the terminal device 200 to grasp which beam set is to be used to receive the control information. Examples of the control information include the Physical Downlink Control Channel (PDCCH) or Downlink Control Information (DCI) transmitted by the PDCCH.

(10-4) Method of Designating Antenna Panel and Beam Used by Terminal

In FIG. 7, the base station 100 may explicitly or implicitly indicate to the terminal device 200 that reception of the PDCCH (0) is enabled by the receiving beam (j) of the receiving antenna panel (0). A conceivable example of this would be a method of directly designating the receiving antenna panel and the receiving beam of the terminal device 200. On the other hand, for example, there is a conceivable case where the base station 100 has transmitted "Reference Signal A" using the "transmitting beam (i) in the transmitting antenna panel (0)", and the terminal device 200 has received the "Reference Signal A" by using the "receiving beam (j) in the receiving antenna panel (0)".
Furthermore, there is a conceivable case where the base station 100 has transmitted "Reference Signal B" using the "transmitting beam (m) in the transmitting antenna panel (1)", and the terminal device 200 has received the "Reference Signal B" by using the "receiving beam (n) in the receiving antenna panel (1)". Based on this, before transmission of the PDCCH (0), the base station 100 can instruct the terminal device 200 to use, at the time of receiving the PDCCH (0), the receiving antenna panel and the receiving beam used when receiving "Reference Signal A". In other words, it is possible to implicitly designate an instruction equivalent to the instruction to use the receiving beam (j) in the receiving antenna panel (0).

(10-5) Process with No Designation of Antenna Panel and Beam

In the above, the base station 100 explicitly instructed the terminal device 200 to use the same receiving antenna panel and receiving beam as when receiving "Reference Signal A". However, there are cases where there is no instruction from the base station 100 or the setting of the instruction by the base station 100 is not in time, which leads to the necessity of a process for such a case. For example, it is conceivable to use, as a default, the receiving antenna panel and the receiving beam used when the terminal device 200 synchronizes with the base station 100. However, when synchronization signals (reference signals) are provided from different antenna panels of the base station 100, it is difficult to determine which antenna panel and beam, used in reception of which synchronization signal, should be used as the default.

(10-6) Synchronization Signal

Here, a synchronization signal will be described. The synchronization signal is transmitted periodically as an SSB burst. The SSB burst includes a plurality of SSBs that have undergone beamforming. An SSB contains the sequences of the synchronization signals PSS and SSS and broadcast system information referred to as the PBCH. The PSS and SSS are supposed to be used in the same manner as in LTE. The base station 100 transmits each of the SSBs using beams in different directions. Accordingly, the terminal device 200 receives the SSB facing the direction of the terminal device 200 and performs synchronization. Furthermore, the base station 100 transmits the SSBs contained in the SSB burst by using a different transmitting antenna panel for each of the SSB bursts. The terminal device 200 can synchronize with the SSBs transmitted from the plurality of transmitting antenna panels, and at the same time can grasp one or more optimum receiving antenna panels and receiving beams required when receiving the SSBs from the plurality of transmitting antenna panels. In this case, for example, as illustrated in FIG. 7, the terminal device 200 will grasp two sets of the receiving antenna panel and the receiving beam. In this manner, in a case where the receiving antenna panel and receiving beam settings required for receiving control signals and user information are not set in time and there are a plurality of optimum receiving antenna panel and receiving beam sets for receiving the synchronization signals, the terminal device 200 cannot determine which antenna panel and beam should be used because of the presence of the plurality of sets, even with a rule that the set used at reception of the SSB is to be used as a default.

(10-7) Reference Signal Resource Set and Beam Sweeping

FIGS. 8 and 9 are diagrams related to reference signal resource sets.
As illustrated in FIG. 8, a resource (RS Resource) for transmitting a reference signal is designated by frequency and time resources. Such reference signal resources can be treated as a reference signal resource set by forming a plurality of reference signals into one group (signal group). The base station 100 performs beam sweeping by transmitting beams in different directions using the individual reference signal resources of this resource set. It would also be possible to prepare resources for separate reference signals and perform beam sweeping without using the resource set. Still, in order to avoid complicating the settings, it is desirable to set the resource set to ensure the resources as a signal group and to use the resource set for beam sweeping.

(10-8) Relationship Between Reference Signal Resource Set and Antenna Panel

It is natural to consider that the resource of each of the reference signals in a resource set belongs to one antenna panel. Moreover, as illustrated in FIG. 9, when there are different resource sets, conceivable cases include a case where the sets use different antenna panels (the relationship between resource set (1) and resource set (3) in FIG. 9) and a case where the sets use the same antenna panel (the relationship between resource set (1) and resource set (2) in FIG. 9).

1.3. Outline of Proposed Technology

Conventionally, the base station 100 has performed beam sweeping operations without considering the arrangement of the transmitting antenna panels, and thus unnecessary reception processes have been performed on the terminal device 200 side, for example. Specifically, when a base station transmits a plurality of reference signal resource sets (that is, signal groups) to a terminal, the base station transmits the resource sets from a plurality of antenna panels facing the same direction in some cases. In such a case, although the resource sets are different on the terminal side, the transmitting antenna panels have the same orientation, and thus the receiving antenna panels on the terminal side will also, with a high possibility, be the same. That is, in the prior technology, the signal groups have been transmitted without considering the arrangement of the antenna panels on the base station side, leading to a concern of occurrence of unnecessary reception processes on the terminal side. Therefore, the base station 100 according to the embodiment transmits, to the terminal device 200, similarity information indicating the similarity of the beam characteristics of the transmitting antenna panels in the plurality of signal groups (resource sets) to be transmitted to the terminal device 200. Thereafter, the terminal device 200 selects and receives a signal group from among the plurality of signal groups based on the acquired similarity information. Specifically, the base station 100 according to the embodiment notifies the terminal device 200 of similarity information indicating that the beam characteristics of the transmitting antenna panels transmitting each of the plurality of resource sets are similar (for example, coherent antenna panels). In a case where the terminal device 200 has acquired such similarity information, the terminal device 200 performs the reception process on one resource set among the plurality of resource sets having similar beam characteristics, while omitting the reception processes on the other resource sets.
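The terminal-side behavior just described can be pictured with a minimal sketch; the similarity group identifier below is a hypothetical representation of the similarity information, not a field defined by the specification.

    # Minimal sketch of similarity-information-based reception: resource sets
    # sharing a similarity group id (hypothetical field) have similar beam
    # characteristics, so the terminal processes only one set per group and
    # omits the rest.
    def select_resource_sets(similarity_group_by_set: dict[int, int]) -> tuple[list[int], list[int]]:
        processed, omitted, seen_groups = [], [], set()
        for set_id, group in sorted(similarity_group_by_set.items()):
            if group in seen_groups:
                omitted.append(set_id)    # reception and report omitted
            else:
                seen_groups.add(group)
                processed.append(set_id)  # one representative per group
        return processed, omitted

    # Sets 1 and 2 come from coherent panels (same group); set 3 does not.
    print(select_resource_sets({1: 0, 2: 0, 3: 1}))  # ([1, 3], [2])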
The terminal device 200 thus performs a beam determination process and a process of reporting the determined beam to the base station 100 only for the one received resource set, and does not have to perform the determination process or the reporting process for the other resource sets. Therefore, the terminal device 200 according to the embodiment can reduce unnecessary signal processing.

2. Configuration Examples

Hereinafter, the configurations of the base station 100 (base station device 100) and the terminal device 200 according to the present embodiment will be described in detail.

2.1. Configuration Example of Base Station

FIG. 10 is a block diagram illustrating an example of a configuration of the base station device 100 according to the embodiment. As illustrated in FIG. 10, the base station device 100 includes an antenna unit 110, a communication unit 120, a storage unit 130, and a control unit 140. The antenna unit 110 radiates the signal output by the communication unit 120 to space as a radio wave. Furthermore, the antenna unit 110 converts a radio wave in space into a signal and outputs the signal to the communication unit 120. Specifically, the antenna unit 110 has a plurality of antenna elements and can form a beam. The communication unit 120 transmits and receives signals by radio communication. For example, the communication unit 120 receives an uplink signal from the terminal device 200 and transmits a downlink signal to the terminal device 200. Incidentally, the antenna unit 110 and the communication unit 120 are provided as a configuration including the plurality of antenna panels 70 having the analogue-digital hybrid antenna architecture described above. For example, the antenna unit 110 corresponds to the antennas 72. Furthermore, for example, the communication unit 120 corresponds to the digital circuit 50, the analogue circuits 60, and the phase shifters 71. The storage unit 130 temporarily or permanently stores various programs and various types of data for the operation of the base station device 100. The control unit 140 controls the operation of the entire base station device 100 to provide various functions of the base station device 100. As illustrated in FIG. 10, the control unit 140 includes an acquisition unit 141, a generation unit 142, and a transmission unit 143. The acquisition unit 141 acquires various types of information from the terminal device 200. For example, the acquisition unit 141 acquires, from the terminal device 200, capability information indicating that the terminal device 200 can selectively receive a signal group (resource set). That is, the acquisition unit 141 acquires capability information indicating that, when a plurality of signal groups having similar beam characteristics has been transmitted, the terminal device 200 can omit the reception process for at least one or more signal groups among the plurality of signal groups. The generation unit 142 generates similarity information indicating the similarity of the beam characteristics of the transmitting antenna panels in the plurality of signal groups to be transmitted to the terminal device 200. For example, the generation unit 142 generates similarity information indicating that the beam characteristics of the transmitting antenna panels are similar when the plurality of transmitting antenna panels transmitting the plurality of signal groups are coherent antenna panels. The transmission unit 143 transmits the similarity information generated by the generation unit 142 to the terminal device 200.
For example, based on the capability information acquired by the acquisition unit 141, the transmission unit 143 transmits the similarity information to the terminal device 200 when the terminal device 200 is capable of selectively receiving a signal group. This makes it possible to eliminate an unnecessary transmission process of transmitting the similarity information to a terminal that cannot selectively receive a plurality of signal groups having similar beam characteristics. Detailed operation of each of the configurations in the control unit 140 of the base station device 100 will be described below.

2.2. Configuration Example of Terminal Device

FIG. 11 is a block diagram illustrating an example of a configuration of the terminal device 200 according to the embodiment. As illustrated in FIG. 11, the terminal device 200 includes an antenna unit 210, a communication unit 220, a storage unit 230, and a control unit 240. The antenna unit 210 radiates the signal output by the communication unit 220 to space as a radio wave. Furthermore, the antenna unit 210 converts a radio wave in space into a signal and outputs the signal to the communication unit 220. Specifically, the antenna unit 210 has a plurality of antenna elements and can form a beam. The communication unit 220 transmits and receives signals by radio communication. For example, the communication unit 220 receives a downlink signal from the base station 100 and transmits an uplink signal to the base station 100. The antenna unit 210 and the communication unit 220 are provided as a configuration including the plurality of antenna panels 70 having the analogue-digital hybrid antenna architecture described above. For example, the antenna unit 210 corresponds to the antennas 72. Furthermore, for example, the communication unit 220 corresponds to the digital circuit 50, the analogue circuits 60, and the phase shifters 71. The storage unit 230 temporarily or permanently stores various programs and various types of data for the operation of the terminal device 200. The control unit 240 controls the operation of the entire terminal device 200 to provide various functions of the terminal device 200. As illustrated in FIG. 11, the control unit 240 includes a notification unit 241, an acquisition unit 242, a reception unit 243, and a transmission unit 244. The notification unit 241 notifies the base station 100 of the capability information indicating that the terminal device 200 can selectively receive the signal groups transmitted from the base station 100. Furthermore, the notification unit 241 notifies the base station 100 of characteristic information regarding the beam characteristics of the antenna panels that receive the signal groups. Although the details will be described below, by acquiring the characteristic information, the base station 100 can designate, when transmitting a signal group, which receiving antenna panel should be used for receiving the signal group from among a plurality of receiving antenna panels having similar beam characteristics. The acquisition unit 242 acquires various types of information from the base station 100. For example, the acquisition unit 242 acquires the similarity information indicating the similarity of the beam characteristics of the transmitting antenna panels in the plurality of signal groups transmitted from the base station 100. Furthermore, the acquisition unit 242 acquires, from the base station 100, receiving panel information that designates the antenna panel to receive a signal group.
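The information exchanged by these units can be pictured with a minimal sketch using dataclasses; all field names below are hypothetical assumptions, and the actual signaling format is not specified here.

    # Minimal sketch of the information exchanged in the configuration above,
    # with hypothetical field names.
    from dataclasses import dataclass

    @dataclass
    class CapabilityInfo:          # terminal -> base station
        can_select_signal_group: bool

    @dataclass
    class CharacteristicInfo:      # terminal -> base station
        num_antenna_panels: int
        coherent_panel_groups: list[list[int]]  # panels facing the same direction

    @dataclass
    class SimilarityInfo:          # base station -> terminal
        similar_resource_set_ids: list[int]     # sets with similar beam characteristics

    ue_capability = CapabilityInfo(can_select_signal_group=True)
    ue_panels = CharacteristicInfo(num_antenna_panels=8,
                                   coherent_panel_groups=[[0, 1, 2, 3], [4, 5, 6, 7]])
    bs_similarity = SimilarityInfo(similar_resource_set_ids=[1, 2])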
The acquisition unit 242 also acquires, from the base station 100, transmitting panel information that designates an antenna panel to be used to transmit a signal group. The reception unit 243 selects and receives a signal group to be received from among the plurality of signal groups based on the similarity information acquired by the acquisition unit 242. For example, the reception unit 243 omits the reception process on the signal groups other than the selected signal group. Furthermore, the reception unit 243 receives the signal group by the antenna panel designated by the receiving panel information acquired by the acquisition unit 242. The transmission unit 244 transmits a predetermined signal group to the base station 100. For example, the transmission unit 244 transmits a signal group by using the antenna panel designated by the transmitting panel information acquired by the acquisition unit 242. Furthermore, the transmission unit 244 transmits the signal group to the base station 100 by using the antenna panel that has received the signal group transmitted from the base station 100. Hereinafter, detailed operations of the individual configurations in the control unit 140 of the base station device 100 and the individual configurations in the control unit 240 of the terminal device 200 will be described with reference to FIGS. 12 to 17.

3. Embodiments

For example, there are cases where a plurality of transmitting antenna panels of the base station 100 is arranged to face the same direction or arranged to face different directions. When the plurality of transmitting antenna panels of the base station 100 is arranged facing different directions, the plurality of resource sets transmitted from the respective transmitting antenna panels has dissimilar beam characteristics, and the resource sets are thus likely to be received by different receiving antenna panels of the terminal device 200. In contrast, when the plurality of transmitting antenna panels of the base station 100 is arranged facing the same direction, the plurality of resource sets transmitted from the respective transmitting antenna panels has similar beam characteristics, and the resource sets are thus likely to be received by the same receiving antenna panel of the terminal device 200. Here, when the plurality of resource sets has been received by different receiving antenna panels, the terminal device 200 selects the optimum reference signal (for example, the signal having a high reception power) for each of the resource sets and reports them to the base station 100. In contrast, when the plurality of resource sets has been received by the same receiving antenna panel, the operation in which the terminal device 200 selects the optimum reference signal (for example, the signal having a high reception power) for each of the resource sets and reports them to the base station 100 would involve unnecessary selection processes and reporting processes. In view of this, the base station 100 according to the embodiment sets, onto the terminal device 200, the fact that the plurality of transmitting antenna panels is facing the same direction. That is, the transmission unit 143 of the base station 100 performs the terminal setting by transmitting similarity information indicating that the plurality of transmitting antenna panels is arranged to face the same direction. In other words, the base station 100 sets the terminal by transmitting similarity information indicating that the plurality of resource sets is transmitted from the plurality of transmitting antenna panels facing the same direction.
In still other words, the base station 100 sets the terminal by transmitting similarity information indicating that the beam characteristics of the plurality of resource sets are the same. The base station 100 performs beam sweeping for each of the plurality of resource sets by using the transmitting antenna panels. The terminal device 200 selects, for example, a reference signal having a high reception power from one resource set, and reports the identification information of the selected reference signal to the base station 100. Furthermore, the terminal device 200 does not perform the process of selecting or reporting the optimum reference signal for a resource set having beam characteristics similar to those of the resource set for which the report has been completed. For this reason, the base station 100 does not expect to receive, from the terminal device 200, a report related to a resource set having similar beam characteristics. That is, the terminal device 200 side does not perform the reference signal selection process or the reporting process, and the base station 100 side does not perform the process of receiving the report from the terminal device 200, and the like. In this manner, according to the communication method of the embodiment, it is possible to reduce unnecessary signal processing in the terminal device 200 and the base station 100. Therefore, for example, in the terminal device 200, the transmission/reception by an antenna panel having a high processing load can be replaced by an antenna panel that does not perform the above selection process or reporting process.

FIG. 12 is a sequence diagram illustrating an example of a flow of a beam selection procedure executed by the base station 100 and the terminal device 200. First, the notification unit 241 of the terminal device 200 notifies the base station 100 of the above-described capability information. In the example illustrated in FIG. 12, the notification unit 241 notifies the base station 100 of capability information indicating that the terminal device 200 has the capability to omit reporting regarding beam sweeping of at least one resource set among a plurality of resource sets (signal groups) (step S101). Subsequently, the base station 100 sets in the terminal that the beam characteristics of the plurality of resource sets transmitted from the base station 100 are the same (similar) (step S102). Subsequently, the base station 100 performs beam sweeping using one of the resource sets (step S103). Subsequently, the terminal device 200 determines the reference signal having the highest reception power from among the resource set to which beam sweeping has been applied (step S104). Subsequently, the terminal device 200 reports the determined reference signal identification information and the reception power to the base station 100 (step S105). Subsequently, the base station 100 performs beam sweeping using another resource set (step S106). The terminal device 200 omits the monitoring of the beam sweeping of this resource set (step S107). Note that another terminal device 200 needs to determine the reference signal having the highest reception power from among the resource sets to which beam sweeping has been applied. Subsequently, using the transmitting antenna panel for which the terminal device 200 has omitted the beam sweeping, the base station 100 transmits various types of data (the PDSCH, or the like) using a beam having the same beam characteristics as the beam corresponding to the reference signal reported in step S105 (step S108).
Thereafter, the terminal device 200 notifies the base station 100 whether the PDSCH has been successfully received, for example (step S109). Although FIG. 12 has described the case where the base station 100 performs beam sweeping on the downlink, this procedure can also be applied to the case where the terminal device 200 performs beam sweeping on the uplink.

FIG. 13 is a sequence diagram illustrating an example of a flow of a beam selection procedure executed by the base station 100 and the terminal device 200. As illustrated in FIG. 13, the terminal device 200 reports the information regarding the antenna panels of the terminal device 200 to the base station 100 (step S201). Next, the base station 100 sets the resource set (1) to the antenna panel (1) and sets the resource set (2) to the antenna panel (2), for example (step S202). Note that the resource set (1) and the resource set (2) are assumed to have similar beam characteristics. Subsequently, the base station 100 instructs the terminal device 200 to perform beam sweeping using the resource set (1) (step S203). Subsequently, the terminal device 200 uses the resource set (1) to perform beam sweeping (step S204). The base station 100 grasps a beam (X) of the reference signal having a high reception power within the resource set (1) to which beam sweeping has been applied (step S205). Subsequently, the base station 100 omits an instruction to perform beam sweeping that uses the resource set (2) (step S206). Subsequently, the base station 100 instructs the transmission of predetermined data (PDSCH) by using the beam (X) within the resource set (2) to which no beam sweeping has been applied (step S207). Subsequently, the terminal device 200 transmits the PDSCH, for example, using the beam (X) of the antenna panel (2) corresponding to the instructed resource set (2) (step S208). Subsequently, the base station 100 notifies the terminal device 200 whether the PDSCH has been successfully received (step S209).

Designating the Receiving Beam to be Used in the Terminal Device 200 by the Antenna Panel

FIG. 14A illustrates reference signals with the same beam characteristics. Conventionally, when the base station 100 designates the receiving beam to be used at reception by the terminal device 200, the base station 100 sets an instruction onto the terminal device 200 to use the same receiving beam as the beam used at the reception of a reference signal. That is, as illustrated in FIG. 14A, the base station 100 sets, onto the terminal device 200, that the reference signal A and the reference signal B have the same beam characteristics, and at the transmission of the reference signal B thereafter, the terminal device 200 can receive the reference signal B using the receiving beam used at the reception of the reference signal A. At this time, by designating one resource defined by the frequency resource and the time resource, it has been possible to instruct the terminal device 200 to use the receiving beam used at the reception of the reference signal of the designated resource. Meanwhile, with a plurality of receiving antenna panels mounted on the terminal device 200 side, it would be desirable to indicate, from the base station 100, both the receiving antenna panel and the receiving beam to be used. In this case, similarly to the conventional case, it would be sufficient to instruct to use the same receiving antenna panel and receiving beam as the ones used at the reception of the reference signal A.
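The implicit designation just described can be pictured with a minimal sketch: the terminal keeps a record of which receiving antenna panel and receiving beam it used for each reference signal, so that an instruction of the form "receive like Reference Signal A" resolves to a concrete panel and beam. All names below are illustrative assumptions.

    # Minimal sketch of implicit designation: the terminal remembers which
    # (receiving antenna panel, receiving beam) it used for each reference
    # signal, so an instruction "receive like Reference Signal A" resolves
    # to a concrete panel and beam.
    reception_record: dict[str, tuple[int, int]] = {}  # ref signal -> (panel, beam)

    def record_reception(ref_signal: str, panel: int, beam: int) -> None:
        reception_record[ref_signal] = (panel, beam)

    def resolve_implicit_designation(ref_signal: str) -> tuple[int, int]:
        # e.g. "receive the PDCCH like Reference Signal A"
        return reception_record[ref_signal]

    record_reception("Reference Signal A", panel=0, beam=1)
    print(resolve_implicit_designation("Reference Signal A"))  # (0, 1)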
Accordingly, similarly to the conventional case, after issuing an instruction to the terminal device 200 that the reference signal A and the reference signal B have the same beam characteristics, it has been possible, with the reference signal B, to designate the receiving antenna panel and the receiving beam to be used by the terminal device 200. There might be a case, however, where it is desired to separately instruct, from the base station 100 to the terminal device 200, the receiving antenna panel and the receiving beam to be used at the reception. For example, there might be a case where it is desired to designate the receiving antenna panel in a semi-static manner and to dynamically designate the receiving beam by the PDCCH (control signal). This case occurs because, for the terminal device 200, it would take time to switch the antenna panels, but it would not take much time to switch the beams within the same antenna panel. On the contrary, there might be a case where it is desired to designate the receiving beam in a semi-static manner and to dynamically designate the receiving antenna panel by the PDCCH (control signal).

FIG. 14B is a diagram illustrating a case where the receiving antenna panels on the terminal device 200 side are coherent. That is, the two receiving antenna panels of the terminal device 200 are arranged on the same plane, indicating that the beam characteristics of the receiving beams are the same. In this case, for example, there is a conceivable case where it is desired to perform reception using a beam (i) of the antenna panel on the left side illustrated in FIG. 14B while the antenna panel on the left side is required to perform reception using a beam (j) different from the beam (i). This is because, during that time and for scheduling reasons, the beam (j) has to be used for receiving a signal from another base station 100 in some cases. At that time, the terminal device 200 can perform reception using the same beam (i) on the right antenna panel having the same beam characteristics and reception using the beam (j) on the left antenna panel. In such a case, it is conceivable that, although the receiving beam designation is set semi-statically, only the antenna panel is to be dynamically switched. As described above, there are cases where it is desired to designate the receiving beam and the antenna panel separately from each other. In view of this, the base station 100 designates the antenna panel by using the receiving antenna panel that has received the reference signal by beam sweeping. That is, the terminal device 200 acquires, from the base station 100, the receiving panel information designating the receiving antenna panel used at the reception of the resource set, and then receives a resource set or a reference signal in the resource set by using the receiving antenna panel designated by the receiving panel information. Incidentally, the antenna panel may be designated in a semi-static manner by RRC signaling or dynamically designated by the PDCCH.

FIG. 15 is a sequence diagram illustrating an example of a flow of a beam selection procedure executed by the base station 100 and the terminal device 200. As illustrated in FIG. 15, the base station 100 sets, on the terminal, the resource set (1) to which beam sweeping will be applied (step S301). Subsequently, the base station 100 performs beam sweeping using the resource set (1) set on the terminal (step S302).
Subsequently, the terminal device 200 determines the receiving antenna panel (1) as the antenna panel for receiving the resource set (1) based on, for example, the reception power (step S303). Subsequently, the base station 100 instructs, by RRC signaling for example, the use of the receiving antenna panel (1), which is the antenna panel that has received the resource set (1) (step S304). Subsequently, the base station 100 instructs, for example, the use of the receiving beam used for the PDCCH as the receiving beam to be used for reception of the PDSCH (step S305). That is, the antenna panel is designated in step S304 and the beam is designated in step S305, making it possible to designate the antenna panel and the beam separately from each other. When the PDSCH has been transmitted from the base station 100, the terminal device 200 notifies whether the PDSCH has been successfully received (step S306).

Uplink Antenna Panel Designation Method

Beam sweeping in a normal downlink (hereinafter referred to as a DL) uses a plurality of DL resource sets from the base station 100 to achieve beam sweeping in different directions. The terminal device 200 monitors the reference signals transmitted by the plurality of resources and determines which beam corresponding to which reference signal is optimum. On the other hand, the uplink (hereinafter referred to as a UP) also uses a beam sweeping procedure. Beam sweeping is performed in different directions when viewed from the terminal device 200 by using a plurality of UP resource sets from the terminal device 200. The UP resource set used by the terminal device 200 is a resource to be set by the base station 100 onto the terminal device 200. In order to ensure the diversity of the communication paths, the base station 100 might desire to perform a plurality of communications between the terminal device 200 and the base station 100 on communication paths that are as diverse as possible. For example, as illustrated in FIG. 7, two different communication paths are ensured, and thus, even when one path is blocked, communication interruption can be suppressed by using the other path. This is effective in the same manner on the DL side and the UP side. In particular, since in many cases a car or a person that becomes an obstacle of the communication path is present near the terminal device 200, it is important to ensure communication paths in different directions from the viewpoint of the terminal device 200. In that case, it is most important to grasp the beams whose directions are significantly different when viewed from the terminal device 200 side. It is considered that the same beam path can be applied not only to UP communication but also to DL communication. For that purpose, regarding the beam sweeping of the UP performed by the terminal device 200, it would be important to allow the base station 100 to monitor the beams of the beam sweeping with different transmitting antenna panels, thereby ensuring UP communication paths having high spatial diversity between the base station 100 and the terminal device 200. In such cases, however, how to distinguish the transmitting antenna panel would be important. Therefore, the base station 100 designates the transmitting antenna panel of the terminal device 200 when allocating the UP resource set to the terminal device 200.
Specifically, the terminal device 200 first notifies the base station 100 of information such as the number of transmitting antenna panels and the arrangement thereof (coherent or non-coherent), that is, the characteristic information regarding the beam characteristics of the antenna panels. The base station 100 then designates, based on the notified characteristic information, a transmitting antenna panel to be used for UP beam sweeping by the terminal device 200, and requests beam sweeping. Subsequently, the terminal device 200 performs beam sweeping using the designated transmitting antenna panel and resource set. In this manner, by designating, to the terminal device 200, the resource set and the transmitting antenna panel to be used for transmission, it becomes clear to the terminal device 200 which transmitting antenna panel should be used for each of the resource sets.

FIG. 16 is a sequence diagram illustrating an example of a flow of a beam selection procedure executed by the base station 100 and the terminal device 200. As illustrated in FIG. 16, the terminal device 200 notifies information regarding the antenna panels of the terminal device 200 (the number of antenna panels and the characteristic information regarding whether the antenna panels are coherent) (step S401). Subsequently, based on the information related to the antenna panels notified from the terminal device 200, the base station 100 designates the transmitting antenna panel to be used for UP beam sweeping, and sets the resource set (step S402). The base station 100 then designates the identification information of the resource set that has been set, and requests beam sweeping (step S403). The terminal device 200 performs beam sweeping using the designated resource set and transmitting antenna panel (step S404).

When the Base Station 100 Cannot Directly or Explicitly Grasp the Transmitting Antenna Panel of the Terminal Device 200

The above has described the case where the base station 100 acquires the information related to the antenna panels from the terminal device 200 and grasps the characteristics of the antenna panels of the terminal device 200. Here, there is a conceivable case where the base station 100 cannot grasp the characteristics of the antenna panels of the terminal device 200. In such a case, it is difficult for the base station 100 to designate a transmitting antenna panel for performing UP beam sweeping. To handle this, the base station 100 associates the receiving antenna panel used by the terminal device 200 at the reception of the DL reference signal with the transmitting antenna panel to be used to perform UP beam sweeping. That is, the base station 100 instructs the terminal device 200 to use, as the transmitting antenna panel, the same antenna panel as the receiving antenna panel used at the reception of the DL reference signal or the DL resource set. The terminal device 200 performs UP beam sweeping using the instructed transmitting antenna panel and the resource set that has been set. At this time, the terminal device 200 can freely determine the beam directions to be used to transmit the reference signals of the resource set that has been set. That is, the base station 100 has implicitly designated the transmitting antenna panel alone. In this manner, the base station 100 can allow the terminal device 200 to freely select the beam while implicitly designating the transmitting antenna panel for the terminal device 200.
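The implicit uplink panel designation just described can be pictured with a minimal sketch; the resource set identifiers and function names below are illustrative assumptions.

    # Minimal sketch of implicit uplink panel designation: the base station
    # instructs "use, for UP beam sweeping, the panel that received DL
    # resource set (1)". The terminal resolves the panel from its own DL
    # reception history and chooses the sweep beams freely.
    dl_reception_panel: dict[int, int] = {}  # DL resource set id -> receiving panel id

    def on_dl_resource_set_received(resource_set_id: int, panel_id: int) -> None:
        dl_reception_panel[resource_set_id] = panel_id

    def start_ul_beam_sweeping(ul_resource_set_id: int, linked_dl_set_id: int,
                               beams_per_panel: int) -> tuple[int, list[int]]:
        panel = dl_reception_panel[linked_dl_set_id]  # implicit panel designation
        beams = list(range(beams_per_panel))          # beam directions chosen by the terminal
        return panel, beams

    on_dl_resource_set_received(resource_set_id=1, panel_id=1)
    print(start_ul_beam_sweeping(ul_resource_set_id=2, linked_dl_set_id=1, beams_per_panel=4))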
Incidentally, when the transmitting antenna panel of the terminal device 200 is implicitly designated, the beam characteristics of the receiving antenna panel of the terminal device 200 and the beam characteristics of the transmitting antenna panel need to be similar (or the same). Since the antenna elements of an antenna panel are the same for transmission and reception, antenna calibration needs to be performed so that the variation between the transfer function of the analogue circuit at the time of reception and that of the analogue circuit at the time of transmission is adjusted to achieve matching characteristics between reception and transmission. The antenna calibration is performed by a general method utilized in ordinary radio systems, and the present invention presupposes that the antenna calibration has been completed. The antenna calibration can be performed by the terminal device 200 alone. Therefore, with the antenna calibration performed just once after startup of the terminal device 200, the calibrated state can be maintained for about one day, for example.

FIG. 17 is a sequence diagram illustrating an example of a flow of a beam selection procedure executed by the base station 100 and the terminal device 200. As illustrated in FIG. 17, the terminal device 200 notifies the base station 100 of information related to the antenna panels of the terminal device 200 (step S501). Subsequently, the base station 100 sets the DL resource set (1) onto the terminal (step S502). The base station 100 then performs beam sweeping using the resource set (1) that has been set (step S503). Subsequently, the terminal device 200 determines and holds the antenna panel (1) to be used for receiving the DL resource set (1) based on, for example, the reception power (step S504). Thereafter, the base station 100 sets the UP resource set (2) onto the terminal (step S505). The base station 100 then requests the terminal to perform UP beam sweeping with the UP resource set (2) by using the antenna panel (1) that has received the DL resource set (1) (step S506). Subsequently, the terminal device 200 performs beam sweeping using the designated antenna panel (1) and resource set (2) (step S507).

4. Application Examples

The technology according to the present disclosure is applicable to various products. For example, the base station 100 may be any of an eNodeB, ng-eNodeB, gNodeB, or en-gNodeB as described above. In addition to or instead of this, the base station 100 may be referred to as EUTRAN when the base station 100 is either an eNodeB or an en-gNodeB. In addition to or instead of this, the base station 100 may be referred to as NGRAN when the base station 100 is either a gNodeB or an ng-eNodeB. Furthermore, the base station 100 may be a Master Node (MN) or a Secondary Node (SN) in Dual Connectivity. That is, the base station 100 may be a Secondary gNodeB in the case of EUTRA-NR Dual Connectivity or in the case of NR-NR Dual Connectivity. In this case, a part or all of the above-described RRC signaling may be transmitted to and received from the UE (terminal device 200) via the MN, or may be directly transmitted or received between the UE (terminal device 200) and the Secondary gNodeB (base station 100) via a Signaling Radio Bearer (SRB) 3. The above-described PDCCH and PDSCH may be transmitted in a Secondary Cell Group (SCG) between the UE (terminal device 200) and the Secondary gNodeB (base station 100). In addition to or instead of this, the base station 100 may be a Master gNodeB in the case of NR-EUTRA Dual Connectivity or in the case of NR-NR Dual Connectivity.
4. Application Examples

The technology according to the present disclosure is applicable to various products. For example, the base station 100 may be any of an eNodeB, ng-eNodeB, gNodeB, or en-gNodeB as described above. In addition to or instead of this, the base station 100 may be referred to as EUTRAN when the base station 100 is either an eNodeB or an en-gNodeB. In addition to or instead of this, the base station 100 may be referred to as NGRAN when the base station 100 is either a gNodeB or an ng-eNodeB. Furthermore, the base station 100 may be a Master Node (MN) or a Secondary Node (SN) in Dual Connectivity. That is, the base station 100 may be a Secondary gNodeB in the case of EUTRA-NR Dual Connectivity or in the case of NR-NR Dual Connectivity. In this case, a part or all of the above-described RRC signaling may be transmitted to and received from the UE (terminal device 200) via the MN, or may be directly transmitted or received between the UE (terminal device 200) and the Secondary gNodeB (base station 100) via Signaling Radio Bearer (SRB) 3. The above-described PDCCH and PDSCH may be transmitted in a Secondary Cell Group (SCG) between the UE (terminal device 200) and the Secondary gNodeB (base station 100). In addition to or instead of this, the base station 100 may be a Master gNodeB in the case of NR-EUTRA Dual Connectivity or in the case of NR-NR Dual Connectivity. In this case, the above-described RRC signaling may be transmitted or received between the UE (terminal device 200) and the Master gNodeB (base station 100) via any of SRBs 0 to 2. The above-described PDCCH and PDSCH may be transmitted in a Master Cell Group (MCG) between the UE (terminal device 200) and the Master gNodeB (base station 100). In addition to or instead of this, the above-described base station 100 may be a gNB Central Unit (gNB-CU), a gNB Distributed Unit (gNB-DU), or a combination of gNB-CU and gNB-DU (i.e., a gNB). The gNB-CU hosts the RRC layer, SDAP layer, and PDCP layer for a certain UE. On the other hand, the gNB-DU hosts the RLC layer, MAC layer, and PHY layer for a certain UE. That is, a part or all of the above-described RRC signaling may be terminated between the UE and the gNB-CU via the gNB-DU. A part or all of the downlink RRC signaling may be generated by the gNB-CU. On the other hand, the above-described PDCCH and PDSCH may be generated by the gNB-DU and transmitted to the UE. In addition to or instead of this, the base station 100 may be implemented as a macro eNB, a small eNB, or the like. The small eNB may be an eNB that covers a cell smaller than the macro cell, such as a pico eNB, a micro eNB, or a home (femto) eNB. In addition to or instead of this, the base station 100 may be implemented as another type of base station such as a NodeB or a Base Transceiver Station (BTS). The base station 100 may include a main body (also referred to as a base station device) that controls radio communication, and one or more Remote Radio Heads (RRHs) arranged at locations different from the main body. Furthermore, various types of terminals, which will be described below, may operate as the base station 100 by temporarily or semi-permanently executing the base station function. Furthermore, for example, the terminal device 200 may be implemented as a mobile terminal such as a smartphone, a tablet Personal Computer (PC), a notebook PC, a portable game terminal, a portable/dongle type mobile router, or a digital camera, or as an in-vehicle terminal such as a car navigator. Furthermore, the terminal device 200 may be implemented as a terminal (also referred to as a Machine Type Communication (MTC) terminal) that performs Machine To Machine (M2M) communication. Furthermore, the terminal device 200 may be a radio communication module (for example, an integrated circuit module formed of one die) mounted on these terminals.

4.1. Application Examples Related to Base Station

First Application Example

FIG. 18 is a block diagram illustrating a first example of a schematic configuration of a gNB to which the technology according to the present disclosure is applicable. A gNB 800 has one or more antennas 810 and a base station device 820. Each antenna 810 may be connected to the base station device 820 via an RF cable. The technology of the present disclosure may also be applied to an eNB instead of a gNB. Each of the antennas 810 has a single or a plurality of antenna elements (for example, a plurality of antenna elements constituting a MIMO antenna) and is used for transmission and reception of radio signals by the base station device 820. The gNB 800 has a plurality of antennas 810 as illustrated in FIG. 18, and the plurality of antennas 810 may each correspond to one of a plurality of frequency bands used by the gNB 800, for example. Although FIG. 18 illustrates an example in which the gNB 800 has the plurality of antennas 810, the gNB 800 may have a single antenna 810.
The base station device 820 includes a controller 821, memory 822, a network interface 823, and a radio communication interface 825. The controller 821 may be a CPU or DSP, for example, and controls operation of various functions of an upper layer of the base station device 820. For example, the controller 821 generates a data packet from the data in the signal processed by the radio communication interface 825 and transfers the generated packet via the network interface 823. The controller 821 may generate a bundled packet by bundling data from a plurality of baseband processors and transfer the generated bundled packet. In addition, the controller 821 may include logical functions that execute controls such as radio resource control, radio bearer control, mobility management, admission control, or scheduling. Furthermore, the control may be executed in cooperation with surrounding gNBs or core network nodes. The memory 822 includes RAM and ROM, and stores a program executed by the controller 821 and various types of control data (for example, a terminal list, transmission power data, and scheduling data). The network interface 823 is a communication interface for connecting the base station device 820 to a core network 824. The controller 821 may communicate with a core network node or other gNBs via the network interface 823. In that case, the gNB 800 may be connected to the core network node or to other gNBs by a logical interface (for example, an S1 interface or an X2 interface). The network interface 823 may be a wired communication interface or a radio communication interface for a radio backhaul. When the network interface 823 is a radio communication interface, the network interface 823 may use a frequency band higher than the frequency band used by the radio communication interface 825 for radio communication. The radio communication interface 825 supports a cellular communication scheme such as NR, LTE, or LTE-Advanced, and provides a radio connection to terminals located in cells of the gNB 800 via the antenna 810. The radio communication interface 825 can typically include a baseband (BB) processor 826, an RF circuit 827, and the like. The BB processor 826 may perform, for example, encoding/decoding, modulation/demodulation, and multiplexing/demultiplexing, and executes various types of signal processing in individual layers (for example, L1, Medium Access Control (MAC), Radio Link Control (RLC), and Packet Data Convergence Protocol (PDCP)). The BB processor 826 may include some or all of the above-described logical functions instead of the controller 821. The BB processor 826 may be a module including memory for storing a communication control program, a processor for executing the program, and related circuits, and the functions of the BB processor 826 may be modified by updating the above program. Furthermore, the module may be a card or a blade inserted into a slot of the base station device 820, or may be a chip mounted on the card or the blade. The RF circuit 827 may include a mixer, a filter, an amplifier, or the like, and transmits and receives radio signals via the antenna 810. The radio communication interface 825 may include a plurality of BB processors 826 as illustrated in FIG. 18, and the plurality of BB processors 826 may each correspond to one of a plurality of frequency bands used by the gNB 800, for example. Furthermore, the radio communication interface 825 may include a plurality of RF circuits 827 as illustrated in FIG. 18, and the plurality of RF circuits 827 may each correspond to one of a plurality of antenna elements, for example.
Although FIG. 18 illustrates an example in which the radio communication interface 825 includes a plurality of BB processors 826 and a plurality of RF circuits 827, the radio communication interface 825 may include a single BB processor 826 or a single RF circuit 827. In the gNB 800 illustrated in FIG. 18, one or more components included in the control unit 140 described with reference to FIG. 10 may be implemented in the radio communication interface 825. Alternatively, at least some of these components may be implemented in the controller 821. As an example, the gNB 800 may be equipped with a module including a part or all of the radio communication interface 825 (for example, the BB processor 826) and/or the controller 821, and the module may be equipped with one or more of the above components. In this case, the module may store a program for causing the processor to function as the one or more components (in other words, a program for causing the processor to perform the operations of the one or more components) and may execute the program. As another example, the program causing the processor to function as the one or more components may be installed in the gNB 800, and the radio communication interface 825 (for example, the BB processor 826) and/or the controller 821 may execute the program. As described above, the gNB 800, the base station device 820, or the above module may be provided as a device including the one or more components, and a program for causing the processor to function as the one or more components may be provided. Furthermore, a readable recording medium on which the above program is recorded may be provided. Furthermore, in the gNB 800 illustrated in FIG. 18, the communication unit 120 described with reference to FIG. 10 may be implemented in the radio communication interface 825 (for example, the RF circuit 827). Furthermore, the antenna unit 110 may be implemented in the antenna 810. Furthermore, the storage unit 130 may be implemented in the memory 822.

Second Application Example

FIG. 19 is a block diagram illustrating a second example of a schematic configuration of a gNB to which the technology according to the present disclosure is applicable. A gNB 830 has one or more antennas 840, a base station device 850, and a gNB-DU 860. Each antenna 840 may be connected to the gNB-DU 860 via an RF cable. Furthermore, the base station device 850 and the gNB-DU 860 can be connected to each other by a high-speed line such as an optical fiber cable. Incidentally, in a case where the technology of the present disclosure is applied to an eNB instead of a gNB, the gNB-DU 860 is replaced with an RRH. Each of the antennas 840 has a single or a plurality of antenna elements (for example, a plurality of antenna elements constituting a MIMO antenna) and is used for transmission and reception of radio signals by the gNB-DU 860. The gNB 830 has a plurality of antennas 840 as illustrated in FIG. 19, and the plurality of antennas 840 may each correspond to one of a plurality of frequency bands used by the gNB 830, for example. Although FIG. 19 illustrates an example in which the gNB 830 has the plurality of antennas 840, the gNB 830 may have a single antenna 840. The base station device 850 includes a controller 851, memory 852, a network interface 853, a radio communication interface 855, and a connection interface 857. The controller 851, the memory 852, and the network interface 853 are similar to the controller 821, memory 822, and network interface 823 described with reference to FIG. 18, respectively.
The radio communication interface 855 supports a cellular communication scheme such as NR, LTE, or LTE-Advanced, and provides a radio connection to terminals located in the sector corresponding to the gNB-DU 860 via the gNB-DU 860 and the antenna 840. The radio communication interface 855 can typically include a BB processor 856 or the like. The BB processor 856 is similar to the BB processor 826 described with reference to FIG. 18, except that connection to an RF circuit 864 of the gNB-DU 860 is made via the connection interface 857. The radio communication interface 855 may include a plurality of BB processors 856 as illustrated in FIG. 19, and the plurality of BB processors 856 may each correspond to one of a plurality of frequency bands used by the gNB 830, for example. Although FIG. 19 illustrates an example in which the radio communication interface 855 includes a plurality of BB processors 856, the radio communication interface 855 may include a single BB processor 856. The connection interface 857 is an interface for connecting the base station device 850 (radio communication interface 855) to the gNB-DU 860. The connection interface 857 may be a communication module for communication over the high-speed line connecting the base station device 850 (radio communication interface 855) and the gNB-DU 860. The gNB-DU 860 includes a connection interface 861 and a radio communication interface 863. The connection interface 861 is an interface for connecting the gNB-DU 860 (radio communication interface 863) to the base station device 850. The connection interface 861 may be a communication module for communication over the high-speed line. The radio communication interface 863 transmits and receives radio signals via the antenna 840. The radio communication interface 863 can typically include the RF circuit 864 or the like. The RF circuit 864 may include a mixer, a filter, an amplifier, or the like, and transmits and receives radio signals via the antenna 840. The radio communication interface 863 may include a plurality of RF circuits 864 as illustrated in FIG. 19, and the plurality of RF circuits 864 may each correspond to one of a plurality of antenna elements, for example. Although FIG. 19 illustrates an example in which the radio communication interface 863 includes a plurality of RF circuits 864, the radio communication interface 863 may include a single RF circuit 864. In the gNB 830 illustrated in FIG. 19, one or more components included in the control unit 140 described with reference to FIG. 10 may be implemented in the radio communication interface 855 and/or the radio communication interface 863. Alternatively, at least some of these components may be implemented in the controller 851. As an example, the gNB 830 may be equipped with a module including a part or all of the radio communication interface 855 (for example, the BB processor 856) and/or the controller 851, and the module may be equipped with one or more of the above components. In this case, the module may store a program for causing the processor to function as the one or more components (in other words, a program for causing the processor to perform the operations of the one or more components) and may execute the program. As another example, the program causing the processor to function as the one or more components may be installed in the gNB 830, and the radio communication interface 855 (for example, the BB processor 856) and/or the controller 851 may execute the program.
As described above, the gNB 830, the base station device 850, or the above module may be provided as a device including the one or more components, and a program for causing the processor to function as the one or more components may be provided. Furthermore, a readable recording medium on which the above program is recorded may be provided. Furthermore, in the gNB 830 illustrated in FIG. 19, the communication unit 120 described with reference to FIG. 10, for example, may be implemented in the radio communication interface 863 (for example, the RF circuit 864). Furthermore, the antenna unit 110 may be implemented in the antenna 840. Furthermore, the storage unit 130 may be implemented in the memory 852.

4.2. Application Examples Related to Terminal Devices

First Application Example

FIG. 20 is a block diagram illustrating an example of a schematic configuration of a smartphone 900 to which the technology according to the present disclosure is applicable. The smartphone 900 includes a processor 901, memory 902, storage 903, an external connection interface 904, a camera 906, a sensor 907, a microphone 908, an input device 909, a display device 910, a speaker 911, a radio communication interface 912, one or more antenna switches 915, one or more antennas 916, a bus 917, a battery 918, and an auxiliary controller 919. The processor 901 may be a CPU or a System on Chip (SoC), for example, and controls the functions of the application layer and other layers of the smartphone 900. The memory 902 includes RAM and ROM and stores programs to be executed by the processor 901, and data. The storage 903 may include a storage medium such as semiconductor memory or a hard disk. The external connection interface 904 is an interface for connecting an external device such as a memory card or a Universal Serial Bus (USB) device to the smartphone 900. The camera 906 includes an imaging element such as a Charge Coupled Device (CCD) or a Complementary Metal Oxide Semiconductor (CMOS), and generates a captured image. Examples of the sensor 907 can include a group of sensors such as a positioning sensor, a gyro sensor, a geomagnetic sensor, and an acceleration sensor. The microphone 908 converts voice input to the smartphone 900 into a voice signal. The input device 909 includes, for example, a touch sensor that detects a touch on the screen of the display device 910, a keypad, a keyboard, a button, or a switch, and receives an input of operation or information from the user. The display device 910 has a screen such as a liquid crystal display (LCD) or an organic light emitting diode (OLED) display, and displays an output image of the smartphone 900. The speaker 911 converts a voice signal output from the smartphone 900 into voice. The radio communication interface 912 supports a cellular communication scheme such as NR, LTE, or LTE-Advanced and executes radio communication. The radio communication interface 912 can typically include a BB processor 913, an RF circuit 914, and the like. The BB processor 913 may perform, for example, encoding/decoding, modulation/demodulation, and multiplexing/demultiplexing, and performs various types of signal processing for radio communication. The RF circuit 914 may include a mixer, a filter, an amplifier, or the like, and transmits and receives radio signals via the antenna 916. The radio communication interface 912 may be a one-chip module integrating the BB processor 913 and the RF circuit 914. The radio communication interface 912 may include a plurality of BB processors 913 and a plurality of RF circuits 914 as illustrated in FIG. 20.
Although FIG. 20 illustrates an example in which the radio communication interface 912 includes a plurality of BB processors 913 and a plurality of RF circuits 914, the radio communication interface 912 may include a single BB processor 913 or a single RF circuit 914. Furthermore, the radio communication interface 912 may support other types of radio communication schemes such as a short-range radio communication scheme, a near field radio communication scheme, or a wireless Local Area Network (LAN) scheme in addition to the cellular communication scheme. In that case, the radio communication interface 912 may include the BB processor 913 and the RF circuit 914 for each of the radio communication schemes. Each of the antenna switches 915 switches the connection destination of the antenna 916 between a plurality of circuits included in the radio communication interface 912 (for example, circuits for different radio communication schemes). Each of the antennas 916 has a single or a plurality of antenna elements (for example, a plurality of antenna elements constituting a MIMO antenna) and is used for transmission and reception of radio signals by the radio communication interface 912. The smartphone 900 may have a plurality of antennas 916 as illustrated in FIG. 20. Although FIG. 20 illustrates an example in which the smartphone 900 has the plurality of antennas 916, the smartphone 900 may have a single antenna 916. Furthermore, the smartphone 900 may be provided with the antenna 916 for each of the radio communication schemes. In that case, the antenna switch 915 may be omitted from the configuration of the smartphone 900. The bus 917 provides mutual connection between the processor 901, the memory 902, the storage 903, the external connection interface 904, the camera 906, the sensor 907, the microphone 908, the input device 909, the display device 910, the speaker 911, the radio communication interface 912, and the auxiliary controller 919. The battery 918 supplies power to the individual blocks of the smartphone 900 illustrated in FIG. 20 via power supply lines partially illustrated by broken lines in the figure. The auxiliary controller 919 controls the operation of the minimum necessary functions of the smartphone 900 during a sleep mode, for example. In the smartphone 900 illustrated in FIG. 20, one or more components included in the control unit 240 described with reference to FIG. 11 may be implemented in the radio communication interface 912. Alternatively, at least some of these components may be implemented in the processor 901 or the auxiliary controller 919. As an example, the smartphone 900 may be equipped with a module including a part (for example, the BB processor 913) or all of the radio communication interface 912, the processor 901, and/or the auxiliary controller 919, and the module may be equipped with one or more of the above-described components. In this case, the module may store a program for causing the processor to function as the one or more components (in other words, a program for causing the processor to perform the operations of the one or more components) and may execute the program. As another example, the program causing the processor to function as the one or more components may be installed in the smartphone 900, and the radio communication interface 912 (for example, the BB processor 913), the processor 901, and/or the auxiliary controller 919 may execute the program.
As described above, the smartphone 900 or the above module may be provided as a device including the one or more components, and a program for causing the processor to function as the one or more components may be provided. Furthermore, a readable recording medium on which the above program is recorded may be provided. Furthermore, in the smartphone 900 illustrated in FIG. 20, for example, the communication unit 220 described with reference to FIG. 11 may be implemented in the radio communication interface 912 (for example, the RF circuit 914). Furthermore, the antenna unit 210 may be implemented in the antenna 916. Furthermore, the storage unit 230 may be implemented in the memory 902.

Second Application Example

FIG. 21 is a block diagram illustrating an example of a schematic configuration of a car navigator 920 to which the technology according to the present disclosure is applicable. The car navigator 920 includes a processor 921, memory 922, a Global Positioning System (GPS) module 924, a sensor 925, a data interface 926, a content player 927, a storage medium interface 928, an input device 929, a display device 930, a speaker 931, a radio communication interface 933, one or more antenna switches 936, one or more antennas 937, and a battery 938. The processor 921 may be a CPU or SoC, for example, and controls the navigation function and other functions of the car navigator 920. The memory 922 includes RAM and ROM and stores programs to be executed by the processor 921, and data. The GPS module 924 measures the position (including latitude, longitude, and altitude) of the car navigator 920 using GPS signals received from GPS satellites. The sensor 925 can include a group of sensors such as a gyro sensor, a geomagnetic sensor, and a barometric pressure sensor, for example. The data interface 926 is connected to an in-vehicle network 941 via a terminal (not illustrated), for example, and acquires data generated on the vehicle side such as vehicle speed data. The content player 927 plays content stored on a storage medium (for example, a CD or DVD) inserted into the storage medium interface 928. The input device 929 includes, for example, a touch sensor that detects a touch on the screen of the display device 930, a button, or a switch, and receives an input of operation or information from the user. The display device 930 includes a screen such as an LCD or OLED display and displays an image of the navigation function or the content to be played. The speaker 931 outputs the sound of the navigation function or the content to be played. The radio communication interface 933 supports a cellular communication scheme such as NR, LTE, or LTE-Advanced and executes radio communication. The radio communication interface 933 can typically include a BB processor 934, an RF circuit 935, and the like. The BB processor 934 may perform, for example, encoding/decoding, modulation/demodulation, and multiplexing/demultiplexing, and performs various types of signal processing for radio communication. The RF circuit 935 may include a mixer, a filter, an amplifier, or the like, and transmits and receives radio signals via the antenna 937. The radio communication interface 933 may be a one-chip module integrating the BB processor 934 and the RF circuit 935. The radio communication interface 933 may include a plurality of BB processors 934 and a plurality of RF circuits 935 as illustrated in FIG. 21.
Although FIG. 21 illustrates an example in which the radio communication interface 933 includes a plurality of BB processors 934 and a plurality of RF circuits 935, the radio communication interface 933 may include a single BB processor 934 or a single RF circuit 935. Furthermore, the radio communication interface 933 may support other types of radio communication schemes such as a short-range radio communication scheme, a near field radio communication scheme, or a wireless LAN scheme in addition to the cellular communication scheme. In that case, the radio communication interface 933 may include the BB processor 934 and the RF circuit 935 for each of the radio communication schemes. Each of the antenna switches 936 switches the connection destination of the antenna 937 between a plurality of circuits included in the radio communication interface 933 (for example, circuits for different radio communication schemes). Each of the antennas 937 has a single or a plurality of antenna elements (for example, a plurality of antenna elements constituting a MIMO antenna) and is used for transmission and reception of radio signals by the radio communication interface 933. The car navigator 920 may have a plurality of antennas 937 as illustrated in FIG. 21. Although FIG. 21 illustrates an example in which the car navigator 920 has a plurality of antennas 937, the car navigator 920 may have a single antenna 937. Furthermore, the car navigator 920 may include the antenna 937 for each of the radio communication schemes. In that case, the antenna switch 936 may be omitted from the configuration of the car navigator 920. The battery 938 supplies power to the individual blocks of the car navigator 920 illustrated in FIG. 21 via power supply lines partially illustrated by broken lines in the figure. In addition, the battery 938 stores electric power supplied from the vehicle side. In the car navigator 920 illustrated in FIG. 21, one or more components included in the control unit 240 described with reference to FIG. 11 may be implemented in the radio communication interface 933. Alternatively, at least some of these components may be implemented in the processor 921. As an example, the car navigator 920 may be equipped with a module including a part (for example, the BB processor 934) or all of the radio communication interface 933 and/or the processor 921, and the module may be equipped with one or more of the above components. In this case, the module may store a program for causing the processor to function as the one or more components (in other words, a program for causing the processor to perform the operations of the one or more components) and may execute the program. As another example, a program causing the processor to function as the one or more components may be installed in the car navigator 920, and the radio communication interface 933 (for example, the BB processor 934) and/or the processor 921 may execute the program. As described above, the car navigator 920 or the above module may be provided as a device including the one or more components, and a program for causing the processor to function as the one or more components may be provided. Furthermore, a readable recording medium on which the above program is recorded may be provided. Furthermore, in the car navigator 920 illustrated in FIG. 21, the communication unit 220 described with reference to FIG. 11, for example, may be implemented in the radio communication interface 933 (for example, the RF circuit 935). Furthermore, the antenna unit 210 may be implemented in the antenna 937.
Furthermore, the storage unit 230 may be implemented in the memory 922. Furthermore, the technology according to the present disclosure may be actualized as an in-vehicle system (or vehicle) 940 including one or more blocks of the car navigator 920 described above, the in-vehicle network 941, and a vehicle-side module 942. The vehicle-side module 942 generates vehicle-side data such as vehicle speed, engine speed, or failure information, and outputs the generated data to the in-vehicle network 941.

5. Modifications

A control device that controls the base station device 100 or the terminal device 200 of the present embodiment may be actualized by a dedicated computer system or a general-purpose computer system. For example, a communication program for executing the above-described operations (for example, a transmission/reception process) is stored in a computer-readable recording medium such as an optical disk, semiconductor memory, a magnetic tape, or a flexible disk and distributed. For example, the program is installed on a computer and the above processes are executed to achieve the configuration of the control device. At this time, the control device may be a device external to the base station device 100 or the terminal device 200 (for example, a personal computer). Furthermore, the control device may be a device inside the base station device 100 or the terminal device 200. Furthermore, the communication program may be stored in a disk device included in a server device on a network such as the Internet so as to be downloadable to a computer, for example. Furthermore, the functions described above may be implemented by the operating system (OS) and application software in cooperation. In this case, the sections other than the OS may be stored in a medium for distribution, or the sections other than the OS may be stored in a server device so as to be downloadable to a computer, for example. Furthermore, among the individual processes described in the above embodiments, all or a part of the processes described as being performed automatically may be performed manually, and the processes described as being performed manually can be performed automatically by known methods. In addition, the processing procedures, specific names, and information including various data and parameters illustrated in the above documents or drawings can be arbitrarily changed unless otherwise specified. For example, the various types of information illustrated in each of the drawings are not limited to the information illustrated. In addition, each component of each device illustrated is provided as a functional and conceptual illustration and thus does not necessarily need to be physically configured as illustrated. That is, the specific form of distribution or integration of each device is not limited to that illustrated in the drawings, and all or a part thereof may be functionally or physically distributed or integrated in arbitrary units according to various loads and use conditions. Furthermore, the above-described embodiments can be appropriately combined within a range that does not cause contradiction of processes. Furthermore, the order of the individual steps illustrated in the flowcharts and the sequence diagrams of the embodiment can be changed as appropriate.
Although defaults of the combinations of the receiving antenna panel and the receiving beam (reception environment) to be used by the terminal device 200 have been described in the above embodiment, the "receiving antenna panel" does not have to be explicitly considered in one aspect. As an example, when one receiving beam has been received and measured by a plurality of different receiving antenna panels, it may be recognized (considered) as a plurality of different receiving beams from the viewpoint (UE perspective) of the UE (terminal device 200). In this case, the above-described "defaults of the combinations (reception environment) of the receiving antenna panel and the receiving beam to be used by the terminal device 200" may be replaced with "defaults of the receiving beam to be used by the terminal device 200". Moreover, the antenna panel in the above embodiment (including modifications, application examples, and examples of application) may correspond to a combination of one or a plurality of antenna ports. In addition to or in place of this, the antenna panel in the above embodiment (including modifications, application examples, and examples of application) may correspond to an antenna port group including one or more antenna ports. In addition to or instead of this, the antenna panel in the above embodiment may correspond to a combination of one or more antenna ports (or an antenna port group) and quasi-co-location parameters. In addition to or instead of this, the above-described association between the resource area of the control information (e.g., PDCCH) and the identification information (e.g., SSB-Index) (or the combination of the receiving antenna panel and the receiving beam) may be set per terminal device 200 (UE), MAC entity in the UE, cell, CC, or BWP. The resource area or resource set in the above-described embodiment may be one or more Resource Element Groups (REGs), each constituted with one Resource Block and one OFDM symbol, for example. Alternatively, the resource area or the resource set may be a Control Channel Element (CCE) constituted with a plurality of (e.g., six) REGs. Further alternatively, the resource area or the resource set may be a Control resource set (CORESET) constituted with a plurality of Resource Blocks and one to three OFDM symbols. At least one of the parameters and L values illustrated in Table 2 below constituting the CORESET may be transmitted from the NGRAN (base station 100) to the UE (terminal device 200) by RRC signaling (e.g., an RRC Reconfiguration message). The RRC Reconfiguration message here may also include a MeasConfig (measurement configuration) for measuring the reference signal (e.g., SSB) described above.

TABLE 2

Parameter
N_RB^CORESET (number of Resource Blocks constituting the CORESET)
N_symb^CORESET (number of OFDM symbols constituting the CORESET)
N_REG^CORESET (number of REGs constituting the CORESET)
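As a concrete illustration of the REG/CCE/CORESET sizing just described (one REG spans one Resource Block and one OFDM symbol, and one CCE is constituted with six REGs), the following sketch computes REG and CCE counts for a hypothetical CORESET configuration. The parameter names mirror Table 2, but the function itself is illustrative, not part of the disclosed signaling.

```python
def coreset_dimensions(n_rb_coreset: int, n_symb_coreset: int) -> tuple[int, int]:
    """Return (REG count, CCE count) for a CORESET.

    One REG is one Resource Block x one OFDM symbol; one CCE is six REGs,
    as described above.
    """
    assert 1 <= n_symb_coreset <= 3, "a CORESET spans one to three OFDM symbols"
    n_reg_coreset = n_rb_coreset * n_symb_coreset
    n_cce = n_reg_coreset // 6
    return n_reg_coreset, n_cce

# Example: a hypothetical CORESET of 48 Resource Blocks over 2 OFDM symbols
# yields 96 REGs, i.e., 16 CCEs.
print(coreset_dimensions(48, 2))  # (96, 16)
```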
6. Summary

As described above, according to one embodiment of the present disclosure, the communication device (for example, the terminal device 200) according to the present embodiment includes the acquisition unit 242 and the reception unit 243. The acquisition unit 242 acquires similarity information indicating the similarity of the beam characteristics of the transmitting antenna panel in a plurality of signal groups transmitted from the base station 100. The reception unit 243 selects and receives a signal group to be received from among the plurality of signal groups based on the similarity information acquired by the acquisition unit 242. With this configuration, for example, the terminal device 200 performs a beam determination process and a process of reporting the determined beam to the base station 100 only for one received resource set, and does not have to perform the determination process or the reporting process for the other resource sets. Therefore, the terminal device 200 according to the embodiment can reduce unnecessary signal processing. Furthermore, the reception unit 243 of the communication device according to the embodiment omits the reception processes of signal groups other than the selected signal group. With this configuration, for example, the terminal device 200 performs the beam determination process and the process of reporting the determined beam to the base station 100 just for one received resource set, and does not have to perform the determination process or the reporting process for the other resource sets. Therefore, the terminal device 200 according to the embodiment can reduce unnecessary signal processing. Furthermore, the acquisition unit 242 of the communication device according to the embodiment acquires receiving panel information that designates the antenna panel to receive the signal group from the base station 100. The reception unit 243 receives the signal group on the antenna panel designated by the receiving panel information acquired by the acquisition unit 242. This makes it possible to designate the antenna panel and the beam separately from each other, enabling the use of a setting method suitable for the switching time of the antenna panel and the switching time of the beam. In addition, the communication device according to the embodiment includes the transmission unit 244. The transmission unit 244 transmits a predetermined signal group to the base station 100. The acquisition unit 242 acquires transmitting panel information that designates an antenna panel that transmits the signal group from the base station 100. The transmission unit 244 transmits the signal group by using the antenna panel designated by the transmitting panel information acquired by the acquisition unit 242. With this configuration, the terminal device 200 can have the transmitting antenna panel designated by the base station 100, making it possible to clarify which transmitting antenna panel should be used. Furthermore, the transmission unit 244 of the communication device according to the embodiment transmits the signal group to the base station 100 by using the antenna panel that has received the signal group transmitted from the base station 100. With this configuration, the base station 100 can allow the terminal device 200 to freely select the beam after implicitly designating the transmitting antenna panel of the terminal device 200. In addition, the communication device according to the embodiment includes the notification unit 241. The notification unit 241 notifies the base station 100 of capability information indicating that the signal group can be selectively received based on the similarity information. With this configuration, the base station 100 can take measures such as not transmitting the similarity information to a terminal device 200 that cannot selectively receive the resource set, leading to a reduction of the processing load. Furthermore, the notification unit 241 of the communication device according to the embodiment notifies the base station 100 of the characteristic information regarding the beam characteristics of the antenna panel that receives the signal group.
This makes it easy for the base station 100 to designate the antenna panel of the terminal device 200. Furthermore, the base station device 100 according to the embodiment includes the generation unit 142 and the transmission unit 143. The generation unit 142 generates similarity information indicating the similarity of the transmission environment characteristics in the plurality of signal groups to be transmitted to the terminal device 200. The transmission unit 143 transmits the similarity information generated by the generation unit 142 to the terminal device 200. With this configuration, for example, the terminal device 200 performs the beam determination process and the process of reporting the determined beam to the base station 100 only for one received resource set, and does not have to perform the determination process or the reporting process for the other resource sets. Therefore, the terminal device 200 according to the embodiment can reduce unnecessary signal processing. In addition, the base station device 100 according to the embodiment includes the acquisition unit 141. The acquisition unit 141 acquires, from the terminal device 200, capability information indicating that the signal group can be selectively received. The transmission unit 143 transmits the similarity information to the terminal device 200 when the terminal device 200 is capable of selectively receiving the signal group based on the capability information. With this configuration, the base station 100 can take measures such as not transmitting the similarity information to a terminal device 200 that cannot selectively receive the resource set, leading to a reduction of the processing load. The embodiments of the present disclosure have been described above. However, the technical scope of the present disclosure is not limited to the above-described embodiments, and various modifications can be made without departing from the scope of the present disclosure. Moreover, it is allowable to combine components across different embodiments and modifications as appropriate. The effects described in the individual embodiments of the present specification are merely examples, and thus there may be other effects, not limited to the exemplified effects. Note that the present technology can also have the following configurations.

(1) A communication device comprising: an acquisition unit that acquires similarity information indicating similarity of beam characteristics of a transmitting antenna panel in a plurality of signal groups transmitted from a base station; and a reception unit that selects and receives the signal group to be received, from among the plurality of signal groups, based on the similarity information acquired by the acquisition unit.

(2) The communication device according to (1), wherein the reception unit omits reception processes of signal groups other than the selected signal group.

(3) The communication device according to any one of (1) to (2), wherein the acquisition unit acquires receiving panel information that designates an antenna panel that receives the signal group from the base station, and the reception unit receives the signal group by the antenna panel designated by the receiving panel information acquired by the acquisition unit.
(4) The communication device according to any one of (1) to (3), further comprising a transmission unit that transmits a predetermined signal group to the base station, wherein the acquisition unit acquires transmitting panel information that designates an antenna panel that transmits the signal group from the base station, and the transmission unit transmits the signal group by the antenna panel designated by the transmitting panel information acquired by the acquisition unit.

(5) The communication device according to (4), wherein the transmission unit transmits the signal group to the base station by using the antenna panel that has received the signal group transmitted from the base station.

(6) The communication device according to any one of (1) to (5), further comprising a notification unit that preliminarily notifies the base station of capability information indicating that it is possible to selectively receive the signal group based on the similarity information.

(7) The communication device according to (6), wherein the notification unit preliminarily notifies the base station of characteristic information regarding beam characteristics of an antenna panel that receives the signal group.

(8) A base station device comprising: a generation unit that generates similarity information indicating similarity of characteristics of a transmission environment in a plurality of signal groups to be transmitted to a communication device; and a transmission unit that transmits the similarity information generated by the generation unit to the communication device.

(9) The base station device according to (8), further comprising an acquisition unit that acquires, from the communication device, capability information indicating that it is possible to selectively receive the signal group, wherein the transmission unit transmits the similarity information to the communication device in a case where the communication device is capable of selectively receiving the signal group based on the capability information.

(10) A communication method comprising: an acquisition step of acquiring similarity information indicating similarity of beam characteristics of a transmitting antenna panel in a plurality of signal groups transmitted from a base station; and a reception step of selectively receiving the signal group to be received, from among the plurality of signal groups, based on the similarity information acquired by the acquisition step.

(11) A base station device control method comprising: a generation step of generating similarity information indicating similarity of beam characteristics of a transmitting antenna panel in a plurality of signal groups to be transmitted to a communication device; and a transmission step of transmitting the similarity information generated by the generation step to the communication device.

REFERENCE SIGNS LIST

1 COMMUNICATION SYSTEM
100 BASE STATION DEVICE (BASE STATION)
200 TERMINAL DEVICE
DETAILED DESCRIPTION

Embodiments of an improved communication system using a general-purpose processor to achieve high-rate processing are disclosed. Embodiments disclosed herein provide for improved communication systems capable of utilizing a general-purpose processor to efficiently achieve a high rate of signal processing. After reading this description, it will become apparent to one skilled in the art how to implement the invention in various alternative embodiments and alternative applications. However, although various embodiments of the present invention will be described herein, it is understood that these embodiments are presented by way of example and illustration only, and not limitation. As such, this detailed description of various embodiments should not be construed to limit the scope or breadth of the present invention as set forth in the appended claims. Reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment. Thus, appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. A communication system is used as a primary example throughout the description; however, the application of the disclosed methods is not so limited. For example, any wireless or radio communication system requiring the use of digital signal processing, a modem, etc., can implement the systems, methods, and computer-readable media described herein. This disclosure provides systems and methods for performing digital signal processing using general-purpose central processing units (CPUs) in either a standard server environment or a virtualized cloud environment. In some examples, the systems can employ single-instruction multiple-data (SIMD) techniques to achieve high throughput, including the SSE, SSE2, SSE3, SSE4.1, SSE4.2, AVX, AVX2, and AVX512 instruction sets. This disclosure describes how the data processing is managed over multiple processing cores of the processors (e.g., CPUs) to achieve the necessary throughput without the use of dedicated signal processing hardware such as Field Programmable Gate Arrays (FPGAs) or High Performance Computing (HPC) hardware such as Graphics Processing Units (GPUs). The ability to perform this processing in general-purpose server CPUs, including but not limited to x86-architecture microprocessors made by Intel and AMD, as well as ARM processors such as the Cortex-A76 with NEON and the AWS Graviton and Graviton2, allows the functions to be deployed within a general-purpose cloud processing environment using a virtualized processing architecture without the need for dedicated hardware. The processing in general-purpose CPUs is enabled by a Digital IF appliance that samples the analog signal and feeds the digitized samples into the CPU over an Ethernet connection. The Digital IF appliance can also accept digitized samples and convert them to an analog signal, similar to that described in U.S. Pat. No. 9,577,936, issued Feb. 21, 2017, entitled "Packetized Radio Frequency Transport System", the contents of which are incorporated by reference in their entirety. FIG. 1 is a graphical representation of an embodiment of a communication system.
A communication system (system) 100 can have a platform 110 and a satellite 111 that communicate with a plurality of ground stations. The platform 110 can be, for example, an aircraft (e.g., an airplane, helicopter, or unmanned aerial vehicle (UAV)), a missile, a boat, etc. A plurality of ground stations 120, 130, 140 can be associated with a terrestrial radiofrequency (RF) antenna 122 or one or more satellite antennas 132, 142. The ground station 120 can have an antenna 122 coupled to a digitizer 124. The digitizer 124 can have one or more analog-to-digital converters (A2D) for converting analog signals received at the antenna 122 into a digital bit stream for transmission via a network. The digitizer 124 can also include corresponding digital-to-analog converters (D2A) for operations on the uplink to the platform 110 and the satellite 111. Similarly, the ground station 130 can have an antenna 132 and a digitizer 134, and the ground station 140 can have an antenna 142 and a digitizer 144. The ground stations 120, 130, 140 can each receive downlink signals 160 (labeled 160a, 160b, 160c) from the platform 110 and downlink signals 170 (labeled 170a, 170b, 170c) from the satellite 111 in a receive chain. The ground stations 120, 130, 140 can also transmit uplink signals via the respective antennas 122, 132, 142 in a transmit chain. The digitizers 124, 134, 144 can digitize the received downlink signals 160, 170 for transmission as a digital bit stream 154. The digital bit stream 154 can then be transmitted, via a network 152, to a cloud processing system. In some examples, the ground stations 120, 130, 140 can process all of the data (e.g., contained in the downlink signals) locally; however, this can be exceptionally expensive from a time, resource, and efficiency perspective. Therefore, in some embodiments, the downlink signals can be digitized and transmitted as the digital bit stream 154 to a remote signal processing server (SPS) 150. In some implementations, the SPS 150 can be positioned in a physical location, such as a data center located in an offsite facility that is accessible via a wide area network (WAN). Such a WAN can be the Internet, for example. The SPS 150 can demodulate the downlink signals from the digital bit stream 154 and output the data or information bits from the downlink signals. In some other implementations, the SPS 150 can use cloud computing or cloud processing to perform the signal processing and other methods described herein. The SPS 150 can also be referred to as a cloud server. The SPS 150 can then provide the processed data to the user or send it to a different site. The data and information can be mission-dependent. In addition, the information contained in the data can be the main purpose of the satellite, including weather data, image data, and satellite communication (SATCOM) payload data. As noted above, SATCOM is used as a primary example herein, but any communication or signal processing system using DSP can implement the methods described herein. In order to achieve high processing rates with software, a phase lock loop (PLL) or delay lock loop (DLL) approach can be problematic due to the feedback within the loop. The feedback loop forces all of the incoming data (e.g., the downlink signal 160 and/or 170) to be processed in a single (e.g., linear) process that cannot be easily split or otherwise divided. In addition to the feedback, there are other obstacles to overcome when using the PLL/DLL, including, for example, how often to calculate the error term.
FIG. 2 is a functional block diagram of a wired or wireless communication device for use as one or more components of the system of FIG. 1. A processing device (device) 200 may be implemented as, for example, the SPS 150 of FIG. 1. The device 200 can be implemented as needed to perform one or more of the signal processing methods or steps disclosed herein. The device 200 may include a processor 202 which controls operation of the device 200. The processor 202 may also be referred to as a CPU. The processor 202 can direct and/or perform the functions, for example, attributed to the SPS 150. Certain aspects of the device 200, including the processor 202, can be implemented as various cloud-based elements, such as cloud-based processing. Accordingly, the processor 202 can represent cloud processing distributed over several disparate processors via a network (e.g., the Internet). Alternatively, certain components can be implemented in hardware. The processor 202 may be implemented with any combination of one or more general-purpose microprocessors, microcontrollers, digital signal processors (DSPs), field programmable gate arrays (FPGAs), programmable logic devices (PLDs), controllers, state machines, gated logic, discrete hardware components, dedicated hardware finite state machines, or any other suitable entities that can perform calculations or other manipulations of information. The processor 202 can have one or more cores 204 (shown as core 204a through core 204n) on which the computations can be performed. In implementations using cloud processing, the cores 204 can represent multiple iterations of distributed cloud processing. In some embodiments using hardware, the processor 202 can be a complex integrated circuit on which all the computations for the receiver take place. As used herein, the cores 204 can each be one processing element of the processor 202. The processor 202 can implement multiple cores 204 to perform the necessary parallel processing for the methods disclosed herein. In some embodiments, the processor 202 may be distributed across multiple CPUs, as in cloud computing. The device 200 may further include memory 206 operably coupled to the processor 202. The memory 206 can be cloud-based storage or local hardware storage. The memory 206 can include both read-only memory (ROM) and random access memory (RAM), providing instructions and data to the processor 202. A portion of the memory 206 may also include non-volatile random access memory (NVRAM). The processor 202 typically performs logical and arithmetic operations based on program instructions stored within the memory 206. The instructions in the memory 206 may be executable to implement the methods described herein. The memory 206 can further include removable media or multiple distributed databases. The memory 206 may also include machine-readable media for storing software. Software shall be construed broadly to mean any type of instructions, whether referred to as software, firmware, middleware, microcode, hardware description language, or otherwise. Instructions may include code (e.g., in source code format, binary code format, executable code format, or any other suitable format of code). The instructions, when executed by the processor 202 or the one or more cores 204, cause the device 200 (e.g., the SPS 150) to perform the various functions described herein. The device 200 may also include a transmitter 210 and a receiver 212 to allow transmission and reception of data between the communication device 200 and a remote location.
Such communication can occur between the ground station 120 and the SPS 150 via the network 152, for example. Such communications can be wireless or conducted via wireline communications. The transmitter 210 and receiver 212 may be combined into a transceiver 214. The transceiver 214 can be communicatively coupled to the network 152. In some examples, the transceiver 214 can include or be a portion of a network interface card (NIC). The device 200 may further comprise a user interface 222. The user interface 222 may comprise a keypad, a microphone, a speaker, and/or a display. The user interface 222 may include any element or component that conveys information to a user of the device 200 and/or receives input from the user. The various components of the device 200 described herein may be coupled together by a bus system 226. The bus system 226 may include a data bus, for example, as well as a power bus, a control signal bus, and a status signal bus in addition to the data bus. In some embodiments, the bus system 226 can be communicatively coupled to the network 152. The network 152 can provide a communication link between the device 200 (e.g., the processor 202) and the ground station 120, for example. Those of skill in the art will appreciate that the components of the device 200 may be coupled together, or may accept or provide inputs to each other, using some other mechanism such as a local or wide area network for distributed processing. FIG. 3 is a schematic block diagram of an embodiment of feedforward or pre-calculation signal processing 300. A method 300 can occur as a generalized process incorporating a plurality of functions performed by, for example, the processor 202. The processor 202 can perform the plurality of functions in a series or parallel arrangement, as shown, to perform one or more desired processes. Each function may refer to a block or collection of instructions or software executable by the processor 202 and stored in the memory 206. A first function 302 can be performed by the processor 202. In some embodiments, a second function 304 can be performed serially, following the first function 302. Accordingly, the processor 202 can split blocks of data with the different functionality for processing over multiple cores 204 to perform the first function 302 and the second function 304. The processor 202 can perform distributed processing of a third function 306 (shown as 306a, 306b, . . . 306n) in parallel, following the second function 304. To indicate that various numbers of functions 306a-306n may operate in parallel, three paths are depicted with three vertical dots between them, indicating that any number of paths can be included, such as, but not limited to, four, five, six, etc. The parallel processing of the third function 306 can include, for example, splitting blocks of data associated with the same functionality over several cores 204 (e.g., processing blocks) of the processor 202. For example, "blocks of data" can mean a group of samples that need to be processed. The term "parallel" is used herein to describe that processing occurs in the blocks 306a-306n at the same time. The packets being processed may be of different lengths from one block 306a-306n to another, so the processing of packets may not have the same rate or speed from one block 306a-306n to the next. As noted below, some of the blocks 306a-306n may proceed faster or slower than others. Accordingly, the term parallel should not be limited to simultaneous or concurrent processing within the blocks 306a-306n.
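As a minimal illustration of the fan-out just described, where the same third function 306 is applied to different blocks of data on different cores, the following sketch uses a process pool. The function body and block count are illustrative assumptions, not the disclosed implementation.

```python
from concurrent.futures import ProcessPoolExecutor

def third_function(block: list[float]) -> list[float]:
    # Stand-in for one parallel instance (306a-306n): any per-sample DSP
    # step; here, simply a fixed gain applied to each sample.
    return [2.0 * sample for sample in block]

def process_in_parallel(samples: list[float], num_blocks: int) -> list[float]:
    # Split the incoming samples into blocks of data with the same
    # functionality and process them over several cores at the same time.
    size = (len(samples) + num_blocks - 1) // num_blocks
    blocks = [samples[i:i + size] for i in range(0, len(samples), size)]
    with ProcessPoolExecutor() as pool:
        results = pool.map(third_function, blocks)
    # Recombine in order; individual blocks may finish faster or slower,
    # but map() preserves the original ordering of the blocks.
    return [sample for block in results for sample in block]

if __name__ == "__main__":
    print(process_in_parallel([float(i) for i in range(10)], num_blocks=3))
```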
The processor202can then perform a fourth function308and a fifth function309in series. Similar to the first function302and the second function304, the serial performance of the fourth function308and the fifth function309can include splitting blocks of data associated with different functionality for processing over multiple cores204. In general, each of the first function302, the second function304, the third function306, the fourth function308, and the fifth function309can be performed in a different processing block. As used herein, a processing block can refer to a specific task performed on a block of data. The processing block can be associated with one or more of the cores204, for example. Therefore, the method300can split blocks of data with the same functionality to process over multiple cores204, for example. Similarly, the method300can split blocks of data with different functionality to process over multiple cores204. In some other implementations of the method300, the same processing blocks (e.g., the cores204) can perform processing of data with single instruction, multiple data (SIMD), irrespective of the same or different functionality. In other implementations, the embodiments of the method300can support processing blocks of data with minimal state information by using overlapping data. As used herein, state information can include variables needed during feedback (e.g., feedback processing), data frame boundaries, etc. For example, in the case of feedback loops, state information can include the variables calculated within the loop that are needed during feedback in processing a continuous stream of data. State information can also include the location of a frame boundary within a data stream. Other examples include FIR filters, where the state information includes values stored in buffers (e.g., possibly many delay elements) that are needed to keep continuous data flowing. By ignoring state information and overlapping portions of adjacent blocks of data, processes can take advantage of parallel processing, using a variable level of overlap amongst the blocks of data. FIG.4is a graphical depiction of an embodiment of a method for feedforward or pre-calculation signal processing ofFIG.3. A method400can use the principles of the method300for series-parallel and/or parallel-series processing for multiple functions grouped as a process315. In one example, the first function302(FIG.3) can be a data ingest function305, in which the processor202receives data for processing. The second function304(FIG.3) can be a data split function310, in which the processor202can parse the data into overlapping blocks of data. The overlapped blocks of data can then be processed in parallel iterations of multiple functions as processing blocks315a-315n. For example, a first block of data can be processed by a group of functions in processing block315a, and another block of data can be processed by the group of functions in another processing block315b-315nexecuted in parallel with the processing block315a. A plurality of processing blocks315a-315nmay be executed in parallel and is not limited to two such processing blocks. The overlap in the blocks of data can provide a level of redundancy that is not heavily reliant (or not reliant at all) on state information. The less state information that is needed, the easier it is to process the blocks of data in parallel as opposed to a continuous stream. 
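As a concrete illustration of this ingest/split/process/combine flow, the following is a minimal sketch in Python, assuming the data is a NumPy array of complex I/Q samples; the helper names (split_overlapping, process_block, run) and the roughly 1% default overlap are hypothetical choices for illustration, not the claimed implementation.

    import numpy as np
    from concurrent.futures import ProcessPoolExecutor

    def split_overlapping(samples, block_len, overlap):
        # Parse the stream into blocks, each repeating the last `overlap`
        # samples of its predecessor so a block can be processed statelessly.
        step = block_len - overlap
        return [samples[i:i + block_len] for i in range(0, len(samples) - overlap, step)]

    def process_block(block):
        # Stand-in for a grouped set of functions (e.g., one of blocks 315a-315n).
        return 0.5 * block

    def run(samples, block_len=4096, overlap=41):  # overlap ~1% of the block
        blocks = split_overlapping(samples, block_len, overlap)
        with ProcessPoolExecutor() as pool:  # one block per core; call under
            processed = list(pool.map(process_block, blocks))  # a __main__ guard
        # Recombine, dropping the overlapped head of every block but the first.
        return np.concatenate([processed[0]] + [b[overlap:] for b in processed[1:]])

The roughly 1% overlap used here mirrors the overlap sizing discussed below in connection with symbol rate uncertainty.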
To indicate that a varying number of processing blocks315a-315nmay operate in parallel, three paths are depicted with three vertical dots between them, indicating that any number of paths can be included, such as, but not limited to, four, five, six, etc. The term "parallel" is used herein to describe that processing occurs in the processing blocks315a-315nat the same time. The packets being processed may be of different lengths from one processing block315a-315nto another, so the processing of packets may not proceed at the same rate or speed from one processing block315a-315nto the next. As noted below, some of the processing blocks315a-315nmay proceed faster or slower than others. Accordingly, the term parallel should not be limited to simultaneous or concurrent processing within the processing blocks315a-315n. The method400can further include a data combine function320, similar to the fourth function308(FIG.3), combining the processed data, and a data output function325, similar to the fifth function309(FIG.3). In a further example, the adjustable series-parallel or parallel-series arrangement of the various functions of the method300provides several methods of implementing feedforward processing to replace feedback loops. This is advantageous as it can increase throughput and avoid bottlenecks caused by delays in feedback processing. An additional advantage of the series-parallel or parallel-series processing provided by the method300and the method400is that arranging one or more of the desired algorithms within a processing block (e.g., one of the five processing blocks of the method300) allows the processor202to distribute the processing load (e.g., across multiple cores204) without concern for the speed of a given algorithm within a processing block (e.g., core204). Thus, each core204shares the same processing load, which eliminates bottlenecking issues caused by individual algorithms. An additional benefit of embodiments of the method300can include customizing a specific order of algorithms (e.g., processing blocks) to lower the computational burden within the processor202. As described below, the overall, multi-stage processing of a given process may be agnostic to the order of multiple sub-processes. Therefore, in some examples, performing the fourth function308prior to the third function306may have certain advantages. The method300can further implement different variable types for memory bandwidth optimization, such as int8, int16, and float, for example. This can accelerate certain algorithms (e.g., based on type). In addition, this can provide increased flexibility to maximize memory bandwidth. FIGS.5and6are functional block diagrams of embodiments of digital signal diversity combiners. Methods500and/or600for diversity combining can include feedforward block processing as described above in connection withFIGS.3and4. The method500and/or method600may comprise a plurality of blocks. In some examples, each block may represent a function block and perform functions in a manner similar to the function blocks306a,306b, . . .306n(FIG.3), etc. In another example, two or more of the plurality of blocks ofFIGS.5and/or6can be grouped together as a single "process"315that performs functions in a manner similar to the processing blocks315a,315b, . . .315n(FIG.4), etc. FIG.9is a functional block diagram of an embodiment of a channel simulator. Methods900for channel simulation can include feedforward block processing as described above in connection withFIGS.3and4. 
The method900comprises a plurality of blocks. In some examples, each block may represent a function block and perform functions in a manner similar to the function blocks306a,306b, . . .306n(FIG.3), etc. In another example, two or more of the plurality of blocks ofFIG.9can be grouped together as a single "process"315that performs functions in a manner similar to the processing blocks315a,315b, . . .315n(FIG.4), etc. FIG.10is a functional block diagram of an embodiment of a signal modulator for waveform generation. Methods1000for signal modulation can include feedforward block processing as described above in connection withFIGS.3and4. The method1000comprises a plurality of blocks. In some examples, each block may represent a function block and perform functions in a manner similar to the function blocks306a,306b, . . .306n(FIG.3), etc. In another example, two or more of the plurality of blocks ofFIG.10can be grouped together as a single "process"315that performs functions in a manner similar to the processing blocks315a,315b, . . .315n(FIG.4), etc. At block305, the SPS150can ingest or otherwise receive the digital bit stream152(e.g., via the network152). The data ingest at block305can receive the digital bit stream data from a network connection (e.g., Ethernet). At block310, the data can be split into parallel data streams by a data splitter. In some embodiments, the processor202can perform the data splitting functions required in block310. In some other embodiments, a separate data splitting component (e.g., a data splitter) can be included in the device200(FIG.2). Splitting the data into multiple parallel streams can allow parallel processing of the downlink signal, such as the downlink signals160and170. The method300can therefore take advantage of feedforward or pre-calculation processing to allow the incoming digitized signal data to be broken into smaller pieces and then processed on multiple cores204. The digital bit stream152can be split to form overlapping packets of in-phase/quadrature (I/Q) pairs. In some embodiments, the "overlapping packets" can include data packets in which successive packets are overlapped with adjacent data packets. In some embodiments, the data packets may all be the same length, but overlapped. The overlap in data packets can be at the beginning of the data packet or at the end. In addition, a data packet can overlap with both the preceding and the following data packets. The data packets can also have different lengths (e.g., varying amounts of data). Therefore, a first packet sent to the processing block315amay overlap or otherwise repeat certain data of a second packet sent to the processing block315b. The amount of overlap between packets, or overlap size, can be programmable and set as needed. In some examples, the overlap can be set to one percent (1%) of the packet size. This overlap size can be increased or decreased depending on need. For example, one particular parameter that can impact the overlap size is the uncertainty of the symbol rate in the digital bit stream152. For most signals, the worst case uncertainty is less than 1%, so a 1% overlap covers most cases. In some other embodiments, the overlap can be 2%, 3%, 4%, 5%, 6%, 7%, 8%, 9%, or as high as 10%, or anywhere in between, as needed. It is also possible to have less than 1% overlap; the overlap could be 0.1% or lower if the data rate uncertainty is less than 0.1%. The processor202can implement single instruction, multiple data (SIMD) processing on the digital bit stream152. 
In some examples, SIMD can include Advanced Vector Extensions using 512 bits (AVX-512), allowing 16 floating point operations on a single CPU core in a single CPU instruction. AVX-512, for example, can process enormous amounts of data with the CPU (e.g., the CPU202). For example, the processor202(and the device200) can receive a 500 MHz bandwidth data stream. 500 MHz of bandwidth is significant in some respects because that is a generally accepted practical limit of a 10 Gigabit Ethernet link. Sampling the data at 500 MHz, with 8-bit samples for an I/Q pair and including parity bits, can saturate a 10 Gbit Ethernet link. The 500 MHz example is not limiting on the disclosure. Data pipes larger than a 10 Gbit Ethernet link are possible. In addition, the processing can be split into n-number of parallel blocks (e.g., block315) to accommodate any amount of data. Process315is shown in dashed lines and depicts a processing step of the method300. Process315is shown executed in multiple, parallel steps, or processing blocks315a,315b, . . .315n. The process315, as used herein, can refer to a collection of processing functions performed by the processor202, for example. The digital bit stream152can be sent into multiple parallel processing blocks315a,315b, . . .315nto spread the processing load across several cores204. Individual processing blocks315a,315b, . . .315ncan represent individual iterations of cloud processing. Thus, the processing of each of the processing blocks315a-315ncan be associated with a (cloud-based) core204a-204n. The number of processing blocks315a-315nneeded varies based on the amount of data being processed. In some embodiments, the number of processing blocks315a-315ncan be limited by the number of logical cores available via the network152or, for local hardware processing, within the processor202. In some other embodiments, memory bandwidth constraints can cause a bottleneck in the signal processing. Memory bandwidth can refer to the rate at which data can be read from or stored into a semiconductor memory (e.g., the memory206) by a processor (e.g., the processor202). In some embodiments, the number of processing blocks315a-315ncan vary. In general, the fewer processing blocks315a-315npresent, the better, to limit the number of cores needed for the entire process. This can further enable the system to fit into smaller virtual private cloud (VPC) machines, which are cheaper to operate. A VPC can include the SPS150having several CPUs, for example. In some embodiments, 8 processing blocks315a-315ncan be used for a 10 Gbit Ethernet link. Such an embodiment may not include forward error correction processing blocks. In some other embodiments, the only practical limitation on the number of processing blocks315a-315nneeded is the bitrate and bandwidth of the communication link (e.g., size of the pipe). Accordingly, any number (n) of processing blocks315a-315nis possible. In some embodiments, however, a practical limitation on the number (n) of processing blocks315a-315nmay be present based on the number of threads that can be run on a CPU or the number of cores204in the processor202. However, if the limits are reached within a single CPU, multiple CPUs (e.g., the processor202) together within the SPS150(e.g., a VPC) can have an effectively unlimited number of cloud-based CPUs or cores204to perform the processing. In addition, the processor202can create new processing blocks315a-315nas needed. 
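As a back-of-the-envelope check on the 10 Gbit Ethernet figure above, the arithmetic can be sketched as follows; the 25% parity/framing overhead is an illustrative assumption consistent with the "8-bit I/Q plus parity" description, not a value stated in the disclosure:

    # Why a 500 MHz complex sample stream can saturate a 10 Gbit Ethernet link.
    sample_rate = 500e6                        # complex samples per second
    bits_per_sample = 2 * 8                    # 8-bit I plus 8-bit Q per sample
    payload = sample_rate * bits_per_sample    # 8.0e9 bits/s of raw I/Q data
    with_overhead = payload * 10 / 8           # assumed parity/framing overhead
    print(payload / 1e9, with_overhead / 1e9)  # -> 8.0 and 10.0 (Gbit/s)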
The processing cores204can be spread across multiple distributed processors (e.g., the processor202) as needed for throughput and efficiency. The processing blocks315a-315nare arranged in such a way that it does not matter which processing block315a,315b, . . .315nis performed the slowest (or fastest). The method300can share the processing load across the processing blocks315a-315nand therefore alleviate any processing delays caused by bottlenecking issues at individual processing blocks315a-315n. For example, individual subprocesses of the processing blocks315a-315n(see description ofFIG.4, below) may not be performed or occur at equal rates (e.g., some are faster than others). Accordingly, the larger process of the method400(FIG.4), for example, can account for variations in performance or processing times. The processing blocks315can then be created as many times as needed to handle the incoming data. In some embodiments, each processing block315a-315ncan represent a collection of signal processing algorithms performed by the processor202. As used herein, an algorithm can refer to the smallest collection of functions or method steps that perform a desired function. Multiple exemplary algorithms are described herein. An exemplary benefit of the method300is the ability to create more processing blocks315a-315nwhen needed. In general, the processing blocks315a-315ncan be implemented in software, and so can be created or eliminated as needed to suit a given data rate or processing load. Each processing block315a-315ncan be rearranged to fit the needs of different received waveforms (e.g., the downlink signals160and/or170) and the associated digital bit streams154. At block320, the processed signal data from the multiple processing blocks315can be recombined to form the original data encoded and modulated on the downlink signals160,170. In some embodiments, the processor202can perform the functions of a data recombiner. In other embodiments, the device200can have an additional component to perform such functions. Each data packet or processed block of data can have a time stamp. The data recombiner (e.g., the processor202) can order the data blocks based on the time stamps and compare the phase between the ordered blocks. The recombiner can further adjust the phase of adjacent blocks to reorder the data stream. In some embodiments, the phase of a subsequent data block can be adjusted to match the phase of a previous data block. For all processing blocks shown in process315, there are at least four options for running:
1) Multiple blocks running, with each sub-element (e.g., each block315a-315n) within the processing block315getting its own core (e.g., cores204a-204n);
2) Multiple blocks running, with the processing block315getting just one dedicated core for the entire block;
3) A single block running, with each sub-element within the processing block getting its own core; and
4) A single block running, with the processing block getting just one dedicated core for the entire block.
The more cores that can be run, the higher the rates that may be achievable. At block325, the device200can output the data to an appropriate receiver. In some examples, such a receiver can be one or more mission operations centers. This data can be mission dependent (e.g., the purpose of the satellite), and can include, among other things, weather data, image data, and SATCOM payload data. 
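The recombining step described above lends itself to a short sketch; the (timestamp, block) tuple structure and the seam-phase comparison below are hypothetical illustrations of the described behavior, not the claimed implementation:

    import numpy as np

    def recombine(tagged_blocks):
        # tagged_blocks: list of (timestamp, complex-symbol array) tuples.
        ordered = [blk for _, blk in sorted(tagged_blocks, key=lambda t: t[0])]
        out = [ordered[0]]
        for blk in ordered[1:]:
            # Compare phase across the seam and rotate the newcomer so its
            # first symbol lines up with the previous block's last symbol.
            dphi = np.angle(out[-1][-1]) - np.angle(blk[0])
            out.append(blk * np.exp(1j * dphi))
        return np.concatenate(out)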
In general-purpose CPUs, there are at least three main factors that may limit high rate performance: 1) data ingest, 2) CPU capacity, and 3) memory bandwidth utilization. Data ingest refers to how fast data can be fed into the CPU. CPU capacity is driven by the CPU clock speed and the number of cores within the CPU. Memory bandwidth refers to how quickly data can be transferred between the CPU and external DDR RAM (not CPU cache). Memory bandwidth may be determined by the number of memory lanes and the DDR RAM clock speed. In certain cases, the limiting factor for achieving high rate processing is CPU capacity, but in other cases it is memory bandwidth. Care must be taken to determine which of the above cases is impacting performance; if the processing is memory bandwidth limited, the embodiments described below are non-limiting examples of ways to lower the memory bandwidth utilization within the approaches proposed herein. Function calls within a given processing block can be arranged in such a manner as to optimize CPU computation or memory bandwidth utilization. For example, referring to the function calls (illustratively depicted as blocks) shown inFIG.5, for the given example, the various function calls (e.g., timing recovery block, carrier recovery block, correlator block, time adjust block, phase rotate block, power and Es/No estimator block, amplitude adjust block, and weighted combiner block) can be grouped in such a way as to minimize memory bandwidth. These function calls can be called independently so that each function is completed on a set of data before another function starts, which simplifies each function. In another example, a plurality of, or all of, the function calls can be combined into one block, such that data is not transferred to RAM after each executed function and the memory bandwidth for the combined function is much smaller than when the functions are called independently. In the case of independently called functions, a first function call (e.g., the timing recovery) may be performed over the whole data set before a second function call (e.g., the correlator) would occur. In the case of combining, just a portion of data would be processed in the first function call before the second is executed. In this way, memory bandwidth drops. This method can apply to any grouping of functions, not just those illustrated inFIG.5. For example, the method may be applied to the methods shown inFIG.6or any other grouping of function calls to be executed in a block as disclosed herein (e.g., the various function call blocks illustrated inFIGS.7-10). Another way to improve memory bandwidth utilization may be to collapse several function call blocks into one block, similar to the approach described above. For example, as described in greater detail below with reference toFIG.5, a plurality of functions may be necessary to perform timing and carrier recovery. Normally, for ease of operation and CPU optimization, each function would require its own block, but to lower memory bandwidth utilization, all functions can be combined into one processing block. This tradeoff lowers memory bandwidth utilization for a hit in CPU performance.
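The difference between the two execution orders described above can be sketched as follows; the stage functions f and g (assumed elementwise) and the chunk size are hypothetical, and the point is only the ordering of the loops:

    import numpy as np

    def independent(x, f, g):
        # Each function runs over the whole buffer before the next starts,
        # so the data streams through RAM once per function.
        return g(f(x))

    def fused(x, f, g, chunk=1 << 14):
        # Both functions run per cache-sized chunk, so the data streams
        # through RAM roughly once for the combined pipeline.
        out = np.empty_like(x)
        for i in range(0, len(x), chunk):
            out[i:i + chunk] = g(f(x[i:i + chunk]))
        return out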
Digital Signal Post-Detection Diversity Combiner Running on General Purpose CPUs Employing Parallel Processing on Multiple Cores to Achieve High Throughput Operating in a Cloud Environment: As described above,FIGS.5and6are functional block diagrams of example implementations of methods500and600. In various examples, each of methods500and600may be an example of a Diversity Combiner method. To indicate that a varying number of signals may be processed in parallel, two paths are depicted with three vertical dots between them, indicating that any number of paths can be used, such as four, eight, etc. Diversity combining may be used to combine multiple antenna feeds together such that the signals are all aligned in time and phase, and weighted based on signal quality, to optimize combined information transfer on the multiple input signals A through N. Signal quality may be determined using, for example but not limited to, one or more of signal-to-noise ratio, energy per symbol to noise power spectral density (Es/No), power estimates, received signal strength indicators (RSSI), and the like. The multiple antenna feeds can be from one or more remote locations, such as the platform110or the satellite111. Satellites are used as an example herein, but other wireless transmission systems may be implemented, such as radio antennas (e.g., the antenna122) or other types of transmitters. Accordingly, the use of a satellite is not limiting on this disclosure. In the case of a satellite as shown inFIG.1, the diversity combining can also be used during an antenna handover event when the platform110and the satellite111are visible from the same ground station (e.g., the ground station120) but, for example, the satellite111is falling below the horizon (e.g., in the east) and the platform110is rising over the horizon (e.g., in the west). In order to properly combine the downlink signals, several calculations must be performed. The disclosed system can digitize and convert the signals into digital samples which are then transported to a signal processing element. The system can further compute and compensate for Doppler effects. The system can also determine the residual phase and frequency delta (e.g., difference) between the downlink signals as well as the time differential and the estimated signal-to-noise ratios of each signal. Following these operations, the signals are then combined together. There are many approaches that may be used to combine signals. For example, signals may be combined using a Pre-Detect (Pre-D) Diversity Combiner and/or a Post-Detect (Post-D) Diversity Combiner. Pre-D Diversity Combiners may be configured to combine signals before executing a match filter (also referred to as a detector). An example implementation of a Pre-D Diversity Combiner is described in PCT/US2020/65351, the disclosure of which is incorporated by reference herein in its entirety. Post-D Diversity Combiners may be configured to combine signals after completion of a match filter function. As such, Post-D Diversity Combiners may offer simplified methods for performing diversity combining over the Pre-D methods since the signals and data packets are discrete digital samples. Thus, the functions and complexity may be reduced as compared to the Pre-D Diversity Combiners. For example, Post-D may be simpler since execution comes after the match filter and combination occurs in symbol space. This means the time adjustment may be done in whole-symbol steps only, and sub-sample adjustment is unneeded, which is not the case for Pre-D. Implementations of Post-D Diversity Combiners may include Post-D End (referred to herein as Post-DE, an illustrative example of which is shown inFIG.5) and Post-D Mid (referred to herein as Post-DM, an illustrative example of which is shown inFIG.6). 
As used herein, Post-DE may refer to a combiner where the signal is combined after a match filter is executed, for example, after executing full demodulation, including carrier and timing recovery. That is, combining in the Post-DE method occurs after both the timing recovery and the carrier recovery are locked. As used herein, the term "locked" refers to a correct demodulation of signals in which proper timing and/or carrier alignment is achieved via timing recovery and/or carrier recovery, respectively. As used herein, Post-DM may refer to a combiner where input signals are combined after partial execution of the full demodulation process, for example, after timing recovery of the demodulation processing chain and a match filter but before carrier recovery of the demodulation processing chain. Thus, the Post-DM method allows for combining after the timing recovery is locked, but before carrier recovery. An example of a non-limiting benefit of the Post-DM is that the combined signal can achieve a higher Es/No and/or signal-to-noise ratio (SNR) before carrier recovery is executed. Since carrier recovery may fail at higher Es/Nos than timing recovery, the Post-DM method may improve the entire system's Es/No sensitivity. However, the Post-DM method may include an increased complexity cost in execution. The Post-DE method, by contrast, may be simpler to set up and execute, but carrier recovery may need to be locked prior to combining, which usually is a limiting factor to receiver sensitivity. This limits how low in terms of Es/No and/or SNR the diversity combiner can operate, and may limit its usefulness for powerful Forward Error Correction (FEC) codes like those used in some waveform standards, such as DVB-S2. The Post-DM and Post-DE methods disclosed herein illustrate two possible high-level examples for Post-D Diversity Combiner methods. It will be appreciated that the embodiments herein are not limited to only these two methods. Other methods are possible. FIG.5illustrates an example Post-DE Diversity Combiner as method500. As shown inFIG.5, method500receives input samples from multiple antenna feeds (e.g., downlink signals160and/or170are received and sampled at the antenna and fed into method500), combines the inputs together, and outputs a combined signal such that the signals are aligned in time and phase, and weighted based on signal quality, to optimize combined information transfer on the multiple signals. Method500may be executed by a processor (e.g., processor202ofFIG.2), for example, implemented as the SPS150ofFIG.1. The method500comprises a plurality of function blocks, for example, a plurality of timing recovery blocks510a-510n(collectively referred to as timing recovery block(s)510or block(s)510), a plurality of carrier recovery blocks520a-520n(collectively referred to as carrier recovery block(s)520or block(s)520), one or more correlator block(s)530, a plurality of time adjust blocks540a-540n(collectively referred to as time adjust block(s)540or block(s)540), a plurality of phase rotate blocks550a-550n(collectively referred to as phase rotate block(s)550or block(s)550), a plurality of amplitude adjust blocks560a-560n(collectively referred to as amplitude adjust block(s)560or block(s)560), a plurality of power and Es/No estimator blocks565a-565n(collectively referred to as power and Es/No estimator block(s)565or block(s)565), and one or more combiner block(s)570. 
In the illustrated example, a plurality of blocks510a-510n, blocks520a-520n, blocks540a-540n, blocks550a-550n, blocks560a-560n, and blocks565a-565nare shown for executing functions on samples of a plurality of downlink signals received via a plurality of antenna feeds, where each block is executed on a corresponding signal. Any number of signals is possible; however, the example herein will be described with reference to two signals (e.g., sample A and sample N). In the illustrative example ofFIG.5, a given timing recovery block510and corresponding carrier recovery block520may be part of the full demodulation process, including a match filter, for a respective input signal. For example, blocks510aand520amay be part of the full demodulation process of signal A, while blocks510nand520nmay be part of the full demodulation process of signal N. Each respective block510and block520may be configured to demodulate a respective input signal and may be referred to herein as a demodulator processing chain. Thus, the Post-DE method shown inFIG.5is configured to combine input signals at the end of (e.g., following execution of) the demodulator chain. As such, the combiner logic comes after the full demodulation, but before forward error correction. As described above, the plurality of blocks of method500may each represent a function and may be implemented as one or more of the functions306a,306b, . . .306n(FIG.3). For example, as shown in the illustrative implementation ofFIG.5, the correlator block530may be implemented as function306ofFIG.3, such that data from the carrier recovery blocks520a-520nmay be split into blocks of data and processed in parallel functions306a-306n. Similarly, as shown inFIG.5, the combiner block570may be implemented as function306and executed as a plurality of functions306a-306nto process a plurality of blocks of data in parallel. While specific examples of blocks are shown implemented as functions306, these examples are not intended to be limiting, and any block of method500may be implemented as a function306. In another example, alone or in combination, a plurality of blocks shown inFIG.5can be grouped together as a single "process"515that performs functions in a manner similar to the process315ofFIG.4. That is, a plurality of blocks ofFIG.5may be grouped together as process515and executed in multiple, parallel iterations as processing blocks315a,315b, . . .315n(FIG.4). For example, different portions of the method500can be grouped together as a process515and run in series-parallel and/or parallel-series processing as described above in connection withFIG.4. In the illustrative example shown inFIG.5, the timing recovery block510nand carrier recovery block520nfor executing a match filter along the processing path of signal N are grouped as a process515. In this case, with reference toFIG.4, the input samples may be ingested at block305, split into overlapping blocks of samples at block310, and each overlapping block of data may be processed in multiple, parallel iterations of timing recovery blocks510nand carrier recovery blocks520nas processing blocks315a-315n. The processed overlapping data blocks are then output to the data combine block320for combining the processed data and then output by block325for processing by a subsequent block of method500. The data combine block320ofFIG.4is not to be confused with the combiner block570: block320recombines the parallel block processing of315a-315n, whereas combiner block570executes diversity combining, as described below. 
Similarly, as illustratively depicted inFIG.5, the time adjust block540n, phase rotate block550n, power and Es/No estimator block565n, and amplitude adjust block560nare shown grouped as a processing block315. While specific examples of blocks are shown grouped together as a process515, these examples are not intended to be limiting, and any grouping of one or more blocks of method500may be grouped together as a process515and executed in parallel as described in connection withFIG.4. For example, one or more of time adjust block540n, phase rotate block550n, and amplitude adjust block560nmay be executed as a processing block315. Furthermore, while only portions of the path corresponding to the input signal N are shown inFIG.5as being grouped together, it will be understood that various blocks for the signal A path can also be grouped together as a process515and executed in parallel. For example, blocks510aand520amay be grouped as a first process515and blocks540a-565amay be grouped together as a second process515. Other groupings are possible, as noted above. In various examples, the plurality of blocks ofFIG.5may be implemented using SIMD processing techniques as described throughout the present disclosure. SIMD techniques may offer increased throughput and reduced memory bandwidth requirements. Increasing the functionality of each processing block executed using SIMD techniques may further reduce memory bandwidth requirements. At blocks510and520, the processor202(e.g., one or more cores204) can perform timing and carrier recovery on respective input samples from the downlink signals received via respective antenna feeds. An example of a timing and carrier recovery method is illustratively shown inFIGS.7and8. FIG.7is a flowchart of an example of a method for timing and carrier recovery implemented by the signal processing method ofFIG.3and/orFIG.4.FIG.7illustrates method700comprising a plurality of blocks, one or more of which may be implemented as a process315such that the groupings of blocks are processed in each of processing blocks315a-315nofFIG.4. Each of the blocks of method700may also be implemented as a function306such that a single block can be executed across functions306a-306nofFIG.3. Execution of a block according toFIG.3may be performed separately or in combination with execution of a process according toFIG.4. The method700can be used for standard waveform processing as opposed to the offset waveform processing described below. For example, standard waveform processing can be used for waveforms that map bits into symbols and then modulate the symbols onto a carrier wave. Examples of standard waveforms include binary phase shift keying (BPSK), quadrature phase shift keying (QPSK), 8PSK, 16APSK, 32APSK, and 64APSK, as well as quadrature amplitude modulation (QAM) waveforms. The method700may be an illustrative example of the timing recovery processing block510and the carrier recovery block520ofFIG.5. At block705, the processor202(e.g., one or more of the cores204) can perform a timing recovery error calculation on the received data packets (e.g., samples of the digitized bitstream154or the digitized downlink signal160and/or170). The timing recovery error calculation can provide the needed phase information to properly align a matched filter to the incoming data stream (e.g., the digitized bit stream134). 
The match filter is used to match the transmitted waveform in the time domain and is aligned by the timing error to capture all the energy in the received signal to optimize performance. The results of the timing recovery error calculation can include three parameters: 1) starting phase in degrees; 2) frequency adjustment in Hertz (Hz); and 3) Doppler rate adjustment in Hz/sec. The foregoing units are exemplary and are not limiting on the disclosure. Other equivalent units are also possible. At block710, the processor202(e.g., one of the cores204) can perform a timing recovery on the packets to align an internally generated match filter to the received samples that were generated with the modulator's respective match filter. The alignment is based on the calculation in block705. The output of block710is the synchronized (e.g., time-corrected) symbols within the data packets received at block705. Examples of the Timing Recovery Error Calc block705and Timing Recovery block710are described in U.S. Pat. No. 10,790,920, the disclosure of which is hereby incorporated herein by reference as if set forth in full. For example, a Gardner Timing Error Detector can be applied to incoming data to create timing information, as is known in the art. In another embodiment, the incoming sample stream can be delayed by one sample, and the non-delayed data can then be multiplied by the conjugate (conjugate multiplication) of the delayed data. Both approaches have advantages and drawbacks, so which to implement is a tradeoff. Timing spikes, generated by the Gardner Timing Error Detector, can be mixed with a timing estimate or an estimate of the symbol rate; the mixed signal may be decimated to reduce the sampling rate; a phase unwrap calculation may be performed on the decimated samples; and a curve fit calculation may be performed to determine phase, frequency, and Doppler rate offset information that can be applied to update the timing estimate. At block715, the processor202(e.g., one of the cores204) can perform a carrier recovery error calculation on the packets to determine phase and frequency information. At block720, the processor202(e.g., one of the cores204) can perform a carrier recovery on the packets based on the calculation in block715. Carrier recovery compensates for unknown frequency, Doppler rate, and phase offsets in the downlink signal (e.g., downlink signals160and/or170) from the spacecraft (e.g., the satellite111). The two most common sources of uncertainty are Doppler effects from the spacecraft motion and imperfect oscillators within the spacecraft. The processor202can apply the phase, frequency, and Doppler rate corrections from block715to form a synchronous symbol corresponding to the modulated data in the downlink signal (e.g., downlink signals160and/or170) at the output of block720. Examples of the Carrier Recovery Error Calc block715and Carrier Recovery block720are also described in U.S. Pat. No. 10,790,920, the disclosure of which is hereby incorporated herein by reference as if set forth in full. For example, an incoming signal can be raised to a certain power based on modulation type; the mixed signal can be decimated to reduce the sampling rate; a phase unwrap calculation can be performed on the decimated samples; a curve fit calculation can be performed to determine phase, frequency, and Doppler rate offset information that can be applied to update the carrier recovery algorithm; and the curve fit can be used to update (and improve) the carrier frequency estimate. 
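For concreteness, textbook forms of the two error detectors referenced above can be sketched as follows; these are generic illustrations (assuming two samples per symbol and an Mth-power spectral-line estimator), not the specific estimators of U.S. Pat. No. 10,790,920:

    import numpy as np

    def gardner_ted(x):
        # Gardner timing error at 2 samples/symbol:
        # e[k] = Re{ conj(x_mid[k]) * (x_on[k+1] - x_on[k]) }
        on_time, mid = x[0::2], x[1::2]
        n = min(len(mid), len(on_time) - 1)
        return np.real(np.conj(mid[:n]) * (on_time[1:n + 1] - on_time[:n]))

    def mth_power_freq_est(x, fs, m=4):
        # Raise to the Mth power (m=4 for QPSK) to strip the modulation,
        # then locate the spectral line at m times the carrier offset.
        spec = np.abs(np.fft.fft(x ** m))
        k = int(np.argmax(spec))
        return np.fft.fftfreq(len(x), d=1.0 / fs)[k] / m  # offset in Hz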
In some implementations, blocks705and710may be grouped together as a single processing block, for example, as shown inFIG.5as timing recovery processing block510. Similarly, in some implementations, blocks715and720may be grouped together as a single processing block, for example, as shown inFIG.5as carrier recovery processing block520. In some implementations, one or more additional processing blocks may be executed between blocks710and715, for example, as illustrated inFIG.6. Furthermore, the timing recovery error calculation705and the timing recovery block710may be grouped together as a process315ofFIG.4. In the case where timing recovery is performed across multiple processing blocks315a-315n, the signal may be combined via block320prior to execution of the grouped blocks (e.g., process315) and output by block325ofFIG.4as a single-thread operation for each signal. While blocks305,310,320, and325are not illustrated inFIG.7, it will be understood that such blocks may be present such that input data into a process may be ingested (305) and split (310) to perform the grouped functions as processing blocks315a-315nand that the resulting processed data may then be combined (320) and output (325) for downstream processing. The output signal is now in symbol space, as shown inFIG.7, and downstream functions may be executed on the resulting output symbols. The more processing blocks executed, the higher the processing rate and throughput that can be achieved. After converting from the sample space to the symbol space at block710, the signal symbols may be corrected by blocks715and720. The blocks715and720may be grouped together as a process315ofFIG.4. Thus, symbols output from the block710may be fed again into a carrier recovery process315and executed across processing blocks315a-315n, where the block710is implemented as a separate process and/or function from block715, for example, as described above. As another example, each of the blocks705-720may be grouped as a single processing block315and executed across processing blocks315a-315nofFIG.4, for example, as described above in connection withFIG.5. Furthermore, each block705-720may be implemented as function306and executed across the cores204as functions306a-306n. FIG.8is a flowchart of an embodiment of another method for timing and carrier recovery implemented by the signal processing method ofFIG.3/FIG.4(the processes that occur in each of blocks315a-315n).FIG.8illustrates another method800which may be similar to the method700(FIG.7), combining and rearranging some of the functional blocks. Unlike the method700, the method800can be used for offset waveform processing. For example, offset waveform processing can be used for waveforms having an offset or stagger between the In-phase (I) and Quadrature (Q) channels, such as Offset quadrature phase-shift keying (OQPSK), minimum-shift keying (MSK), Gaussian minimum-shift keying (GMSK), and shaped-offset quadrature phase-shift keying (SOQPSK). At block805, the processor202(e.g., one or more cores204) can perform a timing and carrier recovery error calculation on the packets. The timing recovery error calculation and the carrier recovery error calculation are similar to those performed in blocks705and715(FIG.7). In the method800, though, the carrier recovery is performed before timing recovery of the symbols. The input to the method800is the samples and the output is corrected, synchronous symbols. 
At block810, the processor202(e.g., one or more cores204) can perform a carrier recovery operation based on the calculation from block805. An example of the Timing and Carrier Recovery Error Calc block805is also described in U.S. Pat. No. 10,790,920, the disclosure of which is hereby incorporated herein by reference as if set forth in full. For example, a digitized bit stream can be squared, which can result in spikes being created in the frequency domain. Each spike can then be mixed with a mixing signal created from a composite estimate of the carrier frequency and symbol rate. Both mixed signals may then be decimated to reduce the sampling rate; a phase unwrap calculation may be performed on both mixed signals; a curve-fit calculation may be performed; and the result passed on to the carrier recovery and timing recovery algorithms to update the information. Referring back toFIG.5, corrected symbols for each input signal output from respective blocks520a-520nare fed to block530. At block530, the processor202(e.g., one or more cores204) may calculate time and phase relationships between input signals (two input signals in this example). For example, the block530may perform a correlator function using a fast Fourier Transform (FFT) on the corrected symbols, which can output both time and phase information indicating respective offsets or stagger between the input signals from the same operation. Once a coarse correlation is run using an FFT, a fine correlation can be run over a smaller set of data to ensure time and phase alignment have not changed. Coarse correlation may refer to running timing and phase differences between the two signals over many symbols to determine the time uncertainty between the two signals. For the case with a single satellite and two antennas, this time is usually small (e.g., microseconds or less) and may vary based on cable lengths and analog equipment timing differences. For rates less than 1 Msymbol/second (Msps), coarse timing estimation may need only to cover +/−1 symbol. If the symbol rate is 100 Msps, coarse timing estimation may need to cover +/−100 symbols. If the scenario is antenna handover, for example, where there are two satellites and two antennas, the timing difference between the two signals could be 100 milliseconds or greater. For 1 Msps, coarse timing estimation may then need to cover at least +/−100k symbols. And for 100 Msps, coarse timing estimation may need to cover 10M symbols. Fine correlation may need to run over at least 1 symbol, but may be run over 3 to 15 symbols to ensure the timing alignment is not lost once found by coarse correlation. For each acquisition mode, once timing is known, it is possible to find the phase difference between the two signals by comparing either the phase of the FFT result, as is the case for coarse correlation, or the phase of a properly time-aligned correlator, for the case of fine correlation. At block540, timing information from the correlator block530is fed in, and the processor202(e.g., one or more cores204) adjusts the timing of the input signals based on the timing information. Since the signal has already been properly demodulated and passed through a matched filter and is now just symbols, time alignment is straightforward, since only an integer number of symbols of delay needs to be applied instead of fractional samples, as is the case for Pre-D combining. Block540may apply a delay based on the timing offset between the input signals as calculated in block530for proper alignment therebetween. 
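A minimal FFT-based sketch of the coarse correlation described above follows; the function name and the power-of-two FFT sizing are illustrative assumptions. The peak location gives the whole-symbol lag applied by the time adjust block540, and the peak phase gives the rotation applied by the phase rotate block550:

    import numpy as np

    def coarse_correlate(a, b):
        # Circular cross-correlation of two corrected-symbol streams via FFT.
        n = len(a) + len(b) - 1
        nfft = 1 << (n - 1).bit_length()          # next power of two
        corr = np.fft.ifft(np.fft.fft(a, nfft) * np.conj(np.fft.fft(b, nfft)))
        k = int(np.argmax(np.abs(corr)))
        lag = k if k < nfft // 2 else k - nfft    # map wrap-around to negative lags
        phase = np.angle(corr[k])                 # residual phase offset (radians)
        return lag, phase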
For example, a delay corresponding to the timing offset between input signal A and input signal N is applied such that the symbols of the respective signals are aligned in the time domain. For example, block530may calculate time relationships for aligning the symbol streams from blocks520a-520nso that each symbol from one signal chain matches symbols in other symbol chains in terms of symbol order. For example, for the case with a single satellite (e.g., satellite111or platform110) and two antennas (e.g., antennas122,132, and/or142), it is possible to label each symbol from the satellite transmitter with a number that corresponds to every symbol transmitted. The correlator530determines that the first symbol would be symbol1, the 100th symbol would be symbol100, and so on. The time adjust block540then makes sure to align symbol1from signal A to symbol1from signal N. Phase offset information calculated at block530is fed to block550and, at block550, the processor202(e.g., one or more cores204) rotates the phase of at least one input signal to align the phases of the signals. Block550may remove Doppler effects by rotating one of the signals to properly align with the other signal(s) based on the phase offset information from block530. This operation can be achieved using a complex multiply, as is known in the art. In some cases, if the phase change is +/−90 or 180 degrees, a combination of swapping and/or inverting of the in-phase (I) and quadrature (Q) channels may be performed, as is known in the art. As an illustrative example, the phase of one signal must be properly adjusted to match the phase of the other. For example, for QPSK, there are four possible phases for each symbol. Since the demodulator does not guarantee how these four phase possibilities line up after demodulation, one of the signals' phases must be adjusted to match the other. The block530calculates this adjustment amount. For example, say the phase of signal A of symbol1is 45 degrees and the phase of signal N of symbol1is 135 degrees. The block530determines that signal N needs to be adjusted by negative 90 degrees so that symbol1of signal N (and all other symbols) lines up with that of signal A, and this information is passed to block550to rotate the phase of the signal accordingly. At block565, the processor202(e.g., one or more cores204) estimates signal power and Es/No for each input signal. The Es/No may be measured using any one of several approaches. One illustrative example for measuring Es/No is to calculate (C/N)×(B/fs), where C/N is the carrier-to-noise ratio or signal-to-noise ratio, B is the channel bandwidth in hertz, and fs is the symbol rate in symbols per second. However, it will be appreciated that any approach for measuring Es/No will be equally applicable to the embodiments disclosed herein. In another example, block565may estimate signal quality, for example, signal-to-noise ratio, power estimates, received signal strength indicators (RSSI), and the like. These estimates may be fed to block570to weight each input signal appropriately for combining. The power and Es/No estimates from block565may be fed to block560along with amplitude information from block530indicating differences in signal amplitude. As another example, the amplitude information can be applied by the demodulation process (e.g., blocks510and520) directly, because these blocks may include an automatic gain control (AGC) loop. 
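The estimates from block565drive the combining weights described next; a minimal sketch, assuming the weights are simply the linear Es/No values normalized to sum to one (which reproduces the 50/50 and 66/34 examples below):

    import numpy as np

    def weighted_combine(signals, esno_db):
        # signals: equal-length, time- and phase-aligned symbol arrays;
        # esno_db: one Es/No estimate per signal, in dB.
        w = 10.0 ** (np.asarray(esno_db, dtype=float) / 10.0)  # dB -> linear
        w /= w.sum()                                           # normalize weights
        return sum(wi * s for wi, s in zip(w, signals))

    # Example: a 3 dB Es/No difference yields weights of about 0.66 and 0.34:
    # combined = weighted_combine([a, n], esno_db=[13.0, 10.0])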
In either case, at block560, the processor202(e.g., one or more cores204) adjusts the amplitude of each respective signal by, for example, scaling the amplitude of the input signal based on the estimated power and Es/No provided from block565. For subsequent combining, the signals A-N are weighted by the difference in Es/No between them. For example, if both signals have the same Es/No, a 50/50 weighting may be applied, where each signal is scaled by 0.5 (or weighted by 50%) before combining. If the Es/No difference between signals is 3 dB, a weight of 66/34 may be applied, where the higher Es/No signal is scaled by 0.66 (or weighted by 66%) and the lower Es/No signal is scaled by 0.34 (or weighted by 34%) before combining. Once the signals have been time and phase aligned and the amplitude adjusted as set forth above, at block570the processor202(e.g., one or more cores204) may apply scaling based on Es/No estimates and power estimates calculated in block565. For example, a signal having a better signal-to-noise ratio as compared to another signal may be assigned a higher weight than the other signal and scaled accordingly. Similarly, higher Es/No estimates and/or power estimates may be assigned greater weights and scaled accordingly. SIMD techniques may be employed to efficiently scale and combine the multiple signals (e.g., two signals in this example). Block570may sum the signals after all the adjustments have been made. While the blocks540,550, and560are illustratively executed in a particular order, it will be appreciated that the embodiments herein are not limited to only the illustrated order. Blocks540,550, and560may be executed in any order as desired and/or executed in parallel. FIG.6illustrates an example Post-DM Diversity Combiner as method600. As shown inFIG.6, the method600receives input samples from multiple antenna feeds, combines the inputs together, and outputs a combined signal in a manner substantially similar to that ofFIG.5. The method600comprises the same blocks as those in method500, configured to execute substantially the same functions but executed in the order shown inFIG.6. For example, method600comprises the timing recovery blocks510, the carrier recovery block520, the one or more correlator block(s)530, the time adjust blocks540, the phase rotate blocks550, the amplitude adjust blocks560, and the one or more combiner block(s)570. As with method500, method600includes the plurality of blocks510a-510n, blocks540a-540n, blocks550a-550n, blocks565a-565n, and blocks560a-560nfor executing functions on samples of a plurality of signals received via a plurality of input antenna feeds, where each block is executed on a corresponding signal. Any number of signals is possible; however, the example herein will be described with reference to two signals. Method600deviates from method500in that the input signals are combined after performing timing recovery at blocks510but before executing the carrier recovery at block520of the demodulator processing chain. Therefore, combining occurs mid-demodulation. For example, as shown inFIG.6, the carrier recovery block520is executed on the combined signal output from the combiner block570. As described above in connection withFIG.5, the plurality of blocks of method600may each represent a function306and may be executed in parallel as one or more of the functions306a,306b, . . .306n(FIG.3). That is, for example, the correlator block530ofFIG.6, the weighted combiner block570ofFIG.6, etc. may be executed in parallel as one or more functions306a-306n. Similarly, a plurality of blocks shown inFIG.6can be grouped together as a single process (e.g., process515and/or315) that performs functions in a manner similar to the processing blocks315a,315b, . . .315n(FIG.4). That is, for example, for a given signal processing chain, the timing recovery block510may be grouped as a process515(e.g., timing recovery error calculation block705and timing recovery block710), while the blocks540-565may be grouped together as another process515. Similarly, blocks570and520ofFIG.6may be grouped together as still another process515. Various other combinations are possible. Furthermore, the plurality of blocks ofFIG.6may be implemented using SIMD processing techniques as described throughout the present disclosure. WhileFIGS.5and6illustrate two possible high-level examples for Post-D Diversity Combiner methods, it will be appreciated that the embodiments herein are not limited to only these two methods and that other methods are possible. That is, embodiments herein provide for methods of executing any function of a diversity combiner as a function306to be executed in parallel as functions306a-306n(FIG.3) and/or grouping one or more functions of a diversity combiner as a process315to be executed in parallel as processing blocks315a-315n. Channel Simulator Running on General Purpose CPUs Employing Parallel Processing on Multiple Cores to Achieve High Throughput Operating in a Cloud Environment: As described above,FIG.9is a functional block diagram of an example implementation of method900. In various examples, method900may be an example of a channel simulation method. A channel simulator is used to simulate one or more different distortions and/or effects of a moving transmitter and/or a receiver in a radio environment. For example, with reference toFIG.1, embodiments of the channel simulator may be used to simulate a transmitter on satellite111and/or platform110(e.g., an airplane, helicopter, or unmanned aerial vehicle (UAV), etc.) that is moving through the environment relative to a receiver (e.g., one or more of antenna122,132, and142). As another example, embodiments of the channel simulator may be used to simulate a receiver on satellite111and/or platform110(e.g., an airplane, helicopter, or unmanned aerial vehicle (UAV), etc.) that is moving through the environment relative to a transmitter (e.g., one or more of antenna122,132, and142). In yet another example, embodiments of the channel simulator may be used to simulate a transmitter on satellite111and/or platform110(e.g., an airplane, helicopter, or unmanned aerial vehicle (UAV), etc.) that is moving through the environment relative to a receiver on satellite111and/or platform110(e.g., an airplane, helicopter, or unmanned aerial vehicle (UAV), etc.). The channel simulator method simulates at least one, and possibly all, of the possible effects of the above-described environment, whether from imperfect transmitters, environmental effects, or moving vehicles. Possible transmitter impairments that can be simulated include, but are not limited to, phase noise, non-linear distortions (AM-PM), in-phase/quadrature (I/Q) imbalance, imperfect match filters, timing jitter, and the like. Possible environmental effects include, but are not limited to, rain fade, scintillation, multi-path, and the like. 
Possible motion effects include, but are not limited to, adjustments to the center frequency of the signal, adjustments in time delay, and power adjustments. A channel simulator can also add Additive White Gaussian Noise (AWGN) or any other kind of noise that a channel may impart on a signal. FIG.9illustrates an example channel simulator as method900. To simulate all the needed channel effects, method900may include one or more functional blocks910-960for the several operations to be performed on the signal. For example, in the illustrative example ofFIG.9, the method900includes one or more of a signal distortions block910, phase noise block920, center frequency adjustment block930, timing adjustment block940, gain adjustment block950, and additive noise block960. The functional blocks included in method900may depend on the distortions or effects that are desired to be simulated. Method900may include one of, one or more of, or all of blocks910-960, and in some embodiments additional blocks may be added to simulate other distortions and/or effects. As described above, the plurality of blocks of method900may each represent a function and may be implemented as one or more of the functions306a,306b, . . .306n(FIG.3). In another example, two or more of the plurality of blocks can be grouped together as a single "process"915that performs functions in a manner similar to the process315ofFIG.4. That is, a plurality of blocks ofFIG.9may be grouped together as process915and executed in multiple, parallel iterations as the processing blocks315a,315b, . . .315n(FIG.4), etc. For example, as shown inFIG.9, all function blocks910-960are grouped into a single process915, with the grouped functions replicated in multiple processing blocks315a-315n. The processing blocks315a-315nmay be replicated as many times as desired to achieve the required processing rate and throughput. While blocks305,310,320, and325are not illustrated inFIG.9, it will be understood that such blocks may be present such that input data may be ingested (305) and split (310) to perform the process915as processing blocks315a-315nand that the resulting processed data may then be combined (320) and output (325) for downstream processing. WhileFIG.9illustrates all functional blocks grouped into a process915, embodiments herein are not so limited. The functional blocks910-960may be grouped in many different ways. For example, fewer than all of the functional blocks910-960(e.g., two or more) may be grouped as a process (e.g., process915). As an illustrative example, functional blocks910and920may be grouped together as a first process915, with processing distributed across a first one or more processing blocks315a-315n, and functional blocks930-960may be grouped together as a second process915, with processing distributed across a second one or more processing blocks315a-315n. Furthermore, while blocks305,310,320, and325are not illustrated inFIG.9, it will be understood that such blocks may be present prior to each process915such that input data may be ingested (305) and split (310) to perform the grouped functions of process915(e.g., as processing blocks315a-315n) and that the resulting processed data may then be combined (320) and output (325) for downstream processing. In various examples, the plurality of blocks ofFIG.9may be implemented using SIMD processing techniques as described throughout the present disclosure. SIMD techniques may offer increased throughput and minimized memory bandwidth requirements. 
Increasing the functionality of each processing block executed using SIMD techniques may provide increased minimization of memory bandwidth requirements. At block910, the processor202(e.g., one or more of the cores204) can simulate signal distortions on an input signal. Block910can impart simulations of one or more of non-linear distortions (AM-PM), in-phase/quadrature (I/Q) imbalance distortions, scintillation distortions, and multi-path distortions onto the input signal so as to simulate a signal that has experienced such distortions. For example, a complex finite impulse response (FIR) filter may be used to simulate the above noted distortions, except for AM-PM distortions. The FIR filter can be implemented using SIMD techniques to improve throughput. The FIR filter coefficients can be set to achieve simulation of the desired distortions. For AM-PM distortion, a non-linear operation may be performed, ranging, for example, from a look-up table to complicated non-linear math operations. At block920, the processor202(e.g., one or more of the cores204) can simulate phase noise on an input signal. Block920can impart simulation of phase noise onto an input signal so as to simulate phase noise. For example, colored noise may be added to a carrier that is mixed with the input signal. One way to create the colored noise is to shape white noise using an FIR filter (which may be the same FIR filter of block910or a different FIR filter) to achieve the desired shape of the noise. The noise can be created in decade steps, such that bands from 0.1 to 1 Hz can be created, then interpolated and added to another stage of noise running from 1 Hz to 10 Hz. This process can be repeated as many times as needed to cover the needed phase noise bandwidth. In each step, the generation, filtering, and interpolation of noise can be achieved using SIMD techniques. This colored noise is then used to adjust the phase of either a carrier signal or a complex vector that starts at (1,0). This phase-adjusted signal or vector can then be multiplied with the input signal, resulting in added phase noise onto the input signal. At block930, the processor202(e.g., one or more of the cores204) can perform carrier adjustment by adjusting the phase of the input signal over time. Block930may be performed in a manner similar to block920, but at block930the phase of the mixing carrier changes over time to achieve the desired carrier frequency and phase adjustment. Block930may be used for, but is not limited to, simulating the carrier frequency change from the motion of a moving platform (e.g., platform110and/or satellite111ofFIG.1), or more generically motion of either the transmitter or receiver, referred to as Doppler effects. At block940, the processor202(e.g., one or more of the cores204) can perform timing adjustments to simulate effects of a moving platform (e.g., platform110and/or satellite111). For example, such movement may stretch or increase the length of the input signal in time. Block940may apply a polyphase filter that uses adjustable delay taps. Block940may be similar to the time adjust block540ofFIGS.5and6; however, the timing information for block940is driven by a user input to simulate the desired effects, instead of a result of analyzing an input signal. At block950, the processor202(e.g., one or more of the cores204) can perform gain adjustment to simulate rain fade or anything else that can impact signal power. Block950may be performed by multiplication of the amplitude of the input signal.
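By way of a non-limiting sketch, the following Python fragment illustrates the impairments of blocks920,930, and950as just described, together with the additive noise that block960(described next) imparts. The sample rate, filter taps, Doppler offset, and noise levels are illustrative assumptions only.

```python
# Illustrative impairment sketch; all parameter values are assumptions.
import numpy as np

fs = 1.0e6                                    # sample rate in Hz (assumed)
n = np.arange(10_000)
signal = np.exp(2j * np.pi * 50e3 * n / fs)   # toy input: 50 kHz carrier

# Block 920: colored phase noise -- shape white noise with a short FIR
# low-pass filter, then use it to rotate the phase of the input signal.
taps = np.ones(64) / 64.0                     # crude low-pass shaping filter
colored = np.convolve(np.random.randn(len(n)), taps, mode="same")
signal = signal * np.exp(1j * 0.05 * colored)

# Block 930: carrier adjustment -- mix with a time-varying phase to impose
# a Doppler-like frequency offset (here a constant 1 kHz shift).
signal = signal * np.exp(2j * np.pi * 1e3 * n / fs)

# Block 950: gain adjustment -- scale the amplitude to simulate rain fade.
signal = signal * 0.5

# Block 960 (described next): additive white Gaussian noise.
awgn = (np.random.randn(len(n)) + 1j * np.random.randn(len(n))) / np.sqrt(2)
signal = signal + 0.01 * awgn
```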
At block960, the processor202(e.g., one or more of the cores204) can add noise to an input signal. For example, block960may simulate additive white Gaussian noise, or any other type of noise (e.g., colored noise or another kind of distribution, such as Rayleigh), and impart the noise onto the input signal. There are many ways to generate Gaussian white noise, and the Box-Muller approach is one method that is known in the art. While the blocks910-960are depicted in a particular order, it will be appreciated that the embodiments herein are not limited to only the illustrated order. Blocks910-960may be executed in any order as desired and/or executed in parallel on an input signal. Signal Modulator Running on General Purpose CPUs Employing Parallel Processing on Multiple Cores to Achieve High Through-Put Operating in a Cloud Environment: As described above,FIG.10is a functional block diagram of an example implementation of method1000. In various examples, method1000may be an example of a signal modulation method. A modulator may be used to generate waveforms to send information from one place to another. For example, downlink signals (e.g., downlink signals160and/or170ofFIG.1) may be modulated according to method1000. For example, the information could be broken down into digital information or could be an analog signal such as used in AM and FM radios. Generation of digital signals is used as an example herein, but the same approach could also be used to generate analog signals. FIG.10illustrates an example signal modulator as method1000. Method1000is a Phase-Shift-Keying (PSK) modulator method that supports modulation types such as BPSK, QPSK, SQPSK, 8PSK, 16APSK, 32APSK, etc., Quadrature-Amplitude-Modulation (QAM), or any similar digital modulation waveforms. WhileFIG.10illustrates one example modulation method, the same approach of signal processing (e.g., as described inFIGS.3and4) may be applied to other modulation methods. Method1000comprises a plurality of functional blocks as shown inFIG.10. For example, in the illustrative example ofFIG.10, the method1000includes one or more of frame builder block1010, Forward Error Correction (FEC) block1020, pulse shaper block1030, center frequency adjustment block1040, and sweeper block1050. While specific blocks and arrangements are illustrated inFIG.10, certain modulation schemes might require different blocks. Thus,FIG.10illustrates a high-level modulation method and is not a catch-all configuration. One or more additional functional blocks may be added to method1000as desired to execute different modulation schemes. To the extent a certain modulation scheme does not fall into the arrangement ofFIG.10, one skilled in the art will appreciate that the concepts disclosed in connection to the various embodiments throughout this disclosure apply equally to the modulation method ofFIG.10as well as any modulation scheme. As described above, the plurality of blocks of method1000may each represent a function and may be implemented as one or more of the function306a,306b, . . .306n(FIG.3). In another example, two or more of the plurality of blocks can be grouped together as a single "process"1015that performs functions in a similar manner to the process315ofFIG.4. That is, a plurality of blocks ofFIG.10may be grouped together as process1015and executed in multiple, parallel iterations as the processing blocks315a,315b, . . .315n(FIG.4), etc.
For example, as shown inFIG.10, functional blocks1010and1020are grouped together as a first process1015, which is then replicated in first multiple processing blocks315a-315n, and functional blocks1030-1050are grouped together as a second process1015, which is then replicated in second multiple processing blocks315a-315n. WhileFIG.10illustrates certain functional blocks grouped into separate processes1015, embodiments herein are not so limited. The functional blocks1010-1050may be grouped in many different ways. For example, all functional blocks1010-1050may be grouped together. Method1000also illustratively includes data ingest blocks1005and data split blocks1010prior to each process1015. Each of the data ingest blocks1005may be substantially similar to the data ingest block305(FIG.4) and each data split block1010may be substantially similar to the data split block310(FIG.4). Accordingly, input data may be ingested (1005), in which the processor202receives data for processing, and split (1010), in which the processor202can parse data in overlapping blocks of data, for example, as described in connection toFIG.4. Furthermore, after each process1015, method1000also illustratively includes data combine blocks1020and data output blocks1025. Each of the data combine blocks1020may be substantially similar to the data combine block320(FIG.4) and each data output block1025may be substantially similar to the data output block325(FIG.4). Accordingly, the process1015outputs overlapping blocks of data that are combined (1020) and output (1025), for example, as described in connection toFIG.4. In various examples, the plurality of blocks ofFIG.10may be implemented using SIMD processing techniques as described throughout the present disclosure. SIMD techniques may offer increased throughput and minimized memory bandwidth requirements. Increasing the functionality of each processing block executed using SIMD techniques may provide increased minimization of memory bandwidth requirements. At block1010, the processor202(e.g., one or more of the cores204) can convert incoming data of an input signal to a predetermined format that is based on the desired modulation scheme (e.g., the modulation scheme of the receiver, such as antennas122,132, and/or142). For example, certain modulation schemes require a specific format, and block1010converts data of the input signal to that format. The modulator method1000can support many different waveform standards, such as, but not limited to, DVB-S2 and DVB-S2x, as well as less standardized cases that use Reed-Solomon Coding, Turbo Coding, Convolutional Coding, etc. For simplicity's sake, waveform standards are grouped into two cases: Streaming Data and Framed Data. Streaming Data cases are where the incoming data is a continuous, unbroken stream, like uncoded or convolutional coding. Framed Data is for incoming data that requires frames or blocks of data, like DVB-S2 or Reed-Solomon. Block1010can build a frame (e.g., for framed data) or data stream (e.g., for streaming data) by converting the incoming data to the format corresponding to the modulation scheme. At block1020, the processor202(e.g., one or more of the cores204) generates coding corresponding to the modulation scheme of the method1000, including, but not limited to, BCH and LDPC for DVB-S2, LDPC coding for CCSDS, Reed-Solomon, turbo coding, polar coding, and convolutional coding.
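As a minimal, self-contained illustration of the coding operation at block1020, the following Python sketch implements a textbook rate-1/2 convolutional encoder with the common (7,5)-octal generators. This is a generic example of one of the coding families listed above, not the patent's specific encoder.

```python
# Textbook rate-1/2 convolutional encoder (constraint length 3,
# generators 7 and 5 octal); a generic example only.
import numpy as np

GENERATORS = (0b111, 0b101)  # octal 7 and 5

def conv_encode(bits):
    state = 0
    out = []
    for b in bits:
        state = ((state << 1) | int(b)) & 0b111        # shift the new bit in
        for g in GENERATORS:                           # one output bit per generator
            out.append(bin(state & g).count("1") & 1)  # parity of tapped bits
    return np.array(out, dtype=np.uint8)

data = np.random.randint(0, 2, 16).astype(np.uint8)
coded = conv_encode(data)
assert len(coded) == 2 * len(data)  # rate 1/2: two coded bits per data bit
```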
Block1020may be one of the more complicated blocks for the modulator method1000and therefore may benefit from all of the signal processing methods (e.g.,FIGS.3and/or4) and SIMD techniques disclosed throughout this disclosure. At block1030, the processor202(e.g., one or more of the cores204) converts symbol data to samples, for example, by applying a pulse shape filter. Block1030can create any pulse shape, for example, Root-Raised-Cosine (RRC). The pulse shaper may be a combination of a polyphase filter with a Numerically-Controlled-Oscillator (NCO). Block1030may also be a complicated block and therefore may benefit from all of the signal processing methods (e.g.,FIGS.3and/or4) and SIMD techniques disclosed throughout this disclosure. At block1040, the processor202(e.g., one or more of the cores204) can change the center frequency of the carrier of the sample data from block1030using a complex multiply. At block1050, the processor202(e.g., one or more of the cores204) can change the phase and frequency over time based on a predefined profile corresponding to the modulation scheme. In some implementations, block1050is executed while the center frequency is adjusted via block1040. Other Aspects The accompanying claims and their equivalents are intended to cover such forms or modifications as would fall within the scope of the disclosure. The various components illustrated in the figures may be implemented as, for example, but not limited to, software and/or firmware on a processor or dedicated hardware. Also, the features and attributes of the specific example embodiments disclosed above may be combined in different ways to form additional embodiments, all of which fall within the scope of the disclosure. The foregoing method descriptions and the process flow diagrams are provided merely as illustrative examples and are not intended to require or imply that the operations of the various embodiments must be performed in the order presented. As will be appreciated by one of skill in the art, the order of operations in the foregoing embodiments may be performed in any order. Words such as "thereafter," "then," "next," etc., are not intended to limit the order of the operations; these words are simply used to guide the reader through the description of the methods. Further, any reference to claim elements in the singular, for example, using the articles "a," "an," or "the" is not to be construed as limiting the element to the singular. The various illustrative logical blocks, modules, and algorithm operations described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, and operations have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present inventive concept.
The hardware used to implement the various illustrative logics, logical blocks, and modules described in connection with the various embodiments disclosed herein may be implemented or performed with a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but, in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of receiver devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. Alternatively, some operations or methods may be performed by circuitry that is specific to a given function. In one or more exemplary embodiments, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored as one or more instructions or codes on a non-transitory computer-readable storage medium or non-transitory processor-readable storage medium. The operations of a method or algorithm disclosed herein may be embodied in processor-executable instructions that may reside on a non-transitory computer-readable or processor-readable storage medium. Non-transitory computer-readable or processor-readable storage media may be any storage media that may be accessed by a computer or a processor. By way of example but not limitation, such non-transitory computer-readable or processor-readable storage media may include random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), FLASH memory, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that may be used to store desired program code in the form of instructions or data structures and that may be accessed by a computer. Disk and disc, as used herein, includes compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above are also included within the scope of non-transitory computer-readable and processor-readable media. Additionally, the operations of a method or algorithm may reside as one or any combination or set of codes and/or instructions on a non-transitory processor-readable storage medium and/or computer-readable storage medium, which may be incorporated into a computer program product. It is understood that the specific order or hierarchy of blocks in the processes/flowcharts disclosed is an illustration of exemplary approaches. Based upon design preferences, it is understood that the specific order or hierarchy of blocks in the processes/flowcharts may be rearranged. Further, some blocks may be combined or omitted. The accompanying method claims present elements of the various blocks in a sample order, and are not meant to be limited to the specific order or hierarchy presented. The previous description is provided to enable any person skilled in the art to practice the various aspects described herein. 
Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects. Thus, the claims are not intended to be limited to the aspects shown herein, but are to be accorded the full scope consistent with the language of the claims, wherein reference to an element in the singular is not intended to mean "one and only one" unless specifically so stated, but rather "one or more." The word "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any aspect described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other aspects. Unless specifically stated otherwise, the term "some" refers to one or more.
The detailed description explains the preferred embodiments of the invention together with advantages and features, by way of example with reference to the drawings. DETAILED DESCRIPTION One or more embodiments relate generally to media systems, and in particular, a system and method providing a maximum diversity scheme for improving coverage area and diversity performance of a media system. One embodiment provides a method providing a maximum diversity scheme for improving coverage area and diversity performance of a media system. The method comprises, at an antenna module of the media system, wirelessly receiving, via a receiver antenna of the antenna module, broadband radio frequency (RF) signals from a transmitter of the media system, instantaneously converting, via a converter circuitry of the antenna module, the broadband RF signals into one or more digital baseband components, generating, via a processor unit of the antenna module, a single digital data stream by applying diversity processing to the digital baseband components, and outputting the single digital data stream over a network cable to another device of the media system. Another embodiment provides a media system providing a maximum diversity scheme for improving coverage area and diversity performance of the media system. The media system comprises a master processing device, at least one transmitter, and at least one antenna module. Each antenna module is configured to wirelessly receive, via a receiver antenna of the antenna module, broadband RF signals from the at least one transmitter, instantaneously convert, via a converter circuitry of the antenna module, the broadband RF signals into one or more digital baseband components, generate, via a processor unit of the antenna module, a single digital data stream by applying diversity processing to the digital baseband components, and output the single digital data stream over a network cable to the master processing device. Examples of media systems include, but are not limited to, wireless systems such as wireless microphone systems, public address (PA) systems, broadcasting systems such as audio broadcasting systems, audiovisual (AV) systems, and other types of professional audio systems or professional video systems operated by broadcasters (e.g., a radio broadcaster, a TV broadcaster, etc.), festivals, fairs, film studios, conventions, corporate events, houses of worship, sports leagues, schools, recording studios (i.e., facilities for sound recording, mixing, and audio production), facilities for audio post production, programming networks, theaters, venues (e.g., sports venues, music venues, etc.), etc. For expository purposes, the term “media device” as used herein generally refers to a media device utilized in a media system. Examples of different media devices include, but are not limited to the following: a laptop computer, a smart phone, a tablet, a wearable device (e.g., a smart watch), any other type of mobile electronic device; a desktop computer; a smart appliance (e.g., a smart speaker, a smart television); an Internet of Things (IoT) device; professional audio equipment such as a microcontroller unit (MCU), a transceiver, a transmitter, a microphone, a wireless microphone, an amplifier, an audio mixer, a recording device, any other type of professional audio equipment; professional audio systems such as wireless microphone systems, broadcast systems, public address systems, and other professional audio systems. 
The present invention concerns a media system and a method of providing maximum antenna diversity for improving coverage area and diversity performance of a media system. FIG.1illustrates an example professional media system100, in accordance with one embodiment. The system100comprises one or more wireless mobile devices10(e.g., MOBILE DEVICE 1, . . . , MOBILE DEVICE m). In one embodiment, each mobile device10is a media device capable of exchanging data with another device wirelessly. Examples of mobile devices10include, but are not limited to, wireless microphones such as hand-held or body-worn wireless microphones, in-ear monitors, media devices used for cueing on-air talent, intercom systems for backstage communications, etc. Each mobile device10has a corresponding transmitter20. A transmitter20corresponding to a mobile device10is either integrated in the mobile device10itself or is a separate piece of equipment (e.g., a bodypack transmitter) connected/coupled to the mobile device10. A transmitter20corresponding to a mobile device10is configured to wirelessly transmit a data stream captured by the mobile device10(e.g., wirelessly transmit audio signals, video signals, etc.), and wirelessly receive signals (e.g., sync pulses, time synchronization information, control commands comprising instructions for adjusting one or more parameters/settings for the mobile device10, such as an operating mode). In one embodiment, a data stream wirelessly transmitted by a transmitter20of the media system comprises RF signals. For example, in one embodiment, the data stream comprises broadband RF signals. In one embodiment, each mobile device10and corresponding transmitter20is associated with a particular application/use. For example, assume the system100is a wireless microphone system utilized/operated at an event (e.g., a concert, an awards ceremony, a sports event, etc.), where at least one mobile device10of the system100is a wireless microphone and at least one transmitter20of the system100is a microphone transmitter unit (MTU) corresponding to the wireless microphone. One MTU20of the system100is for an on-air talent/performer at the event (e.g., a vocalist, a play-by-play commentator), and another MTU20of the system100is for a different on-air talent/performer at the same event (e.g., a guitarist, a color commentator). In one embodiment, a mobile device10and/or a transmitter20is a digital wireless media device10that utilizes digital wireless technology for transmission. In another embodiment, a mobile device10and/or a transmitter20is an analog wireless media device that utilizes traditional analog wireless technology for transmission (e.g., transmitting analog audio via RF with frequency modulation (FM)). The system100further comprises one or more antenna modules (i.e., components, devices, or nodes)30(e.g., ANTENNA MODULE 1, . . . , ANTENNA MODULE n). Each antenna module30of the system100comprises two or more receiver antennas40. Each receiver antenna40is configured to wirelessly receive at least one data stream from at least one transmitter20of the system100. In one embodiment, each receiver antenna40comprises a single antenna for wirelessly receiving a data stream comprising RF signals from a transmitter20of the system100. 
In another embodiment, each receiver antenna40comprises a plurality of antennas (e.g., a diversity of antennas, such as a flexible printed circuit board (PCB) comprising a planar array of microstrip antenna elements) for wirelessly receiving at least one data stream comprising RF signals from at least one transmitter20of the system100. In one embodiment, at least one antenna module30of the system100optionally comprises one or more transmitter antennas (not shown). Each transmitter antenna is configured to wirelessly transmit signals (e.g., sync pulses) to a transmitter20of the system100to coordinate timing and synchronization of data streams the antenna module30receives from the transmitter20. In one embodiment, at least one antenna module30of the system100operates as a transceiver (i.e., an antenna/transceiver node that transmits signals and receives data streams). In another embodiment, at least one antenna module30of the system100operates as a receiver only (i.e., an antenna/receiver node that receives data streams only). For example, at least one antenna module30of the system100does not have any transmitter antennas. As another example, each transmitter antenna included in at least one antenna module30of the system100is disabled. In one embodiment, each antenna module30comprises, for each receiver antenna40of the antenna module30, a corresponding independent converter circuitry45(FIG.2) configured to instantaneously/immediately convert broadband RF signals received via the receiver antenna40into one or more digital baseband components. In one embodiment, the broadband RF signals have a band designation of high-frequency (HF). In one embodiment, the broadband RF signals comprise independent/individual narrowband signals. In one embodiment, each antenna module30comprises at least one processor unit46(FIG.2) configured to: (1) receive digital baseband components from a converter circuitry45of the antenna module30that is co-located with the processor unit46(i.e., located within proximity of the converter circuitry45), and (2) perform diversity processing on the digital baseband components to generate/produce a single digital data stream. Broadband RF signals received via a receiver antenna40are immediately converted (via a corresponding converter circuitry45) to baseband signals and processed (via a co-located processor unit46) before they are transported to other components of the system100for additional processing and output. In one embodiment, the system100comprises a master processing device65. For each antenna module30of the system100, the antenna module30has a direct connection (i.e., wired back) to the master processing device65via a network cable60. Each antenna module30of the system100has a wired interface for communicating with the master processing device65. In one embodiment, the wired interface is an Ethernet network interface card. In another embodiment, the wired interface is a fiber optic network interface card. Each antenna module30of the system100is configured to transmit, via its wired interface, a digital data stream generated by a processor unit46of the antenna module30to the master processing device65over a digital cable connection provided by a network cable60.
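By way of illustration and not limitation, the following Python sketch shows the kind of conversion the converter circuitry45performs: digitized RF samples are mixed to baseband to obtain in-phase and quadrature components and then low-pass filtered. The sample rate, carrier frequency, and filter are assumptions for illustration only.

```python
# Sketch of converting digitized RF samples into digital baseband I/Q
# components; all rates, frequencies, and filter lengths are assumed.
import numpy as np

fs = 10.0e6                                  # ADC sample rate (assumed)
fc = 2.0e6                                   # carrier frequency (assumed)
n = np.arange(50_000)
rf = np.cos(2 * np.pi * fc * n / fs + 0.3)   # toy digitized RF input

# Mix to baseband with a complex exponential at -fc; the result's real and
# imaginary parts are the in-phase and quadrature components.
baseband = rf * np.exp(-2j * np.pi * fc * n / fs)

# Low-pass filter (simple moving average) to reject the image at 2*fc.
taps = np.ones(33) / 33.0
i_component = np.convolve(baseband.real, taps, mode="same")
q_component = np.convolve(baseband.imag, taps, mode="same")
```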
The master processing device65is configured to interface with at least one antenna module30of the system100, process at least one digital data stream received from the at least one antenna module30, and interface with one or more other components of the system100for additional processing and output, such as a media processing device90(FIG.3), a media output device95(FIG.3), etc. A media processing device90is a media device configured for processing (e.g., audio mixing, etc.). A media output device95is a media device configured for output (e.g., displaying video, reproducing audio, etc.). Examples of a network cable60include, but are not limited to, an Ethernet cable (e.g., Cat 5 cable), a fiber optic cable, or any other standard cable used for data transfer. As the antenna modules30locally convert RF signals received via their receiver antennas40into narrowband baseband signals and locally process the baseband signals into digital data, the system100allows for all RF processing to take place at the antenna modules30themselves, such that only digital data streams need to be transmitted to other components of the system100(e.g., the master processing device65, a media processing device90, a media output device95, etc.). A conventional media system permits operation of up to thirty-two (32) receivers/transceivers maximum and twenty-four (24) transmitters maximum. Typically, each receiver or transceiver of a conventional media system is limited/restricted to wirelessly receiving data streams from up to 24 transmitters maximum. By comparison, in one embodiment, the system100provides a maximum wireless diversity scheme in which an unlimited number of receiver antennas40and/or antenna modules30can be geographically distributed in an application space (e.g., a venue) without resulting in negative effects such as, but not limited to, antenna phase interactions, cable loss, RF interference, intermodulation or combining, etc. The system100enables any number of receiver antennas40to be placed/positioned throughout, or deployed/distributed in, an application space, thereby improving/increasing coverage area and diversity performance without encountering negative effects or without any requirement to coordinate/distribute RF signals. This is unlike conventional media systems where careful consideration must be given to frequency coordination, use of accessory antennas and distribution boxes, connection of custom or expensive filters, building and running of expensive cables, allocation of space and power to large racks of receivers, antenna distribution gear, and breakouts for digital output devices (such as MADI or Dante) if interfacing with a network/protocol system. In one embodiment, as the system100allows for any number of receiver antennas40and/or antenna modules30in an application space, the system100enables a large number of audio channels (e.g.,128audio channels) to be processed efficiently via the single system100, thereby allowing maximum signal robustness. The ability to add more receiver antennas40improves diversity performance of the system100without requiring any changes in the design of other components of the system100(e.g., the master processing device65, a media processing device90, a media output device95, etc.). In one embodiment, a receiver antenna40and/or an antenna module30is flexibly configured without requiring any changes to other components of the system100(e.g., the master processing device65, a media processing device90, a media output device95, etc.).
For example, the system100allows adjustments to a receiver antenna40/antenna module30, such as adjusting a frequency that the receiver antenna40/antenna module30is configured to operate at and/or adjusting filtering of broadband RF signals, without requiring adjustments to other components of the system100. A receiver antenna40/antenna module30can be individually configured without affecting the other components of the system100, thereby enabling the system100to be arranged/constructed in a modular fashion and eliminating the need to maintain racks of equipment including traditional RF receivers. Further, a receiver antenna40/antenna module30can be flexibly configured to a job-specific or application-specific configuration necessary/required to execute a specific job or application (e.g., a particular audio project). Therefore, an object of the present invention is to provide an improved media system. In particular, the invention seeks to provide a media system which allows for any number of receiver antennas that are flexibly configured, thereby providing a maximum wireless diversity scheme for improving coverage area and diversity performance of the media system. FIG.2illustrates an example antenna module30, in accordance with one embodiment. As shown inFIG.2, in one embodiment, each receiver antenna40of the antenna module30is immediately followed by, or immediately connected to, a corresponding independent converter circuitry (i.e., circuit)45. Each receiver antenna40of the antenna module30has its own separate converter circuitry45that is not shared with any other receiver antenna40. Each independent converter circuitry45is configured to: (1) instantaneously/immediately convert broadband RF signals received via a corresponding receiver antenna40into one or more digital baseband components (e.g., in-phase and quadrature components), and (2) provide the digital baseband components to a processor unit46of the antenna module30that is co-located with the converter circuitry45. In one embodiment, the broadband RF signals have a band designation of HF and comprise independent/individual narrowband signals. In one embodiment, a receiver antenna40of an antenna module30is configured to filter and/or amplify broadband RF signals received from a transmitter20of the media system100, and immediately provide resulting filtered and/or amplified broadband RF signals to a corresponding converter circuitry45which directly converts the broadband RF signals into one or more digital baseband components. In one embodiment, each processor unit46of the antenna module30is configured to: (1) receive digital baseband components from a converter circuitry45of the antenna module30that is co-located with the processor unit46, (2) deconstruct the digital baseband components into a plurality of independent/individual narrowband baseband signals, (3) generate/produce a single digital data stream by applying a diversity combining algorithm47to the narrowband baseband signals to combine the narrowband baseband signals into the single digital data stream, and (4) output, via a wired interface of the antenna module30, the digital data stream to the master processing device65over a network cable60connected to the antenna module30. The processor unit46locally performs initial diversity processing of the narrowband baseband signals (i.e., locally combines the narrowband baseband signals) to achieve full diversity performance.
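As one representative choice of a diversity combining algorithm47(maximal-ratio combining, one of the examples given below), the following Python sketch combines two receiver branches into a single stream. The channel gains and noise levels are assumed for illustration; a real implementation would estimate the gains.

```python
# Maximal-ratio combining of two antenna branches; channel gains are
# assumed known here purely for illustration.
import numpy as np

rng = np.random.default_rng(0)
symbols = rng.choice(np.array([1 + 0j, -1 + 0j]), size=1000)  # toy narrowband signal

h = np.array([0.9 * np.exp(0.4j), 0.4 * np.exp(-1.1j)])       # per-branch gains
noise = 0.1 * (rng.standard_normal((2, 1000)) + 1j * rng.standard_normal((2, 1000)))
branches = h[:, None] * symbols + noise                        # one row per antenna

# Weight each branch by the conjugate of its channel gain and sum, so that
# stronger branches contribute more, then normalize.
combined = (np.conj(h)[:, None] * branches).sum(axis=0) / (np.abs(h) ** 2).sum()
```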
In one embodiment, the processor unit46is configured to divide baseband signals into a plurality of bands, and convert the baseband signals into narrowband baseband signals for each band. In one embodiment, the processor unit46is configured to generate a single digital data stream for each of the narrowband baseband signals, and output, via the wired interface, the single digital data stream without demodulation. Examples of a diversity combining algorithm47include, but are not limited to, equal-gain combining, maximal-ratio combining, switched combining, selection combining, etc. FIG.3illustrates an example master processing device65, in accordance with one embodiment. In one embodiment, the master processing device65comprises a modular wired interface70. Each antenna module30of the system100has a direct connection (i.e., wired back) to the interface70via a network cable60. For example, in one embodiment, the interface70provides an optical interface (e.g., a fiber optic network interface card) capable of receiving digital data streams from the antenna modules30of the system100via fiber optic cables that directly connect the antenna modules30to the interface70. In one embodiment, the master processing device65comprises one or more control units (CUs)80(e.g., CONTROL UNIT 1, . . . , CONTROL UNIT k). The interface70is configured to exchange data (e.g., over a network cable, such as a Cat 5 cable) with each CU80. Each CU80is in turn connected to any number of components for additional processing and output. For example, in one embodiment, each CU80exchanges data with one or more media processing devices90(e.g., audio processing devices, such as audio decoders) and/or one or more media output devices95(e.g., audio output devices, such as intercoms or speakers in a venue's IP-based media system) for additional diversity processing (e.g., applying another diversity combining algorithm) and then full demodulation into actual transmitted media (e.g., audio, video) and (signal) data. In one embodiment, the master processing device65operates as a master controller unit for the system100. In one embodiment, each CU80is configured to enhance performance of the system100by combining multiple digital data streams from multiple antenna modules30. FIG.4illustrates a flowchart of an example process500for implementing a maximum diversity scheme for improving coverage area and diversity performance of a media system, in accordance with one embodiment. Process block501includes wirelessly receiving broadband RF signals. Process block502includes instantaneously converting the broadband RF signals into digital baseband components. Process block503includes applying diversity processing to the digital baseband components to generate a single digital data stream. Process block504includes transmitting the single digital data stream over a network cable. In one embodiment, process blocks501-504may be performed utilizing one or more components of the media system100, such as the antenna module30. FIG.5is a high-level block diagram showing an information processing system comprising a computer system600useful for implementing the disclosed embodiments.
The computer system600includes one or more processors601, and can further include an electronic display device602(for displaying video, graphics, text, and other data), a main memory603(e.g., random access memory (RAM)), storage device604(e.g., hard disk drive), removable storage device605(e.g., removable storage drive, removable memory module, a magnetic tape drive, optical disk drive, computer readable medium having stored therein computer software and/or data), user interface device606(e.g., keyboard, touch screen, keypad, pointing device), and a communication interface607(e.g., modem, a network interface (such as an Ethernet card), a communications port, or a PCMCIA slot and card). The main memory603may store instructions that when executed by the one or more processors601cause the one or more processors601to perform one or more process blocks of the process500. The communication interface607allows software and data to be transferred between the computer system and external devices. The system600further includes a communications infrastructure608(e.g., a communications bus, cross-over bar, or network) to which the aforementioned devices/modules601through607are connected. Information transferred via communications interface607may be in the form of signals such as electronic, electromagnetic, optical, or other signals capable of being received by communications interface607, via a communication link that carries signals and may be implemented using wire or cable, fiber optics, a phone line, a cellular phone link, a radio frequency (RF) link, and/or other communication channels. Computer program instructions representing the block diagram and/or flowcharts herein may be loaded onto a computer, programmable data processing apparatus, or processing devices to cause a series of operations performed thereon to produce a computer implemented process. In one embodiment, processing instructions for one or more process blocks of process500(FIG.4) may be stored as program instructions on the memory603, storage device604and the removable storage device605for execution by the processor601. Embodiments have been described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products. Each block of such illustrations/diagrams, or combinations thereof, can be implemented by computer program instructions. The computer program instructions when provided to a processor produce a machine, such that the instructions, which execute via the processor create means for implementing the functions/operations specified in the flowchart and/or block diagram. Each block in the flowchart/block diagrams may represent a hardware and/or software module or logic. In alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures, concurrently, etc. The terms “computer program medium,” “computer usable medium,” “computer readable medium”, and “computer program product,” are used to generally refer to media such as main memory, secondary memory, removable storage drive, a hard disk installed in hard disk drive, and signals. These computer program products are means for providing software to the computer system. The computer readable medium allows the computer system to read data, instructions, messages or message packets, and other computer readable information from the computer readable medium. 
The computer readable medium, for example, may include non-volatile memory, such as a floppy disk, ROM, flash memory, disk drive memory, a CD-ROM, and other permanent storage. It is useful, for example, for transporting information, such as data and computer instructions, between computer systems. Computer program instructions may be stored in a computer readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks. As will be appreciated by one skilled in the art, aspects of the embodiments may be embodied as a system, method or computer program product. Accordingly, aspects of the embodiments may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “circuit,” “module” or “system.” Furthermore, aspects of the embodiments may take the form of a computer program product embodied in one or more computer readable medium(s) having computer readable program code embodied thereon. Any combination of one or more computer readable medium(s) may be utilized. The computer readable medium may be a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. Computer program code for carrying out operations for aspects of one or more embodiments may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the “C” programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). 
Aspects of one or more embodiments are described above with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer program instructions may also be stored in a computer readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks. The computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions. References in the claims to an element in the singular are not intended to mean "one and only one" unless explicitly so stated, but rather "one or more." All structural and functional equivalents to the elements of the above-described exemplary embodiment that are currently known or later come to be known to those of ordinary skill in the art are intended to be encompassed by the present claims. No claim element herein is to be construed under the provisions of 35 U.S.C.
section 112, sixth paragraph, unless the element is expressly recited using the phrase "means for" or "step for." The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. The corresponding structures, materials, acts, and equivalents of all means or step plus function elements in the claims below are intended to include any structure, material, or act for performing the function in combination with other claimed elements as specifically claimed. The description of the embodiments has been presented for purposes of illustration and description, but is not intended to be exhaustive or limited to the embodiments in the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the invention. Though the embodiments have been described with reference to certain versions thereof, other versions are possible. Therefore, the spirit and scope of the appended claims should not be limited to the description of the preferred versions contained herein.
DETAILED DESCRIPTION Various aspects of the disclosure are described more fully hereinafter with reference to the accompanying drawings. This disclosure may, however, be embodied in many different forms and should not be construed as limited to any specific structure or function presented throughout this disclosure. Rather, these aspects are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art. One skilled in the art should appreciate that the scope of the disclosure is intended to cover any aspect of the disclosure disclosed herein, whether implemented independently of or combined with any other aspect of the disclosure. For example, an apparatus may be implemented or a method may be practiced using any number of the aspects set forth herein. In addition, the scope of the disclosure is intended to cover such an apparatus or method which is practiced using other structure, functionality, or structure and functionality in addition to or other than the various aspects of the disclosure set forth herein. It should be understood that any aspect of the disclosure disclosed herein may be embodied by one or more elements of a claim. Several aspects of telecommunication systems will now be presented with reference to various apparatuses and techniques. These apparatuses and techniques will be described in the following detailed description and illustrated in the accompanying drawings by various blocks, modules, components, circuits, steps, processes, algorithms, or the like (collectively referred to as “elements”). These elements may be implemented using hardware, software, or combinations thereof. Whether such elements are implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. While aspects may be described herein using terminology commonly associated with a 5G or New Radio (NR) radio access technology (RAT), aspects of the present disclosure can be applied to other RATs, such as a 3G RAT, a 4G RAT, and/or a RAT subsequent to 5G (e.g., 6G). FIG.1is a diagram illustrating an example of a wireless network100, in accordance with the present disclosure. The wireless network100may be or may include elements of a 5G (e.g., NR) network and/or a 4G (e.g., Long Term Evolution (LTE)) network, among other examples. The wireless network100may include one or more base stations110(shown as a BS110a, a BS110b, a BS110c, and a BS110d), a user equipment (UE)120or multiple UEs120(shown as a UE120a, a UE120b, a UE120c, a UE120d, and a UE120e), and/or other network entities. A base station110is an entity that communicates with UEs120. A base station110(sometimes referred to as a BS) may include, for example, an NR base station, an LTE base station, a Node B, an eNB (e.g., in 4G), a gNB (e.g., in 5G), an access point, and/or a transmission reception point (TRP). Each base station110may provide communication coverage for a particular geographic area. In the Third Generation Partnership Project (3GPP), the term “cell” can refer to a coverage area of a base station110and/or a base station subsystem serving this coverage area, depending on the context in which the term is used. A base station110may provide communication coverage for a macro cell, a pico cell, a femto cell, and/or another type of cell. 
A macro cell may cover a relatively large geographic area (e.g., several kilometers in radius) and may allow unrestricted access by UEs120with service subscriptions. A pico cell may cover a relatively small geographic area and may allow unrestricted access by UEs120with service subscription. A femto cell may cover a relatively small geographic area (e.g., a home) and may allow restricted access by UEs120having association with the femto cell (e.g., UEs120in a closed subscriber group (CSG)). A base station110for a macro cell may be referred to as a macro base station. A base station110for a pico cell may be referred to as a pico base station. A base station110for a femto cell may be referred to as a femto base station or an in-home base station. In the example shown inFIG.1, the BS110amay be a macro base station for a macro cell102a, the BS110bmay be a pico base station for a pico cell102b, and the BS110cmay be a femto base station for a femto cell102c. A base station may support one or multiple (e.g., three) cells. In some examples, a cell may not necessarily be stationary, and the geographic area of the cell may move according to the location of a base station110that is mobile (e.g., a mobile base station). In some examples, the base stations110may be interconnected to one another and/or to one or more other base stations110or network nodes (not shown) in the wireless network100through various types of backhaul interfaces, such as a direct physical connection or a virtual network, using any suitable transport network. The wireless network100may include one or more relay stations. A relay station is an entity that can receive a transmission of data from an upstream station (e.g., a base station110or a UE120) and send a transmission of the data to a downstream station (e.g., a UE120or a base station110). A relay station may be a UE120that can relay transmissions for other UEs120. In the example shown inFIG.1, the BS110d(e.g., a relay base station) may communicate with the BS110a(e.g., a macro base station) and the UE120din order to facilitate communication between the BS110aand the UE120d. A base station110that relays communications may be referred to as a relay station, a relay base station, a relay, or the like. The wireless network100may be a heterogeneous network that includes base stations110of different types, such as macro base stations, pico base stations, femto base stations, relay base stations, or the like. These different types of base stations110may have different transmit power levels, different coverage areas, and/or different impacts on interference in the wireless network100. For example, macro base stations may have a high transmit power level (e.g., 5 to 40 watts) whereas pico base stations, femto base stations, and relay base stations may have lower transmit power levels (e.g., 0.1 to 2 watts). A network controller130may couple to or communicate with a set of base stations110and may provide coordination and control for these base stations110. The network controller130may communicate with the base stations110via a backhaul communication link. The base stations110may communicate with one another directly or indirectly via a wireless or wireline backhaul communication link. The UEs120may be dispersed throughout the wireless network100, and each UE120may be stationary or mobile. A UE120may include, for example, an access terminal, a terminal, a mobile station, and/or a subscriber unit. 
A UE120may be a cellular phone (e.g., a smart phone), a personal digital assistant (PDA), a wireless modem, a wireless communication device, a handheld device, a laptop computer, a cordless phone, a wireless local loop (WLL) station, a tablet, a camera, a gaming device, a netbook, a smartbook, an ultrabook, a medical device, a biometric device, a wearable device (e.g., a smart watch, smart clothing, smart glasses, a smart wristband, smart jewelry (e.g., a smart ring or a smart bracelet)), an entertainment device (e.g., a music device, a video device, and/or a satellite radio), a vehicular component or sensor, a smart meter/sensor, industrial manufacturing equipment, a global positioning system device, and/or any other suitable device that is configured to communicate via a wireless medium. Some UEs120may be considered machine-type communication (MTC) or evolved or enhanced machine-type communication (eMTC) UEs. An MTC UE and/or an eMTC UE may include, for example, a robot, a drone, a remote device, a sensor, a meter, a monitor, and/or a location tag, that may communicate with a base station, another device (e.g., a remote device), or some other entity. Some UEs120may be considered Internet-of-Things (IoT) devices, and/or may be implemented as NB-IoT (narrowband IoT) devices. Some UEs120may be considered a Customer Premises Equipment. A UE120may be included inside a housing that houses components of the UE120, such as processor components and/or memory components. In some examples, the processor components and the memory components may be coupled together. For example, the processor components (e.g., one or more processors) and the memory components (e.g., a memory) may be operatively coupled, communicatively coupled, electronically coupled, and/or electrically coupled. In general, any number of wireless networks100may be deployed in a given geographic area. Each wireless network100may support a particular RAT and may operate on one or more frequencies. A RAT may be referred to as a radio technology, an air interface, or the like. A frequency may be referred to as a carrier, a frequency channel, or the like. Each frequency may support a single RAT in a given geographic area in order to avoid interference between wireless networks of different RATs. In some cases, NR or 5G RAT networks may be deployed. In some examples, two or more UEs120(e.g., shown as UE120aand UE120e) may communicate directly using one or more sidelink channels (e.g., without using a base station110as an intermediary to communicate with one another). For example, the UEs120may communicate using peer-to-peer (P2P) communications, device-to-device (D2D) communications, a vehicle-to-everything (V2X) protocol (e.g., which may include a vehicle-to-vehicle (V2V) protocol, a vehicle-to-infrastructure (V2I) protocol, or a vehicle-to-pedestrian (V2P) protocol), and/or a mesh network. In such examples, a UE120may perform scheduling operations, resource selection operations, and/or other operations described elsewhere herein as being performed by the base station110. Devices of the wireless network100may communicate using the electromagnetic spectrum, which may be subdivided by frequency or wavelength into various classes, bands, channels, or the like. For example, devices of the wireless network100may communicate using one or more operating bands. In 5G NR, two initial operating bands have been identified as frequency range designations FR1 (410 MHz-7.125 GHz) and FR2 (24.25 GHz-52.6 GHz). 
It should be understood that although a portion of FR1 is greater than 6 GHz, FR1 is often referred to (interchangeably) as a “Sub-6 GHz” band in various documents and articles. A similar nomenclature issue sometimes occurs with regard to FR2, which is often referred to (interchangeably) as a “millimeter wave” band in documents and articles, despite being different from the extremely high frequency (EHF) band (30 GHz-300 GHz) which is identified by the International Telecommunications Union (ITU) as a “millimeter wave” band. The frequencies between FR1 and FR2 are often referred to as mid-band frequencies. Recent 5G NR studies have identified an operating band for these mid-band frequencies as frequency range designation FR3 (7.125 GHz-24.25 GHz). Frequency bands falling within FR3 may inherit FR1 characteristics and/or FR2 characteristics, and thus may effectively extend features of FR1 and/or FR2 into mid-band frequencies. In addition, higher frequency bands are currently being explored to extend 5G NR operation beyond 52.6 GHz. For example, three higher operating bands have been identified as frequency range designations FR4a or FR4-1 (52.6 GHz-71 GHz), FR4 (52.6 GHz-114.25 GHz), and FR5 (114.25 GHz-300 GHz). Each of these higher frequency bands falls within the EHF band. With the above examples in mind, unless specifically stated otherwise, it should be understood that the term “sub-6 GHz” or the like, if used herein, may broadly represent frequencies that may be less than 6 GHz, may be within FR1, or may include mid-band frequencies. Further, unless specifically stated otherwise, it should be understood that the term “millimeter wave” or the like, if used herein, may broadly represent frequencies that may include mid-band frequencies, may be within FR2, FR4, FR4a or FR4-1, and/or FR5, or may be within the EHF band. It is contemplated that the frequencies included in these operating bands (e.g., FR1, FR2, FR3, FR4, FR4a, FR4-1, and/or FR5) may be modified, and techniques described herein are applicable to those modified frequency ranges. In some aspects, the UE120may include a communication manager140. As described in more detail elsewhere herein, the communication manager140may receive a signal that is transmitted by a base station with a modulation signature associated with a reconfigurable intelligent surface (RIS) or a repeater and redirected by the RIS or the repeater using modulation that reverses the modulation signature associated with the RIS or the repeater; and decode the signal, wherein the signal is decodable by the UE based at least in part on the signal being redirected by the RIS or the repeater using the modulation that reverses the signature associated with the RIS or the repeater. Additionally, or alternatively, the communication manager140may perform one or more other operations described herein. In some aspects, the base station110may include a communication manager150. As described in more detail elsewhere herein, the communication manager150may transmit a first signal using a modulation signature associated with an RIS or a repeater, wherein the RIS or the repeater redirects the first signal and reverses the modulation signature; and transmit a second signal without using the modulation signature associated with the RIS or the repeater. Additionally, or alternatively, the communication manager150may perform one or more other operations described herein. As shown inFIG.1, the wireless network100may include an RIS160.
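As a purely illustrative aid, and not as part of the disclosed apparatus, the frequency range designations identified above can be summarized in a short Python sketch. The function name, the boundary handling, and the choice to return FR4-1 in preference to the overlapping FR4 range are assumptions made only for this example.

def frequency_range(freq_ghz):
    # Ranges mirror the designations above; FR4-1 (52.6 GHz-71 GHz) is a
    # subset of FR4 (52.6 GHz-114.25 GHz) and is returned preferentially.
    if 0.410 <= freq_ghz <= 7.125:
        return "FR1"    # "Sub-6 GHz", although a portion exceeds 6 GHz
    if 7.125 < freq_ghz < 24.25:
        return "FR3"    # mid-band frequencies
    if 24.25 <= freq_ghz <= 52.6:
        return "FR2"    # "millimeter wave"
    if 52.6 < freq_ghz <= 71.0:
        return "FR4-1"  # also designated FR4a
    if 71.0 < freq_ghz <= 114.25:
        return "FR4"
    if 114.25 < freq_ghz <= 300.0:
        return "FR5"    # within the EHF band
    return "unspecified"

print(frequency_range(3.5), frequency_range(28.0), frequency_range(60.0))
# FR1 FR2 FR4-1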
The RIS160may include a communication manager170. The RIS160may include one or more reconfigurable elements capable of redirecting or reflecting signals transmitted by a base station110or a UE120. In some aspects, as described in more detail elsewhere herein, the communication manager170of the RIS160may receive, from a base station, a first signal modulated by a modulation signature associated with the RIS160; and redirect the first signal using modulation that reverses the modulation signature associated with the RIS. Additionally, or alternatively, the communication manager170may perform one or more other operations described herein. As indicated above,FIG.1is provided as an example. Other examples may differ from what is described with regard toFIG.1. FIG.2is a diagram illustrating an example200of a base station110in communication with a UE120in a wireless network100, in accordance with the present disclosure. The base station110may be equipped with a set of antennas234athrough234t, such as T antennas (T≥1). The UE120may be equipped with a set of antennas252athrough252r, such as R antennas (R≥1). At the base station110, a transmit processor220may receive data, from a data source212, intended for the UE120(or a set of UEs120). The transmit processor220may select one or more modulation and coding schemes (MCSs) for the UE120based at least in part on one or more channel quality indicators (CQIs) received from that UE120. The base station110may process (e.g., encode and modulate) the data for the UE120based at least in part on the MCS(s) selected for the UE120and may provide data symbols for the UE120. The transmit processor220may process system information (e.g., for semi-static resource partitioning information (SRPI)) and control information (e.g., CQI requests, grants, and/or upper layer signaling) and provide overhead symbols and control symbols. The transmit processor220may generate reference symbols for reference signals (e.g., a cell-specific reference signal (CRS) or a demodulation reference signal (DMRS)) and synchronization signals (e.g., a primary synchronization signal (PSS) or a secondary synchronization signal (SSS)). A transmit (TX) multiple-input multiple-output (MIMO) processor230may perform spatial processing (e.g., precoding) on the data symbols, the control symbols, the overhead symbols, and/or the reference symbols, if applicable, and may provide a set of output symbol streams (e.g., T output symbol streams) to a corresponding set of modems232(e.g., T modems), shown as modems232athrough232t. For example, each output symbol stream may be provided to a modulator component (shown as MOD) of a modem232. Each modem232may use a respective modulator component to process a respective output symbol stream (e.g., for OFDM) to obtain an output sample stream. Each modem232may further use a respective modulator component to process (e.g., convert to analog, amplify, filter, and/or upconvert) the output sample stream to obtain a downlink signal. The modems232athrough232tmay transmit a set of downlink signals (e.g., T downlink signals) via a corresponding set of antennas234(e.g., T antennas), shown as antennas234athrough234t. At the UE120, a set of antennas252(shown as antennas252athrough252r) may receive the downlink signals from the base station110and/or other base stations110and may provide a set of received signals (e.g., R received signals) to a set of modems254(e.g., R modems), shown as modems254athrough254r. 
For example, each received signal may be provided to a demodulator component (shown as DEMOD) of a modem254. Each modem254may use a respective demodulator component to condition (e.g., filter, amplify, downconvert, and/or digitize) a received signal to obtain input samples. Each modem254may use a demodulator component to further process the input samples (e.g., for OFDM) to obtain received symbols. A MIMO detector256may obtain received symbols from the modems254, may perform MIMO detection on the received symbols if applicable, and may provide detected symbols. A receive processor258may process (e.g., demodulate and decode) the detected symbols, may provide decoded data for the UE120to a data sink260, and may provide decoded control information and system information to a controller/processor280. The term “controller/processor” may refer to one or more controllers, one or more processors, or a combination thereof. A channel processor may determine a reference signal received power (RSRP) parameter, a received signal strength indicator (RSSI) parameter, a reference signal received quality (RSRQ) parameter, and/or a CQI parameter, among other examples. In some examples, one or more components of the UE120may be included in a housing284. The network controller130may include a communication unit294, a controller/processor290, and a memory292. The network controller130may include, for example, one or more devices in a core network. The network controller130may communicate with the base station110via the communication unit294. One or more antennas (e.g., antennas234athrough234tand/or antennas252athrough252r) may include, or may be included within, one or more antenna panels, one or more antenna groups, one or more sets of antenna elements, and/or one or more antenna arrays, among other examples. An antenna panel, an antenna group, a set of antenna elements, and/or an antenna array may include one or more antenna elements (within a single housing or multiple housings), a set of coplanar antenna elements, a set of non-coplanar antenna elements, and/or one or more antenna elements coupled to one or more transmission and/or reception components, such as one or more components ofFIG.2. On the uplink, at the UE120, a transmit processor264may receive and process data from a data source262and control information (e.g., for reports that include RSRP, RSSI, RSRQ, and/or CQI) from the controller/processor280. The transmit processor264may generate reference symbols for one or more reference signals. The symbols from the transmit processor264may be precoded by a TX MIMO processor266if applicable, further processed by the modems254(e.g., for DFT-s-OFDM or CP-OFDM), and transmitted to the base station110. In some examples, the modem254of the UE120may include a modulator and a demodulator. In some examples, the UE120includes a transceiver. The transceiver may include any combination of the antenna(s)252, the modem(s)254, the MIMO detector256, the receive processor258, the transmit processor264, and/or the TX MIMO processor266. The transceiver may be used by a processor (e.g., the controller/processor280) and the memory282to perform aspects of any of the methods described herein (e.g., with reference toFIGS.7-15). 
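As a purely illustrative aid, the core OFDM modulation and demodulation steps performed by the modulator and demodulator components described above can be sketched as follows. The FFT size, cyclic prefix length, and ideal channel are assumptions for this example only, and the analog stages (amplification, filtering, and frequency conversion) are omitted.

import numpy as np

N_FFT, N_CP = 64, 16  # illustrative sizes, not values from this disclosure

def ofdm_modulate(symbols):
    # Frequency-domain symbols -> time-domain samples with a cyclic prefix,
    # as a modulator component of a modem would produce for one symbol stream.
    time = np.fft.ifft(symbols, n=N_FFT)
    return np.concatenate([time[-N_CP:], time])  # prepend cyclic prefix

def ofdm_demodulate(samples):
    # Drop the cyclic prefix and return to frequency-domain symbols, as a
    # demodulator component would recover them.
    return np.fft.fft(samples[N_CP:], n=N_FFT)

rng = np.random.default_rng(0)
qpsk = (rng.choice([-1, 1], N_FFT) + 1j * rng.choice([-1, 1], N_FFT)) / np.sqrt(2)
received = ofdm_demodulate(ofdm_modulate(qpsk))  # ideal channel assumed
assert np.allclose(received, qpsk)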
At the base station110, the uplink signals from UE120and/or other UEs may be received by the antennas234, processed by the modem232(e.g., a demodulator component, shown as DEMOD, of the modem232), detected by a MIMO detector236if applicable, and further processed by a receive processor238to obtain decoded data and control information sent by the UE120. The receive processor238may provide the decoded data to a data sink239and provide the decoded control information to the controller/processor240. The base station110may include a communication unit244and may communicate with the network controller130via the communication unit244. The base station110may include a scheduler246to schedule one or more UEs120for downlink and/or uplink communications. In some examples, the modem232of the base station110may include a modulator and a demodulator. In some examples, the base station110includes a transceiver. The transceiver may include any combination of the antenna(s)234, the modem(s)232, the MIMO detector236, the receive processor238, the transmit processor220, and/or the TX MIMO processor230. The transceiver may be used by a processor (e.g., the controller/processor240) and the memory242to perform aspects of any of the methods described herein (e.g., with reference toFIGS.7-15). The controller/processor240of the base station110, the controller/processor280of the UE120, and/or any other component(s) ofFIG.2may perform one or more techniques associated with efficient RIS-assisted communication, as described in more detail elsewhere herein. For example, the controller/processor240of the base station110, the controller/processor280of the UE120, and/or any other component(s) ofFIG.2may perform or direct operations of, for example, process1000ofFIG.10, process1100ofFIG.11, process1200ofFIG.12, and/or other processes as described herein. The memory242and the memory282may store data and program codes for the base station110and the UE120, respectively. In some examples, the memory242and/or the memory282may include a non-transitory computer-readable medium storing one or more instructions (e.g., code and/or program code) for wireless communication. For example, the one or more instructions, when executed (e.g., directly, or after compiling, converting, and/or interpreting) by one or more processors of the base station110and/or the UE120, may cause the one or more processors, the UE120, and/or the base station110to perform or direct operations of, for example, process1000ofFIG.10, process1100ofFIG.11, process1200ofFIG.12, and/or other processes as described herein. In some examples, executing instructions may include running the instructions, converting the instructions, compiling the instructions, and/or interpreting the instructions, among other examples. In some aspects, the UE120includes means for receiving a signal that is transmitted by a base station with a modulation signature associated with an RIS or a repeater and redirected by the RIS or the repeater using modulation that reverses the modulation signature associated with the RIS or the repeater; and/or means for decoding the signal, wherein the signal is decodable by the UE based at least in part on the signal being redirected by the RIS or the repeater using the modulation that reverses the signature associated with the RIS or the repeater. 
The means for the UE120to perform operations described herein may include, for example, one or more of communication manager140, antenna252, modem254, MIMO detector256, receive processor258, transmit processor264, TX MIMO processor266, controller/processor280, or memory282. In some aspects, the base station110includes means for transmitting a first signal using a modulation signature associated with an RIS or a repeater, wherein the RIS or the repeater redirects the first signal and reverses the modulation signature; and/or means for transmitting a second signal without using the modulation signature associated with the RIS or the repeater. The means for the base station110to perform operations described herein may include, for example, one or more of communication manager150, transmit processor220, TX MIMO processor230, modem232, antenna234, MIMO detector236, receive processor238, controller/processor240, memory242, or scheduler246. In some aspects, an RIS includes means for receiving, from a base station, a first signal modulated by a modulation signature associated with the RIS; and/or means for redirecting the first signal using modulation that reverses the modulation signature associated with the RIS. In some aspects, the means for the RIS to perform operations described herein may include, for example, one or more of communication manager170, a transmit processor, an antenna, a modem, a receive processor, a controller/processor, a memory, and/or one or more reconfigurable elements. While blocks inFIG.2are illustrated as distinct components, the functions described above with respect to the blocks may be implemented in a single hardware, software, or combination component or in various combinations of components. For example, the functions described with respect to the transmit processor264, the receive processor258, and/or the TX MIMO processor266may be performed by or under the control of the controller/processor280. As indicated above,FIG.2is provided as an example. Other examples may differ from what is described with regard toFIG.2. FIG.3is a diagram illustrating an example300of communications using an RIS, in accordance with the present disclosure. As shown inFIG.3, a base station110may communicate with a UE120in a wireless network, such as the wireless network100. The base station110and the UE120may use an RIS305to communicate with one another. For example, the RIS305may reflect or redirect a signal to the base station110and/or the UE120. The RIS305may also be referred to as an intelligent reflecting surface. In some examples, the RIS305may be a repeater. The RIS305may be, or may include, a planar or two-dimensional structure or surface that is designed to have properties to enable a dynamic control of signals or electromagnetic waves reflected and/or redirected by the RIS305. The RIS305may include one or more reconfigurable elements. For example, the RIS305may include an array of reconfigurable elements (e.g., an array of uniformly distributed reconfigurable elements). The reconfigurable elements may be elements with a reconfigurable electromagnetic characteristic. For example, the electromagnetic characteristic may include a reflection characteristic (e.g., a reflection coefficient), a scattering characteristic, an absorption characteristic, and/or a diffraction characteristic. The electromagnetic characteristic(s) of each reconfigurable element may be independently controlled and changed over time. 
The electromagnetic characteristic(s) of each reconfigurable element may be independently configured such that the combination of configured states of the reconfigurable elements reflects an incident signal or waveform in a controlled manner. For example, the reconfigurable elements may be configured to reflect or redirect an impinging signal in a controlled manner, such as by reflecting the impinging signal in a desired direction, with a desired beam width, with a desired phase, with a desired amplitude, and/or with a desired polarization, among other examples. In other words, the RIS305may be capable of modifying one or more properties (e.g., direction, beam width, phase, amplitude, and/or polarization) of an impinging signal. The reconfigurable elements of the RIS305may be controlled and/or configured by an RIS controller310. The RIS controller310may be a control module (e.g., a controller and/or a processor) that is capable of configuring the electromagnetic characteristic(s) of each reconfigurable element of the RIS305. The RIS controller310may be, or may be included in, the communication manager170. Alternatively, the communication manager170may be included in the RIS controller310. The RIS controller310may receive control communications (e.g., from a base station110and/or a UE120) indicating one or more properties of reflected signals (e.g., indicating a desired direction, a desired beam width, a desired phase, a desired amplitude, and/or a desired polarization). Therefore, in some examples, the RIS305may be capable of receiving communications (e.g., via the RIS305and/or the RIS controller310). In some examples, the RIS305and/or the RIS controller310may not have transmit capabilities (e.g., the RIS305may be capable of reflecting and/or redirecting impinging signals, via the reconfigurable elements, and modifying the reflected signals, but may not be capable of generating and/or transmitting signals). Due to the capability of the RIS305to receive communications (e.g., via the RIS305and/or the RIS controller310), the RIS305may recover partial synchronization with other wireless communication nodes (e.g., a base station110and/or a UE120). For example, the RIS305may acquire and track a frame structure (e.g., downlink or uplink frame structure) and/or slot or symbol boundaries, among other examples. As shown inFIG.3, the base station110may transmit a signal315. The signal315may be transmitted in a spatial direction toward the RIS305. The RIS controller310may configure the reconfigurable elements of the RIS305to reflect and/or redirect the signal315in a desired spatial direction and/or with one or more desired signal characteristics (e.g., beam width, phase, amplitude, frequency, and/or polarization). For example, as shown by reference number320, the RIS305may be capable of reflecting the signal315in one or more spatial directions. Although multiple beams are shown inFIG.3representing different beam states or beam directions of the RIS305, the RIS305may be capable of reflecting a signal with one beam state or one beam direction at a time. For example, in one case, as shown by reference number325, the RIS305may be configured to reflect the signal315using a first beam state (e.g., beam state1). “Beam state” may refer to a spatial direction and/or a beam of a reflected signal (e.g., a signal reflected by the RIS305). The first beam state may cause the signal315to be reflected in a spatial direction toward a first UE120(e.g., UE1). 
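As a purely illustrative aid, the beam states just described can be modeled with a simple far-field array sketch. The one-dimensional geometry, element count, half-wavelength spacing, and unit-amplitude reflection coefficients are simplifying assumptions, not parameters of this disclosure; the sketch only shows how configuring per-element phases steers the reflected signal toward one direction at a time.

import numpy as np

n_elem, spacing = 32, 0.5  # element count and spacing in wavelengths (assumed)

def steering(theta_deg):
    # Per-element phase progression of a plane wave at the given angle.
    k = 2 * np.pi * spacing * np.sin(np.deg2rad(theta_deg))
    return np.exp(1j * k * np.arange(n_elem))

theta_in, theta_out = -20.0, 35.0  # incident angle and desired beam state
# Reconfigurable elements modeled as unit-amplitude reflection coefficients
# whose phases align the incident wave with the desired outgoing direction.
coeff = np.conj(steering(theta_in) * steering(theta_out))

def reflected_gain(theta_deg):
    return abs(np.sum(coeff * steering(theta_in) * steering(theta_deg))) / n_elem

print(round(reflected_gain(35.0), 2))  # 1.0: energy steered toward 35 degrees
print(round(reflected_gain(0.0), 2))   # small sidelobe: little energy elsewhere
# Reconfiguring coeff with a different theta_out yields another beam state.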
As shown by reference number330, in another case, the RIS305may be configured to reflect the signal315using a second beam state (e.g., beam state2). The second beam state may cause the signal315to be reflected in a spatial direction toward a second UE120(e.g., UE2). The RIS305may be deployed in a wireless network (such as the wireless network100) to improve communication performance and efficiency. For example, the RIS305may enable a transmitter (e.g., a base station110or a UE120) to control the scattering, reflection, and refraction characteristics of signals transmitted by the transmitter, to overcome the negative effects of wireless propagation. For example, the RIS305may effectively control signal characteristics (e.g., spatial direction, beam width, phase, amplitude, frequency, and/or polarization) of an impinging signal without a need for complex decoding, encoding, and radio frequency processing operations. Therefore, the RIS305may provide increased channel diversity for propagation of signals in a wireless network. The increased channel diversity provides robustness to channel fading and/or blocking, such as when higher frequencies are used by the base station110and/or the UE120(e.g., millimeter wave frequencies and/or sub-terahertz frequencies). Moreover, as the RIS305does not need to perform complex decoding, encoding, and radio frequency processing operations, the RIS305may provide a more cost and energy efficient manner of reflecting and/or redirecting signals in a wireless network (e.g., as compared to other mechanisms for reflecting and/or redirecting signals, such as a relay device). As indicated above,FIG.3is provided as an example. Other examples may differ from what is described with respect toFIG.3. FIG.4is a diagram illustrating an example400of communication links in a wireless network that includes an RIS, in accordance with the present disclosure. As shown, example400includes a base station110, a first UE120(e.g., UE1), a second UE120(e.g., UE2), and the RIS305. The RIS305may be controlled and/or configured by the RIS controller310. As shown inFIG.4, some UEs120, such as UE1, may receive a communication (e.g., data and/or control information) directly from the base station110as a downlink communication. Additionally, or alternatively, some UEs120, such as UE2, may receive a communication (e.g., data and/or control information) indirectly from the base station110via the RIS305. For example, the base station110may transmit the communication in a spatial direction toward the RIS305, and the RIS305may redirect or reflect the communication to UE2. In some examples, UE1may communicate directly with the base station110via a direct link405. For example, a communication may be transmitted via the direct link405. A communication transmitted via the direct link405between UE1and the base station110does not pass through and is not reflected or redirected by the RIS305. In some examples, UE2may communicate indirectly with the base station110via an indirect link410. For example, a communication may be transmitted via different segments of the indirect link410. In some cases, the base station110may establish indirect links410through the RIS305with one or more UEs120out of a coverage area of the base station110and/or with one or more UEs for which a direct link405is blocked by an obstacle. A communication transmitted via the indirect link410between UE2and the base station110is reflected and/or redirected by the RIS305.
As shown inFIG.4and by reference number415, the base station110may communicate with the RIS305(e.g., with the RIS controller310) via a control channel. For example, the base station110may indicate, in an RIS control message, spatial direction(s) and/or signal characteristics for signals reflected by the RIS305. The RIS controller310may configure reconfigurable elements of the RIS305in accordance with the RIS control message. In some examples, the RIS control message may indicate information associated with the wireless network, such as a frame structure (e.g., uplink or downlink frame structure), time synchronization information, and/or slot (and/or symbol) boundaries, among other examples. For example, the base station110may transmit the RIS control message to the RIS controller310and data to UE2via the indirect link410. The RIS control message may be received by the RIS controller310and terminated at the RIS305(e.g., not delivered to UE2). The RIS control message may indicate, to the RIS controller310, a configuration of the RIS305for a desired state (e.g., reflection angle) that enables the data reflected and/or redirected by the RIS305to be reliably received by UE2. Using the communication scheme shown inFIG.4may improve network performance and increase reliability by providing the UEs120with link diversity for communicating with the base station110. As indicated above,FIG.4is provided as an example. Other examples may differ from what is described with respect toFIG.4. FIG.5is a diagram illustrating an example500of SSB transmission in a wireless network that includes an RIS, in accordance with the present disclosure. As shown, example500includes a base station110, a first UE120(e.g., UE1), a second UE120(e.g., UE2), a third UE120(e.g., UE3), and the RIS305. The base station110may periodically transmit (e.g., broadcast) an SSB burst set that includes multiple SSBs. In some examples, different SSBs in an SSB burst set may be beam-formed differently (e.g., transmitted using different beams), and the SSBs may be used for initial cell search, cell acquisition, beam management, and/or beam selection (e.g., as part of an initial access procedure). An SSB may include a PSS, an SSS, and a physical broadcast channel (PBCH). A UE120may use the PSS to determine subframe/symbol timing of the base station110and to determine a physical layer identity. The UE may use the SSS to determine a physical layer cell identity group number and radio frame timing. The PBCH may carry a master information block (MIB) that provides system information for initial access (e.g., how to receive remaining minimum system information (RMSI)), as well as timing information including an SSB index. In some examples, the SSB index may correspond to a beam used to carry the SSB. A UE120may monitor for and/or measure SSBs using different receive (Rx) beams during an initial network access procedure and/or cell search procedure. The UE120may indicate one or more SSBs with a best signal parameter (e.g., an RSRP parameter) to the base station. The base station110and the UE120may use the one or more indicated SSBs to select one or more beams to be used for communication between the base station and the UE (e.g., for a random access channel (RACH) procedure). For example, the UE120may transmit a first message (e.g., Msg 1) of the RACH procedure to the base station110using a RACH resource associated with an SSB with the best signal parameter.
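As a purely illustrative aid, this selection step can be sketched as follows; the RSRP values and the mapping of SSB indices to RACH resources are invented placeholders, not values defined by this disclosure.

# The UE measures an RSRP parameter per detected SSB and transmits Msg 1 on
# the RACH resource associated with the best SSB. All values are invented.
rsrp_dbm = {1: -97.0, 2: -88.5, 3: -101.2, 4: -93.8}
rach_resource = {i: "rach-occasion-%d" % i for i in rsrp_dbm}  # hypothetical

best_ssb = max(rsrp_dbm, key=rsrp_dbm.get)
print(best_ssb, rach_resource[best_ssb])  # 2 rach-occasion-2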
Additionally, or alternatively, the UE120may use the SSB and/or the SSB index to determine a cell timing for a cell via which the SSB is received (e.g., a serving cell). As shown inFIG.5, the total SSB burst set, transmitted by the base station110, may be partitioned into multiple sets of SSBs. For example, the SSB burst set may include a set of SSBs (e.g., SSB1, SSB2, SSB3, and SSB4) for direct transmission from the base station110, and another set of SSBs (e.g., SSB5, SSB6, and SSB7) for transmission through the RIS305. In some cases, if there are multiple RISs in a cell associated with the base station110, there may be a respective set of SSBs dedicated for each RIS in the cell. The base station110may perform beam sweeping with the SSBs in the set of SSBs for direct transmission from the base station110. For example, the base station110may transmit SSB1, SSB2, SSB3, and SSB4on different beams having different beam directions. The RIS305may perform SSB beam sweeping on behalf of the base station110by changing the reflection state of the RIS305to redirect/reflect the SSBs in the set of SSBs for transmission through the RIS305at different reflection angles. For example, the base station110may transmit SSB5, SSB6, and SSB7on a beam directed towards the RIS305, and the RIS305may redirect SSB5, SSB6, and SSB7at different reflection angles associated with different reflection states of the RIS305. As shown inFIG.5, SSB1, SSB2, and SSB3may be used to serve UEs120in region A (e.g., UE1). SSB4may be used to serve UEs120in region C (e.g., UE3). SSB5, SSB6, and SSB7may be used to serve UEs120in region B (e.g., UE2) through the RIS305. In some examples, region B may be out of a coverage area of the base station110. In some cases, an SSB (e.g., SSB4) for direct transmission from the base station110may be transmitted, by the base station110, in a same direction or in a similar direction as the set of SSBs (e.g., SSB5, SSB6, and SSB7) for transmission through the RIS. For example, the base station110may transmit SSB4, SSB5, SSB6, and SSB7on physically the same beam, or the base station110may transmit SSB4on a beam close to the beam used for transmitting SSB5, SSB6, and SSB7, such that SSB5, SSB6, and SSB7pass through at least a portion of region C. In this case, although SSBs5-7pass through region C when transmitted in the direction toward the RIS305, SSBs5-7are not intended to serve region C. SSB4may be transmitted in the direction toward the RIS305, but SSB4is not reflected (or relayed) by the RIS305. In some examples, UE3in region C may observe or detect any (or all) of SSBs4-7, and UE3may attempt to perform initial access using RACH resources associated with any (or all) of SSBs4-7. In this case, although only SSB4is intended for region C, UE3may not be able to distinguish SSB4from the other SSBs (e.g., SSBs5-7) that are intended for the RIS305. If UE3attempts to perform initial access using SSB5, SSB6, and/or SSB7(e.g., using the RACH resources associated with SSB5, SSB6, and/or SSB7), the base station may assume that UE3is in region B and served by the RIS305. This may lead to the base station110transmitting RIS control messages to the RIS305for transmissions to (and/or from) UE3(which do not need to be reflected by the RIS305), resulting in unnecessary control signaling overhead. Furthermore, signals transmitted to UE3may be unnecessarily reflected or redirected to region B, resulting in increased interference to other UEs in region B. As indicated above,FIG.5is provided as an example. Other examples may differ from what is described with respect toFIG.5.
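As a purely illustrative aid, the ambiguity just described can be made concrete with a short sketch. The SSB index sets follow the FIG.5example, while the RSRP values are invented: absent any distinguishing signature, a region C UE selecting purely on RSRP may pick an SSB dedicated to the RIS.

direct_ssbs = {1, 2, 3, 4}  # beam-swept directly from the base station
ris_ssbs = {5, 6, 7}        # dedicated set redirected through the RIS

# RSRP (dBm) observed by UE3 in region C, where SSBs 4-7 all pass through.
detected = {4: -90.1, 5: -89.4, 6: -91.0, 7: -92.3}
best = max(detected, key=detected.get)
print(best, "RIS-dedicated" if best in ris_ssbs else "direct")
# 5 RIS-dedicated: UE3 would attempt initial access on an SSB intended for
# region B, triggering the unnecessary RIS control signaling described above.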
FIG.6is a diagram illustrating an example600of communications using an RIS in a wireless network with multiple operators, in accordance with the present disclosure. As shown inFIG.6, example600includes a first base station110-1at a first cell site (site1), a second base station110-2at a second cell site (site2), a first UE120(UE A), a second UE120(UE B), and an RIS305. In some examples, multiple operators (e.g., service providers) may share cell sites. As shown inFIG.6, a first operator (operator A) and a second operator (operator B) may share sites1and2. For example, operator A and operator B may share the first base station110-1at site1, and operator A and operator B may share the second base station110-2at site2. In some examples, the first base station110-1and the second base station110-2may be two transmit receive points (TRPs) (e.g., first and second TRPs) for a cell. Operator A and operator B may operate in different frequency bands. For example, operator A and operator B may operate in adjacent channels. UE A may be served by operator A, and UE B may be served by operator B. As shown inFIG.6, operator B may deploy the RIS305. In order to serve UE B in a coverage hole (e.g., due to a blockage), operator B may control the RIS305to reach UE B. In some examples, operators A and B may be synchronized, and may transmit SSBs at the same time. In some examples, unlike active radio frequency (RF)-type relays, the RIS305may not be frequency selective, and signals transmitted by operator A from the second base station110-2may be reflected by the RIS305as well as signals transmitted by operator B from the second base station. For example, SSB-A and SSB-B transmitted from the second base station may both be reflected by the RIS305. In some cases, UE A and UE B may receive respective SSBs with high strength (e.g., high RSRP) from the second base station110-2through the RIS305. In this case, both UE A and UE B may establish a connection with the second base station. However, because the RIS305is controlled by operator B, operator B may change the state of the RIS305, which may cause UE A to lose the connection with the second base station110-2. Accordingly, for UE A, establishing a connection with the first base station110-1may increase reliability of network communications for UE A, as compared with a connection with the second base station110-2through the RIS operated by operator B. However, UE A may be prevented from connecting to the first base station110-1in cases in which the strength of an SSB from the second base station110-2through the RIS305is higher than the strength of an SSB from the first base station. As indicated above,FIG.6is provided as an example. Other examples may differ from what is described with respect toFIG.6. Some techniques and apparatuses described herein enable a base station to transmit a signal to be redirected by an RIS using a modulation signature associated with the RIS. The RIS may redirect the signal using modulation that reverses the modulation signature associated with the RIS. A UE may receive the signal that is transmitted by the base station with the modulation signature associated with the RIS and redirected by the RIS using the modulation that reverses the modulation signature associated with the RIS. The UE may decode the signal.
In some aspects, the signal may be decodable by the UE based at least in part on the signal being redirected by the RIS using the modulation that reverses the modulation signature associated with the RIS. In some aspects, the signal may be undecodable by the UE before being redirected by the RIS using the modulation that reverses the modulation signature associated with the RIS. As a result, an SSB associated with an RIS may be undecodable by a UE prior to being redirected by the RIS, which may prevent the UE from performing initial access using the SSB associated with the RIS before the SSB has been redirected by the RIS. This may reduce unnecessary control signaling overhead for the base station, and may reduce interference to other UEs resulting from unnecessarily redirecting signals for the UE to a region other than the region where the UE is located. In some aspects, the RIS may apply the modulation to reverse the modulation signature associated with the RIS to a signal that is not modulated using the modulation signature associated with the RIS, which may cause the signal to become undecodable by a UE once the signal is redirected by the RIS. For example, a signal (e.g., an SSB) associated with a different operator from an operator that controls the RIS may be transmitted without the modulation signature associated with the RIS, and may become undecodable by a UE once the signal is redirected by the RIS. This may prevent a UE associated with the different operator from the operator that controls the RIS from establishing a connection with a base station through the RIS, which may increase reliability of network communications for the UE. FIG.7is a diagram illustrating an example700associated with efficient RIS-assisted communication, in accordance with the present disclosure. As shown inFIG.7, example700includes communication between a base station110and a UE120. In some aspects, the base station110and the UE120may be included in a wireless network, such as wireless network100. BS110and UE120may communicate via a wireless access link, which may include an uplink and a downlink. As shown inFIG.7, in some aspects, the base station110and the UE120may communicate via an RIS705. The RIS705may be similar to the RIS305described in connection withFIGS.3-6. As shown inFIG.7, and by reference number710, the base station110may transmit, to the RIS705, configuration and/or control information associated with the RIS705. The configuration and/or control information may include information relating to a modulation signature associated with the RIS705. “Modulation signature” may refer to a pattern or sequence of modulation added to a signal to be reflected or redirected by the RIS705. The modulation signature may also be referred to as an RIS watermark. The modulation signature may be an RIS-specific modulation signature for the RIS705. In some aspects, the modulation signature may be an RIS-specific modulation signature applied by the base station110to signals to be redirected by the RIS705. In this case, the RIS705may apply an inverse modulation pattern associated with the modulation signature to signals redirected by the RIS705. An “inverse modulation pattern” associated with a modulation signature may refer to a pattern that reverses the modulation signature applied to a signal.
“Reversing” the modulation signature may refer to recovering a demodulated signal from a signal modulated using the modulation signature (e.g., recovering an original signal to which the modulation signature was applied). In some aspects, the inverse modulation pattern applied by the RIS705may be an RIS-specific modulation pattern that reverses the RIS-specific modulation signature associated with the RIS705. The inverse modulation pattern may also be referred to as an inverse RIS watermark. In some aspects, the configuration and/or control information may include an indication of the modulation signature associated with the RIS705to be applied (by the base station110) to one or more signals to be redirected by the RIS705. In this case, the RIS705may receive the indication of the modulation signature associated with the RIS705, and the RIS705may determine the inverse modulation pattern associated with the modulation signature (e.g., the modulation pattern that reverses the modulation signature). In some aspects, the configuration and/or control information may include an indication of the inverse modulation pattern to be applied by the RIS705to reverse the modulation signature associated with the RIS705. In some aspects, the base station110may transmit the indication of the modulation signature associated with the RIS705and/or the indication of the inverse modulation pattern in configuration information to configure the RIS705to apply the inverse modulation pattern associated with the modulation signature to all signals or a set of signals redirected by the RIS705. In some aspects, the base station110may transmit the indication of the modulation signature associated with the RIS705and/or the indication of the inverse modulation pattern in control information (e.g., in an RIS control message) to control the RIS705(e.g., on a per signal basis) to apply the inverse modulation pattern associated with the modulation signature for one or more signals redirected by the RIS705. In some aspects, in a case in which there are multiple RISs in a cell associated with the base station110, the base station110may transmit, to each RIS, at least one of an indication of a respective RIS-specific modulation signature associated with that RIS (e.g., to be applied by the base station110for signals to be redirected by that RIS) or an indication of a respective inverse modulation pattern to be applied by that RIS to reverse the RIS-specific modulation signature associated with that RIS. In some aspects, the configuration and/or control information may indicate a beam state or a beam direction of the RIS705that is associated with the modulation signature. For example, multiple modulation signatures (or inverse modulation patterns) may be indicated for multiple beam states and/or beam directions of the RIS705, and/or the RIS705may be configured/controlled to apply the inverse modulation pattern associated with a modulation signature when redirecting a signal using the beam state(s) or beam direction(s) associated with the modulation signature. The modulation signature associated with the RIS705(e.g., to be applied by the base station110to a signal to be redirected by the RIS705) may be a phase modulation signature, a frequency modulation signature, a polarization modulation signature, and/or an amplitude modulation signature, among other examples.
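As a purely illustrative aid, such per-beam-state configuration information might be represented as sketched below; the class and field names, the per-beam-state organization, and the use of per-symbol phase shifts in degrees are hypothetical assumptions, not signaling defined by this disclosure. The individual signature types are defined next.

from dataclasses import dataclass, field

@dataclass
class RisSignatureConfig:
    # Hypothetical container: per RIS beam state, the per-symbol phase shifts
    # (degrees) making up the modulation signature the base station applies.
    ris_id: int
    signature_by_beam_state: dict = field(default_factory=dict)

    def inverse_pattern(self, beam_state):
        # Pattern the RIS applies on reflection to reverse the signature.
        return [-shift for shift in self.signature_by_beam_state[beam_state]]

cfg = RisSignatureConfig(ris_id=705,
                         signature_by_beam_state={1: [90, -45, 30, -90]})
print(cfg.inverse_pattern(1))  # [-90, 45, -30, 90]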
“Phase modulation signature” may refer to a pattern or sequence of phase changes or phase shifts, added (e.g., by the base station110) to a signal that is to be reflected or redirected by the RIS705. “Frequency modulation signature” may refer to a pattern or sequence of frequency changes or frequency shifts, added (e.g., by the base station110) to a signal that is to be reflected or redirected by the RIS705. “Polarization modulation signature” may refer to a pattern or sequence of polarization states (e.g., angle of polarization or polarization mode), added (e.g., by the base station110) to a signal that is to be reflected or redirected by the RIS705. “Amplitude modulation signature” may refer to a pattern or sequence of amplitude changes or amplitude shifts, added (e.g., by the base station110) to a signal that is to be reflected or redirected by the RIS705. In some aspects, for a phase modulation signature, the phase modulation signature may be a time-domain phase shift pattern, and the base station110may modulate (e.g., scramble) a signal that is to be reflected or redirected by the RIS705by applying the time-domain phase shift pattern to the signal to be reflected or redirected by the RIS705. The time-domain phase shift pattern may include phase changes (e.g., phase shifts) that are included in a set of phase changes (e.g., a finite set of phase changes). For example, the time-domain phase shift pattern may include phase changes from a set of phase changes that includes ±90°, ±45°, and/or ±30°, among other examples. In some aspects, in order to minimize negative effects of inter-carrier interference or inter-symbol interference, the phase changes may be applied (e.g., by the base station110) on an OFDM symbol level (e.g., may be applied at OFDM symbol boundaries). For example, the modulation signature may modulate the signal (e.g., in phase) at each symbol of a set of symbols associated with the signal or at a subset of symbols of the set of symbols. In some aspects, the configuration information may indicate the set of symbols and/or the subset of symbols that are to be associated with the phase change. In some aspects, the modulation signature may apply the phase changes per sample or per group of samples, and the configuration information may indicate the samples and/or the groups of samples that are to be associated with the phase changes. In some aspects, the inverse modulation pattern applied by the RIS705when reflecting or redirecting a signal may reverse (e.g., descramble) the phase shifts applied to the signal by the base station110in accordance with the phase modulation signature. For a frequency modulation signature, the base station110may apply a frequency change to a signal that is to be reflected or redirected by the RIS705, in accordance with the frequency modulation signature associated with the RIS705. The frequency modulation signature may identify a pattern for applying a frequency shift (e.g., by a number of subcarriers) to a signal that is to be reflected and/or redirected by the RIS705. For example, the frequency modulation signature may modulate the frequency at each subcarrier of a set of subcarriers associated with the signal (e.g., to be redirected by the RIS705) or at a subset of subcarriers of the set of subcarriers.
In some aspects, the configuration information may indicate the set of subcarriers and/or the subset of subcarriers that are to be associated with the frequency change, and the configuration information may indicate the size of the frequency shift (e.g., the number of subcarriers) to be applied to the set of subcarriers and/or the subset of subcarriers. In some aspects, the inverse modulation pattern applied by the RIS705when reflecting or redirecting a signal may reverse the frequency shifts applied to the signal by the base station110in accordance with the frequency modulation signature. For a polarization modulation signature, the base station110may change a polarization of a signal that is to be reflected or redirected by the RIS705, in accordance with the polarization modulation signature associated with the RIS705. For example, the polarization of the signal to be reflected by the RIS705may be modulated (e.g., scrambled) by the polarization modulation signature. For example, the base station110may modify the signal, in accordance with the polarization modulation signature, from a first polarization state of the signal to a second polarization state of the signal. In this case, the inverse modulation pattern applied by the RIS705when reflecting or redirecting the signal may modify the signal from the second polarization state back to the first polarization state. The polarization state (e.g., the first polarization state and/or the second polarization state) may include an angle of polarization (e.g., for linear polarization) or a polarization mode (e.g., the first polarization state and the second polarization state may use different polarization modes). A polarization mode may include linear polarization, circular polarization, and/or elliptical polarization, among other examples. In some aspects, in order to minimize negative effects of inter-carrier interference or inter-symbol interference, the polarization changes may be applied (e.g., by the base station110) on an OFDM symbol level (e.g., may be applied at OFDM symbol boundaries). For example, the modulation signature associated with the RIS705may modulate the polarization of the signal at each symbol of a set of symbols associated with the signal (e.g., to be reflected by the RIS705) or at a subset of symbols of the set of symbols. In some aspects, the configuration information may indicate the set of symbols and/or the subset of symbols that are to be associated with the polarization change. For an amplitude modulation signature, the base station110may attenuate the amplitude of the signal (e.g., to be reflected or redirected by the RIS705) in accordance with the amplitude modulation signature associated with the RIS705. In some aspects, the base station110may attenuate the amplitude of the signal by puncturing the signal at certain time intervals in accordance with a pattern identified in the modulation signature associated with the RIS705. For example, the signal may be modulated (e.g., by the base station110) with attenuation in the amplitude of the signal (e.g., where the amplitude is reduced) or with gaps (e.g., where the amplitude is zero). The pattern or sequence of the attenuation or the gaps identified in the amplitude modulation signature may be specific to the RIS705(e.g., for signals to be reflected or redirected by the RIS705). As further shown inFIG.7, and by reference number715, the base station110may transmit a signal modulated using the modulation signature associated with the RIS705.
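As a purely illustrative aid, the phase and frequency signature types described above can be sketched end to end. The symbol counts, the finite phase set, and the use of a cyclic subcarrier shift to stand in for the frequency change are assumptions made only for this example.

import numpy as np

rng = np.random.default_rng(0)
n_sym, sym_len = 8, 80  # OFDM symbols and samples per symbol (assumed)

# Phase modulation signature: one phase shift per OFDM symbol, drawn from a
# finite set and applied at symbol boundaries by the base station.
signature_deg = rng.choice([90, -90, 45, -45, 30, -30], size=n_sym)

def apply_phase(x, pattern_deg):
    return x * np.repeat(np.exp(1j * np.deg2rad(pattern_deg)), sym_len)

signal = rng.standard_normal(n_sym * sym_len) + 0j
scrambled = apply_phase(signal, signature_deg)     # applied by the base station
restored = apply_phase(scrambled, -signature_deg)  # inverse pattern at the RIS
assert np.allclose(restored, signal)

# Frequency modulation signature: shift by a number of subcarriers, reversed
# on reflection (a cyclic shift stands in for the frequency change here).
subcarriers = np.arange(64) + 0j
shift = 3
assert np.array_equal(np.roll(np.roll(subcarriers, shift), -shift), subcarriers)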
The base station110may modulate a signal associated with the RIS705(e.g., a signal to be reflected or redirected by the RIS705) using the modulation signature associated with the RIS705. For example, the signal may be (or may include) an SSB signal, a reference signal (RS), and/or a data signal, among other examples, that is to be reflected or redirected by the RIS705. The base station110may transmit the modulated (e.g., scrambled) signal on a beam in a spatial direction toward the RIS705. The base station110may modulate the signal to be redirected (or reflected) by the RIS705using the modulation signature associated with the RIS705. In some aspects, the base station110may modulate the signal to apply a time-domain phase shift pattern to the signal (e.g., in accordance with a phase modulation signature), may modulate the signal to apply a frequency shift (e.g., of a number of subcarriers) to the signal (e.g., in accordance with a frequency modulation signature), may modulate a polarization of the signal (e.g., in accordance with a polarization modulation signature), and/or may modulate an amplitude of the signal (e.g., in accordance with an amplitude modulation signature). For example, the base station110may modulate (e.g., scramble) at least one of the phase, the amplitude, the frequency, or the polarization of the signal at the symbol boundaries of the signal, in accordance with an RIS-specific pattern or sequence of modulation identified in the modulation signature associated with the RIS705. In some aspects, the base station110may transmit one or more first signals associated with the RIS705, and the base station110may transmit one or more second signals not associated with the RIS705. For example, the one or more first signals may be signals to be reflected or redirected by the RIS705(e.g., to cover a region outside of a coverage area of the base station110), and the one or more second signals may be signals associated with direct transmission from the base station110(e.g., without being reflected or redirected by the RIS705). The base station110may transmit each first signal using the modulation signature associated with the RIS705. For example, the base station110may modulate each first signal using the modulation signature associated with the RIS705, and transmit each modulated (e.g., scrambled) first signal in a first beam direction associated with the RIS705. The base station110may transmit each second signal without using the modulation signature associated with the RIS705(e.g., without scrambling the second signal). In some aspects, the base station110may transmit a second signal without using the modulation signature associated with the RIS705in the first beam direction associated with the RIS705or in a second beam direction that is close to the first beam direction (e.g., the second beam direction may satisfy a distance threshold with respect to the first beam direction). This may result in a UE (e.g., a UE in a region between the base station110and the RIS705) detecting/receiving a scrambled first signal (e.g., a first signal modulated by the base station110using the modulation signature associated with the RIS705) and/or a second signal that is transmitted from the base station110without being scrambled (e.g., without being modulated using the modulation signature associated with the RIS705).
In this case, the scrambled first signal transmitted from the base station110may be undecodable for a UE (e.g., UE120) that receives or detects the scrambled first signal before that first signal is redirected by the RIS705. The second signal, which is transmitted from the base station110without being scrambled, may be decodable by a UE (e.g., UE120) without being redirected by the RIS705. As further shown inFIG.7, and by reference number720, the RIS705may redirect (or reflect) the modulated signal transmitted from the base station110and reverse the modulation signature associated with the RIS705that was applied to the signal by the base station110, resulting in a redirected signal that is no longer modulated with the modulation signature (e.g., an unscrambled signal). The RIS705may receive, from the base station110, the signal that is modulated using the modulation signature associated with the RIS705, and the RIS705may redirect (or reflect) the signal using modulation that reverses the modulation signature associated with the RIS705. In some aspects, the RIS705may receive one or more signals with the modulation signature applied by the base station110, and the RIS705may reverse the modulation signature applied to the one or more signals and redirect the one or more signals (e.g., without the modulation signature) at one or more different reflection angles. For example, the RIS705may change a reflection state (e.g., beam state) of the RIS705for each signal received from the base station110to redirect each signal at a different reflection angle. In some aspects, when the RIS705redirects a signal that was modulated by the base station110using the modulation signature associated with the RIS705, the RIS705may modulate the signal using the inverse modulation pattern associated with the modulation signature to reverse the modulation signature associated with the RIS705. The inverse modulation pattern associated with the modulation signature may reverse the modulation performed in accordance with the modulation signature to recover an unscrambled signal (e.g., the original signal that was modulated by the base station110). For example, the RIS705may apply the inverse modulation pattern associated with the modulation signature to reverse modulation, in accordance with the modulation signature, of the phase, the frequency, the polarization, and/or the amplitude of the signal. In some aspects, the RIS705may apply the inverse modulation pattern associated with the modulation signature associated with the RIS705to each signal redirected by the RIS705. For a signal scrambled by the base station110using the modulation signature associated with the RIS705, the RIS705reverses the modulation signature, resulting in an unscrambled signal that may be decodable by a UE (e.g., UE120). When redirecting a signal that is not modulated using the modulation signature associated with the RIS705(e.g., an unscrambled signal or a signal scrambled using a modulation signature associated with a different RIS), the modulation of the signal, by the RIS705, using the inverse modulation pattern associated with the modulation signature associated with the RIS705may result in a scrambled signal that is undecodable by a UE (e.g., UE120). In some aspects, the base station110may control the RIS705to redirect each signal received from the base station110.
For example, the base station110may transmit, to the RIS705(e.g., to an RIS controller of the RIS705), an RIS control signal indicating a configuration of the reconfigurable elements of the RIS705for redirecting a signal transmitted by the base station110. In some aspects, the base station110may also control the RIS705(e.g., via an RIS control signal) to apply the inverse modulation pattern associated with the modulation signature associated with the RIS705to a signal received from the base station110(e.g., to reverse the modulation signature associated with the RIS705that was applied to the signal by the base station110). As further shown inFIG.7, and by reference number725, the UE120may receive the signal redirected from the RIS705, and the UE120may decode the redirected signal. The signal, received by the UE120, may be a signal that was transmitted by the base station110using the modulation signature associated with the RIS705and redirected by the RIS705using modulation (e.g., the inverse modulation pattern) that reverses the modulation signature associated with the RIS705. In this case, the signal may be decodable by the UE120based at least in part on the RIS705redirecting the signal using the modulation that reverses the modulation signature associated with the RIS705. For example, the signal, when transmitted by the base station110using the modulation signature associated with the RIS705, may be undecodable by the UE120before being redirected by the RIS705using the modulation (e.g., the inverse modulation pattern) to reverse the modulation signature associated with the RIS705. In some aspects, the UE120may determine whether a detected/received signal is decodable by the UE120based at least in part on a measurement of the signal performed by the UE120. For example, the UE120may measure the RSRP of the signal and determine whether the signal is decodable by the UE120based at least in part on a determination of whether the RSRP measurement of the signal satisfies a threshold. In some aspects, in a case in which the UE120receives a scrambled signal (e.g., a signal modulated using the modulation signature associated with the RIS705), the RSRP measurement may not satisfy the threshold, and the UE120may determine that the signal is undecodable. In some aspects, when the UE120receives an unscrambled signal (e.g., a signal transmitted by the base station110without using the modulation signature or a signal transmitted using the modulation signature that has been redirected by the RIS705using the inverse modulation pattern to reverse the modulation signature), the UE120may determine that the RSRP measurement satisfies the threshold, and the UE120may decode the signal. In some aspects, the signal received by the UE120may be (or may include) an SSB. In this case, the UE120may decode the SSB and perform initial access using the SSB. For example, the UE120may perform RIS-assisted initial access in which messages between the UE120and the base station110in an initial access procedure (e.g., a RACH procedure) are transmitted via an indirect link through the RIS705. In some aspects, the signal received by the UE120may include an RS, data (e.g., downlink data), and/or control information (e.g., downlink control information), among other examples. As described above, the base station110may transmit a signal to be redirected by the RIS705using a modulation signature associated with the RIS705. The RIS705may redirect the signal using modulation that reverses the modulation signature.
The UE120may receive the signal that is transmitted by the base station110with the modulation signature and redirected by the RIS705using the modulation that reverses the modulation signature. The UE120may decode the signal. In some aspects, the signal may be decodable by the UE120based at least in part on the signal being redirected by the RIS705using the modulation that reverses the modulation signature. In some aspects, the signal may be undecodable by the UE120before being redirected by the RIS705using the modulation that reverses the modulation signature. As a result, an SSB associated with an RIS may be undecodable by a UE prior to being redirected by the RIS, which may prevent the UE from performing initial access using the SSB associated with the RIS before the SSB has been redirected by the RIS. This may reduce unnecessary control signaling overhead for the base station, and may reduce interference to other UEs resulting from unnecessarily redirecting signals for the UE to a region other than the region where the UE is located. In some aspects, an RIS may apply the modulation to reverse the modulation signature associated with the RIS to a signal that is not modulated using the modulation signature associated with the RIS, which may cause the signal to become undecodable by a UE once the signal is redirected by the RIS. For example, a signal (e.g., an SSB) associated with a different operator from an operator that controls the RIS may be transmitted without the modulation signature associated with the RIS, and may become undecodable by a UE once the signal is redirected by the RIS. This may prevent a UE associated with the different operator from the operator that controls the RIS from establishing a connection with a base station through the RIS, which may increase reliability of network communications for the UE. As indicated above,FIG.7is provided as an example. Other examples may differ from what is described with respect toFIG.7. FIG.8is a diagram illustrating an example800associated with efficient RIS-assisted communication, in accordance with the present disclosure. As shown inFIG.8, example800includes a base station110, a first UE120-1, a second UE120-2, a third UE120-3, and an RIS705. As shown inFIG.8, the base station110may transmit an SSB burst that includes a first set of SSBs (e.g., SSB1, SSB2, SSB3, and SSB4) that are associated with direct transmission from the base station110and a second set of SSBs (e.g., SSB5, SSB6, and SSB7) that are associated with the RIS705. The base station110may transmit SSB1, SSB2, SSB3, and SSB4on different beams having different beam directions. The base station110may transmit SSB5, SSB6, and SSB7on a beam directed toward the RIS705, and the RIS705may redirect SSB5, SSB6, and SSB7at different reflection angles associated with different reflection states of the RIS705. As shown by reference number805, the base station110may scramble each of SSB5, SSB6, and SSB7. For example, the base station110may modulate each of SSB5, SSB6, and SSB7using a modulation signature associated with the RIS705. As shown by reference number810, the RIS705may unscramble each of SSB5, SSB6, and SSB7. For example, the RIS705may modulate each of SSB5, SSB6, and SSB7, using an inverse modulation pattern that reverses the modulation signature associated with the RIS705, such that the SSBs (e.g., SSB5, SSB6, and SSB7) are unscrambled after being redirected by the RIS705. 
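The scramble/unscramble behavior described above can be illustrated numerically. The following is a minimal sketch, assuming a time-domain phase modulation signature derived from a pseudo-random seed shared between the base station and the RIS; the seed values, sample counts, and helper names are illustrative assumptions of this sketch, not part of the disclosure.

```python
import numpy as np

def phase_signature(seed: int, num_samples: int) -> np.ndarray:
    # Pseudo-random time-domain phase shift pattern standing in for the
    # modulation signature associated with a particular RIS (assumed helper).
    rng = np.random.default_rng(seed)
    return rng.uniform(0.0, 2.0 * np.pi, num_samples)

def apply_signature(samples: np.ndarray, pattern: np.ndarray) -> np.ndarray:
    # Base-station side: scramble the baseband samples with the signature.
    return samples * np.exp(1j * pattern)

def apply_inverse(samples: np.ndarray, pattern: np.ndarray) -> np.ndarray:
    # RIS side: inverse modulation pattern that reverses the signature.
    return samples * np.exp(-1j * pattern)

# Illustrative QPSK-like baseband block standing in for an SSB.
rng = np.random.default_rng(0)
ssb = np.exp(1j * (np.pi / 4 + np.pi / 2 * rng.integers(0, 4, 256)))

signature = phase_signature(seed=705, num_samples=ssb.size)  # seed is illustrative

scrambled = apply_signature(ssb, signature)      # transmitted toward the RIS
recovered = apply_inverse(scrambled, signature)  # after redirection by the RIS
assert np.allclose(recovered, ssb)               # unscrambled, decodable again

# A signal transmitted WITHOUT the signature (e.g., another operator's SSB)
# becomes scrambled when the RIS applies the inverse pattern to it.
reflected_other = apply_inverse(ssb, signature)
assert not np.allclose(reflected_other, ssb)

# A frequency modulation signature could be sketched analogously by
# multiplying by np.exp(2j * np.pi * freq_shift * t) and reversing it with
# the opposite shift, where freq_shift would be RIS-specific (assumption).
```

Note that because the same inverse pattern is applied to every redirected signal, the one operation both unscrambles signature-modulated signals and scrambles signals sent without the signature, which is the behavior relied on in the cross-operator scenario described later.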
The base station110may transmit SSB1, SSB2, SSB3, and SSB4, without applying the modulation signature associated with the RIS705. As shown inFIG.8, SSB1, SSB2, and SSB3may be used to serve UEs in region A (e.g., the first UE120-1). In some aspects, the first UE120-1may detect/receive one of the SSBs (e.g., SSB1, SSB2, or SSB3) serving region A. For example, the first UE120-1may detect/receive SSB2. In this case, SSB2, which is not modulated using the modulation signature associated with the RIS705, may be decodable by the first UE120-1. The first UE120-1may decode SSB2, obtain system information associated with SSB2, and perform initial access using SSB2(e.g., based at least in part on the system information associated with SSB2). For example, the first UE120-1may transmit an initial access message (e.g., a first message (Msg 1) in a RACH procedure) to the base station110using a RACH resource associated with SSB2. SSB4may be used to serve UEs in region C (e.g., the third UE120-3). SSB5, SSB6, and SSB7may be used to serve UEs120in region B (e.g., the second UE120-2) through the RIS705. As shown inFIG.8, the base station110may transmit SSB4, which is associated with direct transmission from the base station110, on a same beam as SSB5, SSB6, and SSB7, which are associated with the RIS705, or on a beam close to (e.g., within a distance threshold of) the beam used for transmitting SSB5, SSB6, and SSB7. The third UE120-3in region C may observe or detect any (or all) of SSBs4-7. SSBs5-7may be scrambled by the base station110(e.g., using the modulation signature associated with the RIS705), and the third UE120-3may detect/receive SSBs5-7before SSBs5-7are redirected and unscrambled by the RIS705(e.g., using the inverse modulation pattern that reverses the modulation signature associated with the RIS705). Due to the scrambling of SSBs5-7, SSBs5-7may be undecodable for the third UE120-3, and the third UE120-3may only be able to decode SSB4. For example, the RSRP measurements for the third UE120-3for SSBs5-7may be very low due to scrambling, and may not satisfy a threshold, resulting in SSBs5-7being undecodable for the third UE120-3. The third UE120-3may decode SSB4, obtain system information associated with SSB4, and perform initial access using SSB4(e.g., based at least in part on the system information associated with SSB4). For example, the third UE120-3may transmit an initial access message (e.g., Msg 1 in a RACH procedure) to the base station110using a RACH resource associated with SSB4. In some aspects, the second UE120-2may detect/receive one or more of the SSBs (e.g., SSB5, SSB6, or SSB7) serving region B. In this case, the second UE120-2may detect/receive one or more of SSBs5-7after the RIS705redirects and unscrambles SSBs5-7(e.g., using the inverse modulation pattern that reverses the modulation signature associated with the RIS705). For example, the second UE120-2may detect/receive SSB6after SSB6is redirected and unscrambled by the RIS705. In this case, SSB6may be decodable for the second UE120-2based at least in part on the RIS705redirecting and unscrambling SSB6. The second UE120-2may decode SSB6, obtain system information associated with SSB6, and perform initial access (e.g., RIS-assisted initial access) using SSB6(e.g., based at least in part on the system information associated with SSB6). For example, the second UE120-2may transmit an initial access message to the base station110, via an indirect link through the RIS705, using a RACH resource associated with SSB6. 
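As a rough illustration of the threshold check described above, the sketch below estimates an RSRP-like quantity by correlating received samples against a known reference sequence, so that a scrambled SSB measures very low; the measurement model, the -10 dB threshold, and the helper names are assumptions of this sketch rather than values from the disclosure.

```python
import numpy as np

def rsrp_estimate_db(received: np.ndarray, reference: np.ndarray) -> float:
    # Correlation-based power estimate (relative dB). Scrambling destroys
    # the correlation with the known reference sequence, so a scrambled
    # SSB yields a very low measurement (assumed measurement model).
    corr = np.vdot(reference, received) / reference.size
    return 10.0 * np.log10(np.abs(corr) ** 2 + 1e-30)

def is_decodable(received: np.ndarray, reference: np.ndarray,
                 threshold_db: float = -10.0) -> bool:
    # UE-side gate: attempt decoding only if the measurement satisfies
    # the threshold (the -10 dB value is purely illustrative).
    return rsrp_estimate_db(received, reference) >= threshold_db

rng = np.random.default_rng(0)
reference = np.exp(1j * (np.pi / 4 + np.pi / 2 * rng.integers(0, 4, 256)))
pattern = np.random.default_rng(705).uniform(0.0, 2.0 * np.pi, reference.size)

print(is_decodable(reference, reference))                         # True: unscrambled SSB
print(is_decodable(reference * np.exp(1j * pattern), reference))  # False: scrambled SSB
```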
In some aspects, the system information associated with the SSBs (e.g., SSBs1-4) associated with direct transmission from the base station110may be different from the system information associated with the SSBs (e.g., SSBs5-7) associated with RIS-assisted communication. For example, the system information in the MIB, PBCH payload, system information block (SIB) type1(SIB1), and/or one or more other SIBs may be different for access (e.g., via a direct link with the base station110) using SSBs1-4and for access (e.g., via an indirect link with the base station110through the RIS705) using SSBs5-7. In some aspects, common control signaling (e.g., paging) may also be different for access (e.g., via a direct link with the base station110) using SSBs1-4and for access (e.g., via an indirect link with the base station110through the RIS705) using SSBs5-7. For example, based at least in part on the different system information and common control signaling, the RIS705may be treated by a UE120as an independent cell from the base station110. In some aspects, SSBs1-4may have a first PBCH payload (e.g., MIB), and may indicate a first SIB1. The first SIB1may include RACH and initial access related information associated with SSBs1-4. SSBs5-7may have a second PBCH payload (e.g., MIB) and may indicate a second SIB1. The second SIB1may include RACH and initial access related information associated with SSBs5-7. In some aspects, due to the scrambling of SSBs5-7by the base station110, a UE120in region C (e.g., the third UE120-3) may be prevented from decoding SSBs5-7and obtaining the second MIB and the second SIB1. As indicated above,FIG.8is provided as an example. Other examples may differ from what is described with respect toFIG.8. FIG.9is a diagram illustrating an example900associated with efficient RIS-assisted communication, in accordance with the present disclosure. As shown inFIG.9, example900includes a first base station110-1at a first cell site (site1), a second base station110-2at a second cell site (site2), a first UE120(UE A), a second UE120(UE B), and an RIS705. As shown inFIG.9, a first operator (operator A) and a second operator (operator B) may share sites1and2. For example, operator A and operator B may share the first base station110-1at site1, and operator A and operator B may share the second base station110-2at site2. In some aspects, the first base station110-1may be a first TRP in a cell, and the second base station110-2may be a second TRP in the cell. Operator A and operator B may operate in different frequency bands. UE A may be served (e.g., from the first base station110-1and/or the second base station110-2) by operator A in a first frequency band, and UE B may be served (e.g., from the first base station110-1and/or the second base station110-2) by operator B in a second frequency band. As shown inFIG.9, operator B may deploy the RIS705. For example, operator B may control the state of the RIS for redirecting signals transmitted via the RIS between the second base station110-2and UE B. In some cases, signals transmitted by operator A from the second base station110-2may be reflected by the RIS705(e.g., based at least in part on the state of the RIS controlled by operator B). Operator A may transmit, from the second base station110-2, one or more SSBs (e.g., SSB-A) for connecting with the second base station110-2. Operator B may transmit, from the second base station110-2, one or more SSBs (e.g., SSB-B) for connecting with the second base station110-2. 
The transmission of SSB-A and SSB-B from the second base station110-2may be synchronized, and SSB-A and SSB-B may both be reflected by the RIS705. As shown by reference number905, the second base station110-2may apply to SSB-B (e.g., the SSB associated with operator B that controls the RIS705) the modulation signature associated with the RIS705, and the second base station110-2may transmit a scrambled SSB-B toward the RIS705. Because SSB-A is not associated with the operator that controls the RIS705, the second base station110-2may transmit SSB-A without applying the modulation signature associated with the RIS705(e.g., an unscrambled SSB-A may be transmitted by the second base station110-2). The RIS705may redirect SSB-A and SSB-B. As shown by reference number910, the RIS705may apply the inverse modulation pattern associated with the modulation signature to each signal redirected by the RIS705. When the RIS705redirects the scrambled SSB-B, the inverse modulation pattern may reverse the modulation signature associated with the RIS705, resulting in an unscrambled SSB-B. When the RIS705redirects the unscrambled SSB-A, the application of the inverse modulation pattern to the SSB-A that was not modulated using the modulation signature associated with the RIS705may result in SSB-A being scrambled with a pattern that is unknown to devices (e.g., UE A) served by operator A. In some aspects, the unscrambled SSB-B may be decodable by UE B. For example, UE B may decode the SSB-B and perform initial access using SSB-B to establish a connection with the second base station110-2through the RIS705. In some aspects, the scrambled SSB-A resulting from the redirection by the RIS705may be undecodable by UE A. Accordingly, UE A may be unable to decode any SSB from the second base station110-2(e.g., from site2) that is redirected or reflected by the RIS705. In this case, UE A may instead detect and decode an SSB from the first base station110-1(e.g., from site1), and UE A may perform initial access to establish a connection with the first base station110-1. As a result, the connection for UE A is not affected by changes to the state of the RIS705controlled by operator B, which may increase reliability of network communications for UE A. As indicated above,FIG.9is provided as an example. Other examples may differ from what is described with respect toFIG.9. FIG.10is a diagram illustrating an example process1000performed, for example, by a UE, in accordance with the present disclosure. Example process1000is an example where the UE (e.g., UE120) performs operations associated with efficient RIS-assisted communication. As shown inFIG.10, in some aspects, process1000may include receiving a signal that is transmitted by a base station with a modulation signature associated with an RIS or a repeater and redirected by the RIS or the repeater using modulation that reverses the modulation signature associated with the RIS or the repeater (block1010). For example, the UE (e.g., using communication manager140and/or reception component1302, depicted inFIG.13) may receive a signal that is transmitted by a base station with a modulation signature associated with an RIS or a repeater and redirected by the RIS or the repeater using modulation that reverses the modulation signature associated with the RIS or the repeater, as described above. 
As further shown inFIG.10, in some aspects, process1000may include decoding the signal, wherein the signal is decodable by the UE based at least in part on the signal being redirected by the RIS or the repeater using the modulation that reverses the modulation signature associated with the RIS or the repeater (block1020). For example, the UE (e.g., using communication manager140and/or decoding component1308, depicted inFIG.13) may decode the signal, wherein the signal is decodable by the UE based at least in part on the signal being redirected by the RIS or the repeater using the modulation that reverses the modulation signature associated with the RIS or the repeater, as described above. Process1000may include additional aspects, such as any single aspect or any combination of aspects described below and/or in connection with one or more other processes described elsewhere herein. In a first aspect, the signal is undecodable by the UE before being redirected by the RIS or the repeater using the modulation that reverses the modulation signature associated with the RIS or the repeater. In a second aspect, alone or in combination with the first aspect, decoding the signal includes decoding the signal in connection with a determination that the signal is decodable by the UE, and the determination that the signal is decodable by the UE is based at least in part on a measurement of the signal. In a third aspect, alone or in combination with one or more of the first and second aspects, the signal includes a synchronization signal block (SSB) associated with the RIS or the repeater. In a fourth aspect, alone or in combination with one or more of the first through third aspects, system information associated with the SSB is different from system information associated with one or more other SSBs that are not associated with the RIS or the repeater. In a fifth aspect, alone or in combination with one or more of the first through fourth aspects, the modulation signature is at least one of a frequency modulation signature, a phase modulation signature, a polarization modulation signature, or an amplitude modulation signature. AlthoughFIG.10shows example blocks of process1000, in some aspects, process1000may include additional blocks, fewer blocks, different blocks, or differently arranged blocks than those depicted inFIG.10. Additionally, or alternatively, two or more of the blocks of process1000may be performed in parallel. FIG.11is a diagram illustrating an example process1100performed, for example, by a base station, in accordance with the present disclosure. Example process1100is an example where the base station (e.g., base station110) performs operations associated with efficient RIS-assisted communication. As shown inFIG.11, in some aspects, process1100may include transmitting a first signal using a modulation signature associated with an RIS or a repeater, wherein the RIS or the repeater redirects the first signal and reverses the modulation signature (block1110). For example, the base station (e.g., using communication manager150and/or transmission component1404, depicted inFIG.14) may transmit a first signal using a modulation signature associated with a reconfigurable intelligent surface (RIS) or a repeater, wherein the RIS or the repeater redirects the first signal and reverses the modulation signature, as described above. 
As further shown inFIG.11, in some aspects, process1100may include transmitting a second signal without using the modulation signature associated with the RIS or the repeater (block1120). For example, the base station (e.g., using communication manager150and/or transmission component1404, depicted inFIG.14) may transmit a second signal without using the modulation signature associated with the RIS or the repeater, as described above. Process1100may include additional aspects, such as any single aspect or any combination of aspects described below and/or in connection with one or more other processes described elsewhere herein. In a first aspect, the first signal is undecodable by a user equipment (UE) before being redirected by the RIS or the repeater. In a second aspect, alone or in combination with the first aspect, transmitting the first signal includes transmitting the first signal in a first beam direction associated with the RIS or the repeater, and transmitting the second signal includes transmitting the second signal in the first beam direction or in a second beam direction that satisfies a distance threshold with respect to the first beam direction. In a third aspect, alone or in combination with one or more of the first and second aspects, the second signal is decodable by a user equipment (UE) without being redirected by the RIS or the repeater, and the first signal is decodable by the UE after the RIS or the repeater redirects the first signal and reverses the modulation signature. In a fourth aspect, alone or in combination with one or more of the first through third aspects, the RIS or the repeater redirects the second signal and applies an inverse modulation associated with the modulation signature to the second signal, and the second signal is undecodable by the UE after the RIS or the repeater applies the inverse modulation associated with the modulation signature to the second signal. In a fifth aspect, alone or in combination with one or more of the first through fourth aspects, the modulation signature is at least one of a frequency modulation signature, a phase modulation signature, a polarization modulation signature, or an amplitude modulation signature. In a sixth aspect, alone or in combination with one or more of the first through fifth aspects, the modulation signature is a frequency modulation signature, and transmitting a first signal using a modulation signature associated with the RIS or the repeater includes applying a frequency shift to the first signal, where the frequency shift is associated with the RIS or the repeater. In a seventh aspect, alone or in combination with one or more of the first through sixth aspects, the modulation signature is a phase modulation signature, and transmitting a first signal using a modulation signature associated with the RIS or the repeater includes applying a time-domain phase shift pattern to the first signal, where the time-domain phase shift pattern is associated with the RIS or the repeater. In an eighth aspect, alone or in combination with one or more of the first through seventh aspects, transmitting the first signal includes transmitting one or more first synchronization signal blocks (SSBs) associated with the RIS or the repeater in a first beam direction using the modulation signature associated with the RIS or the repeater, and transmitting the second signal includes transmitting one or more second SSBs not associated with the RIS or the repeater without using the modulation signature. 
In a ninth aspect, alone or in combination with one or more of the first through eighth aspects, transmitting the one or more second SSBs includes transmitting a second SSB of the one or more second SSBs in the first beam direction or in a second beam direction that satisfies a distance threshold with respect to the first beam direction. In a tenth aspect, alone or in combination with one or more of the first through ninth aspects, the one or more first SSBs are associated with first system information, and the one or more second SSBs are associated with second system information that is different from the first system information. In an eleventh aspect, alone or in combination with one or more of the first through tenth aspects, the first signal is associated with a first operator and the second signal is associated with a second operator. AlthoughFIG.11shows example blocks of process1100, in some aspects, process1100may include additional blocks, fewer blocks, different blocks, or differently arranged blocks than those depicted inFIG.11. Additionally, or alternatively, two or more of the blocks of process1100may be performed in parallel. FIG.12is a diagram illustrating an example process1200performed, for example, by an RIS, in accordance with the present disclosure. Example process1200is an example where the RIS (e.g., RIS705) performs operations associated with efficient RIS-assisted communication. As shown inFIG.12, in some aspects, process1200may include receiving, from a base station, a first signal modulated by a modulation signature associated with the RIS (block1210). For example, the RIS (e.g., using communication manager170and/or reception component1502, depicted inFIG.15) may receive, from a base station, a first signal modulated by a modulation signature associated with the RIS, as described above. As further shown inFIG.12, in some aspects, process1200may include redirecting the first signal using modulation that reverses the modulation signature associated with the RIS (block1220). For example, the RIS (e.g., using communication manager170, reflection component1508, and/or modulation component1510, depicted inFIG.15) may redirect the first signal using modulation that reverses the modulation signature associated with the RIS, as described above. Process1200may include additional aspects, such as any single aspect or any combination of aspects described below and/or in connection with one or more other processes described elsewhere herein. In a first aspect, the first signal is decodable by a UE based at least in part on the first signal being redirected using the modulation that reverses the modulation signature associated with the RIS. In a second aspect, alone or in combination with the first aspect, the modulation signature is at least one of a frequency modulation signature, a phase modulation signature, a polarization modulation signature, or an amplitude modulation signature. In a third aspect, alone or in combination with one or more of the first and second aspects, redirecting the first signal using modulation that reverses the modulation signature associated with the RIS includes modulating the first signal using an inverse modulation pattern associated with the modulation signature. 
In a fourth aspect, alone or in combination with one or more of the first through third aspects, process1200includes receiving, from the base station, an indication of at least one of the modulation signature associated with the RIS or the inverse modulation pattern associated with the modulation signature. In a fifth aspect, alone or in combination with one or more of the first through fourth aspects, process1200includes receiving, from the base station, a second signal that is not modulated by the modulation signature associated with the RIS, and redirecting the second signal and modulating the second signal using the inverse modulation pattern associated with the modulation signature. In a sixth aspect, alone or in combination with one or more of the first through fifth aspects, the second signal is undecodable by a UE after being modulated by the RIS using the inverse modulation associated with the modulation signature. AlthoughFIG.12shows example blocks of process1200, in some aspects, process1200may include additional blocks, fewer blocks, different blocks, or differently arranged blocks than those depicted inFIG.12. Additionally, or alternatively, two or more of the blocks of process1200may be performed in parallel. FIG.13is a diagram of an example apparatus1300for wireless communication. The apparatus1300may be a UE, or a UE may include the apparatus1300. In some aspects, the apparatus1300includes a reception component1302and a transmission component1304, which may be in communication with one another (for example, via one or more buses and/or one or more other components). As shown, the apparatus1300may communicate with another apparatus1306(such as a UE, a base station, or another wireless communication device) using the reception component1302and the transmission component1304. As further shown, the apparatus1300may include the communication manager140. The communication manager140may include a decoding component1308, among other examples. In some aspects, the apparatus1300may be configured to perform one or more operations described herein in connection withFIGS.7-9. Additionally, or alternatively, the apparatus1300may be configured to perform one or more processes described herein, such as process1000ofFIG.10, or a combination thereof. In some aspects, the apparatus1300and/or one or more components shown inFIG.13may include one or more components of the UE described in connection withFIG.2. Additionally, or alternatively, one or more components shown inFIG.13may be implemented within one or more components described in connection withFIG.2. Additionally, or alternatively, one or more components of the set of components may be implemented at least in part as software stored in a memory. For example, a component (or a portion of a component) may be implemented as instructions or code stored in a non-transitory computer-readable medium and executable by a controller or a processor to perform the functions or operations of the component. The reception component1302may receive communications, such as reference signals, control information, data communications, or a combination thereof, from the apparatus1306. The reception component1302may provide received communications to one or more other components of the apparatus1300. 
In some aspects, the reception component1302may perform signal processing on the received communications (such as filtering, amplification, demodulation, analog-to-digital conversion, demultiplexing, deinterleaving, de-mapping, equalization, interference cancellation, or decoding, among other examples), and may provide the processed signals to the one or more other components of the apparatus1300. In some aspects, the reception component1302may include one or more antennas, a modem, a demodulator, a MIMO detector, a receive processor, a controller/processor, a memory, or a combination thereof, of the UE described in connection withFIG.2. The transmission component1304may transmit communications, such as reference signals, control information, data communications, or a combination thereof, to the apparatus1306. In some aspects, one or more other components of the apparatus1300may generate communications and may provide the generated communications to the transmission component1304for transmission to the apparatus1306. In some aspects, the transmission component1304may perform signal processing on the generated communications (such as filtering, amplification, modulation, digital-to-analog conversion, multiplexing, interleaving, mapping, or encoding, among other examples), and may transmit the processed signals to the apparatus1306. In some aspects, the transmission component1304may include one or more antennas, a modem, a modulator, a transmit MIMO processor, a transmit processor, a controller/processor, a memory, or a combination thereof, of the UE described in connection withFIG.2. In some aspects, the transmission component1304may be co-located with the reception component1302in a transceiver. The reception component1302may receive a signal that is transmitted by a base station with a modulation signature associated with an RIS or a repeater and redirected by the RIS or the repeater using modulation that reverses the modulation signature associated with the RIS or the repeater. The decoding component1308may decode the signal, wherein the signal is decodable by the UE based at least in part on the signal being redirected by the RIS or the repeater using the modulation that reverses the modulation signature associated with the RIS or the repeater. The number and arrangement of components shown inFIG.13are provided as an example. In practice, there may be additional components, fewer components, different components, or differently arranged components than those shown inFIG.13. Furthermore, two or more components shown inFIG.13may be implemented within a single component, or a single component shown inFIG.13may be implemented as multiple, distributed components. Additionally, or alternatively, a set of (one or more) components shown inFIG.13may perform one or more functions described as being performed by another set of components shown inFIG.13. FIG.14is a diagram of an example apparatus1400for wireless communication. The apparatus1400may be a base station, or a base station may include the apparatus1400. In some aspects, the apparatus1400includes a reception component1402and a transmission component1404, which may be in communication with one another (for example, via one or more buses and/or one or more other components). As shown, the apparatus1400may communicate with another apparatus1406(such as a UE, a base station, or another wireless communication device) using the reception component1402and the transmission component1404. 
As further shown, the apparatus1400may include the communication manager150. The communication manager150may include a modulation component1408, among other examples. In some aspects, the apparatus1400may be configured to perform one or more operations described herein in connection withFIGS.7-9. Additionally, or alternatively, the apparatus1400may be configured to perform one or more processes described herein, such as process1100ofFIG.11, or a combination thereof. In some aspects, the apparatus1400and/or one or more components shown inFIG.14may include one or more components of the base station described in connection withFIG.2. Additionally, or alternatively, one or more components shown inFIG.14may be implemented within one or more components described in connection withFIG.2. Additionally, or alternatively, one or more components of the set of components may be implemented at least in part as software stored in a memory. For example, a component (or a portion of a component) may be implemented as instructions or code stored in a non-transitory computer-readable medium and executable by a controller or a processor to perform the functions or operations of the component. The reception component1402may receive communications, such as reference signals, control information, data communications, or a combination thereof, from the apparatus1406. The reception component1402may provide received communications to one or more other components of the apparatus1400. In some aspects, the reception component1402may perform signal processing on the received communications (such as filtering, amplification, demodulation, analog-to-digital conversion, demultiplexing, deinterleaving, de-mapping, equalization, interference cancellation, or decoding, among other examples), and may provide the processed signals to the one or more other components of the apparatus1400. In some aspects, the reception component1402may include one or more antennas, a modem, a demodulator, a MIMO detector, a receive processor, a controller/processor, a memory, or a combination thereof, of the base station described in connection withFIG.2. The transmission component1404may transmit communications, such as reference signals, control information, data communications, or a combination thereof, to the apparatus1406. In some aspects, one or more other components of the apparatus1400may generate communications and may provide the generated communications to the transmission component1404for transmission to the apparatus1406. In some aspects, the transmission component1404may perform signal processing on the generated communications (such as filtering, amplification, modulation, digital-to-analog conversion, multiplexing, interleaving, mapping, or encoding, among other examples), and may transmit the processed signals to the apparatus1406. In some aspects, the transmission component1404may include one or more antennas, a modem, a modulator, a transmit MIMO processor, a transmit processor, a controller/processor, a memory, or a combination thereof, of the base station described in connection withFIG.2. In some aspects, the transmission component1404may be co-located with the reception component1402in a transceiver. The transmission component1404may transmit a first signal using a modulation signature associated with a reconfigurable intelligent surface (RIS) or a repeater, wherein the RIS or the repeater redirects the first signal and reverses the modulation signature. 
The transmission component1404may transmit a second signal without using the modulation signature associated with the RIS or the repeater. The modulation component1408may modulate the first signal using the modulation signature. The number and arrangement of components shown inFIG.14are provided as an example. In practice, there may be additional components, fewer components, different components, or differently arranged components than those shown inFIG.14. Furthermore, two or more components shown inFIG.14may be implemented within a single component, or a single component shown inFIG.14may be implemented as multiple, distributed components. Additionally, or alternatively, a set of (one or more) components shown inFIG.14may perform one or more functions described as being performed by another set of components shown inFIG.14. FIG.15is a diagram of an example apparatus1500for wireless communication. The apparatus1500may be an RIS, or an RIS may include the apparatus1500. In some aspects, the apparatus1500includes a reception component1502and a transmission component1504, which may be in communication with one another (for example, via one or more buses and/or one or more other components). As shown, the apparatus1500may communicate with another apparatus1506(such as a UE, a base station, or another wireless communication device) using the reception component1502and the transmission component1504. As further shown, the apparatus1500may include the communication manager170. The communication manager170may include one or more of a reflection component1508and/or a modulation component1510, among other examples. In some aspects, the apparatus1500may be configured to perform one or more operations described herein in connection withFIGS.7-9. Additionally, or alternatively, the apparatus1500may be configured to perform one or more processes described herein, such as process1200ofFIG.12, or a combination thereof. In some aspects, the apparatus1500and/or one or more components shown inFIG.15may include one or more components of the RIS described in connection withFIG.2. Additionally, or alternatively, one or more components shown inFIG.15may be implemented within one or more components described in connection withFIG.2. Additionally, or alternatively, one or more components of the set of components may be implemented at least in part as software stored in a memory. For example, a component (or a portion of a component) may be implemented as instructions or code stored in a non-transitory computer-readable medium and executable by a controller or a processor to perform the functions or operations of the component. The reception component1502may receive communications, such as reference signals, control information, data communications, or a combination thereof, from the apparatus1506. The reception component1502may provide received communications to one or more other components of the apparatus1500. In some aspects, the reception component1502may perform signal processing on the received communications (such as filtering, amplification, demodulation, analog-to-digital conversion, demultiplexing, deinterleaving, de-mapping, equalization, interference cancellation, or decoding, among other examples), and may provide the processed signals to the one or more other components of the apparatus1500. 
In some aspects, the reception component1502may include one or more antennas, a modem, a demodulator, a MIMO detector, a receive processor, a controller/processor, a memory, or a combination thereof, of the RIS described in connection withFIG.2. The transmission component1504may transmit communications, such as reference signals, control information, data communications, or a combination thereof, to the apparatus1506. In some aspects, one or more other components of the apparatus1500may generate communications and may provide the generated communications to the transmission component1504for transmission to the apparatus1506. In some aspects, the transmission component1504may perform signal processing on the generated communications (such as filtering, amplification, modulation, digital-to-analog conversion, multiplexing, interleaving, mapping, or encoding, among other examples), and may transmit the processed signals to the apparatus1506. In some aspects, the transmission component1504may include one or more antennas, a modem, a modulator, a transmit MIMO processor, a transmit processor, a controller/processor, a memory, or a combination thereof, of the RIS described in connection withFIG.2. In some aspects, the transmission component1504may be co-located with the reception component1502in a transceiver. The reception component1502may receive, from a base station, a first signal modulated by a modulation signature associated with the RIS. The reflection component1508and/or the modulation component1510may redirect the first signal using modulation that reverses the modulation signature associated with the RIS. The reception component1502may receive, from the base station, an indication of at least one of the modulation signature associated with the RIS or the inverse modulation pattern associated with the modulation signature. The reception component1502may receive, from the base station, a second signal that is not modulated by the modulation signature associated with the RIS. The reflection component1508may redirect the second signal and the modulation component1510may modulate the second signal using the inverse modulation pattern associated with the modulation signature. The number and arrangement of components shown inFIG.15are provided as an example. In practice, there may be additional components, fewer components, different components, or differently arranged components than those shown inFIG.15. Furthermore, two or more components shown inFIG.15may be implemented within a single component, or a single component shown inFIG.15may be implemented as multiple, distributed components. Additionally, or alternatively, a set of (one or more) components shown inFIG.15may perform one or more functions described as being performed by another set of components shown inFIG.15. The following provides an overview of some Aspects of the present disclosure: Aspect 1: A method of wireless communication performed by a user equipment (UE), comprising: receiving a signal that is transmitted by a base station with a modulation signature associated with a reconfigurable intelligent surface (RIS) or a repeater and redirected by the RIS or the repeater using modulation that reverses the modulation signature associated with the RIS or the repeater; and decoding the signal, wherein the signal is decodable by the UE based at least in part on the signal being redirected by the RIS or the repeater using the modulation that reverses the modulation signature associated with the RIS or the repeater. 
Aspect 2: The method of Aspect 1, wherein the signal is undecodable by the UE before being redirected by the RIS or the repeater using the modulation that reverses the modulation signature associated with the RIS or the repeater. Aspect 3: The method of any of Aspects 1-2, wherein decoding the signal comprises: decoding the signal in connection with a determination that the signal is decodable by the UE, wherein the determination that the signal is decodable by the UE is based at least in part on a measurement of the signal. Aspect 4: The method of any of Aspects 1-3, wherein the signal includes a synchronization signal block (SSB) associated with the RIS or the repeater. Aspect 5: The method of Aspect 4, wherein system information associated with the SSB is different from system information associated with one or more other SSBs that are not associated with the RIS or the repeater. Aspect 6: The method of any of Aspects 1-5, wherein the modulation signature is at least one of a frequency modulation signature, a phase modulation signature, a polarization modulation signature, or an amplitude modulation signature. Aspect 7: A method of wireless communication performed by a base station, comprising: transmitting a first signal using a modulation signature associated with a reconfigurable intelligent surface (RIS) or a repeater, wherein the RIS or the repeater redirects the first signal and reverses the modulation signature; and transmitting a second signal without using the modulation signature associated with the RIS or the repeater. Aspect 8: The method of Aspect 7, wherein the first signal is undecodable by a user equipment (UE) before being redirected by the RIS or the repeater. Aspect 9: The method of any of Aspects 7-8, wherein transmitting the first signal comprises transmitting the first signal in a first beam direction associated with the RIS or the repeater, and wherein transmitting the second signal comprises transmitting the second signal in the first beam direction or in a second beam direction that satisfies a distance threshold with respect to the first beam direction. Aspect 10: The method of Aspect 9, wherein the second signal is decodable by a user equipment (UE) without being redirected by the RIS or the repeater, and wherein the first signal is decodable by the UE after the RIS or the repeater redirects the first signal and reverses the modulation signature. Aspect 11: The method of Aspect 10, wherein the RIS or the repeater redirects the second signal and applies an inverse modulation associated with the modulation signature to the second signal, and wherein the second signal is undecodable by the UE after the RIS or the repeater applies the inverse modulation associated with the modulation signature to the second signal. Aspect 12: The method of any of Aspects 7-11, wherein the modulation signature is at least one of a frequency modulation signature, a phase modulation signature, a polarization modulation signature, or an amplitude modulation signature. Aspect 13: The method of any of Aspects 7-12, wherein the modulation signature is a frequency modulation signature, and wherein transmitting a first signal using a modulation signature associated with the RIS or the repeater comprises: applying a frequency shift to the first signal, wherein the frequency shift is associated with the RIS or the repeater. 
Aspect 14: The method of any of Aspects 7-13, wherein the modulation signature is a phase modulation signature, and wherein transmitting a first signal using a modulation signature associated with the RIS or the repeater comprises: applying a time-domain phase shift pattern to the first signal, wherein the time-domain phase shift pattern is associated with the RIS or the repeater. Aspect 15: The method of any of Aspects 7-14, wherein transmitting the first signal comprises transmitting one or more first synchronization signal blocks (SSBs) associated with the RIS or the repeater in a first beam direction using the modulation signature associated with the RIS or the repeater, and wherein transmitting the second signal comprises transmitting one or more second SSBs not associated with the RIS or the repeater without using the modulation signature. Aspect 16: The method of Aspect 15, wherein transmitting the one or more second SSBs comprises: transmitting a second SSB of the one or more second SSBs in the first beam direction or in a second beam direction that satisfies a distance threshold with respect to the first beam direction. Aspect 17: The method of any of Aspects 15-16, wherein the one or more first SSBs are associated with first system information, and wherein the one or more second SSBs are associated with second system information that is different from the first system information. Aspect 18: The method of any of Aspects 7-17, wherein the first signal is associated with a first operator and the second signal is associated with a second operator. Aspect 19: A method of wireless communication performed by a reconfigurable intelligent surface (RIS), comprising: receiving, from a base station, a first signal modulated by a modulation signature associated with the RIS; and redirecting the first signal using modulation that reverses the modulation signature associated with the RIS. Aspect 20: The method of Aspect 19, wherein the first signal is decodable by a user equipment (UE) based at least in part on the first signal being redirected using the modulation that reverses the modulation signature associated with the RIS. Aspect 21: The method of any of Aspects 19-20, wherein the modulation signature is at least one of a frequency modulation signature, a phase modulation signature, a polarization modulation signature, or an amplitude modulation signature. Aspect 22: The method of any of Aspects 19-21, wherein redirecting the first signal using modulation that reverses the modulation signature associated with the RIS comprises: modulating the first signal using an inverse modulation pattern associated with the modulation signature. Aspect 23: The method of Aspect 22, further comprising: receiving, from the base station, an indication of at least one of the modulation signature associated with the RIS or the inverse modulation pattern associated with the modulation signature. Aspect 24: The method of any of Aspects 22-23, further comprising: receiving, from the base station, a second signal that is not modulated by the modulation signature associated with the RIS; and redirecting the second signal and modulating the second signal using the inverse modulation pattern associated with the modulation signature. Aspect 25: The method of Aspect 24, wherein the second signal is undecodable by a UE after being modulated by the RIS using the inverse modulation associated with the modulation signature. 
Aspect 26: An apparatus for wireless communication at a device, comprising a processor; memory coupled with the processor; and instructions stored in the memory and executable by the processor to cause the apparatus to perform the method of one or more of Aspects 1-6. Aspect 27: A device for wireless communication, comprising a memory and one or more processors coupled to the memory, the one or more processors configured to perform the method of one or more of Aspects 1-6. Aspect 28: An apparatus for wireless communication, comprising at least one means for performing the method of one or more of Aspects 1-6. Aspect 29: A non-transitory computer-readable medium storing code for wireless communication, the code comprising instructions executable by a processor to perform the method of one or more of Aspects 1-6. Aspect 30: A non-transitory computer-readable medium storing a set of instructions for wireless communication, the set of instructions comprising one or more instructions that, when executed by one or more processors of a device, cause the device to perform the method of one or more of Aspects 1-6. Aspect 31: An apparatus for wireless communication at a device, comprising a processor; memory coupled with the processor; and instructions stored in the memory and executable by the processor to cause the apparatus to perform the method of one or more of Aspects 7-18. Aspect 32: A device for wireless communication, comprising a memory and one or more processors coupled to the memory, the one or more processors configured to perform the method of one or more of Aspects 7-18. Aspect 33: An apparatus for wireless communication, comprising at least one means for performing the method of one or more of Aspects 7-18. Aspect 34: A non-transitory computer-readable medium storing code for wireless communication, the code comprising instructions executable by a processor to perform the method of one or more of Aspects 7-18. Aspect 35: A non-transitory computer-readable medium storing a set of instructions for wireless communication, the set of instructions comprising one or more instructions that, when executed by one or more processors of a device, cause the device to perform the method of one or more of Aspects 7-18. Aspect 36: An apparatus for wireless communication at a device, comprising a processor; memory coupled with the processor; and instructions stored in the memory and executable by the processor to cause the apparatus to perform the method of one or more of Aspects 19-25. Aspect 37: A device for wireless communication, comprising a memory and one or more processors coupled to the memory, the one or more processors configured to perform the method of one or more of Aspects 19-25. Aspect 38: An apparatus for wireless communication, comprising at least one means for performing the method of one or more of Aspects 19-25. Aspect 39: A non-transitory computer-readable medium storing code for wireless communication, the code comprising instructions executable by a processor to perform the method of one or more of Aspects 19-25. Aspect 40: A non-transitory computer-readable medium storing a set of instructions for wireless communication, the set of instructions comprising one or more instructions that, when executed by one or more processors of a device, cause the device to perform the method of one or more of Aspects 19-25. The foregoing disclosure provides illustration and description but is not intended to be exhaustive or to limit the aspects to the precise forms disclosed. 
Modifications and variations may be made in light of the above disclosure or may be acquired from practice of the aspects. As used herein, the term “component” is intended to be broadly construed as hardware and/or a combination of hardware and software. “Software” shall be construed broadly to mean instructions, instruction sets, code, code segments, program code, programs, subprograms, software modules, applications, software applications, software packages, routines, subroutines, objects, executables, threads of execution, procedures, and/or functions, among other examples, whether referred to as software, firmware, middleware, microcode, hardware description language, or otherwise. As used herein, a “processor” is implemented in hardware and/or a combination of hardware and software. It will be apparent that systems and/or methods described herein may be implemented in different forms of hardware and/or a combination of hardware and software. The actual specialized control hardware or software code used to implement these systems and/or methods is not limiting of the aspects. Thus, the operation and behavior of the systems and/or methods are described herein without reference to specific software code, since those skilled in the art will understand that software and hardware can be designed to implement the systems and/or methods based, at least in part, on the description herein. As used herein, “satisfying a threshold” may, depending on the context, refer to a value being greater than the threshold, greater than or equal to the threshold, less than the threshold, less than or equal to the threshold, equal to the threshold, not equal to the threshold, or the like. Even though particular combinations of features are recited in the claims and/or disclosed in the specification, these combinations are not intended to limit the disclosure of various aspects. Many of these features may be combined in ways not specifically recited in the claims and/or disclosed in the specification. The disclosure of various aspects includes each dependent claim in combination with every other claim in the claim set. As used herein, a phrase referring to “at least one of” a list of items refers to any combination of those items, including single members. As an example, “at least one of: a, b, or c” is intended to cover a, b, c, a+b, a+c, b+c, and a+b+c, as well as any combination with multiples of the same element (e.g., a+a, a+a+a, a+a+b, a+a+c, a+b+b, a+c+c, b+b, b+b+b, b+b+c, c+c, and c+c+c, or any other ordering of a, b, and c). No element, act, or instruction used herein should be construed as critical or essential unless explicitly described as such. Also, as used herein, the articles “a” and “an” are intended to include one or more items and may be used interchangeably with “one or more.” Further, as used herein, the article “the” is intended to include one or more items referenced in connection with the article “the” and may be used interchangeably with “the one or more.” Furthermore, as used herein, the terms “set” and “group” are intended to include one or more items and may be used interchangeably with “one or more.” Where only one item is intended, the phrase “only one” or similar language is used. Also, as used herein, the terms “has,” “have,” “having,” or the like are intended to be open-ended terms that do not limit an element that they modify (e.g., an element “having” A may also have B). 
Further, the phrase “based on” is intended to mean “based, at least in part, on” unless explicitly stated otherwise. Also, as used herein, the term “or” is intended to be inclusive when used in a series and may be used interchangeably with “and/or,” unless explicitly stated otherwise (e.g., if used in combination with “either” or “only one of”).
124,191
11863287
Reference will now be made to the exemplary embodiments illustrated, and specific language will be used herein to describe the same. It will nevertheless be understood that no limitation of the scope of the invention is thereby intended. DETAILED DESCRIPTION Before the present invention is disclosed and described, it is to be understood that this invention is not limited to the particular structures, process steps, or materials disclosed herein, but is extended to equivalents thereof as would be recognized by those ordinarily skilled in the relevant arts. It should also be understood that terminology employed herein is used for the purpose of describing particular examples only and is not intended to be limiting. The same reference numerals in different drawings represent the same element. Numbers provided in flow charts and processes are provided for clarity in illustrating steps and operations and do not necessarily indicate a particular order or sequence. Example Embodiments An initial overview of technology embodiments is provided below and then specific technology embodiments are described in further detail later. This initial summary is intended to aid readers in understanding the technology more quickly but is not intended to identify key features or essential features of the technology nor is it intended to limit the scope of the claimed subject matter. FIG.1illustrates an exemplary signal booster120in communication with a wireless device110and a base station130. The signal booster120can be referred to as a repeater. A repeater can be an electronic device used to amplify (or boost) signals. The signal booster120(also referred to as a cellular signal amplifier) can improve the quality of wireless communication by amplifying, filtering, and/or applying other processing techniques via a signal amplifier122to uplink signals communicated from the wireless device110to the base station130and/or downlink signals communicated from the base station130to the wireless device110. In other words, the signal booster120can amplify or boost uplink signals and/or downlink signals bi-directionally. In one example, the signal booster120can be at a fixed location, such as in a home or office. Alternatively, the signal booster120can be attached to a mobile object, such as a vehicle or a wireless device110. In one configuration, the signal booster120can include an integrated device antenna124(e.g., an inside antenna or a coupling antenna) and an integrated node antenna126(e.g., an outside antenna). The integrated node antenna126can receive the downlink signal from the base station130. The downlink signal can be provided to the signal amplifier122via a second coaxial cable127or other type of radio frequency connection operable to communicate radio frequency signals. The signal amplifier122can include one or more cellular signal amplifiers for amplification and filtering. The downlink signal that has been amplified and filtered can be provided to the integrated device antenna124via a first coaxial cable125or other type of radio frequency connection operable to communicate radio frequency signals. The integrated device antenna124can wirelessly communicate the downlink signal that has been amplified and filtered to the wireless device110. Similarly, the integrated device antenna124can receive an uplink signal from the wireless device110. 
The uplink signal can be provided to the signal amplifier122via the first coaxial cable125or other type of radio frequency connection operable to communicate radio frequency signals. The signal amplifier122can include one or more cellular signal amplifiers for amplification and filtering. The uplink signal that has been amplified and filtered can be provided to the integrated node antenna126via the second coaxial cable127or other type of radio frequency connection operable to communicate radio frequency signals. The integrated node antenna126can communicate the uplink signal that has been amplified and filtered to the base station130. In one example, the signal booster120can filter the uplink and downlink signals using any suitable analog or digital filtering technology including, but not limited to, surface acoustic wave (SAW) filters, bulk acoustic wave (BAW) filters, film bulk acoustic resonator (FBAR) filters, ceramic filters, waveguide filters or low-temperature co-fired ceramic (LTCC) filters. In one example, the signal booster120can send uplink signals to a node and/or receive downlink signals from the node. The node can comprise a wireless wide area network (WWAN) access point (AP), a base station (BS), an evolved Node B (eNB), a baseband unit (BBU), a remote radio head (RRH), a remote radio equipment (RRE), a relay station (RS), a radio equipment (RE), a remote radio unit (RRU), a central processing module (CPM), or another type of WWAN access point. In one configuration, the signal booster120used to amplify the uplink and/or a downlink signal is a handheld booster. The handheld booster can be implemented in a sleeve of the wireless device110. The wireless device sleeve can be attached to the wireless device110, but can be removed as needed. In this configuration, the signal booster120can automatically power down or cease amplification when the wireless device110approaches a particular base station. In other words, the signal booster120can determine to stop performing signal amplification when the quality of uplink and/or downlink signals is above a defined threshold based on a location of the wireless device110in relation to the base station130. In one example, the signal booster120can include a battery to provide power to various components, such as the signal amplifier122, the integrated device antenna124and the integrated node antenna126. The battery can also power the wireless device110(e.g., phone or tablet). Alternatively, the signal booster120can receive power from the wireless device110. In one configuration, the signal booster120can be a Federal Communications Commission (FCC)-compatible consumer signal booster. As a non-limiting example, the signal booster120can be compatible with FCC Part 20 or 47 Code of Federal Regulations (C.F.R.) Part 20.21 (Mar. 21, 2013). In addition, the signal booster120can operate on the frequencies used for the provision of subscriber-based services under parts 22 (Cellular), 24 (Broadband PCS), 27 (AWS-1, 700 MHz Lower A-E Blocks, and 700 MHz Upper C Block), and 90 (Specialized Mobile Radio) of 47 C.F.R. The signal booster120can be configured to automatically self-monitor its operation to ensure compliance with applicable noise and gain limits. The signal booster120can either self-correct or shut down automatically if the signal booster's operations violate the regulations defined in FCC Part 20.21.
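As an illustrative aid only, and not part of the original disclosure, the bi-directional amplify-and-filter flow described above can be sketched in Python. The 20 dB gain value, the function names, and the print format are hypothetical assumptions chosen purely for clarity:

    DOWNLINK = "downlink"   # base station 130 -> wireless device 110
    UPLINK = "uplink"       # wireless device 110 -> base station 130

    def band_pass(signal_dbm):
        """Stand-in for SAW/BAW/FBAR/ceramic/LTCC filtering; modeled as lossless."""
        return signal_dbm

    def amplify(signal_dbm, gain_db):
        """Ideal amplifier: output power (dBm) = input power (dBm) + gain (dB)."""
        return signal_dbm + gain_db

    def boost(signal_dbm, direction, gain_db=20.0):
        """Filter then amplify a signal travelling in either direction."""
        boosted = amplify(band_pass(signal_dbm), gain_db)
        # Downlink exits via the integrated device antenna 124; uplink exits
        # via the integrated node antenna 126.
        exit_antenna = "device antenna" if direction == DOWNLINK else "node antenna"
        print(f"{direction}: {signal_dbm:.1f} dBm -> {boosted:.1f} dBm ({exit_antenna})")
        return boosted

    boost(-95.0, DOWNLINK)  # weak downlink received from the base station
    boost(-30.0, UPLINK)    # uplink received from the wireless device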
In one configuration, the signal booster120can improve the wireless connection between the wireless device110and the base station130(e.g., cell tower) or another type of wireless wide area network (WWAN) access point (AP). The signal booster120can boost signals for cellular standards, such as the Third Generation Partnership Project (3GPP) Long Term Evolution (LTE) Release 8, 9, 10, 11, 12, 13, 14, 15, or 16, 3GPP 5G Release 15 or 16, or Institute of Electrical and Electronics Engineers (IEEE) 802.16. In one configuration, the repeater220can boost signals for 3GPP LTE Release 16.0.0 (January 2019) or other desired releases. The signal booster120can boost signals from the 3GPP Technical Specification (TS) 36.101 (Release 15 Sep. 2017) bands or LTE frequency bands. For example, the signal booster120can boost signals from the LTE frequency bands: 2, 4, 5, 12, 13, 17, 25, and 26. In addition, the signal booster120can boost selected frequency bands based on the country or region in which the signal booster is used, including any of bands 1-85 or other bands, as disclosed in 3GPP TS 36.104 V16.0.0 (January 2019). In another configuration, the repeater220can boost signals from the 3GPP Technical Specification (TS) 38.104 (Release 15 Jan. 2019) bands or 5G frequency bands. In addition, the repeater220can boost selected frequency bands based on the country or region in which the repeater is used, including any of bands n1-n86, n257-n261, or other bands, as disclosed in 3GPP TS 38.104 V15.4.0 (January 2019). The number of 3GPP LTE or 5G frequency bands and the level of signal improvement can vary based on a particular wireless device, cellular node, or location. Additional domestic and international frequencies can also be included to offer increased functionality. Selected models of the signal booster120can be configured to operate with selected frequency bands based on the location of use. In another example, the signal booster120can automatically sense from the wireless device110or base station130(or GPS, etc.) which frequencies are used, which can be a benefit for international travelers. In one configuration, multiple signal boosters can be used to amplify UL and DL signals. For example, a first signal booster can be used to amplify UL signals and a second signal booster can be used to amplify DL signals. In addition, different signal boosters can be used to amplify different frequency ranges. In one configuration, the signal booster120can be configured to identify when the wireless device110receives a relatively strong downlink signal. An example of a strong downlink signal can be a downlink signal with a signal strength greater than approximately −80 decibel-milliwatts (dBm). The signal booster120can be configured to automatically turn off selected features, such as amplification, to conserve battery life. When the signal booster120senses that the wireless device110is receiving a relatively weak downlink signal, the integrated booster can be configured to provide amplification of the downlink signal. An example of a weak downlink signal can be a downlink signal with a signal strength less than −80 dBm. FIG.2illustrates an example diagram of a repeater200that includes a pre-amplification system205for a modem210(or modem module). The repeater200(or signal booster) can be a cellular repeater. The pre-amplification system205can receive a downlink signal on a downlink signal path. The downlink signal can be a downlink cellular signal.
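Returning to the strong/weak downlink thresholds noted above, a minimal sketch of the automatic power-down decision might look as follows. This is not from the disclosure; the constant and function names are assumptions, and the text supplies only the approximate −80 dBm figure:

    STRONG_DL_THRESHOLD_DBM = -80.0  # approximate threshold given in the text

    def amplification_enabled(downlink_rssi_dbm):
        """Amplify only when the downlink is weak; power down when it is strong."""
        return downlink_rssi_dbm < STRONG_DL_THRESHOLD_DBM

    assert amplification_enabled(-95.0)      # weak signal: amplify
    assert not amplification_enabled(-60.0)  # strong signal: conserve battery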
The pre-amplification system205can include a pre-amplifier226to amplify the downlink signal to produce an amplified downlink signal. The pre-amplifier226can be a low noise amplifier (LNA) or another type of amplifier. The pre-amplifier226can be a low-gain and wideband amplifier, as the pre-amplifier226can cover all of the cellular bands amplified by the repeater200. The pre-amplification system205can provide the amplified downlink signal to the modem210. In other words, the pre-amplification system205can be responsible for downlink signal amplification prior to or before a downlink signal reaches the modem210. In one example, the modem210(or modem module) can include an amplifier, such as a low noise amplifier (LNA). The amplifier included in the modem210(or modem module) can be separate from the pre-amplifier226. In other words, the pre-amplifier226can be outside the modem210(or modem module). In one example, the pre-amplification system205can be considered a pre-amplifier for the modem210because the pre-amplification system205can amplify a downlink (DL) signal before the downlink signal is received at the modem210. An amplifier on a DL-only port will not violate the modem's government certification, such as a certification by the Federal Communications Commission (FCC). Amplifying the uplink (UL) output from the modem would violate the modem's government certification. Signals going to a modem's UL/DL port cannot be amplified without separating the UL and DL signals, which would add so much insertion loss that it would not be worthwhile. However, using a pre-amplifier on a DL only port can improve the downlink signal without violating the modem's government certification. Therefore, the pre-amplification system205can serve to improve a performance of the modem210. Further, as described in further detail below, the downlink signal path can be communicatively coupled to a diversity donor antenna. In one example, downlink signal amplification for the downlink signal path that is coupled to the diversity donor antenna, as performed by the pre-amplification system205, can provide about 3 dB of increased receiver sensitivity, thereby allowing a higher data throughput via an Ethernet port216of the modem210. In other words, this additional 3 dB of receiver sensitivity can result in the higher data throughput via the Ethernet port216of the modem210. The increased receiver sensitivity can be particularly useful when the repeater200is used in a rural geographical area with poor cellular reception. In one example, the modem210can include a first modem port212, a second modem port214and the Ethernet port216. The first modem port212can be an uplink-downlink modem port. The second modem port214can be a downlink-only modem port. The Ethernet port216can be communicatively coupled to a coaxial cable218. In an alternative configuration, the modem210may not include an Ethernet port, but rather a port for an optical fiber cable or another suitable port for a specific type of cable. In one example, the modem210can receive a downlink signal via the first modem port212and/or the second modem port214. The modem210can modify the downlink signal (e.g., amplify and/or filter the downlink signal) to produce a modified downlink signal. The modem210can direct the modified downlink signal to the coaxial cable218via the Ethernet port216. Similarly, the modem210can receive an uplink signal through the coaxial cable218via the Ethernet port216.
The modem210can modify the uplink signal (e.g., amplify and/or filter the uplink signal) to produce a modified uplink signal. The modem210can direct the modified uplink signal to the first modem port212and/or the second modem port214. In one example, the repeater200can include a first donor antenna port233and a second donor antenna port235. The first donor antenna port233can be communicatively coupled to a first donor antenna232, and the second donor antenna port235can be communicatively coupled to a second donor antenna234. In one example, the first donor antenna232can be a main donor antenna, and the second donor antenna234can be a diversity donor antenna. Similarly, the first donor antenna port233can be a main donor antenna port, and the second donor antenna port235can be a diversity donor antenna port. In one example, the first donor antenna232can be an uplink-downlink antenna, and the second donor antenna234can be a downlink-only antenna. In other words, the first donor antenna232can be capable of transmitting uplink signals and receiving downlink signals, whereas the second donor antenna234can be capable of only receiving downlink signals. Further, the first donor antenna232and the second donor antenna234can achieve antenna diversity using spatial diversity, pattern diversity, polarization diversity, etc. For example, the first donor antenna232and the second donor antenna234can be cross polarized antennas. In one example, the first donor antenna232and the second donor antenna234can be configured to receive and/or transmit signals in the same set of bands. In one specific example, the second donor antenna234(e.g., the downlink-only donor antenna) can be configured to only accommodate downlink frequencies. In one example, the repeater200can include a first signal path222communicatively coupled between the first donor antenna port233and the first modem port212. The first signal path222can be an uplink-downlink signal path. In other words, the first signal path222can carry uplink signals received from the modem210via the first modem port212, and the first signal path222can direct the uplink signals for transmission via the first donor antenna232. The first signal path222can comprise a coaxial cable that is connected between the first donor antenna port233and the first modem port212. The first donor antenna232can transmit the uplink signals to a base station. In addition, the first signal path222can carry downlink signals received from the first donor antenna232via the first donor antenna port233. The first donor antenna232can receive the downlink signals from the base station. The first signal path222can direct the downlink signals to the modem210via the first modem port212. In one example, the first signal path222may not include amplifiers or filters, and a signal is directed on the first signal path222to the modem210without modification (e.g., without amplification or filtering) of the signal. In one example, the repeater200can include a second signal path224communicatively coupled between the second donor antenna port235and the second modem port214. The second signal path224can be a downlink-only signal path. Thus, the second signal path224can carry downlink signals received from the second donor antenna234via the second donor antenna port235, and the second signal path224can direct the downlink signals to the modem210via the second modem port214. Further, the second signal path224can include the pre-amplifier226to amplify received downlink signals.
For example, the pre-amplifier226may amplify a received downlink signal to produce an amplified downlink signal, and the amplified downlink signal can be directed on the second signal path224to the second modem port214. The pre-amplifier226can be included in the pre-amplification system205, which can be responsible for providing amplified downlink signals to the modem210. In other words, the pre-amplifier226in the pre-amplification system205can perform downlink signal amplification prior to or before a downlink signal reaches the modem210. In one example, the second donor antenna234can receive a downlink signal from the base station. The downlink signal can be directed onto the second signal path224. More specifically, the downlink signal can be directed to the pre-amplification system205, and the pre-amplifier226in the pre-amplification system205can amplify the downlink signal to produce an amplified downlink signal. The amplified downlink signal can be directed to the modem210via the second modem port214. The modem210can modify the amplified downlink signal by performing amplification, filtering, etc. on the amplified downlink signal. In other words, the modem210can modify the amplified downlink signal to produce a modified amplified downlink signal. The modem210can direct the modified amplified downlink signal to the coaxial cable218via the Ethernet port216of the modem210. In other words, the modem210can output the modified amplified downlink signal via the Ethernet port216, and the modified amplified downlink signal can be sent on the coaxial cable218to a destination. In one example, the modem210can act as a cellular-to-WiFi converter. For example, the modem210can combine a first downlink cellular signal received on the first signal path222and an amplified cellular downlink signal received on the second signal path224to form a combined downlink signal. The modem210can demodulate the combined downlink signal for output to the Ethernet port216. Alternatively, the modem210can demodulate the combined downlink signal for output to a fiber optic port. In one example, the pre-amplifier226can be inserted on the second signal path224(e.g., the downlink-only signal path) to be closer to the second donor antenna234(e.g., the diversity donor antenna), which can improve a receive sensitivity on the second modem port214(e.g., the downlink-only modem port) and thereby improve the receive sensitivity and performance of the modem210. For example, inserting the pre-amplifier226on the second signal path224can increase the receiver sensitivity by about 3 dB, thereby allowing a higher data throughput via the Ethernet port216of the modem210. In addition, the insertion of the pre-amplifier226on the second signal path224can improve a system noise figure (e.g., reduce the system noise figure), thereby improving a performance of the modem210. Thus, the system noise figure can be improved by amplifying a downlink signal received from the second donor antenna234before the downlink signal is received at the modem210. In one example, the incorporation of the pre-amplification system205in the repeater200may not affect a regulatory certification of the modem210. In other words, the incorporation of the pre-amplification system205in the repeater200may not require a regulatory recertification of the modem210. 
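The noise-figure benefit attributed above to placing the pre-amplifier226ahead of the modem can be sanity-checked with the standard Friis cascade formula. The following sketch is not part of the disclosure; the 8 dB modem noise figure and the 1 dB / 15 dB LNA figures are hypothetical values chosen only to show the direction of the effect:

    import math

    def db_to_lin(db):
        return 10 ** (db / 10.0)

    def lin_to_db(lin):
        return 10.0 * math.log10(lin)

    def cascade_nf_db(stages):
        """Friis formula; stages is a list of (noise_figure_dB, gain_dB) tuples."""
        total_f, total_gain = 1.0, 1.0
        for nf_db, gain_db in stages:
            total_f += (db_to_lin(nf_db) - 1.0) / total_gain
            total_gain *= db_to_lin(gain_db)
        return lin_to_db(total_f)

    MODEM = (8.0, 0.0)     # hypothetical modem front end: 8 dB noise figure
    PRE_AMP = (1.0, 15.0)  # hypothetical LNA: 1 dB noise figure, 15 dB gain

    print(round(cascade_nf_db([MODEM]), 2))           # 8.0  (modem alone)
    print(round(cascade_nf_db([PRE_AMP, MODEM]), 2))  # ~1.54 (LNA placed first)

Under these assumed numbers, a low noise, high gain stage placed nearest the antenna dominates the cascade noise figure, which is consistent with the receiver-sensitivity improvement described above.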
For example, since the pre-amplification system205only amplifies downlink signals that are directly routed to the modem210and does not amplify uplink signals, which could adversely affect the network, the pre-amplification system205does not change the certification of the modem210. In one example, the modem210can be a pre-certified modem, and incorporating the pre-amplification system205to the repeater200may not affect a certification status of the modem210. The pre-amplification system can be limited to a selected gain or power level that will not affect the certification status. For example, the pre-amplification system may be limited to a gain of 3 dB or 6 dB. Therefore, the pre-amplification system205can serve to increase the receiver sensitivity and reduce the system noise figure, without affecting the certification of the modem210. Further, the incorporation of the pre-amplification system205in the repeater200may not affect a network protection for the repeater200. In one example, the pre-amplification system205for the modem210can be unrelated to a repeater. For example, the pre-amplification system205for the modem210can be incorporated into any type of hardware device having a modem or an integrated modem. The pre-amplification system205can serve to amplify (e.g., pre-amplify) signals before the signals are received at the modem210. As a result, the modem210can receive signals that are already amplified, and the modem210can perform further processing on amplified signals. FIG.3illustrates another example diagram of a repeater300that includes a pre-amplification system305for a modem310(or modem module). The modem310can include a first modem port312(e.g., an uplink-downlink modem port), a second modem port314(a downlink-only modem port) and an Ethernet port316communicatively coupled to a destination via a coaxial cable318. Further, the repeater300can include a first donor antenna port333communicatively coupled to a first donor antenna332(e.g., a main donor antenna), and the repeater300can include a second donor antenna port335communicatively coupled to a second donor antenna334(e.g., a diversity donor antenna). Further, the repeater300can include a first signal path322communicatively coupled between the first modem port312and the first donor antenna port333, and the repeater300can include a second signal path324communicatively coupled between the second modem port314and the second donor antenna port335. The first signal path322can be an uplink-downlink signal path, and the second signal path324can be a downlink-only signal path. In other words, the first signal path322can be capable of carrying uplink signals and downlink signals, whereas the second signal path324can be capable of carrying only downlink signals. The pre-amplification system305can correspond to the pre-amplification system205, as described earlier. Further, the second signal path324can correspond to the second signal path224, as described earlier. In one configuration, the pre-amplification system305can be communicatively coupled to the second signal path324, and the pre-amplification system305can be between the second modem port314and the second donor antenna port335. For example, the pre-amplification system305can include a first signal modification device342and a second signal modification device348, where the first signal modification device342and the second signal modification device348can be communicatively coupled to the second signal path324. 
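As a hypothetical sketch of the certification-preserving gain limit mentioned above, the cap could be modeled as a simple clamp. The cap value and names are assumptions; the text offers 3 dB or 6 dB only as examples:

    CERTIFICATION_GAIN_CAP_DB = 6.0  # example limit cited in the text above

    def constrained_gain(requested_gain_db):
        """Clamp pre-amplifier gain so the modem's certification status is unaffected."""
        return min(requested_gain_db, CERTIFICATION_GAIN_CAP_DB)

    assert constrained_gain(15.0) == 6.0
    assert constrained_gain(3.0) == 3.0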
The first signal modification device342and the second signal modification device348can be diplexers, triplexers, splitters, circulators, etc. Further, the pre-amplification system305can include a first amplifier344and a second amplifier346, where the first amplifier344and the second amplifier346can be communicatively coupled between the first signal modification device342and the second signal modification device348. The first amplifier344and the second amplifier346can be LNAs or another type of amplifier. Further, the first amplifier344and the second amplifier346can be in parallel with respect to each other. In one example, the first amplifier344can be a high band amplifier and the second amplifier346can be a low band amplifier, or vice versa. Thus, a received downlink signal can be directed by the first signal modification device342to either the first amplifier344or the second amplifier346depending on whether the received downlink signal is either a high band signal or a low band signal. In this example, amplifiers can have varying gain across a frequency spectrum, so the first amplifier344and the second amplifier346can serve to amplify low bands separately from high bands. The high bands can include, but are not limited to, band 4 (B4) or band 25 (B25). The low bands can include, but are not limited to, band 5 (B5), band 12 (B12) or band 13 (B13). FIG.4Aillustrates an example diagram of a pre-amplification system405for a modem that includes multiple amplifiers and a band pass filter444. The pre-amplification system405can be communicatively coupled to a modem port414(e.g., a downlink-only modem port) of the modem (or modem module) via a signal path424(e.g., a downlink-only signal path). The signal path424can be communicatively coupled to a donor antenna434(e.g., a diversity donor antenna) via a donor antenna port435(e.g., a diversity donor antenna port). In this example, the pre-amplification system405(or the signal path424) can include an amplifier442(e.g., an LNA). In addition, the pre-amplification system405(or the signal path424) can include a band pass filter444, a variable attenuator446and/or an additional amplifier448. Thus, the amplifier442, the band pass filter444, the variable attenuator446and/or the additional amplifier448can be communicatively coupled between the modem port414and the donor antenna port435. In some cases, the band pass filter444can be a single-input single-output (SISO) filter, where the SISO filter can filter signals in one or more bands. Further, the band pass filter444can be a low loss filter to protect the modem from wideband interference. The pre-amplification system405can correspond to the pre-amplification system205,305, as described earlier. Further, the signal path424can correspond to the second signal path224,324, as described earlier. Further, the modem port414can correspond to the second modem port214,314, as described earlier. Further, the donor antenna434and the donor antenna port435can correspond to the second donor antenna234,334and the second donor antenna port235,335, respectively, as described earlier. FIG.4Billustrates an example diagram of a pre-amplification system405for a modem that includes multiple amplifiers and a switchable band pass filter445. In this example, the pre-amplification system405(or the signal path424) can include a switchable band pass filter445, as opposed to a non-switchable band pass filter (as shown inFIG.4A).
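The high-band/low-band split performed by the first signal modification device can be sketched as a simple dispatch. This is an illustration only: the band sets reuse the examples given above, while the gain values are invented for the sketch:

    LOW_BANDS = {"B5", "B12", "B13"}  # example low bands from the text
    HIGH_BANDS = {"B4", "B25"}        # example high bands from the text

    def route_and_amplify(band, signal_dbm, low_gain_db=12.0, high_gain_db=15.0):
        """First device selects an amplifier path; second device rejoins the path."""
        if band in LOW_BANDS:
            return signal_dbm + low_gain_db   # low band amplifier (e.g., 346)
        if band in HIGH_BANDS:
            return signal_dbm + high_gain_db  # high band amplifier (e.g., 344)
        raise ValueError(f"band {band} is not handled in this sketch")

    print(route_and_amplify("B12", -90.0))  # -78.0 via the low band amplifier
    print(route_and_amplify("B25", -90.0))  # -75.0 via the high band amplifier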
In one example, bypassing the switchable band pass filter445can result in an additional 1-2 dB of receiver sensitivity, thereby improving a performance of the modem. FIG.4Cillustrates an example diagram of a pre-amplification system405for a modem that includes multiple amplifiers and multiple band pass filters. In this example, the pre-amplification system405(or the signal path424) can include an additional band pass filter441prior to the amplifier442. In other words, the additional band pass filter441can be communicatively coupled between the amplifier442and the donor antenna port435. FIG.4Dillustrates a diagram of a pre-amplification system405for a modem that includes multiple amplifiers and multiple switchable band pass filters. In this example, the pre-amplification system405(or the signal path424) can include an additional switchable band pass filter440prior to the amplifier442. In other words, the additional switchable band pass filter440can be communicatively coupled between the amplifier442and the donor antenna port435. FIG.5is a flowchart illustrating a method for pre-amplifying downlink cellular signals for a modem. The method can be executed as instructions on a machine, where the instructions are included on at least one computer readable medium or one non-transitory machine readable storage medium. The method can include the operation of: receiving a downlink cellular signal on a downlink signal path communicatively coupled between a diversity donor antenna port and a downlink-only modem port of the modem, as in block510. The method can include the operation of: directing the received downlink cellular signal to a pre-amplifier of the downlink signal path to produce an amplified downlink cellular signal, as in block520. The method can include the operation of: directing the amplified downlink cellular signal to the downlink-only modem port, as in block530. FIG.6provides an example illustration of the wireless device, such as a user equipment (UE), a mobile station (MS), a mobile communication device, a tablet, a handset, a wireless transceiver coupled to a processor, or other type of wireless device. The wireless device can include one or more antennas configured to communicate with a node or transmission station, such as an access point (AP), a base station (BS), an evolved Node B (eNB), a baseband unit (BBU), a remote radio head (RRH), a remote radio equipment (RRE), a relay station (RS), a radio equipment (RE), a remote radio unit (RRU), a central processing module (CPM), or other type of wireless wide area network (WWAN) access point. The wireless device can communicate using separate antennas for each wireless communication standard or shared antennas for multiple wireless communication standards. The wireless device can communicate in a wireless local area network (WLAN), a wireless personal area network (WPAN), and/or a WWAN. FIG.6also provides an illustration of a microphone and one or more speakers that can be used for audio input and output from the wireless device. The display screen can be a liquid crystal display (LCD) screen, or other type of display screen such as an organic light emitting diode (OLED) display. The display screen can be configured as a touch screen. The touch screen can use capacitive, resistive, or another type of touch screen technology. An application processor and a graphics processor can be coupled to internal memory to provide processing and display capabilities. 
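Returning to the method of FIG. 5 above, its three operations can be condensed into a short sketch. The gain value and the dictionary return form are assumptions made for illustration, not part of the disclosed method:

    def pre_amplify_downlink(downlink_dbm, lna_gain_db=15.0):
        # Block 510: receive the downlink cellular signal on the downlink signal
        # path coupled between the diversity donor antenna port and the
        # downlink-only modem port.
        received = downlink_dbm
        # Block 520: direct the signal to the pre-amplifier to produce an
        # amplified downlink cellular signal.
        amplified = received + lna_gain_db
        # Block 530: direct the amplified signal to the downlink-only modem port.
        return {"downlink_only_modem_port": amplified}

    print(pre_amplify_downlink(-92.0))  # {'downlink_only_modem_port': -77.0}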
A non-volatile memory port can also be used to provide data input/output options to a user. The non-volatile memory port can also be used to expand the memory capabilities of the wireless device. A keyboard can be integrated with the wireless device or wirelessly connected to the wireless device to provide additional user input. A virtual keyboard can also be provided using the touch screen. EXAMPLES The following examples pertain to specific technology embodiments and point out specific features, elements, or actions that can be used or otherwise combined in achieving such embodiments.Example 1 includes a system, comprising: a first donor antenna port; a second donor antenna port; a modem comprising a first modem port and a second modem port; a first signal path communicatively coupled between the first donor antenna port and the first modem port, wherein the first signal path is operable to direct a first received cellular signal; and a second signal path communicatively coupled between the second donor antenna port and the second modem port, wherein the second signal path includes a pre-amplifier operable to amplify a second received cellular signal to produce an amplified cellular signal to be directed to the second modem port.Example 2 includes the system of Example 1, wherein: the first modem port is an uplink-downlink port; and the second modem port is a downlink-only port.Example 3 includes the system of any of Examples 1 to 2, wherein: the first signal path is an uplink-downlink signal path; and the first received cellular signal is a received downlink cellular signal.Example 4 includes the system of any of Examples 1 to 3, wherein: the second signal path is a downlink signal path; the second received cellular signal is a received downlink cellular signal; and the amplified signal is an amplified downlink signal.Example 5 includes the system of any of Examples 1 to 4, wherein: the first donor antenna port is communicatively coupled to a first donor antenna; and the second donor antenna port is communicatively coupled to a second donor antenna, wherein the first donor antenna is a main donor antenna and the second donor antenna is a diversity donor antenna.Example 6 includes the system of any of Examples 1 to 5, wherein the pre-amplifier in the second signal path is a low noise amplifier (LNA).Example 7 includes the system of any of Examples 1 to 6, wherein the LNA does not necessitate a regulatory recertification of the modem.Example 8 includes the system of any of Examples 1 to 7, wherein the second signal path includes a band pass filter, wherein the band pass filter is communicatively coupled between the second modem port and the pre-amplifier.Example 9 includes the system of any of Examples 1 to 8, wherein the second signal path includes a band pass filter, wherein the band pass filter is communicatively coupled between the pre-amplifier and the second donor antenna port.Example 10 includes the system of any of Examples 1 to 9, wherein the second signal path includes a band pass filter, and further comprising a third signal path between the second modem port and the pre-amplifier that forms a switchable bypass path to bypass the bandpass filter.Example 11 includes the system of any of Examples 1 to 10, wherein the second signal path includes a variable attenuator.Example 12 includes the system of any of Examples 1 to 11, wherein the pre-amplifier is a first pre-amplifier, and wherein the second signal path includes a second pre-amplifier communicatively coupled between the second modem port
and the first pre-amplifier.Example 13 includes the system of any of Examples 1 to 12, wherein the second signal path is communicatively coupled to a first signal modification device and a second signal modification device, wherein the pre-amplifier is a first pre-amplifier communicatively coupled between the first signal modification device and the second signal modification device, and the second signal path includes a second pre-amplifier in parallel with the first pre-amplifier and communicatively coupled between the first signal modification device and the second signal modification device.Example 14 includes the system of any of Examples 1 to 13, wherein: the first signal modification device is one of: a first diplexer, a first splitter or a first circulator; and the second signal modification device is one of: a second diplexer, a second splitter or a second circulator.Example 15 includes the system of any of Examples 1 to 14, wherein: the first pre-amplifier is a high band pre-amplifier and the second pre-amplifier is a low band pre-amplifier, or vice versa.Example 16 includes the system of any of Examples 1 to 15, wherein the modem is configured to combine the first received cellular signal and the amplified cellular signal to form a combined downlink signal, and wherein the modem is configured to demodulate the combined downlink signal for output to one of an Ethernet port or a fiber optic port.Example 17 includes the system of any of Examples 1 to 16, wherein the modem acts as a cellular-to-WiFi converter configured to combine the first received cellular signal and the amplified cellular signal to form the combined downlink signal, wherein the combined downlink signal is outputted to one of the Ethernet port or the fiber optic port as a Wi-Fi signal.Example 18 includes the system of any of Examples 1 to 17, wherein the modem is a modem module that includes an amplifier, wherein the amplifier is a low noise amplifier (LNA), and the pre-amplifier is outside of the modem module and separate from the amplifier in the modem module.Example 19 includes a pre-amplification system for a modem, the pre-amplification system, comprising: an uplink-downlink signal path communicatively coupled between a first modem port of the modem and a first donor antenna port; and a downlink signal path communicatively coupled between a second modem port of the modem and a second donor antenna port, the downlink signal path including a pre-amplifier configured to amplify a received downlink cellular signal to produce an amplified downlink cellular signal to be directed to the second modem port.Example 20 includes the pre-amplification system of Example 19, wherein: the first modem port is an uplink-downlink port; and the second modem port is a downlink-only port.Example 21 includes the pre-amplification system of any of Examples 19 to 20, wherein: the first donor antenna port is communicatively coupled to a main donor antenna; and the second donor antenna port is communicatively coupled to a diversity donor antenna.Example 22 includes the pre-amplification system of any of Examples 19 to 21, wherein the pre-amplifier is a low noise amplifier (LNA).Example 23 includes the pre-amplification system of any of Examples 19 to 22, wherein the downlink signal path includes a band pass filter, wherein the band pass filter is: communicatively coupled between the second modem port and the pre-amplifier, or communicatively coupled between the pre-amplifier and the second donor antenna port.Example 24 includes the
pre-amplification system of any of Examples 19 to 23, wherein the downlink signal path includes a band pass filter, and further comprising a switchable bypass path between the second modem port and the pre-amplifier to bypass the bandpass filter.Example 25 includes the pre-amplification system of any of Examples 19 to 24, wherein the downlink signal path is communicatively coupled to a first signal modification device and a second signal modification device, wherein the pre-amplifier is a first pre-amplifier communicatively coupled between the first signal modification device and the second signal modification device, and the downlink signal path includes a second pre-amplifier in parallel with the first pre-amplifier and communicatively coupled between the first signal modification device and the second signal modification device.Example 26 includes the pre-amplification system of any of Examples 19 to 25, wherein: the first pre-amplifier is a high band pre-amplifier and the second pre-amplifier is a low band pre-amplifier, or vice versa.Example 27 includes a method for pre-amplifying downlink cellular signals for a modem, comprising: receiving a downlink cellular signal on a downlink signal path communicatively coupled between a diversity donor antenna port and a downlink-only modem port of the modem; directing the received downlink cellular signal to a pre-amplifier of the downlink signal path to produce an amplified downlink cellular signal; and directing the amplified downlink cellular signal to the downlink-only modem port.Example 28 includes the method of Example 27, wherein the diversity donor antenna port is communicatively coupled to a diversity donor antenna.Example 29 includes the method of any of Examples 27 to 28, further comprising: receiving a second downlink cellular signal on an uplink-downlink signal path communicatively coupled between a main donor antenna port and an uplink-downlink modem port of the modem; combining the second downlink cellular signal and the amplified downlink cellular signal to form a combined downlink signal; and demodulating the combined downlink signal for output to one of an Ethernet port or a fiber optic port of the modem. Various techniques, or certain aspects or portions thereof, can take the form of program code (i.e., instructions) embodied in tangible media, such as floppy diskettes, compact disc-read-only memory (CD-ROMs), hard drives, non-transitory computer readable storage medium, or any other machine-readable storage medium wherein, when the program code is loaded into and executed by a machine, such as a computer, the machine becomes an apparatus for practicing the various techniques. Circuitry can include hardware, firmware, program code, executable code, computer instructions, and/or software. A non-transitory computer readable storage medium can be a computer readable storage medium that does not include a signal. In the case of program code execution on programmable computers, the computing device can include a processor, a storage medium readable by the processor (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device. The volatile and non-volatile memory and/or storage elements can be a random-access memory (RAM), erasable programmable read only memory (EPROM), flash drive, optical drive, magnetic hard drive, solid state drive, or other medium for storing electronic data.
One or more programs that can implement or utilize the various techniques described herein can use an application programming interface (API), reusable controls, and the like. Such programs can be implemented in a high level procedural or object oriented programming language to communicate with a computer system. However, the program(s) can be implemented in assembly or machine language, if desired. In any case, the language can be a compiled or interpreted language, and combined with hardware implementations. As used herein, the term processor can include general purpose processors, specialized processors such as VLSI, FPGAs, or other types of specialized processors, as well as base band processors used in transceivers to send, receive, and process wireless communications. It should be understood that many of the functional units described in this specification have been labeled as modules, in order to more particularly emphasize their implementation independence. For example, a module can be implemented as a hardware circuit comprising custom very-large-scale integration (VLSI) circuits or gate arrays, off-the-shelf semiconductors such as logic chips, transistors, or other discrete components. A module can also be implemented in programmable hardware devices such as field programmable gate arrays, programmable array logic, programmable logic devices or the like. In one example, multiple hardware circuits or multiple processors can be used to implement the functional units described in this specification. For example, a first hardware circuit or a first processor can be used to perform processing operations and a second hardware circuit or a second processor (e.g., a transceiver or a baseband processor) can be used to communicate with other entities. The first hardware circuit and the second hardware circuit can be incorporated into a single hardware circuit, or alternatively, the first hardware circuit and the second hardware circuit can be separate hardware circuits. Modules can also be implemented in software for execution by various types of processors. An identified module of executable code can, for instance, comprise one or more physical or logical blocks of computer instructions, which can, for instance, be organized as an object, procedure, or function. Nevertheless, the executables of an identified module need not be physically located together, but can comprise disparate instructions stored in different locations which, when joined logically together, comprise the module and achieve the stated purpose for the module. Indeed, a module of executable code can be a single instruction, or many instructions, and can even be distributed over several different code segments, among different programs, and across several memory devices. Similarly, operational data can be identified and illustrated herein within modules, and can be embodied in any suitable form and organized within any suitable type of data structure. The operational data can be collected as a single data set, or can be distributed over different locations including over different storage devices, and can exist, at least partially, merely as electronic signals on a system or network. The modules can be passive or active, including agents operable to perform desired functions. Reference throughout this specification to “an example” or “exemplary” means that a particular feature, structure, or characteristic described in connection with the example is included in at least one embodiment of the present invention. 
Thus, appearances of the phrases “in an example” or the word “exemplary” in various places throughout this specification are not necessarily all referring to the same embodiment. As used herein, a plurality of items, structural elements, compositional elements, and/or materials can be presented in a common list for convenience. However, these lists should be construed as though each member of the list is individually identified as a separate and unique member. Thus, no individual member of such a list should be construed as a de facto equivalent of any other member of the same list solely based on their presentation in a common group without indications to the contrary. In addition, various embodiments and examples of the present invention can be referred to herein along with alternatives for the various components thereof. It is understood that such embodiments, examples, and alternatives are not to be construed as de facto equivalents of one another, but are to be considered as separate and autonomous representations of the present invention. Furthermore, the described features, structures, or characteristics can be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided, such as examples of layouts, distances, network examples, etc., to provide a thorough understanding of embodiments of the invention. One skilled in the relevant art will recognize, however, that the invention can be practiced without one or more of the specific details, or with other methods, components, layouts, etc. In other instances, well-known structures, materials, or operations are not shown or described in detail to avoid obscuring aspects of the invention. While the foregoing examples are illustrative of the principles of the present invention in one or more particular applications, it will be apparent to those of ordinary skill in the art that numerous modifications in form, usage and details of implementation can be made without the exercise of inventive faculty, and without departing from the principles and concepts of the invention. Accordingly, it is not intended that the invention be limited, except as by the claims set forth below.
47,411
11863288
DETAILED DESCRIPTION OF SOME EMBODIMENTS The following embodiments are examples. Although the specification may refer to “an”, “one”, or “some” embodiment(s) in several locations, this does not necessarily mean that each such reference is to the same embodiment(s), or that the feature only applies to a single embodiment. Single features of different embodiments may also be combined to provide other embodiments. Furthermore, words “comprising” and “including” should be understood as not limiting the described embodiments to consist of only those features that have been mentioned and such embodiments may contain also features/structures that have not been specifically mentioned. Embodiments and examples described herein may be implemented in any communications system comprising wireless connection(s). In the following, different exemplifying embodiments will be described using, as an example of an access architecture to which the embodiments may be applied, a radio access architecture based on new radio (NR, 5G) or long term evolution advanced (LTE Advanced, LTE-A), without restricting the embodiments to such an architecture, however. It is obvious for a person skilled in the art that the embodiments may also be applied to other kinds of communications networks having suitable means by adjusting parameters and procedures appropriately. Some examples of other options for suitable systems are the universal mobile telecommunications system (UMTS) radio access network (UTRAN or E-UTRAN), long term evolution (LTE, the same as E-UTRA), beyond 5G, wireless local area network (WLAN or WiFi), worldwide interoperability for microwave access (WiMAX), Bluetooth®, personal communications services (PCS), ZigBee®, wideband code division multiple access (WCDMA), systems using ultra-wideband (UWB) technology, sensor networks, mobile ad-hoc networks (MANETs) and Internet Protocol multimedia subsystems (IMS) or any combination thereof. FIG.1depicts examples of simplified system architectures only showing some elements and functional entities, all being logical units, whose implementation may differ from what is shown. The connections shown inFIG.1are logical connections; the actual physical connections may be different. It is apparent to a person skilled in the art that the system typically comprises also other functions and structures than those shown inFIG.1. The embodiments are not, however, restricted to the system given as an example but a person skilled in the art may apply the solution to other communication systems provided with necessary properties. The example ofFIG.1shows a part of an exemplifying radio access network. FIG.1shows user devices101and101′ configured to be in a wireless connection on one or more communication channels in a cell with an access node (such as (e/g)NodeB)102providing the cell. The physical link from a user device to a (e/g)NodeB is called uplink or reverse link and the physical link from the (e/g)NodeB to the user device is called downlink or forward link. It should be appreciated that (e/g)NodeBs or their functionalities may be implemented by using any node, host, server or access point (AP) etc. entity suitable for such usage. A communications system100typically comprises more than one (e/g)NodeB in which case the (e/g)NodeBs may also be configured to communicate with one another over links, wired or wireless, designed for the purpose. These links may be used for signalling purposes.
The (e/g)NodeB is a computing device configured to control the radio resources of the communication system it is coupled to. The NodeB may also be referred to as a base station, an access point or any other type of interfacing device including a relay station capable of operating in a wireless environment. The (e/g)NodeB includes or is coupled to transceivers. From the transceivers of the (e/g)NodeB, a connection is provided to an antenna unit that establishes bi-directional radio links to user devices. The antenna unit may comprise a plurality of antennas or antenna elements. The (e/g)NodeB is further connected to core network105(CN or next generation core NGC). Depending on the system, the counterpart on the CN side can be a serving gateway (S-GW, routing and forwarding user data packets), packet data network gateway (P-GW), for providing connectivity of user devices (UEs) to external packet data networks, or mobile management entity (MME), etc. The user device (also called UE, user equipment, user terminal, terminal device, etc.) illustrates one type of an apparatus to which resources on the air interface are allocated and assigned, and thus any feature described herein with a user device may be implemented with a corresponding apparatus, such as a relay node. An example of such a relay node is a layer 3 relay (self-backhauling relay) towards the base station. The user device typically refers to a portable computing device that includes wireless mobile communication devices operating with or without a subscriber identification module (SIM), including, but not limited to, the following types of wireless devices: a mobile station (mobile phone), smartphone, personal digital assistant (PDA), handset, device using a wireless modem (alarm or measurement device, etc.), laptop and/or touch screen computer, tablet, game console, notebook, and multimedia device. It should be appreciated that a user device may also be a nearly exclusive uplink only device, of which an example is a camera or video camera loading images or video clips to a network. A user device may also be a device having the capability to operate in an Internet of Things (IoT) network which is a scenario in which objects are provided with the ability to transfer data over a network without requiring human-to-human or human-to-computer interaction. The user device may also utilise the cloud. In some applications, a user device may comprise a small portable device with radio parts (such as a watch, earphones or eyeglasses) and the computation is carried out in the cloud. The user device (or in some embodiments a relay node, such as a mobile termination (MT) part of the integrated access and backhaul (IAB) Node), is configured to perform one or more of user equipment functionalities. The user device may also be called a subscriber unit, mobile station, remote terminal, access terminal, user terminal or user equipment (UE) just to mention but a few names or apparatuses. Various techniques described herein may also be applied to a cyber-physical system (CPS) (a system of collaborating computational elements controlling physical entities). CPS may enable the implementation and exploitation of massive amounts of interconnected ICT devices (sensors, actuators, processors, microcontrollers, etc.) embedded in physical objects at different locations. Mobile cyber physical systems, in which the physical system in question has inherent mobility, are a subcategory of cyber-physical systems.
Examples of mobile cyber physical systems include mobile robotics and electronics transported by humans or animals. Additionally, although the apparatuses have been depicted as single entities, different units, processors and/or memory units (not all shown inFIG.1) may be implemented. 5G enables using multiple input—multiple output (MIMO) antennas, many more base stations or nodes or corresponding network devices than the LTE (a so-called small cell concept), including macro sites operating in co-operation with smaller stations and employing a variety of radio technologies depending on service needs, use cases and/or spectrum available. 5G mobile communications support a wide range of use cases and related applications including video streaming, augmented reality, different ways of data sharing and various forms of machine type applications (such as (massive) machine-type communications (mMTC)), including vehicular safety, different sensors and real-time control. 5G is expected to have multiple radio interfaces, namely below 6 GHz, cmWave and mmWave, and can also be integrated with existing legacy radio access technologies, such as the LTE. Integration with the LTE may be implemented, at least in the early phase, as a system, where macro coverage is provided by the LTE and 5G radio interface access comes from small cells by aggregation to the LTE. In other words, 5G is planned to support both inter-RAT operability (such as LTE-5G) and inter-RI operability (inter-radio interface operability, such as below 6 GHz-cmWave, below 6 GHz-cmWave-mmWave). One of the concepts considered to be used in 5G networks is network slicing in which multiple independent and dedicated virtual sub-networks (network instances) may be created within the same infrastructure to run services that have different requirements on latency, reliability, throughput and mobility. The current architecture in LTE networks is fully distributed in the radio and fully centralized in the core network. The low latency applications and services in 5G require bringing the content close to the radio, which leads to local break out and multi-access edge computing (MEC). 5G enables analytics and knowledge generation to occur at the source of the data. This approach requires leveraging resources that may not be continuously connected to a network such as laptops, smartphones, tablets and sensors. MEC provides a distributed computing environment for application and service hosting. It also has the ability to store and process content in close proximity to cellular subscribers for faster response time. Edge computing covers a wide range of technologies such as wireless sensor networks, mobile data acquisition, mobile signature analysis, cooperative distributed peer-to-peer ad hoc networking and processing also classifiable as local cloud/fog computing and grid/mesh computing, dew computing, mobile edge computing, cloudlet, distributed data storage and retrieval, autonomic self-healing networks, remote cloud services, augmented and virtual reality, data caching, Internet of Things (massive connectivity and/or latency critical), critical communications (autonomous vehicles, traffic safety, real-time analytics, time-critical control, healthcare applications). The communication system is also able to communicate with other networks, such as a public switched telephone network or the Internet106, or utilise services provided by them.
The communication network may also be able to support the usage of cloud services, for example at least part of core network operations may be carried out as a cloud service (this is depicted inFIG.1by “cloud”107). The communication system may also comprise a central control entity, or the like, providing facilities for networks of different operators to cooperate for example in spectrum sharing. Edge cloud may be brought into radio access network (RAN) by utilizing network function virtualization (NFV) and software defined networking (SDN). Using edge cloud may mean that access node operations are carried out, at least partly, in a server, host or node operationally coupled to a remote radio head or base station comprising radio parts. It is also possible that node operations will be distributed among a plurality of servers, nodes or hosts. Application of cloud RAN architecture enables RAN real time functions being carried out at the RAN side (in a distributed unit, DU102) and non-real time functions being carried out in a centralized manner (in a centralized unit, CU104). It should also be understood that the distribution of labour between core network operations and base station operations may differ from that of the LTE or even be non-existent. Some other technology advancements probably to be used are Big Data and all-IP, which may change the way networks are being constructed and managed. 5G (or new radio, NR) networks are being designed to support multiple hierarchies, where MEC servers can be placed between the core and the base station or nodeB (gNB). It should be appreciated that MEC can be applied in 4G networks as well. 5G may also utilize satellite communication to enhance or complement the coverage of 5G service, for example by providing backhauling. Possible use cases are providing service continuity for machine-to-machine (M2M) or Internet of Things (IoT) devices or for passengers on board vehicles, or ensuring service availability for critical communications, and future railway/maritime/aeronautical communications. Satellite communication may utilise geostationary earth orbit (GEO) satellite systems, but also low earth orbit (LEO) satellite systems, in particular mega-constellations (systems in which hundreds of (nano)satellites are deployed). Each satellite103in the mega-constellation may cover several satellite-enabled network entities that create on-ground cells. The on-ground cells may be created through an on-ground relay node102or by a gNB located on-ground or in a satellite. It is obvious for a person skilled in the art that the depicted system is only an example of a part of a radio access system and in practice, the system may comprise a plurality of (e/g)NodeBs, the user device may have an access to a plurality of radio cells and the system may comprise also other apparatuses, such as relay nodes, for example distributed unit (DU) parts of one or more IAB nodes, or other network elements, etc. At least one of the (e/g)NodeBs may be a Home(e/g)nodeB or a donor (e/g) NodeB. Additionally, in a geographical area of a radio communication system a plurality of different kinds of radio cells as well as a plurality of radio cells may be provided. Radio cells may be macro cells (or umbrella cells) which are large cells, usually having a diameter of up to tens of kilometers, or smaller cells such as micro-, femto- or picocells. The (e/g)NodeBs ofFIG.1may provide any kind of these cells.
A cellular radio system may be implemented as a multilayer network including several kinds of cells. Typically, in multilayer networks, one access node provides one kind of a cell or cells, and thus a plurality of (e/g)NodeBs are required to provide such a network structure. For fulfilling the need for improving the deployment and performance of communication systems, the concept of “plug-and-play” (e/g)NodeBs has been introduced. Typically, a network which is able to use “plug-and-play” (e/g)NodeBs includes, in addition to donor/Home (e/g)NodeBs (H(e/g)nodeBs), a home/donor node B gateway, or HNB-GW (not shown inFIG.1). An HNB Gateway (HNB-GW), which is typically installed within an operator's network, may aggregate traffic from a large number of HNBs back to a core network. FIG.2illustrates a simplified example of a wireless system200, which is configured to provide integrated access and backhaul via relay nodes (relay apparatuses), which may also be called IAB nodes. In the integrated access and backhaul, the backhaul can be carried over multiple hops from one relay node (IAB node) to another relay node (IAB node) until the last relay node serving an access user device. A node (apparatus) wherefrom a relay node receives a transmission is called a parent node and a node whereto the transmission is relayed is called a child node. It should be appreciated thatFIG.2only shows some apparatuses to illustrate the integrated access and backhaul system. It is apparent to a person skilled in the art that the systems also comprise other equipment, functional entities and apparatuses. Referring toFIG.2, the illustrated system200comprises a network207to which an access node204has a fixed connection (not shown inFIG.2), two relay nodes202,202A and two user devices201A,201B. It should be appreciated that each of the nodes204,202,202A may serve user devices. Below, the relay nodes are called IAB nodes, without limiting the solutions to IAB nodes. It should be appreciated that the term “IAB node” covers any relaying apparatus configured/configurable to perform a corresponding functionality. The access node204having the fixed connection to the network207may be called a donor node. The access node204(the donor node) may host the centralized unit (not illustrated inFIG.2) for the IAB nodes202,202A, and a distributed unit. The centralized unit may be configured to run radio resource control (RRC), higher layer 2 (L2) protocols, such as a packet data convergence protocol (PDCP) sublayer above a radio link control (RLC) sublayer, and control functions for the subtending IAB topology. The distributed unit in the donor node may be configured to run lower layer 2 protocol layers, which are the radio link control sublayer and a medium access control sublayer below the radio link control sublayer, and a physical layer (PHY) below layer 2. An IAB node202,202A may comprise a distributed unit (DU)202-2(illustrated inFIG.2in the node202) and a mobile termination unit (MT)202-1(illustrated inFIG.2in the node202). The distributed unit (DU)202-2in the IAB node is similar to the distributed unit in the donor node. In other words, the distributed unit (DU)202-2may be configured to run lower layer 2 protocol layers, which are the radio link control sublayer and a medium access control sublayer below the radio link control sublayer, and a physical layer (PHY) below layer 2. The mobile termination unit (MT)202-1is configured to act towards a distributed unit of the parent node. 
Hence, the mobile termination unit (MT)202-1may be configured to run lower layer 2 protocol layers, which are the radio link control sublayer and a medium access control sublayer below the radio link control sublayer, and a physical layer (PHY) below layer 2. Basically, the mobile termination unit (MT)202-1transmits/receives transmissions to/from a parent node, and the distributed unit transmits/receives transmissions to/from child IAB nodes and access user devices (served user devices). The centralized unit may have two control interfaces (not illustrated inFIG.2) to the relay nodes (IAB nodes), one to the mobile termination unit and another to the distributed unit. In the illustrated example ofFIG.2, the IAB node202has as its parent node the access node (donor node)204, and as its children the access user devices201A,201B and the IAB node202A, which may be called a child node. The IAB node202A has the IAB node202as its parent node, but no children are illustrated inFIG.2. The communication link210provides a downlink parent backhaul and uplink parent backhaul to the IAB node202. For example, the downlink parent backhaul link in the link210will carry both physical downlink control channel (PDCCH) and physical downlink shared channel (PDSCH) transmissions towards the mobile termination unit202-1in the IAB node202. Correspondingly, the uplink parent backhaul link in the link210will carry both physical uplink control channel (PUCCH) and physical uplink shared channel (PUSCH) transmissions from the mobile termination unit202-1in the IAB node202. The communication link220provides downlink child backhaul and uplink child backhaul to the IAB node202. The communication links230,240provide uplink access and downlink access. The communication links are supported in different time resources. In the examples it is assumed, for the sake of clarity, that the communication link210and the communication links220,230,240are orthogonal in time resources. For example, a first slot may be for the communication link210and the next slot (second slot) for one of the communication links220,230,240, or for two or more of the communication links220,230,240if multiplexing in time is used. It should be appreciated that if MIMO antennas and/or spatial division multiplexing (SDM) based schemes are applied, the links may not be orthogonal in the time resources. Herein the term “child apparatus” is used as a synonym for an access user device and the child IAB node (relay node, relaying apparatus), i.e. to cover both types of children, and the term “parent apparatus” as a synonym for the parent node. Below, different examples are described assuming that two different ways of relaying are used, without limiting the examples to such a solution. The relaying ways are referred to below using the term “type of relaying”, a synonym for the type being a mode (relaying mode, mode of relaying). It should be appreciated that the below illustrated principles can be implemented when three or more different types of relaying are used, the implementation being a straightforward task for one skilled in the art. Further, for the sake of clarity, in the below examples it is assumed that there is one transmission per child apparatus, without limiting the examples to such a solution. It should be appreciated that there may be several transmissions to one child apparatus, each of which may be treated as one transmission per one child apparatus. 
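The time-orthogonal slot arrangement described above, with the parent backhaul link210in one slot and the child links220,230,240multiplexed in the next, could be sketched as a simple alternating allocation. The slot indices and link labels below are purely illustrative assumptions of this sketch.

```python
def allocate_slots(num_slots: int) -> dict:
    """Alternate slots between the parent backhaul link and the child links,
    multiplexing the child links in time within their shared slots."""
    allocation = {}
    for slot in range(num_slots):
        if slot % 2 == 0:
            allocation[slot] = ["link 210"]                          # parent backhaul
        else:
            allocation[slot] = ["link 220", "link 230", "link 240"]  # child links
    return allocation

print(allocate_slots(4))
# {0: ['link 210'], 1: ['link 220', 'link 230', 'link 240'], 2: ['link 210'], ...}
```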
An IAB node (relay node), below simply node, or any corresponding apparatus, may be configured to implement one or more of the functionalities described withFIGS.3to8, and a donor node, or any corresponding apparatus, may be configured to implement the functionality described withFIG.8. It should be appreciated that the relaying node may implement also upper layer protocols, and corresponding functionality, when relaying transmissions, even though in the system example inFIG.2, described above, it is assumed that only lower layer protocols and functionality are implemented in the IAB node. Referring toFIG.3, the node is using (block300) at least a first type of relaying and a second type of relaying to support latency requirements, which may differ, for a set of child apparatuses using the same shared resources. The different types of relaying differ at least in the time required by the node to process received transmissions before forwarding the transmissions. The types may also require different processing capacity. In the below examples it is assumed that the first type requires more time and more processing capacity than the second type. The node determines in block301for a child apparatus, which is in a connected state, a type of relaying to use. The type may be determined using one or more of the following: load of processing capability in the IAB node, latency requirements for data transmissions, and feedback information on a channel. The feedback information on the channel may be information on long term statistics of channel quality, long term statistics of success rate of transmissions, for example statistics relating to hybrid automatic repeat request (HARQ), and short term information, for example one or more latest channel quality feedbacks. Different examples of determining, and re-determining, the type of relaying are described in more detail below withFIGS.5and6. It should be appreciated that any other principles to determine the type of relaying may be used. The node schedules in block302transmissions to/from children according to the determined type. For example, transmissions to child apparatuses having the first type as the determined type may be scheduled in the time domain of a slot to the end part of the slot. Correspondingly, transmissions to child apparatuses having the second type as the determined type may be scheduled in the time domain of the slot to the beginning part of the slot. The node also processes in block303transmissions to/from a child apparatus according to the determined type of relaying of the child apparatus, the processing including relaying (forwarding, transmitting) the transmission using the resources allocated during scheduling. Referring toFIG.4, the process starts as described above withFIG.3, with blocks400to403corresponding to blocks300to303, respectively, and they are therefore not described again in detail herein. The example inFIG.4differs from the example illustrated inFIG.3in that the IAB node transmits in block404in its feedback to its parent apparatus information indicating the scheduling the IAB node is using. In other words, existing feedback mechanisms are enhanced to contain said information. The information may be a scheduling suggestion, for example a request to send transmissions to child apparatuses that are determined to be the first type in the beginning part of a slot, and transmissions to child apparatuses that are determined to be the second type in the end part of the slot. 
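As a concrete illustration of blocks301to303described above, the following minimal sketch shows one way the per-child type determination and the slot placement rule could look. All thresholds, field names and helper functions here are illustrative assumptions of this sketch, not taken from the description.

```python
from dataclasses import dataclass

FIRST_TYPE = "first"    # more processing time and capacity required
SECOND_TYPE = "second"  # less processing time and capacity required

@dataclass
class ChildState:
    latency_budget_ms: float   # latency requirement of the child's traffic
    channel_quality: float     # long term channel quality statistic, 0..1
    harq_success_rate: float   # long term statistics of transmission success

def determine_type(child: ChildState, node_load: float) -> str:
    """Blocks 301/401: pick a type from load, latency needs and channel feedback."""
    if node_load > 0.9:                 # heavily loaded node: lighter second type
        return SECOND_TYPE
    if child.latency_budget_ms < 2.0:   # tight budget cannot absorb first-type delay
        return SECOND_TYPE
    if child.channel_quality < 0.5 or child.harq_success_rate < 0.9:
        return FIRST_TYPE               # poor channel: heavier but robust first type
    return SECOND_TYPE

def order_within_slot(types_per_child: dict) -> list:
    """Block 302: second-type children in the beginning part, first-type at the end."""
    early = [c for c, t in types_per_child.items() if t == SECOND_TYPE]
    late = [c for c, t in types_per_child.items() if t == FIRST_TYPE]
    return early + late

# Example: CA2 ends up in the beginning part, CA1 and CA3 in the end part.
children = {
    "CA1": determine_type(ChildState(10.0, 0.4, 0.85), node_load=0.5),
    "CA2": determine_type(ChildState(1.0, 0.9, 0.99), node_load=0.5),
    "CA3": determine_type(ChildState(10.0, 0.3, 0.80), node_load=0.5),
}
print(order_within_slot(children))  # ['CA2', 'CA1', 'CA3']
```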
The scheduling suggestion may be indicated, for example, by sending information on the child apparatuses according to a preferred order, or information indicating that these child apparatuses are requested to be scheduled to the beginning part/end part. Another example includes indicating the IAB node's capability of supporting the first type/second type of relaying per child apparatus. For example, one of the types, or both, may require special hardware to be able to support the type of relaying. Further, not every child apparatus, for example, supports both types of relaying, or, due to channel conditions, only the first type can be used. Still further examples include the feedback information comprising feedback of child apparatuses on channel quality, long term statistics of channel quality and/or long term statistics of success rate of transmissions of the type of relaying used per child apparatus, and performance and latency experienced (with the used type of relaying) in transmissions towards child apparatuses. For example, if the channel quality is poor whilst the latency is critical for a child apparatus, it indicates that the child apparatus should be scheduled to the beginning part. In another example, it may be that for all child apparatuses at first the first type of relaying is used, but if the backhaul link gets congested, the channel qualities of child apparatuses may be used to determine child apparatuses for the second type of relaying. Naturally the feedback (feedback information) may comprise any combination of the above examples to provide the parent apparatus with information indicating directly or indirectly the scheduling the IAB node uses. In other words, the purpose is to indicate, directly or indirectly, to the parent apparatus for which child apparatuses the first type of relaying should be used and for which child apparatuses the second type of relaying can be used. The parent apparatus may take the feedback information into account when the parent apparatus schedules transmissions. FIG.5illustrates an example of how the IAB node may be configured to perform determining and re-determining the type of relaying, for example when performing block301/401(and re-performing block302/402). Referring toFIG.5, the node is using (block500) at least the first type of relaying and the second type of relaying. In the illustrated example, as a preliminary measure, a ratio between the two types is determined in block501. The ratio may be determined, for example, using the following: assuming that the processing capacity required by a transmission using the first type is cx, the processing capacity required by a transmission using the second type is cy, and the total capacity is c, the following equation has to be fulfilled: c = N*cx + M*cy, wherein N and M are integers having values starting from 0, N being the number of child apparatuses for which the first type is used and M the number of child apparatuses for which the second type is used. Further, the processing capacity (performance) of the IAB node may be taken into account when the ratio is determined. The processing capacity may limit the number of child apparatuses having the first type of relaying to a maximum number of N. The determined ratio may be used, together with the other information, to determine the type to be used in block301/401. Regardless of whether or not the ratio is determined, i.e. 
whether block501is performed or not, once the type for a child apparatus is determined and the transmission scheduled, the IAB node receives in block502feedback on transmissions from child apparatuses and monitors in block503the performance of the IAB node. The performance may be monitored by monitoring the load of the processing capability in the IAB node and/or whether transmissions are forwarded at the first possible scheduled resource or in the next possible resource. The feedback on transmissions may indicate how latency requirements for data transmissions are fulfilled. The feedback may also (in addition to, or instead of, depending on implementation) indicate long term and/or short term statistics of success rate of transmissions, for example statistics relating to hybrid automatic repeat request (HARQ). If there are no problems with the performance (block504: yes), i.e. there is no overload or overflow, and the required quality is met (block505: yes), the process continues with receiving feedback and monitoring performance. If there is a problem/problems with the performance (block504: no), for example there is an overload of capacity and/or overflow of transmissions, the types to be used for relaying are re-determined in block506for child apparatuses, and transmissions rescheduled in block507to reflect the re-determined types. Then the process continues with receiving feedback and monitoring performance. The re-determining of types of relaying may comprise changing the type of the one child apparatus with the best feedback and/or with the latency requirement allowing the longest latency, for example from the first type to the second type. If the quality is not met (block505: no), for example the latency requirement is not fulfilled, the process may re-determine in block506the type of relaying, based on the feedback information indicating the cause, and transmissions are rescheduled in block507to reflect the re-determined type. For example, if the type is the first type and the feedback information indicates that latency requirements are not met, the type may be re-determined to be the second type. In another example, if the type is the second type, and the feedback information indicates that retransmissions occur, the type may be re-determined to be the first type. However, to avoid ping-pong between types, if the type has been the first type, has been re-determined to the second type, and would once again be re-determined to the first type, the re-determining is skipped the next time. In such situations, and in situations in which re-determining the type actually results in the same type, transmissions are rescheduled in block507to reflect the result. For example, a transmission may be scheduled to happen later (closer to the end of the slot), if possible. Then the process continues with receiving feedback and monitoring performance. FIG.6illustrates another example of how the IAB node may be configured to perform determining and re-determining the type of relaying at certain time intervals. For example, the IAB node may be configured initially, as a default, to determine the type to be used to be the second type, or the process is used when, for example, performing block301/401(and re-performing block302/402) results in the type of relaying being the second type. Referring toFIG.6, the type to be used has been determined in block601to be the second type, and receiving in block602feedback from child apparatuses is continued until time t1 has lapsed. 
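Before continuing with the FIG.6 timing, the capacity relation c = N*cx + M*cy of block501and the problem-driven re-determination of blocks504to507could be sketched as follows. The inequality form of the capacity check and the history-based ping-pong guard are assumptions of this sketch rather than the description's exact procedure.

```python
FIRST_TYPE, SECOND_TYPE = "first", "second"   # as in the sketch above

def feasible_splits(c: float, cx: float, cy: float, n_max: int) -> list:
    """Block 501: enumerate (N, M) pairs with N*cx + M*cy <= c and N <= n_max.

    N counts first-type children, M second-type children; n_max is the cap
    that the node's processing capacity puts on first-type children.
    """
    splits = []
    n = 0
    while n * cx <= c and n <= n_max:
        m = int((c - n * cx) // cy)   # capacity left over for second-type children
        splits.append((n, m))
        n += 1
    return splits

def redetermine(current: str, performance_ok: bool, quality_ok: bool,
                history: list) -> str:
    """Blocks 504 to 507: flip the type on problems, avoiding type ping-pong."""
    if performance_ok and quality_ok:
        return current                    # blocks 504/505: nothing to change
    if history[-3:] == [FIRST_TYPE, SECOND_TYPE, FIRST_TYPE]:
        return current                    # ping-pong guard: skip this re-determination
    flipped = SECOND_TYPE if current == FIRST_TYPE else FIRST_TYPE
    history.append(flipped)
    return flipped

# With cx=4, cy=1, total capacity c=10 and at most two first-type children:
print(feasible_splits(10, 4, 1, 2))   # [(0, 10), (1, 6), (2, 2)]
```

Returning toFIG.6and the time t1: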
The time t1 may be a short time, especially if, when the determination of the type is made, there are no long term statistics on feedback available. After the time t1 has lapsed, it is checked in block603whether the quality is met, based on the received feedback, as described above. If the quality is met (block604: yes), the process continues in block605to receive feedback until time t2 has lapsed (block606). Depending on the implementation, time t2 may be longer than time t1, or they may be the same. When the time t2 has lapsed (block606: yes), the process returns to block603to check whether the quality is met. If the quality is not met (block603: no), in the illustrated example the type is re-determined in block607to be the first type, transmissions are rescheduled in block608accordingly, and the process continues to block605to receive feedback. Naturally, in block607, performance may be taken into account, as described withFIG.5. It should be appreciated that time t2 may increase gradually each time it is detected that the quality is met. With the times t1 and t2, the dynamicity of the scheduling may be adjusted. For example, the more information the IAB node uses to determine the type of relaying, such as capability information of the child apparatus, long term statistics on previous transmissions and the required latency and performance requirements, the longer the times t1 and/or t2 may be. However, a shorter time interval provides a more dynamic decision, also better taking into account possible changes in the channel quality. In a further example, if the determined type of relaying is the first type, for example because the first type is used as a default type, the example ofFIG.6may be modified to check in block604whether there are problems with the performance and, if not, continue to block605, otherwise continue to block607to re-determine the type to be the second type, if possible, as described above withFIG.5. The IAB node may be configured to implement re-determining based on both examples ofFIG.5andFIG.6, or based on one of them, and/or to implement re-determining using other principles. Further, it should be appreciated that in some implementations the IAB node may not perform any re-determining of types. FIG.7illustrates an example of a processing functionality of transmissions in an IAB node when the IAB node relays transmissions to the child apparatuses. In the example it is assumed that the first type of relaying is decode and forward (DF) and the second type of relaying is amplify and forward (AF). It should be appreciated that they are used only for illustrative purposes, and similar principles may be applied to other types of relaying. Further, the principles disclosed may also be used for transmissions from the child apparatuses to the parent apparatus. Also, it should be noted that the amplify and forward relaying process takes place in the physical layer, whereas the decode and forward relaying process may take place, in addition to the physical layer, also in the medium access control sublayer, and possibly also in the radio link control sublayer. Referring toFIG.7, the IAB node is configured (block700) to support both the decode and forward type of relaying and the amplify and forward type of relaying. When transmissions to child apparatuses are received in block701in one slot, the transmissions are processed in block702starting in the time domain at the beginning of the slot. For a transmission, the type of relaying determined for the target child apparatus is identified in block703. 
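The timer-driven loop ofFIG.6described above could look like the following generator sketch. The feedback and quality callables, the doubling of the interval while quality holds and the cap on its growth are all assumptions of this sketch.

```python
import time

FIRST_TYPE, SECOND_TYPE = "first", "second"

def timer_based_redetermination(get_feedback, quality_met, t1: float, t2: float):
    """FIG.6-style loop: default to the second type, escalate when quality fails.

    get_feedback() collects child feedback and quality_met(feedback) evaluates
    it; both are placeholders supplied by the caller.
    """
    relay_type = SECOND_TYPE          # block 601: second type as the default
    interval = t1                     # a short first interval: little history yet
    while True:
        time.sleep(interval)          # blocks 602/605/606: wait out the interval
        if quality_met(get_feedback()):           # blocks 603/604
            interval = min(interval * 2, 8 * t2)  # t2 may grow while quality holds
        else:
            relay_type = FIRST_TYPE   # block 607: re-determine to the first type
            interval = t2             # block 608: reschedule, then keep monitoring
        yield relay_type
```

A caller would iterate over the generator and reschedule transmissions whenever the yielded type changes. Returning to the processing example ofFIG.7and the type identified in block703: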
If the type is not the decode and forward (block704: no), the transmission is amplified in block705and transmitted in block706in the beginning part in the time domain of the transmission slot (as scheduled). If the type is the decode and forward (block704: yes), the transmission is decoded in block707to data to be relayed, and the data to be relayed is then re-encoded in block707to a transmission, and transmitted in block708in the end part in the time domain of the transmission slot (as scheduled). Combining any of the re-determining of the type to be used with the example ofFIG.7, it is possible to take into account that when the type “amplify and forward” is used, also noise is amplified, which may in some channel conditions decrease the quality too much. Then the change of the type to “decode and forward” may mitigate the noise enhancement in the IAB node. Further, it is possible to take into account that when the type “decode and forward” is used, the processing time may be too long to fulfill the latency requirements, and the change of the type to “amplify and forward” may help to achieve the latency requirements. FIG.8illustrates an example of a functionality of the IAB node in implementations in which child apparatuses (child IAB nodes) are configured to send to the parent apparatus, in the feedback, the information indicating the scheduling the child IAB node is using. Corresponding functionality may be performed by the donor node as well. Referring toFIG.8, when information indicating the scheduling the child IAB node is using is received in block801, scheduling of transmissions is updated in block802. It should be appreciated that when scheduling of transmissions is updated, also other factors than the indicated scheduling, such as retransmission attempts, may be taken into account. In any case, the parent apparatus remains in control of its traffic arrangements (scheduling). FIGS.9and10illustrate different examples of scheduling arrangements, wherein the slot N is used by the parent to transmit transmissions to the IAB node, and the slot N+1 is used by the IAB node to relay the transmissions. In the examples it is assumed that the IAB node has determined the following types for the child apparatuses: the first type for CA1, the second type for CA2 and the first type for CA3, and transmissions are scheduled so that the transmission to CA2 is scheduled to take place in the beginning of the slot, and the transmissions to CA1 and CA3 in the end part. Further, the processing time required by the first type is t0, and the processing time required by the second type is t1 (or less). Referring toFIG.9, in the illustrated example it is assumed that the IAB node has sent as a scheduling suggestion to the parent node information indicating “transmit to CA1, CA3 in the beginning part of the slot and to CA2 in the end part”, and that the parent apparatus followed the suggestion. As can be seen fromFIG.9, processing of the transmissions begins when they are received, and, thanks to the scheduling taking into account the types, the placement of the transmission to CA2 leaves enough time (time t2 is longer than time t0) to process the transmissions to CA1 and CA3 so that they can be sent in the first possible slot. Hence, sending the scheduling suggestion and following it by each node along the hops makes it possible to keep the end-to-end prioritization of low latency child apparatuses (access user devices). 
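The branch of blocks704to708and the slot placement it produces could be sketched as follows. The byte-level decode, re-encode and amplify stubs stand in for the actual PHY/MAC processing and are assumptions of this sketch.

```python
DF, AF = "decode-and-forward", "amplify-and-forward"

def amplify(tx: bytes) -> bytes:
    return tx      # placeholder for PHY-layer amplification (block 705)

def decode(tx: bytes) -> bytes:
    return tx      # placeholder for decoding to the data to be relayed (block 707)

def re_encode(data: bytes) -> bytes:
    return data    # placeholder for re-encoding into a new transmission (block 707)

def process_slot(received: dict, type_of: dict) -> list:
    """Blocks 702 to 708: AF transmissions are placed in the beginning part of
    the relay slot, DF transmissions in the end part, as scheduled."""
    beginning_part, end_part = [], []
    for child, tx in received.items():
        if type_of[child] == DF:
            end_part.append((child, re_encode(decode(tx))))     # blocks 707/708
        else:
            beginning_part.append((child, amplify(tx)))         # blocks 705/706
    return beginning_part + end_part

types = {"CA1": DF, "CA2": AF, "CA3": DF}
received = {"CA1": b"a", "CA2": b"b", "CA3": b"c"}
print([child for child, _ in process_slot(received, types)])   # ['CA2', 'CA1', 'CA3']
```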
In the example ofFIG.10, it may be that the IAB node has not sent any scheduling suggestion to the parent apparatus or the parent apparatus is not following it. Processing of the transmissions begins when they are received, and, thanks to the scheduling in the IAB node taking into account the types, transmitting the transmission to CA2 in the beginning part gives enough time (time t2 is longer than time t0) to process the transmission to CA1. Since the transmission to CA3 is received later than the transmission to CA2, no additional time is provided by the scheduling arrangement to the transmission to CA3, and the transmission will take place in the next slot for transmissions from the IAB node. However, had the IAB node followed the same scheduling as used by the parent apparatus, the transmission to CA1 would also have taken place in the next slot. In other words, even this solution provides better support for different latency requirements. As is evident from the above examples, the IAB node's processing capability can be efficiently used for latency reductions, and to facilitate end-to-end data transmission latency requirements. Further, good channel conditions may be taken into account by using transmissions of the second type of relaying, whenever appropriate. Further, the use of the second type of relaying, when the quality is good enough, results in energy savings in the IAB node (less processing capacity used means less energy is used). The blocks, related functions, and information exchanges described above by means ofFIGS.2to10are in no absolute chronological order, and some of them may be performed simultaneously or in an order differing from the given one. Other functions can also be executed between them or within them, and other information may be transmitted, and/or other rules applied. Some of the blocks or part of the blocks or one or more pieces of information can also be left out or replaced by a corresponding block or part of the block or one or more pieces of information. FIG.11illustrates an apparatus comprising a communication controller1110such as at least one processor or processing circuitry, and at least one memory1120including a computer program code (software, algorithm) ALG.1121, wherein the at least one memory and the computer program code (software, algorithm) are configured, with the at least one processor, to cause the apparatus to carry out any one of the embodiments, examples and implementations described above.FIG.11illustrates an apparatus configured to act as a relaying apparatus, such as the IAB node. The apparatus ofFIG.11may be an electronic device. Referring toFIG.11, the memory1120may be implemented using any suitable data storage technology, such as semiconductor based memory devices, flash memory, magnetic memory devices and systems, optical memory devices and systems, fixed memory and removable memory. The memory may comprise a configuration storage CONF.1121, such as a configuration database, for at least storing one or more configurations and/or corresponding parameters/parameter values, or corresponding information, required for scheduling transmissions, including determining types of relaying to be used, and the determined types to use. The memory1120may further store a data buffer for data waiting to be processed (including relaying). Referring toFIG.11, the apparatus1100may further comprise a communication interface1130comprising hardware and/or software for realizing communication connectivity according to one or more radio communication protocols. 
The communication interface1130may provide the apparatus with radio communication capabilities with one or more child apparatuses (child nodes, access user devices) served by the apparatus and with one or more donor apparatuses of a wireless network. The communication interface may comprise standard well-known analog radio components such as an amplifier, filter, frequency-converter and circuitries, conversion circuitries transforming signals between analog and digital domains, and one or more antennas. Digital signal processing regarding transmission and/or reception of signals may be performed in a communication controller1110. The communication controller1110comprises a mobile termination unit (MT)1111and a distributed unit (DU)1112configured to determine types of relaying, perform scheduling, and relay transmissions according to any one of the embodiments/examples/implementations described above. The communication controller1110further comprises an enhanced scheduler unit (e-scheduler)1113configured to control co-operation of the mobile termination unit and the distributed unit to provide at least the first type of relaying and the second type of relaying in a coordinated way. Upon receiving a resource request for data to/from the user device, a resource allocating circuitry RES1112may be triggered. The radio controller circuitry1111may communicate the reserved resources to the user device through the communication interface1130. The apparatus1100may further comprise an application processor (not illustrated inFIG.11) executing one or more computer program applications that generate a need to transmit and/or receive data. The application processor may execute computer programs forming the primary function of the apparatus. For example, if the apparatus is a sensor device, the application processor may execute one or more signal processing applications processing measurement data acquired from one or more sensor heads. If the apparatus is a computer system of a vehicle, the application processor may execute a media application and/or an autonomous driving and navigation application. In an embodiment, at least some of the functionalities of the apparatus ofFIG.11may be shared between two physically separate devices, forming one operational entity. Therefore, the apparatus may be seen to depict the operational entity comprising one or more physically separate devices for executing at least some of the processes described with respect to the relaying apparatus. As used in this application, the term ‘circuitry’ refers to all of the following: (a) hardware-only circuit implementations, such as implementations in only analog and/or digital circuitry, and (b) combinations of circuits and software (and/or firmware), such as (as applicable): (i) a combination of processor(s) or (ii) portions of processor(s)/software including digital signal processor(s), software, and memory(ies) that work together to cause an apparatus to perform various functions, and (c) circuits, such as a microprocessor(s) or a portion of a microprocessor(s), that require software or firmware for operation, even if the software or firmware is not physically present. This definition of ‘circuitry’ applies to all uses of this term in this application. As a further example, as used in this application, the term ‘circuitry’ would also cover an implementation of merely a processor (or multiple processors) or a portion of a processor and its (or their) accompanying software and/or firmware. 
The term ‘circuitry’ would also cover, for example and if applicable to the particular element, a baseband integrated circuit or applications processor integrated circuit for a mobile phone or a similar integrated circuit in a server, a cellular network device, or another network device. In an embodiment, at least some of the processes described in connection withFIGS.2to10may be carried out by an apparatus comprising corresponding means for carrying out at least some of the described processes. The apparatus may comprise separate means for separate phases of a process, or means may perform several phases or the whole process. Some example means for carrying out the processes may include at least one of the following: detector, processor (including dual-core and multiple-core processors), digital signal processor, controller, receiver, transmitter, encoder, decoder, memory, RAM, ROM, software, firmware, display, user interface, display circuitry, user interface circuitry, user interface software, display software, circuit, antenna, antenna circuitry, and circuitry. In an embodiment, the at least one processor, the memory, and the computer program code form processing means or comprise one or more computer program code portions for carrying out one or more operations according to any one of the embodiments/examples/implementations described herein. According to yet another embodiment, the apparatus carrying out the embodiments comprises circuitry including at least one processor and at least one memory including computer program code. When activated, the circuitry causes the apparatus to perform at least some of the functionalities according to any one of the embodiments/examples/implementations ofFIGS.2to10, or operations thereof. The techniques and methods described herein may be implemented by various means. For example, these techniques may be implemented in hardware (one or more devices), firmware (one or more devices), software (one or more modules), or combinations thereof. For a hardware implementation, the apparatus(es) of embodiments may be implemented within one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), processors, controllers, micro-controllers, microprocessors, other electronic units designed to perform the functions described herein, or a combination thereof. For firmware or software, the implementation can be carried out through modules of at least one chip set (e.g. procedures, functions, and so on) that perform the functions described herein. The software codes may be stored in a memory unit and executed by processors. The memory unit may be implemented within the processor or externally to the processor. In the latter case, it can be communicatively coupled to the processor via various means, as is known in the art. Additionally, the components of the systems (apparatuses) described herein may be rearranged and/or complemented by additional components in order to facilitate the achievement of the various aspects, etc., described with regard thereto, and they are not limited to the precise configurations set forth in the given figures, as will be appreciated by one skilled in the art. Embodiments/examples/implementations as described may also be carried out in the form of a computer process defined by a computer program or portions thereof. 
Embodiments of the methods described in connection withFIGS.2to10may be carried out by executing at least one portion of a computer program comprising corresponding instructions. The computer program may be in source code form, object code form, or in some intermediate form, and it may be stored in some sort of carrier, which may be any entity or device capable of carrying the program. For example, the computer program may be stored on a computer program distribution medium readable by a computer or a processor. The computer program medium may be, for example but not limited to, a record medium, computer memory, read-only memory, electrical carrier signal, telecommunications signal, or software distribution package. The computer program medium may be a non-transitory medium, for example. Coding of software for carrying out the embodiments as shown and described is well within the scope of a person of ordinary skill in the art. In an embodiment, a computer-readable medium comprises said computer program. Even though the invention has been described above with reference to examples according to the accompanying drawings, it is clear that the invention is not restricted thereto but can be modified in several ways within the scope of the appended claims. Therefore, all words and expressions should be interpreted broadly and they are intended to illustrate, not to restrict, the embodiment. It will be obvious to a person skilled in the art that, as technology advances, the inventive concept can be implemented in various ways. Further, it is clear to a person skilled in the art that the described embodiments may, but are not required to, be combined with other embodiments in various ways.
50,491
11863289
DETAILED DESCRIPTION FIG.1depicts a simplified representation of a satellite communications system100. The satellite communications system100generally includes a plurality of satellites130that move about the Earth120in corresponding orbits110. The orbits110relative to the Earth120are represented by ellipses, as shown inFIG.1, although it should be appreciated thatFIG.1is not to scale and is for illustrative purposes only. Furthermore, while only two satellites, including a first satellite130aand a second satellite130b, are shown in a first orbit110aand a second orbit110b, respectively, additional satellites and corresponding orbits may be provided in other examples. In one particular example of the satellite communications system100that is described in greater detail below, twelve satellites transit about the Earth120in respective orbits. Each of the respective orbits110for the plurality of satellites130may have shared orbital parameters such that a shape of each orbit110relative to the Earth120is the same. For example, a satellite communication system100may be provided in which all satellites of the satellite communication system have an orbit with the same shape, but with the satellites offset in satellite epoch such that each of the satellites is in a different respective portion of the repeating ground track. For instance, each orbit110may have a shared or common inclination, apogee, perigee, semimajor axis, orbital period, argument of perigee, and/or longitude of ascending node. Thus, a ground track for each or all of the plurality of satellites may be of the same shape and follow a common ground track relative to the Earth. Each orbit110may be offset in relation to satellite epoch. In one example, the orbital period for each orbit110may be an integer factor of a sidereal day (e.g., the orbital period may be 12 sidereal hours, 8 sidereal hours, 6 sidereal hours, 4 sidereal hours, 2 sidereal hours, or 1 sidereal hour). A sidereal day is the time it takes for Earth to make one rotation around its axis and is approximately 23 hours, 56 minutes and 4.1 seconds. A sidereal hour is 1/24 of a sidereal day. In any regard, each of the plurality of satellites130may follow a common, repeating ground path. By offsetting the plurality of satellites130with respect to epoch, each of the plurality of satellites130may be spaced along the common, repeating ground path. In one example, each of the plurality of satellites130is offset in epoch such that the plurality of satellites130are evenly spaced within the common, repeating ground track of the satellite communication system100. The satellite communications system100may facilitate communication between ground stations on the Earth. For example, one ground station may be a satellite gateway that acts as a communication gateway to provide Internet access to a plurality of ground stations comprising user terminals on the Earth by way of communication with a satellite of the plurality of satellites. In this regard, each of the plurality of satellites130includes communication equipment to facilitate communication with one or more ground stations on Earth120. For example, the communication equipment may include radiofrequency (RF) transmitters, receivers, and/or transceivers capable of transmitting and/or receiving RF signals. The RF signals may include encoded digital data according to any appropriate encoding scheme for communication. 
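Returning to the epoch offsets described above, evenly spacing N satellites along the one repeating ground track amounts to staggering their epochs by one N-th of the orbital period. The following sketch, with an assumed 6-sidereal-hour period and twelve satellites as in the example system, illustrates this; the function name is illustrative only.

```python
SIDEREAL_HOUR_S = 3_590.17   # one sidereal hour, 1/24 of ~23 h 56 min 4.1 s

def epoch_offsets_s(num_satellites: int, orbital_period_s: float) -> list:
    """Stagger satellite epochs so the constellation is evenly spaced along
    the common, repeating ground track."""
    return [i * orbital_period_s / num_satellites for i in range(num_satellites)]

period = 6 * SIDEREAL_HOUR_S           # 6 sidereal hours, i.e. 4 orbits per sidereal day
offsets = epoch_offsets_s(12, period)
print([round(t) for t in offsets])     # successive satellites trail by ~1,795 s
```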
Each satellite may also include a communications module executed by a computing device on the satellite to facilitate communications. In turn, the satellite communication system100may provide continuous communication between a gateway and a plurality of user terminals (e.g., which may correspond to subscribers of a communication service facilitated by the plurality of satellites130). In one example, a given sky track is provided in which at least one of the plurality of satellites130is continuously visible to a gateway and at least one (and preferably a plurality) of user terminals, to provide communication coverage (e.g., internet access) to user terminals in a geographic area of interest, which may correspond to a continent-level, country-level, or region-level service area. In some examples, user terminals in different respective geographic areas may be assigned to different sky tracks of the satellite communications system100to facilitate communication with one or more gateways. Moreover, a single gateway may communicate with more than one different satellite in different respective ground tracks at the same time to provide distinct communications channels (e.g., to different ones or subsets of a plurality of user terminals). In this regard, different subsets of subscribers or user terminals may be served by a single gateway using different respective satellites. In an example of the satellite communications system100, the plurality of satellites130may each be configured as a “bent pipe” satellite, wherein the satellite may frequency-convert the received carrier signals before retransmitting these signals to their destination, but otherwise perform little or no other processing on the contents of the signals. There could be a single carrier signal for each service spot beam of a satellite or multiple carriers in different embodiments. Similarly, single or multiple carrier signals could be used for feeder spot beams. A variety of physical layer transmission modulation and coding techniques may be used by the satellite communications system100in accordance with certain embodiments, including those defined in the DVB-S2 standard. For other embodiments, a number of configurations are possible. The plurality of satellites130may each operate in a multi-beam mode, transmitting a number of spot beams, each directed at a different region of the Earth120. Spot beams may be generated in a variety of ways, including single-feed per beam, multiple-feed per beam, onboard beamforming, ground-based beamforming, and the like. Each spot beam may be used to communicate between a satellite and a large group (e.g., thousands) of user terminal systems (e.g., user terminals within user systems of subscribers of the communication service facilitated by the satellite communications system100). The signals transmitted from the satellite may be received by one or more user terminals, via a respective user antenna. In some embodiments, some or all of the user systems include one or more user terminals and one or more customer premise equipment (CPE) devices. User terminals may include modems, satellite modems, routers, or any other useful components for handling the user-side communications. Reference to “users” should be construed generally to include any user (e.g., subscriber, consumer, customer, etc.) of services provided over the satellite communications system100. 
The orbits110of the plurality of satellites130may be configured relative to the Earth120to provide desirable characteristics related to communication, satellite operation, ground station design, or other benefits, as will be described in greater detail herein. For example, an orbit110may be a critically inclined orbit. A critically inclined orbit is one that is inclined at 63.4 degrees relative to the equatorial plane of the Earth120. Satellites traveling in a critically inclined orbit experience zero apogee drift. Accordingly, the duration and/or frequency of station-keeping operations may be minimized for satellites in orbit at a critical inclination. In turn, the critically inclined orbit110may reduce fuel consumption of each of the plurality of satellites130due to the reduction in station-keeping operations. In addition, the orbit110may have an apogee112and a perigee114that are arranged relative to the Earth120to facilitate beneficial characteristics for satellite communication. In some examples, the orbits110may be circular orbits. In other examples, the orbits110may be elliptical. Specifically, the orbit110may have an apogee112of 15,000 km and a perigee114of 6,000 km. This example includes an eccentricity of 0.26. Regardless of whether circular or elliptical, the semimajor axis of the orbit110may be between about 8,000 km and 17,000 km. In examples where the orbits110are elliptical, the eccentricity of the orbit may be between 0 and 0.5. In another example, the orbit110may include a semimajor axis of 16,727 km, 4 orbits of the Earth per day, a maximum eccentricity of 0.26, and a perigee of 6,000 km. In yet another example, the orbit110may include a semimajor axis of 14,412 km, 5 orbits of the Earth per day, a maximum eccentricity of 0.14, and a perigee of 6,000 km. Another example orbit110includes a semimajor axis of 12,759 km, 6 orbits of the Earth per day, a maximum eccentricity of 0.03, and a perigee of 6,000 km. In another example, the orbit110may include a semimajor axis of 8,013 km, 12 orbits of the Earth per day, a maximum eccentricity of 0.16, and a perigee of 350 km. In examples that include elliptical orbits, the perigee114may occur over the Southern hemisphere of the Earth120, as shown inFIG.1(i.e., such that a satellite is over the Southern hemisphere when at perigee114). In turn, the apogee112occurs over the Northern hemisphere of the Earth120(i.e., such that a satellite is over the Northern hemisphere when at apogee112). Such an arrangement maximizes the duration in which the plurality of satellites130are each capable of communicating with ground stations in the Northern hemisphere. That is, because the velocity of a satellite130is at its slowest near the apogee of the orbit110and the apogee is disposed over the Northern hemisphere, each of the plurality of satellites130may slow and experience a longer duration in which the satellite130is capable of communication with ground stations on the Earth120in the Northern hemisphere. This is beneficial as many targeted satellite service regions exist in the Northern hemisphere, including the North American landmass, European landmass, and much of the Asian landmass. As described above, a satellite communication system100may provide a ground track that regularly repeats along the same locations relative to the surface of the earth120. In this regard, the orbital period for each orbit110in the system100may be a factor (e.g., an integer factor) of a sidereal day. 
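The relation between the quoted orbits-per-day figures and the semimajor axes can be checked with Kepler's third law. The constants below are standard values and the comparison is approximate, so small residuals against the quoted 16,727 km, 14,412 km, 12,759 km and 8,013 km figures are expected.

```python
import math

MU_EARTH = 398_600.4418      # km^3/s^2, Earth's standard gravitational parameter
SIDEREAL_DAY_S = 86_164.1    # about 23 h 56 min 4.1 s

def semimajor_axis_km(orbits_per_sidereal_day: int) -> float:
    """Kepler's third law: a = (mu * (T / (2*pi))**2) ** (1/3)."""
    period_s = SIDEREAL_DAY_S / orbits_per_sidereal_day
    return (MU_EARTH * (period_s / (2 * math.pi)) ** 2) ** (1 / 3)

for n in (4, 5, 6, 12):
    print(n, round(semimajor_axis_km(n)))
# Prints roughly 16733, 14420, 12770 and 8044 km, within a fraction of a
# percent of the example values quoted above.
```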
By integer factor, it is meant that the orbital period divides the 24 sidereal hours in a sidereal day without remainder. For example, the orbital period of each satellite130may be 6 sidereal hours. While a 6 sidereal hour orbital period is discussed, other orbital periods that are also evenly divisible factors of a sidereal day, such as an orbital period of 12 sidereal hours, 8 sidereal hours, 4 sidereal hours, 2 sidereal hours, or 1 sidereal hour, are also contemplated. In any regard, each of the plurality of satellites130may follow a single, repeating ground track relative to the surface of the Earth120and may be offset in satellite epoch. Each of the plurality of satellites130may include a navigation module operative to maintain the satellite in orbit110. Further still, the epoch of each of the plurality of satellites130may be offset to provide a predetermined spacing between the plurality of satellites130along the repeating ground track. For example, the plurality of satellites130may be evenly spaced in the ground track. An orbit110having the foregoing orbital parameters may facilitate a number of beneficial characteristics. As briefly stated above, these characteristics may include improved satellite operational characteristics by reducing station-keeping operations and, in turn, reducing fuel consumption of each of the plurality of satellites130. In addition, the inclined orbit having the apogee and perigee characteristics described above may allow the plurality of satellites130to operate almost entirely in sunlight, which reduces battery capacity requirements because the plurality of satellites130may not be required to operate on battery power for significant portions of the orbit110. In addition, as described in greater detail below, the orbit110may allow for improved communications operations with the satellites. Specifically, the orbit110may facilitate continuous communication between a ground station and at least one of the plurality of satellites130. Furthermore, it will be understood that because each of the plurality of satellites130follows a common ground track, each of the plurality of satellites130will correspondingly follow a common, repeating sky track relative to a ground station on Earth120. That is, the locations directly below the orbital path of each satellite are the same for the plurality of satellites. By “directly below,” it is meant that the locations of the common, repeating ground track represent the intersection of the Earth's surface with an imaginary line extending from the center of the Earth to a satellite. The common, repeating ground track represents the locations on the Earth over which each satellite will pass directly overhead at the zenith in the frame of reference of an observer on the surface of the Earth. Each of the plurality of satellites130will follow the same path across the sky from the perspective of a ground station on Earth120. In turn, ground station antenna requirements may be simplified by permitting less complex tracking when the antennas are used to communicate with the plurality of satellites130in orbit110. FIG.2depicts a ground path of the orbit110on a map of the Earth120. The orbit110may have a repeating ground track200that is constant relative to the surface of the Earth120. 
As the orbital period of the orbit110may be an evenly divisible factor of a sidereal day (e.g., 6 sidereal hours) and each satellite may orbit the Earth multiple times in a sidereal day, the ground track200of the orbit110appears as multiple interleaved ground track projections when projected onto a flat representation of the Earth120, such that each ground track projection portion represents a portion of the orbit110as it repeatedly tracks across the surface of the Earth120. Thus, the ground track projection may be represented by a first ground track projection202, a second ground track projection204, and a third ground track projection206that collectively define a continuous ground track200relative to the surface of the Earth120. The appearance of the orbit110as a plurality of interleaved ground track projections is a limitation of the projected map view shown inFIG.2; however, each of the first ground track projection202, the second ground track projection204, and the third ground track projection206represents a portion of the continuous ground track200for the orbit110at different satellite epochs of the orbit. Each of the plurality of satellites130of the satellite communications system100is shown inFIG.2. In this example, the satellite communication system includes 12 satellites, such that satellites130a-130lare shown, but a system need not be limited to that number. As can be appreciated, each of the plurality of satellites130follows the common ground track200represented by the interleaved ground track projections. Also shown inFIG.2is a ground station comprising a gateway250that is located in central North America.FIG.2also depicts a gateway252located in central Europe. The gateway250and the gateway252may each include communication equipment that may be operable to communicate with the communication equipment of each of the plurality of satellites130in orbit110when a respective one of the plurality of satellites130is in view of the respective gateway along a sky track. Each of the gateway250and the gateway252may include receivers, transmitters, and/or transceivers capable of communicating with the communication equipment of each of the plurality of satellites130. The gateway250and/or gateway252may include a communication module executed by a computing system at the respective ground station. The gateways250and252may also facilitate Internet communications by being in operative communication with a wide area network, including the Internet. In the example depicted inFIG.2, the orbit110may provide communication coverage to user terminals in at least portions of North America and Europe. Specifically, central North America and Europe may each be referred to as “targeted geographic areas” in which satellite communication is targeted to user terminals in those areas. Thus, satellite communication may be continuously provided between a gateway and user terminals in the targeted geographic areas by at least one of the plurality of satellites130. While North America and Europe are shown for purposes of explanation, it may be appreciated that the orbit110may be arranged (e.g., the longitude of the ascending node may be controlled) to arrange the orbit110in a different relative position to the Earth120to target other geographic areas of interest. 
Also, while ground stations are shown in central North America and Europe inFIG.2, the orbit configuration depicted inFIG.2may also facilitate a repeating sky track relative to a targeted geographic area in other geographic areas (e.g., in Japan, China, Russia, India, other southeast Asian countries, or other regions without limitation), although not expressly shown inFIG.2. The gateway250has extents of visibility relative to the plurality of satellites130due to, among other factors, the curvature of the Earth, geographic formations, or other obstacles that block or otherwise preclude line-of-sight communication with the plurality of satellites130. In this regard, a first extent of visibility210of the orbit110for the gateway250along the first ground track projection202is shown as a bolded gray portion of the first ground track projection202inFIG.2. A second extent of visibility212of the orbit110for the gateway250is depicted as a bolded gray portion of the second ground track projection204in the map view ofFIG.2. In addition, a third extent of visibility214of the orbit110may be defined as a bolded gray portion of the third ground track projection206. In this regard, at the time depicted inFIG.2, satellite130lis within the first extent of visibility210of the gateway250, satellite130gis within the second extent of visibility212of the gateway250, and satellite130cand satellite130dmay be within the third extent of visibility214of the gateway250. In this regard, the gateway250may be capable of simultaneous communication with each of the satellites in the different respective sky tracks relative to the gateway250defined by the extents of visibility210,212, and214. The first extent of visibility210, the second extent of visibility212, and the third extent of visibility214for the gateway250may facilitate multiple instances in which the orbit110passes within view of the gateway250. As shown inFIG.2, a different respective one of the plurality of satellites130may be in each of the extents of visibility for the gateway250. Thus, the gateway250may be in simultaneous communication with a respective one or more of the plurality of satellites130as the satellites make passes relative to the gateway250in each of the extents of visibility while orbiting the Earth120. FIG.3shows a map view including the gateway250relative to a first extent of visibility210, a second extent of visibility212, and a third extent of visibility214along the repeating ground track200of the orbits110of the satellite communications system100(portions of the orbit110are omitted for clarity). InFIG.3, a satellite130may be in the first extent of visibility210such that the gateway250may establish a communication link with the satellite130.FIG.3further depicts a coverage area218within which user terminals (not shown) are capable of communicating with the satellite130. As such, user terminals within the coverage area218may be provided communication services by the gateway250via the satellite130such that the user terminals in the coverage area218may exchange data messages via the satellite130. That is, Internet service may be provided to the user terminals in the coverage area218by the satellite130and the gateway250. As can be appreciated, the coverage area218may extend to generally all of the contiguous United States, much of Mexico, and much of Canada. Thus, the coverage area218provided by the satellite130may extend to a large targeted geographic area, including large portions of the North American continent. 
InFIG.4, the satellite130has transited in orbit110such that the satellite130is now visible to the gateway250in the third extent of visibility214at a second time shown inFIG.4. In this regard, the coverage area218of the satellite130also extends to a large portion of the contiguous United States.FIG.5depicts a third time in which the satellite130may be in the second extent of visibility212relative to the gateway250such that the coverage area218for the satellite130is shown for that time. Specifically, a satellite may transit through an acquisition of signal (AOS) boundary310, upon which the satellite becomes visible to the gateway250such that communication may be established with the satellite. The satellite may remain visible to the gateway250until the satellite transits through a loss of signal (LOS) boundary312. Thus, a satellite of the plurality of satellites130may provide a pass relative to the ground station250as the satellite traverses from the AOS boundary310to the LOS boundary312for each respective extent of visibility210,212, and214for the gateway250. During the period in which the satellite130is included in the extent of visibility212, the satellite130may appear to the gateway250in a first sky track relative to the gateway250. Because all of the plurality of satellites130follow a repeating ground track, each of the plurality of satellites130may sequentially traverse the first sky track relative to the gateway250. For example, a simplified depiction of a repeating sky track420is depicted inFIG.6. InFIG.6, a representation of the local horizon at a gateway250is shown, including a horizon400that separates the Earth405and the sky410from the perspective of the gateway250. The horizon400may be the actual horizon at the gateway250or may be an artificial horizon at some elevation angle above or below the actual local horizon of the gateway250. In any regard, a repeating sky track420is shown that extends between an AOS boundary310in which a satellite130enters the extent of visibility for the gateway250and the LOS boundary312in which a satellite130exits the first extent of visibility210for the gateway250. Because each of the plurality of satellites130follows an identical ground track200, the orbit110defines a repeating sky track420relative to the gateway250in which the plurality of satellites130each sequentially traverse from the AOS boundary310to the LOS boundary312on the repeating sky track420. That is, as satellite130traverses toward the LOS boundary312, at or before the time the satellite130exits the first extent of visibility210for the gateway250, the next satellite along the repeating ground track200of the plurality of satellites130enters the first extent of visibility210by passing through the AOS boundary310such that the next satellite may establish communication with the gateway250. The AOS for each successive satellite on the repeating sky track420may occur at or before LOS for the current satellite in the repeating sky track420to provide continuous satellite communication between the gateway250and at least one of the plurality of satellites130on the repeating sky track420. In this regard, the gateway250may include communication equipment capable of tracking satellites along the repeating sky track420relative to the gateway250. For example, the gateway250may have at least two antennas, including a first antenna452and a second antenna454.
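The AOS/LOS behavior described above can be sketched as a scan for elevation-mask crossings. The following is a minimal illustration, not the system's actual tracking software; the elevation profile and mask value are invented.

```python
import math
from typing import Callable

def find_passes(elevation_at: Callable[[float], float],
                t0: float, t1: float, step: float,
                mask_deg: float = 0.0) -> list:
    """Return (aos_time, los_time) pairs where elevation rises above the mask."""
    passes, aos = [], None
    t = t0
    prev_visible = elevation_at(t) > mask_deg
    while t <= t1:
        visible = elevation_at(t) > mask_deg
        if visible and not prev_visible:
            aos = t                      # AOS boundary crossed (rising above the mask)
        elif not visible and prev_visible and aos is not None:
            passes.append((aos, t))      # LOS boundary crossed (setting below the mask)
            aos = None
        prev_visible = visible
        t += step
    return passes

# Toy elevation profile (purely illustrative), yielding two passes per day:
elev = lambda t: 40 * math.sin(2 * math.pi * t / 43200) - 10
print(find_passes(elev, 0, 86400, 60, mask_deg=5.0))
```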
The first antenna452and the second antenna454may coordinate to provide constant communication with at least one of the plurality of satellites130in the repeating sky track420. For example, the first antenna452may track a satellite130as it transits between the AOS boundary310and the LOS boundary312. At or before the time the first antenna452tracks satellite130as it passes the LOS boundary312(i.e., transits out of the visible extent of the gateway250), the second antenna454may be tasked with acquiring communication with the next satellite as it enters the visible extent at the AOS boundary310. Because an interruption in communication is undesirable, the first antenna452and the second antenna454may work in tandem to maintain communication using a handoff between a current satellite in the repeating sky track420and the next satellite to enter the repeating sky track420. Thus, the handoff of communication between the first antenna452and the satellite130gas it transits through the LOS boundary312may be seamless as the second antenna454may provide communication with the next satellite as it travels through the AOS boundary310. In turn, the first antenna452may cycle back during the transit of the next satellite along the repeating sky track420tracked by the second antenna454such that the first antenna452is ready to acquire communication with a further subsequent satellite as that satellite passes the AOS boundary310. The first antenna452and the second antenna454may alternately provide communication with each successive satellite in orbit110as each successive satellite passes the AOS boundary310to provide continuous communication with at least one of the plurality of satellites130in orbit110. Based on the foregoing discussion regarding the fact that a gateway250may have multiple extents of visibility occupied by different respective satellites at the same time, it may be appreciated that multiple repeating sky tracks may be provided relative to a gateway250. Thus, with further reference toFIG.7, a gateway250may be operative to track two different repeating sky tracks—repeating sky track420and repeating sky track422—each corresponding with different extents of visibility of the orbit110for the gateway250. That is, repeating sky track420may correspond to a first extent of visibility210, as shown inFIGS.2-5. The repeating sky track422may correspond to a second extent of visibility212, as shown inFIGS.2-5. For the purpose of discussion ofFIG.7, the repeating sky track422may be defined between an AOS boundary314and a LOS boundary316. In the example depicted inFIG.7, satellite130gis in the first repeating sky track420. A different one of the plurality of satellites130(e.g., satellite130c) than the satellite130gmay be simultaneously transiting along repeating sky track422while satellite130gtransits along repeating sky track420. That is, satellite130cmay be visible concurrently with satellite130g, albeit in different sky tracks. Like the foregoing discussion of the repeating sky track420, the repeating sky track422may be fixed relative to the gateway250and successively occupied by different ones of the plurality of satellites130. Accordingly, the ground station may have a third antenna552and a fourth antenna554to track successive ones of the plurality of satellites130along the repeating sky track422.
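The make-before-break alternation between the two antennas described above can be sketched as a simple schedule. Pass timing and the acquisition overlap below are assumed values for illustration, not system parameters.

```python
def antenna_schedule(pass_starts: list, pass_len: float, overlap: float):
    """Yield (antenna_index, acquire_time, release_time) per successive pass."""
    for i, aos in enumerate(pass_starts):
        antenna = i % 2                  # antennas 0 and 1 alternate passes
        acquire = aos - overlap          # start acquiring before the AOS boundary
        release = aos + pass_len         # track until the LOS boundary
        yield antenna, acquire, release

# Example: passes every 2 hours, each ~2 hours long, 60 s acquisition overlap,
# so one antenna always has a link while the other cycles back to the AOS point.
for ant, acq, rel in antenna_schedule([0, 7200, 14400], 7200.0, 60.0):
    print(f"antenna {ant}: acquire at t={acq:.0f}s, release at t={rel:.0f}s")
```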
Like the first antenna452and the second antenna454discussed above, the third antenna552and the fourth antenna554may alternately track successive ones of the plurality of satellites130as they transit through the repeating sky track422such that one of the third antenna552and the fourth antenna554acquires a new satellite passing through the AOS boundary314at or before the other antenna of the third antenna552and the fourth antenna554loses communication with an existing satellite in the repeating sky track422passing through the LOS boundary316. The number of sky tracks simultaneously visible for a given targeted geographic extent may not be limited to one or two but could include at least three sky tracks in which at least one of the plurality of satellites130is visible to a gateway250. For instance, as can be appreciated with returned reference toFIGS.2-5, the first extent of visibility210, the second extent of visibility212, and the third extent of visibility214may create different respective sky tracks relative to the gateway250. Furthermore, the same repeating ground track200of the orbit110may also provide a plurality of extents of visibility and corresponding repeating sky tracks to a gateway252in central Europe. While not shown, multiple sky tracks may also be provided to other gateways in other targeted geographic locations (e.g., in Asia) using the orbit110that provides the sky tracks relative to the gateway250and the gateway252. Accordingly, the orbit110provides a robust communication system as a plurality of ground stations in different targeted geographic regions may be in communication with a plurality of satellites simultaneously. FIGS.8-10depict plots representative of three different repeating sky tracks relative to a given ground station. Specifically,FIG.8represents the characteristics of a repeating sky track600. The plot inFIG.8has a left vertical axis representing degree values, a right vertical axis representing range values, and a horizontal axis representing time. A pass elevation602is represented in a dash-dot line, a pass azimuth604is represented as a dashed line, and a pass range606is represented as a dotted line. In this regard, pass elevation602and pass azimuth604are measured relative to the left axis in degrees corresponding to the elevation angle and azimuth angle relative to the ground station. As may also be appreciated inFIG.8, a number of satellite epochs are represented corresponding to passes of each satellite in the plurality of satellites130in the sky track600.
Thus, a first satellite epoch608in which a first satellite130ais visible to the ground station is followed successively by a second satellite epoch610in which a second satellite130bis visible to the ground station, a third satellite epoch612in which a third satellite130cis visible to the ground station, a fourth satellite epoch614in which a fourth satellite130dis visible to the ground station, a fifth satellite epoch616in which a fifth satellite130eis visible to the ground station, a sixth satellite epoch618in which a sixth satellite130fis visible to the ground station, a seventh satellite epoch620in which a seventh satellite130gis visible to the ground station, an eighth satellite epoch622in which an eighth satellite130his visible to the ground station, a ninth satellite epoch624in which a ninth satellite130iis visible to the ground station, a tenth satellite epoch626in which a tenth satellite130jis visible to the ground station, an eleventh satellite epoch628in which an eleventh satellite130kis visible to the ground station, and a twelfth satellite epoch630in which a twelfth satellite130lis visible to the ground station. In turn, the first satellite epoch608follows the twelfth satellite epoch630, in which the first satellite130aagain becomes visible to the ground station, thus representing a complete cycle through each of the plurality of satellites130passing along the sky track600. As can be appreciated, the pass elevation602, pass azimuth604, and pass range606follow a repetitive cycle for each successive satellite epoch, indicating the repeating sky track600, which remains constant. Each satellite epoch may be around 2 sidereal hours, such that the twelve satellites defining the twelve satellite epochs span a full sidereal day. However, depending on the length of the sky track relative to the ground station, other numbers of satellites and epoch durations may be provided to facilitate continuous communication with at least one of the plurality of satellites130. FIG.9represents the characteristics of a repeating sky track700. The plot inFIG.9has a left vertical axis representing degrees, a right vertical axis representing range values, and a horizontal axis representing time. A pass elevation702is represented in a dash-dot line, a pass azimuth704is represented as a dashed line, and a pass range706is represented as a dotted line. In this regard, pass elevation702and pass azimuth704are measured relative to the left axis in degrees corresponding to the elevation angle and azimuth angle relative to the ground station. As may also be appreciated inFIG.9, a number of satellite epochs are represented corresponding to passes of each satellite in the plurality of satellites130in the sky track700.
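Given the epoch cycle just enumerated for the sky track600, which satellite occupies a repeating sky track at a given time reduces to modular arithmetic. A hedged sketch, assuming the roughly 2-sidereal-hour epoch of the example; the phase argument anticipates the out-of-phase tracks noted further below:

```python
SIDEREAL_HOUR = 86164.0905 / 24
EPOCH_S = 2 * SIDEREAL_HOUR                  # one satellite epoch, per the example
SATS = [f"130{c}" for c in "abcdefghijkl"]   # satellites 130a..130l

def occupant(t_s: float, phase_s: float = 0.0) -> str:
    """Satellite occupying the sky track at time t (phase offsets the cycle)."""
    return SATS[int((t_s - phase_s) // EPOCH_S) % len(SATS)]

print(occupant(0.0))                  # 130a in the first satellite epoch
print(occupant(3 * EPOCH_S + 10))     # 130d in the fourth satellite epoch
```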
Thus, a first satellite epoch708in which a first satellite130ais visible to the ground station is followed successively by a second satellite epoch710in which a second satellite130bis visible to the ground station, a third satellite epoch712in which a third satellite130cis visible to the ground station, a fourth satellite epoch714in which a fourth satellite130dis visible to the ground station, a fifth satellite epoch716in which a fifth satellite130eis visible to the ground station, a sixth satellite epoch718in which a sixth satellite130fis visible to the ground station, a seventh satellite epoch720in which a seventh satellite130gis visible to the ground station, an eighth satellite epoch722in which an eighth satellite130his visible to the ground station, a ninth satellite epoch724in which a ninth satellite130iis visible to the ground station, a tenth satellite epoch726in which a tenth satellite130jis visible to the ground station, an eleventh satellite epoch728in which an eleventh satellite130kis visible to the ground station, and a twelfth satellite epoch730in which a twelfth satellite130lis visible to the ground station. In turn, the first satellite epoch708follows the twelfth satellite epoch730, in which the first satellite130aagain becomes visible to the ground station, thus representing a complete cycle through each of the plurality of satellites130passing along the sky track700. As can be appreciated, the pass elevation702, pass azimuth704, and pass range706follow a repetitive cycle for each successive satellite epoch, indicating the repeating sky track700, which remains constant. FIG.10represents the characteristics of a repeating sky track800. The plot inFIG.10has a left vertical axis representing degrees, a right vertical axis representing range values, and a horizontal axis representing time. A pass elevation802is represented in a dash-dot line, a pass azimuth804is represented as a dashed line, and a pass range806is represented as a dotted line. In this regard, pass elevation802and pass azimuth804are measured relative to the left axis in degrees corresponding to the elevation angle and azimuth angle relative to the ground station. As may also be appreciated inFIG.10, a number of satellite epochs are represented corresponding to passes of each satellite in the plurality of satellites130in the sky track800. Thus, a first satellite epoch808in which a first satellite130ais visible to the ground station is followed successively by a second satellite epoch810in which a second satellite130bis visible to the ground station, a third satellite epoch812in which a third satellite130cis visible to the ground station, a fourth satellite epoch814in which a fourth satellite130dis visible to the ground station, a fifth satellite epoch816in which a fifth satellite130eis visible to the ground station, a sixth satellite epoch818in which a sixth satellite130fis visible to the ground station, a seventh satellite epoch820in which a seventh satellite130gis visible to the ground station, an eighth satellite epoch822in which an eighth satellite130his visible to the ground station, a ninth satellite epoch824in which a ninth satellite130iis visible to the ground station, a tenth satellite epoch826in which a tenth satellite130jis visible to the ground station, an eleventh satellite epoch828in which an eleventh satellite130kis visible to the ground station, and a twelfth satellite epoch830in which a twelfth satellite130lis visible to the ground station. In turn, the first satellite epoch808follows the twelfth satellite epoch830, in which the first satellite130aagain becomes visible to the ground station, thus representing a complete cycle through each of the plurality of satellites130passing along the sky track800.
As can be appreciated, the pass elevation802, pass azimuth804, and pass range806follow a repetitive cycle for each successive satellite epoch, indicating the repeating sky track800, which remains constant. Further, each of the repeating sky track600, the repeating sky track700, and the repeating sky track800has different azimuth angles for each satellite epoch of its respective path. In turn, a ground station may have a corresponding set of antennas dedicated to each unique sky track available to the ground station. Also, the satellite epochs for each respective repeating sky track are offset or out of phase. Thus, different ones of the plurality of satellites130are present in each individual sky track in any given epoch of the satellite communications system100. With further reference toFIG.11, the utility of multiple sky tracks available to a gateway950is demonstrated.FIG.11generally depicts a horizon900separating the Earth905from the sky910, as seen from the gateway950. Also inFIG.11, the gateway950includes at least a first ground station antenna902and a second ground station antenna904. As may be appreciated from the foregoing description, the gateway950may also include additional antennas (e.g., to provide successive tracking of satellites in a sky track). In any regard, the first ground station antenna902may be in operative communication with a satellite130cin a first repeating sky track920that may be in further communication with a first user terminal912. Also, a second ground station antenna904may be in operative communication with a satellite130gin a second repeating sky track922that may be in further communication with a second user terminal914. In this regard, the first user terminal912may be capable of tracking satellites in the first repeating sky track920(e.g., with multiple antennae at the first user terminal912or a phased array antenna tuned to track the first repeating sky track920as described in greater detail below) to exchange a communication924with the gateway950via satellites in the first repeating sky track920. The second user terminal914may be capable of tracking satellites in the second repeating sky track922to exchange a communication926with the gateway950via satellites in the second repeating sky track922. That is, distinct communication channels may be established by the gateway950with different respective sets of user terminals. For example, a first set of user terminals may be tasked with tracking satellites in the first repeating sky track920, and a second set of user terminals may be tasked with tracking satellites in the second repeating sky track922. This may provide discrete communication modalities to different sets of user terminals and/or may provide additional bandwidth to the satellite communications system100. Moreover, while the first user terminal912and the second user terminal914are shown within the extent of visibility of the single gateway950inFIG.11, it may be that the first user terminal912and/or second user terminal914are beyond the extent of visibility of the single gateway950such that communication between the first user terminal912and the single gateway950and/or the second user terminal914and the single gateway950requires relay of the communication924or communication926using a satellite. FIG.12illustrates a polar plot1000representing the visible sky relative to a ground station. The ground station could be a gateway or a user terminal.
In any regard, the polar plot includes an azimuth angle represented in the angular coordinate of the plot1000and an elevation angle in the radial coordinate of the plot1000. In this regard, a first sky track1002, a second sky track1004, and a third sky track1006may each extend in the sky relative to the ground station. The plot1000also includes a first tracking region1012for the first sky track1002, a second tracking region1014for the second sky track1004, and a third tracking region1016for the third sky track1006. Each tracking region may define a range of azimuth and elevation angles over which an antenna at the ground station is capable of communicating with satellites in each respective sky track. In the context of the ground station being a gateway, the gateway may be in simultaneous communication with respective satellites in each of the first sky track1002, the second sky track1004, and the third sky track1006. Thus, the gateway may include one or more antennas capable of communication in the first tracking region1012, the second tracking region1014, and the third tracking region1016. The gateway may include one or more antennas for simultaneous communication in each of the tracking regions. Antennas may include mechanical and/or electrical tracking elements to allow for communication with each of the respective tracking regions. In the context of the ground station being a user terminal, the user terminal may not be in communication with satellites in more than one of the sky tracks. Moreover, as the antenna complexity of a user terminal is advantageously minimized, the user terminal may be assigned a given sky track to facilitate communication therewith. As can be appreciated, each tracking region for the sky tracks has a different area. The larger the area of the tracking region, the more complex an antenna may need to be to facilitate communication with the sky track. In this regard, for the example shown inFIG.12, the first tracking region1012may have the smallest area of the tracking regions for the user terminal. As such, the user terminal may be assigned to track satellites in the first sky track1002associated with the first tracking region1012. Thus, the antenna design (e.g., including mechanical and/or electrical tracking means such as a phased array antenna) may be simplified. In this regard, the first sky track1002may have a more limited extent of azimuth and elevation deviation than the second sky track1004and the third sky track1006, providing more efficient tracking of satellites in the first sky track1002. FIG.13illustrates an example schematic of a processing system1100suitable for implementing aspects of the disclosed technology, including a communication module1150and/or navigation module1152, as described above in relation to the satellite communications system100. Furthermore, other aspects of the satellite communications system100may be controlled by a processing system1100. The processing system1100may include one or more processor unit(s)1102, memory1104, a display1106, and other interfaces1108(e.g., buttons). The memory1104generally includes both volatile memory (e.g., RAM) and non-volatile memory (e.g., flash memory). An operating system1110, such as the Microsoft Windows® operating system, the Apple macOS operating system, or the Linux operating system, resides in the memory1104and is executed by the processor unit(s)1102, although it should be understood that other operating systems may be employed.
One or more applications1112are loaded in the memory1104and executed on the operating system1110by the processor unit(s)1102. Applications1112may receive input from various local input devices such as a microphone1134or an input accessory1135(e.g., keypad, mouse, stylus, touchpad, joystick, instrument mounted input, or the like). Additionally, the applications1112may receive input from one or more remote devices such as remotely-located smart devices by communicating with such devices over a wired or wireless network using one or more communication transceivers1130and an antenna1138to provide network connectivity (e.g., a mobile phone network, Wi-Fi®, Bluetooth®). The processing system1100may also include various other components, such as a positioning system (e.g., a global positioning satellite transceiver), one or more accelerometers, one or more cameras, an audio interface (e.g., the microphone1134, an audio amplifier and speaker and/or audio jack), and storage devices1128. Other configurations may also be employed. The processing system1100further includes a power supply1116, which is powered by one or more batteries or other power sources and which provides power to other components of the processing system1100. The power supply1116may also be connected to an external power source (not shown) that overrides or recharges the built-in batteries or other power sources. The processing system1100may include a variety of tangible processor-readable storage media and intangible processor-readable communication signals. Tangible processor-readable storage can be embodied by any available media that can be accessed by the processing system1100and includes both volatile and nonvolatile storage media, removable and non-removable storage media. Tangible processor-readable storage media excludes intangible communications signals and includes volatile and nonvolatile, removable and non-removable storage media implemented in any method or technology for storage of information such as processor-readable instructions, data structures, program modules or other data. Tangible processor-readable storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CDROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other tangible medium which can be used to store the desired information and which can be accessed by the processing system1100. In contrast to tangible processor-readable storage media, intangible processor-readable communication signals may embody processor-readable instructions, data structures, program modules or other data resident in a modulated data signal, such as a carrier wave or other signal transport mechanism. The term “modulated data signal” means an intangible communications signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, intangible communication signals include signals traveling through wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared, and other wireless media. Some implementations may comprise an article of manufacture. An article of manufacture may comprise a tangible storage medium to store logic.
Examples of a storage medium may include one or more types of processor-readable storage media capable of storing electronic data, including volatile memory or non-volatile memory, removable or non-removable memory, erasable or non-erasable memory, writeable or re-writeable memory, and so forth. Examples of the logic may include various software elements, such as software components, programs, applications, computer programs, application programs, system programs, machine programs, operating system software, middleware, firmware, software modules, routines, subroutines, operation segments, methods, procedures, software interfaces, application program interfaces (API), instruction sets, computing code, computer code, code segments, computer code segments, words, values, symbols, or any combination thereof. In one implementation, for example, an article of manufacture may store executable computer program instructions that, when executed by a computer, cause the computer to perform methods and/or operations in accordance with the described implementations. The executable computer program instructions may include any suitable type of code, such as source code, compiled code, interpreted code, executable code, static code, dynamic code, and the like. The executable computer program instructions may be implemented according to a predefined computer language, manner or syntax, for instructing a computer to perform a certain operation segment. The instructions may be implemented using any suitable high-level, low-level, object-oriented, visual, compiled and/or interpreted programming language. The implementations described herein are implemented as logical steps in one or more computer systems. The logical operations may be implemented (1) as a sequence of processor-implemented steps executing in one or more computer systems and (2) as interconnected machine or circuit modules within one or more computer systems. The implementation is a matter of choice, dependent on the performance requirements of the computer system being utilized. Accordingly, the logical operations making up the implementations described herein are referred to variously as operations, steps, objects, or modules. Furthermore, it should be understood that logical operations may be performed in any order unless explicitly claimed otherwise or a specific order is inherently necessitated by the claim language.
11863290
DETAILED DESCRIPTION The invention describes a method for entry of a user terminal into a satcom network comprising at least one geostationary or non-geostationary satellite using beam hopping. Beam hopping is a mechanism that is very commonly used in satellite communications. It allows complete and instantaneous reconfigurability of the satellite coverage via the definition of frames, called hop frames, divided into time intervals. Each time interval is associated with one or more antenna-beam configurations. The beams are formed using an active antenna, generally allowing a plurality of directional beams to be formed in parallel with a view to irradiating a plurality of spots simultaneously, or a plurality of directional antennas, using one or more frequencies and one or more polarisations. The hop frames are defined dynamically as required in order to serve all the actors of the network as well as possible. They may be represented in the form of a two-dimensional table associating a formed-antenna-beam configuration and an antenna port of the satellite with each time interval. The method according to the invention consists in dedicating certain resources of the hop frames to the entry into the network of user terminals, by forming in these dedicated resources, from the satellite, a directional antenna beam. By directional antenna beam, what is meant is that the beam covers only one portion of the area of coverage of the satellite, this allowing the gain of the link between the satellite and the user terminals to be increased. The entry beams are transmission and/or reception beams depending on whether the communications between the satellite and the user terminals occur on the uplink or on the downlink. They are not used for the transmissions of data traffic (useful data) between the user terminals and the satellite, but for the transmission of signalling information allowing the entry/re-entry of user terminals into the network. This information may be, on the downlink, information regarding the position of the satellites (ephemerides for example), and information allowing the user terminals to transmit a connection request (for example information regarding a broadcast channel). FIG.1illustrates a succession of beam-hop frames in a method for entry into a satcom network according to one embodiment of the invention. Hop frames, which are divided into a plurality of time intervals TS1, TS2, . . . , TSM, are formed. The columns ofFIG.1are the various antenna ports Ant1, Ant2, . . . , Ant N of the satellite. The number of time intervals per hop frame and the number of antenna ports are here given merely by way of illustration. In the hop frame, for each time interval, one antenna beam configuration is associated with each port. The invention consists in reserving, inside the hop frames, beams101,102and103for forming a directional beam allowing the entry of terminals into the satcom network, which beams are called entry beams. The rest of the hop frames are unaffected by the method according to the invention. In the example ofFIG.1, an entry beam is reserved in the first time interval of each hop frame on the first antenna port. However, the frequency of the beams dedicated to the entry into the network may be increased so as to accelerate entry into the network, or decreased so as to consume fewer resources.
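As a concrete, hedged illustration of the hop-frame table just described, the sketch below models a frame as M time intervals by N antenna ports and reserves the first interval on the first port for an entry beam, as in the example of FIG.1. The sizing constants are taken from the numeric example given later in the text, not from FIG.1 itself.

```python
M_INTERVALS = 16     # TS1..TS16
N_PORTS = 24         # Ant1..Ant24

def make_hop_frame():
    # Every (time interval, antenna port) cell defaults to a traffic beam config.
    return [["traffic" for _ in range(N_PORTS)] for _ in range(M_INTERVALS)]

frame = make_hop_frame()
frame[0][0] = "entry"    # reserve TS1 on Ant1 for the directional entry beam

reserved = sum(cell == "entry" for row in frame for cell in row)
print(f"{reserved} of {M_INTERVALS * N_PORTS} beam slots reserved for network entry")
```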
The beams dedicated to entry into the network are not necessarily regularly distributed or on the same antenna port: their distribution is free and depends only on the sought-after performance. They may also be ad hoc beams formed when resources of the satellite are available. The distribution of the entry beams in the hop frames therefore results from a compromise between the time of entry into the network for the user terminals and the impact on the capacity of the network. By way of example, with a hop frame of 16 ms divided into 16 time intervals of 1 ms in a satellite comprising 24 antenna ports, reserving one entry beam per frame leads to a very small decrease in the total capacity of the system, of about 0.26%. Advantageously, the entry beams may all use the same carrier frequency (or a limited number of carrier frequencies) and/or the same polarisation, so as to simplify the step in which the user terminal searches for the satellite. The entry beam formed by the satellite is a directional beam directed toward one particular geographical area of the area of coverage of the satellite, so as to intermittently offer a radio link to the terminals of areas not covered by the satellite. The size of this area is dependent on the gain sought for the transmission, on the amount of resources dedicated to the entry into the network, on the relative speed of the satellite and on the performance sought as regards the time of entry into the network. In order to increase antenna gain, the entry beams have different directions of sight. Compared with known systems in which time intervals of the hop frames are used for the entry of terminals into the network, and during which the satellite uses a non-directional antenna in order to cover the entirety of its area of coverage, the method according to the invention uses directional antenna beams that may be formed in parallel with other directional beams covering other portions of the area of coverage, as shown inFIG.1. The implementation of the method according to the invention therefore leads to a much smaller decrease in the capacity of the system than known methods. It furthermore has the advantage of being able to define the entry beams in the same frequency bands as beams dedicated to traffic, thus solving problems with allocation of frequency bands and with additional hardware required in the satellite and the user terminals. In a first embodiment, the entry beams are defined so as to irradiate in turn each of the satcom spots of the area of coverage of the satellite. In this way, each spot of the area of coverage of the satellite is covered periodically. A user terminal located in a spot that is not covered will then necessarily have periods of radio link with the satellite, which are used to carry out a standard procedure for entry into the network.
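The capacity figure quoted above can be checked with one line of arithmetic; the sketch below is just that check, not the patent's own accounting.

```python
def entry_overhead(slots_reserved_per_frame: int,
                   intervals_per_frame: int, antenna_ports: int) -> float:
    # Fraction of beam slots removed from traffic by the entry-beam reservation.
    return slots_reserved_per_frame / (intervals_per_frame * antenna_ports)

print(f"{entry_overhead(1, 16, 24):.2%}")   # ~0.26%, matching the text
```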
In these two embodiments, the traffic beams and the entry beams allow, intermittently, all of the area of coverage of the satellite to be covered, and therefore allow entry into the network of any user located in its area of coverage, even when said user is not covered by the traffic beams. In another embodiment, specific to the case of non-geostationary satellites, the entry beam is an antenna beam of constant elevation, i.e. a beam that, seen from the ground, forms a strip in which a user terminal has a radio link with the satellite when it points its antenna with an elevation corresponding to the chosen elevation. Because of the shape of the radiation pattern of the antenna of the satellite, the notion of constant elevation is to be taken with a margin, and slight variations around the set elevation are possible. FIG.2aillustrates the overall radiation pattern of a satellite antenna in one embodiment of the invention, in the case of a non-geostationary satellite. The representation is given in the frame of reference of the satellite: it represents the area of coverage of the satellite, and gives the equivalent isotropically radiated power level as a function of the direction of the beam and of the elevation in this area of coverage. The darkest areas correspond to the areas of highest power. InFIG.2a, the satellite has an antenna beam201the power of which is concentrated about a constant elevation of about 20° in the frame of reference of the satellite, for an angular aperture in azimuth equal to about 150° oriented northwards. The use of an antenna beam with a constant elevation has a plurality of advantages:
- associated with the movements of all of the satellites of the constellation, it allows almost all of their areas of coverage to be covered systematically and regularly, and therefore offers an opportunity of entry into the network to user terminals not covered by the traffic beams;
- it allows a satcom terminal to determine the position of the satellite by scanning space on the axis of the azimuths only, this removing a constraint on the beam formation of the satcom terminal and/or its mechanical movement, and decreasing the time taken to find the satellite, and therefore the time of entry into the network. Furthermore, the user terminal may use a very directional antenna beam since the elevation of the satellite is known, this improving the link budget;
- the antenna beam of the satellite is directional, this increasing the gain of the radio link between the satellite and the visible user terminals;
- the antenna beam of the satellite may be oriented so as to prevent user terminals emitting in the direction of the geostationary arc.
So as to improve the link budget of the network entry beams, the invention proposes to divide the entry beam into a plurality of beams having different azimuthal directions of sight, and together covering all of the angular aperture of the beam201. FIG.2bschematically shows such an embodiment, in which the entry beam of constant elevation is divided into a plurality of separate sub-beams. In the example, the entry beam is divided into four sub-beams211,212,213and214having the same elevation but different directions of sight, so as to cover all of an angular aperture similar to that ofFIG.2a. This embodiment allows the angular aperture in azimuth of the entry beams to be limited, and therefore the link budget to be improved. In the example ofFIG.2b, the division of the entry beam into four sub-beams allows an increase in link budget of about 6 dB.
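The roughly 6 dB figure follows from concentrating the radiated power into a quarter of the original azimuthal aperture. A worked check using standard antenna arithmetic, not a claim about the actual antenna design:

```python
import math

def subbeam_gain_db(n_subbeams: int) -> float:
    # Narrowing the aperture by a factor n concentrates power into 1/n of the
    # original angular extent, for roughly 10*log10(n) dB of additional gain.
    return 10 * math.log10(n_subbeams)

print(f"{subbeam_gain_db(4):.1f} dB")   # about 6 dB for four sub-beams
```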
The number of sub-beams may be set in light of the desired increase in the link budget and the desired impact on the overall capacity of the system. FIGS.2cand2dshow various embodiments of the allocation of resources in one or more hop frames in some embodiments of the method according to the invention. InFIG.2c, the sub-beams211to214are formed within each frame. To do this, the resources221to224are respectively attributed thereto within each frame. Compared to the embodiment ofFIG.1, the impact of the allocation of entry beams on the overall capacity of the network is then multiplied by 4, but the decrease in capacity remains smaller than in prior-art methods. It will be noted that the arrangement of the resources allocated within the hop frame is unimportant: they may be identically attributed in a given time interval to separate antenna ports, or any other configuration may be used. InFIG.2d, the resources231to234respectively attributed to the formation of the entry beams211to214are allocated in various hop frames. InFIG.2d, the four sub-beams are distributed in two successive frames. Compared to the embodiment ofFIG.2c, the impact on the capacity of the network is decreased, but the time for which the satellite is observable by a user terminal is divided by two. The frame definitions given inFIGS.2cand2dare given merely by way of illustration, and a person skilled in the art will easily be able to modify these definitions depending on his operational requirements, and in particular on the gain expected in the entry beam, on the duration of visibility of the satellite and on the desired impact on the overall capacity of the network. Furthermore, the distribution of the resources attributed to the formation of entry beams may be adapted dynamically, for example in order to form more beams in areas in which the conditions of propagation are unfavourable (around the equator, for example, or when meteorological conditions are unfavourable) so as to improve link budgets. For example, eight entry beams could be defined (four allocations per frame spread over two successive frames) for transmissions in proximity to the equator, and only four (two allocations per frame spread over two successive frames) above 50° of latitude. FIG.3illustrates the implementation of the method for entry of a user terminal into a satcom network according to one embodiment of the invention, in the case of a non-geostationary satellite. The non-geostationary satellite301, for example an LEO satellite moving in an inclined orbit in the direction302, is configured to form an antenna beam303of constant elevation, the footprint of which on the ground has been shown by a strip304that is curved because of the curvature of the Earth. The footprint on the ground304corresponds to the sum of the entry sub-beams formed so as to cover a large azimuth while benefiting from a high antenna gain, as shown inFIG.2b. The strip304has a width l that depends on the altitude of the satellite and on the aperture of the antenna beam. The area304moves at the same time as the satellite301. A user terminal305that is seeking to enter the satcom network and whose antenna is positioned at the correct elevation is therefore in radio visibility of the satellite301for a time that depends on the speed of the satellite, on its altitude, on the configuration of the entry beam, on the angle of elevation chosen and on the number of entry sub-beams formed.
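Before the numeric example that follows, the visibility time implied by this geometry can be approximated as the footprint width divided by the ground-track speed (here approximated by the satellite's orbital speed). The figures below are the ones used in the next paragraph, not independent data.

```python
def visibility_time_s(strip_width_km: float, speed_km_s: float) -> float:
    # Time for the moving strip to sweep past a fixed user terminal.
    return strip_width_km / speed_km_s

print(f"{visibility_time_s(300, 7.4):.1f} s")   # about 40 seconds
```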
For example, for a non-geostationary LEO satellite forming an entry beam of 4° aperture along the north-south axis about an elevation of about 20° for a user terminal, the footprint on the ground304has a width l larger than 300 km. If the satellite is moving in a polar orbit with a speed of 7.4 km/s, a user terminal pointing with an elevation of 20° will be visible to the satellite for about 40 seconds. The user terminal may use this time to detect the satellite by scanning the sky in azimuth only, with a view to subsequently carrying out the procedure for entry into the network (synchronisation and registration). The elevation and aperture of the entry beam are chosen depending on the movement of the satellite so as to increase the time of visibility by a user terminal and to maximise antenna gain.FIGS.4aand4bshow the duration for which a user terminal is visible to a non-geostationary LEO satellite moving at 7.4 km/s with an antenna beam of aperture of 4° along the north-south axis, as a function of the choice made regarding the elevation of the entry beam.FIG.4aassumes a satellite moving in a polar orbit at about 1000 km of altitude, whereasFIG.4bassumes a satellite moving in an inclined orbit at about 1200 km of altitude. In the given case of application, the width l of the beam304is always larger than 300 km when the elevation is chosen between 15° and 25°. Ideally, the antenna beam of constant elevation transmitted by a non-geostationary satellite has an angular aperture of a few degrees along the small axis of its footprint on the ground, typically a −3 dB angular aperture smaller than 10°, for example of the order of 4 to 5°, and covers 360° in azimuth, so as to offer a radio link to the highest possible number of user terminals. However, regulatory considerations forbid user terminals from emitting in the direction of the geostationary arc in certain frequency bands. For this reason, the sub-beams dedicated to the entry into the network are advantageously chosen so as to together have an azimuthal aperture angle slightly smaller than 180° and to be directed toward a pole. This is the case for example inFIG.2b, in which the entry beam corresponding to the four sub-beams211to214has an aperture in azimuth of about 150°. This configuration makes it possible to avoid user terminals emitting in the direction of the geostationary arc in one portion of the globe. By varying the orientation of the equivalent entry beam formed by the various entry sub-beams during the progression of the non-geostationary satellite, the emissions of the user terminals during the procedure for entry into the network are systematically carried out in the direction opposite to the geostationary arc. For example, for a satellite in a polar orbit, the beam of constant elevation may be modified as follows:
- when the satellite is moving from the equator in the direction of the North Pole, the various entry sub-beams form an equivalent entry beam oriented towards the South Pole, i.e. behind the satellite;
- when the satellite is moving from the North Pole in the direction of the equator, the various entry sub-beams form an equivalent entry beam oriented towards the South Pole, i.e. in front of the satellite;
- when the satellite is moving from the equator in the direction of the South Pole, the various entry sub-beams form an equivalent entry beam oriented towards the North Pole, i.e. behind the satellite;
- when the satellite is moving from the South Pole in the direction of the equator, the various entry sub-beams form an equivalent entry beam oriented towards the North Pole, i.e. in front of the satellite.
Irrespective of whether it is in a polar orbit or an inclined orbit, orienting the equivalent entry beam in the direction of the poles, by switching at least four times during each orbital period, allows emissions of the satcom terminals in the direction of the geostationary arc to be avoided. In the vicinity of the poles, when the region of exclusion corresponding to the geostationary arc is not visible to the satcom terminals, the satellite may orient the equivalent directional entry beam both toward the front and toward the rear of the satellite, or modify the orientation of the beam by inclining it so as to achieve a larger visible covered area on the ground. FIG.5is a chart showing the sequence of the entry of a user terminal into a telecommunication network according to one embodiment of the invention, in the case of a non-geostationary satellite with an entry beam of constant elevation and of a user located in a region not covered by the satellite. This chart is one embodiment given merely by way of illustration. The satellite forms antenna beams of constant elevation in resources of the hop frame that are dedicated to entry into the network, the entry beams being oriented in at least two different directions. Advantageously, the entry antenna beams are configured so as to cover all of an azimuth that is large but preferably substantially smaller than 180°, such as for example the sub-beams shown inFIG.2b. The satellite uses these beams to transmit signalling information511, such as for example ephemerides allowing the user terminal to determine its position and the position of the other satellites of the constellation, and information allowing the user terminal to transmit a connection request over the network, such as for example the frequency channel and/or time intervals to be used. For its part, the user terminal is configured to use an antenna with a directional antenna beam oriented with an elevation corresponding to the entry beams to detect502the satellite, and to find the position of the satellite in azimuth alone. Advantageously, when the satellite is configured to orient the entry beams so as to avoid user terminals transmitting in the direction of the geostationary arc, the user terminals may merely search for the satellite over an azimuth range smaller than 180° in the direction opposite to the geostationary arc. Advantageously, in order to accelerate the search for the satellite, the user terminal may use information stored in memory regarding the position of the satellite to decrease the in-azimuth search area. This information may for example be ephemerides allowing it to reconstruct the position of the satellite. In this case, the user terminal is capable of computing its azimuth quite precisely, this allowing it to limit the search to around the expected position of the satellite. However, ephemerides have a very short duration of validity (a few hours). Advantageously, the invention proposes to use RAAN information (RAAN being the acronym for right ascension of the ascending node), giving the angle at which a satellite moving northwards crosses the equator. This information allows the orbit of the satellite to be determined, and the in-azimuth search range to be limited accordingly.
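A hedged sketch of the azimuth-only search described above: the antenna elevation is fixed to that of the entry beam, and stored orbit knowledge (e.g., RAAN-derived) narrows the azimuth window. The function names, angles, and toy detector are all assumptions made for illustration.

```python
def azimuth_scan(detect, predicted_az: int = None,
                 window_deg: int = 30, step_deg: float = 1.0):
    """Scan azimuths at the known entry-beam elevation; return the detection azimuth."""
    if predicted_az is not None:
        # RAAN-derived orbit knowledge: search only around the expected azimuth.
        candidates = [predicted_az + d for d in range(-window_deg, window_deg + 1)]
    else:
        # No stored information: sweep the half-plane away from the geostationary arc.
        candidates = [a * step_deg for a in range(int(180 / step_deg))]
    for az in candidates:
        if detect(az % 360):           # detect() stands in for the radio front end
            return az % 360
    return None

# Toy detector: the satellite happens to sit at 127 deg azimuth.
print(azimuth_scan(lambda az: abs(az - 127) < 1.5, predicted_az=120))
```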
The RAAN information has a much longer duration of validity than the ephemerides, of the order of several years. The search for the position of the satellite is then faster and less expensive in terms of processing operations, this allowing time to be freed up for the procedure for entry into the network itself. Once the satellite has been detected, the user terminal collects signalling data transmitted by the satellite, and in particular ephemerides and information on connection modalities. The ephemerides allow the user terminal to track the position of the satellite during its movements over the period of visibility, and therefore to remain in radio contact with the satellite even when the antenna of the user terminal is very directional. The information on connection modalities allows it to know the times and the frequency channels dedicated to the transmission of connection requests. The user terminal is then able to make a connection request512to the satellite. The satellite transmits this request to a mission centre that records the presence of the user terminal, permits it or does not permit it to join the network, registers it and attributes thereto network parameters, such as for example an IP address. The satellite then sends a response513to the user terminal, with information on the state of its registration in the network and its network parameters. Once these steps have been carried out, the user terminal is registered in the satellite communication network, and the network manager in charge of definition of the beam-hop frames takes it into account during its subsequent assignments. All of the exchanges shown inFIG.5may be carried out on a single resource of the hop frames that is dedicated to entry into the network, or on a plurality of dedicated resources during one or more periods of visibility of the satellite. The operation of the method according to the invention for geostationary satellites differs in that the entry beam is not of constant elevation, and it is not essential to transmit information relative to the position of the satellite, or to carry out step502of searching for the satellite. The method for entry into a telecommunication network according to the invention therefore comprises resources reserved in the beam-hop frames for entries/re-entries into the network, during which at least one satellite of the network is configured to have a directional antenna beam:
- oriented so that, with the traffic beams, the entirety of the area of coverage of the satellite has a radio link with the satellite, or
- formed so that the entry beams are observable with a constant elevation from the Earth, for a network of non-geostationary satellites.
In the entry method according to the invention, the entry beams may be planned in parallel with traffic beams, and in the same frequency bands. For non-geostationary satellites, the method according to the invention divides the entry beam into a plurality of beams of smaller angular aperture in azimuth transmitted on different resources of the hop frames in order to improve the link budget. Advantageously, it is possible to orient the beam so that the user terminals do not emit in the direction of the geostationary arc. The invention also relates to a satellite comprising means for forming antenna beams, and configured to form directional entry beams using dedicated resources of the hop frame, and to a satellite communication network comprising such a satellite.
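The entry sequence of FIG.5 described above can be summarized as a small state machine. The message labels (511, 512, 513) follow the description; the states and transitions are an illustrative reading, not a normative protocol definition.

```python
def step(state: str, event: str) -> str:
    # Transitions mirror the described sequence: detect the satellite, collect
    # signalling 511, send connection request 512, receive response 513.
    transitions = {
        ("SEARCHING", "satellite_detected"): "COLLECTING_SIGNALLING",
        ("COLLECTING_SIGNALLING", "signalling_511_received"): "REQUESTING",
        ("REQUESTING", "connection_request_512_sent"): "REQUESTING",
        ("REQUESTING", "response_513_received"): "REGISTERED",
    }
    return transitions.get((state, event), state)   # ignore out-of-order events

state = "SEARCHING"
for ev in ("satellite_detected", "signalling_511_received",
           "connection_request_512_sent", "response_513_received"):
    state = step(state, ev)
    print(ev, "->", state)
```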
According to one embodiment, the satellite is a non-geostationary satellite configured to orient the entry beams so that they are seen from the Earth at a substantially constant elevation. The invention also relates to a satellite user terminal, configured to search for the presence of a non-geostationary satellite by positioning its antenna at the given elevation of the entry beam, and by carrying out a scan of space in azimuth alone. This user terminal is configured to, once the satellite has been detected, collect connection information and transmit a request for entry/re-entry into the satellite communication network.
11863291
DETAILED DESCRIPTION The following techniques, apparatuses, and systems may be applied to a variety of wireless multiple access systems. Examples of the multiple access systems include a code division multiple access (CDMA) system, a frequency division multiple access (FDMA) system, a time division multiple access (TDMA) system, an orthogonal frequency division multiple access (OFDMA) system, a single carrier frequency division multiple access (SC-FDMA) system, and a multicarrier frequency division multiple access (MC-FDMA) system. CDMA may be embodied through radio technology such as universal terrestrial radio access (UTRA) or CDMA2000. TDMA may be embodied through radio technology such as global system for mobile communications (GSM), general packet radio service (GPRS), or enhanced data rates for GSM evolution (EDGE). OFDMA may be embodied through radio technology such as institute of electrical and electronics engineers (IEEE) 802.11 (Wi-Fi), IEEE 802.16 (WiMAX), IEEE 802.20, or evolved UTRA (E-UTRA). UTRA is a part of a universal mobile telecommunications system (UMTS). 3rd generation partnership project (3GPP) long term evolution (LTE) is a part of evolved UMTS (E-UMTS) using E-UTRA. 3GPP LTE employs OFDMA in DL and SC-FDMA in UL. Evolution of 3GPP LTE includes LTE-A (advanced), LTE-A Pro, and/or 5G NR (new radio). For convenience of description, implementations of the present disclosure are mainly described with regard to a 3GPP based wireless communication system. However, the technical features of the present disclosure are not limited thereto. For example, although the following detailed description is given based on a mobile communication system corresponding to a 3GPP based wireless communication system, aspects of the present disclosure that are not limited to 3GPP based wireless communication systems are applicable to other mobile communication systems. For terms and technologies that are not specifically described among those employed in the present disclosure, reference may be made to the wireless communication standard documents published before the present disclosure. In the present disclosure, “A or B” may mean “only A”, “only B”, or “both A and B”. In other words, “A or B” in the present disclosure may be interpreted as “A and/or B”. For example, “A, B or C” in the present disclosure may mean “only A”, “only B”, “only C”, or “any combination of A, B and C”. In the present disclosure, slash (/) or comma (,) may mean “and/or”. For example, “A/B” may mean “A and/or B”. Accordingly, “A/B” may mean “only A”, “only B”, or “both A and B”. For example, “A, B, C” may mean “A, B or C”. In the present disclosure, “at least one of A and B” may mean “only A”, “only B” or “both A and B”. In addition, the expression “at least one of A or B” or “at least one of A and/or B” in the present disclosure may be interpreted the same as “at least one of A and B”. In addition, in the present disclosure, “at least one of A, B and C” may mean “only A”, “only B”, “only C”, or “any combination of A, B and C”. In addition, “at least one of A, B or C” or “at least one of A, B and/or C” may mean “at least one of A, B and C”. Also, parentheses used in the present disclosure may mean “for example”. In detail, when it is shown as “control information (PDCCH)”, “PDCCH” may be proposed as an example of “control information”. In other words, “control information” in the present disclosure is not limited to “PDCCH”, and “PDCCH” may be proposed as an example of “control information”.
In addition, even when shown as “control information (i.e., PDCCH)”, “PDCCH” may be proposed as an example of “control information”. Technical features that are separately described in one drawing in the present disclosure may be implemented separately or simultaneously. Although not limited thereto, various descriptions, functions, procedures, suggestions, methods and/or operational flowcharts of the present disclosure disclosed herein can be applied to various fields requiring wireless communication and/or connection (e.g., 5G) between devices. Hereinafter, the present disclosure will be described in more detail with reference to drawings. The same reference numerals in the following drawings and/or descriptions may refer to the same and/or corresponding hardware blocks, software blocks, and/or functional blocks unless otherwise indicated. FIG.1shows an example of a communication system to which implementations of the present disclosure are applied. The 5G usage scenarios shown inFIG.1are only exemplary, and the technical features of the present disclosure can be applied to other 5G usage scenarios which are not shown inFIG.1. Three main requirement categories for 5G include (1) a category of enhanced mobile broadband (eMBB), (2) a category of massive machine type communication (mMTC), and (3) a category of ultra-reliable and low latency communications (URLLC). Referring toFIG.1, the communication system1includes wireless devices100ato100f, base stations (BSs)200, and a network300. AlthoughFIG.1illustrates a 5G network as an example of the network of the communication system1, the implementations of the present disclosure are not limited to the 5G system, and can be applied to the future communication system beyond the 5G system. The BSs200and the network300may be implemented as wireless devices and a specific wireless device may operate as a BS/network node with respect to other wireless devices. The wireless devices100ato100frepresent devices performing communication using radio access technology (RAT) (e.g., 5G new RAT (NR) or LTE) and may be referred to as communication/radio/5G devices. The wireless devices100ato100fmay include, without being limited to, a robot100a, vehicles100b-1and100b-2, an extended reality (XR) device100c, a hand-held device100d, a home appliance100e, an IoT device100f, and an artificial intelligence (AI) device/server400. For example, the vehicles may include a vehicle having a wireless communication function, an autonomous driving vehicle, and a vehicle capable of performing communication between vehicles. The vehicles may include an unmanned aerial vehicle (UAV) (e.g., a drone). The XR device may include an AR/VR/Mixed Reality (MR) device and may be implemented in the form of a head-mounted device (HMD), a head-up display (HUD) mounted in a vehicle, a television, a smartphone, a computer, a wearable device, a home appliance device, a digital signage, a vehicle, a robot, etc. The hand-held device may include a smartphone, a smartpad, a wearable device (e.g., a smartwatch or smartglasses), and a computer (e.g., a notebook). The home appliance may include a TV, a refrigerator, and a washing machine. The IoT device may include a sensor and a smartmeter. In the present disclosure, the wireless devices100ato100fmay be called user equipments (UEs).
A UE may include, for example, a cellular phone, a smartphone, a laptop computer, a digital broadcast terminal, a personal digital assistant (PDA), a portable multimedia player (PMP), a navigation system, a slate personal computer (PC), a tablet PC, an ultrabook, a vehicle, a vehicle having an autonomous traveling function, a connected car, a UAV, an AI module, a robot, an AR device, a VR device, an MR device, a hologram device, a public safety device, an MTC device, an IoT device, a medical device, a FinTech device (or a financial device), a security device, a weather/environment device, a device related to a 5G service, or a device related to a fourth industrial revolution field.

The wireless devices 100a to 100f may be connected to the network 300 via the BSs 200. An AI technology may be applied to the wireless devices 100a to 100f, and the wireless devices 100a to 100f may be connected to the AI server 400 via the network 300. The network 300 may be configured using a 3G network, a 4G (e.g., LTE) network, a 5G (e.g., NR) network, or a beyond-5G network. Although the wireless devices 100a to 100f may communicate with each other through the BSs 200/network 300, the wireless devices 100a to 100f may also perform direct communication (e.g., sidelink communication) with each other without passing through the BSs 200/network 300. For example, the vehicles 100b-1 and 100b-2 may perform direct communication (e.g., vehicle-to-vehicle (V2V)/vehicle-to-everything (V2X) communication). The IoT device (e.g., a sensor) may perform direct communication with other IoT devices (e.g., sensors) or other wireless devices 100a to 100f.

Wireless communication/connections 150a, 150b and 150c may be established between the wireless devices 100a to 100f, and/or between the wireless devices 100a to 100f and a BS 200, and/or between BSs 200. Herein, the wireless communication/connections may be established through various RATs (e.g., 5G NR) such as uplink/downlink communication 150a, sidelink communication (or device-to-device (D2D) communication) 150b, and inter-base station communication 150c (e.g., relay, integrated access and backhaul (IAB)). The wireless devices 100a to 100f and the BSs 200, or the wireless devices 100a to 100f themselves, may transmit/receive radio signals to/from each other through the wireless communication/connections 150a, 150b and 150c. For example, the wireless communication/connections 150a, 150b and 150c may carry signals through various physical channels. To this end, at least a part of the various configuration information setting processes, the various signal processing processes (e.g., channel encoding/decoding, modulation/demodulation, and resource mapping/de-mapping), and the resource allocating processes for transmitting/receiving radio signals may be performed based on the various proposals of the present disclosure.

AI refers to the field of studying artificial intelligence or the methodology that can create it, and machine learning refers to the field of defining various problems addressed in the field of AI and the methodology to solve them. Machine learning is also defined as an algorithm that improves the performance of a task through steady experience with the task.

A robot means a machine that automatically processes or operates a given task by its own ability. In particular, a robot with the ability to recognize its environment and make self-determined decisions to perform actions can be called an intelligent robot. Robots can be classified as industrial, medical, home, military, etc., depending on the purpose or area of use.
A robot can perform a variety of physical operations, such as moving its joints with actuators or motors. A movable robot also includes wheels, brakes, propellers, etc., in its drive unit, allowing it to travel on the ground or fly in the air.

Autonomous driving means a technology in which a vehicle drives on its own, and an autonomous vehicle means a vehicle that drives without user control or with minimal user control. For example, autonomous driving may include keeping the lane while driving, automatically adjusting speed such as adaptive cruise control, automatic driving along a set route, and automatically setting a route when a destination is set. Vehicles include vehicles equipped with internal combustion engines, hybrid vehicles equipped with both internal combustion engines and electric motors, and electric vehicles equipped with electric motors, and may include trains, motorcycles, etc., as well as cars. Autonomous vehicles can be seen as robots with autonomous driving functions.

Extended reality refers collectively to VR, AR, and MR. VR technology provides objects and backgrounds of the real world only through computer graphics (CG) images. AR technology provides a virtual CG image on top of a real object image. MR technology is a CG technology that merges virtual objects into the real world. MR technology is similar to AR technology in that both show real and virtual objects together. However, there is a difference in that in AR technology virtual objects are used as complements to real objects, while in MR technology virtual objects and real objects are treated as equals.

NR supports multiple numerologies (and/or multiple subcarrier spacings (SCS)) to support various 5G services. For example, if the SCS is 15 kHz, a wide area in traditional cellular bands can be supported; if the SCS is 30 kHz/60 kHz, dense-urban deployment, lower latency, and wider carrier bandwidth can be supported; and if the SCS is 60 kHz or higher, a bandwidth greater than 24.25 GHz can be supported to overcome phase noise.

The NR frequency band may be defined as two types of frequency ranges, i.e., FR1 and FR2. The numerical values of the frequency ranges may be changed. For example, the frequency ranges of the two types (FR1 and FR2) may be as shown in Table 1 below. For ease of explanation, among the frequency ranges used in the NR system, FR1 may mean the “sub 6 GHz range”, and FR2 may mean the “above 6 GHz range” and may be referred to as millimeter wave (mmW).

TABLE 1
Frequency range designation | Corresponding frequency range | Subcarrier spacing
FR1 | 450 MHz - 6000 MHz | 15, 30, 60 kHz
FR2 | 24250 MHz - 52600 MHz | 60, 120, 240 kHz

As mentioned above, the numerical values of the frequency ranges of the NR system may be changed. For example, FR1 may include a frequency band of 410 MHz to 7125 MHz as shown in Table 2 below. That is, FR1 may include a frequency band of 6 GHz (or 5850, 5900, 5925 MHz, etc.) or more. For example, a frequency band of 6 GHz (or 5850, 5900, 5925 MHz, etc.) or more included in FR1 may include an unlicensed band. Unlicensed bands may be used for a variety of purposes, for example for communication for vehicles (e.g., autonomous driving).

TABLE 2
Frequency range designation | Corresponding frequency range | Subcarrier spacing
FR1 | 410 MHz - 7125 MHz | 15, 30, 60 kHz
FR2 | 24250 MHz - 52600 MHz | 60, 120, 240 kHz
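As a concrete illustration of the FR1/FR2 boundaries in Table 2, the short Python sketch below classifies a carrier frequency into its frequency range. The function name classify_fr and the hard-coded limits are illustrative only; they mirror Table 2 and are not part of any 3GPP-defined API.

    def classify_fr(freq_mhz: float) -> str:
        """Return the NR frequency range designation per the Table 2 ranges."""
        if 410.0 <= freq_mhz <= 7125.0:
            return "FR1"    # sub-6/7 GHz range; SCS of 15, 30 or 60 kHz
        if 24250.0 <= freq_mhz <= 52600.0:
            return "FR2"    # mmWave range; SCS of 60, 120 or 240 kHz
        raise ValueError("frequency outside the FR1/FR2 ranges of Table 2")

    print(classify_fr(3500.0))    # FR1 (a typical mid-band NR carrier)
    print(classify_fr(28000.0))   # FR2 (a 28 GHz mmWave carrier)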
Here, the radio communication technologies implemented in the wireless devices in the present disclosure may include narrowband internet-of-things (NB-IoT) technology for low-power communication as well as LTE, NR and 6G. For example, NB-IoT technology may be an example of low power wide area network (LPWAN) technology, may be implemented in specifications such as LTE Cat NB1 and/or LTE Cat NB2, and is not limited to the above-mentioned names. Additionally and/or alternatively, the radio communication technologies implemented in the wireless devices in the present disclosure may communicate based on LTE-M technology. For example, LTE-M technology may be an example of LPWAN technology and may be called by various names such as enhanced machine type communication (eMTC). For example, LTE-M technology may be implemented in at least one of various specifications, such as 1) LTE Cat 0, 2) LTE Cat M1, 3) LTE Cat M2, 4) LTE non-bandwidth limited (non-BL), 5) LTE-MTC, 6) LTE Machine Type Communication, and/or 7) LTE M, and is not limited to the above-mentioned names. Additionally and/or alternatively, the radio communication technologies implemented in the wireless devices in the present disclosure may include at least one of ZigBee, Bluetooth, and/or LPWAN, which take into account low-power communication, and are not limited to the above-mentioned names. For example, ZigBee technology may create personal area networks (PANs) associated with small/low-power digital communication based on various specifications such as IEEE 802.15.4 and may be called by various names.

FIG. 2 shows an example of wireless devices to which implementations of the present disclosure are applied.

Referring to FIG. 2, a first wireless device 100 and a second wireless device 200 may transmit/receive radio signals to/from an external device through a variety of RATs (e.g., LTE and NR). In FIG. 2, {the first wireless device 100 and the second wireless device 200} may correspond to at least one of {the wireless devices 100a to 100f and the BS 200}, {the wireless devices 100a to 100f and the wireless devices 100a to 100f} and/or {the BS 200 and the BS 200} of FIG. 1.

The first wireless device 100 may include at least one transceiver, such as a transceiver 106, at least one processing chip, such as a processing chip 101, and/or one or more antennas 108. The processing chip 101 may include at least one processor, such as a processor 102, and at least one memory, such as a memory 104. It is exemplarily shown in FIG. 2 that the memory 104 is included in the processing chip 101. Additionally and/or alternatively, the memory 104 may be placed outside of the processing chip 101.

The processor 102 may control the memory 104 and/or the transceiver 106 and may be configured to implement the descriptions, functions, procedures, suggestions, methods and/or operational flowcharts described in the present disclosure. For example, the processor 102 may process information within the memory 104 to generate first information/signals and then transmit radio signals including the first information/signals through the transceiver 106. The processor 102 may receive radio signals including second information/signals through the transceiver 106 and then store information obtained by processing the second information/signals in the memory 104.

The memory 104 may be operably connectable to the processor 102. The memory 104 may store various types of information and/or instructions. The memory 104 may store a software code 105 which implements instructions that, when executed by the processor 102, perform the descriptions, functions, procedures, suggestions, methods and/or operational flowcharts disclosed in the present disclosure.
For example, the software code 105 may implement instructions that, when executed by the processor 102, perform the descriptions, functions, procedures, suggestions, methods and/or operational flowcharts disclosed in the present disclosure. For example, the software code 105 may control the processor 102 to perform one or more protocols. For example, the software code 105 may control the processor 102 to perform one or more layers of the radio interface protocol.

Herein, the processor 102 and the memory 104 may be a part of a communication modem/circuit/chip designed to implement a RAT (e.g., LTE or NR). The transceiver 106 may be connected to the processor 102 and transmit and/or receive radio signals through the one or more antennas 108. The transceiver 106 may include a transmitter and/or a receiver. The transceiver 106 may be used interchangeably with radio frequency (RF) unit(s). In the present disclosure, the first wireless device 100 may represent a communication modem/circuit/chip.

The second wireless device 200 may include at least one transceiver, such as a transceiver 206, at least one processing chip, such as a processing chip 201, and/or one or more antennas 208. The processing chip 201 may include at least one processor, such as a processor 202, and at least one memory, such as a memory 204. It is exemplarily shown in FIG. 2 that the memory 204 is included in the processing chip 201. Additionally and/or alternatively, the memory 204 may be placed outside of the processing chip 201.

The processor 202 may control the memory 204 and/or the transceiver 206 and may be configured to implement the descriptions, functions, procedures, suggestions, methods and/or operational flowcharts described in the present disclosure. For example, the processor 202 may process information within the memory 204 to generate third information/signals and then transmit radio signals including the third information/signals through the transceiver 206. The processor 202 may receive radio signals including fourth information/signals through the transceiver 206 and then store information obtained by processing the fourth information/signals in the memory 204.

The memory 204 may be operably connectable to the processor 202. The memory 204 may store various types of information and/or instructions. The memory 204 may store a software code 205 which implements instructions that, when executed by the processor 202, perform the descriptions, functions, procedures, suggestions, methods and/or operational flowcharts disclosed in the present disclosure. For example, the software code 205 may implement instructions that, when executed by the processor 202, perform the descriptions, functions, procedures, suggestions, methods and/or operational flowcharts disclosed in the present disclosure. For example, the software code 205 may control the processor 202 to perform one or more protocols. For example, the software code 205 may control the processor 202 to perform one or more layers of the radio interface protocol.

Herein, the processor 202 and the memory 204 may be a part of a communication modem/circuit/chip designed to implement a RAT (e.g., LTE or NR). The transceiver 206 may be connected to the processor 202 and transmit and/or receive radio signals through the one or more antennas 208. The transceiver 206 may include a transmitter and/or a receiver. The transceiver 206 may be used interchangeably with an RF unit. In the present disclosure, the second wireless device 200 may represent a communication modem/circuit/chip.
Hereinafter, hardware elements of the wireless devices 100 and 200 will be described more specifically. One or more protocol layers may be implemented by, without being limited to, one or more processors 102 and 202. For example, the one or more processors 102 and 202 may implement one or more layers (e.g., functional layers such as the physical (PHY) layer, media access control (MAC) layer, radio link control (RLC) layer, packet data convergence protocol (PDCP) layer, radio resource control (RRC) layer, and service data adaptation protocol (SDAP) layer). The one or more processors 102 and 202 may generate one or more protocol data units (PDUs) and/or one or more service data units (SDUs) according to the descriptions, functions, procedures, suggestions, methods and/or operational flowcharts disclosed in the present disclosure. The one or more processors 102 and 202 may generate messages, control information, data, or information according to the descriptions, functions, procedures, suggestions, methods and/or operational flowcharts disclosed in the present disclosure. The one or more processors 102 and 202 may generate signals (e.g., baseband signals) including PDUs, SDUs, messages, control information, data, or information according to the descriptions, functions, procedures, suggestions, methods and/or operational flowcharts disclosed in the present disclosure and provide the generated signals to the one or more transceivers 106 and 206. The one or more processors 102 and 202 may receive signals (e.g., baseband signals) from the one or more transceivers 106 and 206 and acquire the PDUs, SDUs, messages, control information, data, or information according to the descriptions, functions, procedures, suggestions, methods and/or operational flowcharts disclosed in the present disclosure.

The one or more processors 102 and 202 may be referred to as controllers, microcontrollers, microprocessors, or microcomputers. The one or more processors 102 and 202 may be implemented by hardware, firmware, software, or a combination thereof. As an example, one or more application specific integrated circuits (ASICs), one or more digital signal processors (DSPs), one or more digital signal processing devices (DSPDs), one or more programmable logic devices (PLDs), or one or more field programmable gate arrays (FPGAs) may be included in the one or more processors 102 and 202. The descriptions, functions, procedures, suggestions, methods and/or operational flowcharts disclosed in the present disclosure may be implemented using firmware or software, and the firmware or software may be configured to include the corresponding modules, procedures, or functions. Firmware or software configured to perform the descriptions, functions, procedures, suggestions, methods and/or operational flowcharts disclosed in the present disclosure may be included in the one or more processors 102 and 202 or stored in the one or more memories 104 and 204 so as to be driven by the one or more processors 102 and 202. The descriptions, functions, procedures, suggestions, methods and/or operational flowcharts disclosed in the present disclosure may be implemented using firmware or software in the form of code, commands, and/or a set of commands.

The one or more memories 104 and 204 may be connected to the one or more processors 102 and 202 and may store various types of data, signals, messages, information, programs, code, instructions, and/or commands.
The one or more memories 104 and 204 may be configured by read-only memories (ROMs), random access memories (RAMs), electrically erasable programmable read-only memories (EEPROMs), flash memories, hard drives, registers, cache memories, computer-readable storage media, and/or combinations thereof. The one or more memories 104 and 204 may be located at the interior and/or exterior of the one or more processors 102 and 202. The one or more memories 104 and 204 may be connected to the one or more processors 102 and 202 through various technologies such as wired or wireless connection.

The one or more transceivers 106 and 206 may transmit user data, control information, and/or radio signals/channels, mentioned in the descriptions, functions, procedures, suggestions, methods and/or operational flowcharts disclosed in the present disclosure, to one or more other devices. The one or more transceivers 106 and 206 may receive user data, control information, and/or radio signals/channels, mentioned in the descriptions, functions, procedures, suggestions, methods and/or operational flowcharts disclosed in the present disclosure, from one or more other devices. For example, the one or more transceivers 106 and 206 may be connected to the one or more processors 102 and 202 and transmit and receive radio signals. For example, the one or more processors 102 and 202 may perform control so that the one or more transceivers 106 and 206 transmit user data, control information, or radio signals to one or more other devices. The one or more processors 102 and 202 may perform control so that the one or more transceivers 106 and 206 receive user data, control information, or radio signals from one or more other devices. The one or more transceivers 106 and 206 may be connected to the one or more antennas 108 and 208, and the one or more transceivers 106 and 206 may be configured to transmit and receive user data, control information, and/or radio signals/channels, mentioned in the descriptions, functions, procedures, suggestions, methods and/or operational flowcharts disclosed in the present disclosure, through the one or more antennas 108 and 208. In the present disclosure, the one or more antennas 108 and 208 may be a plurality of physical antennas or a plurality of logical antennas (e.g., antenna ports).

The one or more transceivers 106 and 206 may convert received user data, control information, radio signals/channels, etc., from RF band signals into baseband signals in order to process the received user data, control information, radio signals/channels, etc., using the one or more processors 102 and 202. The one or more transceivers 106 and 206 may convert the user data, control information, radio signals/channels, etc., processed using the one or more processors 102 and 202 from baseband signals into RF band signals. To this end, the one or more transceivers 106 and 206 may include (analog) oscillators and/or filters. For example, the one or more transceivers 106 and 206 can up-convert OFDM baseband signals to OFDM signals by their (analog) oscillators and/or filters under the control of the one or more processors 102 and 202 and transmit the up-converted OFDM signals at the carrier frequency. The one or more transceivers 106 and 206 may receive OFDM signals at a carrier frequency and down-convert the OFDM signals into OFDM baseband signals by their (analog) oscillators and/or filters under the control of the one or more processors 102 and 202.
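The up- and down-conversion just described happens in analog hardware, but the underlying signal operation can be sketched in a few lines of Python/NumPy. The carrier frequency below is deliberately set far lower than a real NR carrier so that the whole chain fits in a sampled simulation; all names and values are illustrative.

    import numpy as np

    rng = np.random.default_rng(0)
    n_fft = 64        # toy FFT size
    fs = 1.0e6        # baseband sample rate [Hz]
    fc = 100.0e3      # toy carrier; real NR carriers (GHz) are reached in analog

    # One OFDM symbol: random QPSK subcarriers -> time-domain baseband via IFFT
    qpsk = (rng.choice([-1, 1], n_fft) + 1j * rng.choice([-1, 1], n_fft)) / np.sqrt(2)
    baseband = np.fft.ifft(qpsk)

    # Up-conversion: mix the complex baseband onto the carrier, keep the real part
    t = np.arange(n_fft) / fs
    passband = np.real(baseband * np.exp(2j * np.pi * fc * t))

    # Down-conversion: mix back down; a real receiver would then low-pass filter
    # to remove the image term left at twice the carrier frequency.
    recovered = 2 * passband * np.exp(-2j * np.pi * fc * t)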
In the implementations of the present disclosure, a UE may operate as a transmitting device in uplink (UL) and as a receiving device in downlink (DL). In the implementations of the present disclosure, a BS may operate as a receiving device in UL and as a transmitting device in DL. Hereinafter, for convenience of description, it is mainly assumed that the first wireless device 100 acts as the UE and the second wireless device 200 acts as the BS. For example, the processor(s) 102 connected to, mounted on or launched in the first wireless device 100 may be configured to perform the UE behavior according to an implementation of the present disclosure or control the transceiver(s) 106 to perform the UE behavior according to an implementation of the present disclosure. The processor(s) 202 connected to, mounted on or launched in the second wireless device 200 may be configured to perform the BS behavior according to an implementation of the present disclosure or control the transceiver(s) 206 to perform the BS behavior according to an implementation of the present disclosure. In the present disclosure, a BS is also referred to as a node B (NB), an eNode B (eNB), or a gNB.

FIG. 3 shows an example of a wireless device to which implementations of the present disclosure are applied. The wireless device may be implemented in various forms according to a use case/service (refer to FIG. 1).

Referring to FIG. 3, wireless devices 100 and 200 may correspond to the wireless devices 100 and 200 of FIG. 2 and may be configured by various elements, components, units/portions, and/or modules. For example, each of the wireless devices 100 and 200 may include a communication unit 110, a control unit 120, a memory unit 130, and additional components 140. The communication unit 110 may include a communication circuit 112 and transceiver(s) 114. For example, the communication circuit 112 may include the one or more processors 102 and 202 of FIG. 2 and/or the one or more memories 104 and 204 of FIG. 2. For example, the transceiver(s) 114 may include the one or more transceivers 106 and 206 of FIG. 2 and/or the one or more antennas 108 and 208 of FIG. 2.

The control unit 120 is electrically connected to the communication unit 110, the memory unit 130, and the additional components 140 and controls the overall operation of each of the wireless devices 100 and 200. For example, the control unit 120 may control the electric/mechanical operation of each of the wireless devices 100 and 200 based on programs/code/commands/information stored in the memory unit 130. The control unit 120 may transmit the information stored in the memory unit 130 to the exterior (e.g., other communication devices) via the communication unit 110 through a wireless/wired interface, or store, in the memory unit 130, information received through the wireless/wired interface from the exterior (e.g., other communication devices) via the communication unit 110.

The additional components 140 may be variously configured according to the types of the wireless devices 100 and 200. For example, the additional components 140 may include at least one of a power unit/battery, an input/output (I/O) unit (e.g., audio I/O port, video I/O port), a driving unit, and a computing unit.
The wireless devices 100 and 200 may be implemented in the form of, without being limited to, the robot (100a of FIG. 1), the vehicles (100b-1 and 100b-2 of FIG. 1), the XR device (100c of FIG. 1), the hand-held device (100d of FIG. 1), the home appliance (100e of FIG. 1), the IoT device (100f of FIG. 1), a digital broadcast terminal, a hologram device, a public safety device, an MTC device, a medical device, a FinTech device (or a finance device), a security device, a climate/environment device, the AI server/device (400 of FIG. 1), the BSs (200 of FIG. 1), a network node, etc. The wireless devices 100 and 200 may be used in a mobile or fixed place according to a use case/service.

In FIG. 3, the entirety of the various elements, components, units/portions, and/or modules in the wireless devices 100 and 200 may be connected to each other through a wired interface, or at least a part thereof may be wirelessly connected through the communication unit 110. For example, in each of the wireless devices 100 and 200, the control unit 120 and the communication unit 110 may be connected by wire, and the control unit 120 and the other units (e.g., 130 and 140) may be wirelessly connected through the communication unit 110. Each element, component, unit/portion, and/or module within the wireless devices 100 and 200 may further include one or more elements. For example, the control unit 120 may be configured by a set of one or more processors. As an example, the control unit 120 may be configured by a set of a communication control processor, an application processor (AP), an electronic control unit (ECU), a graphics processing unit, and a memory control processor. As another example, the memory unit 130 may be configured by a RAM, a DRAM, a ROM, a flash memory, a volatile memory, a non-volatile memory, and/or a combination thereof.

FIG. 4 shows an example of a UE to which implementations of the present disclosure are applied.

Referring to FIG. 4, a UE 100 may correspond to the first wireless device 100 of FIG. 2 and/or the wireless device 100 or 200 of FIG. 3. The UE 100 includes a processor 102, a memory 104, a transceiver 106, one or more antennas 108, a power management module 110, a battery 112, a display 114, a keypad 116, a subscriber identification module (SIM) card 118, a speaker 120, and a microphone 122.

The processor 102 may be configured to implement the descriptions, functions, procedures, suggestions, methods and/or operational flowcharts disclosed in the present disclosure. The processor 102 may be configured to control one or more other components of the UE 100 to implement the descriptions, functions, procedures, suggestions, methods and/or operational flowcharts disclosed in the present disclosure. Layers of the radio interface protocol may be implemented in the processor 102. The processor 102 may include an ASIC, other chipset, logic circuit and/or data processing device. The processor 102 may be an application processor. The processor 102 may include at least one of a digital signal processor (DSP), a central processing unit (CPU), a graphics processing unit (GPU), and a modem (modulator and demodulator). An example of the processor 102 may be found in the SNAPDRAGON™ series of processors made by Qualcomm®, the EXYNOS™ series of processors made by Samsung®, a series of processors made by Apple®, the HELIO™ series of processors made by MediaTek®, the ATOM™ series of processors made by Intel®, or a corresponding next generation processor.

The memory 104 is operatively coupled with the processor 102 and stores a variety of information to operate the processor 102.
The memory 104 may include ROM, RAM, flash memory, a memory card, a storage medium and/or other storage devices. When the embodiments are implemented in software, the techniques described herein can be implemented with modules (e.g., procedures, functions, etc.) that perform the descriptions, functions, procedures, suggestions, methods and/or operational flowcharts disclosed in the present disclosure. The modules can be stored in the memory 104 and executed by the processor 102. The memory 104 can be implemented within the processor 102 or external to the processor 102, in which case it can be communicatively coupled to the processor 102 via various means as is known in the art.

The transceiver 106 is operatively coupled with the processor 102, and transmits and/or receives radio signals. The transceiver 106 includes a transmitter and a receiver. The transceiver 106 may include baseband circuitry to process radio frequency signals. The transceiver 106 controls the one or more antennas 108 to transmit and/or receive radio signals.

The power management module 110 manages power for the processor 102 and/or the transceiver 106. The battery 112 supplies power to the power management module 110. The display 114 outputs results processed by the processor 102. The keypad 116 receives inputs to be used by the processor 102. The keypad 116 may be shown on the display 114. The SIM card 118 is an integrated circuit that is intended to securely store the international mobile subscriber identity (IMSI) number and its related key, which are used to identify and authenticate subscribers on mobile telephony devices (such as mobile phones and computers). It is also possible to store contact information on many SIM cards. The speaker 120 outputs sound-related results processed by the processor 102. The microphone 122 receives sound-related inputs to be used by the processor 102.

FIG. 5 is an example of a wireless communication system.

As can be seen with reference to FIG. 5, a wireless communication system includes at least one base station (BS). The BSs are divided into a gNodeB (or gNB) 20a and an eNodeB (or eNB) 20b. The gNB 20a supports 5G mobile communication. The eNB 20b supports 4G mobile communication, that is, long term evolution (LTE). Each of the base stations 20a and 20b provides a communication service for a specific geographic area (generally referred to as a cell) (20-1, 20-2, and 20-3). A cell may again be divided into a plurality of regions (referred to as sectors).

The UE generally belongs to one cell, and the cell to which the UE belongs is referred to as a serving cell. A base station that provides the communication service to the serving cell is referred to as a serving BS. Since the wireless communication system is a cellular system, another cell that neighbors the serving cell is present. Another cell which neighbors the serving cell is referred to as a neighbor cell. A base station that provides the communication service to the neighbor cell is referred to as a neighbor BS. The serving cell and the neighbor cell are determined relative to the UE.

Hereinafter, downlink means communication from the base station 20 to the UE 10, and uplink means communication from the UE 10 to the base station 20. In downlink, a transmitter may be a part of the base station 20 and a receiver may be a part of the UE 10. In uplink, the transmitter may be a part of the UE 10 and the receiver may be a part of the base station 20.
Meanwhile, wireless communication systems may be generally divided into a frequency division duplex (FDD) type and a time division duplex (TDD) type. According to the FDD type, uplink transmission and downlink transmission are achieved while occupying different frequency bands. According to the TDD type, uplink transmission and downlink transmission are achieved at different times while occupying the same frequency band. A channel response of the TDD type is substantially reciprocal. This means that a downlink channel response and an uplink channel response are approximately the same in a given frequency region. Accordingly, in a TDD based wireless communication system, the downlink channel response may be acquired from the uplink channel response. In the TDD type, since the entire frequency band is time-divided between uplink transmission and downlink transmission, downlink transmission by the base station and uplink transmission by the terminal may not be performed simultaneously. In a TDD system in which uplink transmission and downlink transmission are divided on a subframe basis, uplink transmission and downlink transmission are performed in different subframes.

FIG. 6 illustrates the structure of a radio frame used in NR.

In NR, uplink and downlink transmissions are composed of frames. A radio frame has a length of 10 ms and may be defined as two 5-ms half-frames (HFs). Each half-frame may be defined as five 1-ms subframes (SFs). A subframe may be divided into one or more slots, and the number of slots in a subframe depends on the subcarrier spacing (SCS). Each slot may include 12 or 14 OFDM(A) symbols according to the cyclic prefix (CP). In some implementations, if a normal CP is used, each slot contains 14 symbols; if an extended CP is used, each slot contains 12 symbols. A symbol may include, for example, an OFDM symbol (or a CP-OFDM symbol) or an SC-FDMA symbol (or a DFT-s-OFDM symbol).

FIG. 7 shows an example of a subframe type in NR.

A transmission time interval (TTI) shown in FIG. 7 may be called a subframe or slot for NR (or new RAT). The subframe (or slot) in FIG. 7 may be used in a TDD system of NR (or new RAT) to minimize data transmission delay. As shown in FIG. 7, a subframe (or slot) includes 14 symbols, as does the current subframe. A front symbol of the subframe (or slot) can be used for a downlink control channel, and a rear symbol of the subframe (or slot) can be used for an uplink control channel. The other symbols can be used for downlink data transmission or uplink data transmission. According to such a structure of a subframe (or slot), downlink transmission and uplink transmission may be performed sequentially in one subframe (or slot). Therefore, downlink data may be received in the subframe (or slot), and an uplink acknowledgement response (ACK/NACK) may be transmitted in the same subframe (or slot). A subframe (or slot) of this structure may be called a self-contained subframe.

Specifically, the first N symbols in a slot may be used to transmit a DL control channel (hereinafter, DL control region), and the last M symbols in a slot may be used to transmit a UL control channel (hereinafter, UL control region). N and M are each an integer greater than or equal to 0. A resource region (hereinafter, referred to as a data region) between the DL control region and the UL control region can be used for DL data transmission or for UL data transmission. For example, a PDCCH may be transmitted in the DL control region and a PDSCH may be transmitted in the DL data region. A PUCCH may be transmitted in the UL control region, and a PUSCH may be transmitted in the UL data region.

If this structure of a subframe (or slot) is used, the time required to retransmit data for which a reception error occurred may be reduced, and thus the final data transmission waiting time may be minimized. In such a structure of the self-contained subframe (slot), a time gap may be required for the transition from a transmission mode to a reception mode or vice versa. To this end, when downlink is transitioned to uplink in the subframe structure, some OFDM symbols may be set as a guard period (GP).
<Support of Various Numerologies>

In a next-generation system, a plurality of numerologies may be provided to a terminal according to the development of wireless communication technology. For example, when the SCS is 15 kHz, a wide area in traditional cellular bands is supported; when the SCS is 30 kHz/60 kHz, dense-urban deployment, lower latency and wider carrier bandwidth are supported; and when the SCS is 60 kHz or higher, a bandwidth greater than 24.25 GHz is supported to overcome phase noise.

A numerology may be defined by a cyclic prefix (CP) length and a subcarrier spacing (SCS). One cell may provide a plurality of numerologies to the terminal. When the index of a numerology is expressed as μ, the corresponding subcarrier spacing and CP length may be as shown in the table below.

TABLE 3
μ | Δf = 2^μ · 15 [kHz] | CP
0 | 15 | normal
1 | 30 | normal
2 | 60 | normal, extended
3 | 120 | normal
4 | 240 | normal

In the case of a normal CP, when the index of the numerology is expressed as μ, the number of OFDM symbols per slot (Nsymb^slot), the number of slots per frame (Nslot^frame,μ), and the number of slots per subframe (Nslot^subframe,μ) are shown in the table below.

TABLE 4
μ | Nsymb^slot | Nslot^frame,μ | Nslot^subframe,μ
0 | 14 | 10 | 1
1 | 14 | 20 | 2
2 | 14 | 40 | 4
3 | 14 | 80 | 8
4 | 14 | 160 | 16
5 | 14 | 320 | 32

In the case of an extended CP, when the index of the numerology is expressed as μ, the number of OFDM symbols per slot (Nsymb^slot), the number of slots per frame (Nslot^frame,μ), and the number of slots per subframe (Nslot^subframe,μ) are shown in the table below.

TABLE 5
μ | Nsymb^slot | Nslot^frame,μ | Nslot^subframe,μ
2 | 12 | 40 | 4
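The arithmetic behind Tables 3 to 5 is simple enough to state in code: the subcarrier spacing is 2^μ · 15 kHz, a slot carries 14 symbols (12 for an extended CP), and a slot lasts 1/2^μ ms. The sketch below is illustrative only; the function and key names are not drawn from any specification.

    def nr_numerology(mu: int, extended_cp: bool = False) -> dict:
        """Reproduce one row of Tables 3-5 for numerology index mu."""
        return {
            "scs_khz": 15 * 2 ** mu,          # Table 3: delta-f = 2^mu * 15 kHz
            "symbols_per_slot": 12 if extended_cp else 14,
            "slots_per_subframe": 2 ** mu,    # a slot lasts 1 ms / 2^mu
            "slots_per_frame": 10 * 2 ** mu,  # 10 subframes per 10 ms frame
        }

    for mu in range(6):
        print(mu, nr_numerology(mu))          # matches the rows of Table 4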
FIG. 8 shows an example of performing measurement in the E-UTRAN and NR (EN) DC case.

Referring to FIG. 8, the UE 100 is connected in EN-DC with an E-UTRAN (that is, LTE/LTE-A) cell. Here, the Pcell in EN-DC may be an E-UTRAN (that is, LTE/LTE-A) cell, and the PSCell in EN-DC may be an NR cell. The UE 100 may receive a measurement configuration (or “measconfig”) information element (IE) from the E-UTRAN (that is, LTE/LTE-A) cell. The measurement configuration (or “measconfig”) IE received from the E-UTRAN (that is, LTE/LTE-A) cell may further include the fields shown in Table 6 below.

TABLE 6 — MeasConfig field descriptions
fr1-Gap: This field exists when a UE is configured with EN-DC. This field indicates whether a gap is applied to perform measurement on the FR1 band.
mgta: It indicates whether to apply a timing advance (TA) of 0.5 ms for a measurement gap configuration provided by the E-UTRAN.

The measurement configuration (or “measconfig”) IE may further include a measGapConfig field for setting a measurement gap (MG), as shown in Table 7. A gapOffset field within the measGapConfig field may further include gp4, gp5 and gp11 for EN-DC, in addition to the example shown in Table 8.

Meanwhile, the UE 100 may receive a measurement configuration (“measconfig”) IE of an NR cell, which is a PSCell, directly from the NR cell or through the E-UTRAN cell which is the Pcell. The measurement configuration (“measconfig”) IE of the NR cell may include the fields shown in the following table.

TABLE 7 — MeasConfig field descriptions
measGapConfig: It indicates configuration or cancellation of a measurement gap.
s-MeasureConfig: It indicates a threshold value for measurement of NR SpCell RSRP when a UE needs to perform measurement on a non-serving cell.

The above measGapConfig may further include the fields shown in the following table.

TABLE 8 — MeasGapConfig field descriptions
gapFR2: It indicates a measurement gap configuration applicable for the FR2 frequency range.
gapOffset: It indicates the gap offset of a gap pattern within an MGRP.
mgl: It indicates the measurement gap length in ms. There may be 3 ms, 4 ms, 6 ms, etc.
mgrp: It indicates the measurement gap repetition period in ms.
mgta: It indicates whether to apply a timing advance (TA) of 0.5 ms for a measurement gap configuration.

Meanwhile, the UE 100 receives a radio resource configuration information element (IE) from the E-UTRAN (that is, LTE/LTE-A) cell which is the Pcell. In addition, the UE may receive a radio resource configuration IE of an NR cell, which is a PSCell, from the NR cell or through the E-UTRAN cell which is the Pcell. The radio resource configuration IE includes subframe pattern information. The UE 100 performs measurement and reports a measurement result. Specifically, the UE 100 interrupts data transmission and reception with the E-UTRAN (that is, LTE/LTE-A) cell during the measurement gap, retunes its own RF chain, and performs measurement based on receipt of an SS block from an NR cell.

FIG. 9 shows an example of performing measurement in the NR carrier aggregation case.

Referring to FIG. 9, the UE 100 is configured for carrier aggregation with a first cell (e.g., a Pcell) and a second cell (e.g., an Scell). Here, the Pcell may be an NR based cell, and the Scell may be an NR based cell. The UE 100 may receive a measurement configuration (or “measconfig”) information element (IE). The measurement configuration (or “measconfig”) IE may include the fields shown in the above tables. The UE 100 receives a radio resource configuration information element (IE). The UE 100 performs measurement and reports a measurement result.
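To make the gap-related fields of Tables 6 to 8 concrete, the following Python sketch bundles them into one record. This is an illustrative data model only, not the actual RRC ASN.1 structure; the field names loosely mirror Table 8.

    from dataclasses import dataclass

    @dataclass
    class MeasGapConfig:
        """Illustrative container loosely mirroring Table 8 (not an RRC type)."""
        gap_offset: int        # gapOffset: offset of the gap pattern within the MGRP
        mgl_ms: float          # mgl: measurement gap length (e.g., 3, 4, 6 ms)
        mgrp_ms: int           # mgrp: measurement gap repetition period in ms
        mgta: bool             # mgta: apply a 0.5 ms timing advance before the gap
        gap_fr2: bool = False  # gapFR2: whether this gap applies to the FR2 range

    cfg = MeasGapConfig(gap_offset=10, mgl_ms=6.0, mgrp_ms=40, mgta=True)
    print(cfg)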
<Cell Re-Selection>

The cell reselection procedure allows the UE to select a more suitable cell and camp on it. When the UE is in either the Camped Normally state or the Camped on Any Cell state on a cell, the UE shall attempt to detect, synchronize with, and monitor intra-frequency, inter-frequency and inter-RAT cells indicated by the serving cell. For intra-frequency and inter-frequency cells, the serving cell may provide no explicit neighbor list but only carrier frequency information and bandwidth information. UE measurement activity is also controlled by measurement rules, allowing the UE to limit its measurement activity.

For idle mode cell re-selection purposes, the UE shall be capable of monitoring at least:
- the intra-frequency carrier, and
- depending on UE capability, 7 NR inter-frequency carriers, and
- depending on UE capability, 7 FDD E-UTRA inter-RAT carriers, and
- depending on UE capability, 7 TDD E-UTRA inter-RAT carriers.

In addition to the requirements defined above, a UE supporting E-UTRA measurements in the RRC_IDLE state shall be capable of monitoring a total of at least 14 carrier frequency layers, including the serving layer, comprising any combination, as defined above, of E-UTRA FDD, E-UTRA TDD and NR layers.

The UE shall measure the SS-RSRP and SS-RSRQ level of the serving cell and evaluate the cell selection criterion S for the serving cell at least once every M1*N1 DRX cycles. The UE shall filter the SS-RSRP and SS-RSRQ measurements of the serving cell using at least 2 measurements. Within the set of measurements used for the filtering, at least two measurements shall be spaced by at least DRX cycle/2.

If the UE has evaluated, according to Table 9, in Nserv consecutive DRX cycles that the serving cell does not fulfil the cell selection criterion S, the UE shall initiate the measurements of all neighbor cells indicated by the serving cell, regardless of the measurement rules currently limiting UE measurement activities. If the UE in RRC_IDLE has not found any new suitable cell based on searches and measurements using the intra-frequency, inter-frequency and inter-RAT information indicated in the system information for 10 s, the UE shall initiate cell selection procedures for the selected PLMN.

TABLE 9
DRX cycle length [s] | Scaling factor N1 (FR1) | Scaling factor N1 (FR2, Note 1) | Nserv [number of DRX cycles]
0.32 | 1 | 8 | M1*N1*4
0.64 | 1 | 5 | M1*N1*4
1.28 | 1 | 4 | N1*2
2.56 | 1 | 3 | N1*2
Note 1: Applies for UEs supporting power class 2, 3 and 4. For a UE supporting power class 1, N1 = 8 for all DRX cycle lengths.

The UE shall be able to identify new intra-frequency cells and perform SS-RSRP and SS-RSRQ measurements of the identified intra-frequency cells without an explicit intra-frequency neighbor list containing physical layer cell identities. The UE shall be able to evaluate whether a newly detectable intra-frequency cell meets the reselection criteria within Tdetect,NR_Intra when Treselection = 0. An intra-frequency cell is considered to be detectable according to the conditions for the corresponding band.

The UE shall measure SS-RSRP and SS-RSRQ at least every Tmeasure,NR_Intra (see Table 10) for intra-frequency cells that are identified and measured according to the measurement rules. The UE shall filter SS-RSRP and SS-RSRQ measurements of each measured intra-frequency cell using at least 2 measurements. Within the set of measurements used for the filtering, at least two measurements shall be spaced by at least Tmeasure,NR_Intra/2. The UE shall not consider an NR neighbor cell in cell reselection if it is indicated as not allowed in the measurement control system information of the serving cell.
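The filtering rule just stated (use at least two samples, with at least two of them spaced by half the relevant period) can be sketched as follows. The helper name and the simple two-sample average are illustrative assumptions; the requirement constrains only the number and spacing of samples, not the filter itself.

    def filtered_measurement(samples, min_spacing_s):
        """samples: list of (time_s, level_db). Return an averaged level once
        at least two samples are spaced by >= min_spacing_s, else None.
        For the serving cell, min_spacing_s is DRX cycle/2; for intra-frequency
        neighbor cells it is Tmeasure,NR_Intra/2."""
        samples = sorted(samples)
        for i in range(len(samples)):
            for j in range(i + 1, len(samples)):
                if samples[j][0] - samples[i][0] >= min_spacing_s:
                    return (samples[i][1] + samples[j][1]) / 2.0
        return None  # spacing requirement not yet met; keep collecting samples

    # Example: DRX cycle of 1.28 s -> required spacing of 0.64 s
    print(filtered_measurement([(0.0, -95.0), (0.2, -96.0), (0.8, -94.0)], 0.64))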
For an intra-frequency cell that has already been detected, but that has not been reselected to, the filtering shall be such that the UE shall be capable of evaluating that the intra-frequency cell has met the reselection criterion defined in [1] within Tevaluate,NR_Intra when Treselection = 0, as specified in Table 10, provided that:
- when rangeToBestCell is not configured: the cell is at least 3 dB better ranked in FR1 or 4.5 dB better ranked in FR2;
- when rangeToBestCell is configured: the cell has the highest number of beams above the threshold absThreshSS-BlocksConsolidation among all detected cells whose cell-ranking criterion R value [1] is within rangeToBestCell of the cell-ranking criterion R value of the highest ranked cell; if there are multiple such cells, the cell has the highest rank among them; and the cell is at least 3 dB better ranked in FR1 or [4.5] dB better ranked in FR2 if the current serving cell is among them.

When evaluating cells for reselection, the SSB side conditions apply to both serving and non-serving intra-frequency cells. If the Treselection timer has a non-zero value and the intra-frequency cell satisfies the reselection criteria, the UE shall evaluate this intra-frequency cell for the Treselection time. If this cell continues to satisfy the reselection criteria throughout this duration, then the UE shall reselect that cell.

TABLE 10
DRX cycle length [s] | Scaling factor N1 (FR1 / FR2, Note 1) | Tdetect,NR_Intra [s] (number of DRX cycles) | Tmeasure,NR_Intra [s] (number of DRX cycles) | Tevaluate,NR_Intra [s] (number of DRX cycles)
0.32 | 1 / 8 | 11.52 × N1 × M2 (36 × N1 × M2) | 1.28 × N1 × M2 (4 × N1 × M2) | 5.12 × N1 × M2 (16 × N1 × M2)
0.64 | 1 / 5 | 17.92 × N1 (28 × N1) | 1.28 × N1 (2 × N1) | 5.12 × N1 (8 × N1)
1.28 | 1 / 4 | 32 × N1 (25 × N1) | 1.28 × N1 (1 × N1) | 6.4 × N1 (5 × N1)
2.56 | 1 / 3 | 58.88 × N1 (23 × N1) | 2.56 × N1 (1 × N1) | 7.68 × N1 (3 × N1)
Note 1: Applies for UEs supporting power class 2, 3 and 4. For a UE supporting power class 1, N1 = 8 for all DRX cycle lengths.
Note 2: M2 = 1.5 if the SMTC periodicity of the measured intra-frequency cell is > 20 ms; otherwise M2 = 1.

The UE shall be able to identify new inter-frequency cells and perform SS-RSRP or SS-RSRQ measurements of identified inter-frequency cells if carrier frequency information is provided by the serving cell, even if no explicit neighbor list with physical layer cell identities is provided.

If Srxlev > SnonIntraSearchP and Squal > SnonIntraSearchQ, the UE shall search for inter-frequency layers of higher priority at least every Thigher_priority_search. If Srxlev ≤ SnonIntraSearchP or Squal ≤ SnonIntraSearchQ, the UE shall search for and measure inter-frequency layers of higher, equal or lower priority in preparation for possible reselection. In this scenario, the minimum rate at which the UE is required to search for and measure higher priority layers shall be the same as that defined below in this clause.

The UE shall be able to evaluate whether a newly detectable inter-frequency cell meets the reselection criteria defined in TS 38.304 within Kcarrier*Tdetect,NR_Inter, if at least carrier frequency information is provided for inter-frequency neighbor cells by the serving cell, when Treselection = 0, provided that the reselection criteria are met by a margin of at least 5 dB in FR1 or 6.5 dB in FR2 for reselections based on ranking, or 6 dB in FR1 or 7.5 dB in FR2 for SS-RSRP reselections based on absolute priorities, or 4 dB in FR1 and 4 dB in FR2 for SS-RSRQ reselections based on absolute priorities. The parameter Kcarrier is the number of NR inter-frequency carriers indicated by the serving cell.
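As a small illustration of the search rule above, the helper below maps the serving-cell Srxlev/Squal values against the SnonIntraSearchP/Q thresholds to decide which inter-frequency layers the UE must search. The function name and return strings are illustrative.

    def inter_freq_search_scope(srxlev_db, squal_db,
                                s_non_intra_search_p, s_non_intra_search_q):
        """Decide the inter-frequency search scope from serving-cell quality."""
        if srxlev_db > s_non_intra_search_p and squal_db > s_non_intra_search_q:
            # Serving cell is good enough: only higher-priority layers,
            # at least every Thigher_priority_search.
            return "higher-priority layers only"
        # Serving cell is weak: search higher, equal and lower priority layers.
        return "all priority layers"

    print(inter_freq_search_scope(30.0, 20.0, 10.0, 5.0))  # higher-priority layers only
    print(inter_freq_search_scope(8.0, 20.0, 10.0, 5.0))   # all priority layers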
An inter-frequency cell is considered to be detectable according to the conditions for the corresponding band. When higher priority cells are found by the higher priority search, they shall be measured at least every Tmeasure,NR_Inter. If, after detecting a cell in a higher priority search, it is determined that reselection has not occurred, then the UE is not required to continuously measure the detected cell to evaluate the ongoing possibility of reselection. However, the minimum measurement filtering requirements specified later in this clause shall still be met by the UE before it makes any determination that it may stop measuring the cell.

The UE shall measure SS-RSRP or SS-RSRQ at least every Kcarrier*Tmeasure,NR_Inter (see Table 11) for identified lower or equal priority inter-frequency cells. If the UE detects, on an NR carrier, a cell whose physical identity is indicated as not allowed for that carrier in the measurement control system information of the serving cell, the UE is not required to perform measurements on that cell.

The UE shall filter SS-RSRP or SS-RSRQ measurements of each measured higher, lower and equal priority inter-frequency cell using at least 2 measurements. Within the set of measurements used for the filtering, at least two measurements shall be spaced by at least Tmeasure,NR_Inter/2. The UE shall not consider an NR neighbor cell in cell reselection if it is indicated as not allowed in the measurement control system information of the serving cell.

For an inter-frequency cell that has already been detected, but that has not been reselected to, the filtering shall be such that the UE shall be capable of evaluating that the inter-frequency cell has met the reselection criterion defined in TS 38.304 within Kcarrier*Tevaluate,NR_Inter when Treselection = 0, as specified in Table 11, provided that the reselection criteria are met by:
- the following condition when performing equal priority reselection:
  - when rangeToBestCell is not configured: the cell is at least 5 dB better ranked in FR1 or 6.5 dB better ranked in FR2; or
  - when rangeToBestCell is configured: the cell has the highest number of beams above the threshold absThreshSS-BlocksConsolidation among all detected cells whose cell-ranking criterion R value [1] is within rangeToBestCell of the cell-ranking criterion R value of the highest ranked cell; if there are multiple such cells, the cell has the highest rank among them; and the cell is at least 5 dB better ranked in FR1 or [6.5] dB better ranked in FR2 if the current serving cell is among them; or
- 6 dB in FR1 or 7.5 dB in FR2 for SS-RSRP reselections based on absolute priorities; or
- 4 dB in FR1 or 4 dB in FR2 for SS-RSRQ reselections based on absolute priorities.

When evaluating cells for reselection, the SSB side conditions apply to both serving and inter-frequency cells. If the Treselection timer has a non-zero value and the inter-frequency cell satisfies the reselection criteria, the UE shall evaluate this inter-frequency cell for the Treselection time. If this cell continues to satisfy the reselection criteria throughout this duration, then the UE shall reselect that cell.
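A minimal sketch of the better-ranked test above: for equal-priority, ranking-based reselection, an inter-frequency neighbor must beat the serving cell's ranking criterion R by an FR-dependent margin. The R values and the function name are illustrative placeholders; a full implementation would also run the Treselection timer.

    def better_ranked(r_neighbor_db, r_serving_db, fr: str) -> bool:
        """Equal-priority ranking check with the FR-dependent margin above
        (5 dB in FR1, 6.5 dB in FR2). R is the cell-ranking criterion."""
        margin_db = 5.0 if fr == "FR1" else 6.5
        return r_neighbor_db >= r_serving_db + margin_db

    print(better_ranked(-88.0, -95.0, "FR1"))  # True: 7 dB better ranked
    print(better_ranked(-90.0, -95.0, "FR2"))  # False: only 5 dB, needs 6.5 dB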
The UE is not expected to meet the measurement requirements for an inter-frequency carrier under a DRX cycle of 320 ms defined in Table 11 under the following conditions:
- TSMTC_intra = TSMTC_inter = 160 ms, where TSMTC_intra and TSMTC_inter are the periodicities of the SMTC occasions configured for the intra-frequency carrier and the inter-frequency carrier respectively; and
- SMTC occasions configured for the inter-frequency carrier occur up to 1 ms before the start or up to 1 ms after the end of the SMTC occasions configured for the intra-frequency carrier; and
- SMTC occasions configured for the intra-frequency carrier and for the inter-frequency carrier occur up to 1 ms before the start or up to 1 ms after the end of the paging occasion [1].

TABLE 11
DRX cycle length [s] | Scaling factor N1 (FR1 / FR2, Note 1) | Tdetect,NR_Inter [s] (number of DRX cycles) | Tmeasure,NR_Inter [s] (number of DRX cycles) | Tevaluate,NR_Inter [s] (number of DRX cycles)
0.32 | 1 / 8 | 11.52 × N1 × 1.5 (36 × N1 × 1.5) | 1.28 × N1 × 1.5 (4 × N1 × 1.5) | 5.12 × N1 × 1.5 (16 × N1 × 1.5)
0.64 | 1 / 5 | 17.92 × N1 (28 × N1) | 1.28 × N1 (2 × N1) | 5.12 × N1 (8 × N1)
1.28 | 1 / 4 | 32 × N1 (25 × N1) | 1.28 × N1 (1 × N1) | 6.4 × N1 (5 × N1)
2.56 | 1 / 3 | 58.88 × N1 (23 × N1) | 2.56 × N1 (1 × N1) | 7.68 × N1 (3 × N1)
Note 1: Applies for UEs supporting power class 2, 3 and 4. For a UE supporting power class 1, N1 = 8 for all DRX cycle lengths.

Based on the serving cell signal quality, the UE may measure neighbor cells for cell selection or reselection. If the serving cell fulfils Srxlev > SIntraSearchP and Squal > SIntraSearchQ, the UE may choose not to perform intra-frequency measurements. Otherwise, the UE may perform intra-frequency measurements. Srxlev is the cell selection RX level value (dB). Squal is the cell selection quality value (dB). SIntraSearchP specifies the Srxlev threshold (in dB) for intra-frequency measurements. SIntraSearchQ specifies the Squal threshold (in dB) for intra-frequency measurements.

<Measurement Gap>

UEs shall support the measurement gap patterns listed in Table 12, based on the applicability specified in Tables 13 and 14. The UE determines the measurement gap timing based on the gap offset configuration and the measurement gap timing advance configuration provided by higher layer signaling. Table 12 shows the gap pattern configurations.

TABLE 12
Gap Pattern Id | Measurement Gap Length (MGL, ms) | Measurement Gap Repetition Period (MGRP, ms)
0 | 6 | 40
1 | 6 | 80
2 | 3 | 40
3 | 3 | 80
4 | 6 | 20
5 | 6 | 160
6 | 4 | 20
7 | 4 | 40
8 | 4 | 80
9 | 4 | 160
10 | 3 | 20
11 | 3 | 160
12 | 5.5 | 20
13 | 5.5 | 40
14 | 5.5 | 80
15 | 5.5 | 160
16 | 3.5 | 20
17 | 3.5 | 40
18 | 3.5 | 80
19 | 3.5 | 160
20 | 1.5 | 20
21 | 1.5 | 40
22 | 1.5 | 80
23 | 1.5 | 160
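Table 12 is essentially a lookup from gap pattern id to the pair (MGL, MGRP), which the sketch below encodes; the derived duty-cycle helper is an illustrative addition, not a quantity defined by the specification.

    # Gap pattern id -> (MGL in ms, MGRP in ms), per Table 12
    GAP_PATTERNS = {
        0: (6, 40),  1: (6, 80),  2: (3, 40),  3: (3, 80),
        4: (6, 20),  5: (6, 160), 6: (4, 20),  7: (4, 40),
        8: (4, 80),  9: (4, 160), 10: (3, 20), 11: (3, 160),
        12: (5.5, 20), 13: (5.5, 40), 14: (5.5, 80), 15: (5.5, 160),
        16: (3.5, 20), 17: (3.5, 40), 18: (3.5, 80), 19: (3.5, 160),
        20: (1.5, 20), 21: (1.5, 40), 22: (1.5, 80), 23: (1.5, 160),
    }

    def gap_duty_cycle(pattern_id: int) -> float:
        """Fraction of time spent inside measurement gaps for a pattern."""
        mgl_ms, mgrp_ms = GAP_PATTERNS[pattern_id]
        return mgl_ms / mgrp_ms

    print(gap_duty_cycle(0))   # pattern 0: 6 ms every 40 ms -> 0.15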
Table 13 shows the applicability of the gap pattern configurations supported by an E-UTRA-NR dual connectivity UE.

TABLE 13
Measurement gap pattern configuration | Serving cell | Measurement gap purpose | Applicable Gap Pattern Id
Per-UE measurement gap | E-UTRA + FR1, or E-UTRA + FR2, or E-UTRA + FR1 + FR2 | non-NR RAT (Notes 1, 2) | 0, 1, 2, 3
Per-UE measurement gap | E-UTRA + FR1, or E-UTRA + FR2, or E-UTRA + FR1 + FR2 | FR1 and/or FR2 | 0-11
Per-UE measurement gap | E-UTRA + FR1, or E-UTRA + FR2, or E-UTRA + FR1 + FR2 | non-NR RAT (Notes 1, 2) and FR1 and/or FR2 | 0, 1, 2, 3
Per-FR measurement gap | E-UTRA and, FR1 if configured, FR2 if configured | non-NR RAT (Notes 1, 2) | FR1 gap: 0, 1, 2, 3; FR2 gap: no gap
Per-FR measurement gap | E-UTRA and, FR1 if configured, FR2 if configured | FR1 only | FR1 gap: 0-11; FR2 gap: no gap
Per-FR measurement gap | E-UTRA and, FR1 if configured, FR2 if configured | FR2 only | FR1 gap: no gap; FR2 gap: 12-23
Per-FR measurement gap | E-UTRA and, FR1 if configured, FR2 if configured | non-NR RAT (Notes 1, 2) and FR1 | FR1 gap: 0, 1, 2, 3; FR2 gap: no gap
Per-FR measurement gap | E-UTRA and, FR1 if configured, FR2 if configured | FR1 and FR2 | FR1 gap: 0-11; FR2 gap: 12-23
Per-FR measurement gap | E-UTRA and, FR1 if configured, FR2 if configured | non-NR RAT (Notes 1, 2) and FR2 | FR1 gap: 0, 1, 2, 3; FR2 gap: 12-23
Per-FR measurement gap | E-UTRA and, FR1 if configured, FR2 if configured | non-NR RAT (Notes 1, 2) and FR1 and FR2 | FR1 gap: 0, 1, 2, 3; FR2 gap: 12-23
Note: If a GSM, UTRA TDD or UTRA FDD inter-RAT frequency layer is configured to be monitored, only measurement gap patterns #0 and #1 can be used for the per-FR gap on E-UTRA and FR1 (if configured), or for the per-UE gap.
Note 1: Non-NR RAT includes E-UTRA, UTRA and/or GSM.
Note 2: Gap patterns 2 and 3 are supported by UEs which support shortMeasurementGap-r14.
Note 3: When E-UTRA inter-frequency RSTD measurements are configured and the UE requires measurement gaps for performing such measurements, only Gap Pattern #0 can be used.

For E-UTRA-NR dual connectivity, when the serving cells are on E-UTRA and FR1 and measurement objects are in both E-UTRA/FR1 and FR2:
- If the MN indicates to the UE that the measurement gap from the MN applies to the E-UTRA/FR1/FR2 serving cells, the UE fulfils the per-UE measurement requirements for both E-UTRA/FR1 and FR2 measurement objects based on the measurement gap pattern configured by the MN.
- If the MN indicates to the UE that the measurement gap from the MN applies to only the LTE/FR1 serving cell(s):
  - the UE fulfils the measurement requirements for FR1/LTE measurement objects based on the configured measurement gap pattern;
  - the UE fulfils the requirements for FR2 measurement objects based on an effective MGRP of 20 ms.

When the serving cells are in E-UTRA, FR1 and FR2 and measurement objects are in both E-UTRA/FR1 and FR2:
- If the MN indicates to the UE that the measurement gap from the MN applies to the E-UTRA/FR1/FR2 serving cells, the UE fulfils the per-UE measurement requirements for both E-UTRA/FR1 and FR2 measurement objects based on the measurement gap pattern configured by the MN.

Table 14 shows the applicability of the gap pattern configurations supported by a UE with NR standalone operation.
TABLE 14
Measurement gap pattern configuration | Serving cell | Measurement purpose (Note 2) | Applicable Gap Pattern Id
Per-UE measurement gap | FR1, or FR1 + FR2 | E-UTRA only | 0, 1, 2, 3
Per-UE measurement gap | FR1, or FR1 + FR2 | FR1 and/or FR2 | 0-11
Per-UE measurement gap | FR1, or FR1 + FR2 | E-UTRAN and FR1 and/or FR2 | 0, 1, 2, 3
Per-UE measurement gap | FR2 | E-UTRA only | 0, 1, 2, 3
Per-UE measurement gap | FR2 | FR1 only | 0-11
Per-UE measurement gap | FR2 | FR1 and FR2 | 0-11
Per-UE measurement gap | FR2 | E-UTRAN and FR1 and/or FR2 | 0, 1, 2, 3
Per-UE measurement gap | FR2 | FR2 only | 12-23
Per-FR measurement gap | FR1 if configured, FR2 if configured | E-UTRA only | FR1 gap: 0, 1, 2, 3; FR2 gap: no gap
Per-FR measurement gap | FR1 if configured, FR2 if configured | FR1 only | FR1 gap: 0-11; FR2 gap: no gap
Per-FR measurement gap | FR1 if configured, FR2 if configured | FR2 only | FR1 gap: no gap; FR2 gap: 12-23
Per-FR measurement gap | FR1 if configured, FR2 if configured | E-UTRA and FR1 | FR1 gap: 0, 1, 2, 3; FR2 gap: no gap
Per-FR measurement gap | FR1 if configured, FR2 if configured | FR1 and FR2 | FR1 gap: 0-11; FR2 gap: 12-23
Per-FR measurement gap | FR1 if configured, FR2 if configured | E-UTRA and FR2 | FR1 gap: 0, 1, 2, 3; FR2 gap: 12-23
Per-FR measurement gap | FR1 if configured, FR2 if configured | E-UTRA and FR1 and FR2 | FR1 gap: 0, 1, 2, 3; FR2 gap: 12-23
Note 1: When E-UTRA inter-RAT RSTD measurements are configured and the UE requires measurement gaps for performing such measurements, only Gap Pattern #0 can be used.
Note 2: A measurement purpose which includes E-UTRA measurements also includes inter-RAT E-UTRA RSRP and RSRQ measurements for E-CID.

<Non-Terrestrial Networks>

A non-terrestrial network refers to a network, or a segment of a network, using RF resources on board a satellite (or UAS platform). The typical scenario of a non-terrestrial network providing access to user equipment is depicted below.

FIG. 10 shows a non-terrestrial network typical scenario based on a transparent payload.

FIG. 11 shows a non-terrestrial network typical scenario based on a regenerative payload.

A non-terrestrial network typically features the following elements:
- One or several sat-gateways that connect the non-terrestrial network to a public data network.
- A GEO satellite is fed by one or several sat-gateways which are deployed across the satellite targeted coverage (e.g., regional or even continental coverage). It is assumed that a UE in a cell is served by only one sat-gateway.
- A non-GEO satellite is served successively by one or several sat-gateways at a time. The system ensures service and feeder link continuity between the successive serving sat-gateways with sufficient time duration to proceed with mobility anchoring and handover.
- A feeder link, or radio link, between a sat-gateway and the satellite (or UAS platform).
- A service link, or radio link, between the user equipment and the satellite (or UAS platform).
- A satellite (or UAS platform) which may implement either a transparent or a regenerative (with on-board processing) payload. The satellite (or UAS platform) typically generates several beams over a given service area bounded by its field of view. The footprints of the beams are typically of elliptic shape. The field of view of a satellite (or UAS platform) depends on the on-board antenna diagram and the minimum elevation angle.
  - A transparent payload performs radio frequency filtering, frequency conversion and amplification; hence, the waveform signal repeated by the payload is unchanged.
  - A regenerative payload performs radio frequency filtering, frequency conversion and amplification, as well as demodulation/decoding, switching and/or routing, and coding/modulation. This is effectively equivalent to having all or part of the base station functions (e.g., of a gNB) on board the satellite (or UAS platform).
- Inter-satellite links (ISL), optionally, in the case of a constellation of satellites. This requires regenerative payloads on board the satellites. ISLs may operate in RF frequency or optical bands.
- User equipment served by the satellite (or UAS platform) within the targeted service area.

There may be different types of satellites (or UAS platforms), listed hereunder. Table 15 shows the types of NTN platforms.
TABLE 15

| Platforms | Altitude range | Orbit | Typical beam footprint size |
|---|---|---|---|
| Low-Earth Orbit (LEO) satellite | 300-1500 km | Circular around the earth | 100-1000 km |
| Medium-Earth Orbit (MEO) satellite | 7000-25000 km | Circular around the earth | 100-1000 km |
| Geostationary Earth Orbit (GEO) satellite | 35 786 km | Notional station keeping position fixed in terms of elevation/azimuth with respect to a given earth point | 200-3500 km |
| UAS platform (including HAPS) | 8-50 km (20 km for HAPS) | Notional station keeping position fixed in terms of elevation/azimuth with respect to a given earth point | 5-200 km |
| High Elliptical Orbit (HEO) satellite | 400-50000 km | Elliptical around the earth | 200-3500 km |

GEO satellites and UAS are used to provide continental, regional or local service. A constellation of LEO and MEO satellites is used to provide services in both the Northern and Southern hemispheres. In some cases, a constellation can even provide global coverage, including polar regions; the latter requires an appropriate orbit inclination, sufficient generated beams, and inter-satellite links.

<Problems to be Solved in the Disclosure of this Specification>

NR-based NTN (non-terrestrial network) communication is a method for efficiently providing communication services, through satellites (geostationary orbit (GEO) satellites, low earth orbit (LEO) satellites, etc.), to regions where terrestrial network services are not provided. In the case of a transparent satellite, the satellite amplifies the signal transmitted from the terrestrial base station (gNB-NTN gateway) and transmits the signal to the UE. In the case of a regenerative satellite, in addition to signal amplification, the satellite performs the functions of a terrestrial base station, such as routing, coding and modulation, and decoding and demodulation. An NTN terminal has a GPS function and periodically receives location, time, and speed information for NTN satellites.

FIG. 12a and FIG. 12b show service coverage for an NGSO satellite according to an earth fixed beam and an earth moving beam. A non-geostationary (NGSO) satellite moves in a fixed orbit and establishes links with a TN base station (NTN gateway) and an NTN UE, and two types of service coverage are considered: an earth fixed beam and an earth moving beam. FIG. 12a shows service coverage for an NGSO satellite based on an earth fixed beam. FIG. 12b shows service coverage for an NGSO satellite based on an earth moving beam. An earth fixed beam maintains fixed service coverage for a certain period of time even while the LEO satellite moves along its orbit, whereas the service coverage of an earth moving beam moves as the LEO satellite moves along its orbit.

FIG. 13 shows an example of the signal quality of cells for an NTN system. A basic NR terminal considers the signal quality of a cell when performing cell selection/reselection in the IDLE or INACTIVE state or performing handover (HO) in the CONNECTED state. However, in the NTN environment, the signal quality in each cell is almost constant within the service coverage but decreases rapidly at the edge of the cell service coverage. Therefore, cell selection/reselection based only on signal quality may not be efficient in the NTN environment. In the present specification, the disclosure will be described based on the IDLE/INACTIVE state, but it may be equally applied to the CONNECTED state.

<Disclosure of the Present Specification>

1. Cell Service Time

The service time for a specific cell may be provided by the NTN satellite. For example, in the case of an earth fixed beam, there may be a time during which a specific service coverage is maintained despite movement of the NTN satellite. Time information (e.g., service time) may indicate a service start time and end time based on UTC time or a timer.
The time information may be provided to the UE. When the UE is in the IDLE or INACTIVE state, the network may broadcast the service time information to all UEs. Therefore, when the UE determines the start time of neighbor cell measurement, the UE may consider the service time in addition to the current serving cell signal quality. The service time may be based on UTC time.

If the NTN satellite informs the UE of the service time for each cell, the UE may start measuring the neighbor cell X time before the end of the service time of the serving cell. The NTN satellite may be related to the serving cell of the UE. That is, from the point at which the remaining service time (RST) of the serving cell reaches a certain time, the UE may start measuring the neighbor cell. This measurement may be performed independently, regardless of whether the signal quality of the serving cell satisfies the cell selection criterion S. The remaining service time may be the difference between the time at which the cell stops serving the area and the current time.

X may be a threshold time for the RST of the serving cell: when the RST of the serving cell is less than X, the UE may perform cell measurement. Y may be a threshold time for the RST of a neighbor cell: when the RST of a neighbor cell is less than Y, the UE may not measure that neighbor cell. When the RST of the serving cell is less than X, the NTN UE may consider a neighbor cell's RST in order to exclude unnecessary neighbor cell measurements when performing measurement on the neighbor cell. If the RST of the neighbor cell is less than Y, the NTN UE may not perform measurement on that neighbor cell. X and Y may be indicated from the network. That is, when performing measurement of a neighbor cell, the NTN UE may consider i) whether the RST of the serving cell is less than X and ii) whether the RST of the neighbor cell is less than Y. The serving cell may indicate X and Y to the UE. The serving cell may indicate the cell service time to the UE, and the UE may calculate the RST based on the cell service time. X and Y may also be pre-configured in the UE. If the remaining service time is less than X seconds, the UE may initiate measurements of all neighbor cells indicated by the serving cell. Neighbor cells with a remaining service time of less than Y seconds may be excluded from intra-frequency or inter-frequency measurements.

FIG. 14 shows an example of measurement relaxation depending on the remaining service time of the serving cell according to an embodiment of the present specification. The NTN UE may perform measurement relaxation on neighbor cells to save UE power based on the RST of the serving cell. Based on the RST of the serving cell, the UE may perform measurement of neighbor cells in three regimes (no measurement, measurement relaxation, and normal measurement). When the RST is less than N seconds, normal measurement may be performed by the UE. When the RST is between N seconds and M seconds, measurement relaxation may be applied when measuring neighbor cells. Measurement relaxation may be performed by increasing the measurement period by a factor of k (e.g., k=2, 3, 4 . . . ) compared to normal measurement. If the RST is greater than M seconds, measurement of neighbor cells may be stopped. N and M may be indicated from the network to the UE.

Similarly, measurement relaxation may be considered based on the RST of a neighbor cell to be measured. That is, when the RST of the neighbor cell is greater than M1 seconds, normal measurement may be performed to increase the chance of cell reselection. When the RST of the neighbor cell is between N1 seconds and M1 seconds, measurement relaxation (e.g., an increased measurement period) may be applied when measuring the neighbor cell. When the RST of the neighbor cell is smaller than N1 seconds, no measurement may be performed, so as to exclude the neighbor cell from the targets of cell reselection (to avoid frequent cell reselection). N1 and M1 may be indicated from the network.
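The RST-driven decisions above reduce to a handful of threshold comparisons. The following Python sketch is a minimal illustration under the X/Y and N/M (and N1/M1) semantics described here; all function and variable names are illustrative assumptions, not identifiers from the disclosure.

```python
from enum import Enum

class MeasMode(Enum):
    NO_MEASUREMENT = 0
    RELAXED = 1   # measurement period increased by a factor k vs. normal
    NORMAL = 2

def remaining_service_time_s(service_end_utc_s: float, now_utc_s: float) -> float:
    """RST: difference between the time the cell stops serving the area and now."""
    return max(0.0, service_end_utc_s - now_utc_s)

def serving_cell_mode(rst_serving_s: float, n_s: float, m_s: float) -> MeasMode:
    """FIG. 14 tiers (N < M): small RST -> service ends soon, measure normally;
    mid-range RST -> relaxed measurement; large RST -> no neighbor measurement."""
    if rst_serving_s < n_s:
        return MeasMode.NORMAL
    if rst_serving_s < m_s:
        return MeasMode.RELAXED
    return MeasMode.NO_MEASUREMENT

def neighbor_cell_mode(rst_neighbor_s: float, n1_s: float, m1_s: float) -> MeasMode:
    """Mirror tiers for a neighbor (N1 < M1): a neighbor about to stop serving
    (small RST) is excluded to avoid frequent or pointless reselection."""
    if rst_neighbor_s < n1_s:
        return MeasMode.NO_MEASUREMENT
    if rst_neighbor_s < m1_s:
        return MeasMode.RELAXED
    return MeasMode.NORMAL
```

Note that the two functions run in opposite directions, matching the text: the serving cell is measured more aggressively as its RST shrinks, while a neighbor is dropped as its RST shrinks.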
2. Cell Reference Location

The NTN satellite may inform the NTN UE of a reference location for a specific cell, and the NTN UE may derive its distance from the reference location of the cell. When the UE is in the IDLE or INACTIVE state, the network (which may be the serving cell) may broadcast the reference location information to all UEs or provide it by dedicated signaling. The UE may therefore derive its location within the serving cell relative to the reference location and, based on that location, may determine the start time of neighbor cell measurement.

If the NTN satellite informs the UE of the reference location for each cell and a specific distance X meters for the serving cell, the UE may start measuring the neighbor cell from the time when the distance between the reference location of the serving cell and the UE becomes more than X meters. This location-based measurement may be performed independently, regardless of whether the signal quality of the serving cell satisfies the cell selection criterion S. Conventionally, the UE performs measurement for cell reselection based on the signal quality of the serving cell.

The serving cell may transmit X meters for the serving cell and Y meters for the neighbor cell to the UE. The UE may measure its distance from the reference location of the serving cell and compare that distance with X meters. When the UE performs measurement on the neighbor cell because its distance from the reference location of the serving cell is more than X meters, the UE may also measure its distance from the reference location of the neighbor cell and compare that distance with Y meters. If the distance from the reference location of the neighbor cell becomes more than Y meters, the UE may exclude that neighbor cell from measurement; that is, the UE may not perform measurement of the neighbor cell. The NTN UE may measure the distance from the reference location of the neighbor cell to exclude unnecessary neighbor cell measurements. X and Y may be indicated from the network, and the X meters and Y meters may be understood to indicate the start of each cell boundary. If the distance between the UE and the reference location of the serving cell is larger than X meters, the UE may initiate measurements of all neighbor cells indicated by the serving cell. A neighbor cell for which the distance between the UE and the neighbor cell's reference location is larger than Y meters may be excluded from intra-frequency or inter-frequency measurements.

FIG. 15 shows an example of measurement relaxation depending on the distance between the UE and the reference location of the serving cell according to an embodiment of the present specification. The NTN UE may perform measurement relaxation on neighbor cells to save UE power based on its distance from the reference location of the serving cell. For example, based on the distance from the reference location of the serving cell, the UE may perform measurement of neighbor cells in three regimes (no measurement, measurement relaxation, and normal measurement).
When the distance from the reference location of the serving cell is greater than N meters, normal measurement may be performed by the UE. When the distance from the reference location is between M meters and N meters, measurement relaxation may be applied when measuring neighbor cells. Measurement relaxation may be performed by increasing the measurement period by a factor of k (e.g., k=2, 3, 4 . . . ) compared to normal measurement. If the distance from the reference location of the serving cell is less than M meters, measurement of the neighbor cells may be stopped. N and M may be indicated from the network.

Similarly, measurement relaxation may be considered based on the distance between the UE and the reference location of the neighbor cell to be measured. When the distance between the reference location of the neighbor cell and the UE is less than M1 meters, normal measurement is performed to increase the chance of cell reselection. When the distance between the reference location of the neighbor cell and the UE is between M1 meters and N1 meters, measurement relaxation (e.g., an increased measurement period) may be applied when measuring the neighbor cell. When the distance between the reference location of the neighbor cell and the UE is greater than N1 meters, no measurement may be performed, so as to exclude the neighbor cell from the targets of cell reselection (to avoid frequent cell reselection). N1 and M1 may be indicated from the network.

Measurement of a neighbor cell may also be performed based on a combination of the conditions described above, such as cell service time, cell reference location, and signal quality. For example, the cell service time and cell reference location may be configured by the network. If i) the signal quality is above a certain level, ii) the no-measurement condition based on cell service time is satisfied, and iii) the no-measurement condition based on cell reference location is satisfied, the UE may skip measurement of the neighbor cell. If at least one of i) signal quality, ii) cell service time, and iii) cell reference location satisfies the normal measurement condition, the UE may operate normally (that is, perform normal measurement) rather than applying no measurement or measurement relaxation.

If the no-measurement conditions are satisfied, power saving of the UE in the connected state may be considered. In this case, if a measurement gap (MG) for inter-frequency and inter-RAT measurement is configured for the UE, the UE may inform the network that the MG is not needed. The network may then schedule data signal transmission/reception during the time for which the MG was configured. In addition, when normal measurement or measurement relaxation starts, the UE may request the network to configure the MG and may perform measurement on the neighbor cell based on the MG.

A plurality of cells may have similar signal qualities when the UE performs cell selection. In this case, the priority of the cells may be configured based on cell service time and cell reference location. For example, if i) cell service time and cell reference location are configured, ii) the RST is more than a specific time, and iii) the distance from the reference location of a first cell is smaller than the distance from the reference location of a second cell, the priority of the first cell is higher than the priority of the second cell. The UE may perform cell selection based on the described priority.
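The distance-based tiers and the priority rule just described can be illustrated the same way. This Python sketch assumes GNSS-style latitude/longitude positions and uses a standard haversine great-circle distance; the function names and the ranking tie-break are illustrative assumptions, not part of the disclosure.

```python
import math

def distance_m(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Haversine distance (meters) between the UE position and a cell
    reference location; a stand-in for whatever positioning the UE uses."""
    r = 6371000.0  # mean earth radius, meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def serving_distance_mode(d_serving_m: float, n_m: float, m_m: float) -> str:
    """FIG. 15 tiers (M < N): far from the serving cell's reference point ->
    normal measurement; mid-range -> relaxed; near the center -> none."""
    if d_serving_m > n_m:
        return "normal"
    if d_serving_m > m_m:
        return "relaxed"
    return "none"

def rank_candidate_cells(cells, min_rst_s: float):
    """Priority rule sketched above: among cells with similar signal quality
    and RST above a threshold, prefer the cell whose reference location is
    closest to the UE. `cells` holds (cell_id, rst_s, distance_m) tuples."""
    eligible = [c for c in cells if c[1] > min_rst_s]
    return sorted(eligible, key=lambda c: c[2])  # nearest first
```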
FIG. 16 shows a procedure of a UE according to the disclosure of the present specification. The UE may connect to a non-terrestrial network (NTN) satellite serving a targeted service area via a service link, wherein the NTN satellite is connected to a gateway via a feeder link. The UE may receive, from the NTN satellite, information on the service time of a serving cell. The UE may start to perform neighbor cell measurement at a time point a certain time before the end of the service time of the serving cell, regardless of whether the cell quality of the serving cell meets the cell selection criterion S. The NTN satellite may provide an earth fixed system. The information on the service time may include information on when the serving cell is going to stop serving the targeted service area. The information on the service time may be provided based on Coordinated Universal Time (UTC).

The UE may skip performing neighbor cell measurement based on the remaining service time (RST) until the end of the service time of the serving cell being longer than the certain time. The neighbor cell measurement may be performed with period T based on the RST until the end of the service time of the serving cell being shorter than a first time threshold. The neighbor cell measurement may be performed with a period longer than the period T based on i) the RST until the end of the service time of the serving cell being shorter than the certain time and ii) the RST until the end of the service time of the serving cell being longer than the first time threshold.

The UE may receive, from the NTN satellite, information on the service time of a neighbor cell. The neighbor cell measurement may be performed based on the RST until the end of the service time of the neighbor cell. The UE may skip performing neighbor cell measurement based on the RST until the end of the service time of the neighbor cell being shorter than a second time threshold. The neighbor cell measurement may be performed with period T based on the RST until the end of the service time of the neighbor cell being longer than a third time threshold. The neighbor cell measurement may be performed with a period longer than the period T based on i) the RST until the end of the service time of the neighbor cell being longer than the second time threshold and ii) the RST until the end of the service time of the neighbor cell being shorter than the third time threshold.

The UE may receive, from the NTN satellite, a first reference location of the serving cell. The neighbor cell measurement may be performed based on the distance between the UE and the first reference location of the serving cell being greater than a first distance threshold. The UE may skip performing neighbor cell measurement based on the distance between the UE and the first reference location of the serving cell being smaller than the first distance threshold. The neighbor cell measurement may be performed with the period T based on the distance between the UE and the first reference location of the serving cell being greater than a second distance threshold. The neighbor cell measurement may be performed with a period longer than the period T based on i) the distance between the UE and the first reference location of the serving cell being greater than the first distance threshold and ii) the distance between the UE and the first reference location of the serving cell being smaller than the second distance threshold.

The UE may receive, from the NTN satellite, a second reference location of the neighbor cell.
The neighbor cell measurement may be performed based on the distance between the UE and the second reference location of the neighbor cell being smaller than a third distance threshold. The UE may skip performing neighbor cell measurement based on the distance between the UE and the second reference location of the neighbor cell being greater than the third distance threshold. The neighbor cell measurement may be performed with the period T based on the distance between the UE and the second reference location of the neighbor cell being smaller than a fourth distance threshold. The neighbor cell measurement may be performed with a period longer than the period T based on i) the distance between the UE and the second reference location of the neighbor cell being smaller than the third distance threshold and ii) the distance between the UE and the second reference location of the neighbor cell being greater than the fourth distance threshold.

FIG. 17a and FIG. 17b show examples of procedures for neighbor cell measurement according to an embodiment of the present specification. FIG. 17a and FIG. 17b are flowcharts for examples of neighbor cell measurement when the cell service time or reference location for the serving cell/neighbor cell described above is configured with the X and Y values. The values for X and Y may be provided to the UE from the network in the same manner as a cell service time or a reference location (e.g., via SIB). X and Y may be updated periodically or aperiodically according to the NTN satellite environment.

FIG. 17a shows a flowchart for neighbor cell measurement according to cell service time. The cell service time of the serving cell and the X value may be broadcast. The UE may calculate the RST for the serving cell based on the cell service time. The UE may then determine whether the RST for the serving cell is less than X. If the RST is less than X, the UE may check whether a cell service time of the neighbor cell and a Y value have been broadcast. If there is no cell service time of the neighbor cell and no Y value, the UE may perform neighbor cell measurement regardless of the signal quality of the serving cell. If there is a cell service time of the neighbor cell and a Y value, the UE may calculate the RST for the neighbor cell based on the cell service time and may determine whether the RST for the neighbor cell is more than Y. If the RST for the neighbor cell is not more than Y, the UE may not perform neighbor cell measurement (that is, no measurement). If the RST for the neighbor cell is more than Y, the UE may perform neighbor cell measurement regardless of the signal quality of the serving cell.

FIG. 17b shows a flowchart for neighbor cell measurement according to reference location with the X and Y values. The cell reference location of the serving cell and the X value may be broadcast. The UE may calculate its distance from the reference location of the serving cell based on the cell reference location. The UE may then determine whether the distance from the reference location of the serving cell is more than X. If the distance from the reference location of the serving cell is more than X, the UE may check whether a cell reference location of the neighbor cell and a Y value have been broadcast. If there is no cell reference location of the neighbor cell and no Y value, the UE may perform neighbor cell measurement regardless of the signal quality of the serving cell. If there is a cell reference location of the neighbor cell and a Y value, the UE may calculate its distance from the reference location of the neighbor cell based on the cell reference location and may determine whether the distance from the reference location of the neighbor cell is less than Y. If the distance from the reference location of the neighbor cell is not less than Y, the UE may not perform neighbor cell measurement (that is, no measurement). If the distance from the reference location of the neighbor cell is less than Y, the UE may perform neighbor cell measurement regardless of the signal quality of the serving cell.
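The FIG. 17a branching just described compresses to a few lines. The following Python sketch is illustrative only; function and parameter names are assumptions, not identifiers from the disclosure (the analogous FIG. 17b flow swaps the RST comparisons for distance comparisons with the inequalities inverted).

```python
from typing import Optional

def should_measure_neighbor(rst_serving_s: float,
                            x_s: float,
                            rst_neighbor_s: Optional[float],
                            y_s: Optional[float]) -> bool:
    """FIG. 17a flow: measurement is triggered by the serving cell's RST
    dropping below X, independently of serving-cell signal quality. If the
    network also broadcast a neighbor service time and a Y value, a neighbor
    whose own RST is not above Y is excluded."""
    if rst_serving_s >= x_s:
        return False          # serving cell still has time; nothing triggered
    if rst_neighbor_s is None or y_s is None:
        return True           # no neighbor info broadcast: measure anyway
    return rst_neighbor_s > y_s
```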
FIG. 18 shows an example of a flowchart for a power saving operation according to cell service time with (N, M) values according to an embodiment of the present specification. The cell service time of the serving cell and the (N, M) values may be broadcast by the network. The UE may calculate the RST for the serving cell based on the cell service time and compare the RST with N and M. If the RST is less than N, the UE may operate normally (that is, normal measurement). If the RST is less than M and more than N, measurement relaxation may be applied when measuring neighbor cells. If the RST is more than M, the UE may not perform measurement (no measurement). If i) measurement is performed by the UE, ii) the UE is in the IDLE or INACTIVE state, and iii) a condition for cell selection or cell reselection is satisfied, the UE may perform cell selection or cell reselection. If i) measurement is performed by the UE, ii) the UE is in the CONNECTED state, and iii) a condition for cell handover is satisfied, the UE may perform handover.

FIG. 19 shows an example of a flowchart for a power saving operation according to reference location with (N, M) values according to an embodiment of the present specification. The reference location of the serving cell and the (N, M) values may be broadcast by the network. The UE may calculate its distance from the reference location of the serving cell based on the reference location and compare the distance with N and M. If the distance is more than N, the UE may operate normally (that is, normal measurement). If the distance is more than M and less than N, measurement relaxation may be applied when measuring neighbor cells. If the distance is less than M, the UE may not perform measurement (no measurement). If i) measurement is performed by the UE, ii) the UE is in the IDLE or INACTIVE state, and iii) a condition for cell selection or cell reselection is satisfied, the UE may perform cell selection or cell reselection. If i) measurement is performed by the UE, ii) the UE is in the CONNECTED state, and iii) a condition for cell handover is satisfied, the UE may perform handover.

Hereinafter, a device configured to operate in a wireless system, according to some embodiments of the present disclosure, will be described. For example, a terminal may include a processor, a transceiver, and a memory. For example, the processor may be configured to be operably coupled with the memory and the transceiver. The processor may be configured to: connect to a non-terrestrial network (NTN) satellite serving a targeted service area via a service link, wherein the NTN satellite is connected to a gateway via a feeder link; receive, from the NTN satellite, information on the service time of a serving cell; and start to perform neighbor cell measurement at a time point a certain time before the end of the service time of the serving cell, regardless of whether the cell quality of the serving cell meets the cell selection criterion S, wherein the NTN satellite provides an earth fixed system, wherein the information on the service time includes information on when the serving cell is going to stop serving the targeted service area, and wherein the information on the service time is provided based on Coordinated Universal Time (UTC).
Hereinafter, an apparatus in mobile communication, according to some embodiments of the present disclosure, will be described. The processor of the apparatus may be configured to: connect to a non-terrestrial network (NTN) satellite serving a targeted service area via a service link, wherein the NTN satellite is connected to a gateway via a feeder link; receive, from the NTN satellite, information on the service time of a serving cell; and start to perform neighbor cell measurement at a time point a certain time before the end of the service time of the serving cell, regardless of whether the cell quality of the serving cell meets the cell selection criterion S, wherein the NTN satellite provides an earth fixed system, wherein the information on the service time includes information on when the serving cell is going to stop serving the targeted service area, and wherein the information on the service time is provided based on Coordinated Universal Time (UTC).

Hereinafter, a non-transitory computer-readable medium having stored thereon a plurality of instructions in a wireless communication system, according to some embodiments of the present disclosure, will be described. According to some embodiments of the present disclosure, the technical features of the present disclosure can be embodied directly in hardware, in software executed by a processor, or in a combination of the two. For example, a method performed by a wireless device in wireless communication may be implemented in hardware, software, firmware, or any combination thereof. For example, software may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, a hard disk, a removable disk, a CD-ROM, or any other storage medium. An exemplary storage medium is coupled to the processor such that the processor can read information from the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC. In another example, the processor and the storage medium may reside as discrete components.

The computer-readable medium may include a tangible and non-transitory computer-readable storage medium. For example, non-transitory computer-readable media may include random access memory (RAM) such as synchronous dynamic random access memory (SDRAM), read-only memory (ROM), non-volatile random access memory (NVRAM), electrically erasable programmable read-only memory (EEPROM), FLASH memory, magnetic or optical data storage media, or any other medium that can be used to store instructions or data structures. Non-transitory computer-readable media may also include combinations of the above. In addition, the method described herein may be realized at least in part by a computer-readable communication medium that carries or communicates code in the form of instructions or data structures and that can be accessed, read, and/or executed by a computer. According to some embodiments of the present disclosure, a non-transitory computer-readable medium has stored thereon a plurality of instructions. The stored plurality of instructions may be executed by a processor of a UE.
The stored plurality of instructions may cause the UE to: connect to a non-terrestrial network (NTN) satellite serving a targeted service area via a service link, wherein the NTN satellite is connected to a gateway via a feeder link; receive, from the NTN satellite, information on the service time of a serving cell; and start to perform neighbor cell measurement at a time point a certain time before the end of the service time of the serving cell, regardless of whether the cell quality of the serving cell meets the cell selection criterion S, wherein the NTN satellite provides an earth fixed system, wherein the information on the service time includes information on when the serving cell is going to stop serving the targeted service area, and wherein the information on the service time is provided based on Coordinated Universal Time (UTC).

The present disclosure can have various advantageous effects. For example, by performing cell measurement based on service time or reference location, power saving for the UE is achieved efficiently. Advantageous effects obtained through specific examples of the present specification are not limited to the effects listed above. For example, there may be a variety of technical effects that a person having ordinary skill in the related art can understand or derive from this specification. Accordingly, the specific effects of the present disclosure are not limited to those explicitly described herein, but may include various effects that may be understood or derived from the technical features of the present disclosure.

Claims in the present disclosure can be combined in various ways. For instance, technical features in method claims of the present disclosure can be combined to be implemented or performed in an apparatus, and technical features in apparatus claims can be combined to be implemented or performed in a method. Further, technical features in method claim(s) and apparatus claim(s) can be combined to be implemented or performed in an apparatus. Further, technical features in method claim(s) and apparatus claim(s) can be combined to be implemented or performed in a method. Other implementations are within the scope of the following claims.
11863292
DETAILED DESCRIPTION

The embodiments described include methods, apparatuses, and systems for coordinated satellite and terrestrial channel utilization. Channel utilization is controlled by determining channel sharing maps based on overlapping coverage areas of multiple base stations. The determined channel sharing maps are provided to the multiple base stations. Each of the multiple base stations controls the timing of wireless communication with hubs within its coverage area based on one or more discrete communication delays of the base station and a communication delay of a preceding base station according to the channel sharing map.

FIG. 1 shows a wireless communication system that includes a satellite base station 120 wirelessly communicating with a plurality of hubs 171-176 through a wireless link 180 and through a satellite 110, wherein terrestrial base stations 111-114 are located within a coverage area 140 of the satellite base station 120, according to an embodiment. For an embodiment, a controller 130 is operative to determine one or more discrete communication delays for each base station 111-114, 120 based upon a maximum propagation delay between each base station 111-114, 120 and one or more of the plurality of hubs 171-176. For an embodiment, the maximum propagation delay for a base station is determined by measuring roundtrip delay times between the base station and all hubs wirelessly connected to the base station. For an embodiment, the maximum propagation delay for a base station is estimated based on a location of the satellite that completes a wireless link between the base station and the hubs, a location of the base station, and the base station (satellite) coverage area. In implementation, the maximum propagation delay captures 95% of the likely use cases of the described embodiments.

For at least some embodiments, the maximum propagation delay is used as a mechanism to ensure that all hubs coordinate with each other when transmitting data so that collisions do not happen at the (receiving) base station. With a maximum propagation delay, each hub ensures that its actual delay equals that number (the maximum propagation delay) by holding onto (holding off transmission of) its messages (packets or data for wireless transmission) for longer than otherwise needed. It is to be realized that there may be other mechanisms utilized to try to ensure collision avoidance that do not use the maximum propagation delay. One example includes setting discrete delay timing blocks and forcing all hubs to adhere to one or more of the discrete delay timing blocks.

An embodiment includes determining one or more discrete communication delays for each base station based upon a maximum propagation delay between each base station and one or more of the plurality of hubs. For an embodiment, the discrete communication delay is the maximum propagation delay. For an embodiment, the discrete communication delay is longer than the maximum propagation delay. For an embodiment, the discrete communication delays are based upon the plurality of propagation delays between the hubs and the base station and are aligned according to a base station frame structure. As stated, the one or more discrete communication delays for each base station 111-114, 120 are determined based upon a maximum propagation delay between each base station 111-114, 120 and one or more of the plurality of hubs 171-176. For an embodiment, the one or more discrete communication delays are equal to or less than the maximum propagation delay for each base station.
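As one way to picture the delay derivation described above, the sketch below rounds the largest measured roundtrip delay up to a frame boundary and shows the per-hub hold-off; the 10 ms frame length and all names are illustrative assumptions, not values from the disclosure.

```python
import math
from typing import List

def discrete_communication_delay_ms(rtt_samples_ms: List[float],
                                    frame_ms: float = 10.0) -> float:
    """Take the maximum measured roundtrip (propagation) delay to any hub
    attached to a base station and round it up to the next frame boundary,
    keeping the discrete delay aligned with the base station frame structure."""
    max_delay_ms = max(rtt_samples_ms)
    return math.ceil(max_delay_ms / frame_ms) * frame_ms

def hub_holdoff_ms(discrete_delay_ms: float, own_propagation_ms: float) -> float:
    """Each hub pads its own (smaller) propagation delay up to the shared
    discrete delay by holding its messages, so transmissions from all hubs
    arrive aligned, avoiding collisions at the receiving base station."""
    return max(0.0, discrete_delay_ms - own_propagation_ms)

# Example: hubs measured at 238, 251 and 244 ms roundtrip -> 260 ms discrete delay.
delay = discrete_communication_delay_ms([238.0, 251.0, 244.0])
print(delay, hub_holdoff_ms(delay, 238.0))  # 260.0, 22.0
```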
For an embodiment, the controller 130 is further operative to generate a channel sharing map that includes a timing and order of communication between each base station and one or more of the plurality of hubs. For an embodiment, the channel sharing map includes the timings and the sequence or order of wireless communication between each base station 111-114, 120 and one or more of the plurality of hubs 171-176. For at least some embodiments, multiple unique channel sharing maps are created. For an embodiment, a different channel sharing map is created for each base station having a different overlapping coverage area, wherein the overlapping coverage area is determined by the overlapping coverage areas of the base stations. For an embodiment, the controller 130 communicates the channel sharing map(s) to the plurality of base stations 111-114, 120. Further, for an embodiment, the controller additionally or alternatively communicates the channel sharing map(s) to the plurality of hubs 171-176.

For an embodiment, each of the plurality of base stations 111-114, 120 operates (or is configured) to time wireless communication with the plurality of hubs 171-176 based on the channel sharing map, the one or more discrete communication delays of the base station, and a communication delay of a preceding base station according to the channel sharing map. For an embodiment, the timing of the wireless communication from each of the base stations provides continuity of forward carrier or downlink communication (base station to hub) reception by the hubs. The delay of communication from different base stations to different hubs varies. Accordingly, each base station controls the timing of downlink communication to the hubs to optimize (or improve) utilization of the communication channel used for the downlink communication. For an embodiment, the preceding base station is the base station wirelessly communicating with the one of the plurality of hubs immediately before the base station wirelessly communicates with the one of the plurality of hubs. For an embodiment, the preceding base station is identified based on the order or sequence of the base stations included within the channel sharing map.

As shown, for an embodiment, at least one of the plurality of base stations (base station 120) communicates with at least one of the plurality of hubs 171-176 through a satellite network (through satellite 110), and/or at least one of the plurality of base stations 111-114 communicates with at least one of the plurality of hubs 171-176 through a terrestrial network. For an embodiment, the plurality of base stations is part of satellite networks. For an embodiment, the plurality of base stations is part of satellite and terrestrial networks. For an embodiment, at least one of the base stations is part of a GEO (geosynchronous) satellite network, and at least one other of the base stations is part of a LEO (low earth orbit) satellite network. The difference in lengths of the wireless links of the different satellite networks is very large, and as a result, the roundtrip delays (propagation delays) between the hubs and the base stations of the different networks vary by large amounts of time. Accordingly, channel utilization is improved by using the described embodiments for timing transmission of wireless communication between the base stations and the hubs.
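A channel sharing map as described (an ordered list of timed allocations, from which the preceding base station can be identified) might be represented as follows. This is a sketch under assumed field names, not the disclosure's actual data format.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class MapEntry:
    base_station_id: str
    start_ms: float        # allocation start on the shared channel timeline
    duration_ms: float

@dataclass
class ChannelSharingMap:
    entries: List[MapEntry]  # ordered: entry i precedes entry i + 1

    def preceding(self, bs_id: str) -> MapEntry:
        """The base station communicating immediately before `bs_id`
        according to the map's order."""
        ids = [e.base_station_id for e in self.entries]
        return self.entries[ids.index(bs_id) - 1]  # index -1 wraps to the last entry

    def tx_start_ms(self, bs_id: str, delay_ms: Dict[str, float]) -> float:
        """A base station advances its transmit time by its own communication
        delay so reception at the hubs abuts the preceding allocation."""
        entry = next(e for e in self.entries if e.base_station_id == bs_id)
        return entry.start_ms - delay_ms[bs_id]
```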
As described, the satellite network includes a directional wireless beam from the satellite 110 that has a physical coverage area 140. Further, each of the terrestrial base stations 111-114 has a corresponding coverage area 151, 153, 155, 157 that overlaps with the coverage area 140 of the satellite network. Further, coverage areas of the terrestrial network can overlap each other (such as coverage areas 151, 153). Further, coverage areas of separate satellite networks can overlap with each other (not shown in FIG. 1). The wireless links between the hubs 171-176 and the base station 120 of the satellite network are substantially longer than the wireless links between the terrestrial base stations 111-114 and the hubs 171-176, and therefore, wireless communication signals traveling through the wireless links of the satellite network have a much longer propagation time. Accordingly, in order to efficiently utilize the available wireless communication channel, the timing of transmission between the hubs 171-176 and the base stations 111-114, 120 should be controlled. The described embodiments provide coordinated satellite and terrestrial channel utilization when the maximum propagation delays of the plurality of base stations vary across the plurality of base stations with overlapping coverage areas by greater than a threshold amount. The roundtrip delays between hubs and terrestrial base stations, and between hubs and satellite base stations, differ by greater than the threshold amount.

For an embodiment, the controller further operates to communicate the channel sharing map to one or more of the plurality of hubs 171-176 through at least one of the base stations 111-114, 120. For an embodiment, the controller operates to control at least one of the plurality of base stations 111-114, 120 to broadcast the channel sharing map to one or more of the plurality of hubs 171-176 through the at least one of the base stations 111-114, 120. Once the broadcast has been received by the hubs 171-176, the hubs are able to properly time reception of wireless signals from the different base stations 111-114, 120.

FIG. 2 shows a physical channel 240 between a hub modem 234 of a hub 220 and a base station 230, and a virtual channel 260 between an application 232 of the hub 220 and a system platform 210, according to an embodiment. For an embodiment, a multicast manager 212 of the system platform 210 generates a multicast scheduling control packet based upon a distribution of a plurality of network registered hubs. The distribution of the network registered hubs can be based on a distribution of channel sharing maps, a distribution of firmware operating on the hubs, a distribution of customers of the hubs, a distribution of the applications in use on the hubs, and/or a distribution of the geography of the hubs. The application 232 controls enabling or disabling of the multicast reception 250 of the hub modem 234. For an embodiment, the system platform 210 communicates the multicast scheduling control packet to the base station 230. For an embodiment, the base station 230 generates a plurality of multicast channel configurations based upon the multicast scheduling control packet. Further, for an embodiment, the system platform 210 also communicates the multicast scheduling control packet to the wireless communication hub 234, wherein the wireless communication hub 234 is one of the plurality of network registered hubs. For an embodiment, the system platform 210 communicates the multicast scheduling control packet to the wireless communication hub 234 through the base station 230. However, the multicast scheduling control packet does not have to be communicated to the wireless communication hub 234 through the base station 230.
That is, for example, the system platform 210 may communicate the multicast scheduling control packet to the wireless communication hub 234 through another means. For example, a cellular or other wireless network (not shown in FIG. 2) can be utilized to facilitate this communication. After having received the multicast scheduling control packet from the system platform 210, the wireless communication hub 234 selects specific multicast channels from the plurality of multicast channel configurations, to receive specific multicast data based upon a condition of the hub and the multicast scheduling control packet. That is, the multicast scheduling control packet includes multicast channel configurations from which the wireless communication hub 234 makes a selection. For an embodiment, the selection is based on a condition of the wireless communication hub 234, wherein the condition is based on a configuration of the wireless communication hub 234, an environment of the wireless communication hub 234, the wireless coverage area attachment of the wireless communication hub 234, or a position of the hub within the channel sharing map. For at least some embodiments, the configuration includes a current firmware version of the hub. For at least some embodiments, the configuration includes a hub battery status. For at least some embodiments, the configuration includes a subscription of the hub to certain multicast services. For at least some embodiments, the configuration includes a customer ID of the hub. For at least some embodiments, the configuration includes a multicast channel priority specified in the multicast channel configuration. For at least some embodiments, the environment includes a location of the hub. After having selected the specific multicast channels, the wireless communication hub 234 then receives the multicast data through the selected specific multicast channel configurations (a condition-based selection of this kind is sketched after this passage).

For another embodiment, before having received a channel sharing map, the hubs operate in a higher power consumption state as the hubs wait for the channel sharing map to be broadcast from the base stations. Once the hubs have received the channel sharing maps, the hubs can synchronize with one or more of the base stations, and the hubs have the information needed to know when communication with each of the hubs is to occur; the hubs can then switch to a lower power consumption state, as the hubs do not need to be operating when not wirelessly communicating with the base stations. For at least some embodiments, once the plurality of hubs 171-176 has received the channel sharing map, each of the plurality of hubs 171-176 operates to coordinate the timing of uplink wireless communication to the base stations 111-114, 120 based upon the one or more discrete communication delays, a propagation delay of one of the base stations 111-114, 120, and the shared channel map. The coordination of the timing of the uplink wireless transmission (from the hubs to the base stations 111-114, 120) has the purpose of avoiding interference at the base stations 111-114, 120 in the return or uplink direction.
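Returning to the condition-based multicast selection described above, the following sketch filters the advertised channel configurations against a hub's own condition (firmware version, battery status, subscriptions, channel priority). The structures and the exact filtering policy are illustrative assumptions, not the disclosure's method.

```python
from dataclasses import dataclass
from typing import List, Set

@dataclass
class HubCondition:
    """Condition inputs named in the description; the structure is assumed."""
    firmware_version: str
    battery_low: bool
    subscribed_services: Set[str]
    customer_id: str

@dataclass
class MulticastChannelConfig:
    channel_id: int
    service: str
    target_firmware: str = ""     # e.g., a firmware-update channel
    priority: int = 0             # priority specified in the configuration

def select_multicast_channels(cfgs: List[MulticastChannelConfig],
                              cond: HubCondition,
                              max_channels: int = 2) -> List[int]:
    """Keep only channels relevant to this hub's condition: subscribed
    services, or a firmware-update channel for a version the hub lacks.
    A low battery limits the hub to the single highest-priority channel."""
    relevant = [c for c in cfgs
                if c.service in cond.subscribed_services
                or (c.target_firmware and c.target_firmware != cond.firmware_version)]
    relevant.sort(key=lambda c: c.priority, reverse=True)
    limit = 1 if cond.battery_low else max_channels
    return [c.channel_id for c in relevant[:limit]]
```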
An embodiment further includes at least one of the plurality of hubs 171-176 operating to maintain an estimate of the roundtrip time for each base station whose coverage area the hub is within. That is, each hub 171-176 is within the wireless coverage area of one or more base stations. For this embodiment, each hub 171-176 maintains an estimate of the roundtrip delay between the hub and each of the base stations with which the hub can maintain wireless communication because the hub is within the wireless coverage area of the base station. For an embodiment, the at least one of the plurality of hubs further operates to time communication with a current active base station based on the maintained estimate of the roundtrip delay with the current active base station and the channel sharing map. It is to be understood that the terms roundtrip delay and propagation delay may be used interchangeably.

Further, for an embodiment, at least one of the plurality of hubs 171-176 maintains an estimate of the frequency correction values needed to phase lock onto the carriers of multiple base stations 111-114, 120. That is, the at least one of the plurality of hubs operates to select a frequency correction value based on the maintained estimates of the frequency correction values for the current active base station as identified by the channel sharing map. A frequency correction value needed by the hub to lock (frequency or phase lock) to each of the base stations the hub can wirelessly communicate with is maintained. The channel sharing map provides the hub with the information needed to project which base station the hub will connect with at different times. The hub then accesses the maintained frequency correction value needed to lock to the base station indicated by the channel sharing map.

For an embodiment, each of the hubs maintains physical channel properties for the base stations identified by the channel sharing map. For an embodiment, the physical channel properties include, but are not limited to, a channel frequency response, a channel path loss, a doppler shift, a multi-path delay, and/or a received signal strength. Further, for an embodiment, the physical channel properties are maintained as a function of time. For an embodiment, the saved physical channel properties are used by the hubs to minimize the synchronization time with the base station identified based on the channel sharing map.

For an embodiment, the base stations of the channel sharing map can each have different data transmission capacities and latencies. For an embodiment, depending on the service provided by the connecting base station, a hub can further prioritize the data transmission applications to support. For example, a hub can prefer one base station for multicast applications and another base station for unicast applications. For an embodiment, the controller further helps in providing a communication context of a hub to the base station when the hub switches from one base station to another. For an embodiment, based on the channel sharing map, the controller moves the context of the hub from one base station to another. In this way, hubs can get uninterrupted service while switching from one base station to another. For an embodiment, the controller also helps in maintaining context when the hub moves from one base station to another. For an embodiment, the channels of the channel sharing map occupy a common frequency spectrum. Therefore, the channel sharing map provides for improved utilization of the common frequency spectrum by scheduling the time-coordinated wireless communication through the common frequency spectrum.
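The per-base-station state a hub is described as maintaining (roundtrip estimate, frequency correction, physical channel properties) suggests a simple keyed cache, sketched below with assumed field names; the disclosure does not specify a data layout.

```python
from dataclasses import dataclass
from typing import Dict

@dataclass
class BaseStationState:
    """Per-base-station state a hub keeps so it can re-acquire quickly when
    the channel sharing map hands it to that base station."""
    rtt_estimate_ms: float
    freq_correction_hz: float   # correction needed to phase-lock the carrier
    path_loss_db: float
    doppler_hz: float
    rssi_dbm: float

class HubStateCache:
    def __init__(self) -> None:
        self.per_bs: Dict[str, BaseStationState] = {}

    def update(self, bs_id: str, state: BaseStationState) -> None:
        self.per_bs[bs_id] = state

    def on_switch(self, next_bs_id: str) -> BaseStationState:
        """Look up the stored state for the base station the channel sharing
        map identifies as active next, minimizing synchronization time."""
        return self.per_bs[next_bs_id]
```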
Owing to the timing of wireless communication between the base stations and the hubs, the base stations and the hubs need to be synchronized. For an embodiment, the plurality of base stations and the plurality of hubs maintain synchronization through a global satellite network. That is, the global satellite network provides a signal that can be locked onto by the base stations and the hubs. For an embodiment, generation of the shared channel map is influenced by the communication delays. For an embodiment, a timing of an allocation within the shared channel map is based on (to minimize) the difference in the communication delays between preceding and subsequent base stations of the shared channel map. That is, the ordering of the base stations according to the channel sharing map is selected such that directly successive base stations of the channel sharing map have communication delays that are as similar as practical. For an embodiment, the generation of the channel sharing map is additionally influenced by a Service Level Agreement (SLA), which minimizes (or reduces) the downtime between carrier switches (that is, minimizes the downtime of the hub(s) when switching from one base station to another base station).

FIG. 3 shows determination of a communication delay between a base station 340 and a hub 310, according to an embodiment. An embodiment includes the base station operating to transmit a packet 311 containing a first timestamp representing the transmit time of the packet. After transmission of the packet 311, the hub 310 receives the packet 311, containing the first timestamp, through the satellite link 315 (including satellite 391). Further, the hub operates to receive from a local time source a second timestamp corresponding to the time of reception of the packet with the first timestamp. The hub then operates to calculate the time difference between the first timestamp and the second timestamp, and a propagation delay 350 of the base station based on the calculated time difference. As previously described, for an embodiment, the base stations determine the communication delays. For an embodiment, the hubs receive the communication delay(s) from the base station(s), subtract the propagation delay from the communication delay, and then hold (delay) any messages (wireless communication) by that additional amount of time between when the hub is scheduled to transmit the message and when the hub actually transmits the message.

For an embodiment, the hub 310 receives the second timestamp from a local source 320 of the hub 310, corresponding to the time of wireless reception of the first timestamp received from the base station 340. The local source 320 of FIG. 3 is shown as being internal to the hub 310, but the local source 320 does not have to be internal to the hub 310. For an embodiment, a controller 368 of the hub 310 operates to calculate the time difference between the first timestamp and the second timestamp. Further, the controller 368 operates to store the time difference between the first timestamp and the second timestamp in memory 330. For an embodiment, the controller 368 additionally stores the time of the calculation of the time difference. An embodiment further includes the hub operating to store the time difference between the first timestamp and the second timestamp, calculate a predictive model for predicting the propagation time based on the time difference between the first timestamp and the second timestamp, and estimate the propagation time between the base station and the hub at a given time, comprising querying the predictive model with the time.
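The FIG. 3 computation, the uplink hold-off, and the simplest (pass-through) predictive model mentioned below might look as follows. Clock synchronization between the base station and the hub's local time source is assumed, as the description requires; all names are illustrative.

```python
from typing import List, Tuple

def propagation_delay_s(first_ts_s: float, second_ts_s: float) -> float:
    """The first timestamp is the base station's transmit time carried in the
    packet; the second is the hub's local time at reception. Their difference
    is the one-way propagation delay."""
    return second_ts_s - first_ts_s

def uplink_holdoff_s(communication_delay_s: float, prop_delay_s: float) -> float:
    """The hub subtracts its measured propagation delay from the base
    station's communication delay and holds its scheduled messages by the
    remainder."""
    return max(0.0, communication_delay_s - prop_delay_s)

class PassThroughModel:
    """The constant / pass-through predictive model: queries simply return
    the last stored time difference."""
    def __init__(self) -> None:
        self.samples: List[Tuple[float, float]] = []  # (measured_at_s, delay_s)

    def inject(self, measured_at_s: float, delay_s: float) -> None:
        self.samples.append((measured_at_s, delay_s))

    def query(self, at_time_s: float) -> float:
        return self.samples[-1][1]
```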
For an embodiment, only one predictive model per base station of a shared channel map allocation is queried at a time. For an embodiment, only a live or current predictive model is updated. For an embodiment, the predictability of the propagation delay between the base station and the hub is a function of the frequency of new information being injected into the prediction model. For example, if the system dynamics result in a slowly changing system (that is, a slowly changing propagation delay), the model is accurately predictable with lower-frequency injections of new pieces of information. The validity/predictability of the propagation delay prediction model is proportionally related to the new-information frequency and the rate of change of the system dynamics. For at least some embodiments, the sampled data injected into the prediction model is two-dimensional, including the calculated time difference between the first timestamp and the second timestamp, and the time of the calculation of the time difference. The purpose of the two-dimensionality is to accommodate variance and uncertainty in the periodicity of information injection into the propagation delay prediction model. For example, the prediction model may receive 5 consecutive samples wherein new information is injected every 10 seconds, and for the 6th instance there is a 20 second gap.

The internal (predictive) model could take on a number of different forms depending upon the system dynamics it is describing. Some models are better suited than others for different real-world systems. Accordingly, at least some embodiments include adaptively selecting a base model based on characteristics of the first timestamp and the second timestamp, and/or other available information related to the propagation delay between the base station and the hubs. For an embodiment, the predictive model is as simple as a constant model or pass-through model. For at least some embodiments, queries of the predictive model return the last received time difference. Depending upon the number of available time difference calculations and how recently they were made, the order of the model (that is, how many derivatives or higher power terms) may dynamically vary. In one instance, when a model is first initiated and only one data point is available, the model may utilize a zeroth-order estimation technique; however, as additional data points become available, 1st, 2nd and 3rd order terms may be utilized to increase the fidelity of the predictive model and to increase the time period of validity of the predictive model by capturing higher-order system dynamics. For an embodiment, increasing the frequency of data sampling and model updating can also allow more of the underlying system dynamics to be captured and modeled. This is very much related to the Nyquist frequency.

In practice, it is often not easy for the hub to know the network time (what time the base station thinks it is). As previously described, wireless communication between the hub and the base station through the wireless link demands synchronization of the hub with the base station. In reality, it is not desirable to receive a new timestamp from the base station every X seconds. An embodiment includes the hub receiving one or more first timestamps from the base station once, or very infrequently. For an embodiment, the hub then uses well-characterized and non-divergent discrete networking timing increment "ticks" to forward-integrate the network time.
For an embodiment, the discrete "tick" comes in the form of the current operating frame number of the system. The challenge is that the frame number can be ambiguous because frame numbers are cyclical (that is, 1 2 3 4 5 . . . 1 2 3 4 5). For an embodiment, the discrete network counting ticks include cyclical frame counters; for this embodiment, the first timestamp is estimated by selecting, from a group of possible cycle counts, the value which produces a propagation time that is within a predefined acceptable value range. Given an expectation around the propagation time, there exists a unique solution for how many frame number cycles have occurred over a large, but finite, time period.

FIG. 4 shows a predictive model with two different control loops 410, 420 for estimating the propagation delay, according to an embodiment.

Predictive Model(s)

Due to the large RTT (propagation delay) drift (up to ˜1.2 μs/s), a new RTT must be calculated and sent to the modem of the hub at a frequency high enough to allow adjustment for the drift of the propagation delay between the base station and the hub. This can place a large burden on the requirement for, and availability of, a GNSS (Global Navigation Satellite System) receiver of, for example, the hub. However, estimation of the RTT drift can be simplified due to the well-behaved and characterizable motion of the satellite within the wireless link between the base station and the hub. FIG. 4 shows an embodiment of a nested loop model for RTT calculation (Loop 1). For an embodiment, the exterior loop 410 consists of time differences (Ri) being calculated by taking the difference between the network time 442 (at the base station) and the local time 444 (at the hub) during an NB-IoT (Narrow Band Internet of Things) modem sleep cycle. For an embodiment, this time delta Ri is sent to a local primitive RTT model 446 (that is, the propagation delay predictive model). For an embodiment, the RTT model 446 provides an equation for the RTT based upon the current GNSS time (a 0.5 ppm to 1 ppm clock drift poses negligible accuracy concerns as an input to the RTT model 446) and a series of the i most recent time deltas. The inner loop 420 consists of the RTT model (executed on the NB-IoT chipset) pushing a new RTT to the modem every <1 second. A key observation of this method is that new RTT values can be sent to the modem without the modem going into sleep mode. There is still a freshness requirement on the RTT model which requires new GNSS readings on a periodic basis, but the inclusion of the model reduces the overall sample frequency requirement of the local GNSS and decouples taking GNSS readings from updating the RTT. For an embodiment, the modem of the hub 448 and the GNSS receiver of the hub utilize the same antenna and RF chain within the hub. For an embodiment, the UE (user equipment), or hub, performs an Ri (difference between the first timestamp and the second timestamp) measurement using a GNSS timestamp and the network time available from SIB16 and the frame counter. For an embodiment, the UE requires c-DRX and e-DRX sleep modes (3GPP defined sleep modes) to enable cohabitation between a GNSS receiver and a modem using the same RF chain to support a GNSS measurement. For an embodiment, the frequency of the Ri measurements depends on the sleep cycle, with a required sleep duration of <10.24 s.
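One plausible reading of the Loop 1 / Loop 2 split is sketched below: sparse (GNSS time, Ri) samples measured during modem sleep fit a drift term, and the inner loop queries the model often without fresh GNSS reads. The first-order fit and the sample cap are assumptions; as described above, the model order may grow as more points arrive.

```python
from typing import List, Tuple

class DriftingRttModel:
    """Outer loop (Loop 1) injects sparse (gnss_time_s, r_i_s) samples;
    the inner loop (Loop 2) queries every <1 s without new GNSS readings."""

    def __init__(self, max_samples: int = 5) -> None:
        self.samples: List[Tuple[float, float]] = []   # (gnss_time_s, r_i_s)
        self.max_samples = max_samples

    def inject(self, gnss_time_s: float, r_i_s: float) -> None:
        self.samples.append((gnss_time_s, r_i_s))
        self.samples = self.samples[-self.max_samples:]  # finite, limited series

    def query(self, gnss_time_s: float) -> float:
        if len(self.samples) < 2:
            return self.samples[-1][1]                 # zeroth-order fallback
        (t0, r0), (t1, r1) = self.samples[-2], self.samples[-1]
        if t1 == t0:
            return r1
        drift = (r1 - r0) / (t1 - t0)                  # e.g., up to ~1.2 us/s
        return r1 + drift * (gnss_time_s - t1)         # extrapolate to query time
```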
However, the sleep cycle must also be long enough to accurately capture a GNSS reading). For an embodiment, whenever a TA (timing advance) correction is available from the base station, it should be used to correct the measured delay; in addition, it can be used to adjust the frequency of Loop 1410or Loop 2420ofFIG.4. For an embodiment, the RTT (propagation delay) is calculated using the predictive model based upon a finite and limited series of previous Ri measurements. For an embodiment, the predictive model produces an RTT output given an input of current GNSS time. For an embodiment, this process occurs at a high frequency cycle (1 Hz) and can occur even when the modem is not in sleep mode. FIG.5shows various overlapping coverage areas of satellite base stations and terrestrial (cellular) base stations, according to an embodiment. As shown, satellite coverage areas542,544overlap with each other. Further, satellite coverage area542overlaps with terrestrial coverage areas551,552,553. Further, at least one terrestrial coverage area552overlaps with another terrestrial coverage area553. Further, satellite coverage area544overlaps with terrestrial coverage areas551,554. For an embodiment, each base station has a defined coverage area, and a unique channel sharing map is generated for each set of base stations having uniquely overlapping coverage areas. For an embodiment, a unique channel sharing map is generated for each base station based on one or more other base stations that have an overlapping coverage area with the base station. For example, inFIG.5, there are five unique channel sharing maps for the different base stations having the five uniquely overlapping coverage areas. A first map for the base stations B and C includes a timing schedule of transmissions for the base stations 1, B, C for the overlapping coverage areas of base station 1 and base stations B and C. A second map for the base station A includes a timing schedule of transmissions for the base stations 1, 2, A for the overlapping coverage areas of base stations 1 and 2, and base station A. A third map for the base station D includes a timing schedule of transmissions for the base stations 2, D for the overlapping coverage areas of base station 2, and base station D. A fourth map for the base station 1 includes a timing schedule of transmissions for the base stations 1, 2, A, B, C for the overlapping coverage areas of base stations 1, 2, and base stations A, B and C. A fifth map for the base station 2 includes a timing schedule of transmissions for the base stations 1, 2, A, D for the overlapping coverage areas of base stations 1, 2, and base stations A, and D. For an embodiment, overlapping coverage areas of the base stations change over time, and the unique channel sharing map is adaptively updated based on the changes in the overlapping coverage areas. For an embodiment, the overlapping coverage areas change as a function of time due to motion of the transmitting elements (for example, satellite motion), and thus the uniquely defined channel sharing maps also change as a function of time. For an embodiment, the coverage overlaps of the base stations are determined over time. Some examples for determining the coverage overlap include telemetry monitoring of the position, velocity, and orbit of the satellite and propagating that forward in time (for example, 1 week) to generate the unique network map. Further, for at least some embodiments, feedback from the hubs is utilized to determine the coverage overlaps.
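The per-base-station formulation above (a unique map generated for each base station from the set of base stations overlapping it) can be sketched as follows. This is a hypothetical Python illustration; the footprint representation (sets of abstract coverage cells) and all names are assumptions, chosen only to reproduce the FIG.5 overlap relationships.

```python
def channel_sharing_map_members(base_stations):
    """For each base station, the set of base stations whose coverage
    overlaps its own (including itself); each distinct set corresponds
    to one unique channel sharing map."""
    maps = {}
    for name, cells in base_stations.items():
        members = {name}
        for other, other_cells in base_stations.items():
            if other != name and cells & other_cells:
                members.add(other)
        maps[name] = frozenset(members)
    return maps

# Footprints as sets of abstract coverage cells, chosen to reproduce the
# FIG.5 overlap structure (satellite base stations 1, 2; terrestrial A-D).
stations = {
    "1": {"p", "q", "r", "s"},
    "2": {"s", "t", "u"},
    "A": {"s"},
    "B": {"p", "w"},
    "C": {"q", "w"},
    "D": {"u"},
}
maps = channel_sharing_map_members(stations)
print(sorted(maps["B"]))   # ['1', 'B', 'C'] -> the first map of FIG.5
print(sorted(maps["D"]))   # ['2', 'D']      -> the third map of FIG.5
```

Note that B and C produce the same member set, so they share a single map, yielding the five unique maps of FIG.5.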
For example, a received signal strength indicator (RSSI) of signals received at a hub for different base stations can be monitored. The value of the RSSI and changes in the RSSI can be used to further refine the coverage overlaps. At least some embodiments include a first base station and a second base station sharing a core network and traffic from a hub when a wireless connection of the hub dynamically transfers from the first base station to the second base station. For at least some embodiments, the core network includes at least some of session management, security/authorization, device provisioning, and data routing. The core network allows for transferring between cell towers (base stations) without having to restart a call or wireless connection because the core network does a session handover. For example, the core network manages file transfer and data loss while switching between base stations of two networks. The described embodiments can be utilized to reduce network switchover time to ˜1-5 milliseconds, which allows avoiding a session interruption by also including the core network to manage the switchover from one network to another network. It is to be understood that the described embodiments do not ensure complete continuity of reception for all hubs, but rather reduce the gaps in continuity down to the spread in propagation delay for hubs associated with a single base station. For example, a network that includes both satellite and terrestrial base stations may have a maximum round-trip-time spread between the satellite base station and the terrestrial base station of 500 milliseconds. However, by using the controlled timing and the channel sharing maps, the realized timing gaps can be reduced to 4 milliseconds. FIG.6shows some examples of a timing of base station transmission based on a channel sharing map, according to an embodiment. The channel sharing map ofFIG.6is generated for base station 1610and base station 2620which have overlapping coverage areas, wherein the hubs631,632,633are located within the coverage areas of the base station 1610and base station 2620. The base stations BS1610, BS2620wirelessly communicate with the hubs631,632,633through wireless links680. The channel sharing map660shows a sequence of time allocations of the base stations BS1610and BS2620. Ideally, at, for example, hub1631, the receptions of wireless communication from BS1 and BS2 are timed to efficiently utilize the transmission channel. That is, ideally, when reception of wireless signals from BS1 stops, the reception of wireless signals from BS2 immediately starts, thereby most efficiently utilizing the transmission channel. Efficient use of the channel includes minimal dead time in which the hub1631is not wirelessly communicating with either of the base stations BS1, BS2. However, the communication delay from one base station to another base station will vary. Therefore, if the timing of the transmission of the wireless communication from the base stations BS1, BS2 is not precisely controlled, then the hub (hub1) will have dead times in its wireless communication, and channel efficiency will be lost. The shared channel map680representation shows the timing of the transmissions from BS1 and BS2 being controlled to efficiently use the transmission channel. As shown, the transmission from BS1 and BS2 begins before the allocations indicated by the channel sharing map to account for the communication delay between each base station BS1, BS2 and the hub1.
As shown, BS1 begins transmission at Toff1 before the timing of BS1 of the channel sharing map660. Toff1 accounts for the communication delay between BS1 and hub1. Further, BS2 begins transmission at Toff2 before the timing of BS2 of the channel sharing map660. Toff2 accounts for the communication delay between BS2 and hub1. If properly timed, the wireless communication of BS1 stops being received by hub1 at the same time that the wireless communication from BS2 starts being received by hub1. For at least some embodiments, each of the hubs will also have its own channel sharing map which provides a timed schedule of wireless communication with the base stations. Similarly, the hubs need to time the transmission of wireless communication with the different base stations to minimize the downtime. As previously described, the propagation delay between a hub and different base stations will vary. Accordingly, the timing of the transmission to the different base stations needs to be adjusted based on the propagation delay between the hub and the corresponding base station. For an embodiment, the transmission from hub1631to BS1 and BS2 begins before the allocations indicated by the channel sharing map to account for the communication delay between the hub1631and each base station BS1, BS2. For example, hub1631may begin transmission at Toff1 before the timing of the scheduled hub1-to-BS1 allocation of the channel sharing map of hub1. Toff1 accounts for the communication delay between hub1 and BS1. Further, hub1 begins transmission at Toff2 before the timing of the hub1-to-BS2 allocation of the channel sharing map of hub1. Toff2 accounts for the communication delay between hub1 and BS2. FIG.7shows multiple base stations wirelessly communicating with multiple hubs, and further shows the propagation times for each of the wireless links between the base stations and the hubs, and further shows selected communication delays, according to an embodiment. As shown, a base station 1710has hubs731,732within its coverage area, a base station 2720has hubs731,732,733,734within its coverage area, and a base station 3730has hubs733,734within its coverage area. The propagation delays of the wireless links from the base station 1710to the hubs731,732are 5 s and 10 s. Therefore, the communication delay of the base station 1710is selected to be 10 s, the maximum propagation delay of the wireless links of the base station 1710. The propagation delays of the wireless links from the base station 2720to the hubs731,732,733,734are 20 s, 22 s, 22 s, 25 s. Therefore, the communication delay of the base station 2720is selected to be 25 s, the maximum propagation delay of the wireless links of the base station 2720. The propagation delays of the wireless links from the base station 3730to the hubs733,734are 4 s and 4 s. Therefore, the communication delay of the base station 3730is selected to be 4 s, the maximum propagation delay of the wireless links of the base station 3730. As previously described, each base station determines one or more discrete communication delays (that is, a communication delay for each base station) for the base station based upon a maximum propagation delay between the base station and one or more of the plurality of hubs.
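The offset arithmetic just described reduces to subtracting each link's propagation delay from its scheduled allocation. The following minimal Python sketch uses hypothetical allocation times and delays; the names and values are assumptions, not from the source.

```python
def transmit_start_times(channel_map, prop_delay):
    """Compute when each base station should begin transmitting so that its
    signal arrives at the hub exactly at its channel-map allocation.
    `channel_map` maps base station -> scheduled reception start (seconds);
    `prop_delay` maps base station -> propagation delay to the hub (seconds)."""
    return {bs: start - prop_delay[bs]      # begin Toff = prop_delay early
            for bs, start in channel_map.items()}

# Hub1's map: BS1's allocation starts at t=0.0 s, BS2's at t=2.0 s.
channel_map = {"BS1": 0.0, "BS2": 2.0}
prop_delay = {"BS1": 0.25, "BS2": 0.50}     # Toff1 = 0.25 s, Toff2 = 0.50 s
print(transmit_start_times(channel_map, prop_delay))
# {'BS1': -0.25, 'BS2': 1.5} -> each BS transmits Toff before its allocation
```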
As described, for an embodiment, the discrete communication delays then are used to determine a channel sharing map in which the base stations time their transmissions based upon the channel sharing map to enable continuous reception of the signal at the hub. As previously described, each base station operates to time wireless communication with the plurality of hubs based on the channel sharing map, the one or more discrete communication delays of the base station, and a communication delay of a preceding base station according to the channel sharing map. As shown inFIG.6, each base station operates to adjust a transmission time of wireless data being transmitted to a hub based on the discrete communication delay of the base station, and a communication delay of the preceding base station. Further, each of the plurality of hubs operates to coordinate a timing of uplink wireless communication to the base stations based upon the one or more discrete communication delays, a propagation delay of one of the base stations, and the shared channel map. Further, at least one of the plurality of hubs operates to maintain an estimate of roundtrip time (propagation delay) for each base station whose coverage area the hub is within, and the at least one of the plurality of hubs further operates to time communication with a current active base station based on the maintained estimate of the roundtrip delay (propagation delay) with the current active base station and the channel sharing map. FIG.8shows overlapping coverage areas of multiple base stations and corresponding unique channel sharing maps, according to an embodiment. The coverage areas of the base stations include coverage areas A, B, C, D. As previously described, the channel sharing map of each base station includes all the base stations that have overlapping coverage. For example, inFIG.8, the coverage areas of A, B, and C overlap. Therefore, a first channel sharing map includes allocations to the base stations of A, B, and C. Further, the coverage areas of A and D overlap. Therefore, a second channel sharing map includes allocations to the base stations of A and D. The channel sharing maps 1 and 2 ofFIG.8show possible base station allocations. It should be noted that channel sharing maps that include a common base station, such as base station A, need to be coordinated. That is, the timing of the common base station A needs to be commonly accounted for in both channel sharing maps, and must occur at the same time allocations within the channel sharing maps 1 and 2. The flow chart ofFIG.8shows steps of generating and using the channel sharing maps. A first step810includes a controller generating the channel sharing maps based on the coverage areas of the base stations. A second step820includes the controller providing the channel sharing maps to the base stations. A third step830includes each base station further allocating (scheduling) wireless communication with hubs within the channel sharing map allocations. That is, the channel sharing map provides time allocations in which each base station communicates with the hubs. The base stations then allocate or time the communication with the hubs within the base station allocations of the shared channel maps. Essentially, each base station determines a fine-tuning of the timing (a map within the channel sharing map) of wireless communication with the hubs it is wirelessly communicating with, within the allocation for that base station within the shared channel sharing map.
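The coordination constraint noted above (a common base station must occupy identical time allocations in every map that contains it) is easy to express as a check. The following hedged Python sketch assumes maps are represented as dicts of base station to (start, end) slot lists; the representation and names are assumptions, not from the source.

```python
def coordinated(maps):
    """Check that any base station appearing in multiple channel sharing maps
    is allocated the same time slots in each of them (the FIG.8 constraint).
    `maps` is a dict: map name -> {base station -> list of (start, end) slots}."""
    slots_seen = {}
    for name, alloc in maps.items():
        for bs, slots in alloc.items():
            if bs in slots_seen and slots_seen[bs] != slots:
                return False          # common base station timed differently
            slots_seen[bs] = slots
    return True

# Two maps sharing base station A, as in FIG.8 (slot values are assumed).
map1 = {"A": [(0, 10)], "B": [(10, 20)], "C": [(20, 30)]}
map2 = {"A": [(0, 10)], "D": [(10, 30)]}
print(coordinated({"map1": map1, "map2": map2}))   # True: A matches in both
```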
FIG.9shows coverage areas of multiple base stations that change over time, according to an embodiment. First, the coverage area of one or more of the base stations may be in motion. The motion of coverage area910ofFIG.9shows the coverage area of base station A moving over time. The motion can be a function of time due to satellite motion. The coverage areas of the base stations can additionally or alternatively change over time as beamforming patterns of the base stations of the terrestrial and/or satellite networks change over time920. For at least some embodiments, the channel sharing maps are updated as the coverage areas of the base stations of the terrestrial and/or satellite networks change over time due to either motion of the transmitting elements (terrestrial and/or satellite base stations), or due to changes in coverage areas caused by changes in beamforming parameters of electromagnetic beams formed by the transmitting elements (terrestrial and/or satellite base stations). As previously described, the motion of the base stations can be determined or sensed by telemetry monitoring of the position, velocity, and orbit of the satellite and propagating that forward in time (for example, 1 week) to generate the unique network map. Because the beamforming parameters are set by each of the base stations, corresponding changes in the beamforming patterns of the base stations can accordingly be determined. Further, for at least some embodiments, feedback from the hubs is utilized to improve the coverage overlaps. For example, a received signal strength indicator (RSSI) of signals received at a hub for different base stations can be monitored. The value of the RSSI and changes in the RSSI can be used to further refine the coverage overlaps. FIG.10is a flow chart that includes steps of coordinated satellite and terrestrial channel utilization, according to an embodiment. A first step1010includes determining, by a controller, one or more discrete communication delays for each base station based upon a maximum propagation delay between each base station and the one or more of the plurality of hubs. A second step1020includes generating, by the controller, a channel sharing map that includes a timing of communication between each base station and the one or more of the plurality of hubs. A third step1030includes communicating, by the controller, the channel sharing map to the plurality of base stations. A fourth step1040includes timing, by each of the plurality of base stations, wireless communication with the plurality of hubs based on the channel sharing map, the communication delay of the base station, and a communication delay of a preceding base station according to the channel sharing map. As previously described, for an embodiment, the controller communicates the channel sharing map to one or more of the plurality of hubs through at least one of the base stations. As previously described, for an embodiment, the controller operates to control at least one of the plurality of base stations to broadcast the channel sharing map to one or more of the plurality of hubs. As previously described, for an embodiment, each of the plurality of hubs operates to coordinate a timing of uplink wireless communication to the base stations based upon the one or more discrete communication delays, a propagation delay of one of the base stations, and the shared channel map.
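For orientation, the four steps of FIG.10 can be sketched as controller-side pseudocode. Everything below is an assumption-laden Python sketch; none of the method names (propagation_delay, build_channel_sharing_map, and so on) come from the source.

```python
def coordinate_channel_utilization(controller, base_stations, hubs):
    """Hypothetical sketch of steps 1010-1040 of FIG.10."""
    # Step 1010: one or more discrete communication delays per base station,
    # taken as the maximum propagation delay to the hubs it covers.
    delays = {bs: max(bs.propagation_delay(h) for h in bs.covered(hubs))
              for bs in base_stations}
    # Step 1020: generate a channel sharing map from the delays.
    channel_map = controller.build_channel_sharing_map(delays)
    # Step 1030: communicate the map to the base stations.
    for bs in base_stations:
        bs.receive_map(channel_map)
    # Step 1040: each base station times its transmissions using its own
    # delay and the delay of the preceding base station in the map.
    for bs in base_stations:
        prev = channel_map.preceding(bs)
        bs.schedule_transmissions(channel_map, delays[bs], delays.get(prev))
```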
As previously described, for an embodiment, at least one of the plurality of hubs operates to maintain an estimate of roundtrip time for each base station whose coverage area the hub is within, and the at least one of the plurality of hubs further operates to time communication with a current active base station based on the maintained estimate of the roundtrip delay with the current active base station and the channel sharing map. As previously described, for an embodiment, at least one of the plurality of hubs maintains an estimate of frequency correction values needed to phase-lock onto carriers of multiple base stations, and wherein the at least one of the plurality of hubs operates to select a frequency correction value based on the maintained estimates of the frequency correction values of a current active base station as identified by the channel sharing map. As previously described, for an embodiment, the communication delays are determined by a base station based on a propagation delay determined by each of the plurality of hubs. As previously described, for an embodiment, the roundtrip delay of each hub is determined by each hub operating to receive a packet containing a first timestamp, wherein the packet was transmitted by the base station, and wherein the first timestamp represents a transmit time of the packet; receive, from a local time source, a second timestamp corresponding with a time of reception of the packet with the first timestamp; calculate a time difference between the first timestamp and the second timestamp; and determine the roundtrip delay of the base station based on the calculated time difference. As previously described, for an embodiment, each hub operates to store the time difference between the first timestamp and the second timestamp, calculate a predictive model for predicting the propagation time based on the time difference between the first timestamp and the second timestamp, and estimate the roundtrip delay between the base station and the hub at a time, comprising querying the predictive model with the time. As previously described, for an embodiment, each base station has a defined coverage area, and wherein a unique channel sharing map is generated for each set of base stations having uniquely overlapping coverage areas. As previously described, for an embodiment, overlapping coverage areas of the base stations change over time, and the unique channel sharing map is adaptively updated based on the changes in the overlapping coverage areas. Although specific embodiments have been described and illustrated, the embodiments are not to be limited to the specific forms or arrangements of parts so described and illustrated. The described embodiments are to be limited only by the claims.
11863293
DETAILED DESCRIPTION Generally, the example embodiments of the invention presented herein are directed to methods, systems and computer program products for reliably delivering content files to one or more affiliates via satellite only, internet only or a combination of satellite and internet transmission mediums. Content files are delivered from a broadcast content provider to one or more affiliate systems of affiliates. Reception of all the content files that are sent is verified and any content file that has not been received is detected. Missing content files may be subsequently delivered to ensure the affiliate(s) received all the content. This description is not intended to limit the application of the example embodiments presented herein. It will be apparent to one skilled in the relevant art(s) how to implement the following example embodiments in alternative embodiments. For example, as used herein, an affiliate (also sometimes referred to as a network affiliate or affiliated station) is a local broadcaster, owned by a company other than the owner of the network, which carries some or all of the lineup of television programs or radio programs of a television or radio network. The example implementations described herein are directed to radio stations. However, example aspects of the embodiments herein are applicable to television stations as well. In addition, a radio station or television station that is owned by the same owner as the broadcast network is still within the scope of the invention. Further, the term delivery should be understood to be interchangeable with the terms communicate, transmit and transfer, and its definition is inclusive of the definitions of those terms. FIG.1illustrates an example broadcast network100in accordance with an example embodiment of the present invention. Audio content is created by studio talent in the broadcast production studio102. After the content is recorded, a file containing the content is uploaded via a network104, such as the public Internet, to a headend106of the broadcast content provider. The headend106processes the content file and delivers it to one or more systems of affiliates, referred to herein as affiliate systems108-1,108-2,108-3,108-4, . . . ,108-n(individually or collectively referred to as affiliate systems108), by transmitting the content via a satellite-based content delivery system (e.g., via satellite110) or a wide area network (e.g., the internet112) as shown inFIG.1. The broadcast network100can operate in three modes: satellite only, internet only, or satellite with internet backup. In an example implementation, the headend106can receive a command to deliver the content files via an internet channel, a satellite channel, or both via the internet channel and satellite channel. The headend106will, in turn, deliver the one or more content files according to the command. The content can be provided to each of the affiliate systems108in the form of discrete content files. Optionally, one or more content files may be "packaged" or "encapsulated" for delivery via the satellite delivery system using satellite110. However, what is ultimately received by each of the receivers in an affiliate system108is a number of discrete content files. After delivery, an automated affiliate system at each affiliate retrieves, plays and broadcasts at least some of its received content files in accordance with one or more electronic schedules. In this manner, each affiliate system108generates a near real-time broadcast.
Each affiliate system108may be provided with different content files and a different electronic schedule (or schedules as the case may be). In one embodiment of the broadcast network100, each of the affiliates108is an affiliate radio station. FIG.2illustrates affiliate systems108operating in three modes according to an example embodiment of the present invention. As shown inFIG.2, affiliate system108-1operates in a satellite only mode, affiliate system108-4operates in an internet only mode, and affiliate system108-3operates in a satellite with internet backup mode. In some embodiments, content delivered to the affiliate systems108is sorted by content type and prioritized accordingly. Content types include (i) a voicetrack type, (ii) a spot type, (iii) an imaging piece type, and (iv) a music type. FIG.3illustrates delivery queues302,304,306and308in accordance with an example embodiment of the present invention. Higher priority content files are transmitted before lower priority content files. Such prioritization may facilitate ensuring timely delivery of the material. In some embodiments, the content types are sorted into a plurality of delivery queues, where each delivery queue has a certain priority. As shown inFIG.3, the delivery queues are arranged in accordance with an example prioritization scheme. A collection of delivery queues may also be referred to herein as a single delivery queue. Accordingly, a delivery queue may be comprised of plural delivery queues. In an example implementation, a voicetrack is a voice recording, for example, statements by a DJ, weather, news, interviews, etc. An example spot is a commercial, either a local commercial specific to the location to which the radio station broadcasts, or a network commercial that is aired to a plurality of locations by multiple radio stations. An example imaging piece can include, for example, a radio station slogan, a tagline for a specific show, or other on-air sound effects that, for example, identify a market or particular affiliate (e.g., radio station). FIG.4illustrates a first priority delivery queue402, a second priority delivery queue404, a third priority delivery queue406and a fourth priority delivery queue408, each having a corresponding prioritization designation: urgent delivery410, priority delivery412, normal delivery414, and low priority delivery416, in accordance with an example embodiment of the present invention. As illustrated inFIG.4, voicetracks have the highest priority (urgent delivery410), and are delivered before any other content type (e.g., spots in delivery queue404, imaging pieces in delivery queue406, and music (e.g., songs) in delivery queue408). The order of priority in the embodiment shown inFIG.4is, from higher to lower priority levels: voicetracks, spots, imaging pieces, and music (e.g., songs). Thus, regardless of content file size, the number of content files, when a content file is added to a delivery queue, etc., the content files are delivered in the order of priority as shown. FIG.5illustrates delivery queues and a corresponding transmission order in accordance with an example embodiment of the present invention. In some embodiments, a delivery queue contains multiple sets of content files. Each content file in a set has the same content type. In other words, content files having the same content type can be grouped together. The number of content files in each of the sets can be the same or different.
Referring to bothFIGS.4and5, the content files (e.g., voicetrack 1, voicetrack 2, voicetrack 3, voicetrack 4, spot 1, spot 2, imaging piece 1, song 1, song 2, song 3) are delivered from the network headend (FIG.1,106) of the broadcast content provider to an affiliate, e.g., a radio station. In this example there are four voicetracks, two spots, one imaging piece, and three songs. Each content file is assigned a priority according to its content type, taking the priority level preassigned to that content type. Once the delivery is initiated, the content files are transmitted in the order of the priority of each content file. In the example illustrated inFIGS.4and5, the four voicetracks are transmitted first, then the two spots, then the one imaging piece, and then the three songs. Assigning to a content file a priority based on its content type and the priority level preassigned to its content type enables the system to ensure that higher priority content files are received first. The content files assigned a higher priority are not required to wait for all the content files in a multi-file transmission to be received together as a package. In the event of a termination of a delivery, for example as a result of a power outage, the probability is greater that the content files with a higher priority have already been received. For example, if the content files for the songs are not received, an affiliate (FIG.1;108) such as a radio station may play a song content file prestored in its system instead, while playing the same voicetracks, spots, and imaging piece. If, for example, a voicetrack is the weather report, it may be difficult to replace it with a different content file since the necessary information may not be available. In contrast, if a spot content file is not received, it may be replaced with another content file having a similar content type that has been prestored on the server of the radio station. For example, a commercial can be replaced with another commercial, either for the same advertiser or for another advertiser. In the case where the content file is a music (e.g., song) content file and the music content file is not received, the content file may be replaced with a content file containing another piece of music (e.g., another song). Listeners may not notice what happened, unlike, for example, if the weather report were missing from the program. Therefore, it may be beneficial to ensure that content files of a content type that cannot easily be replaced, such as voicetracks, are received prior to other types of content files that may be more easily replaced. In some embodiments, for example, it is preferable to receive voicetracks prior to other types of content files that can be more easily replaced. Unlike a package of content files being transmitted as a single unit, if a content provider sends additional content files while a transmission is in progress, the additional content files are integrated into the transmission in progress according to their respective priorities. FIG.6illustrates a queueing process for adding additional content files604to a transmission of in-queue content files602according to an example embodiment of the present invention. An additional content file604(also referred to as "a new content file604") is a content file that has not yet been entered into a delivery queue for transmission. An in-queue content file602is a content file that is in a delivery queue and awaiting its turn to be delivered according to its priority level.
An in-transmission content file606is a content file that was previously in queue to be delivered and is now currently being delivered. In the example implementation shown inFIG.6, one or more additional content files604(e.g., voicetrack #5604-a, voicetrack #6604-b) are added to a delivery queue by the network headend (FIG.1,106). The additional content files604are added to the delivery queue while an in-transmission content file606(e.g., spot #1) is in the process of being delivered. That is, while in-queue content files602(e.g., song #2602-a, song #1602-b, imaging piece #1602-c, and spot #2602-d) are in queue to be delivered, in-transmission content file606is in the process of being delivered. If a determination is made that the additional content file604has a higher priority than the one or more in-queue content files602, the additional content file604will be inserted at a location within the delivery queue of in-queue content files602according to its priority. In an example implementation, a headend106is configured to receive content files, where each of the content files is associated with one of several content types. The headend106can save each of the content files in one of several delivery queues according to the content type of the content file, where each delivery queue is associated with a priority. In turn, the headend106delivers, over one or more networks to one or more affiliate systems, the plurality of content files saved in the delivery queues such that the transmission order of the plurality of content files is based on the priority of the delivery queue. As explained above, the content types can include a voicetrack type, a spot type, an imaging piece type, and a music type or other type of content. As explained above in connection withFIG.3, the delivery queues are arranged in accordance with a prioritization scheme. However, the headend106can also receive a new priority-level/delivery-queue pair and set the priority of the corresponding delivery queue to the new priority level. In other words, the prioritization scheme can be changed, for example, through a portal in communication with the headend. Referring still toFIG.6, an example situation might be where two voicetracks (voicetrack #5604-a, and voicetrack #6604-b) are added while spot #1606is being delivered. In this example, voicetrack-type content files have the highest priority. As additional content file604-a(voicetrack #5) and additional content file604-b(voicetrack #6) have a priority that is higher than in-queue content file602-d(spot #2), which is next in queue to be delivered, both additional content files (voicetrack #5604-aand voicetrack #6604-b) will be added to the delivery queue after in-transmission content file606(spot #1), which is currently being delivered. In some embodiments, two or more additional content files are added to the delivery queue based on the priority of the additional content files. In the example shown, voicetrack #5604-ais added first, then voicetrack #6604-b. Likewise, if an additional content file of type spot (e.g., spot #3; not shown) were being added, the additional content file would be inserted according to its priority, which in this example is after spot #2602-dand before imaging piece #1602-c. Advantageously, this facilitates making programming changes and fixing errors in the programming while facilitating content file deliveries.
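This insert-by-priority behavior maps naturally onto a priority queue. The following is a minimal Python sketch, not the patented implementation; the content-type-to-priority table mirrors the FIG.4 ordering, and a monotonically increasing counter keeps equal-priority files in arrival order.

```python
import heapq
import itertools

# Priority levels preassigned to content types (FIG.4 ordering).
PRIORITY = {"voicetrack": 0, "spot": 1, "imaging": 2, "music": 3}

class DeliveryQueue:
    """Sketch of priority-ordered delivery with late additions. The counter
    keeps insertion order stable among equal-priority files, so a new
    voicetrack jumps ahead of queued spots but nothing preempts the file
    currently in transmission."""

    def __init__(self):
        self._heap = []
        self._counter = itertools.count()

    def add(self, name, content_type):
        heapq.heappush(self._heap,
                       (PRIORITY[content_type], next(self._counter), name))

    def next_file(self):
        return heapq.heappop(self._heap)[2]

q = DeliveryQueue()
for name, ctype in [("voicetrack #1", "voicetrack"), ("spot #2", "spot"),
                    ("song #1", "music"), ("imaging piece #1", "imaging")]:
    q.add(name, ctype)
q.add("voicetrack #5", "voicetrack")        # added while others are queued
print([q.next_file() for _ in range(5)])
# ['voicetrack #1', 'voicetrack #5', 'spot #2', 'imaging piece #1', 'song #1']
```

Here "voicetrack #5", added while other files are queued, is delivered ahead of the queued spot, imaging piece, and song, matching the FIG.6 insertion behavior.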
In another embodiment, prioritization is administered by placing each content file in one of several file directories. This step can be performed by, for example, a content creator. The file directory in which a content file is placed determines the delivery queue, and thus the priority level, of the content file. A configuration file can be used to save a mapping of the file directories and priority levels. The priority level for each of the directories is, in turn, retrieved from the configuration file to determine the priority of the content files contained therein. In some embodiments, a content file can include not only the content of a particular content type, but also data identifying the priority level of the content file. For example, such information, referred to herein as a file marking, can be data such as metadata, tags, file extensions, and the like. In an example implementation, headend106can be configured to identify the content type of a content file according to the file marking associated with the content file. Whereas the embodiments described herein include four content types and four delivery queues, it is to be understood that more or fewer content types and delivery queues are contemplated. For example, there may be ten content types and ten delivery queues. Alternatively, more than one content type may be placed into a common directory and thus the same delivery queue. For example, there may be twelve content types and five delivery queues, wherein one or more delivery queues receive content files having a plurality of distinct content types. Alternatively, it may be desired to sort content files into different directories but place them into the same delivery queue. Then the number of directories may be greater than the number of delivery queues. The different content file types within the same delivery queue may have the same priority level or have an additional priority level within the delivery queue dictating the priority within the delivery queue. Furthermore, whereas the order of priority in the embodiments described herein is as follows: voicetracks, spots, imaging pieces, and then music, the priority levels may be altered without deviating from the scope of the invention. In some embodiments, the received content files are stored by an affiliate such as a radio station (FIG.1; affiliate system108), retrieved, played and broadcast in accordance with a schedule from the content provider. Thus, a radio affiliate's financial efficiency may be enhanced by providing it with all the content to simply broadcast, without the expense of local talent for programming production. Local spots may be sent according to the region the affiliate (e.g., radio station) broadcasts to. Directory Inventory to Verify Content File Delivery The network headend often has many audio directories which have identical copies on the affiliate systems108.FIG.7illustrates a system flow diagram including a process700for providing an inventory of content files704to an affiliate system750(e.g., to a receiver or server in the affiliate system) via a content delivery system in accordance with an example embodiment of the present invention. The content delivery system can utilize satellite110and/or the internet112. In an example implementation, in step S702, an inventory list identifying content files that were or will be delivered to an affiliate (e.g., a radio station) is created. Step S702can be performed, for example, by a content provider (audio asset).
At step S704, the inventory list is communicated, by the headend106, to an affiliate system108(e.g., a radio station), for example, via satellite110or the Internet112, as shown in step S706. The affiliate system750receives the inventory at step S710. In turn, the affiliate system750verifies whether each of the content files in the inventory list was received. This is performed by comparing the inventory against the audio content files it has received, as shown in step S712. If a determination is made at step S712that any content file was not received, the audio server in the affiliate system750generates a request to send or resend the audio content file, as shown in step S714. In some embodiments, another determination is made at step S712as to whether any received content file is not identical to the corresponding content file sent by the headend. If so, the affiliate system750similarly generates a request to send or resend the audio content file, as shown in step S714. In the example shown, the request made at step S714includes whether the retransmission of the content file should be sent via satellite, internet download, and/or direct connection to the server or CDN (content delivery network). Preferably, the process700illustrated inFIG.7is automated, with the servers on both ends communicating with each other and the headend106sending content files requested by the affiliate system750automatically. Thus, if there is a power outage or other termination of the delivery of content files, once the inventory list is received, the audio server may identify which content files were not received, and request them. In accordance with an embodiment of the invention, the inventory list is sent at predetermined intervals (e.g., every night). Such a system and method may substantially eliminate the risk of human error in verifying the deliveries, and expedite the process, thus further reducing cost incurred by the radio station. In an example implementation, headend106can generate an inventory list of content files and then deliver, over the satellite and/or Internet, the inventory list to one or more of the affiliate systems108(FIG.1). In turn, the headend106can receive a request to send one of the content files in the inventory list or a request to resend one of the content files in the inventory list. Whereas the embodiment described above provides a fully automated process in which the headend automatically sends the requested content files, alternatively, an approval process may be included. In some embodiments, the content provider may be notified of such requests as well. A log may be created whenever requests are received from the affiliate system, which may help identify faulty or unstable networks. If a certain affiliate system consistently does not receive the content files when delivered via satellite, for example, it may be preferred to make Internet transmissions its default until the problem is resolved. In some embodiments, if certain content file types or sizes tend to fail to be delivered, or transmission failures occur more frequently at certain hours, etc., the system alerts the content provider of the problem, and they may be able to address it. A system may be implemented in which, if a certain affiliate system sends requests more than a predetermined number of times within a predetermined time (e.g., 5 times within a 30-day period), the content provider is notified.
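A minimal sketch of the verification and request step is given below. It assumes files are compared by checksum; the source only specifies that presence and identity are checked, so the checksum approach, the delivery-path choices, and all names here are assumptions.

```python
def verify_inventory(inventory, received_files):
    """Sketch of the FIG.7 verification step (S712/S714): compare the
    headend's inventory list against the files actually received, and
    build send/resend requests. `inventory` maps file name -> expected
    checksum; `received_files` maps file name -> checksum of what arrived."""
    requests = []
    for name, expected in inventory.items():
        if name not in received_files:
            requests.append({"file": name, "action": "send",
                             "via": "satellite"})   # requested delivery path
        elif received_files[name] != expected:
            requests.append({"file": name, "action": "resend",
                             "via": "internet"})
    return requests

inventory = {"voicetrack_1.mp2": "ab12", "spot_1.mp2": "cd34"}
received  = {"voicetrack_1.mp2": "ab12", "spot_1.mp2": "ffff"}  # corrupted
print(verify_inventory(inventory, received))
# [{'file': 'spot_1.mp2', 'action': 'resend', 'via': 'internet'}]
```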
Local Content File Substitution FIG.8provides a flowchart of a process800for substituting a local copy of a station imaging piece for a network imaging content file in accordance with an example embodiment of the present invention. In some embodiments, a substitution is performed by an affiliate system. At step S802, a log entry to play an imaging content file is retrieved. If a determination is made at step S804that an imaging content file exists in a local directory, in step S806the local version of the content file in the local directory is played with a priority over a network-provided imaging content file. If a determination is made at step S804that the imaging content file does not exist in the local directory, in step S808, the network version of the content file is played. The imaging content files may have the same file name, for example, 10002.mp2. If the music log directs the affiliate system to play song A, song B, then imaging content file 10002.mp2, imaging content file 10002.mp2 is selected, for example, after song B ends. This can be performed automatically or manually (e.g., by double clicking the file presented to an operator via a user interface). If the same file name 10002.mp2 exists in a directory locally accessible by the affiliate system (e.g., via a local data store), that local content file is played. If there is no content file named 10002.mp2 in the locally accessible directory, the network version is played. In some embodiments, the above substitution system and method may be used for other content types, and is not limited to imaging content files. Closed Loop Method FIG.9illustrates a system architecture incorporating a closed-loop method900for commercial scheduling, playback of a commercial by an affiliate (e.g., a radio station), verification of the playback by the affiliate, and filing of a network affidavit of play to a network commercial scheduling system902in accordance with an example embodiment of the present invention. Radio stations, for example, are often required to submit an affidavit of play confirming a certain commercial was played on the air. This affidavit may be required for each commercial, and thus can become time-consuming and tedious.FIG.9illustrates an embodiment that automates the process, thus making the process more efficient, reducing error and cost for the radio station. Such a verification system may also provide an assurance to the content provider and advertiser that the commercial was in fact aired. In the embodiment illustrated, a network scheduling system902sends a station transmission log904directing the radio station to play a certain commercial, in some cases at a certain time. The station transmission log904may be sent via satellite110and/or the Internet112. The radio station plays the content file containing commercial content as directed, as shown in S906. The affiliate system provides an AM/FM/HD tuner (not shown) (or other suitable device that hears what is aired), which determines whether or not the commercial was aired, as shown in step S908. If the tuner receives the commercial (e.g., if it "hears" the commercial being aired on a specific radio station signal) at step S908, the playback time is logged as "aired" in the station receiver log, as shown in step S910. If the tuner does not receive ("hear") the commercial content file at step S908, the commercial is logged as "not aired", as shown in step S912.
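Steps S908-S912 amount to an audio comparison followed by a log entry. The sketch below is hypothetical: the correlation-based matches() helper is a stand-in for whatever audio comparison the station actually uses, and all names and the threshold are assumptions.

```python
import numpy as np

def matches(captured, reference, threshold=0.9):
    """Hypothetical placeholder for audio matching: normalized correlation
    of the captured off-air audio against the reference commercial file."""
    captured = np.asarray(captured, dtype=float)
    reference = np.asarray(reference, dtype=float)
    n = min(len(captured), len(reference))
    return np.corrcoef(captured[:n], reference[:n])[0, 1] >= threshold

def update_receiver_log(tuner_audio, reference_audio, play_time, receiver_log):
    """Steps S908-S912: log "aired" if the tuner "heard" the commercial,
    otherwise log "not aired"; the log is later sent to the headend (S914)."""
    status = "aired" if matches(tuner_audio, reference_audio) else "not aired"
    receiver_log.append({"time": play_time, "status": status})
    return receiver_log
```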
The station receiver log905is then delivered, for example, via the Internet112, to the content provider headend, which, as shown in step S914, updates the station affidavit record and sends it to the network scheduling system902. In some embodiments, the closed-loop method900is automated without the need for human action. By way of non-limiting example, the hardware of the radio station includes a tuner, which records the audio (e.g., AM/FM). The radio station is previously given the audio file for the commercial, to which the recorded audio file is compared. The example embodiments of the invention may be implemented using hardware, software or a combination thereof and may be implemented in one or more computer systems or other processing systems. However, the manipulations performed by these example embodiments were often referred to in terms, such as entering, which are commonly associated with mental operations performed by a human operator. No such capability of a human operator is necessary in any of the operations described herein. Rather, the operations may be completely implemented with machine operations. Useful machines for performing the operations of the example embodiments presented herein include general purpose digital computers or similar devices. From a hardware standpoint, a CPU typically includes one or more components, such as one or more microprocessors, for performing the arithmetic and/or logical operations required for program execution, and storage media, such as one or more memory cards (e.g., flash memory) for program and data storage, and a random access memory, for temporary data and program instruction storage. From a software standpoint, a CPU typically includes software resident on storage media (e.g., a memory card), which, when executed, directs the CPU in performing transmission and reception functions. The CPU software may run on an operating system stored on the storage media, such as, for example, UNIX, iOS, Windows, Linux, and the like, and can adhere to various protocols such as the Ethernet, ATM, TCP/IP protocols and/or other connection-oriented or connectionless protocols. As is well known in the art, CPUs can run different operating systems, and can contain different types of software, each type devoted to a different function, such as handling and managing data/information from a particular source, or transforming data/information from one format into another format. It should thus be clear that the embodiments described herein are not to be construed as being limited for use with any particular type of server computer, and that any other suitable type of device for facilitating the exchange and storage of information may be employed instead. Although for convenience the CPU is shown as being a single CPU, in other example embodiments the CPU may include plural separate CPUs, wherein each is dedicated to a separate application, such as, for example, a data application, a voice application, and a video application. Software embodiments of the example embodiments presented herein may be provided as a computer program product, or software, that may include an article of manufacture on a machine accessible or machine readable medium having instructions. The instructions on the machine accessible or machine readable medium may be used to program a computer system or other electronic device.
The machine-readable medium may include, but is not limited to, optical disks, CD-ROMs, and magneto-optical disks or other types of media/machine-readable media suitable for storing or transmitting electronic instructions. The techniques described herein are not limited to any particular software configuration. They may find applicability in any computing or processing environment. The terms "machine accessible medium" or "machine readable medium" used herein shall include any medium that is capable of storing, encoding, or transmitting a sequence of instructions for execution by the machine and that causes the machine to perform any one of the methods described herein. Furthermore, it is common in the art to speak of software, in one form or another (e.g., program, procedure, process, application, module, unit, logic, and so on) as taking an action or causing a result. Such expressions are merely a shorthand way of stating that the execution of the software by a processing system causes the processor to perform an action to produce a result. In addition, not all of the components are required to practice the invention, and variations in the arrangement and type of the components may be made without departing from the spirit or scope of the invention. As used herein, the term "component" is applied to describe a specific structure for performing specific associated functions, such as a special purpose computer as programmed to perform algorithms (e.g., processes) disclosed herein. The component can take any of a variety of structural forms, including: instructions executable to perform algorithms to achieve a desired result, one or more processors (e.g., virtual or physical processors) executing instructions to perform algorithms to achieve a desired result, or one or more devices operating to perform algorithms to achieve a desired result. While various example embodiments of the present invention have been described above, it should be understood that they have been presented by way of example, and not limitation. It will be apparent to persons skilled in the relevant art(s) that various changes in form and detail can be made therein. Thus, the present invention should not be limited by any of the above described example embodiments, but should be defined only in accordance with the following claims and their equivalents. In addition, it should be understood thatFIGS.1-9are presented for example purposes only. The architecture of the example embodiments presented herein is sufficiently flexible and configurable, such that it may be utilized (and navigated) in ways other than those shown in the accompanying figures. Further, the purpose of the foregoing Abstract is to enable the U.S. Patent and Trademark Office and the public generally, and especially the scientists, engineers and practitioners in the art who are not familiar with patent or legal terms or phraseology, to determine quickly from a cursory inspection the nature and essence of the technical disclosure of the application. The Abstract is not intended to be limiting as to the scope of the example embodiments presented herein in any way. It is also to be understood that the procedures recited in the claims need not be performed in the order presented.
11863294
The figures are not to scale. Wherever possible, the same reference numbers will be used throughout the drawing(s) and accompanying written description to refer to the same or like parts. DETAILED DESCRIPTION Media monitoring meters and/or media recognition devices are used by an audience measurement entity to gather media exposure data (e.g., exposure to audio, video, or images) from a media output device(s) (e.g., a television, a radio, a computer, etc.). In some examples, the meter may be, or be incorporated into, a device including a wired or wireless connection, microphone, magnetic coupling device, and/or other sensor to gather ambient audio, video, and/or images. In such examples, when the media output device is outputting media, the meter may receive an audio signal and/or capture a video/image signal (e.g., via a camera and/or sensor) transmitted by the media output device. As further described below, the meter may generate signatures based on the media. Alternatively, the meter may intercept the media signal transmitted to the media output device and generate signatures based on characteristics of the intercepted media signal. The meter transmits generated query signatures to the audience measurement entity, and the audience measurement entity compares the generated signature to reference signatures. Reference signatures are known signatures corresponding to media that is monitored by the audience measurement entity. When the audience measurement entity matches the generated signature to a reference signature, the audience measurement entity credits the reference media content based on the exposure. Signature or fingerprint-based media recognition is a technique that generally uses one or more inherent characteristics of the media to generate a substantially unique proxy for the media. Such a proxy is referred to as a signature or fingerprint, and can take any form (e.g., a series of digital values, a waveform, etc.) representative of any aspect(s) of the media signal(s) (e.g., the audio and/or video signals forming the media presentation being monitored). A signature may be a series of signatures collected over a time interval. A good signature is repeatable when processing the same media presentation, but is unique relative to other (e.g., different) presentations of other (e.g., different) media. Accordingly, the terms "fingerprint" and "signature" are used interchangeably herein and are defined herein to mean a proxy for identifying media that is generated from one or more inherent characteristics of the media. Signature-based media monitoring/recognition generally involves determining (e.g., generating and/or collecting) signature(s) representative of a media signal (e.g., an audio signal and/or a video signal) output by a monitored media device and comparing the monitored signature(s) to one or more reference signatures corresponding to known (e.g., reference) media sources. Various comparison criteria, such as a cross-correlation value, a Hamming distance, etc., can be evaluated to determine whether a monitored signature matches a particular reference signature. When a match between the monitored signature and a reference signature is found, the monitored media can be identified as corresponding to the particular reference media represented by the reference signature that matched the monitored signature.
Because attributes, such as an identifier of the media, a presentation time, a broadcast channel, etc., are collected for the reference signature, these attributes may then be associated with the monitored media whose monitored signature matched the reference signature. Example systems for identifying media based on codes and/or signatures are long known and were first disclosed in Thomas, U.S. Pat. No. 5,481,294, which is hereby incorporated by reference in its entirety. There is a plurality of signaturing algorithms used to identify media. Many signaturing algorithms are based on a comparison of two or more characteristics of a media signal. For example, a frequency-based energy comparison signaturing algorithm includes comparing the energies of two different frequency bands of a media signal to generate a bit value. In such an example, the bit value b corresponds to a '1' when the energy of the first frequency band is greater than the energy of the second frequency band, and the bit value corresponds to a '0' when the energy of the first frequency band is lower than the energy of the second frequency band (e.g., ΔE=EF1−EF2, where a bit value b=0 if ΔE<0 and b=1 if ΔE>0). A time-based energy comparison signaturing algorithm includes comparing the energies of the media signal at different points in time to generate a bit value. In such a signaturing algorithm, the bit value is set to '1' when the energy of the media signal at a first time is higher than the energy of the media signal at a second time, and the bit value corresponds to a '0' when the energy of the media signal at the first time is lower than the energy of the media signal at the second time (e.g., ΔE=E(t)−E(t−Δt), where a bit value b=0 if ΔE<0 and b=1 if ΔE>0). A discrete cosine transform (DCT) signaturing algorithm includes comparing DCT coefficients of a media signal. In such a signaturing algorithm, the bit value is set to '1' when a first DCT coefficient of the media signal is higher than a second DCT coefficient of the media signal, and the bit value corresponds to a '0' when the first DCT coefficient of the media signal is lower than the second DCT coefficient of the media signal. A time-interval signaturing algorithm includes determining a time interval between certain characteristics, such as spectrogram peaks, in the media signal and generating a bit value based on the time interval. Additionally, there is a vast plurality of other signaturing algorithms that exploit other characteristics and transformations of a media signal, such as auto-correlation, the Hilbert transform, time-frequency plane representation, etc. Examples disclosed herein preprocess a media signal to enhance the characteristics of the media signal based on a particular signaturing algorithm prior to outputting the media to a media output device. As described above, many signature techniques generate a code (e.g., a binary code) representative of multiple comparisons of two different characteristics of the media signal. For example, as described above, the frequency-based energy comparison signaturing algorithm is based on a comparison of energies of a block of audio of the media signal at two different frequency bands (e.g., ΔE=EF1−EF2, where a bit value b=0 if ΔE<0 and b=1 if ΔE>0). However, the media signal is subject to noise that may increase and/or decrease the energies of the two frequency bands.
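As an illustration of the frequency-based comparison just described, the following hedged Python sketch computes one signature bit from one audio block. Band edges, block length, and sample rate are arbitrary illustrative choices, not parameters from the source.

```python
import numpy as np

def band_energy(block, sample_rate, f_lo, f_hi):
    """Energy of one audio block within a frequency band, via the FFT."""
    spectrum = np.fft.rfft(block)
    freqs = np.fft.rfftfreq(len(block), d=1.0 / sample_rate)
    mask = (freqs >= f_lo) & (freqs < f_hi)
    return float(np.sum(np.abs(spectrum[mask]) ** 2))

def signature_bit(block, sample_rate, band1, band2):
    """b = 1 if the energy in band1 exceeds the energy in band2, else 0
    (Delta-E = E_F1 - E_F2; b = 1 if Delta-E > 0)."""
    e1 = band_energy(block, sample_rate, *band1)
    e2 = band_energy(block, sample_rate, *band2)
    return 1 if e1 - e2 > 0 else 0

# One 1024-sample block of a synthetic 1 kHz tone sampled at 48 kHz.
t = np.arange(1024) / 48_000
block = np.sin(2 * np.pi * 1_000 * t)
print(signature_bit(block, 48_000, (500, 1_500), (2_000, 3_000)))   # 1
```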
When the noise increases and/or decreases the energies of frequency bands, the bit value may inadvertently change values (e.g., from b = 0 to b = 1 or vice versa). Examples disclosed herein alleviate the effect of noise by preprocessing the media signal to enhance the media signal and increase the difference corresponding to the comparison. For example, using the above signaturing algorithm, examples disclosed herein determine if a comparison of energies at two frequency bands results in a difference less than a threshold (e.g., |E_F1 − E_F2| < 1). If the comparison of the energies of the two frequency bands results in a difference less than the threshold (e.g., if E_F1 = 3.5 and E_F2 = 3.2, then |E_F1 − E_F2| = 0.3, which is less than the threshold), examples disclosed herein enhance the comparison by increasing the energy of the frequency band with the higher energy (e.g., increasing E_F1 from 3.5 to a higher energy) and/or decreasing the energy of the frequency band with the lower energy (e.g., decreasing E_F2 from 3.2 to a lower energy), possibly within the limits established by psychoacoustic masking properties, to increase the energy difference between the two frequency bands to satisfy the threshold, thereby increasing ΔE. In this manner, even if noise changes the energies of the two frequency bands of monitored or recognized media, the probability of change of the actual bit values (e.g., ΔE changing from positive to negative or vice versa) is minimized. Thus, the probability that a meter or other media recognizing device will match a captured signature to a corresponding reference signature substantially increases. Additionally or alternatively, examples disclosed herein may adjust any characteristic of media based on any signaturing algorithm.
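The threshold test and the boost/attenuate step described above can be made concrete with a short sketch. The following Python function is a minimal illustration of enhancing a single flagged band pair; the function name, the threshold value of 1, and the max_boost/max_cut caps (standing in for psychoacoustic masking limits and other media signal requirements) are assumptions for illustration, not details prescribed by this disclosure.

```python
COMPARISON_THRESHOLD = 1.0  # illustrative value, matching the example above

def enhance_band_pair(e_f1: float, e_f2: float,
                      max_boost: float = 0.5,
                      max_cut: float = 0.5) -> tuple[float, float]:
    """If |E_F1 - E_F2| falls below the comparison threshold, push the
    two band energies apart: first raise the larger energy, then lower
    the smaller one, each capped by limits that stand in for
    psychoacoustic masking and other media signal requirements."""
    if abs(e_f1 - e_f2) >= COMPARISON_THRESHOLD:
        return e_f1, e_f2  # comparison already robust; leave signal untouched
    hi, lo = max(e_f1, e_f2), min(e_f1, e_f2)
    deficit = COMPARISON_THRESHOLD - (hi - lo)
    boost = min(deficit, max_boost)      # boost the stronger band first
    cut = min(deficit - boost, max_cut)  # then attenuate the weaker band
    hi, lo = hi + boost, lo - cut
    # Preserve which band was originally the larger one.
    return (hi, lo) if e_f1 >= e_f2 else (lo, hi)

# With the values from the text: E_F1 = 3.5, E_F2 = 3.2.
print(enhance_band_pair(3.5, 3.2))  # -> (4.0, 3.0), difference now 1.0
```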
FIG. 1 illustrates an example environment 100 including an example signature enhancer 101 for preprocessing an example media signal 102 to increase the robustness of generated signatures of the example media signal 102. The example environment 100 includes the example signature enhancer 101, the example media signal 102, an example enhanced media signal 104, an example media output device 106, and an example meter 108. The example signature enhancer 101 preprocesses the media signal 102 to generate the enhanced media signal 104. In some examples, the example signature enhancer 101 is located at a remote site and/or a remote server and preprocesses the media signal 102 off site. In some examples, the example signature enhancer 101 is located in the vicinity of the example media output device 106. For example, the signature enhancer 101 may be a device that receives the media signal 102 via a communication network (e.g., a cable network, a telephonic network, a data communications network, etc.) and locally enhances the media signal 102 prior to transmitting it to the example media output device 106. Alternatively, the example signature enhancer 101 may be coupled to, embedded in, or otherwise connected to a media signal receiving device (e.g., a set-top box, an over-the-top device, a gaming console, an antenna, a computer, a network communication device, a media player, a tablet, the example media output device 106, and/or any device that is capable of receiving a media signal). The example signature enhancer 101 enhances characteristics of the example media signal 102 based on a selected signaturing algorithm to increase the robustness of media signatures corresponding to the example media signal 102, as further described in conjunction with FIG. 2.

The example signature enhancer 101 transmits the example enhanced media signal 104 (e.g., the example media signal 102 after being enhanced) to the example media output device 106. The example media output device 106 is a device that outputs media (e.g., including the example enhanced media signal 104). Although the example media output device 106 of FIG. 1 is illustrated as a television, the example media output device may be a radio, an MP3 player, a video game console, a stereo system, a mobile device, a computing device, a tablet, a laptop, a projector, a DVD player, a set-top box, an over-the-top device, and/or any device capable of outputting media. The example media output device 106 may include and/or may be coupled to a display to output images and/or video. Additionally, the example media output device 106 may include speakers and/or may be coupled, or otherwise connected, to portable speakers that output an audio portion of the example enhanced media signal 104.

The example meter 108 is a device that monitors exposure to media and/or otherwise recognizes the media, including media output by the example media output device 106. In some examples, the example meter 108 is a device including a microphone and/or magnetic coupling device to gather ambient audio. In some examples, the meter 108 is embedded in or otherwise connected to a device that includes a microphone and/or magnetic coupling device. In some examples, the meter 108 is embedded in the example media output device 106. In some examples, the meter 108 includes, or is connected to, a camera and/or sensor to gather the enhanced media signal 104 output by the example media output device 106. The example meter 108 may be a media monitoring device, a media recognizing device, a mobile device, a computer, a personal digital assistant, and/or any device capable of gathering ambient audio. The example meter 108 generates signatures of the enhanced media signal 104 output by the example media output device 106 to identify the media.

FIG. 2 is a block diagram of an example implementation of the example signature enhancer 101 of FIG. 1, disclosed herein, to increase the robustness of media signatures by enhancing the example media signal 102. While the example signature enhancer 101 is described in conjunction with the example media signal 102 and media output device 106 of FIG. 1, the example signature enhancer 101 may be utilized to enhance any type of media signal output by any type of media device. The example signature enhancer 101 receives the example media signal 102 and outputs the example enhanced media signal 104 of FIG. 1. The example signature enhancer 101 includes an example media signal receiver 200, an example signature settings determiner 202, an example signal transformer 204, an example characteristic analyzer 206, an example characteristics enhancer 208, and an example enhanced media signal transmitter 210.

The example media signal receiver 200 receives the example media signal 102. The media signal 102 is a signal corresponding to media that will be output by the example media output device 106 (FIG. 1). The example media signal 102 may be an audio signal, a video signal, and/or an image signal. The example media signal 102 may originate from a media producer and/or a media distributor. As described above, the media signal includes intrinsic characteristics that may be analyzed to generate a signature. The generated signature may be compared to a database of reference signatures to determine exposure to the media.
The example signature settings determiner 202 selects a signaturing algorithm from a plurality of signaturing algorithms as the basis for an enhancement. As described above, there is a plurality of ways to generate a signature (e.g., a plurality of signaturing algorithms) from a media signal (e.g., comparing energies of different frequency bands, comparing energies at different points in time, comparing DCT coefficients, etc.). How the example media signal 102 is enhanced depends on how the example meter 108 will generate a signature of the media signal 102. In order to enhance the media signal 102 properly, the signaturing algorithm should match the signaturing algorithm of the example meter 108. However, there may be other meters corresponding to different signaturing algorithms. Accordingly, the signature settings determiner 202 may select one or more signaturing algorithms as the basis for the enhancing. In some examples, the signature settings determiner 202 determines the one or more signaturing algorithms based on user and/or manufacturer settings. In some examples, the signature settings determiner 202 determines the one or more signaturing algorithms based on the example media signal 102. For example, the signature settings determiner 202 may dynamically select a signaturing algorithm based on whether the media signal 102 is an audio signal, a video signal, an image signal, etc. In some examples, the signature settings determiner 202 selects different signaturing algorithms at different points in time. The signature settings may be adjusted at any point in time to allow the signature settings determiner 202 to change the selected signaturing algorithm.

The example signal transformer 204 transforms the received media signal 102 into the frequency domain (e.g., determining the frequency spectrum) when the signaturing algorithm is based on the frequency domain. For example, the signal transformer 204 may perform a Fourier transform on the media signal 102 to transform the media signal 102 into the frequency domain. As described above, some signaturing algorithms are based on comparisons of the characteristics of different frequency bands of the frequency spectrum. Accordingly, the example signal transformer 204 may transform the example media signal 102 so that the example signature enhancer 101 may (A) determine when a comparison of characteristics will not satisfy a comparison threshold and (B) when the comparison of characteristics does not satisfy the comparison threshold, enhance one or more of the characteristics to satisfy the comparison threshold, as further described below. In some examples, once the example media signal 102 has been enhanced in the frequency domain, the example signal transformer 204 transforms the enhanced media signal back into the time domain prior to being transmitted by the example enhanced media signal transmitter 210.

The example characteristic analyzer 206 analyzes (e.g., compares) the characteristics of the media signal 102 based on the selected signaturing algorithm. For example, if the signaturing algorithm is based on the differences between the energies of neighboring frequency bands (e.g., ΔE_{1,2} = E_F1 − E_F2, where the bit value b = 0 if ΔE_{1,2} < 0 and b = 1 if ΔE_{1,2} > 0), then the example characteristic analyzer 206 will compute all of the differences (e.g., ΔE_{1,2}, ΔE_{3,4}, . . . , ΔE_{N−1,N}) in a manner consistent with the selected algorithm. In another example, if the signaturing algorithm is based on differences between magnitudes of DCT values of a video or image (e.g., ΔDCT_{1,2} = DCT_1 − DCT_2, where the bit value b = 0 if ΔDCT_{1,2} < 0 and b = 1 if ΔDCT_{1,2} > 0), then the example characteristic analyzer 206 will compute all of the differences (e.g., ΔDCT_{1,2}, ΔDCT_{3,4}, . . . , ΔDCT_{N−1,N}) in a manner consistent with the selected algorithm.
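A hedged sketch of how such an analyzer might compute and flag these comparisons is shown below. It covers two of the algorithms named above (frequency-band and time-block energy comparisons); the function name, the band/block counts, and the dispatch-by-string design are illustrative choices, not details from this disclosure.

```python
import numpy as np

def analyze_comparisons(media: np.ndarray, algorithm: str,
                        threshold: float) -> list[tuple[int, float]]:
    """Compute every comparison the selected signaturing algorithm would
    use and return (index, delta) for those whose magnitude falls below
    the comparison threshold (i.e., the flagged comparisons)."""
    if algorithm == "frequency_energy":
        # Transform into the frequency domain first (signal transformer),
        # then compare energies of neighboring bands: delta = E_F1 - E_F2.
        spectrum = np.abs(np.fft.rfft(media)) ** 2
        bands = [float(b.sum()) for b in np.array_split(spectrum, 16)]
        deltas = [bands[i] - bands[i + 1] for i in range(0, 15, 2)]
    elif algorithm == "time_energy":
        # Compare energies of consecutive time blocks: E(t) - E(t - dt).
        blocks = np.array_split(media, 32)
        energies = [float((b ** 2).sum()) for b in blocks]
        deltas = [energies[i + 1] - energies[i] for i in range(len(energies) - 1)]
    else:
        raise ValueError(f"unsupported signaturing algorithm: {algorithm}")
    return [(i, d) for i, d in enumerate(deltas) if abs(d) < threshold]
```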
Additionally or alternatively, the example characteristic analyzer 206 may analyze the characteristics of the media signal 102 based on any type of selected signaturing algorithm (e.g., based on a comparison and/or ratio of characteristics of a media signal, a peak detection comparison on pseudo-energy curves, etc.). Once the example characteristic analyzer 206 analyzes the comparisons of the media signal 102 based on the selected signaturing algorithm, the example characteristic analyzer 206 determines which of the comparisons do not satisfy a comparison threshold (e.g., by flagging the comparisons that do not satisfy the comparison threshold). For example, if the signaturing algorithm is based on the energy of the example media signal 102 across time (e.g., ΔE_1 = E(T_1 + ΔT) − E(T_1), where the bit value b = 0 if ΔE_1 < 0 and b = 1 if ΔE_1 > 0) and the comparison threshold is 1, then the characteristic analyzer 206 may flag any comparison whose absolute value is less than 1 (e.g., |E(T + ΔT) − E(T)| < 1). Each signaturing algorithm may correspond to a different comparison threshold due to the variance of noise for a particular signaturing algorithm. In some examples, where the signature settings determiner 202 identified more than one signaturing algorithm, the example characteristic analyzer 206 may analyze the media signal 102 in different ways using the two or more signaturing algorithms.

The example characteristics enhancer 208 enhances the example media signal 102 by boosting and/or attenuating characteristics of the media signal 102 based on the selected signaturing algorithm and the flagged comparisons (e.g., the comparisons that do not satisfy the comparison threshold). In some examples, the characteristics enhancer 208 boosts the characteristic of a flagged comparison corresponding to the stronger (e.g., higher) characteristic. Additionally or alternatively, the characteristics enhancer 208 may decrease (e.g., attenuate) the characteristic of the flagged comparison corresponding to the weaker (e.g., lower) characteristic. For example, in a frequency-based energy comparison signaturing algorithm, where the energy of a first frequency band is 3.3, the energy of a second frequency band is 3.0, and the comparison threshold is 1 (e.g., |E_F1 − E_F2| = 0.3, which is less than the threshold), the example characteristics enhancer 208 may boost the energy of the first frequency band (E_F1) and/or decrease the energy of the second frequency band (E_F2) to satisfy the comparison threshold. In some examples, the characteristics enhancer 208 boosts and/or decreases characteristics of the example media signal 102 according to psychoacoustic masking properties. For example, the characteristics enhancer 208 may not boost and/or attenuate a characteristic above/below a particular level, to ensure that the quality of the media signal does not deteriorate in a manner that may be identified by the human eye/ear. Additionally, there may be other limits to the amount of boosting and/or decreasing of the characteristics of the media signal 102 based on other media signal requirements. In some examples, the characteristics enhancer 208 may determine which characteristics to boost and/or decrease based on subsequent comparisons.
For example, if boosting an energy of a first frequency of the media signal 102 at a first time satisfies the comparison threshold at the first time, but such boosting of the energy of the first frequency at a second, subsequent time does not satisfy the comparison threshold at the second time, the example characteristics enhancer 208 may decrease a second frequency of the media signal 102 at the first time to satisfy the comparison threshold at the first time. Once the example characteristics enhancer 208 enhances the example media signal 102, the example enhanced media signal transmitter 210 transmits the example enhanced media signal 104 to the example media output device 106 of FIG. 1.

While an example manner of implementing the example signature enhancer 101 of FIG. 1 is illustrated in FIG. 2, elements, processes and/or devices illustrated in FIG. 2 may be combined, divided, re-arranged, omitted, eliminated and/or implemented in any other way. Further, the example media signal receiver 200, the example signature settings determiner 202, the example signal transformer 204, the example characteristic analyzer 206, the example characteristics enhancer 208, the example enhanced media signal transmitter 210, and/or, more generally, the example signature enhancer 101 of FIG. 2, may be implemented by hardware, machine readable instructions, software, firmware and/or any combination of hardware, machine readable instructions, software and/or firmware. Thus, for example, any of the example media signal receiver 200, the example signature settings determiner 202, the example signal transformer 204, the example characteristic analyzer 206, the example characteristics enhancer 208, the example enhanced media signal transmitter 210, and/or, more generally, the example signature enhancer 101 of FIG. 2 could be implemented by analog and/or digital circuit(s), logic circuit(s), programmable processor(s), application specific integrated circuit(s) (ASIC(s)), programmable logic device(s) (PLD(s)) and/or field programmable logic device(s) (FPLD(s)). When reading any of the apparatus or system claims of this patent to cover a purely software and/or firmware implementation, at least one of the example media signal receiver 200, the example signature settings determiner 202, the example signal transformer 204, the example characteristic analyzer 206, the example characteristics enhancer 208, the example enhanced media signal transmitter 210, and/or, more generally, the example signature enhancer 101 of FIG. 2 is/are hereby expressly defined to include a tangible computer readable storage device or storage disk such as a memory, a digital versatile disk (DVD), a compact disk (CD), a Blu-ray disk, etc. storing the software and/or firmware. Further still, the example signature enhancer 101 of FIG. 2 may include elements, processes and/or devices in addition to, or instead of, those illustrated in FIG. 2, and/or may include more than one of any or all of the illustrated elements, processes and devices.

Flowcharts representative of example machine readable instructions for implementing the example signature enhancer 101 of FIG. 1 are shown in FIGS. 3 and 4. In these examples, the machine readable instructions comprise a program for execution by a processor such as the processor 512 shown in the example processor platform 500 discussed below in connection with FIG. 5.
The program may be embodied in machine readable instructions stored on a tangible computer readable storage medium such as a CD-ROM, a floppy disk, a hard drive, a digital versatile disk (DVD), a Blu-ray disk, or a memory associated with the processor 512, but the entire program and/or parts thereof could alternatively be executed by a device other than the processor 512 and/or embodied in firmware or dedicated hardware. Further, although the example program is described with reference to the flowcharts illustrated in FIGS. 3 and 4, many other methods of implementing the example signature enhancer 101 of FIGS. 1 and 2 may alternatively be used. For example, the order of execution of the blocks may be changed, and/or some of the blocks described may be changed, eliminated, or combined.

As mentioned above, the example processes of FIGS. 3 and 4 may be implemented using coded instructions (e.g., computer and/or machine readable instructions) stored on a tangible computer readable storage medium such as a hard disk drive, a flash memory, a read-only memory (ROM), a compact disk (CD), a digital versatile disk (DVD), a cache, a random-access memory (RAM) and/or any other storage device or storage disk in which information is stored for any duration (e.g., for extended time periods, permanently, for brief instances, for temporarily buffering, and/or for caching of the information). As used herein, the term tangible computer readable storage medium is expressly defined to include any type of computer readable storage device and/or storage disk and to exclude propagating signals and to exclude transmission media. As used herein, “tangible computer readable storage medium” and “tangible machine readable storage medium” are used interchangeably. Additionally or alternatively, the example processes of FIGS. 3 and 4 may be implemented using coded instructions (e.g., computer and/or machine readable instructions) stored on a non-transitory computer and/or machine readable medium such as a hard disk drive, a flash memory, a read-only memory, a compact disk, a digital versatile disk, a cache, a random-access memory and/or any other storage device or storage disk in which information is stored for any duration (e.g., for extended time periods, permanently, for brief instances, for temporarily buffering, and/or for caching of the information). As used herein, the term non-transitory computer readable medium is expressly defined to include any type of computer readable storage device and/or storage disk and to exclude propagating signals and to exclude transmission media. As used herein, when the phrase “at least” is used as the transition term in a preamble of a claim, it is open-ended in the same manner as the term “comprising” is open ended.

FIG. 3 is an example flowchart 300 representative of example machine readable instructions that may be executed by the example signature enhancer 101 of FIGS. 1 and 2 to increase the robustness of signatures corresponding to the example media signal 102 of FIGS. 1 and 2. Although the instructions of FIG. 3 are described in conjunction with the example signature enhancer 101 of FIGS. 1 and 2, the example instructions may be utilized by any type of signature enhancer. At block 301, the example media signal receiver 200 receives the example media signal 102. As described above in conjunction with FIG. 2, the example media signal 102 is a signal corresponding to audio, video, and/or an image to be output by the example media output device 106.
The media signal 102 includes unique characteristics that may be used to identify the media signal by generating a signature of the example media signal 102 and comparing the generated signature to a reference signature. At block 302, the example signature settings determiner 202 selects a signaturing algorithm from a plurality of signaturing algorithms. As described above in conjunction with FIG. 2, the selection of the signaturing algorithm may be based on the media signal 102, the meter 108, settings and/or preferences of a user and/or manufacturer of the example signature enhancer 101, etc. At block 304, the example characteristic analyzer 206 determines which characteristics are evaluated by the selected signaturing algorithm. For example, the signaturing algorithm may include a comparison of energy levels in the time domain, energy levels in the frequency domain, peak values in the frequency domain, DCT coefficients, and/or any other comparison of characteristics of any type of media signal. At block 306, the example characteristic analyzer 206 identifies a comparison threshold for the signaturing algorithm. As described above in conjunction with FIG. 2, each signaturing algorithm may correspond to a different comparison threshold based on the variance of noise that may affect the characteristics. At block 308, the example signal transformer 204 determines if the selected signaturing algorithm corresponds to the frequency domain. For example, if the signaturing algorithm includes comparing characteristics associated with the frequency spectrum, then the signaturing algorithm corresponds to the frequency domain. If the example signal transformer 204 determines that the selected signaturing algorithm corresponds to the frequency domain (block 308), then the example signal transformer 204 transforms the media signal 102 into the frequency domain (block 310). In some examples, the signal transformer 204 transforms the media signal 102 into the frequency domain using a Fourier transform. At block 312, the example characteristic analyzer 206 analyzes comparisons of characteristics of the example media signal 102 based on the selected signaturing algorithm. For example, if the selected signaturing algorithm is based on a comparison (e.g., a difference) between DCT values of a video signal, then the example characteristic analyzer 206 computes the differences between DCT values of the video signal that would be utilized to generate a signature. In other words, the example characteristic analyzer 206 analyzes the media signal 102 by performing the selected signaturing algorithm on the received media signal 102 to identify the differences. At block 314, the example characteristic analyzer 206 determines if all comparisons associated with the selected signaturing algorithm satisfy the identified comparison threshold. In some examples, the example characteristic analyzer 206 flags each comparison that does not satisfy the identified comparison threshold. If the example characteristic analyzer 206 determines that all of the comparisons satisfy the comparison threshold (block 314), the example enhanced media signal transmitter 210 outputs the example media signal 102 (block 316), because the example media signal 102 does not need to be enhanced. If the example characteristic analyzer 206 determines that one or more of the comparisons do not satisfy the comparison threshold (block 314), the example characteristics enhancer 208 enhances the media signal 102 to satisfy the comparison threshold (block 320), as further described in conjunction with FIG. 4. At block 322, the example enhanced media signal transmitter 210 outputs the example enhanced media signal 104 to the example media output device 106 of FIG. 1.
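As a rough end-to-end illustration of this flow for the frequency-based energy comparison algorithm, the following self-contained Python sketch transforms the signal, flags weak band comparisons, boosts the stronger band of each flagged pair, and transforms back. It treats blocks 302-310 as fixed (a frequency-domain algorithm and a Fourier transform) and, for brevity, omits attenuation and psychoacoustic limits; all names and the 16-band split are illustrative assumptions.

```python
import numpy as np

def preprocess(media: np.ndarray, threshold: float = 1.0) -> np.ndarray:
    """Sketch of blocks 310-322 for a frequency-band energy comparison:
    flag pairs with |E_F1 - E_F2| < threshold and boost the stronger band."""
    spectrum = np.fft.rfft(media)                       # block 310
    bands = np.array_split(np.arange(spectrum.size), 16)
    energy = [float((np.abs(spectrum[idx]) ** 2).sum()) for idx in bands]
    for i in range(0, 15, 2):                           # blocks 312-314
        delta = energy[i] - energy[i + 1]
        if abs(delta) < threshold:                      # block 320: enhance
            hi = i if delta >= 0 else i + 1
            need = threshold - abs(delta)
            # Scale the stronger band's amplitudes so its energy rises by `need`.
            gain = np.sqrt((energy[hi] + need) / max(energy[hi], 1e-12))
            spectrum[bands[hi]] *= gain
    return np.fft.irfft(spectrum, n=media.size)         # blocks 316/322
```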
FIG. 4 is an example flowchart 320 representative of example machine readable instructions that may be executed to implement the example signature enhancer 101 of FIGS. 1 and 2 to enhance the example media signal 102 to satisfy the selected comparison threshold, as described above in conjunction with block 320 of FIG. 3. Although the example flowchart 320 is based on a signaturing algorithm corresponding to a comparison of a larger characteristic and a smaller characteristic of a media signal, the example flowchart 320 may be utilized for any type of signaturing algorithm comparing any number of characteristics. The larger characteristic corresponds to the characteristic with the larger value in the comparison, and the smaller characteristic corresponds to the characteristic with the smaller value in the comparison. For example, in a frequency-based energy comparison signaturing algorithm, if E_F1 is 3.2 and E_F2 is 3.5, E_F1 is the smaller characteristic and E_F2 is the larger characteristic.

At block 400, the example characteristics enhancer 208 identifies a comparison that does not satisfy a comparison threshold (e.g., a first comparison flagged by the example characteristic analyzer 206). In the illustrated example of FIG. 4, the comparison is a comparison of a larger characteristic and a smaller characteristic of the media signal 102. Alternatively, any number of characteristics may be compared in any signaturing algorithm. Here, the larger characteristic is the stronger (e.g., higher) characteristic and the smaller characteristic is the weaker (e.g., lower) characteristic. At block 402, the example characteristics enhancer 208 determines if boosting the larger characteristic of the comparison will create audible distortion and/or violate media signal requirements. If the example characteristics enhancer 208 determines that boosting the larger characteristic of the comparison will create audible distortion and/or violate media signal requirements, the process continues to block 410. If the example characteristics enhancer 208 determines that boosting the larger characteristic of the comparison will not create audible distortion and will not violate media signal requirements, the example characteristics enhancer 208 determines if boosting the larger characteristic will negatively affect a subsequent comparison (block 404). As described above in conjunction with FIG. 2, boosting a characteristic at a first time may negatively affect a subsequent comparison by decreasing the difference of the subsequent comparison such that the subsequent comparison no longer satisfies the comparison threshold. If the example characteristics enhancer 208 determines that boosting the larger characteristic will negatively affect a subsequent comparison (block 404), the process continues to block 410. If the example characteristics enhancer 208 determines that boosting the larger characteristic will not negatively affect a subsequent comparison (block 404), the example characteristics enhancer 208 boosts the larger characteristic (block 406). The example characteristics enhancer 208 boosts the larger characteristic such that the boost will not create audible distortion, violate media signal requirements, and/or negatively affect a subsequent comparison.
For example, if boosting the larger characteristic will create audible distortion at 3.2 J, the example characteristics enhancer 208 may boost the larger characteristic to 3.1 J. At block 408, the example characteristics enhancer 208 determines if, after boosting the larger characteristic, the comparison (e.g., the comparison of the larger characteristic and the smaller characteristic) satisfies the comparison threshold. Because the boosting of the larger characteristic is limited by audible distortion, media signal requirements, and/or subsequent comparisons, boosting the larger characteristic may or may not satisfy the comparison threshold. For example, in a frequency-based energy comparison signaturing algorithm where the energy of the larger characteristic is 2.9 and the energy of the smaller characteristic is 2.7, the larger characteristic may be boosted to 3.1. However, if the comparison threshold is 1, the comparison threshold will still not be satisfied after the larger characteristic is boosted (e.g., |E_F1 − E_F2| = 3.1 − 2.7 = 0.4 < 1). Accordingly, the smaller characteristic may also need to be decreased to satisfy the comparison threshold. If the example characteristics enhancer 208 determines that the comparison satisfies the comparison threshold (block 408), the process continues to block 416. If the example characteristics enhancer 208 determines that the comparison does not satisfy the comparison threshold (block 408), the example characteristics enhancer 208 determines if attenuating (e.g., decreasing) the smaller characteristic of the comparison will create audible distortion and/or violate media signal requirements (block 410). If the example characteristics enhancer 208 determines that attenuating the smaller characteristic of the comparison will create audible distortion and/or violate media signal requirements (block 410), the process continues to block 416. If the example characteristics enhancer 208 determines that attenuating the smaller characteristic of the comparison will not create audible distortion and will not violate media signal requirements (block 410), the example characteristics enhancer 208 determines if attenuating the smaller characteristic will negatively affect a subsequent comparison (block 412). If the example characteristics enhancer 208 determines that attenuating the smaller characteristic will negatively affect a subsequent comparison (block 412), the process continues to block 416. If the example characteristics enhancer 208 determines that attenuating the smaller characteristic will not negatively affect a subsequent comparison (block 412), the example characteristics enhancer 208 attenuates (e.g., decreases) the smaller characteristic (block 414). The example characteristics enhancer 208 attenuates the smaller characteristic such that the attenuation will not create audible distortion, violate media signal requirements, and/or negatively affect a subsequent comparison. At block 416, the example characteristics enhancer 208 determines if there is a subsequent comparison that does not satisfy a comparison threshold (e.g., a second comparison of two different characteristics, or a second comparison of one of the larger or smaller characteristics with an additional characteristic). If the example characteristics enhancer 208 determines that there is a subsequent comparison that does not satisfy the comparison threshold (block 416), the example characteristics enhancer 208 returns to block 402 to enhance one or more characteristics of the subsequent comparison. Otherwise, the example process of FIG. 4 ends and control returns to the instructions of FIG. 3 (e.g., block 322).
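The per-comparison decision logic of blocks 400-416 can be sketched as follows. This is a minimal illustration, not the patented implementation: the boolean can_boost/can_attenuate parameters collapse the distortion, media-signal-requirement, and subsequent-comparison checks of blocks 402-404 and 410-412 into single flags, and max_level/min_level are hypothetical caps.

```python
def enhance_comparison(larger: float, smaller: float, threshold: float,
                       can_boost: bool, can_attenuate: bool,
                       max_level: float, min_level: float) -> tuple[float, float]:
    """Sketch of the FIG. 4 flow for one flagged comparison: boost the
    larger characteristic if allowed (blocks 402-406), then attenuate the
    smaller one if the threshold is still unmet (blocks 408-414)."""
    if can_boost:
        # Raise the larger characteristic toward the threshold, capped.
        larger = min(larger + (threshold - (larger - smaller)), max_level)
    if larger - smaller < threshold and can_attenuate:
        # Lower the smaller characteristic the rest of the way, floored.
        smaller = max(larger - threshold, min_level)
    return larger, smaller

# Example from the text: 2.9 vs. 2.7 with threshold 1 and a 3.1 boost cap.
print(enhance_comparison(2.9, 2.7, 1.0, True, True, 3.1, 0.0))  # -> (3.1, 2.1)
```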
FIG. 5 is a block diagram of an example processor platform 500 capable of executing the instructions of FIGS. 3 and 4 to implement the example signature enhancer 101 of FIGS. 1 and 2. The processor platform 500 can be, for example, a server, a personal computer, a mobile device (e.g., a cell phone, a smart phone, a tablet such as an iPad™), a personal digital assistant (PDA), an Internet appliance, or any other type of computing device. The processor platform 500 of the illustrated example includes a processor 512. The processor 512 of the illustrated example is hardware. For example, the processor 512 can be implemented by integrated circuits, logic circuits, microprocessors or controllers from any desired family or manufacturer. The processor 512 of the illustrated example includes a local memory 513 (e.g., a cache). The example processor 512 of FIG. 5 executes the instructions of FIGS. 3 and 4 to implement the example media signal receiver 200, the example signature settings determiner 202, the example signal transformer 204, the example characteristic analyzer 206, the example characteristics enhancer 208, and/or the example enhanced media signal transmitter 210 of FIG. 2 to implement the example signature enhancer 101. The processor 512 of the illustrated example is in communication with a main memory including a volatile memory 514 and a non-volatile memory 516 via a bus 518. The volatile memory 514 may be implemented by Synchronous Dynamic Random Access Memory (SDRAM), Dynamic Random Access Memory (DRAM), RAMBUS Dynamic Random Access Memory (RDRAM) and/or any other type of random access memory device. The non-volatile memory 516 may be implemented by flash memory and/or any other desired type of memory device. Access to the main memory 514, 516 is controlled by a memory controller.

The processor platform 500 of the illustrated example also includes an interface circuit 520. The interface circuit 520 may be implemented by any type of interface standard, such as an Ethernet interface, a universal serial bus (USB), and/or a PCI express interface. In the illustrated example, one or more input devices 522 are connected to the interface circuit 520. The input device(s) 522 permit(s) a user to enter data and commands into the processor 512. The input device(s) can be implemented by, for example, a sensor, a microphone, a camera (still or video), a keyboard, a button, a mouse, a touchscreen, a track-pad, a trackball, isopoint and/or a voice recognition system. One or more output devices 524 are also connected to the interface circuit 520 of the illustrated example. The output devices 524 can be implemented, for example, by display devices (e.g., a light emitting diode (LED), an organic light emitting diode (OLED), a liquid crystal display, a cathode ray tube (CRT) display, a touchscreen, a tactile output device, and/or speakers). The interface circuit 520 of the illustrated example, thus, typically includes a graphics driver card, a graphics driver chip or a graphics driver processor. The interface circuit 520 of the illustrated example also includes a communication device such as a transmitter, a receiver, a transceiver, a modem and/or network interface card to facilitate exchange of data with external machines (e.g., computing devices of any kind) via a network 526 (e.g., an Ethernet connection, a digital subscriber line (DSL), a telephone line, coaxial cable, a cellular telephone system, etc.). The processor platform 500 of the illustrated example also includes one or more mass storage devices 528 for storing software and/or data.
Examples of such mass storage devices 528 include floppy disk drives, hard drive disks, compact disk drives, Blu-ray disk drives, RAID systems, and digital versatile disk (DVD) drives. The coded instructions 532 of FIGS. 3 and 4 may be stored in the mass storage device 528, in the volatile memory 514, in the non-volatile memory 516, and/or on a removable tangible computer readable storage medium such as a CD or DVD.

From the foregoing, it will be appreciated that the above disclosed methods, apparatus, and articles of manufacture increase the robustness of media signatures. Meters, or other media recognizing devices, intercept ambient audio, capture images, and/or intercept media signals to generate signatures based on characteristics of the ambient audio, captured image, and/or media signal to identify exposure to media. However, because media signals are subject to noise, the generated signatures may be inaccurate. Examples disclosed herein alleviate such signature-based problems related to noise by preprocessing a media signal to enhance the characteristics of the media signal. Examples disclosed herein include determining where the signaturing algorithm is subject to inaccuracies. For example, when the signaturing algorithm is based on a difference between a first characteristic and a second characteristic, examples disclosed herein determine which differences are below a comparison threshold. Examples disclosed herein enhance at least one of the first or second characteristics to increase the difference, thereby decreasing the inaccuracies related to generating the signature. Examples disclosed herein may enhance the media signal by boosting the first characteristic and/or decreasing the second characteristic. Using examples disclosed herein, the robustness of media signatures is significantly increased, thereby increasing signature recovery accuracy.

Although certain example methods, apparatus and articles of manufacture have been described herein, other implementations are possible. The scope of coverage of this patent is not limited thereto. On the contrary, this patent covers all methods, apparatus and articles of manufacture fairly falling within the scope of the claims of this patent.
While each of the drawing figures illustrates a particular embodiment for purposes of illustrating a clear example, other embodiments may omit, add to, reorder, or modify any of the elements shown in the drawing figures. For purposes of illustrating clear examples, one or more figures may be described with reference to one or more other figures. However, using the particular arrangement illustrated in the one or more other figures is not required in other embodiments.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT

In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present disclosure. It will be apparent, however, that the present disclosure may be practiced without these specific details. The detailed description that follows describes exemplary embodiments, and the features disclosed are not intended to be limited to the expressly disclosed combination(s). Therefore, unless otherwise noted, features disclosed herein may be combined to form additional combinations that were not otherwise shown for purposes of brevity. It will be further understood that: the term “or” may be inclusive or exclusive unless expressly stated otherwise; the term “set” may comprise zero, one, or two or more elements; the terms “first”, “second”, “certain”, and “particular” are used as naming conventions to distinguish elements from each other, and do not imply an ordering, timing, or any other characteristic of the referenced items unless otherwise specified; the term “and/or” as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items; and the terms “comprises” and/or “comprising” specify the presence of stated features, but do not preclude the presence or addition of one or more other features.

This document generally describes systems, methods, devices, and other techniques for identifying and monitoring connections in an optical system. An optical system includes one or more source optical devices and one or more remote optical devices that implement an identification mechanism and/or a monitor mechanism. To implement the identification mechanism, a source optical device includes an identification (ID) block comprising optical elements that perform identification of one or more connections to one or more remote optical devices. The one or more remote optical devices each include a remote ID block comprising one or more optical elements. ID signals generated at the source optical device are transmitted to the one or more remote optical devices, processed by the remote ID block, and transmitted back to the source optical device, where the ID block identifies the one or more connections based on the returned ID signals. The ID signals generated at the source optical device for identification belong to a set of ID wavelengths λ{ID}. In some embodiments, λ{ID} may overlap with a set of service wavelengths λ{service}, and the identification mechanism is not used during normal operation of the source optical device and the remote optical device. To implement the monitor mechanism, a source optical device includes a monitor block comprising optical elements that evaluate connectivity of one or more connections between the source optical device and one or more remote optical devices. The one or more remote optical devices each include a remote monitor block comprising one or more optical elements.
Monitor signals generated at the source optical device are transmitted to the one or more remote optical devices, processed by the remote monitor block, and transmitted back to the source optical device, where the monitor block evaluates the connectivity of the one or more connections based on the returned monitor signals. The monitor signals generated at the source optical device may have a reference wavelength λr. In some embodiments, λr does not overlap with a set of service wavelengths λ{service}, and the monitoring mechanism is used during normal operation of the source optical device and the remote optical device. In some embodiments, the remote ID block/s and/or the remote monitor block/s only include passive elements that do not require electronic elements and/or electronic power. In this manner, only purely passive optical circuits are deployed in the remote optical device/s that can be tested using the ID mechanism and/or the monitor mechanism. Additional features and advantages are apparent from the specification and the drawings.

FIG. 1 illustrates an optical system in an example embodiment. The optical system 100 includes an optical network with one or more source optical devices 102 and one or more remote optical devices 106-108. As used herein, the term “optical device” refers to optical equipment with one or more optical ports to communicatively couple the optical device to another device so that optical signals can travel over a communication link between the optical devices. An optical device may be a standalone device, and/or may include two or more optical device components. The source optical device 102 communicates with the one or more remote optical devices 106-108 via one or more optical links 1-i. A source optical device 102 may be coupled with a particular remote optical device 106-108 by one or multiple optical links. An optical link may include a transmitter, receiver, and cable assembly that can transmit information between two points. An optical link may include unidirectional or bidirectional fibers. For example, optical link 1 includes two fibers used for unidirectional communication, while optical link 3 includes one fiber used for bidirectional communication. As used herein, a fiber within an optical link is referred to as an optical link component. An optical link may include one or more cables that terminate with one or more optical connectors designed to mate with an optical port of an optical device.

The source optical device 102 may be configured to perform one or more functions in conjunction with one or more remote optical devices 106-108. The function/s are carried out by a function block 104 at the source optical device 102 and one or more function blocks 110-112 at the remote optical devices 106-108. As used herein, an optical block, such as function blocks 104 and 110-112, is a set of one or more optical elements that generate and/or process one or more optical signals related to a particular function. In some embodiments, the source optical device 102 is a direction device in an OADM node, and the remote optical devices 106-108 are add-drop group devices in the OADM node. Function block 104 generates service signals having frequencies selected from a set of service wavelengths λ{service} that are transmitted to one or more remote optical devices 106-108 over the one or more communication links 1-i.
In some embodiments, λ{service} includes wavelengths in a particular communication band, such as the O-band, E-band, S-band, C-band, L-band, 850-nm band, U-band, and/or another communication band. A channel refers to an optical signal transmitted at a particular wavelength. As used herein, the term “transmission path” refers to the path of service signals from a function block 104 of a source optical device 102 to a function block 112 of a remote optical device 108. As used herein, the term “receiving path” refers to the path of service signals from a function block 112 of a remote optical device 108 to a function block 104 of a source optical device 102. A transmission path and/or a receiving path may travel over an optical link. Transmission path i carries service signals of a particular service wavelength λi from W1 to X1 over optical link i. Receiving path i carries service signals of λi from Z1 to Y1 over optical link i.

In some embodiments, the source optical device 102 is configured to identify optical connections at one or more remote optical devices 106-108. For example, the source optical device 102 may determine that signals of a particular wavelength travel over a particular optical link. In some embodiments, the source optical device 102 identifies a plurality of optical connections to a plurality of remote optical devices 106-108. The source optical device 102 may include an identification (ID) block 114 that includes one or more optical elements that identify connections to one or more remote optical devices 106-108. The ID block 114 transmits identification (ID) signals having frequencies selected from a set of wavelengths λ{ID} to one or more remote optical devices 106-108, such as by directing the ID signals into transmission path i at B1. A remote ID block 116 at a remote optical device 108 processes the ID signals from the ID block 114 and transmits the returned ID signals back to the ID block 114. At the remote optical device 108, ID signals from transmission path i are directed to the remote ID block 116 at G1, and returned ID signals from the remote ID block 116 are directed to the receiving path at H1. The returned ID signals are directed from the receiving path i to the ID block 114 at M1. The term “identification mechanism” is used herein to refer to the combination of the ID block 114 at the source optical device 102 and the remote ID block 116 at one or more remote optical devices 108 optically connected to the source optical device. The identification mechanism is described in greater detail hereinafter.

Alternatively and/or in addition, the source optical device 102 may be configured to monitor optical connections at one or more remote optical devices 106-108. For example, the source optical device 102 may include a monitor block 118 that includes one or more optical elements that monitor connections between the source optical device 102 and one or more remote optical devices 106-108, such as to evaluate connectivity of optical links. The monitor block 118 transmits monitor signals having one or more reference frequencies of wavelength λr to one or more remote optical devices 106-108, such as by directing the monitor signals into transmission path i at P1. A remote monitor block 120 at a remote optical device 108 processes the monitor signals from the monitor block 118 and transmits the returned monitor signals back to the monitor block 118.
At the remote optical device 108, monitor signals from transmission path i are directed to the remote monitor block 120 at E1, and returned monitor signals from the remote monitor block 120 are directed to the receiving path at F1. The returned monitor signals are directed from the receiving path i to the monitor block 118 at K1. The source optical device 102 may include one or more microprocessors 150. The microprocessor/s 150 may perform one or more computations required by function block 104, ID block 114, and/or monitor block 118. In some embodiments, the microprocessor/s 150 execute one or more control instructions to carry out one or more control processes. The control instructions may include hard-coded instructions, firmware, and/or software. In some embodiments, the microprocessor/s 150 execute instructions for an ID control process to generate ID signals, and process measurements of returned ID signals to generate output comprising the identity of one or more optical links to the remote optical devices 106-108. In some embodiments, the microprocessor/s 150 execute instructions for a monitor control process to generate monitor signals, and process measurements of returned monitor signals to generate output comprising the health of one or more connections to one or more remote optical devices 106-108. The term “monitor mechanism” is used herein to refer to the combination of the monitor block 118 at the source optical device 102 and the remote monitor block 120 at one or more remote optical devices 108 optically connected to the source optical device. The monitor mechanism is described in greater detail hereinafter.

In an optical system, a source optical device 102 and one or more connected remote optical devices 106-108 may implement both the identification mechanism and the monitor mechanism, or may independently implement either the identification mechanism or the monitor mechanism. Different source optical devices in the same optical system may implement none, one, or both of the identification mechanism and/or the monitor mechanism. In some embodiments, the source optical device 102 is a direction device in an optical add-drop multiplexer (OADM) node, and each add-drop group device in the OADM node implements the identification mechanism, the monitor mechanism, or both the identification mechanism and the monitor mechanism. For ease of illustration, aspects described herein with respect to a particular source optical device, a particular remote optical device, and/or a particular optical link may apply to one or more other source optical devices, remote optical devices, and/or optical links. For example, an optical system may include one or multiple source optical devices; a source optical device may communicate with a remote optical device over one or multiple optical links; and/or a source optical device may communicate with one or multiple remote optical devices. Furthermore, the techniques for identification and monitoring may be applied to one optical link, multiple optical links, and/or all optical links from a source optical device. While one or more specific elements may be shown in a particular embodiment, other elements and configurations may provide equivalent functionality without departing from the spirit or the scope of this disclosure.

FIG. 2 illustrates an optical system with an ID block in a source optical device and a remote ID block in a remote optical device in an example embodiment. The optical system 200 includes a source optical device 202 and a remote optical device 208 connected by an optical link i.
A transmission path i from W2 to X2 carries service signals from a function block 204 of the source optical device 202 to a function block 212 of the remote optical device 208, and a receiving path i from Z2 to Y2 carries service signals from the function block 212 of the remote optical device 208 to the function block 204 of the source optical device 202, using a particular service wavelength of a set of service wavelengths λ{service}. The source optical device 202 includes an ID block 214 that identifies one or more connections at one or more remote optical devices. The remote optical device 208 includes a remote ID block 216. As previously noted, a remote ID block may be present in one or multiple remote optical devices connected to the source optical device 202. Furthermore, multiple remote ID blocks may be present in a remote optical device 208.

The ID block 214 transmits ID signals over transmission path i using a set of ID wavelengths λ{ID}. At A2, a light source 220 generates light of the set of wavelengths λ{ID}. In some embodiments, the light source 220 includes one or more broadband light sources, one or more tunable lasers, one or more diodes such as light-emitting diodes (LEDs) and laser diodes (LDs), and/or one or more other light sources that can provide λ{ID} light. In some embodiments, the light source 220 is a light source that exists for another purpose in the source optical device 202, such as a light source that belongs to function block 204. The ID signals are directed into transmission path i at B2 using one or more elements 222-224. For example, element 224 may be a splitter and/or switch, a multiplexer, or another optical element. In some embodiments, the light source 220 generates λ{ID} light that is directed into transmission paths that travel over one or more other optical links. For example, element 222 may be a switch element and/or splitter element that transmits light to one or more other transmission paths, such as but not limited to a transmission path that travels over optical link 2, using one or more elements 242.

At G2, the ID signals are directed into a bypass path from G2 to H2 by using an element 226 that can direct the ID signals into the bypass path, such as a switch or another optical element at G2. In some embodiments, the bypass path is only enabled when identification of optical links is performed for the optical system 200. In such cases, element 226 may be a switch, without affecting the transmission of service signals during normal operation of function block 204 and function block 212. The ID signals transmitted over transmission path i enter the bypass path G2-H2 and travel to a set of wavelength-division multiplexing (WDM) filters 228. Each WDM filter of the set of WDM filters 228 can either pass or block a different wavelength. The set of WDM filters 228 can be used in different combinations. When the set of WDM filters includes a maximum number of different-wavelength filters l, and a maximum number of filters used to build such an optical ID block is k (k ≤ l), the total number of unique identifiers (IDs) that can be created by such an optical ID block will be equal to C(l,k) + C(l,k−1) + . . . + C(l,1), where C(l,j) denotes the number of ways to choose j of the l filters. For example, if the set of WDM filters has 400 GHz channel spacing in a typical C band with 4 THz total bandwidth, then the set of WDM filters can have a maximum of l = 10 filters with different wavelengths (4 THz/400 GHz). If only one filter is used to build the optical ID block (k = 1), then 10 optical links can be identified. If up to two filters are used to build the optical ID block (k = 2), then 55 optical links can be identified (C(10,2) + C(10,1) = 45 + 10 = 55).
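The counting rule above is simply a sum of binomial coefficients. The following Python snippet, a small sketch with illustrative names, reproduces the worked numbers from the text.

```python
from math import comb

def unique_ids(l: int, k: int) -> int:
    """Number of distinct optical IDs when a remote ID block may hold
    between 1 and k of l available wavelength filters:
    C(l,k) + C(l,k-1) + ... + C(l,1)."""
    return sum(comb(l, j) for j in range(1, k + 1))

# Worked examples from the text: l = 10 filters (4 THz band / 400 GHz spacing).
assert unique_ids(10, 1) == 10   # one filter per ID block
assert unique_ids(10, 2) == 55   # up to two filters: 45 + 10
```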
Based on the maximum connectivity of the source optical device 202, a minimum number of filters needed can be determined to ensure every connection can be uniquely identified among all connections from the source optical device 202 to the remote optical device/s 208. In some embodiments, the set of WDM filters 228 and/or the remote ID block 216 is a pluggable component in the remote optical device 208. When the set of WDM filters 228 and/or the remote ID block 216 is a pluggable component, the number of WDM filters can be changed, such as to accommodate a greater number of remote optical devices 208 identifiable by the source optical device 202.

FIG. 3A illustrates a configuration for a set of WDM filters (e.g., set of WDM filters 228) in a remote ID block (e.g., remote ID block 216) in an example embodiment. A set of WDM filters 328 in a bypass path (e.g., G2-H2) includes one or multiple optical notch filters, which can block signals of different wavelengths (λi, λj, λk). The filters are placed in series, and one or a series of wavelengths will be blocked when light passes through the set of WDM filters 328.

FIG. 3B illustrates another configuration for a set of WDM filters (e.g., set of WDM filters 228) in a remote ID block (e.g., remote ID block 216) in an example embodiment. A set of WDM filters 358 in a bypass path (e.g., G2-H2) includes one or multiple optical band pass filters, which can each pass signals of different wavelengths (λi, λj, λk, λm). The filters are cascaded together, and one or a series of wavelengths will be passed while the rest will be blocked when light passes through this block.

Returning to FIG. 2, the ID signal is directed into the receiving path i at H2. At M2, one or more elements 232-234, such as one or more splitters, filters, demultiplexers, and/or other optical modules, direct the returned ID signals from the receiving path i to a set of one or more elements 236-238 of an optical channel monitor (OCM) 240. The OCM 240 measures properties of returned ID signals, such as the wavelength of a particular received signal. In some embodiments, the OCM 240 includes a tunable filter 236 and a photodetector 238. The tunable filter 236 and photodetector 238 are integrated to perform optical wavelength channel monitoring. The OCM 240 allows the ID block 214 to determine which wavelength(s) of the set of ID wavelengths λ{ID} have been blocked or passed, allowing the ID block 214 to uniquely identify optical link i. The light generated by the light source 220 passes through the path A2-B2-C2-D2-G2-H2-I2-J2-M2-N2-O2. In order to perform identification, light returning from the receiving path (e.g., receiving path i) of an optical link is directed through a channel monitor (e.g., OCM 240). The source optical device 202 may have one or multiple OCMs to test a set of optical link/s (e.g., optical link i) with remote ID block/s (e.g., remote ID block 216). In some embodiments, the OCM 240 is shared between two or more receiving paths, such that returned ID signals returning over one or more other optical links are also directed to the OCM 240. For example, element 244, such as a splitter element and/or a filter element, directs light from a receiving path that travels over optical link 2 to the OCM 240. In some embodiments, one OCM 240 is shared between all testable optical links with remote ID blocks. Alternatively and/or in addition, one or more additional OCM elements may be present in one or more connections to other remote ID blocks.
The source optical device202may include electronic circuitry that uses the output of the OCM240to perform identification. In some embodiments, the ID block214may identify a wavelength associated with one or more optical links, one or more ports associated with a particular wavelength, or other identification information. In some embodiments, each connection between the source optical device202and a remote optical device208includes a remote ID block and an ID block, which may include shared elements. In some embodiments, the ID block214may include electrical circuitry, and/or may share electrical circuitry and/or resources used by other functionality (e.g. function block204) of the source optical device202. In some embodiments, the source optical device202includes one or more microprocessors (e.g. microprocessor150) that execute one or more control instructions to carry out one or more identification control processes as described herein. In some embodiments, the remote ID block216is a passive optical block that includes only passive optical elements. In some embodiments, the set of ID signal wavelengths λ{ID}may overlap with the set of service signal wavelengths λ{service}, and the identification mechanism does not operate during normal operation of the optical system200. For example, the identification mechanism described herein may be used during installation, modification, testing, and/or provisioning of the source optical device202and the remote optical device/s208. In some embodiments, λ{ID}does not overlap with λ{service}. When there is no conflict or overlap between the ID wavelengths λ{ID}and the service wavelengths λ{service}, the identification mechanism may be used during normal operation of the source optical device202and the remote optical device/s208. FIG.4illustrates an optical system with a monitor block in a source optical device and a remote monitor block in a remote optical device in an example embodiment. The optical system400includes a source optical device402and a remote optical device408connected by an optical link i. A transmission path i from W4to X4carries service signals from a function block404of the source optical device402to a function block412of the remote optical device408over optical link i using light of a particular service wavelength λiof a set of service wavelengths λ{service}. A receiving path i from Z4to Y4carries service signals from the function block412to the source optical device402using λilight. The source optical device402includes a monitor block418that monitors one or more connections between the source optical device402and one or more remote optical devices408. The remote optical device408includes a remote monitor block420that is communicatively coupled with the monitor block418. As previously noted, a remote monitor block420may be present in one or multiple remote optical devices connected to the source optical device402. Furthermore, multiple remote monitor blocks may be present in a remote optical device408. The monitor block418transmits monitor signals of a reference wavelength λrover transmission path i. The monitor signals are directed into transmission path i at P4. For example, a light source422at Q4may generate the monitor signal. 
In some embodiments, the light source422includes one or more broadband light sources, one or more tunable lasers, one or more diodes such as light-emitting diodes (LEDs) and laser diodes (LDs), and/or one or more other light sources that can provide light of the reference wavelength λr. In some embodiments, the light source422is a light source that exists for another purpose in the source optical device402, such as a light source that belongs to function block404. In some embodiments, multiple reference wavelengths and/or dynamically-selected reference wavelengths are used. In some embodiments, the light source422generates λrlight that is directed into transmission paths that travel over one or more other optical links. For example, element424may be a switch element and/or splitter element that transmits light to one or more other transmission paths, such as but not limited to a transmission path that travels over optical link3, using one or more elements. The monitor signal is added into the transmission path i corresponding to optical link i at P4using one or more elements426. For example, element426may be a multiplexer (MUX) element that combines a service signal from the function block404with the monitor signal from the light source422. The monitor signal travels over a path Q4-P4-C4-D4-E4-F4-I4-J4-K4-L4. At E4, the monitor signals are directed into a bypass path from E4to F4, such as by using element432. For example, the bypass path may be set up using WDM techniques, such as by using an optical demultiplexer (DEMUX) element432at E4and a MUX element434at F4. The DEMUX element432separates λrmonitor signals at E4so that they are not received at the function block412of the remote optical device408. The MUX element434adds the λrmonitor signals to the receiving path i at F4so that they return to the source optical device402for processing. At K4, one or more optical elements436-438direct the monitor signal from the receiving path i to a photodetector440. For example, a DEMUX element436may separate λrmonitor signals at K4and direct them to the photodetector440. The redirected monitor signals are not received at the function block404of the source optical device402. Alternatively, other elements may be used to direct the monitor signal from the receiving path i to a photodetector440. The photodetector440evaluates the returned monitor signal from the remote optical device408. For example, the photodetector440may be used to detect a power of the returned monitor signal, such as to determine an optical loss on the path C4-D4-E4-F4-I4-J4. Based on the configuration of the remote monitor block420, it may be assumed in one or more embodiments that the optical loss between D4and I4is negligible. The connectivity and/or health of the optical link i can be assessed and continuously monitored. For example, the optical losses over C4-D4and I4-J4may be compared with baseline data at factory calibration and/or provisioning. The monitoring mechanism may detect a severe fiber break event or a loss degradation issue during normal operation of the source optical device402and the remote optical device408. In some embodiments, the photodetector440is shared between two or more optical links such that monitor signals from one or more other receiving paths are also directed to the same photodetector440. For example, an element438, such as but not limited to an optical coupler or switch element, may direct light from a receiving path of optical link3to the photodetector440. 
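One way to picture the baseline comparison just described is the short Python sketch below. It is a hypothetical illustration, not part of the disclosure: the function names and the margin values are assumptions. It converts a launched/detected monitor power pair into a loss figure and flags degradation or a suspected fiber break relative to the calibration baseline:

import math

def loss_db(p_launched_mw: float, p_detected_mw: float) -> float:
    """Optical loss in dB between launched and detected monitor power."""
    return 10.0 * math.log10(p_launched_mw / p_detected_mw)

def classify_link(p_launched_mw: float, p_detected_mw: float,
                  baseline_loss_db: float,
                  degradation_margin_db: float = 2.0,
                  break_margin_db: float = 20.0) -> str:
    """Compare measured loss against the factory/provisioning baseline."""
    excess = loss_db(p_launched_mw, p_detected_mw) - baseline_loss_db
    if excess >= break_margin_db:
        return "fiber break suspected"
    if excess >= degradation_margin_db:
        return "loss degradation"
    return "healthy"

# Example: 1.0 mW launched, 0.05 mW detected, 10 dB baseline -> ~3 dB excess.
print(classify_link(1.0, 0.05, baseline_loss_db=10.0))  # "loss degradation"

In practice the margins would be chosen from the transmission system design budget rather than the placeholder values used here.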
In some embodiments, one photodetector440is shared between all monitored optical links with remote monitor blocks. Alternatively and/or in addition, one or more additional photodetector elements may be present in one or more connections to other remote monitor blocks. In some embodiments, each connection between the source optical device402and a remote optical device includes a remote monitor block and a monitor block, which may include shared elements. In some embodiments, the monitor block418may include electrical circuitry, and/or may share electrical circuitry and/or resources used by other functionality (e.g. function block404) of the source optical device402. In some embodiments, the source optical device402includes one or more microprocessors (e.g. microprocessor150) that execute one or more control instructions to carry out one or more monitor control processes as described herein. In some embodiments, the remote monitor block420is a passive optical block that includes only passive optical elements. In some embodiments, the monitor mechanism operates during normal operation of the optical system400, and the reference wavelength λrof the monitor signals does not overlap with the wavelengths λ{service}of the service signals. For example, λrmay be outside of a frequency band selected for the service signals. In some embodiments, more than one reference wavelength is used. In some embodiments, a source optical device is configured to independently monitor connectivity and health of a first link component570used by a transmission path and a second link component572used by a receiving path.FIG.5illustrates an optical system with a monitor block for a source optical device and a remote monitor block for a remote optical device in an example embodiment. The optical system500includes a source optical device502and a remote optical device508connected by an optical link i. A transmission path i from W5to X5carries service signals from function block504of the source optical device502to the function block512of the remote optical device508. From C5to D5, transmission path i travels over a first link component570of optical link i, such as a first optical fiber. A receiving path i from Z5to Y5carries service signals from the function block512to function block504. From I5to J5, the receiving path i travels over a second link component572of optical link i, such as a second optical fiber. The service signals have a particular service wavelength λiof a set of service wavelengths λ{service}. The source optical device502includes a monitor block518. One or more remote monitor blocks520may be present in one or multiple optical devices connected to the source optical device502. The monitor block518transmits monitor signals of a reference wavelength λrover one or more optical link components to be monitored. A first circuit including monitor block elements522-530and remote monitor block elements552-554is configured to monitor transmission path i, and a second circuit including monitor block elements532-540and remote monitor block elements556-558is configured to monitor receiving path i. In some embodiments, the first circuit and the second circuit operate in the same or similar fashion using elements that perform the same or similar functionality with respect to transmission path i and receiving path i. The first circuit is described in greater detail hereinafter. In a first circuit associated with transmission path i, a light source528generates the λrmonitor signals. 
The monitor signals are directed into the transmission path i corresponding to optical link i at P5using one or more elements. For example, a WDM element524may comprise a MUX element that adds the λrmonitor signal to the λiservice signal. In some embodiments, the light source528includes one or more broadband light sources, one or more tunable lasers, one or more diodes such as light-emitting diodes (LEDs) and laser diodes (LDs), and/or one or more other light sources that can provide λrlight. In some embodiments, the light source528is a light source that exists for another purpose in the source optical device502, such as a light source that belongs to function block504. In some embodiments, the light source528generates monitor signals that are directed into the transmission path of one or more other optical links. For example, element522may be a switch element and/or splitter element that transmits light to one or more other transmission paths, such as but not limited to a transmission path that travels over optical link3. In the remote monitor block520at the remote optical device508, the monitor signal enters a bypass path at E5using one or more elements. For example, a WDM element552may comprise a DEMUX element that separates the λrmonitor signals at E5so that they are not received at function block512. The WDM element552directs the λrmonitor signals to R5. At R5, a reflector554reflects the λrmonitor signal. The λrmonitor signal travels back to the WDM element552, which may comprise a MUX element that directs the reflected monitor signal back to the source optical device502. Although transmission path i is illustrated with arrows indicating a direction of the service signals from W5to X5, the transmission path i allows bidirectional signaling, allowing the reflected monitor signal to travel from the reflector554at R5to the WDM element524at P5. The reflected monitor signal travels to a photodetector526at L5. For example, a circulator at T5may direct outgoing monitor signals from the light source528to the WDM element524via element522, and may direct incoming reflected monitor signals to the photodetector526. In some embodiments, a DEMUX element of the WDM element524at P5separates the returned monitor signals of wavelength λrso that they do not travel to function block504. The photodetector526detects a power of the reflected monitor signal, such as to determine an optical loss over its path from the light source528to the photodetector526, Q5-T5-A5-P5-C5-D5-E5-R5-E5-D5-C5-P5-A5-T5-L5. The source optical device502may have one or multiple photodetectors526to evaluate reflected monitor signals. In some embodiments, the photodetector526is shared between two or more optical links such that reflected monitor signals from one or more other receiving paths are also directed to the same photodetector526. Alternatively and/or in addition, photodetector elements may be present in one or more other optical links. Based on the configuration of the monitor block518and the remote monitor block520, it may be assumed in one or more embodiments that the optical loss on segments outside of the first link component570is negligible. The connectivity and/or health of the first link component570can be assessed and continuously monitored. For example, the optical measurements detected by the photodetector526may be compared with baseline data at factory calibration and/or provisioning to determine optical loss. 
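Because the reflected monitor signal crosses the first link component570twice (out to the reflector554and back), the one-way loss of that component can be estimated from the round-trip measurement. The Python sketch below is a hypothetical illustration under the stated assumption that losses outside the link component are negligible or known as a fixed internal figure; the function names and example values are not from the disclosure:

import math

def round_trip_loss_db(p_launched_mw: float, p_reflected_mw: float) -> float:
    """Loss over the full reflected path from light source to photodetector."""
    return 10.0 * math.log10(p_launched_mw / p_reflected_mw)

def one_way_link_loss_db(p_launched_mw: float, p_reflected_mw: float,
                         fixed_internal_loss_db: float = 0.0) -> float:
    """Estimate the one-way loss of the monitored link component.

    The monitor signal traverses the link component twice, so after
    subtracting any fixed internal losses the remainder is split evenly
    between the outbound and return passes.
    """
    rt = round_trip_loss_db(p_launched_mw, p_reflected_mw)
    return (rt - fixed_internal_loss_db) / 2.0

# Example: 1.0 mW launched, 0.01 mW reflected, 4 dB fixed internal loss
# -> (20 - 4) / 2 = 8 dB estimated one-way loss on the link component.
print(one_way_link_loss_db(1.0, 0.01, fixed_internal_loss_db=4.0))

The estimate would then be compared against the calibration baseline in the same way as the loss comparison described forFIG.4.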
The monitoring mechanism may detect a fiber disconnection or failure event or a loss degradation issue in the first link component570during normal operation of the source optical device502and the remote optical device508. In some embodiments, the first circuit associated with transmission path i has additional components to improve health and connectivity monitoring. For example, a photodetector530may be used to monitor the health of the light source528. Light travels from the light source528to the photodetector530without traveling over any optical links. For example, light may travel from the light source528to the photodetector530via an element529, such as but not limited to an optical splitter or switch element that directs light away from the path Q5-T5to the photodetector530. The photodetector530may determine a current output of the light source528and compare the current output of the light source to the baseline data at factory calibration to determine the health of the light source528. In some embodiments, the optical measurements detected by the photodetector526are compared to the current output of the light source as detected by the photodetector530to determine optical loss over transmission path i. In some embodiments, each connection between the source optical device502and a remote optical device includes a remote monitor block and a monitor block, which may include shared elements. In some embodiments, the monitor mechanism operates during normal operation of the optical system500, and the reference wavelength λrof the monitor signals does not overlap with the wavelengths λ{service}of the service signals. For example, λrmay be outside of a frequency band selected for the service signals. In some embodiments, more than one reference wavelength is used. In some embodiments, the monitor block518may include electrical circuitry, and/or may share electrical circuitry and/or resources used by other functionality (e.g. function block504) of the source optical device502. In some embodiments, the source optical device502includes one or more microprocessors (e.g. microprocessor150) that execute one or more control instructions to carry out one or more monitor control processes as described herein. In some embodiments, the remote monitor block520is a passive optical block that includes only passive optical elements. An optical add-drop multiplexer (OADM) is an optical device used in wavelength-division multiplexing (WDM) systems for multiplexing and routing different wavelengths of light into or out of a single fiber. This allows multiple communication channels with different wavelengths to travel over a fiber. An OADM device generally includes an optical demultiplexer (DEMUX), an optical multiplexer (MUX), a method of reconfiguring the paths between the optical demultiplexer and the optical multiplexer, as well as a set of ports for adding and dropping signals. OADMs are often used in telecommunications networks. An OADM may refer to both a fixed optical add-drop multiplexer (FOADM) and/or a reconfigurable optical add-drop multiplexer (ROADM). FIG.6illustrates an optical system with an OADM node in an example embodiment. 
The optical system600includes a plurality of OADM nodes including OADM node608. The OADM node608is coupled to a plurality of other nodes in the optical system600by at least one inter-node optical link620-626. An inter-node optical link620-626includes at least one optical fiber for transmission of multiple wavelength signals in a unidirectional and/or bidirectional manner to and from the OADM node608. Typically, one or more OADM nodes608are arranged in a bus, ring, star, mesh, or hybrid topology. An OADM node608may be a terminal node in the optical system600, such as when the OADM node608is connected to only one inter-node optical link620-626. The OADM node608includes at least one direction device610-616. A direction device610-616routes signals received over a corresponding inter-node optical link620-626to other components within the OADM node608, such as but not limited to one or more add-drop group devices602-606and/or one or more other direction devices610-616. For example, the OADM node608may include one or more express communication links that directly transmit and receive service signals between direction devices610-616without adding or dropping any channels. A direction device610may be coupled to one or more add-drop group devices602-606with one or more optical links. An add-drop group device602-606may perform add-drop functionality for signals of a different set of wavelengths. For example, a particular direction device610may communicate signals with a first set of wavelengths with a first add-drop group device602, signals with a second set of wavelengths with a second add-drop group device604, and signals with a third set of wavelengths with a third add-drop group device606. In some embodiments, the signals assigned to a particular add-drop group device602-606are a sub-band of the band of frequencies used by the optical system600. In some embodiments, an OADM node608only has one add-drop group device602, and a direction device610-616transmits the entire band of service signals to the single add-drop group device602. An add-drop group device602-606separates and combines individual channels of particular wavelengths in the received service signals. For example, an add-drop group device602may drop or separate signals of wavelength λx, transmit the λxsignals to a device628over an optical link630coupling the device628and the add-drop group device602, receive λxsignals from the device628over the optical link630, and add the received λxsignals to a combined outgoing signal comprising outgoing signals of multiple wavelengths from one or more other devices. The device628may be an optical device, electrical device, and/or electro-optical device. One or more transponders, receivers, transceivers, and/or other optical-electrical and electrical-optical devices may be employed to communicate with the device628. The add-drop group device602may drop and add signals of a plurality of wavelengths (such as but not limited to λx) and may communicate individual wavelength signals with a plurality of devices (such as but not limited to device628). The add-drop group device602transmits the combined signal comprising multiple channels assigned to the add-drop group device602to one or more direction devices610-616. Although the OADM node608is illustrated as a logical device, the components of the OADM node608may be deployed separately. Add-drop group devices602-606are often physically deployed separately and independently from direction devices610-616. 
For example, one or more add-drop group devices602-606may be located in different slots of the same optical network device shelf as one or more direction devices610-616, one or more different shelves of the same network device rack, one or more different locations at the same site, and/or remotely from a site comprising one or more direction devices610-616. In some embodiments, one or more add-drop group devices602-606are located close to a location of one or more end-users. In some embodiments, one or more optical links between an add-drop group device602-606and a direction device610-616go through one or more optical cabling systems, such as but not limited to one or more optical patch panels and/or optical shuffle boxes. The direction devices610-616may be well equipped with powered electrical elements, such as light sources (such as light-emitting diodes, laser diodes, and/or other light sources) and/or optical channel monitors (OCMs). Furthermore, the direction devices610-616may be closely linked to a powered optical network device and/or network controllers, making their optical connectivity simpler to identify and/or monitor during provisioning and/or operation. In contrast, one or more add-drop group devices602-606may have complex connection paths to the direction devices610-616and/or other devices in the OADM node608. Furthermore, one or more add-drop group devices602-606may be passive, having no electrical circuitry and no powered optical element. FIG.7illustrates a direction device and an add-drop group device in an OADM node in an example embodiment. An OADM node700includes one or more direction devices760and one or more add-drop group devices762. The direction device760may transmit and receive signals over one or more optical links718-720with one or more other direction devices (e.g. direction devices610-616). For clarity in explanation, one direction device760and one add-drop group device762are described in greater detail hereinafter; one or more described features may apply to one or more other direction devices and/or add-drop group devices within the OADM node700. In some embodiments, one or more direction devices760are source optical devices (e.g. source optical device102,202,402,502) that include one or more identification blocks and/or one or more monitor blocks. In some embodiments, one or more add-drop group devices762are remote optical devices (e.g. remote optical device108,208,408,508) that include one or more remote identification blocks and/or one or more remote monitor blocks. Specific examples are described inFIGS.8-9without limiting the disclosure to the example embodiments. A direction device760may include a DEMUX element704to separate signals so that a particular sub-band assigned to a particular add-drop group device762can be directed to the particular add-drop group device762. The DEMUX element704separates service signals of a set of wavelengths λ{service}received over communication link720into one or more signal subsets and transmits the separated signals to one or more corresponding add-drop group devices762over one or more optical links722-726. For example, signals of wavelengths λ{i}are directed from DEMUX element704to add-drop group device762over communication link722; signals of wavelengths λ{j}are directed from DEMUX element704to another add-drop group device over communication link724; and signals of wavelengths λ{k}are directed from DEMUX element704to another add-drop group device over communication link726. 
The direction device760may include a MUX element702to combine signals from one or more add-drop group devices (e.g. add-drop group devices602-606) so that the combined signals can be transmitted to one or more direction devices (e.g. direction devices610-616) over one or more communication links718-720. For example, the MUX element702may combine returned λ{i}signals from add-drop group device762over communication link722; returned λ{j}signals from another add-drop group device over communication link724; and returned λ{k}signals from another add-drop group device over communication link726. A direction device760may include one or more powered electrical and/or optical elements, such as a pre-amplifier708, an optical channel monitor710, a booster amplifier706, a photodiode712, an optical supervisory channel714, a variable optical attenuator, a light source, a power source, electronic circuitry, a processor, and/or other elements, including powered elements, that can be used by an ID block (e.g. ID block114,214) and/or a monitor block (e.g. monitor block118,418,518) in one or more embodiments. In the add-drop group device762, the DEMUX element736separates signals based on wavelength and directs the separated signals to a plurality of single-wavelength optical links728-732. Each single-wavelength optical link728-732may carry signals of a particular wavelength (e.g. λa, λb, λc) between the add-drop group device762and a device (e.g. device628). The MUX element734combines returned signals received over the single-wavelength optical links728-732so that the combined returned signals can be transmitted to the direction device760over communication link722. An add-drop group device762may be connected to one or multiple direction devices760. For example, add-drop group device762may be connected to one or more other direction devices (e.g. direction devices610-616) by one or more optical links752-754. Add-drop group device762may also receive λ{i}signals from other direction devices over optical links752-754. In some embodiments, signals from two or more direction devices may be directed to the MUX element734and the DEMUX element736in an add-drop group device762. Alternatively and/or in addition, each direction device may have its own MUX element734and DEMUX element736. For example, the combined signals may also be transmitted from MUX element734to one or more other direction devices over communication links752-754. FIG.8illustrates a direction device and an add-drop group device in an OADM node that implements an ID mechanism in an example embodiment. An OADM node800includes one or more direction devices860and one or more add-drop group devices862. A direction device860may transmit and receive signals over one or more optical links818-820to/from one or more other direction devices (e.g. direction devices610-616). For clarity in explanation, one direction device860and one add-drop group device862are described in greater detail hereinafter; one or more described features may apply to one or more other direction devices and/or add-drop group devices within the OADM node800. In some embodiments, the OADM node800, one or more direction devices860and/or one or more add-drop group devices862include one or more elements described with respect to one or more other embodiments described herein. The direction device860includes one or more ID block components, such as a light source850upstream of DEMUX element804and/or a light source856downstream of DEMUX element804. 
The DEMUX element804separates service signals of a set of wavelengths λ{service}received over communication link820into one or more signal subsets and transmits the separated signals to one or more corresponding add-drop group devices862over one or more optical links822-826. In the add-drop group device862, the DEMUX element836separates signals based on wavelength and directs the separated signals to a plurality of single-wavelength optical links828-832, which may couple the add-drop group device862to one or more devices. The MUX element834combines returned signals received over the single-wavelength optical links828-832so that the combined returned signals can be transmitted to the direction device860over optical link822. The ID signals are added to a transmission path of service signals transmitted from the direction device860to the add-drop group device862. The add-drop group device862includes one or more remote ID block components, such as elements866-870. For example, element866may direct the ID signals into a bypass path that includes a set of WDM filters868, and element870may direct the ID signals into a receiving path for returned service signals from the add-drop group device862to the direction device860. In the direction device860, the returned ID signals are directed to an optical channel monitor (OCM)810. The OCM810measures properties of returned ID signals, such as the wavelength of a particular returned ID signal. The OCM810allows the direction device860to determine which wavelength(s) of the set of ID wavelengths λ{ID}have been blocked or passed, allowing identification of a corresponding optical link822. In some embodiments, the OCM810receives the returned ID signals after a MUX element802combines received signals from one or more add-drop group devices862over one or more optical links822-826. An add-drop group device862may be connected to one or multiple direction devices860. For example, add-drop group device862may be connected to one or more other direction devices (e.g. direction devices610-616) by one or more optical links852-854. The add-drop group device862may also receive ID signals and/or service signals from other direction devices over optical links852-854. In the add-drop group device862, the bypass path with the set of WDM filters may be present in each connection path between each direction device and each add-drop group device. For example, ID signals from optical links852-854may pass through elements866-870, or may pass through one or more similar sets of elements. Furthermore, the bypass path/s for each direction device of the OADM node800may be present in one or more other add-drop group devices of the OADM node800. In some embodiments, the direction device860includes one or more microprocessors (e.g. microprocessor150) that execute one or more control instructions to carry out one or more identification control processes as described herein. In some embodiments, the add-drop group device862is a passive optical device that includes only passive optical elements. FIG.9illustrates a direction device and an add-drop group device in an OADM node that implements a monitor mechanism in an example embodiment. An OADM node900includes one or more direction devices960and one or more add-drop group devices962. A direction device960may transmit and receive signals over one or more optical links918-920to/from one or more other direction devices (e.g. direction devices610-616). 
For clarity in explanation, one direction device960and one add-drop group device962are described in greater detail hereinafter; one or more described features may apply to one or more other direction devices and/or add-drop group devices within the OADM node900. In some embodiments, the OADM node900, one or more direction devices960, and/or one or more add-drop group devices962include one or more elements described with respect to one or more other embodiments described herein. The direction device960includes one or more monitor block components, such as a light source940for generating monitor signals of a reference wavelength λr. A MUX element942may add the λrmonitor signals to one or more service signals, such as λ{i}service signals having wavelengths in a set of wavelengths assigned to a particular add-drop group device962. The added monitor signals are transmitted from the direction device960to the add-drop group device962over optical link922. A same or similar mechanism may add reference signals to service signals transmitted to one or more other add-drop group devices over one or more other optical links924-926. The add-drop group device962includes one or more remote monitor block components, such as elements944-946. For example, element944may direct the monitor signals into a bypass path, such as by using a DEMUX element944to drop the λrmonitor signals from a transmission path from the direction device960and a MUX element946to add the λrmonitor signals to a receiving path to the direction device960. The remaining service signal is processed by the add-drop group device962, such as by DEMUX element936and MUX element934to separate signals transmitted to optical links928-932and combine signals received from optical links928-932. At the direction device960, the returned λrmonitor signals are evaluated. For example, a DEMUX element948may separate λrmonitor signals from the receiving path and direct them to a photodetector950. The photodetector950evaluates the returned monitor signal from the add-drop group device962. For example, the photodetector950may be used to detect a power of the returned monitor signal, such as to determine an optical loss over optical link922. An add-drop group device962may be connected to one or multiple direction devices960. For example, add-drop group device962may be connected to one or more other direction devices (e.g. direction devices610-616) by one or more optical links952-954. The add-drop group device962may also receive monitor signals and/or service signals from other direction devices over optical links952-954. In the add-drop group device962, the bypass path may be present in each connection path between each direction device and each add-drop group device. For example, monitor signals from optical links952-954may pass through elements944-946, or may pass through one or more similar sets of elements. Furthermore, the bypass path/s for each direction device of the OADM node900may be present in one or more other add-drop group devices of the OADM node900. In some embodiments, the direction device960includes one or more microprocessors (e.g. microprocessor150) that execute one or more control instructions to carry out one or more monitor control processes as described herein. In some embodiments, the add-drop group device962is a passive optical device that includes only passive optical elements. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. 
The sole and exclusive indicator of the scope of the disclosure, and what is intended by the applicants to be the scope of the disclosure, is the literal and equivalent scope of the set of claims that issue from this application, in the specific form in which such claims issue, including any subsequent correction. In the foregoing specification, embodiments are described with reference to specific details that may vary from implementation to implementation. Nevertheless, it will be understood that various modifications may be made without departing from the spirit and scope of the disclosure. The examples set forth above are provided to those of ordinary skill in the art as a complete disclosure and description of how to make and use the embodiments, and are not intended to limit the scope of what the inventor/inventors regard as their disclosure. Modifications of the above-described modes for carrying out the methods and systems herein disclosed that are obvious to persons of skill in the art are intended to be within the scope of the present disclosure and the following claims.
57,018
11863296
DETAILED DESCRIPTION A semiconductor optical amplifier (SOA) is among the few means of amplifying optical signals in telecommunication systems. The miniature size of the waveguide structure, which potentially allows a high level of integration with flexibility in designing amplification in the desired bandwidth, makes the SOA attractive for optical communication systems. Long haul optical transmission using SOAs to compensate for the loss of the fiber spans has revealed a potential to transmit WDM signals over distances that approach a transoceanic scale. One of the drawbacks of SOAs that limits the transmission distance is polarization dependent gain (PDG). SOAs can be designed with polarization independent gain, which is also referred to as “polarization diversity gain.” It turns out that the low PDG of a typical SOA with polarization diversity is achieved over a bandwidth that is smaller than the potentially wide gain bandwidth supported by the semiconductor structures. A broad amplification transmission bandwidth can be achieved by using single polarization SOAs and polarization multiplexing. That configuration, however, adds complexity due to the use of polarization splitters and combiners that limit the polarization extinction ratio of the optical signals, and the level of potential integration is reduced. The disclosed examples describe an alternative approach to increasing bandwidth using SOAs with polarization independent gain. The disclosed amplifier stages described herein may include wave division multiplexers, semiconductor optical amplifiers and wave division demultiplexers that amplify optical signals. An input optical signal having a first bandwidth is partitioned into a plurality of subband optical signals by thin film filters tuned to a selected bandwidth that is less than the first bandwidth. Each of the plurality of subband optical signals has a bandwidth that is a portion of the first bandwidth. Each subband optical signal is input into a semiconductor optical amplifier that is tuned to the respective portion of the first bandwidth that corresponds to the subband optical signal. The combination of the partitioned input optical signal and tuned semiconductor optical amplifiers provides improved optical signal transmission performance by reducing polarization dependent gain. In contrast to the amplification system ofFIG.1,FIG.2Aillustrates an example of an aspect of the subject matter in which bandwidth is extended and polarization dependent gain is minimized, enabling two bandwidths to be amplified. SOAs are utilized in this instance as they are configured to amplify only the input optical signal. The output of two or more SOAs may be combined using a wavelength division multiplexer (WDM). In the amplifier stage200a, the two types of SOAs204and206are combined with wave division demultiplexer202and wave division multiplexer208. For example, in the amplifier stage200a, the input signal214may be a multi-channel optical signal that extends over a set bandwidth. The wave division demultiplexer202may be operable to demultiplex the multi-channel optical signal into respective channels that are input respectively into SOAs204and206for amplification. 
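To fix ideas before the tunable SOA parameters are discussed, the following toy model sketches the demultiplex-amplify-multiplex flow ofFIG.2Ain Python. It is purely illustrative and not part of the disclosure: the BandSOA class, the band edges, and the gain values are assumptions. Each channel is routed to the SOA tuned to its sub-band, amplified, and the amplified channels are recombined into one output:

from dataclasses import dataclass

@dataclass
class BandSOA:
    lo_nm: float        # lower edge of the sub-band this SOA is tuned to
    hi_nm: float        # upper edge of the sub-band
    gain_db: float      # assumed flat gain within the sub-band
    def covers(self, wl_nm: float) -> bool:
        return self.lo_nm <= wl_nm < self.hi_nm

def amplify(channels: dict, soas: list) -> dict:
    """Demultiplex channels to the SOA tuned to each sub-band, apply its
    gain, and multiplex the amplified channels back into one signal."""
    out = {}
    for wl_nm, power_dbm in channels.items():
        soa = next((s for s in soas if s.covers(wl_nm)), None)
        if soa is None:
            raise ValueError(f"no SOA is tuned to {wl_nm} nm")
        out[wl_nm] = power_dbm + soa.gain_db
    return out

# Two sub-band SOAs modeled loosely after the FIG. 2A arrangement
# (band edges and gains are assumed, not taken from the disclosure):
soas = [BandSOA(1530.0, 1565.0, 17.0),   # roughly C-band coverage
        BandSOA(1565.0, 1610.0, 16.5)]   # shifted, roughly L-band coverage
print(amplify({1550.1: -10.0, 1590.3: -10.0}, soas))

A channel falling outside every sub-band raises an error, mirroring the requirement that the demultiplexer's sub-bands jointly cover the set bandwidth of the input signal214.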
The parameters of the respective SOAs204and206that may be tuned include a maximum PDG (i.e., PDGmax) and a maximum transmission performance penalty resulting from PDGmaxthat is allowed by the transmission system design, a maximum operating bandwidth where PDGmaxcan be satisfied, and/or a minimum gain in the operating bandwidth where PDGmaxcan be achieved, or the like. By optimizing one or more of the parameters of the SOA, amplification in the transverse electric (TE) mode may be increased while the signal gain in the transverse magnetic (TM) mode may be reduced. An accurate match between gains in the TE and TM modes in an SOA may result in zero polarization dependent gain (PDG), or at least substantially zero PDG, and essentially complete polarization diversity. Although it is possible to achieve a difference between the TE and TM mode gains of less than 0.1 dB and a resulting PDG<0.1 dB, maintaining the difference between the TE and TM modes at that level and the PDG at less than 0.1 dB over an extended bandwidth, such as the entire range of the C-band, may be challenging. Note that the C-band is widely used in optical communications. An SOA, such as204, may be selected having an amplification bandwidth with polarization diversity gain and sufficiently low PDG for a chosen transmission distance. In addition, according to various non-limiting embodiments, an SOA may be designed to provide amplification and also minimize PDG within a limited optical bandwidth, such as 15, 20, 30, 40, 80 nanometers (nm), or other suitable bandwidth. The anticipated maximum PDG within the bandwidth, for example, may be between 0.3 and 0.5 dB between the two polarizations. In addition, by designing the gain allocation of the SOA, the low PDG bandwidth can be achieved over different sections of the transmission spectrum. Design differences among the SOAs enable a first SOA to amplify signals within a first bandwidth, such as a 15 nm bandwidth, with low PDG, and a second SOA to amplify signals within a second 15 nm bandwidth that is shifted in the electromagnetic spectrum from the first bandwidth, also with low PDG. For example, one type of SOA design can provide amplification over the C-band. The gain characteristics of another SOA can be shifted to the L-band and designed to provide similar amplification and similarly low PDG over the L-band. The WDMs202and208may be based on fiber fused devices or thin film (TF)-based couplers in which the TF-based couplers allow integration using micro assemblies. Due to the nature of the SOAs204and206, the transmission bandwidth can be extended beyond the conventional C+L range used in commercial telecommunication systems. The amplified output from each respective SOA204and206may be combined by wave division multiplexer208as an output signal216that combines the respective bands of each channel and is an amplified version of the input signal214for output from the amplifier stage200a. To obtain a greater transmission bandwidth (e.g., 80 nm), the transmission signal may be divided into different bandwidths (e.g., 20 nm or 40 nm) and a portion of the divided signal bandwidth may be directed to an SOA specifically designed to amplify that portion of the signal bandwidth and another portion of the divided signal bandwidth may be directed to another SOA specifically designed to amplify the other portion. 
For example, the transmission bandwidth may be set (e.g., 80 nm in the spectrum of 1530 nm to 1610 nm, where 1530-1565 nm corresponds to the C band and 1565-1610 nm corresponds to the L band, both of which are used in telecommunications) and be subdivided utilizing WDMs, such as202. FIG.2Billustrates an example of an aspect of the subject matter in which bandwidth is extended and polarization dependent gain is minimized, enabling multiple bandwidths to be combined. Similarly, three or more types of SOAs, such as SOA 1, SOA 2 . . . SOA n, may be designed to amplify WDM signals over three or more parts of the signal spectrum that may be combined to form a continuous transmission bandwidth, with a focus on minimizing polarization dependent gain and/or extending the transmission bandwidth. For example, in the amplifier stage200b, the several types of SOAs, such as SOAs 1 and 2-n, are combined with wave division demultiplexer210and wave division multiplexer212. The input signal218to amplifier stage200bmay be a multi-channel optical signal that extends over a set bandwidth, e.g., 1530-1565 nm (which corresponds to the C band), 1565-1625 nm (which corresponds to the L band), 1460-1530 nm (which corresponds to the S band), 1360-1460 nm (which corresponds to the E band), combinations of the respective C, L, S and E bands, or the like. For example, a multi-channel optical signal may be in the range of 1530-1625 nm and encompass all or portions of the C and L bands. The wave division demultiplexer210may be operable to demultiplex the multi-channel optical signal into respective channels that are input respectively into SOA 1 and SOA 2-SOA n for amplification. The amplified signal output from each of the respective SOAs 1 and 2-n may be combined by wavelength division multiplexer212. Due to the nature of the SOAs 1 through n being configurable to amplify optical signals having different wavelengths with different polarizations, the respective optical signals may be amplified differently by each respective SOA 1 through n, which enables the transmission bandwidth of the optical signals to be extended beyond the conventional C+L band range used in commercial telecommunication systems. FIG.3illustrates an example of a multichannel amplifier stage that utilizes multiple three-port WDM combiners and dividers in combination with semiconductor optical amplifiers. The optical signal amplifier stage300may include an input wavelength division demultiplexing section326, an amplifier section328and an output wavelength division multiplexing section330. The input wavelength division demultiplexing section326may include a first wavelength division demultiplexer302and a number of second wavelength division demultiplexers, such as304and306. The first wavelength division demultiplexer302may be tuned to a first bandwidth. In this example, the optical signal322may be 80 nm, which may be the first bandwidth. The 80 nm bandwidth may include a number of separate bands. Each respective second wavelength division demultiplexer304and306of the number of second wavelength division demultiplexers may be tuned to one or more of the plurality of bands of the optical spectrum encompassed in the 80 nm bandwidth. The amplifier section328may include a number of amplifiers, such as semiconductor optical amplifiers308,310,312and314. 
Each respective semiconductor optical amplifier308,310,312and314of the number of semiconductor optical amplifiers may include an input coupled to a respective second wavelength division demultiplexer of the input wavelength division demultiplexing section. Each respective semiconductor optical amplifier308,310,312and314may be configured to amplify the respective portion of the first bandwidth output from the respective second wavelength division demultiplexer. For example, the first wavelength division demultiplexer302may divide the first bandwidth, which inFIG.3is 80 nm, into two signals having bandwidths of 40 nm, output a first of the 40 nm bandwidth signals to second wavelength division demultiplexer304, and output a second of the 40 nm bandwidth signals to second wavelength division demultiplexer306. The second wavelength division demultiplexer304and the second wavelength division demultiplexer306may each further divide the 40 nm input signal into two 20 nm bandwidth signals. The two 20 nm bandwidth signals may be output from the second wavelength division demultiplexer304and input into the amplifier section328. A first of the two 20 nm bandwidth signals may be input to the SOA308and a second of the two 20 nm bandwidth signals may be input to the SOA310. Similarly, the two 20 nm bandwidth signals output from the second wavelength division demultiplexer306may be input into the amplifier section328. A first of the two 20 nm bandwidth signals output from second wavelength division demultiplexer306may be input to the SOA312and a second of the two 20 nm bandwidth signals may be input to the SOA314. Each respective SOA308,310,312and314may amplify respective portions of the first bandwidth with a specified gain that also has low polarization dependent gain and output an amplified optical signal. In this example, the amplified optical signal output from each of the respective SOAs308,310,312and314is a 20 nm bandwidth signal. Each respective SOA of SOAs308,310,312and314may be tuned (e.g., during fabrication) to a range of wavelengths that provide a specified gain in that range of wavelengths while maintaining a low polarization dependent gain. Tuning of the SOAs308-314may, for example, be based on a transmission system cost function that takes into account the use of SOAs in the transmission system. Of course, other tuning methods may be used. Outputs from the amplifier section328may be applied to the output wavelength division multiplexing section330. The output wavelength division multiplexing section330may include a number of secondary output wavelength division multiplexers and a primary output wavelength division multiplexer. Each of the respective secondary output wavelength division multiplexers, such as secondary output wavelength division multiplexer316and secondary output wavelength division multiplexer318, may individually be referred to as a secondary output WDM or a secondary WDM combiner. The primary output wavelength division multiplexer, such as320, may also be referred to as a primary output WDM or a primary WDM combiner. In theFIG.3example, the respective outputs of SOA308and SOA310are coupled to a secondary WDM combiner316of the number of secondary output wavelength division multiplexers, which in this case, is a pair of secondary WDM combiners. The respective outputs of SOA312and SOA314may be coupled to a secondary WDM combiner318of the pair of WDM combiners. 
The 20 nm bandwidth signals output from each respective SOA308and SOA310may be combined by the secondary WDM combiner316to form an amplified 40 nm bandwidth signal. Similarly, the 20 nm bandwidth signals output from each respective SOA312and SOA314may be combined by the secondary WDM combiner318to form an amplified 40 nm bandwidth signal. Each of the respective secondary WDM combiners316and318outputs the 40 nm bandwidth signal to the primary output wavelength division multiplexer320(also referred to as the primary WDM combiner320). The primary WDM combiner320is operable to combine the amplified 40 nm bandwidth signals received from the respective secondary WDM combiners316and318and to output an amplified optical signal324. The SOAs308and310may be designed to provide equal, or substantially equal, amplification to each signal. The amplified optical signal324has a bandwidth equal to the first bandwidth, which is 80 nm. It is noted that whileFIG.3shows a specific implementation of demultiplexing and multiplexing an optical signal having a given bandwidth (in this case 80 nm), where the bandwidths of the different demultiplexed signals and multiplexed signals are equal to one another at each stage (80 nm, 40 nm, 20 nm, 40 nm, 80 nm), in various embodiments the different signals need not have the same bandwidth at each stage. Thus, an 80 nm bandwidth signal may be demultiplexed by the first wavelength division demultiplexer302into a 50 nm bandwidth signal and a 30 nm bandwidth signal. Likewise, the second wavelength division demultiplexers, such as304and306, may demultiplex the 50 nm bandwidth signal and the 30 nm bandwidth signal, respectively, according to any suitable scheme. For example, the second wavelength division demultiplexer304may partition the 50 nm bandwidth signal into a 30 nm bandwidth signal and a 20 nm bandwidth signal, while the second wavelength division demultiplexer306may partition the 30 nm bandwidth signal into an 18 nm bandwidth signal and a 12 nm bandwidth signal. These non-equal-bandwidth signals may then be multiplexed to generate the outgoing optical signal having, for example, an 80 nm bandwidth, generally as described above with respect to WDM combiners316,318, and320. The embodiments are not limited in this context. FIG.4illustrates an aspect of the subject matter in accordance with an embodiment. The example illustrated inFIG.4is explained based on the use of the C and L bands of the electromagnetic spectrum for ease of discussion. However, in other embodiments, other bands, such as the S band, O band, or overlapping portions of bands of the electromagnetic spectrum may be used and SOAs that accommodate the other bands or portions may be fabricated and used. The C+L amplifier stage400includes at least a C-band amplification section402and an L-band amplification section404. The respective inputs to the C+L amplifier stage400are provided by a WDM divider WDM3athat may include thin film filters (TFF) tuned to a selected bandwidth, such as the C+L band or another bandwidth. For example, WDM3amay transmit wavelengths in part of the C band (as well as some wavelengths that overlap in the L band) and may reflect wavelengths in part of the L band (as well as some wavelengths that overlap in the C band). The WDM3amay be an input wavelength division demultiplexing section having an input operable to receive an optical signal having a plurality of bands. 
In the amplifier stage400, the WDM3aoutputs optical signals substantially in the C band to amplification section402and outputs optical signals substantially in the L band to amplification section404. Respective WDM dividers WDM1aof amplification section402and WDM2aof amplification section404may further divide the inputted signal for amplification. In amplification section402, for example, WDM1amay subdivide the inputted, substantially C band signal into a pair of narrower bandwidth signals for amplification of each respective narrower bandwidth signal by respective SOA 1 and SOA 2. The amplified signals output from respective SOA 1 and SOA 2 may be combined by a WDM combiner WDM1bto provide an amplified signal having a bandwidth substantially equal to, or greater bandwidth than, the bandwidth of the signal input to WDM1a. In amplification section404, for example, WDM2amay subdivide the inputted, substantially L band signal into a pair of narrower bandwidth signals for amplification of each respective narrower bandwidth signal by respective SOA 3 and SOA 4. Similar to amplification section402, the amplified signals output from respective SOA 3 and SOA 4 may be combined by a WDM combiner WDM2bto provide an amplified signal having substantially equal or greater bandwidth than the signal input to WDM2a. The WDM3bmay be an output wavelength division multiplexing section that is operable to output an amplified optical signal of a first bandwidth. The amplified signals output from the respective amplification sections402and404are combined by another WDM combiner WDM3bto provide an amplified signal having a bandwidth substantially equal to, or greater bandwidth than, the bandwidth of the signal input to WDM3a. The example ofFIG.4provides an amplifier stage that enables increased bandwidth (due to allowing less separation between bands) and PDG minimization by using WDM dividers and WDM couplers based on TFFs. For example, a TFF may be designed based on a bandwidth selected for amplification. Alternatively, or in addition, one or more TFFs may be designed for use in the amplification based on the band or channel transitions within the bandwidth selected for amplification. The WDM division and combining (or multiplexing) uses alternating TFF properties: if WDM division uses transmissive properties of TFF, then WDM combining may be done using reflective properties of TFF. Alternatively, if WDM division uses reflective properties of TFF, then WDM combining may be done using transmissive properties of TFF. The alternating of transmissions and reflections by the combiner enables a maximum and equal extinction ratio between bands, such as the C band and the L band in this example. The alternation of the transmissions and reflections by the combiner is used because the reflective and transmissive characteristics of a TFF do not provide the same extinction of the adjacent band. For example, the C+L band signal at the input of the C- and L-band WDM will experience minimum insertion loss for the C-band and high rejection of the L-band for the signal passing to the output port. Typical rejection may exceed 40 dB for the entire L-band, which is sufficiently high for penalty-free operation. At the same time, the signal reflected to the other output port will have minimum insertion loss for the L-band but lower rejection for the C-band. Typical rejection of the C-band may be less than 20 dB. 
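To put rough numbers on this asymmetry, the short Python sketch below (a hypothetical illustration using the typical 40 dB and 20 dB figures quoted above; it ignores amplifier gain between the stages) adds the per-stage rejections in dB to compare a chain that reuses the low-rejection reflective port with one that alternates reflective and transmissive ports, a comparison the discussion continues below:

def total_adjacent_band_rejection_db(stage_rejections_db):
    """Cascaded filter rejections multiply linearly, so they add in dB."""
    return sum(stage_rejections_db)

# Adjacent-band suppression seen by the worse-off band through a
# divide-then-combine chain:
same_type   = total_adjacent_band_rejection_db([20.0, 20.0])  # reflective twice
alternating = total_adjacent_band_rejection_db([20.0, 40.0])  # reflect, transmit
print(same_type, alternating)   # 40.0 dB versus 60.0 dB total suppression

Under these assumed figures, reusing the same port type leaves the band that sees the reflective port twice with only about 40 dB of suppression, whereas alternating the transmissive and reflective roles gives each band roughly 60 dB, which is the motivation for the arrangement described next.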
If the same type of WDM is used to combine the amplified C- and L-band signals back into one path, repetitive use of that arrangement will result in consistently lower rejection of L-band signals in the C-band amplification section. Due to the broadband amplification nature of SOAs, an interference penalty will occur because of the amplification of signals from adjacent bands and the subsequent combination of all signals. The arrangement of amplifier stage400minimizes cross talk between bands and the interference penalty after transmission. While the example ofFIG.4utilizes the reflective and transmissive properties of TFFs, other WDM combiner/dividers that are based on different principles that do not involve consideration of reflective and transmissive properties of TFFs may be used. FIG.5illustrates an example of a semiconductor package suitable for implementing one or more of the examples ofFIGS.2-4. The semiconductor package500may be a microassembly, a photonic integrated circuit (PIC) or integrated optical circuit. The PIC may be a device that integrates multiple (at least two) photonic functions and as such is similar to an electronic integrated circuit. An optical input signal504that has a selected optical bandwidth, such as 40, 60 or 80 nm, may be input to the semiconductor package500. The semiconductor package500may include an optical WDM splitter/divider stage510, a number of semiconductor optical amplifiers502and an optical WDM combiner stage520. The optical WDM splitter/divider stage510receives the optical input signal504and, as described above with reference to the earlier examples, is operable to split or divide the optical input signal504for distribution to two or more of the semiconductor optical amplifiers502. The amplified optical signals from the respective two or more semiconductor optical amplifiers502are output to the optical WDM combiner stage520. The optical WDM combiner stage520combines the respective amplified signals output from each of the respective semiconductor optical amplifiers502to form an optical output signal506that has substantially the same bandwidth as the selected bandwidth of the optical input signal504. FIG.6illustrates an example system implementation utilizing semiconductor packages implementing the examples ofFIGS.2-5. The fiber optic transmission system600may include receiver602and transmitter604connected on one side of a fiber optic transmission line610. The receiver602is operable to receive optical signals from the transmitter606located on the other side of the fiber optic transmission line610. The transmitter604is operable to transmit optical signals to the receiver608also located on the other side of the fiber optic transmission line610. The fiber optic transmission line610may have a number of spans, such as Span 1A to Span NA and Span 1B to Span NB. Between each respective span may be a repeater, such as repeater 1, repeater 2 to repeater N, where N may be nearly any integer. The number of repeaters in the fiber optic transmission line610depends upon the length of the fiber optic transmission line610. Each of the repeaters 1-N may include a semiconductor package612that is similar to semiconductor package500ofFIG.5. The repeaters 1-N may be tuned to the respective bandwidth of the optical signal or optical signals transmitted by the respective transmitters604and606. Herein, novel and unique techniques for improved amplification of optical signals are disclosed. The present disclosure is not to be limited in scope by the specific examples described herein. 
Indeed, various other examples of, and modifications to, the present disclosure, in addition to those described herein, will be apparent to those of ordinary skill in the art from the foregoing description and accompanying drawings. Thus, such other examples and modifications are intended to fall within the scope of the present disclosure. Further, although the present disclosure has been described herein in the context of a particular implementation in a particular environment for a particular purpose, those of ordinary skill in the art will recognize that its usefulness is not limited thereto and that the present disclosure may be beneficially implemented in any number of environments for any number of purposes. Accordingly, the claims set forth below should be construed in view of the full breadth and spirit of the present disclosure as described herein.
24,641
11863297
Unless otherwise indicated, the drawings provided herein are meant to illustrate features of embodiments of this disclosure. These features are believed to be applicable in a wide variety of systems including one or more embodiments of this disclosure. As such, the drawings are not meant to include all conventional features known by those of ordinary skill in the art to be required for the practice of the embodiments disclosed herein. DETAILED DESCRIPTION In the following specification and claims, reference will be made to a number of terms, which shall be defined to have the following meanings. The singular forms “a,” “an,” and “the” include plural references unless the context clearly dictates otherwise. “Optional” or “optionally” means that the subsequently described event or circumstance may or may not occur, and that the description includes instances where the event occurs and instances where it does not. Approximating language, as used herein throughout the specification and claims, may be applied to modify any quantitative representation that could permissibly vary without resulting in a change in the basic function to which it is related. Accordingly, a value modified by a term or terms, such as “about,” “approximately,” and “substantially,” is not to be limited to the precise value specified. In at least some instances, the approximating language may correspond to the precision of an instrument for measuring the value. Here and throughout the specification and claims, range limitations may be combined and/or interchanged; such ranges are identified and include all the sub-ranges contained therein unless context or language indicates otherwise. As used herein, the term “database” may refer to either a body of data, a relational database management system (RDBMS), or to both, and may include a collection of data including hierarchical databases, relational databases, flat file databases, object-relational databases, object-oriented databases, and/or another structured collection of records or data that is stored in a computer system. As used herein, the terms “processor” and “computer” and related terms, e.g., “processing device”, “computing device”, and “controller”, are not limited to just those integrated circuits referred to in the art as a computer, but broadly refer to a microcontroller, a microcomputer, a programmable logic controller (PLC), an application specific integrated circuit (ASIC), and other programmable circuits, and these terms are used interchangeably herein. In the embodiments described herein, memory may include, but is not limited to, a computer-readable medium, such as a random access memory (RAM), and a computer-readable non-volatile medium, such as flash memory. Alternatively, a floppy disk, a compact disc-read only memory (CD-ROM), a magneto-optical disk (MOD), and/or a digital versatile disc (DVD) may also be used. Also, in the embodiments described herein, additional input channels may be, but are not limited to, computer peripherals associated with an operator interface, such as a mouse and a keyboard. Alternatively, other computer peripherals may also be used that may include, for example, but not be limited to, a scanner. Furthermore, in the exemplary embodiment, additional output channels may include, but not be limited to, an operator interface monitor.
Further, as used herein, the terms “software” and “firmware” are interchangeable, and include computer programs stored in memory for execution by personal computers, workstations, clients, and servers. As used herein, the term “non-transitory computer-readable media” is intended to be representative of any tangible computer-based device implemented in any method or technology for short-term and long-term storage of information, such as computer-readable instructions, data structures, program modules and sub-modules, or other data in any device. Therefore, the methods described herein may be encoded as executable instructions embodied in a tangible, non-transitory, computer-readable medium, including, without limitation, a storage device and a memory device. Such instructions, when executed by a processor, cause the processor to perform at least a portion of the methods described herein. Moreover, as used herein, the term “non-transitory computer-readable media” includes all tangible, computer-readable media, including, without limitation, non-transitory computer storage devices, including, without limitation, volatile and nonvolatile media, and removable and non-removable media such as firmware, physical and virtual storage, CD-ROMs, DVDs, and any other digital source such as a network or the Internet, as well as yet to be developed digital means, with the sole exception being a transitory, propagating signal. Furthermore, as used herein, the term “real-time” refers to at least one of the time of occurrence of the associated events, the time of measurement and collection of predetermined data, the time for a computing device (e.g., a processor) to process the data, and the time of a system response to the events and the environment. In the embodiments described herein, these activities and events occur substantially instantaneously. As used herein, unless specified to the contrary, “modem termination system,” or “MTS,” may refer to one or more of a cable modem termination system (CMTS), an optical network terminal (ONT), an optical line terminal (OLT), a network termination unit, a satellite termination unit, and/or other termination devices and systems. Similarly, “modem” may refer to one or more of a cable modem (CM), an optical network unit (ONU), a digital subscriber line (DSL) unit/modem, a satellite modem, etc. As used herein, the term “transceiver,” unless specified otherwise, refers to a P2P coherent optics transceiver, having a coherent optics transmitting portion and a coherent optics receiving portion. In some instances, the transceiver may refer to a specific device under test (DUT) for several of the embodiments described herein. As described herein, a “PON” generally refers to a passive optical network or system having components labeled according to known naming conventions of similar elements that are used in conventional PON systems. For example, an OLT may be implemented at an aggregation point, such as a headend/hub, and multiple ONUs may be disposed and operable at a plurality of end user, customer premises, or subscriber locations. Accordingly, an “uplink transmission” refers to an upstream transmission from an end user to a headend/hub, and a “downlink transmission” refers to a downstream transmission from a headend/hub to the end user, which may be presumed to be generally broadcasting continuously (unless in a power saving mode, or the like).
The embodiments described herein provide innovative access network architectures and processes that are useful for achieving simplified carrier phase recovery (CPR) for polarization multiplexed coherent optics in access network applications. In an exemplary embodiment, the present systems and methods leverage coherent optics technologies, with respect to P2P or P2MP systems and communication links, to significantly improve the cable access network paradigm by reducing the cost, complexity, and power consumption from DSP on a received optical carrier. In an embodiment, a CPR algorithm is implemented in three DSP steps or subprocesses for one or more single polarization signals: (1) a one-tap state-of-polarization (SoP) estimation/polarization demultiplexing step; (2) a training sequence (TS)-based frequency offset estimation (FOE)/compensation step; and (3) a digital filtering step (e.g., using two digital filters) for channel equalization. The output of the estimated carrier phase and noise from one polarization direction (e.g., X-polarization) may then be used for the signals from the other polarization direction (e.g., Y-polarization) and combined with an estimated fixed phase offset rotation between the two polarizations. In another embodiment, the communication network includes a differential coded coherent system, such as polarization multiplexed differential quadrature phase shift keying (PM-DQPSK). In this embodiment, a fixed phase offset between the two polarizations is not required, thereby further reducing the total DSP complexity, which enables a significantly more hardware-efficient coherent optical system for the access network. The following embodiments are described with respect to receivers operating at 100 and 200 Gb/s. However, the person of ordinary skill in the art will appreciate that such operating parameters are described by way of example, and not in a limiting sense. The principles herein are applicable to access networks, PONs, and coherent optics systems operating at different transmission speeds, and particularly as the demand for increased speed and bandwidth continues to grow. The following examples are also described with respect to exemplary fiber links of approximately 50 km. However, the person of ordinary skill in the art will further appreciate that the present techniques support links of up to 80 km, 120 km, or greater in some circumstances. FIG.1depicts a DSP flow100of a receiver processor102. In an exemplary embodiment, receiver processor102is a digital coherent optical receiver and DSP flow100illustrates the DSP functionality for a polarization multiplexed signal with respect to a structural level104and an algorithmic level106of processor102. In an exemplary embodiment, the polarization multiplexed signal may be a dual-polarization (e.g., X/Y) in-phase/quadrature (I/Q) quadrature amplitude modulation (QAM, or PM-QAM) carrier signal. The structural and algorithmic functionality of the coherent optical receiver is described in further detail in co-pending U.S. patent application Ser. No. 16/370,873, filed Mar. 29, 2019, the subject matter of which is incorporated herein by reference. Structural level104may, for example, include one or more of: a first block108for compensation of front-end imperfections; a second block110for channel impairment equalization and compensation of major channel transmission impairments; a third block112for timing and clock recovery; a fourth block114for carrier recovery; and a fifth block116for bit stream recovery.
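In code, the three CPR subprocesses enumerated above might be composed as follows; this is a minimal sketch with assumed function names, signatures, and parameters, not the implementation disclosed here or in the incorporated applications:

```python
import numpy as np

# Hedged sketch of the three CPR subprocesses listed above.

def sop_demux_1tap(ex, ey, h):
    """Step 1: one-tap SoP estimation/polarization demultiplexing, where h is
    a previously estimated 2x2 inverse Jones matrix (see the data-aided
    estimation sketch later in this document)."""
    out = h @ np.vstack([ex, ey])
    return out[0], out[1]

def ts_foe_compensate(sig, ts, symbol_rate):
    """Step 2: TS-based frequency offset estimation/compensation, using the
    phase slope of received symbols against the known training sequence ts."""
    n = len(ts)
    phase = np.unwrap(np.angle(sig[:n] / ts))
    t_train = np.arange(n) / symbol_rate
    f_off = np.polyfit(t_train, phase, 1)[0] / (2 * np.pi)  # offset in Hz
    t = np.arange(len(sig)) / symbol_rate
    return sig * np.exp(-2j * np.pi * f_off * t)

def equalize(sig, taps):
    """Step 3: digital filtering for channel equalization (one filter per
    polarization in the simplified flow)."""
    return np.convolve(sig, taps, mode="same")
```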
Algorithmic level106may, for example, include one or more of: a first module118for deskewing, normalization, and/or orthogonality correction; a second module120for chromatic dispersion (CD) estimation or compensation (e.g., static equalization); a third module122for symbol synchronization; a fourth module124for PMD compensation, residual CD compensation, and/or polarization demultiplexing (e.g., dynamic equalization); a fifth module126for estimation and/or compensation of carrier frequency offset; and a sixth module128for carrier phase estimation (CPE) and/or compensation. In exemplary operation of DSP flow100, four digitized signals130(i.e., I and Q components for each X and Y polarization) are passed through first block108(i.e., in digital form, for example, after conversion by an ADC) to compensate for front-end imperfections. Such front-end imperfections may be compensated by one or more correction algorithms of first module118, which may include a deskew algorithm to correct the timing skew between the four channels resulting from the difference in both optical and electrical path lengths within the coherent receiver, normalization and orthogonality correction algorithms, and/or algorithms to compensate for differences between the respective output powers of the four channels (due to different responses of PINs and/or transimpedance amplifiers (TIAs) in the receiver), as well as quadrature imbalances resulting from a particular optical hybrid not exactly introducing a 90-degree phase shift. In further operation of DSP flow100, major channel transmission impairments may be compensated through use of appropriate digital filters of second block110, which may, through second module120, utilize estimation and compensation algorithms to address impairments such as CD and PMD. Second module120may further include algorithms for performing, based on the different time scales of the dynamics of the respective impairments, static equalization for CD compensation, because of the independence of CD from SoP and modulation format, as well as its impact on subsequent blocks of structural level104, before which CD estimation may be needed to achieve accurate compensation. At third block112, clock recovery for symbol synchronization may be processed within structural level104to track the timing information of incoming samples, for example, using third module122. In an embodiment, joint processing between third block112and fourth module124may be performed to achieve symbol synchronization within algorithmic level106after all channel impairments are equalized (e.g., as represented by respective arrows indicated inFIG.1). In at least one embodiment, a fast-adaptive equalization subprocess may be jointly performed for two polarizations within fourth module124through a butterfly structure and stochastic gradient algorithms, such as a constant modulus algorithm (CMA) and variants thereof. Fourth module124may further include one or more additional algorithms for further PMD compensation, residual CD compensation, and/or polarization demultiplexing (e.g., dynamic equalization). At fourth block114, carrier recovery is performed in cooperation with fifth module126, which may include one or more algorithms to perform carrier frequency offset estimation or compensation. In an embodiment, fifth module126may further include algorithms configured to estimate, and then remove, the frequency offset between a source laser (not shown inFIG.1) and a local oscillator (LO), to prevent the constellation rotation at the intradyne frequency.
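The butterfly CMA update mentioned above may be sketched, for a single symbol step, as follows; the tap arrays, step size mu, and constant modulus r2 are illustrative assumptions, not the receiver's actual algorithm:

```python
import numpy as np

# Hedged per-symbol sketch of a butterfly CMA update. x_blk and y_blk are
# tap-length windows of received samples; r2 = 1.0 for normalized QPSK.

def cma_butterfly_step(x_blk, y_blk, h, mu=1e-3, r2=1.0):
    hxx, hxy, hyx, hyy = h
    x_out = hxx @ x_blk + hxy @ y_blk      # butterfly filter outputs
    y_out = hyx @ x_blk + hyy @ y_blk
    ex = (r2 - abs(x_out) ** 2) * x_out    # CMA error terms
    ey = (r2 - abs(y_out) ** 2) * y_out
    hxx += mu * ex * np.conj(x_blk)        # stochastic gradient updates
    hxy += mu * ex * np.conj(y_blk)
    hyx += mu * ey * np.conj(x_blk)
    hyy += mu * ey * np.conj(y_blk)
    return x_out, y_out, (hxx, hxy, hyx, hyy)
```

Because the error terms feed back into the tap updates, this structure converges gradually, which is the convergence-time drawback revisited in the burst-mode discussion later in this document.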
Within sixth module128, algorithms may be configured such that the carrier phase noise may be estimated and removed from the modulated signal, and may further include algorithms for symbol estimation and hard- or soft-decision forward error correction (FEC) for channel decoding. At fifth block116, the final bit streams may be recovered at both structural level104and algorithmic level106. It may be noted that, for a particular digital coherent receiver, the ordering of blocks and modules for DSP flow100may, according to design choices at the receiver, differ from the order described above. For example, instead of, or in addition to, a feed-forward process, joint processing and feedback among different process blocks may be performed, including without limitation, clock recovery and polarization demultiplexing. In some embodiments, a coherent receiver may include fewer, or additional, blocks and/or modules than those described herein. For example, an alternative algorithmic level architecture is described below with respect toFIG.2. In other embodiments, similar functionality may be achieved through use of training sequence-based, data-aided, or blind algorithms, as described further below with respect toFIGS.3-7. Coherent detection and DSP technologies have thus been key factors enabling the development of 100G coherent optical transmission systems. DSP technology has played an even more ubiquitous role, at both the transmitter and receiver, in the development of 200G coherent optical systems, and this trend is expected to continue in the development of further next-generation coherent optical systems. Although specific algorithms may be different for each block or module of the DSP, general functionality at the structural level (e.g., structural level104) or functional abstractions (e.g., algorithmic level106) are expected to be similar for relevant commercial products implementing such technology. FIG.2depicts an exemplary DSP flow200in an algorithmic level202of receiver processor102,FIG.1. In an exemplary embodiment, algorithmic level202replaces algorithmic level106,FIG.1, within receiver processor102. In some embodiments, algorithmic level202may include one or more algorithms, modules, or subprocesses of algorithmic level106in a complementary fashion. In the exemplary embodiment, algorithmic level202may, for example, include one or more of: a first module204for performing SoP estimation and polarization demultiplexing (e.g., 1-tap); a second module206for performing training sequence (TS)-based FOE and compensation; a third module208for performing dynamic channel equalization (e.g., two digital filters); and a fourth module210for performing carrier phase estimation (CPE) and compensation. In exemplary operation of DSP flow200, first module204and second module206are each configured to functionally process all four of digitized signals212for the respective I/Q components of the X/Y polarizations, similar to the various respective modules of algorithmic level106. In the embodiment depicted inFIG.2though, third module208may be configured to functionally process one component212from each polarization (e.g., the YQ and XQ signals212, in this example). The operational functionality of first module204, second module206, and third module208is otherwise described in greater detail in co-pending U.S. application Ser. No. 16/412,104, filed May 15, 2019, the subject matter of which is incorporated by reference herein.
Although similar in functional operation, fourth module210particularly differs from sixth module128,FIG.1, in that whereas sixth module128is configured to perform carrier phase estimation and compensation on all four signals130(i.e., the I/Q components of both X/Y polarizations), fourth module210is configured such that carrier phase estimation/compensation need be performed on only one of the I/Q components of one of the two polarizations (e.g., the YQ signal212, in the example depicted inFIG.2). That is, DSP flow200represents a significantly simplified algorithmic DSP flow in the digital optical coherent receiver for the optical access network, in comparison with algorithmic level106of DSP flow100,FIG.1. Accordingly, the following DSP embodiments are described with particular focus on the innovative simplified DSP techniques of fourth module210, which produces recovered bit streams214for both X and Y polarizations, but through performance of CPE on only one such polarization signal212. According to the innovative embodiments described herein, the complexity of the DSP flow in the receiver processor is advantageously reduced such that the processor need not implement fixed CD compensation. Instead, as illustrated in the embodiment depicted inFIG.2, the accumulated CD in the access network may be alternatively compensated within third module208(i.e., dynamic channel equalization). Moreover, the complexity of DSP flow200is still further reduced, in comparison with DSP flow100or conventional techniques, by separating adaptive polarization demultiplexing from multi-tap PMD compensation, rather than performing both in a single processing block/module; SoP demultiplexing is instead performed in first module204. That is, a single tap is employed for SoP tracking and polarization demultiplexing prior to channel equalization (e.g., at third module208). By separating these two functional blocks/modules within DSP flow200, single polarization equalization may be achieved with two digital finite impulse response (FIR) filters, as opposed to conventional systems that implement a butterfly-based bank configuration with four FIR filters and crossing computation. Thus, in comparison with conventional techniques, systems and methods according to the “simplified” configuration of DSP flow200are capable of reducing the DSP computational complexity by 50% for adaptive equalization functionality. In an exemplary embodiment of DSP flow200, TS-based frequency-offset estimation and compensation may be further achieved (e.g., through implementation of second module206) using a training sequence having an optimized length with respect to the single-polarization signals, or with respect to the average of the dual-polarization signals. Accordingly, after frequency offset correction (e.g., second module206) and channel equalization (e.g., third module208) are accomplished, carrier phase recovery (CPR) may then be achieved at, or by implementation of, fourth module210.
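A minimal sketch of the simplified per-polarization equalization just described follows; the LMS update against a known reference is an illustrative choice (the text above names CMA-style blind updates as the conventional alternative), and the tap count and step size are assumptions:

```python
import numpy as np

# Sketch: after one-tap SoP demultiplexing, each polarization is equalized
# independently by a single complex FIR filter (two filters total, versus
# four in a butterfly bank).

def lms_equalize(rx, ref, n_taps=15, mu=1e-3):
    h = np.zeros(n_taps, dtype=complex)
    h[n_taps // 2] = 1.0                     # center-spike initialization
    out = np.zeros(len(rx) - n_taps, dtype=complex)
    for k in range(len(out)):
        window = rx[k:k + n_taps][::-1]      # most recent sample first
        out[k] = h @ window
        err = ref[k] - out[k]
        h += mu * err * np.conj(window)      # LMS tap update
    return out, h

# One filter per polarization, which is the claimed 50% saving:
# x_eq, _ = lms_equalize(x_pol, x_ref)
# y_eq, _ = lms_equalize(y_pol, y_ref)
```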
In this example, unit304includes a plurality of taps310and a phase estimation module312. Phase estimation module312implements, for example, a Viterbi-Viterbi (VV) CPR algorithm or a blind-phase-search (BPS) algorithm to obtain the phase estimate, φ(t), for the respective single polarization, such that mixer306achieves phase recovery through a function e^{-jφ(t)} based on that phase estimate (i.e., e^{-jφx(t)} for the X-polarization and e^{-jφy(t)} for the Y-polarization). In further operation of conventional CPR process300, dynamic phase noise estimation unit304includes L+1 taps310for L-tap symbols S. The symbols S are used for phase estimation of the center symbol S_{n+L/2}, based on, for example, a 4th-power VV CPR or BPS algorithm. In the case where input signal302is a QPSK signal having four phase states, the received complex symbols of the QPSK signal are first raised to the 4th power to remove modulation, leaving only the phase noise present. Center symbol S_{n+L/2}is then added to N predecessors and successors to average the estimated phase. In conventional CPR process300, because the phase varies over a range of 2π, the estimated phase must be “unwrapped” to provide a continuous and unambiguous phase estimation. After the phase unwrapping, estimated phase error compensation is performed with respect to the received complex symbols. Again, and as illustrated in the example depicted inFIG.3, conventional CPR process300requires that the processing for phase estimation is performed independently for each of the X-polarization and the Y-polarization signals. These conventional techniques, therefore, require considerable processing resources for complex dual-polarization signals, which presents particular challenges to the implementation of DSP processing in the developing near-future access network paradigm. An innovative solution to these challenges is described further below with respect toFIG.4. FIG.4is a schematic illustration depicting an exemplary CPR process400for performing carrier phase recovery and compensation on a dual-polarization carrier input signal402by a receiver processor (e.g., processor102,FIG.1). CPR process400is architecturally similar, in some respects, to conventional CPR process300,FIG.3. CPR process400differs, though, from conventional CPR process300in that CPR process400utilizes a single dynamic phase noise estimation unit404for both of the X-polarization and the Y-polarization portions (i.e., input signals402(X) and402(Y), respectively) of input signal402. In an exemplary embodiment, unit404may be similar in structure and functionality to dynamic phase noise estimation unit304,FIG.3, and similarly processes only a single-polarization input signal402(e.g., input signal402(X), in this example). Different from unit304, however, unit404outputs to both of two dynamic mixers406(X) and406(Y) for the two polarizations, respectively. Dynamic mixers406(X) and406(Y) may be otherwise similar to respective mixers306(X) and306(Y),FIG.3. CPR process400further differs from conventional CPR process300in that CPR process400may include, for the other single-polarization lane (e.g., Y-polarization, in this example), a fixed phase rotation estimation unit408and a fixed mixer410configured to receive an output from unit408. More specifically, dynamic mixer406(X) combines single-polarization input signal402(X) with an output of single-polarization dynamic phase noise estimation unit404(e.g., φx(t)-based, in this example).
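The 4th-power VV estimation described above reduces, in sketch form, to the following; the window length L is an assumed parameter, and this is an illustration of the standard VV technique rather than the patent's exact processing:

```python
import numpy as np

# Minimal 4th-power Viterbi-Viterbi sketch for a QPSK input: raise symbols
# to the 4th power to strip modulation, average over the L+1 tap window
# around the center symbol, unwrap, and derotate.

def vv_cpr(symbols, L=16):
    s4 = symbols ** 4                            # remove QPSK modulation
    window = np.ones(L + 1) / (L + 1)
    avg = np.convolve(s4, window, mode="same")   # moving average over taps
    phase = np.unwrap(np.angle(avg)) / 4 - np.pi / 4
    return symbols * np.exp(-1j * phase), phase
```

In the conventional flow of FIG.3, this computation runs twice, once per polarization, which is the duplication that CPR process400 removes.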
Thus, in the example depicted inFIG.4, phase recovery for an X-polarization output signal412(X) is thereby achieved from dynamic mixer406(X) through the function e^{-jφx(t)}. In contrast, dynamic mixer406(Y) combines single-polarization input signal402(Y) with the same φx(t)-based output of the single dynamic phase noise estimation unit404. Since a phase recovery output414of mixer406(Y) is based on the function e^{-jφx(t)}, output414will exhibit rotation with respect to X-polarization output signal412(X). Accordingly, in the example depicted inFIG.4, output414is passed through fixed phase rotation estimation unit408, and the output of unit408(e.g., φy0-based) is then combined with output414at fixed mixer410to achieve phase recovery for a Y-polarization output signal412(Y) through a function e^{-jφy0} relating to unit408. The person of ordinary skill in the art will understand, through comprehension of the present description and illustrations, that either polarization direction may be selected for processing through the single dynamic phase noise estimation unit. Therefore, according to the innovative configuration of CPR process400, a simplified and hardware-efficient DSP flow (e.g., DSP flow200,FIG.2) is accomplished. In an exemplary embodiment, CPR process400is accomplished in two stages: (1) phase noise estimation using only a single polarization direction; and (2) phase recovery for both polarization directions using the same single-polarization-based phase noise estimation. More particularly, phase noise estimation is performed in the first stage at only a single polarization direction, and this single-direction estimate for the first polarization signal is thus also shared with the second polarization signal to accomplish phase recovery for both polarizations in the second stage. In the exemplary embodiment, phase recovery of the second polarization signal may further utilize fixed phase rotation estimation and recovery through implementation of data-aided or blind estimation processes. Thus, according to the present systems and methods, DSP processing for a dual-polarization carrier signal may be effectively accomplished through performance of only one dynamic phase noise estimation processing stage for both polarizations of the dual-polarization signal. Dynamic phase noise estimation processing is time varying, with high computational complexity. The innovative configuration depicted inFIG.4advantageously reduces this computational burden and complexity by approximately half. Whereas the particular example for CPR process400described herein does include an additional fixed phase rotation estimation that is not performed in conventional CPR process300,FIG.3, this fixed phase rotation estimation is considered, in comparison with dynamic phase noise estimation, to be a one-time process having a relatively negligible computation complexity. An exemplary technique for performing fixed phase rotation estimation is described below with respect toFIG.5. FIG.5is a graphical illustration depicting an exemplary fixed phase rotation estimation subprocess500for CPR process400,FIG.4. In an exemplary embodiment, subprocess500may be implemented at fixed phase rotation estimation unit408for the second polarization direction that is not subject to dynamic phase noise estimation (i.e., through unit404).
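In code, the shared-estimate recovery ofFIG.4might be sketched as follows; this is a simplified illustration with assumed variable names, not the patent's implementation:

```python
import numpy as np

# Sketch of FIG. 4: a single dynamic estimate phi_x(t) (e.g., from the VV
# sketch above) derotates both polarizations; the Y lane then receives one
# fixed rotation phi_y0, estimated as in FIG. 5 or FIG. 6.

def dual_pol_cpr(x, y, phi_x, phi_y0):
    x_out = x * np.exp(-1j * phi_x)       # dynamic mixer 406(X)
    y_mid = y * np.exp(-1j * phi_x)       # same estimate reused at 406(Y)
    y_out = y_mid * np.exp(-1j * phi_y0)  # fixed mixer 410
    return x_out, y_out
```

Only one time-varying estimate is computed, which is the approximately 50% reduction in dynamic estimation effort described above.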
Thus, according to the exemplary embodiment depicted inFIG.4, because X-polarization input signal402(X) and Y-polarization input signal402(Y) originate from the same carrier, subprocess500is able to advantageously leverage the relationship between these two signal portions to utilize a single dynamic estimation from only one signal polarization to achieve CPR for both signal polarizations. That is, even though the phase of the respective individual signal polarizations may change substantially in relation to one another (e.g., from multiple DSP stages on the individual polarization lanes), fixed phase estimation subprocess500may utilize one or more training sequences502in the second polarization signal (i.e., signal402(Y), in this example) to achieve a TS-based estimate for phase recovery of the second polarization signal portion. More particularly, and as illustrated in the example depicted inFIG.5, a training sequence, Ts, is inserted into second input signal polarization402(Y) to coincide with first input signal polarization402(X). Thus, a given Ts502may be represented according to [Ts1, Ts2, . . . , TsN], where N represents the training length in the Y-polarization direction. In this manner, a received signal, Rs, at the X-polarization is [Rs1, Rs2, . . . , RsN], and may represent input signal402(X), which may have been subject to FOE and channel equalization (e.g., from second module206and third module208, respectively,FIG.2), and after dynamic phase noise estimation (e.g., by dynamic phase noise estimation unit404,FIG.4). Using these values, subprocess500is able to determine the fixed phase rotation φy0according to: φy0= avg(angle(Rs/Ts))  (Eq. 1) FIG.6is a graphical illustration depicting an alternative fixed phase rotation estimation subprocess600for CPR process400,FIG.4. In an exemplary embodiment, subprocess600may be implemented as an alternative to the implementation of TS-based fixed phase rotation estimation subprocess500,FIG.5, within, or in conjunction with, an alternative embodiment of fixed phase rotation estimation unit408′,FIG.4. In the exemplary embodiment, subprocess600represents a blind phase estimation processing technique useful to determine an estimate of the fixed phase rotation φy0. That is, similar to the innovative technique described above with respect toFIG.5, the fixed phase rotation φy0is still determined to achieve phase recovery for the second of the two single-polarization signals after dynamic phase noise estimation for only the first of the single-polarization input signals. As depicted in the example illustrated inFIG.6, a plurality of received symbols602(i.e., 1-N received symbols602) are fed into and processed by an algorithm of a phase noise estimation unit604, which in turn generates the fixed phase rotation estimation φy0. That is, through this alternative subprocessing technique, the same fixed phase rotation estimate φy0is obtained according to this blind phase estimation approach of subprocess600as is obtained through implementation of training sequence-based subprocess500. Both techniques fully support the simplified DSP flow approach described above with respect toFIGS.2and4. The blind phase estimation approach described with respect toFIG.6is similar, in some respects, to conventional blind phase recovery methods (e.g., BPS, or even VV).
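As a brief aside before the blind variant is detailed, Eq. 1 above reduces to essentially one line of code; note that taking the angle of the complex-domain mean is a common variant of averaging the angles directly, and is an assumption of this sketch:

```python
import numpy as np

# Eq. 1 in code. The text states avg(angle(Rs/Ts)); the angle of the complex
# mean is used here to avoid 2*pi wrapping artifacts in the average.

def fixed_phase_ts(rs, ts):
    """Fixed rotation from received symbols rs in the training slot and the
    known training sequence ts."""
    return np.angle(np.mean(rs / ts))
```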
However, according to the innovative and simplified approach of subprocess600, the sliding window that is necessary in the conventional approaches is no longer needed. Indeed, according to subprocess600, only a one-time phase estimation of the N symbols (R1-RN) is performed, and then the fixed phase rotation estimate φy0may be obtained by averaging these N symbols. FIG.7is a schematic illustration depicting an alternative CPR process700. CPR process700is similar in many respects to CPR process400,FIG.4, and performs carrier phase recovery and compensation on the respective polarizations of a dual-polarization carrier input signal702(i.e.,702(X) and702(Y)) by a receiver processor (e.g., processor102,FIG.1). In the exemplary embodiment depicted inFIG.7, CPR process700represents the implementation of the innovative and simplified algorithmic embodiments described above, but in this example, applied to a dual-polarized signal according to a differential modulation format, such as a DQPSK signal. Similar to CPR process400, CPR process700also implements only a single dynamic phase noise estimation unit704, which may be similar in structure and function to dynamic phase noise estimation unit404,FIG.4. Also similar to unit404, the phase noise estimation φx(t) output from unit704is based only on a single polarization (i.e., the X-polarization signal, in this example) but shared with respective mixers706for both polarizations, that is, mixer706(X) for the X-polarization and mixer706(Y) for the Y-polarization. The phase recovery from both mixers706(X),706(Y) thus also utilizes the same function e^{-jφx(t)} corresponding to the phase noise estimation value from unit704. However, for the exemplary embodiment depicted inFIG.7, because dual-polarization carrier input signal702is a DQPSK signal, to implement the simplified DSP flow techniques described above, CPR process700implements only the dynamic phase noise estimation stage of processing, and may avoid the need for fixed phase rotation recovery for the polarization signals. Accordingly, in this example, CPR process700may employ an individual differential decoding unit708at the output of each mixer706, respectively, to obtain the relevant output polarization signal710. Thus, the person of ordinary skill in the art can see that the complexity of DSP processing may be even further substantially reduced in the case of input carriers utilizing differential modulation formats. FIGS.8A-Bare schematic illustrations depicting exemplary optical network architectures800,802, respectively. More particularly, optical network architecture800illustrates an exemplary implementation of the present DSP embodiments within a P2P configuration, and optical network architecture802illustrates an exemplary implementation of the present DSP embodiments within a P2MP configuration. In an embodiment, P2P optical network architecture800includes a first transceiver804in operable communication with a second transceiver806over an optical communication transport medium808. First transceiver804includes a first transmitter810and a first receiver812, and second transceiver806includes a second receiver814and a second transmitter816. In the exemplary embodiment, first receiver812includes a first DSP unit818, and/or second receiver814includes a second DSP unit820.
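Returning to the estimation and decoding details above, the one-time blind estimate of subprocess600and the differential decoding stage of CPR process700might be sketched as follows; both assume QPSK-type constellations and are illustrative only:

```python
import numpy as np

# Two hedged sketches (not the patent's exact algorithms).

def fixed_phase_blind(r):
    """Subprocess 600: a one-time 4th-power estimate over the N received
    symbols R1..RN, with no sliding window."""
    return np.angle(np.mean(r ** 4)) / 4 - np.pi / 4

def dqpsk_decode(symbols):
    """CPR process 700 back end: differential decoding recovers data from
    symbol-to-symbol phase differences, so any residual fixed rotation common
    to consecutive symbols cancels and no fixed-rotation stage is needed."""
    dphi = np.angle(symbols[1:] * np.conj(symbols[:-1]))
    return np.round(dphi / (np.pi / 2)).astype(int) % 4
```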
In this exemplary P2P configuration, both of first and second receivers812,814may be configured to operate as continuous mode coherent optical receivers, and either or both of first and second DSP units818,820are configured to implement the reduced-complexity DSP flow techniques described above. In contrast, P2MP optical network architecture802includes an upstream hub transceiver822(e.g., at a headend) in operable communication with a plurality (i.e., 1-k) of downstream transceivers824over an optical communication transport medium826. Hub transceiver822includes a downstream transmitter828and an upstream receiver830. In this exemplary P2MP configuration of architecture802, each of downstream transceivers824may therefore include a respective downstream receiver832and an upstream transmitter834. In an exemplary embodiment, one or more of downstream receivers832includes a respective downstream DSP unit836, and upstream receiver830includes an upstream DSP unit838. In an exemplary embodiment, some or all of upstream DSP unit838and downstream DSP units836are configured to implement the reduced-complexity DSP flow techniques described above. In an embodiment, downstream (DS) transmissions from downstream transmitter828to downstream receivers832may be sent as continuous mode coherent optical transmissions, and upstream (US) transmissions from respective upstream transmitters834to upstream receiver830may represent burst mode coherent optical transmissions. FIG.9is a schematic illustration of an exemplary test architecture900for verifying experimental results implementing the receiver processing embodiments herein. More particularly, test architecture900was implemented in a real-world experimental setup to verify the proof of concept for the CPE and DSP flow systems and methods, as well as the several algorithmic blocks/modules thereof, described above. Test architecture900simulated a real-world operation of a coherent optics communication network, and included transmitter end902operably coupled to a receiver end904by an optical communication medium906(e.g., a 50-km single mode fiber (SMF), in this case). Transmitter end902included an arbitrary waveform generator (AWG)908(e.g., including an 80 GSa/s DAC), which generated 25 GBaud polarization multiplexed QPSK and 16QAM signals910. Signals910were modulated using an I/Q modulator912coupled with a laser source914(e.g., a laser diode having a 100 kHz linewidth), and then amplified by amplifier916(e.g., a booster erbium-doped fiber amplifier (EDFA)) for transmission over the 50-km SMF of medium906. At the receiver end904, the power of the transmitted signal was measured after a variable optical attenuator (VOA)918deployed along medium906at an input of receiver end904for coherent detection. The received signal was then amplified by a pre-EDFA920, input to an integrated coherent receiver (ICR)922in operable communication with a local oscillator (LO) source924, sampled by a digital sampling oscilloscope (DSO)926(e.g., also 80 GSa/s), and processed by a Matlab-capable computer (PC)928. That is, in the actual experimental setup of test architecture900, the several reduced-complexity algorithms, described above, for the receiver were implemented to demodulate the transmitted signal through a Matlab offline process employed by PC928. In practical applications, such functionality may be performed within the coherent receiver itself, or by a DSP unit thereof. Results obtained from the experimental setup of test architecture900are described further below with respect toFIGS.10A-11B.
FIGS.10A-Bare graphical illustrations depicting experimental phase estimation measurement plots1000,1002, respectively, obtained according to test architecture900,FIG.9. More particularly, plots1000,1002illustrate phase estimation results for both polarizations of a multi-symbol dual-polarization signal, as well as the comparative differences between the conventional approach (e.g.,FIG.3) and the reduced-complexity/simplified CPR systems and methods described herein (e.g.,FIGS.2,4-7). For example, plot1000illustrates the estimated phase-versus-symbol results according to the conventional technique that requires independent estimation of dynamic phase noise for each of the X- and Y-polarizations individually. As shown in plot1000, an X-polarization phase subplot1004has the same phase evolution, but with a fixed phase offset, as a Y-polarization phase subplot1006. That is, since the independent phase noise from fiber nonlinearity (e.g., from medium906,FIG.9) is considered to be relatively small at the transmission distances associated with the access paradigm, the respective phase noise in the two polarizations exhibits effectively the same behavior, except for the fixed phase rotation. In contrast, plot1002illustrates the estimated phase-versus-symbol results according to the CPR processing techniques described herein for the simplified DSP flow of a receiver processor. More particularly, a first subplot1008(solid line) illustrates the residual phase for one polarization using a dynamic phase estimation result, and a second subplot1010(dotted line) illustrates the results obtained using fixed phase rotation for the other polarization. As can be seen from the graphical illustration depicted inFIG.10B, first and second subplots1008,1010substantially align with one another, thereby demonstrating the particular effectiveness of embodiments according to the present systems and methods. FIGS.11A-Bare graphical illustrations depicting comparative BER performance result plots1100,1102, respectively, obtained according to test architecture900,FIG.9. More particularly, plot1100,FIG.11A, illustrates a comparative BER-versus-symbol length overlay of a first subplot1104utilizing a training sequence-based fixed phase rotation estimation (e.g.,FIG.5) against a second subplot1106utilizing a BPS-based fixed phase rotation estimation (e.g.,FIG.6). As can be seen from the graphical illustration depicted inFIG.11A, first and second subplots1104,1106substantially align with one another. That is, BER performance is similar using either of the TS or blind estimation algorithms (in the condition of fixed receiver power at −38.3 dBm, for the experimental results of this example). As can also be seen from plot1100, a converged result1108for fixed phase rotation estimation is obtained for a training sequence or average window size of 64 symbols. In contrast, plot1102illustrates BER-versus-received optical power comparative overlays1110,1112for a 100G QPSK signal and a 200G 16QAM signal, respectively. More particularly, comparative overlay1110superimposes a first subplot1114depicting the BER performance of the QPSK signal according to conventional CPR techniques (i.e., where both polarizations are independently subject to dynamic phase noise estimation) with a second subplot1116depicting the BER performance of the same QPSK signal according to the simplified CPR techniques described herein.
Similarly, comparative overlay1112superimposes a third subplot1118depicting the BER performance of the 16QAM signal according to the conventional CPR techniques with a fourth subplot1120depicting the BER performance of the same 16QAM signal according to the present simplified CPR techniques. As can be seen from the optical power sensitivity comparisons of plot1102, the innovative reduced-complexity DSP flow techniques of the present embodiments may be effectively implemented for different modulation formats with no significant or observable performance degradation therefrom. The systems and methods described herein are therefore of particularly advantageous use for the access network paradigm, for example, in the cable environment or other telecommunication applications, and may be implemented with respect to 4G, 5G, and 6G networks and related applications, as well as fronthaul, backhaul, and midhaul deployments, and also for both short- and long-haul architectures. Exemplary embodiments of DSP systems and methods for digital and/or optical communication networks are described above in detail. The systems and methods of this disclosure, though, are not limited to only the specific embodiments described herein, but rather, the components and/or steps of their implementation may be utilized independently and separately from other components and/or steps described herein. Data-Aided SoP Estimation and Channel Equalization for Coherent Access Networks As described above, mobile Internet, 5G technology, cloud networking, and video streaming services are presently driving the growth of bandwidth requirements in optical access networks. As a P2MP system, PON technologies have been one of the dominant architectures to meet such high capacity demand for the end users. A high-speed PON, based on a single wavelength having a time-division multiplexing (TDM) mechanism, has been an attractive solution in the field due to its capability of reducing the number of required optical components and associated costs, while also saving wavelength resources. However, the limited sensitivity of such systems has become a critical challenge to support high-speed PONs with high power budgets using direct detection technologies. One such direct detection PON, for example, has a PR 30 power budget, transmits at greater than 50 Gbps per wavelength, and operates over distances greater than 20 km. Coherent detection offers a solution that both enables high-speed data transmission with advanced modulation formats, and also enhances the link power budget due to the increased sensitivity of the coherent receivers. However, implementation of digital coherent technologies into the optical access network paradigm creates new challenges arising from the differences between the access network and present long haul coherent technologies. A first challenge arises from the fact that coherent detection in long haul transmissions requires powerful DSP at the receiver-side of the network to compensate for the channel linear and nonlinear distortions. In the access network (e.g., PON, P2P Ethernet), however, the transmission distance is generally limited, that is, over considerably shorter distances than in the long haul paradigm. In the access network, many distortions, such as CD, PMD, and fiber nonlinearity, may be relatively small, and may often be ignored with little penalty. The long haul coherent detection techniques are considered too costly, in terms of hardware and power budget, to simply drop into the access network as is.
Accordingly, it is necessary to fundamentally redesign the computation complexity of DSP functionality in the access network to reduce both the cost and power consumption in access network applications. A second challenge arises with respect to upstream burst-mode digital coherent detection at the coherent digital receiver. In the long haul transmission paradigm, coherent detection operates in continuous mode, which may, for signal processing, tolerate convergence over significantly longer time durations, or larger latencies. One of the critical DSP functions at the coherent digital receiver, though, is polarization recovery. The embodiments described above estimate the polarization and apply channel equalization to compensate for polarization dependent effects using such techniques as (i) blind estimation algorithms (e.g., 2×2 multi-input/multi-output (MIMO)-based adaptive equalization), (ii) CMA, and/or other known techniques, such as a multi-modulus algorithm (MMA). However, since these algorithms are based on the error signal feedback to update the filter coefficients, each such algorithm requires considerable convergence time. Furthermore, the blind algorithms are prone to sub-optimum convergence and instability, including the possibility of wrong convergence. Therefore, such techniques are not suitable for burst-mode coherent detection in the access network paradigm, particularly in the case of short burst frames. Accordingly, the present embodiments offer an innovative, data-aided technique for performing both state-of-polarization (SoP) estimation and simplified channel equalization. In some embodiments, the present systems and methods are based on a specially designed data unit, in combination with a redesigned corresponding DSP, which separates the polarizations directly in a feedforward manner. The data unit of the present embodiments is uniquely configured to generate special frame structuring for burst-mode signal frames. In some embodiments, by basing the proposed innovation on the feedforward estimation, the convergence time may be greatly reduced, which is a particular advantage in a burst-mode coherent communication system. The present systems and methods achieve still further advantages over conventional techniques through the innovative implementation of data-aided features. That is, although the estimation techniques of the present embodiments may be data-aided, the estimation does not depend on the specific bit information carried by the data itself. Instead, according to the present systems and methods, the estimation may be advantageously based on the relationship between two detected polarization-diversity signals. Accordingly, a data unit that is specially designed according to these principles is not limited to SoP estimation only; such a data unit apparatus is further useful for carrying net bit information, as well as other DSP functions beyond estimation. The following embodiments are therefore of particular use with channel equalization algorithms, while achieving equalization results with significantly reduced complexity and shorter convergence time. In some embodiments, the present techniques may reduce convergence time when implementing a conventional 2×2 channel equalization structure, for example, by initializing the filter taps for the four adaptive equalizers of the 2×2 structure, or by pre-separating the respective polarizations before equalization.
An exemplary embodiment of this principle, implemented with respect to a 2×2 channel equalization structure, is described further below with respect toFIG.15. In an alternative embodiment, a simplified channel equalization structure may utilize only two adaptive equalizers. In this example, after SoP estimation, the two adaptive equalizers may be independently utilized for channel equalization for each polarization of a polarization multiplexed signal, thereby reducing the computation complexity by approximately half, in comparison with non-data-aided techniques. For a single polarization signal, this simplified structure may be even further simplified, since only one adaptive equalizer would be needed according to this technique. An exemplary embodiment of this principle, implemented with respect to a simplified channel equalization structure, is described further below with respect toFIG.17. FIG.12is a schematic illustration depicting a polarization-diversity coherent receiver1200. In the exemplary embodiment depicted inFIG.12, coherent receiver1200is implemented to demonstrate an operational principle of detecting two polarizations with cross coupling. More particularly, in the exemplary embodiment, coherent receiver1200includes a first polarization beam splitter (PBS)1202configured to receive an input polarization multiplexed signal1204(e.g., from an optical transport medium, not shown inFIG.12), and a second PBS1206configured to receive a local oscillator (LO) signal from an LO source1208. In some embodiments, second PBS1206may be a conventional splitter. Coherent receiver1200further includes a first 90 degree optical hybrid1210and a second 90 degree optical hybrid1212. In this example, first 90 degree optical hybrid1210is configured to receive as inputs an X-polarization signal component from first PBS1202and the LO signal from second PBS1206. Similarly, second 90 degree optical hybrid1212is configured to receive as inputs a Y-polarization signal component from first PBS1202and the LO signal from second PBS1206. Each 90 degree optical hybrid1210,1212is further configured to output separate I and Q components1214for its respective polarization signal component (i.e., XI and XQ, or YI and YQ, in this example). These components are described for purposes of illustration, and are not intended to be limiting. The person of ordinary skill in the art will understand, for example, that coherent receiver1200may include additional components1216, such as photodetectors (PDs), amplifiers or transimpedance amplifiers (TIAs), ADCs, and/or additional components conventionally utilized in coherent optical receivers, without departing from the scope herein. As illustrated inFIG.12, polarization-diversity coherent receiver1200operates to detect signal1204on two polarizations. However, when signal1204is received over fiber transmission, the respective polarizations of signal1204may no longer be aligned to LO1208. That is, during fiber transmission, the signal polarization may become randomly rotated due to birefringence from the fiber. Other polarization effects, such as PMD, may also affect the phase difference between the two polarizations, and the post-transmission SoP of signal1204may drift with time as a result of environmental conditions on the installed fibers/cables.
Accordingly, in the exemplary embodiment, polarization-diversity coherent detection is achieved by separately detecting the two selected orthogonal polarizations (i.e., X/Y) of the received input signal1204, thereby enabling coherent receiver1200to implement SoP estimation and recovery in the digital domain. The following embodiments are described with respect to particular specially-designed data sequences and/or frame structures for dual-polarization signals, that is, polarization division multiplexed signals. These examples are provided by way of illustration, and not in a limiting sense. The principles described herein will be understood to be applicable to other types of polarization multiplexed signals and multiplexed signals having multiple signal subcomponents. FIG.13is a schematic illustration depicting an exemplary network communication system1300. In an exemplary embodiment, system1300includes a transmitter-side1302and a receiver-side1304, and transmitter-side1302includes a data unit generator1306configured for frame structuring and design of data units communicated to a corresponding digital signal processor (DSP)1308at receiver-side1304. In the exemplary embodiment, data unit generator1306is configured to generate a data unit having a specially-designed frame structure for each of two polarizations, and then modulate the generated data unit onto the optical carrier corresponding to each polarization for transmission, for example, as a dual-polarization multiplexed signal (e.g., signal1204,FIG.12) to receiver-side1304over an optical fiber medium1310. At receiver-side1304, DSP1308is configured to have knowledge of the frame structure(s) of the data unit(s) generated by data unit generator1306, and applies DSP functions corresponding to the known data units after the respective signal components are coherently detected (e.g., at ICR922,FIG.9, or optical hybrids1210/1212,FIG.12). In the exemplary embodiment, DSP1308is logically disposed after conversion by an ADC (not shown inFIG.13), and operates in the digital domain to estimate and recover SoP with channel equalization, in coordination with data unit generator1306and/or knowledge of the special frame structures of the data units generated thereby. In the exemplary embodiment, SoP estimation is therefore based on the data unit frame structures generated by specially-designed data unit generator1306. In this respect, the exemplary embodiment depicted inFIG.13may be considered “data-aided,” due to the insertion of the data units onto the respective polarization signal frames. Nevertheless, because the corresponding SoP estimation performed by DSP1308need not be based on the actual bit information carried by the data of the respective signal frames, and is instead based on the relation between the two polarization-diversity detected signals, the generated data units may also be utilized for information carrying, that is, in addition to their implementation for SoP estimation and recovery. In some embodiments, the data units may alternatively or additionally be used to coordinate with other corresponding DSP functions at DSP1308, such as those described above. Exemplary data unit frame structures are described further below with respect toFIGS.14A-C. FIGS.14A-Care schematic illustrations depicting exemplary respective data architectures1400A-C generated in accordance with data unit generator1306,FIG.13.
In the exemplary embodiment, each respective data architecture1400includes at least one specially-designed data unit1402placed with respect to a dual-polarization signal frame1404(e.g., the data payload) in the time domain. In the example depicted inFIGS.14A-C, dual-polarization signal frame1404is illustrated with respect to orthogonal X- and Y-polarizations, with data carried by their respective optical signals represented as Data X and Data Y (or Data Xi and Data Yi, in the case of multiple signal frames within the same data architecture1400). Accordingly, data unit1402is correspondingly structured to include an X-data component and a Y-data component, represented herein as Ux and Uy (or Uxi and Uyi). More particularly,FIG.14Aillustrates an implementation example of data unit1402A including N-symbol length data only on the X-data component for the X-polarization, and zeros on the Y-data component for the Y-polarization; that is, the frame structure of data unit1402A is configured such that Ux includes N-symbol length data in the time slot of data unit1402A, and Uy includes zeros (no data) within that same time slot. Similarly,FIG.14Billustrates the counter-implementation example of data unit1402B including zeros on the X-data component Ux, and N-symbol length data on only the Y-data component Uy within the same time slot. FIG.14C, on the other hand, illustrates an implementation example of data unit1402C having a hybrid frame structure spanning at least two time slots within a single data unit. That is, data unit1402C includes N-symbol length data on both polarizations, but in different time slots within a single data unit. More particularly, within the first time slot of data unit1402C, Ux includes N-symbol length data, and Uy includes zeros, whereas in the second time slot of data unit1402C, Ux includes zeros, and Uy includes N-symbol length data. It will be understood by persons of ordinary skill in the art that the disposition of data within the two time slots of data unit1402C may be reversed, as long as one polarization of data unit1402C is structured to be null with zeros while the other polarization is structured to include data within a particular time slot. The operating principles of data architectures1400A-C may otherwise be considered similar to one another with respect to SoP estimation and recovery. However, the hybrid implementation example illustrated inFIG.14Cmay be considered to provide more balanced data loading between the X- and Y-polarizations. In an exemplary embodiment, implementation of data architecture1400C may further provide a more robust performance due to the enablement of corresponding DSP calculations on both of the two polarizations. An analysis of such DSP calculations and related processing is described further below. In general, for an access network having relatively limited transmission distance, the polarization dependent loss and fiber nonlinearity may be ignored. Based on this principle, the Jones matrix of the fiber channel after signal transmission can be expressed, as a unitary matrix, according to:

$$J = \begin{bmatrix} \cos\theta\, e^{j\phi_1} & \sin\theta\, e^{j\phi_2} \\ -\sin\theta\, e^{-j\phi_2} & \cos\theta\, e^{-j\phi_1} \end{bmatrix} \quad (\text{Eq. } 2)$$

Here, θ represents the overall polarization rotation effect, whereas ϕ1 and ϕ2 represent the phase caused by PMD after fiber transmission. To solve this equation with three variables, the problem may be simplified by expanding Eq. 2 into:
In general, for an access network having relatively limited transmission distance, the polarization dependent loss and fiber nonlinearity may be ignored. Based on this principle, the Jones matrix of the fiber channel after signal transmission can be expressed, as a unitary matrix, according to:

J = \begin{bmatrix} \cos\theta\, e^{j\phi_1} & \sin\theta\, e^{j\phi_2} \\ -\sin\theta\, e^{-j\phi_2} & \cos\theta\, e^{-j\phi_1} \end{bmatrix}    (Eq. 2)

Here, θ represents the overall polarization rotation effect, whereas φ1 and φ2 represent the phase caused by PMD after fiber transmission. To solve this equation with three variables, the problem may be simplified by expanding Eq. 2 into:

J = \begin{bmatrix} \cos\theta\, e^{j(\phi_1+\phi_2)} & \sin\theta \\ -\sin\theta & \cos\theta\, e^{-j(\phi_1+\phi_2)} \end{bmatrix} \begin{bmatrix} e^{-j\phi_2} & 0 \\ 0 & e^{j\phi_2} \end{bmatrix} = \begin{bmatrix} \cos\theta\, e^{j\gamma} & \sin\theta \\ -\sin\theta & \cos\theta\, e^{-j\gamma} \end{bmatrix} \begin{bmatrix} e^{-j\phi_2} & 0 \\ 0 & e^{j\phi_2} \end{bmatrix} = J_1 J_2    (Eq. 3)

Here, γ = φ1 + φ2, and it may thus be seen that J now has two parts, J1 and J2, with the first part J1 having only two variables (θ and γ), which may be solved more easily considering that the second part J2 has no contribution to the polarization crosstalk (i.e., power transfer between the X- and Y-polarizations). Accordingly, this second part J2 only adds a phase difference between the X- and Y-polarizations, which may be solved in the phase recovery process. Thus, only the first part of the equation requires a solution. It is known that, for a unitary matrix, its inverse H is the conjugate transpose of that unitary matrix. Accordingly, the equation need be solved only to obtain:

H = \begin{bmatrix} \cos\theta\, e^{-j\gamma} & -\sin\theta \\ \sin\theta & \cos\theta\, e^{j\gamma} \end{bmatrix}    (Eq. 4)

The systems and methods of the present embodiments thus provide an advantageously simplified technique to solve this equation. For example, assuming that the received signal E_R after polarization diversity detection may be represented as:

E_R = \begin{bmatrix} E_x \\ E_y \end{bmatrix}    (Eq. 5)

then the recovery signal E_T may be represented according to:

E_T = H \begin{bmatrix} E_x \\ E_y \end{bmatrix} = \begin{bmatrix} \cos\theta\, e^{-j\gamma} E_x - \sin\theta\, E_y \\ \sin\theta\, E_x + \cos\theta\, e^{j\gamma} E_y \end{bmatrix}    (Eq. 6)

Here, E_x and E_y represent the revised signals on the two respective polarizations. As described above, the second part J2 has no contribution to the power transfer between the X- and Y-polarizations. Accordingly, the innovative data unit frame structure of the present embodiments enables a simplified solution to the equation, due to the unique property of the present data unit that one polarization is null with zeros. For example, in the case where the Y-polarization of the transmitted data unit is null with zeros (e.g., data unit 1402A), the following is true:

\sin\theta\, E_x + \cos\theta\, e^{j\gamma} E_y = 0    (Eq. 7)

Under this principle, the equation may then be solved according to:

\sin\theta = \sqrt{\frac{|E_y/E_x|^2}{1+|E_y/E_x|^2}}, \quad \cos\theta = \sqrt{\frac{1}{1+|E_y/E_x|^2}}, \quad \gamma = \angle\!\left(-\frac{E_x}{E_y}\right)    (Eq. 8)

Accordingly, the inverse matrix H for SoP estimation, polarization recovery, and demultiplexing may be easily obtained. A similar algorithmic process may be implemented in the case where the X-polarization of the transmitted data unit is null with zeros (e.g., data unit 1402B). That is, in the case where the X-polarization of the transmitted data unit is null with zeros, the following is true:

\sin\theta = \sqrt{\frac{|E_x/E_y|^2}{1+|E_x/E_y|^2}}, \quad \cos\theta = \sqrt{\frac{1}{1+|E_x/E_y|^2}}, \quad \gamma = -\angle\!\left(\frac{E_y}{E_x}\right)    (Eq. 9)

Thus, the inverse matrix H may be solved through implementation of any of data units 1402A-C. Namely, as long as one of the respective data components of the polarizations is null with zeros, the equation may be solved according to either Eq. 8 or Eq. 9, depending on the configuration of the particular data unit 1402 that is used. For example, in the case of hybrid data unit 1402C, the equation may be solved using both Eq. 8 and Eq. 9, but in different respective time slots. In an exemplary embodiment, in practical use, each data unit 1402 may contain N symbols to improve the estimation accuracy. Therefore, for either of data units 1402A and 1402B, the respective results from Eq. 8 or Eq. 9 may be averaged over the N symbols. In the case where data unit 1402C is implemented, since both of Eq. 8 and Eq. 9 are separately calculated for the inverse matrix H, the matrix is effectively estimated twice. In this case, the results from both equations may be separately averaged. Alternatively, the results of the two equations may be averaged with each other, namely, between the X- and Y-polarizations, to further improve accuracy. In at least one embodiment, Eq. 8 may be implemented for the lower two elements of the inverse matrix H, and Eq. 9 may be implemented for the upper two elements of the inverse matrix H.
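As a numerical illustration of Eqs. 8 and 4, the following sketch estimates θ and γ from the received samples within a data unit whose Y-polarization was transmitted null, averaging over the N symbols as described above, and then forms the inverse matrix H. The function names are assumptions made for illustration.

```python
import numpy as np

def sop_estimate_null_y(ex, ey):
    """Estimate (sin(theta), cos(theta), gamma) per Eq. 8 from received samples
    ex, ey taken within a data unit whose transmitted Y-component is null,
    averaging the per-symbol estimates over the N symbols of the unit."""
    r2 = np.abs(ey / ex) ** 2
    sin_t = np.mean(np.sqrt(r2 / (1.0 + r2)))
    cos_t = np.mean(np.sqrt(1.0 / (1.0 + r2)))
    gamma = np.angle(np.mean(-ex / ey))  # gamma = angle(-Ex/Ey), averaged
    return sin_t, cos_t, gamma

def inverse_matrix_h(sin_t, cos_t, gamma):
    """Form the inverse matrix H of Eq. 4 for polarization recovery."""
    return np.array([[cos_t * np.exp(-1j * gamma), -sin_t],
                     [sin_t, cos_t * np.exp(1j * gamma)]])
```

For a data unit with a null X-polarization (Eq. 9), the same sketch applies with the roles of ex and ey exchanged and the sign of γ negated.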
In an exemplary embodiment, the respective implementation examples depicted in FIGS. 14A-C may be further implemented with channel equalization. In this case, the inverse Jones matrix H may be a 2×2, 1-tap filter, which can be used for polarization recovery and demultiplexing independently, before any channel equalization is performed, as described further below with respect to FIG. 17. Alternatively, the inverse matrix H may be used for the initialization of a 2×2 multi-tap adaptive equalizer, as described further below with respect to FIG. 15.

FIG. 15 is a flow diagram depicting an exemplary SoP estimation technique 1500 for digital signal processing. In the exemplary embodiment depicted in FIG. 15, technique 1500 illustrates the DSP flow of an SoP estimation unit 1502 implemented together with a 2×2 multi-tap equalization scheme that uses the SoP estimate to initialize adaptive equalizers 1504. More specifically, each of the four adaptive equalizers 1504(1), 1504(2), 1504(3), 1504(4) is utilized in the exemplary embodiment for channel equalization. In some embodiments, each of adaptive equalizers 1504 is further enabled to compensate for both intra-polarization linear channel distortions (e.g., inter-symbol interference (ISI) from transmitter and receiver bandwidth limitations, residual CD) and inter-polarization crosstalk. In exemplary operation of technique 1500, both of revised polarization signals Ex and Ey are input to SoP estimation unit 1502, whereas only revised polarization signal Ex is input to adaptive equalizers 1504(1) and 1504(2), and only revised polarization signal Ey is input to adaptive equalizers 1504(3) and 1504(4). Outputs from adaptive equalizers 1504(1) and 1504(3) are summed to generate an output X-polarization signal Xout, and outputs from adaptive equalizers 1504(2) and 1504(4) are summed to generate an output Y-polarization signal Yout. Both of output signals Xout and Yout may then be fed back into an error function unit 1506. In an exemplary embodiment, the respective filter coefficients of adaptive equalizers 1504 (i.e., [dxx, dxy; dyx, dyy]) may then be updated (e.g., using the CMA, MMA, or LMS algorithms described above) based on the error signal feedback processed by error function unit 1506. As discussed above, using all-blind adaptive equalization (e.g., FIGS. 3, 6), the initialization of the filter coefficients of adaptive equalizers 1504 may be non-optimized with a default starting point. For example, assuming no polarization rotation, and setting the values of the center tap coefficients dxx, dxy, dyx, dyy as [1 0 0 1], a gradual update to these filter coefficients may cost an unreasonable amount of time when significant polarization rotation occurs.
Accordingly, considering that the most significant determinant of the convergence time for adaptive equalizers 1504 is the initialization, the present embodiments advantageously avoid this cost by implementing the SoP estimation (i.e., SoP estimation unit 1502) together with the four multi-tap adaptive equalizers 1504 for initialization, as depicted in FIG. 15. Furthermore, since four adaptive equalizers 1504 are utilized for adaptive equalization, the polarization change may be tracked, including through the inter-polarization terms dxy and dyx. That is, for each frame, no further estimation is needed after initialization. Accordingly, the respective frame structures described above with respect to FIGS. 14A-C may be implemented with technique 1500, with only the respective data unit 1402 heading data architecture 1400 in time, before the data payload of signal frame 1404. Therefore, according to technique 1500, using the detected signals from each of the two polarizations (i.e., Ex and Ey), the SoP estimation may be initially performed based on the first data unit 1402 in the frame head of the respective data architecture 1400. That is, SoP estimation unit 1502 may be programmed with algorithms or computer-executed instructions to perform the calculations described above with respect to Eqs. 8 and 9, and thereby obtain the inverse matrix H.

In an embodiment, technique 1500 further includes a multiplication unit 1508, a channel response storage unit 1510, and a normalization unit 1512. In exemplary operation of this embodiment, the convergence time may be further reduced by multiplying, using multiplication unit 1508, the inverse matrix H output from SoP estimation unit 1502 with an initial normalized channel response stored in channel response storage unit 1510 to initialize the four adaptive equalizers 1504. That is, the normalized channel response D = [Dxx, Dxy; Dyx, Dyy] may be initially set, at the very beginning of system operation according to technique 1500, with the initial pre-stored channel response such that the center taps of Dxx and Dyy are 1, and all other elements of D are set to zero. For example, in the case of a 5-tap channel response, the initial pre-stored normalized channel response D in channel response storage unit 1510 may be set according to:

D = \begin{bmatrix} D_{xx} & D_{xy} \\ D_{yx} & D_{yy} \end{bmatrix} = \begin{bmatrix} [0\;0\;1\;0\;0] & [0\;0\;0\;0\;0] \\ [0\;0\;0\;0\;0] & [0\;0\;1\;0\;0] \end{bmatrix}    (Eq. 10)

The initialization for the four adaptive equalizers 1504 may then be set according to:

\begin{bmatrix} d_{xx} & d_{xy} \\ d_{yx} & d_{yy} \end{bmatrix} = H \begin{bmatrix} D_{xx} & D_{xy} \\ D_{yx} & D_{yy} \end{bmatrix}    (Eq. 11)

Thus, after this initialization procedure, the SoP estimation is complete. In further operation according to technique 1500, the respective adaptive equalizers 1504 may then proceed with continuously updating their taps to track the channel response and polarization changes. As described above, the respective filter coefficients of adaptive equalizers 1504 (i.e., [dxx, dxy; dyx, dyy]) may then be updated based on the error signal feedback from error function unit 1506 (e.g., using CMA, MMA, or LMS algorithms). In some embodiments, training sequences may be implemented when updating the filter coefficients to achieve faster convergence. In further exemplary operation of technique 1500, when channel equalization is completed, the tap values of adaptive equalizers 1504 (i.e., [dxx, dxy; dyx, dyy]) may be fed to normalization unit 1512, normalized as an updated channel response D, and stored in channel response storage unit 1510. The value of updated channel response D may then be utilized to equalize the next sequential frame.
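A minimal sketch of the initialization of Eqs. 10 and 11 follows, assuming a 5-tap response; the dictionary-based representation of D and the helper names are illustrative assumptions.

```python
import numpy as np

def initial_channel_response(n_taps=5):
    """Initial normalized channel response D per Eq. 10: center taps of
    Dxx and Dyy set to 1, all other elements set to zero."""
    center = np.zeros(n_taps, dtype=complex)
    center[n_taps // 2] = 1.0
    zero = np.zeros(n_taps, dtype=complex)
    return {"xx": center.copy(), "xy": zero.copy(),
            "yx": zero.copy(), "yy": center.copy()}

def initialize_taps(H, D):
    """Eq. 11: [dxx dxy; dyx dyy] = H [Dxx Dxy; Dyx Dyy], applied tap-by-tap,
    so each equalizer starts from the SoP-rotated stored channel response."""
    return {"xx": H[0, 0] * D["xx"] + H[0, 1] * D["yx"],
            "xy": H[0, 0] * D["xy"] + H[0, 1] * D["yy"],
            "yx": H[1, 0] * D["xx"] + H[1, 1] * D["yx"],
            "yy": H[1, 0] * D["xy"] + H[1, 1] * D["yy"]}
```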
In the exemplary embodiment, any or all of the respective components illustrated with respect to technique 1500 may be contained within a DSP of a receiver (e.g., DSP 1308 of receiver-side 1304, FIG. 13), and/or executed as a software module by a processor thereof. According to the exemplary embodiment depicted in FIG. 15, technique 1500 is of particular use for, and fully compatible with, upstream burst detection in a PON, where each ONU of the PON may have its own stored channel response D (e.g., in a memory thereof), that is, a particular channel response Di for the respective ONUi. Thus, according to technique 1500, the channel response Di may be utilized in the initialization process for the next burst frame coming from that ONUi, whereas different ONUs may have different pre-stored coefficients. An exemplary implementation process of technique 1500 is described below with respect to FIG. 16.

FIG. 16 is a flow diagram depicting an exemplary channel equalization process 1600 implementing estimation technique 1500, FIG. 15. In the exemplary embodiment, process 1600 begins at step 1602, in which SoP estimation is performed (e.g., by SoP estimation unit 1502) using a data unit (e.g., data unit 1402A, 1402B, or 1402C, FIG. 14) in the frame header of a received data frame (e.g., respective data architecture 1400A, 1400B, or 1400C, FIG. 14) of an input data signal (e.g., Ex and/or Ey). In an exemplary embodiment of step 1602, the SoP estimation utilizes the non-zero data in the data unit to calculate the inverse matrix H. Step 1604 is a decision step. If, in step 1604, process 1600 determines that the received data frame is the first frame in the received signal sequence, process 1600 proceeds to step 1606. In step 1606, the calculated inverse matrix H is multiplied (e.g., by multiplication unit 1508) by the initial pre-stored normalized channel response D (e.g., in channel response storage unit 1510), and the multiplied result thereof is fed to the respective taps of the adaptive equalizers (e.g., adaptive equalizers 1504) for adaptive channel equalization. In step 1608, the adaptive filters perform adaptive channel equalization using the taps initialized from the multiplied inverse matrix H. In step 1610, process 1600 determines that channel equalization has been completed, outputs the respective polarization output signals (e.g., Xout and/or Yout), and feeds an updated normalized channel response D to the channel response storage unit (e.g., from adaptive filters 1504 by way of normalization unit 1512). In step 1612, the channel response storage unit stores the updated normalized channel response D. In an exemplary embodiment of step 1612, the updated normalized channel response D is stored within a table contained within the channel response storage unit. Referring back to step 1604, if process 1600 alternatively determines that the received data frame is not the first frame in the signal sequence, process 1600 instead proceeds to step 1614. In step 1614, the calculated inverse matrix H is multiplied (e.g., by multiplication unit 1508) by the stored updated normalized channel response D (e.g., from step 1612) read from the channel response storage unit, and the multiplied result thereof is fed to the respective taps of the adaptive equalizers (e.g., adaptive equalizers 1504) for adaptive channel equalization. Process 1600 then proceeds from step 1614 to step 1608, and process 1600 may then be repeated for each successive received signal frame.
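The control flow of process 1600 may be summarized by the following skeleton, in which estimate_sop and equalize stand in for the functional blocks of FIG. 15 and are assumptions made for illustration.

```python
def process_1600(frames, estimate_sop, equalize, initial_D):
    """Skeleton of channel equalization process 1600.

    estimate_sop(frame) -> 2x2 inverse matrix H (step 1602)
    equalize(frame, H, D) -> (x_out, y_out, updated_D) (steps 1606/1614, 1608-1612)
    """
    stored_D = initial_D
    outputs = []
    for i, frame in enumerate(frames):
        H = estimate_sop(frame)                # step 1602: SoP estimation on data unit
        D = initial_D if i == 0 else stored_D  # step 1604: first frame vs. later frames
        x_out, y_out, stored_D = equalize(frame, H, D)  # steps 1606/1614 and 1608
        outputs.append((x_out, y_out))         # steps 1610/1612: updated D is stored
    return outputs
```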
In an alternative embodiment, SoP estimation, polarization recovery, and demultiplexing may be implemented independently from adaptive channel equalization. An exemplary technique for such independent SoP estimation is described further below with respect to FIG. 17.

FIG. 17 is a flow diagram depicting an alternative SoP estimation technique 1700 for digital signal processing. In the exemplary embodiment, components of processing technique 1700 are implemented within, and/or using software-based processing algorithms of, a DSP of a coherent receiver (e.g., DSP 1308, FIG. 13). In the embodiment depicted in FIG. 17, technique 1700 is similar in some respects to technique 1500, FIG. 15, and includes an SoP estimation unit 1702, two adaptive equalizers 1704 (as opposed to the four adaptive equalizers 1504 utilized according to technique 1500), an error function unit 1706, and a channel response storage unit 1708. Although individual elements of technique 1700 are thus similar to analogous elements of technique 1500, the operating principle of technique 1700 is different from that of technique 1500. For example, different from technique 1500, technique 1700 utilizes a 1-tap inverse matrix H unit 1710 to achieve instant polarization recovery before channel equalization by adaptive equalizers 1704(1), 1704(2). Accordingly, because unit 1710 is able to recover both of the X- and Y-polarization signals prior to equalization, only two adaptive equalizers 1704(1) and 1704(2) (e.g., dxx and dyy, respectively) are needed at the receiver, thereby significantly reducing the computation complexity, in comparison with technique 1500, by approximately half. However, since technique 1700 does not include inter-polarization equalizers (e.g., dxy and dyx of technique 1500), the two adaptive equalizers 1704(1) and 1704(2) (dxx and dyy) will not be able to track slow polarization changes. For example, in the case of short burst frames, there may be little or no change in the polarization, and therefore the inability to track slower polarization changes would be considered to result in a very small penalty on the performance. However, in the case of longer bursts or continuous-mode operation, technique 1700 may be further configured to periodically check the polarization state and apply SoP estimation and recovery as needed. Nevertheless, even with this additional periodic checking, receiver DSP systems and methods according to technique 1700 represent significantly simplified channel equalization schemes, with respect to both hardware costs and the processing resource burdens thereof. In exemplary operation of technique 1700, both of revised polarization signals Ex and Ey are input to SoP estimation unit 1702, and both are also input to 1-tap inverse matrix H unit 1710. The output from adaptive equalizer 1704(1) becomes the output X-polarization signal Xout, and the output from adaptive equalizer 1704(2) becomes the output Y-polarization signal Yout. Both of output signals Xout and Yout may again be fed back into error function unit 1706, similar to the analogous operation in technique 1500. In further exemplary operation of technique 1700, using the detected signals from the two polarizations (e.g., Ex and Ey), the SoP estimation is initially performed based on the data unit received in the frame head (e.g., data unit 1402A, 1402B, 1402C, FIGS. 14A-C, described further below with respect to FIGS. 18A-C). That is, SoP estimation unit 1702 is programmed to process the detected polarization signals according to either or both of Eq. 8 and Eq. 9, above, to solve for the inverse matrix H.
Instant polarization recovery unit 1710 then performs polarization recovery for the received signals Ex and Ey using the inverse matrix H produced by SoP estimation unit 1702, to output two polarization-recovered signals for the respective X- and Y-polarizations, which may then be separately fed to respective adaptive equalizers 1704. In the example illustrated in FIG. 17, the recovered X-polarization signal is input to first adaptive equalizer 1704(1), and the recovered Y-polarization signal is input to second adaptive equalizer 1704(2), both of which may then apply channel equalization to the respective input recovered polarization signal. As discussed above, in the example depicted in FIG. 17, only two adaptive equalizers 1704 are implemented, and are not expected to include tracking capability for polarization state changes according to this configuration. Nevertheless, a receiver DSP that implements technique 1700 may be easily configured such that SoP estimation unit 1702 is enabled to perform SoP estimation a plurality of times for a single frame, depending on the length of the particular frame. Technique 1700 is thus further different from technique 1500, FIG. 15, in that technique 1500 would be expected to implement SoP estimation (e.g., by SoP estimation unit 1502) only once for each received frame/data unit. In contrast, technique 1700 may advantageously perform SoP estimation a plurality of times based on the periodically received data units. In further exemplary operation of technique 1700, after SoP estimation is performed, adaptive equalizers 1704 may be initialized using a pre-stored channel response D = [Dxx; Dyy] stored in a memory of channel response storage unit 1708. Similar to the exemplary embodiment described with respect to FIG. 15, channel response storage unit 1708 may also be set with a pre-stored default channel response value to reduce the convergence time. Thus, at the very beginning of system operation according to technique 1700, the pre-stored default channel response may be set with the center taps of Dxx and Dyy having a value of 1, and all other elements of D set to zero. Accordingly, again considering the case of a 5-tap channel response, the initial pre-stored channel response D in channel response storage unit 1708 may be set according to:

D = \begin{bmatrix} D_{xx} \\ D_{yy} \end{bmatrix} = \begin{bmatrix} [0\;0\;1\;0\;0] \\ [0\;0\;1\;0\;0] \end{bmatrix}    (Eq. 12)

And thus, adaptive equalizers 1704 may be initialized according to:

\begin{bmatrix} d_{xx} \\ d_{yy} \end{bmatrix} = \begin{bmatrix} D_{xx} \\ D_{yy} \end{bmatrix}    (Eq. 13)

After this initialization, each adaptive equalizer 1704 may then start continuously updating its taps to track the channel response and polarization changes. The corresponding filter coefficients [dxx; dyy] of adaptive equalizers 1704 may then be updated based on the error signal feedback from error function unit 1706 (e.g., which may use algorithms such as CMA, MMA, LMS, etc.). Similar to technique 1500, technique 1700 may further utilize training sequences when updating the filter coefficients to achieve faster convergence. Once channel equalization is completed, the tap values [dxx; dyy] of adaptive equalizers 1704 may be stored in channel response storage unit 1708 as an updated channel response D. This stored value for the updated channel response D may then be used for equalization processing of the next frame in the signal sequence. Different from technique 1500, technique 1700 does not include a normalization unit or normalization processing, since SoP estimation is performed before channel equalization according to technique 1700.
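A minimal sketch of the per-frame signal path of technique 1700 follows. Fixed filter taps and np.convolve stand in for adaptive equalizers 1704 (whose CMA/MMA/LMS coefficient updates are omitted), and the function name is an assumption.

```python
import numpy as np

def technique_1700_frame(ex, ey, H, dxx, dyy):
    """Instant 1-tap polarization recovery (unit 1710) followed by two
    independent single-polarization equalizers (cf. equalizers 1704)."""
    recovered = H @ np.vstack([ex, ey])  # 1-tap inverse Jones matrix, per Eq. 6
    x_rec, y_rec = recovered[0], recovered[1]
    # Fixed-tap FIR filtering stands in for the adaptive equalization; the
    # adaptive coefficient updates from error function unit 1706 are omitted.
    x_out = np.convolve(x_rec, dxx, mode="same")
    y_out = np.convolve(y_rec, dyy, mode="same")
    return x_out, y_out
```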
According to the exemplary embodiment depicted in FIG. 17, technique 1700 is also of particular use for, and fully compatible with, upstream burst detection in a PON, where each ONU of the PON may have its own stored channel response D (e.g., in a memory thereof), that is, a particular channel response Di for the respective ONUi. Thus, according to technique 1700, the channel response Di may be utilized in the initialization process for the next burst frame coming from that ONUi, whereas different ONUs may have different pre-stored coefficients. According to technique 1700, the respective data architectures and data units thereof may be similar to those described above with respect to FIG. 15. However, whereas the operating principles may be the same, the particular data architectures used with respect to technique 1700 may differ somewhat from those used with respect to technique 1500, as described below with respect to FIGS. 18A-C. Additionally, an exemplary implementation process of technique 1700 is described further below with respect to FIG. 19.

FIGS. 18A-C are schematic illustrations depicting alternative respective data architectures 1800A-C. Data architectures 1800A-C are similar in principle to data architectures 1400A-C, FIGS. 14A-C, but illustrate alternative designs and/or dispositions of data units 1802 with respect to a sequential series of dual-polarization signal frames 1804 containing data payloads. Similar to data architectures 1400A-C, data units 1802 are structured to include an X-data component and a Y-data component (i.e., Uxi and Uyi), and signal frames 1804 include both X-polarization data and Y-polarization data (i.e., Data Xi and Data Yi). FIGS. 18A-C thus depict three respective exemplary implementation examples of data unit frame structures having periodic data units 1802 generated within the respective frame architecture 1800 for the simplified channel equalization processing described above with respect to technique 1700, FIG. 17. More particularly, FIG. 18A illustrates an implementation example of periodic data units 1802Ai including N-symbol length data only on the X-data component Ux, and zeros on the Y-data component Uy. In this example, the same frame structure is employed for each data unit 1802A in the sequential series of data architecture 1800A, such that a first data unit 1802A(1) that precedes a first signal frame 1804A(1) has the same frame structure as a second data unit 1802A(2) preceding a second signal frame 1804A(2) and a third data unit 1802A(3) preceding a third signal frame 1804A(3), etc. Data architecture 1800A is therefore substantially similar to data architecture 1400A, FIG. 14A, but applied to a sequence of signal frames 1804Ai. The frame structure of data architecture 1800A is depicted in FIG. 18A, by way of example and not in a limiting sense, as implementing a frame structure having X-data Uxi on the X-polarization and zeros on the Y-polarization. Persons of ordinary skill in the art will understand, though, that the operating principle of data architecture 1800A will be substantially the same if implemented using a frame structure having zeros on the X-polarization and Y-data Uyi on the Y-polarization, which is therefore similar in principle to data architecture 1400B, FIG. 14B, but applied to a sequence of signal frames 1804Ai.
FIG. 18B illustrates an implementation example of data architecture 1800B having periodic data units 1802Bi that are individually similar to data units 1802A, FIG. 18A (i.e., N-symbol length data on one polarization, zeros on the other polarization), but that differ from data architecture 1800A when the plurality of data units 1802Bi are considered in the aggregate, seen over the entire series of signal frames 1804Bi included within data architecture 1800B. That is, although each data unit 1802Bi includes N-symbol length data on one polarization and zeros on the other polarization, the polarization that includes the N-symbol length data alternates for each sequential data unit 1802B in the series. Thus, in the example depicted in FIG. 18B, the frame structure for a first data unit 1802B(1) preceding a first signal frame 1804B(1) includes data Ux on the X-polarization and zeros on the Y-polarization, but a second data unit 1802B(2) preceding a second signal frame 1804B(2), which is next in the sequence, reverses the frame structure to include zeros on the X-polarization and data Uy on the Y-polarization. The frame structure of a third data unit 1802B(3) then alternates again to a frame structure substantially similar to that of first data unit 1802B(1), and the subsequent sequence of frames is processed accordingly.

FIG. 18C illustrates an implementation example of data architecture 1800C utilizing data units 1802Ci having a hybrid frame structure substantially similar to that of data units 1402C, FIG. 14C, but applied to the sequence of signal frames 1804Ci. That is, each of data units 1802Ci similarly includes at least two time slots within a single data unit, with N-symbol length data on one polarization and zeros on the other polarization in the first time slot, and alternating in the second time slot of the same data unit 1802C. Because the respective U-data and zeros thus alternate within a single data unit, the overall frame structure may remain the same for each periodic data unit 1802C(1), 1802C(2), 1802C(3), etc. Accordingly, a fundamental operational principle of the present systems and methods is to periodically leave one polarization of periodic data units 1802 null with zeros, while placing data on the other polarization. According to the exemplary embodiments depicted in FIGS. 18A-C, SoP estimation may be easily and rapidly performed for sequential frames in a data stream. Thus, data units 1802Ai and 1802Ci operate according to the principles described above, with the implementation of data units 1802Ci providing more balanced data loading between the X- and Y-polarizations, as well as a more robust performance due to the calculation on both of the two polarizations. Implementation of data units 1802Bi, on the other hand, strikes a balance between the different approaches that implement data units 1802Ai or 1802Ci. That is, use of data units 1802Bi may achieve the more balanced data loading between polarizations, similar to that achieved through use of data units 1802Ci, while implementing only a single time slot within each periodic data unit.
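The alternating structure of data architecture 1800B may be sketched as follows, with random QPSK symbols standing in for the payload; the function name and its parameters are illustrative assumptions.

```python
import numpy as np

def build_stream_1800b(n_frames, n_unit, n_payload, rng):
    """Sketch of data architecture 1800B: the polarization carrying the
    N-symbol data unit alternates frame by frame (X, Y, X, ...)."""
    qpsk = lambda n: np.exp(1j * (np.pi / 4 + np.pi / 2 * rng.integers(0, 4, n)))
    tx_x, tx_y = [], []
    for i in range(n_frames):
        u, zeros = qpsk(n_unit), np.zeros(n_unit, dtype=complex)
        ux, uy = (u, zeros) if i % 2 == 0 else (zeros, u)  # alternate per frame
        tx_x.append(np.concatenate([ux, qpsk(n_payload)]))  # Uxi + Data Xi
        tx_y.append(np.concatenate([uy, qpsk(n_payload)]))  # Uyi + Data Yi
    return np.concatenate(tx_x), np.concatenate(tx_y)
```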
FIG. 19 is a flow diagram depicting an alternative channel equalization process 1900 implementing estimation technique 1700, FIG. 17. In the exemplary embodiment, process 1900 begins at step 1902, in which SoP estimation is performed (e.g., by SoP estimation unit 1702) using a data unit (e.g., data unit 1802A, 1802B, or 1802C, FIG. 18) in the frame header of a received data frame (e.g., respective data architecture 1800A, 1800B, or 1800C, FIG. 18) of an input data signal (e.g., Ex and/or Ey). In an exemplary embodiment of step 1902, the SoP estimation utilizes the non-zero data in the data unit to calculate the inverse matrix H, and then uses the calculated inverse matrix H (e.g., in instant polarization recovery unit 1710) to recover outputs for each respective polarization. Step 1904 is a decision step. If, in step 1904, process 1900 determines that the received data frame is the first frame in the received signal sequence, process 1900 proceeds to step 1906. In step 1906, an initial pre-stored channel response D (e.g., from channel response storage unit 1708) is applied to each of the recovered polarization outputs (e.g., at adaptive equalizers 1704). In step 1908, the adaptive filters perform adaptive channel equalization on the recovered polarization outputs according to the channel response D (e.g., obtained from channel response storage unit 1708). Step 1910 is also a decision step, in which process 1900 determines whether a last SoP estimation cycle has been performed on the data sequence. If, in step 1910, process 1900 determines that the SoP estimate is not for the last estimation cycle, process 1900 proceeds to step 1912. In step 1912, process 1900 performs an additional SoP estimation and recovery operation (e.g., using SoP estimation unit 1702 and instant polarization recovery unit 1710) using the periodic data units in the frame and the calculated inverse matrix H, after which process 1900 returns to step 1908 for additional adaptive channel equalization. If, however, in step 1910, process 1900 determines that the last SoP estimation cycle has been completed, process 1900 outputs the respective polarization output signals (e.g., Xout and/or Yout), and then proceeds to step 1914, in which an updated channel response D is provided to the channel response storage unit (e.g., from adaptive filters 1704). In an exemplary embodiment of step 1914, the updated channel response D is stored within a table contained within the channel response storage unit. Referring back to step 1904, if process 1900 alternatively determines that the received data frame is not the first frame in the signal sequence, process 1900 instead proceeds to step 1916. In step 1916, the stored updated channel response D (e.g., from step 1914) is read from the channel response storage unit, and then applied to each of the recovered polarization outputs by the adaptive equalizers. Process 1900 then proceeds from step 1916 to step 1908, and process 1900 may then be repeated for each successive received signal frame or subsequent SoP cycle.

In accordance with the DSP systems and methods described above, performance of the respective SoP estimation techniques was tested in an experimental simulation setup. For the simulation performance testing, 25-GBaud dual-polarization 16QAM training symbols were used. Experimental results of the simulated performance testing are described further below with respect to FIGS. 20A-B and 21. FIG. 20A is a graphical illustration depicting a signal plot 2000 before implementation of SoP estimation and polarization recovery. FIG. 20B is a graphical illustration depicting a signal plot 2002 after implementation of SoP estimation and polarization recovery. A comparison of the respective results of signal plots 2000 and 2002 indicates that the polarizations of the respective PDM-16QAM signals, seen before SoP estimation and polarization recovery in plot 2000, were correctly separated by implementing the innovative techniques described above, as indicated in plot 2002, seen after SoP estimation and polarization recovery.
FIG. 21 is a graphical illustration depicting a comparative BER performance result plot 2100 obtained according to techniques 1500, FIG. 15, and 1700, FIG. 17. More particularly, plot 2100 superimposes the BER-versus-received-optical-power results, seen after SoP estimation and polarization recovery, for both a two-equalizer DSP implementation and a four-equalizer DSP implementation (i.e., FIR equalizers, for this performance test). More particularly, the two-equalizer implementation is representative of the simplified channel equalization technique illustrated in FIG. 17, and the four-equalizer implementation is representative of the more complex channel equalization technique depicted in FIG. 15. As may be seen from plot 2100, the penalty resulting from the selection of one technique over the other is substantially negligible. Nevertheless, implementation of DSP equalization according to the simplified channel equalization technique of FIG. 17 will reduce the computation complexity by half.

According to the systems and methods described above, an innovative data-aided technique is provided for SoP estimation and simplified channel equalization. These techniques advantageously utilize a specially-designed data unit at the transmitter-side, which, in cooperation with complementary DSP at the receiver-side, efficiently and correctly separates the polarizations of a dual-polarization signal directly in a feedforward manner. The innovative data units of the present embodiments therefore include a frame structure that is particularly useful for burst-mode signal frames. Additionally, because the present systems and methods are based on feedforward estimation, convergence time may be greatly reduced, which is a unique advantage, in comparison with conventional techniques, for burst-mode coherent communication systems. Additionally, because the DSP estimation techniques described herein are not based on the bit information carried by the data, but are instead based on the relation between polarization-diversity detected signals, the innovative data units of the present embodiments may also be used for carrying bit information, if desired, or for other DSP functions. The present techniques are thus also fully compatible with the utilization of channel equalization algorithms having reduced complexity and reduced convergence time, and may be implemented utilizing a conventional 2×2 channel equalization architecture to reduce the convergence time, whether by initializing the filter taps of the four adaptive equalizers therein, or by pre-separating the polarizations of the dual-polarization signal. Alternatively, the present techniques further provide significantly simplified channel equalization using two adaptive equalizers instead of the four equalizers of the conventional 2×2 channel equalization architecture. That is, after SoP estimation, two independent adaptive equalizers perform channel equalization on each respective polarization of a polarization multiplexed signal, thereby reducing the computation complexity by 50% when compared with non-data-aided methods. Additionally, for single-polarization signals, these techniques may be even further simplified to utilize only one adaptive equalizer.

Efficient Preamble Design and DSP in Coherent-PON Upstream Burst-Mode Detection

As described above, the advance of high-speed optical access networks has been propelled by new business and application drivers, such as 5G, mobile x-haul, cloud networking, and high-bandwidth 4K/8K video streaming services.
As a result of this advance, the bandwidth requirements in the optical access network have grown significantly in proportion. PON technologies have been a dominant solution to meet such high-capacity demand from end users, by offering relatively low-cost P2MP services. Accordingly, the industry expects to upgrade the access network to 25/50-Gb/s, and even 100-Gb/s, PON technologies in the near future. The IEEE 802.3ca Task Force has recently, for example, released a 25/50G NG-EPON specification based on wavelength multiplexing of 25 Gb/s per single channel, and ITU-T/FSAN has launched new projects to standardize higher speed PONs, such as 50G single-wavelength TDM-PONs. However, both of these recent PON standardization proposals are based on intensity modulation and direct detection (IM/DD) in the physical layer, and not, for example, based on coherent detection. Single-wavelength high-speed TDM-PON systems are nevertheless of great interest in the industry, in comparison with system mechanisms for bonding multiple wavelengths, because the single-wavelength solution not only reduces the number of required optical components and the associated costs thereof, but also saves wavelength resources. Furthermore, 100G PON proposals using wavelength multiplexing and IM/DD of four 25-Gb/s, or two 50-Gb/s, channels are considered in the industry to be too challenged by their limited power budget and complicated wavelength resource management techniques. For example, a 100G PON based on O-band IM/DD has recently been proposed for downstream transmission; however, this proposal requires a prohibitively large launch power at the OLT-side. A correspondingly high launch power is therefore considered out of reach at the ONU-side, that is, for upstream transmission. Moreover, appropriate transmission wavelength windows are difficult to obtain in the O-band in consideration of coexistence with legacy PON services. Therefore, the limited sensitivity of the 100G TDM-PON is considered too great of a challenge to increasing the data rate on a single wavelength to meet the PR-30 (>29-dB link loss) power budget using direct detection in the O-band. The present embodiments overcome these challenges by providing a 100-Gb/s, single-wavelength, coherent detection TDM-PON. Coherent PONs, for example, provide higher sensitivity, and due to continuing DSP advancements, coherent PONs enable significantly higher access capacity and longer coverage reach. Coherent technology, though, remains costly. Recent efforts to reduce the cost and complexity of coherent optics in the access network include semi-coherent systems using heterodyning, amplitude modulation, and Alamouti-coding based polarization-independent detection. However, these efforts to simplify the complexity have resulted in trade-offs that have penalized the sensitivity of the network, increased the device bandwidth requirements, and required non-standard coherent transceiver architectures. For example, where a single Mach-Zehnder modulator has been substituted for the dual-polarization I/Q modulator at the transmitter-side, this reduction to the complexity of the transmitter has required a corresponding increase to the complexity of the receiver, which in this example requires twice the bandwidth in comparison with an analogous receiver in a full-coherent QPSK system.
This example of a semi-coherent system also still suffers the sensitivity penalty trade-off. Because coherent optics in a fully-coherent system is at present the only practically-available, commercially-developed, and mass-deployed optical coherent communication technology in the field, the present embodiments build on this existing mature platform, in consideration of recent developments in opto-electronic integration and CMOS technology, as well as the existing market size in the access network, to achieve a 100G coherent PON in a full-coherent system. The present embodiments are further fully compatible with related techniques that reduce and optimize the costs, complexity, and power consumption of the access network. To realize these advantageous results, the following embodiments provide systems and methods for robustly achieving upstream burst-mode coherent detection. That is, as discussed above, upstream transmission in the TDM-PON is burst-mode, which is different from the operation of the downstream transmission, where signals are continuously broadcast to all end users. In an exemplary embodiment, a centralized OLT receives signals, burst-by-burst, from different user-side ONUs. The different respective incoming upstream burst signals are typically received by the OLT at different respective signal powers, carrier phases, times or clocks, and/or SoPs. The present embodiments thus realize significant improvements to signal recovery and processing of the upstream burst signals at the OLT. For efficient recovery and processing of upstream transmissions at the OLT, the OLT must be able to respond rapidly to recover the burst signals from the various ONUs within a short time duration, and then be able to reset itself for the next incoming upstream burst. In comparison with burst-mode signal recovery techniques used by direct-detection PONs, signal recovery in the coherent PON is considerably more challenging due to the greater complexity of coherent optical signals, which are modulated and multiplexed on phase, polarization, and amplitude. The present embodiments still further overcome the unsuitability of conventional continuous-mode coherent detection and DSP used in P2P links, which are typically based on blind or feedback-type equalization techniques, and thus require too long an acquisition time to accomplish signal recovery for burst-mode detection. The present systems and methods additionally effectively address the additional challenges arising from burst-mode DSP, such as: (i) other non-DSP subsystems are required to operate at sufficiently high speeds to detect the short optical bursts; and (ii) frequency-offset estimation must be similarly sufficiently fast, and also able to withstand a large offset range due to possible laser wavelength drift. Some recent proposals attempt to address these additional challenges through techniques such as: (i) designed preambles and fast DSPs to achieve fast polarization separation, which fit pilot sequences into the burst-mode detection of a 100G PDM-QPSK coherent TDM-PON; (ii) real-time 20-Gb/s single-polarization QPSK coherent burst-mode detection using a 1.0-MHz clock frequency difference; and (iii) fast I/Q imbalance compensation for 100G PDM-QPSK burst-mode detection using an 826-ns preamble.
However, none of these recent proposals provide practical details on the overall preamble design, the related burst-mode signal processing performance, or, importantly, how to reduce and optimize the preamble length for the 100G coherent TDM-PON. Although a burst-mode DSP architecture for coherent PONs has also been proposed, this previous architectural proposal utilizes pre-calculated tap coefficients for adaptive equalization during the ONU discovery process to enable shorter preamble lengths, and is thus unsuitable for easy integration with different architectural configurations. The following embodiments therefore further solve these additional problems, by providing systems and methods for reliable and efficient preamble design, with corresponding burst-mode DSP, for coherent upstream burst-mode detection in a 100G coherent TDM-PON. The present systems and methods still further provide detailed descriptions of the preamble design configuration and associated principles, as well as related key DSP functions, such as frame synchronization, SoP estimation, and FOE. The following description additionally provides detailed analyses that demonstrate the utility of the present systems and methods, including experimental results showing improvements with respect to frequency-offset and fiber CD, as well as verification of the efficiency and overall performance using the present designed preamble techniques under different test conditions. According to the present systems and methods, the preamble length may be advantageously reduced by sharing the preamble unit among multiple DSP functions, and a robust performance in the presence of large frequency-offset and residual fiber dispersion is confirmed.

FIG. 22A is a schematic diagram depicting a burst-frame architecture 2200 for a conventional direct-detection PON. FIG. 22B is a schematic diagram depicting an upstream recovery technique 2202 for direct-detection burst-frame architecture 2200, FIG. 22A. In the embodiments depicted in FIGS. 22A-B, burst-frame architecture 2200 and recovery technique 2202 respectively represent burst-frame structures and upstream burst-mode signal recovery functions for a conventional direct-detection TDM-PON according to the IEEE 802.3ca NG-EPON example. As illustrated in FIG. 22A, upstream burst-frame architecture 2200 begins with three synchronization patterns (SPs) 2204, which are not under FEC protection. More specifically, a first SP 2204(1) (SP1) is used for receiver (Rx) settling with the function of automatic gain control, a second SP 2204(2) (SP2) is used for the function of burst clock and data recovery (BCDR), and a third SP 2204(3) (SP3) is used for frame synchronization with the start-of-burst delimiter (SBD) function to indicate the start of the burst after synchronization. Upstream burst-frame architecture 2200 further includes a payload portion 2206 following SPs 2204, and an end-of-burst delimiter (EBD) portion 2208 following payload portion 2206. As illustrated in FIG. 22B, corresponding burst-mode signal recovery functions of conventional technique 2202 include an automatic gain control section 2210 and a burst-mode signal processor 2212. In operation, automatic gain control section 2210 is configured to first perform automatic gain control with SP1, for example, using a burst-mode transimpedance amplifier (BM-TIA, not separately shown in FIG. 22B).
After the BM-TIA achieves steady-state, the burst-mode signal processor in the receiver of the OLT begins processing the steady-state signal from automatic gain control section 2210 to acquire phase lock on the incoming data stream at a clock and data recovery portion 2214. In the conventional direct-detection PON, no channel equalization is generally required. Once processor 2212 of the receiver successfully locks the clock of the burst signal, the data will also be recovered with SP2. Once recovered, a frame synchronization unit of processor 2212 then processes payload portion 2206 after the SBD of SP3 indicates the start-of-burst. Finally, the end of the upstream burst is indicated upon detection of EBD portion 2208. A comparison of this conventional direct-detection frame architecture and processing functionality may be seen with respect to the exemplary coherent architecture and functionality described further below with respect to FIGS. 23A-B. For example, although some of the burst-mode detection principles of the conventional direct-detection PON may be applicable to a coherent PON, coherent upstream burst-mode detection is known to be significantly more challenging due to the increased complexity of coherent optical signals, which are modulated and multiplexed on phase, polarization, and amplitude. As described further below, the DSP of a coherent OLT receiver is required to process different clocks, different carrier frequency-offsets, different carrier phases, random SoPs, and different channel responses from different bursts.

FIG. 23A is a schematic diagram depicting an exemplary burst-frame architecture 2300 for a coherent passive optical network. FIG. 23B is a schematic diagram depicting an exemplary upstream recovery technique 2302 for coherent burst-frame architecture 2300, FIG. 23A. The exemplary embodiments depicted in FIGS. 23A-B respectively depict exemplary burst-frame structures and upstream burst-mode signal recovery functions for a coherent PON. As illustrated in FIG. 23A, different from the three-SP structure of upstream burst-frame architecture 2200, FIG. 22A, upstream burst-frame architecture 2300 includes four different SPs 2304 for coherent detection of upstream burst PDM signals, which generally require polarization separation and channel equalization for DSP, as described in greater detail with respect to the embodiments above. According to the exemplary frame structure of burst-frame architecture 2300, the overall preamble is advantageously designed for coherent burst synchronization and channel equalization. More specifically, a first SP 2304(1) (SP1) is used for receiver (Rx) settling with the function of automatic gain control (e.g., similar to first SP 2204(1), FIG. 22A), a second SP 2304(2) (SP2) is designed for digital clock recovery (Clock RCY), a third SP 2304(3) (SP3) may be optimized for channel synchronization (CH SYNC) with multiple functions, and a fourth SP 2304(4) (SP4) is used for channel adaptive equalization (CH EQ). Upstream burst-frame architecture 2300 further includes a payload portion 2306 following SPs 2304, and an EBD portion 2308 following payload portion 2306. As illustrated in FIG. 23B, corresponding burst-mode signal recovery functions of coherent recovery technique 2302 include an automatic gain control section 2310 and a burst-mode signal processor 2312, similar to conventional technique 2202, FIG. 22B. In operation, automatic gain control section 2310 is configured to first perform automatic gain control with first SP 2304(1).
At the time of this application, there is no commercially-available BM-TIA for coherent upstream burst-mode detection. Accordingly, in an embodiment, automatic gain control section 2310 may perform optical automatic gain control using a semiconductor optical amplifier (SOA) or an erbium doped fiber amplifier (EDFA). Nevertheless, the present inventors contemplate that, once a linear coherent BM-TIA is demonstrated and available, such a coherent BM-TIA may be integrated with automatic gain control section 2310 to perform optical automatic gain control without departing from the scope of the embodiments herein. In further operation of coherent recovery technique 2302, burst-mode signal processor 2312 is configured to perform burst-mode digital signal processing functions after automatic gain control section 2310, and based on the preamble design of second, third, and fourth SPs 2304(2-4). In the exemplary embodiment, all functions of burst-mode signal processor 2312 may be implemented digitally, acting as a DSP, which may follow an ADC (not separately shown in FIG. 23B). Burst-mode signal processor 2312 may further include one or more of an optional CD compensation unit 2314, a clock recovery unit 2316, a channel synchronization unit 2318, a channel equalization unit 2320, and a payload processing unit 2322. After the receiver achieves steady-state from automatic gain control section 2310, digital clock recovery may be implemented by clock recovery unit 2316, with second SP 2304(2), to acquire frequency and phase lock to the clock of an incoming burst structured according to burst-frame architecture 2300. After digital clock recovery, channel synchronization unit 2318 may perform channel synchronization, with third SP 2304(3), which may employ multiple additional sub-functions, including one or more of accurate frame synchronization, carrier frequency-offset estimation, and SoP estimation for polarization separation and recovery. Channel equalization unit 2320 may then apply, with fourth SP 2304(4), channel response estimation for adaptive channel equalization. Using the relevant respective information obtained from SPs 2304(2-4) of the preamble, the payload demodulation process implemented by payload processing unit 2322 may be greatly simplified, along with a significant reduction of the convergence time. Persons of ordinary skill in the art will appreciate that the particular order of the burst-mode DSP functions/functional units of burst-mode signal processor 2312 is depicted in FIG. 23B for illustration purposes, and is not intended to be limiting. The functional order may, for example, differ according to the particular algorithms implemented, as shown by the different implementation techniques described above with respect to FIGS. 15 and 17. Additionally, because many DSP algorithms that are based on training sequences typically require accurate starting positions, frame synchronization is a first sub-function implemented in the channel synchronization process performed by channel synchronization unit 2318, in the case where training sequences are employed. In some embodiments, two or more of the SPs may be combined into a single SP using the same sequence pattern. Nevertheless, all of the corresponding functions described herein may still be applied to incoming bursts. Additional robust and efficient preamble architectures, having data-assisted burst-mode DSPs in coherent upstream burst-mode detection after Rx-settling, are described further below.
FIG. 24 depicts an exemplary preamble architecture 2400 for a coherent burst-mode passive optical network. In the embodiment depicted in FIG. 24, preamble architecture 2400 represents an innovative high-efficiency upstream frame structure for preambles preceding payload sections 2402 of respective polarization signals 2404 of an upstream burst transmission polarization multiplexed signal in a 100G coherent PON. In an exemplary embodiment, preamble architecture 2400 includes a first preamble processing SP 2406 (SP-A), a second preamble processing SP 2408 (SP-B), and a third preamble processing SP 2410 (SP-C). In this example, first, second, and third preamble processing SPs 2406, 2408, 2410 are respectively analogous to SP 2304(2), SP 2304(3), SP 2304(4), FIG. 23A. That is, SP-A is analogous to SP2, SP-B is analogous to SP3, and SP-C is analogous to SP4. As illustrated in FIG. 24, second preamble processing SP 2408 is configured for each respective polarization signal 2404 to have a 2N conjugate symmetric symbol length over each of a first time slot 2412 and a second time slot 2414 within SP-B, for a total length of 4N symbols (described further below). As illustrated, each time slot 2412, 2414 includes 2N conjugate symmetric symbols in one polarization, and 2N zeros on the other polarization in that time slot, with the symbol/zero polarization relationship alternating in the other time slot. In this respect, second preamble processing SP 2408 may be seen to include a frame structure similar to data unit 1402C, FIG. 14C. It may be noted here that the exemplary preamble architecture 2400 depicted in FIG. 24 does not exclude the additional inclusion of a preceding control SP analogous to SP1 of FIG. 23A, namely, a preamble control SP used for Rx-settling and automatic gain control. However, as illustrated above with respect to FIGS. 22A and 23A, because the control SP1 is not utilized for DSP, such a control SP is not further illustrated in FIG. 24, to simplify the illustration. That is, the design of preamble processing SPs 2406, 2408, 2410 is drawn toward the corresponding burst-mode DSP functions of the receiver that utilize the relevant preamble processing SPs. Therefore, in the exemplary embodiment, in practical application of the techniques described herein, a preamble control SP1 is included in the preamble before preamble architecture 2400, and therefore the length (in time) of the entire preamble will be the sum of the respective lengths of SP1, SP-A, SP-B, and SP-C. As described further below with respect to FIG. 25, the overall burst-mode DSP flow at the corresponding receiver may be designed according to a feed-forward configuration to reduce the processing latency and shorten the burst preamble length of preamble architecture 2400.

FIG. 25 depicts an exemplary DSP 2500 for processing an upstream burst transmission implementing preamble architecture 2400, FIG. 24. In the example depicted in FIG. 25, DSP 2500 represents a data-aided burst-mode DSP for a 100G coherent PON. In an exemplary embodiment, DSP 2500 includes one or more of a frame detection and normalization unit 2502, a CD compensation unit 2504, a burst clock recovery unit 2506, a frame synchronization unit 2508, a burst SoP estimation and polarization demultiplexing unit 2510, a preamble-based FOE unit 2512, a channel estimation unit 2514, a payload signal processing unit 2516, and a phase recovery unit 2518.
As with processor 2312 of coherent recovery technique 2302, FIG. 23B, some of the respective functional units of DSP 2500 may be optional and/or disposed in a different functional order, without departing from the scope of the embodiments herein. In some embodiments, additional processing units or functionality may also be included beyond the DSP functions shown in FIG. 25. In the exemplary embodiment, DSP 2500 is disposed after Rx-settling has occurred (e.g., after an automatic gain control section of that particular receiver). In exemplary operation of DSP 2500, after normalization and non-data-aided CD compensation are performed by frame detection and normalization unit 2502 and CD compensation unit 2504, respectively, five particular data-aided DSP functions are performed based on the three preamble processing SPs 2406, 2408, 2410 (SP-A, SP-B, SP-C). In this example, first preamble processing SP 2406/SP-A is used by burst clock recovery unit 2506 for burst clock recovery based on DC-balanced QPSK symbols, the states of which are distributed nearly equally. In an exemplary embodiment, a fast square-timing-recovery algorithm may additionally be applied based on the received symbols (not separately shown) within SP-A. In this case, because the square-timing-recovery algorithm is not training based, there would be no need for accurate frame synchronization, which may eliminate the need for the separate frame synchronization unit 2508. In at least one embodiment, the particular pattern used for SP-A may also be used to achieve burst-mode automatic gain control. For example, SP-A may include a symbol portion corresponding to an additional preamble control SP1. In this case, the overall length of SP-A would be increased to include the additional SP1 symbol portion. In further operation of DSP 2500, second preamble processing SP 2408/SP-B is of particular importance, and may be specially designed to perform one or more of three key data-aided DSP functions: (1) frame synchronization (e.g., by frame synchronization unit 2508); (2) SoP estimation (e.g., by burst SoP estimation and polarization demultiplexing unit 2510); and (3) FOE (e.g., by preamble-based FOE unit 2512). Accordingly, by utilizing a single preamble SP to perform all three of these DSP functions, the overall preamble length is advantageously reduced by sharing the same preamble SP (i.e., SP-B). Of these three key functions, it may be desirable to implement accurate frame synchronization first, in the case where the other key functions may be based on a training sequence that requires perfect frame synchronization. Thus, where training sequences may be implemented, frame synchronization unit 2508 may be logically placed prior to burst SoP estimation and polarization demultiplexing unit 2510 and preamble-based FOE unit 2512. In this example, it is therefore assumed that the relevant frame synchronization algorithm is tolerant of carrier frequency offset. In an exemplary embodiment, the sub-architecture of second preamble processing SP 2408 is advantageously designed to include 4N symbols within SP-B, including 2N conjugate symmetric symbols and 2N zeros on each respective polarization, as described above. As illustrated in FIG. 24, the dual-polarization SP-B may be transmitted according to the pattern [SX, 0; 0, SY]. In this example, SX = [sx1, . . . , sxN, sxN*, . . . , sx1*], and SY = [sy1, . . . , syN, syN*, . . . , sy1*]. In this manner, essentially all of the 4N symbols in SP-B of the preamble may be staggered and transmitted with 2N symbols for each polarization.
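A minimal sketch of the SP-B construction follows, assuming random QPSK training symbols; the helper names are illustrative assumptions.

```python
import numpy as np

def conjugate_symmetric(s):
    """Extend an N-symbol sequence s to the 2N conjugate-symmetric form
    [s1, ..., sN, sN*, ..., s1*]."""
    return np.concatenate([s, np.conj(s[::-1])])

def build_sp_b(n, rng):
    """Build the dual-polarization SP-B pattern [SX, 0; 0, SY]: 4N symbols in
    total, with 2N non-zero symbols per polarization, staggered in time."""
    qpsk = lambda m: np.exp(1j * (np.pi / 4 + np.pi / 2 * rng.integers(0, 4, m)))
    sx, sy = conjugate_symmetric(qpsk(n)), conjugate_symmetric(qpsk(n))
    zeros = np.zeros(2 * n, dtype=complex)
    x_pol = np.concatenate([sx, zeros])  # first time slot 2412: data on X only
    y_pol = np.concatenate([zeros, sy])  # second time slot 2414: data on Y only
    return x_pol, y_pol
```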
Additionally, without inter-polarization crosstalk between the X- and Y-polarizations, accurate frame synchronization may, for example, be realized using a sliding window with normalized auto-correlation processing on each polarization according to:

C_{x,y}(m) = \left| \sum_{k=0}^{N-1} r_{x,y}(m+k)\, r_{x,y}(m+2N-k-1) \right| / P_N    (Eq. 14)

Here, Cx and Cy represent the respective normalized auto-correlation functions on each polarization, PN represents a normalization signal power factor, and rx and ry represent the received signals from the X- and Y-polarizations, respectively. Because of the conjugate symmetric symbol distribution across the two time slots of SP-B, it may therefore be demonstrated that the correlation result of Eq. 14 is tolerant of carrier frequency offset. Accordingly, denoting the transmitted signals by T(mts), the received signals r(mts) may be expressed according to:

r(m t_s) = T(m t_s) \exp\!\big(j(2\pi \Delta f\, m t_s + \varphi)\big)    (Eq. 15)

Here, φ represents the carrier phase, and Δf represents the frequency offset between the burst signal and the LO in the OLT. Assuming m0 to symbolize the first symbol of the designed 2N conjugate symmetric symbols, the following is true:

T_S(k+1) = T_S(2N-k)^{*} = s_k, \quad 0 \le k \le N    (Eq. 16)

Therefore, when synchronized, the normalized auto-correlation peak may be expressed according to:

\begin{aligned} C_x(m_0) &= \left| \sum_{k=0}^{N-1} r_x(m_0+k)\, r_x(m_0+2N-k-1) \right| / P_N \\ &= \left| \sum_{k=0}^{N-1} T_S(m_0+k)\, e^{j(2\pi \Delta f (m_0+k) t_s + \varphi)} \times T_S(m_0+k)^{*}\, e^{j(2\pi \Delta f (m_0+2N-k-1) t_s + \varphi)} \right| / P_N \\ &= \left| \sum_{k=0}^{N-1} \left| T_S(m_0+k) \right|^{2} e^{j(2\pi \Delta f (2m_0+2N-1) t_s + 2\varphi)} \right| / P_N \\ &= \sum_{k=0}^{N-1} \left| T_S(m_0+k) \right|^{2} / P_N \end{aligned}    (Eq. 17)

According to this advantageous processing configuration, both the frequency offset and the carrier phase may be seen to have no impact on these processing results, such that the normalized auto-correlation peak is tolerant of carrier frequency offset. Polarization, however, is known to rotate randomly after fiber transmission. Accordingly, to improve the tolerance to such polarization rotations, a combining scheme may be further implemented according to:

C(m) = W_x(m) C_x(m) + W_y(m) C_y(m)    (Eq. 18)

where C(m) represents the combined function for peak searching, and Wx and Wy are defined to represent the respective power ratio of each polarization. For example, Wx and Wy may be expressed according to:

W_x(m) = \frac{P_x(m)}{P_x(m) + P_y(m)}, \quad W_y(m) = \frac{P_y(m)}{P_x(m) + P_y(m)}    (Eq. 19)

In this manner, an exact location of the SP-B symbols may be found from the received signal, and the synchronization algorithm discussed above is shown to be robust to carrier frequency offset and polarization rotations. In an exemplary embodiment, the same SP-B portion of the preamble may also be used for SoP estimation. For example, assuming that the received SP-B symbols may be expressed according to [rx1, rx2; ry1, ry2], the SoP may be instantly estimated after frame synchronization from the received SP-B symbols. Thus, considering the single-polarization case described above, the inverse Jones matrix H may be estimated according to:

H = \begin{bmatrix} \alpha_2 e^{-j\gamma_2} & -(1-\alpha_1) \\ (1-\alpha_2) & \alpha_1 e^{j\gamma_1} \end{bmatrix}    (Eq. 20)

where α1 and γ1 may be calculated based on the received signals according to:

\alpha_1 = \frac{\left| r_{x1}/r_{y1} \right|^{2}}{1 + \left| r_{x1}/r_{y1} \right|^{2}}, \quad \gamma_1 = \arg(r_{x1}/r_{y1})    (Eq. 21)

In a similar manner, α2 and γ2 may be obtained using the second half of the symbols within SP-B (e.g., within second time slot2414,FIG.24). Polarization separation may thus be effectively realized based on the inverse Jones matrix H.
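The sliding-window search of Eqs. 14, 18, and 19 may be sketched as follows, under assumed conventions and with helper names that are illustrative rather than the patent's API. Each candidate offset m correlates symbol k with its conjugate symmetric partner at 2N−1−k on each polarization, and the two polarizations are then combined with power-ratio weights so the peak survives random polarization rotation.

```python
import numpy as np

def sliding_autocorrelation(r: np.ndarray, n: int) -> np.ndarray:
    """Eq. 14: normalized auto-correlation C(m) for one received polarization r."""
    c = np.empty(len(r) - 2 * n + 1)
    for m in range(len(c)):
        window = r[m:m + 2 * n]
        corr = np.sum(window[:n] * window[2 * n - 1:n - 1:-1])       # pair k with 2N-1-k
        c[m] = np.abs(corr) / (np.sum(np.abs(window) ** 2) + 1e-12)  # P_N normalization
    return c

def combined_peak_search(rx: np.ndarray, ry: np.ndarray, n: int) -> int:
    """Eqs. 18-19: power-weighted combining of C_x and C_y, then peak search."""
    cx = sliding_autocorrelation(rx, n)
    cy = sliding_autocorrelation(ry, n)
    px = np.array([np.sum(np.abs(rx[m:m + 2 * n]) ** 2) for m in range(len(cx))])
    py = np.array([np.sum(np.abs(ry[m:m + 2 * n]) ** 2) for m in range(len(cy))])
    total = px + py + 1e-12
    c = (px / total) * cx + (py / total) * cy            # Eq. 18 with Eq. 19 weights
    return int(np.argmax(c))                             # estimated m0
```

Note that the correlation product is not conjugated, which is what cancels the common frequency-offset and phase terms per Eq. 17; here the window power is used as the normalization factor, an assumption about P_N.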
After separating the two polarizations, frequency offset may be estimated based on the same training symbols contained in SP-B. In an exemplary embodiment, to achieve fast and accurate FOE, a maximum likelihood (ML) criterion FOE algorithm may be modified, by considering the different polarizations, and implemented to estimate the carrier frequency offset according to:

\Delta f = \operatorname{avg}(\Delta f_x, \Delta f_y)    (Eq. 22)

where Δfx and Δfy represent the estimated frequency offsets in the X- and Y-polarizations based on the 2N non-zero symbols described above. In the exemplary embodiment, third preamble processing SP2410/SP-C may be advantageously designed to include training QPSK symbols for channel estimation (e.g., by channel estimation unit2514), which may be based on a CMA algorithm for DSP. It is noted that this implementation of CMA is different from the conventional CMA implementation in a continuous-mode DSP, where the CMA is blind, without any information regarding the SoP; here, the inverse Jones matrix H is applied to reduce the convergence time of the CMA. In contrast with the conventional approach, the CMA implementation for SP-C enables all relevant information to be obtained from the preamble, and then applied to the following payload processing performed by payload signal processing unit2516, thereby greatly simplifying the payload demodulation process while also significantly reducing the convergence time in comparison with conventional blind CMA techniques that are devoid of any SoP information, as confirmed by the experimental demonstration results described further below. In at least one embodiment, a feed-forward phase recovery algorithm is implemented by phase recovery unit2518as a final step in the signal recovery process before BER measurements. FIG.26is a schematic diagram depicting an exemplary test system2600for upstream burst detection. In the embodiment depicted inFIG.26, system2600represents an experimental setup to demonstrate coherent upstream burst detection in a 100-Gb/s/λ TDM coherent PON, which exhibited a detected power waveform2602of a burst frame transmitted therein including a preamble design according to preamble architecture2400,FIG.24. For the experimental demonstration setup depicted inFIG.26, system2600included an ONU-side2604transmitting upstream to an OLT-side2606over a 50 km single mode fiber (SMF)2608. At ONU-side2604, two synchronized ONUs2610were separately run to generate respective 25-GBaud PDM-QPSK burst frames utilizing a preamble frame structure according to preamble architecture2400,FIG.24. For this setup, the burst frames were generated by respective 80-GSa/s arbitrary waveform generators (AWGs)2612, and then fed into respective dual-polarization I/Q modulators2614. For this setup, each dual-polarization I/Q modulator2614included four drivers for optical signal modulation. Each ONU2610further included a respective tunable DFB laser2616, each tuned to a 1550-nm wavelength with a linewidth of approximately 1 MHz as the laser source of that ONU2610. For demonstration purposes, after modulation by dual-polarization I/Q modulator2614(1), the burst signals generated from first ONU2610(1) were combined with a dummy signal from second ONU2610(2) using a 3-dB optical coupler (OC)2618, and the respective burst frames from the two ONUs2610were staggered to avoid collision. Using an automatic bias-control and synchronization2620between the two AWGs2612, the burst signal from one ONU2610was coupled only with the null signal from the other ONU2610.
The combined burst signals from ONU-side2604were then transmitted over 50-km SMF2608, and the received optical power at OLT-side2606was controlled by a variable optical attenuator (VOA)2622for BER testing. At OLT-side2606, a burst-mode EDFA2624was used for signal pre-amplification. The pre-amplified signal was then mixed with LO2626in an integrated coherent receiver (ICR)2628for coherent detection. In this setup, LO2626included a tunable external-cavity laser (ECL) at 1550 nm with a linewidth of <100 kHz. After coherent detection by ICR2628, the received signals were sampled by an 80-GSa/s digital sampling oscilloscope (DSO)2630and then processed using an offline burst-mode DSP2632conforming to the exemplary configuration of DSP2500,FIG.25. For this setup, DSO2630was free-running, and there was no synchronization deployed between DSO2630and AWGs2612. The symbol lengths of the respective preamble processing SPs, used in the burst frames generated by ONUs2610, are listed below in Table 1, which features a summary of the respective preamble SP types, lengths, and functions.

TABLE 1
Preamble    Length (Symbols)    Functions
SP-A        1024                Burst Clock Recovery
SP-B        512                 Frame Sync, Pol DeMux, FOE
SP-C        256                 Channel Estimation

As may be seen from Table 1, SP-A, SP-B, and SP-C have symbol lengths of 1024, 512, and 256 symbols, respectively. Accordingly, each burst frame may be calculated to contain a total preamble length of 71.68 ns (i.e., 1792 symbols at 25 GBaud), a payload length of 3.072 μs, and an end-of-burst (EOB) length of 30.72 ns. For this setup, a guard interval (GI), having a length of 102.4 ns, was included to separate the bursts. FIGS.27A-Dare graphical illustrations depicting respective experimental result plots2700,2702,2704,2706from test system2600,FIG.26. Because SP-B is configured to have the most notable processing impact of the present preamble embodiments, plots2700,2702, and2704were generated to demonstrate the testing performance of SP-B with respect to frame synchronization, SoP estimation, and FOE, respectively. More particularly, plot2700ofFIG.27Adepicts the experimental results of the normalized auto-correlation output for peak search. Plot2700therefore demonstrates the frame synchronization result based on the combined auto-correlation result of C(m). It may be seen from plot2700that two peaks2708appear for the respective X- and Y-polarizations, and peak very sharply at the synchronization point locations. These sync-peak locations therefore represent the start of the non-zero SP-B symbols in the received signals. To quantify this performance of the frame synchronization, a peak-to-maximum-noise ratio (PMNR) metric2710is defined here to indicate the quality of the sync-peaks in comparison with noise peaks. Plot2702ofFIG.27Bdepicts experimental results of PMNR against the length of SP-B non-zero symbols. As may be seen from plot2702, 256 non-zero symbols on each polarization in SP-B (that is, a total SP-B length of 512 symbols, with 256 zeros on each polarization) provided over 10-dB PMNR, showing very high-quality peaks. Plot2704ofFIG.27Cdepicts experimental results of PMNR against frequency offset. As described above, the present frame synchronization techniques are tolerant of carrier frequency offsets, which is demonstrably verified by the experimental results shown in plot2704. Plot2704further confirms that a PMNR of over 10 dB may be achieved even over a 25 GHz offset range (−12.5 to 12.5 GHz, in this example).
Plot2706ofFIG.27D, on the other hand, depicts the experimental results of PMNR against different polarization rotations, namely, for the X-polarization, the Y-polarization, and a combination thereof. Plot2706thus demonstrates the performance of frame synchronization under different polarization rotation states. According to plot2706, it may further be seen that the PMNR from each individual polarization is polarization-dependent, and changes according to the polarization rotation, as shown by individual polarization PMNR subplots2712. In contrast, as shown by combined PMNR subplot2714, the combined PMNR is substantially polarization-independent, thereby verifying the mathematical principles described above with respect to Eq. 18 that indicate the advantageous tolerance to polarization rotation. FIG.28Ais a graphical illustration depicting a comparative plot2800of estimated frequency offset against target frequency offset. More particularly, comparative plot2800illustrates the FOE performance of the present preamble-based DSP FOE (e.g., of preamble-based FOE unit2512using second preamble processing SP2408/SP-B), over the 25 GHz estimation range, shown in a first subplot2802of plot2800. For comparison, a second subplot2804of plot2800is superimposed to show the performance of a feed-forward blind FOE DSP technique, which, in this example, was based on a long data sequence. A comparison of first subplot2802with second subplot2804demonstrates that the present preamble-based FOE technique, which utilizes the innovative SP-B architecture described above, outperforms the blind long-data-based technique. Additionally, it is notable that the present preamble-based FOE technique realized a larger estimation range (e.g., from −12.5 to 12.5 GHz), and that, whereas the blind FOE technique used 2500 symbols, the present preamble-based FOE technique used only 512 symbols within SP-B (i.e., 256 non-zero symbols on each polarization). FIG.28Bis a graphical illustration depicting a comparative BER performance result plot2806for subplots2802,2804,FIG.28A. More particularly, comparative BER performance result plot2806includes a first subplot2808demonstrating the BER performance of the present preamble-based FOE over the 25 GHz range, and a second subplot2810demonstrating the BER performance of the blind long-data-based FOE over the same range. From first subplot2808, it may be seen that the BER penalty is almost negligible for the present preamble-based FOE when the frequency offset is less than ±10 GHz. However, as illustrated by second subplot2810, due to the phase ambiguity, the feed-forward blind FOE method fails when the frequency offset is greater than ±2.5 GHz. FIG.29is a graphical illustration depicting a residual frequency offset plot2900. More particularly, plot2900illustrates the performance of residual FOE against the length of non-zero training symbols in SP-B. The results illustrated in plot2900thus confirm that 256 non-zero symbols for each polarization in SP-B are sufficient to accurately achieve a residual offset of <2 MHz. In an exemplary embodiment, this residual offset value may be subsequently processed in the carrier phase recovery functional unit of the DSP (e.g., phase recovery unit2518,FIG.25). FIG.30Ais a graphical illustration depicting a plot3000of signal mean square error (MSE) before channel equalization.FIG.30Bis a graphical illustration depicting a comparative plot3002of signal MSE after channel equalization.
More particularly, plot3000illustrates the results of SoP estimation by plotting the MSE against the length of non-zero symbols in SP-B, and comparative plot3002illustrates the signal MSE against the training length of symbols in SP-C. These SoP estimation results of comparative plot3002further demonstrate the impact on the required symbol length in SP-C used for channel estimation (e.g., by channel estimation unit2514,FIG.25). Plot3000further shows the results from testing the required SP-B symbols for SoP estimation before the channel equalization. It may be seen from plot3000that 256 non-zero symbols on each polarization in SP-B (512 symbols in total) are sufficient to minimize the impact from MSE. Comparative plot3002further shows the results from testing the impact on adaptive channel equalization (i) without using SP-B for SoP estimation, as illustrated in a first subplot3004, and (ii) with SP-B, as illustrated in a second subplot3006. First subplot3004thus illustrates how, without SP-B as described herein, the CMA process for channel equalization requires a considerably long convergence time due to the random polarization rotation. In contrast, as illustrated in second subplot3006, use of SP-B for SoP estimation drastically reduces the minimum convergence time (i.e., the required length of SP-C training symbols) from 2560 symbols without SP-B to only 256 symbols with SP-B. Accordingly, by greatly reducing the channel response estimation time in this manner, the overall preamble length is also similarly reduced. FIG.31is a graphical illustration depicting a comparative BER performance result plot3100. More particularly, plot3100depicts the BER performance against received optical power for an ECL-based continuous 100G PDM-QPSK signal, a back-to-back (B2B) DFB-based burst signal, and the DFB-based burst signal transmitted over 50 km SMF2608,FIG.26. Plot3100demonstrates that, after 50-km fiber transmission, the required optical power value3102, at an average BER of 1×10−3, is −39 dBm. Plot3100therefore further includes, for illustration purposes, a first constellation3104of the 50 km DFB-based burst signal and a second constellation3106of the B2B DFB-based burst signal, with both of first and second constellations3104,3106taken at −39 dBm. For further comparison, the plotted results of the ECL-based continuous 100G PDM-QPSK signals demonstrate consistent performance over the different signal types. Furthermore, due to the high receiver sensitivity offered by the coherent detection technology employed in the test setup ofFIG.26, system2600was able to implement a pre-FEC BER threshold of 1×10−3, as opposed to the 1×10−2pre-FEC threshold of non-coherent PON systems. That is, lower coding and decoding complexities are expected from the simpler FEC coding schemes that such a threshold enables. Plot3100further shows that, when compared with the ECL-based continuous signals, there is less than a 0.3-dB penalty for the DFB laser-based burst signals after burst-mode coherent detection. Plot3100still further illustrates the results from testing a dynamic range3108of the coherent receiver. That is, without having changed the receiver setup in OLT-side2606(i.e., the same BM-EDFA2624and ICR2628were kept), a dynamic range3108of approximately 20 dB is exhibited for the received power of the 100G coherent PON upstream burst signals. For system2600, dynamic range3108depicts only the test results using BM-EDFA2624.
Nevertheless, the present inventors contemplate that an effective dynamic range will also be achieved using an SOA instead of an EDFA. FIG.32is a graphical illustration depicting a BER performance plot3200as a function of residual CD. More particularly, plot3200demonstrates the test results of the overall BER performance under different residual CD values. Plot3200thus confirms that there is no overt BER penalty experienced when the residual dispersion is within 87.5 ps/nm (i.e., −43.75 to +43.75 ps/nm, in this test setup), and that only a small BER penalty is experienced for residual dispersion within 437.5 ps/nm (i.e., −218.75 to +218.75 ps/nm, in this test setup). For this test, the received optical power was maintained at −38.5 dBm during the entirety of the test from which the results are illustrated in plot3200. FIG.33is a graphical illustration depicting a long-term bit-error-ratio performance result plot3300. For plot3300, the BER was tested continuously for a duration of approximately 6 hours. Plot3300thus effectively demonstrates testing of BER performance of the present systems and methods for long-term operation. For this testing implementation, a margin of 1 dB was reserved, and thus the received optical power was maintained at −38 dBm. As shown in plot3300, measured BER values3302, over the 6-hour test duration, all stayed below an FEC threshold value3304of BER at 1×10−3, thereby further confirming the long-term stability of the present preamble architectures and corresponding burst-mode DSP for upstream burst-mode coherent detection. For the testing considerations that produced the results shown in plot3300, the output power from the ONU was −2 dBm. Therefore, plot3300still further demonstrates the achievement of a 36-dB power budget (including a 1-dB margin). According to the embodiments described above, an innovative preamble architectural design is provided, as well as a corresponding burst-mode DSP solution, enabling significantly improved coherent upstream burst-mode detection in a 100G TDM coherent-PON. The above embodiments further demonstrate that these advantageous architectural and DSP function systems and methods are experimentally verified to be both reliable and efficient over a variety of different relevant test scenarios and test conditions. The unique preamble architectural configuration described herein provides still further advantages over conventional techniques by enabling individual portions of the new preamble structure to be shared by multiple DSP functions, or functional units, thereby greatly reducing the overall preamble length. The experimental results described above further confirmed robust performance of the present embodiments over a large frequency offset range, residual fiber dispersion, and long running times. As a proof of concept, a relevant testing system setup achieved effective coherent upstream burst-mode detection of a 100 Gb/s PDM-QPSK signal, with a 36-dB power budget, after 50-km SMF transmission, using the present preamble architectures having a length of 71.68 ns at the transmission side, with corresponding burst-mode DSP at the receiver side. The present systems and methods still further demonstrated approximately 20 dB of received power dynamic range for burst signal detection in a 100-Gb/s/λ TDM coherent-PON. Although specific features of various embodiments of the disclosure may be shown in some drawings and not in others, this convention is for convenience purposes and ease of description only.
In accordance with the principles of the disclosure, a particular feature shown in a drawing may be referenced and/or claimed in combination with features of the other drawings. Some embodiments involve the use of one or more electronic or computing devices. Such devices typically include a processor or controller, such as a general purpose central processing unit (CPU), a graphics processing unit (GPU), a microcontroller, a reduced instruction set computer (RISC) processor, an application specific integrated circuit (ASIC), a programmable logic circuit (PLC), a field programmable gate array (FPGA), a digital signal processing (DSP) device, and/or any other circuit or processor capable of executing the functions described herein. The processes described herein may be encoded as executable instructions embodied in a computer readable medium, including, without limitation, a storage device and/or a memory device. Such instructions, when executed by a processor, cause the processor to perform at least a portion of the methods described herein. The above examples are exemplary only, and thus are not intended to limit in any way the definition and/or meaning of the term “processor.” This written description uses examples to disclose the embodiments, including the best mode, and also to enable any person skilled in the art to practice the embodiments, including making and using any devices or systems and performing any incorporated methods. The patentable scope of the disclosure is defined by the claims, and may include other examples that occur to those skilled in the art. Such other examples are intended to be within the scope of the claims if they have structural elements that do not differ from the literal language of the claims, or if they include equivalent structural elements with insubstantial differences from the literal language of the claims.
139,748
11863298
DESCRIPTION OF THE EMBODIMENTS The following description of embodiments of the invention is not intended to limit the invention to these embodiments but rather to enable a person skilled in the art to make and use this invention. Variations, configurations, implementations, example implementations, and examples described herein are optional and are not exclusive to the variations, configurations, implementations, example implementations, and examples they describe. The invention described herein can include any and all permutations of these variations, configurations, implementations, example implementations, and examples. 1. Method As shown inFIG.1, a method S100includes accessing a network graph in Block S110including: a set of nodes, each node in the set of nodes representing a transceiver in a network of transceivers; and a set of edges, each edge in the set of edges connecting two nodes in the set of nodes and representing a communication channel between a pair of transceivers in the network of transceivers represented by the pair of nodes. The method S100also includes: accessing a network state including a set of edge values for the set of edges in Block S120; and identifying a set of triangle graphs in the network graph in Block S130. The method S100additionally includes, for each triangle graph in the network graph: calculating a component diagnostic score based on a subset of edge values in the set of edge values, the subset of edge values associated with edges in the triangle graph in Block S140; and, for each node in the triangle graph, updating a cumulative diagnostic score for the node based on the component diagnostic score in Block S150. The method S100further includes, in response to detecting a first cumulative diagnostic score for a first node exceeding a threshold cumulative diagnostic score, triggering a corrective action at a first transceiver represented by the first node in Block S160. As shown inFIG.2, one variation of the method S100includes accessing a network graph in Block S110including: a set of nodes, each node in the set of nodes representing a transceiver in a network of transceivers; and a set of edges, each edge in the set of edges connecting two nodes in the set of nodes and representing a communication channel between a pair of transceivers in the network of transceivers represented by the pair of nodes. The method S100also includes: accessing a network state including a set of edge values for the set of edges in Block S120; and identifying a set of triangle graphs in the network graph in Block S130. The method S100additionally includes, for each triangle graph in the network graph: calculating a component diagnostic score based on a subset of edge values in the set of edge values, the subset of edge values associated with edges in the triangle graph in Block S140; and, for each edge in the triangle graph, updating a cumulative diagnostic score for the edge based on the component diagnostic score in Block S152. The method S100further includes, in response to detecting a first cumulative diagnostic score for a first edge exceeding a first threshold cumulative diagnostic score, triggering a first corrective action at a first communication channel represented by the first edge in Block S162.
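By way of illustration, the following Python sketch shows one possible (assumed) data model for Blocks S110 and S130: a network graph of transceiver nodes and pairwise-channel edges, with enumeration of every triangle graph (i.e., every fully connected group of three nodes). All names are illustrative placeholders rather than the patent's API.

```python
from itertools import combinations

class NetworkGraph:
    """Illustrative network graph: nodes are transceivers, edges are channels."""

    def __init__(self):
        self.nodes = set()
        self.edges = {}                      # frozenset({a, b}) -> edge values dict

    def add_edge(self, a, b, **edge_values):
        """Register a communication channel and its pairwise edge values."""
        self.nodes.update((a, b))
        self.edges[frozenset((a, b))] = edge_values

    def triangles(self):
        """Yield every fully connected group of three nodes (Block S130)."""
        for a, b, c in combinations(sorted(self.nodes), 3):
            if all(frozenset(pair) in self.edges for pair in ((a, b), (b, c), (a, c))):
                yield a, b, c

graph = NetworkGraph()
graph.add_edge("tx1", "tx2", frequency_offset=1.2e-9, time_bias=0.4e-9, distance=12.1)
graph.add_edge("tx2", "tx3", frequency_offset=-0.7e-9, time_bias=0.1e-9, distance=8.3)
graph.add_edge("tx1", "tx3", frequency_offset=0.5e-9, time_bias=0.5e-9, distance=15.9)
print(list(graph.triangles()))               # [('tx1', 'tx2', 'tx3')]
```

The undirected frozenset keys reflect that each edge represents a single pairwise channel; a directed representation would work equally well provided the orientation of the pairwise values is tracked consistently.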
As shown inFIG.3, one variation of the method S100includes accessing a network graph in Block S110including: a set of nodes, each node in the set of nodes representing a transceiver in a network of transceivers; and a set of edges, each edge in the set of edges connecting a pair of nodes in the set of nodes and representing a communication channel between a pair of transceivers in the network of transceivers represented by the pair of nodes. This variation of the method S100also includes: identifying a target subgraph of the network graph associated with a target node in the set of nodes, the target node representing a target transceiver in Block S132; accessing a target network state of the target subgraph comprising a set of edge values for each edge in the target subgraph, the set of edge values comprising a pairwise time bias associated with each edge in the target subgraph and a pairwise frequency offset associated with each edge in the target subgraph in Block S122; calculating a first probability of failure of the target transceiver within a threshold period of time based on the target network state of the target subgraph and a failure prediction model in Block S170; and, in response to detecting the first probability of failure of the target transceiver exceeding a threshold likelihood, triggering a corrective action at the target transceiver in Block S164. One variation of the method S100includes accessing a network graph including a set of nodes and a set of edges. The set of nodes includes: a set of transceiver nodes representing a set of transceivers operating in a mesh network of transceivers; and a set of transmitter nodes representing a set of transmitters communicating with the mesh network of transceivers. Each edge in the set of edges: connects a pair of nodes in the set of nodes; and represents a communication channel between a pair of transceivers in the network of transceivers represented by the pair of nodes or a communication channel between a transmitter in the set of transmitters and a transceiver in the set of transceivers. This variation of the method S100also includes: identifying a first subgraph of the network graph associated with a first node in the set of nodes, the first node representing a first transceiver; accessing a first network state of the first subgraph comprising a set of edge values for each edge in the first subgraph; calculating a first probability of failure of the first transceiver within a threshold period of time based on the first network state of the first subgraph and a failure prediction model; and, in response to detecting the first probability of failure of the first transceiver exceeding a threshold likelihood, triggering a corrective action at the first transceiver. One variation of the method S100includes accessing a network graph including a set of nodes and a set of edges. The set of nodes includes: a set of transceiver nodes representing a set of transceivers operating in a mesh network of transceivers; and a set of transmitter nodes representing a set of transmitters communicating with the mesh network of transceivers. Each edge in the set of edges: connects a pair of nodes in the set of nodes; and represents a communication channel between a pair of transceivers in the network of transceivers represented by the pair of nodes or a communication channel between a transmitter in the set of transmitters and a transceiver in the set of transceivers.
This variation also includes: identifying a first triangle graph in the network graph comprising a first node in the set of nodes, the first node representing a first transceiver in the set of transceivers; accessing a first network state of the first triangle graph comprising a set of edge values for each edge in the first triangle graph; calculating a component diagnostic score based on the set of edge values; updating a first cumulative diagnostic score for the first node based on the component diagnostic score; and, in response to detecting the first cumulative diagnostic score for the first node exceeding a threshold cumulative diagnostic score, triggering a corrective action at the first transceiver represented by the first node. 2. Applications Generally, the method S100can be executed by a computer system—such as an individual computer, a computer network, or a set of servers connected over a network (hereinafter "the system")—to monitor clock synchronicity (e.g., frequency offset and time bias) and/or propagation distance between transceivers in a transceiver network and to selectively trigger individual transceivers to resynchronize and/or recalibrate with other transceivers in the network based on identified inconsistencies in reported values of pairwise frequency offsets, pairwise time biases, and/or pairwise distances between the transceivers in the transceiver network. In particular, the computer system can: receive pairwise frequency offset, time bias, and/or distance data from a mesh network (i.e., a network) of time-synchronized and self-localizing transceivers; identify inconsistencies in these data; and automatically trigger particular transceivers in the network to recalibrate or resynchronize with other transceivers in the network in order to reduce network-wide inaccuracy in the reference time of the network and/or the calculated relative locations of transceivers in the network. For example, the mesh network of transceivers can calculate—on a pairwise basis—clock frequency offsets between pairs of transceivers, time biases between pairs of transceivers, and distances between pairs of transceivers in the network. The mesh network can then maintain a reference clock for the set of transceivers (to which the network may be synchronized, e.g., to within one nanosecond) and localize these transceivers relative to each other in two- or three-dimensional space (e.g., to within 30 centimeters) based on these pairwise frequency offset, time bias, and distance values. However, small errors in the accuracy of these pairwise frequency offset, time bias, and distance values originating at a single transceiver—such as due to time and/or frequency domain drift within a transceiver, environmental changes around a transceiver, or degradation of a transceiver's hardware over time—may introduce network-wide errors in the value of the reference clock and the relative locations of the transceivers calculated by the network based on these values. Additionally, the time-synchronized and self-localized network of transceivers can also execute other functions, such as localization of third-party transmitters or execution of high-data-rate TDMA protocols. For example, the network can perform precise time-of-arrival or time-difference-of-arrival localization based on signals received from a third-party transmitter.
In another example, the network can leverage the high degree of synchronicity between the network of transceivers to execute high data rate TDMA protocols by reducing buffer durations between slots in these protocols. However, the errors described above with respect to the pairwise frequency offset, time bias, and distance values may also negatively impact the accuracy of these applications of the mesh network. Conversely, by automatically correcting for these inaccuracies according to the method S100the system can better compensate for inaccuracies on the transmitter side and improve localization and transmission of data via TDMA (e.g., by measuring the clock drift of transmitters relative to the mesh network of transceivers). Therefore, the system can execute Blocks of the method S100to identify, diagnose, and correct these errors in pairwise frequency-offset, time bias, and distance values between transceivers in the network by: scoring each transceiver based on a set of diagnostic tests; identifying particular transceivers based on these scores; and triggering the particular transceivers to recalibrate or resynchronize with other transceivers in the network and/or scheduling the particular transceivers for repair or replacement. Alternatively, the system can execute Blocks of the method S100to: score each pairwise connection (i.e., a pairwise communication channel) between transceivers; identify particular problematic pairwise communication channels based on these scores; and identify an obstruction or anomaly within the communication channel that may be preventing accurate synchronization, syntonization, and/or distance calculation between transceivers associated with the identified communication channel. More specifically, the system can access a network graph that includes: nodes, representing each transceiver; and edges, representing the pairwise communication channel between each pair of transceivers in the network. Furthermore, the network graph stores the position of each transceiver and the time bias of each transceiver relative to each other transceiver in the mesh network. By representing the mesh network via primarily relative measurements, the system obviates the need for a physical absolute time, frequency, or positional reference and enables construction of an abstract dynamic reference clock and set of relative locations that is more accurate and up-to-date compared to most readily available absolute timing and position references such as those available from mapping/floor plans or from the clock hardware on any individual transceiver. However, the system can also calculate time biases and/or locations relative to absolute reference clocks and/or absolute positional references. The system can access a network graph including a set of edge values associated with the edges in the network graph. These edge values characterize current and/or historical states of the pairwise relationship between the two transceivers represented as nodes along the edge in the network graph. For example, edge values associated with edges in the network graph can include: the current pairwise time bias between the pair of transceivers represented by the nodes associated with the edge, the current pairwise frequency offset, the current pairwise distance, values representing the uncertainties associated with each of the aforementioned values, and/or time series of each of these aforementioned edge values characterizing the progression of the aforementioned values over time. 
Likewise, the system can access a network graph including a set of node values associated with nodes in the network graph. These node values characterize current and/or historical states of the particular transceiver represented by each node in the network graph. For example, node values associated with nodes in the network graph can include: the current relative position of the transceiver represented by the node, the current relative time bias of the transceiver represented by the node, a current temperature value at the transceiver represented by the node, a current acceleration value at the transceiver represented by the node, a current barometric pressure value at the transceiver represented by the node, a global navigation satellite system (hereinafter “GNSS”) location of the transceiver represented by the node, a GNSS time synchronization offset of the transceiver represented by the node, and/or time series of each of these aforementioned node values characterizing the progression of these aforementioned node values over time. The system can then execute self-consistency tests, based on the network graph, for each fully connected group of three nodes (i.e., for each triangle graph) present in the current network graph; and score each node based on these self-consistency tests. The system can calculate higher scores to indicate a greater inaccuracy attributable to a node or an edge. Alternatively, the system can calculate lower scores to indicate a greater inaccuracy attributable to a node or an edge. However, for ease of explanation, the method S100is described with respect to the implementation where high scores indicate greater inaccuracy. In response to detecting that a score for an individual node is greater than a threshold score, the system can trigger the transceiver represented by the node to resynchronize and/or recalibrate with other transceivers in the network. In response to calculating consistently high scores for edges associated with a particular node over multiple iterations of the method S100or over a period of time, the system can also trigger replacement of the transceiver represented by the particular node. Additionally or alternatively, the system can: score individual edges; identify particular edges with high scores; and prompt users/maintenance personnel of the system to investigate anomalies affecting the communication channel between transceivers represented by the nodes along the particular edge. Thus, the system can automatically identify, diagnose, and correct errors at transceivers in a decentralized mesh network of transceivers, thereby maintaining a nanosecond-level synchronization and sub-meter localization accuracy of the transceivers in the network. While the network is accurately synchronized and frequency-calibrated, the frequency offset between a first transceiver and a second transceiver plus the frequency offset between the second transceiver and a third transceiver is equal to the frequency offset between the first transceiver and the third transceiver. Therefore, the system can leverage this transitivity of frequency offsets between transceivers to execute a frequency offset self-consistency test for each group of three fully connected nodes in the current network graph to evaluate the extent to which this transitivity holds true based on the reported pairwise frequency offset values from the network. 
The system can, for each group of three fully connected nodes in the network graph: subtract a first frequency offset represented by a first edge from the sum of a second frequency offset represented by a second edge, and a third frequency offset represented by a third edge; and calculate a frequency offset self-consistency score based on the value of this calculation. Additionally, while the network is accurately synchronized and frequency-calibrated, the time bias between a first transceiver and a second transceiver plus the time bias between the second transceiver and a third transceiver is equal to the time bias between the first transceiver and the third transceiver. Therefore, the system can leverage this transitivity of time biases between transceivers to execute a time bias self-consistency test for each group of three fully connected nodes in the current network graph to evaluate the extent to which this transitivity holds true based on the reported pairwise time bias values from the network. The system can, for any group of three nodes in the network graph: subtract a first time bias represented by a first edge from the sum of a second time bias represented by a second edge, and a third time bias represented by a third edge; and calculate a time bias self-consistency score based on the value of this calculation. Furthermore, the sum of two sides of a triangle formed between three transceivers is always greater than the third side of the triangle based on the triangle inequality theorem. Therefore, the system can execute a distance self-consistency test for each group of three fully connected nodes in the current network graph by evaluating whether this triangle inequality holds true based on the reported pairwise distance values from the network. In response to determining that a triangle cannot be formed based on the pairwise distances represented by edges between a group of three nodes in the network graph, the system can calculate a triangle inequality score based on this result. Upon calculating the frequency offset self-consistency score, the time bias self-consistency score, and/or the triangle inequality score for a group of three nodes, the system can calculate a component diagnostic score for each node in the group of three nodes based on these scores. Upon calculating a new component diagnostic score for each node in the group of three nodes, the system can combine this component diagnostic score with a cumulative diagnostic score for the node. The system can then repeat this testing and scoring process for each group of three fully connected nodes defined by the current network graph, thereby updating the cumulative diagnostic score of a node each time this node is included in the group of three nodes under test. Thus, even in a network with sparse connectivity, the system can use these algorithms to test local subsets of connected transceivers or estimate the network connectivity graph, all of which can then enhance the accuracy of the relative state even for sets of transceivers without a direct physical layer connection. Once the system has tested and scored each triangle graph identified within the network graph, the system can identify nodes with cumulative diagnostic scores greater than a threshold and trigger the transceivers represented by these nodes to recalibrate and/or resynchronize. 
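A minimal sketch of these three self-consistency tests and the cumulative scoring of Blocks S140 through S160 follows, reusing the NetworkGraph sketch above. The unit-weighted combination of the frequency, time bias, and distance residuals is an assumption; a practical system may normalize or weight these components differently (e.g., by their reported uncertainties). The sketch also assumes edge values are stored with a consistent orientation (e.g., measured from the lower-indexed node toward the higher-indexed node).

```python
def component_diagnostic_score(graph, a, b, c):
    """Block S140: combine the three self-consistency residuals for one triangle."""
    ab = graph.edges[frozenset((a, b))]
    bc = graph.edges[frozenset((b, c))]
    ac = graph.edges[frozenset((a, c))]
    # Transitivity residuals: offset(a,b) + offset(b,c) - offset(a,c) should be ~0
    # in a well-synchronized, well-calibrated network.
    freq_residual = abs(ab["frequency_offset"] + bc["frequency_offset"]
                        - ac["frequency_offset"])
    bias_residual = abs(ab["time_bias"] + bc["time_bias"] - ac["time_bias"])
    # Triangle inequality: no side may exceed the sum of the other two sides.
    d1, d2, d3 = ab["distance"], bc["distance"], ac["distance"]
    inequality_violation = max(0.0, d1 - (d2 + d3), d2 - (d1 + d3), d3 - (d1 + d2))
    return freq_residual + bias_residual + inequality_violation

def score_and_trigger(graph, threshold):
    """Blocks S150-S160: accumulate per-node scores, then flag outliers."""
    cumulative = {node: 0.0 for node in graph.nodes}
    for a, b, c in graph.triangles():
        score = component_diagnostic_score(graph, a, b, c)
        for node in (a, b, c):
            cumulative[node] += score
    for node, score in cumulative.items():
        if score > threshold:
            print(f"trigger resynchronization/recalibration at {node}")
    return cumulative
```

Because a faulty transceiver participates in every triangle incident to it, its residuals accumulate across many triangles while its healthy neighbors accumulate those residuals only in the triangles they share with it, which is what lets the per-node cumulative score localize the fault.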
Alternatively, the system can rank the nodes based on their cumulative diagnostic scores and trigger the transceivers represented by a predetermined percentile of these nodes to recalibrate and/or resynchronize. Conversely, the system can identify nodes and edges whose cumulative diagnostic scores are lower than the threshold diagnostic score and designate a dynamic list of verified nodes and/or verified edges. The system can then generate a reference time and reference location for the mesh network based on only the verified nodes and/or verified edges. Thus, the system generates a reference time and a reference location for the mesh network that exploit the dynamically changing network topology to increase the accuracy of this reference time and reference location. For example, by leveraging only verified nodes and/or edges in the mesh network, the system can generate a reference time with sub-nanosecond timing accuracy and a reference location with sub-meter positional accuracy even for applications in which: the mesh network is highly distributed; and individual transceivers and/or communication channels may not meet the specifications of the mesh network (such as due to errors in time-offset synchronicity caused by wireless multipath, due to errors in frequency offset synchronicity or initial calibration caused by relative motion, or due to low timing/location precision caused by an unstable local oscillator reference). The system can periodically execute the method S100to test the accuracy of the transceivers in the network over time as the system updates the network graph based on pairwise frequency offsets, pairwise time biases, and pairwise distances reported by the network of transceivers. The system can also track the history of each node's diagnostic score, over multiple testing intervals, to identify trends in transceiver performance over time. Therefore, upon calculating repeatedly high cumulative diagnostic scores for a node, the system can prompt an operator of the network to repair the transceiver represented by the node or to replace the transceiver entirely. In one variation, the system can execute a failure prediction model (e.g., a machine learning model), based on a network graph including time series of edge values and/or node values, in order to predict failure (defined based on cumulative diagnostic score) of nodes and/or edges in the network prior to a current network graph exhibiting a failure. In this variation, the system can train the failure prediction model to recognize patterns in node values and/or edge values that are predictive of a future failure of one or more transceivers in the mesh network. 3. Network Graph Generally, the system maintains and/or accesses a network graph that represents a set of transceivers configured in a mesh network in Block S110. The transceivers can establish pairwise "connections"—such as a radio-frequency communication channel, optical communication channel, or any other form of direct communication between transceivers—with other transceivers in the mesh network. The transceivers in the mesh network can execute a frequency calibration and a time synchronization protocol on a pairwise basis to calculate the frequency offset between the clocks of each pair of transceivers, the time bias between the clocks of each pair of transceivers, and/or the distance between each pair of transceivers (e.g., based on the propagation delay between transceivers and the speed of the data in the communication medium).
In particular, pairs of nodes within the network can transmit and receive synchronization signals and calibration signals and calculate time bias, propagation delay, and frequency offset (i.e., pairwise edge values) between the pair of nodes according to time synchronization and frequency calibration protocols described in U.S. patent application Ser. No. 17/135,566. Based on these pairwise edge values, the system can: calculate a reference time for the mesh network (in order to synchronize the clocks of each transceiver to a common time); and calculate the relative location of each transceiver in the mesh network according to the self-localization protocol described in U.S. patent application Ser. No. 17/080,729. The system can generate or access an abstraction (i.e., the network graph) of the mesh network based on these pairwise data in order to track the performance of the mesh network. The system can access a network graph including a set of nodes, where each node represents a transceiver in the mesh network, and a set of edges between the nodes, where each edge represents a connection between a pair of nodes of sufficient quality (e.g., signal-to-noise ratio) to obtain pairwise data for the pair of nodes. Thus, the system can access a network graph that defines a connected graph. However, the system can access network graphs that are not maximally connected (i.e., complete) graphs due to low signal-to-noise ratio (hereinafter "SNR") caused by environmental factors or physical dispersion of the nodes. For example, the mesh network can include a set of RF transceivers spread over a large geographic area or an area with physical obstructions. Therefore, various pairs of transceivers in the mesh network may not be able to execute the frequency offset calibration and time synchronization protocols due to an insufficient SNR for transmissions between the pair of transceivers, and therefore the network graph does not include an edge between these transceivers. Thus, in Block S110, the system can access a network graph including: a set of nodes, each node in the set of nodes representing a transceiver in the network; and a set of edges, each edge in the set of edges connecting a pair of nodes in the set of nodes and representing a communication channel between a pair of transceivers in the network of transceivers represented by the pair of nodes, the communication channel characterized by a signal-to-noise ratio greater than a threshold signal-to-noise ratio. Additionally, the system can access and/or maintain a network state based on the network graph in order to characterize the state of the transceivers and the connections between them in the mesh network. More specifically, the network state includes edge values in association with each edge in the network graph and/or node values in association with each node in the network graph that represent aspects of the communication channel represented by the edge and/or the transceiver represented by the node, respectively. Thus, in Block S120, the system can access a network state including a set of edge values for the set of edges and a set of node values for the set of nodes. In one implementation, in addition to accessing a current network state of the mesh network represented by the network graph, the system can access a time series of network states in Block S120.
More specifically, the system can access a network state including: a time series of edge values for each edge in the set of edges; and/or a time series of node values for each node in the set of nodes. Thus, the system can identify trends in the edge values and node values characterizing the status of the network. In another implementation, the system can access a set of edge properties and/or a set of node properties representing static characteristics of the node that are not subject to change. For example, the system can access a set of node properties including the type of hardware of the transceiver represented by the node. In another example, the system can access a set of edge properties including a predetermined line-of-sight distance between two static transceivers and/or a predetermined frequency of the communication channel represented by an edge. In yet another implementation, in which the set of nodes in the network graph include both transmitter nodes and transceivers nodes, the system can access a network state including a set of transceiver-transceiver edge values for each edge between transceiver nodes in the network graph; and a set of transmitter-transceiver edge values for each edge between a transmitter node and a transceiver node in the network graph. 3.1 Synchronization, Calibration, and Ranging Signals Generally, the system can access or generate the network state, based on synchronization, calibration, and ranging signals transmitted between nodes in the network, or between transmitters (e.g., user equipment, asset tags) and nodes in the network. More specifically, these synchronization signals, calibration signals, and/or ranging signals can include orthogonal frequency division multiplexed signals (including a set of subcarrier signals) or frequency-hopping signals (including a set of time-divided carrier signals), thereby enabling spread-spectrum precision timing of these signals by nodes in the network. 3.2 Relative Time Bias Generally, the system can access, from the network state, reported pairwise time biases between transceivers (e.g., from transceivers executing the time synchronization process) and reported pairwise frequency offsets (e.g., from transceivers executing the frequency offset calibration process) to calculate a reference time for the mesh network of transceivers. More specifically, the system can generate and maintain an abstraction of a reference clock that defines a reference clock frequency and reference time (e.g., an initial time from which subsequent times can be measured). In order to construct this abstraction, the system can execute the method S100to select the most accurate clock from amongst the transceivers as a master clock in the mesh network. Alternatively, the system can calculate a weighted average of nominal clock frequencies and nominal reference times tracked by each transceiver in the set of verified transceivers and verified communication channels in the mesh network. The system can weigh each nominal clock frequency and/or reference time corresponding to a transceiver based on a history of diagnostic scores generated for that transceiver. For example, the system can weigh nominal clock frequency and/or nominal reference times for transceivers based on a running average, which can be a temporally weighted running average or windowed running average. 
In one implementation in which the system generates the reference clock based on an unweighted average of transceiver clocks, the system can, upon generating the reference clock, calculate the time bias (i.e., the relative time bias) of each transceiver relative to the reference clock and store this value as a node value in association with a node representing the transceiver. Thus, in addition to representing conditions at each node, the set of node values can also represent the time bias of a node relative to the reference clock, which may be indicative of poor performance of the node relative to other nodes in the network. In another implementation, the system can test each nominal clock frequency and nominal reference time from each transceiver in the mesh network and select the nominal clock frequency and nominal reference time that minimizes diagnostic errors across the network. However, the system can establish a reference time and a reference clock frequency in any other way. 3.3 Relative Location Generally, the system can calculate a location for each node relative to a reference location based on the pairwise distances between nodes in the set of edge values in the network graph. More specifically, the system can access location data for one or more transceivers represented by nodes in the network graph in order to position those transceivers within a reference coordinate system. Upon calculating the relative location of one transceiver in the network, the system can access the location of additional transceivers and/or orientation data for the same transceiver in order to orient the transceivers within a reference coordinate system. The system can then localize other transceivers within the network based on the pairwise distances calculated between these transceivers. Thus, the system can access a set of node values including a relative location estimate of a transceiver represented by a node and can compare this relative location estimate to other sources of location data for the transceiver, thereby identifying errors in relative location estimates that may be indicative of failure at the transceiver. In one implementation, the system can access location and/or orientation data for transceivers via secondary localization technologies executed at the transceiver. For example, the system can access reference location and/or reference orientation data via a GNSS receiver, via an ultrawideband localization protocol (hereinafter "UWB localization protocol"), via a signal strength localization protocol, via an angle-of-arrival localization protocol, via a dead-reckoning or accelerometer-based localization protocol, via magnetometer-based orientation detection, and/or via any other localization or orientation protocol. Alternatively, the system can access a reference location database including known installed locations and/or orientations of a subset of transceivers in order to anchor the subset of transceivers in the reference coordinate system. The system can then estimate the location of other transceivers relative to the subset of transceivers. By accessing location estimates for each transceiver based on multiple available localization technologies, the system can estimate network localization performance at each transceiver and identify failure at particular transceivers within the network.
Thus, the system can access node values for a node including: a relative location estimate of the transceiver represented by the node based on pairwise distances in the set of edge values; and a set of location estimates based on other localization technologies. 3.4 Environmental Map Generally, the system can also store an environmental map, which is a representation of a physical environment surrounding the mesh network of transceivers. More specifically, the system can store the network graph as a 3D representation of the calculated positions of the transceivers in the mesh network and overlay a map of the environment, which may include the locations and orientations of obstructions—such as physical obstacles or barriers—or anomalies (e.g., in the near field RF environment) that prevent transceivers from communicating with each other, over the network graph. Additionally, the system can augment the environmental map based on sensor data retrieved from the mesh network of transceivers such as temperature data recorded at each node. In this example, the system can estimate the temperature in a region of the environmental map and predict temperature changes and a corresponding decrease in the accuracy of the pairwise time bias and/or propagation delays reported by transceivers in this region. Furthermore, the system can update the environmental map by detecting anomalies in the environment based on errors attributed to pairwise connections between transceivers (as opposed to errors attributable to specific transceivers). In one implementation, the system can store the environmental map relative to an absolute reference location or orientation. In this implementation, if one transceiver in the mesh network of transceivers is positioned in a known location relative to the environmental map, the system can transform the relative location of each transceiver to an absolute location of the transceiver based on the environmental map. 3.5 Node Properties Generally, the system can also store and/or access, in association with the network graph, node properties (e.g., static node values) that represent known non-transient characteristics of the transceiver represented by each node in the network graph, such as: nominal clock frequency (prior to correction by the calculated frequency offset of the transceiver), oscillator type (e.g., crystal oscillator, atomic oscillator), type of crystal oscillator cut (e.g., AT, SC, BT, IT, FC), oscillator circuit type (e.g., ATCXO, CDXO, DTCXO, OCXO), levels of temperature or voltage control of the oscillator, atomic clock type (e.g., cesium, rubidium, hydrogen), antenna type of the transceiver, and/or any other hardware-related property of the transceiver. Based on the set of node properties of each node in the network graph, the system can correlate the specific properties of each node with a range of expected behaviors of the transceivers represented by the nodes. The system can then utilize these correlations to predict and/or identify failure of transceivers exhibiting behavior outside of this expected range. More specifically, the system can include the set of node properties in an input vector to a failure prediction model, along with the set of edge values, the set of node values, and/or the set of edge properties, in order to predict whether the current network state (and/or a time series of past network states) predicts failure of any transceivers in the network. 
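As one illustration of this input vector assembly, the following Python sketch composes node values, node properties, and incident edge values into a single feature vector for the failure prediction model. The feature layout, the one-hot vocabulary, and the failure_model object are hypothetical placeholders, not the patent's schema, and the sketch assumes the node has at least one incident edge.

```python
import numpy as np

OSCILLATOR_TYPES = ["crystal", "atomic"]      # example node-property vocabulary

def node_feature_vector(graph, node, node_values, node_properties):
    """Assemble one node's input features from values, properties, and edges."""
    one_hot = [1.0 if node_properties["oscillator_type"] == t else 0.0
               for t in OSCILLATOR_TYPES]
    incident = [values for edge, values in graph.edges.items() if node in edge]
    return np.array([
        node_values["temperature"],
        node_values["relative_time_bias"],
        np.mean([v["time_bias"] for v in incident]),         # mean pairwise time bias
        np.mean([v["frequency_offset"] for v in incident]),  # mean pairwise offset
        *one_hot,
    ])

# probability = failure_model.predict(node_feature_vector(...))  # hypothetical model
```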
3.6 Edge Properties

Generally, the system can also store and/or access, in association with the network graph, edge properties (e.g., static edge values) that represent known non-transient characteristics of the communication channel represented by each edge in the network graph, such as: a known, static, line-of-sight distance between stationary transceivers associated with the edge; a known, static number of line-of-sight obstructions between stationary transceivers associated with an edge; a predetermined frequency or frequency range of the communication channel represented by the edge; and/or a medium of the communication channel (e.g., a radio frequency channel, a fiber optic cable communication channel). Based on the set of edge properties of each edge in the network graph, the system can correlate specific edge properties with a range of expected performance of the communication channels represented by each edge in the network graph. The system can then utilize these correlations to predict and/or identify failure of particular communication channels exhibiting performance levels outside of the expected range. More specifically, the system can include the set of edge properties within an input vector for a failure prediction model, along with the set of node properties, the set of edge values, and/or the set of node values, in order to identify whether the current network state (and/or a time series of past network states) predicts failure of any transceivers or communication channels in the network (as is further described below).

3.7 Node Values

As shown in FIG. 5, the system can store a set of node values in association with each node in the network graph, where the node represents a transceiver in the mesh network. More specifically, the system can access a network state including node values representing: transceiver relative location (calculated according to mesh network localization as described in U.S. patent application Ser. No. 17/080,729), transceiver GNSS location, transceiver frequency offset relative to a reference frequency of the mesh network, transceiver time bias relative to a reference time of the mesh network, transceiver temperature, transceiver velocity, transceiver acceleration, and/or transceiver oscillator age. Thus, the system can access a network state that represents the status of each transceiver in the mesh network. In one implementation, the system can access a network state including a time series of node values, thereby representing the variation of these node values over time in addition to each current node value.

3.7.1 Transceiver Relative Location

Generally, the system can execute the method described in U.S. patent application Ser. No. 17/080,729 in order to localize the transceivers in the mesh network relative to each other. More specifically, the system can define a relative coordinate system based on the location and/or orientation of a primary transceiver in the mesh network; and execute the method described in U.S. patent application Ser. No. 17/080,729 to estimate the relative location of each transceiver in the relative coordinate system. Alternatively, the system can access a network state including the relative location of the transceiver derived via any method executed by the transceivers in the mesh network. Thus, the system can estimate the relative location of each transceiver in the mesh network based on pairwise time bias and propagation delay values.
3.7.2 Transceiver GNSS Location

Generally, the system can access a network state including node values representing the GNSS location of each transceiver in the mesh network. More specifically, each transceiver can include a GNSS receiver capable of calculating a GNSS location of the transceiver. Thus, by including both the relative location and the GNSS location of each transceiver in the mesh network, the system can predict failure of transceivers in the mesh network based on large deviations of the relative location from the GNSS location.

3.7.3 Transceiver Frequency Offset

Generally, the system can access a network state including node values representing a frequency offset of a transceiver relative to a reference frequency maintained by the mesh network. More specifically, the system can generate a reference frequency abstraction based on a reference oscillator of a primary transceiver in the network or as an adjusted average of oscillator frequencies across the mesh network. Thus, by including a gross transceiver frequency offset in addition to a pairwise frequency offset (in association with edges) within the network state, the system can better identify transceivers experiencing significant frequency drift from a reference frequency or reference frequency abstraction.

3.7.4 Transceiver Time Bias

Generally, the system can access a network state including node values representing a time bias of a transceiver relative to a reference time maintained by the mesh network. More specifically, the system can generate a reference time abstraction based on a reference clock of a primary transceiver in the network or as an adjusted average of transceiver clocks across the mesh network. Thus, by including a gross transceiver time bias in addition to a pairwise time bias (in association with edges) within the network state, the system can better identify transceivers experiencing significant clock drift relative to a reference clock or reference clock abstraction.

3.8 Edge Values

As shown in FIG. 4, the system can store a set of edge values in association with each edge in the network graph, where the edge represents a connection or communication channel between transceivers in the mesh network. More specifically, the system can access a network state including edge values representing any pairwise data generated by execution of the frequency offset calibration process and/or the time bias synchronization process by the pair of transceivers represented by the pair of nodes associated with the edge, such as: the pairwise frequency offset between the transceivers associated with this pair of nodes; the pairwise time bias between transceivers associated with this pair of nodes; and the pairwise distance between transceivers associated with this pair of nodes. The system can also store edge values that represent characteristics of the connection between transceivers in the network represented by the pair of nodes associated with an edge, such as the SNR of the connection between these transceivers. Additionally or alternatively, the system can store, in the set of edge values, open-source band usage information within the operating band of the transceivers in the network. For example, the system can identify occupied frequency bands and, therefore, predict a level of interference on a communication channel represented by an edge in the network graph. Furthermore, the system can store a time series of these edge values in order to track these properties over time.
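One minimal way to form the reference abstractions of Sections 3.7.3 and 3.7.4 above is an (optionally weighted) average of per-transceiver clock readings, with each gross offset taken relative to that average. A sketch, assuming this averaging approach; the function name is hypothetical.

```python
import numpy as np

def reference_abstraction(values: np.ndarray, weights: np.ndarray | None = None):
    """Form a reference (time or frequency) as a weighted average of
    per-transceiver clock readings, then return each transceiver's
    gross offset relative to that reference."""
    ref = np.average(values, weights=weights)
    return ref, values - ref

# Example: clock frequencies (Hz) reported by four transceivers.
freqs = np.array([10e6 + 0.20, 10e6 - 0.10, 10e6 + 0.05, 10e6 - 0.15])
ref_freq, gross_offsets = reference_abstraction(freqs)
# A large |gross_offsets[i]| flags transceiver i as drifting from the network.
```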
As is further described below, the system can access a network state including: secondary edge values, such as a pairwise time bias, a pairwise frequency offset (e.g., a crystal oscillator frequency offset), a pairwise distance, and uncertainty measures for these values; and/or primary edge values, such as an SNR, a multipath profile, and a number of physical obstructions. For example, the system can access a network state including, for each edge in the network graph or some subgraph of the network graph (e.g., a triangle graph): the pairwise time bias associated with the edge; a time bias uncertainty associated with the edge; the pairwise frequency offset associated with the edge; a frequency offset uncertainty associated with the edge; and a pairwise distance associated with the edge.

3.8.1 Pairwise Frequency Offset Calculation

Generally, assuming signal reception between a pair of transceivers in the mesh network, the pair of transceivers can execute a frequency offset calibration process to characterize relative frequency drift between clocks of the pair of transceivers. This frequency offset calibration process is detailed in U.S. patent application Ser. No. 17/135,566, which is incorporated in its entirety by reference. As a result of the frequency offset calibration process, each pair of transceivers can calculate, to within a few parts per billion, the frequency offset between their clocks. The system can then access the frequency offset for a pair of transceivers and associate this frequency offset data with an edge in the network graph representing the connection between the pair of transceivers. More specifically, the system can access a network state including a set of edge values for the set of edges, the set of edge values including a set of pairwise frequency offsets in the network of transceivers. However, the system can access the frequency offset between a pair of transceivers in any other way.

3.8.2 Pairwise Time Bias Calculation

Generally, assuming signal reception between a pair of transceivers in the mesh network, the pair of transceivers can execute a time bias synchronization process to characterize the relative time bias between clocks of the pair of transceivers. This process is detailed in U.S. patent application Ser. No. 17/025,635, which is incorporated in its entirety by reference. As a result of the time bias synchronization process, each pair of transceivers can calculate, to within one nanosecond, the time bias between the pair of transceivers. The system can then access the time bias for a pair of transceivers and associate this time bias with an edge in the network graph representing the connection between the pair of transceivers. More specifically, the system can access a network state including a set of edge values for the set of edges, the set of edge values including a set of pairwise time biases in the network of transceivers. However, the system can access the time bias between a pair of transceivers in any other way.

3.8.3 Pairwise Distance Calculation

Generally, assuming signal reception between a pair of transceivers in the mesh network, the pair of transceivers can execute the time bias synchronization process referenced above to also characterize the propagation delay between the pair of transceivers. This process is detailed in U.S. patent application Ser. No. 17/025,635, which is incorporated in its entirety by reference.
As a result of the time bias synchronization process, each pair of transceivers can calculate, to within one nanosecond, the propagation delay between the pair of transceivers and can, therefore, calculate the distance between the pair of transceivers based on the speed of data communication in the medium between these transceivers (e.g., the speed of light for RF and/or optical transmissions). The system can then access this propagation distance for a pair of transceivers and associate this propagation distance with an edge in the network graph representing the connection between the pair of transceivers. More specifically, the system can access the network state including the set of edge values for the set of edges, the set of edge values including a set of pairwise distances in the network of transceivers. However, the system can access the propagation distance between a pair of transceivers in any other way.

3.8.4 Timing Uncertainty

Generally, in addition to calculating pairwise time bias, the system can access and/or measure the uncertainty of this time bias calculation in order to provide additional data characterizing the performance of transceivers in the network. More specifically, the system can access a network state including a set of edge values for the set of edges, the set of edge values including a time bias uncertainty associated with each edge in the set of edges. Thus, the system can identify network states indicating a high uncertainty in the pairwise time bias between a pair of transceivers in the network and predict and/or identify failure of these transceivers.

In one implementation, the system can access a network state including measures of time bias uncertainty, such as: uncertainty due to the sampling frequency of the transceiver, measures of phase-of-arrival variability across subcarrier signals between transceivers, measures of phase-of-arrival variability across successive synchronization signals of variable carrier frequencies, measures of hybrid matched filter peak width, and/or any other measure of timing uncertainty, which may arise when executing time bias calculation as described in U.S. patent application Ser. No. 17/025,635.

In another implementation, the system can access a set of edge values including a pairwise distance uncertainty derived from a measure of timing uncertainty. The system can calculate a measure of pairwise distance uncertainty based on a measure of timing uncertainty because the pairwise propagation delay between transceivers (from which the distance between transceivers is calculated) is algebraically related to the pairwise time bias between transceivers, as described in further detail in U.S. patent application Ser. No. 17/025,635.

3.8.5 Frequency Offset Uncertainty

Generally, in addition to calculating the pairwise frequency offset between transceivers, the system can access and/or measure the uncertainty in this frequency offset calculation in order to further characterize the performance of transceivers in the network. More specifically, the system can access a network state including a set of edge values for the set of edges, the set of edge values including a frequency offset uncertainty associated with each edge in the set of edges. Thus, the system can identify network states in which the variability of frequency offsets has increased and predict and/or identify failure of transceivers based on the frequency offset uncertainty.
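The distance and distance-uncertainty relations of Sections 3.8.3 and 3.8.4 above reduce to scaling the propagation delay, and its timing uncertainty, by the propagation speed of the medium. A minimal sketch, assuming an RF or optical medium at the speed of light; the function name is hypothetical.

```python
C = 299_792_458.0  # propagation speed for RF/optical media (m/s)

def pairwise_distance(prop_delay_s: float, timing_sigma_s: float = 0.0):
    """Convert a measured propagation delay (and its timing uncertainty)
    into a pairwise distance and a pairwise distance uncertainty."""
    return C * prop_delay_s, C * timing_sigma_s

# Example: a 100 ns delay measured to within 1 ns.
d, sigma_d = pairwise_distance(100e-9, 1e-9)
# d ~ 29.98 m, sigma_d ~ 0.30 m
```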
In one implementation, the system can access a network state including, for each edge in the network graph, a measure of frequency offset uncertainty, such as a measure of calibration signal interval variability for a set of calibration signals sent between the pair of nodes (further described in U.S. patent application Ser. No. 17/135,566). In one example, the system can access a network state including a ratio of non-zero calibration signal interval deviations to calibration signal intervals equal to zero (i.e., the proportion of calibration signal interval deviations greater than zero) in a set of successive calibration signals. In this example, a lower value of this ratio indicates low uncertainty in the pairwise frequency offset associated with the same edge, while a higher value of this ratio indicates high uncertainty in the pairwise frequency offset associated with the same edge. In another example, the system can access a network state including, for each edge in the network graph, a cumulative measure of the calibration signal interval deviation over a series of calibration signal intervals. In this example, a higher cumulative deviation indicates greater uncertainty in the pairwise frequency offset associated with the same edge, while a lower cumulative deviation indicates lower uncertainty in the pairwise frequency offset associated with the same edge.

3.8.6 Multipath Profile

Generally, the transceivers in the network can execute multipath profile detection methods on inbound synchronization signals or calibration signals and report a multipath profile for a communication channel represented by an edge. More specifically, the system can access a network state that includes a multipath profile associated with each edge in the set of edges. Thus, the system can calculate a level of uncertainty based on detected multipath components of the signal.

In one implementation, the system and/or transceivers in the network can execute a multiple signal classification (hereinafter "MUSIC") algorithm on an inbound signal, as described in U.S. patent application Ser. No. 17/511,433. More specifically, a transceiver in the network can: receive an inbound synchronization signal or an inbound calibration signal from another transceiver in the network or a transmitter (in the form of a frequency-hopping spread spectrum signal or an orthogonal frequency division multiplexed signal); generate a received signal vector based on a series of digital samples representing each sub-signal of the inbound synchronization signal or inbound calibration signal; calculate an autocorrelation matrix of the received signal vector; for each frequency in the set of frequencies, calculate an eigenvector of the autocorrelation matrix and a corresponding eigenvalue of the autocorrelation matrix; sort the eigenvector for each frequency in the set of frequencies based on the corresponding eigenvalue for each frequency in the set of frequencies to identify a noise subspace of eigenvectors and a signal subspace of eigenvectors; evaluate an estimation function over a range of possible times-of-arrival, thereby generating a multipath profile based on the noise subspace of eigenvectors and a steering vector; and identify peaks of the estimation function as the set of multipath components of the ranging signal, each multipath component in the set of multipath components corresponding to a multipath time-of-arrival. This process is described in further detail below.
The system applies the MUSIC algorithm (often used for angle-of-arrival detection) to the TOA calculation and multipath characterization according to the receiver and signal model shown in FIG. 2. In FIG. 2: each multipath component of the signal is represented as $s_i(k) = \alpha_i(k) e^{j 2 \pi f \tau_i}$ with received TOAs $\tau_i$ for $i \in \{1, \dots, D\}$, wherein $D$ represents the number of multipath components and $k$ represents the digital sample of the ranging signal; and the received signal vector $\vec{x}(k)$ is captured for $M$ sub-signals with different frequencies and is represented as $\vec{x}(k) = [x_1(k), x_2(k), \dots, x_M(k)]^T$. Additionally, the system executes the MUSIC algorithm based on the steering matrix $A$ describing each multipath component with TOA $\tau$ in terms of a composition of sub-carrier signals:

$$A = [a(\tau_1), a(\tau_2), \dots, a(\tau_D)],$$

wherein $a(\tau_i) = [1, e^{j 2 \pi \Delta f \tau_i}, \dots, e^{j 2 \pi (M-1) \Delta f \tau_i}]^T \in \mathbb{C}^M$, and $\Delta f$ is the difference between two consecutive frequencies. Based on the above-described receiver and signal model:

$$\vec{x}(k) = A(\tau) \vec{s}(k) + \vec{n}(k),$$

wherein $\vec{s}(k) = [\alpha_1(k) e^{j 2 \pi f_0 \tau_1}, \alpha_2(k) e^{j 2 \pi f_0 \tau_2}, \dots, \alpha_D(k) e^{j 2 \pi f_0 \tau_D}]^T \in \mathbb{C}^D$ represents the signal vector and $\vec{n}(k)$ represents the noise within each sub-signal of the ranging signal. The system then computes the following autocorrelation matrix:

$$R_{xx} = E(\vec{x} \vec{x}^H),$$

wherein $\vec{x}^H$ represents the conjugate transpose of $\vec{x}$ and $E$ represents an expected value function equivalent to a statistical average over the elements of its input. Once the system has computed $R_{xx}$, the system calculates the eigenvectors $v_i \in \mathbb{C}^M$ for $i \in \{1, \dots, M\}$ and corresponding eigenvalues of $R_{xx}$ and sorts the eigenvectors in descending order of the corresponding eigenvalues. Based on an estimation of the number of multipath signals, the system then separates the eigenspace of $R_{xx}$ into a signal subspace $Q_S = [v_1, v_2, \dots, v_D]$ and a noise subspace $Q_n = [v_{D+1}, v_{D+2}, \dots, v_M]$. Because the steering vector $a(\tau)$ at the true multipath delays spans the signal subspace, it is orthogonal to the eigenvectors in the noise subspace, i.e., $a(\tau)^H v_i = 0$ for $i \in \{D+1, \dots, M\}$. Hence, by evaluating the MUSIC estimation function $P_{MU}(\tau)$ over a range of discrete values of $\tau$, the system can generate a pseudospectrum and identify the peaks in this spectrum as the TOAs of each multipath component of the signal, as follows:

$$P_{MU}(\tau) = \frac{1}{\sum_{i=D+1}^{M} \left| a(\tau)^H v_i \right|^2}$$

In this implementation, the system can derive the relative timing, relative signal strength, and the specific number of multipath components. Thus, via inclusion of these multipath metrics in the set of edge values of the network state, the system can utilize the detected multipath severity of a communication channel to predict failure of transceivers transmitting over this communication channel.
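The pseudospectrum evaluation above can be sketched numerically. The following is a minimal, self-contained illustration of the MUSIC technique under simplifying assumptions (evenly spaced sub-carriers, a known number of multipath components D, and simulated snapshots); it is not the transceiver implementation of U.S. patent application Ser. No. 17/511,433, and the function name `music_toa` is hypothetical.

```python
import numpy as np

def music_toa(x: np.ndarray, delta_f: float, n_paths: int, taus: np.ndarray):
    """Evaluate the MUSIC pseudospectrum P(tau) over candidate delays.

    x: (K, M) array of K received-signal snapshots across M sub-carriers.
    delta_f: spacing between consecutive sub-carrier frequencies (Hz).
    n_paths: assumed number of multipath components D.
    taus: candidate times-of-arrival (s)."""
    m = x.shape[1]
    r = (x.conj().T @ x) / x.shape[0]   # sample autocorrelation matrix R_xx
    vals, vecs = np.linalg.eigh(r)      # eigenvalues in ascending order
    qn = vecs[:, : m - n_paths]         # noise subspace (smallest eigenvalues)
    # Steering vectors a(tau) = [1, e^{j2pi df tau}, ..., e^{j2pi(M-1)df tau}]^T
    a = np.exp(1j * 2 * np.pi * delta_f * np.outer(np.arange(m), taus))
    denom = np.sum(np.abs(qn.conj().T @ a) ** 2, axis=0)
    return 1.0 / denom                  # peaks at multipath TOAs

# Example: two paths at 50 ns and 120 ns observed across 64 sub-carriers.
rng = np.random.default_rng(0)
m, delta_f, true_taus = 64, 1e6, np.array([50e-9, 120e-9])
steer = np.exp(1j * 2 * np.pi * delta_f * np.outer(np.arange(m), true_taus))
snapshots = (steer @ rng.normal(size=(2, 200))).T + 0.05 * rng.normal(size=(200, m))
p = music_toa(snapshots, delta_f, n_paths=2, taus=np.linspace(0, 200e-9, 2001))
# Peak picking over p recovers TOAs near 50 ns and 120 ns.
```

Peak picking over the returned pseudospectrum mirrors the identification of peaks of $P_{MU}(\tau)$ described above.

3.8.7 Physical Obstructions

Generally, the system can access a set of edge values including a number of line-of-sight obstructions between two nodes based on the environmental map. More specifically, the system can access a network state that includes a number of line-of-sight obstructions associated with each edge in the set of edges based on a physical map and locations of each node relative to the physical map. Thus, the system can predict failure of one or more transceivers or communication channels between transceivers based on the number of physical obstructions between them.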
In this implementation, the system can include, in the number-of-obstructions metric, obstructions indicated on the environmental map along a communication channel. These obstructions can include walls, windows, floors (for three-dimensional maps), shelving, terrain, and/or any other objects known to affect radio frequency communication within the frequency band occupied by the communication channels utilized by the network of transceivers. In one implementation, the system can access a detailed three-dimensional environmental map that represents the material and the thickness of obstructions between transceivers. The system can then generate a more accurate prediction regarding whether the obstructions are likely to cause a time synchronization or frequency calibration failure between transceivers.

3.8.8 Multipath Prediction

Generally, the system can access a set of edge values including a predicted multipath profile for the communication channel between transceivers represented by an edge. More specifically, the system can access a network state including a predicted multipath profile associated with each edge in the set of edges based on a physical map, locations of each node relative to the physical map, and a signal propagation model. Additionally or alternatively, the system can leverage historical multipath profiles associated with each edge in the set of edges (e.g., for an edge between two stationary nodes) in order to inform the predicted multipath profile. Furthermore, the system can leverage multipath profiles associated with similar edges in the set of edges (e.g., edges sharing one node in the pair of nodes associated with each edge) based on the assumption that these similar edges are characterized by similar wireless channels. Thus, the system can better identify anomalies in the communication channel between nodes by comparing a current multipath profile associated with an edge with a predicted multipath profile for the same edge. The system can further calculate a difference metric between the predicted multipath profile and the current multipath profile and execute edge scoring based on the difference metric.

In one implementation, for each edge between transceiver nodes in the network graph, the system can generate a predicted multipath profile of a communication link represented by the edge based on a series of historical multipath profiles associated with the edge. Additionally, the system can access a network graph including both the predicted multipath profile and the current multipath profile associated with a communication channel. More specifically, the system can access a network graph including a set of transceiver-transceiver edge values for each edge between transceiver nodes in the network graph, the set of transceiver-transceiver edge values including: a current multipath profile of the communication link represented by the edge; and the predicted multipath profile of the communication link represented by the edge. Thus, in this implementation, the system can generate the predicted multipath profile as a secondary metric based on previously observed multipath profiles associated with a communication link between transceivers in the network.

3.8.9 Mobility Classification

Generally, the system can access a set of edge values including a mobility classification representing the level of mobility of the transceivers associated with the communication channel represented by an edge.
More specifically, the system can access a network state including a mobility classification associated with each edge in the set of edges. Thus, the system can access a network state that indicates whether either of the transceivers associated with a communication channel is capable of moving from its current position.

In one implementation, the system can access a set of edge values including a mobility classification selected from a group of classifications including mobile, mobile-stationary, and/or stationary. In this implementation, a mobile classification indicates that both transceivers associated with the edge can move (i.e., they are not fixed in position), a mobile-stationary classification indicates that at least one transceiver in the pair of transceivers can move, and a stationary classification indicates that both transceivers associated with the edge are fixed in place.

3.8.10 Multipath Variability

Generally, the system can access a set of edge values including a predicted multipath variability for the communication channel between transceivers represented by an edge. More specifically, the system can access a network state including a predicted multipath variability associated with each edge in the set of edges based on a historical set of multipath profiles associated with the edge. Thus, the system can adjust a threshold value of the difference metric for anomaly detection relative to the expected variability of multipath in a particular communication channel between nodes. For example, a communication channel that spans a street or other space within which moving objects are often present may experience greater multipath profile variability as large objects move through the line of sight of the communication channel. In this example, the system can record a set of historical multipath profiles in association with the edge representing the communication channel and calculate a multipath variability metric characterizing variability present within the set of historical multipath profiles. The system can then adjust a threshold difference metric for the communication channel (i.e., comparing the current and predicted multipath profiles) based on the multipath variability metric.

In one implementation, the system can predict multipath variability based on other edge values, such as a communication medium associated with the edge, the pairwise distance associated with the edge, the number of physical obstructions associated with the edge, the mobility classification associated with the edge, and/or any other edge value associated with the edge. Thus, the system can predict multipath variability of a communication channel based on a holistic view of the characteristics of the communication channel as represented by the edge values of the edge associated with the communication channel.

4. Diagnostic Scoring

In one variation of the method S100, the system can predict, identify, and/or diagnose failures occurring at specific transceivers in the mesh network and/or along pairwise connections between transceivers in the mesh network by executing a set of consistency tests. More specifically, the system can execute a series of tests on groups of three connected nodes in the network graph in order to attribute the source of system-wide errors (e.g., localization errors and/or time synchronization errors) to particular nodes and/or edges in the network graph.
The system can then calculate a component diagnostic score based on the set of tests and repeat this process for each group of three connected nodes in the network graph (i.e., each triangle graph in the network graph). Thus, the system can calculate a component diagnostic score (or a separate component score for each consistency test applied to the triangle graph) for each node or edge based on the number of groups of three nodes of which the node or edge is a member. The system can then calculate a cumulative sum or other cumulative function of the component diagnostic scores associated with each node or edge to calculate a cumulative diagnostic score for the node or edge. The system can then analyze these cumulative diagnostic scores to identify high cumulative diagnostic scores, which may indicate a greater contribution, to network-wide errors, of the transceiver or the connection between transceivers represented by the node or the edge respectively. Thus, the system can detect failure of particular transceivers and/or communication channels between transceivers based on a cumulative diagnostic score for a node or edge that exceeds a threshold cumulative diagnostic score. Alternatively, the system can identify local maxima of cumulative diagnostic scores across the nodes and edges included in the network graph in order to identify a root cause of time synchronization, frequency calibration, and/or distance calculation errors within the mesh network of transceivers.

4.1 Triangle Graph Identification

Generally, in order to calculate component diagnostic scores for particular nodes and/or edges in the network graph, the system can identify a set of unique triangle graphs within the network graph and calculate a component diagnostic score for the nodes and edges within each identified triangle graph. More specifically, the system can identify a set of triangle graphs based on the network graph in Block S130. For example, in a network graph of four fully connected nodes a, b, c, and d, the system can identify four groups of three nodes: a, b, and c; a, b, and d; a, c, and d; and b, c, and d. For network graphs that are not fully connected (e.g., in which not all of the transceivers in a network can establish a direct connection), the system can identify only groups of three nodes that are connected by three edges. The system can identify triangle graphs from the network graph by executing a triangle counting algorithm, such as by computing an adjacency matrix representing the network graph, by executing a clustering coefficient algorithm, or by executing any other triangle counting algorithm. Thus, by identifying triangle graphs in the network graph, the system can access subsets of the network state corresponding to each triangle graph in order to calculate component diagnostic scores for each edge and node of the triangle graph.

4.2 Subgraph Identification

In one variation, shown in FIG. 3, the system can identify subgraphs of other topologies (non-triangular subgraphs) associated with each node of the system instead of or in addition to identifying a set of triangle graphs. In this variation, instead of or in addition to executing the time bias self-consistency test, the frequency offset self-consistency test, and/or the triangle inequality self-consistency test, the system can execute a failure prediction model based on the network state of each identified subgraph in the network graph.
More specifically, the system can, for each node in the network graph, identify a subgraph of the network graph associated with the node in Block S132. Thus, by identifying a subgraph associated with each node in the network graph, the system can reduce the size of the vector input to the failure prediction model (e.g., by including node values and edge values from only the subset of nodes in the network closely associated with a target node).

In one implementation, the system can identify a subset of nodes within a threshold number of edges from a target node based on the network graph. For example, the system can identify a subset of nodes within two degrees of separation (within two edges) of the target node and identify all edges between the subset of nodes in order to identify a subgraph associated with the particular node. Additionally, the system can identify a subset of nodes within some weighted edge distance from the target node based on an edge value or an edge property of each edge extending from the target node. More specifically, the system can identify a subset of nodes within a threshold edge distance of the target node in the network graph, such that the threshold edge distance is weighted by an edge value. For example, the system can calculate edge distance between nodes in the network graph weighted by the SNR of each edge, the pairwise distance of each edge, or any of the aforementioned edge values or edge properties of edges in the network graph. Thus, the system can limit the number of nodes and edges included in the subgraph to nodes and edges that are relevant to the target node.

In another implementation, the system can input the whole network state for the network graph into the failure prediction model, which, when executed by the system, outputs a cumulative diagnostic score for each node and/or each edge in the network graph. Thus, in this implementation, the system does not identify subgraphs (either of arbitrary or triangular topology) and instead interprets the network state as a whole in order to predict failure at particular nodes and/or edges within the network graph.

4.3 Network State Access

In one implementation, upon identifying a subgraph or a triangle graph, the system can access a subset of edge values and/or a subset of node values in association with the subgraph or triangle graph in order to calculate a component diagnostic score for edges and/or nodes associated with the subgraph or triangle graph. More specifically, the system can access a network state of a subgraph including a subset of edge values for each edge in the subgraph and/or a subset of node values for each node in the subgraph in Block S122. For example, the system can access a subset of edge values including the pairwise time bias and the pairwise frequency offset associated with each edge in the subgraph. Thus, the system can access edge values and/or node values relevant to the identified subgraphs of the network graph from the overall network state of the mesh network.

4.4 Component Scoring

Generally, the system can calculate a component diagnostic score based on a time bias self-consistency test, a frequency offset self-consistency test, a triangle inequality self-consistency test, and/or the failure prediction model, as executed based on a subset of edge values and/or a subset of node values associated with an identified subgraph of the network graph (e.g., a triangle graph or other subgraph topology).
More specifically, the system can, for each triangle graph in the network graph, calculate the component diagnostic score based on: a subset of edge values in the set of edge values, the subset of edge values associated with the edges in the triangle graph; and a subset of node values in the set of node values, the subset of node values associated with nodes in the triangle graph in Block S140. Alternatively, the system can, for each identified subgraph of the network graph, calculate the component diagnostic score based on: a subset of edge values in the set of edge values, the subset of edge values associated with edges in the subgraph; and a subset of node values in the set of node values, the subset of node values associated with nodes in the subgraph. Thus, the system can iterate through each triangle graph or subgraph present within the network graph and calculate a component diagnostic score for each edge and/or each node included in the triangle graph or a target node of a subgraph.

4.4.1 Frequency Offset Self-Consistency Test

Generally, the system can execute a frequency offset self-consistency test on a triangle graph within the network graph. More specifically, the system can calculate a frequency offset self-consistency score based on a subset of pairwise frequency offsets in the set of pairwise frequency offsets, the subset of frequency offsets associated with the edges in the triangle graph. Thus, by calculating a frequency offset self-consistency score for each triangle graph identified within the network graph, the system can directly identify discrepancies between pairwise frequency offsets within the network of transceivers.

In one implementation, the system can: access a triangle graph; access, from the network state, the pairwise frequency offset values associated with edges of the triangle graph; and calculate a frequency offset self-consistency score equal to a difference between a first pairwise frequency offset and a sum of a second pairwise frequency offset and a third pairwise frequency offset. Assuming accurate frequency calibration between the transceivers represented by the triangle graph, the system calculates a frequency offset self-consistency score of zero. However, if one or more of the transceivers represented by the triangle graph experience frequency calibration failures or experience crystal oscillator frequency drift, then the system calculates a non-zero frequency offset self-consistency score. Therefore, the system can correlate higher values of the frequency offset self-consistency score with a greater error in frequency offset calibration. In order to calculate the frequency offset self-consistency score $\epsilon_f$, the system can evaluate an equation of the form:

$$\epsilon_f = |\Delta f_{a,b} + \Delta f_{b,c} - \Delta f_{a,c}|,$$

where $\Delta f_{a,b}$ represents the pairwise frequency offset between transceivers a and b relative to a, which is stored in an edge between nodes representing transceivers a and b. Thus, if the group of three transceivers represented by the group of three nodes is perfectly synchronized with respect to frequency, the system calculates a frequency offset self-consistency score equal to zero. However, a small nonzero value of the frequency offset self-consistency score is generally expected for any group of three nodes. Furthermore, the system can calculate a frequency offset self-consistency score equal to any function of $\epsilon_f$.

4.4.2 Time Bias Test

Generally, the system can execute a time bias self-consistency test on a triangle graph identified within the network graph.
More specifically, the system can calculate a time bias self-consistency score based on a subset of pairwise time biases in the set of pairwise time biases, wherein the subset of pairwise time biases is associated with the edges in the triangle graph. Thus, by calculating a time bias self-consistency score for each triangle graph identified within the network graph, the system can directly identify discrepancies between pairwise time bias values in the network of transceivers.

In one implementation, the system can: access a triangle graph; access, from the network state, a subset of pairwise time bias values, wherein the subset of pairwise time bias values is associated with edges of the triangle graph; and calculate a time bias self-consistency score equal to a difference between a first pairwise time bias and a sum of a second pairwise time bias and a third pairwise time bias. Assuming accurate time synchronization between the transceivers represented by the triangle graph, the system calculates a time bias self-consistency score of zero. However, if one or more of the transceivers represented by the triangle graph experience clock drift and/or time synchronization failure, then the system calculates a non-zero time bias self-consistency score. Therefore, the system can correlate higher values of the time bias self-consistency score with greater error in time bias synchronization. In order to calculate the time bias self-consistency score $\epsilon_\beta$, the system can evaluate an equation of the form:

$$\epsilon_\beta = |\Delta \beta_{a,b} + \Delta \beta_{b,c} - \Delta \beta_{a,c}|,$$

where $\Delta \beta_{a,b}$ represents the pairwise time bias between transceivers a and b relative to a, which is stored in an edge between nodes representing transceivers a and b. Thus, if the group of three transceivers represented by the group of three nodes is perfectly synchronized with respect to time, the system calculates a time bias self-consistency score equal to zero. However, a small nonzero value of the time bias self-consistency score is generally expected for any group of three nodes. Furthermore, the system can calculate a time bias self-consistency score equal to any function of $\epsilon_\beta$.

4.4.3 Triangle Inequality Test

Generally, the system can execute a triangle inequality self-consistency test on a triangle graph within the network graph in order to test the accuracy of the pairwise distances between nodes represented by the triangle graph executing the time bias synchronization process. More specifically, the system can calculate a triangle inequality score based on a subset of pairwise distances in the set of pairwise distances, the subset of pairwise distances associated with the edges in the triangle graph. Thus, by calculating a triangle inequality self-consistency score, the system can identify geometric discrepancies between pairwise distance values in the network of transceivers and associate these discrepancies with a failure in time synchronization and/or frequency calibration of these transceivers.

In one implementation, the system can: access a triangle graph; access, from the network state, a subset of pairwise distance values, wherein the subset of pairwise distance values is associated with edges of the triangle graph; and evaluate a triangle inequality to determine whether the distances between each of the transceivers represented by the triangle graph can form a triangle.
In one implementation, the system can provide a binary triangle inequality score indicating whether the set of nodes has failed the triangle inequality self-consistency test or passed the triangle inequality self-consistency test. In order to evaluate the triangle inequality, the system can evaluate each of the following inequalities:

$$d_{ab} + d_{bc} > d_{ac}, \quad d_{ab} + d_{ac} > d_{bc}, \quad d_{bc} + d_{ac} > d_{ab},$$

where $d_{ab}$ represents the distance between nodes a and b. Additionally, the system can perform similar geometric comparisons between the pairwise distances represented by the current network graph and known distances between transceivers within the environmental map. For example, if a distance between two transceivers is known, the system can compare calculated distances between these two transceivers based on various groups of nodes representing other transceivers to evaluate the accuracy of the transceivers represented by the group of nodes.

In one implementation, instead of calculating a binary triangle inequality self-consistency score, the system can calculate a triangle inequality score based on the difference between the sum of the two shorter pairwise distances and the longest pairwise distance. Thus, in this alternative implementation, the system can output a continuous triangle inequality self-consistency score that represents the degree to which the edges in the triangle graph fail the triangle inequality self-consistency test.

In another implementation, the system can execute diagnostic scoring based on a triangle inequality test for triangle graphs within the network graph including transmitter nodes, transceiver nodes, transmitter-transceiver edges, and transceiver-transceiver edges. More specifically, the system can identify a triangle graph in the network graph including: a first node in a set of nodes, the first node representing a first transceiver; a second node in the set of nodes, the second node representing a second transceiver; a third node in the set of nodes, the third node representing a first transmitter; a first transceiver-transceiver edge representing a communication channel between the first transceiver and the second transceiver; a first transmitter-transceiver edge representing a ranging signal transmitted from the first transmitter and received by the first transceiver; and a second transmitter-transceiver edge representing the ranging signal transmitted from the first transmitter and received by the second transceiver. Once the system has identified the triangle graph including both a transmitter and transceivers, the system can access from the network state: a first pairwise distance between the first transceiver and the second transceiver associated with the first transceiver-transceiver edge; a first pseudorange between the first transmitter and the first transceiver associated with the first transmitter-transceiver edge; and a second pseudorange between the first transmitter and the second transceiver associated with the second transmitter-transceiver edge. Therefore, the system can calculate a triangle inequality based on pseudoranges between the first and second transceivers and the transmitter instead of ranges between transceivers as described above. More specifically, the system can calculate a component diagnostic score based on the set of edge values by: calculating a triangle inequality based on the first pairwise distance, the first pseudorange, and the second pseudorange; and calculating the component diagnostic score based on the triangle inequality.
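The three self-consistency tests above, together with the triangle enumeration of Section 4.1 and the per-node accumulation described in the next section, can be sketched compactly. This is a minimal illustration with hypothetical data layouts (signed pairwise values keyed by ordered node pairs with the first node before the second, distances keyed by unordered pairs), not the system's implementation; the same accumulation applies to edges.

```python
from itertools import combinations

def triangles(edges: set[frozenset]) -> list[tuple]:
    """Enumerate unique triangle graphs from an undirected edge set."""
    nodes = sorted({n for e in edges for n in e})
    return [t for t in combinations(nodes, 3)
            if all(frozenset(p) in edges for p in combinations(t, 2))]

def component_score(tri, time_bias, freq_offset, dist) -> float:
    """Combined self-consistency score for one triangle graph (a, b, c)."""
    a, b, c = tri
    eps_beta = abs(time_bias[a, b] + time_bias[b, c] - time_bias[a, c])
    eps_f = abs(freq_offset[a, b] + freq_offset[b, c] - freq_offset[a, c])
    d = sorted(dist[frozenset(p)] for p in combinations(tri, 2))
    tri_violation = max(0.0, d[2] - (d[0] + d[1]))  # continuous triangle-inequality score
    return eps_beta + eps_f + tri_violation

def cumulative_scores(edges, time_bias, freq_offset, dist) -> dict:
    """Accumulate component scores onto every node of every triangle graph."""
    cum: dict = {}
    for tri in triangles(edges):
        s = component_score(tri, time_bias, freq_offset, dist)
        for n in tri:
            cum[n] = cum.get(n, 0.0) + s
    return cum  # high scores flag transceivers implicated in many errors

# Example: four fully connected nodes with one inconsistent time bias.
edges = {frozenset(p) for p in combinations("abcd", 2)}
tb = {(i, j): 0.0 for i, j in combinations("abcd", 2)}
tb["a", "b"] = 5e-9
fo = {(i, j): 0.0 for i, j in combinations("abcd", 2)}
d = {frozenset(p): 1.0 for p in combinations("abcd", 2)}
print(cumulative_scores(edges, tb, fo, d))  # "a" and "b" score highest
```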
4.5 Cumulative Diagnostic Scoring

Generally, because the system evaluates each of the above-described self-consistency tests on a triangle graph (as opposed to a single node), any component self-consistency score calculated for the triangle graph is equally relevant to the three nodes and three edges included in the triangle graph. In order to calculate an individual score for a target node (or edge) such that the system can identify failure of an individual transceiver or communication channel within the mesh network, the system can calculate a cumulative diagnostic score based on multiple component diagnostic scores calculated for triangle graphs of which the target node is a member. For example, in a set of four fully connected nodes a, b, c, and d, the system can identify four triangle graphs: a, b, and c; a, b, and d; a, c, and d; and b, c, and d. Therefore, in order to calculate a cumulative diagnostic score for node a, the system can combine the component diagnostic scores calculated for triangle graphs: a, b, and c; a, b, and d; and a, c, and d. Thus, the system can calculate a cumulative diagnostic score that is a combination of component diagnostic scores calculated for each unique triangle graph of which a target node and/or a target edge is a member.

4.5.1 Node Scoring

In one implementation, upon executing the frequency offset self-consistency test, the time bias self-consistency test, and/or the triangle inequality test for a triangle graph in the network graph, the system can combine the calculated frequency offset self-consistency score, the calculated time bias self-consistency score, and/or the calculated triangle inequality self-consistency score to calculate a component diagnostic score representing the error attributable to a target triangle graph and store this component diagnostic score in association with each of the three nodes in the triangle graph. More specifically, the system can, for each unique triangle graph in the network graph and for each node in each unique triangle graph, update the cumulative diagnostic score for the node based on the time bias self-consistency score, the frequency offset self-consistency score, and/or the triangle inequality self-consistency score in Block S150. As the system executes the frequency offset self-consistency test, the time bias self-consistency test, and/or the triangle inequality test on each triangle graph in the network graph, the system can combine the calculated component diagnostic scores to calculate a cumulative diagnostic score for each node by adding (or otherwise combining) successively calculated component diagnostic scores for each triangle graph of which the node is a member. Therefore, by repeatedly testing the nodes in the network graph that are members of multiple triangle graphs, the system calculates higher cumulative diagnostic scores for nodes representing transceivers responsible for a larger number of errors in the mesh network.

In one implementation, the system can calculate multiple sets of component diagnostic scores and cumulative diagnostic scores, with each set of scores corresponding to a different self-consistency test. For example, the system can calculate a frequency offset cumulative diagnostic score, a time bias cumulative diagnostic score, and a triangle inequality cumulative diagnostic score.
By calculating separate diagnostic scores, the system can isolate causes of detected errors to the particular type of process that may not have been executed properly at each transceiver represented by each node.

In another implementation, the system can calculate a cumulative diagnostic score as a normalized sum of the component diagnostic scores of a node. More specifically, the system can calculate a cumulative diagnostic score for a target node based on an average, or other measure of central tendency, of component diagnostic scores calculated for the target node. Thus, the system calculates cumulative diagnostic scores that are independent of the number of triangle graphs of which a target node is a member, thereby preventing nodes with many connections in the mesh network from receiving artificially high cumulative diagnostic scores.

4.5.2 Edge Scoring

In one implementation, upon executing the frequency offset self-consistency test, the time bias self-consistency test, and/or the triangle inequality test for a target triangle graph in the network graph, the system can combine the calculated frequency offset self-consistency score, the calculated time bias self-consistency score, and/or the calculated triangle inequality self-consistency score to calculate a component diagnostic score representing the error attributable to the target triangle graph and store this component diagnostic score in association with each of the three edges in the triangle graph. More specifically, the system can, for each unique triangle graph in the network graph and for each edge in each unique triangle graph, update the cumulative diagnostic score for the edge based on the time bias self-consistency score, the frequency offset self-consistency score, and/or the triangle inequality self-consistency score in Block S152. As the system executes the frequency offset self-consistency test, the time bias self-consistency test, and/or the triangle inequality test on each triangle graph in the network graph, the system can combine the calculated component diagnostic scores to calculate a cumulative diagnostic score for each edge by adding (or otherwise combining) successively calculated component diagnostic scores for each triangle graph of which the edge is a member. Therefore, by repeatedly testing the edges in the network graph that are members of multiple triangle graphs, the system calculates higher cumulative diagnostic scores for edges representing communication channels responsible for a larger number of errors in the mesh network.

In one implementation, the system can calculate multiple component diagnostic scores and cumulative diagnostic scores, each cumulative score corresponding to a different self-consistency test. For example, the system can calculate a frequency offset cumulative diagnostic score, a time bias cumulative diagnostic score, and a distance-based cumulative diagnostic score. By calculating separate diagnostic scores, the system can isolate causes of detected errors to the particular type of signal being transmitted by transceivers over a communication channel represented by the target edge.

In another implementation, the system can calculate a cumulative diagnostic score as a normalized sum of the component diagnostic scores of an edge. More specifically, the system can calculate a cumulative diagnostic score for a target edge based on an average, or other measure of central tendency, of component diagnostic scores calculated for the target edge.
Thus, the system calculates cumulative diagnostic scores that are independent of the number of triangle graphs of which a target edge is a member, thereby preventing edges with many adjacent edges in the mesh network from receiving artificially high cumulative diagnostic scores.

4.6 Failure Prediction Model

Generally, the system can execute a failure prediction model on a subgraph of the network graph associated with a target node (or a target edge) in Block S170. More specifically, upon identifying a target subgraph associated with the target node (or edge) and accessing the network state associated with nodes and edges within this subgraph, the system can: generate an input vector comprising the subset of edge values associated with the target subgraph and the subset of node values associated with the target subgraph; and calculate a probability of failure of a transceiver represented by the target node within a threshold period of time based on the input vector and a failure prediction model. Thus, the system can execute a machine learning model in order to predict failure of a target transceiver (or target communication channel) based on a target network state of a target subgraph associated with the target transceiver (or target communication channel).

The system can execute a failure prediction model that is a statistical model correlating the cumulative diagnostic scores of a node or an edge with the failure of the associated transceiver or communication channel. Alternatively, the system can execute a failure prediction model that is a statistical model correlating a network state represented by a subgraph of the network graph with the failure of a transceiver or a communication channel. Thus, the system can correlate any metric or set of metrics associated with a set of nodes and edges with a failure event within the mesh network via the failure prediction model. By predicting failure prior to detectable failure (e.g., via the aforementioned self-consistency tests), the system can execute preemptive corrective actions to maintain performance of the mesh network. For example, an increase in temperature around a transceiver in the mesh network, concurrent with a small increase in the frequency offset self-consistency score associated with the transceiver, may predict a future failure of the transceiver. In this example, the system can detect these otherwise unobservable signs of failure via the failure prediction model and execute corrective actions before performance of the mesh network degrades as a result of the imminent failure of the transceiver.

In one implementation, the system can generate an input vector for a target node that includes a current time bias self-consistency score of the target node, a current frequency offset self-consistency score of the target node, and/or a current triangle inequality self-consistency score of the target node. Thus, the system can predict a future failure, as defined by these self-consistency scores, based on the current values of these self-consistency scores. Additionally, the system can generate an input vector for a target node that includes a time series of time bias self-consistency scores of the target node, a time series of frequency offset self-consistency scores of the target node, and/or a time series of triangle inequality self-consistency scores of the target node.
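As a sketch of this input-vector assembly, the snippet below flattens a fixed-length window of per-node self-consistency scores and node values into a feature vector; the feature keys, window length, and function name are illustrative assumptions rather than the document's specification.

```python
import numpy as np

def input_vector(history: list[dict], window: int = 32) -> np.ndarray:
    """Flatten the last `window` network states for a target node into a
    model input. Each state dict holds per-step self-consistency scores
    and node values (hypothetical keys)."""
    keys = ("time_bias_score", "freq_offset_score", "triangle_score",
            "temperature_c", "cumulative_score")
    rows = [[state[k] for k in keys] for state in history[-window:]]
    # Left-pad short histories with zeros so the vector length is fixed.
    pad = [[0.0] * len(keys)] * (window - len(rows))
    return np.asarray(pad + rows, dtype=np.float32).ravel()
```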
In another implementation, the system can generate an input vector including a set of node values or a time series of node values associated with the target node and/or any other nodes included in the target subgraph associated with the target node. For example, the system can generate an input vector including a time series of temperature values at each transceiver represented in the subgraph, a time series of inertial data at each transceiver represented in the subgraph, and a time series of cumulative diagnostic scores of each transceiver represented in the subgraph. Thus, the system can utilize data from transceivers associated with the target transceiver to predict failure of the target transceiver before failure occurs at the target transceiver.

In yet another implementation, the system can generate an input vector including a set of edge values or a time series of edge values associated with edges connected to the target node and/or any other edges included in the target subgraph associated with the target node. For example, the system can generate an input vector including a multipath profile of each edge in the target subgraph, a number of physical obstructions of each edge in the target subgraph, a pairwise frequency offset of each edge in the target subgraph, an uncertainty of the pairwise frequency offset of each edge in the target subgraph, a pairwise time bias of each edge in the target subgraph, an uncertainty of the pairwise time bias of each edge in the target subgraph, a pairwise distance of each edge in the target subgraph, and/or an uncertainty of the pairwise distance of each edge in the target subgraph. Thus, the system can utilize data about communication channels between transceivers associated with the target transceiver to predict failure of the target transceiver before failure occurs at the target transceiver.

In one example, the system can generate an input vector to the failure prediction model that includes a multipath profile as an edge value in the set of edge values associated with each edge in the target subgraph. In this example, the system can generate an input vector including a full MUSIC pseudospectrum representing the multipath characteristics associated with an edge. Alternatively, the input vector can include a summary derived from the MUSIC pseudospectrum, such as a set of peak delays and a set of peak amplitudes of the MUSIC pseudospectrum or a Rician K-factor calculated based on the peaks of the MUSIC pseudospectrum. Thus, the system, via the failure prediction model, can predict failure of a communication channel between two nodes based on detection of a strong multipath signal between the nodes.

In another example, the system can generate an input vector to the failure prediction model that includes a current multipath profile and a predicted multipath profile as edge values in the set of edge values associated with each edge in the target subgraph. In this example, the system can generate an input vector representing both the current multipath profile and the predicted multipath profile in the same format (e.g., both profiles represented as a full MUSIC pseudospectrum, or both profiles represented via the same summary metric). In this example, the system, via the failure prediction model, can correlate differences between the current multipath profile and the predicted multipath profile with transceiver and/or communication channel failure.
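A minimal sketch of one such summary, reducing a pseudospectrum to peak delays, peak amplitudes, and a simple dominant-to-residual power ratio standing in for a Rician K-factor; the document does not define its exact K-factor computation, so this ratio is an assumption, and the function name is hypothetical.

```python
import numpy as np

def pseudospectrum_summary(p: np.ndarray, taus: np.ndarray):
    """Reduce a MUSIC pseudospectrum to peak delays/amplitudes and a
    Rician-K-style dominance ratio (dominant peak power over the rest)."""
    interior = (p[1:-1] > p[:-2]) & (p[1:-1] > p[2:])    # local maxima
    idx = np.flatnonzero(interior) + 1
    idx = idx[np.argsort(p[idx])[::-1]]                  # strongest peaks first
    peak_delays, peak_amps = taus[idx], p[idx]
    k_factor = (peak_amps[0] / max(peak_amps[1:].sum(), 1e-12)
                if idx.size else np.inf)
    return peak_delays, peak_amps, k_factor
```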
In yet another implementation, the system can generate an input vector to the failure prediction model that includes a set of node properties and/or a set of edge properties associated with the nodes and/or edges of the target subgraph respectively. Although the node properties and/or edge properties are non-transient properties of the transceivers and/or communication channels represented by the nodes and/or edges of the subgraph, the system can still include these node properties and/or edge properties to provide additional context to the failure prediction model. Generally, the system can execute a failure prediction model implemented as a machine learning model, such as an artificial recurrent neural network or a long short-term memory neural network, that receives, as input, the input vector generated by the system and outputs a probability of failure of a particular target node or of each of the nodes within the network graph. More specifically, the system can execute the failure prediction model to generate a probability of failure for a target node or to generate an output vector including a probability of failure of each node in the network graph. In one implementation, the system can: execute the failure prediction model on a target subgraph of the network graph associated with a target node; and output a probability of failure of the target node. Therefore, in this implementation, the system executes the failure prediction model once for each node in the network graph. Alternatively, the system can: generate an input vector for the network graph (e.g., the whole network graph or some subgraph of interest within the network graph); execute the failure prediction model on the input vector; and generate an output vector that includes a probability of failure for each node in the network graph. Although the failure prediction model is described above with respect to a target node, the system can also execute the failure prediction model based on a target edge and a target subgraph associated with the target edge. In this implementation, the system executes the failure prediction model and outputs the likelihood of a communication anomaly developing along the communication channel represented by the target edge within a threshold period of time.
4.6.1 Training
Generally, the system, or a cooperative computer system, can train the failure prediction model based on training examples that include historical network graphs in addition to historical edge values and node values associated with the historical network graph captured during normal operation under a range of workloads/use cases of the mesh network. Each training example captures a moment in time and can include a topology of the historical network graph, a set of current edge values for the historical network graph, a set of current node values of the historical network graph, a time series of edge values preceding the current state of the historical network graph, and/or a time series of node values preceding the current state of the historical network graph. Thus, each training example represents a snapshot of a historical mesh network at a particular time and includes data available to the system at that particular time. Additionally, each training example is labeled according to the status (either "failure" or "continued operation") of one or more transceivers in the historical mesh network after the threshold period of time for failure detection (e.g., ten seconds, one minute, five minutes).
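One plausible shape for the recurrent model named above is sketched below in PyTorch; the hidden size, single-layer depth, and sigmoid output head are assumptions for illustration rather than details from the source:

```python
import torch
import torch.nn as nn

class FailurePredictor(nn.Module):
    """Minimal LSTM failure prediction model: consumes a batch of time
    series of per-subgraph feature vectors and emits, for each sequence,
    a probability of failure of the target node within the threshold
    period of time.
    """
    def __init__(self, feature_dim, hidden_dim=64):
        super().__init__()
        self.lstm = nn.LSTM(feature_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)

    def forward(self, x):                          # x: (batch, time, feature_dim)
        _, (h_n, _) = self.lstm(x)                 # final hidden state per sequence
        return torch.sigmoid(self.head(h_n[-1]))   # probabilities in [0, 1]
```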
For example, the training example can include a status for each node in the historical mesh network five minutes from the current time of the training example and can include a time series of edge values and a time series of node values preceding the particular time. Generally, the system can label training examples based on historical data as a failure for a particular transceiver within the historical mesh network in response to the cumulative diagnostic score of the particular transceiver having exceeded a threshold score. Alternatively, the system can label failures of transceivers via any other means. Thus, the system can distinguish training examples of transceiver failure from training examples exhibiting proper functioning of transceivers in the mesh network. In one implementation, the system can generate a large number of training examples from a historical data set by generating a training example for each time step in the time series of node values and edge values associated with a historical network graph. For example, the system can generate a training example for each one second interval in a historical data set associated with a historical network graph. Thus, the system can extract a multitude of training examples based on a limited number of historical data sets for mesh networks. Upon generating a set of training examples, the system can then execute a supervised learning algorithm to train the failure prediction model based on the training examples. Thus, the system can train an accurate failure prediction model based on historical data from prior mesh network implementations. Although generation of training examples is described above with respect to the status of particular nodes in the historical network graph, the system can also generate training examples that are labeled according to the failure of communication channels represented by edges in the historical network graph. Thus, the system can train the failure prediction model to calculate a likelihood of a communication anomaly developing along the communication channel represented by an edge within a threshold period of time.
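To ground the per-time-step extraction just described, a minimal sketch under stated assumptions: the history is a per-second list of snapshot objects whose hypothetical `nodes` mapping exposes a `cumulative_score`, and the failure threshold is illustrative only.

```python
FAILURE_THRESHOLD = 10.0   # illustrative; the source does not specify a value

def generate_training_examples(history, horizon_s=300, window_s=60):
    """Slice a per-second history of network-graph snapshots into labeled
    training examples: each example pairs the window of node and edge
    values preceding time t with a label indicating whether a node's
    cumulative diagnostic score exceeded the failure threshold
    `horizon_s` seconds later ("failure" vs. "continued operation").
    """
    examples = []
    for t in range(window_s, len(history) - horizon_s):
        window = history[t - window_s:t]       # time series preceding t
        future = history[t + horizon_s]        # snapshot after the horizon
        for node_id, node in future.nodes.items():
            label = 1 if node.cumulative_score > FAILURE_THRESHOLD else 0
            examples.append((node_id, window, label))
    return examples
```

Trained with a binary cross-entropy loss over such labeled examples, a model of the kind sketched earlier yields the per-node failure probabilities consumed by the correction triggers below.
5. Correction Triggers
Upon calculating a cumulative diagnostic score or probability of failure for a node in the network, the system can trigger a corrective action at the transceiver in the mesh network based on the cumulative diagnostic score or probability of failure of the node. More specifically, the system can, in response to detecting a cumulative diagnostic score for a target node that exceeds a threshold cumulative diagnostic score, trigger a corrective action at a target transceiver represented by the target node in Block S160. Additionally or alternatively, the system can, in response to detecting a first cumulative diagnostic score for a first edge exceeding a first threshold cumulative diagnostic score, trigger a first corrective action at a first communication channel represented by the first edge in Block S162. In another alternative implementation, the system can, in response to detecting a probability of failure of a target node representing a target transceiver exceeding a threshold likelihood, trigger a corrective action at the target transceiver in Block S164. In yet another alternative implementation, the system can, in response to detecting a probability of failure of a target edge representing a target communication channel exceeding a threshold likelihood, trigger a corrective action at the target communication channel.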
Thus, the system can trigger corrective actions at transceivers and/or communication channels between transceivers based on either a current cumulative diagnostic score of a node or edge, or based on a probability of failure of a node or edge.
5.1.1 Threshold Triggers
In one implementation, the system can: establish a threshold diagnostic score; and, in response to a cumulative diagnostic score of a node exceeding the threshold diagnostic score, trigger the transceiver associated with the node to recalibrate (i.e., execute a frequency offset calibration process) or resynchronize (i.e., execute the time synchronization process) with other nodes in the mesh network. In one implementation, in response to a cumulative diagnostic score of an edge exceeding the threshold diagnostic score, the system can trigger transceivers represented by the nodes associated with the edge to recalibrate and/or resynchronize. In another implementation, the system can establish separate diagnostic score thresholds for each separate type of cumulative diagnostic score calculated by the system. For example, the system can calculate a frequency offset cumulative diagnostic score, a time bias cumulative diagnostic score, and a triangle inequality cumulative diagnostic score, and store each of these in association with each node and edge in the network. The system can then compare each type of cumulative diagnostic score to a corresponding threshold diagnostic score. The system can then trigger specific actions by the transceivers based on the type of cumulative diagnostic score that exceeds the threshold diagnostic score.
5.1.2 Proportional Triggers
In another implementation, instead of triggering corrective actions based on thresholds, the system can: identify particular nodes and/or edges characterized by a high cumulative diagnostic score or a high probability of failure; and trigger corrective actions to improve the functionality of the transceiver or communication channel represented by the node or edge. In one implementation, the system can rank the nodes and/or edges from the node and/or edge with the highest cumulative diagnostic score or probability of failure to the node and/or edge with the lowest cumulative diagnostic score or probability of failure. The system can then trigger corrective actions at a threshold number or threshold proportion of highest ranked transceivers represented by these nodes and/or associated with these edges to prevent or recover from failure identified at these transceivers. For example, the system can trigger corrective actions at five transceivers represented by the five nodes characterized by the highest cumulative diagnostic score or probability of failure. In another example, the system can trigger corrective actions at the ninety-fifth percentile of transceivers with the highest cumulative diagnostic scores and/or likelihoods of failure in the mesh network.
5.1.3 Local Maxima Detection
In yet another implementation, the system can identify nodes and/or edges characterized by a cumulative diagnostic score or a probability of failure that represents a local maximum within the network graph. More specifically, the system can, in response to detecting a local maximum cumulative diagnostic score in the network graph, trigger a corrective action at the transceiver represented by the node at the local maximum.
Alternatively, the system can, in response to detecting a local maximum probability of failure in the network graph, trigger a corrective action at the transceiver represented by the node at the local maximum. Likewise, in response to detecting a local maximum of cumulative diagnostic scores or likelihoods of failure at an edge within the network graph, the system can trigger corrective actions at transceivers represented by nodes connected via the identified edge.
5.2 Corrective Actions
Generally, upon identifying and/or predicting failure of a transceiver or communication channel within the mesh network, the system can trigger a corrective action or a set of corrective actions in order to return the transceiver and/or communication channel that has been identified as failing or predicted to fail to normal operation. More specifically, the system can execute automated corrective actions such as frequency recalibration or time resynchronization at a target transceiver or a set of target transceivers (e.g., a set of target transceivers associated with a target communication channel). Alternatively, the system can prompt operators of the mesh network to replace specific hardware at a target transceiver in the mesh network. Thus, the system can remedy failures or predicted failures in order to return the mesh network to normal operation.
5.2.1 Frequency Recalibration
Generally, the system can, in response to identifying or predicting failure of a transceiver or communication channel, trigger frequency recalibration of a target transceiver or a set of target transceivers. In one implementation, the system can, in response to detecting a cumulative diagnostic score for a target node exceeding the threshold cumulative diagnostic score, trigger resyntonization (i.e., frequency recalibration) of a target transceiver represented by the target node. Alternatively, the system can, in response to calculating a probability of failure of a target transceiver exceeding a threshold probability of failure, trigger resyntonization of the target transceiver. In another implementation, the system can: calculate a frequency offset cumulative diagnostic score for a node; and, in response to the frequency offset cumulative diagnostic score exceeding a threshold frequency offset diagnostic score, trigger the transceiver represented by the node to execute a frequency offset calibration process. The system can evaluate the nodes and/or edges based on the frequency offset diagnostic score prior to evaluating the nodes based on the time bias diagnostic score or the triangle inequality diagnostic score due to the relationship between the frequency offset calibration process and the time bias synchronization process. Because the time bias synchronization process can occur only after accurate frequency offset calibration has occurred between two transceivers, the system can prioritize errors detected in the frequency offset between nodes. In yet another implementation, in response to detecting failure or predicting failure at a transceiver in the mesh network, the system can trigger frequency recalibration between the transceiver and each connected transceiver within the mesh network. Alternatively, in response to detecting failure or predicting failure of a communication channel in the mesh network, the system can trigger the pair of transceivers associated with the communication channel to execute frequency recalibration.
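Combining the per-metric thresholds of Section 5.1.1 with the prioritization described above, a dispatch routine might look like the following sketch; the threshold values and transceiver method names are hypothetical, and time bias resynchronization is detailed in the next section:

```python
# Illustrative per-metric thresholds; real values would be tuned per deployment.
THRESHOLDS = {
    "frequency_offset": 5.0,
    "time_bias": 5.0,
    "triangle_inequality": 8.0,
}

def dispatch_corrective_action(node, transceiver):
    """Compare each type of cumulative diagnostic score against its own
    threshold and trigger the matching corrective action. The frequency
    offset score is checked first because time bias synchronization
    presumes an accurate frequency offset calibration.
    """
    if node.scores["frequency_offset"] > THRESHOLDS["frequency_offset"]:
        transceiver.recalibrate_frequency()        # resyntonization
    elif (node.scores["time_bias"] > THRESHOLDS["time_bias"]
          or node.scores["triangle_inequality"] > THRESHOLDS["triangle_inequality"]):
        transceiver.resynchronize_time_bias()      # time bias resynchronization
```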
5.2.2 Time Bias Resynchronization
Generally, the system can, in response to detecting or predicting failure of a transceiver or communication channel in the mesh network, trigger time bias resynchronization of a target transceiver or set of target transceivers. In one implementation, the system can, in response to detecting a cumulative diagnostic score for the target node exceeding the threshold cumulative diagnostic score, trigger resynchronization of a target transceiver represented by the target node. Alternatively, the system can, in response to predicting failure of a target transceiver, trigger a resynchronization of the target transceiver in the mesh network. In another implementation, the system can: calculate a time bias cumulative diagnostic score for a node and/or edge in the network graph; and, in response to the time bias cumulative diagnostic score exceeding a threshold time bias diagnostic score, trigger the transceiver represented by the node to execute the time bias resynchronization process. Additionally, the system can: calculate a triangle inequality cumulative diagnostic score for a node and/or edge in the network graph; and, in response to the triangle inequality cumulative diagnostic score exceeding a threshold triangle inequality diagnostic score, trigger the transceiver represented by the node to execute a time bias synchronization process. Because the time bias synchronization process provides both a time bias between two transceivers and a distance between transceivers, the system can trigger a transceiver to resynchronize in response to errors in the distance between nodes or the time bias between nodes. In yet another implementation, the system can, in response to detecting or predicting failure of a transceiver in the mesh network, trigger time bias resynchronization between the transceiver and each connected transceiver in the mesh network (on a pairwise basis). Alternatively, in response to detecting or predicting failure of a communication channel in the mesh network, the system can trigger time bias resynchronization of the pair of transceivers associated with the communication channel.
5.2.3 Hardware Reset or Replacement
Generally, in response to detecting repetitive failure of a transceiver or communication channel in the mesh network, the system can prompt an operator of the mesh network to execute a hardware reset or replacement at the transceiver. For example, the system can prompt an operator of the mesh network to replace the timing circuit of the transceiver or the crystal oscillator of the transceiver in order to improve performance of the transceiver in accurately calculating time bias, frequency offset, and distance with other transceivers in the mesh network. In one implementation, the system can, in response to detecting a cumulative diagnostic score for a target node exceeding the threshold cumulative diagnostic score, trigger hardware replacement of a target transceiver represented by the target node. In another implementation, the system can track diagnostic scores for a node over multiple iterations of the method S100; and, in response to calculating a running average diagnostic score greater than a threshold running average diagnostic score for the node, the system can prompt an operator of the mesh network to replace or physically repair a transceiver corresponding to the node. The system can provide information to the operator regarding the types of errors that prompted the recommendation.
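The running-average trigger just described can be sketched with an exponentially weighted average; the smoothing factor, threshold, and notification helper below are hypothetical assumptions:

```python
def notify_operator(node_id, reason):
    """Placeholder for the operator-facing prompt described above."""
    print(f"[ALERT] node {node_id}: {reason}")

def update_running_average(node, new_score, alpha=0.1, threshold=7.5):
    """Track an exponentially weighted running average of a node's
    diagnostic score across iterations of the method S100; a persistently
    high average suggests a hardware fault (e.g., timing circuit or
    crystal oscillator) rather than a transient error.
    """
    previous = getattr(node, "avg_score", new_score)
    node.avg_score = (1 - alpha) * previous + alpha * new_score
    if node.avg_score > threshold:
        notify_operator(node.id, "persistent diagnostic errors; consider "
                                 "timing circuit or oscillator replacement")
```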
5.2.4 Anomaly Detection
Generally, in response to detecting or predicting failure of a communication channel in the mesh network, the system can detect an anomaly in the near field environment and prompt an operator of the mesh network to investigate the anomaly. For example, the system can prompt an operator to remove large metal objects or other obstructions in the vicinity of either transceiver associated with the failed communication channel. Thus, the system can aid operators of the mesh network in identifying potential causes of failure of communication channels in the mesh network. In one implementation, the system can track diagnostic scores for an edge over multiple iterations of the method S100; and, in response to calculating a running average diagnostic score greater than a threshold running average diagnostic score for the edge, the system can identify an anomaly in the near field RF environment that may be causing repeated errors in the connection between two transceivers represented by the edge.
6. Transmitter-Based Node Characterization Variation
In one variation of the method S100, the system can access a network state that also includes data based on signals transmitted from transmitters (e.g., user devices, asset tags, or any other devices transmitting to the network). More specifically, the system can access a network state including: a set of edges representing communication channels between transceivers and transmitters; and a set of nodes representing the transmitters. In this variation, the system can diagnose failure of transceivers in the network based on localization errors for these transmitters made by the system, as well as various characteristics of communication channels between the transmitters and transceivers. Thus, by including additional diagnostic information, the system can more accurately predict failure of nodes in the network. In this variation, transmitters are not limited to transmitting functionality (e.g., they may also receive radio frequency signals). However, transmitters are distinguished from transceivers in the mesh network in that the transmitters transmit ranging signals that are received by the set of transceivers, while transceivers receive these ranging signals in addition to synchronization signals and calibration signals transmitted by other transceivers in the set of transceivers.
6.1 Transmitter Node Values
As shown in FIG. 6, the system can store a set of transmitter node values in association with each transmitter node in the network graph, where the transmitter node represents a transmitter in the mesh network. More specifically, the system can access a network state including transmitter node values representing: transmitter relative location (calculated by the system via TOA or TDOA multilateration by the set of transceivers based on a ranging signal transmitted by the transmitter) and/or transmitter GNSS location. Thus, the system can access a network state that represents the status of each transmitter in the mesh network in order to correlate transmitter values that are assigned to each transmitter node by the system or GNSS with failure of the system (e.g., by estimating the accuracy of the relative location of the transmitter as determined by the network based on a comparison to the GNSS location of the transmitter).
In one implementation, the system can access a network state including a time series of transmitter node values, thereby representing the variation of these transmitter node values over time in addition to each current transmitter node value.
6.1.1 Transmitter Relative Location
Generally, the system can access a network state including a transmitter relative location associated with a set of transmitter nodes, thereby representing the system-estimated location of the transmitter. More specifically, the system can execute a TOA or TDOA multilateration algorithm to estimate the location of each transmitter in the set of transmitters based on a ranging signal transmitted by the transmitter and received by a subset of the set of transceivers. The system can then estimate the relative location of the transmitter (i.e., relative to the mesh network) based on estimated relative locations of the set of transceivers and the TOA or TDOA of the ranging signal at the set of transceivers (or a subset of the set of transceivers). Thus, the system may accurately estimate the location of each transmitter when the set of transceivers is functioning properly (e.g., to within 30 centimeters). However, when one or more of the transceivers are not functioning properly, the accuracy of the relative location calculated by the system may be reduced in comparison to the GNSS location. In one implementation, the system can access a network state including the transmitter relative location expressed in the relative coordinate system of the mesh network. Therefore, the transmitter relative location can be directly compared to the transceiver relative locations stored in association with the transceiver nodes of the network state.
6.1.2 Transmitter GNSS Location
Generally, the system can access a network state including a transmitter GNSS location associated with the set of transmitter nodes. More specifically, each transmitter operating within the mesh network can include a GNSS chipset configured to estimate the location of the transmitter via a GNSS; each transmitter can then report this GNSS location to the system. In particular, the system can access a network state including a set of transmitter node values including: a GNSS location estimate of a transmitter represented by the transmitter node; and a relative location estimate of the transmitter represented by the transmitter node. The system can then store the reported GNSS location of each transmitter in association with the transmitter node representing that transmitter. Thus, the system can include a network state that includes both the GNSS location and the relative location of each transmitter operating within the mesh network, thereby enabling the system to correlate localization inaccuracies by the system with future failure of a transceiver or communication channel in the mesh network. In one implementation, the system can convert the reported GNSS location for each transmitter operating within the mesh network to the relative coordinate system of the mesh network such that both the relative location and the GNSS location of each transmitter in the mesh network are represented in the relative coordinate system. Thus, in this implementation, the system can directly calculate the offset between the relative location and the GNSS location of each transmitter in the set of transmitters.
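Once both estimates are expressed in the relative coordinate system, the offset described above reduces to a Euclidean distance. A minimal sketch (the coordinate conversion itself is not shown here):

```python
import numpy as np

def localization_offset(relative_xyz, gnss_xyz):
    """Distance between the mesh network's relative location estimate for
    a transmitter and its GNSS fix, after both have been converted into
    the same relative coordinate system; a growing offset can feed the
    failure prediction model as a sign of transceiver degradation.
    """
    return float(np.linalg.norm(np.asarray(relative_xyz) - np.asarray(gnss_xyz)))

# e.g., localization_offset((12.1, 4.0, 0.5), (12.4, 3.8, 0.5)) -> ~0.36
# (same units as the inputs)
```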
6.2 Transmitter-Transceiver Edge Values
Generally, the system can store a set of transmitter-transceiver edge values in association with each edge between a transmitter and a transceiver in the network graph, where the transmitter-transceiver edge represents a connection or communication channel between a transmitter and a transceiver in the mesh network. More specifically, the system can access a network state including transmitter-transceiver edge values representing any characteristics of the ranging signal transmitted from the transmitter associated with the transmitter-transceiver edge and received by the transceiver associated with the transmitter-transceiver edge. For example, the system can access a network state including an SNR of the ranging signal transmitted from the transmitter to the transceiver, the multipath profile of the ranging signal transmitted from the transmitter to the transceiver, and/or a pseudorange between the transmitter and the transceiver. Furthermore, the system can store a time series of these transmitter-transceiver edge values in order to track these transmitter-transceiver edge values over time. In one implementation, the system can, for each edge between a transmitter node and a transceiver node in the network state, generate a predicted multipath profile of a current ranging signal represented by the edge based on a series of multipath profiles associated with the edge. Additionally, the system can access a network state including a set of transmitter-transceiver edge values for each edge between a transmitter node and a transceiver node in the network graph, the set of transmitter-transceiver edge values including: a current multipath profile of the current ranging signal represented by the edge; and the predicted multipath profile of the current ranging signal represented by the edge. Thus, the system can compare, via the failure prediction model, predicted multipath profiles and current multipath profiles for a ranging signal between a transmitter and a transceiver operating within the mesh network.
6.3 Transmitter-Transceiver Diagnostic Scoring
In this variation of the method S100, the system can predict, identify, and/or diagnose failures occurring at specific transceivers in the mesh network and/or along pairwise connections between transceivers in the mesh network by executing a set of consistency tests based on a subgraph of the network graph including transmitter nodes and transceiver nodes, as well as transmitter-transceiver edges and transceiver-transceiver edges. More specifically, the system can execute a series of tests on groups of three connected nodes in the network graph—which can include transmitter or transceiver nodes and, therefore, transmitter-transceiver edges and transceiver-transceiver edges—in order to attribute the source of system-wide errors (e.g., localization errors and/or time synchronization errors) to particular nodes and/or edges in the network graph. The system can then calculate a component diagnostic score based on the set of tests and repeat this process for each group of three connected nodes (whether transmitter nodes or transceiver nodes) in the network graph (i.e., each triangle graph in the network graph). Thus, the system can calculate a component diagnostic score (or a separate component score for each consistency test applied to the triangle graph) for each node or edge based on the number of groups of three nodes of which the node or edge is a member.
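One way to realize the triangle enumeration and per-node accumulation described above, including the membership-count normalization noted at the start of this section, is sketched below; `adjacency` (a node-to-neighbors mapping) and `component_score` (whichever consistency test is applied to each triangle) are placeholders, and the brute-force enumeration is for clarity, not efficiency:

```python
from collections import defaultdict
from itertools import combinations

def accumulate_component_scores(adjacency, component_score):
    """Enumerate every group of three mutually connected nodes
    (transmitter or transceiver) in the network graph and fold each
    triangle's consistency-test score into a per-node total, then
    normalize by the number of triangles each node belongs to so that
    well-connected nodes are not penalized for membership count alone.
    """
    totals = defaultdict(float)
    counts = defaultdict(int)
    for a, b, c in combinations(sorted(adjacency), 3):
        if b in adjacency[a] and c in adjacency[a] and c in adjacency[b]:
            score = component_score(a, b, c)
            for n in (a, b, c):
                totals[n] += score
                counts[n] += 1
    return {n: totals[n] / counts[n] for n in totals}
```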
The system can then calculate a cumulative sum or other cumulative function of the component diagnostic scores associated with each node or edge to calculate a cumulative diagnostic score for the node or edge. The system can then analyze these cumulative diagnostic scores to identify high cumulative diagnostic scores, which may indicate that the transceiver or the connection between transceivers represented by the node or the edge contributes disproportionately to network-wide errors. Thus, the system can detect failure of particular transceivers and/or communication channels between transceivers based on a cumulative diagnostic score for a node or edge that exceeds a threshold cumulative diagnostic score. Alternatively, the system can identify local maxima of cumulative diagnostic scores across the nodes and edges included in the network graph in order to identify a root cause of time synchronization, frequency calibration, and/or distance calculation errors within the mesh network of transceivers. Therefore, in this variation of the method S100, the system can include edges and nodes associated with transmitters within the mesh network as well as edges or nodes associated with transceivers in the mesh network. Thus, the system, via cumulative diagnostic scoring and/or failure prediction modeling, can correlate data associated with these additional transmitter-related nodes and edges with failure of transceivers in the mesh network.
6.4 Transmitter-Transceiver Failure Prediction Model
In this variation, the system can execute a failure prediction model on a subgraph of the network graph associated with a target node (or a target edge) in Block S170, wherein the subgraph of the network graph includes transmitter nodes and transmitter-transceiver edges. More specifically, upon identifying a target subgraph associated with the target node (or edge) and accessing the network state associated with nodes and edges within this subgraph (including transmitter nodes and transmitter-transceiver edges), the system can: generate an input vector comprising the subset of edge values and transmitter-transceiver edge values associated with the target subgraph and the subset of node values and transmitter node values associated with the target subgraph; and calculate a probability of failure of a transceiver represented by the target node within a threshold period of time based on the input vector and a failure prediction model. Thus, the system can execute a machine learning model in order to predict failure of a target transceiver (or target communication channel) based on a target network state of a target subgraph associated with the target transceiver (or target communication channel) while also accounting for data associated with transmitters operating within the mesh network. Generally, in this variation, the system can train and execute the failure prediction model as described above with respect to a network state not including transmitter nodes and transmitter-transceiver edges.
More specifically, while executing subgraph identification for generation of the input vector, the system can: identify a subgraph within the network graph, including transmitter nodes, transceiver nodes, transmitter-transceiver edges, and transceiver-transceiver edges, associated with a target node or a target edge; generate an input vector to the failure prediction model based on the subgraph; and generate a probability of failure of the target node or the target edge based on the input vector and the failure prediction model. The systems and methods described herein can be embodied and/or implemented at least in part as a machine configured to receive a computer-readable medium storing computer-readable instructions. The instructions can be executed by computer-executable components integrated with the application, applet, host, server, network, website, communication service, communication interface, hardware/firmware/software elements of a user computer or mobile device, wristband, smartphone, or any suitable combination thereof. Other systems and methods of the embodiment can be embodied and/or implemented at least in part as a machine configured to receive a computer-readable medium storing computer-readable instructions. The instructions can be executed by computer-executable components integrated with apparatuses and networks of the type described above. The computer-readable medium can be stored on any suitable computer-readable media such as RAMs, ROMs, flash memory, EEPROMs, optical devices (CD or DVD), hard drives, floppy drives, or any suitable device. The computer-executable component can be a processor, but any suitable dedicated hardware device can (alternatively or additionally) execute the instructions. As a person skilled in the art will recognize from the previous detailed description and from the figures and claims, modifications and changes can be made to the embodiments of the invention without departing from the scope of this invention as defined in the following claims.
The use of the same reference symbols in different drawings indicates similar or identical items.
DETAILED DESCRIPTION
FIG. 1 shows an architecture for distributing time-of-day (ToD) 102 from the grandmaster 101 to the slave line card 103. The slave line card ToD becomes synchronized with the grandmaster ToD 104 through exchange of time stamps in accordance with the IEEE 1588 protocol. The exchange is shown at 105. Each of the time stamps t1-t4 represents the departure time (t1, t3) or the receive time (t2, t4). The timestamp exchange allows determination of one-way delay (OWD) and error offset as shown at 107. The slave shares information with the master timing card 109. The system of FIG. 1 includes backup timing card 110. The slave line card 103 supplies the master timing card 109 with a SyncE signal (signal line 111), and timestamps are exchanged between the slave line card and the master timing card to synchronize the master timing card clock signal to the timing of the slave line card, which in turn has been synchronized with the grandmaster. The master timing card supplies a system clock (SysClk) 115 to the slave line card. The SysClk clocks the timestamper in the line card. A servo loop adjusts a controllable oscillator (such as a digitally controlled oscillator (DCO)) in the master timing card to synchronize the SysClk in both frequency and phase with the slave line card timing. The master timing card also supplies a 1 Pulse-Per-Second (PPS) signal 117a to the slave line card that is synchronized through a phase adjust to the time-of-day (ToD) rollover of the grandmaster. The various ToD counters contain the same value and turn over at the same time based on the 1 PPS signal. The servo loop ensures that the slave line card and the master timing card are synchronized. The master timing card 109 also supplies the master line cards 121 with the SysClk 116 and the 1 PPS signal 118. However, absent a feedback mechanism, such as the servo loop utilized to synchronize the slave line card and the timing cards, the ToD distribution to the master line cards may lack desired precision due to, e.g., trace differences, path length differences, driver differences, voltage differences, and/or temperature differences between the timing card and the different master line cards 121. Thus, although the grandmaster receives the most accurate clock in the network and the master timing card and slave line card are synchronized, other line cards (master line cards 121) are not synchronized given the open loop distribution of the 1 PPS signal by the master timing card without compensation for different trace delays between the master line cards 121 and the master timing card 109 and other differences between the line cards. Embodiments described herein describe approaches to distribute Time-of-Day (ToD) and the 1 PPS signal to master line cards in an IEEE 1588 Central Timing Architecture. These methods overcome challenges such as: (a) ToD and 1 PPS signal alignment across all line cards to <1 ns (±500 ps); (b) backplane trace lengths should be matched or compensated; (c) delay uncertainty of backplane drivers/receivers should be considered in the timing budget; and (d) I/O delay of the line card PLL should also be considered in the budget. At present, the alignment of ToD and 1 PPS is a manual, cumbersome, open-loop process. The approaches outlined in this disclosure provide more accurate closed loop measurement and adjustment techniques.
Some embodiments use existing hardware and/or software infrastructure to distribute ToD and the 1 PPS signal with <1 ns alignment accuracy across all line cards.
FIG. 2 illustrates an embodiment in which delays between the master timing card and the line cards can be determined using existing back plane communication paths between the timing card and the line cards. FIG. 2 shows the slave line card 203 and the master timing card 209. In the embodiment of FIG. 2, a synchronous packet-based communication network, such as Synchronous Ethernet (SyncE) and IEEE 1588, exists between the master timing card 209 and the slave line card 203. In FIG. 2, the connection between the timing card 209 and the slave line card 203 includes Tx SyncE 221 and Rx SyncE 223. In order to determine the trace delay, one approach measures the roundtrip time for a signal sent from the timing card 209 to the slave line card 203 and returned to the timing card over path 231. Thus, in FIG. 2, the master timing card 209 sends a signal such as a pulse over the 1 PPS signal line 225, and the slave line card 203 returns the signal back over the Rx SyncE path 223. The line card can be configured to enter a test mode in order to return the received pulse and allow the trace delay to be determined. The trace delay to the slave line card 203 is assumed to be one half of the roundtrip time for the pulse issued by the timing card 209. The measured trace delays can be accounted for at the line card to provide greater accuracy in the 1 PPS signal. FIG. 3 illustrates an embodiment where the timing card utilizes the SyncE transmit signal line 221 to send the test signal and uses SyncE receive signal line 223 to receive the return signal from the slave line card 203. The trace delay to the line card can be assumed to be one half of the roundtrip time over path 233 for the pulse issued by the timing card 209. FIGS. 4 and 5 illustrate that the identical approach can be used for all of the master line cards on the backplane, not just the slave line card. FIG. 4 shows the 1 PPS signal line being used for the transmit path and the Rx SyncE path being used for the return path to measure trace delays between master line card 401 and the timing card 403 using path 431. FIG. 5 shows the Tx SyncE signal line being used for the transmit path and the Rx SyncE being used for the return path for path 531 to measure the trace delays between master line card 501 and the timing card 503. The trace delays between the additional line cards 409 and 509 shown in FIGS. 4 and 5 and the timing card are measured in the same manner. One assumption made for embodiments illustrated in FIGS. 2, 3, 4, and 5 is that the forward and return paths are symmetric. Lack of symmetry in the forward and return paths can lead to errors in the trace delay compensation made by the line cards based on roundtrip measurements. Referring to FIG. 6, one way to limit asymmetry is to utilize bidirectional buffers 601 and 603 for the 1 PPS signal line (or the Tx SyncE signal line) so the return path is identical to the transmit path. In a test mode, the test pulse received at the slave line card 605 is returned through the bidirectional buffer 603 to the timing card 607. Thus, roundtrip delay through existing backplane communication paths can be measured for each line card at startup and at other suitable times, and appropriate compensation made based on the roundtrip measurements. That improves on the manual, open-loop alignment of ToD and 1 PPS.
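The half-roundtrip compensation described above reduces to a one-line calculation. A minimal Python sketch, with hypothetical timestamp arguments, valid only under the path-symmetry assumption just stated:

```python
def trace_delay_compensation(t_sent, t_returned):
    """Estimate the one-way backplane trace delay from a test-mode
    roundtrip: the timing card timestamps the outgoing pulse and the
    pulse returned by the line card, and the one-way trace delay is
    taken as half the roundtrip (symmetric forward and return paths).
    """
    roundtrip = t_returned - t_sent
    return roundtrip / 2.0

# e.g., a pulse sent at t = 0.0 ns and returned at t = 6.4 ns implies
# a 3.2 ns one-way trace delay to compensate at the line card
```

FIG. 7 illustrates another embodiment to achieve greater accuracy for ToD distribution to master line cards.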
In an embodiment, system 700 functions as a Telecom Boundary Clock (TBC) at the edge of a larger system, but the teachings of the embodiment of FIG. 7 can be used in multiple environments. The embodiment of FIG. 7 includes a control plane, used to exchange time stamps. Physically, the control plane can be a backplane communications path, and the line cards are physically coupled to the control plane through cabling or other electrical/optical connections with the backplane. As illustrated in FIG. 7, the control plane can further include circuits, e.g., field programmable gate arrays (FPGAs) and processors, to perform necessary functions such as switching and transparent clocking as described further herein. The intelligence required by the control plane may be disposed directly on the backplane or disposed in a printed circuit board plugged or cabled into the backplane. The control plane can utilize, e.g., various high-speed communication protocols according to the requirements of the system. In embodiments, the control plane is an Ethernet-based network. In other embodiments, rather than a backplane, the various components in the system may be integrated circuits coupled to a motherboard, and the control plane may provide communications between the various components over the motherboard. The system 700 further includes a slave line card 703, master line cards 705, a master timing card 707, and a backup timing card 708. If the slave line card 703 goes down, then the system switches to using one of the master line cards 705 as the slave card. That is possible because there are control plane inputs and outputs to every line card. Thus, there is infrastructure available for sharing time stamps between all line cards and the timing card 707. Although not shown in FIG. 7, in embodiments the timing card is also coupled to the control plane. The control plane, which is available to all of the line cards in the system, allows time stamps to be used to align all of the ToD counters in the master line cards. In an embodiment, the slave line card 703 synchronizes its ToD counter (ToDA) with the grandmaster (GM). That can occur in a manner similar to that described for the system illustrated in FIG. 1. The 1 PPS signal is normally used to synchronize the 1 second rollover of the ToD counter. However, through the control plane, the system can synchronize time of day counters by exchanging time stamps such as shown at 709. Assume the slave line card 703 becomes synchronized with the grandmaster (not shown in FIG. 7) and in turn the master timing card 707 becomes synchronized with the timing of the slave line card by using a clock signal on Rx SyncE 712. The network processor 710 (also referred to herein as a host processor) in the slave line card (or another control processor) controls the 1588 time stamp exchange with the grandmaster. The phase-locked loop (PLL) 716 generates the system clock signal 711 synchronized to the timing of the slave line card. The timing card distributes a system clock 711 through the backplane 715 (or motherboard) to all of the line cards, including slave line card 703 and the master line cards 705. The line cards are frequency locked to the system clock 711 through their PLLs 718. Each of the master line cards 705 includes a ToD counter that needs synchronization. For example, one of the master line cards 705 includes a ToD counter ToDB.
The slave line card 703 initiates a time stamp exchange 709 over the control plane with one of the master line cards 705 to synchronize the ToDA counter and the ToDB counter, with the advantage that master and slave are working at the same frequency using the existing backplane frequency distribution of the system clock 711. Based on the time stamp exchange, the one way delay ((t2−t1)+(t4−t3))/2 and error offset ((t2−t1)−(t4−t3))/2 (see 107 in FIG. 1) are used to synchronize the ToD counters ToDA and ToDB. Thus, the signal used to update the ToDB counter is based on the time stamp exchange and utilizes the system clock 711 that is synchronized to the slave line card. The 1 PPS signal is not necessary. The ToD counters throughout the system are synchronized in the same way. Note that once a master line card is aligned through a time stamp exchange, that master line card can be used to align other line cards. That is, there is no need for the slave line card to perform all the time stamp exchanges. Thus, in an embodiment, one of the master line cards 705 synchronizes the ToD counter in another one of the master line cards using a time stamp exchange. In embodiments, systems may choose to have all the time stamp exchanges initiated by a single entity such as the slave line card, but that is not necessary. Packet delay variation in the system is low because the time stamp exchange is localized. Although no longer necessary to align the ToD counters, the 1 PPS signal indicating rollover of the ToD counters can also be adjusted through this approach (time stamp exchange) if there is a desire to distribute the 1 PPS signal. Thus, the alignment of distributed 1 PPS signals can be improved using the time stamp exchange. Any kind of static asymmetry can be calibrated out. In addition, the control plane should be 1588 aware. That is, the physical layer devices (PHYs) used in the control plane to exchange the time stamps should not add their own delay to the time stamp messages; instead, they use transparent clocking to pass the time stamp without adding delay, accounting for latency through the circuits of the PHY either by adjusting the time stamp to account for the latency or by sending an additional message that indicates the latency. In addition to the need for 1588 awareness, to achieve desired accuracy for the ToD, the time stamp should be high resolution. For example, a time stamp with nanosecond resolution or higher would be considered high resolution in this context. Such resolution is available in high performance timing integrated circuits. Network processors (NPs), FPGAs, and PHYs on the line cards and the timing card may be used to provide high resolution time stamping. An advantage of the embodiment in FIG. 7 is that existing infrastructure of the control plane can be exploited to achieve higher ToD accuracy.
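The one way delay and error offset arithmetic recurs throughout this architecture, so a minimal Python sketch may help; the t1-t4 arguments follow the timestamps defined with respect to FIG. 1, and the example values are illustrative only:

```python
def owd_and_offset(t1, t2, t3, t4):
    """Two-way IEEE 1588-style exchange: t1 and t4 are captured at the
    initiating card, t2 and t3 at the responding card. Returns the
    one-way delay and the error offset used to align the ToDA and ToDB
    counters.
    """
    one_way_delay = ((t2 - t1) + (t4 - t3)) / 2.0
    error_offset = ((t2 - t1) - (t4 - t3)) / 2.0
    return one_way_delay, error_offset

# e.g., owd_and_offset(t1=100, t2=160, t3=200, t4=250) -> (55.0, 5.0),
# i.e., a 55-unit path delay and a 5-unit counter offset to correct
```

Option 4
1 PPS signals generated by various boards in a distributed system need to be synchronized so that they occur at nearly the same instant throughout the system. To achieve that degree of synchronization, and referring again to FIG. 1, in conventional systems the timing card 109 sends simultaneous 1 PPS signals to every line card. Since these 1 PPS signals are transmitted simultaneously, separate communications paths are required from the timing card to the line cards for each of these 1 PPS signals, as shown for 1 PPS signals 117a, 117b, and 117c in FIG. 1.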
That has two problems: (1) multiple dedicated communications paths are required; and (2) the lengths of these paths to and from the master timing card 109 are not identical, adding uncompensated delay and causing misalignments. As the number of line cards increases, so does the number of traces in the backplane. The control and coordination of these systems is performed over yet another shared communications channel (see the control plane 701 in FIG. 7), adding more uncertainty. To address such issues, embodiments herein utilize a shared bus that time interleaves 1 PPS signals in such a manner that the delays introduced by interleaving the data in time can be precisely removed. Furthermore, the same shared bus can be utilized to also send control and coordination data, avoiding the use of another system. The shared bus provides one trace in the backplane that connects to all of the line cards, as opposed to separate traces to all of the line cards. FIG. 8 illustrates utilization of the 1 PPS signal line in conventional systems. Over the 1 second interval, the 1 PPS signal line contains a 0.1 μs pulse 801. For the remainder of the time (greater than 99.99%), indicated at 803, the signal line remains unused. In addition, the timing card sending the 1 PPS signals and the master line cards receiving the 1 PPS signals are configured in a star configuration with the timing card in the center and the master line cards connected by separate 1 PPS signal lines to the timing card. Referring to FIG. 9, embodiments address the shortcomings of dedicating a 1 PPS connection to each master line card in a star configuration by using time information bus 901, in which the 1 second between rising edges of the 1 PPS signal on the PPS signal line is divided into multiple time slots. For example, the time information bus may be divided into frames that have 64 time slots for a time information bus that supports a system with 32 line cards. Of course, other embodiments may use a different number of time slots and support a different number of line cards. In the embodiment of FIG. 9, the timing card 905 generates and distributes the system clock (SysClk) 902 to all the line cards. Each of the line cards receives the system clock at phase-locked loop (PLL) 906 and maintains phase and frequency lock to the system clock. The microcontroller unit (MCU), also shown at 906, provides control functionality for the PLL, including adjusting the phase and/or frequency of the local clock signal 931 generated by the PLL 906 based on time stamp exchanges. The host processor 926 implements the messaging and protocol stack associated with IEEE 1588 and communicates with the time stamp logic in logic block 928. The local clock 931, based on the system clock 902, clocks the ToD counter 908 in each line card. The system clock is synchronous with the 1 PPS signal. In embodiments, a second timing card (not shown) provides redundancy. With a 125 MHz system clock and 64 time slots, each time slot is 1,953,125 cycles of the system clock, or approximately 15.6 ms. Rather than being distributed in a star configuration, the time information bus 901 is a passive bi-directional bus (a trace in the backplane going to each line card), and every card connected to the bus can transmit to or receive from the bus. That approach minimizes the number of traces in the backplane, which makes extending the bus to more line cards straightforward compared to the star configuration.
In addition, the physical path is the same for both the receive and transmit directions, thus providing symmetry, which can be useful for accounting for path length differences. FIG.10illustrates a 1 second time information bus frame1001divided into multiple time slots.FIG.10shows 64 time slots numbered (0, 1, 2, . . . , 63). Some of the time slots are allocated for transmission by the primary timing source to the line cards, and other time slots are allocated for transmission to the primary timing source from the line cards. In an embodiment, the primary timing source utilizes even time slots to send out the 1 PPS timing signals. The primary timing source provides the primary timing reference for the system and could be one of the line cards, e.g., the slave line card903, or the timing card905. The odd time slots are used by the line cards to send back a pulse, e.g., in a test mode, to the primary timing source and/or to send back other control and/or timestamp (TS) information. Using the even time slots for transmissions to the line cards and odd time slots for transmission from the line cards eliminates contention on the time information bus. In other embodiments the role of the odd and even time slots are swapped. If the slave line card is the primary timing source, the timing card communicates on the time information bus the same as one of the line cards. Time slot 0 or 1 may be encoded with identifying information in a predetermined location in the time slot for other cards to identify the time slot to keep the time information bus aligned. Alternatively, one or more other time slot(s) may include time slot identifying information in a predetermined location. Line cards utilize the bus based on an identification unique to the line card, e.g., their line card number on the bus (0, 1, 2 3, . . . ). Thus, e.g., line cards receive 1 PPS signals on the slot number equal to (line card number×2) and transmit on the time information bus on the time slot number equal to ((line card number×2)+1). In that way line cards receive 1 PPS signals on even time slots and transmit information on odd time slots. Other embodiments use different approaches to assigning time slots to line cards based on the line card number. When the system starts, the timing card905functions as the primary timing source. At some point in time, one of the line cards becomes a Precision Time Protocol (PTP) slave and in embodiments the PTP slave line card903assumes the primary timing source role. In the embodiment ofFIG.9, the PTP slave is communicatively coupled to the Grandmaster (GM) through the physical layer (PHY). That change in primary timing source role is coordinated via communications on the timing information bus or via the control plane (seeFIG.7).FIG.9also shows a communication channel935from the line card907to a downstream external device that can be, e.g., an optical fiber connection. FIG.11illustrates an exemplary time slot1100. At the beginning of the time slot, the primary timing source sends the 0.1 μs 1 PPS signal as pulse1101. Guard bands1103and1105extend for 1 ms from the beginning and end of the time slot leaving approximately 1000 bits for transmission of other information during the time slot to the line card from the primary timing source. Each of the line cards907is assigned to one of the 64 slots for receiving the 1 PPS signal. Thus, for an embodiment with 32 line cards, a 1 PPS signal is sent out 32 times during each second, one for each line card. 
The timing of the 1 PPS signal is known because it is known to occur at a particular offset of the system clock from the beginning of the frame. Assuming the 1 PPS signal occurs at the beginning of a time slot, for a 125 MHz system clock, the offset is (N×1,953,125) system clock cycles from the beginning of the frame, where N is the time slot number in the frame. For other locations for the 1 PPS signal in the time slot, the offset is increased based on the specific location in the time slot. The time slots can also serve as dedicated data channels for transmitting timestamp (TS) data t1, t2, t3, and t4. The time stamp logic is shown at 910 in the line cards 903 and 907 and at 919 in the timing card 905. Due to the time slots, certain time stamp information is already known. For example, assume the 1 PPS signal from the primary timing source (or any other signal at a known location in the time slot) serves as the first time stamp. The time stamp itself is known a priori by the primary timing source based on the time slot number for the timing pulse. The t2 time stamp indicates the time the 1 PPS signal is received by the line card and can have, e.g., a range of ±1 μs to encompass worst case backplane travel. An 11 bit time stamp in time stamp logic 910 provides for 1 nanosecond accuracy, while a 15 bit time stamp gives 100 picosecond resolution. The t3 timestamp represents the local time the message is sent to the primary timing source and is known a priori by the time slot number (since each line card is assigned a unique time slot number), assuming the time stamp is sent at a known location in the time slot. The t4 time stamp represents the time the t3 message is received by the primary timing source. Again, a range of ±1 μs should encompass worst case backplane travel. An 11 bit time stamp provides for 1 nanosecond accuracy, while a 15 bit time stamp provides 100 picosecond resolution. The length of the time stamp depends on the accuracy requirements of the particular implementation. The one way delay ((t2−t1)+(t4−t3))/2 and error offset ((t2−t1)−(t4−t3))/2 (see 107 in FIG. 1) can be used to determine the appropriate compensation to be used to account for delays between the primary timing source and the line card. Trace delays between the primary timing source and the line cards can also be determined in a test mode by the primary timing source sending a pulse, which the line card returns over the time information bus. The symmetry on the bus makes the calculation of the delay a divide by two that can be used to accurately compensate for the delay in the backplane between the primary timing source and each of the line cards. While the embodiment of the time slots shown in FIG. 11 places one 1 PPS signal in a time slot, in other embodiments, all the 1 PPS signals occur in the first ms of the frame. Thus, each master line card receives the 1 PPS signal at a predefined time in the first ms of the frame. The rest of the 1 second frame can be used for transmitting data/control in assigned time slots based on the unique line card identification, e.g., (0, 1, 2, 3, . . . ). Other embodiments group the 1 PPS signals at other predetermined times in the frame, allowing the remainder of the frame to be used for data/control information. In still other embodiments, the 1 PPS signal shown in FIG. 11 is supplied to all of the line cards at the same time.
That is, at a predetermined time in the frame, e.g., the beginning of the first time slot, the line cards listen to a broadcast of the 1 PPS signal, and the remainder of the frame is available for messaging between the line cards and the primary timing source. The remainder of the frame can be divided up into time slots for transmissions to and from respective line cards according to their line card ID. In an embodiment, time slot 0 belongs to the primary timing source, and, when functioning as the primary timing source, the timing card assumes the time slot of the primary timing source. However, the assignment of the primary timing source does not have to be static, and whichever card is the primary timing source can assume the first time slot. Present implementations have a separate system for incorporating 1 PPS information into distributed systems from satellite timing signals such as GPS (United States), Galileo (Europe), BeiDou (China), and other types of Global Navigation Satellite System (GNSS) technology. By timestamping the received satellite 1 PPS signal, a single approach can be used to interface an IEEE 1588 system 900 to other networked IEEE 1588 systems and GNSS signals. The source of the "system" for 1 PPS/ToD will move to where the primary timing signal is coming into the system. The primary timing signal may come into the system from the line card that has the primary Precision Time Protocol (PTP) role (i.e., the PTP slave). In embodiments, when the system is in GPS (or other satellite system) operation or initial bring-up or free-run, the timing card provides the source of the system timing. The timing card has a GPS unit, which can be used as a backup in case the PTP slave goes down. Moving the source of the system to where the primary timing signal is coming into the system helps reduce the degradation of the timing information as it is being processed by more cards. Embodiments in GPS operation use a timestamper, which simplifies the operation of the system considerably. Use of the timestamper keeps the PTP timestamp concept used for the PTP slave, but switches to using timestamps based on the GPS 1 PPS signal. The operation is similar to the PTP one-way time sync configuration. That is, with a GPS signal, there is no communication back to the GPS system. Once the GPS information is time-stamped, the system treats the GPS information as a primary timing source. Thus, referring back to FIG. 9, time stamp logic 919 in timing card 905 receives a satellite 1 PPS signal 921. The PLL 925 in the timing card 905 becomes synchronized with the 1 PPS signal, and the 1 PPS signal sent over the time information bus 901 is synchronized with the satellite 1 PPS signal. That makes the ToD of the system 900 synchronized to the satellite 1 PPS signal. Switching between PTP and GPS can be smoother (since the same control loop is used) if time stamps are used to align all of the line cards in the system over a dedicated time information bus, since any concerns about buffer delays of a shared communications resource are eliminated. Similar to PTP, timestamp data is exchanged between the primary timestamper, e.g., the timing card 905, and the distributed timestampers, e.g., the line cards 907. The data is exchanged via the time information bus 901.
Note that the time stamper in the line cards and the timing card may reside in field programmable gate arrays (FPGAs)928or other types of integrated circuits, and in embodiments the timestamper has the ability to time stamp internal signals or external signals received by the integrated circuit as needed to implement the 1588 time stamp exchange. When the primary timing source is moved, e.g., from the slave line card coupled to the grand master to the timing card coupled to receive a GPS signal, the current primary timing source (the slave line card) goes into holdover. In holdover, the phase and frequency of the 1 PPS signal are held at their current values. In addition, the timing card enters holdover of the system clock (SysClk), which is distributed over the backplane to the line cards and synchronized to the 1 PPS/ToD used in the system. Thus, the system clock is held at its current phase and frequency. The new primary timing source (the timing card905) does the equivalent of phase jamming of its 1 PPS signal, that is, adjusting the phase of the 1 PPS signal to match the new primary timing source. Remember the system was locked, so the system clock is very close in frequency to what it should be, as is the 1 PPS signal. In an embodiment, the communication regarding the change in primary timing source occurs over the time information bus. Thus, the current or future timing source sends a message over the time information bus requesting the change, which is acknowledged by the message recipient. Additional messages as needed to make the change are exchanged over the time information bus. Thus, e.g., the new primary timing source (the timing card905) communicates to the PTP Slave line card903that starting, e.g., at the next frame, the new primary timing source (the timing card905) will be supplying the 1 PPS signal. Once nominally locked, the timing card905exits holdover. New time stamps are exchanged with all the line cards since the path lengths between the new primary timing source and the line cards differ from the path lengths between the previous primary timing source and the line cards. Note that path asymmetries to and from the line cards are nil since the time information bus is being used for communication in both directions. The time stamper measures, at the pin, the outgoing pulse as well as the incoming pulses. The only sensitivity is to the variability in the path between the pin of the integrated circuit and the time stamper internal to the integrated circuit. FIG.12illustrates the system clock (SysClk) primary loop1201and the PTP master timing loop1203. The SysClk primary timing loop1201locks the system clock (SysClk)902to the timing of the PTP slave and thus the grandmaster (GM), assuming the PTP Slave is acting as the primary timing source. The PTP Master Timing Loop1203allows the PTP Masters to have their timing adjusted based on time stamp exchanges over the time information bus. Thus, assuming the PTP Slave903is supplying the 1 PPS signals over the time information bus, the 1 PPS signals can be adjusted based on, e.g., time stamp exchange over the time information bus901. Thus, the time information bus may be utilized to provide both the 1 PPS signal and bidirectional communication between the primary timing source and the other cards (e.g., line cards or timing card) in the system. The time information bus may be used to exchange time stamps between the primary timing source and the other cards in the system.
The time information bus may also be used when the primary timing source changes from, e.g., the timing card based on a satellite 1 PPS signal to the PTP slave line card coupled to the GrandMaster, or vice versa. The terms "first," "second," "third," and so forth, as used in the claims, unless otherwise clear by context, are used to distinguish between different items in the claims and do not otherwise indicate or imply any order in time or location. For instance, "a first time slot" and "a second time slot" do not indicate or imply that the first time slot occurs in time before the second time slot or in a particular location in a frame. Thus, various aspects have been described related to use of a time information bus to transport the 1 PPS signal. The description of the invention set forth herein is illustrative, and is not intended to limit the scope of the invention as set forth in the following claims. Other variations and modifications of the embodiments disclosed herein may be made based on the description set forth herein, without departing from the scope of the invention as set forth in the following claims.
32,037
11863300
DETAILED DESCRIPTION One or more embodiments of the inventive subject matter described herein provide systems and methods that use efficient determinism of time-sensitive networking to increase cybersecurity by examining positive feedback between non-classical physics and time-sensitive networking. The difference of elapsed time that occurs due to relativity is treated by the timing and synchronization standard as a contribution to clock drift of network nodes (e.g., switches), and a time-aware scheduler device of a time-sensitive network is configured relative to a time reference of a grandmaster clock device of the network, but then loses simultaneity with a local relative time reference of the scheduler device. FIG.1schematically illustrates one embodiment of a network control system107of a time-sensitive network system100. The components shown inFIG.1represent hardware circuitry that includes and/or is connected with one or more processors (e.g., one or more microprocessors, field programmable gate arrays, and/or integrated circuits) that operate to perform the functions described herein. The components of the network system100can be communicatively coupled with each other by one or more wired and/or wireless connections. Not all connections between the components of the network system100are shown herein. The network system100includes several nodes105formed of network switches104and associated clocks112("clock devices" inFIG.1). While only a few nodes105are shown inFIG.1, the network system100can be formed of many more nodes105distributed over a large geographic area. The network system100can be an Ethernet network that communicates data signals along, through, or via Ethernet links103between devices106(e.g., computers, control systems, etc.) through or via the nodes105. The data signals are communicated as data packets sent between the nodes105on a schedule of the network system100, with the schedule restricting what data signals can be communicated by each of the nodes105at different times. For example, different data signals can be communicated at different repeating scheduled time periods based on traffic classifications of the signals. Some signals are classified as time-critical traffic while other signals are classified as best effort traffic. The time-critical traffic can be data signals that need or are required to be communicated at or within designated periods of time to ensure the safe operation of a powered system. The best effort traffic includes data signals that are not required to ensure the safe operation of the powered system, but that are communicated for other purposes (e.g., monitoring operation of components of the powered system). The control system107includes a time-aware scheduler device102that enables each interface of a node105to transmit an Ethernet frame (e.g., between nodes105from one computer device106to another device106) at a prescheduled time, creating deterministic traffic flows while sharing the same media with legacy, best-effort Ethernet traffic. The time-sensitive network100has been developed to support hard, real-time applications where delivery of frames of time-critical traffic must meet tight schedules without causing failure, particularly in life-critical industrial control systems. The scheduler device102computes a schedule that is installed at each node105in the network system100. This schedule dictates when different types or classifications of signals are communicated by the switches104.
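As a rough sketch of what such an installed schedule might look like in code, the following assumes a repeating cycle divided into per-class transmission windows; the GateWindow structure and all names are hypothetical illustrations, not the actual scheduler implementation.

```python
from dataclasses import dataclass

@dataclass
class GateWindow:
    start_ns: int        # window start, offset from the cycle start
    end_ns: int          # window end, offset from the cycle start
    traffic_class: str   # e.g., "time-critical" or "best-effort"

def may_transmit(schedule, now_ns, cycle_ns, traffic_class):
    """Return True if a frame of the given class may be sent at time now_ns."""
    phase = now_ns % cycle_ns  # position within the repeating cycle
    return any(w.start_ns <= phase < w.end_ns and w.traffic_class == traffic_class
               for w in schedule)

# A 1 ms cycle: the first 200 us reserved for time-critical traffic, the
# remainder for best-effort traffic.
schedule = [GateWindow(0, 200_000, "time-critical"),
            GateWindow(200_000, 1_000_000, "best-effort")]
print(may_transmit(schedule, 1_050_000, 1_000_000, "time-critical"))  # True
print(may_transmit(schedule, 1_250_000, 1_000_000, "time-critical"))  # False
```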
The scheduler device102must remain synchronized with a grandmaster clock device110because clock instability results in unpredictable latency when frames are transmitted. The grandmaster clock device110is a clock to which clock devices112of the nodes105are synchronized. A consequence of accumulated clock drift is that a frame can miss its time window and must wait for the next window. This can conflict with the next frame requiring the same window. A centralized network configurator device108of the control system107is comprised of software and/or hardware that has knowledge of the physical topology of the network100as well as desired time-sensitive network traffic flows. The configurator device108can be formed from hardware circuitry that is connected with and/or includes one or more processors that determine or otherwise obtain the topology information from the nodes105and/or user input. The hardware circuitry and/or processors of the configurator device108can be at least partially shared with the hardware circuitry and/or processors of the scheduler device102. The topology knowledge of the network system100can include locations of nodes105(e.g., absolute and/or relative locations), which nodes105are directly coupled with other nodes105, etc. The configurator device108can provide this information to the scheduler device102, which uses the topology information to determine the schedules. The configurator device108and/or scheduler device102can communicate the schedule to the different nodes105. A link layer discovery protocol can be used to exchange the data between the configurator device108and the scheduler device102. The scheduler device102communicates with the time-aware systems (e.g., the switches104with respective clocks112) through a network management protocol. The time-aware systems implement a control plane element that forwards the commands from the centralized scheduler device102to their respective hardware. The Timing and Synchronization standard is an enabler for the scheduler device102. The IEEE 802.1AS (gPTP) standard can be used by the scheduler device102to achieve clock synchronization by choosing the grandmaster clock device110(e.g., which may be a clock device112of one of the switch devices104), estimating path delays, and compensating for differences in clock rates, thereby periodically pulling clock devices112back into alignment with the time that is kept by the grandmaster clock device110. Because the clock devices112are pulled back into alignment with the grandmaster clock device110, phase locked loops (PLLs) are not used in one embodiment of the network system100due to their slow convergence and because the loops are prone to a gain peaking effect. The clock devices112can be measured by the configurator device108or the grandmaster clock device110periodically or otherwise repeatedly sending generalized precision time protocol (gPTP) messages. The operation consists mainly of comparing the timestamps of the precision time protocol messages transmitted or received by the local switch device104with the timestamps advertised by the neighbor switch devices104. This way, any factors affecting clock drift are correctly detected by the protocol. A clock device112that is suddenly pulled into the past or moved to the future relative to the time kept by the grandmaster clock device110can impact the local execution of a time-aware schedule.
For example, time-critical traffic may not be communicated by the node105that includes the non-synchronized clock device112within the scheduled time period for time-critical traffic. The gPTP standard provides a continuous and monotonically increasing clock device112. Consequently, the scheduler device102relies on a clock device112that cannot be adjusted, and alignment of the clock device112is based on logical syntonization, the offset from the grandmaster clock device110, the link propagation delays with the neighbors, and the clock drifts between the local clock devices112. The IEEE 802.1AS standard can be used to detect intrinsic instability and drift of a clock device112. This drift can occur for a variety of reasons, such as aging of the clock device112, changes in temperature or extreme temperatures, etc. Relativistic effects from the theory of special and general relativity can be viewed as an extrinsic clock drift and can encompass gravitational and motion time dilation. For example, two clock devices112with the same intrinsic parameters would detect no drift, but relativity would cause drift of the time kept by these clock devices112from the grandmaster clock device110. While general relativity can be rather complicated, gravitational time dilation is straightforward to apply. In equation (1) below, G is the gravitational constant, M is the mass of the gravitational body in kilograms, R is the radius, or the distance from the center of the mass, in meters, and c is the speed of light in meters per second. Consider two clock devices112, one located at a height of 100 m within the Earth's gravitational field and another at an infinite distance from any gravitational field, that is, experiencing no gravitation. Time passes slower within a gravitational field, so the hypothetical clock device112located at infinity would be the fastest known clock device112. When one second has passed for the clock device112located at infinity, consider how much time has passed as measured by the clock near Earth. The time at infinity is denoted as T and the time on Earth as T0. To determine how much time has passed on a clock device112at altitude h as compared to the passage of time measured on a clock at the surface of the earth, calculate the time dilation term at altitude h, divide it by the time dilation term at the surface of the earth, take the square root of the result, and multiply this ratio by the time interval measured at the surface of the earth; the result is the amount of time that has passed on the faster clock device112located higher in the field at altitude h. For a height difference of 100 m, the higher clock gains roughly 11 femtoseconds for each second of elapsed time.

T = \sqrt{\dfrac{1 - \dfrac{2GM}{(R+h)c^{2}}}{1 - \dfrac{2GM}{Rc^{2}}}}\, T_{0} \quad (1)

Clock drift induced by gravitational time dilation seems negligible at first glance, particularly when the speed of transmission is 1 Gbps. To make an Ethernet frame of 64 bytes miss its time-aware schedule at a port speed of 1 Gbps, 672 ns of drift must have accumulated, because the 64-byte frame plus the 20 bytes of preamble, start frame delimiter, frame check sequence, and interframe gap occupy 84 bytes, or 672 ns, on the wire. With a height difference of 100 m between clocks within the network, such a drift can be obtained within two years of uninterrupted service. In one embodiment, the schedules provided by the configurator device108are relative to grandmaster time and may ignore time dilation. As a result, the schedules lose simultaneity.
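Equation (1) and the 1 Gbps example can be checked numerically with a short sketch; the physical constants below are standard approximate values, so the printed figures are approximate rather than exact.

```python
import math

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24    # mass of the Earth, kg
R = 6.371e6     # mean radius of the Earth, m
c = 2.998e8     # speed of light, m/s

def dilation_factor(h):
    """Equation (1): seconds elapsed at altitude h per second at the surface."""
    return math.sqrt((1 - 2 * G * M / ((R + h) * c**2)) /
                     (1 - 2 * G * M / (R * c**2)))

drift_per_second = dilation_factor(100.0) - 1.0   # ~1.1e-14, i.e. ~11 fs/s
frame_ns = (64 + 20) * 8                          # 84 bytes at 1 Gbps -> 672 ns
seconds_to_miss = frame_ns * 1e-9 / drift_per_second
print(f"{drift_per_second * 1e15:.0f} fs of drift per second; "
      f"schedule miss after ~{seconds_to_miss / 86400 / 365:.1f} years")
```

Running this prints roughly 11 fs of drift per second and a schedule miss after about 1.9 years, consistent with the "within two years" figure above.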
While neglecting time dilation can be done within an acceptable error margin, the inventive subject matter described herein addresses cases where errors on the scheduler devices102due to relativity are important. That is, where error caused by clock drift at the nodes105can cause time-critical traffic to not be communicated within the scheduled time window for time-critical traffic at one or more of the nodes105. Several use cases involving pico-satellites or high-speed networks (for example, plane-to-ground transmissions, high speed train communications, smart cities interacting with cars on highways, etc.) subject to a significant gravitational gradient are examples where relativity can cause significant drift in the scheduler device102. One or more embodiments of the inventive systems and methods described herein examine the impact of time synchronization error upon time-sensitive network scheduling by the scheduler device102of the control system107, the impact of time synchronization error on the location, placement, or selection of the grandmaster clock device110in the network system100, and the impact of time synchronization error on bandwidth. The systems and methods define specific local guard bands that dynamically change size based on changes in the time dilation. The guard bands are determined as time periods and/or network bandwidths in which non-time-critical Ethernet frame traffic cannot be communicated through the node or nodes that are allocated or assigned the guard bands. FIG.2schematically illustrates a high-level concept behind the analysis described herein. A network of clock devices112represented at the top ofFIG.2are assumed to synchronize imperfectly with one another due to time dilation. The clock devices112provide timing for corresponding systems of IEEE 802.1Qbv gates200represented at the bottom ofFIG.2. These gates200can represent the nodes105of the network system100shown inFIG.1. Time-sensitive data flows202of data frames between the gates200also are shown inFIG.2. Clock devices112may never perfectly synchronize, and synchronization error has an impact on the ability of time-sensitive network flows202to operate correctly. Time-sensitive data flows202cross diverse local time references and are subject to time dilation that cannot be measured by the gPTP standard. For example,FIG.2shows clock devices112located at different altitudes, and subject to different relativities. The clock devices112located in the mountains, for example, are synchronized to the grand master relative time (e.g., of the grandmaster clock device110shown inFIG.1), but time-sensitive network data flows202reaching the clock devices112are "accelerating" because of time dilation. The configurator device108shown inFIG.1can prevent or correct for this acceleration by applying compensation on the configuration of the scheduler device102. This compensation can occur by determining a guard band to be applied for communication of data flows at one or more of the nodes105or gates200. This guard band can dynamically change as the compensation needed to correct for clock drift changes over time. To compute the impact of time-sensitive network timing error, the scheduler device102computes schedules for network bridges (e.g., switches104). The scheduler device102can use a heuristic approach to a problem that is non-deterministic polynomial-time hard (NP-hard). The schedules can be computed by assuming that individual clock error is independent and normally distributed.
The clock devices112may drift with a mean μ and have a variance σ. Each gate system200can receive or determine time from one of the distributed clocks112that is synchronized by the IEEE 802.1AS standard. Time-sensitive data flow paths are scheduled by the centralized scheduler device102assuming perfect synchronization. If clock synchronization fails to achieve a sufficient degree of synchronization, this failure could cause multiple Ethernet frames from different time-sensitive network flows202to be simultaneously transmitted on the same link. This would require an alternate scheduling mechanism to mitigate potential collision and frame loss, at the expense of an unnecessary and unpredictable delay in transmission. Thus, in the presence of synchronization error, Ethernet frames in time-sensitive network flows202will have a probability of exceeding their maximum, deterministic latency requirement and will suffer significant jitter. Under certain synchronization errors, it may even be possible for Ethernet frames to completely miss a scheduled transmission time window and catch another open window, thus impacting other time-sensitive network flows202that were initially scheduled on different time windows. A guard band can be dynamically calculated and added to the schedules to mitigate clock error and ensure that time-critical traffic is successfully communicated. This provides at least one technical effect of the inventive subject matter described herein. Dynamically altering the guard band can ensure that packets (that are needed to be delivered at certain designated times to ensure the safe operation of systems using the time-sensitive network) are delivered on time, even with drift of clocks away from the grandmaster clock and/or other differences between the times tracked by the clocks and the master time maintained by the grandmaster clock. In one embodiment of the inventive subject matter, the scheduler device102is provided the details of an Ethernet network system100(shown inFIG.1) and requested time-sensitive network flows202, and computes schedules for each flow202. While the scheduler device102is designed to operate with real Ethernet networks100and manually crafted time-sensitive network flows202, one component for this analysis is the ability to randomly generate large numbers of time-sensitive network flows202in a large, randomly generated Ethernet network100. Thus, the scheduler device102is able to analyze large, complex time-sensitive network schedules in large, complex networks100. Random jitter can be unpredictable and is assumed to be Gaussian (e.g., thermal noise). Deterministic jitter can be predictable and bounded (e.g., duty cycle, distortion, and inter-symbol interference). Clock jitter can have a Gaussian distribution. Jitter and parts-per-million (PPM) are related by df = (f/10^6)·PPM, where f is the center frequency of an oscillator and df is the maximum frequency variation. In one embodiment, the clock devices112can be assumed by the scheduler device102to have an accuracy of +/−100 PPM with 5 picoseconds of root mean square (RMS) jitter. The RMS error can be related to the Gaussian variance by σn/√(2N), where N is the number of samples (e.g., 10,000), and the peak-to-peak period jitter equals +/−3.72 times the RMS jitter. One part of the analysis performed by the scheduler device102examines how jitter propagates from one clock device112to another clock device112.
Random noise can be added by the scheduler device102, while correlation in noise reduces the purely additive characteristic and creates additional uncertainty. The scheduler device102can propagate clock drift and jitter from the grandmaster clock device110through all other (e.g., slave) clock devices112. For example, the other clock devices112can be repeatedly synchronized with the grandmaster clock device110. The model also considers the fact that path delay reduces the ability of the gPTP standard to keep slave clock devices112synchronized with the grandmaster clock device110. The scheduler device102implementation enables experimentation with clock accuracy and placement, and determines the impact of such experimentation on time-sensitive network scheduling. FIG.3illustrates a fundamental model showing a master clock device110and a slave clock device112separated by an Ethernet link103. The slave clock device112is sampling from a Gaussian distribution that represents the dynamics of oscillation in the master clock110. The probability density function will flatten due to jitter (e.g., variance). Sync messages carrying the latest statistical sample of the time and frequency of the master clock device110can be periodically or otherwise repeatedly sent to the other clock devices112. This brings the times and frequencies of the clock devices110,112back into alignment, subject to drift until the next sync message is sent from the master clock device110to the other clock devices112. There is a delay between corrections, limited ultimately by the time to transfer a message across the link103. As a result, the sync messages only correct the drift (e.g., the mean), while the Gaussian probability density function for the clock devices112will continue to flatten farther from the master clock device110. In one example, jitter and Allan variance can be disregarded, and only the drift for 100 PPM clock devices110,112may be considered. Assuming 100 MHz clock devices110,112, the clock devices110,112may deviate between the limits of −100,000 ns and 100,000 ns every second. If a sync message is transmitted from the master clock device110to the clock devices112every millisecond (or an even less frequent rate), a slave clock device112can drift from −100 ns to 100 ns, not including additional drift due to delay of communication along the link103. Faster links and a faster sync message transmission rate can enable better synchronization between the clock devices110,112. Jitter, however, adds to the variance of the clock time distribution and accumulates along each hop along the links103from the master clock device110. Systemic clock inaccuracy, such as temperature change, also can have an impact. If multiple clock devices110,112experience the same temperature change and drift at approximately the same rate, the clock devices110,112can continue to remain correlated with one another and there is little impact on the timely communication of frames according to the schedule dictated by the scheduling device102. If the variance were impacted, however, this could have an effect. Since clock drift and variance can be independently and normally distributed, mean and variance accumulate via simple summation when experienced through time-sensitive paths103. Two statistical properties that impact frame scheduling are clock correlation and clock variance.
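The 100 PPM example reduces to a short calculation; the sketch below assumes the worst case in which a slave clock runs at its full rate error for an entire sync interval, ignoring link delay and jitter.

```python
def max_frequency_variation_hz(f_hz, ppm):
    """df = (f / 10^6) * PPM for an oscillator with center frequency f."""
    return f_hz / 1e6 * ppm

def worst_case_drift_ns(ppm, sync_interval_s):
    """Worst-case time drift accumulated between two sync messages."""
    return ppm * 1e-6 * sync_interval_s * 1e9

print(max_frequency_variation_hz(100e6, 100))  # 10,000 Hz on a 100 MHz clock
print(worst_case_drift_ns(100, 1.0))           # +/-100,000 ns over one second
print(worst_case_drift_ns(100, 1e-3))          # +/-100 ns per 1 ms sync interval
```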
One can look at the correlation of clock means and sum the clock variances of the clock devices112in the nodes105along a scheduled path103for communication of frames between the computing devices106. Thus, for any set of scheduled paths103, the probability of Ethernet frame overlap in a schedule can be determined by computing the probability of overlap of normal distributions as follows:

\dfrac{(x-\mu_{2})^{2}}{2\sigma_{2}^{2}} - \dfrac{(x-\mu_{1})^{2}}{2\sigma_{1}^{2}} = \log\dfrac{\sigma_{1}}{\sigma_{2}} \quad (2)

This probability can reflect how likely it is that two or more frames collide on a link103, which can result in one or all of these frames not being delivered or otherwise communicated. In order to eliminate or reduce the likelihood of frame collisions, the scheduler device102can schedule the communication of frames to occur through or over routes that are along the paths103that are most (or more) immune to clock synchronization inaccuracy, as well as by selecting smaller (e.g., the smallest possible) guard bands that reduce the impact of timing inaccuracies. FIG.4illustrates one example of synchronization error analysis using multicast. Vertices are end-systems and switches104, and are labeled one through eight. Edges are Ethernet links103and are also numbered inFIG.4. Links18and43experience overlapping paths and thereby are exposed to the possibility of frame transmission overlap. Path1connects vertex1to vertices7,4, and6. Path2connects from vertex5to vertex6. Possible contention (e.g., overlap) exists at the links between vertices2and3, as well as vertices3and6. Each interface can be assumed to have a local clock device112. In the illustrated example, the clock error mean is one microsecond, the variance is two microseconds, and the required or scheduled end-to-end latency for communication along each path is 80 ms. Using the result of the scheduler device102for this example and the accumulated clock error along each path, Path1can be computed to have a mean latency of 80 ms and a probability of only 0.5 of meeting that requirement given the variance due to clock error along Path1. Path two has a mean of 71 ms and a probability of success in meeting that latency of 0.93. FIG.5illustrates probabilities of frame collision along several paths.FIG.5illustrates a matrix of bar plots showing the relationship between every pair of time-sensitive paths103. The matrix is square, symmetric, and will have all ones along the diagonal, that is, perfect overlap along the same paths. The probability of overlap results in a probability of congestion, an increase in latency, and a loss of determinism due to adjacent traffic sharing the same channel. FIG.5also shows the probability of frame buffering along each path103due to clock synchronization error as computed using equation (2). The same paths overlap perfectly with one another as shown along the diagonal. The more interesting plots are in the non-diagonal positions. Since the bar graphs form a symmetric matrix, only the upper right triangle need be examined. In the illustrated example, Paths one and two will suffer non-deterministic frame delays or drops with a probability of 0.0027 (imperceptible in the bar graph) at the link from vertex two to vertex three, but there is a 0.42 probability of delay at the link from vertex three to vertex six in this example. The notion of time-sensitive network time dilation for guard bands leads to consideration of the prospects and implications of physical gravitational time dilation.
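Before turning to gravitational effects, the overlap computation behind equation (2) can be sketched for the simplified equal-variance case, where the right-hand side of equation (2) vanishes and the two densities cross at the midpoint of their means; the function names are illustrative assumptions.

```python
import math

def normal_cdf(x, mu, sigma):
    """Cumulative distribution function of a normal distribution."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def overlap_probability(mu1, mu2, sigma):
    """Overlap area of two equal-variance Gaussian densities.

    With equal sigmas, the densities cross at x = (mu1 + mu2) / 2; the
    overlap is the left density's tail beyond x plus the right density's
    tail below x."""
    lo, hi = sorted((mu1, mu2))
    x = (lo + hi) / 2.0
    return (1.0 - normal_cdf(x, lo, sigma)) + normal_cdf(x, hi, sigma)

# Example: two frame arrival distributions 1 us apart, each with an
# accumulated sigma of 2 us, overlap with probability of about 0.80.
print(overlap_probability(0.0, 1.0, 2.0))
```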
The uncertainty in time increases with the distance from the grandmaster clock device110, and this uncertainty requires a proportionally-sized mechanism for compensation, typically a guard band in the network100. A guard band effectively increases the Ethernet frame size by increasing the duration that a gate200is open, and thus stretching the effective length of the time-sensitive network frame. A gate200is open during a time period that is scheduled by the scheduler device102for communication of data packets through the switch in that gate200. The scheduler device102can determine a guard band as a time period or bandwidth during which a gate200remains open for communicating data packets. The scheduler device102can repeatedly determine the clock drift and variance for multiple clock devices112and, based on the drift and/or variance, determine a probability that Ethernet frames will collide along one or more paths103in the network. If the probability is sufficiently large (e.g., greater than a non-zero, previously defined threshold, such as 15%, 20%, or the like), then the scheduler device102determines and creates a dynamically adjustable guard band for one or more nodes105. The guard band defines time periods and/or network bandwidth that cannot be used by the node(s)105for communication of frames along one or more links103. The effective change in length of a data frame varies with the distance of the slave clock device112from the grandmaster clock device110. For example, clock devices112that are farther from the grandmaster clock device110(e.g., along links103in the Ethernet network) may have larger guard bands determined by the scheduler device102. This effective change in length can be referred to as time dilation, in analogy with gravitational time dilation from general relativity. The scheduler device102can use a guard band to guarantee that the switch104is idle when time-sensitive network frames are transmitted, at the cost of dedicating bandwidth for protection. The scheduler device102can change the size of the guard band for a node105at different times based on clock drift and/or variance. Thus, the size of the guard band can be dynamically changed by the scheduler device102to reduce or minimize the time during which a switch104is idle, while maintaining determinism in the delivery of time-sensitive network frames. Not all embodiments of the inventive subject matter described herein are limited to wired networks. One or more embodiments of the inventive subject matter can be used in connection with entirely or partially wireless time-sensitive networks. When time-sensitive network devices are subject to changes in motion or altitude, the scheduler device102is affected by time dilation. Guard band sizes can be controlled (e.g., by the scheduler device102) as functions not only of the distance of a clock device112from the grandmaster clock device110, but also of port speed and clock height and speed. For example, the scheduler device102can create larger guard bands for longer distances along the links103between a slave clock device112and the master clock device110, and can create smaller guard bands for shorter distances along the links103between a slave clock device112and the master clock device110. The scheduler device102can create larger guard bands for switches104that are slower in communicating data frames and can create smaller guard bands for switches104that are faster in communicating the data frames.
The scheduler device102can create larger guard bands for clock devices112located at higher altitudes and can create smaller guard bands for clock devices112located at lower altitudes. The scheduler device102can create larger guard bands for clock devices112that are faster or slower than the master clock device110by larger time differences, and can create smaller guard bands for clock devices112that are faster or slower than the master clock device110by smaller time differences. The guard band size can be set by the scheduler device102considering a worst-case scenario, for instance, based on the distance from the grandmaster clock device110and the height or speed of the clock device112. A control plane can be used to advertise the height and speed of the different clock devices112to enable the switches104to continuously or repeatedly adjust the size of the guard band based on the gPTP error correction and time dilation. The scheduler device102can rely on several metrics and values to allocate a guard band of a variable (e.g., dynamic, or changing with respect to time) size. The scheduler device102can calculate an eigenvalue centrality measure for one or more of the nodes105, which can represent an overall shape of the network100. Long, thin networks100are subject to bigger guard bands than small, compact networks100. For example, networks100formed from fewer nodes105, fewer links103, and/or having fewer alternate paths of links103and nodes105between devices106for data frame communication can be allocated larger guard bands by the scheduler device102than networks100formed from more nodes105, more links103, and/or having more alternate paths of links103and nodes105for communication of data frames between the devices106. Additionally, nodes105that are farther from the master clock device110and/or are farther from a center of the network100may be assigned larger guard bands than nodes105that are closer to the master clock device110and/or the center of the network100. The clock variance at different nodes105impacts time-to-time clock measurement and is accumulated by all traversed nodes105. The variance is an additive parameter, in that the total clock variance between the clock devices112and the master clock device110increases for more nodes105along a path for a data frame and/or for larger differences between the clock devices112and the master clock device110along the path. The scheduler device102can fetch all or many of the variances from the network100and compute the total variance of one or more paths through the network100. The scheduler device102can also apply an overall eigenvalue centrality metric that provides a global variance value of the network100. Each node105can add up a local variance of that node105and the clock reference variance to the global variance of the network100. When the network100is made of different time domains with different reference clock devices112, the eigenvalue centrality metrics may differ from one domain to another. The accumulated drift may also differ because the clock references do not necessarily send synchronization messages at the same rate and the same speed. If a time-sensitive network stream needs to cross multiple time domains, the guard band determined by the scheduler device102corresponding to the node105egressing to a new domain is the maximum for this node105.
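A sketch combining the quantities just listed into a guard band size, together with the eigenvalue centrality measure computed by power iteration over a link-speed-weighted adjacency matrix (anticipating equations (3) and (4) below); the combination rule and the k-sigma margin are assumptions made for illustration, not formulas given in this description.

```python
import math

def eigenvector_centrality(adj, iterations=100):
    """Dominant eigenvector of a non-negative adjacency matrix (Ax = lambda x);
    adj[f][t] is the link-speed weight from vertex f to vertex t."""
    n = len(adj)
    x = [1.0] * n
    for _ in range(iterations):
        y = [sum(adj[f][t] * x[t] for t in range(n)) for f in range(n)]
        norm = math.sqrt(sum(v * v for v in y)) or 1.0
        x = [v / norm for v in y]
    return x  # element f is the centrality of vertex f

def guard_band_ns(drift_means_ns, variances_ns2, centrality, k=3.0):
    """Guard band size from clock error accumulated along a path.

    Means and variances add along the traversed nodes (clock errors are
    treated as independent and normally distributed); the centrality value
    scales the band with the node's place in the topology. The k-sigma
    margin is an assumed tuning parameter."""
    mean = sum(drift_means_ns)
    sigma = math.sqrt(sum(variances_ns2))
    return centrality * (abs(mean) + k * sigma)
```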
By applying an optimal guard band, the network resources used by the guard band can be decreased, and the heuristic finds more solutions for establishing a new time-sensitive network stream (and the number of time-sensitive network streams on a network is statistically higher with optimal guard bands). This can lead to a reduced OPEX and a reduced cost per bit of data sent over the network100. The scheduler device102can use eigenvector centrality to estimate the impact of time-sensitive network time dilation. Eigenvector centrality measures or represents the importance of a node105in the network100, such as how far the node105is from the location of another node105, the grandmaster clock device110, the center of the network100, etc. This importance of the node105can go beyond simply counting the number of computer devices106that interface with the node105, and also can include the degree to which a computer device106supports the interconnection of other highly-connected computer devices106. The network edges are weighted by link speed. Let x be the centrality measure, a be either zero or one as indicated in the adjacency matrix, λ a constant, and f and t indicate the "from" and "to" indices of a vertex in the adjacency matrix, respectively, as shown in:

x_{f} = \dfrac{1}{\lambda}\sum_{t} a_{ft}\, x_{t} \quad (3)

This simplifies to (4) below, where λ is an eigenvalue of the adjacency matrix A. The eigenvector solutions play a wide range of roles in network partitioning, dimensionality reduction, and many other applications. For the centrality measure, the eigenvectors are non-negative. This means λ will be the largest of the many possible eigenvalue solutions, or may be larger than most (but not all) possible eigenvalue solutions.

Ax = \lambda x \quad (4)

Thus, the eigenvalue centrality of a vertex is simply the element, corresponding to that vertex, of the eigenvector of the adjacency matrix associated with the largest eigenvalue. The eigenvector centrality for each node105is viewed as a gravitational gradient through which time-sensitive network flows travel. Consider what the eigenvalue centrality value for a node105means if the adjacency matrix is weighted by link speed. The centrality value is a scale factor that provides a time dilation correction based upon the topology of the network100. A rate of synchronization messages reported to the nodes105, and the local clock drift of the traversed nodes105, also can be determined by the scheduler device102. The scheduler device102can allocate smaller guard bands for faster synchronization rates and can allocate larger guard bands for slower synchronization rates. The effect of sync locks, and the need to adjust flows that cross different time domains and are thus subject to time discrepancies, also can be determined by the scheduler device102. FIG.6illustrates a flowchart of one embodiment of a method700for dynamically determining guard bands for a time-sensitive network. At702, the clock drifts and the clock variances of nodes105can be determined. At704, a maximum or upper accumulated clock offset along a time-sensitive network path of links103and nodes105is determined. This can be a sum of the clock offsets (e.g., drifts and/or variances), or a sum of the absolute values of the clock offsets, of the clocks112of the nodes along a path between the devices106. At706, a synchronization rate is communicated to the scheduler devices102. This rate can be adapted to the conditions of the network100so that clock drifts can be diminished.
This rate can indicate how frequently the clock devices112of the nodes105are synchronized with the master clock device110. At708, one or more guard bands of dynamic size are determined by and communicated from the scheduler device102to the nodes105. A guard band can have a size that is based on the schedules of the nodes105, as well as on other factors described herein. If multiple time domains are present in the network100, then the dynamic guard band can be applied on the border schedule. For a node105, the guard band can be inserted before and after the scheduled window time of the node105for forwarding a time-sensitive network frame. As a result, if the local clock device112of the node105is slightly ahead of or behind the universal time of the grandmaster clock device110, the queue at the node105that forwards this frame is maintained open for a duration that is proportional to or otherwise based on the size of the guard band. The size of a guard band can be adjusted to the maximum local time error of this node105in one embodiment. A node105can measure its frequency error on a real-time basis, which also can be used to dynamically adapt the guard band to environmental conditions such as the temperature and the aging of the clock device112of that node105. Table 1 below shows how long gravitational time dilation between two points must act on the scheduler device102before it may make a time-sensitive Ethernet frame of 64 bytes miss an associated schedule. Table 1 illustrates the effect of the difference in height of clock devices112on the scheduler device102, for a time-sensitive Ethernet frame of 64 bytes, as a function of the network transmission speed. The times expressed in the table show how long a service must be uninterrupted before seeing such a frame miss a scheduled time window.

TABLE 1
Δ Height | 10 Gbps | 100 Gbps | 1 Tbps
10 m | 707 days | 70 days | 7 days, 1 hour, 41 minutes, and 49 seconds
100 m | 70 days | 7 days, 1 hour, 41 minutes, and 49 seconds | 16 hours, 58 minutes, and 10 seconds
1000 m | 7 days, 1 hour, 41 minutes, and 49 seconds | 16 hours, 58 minutes, and 10 seconds | 1 hour, 41 minutes, and 49 seconds

For example, a difference of 100 m from sea level between two clock devices112will result in a time dilation factor of 1.000000000000011. Even if this change may be too small to be represented by an offset scaled rate ratio in gPTP frames, it leads to a cumulated drift of 11 femtoseconds per second of usage. Time dilation effects become important after 14 days and 3 hours, causing a time-sensitive frame of 128 bytes to miss its schedule at 100 Gbps. Special relativity applies to devices in motion. In general, this effect can be neglected. However, when high precision timing is required, correction may need to be applied to the scheduler device102. Note that this time dilation differs from the Doppler-Fizeau effect impacting the frequency of communication of mobile devices. Like gravitational time dilation, it cannot be measured by gPTP, and a GNSS receiver is not able to apply the correction induced by the speed of the device. Table 2 shows different effects of speed on the time dilation observed by a device in motion. Three different speeds are shown and correspond respectively to a car driving on a highway, a high-speed train, and an airplane in motion. Table 2 shows the effect of the difference of speed on the scheduler device102, for a time-sensitive frame of 64 bytes, as a function of the network transmission speed.
The times expressed in the table show how long a service must be uninterrupted before seeing such a frame miss its time window.

TABLE 2
Δ Speed | 10 Gbps | 100 Gbps | 1 Tbps
30 m/s | 159 days | 2 weeks | 38 hours, 15 minutes, and 5 seconds
90 m/s | 2 weeks | 41 hours, 28 minutes, and 53 seconds | 4 hours, 8 minutes, and 53 seconds
300 m/s | 37 hours and 20 minutes | 3 hours and 44 minutes | 22 minutes and 24 seconds

As a result, the scheduler device102optionally can dynamically change the size of a guard band for a node105depending on or based on motion of the node105. The scheduler device102can calculate larger guard bands for nodes105that are moving, or that are moving faster, than the guard bands for stationary or slower moving nodes105. In one embodiment, a method includes determining a clock drift and a clock variance of each node in plural nodes of a time-sensitive Ethernet network, determining an accumulated clock offset along a time-sensitive network path in the time-sensitive network based on the clock drifts and the clock variances that are determined, determining a guard band having a dynamic size based on the accumulated clock offset, and restricting when Ethernet frames are communicated through the nodes by communicating the guard band with the dynamic size to one or more of the nodes. Optionally, the method also includes determining an eigenvalue centrality metric based on a location of one or more of the nodes in the time-sensitive network, where the dynamic size of the guard band is based on the eigenvalue centrality metric. Optionally, the method also includes determining a rate at which clock synchronization messages are reported to the nodes along the time-sensitive network path, where the dynamic size of the guard band is based on the rate at which clock synchronization messages are reported to the nodes along the time-sensitive network path. Optionally, the method also includes inserting the guard band before and after a scheduled window time of forwarding a time-sensitive network frame at each of the nodes. Optionally, the clock drift and the clock variance are determined for local clock devices of the nodes relative to a master clock device for the Ethernet network. Optionally, the guard band is determined as one or more of a time period or a bandwidth in which non-time-critical Ethernet frame traffic cannot be communicated through the nodes. Optionally, the guard band is determined based on distances between clock devices of the nodes and a master clock device of the Ethernet network.
Optionally, the guard band is determined based on one or more of altitudes or speeds of clock devices of the nodes. Optionally, the guard band is determined based on motion of one or more of the nodes. In one embodiment, a system includes one or more processors configured to determine a clock drift and a clock variance of each node in plural nodes of a time-sensitive network. The one or more processors also are configured to determine an accumulated clock offset along a time-sensitive network path in the time-sensitive network based on the clock drifts and the clock variances that are determined. The one or more processors also are configured to determine a guard band having a dynamic size based on the accumulated clock offset and to communicate the guard band with the dynamic size to the nodes. The one or more processors are configured to allocate the guard band to at least one of the nodes. The guard band restricts when Ethernet frames are communicated through the at least one of the nodes. Optionally, the one or more processors also are configured to determine an eigenvalue centrality metric based on a location of one or more of the nodes in the time-sensitive network. The one or more processors can be configured to determine the dynamic size of the guard band based on the eigenvalue centrality metric. Optionally, the one or more processors are configured to determine a rate at which clock synchronization messages are reported to the nodes along the time-sensitive network path. The one or more processors can be configured to determine the dynamic size of the guard band based on the rate at which clock synchronization messages are reported to the nodes along the time-sensitive network path. Optionally, one or more processors are configured to insert the guard band before and after a scheduled window time of forwarding a time-sensitive network frame at each of the nodes. Optionally, the one or more processors are configured to determine the clock drift and the clock variance for local clock devices of the nodes relative to a master clock device for the Ethernet network. Optionally, the one or more processors are configured to determine the guard band as one or more of a time period or a bandwidth in which non-time-critical Ethernet frame traffic cannot be communicated through the nodes. Optionally, the one or more processors are configured to determine distances between clock devices of the nodes and a master clock device of the Ethernet network. The one or more processors also are configured to determine the guard band based on the distances that are determined. Optionally, the one or more processors are configured to determine the guard band based on one or more of altitudes or speeds of clock devices of the nodes. In one embodiment, a system includes one or more processors configured to determine clock drifts and clock variances of plural nodes in a time-sensitive Ethernet network. The one or more processors also are configured to determine an eigenvalue centrality metric based on a location of one or more of the nodes in the time-sensitive network. The one or more processors are configured to dynamically allocate a guard band to one or more of the nodes to prevent communication of one or more Ethernet frames through the one or more nodes during the guard band in a time sensitive network schedule of the Ethernet network. The one or more processors are configured to dynamically allocate the guard band based on the clock drifts, the clock variances, and the eigenvalue centrality metric. 
Optionally, the one or more processors are configured to dynamically allocate the guard band by changing a size of the guard band responsive to a change in one or more of the clock drifts, the clock variances, or the eigenvalue centrality metric. Optionally, the one or more processors are configured to determine an accumulated clock offset of the nodes along a path between two or more computer devices based on the clock drifts and the clock variances associated with the nodes along the path. The one or more processors can be configured to allocate the guard band based on the accumulated clock offset. As used herein, an element or step recited in the singular and preceded by the word "a" or "an" should be understood as not excluding plural of said elements or steps, unless such exclusion is explicitly stated. Furthermore, references to "one embodiment" of the presently described subject matter are not intended to be interpreted as excluding the existence of additional embodiments that also incorporate the recited features. Moreover, unless explicitly stated to the contrary, embodiments "comprising" or "having" an element or a plurality of elements having a particular property may include additional such elements not having that property. It is to be understood that the above description is intended to be illustrative, and not restrictive. For example, the above-described embodiments (and/or aspects thereof) may be used in combination with each other. In addition, many modifications may be made to adapt a particular situation or material to the teachings of the subject matter set forth herein without departing from its scope. While the dimensions and types of materials described herein are intended to define the parameters of the disclosed subject matter, they are by no means limiting and are exemplary embodiments. Many other embodiments will be apparent to those of skill in the art upon reviewing the above description. The scope of the subject matter described herein should, therefore, be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled. In the appended claims, the terms "including" and "in which" are used as the plain-English equivalents of the respective terms "comprising" and "wherein." Moreover, in the following claims, the terms "first," "second," and "third," etc. are used merely as labels, and are not intended to impose numerical requirements on their objects. Further, the limitations of the following claims are not written in means-plus-function format and are not intended to be interpreted based on 35 U.S.C. § 112(f), unless and until such claim limitations expressly use the phrase "means for" followed by a statement of function void of further structure. This written description uses examples to disclose several embodiments of the subject matter set forth herein, including the best mode, and also to enable a person of ordinary skill in the art to practice the embodiments of disclosed subject matter, including making and using the devices or systems and performing the methods. The patentable scope of the subject matter described herein is defined by the claims, and may include other examples that occur to those of ordinary skill in the art.
Such other examples are intended to be within the scope of the claims if they have structural elements that do not differ from the literal language of the claims, or if they include equivalent structural elements with insubstantial differences from the literal languages of the claims.
48,566
11863301
PREFERRED MODE FOR CARRYING OUT THE INVENTION Description is given below regarding embodiments of the present invention. FIG.1is a block diagram that illustrates an example of a configuration of a transmission/reception system that includes an optical transmission device according to an embodiment of a signal processing device of the present invention. The example of a transmission/reception system inFIG.1is configured so as to include an optical transmission device1, an optical reception device2, and an optical communication cable3for connecting these. The optical transmission device1is configured so as to include a transmission data provision unit11, a cryptographic key provision unit12, an encryption unit13, a carrier wave generation unit14, and a cryptographic signal transmission unit15. The transmission data provision unit11generates plaintext data to be transmitted or receives plaintext data from a generation source (not illustrated), and provides the plaintext data to the encryption unit13as transmission data. The cryptographic key provision unit12provides the encryption unit13with a cryptographic key to use in encryption at the encryption unit13. It is sufficient if the cryptographic key is a key that can be used in encryption and decryption by the optical transmission device1and the optical reception device2, and there is no limitation in particular on the source of provision of the cryptographic key (place where the cryptographic key is generated or place where the cryptographic key is stored), the method of providing the cryptographic key, and methods of encryption and decryption. The encryption unit13uses the cryptographic key provided from the cryptographic key provision unit12to encrypt the transmission data provided from the transmission data provision unit11, superimposes the encrypted transmission data on a carrier wave (optical signal) generated by the carrier wave generation unit14which is described later, and provides the result of the superimposition to the cryptographic signal transmission unit15. The optical signal outputted from the encryption unit13, in other words the result of superimposing the encrypted transmission data on a carrier wave, is referred to below as a “cryptographic signal”. The carrier wave generation unit14generates a carrier wave as an optical signal, and provides the carrier wave to the encryption unit13. The cryptographic signal transmission unit15transmits the cryptographic signal provided from the encryption unit13to the optical reception device2via the optical communication cable3after, for example, amplifying the cryptographic signal if necessary. As described above, the cryptographic signal (optical signal) is outputted from the optical transmission device1, transferred by the optical communication cable3, and received by the optical reception device2. The optical reception device2decrypts the received cryptographic signal to thereby cause the plaintext data (transmission data) to be restored. Accordingly, the optical reception device2is configured so as to include a cryptographic signal reception unit21, a cryptographic key provision unit22, and a decryption unit23. The cryptographic signal reception unit21receives the cryptographic signal (optical signal), converts the cryptographic signal to an electrical signal, and provides the electrical signal to the decryption unit23. 
A result of converting a cryptographic signal (optical signal) to an electrical signal at the cryptographic signal reception unit21is referred to below as "cryptographic data". The cryptographic key provision unit22provides the decryption unit23with a cryptographic key that is used when decrypting cryptographic data. The decryption unit23uses the cryptographic key provided from the cryptographic key provision unit22to decrypt the cryptographic data provided from the cryptographic signal reception unit21and thereby restore the plaintext data (transmission data). In this way, in the present embodiment, the cryptographic signal is employed as an example of an optical signal transferred by the optical communication cable3. Accordingly, in the example ofFIG.1, optical fiber communication, which is representative of wired communication, is employed as a method of communicating the cryptographic signal. In optical fiber communication, a third party can in principle introduce a branch into an optical fiber and extract some of the signal power to thereby steal large amounts of information (the cryptographic signal here) on one occasion. Accordingly, it is necessary to have a technique such that the meaning and content of the cryptographic signal, in other words the content of the plaintext (transmission data), cannot be recognized by a third party, even if the cryptographic signal is intercepted. As such a method, the applicant is developing a technique that uses the Y-00 optical communication quantum cipher. The Y-00 optical communication quantum cipher has "not allowing an eavesdropper to correctly obtain ciphertext due to effects of quantum noise" as a feature, and was developed by the applicant. In the Y-00 optical communication quantum cipher, transmission data (plaintext) is represented by an aggregate of one or more items of bit data: "0" or "1". Each item of bit data that makes up the transmission data is modulated by a predetermined algorithm to a predetermined level from among N (N is an integer greater than or equal to 2) levels. The number N is referred to below as the "number of modulations N". In the Y-00 optical communication quantum cipher, encryption of transmission data (plaintext) is performed by modulating at least one of the phase or amplitude of an optical signal (carrier wave) to one of the number of modulations N of levels, in accordance with a cryptographic key present on the encrypting side and the decrypting side. By making the number of modulations N a very large number, the feature of "not allowing an eavesdropper to correctly obtain ciphertext due to effects of quantum noise" is realized. Regarding the "predetermined protocol" employed in the Y-00 optical communication quantum cipher, please refer to Japanese Patent No. 5170586, for example. With reference toFIG.2andFIG.3, simple description is given regarding an overview of the principles of the Y-00 optical communication quantum cipher, taking phase modulation as an example. FIG.2is a view for describing an overview of the principles of the Y-00 optical communication quantum cipher. The A modulation through C modulation shown inFIG.2illustrate IQ planes that represent the phase and amplitude (intensity) of an optical signal, with the intersection of the vertical axis and the horizontal axis as the origin. When a point on one of these IQ planes is determined, the phase and amplitude of the optical signal are uniquely determined.
Taking the origin of the IQ plane as the start point, the phase is the angle formed between the line segment ending at the point representing the optical signal and the line segment representing phase 0. In contrast, the amplitude is the distance between the point representing the optical signal and the origin of the IQ plane. The A modulation illustrated in FIG. 2 is to facilitate understanding of the Y-00 optical communication quantum cipher, and is a graph for describing the principles of normal two-level modulation. For example, if plaintext (transmission data) is superimposed as is on an optical signal (carrier wave) and transmitted, the two-level modulation indicated as the A modulation illustrated in FIG. 2 will be performed on each item of bit data (1 or 0) that makes up the plaintext. In this case, in the A modulation illustrated in FIG. 2, the arrangement of a point (hereinafter referred to as a "symbol point") indicating the optical signal after phase modulation when the bit data is "0" is the arrangement of the point given by 0(0) on the right side of the horizontal axis, in other words an arrangement where the phase is 0. In contrast, the arrangement of the symbol point when the bit data is "1" is the arrangement of π(1) on the left side of the horizontal axis, in other words an arrangement where the phase is π. The B modulation illustrated in FIG. 2 is to describe the principles of phase modulation when the number of modulations N=16, in a case where the Y-00 optical communication quantum cipher is employed. In the case of the example of B modulation illustrated in FIG. 2, a random level from among eight levels is generated by using the cryptographic key, for each item of bit data that makes up the plaintext. The phase modulation is performed by, for each bit, rotating the phase of the symbol point in the normal two-level modulation indicated as the A modulation illustrated in FIG. 2 (the point at phase 0 corresponding to "0" and the point at phase π corresponding to "1") in the IQ plane, in accordance with the level that is randomly generated from among the eight levels. Because the value that bit data can take is binary, either "0" or "1", when the phase modulation of the example of B modulation illustrated in FIG. 2 is performed, the arrangement of the symbol points becomes, as a result, an arrangement of 16 points (number of modulations N=16) whose phases respectively differ by π/8. However, in the case of the example of B modulation illustrated in FIG. 2, the value, "0" or "1", that the bit data can take is merely modulated to one of the levels from among the number of modulations N=16 levels. Therefore, if the optical signal (cryptographic signal), which has the arrangement of 16 symbol points, is intercepted, there is the risk that the meaning of its content, in other words the content of the plaintext (transmission data), will be recognized by a third party. In other words, the security of the Y-00 optical communication quantum cipher is not sufficient with only around N=16 as the number of modulations. Accordingly, in practice, as indicated by the C modulation illustrated in FIG. 2, a very large number, for example 4096, is employed as the number of modulations N, and the security of the Y-00 optical communication quantum cipher is improved.
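The level assignment just described, a key-driven random offset combined with the data bit, can be sketched in code. The following is a minimal illustration only: the hash-based key stream and the antipodal bit mapping are assumptions standing in for the actual Y-00 protocol, which is defined in the patent cited above.

```python
import hashlib
from math import pi

N = 4096  # number of modulations (N/2 key-selected bases x 2 data bits)

def basis_for_symbol(key: bytes, symbol_index: int) -> int:
    """Derive a pseudorandom basis in [0, N/2) for each symbol.

    Stand-in for the Y-00 key-stream generator (an assumption, not the
    actual protocol): hash the shared key together with a symbol counter.
    """
    digest = hashlib.sha256(key + symbol_index.to_bytes(8, "big")).digest()
    return int.from_bytes(digest[:4], "big") % (N // 2)

def modulation_level(bit: int, basis: int) -> int:
    """Map one data bit and one basis to one of the N phase levels.

    Antipodal mapping: bit "1" is offset by half a revolution (N/2 levels)
    from bit "0", so a legitimate receiver that knows the basis only has
    to distinguish two far-apart phases, while an eavesdropper faces N
    closely spaced ones.
    """
    return (basis + bit * (N // 2)) % N

key = b"shared-secret"
bits = [0, 1, 1, 0]
levels = [modulation_level(b, basis_for_symbol(key, i)) for i, b in enumerate(bits)]
phases = [2 * pi * lv / N for lv in levels]  # phase of each transmitted symbol
```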
The C modulation illustrated in FIG. 2 is to describe the principles of phase modulation when the number of modulations N=4096, in a case where the Y-00 optical communication quantum cipher is employed. FIG. 3 is a view that expands the C modulation illustrated in FIG. 2 in order to enable visual recognition of the arrangement of three adjacent symbol points from among the arrangement of N=4096 symbol points in the phase modulation for the C modulation illustrated in FIG. 2. As illustrated in FIG. 3, each symbol point varies due to shot noise (quantum noise) within a range SN. Shot noise is noise due to the quantum nature of light; it is truly random, and it is governed by a law of physics that cannot be set aside. When phase modulation with a very large number, such as 4096, as the number of modulations N is performed, adjacent symbol points cannot be discriminated from one another because they are obscured by shot noise, as illustrated in FIG. 3. Specifically, when the distance D between two adjacent symbol points is sufficiently smaller than the range SN of shot noise (when phase modulation with a very large number as the number of modulations N is performed so as to make the distance D this small), it is difficult to determine the position of the original symbol point from phase information measured at the receiving side. For example, suppose the phase of an optical signal at a certain time corresponds to the position of the central symbol point of the three symbol points illustrated in FIG. 3. In such a case, it is not possible to distinguish whether this central symbol point was transmitted as an optical signal for a symbol point that was originally at the central position, or whether it was actually transmitted as an optical signal for a neighboring symbol point but was measured at the central position under the effect of shot noise. To summarize the above, modulation where the number of modulations N is very large is employed in the Y-00 optical communication quantum cipher. Although the modulation is phase modulation in the example of FIG. 2 and FIG. 3, the modulation may be amplitude (intensity) modulation instead of, or in addition to, phase modulation. In other words, optical signal modulation using the Y-00 protocol can employ any modulation method, such as intensity modulation, amplitude modulation, phase modulation, frequency modulation, and quadrature amplitude modulation. By this, it becomes possible to make the distance D between two symbol points sufficiently smaller than the range SN of shot noise, and the feature of "not allowing an eavesdropper to correctly obtain ciphertext due to effects of quantum noise" is realized. In addition, although quantum noise ensures the security, in practice an eavesdropper is prevented from obtaining the correct ciphertext by the effect of all noise, including classical noise such as thermal noise in addition to quantum noise. In other words, it can be said that the security of the Y-00 optical communication quantum cipher depends on how large the number of modulations N can be set. With reference to several concrete examples in which the Y-00 optical communication quantum cipher is employed, description is given below for detailed configurations of the encryption unit 13 of the optical transmission device 1 of FIG. 1.
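Before turning to those configurations, the masking condition that underlies the security argument above can be stated compactly. The following note is an illustrative summary only; the chord-length expression for D is an assumption based on N equally spaced points on a circle of amplitude A, not a formula given in the specification.

```latex
% Adjacent symbol spacing for N phase levels on a circle of amplitude A
% (illustrative assumption), and the masking condition described above:
D = 2A\sin\!\left(\frac{\pi}{N}\right) \approx \frac{2\pi A}{N},
\qquad D \ll SN .
% Increasing N shrinks D until adjacent symbols are buried in shot noise.
```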
Firstly, in order to facilitate understanding of the present invention, description is given for two concrete examples of the basic encryption unit 13, with reference to FIG. 4 and FIG. 5. In the description of the basic encryption unit 13, it is assumed, for the convenience of the description, that phase modulation with the number of modulations N=4096 is performed. FIG. 4 is a block diagram illustrating an example of a detailed configuration of the basic encryption unit 13 in the optical transmission device 1 of FIG. 1. The basic encryption unit 13 of the example of FIG. 4 is provided with a cryptographic generation unit 31 and a multi-level modulation unit 32. The cryptographic generation unit 31 of the example of FIG. 4 encrypts the transmission data by using a cryptographic key provided from the cryptographic key provision unit 12 to convert each item of bit data (0 or 1) that makes up the transmission data provided from the transmission data provision unit 11 into data (hereinafter referred to as "multi-level data") having an arbitrary level from among the multiple levels for the number of modulations N=4096. In other words, the cryptographic generation unit 31 generates multi-level data for each item of bit data that makes up the transmission data and, via a signal path L1, supplies the multi-level data to the multi-level modulation unit 32 as a digital signal. The multi-level modulation unit 32 of the example of FIG. 4 is provided with a digital-to-analog converter (hereinafter abbreviated as "DAC") 41 and a phase modulation element 42. The DAC 41 obtains, via the signal path L1, the multi-level data corresponding to each item of bit data supplied from the cryptographic generation unit 31. The DAC 41 converts the multi-level data (digital signal) into an analog voltage (hereinafter referred to as a "multi-level voltage") having a level from among multiple levels, and applies the multi-level voltage to the phase modulation element 42 via a signal path L2. The phase modulation element 42 is inputted, via a signal path L3, with an optical signal generated as a carrier wave in the carrier wave generation unit 14. The phase modulation element 42 causes the phase of this optical signal to rotate (modulates the phase) in accordance with the multi-level voltage that is applied for each item of bit data from the DAC 41 via the signal path L2, and supplies the result of the rotation to the cryptographic signal transmission unit 15 via a signal path L4. FIG. 5 is a block diagram illustrating an example, different from that of FIG. 4, of a detailed configuration of the basic encryption unit 13 in the optical transmission device 1 of FIG. 1. The basic encryption unit 13 of the example of FIG. 5 is provided with the cryptographic generation unit 31 and the multi-level modulation unit 32. The cryptographic generation unit 31 of the example of FIG. 5 uses a cryptographic key provided from the cryptographic key provision unit 12 to generate multi-level (2048 levels) data. Here, the reason why the number of levels that a piece of multi-level data can take is 2048, which is half the number of modulations N=4096, will be described later in the description of the multi-level modulation unit 32. The multi-level modulation unit 32 of the example of FIG. 5 is provided with a DAC 51 and a Mach-Zehnder modulator MZ1 that includes a phase modulation element 52. Here, the Mach-Zehnder modulator MZ1 is a modulator that uses the principles of a Mach-Zehnder interferometer. The signal path L3 branches into a signal path L21 and a signal path L22. The phase modulation element 52 is arranged on the signal path L21.
By this, an optical signal that passes along the signal path L21 via the phase modulation element 52 and an optical signal that passes along the signal path L22 mutually interfere, and the result of this interference is outputted from a signal path L23. Note that the Mach-Zehnder modulator MZ1 of the configuration of FIG. 5 is merely an example. That is, by interposing a phase modulation element on one or both of the branched signal paths, it is possible to use a Mach-Zehnder interferometer as the Mach-Zehnder modulator MZ1. The DAC 51 converts the transmission data supplied from the transmission data provision unit 11 into a binary voltage (analog signal) for each item of bit data, and applies the binary voltage to the phase modulation element 52 via a signal path L12. The phase modulation element 52 is inputted with an optical signal (carrier wave) that is generated at the carrier wave generation unit 14 and transmitted on the signal path L21. The phase modulation element 52 causes the phase of the optical signal to rotate (performs phase modulation) in accordance with the binary voltage that is applied for each item of bit data from the DAC 51 via the signal path L12, and outputs the result of the rotation. The optical signal outputted from the phase modulation element 52 interferes with the optical signal (carrier wave) that is generated in the carrier wave generation unit 14 and transmitted on the signal path L22, and the result is the normal two-level phase modulation signal indicated as the A modulation illustrated in FIG. 2. The number of modulations N1 for this two-level modulation is 2. The two-level modulation signal is supplied to a phase modulation element 54 via the signal path L23. A DAC 53 obtains, via a signal path L13, the multi-level (2048 levels) data supplied from the cryptographic generation unit 31. The DAC 53 converts the multi-level data to a multi-level (2048 levels) voltage, and applies the multi-level voltage to the phase modulation element 54 via a signal path L14. The phase modulation element 54 is inputted with the optical signal from the Mach-Zehnder modulator MZ1 via the signal path L23. The phase modulation element 54 causes the phase of this optical signal to rotate (modulates the phase) in accordance with the multi-level (2048 levels) voltage that is applied for each item of bit data from the DAC 53 via the signal path L14, and supplies the result of the rotation to the cryptographic signal transmission unit 15 via the signal path L4. Here, the optical signal supplied to the phase modulation element 54 has been subjected to two-level modulation in the Mach-Zehnder modulator MZ1 as described above. In other words, the number of modulations N1 of the data modulation in the Mach-Zehnder modulator MZ1 is two. In addition, the number of modulations N2 in the phase modulation element 54 is 2048. Accordingly, the product of the number of modulations N1 and the number of modulations N2, in other words N1×N2=2×2048=4096, is the overall number of modulations N. In this way, in the basic encryption unit 13 of the example of FIG. 5, the phase modulation (data phase modulation) with respect to the binary bit data that makes up the transmission data is performed by the Mach-Zehnder modulator MZ1, whereas the phase modulation (phase rotation) for encryption is performed by the phase modulation element 54.
In other words, in the example of FIG. 4, the processing that uses the cryptographic key provided from the cryptographic key provision unit 12 to convert each item of the bit data (0 or 1) that makes up the transmission data provided from the transmission data provision unit 11 into data having an arbitrary level from among the multiple levels for the number of modulations N=4096, in other words multi-level data, is performed in the digital (electrical) domain, whereas this processing is performed in the optical domain in the example of FIG. 5. Here, as the phase modulation element (the phase modulation element 42 in the example of FIG. 4, and the phase modulation elements 52 and 54 in the example of FIG. 5), it is possible to employ a high-speed phase modulation element that had been put into practical use when the present application was filed, specifically, for example, a lithium niobate (LiNbO3) modulation element, an indium phosphide (InP) modulation element, a silicon p-n junction modulation element, or the like. Incidentally, as described above, the number of modulations N is important to ensure the security of the Y-00 cipher. In a case where the basic encryption unit 13 of FIG. 4 or FIG. 5 described above is used, limitations are placed on the number of modulations N by the output voltage resolution of the DAC (the DAC 41 in the example of FIG. 4, and the DACs 51 and 53 in the example of FIG. 5). More specifically, there is a strong trade-off between output voltage resolution and modulation bandwidth (speed), and DACs that could be obtained at the time of filing offer 1024 levels for modulation at 10 Gbit/s. In other words, in a case where a DAC that had been put into practical use at the time of filing of the present application is employed, it is difficult to realize 4096 as the number of modulations N in modulation at 10 Gbit/s when the basic encryption unit 13 of FIG. 4 or FIG. 5 described above is used. Conversely, with such a DAC, it is necessary to decrease the transfer speed from 10 Gbit/s in order to achieve 4096 as the number of modulations N. Furthermore, in order to ensure even higher security, the number of modulations N is required to be approximately 10,000. It is not possible at all to meet such a requirement with the basic encryption unit 13 of FIG. 4 or FIG. 5. Accordingly, in order to meet such a requirement, the inventors conceived of a new technique as follows. That is, the inventors devised a technique of performing modulation (at least one of phase modulation and amplitude modulation) of light in k stages (k is an integer greater than or equal to 2), specifically, for example, a technique of decomposing the number of modulations N so that N=M1×M2× . . . ×Mk, performing a first type of optical modulation once with the number of modulations M1, and subsequently performing a second type of optical modulation (k−1) times corresponding to the numbers of modulations M2 through Mk. The first type of modulation is referred to below as "coarse modulation" and the second type of modulation is referred to below as "fine modulation". Accordingly, with reference to FIG. 6 and the subsequent drawings, description is given below for several concrete examples in relation to applying this new technique to the basic encryption unit 13 of FIG. 4 or FIG. 5 described above.
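Before turning to those figures, the decomposition itself can be sketched in code. This is a minimal illustration of the stage arithmetic only; the stage sizes M1=64 and M2=1024 are illustrative assumptions (the text only requires N=M1×M2× . . . ×Mk), and the hardware voltage scaling is ignored here.

```python
from math import pi, isclose

M = [64, 1024]          # M1 = 64 coarse levels, M2 = 1024 fine levels
N = 1
for m in M:
    N *= m              # N = 65536 total levels

def decompose(level: int, stage_sizes: list[int]) -> list[int]:
    """Split one N-level index into per-stage indices (mixed-radix digits).

    The coarse stage selects a wide sector of the phase circle; each fine
    stage subdivides the sector chosen by the stages before it.
    """
    digits = []
    for m in reversed(stage_sizes):
        digits.append(level % m)
        level //= m
    return list(reversed(digits))   # [coarse index, fine index, ...]

def recompose_phase(digits: list[int], stage_sizes: list[int]) -> float:
    """Phase reached after applying every stage's rotation in sequence."""
    phase, step = 0.0, 2 * pi
    for d, m in zip(digits, stage_sizes):
        step /= m                   # each stage's step is 1/m of the span above it
        phase += d * step
    return phase

level = 40321
digits = decompose(level, M)
# The sequential stage rotations land on the same grid point as a single
# N-level modulation would.
assert isclose(recompose_phase(digits, M), 2 * pi * level / N)
```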
FIG. 6 is a block diagram illustrating an example of a detailed configuration of an encryption unit resulting from applying the new technique to the basic encryption unit of FIG. 4 described above, in other words a first example of an encryption unit in which the present invention is applied. In order to facilitate understanding of the present invention, phase modulation with k=2 stages is performed in the example of FIG. 6. In other words, in the example of FIG. 6, coarse modulation with the number of modulations M1 and fine modulation with the number of modulations M2 are performed, so that modulation with the overall number of modulations N=M1×M2 is performed. The encryption unit 13 of the example of FIG. 6 is provided with the cryptographic generation unit 31 and the multi-level modulation unit 32. The cryptographic generation unit 31 of the example of FIG. 6 has an essentially similar function to that of the cryptographic generation unit 31 of the example of FIG. 4. However, the manner of output by the cryptographic generation unit 31 of the example of FIG. 6 differs from that of the cryptographic generation unit 31 of the example of FIG. 4 as follows. That is, from the cryptographic generation unit 31 of the example of FIG. 4, multi-level (N levels) data corresponding to the number of modulations N is outputted to the signal path L1. In contrast to this, from the cryptographic generation unit 31 of the example of FIG. 6, multi-level (M1 levels) data (hereinafter referred to as "coarse multi-level data") corresponding to the number of modulations M1 for coarse modulation is outputted to a signal path L31, and multi-level (M2 levels) data (hereinafter referred to as "fine multi-level data") corresponding to the number of modulations M2 for fine modulation is outputted to a signal path L33. The multi-level modulation unit 32 of the example of FIG. 6 is provided with a coarse DAC 61A, a fine DAC 61B, a coarse phase modulation element 62A, and a fine phase modulation element 62B. The coarse DAC 61A obtains, via the signal path L31, the coarse multi-level data corresponding to each item of bit data supplied from the cryptographic generation unit 31. The coarse DAC 61A converts the coarse multi-level data (a digital signal) to a multi-level voltage (an analog signal), and applies the multi-level voltage to the coarse phase modulation element 62A via a signal path L32. Note that the voltage outputted from the coarse DAC 61A is referred to below as a "coarse multi-level voltage". The coarse phase modulation element 62A is inputted, via the signal path L3, with an optical signal (carrier wave) generated in the carrier wave generation unit 14. The coarse phase modulation element 62A causes the phase of the optical signal to rotate (modulates the phase) in accordance with the coarse multi-level voltage applied for each item of bit data from the coarse DAC 61A via the signal path L32, and supplies the result of the rotation to the fine phase modulation element 62B. In other words, the optical signal to which the coarse modulation with the number of modulations M1 has been performed is supplied to the fine phase modulation element 62B. The fine DAC 61B obtains, via the signal path L33, the fine multi-level data corresponding to each item of bit data supplied from the cryptographic generation unit 31. The fine DAC 61B converts the fine multi-level data (a digital signal) to a multi-level voltage (an analog signal), and applies the multi-level voltage to the fine phase modulation element 62B via a signal path L34.
Note that the voltage outputted from the fine DAC 61B is referred to below as a "fine multi-level voltage". The fine phase modulation element 62B is inputted with the optical signal outputted from the coarse phase modulation element 62A, in other words the optical signal to which the coarse modulation with the number of modulations M1 has been performed. The fine phase modulation element 62B causes the phase of this optical signal to rotate (modulates the phase) in accordance with the fine multi-level voltage that is applied for each item of bit data from the fine DAC 61B via the signal path L34, and supplies the result of the rotation to the cryptographic signal transmission unit 15 via the signal path L4. In other words, the optical signal to which the coarse modulation with the number of modulations M1 has already been performed also undergoes fine modulation with the number of modulations M2 by the fine phase modulation element 62B, and, as a result, an optical signal to which modulation with the number of modulations N=M1×M2 has been performed is supplied to the cryptographic signal transmission unit 15 via the signal path L4. FIG. 7 is a view for describing an overview of the respective principles of the coarse phase modulation and the fine phase modulation applied to the encryption unit 13 of the example of FIG. 6. Note that, in the example of FIG. 6 described above, the number of coarse modulations M1 and the number of fine modulations M2 are determined so that the number of modulations N=4096. In contrast to this, for the convenience of the description for FIG. 7, modulation is performed with the number of coarse modulations M1=4 and the number of fine modulations M2=4, so that the number of modulations N=16. The A modulation illustrated in FIG. 7 is to describe the principles of coarse phase modulation when the number of coarse modulations M1=4. It is assumed that the carrier wave (optical signal) generated in the carrier wave generation unit 14 is not modulated and that the reference phase angle is zero, for example. The coarse phase modulation element 62A causes the phase of the optical signal to rotate from zero to a phase corresponding to any one of the number of coarse modulations M1=4 symbol points (solid line white circles in the A modulation illustrated in FIG. 7). Which of the number of coarse modulations M1=4 symbol points (solid line white circles in the A modulation illustrated in FIG. 7) the phase is rotated to is determined by the applied voltage. In the coarse phase modulation of the A modulation illustrated in FIG. 7, with the phase angle 0 (the horizontal axis in the A modulation illustrated in FIG. 7) as a reference, the phase is rotated in a range VP1 (−3π/4 through 3π/4) in phase modulation where M1=4. The B modulation illustrated in FIG. 7 is to describe the principles of fine phase modulation when the number of fine modulations M2=4. In the fine phase modulation of the B modulation illustrated in FIG. 7, the optical signal which is to be modulated has already been subjected to phase modulation: coarse phase modulation where M1=4. Accordingly, the phase angle that is to be the reference is one of the four phases respectively corresponding to the arrangement positions of the symbol points indicated in the A modulation illustrated in FIG. 7, in other words −3π/4, −π/4, π/4, and 3π/4. The symbols corresponding to these reference phase angles are illustrated as solid line white circles in the B modulation illustrated in FIG. 7.
The fine phase modulation element 62B causes the phase of the optical signal to rotate from the reference phase angle to a phase corresponding to any one of the number of fine modulations M2=4 symbol points (dotted line white circles in the B modulation illustrated in FIG. 7). In the fine phase modulation of the B modulation illustrated in FIG. 7, because there are four reference phase angles (the solid line white circles indicated in the B modulation illustrated in FIG. 7), the phase in fine phase modulation with M2=4 is rotated in a range VP2 that fits between two adjacent reference phase angles, in other words ¼ of the range VP1 for the coarse phase modulation of the A modulation illustrated in FIG. 7. In other words, assuming that the coarse phase modulation element 62A and the fine phase modulation element 62B have the same characteristics, it is necessary to set the voltage output of the fine DAC 61B to be ¼ of the voltage output of the coarse DAC 61A. By fine phase modulation being performed after coarse phase modulation in this way, phase modulation with N=16 (=M1×M2) becomes possible as a result. If the characteristics (efficiency) of the coarse phase modulation element 62A and the fine phase modulation element 62B differ, it is necessary to take the difference into account here. For example, assuming that the output voltage from the fine DAC 61B is the same as that of the coarse DAC 61A, by setting the efficiency of the fine phase modulation element 62B to be ¼ of that of the coarse phase modulation element 62A (for example, by setting its element length to be ¼), it is possible to achieve the coarse phase modulation indicated by the A modulation illustrated in FIG. 7 described above and the fine phase modulation indicated by the B modulation illustrated in FIG. 7. In other words, what is important in order to realize phase modulation with the number of modulations N=16 in accordance with the coarse phase modulation indicated by the A modulation illustrated in FIG. 7 described above and the fine phase modulation indicated by the B modulation illustrated in FIG. 7 is as follows. That is, it is important for the phase rotation amount (peak-to-peak) specified by the range VP2 for the fine phase modulation element 62B to be ¼ of the phase rotation amount (peak-to-peak) specified by the range VP1 for the coarse phase modulation element 62A. Here, the phase rotation amount (peak-to-peak) is defined as follows. That is, the absolute value of the difference between the maximum phase and the minimum phase from among the phases that a phase modulation element can rotate to is the phase rotation amount (peak-to-peak). For example, the phase rotation amount (peak-to-peak) for the coarse phase modulation indicated by the A modulation illustrated in FIG. 7 described above is 3π/2, as indicated by the range VP1. A technique of specifying the maximum phase and the minimum phase themselves may be used instead of the phase rotation amount (peak-to-peak) in accordance with this definition, but in that case the values will change in accordance with the arrangement of the symbol points. Therefore, it is desirable to employ the phase rotation amount (peak-to-peak) in accordance with the definition described above. To summarize the above, by employing the encryption unit 13 illustrated in FIG. 6, phase modulation as follows becomes possible. That is, phase modulation with the number of modulations N = the number of coarse modulations M1 × the number of fine modulations M2 becomes possible.
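The numbers of the FIG. 7 example can be checked directly. The following sketch measures the ranges from 0 rather than centering them at −3π/4 through 3π/4 as in the figure, which does not affect the peak-to-peak amounts or the spacing of the resulting grid.

```python
from math import pi, isclose

# FIG. 7 example: M1 = M2 = 4, so N = 16.
M1, M2 = 4, 4
coarse_step = 2 * pi / M1            # pi/2 between coarse symbol points
fine_step = 2 * pi / (M1 * M2)       # pi/8 between final symbol points

VP1 = coarse_step * (M1 - 1)         # 3*pi/2: coarse peak-to-peak rotation
VP2 = fine_step * (M2 - 1)           # 3*pi/8: fine peak-to-peak rotation

assert isclose(VP2 / VP1, 0.25)      # the fine range is 1/4 of the coarse range

# Sweep every (coarse, fine) pair: the reachable phases form a uniform
# 16-point grid, i.e. modulation with N = M1 * M2 levels.
phases = sorted(c * coarse_step + f * fine_step
                for c in range(M1) for f in range(M2))
gaps = [b - a for a, b in zip(phases, phases[1:])]
assert all(isclose(g, fine_step) for g in gaps)
```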
In this case, a phase rotation amount (peak-to-peak) Ia for the coarse phase modulation element 62A is represented by the following formula (1).

$$ I_a = \frac{2\pi}{M_1} \times (M_1 - 1) \quad (1) $$

In addition, a phase rotation amount (peak-to-peak) Ib for the fine phase modulation element 62B is set so that its ratio with the phase rotation amount (peak-to-peak) Ia of the coarse phase modulation element 62A, that is Ib/Ia, is represented by the following formula (2).

$$ \frac{I_b}{I_a} = \frac{1}{M_1 - 1} \times \left(1 - \frac{1}{M_2}\right) \quad (2) $$

It is desirable for the number of coarse modulations M1 and the number of fine modulations M2 to be as large as possible in order to improve the security of the Y-00 optical communication quantum cipher. For example, a configuration may be taken to have the number of coarse modulations M1=64 and the number of fine modulations M2=1024. Furthermore, the number of fine modulation elements does not particularly need to be one as illustrated in FIG. 6, and may be a plurality as illustrated in FIG. 8. FIG. 8 is a block diagram illustrating an example of a detailed configuration of an encryption unit resulting from applying the new technique to the basic encryption unit of FIG. 4, in other words a second example of an encryption unit in which the present invention is applied. Phase modulation with k=2 stages is performed in the example of FIG. 6, whereas phase modulation with k stages, where k is 3 or more, is performed in the example of FIG. 8. In other words, in the example of FIG. 8, coarse modulation with the number of modulations M1 is performed and fine modulation is respectively performed (k−1) times with the numbers of modulations M2 through Mk, so that modulation with the overall number of modulations N=M1×M2× . . . ×Mk is performed. The encryption unit 13 of the example of FIG. 8 is provided with the cryptographic generation unit 31 and the multi-level modulation unit 32. The cryptographic generation unit 31 of the example of FIG. 8 has an essentially similar function to that of the cryptographic generation unit 31 of the example of FIG. 6. However, the manner of output by the cryptographic generation unit 31 of the example of FIG. 8 differs from that of the cryptographic generation unit 31 of the example of FIG. 6 as follows. That is, the cryptographic generation unit 31 of the example of FIG. 6 outputs coarse multi-level data as well as one type of fine multi-level data. In contrast to this, the cryptographic generation unit 31 of the example of FIG. 8 outputs coarse multi-level data as well as (k−1) types of fine multi-level data. Likewise, in comparison to the multi-level modulation unit 32 of the example of FIG. 6, the multi-level modulation unit 32 of the example of FIG. 8 has a similar configuration for coarse modulation (the group of the coarse DAC 61A and the coarse phase modulation element 62A), but the configuration for fine modulation differs as follows. That is, the difference is that in FIG. 8 there are (k−1) groups of a fine DAC 61B-L (L is an integer from 1 through (k−1)) and a fine phase modulation element 62B-L, in comparison to the one group in the example of FIG. 6. By this, an optical signal to which coarse modulation with the number of modulations M1 has been performed and then fine modulation has been respectively performed (k−1) times with the numbers of modulations M2 through Mk is outputted from the multi-level modulation unit 32 of the example of FIG. 8, and this optical signal is supplied to the cryptographic signal transmission unit 15.
In other words, in the example of FIG. 8, with k=3 stages or more, coarse modulation with the number of modulations M1 is performed and fine modulation is respectively performed (k−1) times with the numbers of modulations M2 through Mk, so that modulation with the overall number of modulations N=M1×M2× . . . ×Mk is performed. Even in the case where k=3 stages or more in the example of FIG. 8, similarly to k=2 stages, the phase rotation amount (peak-to-peak) Ia of the coarse phase modulation element 62A is represented by formula (1) described above. In contrast, a phase rotation amount (peak-to-peak) In for the nth (n is a value from 2 through k) fine phase modulation element 62B-n is set so that its ratio with the phase rotation amount (peak-to-peak) Ia of the coarse phase modulation element 62A, that is In/Ia, is represented by the following formula (3), where ∏ is the symbol for the product of a sequence of factors.

$$ \frac{I_n}{I_a} = \frac{1}{M_1 - 1} \times \frac{1}{\prod_{i=2}^{n-1} M_i} \times \left(1 - \frac{1}{M_n}\right) \quad (3) $$

FIG. 9 is a block diagram illustrating an example of a detailed configuration of an encryption unit resulting from applying the new technique to the basic encryption unit of FIG. 5, in other words a third example of an encryption unit in which the present invention is applied. The encryption unit 13 of the example of FIG. 9 is provided with the cryptographic generation unit 31 and the multi-level modulation unit 32. The transmission data provided from the transmission data provision unit 11 is supplied directly to the multi-level modulation unit 32 without going through the cryptographic generation unit 31. The cryptographic generation unit 31 of the example of FIG. 9 uses the cryptographic key provided by the cryptographic key provision unit 12 to generate the respective multi-level data for coarse modulation and fine modulation, and respectively provides these to a coarse DAC 72A and a fine DAC 72B, which are described below. The multi-level modulation unit 32 of the example of FIG. 9 is provided with a DAC 70, a Mach-Zehnder modulator MZ2 that includes a phase modulation element 71, the coarse DAC 72A, a coarse phase modulation element 73A, the fine DAC 72B, and a fine phase modulation element 73B. In other words, the DAC 70 and the Mach-Zehnder modulator MZ2 have a similar configuration to that of the DAC 51 and the Mach-Zehnder modulator MZ1 of the example of FIG. 5. However, while the group of the DAC 53 and the phase modulation element 54 is arranged as the stage after the DAC 51 and the Mach-Zehnder modulator MZ1 in the example of FIG. 5, the group of the coarse DAC 72A and the coarse phase modulation element 73A and the group of the fine DAC 72B and the fine phase modulation element 73B are arranged in this order in the example of FIG. 9. The group of the coarse DAC 72A and the coarse phase modulation element 73A and the group of the fine DAC 72B and the fine phase modulation element 73B in the example of FIG. 9 are respectively similar to the group of the coarse DAC 61A and the coarse phase modulation element 62A and the group of the fine DAC 61B and the fine phase modulation element 62B of the example of FIG. 6. Accordingly, the product of the number of modulations N1 (=2) of the data modulation in the Mach-Zehnder modulator MZ2 corresponding to each item of bit data with the number of modulations M1 for coarse modulation and the number of modulations M2 for fine modulation in accordance with encryption, that is 2×M1×M2, is the overall number of phase modulations N. In other words, M1×M2 in the example of FIG. 9 may be set to ½ of the M1×M2 of the example of FIG. 6.
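As a cross-check of the stage-range formulas (1) and (3) above, the following sketch evaluates them for an illustrative three-stage case (the stage sizes are assumptions; the text only requires N=M1×M2× . . . ×Mk) and confirms that the reachable phases form a uniform N-point grid.

```python
from math import pi, isclose

def stage_ranges(M: list[int]) -> list[float]:
    """Peak-to-peak phase rotation of each stage per formulas (1) and (3).

    M[0] is the coarse stage M1; M[1:] are the fine stages M2 .. Mk.
    """
    Ia = 2 * pi / M[0] * (M[0] - 1)                          # formula (1)
    ranges = [Ia]
    prod = 1                                                 # product of M2 .. M(n-1)
    for n in range(1, len(M)):
        ratio = 1 / (M[0] - 1) * (1 / prod) * (1 - 1 / M[n])  # formula (3)
        ranges.append(Ia * ratio)
        prod *= M[n]
    return ranges

M = [4, 4, 4]                    # three stages, so N = 64
ranges = stage_ranges(M)

# Each stage's step divides the previous stage's step by that stage's M,
# so the combined modulation reaches N evenly spaced phase levels.
N = 1
for m in M:
    N *= m
steps = [r / (m - 1) for r, m in zip(ranges, M)]
assert all(isclose(steps[i] / steps[i + 1], M[i + 1]) for i in range(len(M) - 1))
assert isclose(steps[-1], 2 * pi / N)
```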
FIG. 10 is a block diagram illustrating an example of a detailed configuration of an encryption unit in which a configuration that uses an IQ modulator has been employed as the configuration for coarse modulation of the first example of the encryption unit illustrated in FIG. 6 in which the present invention is applied, in other words a fourth example of an encryption unit in which the present invention is applied. The encryption unit 13 of the example of FIG. 10 is provided with the cryptographic generation unit 31 and the multi-level modulation unit 32. The cryptographic generation unit 31 of the example of FIG. 10 has an essentially similar function and configuration to that of the cryptographic generation unit 31 of the example of FIG. 6. In contrast, in comparison to the multi-level modulation unit 32 of the example of FIG. 6, the multi-level modulation unit 32 of the example of FIG. 10 has a different configuration for coarse modulation with the number of coarse modulations M1. That is, the coarse DAC 61A and the coarse phase modulation element 62A are provided in the multi-level modulation unit 32 of the example of FIG. 6. In contrast to this, the multi-level modulation unit 32 of the example of FIG. 10 is provided with two coarse DACs 80Aa and 80Ab and an IQ modulator IQ1 that includes two coarse phase modulation elements 81Aa and 81Ab. The IQ modulator IQ1 is the following type of modulator. That is, it is a modulator configured by nesting two Mach-Zehnder interferometers inside a further interferometer, in which an inputted optical signal is split into four signal paths, and the optical signals that travel along the respective signal paths interfere with each other and are outputted. At this point, by interposing phase modulation elements (the two coarse phase modulation elements 81Aa and 81Ab in the example of FIG. 10) on at least two of the split signal paths, it becomes possible to cause light to occur at any point on the IQ plane (in other words, at any amplitude and phase). The multi-level voltages outputted from the coarse DACs 80Aa and 80Ab have respectively 2 (=N1) levels and M1 levels. Here, 2 (=N1) is the number of modulations for the data modulation in the IQ modulator IQ1, and M1 is the number of coarse modulations in each of the coarse phase modulation elements 81Aa and 81Ab. The group of the fine DAC 80B and the fine phase modulation element 81B in the example of FIG. 10 and the group of the fine DAC 61B and the fine phase modulation element 62B in the example of FIG. 6 have essentially the same function and configuration. In other words, the configuration for fine modulation is the same between the example of FIG. 10 and the example of FIG. 6. Note that, in a case where the IQ modulator IQ1 is employed as the configuration for coarse modulation, the configuration for fine modulation in the example of FIG. 10 is merely an example; although no illustration is given, it is possible to employ a configuration similar to that of the example of FIG. 8, in other words a configuration where fine modulation is performed (k−1) times with k=3 or more. Thus far, examples of performing multi-level phase modulation with respect to an optical signal have been given as embodiments of the present invention. However, these embodiments are merely examples, and the present invention can be broadly applied to processing that modulates a signal to one from among a plurality of patterns.
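Before moving on, the IQ modulator's ability to place light anywhere on the IQ plane, noted in the description of FIG. 10 above, can be illustrated with a simplified model. The model below is an idealization under stated assumptions (lossless 50/50 splitters, a fixed π/2 bias between the two child interferometers, one phase element per child); it is not the device's actual transfer function.

```python
import cmath

def iq_modulator(phi_a: float, phi_b: float) -> complex:
    """Idealized model of the nested interferometer of FIG. 10.

    The input field (taken as 1) is split into four paths; a phase element
    sits on one arm of each child Mach-Zehnder interferometer, and the
    outer combiner adds a pi/2 bias between the two children (assumptions).
    """
    child_i = (cmath.exp(1j * phi_a) + 1) / 2   # first Mach-Zehnder
    child_q = (cmath.exp(1j * phi_b) + 1) / 2   # second Mach-Zehnder
    return (child_i + 1j * child_q) / 2         # outer combiner, pi/2 bias

# Sweeping both drive phases moves the output over a region of the IQ
# plane, which is why one IQ modulator can realize both the two-level
# data modulation and the M1-level coarse rotation at once.
point = iq_modulator(0.3, 1.1)
print(abs(point), cmath.phase(point))           # resulting amplitude and phase
```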
In other words, in the examples described above, a method of causing rotation to a phase from among multi-level phases (an example of a plurality of patterns) is employed as an example of such processing. Alternatively, multi-level amplitude (intensity) modulation of an optical signal as illustrated in FIG. 11, in other words a method that modulates to an intensity from among multi-level intensities (another example of a plurality of patterns), may be employed. FIG. 11 is a view for describing an overview of the principles of the Y-00 optical communication quantum cipher in a case where amplitude modulation is employed instead of phase modulation. The A modulation through C modulation shown in FIG. 11 illustrate IQ planes that represent the phase and amplitude (intensity) of an optical signal, with the intersection of the vertical axis and the horizontal axis as the origin. The A modulation illustrated in FIG. 11 is to facilitate understanding of the Y-00 optical communication quantum cipher, and is a graph for describing the principles of normal two-level modulation. For example, if plaintext (transmission data) is superimposed as is on an optical signal (carrier wave) and transmitted, the two-level modulation indicated as the A modulation illustrated in FIG. 11 will be applied to each item of bit data (1 or 0) that makes up the plaintext. Specifically, for example, in the A modulation illustrated in FIG. 11, the arrangement of the symbol point after amplitude modulation in the case where the bit data is "0" is the arrangement of the point set at the origin (0) on the vertical axis, in other words an arrangement for which the intensity (amplitude) is the minimum (distance 0). Meanwhile, the arrangement of the symbol point in the case where the bit data is "1" is the arrangement of the point set at 1 on the vertical axis, in other words an arrangement for which the intensity is the maximum (distance 1). The B modulation illustrated in FIG. 11 is to describe the principles of four-level amplitude modulation in a case where the Y-00 optical communication quantum cipher is employed. In the case of the example of B modulation illustrated in FIG. 11, a random level from among four levels is generated by using the cryptographic key, for each item of bit data (1 or 0) that makes up the plaintext. Amplitude modulation is performed by, for each bit, expanding or contracting the distance of the symbol point in the normal two-level modulation indicated in the A modulation illustrated in FIG. 11 (the point at the intensity minimum corresponding to "0" and the point at the intensity maximum corresponding to "1") to a point where the distance is 0, ⅓, ⅔, or 1 (four levels), in accordance with the level that is randomly generated from among these four levels. However, similarly to the example of the B modulation illustrated in FIG. 2 described above, the case of the example of the B modulation illustrated in FIG. 11 is not sufficient from the perspective of the security of the Y-00 optical communication quantum cipher. Accordingly, in practice, as indicated by the C modulation illustrated in FIG. 11, a very large number, for example 4096, is employed as the number of modulations N, and the security of the Y-00 optical communication quantum cipher is improved. The C modulation illustrated in FIG. 11 is to describe the principles of amplitude modulation when the number of modulations N=4096, in a case where the Y-00 optical communication quantum cipher is employed.
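The amplitude counterpart of the earlier phase mapping can be sketched as follows. The interleaving of the "0" and "1" amplitudes below is an illustrative assumption that reproduces the four distances of FIG. 11; the actual Y-00 level arrangement follows the protocol's own rules.

```python
def amplitude_level(bit: int, key_level: int, n: int) -> float:
    """Map one data bit and a key-derived level in [0, n/2) to one of n
    normalized amplitudes in [0, 1].

    Interleaving bit-0 and bit-1 points (an assumption for illustration)
    keeps the two bit values on adjacent, nearly indistinguishable
    amplitudes for an eavesdropper who does not know the key level.
    """
    index = 2 * key_level + bit
    return index / (n - 1)

# With n = 4 this reproduces the distances 0, 1/3, 2/3, 1 of FIG. 11.
levels = [amplitude_level(b, k, 4) for k in range(2) for b in range(2)]
assert levels == [0.0, 1/3, 2/3, 1.0]

# In practice a very large n such as 4096 is used, so adjacent amplitudes
# differ by only 1/(n-1) and are buried in noise at the eavesdropper.
```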
FIG. 12 is a block diagram illustrating an example of a detailed configuration of the basic encryption unit 13 in a case where amplitude modulation is employed instead of phase modulation. Amplitude modulation can be realized by having a phase modulation element take the configuration of an interferometer (a typical example is a Mach-Zehnder interferometer). In the example of FIG. 12, a configuration that uses a Mach-Zehnder modulator is employed. The encryption unit 13 of the example of FIG. 12 is provided with the cryptographic generation unit 31 and the multi-level modulation unit 32. The cryptographic generation unit 31 of the example of FIG. 12 has an essentially similar function to that of the cryptographic generation unit 31 of the example of FIG. 6. The multi-level modulation unit 32 of the example of FIG. 12 is provided with a DAC 90 and a Mach-Zehnder modulator MZ3 that includes a phase modulation element 91. The encryption units 13 illustrated in FIG. 13 and thereafter are realized by applying the new technique described above, in other words the technique of performing coarse modulation once with the number of modulations M1 and subsequently performing fine modulation (k−1) times with the respective numbers of modulations M2 through Mk to give the number of modulations N=M1×M2× . . . ×Mk, to the example of FIG. 12. That is, the encryption units 13 illustrated in FIG. 13 and thereafter are further examples, different from the above-described examples, of the encryption unit 13 to which the present invention is applied. FIG. 13 is a block diagram illustrating an example of a detailed configuration of an encryption unit resulting from applying the new technique to the basic encryption unit of FIG. 12 in which amplitude modulation is employed, in other words a fifth example of an encryption unit in which the present invention is applied. The example of FIG. 13 employs a coarse DAC 100A, a fine DAC 100B, a coarse phase modulation element 101A, and a fine phase modulation element 101B instead of the DAC 90 and the phase modulation element 91 of the example of FIG. 12. The coarse phase modulation element 101A and the fine phase modulation element 101B are serially connected in one of the two paths in the Mach-Zehnder modulator MZ3. The modulation amplitude (peak-to-peak) in amplitude modulation can be defined as follows. That is, the absolute value of the difference between the maximum amplitude and the minimum amplitude from among the output amplitudes of the optical system for amplitude modulation (for example, a Mach-Zehnder interferometer configuration and a phase modulation element incorporated therein) is the modulation amplitude (peak-to-peak). However, in a case where reception with a square-law detection method that observes an intensity corresponding to the square of the amplitude is employed, the absolute value of the difference between the maximum intensity and the minimum intensity is the modulation amplitude (peak-to-peak). The modulation amplitude (peak-to-peak) is used below to describe the amount of modulation in the coarse modulation and fine modulation of the example of FIG. 13. A modulation amplitude (peak-to-peak) Ia for coarse modulation with the number of coarse modulations M1 is an amount that depends on the output of the carrier wave, and thus is not limited in particular.
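Before setting the fine-modulation ratio, it helps to note how the interferometer converts a phase drive into an output amplitude. The following transfer function is an idealization under an assumption of lossless 50/50 splitting; it is illustrative and not a formula from the specification.

```latex
% Ideal Mach-Zehnder modulator with phase difference \Delta\phi between
% its two paths (lossless 50/50 splitting assumed):
E_{\mathrm{out}} = E_{\mathrm{in}}\,
  \cos\!\left(\frac{\Delta\phi}{2}\right) e^{\,i\Delta\phi/2},
\qquad
\left|\frac{E_{\mathrm{out}}}{E_{\mathrm{in}}}\right|
  = \left|\cos\frac{\Delta\phi}{2}\right| .
% Driving the phase element(s) to multi-level \Delta\phi values therefore
% yields multi-level output amplitudes.
```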
It is desirable for a modulation amplitude (peak-to-peak) Ib for fine modulation with the number of fine modulations M2 to be set so that its ratio with the modulation amplitude (peak-to-peak) Ia for coarse modulation, that is Ib/Ia, is represented by the following formula (4).

$$ \frac{I_b}{I_a} = \frac{1}{M_1 - 1} \times \left(1 - \frac{1}{M_2}\right) \quad (4) $$

Furthermore, in a case where the new technique described above, in other words the technique of performing coarse modulation once with the number of modulations M1 and subsequently performing fine modulation (k−1) times respectively with the numbers of modulations M2 through Mk so that the number of modulations N=M1×M2× . . . ×Mk, is applied to amplitude modulation, it is desirable for the modulation amplitude (peak-to-peak) In (n is any one of 2 through k) representing the nth range for the nth second modulation element to be set so that its ratio with the modulation amplitude (peak-to-peak) Ia for coarse modulation with the number of modulations M1, in other words In/Ia, is represented by the following formula (5), where ∏ is the symbol for the product of a sequence of factors.

$$ \frac{I_n}{I_a} = \frac{1}{M_1 - 1} \times \frac{1}{\prod_{i=2}^{n-1} M_i} \times \left(1 - \frac{1}{M_n}\right) \quad (5) $$

FIG. 14 is a block diagram illustrating an example of a detailed configuration of an encryption unit resulting from applying the new technique to the basic encryption unit of FIG. 12 in which amplitude modulation is employed, in other words a sixth example of an encryption unit in which the present invention is applied. The example of FIG. 14 employs the coarse DAC 100A, a fine DAC 100Bp, the coarse phase modulation element 101A, and a fine phase modulation element 101Bp instead of the DAC 90 and the phase modulation element 91 of the example of FIG. 12. The primary components of the example of FIG. 14 are similar to those of the example of FIG. 13, with the following differences. That is, the coarse phase modulation element 101A and the fine phase modulation element 101B are serially connected in one of the two paths in the Mach-Zehnder modulator MZ3 in the example of FIG. 13, whereas the coarse phase modulation element 101A and the fine phase modulation element 101Bp are connected in parallel by being respectively arranged in the two paths in the Mach-Zehnder modulator MZ3 in the example of FIG. 14. In the example of FIG. 14, the fine phase modulation element whose arrangement is changed from that of the fine phase modulation element 101B in the example of FIG. 13, in other words the fine phase modulation element connected in parallel to the coarse phase modulation element 101A, is illustrated as the fine phase modulation element 101Bp. In amplitude modulation that uses a Mach-Zehnder modulator, in a case of arranging in parallel phase modulation elements that were arranged in series, it is possible to achieve an effect similar to that of the series arrangement by inverting the polarity (inverting whether the voltage is positive or negative) of one of the phase modulation elements arranged in parallel. Accordingly, differing from the example of FIG. 13, in the example of FIG. 14 the polarity of the drive signal from the fine DAC 100Bp is inverted relative to the polarity of the drive signal from the fine DAC 100B in the example of FIG. 13. Examples of performing multi-level phase modulation or multi-level amplitude (intensity) modulation as modulation with respect to an optical signal have been given above as embodiments of the present invention.
In other words, in the examples described above, a method of performing modulation (rotation to a phase of any level or changing the amplitude) to any pattern from among a plurality of patterns prepared in advance, where only one of multi-level phases and multi-level (intensity) amplitudes is used, is employed. However, there is no particular need to prepare only multi-level phases or only multi-level amplitudes as the plurality of patterns. The plurality of patterns may be prepared by combining phases of one or more levels and amplitudes of one or more levels. In other words, the present invention can be applied to multi-level modulation that combines phase modulation and amplitude (intensity) modulation as modulation with respect to an optical signal. FIG. 15 is a block diagram illustrating an example of a detailed configuration of a seventh example of an encryption unit in which the present invention is applied, in a case of application to multi-level modulation that combines phase modulation and amplitude (intensity) modulation as modulation for an optical signal. As illustrated in FIG. 15, both phase modulation and amplitude modulation can be realized by a configuration that uses an IQ modulator IQ2. In other words, the multi-level modulation unit 32 of the example of FIG. 15 is configured so as to include a coarse DAC 120Aa, a coarse DAC 120Ab, a fine DAC 120Ba, and a fine DAC 120Bb. The multi-level modulation unit 32 of the example of FIG. 15 is also configured so as to include the IQ modulator IQ2, which has a coarse phase modulation element 121Aa, a coarse phase modulation element 121Ab, a fine phase modulation element 121Ba, and a fine phase modulation element 121Bb. The IQ modulator IQ2 is configured by connecting a first Mach-Zehnder modulator and a second Mach-Zehnder modulator in parallel. The coarse phase modulation element 121Aa and the fine phase modulation element 121Ba are serially connected in one of the two paths in the first Mach-Zehnder modulator. The coarse phase modulation element 121Ab and the fine phase modulation element 121Bb are serially connected in one of the two paths in the second Mach-Zehnder modulator. FIG. 16 is a block diagram illustrating a modification of the example of FIG. 15, in other words an example of a detailed configuration of an eighth example of an encryption unit in which the present invention is applied, in a case of application to multi-level modulation that combines phase modulation and amplitude (intensity) modulation as modulation with respect to an optical signal. The multi-level modulation unit 32 of the example of FIG. 16 has essentially similar components to those of the example of FIG. 15, and is configured so as to include the coarse DAC 120Aa, the coarse DAC 120Ab, a fine DAC 120Bap, and a fine DAC 120Bbp. The multi-level modulation unit 32 of the example of FIG. 16 likewise is configured so as to include the IQ modulator IQ2, which has the coarse phase modulation element 121Aa, the coarse phase modulation element 121Ab, a fine phase modulation element 121Bap, and a fine phase modulation element 121Bbp. However, in the example of FIG. 16, the coarse phase modulation element 121Aa and the fine phase modulation element 121Bap are connected in parallel by being arranged in respective ones of the two paths in the first Mach-Zehnder modulator.
In addition, the coarse phase modulation element 121Ab and the fine phase modulation element 121Bbp are connected in parallel by being arranged in respective ones of the two paths in the second Mach-Zehnder modulator. In the example of FIG. 16, the fine phase modulation element whose arrangement is changed from that of the fine phase modulation element 121Ba in the example of FIG. 15, in other words the fine phase modulation element connected in parallel to the coarse phase modulation element 121Aa, is illustrated as the fine phase modulation element 121Bap. Similarly, the fine phase modulation element whose arrangement is changed from that of the fine phase modulation element 121Bb in the example of FIG. 15, in other words the fine phase modulation element connected in parallel to the coarse phase modulation element 121Ab, is illustrated as the fine phase modulation element 121Bbp. In other words, in a case of employing multi-level modulation that combines phase modulation and amplitude (intensity) modulation as modulation with respect to an optical signal and using an IQ modulator, the coarse phase modulation elements and the fine phase modulation elements are respectively arranged on any of two or more paths. In this case, for example, the polarity of each of the coarse DACs and fine DACs is inverted if necessary. Various embodiments of the optical transmission device 1 to which the present invention is applied are described above. However, for the optical transmission device 1 to which the present invention is applied, it is sufficient if improvement of the overall number of modulations N can be achieved by performing coarse modulation and fine modulation, and the configuration of the optical transmission device 1 is not limited to the various embodiments described above and may be, for example, as follows. For example, in the embodiments described above, for the convenience of the description, the optical communication cable 3 is employed as the transmission path for the optical signal transmitted from the optical transmission device 1 and received by the optical reception device 2, but there is no particular limitation to this. For example, a device for optical communication such as an optical amplifier, an optical switch, or a wavelength switch may be inserted between the optical communication cable 3 and the optical transmission device 1 or the optical reception device 2. In addition, the optical transmission path is not limited to one that uses an optical fiber, and may comprise a communication path in which propagation is performed over a so-called optical wireless space, for example. In other words, any communication channel may be used between the optical transmission device 1 and the optical reception device 2. The transmission data provision unit 11 is incorporated in the optical transmission device 1, but the transmission data may instead be received from outside of the optical transmission device by a predetermined wired or wireless reception means, by providing a transmission data reception unit (not illustrated). Furthermore, a storage device (not illustrated) or removable media may be used to provide the transmission data. In other words, the transmission data provision unit may have any kind of transmission data obtainment means. The cryptographic key provision unit 12 may provide any key sufficient for the encryption unit 13 to generate the multi-level data relating to encryption. In other words, the cryptographic key may be a shared key, or may be a key that uses a different algorithm, such as a private key and a public key.
The carrier wave generation unit 14 does not need to be incorporated in the optical transmission device 1. In other words, the optical transmission device may be an optical signal encryption device that is inputted with a carrier wave and transmits a cryptographic signal. Furthermore, the optical signal encryption device may be one that is inputted with an optical signal, which is a carrier wave on which transmission data has already been placed, and performs multi-level modulation for encryption. The cryptographic signal transmission unit 15 performs processing such as amplifying the intensity of the cryptographic signal as needed, but a configuration may be taken in which the cryptographic signal transmission unit 15 is not incorporated in the optical transmission device 1, the optical transmission device 1 outputs the cryptographic signal without amplification, and an external optical signal amplification device (not illustrated) is used. Each signal in a circuit diagram may be amplified or attenuated by a signal intensity converter, such as a signal amplifier, as necessary. For example, an electrical signal outputted from a DAC, such as a coarse DAC or a fine DAC, may be amplified to a signal intensity in accordance with the specification of a phase modulation element. For example, in the embodiments described above, for the convenience of the description, fine phase modulation is performed on an optical signal that has been subjected to coarse phase modulation, but there is no particular limitation to this. In other words, the coarse modulation and the fine modulation may be performed in any order. Furthermore, the coarse modulation and the fine modulation may be performed on any path of an interferometer configuration that branches into any number of paths, and the modulated signal may be subjected to interference any number of times at any location. In addition, the signal in accordance with the signal processing in the above examples of the present invention is given as an optical signal, but there is no limitation to this. In other words, there is no limitation to an optical signal, and it may be any signal that enables transmission of data after various kinds of modulation are performed on an electrical signal or the like. To summarize the above, it is sufficient if a signal processing device to which the present invention is applied is as follows, and various embodiments can be taken.
Stated formally, a signal processing device to which the present invention is applied comprises: a first modulation element (for example, the coarse phase modulation element62A ofFIG.8) that modulates a signal to any one of M1(M1is any integer) patterns (for example, a pattern of arrangement positions for respective symbols) in a first range (for example, the range VP1inFIG.7); (k−1) (k is an integer that is greater than or equal to 1) second modulation elements (for example, the fine phase modulation elements62B-1through62B-(k−1) ofFIG.8) that respectively modulate the signal to any one of M2through Mk (M2through Mk are arbitrary integers that are mutually independent of each other and of M1) respective patterns in a second range (for example, the range VP2inFIG.7) through a kth range; and a control unit (for example, the coarse DAC61A and the fine DACs61B-1through61B-(k−1) inFIG.8) that controls the first range for the first modulation element and the respective second range through the kth range for the (k−1) second modulation elements, wherein the control unit performs control to make the respective second range through the kth range for the (k−1) second modulation elements narrower than the first range for the first modulation element. By employing this signal processing device, it is possible to improve modulation resolution. EXPLANATION OF REFERENCE NUMERALS 1. . . optical transmission device,2. . . optical reception device,3. . . optical communication cable,11. . . transmission data provision unit,12. . . cryptographic key provision unit,13. . . encryption unit,14. . . carrier wave generation unit,15. . . cryptographic signal transmission unit,21. . . cryptographic signal reception unit,22. . . cryptographic key provision unit,23. . . decryption unit,31. . . cryptographic generation unit,32. . . multi-level modulation unit,41. . . DAC,42. . . phase modulation element, L1. . . signal path, L2. . . signal path, L3. . . signal path, L4. . . signal path, MZ1. . . Mach-Zehnder modulator,51. . . DAC,52. . . phase modulation element,53. . . DAC,54. . . phase modulation element, L11. . . signal path, L12. . . signal path, L21. . . signal path, L22. . . signal path, L23. . . signal path,61A . . . coarse DAC,61B . . . fine DAC,62A . . . coarse phase modulation element,62B . . . fine phase modulation element, L31. . . signal path, L32. . . signal path, L33. . . signal path, L34. . . signal path,61A . . . coarse DAC,61B-1. . . fine DAC,61B-(k−1) . . . fine DAC,62A . . . coarse phase modulation element,62B-1. . . fine phase modulation element,62B-(k−1) . . . fine phase modulation element, MZ2. . . Mach-Zehnder modulator,70. . . DAC,71. . . phase modulation element,72A . . . coarse phase modulation element,72B . . . fine phase modulation element, IQ1. . . IQ modulator,80Aa,80Ab . . . coarse DAC,80B . . . fine DAC,81Aa,81Ab . . . coarse phase modulation element,81B . . . fine phase modulation element, MZ3. . . Mach-Zehnder modulator,90. . . DAC,91. . . phase modulation element,100A . . . coarse DAC,100B . . . fine DAC,101A . . . coarse phase modulation element,101B . . . fine phase modulation element,101Bp . . . fine phase modulation element, IQ2. . . IQ modulator,120Aa,120Ab . . . coarse DAC,120Ba,120Bb . . . fine DAC,121Aa,121Ab . . . coarse phase modulation element,121Ba,121Bb . . . fine phase modulation element,121Bap . . . fine phase modulation element,121Bbp . . . fine phase modulation element
67,493
11863302
DETAILED DESCRIPTION A detailed description of illustrative embodiments will now be described with reference to the various Figures. Although this description provides a detailed example of possible implementations, it should be noted that the details are intended to be exemplary and in no way limit the scope of the application. FIG.1Ais a diagram of an example communications system100in which one or more disclosed embodiments may be implemented. The communications system100may be a multiple access system that provides content, such as voice, data, video, messaging, broadcast, etc., to multiple wireless users. The communications system100may enable multiple wireless users to access such content through the sharing of system resources, including wireless bandwidth. For example, the communications system100may employ one or more channel access methods, such as code division multiple access (CDMA), time division multiple access (TDMA), frequency division multiple access (FDMA), orthogonal FDMA (OFDMA), single-carrier FDMA (SC-FDMA), and the like. As shown inFIG.1A, the communications system100may include wireless transmit/receive units (WTRUs), e.g., WTRUs,102a,102b,102c, and/or102d(which generally or collectively may be referred to as WTRU102), a radio access network (RAN)103/104/105, a core network106/107/109, a public switched telephone network (PSTN)108, the Internet110, and other networks112, though it will be appreciated that the disclosed embodiments contemplate any number of WTRUs, base stations, networks, and/or network elements. Each of the WTRUs102a,102b,102c,102dmay be any type of device configured to operate and/or communicate in a wireless environment. By way of example, the WTRUs102a,102b,102c,102dmay be configured to transmit and/or receive wireless signals and may include user equipment (UE), a mobile station, a fixed or mobile subscriber unit, a pager, a cellular telephone, a personal digital assistant (PDA), a smartphone, a laptop, a netbook, a personal computer, a wireless sensor, consumer electronics, and the like. The communications system100may also include a base station114aand a base station114b. Each of the base stations114a,114bmay be any type of device configured to wirelessly interface with at least one of the WTRUs102a,102b,102c,102dto facilitate access to one or more communication networks, such as the core network106/107/109, the Internet110, and/or the networks112. By way of example, the base stations114a,114bmay be a base transceiver station (BTS), a Node-B, an eNode B, a Home Node B, a Home eNode B, a site controller, an access point (AP), a wireless router, and the like. While the base stations114a,114bare each depicted as a single element, it will be appreciated that the base stations114a,114bmay include any number of interconnected base stations and/or network elements. The base station114amay be part of the RAN103/104/105, which may also include other base stations and/or network elements (not shown), such as a base station controller (BSC), a radio network controller (RNC), relay nodes, etc. The base station114aand/or the base station114bmay be configured to transmit and/or receive wireless signals within a particular geographic region, which may be referred to as a cell (not shown). The cell may further be divided into cell sectors. For example, the cell associated with the base station114amay be divided into three sectors. Thus, in some embodiments, the base station114amay include three transceivers, e.g., one for each sector of the cell. 
In another embodiment, the base station114amay employ multiple-input multiple output (MIMO) technology and, therefore, may utilize multiple transceivers for each sector of the cell. The base stations114a,114bmay communicate with one or more of the WTRUs102a,102b,102c,102dover an air interface115/116/117, which may be any suitable wireless communication link (e.g., radio frequency (RF), microwave, infrared (IR), ultraviolet (UV), visible light, etc.). The air interface115/116/117may be established using any suitable radio access technology (RAT). More specifically, as noted above, the communications system100may be a multiple access system and may employ one or more channel access schemes, such as CDMA, TDMA, FDMA, OFDMA, SC-FDMA, and the like. For example, the base station114ain the RAN103/104/105and the WTRUs102a,102b,102cmay implement a radio technology such as Universal Mobile Telecommunications System (UMTS) Terrestrial Radio Access (UTRA), which may establish the air interface115/116/117using wideband CDMA (WCDMA). WCDMA may include communication protocols such as High-Speed Packet Access (HSPA) and/or Evolved HSPA (HSPA+). HSPA may include High-Speed Downlink Packet Access (HSDPA) and/or High-Speed Uplink Packet Access (HSUPA). In another embodiment, the base station114aand the WTRUs102a,102b,102cmay implement a radio technology such as Evolved UMTS Terrestrial Radio Access (E-UTRA), which may establish the air interface115/116/117using Long Term Evolution (LTE) and/or LTE-Advanced (LTE-A). In other embodiments, the base station114aand the WTRUs102a,102b,102cmay implement radio technologies such as IEEE 802.16 (e.g., Worldwide Interoperability for Microwave Access (WiMAX)), CDMA2000, CDMA2000 1×, CDMA2000 EV-DO, Interim Standard 2000 (IS-2000), Interim Standard 95 (IS-95), Interim Standard 856 (IS-856), Global System for Mobile communications (GSM), Enhanced Data rates for GSM Evolution (EDGE), GSM EDGE (GERAN), and the like. The base station114binFIG.1Amay be a wireless router, Home Node B, Home eNode B, or access point, for example, and may utilize any suitable RAT for facilitating wireless connectivity in a localized area, such as a place of business, a home, a vehicle, a campus, and the like. In some embodiments, the base station114band the WTRUs102c,102dmay implement a radio technology such as IEEE 802.11 to establish a wireless local area network (WLAN). In another embodiment, the base station114band the WTRUs102c,102dmay implement a radio technology such as IEEE 802.15 to establish a wireless personal area network (WPAN). In yet another embodiment, the base station114band the WTRUs102c,102dmay utilize a cellular-based RAT (e.g., WCDMA, CDMA2000, GSM, LTE, LTE-A, etc.) to establish a picocell or femtocell. As shown inFIG.1A, the base station114bmay have a direct connection to the Internet110. Thus, the base station114bmay not be required to access the Internet110via the core network106/107/109. The RAN103/104/105may be in communication with the core network106/107/109, which may be any type of network configured to provide voice, data, applications, and/or voice over internet protocol (VoIP) services to one or more of the WTRUs102a,102b,102c,102d. For example, the core network106/107/109may provide call control, billing services, mobile location-based services, pre-paid calling, Internet connectivity, video distribution, etc., and/or perform high-level security functions, such as user authentication. 
Although not shown inFIG.1A, it will be appreciated that the RAN103/104/105and/or the core network106/107/109may be in direct or indirect communication with other RANs that employ the same RAT as the RAN103/104/105or a different RAT. For example, in addition to being connected to the RAN103/104/105, which may be utilizing an E-UTRA radio technology, the core network106/107/109may also be in communication with another RAN (not shown) employing a GSM radio technology. The core network106/107/109may also serve as a gateway for the WTRUs102a,102b,102c,102dto access the PSTN108, the Internet110, and/or other networks112. The PSTN108may include circuit-switched telephone networks that provide plain old telephone service (POTS). The Internet110may include a global system of interconnected computer networks and devices that use common communication protocols, such as the transmission control protocol (TCP), user datagram protocol (UDP) and the internet protocol (IP) in the TCP/IP internet protocol suite. The networks112may include wired or wireless communications networks owned and/or operated by other service providers. For example, the networks112may include another core network connected to one or more RANs, which may employ the same RAT as the RAN103/104/105or a different RAT. Some or all of the WTRUs102a,102b,102c,102din the communications system100may include multi-mode capabilities, e.g., the WTRUs102a,102b,102c,102dmay include multiple transceivers for communicating with different wireless networks over different wireless links. For example, the WTRU102cshown inFIG.1Amay be configured to communicate with the base station114a, which may employ a cellular-based radio technology, and with the base station114b, which may employ an IEEE 802 radio technology. FIG.1Bis a system diagram of an example WTRU102. As shown inFIG.1B, the WTRU102may include a processor118, a transceiver120, a transmit/receive element122, a speaker/microphone124, a keypad126, a display/touchpad128, non-removable memory130, removable memory132, a power source134, a global positioning system (GPS) chipset136, and other peripherals138. It will be appreciated that the WTRU102may include any sub-combination of the foregoing elements while remaining consistent with an embodiment. Also, embodiments contemplate that the base stations114aand114b, and/or the nodes that base stations114aand114bmay represent, such as but not limited to transceiver station (BTS), a Node-B, a site controller, an access point (AP), a home node-B, an evolved home node-B (eNodeB), a home evolved node-B (HeNB or HeNodeB), a home evolved node-B gateway, and proxy nodes, among others, may include some or all of the elements depicted inFIG.1Band described herein. The processor118may be a general purpose processor, a special purpose processor, a conventional processor, a digital signal processor (DSP), a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, Application Specific Integrated Circuits (ASICs), Field Programmable Gate Array (FPGAs) circuits, any other type of integrated circuit (IC), a state machine, and the like. The processor118may perform signal coding, data processing, power control, input/output processing, and/or any other functionality that enables the WTRU102to operate in a wireless environment. The processor118may be coupled to the transceiver120, which may be coupled to the transmit/receive element122. 
WhileFIG.1Bdepicts the processor118and the transceiver120as separate components, it will be appreciated that the processor118and the transceiver120may be integrated together in an electronic package or chip. The transmit/receive element122may be configured to transmit signals to, or receive signals from, a base station (e.g., the base station114a) over the air interface115/116/117. For example, in some embodiments, the transmit/receive element122may be an antenna configured to transmit and/or receive RF signals. In another embodiment, the transmit/receive element122may be an emitter/detector configured to transmit and/or receive IR, UV, or visible light signals, for example. In yet another embodiment, the transmit/receive element122may be configured to transmit and receive both RF and light signals. It will be appreciated that the transmit/receive element122may be configured to transmit and/or receive any combination of wireless signals. In addition, although the transmit/receive element122is depicted inFIG.1Bas a single element, the WTRU102may include any number of transmit/receive elements122. More specifically, the WTRU102may employ MIMO technology. Thus, in some embodiments, the WTRU102may include two or more transmit/receive elements122(e.g., multiple antennas) for transmitting and receiving wireless signals over the air interface115/116/117. The transceiver120may be configured to modulate the signals that are to be transmitted by the transmit/receive element122and to demodulate the signals that are received by the transmit/receive element122. As noted above, the WTRU102may have multi-mode capabilities. Thus, the transceiver120may include multiple transceivers for enabling the WTRU102to communicate via multiple RATs, such as UTRA and IEEE 802.11, for example. The processor118of the WTRU102may be coupled to, and may receive user input data from, the speaker/microphone124, the keypad126, and/or the display/touchpad128(e.g., a liquid crystal display (LCD) display unit or organic light-emitting diode (OLED) display unit). The processor118may also output user data to the speaker/microphone124, the keypad126, and/or the display/touchpad128. In addition, the processor118may access information from, and store data in, any type of suitable memory, such as the non-removable memory130and/or the removable memory132. The non-removable memory130may include random-access memory (RAM), read-only memory (ROM), a hard disk, or any other type of memory storage device. The removable memory132may include a subscriber identity module (SIM) card, a memory stick, a secure digital (SD) memory card, and the like. In other embodiments, the processor118may access information from, and store data in, memory that is not physically located on the WTRU102, such as on a server or a home computer (not shown). The processor118may receive power from the power source134, and may be configured to distribute and/or control the power to the other components in the WTRU102. The power source134may be any suitable device for powering the WTRU102. For example, the power source134may include one or more dry cell batteries (e.g., nickel-cadmium (NiCd), nickel-zinc (NiZn), nickel metal hydride (NiMH), lithium-ion (Li-ion), etc.), solar cells, fuel cells, and the like. The processor118may also be coupled to the GPS chipset136, which may be configured to provide location information (e.g., longitude and latitude) regarding the current location of the WTRU102. 
In addition to, or in lieu of, the information from the GPS chipset136, the WTRU102may receive location information over the air interface115/116/117from a base station (e.g., base stations114a,114b) and/or determine its location based on the timing of the signals being received from two or more nearby base stations. It will be appreciated that the WTRU102may acquire location information by way of any suitable location-determination implementation while remaining consistent with an embodiment. The processor118may further be coupled to other peripherals138, which may include one or more software and/or hardware modules that provide additional features, functionality and/or wired or wireless connectivity. For example, the peripherals138may include an accelerometer, an e-compass, a satellite transceiver, a digital camera (for photographs or video), a universal serial bus (USB) port, a vibration device, a television transceiver, a hands free headset, a Bluetooth® module, a frequency modulated (FM) radio unit, a digital music player, a media player, a video game player module, an Internet browser, and the like. FIG.1Cis a system diagram of the RAN103and the core network106according to an embodiment. As noted above, the RAN103may employ a UTRA radio technology to communicate with the WTRUs102a,102b,102cover the air interface115. The RAN103may also be in communication with the core network106. As shown inFIG.1C, the RAN103may include Node-Bs140a,140b,140c, which may each include one or more transceivers for communicating with the WTRUs102a,102b,102cover the air interface115. The Node-Bs140a,140b,140cmay each be associated with a particular cell (not shown) within the RAN103. The RAN103may also include RNCs142a,142b. It will be appreciated that the RAN103may include any number of Node-Bs and RNCs while remaining consistent with an embodiment. As shown inFIG.1C, the Node-Bs140a,140bmay be in communication with the RNC142a. Additionally, the Node-B140cmay be in communication with the RNC142b. The Node-Bs140a,140b,140cmay communicate with the respective RNCs142a,142bvia an Iub interface. The RNCs142a,142bmay be in communication with one another via an Iur interface. Each of the RNCs142a,142bmay be configured to control the respective Node-Bs140a,140b,140cto which it is connected. In addition, each of the RNCs142a,142bmay be configured to carry out or support other functionality, such as outer loop power control, load control, admission control, packet scheduling, handover control, macrodiversity, security functions, data encryption, and the like. The core network106shown inFIG.1Cmay include a media gateway (MGW)144, a mobile switching center (MSC)146, a serving GPRS support node (SGSN)148, and/or a gateway GPRS support node (GGSN)150. While each of the foregoing elements are depicted as part of the core network106, it will be appreciated that any one of these elements may be owned and/or operated by an entity other than the core network operator. The RNC142ain the RAN103may be connected to the MSC146in the core network106via an IuCS interface. The MSC146may be connected to the MGW144. The MSC146and the MGW144may provide the WTRUs102a,102b,102cwith access to circuit-switched networks, such as the PSTN108, to facilitate communications between the WTRUs102a,102b,102cand traditional land-line communications devices. The RNC142ain the RAN103may also be connected to the SGSN148in the core network106via an IuPS interface. The SGSN148may be connected to the GGSN150. 
The SGSN148and the GGSN150may provide the WTRUs102a,102b,102cwith access to packet-switched networks, such as the Internet110, to facilitate communications between the WTRUs102a,102b,102cand IP-enabled devices. As noted above, the core network106may also be connected to the networks112, which may include other wired or wireless networks that are owned and/or operated by other service providers. FIG.1Dis a system diagram of the RAN104and the core network107according to an embodiment. As noted above, the RAN104may employ an E-UTRA radio technology to communicate with the WTRUs102a,102b,102cover the air interface116. The RAN104may also be in communication with the core network107. The RAN104may include eNode-Bs160a,160b,160c, though it will be appreciated that the RAN104may include any number of eNode-Bs while remaining consistent with an embodiment. The eNode-Bs160a,160b,160cmay each include one or more transceivers for communicating with the WTRUs102a,102b,102cover the air interface116. In some embodiments, the eNode-Bs160a,160b,160cmay implement MIMO technology. Thus, the eNode-B160a, for example, may use multiple antennas to transmit wireless signals to, and receive wireless signals from, the WTRU102a. Each of the eNode-Bs160a,160b,160cmay be associated with a particular cell (not shown) and may be configured to handle radio resource management decisions, handover decisions, scheduling of users in the uplink (UL) and/or downlink (DL), and the like. As shown inFIG.1D, the eNode-Bs160a,160b,160cmay communicate with one another over an X2 interface. The core network107shown inFIG.1Dmay include a mobility management gateway (MME)162, a serving gateway164, and a packet data network (PDN) gateway166. While each of the foregoing elements are depicted as part of the core network107, it will be appreciated that any one of these elements may be owned and/or operated by an entity other than the core network operator. The MME162may be connected to each of the eNode-Bs160a,160b,160cin the RAN104via an S1 interface and may serve as a control node. For example, the MME162may be responsible for authenticating users of the WTRUs102a,102b,102c, bearer activation/deactivation, selecting a particular serving gateway during an initial attach of the WTRUs102a,102b,102c, and the like. The MME162may also provide a control plane function for switching between the RAN104and other RANs (not shown) that employ other radio technologies, such as GSM or WCDMA. The serving gateway164may be connected to each of the eNode-Bs160a,160b,160cin the RAN104via the S1 interface. The serving gateway164may generally route and forward user data packets to/from the WTRUs102a,102b,102c. The serving gateway164may also perform other functions, such as anchoring user planes during inter-eNode B handovers, triggering paging when downlink data is available for the WTRUs102a,102b,102c, managing and storing contexts of the WTRUs102a,102b,102c, and the like. The serving gateway164may also be connected to the PDN gateway166, which may provide the WTRUs102a,102b,102cwith access to packet-switched networks, such as the Internet110, to facilitate communications between the WTRUs102a,102b,102cand IP-enabled devices. The core network107may facilitate communications with other networks. For example, the core network107may provide the WTRUs102a,102b,102cwith access to circuit-switched networks, such as the PSTN108, to facilitate communications between the WTRUs102a,102b,102cand traditional land-line communications devices.
For example, the core network107may include, or may communicate with, an IP gateway (e.g., an IP multimedia subsystem (IMS) server) that serves as an interface between the core network107and the PSTN108. In addition, the core network107may provide the WTRUs102a,102b,102cwith access to the networks112, which may include other wired or wireless networks that are owned and/or operated by other service providers. FIG.1Eis a system diagram of the RAN105and the core network109according to an embodiment. The RAN105may be an access service network (ASN) that employs IEEE 802.16 radio technology to communicate with the WTRUs102a,102b,102cover the air interface117. As will be further discussed below, the communication links between the different functional entities of the WTRUs102a,102b,102c, the RAN105, and the core network109may be defined as reference points. As shown inFIG.1E, the RAN105may include base stations180a,180b,180c, and an ASN gateway182, though it will be appreciated that the RAN105may include any number of base stations and ASN gateways while remaining consistent with an embodiment. The base stations180a,180b,180cmay each be associated with a particular cell (not shown) in the RAN105and may each include one or more transceivers for communicating with the WTRUs102a,102b,102cover the air interface117. In some embodiments, the base stations180a,180b,180cmay implement MIMO technology. Thus, the base station180a, for example, may use multiple antennas to transmit wireless signals to, and receive wireless signals from, the WTRU102a. The base stations180a,180b,180cmay also provide mobility management functions, such as handoff triggering, tunnel establishment, radio resource management, traffic classification, quality of service (QoS) policy enforcement, and the like. The ASN gateway182may serve as a traffic aggregation point and may be responsible for paging, caching of subscriber profiles, routing to the core network109, and the like. The air interface117between the WTRUs102a,102b,102cand the RAN105may be defined as an R1 reference point that implements the IEEE 802.16 specification. In addition, each of the WTRUs102a,102b,102cmay establish a logical interface (not shown) with the core network109. The logical interface between the WTRUs102a,102b,102cand the core network109may be defined as an R2 reference point, which may be used for authentication, authorization, IP host configuration management, and/or mobility management. The communication link between each of the base stations180a,180b,180cmay be defined as an R8 reference point that includes protocols for facilitating WTRU handovers and the transfer of data between base stations. The communication link between the base stations180a,180b,180cand the ASN gateway182may be defined as an R6 reference point. The R6 reference point may include protocols for facilitating mobility management based on mobility events associated with each of the WTRUs102a,102b,102c. As shown inFIG.1E, the RAN105may be connected to the core network109. The communication link between the RAN105and the core network109may be defined as an R3 reference point that includes protocols for facilitating data transfer and mobility management capabilities, for example. The core network109may include a mobile IP home agent (MIP-HA)184, an authentication, authorization, accounting (AAA) server186, and a gateway188.
While each of the foregoing elements are depicted as part of the core network109, it will be appreciated that any one of these elements may be owned and/or operated by an entity other than the core network operator. The MIP-HA may be responsible for IP address management, and may enable the WTRUs102a,102b,102cto roam between different ASNs and/or different core networks. The MIP-HA184may provide the WTRUs102a,102b,102cwith access to packet-switched networks, such as the Internet110, to facilitate communications between the WTRUs102a,102b,102cand IP-enabled devices. The AAA server186may be responsible for user authentication and for supporting user services. The gateway188may facilitate interworking with other networks. For example, the gateway188may provide the WTRUs102a,102b,102cwith access to circuit-switched networks, such as the PSTN108, to facilitate communications between the WTRUs102a,102b,102cand traditional land-line communications devices. In addition, the gateway188may provide the WTRUs102a,102b,102cwith access to the networks112, which may include other wired or wireless networks that are owned and/or operated by other service providers. Although not shown inFIG.1E, RAN105may be connected to other ASNs and the core network109may be connected to other core networks. The communication link between the RAN105and the other ASNs may be defined as an R4 reference point, which may include protocols for coordinating the mobility of the WTRUs102a,102b,102cbetween the RAN105and the other ASNs. The communication link between the core network109and the other core networks may be defined as an R5 reference point, which may include protocols for facilitating interworking between home core networks and visited core networks. An air interface, e.g., for a new radio (NR) access technology in a 5G system, may support a variety of use cases, such as improved broadband performance (IBB), Industrial control and communications (ICC), vehicular applications (V2X) and Massive Machine-Type Communications (mMTC). Use cases may have associated support in an air interface (e.g., 5G air interface). An air interface may support, for example, ultra-low transmission latency (LLC), ultra-reliable transmission (URC) and MTC operation (including narrowband operation). Support for ultra-low transmission latency (LLC) may comprise, for example, air interface latency such as a 1 ms RTT and TTIs between 100 us and 250 us. Support may be provided for ultra-low access latency (e.g., time from initial system access until the completion of the transmission of the first user plane data unit). End-to-end (e2e) latency less than 10 ms may be supported, for example, for ICC and V2X. Support for ultra-reliable transmission (URC) may comprise, for example, improved transmission reliability, such as 99.999% transmission success and service availability. Support may be provided for mobility speed in the range of 0-500 km/h. A Packet Loss Ratio of less than 10^-6 may be supported, for example, for ICC and V2X. Support for MTC operation may comprise, for example, air interface support for narrowband operation (e.g., using less than 200 KHz), extended battery life (e.g., up to 15 years of autonomy) and minimal communication overhead for small and infrequent data transmissions (e.g., low data rate in the range of 1-100 kbps with access latency of seconds to hours). A 5gFLEX system may be implemented with OFDM and/or other waveforms for uplink and/or downlink. Description of examples herein is non-limiting.
Examples are applicable and adaptable to other waveforms and wireless technologies. OFDM may be used as a signal format for data transmissions, e.g., in LTE and IEEE 802.11. OFDM may efficiently divide spectrum into multiple parallel orthogonal subbands. A (e.g., each) subcarrier may be shaped using a rectangular window in the time domain, which may lead to sinc-shaped subcarriers in the frequency domain. OFDMA may rely on (e.g., perfect) frequency synchronization and tight management of uplink timing alignment within the duration of the cyclic prefix, for example, to maintain orthogonality between signals and to minimize intercarrier interference. Tight synchronization may be difficult, for example, in a system where a WTRU may be simultaneously connected to multiple access points. Additional power reduction may be applied to uplink transmissions, for example, to comply with spectral emission requirements for adjacent bands. Fragmented spectrum may be aggregated for WTRU transmissions. OFDM (CP-OFDM) performance may be improved, for example, by more stringent RF requirements for implementations, such as operation using a large amount of contiguous spectrum that may not require aggregation. A CP-based OFDM transmission scheme may provide a downlink physical layer for 5G similar to a 4G system with modifications to pilot signal density and location. A 5gFLEX downlink transmission scheme may be based on a multicarrier waveform that may be characterized by high spectral containment (e.g., lower side lobes and lower OOB emissions). A multicarrier (MC) waveform for 5G may comprise, for example, OFDM-OQAM and/or UFMC (UF-OFDM). Multicarrier modulation waveforms may divide a channel into subchannels and may modulate data symbols on subcarriers in the subchannels. In an example of Filtered Band Multi-Carrier (FBMC), such as OFDM-OQAM, a filter may be applied in the time domain per subcarrier to an OFDM signal, for example, to reduce OOB. OFDM-OQAM may cause very low interference to adjacent bands, may not need large guard bands and may be implemented without a cyclic prefix. OFDM-OQAM may be sensitive to multipath effects and to high delay spread in terms of orthogonality, which may complicate equalization and channel estimation. In an example of Universal Filtered MultiCarrier (UFMC), such as UF-OFDM, a filter may be applied in the time domain to the OFDM signal to reduce OOB. Filtering may be applied per subband to use spectrum fragments, which may reduce complexity and make UF-OFDM more practical to implement. OOB emissions in unused spectrum fragments in a band may be as high as in OFDM. UF-OFDM may provide some improvement over OFDM at the edges of the filtered spectrum with little to no improvement in the spectral hole. These waveforms enable frequency multiplexing of signals with non-orthogonal characteristics (such as different subcarrier spacing) and co-existence of asynchronous signals without requiring complex interference cancellation receivers. These waveforms may facilitate the aggregation of fragmented pieces of spectrum in baseband processing, e.g., as a lower cost alternative to its implementation as part of RF processing. Co-existence of different waveforms within the same band may be considered, for example, to support mMTC narrowband operation, e.g., using SCMA. Different waveforms, e.g., CP-OFDM, OFDM-OQAM and UF-OFDM, may be combined in the same band, e.g., for all aspects and for downlink and uplink transmissions. 
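As a concrete (and purely illustrative) aid to the OOB discussion above, the following sketch computes the spectrum of a single rectangular-windowed OFDM subcarrier and of the same subcarrier with a smoother time-domain window standing in for the per-subcarrier/per-subband filtering of FBMC/UFMC-style schemes; all parameter values are arbitrary:

    # Illustrative sketch (parameters not from the text): spectrum of one
    # OFDM subcarrier. A rectangular time-domain window yields a sinc-shaped
    # subcarrier with slowly decaying side lobes (high OOB emission);
    # a smoother time-domain window/filter, as FBMC/UFMC-style schemes
    # apply, suppresses the side lobes at the cost of a longer time response.
    import numpy as np

    n_sym = 64                  # samples per OFDM symbol (illustrative)
    oversample = 16             # zero-padding factor for a finer spectrum
    t = np.arange(n_sym)
    subcarrier = np.exp(2j * np.pi * 4 * t / n_sym)   # subcarrier index 4

    def spectrum_db(x):
        X = np.fft.fft(x, n_sym * oversample)
        mag = np.abs(X) / np.abs(X).max()
        return 20 * np.log10(mag + 1e-12)

    rect = spectrum_db(subcarrier)                     # sinc-shaped spectrum
    hann = spectrum_db(subcarrier * np.hanning(n_sym)) # windowed/filtered

    # Compare the worst side lobe far from the main lobe (illustrative
    # margin of 3 subcarrier spacings on either side of subcarrier 4).
    far = np.r_[0:oversample * 1, oversample * 7:n_sym * oversample // 2]
    print(f"rectangular window, worst far lobe: {rect[far].max():6.1f} dB")
    print(f"Hann window, worst far lobe: {hann[far].max():6.1f} dB")

The rectangular window shows the slowly decaying sinc side lobes noted above, while the windowed version pushes the far side lobes down by tens of dB, at the cost of a longer time response.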
Co-existence of different waveforms may include transmissions using different types of waveforms between different WTRUs or transmissions from the same WTRU, e.g., simultaneously, with some overlap or consecutive in the time domain. Other co-existence aspects may include support for hybrid types of waveforms, e.g., waveforms and/or transmissions that may support, for example: a possibly varying CP duration (e.g., from one transmission to another), a combination of a CP and a low power tail (e.g., a zero tail) and/or a form of hybrid guard interval (e.g., using a low power CP and an adaptive low power tail), etc. Waveforms may support dynamic variation and/or control of other aspects, such as how to apply filtering (e.g., whether filtering is applied at the edge of the spectrum used for reception of any transmission(s) for a given carrier frequency, at the edge of a spectrum used for reception of a transmission associated with a specific SOM, per subband or per group thereof). An uplink transmission scheme may use the same waveform as, or a different waveform from, the one used for downlink transmissions. Transmissions to and from different WTRUs in the same cell may be multiplexed, for example, based on FDMA and TDMA. 5gFLEX radio access may be characterized by a very high degree of spectrum flexibility that enables deployment in different frequency bands with different characteristics, which may include different duplex arrangements, different and/or variable sizes of available spectrum, such as contiguous and non-contiguous spectrum allocations in the same or different bands. 5gFLEX radio access may support variable timing aspects, such as support for multiple TTI lengths and asynchronous transmissions. Multiple duplexing schemes (e.g., TDD, FDD) may be supported. Supplemental downlink operation may be supported, e.g., for FDD operation, for example, using spectrum aggregation. FDD operation may support full-duplex FDD and half-duplex FDD operation. DL/UL allocation may be dynamic (e.g., may not be based on a fixed DL/UL frame configuration), e.g., for TDD operation. The length of a DL or a UL transmission interval may be set per transmission opportunity. A 5G air interface characteristic or capability may enable different transmission bandwidths on uplink and downlink, ranging, e.g., from a nominal system bandwidth to a maximum value corresponding to the system bandwidth. Single carrier operation may support a variety or range of system bandwidths, such as 5, 10, 20, 40, 80 and 160 MHz. Nominal bandwidths may have one or more fixed values. Narrowband transmissions (e.g., 0 to 200 KHz) may be supported within the operating bandwidth for MTC devices. System bandwidth may refer to the largest portion of spectrum that may be managed by a network for a given carrier. The spectral portion of a carrier that a WTRU minimally supports for cell acquisition, measurements and initial access to the network may correspond to the nominal system bandwidth. A WTRU may be configured with a channel bandwidth that may be within the range of the entire system bandwidth. A WTRU's configured channel bandwidth may or may not include a nominal part of system bandwidth, e.g., as shown in an example inFIG.2.
FIG.2is an example of transmission bandwidths.FIG.2shows a nominal system bandwidth (cell) (e.g., 5 MHz), a UEx channel bandwidth (e.g., 10 MHz), a UEy channel bandwidth (e.g., 20 MHz), and a UEz channel bandwidth (e.g., 5 MHz), all at different allocations, which may or may not overlap, within the system bandwidth (e.g., 20 MHz). UE refers to a WTRU. Bandwidth flexibility may be achieved, for example, because (e.g., all) applicable sets of RF requirements for a given maximum operating bandwidth in a band may be met without the introduction of additional allowed channel bandwidths for that operating band, e.g., due to the efficient support of baseband filtering of the frequency domain waveform. A WTRU's channel bandwidth for single carrier operation may be configured, reconfigured and/or dynamically changed. Spectrum for narrowband transmissions within the nominal system bandwidth, the system bandwidth or the configured channel bandwidth may be allocated. A 5G air interface physical layer may be band-agnostic and may support operation in licensed bands (e.g., below 5 GHz) and unlicensed bands (e.g., in the range 5-6 GHz). An LBT Cat 4 based channel access framework similar to LTE LAA may be supported, e.g., for operation in unlicensed bands. Cell-specific and/or WTRU-specific channel bandwidths for arbitrary spectrum block sizes may be scaled and managed (e.g., scheduling, addressing of resources, broadcasted signals, measurements, etc.). Downlink control channels and signals may support FDM operation. A WTRU may acquire a downlink carrier, for example, by receiving transmissions using (e.g., only) the nominal part of the system bandwidth. For example, a WTRU may not initially receive transmissions covering the entire bandwidth being managed by the network for the concerned carrier. Downlink data channels may be allocated over a bandwidth that may or may not correspond to nominal system bandwidth, e.g., without restrictions other than being within the WTRU's configured channel bandwidth. For example, a network may operate a carrier with a 12 MHz system bandwidth using a 5 MHz nominal bandwidth allowing devices supporting 5 MHz maximum RF bandwidth to acquire and access the system while potentially allocating +10 to −10 MHz of the carrier frequency to other WTRUs supporting up to 20 MHz worth of channel bandwidth. FIG.3is an example of flexible spectrum allocation.FIG.3shows an example of spectrum allocation where different subcarriers may be (e.g., at least conceptually) assigned to different modes of operation (hereafter Spectrum Operation Mode or SOM). Different SOM may be used to fulfill different requirements for different transmissions. A SOM may consist of a subcarrier spacing, a TTI length and/or one or more reliability aspects (e.g., HARQ processing aspects, secondary control channel). A SOM may be used to refer to a (e.g., specific) waveform or may be related to a processing aspect (e.g., in support of co-existence of different waveforms in the same carrier using FDM and/or TDM or coexistence of FDD operation in a TDD band (e.g., with support in a TDM manner or similar)). A WTRU may be configured to perform transmissions according to one or more SOMs.
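One possible (hypothetical) way to represent a per-SOM configuration in software is sketched below; the class name, field names and numeric values are illustrative and are not taken from any specification, but the bundled aspects (subcarrier spacing, TTI length, HARQ bound, reliability target, waveform) follow the description above:

    # Hypothetical sketch of per-SOM configuration; names and values are
    # illustrative, not from any specification. Each SOM bundles the
    # aspects the text associates with a mode: numerology, TTI length,
    # HARQ processing and a QoS-related reliability target.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class SpectrumOperationMode:
        name: str
        subcarrier_spacing_khz: float
        tti_us: int                 # TTI length for this SOM
        harq_max_tx: int            # upper bound for HARQ (re)transmissions
        target_bler: float          # QoS-related reliability target
        waveform: str               # e.g., "CP-OFDM", "UF-OFDM", "OFDM-OQAM"

    # One possible configuration: a WTRU holding several SOMs at once.
    WTRU_SOMS = {
        "URC": SpectrumOperationMode("URC", 15.0, 1000, 6, 1e-5, "CP-OFDM"),
        "LLC": SpectrumOperationMode("LLC", 60.0, 125, 1, 1e-2, "UF-OFDM"),
        "MBB": SpectrumOperationMode("MBB", 15.0, 1000, 4, 1e-1, "CP-OFDM"),
    }

    def som_for_service(service: str) -> SpectrumOperationMode:
        """Pick the SOM configured for a given service type."""
        return WTRU_SOMS[service]

    print(som_for_service("LLC"))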
For example, a SOM may correspond to transmissions that use at least one of the following: a specific TTI duration, a specific initial power level, a specific HARQ processing type, a specific upper bound for successful HARQ reception/transmission, a specific transmission mode, a specific physical channel (uplink or downlink), a specific waveform type or even a transmission according to a specific RAT (e.g., LTE or according to a 5G transmission technique). A SOM may correspond to a QoS level and/or a related aspect (e.g., maximum/target latency, maximum/target BLER or similar). A SOM may correspond to a spectrum area and/or to a specific control channel or aspect thereof (e.g., search space or DCI type). For example, a WTRU may be configured with a SOM for a URC type of service, an LLC type of service and/or an MBB type of service. A WTRU may have a configuration for a SOM for system access and/or for transmission/reception of L3 control signaling (e.g., RRC), for example, in a portion of a spectrum associated with a system, such as in a nominal system bandwidth. Spectrum aggregation may be supported (e.g., for single carrier operation). A WTRU may support transmission and reception of multiple transport blocks over contiguous or non-contiguous sets of physical resource blocks (PRBs), e.g., within the same operating band. Mapping of a single transport block to separate sets of PRBs may be supported. Support may be provided for simultaneous transmissions associated with different SOM requirements. Multicarrier operation may be supported, for example, using contiguous or non-contiguous spectrum blocks within the same operating band or across two or more operating bands. Support may be provided for aggregation of spectrum blocks using different modes (e.g., FDD and TDD) and/or different channel access methods (e.g., licensed and unlicensed band operation below 6 GHz). Support may be provided for procedures that configure, reconfigure and/or dynamically change a WTRU's multicarrier aggregation. Downlink (DL) and uplink (UL) transmissions may be organized into radio frames characterized by a number of fixed aspects (e.g., location of downlink control information) and a number of varying aspects (e.g., transmission timing, supported types of transmissions). A basic time interval (BTI) may be expressed in terms of an integer number of one or more symbol(s), where the symbol duration may be a function of the subcarrier spacing applicable to the time-frequency resource. Subcarrier spacing (e.g., for FDD) may differ between an uplink carrier frequency fULand a downlink carrier frequency fDLfor a given frame. A transmission time interval (TTI) may correspond to the minimum time supported by a system between consecutive transmissions, where each transmission may be associated with different transport blocks (TBs), for the downlink (TTIDL) and for the uplink (TTIUL); a TTI may exclude preambles and may include control information (e.g., DCI for the downlink or UCI for the uplink). A TTI may be expressed in terms of an integer number of one or more BTI(s). A BTI may be specific to and/or associated with a given SOM. For example, supported frame durations may include 100 us, 125 us (1/8 ms), 142.85 us (1/7 ms, i.e., 2 normal-CP LTE OFDM symbols) and 1 ms, e.g., to enable alignment with an LTE timing structure. A frame may start with downlink control information (DCI) of a fixed time duration tdci preceding downlink data transmission (DL TRx) for a concerned carrier frequency (fUL+DL for TDD and fDL for FDD).
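Before continuing with the frame layout, the timing relationships just introduced can be sketched numerically. The only facts taken from the text are that a BTI is an integer number of symbols, a TTI an integer number of BTIs, and that symbol duration follows from the subcarrier spacing; the numeric pairings below are illustrative:

    # Minimal timing sketch; numeric pairings are illustrative.

    def symbol_us(subcarrier_spacing_khz: float, cp_overhead: float = 0.0) -> float:
        """Symbol duration in microseconds: T = 1 / subcarrier spacing,
        plus an optional fractional cyclic-prefix overhead."""
        return (1.0 / subcarrier_spacing_khz) * 1000.0 * (1.0 + cp_overhead)

    # LTE-like numerology: 15 kHz spacing -> 66.7 us useful symbol; with
    # a ~7% normal CP, 7 symbols fill 0.5 ms, so 2 symbols take ~1/7 ms,
    # matching the 142.85 us frame duration quoted above.
    sym = symbol_us(15.0, cp_overhead=1.0 / 14.0)   # ~71.4 us per symbol
    bti_symbols = 2
    print(f"BTI = {bti_symbols} symbols = {bti_symbols * sym:.2f} us")

    # A TTI is an integer number of BTIs; e.g., a 1 ms TTI at this numerology:
    tti_btis = 7
    print(f"TTI = {tti_btis} BTIs = {tti_btis * bti_symbols * sym:.2f} us")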
A frame may (e.g., for TDD duplexing) consist of a downlink portion (DCI and DL TRx) and (e.g., optionally) an uplink portion (UL TRx). A switching gap (swg) may (e.g., for frames of a given configuration) precede the uplink portion of the frame, e.g., when present. A frame may (e.g., for TDD duplexing) consist of a downlink reference TTI and one or more TTI(s), e.g., for the uplink. The start of an uplink TTI may be derived, for example, using an offset (toffset), applied from the start of the downlink reference frame that may overlap with the start of the uplink frame. 5gFLEX may (e.g., for TDD) support D2D/V2x/Sidelink operation in the frame, for example, by including respective downlink control and forward direction transmission in the DCI+DL TRx portion (e.g., when a semi-static allocation of the respective resources is used) or in the DL TRx portion (e.g., for dynamic allocation) and by including the respective reverse direction transmission in the UL TRx portion. 5gFLEX may (e.g., for FDD) support D2D/V2x/Sidelink operation in the UL TRx portion of a frame, for example, by including respective downlink control, forward direction and reverse direction transmissions in the UL TRx portion. Dynamic allocation of the respective resources may be used. FIGS.4and5provide examples of frame structures.FIG.4is an example of timing relationships for TDD duplexing.FIG.5is an example of timing relationships for FDD duplexing. A scheduling function may be supported in the MAC layer. Support may be provided for multiple (e.g., two) scheduling modes, e.g., network-based scheduling (e.g., for tight scheduling in terms of resources, timing and transmission parameters of downlink transmissions and/or uplink transmissions) and WTRU-based scheduling (e.g., for more flexibility in terms of timing and transmission parameters). Scheduling information for modes may be valid for one or more TTIs. Network-based scheduling may enable a network to tightly manage available radio resources assigned to different WTRUs, which may permit optimal sharing of resources. Dynamic scheduling may be supported. WTRU-based scheduling may enable a WTRU to opportunistically access uplink resources with minimal latency on a per-need basis, for example, within a set of shared or dedicated uplink resources assigned (e.g., statically or dynamically) by the network. Support may be provided for synchronized and unsynchronized opportunistic transmissions. Support may be provided for contention-based transmissions and contention-free transmissions. Support for opportunistic transmissions (scheduled or unscheduled) may be provided, for example, to meet ultra-low latency requirements for 5G and power saving requirements for mMTC. 5gFLEX may support one or more forms of association between data available for transmission and available resources for uplink transmissions. Multiplexing of data with different QoS requirements within the same transport block may be supported, for example, when multiplexing does not introduce a negative impact to the service with the most stringent QoS requirement and does not introduce an unnecessary waste of system resources. A transmission may be encoded using a number of different encoding methods. Different encoding methods may have different characteristics. For example, an encoding method may generate a sequence of information units. An (e.g., each) information unit, or block, may be self-contained. 
For example, an error in the transmission of a first block may not impair the ability of a receiver to successfully decode a second block, such as when the second block is error-free and/or when sufficient redundancy may be found in the second block or in a different block for which at least a portion was successfully decoded. An example of an encoding technique may include raptor/fountain codes, e.g., where a transmission may consist of a sequence of N raptor codes. One or more codes may be mapped to one or more transmission “symbols” in time. A “symbol” may correspond to one or more sets of information bits, e.g., one or more octets. Encoding may be used to add FEC to a transmission, e.g., where a transmission may use N+1 or N+2 raptor codes or symbols (e.g., assuming a one-to-one raptor code to symbol relationship). A transmission may be more resilient to the loss of one “symbol,” e.g., due to interference or puncturing by another transmission overlapping in time. A WTRU may be configured to receive and/or detect one or more system signatures. A system signature may consist of a signal structure using a sequence. A signal may be similar to a synchronization signal, e.g., similar to LTE PSS and/or SSS. A signature may be specific to (e.g., may uniquely identify) a particular node (or TRP) within a given area or it may be common to a plurality of nodes (or TRPs) within an area, which aspect may not be known and/or relevant to a WTRU. A WTRU may determine and/or detect a system signature sequence and may further determine one or more parameters associated with the system. For example, a WTRU may further derive an index therefrom and may use the index to retrieve associated parameters, e.g., within a table, such as an access table. For example, a WTRU may use received power associated with a signature for open-loop power control, e.g., to set an initial transmission power when a WTRU determines that it may access (and/or transmit) using applicable resources of the system. For example, a WTRU may use the timing of a received signature sequence, e.g., to set the timing of a transmission (e.g., a preamble on a PRACH resource) when the WTRU determines that it may access (and/or transmit) using applicable resources of the system. A WTRU may be configured with a list of one or more entries. A list may be referred to as an access table. A list may be indexed, e.g., where an (e.g., each) entry may be associated with a system signature and/or with a sequence thereof. An access table may provide initial access parameters for one or more areas. An (e.g., each) entry may provide one or more parameters necessary for performing an initial access to the system. Parameters may include at least one of a set of one or more random access parameters (e.g., including applicable physical layer resources, such as PRACH resources) in time and/or frequency, initial power level and/or physical layer resources for reception of a response. Parameters may (e.g., further) include access restrictions (e.g., PLMN identity and/or CSG information). Parameters may (e.g., further) include routing-related information, such as one or more applicable routing areas. An entry may be associated with (and/or indexed by) a system signature. Such an entry may be common to a plurality of nodes (or TRPs). A WTRU may receive an access table, for example, via a transmission using dedicated resources (e.g., by RRC configuration) and/or by a transmission using broadcast resources.
In the latter case, the periodicity of the transmission of an access table may be relatively long (e.g., up to 10240 ms), which may be longer than the periodicity of the transmission of a signature (e.g., in the range of 100 ms). A Logical Channel (LCH) may represent a logical association between data packets and/or PDUs. An association may be based on data units being associated with the same bearer (similar to legacy), and/or being associated with the same SOM and/or slice (e.g., a processing path using a set of physical resources). For example, an association may be characterized by at least one of a chaining of processing functions, an applicable physical data (and/or control) channel (or instance thereof) or an instantiation of a protocol stack with (i) a specific portion being centralized (e.g., PDCP or anything beyond portions of the physical layer processing such as the Radio Front end (RF)) and (ii) another portion being closer to the edge (e.g., MAC/PHY in the TRP or RF) potentially separated by a fronthauling interface. The term LCH as used herein may have a different and/or broader meaning than a similar term for LTE systems. A WTRU may be configured to determine a relationship between different data units. A relationship may be based on a matching function (e.g., based on the configuration of one or more field values common to data units that are part of the same logical association). Fields may correspond to fields in a protocol header associated with the data unit(s). For example, a matching function may use a tuple of parameters for fields of the IP headers of a data unit, such as IP source/destination address(es), transport protocol source/destination port(s) and transport protocol type, IP protocol version (e.g., IPv4 or IPv6), etc. For example, data units that are part of the same logical association may share a common radio bearer, processing function, SOM and/or may (e.g., at least conceptually) correspond to the same LCH and/or LCG. A Logical Channel Group (LCG) may consist of a group of LCH(s) (or equivalent as per the definition above), e.g., where a grouping may be based on one or more criteria. Criteria may be, for example, that one or more LCH(s) may have a similar priority level applicable to all LCHs of the same LCG or may be associated with the same SOM (or type thereof) or the same slice (or type thereof). For example, an association may be characterized by at least one of a chaining of processing functions, an applicable physical data (and/or control) channel (or instance thereof) or instantiation of a protocol stack, which may include (i) a specific portion being centralized (e.g., PDCP or anything except RF) and (ii) another portion being closer to the edge (e.g., MAC/PHY in the TRP or RF) potentially separated by a fronthauling interface. The term LCG as used herein may have a different and/or broader meaning than a similar term for LTE systems. A Transport Channel (TrCH) may consist of a specific set of processing steps and/or a specific set of functions applied to data information that may affect one or more transmission characteristics over a radio interface. LTE may define multiple types of TrCH, such as the Broadcast Channel (BCH), the Paging Channel (PCH), the Downlink Shared Channel (DL-SCH), the Multicast Channel (MCH), the Uplink Shared Channel (UL-SCH) and the Random Access Channel (which may not carry user plane data). Transport channels for carrying user plane data may include the DL-SCH and the UL-SCH for the downlink and for the uplink, respectively.
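Returning to the matching function described above, a hypothetical sketch of how data units could be grouped into logical associations by a tuple of IP header fields follows; the names and the exact choice of fields are illustrative:

    # Hypothetical sketch of the matching function described above: data
    # units whose configured header fields form the same tuple are treated
    # as one logical association (conceptually, the same LCH). Names and
    # the choice of fields are illustrative.
    from typing import NamedTuple

    class FlowKey(NamedTuple):
        ip_version: int      # e.g., 4 or 6
        src_addr: str
        dst_addr: str
        transport: str       # e.g., "udp", "tcp"
        src_port: int
        dst_port: int

    def logical_association(packet: dict) -> FlowKey:
        """Map a parsed packet header to its logical-association key."""
        return FlowKey(
            packet["ip_version"], packet["src"], packet["dst"],
            packet["proto"], packet["sport"], packet["dport"],
        )

    lch_queues: dict[FlowKey, list[bytes]] = {}

    def enqueue(packet: dict, payload: bytes) -> None:
        # Packets sharing a key share a queue, bearer and processing path.
        lch_queues.setdefault(logical_association(packet), []).append(payload)

    enqueue({"ip_version": 4, "src": "10.0.0.1", "dst": "10.0.0.2",
             "proto": "udp", "sport": 5004, "dport": 5004}, b"\x00" * 40)
    print(len(lch_queues))  # 1 logical association so far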
An augmented set of requirements may be supported by an air interface for a 5G system. Support may be provided for multiple transport channels, e.g., for user and/or control plane data, for one or more WTRU devices. The term TrCH as used herein may have a different and/or broader meaning than a similar term for LTE systems. For example, a transport channel for URLLC (e.g., URLLCH), for mobile broadband (MBBCH) and/or for machine type communications (MTCCH) may be defined for downlink transmission (e.g., DL-URLLCH, DL-MBBCH and DL-MTCCH) and for uplink transmissions (e.g., UL-URLLCH, UL-MBBCH and UL-MTCCH). In an example, multiple TrCH may be mapped to a different set of physical resources (e.g., PhCH) belonging to the same SOM. This may be advantageous, for example, to support simultaneous transmission of traffic with different requirements over the same SOM. An example of this may be transmitting a URLLCH alongside an MTCCH simultaneously when the WTRU is configured with a single SOM. A WTRU may be configured with one or more parameters associated with a characterization of how data should be transmitted. A characterization may represent constraints and/or requirements that a WTRU may be expected to meet and/or enforce. A WTRU may perform different operations and/or adjust its behavior based on the state associated with the data based on a characterization. Parameters may include, for example, time-related aspects (e.g., a Time to Live (TTL) for a packet, which represents the time before which the packet should be transmitted, acknowledged, etc., to meet latency requirements), rate-related aspects and configuration-related aspects (e.g., absolute priority). Parameters may (e.g., also) be changed with time while a packet or data may be pending for transmission. A 5G air interface may support a wide variety of use cases with different QoS requirements, e.g., in terms of differentiation between applicable radio resources and transmission methods. For example, TTI duration, reliability, diversity applied to the transmission and maximum latency may vary in a wide variety of use cases. A WTRU may face additional challenges in terms of processing bottlenecks, for example, due to increased throughput and decreased latency (e.g., shorter TTI duration and reduced processing times). Procedures may optimize the creation and assembly of Layer 2 Protocol Data Units (e.g., MAC PDUs). RLC segmentation, assembly, MAC layer multiplexing and PHY layer encoding may be performed after reception of a grant. The latency from grant to UL transmission may not be improved beyond the hardware and software latency of these operations. Procedures may be provided for segmentation, assembly and multiplexing. A scheduling function (e.g., in the network) may or may not have timely information and/or exact knowledge of QoS requirements associated with data available for transmission in a WTRU buffer. A WTRU may implement behavior to enable services that have strict reliability and/or latency requirements (e.g., for URLLC services). A WTRU may use parameters to impact how and what data is transmitted and how PDUs are generated. A WTRU may be configured with one or more parameters associated with a characterization of how data should be transmitted. Characterization may represent constraints and/or requirements that a WTRU may be expected to meet and/or enforce. A WTRU may perform different operations and/or adjust its behavior, for example, based on a state associated with data based on a characterization.
Behavior may be related to PDU assembly and restrictions, e.g., in terms of processing time. A WTRU may determine that one or more procedures such as those described herein may be applicable. Procedures described herein may be utilized in whole or in part, alone or in combination with any other procedure, whether described herein or elsewhere. One or more example procedures described herein may be executed or applied in part or in full on a network or a WTRU. Procedures may be provided for determination of PHY layer parameters prior to a grant. For example, a WTRU may determine or be configured with PHY layer parameters for transmission of data prior to reception of a grant for UL transmission. Early determination of parameters may allow for some PHY layer processing to be performed by a WTRU in advance of a UL grant, which may be beneficial to allow a WTRU to perform UL transmission with minimal delay from the transmission of the UL grant for certain types of data, for example, to minimize the latency associated with a UL transmission. An early determination of PHY layer parameters may (e.g., also) be employed in conjunction with other procedures described herein. PHY layer parameters determined prior to the grant may be applied to specific logical channels, transport channels, traffic types or SOMs. Parameters configured or provided to a WTRU in advance of grant reception may consist of, for example, one or more of the following: a modulation scheme to be applied to the data, a coding scheme and coding-related parameters, HARQ related parameters (e.g., HARQ process type or characteristics of the HARQ to be employed), a transport block size, rules for associating L2 data to specific PHY resources (e.g., which PHY resources or range of PHY resources may be used to transmit specific data), PHY resources or a super-set of PHY resources associated with an eventual grant. PHY layer information may be a superset of the resources that may be refined by the grant itself. Parameters may be signaled from a network. For example, a WTRU may receive PHY layer parameters in advance, e.g., through signaling by the network. Parameters may be received by a WTRU for a certain type of data (e.g., URLLC) or certain types of logical channels, transport channels or the like. Parameters may be applicable (e.g., only) to certain PHY layer resources that may be intended to carry the data. Parameters may be applicable to data transmitted in a certain set of resource blocks or in a defined frequency/time range. A WTRU may receive PHY layer parameters from the network. Parameters may be received periodically or in response to one or more triggers. Triggers may comprise, for example, (i) a significant change in channel characteristics detected by the network or detected by the WTRU and signaled to the network, (ii) a request from the WTRU and/or (iii) the initiation by the WTRU of a service, logical channel, bearer, or the like, which may require the WTRU to have access to the PHY layer parameters in advance.
PHY layer parameters received by a WTRU may be valid or applicable until, for example, one or more of the following occurs: (i) a WTRU receives a new/different set of PHY layer parameters, (ii) expiration of a timer following the reception of the PHY layer parameters, (iii) reception of the grant for which the PHY layer parameters should be applied and/or (iv) transmission of (e.g., all) data by the WTRU associated with a specific flow, logical channel, bearer or the like (e.g., when the WTRU has finished transmission of all URLLC data in its buffers). A WTRU may (e.g., further) indicate to the network when an event, such as one or more of the foregoing events, occurs. MCS may be received and used for future grants. In an example realization, a WTRU may periodically receive an MCS to be used for transmission of data on a portion of transmit bandwidth. This may, for example, be limited to a set of predefined resource blocks or similar (e.g., a pre-defined frequency range). A WTRU may (e.g., upon reception of the periodic MCS transmissions) apply the signaled MCS to (e.g., all) transmissions made on the associated transmission bandwidth. A WTRU may determine (e.g., a priori or based on configuration) to associate one or more L2 protocol data units with a bandwidth range and (as a result) the MCS that was signaled initially. For example, a WTRU may determine that a set of logical channels may be served with the MCS. The WTRU may map those logical channels to the portion of the bandwidth for which the MCS has been signaled. Periodic transmissions of MCS may be delivered to a WTRU, for example, through dedicated signaling on a PHY channel, through a MAC CE or similar communication or via RRC signaling. A WTRU may utilize an MCS following such a transmission, e.g., until it receives a new or updated MCS value for the same bandwidth area. A WTRU may receive multiple different MCS values, e.g., to utilize for different bandwidth areas. A WTRU may receive MCS for (e.g., only) certain bandwidth areas. A WTRU may receive a subset of resources that a grant may (e.g., subsequently) choose from. For example, a WTRU may receive a resource range within its transmit bandwidth. A resource range may be used, for example, to indicate to the WTRU the set of resources the WTRU may be required to transmit from when the grant arrives. A frequency range indicated by PHY layer parameters may identify a set of resource blocks usable during the validity time of the PHY layer parameters, a set of subframes, TTIs or symbols usable during the validity of the PHY layer parameters or a combination thereof. A grant may indicate to a WTRU the specific resources within the initial resource range. For example, PHY layer parameters may select x resource blocks for each TTI that may be usable by the WTRU. A UL grant may indicate to the WTRU one or more of those x resource blocks to use by the WTRU to satisfy the grant. An advantage of this technique may be to reduce latency associated with grant decoding, for example, given that a portion of the resources indicated by the grant are already known a priori from the PHY layer information previously received by the WTRU.
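A minimal Python sketch of the validity handling listed at the start of this passage follows; the field names and the simple timer mechanism are illustrative assumptions rather than a prescribed implementation:

import time

class PregrantParams:
    # Pre-grant PHY parameters with the validity conditions described above:
    # applicable until a new set arrives, a validity timer expires, the
    # awaited grant is received, or the associated buffer drains.
    def __init__(self, mcs: int, validity_s: float):
        self.mcs = mcs
        self.expiry = time.monotonic() + validity_s
        self.grant_received = False
        self.buffer_empty = False

    def is_valid(self) -> bool:
        return (time.monotonic() < self.expiry
                and not self.grant_received
                and not self.buffer_empty)

params = PregrantParams(mcs=10, validity_s=0.5)
assert params.is_valid()
params.grant_received = True  # e.g., the grant the parameters were meant for
assert not params.is_valid()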
A WTRU may determine its PHY layer parameters (e.g., coding, modulation, power setting, etc.) in advance of the reception of a grant. Parameters may be determined, for example, using one or more of the following: (i) measurements of SNR, CQI or similar performed by the WTRU on the DL; (ii) measurements of ACK/NACK frequency of the transmissions made on the frequency range of interest and/or (iii) measurements of reference signal power, SINR or the like related to the reference signals on the frequency range of interest. A WTRU may be configured (e.g., dynamically or semi-statically) by a network with frequency range(s) that the WTRU may (e.g., must) use to define its own set of PHY layer parameters. A WTRU may periodically determine its PHY layer parameters, for example, based on measurements for a frequency range or set of frequency ranges. A WTRU may associate PHY layer parameters to be applied to a transmission made on any resources for which the WTRU receives a grant. Frequency ranges for WTRU determination of parameters may be dynamically configured by a network. For example, a WTRU may be configured by a network to perform the above measurements and calculation of the MCS for frequency ranges A and B (e.g., only), where A and B may be subsets of the entire frequency band. A WTRU may apply MCS A to uplink transmissions performed on frequency range A, and may apply MCS B to uplink transmissions performed on frequency range B. Configuration of frequency ranges in which a WTRU may perform its own determination of PHY layer parameters may be provided by the network, e.g., through RRC signaling. A configuration may be changed by an updated configuration. For example, a network may utilize the frequency range with the best channel characteristics for URLLC transmissions at any given time. A network may dynamically reconfigure the frequency range for which a WTRU may perform its own determination of PHY layer parameters. A WTRU may signal PHY layer parameters. For example, a WTRU may signal to a network the PHY layer parameters the WTRU autonomously selected. A WTRU may signal parameters, for example, during or in response to one or more of the following: (i) upon selection/determination of the parameters; (ii) during a transmission of data that uses the parameters, in which case the WTRU may signal the parameters explicitly in control information and/or implicitly based on properties of the data being transmitted that imply the use of a specific selection of control parameters; (iii) upon request by the network and/or (iv) upon the network providing resources for transmission of data or control that may or may not be intended for transmission of these parameters. A WTRU may (e.g., also) use any combination of procedures discussed herein. For example, a WTRU may combine a first procedure that may provide a set of physical parameters with a second procedure that may provide a second set of parameters.
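As an illustrative Python sketch of a WTRU deriving its own MCS per configured frequency range from measurements, as described above; the SINR thresholds, MCS indices and range labels are hypothetical values chosen for the example:

def select_mcs(sinr_db: float) -> int:
    # Map a measured SINR to an MCS index (thresholds are illustrative).
    thresholds = [(18.0, 20), (12.0, 14), (6.0, 8), (0.0, 4)]
    for threshold, mcs in thresholds:
        if sinr_db >= threshold:
            return mcs
    return 0  # most robust MCS

# Ranges A and B are configured by the network (e.g., via RRC); the WTRU
# derives one MCS per range from its own measurements on that range.
measured_sinr = {"A": 14.5, "B": 7.2}   # measured SINR per range (dB)
per_range_mcs = {r: select_mcs(s) for r, s in measured_sinr.items()}
print(per_range_mcs)  # {'A': 14, 'B': 8}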
A WTRU may signal a Data Block Size or a TB size in the SR, BSR, RA or similar uplink transmission. A WTRU may signal a Data Block Size or a TB size that it may or will use for future transmissions. For example, a WTRU may have a set of Data Blocks prepared and ready for transmission. The WTRU may (e.g., also) have combined the Data Blocks into a transport block. The WTRU may provide the Data Block size and/or TB size in the SR, BSR, or RA transmitted to the network. A WTRU may indicate a coarse size for a Data Block or TB size, for example, to allow signaling to be sent with less overhead. For example, the set of TB sizes that may be signaled by the WTRU may be limited to x levels. Signaling may be sent with a limited number of bits, which may allow the WTRU to signal one of the x levels. For example, a WTRU may signal the next level larger than x when it wishes to transmit a TB of size x. A transport block size may be implicitly provided in a CRC check value. A WTRU may select its modulation and coding scheme (MCS) (e.g., without the network providing it). A WTRU may select its transport block size, for example, based on the number of available fixed size MAC PDUs to transmit and the size of the resource grants. An MCS may be determined by a WTRU, for example, using procedures described herein. A WTRU may signal its MCS (e.g., explicitly) to the network, for example, based on one or more procedures described herein. The transport block size utilized by a WTRU may be (e.g., implicitly) indicated, for example, as part of a CRC check value of one or more individual MAC PDUs within a transport block. A WTRU may, for example, insert padding into (e.g., each of) the fixed sized MAC PDUs to obtain a CRC check value that implicitly indicates (e.g., to the network) an overall transport block size used or that selects from one of the allowable transport block sizes. In an example, a CRC check value may implicitly signal a selection by a WTRU of a first transport block size, for example, when the CRC check value of the first encoded block transmitted is divisible by one value. A CRC check value may implicitly signal a selection by a WTRU of a second transport block size and so on, for example, when the CRC check value of the first encoded block transmitted is divisible by another value. A MAC PDU/Transport Block may be created incrementally. TBs may be created from fixed data block sizes. This may provide an advantage over the performance of RLC segmentation, assembly, MAC layer multiplexing and PHY layer encoding after reception of a grant, where latency from grant to UL transmission may not be improved beyond the hardware and software latency of these operations. For example, a WTRU may perform incremental creation of a transport block through the assembly of Data Blocks with fixed size. A WTRU may, as higher layer data arrives in the WTRU buffers, create Data Blocks of a fixed size immediately, without the need to wait for information in a grant from the network. The WTRU may create a TB, for example, by assigning it a number of Data Blocks in such a way as to occupy the size of the grant. A WTRU may (e.g., also) be given a grant that may be a multiple of the fixed Data Block size, e.g., in order to minimize padding. A WTRU may (e.g., alternatively) fit as many Data Blocks as allowed by the grant size into a transport block. A WTRU may occupy any remaining space, for example, with one or more of the following: (i) padding; (ii) MAC control information, such as information about required resources, pending MAC PDUs to transmit, MAC PDU size, an indication of packets with expired TTL, etc. and/or (iii) additional coding, rate matching, or the like, which may be inserted by the PHY layer.
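The coarse TB size signaling described above may be sketched as follows (Python); the set of signalable levels is an illustrative assumption:

import bisect

def coarse_tb_size(tb_bits: int, levels: list) -> int:
    # Signal the smallest configured level not smaller than the actual TB
    # size; with len(levels) levels, only a few bits of signaling are needed.
    i = bisect.bisect_left(levels, tb_bits)
    if i == len(levels):
        raise ValueError("TB larger than the largest signalable level")
    return levels[i]

levels = [1000, 2000, 4000, 8000]            # illustrative levels (bits)
assert coarse_tb_size(2500, levels) == 4000  # next level above 2500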
A WTRU may be configured to utilize a (e.g., one) or a finite set of specific Data Block sizes for a specific flow, bearer, logical channel or the like. A WTRU may (e.g., further) be restricted to use a specific Data Block size for (e.g., only) one or more particular flows, logical channels, bearers, or the like, and may not need to be restricted to a Data Block size for data associated with other flows, logical channels, bearers, or data types. For example, a WTRU may be required to utilize a specific Data Block size for data associated with URLLC or a flow with QoS characteristics associated with URLLC, but may create Data Blocks that are not restricted in size for other flows or data. A Data Block may consist of, for example, an RLC PDU or a PDU associated with a different protocol layer (MAC, PDCP, etc.). Configured Data Block sizes may be, for example, statically configured in a WTRU or signaled by a network. A WTRU may receive a set of allowable Data Block sizes, e.g., periodically or aperiodically, such as based on changes in channel conditions determined by the network. Data Block sizes may be signaled to a WTRU, e.g., via broadcast or dedicated signaling, such as part of RRC signaling in a MAC configuration. A WTRU may receive one or more allowable Data Block size configurations to be applied for a specific (e.g., a first) service type, flow, logical channel or the like, and a different set of allowable Data Block sizes for another (e.g., a second) service type, flow, logical channel or the like. A WTRU may (e.g., in addition) receive a configuration change to allowable Data Block sizes through (e.g., the same) signaling. A WTRU may (e.g., upon reception of a change in configuration) change the corresponding size of the Data Blocks created, for example, from the time of reception of the signaling until the reception of a new set of Data Block sizes. A WTRU may derive fixed Data Block sizes to be used, for example, based on PHY layer information that may be provided prior to the grant. For example, a WTRU may compute an allowable Data Block size based on one or more PHY layer parameters, such as those described herein. For example, a WTRU may determine a Data Block size to be equal to a coding block size that is provided as part of the PHY layer parameters indicated prior to the grant. A WTRU may select one or more Data Block sizes from a set of allowable sizes to be used (e.g., independently) for each Data Block created. Selection of a Data Block size may be made for one or more reasons, such as to accommodate the type of traffic at higher layers, based on the size of packets received over a recent time span, the buffering capability of the WTRU and/or other implementation related aspects. A WTRU may (e.g., alternatively) select a Data Block size to be used for a specific flow, logical channel and/or the like from the list of allowable sizes. A WTRU may continue to use a selected Data Block size for the same flow, logical channel and/or the like for a finite period of time. A WTRU may (e.g., also) perform a selection upon the occurrence of other triggers, such as one or more of the following: (i) an arrival of a new type of data, (ii) the next reception of a new set of Data Block sizes, (iii) the end of a frame/superframe or similar defined boundary, (iv) periodically or upon expiration of a timer, (v) upon the detection (e.g., by the WTRU) of a change in channel quality or other similar measurements and/or (vi) upon the reception of a new configuration from the network (e.g., a change of frequency, HARQ parameters, PHY configuration or the like).
A WTRU may perform segmentation/reassembly of higher layer SDUs (e.g., an IP packet or a PDCP SDU), for example, prior to the uplink grant for transmission of the associated data. Segmentation/reassembly of a packet or packets, which may be associated with a specific flow, data type, logical channel, or the like, may be performed by a WTRU upon any one or more triggers, such as: (i) arrival of a packet or SDU, which may be targeted to a specific flow or associated with a specific logical channel; (ii) when a TTL of a specific packet or SDU becomes less than a threshold; and/or (iii) the arrival of one or more SDUs where the total amount of data available for segmentation/reassembly is larger than a minimum size. For example, the minimum size may correspond to an allowable Data Block size or a Data Block size selected by the WTRU. A WTRU may, upon reception of a higher layer SDU, perform segmentation/reassembly of the SDU, for example, so the resulting segments may be of the fixed and selected Data Block size. A WTRU may (e.g., during segmentation/reassembly) insert padding into a Data Block, for example, when (e.g., all of) the data in the buffer may be consumed during Data Block creation and does not occupy an integer number of Data Blocks with a fixed size. A WTRU may select a Data Block size (e.g., from a list of allowable Data Block sizes), for example, that is the closest in size to a higher layer SDU that may be received or that minimizes inserted padding. In an example, a WTRU may select a Data Block size (e.g., from a list of allowable sizes) that minimizes the padding, for example, when a single RLC packet associated with a specific buffer or flow may be (e.g., is) present in the WTRU at the time when Data Block creation is performed. A WTRU may create a Data Block header per fixed size Data Block. A WTRU may (e.g., also) create a header for a collection of Data Blocks that may have a common size and/or one or more other characteristics, such as, but not limited to, logical channel, bearer type, flow type, service type and/or TTL. A WTRU may complete processing of the header, for example, when deciding the number of fixed sized Data Blocks to be transmitted at a given time. A WTRU may provide one or more headers to lower layers for encoding, e.g., along with the Data Blocks. A Data Block may or may not contain a header. A single header may be included in an entire transport block containing multiple Data Blocks. The header may contain, for example, one or more of the following: (i) the number of Data Blocks in the transport block, (ii) the size or sizes of Data Blocks in the transport block, (iii) the flow, logical channel or service associated with a (e.g., each) Data Block and/or (iv) an amount of control information (e.g., MAC CEs) included in the transport block. A WTRU may include (e.g., in a TB currently being transmitted) information such as the size of a pending TB to be transmitted or ready for transmission by the WTRU. For example, the information may be included in a MAC CE transmitted as part of a current TB being transmitted. A WTRU may include (e.g., in a current TB) the Block Size or Block Sizes of prepared or pending blocks to be transmitted in a future TB.
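A minimal Python sketch of the pre-grant segmentation described above follows, selecting from a hypothetical list of allowable sizes the Data Block size that minimizes inserted padding; all numeric values are illustrative:

import math

def segment_sdu(sdu_bytes: int, allowed_sizes: list):
    # Segment a higher layer SDU into fixed-size Data Blocks ahead of the
    # grant, choosing the allowable size that minimizes padding in the
    # last block (ties broken toward the smaller size).
    best = min(allowed_sizes, key=lambda s: (-sdu_bytes % s, s))
    n_blocks = math.ceil(sdu_bytes / best)
    padding = n_blocks * best - sdu_bytes
    return best, n_blocks, padding

# A 1500-byte SDU with allowable Data Block sizes of 400, 500 and 600 bytes:
print(segment_sdu(1500, [400, 500, 600]))  # (500, 3, 0) -- no padding needed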
A WTRU may perform a portion of the PHY layer processing (e.g., encoding) on a (e.g., each) fixed size Data Block, e.g., prior to reception of the grant. A WTRU may rely on PHY layer parameters, such as those described herein, to perform such processing. For example, a WTRU may rely on the MCS provided in PHY layer parameters to perform CRC insertion, encoding and modulation prior to reception of a grant. A WTRU may create a transport block to be transmitted in an incremental fashion, for example, by the creation of fixed size Data Blocks as data arrives in the RLC buffer and the encoding and modulation of these Data Blocks as they are received. A WTRU may (e.g., also) determine PHY layer processing to be performed, e.g., based on PHY layer information received. For example, a WTRU may determine, e.g., based on having received a coding scheme and coding parameters to use, that the WTRU may perform encoding prior to reception of the grant and that modulation may be provided as part of the grant. A WTRU may (e.g., upon reception of a grant) perform, for example, one or more of the following actions: (i) multiplexing and transport block creation; (ii) insertion of padding bits into (e.g., each of) the fixed size Data Blocks or the overall transport block; (iii) creation or updating of one or more Data Block headers to include information obtained with the arrival of the grant, such as the number of Data Blocks in the TB; (iv) creation of a Data Block header; and/or (v) additional PHY layer processing of a (e.g., each) Data Block or of the entire transport block. MAC multiplexing and transport block creation may be provided. A WTRU may, e.g., at grant reception, determine the number of available Data Blocks that have been constructed and processed (e.g., potentially with additional PHY layer processing, such as coding and modulation) prior to the grant that are ready to be transmitted. The WTRU may select a subset of Data Blocks to be transmitted in the grant. The selection criteria may be, for example, based on one or more of the following: (i) Data Blocks selected may contain data from a flow, logical channel, or service indicated in the grant, the Data Blocks are allowable based on the grant (e.g., in the case the grant allows for a number of flows) or the Data Blocks are fixed size Data Blocks created based on prior knowledge of the Data Block size; (ii) a WTRU may service the grant, e.g., by including the Data Blocks on a first come, first served basis, whether on a single flow, logical channel, service or over one or more (e.g., all) flows, logical channels, or services; and/or (iii) the WTRU may service the grant based on some QoS related parameters. MAC multiplexing may occur, for example, by TTL. For example, a WTRU may insert all Data Blocks in increasing order of TTL. MAC multiplexing may occur, for example, by logical channel priority and TTL. For example, a WTRU may (e.g., first) include all Data Blocks having data for which the TTL may be below a specific threshold, and (e.g., second) perform LCP for any additional space in the grant. MAC multiplexing may occur, for example, by relationship between Data Blocks. For example, a WTRU may perform selection of Data Blocks based on a pre-defined relationship between the Data Blocks that may be indicated as part of the QoS. For example, some Data Blocks may have been formed from the same IP or PDCP packet. A WTRU may include related Data Blocks within the same transport block, e.g., due to an indication from the QoS information that it is preferred or required.
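The TTL-ordered multiplexing described above may be sketched as follows (Python); the block representation, sizes and TTL threshold are illustrative assumptions, and the second pass stands in, in highly simplified form, for logical channel prioritization:

def multiplex_data_blocks(blocks, grant_bytes, ttl_threshold_ms):
    # Select pre-built fixed-size Data Blocks to fill a grant: blocks whose
    # TTL is below the threshold are served first, then remaining space is
    # filled in increasing-TTL order.  `blocks` is a list of
    # (ttl_ms, size_bytes) tuples; returns the selected subset.
    urgent = sorted(b for b in blocks if b[0] < ttl_threshold_ms)
    rest = sorted(b for b in blocks if b[0] >= ttl_threshold_ms)
    selected, used = [], 0
    for blk in urgent + rest:
        if used + blk[1] <= grant_bytes:
            selected.append(blk)
            used += blk[1]
    return selected

blocks = [(5, 100), (50, 100), (2, 100), (80, 100)]
print(multiplex_data_blocks(blocks, grant_bytes=300, ttl_threshold_ms=10))
# [(2, 100), (5, 100), (50, 100)]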
MAC multiplexing may occur, for example, by restriction of the QoS allowable in a grant. For example, a WTRU may perform selection of Data Blocks by selecting (e.g., only) Data Blocks associated with (e.g., only) a single or restricted set of flows, logical channels or services. An association may be identified in the grant or may be known a priori in the WTRU, e.g., based on characteristics of the grant with respect to PHY layer parameters or Data Block sizes signaled to the WTRU prior to the grant. A WTRU may (e.g., autonomously) determine a URLLC grant. For example, a WTRU may autonomously determine that a grant may (e.g., should or must) be used to transport data of one or more particular flows, logical channels or services. A WTRU may restrict (e.g., only) Data Blocks associated with those flows/logical channels/services to be selected and included in the transport block. A difference between TB size and grant size may be minimized. For example, a WTRU may select available Data Blocks so that the difference between the grant size and the TB size may be minimized. A combination of available Data Blocks for transmission that may result in minimization of the difference may be employed. Selection criteria may be utilized by a WTRU, for example, regardless of whether it creates fixed sized Data Blocks or whether the Data Blocks are sized dynamically (e.g., as the grant arrives). A Data Block ACK/NACK may be provided. A WTRU may be configured to transmit transport blocks containing one or more data blocks, e.g., each with its own CRC, referred to as a data block CRC. A transport block may carry its own CRC, which may be referred to as a TB CRC. An encoder may be configured to insert smaller length CRCs into a (e.g., each) code block, for example, to enable power saving via early detection of decoding failure. A transport block (TB) NACK may occur, for example, when a pre-configured ratio or number of data blocks is in error. A network may (e.g., correctly) receive transport blocks (e.g., all associated data blocks) without error, in which case it may be configured to transmit an ACK to the WTRU on a dedicated control channel. A network may detect an error related to a transport block, e.g., due to one or more of the data blocks being received in error. A network may be configured to determine whether to transmit ACK or NACK (e.g., a HARQ-ACK) for the associated TB transmission, for example, depending on the number of data blocks in error in the transport block. For example, a base station or TRP may be configured (e.g., via another instance in the network or via OAM) to transmit NACK when more than a specific number or ratio of data blocks are in error. For example, a TRP may be configured to transmit NACK when more than 50% of the data blocks are in error. A motivation may be to trigger a HARQ retransmission (e.g., only) when it is expected (e.g., statistically) that there may be HARQ combining gain. Otherwise, it may be more advantageous to retransmit (e.g., only) the data blocks in error, for example, at the expense of additional feedback signaling or additional delay (e.g., by letting the RLC or ARQ entity handle the error case). A WTRU may (e.g., also or alternatively) be configured (e.g., as a special case) to transmit a HARQ-NACK when (e.g., only) one data block is in error. This special case may represent an ACK or NACK on the whole of a TB.
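A minimal Python sketch of the ratio-based TB feedback decision described above follows, assuming the illustrative 50% threshold mentioned in the example:

def harq_feedback(block_crc_ok: list, nack_ratio: float = 0.5) -> str:
    # Return "NACK" (triggering a HARQ retransmission of the whole TB) when
    # more than `nack_ratio` of the data blocks failed their CRC; otherwise
    # "ACK", leaving residual data block errors to per-block retransmission
    # or the RLC/ARQ entity.  The threshold may be configured, e.g., via OAM.
    errors = block_crc_ok.count(False)
    return "NACK" if errors > nack_ratio * len(block_crc_ok) else "ACK"

assert harq_feedback([True, False, False, False]) == "NACK"  # 75% in error
assert harq_feedback([True, True, False, True]) == "ACK"     # 25% in error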
One or more example procedures described herein may be executed or applied in part or in full on a network or a WTRU, for example, when a network transmits multiple data blocks to a WTRU. A WTRU may be configured with a ratio or a number of data blocks in error above which it transmits a HARQ-NACK. A fast aggregated data block status report may be provided for ultra-low latency retransmissions. For example, a base station may be configured to provide fast status report feedback for data blocks to trigger data block retransmissions, which may be new transmissions from the HARQ perspective. The size of the feedback may be variable, for example, given that the number of data blocks may vary. Feedback may be aggregated across multiple TTIs. An aggregated data block Ack/Nack message may consist of one or more Ack/Nack fields (e.g., 1-bit fields). A (e.g., each) field may correspond to a (e.g., one) data block in an associated uplink transmission. An aggregated data block Ack/Nack message may be transmitted by a TRP, e.g., over a predefined dedicated resource. A WTRU may determine the size of an aggregated data block Ack/Nack message, for example, based on an associated uplink grant (e.g., with an implicit time association between UL grant and DL feedback). In an (e.g., another) example, an aggregated data block Ack/Nack message may be transmitted by a TRP over a set of resources associated with the resources of the associated UL transmission. In an example, a TRP may schedule an aggregated data block Ack/Nack message along with a UL grant. For example, a TRP may indicate resources for an aggregated data block Ack/Nack message that may occur at a later time. In an example, a WTRU may be configured to transmit an aggregated data block Ack/Nack message (e.g., only) when scheduled by a TRP. In an example, a similar approach described for uplink may be applicable to downlink. A TRP may be configured to transmit multiple data blocks in a transport block. A WTRU may be configured to transmit an aggregated data block Ack/Nack message. A WTRU may be scheduled with resources that may be used or needed for an aggregated data block Ack/Nack message in the DCI associated with the corresponding transmission. A WTRU may receive the aggregated data block Ack/Nack message grant and transmit the message on the associated resources. A WTRU may be configured to not transmit the aggregated data block Ack/Nack message, for example, when not scheduled on the DCI. A network may (e.g., alternatively) configure a set of resources dedicated for transmission of aggregated data block Ack/Nack. In an (e.g., another) example, a WTRU may be configured to transmit an aggregated data block Ack/Nack message over L1, for example, when no data is being transmitted, or (e.g., when there is data being transmitted) the WTRU may be configured to transmit the aggregated data block Ack/Nack in a control message along with the data (e.g., in a MAC header). The size of an aggregated data block Ack/Nack message may be variable from TTI to TTI, although the size may be known to the network, e.g., due to the scheduling grant. A WTRU may be configured to transmit an appropriate format for an aggregated data block Ack/Nack message, e.g., following an associated DCI.
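By way of illustration (Python), an aggregated data block Ack/Nack message may be realized as a bitmap whose length follows from the associated grant; the little-endian packing format here is a hypothetical choice, not a specified encoding:

def build_ack_nack_bitmap(block_crc_ok: list) -> bytes:
    # Pack per-data-block ACK(1)/NACK(0) flags into a byte string. The
    # receiver knows the field count from the associated grant, so the
    # message size may vary from TTI to TTI without a length field.
    bits = 0
    for i, ok in enumerate(block_crc_ok):
        if ok:
            bits |= 1 << i
    return bits.to_bytes((len(block_crc_ok) + 7) // 8, "little")

def parse_ack_nack_bitmap(msg: bytes, n_blocks: int) -> list:
    bits = int.from_bytes(msg, "little")
    return [bool(bits >> i & 1) for i in range(n_blocks)]

status = [True, False, True, True]
msg = build_ack_nack_bitmap(status)
assert parse_ack_nack_bitmap(msg, len(status)) == status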
Flexible grant sizes may be permitted, e.g., while accommodating a PDU assembled prior to reception of the grant. For example, a WTRU that performs MAC PDU assembly prior to the assignment of a grant may flexibly employ a resulting grant or grants to allow it to be tailored to the assembled MAC PDU. A WTRU may (e.g., autonomously) determine a number of grants or the size of each grant needed to transmit an assembled MAC PDU. Grants may span multiple consecutive TTIs. For example, a WTRU may be assigned a grant that spans multiple TTIs, multiple subframes, multiple frequency blocks or a combination thereof. Grants may be defined, for example, so that a portion of a grant on a given TTI, subframe, frequency block or the like may be a unit portion of the overall grant. A WTRU may utilize a subset or multiple units of a grant and may provide an indication to the network if and when it has completed use of a grant, e.g., to permit a network to determine the overall size of the transport block transmitted. In an example, a WTRU may receive a grant for x resource blocks that may occur repetitively over y consecutive TTIs. The x resource blocks may be the same in each of the y consecutive TTIs. The x resource blocks may (e.g., alternatively) change from one TTI to the next, e.g., in order to provide diversity in frequency. The x resource blocks may change from one TTI to the next, for example, according to one or more of the following: (i) a fixed rule known by the WTRU (e.g., [resBlock X+m] mod BW); (ii) a rule indicated in the grant itself; (iii) a rule defined using broadcast or dedicated signaling prior to the grant and/or (iv) a rule specific to the cell or TRP to which the WTRU is connected, which may potentially be provided by the access table or similar system information specific to the system signature. In an example, the value of y may not be defined and a grant may last indefinitely until indicated by the WTRU. A WTRU may, e.g., upon reception of a grant, perform PHY layer encoding and modulation of an assembled MAC PDU that is ready for transmission, according to the modulation and encoding provided in the grant. A WTRU may receive a (e.g., single) modulation and encoding to be utilized across the entire grant. A WTRU may (e.g., alternatively) receive different encoding or modulation parameters to use for each TTI associated with a grant. A WTRU may, e.g., prior to starting an encoding process, insert padding or additional redundancy, control information or data in the MAC PDU, e.g., to ensure that the resulting encoded and modulated PDU (e.g., fully) occupies an integer number of grant units (e.g., the resources granted in M consecutive TTIs).
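The fixed hopping rule [resBlock X+m] mod BW mentioned above may be sketched as follows (Python); the numeric values are illustrative:

def hopped_resource_blocks(first_rbs, n_ttis, hop_m, bw_rbs):
    # Apply the fixed rule [resBlock + m] mod BW per TTI: `first_rbs` are
    # the resource blocks granted in the first TTI, and the same rule
    # (known a priori by the WTRU) yields the blocks for each later TTI.
    return [[(rb + tti * hop_m) % bw_rbs for rb in first_rbs]
            for tti in range(n_ttis)]

# A grant of RBs {10, 11} repeated over 4 TTIs with m=25 in a 100-RB band:
print(hopped_resource_blocks([10, 11], n_ttis=4, hop_m=25, bw_rbs=100))
# [[10, 11], [35, 36], [60, 61], [85, 86]]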
A WTRU may indicate to a network a termination/size of a transport block (TB). For example, a WTRU may, e.g., at any time during or following the processing of a grant, indicate to the network the number of consecutive TTIs it may (e.g., will) utilize and (e.g., therefore) the termination of the transport block, e.g., to inform the network of the size of the transport block transmitted. A WTRU may indicate the termination of the transport block to the network, for example, using one of the following procedures: (i) a WTRU may indicate the number of TTIs utilized to the network using PHY signaling, such as, but not limited to, PUCCH, SRS-like, RACH-like or similar signaling; (ii) a WTRU may indicate the number of TTIs utilized to the network using a MAC CE provided as part of the transport block; (iii) a WTRU may transmit, e.g., in part of the resources of the last TTI, a special signal indicating the end of transmission and/or (iv) a WTRU may perform padding or subdivision of a MAC PDU into blocks at the PHY layer such that one or more block CRCs may have a CRC value indicating the number of TTIs utilized to transmit the transport block. A WTRU may decide to combine separate grants to transmit a single TB. For example, a WTRU may select two or more UL grants to use in transmitting a single TB. A WTRU, e.g., having received multiple grants in the same subframe or TTI, may decide to utilize those grants in combination to transmit a single TB. A WTRU, e.g., having been provided separate grants with potentially different transmission parameters (MCS, coding, power, etc.), may select the set of parameters associated with one of the grants, e.g., in order to perform modulation and coding on the entire TB, for example, provided that it allows the TB to be transmitted with the entire set of resources. For example, a WTRU may select the grant that results in the least overall data bits transmitted, e.g., to allow it to transmit the TB in the associated resources. A WTRU may include control (e.g., a MAC CE, which may include buffer status), padding and/or additional coding, for example, when a resulting encoded TB does not fully occupy the entire combination of resources with the selections made by the WTRU. A WTRU may not need to indicate the selected transmission parameters used to perform the transmission. A WTRU may (e.g., alternatively) signal selected transmission parameters using PHY signaling. A WTRU may indicate selected transmission parameters, for example, by transmission of an index that may refer to the grant chosen for the transmission parameters. An association between an indicated index and a grant may be defined, for example, using static rules. For example, the lowest index may be associated with the grant referencing resources in the lowest frequency range. A WTRU may (e.g., alternatively) provide in PHY layer signaling a property associated with the grant itself. For example, a WTRU may provide the modulation index of the grant whose transmission parameters may (e.g., will) be used to transmit the entire transport block.
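A minimal Python sketch of selecting, among combined grants, the parameter set that minimizes the overall bits transmitted while still letting the TB fit in the combined resources follows; the grant representation (name, resource elements, bits per resource element) and the values are illustrative assumptions:

def select_combined_grant_params(grants, tb_bits):
    # Among the per-grant parameter sets, keep those whose spectral
    # efficiency lets the TB fit in the *combined* resources, then pick
    # the one that results in the fewest total bits transmitted.
    total_res = sum(g[1] for g in grants)
    feasible = [g for g in grants if g[2] * total_res >= tb_bits]
    if not feasible:
        return None  # TB cannot be carried even with the best parameters
    return min(feasible, key=lambda g: g[2] * total_res)

grants = [("grant-A", 600, 2.0), ("grant-B", 600, 4.0)]
print(select_combined_grant_params(grants, tb_bits=2000))
# ('grant-A', 600, 2.0) -- fits the TB while transmitting the fewest bits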
A WTRU may decide to combine resources for initial transmission and retransmission. For example, a WTRU may combine the resources allocated to it for initial transmission and retransmission of a TB, e.g., in order to transmit a single TB instead of multiple TBs. A WTRU may be provided, e.g., explicitly or implicitly, with resources for eventual retransmission (e.g., in case of failed transmission). A WTRU may (e.g., when a TB occupies more than the resources provided for initial transmission) indicate this to the network, e.g., using one or more of the procedures described herein for UL indication. A WTRU may encode an entire transport block according to the modulation and coding provided by a grant. A WTRU may transmit a portion of a transport block on the resources for an initial transmission. A WTRU may (e.g., at the time resources for retransmission become available) transmit the remainder of the TB on those resources. This may be repeated multiple times (e.g., the number of retransmissions allowed for an initial UL transmission) until the TB has been completely transmitted. A WTRU may perform retransmission of the TB on a new resource or set of resources scheduled by the network, for example, when a transmission of a TB over multiple resources associated with a UL grant and retransmissions has failed. A WTRU may retransmit an entire TB on a single grant provided by the network, e.g., where the size of the grant may be tailored to the TB size. A WTRU may send an indication to allocate more resources to complete a TB transmission. For example, a WTRU may transmit a portion of a TB in a grant provided by a network and may provide an indication to the network that the entire transport block was not transmitted. An indication may (e.g., further) provide the remaining size of the TB. A WTRU may perform transmission of the remainder of a TB when the network provides a grant. A grant may be (e.g., specifically) dedicated to addressing the remaining data associated with the transport block. Transport blocks may be generated early. In an example, a WTRU may be allowed to generate one or more transport blocks (or MAC PDUs) prior to the reception of signaling allowing transmission of the one or more transport blocks in specific resources, where the signaling may comprise a grant received from downlink control information. Pre-generation may be feasible, for example, when used in conjunction with variable transmission durations. In an example, one or more applicable transmission parameters for physical layer processing of the MAC PDU may be determined, e.g., at the time of creation of the MAC PDU (e.g., prior to reception of the grant). For example, a WTRU may determine a coding scheme and/or coding rate applicable to a pre-generated MAC PDU of a given transport channel, for example, based on an indication received from physical layer, MAC or RRC signaling, prior to reception of the grant. One or more remaining applicable transmission parameters for physical layer processing may be signaled, e.g., as part of the grant. For example, a WTRU may determine a modulation scheme (e.g., QPSK or 16-QAM) based on information received from the grant. For example, a grant may (e.g., explicitly) indicate a modulation scheme. A grant may (e.g., alternatively) indicate a duration and/or frequency allocation for a transmission. A WTRU may implicitly derive the modulation scheme that may (e.g., needs to) be applied to accommodate (e.g., all) coded bits in the indicated duration and/or frequency allocation. A WTRU may determine the size of a MAC PDU, for example, according to one or more of the following solutions: (i) from an explicit indication and/or (ii) from a target duration and/or a recently used set of transmission parameters. In an example, the size may be determined from an explicit indication from physical layer, MAC or RRC signaling, e.g., potentially for each type of service or transport channel. For example, a WTRU may be signaled a MAC PDU size of 3000 bits for a first transport channel and a MAC PDU size of 10000 bits for a second transport channel.
In an example, a WTRU may determine the size of a MAC PDU, for example, based on one or more of the following: (i) a target duration for the transmission and/or (ii) a required duration of a transmission carrying a MAC PDU for an assumed set of transmission parameters, such as a frequency allocation, a modulation scheme, a coding scheme and/or a number of spatial layers on which the MAC PDU is mapped. One or more of the assumed set of transmission parameters may be determined, for example, based on one or more of the following: (i) a recent or the latest transmission or latest initial HARQ transmission that took place for a MAC PDU of the corresponding transport channel (e.g., modulation and coding); (ii) a currently applicable transmission parameter for physical layer processing of the MAC PDU (e.g., a coding scheme and/or rate) and/or (iii) an explicit indication from physical layer, MAC or RRC signaling (e.g., a frequency allocation or a number of subcarriers or resource blocks that the WTRU may assume is signaled). A target duration for a transmission may be determined, for example, from an explicit indication from physical layer, MAC or RRC signaling. A target duration may be provided for each type of transport channel. A WTRU may set the size of a MAC PDU, for example, so that the duration of transmission for the MAC PDU may (e.g., would) match or approximately match a target duration with the assumed set of transmission parameters. This approach may ensure that the (e.g., required or maximum) duration for the transmission of a MAC PDU remains relatively close to the target despite the fact that other transmission parameters that will be applicable to the transmission of the MAC PDU may differ from the assumed set of transmission parameters, e.g., due to a change of radio conditions. Conditions may be provided or otherwise known to pre-generate additional MAC PDUs. For example, a WTRU may (e.g., only) pre-generate one or more new MAC PDUs when one or more conditions are satisfied, such as one or more of the following: (i) the number of outstanding pre-generated MAC PDUs does not exceed a first threshold, where a threshold may, for example, be pre-defined or obtained from physical layer, MAC or RRC signaling and/or (ii) the amount of data in outstanding pre-generated MAC PDUs (which may include the one or more new MAC PDUs to be pre-generated) does not exceed a second threshold. In an example, a second threshold may be pre-defined or obtained from physical layer, MAC or RRC signaling. A threshold may (e.g., alternatively) be based on the amount of data that may be transmitted within a target duration for an assumed set of transmission parameters. An assumed set of transmission parameters may be obtained, for example, according to one or more techniques described herein. In an example, a WTRU may pre-generate new MAC PDUs, for example, when the total duration required for the transmission of (e.g., all) outstanding pre-generated MAC PDUs does not exceed a threshold, such as 5 ms. A target duration may be, for example, pre-defined or obtained from physical layer, MAC or RRC signaling.
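The sizing of a MAC PDU to match a target duration under an assumed set of transmission parameters may be sketched as follows (Python); the numerology values are illustrative, and reference-signal and control overhead are ignored for brevity:

def pdu_size_for_target_duration(target_ms, symbols_per_ms, n_subcarriers,
                                 bits_per_symbol, code_rate, n_layers=1):
    # Resource elements available per ms under the assumed parameters,
    # times the information bits each carries, gives the PDU size whose
    # transmission roughly matches the target duration.
    res_per_ms = symbols_per_ms * n_subcarriers * n_layers
    info_bits = target_ms * res_per_ms * bits_per_symbol * code_rate
    return int(info_bits)

# e.g., 0.25 ms target, 14 symbols/ms, 12 subcarriers x 25 RBs, QPSK, rate 1/3
size = pdu_size_for_target_duration(0.25, 14, 12 * 25, 2, 1 / 3)
print(size)  # 700 bits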
A pre-generated MAC PDU may be transmitted. For example, a WTRU may receive signaling (e.g., a grant) allowing transmission of one or more MAC PDUs in a resource. A transmission may be conditional, e.g., based on a clear channel assessment condition, for example, when the WTRU operates in an unlicensed band. The one or more MAC PDUs may have been pre-generated, for example, according to one or more techniques described herein. A WTRU may receive one or more transmission parameters, such as one or more of a frequency allocation, a modulation scheme, a coding scheme and/or rate or a number of spatial layers. One or more parameters may have been provided prior to the signaling allowing transmission of the MAC PDU. A WTRU may determine a required duration of a transmission, for example, so that a sufficient number of resource elements may be available for mapping modulation symbols. The determination may take into account the one or more parameters and/or any (e.g., required) reference signal and/or physical control information to be multiplexed with higher layer data. A WTRU may perform the transmission accordingly. In an example, a WTRU may transmit control information to assist a receiver in the determination of a transmission duration. For example, a WTRU may provide an indication of the duration expressed as a number of time units (e.g., symbols or subframes) encoded in uplink or sidelink control information, such as in a scheduling assignment. In an (e.g., another) example, a WTRU may transmit an indication in a (e.g., the last or after the last) symbol of the transmission indicating that the transmission does not continue. In an (e.g., another) example, a WTRU may multiplex control information in pre-defined resources occurring in (e.g., every) time unit (e.g., in every subframe) indicating whether the transmission continues in the subsequent time unit. A WTRU may receive a maximum transmission duration, for example, as part of a grant or from previous signaling. A WTRU may, for example, when a transmission duration for a MAC PDU may (e.g., would) exceed a maximum, perform one or more of the following: (i) discard the MAC PDU; (ii) transmit an indication that the maximum transmission duration is too small for the transmission of the MAC PDU, where an indication may, for example, be encoded as physical control information or as MAC signaling in a MAC PDU created for this purpose and/or (iii) modify at least one transmission parameter, such as a coding scheme, coding rate or modulation scheme, e.g., compared to the signaled transmission parameter, for example, so that the total required duration may not exceed the maximum. In an example, a WTRU may employ a higher order modulation (e.g., 16-QAM instead of QPSK) and/or a higher (e.g., effective) coding rate (e.g., ¾ instead of ⅓), which may be implemented, for example, by puncturing a number of coded bits. A WTRU may transmit an indication of whether a modification has been applied and/or of the modified value, for example, in uplink or sidelink control information.
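A minimal Python sketch of checking a required duration against a maximum and stepping up the modulation order when the maximum would be exceeded, as described above, follows; the modulation orders and numeric values are illustrative assumptions:

import math

def fit_within_max_duration(coded_bits, re_per_symbol, max_symbols,
                            bits_per_symbol=2):
    # Compute the required duration in symbols; if it exceeds the maximum,
    # step up the modulation order (e.g., QPSK -> 16-QAM -> 64-QAM) until
    # the transmission fits.  Returns (bits per symbol used, symbols
    # needed), or None if no option fits (in which case the WTRU could
    # instead indicate that the maximum duration is too small).
    for bps in (bits_per_symbol, 4, 6):
        needed = math.ceil(coded_bits / (re_per_symbol * bps))
        if needed <= max_symbols:
            return bps, needed
    return None

print(fit_within_max_duration(9600, 300, max_symbols=14))  # (4, 8)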
A minimum guaranteed TBS may be provided. For example, a WTRU may be configured with a minimum guaranteed TBS. A configuration may be received, for example, by RRC or by MAC CE. A configuration may be applicable to a (e.g., specific) data unit, for example, based on the type of data or the logical association (e.g., with a data flow, a LCH, a LCG, a corresponding SOM, a corresponding service, groups thereof or the like). A WTRU may be configured, for example, to permit it to perform one or more processing steps, e.g., for the creation of a MAC PDU ahead of the reception of downlink control information and/or ahead of the final determination of the TBS for an uplink transmission. MAC processing may occur ahead of TBS information. For example, a WTRU may perform a number of MAC processing steps ahead of the final determination of (e.g., all) transmission parameters, such as the TBS for a transmission and/or for an applicable data unit (e.g., a MAC SDU). A MAC PDU may include a segment of a data unit (e.g., by RLC segmentation or by MAC segmentation). A MAC PDU may be assembled with a single TBS value, padding and/or concatenation. For example, a WTRU may assemble a MAC PDU using a configured minimum guaranteed TBS (TBSmin). A single value may be configured. A WTRU may (e.g., alternatively) consider a single value as valid out of a plurality of values, e.g., based on reception of control signaling (DCI, MAC CE) that may indicate a valid value, for example, based on previously reported QoS parameters such as minimum PDU size, based on reported channel quality information or the like. A WTRU may (e.g., subsequently) determine a final value of the TBS (TBSfinal) for a transmission, for example, from the reception of a DCI that grants uplink resources. A WTRU may assemble a MAC PDU for each value (e.g., when there are multiple values) of a configured minimum guaranteed TBS (TBSmin). The WTRU may (e.g., subsequently) determine the final value of the TBS (TBSfinal) for the transmission, for example, from the reception of a DCI that grants uplink resources. TBS may be adapted over time. For example, a WTRU may determine a final set of parameters for a transmission that uses minimum TBS guarantees associated with a single transport block (TBSfinal), e.g., from the reception of downlink control signaling. A received DCI may explicitly indicate a specific data unit to be served with a transmission. A received DCI may indicate a processing time applicable to the transmission, e.g., so that a WTRU may be instructed to perform a transmission at time n+x μsec/ms/subframes or some other unit in time for a DCI received at time n. In an example, a WTRU may determine the shortest duration of the transmission of a TB, for example, based on the signaled MCS, set of PRBs, etc., e.g., using the smallest configured TBS value that is larger than the TBS resulting from the information included in the DCI signaling. In an example, a WTRU may determine the shortest duration matching in time a framing boundary (e.g., matching the end of a DL transmission portion of a subframe) that may fit the smallest configured TBS value that is larger than the TBS resulting from the information included in the DCI signaling. For example, this may be conceptually similar to a bundling operation in time based on an implicit (e.g., signaled TBS smaller than minimum TBS) or explicit (e.g., indicated in the DCI) indication. In an example, a WTRU may perform a multi-TTI TB transmission (e.g., for a single MAC PDU for an applicable data unit) or TTI bundling (e.g., for multiple segments, such as RLC or MAC, as separate MAC PDUs for an applicable data unit) for the transmission of pre-assembled MAC PDU(s). The number of TTIs may be an integer determined based on a guaranteed/configured TBS value (e.g., TBSmin) and the TBS value resulting from the information in a DCI.
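The multi-TTI determination described above may be sketched as follows (Python); the numeric values are illustrative:

import math

def bundled_ttis(tbs_min: int, tbs_from_dci: int) -> int:
    # Integer number of TTIs for multi-TTI TB transmission / TTI bundling:
    # enough TTIs, at tbs_from_dci bits each (per the DCI), to carry a MAC
    # PDU pre-assembled for the guaranteed size TBSmin.
    return math.ceil(tbs_min / tbs_from_dci)

# e.g., a 3000-bit pre-assembled PDU over resources yielding 1000 bits/TTI:
print(bundled_ttis(tbs_min=3000, tbs_from_dci=1000))  # 3 TTIs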
A WTRU may (e.g., when the size of TBSfinal is larger than TBSmin) add padding to the pre-assembled MAC PDU, e.g., so that the PDU size matches the value TBSfinal. Padding may include one or more MAC CEs, such as a BSR or a padding BSR. A WTRU may (e.g., alternatively) concatenate further data (e.g., with or without padding information) to the pre-assembled MAC PDU such that the PDU size matches the value TBSfinal. A WTRU may select a pre-assembled PDU that may be associated with a different (e.g., specific) data unit, if any, for example, when the size of TBSfinal (with or without multi-TTI transmission) may be smaller than a pre-assembled PDU with an applicable value for TBSmin. A WTRU may (e.g., alternatively) assemble a new MAC PDU that matches the TBS of the transmission, or the WTRU may perform the transmission of padding only. A WTRU may include in an uplink transmission an indication of a desired TBS and/or an increase/decrease indication thereof, such as discrete values within a set of configured values. An indication may be applicable to a specific type of data unit and/or configuration. An indication may be included in a BSR or provided using bits in a MAC PDU header representing an index to a preconfigured set of values. An indication may be a request for an increase in processing time. A WTRU may use information regarding a minimum guaranteed TBS for a determination of the LCH (or equivalent) from which data may be served for the assembly of a MAC PDU. One or more other aspects, such as latency, time to live, PBR, priorities or the like, may be taken into consideration, for example, according to one or more procedures described herein. Any of the above procedures may be applicable, for example, when a WTRU performs assembly of MAC PDUs when a grant is decoded and (e.g., all) information is known, such as when a WTRU is configured to serve data from a specific LCH (or equivalent) based on a minimum configured data unit size. A network (NW) may (e.g., using a procedure described herein) configure one or more minimum TBS values. A network may determine whether padding is included in a received transmission, e.g., to determine whether the WTRU performed pre-assembly of MAC PDUs and/or concatenation. A network may vary the TTI duration of a transmission, for example, to ensure that a minimum TBS size may be (e.g., always) available, e.g., even as the radio link and/or the HARQ operating point changes. In other words, a network may perform adaptation of a WTRU transmission for a data unit and/or a MAC SDU in time, e.g., in addition to adaptation based on MCS and/or frequency. For example, this adaptation may be useful when a network may (e.g., need to) guarantee a minimum TBS with a specific HARQ operating point for varying link adaptation needs and/or for varying link quality. In an example, a DCI may indicate a longer processing time for a WTRU, for example, following the reception of an empty MAC PDU (e.g., containing only padding) and/or the reception of an indication that the current TBS is insufficient. Transmission parameters may be selected, for example, using blind decoding or a DCI reception procedure. In an example, a WTRU may determine one or more parameters associated with a transmission, e.g., based on the decoding of a control channel. A WTRU may perform a determination, for example, based on parameters used for a decoding attempt deemed successful in its outcome. A WTRU may determine success, for example, based on successful CRC verification for the received DCI. A DCI may indicate an uplink and/or a downlink transmission. One or more parameters may correspond to a set of parameters. A WTRU may use a procedure to determine one or more of a plurality of sets of parameters.
A set may be a configuration aspect of a WTRU. For example, one or more (e.g., a set of) parameters may correspond to a transmission characterized by higher reliability, lower latency, a best-effort type of transmission or to another type of service, e.g., paging, system information reception, broadcast transmission, or the like. A set may correspond to a SOM. Decoding a control channel may correspond to a blind decoding attempt. A WTRU may perform one or more decoding attempts, e.g., each attempt using a different set of decoding aspects (e.g., parameter(s) and/or procedure). A decoding aspect may include the set and/or the amount of physical resources used for a channel (e.g., control channel elements), the Aggregation Level (AL), the size of the CRC (e.g., 8 bits, 16 bits and/or distinguished by using different polynomials), the associated search space, the identity of the corresponding control channel or a combination thereof. The robustness of a DCI may be indicative of properties of the associated transmission. For example, a WTRU may determine that a received DCI has been successfully decoded according to one of a plurality of robustness/reliability levels. A WTRU may determine an appropriate value for one or more parameters associated with a transmission, for example, so that a similar reliability level may be assumed for the transmission. For example, a network may determine an explicit/implicit indication to use, e.g., based on applicable link adaptation mechanisms such as uplink control information and/or channel state indications received from the WTRU. A WTRU may determine a minimum level of robustness and/or QoS level applicable to a transmission, for example, based on a determined set of one or more parameters. For example, a WTRU may determine an applicable SOM. The WTRU may (e.g., further) determine data applicable to a UL transmission based on the associated QoS level. A WTRU may determine HARQ processing and/or feedback. For example, a WTRU may determine the type of HARQ processing to apply to a transmission (e.g., in case of an initial transmission, uplink or downlink) and/or the type of HARQ feedback (e.g., required) for a transmission, e.g., based on a determined set of one or more parameters. For example, a WTRU may (e.g., for reception of a DL transmission) determine, e.g., based on the identified set of parameters, that HARQ feedback may be expected in (or within) a certain time interval using a specific transmission procedure (e.g., an applicable uplink control channel) or that feedback should not be automatically generated (e.g., only upon request). For example, a WTRU may determine an applicable SOM and may perform HARQ processing and/or feedback accordingly. Robustness may be signaled in a grant. For example, a WTRU may receive signaling (e.g., implicitly or explicitly) in a DCI to indicate that transmissions associated with a specific grant may be performed based on a different set of parameters associated with a flow, service type, SOM, or the like. For example, a WTRU may receive (e.g., in a grant) an indication that a corresponding transmission may be used for transmission of URLLC data (if any) and that the transmission parameters may be modified and/or selected accordingly, e.g., to allow for more robust transmission and/or lower latency. For example, a WTRU may perform a similar determination for a DCI that schedules a downlink transmission, such that it may appropriately determine parameters to decode an associated transmission.
A WTRU may determine a robustness level of a received grant, for example, based on decoding a DCI. The WTRU may determine such a robustness level, for example, according to one or more of the following: (i) characteristic(s) of the decoding attempt leading to success, such as one or more of the AL associated with the successfully decoded DCI, the size of the CRC associated with the successfully decoded DCI, the CRC polynomial associated with the successfully decoded DCI, the CCEs (or the initial CCE) associated with the successfully decoded DCI, the Search Space (or start thereof) associated with the successfully decoded DCI and/or the associated control channel type, resource in time/frequency and/or identity; (ii) the DCI format received; and/or (iii) an explicit indication in a field in the DCI format. A WTRU may be configured for association with a robustness level and/or set of parameters. A robustness level may be determined from an aggregation level or CRC. For example, a WTRU may determine a robustness level based on the aggregation level or search space that resulted in a successful decoding of a DCI. For example, a robustness level may be determined as a first value when an aggregation level is determined to be 1, 2, or 4 (e.g., an aggressive AL for a certain reliability operating point) and as a second value when an aggregation level is determined to be 8 or 16 (e.g., a conservative AL for a higher reliability operating point). In an example, a WTRU may determine a robustness level based on the size of the CRC for which decoding of the DCI succeeded. For example, a CRC of 8 bits may indicate the use of a normal robustness level (e.g., a first set of transmission parameters) while a CRC of 16 bits or 32 bits may indicate the use of a higher robustness level (e.g., a second set of transmission parameters). Transmission parameters may be selected based on robustness level. For example, a WTRU may select transmission parameters to be used for a UL transmission associated with a grant depending on a signaled robustness level. A selection may, for example, be based on pre-defined rules or may be configured by the network (e.g., in RRC signaling). A selection may (e.g., also) be combined with other procedures, such as one or more described herein. A selection may, for example, be based on one or more of the following: (i) the physical resources being utilized for the uplink transmission (e.g., when the determination of the set of applicable resources for the transmission may itself be independent from such determination); (ii) a type of data being transmitted (e.g., when the selection of data may itself be independent from such determination); (iii) a current state of the WTRU (e.g., current power headroom); and/or (iv) a current state of the data (e.g., the TTL of the data). In an example, a WTRU may determine a different value for parameters, for example, based on whether the robustness level indicates normal or reliable transmission.
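A minimal sketch of the mapping just described, using the illustrative AL and CRC values above; the combination rule and the contents of the parameter sets are assumptions for illustration, not specified behavior:

```python
def robustness_level(aggregation_level: int, crc_bits: int) -> str:
    # AL 1/2/4: aggressive operating point; AL 8/16: conservative.
    level_from_al = "normal" if aggregation_level in (1, 2, 4) else "high"
    # 8-bit CRC: normal robustness; 16-bit or 32-bit CRC: higher robustness.
    level_from_crc = "normal" if crc_bits == 8 else "high"
    # Take the stronger indication (an assumption; a WTRU could be
    # configured with a different combination rule).
    return "high" if "high" in (level_from_al, level_from_crc) else "normal"

# Hypothetical parameter sets selected according to the robustness level.
TX_PARAMS = {
    "normal": {"mcs": 16, "tti_ms": 1.0,  "harq_feedback": "async"},
    "high":   {"mcs": 4,  "tti_ms": 0.25, "harq_feedback": "within_interval"},
}

params = TX_PARAMS[robustness_level(aggregation_level=8, crc_bits=16)]
```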
A WTRU may determine a different value for one or more parameters, such as one or more of: (i) MCS; (ii) a set of applicable PRBs; (iii) applicable PRBs within a determined set of applicable PRBs; (iv) HARQ Process Type; (v) an applicable procedure to generate and/or transmit (e.g., when DCI schedules a DL transmission) or receive (e.g., when DCI schedules a UL transmission) HARQ feedback; (vi) power boosting and/or prioritization of power (e.g., when DCI schedules a UL transmission) and/or (vii) applicable framing and/or frame structure (e.g., TTI duration). A WTRU may perform reception/transmission of data according to the determined set of parameters. Systems, methods, and instrumentalities (e.g., aspects of entities, interfaces and procedures in a WTRU and/or network layers L1, L2, L3) have been disclosed for low latency MAC PDU assembly in wireless systems, such as 5gFLEX. Latency may be reduced, for example, by WTRU determination of network transmission parameters and signaling prior to a transmission grant. A WTRU may receive an MCS, resource range, etc. prior to a grant, e.g., for use in future grants. Data blocks may be incrementally created/encoded prior to a grant. Data units may be segmented, assembled and multiplexed, for example, based on data block sizes that allow MAC and RLC processing prior to a grant. Flexible grant sizes may be provided for early generation of transport blocks before a grant. A minimum guaranteed TBS may be signaled to permit early generation of a MAC PDU. Transmission parameters may be selected prior to a grant, for example, using blind decoding or a DCI reception procedure. The processes and instrumentalities described herein may apply in any combination, may apply to other wireless technologies, and to other services. A WTRU may refer to an identity of the physical device, or to the user's identity such as subscription-related identities, e.g., MSISDN, SIP URI, etc. A WTRU may refer to application-based identities, e.g., user names that may be used per application. Each of the computing systems described herein may have one or more computer processors having memory that are configured with executable instructions or hardware for accomplishing the functions described herein, including determining the parameters described herein and sending and receiving messages between entities (e.g., WTRU and network) to accomplish the described functions. The processes described above may be implemented in a computer program, software, and/or firmware incorporated in a computer-readable medium for execution by a computer and/or processor. Examples of computer-readable media include, but are not limited to, electronic signals (transmitted over wired and/or wireless connections) and/or computer-readable storage media. Examples of computer-readable storage media include, but are not limited to, a read only memory (ROM), a random access memory (RAM), a register, cache memory, semiconductor memory devices, magnetic media such as, but not limited to, internal hard disks and removable disks, magneto-optical media, and/or optical media such as CD-ROM disks, and/or digital versatile disks (DVDs). A processor in association with software may be used to implement a radio frequency transceiver for use in a WTRU, terminal, base station, RNC, and/or any host computer.
11863303
DESCRIPTION OF EMBODIMENTS In many scenarios, a transmission path of a service flow may be a multi-hop path including a plurality of physical links. In this case, there is usually the following situation: Although the bit error rate of each physical link on the transmission path of the service flow is at an acceptable level, severe bit errors may accumulate after the service flow traverses the multi-hop transmission path. Consequently, transmission stability of the service flow is affected. For example, a transmission path of a service flow is from a cell site gateway (CSG) to a radio service gateway (RSG) through an aggregation site gateway (ASG). Neither the bit error rate of the physical link between the CSG and the ASG nor the bit error rate of the physical link between the ASG and the RSG affects transmission of the service flow. However, the accumulated bit error rate of the transmission path from the CSG to the RSG affects the transmission of the service flow. To resolve the foregoing problem, in the embodiments of this application, a network device may report, to a controller, a bit error rate at which the network device sends data traffic along a first transmission path through an egress port of the network device. The bit error rate at which the egress port sends data traffic may be considered as the bit error rate of the physical link between the network device and the next-hop network device of the network device on the first transmission path. In this manner of reporting by the network device, the controller may collect and accumulate bit error rates at which data traffic is sent through all egress ports on the first transmission path, so that the controller may switch a service flow on the first transmission path to a second transmission path when the bit error rate obtained through accumulation is excessively high. Therefore, the controller may switch a service flow transmitted on a transmission path with an excessively high accumulated bit error rate to another transmission path with a relatively low accumulated bit error rate for transmission. This can avoid using a multi-hop path with a severe accumulated bit error rate to forward a service flow, and improve transmission stability of the service flow. For example, one scenario of the embodiments of this application may use the network structure shown in FIG. 1. A network 110 that may be configured to transmit a service flow between a base station 101 and a core network device 102 includes network devices such as a cell site gateway (CSG) 103, a CSG 104, a CSG 105, an aggregation site gateway (ASG) 106, an ASG 107, a radio service gateway (RSG) 108, and an RSG 109. The base station 101 may be an evolved NodeB (eNB), a new radio (NR) base station, or the like. The core network device 102 may be a serving gateway (S-GW), a mobility management entity (MME), or the like. Each network device on the network 110 may report, to a network controller (NCE) 120, a bit error rate at which data traffic is sent through a respective egress port. For example, the ASG 106 may report, to the NCE 120, a bit error rate at which data traffic is sent through an egress port 116. The egress port 116 is used by the ASG 106 to send the data traffic to the RSG 108. For another example, the CSG 104 may report, to the NCE 120 by using the ASG 106, a bit error rate at which data traffic is sent through an egress port 113. The egress port 113 is used by the CSG 104 to send the data traffic to the ASG 106.
For another example, the RSG 108 may report, to the NCE 120, a bit error rate at which data traffic is sent through an egress port 128. The egress port 128 is used by the RSG 108 to send the data traffic to the core network device 102. In this manner of reporting by each network device on the network 110, the NCE 120 may collect and accumulate bit error rates at which all egress ports on a transmission path send data traffic, to obtain an accumulated bit error rate of the transmission path. For example, a first transmission path is a transmission path from the CSG 103 to the RSG 108, and the first transmission path passes through the CSG 103, the CSG 104, the ASG 106, and the RSG 108. The NCE 120 may collect a bit error rate at which the egress port 111 is used by the CSG 103 to send data traffic to the CSG 104, a bit error rate at which the egress port 113 is used by the CSG 104 to send data traffic to the ASG 106, and a bit error rate at which the egress port 116 is used by the ASG 106 to send data traffic to the RSG 108, and accumulate the bit error rates, to obtain an accumulated bit error rate of the first transmission path. When determining that the accumulated bit error rate of the first transmission path is greater than a bit error rate threshold, the NCE 120 may switch a service flow from the first transmission path to a second transmission path. The second transmission path is another transmission path from the CSG 103 to the RSG 108, and, for example, may be a transmission path that passes through the CSG 103, the CSG 105, the ASG 107, the RSG 109, and the RSG 108. It may be understood that the foregoing scenario is merely a scenario example provided in this embodiment of this application, and this embodiment of this application is not limited to this scenario. With reference to the accompanying drawings, the following describes in detail specific implementations of a link bit error-based processing method and apparatus in the embodiments of this application. FIG. 2 is a schematic flowchart of a link bit error-based processing method 200 according to an embodiment of this application. The method 200 may include the following steps. 201: A controller receives first link status information sent by a first network device. The first link status information includes first egress port information and a first bit error rate. The first egress port information indicates a first egress port of the first network device, and the first network device is configured to send data traffic to a next-hop network device of the first network device along a first transmission path through the first egress port. The first bit error rate indicates a bit error rate at which data traffic is sent through the first egress port. In embodiments, the first network device may detect the first bit error rate at which the first network device sends the data traffic to the next-hop network device of the first network device along the first transmission path through the first egress port, generate the first link status information based on the first bit error rate and the first egress port information that is used to indicate the first egress port, and send the first link status information to the controller, to report the first bit error rate to the controller. The first bit error rate may be considered as a bit error rate of a physical link from the first network device to a second network device on the first transmission path, and the first egress port information may be considered as an identifier of the physical link.
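For illustration only, the content of the first link status information of step 201 can be modeled as a small record; the field names below are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class LinkStatusInfo:
    egress_port: str       # identifies the physical link to the next hop
    bit_error_rate: float  # rate measured for traffic sent through that port

report = LinkStatusInfo(egress_port="111", bit_error_rate=4e-7)
```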
It may be understood that, in this manner of reporting link status information by a network device, the controller may receive a bit error rate at which any egress port on any network device is configured to send data traffic. In other words, the first network device may be any network device that is on the network and that is configured to send data traffic. The first egress port may be any egress port on the first network device, and the next-hop network device of the first network device may represent an adjacent network device to which the first network device is connected through the first egress port. For example, in the network structure example shown in FIG. 1, the first network device may be any network device on the network 110. Assuming that the first network device is the ASG 106, the first egress port may be the egress port 116, an egress port 123, or an egress port 125. If the first egress port is the egress port 116, the next-hop network device of the first network device is the RSG 108. If the first egress port is the egress port 123, the next-hop network device of the first network device is the CSG 104. If the first egress port is the egress port 125, the next-hop network device of the first network device is the ASG 107. The first bit error rate may be determined by the first network device performing bit error detection on a packet received through the first egress port. The first network device determines the error bits in the packet by performing bit error detection on the packet received through the first egress port, and then calculates the first bit error rate based on a quantity of error bits in the packet and a total quantity of bits. For example, the first bit error rate may be the proportion of the quantity of error bits in the total quantity of bits in the packet received by the first network device through the first egress port. The first network device may detect the error bits in the packet by using an error correction algorithm. For a symbol in the packet received by the first network device through the first egress port, if the symbol can be corrected by using the error correction algorithm, the bits in the symbol are determined as correct bits. If the symbol cannot be corrected by using the error correction algorithm, the bits in the symbol are counted as error bits. The error correction algorithm used to detect error bits may be, for example, a cyclic redundancy check (CRC) algorithm. Correspondingly, the symbol in the packet received by the first network device through the first egress port may be a CRC code. For example, a CRC code is generated by splicing R check bits after K-bit information. An encoding manner of the CRC code may include: representing the to-be-encoded K-bit information as a polynomial M(x), shifting M(x) to the left by R bits to obtain M(x)*x^R, dividing M(x)*x^R by a generator polynomial G(x) of R+1 bits to obtain a remainder R(x), and performing a modulo-2 addition operation on M(x)*x^R and R(x) to obtain the CRC code (a runnable sketch of this construction is given below). In this embodiment, a plurality of reporting manners may be used by the first network device to report the first link status information to the controller. In an example, the first network device may communicate with the controller, and the first link status information may be directly sent by the first network device to the controller. For example, in the network structure example shown in FIG. 1, each ASG may communicate with the NCE 120, and each RSG may communicate with the NCE 120.
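The following runnable sketch implements the CRC construction described above (modulo-2 long division of M(x)*x^R by G(x)); the K, R, and G(x) values in the example are hypothetical:

```python
def crc_encode(info_bits: int, k: int, generator: int, r: int) -> int:
    """Return the (k + r)-bit CRC code for k info bits and an (r+1)-bit G(x)."""
    dividend = info_bits << r           # M(x) * x^R
    for shift in range(k - 1, -1, -1):  # modulo-2 (XOR) long division
        if dividend & (1 << (shift + r)):
            dividend ^= generator << shift
    remainder = dividend                # R(x), at most r bits
    return (info_bits << r) | remainder  # M(x)*x^R + R(x), modulo-2 addition

# Example with hypothetical values: K = 8 information bits and
# G(x) = x^3 + x + 1 (binary 1011), so R = 3 check bits.
codeword = crc_encode(0b10110100, k=8, generator=0b1011, r=3)
# A receiver re-running the division on the codeword obtains a zero
# remainder when no bit errors occurred.
```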
As noted, each ASG and each RSG may communicate with the NCE 120. Therefore, an ASG may directly send link status information to the NCE 120, and an RSG may also send link status information to the NCE 120. In other words, if the first network device is an ASG or an RSG, the first link status information may be directly sent by the first network device to the controller. In another example, the first network device cannot communicate with the controller. In this case, the first link status information may be first sent by the first network device to a fourth network device, and then the fourth network device directly sends the first link status information to the controller. For example, in the network structure example shown in FIG. 1, each CSG cannot communicate with the NCE 120, but each ASG and each RSG may communicate with the NCE 120. Therefore, a CSG may first send link status information to an ASG or an RSG, and then the ASG or the RSG directly sends the link status information to the NCE 120. In other words, if the first network device is a CSG, the first link status information may be first sent by the first network device to the fourth network device, and then directly sent by the fourth network device to the controller. The fourth network device may be an ASG or an RSG. In still another example, when the first network device can communicate with the controller, the first link status information may alternatively be first sent by the first network device to a fourth network device, and then directly sent by the fourth network device to the controller. For example, in the network structure example shown in FIG. 1, each ASG and each RSG may communicate with the NCE 120. An ASG may first send link status information to an RSG, and then the RSG directly sends the link status information to the NCE 120. In other words, if the first network device is an ASG, the first link status information may be first sent by the first network device to the fourth network device, and then the fourth network device directly sends the first link status information to the controller. The fourth network device may be an RSG. It may be understood that the controller may use the border gateway protocol (BGP) link state (LS) to receive the first link status information. For example, the first link status information may be carried in a BGP update packet and reported to the controller. In other words, the controller may receive the BGP update packet carrying the first link status information, and read the first link status information from the BGP update packet. For example, the first link status information may be carried in a multiprotocol reachable network layer reachability information (MP REACH NLRI) field or a multiprotocol unreachable network layer reachability information (MP UNREACH NLRI) field in the BGP update packet. In an example, assuming that the first network device directly sends the first link status information to the controller, the first network device may encapsulate the first link status information into a BGP update packet, and then send the BGP update packet in which the first link status information is encapsulated to the controller. In another example, assuming that the first network device sends the first link status information to the controller by using the fourth network device, the first network device may send the first link status information to the fourth network device, and the fourth network device encapsulates the first link status information into a BGP update packet, and then sends the BGP update packet to the controller.
It may be understood that the first link status information may be carried in TLV information for transmission. For example, the first network device may use TLV information of the BGP update packet to carry the first link status information, to send the first link status information to the controller. For another example, the first network device may use type-length-value (TLV) information of an interior gateway protocol (IGP) to carry the first link status information, to send the first link status information to the fourth network device, and then the fourth network device uses TLV information of the BGP update packet to carry the first link status information, to send the first link status information to the controller. In an example, if the IGP is the intermediate system to intermediate system (ISIS) protocol, according to request for comments (RFC) 5305, a link attribute TLV may be added to the ISIS protocol. The link attribute TLV may be used to carry the first link status information, and sent from the first network device to the fourth network device. In the TLV definition example shown in FIG. 3, the link attribute TLV is a sub-TLV, the type is 19, the length is 4 octets, and the name is described as bit-error detection. In another example, if the IGP is the open shortest path first (OSPF) protocol, according to RFC 7770, a link state advertisement (LSA) TLV may be added to the OSPF protocol. The LSA TLV may be used to carry the first link status information, and sent from the first network device to the fourth network device. In the TLV definition example shown in FIG. 4, in the LSA TLV, the TLV code point is 32768, the length is 4 octets, and the name is described as bit-error detection. For another example, according to RFC 7752, a link descriptor-related TLV may be added to the BGP LS protocol. The link descriptor-related TLV may be used to carry the first link status information, and sent from the first network device or the fourth network device to the controller. In the TLV definition example shown in FIG. 5, in the link descriptor-related TLV, the TLV code point is 266, the description is bit-error detection, the ISIS TLV is 22, and the sub-TLV is 19 (a sketch of such an encoding follows below). 202: The controller determines a first accumulated bit error rate of the first transmission path based on the first link status information. The first accumulated bit error rate is a weighted sum of bit error rates of all egress ports configured to send data traffic on the first transmission path. After the controller receives the first link status information, the controller may read the first egress port information and the first bit error rate from the first link status information, and update, based on the first bit error rate, the bit error rate of the first egress port indicated by the first egress port information. In this way, the controller may update the bit error rate of the first egress port to the first bit error rate. In some embodiments, the controller may determine, based on the magnitude of the first bit error rate, whether to update the bit error rate of the first egress port to the first bit error rate or to clear the bit error rate of the first egress port to zero. In this way, when the first bit error rate is excessively low, the controller may ignore the bit error rate of the first egress port, that is, the first egress port may be considered as free of bit errors. In some embodiments, the controller may determine whether the first bit error rate is less than a second bit error rate threshold.
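A sketch of carrying a bit error rate in the link attribute sub-TLV described above (type 19, length 4 octets); how the rate is packed into the 4-octet value is not specified here, so an IEEE 754 single-precision encoding is assumed purely for illustration:

```python
import struct

SUB_TLV_BIT_ERROR_DETECTION = 19  # per the FIG. 3 definition example

def encode_ber_sub_tlv(bit_error_rate: float) -> bytes:
    value = struct.pack("!f", bit_error_rate)  # 4-octet value (assumed format)
    return struct.pack("!BB", SUB_TLV_BIT_ERROR_DETECTION, len(value)) + value

def decode_ber_sub_tlv(tlv: bytes) -> float:
    tlv_type, length = struct.unpack("!BB", tlv[:2])
    assert tlv_type == SUB_TLV_BIT_ERROR_DETECTION and length == 4
    return struct.unpack("!f", tlv[2:6])[0]

tlv = encode_ber_sub_tlv(3.2e-6)
assert abs(decode_ber_sub_tlv(tlv) - 3.2e-6) < 1e-12
```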
As mentioned, the controller may compare the first bit error rate with the second bit error rate threshold. If the first bit error rate is less than the second bit error rate threshold, the controller may clear the value of the first bit error rate to zero, that is, the controller clears the bit error rate of the first egress port to zero. In this case, the bit error rate of the first egress port is ignored. If the first bit error rate is greater than the second bit error rate threshold, the controller may skip clearing processing on the value of the first bit error rate, and update the bit error rate of the first egress port to the first bit error rate. In this case, the bit error rate of the first egress port is not ignored. After the bit error rate of the first egress port is updated, the controller determines that the first egress port is an egress port that is on the first transmission path and that is configured to send data traffic, so that the first accumulated bit error rate of the first transmission path may be calculated by using the bit error rate of the first egress port. The first accumulated bit error rate is a weighted sum of the bit error rates of all egress ports configured to send data traffic on the first transmission path. For example, in the network structure example shown in FIG. 1, it is assumed that the first transmission path is a path from the CSG 103 to the core network device 102, and the first transmission path passes through the CSG 103, the CSG 104, the ASG 106, and the RSG 108. In this case, egress ports configured to send data traffic on the first transmission path include the egress port 111 used by the CSG 103 to send data traffic to the CSG 104, the egress port 113 used by the CSG 104 to send data traffic to the ASG 106, and the egress port 116 used by the ASG 106 to send data traffic to the RSG 108. The first accumulated bit error rate of the first transmission path is a sum of the bit error rate of the egress port 111, the bit error rate of the egress port 113, and the bit error rate of the egress port 116. In an example, when the controller receives the first link status information sent by the first network device, the controller may update the bit error rate of the first egress port to the first bit error rate, and record the first bit error rate. When the first accumulated bit error rate of the first transmission path needs to be calculated, the controller may obtain, from the record, the bit error rates of all egress ports on the first transmission path including the bit error rate of the first egress port, and calculate a weighted sum, to obtain the first accumulated bit error rate of the first transmission path. In a specific example, it is assumed that a transmission path A includes an egress port A and an egress port B, and a bit error rate A of the egress port A and a bit error rate B of the egress port B have been reported to the controller. The controller records the bit error rate A of the egress port A and the bit error rate B of the egress port B. In this case, if an accumulated bit error rate of the transmission path A needs to be calculated, the controller may calculate a weighted sum of the bit error rate A and the bit error rate B as the accumulated bit error rate of the transmission path A. In another example, it is assumed that the controller has calculated and recorded a fourth accumulated bit error rate of the first transmission path before receiving the first link status information.
The fourth accumulated bit error rate is calculated by using a fourth bit error rate that is of the first egress port and that is recorded by the controller before the controller receives the first link status information. When the controller receives the first link status information sent by the first network device, the controller may update the fourth accumulated bit error rate of the first transmission path based on the first bit error rate, so that the fourth bit error rate of the first egress port in the fourth accumulated bit error rate is replaced with the first bit error rate, to obtain and record the first accumulated bit error rate of the first transmission path. In a specific example, it is assumed that a transmission path A includes an egress port A and an egress port B. In a case in which neither a bit error rate of the egress port A nor a bit error rate of the egress port B is reported to the controller, both the bit error rate of the egress port A and the bit error rate of the egress port B are considered as zero, and the controller may record an accumulated bit error rate A of the transmission path A as zero. After that, if a bit error rate A of the egress port A is reported to the controller, because the bit error rate of the egress port A in the accumulated bit error rate A is zero, the controller may add a weighted value of the bit error rate A to the accumulated bit error rate A, to obtain an accumulated bit error rate B and record it as the accumulated bit error rate of the transmission path A; that is, the accumulated bit error rate B is the weighted value of the bit error rate A. After that, if a bit error rate B of the egress port B is reported to the controller, because the bit error rate of the egress port B in the accumulated bit error rate B is zero, the controller may add a weighted value of the bit error rate B to the accumulated bit error rate B, to obtain an accumulated bit error rate C and record it as the accumulated bit error rate of the transmission path A; that is, the accumulated bit error rate C is a weighted sum of the bit error rate A and the bit error rate B. After that, if a bit error rate C of the egress port A is also reported to the controller, because the bit error rate of the egress port A in the accumulated bit error rate C is the bit error rate A, the controller may replace the weighted value of the bit error rate A in the accumulated bit error rate C with a weighted value of the bit error rate C, to obtain an accumulated bit error rate D and record it as the accumulated bit error rate of the transmission path A; that is, the accumulated bit error rate D is a weighted sum of the bit error rate C and the bit error rate B. 203: The controller determines whether the first accumulated bit error rate is greater than a first bit error rate threshold. 204: The controller switches a service flow from the first transmission path to a second transmission path when the controller determines that the first accumulated bit error rate is greater than the first bit error rate threshold. A head node network device of the first transmission path and a head node network device of the second transmission path are a same network device, and a tail node network device of the first transmission path and a tail node network device of the second transmission path are a same network device. The first bit error rate threshold is used to determine whether an accumulated bit error rate of a transmission path exceeds an acceptable degree.
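The controller-side bookkeeping of steps 201 through 204 can be condensed into the following sketch; the weights, thresholds, and port identifiers are illustrative assumptions:

```python
class Controller:
    def __init__(self, threshold_m: float, threshold_n: float):
        self.threshold_m = threshold_m        # first (per-path) threshold M
        self.threshold_n = threshold_n        # second (per-port) threshold N
        self.port_ber: dict[str, float] = {}  # latest rate per egress port

    def on_link_status(self, egress_port: str, ber: float) -> None:
        # Negligible per-port rates are cleared to zero (ignored).
        self.port_ber[egress_port] = 0.0 if ber < self.threshold_n else ber

    def accumulated_ber(self, path_ports: list[str],
                        weights: dict[str, float] | None = None) -> float:
        # Weighted sum over all egress ports on the path (default weight 1).
        weights = weights or {}
        return sum(weights.get(p, 1.0) * self.port_ber.get(p, 0.0)
                   for p in path_ports)

    def needs_switch(self, path_ports: list[str]) -> bool:
        return self.accumulated_ber(path_ports) > self.threshold_m

ctrl = Controller(threshold_m=1e-6, threshold_n=1e-9)
for port, ber in [("111", 4e-7), ("113", 5e-10), ("116", 8e-7)]:
    ctrl.on_link_status(port, ber)
# Port 113 is cleared to zero; 4e-7 + 8e-7 = 1.2e-6 > 1e-6, so switch.
assert ctrl.needs_switch(["111", "113", "116"])
```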
When the first accumulated bit error rate is less than the first bit error rate threshold, the first accumulated bit error rate is at an acceptable level, that is, the bit errors generated by the service flow on the first transmission path are acceptable. In this case, the controller may not perform a transmission path switching operation on the service flow transmitted on the first transmission path. When the first accumulated bit error rate is greater than the first bit error rate threshold, the first accumulated bit error rate is at an unacceptable level, that is, the bit errors generated by the service flow on the first transmission path are unacceptable. In this case, the controller may switch the service flow transmitted on the first transmission path to the second transmission path for transmission. The first transmission path and the second transmission path are two different transmission paths between a same head node network device and a same tail node network device. The first transmission path and the second transmission path may be, for example, segment routing-traffic engineering (SR-TE) tunnels. In some embodiments, considering that different service types usually have different requirements on the bit error rates generated during service flow transmission, the controller may set different first bit error rate thresholds for the different service types. In this case, the controller may select different first bit error rate thresholds based on the different service types, to evaluate the accumulated bit error rate of a transmission path that is configured to transmit a service flow of a service type, to determine whether the service flow of the service type needs to be switched to another transmission path. In some embodiments, the controller may determine a service flow transmitted on the first transmission path and a service type of the service flow. If the first accumulated bit error rate of the first transmission path is greater than the first bit error rate threshold that is set corresponding to the service type, the controller may switch the service flow to the second transmission path for transmission. In practice, a voice service is less sensitive to bit errors than a data service. Therefore, the first bit error rate threshold that is set corresponding to the voice service may be greater than the first bit error rate threshold that is set corresponding to the data service. For example, because the voice service is affected when the bit error rate exceeds 4E-2, the first bit error rate threshold that is set corresponding to the voice service may be 4E-2. For another example, because a video service is affected when the bit error rate exceeds 1E-5, the first bit error rate threshold that is set corresponding to the video service may be 1E-5. For another example, because the data service is affected when the bit error rate exceeds 1E-6, the first bit error rate threshold that is set corresponding to the data service may be 1E-6. It may be understood that the first bit error rate threshold is used to determine whether an accumulated bit error rate of a transmission path exceeds a requirement on the bit error rate of a service flow for transmission. The foregoing second bit error rate threshold is used to determine whether the bit error rate of a single egress port on the transmission path can be ignored. Therefore, the first bit error rate threshold is generally greater than the second bit error rate threshold.
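A small sketch of the per-service first bit error rate thresholds quoted above (4E-2, 1E-5, and 1E-6):

```python
# Per-service first bit error rate thresholds, as quoted above.
FIRST_BER_THRESHOLD = {"voice": 4e-2, "video": 1e-5, "data": 1e-6}

def path_acceptable(accumulated_ber: float, service_type: str) -> bool:
    return accumulated_ber <= FIRST_BER_THRESHOLD[service_type]

# An accumulated rate of 5e-5 is acceptable for a voice flow but not for
# video or data flows.
assert path_acceptable(5e-5, "voice")
assert not path_acceptable(5e-5, "video") and not path_acceptable(5e-5, "data")
```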
It should be noted that switching the service flow from the first transmission path to the second transmission path is intended to switch the service flow from a transmission path with a relatively high accumulated bit error rate to a transmission path with a relatively low accumulated bit error rate. Therefore, in some embodiments, the controller may switch the service flow from the first transmission path to the second transmission path when determining that the first accumulated bit error rate of the first transmission path is greater than the first bit error rate threshold and that the second accumulated bit error rate of the second transmission path is less than the first bit error rate threshold. That is, step 204 may specifically be: The controller switches the service flow from the first transmission path to the second transmission path when the controller determines that the first accumulated bit error rate is greater than the first bit error rate threshold and the second accumulated bit error rate is less than the first bit error rate threshold. In addition, when the controller determines that both the first accumulated bit error rate and the second accumulated bit error rate are greater than the first bit error rate threshold, the controller may not switch the service flow from the first transmission path to the second transmission path. The controller may determine the second accumulated bit error rate of the second transmission path in the following manner: The controller receives second link status information sent by a second network device, and determines the second accumulated bit error rate of the second transmission path based on the second link status information. The second link status information includes second egress port information and a second bit error rate, the second egress port information indicates a second egress port of the second network device, the second network device is configured to send data traffic to a next-hop network device of the second network device along the second transmission path through the second egress port, the second bit error rate indicates a bit error rate at which data traffic is sent through the second egress port, and the second accumulated bit error rate is a weighted sum of bit error rates of all egress ports configured to send data traffic on the second transmission path. It may be understood that, for the manner of determining the second accumulated bit error rate, refer to the related descriptions of the first accumulated bit error rate. For an embodiment of an example implementation related to the second link status information, refer to the related descriptions of the first link status information. Details are not described herein again. In some cases, there are a plurality of other transmission paths, different from the first transmission path, between the head node network device and the tail node network device of the first transmission path. Therefore, in some embodiments, when the controller determines that the first accumulated bit error rate of the first transmission path is greater than the first bit error rate threshold, the controller may determine the second transmission path from the plurality of other paths based on the performance of each of the plurality of other transmission paths, to switch the service flow from the first transmission path to the second transmission path (see the selection sketch after the examples below). In this way, the service flow may be switched to a transmission path with better performance, and therefore, transmission efficiency of the service flow is higher.
In an example, the performance metric used to determine the second transmission path from the other transmission paths may be the accumulated bit error rate. In other words, the controller may determine the second transmission path from the other transmission paths based on the accumulated bit error rate of each of the other transmission paths. The second transmission path may be the transmission path with the minimum accumulated bit error rate among the other transmission paths. In another example, the performance metric used to determine the second transmission path from the other transmission paths may be the link cost (COST). In other words, the controller may determine the second transmission path from the other transmission paths based on the COST value of each of the other transmission paths. The second transmission path may be the transmission path with the minimum COST value among the other transmission paths. In still another example, the performance metric used to determine the second transmission path from the other transmission paths may be bandwidth. In other words, the controller may determine the second transmission path from the other transmission paths based on the bandwidth of each of the other transmission paths. The second transmission path may be the transmission path with the maximum bandwidth among the other transmission paths. In still another example, the performance metric used to determine the second transmission path from the other transmission paths may be a delay. In other words, the controller may determine the second transmission path from the other transmission paths based on the delay of each of the other transmission paths. The second transmission path may be the transmission path with the minimum delay among the other transmission paths. In addition, the performance metric used to determine the second transmission path from the other transmission paths may be any combination of a plurality of metrics including the accumulated bit error rate, the COST value, the bandwidth, and the delay. To switch the service flow from the first transmission path to the second transmission path, the controller may generate a label stack used to indicate to transmit the service flow on the second transmission path, and use a path computation element communication protocol (PCEP) packet to send the label stack to the head node network device of the second transmission path. The head node network device may encapsulate the label stack of the service flow into a packet of the service flow. In this way, each network device on the second transmission path can send the packet of the service flow to its next-hop network device based on the label stack encapsulated in the packet of the service flow. In some embodiments, if the first transmission path is a primary path of the service flow, and the second transmission path is a secondary path of the service flow, after the service flow is switched from the first transmission path to the second transmission path, when a bit error rate of an egress port on the first transmission path decreases such that the accumulated bit error rate of the first transmission path decreases below the first bit error rate threshold, the service flow may further be switched from the second transmission path back to the first transmission path.
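Returning to the selection among candidate paths referenced earlier, the following sketch combines the metrics above; the dictionary keys, path names, and tie-break order are illustrative assumptions:

```python
def select_second_path(candidates: list[dict],
                       threshold_m: float) -> dict | None:
    # Paths whose accumulated rate also exceeds the first threshold M are
    # filtered out; the remainder are ranked by accumulated BER, then COST,
    # then delay, then (highest) bandwidth.
    eligible = [p for p in candidates if p["acc_ber"] < threshold_m]
    if not eligible:
        return None  # no acceptable alternative; keep the current path
    return min(eligible, key=lambda p: (p["acc_ber"], p["cost"],
                                        p["delay_ms"], -p["bandwidth_mbps"]))

second = select_second_path(
    [{"name": "path_1", "acc_ber": 2e-7, "cost": 30, "delay_ms": 4,
      "bandwidth_mbps": 1000},
     {"name": "path_2", "acc_ber": 2e-7, "cost": 20, "delay_ms": 5,
      "bandwidth_mbps": 800}],
    threshold_m=1e-6,
)
assert second["name"] == "path_2"  # equal BER, so the lower COST wins
```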
In some embodiments, after step 204, the controller may further receive third link status information sent by a third network device, determine a third accumulated bit error rate of the first transmission path based on the third link status information, and switch the service flow from the second transmission path back to the first transmission path when determining that the third accumulated bit error rate is less than the first bit error rate threshold. The third link status information includes third egress port information and a third bit error rate, the third egress port information indicates a third egress port of the third network device, the third network device is configured to send data traffic to a next-hop network device of the third network device along the first transmission path through the third egress port, the third bit error rate indicates a bit error rate at which data traffic is sent through the third egress port, and the third accumulated bit error rate is a weighted sum of bit error rates of all the egress ports configured to send data traffic on the first transmission path. It should be noted that the third egress port may be the foregoing first egress port, or may be any other egress port on the first transmission path. It may be understood that, for the manner of determining the third accumulated bit error rate, refer to the related descriptions of the first accumulated bit error rate. For an embodiment of an example implementation related to the third link status information, refer to the related descriptions of the first link status information. Details are not described herein again. In addition, in some embodiments, after the service flow is switched from the first transmission path to the second transmission path, if the accumulated bit error rate of the first transmission path remains above the first bit error rate threshold and the accumulated bit error rate of the second transmission path also exceeds the first bit error rate threshold, the service flow may continue to be transmitted on the second transmission path, and does not need to be switched back to the first transmission path. In this embodiment, in this manner of reporting by the network device, the controller may collect and accumulate the bit error rates at which data traffic is sent through all egress ports on the first transmission path, so that the controller may switch the service flow on the first transmission path to the second transmission path when the bit error rate obtained through accumulation is excessively high. Therefore, the controller may switch a service flow transmitted on a transmission path with an excessively high accumulated bit error rate to another transmission path with a relatively low accumulated bit error rate for transmission. This can avoid using a multi-hop path with a severe accumulated bit error rate to forward a service flow, and improve transmission stability of the service flow. FIG. 6 is a schematic flowchart of a link bit error-based processing method 600 according to an embodiment of this application. The method 600 may include the following steps. 601: A first network device detects a first bit error rate at which data traffic is sent through a first egress port. The first network device is configured to send data traffic to a next-hop network device of the first network device along a first transmission path through the first egress port. 602: The first network device sends first link status information to a controller.
The first link status information includes first egress port information and the first bit error rate, and the first egress port information is used to indicate the first egress port. 603: The first link status information is used to determine a first accumulated bit error rate of the first transmission path, the first accumulated bit error rate is a weighted sum of bit error rates of all egress ports configured to send data traffic on the first transmission path, the first accumulated bit error rate is used to determine whether to switch a service flow from the first transmission path to a second transmission path, a head node network device of the first transmission path and a head node network device of the second transmission path are a same network device, and a tail node network device of the first transmission path and a tail node network device of the second transmission path are a same network device. In some possible embodiments, the first link status information is sent to the controller by using a BGP update packet. In some possible embodiments, the first link status information is specifically carried in a multiprotocol reachable network layer reachability information (MP REACH NLRI) field or a multiprotocol unreachable network layer reachability information (MP UNREACH NLRI) field in the BGP update packet. In some possible embodiments, the first link status information in the BGP update packet is carried in type-length-value (TLV) information. In some possible embodiments, the first link status information is directly sent by the first network device to the controller. In some possible embodiments, the first link status information is first sent by the first network device to a third network device, and then is directly sent by the third network device to the controller. In an example, the first network device mentioned in the method 600 may be the first network device mentioned in the method 200, and the first link status information mentioned in the method 600 may be the first link status information mentioned in the method 200. In another example, the first network device mentioned in the method 600 may be the second network device mentioned in the method 200, and the first link status information mentioned in the method 600 may be the second link status information mentioned in the method 200. Therefore, for various embodiments of example implementations of the first link status information in the method 600, refer to the related descriptions of the method 200. Details are not described herein again. In a possible embodiment, the method further includes: The first network device receives second link status information sent by a second network device. The second link status information includes second egress port information and a second bit error rate, the second egress port information indicates a second egress port of the second network device, the second network device is configured to send data traffic to a next-hop network device of the second network device along a third transmission path through the second egress port, and the second bit error rate indicates a bit error rate at which data traffic is sent through the second egress port.
The first network device sends the second link status information to the controller. The second link status information is used to determine a second accumulated bit error rate of the third transmission path. The second accumulated bit error rate is a weighted sum of bit error rates of all egress ports configured to send data traffic on the third transmission path, the second accumulated bit error rate is used to determine whether to switch a service flow from the third transmission path to a fourth transmission path, a head node network device of the third transmission path and a head node network device of the fourth transmission path are a same network device, and a tail node network device of the third transmission path and a tail node network device of the fourth transmission path are a same network device. In this embodiment, the first network device mentioned in the method 600 may be the fourth network device mentioned in the method 200, and the second link status information mentioned in the method 600 may be the first link status information mentioned in the method 200. Therefore, for various embodiments of the second link status information in the method 600, refer to the related descriptions of the method 200. Details are not described herein again. In this embodiment, a network device may report a bit error rate of an egress port to the controller. In this way, the controller may collect and accumulate bit error rates at which data traffic is sent through all egress ports on the first transmission path, so that the controller may switch the service flow on the first transmission path to the second transmission path when the bit error rate obtained through accumulation is excessively high. Therefore, a service flow transmitted on a transmission path with an excessively high accumulated bit error rate may be switched to another transmission path with a relatively low accumulated bit error rate for transmission. This avoids using a multi-hop path with a severe accumulated bit error rate to forward a service flow, and improves transmission stability of the service flow. The following describes an application example of this embodiment of this application by using a specific scenario. The specific scenario example may use the network structure shown in FIG. 1. The first transmission path and the second transmission path are two transmission paths from the CSG 103 to the RSG 108. The first transmission path passes through the CSG 103, the CSG 104, the ASG 106, and the RSG 108. The second transmission path passes through the CSG 103, the CSG 105, the ASG 107, the RSG 109, and the RSG 108. In the specific scenario example, as shown in FIG. 7, a link bit error-based processing method 700 may include, for example, the following steps: 701: The CSG 103 sends first link status information to the ASG 106. The first link status information includes egress port information that is used to indicate the egress port 111 and a bit error rate a1. a1 is a bit error rate that is detected by the CSG 103 and at which the egress port 111 is configured to send data traffic. The link status information may be specifically carried in TLV information of the IGP, and sent by the CSG 103 to the ASG 106. It may be understood that this embodiment is described by using an example in which the CSG 103 reports the bit error rate of the egress port 111. Actually, any network device on the network 110 may report a bit error rate of any egress port on the network device.
702: The ASG 106 sends the first link status information to the NCE 120. The first link status information may be carried in an MP REACH NLRI or MP UNREACH NLRI field of a BGP update packet, and sent by the ASG 106 to the NCE 120. 703: The NCE 120 updates the bit error rate of the egress port 111 based on the first link status information. The NCE 120 may update the bit error rate of the egress port 111 according to the relationship between a1 and a second bit error rate threshold N. If a1 is less than N, the NCE 120 may update the bit error rate of the egress port 111 to zero. If a1 is greater than N, the NCE 120 may update the bit error rate of the egress port 111 to a1. After the bit error rate of the egress port 111 is updated, if the bit error rate of the egress port 111 is greater than a first bit error rate threshold M, step 705 is performed; or if the bit error rate of the egress port 111 is less than the first bit error rate threshold M, step 704 is performed. M is greater than N. 704: The NCE 120 updates the accumulated bit error rate of the first transmission path based on the bit error rate of the egress port 111. Before the bit error rate of the egress port 111 is updated, the accumulated bit error rate of the first transmission path is q0 = a0 + b0 + c0. a0 is a bit error rate that is of the egress port 111 and that was reported by the CSG 103 before the first link status information is reported, or a0 is 0 if the CSG 103 has not reported the bit error rate of the egress port 111 before the first link status information is reported. b0 is a bit error rate that is of the egress port 113 and that was reported by the CSG 104 before the first link status information is reported, or b0 is 0 if the CSG 104 has not reported the bit error rate of the egress port 113 before the first link status information is reported. c0 is a bit error rate that is of the egress port 116 and that was reported by the ASG 106 before the first link status information is reported, or c0 is 0 if the ASG 106 has not reported the bit error rate of the egress port 116 before the first link status information is reported. If the bit error rate of the egress port 111 is updated to a1, the accumulated bit error rate of the first transmission path is updated to q1 = a1 + b0 + c0. If the bit error rate of the egress port 111 is updated to zero, the accumulated bit error rate of the first transmission path is updated to q2 = 0 + b0 + c0. After the accumulated bit error rate of the first transmission path is updated, if the accumulated bit error rate of the first transmission path is greater than M, step 705 is performed; or if the accumulated bit error rate of the first transmission path is less than M, a subsequent path switching operation may not be performed. 705: The NCE 120 switches a service flow from the first transmission path to the second transmission path. In some embodiments, the NCE 120 may determine a service flow transmitted on the first transmission path, generate a label stack used to indicate to transmit the service flow on the second transmission path, and send the label stack to the CSG 103 by using the RSG 108. The label stack includes labels used to indicate the CSG 103, the CSG 105, the ASG 107, the RSG 109, and the RSG 108. After receiving the label stack, the CSG 103 may encapsulate the label stack into a packet of the service flow. In this case, the CSG 103, the CSG 105, the ASG 107, the RSG 109, and the RSG 108 may separately send the packet of the service flow to their next-hop network devices on the second transmission path based on the label stack.
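For a numeric flavor of steps 703 and 704 (hypothetical values, unit weights):

```python
# Hypothetical rates for the scenario above, with unit weights.
a0, b0, c0 = 2e-7, 1e-7, 3e-7  # previously recorded rates for ports 111/113/116
q0 = a0 + b0 + c0              # 6e-7: below M, no switching yet
a1 = 9e-7                      # newly reported rate for the egress port 111
q1 = a1 + b0 + c0              # 1.3e-6: exceeds M = 1e-6, so step 705 runs
assert q1 > 1e-6 > q0
```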
With the label stack encapsulated as described in step 705, the packet of the service flow is transmitted on the second transmission path. In this embodiment, any network device on the network 110 may report a bit error rate of any egress port on the network device to the NCE 120. In this case, the NCE 120 may collect and accumulate the bit error rates of the egress port 111, the egress port 113, and the egress port 116, to obtain the accumulated bit error rate of the first transmission path, so that the NCE 120 may switch the service flow on the first transmission path to the second transmission path when the accumulated bit error rate of the first transmission path is excessively high. Therefore, a service flow transmitted on a transmission path with an excessively high accumulated bit error rate may be switched to another transmission path with a relatively low accumulated bit error rate for transmission. This avoids using a multi-hop path with a severe accumulated bit error rate to forward a service flow, and improves transmission stability of the service flow. FIG. 8 is a schematic diagram of a structure of a link bit error-based processing apparatus according to an embodiment of this application. The apparatus is a controller 800, and may specifically include a receiving unit 801 and a processing unit 802. The receiving unit 801 is configured to receive first link status information sent by a first network device. The first link status information includes first egress port information and a first bit error rate, the first egress port information indicates a first egress port of the first network device, the first network device is configured to send data traffic to a next-hop network device of the first network device along a first transmission path through the first egress port, and the first bit error rate indicates a bit error rate at which data traffic is sent through the first egress port. The processing unit 802 is configured to: determine a first accumulated bit error rate of the first transmission path based on the first link status information, determine whether the first accumulated bit error rate is greater than a first bit error rate threshold, and switch a service flow from the first transmission path to a second transmission path when determining that the first accumulated bit error rate is greater than the first bit error rate threshold. The first accumulated bit error rate is a weighted sum of bit error rates of all egress ports configured to send data traffic on the first transmission path, a head node network device of the first transmission path and a head node network device of the second transmission path are a same network device, and a tail node network device of the first transmission path and a tail node network device of the second transmission path are a same network device. In some embodiments, the processing unit 802 is further configured to clear the value of the first bit error rate to zero when the first bit error rate is less than a second bit error rate threshold. The first bit error rate threshold is greater than the second bit error rate threshold. In some embodiments, the receiving unit 801 is further configured to receive second link status information sent by a second network device.
The second link status information includes second egress port information and a second bit error rate, the second egress port information indicates a second egress port of the second network device, the second network device is configured to send data traffic to a next-hop network device of the second network device along the second transmission path through the second egress port, and the second bit error rate indicates a bit error rate at which data traffic is sent through the second egress port. The processing unit802is further configured to determine a second accumulated bit error rate of the second transmission path based on the second link status information, determine whether the second accumulated bit error rate is less than the first bit error rate threshold, and switch the service flow from the first transmission path to the second transmission path when determining that the first accumulated bit error rate is greater than the first bit error rate threshold and the second accumulated bit error rate is less than the first bit error rate threshold. The second accumulated bit error rate is a weighted sum of bit error rates of all egress ports configured to send data traffic on the second transmission path. In some embodiments, the processing unit802is further configured to: before switching the service flow from the first transmission path to the second transmission path, obtain other transmission paths between the head node network device of the first transmission path and the tail node network device of the first transmission path that are different from the first transmission path, and determine the second transmission path from the other transmission paths based on an accumulated bit error rate of each of the other transmission paths. In some embodiments, the receiving unit801is further configured to: after the service flow is switched from the first transmission path to the second transmission path, receive third link status information sent by a third network device. The third link status information includes third egress port information and a third bit error rate, the third egress port information indicates a third egress port of the third network device, the third network device is configured to send data traffic to a next-hop network device of the third network device along the first transmission path through the third egress port, and the third bit error rate indicates a bit error rate at which data traffic is sent through the third egress port. The processing unit802is further configured to: determine a third accumulated bit error rate of the first transmission path based on the third link status information, determine whether the third accumulated bit error rate is less than the first bit error rate threshold, and switch the service flow from the second transmission path back to the first transmission path when determining that the third accumulated bit error rate is less than the first bit error rate threshold. The third accumulated bit error rate is a weighted sum of bit error rates of all the egress ports configured to send data traffic on the first transmission path. In some embodiments, the receiving unit801is further configured to receive a border gateway protocol (BGP) update packet, where the BGP update packet carries the first link status information sent by the first network device. The processing unit802is further configured to obtain the first link status information from the BGP update packet. 
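Returning to the path selection described above, one natural reading is a minimum-accumulated-bit-error-rate selection among the candidate paths between the same head and tail nodes. A minimal sketch, with hypothetical data structures and threshold:

```python
# Hypothetical sketch: pick, from the alternative paths between the same
# head and tail nodes, the candidate with the lowest accumulated bit
# error rate, provided that rate is below the first threshold M.

M = 1e-5  # first bit error rate threshold (assumed value)

# candidate paths mapped to the bit error rates of their egress ports
candidates = {
    "path_B": [0.0, 2e-7, 0.0],
    "path_C": [4e-6, 1e-6, 3e-6],
}

def pick_second_path(candidates, threshold):
    best = min(candidates, key=lambda p: sum(candidates[p]))
    return best if sum(candidates[best]) < threshold else None

print(pick_second_path(candidates, M))  # -> path_B
```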
In some embodiments, the first link status information is specifically carried in a multiprotocol reachable network layer reachability information (MP REACH NLRI) field or a multiprotocol unreachable network layer reachability information (MP UNREACH NLRI) field in the BGP update packet. In some embodiments, the first link status information in the BGP update packet is carried in type-length-value TLV information. In some embodiments, the first link status information is directly sent by the first network device to the controller; or the first link status information is first sent by the first network device to a fourth network device, and then is directly sent by the fourth network device to the controller. In some embodiments, the first bit error rate threshold is specifically a bit error rate threshold that is set corresponding to a service type of the service flow. It may be understood that the controller800is the controller mentioned in the method200. Therefore, for various specific embodiment implementations of the controller800, refer to the description of the controller in the method200. Details are not described in this embodiment again. FIG.9is a schematic diagram of a structure of a link bit error-based processing apparatus according to an embodiment of this application. The apparatus is a first network device900, and may specifically include a processing unit901and a sending unit902. The processing unit901is configured to detect a first bit error rate at which data traffic is sent through a first egress port. The first network device is configured to send data traffic to a next-hop network device of the first network device along a first transmission path through the first egress port. The sending unit902is configured to send first link status information to a controller. The first link status information includes first egress port information and the first bit error rate, and the first egress port information is used to indicate the first egress port. The first link status information is used to determine a first accumulated bit error rate of the first transmission path, the first accumulated bit error rate is a weighted sum of bit error rates of all egress ports configured to send data traffic on the first transmission path, the first accumulated bit error rate is used to determine whether to switch a service flow from the first transmission path to a second transmission path, a head node network device of the first transmission path and a head node network device of the second transmission path are a same network device, and a tail node network device of the first transmission path and a tail node network device of the second transmission path are a same network device. In some embodiments, the first link status information is sent to the controller by using a BGP update packet. In some embodiments, the first link status information is specifically carried in a multiprotocol reachable network layer reachability information (MP REACH NLRI) field or a multiprotocol unreachable network layer reachability information (MP UNREACH NLRI) field in the BGP update packet. In some embodiments, the first link status information in the BGP update packet is carried in type-length-value (TLV) information. In some embodiments, the first link status information is directly sent by the first network device to the controller. In some embodiments, the first network device900further includes a receiving unit903. 
The receiving unit903is configured to receive second link status information sent by a second network device. The second link status information includes second egress port information and a second bit error rate, the second egress port information indicates a second egress port of the second network device, the second network device is configured to send data traffic to a next-hop network device of the second network device along a third transmission path through the second egress port, and the second bit error rate indicates a bit error rate at which data traffic is sent through the second egress port. The sending unit is further configured to send the second link status information to the controller. The second link status information is used to determine a second accumulated bit error rate of the third transmission path. The second accumulated bit error rate is a weighted sum of bit error rates of all egress ports configured to send data traffic on the third transmission path, the second accumulated bit error rate is used to determine whether to switch a service flow from the third transmission path to a fourth transmission path, a head node network device of the third transmission path and a head node network device of the fourth transmission path are a same network device, and a tail node network device of the third transmission path and a tail node network device of the fourth transmission path are a same network device. In some embodiments, the first link status information is first sent by the first network device to a third network device, and then is directly sent by the third network device to the controller. It may be understood that the first network device900is the first network device mentioned in the method200. Therefore, for various specific embodiment implementations of the first network device900, refer to the description of the first network device in the method200. Details are not described in this embodiment again. In addition, an embodiment of this application further provides a controller. The controller includes a processor and a memory. The memory stores instructions. When the processor executes the instructions, the controller is enabled to perform the foregoing method200. In addition, an embodiment of this application further provides a network device. The network device includes a processor and a memory. The memory stores instructions. When the processor executes the instructions, the network device is enabled to perform the foregoing method600. In addition, an embodiment of this application further provides a computer program product. When the computer program product is run on a computer, the computer is enabled to perform the foregoing method200or the foregoing method600. In addition, an embodiment of this application further provides a computer-readable storage medium. The computer-readable storage medium stores instructions. When the instructions are run on a computer or a processor, the computer or the processor is enabled to perform the foregoing method200or the foregoing method600. The “first” in the names such as the “first network device”, “first link status information”, “first egress port”, “first bit error rate”, “first transmission path”, and “first accumulated bit error rate” mentioned in the embodiments of this application is merely used as a name identifier, but does not indicate the first in order. This rule is also applicable to the “second”, and the like. 
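As noted above, the link status information may be carried as type-length-value (TLV) information in a BGP update packet. The description does not fix a particular TLV layout, so the type code, field widths, and fixed-point encoding of the bit error rate below are invented purely for illustration:

```python
import struct

# Hypothetical sketch of packing link status information (egress port
# identifier plus bit error rate) as a TLV element for transport in a
# BGP update packet. The type code 0x01, the field widths, and the
# 1e-12 fixed-point scaling are assumptions, not part of the embodiment.

def pack_link_status_tlv(port_id: int, ber: float) -> bytes:
    value = struct.pack("!IQ", port_id, int(ber * 1e12))  # BER in 1e-12 units
    return struct.pack("!BH", 0x01, len(value)) + value   # type, length, value

def unpack_link_status_tlv(tlv: bytes):
    t, length = struct.unpack("!BH", tlv[:3])
    port_id, scaled = struct.unpack("!IQ", tlv[3:3 + length])
    return t, port_id, scaled / 1e12

tlv = pack_link_status_tlv(port_id=111, ber=3.2e-6)
print(unpack_link_status_tlv(tlv))  # (1, 111, 3.2e-06)
```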
It can be learned from the foregoing descriptions of the embodiments and example implementations that a person skilled in the art may clearly understand that a part or all of the steps of the methods in the foregoing embodiments may be implemented by using software and a universal hardware platform. Based on such an understanding, the technical solutions of this application may be implemented in a form of a software product. The computer software product may be stored in a storage medium, for example, a read-only memory (ROM)/random access memory (RAM), a magnetic disk, or an optical disc, and include several instructions for instructing a computer device (which may be a personal computer, a server, or a network communications device such as a router) to perform the methods described in the embodiments or some parts of the embodiments of this application. The embodiments in this specification are all described in a progressive manner; for same or similar parts in the embodiments, reference may be made between the embodiments, and each embodiment focuses on a difference from other embodiments. Especially, the apparatus embodiment is basically similar to the method embodiment, and therefore is described briefly. For related parts, refer to partial descriptions in the method embodiment. The described device and method embodiments are merely examples. The modules described as separate parts may or may not be physically separate, and parts displayed as modules may or may not be physical modules, may be located in one position, or may be distributed on a plurality of network units. A part or all of the modules may be selected based on actual requirements to achieve the objectives of the solutions of the embodiments. A person of ordinary skill in the art may understand and implement the embodiments without creative efforts. The foregoing descriptions are merely example embodiments and implementations of this application, but are not intended to limit the protection scope of this application.
11863304
DETAILED DESCRIPTION OF THE INVENTION The invention is directed to countermeasures to side-channel-based attack mechanisms. A dynamic partial reconfiguration (DPR) method for FPGAs makes techniques such as differential power analysis (DPA) difficult and/or ineffective by frequently changing (while preserving the functionality) the implementation characteristics of components or sub-components of an encryption algorithm. This is performed by replicating components or sub-components that perform identical functions and run simultaneously in parallel. This allows the encryption engine to continue to encrypt/decrypt at full speed without needing to stall and wait for reconfiguration to complete. With DPA deriving its power by averaging power transient signals measured from an underlying invariant circuit implementation, small components of the circuit implementation are changed. Side-channel Power Resistance for Encryption Algorithms using DPR (SPREAD) introduces diversity, and uncertainty, in the analysis of power supply transient signals. One or more redundant locations are added that can be re-programmed over time while in progress, i.e., on the fly. According to an embodiment of the invention, one additional reconfiguration location is added to the architecture to allow one or more components or sub-components (SBOX, SubBytes, ShiftRows, MixColumns, AddRoundKey, registers, XOR gates) to be disconnected from the encryption engine and reconfigured. Although the invention is discussed in reference to the Advanced Encryption Standard (AES) algorithm, any encryption algorithm is contemplated that uses replicated components or sub-components, such as the Data Encryption Standard (DES), RSA encryption, and elliptic curve cryptography (ECC), to name a few. The implementation characteristics of components or sub-components of an encryption algorithm are frequently changed while preserving the functionality using DPR methods. According to one contemplated embodiment, replicated primitives within AES, such as the SBOX, are synthesized to multiple implementations. During encryption/decryption, SBOX components are randomly selected and replaced dynamically with one of these implementations. The implementations are stored within FPGA Block RAM resources and a state machine coordinates with AES to carry out periodic DPR. The diversity of the implementations changes their delay characteristics and removes correlations in the power transients, making it difficult to identify the correct key. A controller according to the invention is a VHDL module that coordinates the DPR operations with a fully operational encryption engine, e.g., advanced encryption standard (AES). The system and methods according to the invention perform self-reconfiguration using Xilinx's internal configuration access port (ICAP) interface. Self-reconfiguration refers to techniques that run in the programmable logic (PL) that reconfigure other components in the PL, excluding itself. The time taken to perform DPR using the ICAP interface is approximately 1 ms for smaller partial dynamic reconfigurable regions, referred to herein as “pblocks”. Therefore, stopping cryptographic operations to carry out DPR would introduce a significant performance penalty on the encryption or decryption operations. To address this issue, a single-unit redundancy scheme is implemented as shown inFIG.1using AES as the encryption engine. Each of the SBOX regions is reconfigurable. 
AlthoughFIG.1illustrates a subset of SBOXs, it should be noted that a set of 16 parallel SBOXs is needed in the 128-bit version of AES. The invention adds one additional parallel SBOX. The DPR control signals from the controller are used to create a ‘hole’ in the parallel configuration of the 17 SBOXs, by using shifters and MUXs to wire around the SBOX that is the target for reconfiguration.FIG.1shows the routing configuration when SBOX2is the target. Since DPR can take place while the rest of the system continues to operate at full speed, encryption/decryption can continue with only 1 stall cycle to reconfigure the shifters and MUXs. A block diagram of the proposed system that is applicable to FPGA SoC architectures is shown inFIG.2. Security features that exist on the Processor Side of the SoC, such as Xilinx TrustZone, can be leveraged to ensure the partial bit-streams are loaded into BRAM using a secure general-purpose I/O (GPIO) interface. Once loaded, at least two operations are carried out by the DPR Controller during encryption or decryption. First, the nonce generation engine is started (described more fully below). The nonces are used to randomize the time intervals between DPR operations, select from among the configurations that have been loaded into the BRAM, and select the target reconfigurable regions within the cryptographic engine. The second operation is to read the selected bitstream from BRAM, assert the appropriate control signals for reconfiguration of the selected cryptographic component, synchronize with the cryptographic engine to insert one or more stall cycles as needed, and execute the transfer protocol using the ICAP controller. The frequency of reconfiguration is bounded by energy consumption overhead on one hand and the requirement to keep the number of power traces that can be collected under any one configuration to a small number on the other. Based on the results (presented below) that are directed to applying DPA to an AES implementation on an FPGA, the time required to collect a sufficient number of waveforms (factoring in oscilloscope averaging time) is, at best, measured in hours of data collection. DPR carried out on the AES SBOX takes approximately 1 ms, which upper bounds the frequency of reconfiguration to approximately 1000/second. Hence reconfigurations can be done at a relatively slow and random frequency, from several per second to one every couple of seconds. The power consumed by DPR for a region large enough to contain the SBOX is in the tens of microwatts range, so battery-operated devices may opt for slower frequencies of reconfiguration. As presented above, a set of AES SBOX implementations is stored within FPGA BRAM resources. The implementations are created by introducing modifications to the place and route characteristics of the AES SBOX. These changes to the structural (not functional) characteristics of SBOX introduce small changes in the path delays and corresponding power trace information. The success of waveform averaging carried out in a DPA attack is critically dependent on the delay behavior of individual gates (and entire paths) remaining invariant. By changing the wiring and LUTs used by a specific implementation of SBOX over time, waveform averaging carried out across different implementations reduces the accumulated power information generated by the SBOX output bit under attack. 
Moreover, power peaks associated with SBOX output bits that are not targeted increase in magnitude because averaging is less effective in reducing their amplitudes to near zero, as required by the DPA algorithm. On the other hand, it is also important that the power trace distortion introduced by different implementations be small enough to make it difficult or impossible for an adversary to determine which of the implementations is currently ‘installed’ into the AES engine. The difficulty of tracking replacements is compounded by the large number of possible fully instantiated AES configurations (NI^16, with NI defined as the number of different implementations). Given the power trace represents the superposition of power traces from all 16 simultaneously executing SBOXs, this task is likely intractable for the adversary. The most significant vulnerability is the possibility of tracking replacements using the DPR power trace, which is addressed below. Implementation diversity techniques that introduce changes to the structure of the SBOX can be carried out in several different ways. A first embodiment involves adding wire loads (stubs) to the existing wires in the ‘implemented’ view of the design. FPGA vendors provide interfaces that allow manipulation of the individual routes using, for example, the “Implemented Design View” in the Xilinx Vivado CAD tool. This strategy of manipulating wire loads introduces only small changes to the delay of the targeted paths. Another embodiment involves making a small, inconsequential change to the VHDL description of the SBOX and then re-synthesizing it. This strategy tends to create larger differences in the path delays from one implementation to the next. The delay using both of these strategies is now discussed. Although the simulation tools can be used to estimate the delay impact of these wire-load and synthesis-directed diversity strategies, the impact is measured directly in hardware experiments carried out on an FPGA. A block-level diagram of the test structure used in our experiments is shown inFIG.3. Path delays can be measured with high accuracy, i.e., in the range of approximately 30 ps, using the ‘Fine Phase Shift’ feature of the digital clock manager (called a mixed-mode clock manager or MMCM in Xilinx architectures), and a clock strobing technique. The Clock Strobe Module (CSM) and MMCM are shown on the left side ofFIG.3. The CSM is implemented as a VHDL module and is the controlling module. It issues control signals to the MMCM to adjust the fine phase shift (FPS) between Clk1and Clk2and includes an up-counter to keep track of how many FPS increments have been applied. The initial phase shift is 0 and is incremented until all path segments in the test circuit have been timed. Each increment adjusts the FPS forward by 18 ps. The test circuit implements a sequence of 64 ‘Switch boxes’, which allow the two incoming signals to be routed straight through the switch box (with switch box ctrl set to 0) or flipped (set to 1). A pair of ‘Timing Cells’ are added to the output of each Switch box, as shown on the right side ofFIG.3. Each Timing Cell includes a flip-flop (FF) driven with the fine phase shifted clock Clk2, an XOR gate, a 2-to-1 MUX and an “n-bit” register (labeled Path Delay). This Timing Cell circuit is replicated at each pair of Switch box outputs. The FFs of all Timing Cells are initialized with the initial value of the Switch Box output signals, which is 0 when a rising edge signal is to be timed. 
The CSM then performs a sequence of ‘launch-capture’ tests, with Clk2phase shifted forward by 18 ps before each test. The XOR gates in the Timing Cells produce a 1 at the beginning of the sequence because the test path signals captured in the FFs remain at the initial value, i.e., the signal propagating along the test path has not had enough time to reach the FF inputs before Clk2is asserted. This causes the current value of the digital Fine Phase Shift (FPS) produced by the CSM to be stored in the Path Delay registers. As the FPS count increases in the sequence of tests, the signals propagating at the beginning of the test path begin to reach the FF inputs before Clk2is asserted. The CSM stops updates to the Path Delay register for these Timing Cells when this occurs. The final value stored in the Path Delay registers of the Timing Cell is the value of the FPS counter. The count is an integer value that can be converted into an actual delay by multiplying it by 18 ps, i.e., the step size associated with consecutive FPS values. The wire-load diversity model is analyzed.FIG.4shows a screen snapshot of two consecutive Switch boxes & Timing Cells (top-to-bottom) from the block diagram ofFIG.3. The Switch boxes and Timing Cells are implemented in the LUTs as shown on the right side. The two test paths between each pair of Switch boxes are routed manually. One of the routes is highlighted in a white dashed line, while the second one is adjacent to it in a white solid line. The routes are configured using a sequence of Xilinx switch boxes as shown on the left. The switch boxes allow an incoming signal to be routed to one or more of “n” output-going routes. Although only 1 outgoing route is used in this example, any number of routes is contemplated. The wire-load implementation diversity strategy uses these switch boxes to add wire loads to existing routes within an SBOX as a mechanism to change the delay of paths. The test structure inFIG.3is used to measure the change in delay under different wire-load models using a Zynq 7020 FPGA. The plot inFIG.5shows the set of timing values for the two test paths obtained from the 64 Path Delay registers after the clock strobing operation completes. The digital timing values vary from approximately 65 to 2050. As mentioned, each increment of FPS adds approximately 18 ps to the overall delay so the actual delays of the increasingly longer path segments vary from 1.161 ns to 36.613 ns. These delays define the ‘base case’, i.e., the circuit configuration in which no wire loads have been added. In contrast,FIGS.6A,6B,6Cshow three test cases with wire stub loads added to the top test path only between two Switch boxes.FIG.6Ashows fan-out added to the lower switch box,FIG.6Bshows it added to the upper switch box andFIG.6Cshows a configuration that adds fan-out to both locations. The impact on delay that each fan-out load introduces is small. In order to measure it accurately, the switch boxes are configured along the entire top test path using each of these wire stub load models, and the difference in delay at each Switch box is then measured using the Timing Cells.FIG.7plots the difference in units of FPS. As shown inFIG.7, the differences in path delay for the 1st and 2nd scenarios are approximately 2.8 ps per stage. This is computed using the cumulative value of 10 at stage 64, multiplying by 17.86 ps and dividing by 64 stages. The delay using both wire loads is approximately double as expected. 
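The conversion from Path Delay register counts to actual delays is a simple multiplication by the fine phase shift step. A minimal sketch reproducing the arithmetic above (the register counts shown are hypothetical examples):

```python
# Sketch of converting Path Delay register counts from the clock strobing
# procedure into delays, using the ~17.86 ps fine phase shift step cited
# above. The register counts below are hypothetical examples.

FPS_STEP_PS = 17.86  # picoseconds per fine phase shift increment

path_delay_counts = [65, 130, 1024, 2050]  # example FPS counter values

for count in path_delay_counts:
    print(f"count {count:5d} -> {count * FPS_STEP_PS / 1000:.3f} ns")

# The per-stage delay added by a wire-stub load can be estimated the same
# way: a cumulative difference of 10 counts at stage 64 gives
# 10 * 17.86 / 64, i.e., approximately 2.8 ps per stage, matching FIG. 7.
print(10 * FPS_STEP_PS / 64)  # ~2.79 ps per stage
```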
This analysis indicates that the wire-load mechanism for changing delays provides very fine control over the obfuscation process. This strategy can be used alone or in combination with a second strategy referred to as synthesis-directed diversity. The synthesis-directed diversity model is analyzed. Synthesis-directed diversity refers to the different implementations that the FPGA (and ASIC) synthesis tools can generate from the same behavioral description. Synthesis-directed diversity can be implemented in two ways. The first is to make small (inconsequential) changes to the HDL behavioral description and then simply re-synthesize the implementation. The heuristic algorithms used within the synthesis tools are not able to find optimal solutions to, e.g., the place and route problem. Therefore, the implemented designs typically introduce larger differences in path delays from one implementation to the next (when compared with the wire-load strategy). The diversity of this approach is evaluated below. A second method is to synthesize using different versions of a standard cell library. Standard cell libraries are used in ASIC flows, e.g., Cadence RTL compiler, to convert a behavioral description of a design into a structural netlist. By changing the logic cells available within a set of standard cell libraries, the synthesis tool is forced to implement the design using different logic gates, which will have a subsequent impact on the path delays of each implementation (and the power trace behavior). This strategy can also be used in FPGA flows by using ASIC-generated netlists as the input description of a design instead of behavioral HDL. The DPR strategies according to the invention depend heavily on the adversary not being able to track which of the multiple implementations of the AES SBOX are used in the DPR operation. It may be difficult for the adversary to accomplish this for several reasons. First, the set of partial bitstreams used to implement the SBOX are the same size and are otherwise identical except for a subset of the configuration bits. Second, SPREAD is implemented as an HDL module and runs entirely within the PL side of the FPGA. The DPR power traces are analyzed by creating two instantiations of the AES SBOX, SBOX1and SBOX2, using the synthesis-directed diversity strategy described above. The power traces are measured when each is used as the source in a DPR operation. The size of the AES SBOX partial bitstreams is approximately 58 KB.FIG.8shows the power traces measured using a Tektronix 7254 digital oscilloscope. The region within the triggering pulse box corresponds to the time period associated with the DPR operation. In this investigation, DPR was implemented using a C program running under Linux on the Zynq 7020 SoC. The two versions of the SBOX are reconfigured into the same region on the FPGA. The power traces are averaged 100 times to reduce noise and noise-filtered using a software ‘smoothing’ routine to remove the high-frequency noise. Small distinguishing features are evident in the ‘smoothed’ waveforms, which are shown inFIG.8as thick lines through the averaged (but still noisy) oscilloscope waveforms. DPA experiments are performed to evaluate critical security properties. In particular, only one SBOX is included in the model tested, and two versions of the model are created using the synthesis-directed diversity technique. FIG.9illustrates a block diagram of a circuit according to an embodiment of the invention. 
A Xilinx Artix-7 XC7A35T FPGA is used as the hardware platform for the DPA experiments. All of the decoupling capacitors are removed from the Artix-7 (“Arty”) board as a mechanism to allow the higher-frequency components of the PL-side switching activity to be measured across a 20 Ohm resistor placed in series with the core power supply. Two active TAP3500 probes are placed across the 20 Ohm resistor and the scope is configured to measure differentially across the resistor. One thousand samples of the differential signal for each of the 1400 applied plaintexts are averaged. This is necessary to average out the large asynchronous noise transient produced by the voltage regulator installed on Arty. The same experiment was carried out on the two implementations of the SBOX, referred to as V1and V2. A differential power analysis process is applied to the 1400 power traces measured for the plaintexts in each experiment separately. The power traces measured from the V1experiment are shown inFIG.10. The majority of vertical dispersion in the waveforms is caused by small changes in room temperature. The Artix-7 chip has a large DC leakage current of approximately 27 mA that is extremely sensitive to the temperature. The change in leakage for variations of a couple degrees in room temperature is approximately 200-300 microamps, which then appears as a voltage across the 20 Ohm resistor as shown. Given this sensitivity, DPA experiments are best carried out inside a temperature chamber. The vertical drift reduces the level of correlations in the waveform differences created by the DPA procedure, discussed further below. The high-order bit of the SBOX is used to partition the 1400 power traces into two groups for each of the 256 key guesses and an average power trace from each group of approximately 700 power traces is computed. The difference power traces for the correct key guess for V1and V2are shown inFIG.11. The vertical dotted lines illustrate that the correlation peak has shifted to the right, from approximately 4.2 ns for V1to 5.1 ns for V2. This reflects the change in delays to bit7in each of the two implementations of SBOX. A small region of 200 ps around the peak values is integrated for each of the key guesses from 0 to 255 and plotted inFIG.12. The negative peak for the correct key guess of 3C is highlighted. The area is expected to be negative because the 1 group is subtracted from the 0 group, and the 1 group represents those plaintexts that cause bit7to transition to 1. A transition to 1 draws current from the power rail, creating a larger voltage drop than the 0 group. As shown, the correct key guess ranks 4th as the largest negative peak for V1and 1st for V2. As indicated above, the vertical drift in the raw power traces reduces the correlation. Despite this issue, the correlation associated with the correct key guess is still evident. The results shown inFIG.13include an analysis labeled ‘Both’ in which half the power traces from V1and V2are combined. This mixing of the power traces represents a simplified scenario where the two implementations of the SBOX are swapped in and out by SPREAD during encryption. As shown, the peaks in the bottom graph (“Both”) associated with the key guesses have now changed height. More importantly, the peak height associated with the correct key guess of 3C now ranks 9th in the list of largest negative peaks. It is contemplated that when the number of implementations is greater than 2, the reduction in correlation is likely to be even larger. 
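The difference-of-means procedure described above can be summarized compactly. The following sketch uses synthetic traces and a stand-in substitution table rather than measured data, so it only illustrates the partition-average-subtract structure of the attack (the real attack subtracts the 1 group from the 0 group, producing a negative peak; absolute values are used here for ranking):

```python
import numpy as np

# Minimal difference-of-means DPA sketch: for each key guess, partition
# the power traces by the predicted high-order SBOX output bit, average
# each group, and subtract. The traces and SBOX here are synthetic
# stand-ins, not measured data.

rng = np.random.default_rng(0)
SBOX = rng.permutation(256)          # stand-in for the AES SBOX table
true_key = 0x3C

plaintexts = rng.integers(0, 256, 1400)
bits = (SBOX[plaintexts ^ true_key] >> 7) & 1      # leakage of bit 7
traces = rng.normal(0, 1, (1400, 100))
traces[:, 42] += 0.5 * bits                        # inject a leak at sample 42

def dpa_peak(key_guess):
    predicted = (SBOX[plaintexts ^ key_guess] >> 7) & 1
    diff = traces[predicted == 0].mean(axis=0) - traces[predicted == 1].mean(axis=0)
    return np.abs(diff).max()

peaks = [dpa_peak(k) for k in range(256)]
print(hex(int(np.argmax(peaks))))  # expected to recover 0x3c
```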
These FPGA experiments evaluate key elements of the SPREAD technique. The analysis of delay is presented for an implementation diversity strategy in which wire stubs are added to existing wires. A second synthesis-directed implementation diversity strategy is evaluated using DPA experiments. The results demonstrate that correlations in the power traces are reduced. While the disclosure is susceptible to various modifications and alternative forms, specific exemplary embodiments of the invention have been shown by way of example in the drawings and have been described in detail. It should be understood, however, that there is no intent to limit the disclosure to the particular embodiments disclosed, but on the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the scope of the disclosure as defined by the appended claims.
11863305
DETAILED DESCRIPTION The exemplary embodiments will now be described more fully hereinafter with reference to the accompanying drawings. The exemplary embodiments may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. These embodiments are provided so that this disclosure will be thorough and complete and will fully convey the exemplary embodiments to those of ordinary skill in the art. Moreover, all statements herein reciting embodiments, as well as specific examples thereof, are intended to encompass both structural and functional equivalents thereof. Additionally, it is intended that such equivalents include both currently known equivalents as well as equivalents developed in the future (i.e., any elements developed that perform the same function, regardless of structure). Thus, for example, it will be appreciated by those of ordinary skill in the art that the diagrams, schematics, illustrations, and the like represent conceptual views or processes illustrating the exemplary embodiments. The functions of the various elements shown in the figures may be provided through the use of dedicated hardware as well as hardware capable of executing associated software. Those of ordinary skill in the art further understand that the exemplary hardware, software, processes, methods, and/or operating systems described herein are for illustrative purposes and, thus, are not intended to be limited to any particular named manufacturer. As used herein, the singular forms “a,” “an,” and “the” are intended to include the plural forms as well, unless expressly stated otherwise. It will be further understood that the terms “includes,” “comprises,” “including,” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It will be understood that when an element is referred to as being “connected” or “coupled” to another element, it can be directly connected or coupled to the other element or intervening elements may be present. Furthermore, “connected” or “coupled” as used herein may include wirelessly connected or coupled. As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items. It will also be understood that, although the terms first, second, etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first device could be termed a second device, and, similarly, a second device could be termed a first device without departing from the teachings of the disclosure. FIGS.1-19are simplified illustrations of a blockchain environment20, according to exemplary embodiments. A miner system22receives one or more inputs24via a communications network26from a blockchain network server28. While the inputs24may be any electronic data30, in the blockchain environment20, the inputs24are blockchain transactions32(such as financial transactions, inventory/shipping data, and/or healthcare medical data). The actual form or content represented by the electronic data30and the blockchain transactions32may be unimportant. 
The blockchain network server28sends, distributes, or broadcasts the inputs24to some or all of the authorized mining participants (such as the miner system22). The blockchain network server28may also specify a proof-of-work (“PoW”) target scheme34, which may accompany the inputs24or be separately sent from the inputs24. The miner system22may mine the inputs24. When the miner system22receives the inputs24, the miner system22has a hardware processor (such as CPU36) and a solid-state memory device38that collects the inputs24(such as the blockchain transactions32) into a block40of data. The miner system22then finds a difficult proof-of-work (“PoW”) result42based on the block40of data. The miner system22performs, executes, or calls/requests a proof-of-work (“PoW”) mechanism44. The proof-of-work mechanism44is a computer program, instruction(s), or code that instructs or causes the miner system22to call, request, and/or execute an encryption algorithm46. The proof-of-work mechanism44may instruct or cause the miner system22to call, request, and/or execute a difficulty algorithm48that generates or creates a difficulty50. The proof-of-work mechanism44may also instruct or cause the miner system22to call, request, and/or execute a proof-of-work (“PoW”) algorithm52. The proof-of-work mechanism44may thus be one or more software applications or programming schemes that separate the encryption algorithm46from the difficulty algorithm48and/or from the proof-of-work algorithm52. Because the encryption algorithm46may be separately executed/called from the difficulty algorithm48and/or from the proof-of-work algorithm52, encryption of the electronic data30(representing the inputs24) is separately performed from the difficulty50of solving the proof-of-work. In other words, any encryption algorithm46may be used, along with any difficulty algorithm48, and/or along with any proof-of-work algorithm52. FIG.2further illustrates the proof-of-work mechanism44. While the encryption algorithm46may utilize any encryption scheme, process, and/or function, many readers may be familiar with a cryptographic hashing algorithm54(such as the SHA-256 used by BITCOIN®). The cryptographic hashing algorithm54may thus generate an output56(sometimes called a digest58) by implementing or executing the cryptographic hashing algorithm54using the inputs24(such as the blockchain transactions32). So, whatever the arbitrary bit values of the inputs24, and whatever the arbitrary bit length of the inputs24, the cryptographic hashing algorithm54may generate the output56as one or more hash values60, perhaps having a fixed length (or n-bit). The miner system22may thus receive the inputs24from the blockchain network server28, call and/or execute the encryption algorithm46(such as the cryptographic hashing algorithm54), and generate the hash value(s)60. AsFIG.3illustrates, the miner system22may separately perform or call the proof-of-work algorithm52. After the encryption algorithm46creates the output(s)56, the miner system22may read/retrieve the output(s)56and send the output(s)56to the proof-of-work algorithm52. The miner system22may thus generate the proof-of-work result42by calling and/or by executing the proof-of-work algorithm52using the output(s)56. The miner system22, for example, may send the hash value(s)60(generated by the cryptographic hashing algorithm54) to the proof-of-work algorithm52, and the proof-of-work algorithm52generates the proof-of-work result42using the hash value(s)60. 
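The decoupling just described amounts to composing two independently chosen functions. A minimal sketch, in which SHA-256 stands in for the encryption algorithm46and a placeholder conversion stands in for the proof-of-work algorithm52 (both choices are assumptions for illustration; any pairing could be substituted):

```python
import hashlib

def encryption_algorithm(block_bytes: bytes) -> bytes:
    # encryption algorithm 46: SHA-256 is only one interchangeable choice
    return hashlib.sha256(block_bytes).digest()

def proof_of_work_algorithm(digest: bytes) -> int:
    # proof-of-work algorithm 52: a placeholder result derived from the
    # digest; any scheme could be substituted without touching the hashing
    return int.from_bytes(digest, "big")

block = b"tx1|tx2|tx3"  # stand-in for the collected blockchain transactions
digest = encryption_algorithm(block)
print(proof_of_work_algorithm(digest) % 1000)  # illustrative result
```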
The proof-of-work algorithm52may also compare the proof-of-work result42to the proof-of-work (“PoW”) target scheme34. The proof-of-work algorithm52may, in general, have to satisfy or solve a mathematical puzzle62, perhaps defined or specified by the proof-of-work target scheme34. The proof-of-work target scheme34may also specify, or relate to, the difficulty50of solving the mathematical puzzle62. That is, the more stringent or precise the proof-of-work target scheme34(e.g., a minimum/maximum value of the hash value60), the more difficult the mathematical puzzle62is to solve. In other words, the difficulty50is a measure of how difficult it is to mine the block40of data, given the solution requirements of the proof-of-work target scheme34. The miner system22may own the block40of data. If the miner system22is the first to satisfy the proof-of-work target scheme34(e.g., the proof-of-work result42satisfies the mathematical puzzle62), the miner system22may timestamp the block40of data and broadcast the block40of data, the timestamp, the proof-of-work result42, and/or the mathematical puzzle62to other miners in the blockchain environment20. The miner system22, for example, may broadcast a hash value representing the block40of data, and the other miners begin working on a next block in the blockchain64. Today's BITCOIN® difficulty is increasing. On or about Jun. 16, 2020, BITCOIN's network adjusted its difficulty level (the measure of how hard it is for miners to compete for block rewards on the blockchain) to 15.78 trillion, which was nearly a 15% increase in the difficulty50. As the difficulty50increases, older, less capable, and less power-efficient miners are unable to compete. As a result, today's BITCOIN® miners must have the latest, fastest hardware (such as an ASIC) to profitably solve the mathematical puzzle62according to the proof-of-work target scheme34. Indeed, Satoshi envisioned that increasing hardware speed would allow miners to solve the proof-of-work more easily; Satoshi thus explained that the difficulty would be a moving target to slow down generation of the blocks40of data. Conventional mining schemes are integrated. When a conventional blockchain miner attempts to solve the mathematical puzzle62, the conventional blockchain miner executes a conventional scheme that integrates hashing, difficulty, and proof-of-work. That is, conventional proof-of-work schemes require the miners to execute a combined software offering or pre-set combination of encryption and proof. These conventional proof-of-work schemes, in other words, integrate a predetermined encryption/hashing algorithm into or with a predetermined difficulty and a predetermined proof-of-work algorithm. These conventional proof-of-work schemes thus force the miners to execute a predetermined or predefined scheme that functionally marries or bundles encryption, difficulty, and proof-of-work. The conventional schemes specify a difficulty mechanism. BITCOIN's difficulty mechanism, for example, is a measure of how difficult it is to mine a BITCOIN® block of data. BITCOIN® miners are required to find a hash value below a given target (e.g., SHA256(nonce+input) has n leading zeros, where n determines the mining difficulty). The difficulty adjustment is directly related to the total estimated mining power (sometimes estimated in Total Hash Rate per second). BITCOIN's difficulty mechanism is adjusted to basically ensure that ten (10) minutes of computation are required before a miner may solve the mathematical puzzle62. 
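The conventional, integrated check described above (SHA256(nonce+input) with n leading zeros) can be stated compactly. The sketch below is illustrative only; a small n is used so the loop terminates quickly:

```python
import hashlib

# Sketch of the conventional, integrated BITCOIN-style check: SHA256 of
# (nonce + input) must have n leading zero bits, where n sets the mining
# difficulty. Values here are deliberately small for illustration.

def leading_zero_bits(digest: bytes) -> int:
    value = int.from_bytes(digest, "big")
    return 256 - value.bit_length() if value else 256

def conventional_pow(data: bytes, n: int) -> int:
    nonce = 0
    while True:
        digest = hashlib.sha256(nonce.to_bytes(8, "big") + data).digest()
        if leading_zero_bits(digest) >= n:
            return nonce
        nonce += 1

print(conventional_pow(b"block header", 12))  # small n keeps this fast
```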
The conventional schemes force the use of specialized hardware. When blockchain mining first appeared, home/desktop computers and laptops (and their conventional processors or CPUs) were adequate. However, as blockchain mining became more difficult and competitive, miners gained an advantage by repurposing a dedicated graphics processing unit (or GPU) for blockchain mining. As an example, the RADEON® HD 5970 GPU can execute about 3,200 32-bit instructions per clock, which is about 800 times more than a CPU that executes only four (4) 32-bit instructions per clock. This increased per-clock throughput allowed GPUs to perform far more calculations and made GPUs more desirable for cryptocurrency/blockchain mining. Later, field programmable gate arrays (FPGAs) were also re-modeled for cryptocurrency/blockchain mining. FPGAs were able to compute the mathematical operations required to mine the block40of data twice as fast as the GPU. However, FPGA devices were more labor-intensive to build and still required customized configurations (both software programming and hardware). Today's BITCOIN® miners have pushed the hardware requirements even further by using a specialized application-specific integrated circuit (ASIC) that is exclusively designed for blockchain mining. These ASICs may be 100 billion times faster than mere CPUs. These ASICs have made BITCOIN® mining undemocratic and only possible by a relatively few, well-capitalized entities running mining farms. Today's BITCOIN® miners thus consume great quantities of electrical power and pose concerns for the electrical grid. Today's conventional mining hardware has further specialized. Some ASICs have also been further designed for particular blockchains to achieve additional optimizations. For example, a hardware implementation of the SHA-256 hash is much faster than a version coded in software. Today, nearly all BITCOIN® mining is performed using hardware ASICs. Specialized hardware has even been developed for particular hashing functions. The RAVENCOIN® scheme, as an example, uses several different hashing algorithms, and a particular hashing algorithm is picked for one block based on a hash of a previous block (the RAVENCOIN® scheme resembles a random selection of the hashing algorithm). However, because fifteen (15) of the sixteen (16) algorithms sit on the sidelines unused at any given time, the RAVENCOIN® scheme makes it very expensive for a miner to buy sixteen (16) different hardware rigs in order to mine according to the RAVENCOIN® scheme. Even if a miner decides to only mine the blocks that match a particular hardware requirement, the hardware still sits idle 14-15 cycles on average. Some blockchains may also alter or modify the mining scheme. For example, the MONERO® mining scheme uses a specialized hashing function that implements a random change. That is, the MONERO® mining scheme uses a hash algorithm that unpredictably rewrites itself. The MONERO® mining network introduced a RandomX mining algorithm that was designed to deter ASICs and to improve the efficiency of conventional CPUs. MONERO's RandomX mining algorithm uses random code execution and memory-intensive techniques, rendering ASICs too expensive and ineffective to develop. The conventional mining schemes thus have many disadvantages. Conventional mining schemes have become so specialized and so expensive that only a small number of large miners have the resources to compete. 
Blockchain mining, in other words, has become centralized and undemocratic. Some conventional schemes try to find new hashing algorithms, new proof-of-work schemes, or modify existing schemes to de-centralize and to democratize mining participants. Some conventional mining schemes (such as ETHERIUM®) require very large memory spaces in bytes, which disadvantages specialized hardware. LITECOIN® also disadvantages hardware by copying large byte amounts of data. AsFIGS.4-6illustrate, though, exemplary embodiments may mix-and-match the encryption algorithm46, the difficulty algorithm48, and the proof-of-work algorithm52. The inventor has observed that there is no mining law or scheme that requires a preset or predefined difficulty scheme (such as BITCOIN's counting zeroes on the hash to decide its difficulty). Instead, exemplary embodiments may use any encryption algorithm46that a cryptographic coin, network, or scheme desires or specifies. Exemplary embodiments may use any difficulty algorithm48that the cryptographic coin, network, or scheme desires or specifies. Exemplary embodiments may use any proof-of-work algorithm52that the cryptographic coin, network, or scheme desires or specifies.FIG.4illustrates the encryption algorithm46, the difficulty algorithm48, and the proof-of-work algorithm52as separate software mechanisms.FIG.5illustrates an alternative software mechanism where the difficulty algorithm48and the proof-of-work algorithm52may be functionally intertwined, but the encryption algorithm46is a separate, stand-alone program, file, or service.FIG.6illustrates the inputs and outputs for the encryption algorithm46, the difficulty algorithm48, and the proof-of-work algorithm52. FIG.7illustrates agnostic hashing. Exemplary embodiments may use any encryption algorithm46that a cryptographic coin, blockchain network, or scheme desires or specifies. Because most blockchain mining schemes use hashing,FIG.7illustrates the cryptographic hashing algorithm54. The proof-of-work (“PoW”) target scheme34may thus use any cryptographic hashing algorithm54, as exemplary embodiments are agnostic to hashing/encryption. The encryption algorithm46may be any cryptographic hashing algorithm54(e.g., the SHA-2 family (SHA-256 and SHA-512) and/or the SHA-3 family). The miner system22need only request, call, and/or execute the particular cryptographic hashing algorithm54specified by the proof-of-work target scheme34.FIG.7thus illustrates an electronic database70of encryption algorithms accessible to the miner system22. While the database70of encryption algorithms is illustrated as being locally stored in the memory device38of the miner system22, the database70of encryption algorithms may be remotely stored and accessed/queried at any networked location. Even though the database70of encryption algorithms may have any logical structure, a relational database is perhaps easiest to understand.FIG.7thus illustrates the database70of encryption algorithms as an electronic table72that maps, converts, or translates different proof-of-work target schemes34to their corresponding or associated encryption algorithm46(such as the particular cryptographic hashing algorithm54). The miner system22may thus identify the encryption algorithm46by querying the electronic database70of encryption algorithms for the proof-of-work target scheme34specified for use by the blockchain environment20. 
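The electronic database70can be as simple as a lookup from a scheme identifier to a hashing function. A minimal sketch, with hypothetical scheme names (any cryptographic hashing algorithm could be registered):

```python
import hashlib

# Sketch of the electronic database 70 of encryption algorithms: a simple
# mapping from a proof-of-work target scheme identifier to the hashing
# function it specifies. The scheme names are hypothetical.

ENCRYPTION_ALGORITHMS = {
    "scheme-sha256": lambda data: hashlib.sha256(data).digest(),
    "scheme-sha512": lambda data: hashlib.sha512(data).digest(),
    "scheme-sha3":   lambda data: hashlib.sha3_256(data).digest(),
}

def hash_for_scheme(scheme_id: str, inputs: bytes) -> bytes:
    # query the database for the scheme in use, then execute its algorithm
    return ENCRYPTION_ALGORITHMS[scheme_id](inputs)

print(hash_for_scheme("scheme-sha3", b"transactions").hex())
```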
So, once the particular cryptographic hashing algorithm54is identified, the miner system22may acquire or retrieve any inputs24(such as the blockchain transactions32) and execute the cryptographic hashing algorithm54specified by the proof-of-work target scheme34. The miner system22may optionally send the inputs24via the Internet or other network (e.g., the communications network26illustrated inFIGS.1-3) to a remote destination for service execution (as later paragraphs will explain). The encryption algorithm46(e.g., the cryptographic hashing algorithm54specified by the proof-of-work target scheme34) may thus generate the output56/digest58represented as the hash value(s)60. FIG.8illustrates agnostic difficulty. Exemplary embodiments may use any difficulty algorithm48that a cryptographic coin, blockchain network, or scheme desires or specifies. For example, when or even after the encryption algorithm46(e.g., the cryptographic hashing algorithm54) generates the output56(such as the hash value(s)60), the miner system22may request, call, and/or execute the particular difficulty algorithm48selected by, or specified by, the proof-of-work target scheme34and/or the blockchain environment20. The proof-of-work target scheme34may thus use any difficulty algorithm48, as the miner system22is agnostic to difficulty.FIG.8, for example, illustrates an electronic database74of difficulty algorithms that is accessible to the miner system22. While the database74of difficulty algorithms is illustrated as being locally stored in the memory device38of the miner system22, the database74of difficulty algorithms may be remotely stored and accessed/queried at any networked location. Even though the database74of difficulty algorithms may have any logical structure, a relational database is again perhaps easiest to understand.FIG.8thus illustrates the database74of difficulty algorithms as an electronic table76that maps, converts, or translates different proof-of-work target schemes34to their corresponding or associated difficulty algorithm48. The miner system22may thus identify the difficulty algorithm48by querying the electronic database74of difficulty algorithms. So, once the particular difficulty algorithm48is identified, the miner system22may acquire or retrieve any inputs that are required by the difficulty algorithm48(such as the output hash value(s)60generated by the cryptographic hashing algorithm54). The miner system22may execute the difficulty algorithm48specified by the proof-of-work target scheme34. The miner system22may optionally send the hash value(s)60via the Internet or other network (e.g., the communications network26illustrated inFIGS.1-3) to a remote destination for service execution (as later paragraphs will explain). The difficulty algorithm48creates or generates the difficulty50based on the hash value(s)60. 
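With difficulty decoupled from hashing, the difficulty algorithm48may be any stand-alone function of the hash value. The sketch below uses leading zero bits as one possible graded measure; the embodiment does not prescribe a particular measure:

```python
# Sketch of a stand-alone difficulty algorithm 48: it takes a hash value
# produced elsewhere and returns a graded difficulty measure that can be
# compared against other hashes or a target difficulty. Counting leading
# zero bits is only one possible (assumed) choice of measure.

def difficulty_algorithm(hash_value: bytes) -> int:
    value = int.from_bytes(hash_value, "big")
    return 256 - value.bit_length() if value else 256

def more_difficult(hash_a: bytes, hash_b: bytes) -> bool:
    # rank two hashes by the decoupled difficulty measure
    return difficulty_algorithm(hash_a) > difficulty_algorithm(hash_b)

example = bytes.fromhex("0000f" + "0" * 59)  # 32-byte hash, 16 leading zero bits
print(difficulty_algorithm(example))          # -> 16
```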
While the database78of proof-of-work algorithms is illustrated as being locally stored in the memory device38of the miner system22, the database78of proof-of-work algorithms may be remotely stored and accessed/queried at any networked location. Even though the database78of proof-of-work algorithms may have any logical structure, a relational database is again perhaps easiest to understand.FIG.9thus illustrates the database78of proof-of-work algorithms as an electronic table80that maps, converts, or translates different proof-of-work target schemes34to their corresponding proof-of-work algorithm52. The miner system22may thus identify the proof-of-work algorithm52by querying the electronic database78of proof-of-work algorithms. After the hash value(s)60are generated, and perhaps after the difficulty50is generated, the miner system22may execute the proof-of-work algorithm52(specified by the proof-of-work target scheme34) using the hash value(s)60and/or the difficulty50as inputs. The miner system22may optionally send the hash value(s)60and/or the difficulty50via the Internet or other network to a remote destination for service execution (as later paragraphs will explain). The proof-of-work algorithm52generates the proof-of-work result42using the hash value(s)60and/or the difficulty50. The proof-of-work algorithm52may also compare the proof-of-work result42to the proof-of-work (“PoW”) target scheme34to ensure or to prove a solution to the mathematical puzzle62. Exemplary embodiments may thus use any encryption algorithm46, any difficulty algorithm48, and/or any proof-of-work algorithm52. Exemplary embodiments may implement any cryptographic security. Instead of merely counting zeroes (as specified by BITCOIN®), exemplary embodiments may run the resulting hash value60through the difficulty algorithm48to calculate the difficulty50in order to determine whether it's more or less difficult than other hashes. AsFIG.10illustrates, exemplary embodiments may use any PoW target scheme34. There are many different target schemes, some of which use or specify random number/nonce values, addresses, starting points, and other security schemes. The proof-of-work algorithm52, for example, may have to compare the hash value(s)60to a target hash value82. The target hash value82may be any minimum or maximum hash value that must be satisfied. If the hash value60is less than or perhaps equal to the target hash value82, then the proof-of-work algorithm52has perhaps solved the mathematical puzzle62. However, if the hash value60is greater than the target hash value82, then perhaps the proof-of-work algorithm52has failed to solve the mathematical puzzle62. Likewise, the hash value60may need to be equal to or greater than the target hash value82to be satisfactory. Regardless, should the hash value60fail to satisfy the target hash value82, exemplary embodiments may modify any data or input (e.g., the electronic data30, a random number/nonce value, address, starting points, etc.) according to the proof-of-work target scheme34, again call or request the cryptographic hashing algorithm54to generate the corresponding hash value(s)60, and compare the hash value(s)60to the target hash value82. Exemplary embodiments may repeatedly modify the electronic data30and/or any other parameters until the corresponding hash value(s)60satisfy the target hash value82. Exemplary embodiments may also use any difficulty scheme. The inventor envisions that there will be many different difficulty schemes. 
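Before turning to difficulty schemes, the modify-and-recompute loop described above can be sketched as follows, assuming the "less than or equal to the target hash value" convention (the opposite convention mentioned in the passage would simply flip the comparison):

```python
import hashlib

# Sketch of the retry loop: modify an input parameter (here, a nonce) and
# recompute the hash until it satisfies the target hash value 82. The
# target below is hypothetical; it accepts any digest with at least 16
# leading zero bits, so the loop terminates quickly.

TARGET = (1 << 240) - 1  # hypothetical target hash value 82

def mine(data: bytes, target: int) -> tuple[int, bytes]:
    nonce = 0
    while True:
        digest = hashlib.sha256(data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") <= target:
            return nonce, digest
        nonce += 1

nonce, digest = mine(b"electronic data 30", TARGET)
print(nonce, digest.hex())
```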
The difficulty algorithm48, for example, may have to compare the difficulty50to a target difficulty84. The target difficulty84has a bit or numeric value that represents a satisfactory difficulty of the corresponding cryptographic hashing algorithm54and/or the hash value60. For example, suppose the target difficulty84is a minimum value that represents a minimum permissible difficulty associated with the corresponding cryptographic hashing algorithm54. If the difficulty50is less than or perhaps equal to the target difficulty84, then perhaps the corresponding cryptographic hashing algorithm54and/or the hash value60is adequately difficult. However, if the difficulty50is greater than the target difficulty84, then perhaps the corresponding cryptographic hashing algorithm54and/or the hash value60is too difficult. Likewise, the difficulty50may need to be equal to or greater than the target difficulty84to be adequately difficult. Regardless, should the difficulty50fail to satisfy the target difficulty84, exemplary embodiments may modify any data or input (e.g., the electronic data30, a random number/nonce value, address, starting points, etc.) and recompute the corresponding hash value(s)60. Moreover, exemplary embodiments may additionally or alternatively change the cryptographic hashing algorithm54and/or the difficulty algorithm48and recompute. Exemplary embodiments may thus functionally separate hashing, difficulty, and proof-of-work. The conventional proof-of-work target scheme34functionally combines or performs both hashing and difficulty. That is, the conventional proof-of-work target scheme34integrates or combines the difficulty in the hash, thus greatly complicating the hash determination. Exemplary embodiments, instead, may separate the hashing algorithm54from the difficulty algorithm48. Exemplary embodiments put the difficulty50in a separate measurement and remove the difficulty50from the hashing algorithm54. The hashing algorithm54is not complicated by also having to integrate/calculate the difficulty algorithm48. The difficulty algorithm48may thus be a separate, stand-alone function or service that determines or calculates which hash is more difficult. The hashing algorithm54is much simpler to code and much faster to execute, as the hashing algorithm54requires less programming code and less storage space/usage in bytes. The hashing algorithm54need not be complicated to deter ASIC mining. Exemplary embodiments need not rely on the hashing algorithm54to also determine the difficulty50and/or the proof-of-work. The difficulty algorithm48is, instead, a separate functional mechanism, perhaps performed or executed by a service provider. Exemplary embodiments thus need not use an electrical power-hungry mechanism that is inherent in the conventional proof-of-work scheme. FIG.11illustrates a randomized database table90. The difficulty algorithm48and/or the proof-of-work algorithm52may use or consult the database table90when conducting any proof-of-work (e.g.,34and/or44). While exemplary embodiments may use any encryption scheme, most blockchain mining uses some form of hashing.FIG.11thus illustrates the proof-of-work target scheme34that utilizes the separate cryptographic hashing algorithm54, but the difficulty algorithm48and/or the proof-of-work algorithm52implement a further randomization of the resulting hash value(s)60.
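As an illustrative sketch of the target difficulty84comparison described above, the difficulty measurement may be a stand-alone function entirely separate from the hashing function; the particular metric below (larger values for numerically smaller hashes) is an assumption for illustration only:

    def measure_difficulty(hash_value):
        """Stand-alone difficulty measure: grows as the hash value shrinks numerically."""
        as_int = int.from_bytes(hash_value, "big")
        max_int = 2 ** (8 * len(hash_value)) - 1
        return max_int / (as_int + 1)

    def satisfies_target_difficulty(hash_value, target_difficulty):
        """Compare the measured difficulty to the target difficulty."""
        return measure_difficulty(hash_value) >= target_difficulty

    ok = satisfies_target_difficulty(bytes(32), 1.0)  # an all-zero hash is maximally difficult

Because the hashing function never sees this measurement, either piece may be replaced without touching the other.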
The proof-of-work target scheme34or mechanism44may generate, store, and/or use the database table90when performing any proof-of-work. Exemplary embodiments may implement a bit shuffle operation92on the hash value(s)60. Exemplary embodiments may use entries in the database table90to perform the bit shuffle operation92(as later paragraphs will explain). Each entry94in the database table90may contain a random selection of bits/bytes96. The difficulty algorithm48and/or the proof-of-work algorithm52may select any bit values representing the hash value(s)60and swap any one or more of the bit values with any one or more entries94specified by the database table90. The difficulty algorithm48and/or the proof-of-work algorithm52may read or select a bit portion of the bit values representing the hash value(s)60and exchange or replace the bit portion with an entry94contained in, or referenced by, the database table90. Each entry94in the database table90represents or is associated with random bits or bytes. Exemplary embodiments may thus randomly shuffle the hash value(s)60generated by the cryptographic hashing algorithm54. Exemplary embodiments randomize byte or memory block access. FIG.12illustrates RAM binding. Exemplary embodiments may discourage or deter the use of specialized hardware (such as GPUs and ASICs) in blockchain mining. The proof-of-work target scheme34, for example, may take advantage of, or target, memory size restrictions and cache latency of any on-board processor cache memory100. As the reader may understand, any hardware processing element (whether a GPU, an ASIC, or the CPU36) may have integrated/embedded L1, L2, and L3 SRAM/DRAM cache memory. The processor cache memory100is generally much smaller than a system/main memory (such as the memory device38), so the hardware processing element may store frequently-needed data and instructions. Because the processor cache memory100is physically much closer to the processing core, any hardware processing element is able to quickly fetch or hit needed information. If the processor cache memory100does not store the needed information, then a cache miss has occurred and the hardware processing element must request and write blocks of data via a much-slower bus from the system/main memory38. A cache miss implies a cache latency in time and/or cycles to fetch the needed information from the system/main memory38. Any hardware processing element (again, whether a GPU, an ASIC, or the CPU36) may sit idle, or stall, while awaiting fetches from the system/main memory38. Exemplary embodiments may thus force latency, cache misses, and stalls. Exemplary embodiments may target cache latency and processor stalls by generating, storing, and/or using the database table90when determining the hash value(s)60(as later paragraphs will explain). The database table90, however, may be sized to overload the processor cache memory100. The database table90, in other words, may have a table byte size102(in bits/bytes) that exceeds a storage capacity or cache byte size104of the processor cache memory100. The database table90, for example, may exceed one gigabyte (1 GB). Today's L1, L2, and L3 processor cache memory is typically hundreds of megabits in size. Because the database table90may exceed one gigabyte (1 GB), any caching operation will miss or invalidate. That is, the L1, L2, and L3 processor cache memory100lacks the storage capacity or byte size104to store the entire database table90.
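The cache-overloading behavior described above may be sketched as a chain of reads at hash-dependent offsets, so each round must wait on a memory fetch before the next offset is even known; the small table below is an assumption so the example runs, whereas the table byte size102would exceed the cache byte size104(e.g., over 1 GB) in practice:

    import hashlib

    def memory_hard_walk(seed, table, rounds=64):
        """Each round's read offset depends on the prior digest, serializing memory fetches."""
        digest = hashlib.sha256(seed).digest()
        for _ in range(rounds):
            offset = int.from_bytes(digest[:8], "big") % (len(table) - 32)
            digest = hashlib.sha256(digest + table[offset:offset + 32]).digest()
        return digest

    table = bytes(range(256)) * 4096  # ~1 MB here; sized past the processor cache in practice
    result = memory_hard_walk(b"block header bytes", table)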
Perhaps only a portion (or perhaps none) of the database table90may be stored in the processor cache memory100. Exemplary embodiments thus force some, most, or even all of the database table90to be written or stored to the main/host memory device38(or accessed/retrieved from a remote source, as later paragraphs will explain). Because any hardware processing element (again, whether a GPU, an ASIC, or the CPU36) is unable to cache the entire database table90, exemplary embodiments force a cache miss and further force the hardware processing element to repeatedly use the processor cache memory100to fetch and load a portion of the database table90. The main/system memory38thus provides perhaps a particular portion of the database table90via the bus to the processor cache memory100, and the processor cache memory100then provides that particular portion of the database table90to the hardware processing element. The hardware processing element may then purge or delete that particular portion of the database table90from the processor cache memory100and request/fetch/load another portion of the database table90. Because exemplary embodiments may force repeated cache misses, the hardware processing element may continuously repeat this cycle for loading/retrieving most or all portions of the database table90. The hardware processing element, in other words, repeatedly queries the processor cache memory100and/or the main/host memory device38and awaits data retrieval. The hardware processing element must, therefore, sit, perhaps mostly idle, while the processor cache memory100and/or the main/host memory device38processes, retrieves, and sends different segments/portions/blocks of the database table90. The processor cache memory100and/or the main/host memory device38have the cache latency (perhaps measured in clock cycles, data transfer rate, or time) that limits blockchain computations. A faster processor/GPU/ASIC, in other words, will not improve memory access times/speeds, so any computational speed/performance is limited by the latency of repeatedly accessing the processor cache memory100and/or the main/host memory device38. The database table90thus deters GPU/ASIC usage when processing the blockchain transactions32. The database table90may thus be purposefully designed to be non-cacheable by intensively using the processor cache memory100and/or the main/host memory device38as an ASIC-deterrence mechanism. Byte or memory block access may be randomized. Whatever the hashing algorithm54, exemplary embodiments may implement the bit shuffle operation92on the hash value(s)60. Exemplary embodiments may use the entries94in the database table90to perform the bit shuffle operation92(as later paragraphs will further explain). The proof-of-work target scheme34may use bit values representing the hash value(s)60, but the proof-of-work target scheme34may swap any one or more of the bit values with any one or more entries94specified by the database table90. Each entry94in the database table90may contain a random selection of bits/bytes. The proof-of-work target scheme34may cause the proof-of-work algorithm52to read or to select a bit portion of the bit values representing the hash value(s)60and exchange or replace the bit portion with an entry94contained in, or referenced by, the database table90. Each entry94in the database table90represents or is associated with random bits or bytes.
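A minimal sketch of the bit shuffle operation92follows; the table contents and the rule for selecting an entry94are illustrative assumptions:

    import hashlib

    # Illustrative stand-in for the database table90; real entries would be random bytes.
    TABLE = [bytes((i * 7 + j) % 256 for j in range(32)) for i in range(256)]

    def bit_shuffle(hash_value, table):
        """Replace each byte of the hash with a byte drawn from the table entry it selects."""
        out = bytearray(hash_value)
        for i, b in enumerate(out):
            entry = table[b % len(table)]   # the hash byte selects an entry94
            out[i] = entry[i % len(entry)]  # swap in a byte from that entry
        return bytes(out)

    shuffled = bit_shuffle(hashlib.sha256(b"transactions").digest(), TABLE)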
The proof-of-work target scheme34may thus randomly shuffle the hash value(s)60generated by the cryptographic hashing algorithm54. Exemplary embodiments may discourage or deter specialized hardware in blockchain mining. The miner system22must have access to the database table90in order to execute the bit shuffle operation92, difficulty algorithm48, and/or the proof-of-work algorithm52. Because any processing component (e.g., ASIC, GPU, or the CPU36) is unable to cache the entire database table90, exemplary embodiments force the processing component to query the processor cache memory100and/or the main/host memory device38and to await data retrieval. The hardware processing component must therefore sit, perhaps mostly idle, while the processor cache memory100and/or the main/host memory device38processes, retrieves, and sends different segments/portions/blocks of the database table90. A faster GPU/ASIC will thus not improve memory access times/speeds. Exemplary embodiments thus force miners to choose the CPU36, as a faster GPU/ASIC provides no performance/speed gain. Moreover, because a faster GPU/ASIC is ineffective, the extra capital expense of a faster GPU/ASIC offers little or no benefit and cannot be justified. Exemplary embodiments thus bind miners to the CPU36for blockchain processing/mining. Exemplary embodiments thus include RAM hashing. The electronic database table90may have a random number of columns and/or a random number of rows. The electronic database table90may have a random number of database entries94. Moreover, each columnar/row database entry94may also have a random sequence or selection of bits/bytes (1's and 0's). So, whatever the hash values60generated by the hashing algorithm54, the separate difficulty algorithm48and/or proof-of-work algorithm52may use the electronic database table90to further randomize the hash values60for additional cryptographic security. Indeed, because at most a portion of the electronic database table90may be stored in the processor cache memory100, exemplary embodiments effectively confine hashing operations to the main/host memory device38(such as a subsystem RAM). Regardless of what device or service provider executes the hashing algorithm54, the electronic database table90, which is mostly or entirely stored in the main/host memory device38, provides the randomized inputs to the separate difficulty algorithm48and/or proof-of-work algorithm52. Operationally and functionally, then, exemplary embodiments divorce or functionally separate any hardware processing element from the hashing operation. Simply put, no matter what the performance/speed/capability of the ASIC, GPU, or the CPU36, the database table90may be randomly sized to always exceed the storage capacity or cache byte size104of the processor cache memory100. Hashing operations are thus reliant on cache latency, cache misses, and processor stalls when using the database table90. The hashing operations are thus largely confined to, and performed by, the off-board or off-processor main/host memory device38(such as a subsystem RAM). Because the main/host memory device38performs most or all of the cryptographic security, the hardware processing component (ASIC, GPU, or the CPU36) may play little or no role in the hashing operations (perhaps only performing database lookup queries). Again, a better/faster ASIC or GPU provides little to no advantage in the hashing operations.
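The randomly dimensioned table described above may be generated as sketched below; the row-count bounds and entry size are illustrative assumptions:

    import secrets

    def build_random_table(min_rows=1024, max_rows=4096, entry_bytes=64):
        """Build a table with a random number of entries, each holding random bytes."""
        rows = min_rows + secrets.randbelow(max_rows - min_rows + 1)
        return [secrets.token_bytes(entry_bytes) for _ in range(rows)]

    table = build_random_table()  # each entry94holds a random selection of bits/bytes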
Moreover, the main/host memory device38consumes much less electrical power, thus further providing reduced energy costs that deter/resist ASIC/GPU usage. Exemplary embodiments may also add cryptographic security. Exemplary embodiments may force the miner/network to possess, or have authorized access to, the database table90. In simple words, the proof-of-work target scheme34swaps random bytes in the hash value60with other random bytes specified by the database table90. Any party that provides or determines a proof-of-work must possess (or have access to) the database table90. If the difficulty algorithm48and/or the proof-of-work algorithm52lacks authorized access to the database table90, then the difficulty algorithm48and/or the proof-of-work algorithm52cannot query the database table90nor perform database lookup operations. Difficulty and/or proof-of-work will fail without having access to the database table90. Exemplary embodiments may also separately specify the difficulty algorithm48. The proof-of-work target scheme34may cause the miner system22to apply the bit shuffle operation92to the hash value60. The proof-of-work target scheme34may also specify the difficulty algorithm48and the target difficulty84, perhaps having a high number or value. Because these byte accesses to the processor cache memory100are random and over a gigabyte of the memory space, the byte accesses exceed the retrieval and/or byte size storage capabilities of the processor cache memory100. The proof-of-work target scheme34thus forces the miner system22to wait on the slower main/host memory device38(rather than waiting on the speed of the hardware processing component). A faster/better hardware processing element (such as an ASIC), in other words, does not alleviate the bottleneck of accessing the main/host memory device38. Moreover, because exemplary embodiments may heavily rely on the main/host memory device38(rather than the hardware processing component) to do proof of work, the miner system22consumes significantly less electrical power (supplied by a power supply110). Because the proof-of-work algorithm52and the difficulty algorithm48may be separate from the cryptographic hashing algorithm54, exemplary embodiments utilize the security of a well-tested hashing function, but exemplary embodiments also require the proof-of-work scheme to use the main/host memory device38, which makes it unreasonable to build ASICs. Exemplary embodiments may thus force usage of a particular physical memory. Exemplary embodiments, for example, may overload the processor cache memory100by gorging the byte size of the database table90with additional database entries. Even as L1, L2, and L3 processor cache memory100increases in the storage capacity or byte size104, exemplary embodiments may concomitantly increase the table byte size102(in bits/bytes) to ensure the database table90continues to exceed the storage capacity or byte size104of the processor cache memory100. Exemplary embodiments may thus bind the encryption algorithm46, the difficulty algorithm48, and/or the proof-of-work algorithm52to the main/host memory device38to deter GPU/ASIC usage. Exemplary embodiments may also unbind the hashing algorithm54from the difficulty algorithm48. Exemplary embodiments easily validate the proof-of-work by changing how proof-of-work is calculated without changing the hashing algorithm54.
Because the hashing algorithm54is disassociated or disconnected from the difficulty algorithm48, the cryptographic security of the hashing algorithm54is increased or improved. Moreover, the separate difficulty algorithm48and/or proof-of-work algorithm52may have other/different objectives, without compromising the cryptographic security of the hashing algorithm54. The difficulty algorithm48and/or proof-of-work algorithm52, for example, may be designed for less consumption of the electrical power. The difficulty algorithm48and/or proof-of-work algorithm52may additionally or alternatively be designed to deter/resist ASIC/GPU usage, such as increased usage of the processor cache memory100and/or the main/host memory device38. The difficulty algorithm48and/or proof-of-work algorithm52need not be cryptographically secure. Because the hashing algorithm54ensures the cryptographic security, the difficulty algorithm48and/or proof-of-work algorithm52need not be burdened with providing the cryptographic security. The difficulty algorithm48and/or proof-of-work algorithm52each require less programming code and less storage space/usage in bytes, so each is much simpler to code and much faster to execute. FIG.13illustrates network binding. Because the encryption algorithm46, the difficulty algorithm48, and the proof-of-work algorithm52may be separate software modules, routines, or clients, network communications may be used to deter specialized hardware. AsFIG.13illustrates, the miner system22communicates with the blockchain network server28via the communications network26. Because the miner system22may be authorized to perform blockchain mining (perhaps according to the proof-of-work target scheme34specified or used by the blockchain network server28), the miner system22may receive the inputs24from the blockchain network server28. The miner system22, in other words, must use the communications network26to receive the inputs24and to subsequently mine the inputs24. The miner system22uses the inputs24to determine the hash value60and/or the difficulty50(as this disclosure above explains). However, suppose the blockchain network server28stores the database table90that is required for the difficulty algorithm48and/or the proof-of-work algorithm52. Even though the miner system22may execute the encryption algorithm46, the difficulty algorithm48, and/or the proof-of-work algorithm52, the miner system22may be forced to send one or more database queries to the blockchain network server28. The blockchain network server28may have a hardware processing element and a memory device (not shown for simplicity) that stores the database table90. The blockchain network server28may also store and execute a query handler software application (also not shown for simplicity) that receives queries from clients, identifies or looks up entries94in the database table90, and sends query responses to the clients. So, when the miner system22is instructed to perform, or require, the bit shuffle operation92, the miner system22may thus be forced to retrieve any entry94(specified by the database table90) via the communications network26from the blockchain network server28. The miner system22may thus send the database query to the network address assigned to or associated with the blockchain network server28.
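One possible sketch of that database query appears below; the endpoint path and JSON response format are hypothetical, as any query/response mechanism imposing a network round trip per entry94yields the same latency binding:

    import json
    import urllib.request

    def fetch_table_entry(server_url, index):
        """Query the blockchain network server for one entry of the database table.

        Each lookup costs a full network round trip, so mining speed is bound by
        the network latency rather than by processor speed.
        """
        with urllib.request.urlopen(f"{server_url}/table-entry?index={index}") as response:
            payload = json.load(response)
        return bytes.fromhex(payload["entry"])  # hypothetical response field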
The miner system22then awaits a query response sent via the communications network26from the blockchain network server28, and the query response includes or specifies the random selection of bits/bytes retrieved from the particular entry94in the database table90. The miner system22may then perform the bit shuffle operation92on the hash value(s)60(as this disclosure above explains). Exemplary embodiments may use a network latency112to discourage or deter specialized hardware. Because the blockchain network server28may store the database table90, the miner system22is performance bound by the network latency112in the communications network26. Packet communications between the blockchain network server28and the destination miner system22require time, and the network latency112is affected by network routing, network segment travel distances, network traffic, and many other factors. Exemplary embodiments may thus additionally or alternatively force the miner system22to wait on the communications network26to obtain any entry94in the database table90. A faster/better hardware processing component (such as an ASIC) does not overcome bottleneck(s) due to the network latency112in the communications network26. Moreover, because the electrical power required by a network interface114is likely less than that of the hardware processing component, the miner system22consumes significantly less electrical power. FIG.14illustrates party binding. Here the miner system22may utilize an authorized proof-of-work (“PoW”) service provider120that provides a PoW service122. The miner system22may communicate with a PoW server124via the communications network26, and the PoW server124is operated by, or on behalf of, the PoW service provider120. Perhaps only the PoW service provider120may be authorized to execute the difficulty algorithm48and/or the proof-of-work algorithm52as a provable party. The PoW server124may have a hardware processing element and a memory device (not shown for simplicity) that stores the difficulty algorithm48and/or the proof-of-work algorithm52. If an incorrect or unauthorized party attempts the proof-of-work, the proof-of-work is designed to fail. As an example,FIG.14illustrates a party identifier126as one of the inputs24to the difficulty algorithm48and to the proof-of-work algorithm52. While the party identifier126may be supplied or sent from any network location (such as the blockchain network server28and/or the miner system22), the party identifier126may be locally retrieved from the memory device of the PoW server124. The miner system22may send a PoW request128to a network address (e.g., IP address) associated with the PoW server124. The PoW request128may include or specify one or more of the inputs24to the difficulty algorithm48and/or to the proof-of-work algorithm52. Suppose, for example, that the PoW request128includes or specifies the hash value(s)60(determined by the hashing algorithm54, as above explained). The PoW server124may generate the difficulty50(by calling or executing the difficulty algorithm48) and/or the proof-of-work result42(by calling and/or by executing the proof-of-work algorithm52) using the hash value(s)60and the party identifier126. The PoW server124may then send the difficulty50and/or the proof-of-work result42as a PoW service response130back to the IP address associated with the miner system22and/or back to the IP address associated with the blockchain network server28.
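The party binding described above may be sketched by supplying the party identifier126as an input alongside the hash value(s)60; the identifier value and the mixing rule are illustrative assumptions:

    import hashlib

    def party_bound_result(hash_value, party_id):
        """Fold the party identifier into the proof-of-work result so that an
        unauthorized party produces a result that fails verification."""
        return hashlib.sha256(hash_value + party_id.encode()).digest()

    result = party_bound_result(bytes(32), "authorized-pow-provider")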
Either or both of the PoW server124and/or the blockchain network server28may compare the difficulty50and/or the proof-of-work result42to the proof-of-work (“PoW”) target scheme34. If the difficulty50and/or the proof-of-work result42satisfies the proof-of-work (“PoW”) target scheme34, then the correct, authorized party has solved the mathematical puzzle62associated with the mining scheme. Exemplary embodiments may thus be socially bound. Because the party identifier126may be an input to the difficulty algorithm48and/or to the proof-of-work algorithm52, the party identifier126must specify the correct name, code, alphanumeric combination, binary value, or any other representation of the PoW service provider120. If the wrong, incorrect, or unauthorized value is input, the difficulty algorithm48and/or the proof-of-work algorithm52will generate incorrect results that cannot satisfy the proof-of-work (“PoW”) target scheme34. An unauthorized party has been used to conduct the proof-of-work. FIG.15illustrates machine binding. Here the miner system22may utilize a particular machine, device, or other computer to provide the PoW service122. The miner system22, for example, must use the PoW server124to execute the difficulty algorithm48and/or the proof-of-work algorithm52as a provable party. That is, perhaps only the PoW server124is authorized to execute the difficulty algorithm48and/or the proof-of-work algorithm52. A different computer or server, even if also operated by, or on behalf of, the PoW service provider120, is ineligible or unauthorized.FIG.15thus illustrates a machine identifier130as one of the inputs24to the difficulty algorithm48and/or to the proof-of-work algorithm52. The machine identifier130is any value, number, or alphanumeric combination that uniquely identifies the PoW server124executing the difficulty algorithm48and/or the proof-of-work algorithm52. The machine identifier130, for example, may be a chassis or manufacturer's serial number, MAC address, or IP address that is assigned to or associated with the PoW server124. When the PoW server124receives the input(s)24from the miner system22(perhaps via the PoW request128, as above explained), the PoW server124may generate the difficulty50and/or the proof-of-work result42using the hash value(s)60and the machine identifier130as inputs. The PoW server124may then send the difficulty50and/or the proof-of-work result42as a PoW service response130back to the IP address associated with the miner system22and/or back to the IP address associated with the blockchain network server28. Either or both of the PoW server124and/or the blockchain network server28may compare the difficulty50and/or the proof-of-work result42to the proof-of-work (“PoW”) target scheme34. If the difficulty50and/or the proof-of-work result42satisfy the proof-of-work (“PoW”) target scheme34, then the correct, authorized machine or device has solved the mathematical puzzle62associated with the mining scheme. Exemplary embodiments may thus be machine bound. If the wrong, incorrect, or unauthorized machine identifier130is input, the difficulty algorithm48and/or the proof-of-work algorithm52will generate incorrect results that cannot satisfy the proof-of-work (“PoW”) target scheme34. An unauthorized computer has been used to conduct the proof-of-work. FIG.16further illustrates network binding. Here a predetermined network addressing scheme must be used to determine the difficulty50and/or the proof-of-work result42.
Suppose, for example, that the proof-of-work (“PoW”) target scheme34requires one or more predetermined network addresses134when executing the difficulty algorithm48and/or the proof-of-work algorithm52. The inputs24to the difficulty algorithm48and/or to the proof-of-work algorithm52, for example, may include one or more source addresses136and/or one or more destination addresses138when routing packetized data via the communications network26from the miner system22to the PoW service provider120(e.g., the PoW server124). The hash values60, in other words, must traverse or travel a predetermined network routing140in order to satisfy the proof-of-work (“PoW”) target scheme34. The predetermined network routing140may even specify a chronological list or order of networked gateways, routers, switches, servers, and other nodal addresses that pass or route the inputs24from the miner system22to the PoW server124. The source addresses136, the destination addresses138, and/or the predetermined network routing140may thus be additional data inputs24to the difficulty algorithm48and/or to the proof-of-work algorithm52. The PoW server124may perform network packet inspection to read/retrieve the source addresses136, the destination addresses138, and/or the predetermined network routing140associated with, or specified by, a data packet. When the PoW server124receives the input(s)24from the miner system22(perhaps via the PoW request128, as above explained), the PoW server124may generate the difficulty50and/or the proof-of-work result42using the hash value(s)60, the source addresses136, the destination addresses138, and/or the predetermined network routing140. The PoW server124may then send the difficulty50and/or the proof-of-work result42as the PoW service response130back to the IP address associated with the miner system22and/or back to the IP address associated with the blockchain network server28. Either or both of the PoW server124and/or the blockchain network server28may compare the difficulty50and/or the proof-of-work result42to the proof-of-work (“PoW”) target scheme34. If the difficulty50and/or the proof-of-work result42satisfy the proof-of-work (“PoW”) target scheme34, then the correct, authorized networked devices were used to solve the mathematical puzzle62associated with the mining scheme. If a wrong, incorrect, or unauthorized routing was used, the difficulty algorithm48and/or the proof-of-work algorithm52will fail to satisfy the proof-of-work (“PoW”) target scheme34. An unauthorized network of computers has been used to conduct the proof-of-work. FIG.17illustrates vendor processing. The miner system22may communicate with one or more service providers via the communications network26. The miner system22may enlist or request that any of the service providers provide or perform a processing service. An encryption service provider150, for example, may provide an encryption service152by instructing an encryption server154to execute the encryption algorithm46chosen or specified by the miner system22and/or the blockchain network server28. A difficulty service provider156may provide a difficulty service158by instructing a difficulty server160to execute the difficulty algorithm48chosen or specified by the miner system22and/or the blockchain network server28. The proof-of-work (PoW) service provider120(e.g., the PoW server124) may provide the PoW service122by executing the proof-of-work algorithm52chosen or specified by the miner system22and/or the blockchain network server28. 
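The machine and network bindings described above may be sketched together by folding the machine identifier130, the source addresses136, the destination addresses138, and the predetermined network routing140into the computation; the identifier, the address values, and the byte encoding are illustrative assumptions:

    import hashlib

    def bound_pow_result(hash_value, machine_id, source, destination, route):
        """Mix the machine identifier and routing information into the result so that
        an unauthorized machine or an unauthorized route fails the target scheme."""
        material = hash_value + machine_id.encode()
        material += source.encode() + destination.encode()
        material += ",".join(route).encode()
        return hashlib.sha256(material).digest()

    result = bound_pow_result(bytes(32), "server-serial-0A1B2C",
                              "198.51.100.10", "203.0.113.7",
                              ["gateway-1", "router-9", "pow-server"])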
The miner system22may thus outsource or subcontract any of the encryption algorithm46, the difficulty algorithm48, and/or the proof-of-work algorithm52to the service provider(s). Because the encryption algorithm46, the difficulty algorithm48, and/or the proof-of-work algorithm52may be separate software mechanisms or packages, the service providers150,156, and120may specialize in their respective algorithms46,48, and52and/or services152,158, and122. The encryption service provider150, for example, may offer a selection of different encryption services152and/or encryption algorithms46, with each encryption service152and/or encryption algorithm46tailored to a specific encryption need or feature. The difficulty service provider156may offer a selection of different difficulty services158and/or difficulty algorithms48that are tailored to a specific difficulty need or feature. The PoW service provider120may offer a selection of different PoW services122and/or PoW algorithms52that are tailored to a specific proof-of-work need or feature. The blockchain network server28, the miner system22, and/or the proof-of-work (“PoW”) target scheme34may thus mix-and-match encryption, difficulty, and proof-of-work options. Exemplary embodiments may thus decouple encryption, difficulty, and proof-of-work efforts. Because the encryption algorithm46may be a stand-alone software offering or module, exemplary embodiments greatly improve encryption security. The encryption algorithm46(such as the hashing algorithm54) need not intertwine with the difficulty algorithm48and/or the proof-of-work algorithm52. Because the hashing algorithm54may be functionally divorced from difficulty and proof-of-work calculations, the hashing algorithm54remains a safe, secure, and proven cryptology scheme without exposure to software bugs and errors introduced by difficulty and proof-of-work needs. The difficulty algorithm48may also be severed or isolated from encryption and proof-of-work, thus allowing a blockchain scheme to dynamically alter or vary different difficulty calculations without affecting encryption and/or proof-of-work. The proof-of-work algorithm52may also be partitioned, split off, or disconnected from encryption and difficulty, thus allowing any blockchain scheme to dynamically alter or vary different proof-of-work calculations or schemes without affecting encryption and/or difficulty. FIG.18illustrates democratic mining. Exemplary embodiments reduce or even eliminate the need for graphics processors and specialized application-specific integrated circuits. The miner system22may thus rely on a conventional central processing unit (such as the CPU36) to process the blockchain transactions32. The miner system22may thus be a conventional home or business server/desktop160or laptop computer162that is much cheaper to purchase, use, and maintain. Moreover, the miner system22may even be a smartphone164, tablet computer166, or smartwatch168, as these devices also have adequate processing and memory capabilities to realistically mine and win the block40of data (illustrated inFIGS.1-10). Indeed, the miner system22may be any network-connected device, as exemplary embodiments reduce or even eliminate the need for specialized hardware processors. The miner system22thus opens up blockchain mining to any network-connected appliance (e.g., refrigerator, washer, dryer), smart television, camera, smart thermostat, or other Internet of Things device. FIG.19also illustrates democratic mining.
Because exemplary embodiments reduce or even eliminate the need for graphics processors and specialized application-specific integrated circuits, the miner system22may even be a car, truck, or other vehicle170. As the reader may realize, the vehicle170may have many electronic systems controlling many components and systems. For example, the engine may have an engine electronic control unit or “ECU”172, the transmission may have a powertrain electronic control unit or “PCU”174, the braking system may have a brake electronic control unit or “BCU”176, and the chassis system may have a chassis electronic control unit or “CUC”178. There may be many more electronic control units throughout the vehicle170. A controller area network180thus allows all the various electronic control units to communicate with each other (via messages sent/received via a CAN bus). All these controllers may also interface with the communications network26via a wireless vehicle transceiver182(illustrated as “TX/RX”). The vehicle170may thus communicate with the blockchain network server28to receive the inputs24(such as the blockchain transactions32). The vehicle170may then use the various controllers172-178to mine the blockchain transactions32using the encryption algorithm46, the difficulty algorithm48, and/or the PoW algorithm52(as this disclosure above explains). The reader may immediately see that the vehicle170is a powerful processing platform for blockchain mining. The vehicle170may mine the blockchain transactions32when moving or stationary, as long as electrical power is available to the various controllers172-178and to the vehicle transceiver182. Indeed, even when parked with the ignition/battery/systems on or off, exemplary embodiments may maintain the electrical power to mine the blockchain transactions32. So, a driver/user may configure the vehicle170to mine the blockchain transactions32, even when the vehicle sits during work hours, sleep hours, shopping hours, and other times of idle use. The reader may also immediately see that vehicular mining opens up countless additional possibilities to win the block40of data (i.e., solve the puzzle62) without additional investment in mining rigs. Thousands, millions, or even billions of vehicles170(e.g., cars, trucks, boats, planes, buses, trains, motorcycles) may mine the blockchain transactions32, thus providing a potential windfall to offset the purchasing and operational expenses. Exemplary embodiments reduce energy consumption. Because a conventional, general purpose central processing unit (e.g., the CPU36) is adequate for mining the blockchain transactions32, exemplary embodiments consume much less electrical power. Moreover, because a conventional central processing unit consumes much less electrical power, the CPU operates at much cooler temperatures, generates less waste heat/energy, and therefore requires less cooling, air conditioning, and refrigerant machinery. Exemplary embodiments are thus much cheaper to operate than GPUs and ASICs. Exemplary embodiments thus democratize blockchain mining. Because encryption, difficulty, and proof-of-work efforts may be functionally divided, general-purpose computer equipment has the processing and memory capability to compete as blockchain miners.
For example, because the function(s) that calculate(s) the magnitude of the proof of work (such as the difficulty algorithm48and/or the proof-of-work algorithm52) may be detached or isolated from the function that performs cryptography (such as the hashing algorithm54), encryption need not be modified in order to improve security (e.g., such as the MONERO® mining scheme). The well-tested SHA-256 hashing function, for example, remains stable and unaffected by difficulty and/or proof-of-work. The difficulty algorithm48, in other words, need not be determined by or with the hashing algorithm54. The difficulty algorithm48, instead, may be separately determined as a true, independent measure of the difficulty50. The inventor has realized that most or all proof of work schemes generally may have two functions (i.e., one function to do a cryptographic hash and another function to determine the level of difficulty of a given hash). Exemplary embodiments may separate, or take away, what makes proof of work hard from the cryptographic hash and, perhaps instead, put it in the difficulty algorithm48that calculates which hash is more difficult. The difficulty algorithm48, for example, may be functionally combined with the proof-of-work algorithm52that calculates the magnitude of the proof of work instead of using the hashing algorithm54(asFIG.5illustrates). Exemplary embodiments need not try to design, develop, or modify hashing functions that deter ASIC mining. Encryption may thus be independent from proof-of-work determinations. The proof of work (such as the difficulty algorithm48and/or the proof-of-work algorithm52) may be a different or separate software mechanism from the hashing mechanism. The difficulty50of the proof-of-work, for example, may be a separate component from staking in a blockchain. The difficulty algorithm48and/or the proof-of-work algorithm52may require communications networking between provably different parties. The difficulty algorithm48and/or the proof-of-work algorithm52may require network delays and/or memory bandwidth limitations. The difficulty algorithm48and/or the proof-of-work algorithm52may have a random component (such as incorporating a random function), such that the difficulty algorithm48and/or the proof-of-work algorithm52may randomly determine the difficulty50and/or the proof-of-work result42. Exemplary embodiments thus reduce or even eliminate the power intensive mechanism that is inherent in today's proof of work schemes by changing how the proof of work is calculated. Exemplary embodiments need not change the hashing algorithm54, and exemplary embodiments allow a more easily validated proof of work. The hashing algorithm54is not bound or required to determine the proof of work. The proof of work need not be cryptographically secure. The liberated, autonomous hashing algorithm54generates and guarantees an input (e.g., the hash values60) that cannot be predicted by some other faster algorithm. The disassociated hashing algorithm54effectively generates the hash values60as random numbers. The hashing algorithm54, in other words, provides cryptographic security, so neither the difficulty algorithm48nor the proof-of-work algorithm52need be cryptographically secure. The difficulty algorithm48and/or the proof-of-work algorithm52need not be folded into the hashing algorithm54. Exemplary embodiments provide great value to blockchains.
Exemplary embodiments may functionally separate encryption (e.g., the hashing algorithm54) from proof of work (such as the difficulty algorithm48and/or the proof-of-work algorithm52). Exemplary embodiments may thus bind proof-of-work to a conventional central processing unit. Deploying a different cryptographic hash is hugely dangerous for blockchains, but deploying another difficulty or proof of work mechanism is not so dangerous. Exemplary embodiments allow blockchains to experiment with different difficulty functions (the difficulty algorithms48) and/or different proof-of-work algorithms52without changing the hashing algorithm54. Exemplary embodiments thus mitigate risk and reduce problems with cryptographic security. Many blockchain environments would prefer to make their technology CPU mineable for lower power, lower costs, and more democratic participation. The barrier, though, is that conventionally these goals would require changing their hash function. Exemplary embodiments, instead, reduce costs and increase the pool of miner systems without changing the hash function. The difficulty algorithm48and/or the proof-of-work algorithm52may be refined, modified, or even replaced with little or no impact on the hashing algorithm54. Exemplary embodiments reduce electrical power consumption. Blockchain mining is very competitive, as the first miner that solves the mathematical puzzle62owns the block40of data and is financially rewarded. Large “farms” have thus overtaken blockchain mining, with each miner installation using hundreds or even thousands of ASIC-based computers to improve their chances of first solving the calculations specified by the mathematical puzzle62. ASIC-based blockchain mining requires tremendous energy resources, though, with some studies estimating that each BITCOIN® transaction consumes more daily electricity than an average American home. Moreover, because ASIC-based blockchain mining operates 24/7/365 at full processing power, the ASIC-based machines quickly wear out or fail and need periodic (perhaps yearly) replacement. Exemplary embodiments, instead, retarget blockchain mining back to CPU-based machines that consume far less electrical power and that cost far less money to purchase. Because the capital costs and expenses are greatly reduced, more miners and more CPU-based machines may effectively participate and compete. The CPU-based machines, in other words, have a realistic and profitable chance of first solving the calculations specified by the mathematical puzzle62. Democratic participation is greatly increased. FIGS.20-21are more detailed illustrations of an operating environment, according to exemplary embodiments.FIG.20illustrates the blockchain network server28communicating with the miner system22via the communications network26. The blockchain network server28and the miner system22operate in the blockchain environment20. The blockchain network server28has a hardware processing component190(e.g., “P”) that executes a server-side blockchain software application192stored in a local memory device194. The blockchain network server28has a network interface to the communications network26, thus allowing two-way, bidirectional communication with the miner system22.
The server-side blockchain software application192includes instructions, code, and/or programs that cause the blockchain network server28to perform operations, such as sending the inputs24(such as the blockchain transactions32) and/or the proof-of-work (“PoW”) target scheme34via the communications network26to the network address (e.g., Internet protocol address) associated with or assigned to the miner system22. The inputs24may be any electronic data30that is shared among miners participating in the blockchain environment20. The miner system22operates as a mining node in the blockchain environment20. The miner system22has the central processing unit (e.g., “CPU”)36that executes a client-side blockchain mining software application196stored in the local memory device38. The miner system22has a network interface to the communications network26, thus allowing two-way, bidirectional communication with the blockchain network server28. The client-side blockchain mining software application196includes instructions, code, and/or programs that cause the miner system22to perform operations, such as receiving the inputs24, the electronic data30, and/or the proof-of-work (“PoW”) target scheme34. The client-side blockchain mining software application196may then cause the miner system22to execute the proof-of-work (“PoW”) mechanism44based on the electronic data30representing the inputs24. The client-side blockchain mining software application196may instruct the CPU36to call and/or to execute the encryption algorithm46, the difficulty algorithm48, and/or the PoW algorithm52. The CPU36calls or executes any or all of the encryption algorithm46, the difficulty algorithm48, and/or the PoW algorithm52using the electronic data30. The miner system22mines blockchain transactional records. Whatever the electronic data30represents, the miner system22applies the electronic data30according to the proof-of-work target scheme34. While the proof-of-work target scheme34may specify any encryption algorithm46, most blockchains specify the hashing algorithm54. The miner system22may thus generate the hash values60by hashing the electronic data30(e.g., the blockchain transactions32) using the hashing algorithm54. The miner system22may generate the difficulty50by executing the difficulty algorithm48using the hash values60. The miner system22may generate the proof-of-work result42using the hash value(s)60as inputs to the proof-of-work algorithm52. If the proof-of-work result42satisfies the mathematical puzzle62, according to the rules/regulations specified by the blockchain network server28and/or the proof-of-work target scheme34, then perhaps the miner system22earns or owns the right or ability to write/record blockchain transaction(s) to the block40of data. The miner system22may also earn or be rewarded with a compensation (such as a cryptographic coin, points, other currency/coin/money, or other value). The miner system22may own the block40of data. If the miner system22is the first to satisfy the proof-of-work target scheme34(e.g., the proof-of-work result42satisfies the mathematical puzzle62), the miner system22earns the sole right or ability to write the blockchain transactions32to the block40of data. The miner system22may timestamp the block40of data and broadcast the block40of data, the timestamp, the proof-of-work result42, and/or the mathematical puzzle62to other miners in the blockchain environment20. The miner system22may broadcast a hash value representing the block40of data.
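The mining flow just described (hash the inputs24, measure the difficulty50, derive the proof-of-work result42, and test the result) may be sketched with each stage as a separate call; the stand-in difficulty metric and the permissive target below are assumptions for illustration:

    import hashlib

    def mine(transactions):
        """Run the three separated stages and test the result against a target scheme."""
        hash_value = hashlib.sha256(transactions).digest()  # encryption algorithm46
        difficulty = int.from_bytes(hash_value[:4], "big")  # difficulty algorithm48 (stand-in)
        pow_result = hashlib.sha256(                        # proof-of-work algorithm52
            hash_value + difficulty.to_bytes(4, "big")).digest()
        target = 2 ** 255                                   # permissive target for illustration
        return pow_result if int.from_bytes(pow_result, "big") < target else None

    block_candidate = mine(b"blockchain transactions")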
The miner system22thus adds or chains the block40of data (and perhaps its hash value) to the blockchain64, and the other miners begin working on a next block in the blockchain64. The proof-of-work target scheme34and/or the mathematical puzzle62may vary. Satoshi's BITCOIN® proof-of-work scanned for a value that, when hashed, yields a hash value beginning with a number of zero bits. The average work required is exponential in the number of zero bits required and can be verified by executing a single hash. BITCOIN's miners may increment a nonce in the block40of data until a value is found that gives the block's hash the required zero bits. FIG.21further illustrates the operating environment. The miner system22may optionally utilize vendors for any of the hashing algorithm54, the difficulty algorithm48, and the proof-of-work algorithm52. The miner system22may enlist or request that a service provider provide or perform a processing service. The encryption server154, for example, may communicate with the blockchain network server28and the miner system22via the communications network26. The encryption server154has a hardware processing element (“P”) that executes the encryption algorithm46stored in a local memory device. The encryption server154is operated on behalf of the encryption service provider150and provides the encryption service152. The miner system22and/or the blockchain network server28may send an encryption service request to the encryption server154, and the encryption service request may specify the inputs24(such as the blockchain transactions32). The encryption server154executes the encryption algorithm46using the inputs24to generate the hash value(s)60. The encryption server154sends a service response to the miner system22, and the service response includes or specifies the hash value(s)60. Other suppliers may be used. The difficulty server160may communicate with the blockchain network server28and the miner system22via the communications network26. The difficulty server160has a hardware processing element (“P”) that executes the difficulty algorithm48stored in a local memory device. The difficulty service provider156may provide the difficulty service158by instructing the difficulty server160to execute the difficulty algorithm48chosen or specified by the miner system22and/or the blockchain network server28. The miner system22and/or the blockchain network server28may send a difficulty service request to the difficulty server160, and the difficulty service request may specify the hash value(s)60. The difficulty server160executes the difficulty algorithm48using the hash value(s)60to generate the difficulty50. The difficulty server160sends the service response to the miner system22, and the service response includes or specifies the difficulty50. The PoW server124may communicate with the blockchain network server28and the miner system22via the communications network26. The PoW server124has a hardware processing element (“P”) that executes the proof-of-work algorithm52stored in a local memory device. The PoW service provider120(e.g., the PoW server124) may provide the PoW service122by executing the proof-of-work algorithm52chosen or specified by the miner system22and/or the blockchain network server28. The PoW server124sends the service response to the miner system22, and the service response includes or specifies the PoW result42. The miner system22may compare any of the hash value(s)60, the difficulty50, and/or the PoW result42to the proof-of-work target scheme34.
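Satoshi's zero-bits scheme mentioned above may be sketched as follows; the small zero-bit requirement is an assumption so the loop finishes quickly:

    import hashlib

    def leading_zero_bits(digest):
        """Count the leading zero bits of a hash value."""
        return 8 * len(digest) - int.from_bytes(digest, "big").bit_length()

    def scan_nonce(header, required_zero_bits):
        """Increment a nonce until the block's hash begins with the required zero bits."""
        nonce = 0
        while True:
            digest = hashlib.sha256(header + nonce.to_bytes(8, "big")).digest()
            if leading_zero_bits(digest) >= required_zero_bits:
                return nonce
            nonce += 1

    nonce = scan_nonce(b"block data", required_zero_bits=12)  # ~4,096 expected attempts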
If the proof-of-work target scheme34is satisfied, perhaps the miner system22is the first miner to have solved the puzzle62. Exemplary embodiments may be applied regardless of networking environment. Exemplary embodiments may be easily adapted to stationary or mobile devices having wide-area networking (e.g., 4G/LTE/5G cellular), wireless local area networking (WI-FI®), near field, and/or BLUETOOTH® capability. Exemplary embodiments may be applied to stationary or mobile devices utilizing any portion of the electromagnetic spectrum and any signaling standard (such as the IEEE 802 family of standards, GSM/CDMA/TDMA or any cellular standard, and/or the ISM band). Exemplary embodiments, however, may be applied to any processor-controlled device operating in the radio-frequency domain and/or the Internet Protocol (IP) domain. Exemplary embodiments may be applied to any processor-controlled device utilizing a distributed computing network, such as the Internet (sometimes alternatively known as the “World Wide Web”), an intranet, a local-area network (LAN), and/or a wide-area network (WAN). Exemplary embodiments may be applied to any processor-controlled device utilizing power line technologies, in which signals are communicated via electrical wiring. Indeed, exemplary embodiments may be applied regardless of physical componentry, physical configuration, or communications standard(s). Exemplary embodiments may utilize any processing component, configuration, or system. For example, the miner system22may utilize any desktop, mobile, or server central processing unit or chipset offered by INTEL®, ADVANCED MICRO DEVICES®, ARM®, TAIWAN SEMICONDUCTOR MANUFACTURING®, QUALCOMM®, or any other manufacturer. The miner system22may even use multiple central processing units or chipsets, which could include distributed processors or parallel processors in a single machine or multiple machines. The central processing unit or chipset can be used in supporting a virtual processing environment. The central processing unit or chipset could include a state machine or logic controller. When any of the central processing units or chipsets execute instructions to perform “operations,” this could include the central processing unit or chipset performing the operations directly and/or facilitating, directing, or cooperating with another device or component to perform the operations. Exemplary embodiments may packetize. When the blockchain network server28and the miner system22communicate via the communications network26, the blockchain network server28and the miner system22may collect, send, and retrieve information. The information may be formatted or generated as packets of data according to a packet protocol (such as the Internet Protocol). The packets of data contain bits or bytes of data describing the contents, or payload, of a message. A header of each packet of data may be read or inspected and contain routing information identifying an origination address and/or a destination address. Exemplary embodiments may use any encryption or hashing function. There are many encryption algorithms and schemes, and exemplary embodiments may be adapted to execute or to conform to any encryption algorithm and/or scheme. In the blockchain environment20, though, many readers may be familiar with the various hashing algorithms, especially the well-known SHA-256 hashing algorithm. The SHA-256 hashing algorithm acts on any electronic data or information to generate a 256-bit hash value as a cryptographic key.
The key is thus a unique digital signature. However, there are many different hashing algorithms, and exemplary embodiments may be adapted to execute or to conform to any hashing algorithm, hashing family, and/or hashing scheme (e.g., Blake family, MD family, RIPE family, SHA family, CRC family). The miner system22may store or request different software packages. The hashing algorithm54may be a software file, executable program, routine, module, programming code, or third-party service that hashes the blockchain transactions32to generate the hash value(s)60. The difficulty algorithm48may be a software file, executable program, routine, module, programming code, or third-party service that uses the hash value(s)60to generate the difficulty50. The proof-of-work (“PoW”) algorithm52may be a software file, executable program, routine, module, programming code, or third-party service that uses the hash value(s)60to generate the PoW result42. The miner system22may download or otherwise acquire the hashing algorithm54, the difficulty algorithm48, and/or the PoW algorithm52to provide mining operations for the blockchain transactions32. The blockchain environment20may flexibly switch or interchange encryption, difficulty, and proof-of-work. Because the hashing algorithm54, the difficulty algorithm48, and the proof-of-work algorithm52may be separate software packages, the proof-of-work (“PoW”) target scheme34and/or the blockchain environment20may mix-and-match the encryption algorithm46, the difficulty algorithm48, and the proof-of-work algorithm52. The blockchain environment20may thus easily evaluate different combinations of the encryption algorithm46, the difficulty algorithm48, and the proof-of-work algorithm52with little or no intra-algorithm or intra-application effect. The blockchain environment20may mix-and-match encryption, difficulty, and proof-of-work. FIGS.22-31illustrate mining specifications, according to exemplary embodiments. When the miner system22communicates with the blockchain network server28, the blockchain network server28may specify the proof-of-work (“PoW”) target scheme34that is required by the blockchain environment20. That is, when the miner system22participates as a miner and mines or processes blockchain records/transactions, the miner system22may be required or instructed to use the particular hashing algorithm54, the difficulty algorithm48, and/or the proof-of-work algorithm52specified by the blockchain network. For example, in order for the miner system22to be authorized or recognized as a mining participant, the miner system22may be required to download the client-side blockchain mining software application196that specifies or includes the hashing algorithm54, the difficulty algorithm48, and/or the proof-of-work algorithm52. The client-side blockchain mining software application196may thus comprise any software apps or modules, files, programming code, or instructions representing the hashing algorithm54, the difficulty algorithm48, and/or the proof-of-work algorithm52. FIGS.23-25illustrate an encryption identifier mechanism.FIG.23illustrates the miner system22receiving the proof-of-work (“PoW”) target scheme34that is required by the blockchain environment20. In order to reduce a memory byte size and/or programming line size of the PoW target scheme34and/or the client-side blockchain mining software application196, exemplary embodiments may specify an encryption identifier (encryption “ID”)200associated with the blockchain network's chosen or required encryption scheme.
The encryption identifier200may be any alphanumeric combination, hash value, network address, website, or other data/information that uniquely identifies the PoW target scheme34and/or the encryption algorithm46used by the blockchain environment20. AsFIG.23illustrates, the miner system22may receive the encryption identifier200as a specification or parameter associated with the PoW target scheme34and/or the encryption algorithm46. AsFIG.24illustrates, though, the miner system22may receive a packetized message202from the blockchain network server28, and a packet header and/or payload may specify or include the encryption identifier200as a data field, specification, or parameter. Again, because many or most blockchain networks use hashing as an encryption mechanism, the encryption identifier200may specify, be assigned to, or be associated with the hashing algorithm54. The blockchain network server28may thus send the encryption identifier200(via the communications network26) to the miner system22. The encryption identifier200may be packaged as a downloadable component, parameter, or value with the client-side blockchain mining software application196. However, the encryption identifier200may additionally or alternatively be sent to the miner system22at any time via the message202. Because the encryption identifier200may be separately sent from the client-side blockchain mining software application196, the encryption identifier200may be dynamically updated or changed without downloading a new or updated client-side blockchain mining software application196. AsFIG.25illustrates, exemplary embodiments may consult the electronic database70of encryption algorithms. Once the miner system22receives or determines the encryption identifier200, the miner system22may implement the encryption scheme represented by the encryption identifier200. The miner system22may obtain, read, or retrieve the encryption identifier200specified by the client-side blockchain mining software application196and/or packet inspect the message202from the blockchain network server28. Once the encryption identifier200is determined, the miner system22may identify the corresponding blockchain encryption scheme by querying the electronic database70of encryption algorithms for the encryption identifier200.FIG.25illustrates the electronic database70of encryption algorithms locally stored in the memory device38of the miner system22. The electronic database70of encryption algorithms may store, reference, or associate the encryption identifier200to its corresponding proof-of-work target scheme34and/or encryption algorithm46. The miner system22may thus perform or execute a database lookup for the encryption identifier200to identify which proof-of-work target scheme34and/or encryption algorithm46is required for miners operating in the blockchain environment20. The miner system22may then retrieve, call, and/or execute the encryption algorithm46using the inputs24(such as the blockchain transactions32), as this disclosure above explained (with reference toFIG.7). Exemplary embodiments may outsource encryption operations. When the miner system22determines the encryption identifier200, the corresponding blockchain encryption scheme may require or specify the encryption service provider150that provides the encryption service152. AsFIG.25also illustrates, the electronic database70of encryption algorithms may map or relate the encryption identifier200to its corresponding encryption service provider150that provides the encryption service152. 
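As a hedged sketch of the lookup just described, the electronic database70might be modeled as a dictionary keyed by the encryption identifier200; the identifier strings, the algorithm choices, and the service URLs below are assumptions invented for illustration.

```python
import hashlib

# Hypothetical rows of the database 70 of encryption algorithms: the
# encryption identifier keys both a local algorithm and a service resource.
ENCRYPTION_DB = {
    "enc-sha256-v1": {
        "algorithm": lambda data: hashlib.sha256(data).digest(),
        "service_url": "https://encryption.example.com/sha256",   # illustrative
    },
    "enc-blake2b-v1": {
        "algorithm": lambda data: hashlib.blake2b(data).digest(),
        "service_url": "https://encryption.example.com/blake2b",  # illustrative
    },
}

row = ENCRYPTION_DB["enc-sha256-v1"]               # lookup by encryption ID
hash_value = row["algorithm"](b"blockchain transactions")
```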
The miner system22may thus identify an encryption service resource204that provides the encryption service152. The encryption service resource204, for example, may be an Internet protocol address, website/webpage, and/or uniform resource locator (URL) that is assigned to, or associated with, the encryption service provider150and/or the encryption service152. The miner system22may outsource or subcontract the inputs24(such as the blockchain transactions32) to the encryption service resource204(perhaps using the service request and service response mechanism explained with reference toFIG.21). Exemplary embodiments may thus be agnostic to hashing. The miner system22may call, request, and/or execute any encryption scheme specified by any client, cryptographic coin, or blockchain network. The miner system22may dynamically switch or mix-and-match different encryption schemes. Once the miner system22determines the proof-of-work target scheme34, the encryption algorithm46, the encryption service provider150, the encryption service152, the encryption identifier200, and/or the encryption service resource204, the miner system22may perform any encryption scheme specified for the blockchain environment20. The blockchain environment20may dynamically change the encryption scheme at any time. The blockchain environment20may flexibly switch, change, and evaluate different encryption strategies, perhaps with little or no impact or effect on difficulty and proof-of-work operations. Moreover, the miner system22may operate within or mine different blockchain environments20without specialized hardware rigs. Exemplary embodiments improve computer functioning. Because exemplary embodiments may only specify the encryption identifier200, the memory byte size consumed by the proof-of-work (“PoW”) target scheme34and/or the client-side blockchain mining software application196is reduced. That is, the blockchain network server28need not send the entire software program, code, or instructions representing the hashing algorithm54used by the blockchain environment20. The blockchain environment20, the blockchain network server28, and/or the proof-of-work (“PoW”) target scheme34need only specify much smaller byte-sized data or information representing the encryption algorithm46, the encryption service provider150, the encryption service152, the encryption identifier200, and/or the encryption service resource204. The blockchain environment20need not be burdened with conveying the hashing algorithm54to the miner system22and other mining nodes. The blockchain environment20and the communications network26convey less packet traffic, so packet travel times and network latency are reduced. Moreover, especially if the miner system22outsources the hashing operation, the miner system22is relieved from processing/executing the hashing algorithm54and consumes less electrical power. Again, then, a faster and more expensive graphics processor or even ASIC will not speed up the hashing operation. The conventional central processing unit36is adequate, reduces costs, and promotes democratic mining. FIGS.26-28illustrate a difficulty identifier mechanism.FIG.26illustrates the miner system22receiving the proof-of-work (“PoW”) target scheme34that is required by the blockchain environment20.
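Returning to the outsourcing path above, a service request to the encryption service resource204might be sketched as follows. The endpoint URL and JSON field names are pure assumptions (no wire format is defined here), and the third-party 'requests' package is assumed to be installed.

```python
import requests  # assumed to be installed; any HTTP client would do

# Hypothetical service resource URL resolved from the encryption identifier.
ENCRYPTION_SERVICE_URL = "https://encryption.example.com/hash"

def outsource_hash(transactions: bytes, encryption_id: str) -> bytes:
    """Send the raw inputs to the encryption service and return the hash value."""
    response = requests.post(
        ENCRYPTION_SERVICE_URL,
        json={"encryption_id": encryption_id, "payload": transactions.hex()},
        timeout=30,
    )
    response.raise_for_status()
    return bytes.fromhex(response.json()["hash_value"])
```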
In order to reduce a memory byte size and/or programming line size of the PoW target scheme34and/or the client-side blockchain mining software application196, exemplary embodiments may specify a difficulty identifier (difficulty “ID”)210associated with the blockchain network's chosen or required difficulty scheme. The difficulty identifier210may be any alphanumeric combination, hash value, network address, website, or other data/information that uniquely identifies the PoW target scheme34and/or the difficulty algorithm48used by the blockchain environment20. AsFIG.26illustrates, the miner system22may receive the difficulty identifier210as a specification or parameter associated with the PoW target scheme34and/or the difficulty algorithm48. AsFIG.27illustrates, though, the miner system22may receive the packetized message202from the blockchain network server28, and a packet header and/or payload may specify or include the difficulty identifier210as a data field, specification, or parameter. The blockchain network server28may thus send the difficulty identifier210(via the communications network26) to the miner system22. The difficulty identifier210may be packaged as a downloadable component, parameter, or value with the client-side blockchain mining software application196. However, the difficulty identifier210may additionally or alternatively be sent to the miner system22at any time via the message202. Because the difficulty identifier210may be separately sent from the client-side blockchain mining software application196, the difficulty identifier210may be dynamically updated or changed without downloading a new or updated client-side blockchain mining software application196. AsFIG.28illustrates, exemplary embodiments may consult the electronic database74of difficulty algorithms. Once the miner system22receives or determines the difficulty identifier210, the miner system22may implement the difficulty scheme represented by the difficulty identifier210. The miner system22may obtain, read, or retrieve the difficulty identifier210specified by the client-side blockchain mining software application196and/or packet inspect the message202from the blockchain network server28. Once the difficulty identifier210is determined, the miner system22may identify the corresponding blockchain difficulty scheme by querying the electronic database74of difficulty algorithms for any query parameter (such as the difficulty identifier210).FIG.28illustrates the electronic database74of difficulty algorithms locally stored in the memory device38of the miner system22. The electronic database74of difficulty algorithms may store, reference, or associate the difficulty identifier210to its corresponding proof-of-work target scheme34and/or difficulty algorithm48. The miner system22may thus perform or execute a database lookup for the difficulty identifier210to identify which proof-of-work target scheme34and/or difficulty algorithm48is required for miners operating in the blockchain environment20. The miner system22may then retrieve, call, and/or execute the difficulty algorithm48using the hash value(s)60, as this disclosure above explained (with reference toFIG.8). Exemplary embodiments may outsource difficulty operations. When the miner system22determines the difficulty identifier210, the corresponding blockchain difficulty scheme may require or specify the difficulty service provider156that provides the difficulty service158. 
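As a brief illustration of the message-borne delivery described above, a client might pull the difficulty identifier210out of a received payload as sketched below; the JSON layout is purely an assumption, since only a packet header or payload carrying the identifier is required.

```python
import json

# Hypothetical wire format: the identifiers travel as ordinary data fields,
# so they can change without downloading a new mining application.
raw_message = b'{"pow_target_scheme": "scheme-7", "difficulty_id": "diff-v2"}'

fields = json.loads(raw_message)
difficulty_id = fields["difficulty_id"]  # later used to look up the algorithm
```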
AsFIG.28also illustrates, the electronic database74of difficulty algorithms may map or relate the difficulty identifier210to its corresponding difficulty service provider156that provides the difficulty service158. The miner system22may thus identify a difficulty service resource212that provides the difficulty service158. The difficulty service resource212, for example, may be an Internet protocol address, website/webpage, and/or uniform resource locator (URL) that is assigned to, or associated with, the difficulty service provider156and/or the difficulty service158. The miner system22may outsource or subcontract the hash value(s)60to the difficulty service resource212(perhaps using the service request and service response mechanism explained with reference toFIG.21). Exemplary embodiments may thus be agnostic to difficulty. The miner system22may call, request, and/or execute any difficulty scheme specified by any client, cryptographic coin, or blockchain network. The miner system22may dynamically switch or mix-and-match different difficulty schemes. Once the miner system22determines the proof-of-work target scheme34, the difficulty algorithm48, the difficulty service provider156, the difficulty service158, the difficulty identifier210, and/or the difficulty service resource212, the miner system22may perform any difficulty scheme specified for the blockchain environment20. The blockchain environment20may dynamically change the difficulty scheme at any time. The blockchain environment20may flexibly switch, change, and evaluate different difficulty strategies, perhaps with little or no impact or effect on hashing and proof-of-work operations. Moreover, the miner system22may operate within or mine different blockchain environments20without specialized hardware rigs. Exemplary embodiments improve computer functioning. Because exemplary embodiments may only specify the difficulty identifier210, the memory byte size consumed by the proof-of-work (“PoW”) target scheme34and/or the client-side blockchain mining software application196is reduced. That is, the blockchain network server28need not send the entire software program, code, or instructions representing the difficulty algorithm48used by the blockchain environment20. The blockchain environment20, the blockchain network server28, and/or the proof-of-work (“PoW”) target scheme34need only specify much smaller byte-sized data or information representing the difficulty algorithm48, the difficulty service provider156, the difficulty service158, the difficulty identifier210, and/or the difficulty service resource212. The blockchain environment20need not be burdened with conveying the difficulty algorithm48to the miner system22and other mining nodes. The blockchain environment20and the communications network26convey less packet traffic, so packet travel times and network latency are reduced. Moreover, especially if the miner system22outsources the difficulty operation, the miner system22is relieved from processing/executing the difficulty algorithm48and consumes less electrical power. Again, then, a faster and more expensive graphics processor or even ASIC will not speed up the difficulty operation. The conventional central processing unit36is adequate, reduces costs, and promotes democratic mining. FIGS.29-31illustrate a proof-of-work (“PoW”) identifier mechanism.FIG.29illustrates the miner system22receiving the proof-of-work (“PoW”) target scheme34that is required by the blockchain environment20.
In order to reduce a memory byte size and/or programming line size of the PoW target scheme34and/or the client-side blockchain mining software application196, exemplary embodiments may specify a PoW identifier214associated with the blockchain network's chosen or required PoW scheme. The PoW identifier214may be any alphanumeric combination, hash value, network address, website, or other data/information that uniquely identifies the PoW target scheme34and/or the PoW algorithm52used by the blockchain environment20. AsFIG.29illustrates, the miner system22may receive the PoW identifier214as a specification or parameter associated with the PoW target scheme34and/or the PoW algorithm52. AsFIG.30illustrates, though, the miner system22may receive the packetized message202from the blockchain network server28, and a packet header and/or payload may specify or include the PoW identifier214as a data field, specification, or parameter. The blockchain network server28may thus send the PoW identifier214(via the communications network26) to the miner system22. The PoW identifier214may be packaged as a downloadable component, parameter, or value with the client-side blockchain mining software application196. However, the PoW identifier214may additionally or alternatively be sent to the miner system22at any time via the message202. Because the PoW identifier214may be separately sent from the client-side blockchain mining software application196, the PoW identifier214may be dynamically updated or changed without downloading a new or updated client-side blockchain mining software application196. AsFIG.31illustrates, exemplary embodiments may consult the electronic database78of PoW algorithms. Once the miner system22receives or determines the PoW identifier214, the miner system22may implement the proof-of-work scheme represented by the PoW identifier214. The miner system22may obtain, read, or retrieve the PoW identifier214specified by the client-side blockchain mining software application196and/or packet inspect the message202from the blockchain network server28. Once the PoW identifier214is determined, the miner system22may identify the corresponding blockchain proof-of-work scheme by querying the electronic database78of PoW algorithms for any query parameter (such as the PoW identifier214).FIG.31illustrates the database78of PoW algorithms locally stored in the memory device38of the miner system22. The electronic database78of PoW algorithms may store, reference, or associate the PoW identifier214to its corresponding proof-of-work target scheme34and/or PoW algorithm52. The miner system22may thus perform or execute a database lookup for the PoW identifier214to identify which proof-of-work target scheme34and/or PoW algorithm52is required for miners operating in the blockchain environment20. The miner system22may then retrieve, call, and/or execute the PoW algorithm52using the hash value(s)60, as this disclosure above explained (with reference toFIG.9). Exemplary embodiments may outsource proof-of-work operations. When the miner system22determines the PoW identifier214, the corresponding blockchain proof-of-work scheme may require or specify the PoW service provider120that provides the PoW service122. AsFIG.31also illustrates, the electronic database78of PoW algorithms may map or relate the PoW identifier214to its corresponding PoW service provider120and PoW service122. The miner system22may thus identify a PoW service resource216that provides the PoW service122.
The PoW service resource216, for example, may be an Internet protocol address, website/webpage, and/or uniform resource locator (URL) that is assigned to, or associated with, the PoW service provider120and/or PoW service122. The miner system22may outsource or subcontract the hash value(s)60to the PoW service resource216(perhaps using the service request and service response mechanism explained with reference toFIG.21). Exemplary embodiments may thus be agnostic to proof-of-work. The miner system22may call, request, and/or execute any proof-of-work scheme specified by any client, cryptographic coin, or blockchain network. The miner system22may dynamically switch or mix-and-match different proof-of-work schemes. Once the miner system22determines the proof-of-work target scheme34, the PoW algorithm52, the PoW service provider120, the PoW service122, the PoW identifier214, and/or the PoW service resource216, the miner system22may perform any proof-of-work scheme specified for the blockchain environment20. The blockchain environment20may dynamically change the proof-of-work scheme at any time. The blockchain environment20may flexibly switch, change, and evaluate different proof-of-work strategies, perhaps with little or no impact or effect on hashing and difficulty operations. Moreover, the miner system22may operate within or mine different blockchain environments20without specialized hardware rigs. Exemplary embodiments improve computer functioning. Because exemplary embodiments may only specify the PoW identifier214, the memory byte size consumed by the proof-of-work (“PoW”) target scheme34and/or the client-side blockchain mining software application196is reduced. That is, the blockchain network server28need not send the entire software program, code, or instructions representing the PoW algorithm52used by the blockchain environment20. The blockchain environment20, the blockchain network server28, and/or the proof-of-work (“PoW”) target scheme34need only specify much smaller byte-sized data or information representing the PoW algorithm52, the PoW service provider120, the PoW service122, the PoW identifier214, and/or the PoW service resource216. The blockchain environment20need not be burdened with conveying the PoW algorithm52to the miner system22and other mining nodes. The blockchain environment20and the communications network26convey less packet traffic, so packet travel times and network latency are reduced. Moreover, especially if the miner system22outsources the proof-of-work operation, the miner system22is relieved from processing/executing the PoW algorithm52and consumes less electrical power. Again, then, a faster and more expensive graphics processor or even ASIC will not speed up the proof-of-work operation. The conventional central processing unit36is adequate, reduces costs, and promotes democratic mining. FIG.32illustrates remote retrieval, according to exemplary embodiments. After the miner system22determines the proof-of-work (“PoW”) target scheme34that is required by the blockchain environment20, the miner system22may acquire or download the encryption algorithm46, the difficulty algorithm48, and/or the PoW algorithm52. For example, the miner system22may determine the encryption identifier200(as this disclosure above explains) and send a query to the encryption server154. The query specifies the encryption identifier200.
When the encryption server154receives the query, the encryption server154may query the database70of encryption algorithms for the encryption identifier200. The encryption server154may locally store the database70of encryption algorithms and function as a networked encryption resource for clients. The encryption server154identifies and/or retrieves the corresponding encryption algorithm46. The encryption server154sends a query response to the miner system22, and the query response specifies or includes the corresponding encryption algorithm46. The miner system22may then execute the encryption algorithm46, as above explained. The miner system22may remotely retrieve the difficulty algorithm48. After the miner system22determines the proof-of-work (“PoW”) target scheme34that is required by the blockchain environment20, the miner system22may acquire or download the difficulty algorithm48. For example, the miner system22may determine the difficulty identifier210(as this disclosure above explains) and send a query to the difficulty server160. The query specifies the difficulty identifier210. When the difficulty server160receives the query, the difficulty server160may query the database74of difficulty algorithms for the difficulty identifier210. The difficulty server160may locally store the database74of difficulty algorithms and function as a networked difficulty resource for clients. The difficulty server160identifies and/or retrieves the corresponding difficulty algorithm48. The difficulty server160sends a query response to the miner system22, and the query response specifies or includes the corresponding difficulty algorithm48. The miner system22may then execute the difficulty algorithm48, as above explained. The miner system22may remotely retrieve the PoW algorithm52. After the miner system22determines the proof-of-work (“PoW”) target scheme34that is required by the blockchain environment20, the miner system22may acquire or download the PoW algorithm52. For example, the miner system22may determine the PoW identifier214(as this disclosure above explains) and send a query to the PoW server124. The query specifies the PoW identifier214. When the PoW server124receives the query, the PoW server124may query the database78of PoW algorithms for the PoW identifier214. The PoW server124may locally store the database78of PoW algorithms and function as a networked proof-of-work resource for clients. The PoW server124identifies and/or retrieves the corresponding PoW algorithm52. The PoW server124sends a query response to the miner system22, and the query response specifies or includes the corresponding PoW algorithm52. The miner system22may then execute the PoW algorithm52, as above explained. FIGS.33-34further illustrate the bit shuffle operation92, according to exemplary embodiments. The difficulty algorithm48and/or the proof-of-work algorithm52may perform the bit shuffle operation92to conduct any difficulty and/or proof-of-work. After the hashing algorithm54generates the hash value(s)60(as this disclosure above explains), exemplary embodiments may use the database table90to further deter GPU/ASIC usage. The difficulty algorithm48and/or the proof-of-work algorithm52may implement the bit shuffle operation92on the hash value(s)60. AsFIG.34illustrates, suppose the hash value60is represented by a sequence or series of 256 bit values. The difficulty algorithm48and/or the proof-of-work algorithm52may select an arbitrary portion or number220of the bit values. 
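A minimal sketch of the query/response retrieval pattern above, before the discussion turns to the bit shuffle operation92: the server URL, query parameter, and response format are assumptions, and the 'requests' package is again assumed to be available.

```python
import requests  # assumed to be installed

def fetch_algorithm(server_url: str, algorithm_id: str) -> str:
    """Ask a networked resource for the software behind an identifier."""
    response = requests.get(server_url, params={"id": algorithm_id}, timeout=30)
    response.raise_for_status()
    return response.text  # e.g., source code or a module download link

# Hypothetical difficulty server and identifier.
difficulty_source = fetch_algorithm(
    "https://difficulty.example.com/algorithms", "diff-v2")
```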
The difficulty algorithm48and/or the proof-of-work algorithm52, for example, may call, use, or execute a random number generator (RNG)222to generate one or more random numbers224. As an example, a first random number224may be used to select a random entry94in the database table90. The difficulty algorithm48and/or the proof-of-work algorithm52may then query the database table90for the random entry94and identify/retrieve the corresponding random bits96. The difficulty algorithm48and/or the proof-of-work algorithm52may then select and replace the arbitrary portion or number220of the bit values in the hash value60with the random bits retrieved from the entry94in the database table90. The bit shuffle operation92thus converts the hash value60and generates a resulting randomized hash value226. The difficulty algorithm48and/or the proof-of-work algorithm52may instruct or cause the miner system to repeat the bit shuffle operation92as many times as desired. The randomized hash value226may, or may not, have the same number of 256 bit values. The randomized hash value226may have less than, or more than, 256 bit values. The randomized hash value226may have an arbitrary number of bit values. Once the specified or required number of bit shuffle operations92is complete, the difficulty algorithm48and/or the proof-of-work algorithm52may instruct or cause the miner system to determine the difficulty50and/or the PoW result42(as this disclosure above explains). FIGS.35-36further illustrate the database table90, according to exemplary embodiments. Exemplary embodiments may autonomously or automatically adjust the table byte size102(in bits/bytes) of the database table90to exceed the storage capacity or cache byte size104of the on-board processor cache memory100. The client-side blockchain mining application196, for example, may query the CPU36to determine the storage capacity or cache byte size104of the processor cache memory100. If the table byte size102consumed by the database table90exceeds the storage capacity or cache byte size104of the processor cache memory100, then perhaps no action or resolution is required. That is, the database table90requires more bytes or space than allocated to, or available from, the processor cache memory100(integrated/embedded L1, L2, and L3 SRAM/DRAM cache memory). Any cache read/write operation230will invalidate, thus forcing the processing component (whether a GPU, ASIC, or the CPU36) to incur a cache miss232and endure the cache latency234of requesting and writing blocks of data via the much-slower bus from the system/main memory38. The processing component (whether a GPU, ASIC, or the CPU36) stalls, thus negating the use of a faster GPU or ASIC. Exemplary embodiments may auto-size the database table90. When the client-side blockchain mining application196determines the storage capacity or cache byte size104of the processor cache memory100, the client-side blockchain mining application196may compare the storage capacity or cache byte size104to the table byte size102of the database table90. The storage capacity or cache byte size104of the processor cache memory100, for example, may be subtracted from the table byte size102of the database table90. If the resulting value (in bits/bytes) is positive (greater than zero), then the database table90exceeds the storage capacity or cache byte size104of the processor cache memory100. The client-side blockchain mining application196may thus determine a cache deficit236, ensuring the cache miss232and the cache latency234. 
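A toy Python sketch of the bit shuffle operation92of FIGS.33-34may help before continuing. The table size, entry size, and replacement policy are invented for illustration; a production scheme would likely derive its indices deterministically from the hash itself so that other nodes can verify the result.

```python
import hashlib
import random

# Hypothetical lookup table of random byte entries (a stand-in for table 90).
rng = random.Random(42)                      # seeded so the sketch is repeatable
TABLE = [rng.randbytes(32) for _ in range(1024)]

def bit_shuffle(hash_value: bytes, rounds: int = 4) -> bytes:
    """Replace arbitrary slices of the hash with random bytes from the table."""
    value = bytearray(hash_value)
    for _ in range(rounds):
        entry = rng.randrange(len(TABLE))    # select a random entry
        random_bits = TABLE[entry]
        start = rng.randrange(len(value))    # arbitrary portion to replace
        length = rng.randrange(1, min(len(random_bits), len(value) - start) + 1)
        value[start:start + length] = random_bits[:length]
    return bytes(value)

hash_value = hashlib.sha256(b"blockchain transactions").digest()
randomized_hash_value = bit_shuffle(hash_value)
```

For simplicity this sketch keeps the output the same length as the input, although, as noted above, the randomized hash value226may have an arbitrary number of bit values.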
Exemplary embodiments, however, may determine a cache surplus238. If the resulting value (in bits/bytes) is zero or negative, then the database table90does not exceed the storage capacity or cache byte size104of the processor cache memory100. Whatever the processing component (whether a GPU, ASIC, or the CPU36), some or even all of the database table90could be stored and retrieved from the processor cache memory100, thus giving an advantage to a faster processing component. The client-side blockchain mining application196may thus increase the table byte size102of the database table90. The client-side blockchain mining application196, for example, may add one (1) or more additional database rows240and/or one (1) or more additional database columns242. The client-side blockchain mining application196may increase the table byte size102of the database table90by adding additional entries94, with each added entry94specifying more random bits96. As an example, the client-side blockchain mining application196may call, use, or execute the random number generator222to generate the random number224and then add the additional database row(s)240and/or additional database column(s)242according to the random number224. Exemplary embodiments may thus continually or periodically monitor the storage capacity or cache byte size104of the processor cache memory100and the table byte size102of the database table90. The cache surplus238may trigger a resizing operation to ensure the database table90always exceeds the processor cache memory100. The database table90may be large. The above examples only illustrated a simple configuration of a few database entries94. In actual practice, though, the database table90may have hundreds, thousands, or even millions of rows and columns, perhaps producing hundreds, thousands, millions, or even billions of database entries94. Exemplary embodiments may repeatedly perform the bit shuffle operation92to suit any difficulty or proof-of-work strategy or scheme. The proof-of-work target scheme34, the difficulty algorithm48, and/or the proof-of-work algorithm52may each specify a minimum and/or a maximum number of bit shuffle operations that are performed. Exemplary embodiments may use the XOR/Shift random number generator (RNG)222coupled with the lookup database table90of randomized sets of bytes. The database table90may have any number of 256 byte tables combined and shuffled into one large byte lookup table. Exemplary embodiments may then index into this large table to translate the state built up while hashing into deterministic but random byte values. Using a 1 GB lookup table results in a RAM Hash PoW algorithm that spends over 90% of its execution time waiting on memory (RAM) rather than computing the hash. This means far less power consumption, and ASIC and GPU resistance. The ideal platform for PoW using a RAM Hash is a single-board computer like the Raspberry Pi 4 with 2 GB of memory. Any or all parameters may be specified: the size of the database table90(in bits for the index), the seed used to shuffle the lookup table, the number of rounds to shuffle the table, and the size of the resulting hash. Because the LXRHash is parameterized in this way, as computers get faster and memory caches get larger, the LXRHash can be set to use 2 GB or 16 GB or more. The memory bottleneck to computation is much easier to manage than attempts to find computational algorithms that cannot be executed faster and cheaper with custom hardware, or specialty hardware like GPUs.
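A rough sketch of the surplus/deficit monitoring described above: the cache size, entry size, and growth increment are hypothetical, and a real client would query the CPU for the actual cache byte size104.

```python
import random

CACHE_BYTE_SIZE = 32 * 1024 * 1024   # e.g., 32 MB of cache (hypothetical)
ENTRY_BYTES = 32                     # random bytes per table entry

def ensure_cache_deficit(table: list) -> list:
    """Grow the table until it can no longer fit in the processor cache."""
    rng = random.Random(7)
    while len(table) * ENTRY_BYTES <= CACHE_BYTE_SIZE:    # cache surplus
        # Add more rows of random bytes until a cache miss is guaranteed.
        table.extend(rng.randbytes(ENTRY_BYTES) for _ in range(4096))
    return table
```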
Very large lookup tables will blow the memory caches on pretty much any processor or computer architecture. The size of the database table90can be increased to counter improvements in memory caching. The number of bytes in the resulting hash can be increased for more security (greater hash space), without significantly more processing time. LXRHash may even be fast by using small lookup tables. ASIC implementations for small tables would be very easy and very fast. LXRHash only uses iterators (for indexing), shifts, binary ANDs and XORs, and random byte lookups. The use case for LXRHash is Proof of Work (PoW), not cryptographic hashing. The database table90may have equal numbers of every byte value, and be shuffled deterministically. When hashing, the bytes from the source data are used to build offsets and state that are in turn used to map the next byte of source. In developing this hash, the goal was to produce very randomized hashes as outputs, with a strong avalanche response to any change to any source byte. This is the prime requirement of PoW. Because of the limited time to perform hashing in a blockchain, collision avoidance is important but not critical. More critical is ensuring that engineering the output of the hash isn't possible. Exemplary embodiments yield some interesting qualities. For example, the database table90may be any size, so making a version that is ASIC resistant is possible by using very big lookup tables. Such tables blow the processor caches on CPUs and GPUs, making the speed of the hash dependent on random access of memory, not processor power. Using a 1 GB lookup table, a very fast ASIC can only improve the roughly 10% of the time for the hash that is spent on computation. 90% of the time hashing isn't spent on computation but is spent waiting for memory access. At smaller lookup table sizes, where processor caches work, LXRHash can be modified to be very fast. LXRHash would be an easy ASIC design as it only uses counters, decrements, XORs, and shifts. The hash may be altered by changing the size of the lookup table, the seed, and the size of the hash produced. Change any parameter and you change the space from which hashes are produced. The microprocessor in most computer systems accounts for 10× the power requirements of memory. If we consider PoW on a device over time, then LXRHash is estimated to reduce power requirements by about a factor of 10. Testing has revealed some optimizations. LXRHash is comparatively slow by design (to make PoW CPU bound), but quite a number of use cases don't need PoW, but really just need to validate data matches the hash. So using LXRHash as a hashing function isn't as desirable as simply using it as a PoW function. The somewhat obvious conclusion is that in fact we can use Sha256 as the hash function for applications, and only use the approach as a PoW measure. So in this case, what we do is change how we compute the PoW of a hash. So instead of simply looking at the high order bits and saying that the greater the value the greater the difficulty (or the lower the value the lower the difficulty), we instead define an expensive function to calculate the PoW. Exemplary embodiments may break out PoW measures from cryptographic hashes. The advantage here is that what exactly it means to weigh PoW between miners can be determined apart from the hash that secures a blockchain. Also, a good cryptographic hash provides a much better base from which to randomize PoW even if we are going to use a 1 GB byte map to bound performance by DRAM access.
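The following is a toy, RAM-bound lookup hash in the spirit of the XOR/shift-plus-lookup design described above. It is not the actual LXRHash; the table size (1 MB rather than 1 GB), the seed, and the mixing steps are invented for the sketch.

```python
import random

rng = random.Random(0xABCD)
LOOKUP = rng.randbytes(1 << 20)      # 1 MB byte map (a 1 GB map in practice)
MASK = len(LOOKUP) - 1               # power-of-two size for cheap indexing

def ram_hash(data: bytes, size: int = 32) -> bytes:
    """Lookup-table hash built only from XORs, shifts, and byte lookups."""
    state = bytearray(size)
    offset = 0
    for i, byte in enumerate(data):
        # Source bytes build up an offset that indexes into the byte map.
        offset = ((offset << 5) ^ (offset >> 3) ^ byte) & MASK
        state[i % size] ^= LOOKUP[offset]
    # Finishing rounds so every output byte depends on table lookups.
    for i in range(size):
        offset = ((offset << 5) ^ (offset >> 3) ^ state[i]) & MASK
        state[i] ^= LOOKUP[offset]
    return bytes(state)

print(ram_hash(b"blockchain transactions").hex())
```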
And we could also use past mining, reputation, staking, or other factors to add to PoW at this point. PoW may be represented as a nice standard-sized value. Because exemplary embodiments may use a function to compute the PoW, we can also easily standardize the size of the difficulty. Since bytes that are all 0xFF or all 0x00 are pretty much wasted, we can simply count them and combine that count with the following bytes. This encoding is compact and easily compared to other difficulties in a standard size with plenty of resolution. So with PoW represented as a large number (the bigger, the more difficult), the following rules may be followed, where bit 0 is most significant and bit 63 is least significant: bits 0-3 hold the count of leading 0xFF bytes, and bits 4-63 hold bits of the following bytes. For example, given the hash ffffff7312334c442bf42625f7856fe0d50e4aa45c98d7a391c016b89e242d94, the difficulty is 37312334c442bf42. The computation counts the leading bytes with a value of 0xFF, then calculates the uint64 value of the next 8 bytes. The count is combined with the following bytes by shifting the 8 bytes right by 4, and adding the count shifted left by 60. As computing power grows, more significant bits of the hash can be used to represent the difficulty. At a minimum (a count of 0x0), difficulty is represented by the 4 count bits plus the following 60 bits, i.e., 0+60 => 60 bits of accuracy. At the maximum (a count of 0xF), the 15 leading 0xFF bytes contribute 120 bits, plus the following 60 bits => 120+60=180 bits of accuracy. Sha256 is very well tested as a cryptographic function, with excellent avalanche properties (meaning odds are very close to 50% that any change in the input will flip any particular bit in the resulting hash). Hashing the data being mined by the miners is pretty fast. If an application chooses to use a different hashing function, that's okay as well. FIGS.37-40illustrate a table identifier mechanism, according to exemplary embodiments. When the miner system22communicates with the blockchain network server28, the blockchain network server28may specify the proof-of-work (“PoW”) target scheme34and/or the database table90that is required by the blockchain environment20. For example, in order to reduce a memory byte size and/or programming line size of the proof-of-work (“PoW”) target scheme34and/or the client-side blockchain mining software application196, exemplary embodiments may only specify a table identifier250associated with the blockchain network's chosen or required difficulty and proof-of-work scheme. The table identifier250may be any alphanumeric combination, hash value, network address, website, or other data/information that uniquely identifies the database table90used by the blockchain environment20. The blockchain network server28may thus send the table identifier250(via the communications network26) to the miner system22. The table identifier250may be packaged as a downloadable component, parameter, or value with the client-side blockchain mining software application196. However, the table identifier250may additionally or alternatively be sent to the miner system22, such as via the packetized message202that includes or specifies the table identifier250(explained with reference toFIGS.22-31). Because the table identifier250may be separately sent from the client-side blockchain mining software application196, the table identifier250may be dynamically updated or changed without downloading a new or updated client-side blockchain mining software application196. Exemplary embodiments may consult an electronic database252of tables.
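Before turning to the table identifier mechanism, a short sketch can make the difficulty encoding above concrete; it follows the stated rules (count the leading 0xFF bytes, take the uint64 of the next 8 bytes, shift right by 4, and add the count shifted left by 60) and reproduces the worked example.

```python
def pow_difficulty(hash_bytes: bytes) -> int:
    """Encode difficulty: bits 0-3 hold the leading-0xFF count, bits 4-63 follow."""
    count = 0
    while count < 15 and count < len(hash_bytes) and hash_bytes[count] == 0xFF:
        count += 1
    window = int.from_bytes(hash_bytes[count:count + 8], "big")  # next 8 bytes
    return (count << 60) | (window >> 4)

h = bytes.fromhex(
    "ffffff7312334c442bf42625f7856fe0d50e4aa45c98d7a391c016b89e242d94")
assert pow_difficulty(h) == 0x37312334c442bf42  # matches the example above
```

Running this on the example hash yields 0x37312334c442bf42, matching the difficulty given above.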
When the miner system22receives the table identifier250, the miner system22may use, call, and/or implement the database table90represented by the table identifier250. The miner system22may obtain, read, or retrieve the table identifier250specified by the client-side blockchain mining software application196. The miner system22may additionally or alternatively inspect, read, or retrieve the table identifier250from the message202. Once the table identifier250is determined, the miner system22may identify the corresponding database table90by querying the database252of tables for the table identifier250.FIG.37illustrates the electronic database252of tables locally stored in the memory device38of the miner system22. The database252of tables stores, references, or associates the table identifier250and/or the proof-of-work target scheme34to the corresponding database table90. The miner system22may thus identify and/or retrieve the database table90. The miner system22may then execute the difficulty algorithm48and/or the proof-of-work algorithm using the entries specified by the database table90(as this disclosure above explains). FIG.38illustrates remote retrieval.FIG.38illustrates the database252of tables remotely stored by a table server254and accessed via the communications network26. The table server254may be the only authorized source for the database table90. The table server254may thus operate within the blockchain environment20and provide the latest/current database table90for all miners in the blockchain network. The table server254, however, may be operated on behalf of an authorized third-party vendor or supplier that provides the database table90for all miners in the blockchain network. Once the miner system22determines the table identifier250, the miner system22may send a query to the network address associated with or assigned to the table server254. The query specifies the table identifier250. When the table server254receives the query, the table server254queries the electronic database252of tables for the table identifier250specified by the query. The table server254has a hardware processor and memory device (not shown for simplicity) that stores and executes a query handler software application. The query handler software application causes the table server254to perform a database lookup operation. The table server254identifies the corresponding database table90by querying the database252of tables for the table identifier250. The table server254generates and sends a query response to the network address associated with or assigned to the miner system22, and the query response includes or specifies the database table90that is associated with the table identifier250. The miner system22may thus identify, download, and/or retrieve the database table90. Because the database252of tables may store or reference many different database tables, exemplary embodiments may dynamically switch or change the database table90to suit any objective or performance criterion. Exemplary embodiments may thus need only specify the table identifier250, and the table identifier250may be dynamically changed at any time. The blockchain environment20may flexibly switch, change, and evaluate different database tables, merely by changing or modifying the table identifier250. The blockchain network may thus experiment with different database tables, different difficulty algorithms48, and/or different proof-of-work algorithms52with little or no impact or effect on hashing. 
Should an experimental scheme prove or become undesirable, for whatever reason(s), the blockchain environment20(such as the blockchain network server28) may distribute, assign, or restore a new/different table identifier250(perhaps by updating the client-side blockchain mining software application196and/or distributing/broadcasting the message202, as this disclosure above explains). The blockchain environment20may thus dynamically change the database table90, which may concomitantly change the difficulty algorithm48and/or the proof-of-work algorithm52, for quick evaluation and/or problem resolution. FIG.39further illustrates table services. Here the table server254may serve different blockchain environments20. For example, the table server254may serve miners22aoperating in blockchain environment20a. The table server254may also serve miners22boperating in blockchain environment20b. The table server254may thus be operated on behalf of a table service provider256that provides a table service258to clients and blockchain networks. The table service provider256may receive, generate, and/or store different database tables90, perhaps according to a client's or a blockchain's specification. Each different table90may have its corresponding unique table identifier250. So, whatever the proof-of-work (“PoW”) target scheme (e.g.,34aand34b) and/or the blockchain environment20a-b, the table server254may offer and provide the corresponding database table90. The table service provider256and/or the table server254may thus be an authorized provider or participant in the blockchain environments20a-b. A first miner system22a, for example, operating in the blockchain environment20a, may request and retrieve the database table90athat corresponds to the proof-of-work (“PoW”) target scheme34a. A different, second miner system22b, operating in the blockchain environment20b, may request and retrieve the database table90bthat corresponds to the proof-of-work (“PoW”) target scheme34b. Miners may query the table server254(perhaps by specifying the corresponding table ID250) and retrieve the corresponding database table90. The table service provider256may thus specialize in randomized/cryptographic database tables, and the table server254may serve different blockchain networks. FIG.40further illustrates table services. The blockchain environment20and/or the miner system22may outsource the bit shuffle operation92to the table service provider256. Once the miner system22determines or receives the hash value(s)60(generated by the hashing algorithm54), the miner system22may outsource or subcontract the bit shuffle operation92to the table server254. The client-side blockchain mining software application196may thus cause or instruct the miner system22to generate a bit shuffle service request that is sent to the table service provider256(such as the IP address assigned to the table server254). The bit shuffle service request may specify or include the hash values60. The bit shuffle service request may additionally or alternatively specify or include the table identifier250. The bit shuffle service request may additionally or alternatively specify or include a website, webpage, network address location, or server from which the hash values60may be downloaded, retrieved, or obtained to perform the bit shuffle operation92. While the table service provider256may utilize any mechanism to provide the bit shuffle operation92,FIG.40illustrates a vendor's server/client relationship.
The miner system22sends the bit shuffle service request to the table server254that is operated on behalf of the table service provider256. When the table server254receives the bit shuffle service request, the table server254may query the database252of tables for the table identifier250specified by the bit shuffle service request. The table server254identifies the corresponding database table90. The table server254performs the bit shuffle operation92using the hash value(s)60specified by, or referenced by, the bit shuffle service request. The table server254generates and sends a service result to the network address associated with or assigned to the miner system22, and the service result includes or specifies data or information representing the randomized hash value(s)226. The miner system22may then execute, or outsource, the difficulty algorithm48and/or the proof-of-work algorithm52using the randomized hash value(s)226(as this disclosure above explained). Exemplary embodiments improve computer functioning. The database table90adds cryptographic security by further randomizing the hash value(s)60generated by the hashing algorithm54. Moreover, because the database table90may be remotely located and accessed, exemplary embodiments may only specify the table identifier250. The memory byte size consumed by the proof-of-work (“PoW”) target scheme34and/or the client-side blockchain mining software application196is reduced. That is, the blockchain network server28need not send the entire software program, code, or instructions representing the database table90used by the blockchain environment20. The blockchain environment20, the blockchain network server28, and/or the proof-of-work (“PoW”) target scheme34need only specify the much smaller byte-sized table identifier250. The blockchain environment20need not be burdened with conveying the database table90to the miner system22and to other mining nodes. The blockchain environment20and the communications network26convey less packet traffic, so packet travel times and network latency are reduced. Moreover, especially if the miner system22outsources table operations, the miner system22is relieved from processing/executing the bit shuffle operation92and consumes less electrical power. Again, then, a faster and more expensive graphics processor or even ASIC will not speed up the proof-of-work operation. The conventional central processing unit36is adequate, reduces costs, and promotes democratic mining. Exemplary embodiments improve cryptographic security. If the blockchain environment20, the proof-of-work (“PoW”) target scheme34and/or the client-side blockchain mining software application196specifies use of the database table90, only authorized miners may have access to the actual entries referenced by the database table90. That is, if the miner system22is required to perform, implement, or even execute the bit shuffle operation92, the miner system22must have access to the correct database table90. An unauthorized or rogue entity, in other words, likely could not perform the bit shuffle operation92without access to the correct database table90. Moreover, if the bit shuffle operation92is remotely performed from the miner system22(such as by the table server254, as above explained), perhaps not even the authorized miner system22need have access to the database table90. So, even if the miner system22is authorized to mine or process blockchain transactions32in the blockchain environment20, the authorized miner system22may still be blind to the database table90.
The authorized miner system22, in other words, is operationally reliant on the table server254to perform the bit shuffle operation92that may be required for the difficulty algorithm48and/or for the proof-of-work algorithm52. The miner system22simply cannot solve the mathematical puzzle62without the table service258provided by the table server254. The database table90may thus be proprietary to the blockchain environment20, but unknown and unavailable to even the authorized miner system22for added cryptographic security. FIG.41illustrates agnostic blockchain mining, according to exemplary embodiments. As the reader may now realize, the miner system22may be agnostic to the blockchain environment20. Because the miner system22may be agnostic to encryption, difficulty, and proof-of-work operations, the miner system22may process or mine the blockchain transactions32in multiple blockchain environments20. That is, because the conventional CPU36is adequate for mining blockchain transactions32, no specialized ASIC is required for any particular blockchain environment20. The miner system22may thus participate in multiple blockchain environments20and potentially earn multiple rewards. The miner system22, for example, may participate in the blockchain environment20aand mine the blockchain transactions32asent from the blockchain network server28ato authorized miners in blockchain network260a. The miner system22may thus mine the blockchain transactions32aaccording to the proof-of-work (“PoW”) target scheme34athat is specified by the blockchain environment20a, the blockchain network server28a, and/or the blockchain network260a. The miner system22, however, may also participate in the blockchain environment20band mine the blockchain transactions32bsent from the blockchain network server28bto authorized miners in blockchain network260b. The miner system22may thus mine the blockchain transactions32baccording to the proof-of-work (“PoW”) target scheme34bthat is specified by the blockchain environment20b, the blockchain network server28b, and/or the blockchain network260b. Because exemplary embodiments require no specialized GPU or ASIC, the miner's conventional CPU36may be adequate for mining operations in both blockchain environments20aand20b. The miner system22may thus download, store, and execute the client-side blockchain mining software application196athat is required to mine the blockchain transactions32ain the blockchain environment20a. The miner system22may also download, store, and execute the client-side blockchain mining software application196bthat is required to mine the blockchain transactions32bin the blockchain environment20b. The miner system22may thus call, execute, coordinate, or manage the encryption algorithm46a, the difficulty algorithm48a, and/or the proof-of-work (“PoW”) algorithm52aaccording to the proof-of-work (“PoW”) target scheme34aspecified by the blockchain environment20a. The miner system22may also call, execute, coordinate, or manage the encryption algorithm46b, the difficulty algorithm48b, and/or the proof-of-work (“PoW”) algorithm52baccording to the proof-of-work (“PoW”) target scheme34bspecified by the blockchain environment20b. Because exemplary embodiments require no specialized GPU or ASIC, the miner system22has the hardware processor capability and performance (e.g., clock speed, processor core(s)/thread(s) count, cycles, the on-board cache memory100, thermal profile, electrical power consumption, and/or chipset) to mine in both blockchain environments20aand20b.
The miner system22may participate in multiple blockchain environments20, thus having the capability to earn additional rewards, while also being less expensive to purchase and to operate. FIGS.42-43illustrate virtual blockchain mining, according to exemplary embodiments. Because the miner system22may be agnostic to the blockchain environment20, the miner system22may outsource or subcontract mining operations to a virtual machine (or “VM”)262. For example, the miner system22may implement different virtual machines262, with each virtual machine262dedicated to a particular blockchain environment20. The miner system22, for example, may assign the virtual machine262ato mining the blockchain transactions32asent from the blockchain network server28a. The miner system22may assign the virtual machine262bto mining the blockchain transactions32bsent from the blockchain network server28b. The miner system22may thus be a server computer that participates in multiple blockchain environments20and potentially earns multiple rewards. The miner system22may provide virtual mining resources to multiple blockchain environments20, thus lending or sharing its hardware, computing, and programming resources. WhileFIG.42only illustrates two (2) virtual machines262aand262b, in practice the miner system22may implement any number or instantiations of different virtual machines262, with each virtual machine262serving or mining one or multiple blockchain environments20. So, when the miner system22receives the blockchain transactions32, the miner system22may inspect the blockchain transactions32for the proof-of-work (“PoW”) target scheme34that identifies the corresponding encryption, difficulty, and PoW scheme (such as by consulting the databases70,74, and78, as above explained). The miner system22may additionally or alternatively inspect the blockchain transactions32for the identifiers200,210,214, and250(as this disclosure above explains). Once the blockchain environment20is determined, the miner system22may then assign the blockchain transactions32to the corresponding virtual machine262. FIG.43illustrates a database lookup. When the miner system22determines the PoW scheme34and/or any of the identifiers200,210,214, and250, the miner system22may identify the corresponding virtual machine262. For example, the miner system22may consult an electronic database264of virtual machines. While the database264of virtual machines may have any structure,FIG.43illustrates a relational table266having entries that map or associate the PoW scheme34and/or any of the identifiers200,210,214,250to the corresponding virtual machine262. The miner system22may thus query the electronic database264of virtual machines for any of the PoW scheme34and/or any of the identifiers200,210,214,250and determine the corresponding virtual machine262. Once the virtual machine262is identified (e.g., a memory address or pointer, processor core, identifier, network address and/or service provider, or other indicator), the miner system22may assign the blockchain transactions32to the virtual machine262for mining. The miner system22may thus serve many blockchains. The miner system22, for example, may mine BITCOIN® and other cryptographic coin transactional records. However, the miner system22may also nearly simultaneously mine financial records sent from or associated with a financial institution, inventory/sales/shipping records sent from a retailer, and transactional records sent from an online website.
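A minimal sketch of the FIG.43dispatch just described; the scheme identifiers and virtual machine names below are hypothetical placeholders for the relational table266.

```python
# Hypothetical mapping from PoW scheme identifier to a dedicated VM.
VM_TABLE = {
    "pow-scheme-a": "vm-cryptocoin-mainnet",
    "pow-scheme-b": "vm-retail-ledger",
}

def dispatch(pow_scheme_id: str, transactions: bytes) -> str:
    """Return the virtual machine assigned to mine these transactions."""
    vm = VM_TABLE[pow_scheme_id]          # the database lookup of FIG. 43
    print(f"assigning {len(transactions)} bytes of transactions to {vm}")
    return vm

dispatch("pow-scheme-a", b"cryptocoin transactional records")
```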
The miner system22may participate in multiple blockchain environments20, thus having the capability to earn additional rewards, while also being less expensive to purchase and to operate. FIG.44is a flowchart illustrating a method or algorithm for mining the blockchain transactions32, according to exemplary embodiments. The inputs24(such as the blockchain transactions32) may be received (Block300). The proof-of-work (“PoW”) target scheme34may be received (Block302). The message202may be received (Block304). The identifiers200,210,214, and/or250may be received (Block306). The block40of data may be generated (Block308). The encryption algorithm46(such as the hashing algorithm54) may be identified (Block310) and the output56(such as the hash values60) may be generated by encrypting/hashing the blockchain transactions32and/or the block40of data (Block312). The encryption/hashing service provider150may be identified and the blockchain transactions32and/or the block40of data outsourced (Block314). The output56(such as the hash values60) may be received from the encryption/hashing service provider150(Block316). The difficulty algorithm48may be identified (Block318), the database table90may be generated or identified, and the difficulty50may be generated by executing the difficulty algorithm48(Block320). The difficulty service provider156may be identified and the difficulty calculation outsourced (Block322). The difficulty50may be received from the difficulty service provider156(Block324). The PoW algorithm52may be identified (Block326), the database table90may be generated or identified, and the PoW result42determined by executing the PoW algorithm52(Block328). The PoW service provider120may be identified and the PoW calculation outsourced (Block330). The PoW result42may be received from the PoW service provider120(Block332). The output56(such as the hash values60), the difficulty50, and/or the PoW result42may be compared to the PoW target scheme34(Block334). Exemplary embodiments may win the block40of data. If the output56, the difficulty50, and/or the PoW result42satisfy the PoW target scheme34, then the miner system22may submit the output56, the difficulty50, and/or the PoW result42to the blockchain network server28. The miner system22may itself determine if the miner system22is the first to satisfy the PoW target scheme34, or the miner system22may rely on the blockchain network server28to determine the first solution. When the miner system22is the first solver, the miner system22earns the right to add the block40of data to the blockchain64. However, if the PoW target scheme34is not satisfied, the miner system22implements a change or modification and repeats. FIG.45is a schematic illustrating still more exemplary embodiments.FIG.45is a more detailed diagram illustrating a processor-controlled device350. As earlier paragraphs explained, the miner system22may be any home or business server/desktop160, laptop computer162, smartphone164, tablet computer166, or smartwatch168, as exemplary embodiments allow these devices to have adequate processing and memory capabilities to realistically mine and win the block40of data (as explained with reference toFIG.18). 
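Looking back at the FIG.44flow, a compact sketch of the overall loop might read as follows. SHA-256 and a fixed numeric target stand in for the blockchain's actual hashing algorithm54, difficulty50, and PoW target scheme34, any of which, as described above, may be swapped or outsourced.

```python
import hashlib

TARGET = 1 << 240   # hypothetical numeric proof-of-work target

def mine_block(transactions: bytes) -> tuple[int, bytes]:
    """Hash with an incrementing nonce until the result satisfies the target."""
    nonce = 0
    while True:
        digest = hashlib.sha256(transactions + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < TARGET:   # target scheme satisfied
            return nonce, digest                     # submit to the server
        nonce += 1                                   # modify and repeat

nonce, digest = mine_block(b"blockchain transactions")
```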
Moreover, exemplary embodiments allow any CPU-controlled device to realistically, and profitably, process the blockchain transactions32, thus allowing networked appliances, radios/stereos, clocks, tools (such as OBDII diagnostic analyzers and multimeters), HVAC thermostats and equipment, network switches/routers/modems, and electric/battery/ICE engine cars, trucks, airplanes, construction equipment, scooters, and other vehicles170. Exemplary embodiments may be applied to any signaling standard. Most readers are familiar with the smartphone164and mobile computing. Exemplary embodiments may be applied to any communications device using the Global System for Mobile (GSM) communications signaling standard, the Time Division Multiple Access (TDMA) signaling standard, the Code Division Multiple Access (CDMA) signaling standard, the “dual-mode” GSM-ANSI Interoperability Team (GAIT) signaling standard, or any variant of the GSM/CDMA/TDMA signaling standard. Exemplary embodiments may also be applied to other standards, such as the I.E.E.E. 802 family of standards, the Industrial, Scientific, and Medical band of the electromagnetic spectrum, BLUETOOTH®, low-power or near-field, and any other standard or value. Exemplary embodiments may be physically embodied on or in a computer-readable storage medium. This computer-readable medium, for example, may include CD-ROM, DVD, tape, cassette, floppy disk, optical disk, memory card, memory drive, and large-capacity disks. This computer-readable medium, or media, could be distributed to end-subscribers, licensees, and assignees. A computer program product comprises processor-executable instructions for processing or mining the blockchain transactions32, as the above paragraphs explain. While the exemplary embodiments have been described with respect to various features, aspects, and embodiments, those skilled and unskilled in the art will recognize the exemplary embodiments are not so limited. Other variations, modifications, and alternative embodiments may be made without departing from the spirit and scope of the exemplary embodiments.
144,257
11863306
DETAILED DESCRIPTION Disclosed herein are, inter alia, implementations of systems and techniques for automatic spotlighting in video conferencing. One aspect of this disclosure is a method that includes detecting activity in a participant video feed of a video conference. The method may include determining an activity relevance score corresponding to the detected activity. The method may include adding the participant video feed to a spotlight queue if the activity relevance score is above a relevance threshold. The method may include elevating the participant video feed from an inactive spotlight status to an active spotlight status. The method may include displaying the participant video feed adjacent to a host video feed on a display. Another aspect of this disclosure is a video conference system that includes a server, a host device, and a participant device. The host device may be configured to transmit a host video feed to the server. The participant device may be configured to transmit a participant video feed to the server. The server may be configured to detect activity in the participant video feed. The server may be configured to determine an activity relevance score corresponding to the detected activity. The activity relevance score may indicate how relevant the activity is to conference participants. The server may be configured to add the participant video feed to a spotlight queue if the activity relevance score is above a relevance threshold. The server may be configured to elevate the participant video feed from an inactive spotlight status to an active spotlight status. The server may be configured to display the participant video feed adjacent to the host video feed. Another aspect of this disclosure is a non-transitory computer-readable medium configured to store machine-readable instructions that, when executed by a processor, cause the processor to sample a participant video feed of a spotlight queue. The processor may be configured to determine a relevance score of the participant video feed. The processor may be configured to elevate the participant video feed to an active spotlight status based on the relevance score. The processor may be configured to display the participant video feed adjacent to a host video feed. Another aspect of this disclosure is a non-transitory computer-readable medium configured to store machine-readable instructions that, when executed by a processor, cause the processor to sample a participant video feed of a spotlight queue. The participant video feed may be sampled based on an activity of the participant video feed. The processor may be configured to determine a relevance score of the participant video feed. The processor may be configured to update the participant video feed to an active spotlight status based on the relevance score. The processor may be configured to display the participant video feed adjacent to a host video feed. Another aspect of this disclosure is a method that includes sampling a participant video feed of a spotlight queue. The participant video feed may be sampled based on an activity of the participant video feed. The method may include determining a relevance score of the participant video feed. The method may include updating the participant video feed to an active spotlight status based on the relevance score. The method may include displaying the participant video feed adjacent to a host video feed.
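As a rough illustration of the method aspects above, the following Python sketch queues a participant feed whose activity relevance score exceeds a relevance threshold and elevates the best-queued feed; the threshold value and the scoring input are assumptions for the sketch, not values from the disclosure.

```python
RELEVANCE_THRESHOLD = 0.7     # assumed value; the disclosure leaves this open

spotlight_queue = []          # feeds eligible for spotlighting
active_spotlights = []        # feeds displayed adjacent to the host feed

def process_feed(feed_id, activity_relevance_score):
    """Add the feed to the spotlight queue when its score is above the
    relevance threshold, then elevate the highest-scoring queued feed."""
    if activity_relevance_score > RELEVANCE_THRESHOLD:
        spotlight_queue.append((activity_relevance_score, feed_id))
    if spotlight_queue:
        spotlight_queue.sort(reverse=True)       # highest score first
        _, best = spotlight_queue.pop(0)
        active_spotlights.append(best)           # active spotlight status

process_feed("participant-3", 0.82)
print(active_spotlights)      # ['participant-3']
```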
Another aspect of this disclosure is a video conference system that includes a server, a host device, and a participant device. The host device may be configured to transmit a host video feed to the server. The participant device may be configured to transmit a participant video feed to the server. The server may be configured to sample the participant video feed based on an activity of the participant video feed associated with the spotlight queue. The server may be configured to determine a relevance score of the participant video feed. The server may be configured to update the participant video feed to an active spotlight status based on the relevance score. The server may be configured to display the participant video feed adjacent to a host video feed. In one or more aspects, the processor may be configured to classify the activity. In one or more aspects, the activity may be classified based on a comparison of the activity to a stored database of activities. In one or more aspects, the processor may be configured to determine an activity relevance score corresponding to the activity. In one or more aspects, the activity relevance score may indicate how relevant the activity is to conference participants. In one or more aspects, the activity relevance score may be based on a correlation of a participant activity in the participant video feed relative to a host activity in the host video feed. In one or more aspects, the processor may be configured to initiate a timer when the participant video feed is displayed adjacent to the host video feed. In one or more aspects, the processor may be configured to demote the participant video feed from the active spotlight status when the timer expires. In one or more aspects, a second activity may be detected in a second participant video feed. In one or more aspects, a second relevance score may be determined that corresponds to the second detected activity. In one or more aspects, the second participant video feed may be added to the spotlight queue when the second activity relevance score is above a threshold. In one or more aspects, the second participant video feed may be updated to the active spotlight status. In one or more aspects, the activity relevance score may be compared to the second activity relevance score. In one or more aspects, the second participant video feed may be added to the spotlight queue below the participant video feed when the activity relevance score is above the second activity relevance score. In one or more aspects, the activity relevance score may be based on a correlation of a first object activity in the participant video feed relative to a second object activity in the host video feed. In one or more aspects, a determination of whether the activity is valid may be made based on a duration of the activity. In one or more aspects, the participant video feed may be updated to the active spotlight status based on a determination that the activity is valid. In one or more aspects, a determination may be made as to whether the activity is valid based on a dynamic threshold associated with a number of participant video feeds that have detected activity. In one or more aspects, the spotlight queue may include multiple participant feeds, and two of the multiple participant feeds that have been in the spotlight queue the longest are updated to the active spotlight status.
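The timer-based demotion and score-ordered queueing aspects above might be sketched as follows, assuming wall-clock timing; SPOTLIGHT_SECONDS is an invented parameter, not a value from the disclosure.

```python
import bisect
import time

SPOTLIGHT_SECONDS = 30.0       # assumed spotlight duration

queue = []                     # kept sorted high-to-low by relevance score
active = {}                    # feed_id -> time it entered active spotlight

def enqueue(feed_id, score):
    # Negating the score keeps a higher-scoring feed above a lower one,
    # so a second feed with a lower score lands below the first.
    bisect.insort(queue, (-score, feed_id))

def tick(now=None):
    """Demote any feed whose spotlight timer has expired."""
    now = time.monotonic() if now is None else now
    for feed_id, started in list(active.items()):
        if now - started > SPOTLIGHT_SECONDS:
            del active[feed_id]       # demoted from active spotlight status

enqueue("feed-A", 0.9)
enqueue("feed-B", 0.8)
print([f for _, f in queue])   # ['feed-A', 'feed-B']
```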
In one or more aspects, two of the multiple participant feeds that have the highest activity relevance scores may be updated to the active spotlight status. Conference platforms allow a video conference host to manually spotlight a participant video feed to indicate that the participant is a focal point of the video conference. Manually spotlighting participants requires the host to divert his attention to select participant video feeds that may enhance the content of the video conference. Since the attention of the host is diverted, the content of the video conference can suffer, which can lead to an undesirable experience for the conference participants. Typically, the host would have to designate a secondary person to control the manual spotlight. In many cases, the host may not have a secondary person available to handle the task of monitoring the participant video feeds, determining which participant video feeds are interesting or relevant to conference participants, and selecting the participant video feeds that are interesting or relevant to conference participants for display in the video conference. Implementations of this disclosure address problems such as these by providing methods and systems to automatically select participant video feeds for display in a video conference based on one or more triggers detected in the participant video feeds. Conference platforms that have a multi-camera setup allow a video conference host to manually spotlight a host video feed by selecting a camera feed that he deems as interesting or relevant to the conference participants. Manually selecting a host video feed requires the host to divert his attention to select a host video feed that may enhance the content of the video conference. Since the attention of the host is diverted, the content of the video conference can suffer, which can lead to an undesirable experience for the conference participants. Typically, the host would have to designate the secondary person to control the manual spotlight. In many cases, the host may not have a secondary person available to handle this task. In an example where the conference is an online cooking show, there may be a camera focused on a cutting board or cooking surface and another camera focused on the host; a secondary person, such as a producer, is needed to control the manual spotlight during a specific segment of the show to highlight relevant and interesting content for the video conference participants. Implementations of this disclosure address problems such as these by providing methods and systems to automatically select a host video feed in a video conference based on one or more triggers detected in the host video feeds. The implementations of this disclosure are described in the context of a cooking show for simplicity and clarity, and it is understood that the automatic spotlight methods and systems may be used in any scenario where there are multiple cameras at one location such that the camera that is recording the most relevant or interesting content is automatically spotlighted. The automatic spotlight methods and systems may be used in a scenario where there are multiple locations and a single camera at each location such that the camera that is recording the most relevant or interesting content is automatically spotlighted.
The automatic spotlight methods and systems may be used in a scenario where there are multiple locations and multiple cameras at each location such that the camera that is recording the most relevant or interesting content is automatically spotlighted. In addition to the cooking show context, the automatic spotlight methods and systems may be used in other demonstrative show settings, educational settings, such as a webinar or classroom, or a team-based competition setting, such as a team-building exercise. To describe some implementations in greater detail, reference is first made to examples of hardware and software structures used to implement automatic spotlight methods and systems for video conferencing.FIG.1is a block diagram of an example of an electronic computing and communications system100, which can be or include a distributed computing system (e.g., a client-server computing system), a cloud computing system, a clustered computing system, or the like. The system100connects various clients102and/or phones104to services implemented within or otherwise using a datacenter106. The system100can connect a number of clients102and/or phones104or can have a configuration of clients or phones different from that generally illustrated inFIG.1. For example, and without limitation, the system100can connect hundreds or thousands of clients and/or phones. A client102may be or otherwise refer to one or both of a client device or a client application. Where a client is or refers to a client device, the client can comprise a computing system, which can include one or more computing devices, such as a mobile phone, a tablet computer, a laptop computer, a notebook computer, a desktop computer, or another suitable computing device or combination of computing devices. Where a client instead is or refers to a client application, the client can be an instance of software running on a device. In some implementations, a client can be implemented as a single physical unit or as a combination of physical units. In some implementations, a single physical unit can include multiple clients. A phone104may be or otherwise refer to one or both of a phone device or a phone application such as a softphone. For example, a phone104may be a smart phone or other cell phone which may or may not be configured to run mobile applications, such as a client102. In another example, a phone104may be a desk phone, such as a desktop unit configured to at least send and receive calls and including an input device for receiving a telephone number or extension to dial to and an output device for outputting audio and/or video for a call in progress. In yet another example, the phone104may be a softphone representing telephony functionality of a client102. A phone104may or may not be voice over IP (VOIP)-enabled. The datacenter106includes one or more servers. The datacenter106can represent a geographic location, which can include a facility, where the one or more servers are located. The system100can include a number of datacenters and servers or can include a configuration of datacenters and servers different from that generally illustrated inFIG.1. For example, and without limitation, the system100can include tens of datacenters, and at least some of the datacenters can include hundreds or another suitable number of servers. The datacenter106includes servers used for implementing software services. The datacenter106as generally illustrated includes an application server108, a database server110, and a telephony server112.
The servers108through112can each be a computing system, which can include one or more computing devices, such as a desktop computer, a server computer, or another computer capable of operating as a server, or a combination thereof. A suitable number of each of the servers108through112can be implemented at the datacenter106. In some implementations, one or more of the servers108through112can be a non-hardware aspect implemented on a physical device, such as a hardware server. In some implementations, a combination of two or more of the application server108, the database server110, and the telephony server112can be implemented as a single hardware server or as a single non-hardware server implemented on a single hardware server. In some implementations, the datacenter106can include servers other than or in addition to the servers108through112, for example, a media server, a proxy server, or a web server. The application server108runs web-based software services deliverable to the clients102and at least partially to the phones104. The software services may be or include conference software which enables audio, video, and/or other forms of conferences between multiple devices (e.g., between ones of the clients102, between ones of the phones104, or between ones of the clients102and ones of the phones104), such as to facilitate a conference between the users of those devices. The conference software can include functionality for hosting, presenting, scheduling, joining, or otherwise participating in a conference. The conference software may further include functionality for recording some or all of a conference and/or documenting a transcript for the conference. The application server108may, for example, be or include a unitary Java Virtual Machine (JVM). In some implementations, the application server108can include an application node, which can be a process executed on the application server108. For example, and without limitation, the application node can be executed in order to deliver software services to a client102as part of a software application. The application node can be implemented using processing threads, virtual machine instantiations, or other computing features of the application server108. In some such implementations, the application server108can include a suitable number of application nodes, depending upon a system load or other characteristics associated with the application server108. For example, and without limitation, the application server108can include two or more nodes forming a node cluster. In some such implementations, the application nodes implemented on a single application server108can run on different hardware servers. The database server110stores, manages, or otherwise provides data for delivering software services of the application server108to a client102. In particular, the database server110may implement one or more databases, tables, or other information sources suitable for use with a software application implemented using the application server108. The database server110may include a data storage unit accessible by software executed on the application server108. A database implemented by the database server110may be a relational database management system (RDBMS), an object database, an XML database, a configuration management database (CMDB), a management information base (MIB), one or more flat files, other suitable non-transient storage mechanisms, or a combination thereof.
The system100can include one or more database servers, in which each database server can include one, two, three, or another suitable number of databases configured as or comprising a suitable database type or combination thereof. In some implementations, one or more databases, tables, other suitable information sources, or portions or combinations thereof may be stored, managed, or otherwise provided by one or more of the elements of the system100other than the database server110, for example, the client102or the application server108. The telephony server112enables network-based telephony and web communications from and to ones of the clients102and ones of the phones104which are VOIP-enabled devices configured to send and receive calls over a network, for example, a network114. In particular, the telephony server112includes a session initiation protocol (SIP) zone and a web zone. The SIP zone enables a client102or a VOIP-enabled phone104to send and receive calls over the network114using SIP requests and responses. The web zone integrates telephony data with the application server108to enable telephony-based traffic access to software services run by the application server108. Given the combined functionality of the SIP zone and the web zone, the telephony server112may be or include a cloud-based private branch exchange (PBX) system. The SIP zone receives telephony traffic from a client102or VOIP-enabled phone104and directs same to a destination device. The SIP zone may include one or more call switches for routing the telephony traffic. For example, to route a VOIP call from a first VOIP-enabled client to a second VOIP-enabled client within the same domain or network, the telephony server112may initiate a SIP transaction between the first client and the second client using a PBX. However, in another example, to route a VOIP call from a VOIP-enabled client to a client or phone which is not VOIP-enabled, the telephony server112may initiate a SIP transaction via a VOIP gateway that transmits the SIP signal to a public switched telephone network (PSTN) system for outbound communication to the non-VOIP-enabled client or non-client phone. Hence, the telephony server112may include a PSTN system and may in some cases access an external PSTN system. The telephony server112includes one or more session border controllers (SBCs) for interfacing the SIP zone with one or more aspects external to the telephony server112. In particular, an SBC can act as an intermediary to transmit and receive SIP requests and responses between ones of the clients102and/or between ones of the phones104. When incoming telephony traffic for delivery to a client102or a phone104originating from outside the telephony server112is received, an SBC receives the traffic and forwards it to a call switch for routing to the client102or the phone104. The web zone receives telephony traffic from a client102or a phone104, via the SIP zone, and directs same to the application server108via one or more Domain Name System (DNS) resolutions. For example, a first DNS within the web zone may process a request received via the SIP zone and then deliver the processed request to a web service which connects to a second DNS at or otherwise associated with the application server108. Once the second DNS resolves the request, it is delivered to the destination service at the application server108.
The web zone may also include a database for authenticating access to a software application for telephony traffic processed within the SIP zone, for example, a softphone. The clients102and the phones104communicate with aspects of the datacenter106via the network114. The network114can be or include, for example, the Internet, a local area network (LAN), a wide area network (WAN), a virtual private network (VPN), or another public or private means of electronic computer communication capable of transferring data between a client and one or more servers. In some implementations, a client can connect to the network114via a communal connection point, link, or path, or using a distinct connection point, link, or path. For example, a connection point, link, or path can be wired, wireless, use other communications technologies, or a combination thereof. In some implementations in which one or more of the phones104is not a VOIP-enabled device, those one or more phones104may communicate other than via the network114. The network114, the datacenter106, or another element, or combination of elements, of the system100can include network hardware such as routers, switches, other network devices, or combinations thereof. For example, the datacenter106can include a load balancer116for routing traffic from the network114to various servers associated with the datacenter106. The load balancer116can route, or direct, computing communications traffic, such as signals or messages, to respective elements of the datacenter106. For example, the load balancer116can operate as a proxy, or reverse proxy, for a service, such as a service provided to one or more remote clients, such as one or more of the clients102, by the application server108, and/or another server. Routing functions of the load balancer116can be configured directly or via a DNS. The load balancer116can coordinate requests from remote clients and can simplify client access by masking the internal configuration of the datacenter106from the remote clients. In some implementations, the load balancer116can operate as a firewall, allowing or preventing communications based on configuration settings. Although the load balancer116is depicted inFIG.1as being within the datacenter106, in some implementations, the load balancer116can instead be located outside of the datacenter106, for example, when providing global routing for multiple datacenters. In some implementations, load balancers can be included both within and outside of the datacenter106. FIG.2is a block diagram of an example internal configuration of a computing device200of an electronic computing and communications system, for example, a computing device which implements one or more of the client102, the application server108, the database server110, or the telephony server112of the system100shown inFIG.1. The computing device200includes components or units, such as a processor202, a memory204, a bus206, a power source208, peripherals210, a user interface212, a network interface214, other suitable components, or a combination thereof. One or more of the memory204, the power source208, the peripherals210, the user interface212, or the network interface214can communicate with the processor202via the bus206. The processor202is a central processing unit, such as a microprocessor, and can include single or multiple processors having single or multiple processing cores.
Alternatively, the processor202can include another type of device, or multiple devices, now existing or hereafter developed, configured for manipulating or processing information. For example, the processor202can include multiple processors interconnected in one or more manners, including hardwired or networked, including wirelessly networked. For example, the operations of the processor202can be distributed across multiple devices or units that can be coupled directly or across a local area or other suitable type of network. The processor202can include a cache, or cache memory, for local storage of operating data or instructions. The memory204includes one or more memory components, which may each be volatile memory or non-volatile memory. For example, the volatile memory of the memory204can be random access memory (RAM) (e.g., a DRAM module, such as DDR SDRAM) or another form of volatile memory. In another example, the non-volatile memory of the memory204can be a disk drive, a solid state drive, flash memory, phase-change memory, or another form of non-volatile memory configured for persistent electronic information storage. The memory204may also include other types of devices, now existing or hereafter developed, configured for storing data or instructions for processing by the processor202. In some implementations, the memory204can be distributed across multiple devices. For example, the memory204can include network-based memory or memory in multiple clients or servers performing the operations of those multiple devices. The memory204can include data for immediate access by the processor202. For example, the memory204can include executable instructions216, application data218, and an operating system220. The executable instructions216can include one or more application programs, which can be loaded or copied, in whole or in part, from non-volatile memory to volatile memory to be executed by the processor202. For example, the executable instructions216can include instructions for performing some or all of the techniques of this disclosure. The application data218can include user data, database data (e.g., database catalogs or dictionaries), or the like. In some implementations, the application data218can include functional programs, such as a web browser, a web server, a database server, another program, or a combination thereof. The operating system220can be, for example, Microsoft Windows®, Mac OS X®, or Linux®; an operating system for a mobile device, such as a smartphone or tablet device; or an operating system for a non-mobile device, such as a mainframe computer. The power source208includes a source for providing power to the computing device200. For example, the power source208can be an interface to an external power distribution system. In another example, the power source208can be a battery, such as where the computing device200is a mobile device or is otherwise configured to operate independently of an external power distribution system. In some implementations, the computing device200may include or otherwise use multiple power sources. In some such implementations, the power source208can be a backup battery. The peripherals210include one or more sensors, detectors, or other devices configured for monitoring the computing device200or the environment around the computing device200. For example, the peripherals210can include a geolocation component, such as a global positioning system location unit.
In another example, the peripherals can include a temperature sensor for measuring temperatures of components of the computing device200, such as the processor202. In some implementations, the computing device200can omit the peripherals210. The user interface212includes one or more input interfaces and/or output interfaces. An input interface may, for example, be a positional input device, such as a mouse, touchpad, touchscreen, or the like; a keyboard; or another suitable human or machine interface device. An output interface may, for example, be a display, such as a liquid crystal display, a cathode-ray tube, a light emitting diode display, or other suitable display. The network interface214provides a connection or link to a network (e.g., the network114shown inFIG.1). The network interface214can be a wired network interface or a wireless network interface. The computing device200can communicate with other devices via the network interface214using one or more network protocols, such as using Ethernet, transmission control protocol (TCP), internet protocol (IP), power line communication, an IEEE 802.X protocol (e.g., Wi-Fi, Bluetooth, ZigBee), infrared, visible light, general packet radio service (GPRS), global system for mobile communications (GSM), code-division multiple access (CDMA), Z-Wave, another protocol, or a combination thereof. FIG.3is a block diagram of an example of a conference system300for delivering conference software services in an electronic computing and communications system, for example, the system100shown inFIG.1. The conference system300includes a thread encoding tool302, a switching/routing tool304, and conference software306. The conference system300enables use of the conference software306by clients and phones, such as clients308and310and phone312. For example, one or both of the clients308or310may be a client102shown inFIG.1. In another example, the phone312may be a phone104shown inFIG.1. The conference system300may be implemented using one or more servers of the system100. Although two clients and a phone are shown inFIG.3, other numbers of clients and/or other numbers of phones can connect to the conference system300. A conference includes transmitting and receiving video, audio, and/or other data between clients and/or phones of conference participants. Each of the client308, the client310, and the phone312may connect through the conference system300using separate input streams to enable users thereof to participate in a conference together using the conference software. The conference software306is software for implementing conferences between users of two or more clients and/or phones. For example, the conference software306can be the conference software described above with respect to the application server108ofFIG.1. The conference software306includes a dedicated conference view for each input stream received and processed at the conference system300. For example, a conference view may be represented within a graphical user interface (GUI) of the conference software306by a dedicated box for a given participant. The content of the conference view for a given participant may be dependent upon the source of the input stream for that participant. 
For example, where a participant accesses the conference software306from a client, such as the client308or310, the conference view for the participant may include a video output stream transmitted from the conference system for viewing by all participants based on a video input stream received from the client, although the participant may optionally disable video features to suspend the video output stream from being presented in the conference view. In another example, where a participant accesses the conference software306from a phone, such as the phone312, the conference view for the participant may be limited to a static image or other default background aspect since there is no video output stream produced for that participant. The thread encoding tool302receives video input streams separately from the clients308and310and encodes those video input streams using one or more transcoding tools, such as to produce variant streams at different resolutions. The video input streams may be received over a network, for example, the network114shown inFIG.1, or by a direct wired connection, such as using a universal serial bus (USB) connection or like coupling aspect. After the video input streams are encoded, the switching/routing tool304directs the encoded streams through applicable network infrastructure and/or other hardware to deliver the encoded streams to the conference software306. The conference software306delivers output video streams representative of the respective encoded streams to each connected client, such as the clients308and310, which receive and decode the output video streams to output them for display by video output components of the clients, such as within respective conference views of the conference software306. A user of the phone312participates in the conference using an audio-only connection and may thus be referred to as an audio-only caller. To participate in the conference from the phone312, an audio signal from the phone312is received and processed at a VOIP gateway314to prepare a digital telephony signal for processing at the conference system300. The VOIP gateway314may be part of the system100, for example, implemented at or in connection with a server of the datacenter106. Alternatively, the VOIP gateway314may be located on the user-side, such as in a same location as the phone312. The digital telephony signal is a packet switched signal transmitted to the switching/routing tool304for delivery to the conference software306. The conference software306outputs an audio signal representing a combined audio capture for each participant of the conference for output by an audio output component of the phone312. In some implementations, the VOIP gateway314may be omitted, for example, where the phone312is a VOIP-enabled phone. A conference may be referred to as a video-enabled conference in which video streaming is enabled for one or more participants. The enabling of video streaming for a participant of a conference does not require that the participant activate or otherwise use video functionality for participating in the conference. For example, a conference may still be a video-enabled conference where none of the participants joining using clients turns on their video feed for any portion of the conference. In some cases, however, the conference may have video disabled, such as where each participant connects to the conference using a phone rather than a client, or where a host of the conference selectively configures the conference to exclude video functionality.
In some implementations, other software services may be accessible in connection with a conference implemented using the conference system300. For example, a conference may include or otherwise integrate functionality for instant messaging, unified messaging, and other types of messaging communications between participants of the conference, such as to facilitate a chat or like virtual conversation between users of those participants. Those other software services may be implemented at the conference system300and/or a different aspect of the system100. FIG.4Ais a diagram of an example of a user interface output400to a display showing a video conference. The user interface output400may be displayed on a component of a client or a device, such as client102or phones104shown inFIG.1, or clients308,310and phone312shown inFIG.3. In this example, the user interface output400includes a host video feed402and an automatic spotlight queue404. The automatic spotlight queue404is an area of the display that contains participant video feeds that have the potential to be elevated to an active spotlight status. A conference system, such as conference system300shown inFIG.3, may monitor the participant video feeds in the automatic spotlight queue404to determine which participant video feeds should be elevated to an active spotlight status. As shown inFIG.4A, the automatic spotlight queue404includes participant video feeds404A-404E. Although the automatic spotlight queue404is shown at the bottom of the display, the automatic spotlight queue404may be at the top of the display, the left side of the display, the right side of the display, or any combination thereof. The participant video feeds404A-404E may be automatically added to the automatic spotlight queue404based on motion detection, object detection, or both. Once in the automatic spotlight queue404, the display areas of the participant video feeds404A-404E are monitored for activity, such as one or more gestures, facial expressions, movements, or the like, that corresponds to activity detected in the host video feed402, for specific objects based on one or more objects detected in the host video feed402, or both. A conference system, such as the conference system300shown inFIG.3, may determine a relevance score for each of the participant video feeds404A-404E, for example, based on a correlation of an activity of a participant or an object in a participant video feed to an activity of the host or an object in the host video feed402, an identification of an object in the participant video feed, or any combination thereof. The relevance score may be an indication of how relevant the activity is to conference participants. The participant video feeds404A-404E may be displayed based on a relevance score; for example, a participant video feed may be displayed in the automatic spotlight queue404if the associated relevance score meets a threshold. In the example shown inFIG.4A, the participant video feeds404A-404E may be displayed from highest relevance score to lowest relevance score, where participant video feed404A has the highest relevance score and participant video feed404E has the lowest relevance score. The automatic spotlight queue404may include any number of participant video feeds. AlthoughFIG.4Ashows a single row of participant video feeds, some implementations may include multiple rows of participant video feeds, and each row may include any number of participant video feeds.
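One plausible reading of the correlation-based relevance score is sketched below, assuming each feed's detected activity is summarized as a small numeric feature vector (e.g., motion energy and object-class scores) and using cosine similarity as the correlation; both assumptions are illustrative, not the claimed method.

```python
import math

def relevance_score(participant_activity, host_activity):
    """Cosine similarity between activity vectors, clamped to [0, 1]."""
    dot = sum(p * h for p, h in zip(participant_activity, host_activity))
    norm = (math.sqrt(sum(p * p for p in participant_activity))
            * math.sqrt(sum(h * h for h in host_activity)))
    return 0.0 if norm == 0 else max(0.0, dot / norm)

host = [0.9, 0.1, 0.7]    # e.g., chopping motion, face, knife object
feeds = {"404A": [0.8, 0.2, 0.6], "404E": [0.1, 0.9, 0.0]}
ranked = sorted(feeds, key=lambda f: relevance_score(feeds[f], host),
                reverse=True)
print(ranked)             # 404A ranks above 404E in the queue
```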
FIG.4Bis a diagram of an example of the user interface output400to a display showing an automatic spotlight of a participant video feed in a video conference. This example shows one participant video feed being elevated to an automatic spotlight status; however, in some examples, multiple participant video feeds may be elevated to the automatic spotlight status concurrently. In this example, the participant video feed404A has the highest relevance score, and therefore is automatically spotlighted and elevated to an active spotlight status, where the participant video feed404A is removed from the spotlight queue404and elevated to an area of the display adjacent to the host video feed402. In some examples, the participant video feed that is in the spotlight queue the longest may be elevated to the active spotlight status. When the participant video feed404A is elevated to the active spotlight status, the conference system is configured to automatically resize the participant video feed404A and the host video feed402. In some examples, elevation of a participant video feed to an active spotlight status may be based on a threshold. For example, the participant video feed may be elevated to the active spotlight status when the relevance score meets a threshold value. The threshold value may be a correlation value between the host video feed402and a participant video feed, and may be expressed as a percent correlation. An example threshold value may be a 70 percent correlation of an object detected in the host video feed402and a participant video feed. The threshold value may be a host configurable setting that can be set based on a host preference of system sensitivity to elevate a participant video feed to the active spotlight status. In some examples, more than one participant video feed may be elevated to an active spotlight status simultaneously. In the example shown inFIG.4B, when the participant video feed404A is elevated to the active spotlight status, another participant video feed404F may be added to the automatic spotlight queue404. FIG.4Cis a diagram of an example of a user interface output400to a display showing an automatic spotlight of a team video feed in a video conference. In this example, groups of participants may be assigned a team and placed in pods406A-406F. Each of the pods406A-406F includes participant video feeds of each participant assigned to the respective pod. A relevance score may be determined for each pod406A-406F. The relevance score may be a collective score of all the participants in a pod. The collective score may be a total summed value of the relevance scores of all the participants in a pod. Alternatively, the collective score may be an averaged value of the relevance scores of all the participants in a pod. In the example shown inFIG.4C, the pod406A has the highest relevance score, and therefore is automatically spotlighted and elevated to an active spotlight status, where the pod406A is removed from the spotlight queue404and elevated to an area of the display adjacent to the host video feed402. When the pod406A is elevated to the active spotlight status, the conference system is configured to automatically resize the pod406A and the host video feed402. In some examples, elevation of a participant video feed to an active spotlight status may be based on a threshold; for example, the pod may be elevated to the active spotlight status when the relevance score meets a threshold value.
The threshold value may be a correlation value between the host video feed402and one or more participant video feeds of a pod, and may be expressed as a percent correlation. An example threshold value may be a 70 percent correlation of an object detected in the host video feed402and one or more participant video feeds of the pod. The threshold value may be a host configurable setting that can be set based on a host preference of system sensitivity to elevate participant video feeds of the pod to the active spotlight status. In some examples, more than one pod may be elevated to an active spotlight status simultaneously. In the example shown inFIG.4C, when the pod406A is elevated to the active spotlight status, another pod406F may be added to the automatic spotlight queue404. In another example, a pod may be elevated to the active spotlight status such that it replaces the host video feed402. In this example, the active spotlight status may be based on a relevance score of the pod and a specific participant video feed of the pod. In this example, the specific participant video feed of the pod may be determined to be particularly relevant or interesting and elevated to the active spotlight status such that the specific participant video feed is removed from the pod and displayed adjacent to the pod on the display. The specific participant video feed may be selected for active spotlighting based on a determination that the relevance score for that specific participant video feed is the highest amongst the participant video feeds of the pod. FIG.5is a diagram of an example of foreground detection in a frame500of a video feed for automatic spotlighting of a participant video feed and dynamic video feed positioning. The foreground detection may use an artificial intelligence (AI) algorithm to detect a foreground object502, such as a host, a participant, or another object, in a frame of a host video feed, a participant video feed, or both. The background504is shown in the shaded portion of a display area of the frame500of the video feed. In some examples, objects in the background504may not be detected or tracked. The foreground detection includes detecting one or more portions of a body, such as a head, shoulders, arms, torso, or legs in the video feed, and may be based on a color map to identify areas of the display that are occupied by the one or more portions of the body. In some examples, a dynamic bounding box506may be drawn around a foreground object. The dynamic bounding box506may be used to track the movement of an object in a video feed. The dynamic bounding box506may be automatically resized and/or relocated within the frame500based on a motion of the respective object and proximity to the camera. For example, if the host is the object and extends his arm, the dynamic bounding box will automatically resize to include the extended arm to indicate the increased area of the object relative to the frame500. The foreground detection may include detecting and tracking one or more objects in a video feed. The output of the foreground detection AI algorithm may be used with an object/motion detection AI algorithm to determine a relevance score for a participant video feed, where the relevance score may be based on a correlation of an activity of a participant or movement of an object in the participant video feed to an activity of the host or movement of an object in the host video feed. The relevance score may be an indication of how relevant the activity is to conference participants. 
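A minimal sketch of the collective pod score described above, where the pod score is either the sum or the mean of its members' relevance scores; the pods and score values are invented for illustration.

```python
def pod_score(member_scores, mode="sum"):
    """Collective score of a pod: total summed value or averaged value."""
    if mode == "sum":
        return sum(member_scores)
    return sum(member_scores) / len(member_scores)

pods = {"406A": [0.9, 0.8, 0.7], "406B": [0.4, 0.5, 0.3]}
best = max(pods, key=lambda p: pod_score(pods[p]))
print(best)    # 406A is elevated to the active spotlight status
```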
FIG.6Ais a diagram of an example of a user interface output600to a display showing an automatic spotlight of participant video feeds. The user interface output may be displayed on a component of a client or device, such as client102or phones104shown inFIG.1, or clients308,310and phone312shown inFIG.3. In this example, the user interface output600includes a host video feed602and participant video feeds604A and604B. In this example, the automatic spotlight queue may be hidden to provide a less cluttered viewing option. As shown inFIG.6A, the participant video feeds604A and604B are in an active spotlight status and shown in picture-in-picture views that are overlaid on a display area of the host video feed602. The participant video feeds604A and604B may be dynamically resized and positioned based on a foreground detection as described inFIG.5. For example, the participant video feeds604A and604B may be dynamically resized and positioned based on the foreground detected in the display area of the host video feed602, such that the participant video feeds604A and604B are positioned in an area of the display identified as the background so as not to block the view of the host. FIG.6Bis a diagram of an example of the user interface output600to a display showing dynamic video feed positioning of an automatic spotlight of a participant video feed. As shown inFIG.6B, the host in the host video feed602has extended his arm such that it would be partially blocked from view by the participant video feed604B that is in an active spotlight status. In this example, when the host extends his arm, the detected foreground area is extended to include the extended arm. The conference system is configured to detect that the participant video feed604B is in an area of the display that contains a foreground object, i.e., the extended arm of the host in this example. Based on the detection that the participant video feed604B is in an area of the display that contains a foreground object, the conference system is configured to automatically reposition the participant video feed604B to a background area of the display that does not include the foreground object. In this example, participant video feed604A and participant video feed604B are tracked using bounding boxes such that the conference system is spatially aware of the participant video feeds that are in active spotlight status. The conference system may detect that the bounding box of an object in the host video feed602overlaps with a spatial location of the bounding box of a participant video feed in an active spotlight status, such as participant video feed604B, for example, based on a movement of the object in the host video feed602. Based on a detection that an object of the host video feed602is overlapping a spatial location of the participant video feed604B, the conference system determines a new placement of the participant video feed604B such that the bounding box of the object in the host video feed602no longer overlaps with a spatial location of the bounding box of the participant video feed604B. Repositioning of the participant video feed604B may be based on a time threshold to prevent frequent and disruptive flashing. For example, the participant video feed604B may be repositioned when a duration of time exceeds the time threshold. Alternatively, repositioning of the participant video feed604B may be based on a threshold number of times that the object of the host video feed overlaps with the participant video feed604B.
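The overlap check and threshold-gated repositioning might look like the following sketch, assuming axis-aligned bounding boxes as (x, y, width, height) tuples; REPOSITION_AFTER and the candidate background spots are hypothetical.

```python
REPOSITION_AFTER = 2.0    # assumed seconds of sustained overlap before moving

def overlaps(a, b):
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def maybe_reposition(host_box, pip_box, overlap_started, now, background_spots):
    """Return a new picture-in-picture position once the host's bounding
    box has overlapped the spotlighted feed longer than the threshold."""
    if not overlaps(host_box, pip_box):
        return pip_box, None                  # no overlap; reset the timer
    started = overlap_started if overlap_started is not None else now
    if now - started < REPOSITION_AFTER:
        return pip_box, started               # overlap too brief; avoid flashing
    for spot in background_spots:             # first background area that is clear
        if not overlaps(host_box, spot):
            return spot, None
    return pip_box, started                   # nowhere better to place the feed

pos, _ = maybe_reposition((0, 0, 50, 40), (40, 30, 20, 15), 0.0, 3.0,
                          [(70, 5, 20, 15), (5, 70, 20, 15)])
print(pos)    # repositioned to a background area clear of the host's box
```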
FIG.7is a diagram of an example of a user interface output700to a display showing a show-and-tell mode. The show-and-tell mode reduces the size of a host primary video to a picture-in-picture view and elevates another host camera view to a primary area of the display. The user interface output700may be displayed on a component of a client or device, such as client102or phones104shown inFIG.1, or clients308,310and phone312shown inFIG.3. In this example, the host may have a multiple camera setup. For example, a primary camera may be directed towards the host to provide a front view, and one or more secondary cameras, for example an overhead camera directed at a cutting board surface, may be directed down to provide an overhead view. Other secondary cameras, to the extent any are used, may be used to provide side views, zoom views, or any other view. The user interface output700includes a primary host video feed702A, a secondary host video feed702B, and an automatic spotlight queue704. In this example, the host may select a show-and-tell mode via a touch input on a user interface of a host device. A conference system, such as the conference system300shown inFIG.3, may reduce the size of the primary host video feed702A to a picture-in-picture display over the secondary host video feed702B, as shown inFIG.7. In this example, the secondary host video feed702B may be used as a reference video feed to determine a relevance score. The relevance score may be an indication of how relevant the activity is to conference participants. The automatic spotlight queue704includes participant video feeds704A-704E. One or more of the participants may have a multiple camera setup. For example, a primary camera may be directed towards the participant to provide a front view, and one or more secondary cameras, for example an overhead camera directed at a cutting board surface, may provide an overhead view. Other secondary cameras, to the extent any are used, may be used to provide side views, zoom views, or any other view. In this example, the participants of video feeds704A and704B may have multiple camera setups, and the video feeds of the respective secondary cameras are shown. Although the automatic spotlight queue704is shown at the bottom of the display, the automatic spotlight queue704may be at the top of the display, the left or right side of the display, or any combination thereof. In this example, the conference system is configured to monitor the secondary host video feed702B based on the selection of the show-and-tell mode. The conference system may detect one or more objects in the display area of the secondary host video feed702B, for example, hands706A and706B, and a knife706C. The participant video feeds704A-704E may be automatically added to the automatic spotlight queue704based on motion detection, object detection, or both. Once in the automatic spotlight queue704, the display areas of the participant video feeds704A-704E are monitored for activity, such as one or more gestures, facial expressions, movements, or the like, detected in the secondary host video feed702B, or specific objects based on one or more detected objects in the secondary host video feed702B, such as one or more of hands706A and706B, or knife706C.
The conference system may determine a relevance score for each of the participant video feeds704A-704E, for example, based on a correlation of an activity of a participant or movement of an object in a participant video feed to an activity of the host or movement of an object in the secondary host video feed702B. In this example, participant video feeds704A and704B may be compared to the secondary host video feed702B to determine whether there is a probabilistic match between activities or movements in the secondary host video feed702B and the respective participant video feeds704A and704B. The conference system may use probabilistic matching to determine a statistical probability that an object or activity detected in one video feed represents the same object or activity detected in another video feed. If there is a probabilistic match, the relevance score may be determined based on the probabilistic match. For example, a higher statistical probability that the objects or activities in the video feeds match would indicate a higher relevance score. The probabilistic match may be based on a spatial matching that uses a color map. The color map may be used to map pixel data by color to compare pixel data between video feeds. The colors of each pixel may represent a spatial position of the pixel. The participant video feeds704A-704E may be displayed based on a relevance score. In the example shown inFIG.7, the participant video feeds704A-704E may be displayed from highest relevance score to lowest relevance score, where participant video feed704A has the highest relevance score and participant video feed704E has the lowest relevance score. The automatic spotlight queue704may include any number of participant video feeds. AlthoughFIG.7shows a single row of participant video feeds, some implementations may include multiple rows of participant video feeds, and each row may include any number of participant video feeds. Participants may have an option to toggle between gallery mode and show-and-tell mode. In some examples, the conference system may allow participants to interact with the automatic spotlight queue704and the secondary host video feed702B by dragging-and-dropping a participant video feed from the automatic spotlight queue704to the secondary host video feed702B such that the participant video feed replaces the secondary host video feed702B as the main video feed. In this example, the participant video feed will become the largest video feed on the display, temporarily displacing the secondary host video feed702B until either show-and-tell mode ends or the participant drags another participant video feed to the main video feed. FIG.8is a diagram of an example of a multi-camera setup800for a video conference. The multi-camera setup800includes two or more cameras, for example, a first camera802A, a second camera802B, and a third camera802C. The first camera802A, second camera802B, and third camera802C are connected to a client (not shown), such as client102shown inFIG.1or clients308and310shown inFIG.3. In some examples, one or more of the cameras802A-802C may be clients that are configured to communicate with a conference system, such as conference system300shown inFIG.3. The cameras802A-802C are configured to transmit respective video feeds to the client wirelessly or via a wire. In some examples, one or more of cameras802A-802C may have partially overlapping fields-of-view (FOVs). In other examples, the cameras802A-802C may have non-overlapping FOVs.
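Referring back to the probabilistic color-map matching above, the sketch below uses a coarse per-frame color histogram as a stand-in for the pixel-level color map, with histogram intersection as the match probability; both choices are assumptions for illustration.

```python
from collections import Counter

def color_histogram(pixels, bins=8):
    """Quantize (r, g, b) pixels into bins**3 buckets and normalize."""
    step = 256 // bins
    counts = Counter((r // step, g // step, b // step) for r, g, b in pixels)
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()}

def match_probability(hist_a, hist_b):
    """Histogram intersection in [0, 1]; higher means a likelier match."""
    return sum(min(hist_a.get(k, 0.0), hist_b.get(k, 0.0))
               for k in set(hist_a) | set(hist_b))

host = color_histogram([(200, 30, 30)] * 90 + [(240, 240, 240)] * 10)
feed = color_histogram([(200, 30, 30)] * 80 + [(10, 10, 10)] * 20)
print(round(match_probability(host, feed), 2))   # 0.8 -> higher relevance
```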
In the example ofFIG.8, the first camera802A is directed at a host804. The first camera802A may be configured to capture a wide-angle front view of a recording area. The wide-angle front view may include the host, a first object806, a second object808, or any combination thereof. In this example, the video conference may be for a cooking show where the first object806may be a mixing bowl, and the second object808may be a cooking surface. In some examples, the first camera802A may be configured to capture a zoom view, for example, to frame a display area of a display with the host804, the face of the host804, the first object806, or the second object808. The first camera802A is configured to transmit a camera feed to the conference system. In some examples, the first camera802A may be configured to perform face detection, motion detection, gesture detection, object detection, or any combination thereof. The zoom view of the first camera802A may be based on face detection, motion detection, gesture detection, object detection, audio detection, or any combination thereof. For example, if the first camera802A detects motion in an area of first object806, the first camera802A may automatically adjust the zoom to frame the display area of a display with the first object806. The second camera802B is directed at the first object806in this example. In some examples, the second camera802B may be directed at the host804or the second object808. The second camera802B is configured to capture a side view of the recording area. The second camera802B may be configured to capture a zoom view, similar to the first camera802A. The second camera802B is configured to transmit a camera feed to the conference system. In some examples, the second camera802B may be configured to perform face detection, motion detection, gesture detection, object detection, or any combination thereof. Similar to the first camera802A, the zoom view of the second camera802B may be based on face detection, motion detection, gesture detection, object detection, audio detection, or any combination thereof. For example, if the second camera802B detects motion in an area of second object808, the second camera802B may automatically adjust the zoom to frame the display area of a display with the second object808. The third camera802C is directed at the second object808in this example. In some examples, the third camera802C may be directed at the first object806. The third camera802C is configured to capture an overhead view of the recording area. The third camera802C may be configured to capture a zoom view, similar to the first camera802A. The third camera802C is configured to transmit a camera feed to the conference system. In some examples, the third camera802C may be configured to perform face detection, motion detection, gesture detection, object detection, audio detection, or any combination thereof. Similar to the first camera802A, the zoom view of the third camera802C may be based on face detection, motion detection, gesture detection, object detection, or any combination thereof. For example, if the third camera802C detects motion in an area of second object808, the third camera802C may automatically adjust the zoom to frame the display area of a display with the second object808. The conference system is configured to obtain the respective video feeds from cameras802A-802C, for example, via the client.
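The automatic zoom adjustment described above for cameras802A-802C can be illustrated with a short, hypothetical sketch. It assumes grayscale frames as NumPy arrays, localizes motion by simple frame differencing, and frames the zoom as a padded crop around the moving region; the threshold and padding values are assumptions rather than values from the disclosure.

```python
import numpy as np

def motion_bbox(prev: np.ndarray, curr: np.ndarray, thresh: int = 25):
    """Return (x, y, w, h) around pixels that changed between two gray frames,
    or None if no motion is detected."""
    moved = np.abs(curr.astype(int) - prev.astype(int)) > thresh
    ys, xs = np.nonzero(moved)
    if xs.size == 0:
        return None
    return xs.min(), ys.min(), xs.max() - xs.min(), ys.max() - ys.min()

def zoom_crop(bbox, frame_shape, pad: float = 0.2):
    """Expand the motion box by `pad` on each side and clamp to the frame,
    giving the region a camera would zoom to in order to frame the object."""
    x, y, w, h = bbox
    fh, fw = frame_shape[:2]
    dx, dy = int(w * pad), int(h * pad)
    x0, y0 = max(0, x - dx), max(0, y - dy)
    x1, y1 = min(fw, x + w + dx), min(fh, y + h + dy)
    return x0, y0, x1 - x0, y1 - y0
```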
The conference system is configured to perform face detection, motion detection, gesture detection, audio detection, and object detection on the obtained video feeds. The conference system is configured to select a respective video feed to display on an area of a display, such as a primary area, a secondary area, or both. The selection of the video feeds may be based on one or more of the face detection, motion detection, gesture detection, or object detection. The conference system is configured to switch the video feeds from the primary area to the secondary area, and vice-versa, for example, based on one or more of the face detection, motion detection, gesture detection, or object detection.
FIG.9is a diagram of an example of a user interface output900to a display of multi-camera host video feeds in a video conference. The user interface output900includes a primary area902and a secondary area904. A video feed from any one of cameras802A-802C shown inFIG.8may be displayed in primary area902, for example the video feed from camera802A is shown. A video feed from any one of cameras802A-802C shown inFIG.8may be displayed in the secondary area904, for example the video feed from camera802C is shown. Selection of the video feed to display in the primary area902or the secondary area904may be based on a relevance score. The relevance score may be an indication of how relevant the activity is to conference participants. In the example shown inFIG.9, one secondary area is shown; however, the number of secondary areas may vary depending on the number of cameras that are in use at the host location. The secondary area904may be positioned anywhere on the display such that it does not obstruct the view of the host or a detected object in the foreground of the primary area902. The secondary area may be dynamically positioned and sized based on a determined area of background as discussed inFIGS.5and6A.
To further describe some implementations in greater detail, reference is next made to examples of techniques which may be performed by or using a conference system to perform an automatic spotlight of a participant video feed.FIG.10is a flowchart of an example of a method1000for performing an automatic spotlight of a participant video feed. The method1000can be executed using computing devices, such as the systems, hardware, and software described with respect toFIGS.1-9. The method1000can be performed, for example, by executing a machine-readable program or other computer-executable instructions, such as routines, instructions, programs, or other code. The steps, or operations, of the method1000or another technique, method, process, or algorithm described in connection with the implementations disclosed herein can be implemented directly in hardware, firmware, software executed by hardware, circuitry, or a combination thereof. For simplicity of explanation, the method1000is depicted and described herein as a series of steps or operations. However, the steps or operations in accordance with this disclosure can occur in various orders and/or concurrently. Additionally, other steps or operations not presented and described herein may be used. Furthermore, not all illustrated steps or operations may be required to implement method1000in accordance with the disclosed subject matter. At1002, the method1000includes monitoring for activity in a display area of a video feed. The video feed may be a video feed of a participant in a video conference.
Activity may be monitored in one or more areas of the video feed or the entire display area of the video feed. In some examples, an area determined to be a background area may not be monitored for activity. Monitoring for activity may include periodically sampling the display area of the video feed to detect an activity. For example, the display area of the video feed may be sampled every 10 seconds. The duration of the sampling may be fixed or variable. The periodicity and duration of the sampling may be based on the activity. For example, the sampling periodicity and duration for an activity that has a high level of movement, such as an exercise class, may be lower than the sampling periodicity and duration for an activity that has a low level of movement, such as a webinar. The low level movement activities may require a higher sampling periodicity and duration in order to avoid missing an activity. The type of activity may be determined based on a rate at which changes in motion are detected, based on an analysis of detected movements using an activity AI algorithm, or based on a host setting. At1004, the method1000includes detecting activity in the display area of the video feed. Detection of the activity may include motion detection, facial detection, gesture detection, object detection, or any combination thereof. In some examples, the detected activity may be audio corresponding to the video feed. Detection of the activity may be based on a threshold. For example, motion detection may be associated with a low threshold to be considered a valid activity. A valid activity is an activity that is determined to be relevant to the content of the video conference. For example, known activities that are not relevant to the content of the video conference may be stored at the conference system to compare against detected activities. For instance, a person shifting in a chair may be an activity that is stored as an activity that is not relevant. If a detected activity matches a stored activity that is not relevant, the detected activity will be deemed not relevant. If the detected activity does not match a stored activity that is not relevant, the detected activity will be deemed valid. Gesture detection may be associated with a medium to high threshold to be considered a valid activity. Object detection may be associated with a medium to high threshold to be considered a valid activity. In some examples, the method1000includes starting1006a timer. The timer may be started based on the detection of an activity that satisfies the threshold in the display area of the video feed. The duration of the timer may be fixed or variable. The duration of the timer may be based on the activity. For example, the duration of the timer for an activity that has a high level of movement, such as an exercise class, may be lower than the duration of the timer for an activity that has a low level of movement, such as a webinar. The low level movement activities may require a longer timer duration to avoid a false positive identification of an idle period in a low level movement activity. In some cases, the use of a timer may be omitted in method1000. At1008, the method1000includes determining whether the detected activity is relevant to the content of the video conference.
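The activity-dependent sampling and the validity check of steps1002and1004, described above, can be sketched as follows. The activity labels and numeric values are hypothetical; consistent with the description, a low-movement activity such as a webinar is sampled more often and for longer than a high-movement activity such as an exercise class, and a detected activity is valid only if it does not match a stored non-relevant activity.

```python
# Hypothetical stored examples of known non-relevant activities.
NOT_RELEVANT = {"shifting_in_chair", "sipping_drink"}

def sampling_params(movement_level: str) -> tuple[float, float]:
    """Return (interval_s, duration_s) between/of display-area samples.

    Low-movement feeds are sampled more often and for longer so that a
    brief activity is not missed; the values here are assumptions.
    """
    table = {"high": (30.0, 2.0), "low": (10.0, 5.0)}
    return table.get(movement_level, (10.0, 5.0))

def is_valid(detected_activity: str) -> bool:
    """Step 1004: an activity is valid unless it matches a stored
    non-relevant activity such as a person shifting in a chair."""
    return detected_activity not in NOT_RELEVANT
```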
Determining whether the detected activity is relevant may include determining a relevance score for the video feed, for example, based on a correlation of an activity of a participant or movement of an object in the video feed to an activity of the host or a movement of an object in a host video feed. In this example, the video feed may be compared to the host video feed to determine whether there is a probabilistic match between activities or movements in the host video feed and the video feed. If there is a probabilistic match, the relevance score of the activity may be determined based on the probabilistic match. The probabilistic match may be based on a spatial matching that uses a color map. The color map may be used to map pixel data by color to compare pixel data between video feeds. The colors of each pixel may represent a spatial position of the pixel. If it is determined that the activity is not relevant, the method1000will continue monitoring1002for activity in the video feed. If it is determined that the activity is relevant, the method1000includes adding1010the video feed into a spotlight queue, such as the automatic spotlight queue404shown inFIGS.4A-4Cor the automatic spotlight queue704shown inFIG.7. Video feeds may be displayed in the spotlight queue based on their relevance score. For example, the video feeds may be shown in order of ranking from the highest relevance score to the lowest relevance score. In another example, all video feeds that meet a threshold relevance score may be shown without ranking. The relevance score of the video feed may be compared to a relevance score of a video feed that is in an active spotlight status. At1012, the video feed is elevated to an active spotlight status. Elevating the video feed to an active spotlight status includes removing the video feed from the spotlight queue and elevating the video feed to automatically display the video feed in an area of the display adjacent to the host video feed. When the video feed is elevated to the active spotlight status, the conference system is configured to automatically resize the video feed and the host video feed. In some examples, the conference system may be configured to automatically unmute the audio component of the video feed when elevating the video feed to the active spotlight status. In an example, the video feed that has the highest relevance score may be automatically spotlighted and elevated to an active spotlight status. For example, if the relevance score of the video feed is higher than a relevance score of another video feed in the active spotlight status, the video feed may be elevated to the active spotlight status to replace the other video feed that may be demoted to the spotlight queue. In another example, elevation of a video feed to an active spotlight status may be based on a threshold, for example, the video feed may be elevated to the active spotlight status when the relevance score meets a threshold value. In some examples, the timer may be started when the participant video feed is displayed adjacent to the host video feed. At1014, the method1000includes determining whether the timer has expired. If the timer has not expired, the conference system is configured to maintain1016the active spotlight status for the video feed. If it is determined that the timer has expired, the method1000includes determining1018whether activity is detected in the display area of the participant video feed.
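Steps1010through1020, described above, can be sketched as a small queue manager. This is a hypothetical sketch rather than the disclosed implementation: feeds enter a queue ordered by relevance score, the top feed is elevated to the active spotlight status when its score beats both a threshold and the currently active feed, and an elevated feed is demoted back to the queue when its timer expires without relevant activity. The class name, threshold, and timer duration are assumptions.

```python
import heapq
import time

class SpotlightManager:
    """Sketch of steps 1010-1020: queue by relevance, elevate, demote."""

    def __init__(self, elevate_threshold: float = 0.6, timer_s: float = 15.0):
        self.elevate_threshold = elevate_threshold
        self.timer_s = timer_s
        self._queue = []   # (-score, feed_id): highest score pops first
        self.active = None  # (feed_id, score, elevated_at)

    def add(self, feed_id: str, score: float) -> None:
        heapq.heappush(self._queue, (-score, feed_id))  # 1010: enqueue

    def maybe_elevate(self) -> None:
        """1012: elevate the top feed if it beats the threshold and the
        relevance score of the feed currently in active spotlight status."""
        if not self._queue:
            return
        score = -self._queue[0][0]
        floor = max(self.elevate_threshold,
                    self.active[1] if self.active else 0.0)
        if score >= floor:
            if self.active:  # the displaced feed returns to the queue
                self.add(self.active[0], self.active[1])
            _, feed_id = heapq.heappop(self._queue)
            self.active = (feed_id, score, time.monotonic())

    def on_timer_check(self, activity_detected: bool, relevant: bool) -> None:
        """1014-1020: after the timer expires, keep the active feed only if
        relevant activity is still detected; otherwise demote it."""
        if self.active is None:
            return
        if time.monotonic() - self.active[2] < self.timer_s:
            return                                   # 1016: maintain
        if not (activity_detected and relevant):     # 1018/1022 failed
            self.add(self.active[0], self.active[1])  # 1020: demote
            self.active = None
```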
Detection of the activity may include motion detection, facial detection, gesture detection, object detection, or any combination thereof. Detection of the activity may be based on a threshold. For example, motion detection may be associated with a low threshold to be considered a valid activity. Gesture detection may be associated with a medium to high threshold to be considered a valid activity. Object detection may be associated with a medium to high threshold to be considered a valid activity. If it is determined that activity is not detected at1018, the method1000includes demoting1020the video feed to an inactive spotlight status. Demoting1020the video feed to an inactive spotlight status includes removing the video feed from the display area adjacent to the host video feed and adding the video feed to the spotlight queue. When the video feed is demoted to the inactive spotlight status, the conference system is configured to automatically resize the video feed and the host video feed. For example, the conference system is configured to reduce the size of the video feed as it is entered into the spotlight queue, and increase the size of the host video feed to accommodate the primary display area of the display. In some examples, a next video feed may be automatically elevated to the active spotlight status when the video feed is demoted to the inactive spotlight status. Video feeds in the inactive spotlight status may be displayed in a general gallery of the video conference or the spotlight queue. If it is determined that activity is detected at1018, the method1000includes determining1022whether the detected activity is relevant. In some examples, the activity detected at1018may be different than the activity detected at1004. Determining whether the detected activity is relevant may include determining a relevance score for the video feed, for example, based on a correlation of an activity of a participant or a movement of an object in the video feed to an activity of the host or a movement of an object in a host video feed. In this example, the video feed may be compared to the host video feed to determine whether there is a probabilistic match between activities or movements in the host video feed and the video feed. If there is a probabilistic match, the relevance score may be determined based on the probabilistic match. If it is determined that the activity is not relevant, the method1000will demote1020the video feed to the inactive status. If it is determined that the activity is relevant, the conference system is configured to maintain1016the active spotlight status for the video feed. In some examples, the timer may be restarted if the relevance score is determined to be above a threshold. In some implementations, multiple participant video feeds may be elevated to an active spotlight status concurrently. For example, if two or more participant feeds are determined to have relevance scores above a threshold, the two or more participant feeds may be elevated to the active spotlight status and displayed adjacent to the host video feed. FIG.11is a flowchart of an example of another method1100for performing an automatic spotlight of a participant video feed. The method1100can be executed using computing devices, such as the systems, hardware, and software described with respect toFIGS.1-9. The method1100can be performed, for example, by executing a machine-readable program or other computer-executable instructions, such as routines, instructions, programs, or other code. 
The steps, or operations, of the method1100or another technique, method, process, or algorithm described in connection with the implementations disclosed herein can be implemented directly in hardware, firmware, software executed by hardware, circuitry, or a combination thereof. For simplicity of explanation, the method1100is depicted and described herein as a series of steps or operations. However, the steps or operations in accordance with this disclosure can occur in various orders and/or concurrently. Additionally, other steps or operations not presented and described herein may be used. Furthermore, not all illustrated steps or operations may be required to implement method1100in accordance with the disclosed subject matter. At1102, the method1100includes selecting a participant video feed. Selecting the participant video feed may include the video conference system automatically detecting activity in a display area of the participant video feed. Detection of the activity may include motion detection, facial detection, gesture detection, object detection, or any combination thereof. In some examples, the detected activity may be audio corresponding to the participant video feed. Detection of the activity may be based on a threshold. The threshold may be based on pixel motion. For example, motion detection may be associated with a low threshold to be considered a valid activity. Gesture detection may be associated with a medium to high threshold to be considered a valid activity. Object detection may be associated with a medium to high threshold to be considered a valid activity. At1104, the method1100includes adding the participant video feed to a spotlight queue. In some examples, the participant video feed is added to the spotlight queue based on a determination that the detected activity is a valid activity. The determination of whether the detected activity is a valid activity may be based on a duration of the activity. For example, if the duration of the detected activity meets a threshold, the detected activity may be determined to be a valid activity. The threshold may be a dynamic threshold such that it increases based on the number of participant feeds that have detected motion so as not to overload the spotlight queue with meaningless participant video feeds. Increasing the threshold may lead the conference system to add participant video feeds to the spotlight queue that have a higher probability of having meaningful content. In an example where the number of participant video feeds that have detected motion is low, the threshold may be low, for example, 1-2 seconds, to provide a sufficient number of participant video feeds to the spotlight queue. In an example where the number of participant video feeds with detected motion is high, the threshold may be high, for example, 5-10 seconds, to avoid adding participant video feeds to the spotlight queue that have a low probability of having meaningful content. At1106, the method1100includes sampling the display area of the participant video feed. One or more areas of the participant video feed or the entire display area of the participant video feed may be sampled for activity. The duration of the sampling may be fixed or variable. The duration of the sampling may be based on a classification of the activity. The classification of the activity may be based on a comparison of the detected activity to a stored database of activities.
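The dynamic duration threshold used at1104, described above, can be sketched as a simple interpolation. The 1.5 and 7.5 second endpoints sit inside the 1-2 second and 5-10 second ranges given above; the knee of 20 moving feeds and the linear form are assumptions.

```python
def duration_threshold(feeds_with_motion: int, low_s: float = 1.5,
                       high_s: float = 7.5, knee: int = 20) -> float:
    """Scale the activity-duration threshold with the number of feeds in
    which motion is detected, so a busy conference does not flood the
    spotlight queue with low-value feeds."""
    frac = min(feeds_with_motion / knee, 1.0)
    return low_s + (high_s - low_s) * frac

# e.g. 2 moving feeds -> 2.1 s threshold; 40 moving feeds -> 7.5 s
```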
As an example of the classification described above, the detected activity may be classified as a particular gesture based on a probabilistic match when compared to stored gestures in the database of activities. Detected activities that remain unclassified may be stored and processed through machine learning algorithms for future classification. The periodicity and duration of the sampling may be based on the activity. For example, the sampling periodicity and duration for an activity that has a high level of movement, such as an exercise class, may be lower than the sampling periodicity and duration for an activity that has a low level of movement, such as a webinar. The low level movement activities may require a higher sampling periodicity and duration in order to avoid missing an activity. At1108, the method1100includes elevating the participant video feed to an active spotlight status. Elevating the participant video feed to an active spotlight status includes removing the participant video feed from the spotlight queue and elevating the participant video feed to automatically display the participant video feed in an area of the display adjacent to the host video feed. When the participant video feed is elevated to the active spotlight status, the conference system is configured to automatically resize the participant video feed and the host video feed. Elevation of the participant video feed to the active spotlight status may be based on the duration of the detected activity, the classification of the detected activity, or both. In some examples, the method1100may include determining1110a relevance score of the participant video feed. The relevance score may be based on a correlation of an activity of a participant or object in the participant video feed to an activity of the host or object in a host video feed. In this example, the participant video feed may be compared to the host video feed to determine whether there is a probabilistic match between activities in the host video feed and the participant video feed. If there is a probabilistic match, the relevance score may be determined based on the probabilistic match. A higher correlation value will result in a higher relevance score. In these examples, the participant video feed that has the highest relevance score may be automatically spotlighted and elevated to an active spotlight status. Alternatively, elevation of a participant video feed to an active spotlight status may be based on a threshold, for example, the participant video feed may be elevated to the active spotlight status when the relevance score meets a threshold value. The threshold for elevation may be used in examples where multiple participants are simultaneously elevated to the active spotlight status.
FIG.12is a flowchart of an example of another method1200for performing an automatic spotlight of a participant video feed. The method1200can be executed using computing devices, such as the systems, hardware, and software described with respect toFIGS.1-9. The method1200can be performed, for example, by executing a machine-readable program or other computer-executable instructions, such as routines, instructions, programs, or other code. The steps, or operations, of the method1200or another technique, method, process, or algorithm described in connection with the implementations disclosed herein can be implemented directly in hardware, firmware, software executed by hardware, circuitry, or a combination thereof.
For simplicity of explanation, the method1200is depicted and described herein as a series of steps or operations. However, the steps or operations in accordance with this disclosure can occur in various orders and/or concurrently. Additionally, other steps or operations not presented and described herein may be used. Furthermore, not all illustrated steps or operations may be required to implement method1200in accordance with the disclosed subject matter. At1202, the method1200includes grouping participants into teams. Grouping the participants into teams includes grouping participant video feeds into team video feeds. The team video feeds may be displayed as pods, as shown inFIG.4C. The participants may be grouped automatically based on a current geographic location of the participants or user account data of the participants. User account data may include data associated with a participant, for example, a name, an age, an office location, a department, an organizational position, or any other relevant data that can be used to group participants into teams. In some examples, the participants may be grouped based on a received input, for example a touch input on a user interface or other input obtained via a host conference device. At1204, the method1200includes adding the team video feeds to a spotlight queue. In some examples, the team video feeds are added to the spotlight queue based on a determination that a detected activity in one or more of the participant video feeds of the team video feed is a valid activity. The determination of whether the detected activity is a valid activity may be based on a duration of the activity. For example, if the duration of the detected activity meets a threshold, the detected activity may be determined to be a valid activity. The threshold may be a dynamic threshold such that it increases based on the number of team video feeds that have detected motion so as not to overload the spotlight queue with meaningless team video feeds. Increasing the threshold may lead the conference system to add team video feeds to the spotlight queue that have a higher probability of having meaningful content. In an example where the number of team video feeds that have detected motion is low, the threshold may be low, for example, 1-2 seconds, to provide a sufficient number of team video feeds to the spotlight queue. In an example where the number of team video feeds with detected motion is high, the threshold may be high, for example, 5-10 seconds, to avoid adding team video feeds to the spotlight queue that have a low probability of having meaningful content. At1206, the method1200includes sampling the display areas of the participant video feeds of the team video feeds. One or more areas of the participant video feeds or the entire display areas of the participant video feeds may be sampled for activity. The duration of the sampling may be fixed or variable. The duration of the sampling may be based on a classification of the activity. The classification of the activity may be based on a comparison of the detected activity to a stored database of activities. For example, the detected activity may be classified as a particular gesture based on a probabilistic match when compared to stored gestures in the database of activities. Detected activities that remain unclassified may be stored and processed through machine learning algorithms for future classification. The periodicity and duration of the sampling may be based on the activity.
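Step1202, grouping participant video feeds into team video feeds, can be sketched as a grouping over user account data. The record layout and the choice of a department field as the grouping key are assumptions; as described above, grouping could equally use current geographic location or a received input.

```python
from collections import defaultdict

def group_into_teams(participants: list[dict], key: str = "department") -> dict:
    """Group participant records into team video feed pods by an account
    field (e.g., "department" or "location")."""
    teams = defaultdict(list)
    for p in participants:
        teams[p.get(key, "unassigned")].append(p["feed_id"])
    return dict(teams)

# Example: two pods are formed from three participants.
pods = group_into_teams([
    {"feed_id": "p1", "department": "sales"},
    {"feed_id": "p2", "department": "sales"},
    {"feed_id": "p3", "department": "eng"},
])
```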
As noted above, the sampling periodicity and duration for an activity that has a high level of movement, such as an exercise class, may be lower than the sampling periodicity and duration for an activity that has a low level of movement, such as a webinar. The low level movement activities may require a higher sampling periodicity and duration in order to avoid missing an activity. At1208, the method1200includes determining a relevance score for each team video feed. The relevance score may be based on a correlation of an activity of one or more participants or objects in the respective participant video feeds of the team video feed to an activity of the host or an object in a host video feed. In this example, the participant video feeds may be compared to the host video feed to determine whether there is a probabilistic match between activities in the host video feed and the participant video feeds. If there is a probabilistic match, the relevance score may be determined based on the probabilistic match. A higher correlation value will result in a higher relevance score. In some examples, a relevance score may be determined based on a correlation of activities between participants on the same team. At1210, the method1200includes elevating one or more team video feeds to an active spotlight status. Elevating a team video feed to an active spotlight status includes removing the team video feed from the spotlight queue and elevating the team video feed to an area of the display adjacent to the host video feed. When the team video feed is elevated to the active spotlight status, the conference system is configured to automatically resize the team video feed and the host video feed. Elevation of the team video feed to the active spotlight status may be based on the duration of the detected activity, the classification of the detected activity, or both. In some examples, the team video feed that has the highest relevance score may be automatically spotlighted and elevated to an active spotlight status. Alternatively, elevation of a team video feed to an active spotlight status may be based on a threshold, for example, the team video feed may be elevated to the active spotlight status when the relevance score meets a threshold value. The threshold for elevation may be used in examples where multiple teams are simultaneously elevated to the active spotlight status.
FIG.13is a flowchart of an example of a method1300for automatically switching a video from one camera to a video from another camera for display in a video conference. The method1300can be executed using computing devices, such as the systems, hardware, and software described with respect toFIGS.1-9. The method1300can be performed, for example, by executing a machine-readable program or other computer-executable instructions, such as routines, instructions, programs, or other code. The steps, or operations, of the method1300or another technique, method, process, or algorithm described in connection with the implementations disclosed herein can be implemented directly in hardware, firmware, software executed by hardware, circuitry, or a combination thereof. For simplicity of explanation, the method1300is depicted and described herein as a series of steps or operations. However, the steps or operations in accordance with this disclosure can occur in various orders and/or concurrently. Additionally, other steps or operations not presented and described herein may be used.
Furthermore, not all illustrated steps or operations may be required to implement method1300in accordance with the disclosed subject matter. At1302, the method1300includes obtaining a first video feed from a first camera, such as camera802A shown inFIG.8, for example, and obtaining a second video feed from a second camera, such as camera802B shown inFIG.8, for example. In this example, the first camera and the second camera share a location, such as a recording area of a host. The video feeds from the first camera and the second camera may be obtained by a video conference system, such as video conference system300shown inFIG.3. The video feeds from the first camera and the second camera may be obtained by the video conference system via a client, such as client102shown inFIG.1or clients308and310shown inFIG.3. The first video feed from the first camera may have a first FOV and the second video feed from the second camera may have a second FOV. In some examples, the first FOV may partially overlap with the second FOV. In other examples, the first FOV and the second FOV may be non-overlapping FOVs. At1304, the method1300includes displaying the first video feed. The first video feed may be displayed in a primary area of a display, such as the primary area902shown inFIG.9. At1306, the method1300includes detecting an object or activity in the first video feed, the second video feed, or both. The object may be a person, a face, or another object, such as a knife in a cooking show example. Detecting an object may include identifying an area of the first video feed that may contain an object. The area of the first video feed may be identified using an AI algorithm trained for object detection. In some examples, the area may be identified based on a grouping of pixels, for example a grouping of differently colored pixels. A bounding box may be drawn around the identified area. The identified area may be classified as a particular object based on a probabilistic match when compared to stored objects in a database of objects. Detected objects that remain unclassified may be stored and processed through machine learning algorithms for future classification. In some examples, the identified area may include the area surrounding the detected object. The identified area may be sampled for objects using a machine learning (ML) object detection model. In some examples, the ML object detection model may identify the area to be sampled. In some examples, at1308, the method1300includes determining whether the detected object is relevant. Determining whether the detected object or activity is relevant may include determining a relevance score for the object or activity. The relevance score may be based on a correlation of facial detection and an audio component of a corresponding video feed. For example, the facial detection may be used to determine that a mouth is moving and correlate the movement of the mouth with the audio component of the corresponding video feed to determine that a speaker is speaking to adjust the relevance score of the corresponding video feed. The relevance score may be based on a gaze detection of a video conference host. For example, the video conference system may determine that the video conference host is looking at a particular camera for a minimum time duration and may adjust the relevance score of the corresponding video feed.
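The classification of an identified area against a database of stored objects, described above for step1306, can be sketched as a nearest-match search over feature vectors. The feature representation, the use of cosine similarity as a stand-in for a probabilistic match, and the 0.8 threshold are all assumptions.

```python
import numpy as np

OBJECT_DB = {  # hypothetical stored feature vectors for known objects
    "knife": np.array([0.9, 0.1, 0.2]),
    "mixing_bowl": np.array([0.2, 0.8, 0.5]),
}

def classify(area_features: np.ndarray, min_match: float = 0.8):
    """Compare an identified area's features to the object database and
    return (label, probability) for the best probabilistic match, or
    (None, p) if nothing clears the threshold."""
    best, best_p = None, 0.0
    for label, ref in OBJECT_DB.items():
        p = float(np.dot(area_features, ref) /
                  (np.linalg.norm(area_features) * np.linalg.norm(ref)))
        if p > best_p:
            best, best_p = label, p
    return (best, best_p) if best_p >= min_match else (None, best_p)
```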
The relevance score may be based on a correlation of the object or activity to a set of one or more objects or activities stored in a database that are associated with an activity, a determination of the object type relative to the activity, a duration of time that the object is in motion, a duration of time that the activity persists, or any combination thereof. In a cooking show example, the set of one or more objects stored in a database includes, but is not limited to, a knife, a spoon, a fork, a mixing bowl, a frying pan, a cutting board, or any other object that may be relevant to a cooking show. In a woodworking show example, the one or more objects stored in the database include, but are not limited to, a hammer, a nail, a screw, a screwdriver, a chisel, a block of wood, or any other object that may be relevant to a woodworking show. The detected object may be compared to the set of one or more objects to determine whether there is a probabilistic match between the detected object and the set of one or more objects. If there is a probabilistic match, the relevance score may be determined based on a correlation value of the probabilistic match. A higher correlation value will result in a higher relevance score. The probabilistic match may be based on a spatial matching that uses a color map. The relevance score may be based on a participant engagement score. The participant engagement score may be determined using gaze tracking of participant video feeds. The gaze tracking can be used to determine which areas of the display participants are viewing. For example, the gaze tracking may be used to determine whether the participants are viewing the primary area of the display or the secondary area of the display. The participant engagement score may be based on a number of participants viewing a particular area of the display. For example, if a threshold number of participants are viewing a secondary area of the display, the system may automatically switch the primary area of the display to display another video feed to increase participant engagement. In some examples, facial detection of participant video feeds may be used to determine the participant engagement score. At1310, the method1300includes displaying the second video feed. The second video feed may be automatically displayed in the primary area of the display based on the detection of the object or activity. In some examples, the second video feed may be displayed in the primary area of the display based on the detection of a gesture in the second video feed. In some examples, the second video feed may be displayed in the primary area of the display based on a detection of a face in the second video feed, or an absence of the detection of a face in the first video feed, for example, when the host looks away from the camera such that the face of the host is no longer detected. In some examples, the second video feed may be displayed in the primary area of the display when the relevance score of the second video feed is determined to be greater than the relevance score of the first video feed. In some examples, the video conference system may be configured to automatically adjust the zoom on a video feed when the video feed is displayed in the primary area of the display. In some examples, the first video feed from the first camera may be minimized to the secondary area of the display when the second video feed from the second camera is displayed in the primary area of the display.
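The participant engagement score described above can be sketched as a simple gaze-share computation. Here the input is assumed to hold, per participant, which display area a gaze tracker reports that participant viewing; the 50% switching threshold is hypothetical.

```python
def engagement_switch(gaze_targets: list[str], threshold: float = 0.5) -> bool:
    """Decide whether to promote the secondary area to the primary area
    based on the share of participants viewing the secondary area."""
    if not gaze_targets:
        return False
    watching_secondary = gaze_targets.count("secondary") / len(gaze_targets)
    return watching_secondary >= threshold

# e.g. 3 of 4 participants watching the secondary area triggers a switch
assert engagement_switch(["secondary", "secondary", "secondary", "primary"])
```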
In some examples, the first video feed from the first camera may be terminated in the primary area of the display when the second video feed from the second camera is displayed in the primary area of the display. In some implementations, the video conference system may determine a conference type associated with a video conference that includes the first video feed and the second video feed. The conference type may be based on a predetermined setting. For example, a predetermined setting for a cooking class may be to display the video feed of preparing ingredients when significant motion is detected in the first video feed and display the second video feed when significant motion is not detected in the first video feed. In an example of a funeral, the predetermined setting may be to display the video feed of a speaker and occasionally switch the video feed to a video feed of the audience or a video feed of the casket. The video conference system may detect an activity or an object associated with the conference type in the first video feed, the second video feed, or both, and adjust the respective relevance scores to account for the detected activity, object, or conference type. In some implementations, the relevance score may be adjusted based on the detection of one or more discussion points of a conference plan (e.g., conference agenda). For example, the video conference system may detect that a particular discussion point of the conference plan is being discussed using voice detection and adjust the relevance score based on that detection.
FIG.14is a flowchart of an example of another method1400for automatically switching a video feed from one camera to a video feed from another camera for display in a video conference. The method1400can be executed using computing devices, such as the systems, hardware, and software described with respect toFIGS.1-9. The method1400can be performed, for example, by executing a machine-readable program or other computer-executable instructions, such as routines, instructions, programs, or other code. The steps, or operations, of the method1400or another technique, method, process, or algorithm described in connection with the implementations disclosed herein can be implemented directly in hardware, firmware, software executed by hardware, circuitry, or a combination thereof. For simplicity of explanation, the method1400is depicted and described herein as a series of steps or operations. However, the steps or operations in accordance with this disclosure can occur in various orders and/or concurrently. Additionally, other steps or operations not presented and described herein may be used. Furthermore, not all illustrated steps or operations may be required to implement method1400in accordance with the disclosed subject matter. At1402, the method1400includes obtaining a first video feed from a first camera, such as camera802A shown inFIG.8, for example, and obtaining a second video feed from a second camera, such as camera802B shown inFIG.8, for example. In this example, the first camera and the second camera share a location, such as a recording area of a host. The video feeds from the first camera and the second camera may be obtained by a video conference system, such as video conference system300shown inFIG.3. The video feeds from the first camera and the second camera may be obtained by the video conference system via a client, such as client102shown inFIG.1or clients308and310shown inFIG.3.
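The conference-type adjustment described above can be sketched as a preset lookup that boosts a feed's relevance score when a detected object or activity is associated with the conference type. The preset contents and the 0.2 boost are assumptions, not values from the disclosure.

```python
CONFERENCE_PRESETS = {  # hypothetical objects/activities relevant per type
    "cooking_class": {"knife", "mixing_bowl", "chopping"},
    "funeral": {"speaker", "casket", "audience"},
}

def adjust_relevance(score: float, conference_type: str, detected: str) -> float:
    """Boost a feed's relevance score when a detected object or activity is
    associated with the conference type."""
    if detected in CONFERENCE_PRESETS.get(conference_type, set()):
        return min(1.0, score + 0.2)
    return score
```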
The first video feed from the first camera may have a first FOV and the second video feed from the second camera may have a second FOV. In some examples, the first FOV may partially overlap with the second FOV. In other examples, the first FOV and the second FOV may be non-overlapping FOVs. At1404, the method1400includes detecting a face in the first video feed. Detecting the face in the first video feed may include identifying an area of the first video feed that may contain a face. A bounding box may be drawn around the identified area. The identified area may be classified as a face based on a probabilistic match when compared to stored faces in a database of faces. In some examples, the identified area may include the area surrounding the detected face. The identified area may be sampled for faces using an ML face detection model. The bounding box may be dynamically resized and configured to track the detected face as it moves in the display area. At1406, the method1400includes displaying the first video feed. The first video feed may be displayed in a primary area of a display, such as the primary area902shown inFIG.9. The first video feed may be displayed in the primary area of the display based on the detection of a face. Displaying the first video feed in the primary area of the display may be based on a threshold duration of time that the face is detected. For example, if the face is detected for at least two seconds, the first video feed may be displayed in the primary area of the display. At1408, the method1400includes determining whether the face is detected in the first video feed. This determination may be performed at periodic intervals. If the face is detected at1408, the method includes continuing the display of the first video feed in the primary area of the display at1406. If the face is not detected in the first video feed at1408, the method1400includes displaying1410the second video feed. The second video feed may be displayed in the primary area of the display based on the absence of detecting the face in the first video feed. Displaying the second video feed in the primary area of the display may be based on a threshold duration of time that the face is not detected in the first video feed. For example, if the face is not detected for at least two seconds in the first video feed, the second video feed may be displayed in the primary area of the display. At1412, the method1400includes determining whether a face is detected in the first video feed. This determination may be performed at periodic intervals. If a face is not detected in the first video feed at1412, the method1400includes continuing the display of the second video feed in the primary area of the display at1410. If a face is detected in the first video feed at1412, the method1400includes switching the display to display the first video feed in the primary area of the display at1406. In some examples, the first video feed from the first camera may be minimized to the secondary area of the display when the second video feed from the second camera is displayed in the primary area of the display. In some examples, the first video feed from the first camera may be terminated in the primary area of the display when the second video feed from the second camera is displayed in the primary area of the display.
FIG.15is a block diagram of an example of a conference system1500for performing automatic spotlighting of video feeds in a video conference as described inFIGS.4A-14.
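Before turning to the conference system1500ofFIG.15, the switching logic of method1400can be reduced to a small state machine. The sketch below is a hypothetical illustration: it shows the first video feed while a face is detected in it, falls back to the second video feed otherwise, and requires the condition to persist for a hold time (two seconds in the example above) before switching; the periodic determinations at1408and1412correspond to repeated calls to `update`.

```python
import time

class FaceSwitcher:
    """Sketch of method 1400: face-driven switching with a hold time so a
    momentary detection change does not flip the primary display area."""

    def __init__(self, hold_s: float = 2.0):
        self.hold_s = hold_s
        self.current = "first"
        self._pending = None       # feed we would switch to
        self._pending_since = 0.0

    def update(self, face_in_first: bool) -> str:
        """Call at each periodic face-detection check; returns the feed
        that should occupy the primary area of the display."""
        desired = "first" if face_in_first else "second"
        if desired == self.current:
            self._pending = None   # condition matches; cancel any pending switch
        elif desired != self._pending:
            self._pending, self._pending_since = desired, time.monotonic()
        elif time.monotonic() - self._pending_since >= self.hold_s:
            self.current, self._pending = desired, None
        return self.current
```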
The conference system1500includes a video feed location1502and a server1504. The example inFIG.15shows one video feed location for simplicity and clarity, and it is understood that the conference system1500can include multiple video feed locations. The video feed location1502may be a host video feed location or a participant video feed location. The video feed location1502includes one or more cameras1506A-1506N and a client1508. Cameras1506A-1506N are configured to transmit video streams to client1508, for example wirelessly or through a wired connection. The video streams from cameras1506A-1506N may be associated with host video feeds or participant video feeds. The server1504includes a video feed processing tool1510and conference software1512. The video feed processing tool1510may perform the functions of the thread encoding tool302and switching/routing tool304shown inFIG.3. The conference system1500enables use of the conference software1512by the client1508. The conference system1500may be implemented using one or more servers of the system100shown inFIG.1. The client1508may connect through the server1504using one or more input streams from cameras1506A-1506N to enable users thereof to participate in a conference together using the conference software1512. The conference software1512is software for implementing conferences between users of two or more clients and/or phones. For example, the conference software1512can be the conference software described above with respect to the application server108ofFIG.1. The conference software1512includes a dedicated conference view for each input stream received and processed at the server1504. For example, a conference view may be represented within a GUI of the conference software1512by a dedicated box for a given participant. The content of the conference view for a given host or participant may be dependent upon the source of the input stream for that host or participant. For example, where a host or participant accesses the conference software1512from a client, such as the client1508, the conference view for the host or participant may include a video output stream transmitted from the conference system for viewing by all participants based on a video input stream received from the client, although the participant may optionally disable video features to suspend the video output stream from being presented in the conference view. In another example, where a participant accesses the conference software1512from a phone, the conference view for the participant may be limited to a static image or other default background aspect since there is no video output stream produced for that participant. The video feed processing tool1510receives video input streams from the client1508and encodes those video input streams using one or more transcoding tools, such as to produce variant streams at different resolutions. The video input streams may be received over a network, for example, the network114shown inFIG.1, or by a direct wired connection, such as using a USB connection or like coupling aspect. After the video input streams are encoded, the encoded streams are directed through applicable network infrastructure and/or other hardware to deliver the encoded streams to the conference software1512.
The conference software1512is configured to automatically spotlight video streams using the methods described inFIGS.10-14and deliver output video streams representative of the respective encoded streams to each connected client, which receives and decodes the output video streams to output them for display by video output components of the clients, such as within respective conference views of the conference software1512. A conference may be referred to as a video-enabled conference in which video streaming is enabled for one or more participants. The enabling of video streaming for a participant of a conference does not require that the participant activate or otherwise use video functionality for participating in the conference. For example, a conference may still be a video-enabled conference where none of the participants joining using clients turns on their video feed for any portion of the conference. In some cases, however, the conference may have video disabled, such as where each participant connects to the conference using a phone rather than a client, or where a host of the conference selectively configures the conference to exclude video functionality. In some implementations, other software services may be accessible in connection with a conference implemented using the conference system1500. For example, a conference may include or otherwise integrate functionality for instant messaging, unified messaging, and other types of messaging communications between participants of the conference, such as to facilitate a chat or like virtual conversation between users of those participants. Those other software services may be implemented at the conference system1500and/or a different aspect of the system100. The implementations of this disclosure can be described in terms of functional block components and various processing operations. Such functional block components can be realized by a number of hardware or software components that perform the specified functions. For example, the disclosed implementations can employ various integrated circuit components (e.g., memory elements, processing elements, logic elements, look-up tables, and the like), which can carry out a variety of functions under the control of one or more microprocessors or other control devices. Similarly, where the elements of the disclosed implementations are implemented using software programming or software elements, the systems and techniques can be implemented with a programming or scripting language, such as C, C++, Java, JavaScript, assembler, or the like, with the various algorithms being implemented with a combination of data structures, objects, processes, routines, or other programming elements. Functional aspects can be implemented in algorithms that execute on one or more processors. Furthermore, the implementations of the systems and techniques disclosed herein could employ a number of conventional techniques for electronics configuration, signal processing or control, data processing, and the like. The words “mechanism” and “component” are used broadly and are not limited to mechanical or physical implementations, but can include software routines in conjunction with processors, etc. Likewise, the terms “system” or “tool” as used herein and in the figures, but in any event based on their context, may be understood as corresponding to a functional unit implemented using software, hardware (e.g., an integrated circuit, such as an ASIC), or a combination of software and hardware. 
In certain contexts, such systems or mechanisms may be understood to be a processor-implemented software system or processor-implemented software mechanism that is part of or callable by an executable program, which may itself be wholly or partly composed of such linked systems or mechanisms. Implementations or portions of implementations of the above disclosure can take the form of a computer program product accessible from, for example, a computer-usable or computer-readable medium. A computer-usable or computer-readable medium can be a device that can, for example, tangibly contain, store, communicate, or transport a program or data structure for use by or in connection with a processor. The medium can be, for example, an electronic, magnetic, optical, electromagnetic, or semiconductor device. Other suitable mediums are also available. Such computer-usable or computer-readable media can be referred to as non-transitory memory or media, and can include volatile memory or non-volatile memory that can change over time. A memory of an apparatus described herein, unless otherwise specified, does not have to be physically contained by the apparatus, but is one that can be accessed remotely by the apparatus, and does not have to be contiguous with other memory that might be physically contained by the apparatus. While the disclosure has been described in connection with certain implementations, it is to be understood that the disclosure is not to be limited to the disclosed implementations but, on the contrary, is intended to cover various modifications and equivalent arrangements included within the scope of the appended claims, which scope is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures as is permitted under the law.
11863307
DETAILED DESCRIPTION
Section headings are used in the present document only to facilitate ease of understanding, and the scope of the embodiments and techniques described in each section is not limited to only that section. Furthermore, while 5G terminology is used in some cases to facilitate understanding, the disclosed techniques may also be applied to wireless systems and devices that use communication protocols other than 5G or 3GPP protocols. The disclosed technology may be used to provide channel state estimating and reporting schemes in wireless communication. Some implementations of the disclosed technology provide new channel state estimating and reporting schemes that offer improved flexibility to support a wider variety of scenarios while addressing signaling overhead. As wireless communication technologies evolve, applications in various vertical fields are booming. To meet the increasing communication needs, 5th generation mobile communication (5G) technology, and further enhancements based on 5G, have become the development trend for future wireless communications. With the changes in wireless communication technologies, new scenarios with high mobility and large round trip time (RTT) arise, which the currently existing mechanisms have limited capability to address. For example, in an existing system, for a downlink (DL) transmission, the channel state information (CSI) is obtained based on a dedicated resource, i.e., CSI-RS (CSI reference signal), CRS (cell-specific reference signal), or SSB (SS/PBCH block, for BM only). The reporting and calculation of corresponding content are determined by the configuration information, e.g., CQI reported in a periodic, semi-persistent, or aperiodic way. The current mechanisms, however, face issues such as aging of CSI and RS overhead. First, regarding the aging of CSI, in cases with either high mobility or large RTT, the reported channel state information will have aged by the time it is used for normal scheduling. For example, with high mobility, the channel condition changes rapidly, with varying channel gain and dominant path. Thus, the CSI obtained at time instant t1will be outdated at the next time instant t2, which will cause improper configuration for scheduling, e.g., MCS (Modulation and Coding Scheme) configuration and rank. To handle the aging issue in scenarios with dynamic changes, denser periodic reporting can be considered. However, such reporting will lead to high report overhead, since redundant information with the same report format is sent repeatedly. In scenarios with large RTT, even if the mobility is relatively low or normal, the received CSI has already expired after the long transmission time, which makes it difficult to conduct efficient scheduling. Second, regarding the RS overhead issue, in an existing solution the CSI is calculated based on an associated RS, e.g., CSI-RS or SSB, which is dedicated to CSI calculation, even for CQI updates without changes to the transmission precoding at the BS side. To support more frequent reports, multiple triggers of aperiodic (AP) RS or denser transmission of semi-persistent/periodic RSs are required, which will cause greater RS overhead for CSI reports and updates.
In recognition of the issues above, the disclosed technology provides various implementations of channel state estimation and reporting schemes which can provide more flexible and efficient ways of channel state estimating and reporting. The disclosed technology can also support new scenarios with high mobility and large RTT, e.g., high-speed train (HST) and non-terrestrial communication with satellites. FIG. 1 shows an example of a wireless communication system (e.g., a 5G or NR cellular network) that includes a BS 120 and one or more user equipment (UE) 111, 112, 113. In some embodiments, the UEs access the BS (e.g., the network) using implementations of the disclosed technology (131, 132, 133), which then enables subsequent communication (141, 142, 143) from the BS to the UEs. The UE may be, for example, a smartphone, a tablet, a mobile computer, a machine to machine (M2M) device, an Internet of Things (IoT) device, and so on. FIG. 2 shows an example of a block diagram representation of a portion of an apparatus. An apparatus 210 such as a base station or a wireless device (or UE) can include processor electronics 220 such as a microprocessor that implements one or more of the techniques presented in this document. The apparatus 210 can include transceiver electronics 230 to send and/or receive wireless signals over one or more communication interfaces such as antenna 240. The apparatus 210 can include other communication interfaces for transmitting and receiving data. The apparatus 210 can include one or more memories (not explicitly shown) configured to store information such as data and/or instructions. In some implementations, the processor electronics 220 can include at least a portion of the transceiver electronics 230. In some embodiments, at least some of the disclosed techniques, modules or functions are implemented using the apparatus 210. FIG. 3 shows an example of a channel state estimating and reporting scheme carried out on a communication device based on some implementations of the disclosed technology. In some implementations, the channel state estimating and reporting scheme includes transmitting, by a communication device, a channel state report message that includes at least one of a first field indicative of a value of a parameter or a second field that includes a deviation or a change rate of the parameter. FIG. 4 shows an example of a channel state estimating and reporting scheme carried out on a network device based on some implementations of the disclosed technology. In some implementations, the channel state estimating and reporting scheme includes receiving, by a network device, a channel state report message including at least one of a first field indicative of a value of a parameter or a second field that includes a deviation or a change rate of the parameter. 
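For illustration, the two-field report structure just described can be modeled as follows. This is a minimal Python sketch, not part of the disclosure: the class and field names (ChannelStateReport, FirstField, SecondField, and so on) are hypothetical, and the actual message encoding would be defined by the applicable signaling specification.

```python
from dataclasses import dataclass, field
from typing import Optional, List

@dataclass
class FirstField:
    """Reference values; names mirror the parameters listed in this disclosure."""
    cqi0: Optional[int] = None      # reference CQI
    rsrp0: Optional[float] = None   # reference RSRP
    sinr0: Optional[float] = None   # reference SINR
    pmi: Optional[int] = None
    ri: Optional[int] = None
    cri: Optional[int] = None
    ssb_index: Optional[int] = None

@dataclass
class SecondField:
    """Deviations or change rates relative to the first field."""
    delta_cqi: List[int] = field(default_factory=list)  # Xth deviations of CQI
    r_cqi: Optional[float] = None                       # change rate of CQI
    mode: int = 1                                       # calculation mode for this field

@dataclass
class ChannelStateReport:
    first: Optional[FirstField] = None    # may be omitted if only deltas are reported
    second: Optional[SecondField] = None  # may be omitted if only references are reported
```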
The channel state report message can include at least one of the following:
PMI (Precoding Matrix Indicator)
CQI0: Reference CQI (Channel Quality Indication)
RSRP0: Reference RSRP (Reference Signal Received Power)
SINR0: Reference SINR (Signal to Interference and Noise Ratio)
RI (Rank Indicator)
CRI (CSI-RS Resource Index)
SSB (Synchronization Signal Block) Index
DeltaCQIx: Xth deviation of CQI
RCQI: Change rate of CQI
DeltaRSRPx: Xth deviation of RSRP
RRSRP: Change rate of RSRP
DeltaSINRx: Xth deviation of SINR
RSINR: Change rate of SINR
Granularity index for RCQI, RRSRP, or RSINR
Granularity value for RCQI, RRSRP, or RSINR
The above parameters can be grouped into either a first field or a second field of the channel state report message. The first field of the channel state report message may include at least one of the following: CQI0, RSRP0, SINR0, PMI, RI, CRI, SSB-Index, or a first granularity indicator. The second field of the channel state report message may include at least one of the following: DeltaCQIx, RCQI, DeltaRSRPx, RRSRP, DeltaSINRx, RSINR, a second granularity indicator, or a mode for the second field calculation. The first granularity indicator or the second granularity indicator can be either an index of a value organized in a table or list, or the value itself. The parameters included in the second field of the channel state report message may correspond to the first field of the channel state report message. The reporting of the first field and the reporting of the second field of the channel state report message can be triggered together via a same channel state reporting configuration or separately via different channel state reporting configurations. The triggering mechanism will be discussed in detail later in this document. Configuring Channel State Report Message The configuration of the channel state report message can be performed to include at least one of i) report quantity configuration, ii) report resource configuration and priority rules, or iii) UE capability for CSI calculation and/or report. Item 1: Report Quantity Configuration The report quantity configuration is transmitted in response to at least one of a channel state reporting configuration or a trigger of channel state reporting, which is received by a communication device from a network device, for example, a BS. In the examples below, the configuration information includes at least one of a deviation of CQI (X deviation CQI(s)) or a change rate of CQI (RCQI). Case 1: Deviation of CQI The report quantity includes at least a deviation of CQI. In some implementations, the corresponding set(s) of CRI, PMI, RI, and CQI0 can be reported together with the deviation of CQI. If there is no reporting on CRI, CQI0, PMI, and RI, the deviation CQI is calculated based on the latest or previously reported PMI/RI/CQI0. In this case, the deviation of CQI refers to the CQI change relative to the latest CQI or the reference CQI (CQI0). In some implementations, the channel state reporting configuration may include one reference CQI (CQI0) and X deviation CQI(s), where the X deviation CQI(s) refer to the corresponding PMI/RI/CRI. FIG. 5 shows an example of a channel state estimating and reporting scheme when the channel state reporting configuration includes a deviation CQI(s). Referring to FIG. 5, at time t0, the BS transmits the channel state reporting configuration including a deviation CQI(s) to the UE. 
At time t1 (t1 ≥ t0), the channel state report message is triggered by the BS in any of a periodic, semi-persistent (semi), or aperiodic (AP) manner. The triggering mechanism can include at least one of the following: 1) The first field and the second field of the channel state report message are triggered by a single signaling if one channel state reporting configuration contains parameters corresponding to the first field and the second field. 2) The first field and the second field can be triggered by different signalings if the parameters corresponding to the first field and the second field belong to different channel state reporting configurations, respectively. 3) The first field and the second field can be triggered by different signalings even if one channel state reporting configuration contains parameters corresponding to the first field and the second field. For the case of 3), the trigger can be done in two steps. For the triggering mechanisms 1) and 3), although all parameters corresponding to the first field and the second field are included within one channel state reporting configuration as the report quantity, whether the triggering is done based on a single signaling or different signalings depends on the report type and the scheduling mechanism from the BS. In some implementations in which the report type is periodic, both the first field and the second field will be triggered by a same signaling once it is configured. In some implementations in which the report type is aperiodic, a first DCI can be used to trigger the report of the first field and, if necessary, a second-level DCI can be used to trigger the report of the second field. Although not shown in FIG. 5, a mechanism to disable either the first field or the second field of the channel state report message can be provided after the triggering of the channel state report message. In some implementations, when the report type is periodic, to save report resources, even if the first field and the second field are triggered together in one signaling, a MAC-CE or DCI can be used to disable the first field or the second field of the channel state report message. In various implementations, the condition to disable one of the first field or the second field of the channel state report message can be set accordingly based on communication scenarios. In response to the triggering, the UE transmits the channel state report message including at least one of the first field (e.g., reference CQI) or the second field (e.g., a deviation CQI(s)). In some implementations, the report on the first field of the channel state report message and the report on the second field of the channel state report message may be transmitted at a same time. In some implementations, the report on the first field of the channel state report message and the report on the second field of the channel state report message may be transmitted at different times. To report the first field and the second field at different times, time offsets may be configured such that the time requirements on calculation/reporting of the first field and the second field of the channel state report message are satisfied. In the specific example shown in FIG. 5, the report of the first field of the channel state report message and the report of the second field of the channel state report message are transmitted at different times. In addition, in FIG. 5, successive reports on the second field of the channel state report message are themselves transmitted at different times. 
As shown in FIG. 5, at time t2, the UE transmits the first field of the channel state report message, which includes at least one of CQI_0, PMI, or RI. At time t3, which is obtained from the equation t2 + Δt*1, the UE transmits the second field of the channel state report message, Delta CQI_1. At time t4, which is obtained from the equation t2 + Δt*2, the UE transmits the second field of the channel state report message, Delta CQI_2. At a time obtained from the equation t2 + Δt*x, the UE transmits the second field of the channel state report message, Delta CQI_X. In this example, Δt refers to the preconfigured report time offset indicating the time distance between two adjacent transmissions of the report of deviation CQI. The second field calculation, for example, calculating the deviation CQI DeltaCQI_x, can be performed by using at least one of Equation 1 (for mode 1) or Equation 2 (for mode 2):

Delta_CQI_x = CQI_measureNew − CQI_0   [Equation 1]

Delta_CQI_x = CQI_measureNew − (Σ_{i=1}^{x−1} DeltaCQI_i + CQI_0)   [Equation 2]

In Equations 1 and 2, CQI_measureNew refers to the latest CQI calculated at the UE side before the reporting of DeltaCQI_x, based on values of the corresponding PMI/RI/CRI. In some implementations, CQI_measureNew is calculated after the latest reporting with gap Δt_z1 and before the instant of reporting of DeltaCQI_x with gap Δt_z2. In some implementations, CQI_0 and DeltaCQI_x are quantized with different granularities and bit lengths. In some implementations, the bit length for DeltaCQI_x is smaller than that for CQI_0. Case 2: Change Rate of CQI The report quantity includes at least a change rate of CQI (RCQI). In some implementations, the corresponding set(s) of CRI, CQI, PMI, and RI can also be reported together with RCQI. If there is no reporting on the corresponding set(s), RCQI is calculated based on the latest or previously reported PMI/RI/CQI. In this case, RCQI is calculated from the latest or previously reported PMI/RI/CQI with time granularity Gt or frequency granularity Gf. For the determination of the granularity, at least one of the following ways can be considered: 1. The granularity may be configured by the BS with L values (L >= 1). If more than one value is configured (i.e., L > 1), the UE will select one of them and report the selected index together with either the CRI/CQI/PMI/RI or RCQI. If only one value is configured (i.e., L = 1), the UE does not need to report its value. 2. The granularity may be configured from a predefined table with L values (L > 1). In this case, the UE will select one of the values based on the signaling from the BS. 3. When there is no pre-configured value, the UE will determine the value by itself and directly report the determined value to the BS. With the granularity determined, the second field calculation, for example, calculating the change rate of CQI, can be performed by using, for example, Equation 3 (for mode 1):

RCQI = (CQI_measureNew − CQI_0) / granularity   [Equation 3]

In Equation 3, CQI_measureNew refers to the latest CQI calculated at the UE side before the reporting of RCQI, based on values of the corresponding PMI/RI/CRI. In some implementations, CQI_measureNew is calculated after the latest reporting with gap Δt_z1 and before the instant of reporting of RCQI with gap Δt_z2. In some implementations, CQI_0 and RCQI are quantized with different granularities and bit lengths. In some implementations, the bit length for RCQI is smaller than that for CQI_0. 
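The three calculations above can be expressed directly in code. The following is a minimal Python sketch of Equations 1 to 3, assuming plain numeric CQI values; quantization, bit lengths, and the Δt_z1/Δt_z2 measurement windows described above are deliberately omitted, and the function names are hypothetical.

```python
from typing import Sequence

def delta_cqi_mode1(cqi_measure_new: float, cqi0: float) -> float:
    # Equation 1: deviation relative to the reference CQI only.
    return cqi_measure_new - cqi0

def delta_cqi_mode2(cqi_measure_new: float, cqi0: float,
                    prior_deltas: Sequence[float]) -> float:
    # Equation 2: deviation relative to the reference CQI plus all
    # previously reported deviations DeltaCQI_1 .. DeltaCQI_{x-1}.
    return cqi_measure_new - (sum(prior_deltas) + cqi0)

def r_cqi(cqi_measure_new: float, cqi0: float, granularity: float) -> float:
    # Equation 3: change rate of CQI over the configured time or
    # frequency granularity (Gt or Gf).
    return (cqi_measure_new - cqi0) / granularity
```

For example, with CQI_0 = 10 and prior deltas (1, −2), a newly measured CQI of 12 gives delta_cqi_mode2(12, 10, [1, -2]) = 12 − (−1 + 10) = 3.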
In some embodiments, the granularity refers to a certain time duration or frequency unit, which is used to assess the variation of a parameter, e.g., CQI, over a given time duration or frequency unit. In some embodiments, multiple granularities can also be determined. In this case, each of the granularities corresponds to a different domain, e.g., the time or frequency domain. The determination of the granularity can be made either by the configuration or by UE selection. Once the report is triggered in any of the periodic/semi/AP manners, the content of the required report, e.g., including the parameters (e.g., reference CQI and RCQI), will be reported at either different times or a same time. FIG. 6 shows an example of a channel state estimating and reporting scheme when a channel state reporting configuration includes RCQI. In FIG. 6, at time t0, the BS transmits the channel state reporting configuration including at least RCQI to the UE. At time t1 (t1 ≥ t0), the configuration is triggered by the BS in any of the periodic, semi, or AP manners. The triggering mechanism has already been explained with reference to FIG. 5, and similar explanations can be applied here. In response to the triggering, at time t2, the UE transmits the report of the first field of the channel state report message, which includes at least one of CQI_0, PMI, or RI. At time t3, which is obtained from the equation t2 + Δt, the UE transmits the second field of the channel state report message, which includes RCQI. Thus, in the specific implementation shown in FIG. 6, the reference CQI and RCQI are reported at different times, t2 and t3. As mentioned for the example shown in FIG. 5, to report the first field and the second field at different times, time offsets may be configured such that the time requirements on calculation/reporting of the first field and the second field of the channel state report message are satisfied. In some implementations, the preconfigured report offset Δt is greater than Gt. FIG. 7 shows another example of a channel state estimating and reporting scheme when a channel state reporting configuration includes RCQI. In FIG. 7, at time t0, the BS transmits the channel state reporting configuration including at least RCQI to the UE. At time t1 (t1 ≥ t0), the configuration is triggered by the BS in any of the periodic, semi, or AP manners. The triggering mechanism has already been explained with reference to FIG. 5, and similar explanations can be applied here. In response to the triggering, at time t2, which is obtained from the equation t1 + Δt, the UE transmits the report of the first field of the channel state report message, which includes at least one of CQI_0, PMI, or RI, and the second field of the channel state report message, which includes RCQI. Thus, in the specific implementation shown in FIG. 7, the reference CQI and RCQI are reported at the same time, t2. In some implementations, the preconfigured report offset Δt is greater than Gt. In some implementations, RCQI can be calculated by considering difference(s) of CQI at time instants with the gap Gt. Filtering over multiple values can be considered as one solution. In some implementations, the valid duration of the parameters, e.g., reference CQI and RCQI, can also be reported by the UE according to the channel state reporting configuration. Once the timer expires or the validity window is exceeded, the UE will either update the value with another report, or the BS should trigger another round of reporting. 
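To illustrate the timing relationships in FIGS. 5 to 7, the following Python sketch computes the second-field transmission instants t2 + Δt·x and checks a reported value against its validity window. The helper names and the numeric representation of time are assumptions made for illustration.

```python
def report_times(t2: float, dt: float, num_deltas: int) -> list[float]:
    # Times at which DeltaCQI_1 .. DeltaCQI_X are transmitted,
    # following t_x = t2 + dt * x from the FIG. 5 example.
    return [t2 + dt * x for x in range(1, num_deltas + 1)]

def is_valid(report_time: float, now: float, valid_duration: float) -> bool:
    # A reported value is usable only within its validity window;
    # afterwards the UE re-reports or the BS re-triggers reporting.
    return now - report_time <= valid_duration
```

For instance, report_times(2.0, 0.5, 3) yields [2.5, 3.0, 3.5] for three successive deviation reports.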
In the examples of Cases 1 and 2, two modes, Mode 1 and Mode 2, for the calculation of the deviation of CQI, and one mode, Mode 1, for the calculation of the change rate of CQI, have been discussed. The mode for the second field calculation, for example, Mode 1 or Mode 2 for calculating the deviation of CQI and Mode 1 for calculating the change rate of CQI, can be included in the second field. The mode for the second field calculation can be determined in at least one of the following manners: 1. In the channel state reporting configuration, the report quantity, e.g., at least the second field of the channel state report message, is configured together with the mode for the second field calculation. 2. In the channel state reporting configuration, the report quantity, e.g., at least the second field of the channel state report message, is configured, and the UE selects the mode for the second field calculation. In some implementations, the mode selection can be further extended to define a channel state report mechanism considering the potential usage of AI (artificial intelligence). In this case, the CSI report mechanism will include at least one of the following: 1. Calculation mode. The calculation mode is used to determine how to calculate the channel state information. For the calculation mode, one or more modes are indexed with at least one of the corresponding sets of input parameters and output parameters. Here, the output parameters can be the content of the channel state report message, e.g., the first field and/or the second field, and/or extracted parameters for modeling the channel condition, e.g., Doppler spread, Doppler shift, delay spread, delay shift. The input parameters can include, for example, the granularity for calculation (e.g., frequency bandwidth including wideband or subband, or frequency range, or time duration), filtering parameters for CSI smoothing, or weights, or an RS index, e.g., an RS configuration/mode index, for the calculation. Optionally, a method index can also be one of the input parameters if multiple methods are explicitly defined in the specification for one calculation mode. In some implementations, the mapping between the method index and the calculation mode is one to one. In some implementations, only the required parameters are explicitly specified and the method for the second field calculation is not defined, being left to implementation. 2. Report mode. The report mode is used to determine at least one of the following: 1. how to report the channel state information, e.g., periodic or AP; 2. the resource for reporting, e.g., a single report or separate reports of the channel state report message; 3. the content for reporting; or 4. the granularity for reporting, e.g., wideband or subband. For the calculation mode, the configuration of the resource for channel state information calculation can also be included. Moreover, the calculation mode can be a part of the configuration of the report mode or can be independent from the configuration of the report mode. Optionally, if the calculation mode is a part of the report mode, the index of the calculation mode will be a part of the content of the report mode. In some implementations, if the report type is periodic, the calculation mode can be a part of the configuration of the report mode. Otherwise, the calculation mode is independent from the configuration of the report mode. 
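One way to picture the calculation-mode and report-mode structure described above is as configuration records. The sketch below is a hypothetical Python rendering; the field names and types are illustrative assumptions, not a defined signaling format.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class CalculationMode:
    index: int                                               # indexed mode
    input_params: List[str] = field(default_factory=list)   # e.g., granularity, filtering weights, RS index
    output_params: List[str] = field(default_factory=list)  # e.g., second field, Doppler spread/shift
    method_index: Optional[int] = None                       # optional if multiple methods exist per mode

@dataclass
class ReportMode:
    report_type: str = "periodic"    # "periodic" or "AP"
    resource: str = "single"         # single report or separate reports
    content: List[str] = field(default_factory=list)
    granularity: str = "wideband"    # "wideband" or "subband"
    # Per the disclosure, a periodic report mode may embed the calculation mode.
    calculation_mode_index: Optional[int] = None
```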
The triggering mechanism, e.g., timing and signaling, for the calculation mode and the report mode can also be the same or different. FIG. 8 illustrates an example of a framework for CSI trigger configuration. As shown in FIG. 8, the association among the calculation mode, the resource mode/configuration, and the report mode is made within the corresponding triggering configuration, e.g., by the indexes of these modes, and the configuration will be a part of the content of the triggering configuration. In the examples of Cases 1 and 2, the parameter CQI is used, such that the deviation of CQI is transmitted in Case 1 and the change rate of CQI is transmitted in Case 2. In some implementations, another parameter such as RSRP or SINR can be used instead of CQI. Most explanations for Case 1 apply to Cases 3 and 5, and most explanations for Case 2 apply to Cases 4 and 6, by replacing the parameter CQI with the parameter RSRP or SINR. Case 3: Deviation of RSRP FIG. 9 shows an example of a channel state estimating and reporting scheme when a channel state reporting configuration includes a deviation RSRP. The process proceeds similarly to the configuration and operation for CQI in Case 1. In this case, the deviation of RSRP can be configured together with the report of CRI or SSB-index. Case 4: Change Rate of RSRP FIG. 10 shows an example of a channel state estimating and reporting scheme when a channel state reporting configuration includes a change rate of RSRP. The process proceeds similarly to the configuration and operation for CQI in Case 2. In this case, the change rate of RSRP can be configured together with the report of CRI or SSB-index. Although FIG. 9 shows an example in which the report of the first field (e.g., RSRP_0/CRI/SSB-Index) and the report of the second field (e.g., RRSRP) are transmitted at different times, it is also possible for the first field and the second field to be transmitted at a same time. Case 5: Deviation of SINR FIG. 11 shows an example of a channel state estimating and reporting scheme when a channel state reporting configuration includes a deviation SINR. The process proceeds similarly to the configuration and operation for CQI in Case 1. In this case, the deviation of SINR can be configured together with the report of CRI or SSB-index. Case 6: Change Rate of SINR FIG. 12 shows an example of a channel state estimating and reporting scheme when a channel state reporting configuration includes a change rate of SINR. The process proceeds similarly to the configuration and operation for CQI in Case 2. In this case, the change rate of SINR can be configured together with the report of CRI or SSB-index. Although FIG. 11 shows an example in which the report of the first field (e.g., SINR_0/CRI/SSB-Index) and the report of the second field (e.g., RSINR) are transmitted at different times, it is also possible for the first field and the second field to be transmitted at a same time. For Cases 1 to 6, it is noted that 1) the CQI/RSRP/SINR in the first field of the channel state report message is calculated based on the corresponding PMI/RI/CRI/SSB index listed in the first field of the channel state report message or previously reported, and 2) the second field of the channel state report message is calculated based on the corresponding value listed in item 1) above. If multiple sets of CQI/RSRP/SINR and PMI/RI/CRI/SSB are required in the first field, a one-to-one mapping between the first field and the second field of the channel state report message can be followed. 
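Since a one-to-one mapping between multiple first-field sets and second-field entries is noted above, a small sketch can make the pairing explicit. This is an illustrative Python fragment with hypothetical names:

```python
from typing import Dict, List, Tuple

def map_sets(first_sets: List[Dict], second_sets: List[Dict]) -> List[Tuple[Dict, Dict]]:
    # One-to-one mapping between multiple first-field sets (e.g., each
    # containing CQI/RSRP/SINR with PMI/RI/CRI/SSB) and the corresponding
    # second-field entries (deviations or change rates).
    if len(first_sets) != len(second_sets):
        raise ValueError("one-to-one mapping requires equal numbers of sets")
    return list(zip(first_sets, second_sets))
```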
In some implementations, a policy for reporting multiple sets of first-field CQI can include: different sets of "PMI and/or RI and/or CQI and/or CRI" for different frequency bands, or for optimal and sub-optimal results, e.g., an optimal PMI and an adjacent PMI, where adjacent means that the spatial direction represented by the precoder matrix is similar. In some embodiments, the number of sets can also be indicated by the network device. Item 2: Report Resource Configuration and Priority Rules Case 1: Report Resource Configuration To support the channel state report message including at least one of the first field and the second field, different time offsets for the report of each field can be configured. Each component will be carried by either PUCCH (Physical Uplink Control Channel) or PUSCH (Physical Uplink Shared Channel), which satisfies the time restrictions among the offsets, the CSI calculation, and the resource preparation. In some embodiments, the time offset refers to the offset within one period, e.g., a slot offset or a symbol offset within a report period crossing over multiple slots or symbols. In some embodiments, the time offset refers to the time offset between the report and the reception of the CSI report trigger. In some implementations, the BS can indicate one or more resources for calculating channel state information, the one or more resources including any one of DM-RS (Demodulation Reference Signal), CSI-RS (CSI Reference Signal) for CSI, or CSI-RS for tracking. In some implementations, the one or more resources for calculating channel state information can be indicated so as to satisfy at least one of: i) a same resource or different resources are indicated for the first field and the second field, ii) DM-RS is used for the second field, or iii) a power offset among the one or more resources is configured if different resources are indicated for the first field and the second field. Case 2: Priority Rules At least one of the following rules can be set to support the channel state report message including at least one of the first field and the second field of the channel state report message. Rule-1: the first field of the channel state report message is prioritized over the corresponding second field of the channel state report message. Rule-2: in the case of reporting multiple sets of channel state report messages, the second field of the ith set of the channel state report message is prioritized over that of the jth set, where i < j. Rule-3: reports triggered in different ways obey the following priority order: AP > Semi > P. 
Rule-4: the channel state report message for wideband is prioritized over the results for subband. Rule-5: in the second field of the channel state report message, e.g., DeltaCQI_x/Delta_RSRP/Delta_SINR, if the kth value of DeltaCQI_x/Delta_RSRP/Delta_SINR and the lth value of DeltaCQI_x/Delta_RSRP/Delta_SINR are to be reported in the same resource (i.e., the same PUSCH/PUCCH), the kth value of DeltaCQI_x/Delta_RSRP/Delta_SINR is dropped, where l > k. Rule-6: in the second field of the channel state report message, e.g., DeltaCQI_x/Delta_RSRP/Delta_SINR, if the kth value of DeltaCQI_x/Delta_RSRP/Delta_SINR and the lth value of DeltaCQI_x/Delta_RSRP/Delta_SINR are to be reported in different resources (e.g., PUSCH and PUCCH) but within the same scheduling unit for transmission (e.g., a slot), the kth value of DeltaCQI_x/Delta_RSRP/Delta_SINR is dropped, where l > k. Rule-7: if the processing of the first channel state report message and/or the second channel state report message is stopped, another report of the first channel state report message is triggered, unless there is an indication from the BS to keep such parallel processing, and within the UE capability. Item 3: UE Capability for Channel State Calculation and/or Report Case 1: the UE reports whether the UE supports the channel state calculation and reporting suggested in this patent document. If the UE does not support it, the configuration will not be enabled for the UE. Case 2: if the required resource for supporting the calculation and reporting suggested in this patent document exceeds the available resource/unit in a time instant or duration, the channel state report message will be dropped according to the pre-defined priority rules, such as Rule-2. Implementation: Configuration of Resource for Reporting In this example, to support the channel state report message including at least one of the first field and the second field, the associated resource for calculating, for example, the second field includes the DM-RS (Demodulation Reference Signal) which is assigned for the PDSCH (Physical Downlink Shared Channel) transmission via either DCI (Downlink Control Information) for dynamic scheduling or RRC (Radio Resource Control) configuration for a configured grant. In addition, the following examples can be implemented. Case 1: for channel state reporting with the report quantity configuration including the first field and the second field of the channel state report message, different resources can be configured for the channel state calculations of the first field and the second field. For example, CSI-RS or SSB is configured in the configuration for the first field, and DM-RS is indicated via either configuration or a pre-defined rule to obtain the second field of the channel state report message. The resource for the first field of the channel state report message can be transmitted by the BS, or received by the UE, prior to the RSs for the second field of the channel state report message. Case 2: for channel state reporting with the report quantity configuration including only the second field of the channel state report message, DM-RS is indicated via either configuration or a pre-defined rule to obtain the second field of the channel state report message. If the channel state reporting is triggered in an AP manner, the bits for the channel state reporting need to be within the same DCI as the scheduling bits for the PDSCH and DM-RS assignment. 
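Rules 5 and 6 both resolve collisions among second-field values by dropping the earlier (kth) value in favor of the later (lth) one. A hedged Python sketch of that drop logic follows; the DeltaReport record and the slot-based collision key are simplifying assumptions.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class DeltaReport:
    index: int      # x in DeltaCQI_x / Delta_RSRP / Delta_SINR
    resource: str   # e.g., "PUSCH" or "PUCCH"
    slot: int       # scheduling unit for transmission

def apply_drop_rules(reports: List[DeltaReport]) -> List[DeltaReport]:
    # Rule-5: two values on the same resource collide; Rule-6: values on
    # different resources but in the same slot also collide. In both cases
    # the earlier (kth) value is dropped, so keep the largest index per slot.
    kept: Dict[int, DeltaReport] = {}
    for r in reports:
        if r.slot not in kept or r.index > kept[r.slot].index:
            kept[r.slot] = r
    return list(kept.values())
```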
The indication of the DM-RS used to obtain the second field of the channel state report message can be done in any one of the following ways: 1) Configuration: in the channel state reporting configuration, a set of DM-RS index(es) (e.g., port indexes) is configured via RRC or MAC CE signaling. Once the scheduling is done, the calculation of the second field of the channel state report message is performed based on the DM-RS indexes according to the scheduled transmission. 2) Pre-defined rule: no dedicated signaling for the RS configuration associated with the report configuration exists. Once the channel state reporting is triggered, the UE will try to obtain the second field of the channel state report message based on the DM-RS which is assigned for the PDSCH transmission. The power offset between the RSs for the first field and the second field of the channel state report message can also be indicated via signaling delta_P to assist the CSI calculation if different RSs are adopted. Additional features and embodiments of the above-described methods/techniques are described below using a clause-based description format. 1. A wireless communication method including: transmitting, by a communication device, a channel state report message that includes at least one of a first field indicative of a value of a parameter or a second field that includes a deviation or a change rate of the parameter. 2. The wireless communication method of clause 1, wherein the channel state report message is transmitted in response to a channel state reporting configuration received by the communication device from a network device or a trigger of channel state reporting received by the communication device from a network device. 3. The wireless communication method of clause 2, wherein the trigger of channel state reporting is performed by using a single signaling or multiple signalings based on the channel state reporting configuration. 4. The wireless communication method of clause 1, wherein the value of the parameter includes a value of CQI (Channel Quality Indication), a value of RSRP (Reference Signal Received Power), or a value of SINR (Signal to Interference and Noise Ratio), and the deviation or the change rate of the parameter includes a deviation or a change rate of CQI, RSRP, or SINR. 5. The wireless communication method of clause 1, wherein the first field further includes at least one of PMI (Precoding Matrix Indicator), RI (Rank Indicator), CRI (CSI-RS Resource Index), SSB (Synchronization Signal Block) index, or a first granularity indicator, and the second field further includes a second granularity indicator, or a mode for calculating the second field. 6. The wireless communication method of clause 5, wherein at least one of the first granularity indicator or the second granularity indicator is configured by a network device, or configured from a predefined table, or determined by the communication device from the predefined table. 7. The wireless communication method of clause 1, further comprising calculating the second field by a calculation mode that is determined by a channel state reporting configuration received by the communication device from a network device, is predefined, or is selected by the communication device from pre-defined candidate sets. 8. The wireless communication method of clause 7, wherein the calculation mode is defined by using at least one of:

P_measureNew − P_0,   [Equation 1]
P_measureNew − (Σ_{i=1}^{x−1} DeltaP_i + P_0), or   [Equation 2]

(P_measureNew − P_0) / granularity,   [Equation 3]

wherein P_measureNew refers to a value of the parameter most recently obtained at the communication device and P_0 refers to the value of the parameter, and wherein Equation 1 and Equation 2 are defined to calculate the xth deviation of the parameter in a first mode and a second mode, respectively, and Equation 3 is defined to calculate the change rate of the parameter in a third mode. 9. The wireless communication method of clause 1, wherein the first field and the second field are transmitted at a same time. 10. The wireless communication method of clause 1, wherein the first field and the second field are transmitted at different times with time offsets satisfying a time requirement for transmitting the channel state report message, the time offsets corresponding to the first field and the second field, respectively. 11. The wireless communication method of clause 1, further comprising configuring a resource for transmitting the channel state report message, wherein different resources are configured for the first field and the second field, respectively. 12. The wireless communication method of clause 1, wherein at least one of the first field or the second field is transmitted by PUCCH (Physical Uplink Control Channel) or PUSCH (Physical Uplink Shared Channel). 13. The wireless communication method of clause 1, further comprising transmitting the channel state report message, by the communication device, according to a priority rule that is configured by a network device or pre-defined. 14. The wireless communication method of clause 13, wherein the priority rule includes at least one of: i) the first field of the channel state report message is prioritized over the corresponding second field of the channel state report message, ii) when the wireless communication method further includes transmitting additional channel state report messages, the second field of the ith set of the channel state report message is prioritized over that of the jth set of the channel state report message, wherein i and j are natural numbers which satisfy i < j, iii) a trigger of channel state reporting has a priority order of aperiodic > semi-persistent > periodic, iv) the channel state report message for a wideband is prioritized over that for a subband, v) if the kth value and the lth value in the second field of the channel state report message are reported in a same resource, the kth value is dropped, wherein k and l are natural numbers which satisfy l > k, vi) if the kth value and the lth value in the second field of the channel state report message are reported in different resources and within a same transmission unit, the kth value is dropped, wherein k and l are natural numbers which satisfy l > k, or vii) when the wireless communication method further includes transmitting an additional channel state report message, if processing on the channel state report message or the additional channel state report message is stopped, the channel state report message is triggered again. 15. The wireless communication method of clause 1, further comprising reporting a capability of the communication device, the capability indicating whether the communication device supports the channel state report message including at least one of the first field or the second field. 16. 
The wireless communication method of clause 1, further comprising obtaining the second field based on one of DM-RS (Demodulation Reference Signal), CSI-RS (CSI Reference Signal) for CSI, or CSI-RS for tracking. 17. A wireless communication method including receiving, by a network device, a channel state report message including at least one of a first field indicative of a value of a parameter or a second field that includes a deviation or a change rate of the parameter. 18. The wireless communication method of clause 17, wherein the value of the parameter includes a value of CQI (Channel Quality Indication), a value of RSRP (Reference Signal Received Power), or a value of SINR (Signal to Interference and Noise Ratio), and the deviation or the change rate of the parameter includes a deviation or a change rate of CQI, RSRP, or SINR. 19. The wireless communication method of clause 18, wherein the first field further includes at least one of PMI (Precoding Matrix Indicator), RI (Rank Indicator), CRI (CSI-RS Resource Index), SSB (Synchronization Signal Block) index, or a first granularity indicator, and the second field further includes a second granularity indicator, or a mode for calculating the second field. 20. The wireless communication method of clause 19, wherein at least one of the first granularity indicator or the second granularity indicator is configured by the network device, or configured from a predefined table, or determined by a user device from the predefined table. 21. The wireless communication method of clause 17, further comprising transmitting, by the network device, to a user device, at least one of a channel state reporting configuration or a trigger of channel state reporting. 22. The wireless communication method of clause 17, further comprising notifying a user device of any one of first to third modes for calculating the second field by one of Equations 1 to 3, respectively, wherein Equations 1 to 3 are defined as follows:

P_measureNew − P_0,   [Equation 1]

P_measureNew − (Σ_{i=1}^{x−1} DeltaP_i + P_0), or   [Equation 2]

(P_measureNew − P_0) / granularity,   [Equation 3]

wherein P_measureNew refers to a value of the parameter most recently obtained at the user device and P_0 refers to the value of the parameter, and wherein Equation 1 and Equation 2 are defined to calculate the xth deviation of the parameter in the first mode and the second mode, respectively, and Equation 3 is defined to calculate the change rate of the parameter in the third mode. 23. The wireless communication method of clause 22, wherein the first field and the second field are quantized by different granularities and bit lengths, the second field having a bit length smaller than that of the first field. 24. The wireless communication method of clause 17, wherein the first field and the second field are received at a same time. 25. The wireless communication method of clause 17, wherein the first field and the second field are received at different times with different time offsets satisfying a time requirement for transmitting the channel state report message, the time offsets corresponding to the first field and the second field, respectively. 26. The wireless communication method of clause 17, wherein at least one of the first field or the second field is transmitted by PUCCH (Physical Uplink Control Channel) or PUSCH (Physical Uplink Shared Channel). 27. 
The wireless communication method of clause 17, further comprising indicating one or more resources for calculating the channel state report message, the one or more resources including any one of DM-RS (Demodulation Reference Signal), CSI-RS for CSI, or CSI-RS for tracking. 28. The wireless communication method of clause 17, further comprising indicating one or more resources for calculating the channel state report message, wherein the one or more resources satisfy at least one of: i) a same resource or different resources are indicated for the first field and the second field, ii) DM-RS (Demodulation Reference Signal) is used for the second field, or iii) a power offset among the one or more resources is configured if different resources are indicated for the first field and the second field. 29. A wireless communications apparatus comprising a processor and a memory, wherein the processor is configured to read code from the memory and implement a method recited in any of claims 1 to 28. 30. A computer program product comprising a computer-readable program medium with code stored thereupon, the code, when executed by a processor, causing the processor to implement a method recited in any of claims 1 to 28. It is intended that the specification, together with the drawings, be considered exemplary only, where exemplary means an example and, unless otherwise stated, does not imply an ideal or a preferred embodiment. As used herein, the use of "or" is intended to include "and/or", unless the context clearly indicates otherwise. Some of the embodiments described herein are described in the general context of methods or processes, which may be implemented in one embodiment by a computer program product, embodied in a computer-readable medium, including computer-executable instructions, such as program code, executed by computers in networked environments. A computer-readable medium may include removable and non-removable storage devices including, but not limited to, Read Only Memory (ROM), Random Access Memory (RAM), compact discs (CDs), digital versatile discs (DVDs), etc. Therefore, the computer-readable media can include non-transitory storage media. Generally, program modules may include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. Computer- or processor-executable instructions, associated data structures, and program modules represent examples of program code for executing steps of the methods disclosed herein. The particular sequence of such executable instructions or associated data structures represents examples of corresponding acts for implementing the functions described in such steps or processes. Some of the disclosed embodiments can be implemented as devices or modules using hardware circuits, software, or combinations thereof. For example, a hardware circuit implementation can include discrete analog and/or digital components that are, for example, integrated as part of a printed circuit board. Alternatively, or additionally, the disclosed components or modules can be implemented as an Application Specific Integrated Circuit (ASIC) and/or as a Field Programmable Gate Array (FPGA) device. Some implementations may additionally or alternatively include a digital signal processor (DSP) that is a specialized microprocessor with an architecture optimized for the operational needs of digital signal processing associated with the disclosed functionalities of this application. 
Similarly, the various components or sub-components within each module may be implemented in software, hardware or firmware. The connectivity between the modules and/or components within the modules may be provided using any one of the connectivity methods and media that are known in the art, including, but not limited to, communications over the Internet, wired, or wireless networks using the appropriate protocols. While this document contains many specifics, these should not be construed as limitations on the scope of an invention that is claimed or of what may be claimed, but rather as descriptions of features specific to particular embodiments. Certain features that are described in this document in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable sub-combination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a sub-combination or a variation of a sub-combination. Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. Only a few implementations and examples are described, and other implementations, enhancements and variations can be made based on what is described and illustrated in this disclosure.
48,546
11863308
DETAILED DESCRIPTION Reference will now be made to the embodiments illustrated in the drawings, and specific language will be used here to describe the same. It will nevertheless be understood that no limitation of the scope of the disclosure is thereby intended. Alterations and further modifications of the features illustrated here, and additional applications of the principles as illustrated here, which would occur to a person skilled in the relevant art and having possession of this disclosure, are to be considered within the scope of the disclosure. The present disclosure is directed to systems and methods for managing networked environments. An automated management service may aggregate data from servers hosting applications accessed by end-user devices and provide a dashboard user interface to administer and manage various operations of the network environment. The dashboard user interface may include: a patch installation element to initiate a patch management process on at least one set of servers; a failover execution element to carry out a network traffic failover from one set of servers to another set of servers; and a predictive analytics element to provide a set of performance indicators for various functions of a given application, among others. In this manner, the automated management service may allow the network administrator to quickly retrieve desired data (e.g., statistics and performance metrics) about the network and promptly take proper actions to manage various aspects of network operations. FIG. 1 depicts a block diagram of a system 100 for managing networked environments. In overview, the system 100 may include at least one automated management service 102; one or more servers 104A-1 to 104N-X (hereinafter generally referred to as servers 104) arranged, situated, or distributed across a set of server groups 106A-N (hereinafter generally referred to as server groups 106); and at least one database 108, among others, communicatively coupled with one another via at least one network 110. The automated management service 102 may include at least one dashboard handler 112, at least one patch manager 114, at least one failover manager 116, at least one analytics evaluator 118, at least one reliability evaluator 120, and at least one assistant handler 122, among others. The automated management service 102 may provide at least one user interface 124. The user interface 124 may include at least one patch installer user interface (UI) element 126, at least one failover execution UI element 128, at least one analytics retrieval UI element 130, at least one reliability retrieval UI element 132, and at least one assistant invocation UI element 134, among others. The one or more servers 104 in at least one server group 106 may include or host at least one application 136. Various hardware and software components of one or more public or private networks 110 may interconnect the various components of the system 100. Non-limiting examples of such networks may include Local Area Network (LAN), Wireless Local Area Network (WLAN), Metropolitan Area Network (MAN), Wide Area Network (WAN), and the Internet. The communication over the network may be performed in accordance with various communication protocols, such as Transmission Control Protocol and Internet Protocol (TCP/IP), User Datagram Protocol (UDP), and IEEE communication protocols, among others. 
Each server 104 may be any computing device comprising one or more processors coupled with memory and software and capable of performing the various processes and tasks described herein. Each server 104 may be in communication with one another, one or more end-user customer devices, the automated management service 102, and the database 108, among others, via the network 110. The server 104 may be situated, located, or otherwise associated with at least one server group 106. Each server group 106 may correspond to a data center, a branch office, or a site at which a subset of the servers 104 is situated or associated. For instance, the first server group 106A may correspond to a data center at a first site including a first set of servers 104A-1 to 104A-X, and the second server group 106B may correspond to a branch office at a second site including a second set of servers 104B-1 to 104B-X. At least one of the servers 104 may maintain, include, or otherwise host resources for the application 136. The application 136 may be a cloud-based application (e.g., a Software as a Service (SaaS) application) or a web application, among others, accessed by end-user customer devices that are communicatively coupled with the network 110. For example, the application 136 may be an online banking application, a word processor, a spreadsheet program, a multimedia player, a video game, or a software development kit, among others. In some embodiments, the server 104 may include or run the application 136 itself. For instance, the server 104 may maintain or run a virtual machine to run instances of the application 136 to be accessed by the end-user customer devices. In some embodiments, the servers 104 may be grouped, associated with one another, or otherwise arranged into the server groups 106 by: a type of the application 136 provided; a geographic location; a network location; or a type of device (e.g., end-user customer devices such as mobile phones, laptops, or desktops), among others. The database 108 may store and maintain various data associated with the servers 104 across the server groups 106 and with the application 136 hosted thereon, among others. The database 108 may also include a database management system (DBMS) to arrange and organize the data maintained thereon. The data may be produced from the application 136 running on the servers 104 and accessed by end-user customer devices over the network 110. Upon production, the servers 104 (or the end-user customer devices) may store the data onto the database 108. For instance, the database 108 may store and maintain a transaction log identifying communications exchanged over the network 110, such as between end-user customer devices and the servers 104. The database 108 may store and maintain a process log for a given application 136 identifying functions, events, or other components invoked in the application 136 when running on the servers 104 or when accessed by the end-user customer devices. The data maintained on the database 108 may be accessed by the automated management service 102. The automated management service 102 may be any computing device comprising one or more processors coupled with memory and software and capable of performing the various processes and tasks described herein. The automated management service 102 may be in communication with the servers 104 across different server groups 106, one or more end-user customer devices, and the database 108, among others, via the network 110. Although shown as a single component, the automated management service 102 may include any number of computing devices. 
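As a concrete illustration of the transaction and process logs described above, the records the database 108 might maintain can be sketched as follows. The field names are hypothetical; the disclosure does not prescribe a schema.

```python
from dataclasses import dataclass

@dataclass
class TransactionLogEntry:
    # One communication exchanged over the network, e.g., between an
    # end-user customer device and a server.
    timestamp: float
    source: str       # e.g., end-user device identifier
    destination: str  # e.g., server identifier
    payload_bytes: int

@dataclass
class ProcessLogEntry:
    # One function, event, or component invoked in a given application.
    timestamp: float
    application_id: str
    component: str    # invoked function/event/component name
    server_id: str
```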
For instance, the dashboard handler 112, the patch manager 114, the failover manager 116, the analytics evaluator 118, the reliability evaluator 120, and the assistant handler 122 may be executed across one or more computing systems. Within the automated management service 102, the dashboard handler 112 may provide the user interface 124 for display on a computing device communicatively coupled with the automated management service 102. The patch manager 114 may execute a patch installation process for the application 136 hosted on the servers 104. The failover manager 116 may perform a traffic failover process from one server group 106 to another server group 106. The analytics evaluator 118 may provide performance indicators for various functions of a given application. The reliability evaluator 120 may provide reliability indicators in accordance with service level objectives (SLOs). The assistant handler 122 may provide a service assistant to handle health checks and incident ticketing for the application 136 or any of the servers 104. The user interface 124 may be a graphical user interface (GUI), with one or more elements to invoke various functions of the automated management service 102. Upon interaction, the patch installer UI element 126 may invoke the patch manager 114 to execute the patch installation process. The failover execution UI element 128 may invoke the failover manager 116 to perform a traffic failover process from one server group 106 to another server group 106. The analytics retrieval UI element 130 may invoke the analytics evaluator 118 to provide performance indicators for various functions of the application 136. The reliability retrieval UI element 132 may invoke the reliability evaluator 120 to provide reliability indicators. The assistant invocation UI element 134 may invoke the assistant handler 122 to provide a service assistant to handle health checks and incident ticketing. The user interface 124 may be, for example, in the manner depicted in FIG. 2. FIG. 2 depicts a screenshot of a dashboard user interface 200 of the system for managing networked environments. In the depicted example, the dashboard user interface 200 may include a set of elements generally in the middle of the interface, such as: a first UI element 202 to open incident management (e.g., including failover management); a second UI element 204 to open site reliability measurements; a third UI element 206 to access predictive analytics; a fourth UI element 208 to access patch management; and a fifth UI element 210 to invoke the intelligent customer service agent, among others. The dashboard user interface 200 may also include at least one sixth element 212 to provide notifications and updates. The dashboard user interface 200 may include other UI elements, such as a tool bar along the top and access to other dashboards along the left. FIG. 3 depicts a block diagram of a system 300 for patch management in the networked environment. The system 300 may include at least one automated management service 302, one or more servers 304A-X (hereinafter generally referred to as servers 304) in a server group 306, and at least one database 308, among others, communicatively coupled with one another via at least one network 310. The automated management service 302 may include at least one dashboard handler 312 and at least one patch manager 314, among others, and may provide at least one user interface 324. The user interface 324 may include at least one patch installer UI element 326 and the patch management UI element 328, among others. 
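The element-to-handler wiring described above amounts to a dispatch table. The following Python sketch is one illustrative way to express it; the class names loosely mirror the handlers of FIG. 1, but the wiring itself is an assumption, not the disclosed implementation.

```python
class PatchManager:
    def execute(self, **params) -> None:
        print("running the staged patch management process", params)

class FailoverManager:
    def execute(self, **params) -> None:
        print("shifting traffic between server groups", params)

class AnalyticsEvaluator:
    def execute(self, **params) -> None:
        print("computing performance indicators", params)

class DashboardHandler:
    """Maps dashboard UI elements to the handler each interaction invokes."""
    def __init__(self) -> None:
        # Keys loosely mirror UI elements 126-134; values mirror handlers 114-122.
        self.routes = {
            "patch_installer": PatchManager(),
            "failover_execution": FailoverManager(),
            "analytics_retrieval": AnalyticsEvaluator(),
        }

    def on_interaction(self, element: str, **params) -> None:
        # A mouse click, screen touch, or key press on a UI element
        # invokes the corresponding handler.
        self.routes[element].execute(**params)
```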
At least one of the servers 304 may host resources for at least one application 336. The automated management service 302 may be used to facilitate installation of at least one patch 338 to the application 336. Embodiments may comprise additional or alternative components, or omit certain components from those of FIG. 3, and still fall within the scope of this disclosure. The dashboard handler 312 executing on the automated management service 302 may provide the user interface 324 including the patch installer UI element 326. The user interface 324 may be rendered, displayed, or otherwise presented on at least one display of the automated management service 302 or communicatively coupled with the automated management service 302. The user interface 324 may be in the manner, for example, depicted in FIGS. 4 and 5, among others. The patch installer UI element 326 may correspond to at least one graphical user interface (GUI) element within the user interface 324, such as a command button, a slider, a toggle switch, an image, a window, a prompt, or a container, among others, or any combination thereof. The dashboard handler 312 may monitor for an interaction with the patch installer UI element 326, such as a mouse click, a screen touch, a key press, a voice command, or a corresponding gesture, among others. The patch installer UI element 326 may identify or select at least one server group 306 from a set of groups of servers on which to install at least one patch 338 for the application 336 hosted on at least one of the servers 304 in the server group 306. In response to detecting an interaction with the patch installer UI element 326, the dashboard handler 312 may call, invoke, or otherwise execute the patch manager 314. In some embodiments, the dashboard handler 312 may provide the user interface 324 to include at least one patch management UI element 340. The patch management UI element 340 may correspond to a set of graphical user interface (GUI) elements within the user interface 324, such as a radio button, a check box, a toggle switch, a text box, an image, a window, a prompt, or a container, among others, or any combination thereof. In some embodiments, the patch management UI element 340 may be part of the patch installer UI element 326 within the user interface 324. For example, the dashboard handler 312 may present the patch management UI element 340 corresponding to a subset of constituent GUI elements of the patch installer UI element 326. In some embodiments, the patch management UI element 340 may be separate from the patch installer UI element 326 within the user interface 324. For instance, the dashboard handler 312 may present the patch management UI element 340 on a window or webpage separate from the patch installer UI element 326. In some embodiments, the patch management UI element 340 may include or provide information associated with the application 336, the servers 304, and the server groups 306, among others. The patch management UI element 340 may provide information relevant to the installation of the patch 338 for the application 336. For instance, the patch management UI element 340 may include: a version identifier for the application 336 currently installed on each server 304 or server group 306; an identifier for each server 304 or server group 306 on which the application 336 is installed; a status indicating a progress (e.g., downloading, setting up, validation, and completion) of installation of the patch 338 at the respective server 304 or server group 306; and a time stamp for the status, among others. 
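For illustration, one row of the status information surfaced by the patch management UI element might be represented as below. This is a hypothetical Python sketch; the stage names follow the progress states listed above (downloading, setting up, validation, completion).

```python
from dataclasses import dataclass
from enum import Enum

class PatchStage(Enum):
    DOWNLOADING = "downloading"
    SETTING_UP = "setting up"
    VALIDATION = "validation"
    COMPLETION = "completion"

@dataclass
class PatchStatusRow:
    # One row of the information surfaced by the patch management UI element.
    application_version: str  # version currently installed
    server_group_id: str      # server or server group hosting the application
    stage: PatchStage         # progress of the patch installation
    status_timestamp: float   # time stamp for the status
```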
In some embodiments, the patch management UI element 340 may accept, gather, or otherwise receive one or more parameters for the installation of the patch 338 for the application 336. The parameters may define a set of stages of the setup process of the patch 338 on the server 304 or the server group 306 for the application 336 and may be entered by a user (e.g., a network administrator) of the automated management service 302. The parameters may include, for example: an application identifier referencing the application 336; a version identifier corresponding to the patch 338 for the application 336 to be installed; an identification of a selected server 304 or server group 306 on which to carry out installation of the patch 338; and a schedule for the installation of the patch 338 for the application 336, among others. The schedule may define a time at which to carry out each stage of the installation process of the patch 338. In invoking the patch manager 314, the dashboard handler 312 may pass the parameters inputted into the patch management UI element 340.

The patch manager 314 executing on the automated management service 302 may carry out, perform, or otherwise execute a patch management process, in response to the interaction on the patch installer UI element 326. The patch management process may include the set of stages of installing the patch 338 for the application 336 whose resources are hosted on the selected server 304 or server group 306. The patch management process may start from a shutting down of the servers 304 hosting resources for the application 336 in the selected server group 306, followed by setting up or installing the patch 338 for the application 336 on the servers 304, and validating the installation of the patch 338, among others. The patch manager 314 may carry out the patch management process in accordance with the defined parameters upon invocation from interaction with the patch installer UI element 326, with minimal or no subsequent user interaction. In this manner, the patch manager 314 may automate the various stages of installation of the patch 338 to reduce manual human involvement.

In carrying out the patch management process, the patch manager 314 may retrieve, obtain, or otherwise identify the patch 338 to be installed. The patch 338 may be stored and maintained in a storage (e.g., the database 308 as depicted) accessible to the automated management service 302. The patch 338 may define, identify, or otherwise include a set of updates to be applied to the application 336. The updates included in the patch 338 may include, for example, addition of new functions, removal of previously provided functions, or modifications to existing functions in the application 336. In some embodiments, the patch manager 314 may identify which patch 338 is to be installed, using the inputs from the patch installer UI element 326 and the patch management UI element 340 of the user interface 324. With the identification, the patch manager 314 may send, transmit, or otherwise provide the patch 338 to the selected server 304 or server group 306. In conjunction, the patch manager 314 may run a shutdown sub-process on the selected server 304 or server group 306 hosting resources for the application 336. The shutdown sub-process may entail, involve, or include taking the servers 304 in the server group 306 offline to cease further access by the end-user customer devices. Upon shutting down, the patch manager 314 may carry out setting up or installation of the patch 338 for the application 336, as illustrated in the sketch below.
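By way of illustration only, the staged patch management process described above might be orchestrated along the lines of the following Python sketch. The server methods (shut_down, install_patch, validate_installation) and the reporting hooks into the patch management UI element 340 are hypothetical names introduced solely for this example, not elements of this disclosure; the setup and validation stages are detailed further in the passage that follows.

import time

def run_patch_process(server_group, patch, schedule, ui):
    # schedule: chronologically ordered list of (stage_name, start_epoch_seconds),
    # e.g., [("shutdown", t0), ("setup", t1), ("validation", t2)]
    for stage_name, start_time in schedule:
        delay = start_time - time.time()
        if delay > 0:
            time.sleep(delay)  # wait for the scheduled time of this stage
        if stage_name == "shutdown":
            # Take the servers offline to cease further end-user access.
            for server in server_group.servers:
                server.shut_down()
        elif stage_name == "setup":
            # Apply the patch updates, e.g., by changing application binaries.
            for server in server_group.servers:
                server.install_patch(patch)
        elif stage_name == "validation":
            # Post-patch check: confirm the patch installed successfully.
            if not all(s.validate_installation(patch) for s in server_group.servers):
                ui.report_failure(server_group)  # surfaced via the UI element 340
                return
        ui.report_status(server_group, stage_name, timestamp=time.time())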
To set up, the patch manager 314 may run or execute the patch 338 to apply the set of updates to the application 336, for example, by changing executable binary files corresponding to the application 336. Continuing on, the patch manager 314 may perform a validation sub-process (sometimes herein referred to as a post-patch check) on the installation of the patch 338. In performing the validation, the patch manager 314 may determine whether the patch 338 is successfully installed on the server 304 or server group 306, without affecting other processes on the server 304 or server group 306. When the installation is unsuccessful, the patch manager 314 may return an indication for presentation on the user interface 324 (e.g., via the patch management UI element 340). In some embodiments, the patch manager 314 may perform the stages of the patch management process (e.g., the shutdown, setup, and validation) in accordance with the defined parameters. For example, the patch manager 314 may carry out individual sub-processes in accordance with the times identified by the schedule defined using the patch management UI element 340.

FIG. 4 depicts a screenshot of an execution interface 400 in the dashboard user interface for the system for patch management. In the depicted example, the execution interface 400 may include a list of applications 402 to indicate statuses of patch installations. The list 402 may include various information about the patch status, such as: a product name 404 to identify a type of application; a data center name 406 to identify a server group hosting the application; an operating system type 408 to identify the operating system at the server group; a validation status 410 identifying progress or completion of validation of the patch installation; and a patch status 412 identifying a progress of the overall patch installation process for the given application. The execution interface 400 may be used by the network administrator to view the patch status of various applications across multiple sites.

FIG. 5 depicts a screenshot of a status interface 500 in the dashboard user interface for the system for patch management. In the depicted example, the status interface 500 may include various information on the status of patch installation management across multiple server groups or sites. The status interface 500 may include: a patch count interface 502 identifying a number of patches installed by sites or server groups; a patch velocity interface 504 identifying a rate at which the patch management process is successfully carried out by sites or server groups; and a patch status interface 506 identifying a number of patch management processes that were either successful or failed. The status interface 500 may be used by the network administrator to view statistics regarding the patch installation process across multiple sites for a given application.

FIG. 6 depicts a block diagram of a system 600 for failover management in networked environments. The system 600 may include at least one automated management service 602, one or more servers 604A-1 to 604B-X (hereinafter generally referred to as servers 604) across at least two server groups 606A and 606B (hereinafter generally referred to as server groups 606), and at least one database 608, among others, communicatively coupled with one another via at least one network 610. The automated management service 602 may include at least one dashboard handler 612 and at least one failover manager 616, among others, and may provide at least one user interface 624.
The user interface 624 may include at least one failover execution UI element 628 and at least one network statistics UI element 640, among others. At least one of the servers 604 in each server group 606 may host respective resources for at least one application 636. The server group 606 may facilitate network traffic 638 for communications between the end-user customer devices and the servers 604 to access resources for the application 636. Embodiments may comprise additional or alternative components or omit certain components from those of FIG. 6, and still fall within the scope of this disclosure.

The dashboard handler 612 executing on the automated management service 602 may provide the user interface 624 including the failover execution UI element 628. The user interface 624 may be rendered, displayed, or otherwise presented on at least one display of, or communicatively coupled with, the automated management service 602. The user interface 624 may be in the manner, for example, depicted in FIGS. 7-9, among others. The failover execution UI element 628 may correspond to at least one graphical user interface (GUI) element within the user interface 624, such as a command button, a slider, a toggle switch, a radio button, a check box, a text box, an image, a window, a prompt, or a container, among others, or any combination thereof. The failover execution UI element 628 may be used to identify or select one server group 606B to which to transfer network traffic 638 associated with the application 636 from another server group 606A.

In some embodiments, the failover execution UI element 628 may accept, gather, or otherwise receive one or more parameters for transferal of the network traffic 638 as part of the failover. The parameters may identify or include, for example: an application identifier corresponding to the application 636; a source identifier corresponding to the server group 606 from which the network traffic 638 is to be transferred (e.g., the server group 606A as depicted); and a destination identifier corresponding to the server group 606 to which the network traffic 638 is to be transferred (e.g., the server group 606B as depicted), among others. The dashboard handler 612 may monitor for an interaction with the failover execution UI element 628 to initiate the failover, such as a mouse click, a screen touch, a key press, a voice command, or a corresponding gesture, among others. The interaction may indicate a command to initiate the failover. In some embodiments, the dashboard handler 612 may also handle one or more interactions with the failover execution UI element 628 to enter or input the parameters defining the traffic failover. In response to detecting the interaction with the failover execution UI element 628 to initiate the failover, the dashboard handler 612 may call, invoke, or otherwise execute the failover manager 616. In invoking, the dashboard handler 612 may pass the input parameters to the failover manager 616.

The failover manager 616 executing on the automated management service 602 may carry out, perform, or otherwise execute a traffic failover process, in response to the interaction with the failover execution UI element 628. The traffic failover process may correspond to or include moving, switching, or otherwise transferring the network traffic 638 from one server group 606 to another server group 606 (e.g., from the first server group 606A to the second server group 606B as depicted).
The network traffic 638 may have been previously communicated with the server group 606 (e.g., the first server group 606A) in providing end-user customer devices access to resources for the application 636. The failover manager 616 may execute the traffic failover process in accordance with the parameters input via the failover execution UI element 628. From the parameters, the failover manager 616 may select or identify the server group 606 (e.g., the first server group 606A) referenced by the source identifier from which the network traffic 638 is to be transferred. In some embodiments, the failover manager 616 may find, select, or otherwise identify at least one stack (e.g., a subset of servers 604) within the server group 606 hosting resources for the application 636. The failover manager 616 may find, select, or otherwise identify the network traffic 638 associated with the application 636 hosted on one or more servers 604 of the identified server group 606 (e.g., the first server group 606A). The network traffic 638 may identify or include communications (e.g., data packets) exchanged between the servers 604 of the server group 606 and the end-user customer devices in accessing the application 636.

In addition, the failover manager 616 may identify the server group 606 (e.g., the second server group 606B) to which the network traffic 638 is to be transferred. In some embodiments, the failover manager 616 may find, select, or otherwise identify at least one stack (e.g., a subset of servers 604) within the server group 606, corresponding to the stack in the other server group 606, to which the network traffic 638 is to be transferred. The stack may correspond to a subset of servers 604 already hosting resources for the application 636 or another instance of the application 636 hosted thereon. The stack may also correspond to the subset of servers 604 in the server group 606 with availability to handle such resources or communications with the end-user customer devices to access the application 636.

With the identifications, the failover manager 616 may instruct, command, or otherwise cause the servers 604 of the server group 606 (e.g., the server group 606A) to move or transfer the network traffic 638 to the servers 604 of the other server group 606 (e.g., the server group 606B). In moving over, the failover manager 616 may redirect or forward communications from the end-user customer devices accessing the application 636 on the initial server group 606 to the second server group 606. Subsequent to the failover, the end-user customer devices and the servers 604 of the second server group 606 may exchange communications with each other in accessing the application 636. In this manner, the failover manager 616 may execute the traffic failover process in accordance with the defined parameters upon invocation from the interaction with the failover execution UI element 628. The failover manager 616 may automate the various operations involved in failover with minimal or no manual involvement, for example along the lines of the sketch below.
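The following Python sketch illustrates one possible shape of such a traffic failover routine. It is illustrative only: the server records, the has_capacity check, and the load_balancer.reroute call are hypothetical stand-ins for whatever traffic-direction mechanism a given deployment actually uses, and are not named by this disclosure.

def execute_traffic_failover(params, server_groups, load_balancer):
    # params are those entered via the failover execution UI element,
    # e.g., {"application_id": ..., "source_id": ..., "destination_id": ...}
    source = server_groups[params["source_id"]]
    destination = server_groups[params["destination_id"]]
    app_id = params["application_id"]

    # Identify the stack (subset of servers) hosting the application in the
    # source group, and a corresponding stack in the destination group.
    source_stack = [s for s in source.servers if app_id in s.hosted_applications]
    dest_stack = [s for s in destination.servers
                  if app_id in s.hosted_applications or s.has_capacity()]

    # Redirect communications for the application to the destination stack.
    load_balancer.reroute(app_id, from_servers=source_stack, to_servers=dest_stack)
    return dest_stack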
In some embodiments, the failover manager 616 may calculate, determine, or otherwise generate at least one network statistic for each server group 606. The network statistic may identify or include a measure of performance of the network traffic 638 associated with the application 636 hosted on the servers 604 in the respective server group 606. The measure of performance may be a single instance or a time-series of measurements. The network statistics may include, for example: latency measuring a delay between the end-user customer devices and the servers 604 in accessing the application 636; bandwidth identifying a rate of data exchanged between the end-user customer devices and the servers 604; throughput identifying an amount of data successfully communicated between the end-user customer devices and the servers 604; jitter corresponding to a variation in latency in the exchanged communications; and an error rate identifying a rate of alterations of the data communicated between the end-user customer devices and the servers 604 due to network conditions, among others, or any combination thereof. The network statistics may be instrumented by the failover manager 616 (or another computing device). With the generation, the failover manager 616 may store and maintain the network statistics for the server groups 606 on the database 608. In some embodiments, the servers 604 themselves may generate and store the network statistics as detailed herein on the database 608. In some embodiments, the failover manager 616 may relay or otherwise provide the network statistics for the server groups 606 to the dashboard handler 612 to present on the user interface 624.

In some embodiments, the dashboard handler 612 may provide the user interface 624 to include the network statistics UI element 640. The network statistics UI element 640 may correspond to a set of graphical user interface (GUI) elements within the user interface 624, such as a radio button, a check box, a toggle switch, a text box, a window, a prompt, or a container, among others. In some embodiments, the network statistics UI element 640 may be a part of the failover execution UI element 628. For instance, the GUI elements corresponding to the failover execution UI element 628 may be included in the window including the GUI elements of the network statistics UI element 640. In some embodiments, the failover execution UI element 628 may be separate from the network statistics UI element 640. For example, the dashboard handler 612 may present the network statistics UI element 640 in a top portion of the window and the failover execution UI element 628 in a bottom portion of the window.

The network statistics UI element 640 may include, identify, or otherwise provide information relevant to the traffic failover process, such as the network statistics. The dashboard handler 612 may retrieve, obtain, or otherwise identify the network statistics from the failover manager 616 or another data source (e.g., the database 608). The network statistics UI element 640 may include or identify the network statistics by application 636, server 604, or server group 606. For example, the network statistics UI element 640 may identify the latency, bandwidth, throughput, jitter, error rate, or a combined score for each application 636, server 604, or server group 606. In some embodiments, the dashboard handler 612 may present or provide an indicator for each network statistic. The indicator may be, for instance, an enumeration identifier or a color code identifying whether the network statistic for a given application 636 is excellent, good, fair, or poor, among others. By displaying the network statistics in the network statistics UI element 640 in a digestible manner, a user (e.g., the network administrator) can determine whether to invoke the traffic failover process. One possible mapping from raw statistics to such indicators is sketched below.
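As an illustration of how raw statistics might be bucketed into excellent/good/fair/poor indicators for color coding, consider the Python sketch below. The cut-off values are arbitrary placeholders chosen for the example, not thresholds taken from this disclosure; a deployment would tune them per application and server group.

def classify_statistic(name, value):
    # Lower is better for each of these statistics: latency (ms),
    # jitter (ms) and error rate (fraction of altered data).
    limits = {
        "latency": [(50, "excellent"), (100, "good"), (250, "fair")],
        "jitter": [(5, "excellent"), (15, "good"), (40, "fair")],
        "error_rate": [(0.001, "excellent"), (0.01, "good"), (0.05, "fair")],
    }
    for cutoff, label in limits[name]:
        if value <= cutoff:
            return label
    return "poor"

COLOR_CODE = {"excellent": "green", "good": "yellowgreen", "fair": "orange", "poor": "red"}

# The dashboard handler could then render, for each server group:
# COLOR_CODE[classify_statistic("latency", measured_latency_ms)]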
FIG. 7 depicts a screenshot of a network health interface 700 in the dashboard user interface for the system for failover management. The network health interface 700 may identify network traffic statistics categorized by application and server groups. In the depicted example, generally along the top, the network health interface 700 may include a set of application name elements 702A-D identifying a type of application. For each application, the network health interface 700 may include a set of server group elements 704A-D and 704′A-D, each of which may identify network statistics for the given server group and application (e.g., using color coding). Generally along the bottom, the network health interface 700 may include a set of elements 706A-C identifying network statistics by applications. The network health interface 700 may be used by the network administrator to make decisions regarding whether to invoke the network traffic failover process from one server group to another server group for a given application.

FIGS. 8A and 8B depict screenshots of an execution interface 800 in the dashboard user interface for the system for failover management. Starting with FIG. 8A, in the depicted example, the execution interface 800 may include an authentication interface 802 to initiate the traffic failover process. The authentication interface 802 may be used to enter a one-time password (OTP) to validate the network administrator prior to invoking the traffic failover process. Moving onto FIG. 8B, the execution interface 800 may include a prompt 810 to enter the application identifier to select the application whose network traffic is to be transferred from one server group to another server group. The execution interface 800 may be used by the network administrator to carry out the traffic failover process.

FIG. 9 depicts a screenshot of a traffic pattern interface 900 in the dashboard user interface for the system for failover management. In the depicted example, the traffic pattern interface 900 may provide additional network statistics in relation to the network traffic for a given application categorized by host server groups (e.g., "GTDC" and "SWDC") over a given time window. The traffic pattern interface 900 may be used by the network administrator to decide whether to invoke the network traffic failover process from one server group to another server group for a given application.

FIG. 10 depicts a block diagram of a system 1000 for performance analytics in networked environments. The system 1000 may include at least one automated management service 1002, one or more servers 1004A-X (hereinafter generally referred to as servers 1004) in at least one server group 1006, and at least one database 1008, among others, communicatively coupled with one another via at least one network 1010. The automated management service 1002 may include at least one dashboard handler 1012, at least one analytics evaluator 1018, and at least one analytics model 1020, among others, and may provide at least one user interface 1024. The database 1008 may store, maintain, or otherwise include historical data 1040. The user interface 1024 may include at least one analytics retrieval UI element 1030 and a set of analytics results UI elements 1042A-N (hereinafter generally referred to as analytics results UI elements 1042), among others. At least one of the servers 1004 in at least one server group 1006 may host resources for at least one application 1036. The application 1036 may include a set of functions 1038A-N (hereinafter generally referred to as functions 1038). Embodiments may comprise additional or alternative components or omit certain components from those of FIG. 10, and still fall within the scope of this disclosure.
The dashboard handler 1012 executing on the automated management service 1002 may provide the user interface 1024 including the analytics retrieval UI element 1030. The analytics retrieval UI element 1030 may be rendered, displayed, or otherwise presented on at least one display of, or communicatively coupled with, the automated management service 1002. The analytics retrieval UI element 1030 may correspond to at least one graphical user interface (GUI) element within the user interface 1024, such as a command button, a slider, a toggle switch, an image, a window, a prompt, or a container, among others, or any combination thereof. The user interface 1024 may be in the manner depicted, for example, in FIGS. 11 and 12, among others. The analytics retrieval UI element 1030 may provide performance indicators of the functions 1038 supported or provided by the application 1036. In response to detecting an interaction with the analytics retrieval UI element 1030, the dashboard handler 1012 may call, invoke, or otherwise execute the analytics evaluator 1018.

The analytics evaluator 1018 executing on the automated management service 1002 may calculate, determine, or otherwise generate a set of performance indicators for the corresponding set of functions 1038 of the application 1036, in response to the interaction with the analytics retrieval UI element 1030. The generation of the performance indicators may be based on historical data 1040 for the application 1036. The historical data 1040 may be stored and maintained on a storage (e.g., the database 1008) using instrumentation of the instance of the application 1036 on one or more of the servers 1004 across server groups 1006. For each function 1038 of the application 1036, the historical data 1040 may identify or include, for example: consumption of computing resources (e.g., processor or memory); a number of invocations (or requests); latency between requests and outputs; down time; a success rate; and a number of errors or failures from performing the function 1038, among others, or any combination thereof. The historical data 1040 may be instrumented or measured on a rolling basis, with overlapping sampling intervals.

In generating, the analytics evaluator 1018 may calculate, generate, or otherwise determine a performance metric for each function 1038 of the application 1036 based on at least a portion of the historical data 1040. Each performance metric may identify a respective predicted likelihood that the application 1036 will execute or carry out a corresponding function 1038. The function 1038 may include one or more defined operations of the application 1036, such as account information retrieval in an online banking application, a copy and paste operation in a word processor application, account authentication on a video game, or loading streaming multimedia on a video player, among others. The portion of the historical data 1040 may, for example, correspond to one or more recent time intervals relative to the present. In some embodiments, the analytics evaluator 1018 may determine the performance metric based on a combination of the portion of the historical data 1040. The combination may be, for example, a summation, a weighted average, or a formula, among others, to generate the performance metric from the historical data 1040. In conjunction, the analytics evaluator 1018 may calculate, generate, or otherwise determine a threshold value against which to compare the performance metric.
The threshold value may be determined based on at least a portion of the historical data 1040. The portion of the historical data 1040 used to determine the threshold value may include more sampling time intervals than the portion of the historical data 1040 used to determine the performance metric. For example, the portion of the historical data 1040 used for the threshold value may correspond to the most recent week, whereas the portion of the historical data 1040 used to determine the performance metric may correspond to the most recent six hours. In some embodiments, the analytics evaluator 1018 may determine the threshold value based on a combination of the portion of the historical data 1040. The combination may be, for example, a moving average (e.g., weighted or exponential), a weighted sum, or a formula, among others.

With the determinations, the analytics evaluator 1018 may determine, identify, or otherwise select a performance indicator for each function 1038 of the application 1036. The performance indicator may be correlated with or identify the predicted likelihood of the application 1036 successfully performing the function 1038. The performance indicator may be selected from a positive (or normal) performance indicator, corresponding to the performance metric for the function 1038 satisfying (e.g., greater than or equal to) the associated threshold, or a negative (or anomalous) performance indicator, corresponding to the performance metric for the function 1038 not satisfying (e.g., less than) the associated threshold. To select, the analytics evaluator 1018 may compare the performance metric for the function 1038 with the corresponding threshold. If the performance metric for the function 1038 satisfies the threshold, the analytics evaluator 1018 may select the positive performance indicator. Otherwise, if the performance metric for the function 1038 does not satisfy the threshold, the analytics evaluator 1018 may select the negative performance indicator. Upon the determination, the analytics evaluator 1018 may provide the performance indicators and corresponding performance metrics for the functions 1038 of the application 1036 to the user interface 1024. A minimal sketch of this metric-versus-threshold comparison follows.
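For concreteness, the short-window metric, longer-window threshold, and indicator selection described above might look as follows in Python. The choice of a linearly weighted average for the metric and an exponential moving average for the threshold is just one instance of the combinations mentioned; the per-interval success-rate sample format is an assumption made for the example.

def performance_metric(recent_samples):
    # Linearly weighted average of recent per-interval success rates
    # (0.0-1.0); newer samples receive larger weights. Assumes a
    # non-empty, chronologically ordered list.
    weights = range(1, len(recent_samples) + 1)
    return sum(w * s for w, s in zip(weights, recent_samples)) / sum(weights)

def threshold_value(history_samples, alpha=0.2):
    # Exponential moving average over a longer window of the same history
    # (e.g., the most recent week, versus six hours for the metric).
    ema = history_samples[0]
    for s in history_samples[1:]:
        ema = alpha * s + (1 - alpha) * ema
    return ema

def performance_indicator(recent_samples, history_samples):
    # Positive (normal) when the short-window metric meets the baseline.
    metric = performance_metric(recent_samples)
    return "positive" if metric >= threshold_value(history_samples) else "negative"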
In some embodiments, the analytics evaluator 1018 may use the analytics model 1020 to determine the set of performance indicators for the set of functions 1038 of the application 1036. The analytics model 1020 may be, for example, a machine learning (ML) model to process historical data to output performance indicators. The architecture or algorithm used to implement the analytics model 1020 may include, for example, an artificial neural network (ANN), a clustering model (e.g., k nearest neighbors), a regression model (e.g., linear or logistic regression), a random forest, a Bayesian classifier, or a support vector machine (SVM), among others. In general, the analytics model 1020 may include: a set of inputs corresponding to at least a portion (e.g., the most recent time interval) of the historical data 1040; at least one output corresponding to the positive or negative performance indicator; and a set of weights relating the inputs and outputs. The analytics model 1020 may be initialized, trained, and established (e.g., by the analytics evaluator 1018 or another computing device) using a training dataset. The training dataset may identify or include the historical data 1040. The portion of the historical data 1040 used to train the analytics model 1020 may include sampling time intervals prior to the portion of the historical data 1040 to be fed into the analytics model 1020. For instance, the portion of the historical data 1040 may be from the previous two to five weeks of instrumentation, relative to the most recent sampling. For the training dataset, the input may correspond to the consumption of computing resources, a number of invocations, latency, and down time, among others, included in one or more sampling intervals of the historical data 1040. The expected outputs may include the positive or negative performance indicator for the corresponding sampling interval. The weights of the analytics model 1020 may be trained in accordance with supervised learning using the training data. The analytics model 1020 may be continuously trained with updated historical data 1040 from one or more previous intervals of time on a rolling basis.

With the establishment, the analytics evaluator 1018 may apply the portion (e.g., the most recent time intervals) of the historical data 1040 for each function 1038 of the application 1036 to the analytics model 1020. In applying, the analytics evaluator 1018 may feed the historical data 1040 as input into the analytics model 1020, and may process the historical data 1040 in accordance with the set of weights of the analytics model 1020. From processing, the analytics evaluator 1018 may produce or generate the performance indicator for the corresponding function 1038, output from the analytics model 1020. The analytics evaluator 1018 may repeat the applying of the portions of the historical data 1040 over the set of functions 1038 of the application 1036. With the determination of the performance indicators, the analytics evaluator 1018 may provide the performance indicators for the functions 1038 of the application 1036 to the user interface 1024.

The dashboard handler 1012 may provide the user interface 1024 to include the analytics results UI elements 1042. The analytics results UI elements 1042 may correspond to a set of graphical user interface (GUI) elements within the user interface 1024, such as a radio button, a slider, a check box, a toggle switch, a text box, an image, a window, a prompt, or a container, among others. In some embodiments, the analytics results UI elements 1042 may be a part of the analytics retrieval UI element 1030. For example, the analytics results UI elements 1042 may be included as part of the window of the analytics retrieval UI element 1030. In some embodiments, the analytics results UI elements 1042 may be separate from the analytics retrieval UI element 1030. For instance, the analytics retrieval UI element 1030 may reside on a main webpage, and the analytics results UI elements 1042 may be presented on a separate webpage upon interaction with the analytics retrieval UI element 1030. Each analytics results UI element 1042 may correspond to a corresponding performance indicator of the respective function 1038 supported or provided by the application 1036. The indication may be, for instance, an enumeration identifier or a color code identifying whether the performance indicator for the respective function 1038 is positive (e.g., normal) or negative (e.g., anomalous), among others. For instance, the first analytics results UI element 1042A may correspond to an account information retrieval feature of an online banking application, and may have a green color to indicate that the feature is operating properly. The second analytics results UI element 1042B may correspond to a transaction feature of the online banking application, and may have a red color to indicate that the feature is non-operational or otherwise behaving abnormally.
The displaying of the performance indicators in the set of analytics results UI elements 1042 in a digestible manner may allow a user (e.g., the network administrator) to diagnose any issues with the application 1036, the servers 1004 hosting the application 1036, or the server group 1006, among others. In some embodiments, the dashboard handler 1012 may support or provide a drill-down feature for the performance indicator, upon interaction with at least one of the set of analytics results UI elements 1042. In response to the interaction with an analytics results UI element 1042, the dashboard handler 1012 may provide the performance metrics for the performance indicator of the corresponding function 1038 of the application 1036. As discussed above, the performance metric may identify the predicted likelihood of success for the given function 1038. In some embodiments, in response to the interaction, the dashboard handler 1012 may provide at least a portion of the historical data 1040 for the corresponding function 1038. The portion of the historical data 1040 may identify or include metrics, such as consumption of computing resources, number of requests, latency, success rate, or number or rate of errors, among others, as discussed above.

FIG. 11 depicts a screenshot of a performance indicator interface 1100 in the dashboard user interface for the system for performance analytics. The performance indicator interface 1100 may present performance indicators for various functions across multiple applications. In the depicted example, generally in the middle, the performance indicator interface 1100 may include a set of lists 1102A-C for each application platform (or operating system). For each platform, the performance indicator interface 1100 may include a set of functions 1110A-N identifying performance indicators (e.g., using color code indications). The performance indicator interface 1100 may be used by the network administrator to pinpoint certain functions (or transactions) as a cause of issues in a given application.

FIG. 12 depicts a screenshot of a drill down interface 1200 in the dashboard user interface for the system for performance analytics. The drill down interface 1200 may present additional metrics for a given function of a particular application. In the depicted example, along the top, the drill down interface 1200 may include an indicator element 1205 identifying performance indicators (e.g., using color code) for the identified function over a given time interval. In addition, generally in the middle, the drill down interface 1200 may include a set of graph elements 1210A-D identifying performance metrics (e.g., failure count trend, slow call count trend, calls per minute, and success count trend) over time. The drill down interface 1200 may be used by the network administrator to view various types of metrics for a particular function in a selected application.

FIG. 13 depicts a block diagram of a system 1300 for site reliability evaluation in networked environments. The system 1300 may include at least one automated management service 1302, one or more servers 1304A-X (hereinafter generally referred to as servers 1304) in at least one server group 1306, and at least one database 1308, among others, communicatively coupled with one another via at least one network 1310. The automated management service 1302 may include at least one dashboard handler 1312 and at least one reliability evaluator 1320, among others, and may provide at least one user interface 1324.
The user interface 1324 may include at least one reliability retrieval UI element 1332, among others. At least one of the servers 1304 in at least one server group 1306 may host resources for at least one application 1336. Embodiments may comprise additional or alternative components or omit certain components from those of FIG. 13, and still fall within the scope of this disclosure.

The dashboard handler 1312 executing on the automated management service 1302 may provide the user interface 1324 including the reliability retrieval UI element 1332. The reliability retrieval UI element 1332 may be rendered, displayed, or otherwise presented on at least one display of, or communicatively coupled with, the automated management service 1302. The reliability retrieval UI element 1332 may correspond to at least one graphical user interface (GUI) element within the user interface 1324, such as a command button, a slider, a toggle switch, an image, a window, a prompt, or a container, among others, or any combination thereof. The reliability retrieval UI element 1332 may provide reliability indicators for the application 1336, the servers 1304, or the server group 1306. In response to detecting an interaction with the reliability retrieval UI element 1332, the dashboard handler 1312 may call, invoke, or otherwise execute the reliability evaluator 1320.

The reliability evaluator 1320 executing on the automated management service 1302 may calculate, identify, or determine a set of reliability measures for the application 1336, the servers 1304, or the server group 1306. The reliability evaluator 1320 may retrieve, obtain, or otherwise identify historical data instrumenting the application 1336, the servers 1304, or the server group 1306 from the database 1308. In conjunction, the reliability evaluator 1320 may retrieve, obtain, or otherwise identify one or more service level objectives (SLOs) for the application 1336, the servers 1304, or the server group 1306. With the identification, the reliability evaluator 1320 may compare the historical data with the SLOs. Based on the comparison, the reliability evaluator 1320 may determine the reliability measures. With the determination, the reliability evaluator 1320 may provide the reliability indicators for presentation on the user interface 1324, in the manner depicted in FIGS. 14-17. A sketch of one such SLO comparison is given after the following figure descriptions.

FIG. 14 depicts a screenshot of an on-boarding interface 1400 in the dashboard user interface for the system for site reliability evaluation. The on-boarding interface 1400 may be a graphical user interface used to enter various parameters for SLOs.

FIG. 15 depicts a screenshot of a reliability indication interface 1500 in the dashboard user interface for the system for site reliability evaluation. The reliability indication interface 1500 may present various statistics relevant to whether the SLOs are being met.

FIG. 16 depicts a screenshot of a drill down interface 1600 in the dashboard user interface for the system for site reliability evaluation. The drill down interface 1600 may provide additional relevant statistics for a particular application, servers, or server groups.

FIG. 17 depicts a screenshot of a heat map interface 1700 in the dashboard user interface for the system for site reliability evaluation. The heat map interface 1700 may present a set of reliability indicators for particular applications or platforms over multiple time intervals.
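As an illustration of comparing instrumented history against an SLO, the Python sketch below computes an achieved success ratio and the remaining error budget. The (good_count, total_count) event format and the error-budget formulation are assumptions made for the example; the disclosure does not prescribe a particular reliability formula.

def reliability_measure(good_count, total_count, slo_target):
    # slo_target is, e.g., 0.999 for a "three nines" objective.
    achieved = good_count / total_count if total_count else 1.0
    bad_allowed = (1.0 - slo_target) * total_count  # error budget in events
    bad_actual = total_count - good_count
    if bad_allowed > 0:
        budget_remaining = max(1.0 - bad_actual / bad_allowed, 0.0)
    else:
        budget_remaining = 1.0 if bad_actual == 0 else 0.0
    return {
        "achieved": achieved,                 # e.g., 0.9995
        "meets_slo": achieved >= slo_target,  # drives the reliability indicator
        "error_budget_remaining": budget_remaining,
    }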
FIG. 18 depicts a block diagram of a system 1800 for services assistance in a system for managing networked environments. The system 1800 may include at least one automated management service 1802, one or more servers 1804A-X (hereinafter generally referred to as servers 1804) in at least one server group 1806, and at least one database 1808, among others, communicatively coupled with one another via at least one network 1810. The automated management service 1802 may include at least one dashboard handler 1812 and at least one assistant handler 1822, among others, and may provide at least one user interface 1824. The user interface 1824 may include at least one assistant invocation UI element 1834, among others. At least one of the servers 1804 in at least one server group 1806 may host resources for at least one application 1836. Embodiments may comprise additional or alternative components or omit certain components from those of FIG. 18, and still fall within the scope of this disclosure.

The dashboard handler 1812 executing on the automated management service 1802 may provide the user interface 1824 including the assistant invocation UI element 1834. The assistant invocation UI element 1834 may be rendered, displayed, or otherwise presented on at least one display of, or communicatively coupled with, the automated management service 1802. The assistant invocation UI element 1834 may correspond to at least one graphical user interface (GUI) element within the user interface 1824, such as a command button, a slider, a toggle switch, an image, a window, a prompt, or a container, among others, or any combination thereof. The assistant invocation UI element 1834 may provide an interface to a customer agent service to obtain health checks or enter incidents for the application 1836, the servers 1804, or the server group 1806. In response to detecting an interaction with the assistant invocation UI element 1834, the dashboard handler 1812 may call, invoke, or otherwise execute the assistant handler 1822.

The assistant handler 1822 executing on the automated management service 1802 may retrieve, obtain, or identify a health status of the application 1836, the servers 1804, or the server group 1806 selected via the user interface 1824. The assistant handler 1822 may invoke a customer services agent (e.g., a digital assistant application) to retrieve the health status of the indicated application 1836, the servers 1804, or the server group 1806. The assistant handler 1822 may also invoke the customer services agent to enter details regarding an incident (e.g., an outage or an interruption) of the application 1836, the servers 1804, or the server group 1806. The customer services agent may handle incident ticketing to prevent duplicates or aggregate similar incidents. The input and output interfaces for the customer services agent may be presented in the user interface 1824 in the manner depicted in FIGS. 19 and 20.

FIG. 19 depicts a screenshot of a health check interface 1900 in the dashboard user interface for the system for services assistance. In the depicted example, the health check interface 1900 may present a health status of various functions of an application. The health check interface 1900 may also be used to enter an impacted application to report incidents.

FIG. 20 depicts a screenshot of a query interface 2000 in the dashboard user interface for the system for services assistance. In the depicted example, the query interface 2000 may be used to enter additional information for the incident. The query interface 2000 may be used to submit incident reports to the customer services agent.
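One way such an agent might prevent duplicate tickets while aggregating similar incidents is sketched below in Python. The use of difflib text similarity, the 0.8 threshold, and the ticket dictionary layout are illustrative assumptions only; the disclosure does not specify a duplicate-detection method.

import difflib

def file_incident(tickets, application_id, description, threshold=0.8):
    # Attach the report to an existing open ticket when it looks like a duplicate.
    for ticket in tickets:
        if ticket["application_id"] != application_id or ticket["status"] != "open":
            continue
        similarity = difflib.SequenceMatcher(
            None, ticket["description"], description).ratio()
        if similarity >= threshold:
            ticket["reports"].append(description)  # aggregate similar incidents
            return ticket
    # Otherwise open a new ticket for the incident.
    ticket = {"application_id": application_id, "description": description,
              "status": "open", "reports": [description]}
    tickets.append(ticket)
    return ticket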
FIG. 21 depicts a flow diagram of a method 2100 of managing networked environments. Embodiments may include additional, fewer, or different operations from those described in the method 2100. The method 2100 may be performed by a service (e.g., an automated management service) executing machine-readable software code, though it should be appreciated that the various operations may be performed by one or more computing devices and/or processors.

At step 2105, the service may provide a dashboard user interface. The dashboard user interface may include a first element to invoke patch management, a second element to execute traffic failover, a third element to retrieve predictive analytics, a fourth element to provide reliability indications, and a fifth element to invoke a services assistant. At step 2110, the service may monitor for an interaction with one of the elements of the dashboard user interface. At step 2115, if an interaction is detected on the dashboard user interface, the service may determine which process to invoke. At step 2120, if the interaction is with the first element, the service may execute the patch process to shut down a server group, install the patch, and perform validation. At step 2125, if the interaction is with the second element, the service may perform a failover process to transfer network traffic for an identified application from one group of servers to another group of servers. In addition, at step 2130, if the interaction is with the third element, the service may provide analytics by generating performance indicators for functions of the application. At step 2135, if the interaction is with the fourth element, the service may provide reliability indicators in accordance with service level objectives (SLOs). At step 2140, if the interaction is with the fifth element, the service may invoke the customer services agent to check health statuses of applications and manage ticketing of incidents. At step 2145, the service may provide the output from the performed process on the dashboard user interface. Subsequently, the service may repeat the method 2100 from the step 2110. A compact sketch of this dispatch loop follows.
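Purely as an illustration of the monitor-and-dispatch flow of steps 2110-2145, the loop might be structured as below. The handler method names and the ui.wait_for_interaction call are hypothetical stand-ins for whatever event mechanism the dashboard actually uses; none of them are named by this disclosure.

def run_dashboard(service, ui):
    handlers = {
        "patch_element": service.execute_patch_process,                  # step 2120
        "failover_element": service.execute_traffic_failover,            # step 2125
        "analytics_element": service.generate_performance_indicators,    # step 2130
        "reliability_element": service.generate_reliability_indicators,  # step 2135
        "assistant_element": service.invoke_customer_services_agent,     # step 2140
    }
    while True:
        # Step 2110: block until an element of the dashboard UI is interacted with.
        element, params = ui.wait_for_interaction()
        handler = handlers.get(element)  # step 2115: determine the process
        if handler is not None:
            output = handler(**params)
            ui.display(output)           # step 2145: surface the result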
The foregoing method descriptions and the process flow diagrams are provided merely as illustrative examples and are not intended to require or imply that the steps of the various embodiments must be performed in the order presented. The steps in the foregoing embodiments may be performed in any order. Words such as "then," "next," etc. are not intended to limit the order of the steps; these words are simply used to guide the reader through the description of the methods. Although process flow diagrams may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, and the like. When a process corresponds to a function, the process termination may correspond to a return of the function to a calling function or a main function.

The various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.

Embodiments implemented in computer software may be implemented in software, firmware, middleware, microcode, hardware description languages, or any combination thereof. A code segment or machine-executable instructions may represent a procedure, a function, a subprogram, a program, a routine, a subroutine, a module, a software package, a class, or any combination of instructions, data structures, or program statements. A code segment may be coupled to another code segment or a hardware circuit by passing and/or receiving information, data, arguments, parameters, or memory contents. Information, arguments, parameters, data, etc. may be passed, forwarded, or transmitted via any suitable means including memory sharing, message passing, token passing, network transmission, etc.

The actual software code or specialized control hardware used to implement these systems and methods is not limiting. Thus, the operation and behavior of the systems and methods were described without reference to the specific software code, it being understood that software and control hardware can be designed to implement the systems and methods based on the description herein.

When implemented in software, the functions may be stored as one or more instructions or code on a non-transitory computer-readable or processor-readable storage medium. The steps of a method or algorithm disclosed herein may be embodied in a processor-executable software module, which may reside on a computer-readable or processor-readable storage medium. Non-transitory computer-readable or processor-readable media include both computer storage media and tangible storage media that facilitate transfer of a computer program from one place to another. Non-transitory processor-readable storage media may be any available media that may be accessed by a computer. By way of example, and not limitation, such non-transitory processor-readable media may comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other tangible storage medium that may be used to store desired program code in the form of instructions or data structures and that may be accessed by a computer or processor. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media. Additionally, the operations of a method or algorithm may reside as one or any combination or set of codes and/or instructions on a non-transitory processor-readable medium and/or computer-readable medium, which may be incorporated into a computer program product.

The preceding description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present disclosure.
Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the disclosure. Thus, the present disclosure is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the following claims and the principles and novel features disclosed herein. While various aspects and embodiments have been disclosed, other aspects and embodiments are contemplated. The various aspects and embodiments disclosed are for purposes of illustration and are not intended to be limiting, with the true scope and spirit being indicated by the following claims.
DETAILED DESCRIPTION

Herein described are systems and methods for capturing and distributing live audio streams of a live event to a plurality of mobile computing devices. According to some embodiments, the capture and distribution of the live audio streams are performed in real-time. Although the systems and methods described herein describe the capturing and distribution of live audio streams, it is understood that the systems and methods may also be utilized to capture one or more live multimedia streams that comprise audio data and video data, or one or more live video streams that comprise video data (without audio data).

It will be appreciated that for simplicity and clarity of illustration, where considered appropriate, reference numerals may be repeated among the figures to indicate corresponding or analogous elements. In addition, numerous specific details are set forth in order to provide a thorough understanding of the exemplary aspects of the present application described herein. However, it will be understood by those of ordinary skill in the art that the exemplary aspects described herein may be practiced without these specific details. In other instances, well-known methods, procedures and components have not been described in detail so as not to obscure the exemplary aspects described herein. Also, the description is not to be considered as limiting the scope of the exemplary aspects described herein.

Any systems, method steps, method blocks, components, parts of components, and the like described herein in the singular are to be interpreted as also including a description of such systems, method steps or tasks, components, parts of components, and the like in the plural, and vice versa. It will also be understood that for the purposes of this application, "at least one of X, Y, and Z" or "one or more of X, Y, and Z" language can be construed as X only, Y only, Z only, or any combination of two or more items X, Y, and Z (e.g., XYZ, XYY, YZ, ZZ). In the present application, components may be described as being "configured to" or "enabled to" perform one or more functions. Generally, it is understood that a component that is configured to or enabled to perform a function is configured to or enabled to perform the function, or is suitable for performing the function, or is adapted to perform the function, or is operable to perform the function, or is otherwise capable of performing the function.

Generally, the described systems and methods are suitable for use at live events, such as sporting events, in large venues, such as sports stadia or convention centers. One or more computing devices, such as servers, are used to stream live audio to the audience members at the venue via their respective mobile computing devices. Audience members can receive the audio stream(s) on their own mobile phone or another mobile computing device. According to some embodiments, the audio stream(s) is used in venue only, does not leave the local area network (LAN) of the venue and does not cross the Internet. The audio that is streamed typically relates directly to the live event in progress in the venue. For example, according to some embodiments, live audio is captured from an athlete, referee or in-venue commentator (via at least one audio input device, such as a microphone worn by the athlete, referee or in-venue commentator) as they participate in, or comment on, a sports match.
The audience member can listen via their mobile computing device to what is being said on the field of play as they watch the action. Latency, or delay, is a particularly important consideration in the distribution of a live audio stream of a live event in a venue. The audio stream relates to live action happening in view of the audience, so it is particularly important to the listening audience member that the audio and live action appear to be synchronized. If there is too much delay between what the audience sees and what they hear, then the effect will tend to be distracting and may ruin the experience. Persons skilled in the art will appreciate that end-to-end latencies up to and including 500 milliseconds (ms) are generally considered "real-time". However, the amount of delay that can be tolerated depends upon what the audience members can see and hear. For example, if the audience can see a player's or a commentator's lips moving while they are talking, then generally an audio delay, or end-to-end latency, of more than 45 ms is usually problematic. If the audience members are further away, so that they cannot see the lips moving, then a delay, or end-to-end latency, of 100 to 500 ms may be an acceptable user experience.

Venue size can vary. For example, some venues can accommodate a few hundred attendees, whereas there are venues that can accommodate up to 100,000 attendees. The ability to scale up the distribution of media data, such as audio data, is particularly important when such a large number of client devices are to be connected (or potentially connected). As discussed, typical audio streaming services tend to utilize cloud-based computing devices, such as servers, to host users and deliver content. Such audio streaming services are optimized to provide good quality audio while latency is deemed to be less important. In order to scale up quickly, significant latency or delay between the audio content at the source and the audio content as presented to the listener is introduced. In some instances, these systems may introduce latency of more than 30 seconds.

The described systems and methods provide a distributed architecture which allows for the scaling of a streaming platform to a large number of users (via associated mobile computing devices, also referred to herein as "mobile clients" or "client devices") by adding audio capture and distributing computing devices (such as servers, or more granularly, processing devices), while maintaining a unified external interface to the mobile clients. According to some embodiments, the system infrastructure locally distributes audio content (over, for example, a LAN or similar medium) and is physically located in each stadium in which it operates. According to some embodiments, the described systems can be scaled up on demand to accommodate an increased client device load with reduced latency in comparison to typical cloud-based, audio streaming systems. According to some embodiments, the described systems are self-managing and can gracefully recover if one or more computing devices (such as servers, or more granularly, processing devices) fails. The latency guidelines discussed above are summarized in the sketch below.
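Summarizing the latency guidelines discussed above, a simple Python check might look as follows. The function merely encodes the rules of thumb given in this description (roughly 45 ms when lips are visible, up to about 500 ms otherwise); the numbers are guidelines from the discussion, not specified limits of the system.

def latency_acceptable(end_to_end_ms, lips_visible):
    # Visible lip movement exposes even small audio delays.
    if lips_visible:
        return end_to_end_ms <= 45
    # Further away, up to ~500 ms may still feel synchronized,
    # which is also the usual bound for "real-time" delivery.
    return end_to_end_ms <= 500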
Attention is directed to FIGS. 1 and 2, which depict an example system 100 for capturing and distributing live audio streams of a live event to a plurality of mobile computing devices. System 100 comprises a plurality of processing devices 102 (also referred to herein as processing devices 102). Although the plurality of processing devices 102 are depicted as comprising two processing devices, processing device 102-1 and processing device 102-2, it is understood that, according to some embodiments, system 100 comprises more than two processing devices. As used herein, the terms "processing device", "processing devices", "processing device(s)", "processor", "processors" or "processor(s)" may refer to any combination of processing devices, and the like, suitable for carrying out the actions or methods described herein. For example, processing devices 102 may comprise any suitable processing device, or combination of processing devices, including but not limited to a microprocessor, a central processing unit (CPU) and the like. Other suitable processing devices are also within the scope of the application.

Processing devices 102 are each coupled to at least one memory 104. For example, processing device 102-1 is coupled to memory 104-1 and processing device 102-2 is coupled to memory 104-2. Memory 104-1 and memory 104-2 can each comprise any suitable memory device, including but not limited to any suitable one of, or combination of, a local and/or remote volatile memory, non-volatile memory, random access memory (RAM), read-only memory (ROM), hard drive, optical drive, buffer(s), cache(s), flash memory, magnetic computer storage devices (e.g., hard disks, floppy disks, and magnetic tape), optical memory (e.g., CD(s) and DVD(s)), and the like. Other suitable memory devices are also within the scope of the application. As such, it is understood that the term "memory", or any variation thereof, as used herein may comprise a tangible and non-transitory computer-readable medium (i.e., a medium which does not comprise only a transitory propagating signal per se) comprising or storing computer-executable instructions, such as computer programs, sets of instructions, code, software, and/or data for execution of any method(s), step(s) or process(es) described herein by any processing device(s) and/or microcontroller(s) described herein.

Memory 104-1 and memory 104-2 are configured to store computer-executable instructions 106, as computer-executable instructions 106-1 and 106-2 respectively, for execution by at least one processing device, including processing devices 102-1 and 102-2. According to some embodiments, the computer-executable instructions 106 comprise subsets of instructions based on particular functionalities, such as audio capture, media management and session management. These modules may communicate with each other to perform the functions described herein. According to some embodiments, these functionalities may be embodied as modules of the computer-executable instructions 106. According to some embodiments, the audio capture module accepts raw audio from an external interface, such as an analogue audio card, digitizes it, if needed, and compresses it using an audio codec, as discussed further below. According to some embodiments, the session manager module accepts incoming connection requests from client devices (e.g., mobile computing devices) and matches mobile clients with available processing devices via their respective media manager modules. According to some embodiments, the media manager streams the audio packets to client devices. According to some embodiments, these functionalities may be distributed over the plurality of processing devices 102 such that each processing device is configured to participate in one or more functionalities (e.g., audio capture, media management and session management). For example, according to some embodiments, executable instructions 106-1 may comprise the audio capture and media management functionalities, whereas executable instructions 106-2 comprise the session management functionality. According to some embodiments, more than one of the processing devices 102 share at least one of the audio capture, media management and session management functionalities, via the respective computer-executable instructions 106. A simple sketch of the session manager's matching role is given below.
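For illustration, the session manager's matching of incoming mobile clients to available media managers might resemble the following Python sketch. The least-loaded selection policy, the capacity model, and the media manager records are assumptions introduced for the example; the disclosure leaves the matching strategy open.

def accept_connection(client_id, media_managers, capacity):
    # Consider only media managers with spare capacity for another client.
    candidates = [m for m in media_managers if len(m["clients"]) < capacity]
    if not candidates:
        return None  # no processing device currently available
    # Pick the least-loaded media manager and register the client with it.
    chosen = min(candidates, key=lambda m: len(m["clients"]))
    chosen["clients"].append(client_id)
    return chosen["address"]  # the client receives audio packets from here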
For example, according to some embodiments, executable instructions106-1may comprise the audio capture and media management functionalities, whereas executable instructions106-2comprise the session management functionality. According to some embodiments, more than one of the processing devices102share at least one of the audio capture, media management and session management functionalities, via the respective computer executable instructions106. InFIGS.1and2, processing device102-1and memory104-1, and processing device102-2and memory104-2, are depicted as being co-located on the same computing device (e.g., computing device116-1in respect of processing device102-1and memory104-1, and computing device116-2in respect of processing device102-2and memory104-2). Computing devices116-1and116-2(referred to collectively as computing devices116) may each comprise one or more computing devices. According to some embodiments, processing device102-1and memory104-1, and processing device102-2and memory104-2, are not co-located on the same computing device. For example, according to some embodiments, computing device116-1comprises two or more servers in wired and/or wireless communication with each other, and processing device102-1is located at one of the servers while memory104-1is located at another one of the servers. Similarly, according to some embodiments, computing device116-2comprises two or more servers in wired and/or wireless communication with each other, and processing device102-2is located at one of the servers while memory104-2is located at another one of the servers. Processing devices102are in network communication with each other. For example, processing devices102may be configured to communicate with each other over a first network108via communication links110(individually referred to as communication link110-1and communication link110-2). Communication links110may each comprise any suitable wired and/or wireless communication link(s), or suitable combination thereof. The processing devices102may also be configured to transmit and receive data over the first network108according to any suitable protocol or protocols, such as wireless data protocols, cellular device protocols, WiFi protocols, WiMax protocols, Real-Time Transport Protocol (RTP) and/or a combination of protocols. According to some embodiments, the first network108is a private LAN of the venue hosting the live event. According to some embodiments, the first network108is a wireless network. At least one of the processing devices102is associated with at least one audio channel. For example, processing device102-1is associated with audio channels112(referred to individually as audio channel112-1and audio channel112-2) and processing device102-2is associated with audio channels114(referred to individually as audio channel114-1and audio channel114-2). Each of the audio channels112,114is configured to receive a live audio stream from the live event. Although processing devices102are depicted as each being associated with two audio channels, it is understood that at least one of the processing devices102may be associated with one or more audio channels. For example, according to some embodiments, processing device102-1is associated with one audio channel and processing device102-2is associated with two or more audio channels. According to some embodiments, at least one of the processing devices102does not have an associated audio channel and does not participate in the audio capture activity. 
Such processing devices (i.e., processing devices without any associated audio channels) may participate in other activities, including media management and/or session management, to transmit the plurality of discrete audio data packets126or copies126C generated therefrom, as discussed further below. According to some embodiments, at least one of the plurality of processing devices102, such as processing device102-1, has at least one audio channel associated therewith and is configured to receive at least one live audio stream via at least one of the associated audio channels112,114in accordance with computer-executable instructions106. For example, processing device102-1may be coupled to at least one of the audio input devices120(referred to individually as audio input device120-1,120-2and120-3) and configured to communicate with at least one of the audio input devices120over communication link122via the first network108, or another suitable network. Communication link122comprises any suitable wired and/or wireless communication link(s), or suitable combination thereof. The processing device102-1may also be configured to communicate with at least one of the audio input devices120in accordance with any suitable protocol or protocols, such as wireless data protocols, cellular device protocols, WiFi protocols, WiMax protocols, and/or a combination thereof. It is understood that the at least one processing device that receives the at least one live audio stream comprises one or more of the plurality of processing devices. For example, according to some embodiments, the number of processing devices that receive the at least one live audio stream is less than the total number of the processing devices102. However, according to some embodiments, all of the processing devices102receive the at least one live audio stream. The audio input devices120comprise any suitable audio input devices, such as a wired or wireless microphone worn by a referee, player or a commentator of the live event. The audio input devices120receive at least one of the live audio streams124. For example, audio input device120-1receives first live audio stream124-1, audio input device120-2receives second live audio stream124-2and audio input device120-3receives third live audio stream124-3. The live audio streams124each comprise a live audio signal that conveys, for example, a referee's or a commentator's voice. For example, first live audio stream124-1may comprise a live audio signal of a referee's voice, second live audio stream124-2may comprise a live audio signal of a commentator's voice and third live audio stream124-3may comprise a live audio signal of an athlete's voice. According to some embodiments, the audio input devices120receive the live audio streams124in real-time. According to some embodiments, each of the live audio streams124comprises the same live audio signal. Processing devices102are also configured to communicate with a plurality of mobile devices132-1to132-n(referred to collectively as the plurality of mobile computing devices132or mobile computing devices132). 
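The association just described, in which each processing device owns zero or more audio channels and each channel carries one live audio stream, can be sketched as follows. The record types and field names are hypothetical illustrations, not part of the described system.

    # Minimal sketch of the device/channel association described above.

    from dataclasses import dataclass, field

    @dataclass
    class AudioChannel:
        channel_id: str
        source: str               # e.g. "referee mic", "commentator mic"

    @dataclass
    class ProcessingDevice:
        device_id: str
        channels: list = field(default_factory=list)  # may be empty: such a
                                                      # device only distributes

    capture_node = ProcessingDevice(
        "102-1",
        [AudioChannel("112-1", "referee mic"),
         AudioChannel("112-2", "commentator mic")])
    relay_node = ProcessingDevice("102-2")  # no channels; media/session roles only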
For example, the processing devices102may be enabled to transmit and/or receive data over the first network108and/or, as shown inFIGS.1and2, a second network134via suitable communication links, such as communication links136-1,136-2and138-1to138-n(communication links136-1and136-2referred to collectively as communication links136, communication links138-1to138-nreferred to collectively as communication links138). The communication links136and138comprise any suitable wired and/or wireless communication link(s), or suitable combination thereof. The processing devices102may also be configured to transmit and receive data over the second network134according to any suitable protocol or protocols, such as wireless data protocols, cellular device protocols, WiFi protocols, WiMax protocols, Real-Time Transport Protocol (RTP) and/or a combination of protocols. According to some embodiments, the second network134is a LAN of the venue hosting the live event. According to some embodiments, the second network134is a public LAN of the venue. According to some embodiments, the second network134is a wireless network. According to some embodiments, the second network134is a cellular network. The mobile computing devices132are any computing devices suitable for communicating with the processing devices102(e.g., over second network134) and for outputting received audio data to users of the mobile computing devices132. For example, mobile computing devices132may be one or more tablet computing devices, laptop computing devices, PDAs (personal digital assistants), cellphones, smartphones, or computer terminals having at least one suitable audio output device (including, without limitation, devices which are configured to use Bluetooth™ or a similar medium in conjunction with an audio output device). In addition, the mobile computing devices132are configured to subscribe to one or more of the live audio streams124. As discussed above, at least one of the plurality of processing devices102, such as processing device102-1, has at least one audio channel associated therewith and is configured to receive at least one live audio stream via at least one of the audio channels112,114in accordance with computer-executable instructions106. The received live audio stream(s) may be in analog and/or digital format. According to some embodiments, if the received live audio stream(s) is in analog format (i.e., an analog signal), then the processing device(s) receiving the live audio stream, in accordance with the computer-executable instructions106, is enabled to convert the received live audio stream(s) into a digital format. According to some embodiments, the processing device(s) receiving the live audio stream(s) is configured to compress the live audio stream(s) using a suitable audio codec. According to some embodiments, the codec is a low latency audio codec, such as G.711, MP3 or Opus. For example, according to some embodiments, processing device102-1is enabled to receive live audio stream124-1via audio channel112-1. In accordance with computer-executable instructions106-1, the processing device102-1is enabled to generate a plurality of discrete audio data packets126-1to126-p(also referred to collectively as discrete audio data packets126) from the live audio stream124-1. 
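A minimal packetization sketch may clarify how a received live stream could be cut into the discrete audio data packets just described. The frame parameters (20 ms frames of 16-bit mono PCM at 48 kHz) and the header layout are assumptions for illustration, not the patent's actual wire format.

    # Hypothetical packetizer: split PCM into fixed-duration frames, each
    # prefixed with a small header (stream id, sequence number, length).

    import struct

    FRAME_SAMPLES = 960          # 20 ms at 48 kHz
    BYTES_PER_SAMPLE = 2         # 16-bit mono PCM

    def packetize(pcm: bytes, stream_id: int):
        frame_bytes = FRAME_SAMPLES * BYTES_PER_SAMPLE
        packets = []
        for seq, offset in enumerate(range(0, len(pcm), frame_bytes)):
            payload = pcm[offset:offset + frame_bytes]
            # Illustrative header: 2-byte stream id, 4-byte sequence number,
            # 2-byte payload length.
            header = struct.pack("!HIH", stream_id, seq, len(payload))
            packets.append(header + payload)
        return packets

    packets = packetize(b"\x00" * (5 * FRAME_SAMPLES * BYTES_PER_SAMPLE), stream_id=1)
    assert len(packets) == 5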
The processing device102-1is further enabled to transmit the discrete audio data packets126, over the first network108or another suitable network, for receipt by at least one of the remainder of the plurality of the processing devices102, such as processing device102-2(seeFIG.2). In other words, the discrete audio data packets126are distributed among the processing devices102such that at least one of the processing devices102has access to the discrete audio data packets126. According to some embodiments, the discrete audio data packets are transmitted to each one of the processing devices102such that each of those processing devices has access to the discrete audio data packets. The internal distribution of the discrete audio data packets over the first network108or another suitable network may occur in a number of suitable ways. For example, according to some embodiments, processing device102-1, as a processing device that participates in the audio capture activity, receives the discrete audio data packets126generated by at least one of the remainder of the processing devices also participating in the audio capture activity, after transmitting the discrete audio data packets that it has generated to the remainder processing devices, such as to processing device102-2. As another example, according to some embodiments, processing device102-1transmits the discrete audio data packets126it has generated to itself in addition to the remainder of the processing devices (i.e., at least one of the processing devices participating in the audio capture activity receives the discrete audio data packets126it generated from itself). In other words, according to some embodiments, the transmit and receive processes utilized to internally distribute the discrete audio data packets126among the processing devices over the first network108or another suitable network are harmonized. Harmonizing these processes may lead to a reduction in the complexity of the computer executable instructions106and in debugging effort, and help to streamline the scaling of system100to handle distribution of the live streams124to the mobile devices132. According to some embodiments, at at least one of the processing devices102, copies of the discrete audio data packets126-1C to126-pC (referred to collectively as copies of discrete audio data packets126C or copies126C) are generated from the discrete audio data packets126. According to some embodiments, each one of the processing devices102generates copies126C. For example, as shown inFIG.2, processing device102-2generates copies126C from the discrete audio data packets126received from processing device102-1. In respect of the processing device(s) that participated in the audio capture activity, the copies126C may be generated from the discrete audio data packets126prior to transmitting the discrete audio data packets126to at least one of the remainder processing devices, according to some embodiments. Alternatively, as discussed above, the processing device that participated in the audio capture activity may, after transmitting the discrete audio data packets126to the remainder processing devices, receive the discrete audio data packets126from one of the remainder processing devices (such as processing device102-2) or from itself. According to such embodiments, the copies126C generated by the processing device that participated in the audio capture activity are generated from the received discrete audio data packets126. 
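The harmonized internal distribution described above, in which the capturing device sends each packet to every processing device including itself so that a single receive path serves all devices, might be sketched as follows. The peer addresses and port are assumptions for illustration.

    # Minimal sketch of harmonized internal distribution over UDP: the
    # sender's own address appears in PEERS, so it receives its own packets
    # through the same path as every other processing device.

    import socket

    PEERS = [("10.0.0.1", 5004), ("10.0.0.2", 5004)]  # includes the sender itself

    def distribute(packet: bytes):
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        try:
            for addr in PEERS:
                sock.sendto(packet, addr)
        finally:
            sock.close()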
The copies126C generated by the processing devices102are placed in a buffer accessible to the respective processing device of the plurality of processing devices102. For example, as shown inFIGS.8A and8B, the copies126C generated by processing devices102-1and102-2are placed in buffers118-1and118-2(referred to collectively as buffers118). According to some embodiments, buffers118may be local to the respective processing device of processing devices102. According to some embodiments, one or more of buffers118may be remote from the respective processing device of processing devices102. As discussed further below, the processing devices102may read from the buffers118in order to fulfil requests from the mobile computing devices132for transmission of the live streams124. According to the described systems and methods, in at least some embodiments, the plurality of processing devices102have access to the discrete audio data packets126or the copies126C generated therefrom. As a result, the mobile computing devices132are able to access the requested live audio stream (e.g., live audio stream124-1) from any one of the plurality of processing devices102, even if the transmitting processing device is not any of the processing devices that initially received the requested live audio stream. As discussed above, according to some embodiments, more than one of the plurality of processing devices may be enabled to receive at least one of the live audio streams via at least one associated audio channel. For example, according to some embodiments, processing device102-1is configured to receive first live audio stream124-1via audio channel112-1, as described above, and processing device102-2is configured to receive one or more of the second and third live audio streams124-2and124-3via audio channels114-1and114-2. Similar to processing device102-1, processing device102-2may be enabled to generate a plurality of discrete audio data packets from the received second and third live audio streams124-2and124-3and transmit the generated plurality of discrete audio data packets over the first network108or another suitable network for receipt by processing device102-1and/or itself, in accordance with computer-executable instructions106-2. Hence, according to some embodiments, the processing load associated with receiving the live audio streams, generating the discrete audio data packets and transmitting the discrete audio data packets over the system (e.g., system100) can be distributed over multiple processing devices of the system. According to some embodiments, one or more of the processing devices102are enabled to transmit the discrete audio data packets126for receipt by the remainder of the plurality of processing devices102, or the discrete audio data packets126are received by the at least one of the plurality of processing devices having the at least one audio channel associated therewith, by unicast transmission or multicast transmission. According to some embodiments, one or more of the processing devices102are enabled to transmit the discrete audio data packets126for receipt by the remainder of the plurality of processing devices, or the discrete audio data packets126are received by the at least one of the plurality of processing devices having the at least one audio channel associated therewith, in accordance with the User Datagram Protocol (UDP). In accordance with the computer-executable instructions106, a nominated processing device of the plurality of processing devices is enabled to receive a connection request from a respective mobile computing device of the plurality of mobile computing devices. 
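One simple realization of the per-device buffer of packet copies (buffers118) is a bounded queue from which transmission copies are later generated. The class name, capacity and use of a deque are assumptions for illustration.

    # Minimal sketch of a per-device buffer holding copies (126C) of the
    # discrete audio data packets; transmission copies (126T) are generated
    # from these buffered copies on demand.

    from collections import deque

    class PacketBuffer:
        def __init__(self, capacity: int = 512):
            self._packets = deque(maxlen=capacity)   # oldest copies drop off

        def put(self, packet: bytes):
            self._packets.append(bytes(packet))      # store a copy

        def snapshot(self):
            # Generate fresh transmission copies from the buffered copies.
            return [bytes(p) for p in self._packets]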
For example, processing device102-2may be the nominated processing device and enabled to receive a connection request130from mobile computing device132-1(FIG.2). The connection request130comprises a request for transmission of one of the live audio streams124(e.g., first live audio stream124-1) to the respective mobile computing device (e.g., mobile computing device132-1). According to some embodiments, the nominated processing device102-2is in communication with a session manager140, which accepts incoming streaming session requests from the mobile computing devices132. According to some embodiments, the session manager140is a module or a subset of computer-executable instructions106-2. According to some embodiments, each of the processing devices102is in communication with a respective session manager and each of the respective computer-executable instructions comprises a session manager. The session manager140accepts incoming requests for a live audio stream session, such as connection request130, or to stop access to the live audio stream(s) for any one of the mobile computing devices132. According to some embodiments, the session manager(s) are Real-Time Streaming Protocol (RTSP) session manager(s). The nominated processing device, processing device102-2in the present example, is further enabled to determine a distribution status of each one of the processing devices102in respect of the transmission of either the plurality of discrete audio data packets126, or transmission copies126-1T to126-pT (referred to collectively as transmission copies126T) generated from the copies of the discrete audio data packets126C (FIG.8B), for receipt by the respective mobile computing device, such as mobile computing device132-1. According to some embodiments, the distribution status indicates at least the ability of the applicable processing device to deliver the plurality of discrete audio data packets126or transmission copies126T to the respective mobile computing device, such as mobile computing device132-1. Based on the determined distribution status, the nominated processing device102-2selects one of the processing devices102to transmit the discrete audio data packets126or transmission copies126T for receipt by the respective mobile computing device, such as mobile computing device132-1. The selected processing device generates the transmission copies126T from the copies126C placed in the respective buffer accessible to the selected processing device. The selected processing device then transmits the discrete audio data packets126or transmission copies126T for receipt by the respective mobile computing device, such as mobile computing device132-1, over the first network108or the second network134. According to some embodiments, the distribution status of a respective processing device may comprise one or more of a load status, uptime, Quality of Service (QoS) and reliability metrics. Uptime, the time since the processing device was booted, may indicate that a processing device has failed if the reboot is unexpected, or it may indicate that a new processing device has been added to the plurality of processing devices. By comparing QoS reports received from at least one of the mobile computing devices132in respect of a particular processing device with QoS reports in respect of the other processing devices, problems with that particular processing device may be identified (e.g., the processing device is transmitting the discrete audio data packets out of order or dropping discrete audio data packets). 
New connections to that processing device may then be stopped and the new connections may be assigned to other processing devices. Gathering statistics about each processing device's reliability over time may also be helpful. For example, statistics about lost packets, dropped connections and reboots can be used to develop metrics about the reliability of a particular processing device. According to some embodiments, a reliability threshold is established and processing devices that fall below the reliability threshold are deemed “unreliable”, and new connections from the mobile computing devices are avoided unless or until the threshold is met or exceeded. According to some embodiments, an algorithm is used to assign new connections from the mobile computing devices to the most “reliable” processing devices, based on the reliability metrics. According to some embodiments, one or more factors are layered to determine the distribution status. For example, according to some embodiments, a subset of the processing devices102that meet a QoS threshold is identified, and then the processing load on each of the processing devices in that subset is determined to identify the ability of those processing devices to transmit the discrete audio data packets or the transmission copies to the requesting mobile computing device. It is understood that additional factors may be considered in the determination of the distribution status of a respective processing device of the plurality of processing devices102. When the distribution status of the nominated processing device indicates that the nominated processing device is available or able to transmit the discrete audio data packets126or transmission copies126T, then the selected processing device is the nominated processing device (for example, as shown inFIGS.2and8B). When the distribution status of the nominated processing device indicates that the nominated processing device is not available or suitable to transmit the discrete audio data packets126, or transmission copies126T, then the selected processing device is another one of the plurality of processing devices. According to some embodiments, a processing device is nominated on the basis of load status, whereby the processing device with the lowest number of connected mobile computing devices is selected as the nominated processing device. According to some embodiments, the described systems and methods provide some flexibility in how the requested transmissions are fulfilled. For example, the number of processing devices may change based on a number of factors, such as the current load on the processing devices, the number of live audio streams that are to be provided and the number of audio channels that are to be utilized for capturing the live audio streams. According to some embodiments, the number of processing devices constituting the plurality of processing devices (e.g., the plurality of processing devices102) is based on one or more of: a number of audio channels allocated to receive the live audio streams in respect of the plurality of processing devices; and the determined distribution status of each one of the plurality of processing devices. According to some embodiments, a total number of audio channels will be provided to capture the live audio streams. The total number of audio channels may be pre-determined and/or may change over the delivery of the live audio streams during the live event. 
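The distribution status and the layered selection described above might be sketched as follows: keep only devices meeting QoS and reliability thresholds, then pick the one with the fewest connected clients. The record, field names and threshold values are hypothetical illustrations, not the patent's actual scheme.

    # Sketch of a per-device distribution status (load, uptime, QoS,
    # reliability, mirroring the factors named above) and layered selection.

    from dataclasses import dataclass

    @dataclass
    class DistributionStatus:
        device_id: str
        connected_clients: int   # load status
        uptime_s: float          # unexpected resets may signal a failure
        qos_score: float         # aggregated from client QoS reports, 0..1
        reliability: float       # from lost packets, drops, reboots, 0..1

    def select_device(statuses, qos_min=0.9, reliability_min=0.8):
        # Layer 1: keep devices that meet the QoS and reliability thresholds.
        candidates = [s for s in statuses
                      if s.qos_score >= qos_min and s.reliability >= reliability_min]
        if not candidates:
            return None           # no device currently suitable
        # Layer 2: lowest load wins; the nominated device may select itself.
        return min(candidates, key=lambda s: s.connected_clients)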
Since each processing device may be associated with a certain number of audio channels, which may vary depending on the particular processing device, a certain number of the processing devices will be required to provide the total number of audio channels. According to some embodiments, processing devices may be added as needed as the total number of audio channels provided increases or if any of the processing devices fails. With that in mind, attention is directed toFIG.3, which depicts example system100after one or more additional processing devices have been added and in which like or similar elements are denoted by like or similar numbers shown inFIGS.1and2. According to some embodiments, the plurality of processing devices further comprises one or more additional processing devices, wherein the number of the one or more additional processing devices is based on one or more of: a number of audio channels allocated to receive the live audio streams; an expected number of the plurality of mobile computing devices; and the determined distribution status of each one of the plurality of processing devices prior to adding the one or more processing devices. As discussed above, a certain number of audio channels may be allocated to provide the live audio streams. The number of allocated audio channels is based on a total number of live streams being captured and distributed. As live audio streams are added (i.e., as the total number of live audio streams increases), the number of audio channels allocated to provide the total live audio streams may also increase. For example, as shown inFIG.3, a fourth live audio stream124-4has been added to the live audio streams124. According to the example depicted inFIG.3, by adding the fourth live audio stream124-4, the number of audio channels required to distribute the live audio streams124has also increased. Additional processing device102-3, associated with audio channels142(referred to individually as audio channel142-1and audio channel142-2), is added to the plurality of processing devices102. Additional processing device102-3is in network communication with processing devices102-1and102-2via the first network108. For example, additional processing device102-3may be configured to communicate with processing devices102-1and102-2over the first network108via communication link110-3. Additional processing device102-3is also configured to communicate with mobile computing devices132over, for example, the second network134via communication link136-3. Communication links110-3and136-3may comprise any suitable wired and/or wireless communication link(s), or suitable combination thereof. According to some embodiments, additional processing device102-3is configured to communicate over the first network108over one or more suitable communication links. Additional processing device102-3may also be configured to transmit and receive data over the first network108or the second network134according to any suitable protocol or protocols, such as wireless data protocols, cellular device protocols, WiFi protocols, WiMax protocols, Real-Time Transport Protocol (RTP) and/or a combination of protocols. Similarly to processing devices102-1and102-2, additional processing device102-3is coupled to at least one memory104, such as memory104-3. Memory104-3is configured similarly to memory104-1and memory104-2and can comprise any suitable memory device. 
Like memory104-1and memory104-2, memory104-3is configured to store computer executable instructions106(as computer executable instructions106-3), for execution by at least one processing device, including processing device102-3. Although processing device102-3and memory104-3are depicted as being co-located on computing device116-3, it is understood that, according to some embodiments, processing device102-3and memory104-3are not located on the same computing device. For example, according to some embodiments, computing device116-3comprises two or more servers in wired and/or wireless communication with each other, and processing device102-3is located at one of the servers while memory104-3is located at another one of the servers. According to some embodiments, one or more processing devices, such as additional processing device102-3, are added to the plurality of processing devices in response to an increase in the number of audio channels being allocated to capture and distribute the live audio streams. According to some embodiments, this addition of processing devices to the plurality of processing devices is performed automatically in real-time. According to some embodiments, one or more processing devices, such as additional processing device102-3, are added to the plurality of processing devices102based on an increase in the number of the plurality of mobile computing devices requesting transmission of one or more of the live audio streams. Based on an initial estimate or expected number of mobile computing devices that will request transmission of the live audio streams, an initial number of the plurality of processing devices to capture and distribute the live audio streams may be provided. As the number of mobile computing devices132increases, the number of processing devices102comprising the plurality of processing devices may be increased to, for example, better spread the computing load over the plurality of processing devices. According to some embodiments, the addition of processing devices is pre-emptive in that the present plurality of processing devices may be capable of fulfilling the increased requests for transmission of the live audio streams, but one or more processing devices are added as a precaution. For example, a pre-determined ratio of processing devices to mobile computing devices may be established and the addition of processing devices may be performed in order to maintain that pre-determined ratio. According to some embodiments, the addition of processing devices to the plurality of processing devices102is based on the distribution status determined prior to adding one or more processing devices. For example, the distribution status may indicate that the load on each of the present plurality of processing devices is at or exceeding a maximum load (which may be pre-determined). Increasing the number of processing devices in the plurality of processing devices may help decrease the computing load on at least one of the present processing devices by directing new incoming traffic to an additional processing device. In addition, according to some embodiments, the addition of processing devices may allow for the re-allocation of the load among the plurality of processing devices overall. According to some embodiments, the described systems and methods may provide for a graceful recovery in the event one or more of the processing devices fails. 
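Before turning to the recovery flow, the scaling considerations described above can be summarized in a small sketch: device count driven by the total audio channel count and by a pre-determined client-to-device ratio. All parameter values and names are assumptions for illustration.

    # Minimal sketch of the scaling heuristics described above.

    import math

    def devices_needed(total_channels: int, channels_per_device: int,
                       expected_clients: int, clients_per_device: int) -> int:
        for_channels = math.ceil(total_channels / channels_per_device)
        for_clients = math.ceil(expected_clients / clients_per_device)
        # Provision enough devices to satisfy both constraints.
        return max(for_channels, for_clients)

    # e.g. 4 channels at 2 per device, 10,000 clients at 2,000 per device -> 5
    assert devices_needed(4, 2, 10_000, 2_000) == 5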
For example, according to some embodiments, when any one of the processing devices102ceases to transmit the plurality of discrete audio data packets126(or the transmission copies126T) to at least one of the mobile computing devices132, the nominated processing device (e.g., processing device102-2) or another nominated processing device of the plurality of processing devices102(which may include the additional processing device102-3), takes certain actions to resume transmission of the plurality of discrete audio data packets126(or the transmission copies126T). The nominated processing device (e.g., processing device102-2) receives a subsequent connection request144from the mobile computing device(s) no longer receiving the requested transmission, such as mobile computing device132-1. The subsequent connection request144comprises a subsequent request for transmission of one of the live audio streams to the mobile computing device(s). The nominated processing device determines a subsequent distribution status of each one of the plurality of processing devices102to transmit either the plurality of discrete audio data packets126(or the transmission copies126T) for receipt by the at least one mobile computing device originating the subsequent connection request144(mobile computing device132-1). Based on the subsequent distribution status, the nominated processing device selects a subsequent processing device, such as additional processing device102-3, to transmit the plurality of discrete audio data packets126(or the copies126T generated from copies126C placed in respective accessible buffer118-3) for receipt by the at least one mobile computing device, mobile computing device132-1in this example. The subsequent distribution status may indicate, for example, that additional processing device102-3has the lowest load of all of the processing devices102and is capable of transmitting the transmission copies126T. For example, processing device102-3may be connected to the lowest number of mobile computing devices132and still be online. The selected processing device, such as processing device102-3, transmits either the plurality of discrete audio data packets126(or the transmission copies126T) for receipt by the at least one mobile computing device over the first network108or the second network134. For example, the additional processing device102-3may transmit transmission copies126T over the second network134for receipt by the mobile computing device132-1. Attention is directed toFIGS.4to6, which together depict a method for capturing and distributing live audio streams of a live event to a plurality of mobile computing devices. In order to assist in the explanation of method200, it will be assumed that method200is performed using example system100, as indicated. However, it is to be understood that example system100and/or method200can be varied, and need not work exactly as discussed herein in conjunction with each other, and that such variations are within the scope of present implementations. It is appreciated that, in some aspects, method200is implemented by example system100by processing devices102. Indeed, method200is one way in which example system100may be configured. It is to be emphasized, however, that method200need not be performed in the exact sequence as shown, unless otherwise indicated; and likewise various blocks may be performed in parallel rather than in sequence; hence the elements of method200are referred to herein as “blocks” rather than “steps”. 
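Continuing the earlier selection sketch, the recovery flow described above (and formalized below as method300) might be outlined as follows. The poll_statuses callable, the function name and the reuse of the hypothetical select_device() from the earlier sketch are all assumptions for illustration.

    # Sketch of graceful recovery: on a subsequent connection request, the
    # nominated device re-runs selection over fresh ("subsequent") statuses
    # and a replacement device resumes streaming from its own buffer.

    def handle_subsequent_request(stream_id, poll_statuses):
        statuses = poll_statuses()            # current status of every device
        device = select_device(statuses)      # hypothetical helper from above
        if device is None:
            raise RuntimeError("no processing device available for recovery")
        # The replacement device generates transmission copies from its own
        # buffer and resumes streaming the requested stream to the client.
        return (device.device_id, stream_id)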
It is also to be understood, however, that method200can be implemented on variations of system100as well. Blocks202to206(FIG.4) are performed at at least one of the plurality of processing devices having the at least one audio channel associated therewith, such as processing device102-1. At block202, at least one of the live audio streams is received via the at least one audio channel associated with the processing device(s). For example, the first live audio stream124-1may be received by processing device102-1via audio channel112-1. At block204, a plurality of discrete audio data packets are generated from the at least one received live audio stream, such as plurality of discrete audio data packets126. At block206, the discrete audio data packets126are transmitted over a network, such as the first network108, for receipt by at least one of the remainder of the plurality of processing devices, such as processing device102-2. As discussed above, according to some embodiments, processing device102-1also transmits the plurality of discrete audio data packets126to itself such that processing devices102-1and102-2receive the discrete audio data packets126. According to some embodiments, processing device102-1receives the discrete audio data packets126from one of the other processing devices that generates the discrete audio data packets126. Blocks208to210(FIG.5) are performed at at least one of the processing devices102. Each of these processing devices generates copies of the discrete audio data packets126and places them in a buffer that is accessible to the respective processing device, such as buffer118-2for processing device102-2and buffer118-3for processing device102-3. As discussed above, according to some embodiments, the processing device(s) participating in the audio capture activity generate copies of the discrete audio data packets prior to transmitting the discrete audio data packets to the remainder processing devices and may not receive the discrete audio data packets from any of the remainder processing devices or themselves. Blocks212to216(FIG.6) are performed at a nominated processing device of the plurality of processing devices. At block212, a connection request is received from a respective mobile computing device of the plurality of mobile computing devices, such as connection request130from mobile computing device132-1. As discussed above, the connection request comprises a request for transmission of one of the live audio streams to the respective mobile computing device. At block214, the distribution status of each one of the plurality of processing devices is determined. As discussed above, the distribution status indicates at least the ability of the applicable processing device to deliver the plurality of discrete audio data packets (or the transmission copies) to the respective mobile computing device. At block216, based on the distribution status, a processing device of the plurality of processing devices is selected to transmit either the plurality of discrete audio data packets or the transmission copies for receipt by the respective mobile computing device. For example, the determined distribution status of the processing device102-2may indicate that it is connected to the lowest number of mobile computing devices132and is capable of transmitting the requested transmission copies126T to the mobile computing device132-1. Blocks218to220are performed at the selected processing device. 
At block218, the transmission copies are generated from the copies of the discrete audio data packets in the buffer accessible to the selected processing device. For example, if processing device102-2is the selected processing device, then transmission copies126T are generated at processing device102-2(seeFIG.8B). At block220, the discrete audio data packets126or the transmission copies126T are transmitted for receipt by the respective mobile computing device over the first network or a second network, such as second network134. According to some embodiments, one or more of blocks202to220are performed in real-time. Attention is directed toFIG.7, which depicts a method300for graceful recovery of transmission when any one of the plurality of processing devices ceases to transmit the plurality of discrete audio data packets or the transmission copies to at least one mobile computing device of the plurality of mobile computing devices. Similarly to method200depicted inFIGS.4to6, in order to assist in the explanation of method300, it will be assumed that method300is performed using example system100, as indicated. However, it is to be understood that example system100and/or method300can be varied, and need not work exactly as discussed herein in conjunction with each other, and that such variations are within the scope of present implementations. Blocks302to306are performed at a nominated processing device, which may be the processing device nominated in method200or another processing device of the plurality of processing devices. At block302, the nominated processing device receives a subsequent connection request from at least one of the mobile computing devices (i.e., the mobile computing device(s) that is no longer receiving the requested transmission, such as mobile computing device132-1). At block304, a subsequent distribution status for each one of the processing devices is determined. The subsequent distribution status indicates at least the ability of the applicable processing device to deliver the plurality of discrete audio data packets, such as discrete audio data packets126, or the transmission copies, such as copies126T, to the at least one mobile computing device(s). At block306, a subsequent processing device of the processing devices is selected to transmit either the plurality of discrete audio data packets or the transmission copies for receipt by the at least one mobile computing device(s). At block308, the selected subsequent processing device transmits one of the plurality of discrete audio data packets or the transmission copies for receipt by the at least one mobile computing device over the first network or a second network. According to some embodiments, one or more of blocks302to308are performed in real-time. Those skilled in the art will appreciate that in some implementations, the functionality of system100or methods200,300can be implemented using pre-programmed hardware or firmware elements (e.g., application specific integrated circuits (ASICs), electrically erasable programmable read-only memories (EEPROMs), etc.), or other related components. In other implementations, the functionality of system100or methods200,300can be achieved using a computing apparatus that has access to a code memory (not shown) which stores computer-readable program code for operation of the computing apparatus (such as computer executable instructions106). 
The computer-readable program code could be stored on a computer readable storage medium which is fixed, tangible and readable directly by these components (e.g., removable diskette, CD-ROM, ROM, fixed disk, USB drive). Furthermore, it is appreciated that the computer-readable program can be stored as a computer program product comprising a computer usable medium. Further, a persistent storage device can comprise the computer readable program code. It is yet further appreciated that the computer-readable program code and/or computer usable medium can comprise a non-transitory computer-readable program code and/or non-transitory computer usable medium. Alternatively, the computer-readable program code could be stored remotely but transmittable to these components via a modem or other interface device connected to a network (including, without limitation, the Internet) over a transmission medium. The transmission medium can be either a non-mobile medium (e.g., optical and/or digital and/or analog communications lines) or a mobile medium (e.g., microwave, infrared, free-space optical or other transmission schemes) or a combination thereof. A collection of examples, including at least some explicitly enumerated as “ECs” (Example Combinations), providing additional description of a variety of example types in accordance with the concepts described herein, is provided below. These examples are not meant to be mutually exclusive, exhaustive, or restrictive; and the invention is not limited to these examples but rather encompasses all possible modifications and variations within the scope of the issued claims and their equivalents. EC. 1. A system for capturing and distributing live audio streams of a live event to a plurality of mobile computing devices, the system comprising: a plurality of processing devices in network communication with each other, at least one of the plurality of processing devices having at least one audio channel associated therewith; at least one memory coupled to the plurality of processing devices, the at least one memory configured to store computer-executable instructions, the computer-executable instructions when executed by the plurality of processing devices causing the plurality of processing devices to: at the at least one of the plurality of processing devices having the at least one audio channel associated therewith: receive at least one of the live audio streams via the at least one audio channel, generate a plurality of discrete audio data packets from the at least one received live audio stream, and transmit, over a first network, the plurality of discrete audio data packets for receipt by at least one of the remainder of the plurality of processing devices; at at least one of the plurality of processing devices: generate copies of the plurality of discrete audio data packets, and place the copies of the plurality of discrete audio data packets in a buffer accessible to the respective processing device of the plurality of processing devices; at a nominated processing device of the plurality of processing devices: receive a connection request from a respective mobile computing device of the plurality of mobile computing devices, the connection request including a request for transmission of one of the live audio streams to the respective mobile computing device, determine a distribution status of each one of the plurality of the processing devices to transmit either the plurality of discrete audio data packets or transmission copies 
generated from the copies for receipt by the respective mobile computing device originating the connection request, the distribution status indicating at least the ability of the applicable processing device to transmit the plurality of discrete audio data packets or the transmission copies to the respective mobile computing device, and based on the distribution status, select a processing device of the plurality of the processing devices to transmit the plurality of discrete audio data packets or the transmission copies for receipt by the respective mobile computing device; and at the selected processing device: generate the transmission copies of the plurality of discrete audio data packets from the copies placed in the buffer accessible by the selected processing device, and transmit the plurality of discrete audio data packets or the transmission copies for receipt by the respective mobile computing device over the first network or a second network. EC. 2. The system of any one of the preceding or subsequent example combinations, such as EC. 1, wherein the computer-executable instructions are configured to further cause the at least one of the plurality of processing devices having the at least one audio channel associated therewith to receive, via the first network, the plurality of discrete audio packets from one or more of the at least one of the plurality of the processing devices having the at least one audio channel associated therewith and/or the remainder of the plurality of processing devices. EC. 3. The system of any one of the preceding or subsequent example combinations, such as EC. 1, wherein the copies of the plurality of discrete audio data packets generated by the at least one of the plurality of the processing devices having the at least one audio channel associated therewith are generated prior to the transmission of the plurality of discrete audio data packets to the remainder of the plurality of processing devices. EC. 4. The system of any one of the preceding or subsequent example combinations, such as any one of EC. 1 to EC. 3, wherein the number of processing devices constituting the plurality of processing devices is based on one or more of: a number of audio channels allocated to receive the live audio streams in respect of the plurality of processing devices; an expected number of the plurality of mobile computing devices; and the determined distribution status of each one of the plurality of processing devices. EC. 5. The system of any one of the preceding or subsequent example combinations, such as any one of EC. 1 to EC. 4, further comprising: one or more additional processing devices, wherein the plurality of processing devices comprises the one or more additional processing devices after they are added, the number of the one or more additional processing devices being based on one or more of: an increase in the number of the plurality of mobile computing devices requesting transmission of one of the live audio streams; an increase in the number of audio channels over the plurality of processing devices being allocated to capture and distribute the live audio streams; and the determined distribution status of each one of the plurality of processing devices prior to adding the one or more additional processing devices. EC. 6. The system of any one of the preceding or subsequent example combinations, such as any one of EC. 1 to EC. 
5, wherein the at least one live audio streams comprise at least a first live audio stream and a second live audio stream different than the first live audio stream. EC. 7. The system of any one of the preceding or subsequent example combinations, such as any one of EC. 1 to EC. 6, wherein: when the distribution status of the nominated processing device indicates that the nominated processing device is able to transmit the plurality of discrete audio data packets or the transmission copies for receipt by the respective mobile computing device, the selected processing device is the nominated processing device, and when the distribution status of the nominated processing device indicates that the nominated processing device is not able to transmit the plurality of discrete audio data packets or the transmission copies for receipt by the respective mobile computing device, the selected processing device is another one of the plurality of processing devices. EC. 8. The system of any one of the preceding or subsequent example combinations, such as any one of EC. 1 to EC. 7, wherein the computer-executable instructions are configured to further cause the plurality of processing devices to: when any one of the plurality of processing devices ceases to transmit the plurality of discrete audio data packets or the transmission copies to at least one mobile computing device of the plurality of mobile computing devices, at the nominated processing device or another nominated device of the plurality of processing devices: receive a subsequent connection request from the at least one mobile computing device, the subsequent connection request including a subsequent request for transmission of one of the live audio streams to the at least one mobile computing device, determine a subsequent distribution status of each one of the plurality of the processing devices to transmit either the plurality of discrete audio data packets or the transmission copies for receipt by the at least one mobile computing device originating the subsequent connection request, the subsequent distribution status indicating at least the ability of the applicable processing device to deliver the plurality of discrete audio data packets or the transmission copies to the at least one mobile computing device, based on the subsequent distribution status, select a subsequent processing device of the plurality of processing devices to transmit either the plurality of discrete audio data packets or the transmission copies for receipt by the at least one mobile computing device; at the selected subsequent processing device: transmit the plurality of discrete audio data packets or the transmission copies for receipt by the at least one mobile computing device over the first network or a second network. EC. 9. The system of any one of the preceding or subsequent example combinations, such as any one of EC. 1 to EC. 8, wherein the plurality of discrete audio data packets are transmitted for receipt by the remainder of the plurality of processing devices or received by the at least one of the plurality of processing devices having the at least one audio channel associated therewith by multicast transmission or unicast transmission. EC. 10. The system of any one of the preceding or subsequent example combinations, such as any one of EC. 1 to EC. 
9, wherein the plurality of discrete audio data packets are transmitted for receipt by the remainder of the plurality of processing devices or received by the at least one of the plurality of processing devices having the at least one audio channel associated therewith in accordance with the User Datagram Protocol (UDP). EC. 11. A non-transitory computer-readable medium for capturing and distributing live audio streams of a live event to a plurality of mobile computing devices, the computer-readable medium comprising computer-executable instructions for: at at least one of a plurality of processing devices in network communication with each other, the at least one of the plurality of processing devices having at least one audio channel associated therewith: at the at least one of the plurality of processing devices having the at least one audio channel associated therewith: receiving at least one of the live audio streams via the at least one audio channel, generating a plurality of discrete audio data packets from the at least one received live audio stream, and transmitting, over a first network, the plurality of discrete audio data packets for receipt by at least one of the remainder of the plurality of processing devices; at at least one of the plurality of processing devices: generating copies of the plurality of discrete audio data packets, and placing the copies of the plurality of discrete audio data packets in a buffer accessible to the respective processing device of the plurality of processing devices; at a nominated processing device of the plurality of processing devices: receiving a connection request from a respective mobile computing device of the plurality of mobile computing devices, the connection request including a request for transmission of one of the live audio streams to the respective mobile computing device, determining a distribution status of each one of the plurality of the processing devices to transmit either the plurality of discrete audio data packets or transmission copies generated from the copies for receipt by the respective mobile computing device originating the connection request, the distribution status indicating at least the ability of the applicable processing device to transmit the plurality of discrete audio data packets or the transmission copies to the respective mobile computing device, and based on the distribution status, selecting a processing device of the plurality of the processing devices to transmit the plurality of discrete audio data packets or the transmission copies for receipt by the respective mobile computing device; at the selected processing device: generating the transmission copies of the plurality of discrete audio data packets from the copies placed in the buffer accessible by the selected processing device, and transmitting the plurality of discrete audio data packets or the transmission copies for receipt by the respective mobile computing device over the first network or a second network. EC. 12. The non-transitory computer-readable medium of any one of the preceding or subsequent example combinations, such as EC. 
11, further comprising computer-executable instructions for: receiving, via the first network, the plurality of discrete audio packets at the at least one of the plurality of processing devices having the at least one audio channel associated therewith from one or more of the at least one of the plurality of the processing devices having the at least one audio channel associated therewith and/or the remainder of the plurality of processing devices. EC. 13. The non-transitory computer-readable medium of any one of the preceding or subsequent example combinations, such as EC. 11, wherein the copies of the plurality of discrete audio data packets generated by the at least one of the plurality of the processing devices having the at least one audio channel associated therewith are generated prior to the transmission of the plurality of discrete audio data packets to the remainder of the plurality of processing devices. EC. 14. The non-transitory computer-readable medium of any one of the preceding or subsequent example combinations, such as any one of EC. 11 to EC. 13, further comprising computer-executable instructions for: when any one of the plurality of processing devices ceases to transmit the plurality of discrete audio data packets or the transmission copies to at least one mobile computing device of the plurality of mobile computing devices, at the nominated processing device or another nominated device of the plurality of processing devices: receiving a subsequent connection request from the at least one mobile computing device, the subsequent connection request including a subsequent request for transmission of one of the live audio streams to the at least one mobile computing device, determining a subsequent distribution status of each one of the plurality of the processing devices to transmit either the plurality of discrete audio data packets or the transmission copies for receipt by the at least one mobile computing device originating the subsequent connection request, the subsequent distribution status indicating at least the ability of the applicable processing device to deliver the plurality of discrete audio data packets or the transmission copies to the at least one mobile computing device, based on the subsequent distribution status, selecting a subsequent processing device of the plurality of processing devices to transmit either the plurality of discrete audio data packets or the transmission copies for receipt by the at least one mobile computing device; and at the selected subsequent processing device: transmitting one of the plurality of discrete audio data packets or the transmission copies for receipt by the at least one mobile computing device over the first network or a second network. EC. 15. 
A method for capturing and distributing live audio streams of a live event to a plurality of mobile computing devices, the method comprising: at at least one of a plurality of processing devices in network communication with each other, the at least one of the plurality of processing devices having at least one audio channel associated therewith: at the at least one of the plurality of processing devices having the at least one audio channel associated therewith: receiving at least one of the live audio streams via the at least one audio channel, generating a plurality of discrete audio data packets from the at least one received live audio stream, and transmitting, over a first network, the plurality of discrete audio data packets for receipt by at least one of the remainder of the plurality of processing devices; at at least one of the plurality of processing devices: generating copies of the plurality of discrete audio data packets, and placing the copies of the plurality of discrete audio data packets in a buffer accessible to the respective processing device of the plurality of processing devices; at a nominated processing device of the plurality of processing devices: receiving a connection request from a respective mobile computing device of the plurality of mobile computing devices, the connection request including a request for transmission of one of the live audio streams to the respective mobile computing device, determining a distribution status of each one of the plurality of the processing devices to transmit either the plurality of discrete audio data packets or transmission copies generated from the copies for receipt by the respective mobile computing device originating the connection request, the distribution status indicating at least the ability of the applicable processing device to transmit the plurality of discrete audio data packets or the transmission copies to the respective mobile computing device, and based on the distribution status, selecting a processing device of the plurality of the processing devices to transmit the plurality of discrete audio data packets or the transmission copies for receipt by the respective mobile computing device; and at the selected processing device: generating the transmission copies of the plurality of discrete audio data packets from the copies placed in the respective buffer, and transmitting the plurality of discrete audio data packets or the transmission copies for receipt by the respective mobile computing device over the first network or a second network. EC. 16. The method of any one of the preceding or subsequent example combinations, such as EC. 15, further comprising receiving, via the first network, the plurality of discrete audio data packets at the at least one of the plurality of processing devices having the at least one audio channel associated therewith from one or more of the at least one of the plurality of the processing devices having the at least one audio channel associated therewith and/or the remainder of the plurality of processing devices. EC. 17. The method of any one of the preceding or subsequent example combinations, such as EC. 15, wherein the copies of the plurality of discrete audio data packets generated by the at least one of the plurality of the processing devices having the at least one audio channel associated therewith are generated prior to the transmission of the plurality of discrete audio data packets to the remainder of the plurality of processing devices. EC. 18.
The method of any one of the preceding or subsequent example combinations, such as any one of EC. 15 to EC. 17, wherein the number of processing devices constituting the plurality of processing devices is based on one or more of: a number of audio channels allocated to receive the live audio streams in respect of the plurality of processing devices; an expected number of the plurality of mobile computing devices; and the determined distribution status of each one of the plurality of processing devices. EC. 19. The method of any one of the preceding or subsequent example combinations, such as any one of EC. 15 to EC. 18, further comprising: adding one or more additional processing devices to the plurality of processing devices based on one or more of: an increase in the number of the plurality of mobile computing devices requesting transmission of one of the live audio streams; an increase in the number of audio channels over the plurality of processing devices being allocated to capture and distribute the live audio streams; and the determined distribution status of each one of the plurality of processing devices prior to adding the one or more additional processing devices. EC. 20. The method of any one of the preceding or subsequent example combinations, such as any one of EC. 15 to EC. 19, wherein the live audio streams comprise at least a first live audio stream and a second live audio stream different from the first live audio stream. EC. 21. The method of any one of the preceding or subsequent example combinations, such as any one of EC. 15 to EC. 20, wherein: when the distribution status of the nominated processing device indicates that the nominated processing device is able to transmit the plurality of discrete audio data packets or the transmission copies for receipt by the respective mobile computing device, the selected processing device is the nominated processing device, and when the distribution status of the nominated processing device indicates that the nominated processing device is not able to transmit the plurality of discrete audio data packets or the transmission copies for receipt by the respective mobile computing device, the selected processing device is another one of the plurality of processing devices. EC. 22. The method of any one of the preceding or subsequent example combinations, such as any one of EC. 15 to EC.
21, further comprising: when any one of the plurality of processing devices ceases to transmit the plurality of discrete audio data packets or the transmission copies to at least one mobile computing device of the plurality of mobile computing devices, at the nominated processing device or another nominated device of the plurality of processing devices: receiving a subsequent connection request from the at least one mobile computing device, the subsequent connection request including a subsequent request for transmission of one of the live audio streams to the at least one mobile computing device, determining a subsequent distribution status of each one of the plurality of the processing devices to transmit either the plurality of discrete audio data packets or the transmission copies for receipt by the at least one mobile computing device originating the subsequent connection request, the subsequent distribution status indicating at least the ability of the applicable processing device to deliver the plurality of discrete audio data packets or the transmission copies to the at least one mobile computing device, based on the subsequent distribution status, selecting a subsequent processing device of the plurality of processing devices to transmit either the plurality of discrete audio data packets or the transmission copies for receipt by the at least one mobile computing device; and at the selected subsequent processing device: transmitting one of the plurality of discrete audio data packets or the transmission copies for receipt by the at least one mobile computing device over the first network or a second network. EC. 23. The method of any one of the preceding or subsequent example combinations, such as any one of EC. 15 to EC. 22, wherein the plurality of discrete audio data packets or the copies therefrom are transmitted for receipt by the at least one of the remainder of the plurality of processing devices or received by the at least one of the plurality of processing devices having the at least one audio channel associated therewith by multicast transmission or unicast transmission. EC. 24. The method of any one of the preceding or subsequent example combinations, such as any one of EC. 15 to EC. 23, wherein the plurality of discrete audio data packets are transmitted for receipt by the at least one of the remainder of the plurality of processing devices or received by the at least one of the plurality of processing devices having the at least one audio channel associated therewith in accordance with the User Datagram Protocol (UDP). EC. 25. The method of any one of the preceding or subsequent example combinations, such as any one of EC. 15 to EC. 24, wherein the receiving, generating, transmitting and selecting are performed in real-time. EC. 26. The method of any one of the preceding or subsequent example combinations, such as any one of EC. 15 to EC. 25, wherein the distribution status of a respective processing device of the plurality of processing devices comprises one or more of a load status, server uptime, quality of service and reliability metrics of the respective processing device. Persons skilled in the art will appreciate that there are yet more alternative implementations and modifications possible, and that the above examples are only illustrations of one or more implementations. The scope, therefore, is only to be limited by the claims appended hereto.
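As an illustration only of the selection logic recited in the example combinations above, the following sketch shows one way a nominated processing device might choose a distributor based on reported distribution statuses: prefer the nominated device when it can transmit, otherwise fall back to another able device. All names, fields, and the least-loaded tie-breaking rule are assumptions for illustration, not part of the claimed subject matter.

```python
# Hypothetical sketch of distribution-status-based selection; field names
# (can_transmit, load, uptime_s) are invented examples of the "distribution
# status" metrics mentioned above (load status, server uptime, etc.).
from dataclasses import dataclass
from typing import Optional


@dataclass
class DistributionStatus:
    device_id: str
    can_transmit: bool  # ability to serve the requesting mobile device
    load: float         # e.g., fraction of streaming capacity in use
    uptime_s: int       # server uptime, one possible reliability metric


def select_processing_device(statuses: list[DistributionStatus],
                             nominated_id: str) -> Optional[str]:
    """Prefer the nominated device when able; otherwise pick the
    least-loaded device that reports the ability to transmit."""
    by_id = {s.device_id: s for s in statuses}
    nominated = by_id.get(nominated_id)
    if nominated is not None and nominated.can_transmit:
        return nominated_id
    candidates = [s for s in statuses if s.can_transmit]
    if not candidates:
        return None  # no device can currently serve the connection request
    return min(candidates, key=lambda s: s.load).device_id
```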
DESCRIPTION OF EMBODIMENTS Embodiments of the disclosure will now be described with reference to accompanying figures. The terminology used in the description presented herein is not intended to be interpreted in any limited or restrictive manner, simply because it is being utilized in conjunction with a detailed description of certain specific embodiments of the disclosure. Furthermore, embodiments of the disclosure may include several novel features, no single one of which is solely responsible for its desirable attributes or which is essential to practicing the embodiments of the disclosure herein described.
Overview
A cookie data aggregation system, as discussed herein, comprises a computer system, such as a server, that implements methods of aggregating a user's web browsing data in a networked environment. The cookie data aggregation system can aggregate the cookies of the user, cookie data from various websites, personal data, and/or other data to create a "supercookie." The supercookie can then be used by websites to tailor a customized website experience for the user. Because of the "Same Origin Policy", a website may not have access to cookies from different websites about a user. However, the supercookie can provide this data to a website, thus allowing a better tailored, customized website experience for the user. For example, the user may be able to authorize use of his supercookie (and/or portions of his supercookie) by particular websites in order to provide potentially valuable information to the websites that may not otherwise be available to the websites (and/or may require additional effort to obtain by the websites). As a user browses the web and visits various websites, each website may generate a cookie and store it on the user's computing device, possibly as part of the website data. The cookie may allow the website to track a user's activity at the website. By tracking this activity, a website can better customize its webpages for a user. For example, a website can determine whether or not a user is "authenticated" or "logged in", thus informing the website whether or not to provide webpages with sensitive information to the user. Additionally, the website can determine what advertisements to display to the user while he browses the website based on the cookie. The cookie data aggregation system may access these cookies on one or more of the user's computing devices. The system may install cookie tracking software on a user device for this purpose. The software may then send cookies (or information regarding the cookies) from the user to a cookie data aggregation system (or the cookie data aggregation may occur on one of the user devices). Alternatively, a user could upload his cookies to the cookie data aggregation system. The cookie data aggregation system may also retrieve additional cookie data from the websites that generated the cookies retrieved by the system. For example, information in cookies may be coded by the website that placed the cookie, such that the coded data is useful only to the website that placed the cookie and has the key to decode the data. Thus, the cookie data aggregation system may obtain information associated with coded data in cookies from the cookie originator and/or some other entity that is configured to decode the cookie data. Further, the cookie data aggregation system may also retrieve personal data from the user via a user interface.
The cookie data aggregation system may then combine the retrieved user cookies, the retrieved website data, and the personal data about the user to create a supercookie. The supercookie can provide several benefits. First, the supercookie can associate the user with cookies that were previously made anonymous. Second, the supercookie can be used to create a user profile. Third, the supercookie may be used by various websites to better customize and tailor a website experience for the user. Fourth, the supercookie may be used to associate a user profile with access information from multiple devices. Additionally, the multiple cookies may also be useful for tailoring a website experience for a user. The supercookie may also contain a user ID. This may allow anonymous cookies to now be associated with the user. In some embodiments, the supercookie data structure may include additional items. In other embodiments, the supercookie data structure may not include all of the above-mentioned items. The cookie data aggregation system may push the generated supercookie back to the user. Now, when the user visits a website, the website may retrieve the supercookie from the user to better customize the website experience for the user. Alternatively, the generated supercookie may be stored at the cookie data aggregation system. In this case, a website may access the supercookie at the cookie data aggregation system to better customize a website experience for the user. The cookie data aggregation system and/or the website may request authentication from the user to store and/or access the supercookie for the user. The cookie data aggregation system may provide a user interface to allow the user to view, modify, control usage rights, and perform other management functions associated with his supercookie. For example, the user may be able to view the personal information about him stored in the supercookie. This information may include the user's name, age, gender, music preferences, interests, hobbies, and/or other relevant personal information. The user may also be able to configure what type of information stored in the supercookie can be made available to various websites. For example, a supercookie may store personal, financial, and online retail website browsing information about the user. The user may be able to configure which of these types of information is available to different websites. These websites could include social media websites, financial websites, and online retail websites. For example, a user may want his financial information to be available to financial websites. However, the user may configure his supercookie such that his financial information cannot be seen by a social media website. Additionally, a user may configure his supercookie to make his personal information available to all websites.
System and Network Architecture
FIG. 1 shows one embodiment of a cookie data aggregation system 100 in a networked environment. In this embodiment, the cookie data aggregation system 100 may communicate with a user 110 over the network 120. User 110 discussed herein may refer to a computing device and/or an operator of the computing device. The actions performed by a user 110 may be performed automatically by a computing device without input from an operator, or may be performed based on input from an operator. The cookie data aggregation system 100 may also communicate with one or more websites 140 (including 140A, 140B, and 140N) each with associated cookie data 135 (including 135A, 135B, and 135N).
In this embodiment, the cookie data aggregation system 100 also has access to personal data 160 of the user 110, which the system 100 may use to create or supplement a supercookie 165 for the user 110. The business entity 170 may engage with the cookie data aggregation system 100 and can provide data and/or access the supercookies 165. In some embodiments, the cookie data aggregation system 100 may communicate with more devices and/or use more data to create supercookies 165 than what is shown in FIG. 1. In some embodiments, the cookie data aggregation system 100 does not communicate with all of the devices shown in FIG. 1. In some embodiments, the cookie data aggregation system 100 does not use all of the data shown in FIG. 1 to create supercookies 165. In FIG. 1, the cookie data aggregation system 100 may be a server. Alternatively, the cookie data aggregation system 100 may be multiple computing devices working together to create a cookie data aggregation system 100.
Supercookie Generation
As shown in action 1 of FIG. 1, the cookie data aggregation system 100 may communicate with the user 110, such as to obtain authorization to access cookies on the user's computing device(s) and to create the supercookie 165. In some embodiments, the cookie data aggregation system may provide software to the user 110. This software may then track the cookies of the user 110 and transmit them to the cookie data aggregation system 100. In some embodiments, the cookie data aggregation system 100 downloads the user's cookies from the user 110. In some embodiments, the user 110 uploads his cookies to the cookie data aggregation system 100. In some embodiments, the user's cookies may be stored on a user device. In some embodiments, the user device may be a phone, PDA, laptop, desktop, tablet, and/or some other computing device. In some embodiments, the cookie data aggregation system 100 retrieves the user's cookies from the user's computing device. In the embodiment of FIG. 1, the cookie data aggregation system 100 also may communicate with one or more websites to obtain cookie data, as shown in action 2. For example, the cookie data aggregation system 100 may communicate with website 140A to obtain cookie data 135 associated with the user 110. The cookies retrieved from user 110 may indicate that the cookies were generated by particular websites 140. As a result, the cookie data aggregation system 100 may communicate with corresponding websites 140 to retrieve further cookie data 135. For example, the system 100 may ask the website 140A to decode information stored in a cookie that the website 140A placed on one of the user 110 devices. In this way the cookie data aggregation system 100 can determine what the cookie actually means. The cookie data 135 may include information not available in the cookie retrieved from the user 110. For example, the cookie data 135 may include information about the user's activity and/or interactions with website 140, such as the number of times the user 110 visited the website 140, the amount of time spent on the website 140 during each visit and cumulatively, the trends and/or preferences noted about the user 110 when visiting website 140, the trends and/or preferences noted about visitors of website 140, website browsing or activity information or analysis related to website 140, and/or other useful information. In some embodiments, the cookie data aggregation system 100 combines the personal data 160 with the retrieved cookies from the user 110 and the retrieved cookie data, for example cookie data 135A, 135B, and 135N, as part of the supercookie 165 for user 110.
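A minimal sketch of this combining step follows, assuming simple dictionary-shaped inputs; the field names mirror the description but are otherwise invented for illustration.

```python
# Hypothetical sketch: merge per-user cookies, decoded website cookie data,
# and personal data into one supercookie record, per the description above.
def build_supercookie(user_id: str,
                      user_cookies: list[dict],
                      website_cookie_data: dict[str, dict],
                      personal_data: dict) -> dict:
    enriched = []
    for cookie in user_cookies:
        site = cookie.get("website")
        enriched.append({
            **cookie,
            # Attach whatever the originating website disclosed about
            # this cookie (e.g., visit counts, decoded field meanings).
            "website_data": website_cookie_data.get(site, {}),
        })
    return {
        "user_id": user_id,
        "personal_data": personal_data,
        "cookies": enriched,
    }
```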
The personal data 160 may include information about the user 110 that is not available in the retrieved cookies from the user 110 or the retrieved cookie data from the one or more websites. The personal data 160 may include user ID, date of birth, gender, occupation, education, and/or other relevant personal data. In some embodiments, the cookie data aggregation system 100 retrieves the personal data 160 from the user 110. In some embodiments, the user 110 inputs the personal data 160 to the cookie data aggregation system 100. In some embodiments, the cookie data aggregation system 100 stores the personal data 160 at the cookie data aggregation system 100. In some embodiments, the cookie data aggregation system 100 accesses the personal data 160 from a remote device. The cookie data aggregation system 100 may communicate with the business entity 170, such as to obtain data to help generate a supercookie 165 about a user 110. The data obtained may include personal information about the user 110, web browsing information about the user 110, web browsing information about various websites, and/or web browsing information in general. In the embodiment of FIG. 1, the cookie data aggregation system 100 may generate supercookie 165 based on retrieved cookies from the user 110, retrieved cookie data 135 from one or more websites 140, personal data 160, and/or data obtained from business entity 170. In some embodiments, the cookie data aggregation system 100 may use more or less data than what is shown in FIG. 1 to generate the supercookie 165. After the cookie data aggregation system 100 creates the supercookie 165, the cookie data aggregation system 100 may communicate with the user 110 to push the supercookie to the user 110. In some embodiments, the cookie data aggregation system 100 pushes the supercookie 165 to the user's device. In some embodiments, the user device may be a phone, PDA, laptop, desktop, tablet, and/or some other computing device. In some embodiments, the cookie data aggregation system 100 may store the supercookie 165 of the user 110 at the cookie data aggregation system 100. In some embodiments, the cookie data aggregation system 100 may request authorization from the user 110 to store the supercookie 165 of the user 110 at the cookie data aggregation system 100.
Supercookie Distribution
Once the cookie data aggregation system 100 has generated the supercookie 165, the system 100 may allow the user 110 to access the supercookie 165, either by pushing the supercookie 165 to the user 110, allowing the user to pull the supercookie 165, or allowing the user 110 to access the supercookie via one or more browsers or applications. In some embodiments, the cookie data aggregation system 100 stores the supercookie 165 at the cookie data aggregation system 100. In some embodiments, the cookie data aggregation system 100 provides the supercookie 165 (or portions of the supercookie) to one or more websites 140 and/or the business entity 170 in response to requests for information regarding the user 110. In some embodiments, the cookie data aggregation system 100 requests authorization from user 110 before providing supercookie 165 to one or more websites and/or a business entity. Alternatively, the user 110 may have pre-authorized certain websites 140, groups of websites, types of websites, etc. to access certain portions of supercookie 165. In some embodiments, the cookie data aggregation system 100 notifies the user 110 when it provides the supercookie 165 to one or more websites and/or the business entity 170 and/or stores historical data regarding use of the user's supercookie.
In some embodiments, the cookie data aggregation system may communicate with one or more websites to provide them with access to the supercookie 165 of the user 110. In some embodiments, the one or more websites may request the cookie data aggregation system 100 to provide access to the supercookie 165. The cookie data aggregation system 100 may also communicate with the business entity 170 to provide access to a supercookie 165 to the business entity 170. In some embodiments, the cookie data aggregation system requests authorization from the user 110 before granting business entity 170 access to the supercookie 165 of the user 110. Business entity 170 may refer to a company seeking information about users 110, for example. Instead of a website seeking a user's cookie data, a business entity 170 may be a marketing company interested in information about user 110 to identify a future sales opportunity, for example. A business entity 170 may request data from the cookie data aggregation system 100 in other manners than via a website request made to the business entity 170. For example, the business entity 170 may request data of the user 110 via a user interface provided by the cookie data aggregation system 100 that enables authorized entities to access profile data of users that is compiled from cookies. In some embodiments, the business entity 170 pays a fee for the service provided by the cookie data aggregation system 100 to provide access to the supercookie 165. In some embodiments, different amounts of fees are charged depending on the different types of access provided by the cookie data aggregation system 100 to the business entity 170. In some embodiments, no fee is charged by the cookie data aggregation system 100 for providing access to a business entity 170 to a supercookie. In some embodiments, the cookie data aggregation system 100 may obtain authorization from user 110 before granting a business entity 170 access to his supercookie. FIGS. 2A, 2B, and 2C illustrate three methods by which a supercookie may be provided to a website by the cookie data aggregation system 100. In some embodiments of FIGS. 2A, 2B, and 2C, supercookie data is distributed to websites and business entities using fewer or additional actions than shown in the figures. In some embodiments, the cookie data aggregation system 100 may use more or less data than what is shown in FIGS. 2A, 2B, and 2C, and may communicate with additional or fewer resources than are shown in FIGS. 2A, 2B, and 2C. These figures are discussed with reference to sharing with a particular website (140B), but supercookie data may be shared with other websites or business entities in a similar manner. In other embodiments, supercookie data may be provided to websites or business entities through different methods. FIG. 2A depicts an example process for providing supercookie data to a website directly from the cookie data aggregation system. In this embodiment, the supercookie 165 is stored on the cookie data aggregation system 100. In action 1 of FIG. 2A, the user 110 communicates with a website 140B. During the communication, the website 140B requests supercookie data regarding the user. In response, the user 110 provides the website 140B with a link to (and/or other identifier of) the user's supercookie 165 stored on the cookie data aggregation system 100. In some embodiments, the user 110 may provide a link to his supercookie without a request from the website 140B. In action 2, the website 140B follows the link to access the user's supercookie 165 on the cookie data aggregation system 100.
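To make the FIG. 2A flow concrete, here is a hedged sketch of an aggregation-system endpoint that serves supercookie data when a website follows a user-supplied link. Flask, the route, the token scheme, and the section-level scoping (discussed in the next paragraph) are all illustrative assumptions rather than the disclosure's prescribed implementation.

```python
# Hypothetical sketch of the FIG. 2A link-following step: the website
# presents a user-issued token, and only the sections the user authorized
# for that token are returned.
from flask import Flask, abort, jsonify, request

app = Flask(__name__)

SUPERCOOKIES = {}   # user_id -> supercookie dict (in-memory for the sketch)
TOKEN_SCOPES = {}   # access token -> (user_id, set of permitted sections)


@app.route("/supercookie")
def get_supercookie():
    token = request.args.get("token", "")
    grant = TOKEN_SCOPES.get(token)
    if grant is None:
        abort(403)  # no authorization from the user for this requester
    user_id, sections = grant
    full = SUPERCOOKIES.get(user_id, {})
    # Return only the sections the user chose to expose via this link,
    # e.g. {"personal"} but not {"financial"}.
    return jsonify({k: v for k, v in full.items() if k in sections})
```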
In some embodiments, the user 110 can determine how much supercookie data the website 140B can access. For example, the user 110 may provide a link which only provides access to the user's personal data, but does not provide access to the user's financial data. In other embodiments, the user 110 may provide only one link regardless of what information the user wants to share, but the cookie data aggregation system 100 only provides limited information to certain types of websites. For example, if the cookie data aggregation system 100 recognizes the website 140 as an online shopping website, then it will not provide financial information to the website. In other embodiments, providing the link to the website 140 provides the website with access to the entire supercookie 165. In some embodiments of the process shown in FIG. 2A, the website 140B does not request supercookie access from user 110. Instead, the website 140B requests supercookie data directly from the cookie data aggregation system 100 when the user accesses the website. In such embodiments, cookie data aggregation system 100 may request authorization from user 110 before providing website 140B with access to supercookie 165. In some embodiments, the cookie data aggregation system may instead provide the supercookie, or part thereof, based on prior authorization from the user and/or default rules for providing supercookie data regarding the user. FIG. 2B depicts another example process for providing supercookie data to a website from the cookie data aggregation system by first passing the supercookie data through the user 110. In this embodiment, the supercookie 165 is stored by the cookie data aggregation system 100. In action 1 of FIG. 2B, the user 110 accesses the website 140B, and the website requests a supercookie from the user. The user 110 responds by requesting his supercookie data from the cookie data aggregation system 100 in action 2, such as by the user's browser (or other software on the user's computing device) automatically requesting the supercookie data from the cookie data aggregation system 100 without user input. The cookie data aggregation system 100 then returns the supercookie 165 to the user 110. In action 3, the user 110 provides the supercookie (or portions thereof) to the website 140B. In some embodiments, the website 140B may request data from the user 110 other than the supercookie. The user may then request his supercookie and filter the supercookie data to respond to the website's request, or indicate to the cookie data aggregation system 100 which portions of the supercookie data should be returned to the user 110 in action 2. In some embodiments, the user 110 may only wish to provide some information to the website 140B. The user may do this by only requesting a limited amount of data from the cookie data aggregation system, or by only passing some of the information in his supercookie 165 along to the website 140B, for example. FIG. 2C depicts another example process for providing supercookie data to a website 140B from the user. In this embodiment, the supercookie 165 is stored by the user 110. In this embodiment, the cookie data aggregation system aggregates data into a supercookie and provides the supercookie to the user 110, but does not communicate directly with the website 140 when the website 140 requests access to the user's supercookie 165. In this embodiment, the supercookie is transmitted from the user 110 directly to the website 140B.
The user 110 may transmit the supercookie 165 to the website 140B when the user 110 visits website 140B, either with an initial request for the website 140B and/or in response to a request from the website 140B for access to the user's supercookie. In some embodiments, the website 140B requests other cookies or personal data from the user instead of, or in addition to, requesting access to the supercookie. In action 1 of FIG. 2C, the user 110 provides access to the supercookie 165 (or portions thereof) to the website 140. In some embodiments, the user may determine how much data from supercookie 165 to provide to website 140. In some embodiments, only one of the processes illustrated in FIGS. 2A, 2B, and 2C is used by the cookie data aggregation system 100 to distribute supercookie data. In some embodiments, two or more of the processes shown in FIGS. 2A, 2B, and 2C may be implemented by the user 110 and cookie data aggregation system 100. This may be done for a variety of reasons, such as the type of website being accessed or the type of device from which the user is accessing a website. For example, when the user is accessing a mobile website from a smartphone, the cookie data aggregation system 100 may provide supercookie data using the process shown in FIG. 2A, which may require fewer computing resources of the user's computing device and fewer communications from the user 110. This may advantageously conserve system resources for a smartphone, for example. However, if the user is accessing a website from a home desktop, the cookie data aggregation system may use the process shown in FIG. 2C, which may use more computing resources (e.g., memory, processor, etc.), but allows the user 110 more direct control of supercookie data that is provided to the website 140B. It will be understood that the preceding are only examples, and that any process may be used by any user device to provide the supercookie 165 to any website or computing device. In some embodiments, factors other than device type and website type determine which process is used to provide supercookie data to a requesting entity. For example, the user 110 or cookie data aggregation system 100 may determine which process to use based on the type and amount of data requested by a website 140.
Cookie Data Aggregation System
FIG. 3 is a block diagram illustrating an embodiment of the cookie data aggregation system 100. In this embodiment, the cookie data aggregation system 100 may include a user cookie retrieval module 310, a server cookie data retrieval module 320, a data matching and aggregation module 330, a supercookie management module 340, a supercookie access module 350, a CPU 360, a storage device 370, and a memory 380. In some embodiments, the cookie data aggregation system 100 comprises fewer or additional modules than shown in FIG. 3. In the embodiment of FIG. 3, the user cookie retrieval module 310 is configured to access and/or retrieve cookies from the user 110 over the network 120. The cookies retrieved by module 310 may then be used by data matching and aggregation module 330. The cookies for the user 110 may be stored at a user computing device, such as a laptop, desktop, PDA, phone, tablet, or some other computing device that can browse the Internet. In some embodiments, the user cookie retrieval module 310 obtains authorization from the user 110 before retrieving the cookies. In some embodiments, the user cookie retrieval module 310 downloads cookies from the user 110 to the cookie data aggregation system 100.
In some embodiments, the user cookie retrieval module 310 sends a prompt to a user 110 to upload cookies to the cookie data aggregation system 100. In some embodiments, the user cookie retrieval module 310 receives uploaded cookies from user 110. In some embodiments, the cookie data aggregation system 100 provides the user 110 with cookie tracking software to install on his user device. The cookie tracking software may then communicate with user cookie retrieval module 310 to transmit cookies for the user 110 to the cookie data aggregation system 100. In some embodiments, the user cookie retrieval module 310 may be used to handle other issues related to retrieving cookies from a user 110. In this embodiment, the server cookie data retrieval module 320 is configured to retrieve cookie data from a website, such as the website that caused the cookie to be stored on the user's computing device. The cookie data retrieved from website 140 may provide information indicating the meaning of encoded information located in a cookie stored on one or more of the user devices. The cookie data retrieved by module 320 may then be used by data matching and aggregation module 330. In some embodiments, the server cookie data retrieval module 320 may be used to handle other issues related to retrieving cookie data from the server. The data matching and aggregation module 330 may be configured to consolidate data and aggregate various cookies, cookie data, and personal data to generate a supercookie. In some embodiments, the data matching and aggregation module 330 may be used to handle other issues related to data matching and aggregation. The supercookie management module 340 may be configured to provide the user 110 with a user interface to manage his supercookie 165. The supercookie management module 340 may be used when a user 110 wants to access the cookie data aggregation system 100 to manage his cookie data or generate the supercookie 165. For example, the user 110 may want to update his cookie profile, update the information about the websites he visited, configure settings for what type of supercookie information is available to various websites, and/or delete the data from the cookie. In some embodiments, the supercookie management module 340 may be used to handle other issues related to supercookie management. Supercookie access module 350 may be used by the cookie data aggregation system 100 to provide access to supercookies to various parties, such as a business entity 170, websites 140, and/or user 110. As discussed above with reference to FIGS. 2A-2C, supercookie data may be delivered in various manners, such as pushing the supercookie to the user 110, pushing a supercookie link to the user 110, and/or storing the supercookie data at the aggregation system 100. In some embodiments, the module 350 grants access to one or more websites to the supercookie 165, which may be stored at the cookie data aggregation system 100 and/or the user 110. In some embodiments, the module 350 requests authorization from user 110 prior to granting access to one or more websites to the supercookie 165 for user 110. In some embodiments, the module 350 pushes the supercookie 165 to one or more websites requesting the supercookie 165. In some embodiments, the module 350 requests authorization from user 110 prior to pushing the supercookie 165 to the one or more websites. In some embodiments, the supercookie access module 350 may charge fees to business entities 170, websites, or other entities requesting access to supercookie data.
In some embodiments, the supercookie access module 350 may be used to handle other issues related to supercookie access. In one embodiment, the cookie data aggregation system 100 includes one or more central processing units ("CPU") 360, which may each include a conventional or proprietary microprocessor. The modules discussed herein may be stored on any type of non-transitory computer storage device 370, such as hard drives, solid state memory, optical disc, and/or the like. The cookie data aggregation system 100 further includes one or more memories 380, such as random access memory ("RAM") for temporary storage of information and one or more read only memories ("ROM") for permanent storage of information, such as the modules discussed herein.
Supercookie Structure
FIG. 4 illustrates an example supercookie data structure 165. The supercookie data structure 165 may include a user ID 410, aggregated data 420, user device IDs 480, and one or more cookies 430, which may include cookie data retrieved from corresponding websites regarding information in the cookies. Each cookie 430 may include a website 440, a user ID 450, data records 460, a timestamp 470 and/or associated device IDs 490 of devices which have visited the site. In some embodiments, the cookies 430 include fewer or additional fields than displayed in FIG. 4. In some embodiments, the supercookie 165 includes more items than what is displayed in FIG. 4. In some embodiments, the supercookie 165 does not include all of the items displayed in FIG. 4. In some embodiments, the supercookie 165 includes a digital certificate. The supercookie 165 can be used by external websites 140 to analyze behavior of the user 110. Based on this analysis, the websites 140 can tailor a customized experience for the user 110 when he visits the website 140. The supercookie 165 also allows one or more anonymous cookies 430 to be associated with the user 110. For example, the cookie data aggregation system 100 may receive some anonymous cookies from the user 110 when it is collecting cookie data from one or more computing devices of the user 110. Because the cookie data aggregation system 100 knows the identity of the user (e.g., from profile information provided by the user 110 when setting up an account with the cookie data aggregation system 100 and/or from information in other cookies), the system can associate the anonymous cookie with the user 110. The supercookie 165 may also serve as authorization for user 110 when the user 110 visits a website, such as website 140. This may allow the user 110 to visit the website 140 without entering in his login information. For example, a user may access a website 140 which requires login information from the user 110. The user 110 may automatically provide a link to the website 140 which leads to the user's supercookie 165. The supercookie 165 may then authenticate the user 110 to the website 140 without requiring the user 110 to enter login information. The supercookie 165 may also include aggregated data 420. The supercookie 165 aggregates data from various sources, such as one or more cookies 430, personal data, or other data. By aggregating data from various sources, the supercookie 165 may provide better data about a user 110 to a website 140 for modeling purposes. Aggregated data 420 may include common data about a user found in multiple cookies 430, a combination of different data found in multiple cookies 430, information based on the analysis of data found in multiple cookies 430, and/or other types of data.
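The FIG. 4 structure can be restated as a small data model; the sketch below uses the reference numerals from the description as comments. The concrete types are assumptions, since the disclosure does not prescribe a serialization format.

```python
# Hypothetical rendering of the FIG. 4 supercookie data structure as Python
# dataclasses; types and defaults are illustrative only.
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class Cookie:                      # item 430
    website: str                   # item 440
    user_id: Optional[str]         # item 450 (None for anonymous cookies)
    data_records: dict             # item 460
    timestamp: float               # item 470
    device_ids: list = field(default_factory=list)  # item 490


@dataclass
class Supercookie:                 # item 165
    user_id: str                   # item 410
    aggregated_data: dict          # item 420
    user_device_ids: list          # item 480
    cookies: list = field(default_factory=list)     # items 430
```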
Aggregated data 420 may include data about a user 110, such as websites visited, time spent at each website, cumulative time spent at all visited websites, preferences, and/or other data. The supercookie 165 may include one or more cookies 430. The one or more cookies 430 may be retrieved by the cookie data aggregation system 100 from the user 110. Each cookie 430 may include the website 440 that generated the cookie 430, a user ID 450, data records 460, the timestamp 470, and/or the device ID(s) 490 of the device(s) that accessed the site. In some embodiments, the cookie 430 does not have the user ID 450, and is instead anonymous. In some embodiments, the user ID 450 is encrypted. The data records 460 may include data about a user 110 while visiting the website 440, such as the time spent at the website 440, number of times the website 440 was visited, activities done on the website 440, and/or other data. The timestamp 470 may represent the time when the cookie 430 was stored on the device of the user 110. Alternatively, the timestamp 470 may represent the time when the cookie 430 was created. In some embodiments, the cookie 430 does not contain data on an associated device ID 490. In other embodiments, cookie 430 contains multiple device IDs, for example, corresponding to multiple devices which have accessed website 440 with the same account.
Examples of Process Flow
FIG. 5 illustrates an embodiment of a process for retrieving cookie data to generate a supercookie 165. The method 500 may be executed by the cookie data aggregation system 100 and/or other suitable computing device on an ongoing or periodic basis. Alternatively, the method 500 may be initiated by a user 110 to be executed once. In some embodiments, the method 500 requires authorization from the user 110. At block 510, the cookie data aggregation system accesses and/or retrieves cookies 430 from one or more devices of the user 110. The cookie data aggregation system 100 may then analyze and parse the retrieved cookies 430 from the user 110, as seen in block 520. The cookie data aggregation system 100 may do this analysis and parsing in order to determine which website 140 placed respective cookies on the user's devices and/or to extract data from the cookies. Next, the cookie data aggregation system 100 retrieves website cookie data, as seen in block 530. For example, website cookie data may include information not available in the cookie retrieved from the user 110, such as the activities of user 110 on the website. Website cookie data may also include information about a cookie that must be decoded by the website 140 which placed the cookie. In some embodiments, the cookie data aggregation system 100 can request information from the website 140 on how to decode its cookies. Once the cookie data aggregation system 100 has an algorithm for decoding cookies from a website 140, it may decode a user's cookies using its own systems. In other embodiments, the cookie data aggregation system 100 may send encoded cookies to the websites 140 which generated the encoded cookies for those websites to decode. For example, this may be necessary if a website will not provide instructions for decoding its cookies. Cookie decoding may involve anything from a simple lookup table (e.g., identifiers in particular fields of a cookie are matched to corresponding characteristics of the user) to encrypted data (e.g., encryption algorithms are used by the website 140 to decode raw information in the cookie in order to determine characteristics of the user).
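As a sketch of the "simple lookup table" case just described, the snippet below maps coded cookie fields to user characteristics; the table contents and field names are invented for illustration.

```python
# Hypothetical website-provided lookup table: identifiers in particular
# cookie fields are matched to corresponding characteristics of the user.
SITE_LOOKUP_TABLE = {
    "seg": {"a1": "frequent visitor", "b2": "first-time visitor"},
    "cat": {"12": "sports", "44": "music"},
}


def decode_cookie_fields(raw_fields: dict) -> dict:
    """Translate coded cookie fields into user characteristics using the
    website-provided lookup table; unknown codes are simply skipped."""
    decoded = {}
    for field_name, code in raw_fields.items():
        table = SITE_LOOKUP_TABLE.get(field_name)
        if table and code in table:
            decoded[field_name] = table[code]
    return decoded


# e.g. decode_cookie_fields({"seg": "a1", "cat": "44"})
#      -> {"seg": "frequent visitor", "cat": "music"}
```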
The cookie data aggregation system 100 may retrieve website cookie data from multiple websites, depending on how many different websites were responsible for the cookies 430 retrieved from the user 110. Next, the cookie data aggregation system 100 may aggregate the cookies retrieved and the website cookie data, as seen at block 540. As discussed in reference to FIG. 4, the cookie data aggregation system 100 may aggregate data determined from multiple websites, such as data about a user found in cookies 430, website cookie data received from websites (or determined based on mapping or decoding information from the websites, such as data indicating the meaning of cookie data), actual cookies, device IDs, and/or other types of data. The cookie data aggregation system 100 may then combine the aggregated cookie data with personal information about the user 110, as seen at block 550. Finally, the cookie data aggregation system 100 may generate and store a supercookie 165, as seen in block 560. The supercookie 165 may be stored at the cookie data aggregation system. Alternatively, the supercookie 165 may be stored at the user device from which the cookies 430 were retrieved and/or at some other storage device. Depending on the embodiment, the method of FIG. 5 may include fewer or additional blocks and/or the blocks may be performed in an order different than illustrated.
User Interfaces
FIGS. 6A, 6B and 6C show example supercookie user interfaces for a user 110 to manage his supercookie 165. FIG. 6A displays an embodiment of a supercookie user interface 600 comprising a software installation button 610, cookie profile section 620, recent activity section 630, and supercookie settings section 640. In some embodiments, the user interface has additional text, graphics, links, buttons, and/or other items to provide the user 110 more information about his supercookie 165. In the illustrated embodiment of FIG. 6A, the user 110 may install cookie tracking software by selecting the software installation button 610. Selecting this button may cause the cookie data aggregation system 100 to install cookie tracking software for the user 110 on a user device. The software may then track user cookies and send them to the cookie data aggregation system 100 to update the supercookie 165 for user 110. In some embodiments, no option is provided to install cookie tracking software. In some embodiments, the user 110 uploads his cookies to the cookie data aggregation system 100. In some embodiments, an option is provided to the user 110 for the cookie data aggregation system 100 to download the user's cookies from the user 110. For the illustrated embodiment of FIG. 6A, the user 110 can view his cookie profile by looking at the cookie profile section 620. In the pictured embodiment, the cookie profile section 620 may include one or more cookie profile data 621 and associated cookie profile entry modification links 622. Each cookie profile data 621 may correspond to a type of information. For example, in the pictured embodiment, the first cookie profile data corresponds to the name of the user 110, which is "John Doe". The second cookie profile data 621 corresponds to age, which in this case is 27 years. The third cookie profile data 621 corresponds to the user's music preference, which in this case is country. In some embodiments, more cookie profile data are displayed than what is shown in FIG. 6A. In some embodiments, different types of data are presented in the cookie profile section 620 than what is shown in FIG. 6A.
For example, cookie profile section 620 may include information about the user 110, such as personal information, web browsing information, website activity information, and/or other relevant information. In one embodiment, the website from which each piece of data in section 620 was obtained may also be indicated, either in the interface 600 or in a pop-up or other user interface that provides further details. This information may be useful for the user 110 to determine which websites have accurate (and inaccurate) information regarding the user. The cookie profile data may also include a cookie profile entry modification link 622. A user 110 can select the cookie profile entry modification link 622 to alter the corresponding cookie profile data 621. In some embodiments, the cookie profile data 621 cannot be modified, and thus a cookie profile entry modification link 622 is not provided. The illustrated embodiment of FIG. 6A also shows that the user 110 can view recent activity associated with this supercookie 165 by looking at the recent activity section 630. The recent activity section 630 includes recent activity data 631, a recent activity entry details link 632, and a recent activity entry deletion link 633. The recent activity section 630 includes information about the latest websites (or other entities) that have requested and/or received access to portions of the supercookie, whether in response to the user visiting the respective website or the entity simply requesting information regarding the user for some other purpose (regardless of whether the user visits a website of the entity). In some embodiments, more or less recent activity text information is displayed than what is shown in FIG. 6A. In some embodiments, a timestamp including the time and/or date of when the user 110 last visited the website is included as part of a recent activity entry 631. The recent activity entry details link 632 allows the user 110 to click on the link to get more information about the recent access to the user's supercookie. This information may include a timestamp, duration, information accessed, number of times the website has been visited, general information about the website, and/or other relevant information. In some embodiments, the recent activity entry details link 632 may not be included. The recent activity entry deletion link 633 allows a user to delete a recent activity entry 631. In some embodiments, the recent activity entry deletion link 633 is not provided. In some embodiments, the recent activity entries 631 may not be deleted. Supercookie user interface 600 also includes supercookie settings section 640. In the pictured embodiment of FIG. 6A, the supercookie settings section 640 allows a user 110 to designate which types of supercookie information he may choose to make available and for which types of websites. In the displayed embodiment of FIG. 6A, the user 110 is able to select whether personal, financial, and/or shopping supercookie information is available or hidden for social networking, financial, and online retail websites. The information being made available may be information stored in the supercookie 165. In some embodiments, the user can configure different types of information to be made available or hidden for different types of websites. In some embodiments, the user may have options to configure supercookie settings for different items. In some embodiments, check boxes, radio buttons, text boxes, or other types of user input may be used to determine the selections of the user 110 for configuring his supercookie 165.
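One way to represent the settings grid of section 640 is a small sharing matrix; the sketch below follows the FIG. 6A categories (financial information visible to financial websites but hidden from social and retail sites), but the structure and names are assumptions for illustration.

```python
# Hypothetical sharing matrix: which supercookie sections are exposed to
# which website categories, mirroring the FIG. 6A example described above.
SHARING_MATRIX = {
    # info type   -> website types it may be shown to
    "personal":  {"social", "financial", "retail"},
    "financial": {"financial"},   # hidden from social and retail websites
    "shopping":  {"retail"},
}


def is_shared(info_type: str, website_type: str) -> bool:
    """True if the user's settings expose this info type to this site type."""
    return website_type in SHARING_MATRIX.get(info_type, set())


def visible_sections(website_type: str) -> list:
    """All supercookie sections a website of the given type may receive."""
    return [info for info in SHARING_MATRIX if is_shared(info, website_type)]
```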
In some embodiments, the type of information category, such as personal, financial, and/or shopping, may be a link to the specific information that can be displayed or hidden. For example, the category "personal" shown in FIG. 6A may be a link such that clicking on "personal" will display the personal information of the supercookie 165 that could be displayed or hidden to various websites. FIGS. 6B and 6C illustrate other examples of user interfaces for managing a supercookie. In addition to certain elements displayed in FIG. 6A, the embodiments in FIGS. 6B and 6C also include a register new device button 611, different data about information use in the recent activities section 630, and additional configurability in supercookie settings section 640. Users may desire to have their supercookie profile accessible when accessing websites from multiple devices, such as the user's mobile phone, notebook computer, and desktop computer. This may allow the user to maintain the same settings for webpages when accessed from multiple devices. Additionally, this may prevent a user from needing to enter the same information through multiple devices. As illustrated in the embodiments of FIGS. 6B and 6C, the user has a single supercookie profile that automatically updates with new cookies and personal information from websites accessed by the user from any devices registered to the user. In one embodiment, the user may register a new device to his supercookie profile by selecting the register new device button 611. For example, if a user bought a new tablet, his tablet would not have any of his personal information, and websites will not recognize him, so he will have no saved preferences on the websites. However, if he already has a supercookie profile he can simply go to his supercookie management console and register his new device. Then his tablet will provide his supercookie anytime a website requests it (based on rules for providing the supercookie to various websites, for example). When he accesses websites with his tablet's browser, he can interact with the website in the same way as though he had accessed it through any of his devices which share a supercookie. In some embodiments, the user is able to configure personalized names for his registered devices. In other embodiments, the system automatically generates a name for new devices. FIGS. 6B and 6C display more information in addition to the information displayed in the recent activity section 630 shown in FIG. 6A. The recent activities section includes a timestamp 635, a device used entry 634, and an information shared entry 636. In some embodiments, recent activity data 631 includes additional or fewer pieces of information than in FIGS. 6B and 6C. Timestamp 635 displays a time of access for a recent use, but may include information on previous uses also. The timestamp 635 may refer to the time the user accessed a website, the time the supercookie was accessed, or any other time relevant to the use of the user's information. Device used data 634 indicates the device associated with the use of the supercookie's information. For example, in FIG. 6B, the website 'Social App #1' was accessed by the user's 'mobile' device. Information shared data 636 indicates what information was shared with the website. For example, in FIG. 6B, the supercookie system shared 'personal' and 'shopping' information with 'Online Store #3'. FIGS. 6B and 6C also demonstrate increased user configurability.
In FIGS. 6B and 6C, the user has the added ability to control which information is shared based on which device is accessing a website. In supercookie settings sections 640B and 640C (FIGS. 6B and 6C, respectively) the user can select which information is shared with which type of website for each of the registered devices. The user can first select a device in device selection tabs 645. The supercookie management console 600 may then display the current settings for that device. For example, in FIG. 6B the user has selected the 'Home PC' tab, so the supercookie settings section 640B indicates settings for sharing when sites are accessed from his 'Home PC'. However, in FIG. 6C, the user has selected his 'mobile' tab, so the supercookie settings section 640C indicates how supercookie information is to be shared when accessing websites from the 'mobile' device. In the example of FIG. 6C, the user has selected not to share his financial information with any types of websites from his mobile phone. In some embodiments, the system may recommend which information to share in different circumstances, for example, based on the user's security needs. In other embodiments, the user may not be able to change his information sharing settings. For example, the settings on which information to share may be preset by the system. In some embodiments, the supercookie user interface 600 may include more information and/or options for the user 110 than what is shown in FIG. 6A, 6B, or 6C to provide the user 110 more control over configuring and viewing his supercookie 165. In some embodiments, the supercookie user interface 600 may not include all of the information displayed in FIG. 6A. In some embodiments, the supercookie user interface 600 may include different information from what is displayed in FIG. 6A. The categories of information shared and website types are used for example purposes only. In other embodiments, the information shared and website types may be categorized in any way that is beneficial to the user, the supercookie management system, or both. FIGS. 6A, 6B, and 6C show embodiments of a user interface which a user can access through his browser to manage his supercookie. In other embodiments, the user interface may be accessed in other manners. For example, a user interface may be accessed through the user's browser, as a mobile website, a mobile app, a standalone application, or in other embodiments that allow a user to access information about his supercookie. FIG. 7 illustrates an embodiment of a mobile app which allows a user to change his information sharing preferences as in supercookie settings section 640, discussed above with reference to FIGS. 6A-6C. In this embodiment, the mobile app provides the user with his personal information and allows the user to install cookie tracking software on his mobile device, or register his mobile device as a new device. In other embodiments, a mobile app may display information about recent activity or profile information. In some embodiments, a user may also receive an alert if his supercookie is accessed from a new unregistered device.
Examples of Process Flow
FIG. 8 illustrates one embodiment of a process for generating, using, and analyzing a supercookie 165. First, at block 810, the cookie data aggregation system 100 may generate the supercookie 165. Once generated, the cookie data aggregation system 100 may push the supercookie 165 to the user 110, as shown at block 820 (this block is optional, as in some embodiments the supercookie is only maintained by the system 100).
In some embodiments, the cookie data aggregation system100notifies the user110that the supercookie165is being pushed to the user110. In some embodiments, the cookie data aggregation system100receives authorization to generate the supercookie165, but does not notify the user110when it pushes the supercookie165to the user110. At block830, a website140requests access to some/all of the supercookie165. In some embodiments, the supercookie165is accessed directly at the system100, while in other embodiments the user110provides the requested (and authorized) supercookie information to the requesting website. In some embodiments, the requesting website accesses the supercookie165after receiving authorization from the user110. At block840, once the website's request for the supercookie165has been granted, the website may apply a behavioral model to the supercookie165. By applying the model, the website can determine the behavior of the user110based on the cookie data compiled from multiple sites, as shown at block850. Based on the behavioral model results, the website may customize content to the user and/or otherwise use the supercookie information. In some embodiments, the method to generate, use, and analyze a supercookie165may include more actions than what is shown inFIG.8. In some embodiments, the method to generate, use, and analyze a supercookie165may not include all of the actions shown inFIG.8. Other Each of the processes, methods, and algorithms described in the preceding sections may be embodied in, and fully or partially automated by, code modules executed by one or more computer systems or computer processors comprising computer hardware. The code modules may be stored on any type of non-transitory computer-readable medium or computer storage device, such as hard drives, solid state memory, optical disc, and/or the like. The systems and modules may also be transmitted as generated data signals (e.g., as part of a carrier wave or other analog or digital propagated signal) on a variety of computer-readable transmission mediums, including wireless-based and wired/cable-based mediums, and may take a variety of forms (e.g., as part of a single or multiplexed analog signal, or as multiple discrete digital packets or frames). The processes and algorithms may be implemented partially or wholly in application-specific circuitry. The results of the disclosed processes and process actions may be stored, persistently or otherwise, in any type of non-transitory computer storage such as, e.g., volatile or non-volatile storage. All of the methods and tasks described herein may be performed and fully automated by a computer system. The computer system may, in some cases, include multiple distinct computers or computing devices (e.g., physical servers, workstations, storage arrays, etc.) that communicate and interoperate over a network to perform the described functions. Each such computing device typically includes a processor (or multiple processors) that executes program instructions or modules stored in a memory or other computer-readable storage medium. Where the system includes multiple computing devices, these devices may, but need not, be co-located. The results of the disclosed methods and tasks may be persistently stored by transforming physical storage devices, such as solid state memory chips and/or magnetic disks, into a different state. The various features and processes described above may be used independently of one another, or may be combined in various ways. 
All possible combinations and subcombinations are intended to fall within the scope of this disclosure. In addition, certain method or process blocks may be omitted in some implementations. The methods and processes described herein are also not limited to any particular sequence, and the blocks or states relating thereto can be performed in other sequences that are appropriate. For example, described blocks or states may be performed in an order other than that specifically disclosed, or multiple blocks or states may be combined in a single block or state. The example blocks or states may be performed in serial, in parallel, or in some other manner. Blocks or states may be added to or removed from the disclosed example embodiments. The example systems and components described herein may be configured differently than described. For example, elements may be added to, removed from, or rearranged compared to the disclosed example embodiments. Conditional language used herein, such as, among others, “can,” “could,” “might,” “may,” “e.g.,” and the like, unless specifically stated otherwise, or otherwise understood within the context as used, is generally intended to convey that certain embodiments include, while other embodiments do not include, certain features, elements and/or actions. Thus, such conditional language is not generally intended to imply that features, elements and/or actions are in any way required for one or more embodiments or that one or more embodiments necessarily include logic for deciding, with or without author input or prompting, whether these features, elements and/or actions are included or are to be performed in any particular embodiment. The terms “comprising,” “including,” “having,” and the like are synonymous and are used inclusively, in an open-ended fashion, and do not exclude additional elements, features, acts, operations, and so forth. Also, the term “or” is used in its inclusive sense (and not in its exclusive sense) so that when used, for example, to connect a list of elements, the term “or” means one, some, or all of the elements in the list. Conjunctive language such as the phrase “at least one of X, Y and Z,” unless specifically stated otherwise, is otherwise understood with the context as used in general to convey that an item, term, etc. may be either X, Y or Z. Thus, such conjunctive language is not generally intended to imply that certain embodiments require at least one of X, at least one of Y and at least one of Z to each be present. While certain example embodiments have been described, these embodiments have been presented by way of example only, and are not intended to limit the scope of the disclosure. Thus, nothing in the foregoing description is intended to imply that any particular element, feature, characteristic, action, module, or block is necessary or indispensable. Indeed, the novel methods and systems described herein may be embodied in a variety of other forms; furthermore, various omissions, substitutions and changes in the form of the methods and systems described herein may be made without departing from the spirit of the inventions disclosed herein. The accompanying claims and their equivalents are intended to cover such forms or modifications as would fall within the scope and spirit of certain of the inventions disclosed herein. It should be emphasized that many variations and modifications may be made to the above-described embodiments, the elements of which are to be understood as being among other acceptable examples. 
All such modifications and variations are intended to be included herein within the scope of this disclosure.
DETAILED DESCRIPTION The following detailed description refers to the accompanying drawings. Wherever possible, the same reference numbers are used in the drawings and the following description to refer to the same or similar parts. While several illustrative embodiments are described herein, modifications, adaptations and other implementations are possible. For example, substitutions, additions, or modifications may be made to the components illustrated in the drawings, and the illustrative methods described herein may be modified by substituting, reordering, removing, or adding steps to the disclosed methods. Accordingly, the following detailed description is not limited to the specific embodiments and examples, but is inclusive of general principles described herein and illustrated in the figures in addition to the general principles encompassed by the appended claims. The present disclosure is directed to systems and methods for providing users an extended reality environment. The term “extended reality environment,” which may also be referred to as “extended reality,” “extended reality space,” or “extended environment,” refers to all types of real-and-virtual combined environments and human-machine interactions at least partially generated by computer technology. The extended reality environment may be a completely simulated virtual environment or a combined real-and-virtual environment that a user may perceive from different perspectives. In some examples, the user may interact with elements of the extended reality environment. One non-limiting example of an extended reality environment may be a virtual reality environment, also known as “virtual reality” or a “virtual environment.” An immersive virtual reality environment may be a simulated non-physical environment which provides to the user the perception of being present in the virtual environment. Another non-limiting example of an extended reality environment may be an augmented reality environment, also known as “augmented reality” or an “augmented environment.” An augmented reality environment may involve a live direct or indirect view of a physical real-world environment that is enhanced with virtual computer-generated perceptual information, such as virtual objects that the user may interact with. Another non-limiting example of an extended reality environment is a mixed reality environment, also known as “mixed reality” or a “mixed environment.” A mixed reality environment may be a hybrid of physical real-world and virtual environments, in which physical and virtual objects may coexist and interact in real time. In some examples, both augmented reality environments and mixed reality environments may include a combination of real and virtual worlds, real-time interactions, and accurate 3D registration of virtual and real objects. In some examples, both augmented reality environments and mixed reality environments may include constructive overlaid sensory information that may be added to the physical environment. In other examples, both augmented reality environments and mixed reality environments may include destructive virtual content that may mask at least part of the physical environment. In some embodiments, the systems and methods may provide the extended reality environment using an extended reality appliance. The term extended reality appliance may include any type of device or system that enables a user to perceive and/or interact with an extended reality environment.
The extended reality appliance may enable the user to perceive and/or interact with an extended reality environment through one or more sensory modalities. Some non-limiting examples of such sensory modalities may include visual, auditory, haptic, somatosensory, and olfactory. One example of the extended reality appliance is a virtual reality appliance that enables the user to perceive and/or interact with a virtual reality environment. Another example of the extended reality appliance is an augmented reality appliance that enables the user to perceive and/or interact with an augmented reality environment. Yet another example of the extended reality appliance is a mixed reality appliance that enables the user to perceive and/or interact with a mixed reality environment. Consistent with one aspect of the disclosure, the extended reality appliance may be a wearable device, such as a head-mounted device, for example, smart glasses, smart contact lenses, headsets or any other device worn by a human for purposes of presenting an extended reality to the human. Other extended reality appliances may include a holographic projector or any other device or system capable of providing an augmented reality (AR), virtual reality (VR), mixed reality (MR), or any immersive experience. Typical components of wearable extended reality appliances may include at least one of: a stereoscopic head-mounted display, a stereoscopic head-mounted sound system, head-motion tracking sensors (such as gyroscopes, accelerometers, magnetometers, image sensors, structured light sensors, etc.), head-mounted projectors, eye-tracking sensors, and additional components described below. Consistent with another aspect of the disclosure, the extended reality appliance may be a non-wearable extended reality appliance. Specifically, the non-wearable extended reality appliance may include multi-projected environment appliances. In some embodiments, an extended reality appliance may be configured to change the viewing perspective of the extended reality environment in response to movements of the user and in response to head movements of the user in particular. In one example, a wearable extended reality appliance may change the field-of-view of the extended reality environment in response to a change of the head pose of the user, such as by changing the spatial orientation without changing the spatial position of the user in the extended reality environment. In another example, a non-wearable extended reality appliance may change the spatial position of the user in the extended reality environment in response to a change in the position of the user in the real world, for example, by changing the spatial position of the user in the extended reality environment without changing the direction of the field-of-view with respect to the spatial position. According to some embodiments, an extended reality appliance may include a digital communication device configured to perform at least one of: receiving virtual content data configured to enable a presentation of the virtual content, transmitting virtual content for sharing with at least one external device, receiving contextual data from at least one external device, transmitting contextual data to at least one external device, transmitting usage data indicative of usage of the extended reality appliance, and transmitting data based on information captured using at least one sensor included in the extended reality appliance.
In additional embodiments, the extended reality appliance may include memory for storing at least one of virtual data configured to enable a presentation of virtual content, contextual data, usage data indicative of usage of the extended reality appliance, sensor data based on information captured using at least one sensor included in the extended reality appliance, software instructions configured to cause a processing device to present the virtual content, software instructions configured to cause a processing device to collect and analyze the contextual data, software instructions configured to cause a processing device to collect and analyze the usage data, and software instructions configured to cause a processing device to collect and analyze the sensor data. In additional embodiments, the extended reality appliance may include a processing device configured to perform at least one of rendering of virtual content, collecting and analyzing contextual data, collecting and analyzing usage data, and collecting and analyzing sensor data. In additional embodiments, the extended reality appliance may include one or more sensors. The one or more sensors may include one or more image sensors (e.g., configured to capture images and/or videos of a user of the appliance or of an environment of the user), one or more motion sensors (such as an accelerometer, a gyroscope, a magnetometer, etc.), one or more positioning sensors (such as GPS, outdoor positioning sensor, indoor positioning sensor, etc.), one or more temperature sensors (e.g., configured to measure the temperature of at least part of the appliance and/or of the environment), one or more contact sensors, one or more proximity sensors (e.g., configured to detect whether the appliance is currently worn), one or more electrical impedance sensors (e.g., configured to measure electrical impedance of the user), one or more eye tracking sensors, such as gaze detectors, optical trackers, electric potential trackers (e.g., electrooculogram (EOG) sensors), video-based eye-trackers, infra-red/near infra-red sensors, passive light sensors, or any other technology capable of determining where a human is looking or gazing. In some embodiments, the systems and methods may use an input device to interact with the extended reality appliance. The term input device may include any physical device configured to receive input from a user or an environment of the user, and to provide the data to a computational device. The data provided to the computational device may be in a digital format and/or in an analog format. In one embodiment, the input device may store the input received from the user in a memory device accessible by a processing device, and the processing device may access the stored data for analysis. In another embodiment, the input device may provide the data directly to a processing device, for example, over a bus or over another communication system configured to transfer data from the input device to the processing device. In some examples, the input received by the input device may include key presses, tactile input data, motion data, position data, gesture-based input data, direction data, or any other data supplied for computation. Some examples of the input device may include a button, a key, a keyboard, a computer mouse, a touchpad, a touchscreen, a joystick, or another mechanism from which input may be received.
Another example of an input device may include an integrated computational interface device that includes at least one physical component for receiving input from a user. The integrated computational interface device may include at least a memory, a processing device, and the at least one physical component for receiving input from a user. In one example, the integrated computational interface device may further include a digital network interface that enables digital communication with other computing devices. In one example, the integrated computational interface device may further include a physical component for outputting information to the user. In some examples, all components of the integrated computational interface device may be included in a single housing, while in other examples the components may be distributed among two or more housings. Some non-limiting examples of physical components for receiving input from users that may be included in the integrated computational interface device may include at least one of a button, a key, a keyboard, a touchpad, a touchscreen, a joystick, or any other mechanism or sensor from which computational information may be received. Some non-limiting examples of physical components for outputting information to users may include at least one of a light indicator (such as a LED indicator), a screen, a touchscreen, a beeper, an audio speaker, or any other audio, video, or haptic device that provides human-perceptible outputs. In some embodiments, image data may be captured using one or more image sensors. In some examples, the image sensors may be included in the extended reality appliance, in a wearable device, in the wearable extended reality device, in the input device, in an environment of a user, and so forth. In some examples, the image data may be read from memory, may be received from an external device, may be generated (for example, using a generative model), and so forth. Some non-limiting examples of image data may include images, grayscale images, color images, 2D images, 3D images, videos, 2D videos, 3D videos, frames, footages, data derived from other image data, and so forth. In some examples, the image data may be encoded in any analog or digital format. Some non-limiting examples of such formats may include raw formats, compressed formats, uncompressed formats, lossy formats, lossless formats, JPEG, GIF, PNG, TIFF, BMP, NTSC, PAL, SECAM, MPEG, MPEG-4 Part 14, MOV, WMV, FLV, AVI, AVCHD, WebM, MKV, and so forth. In some embodiments, the extended reality appliance may receive digital signals, for example, from the input device. The term digital signals refers to a series of digital values that are discrete in time. The digital signals may represent, for example, sensor data, textual data, voice data, video data, virtual data, or any other form of data that provides perceptible information. Consistent with the present disclosure, the digital signals may be configured to cause the extended reality appliance to present virtual content. In one embodiment, the virtual content may be presented in a selected orientation. In this embodiment, the digital signals may indicate a position and an angle of a viewpoint in an environment, such as an extended reality environment. Specifically, the digital signals may include an encoding of the position and angle in six degree-of-freedom coordinates (e.g., forward/back, up/down, left/right, yaw, pitch, and roll). 
In another embodiment, the digital signals may include an encoding of the position as three-dimensional coordinates (e.g., x, y, and z), and an encoding of the angle as a vector originating from the encoded position. Specifically, the digital signals may indicate the orientation and the angle of the presented virtual content in absolute coordinates of the environment, for example, by encoding yaw, pitch and roll of the virtual content with respect to a standard default angle. In another embodiment, the digital signals may indicate the orientation and the angle of the presented virtual content with respect to a viewpoint of another object (e.g., a virtual object, a physical object, etc.), for example, by encoding yaw, pitch, and roll of the virtual content with respect to a direction corresponding to the viewpoint or to a direction corresponding to the other object. In another embodiment, such digital signals may include one or more projections of the virtual content, for example, in a format ready for presentation (e.g., image, video, etc.). For example, each such projection may correspond to a particular orientation or a particular angle. In another embodiment, the digital signals may include a representation of virtual content, for example, by encoding objects in a three-dimensional array of voxels, in a polygon mesh, or in any other format in which virtual content may be presented. In some embodiments, the digital signals may be configured to cause the extended reality appliance to present virtual content. The term virtual content may include any type of data representation that may be displayed by the extended reality appliance to the user. The virtual content may include a virtual object, inanimate virtual content, animate virtual content configured to change over time or in response to triggers, virtual two-dimensional content, virtual three-dimensional content, a virtual overlay over a portion of a physical environment or over a physical object, a virtual addition to a physical environment or to a physical object, virtual promotion content, a virtual representation of a physical object, a virtual representation of a physical environment, a virtual document, a virtual character or persona, a virtual computer screen, a virtual widget, or any other format for displaying information virtually. Consistent with the present disclosure, the virtual content may include any visual presentation rendered by a computer or a processing device. In one embodiment, the virtual content may include a virtual object that is a visual presentation rendered by a computer in a confined region and configured to represent an object of a particular type (such as an inanimate virtual object, an animate virtual object, virtual furniture, a virtual decorative object, a virtual widget, or another virtual representation). The rendered visual presentation may change to reflect changes to a status of the object or changes in the viewing angle of the object, for example, in a way that mimics changes in the appearance of physical objects.
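To make the viewpoint encodings described above concrete, a six degree-of-freedom pose might be represented as a position plus an orientation, with the direction-vector form as an alternative. The sketch below is in Python; the class and field names are assumptions for illustration, not terminology from the disclosure:

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class Pose6DoF:
    """A viewpoint in six degree-of-freedom coordinates: a position
    (x, y, z) and an orientation (yaw, pitch, roll) in the absolute
    coordinate system of the environment."""
    x: float      # forward/back
    y: float      # left/right
    z: float      # up/down
    yaw: float    # rotation about the vertical axis, in radians
    pitch: float  # rotation about the lateral axis, in radians
    roll: float   # rotation about the longitudinal axis, in radians

@dataclass
class PoseVector:
    """Alternative encoding mentioned above: a three-dimensional position
    and an angle encoded as a vector originating from that position."""
    position: Tuple[float, float, float]
    direction: Tuple[float, float, float]  # e.g., a unit vector

viewpoint = Pose6DoF(x=0.0, y=0.0, z=1.6, yaw=0.0, pitch=-0.1, roll=0.0)
```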
In another embodiment, the virtual content may include a virtual display (also referred to as a “virtual display screen” or a “virtual screen” herein), such as a virtual computer screen, a virtual tablet screen or a virtual smartphone screen, configured to display information generated by an operating system, in which the operating system may be configured to receive textual data from a physical keyboard and/or a virtual keyboard and to cause a display of the textual content in the virtual display screen. In one example, illustrated inFIG.1, the virtual content may include a virtual environment that includes a virtual computer screen and a plurality of virtual objects. In some examples, a virtual display may be a virtual object mimicking and/or extending the functionality of a physical display screen. For example, the virtual display may be presented in an extended reality environment (such as a mixed reality environment, an augmented reality environment, a virtual reality environment, etc.), using an extended reality appliance. In one example, a virtual display may present content produced by a regular operating system that may be equally presented on a physical display screen. In one example, textual content entered using a keyboard (for example, using a physical keyboard, using a virtual keyboard, etc.) may be presented on a virtual display in real time as the textual content is typed. In one example, a virtual cursor may be presented on a virtual display, and the virtual cursor may be controlled by a pointing device (such as a physical pointing device, a virtual pointing device, a computer mouse, a joystick, a touchpad, a physical touch controller, and so forth). In one example, one or more windows of a graphical user interface operating system may be presented on a virtual display. In another example, content presented on a virtual display may be interactive, that is, it may change in reaction to actions of users. In yet another example, a presentation of a virtual display may include a presentation of a screen frame, or may include no presentation of a screen frame. Some disclosed embodiments may include and/or access a data structure or a database. The terms data structure and database, consistent with the present disclosure, may include any collection of data values and relationships among them. The data may be stored linearly, horizontally, hierarchically, relationally, non-relationally, uni-dimensionally, multidimensionally, operationally, in an ordered manner, in an unordered manner, in an object-oriented manner, in a centralized manner, in a decentralized manner, in a distributed manner, in a custom manner, or in any manner enabling data access. By way of non-limiting examples, data structures may include an array, an associative array, a linked list, a binary tree, a balanced tree, a heap, a stack, a queue, a set, a hash table, a record, a tagged union, an Entity-Relationship model, a graph, a hypergraph, a matrix, a tensor, and so forth. For example, a data structure may include an XML database, an RDBMS database, an SQL database or NoSQL alternatives for data storage/search such as, for example, MongoDB, Redis, Couchbase, Datastax Enterprise Graph, Elastic Search, Splunk, Solr, Cassandra, Amazon DynamoDB, Scylla, HBase, and Neo4J. A data structure may be a component of the disclosed system or a remote computing component (e.g., a cloud-based data structure). Data in the data structure may be stored in contiguous or non-contiguous memory.
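As one concrete illustration of the storage flexibility just described, the sketch below keeps the same record both in an associative array (a hash table) and in a relational SQL table, using only the Python standard library; the schema and names are hypothetical:

```python
import sqlite3

# The same virtual-object record held in an associative array ...
objects = {
    "object-1": {"kind": "virtual widget", "anchor": "physical table"},
}

# ... and in a relational (SQL) table, one of the many storage manners
# the disclosure contemplates (hierarchical, graph, document, etc.).
con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE virtual_object (id TEXT PRIMARY KEY, kind TEXT, anchor TEXT)")
con.execute("INSERT INTO virtual_object VALUES (?, ?, ?)",
            ("object-1", "virtual widget", "physical table"))
print(con.execute(
    "SELECT kind, anchor FROM virtual_object WHERE id = 'object-1'").fetchone())
```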
Moreover, a data structure does not require information to be co-located. It may be distributed across multiple servers, for example, that may be owned or operated by the same or different entities. Thus, the term data structure in the singular is inclusive of plural data structures. In some embodiments, the system may determine the confidence level in received input or in any determined value. The term confidence level refers to any indication, numeric or otherwise, of a level (e.g., within a predetermined range) indicative of an amount of confidence the system has in determined data. For example, the confidence level may have a value between 1 and 10. Alternatively, the confidence level may be expressed as a percentage or any other numerical or non-numerical indication. In some cases, the system may compare the confidence level to a threshold. The term threshold may denote a reference value, a level, a point, or a range of values. In operation, when the confidence level of determined data exceeds the threshold (or is below it, depending on a particular use case), the system may follow a first course of action and, when the confidence level is below it (or above it, depending on a particular use case), the system may follow a second course of action. The value of the threshold may be predetermined for each type of examined object or may be dynamically selected based on different considerations. System Overview Reference is now made toFIG.1, which illustrates a user that uses an example extended reality system consistent with various embodiments of the present disclosure.FIG.1is an exemplary representation of just one embodiment, and it is to be understood that some illustrated elements might be omitted and others added within the scope of this disclosure. As shown, a user100is sitting behind table102, which supports a keyboard104and mouse106. Keyboard104is connected by wire108to a wearable extended reality appliance110that displays virtual content to user100. Alternatively or additionally to wire108, keyboard104may connect to wearable extended reality appliance110wirelessly. For illustration purposes, the wearable extended reality appliance is depicted as a pair of smart glasses, but, as described above, wearable extended reality appliance110may be any type of head-mounted device used for presenting an extended reality to user100. The virtual content displayed by wearable extended reality appliance110includes a virtual screen112(also referred to as a “virtual display screen” or a “virtual display” herein) and a plurality of virtual widgets114. Virtual widgets114A-114D are displayed next to virtual screen112and virtual widget114E is displayed on table102. User100may input text to a document116displayed in virtual screen112using keyboard104; and may control virtual cursor118using mouse106. In one example, virtual cursor118may move anywhere within virtual screen112. In another example, virtual cursor118may move anywhere within virtual screen112and may also move to any one of virtual widgets114A-114D but not to virtual widget114E. In yet another example, virtual cursor118may move anywhere within virtual screen112and may also move to any one of virtual widgets114A-114E. In an additional example, virtual cursor118may move anywhere in the extended reality environment including virtual screen112and virtual widgets114A-114E.
In yet another example, the virtual cursor may move on all available surfaces (i.e., virtual surfaces or physical surfaces) or only on selected surfaces in the extended reality environment. Alternatively or additionally, user100may interact with any one of virtual widgets114A-114E, or with selected virtual widgets, using hand gestures recognized by wearable extended reality appliance110. For example, virtual widget114E may be an interactive widget (e.g., a virtual slider controller) that may be operated with hand gestures. FIG.2illustrates an example of a system200that provides an extended reality (XR) experience to users, such as user100.FIG.2is an exemplary representation of just one embodiment, and it is to be understood that some illustrated elements might be omitted and others added within the scope of this disclosure. System200may be computer-based and may include computer system components, wearable appliances, workstations, tablets, handheld computing devices, memory devices, and/or internal network(s) connecting the components. System200may include or be connected to various network computing resources (e.g., servers, routers, switches, network connections, storage devices, etc.) for supporting services provided by system200. Consistent with the present disclosure, system200may include an input unit202, an XR unit204, a mobile communications device206, and a remote processing unit208. Remote processing unit208may include a server210coupled to one or more physical or virtual storage devices, such as a data structure212. System200may also include or be connected to a communications network214that facilitates communications and data exchange between different system components and the different entities associated with system200. Consistent with the present disclosure, input unit202may include one or more devices that may receive input from user100. In one embodiment, input unit202may include a textual input device, such as keyboard104. The textual input device may include all possible types of devices and mechanisms for inputting textual information to system200. Examples of textual input devices may include mechanical keyboards, membrane keyboards, flexible keyboards, QWERTY keyboards, Dvorak keyboards, Colemak keyboards, chorded keyboards, wireless keyboards, keypads, key-based control panels, or other arrays of control keys, vision input devices, or any other mechanism for inputting text, whether the mechanism is provided in physical form or is presented virtually. In one embodiment, input unit202may also include a pointing input device, such as mouse106. The pointing input device may include all possible types of devices and mechanisms for inputting two-dimensional or three-dimensional information to system200. In one example, two-dimensional input from the pointing input device may be used for interacting with virtual content presented via the XR unit204. Examples of pointing input devices may include a computer mouse, trackball, touchpad, trackpad, touchscreen, joystick, pointing stick, stylus, light pen, or any other physical or virtual input mechanism. In one embodiment, input unit202may also include a graphical input device, such as a touchscreen configured to detect contact, movement, or break of movement.
The graphical input device may use any of a plurality of touch sensitivity technologies, including, but not limited to, capacitive, resistive, infrared, and surface acoustic wave technologies as well as other proximity sensor arrays or other elements for determining one or more points of contact. In one embodiment, input unit202may also include one or more voice input devices, such as a microphone. The voice input device may include all possible types of devices and mechanisms for inputting voice data to facilitate voice-enabled functions, such as voice recognition, voice replication, digital recording, and telephony functions. In one embodiment, input unit202may also include one or more image input devices, such as an image sensor, configured to capture image data. In one embodiment, input unit202may also include one or more haptic gloves configured to capture hand motion and pose data. In one embodiment, input unit202may also include one or more proximity sensors configured to detect presence and/or movement of objects in a selected region near the sensors. In accordance with some embodiments, the system may include at least one sensor configured to detect and/or measure a property associated with the user, the user's action, or the user's environment. One example of the at least one sensor is sensor216included in input unit202. Sensor216may be a motion sensor, a touch sensor, a light sensor, an infrared sensor, an audio sensor, an image sensor, a proximity sensor, a positioning sensor, a gyroscope, a temperature sensor, a biometric sensor, or any other sensing device to facilitate related functionalities. Sensor216may be integrated with, or connected to, the input devices or it may be separated from the input devices. In one example, a thermometer may be included in mouse106to determine the body temperature of user100. In another example, a positioning sensor may be integrated with keyboard104to determine movement of user100relative to keyboard104. Such a positioning sensor may be implemented using one of the following technologies: Global Positioning System (GPS), GLObal NAvigation Satellite System (GLONASS), Galileo global navigation system, BeiDou navigation system, other Global Navigation Satellite Systems (GNSS), Indian Regional Navigation Satellite System (IRNSS), Local Positioning Systems (LPS), Real-Time Location Systems (RTLS), Indoor Positioning System (IPS), Wi-Fi based positioning systems, cellular triangulation, image based positioning technology, indoor positioning technology, outdoor positioning technology, or any other positioning technology. In accordance with some embodiments, the system may include one or more sensors for identifying a position and/or a movement of a physical device (such as a physical input device, a physical computing device, keyboard104, mouse106, wearable extended reality appliance110, and so forth). The one or more sensors may be included in the physical device or may be external to the physical device. In some examples, an image sensor external to the physical device (for example, an image sensor included in another physical device) may be used to capture image data of the physical device, and the image data may be analyzed to identify the position and/or the movement of the physical device.
For example, the image data may be analyzed using a visual object tracking algorithm to identify the movement of the physical device, may be analyzed using a visual object detection algorithm to identify the position of the physical device (for example, relative to the image sensor, in a global coordinates system, etc.), and so forth. In some examples, an image sensor included in the physical device may be used to capture image data, and the image data may be analyzed to identify the position and/or the movement of the physical device. For example, the image data may be analyzed using visual odometry algorithms to identify the position of the physical device, may be analyzed using an ego-motion algorithm to identify movement of the physical device, and so forth. In some examples, a positioning sensor, such as an indoor positioning sensor or an outdoor positioning sensor, may be included in the physical device and may be used to determine the position of the physical device. In some examples, a motion sensor, such as an accelerometer or a gyroscope, may be included in the physical device and may be used to determine the motion of the physical device. In some examples, a physical device, such as a keyboard or a mouse, may be configured to be positioned on a physical surface. Such a physical device may include an optical mouse sensor (also known as a non-mechanical tracking engine) aimed towards the physical surface, and the output of the optical mouse sensor may be analyzed to determine movement of the physical device with respect to the physical surface. Consistent with the present disclosure, XR unit204may include a wearable extended reality appliance configured to present virtual content to user100. One example of the wearable extended reality appliance is wearable extended reality appliance110. Additional examples of wearable extended reality appliances may include a Virtual Reality (VR) device, an Augmented Reality (AR) device, a Mixed Reality (MR) device, or any other device capable of generating extended reality content. Some non-limiting examples of such devices may include Nreal Light, Magic Leap One, Varjo, Quest 1/2, Vive, and others. In some embodiments, XR unit204may present virtual content to user100. Generally, an extended reality appliance may include all real-and-virtual combined environments and human-machine interactions generated by computer technology and wearables. As mentioned above, the term “extended reality” (XR) refers to a superset which includes the entire spectrum from “the complete real” to “the complete virtual.” It includes representative forms such as augmented reality (AR), mixed reality (MR), virtual reality (VR), and the areas interpolated among them. Accordingly, it is noted that the terms “XR appliance,” “AR appliance,” “VR appliance,” and “MR appliance” may be used interchangeably herein and may refer to any device of the variety of appliances listed above. Consistent with the present disclosure, the system may exchange data with a variety of communication devices associated with users, for example, mobile communications device206. The term “communication device” is intended to include all possible types of devices capable of exchanging data using a digital communications network, an analog communication network, or any other communications network configured to convey data.
In some examples, the communication device may include a smartphone, a tablet, a smartwatch, a personal digital assistant, a desktop computer, a laptop computer, an IoT device, a dedicated terminal, a wearable communication device, and any other device that enables data communications. In some cases, mobile communications device206may supplement or replace input unit202. Specifically, mobile communications device206may be associated with a physical touch controller that may function as a pointing input device. Moreover, mobile communications device206may also, for example, be used to implement a virtual keyboard and replace the textual input device. For example, when user100steps away from table102and walks to the break room with his smart glasses, he may receive an email that requires a quick answer. In this case, the user may select to use his or her own smartwatch as the input device and to type the answer to the email while it is virtually presented by the smart glasses. Consistent with the present disclosure, embodiments of the system may involve the usage of a cloud server. The term “cloud server” refers to a computer platform that provides services via a network, such as the Internet. In the example embodiment illustrated inFIG.2, server210may use virtual machines that may not correspond to individual hardware. For example, computational and/or storage capabilities may be implemented by allocating appropriate portions of desirable computation/storage power from a scalable repository, such as a data center or a distributed computing environment. Specifically, in one embodiment, remote processing unit208may be used together with XR unit204to provide the virtual content to user100. In one example configuration, server210may be a cloud server that functions as the operating system (OS) of the wearable extended reality appliance. In one example, server210may implement the methods described herein using customized hard-wired logic, one or more Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs), firmware, and/or program logic which, in combination with the computer system, cause server210to be a special-purpose machine. In some embodiments, server210may access data structure212to determine, for example, virtual content to display to user100. Data structure212may utilize a volatile or non-volatile, magnetic, semiconductor, tape, optical, removable, non-removable, or other type of storage device or tangible or non-transitory computer-readable medium, or any medium or mechanism for storing information. Data structure212may be part of server210or separate from server210, as shown. When data structure212is not part of server210, server210may exchange data with data structure212via a communication link. Data structure212may include one or more memory devices that store data and instructions used to perform one or more features of the disclosed methods. In one embodiment, data structure212may include any of a plurality of suitable data structures, ranging from small data structures hosted on a workstation to large data structures distributed among data centers. Data structure212may also include any combination of one or more data structures controlled by memory controller devices (e.g., servers) or software. Consistent with the present disclosure, a communications network may be any type of network (including infrastructure) that supports communications, exchanges information, and/or facilitates the exchange of information between the components of a system.
For example, communications network214in system200may include, for example, a telephone network, an extranet, an intranet, the Internet, satellite communications, off-line communications, wireless communications, transponder communications, a Local Area Network (LAN), a wireless network (e.g., a Wi-Fi/802.11 network), a Wide Area Network (WAN), a Virtual Private Network (VPN), a digital communication network, an analog communication network, or any other mechanism or combination of mechanisms that enables data transmission. The components and arrangements of system200shown inFIG.2are intended to be exemplary only and are not intended to limit any embodiment, as the system components used to implement the disclosed processes and features may vary. FIG.3is a block diagram of an exemplary configuration of input unit202.FIG.3is an exemplary representation of just one embodiment, and it is to be understood that some illustrated elements might be omitted and others added within the scope of this disclosure. In the embodiment ofFIG.3, input unit202may directly or indirectly access a bus300(or other communication mechanism) that interconnects subsystems and components for transferring information within input unit202. For example, bus300may interconnect a memory interface310, a network interface320, an input interface330, a power source340, an output interface350, a processing device360, a sensors interface370, and a database380. Memory interface310, shown inFIG.3, may be used to access a software product and/or data stored on a non-transitory computer-readable medium. Generally, a non-transitory computer-readable storage medium refers to any type of physical memory on which information or data readable by at least one processor can be stored. Examples include Random Access Memory (RAM), Read-Only Memory (ROM), volatile memory, nonvolatile memory, hard drives, CD ROMs, DVDs, flash drives, disks, any other optical data storage medium, any physical medium with patterns of holes, a PROM, an EPROM, a FLASH-EPROM or any other flash memory, NVRAM, a cache, a register, any other memory chip or cartridge, and networked versions of the same. The terms “memory” and “computer-readable storage medium” may refer to multiple structures, such as a plurality of memories or computer-readable storage mediums located within an input unit or at a remote location. Additionally, one or more computer-readable storage mediums can be utilized in implementing a computer-implemented method. Accordingly, the term computer-readable storage medium should be understood to include tangible items and exclude carrier waves and transient signals. In the specific embodiment illustrated inFIG.3, memory interface310may be used to access a software product and/or data stored on a memory device, such as memory device311. Memory device311may include high-speed random-access memory and/or non-volatile memory, such as one or more magnetic disk storage devices, one or more optical storage devices, and/or flash memory (e.g., NAND, NOR). Consistent with the present disclosure, the components of memory device311may be distributed in more than one unit of system200and/or in more than one memory device. Memory device311, shown inFIG.3, may contain software modules to execute processes consistent with the present disclosure.
In particular, memory device311may include an input determination module312, an output determination module313, a sensors communication module314, a virtual content determination module315, a virtual content communication module316, and a database access module317. Modules312-317may contain software instructions for execution by at least one processor (e.g., processing device360) associated with input unit202. Input determination module312, output determination module313, sensors communication module314, virtual content determination module315, virtual content communication module316, and database access module317may cooperate to perform various operations. For example, input determination module312may determine text using data received from, for example, keyboard104. Thereafter, output determination module313may cause presentation of the recent inputted text, for example on a dedicated display352physically or wirelessly coupled to keyboard104. This way, when user100types, he can see a preview of the typed text without constantly moving his head up and down to look at virtual screen112. Sensors communication module314may receive data from different sensors to determine a status of user100. Thereafter, virtual content determination module315may determine the virtual content to display, based on received input and the determined status of user100. For example, the determined virtual content may be a virtual presentation of the recent inputted text on a virtual screen virtually located adjacent to keyboard104. Virtual content communication module316may obtain virtual content that is not determined by virtual content determination module315(e.g., an avatar of another user). The retrieval of the virtual content may be from database380, from remote processing unit208, or any other source. In some embodiments, input determination module312may regulate the operation of input interface330in order to receive pointer input331, textual input332, audio input333, and XR-related input334. Details on the pointer input, the textual input, and the audio input are described above. The term “XR-related input” may include any type of data that may cause a change in the virtual content displayed to user100. In one embodiment, XR-related input334may include image data of user100captured by a wearable extended reality appliance (e.g., detected hand gestures of user100). In another embodiment, XR-related input334may include wireless communication indicating a presence of another user in proximity to user100. Consistent with the present disclosure, input determination module312may concurrently receive different types of input data. Thereafter, input determination module312may further apply different rules based on the detected type of input. For example, a pointer input may have precedence over voice input. In some embodiments, output determination module313may regulate the operation of output interface350in order to generate output using light indicators351, display352, and/or speakers353. In general, the output generated by output determination module313does not include virtual content to be presented by a wearable extended reality appliance. Instead, the output generated by output determination module313includes various outputs that relate to the operation of input unit202and/or the operation of XR unit204. In one embodiment, light indicators351may include a light indicator that shows the status of a wearable extended reality appliance.
For example, the light indicator may display a green light when wearable extended reality appliance110is connected to keyboard104, and may blink when wearable extended reality appliance110has a low battery. In another embodiment, display352may be used to display operational information. For example, the display may present error messages when the wearable extended reality appliance is inoperable. In another embodiment, speakers353may be used to output audio, for example, when user100wishes to play some music for other users. In some embodiments, sensors communication module314may regulate the operation of sensors interface370in order to receive sensor data from one or more sensors, integrated with, or connected to, an input device. The one or more sensors may include: audio sensor371, image sensor372, motion sensor373, environmental sensor374(e.g., a temperature sensor, ambient light detectors, etc.), and other sensors375. In one embodiment, the data received from sensors communication module314may be used to determine the physical orientation of the input device. The physical orientation of the input device may be indicative of a state of the user and may be determined based on a combination of a tilt movement, a roll movement, and a lateral movement. Thereafter, the physical orientation of the input device may be used by virtual content determination module315to modify display parameters of the virtual content to match the state of the user (e.g., attentive, sleepy, active, sitting, standing, leaning backwards, leaning forward, walking, moving, riding, etc.). In some embodiments, virtual content determination module315may determine the virtual content to be displayed by the wearable extended reality appliance. The virtual content may be determined based on data from input determination module312, sensors communication module314, and other sources (e.g., database380). In some embodiments, determining the virtual content may include determining the distance, the size, and the orientation of the virtual objects. The position of the virtual objects may be determined based on the type of the virtual objects. Specifically, with regards to the example illustrated inFIG.1, the virtual content determination module315may determine to place four virtual widgets114A-114D on the sides of virtual screen112and to place virtual widget114E on table102because virtual widget114E is a virtual controller (e.g., volume bar). The position of the virtual objects may further be determined based on the user's preferences. For example, for left-handed users, virtual content determination module315may determine to place a virtual volume bar to the left of keyboard104; and for right-handed users, virtual content determination module315may determine to place the virtual volume bar to the right of keyboard104. In some embodiments, virtual content communication module316may regulate the operation of network interface320in order to obtain data from one or more sources to be presented as virtual content to user100. The one or more sources may include other XR units204, the user's mobile communications device206, remote processing unit208, publicly available information, etc. In one embodiment, virtual content communication module316may communicate with mobile communications device206in order to provide a virtual representation of mobile communications device206.
For example, the virtual representation may enable user100to read messages and interact with applications installed on the mobile communications device206. Virtual content communication module316may also regulate the operation of network interface320in order to share virtual content with other users. In one example, virtual content communication module316may use data from input determination module312to identify a trigger (e.g., the trigger may include a gesture of the user) and to transfer content from the virtual display to a physical display (e.g., TV) or to a virtual display of a different user. In some embodiments, database access module317may cooperate with database380to retrieve stored data. The retrieved data may include, for example, privacy levels associated with different virtual objects, the relationship between virtual objects and physical objects, the user's preferences, the user's past behavior, and more. As described above, virtual content determination module315may use the data stored in database380to determine the virtual content. Database380may include separate databases, including, for example, a vector database, raster database, tile database, viewport database, and/or a user input database. The data stored in database380may be received from modules314-317or other components of system200. Moreover, the data stored in database380may be provided as input using data entry, data transfer, or data uploading. Modules312-317may be implemented in software, hardware, firmware, a mix of any of those, or the like. In some embodiments, any one or more of modules312-317and data associated with database380may be stored in XR unit204, mobile communications device206, or remote processing unit208. Processing devices of system200may be configured to execute the instructions of modules312-317. In some embodiments, aspects of modules312-317may be implemented in hardware, in software (including in one or more signal processing and/or application specific integrated circuits), in firmware, or in any combination thereof, executable by one or more processors, alone, or in various combinations with each other. Specifically, modules312-317may be configured to interact with each other and/or other modules of system200to perform functions consistent with some disclosed embodiments. For example, input unit202may execute instructions that include an image processing algorithm on data from XR unit204to determine head movement of user100. Furthermore, each functionality described throughout the specification, with regards to input unit202or with regards to a component of input unit202, may correspond to a set of instructions for performing said functionality. These instructions need not be implemented as separate software programs, procedures, or modules. Memory device311may include additional modules and instructions or fewer modules and instructions. For example, memory device311may store an operating system, such as ANDROID, iOS, UNIX, OSX, WINDOWS, DARWIN, RTXC, LINUX or an embedded operating system such as VxWorks. The operating system can include instructions for handling basic system services and for performing hardware-dependent tasks. Network interface320, shown inFIG.3, may provide two-way data communications to a network, such as communications network214. In one embodiment, network interface320may include an Integrated Services Digital Network (ISDN) card, cellular modem, satellite modem, or a modem to provide a data communication connection over the Internet.
As another example, network interface 320 may include a Wireless Local Area Network (WLAN) card. In another embodiment, network interface 320 may include an Ethernet port connected to radio frequency receivers and transmitters and/or optical (e.g., infrared) receivers and transmitters. The specific design and implementation of network interface 320 may depend on the communications network or networks over which input unit 202 is intended to operate. For example, in some embodiments, input unit 202 may include network interface 320 designed to operate over a GSM network, a GPRS network, an EDGE network, a Wi-Fi or WiMax network, and a Bluetooth network. In any such implementation, network interface 320 may be configured to send and receive electrical, electromagnetic, or optical signals that carry digital data streams or digital signals representing various types of information. Input interface 330, shown in FIG. 3, may receive input from a variety of input devices, for example, a keyboard, a mouse, a touch pad, a touch screen, one or more buttons, a joystick, a microphone, an image sensor, and any other device configured to detect physical or virtual input. The received input may be in the form of at least one of: text, sounds, speech, hand gestures, body gestures, tactile information, and any other type of physical or virtual input generated by the user. In an additional embodiment, input interface 330 may be an integrated circuit that may act as a bridge between processing device 360 and any of the input devices listed above. In the depicted embodiment, input interface 330 may receive pointer input 331, textual input 332, audio input 333, and XR-related input 334. Power source 340, shown in FIG. 3, may provide electrical energy to power input unit 202 and optionally also to power XR unit 204. Generally, a power source included in any device or system in the present disclosure may be any device that can repeatedly store, dispense, or convey electric power, including, but not limited to, one or more batteries (e.g., a lead-acid battery, a lithium-ion battery, a nickel-metal hydride battery, a nickel-cadmium battery), one or more capacitors, one or more connections to external power sources, one or more power convertors, or any combination of them. With reference to the example illustrated in FIG. 3, the power source may be mobile, which means that input unit 202 can be easily carried by a hand (e.g., the total weight of power source 340 may be less than a pound). The mobility of the power source enables user 100 to use input unit 202 in a variety of situations. In other embodiments, power source 340 may be associated with a connection to an external power source (such as an electrical power grid) that may be used to charge power source 340. In addition, power source 340 may be configured to charge one or more batteries included in XR unit 204; for example, a pair of extended reality glasses (e.g., wearable extended reality appliance 110) may be charged (e.g., wirelessly or by wire) when placed on, or in proximity to, input unit 202. Output interface 350, shown in FIG. 3, may cause output from a variety of output devices, for example, using light indicators 351, display 352, and/or speakers 353. In one embodiment, output interface 350 may be an integrated circuit that may act as a bridge between processing device 360 and at least one of the output devices listed above. Light indicators 351 may include one or more light sources, for example, an LED array associated with different colors.
Display 352 may include a screen (e.g., an LCD or dot-matrix screen) or a touch screen. Speakers 353 may include audio headphones, a hearing aid type device, a speaker, a bone conduction headphone, interfaces that provide tactile cues, vibrotactile stimulators, and more. Processing device 360, shown in FIG. 3, may include at least one processor configured to execute computer programs, applications, methods, processes, or other software to perform embodiments described in the present disclosure. Generally, a processing device included in any device or system in the present disclosure may include one or more integrated circuits, microchips, microcontrollers, microprocessors, all or part of a central processing unit (CPU), a graphics processing unit (GPU), a digital signal processor (DSP), a field programmable gate array (FPGA), or other circuits suitable for executing instructions or performing logic operations. The processing device may include at least one processor, such as a microprocessor manufactured by Intel™, configured to perform functions of the disclosed methods. The processing device may include a single core or multiple core processors executing parallel processes simultaneously. In one example, the processing device may be a single core processor configured with virtual processing technologies. The processing device may implement virtual machine technologies or other technologies to provide the ability to execute, control, run, manipulate, store, etc., multiple software processes, applications, programs, etc. In another example, the processing device may include a multiple-core processor arrangement (e.g., dual core, quad core, etc.) configured to provide parallel processing functionalities to allow a device associated with the processing device to execute multiple processes simultaneously. It is appreciated that other types of processor arrangements could be implemented to provide the capabilities disclosed herein. Sensors interface 370, shown in FIG. 3, may obtain sensor data from a variety of sensors, for example, audio sensor 371, image sensor 372, motion sensor 373, environmental sensor 374, and other sensors 375. In one embodiment, sensors interface 370 may be an integrated circuit that may act as a bridge between processing device 360 and at least one of the sensors listed above. Audio sensor 371 may include one or more audio sensors configured to capture audio by converting sounds to digital information. Some examples of audio sensors may include: microphones, unidirectional microphones, bidirectional microphones, cardioid microphones, omnidirectional microphones, onboard microphones, wired microphones, wireless microphones, or any combination of the above. Consistent with the present disclosure, processing device 360 may modify a presentation of virtual content based on data received from audio sensor 371 (e.g., voice commands). Image sensor 372 may include one or more image sensors configured to capture visual information by converting light to image data. Consistent with the present disclosure, an image sensor may be included in any device or system in the present disclosure and may be any device capable of detecting and converting optical signals in the near-infrared, infrared, visible, and ultraviolet spectrums into electrical signals. Examples of image sensors may include digital cameras, phone cameras, semiconductor Charge-Coupled Devices (CCDs), active pixel sensors in Complementary Metal-Oxide-Semiconductor (CMOS), or N-type metal-oxide-semiconductor (NMOS, Live MOS).
The electrical signals may be used to generate image data. Consistent with the present disclosure, the image data may include pixel data streams, digital images, digital video streams, data derived from captured images, and data that may be used to construct one or more 3D images, a sequence of 3D images, 3D videos, or a virtual 3D representation. The image data acquired by image sensor 372 may be transmitted by wired or wireless transmission to any processing device of system 200. For example, the image data may be processed in order to detect objects, detect events, detect actions, detect faces, detect people, recognize a known person, or determine any other information that may be used by system 200. Consistent with the present disclosure, processing device 360 may modify a presentation of virtual content based on image data received from image sensor 372. Motion sensor 373 may include one or more motion sensors configured to measure motion of input unit 202 or motion of objects in the environment of input unit 202. Specifically, the motion sensors may perform at least one of the following: detect motion of objects in the environment of input unit 202, measure the velocity of objects in the environment of input unit 202, measure the acceleration of objects in the environment of input unit 202, detect the motion of input unit 202, measure the velocity of input unit 202, measure the acceleration of input unit 202, etc. In some embodiments, motion sensor 373 may include one or more accelerometers configured to detect changes in proper acceleration and/or to measure proper acceleration of input unit 202. In other embodiments, motion sensor 373 may include one or more gyroscopes configured to detect changes in the orientation of input unit 202 and/or to measure information related to the orientation of input unit 202. In other embodiments, motion sensor 373 may include one or more image sensors, LIDAR sensors, radar sensors, or proximity sensors. For example, by analyzing captured images, the processing device may determine the motion of input unit 202, for example, using ego-motion algorithms. In addition, the processing device may determine the motion of objects in the environment of input unit 202, for example, using object tracking algorithms. Consistent with the present disclosure, processing device 360 may modify a presentation of virtual content based on the determined motion of input unit 202 or the determined motion of objects in the environment of input unit 202, for example, by causing a virtual display to follow the movement of input unit 202. Environmental sensor 374 may include one or more sensors of different types configured to capture data reflective of the environment of input unit 202. In some embodiments, environmental sensor 374 may include one or more chemical sensors configured to perform at least one of the following: measure chemical properties in the environment of input unit 202, measure changes in the chemical properties in the environment of input unit 202, detect the presence of chemicals in the environment of input unit 202, or measure the concentration of chemicals in the environment of input unit 202. Examples of such chemical properties may include: pH level, toxicity, and temperature. Examples of such chemicals may include: electrolytes, particular enzymes, particular hormones, particular proteins, smoke, carbon dioxide, carbon monoxide, oxygen, ozone, hydrogen, and hydrogen sulfide.
In other embodiments, environmental sensor 374 may include one or more temperature sensors configured to detect changes in the temperature of the environment of input unit 202 and/or to measure the temperature of the environment of input unit 202. In other embodiments, environmental sensor 374 may include one or more barometers configured to detect changes in the atmospheric pressure in the environment of input unit 202 and/or to measure the atmospheric pressure in the environment of input unit 202. In other embodiments, environmental sensor 374 may include one or more light sensors configured to detect changes in the ambient light in the environment of input unit 202. Consistent with the present disclosure, processing device 360 may modify a presentation of virtual content based on input from environmental sensor 374, for example, by automatically reducing the brightness of the virtual content when the environment of user 100 becomes darker. Other sensors 375 may include a weight sensor, a light sensor, a resistive sensor, an ultrasonic sensor, a proximity sensor, a biometric sensor, or other sensing devices to facilitate related functionalities. In a specific embodiment, other sensors 375 may include one or more positioning sensors configured to obtain positioning information of input unit 202, to detect changes in the position of input unit 202, and/or to measure the position of input unit 202. Alternatively, GPS software may permit input unit 202 to access an external GPS receiver (e.g., connecting via a serial port or Bluetooth). Consistent with the present disclosure, processing device 360 may modify a presentation of virtual content based on input from other sensors 375, for example, by presenting private information only after identifying user 100 using data from a biometric sensor. The components and arrangements shown in FIG. 3 are not intended to limit any embodiment. As will be appreciated by a person skilled in the art having the benefit of this disclosure, numerous variations and/or modifications may be made to the depicted configuration of input unit 202. For example, not all components may be essential for the operation of an input unit in all cases. Any component may be located in any appropriate part of an input unit, and the components may be rearranged into a variety of configurations while providing the functionality of various embodiments. For example, some input units may not include all of the elements as shown in input unit 202. FIG. 4 is a block diagram of an exemplary configuration of XR unit 204. FIG. 4 is an exemplary representation of just one embodiment, and it is to be understood that some illustrated elements might be omitted and others added within the scope of this disclosure. In the embodiment of FIG. 4, XR unit 204 may directly or indirectly access a bus 400 (or other communication mechanism) that interconnects subsystems and components for transferring information within XR unit 204. For example, bus 400 may interconnect a memory interface 410, a network interface 420, an input interface 430, a power source 440, an output interface 450, a processing device 460, a sensors interface 470, and a database 480. Memory interface 410, shown in FIG. 4, is assumed to have similar functionality as the functionality of memory interface 310 described above in detail. Memory interface 410 may be used to access a software product and/or data stored on a non-transitory computer-readable medium or on memory devices, such as memory device 411.
Memory device 411 may contain software modules to execute processes consistent with the present disclosure. In particular, memory device 411 may include an input determination module 412, an output determination module 413, a sensors communication module 414, a virtual content determination module 415, a virtual content communication module 416, and a database access module 417. Modules 412-417 may contain software instructions for execution by at least one processor (e.g., processing device 460) associated with XR unit 204. Input determination module 412, output determination module 413, sensors communication module 414, virtual content determination module 415, virtual content communication module 416, and database access module 417 may cooperate to perform various operations. For example, input determination module 412 may determine User Interface (UI) input received from input unit 202. At the same time, sensors communication module 414 may receive data from different sensors to determine a status of user 100. Virtual content determination module 415 may determine the virtual content to display based on received input and the determined status of user 100. Virtual content communication module 416 may retrieve virtual content not determined by virtual content determination module 415. The retrieval of the virtual content may be from database 380, database 480, mobile communications device 206, or from remote processing unit 208. Based on the output of virtual content determination module 415, output determination module 413 may cause a change in the virtual content displayed to user 100 by projector 454. In some embodiments, input determination module 412 may regulate the operation of input interface 430 in order to receive gesture input 431, virtual input 432, audio input 433, and UI input 434. Consistent with the present disclosure, input determination module 412 may concurrently receive different types of input data. In one embodiment, input determination module 412 may apply different rules based on the detected type of input. For example, gesture input may have precedence over virtual input. In some embodiments, output determination module 413 may regulate the operation of output interface 450 in order to generate output using light indicators 451, display 452, speakers 453, and projector 454. In one embodiment, light indicators 451 may include a light indicator that shows the status of the wearable extended reality appliance. For example, the light indicator may display green light when wearable extended reality appliance 110 is connected to input unit 202, and blink when wearable extended reality appliance 110 has low battery. In another embodiment, display 452 may be used to display operational information. In another embodiment, speakers 453 may include a bone conduction headphone used to output audio to user 100. In another embodiment, projector 454 may present virtual content to user 100. The operations of a sensors communication module, a virtual content determination module, a virtual content communication module, and a database access module are described above with reference to FIG. 3, details of which are not repeated herein. Modules 412-417 may be implemented in software, hardware, firmware, a mix of any of those, or the like. Network interface 420, shown in FIG. 4, is assumed to have similar functionality as the functionality of network interface 320, described above in detail. The specific design and implementation of network interface 420 may depend on the communications network(s) over which XR unit 204 is intended to operate.
For example, in some embodiments, XR unit 204 is configured to be selectively connectable by wire to input unit 202. When connected by wire, network interface 420 may enable communications with input unit 202; and when not connected by wire, network interface 420 may enable communications with mobile communications device 206. Input interface 430, shown in FIG. 4, is assumed to have similar functionality as the functionality of input interface 330 described above in detail. In this case, input interface 430 may communicate with an image sensor to obtain gesture input 431 (e.g., a finger of user 100 pointing to a virtual object), communicate with other XR units 204 to obtain virtual input 432 (e.g., a virtual object shared with XR unit 204 or a gesture of an avatar detected in the virtual environment), communicate with a microphone to obtain audio input 433 (e.g., voice commands), and communicate with input unit 202 to obtain UI input 434 (e.g., virtual content determined by virtual content determination module 315). Power source 440, shown in FIG. 4, is assumed to have similar functionality as the functionality of power source 340 described above, except that it provides electrical energy to power XR unit 204. In some embodiments, power source 440 may be charged by power source 340. For example, power source 440 may be wirelessly charged when XR unit 204 is placed on or in proximity to input unit 202. Output interface 450, shown in FIG. 4, is assumed to have similar functionality as the functionality of output interface 350 described above in detail. In this case, output interface 450 may cause output from light indicators 451, display 452, speakers 453, and projector 454. Projector 454 may be any device, apparatus, instrument, or the like capable of projecting (or directing) light in order to display virtual content onto a surface. The surface may be part of XR unit 204, part of an eye of user 100, or part of an object in proximity to user 100. In one embodiment, projector 454 may include a lighting unit that concentrates light within a limited solid angle by means of one or more mirrors and lenses, and provides a high value of luminous intensity in a defined direction. Processing device 460, shown in FIG. 4, is assumed to have similar functionality as the functionality of processing device 360 described above in detail. When XR unit 204 is connected to input unit 202, processing device 460 may work together with processing device 360. Specifically, processing device 460 may implement virtual machine technologies or other technologies to provide the ability to execute, control, run, manipulate, store, etc., multiple software processes, applications, programs, etc. It is appreciated that other types of processor arrangements could be implemented to provide the capabilities disclosed herein. Sensors interface 470, shown in FIG. 4, is assumed to have similar functionality as the functionality of sensors interface 370 described above in detail. Specifically, sensors interface 470 may communicate with audio sensor 471, image sensor 472, motion sensor 473, environmental sensor 474, and other sensors 475. The operations of an audio sensor, an image sensor, a motion sensor, an environmental sensor, and other sensors are described above with reference to FIG. 3, details of which are not repeated herein. It is appreciated that other types and combinations of sensors may be used to provide the capabilities disclosed herein. The components and arrangements shown in FIG. 4 are not intended to limit any embodiment.
As will be appreciated by a person skilled in the art having the benefit of this disclosure, numerous variations and/or modifications may be made to the depicted configuration of XR unit 204. For example, not all components may be essential for the operation of XR unit 204 in all cases. Any component may be located in any appropriate part of system 200, and the components may be rearranged into a variety of configurations while providing the functionality of various embodiments. For example, some XR units may not include all of the elements in XR unit 204 (e.g., wearable extended reality appliance 110 may not have light indicators 451). FIG. 5 is a block diagram of an exemplary configuration of remote processing unit 208. FIG. 5 is an exemplary representation of just one embodiment, and it is to be understood that some illustrated elements might be omitted and others added within the scope of this disclosure. In the embodiment of FIG. 5, remote processing unit 208 may include a server 210 that directly or indirectly accesses a bus 500 (or other communication mechanism) interconnecting subsystems and components for transferring information within server 210. For example, bus 500 may interconnect a memory interface 510, a network interface 520, a power source 540, a processing device 560, and a database 580. Remote processing unit 208 may also include one or more data structures, for example, data structures 212A, 212B, and 212C. Memory interface 510, shown in FIG. 5, is assumed to have similar functionality as the functionality of memory interface 310 described above in detail. Memory interface 510 may be used to access a software product and/or data stored on a non-transitory computer-readable medium or on other memory devices, such as memory devices 311, 411, 511, or data structures 212A, 212B, and 212C. Memory device 511 may contain software modules to execute processes consistent with the present disclosure. In particular, memory device 511 may include a shared memory module 512, a node registration module 513, a load balancing module 514, one or more computational nodes 515, an internal communication module 516, an external communication module 517, and a database access module (not shown). Modules 512-517 may contain software instructions for execution by at least one processor (e.g., processing device 560) associated with remote processing unit 208. Shared memory module 512, node registration module 513, load balancing module 514, computational nodes 515, and external communication module 517 may cooperate to perform various operations. Shared memory module 512 may allow information sharing between remote processing unit 208 and other components of system 200. In some embodiments, shared memory module 512 may be configured to enable processing device 560 (and other processing devices in system 200) to access, retrieve, and store data. For example, using shared memory module 512, processing device 560 may perform at least one of: executing software programs stored on memory device 511, database 580, or data structures 212A-C; storing information in memory device 511, database 580, or data structures 212A-C; or retrieving information from memory device 511, database 580, or data structures 212A-C. Node registration module 513 may be configured to track the availability of one or more computational nodes 515. In some examples, node registration module 513 may be implemented as: a software program, such as a software program executed by one or more computational nodes 515, a hardware solution, or a combined software and hardware solution.
In some implementations, node registration module 513 may communicate with one or more computational nodes 515, for example, using internal communication module 516. In some examples, one or more computational nodes 515 may notify node registration module 513 of their status, for example, by sending messages: at startup, at shutdown, at constant intervals, at selected times, in response to queries received from node registration module 513, or at any other determined times. In some examples, node registration module 513 may query about the status of one or more computational nodes 515, for example, by sending messages: at startup, at constant intervals, at selected times, or at any other determined times. Load balancing module 514 may be configured to divide the workload among one or more computational nodes 515. In some examples, load balancing module 514 may be implemented as: a software program, such as a software program executed by one or more of the computational nodes 515, a hardware solution, or a combined software and hardware solution. In some implementations, load balancing module 514 may interact with node registration module 513 in order to obtain information regarding the availability of one or more computational nodes 515. In some implementations, load balancing module 514 may communicate with one or more computational nodes 515, for example, using internal communication module 516. In some examples, one or more computational nodes 515 may notify load balancing module 514 of their status, for example, by sending messages: at startup, at shutdown, at constant intervals, at selected times, in response to queries received from load balancing module 514, or at any other determined times. In some examples, load balancing module 514 may query about the status of one or more computational nodes 515, for example, by sending messages: at startup, at constant intervals, at pre-selected times, or at any other determined times. Internal communication module 516 may be configured to receive and/or to transmit information from one or more components of remote processing unit 208. For example, control signals and/or synchronization signals may be sent and/or received through internal communication module 516. In one embodiment, input information for computer programs, output information of computer programs, and/or intermediate information of computer programs may be sent and/or received through internal communication module 516. In another embodiment, information received through internal communication module 516 may be stored in memory device 511, in database 580, in data structures 212A-C, or in another memory device in system 200. For example, information retrieved from data structure 212A may be transmitted using internal communication module 516. In another example, input data may be received using internal communication module 516 and stored in data structure 212B. External communication module 517 may be configured to receive and/or to transmit information from one or more components of system 200. For example, control signals may be sent and/or received through external communication module 517. In one embodiment, information received through external communication module 517 may be stored in memory device 511, in database 580, in data structures 212A-C, and/or in any memory device in system 200. In another embodiment, information retrieved from any of data structures 212A-C may be transmitted using external communication module 517 to XR unit 204.
In another embodiment, input data may be transmitted and/or received using external communication module 517. Examples of such input data may include data received from input unit 202, information captured from the environment of user 100 using one or more sensors (e.g., audio sensor 471, image sensor 472, motion sensor 473, environmental sensor 474, other sensors 475), and more. In some embodiments, aspects of modules 512-517 may be implemented in hardware, in software (including in one or more signal processing and/or application specific integrated circuits), in firmware, or in any combination thereof, executable by one or more processors, alone, or in various combinations with each other. Specifically, modules 512-517 may be configured to interact with each other and/or other modules of system 200 to perform functions consistent with embodiments of the present disclosure. Memory device 511 may include additional modules and instructions or fewer modules and instructions. Network interface 520, power source 540, processing device 560, and database 580, shown in FIG. 5, are assumed to have similar functionality as the functionality of the similar elements described above with reference to FIGS. 3 and 4. The specific design and implementation of the above-mentioned components may vary based on the implementation of system 200. In addition, remote processing unit 208 may include more or fewer components. For example, remote processing unit 208 may include an input interface configured to receive direct input from one or more input devices. Consistent with the present disclosure, a processing device of system 200 (e.g., a processor within mobile communications device 206, a processor within a server 210, a processor within a wearable extended reality appliance, such as wearable extended reality appliance 110, and/or a processor within an input device associated with wearable extended reality appliance 110, such as keyboard 104) may use machine learning algorithms in order to implement any of the methods disclosed herein. In some embodiments, machine learning algorithms (also referred to as machine learning models in the present disclosure) may be trained using training examples, for example in the cases described below. Some non-limiting examples of such machine learning algorithms may include classification algorithms, data regression algorithms, image segmentation algorithms, visual detection algorithms (such as object detectors, face detectors, person detectors, motion detectors, edge detectors, etc.), visual recognition algorithms (such as face recognition, person recognition, object recognition, etc.), speech recognition algorithms, mathematical embedding algorithms, natural language processing algorithms, support vector machines, random forests, nearest neighbors algorithms, deep learning algorithms, artificial neural network algorithms, convolutional neural network algorithms, recurrent neural network algorithms, linear machine learning models, non-linear machine learning models, ensemble algorithms, and more. For example, a trained machine learning algorithm may comprise an inference model, such as a predictive model, a classification model, a data regression model, a clustering model, a segmentation model, an artificial neural network (such as a deep neural network, a convolutional neural network, a recurrent neural network, etc.), a random forest, a support vector machine, and so forth. In some examples, the training examples may include example inputs together with the desired outputs corresponding to the example inputs.
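By way of illustration only (the following sketch is not part of the disclosed embodiments; the library choice, feature values, and labels are invented purely for demonstration), the pairing of example inputs with desired outputs, and the training of one of the above-mentioned algorithms on those pairs, might look as follows:

```python
# Illustrative sketch only: training examples as (input, desired output)
# pairs, using scikit-learn's RandomForestClassifier as one example of the
# many machine learning algorithms mentioned above. All data is invented.
from sklearn.ensemble import RandomForestClassifier

# Example inputs (simple feature vectors) with desired outputs (labels).
example_inputs = [[0.1, 0.9], [0.8, 0.2], [0.2, 0.8], [0.9, 0.1]]
desired_outputs = ["virtual", "physical", "virtual", "physical"]

# Training on the examples generates a trained machine learning algorithm.
model = RandomForestClassifier(n_estimators=10, random_state=0)
model.fit(example_inputs, desired_outputs)

# The trained algorithm can then estimate outputs for inputs that were
# not included in the training examples.
print(model.predict([[0.3, 0.7]]))
```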
Further, in some examples, training machine learning algorithms using the training examples may generate a trained machine learning algorithm, and the trained machine learning algorithm may be used to estimate outputs for inputs not included in the training examples. In some examples, engineers, scientists, processes, and machines that train machine learning algorithms may further use validation examples and/or test examples. For example, validation examples and/or test examples may include example inputs together with the desired outputs corresponding to the example inputs; a trained machine learning algorithm and/or an intermediately trained machine learning algorithm may be used to estimate outputs for the example inputs of the validation examples and/or test examples; the estimated outputs may be compared to the corresponding desired outputs; and the trained machine learning algorithm and/or the intermediately trained machine learning algorithm may be evaluated based on a result of the comparison. In some examples, a machine learning algorithm may have parameters and hyperparameters, where the hyperparameters may be set manually by a person or automatically by a process external to the machine learning algorithm (such as a hyperparameter search algorithm), and the parameters of the machine learning algorithm may be set by the machine learning algorithm based on the training examples. In some implementations, the hyperparameters may be set based on the training examples and the validation examples, and the parameters may be set based on the training examples and the selected hyperparameters. For example, given the hyperparameters, the parameters may be conditionally independent of the validation examples. In some embodiments, trained machine learning algorithms (also referred to as machine learning models and trained machine learning models in the present disclosure) may be used to analyze inputs and generate outputs, for example in the cases described below. In some examples, a trained machine learning algorithm may be used as an inference model that, when provided with an input, generates an inferred output. For example, a trained machine learning algorithm may include a classification algorithm, the input may include a sample, and the inferred output may include a classification of the sample (such as an inferred label, an inferred tag, and so forth). In another example, a trained machine learning algorithm may include a regression model, the input may include a sample, and the inferred output may include an inferred value corresponding to the sample. In yet another example, a trained machine learning algorithm may include a clustering model, the input may include a sample, and the inferred output may include an assignment of the sample to at least one cluster. In an additional example, a trained machine learning algorithm may include a classification algorithm, the input may include an image, and the inferred output may include a classification of an item depicted in the image. In yet another example, a trained machine learning algorithm may include a regression model, the input may include an image, and the inferred output may include an inferred value corresponding to an item depicted in the image (such as an estimated property of the item, for example, its size, its volume, the age of a person depicted in the image, the distance from an item depicted in the image, and so forth).
In an additional example, a trained machine learning algorithm may include an image segmentation model, the input may include an image, and the inferred output may include a segmentation of the image. In yet another example, a trained machine learning algorithm may include an object detector, the input may include an image, and the inferred output may include one or more detected objects in the image and/or one or more locations of objects within the image. In some examples, the trained machine learning algorithm may include one or more formulas and/or one or more functions and/or one or more rules and/or one or more procedures, the input may be used as input to the formulas and/or functions and/or rules and/or procedures, and the inferred output may be based on the outputs of the formulas and/or functions and/or rules and/or procedures (for example, selecting one of the outputs of the formulas and/or functions and/or rules and/or procedures, using a statistical measure of the outputs of the formulas and/or functions and/or rules and/or procedures, and so forth). Consistent with the present disclosure, a processing device of system 200 may analyze image data captured by an image sensor (e.g., image sensor 372, image sensor 472, or any other image sensor) in order to implement any of the methods disclosed herein. In some embodiments, analyzing the image data may comprise analyzing the image data to obtain preprocessed image data, and subsequently analyzing the image data and/or the preprocessed image data to obtain the desired outcome. One of ordinary skill in the art will recognize that the following are examples, and that the image data may be preprocessed using other kinds of preprocessing methods. In some examples, the image data may be preprocessed by transforming the image data using a transformation function to obtain transformed image data, and the preprocessed image data may comprise the transformed image data. For example, the transformed image data may comprise one or more convolutions of the image data. For example, the transformation function may comprise one or more image filters, such as low-pass filters, high-pass filters, band-pass filters, all-pass filters, and so forth. In some examples, the transformation function may comprise a nonlinear function. In some examples, the image data may be preprocessed by smoothing at least parts of the image data, for example using Gaussian convolution, using a median filter, and so forth. In some examples, the image data may be preprocessed to obtain a different representation of the image data. For example, the preprocessed image data may comprise: a representation of at least part of the image data in a frequency domain; a Discrete Fourier Transform of at least part of the image data; a Discrete Wavelet Transform of at least part of the image data; a time/frequency representation of at least part of the image data; a representation of at least part of the image data in a lower dimension; a lossy representation of at least part of the image data; a lossless representation of at least part of the image data; a time-ordered series of any of the above; any combination of the above; and so forth. In some examples, the image data may be preprocessed to extract edges, and the preprocessed image data may comprise information based on and/or related to the extracted edges. In some examples, the image data may be preprocessed to extract image features from the image data.
Some non-limiting examples of such image features may comprise information based on and/or related to: edges; corners; blobs; ridges; Scale Invariant Feature Transform (SIFT) features; temporal features; and so forth. In some examples, analyzing the image data may include calculating at least one convolution of at least a portion of the image data, and using the calculated at least one convolution to calculate at least one resulting value and/or to make determinations, identifications, recognitions, classifications, and so forth. Consistent with another aspect of the disclosure, a processing device of system 200 may analyze image data in order to implement any of the methods disclosed herein. In some embodiments, analyzing the image data may comprise analyzing the image data and/or the preprocessed image data using one or more rules, functions, procedures, artificial neural networks, object detection algorithms, face detection algorithms, visual event detection algorithms, action detection algorithms, motion detection algorithms, background subtraction algorithms, inference models, and so forth. Some non-limiting examples of such inference models may include: an inference model preprogrammed manually; a classification model; a regression model; a result of training algorithms, such as machine learning algorithms and/or deep learning algorithms, on training examples, where the training examples may include examples of data instances, and in some cases, a data instance may be labeled with a corresponding desired label and/or result; and more. In some embodiments, analyzing image data (for example, by the methods, steps, and modules described herein) may comprise analyzing pixels, voxels, point clouds, range data, etc. included in the image data. A convolution may include a convolution of any dimension. A one-dimensional convolution is a function that transforms an original sequence of numbers into a transformed sequence of numbers. The one-dimensional convolution may be defined by a sequence of scalars. Each particular value in the transformed sequence of numbers may be determined by calculating a linear combination of values in a subsequence of the original sequence of numbers corresponding to the particular value. A result value of a calculated convolution may include any value in the transformed sequence of numbers. Likewise, an n-dimensional convolution is a function that transforms an original n-dimensional array into a transformed array. The n-dimensional convolution may be defined by an n-dimensional array of scalars (known as the kernel of the n-dimensional convolution). Each particular value in the transformed array may be determined by calculating a linear combination of values in an n-dimensional region of the original array corresponding to the particular value. A result value of a calculated convolution may include any value in the transformed array. In some examples, an image may comprise one or more components (such as color components, a depth component, etc.), and each component may include a two-dimensional array of pixel values. In one example, calculating a convolution of an image may include calculating a two-dimensional convolution on one or more components of the image. In another example, calculating a convolution of an image may include stacking arrays from different components to create a three-dimensional array, and calculating a three-dimensional convolution on the resulting three-dimensional array.
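As a minimal illustrative sketch of the convolution operations just described (assuming NumPy and SciPy as the array libraries; the sequences and kernels below are arbitrary values chosen only for demonstration):

```python
# Illustrative sketch only: one-dimensional and two-dimensional convolutions
# as linear combinations over subsequences/regions, per the description above.
import numpy as np
from scipy.signal import convolve2d

# One-dimensional convolution: transforms an original sequence of numbers
# into a transformed sequence, defined by a sequence of scalars (the kernel).
original = np.array([1.0, 2.0, 3.0, 4.0])
kernel_1d = np.array([0.5, 0.5])                # the defining scalars
transformed = np.convolve(original, kernel_1d, mode="valid")
print(transformed)  # each value is a linear combination of a subsequence

# Two-dimensional convolution on one component of an image, where the
# component is a two-dimensional array of pixel values.
component = np.arange(16, dtype=float).reshape(4, 4)
kernel_2d = np.ones((3, 3)) / 9.0               # an illustrative smoothing kernel
result = convolve2d(component, kernel_2d, mode="valid")
print(result)       # any value here is a result value of the convolution
```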
In some examples, a video may comprise one or more components (such as color components, a depth component, etc.), and each component may include a three-dimensional array of pixel values (with two spatial axes and one temporal axis). In one example, calculating a convolution of a video may include calculating a three-dimensional convolution on one or more components of the video. In another example, calculating a convolution of a video may include stacking arrays from different components to create a four-dimensional array, and calculating a four-dimensional convolution on the resulting four-dimensional array. The following detailed description refers to the accompanying drawings. Wherever possible, the same reference numbers are used in the drawings and the following description to refer to the same or similar parts. While several illustrative embodiments are described herein, modifications, adaptations, and other implementations are possible. For example, substitutions, additions, or modifications may be made to the components illustrated in the drawings, and the illustrative methods described herein may be modified by substituting, reordering, removing, or adding steps to the disclosed methods. Accordingly, the following detailed description is not limited to the disclosed embodiments and examples, but is inclusive of the general principles described herein and illustrated in the figures in addition to the general principles encompassed by the appended claims. The present disclosure is directed to systems and methods for providing users with an extended reality environment. The term “extended reality environment,” which may also be referred to as “extended reality,” “extended reality space,” or “extended environment,” refers to all types of real-and-virtual combined environments and human-machine interactions at least partially generated by computer technology. The extended reality environment may be a completely simulated virtual environment or a combined real-and-virtual environment that a user may perceive from different perspectives. In some examples, the user may interact with elements of the extended reality environment. One non-limiting example of an extended reality environment may be a virtual reality environment, also known as “virtual reality” or a “virtual environment.” An immersive virtual reality environment may be a simulated non-physical environment which provides the user with the perception of being present in the virtual environment. Another non-limiting example of an extended reality environment may be an augmented reality environment, also known as “augmented reality” or an “augmented environment.” An augmented reality environment may involve a live direct or indirect view of a physical real-world environment that is enhanced with virtual computer-generated perceptual information, such as virtual objects that the user may interact with. Another non-limiting example of an extended reality environment is a mixed reality environment, also known as “mixed reality” or a “mixed environment.” A mixed reality environment may be a hybrid of physical real-world and virtual environments, in which physical and virtual objects may coexist and interact in real time. In some examples, both augmented reality environments and mixed reality environments may include a combination of real and virtual worlds, real-time interactions, and accurate 3D registration of virtual and real objects.
In some examples, both augmented reality environments and mixed reality environments may include constructive overlaid sensory information that may be added to the physical environment. In other examples, both augmented reality environments and mixed reality environments may include destructive virtual content that may mask at least part of the physical environment. In some embodiments, the systems and methods may provide the extended reality environment using an extended reality appliance. The term extended reality appliance may include any type of device or system that enables a user to perceive and/or interact with an extended reality environment. The extended reality appliance may enable the user to perceive and/or interact with an extended reality environment through one or more sensory modalities. Some non-limiting examples of such sensory modalities may include visual, auditory, haptic, somatosensory, and olfactory. One example of the extended reality appliance is a virtual reality appliance that enables the user to perceive and/or interact with a virtual reality environment. Another example of the extended reality appliance is an augmented reality appliance that enables the user to perceive and/or interact with an augmented reality environment. Yet another example of the extended reality appliance is a mixed reality appliance that enables the user to perceive and/or interact with a mixed reality environment. Consistent with one aspect of the disclosure, the extended reality appliance may be a wearable device, such as a head-mounted device, for example, smart glasses, smart contact lenses, headsets, or any other device worn by a human for purposes of presenting an extended reality to the human. Other extended reality appliances may include a holographic projector or any other device or system capable of providing an augmented reality (AR), virtual reality (VR), mixed reality (MR), or any immersive experience. Typical components of wearable extended reality appliances may include at least one of: a stereoscopic head-mounted display, a stereoscopic head-mounted sound system, head-motion tracking sensors (such as gyroscopes, accelerometers, magnetometers, image sensors, structured light sensors, etc.), head-mounted projectors, eye-tracking sensors, and additional components described below. Consistent with another aspect of the disclosure, the extended reality appliance may be a non-wearable extended reality appliance. Specifically, the non-wearable extended reality appliance may include multi-projected environment appliances. In some embodiments, an extended reality appliance may be configured to change the viewing perspective of the extended reality environment in response to movements of the user, and in response to head movements of the user in particular. In one example, a wearable extended reality appliance may change the field-of-view of the extended reality environment in response to a change of the head pose of the user, such as by changing the spatial orientation without changing the spatial position of the user in the extended reality environment. In another example, a non-wearable extended reality appliance may change the spatial position of the user in the extended reality environment in response to a change in the position of the user in the real world, for example, by changing the spatial position of the user in the extended reality environment without changing the direction of the field-of-view with respect to the spatial position.
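By way of a simplified sketch only (the Viewpoint structure and the additive update rule below are assumptions invented for illustration, not the disclosed implementation), the wearable-appliance behavior described above, in which a head-pose change alters the spatial orientation of the field-of-view but not the spatial position, might be modeled as follows:

```python
# Illustrative sketch only: a head-pose change updates the spatial
# orientation of the field-of-view while the spatial position of the
# user in the extended reality environment remains unchanged.
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Viewpoint:
    x: float       # spatial position (unchanged by head movements)
    y: float
    z: float
    yaw: float     # spatial orientation, in degrees
    pitch: float
    roll: float

def apply_head_pose_change(view, d_yaw, d_pitch, d_roll):
    # Only the orientation fields change; x, y, and z are left untouched.
    return replace(view, yaw=view.yaw + d_yaw,
                   pitch=view.pitch + d_pitch,
                   roll=view.roll + d_roll)

view = Viewpoint(x=0.0, y=1.6, z=0.0, yaw=0.0, pitch=0.0, roll=0.0)
print(apply_head_pose_change(view, d_yaw=15.0, d_pitch=-5.0, d_roll=0.0))
```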
According to some embodiments, an extended reality appliance may include a digital communication device configured to perform at least one of: receiving virtual content data configured to enable a presentation of the virtual content, transmitting virtual content for sharing with at least one external device, receiving contextual data from at least one external device, transmitting contextual data to at least one external device, transmitting usage data indicative of usage of the extended reality appliance, and transmitting data based on information captured using at least one sensor included in the extended reality appliance. In additional embodiments, the extended reality appliance may include memory for storing at least one of: virtual data configured to enable a presentation of virtual content, contextual data, usage data indicative of usage of the extended reality appliance, sensor data based on information captured using at least one sensor included in the wearable extended reality appliance, software instructions configured to cause a processing device to present the virtual content, software instructions configured to cause a processing device to collect and analyze the contextual data, software instructions configured to cause a processing device to collect and analyze the usage data, and software instructions configured to cause a processing device to collect and analyze the sensor data. In additional embodiments, the extended reality appliance may include a processing device configured to perform at least one of: rendering virtual content, collecting and analyzing contextual data, collecting and analyzing usage data, and collecting and analyzing sensor data. In additional embodiments, the extended reality appliance may include one or more sensors. The one or more sensors may include one or more image sensors (e.g., configured to capture images and/or videos of a user of the appliance or of an environment of the user), one or more motion sensors (such as an accelerometer, a gyroscope, a magnetometer, etc.), one or more positioning sensors (such as a GPS sensor, an outdoor positioning sensor, an indoor positioning sensor, etc.), one or more temperature sensors (e.g., configured to measure the temperature of at least part of the appliance and/or of the environment), one or more contact sensors, one or more proximity sensors (e.g., configured to detect whether the appliance is currently worn), one or more electrical impedance sensors (e.g., configured to measure electrical impedance of the user), and one or more eye tracking sensors, such as gaze detectors, optical trackers, electric potential trackers (e.g., electrooculogram (EOG) sensors), video-based eye-trackers, infra-red/near infra-red sensors, passive light sensors, or any other technology capable of determining where a human is looking or gazing. In some embodiments, the systems and methods may use an input device to interact with the extended reality appliance. The term input device may include any physical device configured to receive input from a user or an environment of the user, and to provide the data to a computational device. The data provided to the computational device may be in a digital format and/or in an analog format. In one embodiment, the input device may store the input received from the user in a memory device accessible by a processing device, and the processing device may access the stored data for analysis.
In another embodiment, the input device may provide the data directly to a processing device, for example, over a bus or over another communication system configured to transfer data from the input device to the processing device. In some examples, the input received by the input device may include key presses, tactile input data, motion data, position data, gesture-based input data, direction data, or any other data supplied for computation. Some examples of the input device may include a button, a key, a keyboard, a computer mouse, a touchpad, a touchscreen, a joystick, or another mechanism from which input may be received. Another example of an input device may include an integrated computational interface device that includes at least one physical component for receiving input from a user. The integrated computational interface device may include at least a memory, a processing device, and the at least one physical component for receiving input from a user. In one example, the integrated computational interface device may further include a digital network interface that enables digital communication with other computing devices. In one example, the integrated computational interface device may further include a physical component for outputting information to the user. In some examples, all components of the integrated computational interface device may be included in a single housing, while in other examples the components may be distributed among two or more housings. Some non-limiting examples of physical components for receiving input from users that may be included in the integrated computational interface device may include at least one of a button, a key, a keyboard, a touchpad, a touchscreen, a joystick, or any other mechanism or sensor from which computational information may be received. Some non-limiting examples of physical components for outputting information to users may include at least one of a light indicator (such as an LED indicator), a screen, a touchscreen, a beeper, an audio speaker, or any other audio, video, or haptic device that provides human-perceptible outputs. In some embodiments, image data may be captured using one or more image sensors. In some examples, the image sensors may be included in the extended reality appliance, in a wearable device, in the wearable extended reality device, in the input device, in an environment of a user, and so forth. In some examples, the image data may be read from memory, may be received from an external device, may be generated (for example, using a generative model), and so forth. Some non-limiting examples of image data may include images, grayscale images, color images, 2D images, 3D images, videos, 2D videos, 3D videos, frames, footage, data derived from other image data, and so forth. In some examples, the image data may be encoded in any analog or digital format. Some non-limiting examples of such formats may include raw formats, compressed formats, uncompressed formats, lossy formats, lossless formats, JPEG, GIF, PNG, TIFF, BMP, NTSC, PAL, SECAM, MPEG, MPEG-4 Part 14, MOV, WMV, FLV, AVI, AVCHD, WebM, MKV, and so forth. In some embodiments, the extended reality appliance may receive digital signals, for example, from the input device. The term digital signals may refer to a series of digital values that are discrete in time. The digital signals may represent, for example, sensor data, textual data, voice data, video data, virtual data, or any other form of data that provides perceptible information.
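As a purely illustrative sketch (the field layout and values below are invented; no particular signal format is implied by the disclosure), sensor data might be represented as such a series of discrete digital values:

```python
# Illustrative sketch only: representing sensor data as a series of discrete
# digital values. The field layout here is an invented example, not a format
# defined by this disclosure.
import struct

# An invented sample: a timestamp plus three motion-sensor readings.
timestamp_ms = 1024
accel_x, accel_y, accel_z = 0.01, -0.98, 0.12

# Pack into discrete digital values (bytes) for transmission to the appliance.
digital_signal = struct.pack("<Ifff", timestamp_ms, accel_x, accel_y, accel_z)

# The receiver recovers the original values from the digital signal.
print(struct.unpack("<Ifff", digital_signal))
```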
Consistent with the present disclosure, the digital signals may be configured to cause the extended reality appliance to present virtual content. In one embodiment, the virtual content may be presented in a selected orientation. In this embodiment, the digital signals may indicate a position and an angle of a viewpoint in an environment, such as an extended reality environment. Specifically, the digital signals may include an encoding of the position and angle in six degree-of-freedom coordinates (e.g., forward/back, up/down, left/right, yaw, pitch, and roll). In another embodiment, the digital signals may include an encoding of the position as three-dimensional coordinates (e.g., x, y, and z), and an encoding of the angle as a vector originating from the encoded position. Specifically, the digital signals may indicate the orientation and the angle of the presented virtual content in absolute coordinates of the environment, for example, by encoding the yaw, pitch, and roll of the virtual content with respect to a standard default angle. In another embodiment, the digital signals may indicate the orientation and the angle of the presented virtual content with respect to a viewpoint of another object (e.g., a virtual object, a physical object, etc.), for example, by encoding the yaw, pitch, and roll of the virtual content with respect to a direction corresponding to the viewpoint or to a direction corresponding to the other object. In another embodiment, such digital signals may include one or more projections of the virtual content, for example, in a format ready for presentation (e.g., image, video, etc.). For example, each such projection may correspond to a particular orientation or a particular angle. In another embodiment, the digital signals may include a representation of virtual content, for example, by encoding objects in a three-dimensional array of voxels, in a polygon mesh, or in any other format in which virtual content may be presented. In some embodiments, the digital signals may be configured to cause the extended reality appliance to present virtual content. The term virtual content may include any type of data representation that may be displayed by the extended reality appliance to the user. The virtual content may include a virtual object, inanimate virtual content, animate virtual content configured to change over time or in response to triggers, virtual two-dimensional content, virtual three-dimensional content, a virtual overlay over a portion of a physical environment or over a physical object, a virtual addition to a physical environment or to a physical object, virtual promotional content, a virtual representation of a physical object, a virtual representation of a physical environment, a virtual document, a virtual character or persona, a virtual computer screen, a virtual widget, or any other format for displaying information virtually. Consistent with the present disclosure, the virtual content may include any visual presentation rendered by a computer or a processing device. In one embodiment, the virtual content may include a virtual object that is a visual presentation rendered by a computer in a confined region and configured to represent an object of a particular type (such as an inanimate virtual object, an animate virtual object, virtual furniture, a virtual decorative object, a virtual widget, or another virtual representation).
The rendered visual presentation may change to reflect changes to a status of the object or changes in the viewing angle of the object, for example, in a way that mimics changes in the appearance of physical objects. In another embodiment, the virtual content may include a virtual display (also referred to as a “virtual display screen” or a “virtual screen” herein), such as a virtual computer screen, a virtual tablet screen, or a virtual smartphone screen, configured to display information generated by an operating system, in which the operating system may be configured to receive textual data from a physical keyboard and/or a virtual keyboard and to cause a display of the textual content in the virtual display screen. In one example, illustrated inFIG.1, the virtual content may include a virtual environment that includes a virtual computer screen and a plurality of virtual objects. In some examples, a virtual display may be a virtual object mimicking and/or extending the functionality of a physical display screen. For example, the virtual display may be presented in an extended reality environment (such as a mixed reality environment, an augmented reality environment, a virtual reality environment, etc.), using an extended reality appliance. In one example, a virtual display may present content produced by a regular operating system that may be equally presented on a physical display screen. In one example, a textual content entered using a keyboard (for example, using a physical keyboard, using a virtual keyboard, etc.) may be presented on a virtual display in real time as the textual content is typed. In one example, a virtual cursor may be presented on a virtual display, and the virtual cursor may be controlled by a pointing device (such as a physical pointing device, a virtual pointing device, a computer mouse, a joystick, a touchpad, a physical touch controller, and so forth). In one example, one or more windows of a graphical user interface operating system may be presented on a virtual display. In another example, content presented on a virtual display may be interactive, that is, it may change in reaction to actions of users. In yet another example, a presentation of a virtual display may include a presentation of a screen frame, or may include no presentation of a screen frame. Some disclosed embodiments may include and/or access a data structure or a database. The terms data structure and database, consistent with the present disclosure, may include any collection of data values and relationships among them. The data may be stored linearly, horizontally, hierarchically, relationally, non-relationally, uni-dimensionally, multidimensionally, operationally, in an ordered manner, in an unordered manner, in an object-oriented manner, in a centralized manner, in a decentralized manner, in a distributed manner, in a custom manner, or in any manner enabling data access. By way of non-limiting examples, data structures may include an array, an associative array, a linked list, a binary tree, a balanced tree, a heap, a stack, a queue, a set, a hash table, a record, a tagged union, an Entity-Relationship model, a graph, a hypergraph, a matrix, a tensor, and so forth. For example, a data structure may include an XML database, an RDBMS database, an SQL database, or NoSQL alternatives for data storage/search, such as, for example, MongoDB, Redis, Couchbase, Datastax Enterprise Graph, Elastic Search, Splunk, Solr, Cassandra, Amazon DynamoDB, Scylla, HBase, and Neo4J.
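As a minimal, non-limiting sketch of one such arrangement (an associative array holding virtual-content records together with a relationship among them, written in Python; all identifiers below are hypothetical):

# A hash table (Python dict) of virtual-content records, plus a second
# associative array capturing a relationship among the records.
virtual_objects = {
    "screen-1": {"type": "virtual display", "position": (0.0, 1.2, -0.5)},
    "widget-1": {"type": "virtual widget", "position": (0.4, 1.2, -0.5)},
}
docked_to = {"widget-1": "screen-1"}  # widget-1 is docked beside screen-1

def docked_widgets(object_id: str) -> list:
    # Traverse the relationship to find records docked to a given object.
    return [k for k, v in docked_to.items() if v == object_id]

print(docked_widgets("screen-1"))  # -> ['widget-1']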
A data structure may be a component of the disclosed system or a remote computing component (e.g., a cloud-based data structure). Data in the data structure may be stored in contiguous or non-contiguous memory. Moreover, a data structure does not require information to be co-located. It may be distributed across multiple servers, for example, that may be owned or operated by the same or different entities. Thus, the term data structure in the singular is inclusive of plural data structures. In some embodiments, the system may determine the confidence level in received input or in any determined value. The term confidence level refers to any indication, numeric or otherwise, of a level (e.g., within a predetermined range) indicative of an amount of confidence the system has in determined data. For example, the confidence level may have a value between 1 and 10. Alternatively, the confidence level may be expressed as a percentage or any other numerical or non-numerical indication. In some cases, the system may compare the confidence level to a threshold. The term threshold may denote a reference value, a level, a point, or a range of values. In operation, when the confidence level of determined data exceeds the threshold (or is below it, depending on a particular use case), the system may follow a first course of action and, when the confidence level is below it (or above it, depending on a particular use case), the system may follow a second course of action. The value of the threshold may be predetermined for each type of examined object or may be dynamically selected based on different considerations. A minimal sketch of such a comparison is provided below, following the introduction ofFIG.1. Reference is now made toFIG.1, which illustrates a user that uses an example extended reality system consistent with embodiments of the present disclosure.FIG.1is an exemplary representation of just one embodiment, and it is to be understood that some illustrated elements might be omitted and others added within the scope of this disclosure. As shown, user100is sitting behind table102, which supports a keyboard104and a mouse106. Keyboard104is connected by wire108to a wearable extended reality appliance110that displays virtual content to user100. Alternatively or additionally to wire108, keyboard104may connect to wearable extended reality appliance110wirelessly. For illustration purposes, the wearable extended reality appliance is depicted as a pair of smart glasses, but, as described above, wearable extended reality appliance110may be any type of head-mounted device used for presenting an extended reality to user100. The virtual content displayed by wearable extended reality appliance110includes a virtual screen112(also referred to as a “virtual display screen” or a “virtual display” herein) and a plurality of virtual widgets114. Virtual widgets114A-114D are displayed next to virtual screen112and virtual widget114E is displayed on table102. User100may input text to a document116displayed in virtual screen112using keyboard104and may control virtual cursor118using mouse106. In one example, virtual cursor118may move anywhere within virtual screen112. In another example, virtual cursor118may move anywhere within virtual screen112and may also move to any one of virtual widgets114A-114D but not to virtual widget114E. In yet another example, virtual cursor118may move anywhere within virtual screen112and may also move to any one of virtual widgets114A-114E.
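Stepping briefly away from the scene ofFIG.1, the confidence-threshold comparison described above admits a compact sketch (in Python; illustrative only, with hypothetical names and the 1-to-10 scale borrowed from the example in the text):

def choose_course(confidence: float, threshold: float = 7.0) -> str:
    # Follow a first course of action when the confidence level exceeds
    # the threshold, and a second course otherwise; the polarity may be
    # reversed depending on the particular use case.
    if confidence > threshold:
        return "first course of action"   # e.g., act on the determined data
    return "second course of action"      # e.g., request additional input

The description of the example ofFIG.1now continues.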
In an additional example, virtual cursor118may move anywhere in the extended reality environment including virtual screen112and virtual widgets114A-114E. In yet another example, virtual cursor118may move on all available surfaces (i.e., virtual surfaces or physical surfaces) or only on selected surfaces in the extended reality environment. Alternatively or additionally, user100may interact with any one of virtual widgets114A-114E, or with selected virtual widgets, using hand gestures recognized by wearable extended reality appliance110. For example, virtual widget114E may be an interactive widget (e.g., a virtual slider controller) that may be operated with hand gestures. FIG.2illustrates an example of a system200that provides an extended reality (XR) experience to users, such as user100.FIG.2is an exemplary representation of just one embodiment, and it is to be understood that some illustrated elements might be omitted and others added within the scope of this disclosure. System200may be computer-based and may include computer system components, wearable appliances, workstations, tablets, handheld computing devices, memory devices, and/or internal network(s) connecting the components. System200may include or be connected to various network computing resources (e.g., servers, routers, switches, network connections, storage devices, etc.) for supporting services provided by system200. Consistent with the present disclosure, system200may include an input unit202, an XR unit204, a mobile communications device206, and a remote processing unit208. Remote processing unit208may include a server210coupled to one or more physical or virtual storage devices, such as a data structure212. System200may also include or be connected to a communications network214that facilitates communications and data exchange between different system components and the different entities associated with system200. Consistent with the present disclosure, input unit202may include one or more devices that may receive input from user100. In one embodiment, input unit202may include a textual input device, such as keyboard104. The textual input device may include all possible types of devices and mechanisms for inputting textual information to system200. Examples of textual input devices may include mechanical keyboards, membrane keyboards, flexible keyboards, QWERTY keyboards, Dvorak keyboards, Colemak keyboards, chorded keyboards, wireless keyboards, keypads, key-based control panels, or other arrays of control keys, vision input devices, or any other mechanism for inputting text, whether the mechanism is provided in physical form or is presented virtually. In one embodiment, input unit202may also include a pointing input device, such as mouse106. The pointing input device may include all possible types of devices and mechanisms for inputting two-dimensional or three-dimensional information to system200. In one example, two-dimensional input from the pointing input device may be used for interacting with virtual content presented via the XR unit204. Examples of pointing input devices may include a computer mouse, trackball, touchpad, trackpad, touchscreen, joystick, pointing stick, stylus, light pen, or any other physical or virtual input mechanism. In one embodiment, input unit202may also include a graphical input device, such as a touchscreen configured to detect contact, movement, or break of movement.
The graphical input device may use any of a plurality of touch sensitivity technologies, including, but not limited to, capacitive, resistive, infrared, and surface acoustic wave technologies as well as other proximity sensor arrays or other elements for determining one or more points of contact. In one embodiment, input unit202may also include one or more voice input devices, such as a microphone. The voice input device may include all possible types of devices and mechanisms for inputting voice data to facilitate voice-enabled functions, such as voice recognition, voice replication, digital recording, and telephony functions. In one embodiment, input unit202may also include one or more image input devices, such as an image sensor, configured to capture image data. In one embodiment, input unit202may also include one or more haptic gloves configured to capture hand motion and pose data. In one embodiment, input unit202may also include one or more proximity sensors configured to detect presence and/or movement of objects in a selected region near the sensors. In accordance with some embodiments, the system may include at least one sensor configured to detect and/or measure a property associated with the user, the user's action, or the user's environment. One example of the at least one sensor is sensor216included in input unit202. Sensor216may be a motion sensor, a touch sensor, a light sensor, an infrared sensor, an audio sensor, an image sensor, a proximity sensor, a positioning sensor, a gyroscope, a temperature sensor, a biometric sensor, or any other sensing device to facilitate related functionalities. Sensor216may be integrated with, or connected to, the input devices or it may be separated from the input devices. In one example, a thermometer may be included in mouse106to determine the body temperature of user100. In another example, a positioning sensor may be integrated with keyboard104to determine movement of user100relative to keyboard104. Such a positioning sensor may be implemented using one of the following technologies: Global Positioning System (GPS), GLObal NAvigation Satellite System (GLONASS), Galileo global navigation system, BeiDou navigation system, other Global Navigation Satellite Systems (GNSS), Indian Regional Navigation Satellite System (IRNSS), Local Positioning Systems (LPS), Real-Time Location Systems (RTLS), Indoor Positioning System (IPS), Wi-Fi-based positioning systems, cellular triangulation, image-based positioning technology, indoor positioning technology, outdoor positioning technology, or any other positioning technology. In accordance with some embodiments, the system may include one or more sensors for identifying a position and/or a movement of a physical device (such as a physical input device, a physical computing device, keyboard104, mouse106, wearable extended reality appliance110, and so forth). The one or more sensors may be included in the physical device or may be external to the physical device. In some examples, an image sensor external to the physical device (for example, an image sensor included in another physical device) may be used to capture image data of the physical device, and the image data may be analyzed to identify the position and/or the movement of the physical device, as sketched below.
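A minimal, non-limiting sketch of one such analysis, written in Python against the OpenCV library (the function name and parameter choices are assumptions for illustration; the disclosure is not limited to this particular algorithm):

import cv2
import numpy as np

def estimate_shift(prev_gray: np.ndarray, next_gray: np.ndarray) -> np.ndarray:
    # Track sparse corner features between two grayscale frames with
    # Lucas-Kanade optical flow and report the median displacement as an
    # estimate of the device's apparent movement in the image plane.
    corners = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                      qualityLevel=0.01, minDistance=7)
    if corners is None:
        return np.zeros(2)
    moved, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray,
                                                   corners, None)
    good = status.reshape(-1) == 1
    if not good.any():
        return np.zeros(2)
    return np.median((moved - corners).reshape(-1, 2)[good], axis=0)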
For example, the image data may be analyzed using a visual object tracking algorithm to identify the movement of the physical device, may be analyzed using a visual object detection algorithm to identify the position of the physical device (for example, relative to the image sensor, in a global coordinate system, etc.), and so forth. In some examples, an image sensor included in the physical device may be used to capture image data, and the image data may be analyzed to identify the position and/or the movement of the physical device. For example, the image data may be analyzed using visual odometry algorithms to identify the position of the physical device, may be analyzed using an ego-motion algorithm to identify movement of the physical device, and so forth. In some examples, a positioning sensor, such as an indoor positioning sensor or an outdoor positioning sensor, may be included in the physical device and may be used to determine the position of the physical device. In some examples, a motion sensor, such as an accelerometer or a gyroscope, may be included in the physical device and may be used to determine the motion of the physical device. In some examples, a physical device, such as a keyboard or a mouse, may be configured to be positioned on a physical surface. Such a physical device may include an optical mouse sensor (also known as a non-mechanical tracking engine) aimed towards the physical surface, and the output of the optical mouse sensor may be analyzed to determine movement of the physical device with respect to the physical surface. Consistent with the present disclosure, XR unit204may include a wearable extended reality appliance configured to present virtual content to user100. One example of the wearable extended reality appliance is wearable extended reality appliance110. Additional examples of wearable extended reality appliances may include a Virtual Reality (VR) device, an Augmented Reality (AR) device, a Mixed Reality (MR) device, or any other device capable of generating extended reality content. Some non-limiting examples of such devices may include Nreal Light, Magic Leap One, Varjo, Quest 1/2, Vive, and others. In some embodiments, XR unit204may present virtual content to user100. Generally, extended reality may include all real-and-virtual combined environments and human-machine interactions generated by computer technology and wearables. As mentioned above, the term “extended reality” (XR) refers to a superset which includes the entire spectrum from “the complete real” to “the complete virtual.” It includes representative forms such as augmented reality (AR), mixed reality (MR), virtual reality (VR), and the areas interpolated among them. Accordingly, it is noted that the terms “XR appliance,” “AR appliance,” “VR appliance,” and “MR appliance” may be used interchangeably herein and may refer to any device of the variety of appliances listed above. Consistent with the present disclosure, the system may exchange data with a variety of communication devices associated with users, for example, mobile communications device206. The term “communication device” is intended to include all possible types of devices capable of exchanging data using a digital communications network, an analog communication network, or any other communications network configured to convey data.
In some examples, the communication device may include a smartphone, a tablet, a smartwatch, a personal digital assistant, a desktop computer, a laptop computer, an IoT device, a dedicated terminal, a wearable communication device, and any other device that enables data communications. In some cases, mobile communications device206may supplement or replace input unit202. Specifically, mobile communications device206may be associated with a physical touch controller that may function as a pointing input device. Moreover, mobile communications device206may also, for example, be used to implement a virtual keyboard and replace the textual input device. For example, when user100steps away from table102and walks to the break room with his or her smart glasses, he or she may receive an email that requires a quick answer. In this case, the user may select to use his or her own smartwatch as the input device and to type the answer to the email while it is virtually presented by the smart glasses. Consistent with the present disclosure, embodiments of the system may involve the usage of a cloud server. The term “cloud server” refers to a computer platform that provides services via a network, such as the Internet. In the example embodiment illustrated inFIG.2, server210may use virtual machines that may not correspond to individual hardware. For example, computational and/or storage capabilities may be implemented by allocating appropriate portions of desirable computation/storage power from a scalable repository, such as a data center or a distributed computing environment. Specifically, in one embodiment, remote processing unit208may be used together with XR unit204to provide the virtual content to user100. In one example configuration, server210may be a cloud server that functions as the operating system (OS) of the wearable extended reality appliance. In one example, server210may implement the methods described herein using customized hard-wired logic, one or more Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs), firmware, and/or program logic which, in combination with the computer system, cause server210to be a special-purpose machine. In some embodiments, server210may access data structure212to determine, for example, virtual content to display to user100. Data structure212may utilize a volatile or non-volatile, magnetic, semiconductor, tape, optical, removable, non-removable, or other type of storage device or tangible or non-transitory computer-readable medium, or any medium or mechanism for storing information. Data structure212may be part of server210or separate from server210, as shown. When data structure212is not part of server210, server210may exchange data with data structure212via a communication link. Data structure212may include one or more memory devices that store data and instructions used to perform one or more features of the disclosed methods. In one embodiment, data structure212may include any of a plurality of suitable data structures, ranging from small data structures hosted on a workstation to large data structures distributed among data centers. Data structure212may also include any combination of one or more data structures controlled by memory controller devices (e.g., servers) or software. Consistent with the present disclosure, communications network214may be any type of network (including infrastructure) that supports communications, exchanges information, and/or facilitates the exchange of information between the components of a system.
For example, communications network214in system200may include, for example, a telephone network, an extranet, an intranet, the Internet, satellite communications, off-line communications, wireless communications, transponder communications, a Local Area Network (LAN), a wireless network (e.g., a Wi-Fi/802.11 network), a Wide Area Network (WAN), a Virtual Private Network (VPN), a digital communication network, an analog communication network, or any other mechanism or combinations of mechanisms that enable data transmission. The components and arrangements of system200shown inFIG.2are intended to be exemplary only and are not intended to limit the disclosed embodiments, as the system components used to implement the disclosed processes and features may vary. FIG.3is a block diagram of an exemplary configuration of input unit202.FIG.3is an exemplary representation of just one embodiment, and it is to be understood that some illustrated elements might be omitted and others added within the scope of this disclosure. In the embodiment ofFIG.3, input unit202may directly or indirectly access a bus300(or other communication mechanism) that interconnects subsystems and components for transferring information within input unit202. For example, bus300may interconnect a memory interface310, a network interface320, an input interface330, a power source340, an output interface350, a processing device360, a sensors interface370, and a database380. Memory interface310, shown inFIG.3, may be used to access a software product and/or data stored on a non-transitory computer-readable medium. Generally, a non-transitory computer-readable storage medium refers to any type of physical memory on which information or data readable by at least one processor can be stored. Examples include Random Access Memory (RAM), Read-Only Memory (ROM), volatile memory, nonvolatile memory, hard drives, CD ROMs, DVDs, flash drives, disks, any other optical data storage medium, any physical medium with patterns of holes, a PROM, an EPROM, a FLASH-EPROM, or any other flash memory, NVRAM, a cache, a register, any other memory chip or cartridge, and networked versions of the same. The terms “memory” and “computer-readable storage medium” may refer to multiple structures, such as a plurality of memories or computer-readable storage mediums located within an input unit or at a remote location. Additionally, one or more computer-readable storage mediums can be utilized in implementing a computer-implemented method. Accordingly, the term computer-readable storage medium should be understood to include tangible items and exclude carrier waves and transient signals. In the specific embodiment illustrated inFIG.3, memory interface310may be used to access a software product and/or data stored on a memory device, such as memory device311. Memory device311may include high-speed random-access memory and/or non-volatile memory, such as one or more magnetic disk storage devices, one or more optical storage devices, and/or flash memory (e.g., NAND, NOR). Consistent with the present disclosure, the components of memory device311may be distributed in more than one unit of system200and/or in more than one memory device. Memory device311, shown inFIG.3, may contain software modules to execute processes consistent with the present disclosure.
In particular, memory device311may include an input determination module312, an output determination module313, a sensors communication module314, a virtual content determination module315, a virtual content communication module316, and a database access module317. Modules312-317may contain software instructions for execution by at least one processor (e.g., processing device360) associated with input unit202. Input determination module312, output determination module313, sensors communication module314, virtual content determination module315, virtual content communication module316, and database access module317may cooperate to perform various operations. For example, input determination module312may determine text using data received from, for example, keyboard104. Thereafter, output determination module313may cause presentation of the recently inputted text, for example on a dedicated display352physically or wirelessly coupled to keyboard104. This way, when user100types, user100can see a preview of the typed text without constantly moving his or her head up and down to look at virtual screen112. Sensors communication module314may receive data from different sensors to determine a status of user100. Thereafter, virtual content determination module315may determine the virtual content to display, based on received input and the determined status of user100. For example, the determined virtual content may be a virtual presentation of the recently inputted text on a virtual screen virtually located adjacent to keyboard104. Virtual content communication module316may obtain virtual content that is not determined by virtual content determination module315(e.g., an avatar of another user). The retrieval of the virtual content may be from database380, from remote processing unit208, or from any other source. In some embodiments, input determination module312may regulate the operation of input interface330in order to receive pointer input331, textual input332, audio input333, and XR-related input334. Details on the pointer input, the textual input, and the audio input are described above. The term “XR-related input” may include any type of data that may cause a change in the virtual content displayed to user100. In one embodiment, XR-related input334may include image data of user100captured using a wearable extended reality appliance (e.g., detected hand gestures of user100). In another embodiment, XR-related input334may include wireless communication indicating a presence of another user in proximity to user100. Consistent with the present disclosure, input determination module312may concurrently receive different types of input data. Thereafter, input determination module312may further apply different rules based on the detected type of input. For example, a pointer input may have precedence over voice input. In some embodiments, output determination module313may regulate the operation of output interface350in order to generate output using light indicators351, display352, and/or speakers353. In general, the output generated by output determination module313does not include virtual content to be presented by a wearable extended reality appliance. Instead, the output generated by output determination module313includes various outputs that relate to the operation of input unit202and/or the operation of XR unit204. In one embodiment, light indicators351may include a light indicator that shows the status of a wearable extended reality appliance.
For example, the light indicator may display green light when wearable extended reality appliance110is connected to keyboard104and may blink when wearable extended reality appliance110has a low battery. In another embodiment, display352may be used to display operational information. For example, the display may present error messages when the wearable extended reality appliance is inoperable. In another embodiment, speakers353may be used to output audio, for example, when user100wishes to play some music for other users. In some embodiments, sensors communication module314may regulate the operation of sensors interface370in order to receive sensor data from one or more sensors, integrated with, or connected to, an input device. The one or more sensors may include: audio sensor371, image sensor372, motion sensor373, environmental sensor374(e.g., a temperature sensor, ambient light detectors, etc.), and other sensors375. In one embodiment, the data received from sensors communication module314may be used to determine the physical orientation of the input device. The physical orientation of the input device may be indicative of a state of the user and may be determined based on a combination of a tilt movement, a roll movement, and a lateral movement. Thereafter, the physical orientation of the input device may be used by virtual content determination module315to modify display parameters of the virtual content to match the state of the user (e.g., attentive, sleepy, active, sitting, standing, leaning backward, leaning forward, walking, moving, riding, etc.). In some embodiments, virtual content determination module315may determine the virtual content to be displayed by the wearable extended reality appliance. The virtual content may be determined based on data from input determination module312, sensors communication module314, and other sources (e.g., database380). In some embodiments, determining the virtual content may include determining the distance, the size, and the orientation of the virtual objects. The position of the virtual objects may be determined based on the type of the virtual objects. Specifically, with regards to the example illustrated inFIG.1, the virtual content determination module315may determine to place four virtual widgets114A-114D on the sides of virtual screen112and to place virtual widget114E on table102because virtual widget114E is a virtual controller (e.g., volume bar). The position of the virtual objects may further be determined based on the user's preferences. For example, for left-handed users, virtual content determination module315may determine to place a virtual volume bar to the left of keyboard104, and for right-handed users, virtual content determination module315may determine to place the virtual volume bar to the right of keyboard104. In some embodiments, virtual content communication module316may regulate the operation of network interface320in order to obtain data from one or more sources to be presented as virtual content to user100. The one or more sources may include other XR units204, the user's mobile communications device206, remote processing unit208, publicly available information, etc. In one embodiment, virtual content communication module316may communicate with mobile communications device206in order to provide a virtual representation of mobile communications device206.
For example, the virtual representation may enable user100to read messages and interact with applications installed on mobile communications device206. Virtual content communication module316may also regulate the operation of network interface320in order to share virtual content with other users. In one example, virtual content communication module316may use data from input determination module312to identify a trigger (e.g., the trigger may include a gesture of the user) and to transfer content from the virtual display to a physical display (e.g., TV) or to a virtual display of a different user. In some embodiments, database access module317may cooperate with database380to retrieve stored data. The retrieved data may include, for example, privacy levels associated with different virtual objects, the relationship between virtual objects and physical objects, the user's preferences, the user's past behavior, and more. As described above, virtual content determination module315may use the data stored in database380to determine the virtual content. Database380may include separate databases, including, for example, a vector database, a raster database, a tile database, a viewport database, and/or a user input database. The data stored in database380may be received from modules314-317or other components of system200. Moreover, the data stored in database380may be provided as input using data entry, data transfer, or data uploading. Modules312-317may be implemented in software, hardware, firmware, a mix of any of those, or the like. In some embodiments, any one or more of modules312-317and data associated with database380may be stored in XR unit204, mobile communications device206, or remote processing unit208. Processing devices of system200may be configured to execute the instructions of modules312-317. In some embodiments, aspects of modules312-317may be implemented in hardware, in software (including in one or more signal processing and/or application specific integrated circuits), in firmware, or in any combination thereof, executable by one or more processors, alone, or in various combinations with each other. Specifically, modules312-317may be configured to interact with each other and/or other modules of system200to perform functions consistent with disclosed embodiments. For example, input unit202may execute instructions that apply an image processing algorithm to data from XR unit204to determine head movement of user100. Furthermore, each functionality described throughout the specification, with regards to input unit202or with regards to a component of input unit202, may correspond to a set of instructions for performing said functionality. These instructions need not be implemented as separate software programs, procedures, or modules. Memory device311may include additional modules and instructions or fewer modules and instructions. For example, memory device311may store an operating system, such as ANDROID, iOS, UNIX, OSX, WINDOWS, DARWIN, RTXC, LINUX, or an embedded operating system such as VxWorks. The operating system can include instructions for handling basic system services and for performing hardware-dependent tasks. Network interface320, shown inFIG.3, may provide two-way data communications to a network, such as communications network214. In one embodiment, network interface320may include an Integrated Services Digital Network (ISDN) card, a cellular modem, a satellite modem, or a modem to provide a data communication connection over the Internet.
As another example, network interface320may include a Wireless Local Area Network (WLAN) card. In another embodiment, network interface320may include an Ethernet port connected to radio frequency receivers and transmitters and/or optical (e.g., infrared) receivers and transmitters. The specific design and implementation of network interface320may depend on the communications network or networks over which input unit202is intended to operate. For example, in some embodiments, input unit202may include network interface320designed to operate over a GSM network, a GPRS network, an EDGE network, a Wi-Fi or WiMax network, or a Bluetooth network. In any such implementation, network interface320may be configured to send and receive electrical, electromagnetic, or optical signals that carry digital data streams or digital signals representing various types of information. Input interface330, shown inFIG.3, may receive input from a variety of input devices, for example, a keyboard, a mouse, a touch pad, a touch screen, one or more buttons, a joystick, a microphone, an image sensor, and any other device configured to detect physical or virtual input. The received input may be in the form of at least one of: text, sounds, speech, hand gestures, body gestures, tactile information, and any other type of physical or virtual input generated by the user. In the depicted embodiment, input interface330may receive pointer input331, textual input332, audio input333, and XR-related input334. In an additional embodiment, input interface330may be an integrated circuit that may act as a bridge between processing device360and any of the input devices listed above. Power source340, shown inFIG.3, may provide electrical energy to power input unit202and optionally also power XR unit204. Generally, a power source included in any device or system in the present disclosure may be any device that can repeatedly store, dispense, or convey electric power, including, but not limited to, one or more batteries (e.g., a lead-acid battery, a lithium-ion battery, a nickel-metal hydride battery, a nickel-cadmium battery), one or more capacitors, one or more connections to external power sources, one or more power convertors, or any combination of them. With reference to the example illustrated inFIG.3, the power source may be mobile, which means that input unit202can be easily carried by a hand (e.g., the total weight of power source340may be less than a pound). The mobility of the power source enables user100to use input unit202in a variety of situations. In other embodiments, power source340may be associated with a connection to an external power source (such as an electrical power grid) that may be used to charge power source340. In addition, power source340may be configured to charge one or more batteries included in XR unit204; for example, a pair of extended reality glasses (e.g., wearable extended reality appliance110) may be charged (e.g., wirelessly or by wire) when they are placed on or in proximity to input unit202. Output interface350, shown inFIG.3, may cause output from a variety of output devices, for example, using light indicators351, display352, and/or speakers353. In one embodiment, output interface350may be an integrated circuit that may act as a bridge between processing device360and at least one of the output devices listed above. Light indicators351may include one or more light sources, for example, an LED array associated with different colors.
Display352may include a screen (e.g., LCD or dot-matrix screen) or a touch screen. Speakers353may include audio headphones, a hearing-aid-type device, a speaker, a bone conduction headphone, interfaces that provide tactile cues, vibrotactile stimulators, and more. Processing device360, shown inFIG.3, may include at least one processor configured to execute computer programs, applications, methods, processes, or other software to perform embodiments described in the present disclosure. Generally, a processing device included in any device or system in the present disclosure may include one or more integrated circuits, microchips, microcontrollers, microprocessors, all or part of a central processing unit (CPU), graphics processing unit (GPU), digital signal processor (DSP), field programmable gate array (FPGA), or other circuits suitable for executing instructions or performing logic operations. The processing device may include at least one processor configured to perform functions of the disclosed methods, such as a microprocessor manufactured by Intel™. The processing device may include single-core or multiple-core processors executing parallel processes simultaneously. In one example, the processing device may be a single-core processor configured with virtual processing technologies. The processing device may implement virtual machine technologies or other technologies to provide the ability to execute, control, run, manipulate, store, etc., multiple software processes, applications, programs, etc. In another example, the processing device may include a multiple-core processor arrangement (e.g., dual, quad core, etc.) configured to provide parallel processing functionalities to allow a device associated with the processing device to execute multiple processes simultaneously. It is appreciated that other types of processor arrangements could be implemented to provide the capabilities disclosed herein. Sensors interface370, shown inFIG.3, may obtain sensor data from a variety of sensors, for example, audio sensor371, image sensor372, motion sensor373, environmental sensor374, and other sensors375. In one embodiment, sensors interface370may be an integrated circuit that may act as a bridge between processing device360and at least one of the sensors listed above. Audio sensor371may include one or more audio sensors configured to capture audio by converting sounds to digital information. Some examples of audio sensors may include: microphones, unidirectional microphones, bidirectional microphones, cardioid microphones, omnidirectional microphones, onboard microphones, wired microphones, wireless microphones, or any combination of the above. Consistent with the present disclosure, processing device360may modify a presentation of virtual content based on data received from audio sensor371(e.g., voice commands). Image sensor372may include one or more image sensors configured to capture visual information by converting light to image data. Consistent with the present disclosure, an image sensor may be included in any device or system in the present disclosure and may be any device capable of detecting and converting optical signals in the near-infrared, infrared, visible, and ultraviolet spectrums into electrical signals. Examples of image sensors may include digital cameras, phone cameras, semiconductor Charge-Coupled Devices (CCDs), active pixel sensors in Complementary Metal-Oxide-Semiconductor (CMOS), or N-type metal-oxide-semiconductor (NMOS, Live MOS).
The electrical signals may be used to generate image data. Consistent with the present disclosure, the image data may include pixel data streams, digital images, digital video streams, data derived from captured images, and data that may be used to construct one or more 3D images, a sequence of 3D images, 3D videos, or a virtual 3D representation. The image data acquired by image sensor372may be transmitted by wired or wireless transmission to any processing device of system200. For example, the image data may be processed in order to: detect objects, detect events, detect actions, detect faces, detect people, recognize a known person, or derive any other information that may be used by system200. Consistent with the present disclosure, processing device360may modify a presentation of virtual content based on image data received from image sensor372. Motion sensor373may include one or more motion sensors configured to measure motion of input unit202or motion of objects in the environment of input unit202. Specifically, the motion sensors may perform at least one of the following: detect motion of objects in the environment of input unit202, measure the velocity of objects in the environment of input unit202, measure the acceleration of objects in the environment of input unit202, detect the motion of input unit202, measure the velocity of input unit202, measure the acceleration of input unit202, etc. In some embodiments, motion sensor373may include one or more accelerometers configured to detect changes in proper acceleration and/or to measure proper acceleration of input unit202. In other embodiments, motion sensor373may include one or more gyroscopes configured to detect changes in the orientation of input unit202and/or to measure information related to the orientation of input unit202. In other embodiments, motion sensor373may be implemented using one or more image sensors, LIDAR sensors, radar sensors, or proximity sensors. For example, by analyzing captured images, the processing device may determine the motion of input unit202using ego-motion algorithms. In addition, the processing device may determine the motion of objects in the environment of input unit202, for example, using object tracking algorithms. Consistent with the present disclosure, processing device360may modify a presentation of virtual content based on the determined motion of input unit202or the determined motion of objects in the environment of input unit202. For example, the processing device may cause a virtual display to follow the movement of input unit202. Environmental sensor374may include one or more sensors of different types configured to capture data reflective of the environment of input unit202. In some embodiments, environmental sensor374may include one or more chemical sensors configured to perform at least one of the following: measure chemical properties in the environment of input unit202, measure changes in the chemical properties in the environment of input unit202, detect the presence of chemicals in the environment of input unit202, or measure the concentration of chemicals in the environment of input unit202. Examples of such chemical properties may include: pH level, toxicity, and temperature. Examples of such chemicals may include: electrolytes, particular enzymes, particular hormones, particular proteins, smoke, carbon dioxide, carbon monoxide, oxygen, ozone, hydrogen, and hydrogen sulfide.
In other embodiments, environmental sensor374may include one or more temperature sensors configured to detect changes in the temperature of the environment of input unit202and/or to measure the temperature of the environment of input unit202. In other embodiments, environmental sensor374may include one or more barometers configured to detect changes in the atmospheric pressure in the environment of input unit202and/or to measure the atmospheric pressure in the environment of input unit202. In other embodiments, environmental sensor374may include one or more light sensors configured to detect changes in the ambient light in the environment of input unit202. Consistent with the present disclosure, processing device360may modify a presentation of virtual content based on input from environmental sensor374. For example, processing device360may automatically reduce the brightness of the virtual content when the environment of user100becomes darker. Other sensors375may include a weight sensor, a light sensor, a resistive sensor, an ultrasonic sensor, a proximity sensor, a biometric sensor, or other sensing devices to facilitate related functionalities. In a specific embodiment, other sensors375may include one or more positioning sensors configured to obtain positioning information of input unit202, to detect changes in the position of input unit202, and/or to measure the position of input unit202. Alternatively, GPS software may permit input unit202to access an external GPS receiver (e.g., connecting via a serial port or Bluetooth). Consistent with the present disclosure, processing device360may modify a presentation of virtual content based on input from other sensors375. For example, processing device360may present private information only after identifying user100using data from a biometric sensor. The components and arrangements shown inFIG.3are not intended to limit the disclosed embodiments. As will be appreciated by a person skilled in the art having the benefit of this disclosure, numerous variations and/or modifications may be made to the depicted configuration of input unit202. For example, not all components may be essential for the operation of an input unit in all cases. Any component may be located in any appropriate part of an input unit, and the components may be rearranged into a variety of configurations while providing the functionality of the disclosed embodiments. For example, some input units may not include all of the elements as shown in input unit202. FIG.4is a block diagram of an exemplary configuration of XR unit204.FIG.4is an exemplary representation of just one embodiment, and it is to be understood that some illustrated elements might be omitted and others added within the scope of this disclosure. In the embodiment ofFIG.4, XR unit204may directly or indirectly access a bus400(or other communication mechanism) that interconnects subsystems and components for transferring information within XR unit204. For example, bus400may interconnect a memory interface410, a network interface420, an input interface430, a power source440, an output interface450, a processing device460, a sensors interface470, and a database480. Memory interface410, shown inFIG.4, is assumed to have functionality similar to that of memory interface310, described above in detail. Memory interface410may be used to access a software product and/or data stored on a non-transitory computer-readable medium or on memory devices, such as memory device411.
Memory device411may contain software modules to execute processes consistent with the present disclosure. In particular, memory device411may include an input determination module412, an output determination module413, a sensors communication module414, a virtual content determination module415, a virtual content communication module416, and a database access module417. Modules412-417may contain software instructions for execution by at least one processor (e.g., processing device460) associated with XR unit204. Input determination module412, output determination module413, sensors communication module414, virtual content determination module415, virtual content communication module416, and database access module417may cooperate to perform various operations. For example, input determination module412may determine User Interface (UI) input received from input unit202. At the same time, sensors communication module414may receive data from different sensors to determine a status of user100. Virtual content determination module415may determine the virtual content to display based on received input and the determined status of user100. Virtual content communication module416may retrieve virtual content not determined by virtual content determination module415. The retrieval of the virtual content may be from database380, database480, mobile communications device206, or from remote processing unit208. Based on the output of virtual content determination module415, output determination module413may cause a change in the virtual content displayed to user100by projector454. In some embodiments, input determination module412may regulate the operation of input interface430in order to receive gesture input431, virtual input432, audio input433, and UI input434. Consistent with the present disclosure, input determination module412may concurrently receive different types of input data. In one embodiment, input determination module412may apply different rules based on the detected type of input. For example, gesture input may have precedence over virtual input. In some embodiments, output determination module413may regulate the operation of output interface450in order to generate output using light indicators451, display452, speakers453, and projector454. In one embodiment, light indicators451may include a light indicator that shows the status of the wearable extended reality appliance. For example, the light indicator may display green light when wearable extended reality appliance110is connected to input unit202and may blink when wearable extended reality appliance110has a low battery. In another embodiment, display452may be used to display operational information. In another embodiment, speakers453may include a bone conduction headphone used to output audio to user100. In another embodiment, projector454may present virtual content to user100. The operations of a sensors communication module, a virtual content determination module, a virtual content communication module, and a database access module are described above with reference toFIG.3, details of which are not repeated herein. Modules412-417may be implemented in software, hardware, firmware, a mix of any of those, or the like. Network interface420, shown inFIG.4, is assumed to have functionality similar to that of network interface320, described above in detail. The specific design and implementation of network interface420may depend on the communications network(s) over which XR unit204is intended to operate.
For example, in some embodiments, XR unit204is configured to be selectively connectable by wire to input unit202. When connected by wire, network interface420may enable communications with input unit202; and when not connected by wire, network interface420may enable communications with mobile communications device206. Input interface430, shown inFIG.4, is assumed to have functionality similar to that of input interface330, described above in detail. In this case, input interface430may communicate with an image sensor to obtain gesture input431(e.g., a finger of user100pointing to a virtual object), communicate with other XR units204to obtain virtual input432(e.g., a virtual object shared with XR unit204or a gesture of an avatar detected in the virtual environment), communicate with a microphone to obtain audio input433(e.g., voice commands), and communicate with input unit202to obtain UI input434(e.g., virtual content determined by virtual content determination module315). Power source440, shown inFIG.4, is assumed to have functionality similar to that of power source340, described above, except that it provides electrical energy to power XR unit204. In some embodiments, power source440may be charged by power source340. For example, power source440may be wirelessly charged when XR unit204is placed on or in proximity to input unit202. Output interface450, shown inFIG.4, is assumed to have functionality similar to that of output interface350, described above in detail. In this case, output interface450may cause output from light indicators451, display452, speakers453, and projector454. Projector454may be any device, apparatus, instrument, or the like capable of projecting (or directing) light in order to display virtual content onto a surface. The surface may be part of XR unit204, part of an eye of user100, or part of an object in proximity to user100. In one embodiment, projector454may include a lighting unit that concentrates light within a limited solid angle by means of one or more mirrors and lenses, and provides a high value of luminous intensity in a defined direction. Processing device460, shown inFIG.4, is assumed to have functionality similar to that of processing device360, described above in detail. When XR unit204is connected to input unit202, processing device460may work together with processing device360. Specifically, processing device460may implement virtual machine technologies or other technologies to provide the ability to execute, control, run, manipulate, store, etc., multiple software processes, applications, programs, etc. It is appreciated that other types of processor arrangements could be implemented to provide the capabilities disclosed herein. Sensors interface470, shown inFIG.4, is assumed to have functionality similar to that of sensors interface370, described above in detail. Specifically, sensors interface470may communicate with audio sensor471, image sensor472, motion sensor473, environmental sensor474, and other sensors475. The operations of an audio sensor, an image sensor, a motion sensor, an environmental sensor, and other sensors are described above with reference toFIG.3, details of which are not repeated herein. It is appreciated that other types and combinations of sensors may be used to provide the capabilities disclosed herein. The components and arrangements shown inFIG.4are not intended to limit the disclosed embodiments.
As will be appreciated by a person skilled in the art having the benefit of this disclosure, numerous variations and/or modifications may be made to the depicted configuration of XR unit204. For example, not all components may be essential for the operation of XR unit204in all cases. Any component may be located in any appropriate part of system200, and the components may be rearranged into a variety of configurations while providing the functionality of the disclosed embodiments. For example, some XR units may not include all of the elements in XR unit204(e.g., wearable extended reality appliance110may not have light indicators451). FIG.5is a block diagram of an exemplary configuration of remote processing unit208.FIG.5is an exemplary representation of just one embodiment, and it is to be understood that some illustrated elements might be omitted and others added within the scope of this disclosure. In the embodiment ofFIG.5, remote processing unit208may include a server210that directly or indirectly accesses a bus500(or other communication mechanism) interconnecting subsystems and components for transferring information within server210. For example, bus500may interconnect a memory interface510, a network interface520, a power source540, a processing device560, and a database580. Remote processing unit208may also include one or more data structures, such as data structures212A,212B, and212C. Memory interface510, shown inFIG.5, is assumed to have functionality similar to that of memory interface310, described above in detail. Memory interface510may be used to access a software product and/or data stored on a non-transitory computer-readable medium or on other memory devices, such as memory devices311,411,511, or data structures212A,212B, and212C. Memory device511may contain software modules to execute processes consistent with the present disclosure. In particular, memory device511may include a shared memory module512, a node registration module513, a load balancing module514, one or more computational nodes515, an internal communication module516, an external communication module517, and a database access module (not shown). Modules512-517may contain software instructions for execution by at least one processor (e.g., processing device560) associated with remote processing unit208. Shared memory module512, node registration module513, load balancing module514, computational nodes515, and external communication module517may cooperate to perform various operations. Shared memory module512may allow information sharing between remote processing unit208and other components of system200. In some embodiments, shared memory module512may be configured to enable processing device560(and other processing devices in system200) to access, retrieve, and store data. For example, using shared memory module512, processing device560may perform at least one of: executing software programs stored on memory device511, database580, or data structures212A-C; storing information in memory device511, database580, or data structures212A-C; or retrieving information from memory device511, database580, or data structures212A-C. Node registration module513may be configured to track the availability of one or more computational nodes515. In some examples, node registration module513may be implemented as: a software program, such as a software program executed by one or more computational nodes515, a hardware solution, or a combined software and hardware solution.
In some implementations, node registration module513may communicate with one or more computational nodes515, for example, using internal communication module516. In some examples, one or more computational nodes515may notify node registration module513of their status, for example, by sending messages: at startup, at shutdown, at constant intervals, at selected times, in response to queries received from node registration module513, or at any other determined times. In some examples, node registration module513may query about the status of one or more computational nodes515, for example, by sending messages: at startup, at constant intervals, at selected times, or at any other determined times. Load balancing module514may be configured to divide the workload among one or more computational nodes515. In some examples, load balancing module514may be implemented as: a software program, such as a software program executed by one or more of the computational nodes515, a hardware solution, or a combined software and hardware solution. In some implementations, load balancing module514may interact with node registration module513in order to obtain information regarding the availability of one or more computational nodes515. In some implementations, load balancing module514may communicate with one or more computational nodes515, for example, using internal communication module516. In some examples, one or more computational nodes515may notify load balancing module514of their status, for example, by sending messages: at startup, at shutdown, at constant intervals, at selected times, in response to queries received from load balancing module514, or at any other determined times. In some examples, load balancing module514may query about the status of one or more computational nodes515, for example, by sending messages: at startup, at constant intervals, at pre-selected times, or at any other determined times. Internal communication module516may be configured to receive and/or to transmit information from one or more components of remote processing unit208. For example, control signals and/or synchronization signals may be sent and/or received through internal communication module516. In one embodiment, input information for computer programs, output information of computer programs, and/or intermediate information of computer programs may be sent and/or received through internal communication module516. In another embodiment, information received through internal communication module516may be stored in memory device511, in database580, in data structures212A-C, or in another memory device in system200. For example, information retrieved from data structure212A may be transmitted using internal communication module516. In another example, input data may be received using internal communication module516and stored in data structure212B. External communication module517may be configured to receive and/or to transmit information from one or more components of system200. For example, control signals may be sent and/or received through external communication module517. In one embodiment, information received through external communication module517may be stored in memory device511, in database580, in data structures212A-C, and/or in any memory device in system200. In another embodiment, information retrieved from any of data structures212A-C may be transmitted using external communication module517to XR unit204. 
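Referring back to node registration module513and load balancing module514, the following Python sketch offers one non-limiting illustration of how heartbeat-style status notifications and workload division might be realized. The class and function names, the timeout value, and the pending-task bookkeeping are hypothetical conveniences, not elements of the disclosed modules.

    import time

    class NodeRegistry:
        """Tracks the availability of computational nodes via status messages."""
        def __init__(self, timeout_seconds=30):
            self.timeout = timeout_seconds
            self.last_seen = {}  # node_id -> timestamp of last status message

        def notify(self, node_id):
            # A node reports its status (e.g., at startup or at constant intervals).
            self.last_seen[node_id] = time.time()

        def available_nodes(self):
            # A node is considered available if it reported recently enough.
            now = time.time()
            return [n for n, t in self.last_seen.items() if now - t < self.timeout]

    def dispatch(task, registry, pending_counts):
        """Minimal load balancing: direct the task to the least-loaded available node."""
        nodes = registry.available_nodes()
        if not nodes:
            raise RuntimeError("no computational nodes available")
        node = min(nodes, key=lambda n: pending_counts.get(n, 0))
        pending_counts[node] = pending_counts.get(node, 0) + 1
        return node  # in a full system, the task would then be sent via an internal communication module

    registry = NodeRegistry()
    registry.notify("node-1")
    registry.notify("node-2")
    print(dispatch("render-job", registry, {"node-1": 3, "node-2": 1}))  # -> node-2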
In another embodiment, input data may be transmitted and/or received using external communication module517. Examples of such input data may include data received from input unit202, information captured from the environment of user100using one or more sensors (e.g., audio sensor471, image sensor472, motion sensor473, environmental sensor474, other sensors475), and more. In some embodiments, aspects of modules512-517may be implemented in hardware, in software (including in one or more signal processing and/or application specific integrated circuits), in firmware, or in any combination thereof, executable by one or more processors, alone, or in various combinations with each other. Specifically, modules512-517may be configured to interact with each other and/or other modules of system200to perform functions consistent with disclosed embodiments. Memory device511may include additional modules and instructions or fewer modules and instructions. Network interface520, power source540, processing device560, and database580, shown inFIG.5, are assumed to have similar functionality as the functionality of similar elements described above with reference toFIGS.3and4. The specific design and implementation of the above-mentioned components may vary based on the implementation of system200. In addition, remote processing unit208may include more or fewer components. For example, remote processing unit208may include an input interface configured to receive direct input from one or more input devices. Consistent with the present disclosure, a processing device of system200(e.g., a processor within mobile communications device206, a processor within a server210, a processor within a wearable extended reality appliance, such as wearable extended reality appliance110, and/or a processor within an input device associated with wearable extended reality appliance110, such as keyboard104) may use machine learning algorithms in order to implement any of the methods disclosed herein. In some embodiments, machine learning algorithms (also referred to as machine learning models in the present disclosure) may be trained using training examples, for example in the cases described below. Some non-limiting examples of such machine learning algorithms may include classification algorithms, data regression algorithms, image segmentation algorithms, visual detection algorithms (such as object detectors, face detectors, person detectors, motion detectors, edge detectors, etc.), visual recognition algorithms (such as face recognition, person recognition, object recognition, etc.), speech recognition algorithms, mathematical embedding algorithms, natural language processing algorithms, support vector machines, random forests, nearest neighbors algorithms, deep learning algorithms, artificial neural network algorithms, convolutional neural network algorithms, recurrent neural network algorithms, linear machine learning models, non-linear machine learning models, ensemble algorithms, and more. For example, a trained machine learning algorithm may comprise an inference model, such as a predictive model, a classification model, a data regression model, a clustering model, a segmentation model, an artificial neural network (such as a deep neural network, a convolutional neural network, a recurrent neural network, etc.), a random forest, a support vector machine, and so forth. In some examples, the training examples may include example inputs together with the desired outputs corresponding to the example inputs. 
Further, in some examples, training machine learning algorithms using the training examples may generate a trained machine learning algorithm, and the trained machine learning algorithm may be used to estimate outputs for inputs not included in the training examples. In some examples, engineers, scientists, processes and machines that train machine learning algorithms may further use validation examples and/or test examples. For example, validation examples and/or test examples may include example inputs together with the desired outputs corresponding to the example inputs, a trained machine learning algorithm and/or an intermediately trained machine learning algorithm may be used to estimate outputs for the example inputs of the validation examples and/or test examples, the estimated outputs may be compared to the corresponding desired outputs, and the trained machine learning algorithm and/or the intermediately trained machine learning algorithm may be evaluated based on a result of the comparison. In some examples, a machine learning algorithm may have parameters and hyper-parameters, where the hyper-parameters may be set manually by a person or automatically by a process external to the machine learning algorithm (such as a hyper-parameter search algorithm), and the parameters of the machine learning algorithm may be set by the machine learning algorithm based on the training examples. In some implementations, the hyper-parameters may be set based on the training examples and the validation examples, and the parameters may be set based on the training examples and the selected hyper-parameters. For example, given the hyper-parameters, the parameters may be conditionally independent of the validation examples. In some embodiments, trained machine learning algorithms (also referred to as machine learning models and trained machine learning models in the present disclosure) may be used to analyze inputs and generate outputs, for example in the cases described below. In some examples, a trained machine learning algorithm may be used as an inference model that when provided with an input generates an inferred output. For example, a trained machine learning algorithm may include a classification algorithm, the input may include a sample, and the inferred output may include a classification of the sample (such as an inferred label, an inferred tag, and so forth). In another example, a trained machine learning algorithm may include a regression model, the input may include a sample, and the inferred output may include an inferred value corresponding to the sample. In yet another example, a trained machine learning algorithm may include a clustering model, the input may include a sample, and the inferred output may include an assignment of the sample to at least one cluster. In an additional example, a trained machine learning algorithm may include a classification algorithm, the input may include an image, and the inferred output may include a classification of an item depicted in the image. In yet another example, a trained machine learning algorithm may include a regression model, the input may include an image, and the inferred output may include an inferred value corresponding to an item depicted in the image (such as an estimated property of the item, such as size, volume, age of a person depicted in the image, distance from an item depicted in the image, and so forth). 
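Referring back to the distinction between parameters and hyper-parameters, the following Python sketch is one non-limiting illustration: a parameter is fit from the training examples, while a hyper-parameter is chosen by an external search that scores candidates on the validation examples. The toy data, the threshold classifier, and all names are hypothetical.

    # (input, desired output) pairs; all values are illustrative only.
    training_examples = [(0.2, 0), (0.4, 0), (0.6, 1), (0.9, 1)]
    validation_examples = [(0.55, 0), (0.7, 1)]

    def fit_parameter(examples, hyper_parameter):
        # "Training": place a cutoff between the labeled groups,
        # shifted by the externally chosen hyper-parameter.
        positives = [x for x, y in examples if y == 1]
        negatives = [x for x, y in examples if y == 0]
        return (min(positives) + max(negatives)) / 2 + hyper_parameter

    def accuracy(examples, cutoff):
        return sum((x > cutoff) == bool(y) for x, y in examples) / len(examples)

    # Hyper-parameter search: each candidate is scored on the validation examples.
    candidates = [-0.1, 0.0, 0.1]
    best = max(candidates,
               key=lambda h: accuracy(validation_examples,
                                      fit_parameter(training_examples, h)))
    cutoff = fit_parameter(training_examples, best)
    print(best, cutoff, accuracy(validation_examples, cutoff))  # 0.1 0.6 1.0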
In an additional example, a trained machine learning algorithm may include an image segmentation model, the input may include an image, and the inferred output may include a segmentation of the image. In yet another example, a trained machine learning algorithm may include an object detector, the input may include an image, and the inferred output may include one or more detected objects in the image and/or one or more locations of objects within the image. In some examples, the trained machine learning algorithm may include one or more formulas and/or one or more functions and/or one or more rules and/or one or more procedures, the input may be used as input to the formulas and/or functions and/or rules and/or procedures, and the inferred output may be based on the outputs of the formulas and/or functions and/or rules and/or procedures (for example, selecting one of the outputs of the formulas and/or functions and/or rules and/or procedures, using a statistical measure of the outputs of the formulas and/or functions and/or rules and/or procedures, and so forth). Consistent with the present disclosure, a processing device of system200may analyze image data captured by an image sensor (e.g., image sensor372, image sensor472, or any other image sensor) in order to implement any of the methods disclosed herein. In some embodiments, analyzing the image data may comprise analyzing the image data to obtain preprocessed image data, and subsequently analyzing the image data and/or the preprocessed image data to obtain the desired outcome. One of ordinary skill in the art will recognize that the following are examples, and that the image data may be preprocessed using other kinds of preprocessing methods. In some examples, the image data may be preprocessed by transforming the image data using a transformation function to obtain transformed image data, and the preprocessed image data may comprise the transformed image data. For example, the transformed image data may comprise one or more convolutions of the image data. For example, the transformation function may comprise one or more image filters, such as low-pass filters, high-pass filters, band-pass filters, all-pass filters, and so forth. In some examples, the transformation function may comprise a nonlinear function. In some examples, the image data may be preprocessed by smoothing at least parts of the image data, for example using Gaussian convolution, using a median filter, and so forth. In some examples, the image data may be preprocessed to obtain a different representation of the image data. For example, the preprocessed image data may comprise: a representation of at least part of the image data in a frequency domain; a Discrete Fourier Transform of at least part of the image data; a Discrete Wavelet Transform of at least part of the image data; a time/frequency representation of at least part of the image data; a representation of at least part of the image data in a lower dimension; a lossy representation of at least part of the image data; a lossless representation of at least part of the image data; a time ordered series of any of the above; any combination of the above; and so forth. In some examples, the image data may be preprocessed to extract edges, and the preprocessed image data may comprise information based on and/or related to the extracted edges. In some examples, the image data may be preprocessed to extract image features from the image data. 
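As one non-limiting illustration of the frequency-domain representations described above, the following Python sketch (using NumPy, and assuming image data held as a two-dimensional array of pixel values) obtains a Discrete Fourier Transform of the image data and applies a crude low-pass filter before transforming back. The array size and the retained frequency band are arbitrary choices made only for the illustration.

    import numpy as np

    # Hypothetical grayscale image data: an 8x8 array of pixel values.
    image_data = np.random.rand(8, 8)

    # Representation of the image data in a frequency domain
    # (a Discrete Fourier Transform of the image data).
    spectrum = np.fft.fft2(image_data)

    # A crude low-pass filter: keep only the lowest spatial frequencies,
    # zeroing the rest, then transform back to obtain smoothed image data.
    mask = np.zeros_like(spectrum)
    mask[:3, :3] = 1       # low positive frequencies
    mask[-2:, :3] = 1      # wrap-around negative frequencies (rows)
    mask[:3, -2:] = 1      # wrap-around negative frequencies (columns)
    mask[-2:, -2:] = 1
    preprocessed_image_data = np.real(np.fft.ifft2(spectrum * mask))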
Some non-limiting examples of image features extracted in this manner may comprise information based on and/or related to: edges; corners; blobs; ridges; Scale Invariant Feature Transform (SIFT) features; temporal features; and so forth. In some examples, analyzing the image data may include calculating at least one convolution of at least a portion of the image data, and using the calculated at least one convolution to calculate at least one resulting value and/or to make determinations, identifications, recognitions, classifications, and so forth. Consistent with other aspects of the disclosure, a processing device of system200may analyze image data in order to implement any of the methods disclosed herein. In some embodiments, analyzing the image data may comprise analyzing the image data and/or the preprocessed image data using one or more rules, functions, procedures, artificial neural networks, object detection algorithms, face detection algorithms, visual event detection algorithms, action detection algorithms, motion detection algorithms, background subtraction algorithms, inference models, and so forth. Some non-limiting examples of such inference models may include: an inference model preprogrammed manually; a classification model; a regression model; a result of training algorithms, such as machine learning algorithms and/or deep learning algorithms, on training examples, where the training examples may include examples of data instances, and in some cases, a data instance may be labeled with a corresponding desired label and/or result, and more. In some embodiments, analyzing image data (for example by the methods, steps and modules described herein) may comprise analyzing pixels, voxels, point clouds, range data, etc. included in the image data. A convolution may include a convolution of any dimension. A one-dimensional convolution is a function that transforms an original sequence of numbers to a transformed sequence of numbers. The one-dimensional convolution may be defined by a sequence of scalars. Each particular value in the transformed sequence of numbers may be determined by calculating a linear combination of values in a subsequence of the original sequence of numbers corresponding to the particular value. A result value of a calculated convolution may include any value in the transformed sequence of numbers. Likewise, an n-dimensional convolution is a function that transforms an original n-dimensional array to a transformed array. The n-dimensional convolution may be defined by an n-dimensional array of scalars (known as the kernel of the n-dimensional convolution). Each particular value in the transformed array may be determined by calculating a linear combination of values in an n-dimensional region of the original array corresponding to the particular value. A result value of a calculated convolution may include any value in the transformed array. In some examples, an image may comprise one or more components (such as color components, depth component, etc.), and each component may include a two-dimensional array of pixel values. In one example, calculating a convolution of an image may include calculating a two-dimensional convolution on one or more components of the image. In another example, calculating a convolution of an image may include stacking arrays from different components to create a three-dimensional array, and calculating a three-dimensional convolution on the resulting three-dimensional array. 
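As a non-limiting illustration of the definitions above, the following Python sketch implements a one-dimensional convolution in which each value of the transformed sequence is a linear combination of a subsequence of the original sequence, and shows the stacking of two-dimensional component arrays into a three-dimensional array for a multi-component image. The sample sequence, kernel, and component values are arbitrary.

    import numpy as np

    def convolve_1d(original, kernel):
        """One-dimensional convolution as defined above: each value in the
        transformed sequence is a linear combination of a subsequence of the
        original sequence, with coefficients given by the kernel."""
        n, k = len(original), len(kernel)
        return [sum(original[i + j] * kernel[j] for j in range(k))
                for i in range(n - k + 1)]

    transformed = convolve_1d([1, 2, 3, 4, 5], [0.5, 0.5])  # a moving average
    print(transformed)       # [1.5, 2.5, 3.5, 4.5]
    print(transformed[-1])   # any such value is a "result value" of the convolution

    # For an image with several components, one option is to stack the
    # two-dimensional component arrays into a three-dimensional array, on
    # which a three-dimensional convolution could then be calculated.
    red, green, blue = (np.ones((4, 4)) * c for c in (1, 2, 3))
    stacked = np.stack([red, green, blue])   # shape (3, 4, 4)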
In some examples, a video may comprise one or more components (such as color components, depth component, etc.), and each component may include a three-dimensional array of pixel values (with two spatial axes and one temporal axis). In one example, calculating a convolution of a video may include calculating a three-dimensional convolution on one or more components of the video. In another example, calculating a convolution of a video may include stacking arrays from different components to create a four-dimensional array, and calculating a four-dimensional convolution on the resulting four-dimensional array. Some disclosed embodiments may include systems, methods, and non-transitory computer readable media configured for enabling content sharing between users of wearable extended reality appliances. Content sharing may involve the presentation by one entity to another of text, images, programs, or any other information. The content may be shared between users of wearable extended reality appliances, so that one entity wearing an appliance may view or otherwise have access to content available via the wearable extended reality appliance of another. Content sharing may be enabled in many different ways. In some embodiments, enabling may occur through a link between two extended reality appliances. In one example, the link may include a direct and/or indirect communication link between the two extended reality appliances. In another example, a system (such as a centralized system) may be in communication with each one of the two extended reality appliances, for example to provide each one of the two extended reality appliances content for presentation, to receive location and/or orientation information associated with the extended reality appliances, and/or any other information pertaining to presenting content. The link may include a linkage in a data-structure or a database of the system or maintained by the system. In one non-limiting example, a detected proximity of two entities may trigger an automatic or selectable sharing of content. In other embodiments, one entity may select content for sharing either with a group or an individual or may tie content to a physical or virtual location enabling anyone accessing that location to view the content. In some embodiments, content sharing may be enabled through permissions, where viewing of shared content is only enabled for entities with appropriate authority. In yet other embodiments, rules may define with whom content may be shared. In further exemplary embodiments, sharing may be enabled via a request sent from one appliance to another. Content sharing may be initiated in some embodiments as the result of signals from one or more sensors. For example, one or more image sensors might detect a gesture indicative of an intent to share content. Alternatively, a proximity sensor may trigger an ability to share when two wearable extended reality appliances are in proximity to each other. There are many different ways in which content sharing may be enabled consistent with this disclosure, and this disclosure is not limited to any particular one. Some disclosed embodiments may include computer readable media containing instructions that when executed by at least one processor cause the at least one processor to establish a link between a first wearable extended reality appliance and a second wearable extended reality appliance. 
In some embodiments, establishing a link may refer to any means of communicating between the first wearable extended reality appliance and the second wearable extended reality appliance. In some embodiments, there may be a formal protocol that governs how the first wearable extended reality appliance and the second wearable extended reality appliance transmit and receive data. Such protocols may include but are not limited to Transmission Control Protocol/Internet Protocols (TCP/IP), Bluetooth, Infrared, Near Field Communication, Ultra-Wide Band, WiFi, ZigBee, short-range communication protocol, and/or long-range communication protocol. Such means of communication between the first wearable extended reality appliance and the second wearable extended reality appliance may include, but are not limited to, Local Area Network (LAN), Wireless Local Area Network (WLAN), Virtual Private Network (VPN), an indirect communication link, and/or a direct communication link. The processor may cause the first wearable extended reality appliance to initiate communication with a second wearable extended reality appliance via one or more of the above-identified protocols. In some examples, establishing a link may refer to linkage in a data-structure and/or a database, for example, in a system coordinating an extended reality environment, in a system that communicates with both the first and the second wearable extended reality appliances, and/or in a system that provides content for presentation for both the first and the second wearable extended reality appliances. Some disclosed embodiments may involve establishing the link (for example, the communication link, the link in the data-structure and/or a database) and enabling information exchange when the first wearable extended reality appliance is detected in proximity to the second wearable extended reality appliance. Such instructions may be configured based on a threshold distance, and this threshold distance may vary between users of the extended reality appliance. For example, the first wearable extended reality appliance user may prefer to initiate a link with the second wearable extended reality appliance at a shorter threshold distance, for instance one or two meters, when a particular type of user is around, such as a family member. By contrast, the same first wearable extended reality appliance user may prefer to initiate a link with the second wearable extended reality appliance at a longer threshold distance in order to facilitate communication with a colleague or team member. The user may be free to update threshold distance settings depending on the user's preference. In some examples, the communication link between the wearable extended reality appliances may be through intermediate devices. Intermediate devices may include any networking devices positioned between a Remote Access Server (RAS) and a RAS client. Intermediate devices may aid in providing connectivity between two separate wearable extended reality appliances. An intermediate device may provide an extra layer of security before a communication link between two wearable extended reality appliances is established. For example, when security protocols are not satisfied, the intermediate device may prevent establishment of a connection between the two extended reality appliances. Some disclosed embodiments may involve presenting, through the first wearable extended reality appliance, first virtual content. 
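Referring back to the threshold distance discussed above, the following Python sketch is one non-limiting illustration of the proximity test: a link is established when the distance between the two appliances falls below a per-user threshold. The coordinates, the shared coordinate frame, and the threshold values are hypothetical.

    import math

    def should_establish_link(position_a, position_b, threshold_meters):
        """True when the two appliances are within the (per-user configurable)
        threshold distance of each other."""
        return math.dist(position_a, position_b) <= threshold_meters

    # Hypothetical positions in meters within a shared coordinate frame.
    first_appliance = (0.0, 0.0, 1.7)
    second_appliance = (1.2, 0.5, 1.6)

    # A shorter threshold for family members, a longer one for colleagues.
    print(should_establish_link(first_appliance, second_appliance, 1.0))  # False
    print(should_establish_link(first_appliance, second_appliance, 3.0))  # True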
The first virtual content may take on a variety of different forms and may include documents, photos, videos, virtual characters, and any other sharable media that can be transmitted wirelessly. In some examples, the virtual content may be presented as part of an extended reality environment. In other examples, virtual content may be displayed on a physical screen, such as a TV, tablet, laptop, or smartphone, or it may be displayed on a virtual screen, such as through one or more extended reality appliances. Some disclosed embodiments may include obtaining a first command to display the first virtual content via the second wearable extended reality appliance. A first command may refer to a signal received from the at least one processor to display virtual content via the second wearable extended reality appliance. In another example, the first command may refer to a command obtained from a user, and/or in response to an action of the user. The non-transitory computer readable medium may be configured to share content and may contain instructions for the at least one processor to send a signal to the first wearable extended reality appliance to display virtual content. In some disclosed embodiments, obtaining the first command to present virtual content may include identifying a sharing intent in data captured by the first wearable extended reality appliance. A sharing intent may refer to a user's desire to present virtual content to another user or to exchange virtual content with another user. In one example, a machine learning model may be trained using training examples to identify sharing intent by analyzing captured data. An example of such a training example may include sample captured data, together with a label indicating whether the sample captured data corresponds to a sharing intent and/or a parameter of the sharing intent (for example, an intent to share a particular content, an intent to share content presented in a particular area, an intent to share content with a particular entity, and/or an intent to share content in a particular way). The trained machine learning model may be used to analyze the data captured by the first wearable extended reality appliance and identify the sharing intent and/or parameters of the sharing intent. For example, a user's sharing intent may be identified in captured image data. Some non-limiting examples of such captured image data may include image data (such as one or more images and/or one or more videos) captured using an image sensor included in the first wearable extended reality appliance, in the second wearable extended reality appliance, or in a device in an environment of the first and the second wearable extended reality appliances (the device may differ from the first and the second wearable extended reality appliances, such as another wearable extended reality appliance, a stationary camera mounted in a room, and/or any other device that may capture image or video data). A user's sharing intent may be identified by analyzing the image data to detect movement or a change of position or orientation of an object in the captured image data. For example, a user's sharing intent may be identified by detecting a movement of the user's hand (e.g., the user is waving their hand), by using a visual gesture recognition algorithm to analyze the image data. 
In another example, a user's sharing intent may be identified by detecting that a user has moved a piece of hardware in the one or more captured images, for example using a visual object tracking algorithm. Additionally, such a sharing intent may be captured if a user moves a finger, nods their head, touches their extended reality appliance, or otherwise gestures to another user that he or she wishes to share content. In some examples, a machine learning model may be trained using training examples to identify sharing intent by analyzing images and/or videos. An example of such a training example may include a sample image and/or a sample video, together with a label indicating whether the sample image and/or sample video corresponds to a sharing intent and/or a parameter of the sharing intent (for example, an intent to share a particular content, an intent to share content presented in a particular area, an intent to share content with a particular entity, an intent to share content in a particular way, and/or an intent to share content at a particular time). The trained machine learning model may be used to analyze the captured image data and identify the sharing intent and/or parameters of the sharing intent. In some examples, a convolution of at least part of the image data may be calculated to obtain a result value of the calculated convolution. In one example, in response to a first result value of the calculated convolution, a sharing intent of the first virtual content may be identified, and in response to a second result value of the calculated convolution, no sharing intent of the first virtual content may be identified. In another example, in response to a first result value of the calculated convolution, a sharing intent of one virtual content may be identified, and in response to a second result value of the calculated convolution, a sharing intent of another virtual content may be identified. By way of example,FIG.6illustrates a sharing intent of first extended reality appliance user610via a sharing movement612. For example, as illustrated inFIG.6, hand waving by first user610reflects an intent to share virtual content614with second extended reality appliance user616. Hardware that may be used to capture a user's sharing intent may include a pointer, keyboard, mouse, joystick, or any other object designed to facilitate sharing data. Such a sharing intent may be captured when a user moves the hardware, such as a mouse or a joystick, towards another user. In some embodiments, a user's sharing intent may also be determined when the user waves a wand, presses a button on a keyboard, or otherwise gestures with respect to another user. In some embodiments, identification of sharing intent based on captured data from hardware may be configured based on a user's preference, and may not be limited to moving the hardware towards a second user. A user's preference refers to how a particular wearable extended reality appliance user chooses to configure their appliance to share virtual content. For example, one user may prefer to share virtual content via a hand movement, whereas another user may prefer to share content via a pointer or joystick. Alternatively, users may be permitted to define gesture preferences for sharing content. In some embodiments, the sharing intent may also be identified in captured voice data. 
For example, the captured voice data may include audio data captured using an audio sensor included in the first wearable extended reality appliance, using an audio sensor included in the second wearable extended reality appliance, and/or using an audio sensor included in another device. In one example, a user may verbally initiate sharing through a voice command. The command may include, but is not limited to, for example, “share content,” “transmit content,” “display content,” “present content,” or any other word combination that may be construed as a request to share content with another user. Additionally, the verbal command may name a second user specifically, and may take the form of, for example, “share content with User X.” Verbal commands may also be configured based on a user's preference. In one example, the captured voice data may be analyzed using a speech recognition algorithm to identify voice commands corresponding to the sharing intent and/or parameters of the sharing intent. Each unique wearable extended reality appliance user's preferences, as described above, may be stored in the non-transitory computer readable medium enabled for control of content sharing. In some embodiments, the sharing intent may be identified in captured positioning data. Such positioning data may be captured based on the first wearable extended reality appliance user's position and/or orientation, based on position and/or orientation of the first wearable extended reality appliance, and/or based on position and/or orientation of a device associated with the first wearable extended reality appliance (for example, of another device used by a user of the first wearable extended reality appliance, such as a keyboard, a glove, and/or a pointing device). For example, a sharing intent may be identified when a user positions their body and wearable extended reality appliance closer to those of another user. By way of another example, the sharing intent may be identified when a user stands from a sitting position, leans over to another user, or otherwise positions their body or extended reality appliance in such a way as to indicate a sharing intent. Sharing intent based on positioning data may also be configured based on a user's preference. In one example, the position and/or orientation may be determined by analyzing data captured using a sensor included in the first wearable extended reality appliance and/or in a different device, such as a positioning sensor (such as an accelerometer, a gyroscope, and/or a GPS sensor), and/or an image sensor (for example, by analyzing image data captured using the image sensor with ego-motion algorithms and/or visual object tracking algorithms). 
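Referring back to the voice commands discussed above, the following Python sketch maps a transcript produced by a speech recognition algorithm to a sharing action, including the “share content with User X” form. The function name and the returned dictionary format are hypothetical conveniences.

    import re

    def parse_sharing_command(transcript):
        """Maps recognized speech to a sharing action; returns None when no
        sharing intent is detected in the transcript."""
        text = transcript.lower().strip()
        match = re.match(r"share content with (.+)", text)
        if match:
            return {"action": "share", "target": match.group(1)}
        if text in {"share content", "transmit content",
                    "display content", "present content"}:
            return {"action": "share", "target": None}  # share per user preference
        return None

    print(parse_sharing_command("Share content with User X"))
    # {'action': 'share', 'target': 'user x'}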
In some embodiments, identifying the sharing intent may include determining that a first user of the first wearable extended reality appliance changed an orientation of a virtual screen presenting the first virtual content towards a second user of the second wearable extended reality appliance. In one example, determining that a first user of the first wearable extended reality appliance changed an orientation of the virtual screen towards a second user may include determining that the virtual screen is oriented towards the second user, and determining that the virtual screen became oriented towards the second user as a result of an action of the first user (and not, for example, as a result of a movement of the second user). In one example, determining whether the virtual screen became oriented towards the second user as a result of an action of the first user may be based on an analysis of movements of the second user and/or movements of the virtual screen and/or inputs leading to the movements of the virtual screen. For example, at least one of a system coordinating an extended reality environment, a system that communicates with both the first and the second wearable extended reality appliances, a system that provides content for presentation for both the first and the second wearable extended reality appliance, or the first wearable extended reality appliance may determine that an orientation of the virtual screen changed towards the second user. Based on the first wearable extended reality appliance user's preference, this may manifest a sharing intent with the second user. In some examples, a first angle may be an angle between a surface corresponding to the virtual screen and a direction associated with a user of the first wearable extended reality appliance, a second angle may be an angle between a surface corresponding to the virtual screen and a direction associated with a user of the second wearable extended reality appliance, and the determination that the orientation of the virtual screen presenting the first virtual content changed towards the second user of the second wearable extended reality appliance may be based on the first angle and the second angle. Some non-limiting examples of such a direction associated with a user may include a direction of a head of the user, a direction of a body of the user, a direction of a gaze and/or an eye of the user, a fixation line direction of the user, a line of sight direction of the user, a visual axis direction of the user, a pupillary axis direction of the user, an optical axis direction of the user, and/or a direction associated with an orientation of the wearable extended reality appliance of the user. In one example, a function of the first angle and the second angle may be calculated and compared with a threshold to determine whether the orientation of the virtual screen presenting the first virtual content changed towards the second user. In another example, a machine learning model may be trained using training examples to determine whether orientations of virtual screens are changed towards users based on angles and/or distances. An example of such a training example may include sample angles of a sample virtual screen with respect to sample users and/or sample distances of the sample virtual screen from the sample users, together with a label indicating whether the sample virtual screen is oriented towards each one of the sample users. 
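As a non-limiting illustration of the angle-based determination just described, the following Python sketch computes the first and second angles from a screen normal and per-user direction vectors, and applies one possible function of the two angles against a threshold. The geometry, the margin, and all names are hypothetical, and a trained model could replace the fixed rule.

    import math

    def angle_between(v1, v2):
        """Angle in degrees between two 3D direction vectors."""
        dot = sum(a * b for a, b in zip(v1, v2))
        norm = math.sqrt(sum(a * a for a in v1)) * math.sqrt(sum(b * b for b in v2))
        return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

    # Hypothetical geometry: the screen's normal and each user's gaze direction.
    screen_normal = (0.0, 0.0, 1.0)
    first_user_direction = (0.6, 0.0, 0.8)    # the screen now faces the first user obliquely
    second_user_direction = (0.0, 0.0, 1.0)   # the screen faces the second user head-on

    first_angle = angle_between(screen_normal, first_user_direction)
    second_angle = angle_between(screen_normal, second_user_direction)

    # One possible function of the two angles compared with a threshold:
    # the screen counts as oriented towards the second user when it faces
    # the second user more squarely than the first by some margin.
    ORIENTED_MARGIN_DEGREES = 15.0
    oriented_towards_second_user = (first_angle - second_angle) > ORIENTED_MARGIN_DEGREES
    print(first_angle, second_angle, oriented_towards_second_user)  # ~36.9 0.0 True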
The trained machine learning model mentioned above may be used to analyze the first angle, the second angle, and possibly the distances of the users from the virtual screen, to determine that the virtual screen is oriented towards the second user. Furthermore, the user of the second wearable extended reality appliance may be located within proximity, whether a physical threshold distance or virtual proximity, of the virtual screen. In one example, the sharing intent may be determined if a user of the first wearable extended reality appliance leans or nods towards the user of the second wearable extended reality appliance, such that the change in orientation may be captured. The user of the second wearable extended reality appliance may also be a representation of the user, such as an avatar. Thus, a first user may share content based on a change in orientation even if the second user is not physically located next to the first user. In some embodiments, identifying the sharing intent may include determining that a first user of the first wearable extended reality appliance moved the first virtual content towards a virtual sharing space surrounding a second user of the second wearable extended reality appliance. In one example, the virtual sharing space surrounding a second user may be referred to as a sharing bubble. For example, the at least one processor may be configured to detect an action that may take the form of a gesture, a dragging of the first virtual content, or other physical movements made by the first user. The at least one processor may be configured to determine movement of the first virtual content towards a virtual sharing space based on the detected gesture, dragging, or other physical movement. This sharing action may also be based on people's natural movement to lean towards other people with whom they wish to share content. Such movements may be detected by one or more sensors within at least one extended reality appliance, or via a sensor associated with another device. The virtual sharing space in the above embodiment, known as a sharing bubble, may be determined automatically in a myriad of different ways. For example, the virtual sharing space may be determined based on a position and/or a movement of the second user and/or the second wearable extended reality appliance. For example, the virtual sharing space may include a space within a selected distance from the second user and/or the second wearable extended reality appliance. In this embodiment, when a first user is within a certain distance of a second user, virtual content may be automatically shared with the second user. In another example, the sharing bubble may be elongated in the direction of the movement. In yet another example, the sharing bubble may be larger in horizontal directions than in vertical directions. The sharing bubble may be a space around a user. This space may be defined in a variety of different ways. 
In some embodiments, the size and/or orientation of a bubble surrounding one or more users may be determined by the number of users, the positions of the users, the positions of other people not belonging to the bubble, the orientations of the users, the orientations of other people not belonging to the bubble, the identities of the users, the identities of the users not belonging to the bubble, the positions of inanimate objects in the vicinity of the users, the types of the inanimate objects in the vicinity of the users, the status of the inanimate objects in the vicinity of the users, motion patterns of the users, motion patterns of other people not belonging to the bubble, preferences of the users, past behavior of the users, gestures of the users, and/or other factors regarding proximity of users to one another. In one example, a topological function of one or more of these parameters may define the space of the sharing bubble. When the virtual sharing bubbles of two or more users collide, the two or more users may be automatically offered an opportunity to create a new shared bubble in which virtual content items may be shared among the extended realities of the two or more users. The at least one processor may be configured to determine that there is an overlap between virtual sharing bubbles, and in response the at least one processor may suggest to the users of the overlapping bubbles that virtual content may be shared between those users. A user's wearable extended reality appliance may also be configured to share content automatically upon overlapping with another user's bubble. By way of example,FIG.7Aillustrates a pair of users714and716. As illustrated inFIG.7A, first wearable extended reality appliance user714may have an associated virtual sharing bubble710and second wearable extended reality appliance user716may have an associated virtual sharing bubble718. Virtual sharing bubble710of first wearable extended reality appliance user714may come into contact with and overlap with virtual sharing bubble718of second wearable extended reality appliance user716when, for example, first wearable extended reality appliance user714moves towards a second user716.FIG.7Billustrates a condition in which virtual sharing bubble710overlaps with virtual sharing bubble718via overlapping region720. Virtual content712may be automatically shared with second wearable extended reality appliance user716when the sharing bubbles710and718overlap in, for example, overlapping region720, and/or when the volume of overlapping region720is above a selected volume threshold. In one example, the volume threshold may be selected by configuration, by a user, or as a function of a volume of virtual sharing bubble710, a volume of virtual sharing bubble718, the users714and716, and/or the virtual content712. By way of example,FIGS.8A and8Billustrate a combined virtual sharing bubble810after the sharing bubbles from first extended reality appliance user812and second extended reality appliance user816overlap, as shown inFIG.8A. While virtual content814may be automatically shared when these virtual sharing bubbles overlap, as shown inFIG.8B, virtual content814may also be shared automatically when a first user812leans towards a second user816(for example, when first user812leans towards second user816when the virtual sharing bubbles overlap). The combined sharing bubble810may reduce in size the closer the users get to one another. 
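As a non-limiting illustration of the volume-threshold test described above, the following Python sketch models each sharing bubble as a sphere and uses the standard sphere-sphere intersection formula to compare the volume of the overlapping region against a selected volume threshold. Real sharing bubbles need not be spherical; the radii, the distance, and the threshold are hypothetical.

    import math

    def bubble_overlap_volume(d, r1, r2):
        """Overlap volume of two spherical sharing bubbles with radii r1 and r2
        whose centers are a distance d apart (sphere-sphere intersection)."""
        if d >= r1 + r2:
            return 0.0                              # the bubbles do not touch
        if d <= abs(r1 - r2):
            r = min(r1, r2)                         # one bubble is inside the other
            return 4.0 / 3.0 * math.pi * r ** 3
        return (math.pi * (r1 + r2 - d) ** 2 *
                (d ** 2 + 2 * d * r2 - 3 * r2 ** 2 + 2 * d * r1
                 + 6 * r1 * r2 - 3 * r1 ** 2) / (12 * d))

    # Share automatically once the overlapping region's volume exceeds a
    # selected volume threshold (a configurable value, here in cubic meters).
    VOLUME_THRESHOLD = 0.5
    overlap = bubble_overlap_volume(d=1.5, r1=1.0, r2=1.0)
    print(overlap, overlap > VOLUME_THRESHOLD)      # ~0.36 False -> do not share yet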
The virtual sharing space may also be determined based on a usage mode of the second wearable extended reality appliance, the user of the second wearable extended reality appliance, and/or objects surrounding the second wearable extended reality appliance. For example, a user may configure the extended reality appliance to automatically share content when that user enters a room, passes a second user's desk, or walks within proximity of any other object in a room. Additionally, a usage mode may be configured to automatically share content with some users, and to not share the content with others. A usage mode may refer to the different settings a user applies to their extended reality system (for example, their wearable extended reality appliance) based on the different tasks they wish to accomplish. For example, one usage mode may apply only to activities performed in the workplace. In this example, certain types of content, including documents and emails, may be shared at a certain distance between two wearable extended reality appliance users. In another example, a usage mode may apply only to activities performed at home. A user may wish to share a video, for example, but not any documents or emails. In some embodiments, identifying the sharing intent may also include determining that a first user of the first wearable extended reality appliance moved the first virtual content towards a defined sharing space. For example, additionally or alternatively to automatically sharing content when two extended reality appliance users move towards each other, a user may create a sharing space based on predefined parameters, such as length, width, height, and/or location in a room. Instead of sharing content when two users' virtual sharing bubbles overlap, the sharing intent may be determined when users move content towards and/or into such a predefined space. In contrast with the sharing bubble that encompasses each user and has been described above, this sharing space may be referred to as a sharing ring, as its characteristics are predefined by an extended reality appliance user. Sharing rings may be configured based on a user's preferences. In particular, the location and/or size of a sharing ring may be configured by each user of a wearable extended reality appliance. Some non-limiting examples of such a virtual sharing space may include a ring, a rectangle, a cube, a sphere, a convex shape, a non-convex shape, a smooth shape, an irregular shape, or any other well-defined shape. In some examples, the virtual sharing space may be anchored to a physical location, may be anchored to a physical object, may be created by a user, or may be created based on any other predefined characteristic. A virtual sharing space (such as a sharing ring) may have any two-dimensional or three-dimensional shape, and is not limited to a ring. In one example, a virtual sharing space (such as a sharing ring) may be anchored to an object and configured to move with the object. For example, a virtual sharing space may be anchored to a keyboard, an input device, an output device, a chair or a desk associated with a particular user, and moving virtual content into that virtual sharing space may cause the virtual content to be shared with the particular user. In an example of a desk, the virtual sharing space may correspond to at least part of an area of a top surface of the desk. In an example of a chair, the virtual sharing space may correspond to a three-dimensional shape encompassing at least part of the chair. 
In an example of a keyboard, the keyboard may be placed on a desk, and the virtual sharing space may correspond to at least part of an area of a top surface of the desk, for example a part of the area of the top surface of the desk in a vicinity of the keyboard. Sharing content may also be based on a predefined privacy setting, which may be configured by an extended reality appliance user. In some examples, content may be shared between two users, may be shared with the public, may be shared among a plurality of users, or may include any other combination of public and private sharing. The type and level of sharing and/or the identity of the users that may access the virtual content may be determined based on the virtual sharing space, the content, the user, or any other factors based on where the user is located. In the sharing space, movement of the virtual content may be a result of an action of a user of the first wearable extended reality appliance. Such an action may take the form of, for example, a gesture, a dragging of the first virtual content, or any other physical movement or movement involving the first virtual content. By way of example,FIG.9illustrates a sharing ring914, which may be a predefined virtual sharing space created by a first wearable extended reality appliance user910, a virtual sharing space created by a user of another wearable extended reality appliance (such as one of users918or a user of a different wearable extended reality appliance), a virtual sharing space created by a user not using a wearable extended reality appliance, and/or an automatically generated virtual sharing space. First user910may place virtual content912in a predefined area914such that virtual content912, previously only visible to the first wearable extended reality appliance user910, becomes shared content916that may be visible to other extended reality appliance users918. In some examples, virtual content items may be shared based on people's natural movement and/or based on a user moving a virtual display screen and placing it in a location which is visible to other users or participants. For example, it may be recognized that there is a hand movement or a gesture, and in response a virtual content item (such as a virtual display screen) may be shared with the relevant users. In some embodiments, all participants may see the virtual display screen docked in the same location. In some embodiments, a virtual content item may be simultaneously viewed in a public region by a plurality of participants and in private mode as an additional/duplicate virtual display screen of a particular user, docked where the user chooses. For example, a user may prefer to see a virtual content item from a closer distance, especially if it includes text, may prefer to add a private overlay to the virtual content item, or may prefer any other configuration. In one example, virtual content items with lower levels of privacy may be shared without need of user approval. In one example, public content items may be viewed in shared and private modes, and shared content may be viewed in private mode. In some embodiments, for example, the sharing of virtual content may be triggered when a first user's and a second user's sharing bubbles overlap, or when a user shares content in a predefined area, i.e., a sharing ring, or by any other means discussed herein. 
However, before virtual content may be shared between a first wearable extended reality appliance and a second wearable extended reality appliance, instructions contained in a non-transitory computer readable medium may cause at least one processor to establish a link between the two appliances. In some embodiments, virtual content may be automatically shared via a virtual sharing space when the link between the first wearable extended reality appliance and the second wearable extended reality appliance is established. A processor may automatically initiate the sharing of virtual content when the first and second wearable extended reality appliances are linked so as to communicate with each other. The shared virtual content may be selected based on the preference of the users of the first and/or second extended reality appliances. Whether sharing begins immediately after establishing a link may be based on a user's preference, and the extended reality appliance may be configured for different uses. For example, one user may configure their wearable extended reality appliance or system to automatically share content in the workplace. In this example, the appliance may be configured to share content at a longer distance, such as when a coworker is in view, and the content may be limited to documents and emails. In another example, a user may configure their wearable extended reality appliance or system to automatically share content at home. In this example, the appliance may be configured to only automatically share content at a shorter distance, such as when a family member is directly next to the user. Such virtual content may be limited to videos and music and may not include documents or emails. In some embodiments, a visualization of the virtual sharing space may be presented by each of the first wearable extended reality appliance and the second wearable extended reality appliance. For example, on a typical computer screen, one can share a document by dragging and dropping files between different folders, such as a Dropbox folder. In contrast, in some embodiments of the disclosure, sharing virtual objects does not have to be limited to folders. Rather, virtual objects may be shared via a virtual box, a virtual window, or an otherwise marked area. A virtual box may refer to a space on a virtual display presented by the first or second wearable extended reality appliance where multiple users may share virtual content. This sharing act may also take place in the three-dimensional virtual space, and the act of sharing virtual objects via such a folder, virtual box, or window may be visible to users of both the first and second wearable extended reality appliances. In some examples, the visualization of the virtual sharing space may include a visual indication of a border of the virtual sharing space, a visual indication of a volume of the virtual sharing space, and/or a visual indication of a center of the virtual sharing space. Some non-limiting examples of such visual indicators may include virtual colorization of a surface or a volume, virtually highlighting a surface or a volume, and/or one or more virtual visual symbols at selected locations. When a user virtually moves virtual content towards and/or into the visualized virtual sharing space, a sharing of the moved virtual content may be triggered. For example, the user may move the virtual content using gestures, using a computer pointing device, using textual inputs, and/or using voice commands. 
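As a non-limiting illustration of triggering a sharing when virtual content is moved into a virtual sharing space, the following Python sketch tests whether the content's position falls inside a box-shaped space defined by predefined parameters (a center plus length, width, and height). The containment test and all coordinates are hypothetical, and other shapes would substitute their own test.

    def in_sharing_space(content_position, space_center, space_size):
        """True when the virtual content's position lies inside a box-shaped
        virtual sharing space defined by a center and (length, width, height)."""
        return all(abs(c - o) <= s / 2
                   for c, o, s in zip(content_position, space_center, space_size))

    # A hypothetical sharing space anchored above a desk: 1.0 m x 0.6 m x 0.4 m.
    space_center = (2.0, 0.0, 1.0)
    space_size = (1.0, 0.6, 0.4)

    print(in_sharing_space((2.1, 0.1, 1.1), space_center, space_size))  # True  -> trigger sharing
    print(in_sharing_space((3.5, 0.1, 1.1), space_center, space_size))  # False -> keep private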
In some embodiments, a location of the virtual sharing space may be determined based on a location of a physical object in an environment of the first wearable extended reality appliance. For example, certain physical objects such as whiteboards or other display surfaces may be linked to a virtual sharing space such that anyone with permissions and in the vicinity of that object may have access to shared content. In some embodiments, an extended reality appliance user may select an object for defining a virtual sharing space. For example, a piece of artwork on a wall or a physical object on a table may be selected as a basis for a virtual sharing space, and upon detection of the selected object, a virtual display screen may appear in a region adjacent the selected object. By way of example,FIG.10Aillustrates a sharing bubble, wherein the second wearable extended reality appliance1012receives first virtual content1014from the first wearable extended reality appliance1010via sharing region1016. Such virtual content may also be shared via a sharing ring as shown inFIG.9. In another example,FIG.10Billustrates presenting received virtual content. Here, the second wearable extended reality appliance1012presents first virtual content1014received from the first wearable extended reality appliance1010. The virtual content may be presented via the wearable extended reality appliance on a screen such as a tablet or TV, via the appliance itself, or on any other physical object, as represented by the square inFIG.10B. Some embodiments may provide an indicator via the first wearable extended reality appliance upon the first virtual content being presented via the second wearable extended reality appliance. An indicator may refer to any sort of mark, annotation, or signal that informs a user that his or her content is being displayed on another's extended reality appliance. Additionally, the indicator may be demonstrative of whether the content is privately shared, publicly shared, shared with a particular individual (and/or the identity of the particular individual), shared with a particular group (and/or the identity of the particular group), and other public or private sharing options. As another example, the indicator may indicate whether a user of the second wearable extended reality appliance has included annotations or other comments on the first virtual content, or has edited or transformed the first virtual content. In one example, the indicator provided via the first wearable extended reality appliance may be an audible indicator. In another example, the indicator provided via the first wearable extended reality appliance may be a visual indicator. For example, the visual indicator may be present on or in conjunction with the first virtual content. In another example, the visual indicator may include a visual modification to a presentation of the first virtual content. There are many ways in which virtual content may be shared between wearable extended reality appliance users. Content may be shared via a sharing bubble or a sharing ring. A first wearable extended reality appliance user may share content, via a sharing bubble or sharing ring, with a second wearable extended reality appliance user. In the above embodiments, the second wearable extended reality appliance user may display content received from the first user. 
This content may be displayed in a myriad of ways, whether it be through the second user's extended reality appliance, via a screen such as a tablet or TV, a flat surface such as a wall, window, or chalkboard, and/or any other physical object that a user may designate to present virtual content. For example, the second wearable extended reality appliance, after receiving virtual content from the first user, may display the content via the second appliance, and/or via a TV or whiteboard. In some embodiments, the first virtual content may be associated with at least one private virtual object and at least one public virtual object, and causing the virtual content to be transmitted for display by the second wearable extended reality appliance may include transmitting the at least one public virtual object and avoiding transmission of the at least one private virtual object. As described herein, a virtual object may refer to a virtual display screen, to virtual content items, to virtual two-dimensional objects, to virtual three-dimensional objects, documents, media items, photos, videos, virtual characters, user-generated content, and/or components of the listed objects (e.g., a volume or brightness adjustment bar). Furthermore, a public virtual object may include a virtual object that may be shared for viewing by other wearable extended reality appliance users and that may not be subject to any restrictions on sharing with other users. By contrast, a private virtual object may include virtual content that a wearable extended reality appliance user does not wish and/or is not allowed to share with any other user or with any other user that is not part of a predefined group. In some examples, a user may configure which virtual objects are public and which virtual objects are private, or may configure different privacy levels for different virtual objects. In other examples, it may be determined automatically whether a virtual object is private or public, for example based on an analysis of visual and/or non-visual characteristics of the virtual object. In the above embodiments, a user may specify that some or all portions of a virtual content may not be shared with another user. For example, when the virtual content is visualized as a screen, the screen may have both a public and a private window. The public and private windows each may contain virtual objects. Thus, in some disclosed embodiments, virtual content may be displayed with only the public virtual object being visible to the second wearable extended reality appliance user, with the private virtual content remaining hidden. Virtual content, meant to be shared between wearable extended reality appliance users, may consist of various virtual objects, some private and some public. By way of example, virtual content may include a table or graph illustrating financial information associated with a user, and the user may not wish to share the financial information with other users. The user may designate the financial information in the virtual content as a private virtual object, thus preventing other users from being able to see that virtual content. In another example, virtual content may include a document with confidential exhibits. The user may designate the exhibits as a private virtual object, thus preventing other users from being able to see that virtual object.
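By way of a non-limiting illustration, the transmission rule described above (sending public virtual objects while avoiding transmission of private ones) may be sketched as follows; VirtualObject, its fields, and the transmit callback are illustrative placeholders rather than defined elements of the disclosure.

# Non-limiting sketch: transmit only public virtual objects; private
# objects never leave the first appliance.
from dataclasses import dataclass

@dataclass
class VirtualObject:
    name: str
    payload: bytes
    is_private: bool

def share_content(objects, transmit):
    for obj in objects:
        if not obj.is_private:
            transmit(obj)  # private objects are never placed on the wire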
In some embodiments, virtual content items may be shared between same levels of privacy without need for a user's approval. Levels of privacy may refer to which virtual content items may be or are shared with which specific wearable extended reality appliance users. Levels of privacy may be established by dividing the virtual content itself, or the other users, into one or more groups, with each group being allowed to visualize some or all of the virtual content. The number of levels of privacy may be configured based on the user's preferences. For example, some of the virtual content may be designated as content that may be shared with all other users. This may qualify as a low level of privacy. In another example, a portion of the virtual content may be designated as content that may be shared with users who may be related to a user sharing the virtual content. The relationship may be based on family relationships or professional relationships. For example, a user may designate a portion of or all of a virtual content as content that may be shared only with immediate family members (e.g., spouse, children, or parents) or with users in a particular professional group (e.g., an office team or department). This may qualify as a medium level of privacy. As another example, a user may designate some or all of a virtual content as content that may be viewed only by the user and by no one else. This may qualify as a high level of privacy. Although only three levels of privacy (e.g., low, medium, high) have been discussed in this specification, the present disclosure is not so limiting, and it is contemplated that any number of levels of privacy may be established consistent with embodiments of the present disclosure. In some embodiments, virtual content items may require a user's approval before being shared at a lower level of privacy than the current level of privacy of the virtual content item. In some examples, the decision of whether a virtual display screen needs to be shared may be based on the virtual size of the virtual display screen, the orientation of the virtual display screen, the location of the virtual display screen, and factors related to the virtual screen configuration. In some embodiments, different sharing levels may be applied. Some non-limiting examples of such sharing levels may include private (available only to the user), shared (available to people the user chooses to share with), public (available to anyone in a particular physical space), or any other combination of public, private, or shared. In one example, a particular physical space may have owners, and only an owner of the particular physical space may share public content in that particular physical space. For example, a company may share public content within their office, or users may share public content within their house.
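By way of a non-limiting illustration, the graduated privacy levels and the approval rule discussed above may be sketched as follows; the three-level enumeration and the function name are illustrative assumptions, and any number of levels may be substituted.

# Non-limiting sketch: ordered privacy levels and the approval rule
# described above (sharing at a lower level than the item's current
# level requires the user's approval).
from enum import IntEnum

class PrivacyLevel(IntEnum):
    LOW = 1      # shareable with all users
    MEDIUM = 2   # shareable with family or a professional group
    HIGH = 3     # viewable only by the owner

def needs_approval(item_level, target_level):
    # Moving content to a lower privacy level widens its audience, so
    # approval is required; same-level sharing proceeds without it.
    return target_level < item_level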
In some examples, private content items, including, but not limited to, virtual display screens and widgets, may be shared with other users in a way that does not enable identification of some types of items. For example, items such as text, images, fine details, or any other media that a user may not want to share may not be identified or visible to the other users. By way of example, some or all of the virtual content may appear blurred to some users but may be clearly visible to other users based on how the first wearable extended reality appliance user configures their appliance or level of privacy. Other methods of obscuring the virtual content may be employed. In one example, when a first user comes close to a second user, the first user may see private content of the second user as if being viewed through a milky glass, but without seeing content details. Thus, the second user's contents may be protected from a first user who may not have authority to view private documents. In some embodiments, it may be communicated to the user that a particular content is going to be shared with others, and the user may be provided time to cancel sharing before the particular content is shared. Additionally, a user may obtain information related to the user's objects being shared with other users, information related to objects of others that are shared with the user, information related to these other users, and/or information related to a level of sharing of different users. In some embodiments, a user may select a virtual content item, such as a virtual display screen, a document, or other media, to be shared and send it to a physical screen, using a gesture. Such physical screens may include, for example, but are not limited to, a TV, tablet, laptop, smartphone, smartboard, board, physical screen designated for content sharing, or any other physical screen. Gestures used by a user to share virtual content may include, for example, a drag, a pinch, a swipe, or any other hand movement. As a result, the virtual content item may be shared, may be presented over the physical screen, or may be manipulated in any other way as desired by the user. In some examples, the physical screen may serve as and/or induce a virtual sharing space, for example with the screen borders as the area of sharing, and the virtual content item may be presented as a virtual layer and viewed by extended reality appliances. By way of example, FIGS. 11A and 11B illustrate sharing public virtual content 1110. This content may be shared to a physical screen 1114, here, a TV, as shown in FIG. 11B, for example by using a gesture 1112, as shown in FIG. 11A. Such a gesture may be a hand wave, finger point, or any bodily movement. In this embodiment, the virtual content shared to a physical screen may be public virtual content; everyone may see the content, including those who are not wearing extended reality appliances. In some examples, a user may move her/his own virtual content item, such as a virtual display screen or document, to any of her/his personal physical screens. Such personal screens may include a personal physical display screen, a smartphone screen, or a tablet screen. The user may move his/her own content item by using a gesture. In response, the virtual content item may move to the physical display screen (i.e., may be presented using the physical display screen) or may be presented virtually over the physical display screen, for example without additional approvals, and with the virtual content item staying in the same level of privacy. When sharing some content, however, additional approval may be required. For example, a user may desire to share private virtual content that has a higher level of privacy, i.e., is confidential, with another user. Such virtual content may be password protected, be accessible to only certain users, or otherwise be made private. In this scenario, the sharing user may be prompted to enter a password, contact a system administrator, or otherwise request permission to share the virtual content.
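By way of a non-limiting illustration, the gesture-initiated sharing to a physical screen described above may be sketched as follows; the gesture names and the present callback are hypothetical placeholders, not prescribed interfaces.

# Non-limiting sketch: share a virtual content item to a physical screen
# when a recognized sharing gesture is observed.
SHARE_GESTURES = {"drag", "pinch", "swipe"}

def handle_gesture(gesture, item, screen, present):
    if gesture in SHARE_GESTURES:
        present(item, screen)  # e.g., render the item as a layer over a TV
        return True
    return False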
By way of example, FIGS. 12A and 12B illustrate sharing private virtual content 1210. Here, an extended reality appliance user may share their own virtual content 1210 with their own physical screen 1212 by using a gesture 1214, as shown in FIG. 12B. This content stays in the same level of privacy, i.e., no additional approvals are needed to share the private virtual content. In some examples, a user may initiate sharing of a virtual content with a physical screen. An identification of one or more users exposed to the physical screen may be made (for example, based on location information associated with the users, and/or based on an analysis of an image of a room that includes the physical screen). Based on the identified one or more users, it may be determined whether the virtual content may be shared with the physical screen without additional approval, whether the virtual content may be shared with the physical screen upon additional approval, and/or whether the virtual content may not be shared with the physical screen. In one example, after the virtual content is shared with the physical screen, a (potential or actual) exposure of the physical screen to an additional user may be identified (for example, based on location information associated with the additional user, and/or based on an analysis of an image of a room that includes the physical screen), and in response to the identified exposure of the physical screen to the additional user, the sharing of the virtual content with the physical screen may be stopped, or a prompt may be provided to the user sharing the virtual content. Such a prompt may include an indicator of the exposure and/or of the additional user, and/or may be configured to enable the user sharing the virtual content to continue or stop the sharing of the virtual content. In some embodiments, the privacy settings described above may be configured as part of a user's default settings. In some embodiments, causing the first virtual content to be displayed by the second wearable extended reality appliance may include modifying the first virtual content according to default settings associated with the second wearable extended reality appliance. The default settings may be configured according to the second wearable extended reality appliance user's preference and/or hardware available in the second wearable extended reality appliance and/or environmental conditions associated with the second wearable extended reality appliance. For example, the second wearable extended reality appliance may operate at a lower or higher resolution than the first wearable extended reality appliance, and the rendering resolution of the first virtual content may be modified accordingly. In another example, the second wearable extended reality appliance may operate at different ambient light conditions than the first wearable extended reality appliance, and intensity and/or opacity of the rendering of the first virtual content may be modified to compensate for the different ambient light conditions. In yet another example, the second wearable extended reality appliance may enable a different angular field-of-view than the first wearable extended reality appliance, and a size of the first virtual content may be modified to adjust to the different angular field-of-view. In some examples, default settings may refer to how a wearable extended reality appliance is configured for use or how the wearable extended reality appliance is used.
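By way of a non-limiting illustration, the modification of first virtual content according to the second appliance's default settings, as described above, may be sketched as follows. The content object, the settings keys, and the specific scaling rules are illustrative assumptions only.

# Non-limiting sketch: modify first virtual content per the receiving
# appliance's default settings. The content object is assumed to expose
# the illustrative fields used below.
def adapt_content(content, settings):
    # Match the receiver's rendering resolution.
    content.resolution = min(content.resolution, settings["max_resolution"])
    # Raise opacity to compensate for bright ambient light.
    if settings["ambient_lux"] > 500:
        content.opacity = min(1.0, content.opacity * 1.25)
    # Shrink content for a narrower angular field of view.
    fov_ratio = settings["fov_degrees"] / content.authored_fov_degrees
    if fov_ratio < 1.0:
        content.size *= fov_ratio
    return content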
In some embodiments, default settings may be configured to hide personal information such as passwords or credit card numbers, or the default settings may involve a user's preferred display parameters, i.e., where content is displayed or how large the content appears. Default settings may be customizable by a user of the wearable extended reality appliance and may include, for example, privacy settings, proximity settings, gesture settings, or any other settings that may determine how the wearable extended reality appliance presents and/or shares virtual content. In the above embodiments, privacy settings may determine an initial level of privacy associated with any virtual content that may be shared with another user. For example, a default privacy setting may allow virtual content to be shared with all users. As another example, a default privacy setting may prohibit virtual content from being shared with any other user. As another example, a default privacy setting may allow virtual content to be shared with a predefined group of users (e.g., immediate family members of a user, or colleagues in a particular team or department associated with the user). In the above embodiments, proximity settings may determine how far one user needs to be from another before a sharing bubble is created, how large a predefined sharing ring is and where it is presented, or a default physical object via which virtual content is to be presented. In the above embodiments, gesture settings may determine what default user movement manifests a sharing intent. For example, a wearable extended reality appliance user may configure the gesture settings to automatically share content when the user waves their hand, but not share content when the user nods their head. Such privacy, proximity, and gesture settings may be adjusted at any time depending on the preference of the user and what the desired use of the wearable extended reality appliance is. Some disclosed embodiments may involve receiving from the second wearable extended reality appliance second virtual content for display via the first wearable extended reality appliance. Receiving may involve any form of transmission of content. For example, the content may be received through a network-based communication of content, or through a direct communication of content, such as through a Bluetooth, Wi-Fi, NFC, or other direct communication protocol between multiple extended reality appliances or modems associated therewith. In some embodiments, a processor may instruct the first wearable extended reality appliance to receive virtual content from the second wearable extended reality appliance, and to display that virtual content. In some other examples, a processor may receive the virtual content from the second wearable extended reality appliance, and may provide the virtual content to the first wearable extended reality appliance, for example for display of the virtual content. There are many ways in which the first wearable extended reality appliance may receive virtual content from the second wearable extended reality appliance as discussed elsewhere in this specification. For example, the second wearable extended reality appliance may share virtual content with the first wearable extended reality appliance via a sharing bubble or a sharing ring. As another example, the second wearable extended reality appliance may share virtual content with the first wearable extended reality appliance as soon as a link between the two appliances is established.
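By way of a non-limiting illustration, the privacy, proximity, and gesture default settings described above may be grouped in a structure such as the following; all keys and values are hypothetical examples, not a prescribed schema.

# Non-limiting sketch: a default-settings structure grouping the privacy,
# proximity, and gesture settings discussed above.
DEFAULT_SETTINGS = {
    "privacy": {"initial_level": "group", "group": ["immediate_family"]},
    "proximity": {"bubble_distance_m": 1.5, "ring_radius_m": 0.5},
    "gesture": {"share_gestures": ["hand_wave"], "ignore": ["head_nod"]},
}

def manifests_share_intent(gesture, settings=DEFAULT_SETTINGS):
    # Only gestures the user has designated as sharing gestures count.
    return gesture in settings["gesture"]["share_gestures"]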
Some disclosed embodiments may include displaying multiple versions of virtual content. In some embodiments, the first virtual content may include a two-dimensional virtual object, and the second virtual content may include an alternate version of the two-dimensional object. An alternate version of a two-dimensional object may include, for example, a scaled down version of the two-dimensional object, a magnified version of the two-dimensional object, a two-dimensional object having a different location or orientation, a two-dimensional object having a different color, texture, or shading, or a two-dimensional object with some or all of its characteristics or functions having been altered in some manner. An alternate version of a two-dimensional object may be generated based on a viewing user's preference. For example, a two-dimensional object may be a virtual document and the alternate version of the two-dimensional virtual object may be an annotated or highlighted version of the virtual document. As another example, a two-dimensional object may be a presentation, and an alternate version of the two-dimensional object may have modified colors, graphics, and/or shapes in the presentation. By way of another example, multiple versions of virtual content may include both public and private virtual objects. For example, a first user of a first wearable extended reality appliance may send a document to a second user, who may share the document with a larger group. However, some of the comments and annotations on the virtual object may remain hidden based on the first or second user's privacy and/or document sharing settings. In some embodiments, the first virtual content may include a three-dimensional virtual object, and the second virtual content may include a two-dimensional virtual object associated with the three-dimensional object. When one wearable extended reality appliance user shares content with another user, such content may include multiple virtual objects. As described in the above embodiments, these virtual objects may include both two-dimensional objects and three-dimensional objects and may be subject to various privacy settings. A three-dimensional virtual object may be a design model or project mock-up, and the two-dimensional virtual object associated with the three-dimensional object may be a list of comments on the model. As with the two-dimensional models, subsequent versions of a three-dimensional virtual object may be configured to have different privacy settings. For example, a project mock-up, such as a bridge, building, or other scale or architectural model, may be shared as a three-dimensional virtual object with another user. Associated with the project mock-up may be a list of comments, i.e., a two-dimensional virtual object, regarding the size, length, width, or any other parameters relevant to the project mock-up. In addition to the two-dimensional objects associated with the three-dimensional objects, a user may include three-dimensional virtual objects in the modified version as well. In the project mock-up example for instance, a wearable extended reality appliance user may share a first version of the mock-up and may also share a modified version of the model that may have more components, may be a different size, or may have a different texture than the first one.
In response to receiving the virtual content from the second wearable extended reality appliance, some disclosed embodiments may include presenting via the first extended reality appliance the second virtual content received from the second wearable extended reality appliance. There are many ways in which virtual content may be shared between wearable extended reality appliance users. Content may be shared via a sharing bubble or a sharing ring. A second wearable extended reality appliance user may share content, via a sharing bubble or sharing ring, with a first wearable extended reality appliance user. In the above embodiments, the first wearable extended reality appliance user may present content received from the second user. This content may be presented in a myriad of ways, for example, through the first user's extended reality appliance, via a screen such as a tablet or TV, or via a physical object such as a whiteboard or chalkboard. In one embodiment, the first user, after receiving virtual content from the second user, may present the content via the first appliance, or via a TV or whiteboard. The first wearable extended reality appliance user may also present content received from a second user to other users who are nearby the first user. By way of example, FIGS. 13A and 13B illustrate a sharing bubble, wherein the first wearable extended reality appliance 1310 receives second virtual content 1314 from the second wearable extended reality appliance 1312 via sharing region 1316. Such virtual content may also be shared via a sharing ring. FIG. 13B illustrates an example of presenting received content 1314. Here, first wearable extended reality appliance 1310 may present second virtual content 1314 received from the second wearable extended reality appliance 1312. The virtual content may be presented via the wearable extended reality appliance, a screen such as a tablet or TV, or any other physical object. In another example, FIG. 14 shows a flowchart illustrating an exemplary method 1410 of coordinating between the first and second wearable extended reality appliances to display various virtual content. Method 1410 may contain step 1412, wherein the at least one processor may establish a link between the first and second wearable extended reality appliances, for example for communication between the two appliances or for control of the two appliances. Method 1410 may also contain step 1414, wherein first virtual content may be presented on the first wearable extended reality appliance before, after, or simultaneously with establishing a link with a second wearable extended reality appliance. Method 1410 may also contain step 1416, involving obtaining a command to display the first virtual content via the second wearable extended reality appliance. Method 1410 may also contain step 1418, wherein the first virtual content in some embodiments may be caused to be transmitted for display to the second wearable extended reality appliance. Method 1410 may also contain step 1420, wherein a second virtual content may be received from the second wearable extended reality appliance. Method 1410 may contain step 1422, wherein the first wearable extended reality appliance may display the second virtual content.
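By way of a non-limiting illustration, the flow of method 1410 (steps 1412 through 1422) may be sketched as sequential calls; each entry in the ops mapping is a hypothetical stand-in for the corresponding step, and no concrete interface is prescribed by the disclosure.

# Non-limiting sketch: the flow of method 1410, with each step number from
# FIG. 14 noted. Steps are shown sequentially, although step 1414 may also
# precede or accompany step 1412, as described above.
def method_1410(first, second, ops):
    ops["establish_link"](first, second)            # step 1412
    ops["present"](first, first["content"])         # step 1414
    command = ops["obtain_command"](first)          # step 1416
    if command:
        ops["transmit"](first["content"], second)   # step 1418
    received = ops["receive"](second)               # step 1420
    ops["display"](first, received)                 # step 1422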
Some disclosed embodiments may involve operations for providing situational awareness to users of wearable extended reality appliances. Situational awareness may include one or more of a perception of elements in an environment, a comprehension of a current situation, a projection of future status, or any ability to identify, process, or understand elements of information regarding what is occurring in an environment. For example, situational awareness may include an understanding of one or more of a location, area, point, geography, region, scene, setting, site, surroundings, topography, section, angle, inclination, dimension, amount, breadth, capacity, content, diameter, height, intensity, length, magnitude, proportion, range, volume, width, amplitude, age, date, presence, time, moment, occasion, or any other condition associated with any portion of an environment. In some examples, situational awareness may include an understanding of a state of another person in an environment of a user. For example, the situational awareness may include an understanding of whether the other person is engaged with virtual content (such as virtual displays, virtual avatars, etc.) or not. Some embodiments may cause virtual content to be displayed through a first wearable extended reality appliance. The term virtual content may include any type of data representation that may be displayed to the user, such as through an extended reality appliance or other presentation device. The virtual content may include a virtual object, inanimate virtual content, animate virtual content configured to change over time or in response to triggers, virtual two-dimensional content, virtual three-dimensional content, a virtual overlay over a portion of a physical environment or over a physical object, a virtual addition to a physical environment or to a physical object, a virtual promotion content, a virtual representation of a physical object, a virtual representation of a physical environment, a virtual document, a virtual character or persona, a virtual computer screen, a virtual widget, or any other format for displaying information virtually. Consistent with the present disclosure, the virtual content may include any visual presentation rendered by a computer or a processing device. In one embodiment, the virtual content may include a virtual object that is a visual presentation rendered by a computer in a confined region and configured to represent an object of a particular type (such as an inanimate virtual object, an animate virtual object, virtual furniture, a virtual decorative object, virtual widget, or other virtual representation). The rendered visual presentation may change to reflect changes to a status of the object or changes in the viewing angle of the object, for example, in a way that mimics changes in the appearance of physical objects. Some embodiments may involve detecting a second wearable extended reality appliance in proximity to the first wearable extended reality appliance. Proximity may refer to a closeness, adjacency, concurrence, contiguity, propinquity, close vicinity, or any other state or condition of being near. Proximity may be measured by any of a capacitive effect, inductive effect, magnetic effect, or optical effect. Proximity may also be detected using one or more of, for example, radar, sonar, ultrasonic, fiber optic, or Hall Effect technology, reflection of ionizing radiation, or any other technology that can detect the presence of nearby objects. Proximity may include distance, direction, or a combination of distance and direction.
In other examples, locations of the first and second wearable extended reality appliances may be based on positioning data corresponding to the two. In one example, the locations may be determined using indoor or outdoor positioning sensors included in each one of the two wearable extended reality appliances, from localization of the two wearable extended reality appliances in an image (for example, an image captured using an image sensor in the environment of the two wearable extended reality appliances and analyzed using an object detection algorithm to localize the wearable extended reality appliances), and so forth. In other examples, an image captured using an image sensor included in the first wearable extended reality appliance may be analyzed using an object detection algorithm to detect the second wearable extended reality appliance, and a distance between the two wearable extended reality appliances may be determined, for example based on a size of the second wearable extended reality appliance in the captured image, thereby determining whether the second wearable extended reality appliance is in proximity to the first wearable extended reality appliance. In other examples, an image captured using an image sensor included in the second wearable extended reality appliance may be analyzed using an object detection algorithm to detect the first wearable extended reality appliance, and the second wearable extended reality appliance may be considered in proximity to the first wearable extended reality appliance when the first wearable extended reality appliance is detected in the captured image (and therefore is in a line of sight of the second wearable extended reality appliance). In some embodiments, detecting a second wearable extended reality appliance in proximity to the first wearable extended reality appliance may be continuous. In some embodiments, detecting a second wearable extended reality appliance in proximity to the first wearable extended reality appliance may occur at regular or irregular time intervals. In some embodiments, detecting a second wearable extended reality appliance in proximity to the first wearable extended reality appliance may be triggered by an input. The input may be received from a user, a system, or any other source of information that may trigger a detection of proximity. In certain embodiments, detecting a second wearable extended reality appliance in proximity to the first wearable extended reality appliance may include detecting that the second wearable extended reality appliance is within a certain distance, width, height, radius, or other measure of separation from the first wearable extended reality appliance. For example, detecting a second wearable extended reality appliance in proximity to the first wearable extended reality appliance may include detecting that the second wearable extended reality appliance is within one foot, two feet, or five feet of, or within any other distance from, the first extended reality appliance. The measure of separation used to detect the proximity may be inputted by a user, a system, or may be determined from any other source of information. For example, a user may input a value of five feet to trigger a detection of proximity. In another embodiment, a system may include default settings to trigger a detection of proximity when a distance of five feet is detected between the first wearable extended reality appliance and the second wearable extended reality appliance.
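By way of a non-limiting illustration, the threshold-based proximity detection described above may be sketched as follows; the five-foot default mirrors the example, positions are assumed to come from the appliances' positioning sensors, and a per-direction comparison (described next) could substitute the single threshold.

# Non-limiting sketch: distance-threshold proximity detection.
import math

FIVE_FEET_M = 1.524  # five feet expressed in meters

def in_proximity(pos_first, pos_second, threshold_m=FIVE_FEET_M):
    # pos_first, pos_second: (x, y, z) coordinates of the two appliances.
    return math.dist(pos_first, pos_second) <= threshold_m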
In one example, the measure of separation used to detect the proximity may be different for different directions (for example, one measure of separation for horizontal distances, and a smaller measure of separation for vertical distances, or in another example, one measure of separation for an upward distance, and a smaller measure of separation for a downward distance). In some examples, the second wearable extended reality appliance may be considered in proximity to the first wearable extended reality appliance when the two wearable extended reality appliances are in a same defined space (such as a same room, a same apartment, a same office, a same building, and so forth). In some examples, the second wearable extended reality appliance may be considered in proximity to the first wearable extended reality appliance when the first wearable extended reality appliance is in a line of sight of the second wearable extended reality appliance. In some embodiments, a detection of proximity may be based on any reference point of each of the first wearable extended reality appliance and the second wearable extended reality appliance. For example, proximity may be detected based on a distance from any physical position of a wearable extended reality appliance. Alternatively, proximity may be detected based on a distance from a virtual position of a virtual object presented through a wearable extended reality appliance. In another example, proximity may be detected based on a distance from a reference point within a certain range of a wearable extended reality appliance. Some embodiments may involve establishing a link between the first wearable extended reality appliance and the second wearable extended reality appliance. A link may include one or more of a physical or non-physical attachment, or any other mode of connection. In some embodiments, a link may be established using physical attachment including, for example, a wire, cable, or any other type of attachment means that requires a tangible connection between the first wearable extended reality appliance and the second wearable extended reality appliance. In some embodiments, a link may be established using non-physical means, and may include wireless communication such as Bluetooth, ZigBee, Wi-Fi, or any other technology that implements the transfer of information between two or more points without the use of an electric conductor as a medium by which to perform the transfer. Furthermore, the link may be a direct or indirect link between the first wearable extended reality appliance and the second wearable extended reality appliance. A direct link between the first wearable extended reality appliance and the second wearable extended reality appliance may be an unobstructed connection between the first wearable extended reality appliance and the second wearable extended reality appliance. Alternatively, an indirect link between the first wearable extended reality appliance and the second wearable extended reality appliance may be an obstructed connection between the first wearable extended reality appliance and the second wearable extended reality appliance. This type of link may include a connection that connects systems that are connected to one or both of the first wearable extended reality appliance and the second wearable extended reality appliance. Such systems may include, for example, a laptop, smartphone, or any other type of non-wearable device that may be connected to a wearable extended reality appliance.
In one example, the link between the first wearable extended reality appliance and the second wearable extended reality appliance may include a direct and/or indirect communication link between the two extended reality appliances. In another example, a system (such as a centralized system) may be in communication with each one of the first and the second wearable extended reality appliances, for example to provide each one of the two wearable extended reality appliances content for presentation, to receive location and/or orientation information associated with the wearable extended reality appliances, and so forth. The link between the first wearable extended reality appliance and the second wearable extended reality appliance may include a linkage in a data-structure or a database of the system or maintained by the system. In some examples, establishing a link between the first wearable extended reality appliance and the second wearable extended reality appliance may refer to linkage in a data-structure and/or a database, for example in a system coordinating an extended reality environment, in a system that communicates with both the first and the second wearable extended reality appliances, in a system that provides content for presentation for both the first and the second wearable extended reality appliances, and so forth. Some embodiments may involve transmitting data representing at least a portion of the virtual content in an obscured form to the second wearable extended reality appliance, wherein the obscured form provides an indication of a position of the at least a portion of the virtual content in a three-dimensional space without revealing substance of the virtual content in obscured form. In one example, the data may be transmitted from the first wearable extended reality appliance to the second, for example through a direct or an indirect communication link. In this example, the first wearable extended reality appliance may process at least part of the virtual content to generate the obscured form, for example as described below. In another example, the data may be transmitted from a computerized system (for example, a system coordinating an extended reality environment, a system that communicates with both the first and the second wearable extended reality appliances, a system that provides content for presentation for both the first and the second wearable extended reality appliances, and so forth) to the second wearable extended reality appliance. In this example, the computerized system may process at least part of the virtual content to generate the obscured form, for example as described below. An obscured form may include a presentation of any portion of the virtual content in a manner that obfuscates that portion of the virtual content. In certain embodiments, an obscured form that provides an indication of a position of the at least a portion of the virtual content in a three-dimensional space without revealing substance of the virtual content may include a presentation of the at least a portion of the virtual content through masking, redaction, omission, pixilation, blurring, or any other means that at least partially conceals any portion of the virtual content.
In some examples, an obscured form may provide a visual indication of a position of the at least a portion of the virtual content in a three-dimensional space and/or a visual indication of a type of the at least a portion of the virtual content without revealing at least one detail of the at least a portion of the virtual content. For example, the obscured form may provide a visual indication that the at least a portion of the virtual content includes a textual content, without revealing the exact words and/or the exact letters of the textual content. In another example, the obscured form may provide a visual indication that the at least a portion of the virtual content includes a graphical content, without revealing the exact images and/or the exact graphics of the graphical content. The functioning of the at least one processor as prescribed by the instructions is exemplified in FIGS. 15 and 16. For example, FIG. 15 is an illustration of an exemplary environment including a plurality of users, using a plurality of wearable extended reality appliances, consistent with some embodiments of the present disclosure. As illustrated in FIG. 15, a first user 1514 of a first wearable extended reality appliance 1515 may be located at a first position 1510. Second user 1516 of a second wearable extended reality appliance 1517 may be located at a second position 1512 that is not in proximity of first position 1510 of first user 1514 (for example, in a distance longer than a selected threshold, not in the same defined space, not in the same room, not in the line of sight, and so forth). The first user 1514 may be presented with first content 1522 in a first virtual display 1518, while the second user 1516 may be presented with second content 1524 in a second virtual display 1520. Since the first wearable extended reality appliance 1515 is not in proximity to the second wearable extended reality appliance 1517, none of the first virtual content 1522 may be transmitted to the second user 1516. FIG. 16 is an illustration of exemplary virtual displays including a portion of the virtual displays provided in an obscured form, consistent with some embodiments of the present disclosure. As illustrated in FIG. 16, a first user 1614 of a first wearable extended reality appliance 1615 may be located at a first position 1610. Second user 1616 of a second wearable extended reality appliance 1617 may be located at a second position 1612 that is in proximity of a first position 1610 of first user 1614 (for example, in a distance shorter than a selected threshold, in a same defined space, in a same room, in a line of sight of wearable extended reality appliance 1615, and so forth). In this example, the first user 1614 may be presented with first content 1622 in a first virtual display 1618, while the second user 1616 may be presented with second content 1624 in a second virtual display 1620. Here, because the second wearable extended reality appliance 1617 is in proximity to the first wearable extended reality appliance 1615, a portion 1626 of the first virtual content 1622 may be transmitted to the second wearable extended reality appliance 1617. Moreover, the portion 1626 of the first virtual content 1622 may be shown in an obscured form to the second wearable extended reality appliance 1617 by being blurred, so that the blurred image provides an indication of a position of portion 1626 of the first virtual content 1622 in a three-dimensional space without revealing a substance of the virtual content 1622.
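By way of a non-limiting illustration, producing an obscured form that conveys only position (and, optionally, content type) may be sketched as follows; ObscuredForm, its fields, and the portion attributes are hypothetical assumptions rather than defined elements of the disclosure.

# Non-limiting sketch: an obscured form carrying only where content sits
# and what kind of content it is, never its substance.
from dataclasses import dataclass

@dataclass
class ObscuredForm:
    position: tuple       # (x, y, z) in the three-dimensional space
    bounding_box: tuple   # (w, h, d), indicating extent without content
    content_type: str     # e.g., "text" or "graphic"

def obscure(portion):
    # The portion's payload is deliberately never read or transmitted.
    return ObscuredForm(portion.position, portion.bounding_box, portion.kind)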
In some embodiments, the virtual content displayed through the first wearable extended reality appliance may include content of differing types, and the operations further include presenting via the second wearable extended reality appliance a type associated with the obscured information, without revealing substance of the content. Content of differing types may include content that varies with regard to format, presentation, shape, size, color, or any other aspect. In some embodiments, differing types of content may include, for example, word processing content, gaming content, spreadsheet content, database content, graphics content, or web browser content. As one example of a presentation consistent with such embodiments, the virtual content displayed through the first wearable extended reality appliance may include content of a web browser type. Accordingly, when data representing at least a portion of the virtual content is transmitted in an obscured form to the second wearable extended reality device, a web browser type that is associated with the obscured form is presented via the second display. In some embodiments, the virtual content displayed through the first wearable extended reality appliance may include a plurality of virtual screens, and the virtual content presented in obscured form via the second wearable extended reality appliance may further provide an indication of sizes and orientations of the plurality of virtual screens without revealing substance of content associated with the plurality of virtual screens. A virtual screen (also referred to as virtual display herein) may include any bounded shape or presentation area including virtual content, such as a rectangular window containing a plurality of virtual objects. A plurality of virtual screens may include two or more bounded shapes or presentation areas, with each including virtual content. The plurality of virtual screens may have different sizes, orientations, display settings, and other visual features. One or more of these visual features may be obfuscated when the plurality of virtual screens is presented to the second wearable extended reality appliance. For example, the virtual content displayed through the first wearable extended reality appliance may include one virtual screen that is square in shape and another virtual screen that is rectangular in shape. In such an embodiment, when the obscured form is presented via the second wearable extended reality appliance, the square shape and the rectangular shape of the corresponding virtual screens may be displayed to the user of the second wearable extended reality appliance, but the content of the square and rectangular shapes may be obscured. FIG. 17 is an example illustration of a virtual display provided in an obscured form, consistent with some embodiments of the present disclosure. For example, as illustrated in FIG. 17, the virtual content displayed through the first wearable extended reality appliance 1710 may include one virtual screen 1712 comprising two smaller windows 1714 and 1716 and another virtual screen 1718 comprising one large window 1720. When the virtual content is displayed through the second wearable extended reality appliance 1722, only the size of the two smaller windows 1714 and 1716 in the first virtual screen 1712 and the larger window 1720 in the second virtual screen 1718 may be displayed to the user of the second wearable extended reality appliance.
In this exemplary embodiment, the indication of the size without revealing substance of content associated with the plurality of virtual screens may be achieved through blurring, but blacking out, pixilation, or any other method of concealing the content is contemplated for use with other embodiments. In some embodiments, the virtual content displayed through the first wearable extended reality appliance may include a user interface presenting a conversation with one or more participants, and the obscured form may further include a representation of the conversation without revealing identities of the one or more participants. The user interface may include a graphical or textual display of communication exchanged by two or more users. The display may be generated by one or more applications that allow users to exchange textual or graphic communications. Such applications may include iMessage, Google Chat, SMS, Facebook Chat, Messenger, Slack, Teams, WhatsApp, Telegram, Signal, Zoom, or any other application used for composing and sending electronic messages, typically consisting of alphabetic and numeric characters, between two or more users of mobile devices, desktops, laptops, or another type of compatible computer. The user interface may be presented to another user if a second wearable extended reality appliance is in proximity to a first wearable extended reality appliance. For example, the user interface may include a chatting window comprising a plurality of participants. In such an embodiment, the obscured form may present an indication of a chatting window, such as through chatting bubbles, but conceal the names of the participants. Other indications of a participant's identity may also be concealed, such as a photograph, picture, avatar, association, organization, location, position, role in an organization, or any other information that may be tied to an individual's identity. An individual's identity may include their name, phone number, email, user ID, address, location, face, picture, video, or any other identifying information. In some embodiments, the virtual content displayed through the first wearable extended reality appliance may include a virtual three-dimensional object, and the obscured form may further provide an indication of a shape of the virtual three-dimensional object without revealing other details of the three-dimensional object. A virtual three-dimensional object may include a virtual three-dimensional representation of a physical person, or an animate or inanimate object. A shape of a three-dimensional object may include the architecture, body, configuration, contour, format, frame, outline, silhouette, dimensions, color, texture, identifying features such as projections and recesses, or any other characteristic of the object that describes its form. For example, the virtual content displayed through the first wearable extended reality appliance may include a three-dimensional rendering of a person. In such an embodiment, the obscured form may provide a silhouette of the person without revealing other details of the person, such as their face, what they are wearing, and any other attributes of the person that extend beyond merely their shape. 
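By way of a non-limiting illustration, the size-and-orientation indication of FIG. 17 may be sketched as serializing only screen geometry; the dictionary keys are illustrative assumptions.

# Non-limiting sketch: transmit only the geometry of each virtual screen;
# the "content" entry of each screen is intentionally never copied.
def screens_geometry_only(screens):
    return [
        {"width": s["width"],
         "height": s["height"],
         "orientation_deg": s["orientation_deg"]}
        for s in screens
    ]

Similarly, a conversation may be represented without revealing participant identities, per the chatting-window example above; the message format and the pseudonym scheme are hypothetical.

# Non-limiting sketch: keep the shape of the conversation (who spoke when,
# and how much) while concealing identities and message text.
def redact_conversation(messages):
    pseudonyms = {}
    redacted = []
    for msg in messages:  # msg: {"sender": str, "text": str}
        label = pseudonyms.setdefault(
            msg["sender"], f"Participant {len(pseudonyms) + 1}")
        redacted.append({"sender": label, "text": "\u2588" * len(msg["text"])})
    return redacted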
In some embodiments, the operations may further include receiving a request from the second wearable extended reality appliance to share the virtual content, and upon receiving confirmation from the first wearable extended reality appliance, transmitting second data representing the at least a portion of the virtual content in an unobscured form to the second wearable extended reality appliance. A request from the second wearable extended reality appliance may be generated either automatically or through user input. An automatic request from the second wearable extended reality appliance may include a request that is triggered to be sent based on a distance between the first wearable extended reality appliance and the second wearable extended reality appliance, a location of the first wearable extended reality appliance or the second wearable extended reality appliance, an identity of a user of the first wearable extended reality appliance or the second wearable extended reality appliance, a specified time, or any other parameter that may be used to trigger a request from the second wearable extended reality appliance. A request from the second wearable extended reality appliance may also be sent through a user input. A user input request may be generated when a user of the second wearable extended reality appliance interacts with the second wearable extended reality device or any other device connected to the second wearable extended reality device. In some examples, a user input request may be generated when a user of the second wearable extended reality appliance interacts with any portion of the virtual display of the second wearable extended reality appliance. Such an interaction may be achieved through a trigger, a touch-screen interface, a mouse, or any other type of interactive element. In some embodiments, the automatic request from the second wearable extended reality appliance may be generated through a combination of an automatic input and a user-generated input. For example, a request from the second wearable extended reality appliance to share the virtual content may be generated when the first wearable extended reality appliance is within five feet of the second wearable extended reality appliance and a user of the second wearable extended reality appliance presses a button to generate the request. Confirmation from the first wearable extended reality appliance may be generated through an input from a user of the first wearable extended reality appliance. The user may generate the input through interaction with an object connected to the first wearable extended reality appliance, such as a trigger, a touch-screen interface, a mouse, or any other type of interactive element. In another example, the first wearable extended reality appliance or a processing unit associated with the first wearable extended reality appliance may access a data-structure including access permissions to determine whether a confirmation should be provided, and may automatically provide the confirmation in response to a determination that a confirmation should be provided. In some embodiments, the operations may further include providing to the first wearable extended reality appliance an indication that the at least a portion of the virtual content is displayed in an obscured form via the second wearable extended reality appliance. This indication may be presented to a user of the first wearable extended reality appliance through an output that is either audio, visual, or haptic in nature. 
Further, any combination of an audible, visual, or haptic output may also be used to present the indication to the first wearable extended reality appliance. The indication presented to the user may include an indication of at least one of the second wearable extended reality appliance, a user of the second wearable extended reality appliance, the portion of the virtual content, the obscuring technique, the obscured form, or a time period of the presentation of the obscured form. An audio indication that the at least a portion of the virtual content is displayed in an obscured form via the second extended reality appliance may include a beep, tone, alarm, voice indication, or any other type of sound. A visual indication that the at least a portion of the virtual content is displayed in an obscured form via the second extended reality appliance may include a picture, image, cartoon, description, sentence, drawing, figure, icon, photograph, or any other viewable output. A haptic indication that the at least a portion of the virtual content is displayed in an obscured form via the second extended reality appliance may include a buzz, vibration, or any other sign to a user of the first extended reality appliance that relates to the sense of touch. In other examples, the indication that the at least a portion of the virtual content is displayed in the obscured form via the second wearable extended reality appliance may be provided to the first wearable extended reality appliance in a digital signal, for example in a digital signal transmitted to the first wearable extended reality appliance. The digital signal may include an indication of at least one of the second wearable extended reality appliance, a user of the second wearable extended reality appliance, the portion of the virtual content, the obscuring technique, the obscured form, or a time period of the presentation of the obscured form. Some embodiments may involve determining an intent of a first user of the first wearable extended reality appliance to share the at least a portion of the virtual content with a second user of the second wearable extended reality appliance, and transmitting second data representing the at least a portion of the virtual content in an unobscured form to the second wearable extended reality appliance. An intent of a first user of the first wearable extended reality appliance to share the at least a portion of the virtual content with a second user of the second wearable extended reality appliance may be determined based on any input by the first user. Such an input may include interaction with objects such as a trigger, a touch-screen interface, a mouse, or any other type of interactive element. For example, when a first user of the first wearable extended reality appliance presses a button on the first wearable extended reality appliance, an intent of the first user of the first wearable extended reality appliance to share the at least a portion of the virtual content with a second user of the second wearable extended reality appliance may be determined. In another example, such input may include a gesture of the first user. An unobscured form of the at least a portion of the virtual content may include a presentation of the at least a portion in a manner that is not concealed. For example, an obscured form of the at least one portion may include an image with a blur filter applied such that the shape of the image is readily identifiable, while its substance is not.
In such an embodiment, an unobscured form of the image may include an image with the blur filter removed, such that the substance of the image also becomes identifiable. In some embodiments, the intent to share the at least a portion of the virtual content with the second user of the second wearable extended reality appliance is determined from image data captured by the first wearable extended reality appliance. Image data may include pixel data, time, date, location, identity of a subject, role of a subject, or any other information that may be associated with a given image. For example, the first wearable extended reality appliance may capture image data indicative of a person A. The at least one processor may be configured to determine an intent to share the at least a portion of the virtual content with the second user of the second wearable extended reality appliance based on the information included in the image data. For example, the at least one processor may determine the intent to share the at least a portion of the virtual content with the second user of the second wearable extended reality appliance when the image data indicates person A, or based on a time, date, or location associated with the image. In other examples, the image data may be analyzed using a gesture recognition algorithm to identify a gesture indicative of the intent of the first user to share the at least a portion of the virtual content with the second user. In some embodiments, the intent to share the at least a portion of the virtual content with the second user of the second wearable extended reality appliance may be determined from not receiving an indication from the user of the first wearable extended reality appliance to keep the at least a portion of the virtual content in an obscured form. In some embodiments, the at least one processor may not receive any input from a user of the first wearable extended reality appliance regarding obscuring a portion of the virtual content. The at least one processor may be configured to determine the intent to share based on a lack of input from the user. In some embodiments, the at least one processor may not receive any input from a user of the first wearable extended reality appliance regarding obscuring a portion of the virtual content within a given period of time. The at least one processor may be configured to determine the intent to share based on a lack of input from the user during a given period of time. For example, the given period of time set by the first wearable extended reality system may be five minutes from any triggering event. The at least one processor may determine the intent to share the at least a portion of the virtual content with the second user of the second wearable extended reality appliance when it does not receive an indication from the user of the first wearable extended reality appliance to keep the at least a portion of the virtual content in an obscured form within five minutes. A triggering event may be one or more of a user input or a determination made by the first wearable extended reality appliance.
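By way of a non-limiting illustration, inferring sharing intent from the absence of a keep-obscured indication within a period (five minutes in the example above) may be sketched as follows; the function name and parameters are illustrative only.

# Non-limiting sketch: infer intent to share when no keep-obscured
# indication arrives within the window following a triggering event.
import time

def intent_to_share(trigger_time, keep_obscured_received, window_s=300):
    elapsed = time.monotonic() - trigger_time
    # Intent is inferred only once the window has elapsed with no
    # instruction from the first user to keep the content obscured.
    return elapsed >= window_s and not keep_obscured_received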
The classification of private or public content may be based on user input or other criteria, such as the content itself. For example, content may be marked by a user as confidential or private, or content may be automatically designated as private if it includes any identifying information. Information classified as private may include a name, identity, email address, location, or any other information intended for or restricted to the use of a particular person, group, or class. Information classified as public may include a message, picture, news article, or any other information intended to be exposed to general view. For example, the virtual content displayed through the first wearable extended reality appliance may include a chat window consisting of a first portion comprising an identity of a participant, which may be classified as private. A second portion of the chat window may include a chat bubble, which may be classified as public. The at least one processor may be configured to prohibit transmission of the first portion comprising the identity of the participant to the second wearable extended reality appliance. The processor may, however, transmit the second portion of the chat window comprising the chat bubble in an obscured form to the second wearable extended reality appliance. As another example, the virtual content displayed through the first wearable extended reality appliance may include a shopping application window consisting of a first portion comprising the user's credit card details, which may be classified as private. A second portion of the shopping application window may include a list of items available for purchase, which may be classified as public. The at least one processor may be configured to prohibit transmission of the first portion comprising the user's credit card details to the second wearable extended reality appliance. The processor may, however, transmit the second portion comprising the list of items available for purchase in an obscured form to the second wearable extended reality appliance. In some embodiments, the virtual content displayed through the first wearable extended reality appliance may include a first portion classified as private and a second portion classified as public, and the operations may further include transmitting the first portion in an obscured form and transmitting the second portion in a non-obscured form. As discussed above, some portions of a virtual content may be designated as being confidential or private while other portions may be designated as being public. Information classified as public may include a message, picture, news article, or any other information intended to be exposed to general view. The portions classified as private may be transmitted in an obscured form, while the portions classified as public may be transmitted in an unobscured form. For example, the virtual content displayed through the first wearable extended reality appliance may include a chat window consisting of a first portion comprising an identity of a participant classified as private and a second portion comprising a chat bubble classified as public. In such an embodiment, the first portion comprising the identity of the participant is transmitted to the second wearable extended reality appliance in an obscured form, while the second portion comprising the chat bubble is transmitted in a non-obscured form to the second wearable extended reality appliance. 
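A minimal, non-limiting sketch of such portion-level handling is provided below (Python; the portion structure, policy names, and redaction helper are illustrative assumptions). One policy excludes private portions and obscures public ones, while the other obscures private portions and transmits public portions in a non-obscured form:

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Portion:
        payload: str
        classification: str  # "private" or "public"

    def obscure_text(text: str) -> str:
        # Illustrative obscuring: redact every character.
        return "\u2588" * len(text)

    def prepare_transmission(portions: List[Portion], policy: str) -> List[str]:
        out: List[str] = []
        for p in portions:
            if policy == "exclude_private_obscure_public":
                # Private portions are excluded from transmission;
                # public portions are sent in an obscured form.
                if p.classification == "public":
                    out.append(obscure_text(p.payload))
            elif policy == "obscure_private_clear_public":
                # Private portions are sent in an obscured form; public
                # portions are sent in a non-obscured form.
                out.append(obscure_text(p.payload)
                           if p.classification == "private" else p.payload)
        return out

For example, a chat window may be represented as [Portion("participant identity", "private"), Portion("chat bubble text", "public")], and the two policies then reproduce the two embodiments described above.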
In such an embodiment, the first portion comprising the identity of the participant may be presented as a blurred image, while the second portion comprising the chat bubble may be presented without any blur. As another example, the virtual content displayed through the first wearable extended reality appliance may include a shopping application window consisting of a first portion comprising the user's credit card details, which may be classified as private. A second portion of the shopping application window may include a list of items available for purchase, which may be classified as public. In such an embodiment, the first portion comprising the user's credit card details is transmitted to the second wearable extended reality appliance in an obscured form, while the second portion comprising the list of items available for purchase is transmitted in a non-obscured form to the second wearable extended reality appliance. In such an embodiment, the first portion comprising the user's credit card details may be presented as a blurred image, while the second portion comprising the list of items available for purchase may be presented without any blur. In some embodiments, the operations may further include receiving an indication via the first wearable extended reality appliance that the first portion is classified as private. An indication that the first portion is classified as private may be received via the first wearable extended reality appliance through a user interaction with any portion of the first wearable extended reality appliance or with any device connected to the first wearable extended reality appliance, such as a trigger, a touch-screen interface, a mouse, or any other type of interactive element. Thus, for example, a user may provide one or more inputs indicating a portion of virtual content and identifying that portion as being private, using the first wearable extended reality appliance or any device connected to the first wearable extended reality appliance. For example, a user may select portions of a document by highlighting them with a cursor to mark those portions as private. As another example, a user may select portions of an image by highlighting them with a cursor to mark those portions as private. In some embodiments, the operations may further include automatically determining that the first portion is classified as private. An automatic determination that the first portion is classified as private may be made by the first wearable extended reality appliance independent of any user input, or may be made by another computing device. In such embodiments, the automatic determination may be based on one or more characteristics of the virtual content, including, for example, an identity of the user, a role of the user, a location of the first wearable extended reality appliance, a present time or date, or any other information associated with a virtual content. In some embodiments, the at least one processor may employ one or more rules that determine whether some or all portions of virtual content should be designated as private based on the one or more characteristics of the virtual content independent of an active user input. In such embodiments, the rules may be stored in a database or may be created by a user. Alternatively, or in addition, a machine learning model (such as a classification model, a visual classification model, etc.) may be trained using training examples to determine whether contents are public or private. 
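A minimal, non-limiting sketch of training such a classification model follows (Python, assuming the scikit-learn library; the sample contents and labels are invented solely for illustration):

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Each training example pairs a sample content with a label indicating
    # whether that sample content is public or private.
    sample_contents = [
        "meeting agenda for the design review",
        "credit card number and billing address",
        "press release draft",
        "employee home address and phone number",
    ]
    labels = ["public", "private", "public", "private"]

    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(sample_contents, labels)

    # The trained model may then be used to classify portions of a
    # virtual content as public or private.
    prediction = model.predict(["participant email address"])[0]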
An example of such a training example may include a sample content, together with a label indicating whether the sample content is public or private. The trained machine learning model may be used to determine which portions are confidential and which portions are public. In another example, the first wearable extended reality appliance may be configured to determine the location of the appliance. In such an embodiment, the at least one processor may be configured to classify the first portion of a virtual content as private whenever the first wearable extended reality appliance is in a given location or within a predetermined distance of the given location. In some embodiments, the operations may further include identifying at least a first individual with permission to view the first portion and at least a second individual without permission to view the first portion, and the operations may additionally include enabling display of the first portion to the at least a first individual while preventing display of the first portion to the at least a second individual. A first individual with permission to view the first portion may be identified either based on user input or automatically. In some embodiments, a user of the first wearable extended reality appliance may interact with the first wearable extended reality appliance or any connected devices in order to identify that an individual has permission to view the first portion. The user may interact with the first wearable extended reality appliance or any connected devices using a trigger, a touch-screen interface, a mouse, or any other type of interactive element in order to identify an individual as having permission to view the first portion. In some embodiments, the first wearable extended reality appliance or a different computing device may automatically identify that an individual has permission to view the first portion, based on one or more of the identity of the individual, the role of the individual, the location of the first wearable extended reality appliance, the location of the second wearable extended reality appliance, the present time or date, or any other information that may be used to classify a permission status of an individual independent of an active user input. In some embodiments, the at least one processor may employ one or more rules to identify that an individual has permission to view the first portion based on the information that may be used to classify a permission status independent of an active user input. In such embodiments, the rules may be stored in a database or may be created by a user. Alternatively, or in addition, a machine learning model (such as a classification model, a visual classification model, etc.) may be trained using training examples to determine whether particular individuals have permissions to view different contents. An example of such a training example may include a sample content and data associated with a sample individual, together with a label indicating whether or not the sample individual has permission to view the sample content. The trained machine learning model may be used to determine which individuals have permission to view the first portion. A second individual without permission to view the first portion may similarly be identified either based on user input or automatically. 
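By way of a non-limiting illustration, a rule-based permission determination of the kind described above may be sketched as follows (Python; the rule criteria, roles, and business-hours window are illustrative assumptions):

    from datetime import datetime
    from typing import Optional, Set

    def has_permission(individual_id: str,
                       role: str,
                       appliance_location: str,
                       permitted_ids: Set[str],
                       permitted_roles: Set[str],
                       permitted_locations: Set[str],
                       now: Optional[datetime] = None) -> bool:
        # Permission to view the first portion may be determined from the
        # identity of the individual, the role of the individual, the
        # location of an appliance, the present time or date, or similar
        # criteria, independent of an active user input.
        now = now or datetime.now()
        if individual_id in permitted_ids:
            return True
        if role in permitted_roles and appliance_location in permitted_locations:
            # e.g., permitted roles viewing from a permitted location
            # during business hours
            return 9 <= now.hour < 17
        return False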
In some embodiments, a user of the first wearable extended reality appliance may interact with the first wearable extended reality appliance or any connected devices in order to identify that an individual does not have permission to view the first portion. The user may interact with the first wearable extended reality appliance or any connected devices using a trigger, a touch-screen interface, a mouse, or any other type of interactive element in order to identify an individual as not having permission to view the first portion. In some embodiments, the first wearable extended reality appliance or a different computing device may automatically identify that an individual does not have permission to view the first portion, based on one or more of the identity of the individual, the role of the individual, the location of the first wearable extended reality appliance, the location of the second wearable extended reality appliance, the present time or date, or any other information that can be used to classify a permission status of an individual independent of an active user input. Additionally, enabling display may include enabling a fully obscured display, a fully unobscured display, a partially obscured display, or a partially unobscured display. For example, a first individual may be identified to have permission to view the first portion of a document, while a second individual may be identified to not have permission to view the first portion. Some disclosed embodiments may allow the first individual to see the portion of the document clearly, while blacking out, blurring, or in some other way preventing display of the portion of the document to the second individual. Some embodiments may involve detecting that a third wearable extended reality appliance is in proximity to the first wearable extended reality appliance; establishing an additional link between the first wearable extended reality appliance and the third wearable extended reality appliance; and transmitting data representing a specific portion of the virtual content in an unobscured form to the third wearable extended reality appliance, while the specific portion of the virtual content is displayed in an obscured form to the second wearable extended reality appliance. Detecting a proximity between the first wearable extended reality appliance and the third wearable extended reality appliance may be achieved via systems, devices, and methods similar to those discussed above for detecting the proximity between the first wearable extended reality appliance and the second wearable extended reality appliance. Establishing an additional link between the first wearable extended reality appliance and the third wearable extended reality appliance may be achieved via systems, devices, and methods similar to those discussed above for establishing a link between the first wearable extended reality appliance and the second wearable extended reality appliance. For example, a link may be established between the first wearable extended reality appliance and the third wearable extended reality appliance through Bluetooth, and a link may be established between the first wearable extended reality appliance and the second wearable extended reality appliance also through Bluetooth. 
In another example, the link between the first wearable extended reality appliance and the second wearable extended reality appliance may include a linkage in a data-structure or a database, for example in a system coordinating an extended reality environment, in a system that communicates with both the first and the second wearable extended reality appliances, in a system that provides content for presentation to both the first and the second wearable extended reality appliances, and so forth. By way of another example, a link may be established between the first wearable extended reality appliance and the third wearable extended reality appliance in a different way than a link established between the first wearable extended reality appliance and the second wearable extended reality appliance. For example, a link may be established between the first wearable extended reality appliance and the third wearable extended reality appliance through Bluetooth, while a link may be established between the first wearable extended reality appliance and the second wearable extended reality appliance through Wi-Fi or in a data-structure. In some embodiments, the operations may further include accessing second user permission data associated with a second user of the second wearable extended reality appliance and third user permission data associated with the third wearable extended reality appliance; and based on the second user permission data and the third user permission data, selectively sharing content with the second wearable extended reality appliance and with the third wearable extended reality appliance, wherein the content shared with the second wearable extended reality appliance differs from content shared with the third wearable extended reality appliance. The second user permission data associated with the second user of the second wearable extended reality appliance may be generated either through user input or automatically, similar to the systems, devices, and methods discussed above. The user input may be generated by either a user of the first wearable extended reality appliance, a user of the second wearable extended reality appliance, or any other user. In one embodiment, a user of the first wearable extended reality appliance, a user of the second wearable extended reality appliance, or any other user may interact with the first wearable extended reality appliance, the second wearable extended reality appliance, or any connected devices in order to generate the second user permission data associated with the second user of the second wearable extended reality appliance. Such a user may interact with the first wearable extended reality appliance, the second wearable extended reality appliance, or any connected devices using a trigger, a touch-screen interface, a mouse, or any other type of interactive element in order to generate the second user permission data associated with the second user of the second wearable extended reality appliance. 
In some embodiments, the first wearable extended reality appliance, the second wearable extended reality appliance, or any connected devices may automatically generate the second user permission data associated with the second user of the second wearable extended reality appliance, based on one or more of the identity of the second user, the role of the second user, the location of the first wearable extended reality appliance, the location of the second wearable extended reality appliance, the present time or date, or any other information that can be used to establish rules for data privacy independent of an active user input. Similarly, the third user permission data associated with the third user of the third wearable extended reality appliance may be generated either through user input or automatically. The user input may be generated by either a user of the first wearable extended reality appliance, a user of the third wearable extended reality appliance, or any other user. In one embodiment, a user of the first wearable extended reality appliance, a user of the third wearable extended reality appliance, or any other user may interact with the first wearable extended reality appliance, the third wearable extended reality appliance, or any connected devices in order to generate the third user permission data associated with the third user of the third wearable extended reality appliance. Such a user may interact with the first wearable extended reality appliance, the third wearable extended reality appliance, or any connected devices using a trigger, a touch-screen interface, a mouse, or any other type of interactive element in order to generate the third user permission data associated with the third user of the third wearable extended reality appliance. In another embodiment, the first wearable extended reality appliance, the third wearable extended reality appliance, or any connected devices may automatically generate the third user permission data associated with the third user of the third wearable extended reality appliance, based on one or more of the identity of the third user, the role of the third user, the location of the first wearable extended reality appliance, the location of the third wearable extended reality appliance, the present time or date, or any other information that can be used to establish rules for data privacy independent of an active user input. Furthermore, selectively sharing content may include sharing content in any way that differentiates the information being presented to the second user of the second wearable extended reality appliance from the information being presented to the third user of the third wearable extended reality appliance. For example, the content shared with the second user of the second wearable extended reality appliance may comprise fully obscured content, while the content shared with the third user of the third wearable extended reality appliance may comprise only partially obscured content. In one example, the content shared with the second user of the second wearable extended reality appliance may comprise a blacked-out image, while the content shared with the third user of the third wearable extended reality appliance may comprise only a blurred-out image. 
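A minimal, non-limiting sketch of such selective sharing follows (Python; the permission levels and the redaction helpers are illustrative assumptions):

    def black_out(text: str) -> str:
        # Fully obscured content, e.g., corresponding to a blacked-out
        # image or a completely redacted document.
        return "\u2588" * len(text)

    def partially_redact(text: str) -> str:
        # Partially obscured content, e.g., corresponding to a blurred
        # image or a partially redacted document: here, every other word
        # is redacted.
        words = text.split()
        return " ".join(w if i % 2 == 0 else "\u2588" * len(w)
                        for i, w in enumerate(words))

    def share_selectively(content: str, permission_level: str) -> str:
        # The content shared with the second appliance may differ from the
        # content shared with the third appliance, based on each user's
        # permission data.
        if permission_level == "full":
            return content
        if permission_level == "partial":
            return partially_redact(content)
        return black_out(content)

Applying the "none" level for the second user and the "partial" level for the third user corresponds to the blacked-out image and the blurred-out image described above.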
In another example, the content shared with the second user of the second wearable extended reality appliance may comprise a completely redacted document, while the content shared with the third user of the third wearable extended reality appliance may comprise only a partially redacted document. In some embodiments, the second user permission data and the third user permission data may be obtained during the establishment of the link and the additional link respectively. In such embodiments, the second user permission data and the third user permission data may be obtained following a request to a user for the permission data or automatically. In some embodiments, the data representing at least a portion of the virtual content in an obscured form may be generated by processing at least part of the virtual content. For example, the first wearable extended reality appliance may process the at least part of the virtual content to generate the data representing the at least a portion of the virtual content in the obscured form. In another example, a computerized system (for example, a system coordinating an extended reality environment, a system that communicates with both the first and the second wearable extended reality appliances, a system that provides content for presentation to both the first and the second wearable extended reality appliances, and so forth) may process the at least part of the virtual content to generate the data representing the at least a portion of the virtual content in the obscured form. In some examples, the processing of the at least part of the virtual content to generate the data may include applying the at least part of the virtual content to a transformation function, such as blurring, modification, visual filter, and so forth. In one example, a machine learning model (such as a generative model, a generative adversarial network, a transformer based generative model, etc.) may be trained using training examples to generate obscured forms of virtual contents. An example of such a training example may include a sample virtual content and sample obscuring parameters, together with a desired obscured form of the sample virtual content corresponding to the sample obscuring parameters. Some non-limiting examples of such obscuring parameters may include an obscuring level (such as ‘High’, ‘Low’, etc.), an obscuring method (such as reduction, blurring, etc.), and so forth. The trained machine learning model may analyze the at least part of the virtual content to generate the data representing the at least a portion of the virtual content in the obscured form. In one example, a manual configuration may be used to select obscuring parameters for the usage of the trained machine learning model. In some examples, the obscured form of the at least a portion of the virtual content may be selected based on characteristics of the second wearable extended reality appliance (such as hardware characteristics, display resolution, operating system, and so forth) and/or based on a condition of the second wearable extended reality appliance (such as ambient illumination conditions in an environment of the second wearable extended reality appliance, usage mode of the second wearable extended reality appliance, and so forth). For example, the characteristics and/or condition of the second wearable extended reality appliance may be used to select obscuring parameters for the usage of the trained machine learning model. 
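By way of a non-limiting illustration, selecting obscuring parameters from appliance characteristics and conditions may be sketched as follows (Python; the lux threshold, parameter names, and mapping are illustrative assumptions):

    def select_obscuring_parameters(ambient_lux: float,
                                    mobile_usage: bool,
                                    display_width_px: int) -> dict:
        # Brighter ambient illumination may call for a more opaque and/or
        # sharper obscured form; mobile usage may call for a less opaque
        # obscured form; the display resolution may bound the rendering.
        bright = ambient_lux > 500.0
        params = {
            "opacity": 0.9 if bright else 0.6,
            "blur_radius_px": 4 if bright else 10,
            "max_width_px": min(display_width_px, 1920),
        }
        if mobile_usage:
            params["opacity"] = min(params["opacity"], 0.5)
        return params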
In some examples, in response to first ambient illumination conditions in the environment of the second wearable extended reality appliance, a first obscured form of the at least a portion of the virtual content may be selected, and in response to second ambient illumination conditions in the environment of the second wearable extended reality appliance, a second obscured form of the at least a portion of the virtual content may be selected, where the second obscured form may differ from the first obscured form. In one example, the first ambient illumination conditions may correspond to brighter illumination than the second ambient illumination conditions, and the first obscured form may be more opaque and/or sharper than the second obscured form. In some examples, in response to a first usage mode of the second wearable extended reality appliance, a first obscured form of the at least a portion of the virtual content may be selected, and in response to a second usage mode of the second wearable extended reality appliance, a second obscured form of the at least a portion of the virtual content may be selected, where the second obscured form may differ from the first obscured form. For example, the first usage mode may include an active engagement of the user of the second wearable extended reality appliance with virtual objects, the second usage mode may include a passive viewing of virtual objects by the user of the second wearable extended reality appliance, and the first obscured form may be smaller than the second obscured form. In another example, the first usage mode may include usage while being mobile (for example, walking, running, etc.), the second usage mode may include usage while being stationary (for example, sitting, standing, etc.), and the first obscured form may be less opaque than the second obscured form. In some embodiments, a method for providing situational awareness to users of wearable extended reality appliances may be disclosed. FIG. 18 is a flowchart of an exemplary method 1805 of providing situational awareness to users of wearable extended reality appliances, consistent with some embodiments of the present disclosure. Method 1805 may include a step 1810 in which the at least one processor causes virtual content to be displayed through a first wearable extended reality appliance. Method 1805 may include a step 1812 in which the at least one processor detects a second wearable extended reality appliance in proximity to the first wearable extended reality appliance. Method 1805 may include a step 1814 in which the at least one processor establishes a link between the first wearable extended reality appliance and the second wearable extended reality appliance. Method 1805 may include a step 1816 in which the at least one processor transmits data representing at least a portion of the virtual content in an obscured form to the second wearable extended reality appliance, wherein the obscured form provides an indication of a position of the at least a portion of the virtual content in a three-dimensional space without revealing substance of the virtual content in obscured form. Some embodiments may involve providing situational awareness to users of wearable extended reality appliances. 
For example, a system may include at least one processor configured to: cause virtual content to be displayed through a first wearable extended reality appliance; detect a second wearable extended reality appliance in proximity to the first wearable extended reality appliance; establish a link between the first wearable extended reality appliance and the second wearable extended reality appliance; and transmit data representing at least a portion of the virtual content in an obscured form to the second wearable extended reality appliance, wherein the obscured form provides an indication of a position of the at least a portion of the virtual content in a three-dimensional space without revealing substance of the virtual content in obscured form. Some disclosed embodiments may relate to tying a virtual whiteboard to a physical space, including methods, systems, apparatuses, and non-transitory computer-readable media. For example, a non-transitory computer-readable medium is described below, with the understanding that aspects of the non-transitory computer-readable medium may apply equally to methods, systems, and apparatuses. For example, one or more processes embodied in the non-transitory computer-readable medium may be performed as a method, in a system, or in an apparatus. Some aspects of such processes may occur electronically over a network that may be wired, wireless, or both. Other aspects of such processes may occur using non-electronic means. In a broadest sense, the processes are not limited to particular physical and/or electronic instrumentalities, but rather may be accomplished using many differing instrumentalities. For example, some disclosed embodiments may include a system, method or apparatus for tying virtual whiteboards to physical spaces, the system comprising at least one processor configured to perform various processes as described herein. The non-transitory computer-readable medium may contain instructions that when executed by at least one processor cause the at least one processor to perform various processes as described herein. A non-transitory computer-readable medium may include any type of physical memory on which information or data readable by at least one processor may be stored. A non-transitory computer-readable medium may include, for example, random access memory (RAM), read-only memory (ROM), volatile memory, non-volatile memory, hard drives, compact disc read-only memory (CD-ROM), digital versatile discs (DVDs), flash drives, disks, any optical data storage medium, any physical medium with patterns of holes, programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), FLASH-EPROM or any other flash memory, non-volatile random-access memory (NVRAM), caches, registers, any other memory chip or cartridge, or networked versions of the same. A non-transitory computer-readable medium may refer to multiple structures, such as a plurality of non-transitory computer-readable media, located at a local location or at a remote location. Additionally, one or more non-transitory computer-readable media may be utilized in implementing a computer-implemented method. Accordingly, a non-transitory computer-readable medium may include tangible items and may exclude carrier waves or transient signals. 
The instructions contained in the non-transitory computer-readable medium may include, for example, software instructions, computer programs, computer code, executable instructions, source code, machine instructions, machine language programs, or any other type of directions for a computing device. The instructions contained in the non-transitory computer-readable medium may be based on one or more of various types of desired programming languages, and may include (e.g., embody) various processes for tying a virtual whiteboard to a physical space as described herein. At least one processor may execute the instructions contained in the non-transitory computer-readable medium to cause various processes to be performed for tying a virtual whiteboard to a physical space as described herein. The processor may include, for example, integrated circuits, microchips, microcontrollers, microprocessors, central processing units (CPUs), graphics processing units (GPUs), digital signal processors (DSPs), field programmable gate arrays (FPGAs), or other units suitable for executing instructions or performing logic operations. The processor may include a single-core or multiple-core processor. In some examples, the processor may be a single-core processor configured with virtualization technologies. The processor may, for example, implement virtualization technologies or other functionalities to provide the ability to execute, control, run, manipulate, or store multiple software processes, applications, or programs. In another example, the processor may include a multiple-core processor (e.g., dual-core, quad-core, or with any desired number of cores) configured to provide parallel processing functionalities to allow a device associated with the processor to execute multiple processes simultaneously. Other types of processor arrangements may be implemented to provide the capabilities described herein. Some disclosed embodiments may relate to tying a virtual whiteboard to a physical space. A virtual whiteboard may refer to a virtual location where content may be displayed. For example, a virtual surface may be an area in a display space configured for the presentation of digital content. The digital content may include images, widgets, text, links, markings, scribbles, or any other preexisting information or information generated on the fly, such as handwritten or typed text or images. The virtual content may be displayed by an extended reality appliance to a user. Although referred to in a colloquial sense as a “white” board, the surface need not be of any particular color. That is, the display area may be white, black, blue, green, or yellow, or may have any other desired color. The surface may be displayed as glossy, smooth, or coarse, or may have any other desired texture. Any shape may be employed as a whiteboard. For example, in some embodiments, the whiteboard may have a traditional rectangular shape of a chalkboard or a flip chart, and in other embodiments, the whiteboard may be presented in the shape of a circle, square, triangle, hexagon, parallelogram, trapezoid, freeform shape, or any other desired contour. Similarly, use of the term “board” is not meant to imply that the display need be flat. In some embodiments, the virtual whiteboard may be presented with varying contours. For example, a whiteboard may be presented as a rotatable cube or other three-dimensional shape (or combinations of shapes). 
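A minimal, non-limiting sketch of one way such a virtual whiteboard might be represented follows (Python; every field name and default value is an illustrative assumption):

    from dataclasses import dataclass, field
    from typing import List, Optional, Tuple

    @dataclass
    class VirtualWhiteboard:
        board_id: str
        shape: str = "rectangle"       # or circle, hexagon, freeform, a 3-D cube, ...
        width_m: float = 1.0           # the surface is not limited to any particular size
        height_m: float = 1.0
        color: str = "white"           # the surface need not be of any particular color
        texture: str = "smooth"        # glossy, coarse, or any other desired texture
        wall_id: Optional[str] = None  # e.g., affixed to a physical wall, or floating
        position: Tuple[float, float, float] = (0.0, 0.0, 0.0)  # relative to the space
        content: List[dict] = field(default_factory=list)  # texts, drawings, documents, ...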
In some embodiments, initial characteristics (e.g., shape, size, color) of a whiteboard may change over time. For example, as a user interacts with differing portions of the whiteboard, the whiteboard's characteristics may change (e.g., a full board may expand; interacting with certain regions of the board may cause changes in display characteristics). A whiteboard surface is not limited to any particular size. It might be of any size, depending on the particular use case (e.g., if rectangular, 0.5 square meters, 0.75 square meters, 1 square meter, 2 square meters, or any other desired amount of area). In some examples, the surface of the virtual whiteboard may be bounded by an enclosure (e.g., frames) along the borders of the shape of the surface, the enclosure being displayed as virtual content together with the displayed surface. The virtual whiteboard may be displayed as being affixed onto a physical wall or a virtual wall being displayed as virtual content, may be displayed as being supported by an assembly standing on the floor, may be displayed as being placed (e.g., floating) in a space without being connected to other objects (either physical or virtual) in the space, or may be displayed in any other desired manner. The virtual whiteboard may be configured for making markings. For example, virtual content may be added onto, or removed from, a surface of the virtual whiteboard. Such virtual content may include texts, drawings, colors, shapes, icons, logos, pictures, graphics, annotations, videos, animations, documents, files, links, programs, scripts, or any other desired representation of data. Such virtual content may be displayed to be on the surface of the virtual whiteboard, and to be facing a direction same as or substantially similar to a direction that the virtual whiteboard may be facing. The virtual content on the virtual whiteboard may be within the borders of the virtual whiteboard. The virtual content on the virtual whiteboard may be modified in any desired manner. Using the virtual whiteboard (e.g., adding, removing, or modifying virtual content) may be carried out via user commands. As further described herein, the virtual whiteboard may be used in various manners and to implement various functions. In some examples, the virtual whiteboard may resemble a physical whiteboard. Just as physical whiteboards may be located in physical spaces, virtual whiteboards may be tied to physical spaces. For example, a virtual whiteboard may be tied to a physical space such as a conference room, a classroom, a discussion room, a work room, an office, a living room, a bedroom, a kitchen, a hall, a concourse, an indoor space, a playground, an outdoor space, or any other desired environment. Tying the virtual whiteboard to a physical space may, for example, enable a user of an extended reality appliance in the physical space to post content on the virtual whiteboard, for later viewing by another user even when the posting user is no longer physically located in the physical space of the virtual whiteboard. In some examples, a virtual whiteboard tied to a physical space may be configured to have a particular orientation and a particular position relative to the settings of the physical space. This may enable the virtual whiteboard, when displayed in the physical space, to have a consistent (e.g., fixed) orientation and position relative to the settings of the physical space. 
For example, a virtual whiteboard tied to a particular conference room in a building may, when invoked, be displayed consistently as being placed on a particular part of a specific wall of the conference room. In some examples, a virtual whiteboard may be a surface that enables different users in a vicinity of the virtual whiteboard to add virtual content for presentation on the virtual whiteboard by interacting with the virtual whiteboard, and/or erase visual content presented on the virtual whiteboard (for example, by interacting with the virtual whiteboard) and/or view visual content presented on the whiteboard. Some non-limiting examples of such virtual content are described above. Some non-limiting examples of such interactions are described below. In some examples, a virtual whiteboard may be a virtual object mimicking and/or extending the functionality of a physical whiteboard. Some disclosed embodiments may involve receiving via a wireless network, an indication of a location of a first wearable extended reality appliance (WER-Appliance). A wireless network may include, for example, a Wi-Fi network, a WiMAX network, a cellular network (e.g., 2G, 3G, 4G, or 5G), a mobile network, a satellite communication network, a terrestrial microwave network, a wireless personal area network, a wireless local area network, a wireless ad hoc network, a wireless metropolitan area network, a wireless wide area network, a wireless global area network, a space network, or any other type of computer network that may use wireless data connections between network nodes. The location of the first wearable extended reality appliance may be determined, and the information may be sent to at least one processor of the non-transitory computer-readable medium (e.g., a processor of the server 210). The determination of the location of the first wearable extended reality appliance may be based on one or more positioning systems included in the first wearable extended reality appliance, such as Global Positioning System (GPS) sensors, indoor positioning systems, Wi-Fi-based positioning systems, presence reporting systems for tagged objects, radio-frequency identification (RFID) systems, positioning systems based on received signal strength indications of wireless networks, or other types of desired mechanisms configured to identify device locations. The location of the first wearable extended reality appliance may be determined periodically, such as continuously, every 0.1 seconds, every 0.5 seconds, every 1 second, every 2 seconds, every 3 seconds, or at any other desired interval. In other examples, the location of the first wearable extended reality appliance may be determined when needed, at random points in time, and so forth. Such location information as monitored and updated may be sent to, and received by, the at least one processor of the non-transitory computer-readable medium. The location of the first wearable extended reality appliance may comprise, for example, a set of GPS coordinates, a building identifier, a room identifier, a space identifier, an office identifier, or any other type of positioning data. FIG. 19 is a flowchart illustrating an exemplary process 1900 for tying a virtual whiteboard to a physical space consistent with some embodiments of the present disclosure. 
With reference to FIG. 19, in step 1910, instructions contained in a non-transitory computer-readable medium when executed by a processor may cause the processor to receive via a wireless network, an indication of a location of a first wearable extended reality appliance. Some disclosed embodiments may involve performing a lookup in a repository of virtual whiteboards and locations thereof to determine that the location of the first wearable extended reality appliance corresponds to a location of a particular virtual whiteboard. The repository of virtual whiteboards and locations thereof may refer to a data storage containing, for example, a table of virtual whiteboards (e.g., data for presentation on virtual whiteboards) and corresponding locations of the virtual whiteboards. For example, each virtual whiteboard of the virtual whiteboards indicated in the table may have a corresponding location stored in the table. The repository may be implemented based on or in a similar manner as data structure 212. In some examples, the repository of virtual whiteboards and locations may be any data-structure searchable by location and enabling retrieval of whiteboards by physical location. In some examples, the location of the first wearable extended reality appliance may be a particular room (such as a particular meeting room, a particular office room, etc.), the lookup in the repository may be based on the particular room to identify one or more virtual whiteboards positioned in the particular room, and it may be determined that the location of the first wearable extended reality appliance corresponds to the one or more virtual whiteboards positioned in the particular room. In one example, the particular room may be a particular physical room, and the first wearable extended reality appliance may be located in the particular physical room. In some examples, the location of the first wearable extended reality appliance may be and/or indicate a particular wall (or a particular surface of a different type) visible to a user of the first wearable extended reality appliance, the lookup in the repository may be based on the location of the first wearable extended reality appliance and/or the particular wall to identify one or more virtual whiteboards placed on the particular wall, and it may be determined that the location of the first wearable extended reality appliance corresponds to the one or more virtual whiteboards placed on the particular wall. In one example, the particular wall may be identified by the position and direction of the first wearable extended reality appliance. In another example, the particular wall may be identified by analyzing image data captured by an image sensor included in the first wearable extended reality appliance to detect the particular wall in the image data. In some examples, the lookup in the repository may identify one or more virtual whiteboards positioned at a distance smaller than a selected threshold from the location of the first wearable extended reality appliance, and it may be determined that the location of the first wearable extended reality appliance corresponds to the identified one or more virtual whiteboards. 
In some non-limiting examples, the threshold may be selected based on a user of the first wearable extended reality appliance, based on the location of the first wearable extended reality appliance, based on a direction of the first wearable extended reality appliance, based on the whiteboard, based on a dimension of the whiteboard, based on a type of the whiteboard, and so forth. In some examples, when a particular virtual whiteboard is initially generated for a physical space, a registration process may be triggered, where identification of the virtual whiteboard and the location of the virtual whiteboard may be determined and sent to the repository. The generated virtual whiteboard may be tied to the physical space. The location of the virtual whiteboard may include, for example, a set of GPS coordinates, a building identifier, a room identifier, a space identifier, an office identifier, or any other type of positioning data. The information pair of the virtual whiteboard identification and the virtual whiteboard location may be received by the repository, and may be stored by the repository (e.g., in a data table). In some examples, an orientation of a particular virtual whiteboard and a position of the virtual whiteboard relative to the settings of a physical space may be recorded in the repository. By doing so, when the virtual whiteboard is displayed in the physical space, the virtual whiteboard may be configured, using the orientation and position information recorded in the repository, to have a consistent (e.g., fixed) orientation and position relative to the settings of the physical space. For example, using the orientation and position information recorded in the repository, when the virtual whiteboard is displayed in the physical space, the virtual whiteboard may be displayed consistently as being placed on a particular part of a wall of the physical space (e.g., a conference room). Information stored in the repository may be queried or retrieved, for example, for determining whether an extended reality appliance is present in a location of any of the virtual whiteboards identified in the repository. To perform the lookup in the repository, at least one processor may use the location of the first wearable extended reality appliance as a key to search in the repository. The at least one processor may determine whether a location indication matching the location of the first wearable extended reality appliance is found in the repository. If a matching location indication is found in the repository, the at least one processor may identify a virtual whiteboard corresponding to the matching location. Based on a lookup as described above, the at least one processor may determine that the location of the first wearable extended reality appliance corresponds to a location of a particular virtual whiteboard. If a location indication matching the location of the first wearable extended reality appliance is not found in the repository, the at least one processor may determine that the location of the first wearable extended reality appliance does not correspond to a location of any virtual whiteboard. A lookup in the repository may be triggered when location information of an extended reality appliance (e.g., as periodically monitored) is received by the at least one processor, and the lookup may be performed based on the received location information. 
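By way of a non-limiting illustration, such a location-triggered lookup may be sketched as follows (Python; the repository layout, coordinates, and threshold value are illustrative assumptions):

    import math
    from typing import Dict, List, Tuple

    # A repository searchable by location: here, a mapping from whiteboard
    # identifier to a registered (x, y, z) position.
    REPOSITORY: Dict[str, Tuple[float, float, float]] = {
        "board-conference-A": (12.0, 4.5, 1.6),
        "board-classroom-3": (40.2, 9.0, 1.4),
    }

    def lookup_whiteboards(appliance_position: Tuple[float, float, float],
                           threshold_m: float = 5.0) -> List[str]:
        # Triggered when location information of an extended reality
        # appliance is received; identifies virtual whiteboards positioned
        # within a selected threshold distance of the appliance location.
        return [board_id
                for board_id, board_position in REPOSITORY.items()
                if math.dist(appliance_position, board_position) < threshold_m]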
With reference to FIG. 19, in step 1912, instructions contained in a non-transitory computer-readable medium when executed by a processor may cause the processor to perform a lookup in a repository of virtual whiteboards and locations thereof to determine that the location of the first wearable extended reality appliance corresponds to a location of a particular virtual whiteboard. Some disclosed embodiments may involve transmitting to the first wearable extended reality appliance, data corresponding to content of the particular virtual whiteboard to thereby enable a first user of the first wearable extended reality appliance to virtually view the content of the particular virtual whiteboard and to add virtual content to the particular virtual whiteboard. Based on determining that the location of the first wearable extended reality appliance corresponds to the location of the particular virtual whiteboard, at least one processor may transmit, to the first wearable extended reality appliance, data corresponding to content of the particular virtual whiteboard. The transmitted data may include, for example, text, drawings, colors, shapes, icons, logos, pictures, graphics, annotations, videos, animations, documents, files, links, programs, scripts, or any other information that conveys content for representation on the virtual whiteboard. In some examples, the content of the particular virtual whiteboard may include multiple documents contributed by differing users. The multiple documents may include, for example, text documents, image documents, audio documents, video documents, program documents, scribbles, handwritten text, or any other types of organizations of data. The differing users may comprise different persons using extended reality appliances (or other kinds of computerized systems), such as students, teachers, presenters, participants, workers, individuals, or other members of the population. Each of the differing users may post his or her document(s) onto the virtual whiteboard, to share the document(s) with other users. The content of the particular virtual whiteboard may be stored in the repository that may also store identifications and locations for virtual whiteboards, or in a different repository. The at least one processor may retrieve the content of the particular virtual whiteboard from the repository, and may transmit the retrieved content to the first wearable extended reality appliance. In some examples, the at least one processor may retrieve the configuration of the virtual whiteboard (e.g., the size, shape, texture, or frame of the virtual whiteboard), which may be stored in the repository. The at least one processor may transmit the configuration of the virtual whiteboard to the first wearable extended reality appliance. The first wearable extended reality appliance may receive the transmitted content of the particular virtual whiteboard. In some examples, the first wearable extended reality appliance may receive the transmitted configuration of the particular virtual whiteboard. The first wearable extended reality appliance may, based on the received data, render the virtual whiteboard and the content of the virtual whiteboard, as a virtual representation to a first user of the first wearable extended reality appliance. The first user may include a person using the first wearable extended reality appliance, such as a student, a teacher, a presenter, a participant, a worker, an individual, or any other member of the population. 
Using the first wearable extended reality appliance, the first user may virtually view the rendered virtual whiteboard and the rendered content of the virtual whiteboard. The first user may add virtual content to the particular virtual whiteboard. The added virtual content may include, for example, text, drawings, colors, shapes, icons, logos, pictures, graphics, annotations, videos, animations, documents, files, links, programs, scripts, scribbles, handwritten texts, or any other desired representation of data. In some examples, the added content may include animated content, such as videos, sequentially displayed images, moving pictures, motion pictures, or any other type of content with moving elements. In some examples, the added content may include a drawing made by the first user through interaction with the particular virtual whiteboard. For example, a physical drawing tool or a digital representation of a drawing tool may be provided to the first user. The drawing tool when activated may follow a travel path of an input device associated with the first wearable extended reality appliance. Using the drawing tool, the travel path of the input device may be projected onto the virtual whiteboard, thereby allowing the first user to draw on the virtual whiteboard. Additionally or alternatively, when the drawing tool is intersecting with and/or is touching and/or in proximity of a region of the particular virtual whiteboard, a marking may be added to the region of the particular virtual whiteboard. The characteristics of the marking added (such as color, size, texture, etc.) may be selected based on the drawing tool, based on a parameter of the intersection, based on a parameter of the touch, based on the proximity, based on an analysis of the movement of the drawing tool, based on the virtual whiteboard, and so forth. In another example, a part of a body of the first user (such as a digit, a hand, etc.) may be used by the first user to add content through interaction with the particular virtual whiteboard. For example, when the part of the body is intersecting with and/or is touching and/or in proximity of a region of the particular virtual whiteboard, a marking may be added to the region of the particular virtual whiteboard. The characteristics of the marking added (such as color, size, texture, etc.) may be selected based on the gesture, based on a parameter of the intersection, based on a parameter of the touch, based on the proximity, based on the virtual whiteboard, and so forth. To add virtual content to the virtual whiteboard, the first user may input user commands to the first wearable extended reality appliance using an input device associated with the first wearable extended reality appliance. The virtual whiteboard may have one or more functions that may allow the first user to add virtual content to the virtual whiteboard. For example, a function to add text may allow the first user to type text onto the virtual whiteboard. As another example, a function to add drawings may allow the first user to draw on the virtual whiteboard. By way of another example, a function to add files may allow the first user to upload a file onto the virtual whiteboard. As another example, a function to add pictures may allow the first user to upload a picture onto the virtual whiteboard. Other functions to add virtual content to the virtual whiteboard may be provided as desired. 
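A minimal, non-limiting sketch of such proximity-based marking follows (Python; the proximity threshold and the marking characteristics are illustrative assumptions):

    import math
    from typing import List, Tuple

    Point = Tuple[float, float, float]

    def maybe_add_marking(board_content: List[dict],
                          tool_position: Point,
                          region_position: Point,
                          proximity_m: float = 0.02) -> None:
        # When the drawing tool (or a part of the user's body, such as a
        # digit) touches or is in proximity to a region of the virtual
        # whiteboard, a marking is added to that region; the marking
        # characteristics may be selected based on the proximity.
        distance = math.dist(tool_position, region_position)
        if distance <= proximity_m:
            board_content.append({
                "region": region_position,
                "color": "black",
                "size_px": max(1, round(10 * (proximity_m - distance) / proximity_m)),
            })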
The first user may use an input device of the first wearable extended reality appliance to invoke one or more of the functions, and to add virtual content to the virtual whiteboard. In addition, the virtual whiteboard may have functions to modify or remove virtual content of the virtual whiteboard. Using the functions, the first user may select virtual content of the virtual whiteboard for modification or removal. With reference to FIG. 19, in step 1914, instructions contained in a non-transitory computer-readable medium when executed by a processor may cause the processor to transmit to the first wearable extended reality appliance, data corresponding to content of the particular virtual whiteboard to thereby enable a first user of the first wearable extended reality appliance to virtually view the content of the particular virtual whiteboard and to add virtual content to the particular virtual whiteboard. FIG. 20 and FIG. 21 are schematic diagrams illustrating examples of tying a virtual whiteboard to a physical space consistent with some embodiments of the present disclosure. With reference to FIG. 20, a physical space 2010 may include a conference room, a classroom, a discussion room, a work room, a meeting room, an office, a living room, a bedroom, a kitchen, a hall, a concourse, an indoor space, a playground, an outdoor space, or any other desired environment. A first user 2016 may be present in the physical space 2010, and may wear a first wearable extended reality appliance. A virtual whiteboard 2012 may be displayed via the first wearable extended reality appliance to the first user 2016. The virtual whiteboard 2012 may be displayed as being placed on a wall 2014. The virtual whiteboard 2012 may be tied to the physical space 2010. Based on the first wearable extended reality appliance being in the physical space 2010, the first wearable extended reality appliance may receive and display the virtual whiteboard 2012 and content of the virtual whiteboard 2012. The content of the virtual whiteboard 2012 may include, for example, text 2018 and multiple documents 2020, 2022. The first user 2016 may add virtual content to the virtual whiteboard 2012 (for example, using the first wearable extended reality appliance, using an input device, using a physical marking tool, using a virtual marking tool, using gestures, using voice commands, by interacting with virtual whiteboard 2012, etc.). Some disclosed embodiments may involve receiving, during a first time period, the virtual content added by the first user. For example, receiving may include obtaining by at least one processor, signals reflecting virtual content. Based on the adding of the virtual content to the particular virtual whiteboard by the first user, the first wearable extended reality appliance may send signals reflecting the added virtual content to the at least one processor. The at least one processor may receive, during a first time period, the virtual content added by the first user. The virtual content added by the first user may be received from the first wearable extended reality appliance, from a device associated with the first wearable extended reality appliance, from a fixed camera placed in a vicinity of the particular virtual whiteboard, or from any other suitable entities having information associated with the added virtual content. In some examples, the first time period may be a time period before the first wearable extended reality appliance leaves the location of the particular virtual whiteboard. 
For example, the first time period may include the time of the addition of the virtual content to the particular virtual whiteboard by the first user. In some examples, the first time period may be a time period after the first wearable extended reality appliance is no longer in the location of the particular virtual whiteboard. In some examples, the first time period may be a time period during which the first wearable extended reality appliance leaves the location of the particular virtual whiteboard. For example, the first time period may be a time period after the time of the addition of the virtual content to the particular virtual whiteboard by the first user. For example, the virtual content added to the particular virtual whiteboard by the first user may be received from an external device that stored the virtual content from the time of the addition. The external device may provide the virtual content (for example, by transmitting it) once the first wearable extended reality appliance leaves the location of the particular virtual whiteboard and/or after the first wearable extended reality appliance is no longer in the location of the particular virtual whiteboard (for example, to overcome communication problems at the location of the particular virtual whiteboard, to minimize communication costs, and so forth). With reference to FIG. 19, in step 1916, instructions contained in a non-transitory computer-readable medium when executed by a processor may cause the processor to receive, during a first time period, the virtual content added by the first user. The processor may be part of, for example, server 210. In some examples, the instructions contained in the non-transitory computer-readable medium may further include storing the added content in the repository of virtual whiteboards. Based on receiving the virtual content added by the first user from the first wearable extended reality appliance, the at least one processor may store the added content in the repository of virtual whiteboards and locations thereof. The added content may be stored in the repository in association with the particular virtual whiteboard to which the added content is added. Consistent with disclosed embodiments, instructions contained in a non-transitory computer-readable medium may include receiving via the wireless network at a second time period after the first wearable extended reality appliance is no longer in the location of the particular virtual whiteboard, an indication that a second wearable extended reality appliance is in the location of the particular virtual whiteboard. After the first user has added the virtual content to the particular virtual whiteboard using the first wearable extended reality appliance, the first wearable extended reality appliance may leave the location of the particular virtual whiteboard. For example, the first user may leave the location of the particular virtual whiteboard, and may take the first wearable extended reality appliance with him or her. At a second time period after the first wearable extended reality appliance is no longer in the location of the particular virtual whiteboard, a second wearable extended reality appliance (which may differ from or be the same as the first wearable extended reality appliance) may arrive at the location of the particular virtual whiteboard. For example, a second user in possession of the second wearable extended reality appliance may arrive at a vicinity of the virtual whiteboard.
By way of another example, a second user may enter a conference room, a classroom, a discussion room, a work room, an office, a meeting room, a living room, a bedroom, a kitchen, a hall, a concourse, an indoor space, a playground, an outdoor space, or any other desired environment associated with the location of the particular virtual whiteboard, and may take the second wearable extended reality appliance with him or her. In a similar manner to the first wearable extended reality appliance, the second wearable extended reality appliance (or another process or computerized system) may periodically monitor the location of the second wearable extended reality appliance. In other examples, the location of the second wearable extended reality appliance may be determined when needed, at random points in time, and so forth. For example, the second wearable extended reality appliance may use positioning systems to determine its location. The second wearable extended reality appliance may send the location information (e.g., as periodically monitored) via the wireless network to at least one processor (e.g., a processor of the server 210). The at least one processor may receive the location information sent by the second wearable extended reality appliance. The received location information may indicate, to the at least one processor, whether the second wearable extended reality appliance is in the location of the particular virtual whiteboard (or in a location that corresponds to the location of the particular virtual whiteboard). For example, the at least one processor may compare the received location information of the second wearable extended reality appliance with locations of virtual whiteboards included in the repository. If the location information of the second wearable extended reality appliance matches the location of the particular virtual whiteboard, the at least one processor may determine that the second wearable extended reality appliance is in the location of the particular virtual whiteboard. In another example, it may be determined that the second wearable extended reality appliance is in a location that corresponds to the location of the particular virtual whiteboard in a similar manner to the determination that the location of the first wearable extended reality appliance corresponds to the location of the particular virtual whiteboard. In some examples, the second time period and the first time period may have no overlap. For example, the first time period may end before the second time period starts. In some examples, the first time period may start after the second time period ends. In some examples, the second time period and the first time period may overlap, but may differ from one another (e.g., may be of unequal durations). In some examples, the second time period and the first time period may be identical (e.g., may have a generally equal duration). With reference to FIG. 19, in step 1918, instructions contained in a non-transitory computer-readable medium when executed by a processor may cause the processor to receive via the wireless network at a second time period after the first wearable extended reality appliance is no longer in the location of the particular virtual whiteboard, an indication that a second wearable extended reality appliance is in the location of the particular virtual whiteboard.
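By way of non-limiting illustration only, the comparison of reported appliance locations against the repository may be sketched as follows, assuming a hypothetical repository keyed by whiteboard identifiers, coordinates expressed as latitude and longitude, and an arbitrary 25-meter matching threshold; none of these choices are mandated by the disclosure.

import math

# Hypothetical repository of virtual whiteboards and locations thereof.
# Each entry maps a whiteboard id to (latitude, longitude) coordinates.
WHITEBOARD_REPOSITORY = {
    "whiteboard-1": (40.7128, -74.0060),
    "whiteboard-2": (51.5074, -0.1278),
}

MATCH_THRESHOLD_METERS = 25.0  # assumed proximity tolerance

def _distance_meters(a, b):
    # Equirectangular approximation; adequate over such short distances.
    lat1, lon1 = map(math.radians, a)
    lat2, lon2 = map(math.radians, b)
    x = (lon2 - lon1) * math.cos((lat1 + lat2) / 2)
    y = lat2 - lat1
    return math.hypot(x, y) * 6371000.0  # mean Earth radius in meters

def find_matching_whiteboard(appliance_location):
    """Return the id of a whiteboard whose stored location matches the
    location reported by a wearable extended reality appliance, or None."""
    for board_id, board_location in WHITEBOARD_REPOSITORY.items():
        if _distance_meters(appliance_location, board_location) <= MATCH_THRESHOLD_METERS:
            return board_id
    return None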
Some disclosed embodiments may also involve transmitting to the second wearable extended reality appliance, data corresponding to the content and the added content of the particular virtual whiteboard, to thereby enable a second user of the second wearable extended reality appliance to view the content and the added content while the first user is absent from the location of the particular virtual whiteboard. Based on receiving the indication that the second wearable extended reality appliance is in the location of the particular virtual whiteboard, at least one processor may transmit, to the second wearable extended reality appliance, data corresponding to the content of the particular virtual whiteboard (e.g., the content transmitted previously to the first wearable extended reality appliance) and corresponding to the virtual content added by the first user to the particular virtual whiteboard. Such data may be stored in memory with a marker associating the content with the virtual whiteboard. Therefore, although the first wearable extended reality appliance is not present, the data may be retrieved from memory and sent to the second wearable extended reality appliance. Such a process may occur as the result of at least one processor, such as a processor of the server 210, accessing the data for transmission. In some examples, the data transmitted to the second wearable extended reality appliance may include an indication that the first user contributed the added content. For example, when the first user adds the virtual content to the particular virtual whiteboard, an indication that the first user contributed the added content may be recorded by the first wearable extended reality appliance, and may be sent to, and stored by, the at least one processor of the non-transitory computer-readable medium. That is, the at least one processor may store the added content in memory for later retrieval. The indication that the first user contributed the added content may include, for example, initials of the first user, a code associated with the first user, a username of the first user, a user identification of the first user (e.g., a user identification number), a location of the first user when the first user adds the virtual content to the particular virtual whiteboard, a time when the first user adds the virtual content to the particular virtual whiteboard, a place of the first user when the first user adds the virtual content to the particular virtual whiteboard, or any other desired information associated with the first user adding the virtual content to the particular virtual whiteboard. The indication associated with the added content may be stored in connection with the added content (e.g., in the repository). The at least one processor may be configured to transmit an indication associated with the added content to the second wearable extended reality appliance. In one example, the second wearable extended reality appliance may be configured to provide a visual indication that the first user contributed the added content (for example, to a user of the second wearable extended reality appliance), for example based on the indication that the first user contributed the added content in the data transmitted to the second wearable extended reality appliance. In some examples, the data transmitted to the second wearable extended reality appliance may include an indication of when the added content was added to the particular virtual whiteboard.
For example, when the first user adds the virtual content to the particular virtual whiteboard, a time stamp of when the adding of the virtual content occurs may be recorded by the first wearable extended reality appliance, and may be sent to, and stored by, the at least one processor. The time stamp may be recorded in connection with the added content (e.g., in the repository), and may be transmitted in connection with the added content (e.g., to the second wearable extended reality appliance). In one example, the second wearable extended reality appliance may be configured to provide a visual indication of when the added content was added to the particular virtual whiteboard (for example, to a user of the second wearable extended reality appliance), for example based on the indication of when the added content was added to the particular virtual whiteboard in the data transmitted to the second wearable extended reality appliance. The transmission of the data to the second wearable extended reality appliance may enable a second user of the second wearable extended reality appliance to view the content of the particular virtual whiteboard (e.g., the content transmitted also to the first wearable extended reality appliance) and the content added by the first user to the particular virtual whiteboard, while the first user is absent from the location of the particular virtual whiteboard. The second user may include a person using the second wearable extended reality appliance, such as a student, a teacher, a presenter, a participant, a worker, an individual, or any other member of the population. With reference to FIG. 19, in step 1920, instructions contained in a non-transitory computer-readable medium when executed by a processor may cause the processor to transmit to the second wearable extended reality appliance, data corresponding to the content and the added content of the particular virtual whiteboard, to thereby enable a second user of the second wearable extended reality appliance to view the content and the added content while the first user is absent from the location of the particular virtual whiteboard. With reference to FIG. 21, a second user 2112 wearing a second wearable extended reality appliance may come to (or enter) the physical space 2010, for example, after the first user 2016 wearing the first wearable extended reality appliance leaves the physical space 2010. Based on the second wearable extended reality appliance being in the physical space 2010, the second wearable extended reality appliance may receive and display the virtual whiteboard 2012, the content of the virtual whiteboard 2012 (e.g., the text 2018 and the multiple documents 2020, 2022), and virtual content added by the first user 2016 to the virtual whiteboard 2012. The virtual content added by the first user 2016 to the virtual whiteboard 2012 may include, for example, an image 2110. Some examples may involve causing the multiple documents to be displayed in a common layout arrangement during both the first time period and the second time period. The content of the particular virtual whiteboard that may be displayed during the first time period and the second time period may include multiple documents. The multiple documents may be displayed in a common layout arrangement. For example, each of the multiple documents may be displayed as a large icon. As another example, each of the multiple documents may be displayed as a small icon. By way of another example, the multiple documents may be displayed in the form of tiles.
As another example, the multiple documents may be displayed in a list. As another example, each of the multiple documents may be displayed showing contents of the document. The multiple documents may be displayed in any other desired common manner. It is contemplated that the layout used to display the documents during the first time period to a first user may be the same as the layout used to display the documents during the second time period to a second user. For example, with reference to FIG. 20 and FIG. 21, the multiple documents 2020, 2022 may be displayed in a common layout arrangement (e.g., one below the other on a right side of the display) using icons during both a time period when the first user 2016 is present in the physical space 2010 and a time period when the second user 2112 is present in the physical space 2010. Similarly, for example, text 2018 may be displayed above image 2110 on a left side of the display during both a time period when the first user 2016 is present in the physical space 2010 and a time period when the second user 2112 is present in the physical space 2010. Some examples may involve automatically erasing at least some of the multiple documents from the particular virtual whiteboard upon expiration of a maintenance period, as illustrated in the sketch below. The maintenance period may be, for example, 15 days, 1 month, 2 months, 3 months, 6 months, 1 year, 2 years, or any other interval or time duration suitable for automatically removing documents from a virtual whiteboard. In some examples, the maintenance period may be specific to each of the multiple documents. In some examples, the maintenance period for a particular document may be set by a user who adds the particular document to the particular virtual whiteboard. In some examples, the maintenance period may be adjusted by other users or may be based on policies (e.g., policies of an organization or group). In some examples, the maintenance period may be different for different types of documents (e.g., 2 months for text documents and 1 month for video documents). At least one processor of the non-transitory computer-readable medium may be configured, for example, to set a timer corresponding to a maintenance period for a document of multiple documents displayed on a virtual whiteboard. Upon expiration of the timer, the at least one processor may be configured to automatically erase the document from that particular virtual whiteboard. Further embodiments may involve accessing rules associating users of wearable extended reality appliances with permissions to add content to the particular virtual whiteboard, and using the accessed rules to determine that the first user can add content on the particular virtual whiteboard. The rules associating the users with the permissions to add content to the particular virtual whiteboard may include an indication of whether each of the users is permitted to add content to the particular virtual whiteboard. For example, the repository of virtual whiteboards may store, for each virtual whiteboard, a user permission table for adding content. The user permission table may list one or more users of the virtual whiteboard, and whether the one or more users have permission to add content to the virtual whiteboard. At least one processor may access the permission information for adding content for the particular virtual whiteboard. Based on the accessed permission information, the at least one processor may determine whether a first user is permitted to add content on the particular virtual whiteboard.
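By way of non-limiting illustration only, the automatic erasure upon expiration of per-document maintenance periods flagged above may be sketched as follows; the field names ("added_at", "maintenance_period") and the epoch-seconds representation are assumptions introduced for illustration.

import time

def purge_expired_documents(documents, now=None):
    """Keep only documents whose maintenance period has not yet expired."""
    now = time.time() if now is None else now
    return [
        doc for doc in documents
        if now - doc["added_at"] < doc["maintenance_period"]
    ]

# Example: a text document kept for 60 days, a video kept for 30 days.
DAY = 24 * 60 * 60
docs = [
    {"name": "notes.txt", "added_at": time.time() - 45 * DAY,
     "maintenance_period": 60 * DAY},
    {"name": "demo.mp4", "added_at": time.time() - 45 * DAY,
     "maintenance_period": 30 * DAY},
]
docs = purge_expired_documents(docs)  # only "notes.txt" survives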
FIG. 22 is a flowchart illustrating an exemplary process 2200 for adding content to a virtual whiteboard consistent with some embodiments of the present disclosure. With reference to FIG. 22, in step 2210, instructions contained in a non-transitory computer-readable medium when executed by a processor may cause the processor to access rules associating users of wearable extended reality appliances with permissions to add content to the particular virtual whiteboard. In step 2212, instructions contained in a non-transitory computer-readable medium when executed by a processor may cause the processor to use the accessed rules to determine that the first user can add content on the particular virtual whiteboard. In some examples, the determination that the first user can add content on the particular virtual whiteboard may be based on the location of the particular virtual whiteboard. In one example, the rules associating users of wearable extended reality appliances with permissions to add content to the particular virtual whiteboard may include rules associating the users with permissions based on locations of virtual whiteboards. For example, the first user may have permission to add content to all virtual whiteboards in a first region (such as a particular building, a particular floor, a particular room, etc.). In another example, the first user may have no permission to add content to any virtual whiteboard in a second region (such as a particular building, a particular floor, a particular room, etc.). In some examples, the determination that the first user can add content on the particular virtual whiteboard may be based on an access privilege of the first user to the particular virtual whiteboard, or may be based on a position within an organization associated with the first user. The access privilege of a first user to the particular virtual whiteboard may indicate, for example, to what extent the first user is permitted to access the particular virtual whiteboard, such as whether the first user is permitted to add content to the particular virtual whiteboard, whether the first user is permitted to delete content from the particular virtual whiteboard, whether the first user is permitted to modify content of the particular virtual whiteboard, or whether the first user is permitted to view content of the particular virtual whiteboard. The organization associated with a first user may include, for example, a company, a firm, a government agency, an educational institution, a school, a university, or any other entity. The position within the organization may be based on, for example, a job function or work responsibilities of the first user. For example, the position within the organization may include a supervisor, an employee, an officer, a board member, a president, a manager, a director, or any other appropriate job description or title within an organization. Permissions to add content to the particular virtual whiteboard may be different for different positions within the organization. The at least one processor may determine whether the first user can add content on the particular virtual whiteboard based on the access privilege of the first user to the particular virtual whiteboard, or based on the position within the organization associated with the first user.
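By way of non-limiting illustration only, such a rule lookup may be sketched as follows; the same pattern extends to the delete-permission and view-permission rules discussed below. The table layout, field names, and example values are assumptions, since the disclosure requires only that rules associate users, locations, or organizational positions with permissions.

# Hedged sketch of a rule lookup for add-content permissions; all names
# and values are illustrative, not part of the disclosure.
ADD_RULES = {
    "whiteboard-1": {
        "allowed_users": {"user-a", "user-b"},
        "allowed_positions": {"manager", "director"},
        "allowed_regions": {"building-7/floor-3"},
    },
}

def can_add_content(board_id, user_id, position=None, board_region=None):
    """Apply accessed rules to determine whether a user can add content."""
    rules = ADD_RULES.get(board_id)
    if rules is None:
        return False
    return (
        user_id in rules["allowed_users"]
        or position in rules["allowed_positions"]
        or board_region in rules["allowed_regions"]
    )

# Example: permitted by position even though the user is not listed by name.
assert can_add_content("whiteboard-1", "user-c", position="manager")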
Some embodiments may further involve accessing rules associating users with permissions to delete content from the particular virtual whiteboard, and using the accessed rules to determine that the second user is permitted to delete the added content from the particular virtual whiteboard. The rules associating the users with the permissions to delete content from the particular virtual whiteboard may include an indication of whether each of the users is permitted to delete content from the particular virtual whiteboard. For example, the repository of virtual whiteboards may store, for each virtual whiteboard, a user permission table for deleting content. The user permission table may list one or more users of the virtual whiteboard, and whether the one or more users have permission to delete content from the virtual whiteboard. At least one processor of the non-transitory computer-readable medium may access the permission information for deleting content for the particular virtual whiteboard. Based on the accessed permission information, the at least one processor may determine whether a second user is permitted to delete content (e.g., the content added by the first user to the particular virtual whiteboard) from the particular virtual whiteboard. In some examples, the determination that the second user is permitted to delete the added content from the particular virtual whiteboard may be based on the location of the particular virtual whiteboard. In one example, the rules associating users with permissions to delete content from the particular virtual whiteboard may include rules associating the users with permissions based on locations of virtual whiteboards. For example, the second user may be permitted to delete content from all virtual whiteboards in a first region (such as a particular building, a particular floor, a particular room, etc.). In another example, the second user may have no permission to delete content from any virtual whiteboard in a second region (such as a particular building, a particular floor, a particular room, etc.). In some examples, the determination that the second user is permitted to delete the added content from the particular virtual whiteboard may be based on the first user. In one example, the rules associating users with permissions to delete content from the particular virtual whiteboard may associate the users with permissions based on identities of content creators associated with the content to be deleted. For example, the second user may be permitted to delete content added to the particular virtual whiteboard by individuals of a first group of individuals. In another example, the second user may have no permission to delete content added to the particular virtual whiteboard by individuals of a second group of individuals. The determination that the second user is permitted to delete the added content from the particular virtual whiteboard may be based on whether the first user is included in the first group and/or based on whether the first user is included in the second group. FIG. 23 is a flowchart illustrating an exemplary process 2300 for deleting content from a virtual whiteboard consistent with some embodiments of the present disclosure. With reference to FIG. 23, in step 2310, instructions contained in a non-transitory computer-readable medium when executed by a processor may cause the processor to access rules associating users with permissions to delete content from the particular virtual whiteboard.
In step 2312, instructions contained in a non-transitory computer-readable medium when executed by a processor may cause the processor to use the accessed rules to determine that the second user is permitted to delete the added content from the particular virtual whiteboard. In some examples, the determination that the second user is permitted to delete content from the particular virtual whiteboard may be based on a position within an organization associated with the second user, or may be based on an access privilege of the second user to the particular virtual whiteboard. The access privilege of the second user to the particular virtual whiteboard may indicate, for example, to what extent the second user is permitted to access the particular virtual whiteboard, such as whether the second user is permitted to add content to the particular virtual whiteboard, whether the second user is permitted to delete content from the particular virtual whiteboard, whether the second user is permitted to modify content of the particular virtual whiteboard, or whether the second user is permitted to view content of the particular virtual whiteboard. The organization associated with the second user may include, for example, a company, a firm, a government agency, an educational institution, a school, a university, or any other entity. The position within the organization may be based on, for example, a job function or work responsibilities of the second user. For example, the position within the organization may include a supervisor, an employee, an officer, a board member, a president, a manager, a director, or any other appropriate job description or title within an organization. Permissions to delete content from the particular virtual whiteboard may be different for different positions within the organization. The at least one processor may determine whether the second user is permitted to delete content from the particular virtual whiteboard based on the access privilege of the second user to the particular virtual whiteboard, or based on the position within the organization associated with the second user. Additional embodiments may involve accessing rules associating types of content with permissions to upload content to the particular virtual whiteboard, and using the accessed rules to determine that the added content is permitted for posting on the particular virtual whiteboard. The rules associating the types of content with the permissions to upload content to the particular virtual whiteboard may include an indication of whether one or more of the types of content is permitted to be uploaded to the particular virtual whiteboard. The types of content may include, for example, scientific content, research content, accounting content, financial content, administrative content, training content, professional content, personal content, or other categories of content. In another example, the types of content may be based on a category of the content, such as textual, visual, footage, animated, unanimated, handwritten, scribbled, and so forth. For example, the repository of virtual whiteboards may store, for each virtual whiteboard, a content-type permission table for uploading content. The content-type permission table may list the one or more types of content, and whether the one or more types of content are permitted to be uploaded to the virtual whiteboard. At least one processor may access the permission information for uploading content for the particular virtual whiteboard.
Based on the accessed permission information, the at least one processor may determine whether content (e.g., the content added by the first user to the particular virtual whiteboard) is permitted to be uploaded to the particular virtual whiteboard for posting on the particular virtual whiteboard. For example, the first user may mark or tag the added content with a content type for determining whether the added content is permitted to be uploaded to the particular virtual whiteboard for posting on the particular virtual whiteboard. FIG. 24 is a flowchart illustrating an exemplary process 2400 for uploading content to a virtual whiteboard based on types of content consistent with some embodiments of the present disclosure. With reference to FIG. 24, in step 2410, instructions contained in a non-transitory computer-readable medium when executed by a processor may cause the processor to access rules associating types of content with permissions to upload content to the particular virtual whiteboard. In step 2412, instructions contained in a non-transitory computer-readable medium when executed by a processor may cause the processor to use the accessed rules to determine that the added content is permitted for posting on the particular virtual whiteboard. In some examples, the determination that the added content is permitted for posting on the particular virtual whiteboard may be based on content analysis of the added content. For example, the added content may be permitted for posting on the particular virtual whiteboard based on image analysis of the added content to determine that there is no nudity in the added content. Additionally or alternatively, image analysis of the added content may be used to determine various characteristics of the added content, such as unclear or blurred images, images with insufficient lighting, images of restricted objects (e.g., a restricted government facility), images of a confidential document or prototype, or other image features of interest. One or more criteria to permit content posting on the particular virtual whiteboard may be configured, for example, based on policies (e.g., policies of an organization or group). The one or more criteria may indicate, for example, whether images with particular characteristics are permitted for posting on the particular virtual whiteboard. Based on image analysis of the added content, when determined characteristics of the added content satisfy the one or more criteria, the added content may be permitted for posting on the particular virtual whiteboard. Based on image analysis of the added content, when determined characteristics of the added content do not satisfy the one or more criteria, the added content might not be permitted for posting on the particular virtual whiteboard. In some examples, the type of content may be determined by analyzing the added content, for example using a visual classification algorithm classifying each content item to one of a plurality of classes, where each class corresponds to a content type. In some examples, a binary visual classification algorithm may be used to classify the added content to one of two classes, where one class may be permitted for posting, and the other class may not be permitted for posting. As another example, the added content may be permitted for posting on the particular virtual whiteboard based on text analysis of the added content to determine that there is no offensive language in the added content.
Additionally or alternatively, text analysis of the added content may be used to determine various characteristics of the added content, such as text with spelling errors, text with grammatical errors, text of a confidential document or prototype, or other text features of interest. One or more criteria to permit content posting on the particular virtual whiteboard may be configured, for example, based on policies (e.g., policies of an organization or group). The one or more criteria may indicate, for example, whether text with particular characteristics is permitted for posting on the particular virtual whiteboard. Based on text analysis of the added content, when determined characteristics of the added content satisfy the one or more criteria, the added content may be permitted for posting on the particular virtual whiteboard. Based on text analysis of the added content, when determined characteristics of the added content do not satisfy the one or more criteria, the added content might not be permitted for posting on the particular virtual whiteboard. In some examples, the type of content may be determined by analyzing the added content, for example using a textual classification algorithm classifying each content item to one of a plurality of classes, where each class corresponds to a content type. In some examples, a binary textual classification algorithm may be used to classify the added content to one of two classes, where one class may be permitted for posting, and the other class may not be permitted for posting. Additional embodiments may involve accessing rules associating users of wearable extended reality appliances with permissions to view content from the particular virtual whiteboard, and using the accessed rules to determine that the second user is permitted to view the added content on the particular virtual whiteboard. The rules associating the users with the permissions to view content from the particular virtual whiteboard may include an indication of whether each of the users is permitted to view content from the particular virtual whiteboard. For example, the repository of virtual whiteboards may store, for each virtual whiteboard, a user permission table for viewing content. The user permission table may list one or more users of the virtual whiteboard, and whether the one or more users have permission to view content from the virtual whiteboard. At least one processor may access the permission information for viewing content for the particular virtual whiteboard. Based on the accessed permission information, the at least one processor may determine whether a second user is permitted to view content (e.g., the content added by the first user to the particular virtual whiteboard) on the particular virtual whiteboard. In some examples, the determination that the second user is permitted to view the added content from the particular virtual whiteboard may be based on the location of the particular virtual whiteboard. In one example, the rules associating users with permissions to view content from the particular virtual whiteboard may include rules associating the users with permissions based on locations of virtual whiteboards. For example, the second user may be permitted to view content from all virtual whiteboards in a first region (such as a particular building, a particular floor, a particular room, etc.).
In another example, the second user may have no permission to view content from any virtual whiteboard in a second region (such as a particular building, a particular floor, a particular room, etc.). In some examples, the determination that the second user is permitted to view the added content from the particular virtual whiteboard may be based on the first user. In one example, the rules associating users with permissions to view content from the particular virtual whiteboard may include rules associating the users with permissions based on identities of content creators associated with the content to be viewed. For example, the second user may be permitted to view content added to the particular virtual whiteboard by individuals of a first group of individuals. In another example, the second user may have no permission to view content added to the particular virtual whiteboard by individuals of a second group of individuals. The determination that the second user is permitted to view the added content from the particular virtual whiteboard may be based on whether the first user is included in the first group and/or based on whether the first user is included in the second group. FIG. 25 is a flowchart illustrating an exemplary process 2500 for viewing content from a virtual whiteboard consistent with some embodiments of the present disclosure. With reference to FIG. 25, in step 2510, instructions contained in a non-transitory computer-readable medium when executed by a processor may cause the processor to access rules associating users of wearable extended reality appliances with permissions to view content from the particular virtual whiteboard. In step 2512, instructions contained in a non-transitory computer-readable medium when executed by a processor may cause the processor to use the accessed rules to determine that the second user is permitted to view the added content on the particular virtual whiteboard. In some examples, the determination that the second user is permitted to view the added content on the particular virtual whiteboard may be based on an age of the second user. Permissions to view content on a virtual whiteboard may be different for different ages of users. For example, a content item of a virtual whiteboard may be permitted to be viewed by users who are 18 years of age or older, and may not be permitted to be viewed by users who are under 18 years of age. The at least one processor may determine, based on the age of the second user, whether the second user is permitted to view content (e.g., the content added by the first user to the particular virtual whiteboard) on the particular virtual whiteboard. The age restriction for viewing a content item may be specified by the user who adds the content item to a virtual whiteboard, may be specified by other users, or may be configured based on policies (e.g., policies of an organization or group). Further embodiments may involve receiving via the wireless network at a time before the second time period, an indication of at least one class of content of interest to the second user, determining that a first part of the added content is included in the at least one class of interest to the second user, and enabling the second user to view the first part of the added content. The at least one class of content of interest to the second user may be configured by the second user.
For example, the second user may identify one or more classes of content, such as scientific content, research content, accounting content, financial content, administrative content, training content, professional content, personal content, or other categories of content. The at least one processor may receive an indication of one or more content classes, for example, at a time before the second time period at which the indication that the second wearable extended reality appliance is in the location of the particular virtual whiteboard is received. The at least one processor may determine whether any part of the content added by the first user to the particular virtual whiteboard is included in at least one class of interest to the second user. In some examples, the at least one processor may perform content classification techniques on different parts of the added content to determine one or more classes for each of the different parts of the added content. In some examples, the first user may mark or tag each of the different parts of the added content with one or more suitable classes. The at least one processor may determine whether the class(es) for each of the different parts of the added content match one or more of the at least one class of interest to the second user. Based on determining that a first part of the added content is included in the at least one class of interest to the second user, the at least one processor may send the first part of the added content to the second wearable extended reality appliance, and enable the second user to view the first part of the added content via the second wearable extended reality appliance. For example, with reference to FIG. 20 and FIG. 21, a first user 2016 may assign different content classes to the items that may be displayed on the virtual whiteboard 2012 (e.g., the text 2018, the image 2110, the document 2020, or the document 2022). A second user 2112 may identify an interest in one of the different content classes. An item, of the items, corresponding to the content class identified by the second user 2112 may be provided to the second user 2112 for viewing. One or more of the items not corresponding to the content class identified by the second user 2112 may not be provided to the second user 2112 for viewing. In some examples, image data captured using an image sensor included in the first wearable extended reality appliance while the first user interacts with a first portion of the particular whiteboard may be received. For example, the image data may be received from the image sensor, from the first wearable extended reality appliance, from an intermediate device external to the first wearable extended reality appliance, from a memory unit, and so forth. The image data may be analyzed to detect a physical object moving in a trajectory, for example using an object tracking algorithm. In one example, the physical object and the first user may be on opposite sides of the particular whiteboard. In one example, a presentation of the particular whiteboard via the first wearable extended reality appliance may hide, at least partly, the physical object from the first user. For example, a ray casting algorithm may be used to determine that the particular whiteboard hides, at least partly, the physical object from the first user, as in the sketch below.
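By way of non-limiting illustration only, such a ray-casting occlusion test may be sketched as follows; the rectangular board patch, the assumed orthonormal in-plane axes, and all variable names are illustrative assumptions rather than requirements of the disclosure.

import numpy as np

# Hedged sketch: cast a ray from the user's viewpoint toward the physical
# object and check whether it passes through the whiteboard's rectangular
# patch, in which case the board (at least partly) hides the object.
# u_axis and v_axis are assumed to be orthonormal; width and height are
# the board extents along them, in the same units as the coordinates.

def board_hides_object(eye, obj, origin, u_axis, v_axis, width, height):
    eye, obj = np.asarray(eye, float), np.asarray(obj, float)
    origin = np.asarray(origin, float)
    u, v = np.asarray(u_axis, float), np.asarray(v_axis, float)
    normal = np.cross(u, v)
    direction = obj - eye
    denom = direction @ normal
    if abs(denom) < 1e-9:
        return False                       # ray parallel to board plane
    t = ((origin - eye) @ normal) / denom
    if not (0.0 < t < 1.0):
        return False                       # board not between eye and object
    hit = eye + t * direction - origin     # intersection, board coordinates
    return 0.0 <= hit @ u <= width and 0.0 <= hit @ v <= height

# Example: a board on the x-y plane between the viewer and the object.
print(board_hides_object(eye=[0.5, 0.5, 2.0], obj=[0.5, 0.5, -1.0],
                         origin=[0, 0, 0], u_axis=[1, 0, 0],
                         v_axis=[0, 1, 0], width=1.0, height=1.0))  # True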
In one example, the trajectory of the physical object may be analyzed, for example using an extrapolation algorithm, to determine a likelihood that the physical object is about to move through a second portion of the particular whiteboard. The second portion and the first portion of the particular whiteboard may be disjoint, may be identical, may have some but not all area in common, may have no area in common, and so forth. Further, a visual indication of the second portion may be provided to the first user, for example via the first wearable extended reality appliance. In some examples, the visual indication may further indicate at least one of the physical object, a type of the physical object, a velocity of the physical object, the trajectory, a distance of the physical object from the particular whiteboard, or a likely time of intersection of the physical object with the second portion of the particular whiteboard. In some examples, the type of the physical object may be determined, for example by analyzing the image data using an object recognition algorithm. Further, in response to a first determined type of the physical object (such as a small physical object, a fly, etc.), providing the visual indication may be avoided, and in response to a second determined type of the physical object (such as a large physical object, a person, etc.), the visual indication may be provided. In another example, the physical object may be a person; in response to a determination that the person is using a wearable device that provides the person with an indication of the particular whiteboard, providing the visual indication may be avoided, and in response to a determination that the person is not using such a wearable device, the visual indication may be provided. In one example, the visual indication of the second portion may include halting the presentation of the second portion of the particular whiteboard via the first wearable extended reality appliance. In another example, the visual indication of the second portion may include a presentation of a virtual object indicating the second portion of the particular whiteboard (such as an arrow, a halo, and so forth). In yet another example, the visual indication of the second portion may include a presentation of the second portion of the particular whiteboard with different presentation parameters. FIG. 26 is a flowchart illustrating an exemplary process 2600 for viewing virtual whiteboard content of interest to a user consistent with some embodiments of the present disclosure. With reference to FIG. 26, in step 2610, instructions contained in a non-transitory computer-readable medium when executed by a processor may cause the processor to receive via the wireless network at a time before the second time period, an indication of at least one class of content of interest to the second user. In step 2612, instructions contained in a non-transitory computer-readable medium when executed by a processor may cause the processor to determine that a first part of the added content is included in the at least one class of interest to the second user. In step 2614, instructions contained in a non-transitory computer-readable medium when executed by a processor may cause the processor to enable the second user to view the first part of the added content.
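By way of non-limiting illustration only, the interest-based filtering of process 2600 may be sketched as follows, assuming each part of the added content carries class tags (whether assigned by classification algorithms or marked by the first user); all names are illustrative.

# Hedged sketch: split the added content into a part enabled for viewing
# and a part precluded from display, based on the second user's declared
# classes of interest. Field names are assumptions for illustration.

def filter_by_interest(content_parts, interest_classes):
    """Return (viewable, precluded) partitions of the added content."""
    viewable, precluded = [], []
    for part in content_parts:
        if set(part["classes"]) & set(interest_classes):
            viewable.append(part)
        else:
            precluded.append(part)   # e.g., advertising outside the user's interests
    return viewable, precluded

parts = [
    {"name": "quarterly-figures", "classes": ["financial content"]},
    {"name": "sale-banner", "classes": ["advertising"]},
]
viewable, precluded = filter_by_interest(parts, ["financial content"])
# viewable -> quarterly-figures; precluded -> sale-banner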
Some embodiments may involve determining that a second part of the added content is excluded from the at least one class of content of interest to the second user, and precluding display of the second part of the added content to the second user. The at least one processor may determine whether any part of the content added by the first user to the particular virtual whiteboard is included in the at least one class of interest to the second user. In some examples, the at least one processor may perform content classification techniques on different parts of the added content to determine one or more classes for each of the different parts of the added content. In some examples, the first user may mark or tag each of the different parts of the added content with one or more suitable classes. The at least one processor may determine whether the class(es) for each of the different parts of the added content match one or more of the at least one class of interest to the second user. Based on determining that a second part of the added content is excluded from the at least one class of interest to the second user, the at least one processor may preclude display of the second part of the added content to the second user. In some examples, the second part of the added content includes advertising, such as commercials, television advertisements, newspaper advertisements, radio advertisements, online advertisements, or any other type of promotional announcement or publication. For example, with reference to FIG. 20 and FIG. 21, a first user 2016 may assign different content classes to the items that may be displayed on the virtual whiteboard 2012 (e.g., the text 2018, the image 2110, the document 2020, or the document 2022). A second user 2112 may identify an interest in one of the different content classes. An item, of the items, corresponding to the content class identified by the second user 2112 may be provided to the second user 2112 for viewing. One or more of the items not corresponding to the content class identified by the second user 2112 may not be provided to the second user 2112 for viewing. The one or more items not provided to the second user 2112 for viewing may, for example, include advertising. In some examples, the second wearable extended reality appliance may be configured to: receive via the wireless network at a time before the second time period, at least one indication of a class of content that the second user of the second wearable extended reality appliance is interested in viewing; receive the data corresponding to the content and the added content of the particular virtual whiteboard; determine that the added content is associated with the at least one class that the second user of the second wearable extended reality appliance is interested in viewing; and display to the second user of the second wearable extended reality appliance the added content. For example, the second wearable extended reality appliance may be further configured to: determine that a specific content associated with a different virtual whiteboard is associated with a class that the second user of the second wearable extended reality appliance is not interested in viewing; and avoid displaying to the second user of the second wearable extended reality appliance the specific content. In some examples, the specific content may include advertisements. Viewing content in an extended reality environment has significant advantages and disadvantages when compared to viewing content on a physical display.
On one hand, the boundaries and size limit of the physical display vanish, enabling viewing content of any size and in every place in the environment. Furthermore, the content may be three-dimensional, and may interact with physical elements of the environment. Moreover, when it comes to privacy, content in extended reality may be viewed in private without the danger of someone catching a glimpse of the displayed content. On the other hand, the display quality of extended reality appliances may be lower than that of physical displays, viewing the content may require usage of extended reality appliances which may be burdensome when used for long periods of time, and the content may not be easily shared with people who do not use extended reality appliances to share the extended reality environment. To enable users to enjoy the benefits of both mediums, it is desirable to enable the user to easily move content from extended reality to physical displays and vice versa. Some disclosed embodiments may include a non-transitory computer readable medium containing instructions that when executed by at least one processor cause the at least one processor to perform operations for transferring virtual content to a physical display device. Virtual content may include a virtual object, inanimate virtual content, animate virtual content configured to change over time or in response to triggers, virtual two-dimensional content, virtual three-dimensional content, a virtual overlay over a portion of a physical environment or over a physical object, a virtual addition to a physical environment or to a physical object, virtual promotional content, a virtual representation of a physical object, a virtual representation of a physical environment, a virtual document, a virtual character or persona, a virtual computer screen, a virtual widget, or any other format for displaying information virtually. In one example, the virtual object may be or include a virtual display screen (also referred to as a "virtual display" or a "virtual screen" herein), as described above. An example of such a virtual display screen may include virtual screen 112. Consistent with the present disclosure, the virtual content may include any visual presentation rendered by a computer or a processing device, for example presented in an extended reality environment by an extended reality appliance (for example, presented in an extended reality environment to a user via a wearable extended reality appliance). In one embodiment, the virtual content may include a virtual object that is a visual presentation rendered by a computer in a confined region and configured to represent an object of a particular type (such as an inanimate virtual object, an animate virtual object, virtual furniture, a virtual decorative object, a virtual widget, or other virtual representation). The rendered visual presentation may change to reflect changes to a status of the object or changes in the viewing angle of the object, for example, in a way that mimics changes in the appearance of physical objects. A physical display device (also referred to as a 'physical display', a 'physical display screen' or a 'physical screen' herein) may include a television, a monitor, a computer screen, a laptop screen, a phone screen, a tablet, a smartwatch screen or any other tangible output device for the presentation of information in a visual form. FIG. 27 illustrates examples of various physical display devices, consistent with some embodiments of the present disclosure.
For example, the physical display device may be included in a computer 2720, in a laptop 2724, or in a phone 2728. Transferring virtual content to a physical display device may include relocating, transmitting, displacing, shifting, translating, or in any other way moving virtual content to the physical display device. It may be desirable to transfer virtual content to a physical device in order to view information presented in a virtual format in a different size or perspective, or to separate a presentation of virtual content that is too cluttered for proper viewing. For example, the virtual content may include several documents presented on top of one another or overlapping one another in some way. In this example, it may be desirable to transmit at least a portion of that virtual content, such as one of the documents, to a particular physical display device so that a user may view the information in that document more clearly. In another example, it may be desirable to transmit at least a portion of a virtual content to a particular physical display device so that other people, including people not using extended reality appliances or people not sharing the extended reality environment of the user, may view the virtual content. FIG. 28 illustrates an example of transferring virtual content to a physical display device, consistent with some embodiments of the present disclosure. For example, the virtual content 2716 may be a calendar that is transferred to a physical display device included in laptop 2724. Some disclosed embodiments may include presenting an extended reality environment in a room via a wearable extended reality appliance, the wearable extended reality appliance being configured to be paired with multiple display devices located in the room, wherein each display device is associated with a unique network identifier. Multiple display devices located in the room may include two or more of the physical display devices described above. The multiple devices may include a variety of different types of physical display devices or a plurality of the same type of physical display device. For example, the multiple display devices may include three phones in the room. In another example, the multiple display devices may include one phone, one television, and one laptop in the room. It may be desirable to provide multiple display devices in the room so that the user may have a choice of ways in which to display the transferred virtual content, based on size, compatibility, or other features that may make a particular display device more appropriate for a given type of virtual content. For example, providing multiple display devices in the room including a phone, television, and laptop may allow a user to transfer virtual content better suited to each type of display device. The user may transfer messaging content to the phone, documents to the laptop, and videos to the television. A unique network identifier may include a MAC address, IP address, address, domain name, code, serial number, name, mark, sign, character, emblem, token, or any other sequence of words, numbers, letters, or symbols that indicates the identity of a display device. The unique network identifier may belong to or may be connected with one particular display device.
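By way of non-limiting illustration only, bookkeeping for such unique network identifiers may be sketched as follows; the MAC-style identifier format and all names are assumptions, as the disclosure permits any identifier form.

# Hedged sketch: the wearable appliance keeps a registry of trusted display
# devices in the room, keyed by their unique network identifiers (MAC-style
# addresses are used here purely as an example of one identifier form).

TRUSTED_DISPLAYS = {
    "aa:bb:cc:dd:ee:01": {"kind": "computer", "label": "ID 1"},
    "aa:bb:cc:dd:ee:02": {"kind": "laptop",   "label": "ID 2"},
    "aa:bb:cc:dd:ee:03": {"kind": "phone",    "label": "ID 3"},
}

def resolve_target(network_id):
    """Return the paired display for a recognized identifier, or None so
    that content is never sent to an unrecognized device."""
    return TRUSTED_DISPLAYS.get(network_id)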
It may be desirable to use a unique network identifier so that communication is established with trusted devices with associated recognized unique network identifiers, and to avoid communicating sensitive information with devices that the user may not recognize. For example, a user of the wearable extended reality appliance may be in a room with many different types of display devices, and some of these display devices may not belong to the user or may belong to other individuals. By using unique network identifiers, the transfer of virtual content to a display device in that room may be more secure since the user may only transfer virtual content to a display device with a recognized unique network identifier. FIG. 27 illustrates examples of various physical display devices, consistent with some embodiments of the present disclosure. For example, the computer 2720 may be associated with unique network identifier "ID 1" 2722, the laptop 2724 may be associated with unique network identifier "ID 2" 2726, and the phone 2728 may be associated with unique network identifier "ID 3" 2730. Some disclosed embodiments may include receiving input associated with the wearable extended reality appliance to cause presentation of a specific virtual object in the extended reality environment on a target display device. Receiving input may include obtaining, by the at least one processor, an indication from a user, sensor information, data from a device external to the wearable extended reality appliance (for example, from an input device, such as a keyboard, paired with the wearable extended reality appliance, from a computing device paired with the wearable extended reality appliance, and so forth), data from a memory unit, data from a database or a data structure, or any other data or control signals. Receiving input by obtaining an indication from the user may include any user interaction with any physical device configured to receive input from a user (such as a keyboard, a touch sensor, a computer mouse, a haptic glove, etc.) or from an environment of the user (for example, through an image sensor, an audio sensor, a volumetric sensor, and so forth), and to provide the data to a computational device. In one example, receiving input by obtaining an indication from the user may include recognizing a gesture of the user (for example, by analyzing image data using a visual gesture recognition algorithm), recognizing voice commands provided by the user (for example, by analyzing audio data using a speech recognition algorithm), and so forth. The data provided to the computational device may be in a digital format and/or in an analog format. In one embodiment, the input device may store the input received from the user in a memory device accessible by a processing device (e.g., the at least one processor), and the processing device may access the stored data for analysis. In another embodiment, the input device may provide the data directly to a processing device, for example, over a bus or over another communication system configured to transfer data from the input device to the processing device. In some examples, the input received by the input device may include key presses, tactile input data, motion data, position data, direction data, image data, or any other data. Some examples of the input device may include a button, a key, a keyboard, a computer mouse, a touchpad, a touchscreen, a joystick, a microphone, or another mechanism from which input may be received.
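By way of non-limiting illustration only, normalizing inputs from several of the sources just described into a single command for presenting a virtual object on a target display device may be sketched as follows; the event shapes and names are illustrative assumptions.

# Hedged sketch: map raw input events (key press, gesture, voice command)
# to a (virtual_object, target_display) transfer request, or None when the
# event does not express a transfer intent. All field names are assumed.

def to_transfer_command(event):
    if event["kind"] == "key" and event.get("key") == "enter":
        return event.get("selected_object"), event.get("focused_display")
    if event["kind"] == "gesture" and event.get("name") == "point":
        return event.get("selected_object"), event.get("pointed_display")
    if event["kind"] == "voice" and event.get("text", "").startswith("send"):
        return event.get("selected_object"), event.get("named_display")
    return None

command = to_transfer_command(
    {"kind": "gesture", "name": "point",
     "selected_object": "virtual-clock", "pointed_display": "computer-3016"}
)
# command -> ("virtual-clock", "computer-3016")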
Receiving input by obtaining sensor information may include acquiring data from a sound sensor, vibration sensor, acoustic sensor, chemical sensor, magnetic sensor, position sensor, optical sensor, image sensor, pressure sensor, force sensor, thermal sensor, proximity sensor, or any other device that detects events or changes in its environment and sends the detected information to other electronics. In some embodiments, the input may be received from a pointing device associated with the wearable extended reality appliance. A pointing device may include a mouse, a touch screen, a joystick, a remote, or any other human interface device that allows a user to input spatial data to a computer. For example, the pointing device may be a mouse that is connected by either a wired or wireless connection to the wearable extended reality appliance. Some disclosed embodiments may further include analyzing the input from the pointing device to determine a cursor drag-and-drop movement of the specific virtual object on the target display device. A cursor may include a pointer, arrow, text, or any other indicator used to show the current position of user interaction on a display in response to input from the pointing device. A cursor drag-and-drop movement may include any movement associated with a pointing device in which the user selects a virtual object and drags it to a different location. The virtual object may be selected by clicking, highlighting, or hovering over the virtual object using the pointing device. In FIG. 30, the virtual content 3010 includes virtual calendar 2716 and virtual clock 2718. In this example, a drag-and-drop movement of cursor 3018 is used to transfer the virtual clock 2718 to a target display device, for example, a computer 3016. A target display device may include one of the multiple display devices located in the room that is selected to be the physical device to which the virtual content is to be transferred. It may be desirable to associate the input with the target display device so that the virtual content is only transferred to a physical display device that is intended or chosen by the user, rather than another of the multiple display devices that may not be appropriate for the type of virtual content being transferred. FIG. 27 illustrates examples of various physical display devices, consistent with some embodiments of the present disclosure. In FIG. 27, a user 2710 of a wearable extended reality appliance 2712 is standing in a room with multiple display devices including computer 2720 with associated unique network identifier "ID 1" 2722, laptop 2724 with associated unique network identifier "ID 2" 2726, and phone 2728 with associated unique network identifier "ID 3" 2730. The virtual content 2714 presented to user 2710 includes virtual calendar 2716 and virtual clock 2718. FIG. 28 illustrates an example of transferring virtual content to a physical display device, consistent with some embodiments of the present disclosure. In FIG. 28, the virtual calendar 2716 of the virtual content 2714 (including virtual calendar 2716 and virtual clock 2718) is transferred to the target display device, as caused by receiving input from the user 2710, for example in the form of a pointing gesture. Some disclosed embodiments may include receiving image data from an image sensor associated with the wearable extended reality appliance, the image data depicting the target display device. Receiving image data from the image sensor may include receiving the image data continuously, or at regular or irregular intervals.
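As a non-limiting sketch of the two acquisition modes just described, the following Python fragment contrasts continuous receipt of image data with trigger-based receipt; the sensor readout and the trigger names are illustrative stand-ins rather than a real device interface.

    def read_frame(sensor_id: str) -> bytes:
        return b"frame"  # placeholder for an actual image-sensor readout

    def continuous_capture(stop_after: int):
        for _ in range(stop_after):  # continuous capture, bounded for the demo
            yield read_frame("wer-cam-0")

    def triggered_capture(triggers):
        for trig in triggers:  # e.g., a transfer input or entering a room
            if trig in ("transfer_input", "entered_room"):
                yield trig, read_frame("wer-cam-0")

    print(len(list(continuous_capture(stop_after=3))), "frames captured continuously")
    for trig, frame in triggered_capture(["idle", "transfer_input"]):
        print("captured one frame on trigger:", trig)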
Continuously receiving the image data may be desirable when the user intends to transfer a plurality of virtual content items to one or more physical display devices. For example, when the virtual content includes a clock, a calendar, and a document, continuously receiving the image data may be appropriate so that the user may transfer one, some, or all of the virtual content to one or more physical display devices. Receiving the image data at intervals may be desirable when the user may only want to transfer virtual content during a specific period of time. For example, the user may only want to transfer virtual content while at work. In such cases, receiving the image data only during working hours of the day may be appropriate so that the user may transfer virtual content at a time of their choosing without placing the burden of continuous receipt of image data on an associated processing system (e.g., the at least one processor). In other examples, the image data may be received in response to a trigger. Some non-limiting examples of such triggers may include receiving the input to cause presentation of a specific virtual object in the virtual environment on a target display device, receiving an input from a user, entering a specific space (such as a room), and so forth. Some disclosed embodiments may include analyzing the image data to identify the target display device. Analyzing the image data may include object recognition, image segmentation, motion detection, video tracking, optical flow detection, or any other manner of extracting information from images acquired by the image sensor. For example, the processor may analyze the image data by performing object recognition on the image data to determine when a target display device is detected. Such object recognition may be accomplished through edge matching, divide-and-conquer search, greyscale matching, gradient matching, histograms of receptive field responses, or any other technique for identifying objects in images or videos. In some examples, a machine learning model may be trained using training examples to identify display devices from images and/or videos. An example of such a training example may include a sample image and/or a sample video of a sample display device, together with a label indicating the identity of the sample display device. The trained machine learning model may be used to analyze the image data and identify the target display device. In some examples, a convolution of at least part of the image data may be calculated to obtain a result value of the calculated convolution. In response to a first result value of the calculated convolution, a first identity of the target display device may be determined, and in response to a second result value of the calculated convolution, a second identity of the target display device may be determined, where the second identity may differ from the first identity. In some embodiments, analyzing the image data to identify the target display device may include determining product visual characteristics based on the image data and comparing the product visual characteristics with stored reference data to thereby determine an identity of the target display device. Product visual characteristics may include a label, logo, emblem, symbol, tag, brand name, design, stamp, sticker, insignia, mark, size, aspect ratio, frame color, or any other visible feature associated with a specific product.
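As a deliberately simplified, non-limiting illustration of the convolution-based identification described above, the following Python sketch maps a result value of a calculated convolution to a device identity; the kernel and the score-to-identity mapping are purely hypothetical, and a deployed system might instead apply a trained machine learning model.

    def convolve_patch(patch, kernel):
        """Elementwise product-sum of an image patch and an equally sized kernel."""
        return sum(p * k for row_p, row_k in zip(patch, kernel)
                   for p, k in zip(row_p, row_k))

    KERNEL = [[0, 1], [1, 0]]                           # illustrative kernel
    IDENTITY_BY_SCORE = {2: "laptop", 4: "television"}  # hypothetical mapping

    def identify(patch):
        score = convolve_patch(patch, KERNEL)
        return IDENTITY_BY_SCORE.get(score, "unknown")

    print(identify([[1, 1], [1, 1]]))  # result value 2 -> 'laptop'
    print(identify([[2, 2], [2, 2]]))  # result value 4 -> 'television'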
Stored reference data may include any information used to classify or categorize the product visual characteristics or an identity of a display device. For example, the stored reference data may include data associating various logos with specific display devices. In another example, the stored reference data may include data associating various brand names with specific display devices. The reference data may be provided by a user or generated by the processor. Using product visual characteristics and stored reference data to determine the identity of the target display device may be desirable to automate the identification of the target display device, so that the processor may perform the identification using image processing instead of a manual identification by the user for every display device in the room. For example, a target display device may have product visual characteristics in the form of a logo on the surface of the target display device. In this example, the image data may be analyzed to detect logos, for example using a visual logo recognition algorithm, the detected logo data may be compared with a database of logos associated with various display devices, and the identity of the target display device may be determined based on a match in the database with the detected logo data. In another example, a target display device may have product visual characteristics in the form of a display size and/or a display aspect ratio. In this example, the image data may be analyzed to determine the display size and/or the display aspect ratio, for example based on the measurements of the target display device in the image data. The determined display size and/or display aspect ratio may be compared with a database of display sizes and/or display aspect ratios associated with various display devices, and the identity of the target display device may be identified based on a match in the database with the determined display size and/or display aspect ratio. Some disclosed embodiments may further include analyzing the image data to recognize content displayed by the target display device and to thereby determine an identity of the target display device. Content displayed by the target display device may include one or more of a symbol, image, sentence, word, letter, shape, sign, or any other visual information that may be displayed by the target display device. Using content displayed by the target display device may be desirable to automate the identification of the target display device, so that the processor may perform the identification using image processing instead of a manual identification by the user for every display device in the room. For example, the target display device may be configured to display a message that states, "meeting room TV." In this example, the processor may analyze the image data to recognize the message "meeting room TV" to determine that the target display device is a television located in or associated with the meeting room. In some embodiments, analyzing the image data to identify the target display device may include determining a position based on the image data and comparing the position with stored position data to thereby determine an identity of the target display device. A position may include a location, area, environment, point, room, geography, locale, region, setting, site, space, surroundings, angle, direction, or any other indication of a way in which someone or something is arranged.
The position may include a position of a user of the wearable user interface, another individual, a target display device, another display device, or any other object. Using a position may be desirable to automate the identification of the target display device, so that the processor may perform the identification using image processing instead of a manual identification by the user for every display device in the room. For example, the image data may be analyzed to identify that the target display device is in a meeting room. In this example, the meeting room information may be compared with stored position data that associates rooms with display devices to determine that the display device is a meeting room display device, such as a television or computer, as opposed to a personal display device, such as a phone. Some disclosed embodiments may further include detecting in the image data a visual code associated with the target display device and processing the visual code to determine the identity of the target display device. A visual code may include a QR code, barcode, textual code, numerical code, specific visual pattern, or any other visible indication of association with the target display device utilizing any combination of letters, words, numbers, symbols, or images. Using a visual code to determine the identity of the target display device may be desirable to automate the identification of the target display device, so that the processor may perform the identification using image processing instead of a manual identification by the user for every display device in the room. For example, a target display device may have a visual code in the form of a QR code presented on the screen of the target display device. In this example, the image data may be processed to detect information in the QR code that associates the QR code with a specific display device, and this information may be used to determine the identity of the target display device. In some embodiments, the input may be received from the image sensor associated with the wearable extended reality appliance. This type of input may be desirable in order to reduce the number of devices used to transfer the virtual content to the physical display device. Additionally, since the user would be wearing the wearable extended reality appliance during the transfer, using the image sensor associated with the wearable extended reality appliance may ensure that the user can direct the input depending on the direction in which the user is facing. Some disclosed embodiments may further include analyzing the image data to identify a gesture initiated by a user of the wearable extended reality appliance that triggers a virtual collision between the specific virtual object and the target display device. A gesture may include any form of nonverbal communication in which bodily actions may be used to communicate. A gesture may include movement of the hand, fingers, leg, head, face, or other parts of the body. For example, a hand gesture may include pointing, dragging, clenching, opening, pinching, sliding, twisting, or rotating using the palm, one or more of the fingers, or the hand. In one example, the gesture may include a dragging of the virtual object to the target display device, a push of the virtual object towards the target display device, and so forth. A virtual collision may include a contact, association, or any other encounter between the specific virtual object and the target display device.
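A minimal, non-limiting sketch of detecting such a virtual collision follows, with the dragged virtual object and the detected device bounds both modeled as axis-aligned rectangles in the coordinate space of the extended reality environment; the geometry and values are illustrative only.

    from dataclasses import dataclass

    @dataclass
    class Rect:
        x: float
        y: float
        w: float
        h: float

    def collides(a: Rect, b: Rect) -> bool:
        """True when the two rectangles share a common region."""
        return (a.x < b.x + b.w and b.x < a.x + a.w and
                a.y < b.y + b.h and b.y < a.y + a.h)

    virtual_clock = Rect(10, 10, 4, 4)      # object position after the drag gesture
    computer_screen = Rect(12, 12, 20, 12)  # detected bounds of the display device
    if collides(virtual_clock, computer_screen):
        print("virtual collision detected: initiate transfer to the target display device")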
In some instances, the gesture may trigger a full transfer of the virtual object to the target display device. In other instances, the gesture may trigger a transfer of the virtual object to only a portion of the target display device. FIG. 29 illustrates an example of a user gesture for transferring virtual content to a physical display device, consistent with some embodiments of the present disclosure. In FIG. 29, the virtual content 2910 includes virtual calendar 2716 and virtual clock 2718. In this example, a user gesture 2918 of dragging the virtual clock 2718 to a target display device (i.e., computer 2916) triggers a virtual collision between the virtual clock 2718 and the target display device by causing the virtual clock 2718 and the target display device to at least partly be in a same common region of the extended reality environment. Some disclosed embodiments may include, upon identifying the target display device, determining a network identifier of the target display device. Determining a network identifier of the target display device may include using the identity of the target display device to access a data-structure or a database associating display devices with network identifiers, and thereby obtaining the network identifier of the target display device. In other examples, the network identifier of the target display device may be an encoding or another simple transformation of the identifier of the target display device. In other examples, determining a network identifier of the target display device may include scanning, examining, inspecting, or in any other way receiving information regarding a unique network identifier associated with a physical display device. The network identifier may be determined using an image sensor, a barcode reader, a magnetic stripe reader, or any other device that is capable of detecting an identifier. For example, a network identifier associated with a physical display device may be encoded on a barcode or another visual code, and the barcode number may be determined by using an image sensor to measure the intensity of light reflected back by the white spaces within the unique pattern of parallel bars in the barcode. The network identifier may also be obtained by requesting the target display device to transmit the identifier to the processor or the wearable extended reality appliance. Some disclosed embodiments may include using the determined network identifier of the target display device to establish a communications link with the target display device. Establishing a communications link may include creating a communications channel that connects the target display device to at least one other device. The communications channel may be either a wired communication channel or a wireless communication channel. A wired communication channel may utilize coaxial cables, Ethernet, or any other channel that transmits information over a wired connection. A wireless communication channel may utilize Wi-Fi, Bluetooth™, or any other channel that transmits information without a wired connection. It may be desirable to establish a communications link with the target display device in order to ensure that information is only being communicated on a recognized communications channel, which may improve user privacy.
For example, establishing a communications link with the target display device may include creating a Bluetooth™ or other wireless communication link between the target display device and another device so that the target display device and the other device may communicate wirelessly, allowing a wireless transfer of the virtual content to the target display device. Some non-limiting examples of such an established communication link may include at least one of a communications link between the wearable extended reality appliance and the target display device, a communications link between a computing device associated with the wearable extended reality appliance and the target display device, a communications link between a computing device coordinating the extended reality environment and the target display device, a communications link between a computing device generating the virtual object and the target display device, a communications link between a computing device providing content for presentation to the wearable extended reality appliance and the target display device, and so forth. In FIG. 28, a determined network identifier "ID 2" 2726 of laptop 2724 is used to establish a communications link 2832 with the target display device (i.e., laptop 2724). In some embodiments, the communications link may be established between the wearable extended reality appliance and the target display device. It may be desirable to establish the communications link between the wearable extended reality appliance and the target display device in order to enable a direct connection between the two devices. This may improve the speed of transfer of virtual content presented on the wearable extended reality appliance to the target display device. For example, in FIG. 28, a wireless communications link 2832 may be established between the wearable extended reality appliance 2712 and the target display device in the form of laptop 2724. In some embodiments, the communications link may be established between a computing device associated with the wearable extended reality appliance and the target display device. A computing device may include a smartphone, keyboard, computer, or any other device that may be used to accept inputs, process the inputs, and output information. It may be desirable to establish the communications link between a computing device associated with the wearable extended reality appliance and the target display device in order to reduce a burden on an existing communications link between the wearable extended reality appliance and the target display device, or in the case that the computing device is better located or equipped to form a communications link with the target display device. For example, in a large room, a wearable extended reality appliance may be connected to a smartphone and may be located at a distance beyond the Wi-Fi range of the target display device. In this example, the smartphone may be located at a distance within the Wi-Fi range of the target display device. In such an instance, a Wi-Fi communications link may be established between the smartphone connected to the wearable extended reality appliance and the target display device because the smartphone may be better positioned to establish that Wi-Fi link. Some disclosed embodiments may include transmitting data representing the specific virtual object to the target display device, wherein the transmitted data enables the target display device to present the specific virtual object.
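By way of a non-limiting illustration of the overall sequence of determining the network identifier, establishing the communications link, and transmitting the data, the following Python sketch uses a hypothetical registry and a mock link object in place of real networking; none of the names or identifiers shown are drawn from the disclosed embodiments.

    NETWORK_IDS = {"computer": "ID 1", "laptop": "ID 2", "phone": "ID 3"}

    class Link:
        def __init__(self, network_id: str):
            self.network_id = network_id  # a real system might open a socket here
        def send(self, data: bytes):
            print(f"sent {len(data)} bytes to device {self.network_id}")

    def transfer(identity: str, payload: bytes):
        network_id = NETWORK_IDS[identity]  # determine the network identifier
        link = Link(network_id)             # establish the communications link
        link.send(payload)                  # transmitted data enables presentation

    transfer("laptop", b"<serialized virtual calendar>")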
Transmitting the data may include communicating the data to the target display device using the established communications link. It may be desirable to use the established communications link in order to improve privacy by ensuring that a recognized communications channel is used to transfer the virtual content. Alternatively, transmitting the data may include communicating the data to the target display device using another communications link. It may be desirable to use another communications link to improve system efficiency and speed by reducing the burden on a single communications link for several types of communication. In FIG. 28, data representing the virtual calendar 2716 is transmitted to the target display device 2724 through the communications link 2832, and the transmitted data enables the target display device 2724 to present the virtual calendar 2716. In some embodiments, the specific virtual object may be a virtual screen playing media content, and the transmission of the data may cause the media content to be presented through the target display device. In one example, a virtual screen (also referred to as a "virtual display" or a "virtual display screen" herein) may include any bounded shape or presentation area, such as a rectangular window containing one or more virtual objects. Other examples are described above. Media content may include one or more of an image, data, text, sound, video, or any other information or experience directed at an end-user or audience in publishing, art, or communication. For example, the specific virtual object may be a virtual screen playing a movie, and the transmission of the data may cause the movie to be presented through the target display device. In some embodiments, the specific virtual object may be a virtual screen (also referred to as a "virtual display" or a "virtual display screen" herein) displaying content generated by a graphical operating system, and the transmission of the data may cause the content generated by the graphical operating system to be presented through the target display device. In one example, the graphical operating system may be paired with a physical keyboard, and text entered using the physical keyboard may be presented in a word processing application running over the operating system and (at least previously) displayed on the virtual screen. In this example, the transmission of the data may cause the word processing application running over the operating system to be displayed on the target display device, and therefore may cause text entered using the physical keyboard to appear on the target display device. Some disclosed embodiments may further include determining that the specific virtual object is classified as private, requesting a permission for displaying the specific virtual object through the target display device, and transmitting the data representing the specific virtual object to the target display device only after receiving the permission. Some disclosed embodiments may also include, in response to receiving a denial of the permission, avoiding transmitting the information. A private virtual object may include a virtual object that is confidential, exclusive, secret, discreet, non-public, or in any other way belonging to or for the use of one particular person or group of people only.
Transmitting the data only after receiving the permission for a private virtual object may be desirable when a user does not wish to share the private virtual object (e.g., personal information) with the general public and may want to authorize each request for the private virtual object to ensure that the user is comfortable with sharing the information with a specific individual or group of individuals. For example, the specific virtual object may be a banking application of the user of the wearable extended reality appliance. In this example, the user's financial advisor may request a permission for displaying the banking application through a television in a meeting room occupied by only the user and the financial advisor. The user may grant the permission since the financial advisor may be a trusted individual whom the user may trust with the sensitive and confidential financial information presented on the banking application. Thus, the data representing the banking application may be transmitted to the television. In another example, the user may be waiting in the same meeting room for his financial advisor to arrive. In the meantime, if another individual that the user does not know requests permission for displaying the banking application through the television, the user may deny the permission. In this example, the transmission of the information would be avoided in order to preserve the confidentiality of the user's financial information. Some disclosed embodiments may further include determining that the specific virtual object is classified as private, determining that the target display device is authorized for presenting the private information, and in response to the determination that the target display device is authorized for presenting the private information, transmitting the data representing the specific virtual object to the target display device. A private virtual object may include a virtual object similar to those described above. Transmitting the data in response to determining that the target display device is authorized for presenting the private information may be desirable to ensure that sensitive or confidential information is only presented on devices that are capable of and appropriate for displaying such information. For example, the specific virtual object may be a banking application of the user of the wearable extended reality appliance. In this example, the target display device may be a television in a meeting room of a bank associated with the banking application. In such an instance, the television may be determined to be authorized for presenting the banking application, and in response to this determination, the data representing the banking application may be transmitted to the television, so that financial information may be presented in a display device that is appropriate for the sensitive nature of the information. In another example using the same banking application, the target display device may be a computer in a living room of a friend whom the user is visiting. In this example, the computer may be determined to not be authorized for presenting the private information, and transmission of the data representing the banking application to the computer would be avoided in order to preserve the confidentiality of the user's financial information, which the user may not want the friend to view.
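A non-limiting sketch of such a permission gate appears below, in Python; the permission prompt is simulated by a stub function, and the object and device names are hypothetical.

    def request_permission(obj_name: str, device: str) -> bool:
        # A real appliance might present a dialog; a user response is simulated here.
        return obj_name != "banking_app" or device == "bank_meeting_tv"

    def maybe_transmit(obj_name: str, is_private: bool, device: str):
        if is_private and not request_permission(obj_name, device):
            print("permission denied: transmission avoided")
            return
        print(f"transmitting {obj_name} to {device}")

    maybe_transmit("banking_app", True, "bank_meeting_tv")   # permission granted
    maybe_transmit("banking_app", True, "friends_computer")  # transmission avoided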
Some disclosed embodiments may further include determining that the specific virtual object is classified as private, analyzing the image data to determine presence of individuals other than a user of the wearable extended reality appliance exposed to the target display device, in response to a determination that no individuals other than the user are exposed to the target display device, transmitting the data representing the specific virtual object to the target display device, and in response to a determination that one or more individuals other than the user are exposed to the target display device, avoiding transmission of the data representing the specific virtual object to the target display device. A private virtual object may include a virtual object similar to those described above. Individuals exposed to the target display device may include individuals within a specific geographical range of the target display device, or individuals that may be able to view information displayed on the target display device. Transmitting information to the target display device based on the absence of individuals other than the user of the wearable extended reality appliance may be desirable to ensure that the confidentiality of the user's sensitive information is preserved by not presenting that information to individuals other than the user. For example, the specific virtual object may be a document containing secret information that the user may not wish to share with others, and the user may be standing in an office room, where the target display device is a computer. In this example, the image data may be analyzed to determine whether other individuals are in the office room, and in response to a determination that no other individuals are present in the office room, the data representing the document may be transmitted to the computer. Alternatively, the image data may be analyzed to determine that someone else is in the office room, and in response to a determination of that other individual's presence in the room, transmission of the data representing the document to the computer may be avoided, in order to preserve the confidentiality of the information in the document. In some examples, a machine learning model may be trained using training examples to determine presence of individuals exposed to display devices. An example of such a training example may include a sample image and/or a sample video of a sample room including a sample display device, together with a label indicating whether individuals in the sample room are exposed to the sample display device. The trained machine learning model may be used to analyze the image data and determine the presence of individuals other than a user of the wearable extended reality appliance that are exposed to the target display device. In some examples, the image data may be analyzed to detect persons in the room and to determine the pose of the detected persons, for example using a visual person detection algorithm and a visual pose estimation algorithm. Further, the pose and location of a person may be used to determine whether the person is exposed to the target display device, for example using a ray casting algorithm involving the location of the eyes of the person (based on the determined pose).
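The following Python sketch illustrates, in simplified and non-limiting form, how transmission of a private object might be gated on the exposure determination described above; person detection and pose estimation are abstracted into precomputed (person_id, can_see_display) pairs that an image-analysis pipeline would be assumed to produce.

    def exposed_others(detections, user_id="user-0"):
        """Return identifiers of non-user individuals exposed to the display."""
        return [pid for pid, can_see in detections if can_see and pid != user_id]

    def transmit_if_unobserved(detections, payload: bytes):
        others = exposed_others(detections)
        if others:
            print("transmission avoided; exposed individuals:", others)
        else:
            print(f"transmitting {len(payload)} bytes of private content")

    transmit_if_unobserved([("user-0", True)], b"secret doc")                 # transmits
    transmit_if_unobserved([("user-0", True), ("p-7", True)], b"secret doc")  # avoids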
Some disclosed embodiments may further include, after transmitting the data representing the specific virtual object to the target display device: determining that an individual other than the user is likely to be exposed to the target display device; and in response to the determination that the individual other than the user is likely to be exposed to the target display device, causing the target display device to cease presentation of the specific virtual object. An individual that is likely to be exposed to the target display device may include an individual within a specific geographical range or proximity of the target display device, an individual that may be able to view information displayed on the target display device, or an individual that is in a location where they may soon be within the specific geographical range of the target display device or be able to view information displayed on the target display device. Causing the target display device to cease presentation of the specific virtual object may include turning off the entire display of the target display device, removing the specific virtual object from the display of the target display device, or pixelating or blurring the virtual object to make it illegible. Ceasing presentation of the specific virtual object on the target display device in response to the determination that the individual other than the user is likely to be exposed to the target display device may be desirable to continuously preserve the confidentiality of sensitive information, even after an initial determination of an absence of other individuals is made. Continuing from the previous example where the data representing the document was transmitted to the computer, the processor may determine after this transmission that an individual other than the user is about to enter the office room, making the individual likely to be exposed to the computer. In this example, the computer may be turned off, or the document may be removed from the display presented on the computer, in order to preserve the confidentiality of the information in the document in view of the individual that may soon enter the office room. Some disclosed embodiments may further include causing the specific virtual object to disappear from the extended reality environment once the specific virtual object is presented through the target display device. Causing the specific virtual object to disappear from the extended reality environment after its presentation through the target display device may be desirable to reduce the processing burden on the system and the redundancy of displaying the same information on two different modalities (e.g., physical display devices and one or more virtual screens) at the same time. Causing the specific virtual object to disappear from the extended reality environment may also reduce a user's visual confusion upon being presented with the same information on two different displays, thereby allowing the user to better focus on the information presented. For example, the specific virtual object may be a movie, and presenting the movie through both the wearable extended reality appliance and a target display device may cause confusion to the user, and may not allow the user to understand the movie. In this example, the processor may cause the movie to disappear from the extended reality environment once the movie is presented through the target display device, in order to avoid user confusion.
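A compact, non-limiting sketch of this post-transfer lifecycle follows, combining the removal of the object from the extended reality environment once the device presents it with the cessation of presentation upon a likely exposure; the session class and its events are hypothetical.

    class TransferSession:
        def __init__(self, obj: str):
            self.obj = obj
            self.in_xr = True       # shown in the extended reality environment
            self.on_device = False  # shown on the target display device
        def device_presented(self):
            self.on_device = True
            self.in_xr = False      # disappear from XR to avoid duplicate display
            print(f"{self.obj}: presented on device, removed from XR environment")
        def exposure_risk(self, likely_exposed: bool):
            if likely_exposed and self.on_device:
                self.on_device = False
                print(f"{self.obj}: presentation ceased (individual likely exposed)")

    session = TransferSession("confidential document")
    session.device_presented()
    session.exposure_risk(likely_exposed=True)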
Some disclosed embodiments may further include displaying via the wearable extended reality appliance a virtual indicator indicating that the specific virtual object is presented through the target display device. A virtual indicator may include a color, pattern, shape, text, message, picture, symbol, or any other sign that shows the condition or existence of the specific virtual object being presented through the target display device. It may be desirable to display the virtual indicator via the extended reality appliance so that a user of the wearable extended reality appliance may be put on notice of the specific virtual object being presented through the target display device. This may be relevant in cases where the specific virtual object was accidentally transmitted to the target display device. For example, in FIG. 29, the virtual clock 2718 is the specific virtual object that is presented through the target display device in the form of computer 2916. In this example, the virtual indicator may be a greyed-out virtual clock 2920 displayed via the wearable extended reality appliance to signal to the user of the wearable extended reality appliance that the virtual clock 2718 is being presented through computer 2916. Some disclosed embodiments may further include generating a virtual controller associated with the target display device, the virtual controller enabling modification of display parameters of the target display device. A virtual controller may include a clicker, remote, joystick, wheel, pad, button, or any other mechanism for controlling any audio or visual aspect of an extended reality display. Generating a virtual controller associated with the target display device may be desirable to adjust a size, volume, angle, perspective, or any other audio or visual aspect of the target display device. For example, in FIG. 30, a virtual controller in the form of a remote 3020 is generated, wherein the remote 3020 is associated with the target display device in the form of computer 3016. In this example, the remote 3020 may be used to enlarge or reduce the size of the virtual clock 2718 presented on computer 3016 to enable a user to view the virtual clock 2718 better. In another example, the target display device may be a television presenting a video with sound. In this example, a virtual remote may be generated to control the volume of the television so that a user may better hear the sounds associated with the video presented on the television. After the target display device begins presenting the specific virtual object, some disclosed embodiments may further include receiving additional input, and modifying the specific virtual object on the target display device based on the additional input. The additional input may be received from the user or the processor. The additional input may be received from the user through interaction with a user input device, such as a button, a key, a keyboard, a computer mouse, a touchpad, a touchscreen, a joystick, or another mechanism from which input may be received. The additional input may additionally, or alternatively, be received from the processor based on an automated assessment of further modifications that may be appropriate for the specific virtual object on the target display device. Modifying the specific virtual object may include adjusting or changing one or more of a size, angle, color, position, orientation, perspective, text, object, sound, or any other audio or visual aspect of the virtual object.
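As a non-limiting sketch only, the following Python fragment models a virtual controller that issues display-parameter modification commands over an already established link; the command vocabulary is hypothetical and not part of the disclosed embodiments.

    class VirtualController:
        def __init__(self, send):
            self.send = send  # callable that transmits a command to the device
        def resize(self, object_id: str, scale: float):
            self.send({"cmd": "resize", "object": object_id, "scale": scale})
        def set_volume(self, level: int):
            self.send({"cmd": "volume", "level": max(0, min(100, level))})

    remote = VirtualController(send=print)  # print stands in for the real link
    remote.resize("virtual_clock", 1.5)     # enlarge the clock on the device
    remote.set_volume(40)                   # adjust the television volume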
It may be desirable to modify the specific virtual object on the target display device based on the additional input in order to allow a user to adjust a size of the virtual object to see it better, add text to a virtual document in order to complete work tasks, or in any other way change the virtual object to better fit the needs or desires of the user. For example, the specific virtual object may be a movie window, and the processor may determine that the size of the movie window is too large to fit entirely within a target television display. In this example, the processor may provide additional input to reduce the size of the movie window to fit within the screen of the television. In another example, the specific virtual object may be a document, and a user may provide additional input by typing into a keyboard text to be added to the document. In this example, the processor may receive the additional input to add the typed text to the document. After the target display device begins presenting the specific virtual object, some disclosed embodiments may further include receiving an additional input indicating a desire of a user of the wearable extended reality appliance to stop presenting the specific virtual object through the target display device; and in response to receiving the additional input, causing the target display device to cease presentation of the specific virtual object by the target display device. An additional input indicating a desire of the user of the wearable extended reality appliance to stop presenting the specific virtual object through the target display device may be received through interaction with a user input device, such as those described above. Causing the target display device to cease presentation of the specific virtual object by the target display device may include transmitting, to the system controlling the physical display, information configured to cause the target display device to stop displaying the content, halting transmission of data to a system controlling the target display device, or in any other way stopping the presentation of the specific virtual object on the target display device. Ceasing presentation of the specific virtual object by the target display device in response to receiving additional input indicating such a desire of the user may be desirable so that the user may prevent others from viewing sensitive information, or if the user no longer wants to view information on the target display device. For example, the specific virtual object may be a document and after a target computer presents the document, the user may finish reading the document. In this example, the user may click a button to provide additional input indicating that the user no longer wishes to present the document on the computer, and in response to receiving this additional input, data may be transmitted to a system controlling the computer to cease presentation of the document on the computer, since the user has finished reading the document. In some embodiments, the additional input indicating the desire of the user of the wearable extended reality appliance to stop presenting the specific virtual object through the target display device may include an indication that the wearable extended reality appliance left the room. An indication that the wearable extended reality appliance left the room may be provided through user input or sensor input. A user input may be received through interaction with a user input device, such as those described above.
A sensor input may be received via data from a position sensor, image sensor, proximity sensor, or any other device that may generate information indicative of the presence or absence of the wearable extended reality appliance in the room. It may be desirable to stop presenting the specific virtual object through the target display device based on an indication that the wearable extended reality appliance left the room so that the processor is not burdened with presenting information that the user is not able to view, and so that others may not view the information after the user has left the room. For example, the room may be a living room, and a position sensor may provide information indicating that the wearable extended reality appliance has moved to a dining room. In this example, the indication that the wearable extended reality appliance has moved to a dining room may indicate the desire of the user of the wearable extended reality appliance to stop presenting the specific virtual object through the target display device in the living room, since the user is no longer in the living room to view the presented information. Some disclosed embodiments may further include, after ceasing the presentation of the specific virtual object, receiving an indication that the wearable extended reality appliance reentered the room, and causing a renewal of the presentation of the specific virtual object by the target display device in response to the reentrance. An indication that the wearable extended reality appliance reentered the room may be provided through user input or sensor input. A user input may be received through interaction with a user input device, such as those described above. A sensor input may be received via data from a sensor, such as those described above. It may be desirable to renew the presentation of the specific virtual object through the target display device in response to the reentrance in order to improve processing speed by avoiding repeating the steps of receiving input associated with the wearable extended reality appliance, receiving image data from the image sensor, analyzing the image data to identify the target display device, upon identifying the target display device, determining a network identifier of the target display device, using the determined network identifier of the target display device to establish a communications link with the target display device, and transmitting data representing the specific virtual object to the target display device. Continuing from the example of the living room above, after ceasing presentation of the specific virtual object, the position sensor may provide information indicating that the wearable extended reality appliance has reentered the living room. In this example, a renewal of the presentation of the specific virtual object by the target display device may be caused in response to the reentrance, so that a user may once again view the specific virtual object without having to repeat the initial steps. 
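A non-limiting Python sketch of this presence-based pause-and-renew behavior follows; position-sensor readings are simulated as a list of room names, and the handler logic is illustrative only.

    def presence_handler(events, room="living_room"):
        presenting = True
        for observed_room in events:
            if presenting and observed_room != room:
                presenting = False
                print("appliance left the room: presentation ceased")
            elif not presenting and observed_room == room:
                presenting = True
                print("appliance reentered: presentation renewed without re-pairing")

    presence_handler(["living_room", "dining_room", "living_room"])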
Some disclosed embodiments may further include receiving additional input to cause presentation of content from a second display device through the wearable extended reality appliance; receiving second image data from the image sensor, the second image data depicting the second display device; analyzing the second image data to identify the second display device; upon identifying the second display device, determining a network identifier of the second display device; using the determined network identifier of the second display device to establish a communications link with the second display device; receiving data representing the content from the second display device; and displaying through the wearable extended reality appliance the content from the second display device. The second display device may include a physical display device, such as one of those described above. It may be desirable to display through the wearable extended reality appliance the content from the second display device so that a user may view content in a virtual space which is not limited by the constraints of a physical display device, such as size or perspective. For example, the user may press a button to cause presentation of a movie from a second display device, such as a phone, through the wearable extended reality appliance. In this example, the processing steps described above may be performed so that the movie from the phone is displayed through the wearable extended reality appliance, so that the user may enlarge the size of the movie for better viewability. Some embodiments may include a method for transferring virtual content to a physical display device. FIG. 31 is a flowchart of an exemplary method 3100 for transferring virtual content to a physical display device, consistent with some embodiments of the present disclosure. Method 3100 may include a step 3110 of presenting an extended reality environment in a room via a wearable extended reality appliance, the wearable extended reality appliance being configured to be paired with multiple display devices located in the room, wherein each display device is associated with a unique network identifier. Method 3100 may include a step 3112 of receiving input associated with the wearable extended reality appliance to cause presentation of a specific virtual object in the extended reality environment on a target display device. Method 3100 may include a step 3114 of receiving image data from an image sensor associated with the wearable extended reality appliance, the image data depicting the target display device. Method 3100 may include a step 3116 of analyzing the image data to identify the target display device. Method 3100 may include a step 3118 of, upon identifying the target display device, determining a network identifier of the target display device. Method 3100 may include a step 3120 of using the determined network identifier of the target display device to establish a communications link with the target display device. Method 3100 may include a step 3122 of transmitting data representing the specific virtual object to the target display device, wherein the transmitted data enables the target display device to present the specific virtual object.
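By way of a non-limiting illustration, the following Python sketch strings the steps of method 3100 together at a high level; every helper is a stub standing in for the operations described above, and none of the names or values are drawn from the disclosed embodiments.

    def present_environment():        return "extended reality environment"
    def receive_input():              return ("virtual_calendar", "point_at_laptop")
    def receive_image_data():         return b"frame-depicting-laptop"
    def identify_device(image):       return "laptop"
    def network_identifier(identity): return {"laptop": "ID 2"}[identity]
    def establish_link(net_id):
        return lambda data: print(f"sent {len(data)} bytes via {net_id}")

    def method_3100():
        present_environment()                  # step 3110: present the environment
        obj, _ = receive_input()               # step 3112: receive input
        image = receive_image_data()           # step 3114: receive image data
        identity = identify_device(image)      # step 3116: identify target device
        net_id = network_identifier(identity)  # step 3118: determine network identifier
        send = establish_link(net_id)          # step 3120: establish communications link
        send(obj.encode())                     # step 3122: transmit object data

    method_3100()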
Some embodiments may include a system for transferring virtual content to a physical display device, the system comprising at least one processor configured to: present an extended reality environment in a room via a wearable extended reality appliance, the wearable extended reality appliance being configured to be paired with multiple display devices located in the room, wherein each display device is associated with a unique network identifier; receive input associated with the wearable extended reality appliance to cause presentation of a specific virtual object in the extended reality environment on a target display device; receive image data from an image sensor associated with the wearable extended reality appliance, the image data depicting the target display device; analyze the image data to identify the target display device; upon identifying the target display device, determine a network identifier of the target display device; use the determined network identifier of the target display device to establish a communications link with the target display device; and transmit data representing the specific virtual object to the target display device, wherein the transmitted data enables the target display device to present the specific virtual object. When discussing physically presented visual content, it is useful to visually indicate portions of the physically presented visual content, for example with hand gestures. When sharing virtually presented visual content among computers, for example in a video call, it is useful to visually indicate portions of the virtually presented visual content, for example using a mouse cursor controlled by one user and visible to the other user. However, when sharing content among extended reality appliances, the usage of a cursor to indicate portions of the content may be counterproductive, as it breaks the natural experience that the usage of extended reality appliances calls for. That is, it fails to mimic the experience of discussing physically presented visual content. It is therefore desirable to allow a visual indication of a portion of the content, produced by a first user through natural hand gestures, to be visually available to a second user viewing the content using an extended reality appliance at a remote location. When the extended reality is a mixed reality or an augmented reality, the position from which the first user uses the hand gesture to indicate the portion of the content may not be available to the second user, for example due to different layouts of the physical spaces of the two users (for example, the first user may stand at a location in relation to the content that corresponds to a wall in the physical space of the second user). Therefore, in some cases it is desired to adjust the visual indications to the physical environments of the users. Disclosed embodiments, including methods, systems, apparatuses, and non-transitory computer-readable media, may relate to simulating user interactions with shared content. Shared content may include any information communicated between two or more entities (e.g., users or devices).
For example, the shared content may include visual content, textual content, audio content, graphical content, virtual white board, virtual display screen, two-dimensional content, three-dimensional content, animated content, inanimate content, live content, real-time content, Internet content, cinema content, television content, radio content, smartphone content, live events, physical objects, virtual objects, or any other type of information. In some examples, the shared content may be communicated between two or more wearable extended reality appliances. For example, content (physical or virtual) viewed by a user of a wearable extended reality appliance may be transmitted to another wearable extended reality appliance for display to another user. A user interaction may include any action associated with a user or an entity. For example, the user interaction may include a finger gesture, a hand gesture, an eye gesture, a mouth gesture, a face gesture, or an action of one or more other parts of a person's body. As an example, the user interaction may include any finger or hand motion, such as a drag, a pinch, a spread, a swipe, a tap, a pointing, a scroll, a rotate, a flick, a touch, a zoom-in, a zoom-out, a thumb-up, a thumb-down, a touch-and-hold, or any other action of a finger or hand. As another example, the user interaction may include a location or movement of the attention (e.g., gaze) of a user. As another example, the user interaction may include a sound (e.g., voice) of a user. Additionally or alternatively, the user interaction may include an action via an object (physical or virtual), such as a pen, an eraser, a pointer stick, a laser pointer, a cursor, or any other item. Simulating user interactions may include, for example, generating an indication of one user's interaction for presentation to another user. In some examples, a user may cause a user interaction (e.g., with particular content), and an indication of the user interaction may be transmitted for display to another user. A simulated user interaction may be based on a user interaction initiated by a user, and may be the same as or similar to the initiated user interaction. In some examples, the simulated user interaction may be different from the initiated user interaction. As one example, the initiated user interaction may include a hand gesture, and the simulated user interaction may include a visual representation of the hand gesture. As another example, the initiated user interaction may include a hand gesture, and the simulated user interaction may include a visual indication of a pointer. The simulated user interaction may have one or more features same as or similar to the initiated user interaction, such as a location of a user interaction relative to content interacted with, an orientation of a user interaction relative to content interacted with, a motion of a user interaction, a texture of a user interaction, or any other characteristic of a user interaction. Some disclosed embodiments may relate to simulating user interactions with shared content, as described in greater detail herein. Users of wearable extended reality appliances may automatically share user interactions with shared content.
For example, when a first user stands in front of a whiteboard (physical or virtual) and points to a specific part of the whiteboard, a second user's wearable extended reality appliance may display a virtual representation of the whiteboard with a virtual indicator identifying the specific part of the whiteboard to which the first user points, as described herein. Some disclosed embodiments may include establishing a communication channel for sharing content and user interactions between a first wearable extended reality appliance (WER-Appliance) and at least one second wearable extended reality appliance. A communication channel may include any type of medium for transmitting data. A communication channel may include, for example, an IP (Internet Protocol) connection, a Wi-Fi connection, a WiMAX connection, a cellular connection (e.g., 2G, 3G, 4G, or 5G), a Bluetooth connection, a near-field communication (NFC) connection, a low-power wide-area networking (LPWAN) connection, an Ethernet connection, a power-line communication (PLC) connection, a satellite communication connection, a mobile network connection, a terrestrial microwave network connection, a wireless ad hoc network connection, or any other type of connection via a network. The communication channel may be wired or wireless. In some examples, the communication channel may be via a personal area network, a local area network, a metropolitan area network, a wide area network, a global area network, a space network, a peer-to-peer network, or any other type of computer network that may use data connections between network nodes. In some examples, the communication channel may be established between the first wearable extended reality appliance and the at least one second wearable extended reality appliance. For example, the at least one processor may cause networking resources to be allocated for the first wearable extended reality appliance and the at least one second wearable extended reality appliance, such as network addresses, port numbers, or other types of networking configurations. In some examples, the at least one processor may execute processes that may be specified in network protocols (e.g., the transmission control protocol (TCP), the user datagram protocol (UDP), or any other protocol). In some examples, the communication channel may be established between a centralized system and each one of the first wearable extended reality appliance and the at least one second wearable extended reality appliance. That is, in some examples there may be no communication between wearable extended reality appliances that does not pass through the centralized system. Such a centralized system may include one or more servers, a cloud platform, a distributed computing system, and so forth. Some non-limiting examples of such a centralized system may include a system that controls the extended reality environment, a system that provides virtual content for presentation to the wearable extended reality appliances, and so forth. In one example, establishing the communication channel may include adding an association of the first wearable extended reality appliance and the at least one second wearable extended reality appliance in a data-structure maintained by the centralized system.
When data is received from the first wearable extended reality appliance for sharing with other wearable extended reality appliances, the centralized system may access the data-structure to determine that the received data, or information based on the received data, needs to be shared with the at least one second wearable extended reality appliance, and vice versa. Content may be shared via the communication channel. The shared content may include any information communicated between two or more entities (e.g., users or devices). For example, the shared content may include visual content, textual content, audio content, graphical content, two-dimensional content, three-dimensional content, animated content, inanimate content, live content, real-time content, Internet content, cinema content, television content, radio content, smartphone content, live events, physical objects, virtual objects, or any other type of information. User interactions may be shared via the communication channel. FIG. 32 is a flowchart illustrating an exemplary process 3200 for simulating user interactions with shared content consistent with some embodiments of the present disclosure. With reference to FIG. 32, in step 3210, instructions contained in a non-transitory computer-readable medium when executed by a processor may cause the processor to establish a communication channel for sharing content and user interactions between a first wearable extended reality appliance and at least one second wearable extended reality appliance. FIG. 34, FIG. 35, and FIG. 36 are schematic diagrams illustrating various use snapshots of an example system for simulating user interactions with shared content consistent with some embodiments of the present disclosure. With reference to FIG. 34, a first user 3414 using a first wearable extended reality appliance 3418 may be present in a first location 3410, and a second user 3416 using a second wearable extended reality appliance 3420 may be present in a second location 3412. A communication channel (e.g., based on a link 3424, a network 3422, and a link 3426) may be established between first wearable extended reality appliance 3418 and second wearable extended reality appliance 3420. Some disclosed embodiments may include transmitting to the at least one second wearable extended reality appliance, first data representing an object associated with the first wearable extended reality appliance. The first data may enable a virtual representation of the object to be displayed through the at least one second wearable extended reality appliance. The object associated with the first wearable extended reality appliance may include any item with which a user may interact. In some examples, the object associated with the first wearable extended reality appliance may be a physical object, such as a whiteboard, a screen, a lamp, a desk, a table, a vase, a container, a printer, a shelf, a keyboard, a mouse, a touchpad, a cup, a telephone, a mobile device, a machine, a vehicle, a door, a window, a chair, a button, a surface, or any other type of physical item.
In some examples, the object associated with the first wearable extended reality appliance may be a virtual object, such as a virtual widget, a virtual screen, a virtual whiteboard, a virtual keyboard, a virtual touchpad, a virtual button, a virtual surface, virtual furniture, a virtual desk, a virtual chair, a virtual window, a virtual decorative object, a virtual vase, an inanimate virtual object, an animate virtual object, or any other type of visual representation rendered by a computing device (e.g., a wearable extended reality appliance) and configured to represent an item. In one example, the first data may be transmitted from the first wearable extended reality appliance to the at least one second wearable extended reality appliance. In another example, the first data may be transmitted from a centralized system to the at least one second wearable extended reality appliance, for example in response to information received at the centralized system from the first wearable extended reality appliance. The first data may be identical to or different from the information received at the centralized system from the first wearable extended reality appliance. Some examples of such a centralized system are described above. The first data representing the object associated with the first wearable extended reality appliance may include any information describing the object, such as textual data, imagery data (e.g., two-dimensional), modeling data (e.g., three-dimensional), feature data, or any other type of desired information. The first data may enable display of a visual representation of the object (e.g., by a wearable extended reality appliance). As one example, the first data may include data of a three-dimensional model of the object and may enable a wearable extended reality appliance to display a three-dimensional visual representation of the object. As another example, the first data may include data of a two-dimensional image of the object and may enable a wearable extended reality appliance to display a two-dimensional visual representation of the object. In some examples, the first data may indicate one or more features for the visual representation of the object, such as a size, a color scheme, a texture, a location, an orientation, or any other characteristic. The one or more features for the visual representation may be same as or similar to the one or more features of the object, so that the visual representation of the object may be same as or similar to the object in terms of a size, a color scheme, a texture, a location, an orientation, or any other feature. At least one processor may determine (e.g., identify) the object associated with the first wearable extended reality appliance. In some examples, the at least one processor may receive image data that may be captured by image sensor(s) of the first wearable extended reality appliance. The image sensor(s) may be part of or separate from the first wearable extended reality appliance. The at least one processor may use the image data to identify the object (e.g., a physical object). The image sensor(s) may capture the image data of the scenes in front of (e.g., in the field of view of) the image sensor(s). When the object is located in front of (e.g., in the field of view of) the image sensor(s), the captured image data may indicate the object.
The at least one processor may use image analysis algorithms to identify the object, and/or to determine the features (e.g., a size, a color scheme, a texture, a location, an orientation, or any other characteristic) of the object. Based on the image data, the at least one processor may generate the first data representing the object. For example, the at least one processor may use one or more captured images to construct a three-dimensional model of the object. In some examples, the construction of the three-dimensional model may include classifying the object into a category based on captured images, and adding, to a template three-dimensional model (e.g., predefined) for the determined category, extracted features of the object (e.g., a size, a color scheme, a texture, a location, an orientation, or any other characteristic). Additionally or alternatively, the construction of the three-dimensional model may be based on three-dimensional scanning devices (e.g., light detection and ranging (Lidar)), depth sensors, or range imaging sensors of the first wearable extended reality appliance. As another example, the at least one processor may use one or more captured images to determine a two-dimensional representation of the object. For example, the at least one processor may extract an imagery representation of the object (e.g., based on an image captured from a particular perspective). In some examples, the at least one processor may receive data of the object associated with the first wearable extended reality appliance (e.g., a virtual object). The object may be displayed by the first wearable extended reality appliance. For example, the object may be displayed in a field of view of a display system of the first wearable extended reality appliance. The at least one processor may determine the first data based on the received data of the object (e.g., being displayed by the first wearable extended reality appliance). In some examples, the first data may be same as or similar to the received data, so that the object may be displayed in a same or similar manner by the first wearable extended reality appliance and the at least one second wearable extended reality appliance to which the first data may be transmitted. In some examples, the first data may be a changed (e.g., simplified or compressed) version of the received data. In some examples, the object may be a three-dimensional virtual object virtually displayed by the first wearable extended reality appliance and the first data may be configured to cause a display, via the at least one second wearable extended reality appliance, of the three-dimensional virtual object. In some examples, the first data may be configured to cause a display of the three-dimensional virtual object via the at least one second extended reality appliance in a size corresponding to a size of the three-dimensional virtual object displayed via the first extended reality appliance. A size may include, for example, a height, a width, a depth, or any other measurement. In some examples, the first data may be configured to cause a display of the three-dimensional virtual object via the at least one second extended reality appliance with a feature (e.g., a color scheme, a texture, a location, an orientation, or any other characteristic) corresponding to a feature of the three-dimensional virtual object displayed via the first extended reality appliance. 
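One way to picture the first data is as a serializable descriptor bundling the object's classification with the extracted features discussed above (size, color scheme, location, orientation, and a reference to a template or scanned model). The sketch below is illustrative only; every field name is an assumption:

```python
import json
from dataclasses import dataclass, asdict


@dataclass
class ObjectDescriptor:
    """Hypothetical 'first data' describing an object for remote display."""
    category: str            # e.g., classification result ("whiteboard")
    is_virtual: bool         # physical object scanned vs. virtual object
    size_m: tuple            # (width, height, depth) in meters
    color_scheme: str        # dominant color scheme extracted from images
    position: tuple          # location relative to the first appliance
    orientation_deg: tuple   # (yaw, pitch, roll)
    model_uri: str = ""      # reference to a template or scanned 3D model


def encode_first_data(descriptor: ObjectDescriptor) -> bytes:
    """Serialize the descriptor for transmission over the channel."""
    return json.dumps(asdict(descriptor)).encode("utf-8")


whiteboard = ObjectDescriptor(
    category="whiteboard", is_virtual=False,
    size_m=(1.8, 1.2, 0.02), color_scheme="white",
    position=(0.0, 1.0, 2.0), orientation_deg=(0.0, 0.0, 0.0),
    model_uri="templates/whiteboard.glb",
)
first_data = encode_first_data(whiteboard)
print(len(first_data), "bytes")
```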
In some examples, the object may be a two-dimensional virtual object virtually displayed by the first wearable extended reality appliance and the first data may be configured to cause a display, via the at least one second wearable extended reality appliance, of the two-dimensional virtual object. In some examples, the first data may be configured to cause a display of the two-dimensional virtual object via the at least one second extended reality appliance in a color scheme corresponding to a color scheme of the two-dimensional virtual object displayed via the first extended reality appliance. A color scheme may include, for example, a choice of colors used in a design for creating style and/or appeal, or any other choice of colors. In some examples, the first data may be configured to cause a display of the two-dimensional virtual object via the at least one second extended reality appliance with a feature (e.g., a size, a texture, a location, an orientation, or any other characteristic) corresponding to a feature of the two-dimensional virtual object displayed via the first extended reality appliance. The at least one processor may transmit to the at least one second wearable extended reality appliance (e.g., via the established communication channel), the first data representing the object associated with the first wearable extended reality appliance. The first data may enable a virtual representation of the object to be displayed through the at least one second wearable extended reality appliance. The virtual representation of the object may include, for example, any visual display rendered by a computing device (e.g., a wearable extended reality appliance) and configured to represent the object. With reference to FIG. 32, in step 3212, instructions contained in a non-transitory computer-readable medium when executed by a processor may cause the processor to transmit to the at least one second wearable extended reality appliance, first data representing an object associated with the first wearable extended reality appliance, the first data enabling a virtual representation of the object to be displayed through the at least one second wearable extended reality appliance. With reference to FIG. 34, an object 3428 associated with first wearable extended reality appliance 3418 may be present or presented in first location 3410. Object 3428 may include a physical object or a virtual object. As one example, object 3428 may include a physical whiteboard, or a virtual whiteboard displayed by first wearable extended reality appliance 3418. At least one processor (e.g., associated with first wearable extended reality appliance 3418 or with a centralized system as described above) may determine first data representing object 3428, and may transmit the first data to second wearable extended reality appliance 3420. The transmission of the first data may be via the established communication channel (e.g., based on link 3424, network 3422, and link 3426). Second wearable extended reality appliance 3420 may receive the first data and may, based on the received first data, cause display of a virtual representation 3430 of object 3428. For example, second user 3416 may view virtual representation 3430 via second wearable extended reality appliance 3420. In some examples, the object may be an inanimate object physically located in proximity to the first wearable extended reality appliance, and the first data may include a representation of the inanimate object.
The inanimate object may include any physical item that may lack motion (e.g., may be motionless). The inanimate object may include, for example, a whiteboard, a desk, a table, a vase, a container, a shelf, a cup, a chair, a surface, or any other physical item that may lack motion. The object associated with the first wearable extended reality appliance may be the inanimate object physically located in proximity (e.g., 0.1 meters, 0.5 meters, 1 meter, 2 meters, 3 meters, 5 meters, 10 meters, or any other desired distance) to the first wearable extended reality appliance. The representation of the inanimate object may include any information describing the inanimate object, such as textual data, imagery data (e.g., two-dimensional), modeling data (e.g., three-dimensional), feature data, or any other type of desired information. In some examples, the first data may enable display of a virtual representation of the inanimate object in a size corresponding to an actual size of the inanimate object. At least one processor may measure the actual size (e.g., height, width, depth, or any other measurement) of the inanimate object, for example, based on image data captured by an image sensor associated with the first wearable extended reality appliance. In some examples, additional or alternative sensor(s) may be used to capture data (e.g., distance data, depth data, position data, orientation data, perspective data, or any other information) for measuring the actual size of the inanimate object. In some examples, based on data captured by the sensor(s), the at least one processor may construct a model (e.g., three-dimensional) of the inanimate object to have a size corresponding to the actual size of the inanimate object. The first data may include information of a determined size of the inanimate object, and may enable display of a virtual representation of the inanimate object in a size corresponding to an actual size of the inanimate object. The virtual representation of the inanimate object may include, for example, any visual display rendered by a computing device (e.g., a wearable extended reality appliance) and configured to represent the inanimate object. Some disclosed embodiments may include detecting changes associated with the inanimate object, and transmitting third data representing the changes to the at least one second wearable extended reality appliance. The third data may enable a virtual representation of the changes to be displayed through the at least one second wearable extended reality appliance. For example, at least one processor may, periodically or continuously, monitor the inanimate object to detect changes associated with the inanimate object. The monitoring may be based on, for example, image data captured by an image sensor and/or data captured by other sensor(s). For example, the at least one processor may compare a current captured image of the inanimate object with a previous captured image of the inanimate object, and may detect changes associated with the inanimate object if a difference between the current and previous captured images of the inanimate object satisfies (e.g., meets or exceeds) a threshold. In some examples, the changes may include markings. For example, if the inanimate object is a whiteboard, the changes may include writing on the whiteboard. The at least one processor may transmit third data representing the changes to the at least one second wearable extended reality appliance. 
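The image-comparison test described above can be sketched as simple frame differencing. This is a minimal illustration assuming grayscale captures as NumPy arrays; a deployed system would likely use more robust change detection (alignment, illumination compensation, and so forth):

```python
import numpy as np


def changes_detected(previous: np.ndarray, current: np.ndarray,
                     threshold: float = 10.0) -> bool:
    """Compare two grayscale captures of the inanimate object and report
    whether their mean absolute difference meets or exceeds a threshold."""
    if previous.shape != current.shape:
        raise ValueError("captures must share the same resolution")
    diff = np.abs(current.astype(np.int16) - previous.astype(np.int16))
    return float(diff.mean()) >= threshold


# Example: simulate writing appearing on a whiteboard image.
before = np.full((480, 640), 250, dtype=np.uint8)   # mostly white board
after = before.copy()
after[100:380, 100:540] = 30                        # dark "marker" strokes
print(changes_detected(before, after))              # True
print(changes_detected(before, before))             # False
```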
The third data may include any information for describing the changes. The third data may enable a virtual representation of the changes to be displayed through the at least one second wearable extended reality appliance. The virtual representation of the changes may include, for example, any visual display rendered by a computing device (e.g., a wearable extended reality appliance) and configured to represent the changes. In some examples, the third data may indicate the changes, which may be applied to a previous virtual representation of the inanimate object. In some examples, the third data may indicate a representation of the inanimate object with the changes, which representation may be displayed to replace a previous virtual representation of the inanimate object. Some disclosed embodiments may include receiving image data from an image sensor associated with the first wearable extended reality appliance. The image sensor may be part of or separate from the first wearable extended reality appliance. The image sensor may be configured to capture image data of the scenes in front of (e.g., in the field of view of) the image sensor. The image sensor may, periodically or continuously, capture images and transmit captured images to at least one processor. The at least one processor may receive the image data from the image sensor. With reference to FIG. 32, in step 3214, instructions contained in a non-transitory computer-readable medium when executed by a processor may cause the processor to receive image data from an image sensor associated with the first wearable extended reality appliance. Some disclosed embodiments may include detecting in the image data at least one user interaction associated with the object. The at least one user interaction may include a human hand pointing to a specific portion of the object. The at least one user interaction associated with the object may include any action associated with a user or the object (physical or virtual). The at least one user interaction may be directed to the object. In some examples, the at least one user interaction may include a human hand pointing to a specific portion of the object. The specific portion of the object may include any location, point, area, or space of the object. In some examples, the at least one user interaction may include a finger gesture, a hand gesture, an eye gesture, a mouth gesture, a face gesture, or an action of other part(s) of a person's body, directed to the object. As an example, the at least one user interaction may include any finger or hand motion associated with the object, such as a drag, a pinch, a spread, a swipe, a tap, a pointing, a scroll, a rotate, a flick, a touch, a zoom-in, a zoom-out, a thumb-up, a thumb-down, a touch-and-hold, or any other action of a finger or hand. As another example, the at least one user interaction may include a location or movement of the attention (e.g., gaze) of a user on the object. As another example, the at least one user interaction may include a sound (e.g., voice) of a user associated with the object. Additionally or alternatively, the at least one user interaction may include an action with the object via another object (physical or virtual), such as a pen, an eraser, a pointer stick, a laser pointer, a cursor, or any other item. At least one processor may detect the at least one user interaction in the image data received from the image sensor associated with the first wearable extended reality appliance.
For example, the at least one processor may perform image analysis algorithms (e.g., gesture recognition algorithms) based on the image data to detect the at least one user interaction. The at least one processor may detect the at least one user interaction, for example, based on a structure, a shape, a pose, a position, a location, an orientation, or any other feature of a human hand (or other body part(s)) or a portion thereof. In some examples, the detection of the at least one user interaction may be based on additional or alternative sensor(s) associated with (e.g., part of or separate from) the first wearable extended reality appliance, such as a depth sensor, a range imaging sensor, a haptic glove, a wired glove, a data glove, a computer mouse, a touchpad, or any other device configured to capture information for detecting the at least one user interaction. The detection of the at least one user interaction may be based on one or more of various types of gesture recognition algorithms, such as algorithms based on three-dimensional models, skeletal-based algorithms, appearance-based models, or other types of algorithms for recognizing gestures. In some examples, the at least one user interaction may include a human hand pointing to a specific portion of the object associated with the first wearable extended reality appliance. The specific portion of the object may include any location, point, area, or space of the object. The at least one processor may determine the specific portion of the object to which the human hand may be pointing. For example, if a pointing end of the human hand (e.g., a tip of a pointing finger) touches the object, the at least one processor may determine the specific portion of the object to be the location where the pointing end of the human hand touches the object. As another example, if a pointing end of the human hand (e.g., a tip of a pointing finger) does not touch the object, the at least one processor may determine the specific portion of the object to be a location where the pointing direction of the human hand, extended towards the object, reaches the object, to be a location where a pointing end of the human hand is mapped onto the object (e.g., the mapping of the pointing end may be perpendicular, approximately perpendicular, or in any other desired angle to a nearby surface of the object), or to be any other desired location determined based on the pointing human hand. Additionally or alternatively, the specific portion of the object may be determined based on, for example, eye tracking of the eye(s) of a user initiating the at least one user interaction. For example, the attention (e.g., gaze) of the user determined based on the eye tracking may be used to confirm or adjust a location of the specific portion of the object. With reference to FIG. 32, in step 3216, instructions contained in a non-transitory computer-readable medium when executed by a processor may cause the processor to detect in the image data at least one user interaction associated with the object, the at least one user interaction including a human hand pointing to a specific portion of the object. With reference to FIG. 35, first user 3414 may initiate a user interaction 3510 associated with object 3428. User interaction 3510 may include a human hand (e.g., of first user 3414) pointing to a specific portion of object 3428.
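For the non-touching case, extending the pointing direction until it reaches the object is essentially a ray cast. The sketch below assumes the relevant surface of the object can be approximated by a plane; the function name and the planar assumption are illustrative only:

```python
import numpy as np


def pointed_location(fingertip: np.ndarray, direction: np.ndarray,
                     plane_point: np.ndarray, plane_normal: np.ndarray):
    """Extend a pointing ray from the fingertip and return where it meets
    the (planar) object surface, or None if no forward intersection exists."""
    direction = direction / np.linalg.norm(direction)
    plane_normal = plane_normal / np.linalg.norm(plane_normal)
    denom = float(np.dot(direction, plane_normal))
    if abs(denom) < 1e-9:          # ray parallel to the surface
        return None
    t = float(np.dot(plane_point - fingertip, plane_normal)) / denom
    if t < 0:                      # surface is behind the hand
        return None
    return fingertip + t * direction


# Example: a hand 1 m from a whiteboard in the z = 2 plane, pointing at it.
hit = pointed_location(
    fingertip=np.array([0.2, 1.4, 1.0]),
    direction=np.array([0.0, -0.1, 1.0]),
    plane_point=np.array([0.0, 0.0, 2.0]),
    plane_normal=np.array([0.0, 0.0, -1.0]),
)
print(hit)  # approximately [0.2, 1.3, 2.0]
```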
At least one processor (e.g., associated with first wearable extended reality appliance 3418 or with a centralized system as described above) may detect user interaction 3510, for example, based on image data from an image sensor associated with first wearable extended reality appliance 3418. Some disclosed embodiments may include, based on the detection of the at least one user interaction in the image data, transmitting to the at least one second wearable extended reality appliance second data indicating an area of the specific portion of the object. The area of the specific portion of the object may include, for example, an area within, surrounding, overlapping with, coextensive with, extending beyond, or otherwise associated with the specific portion of the object. In some examples, the area of the specific portion of the object may include an area centered around a location to which a human hand may point. The area of the specific portion of the object may have any desired size, color, shape, contour, visual effect, animation, or other features. The second data may include any information for describing the area of the specific portion of the object. At least one processor may determine the area of the specific portion of the object, for example, based on the determined specific portion of the object. In one example, the second data may be transmitted from the first wearable extended reality appliance to the at least one second wearable extended reality appliance. In another example, the second data may be transmitted from a centralized system to the at least one second wearable extended reality appliance, for example in response to information received at the centralized system from the first wearable extended reality appliance. The second data may be identical to or different from the information received at the centralized system from the first wearable extended reality appliance. Some examples of such a centralized system are described above. Based on the detection of the at least one user interaction in the image data, the at least one processor may transmit to the at least one second wearable extended reality appliance the second data indicating the area of the specific portion of the object. The at least one second wearable extended reality appliance may receive the second data and may, based on the second data, cause display of an indication of the area of the specific portion of the object. The indication of the area may be displayed in connection with the virtual representation of the object displayed via the at least one second wearable extended reality appliance. A location of the indicated area relative to the virtual representation of the object may be same as or similar to the location of the specific portion relative to the object. The indication of the area on the virtual representation of the object may simulate the at least one user interaction with the object, and may show to a user associated with the at least one second wearable extended reality appliance the at least one user interaction with the object. The indication of the area on the virtual representation of the object may be in any desired form, such as highlighting the area with any desired color, displaying a virtual hand pointing to the area, displaying a particular shape (e.g., a circle, a square, a triangle, or any other desired shape) covering the area, moving a cursor to the area, or in any other manner for displaying the indication of the area.
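The second data can be pictured as a compact payload that locates the indicated area relative to the object itself, leaving the rendering style (highlight, virtual hand, shape, or cursor) to the receiving appliance. A minimal sketch with assumed field names:

```python
import json


def make_second_data(object_id: str, center_uv: tuple, radius: float,
                     style_hint: str = "circle") -> bytes:
    """Build a hypothetical payload indicating an area of the object.

    center_uv locates the pointed-to spot in the object's own 2D surface
    coordinates (0..1 on each axis), so each receiver can place the
    indicator correctly on its virtual representation at any display size.
    """
    payload = {
        "object_id": object_id,
        "center_uv": list(center_uv),  # e.g., (0.25, 0.6) on the whiteboard
        "radius": radius,              # indicator size in the same units
        "style_hint": style_hint,      # receiver may substitute its own style
    }
    return json.dumps(payload).encode("utf-8")


second_data = make_second_data("object-1", (0.25, 0.6), 0.05)
print(second_data)
```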
Some disclosed embodiments may include transmitting the second data to a plurality of second wearable extended reality appliances. In some examples, the transmissions of the second data may be configured to cause a displaying of differing virtual indicators by the plurality of second wearable extended reality appliances. The differing virtual indicators may be in various desired forms, such as a cursor, a virtual hand, highlighting, a particular shape, or any other form that serves to indicate. At least one processor may transmit the second data to the plurality of second wearable extended reality appliances. The plurality of second wearable extended reality appliances may, in response to receiving the second data, cause display of the differing virtual indicators. The differing virtual indicators may be pointing to or be directed to the area of the specific portion of the object (e.g., as virtually displayed via the plurality of second wearable extended reality appliances). With reference to FIG. 32, in step 3218, instructions contained in a non-transitory computer-readable medium when executed by a processor may cause the processor to, based on the detection of the at least one user interaction in the image data, transmit to the at least one second wearable extended reality appliance second data indicating an area of the specific portion of the object. With reference to FIG. 35, at least one processor (e.g., associated with first wearable extended reality appliance 3418 or with a centralized system as described above) may, based on detecting user interaction 3510, transmit to second wearable extended reality appliance 3420 second data indicating an area of the specific portion of object 3428 which the human hand of user interaction 3510 may point to. The transmission of the second data may be, for example, via the established communication channel (e.g., based on link 3424, network 3422, and link 3426). Second wearable extended reality appliance 3420 may receive the second data and may, based on the second data, cause display of an indication 3512 of the area of the specific portion. Indication 3512 may be displayed in connection with virtual representation 3430 of object 3428, to simulate user interaction 3510 with object 3428. Indication 3512 may be in the form of a circle covering the area of the specific portion. In some examples, indication 3512 may be in the form of a virtual hand pointing to the area of the specific portion, or in any other desired form. In one implementation, the detected at least one user interaction may include a movement of the human hand. Some disclosed embodiments may include identifying multiple parts of the object pointed to by the human hand at a particular order in time, and causing a display of visual indications of the multiple parts via the at least one second extended reality appliance at the particular order in time. The movement of the human hand may include any motion or action of the human hand. For example, the movement of the human hand may include moving the human hand in a particular gesture (e.g., a pointing finger) along a surface of the object, changing the human hand from being in one gesture (e.g., a pointing finger) to being in another gesture (e.g., a thumb-up), or any other motion or action of the human hand. As one example, the human hand may be in the gesture of a pointing finger, and may move along a surface of the object to point to multiple parts of the object.
At least one processor may identify multiple parts of the object pointed to by the human hand at a particular order in time (e.g., sequentially), for example, based on image data captured by an image sensor and/or data captured by other sensor(s). The identified multiple parts of the object may have any desired interval between each other in terms of space or time. For example, the identified multiple parts may sequentially be spaced apart by any desired distance (e.g., 0.1 cm, 0.2 cm, 0.3 cm, 0.4 cm, 0.5 cm, 0.8 cm, 1 cm, 2 cm, 3 cm, 5 cm, 10 cm, or any other desired distance). As another example, the identified multiple parts may be captured at times that may sequentially be separated by any desired time period (e.g., 0.1 seconds, 0.2 seconds, 0.3 seconds, 0.4 seconds, 0.5 seconds, 0.8 seconds, 1 second, 2 seconds, 3 seconds, or any other desired time interval). The at least one processor may cause a display of visual indications of the multiple parts via the at least one second extended reality appliance at the particular order in time. For example, the at least one processor may transmit, to the at least one second extended reality appliance, a data stream indicating the identified multiple parts of the object. For example, as the at least one processor identifies a particular part of the multiple parts of the object, the at least one processor may transmit, to the at least one second extended reality appliance, a data segment, of the data stream, indicating the identified particular part of the object. As each one of the multiple parts of the object may be sequentially identified and indicated in a data segment of the data stream transmitted to the at least one second extended reality appliance, the at least one second extended reality appliance may cause display of visual indications of the multiple parts at the particular order in time (e.g., in connection with the displayed virtual representation of the object). The visual indications (e.g., relative to the displayed virtual representation of the object) may simulate the movement of the human hand (e.g., relative to the object). In some examples, the simulation of the movement of the human hand may be updated in real-time based on the movement of the human hand. Some disclosed embodiments may include initially determining that the human hand pointing to the specific portion of the object belongs to a user of the first wearable extended reality appliance and causing a first virtual indicator to identify the specific portion of the object; and subsequently determining that an additional human hand pointing to the specific portion of the object belongs to an individual other than the user of the first wearable extended reality appliance, and causing a second virtual indicator, different from the first virtual indicator, to identify the specific portion of the object. Hand source recognition may occur in any one of a number of ways, or a combination of ways. For example, based on a relative position of the hand, the system may determine whether the hand is that of a user of the first wearable extended reality appliance. This may occur because hands of wearers may have an orientation in the field of view that may be a telltale sign that the hand is that of the wearer. Such telltale signs might include detected finger orientation relative to an image sensor, or detection of arms extending toward a general direction of the image sensor.
Similarly, hands of individuals other than the wearer may have differing finger orientations or may be detected as being associated with arms extending in a direction inconsistent with a direction associated with a wearer. Such signs may be determined by performing image analysis on the current hand image and comparing it with stored images or image data associated with orientations of hands of wearers. Similarly, hands of persons other than the wearer may have differing orientations and in a similar manner to determining that a hand is one of a wearer of the first wearable extended reality appliance, the system may determine that a detected hand is of a person other than the wearer. By way of another example, hands of differing individuals may differ. The system may recognize hands as being those of an extended reality device wearer (or even a particular individual), by examining unique characteristics of skin or structure. Over time as a wearer uses the system, features of the wearer's hands may be stored in a data structure and image analysis may be performed to confirm that a current hand in an image is that of the wearer. In a similar way, hands of individuals other than the wearer may be recognized, enabling the system to distinguish between a plurality of individuals based on characteristics of their hands. This feature may enable unique user interaction simulation. For example, if multiple individuals interact with the same object, virtual indicators simulating the user interactions may vary based on the person interacting with the object. At least one processor may determine whether a human hand interacting with the object belongs to the user of the first wearable extended reality appliance or another individual, for example, based on image data captured by an image sensor associated with the first wearable extended reality appliance. The at least one processor may analyze the image data for hand identification, for example, by determining hand features based on the image data and comparing the determined hand features with stored features of a hand of the user of the first wearable extended reality appliance. In some examples, the at least one processor may perform hand identification based on particular objects associated with a hand (e.g., a particular ring for identifying a hand of the user). In some examples, the hand identification may be based on data captured by other sensor(s). The at least one processor may initially determine that the human hand pointing to the specific portion of the object belongs to a user of the first wearable extended reality appliance and cause a first virtual indicator to identify the specific portion of the object; and subsequently determine that an additional human hand pointing to the specific portion of the object belongs to an individual other than the user of the first wearable extended reality appliance and cause a second virtual indicator, different from the first virtual indicator, to identify the specific portion of the object. The first virtual indicator and/or the second virtual indicator may be displayed, for example, by the at least one second wearable extended reality appliance in connection with the displayed virtual representation of the object. The first virtual indicator and the second virtual indicator may be different in terms of one or more of various aspects, such as colors, shapes, textures, visual effects, animations, or other features. 
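The wearer-versus-other determination can then drive the choice between the first and second virtual indicators. The classifier below is a deliberately crude stub keyed to the arm-direction cue mentioned above; real hand-source recognition would also compare hand features against stored wearer data:

```python
INDICATOR_STYLES = {
    "wearer": {"shape": "circle", "color": "blue"},    # first virtual indicator
    "other":  {"shape": "arrow",  "color": "orange"},  # second virtual indicator
}


def classify_hand_source(arm_direction_deg: float) -> str:
    """Stub classifier: arms extending toward the image sensor (small angle)
    are treated as the wearer's; anything else as another individual's.
    A real system would also match hand features against stored wearer data."""
    return "wearer" if abs(arm_direction_deg) < 30.0 else "other"


def indicator_for(arm_direction_deg: float) -> dict:
    """Pick a differing virtual indicator based on who is pointing."""
    return INDICATOR_STYLES[classify_hand_source(arm_direction_deg)]


print(indicator_for(10.0))   # wearer's hand -> first indicator style
print(indicator_for(120.0))  # someone else's hand -> second indicator style
```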
In some examples, the object associated with the first wearable extended reality appliance may be a three-dimensional virtual object virtually displayed by the first wearable extended reality appliance and the first data may be configured to cause a display, via the at least one second wearable extended reality appliance, of the three-dimensional virtual object. Some disclosed embodiments may include detecting in the image data (e.g., received from the image sensor associated with the first wearable extended reality appliance) an additional user interaction changing an orientation of the three-dimensional virtual object for viewing from a particular perspective, and causing the at least one second wearable extended reality appliance to display the three-dimensional object from the particular perspective. The detecting of the additional user interaction may be performed in a similar manner as the detecting of the at least one user interaction as described above. The additional user interaction may include any action that may change the orientation of the three-dimensional virtual object, such as a rotate gesture, a drag gesture, a tap gesture (e.g., to activate a function to change an orientation), a spread gesture, a click (e.g., via a computer mouse or a touchpad), or any other action for changing an orientation. The particular perspective may include any desired angle from which the three-dimensional virtual object may be viewed. The orientation of the three-dimensional virtual object may be measured using any desired method to represent orientations, such as Euler angles, a yaw-pitch-roll representation, or any other way of measuring orientation. At least one processor may measure degree(s) of change of the orientation of the three-dimensional virtual object. Based on the detecting of the additional user interaction, at least one processor may transmit, to the at least one second wearable extended reality appliance, data indicating the change of the orientation of the three-dimensional virtual object. The data may include, for example, information of the measured degree(s) of change of the orientation of the three-dimensional virtual object. The at least one second wearable extended reality appliance may receive the data and may, based on the measured degree(s) of orientation change (e.g., included in the data), adjust the orientation of the three-dimensional virtual object displayed via the at least one second wearable extended reality appliance, so that the three-dimensional virtual object may be displayed via the at least one second wearable extended reality appliance from the particular perspective. Some disclosed embodiments may include receiving from the at least one second wearable extended reality appliance third data in response to a detection of a second user interaction with the virtual representation of the object. The second user interaction may include a second human hand pointing to a particular portion of the virtual representation of the object. For example, at least one processor (e.g., associated with the at least one second wearable extended reality appliance) may detect the second user interaction with the virtual representation of the object (e.g., in a similar manner as the detection of the at least one user interaction associated with the object as described above) and may in response transmit the third data to the first wearable extended reality appliance or to the centralized system as described above.
The third data may indicate, for example, the detected second user interaction and/or an area of the particular portion, of the virtual representation of the object, which the second human hand may be pointing to. At least one processor (e.g., associated with first wearable extended reality appliance or with a centralized system as described above) may receive the third data from the at least one second wearable extended reality appliance. Some disclosed embodiments may include causing the first wearable extended reality appliance to display a visual indicator of a particular area of the object corresponding to the particular portion of the virtual representation of the object. For example, based on the received third data, at least one processor (e.g., associated with first wearable extended reality appliance or with a centralized system as described above) may cause display of a visual indicator of a particular area of the object corresponding to the particular portion of the virtual representation of the object. The particular area of the object may be in a location, relative to the object, that may be same as or similar to a location, of the particular portion of the virtual representation of the object, relative to the virtual representation of the object. The visual indicator may be in any desired form, such as highlighting the particular area of the object with any desired color, displaying a virtual hand pointing to the particular area of the object, displaying a particular shape (e.g., a circle, a square, a triangle, or any other desired shape) covering the particular area of the object, moving a cursor to the particular area of the object, or in any other desired manner. In some examples, the visual indicator of the particular area of the object may include a virtual hand pointing to the particular area of the object. The virtual hand may include, for example, any visual display rendered by a computing device (e.g., a wearable extended reality appliance) and configured to represent a hand. In some examples, at least one visual characteristic of the virtual hand may be selected based on the third data. The at least one visual characteristic may include a size of at least part of the virtual hand (such as a digit), a color of at least part of the virtual hand, a ratio between sizes of two digits of the virtual hand, a pose of the virtual hand, or any other feature of the virtual hand. In some examples, the third data may be indicative of a pose of the second human hand. The at least one visual characteristic may be based on the pose of the second human hand. In some examples, the at least one visual characteristic of the virtual hand may be based on one or more features of the second human hand in the second user interaction with the virtual representation of the object. In some examples, the virtual hand may visually resemble an appearance of the second human hand in the second user interaction with the virtual representation of the object. Some disclosed embodiments may include causing the first wearable extended reality appliance to display a virtual shadow corresponding to the virtual hand pointing to the particular area of the object. A virtual shadow may take the form of an outline, a shape, or a contour, and may simulate an appearance of an actual shadow. In some examples, the virtual shadow may include an area or shape that may be shaded or dark.
The virtual shadow may be placed in a location, for example, based on the lighting condition in the environment surrounding the first wearable extended reality appliance to simulate how a shadow might actually appear if an actual hand were pointing. For example, the virtual shadow may be placed in a location where a natural shadow may appear based on the lighting condition if the virtual hand were a physical hand. In some examples, the virtual shadow may have a shape, a color, or any other characteristic that may be same as or similar to such a natural shadow. In some examples, the visual indicator of the particular area of the object may include a change to a visual appearance of the particular area of the object. For example, the change to the visual appearance of the particular area of the object may include a change to a color scheme, a change to a brightness, a change to a texture, a change to a shape, or a change to any other feature, of the particular area of the object. With reference to FIG. 36, second user 3416 may initiate a second user interaction 3610 with virtual representation 3430 of object 3428. Second user interaction 3610 may include a second human hand (e.g., a hand of second user 3416) pointing to a particular portion of virtual representation 3430 of object 3428. Second wearable extended reality appliance 3420 may detect second user interaction 3610 (e.g., based on image data captured by an image sensor and/or data captured by other sensor(s)) and may transmit third data to first wearable extended reality appliance 3418. The transmission of the third data may be, for example, via the established communication channel (e.g., based on link 3424, network 3422, and link 3426). In another example, a centralized system may transmit the third data to first wearable extended reality appliance 3418, for example based on information received at the centralized system from second wearable extended reality appliance 3420, for example as described above. The third data may indicate second user interaction 3610 and/or an area of the particular portion, of virtual representation 3430, which the second human hand may be pointing to. First wearable extended reality appliance 3418 may receive the third data and may in response cause display of a visual indicator 3612 of a particular area of object 3428 corresponding to the particular portion of virtual representation 3430. Visual indicator 3612 may include a virtual hand, and may be displayed to first user 3414 via first wearable extended reality appliance 3418. Some disclosed embodiments may relate to simulating user interactions with shared content, including methods, systems, apparatuses, and non-transitory computer-readable media. Some disclosed embodiments may include establishing a communication channel for sharing content and user interactions between a first wearable extended reality appliance and a second wearable extended reality appliance. For example, at least one processor may cause establishing of a communication channel for sharing content and user interactions between a first wearable extended reality appliance and a second wearable extended reality appliance, as described herein. FIG. 33 is a flowchart illustrating an exemplary process 3300 for simulating user interactions with shared content consistent with some embodiments of the present disclosure.
With reference to FIG. 33, in step 3310, instructions contained in a non-transitory computer-readable medium when executed by a processor may cause the processor to establish a communication channel for sharing content and user interactions between a first wearable extended reality appliance and a second wearable extended reality appliance. Some disclosed embodiments may include receiving, from the first wearable extended reality appliance, first data representing an object associated with the first wearable extended reality appliance. The first data may enable a virtual representation of the object to be displayed through the second wearable extended reality appliance. For example, at least one processor (e.g., associated with the second wearable extended reality appliance or with the centralized system as described above) may receive, from the first wearable extended reality appliance, first data representing an object associated with the first wearable extended reality appliance, as described herein. With reference to FIG. 33, in step 3312, instructions contained in a non-transitory computer-readable medium when executed by a processor may cause the processor to receive, from the first wearable extended reality appliance, first data representing an object associated with the first wearable extended reality appliance, the first data enabling a virtual representation of the object to be displayed through the second wearable extended reality appliance. Some disclosed embodiments may include outputting for presentation via the second wearable extended reality appliance first display signals reflective of the virtual representation of the object. For example, at least one processor (e.g., associated with the second wearable extended reality appliance or with the centralized system as described above) may cause output, for presentation via the second wearable extended reality appliance, of first display signals reflective of the virtual representation of the object, as described herein. The first display signals may include any visible radiation that may be output by a display system associated with a wearable extended reality appliance, such as an optical head-mounted display, a monocular head-mounted display, a binocular head-mounted display, a see-through head-mounted display, a helmet-mounted display, or any other type of device configured to show images to a user. In some examples, the display system may reflect the first display signals (e.g., projected images) and allow a user to see through the display system. In some examples, the first display signals may be based on waveguide techniques, diffraction optics, holographic optics, polarized optics, reflective optics, or other types of techniques for combining images projected by a computing device and optical signals emanated from physical objects. At least one processor (e.g., associated with the second wearable extended reality appliance or with the centralized system as described above) may output for presentation via the second wearable extended reality appliance the first display signals reflective of the virtual representation of the object. The output of the first display signals may cause display of the virtual representation of the object to a user.
With reference to FIG. 33, in step 3314, instructions contained in a non-transitory computer-readable medium when executed by a processor may cause the processor to output for presentation via the second wearable extended reality appliance first display signals reflective of the virtual representation of the object. Some disclosed embodiments may include receiving, from the first wearable extended reality appliance, second data representing at least one user interaction associated with the object. The at least one user interaction may include a human hand pointing to a specific portion of the object. For example, at least one processor (e.g., associated with the second wearable extended reality appliance or with the centralized system as described above) may receive, from the first wearable extended reality appliance, second data representing at least one user interaction associated with the object, as described herein. With reference to FIG. 33, in step 3316, instructions contained in a non-transitory computer-readable medium when executed by a processor may cause the processor to receive, from the first wearable extended reality appliance, second data representing at least one user interaction associated with the object, the at least one user interaction including a human hand pointing to a specific portion of the object. Some disclosed embodiments may include outputting for presentation via the second wearable extended reality appliance second display signals visually indicating an area of the specific portion. For example, at least one processor (e.g., associated with the second wearable extended reality appliance or with the centralized system as described above) may cause output, for presentation via the second wearable extended reality appliance, of second display signals visually indicating an area of the specific portion, as described herein. The visual indicator of the area of the specific portion may be in any desired form, as described herein. The visual indicator may be displayed in connection with the displayed virtual representation of the object. With reference to FIG. 33, in step 3318, instructions contained in a non-transitory computer-readable medium when executed by a processor may cause the processor to output for presentation via the second wearable extended reality appliance second display signals visually indicating an area of the specific portion. Some disclosed embodiments may include, following receipt of the second data, continuously receiving information from the first wearable extended reality appliance and continuously updating the second data to thereby cause a visual representation of the area of the specific portion to cease when the at least one user interaction including the human hand pointing to the specific portion of the object is no longer detected. For example, the first wearable extended reality appliance may periodically or continuously monitor the at least one user interaction and may transmit, to the second wearable extended reality appliance, a data stream of updated state(s) of the at least one user interaction as monitored. The second wearable extended reality appliance may receive the data stream and may, based on the data stream, update the visual indicator simulating the at least one user interaction (e.g., in terms of a location, an orientation, a shape, a color, or any other feature).
When the at least one user interaction is no longer detected, the first wearable extended reality appliance may transmit, to the second wearable extended reality appliance, an indication that the at least one user interaction is no longer detected. Based on receiving the indication, the second wearable extended reality appliance may cause a visual representation of the area of the specific portion to cease (e.g., the visual representation of the area of the specific portion may be removed from the displayed virtual representation of the object). In some examples, the second data may be reflective of multiple human hands pointing to multiple parts of the object during a time period, and the second display signals may be configured to visually indicate a plurality of areas corresponding to the multiple parts of the object. The multiple human hands may belong to one person or multiple persons. In some examples, the multiple human hands may include two hands of a user of the first wearable extended reality appliance. In some examples, the multiple human hands may include hand(s) of the user of the first wearable extended reality appliance, and/or hand(s) of person(s) other than the user of the first wearable extended reality appliance. The time period may include any length of time, such as 0.5 seconds, 1 second, 2 seconds, 3 seconds, 5 seconds, 10 seconds, 30 seconds, 60 seconds, 120 seconds, or any other time interval. The visual indicators of the plurality of areas may be displayed in connection with the displayed virtual representation of the object, and may be configured to simulate the interactions of the multiple human hands with the multiple parts of the object. In some embodiments, the second display signals may be configured to cause differentiated indicators of the plurality of areas to be displayed via the second wearable extended reality appliance. The differentiated indicators may differ in terms of one or more of various aspects, such as colors, shapes, textures, visual effects, animations, or other features. In some embodiments, the first data may include information on an environment of the object from a perspective of the first wearable extended reality appliance. Some disclosed embodiments may include configuring the first display signals to cause, via the second wearable extended reality appliance, a presentation of the object located in proximity to the first wearable extended reality appliance corresponding to the perspective of the first wearable extended reality appliance. For example, the first wearable extended reality appliance may measure a position and/or orientation of the object relative to the first wearable extended reality appliance. The measuring may be based on data captured by one or more of an image sensor, a distance sensor, a depth sensor, a range imaging sensor, or any other sensor configured to capture information for measuring a position and/or orientation of the object. The measured position and/or orientation may be included in the first data, and may be transmitted to the second wearable extended reality appliance. The second wearable extended reality appliance may, based on the measured position and/or orientation of the object relative to the first wearable extended reality appliance, display the virtual representation of the object to be from a perspective same as or similar to the perspective of the first wearable extended reality appliance.
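The measured position and orientation of the object relative to the first appliance can travel as a small pose record that the second appliance applies when placing the virtual representation, so that both users share a perspective. A sketch under assumed conventions (offsets in meters; yaw, pitch, and roll in degrees):

```python
from dataclasses import dataclass


@dataclass
class RelativePose:
    """Object pose relative to the first wearable extended reality appliance."""
    position: tuple      # (x, y, z) offset in meters
    yaw: float           # rotation about the vertical axis, degrees
    pitch: float
    roll: float


def apply_pose(viewer_position: tuple, pose: RelativePose) -> dict:
    """Second-appliance side: place the virtual representation so that it
    keeps the same offset and orientation the object had for the first user."""
    placed = tuple(v + p for v, p in zip(viewer_position, pose.position))
    return {"position": placed, "orientation": (pose.yaw, pose.pitch, pose.roll)}


pose = RelativePose(position=(0.0, 0.0, 2.0), yaw=15.0, pitch=0.0, roll=0.0)
print(apply_pose((1.0, 1.6, 0.0), pose))
```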
In some examples, the environment of the object (e.g., physical or virtual space surrounding the object) may be captured, and the information may be transmitted to the second wearable extended reality appliance. Based on receiving the information, the second wearable extended reality appliance may cause display of the virtual representation of the object to have a relationship, with the surrounding environment of the virtual representation, that may be the same as or similar to the relationship of the object with the surrounding environment of the object (e.g., distances of the object to a nearby wall, nearby furniture, the floor, or other items in the environment). Some disclosed embodiments may include receiving image data from an image sensor associated with the second wearable extended reality appliance, detecting in the image data a structure that prevents displaying the virtual representation of the object in the manner corresponding to the perspective of the first wearable extended reality appliance; and, upon detecting the structure, determining an alternative display of the virtual representation of the object, the alternative display presenting the virtual representation of the object from a perspective differing from the perspective of the first wearable extended reality appliance. The structure may include any item (physical or virtual) that may be in such a position and/or orientation that it may occupy (physically or virtually) the space (or a portion thereof) where the displayed virtual representation of the object may be located (e.g., when displayed from a perspective the same as or similar to the perspective of the first wearable extended reality appliance). At least one processor may identify the structure, for example, based on analysis of the image data received from the image sensor. In some examples, the identification of the structure may be based on data captured by one or more of a distance sensor, a depth sensor, a range imaging sensor, or any other sensor. The at least one processor may determine the alternative display of the virtual representation of the object, for example, by identifying a location, in the environment of the second wearable extended reality appliance, where there may not be such an obstructing structure, or by identifying an orientation, of the virtual representation of the object, that may allow the virtual representation of the object to fit at its current location. The alternative display may be implemented, for example, using the identified location or orientation. The alternative display may present the virtual representation of the object from a perspective differing from the perspective of the first wearable extended reality appliance. Some disclosed embodiments may include adjusting a display of a virtual indicator to point to the specific portion of the object from the perspective associated with the alternative display. For example, at least one processor may adjust the location and/or orientation of the virtual indicator, in a similar manner as the virtual representation of the object. The adjusting of the virtual indicator may be such that the location and/or orientation of the virtual indicator relative to the virtual representation of the object may simulate the location and/or orientation of the at least one user interaction relative to the object. In some embodiments, methods, systems, and non-transitory computer readable media for simulating visual pointers over shared content may be provided. 
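By way of a non-limiting illustration, the following Python sketch shows one possible obstruction check and alternative-placement fallback consistent with the embodiments described above. The axis-aligned bounding-box representation and all names are illustrative assumptions only, not a required implementation.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Box:
    """Axis-aligned bounding box in the second appliance's coordinate frame."""
    x: float; y: float; z: float; w: float; h: float; d: float

    def intersects(self, other: "Box") -> bool:
        # Boxes overlap when their center distances are smaller than their
        # combined half-extents along every axis.
        return (abs(self.x - other.x) * 2 < self.w + other.w and
                abs(self.y - other.y) * 2 < self.h + other.h and
                abs(self.z - other.z) * 2 < self.d + other.d)

def place_virtual_object(preferred: Box, structures: List[Box],
                         candidates: List[Box]) -> Optional[Box]:
    """Return the preferred pose if unobstructed, otherwise the first
    candidate pose that no detected structure occupies."""
    if not any(preferred.intersects(s) for s in structures):
        return preferred                     # perspective of the first appliance is usable
    for alt in candidates:                   # alternative display, differing perspective
        if not any(alt.intersects(s) for s in structures):
            return alt
    return None                              # no acceptable placement found

# A wall-like structure blocks the preferred pose; the fallback pose is used instead.
wall = Box(0, 0, 1, 4, 3, 0.2)
preferred_pose = Box(0, 0, 1, 1, 1, 1)
fallback_pose = Box(3, 0, 1, 1, 1, 1)
print(place_virtual_object(preferred_pose, [wall], [fallback_pose]))
```

Under this sketch, any virtual indicator would be re-posed by the same transformation applied to the chosen fallback box, keeping the indicator's position relative to the virtual representation unchanged.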
In some examples, an indication that a first user and a second user are watching two replicas of the same computer window may be received. Further, an indication that the first user physically points to a first element of the computer window may be received. Further, a visual pointer pointing at the first element may be caused to be displayed to the second user. In one example, the first user may watch the replica of the computer window on a physical display and the second user may watch the replica of the computer window on a virtual display. In another example, the first user may watch the replica of the computer window on a virtual display and the second user may watch the replica of the computer window on a physical display. In yet another example, both users may watch the replica of the computer window on physical displays. In an additional example, both users may watch the replica of the computer window on virtual displays. In one example, such a virtual display may be part of an augmented reality environment. In another example, such a virtual display may be part of a virtual reality environment. In some examples, the indication that the first user physically points to the first element of the computer window may be based on an analysis of an image of the first user (such as an image captured by a wearable system used by the first user, or an image captured by a fixed camera positioned in the room). In some examples, the indication that the first user physically points to the first element of the computer window may be an indication that the first user is using a gesture to point to the first element of the computer window. For example, the gesture may include using a finger to point at the first element. In some examples, the visual pointer pointing at the first element may be a two-dimensional overlay over the content of the computer window. In some examples, the visual pointer pointing at the first element may be a two-dimensional visual pointer displayed outside the computer window, for example, using an augmented reality system used by the second user, or using a virtual reality system used by the second user. In some examples, the visual pointer pointing at the first element may be a three-dimensional visual pointer displayed outside the computer window, for example, using an augmented reality system used by the second user, or using a virtual reality system used by the second user. For example, the three-dimensional visual pointer may be a representation of a hand or may be a representation of an arrow. Allowing anyone to place virtual content anywhere in a shared extended reality environment may cause virtual content clutter and abuses such as placement of promotional content or undesirable content in inappropriate locations. Aspects of this disclosure that follow describe ways for applying location-based restrictions to manage content placement in extended reality environments. Some disclosed embodiments may involve systems, methods, and non-transitory computer readable media configured for managing content placement in extended reality environments. The term “managing content placement” may refer to using one or more rules that govern where users may place content in an extended reality environment. These rules may include permissions as to where to place virtual content and may be based on a variety of different parameters, such as time, location, size, or virtual content type. 
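By way of a non-limiting illustration, the following Python sketch shows one possible way of translating a pointing indication on the first user's replica of the computer window into a visual pointer on the second user's replica. The normalized-coordinate mapping and all names are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class WindowReplica:
    """A replica of the shared computer window as shown to one user,
    described by its pixel dimensions on that user's display."""
    width: int
    height: int

@dataclass
class PointingIndication:
    """Indication that the first user physically points at an element,
    expressed in pixels of the first user's replica."""
    x_px: int
    y_px: int
    element_id: str

def pointer_for_second_user(indication: PointingIndication,
                            first: WindowReplica,
                            second: WindowReplica) -> Tuple[int, int, str]:
    """Translate the pointed location into the second replica's pixel space,
    so a visual pointer (e.g., an arrow overlay) can be drawn there."""
    u = indication.x_px / first.width       # normalize against the first replica
    v = indication.y_px / first.height
    return (round(u * second.width), round(v * second.height), indication.element_id)

# The first user points at an element on a 1920x1080 physical display; the pointer
# is drawn at the matching spot on the second user's 1280x720 virtual display.
print(pointer_for_second_user(PointingIndication(960, 540, "chart"),
                              WindowReplica(1920, 1080),
                              WindowReplica(1280, 720)))
```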
In some embodiments, virtual content may include any two-dimensional or three-dimensional representation of data. The virtual content may be inanimate or animate, and/or may be public or private. Public content may be visible to all wearable extended reality appliance users registered to a certain shared extended reality environment, whereas private content may only be visible to a limited number of users whose appliances are registered to that certain shared extended reality environment. In some embodiments, the managing of the content placement may involve applying rules that may vary between different types of users. For example, some rules may limit where an individual user may present virtual content. Other rules may enable expanded freedom to a user associated with a corporate entity to present virtual content in certain places, including office buildings, client offices, or public locations associated with or advertising the entity's services. The rules for managing content placement may also vary between public and private locations, i.e., there may be different rules for presenting virtual content in private places, such as a user's home or office, compared to public places, such as a public park, a city street, a town square, a retail establishment, or any other public or semi-public space. Some disclosed embodiments may involve receiving a request from an entity to place a virtual content at a specific geographic location in at least one shared extended reality environment that includes a plurality of virtual objects. The term “entity” may refer to an individual, artificial intelligence (AI) software, an organization, a professional office, a service provider, a company, a corporation, a partnership, or other individual, group or mechanism capable of making a request. In one example, the entity associated with the request may be a single wearable extended reality appliance user. However, the entity may not be limited to a user of a wearable extended reality appliance; it may include virtual entities such as AI software, or human users of other types of computerized devices. In one embodiment, the request from the entity may be a signal sent from the at least one processor associated with a non-transitory computer readable medium as part of the wearable extended reality appliance. In another embodiment, the request signal may be sent from the at least one processor associated with the entity. In some examples, receiving the request may include at least one of reading the request from a memory unit, receiving the request from an external device, receiving the request from a user (for example, via a user interface), or determining the request by analyzing data (such as data captured using one or more sensors, data read from memory, and/or data received from an external device). The specific geographic location identified in the request may be selected by the entity and may refer to a physical point or region on Earth identifiable utilizing any descriptor, metric or characteristic. For example, the geographic location may be represented as a combination of longitude/latitude, a street address, a room in an office, a conference hall, an area defined by a code, or any other identifiable place. The specific geographic location may include any surface on which virtual content may be presented. In some examples, the specific geographic location may include a screen, a billboard, a window, a wall, and/or other location that is conducive for displaying virtual content. 
The specific geographic location may be located in a myriad of different places. For example, a specific geographic area may host content in a town square, a wall or window in an office building, a public transit area, or any other public or private place, subject to restrictions. The specific geographic location may be able to accommodate more than one virtual object at a time, depending on the virtual content display size. As described elsewhere in this specification, the specific geographic location may also vary in size depending on the type of the virtual content and where the virtual content is presented, such as in an office building conference room, a town square or park, or an auditorium. For example, a specific geographic location in an office building conference room may only be able to accommodate one or two virtual objects at one time, but a specific geographic location in a public park may be able to accommodate more than three or four objects at the same time. As mentioned, the specific geographic location may be either public or private. If public, most if not all users may be able to view and access the presented content. For example, a specific geographic location may be a public park, a town square, a public transit station, a billboard on a highway, a supermarket or grocery store, and/or other locations that are open to the public. A specific geographic location may also be in a private location, i.e., a location that is not open to the public. For example, such a specific geographic location may be a conference room in an office building, a shared working space, and/or a school or university. Consistent with disclosed embodiments, the plurality of virtual objects may be viewable by a plurality of wearable extended reality appliances registered to the at least one shared extended reality environment. The term “shared extended reality environment” may refer to a common computer-generated perceptual surrounding. The shared extended reality environment may be a completely simulated virtual environment or a combined real-and-virtual environment wherein multiple users may perceive common content from different perspectives (or in some embodiments, from a common perspective). In one example, multiple shared extended reality environments may coexist in the same physical space, and a user of a wearable extended reality appliance may select which extended reality environments to view and interact with at the physical space by registering to the desired extended reality environments. In one example, multiple wearable extended reality appliance users registered to the extended reality environment may view the same content available in the extended reality environment. In some embodiments, the wearable extended reality appliance users may all be in a similar location, and thus their experiences may be shared. A wearable extended reality appliance user may register the appliance to multiple shared extended reality environments. Registering to a shared extended reality environment may refer to initially interacting with one or more shared extended reality environments, i.e., public or private shared extended reality environments. A user may register his or her appliance to one or more extended reality environments and may switch between the environments by adjusting settings on the wearable extended reality appliance. For example, a user may register to a public shared extended reality environment and may also register to multiple private shared extended reality environments. 
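By way of a non-limiting illustration, the following Python sketch models registering an appliance to multiple shared extended reality environments and filtering which virtual objects are displayed accordingly. The registry contents and class names are hypothetical.

```python
from typing import Dict, List, Set

# Hypothetical registry: each shared extended reality environment maps to the
# virtual objects currently placed in it.
ENVIRONMENTS: Dict[str, List[str]] = {
    "public": ["park_advertisement", "transit_schedule"],
    "office": ["quarterly_chart", "shared_whiteboard"],
    "family": ["photo_album"],
}

class WearableAppliance:
    """Minimal model of an appliance that can register to several environments
    and switch between them via its settings."""
    def __init__(self) -> None:
        self.registered: Set[str] = set()

    def register(self, environment: str) -> None:
        if environment in ENVIRONMENTS:
            self.registered.add(environment)

    def unregister(self, environment: str) -> None:
        self.registered.discard(environment)   # e.g., turning off the public environment

    def visible_objects(self) -> List[str]:
        # Only content of environments this appliance is registered to is displayed.
        return [obj for env in sorted(self.registered) for obj in ENVIRONMENTS[env]]

appliance = WearableAppliance()
appliance.register("public")
appliance.register("office")
print(appliance.visible_objects())   # public and office content
appliance.unregister("public")
print(appliance.visible_objects())   # office content only
```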
In one example, a single location, e.g., a room, may be associated with content of multiple shared extended reality environments. Accordingly, a first wearable extended reality appliance may display a first set of virtual objects based on the shared extended reality environments to which the first wearable extended reality appliance is currently registered, and a second wearable extended reality appliance may display a second set of virtual objects (that may be different from the first set of virtual objects) based on the shared extended reality environments to which the second wearable extended reality appliance is currently registered. Registration may occur automatically when an extended reality appliance is in proximity to a location associated with an extended reality environment. Regardless of how registered, in some embodiments, returning to a location associated with an extended reality environment may cause an extended reality appliance to recognize the associated extended reality environment. In one embodiment, a public shared extended reality environment may be an extended reality environment that is free and open to most if not all users. The public shared extended reality environment may be akin to a public communication network, where entities may place virtual content, such as personal posts or advertisements. However, a user may selectively turn off the public shared extended reality environment if the user wishes to access one or more private shared extended reality environments, such as an office environment, family home, and/or school or university environment (i.e., an education space-based shared extended reality environment). An extended reality appliance user may register and connect to one shared extended reality environment or may register to a plurality of shared extended reality environments. Specifically, in the office environment example, only colleagues in the workplace may be able to access the shared extended reality environment. In a family home example, only family members may be able to access the shared extended reality environment. In the education space example, only students, instructors, and associated staff may access the shared extended reality environment. Such restrictions may be implemented through permissions, such as permissions granted by an administrator of a particular shared extended reality environment. In the previously described private shared extended reality space examples, the specified geographical location where virtual content may be presented may be a wall or window in an office conference room, a wall, TV screen, or window in a private home, a whiteboard, chalkboard, blackboard, or projector screen in a classroom, and/or other locations which are visible to users whose wearable extended reality appliances are registered to the private shared extended reality environment. By way of example, FIG. 37A depicts a specific geographical location where virtual content may be displayed. FIG. 37A shows a location 3710, here, a conference room in an office building, where virtual content may be placed to be viewed by other wearable extended reality appliance users 3712. In FIG. 37A, one or more of the users may not have registered to the public extended reality environment, where virtual content may be presented by outside entities. In FIG. 37B, a user of wearable extended reality appliance 3714 in location 3710 may be registered to a public extended reality environment. 
Thus, virtual content, such as advertisements 3716, may be visible to the user of wearable extended reality appliance 3714 and other users 3712 in the conference room. Some disclosed embodiments may further include using input from a wearable extended reality appliance used by the entity to determine the specific geographic location. In one example, the input from the wearable extended reality appliance may include positioning data captured using a positioning sensor included in the wearable extended reality appliance (such as a global positioning sensor, an outdoor positioning sensor, an indoor positioning sensor, or any other positioning sensors as described above), and the specific geographic location may be immediately determined from the positioning data. In one example, image data captured using an image sensor included in the wearable extended reality appliance may be received. Further, the image data may be analyzed using at least one of an ego-positioning algorithm, an ego-motion algorithm or a visual odometry algorithm to determine the specific geographic location. In some examples, a user of the wearable extended reality appliance may provide information, for example through voice commands, gestures or physical input devices, and the specific geographic location may be determined based on the information provided by the user. For example, the information provided by a user may include textual information (for example, entered using a keyboard and/or determined by analyzing audio with speech recognition algorithms), and the textual information may be analyzed using a natural language processing algorithm to determine the specific geographic location. In another example, the information provided by the user may include an indication of a location on a map (for example, through a hand gesture or use of a virtual cursor), and the specific geographic location may be immediately determined from the location on the map. Some disclosed embodiments may involve obtaining information associated with the request. The obtained information may include any data or information relevant to the request for placing a virtual content at a specific geographic location. For example, the obtained information may include information related to at least one of the entity, the virtual content, the specific geographic location, the at least one shared extended reality environment, one or more virtual objects of the plurality of virtual objects, or one or more appliances of the plurality of wearable extended reality appliances registered to the at least one shared extended reality environment. For example, the obtained information may include details on the virtual content, image data (e.g., an image of the specific geographic location), license information, ownership information, identity information, whether the specific geographic location is public or private, affiliation information between the specific geographic location and the entity, content type information, the entity's location information, time of day information, virtual space constraint information, and/or more details associated with the specific geographic location, the content to be displayed, and the entity making the request. 
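By way of a non-limiting illustration, the following Python sketch shows one possible prioritization among the location-determination inputs described above (positioning data, a map indication, or textual information). The gazetteer lookup stands in for the natural language processing analysis and is purely illustrative; all names are hypothetical.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class LocationRequestInput:
    """Hypothetical bundle of the inputs mentioned above."""
    gps_fix: Optional[Tuple[float, float]] = None     # (latitude, longitude) from a positioning sensor
    map_point: Optional[Tuple[float, float]] = None   # location indicated on a map
    text: Optional[str] = None                        # e.g., a transcribed voice command

# Toy gazetteer standing in for natural language processing of textual information.
KNOWN_PLACES = {
    "town square": (40.7580, -73.9855),
    "conference room a": (40.7484, -73.9857),
}

def resolve_specific_geographic_location(inp: LocationRequestInput) -> Optional[Tuple[float, float]]:
    """Resolve the specific geographic location, trying the most direct source first."""
    if inp.gps_fix is not None:          # positioning data: determined immediately
        return inp.gps_fix
    if inp.map_point is not None:        # indication on a map: determined immediately
        return inp.map_point
    if inp.text:                         # textual information: simple lookup in lieu of NLP
        return KNOWN_PLACES.get(inp.text.strip().lower())
    return None

print(resolve_specific_geographic_location(LocationRequestInput(text="Town Square")))
```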
In some examples, obtaining the information may include reading the information from memory, receiving the information from an external device, receiving the information from a user (for example, through a user interface), determining the information by analyzing data, and/or capturing the information using one or more sensors. In one embodiment, a first entity may request to present virtual content and a second entity may receive the presentation request from the requesting entity. The second entity may determine whether to permit presentation of virtual content. In some embodiments, the second entity may request specific information from the first entity about the specific geographic location. Some disclosed embodiments may involve accessing a plurality of content placement rules defining geographical restrictions on extended reality environment content placement. A content placement rule may be any definition of the appropriateness of information for placement in a particular location (or a category or type of location). The content placement rule may impose geographical restrictions, content type restrictions, time of day restrictions, and/or shared extended reality environment (i.e., public or private) restrictions. For example, a content placement rule may govern what time virtual content may be presented, size restrictions for presented virtual content, types of content that may be presented, and/or who may view the presented virtual content. Additional examples of content placement rules are described throughout this specification. In one embodiment, the content placement rules may be implemented throughout a shared extended reality space to regulate which content may be placed where, when, and/or for how long. Whether an entity's request to present content meets a content placement rule may be based on obtained information. In some examples, accessing the plurality of content placement rules may include accessing a data-structure including at least part of the plurality of content placement rules, accessing a database including at least part of the plurality of content placement rules, accessing a memory unit storing at least part of the plurality of content placement rules, and/or communicating with an external device maintaining at least part of the plurality of content placement rules. Some disclosed embodiments may involve, based on the obtained information, determining that the request from the entity to place the virtual content at the specific geographic location corresponds to a specific content placement rule. The specific content placement rule may govern whether content may be displayed at the specific geographic location. The specific content placement rule may be associated with the content type, i.e., the character of the content, whether the content is inanimate or animate, whether the content is meant to be publicly or privately displayed, and/or whether the content is two-dimensional or three-dimensional. Other content placement rules may be based on the substance of the content, i.e., the content meaning, the appropriateness of the content to a determined or expected audience, where the content may be placed, the size of the presented virtual content, and/or what time the content is to be displayed. A specific content placement rule may also be based on the type of shared extended reality environment in which the content may be placed, as illustrated in the sketch below. 
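By way of a non-limiting illustration, the following Python sketch shows one simplified representation of content placement rules as condition functions and of checking a request against them. The rule set and field names are hypothetical and far simpler than the rules contemplated above.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class PlacementRequest:
    """Hypothetical request-related information obtained as described above."""
    entity: str
    location_kind: str          # "public" or "private"
    content_type: str           # e.g., "advertisement", "business_document"
    hour_of_day: int            # 0-23

@dataclass
class PlacementRule:
    """A content placement rule: a condition plus the restriction it imposes."""
    name: str
    condition: Callable[[PlacementRequest], bool]   # True when the rule is satisfied

# A toy rule set; real rules could also cover size, licenses, ownership, etc.
RULES: List[PlacementRule] = [
    PlacementRule("no_ads_in_private_places",
                  lambda r: not (r.location_kind == "private" and
                                 r.content_type == "advertisement")),
    PlacementRule("business_hours_only",
                  lambda r: 9 <= r.hour_of_day < 17),
]

def evaluate(request: PlacementRequest) -> Dict[str, bool]:
    """Check the request against every accessible placement rule."""
    return {rule.name: rule.condition(request) for rule in RULES}

req = PlacementRequest("acme_corp", "private", "advertisement", 14)
print(evaluate(req))   # the advertisement rule fails, the time rule passes
```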
For example, there may be different content placement rules for public shared extended reality environments and private shared extended reality environments. In this example, some content (e.g., advertisements) may be permitted in public environments but may not be permitted in private environments. In some examples, the plurality of content placement rules may be stored in a data-structure associating request-related-information with content placement rules, and the data-structure may be accessed based on the obtained information to determine the specific content placement rule. In some examples, a plurality of conditions may be used to select which content placement rule corresponds to which request-related-information, and the obtained information may be analyzed using the conditions to determine the specific content placement rule. In some examples, a machine learning model may be trained using training examples to determine content placement rules based on request-related-information records. An example of such a training example may include sample request-related-information, together with a label indicating a content placement rule corresponding to the sample request-related-information. The trained machine learning model may be used to analyze the obtained information and determine the specific content placement rule. In some embodiments, the request for presenting the virtual content may correspond with a specific content placement rule. For example, there may be characteristics of the content presentation request that correspond with the content placement rule in a specific geographic location. Such characteristics may be content type, time of day restrictions, and/or display size restrictions. For example, an entity's virtual content request may involve presenting three-dimensional virtual content and the specific geographic location may have a content placement rule corresponding to three-dimensional virtual content, i.e., it is either permitted or not permitted. By way of example, FIG. 37C illustrates choosing a specific geographic location based on obtained information. Here, a wearer of wearable extended reality appliance 3718 may gesture with his or her hand 3720 toward a specific location where he or she wishes to display content. In FIG. 37C, the user's wearable extended reality appliance 3718 is not registered to the public extended reality environment; rather, the appliance is registered to the private extended reality environment. Thus, only office and business-related virtual content 3724 is displayed, and the advertisements 3716 shown in FIG. 37B are not displayed. FIG. 37D illustrates a virtual object 3726 being displayed after determining, based on the desired geographic location, that the specific conditions for content placement are met. In this example, a specific condition for content placement may be that the type of content is office or business related. Therefore, it may be determined that the request from the entity to place the virtual content at the specific geographic location corresponds to a specific content placement rule. Some disclosed embodiments may further include determining whether the condition of the specific content placement rule is met based on the determined characteristic of the specific geographic location. A specific content placement rule may be based on a myriad of inputs and may be configured by the wearable extended reality appliance user receiving the presentation request. 
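By way of a non-limiting illustration, the following Python sketch is a toy stand-in for the trained machine learning model described above, using a nearest-neighbor lookup over labeled training examples to select a content placement rule for given request-related-information. The feature encoding, labels, and threshold-free 1-NN approach are illustrative assumptions, not a required model.

```python
from typing import List, Tuple

# Training examples: sample request-related-information (encoded as numeric
# features) together with a label naming the corresponding placement rule.
# Features, purely for illustration: (is_private_location, is_advertisement, hour_of_day / 23).
TRAINING: List[Tuple[Tuple[float, float, float], str]] = [
    ((1.0, 1.0, 0.5), "no_ads_in_private_places"),
    ((1.0, 0.0, 0.5), "business_content_rule"),
    ((0.0, 1.0, 0.9), "nighttime_restriction"),
    ((0.0, 1.0, 0.5), "public_ads_allowed"),
]

def nearest_rule(features: Tuple[float, float, float]) -> str:
    """1-nearest-neighbor stand-in for a trained model: return the label of the
    closest training example (squared Euclidean distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(TRAINING, key=lambda ex: dist(ex[0], features))[1]

# A request to place an advertisement at a private location at 2 PM is matched
# to the rule governing advertisements in private places.
print(nearest_rule((1.0, 1.0, 14 / 23)))
```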
The determined characteristic may be any one of the characteristics described in this specification, such as whether the specific geographic location is public or private, whether there is enough space for the virtual object to be presented, and/or what type of content is permitted at the specific geographic location. For example, specific content placement rules may be content-based, i.e., based on whether the content is to be public, such as an advertisement, or private, such as content related to business or education. A specific content placement rule may also apply to a plurality of geographic locations. For example, a town or neighborhood may have content placement rules preventing the display of certain types of virtual content. These local content placement rules may be enforced by a local government or regulatory entity. In some embodiments, a condition of the specific content placement rule may prevent placement of content in the specific geographical location in an absence of a license permitting content placement by the entity in the specific geographical location. While content sharing may be encouraged, if anyone is able to post content anywhere, the shared extended reality environment may become cluttered with content. As a result, the users of the shared extended reality environment may be indifferent to the content displayed in the shared extended reality environment, damaging the effectiveness of targeted content placed by various entities throughout the shared extended reality environment. In order to reduce the crowding of virtual content items in certain locations throughout the shared extended reality environment, it may be desirable to adopt a licensing system to reduce unwanted content. Such licenses may govern, for example, which content may be presented at a specific geographic location at a specific time, which types of content may be presented (i.e., public or private content, two-dimensional or three-dimensional content, and/or inanimate or animate content), and/or how many virtual objects may be presented at a specific time. This licensing system may also be adapted to the private shared extended reality environment. For example, a license may be utilized to prevent members of the general public from posting advertisements or presenting other content inside private areas such as homes, offices, shops, or other private businesses. The licensing scheme may be relaxed in public spaces, but time, place, and content-type restrictions may be implemented. Entities may request or apply for a license to place content in private spaces, and the licenses may be evaluated based on the products the entities wish to advertise, the time and day that the content is to be presented, how many other virtual objects are present, and the type of business where the content is to be placed. In one example, a license to present content at a specific geographic location may not be granted for a private home because the value of the wearable extended reality appliance user's privacy may outweigh the benefit the user may receive from the presenting entity. However, in another example, an owner of a specific geographic location such as a store or office may grant a license to display virtual content based on the benefit it receives from the requesting entity, which may be in the form of a monetary payment. In a public shared extended reality environment, licenses may be granted or denied based on the type of content that is requested to be presented. 
For example, a requesting entity may need a specific license to present content related to age-restricted products such as tobacco or alcohol. In another example, the license may be limited based on the time of day. In this example, a license may be required to present content outside of normal business hours, such as at nighttime, or even to present content for a 24-hour period. Content placement restrictions may also be implemented more broadly. For example, an entire city or town, rather than a single wearable extended reality appliance user, may implement content placement restrictions that may be related to content type, i.e., public or private content, two-dimensional or three-dimensional content, and/or inanimate or animate content. Content placement restrictions may also be based on time of day, as described later in this specification. In some embodiments, the plurality of content placement rules may govern a first extended reality environment and a second extended reality environment, and the operations may further include preventing placement of the virtual content in the specific geographic location of the first extended reality environment and permitting placement of the virtual content in the specific geographic location of the second extended reality environment. As described elsewhere in this specification, wearable extended reality appliance users may register to a plurality of shared extended reality environments. A first extended reality environment may be a private shared extended reality environment and a second extended reality environment may be a public shared extended reality environment. As described elsewhere in this specification, private and public shared extended reality environments may have different content placement restrictions. In some embodiments, placement of the virtual content, such as an advertisement, may be prohibited in the first private shared extended reality environment, but may be permitted to be displayed in the second public shared extended reality environment. For example, advertisements may be prevented from being displayed in a private shared extended reality environment, such as an office, but may be permitted in a public shared extended reality environment, such as in a public park or public transit station. In some embodiments, obtained information associated with the virtual content presentation request may include information about an identity of the entity, and the operations may further include determining whether the condition of the specific content placement rule is met based on the information about the identity of the entity. Information about an identity of the requesting entity may include whether it is a private person or a corporation, what type of business the entity typically engages in, where the entity typically operates, for example, where it typically displays virtual content, and/or how many other virtual objects the entity is currently displaying. Whether the condition of the specific content placement rule is met may be based on the above identity information. For example, a large corporation may wish to advertise products in a smaller setting, such as a school or university. Based on the identity of the entity, and where the entity wishes to place the virtual content, such information may be instrumental in determining whether the condition of the specific content placement rule is met. Thus, for example, a content presentation request may be denied if the information contains details that are not desirable to a recipient of the content. 
Such details may include, for example, the type of product that the company typically advertises or its typical customers. For example, a company that sells unhealthy food products or tobacco products may not meet the specific content placement rule if it wishes to advertise at a school or university campus. In some embodiments, obtained information associated with the request to present content may include information about a physical location of the entity, and the operations may further include determining whether the condition of the specific content placement rule is met based on the information about the physical location of the entity. A receiving wearable extended reality appliance user may accept or deny an entity's presentation request based on the entity's physical location. For example, if the requesting entity is located close to the specific geographic location, the request may be accepted. If the requesting entity is located remote from the specific geographic location, i.e., where the requesting entity wishes to present content, the presentation request may be denied. Additionally, an entity's physical location may differ based on the time the request was made. An entity's location at different times may be useful for a receiving wearable extended reality appliance user in determining whether to grant the presentation request because the differing locations may show whether the requesting entity is moving closer to the requested specific geographic location, which may factor favorably into granting a presentation request, or moving further away from the requested specific geographic location, which may factor unfavorably into granting a presentation request. For example, the entity's physical location may be the entity's location at the time the request is received, at a time period before the request is received, during a time leading up to the receipt of the request, and/or during a time shortly after the receipt of the request. Depending on where the entity's physical location is at these times, the request to present content may be denied. In some embodiments, when the obtained information about the physical location of the entity indicates that at a time of making the request, the entity is located at a first location remote from the specific geographic location, the display of the virtual content may be prevented. Virtual content may be prevented from being displayed when an entity's physical location is remote from the geographic location where the virtual content is to be displayed. Display prevention may be a function of a distance of a requesting entity from the requested specific geographic location. Additionally, remoteness may be based on differences in time zone, elevation, and/or topography. The threshold distance, i.e., the distance where a presentation request may be granted or denied based on how remote the requesting entity is, may be configurable or may be a matter of design choice by the system designer. For example, if the requesting entity is remote from a particular location, e.g., fifteen kilometers or more, it may be undesirable to permit such an entity to display content at the particular location because it may cause cluttering of public shared extended reality spaces. For example, if remote entities were permitted to present virtual content in areas that are far away from their physical locations, these areas may become overloaded with virtual content. 
Some disclosed embodiments may further include receiving updated information indicating that the entity is located at a second location proximate to the specific geographic location, for example after preventing the display of the virtual content based on the information about the physical location of the entity. “Proximate” may refer to how close the requesting entity is to the specific geographic location, and may be determined based on distance, time zone, and/or elevation. The processor may compare the proximate distance to the remote distance threshold to aid in determining whether the presentation request may be approved. This proximate distance threshold may be configured by the receiving wearable extended reality appliance user or may be designed into the system. The content the entity requested may be automatically displayed when the entity moves closer to the recipient of the request, i.e., within the proximate distance threshold. Whereas the remote threshold distance may govern whether requested content may be presented or not, the proximate distance threshold may govern the distance at which content may be automatically presented. For example, virtual content may be automatically presented when the requesting entity is within one to ten kilometers of the specific geographic location, and/or the requesting entity is within one kilometer and also at the same elevation as the specific geographic location. Some disclosed embodiments may further include automatically enabling the display of the virtual content at the specific geographic location in the at least one shared extended reality environment by at least some of the plurality of wearable extended reality appliances in response to receiving updated information. When an entity is remote from the specific geographic location and requests to display content relevant to its location, presenting the virtual content may overload the virtual space in the requested area. Although the requested content may meet other content placement restrictions, the requesting entity may be too remote for the receiving wearable extended reality appliance to accept the entity's request. However, when the requesting entity moves closer to a recipient of the request, the requested content may be automatically displayed. This may enable entities to plan distribution of virtual content along a route. The distance at which content is automatically displayed may be configured by the wearable extended reality appliance user at the specific geographic location. For example, the proximate threshold distance may be within ten, five, or one kilometer(s) of the specific geographic location. The proximate threshold distance may also be based on elevation. For example, content may be automatically displayed when the requesting entity and the specific geographic location are within one kilometer of each other and are also located at the same elevation. In some embodiments, the obtained information associated with the request may include ownership information associated with the specific geographic location, and some disclosed embodiments may further include determining whether the condition of the specific content placement rule is met based on the ownership information. Ownership information associated with the specific geographic location may refer to whether the owner is a public or private user. If the owner is a private user, the ownership information may include the owner's identity (e.g., personal name, company name, organization, and/or sponsor). 
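By way of a non-limiting illustration, the following Python sketch shows one possible implementation of the remote and proximate distance thresholds described above, using a great-circle distance. The threshold values mirror the fifteen-kilometer and one-kilometer examples given in this specification and are otherwise arbitrary assumptions.

```python
import math
from typing import Tuple

REMOTE_THRESHOLD_KM = 15.0      # at or beyond this distance, display is prevented
PROXIMATE_THRESHOLD_KM = 1.0    # within this distance, display is enabled automatically

def haversine_km(a: Tuple[float, float], b: Tuple[float, float]) -> float:
    """Great-circle distance between two (latitude, longitude) points, in km."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2 +
         math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * math.asin(math.sqrt(h))

def placement_decision(entity_location: Tuple[float, float],
                       specific_location: Tuple[float, float]) -> str:
    d = haversine_km(entity_location, specific_location)
    if d >= REMOTE_THRESHOLD_KM:
        return "prevent"            # entity is remote from the specific geographic location
    if d <= PROXIMATE_THRESHOLD_KM:
        return "auto_display"       # updated location is proximate: enable automatically
    return "pending"                # between thresholds: await further location updates

# The entity starts roughly 17 km away (prevented), then moves to roughly 0.5 km away.
target = (40.7580, -73.9855)
print(placement_decision((40.9100, -73.9855), target))   # prevent
print(placement_decision((40.7625, -73.9855), target))   # auto_display
```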
In some embodiments, an entity may be permitted to display a specific virtual content in a specific geographic location that it owns and may be prohibited from displaying a specific virtual content in a specific geographic location that it does not own. In some cases, the specific geographic location in the request may be a private place or a public place. Private places may have more content placement rules than public places, even within the same shared extended reality environment. For example, a private place may be associated with restrictions as to what type of content may be displayed, what time the content may be placed, and where the content may be placed. In this example, an entity may only be able to post office or business-related content if the specific geographic location is in a private place such as an office building, owned by the entity. By contrast, an entity may be able to present a wider array of content if the ownership information indicates that the specific geographic location is in a public place, owned by the local town or city. By way of example, the obtained information may include ownership information that the specific geographic location is in a public place, such as a park. The owner of the specific geographic location may be the local town, which may have more relaxed presentation rules. In one example, the entity may wish to present virtual content comprising an advertisement for a soft drink. In this public environment, a specific content placement rule may state that advertisements for soft drinks are permitted. Thus, in this example, the content placement rule was met, and the entity was permitted to display an advertisement for a soft drink. In another example, the ownership information may indicate that the specific geographic location is in a private place owned by a private wearable extended reality appliance user. As in the previous example, the entity may wish to present virtual content comprising an advertisement for a soft drink. However, unlike the previous example, a condition of the specific content placement rule may state that no outside advertisements are permitted. Thus, the request to present the virtual content may be denied because the specific content placement rule is not met. In another example, the requesting entity may be an employee of a company and the specific geographic location is the company building, owned and occupied by the company. Here, the content placement rule may be met because, based on obtained ownership information, the requesting entity's place of work is at the specific geographic location. Consistent with one aspect, the obtained information associated with the virtual content presentation request may include affiliation information reflecting a connection of the entity with the specific geographic location, and some disclosed embodiments may further include determining whether the condition of the specific content placement rule is met based on the affiliation information. Affiliation information may refer to any connection that the requesting entity has with the specific geographic location. For example, a lobby of a building may be associated with the company located in the building. A sporting venue may be associated with the home sports team affiliated with the venue. A conference room may be affiliated with an office in which the conference room is located. 
The affiliation information may additionally or alternatively include information related to ownership interests between the requesting entity and the specific geographic location, prior instances where the entity presented virtual content at the specific geographic location, and/or whether the specific geographic location is public or private. Ownership interests may refer to whether the requesting entity owns the specific geographic location. For example, affiliation information reflecting a connection with the specific geographic location may be useful in determining whether a content placement rule is met because if the requesting entity is affiliated with the specific geographic location, the affiliation may suggest that the content placement rule is met. Additionally, the affiliation may suggest that the requesting entity has previously presented virtual content at the specific geographic location. An affiliation may be a franchisor-franchisee relationship, landlord-tenant relationship, employer-employee relationship, and/or any other business or otherwise working relationship that may suggest that the entity has previously engaged with the specific geographic location or has a measure of authority associated with the geographical location. For example, the requesting entity may be an employee of a store who wishes to present an advertisement in the store. Since the requesting entity is affiliated with the specific geographic location, the associated affiliation information may be used to determine whether the condition of the specific content placement rule is met. In another example, affiliation information may refer to prior instances where the requesting entity was permitted to present virtual content. This affiliation information may be relevant in both the public and private space. Here, if the affiliation information reflects that the requesting entity previously presented virtual content at the specific geographic location, this may indicate that the specific content placement rule was previously met and may be met at the current time as well. For example, a not-for-profit entity may have been granted permission to post information in a public space and the permission granted in the past may determine an ability to place content in the future. Consistent with one aspect, the obtained information associated with the virtual content presentation request may include information about a content type, and some disclosed embodiments may further include determining whether the condition of the specific content placement rule is met based on the information about the content type. A variety of different virtual content types may be presented. Content type may refer to one or more characteristics or properties of the virtual contents. Such characteristics or properties may include, for example, whether the virtual content is of a commercial or public informational nature, whether the content is of a public or private nature, whether the content is a document, video, or presentation, whether the content is two-dimensional or three-dimensional, and/or any other properties related to the subject matter or properties of the content (e.g., ownership or affiliation information associated with a presenter of the content or recipient of the content.) Public content may refer to content that is posted in the public extended reality environment. Public content may be visible to many or all wearable extended reality appliance users. Such public content may also include advertisements. 
Private content may refer to content that is posted in a private extended reality environment, i.e., a limited number of users may be able to view the presented virtual content. Private virtual content may be documents, videos, or presentations that are relevant to conducting business, education, or simply sharing between family members or friends. Determining the content type may be helpful in determining whether the content placement rule is met. For example, a content placement rule may limit presented content to private documents and other business-related content. Here, if a requesting entity wishes to present a public advertisement in a private environment, that request may be denied based on the content type-based content placement rule because the content placement rule is not met. In this example, the content placement rule may prevent the presentation of public content such as advertisements. However, if a requesting entity wishes to present a private document in a specific geographic location to a private user, that request may be approved because the content placement rule is met. Consistent with one aspect, the obtained information associated with the virtual content presentation request may include information about a time of day, and some disclosed embodiments may further include determining whether the condition of the specific content placement rule is met based on the information about the time of day. Time of day information may include the time of day at which the request to present virtual content is received, and/or a time of day when the virtual content is to be displayed. For example, a content placement rule may limit receiving presentation requests to normal 9 AM to 5 PM business hours. Here, if a receiving wearable extended reality appliance user receives a presentation request outside of normal 9 AM to 5 PM business hours, the content placement rule may not be met, and thus the presentation request may be denied. The time of day that the virtual content is to be displayed is also relevant to determining whether the specific content placement rule is met. For example, a content placement rule may limit presenting virtual content to normal working hours of 9 AM to 5 PM. Here, the specific content placement rule may not be met if a request to display private virtual content is outside the working hours of 9 AM to 5 PM. In another example, a content placement rule may limit presenting content outside of the operating hours of a public space. Here, an entity may request to present virtual content in a public park, but the park is only open from dawn to dusk. If an entity requests to present virtual content outside of those hours, the specific content placement rule may not be met, and the request to present virtual content may be denied. In some embodiments, there may be content placement rules based on the time of day that govern which type of content may be presented at which time. Here, one type of content may be presented at a first time of day, and a second type of content may be presented at a second time of day. For example, a specific rule may allow display of an advertisement related to coffee and breakfast in the morning, and soft drinks and lunch in the afternoon. In this example, the content placement rule may not be met if the requesting entity wishes to present content related to coffee and breakfast in the afternoon or late evening. Thus, the request to present virtual content may be denied. 
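By way of a non-limiting illustration, the following Python sketch shows one possible encoding of such time-of-day conditions, mirroring the business-hours and morning/afternoon advertisement examples above. The schedule contents and names are hypothetical.

```python
from dataclasses import dataclass
from typing import Dict

@dataclass
class TimeWindow:
    start_hour: int   # inclusive
    end_hour: int     # exclusive

    def contains(self, hour: int) -> bool:
        return self.start_hour <= hour < self.end_hour

# Which content type may be displayed in which window, per the examples above:
# coffee/breakfast advertisements in the morning, soft drinks/lunch in the afternoon,
# and private business content during normal 9 AM to 5 PM hours.
SCHEDULE: Dict[str, TimeWindow] = {
    "coffee_breakfast_ad": TimeWindow(6, 12),
    "soft_drink_lunch_ad": TimeWindow(12, 17),
    "business_document": TimeWindow(9, 17),
}

def time_rule_met(content_type: str, display_hour: int) -> bool:
    """Return True when the time-of-day condition of the rule is met."""
    window = SCHEDULE.get(content_type)
    return window is not None and window.contains(display_hour)

print(time_rule_met("coffee_breakfast_ad", 8))    # True: morning display permitted
print(time_rule_met("coffee_breakfast_ad", 15))   # False: afternoon request denied
```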
Consistent with one aspect, the obtained information associated with the virtual content presentation request may include virtual space constraint information associated with the specific geographic location, and some disclosed embodiments may further include determining whether the condition of the specific content placement rule is met based on the virtual space constraint information. Virtual space constraint information may refer to the number of objects that may be placed at the specific geographic location, the size of the objects that may be placed at the specific geographic location, the total virtual real estate available for placement, and/or the specific arrangement of the objects at the specific geographic location (i.e., whether some virtual content may be presented before other virtual content). Virtual space constraint information may also refer to how many virtual objects are in the vicinity of the specific geographic location. Virtual space constraints may exist so that there are not too many virtual objects presented in one specific geographic location, or in the vicinity of one specific geographic location. Virtual space constraints may limit the number of virtual objects that may be presented in a single geographic location and may depend on the size of the virtual space. Virtual space constraints may also depend on the size of the objects presented relative to the size of the virtual space. Virtual space constraints may also refer to a specific arrangement of objects at the specific geographic location. Depending on the size of the virtual space, virtual objects may need to be organized in a certain way to either avoid overlapping or overloading the virtual space or to be more visible to passersby. For example, a specific content placement rule may prevent more than four virtual objects from being presented in a virtual space. The content placement rule may be based on how many virtual objects are present in the virtual space and may also be based on the display size of the objects in the virtual space. If the display of one of the virtual objects is larger than another, fewer virtual objects may be able to be displayed. In this example, an entity may request to present virtual content in a specific geographic location. However, if the entity requests to present four large virtual objects, the request may be denied because, even though the entity's request meets the content placement rule regarding the number of virtual objects in the virtual space, it does not meet the size requirement. In another example, a specific content placement rule may prevent the display of virtual content unless it is arranged in a specific way, i.e., two small virtual objects located adjacent to two larger virtual objects. If a requesting entity wishes to present content that does not meet the virtual space constraint, e.g., the request involves three small virtual objects located adjacent to one larger virtual object, the content placement rule may not be met, and the presentation request may be denied. By way of example, FIG. 38 is a diagram that illustrates content placement rules 3824 that are associated with multiple input parameters that include information about the requesting entity, the specific geographic location, and content type. The input parameters may refer to the information obtained by a processing device. 
The processing device may be included in remote processing unit 208 (e.g., processing device 560), included in XR unit 204 (e.g., processing device 460), included in input device 202 (e.g., processing device 360), or included in mobile communications device 206. Input parameter 3810 may represent information about an identity of the entity, input parameter 3812 may represent information about a physical location of the entity, input parameter 3814 may represent virtual space constraint information, input parameter 3816 may represent ownership information, input parameter 3818 may represent affiliation information, input parameter 3820 may represent content type information, and input parameter 3822 may represent information about a time of day. The content placement rules 3824 may be configurable and may include rules related to public or private content, time restrictions, content type restrictions, ownership restrictions, affiliation restrictions, physical location restrictions, display size restrictions, and/or virtual object number restrictions. Based on the input parameters and the content placement rules 3824, the processing device may execute one or more of the following operations: operation 3826 preventing the display of the virtual content, operation 3828 enabling the display of the virtual content, operation 3830 modifying the virtual content for display, and/or operation 3832 applying a time limitation for displaying the virtual content. Related embodiments may include receiving image data captured from an area of the specific geographic location via an image sensor associated with the requesting entity's wearable extended reality appliance. The image data may be captured either continuously or periodically, e.g., every few seconds. For example, an image sensor that acquires the image data may be located on the lens, bridge, or any other location on the wearable extended reality appliance so as to capture accurate image data. Captured image data may include color, brightness, light, and/or texture information, location information, geographic information, and/or height or depth information about the specific geographic location. Some disclosed embodiments may further include analyzing image data to determine a characteristic of the specific geographic location. For example, the characteristic of the specific geographic location may include indications of whether the location is inside or outside, e.g., in an office, home, street, park, or other public or private location. In some examples, a characteristic of a geographic location may include the physical dimensions associated with the specific geographic location, i.e., the height, width, and depth of the surface where the content is to be presented. Another characteristic of a geographic location may include the number of people present at, or in the vicinity of, the specific geographic location. For example, the more people who are present at a specific geographic location, the more viewers there may be of a presented virtual object, e.g., an advertisement, and thus the more potential purchasers of the advertised product. A further characteristic of a geographic location may include the number of virtual objects present at, or in the vicinity of, the specific geographic location. 
Related embodiments may include receiving image data captured from an area of the specific geographic location via an image sensor associated with the requesting entity's wearable extended reality appliance. The image data may be captured either continuously or periodically, e.g., every few seconds. For example, an image sensor that acquires the image data may be located on the lens, bridge, or any other location on the wearable extended reality appliance so as to capture accurate image data. Captured image data may include color, brightness, light, and/or texture information, location information, geographic information, and/or height or depth information about the specific geographic location. Some disclosed embodiments may further include analyzing image data to determine a characteristic of the specific geographic location. For example, the characteristic of the specific geographic location may include an indication of whether the location is inside or outside, e.g., an office, home, street, park, or other public or private location. In some examples, a characteristic of a geographic location may include the physical dimensions associated with the specific geographic location, e.g., the height, width, and depth of the surface where the content is to be presented. Another characteristic of a geographic location may include the number of people present at, or in the vicinity of, the specific geographic location. For example, the more people who are present at a specific geographic location, the more viewers of a presented virtual object, e.g., an advertisement, and thus the more potential purchasers of the advertised product. A further characteristic of a geographic location may include the number of virtual objects present at, or in the vicinity of, the specific geographic location. For example, if an entity wishes to present virtual content so that it is visible to other wearable extended reality appliance users, one characteristic that the entity may analyze is how many virtual objects are already in that specific geographic location and whether there is space to present another virtual object or a plurality of virtual objects. In this example, the specific geographic location may be a billboard on a highway, and two virtual objects may already be present on the billboard. Thus, the entity may not present virtual content there based on the presence of the other virtual objects. In another example, the requesting entity may opt to present virtual content at an area that is close to the billboard, but not so close as to confuse a viewer with cluttered content. In some examples, a machine learning model may be trained using training examples to determine characteristics of geographic locations from images and/or videos. An example of such a training example may include a sample image and/or a sample video of a sample geographic location, together with a label indicating a characteristic of the sample geographic location. The trained machine learning model may be used to analyze the image data to determine the characteristic of the specific geographic location. In some examples, the characteristics of geographic locations may be received from a user (for example, through a user interface), may be read from memory, may be received from an external device, and/or may be obtained from a database.
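As a rough illustration of the machine learning approach just described, the following Python sketch assumes a generic classifier object exposing a scikit-learn-style predict() method, trained offline on labeled sample images; the feature extraction shown is a deliberately simplistic stand-in for learned visual features.

```python
import numpy as np

def extract_features(image: np.ndarray) -> np.ndarray:
    """Hypothetical feature vector: coarse brightness statistics only.
    A real pipeline would use learned visual features instead."""
    return np.array([image.mean(), image.std(), image.max(), image.min()])

def location_characteristic(image: np.ndarray, model) -> str:
    """Predict a characteristic of a geographic location (e.g.,
    'indoor office' vs. 'public street') from captured image data,
    using a model trained on labeled sample images as described above."""
    features = extract_features(image).reshape(1, -1)
    return model.predict(features)[0]
```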
Some disclosed embodiments may involve implementing the specific content placement rule to prevent a display of the virtual content at the specific geographic location in the at least one shared extended reality environment by at least some of the plurality of wearable extended reality appliances when a condition of the specific content placement rule is not met. Each wearable extended reality appliance (or its user) may be checked against the rules. Those appliances or users who meet the rules may be permitted to post content; those who do not may be prohibited. A specific content placement rule may apply to multiple shared extended reality environments and may involve multiple parties. Depending on the purpose of presenting virtual content, there may be more than one shared extended reality environment. Many, if not all, users may have access to a public shared extended reality environment in which various types of virtual content may be displayed. However, there may also be a plurality of private shared extended reality environments in which virtual content may be shared only between a limited number of users. For example, in one private shared extended reality environment, content may only be shared between a limited number of users who may be friends or family. In another example, there may be a private shared extended reality environment where content may be shared between a limited number of users who are coworkers or students. In some examples, implementing the specific content placement rule to prevent a display of the virtual content at the specific geographic location in the at least one shared extended reality environment may comprise avoiding adding the virtual content to the at least one shared extended reality environment, at all or at least at the specific geographic location. While specific types of content may be permitted in some shared extended reality environments, this content may not be permitted in other shared extended reality environments based on specific content placement rules at specific geographic locations. Content placement rules may be based on content type, time of day, virtual space constraints, and/or whether the specific geographic location is public or private. When a content placement rule is met, display of virtual content may be enabled; when a content placement rule is not met, display of virtual content may be prevented. For example, advertisements may be permitted in a public shared extended reality environment, but not in a private office environment or a private friends-or-family environment. Such a private environment may also be in a wearable extended reality appliance user's home, and it may be desirable not to have unwanted ads in one's home. The system may use the content placement rules to prevent unwanted content. In one example, users in home, office, and family environments, i.e., a plurality of users in different shared extended reality environments, may prevent public content from being presented because it may interfere with their ability to present the content that they want, such as business-related content. In this example, presenting public advertisements in a private office setting may be distracting to employees. The group of users that may prevent the display of content may also be a local government or regulatory entity. Some disclosed embodiments may involve implementing the specific content placement rule to enable the display of the virtual content at the specific geographic location in the at least one shared extended reality environment by at least some of the plurality of wearable extended reality appliances when the condition of the specific content placement rule is met. In some examples, implementing the specific content placement rule to enable the display of the virtual content at the specific geographic location in the at least one shared extended reality environment may comprise adding the virtual content to the at least one shared extended reality environment at the specific geographic location. In some embodiments, content may be enabled to be presented, provided that a specific content placement rule is met. A content placement rule may be based on content type, time of day, virtual space constraints, and/or whether the specific geographic location is public or private. Content may be presented when the content placement rule is met, and the specific content placement rule may apply to more than one shared extended reality environment, i.e., both a public and a private extended reality environment. In some embodiments, a plurality of wearable extended reality appliance users may implement a content placement rule, and that plurality of users may determine which content may be presented and which content may not be presented. Additionally, this plurality of users may determine whether virtual content may be presented in both private and public shared extended reality environments. A content placement rule may apply to content that is useful and informative to all users and/or conveys emergency information. For example, a content placement rule may permit news and weather reports to be displayed in emergency situations, such as when severe weather is approaching the area.
In one embodiment, this content placement rule may be met when the public content that is requested to be presented is informative to all users, e.g., it includes urgent weather-related information, and thus the content may be enabled to be presented. A plurality of users, e.g., local government or regulatory entities, may determine which public content may be displayed in private shared extended reality environments. Additionally, permitting this content to be presented may meet the content placement rule described above, and thus the content may be presented by at least some private shared extended reality environments. Each unique content placement rule may be configured by an individual wearable extended reality appliance user or by a plurality of users. In some embodiments, implementing the specific content placement rule to enable virtual content display may include presenting the virtual content in a manner overlaying a physical object at the specific geographic location. When overlaying virtual content at the specific geographic location, there may be at least one physical object and at least one virtual object. Overlaying virtual content may refer to presenting virtual content atop a physical object. The physical object may be a screen, sign, poster, wall, window, or any other physical object on which virtual content may be displayed. For example, a specific content placement rule to enable virtual content display may require virtual content to be presented on a screen or other flat surface. If there is no screen or other flat surface available, the request to present virtual content may be denied. By way of another example, a presented virtual object may be docked with another virtual object before being overlaid over the physical object, i.e., two virtual objects are docked with one another before they are presented over a physical object such as a screen. For example, presented virtual content may involve a first virtual object such as video content. A second virtual object that may be docked with the video content may be a volume adjustment bar. The content placement rule at the specific geographic location may stipulate that a second virtual object, here a volume adjustment bar, must be associated with the virtual content before the content may be overlaid over the physical object. The content placement rule may be met if the first virtual object (the video content) and the second virtual object (the volume bar) are docked to one another before the content is presented over the physical object. In this example, the content placement rule is met, and the presentation request may be granted. In another example, the same content placement rule applies, i.e., virtual objects must be docked to one another before content may be displayed over a physical object. In this example, if the first virtual object (the video content) is not docked with the second virtual object (the volume adjustment bar), the content placement rule is not met, and the presentation request may be denied. In some embodiments, implementing the specific content placement rule to enable virtual content display may include docking the virtual content to a physical object located in an area of the specific geographic location. Docking may refer to the action of linking, tethering, attaching, pairing, or in some way connecting a virtual object and a physical object.
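A minimal sketch of the docking requirement described above (for example, video content that must be docked to its volume adjustment bar before being overlaid on a physical screen) might look as follows in Python; the object identifiers and the set-of-pairs representation are illustrative assumptions.

```python
def docking_rule_met(object_ids: list[str],
                     docked_pairs: set[tuple[str, str]]) -> bool:
    """Return True only if every virtual object to be presented is
    docked to at least one other object, per the placement rule."""
    return all(any(oid in pair for pair in docked_pairs)
               for oid in object_ids)

# The volume bar is docked to the video, so the rule is met and the
# presentation request may be granted; removing the pair yields False.
print(docking_rule_met(["video", "volume_bar"],
                       {("video", "volume_bar")}))  # True
print(docking_rule_met(["video", "volume_bar"], set()))  # False
```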
An example of a virtual object is a virtual screen (also referred to as a 'virtual display' or a 'virtual display screen' herein), and an example of a physical object may be a keyboard. When docked, the virtual object and the physical object may move together in tandem, changing location and/or orientation together. Virtual objects may also be docked with other virtual objects. In one implementation, the physical object may be movable, and some disclosed embodiments may further include receiving an input indicative of a movement of the physical object to a new location and enabling the display of the virtual content at the new location. A movable physical object may be one that is possible to relocate based on its size, weight, orientation, and/or other characteristics. A movable object may be a keyboard, a mouse, a touchpad, a pointing device, a microphone, a pencil, a pen, a stylus, a joystick, a person, a robot, and/or any other object that a wearable extended reality appliance user may dock with a virtual object. In some embodiments, input indicative of a movement of a physical object may be received from a sensor included in the physical object. Movement may also be detected based on the position of the object. In one example, a sensor may be a motion sensor, an image sensor, and/or a combination of a motion sensor and an image sensor. Other examples of using other sensors to determine movement of physical objects are discussed above. The captured motion or image data may be used in order to determine that the physical object has moved to a new location. The sensor(s) may be located on any part of the physical object so that a motion sensor may accurately capture changes in position and/or orientation, and an image sensor may capture changes in color, brightness, texture, and/or light so as to signal a change in location. For example, a motion sensor associated with the physical object may send a signal to the processor associated with the non-transitory computer readable medium indicating that the physical object (e.g., a keyboard) has moved to a new location. Image data may also be used to determine that the movable physical object has moved to a new location, for example by analyzing the image data using an ego-motion algorithm. Once it is determined that the movable physical object has relocated, the virtual content may be displayed at the new location because the virtual object is docked to the physical object. Some disclosed embodiments may further include receiving an input indicative of a movement of the physical object to a new geographic location. The processor, for example based on captured image or motion data received from the physical object, or based on other input data as described above, may determine that the physical object has moved to a new geographic location. For example, the image data may be analyzed using an ego-motion algorithm or an ego-positioning algorithm to determine that the physical object has moved to a new geographic location. The processor may also determine a change in physical location based on GPS data; in this example, a GPS receiver is included as part of the movable physical object. Some disclosed embodiments may further include selecting a second content placement rule based on the new geographic location. Content placement rules may change between two different geographic locations. A first geographic location may be private, and a second geographic location may be public.
Content placement rules may also change between two public geographic locations. For example, a first town may have different public content placement rules compared to a second town. In some embodiments, the movable physical object may be moved from a first location to a second location. Because the physical object is docked with a virtual object, the virtual object moves to the second location as well. Here, the second location may have a content placement rule different from the first location's content placement rule. For example, the first location may be public and the second location may be private, and the public location may have content placement rules different from those of the private location: advertisements may be permitted in the first, public location, but not in the second, private location. Some disclosed embodiments may include implementing the second content placement rule to prevent a display of the virtual content at the new geographic location when a condition of the second content placement rule is not met. Other disclosed embodiments may also include implementing the second content placement rule to enable the display of the virtual content at the new geographic location when the condition of the second content placement rule is met. In some embodiments, a content placement rule at a first specific geographic location may be met, enabling content to be presented, while the content placement rule at a second specific geographic location may not be met, so that the request to present content there is denied. The content placement rule may govern virtual content type, virtual space restrictions, time-of-day restrictions, and/or public or private shared extended reality space restrictions. The second specific geographic location may be in the same shared extended reality space (i.e., public or private), may be in a different shared extended reality space, and/or may be located close to or remote from the first specific geographic location. In other embodiments, a content placement rule at a first geographic location may not be met, and thus virtual content is prevented from being presented, while a content placement rule at a second specific geographic location may be met, thus enabling virtual content to be presented. For example, if advertisements are permitted in a first, public location but not in a second, private location, a requesting entity presenting advertisements may be prevented from presenting virtual content in the second, private location because the second content placement rule has not been met. In another example, however, a presenting entity may present content that is permitted in both a public first location and a private second location. For example, the presented content may be a weather report that is beneficial to both public users and private users. Local government or regulatory entities may determine what content is permitted to be displayed in both public and private shared extended reality environments. In this example, the content placement rule in the private specific geographic location permits news reports to be presented but prevents advertising content from being presented. Thus, because the content placement rule of the second location is met, the content may be presented. If a user does not wish to view this content, he or she may create another shared extended reality environment and filter out the content that he or she does not wish to view. This user may add other users to the new shared extended reality environment based on the new content placement restrictions the user created.
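To illustrate the selection of a second content placement rule after a docked physical object moves, consider the following Python sketch; the per-location rule table, location names, and content types are hypothetical.

```python
# Hypothetical rule table: ads allowed in the public location only.
RULES_BY_LOCATION = {
    "town_square": {"banned_types": set()},
    "private_office": {"banned_types": {"ad"}},
}

def reevaluate_after_move(content_type: str, new_location: str) -> bool:
    """Select the content placement rule for the new geographic location
    and report whether display remains enabled there."""
    rule = RULES_BY_LOCATION.get(new_location, {"banned_types": set()})
    return content_type not in rule["banned_types"]

print(reevaluate_after_move("ad", "town_square"))     # True: enabled
print(reevaluate_after_move("ad", "private_office"))  # False: prevented
```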
In some embodiments, implementing the specific content placement rule to enable virtual content display may include storing data associated with the virtual content and associating the data with the specific geographic location. Stored data may include the content type of the displayed content, e.g., public or private content, inanimate or animate content (e.g., document or video content), two-dimensional or three-dimensional content, and/or the display size of the content. Data may be stored over time in the non-transitory computer readable medium and, over time, may be associated with a specific geographic location. The data may be placed in a record in a database or data structure, and/or the two items (i.e., the requested virtual content and the specific geographic location) may be linked in a database (such as a key-value database), data structure, or lookup table in order to be associated with one another. For example, an entity may present different types of virtual content at different specific geographic locations over a one-year period, presenting the same type of virtual content at the same location each time. Thus, over the one-year period, the processor associated with the non-transitory computer readable medium may preemptively associate certain virtual content with certain specific geographic locations and may preemptively determine, based on the stored data, whether a specific content placement rule to enable content display is met. In some embodiments, a machine learning algorithm may also be implemented to store such data over time, such that the processor may preemptively determine which virtual content may be permitted to be displayed and which virtual content may be prevented from being displayed. For example, an entity may present an advertisement at a particular specific geographic location in a public shared extended space three or four times consecutively. When the wearable extended reality appliance receives the fifth request, the processor may automatically determine that the content placement rule is met based on the entity's prior placement of the content.
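The stored-data association and history-based preemptive determination described above might be sketched as follows; the key-value layout and the four-placement threshold (matching the fifth-request example) are assumptions made for the illustration.

```python
from collections import defaultdict

# Hypothetical key-value store linking (entity, location, content type)
# to a count of prior successful placements.
placement_history: defaultdict = defaultdict(int)

def record_and_check(entity: str, location: str, content_type: str,
                     threshold: int = 4) -> bool:
    """Record a placement request and report whether enough prior
    placements exist to preemptively treat the placement rule as met
    (e.g., auto-approving the fifth request for the same content)."""
    key = (entity, location, content_type)
    placement_history[key] += 1
    return placement_history[key] > threshold
```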
In some embodiments, implementing the specific content placement rule to enable virtual content display may include determining a time limitation for displaying the virtual content at the specific geographic location. A time limitation may include a cap on how long specific virtual content may be displayed. A time limitation may be based on the content type of the displayed content, e.g., public or private content, inanimate or animate content, two-dimensional or three-dimensional content, and/or the display size of the content. Public content may be displayed for a longer time than private content, depending on the content placement rule at a specific geographic location. Time limitations may also be tied to a specific time of day. Depending on the virtual content displayed, the time limitation may be a set time period, such as one hour or two hours, or may be based on the time of day, such as morning, afternoon, evening, or nighttime. The time limitation may also be based on normal 9 AM to 5 PM working hours. In this example, some virtual content may be time-limited to be displayed only during working hours, and some virtual content may be time-limited to be displayed only outside of working hours. In one example, displayed content may be public and in a high-traffic area such as a town square or park. At peak times, such as when people are going to and from work, time limitations at that specific geographic location may be short. That is, virtual content may be displayed for a shorter time during peak traffic periods than during lower traffic periods. Time limitations may also exist for the hours themselves. For example, content may be prevented from being displayed at a specific geographic location at night or during the early morning. In another example, content placement rules at specific geographic locations may be related to working hours. Here, content placement rules may permit some content outside of working hours and not permit that content during working hours. Virtual content that may fall within this rule may include public advertisements for bars, clubs, or other social gathering spaces. Presenting virtual content related to these spaces during working hours may hinder productivity, and thus such content may not be permitted in a shared extended reality environment.
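The working-hours and peak-traffic limitations above could be expressed, purely illustratively, as a function from clock time to a display cap; all cutoffs and durations here are invented for the sketch.

```python
from datetime import time

def time_limit_minutes(now: time) -> int:
    """Hypothetical time limitation: no display at night, a short cap
    during peak commute hours, and a longer cap otherwise."""
    if now < time(6) or now >= time(22):
        return 0    # display prevented at night / early morning
    if time(8) <= now < time(10) or time(16) <= now < time(19):
        return 15   # short cap during peak traffic periods
    return 120      # longer cap during lower traffic periods

print(time_limit_minutes(time(8, 30)))   # 15
print(time_limit_minutes(time(23, 0)))   # 0
```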
In some embodiments, implementing the specific content placement rule may include modifying the virtual content for display at the specific geographic location. Modifying the display of content may refer to reducing certain display parameters and/or changing the subject matter of the displayed content, and may occur in a myriad of ways. Modifying the display of content may include reducing the brightness, intensity, or opacity of inanimate content, reducing the frame rate of animate content, changing the color scheme of the displayed content, and/or reducing the display size of the virtual content. Modifying the display of content may also involve changing the content itself in order to meet the specific content placement restriction at the specific geographic location. For example, modifying the display of content may include modifying the gender of a character in an advertisement from a man to a woman. For example, if there are virtual space constraints at the specific geographic location, an entity may modify the size of the presented virtual content. An entity may also change the colors of the presented virtual content to meet specific geographic location requirements. For example, an office or classroom may have a desired color scheme for presented content. The content itself may also be changed based on the restrictions associated with the location. For example, if a school or place of worship is nearby, an entity that wishes to present content may need to ensure that the presented content is not offensive. In another example, animate content may need to be changed to inanimate content if the geographic restrictions require it. In yet another example, three-dimensional content may need to be reconfigured as two-dimensional content to meet virtual space constraints. By way of example, FIG. 39 depicts a flowchart illustrating a method 3910 of determining whether to display content. Method 3910 may include a step 3912 of receiving a request from an entity to display virtual content. Method 3910 may also include a step 3914 of receiving information associated with the entity's request. Such information, as discussed in connection with FIG. 38, may include information about an identity of the entity, information about a physical location of the entity, ownership information, affiliation information, and/or content type information. Method 3910 may also include a step 3916 of identifying a specific content placement rule. Such a rule may be based on the type of content the entity wishes to present and/or whether it is public or private. Method 3910 may also include a step 3918 of determining whether the content placement rule is met. When the condition associated with the specific content placement rule is not met (step 3918: NO), method 3910 may include a step 3920, where virtual content may be prevented from being displayed at the specific geographic location. Method 3910 may also include a step 3922, where the processor may receive ongoing information, such as time-of-day information and virtual space constraint information, and may reevaluate the requesting entity's query accordingly. When the condition associated with the specific content placement rule is met (step 3918: YES), method 3910 may include a step 3924, which may enable the display of the virtual content at the specific geographic location. Method 3910 may also include a step 3926 of receiving additional information. Even though the content placement rule is met at one specific geographic location, the content placement rule may not be met at a second geographic location. Thus, processing device 360, 460, or 560 (see FIGS. 3, 4, and 5) may be configured to keep receiving information. By way of example, FIG. 40 is a flowchart illustrating an exemplary method 4010 for managing content placement in extended reality environments. Process 4010 may be performed by one or more processing devices (e.g., 360, 460, 560) associated with input unit 202 (as depicted in FIG. 3), XR unit 204 (as depicted in FIG. 4), and/or remote processing unit 208 (as depicted in FIG. 5). The steps of the disclosed method 4010 may be modified in any manner, including by reordering steps and/or inserting or deleting steps. Method 4010 may include a step 4012 of receiving a request from an entity to place virtual content at a specific geographic location in at least one shared extended reality environment. Method 4010 may also include a step 4014 of obtaining information associated with the request. The extended reality appliance receiving the virtual content placement request may access a plurality of content placement rules defining geographical restrictions on extended reality environment content placement (step 4016, as depicted in FIG. 38). Method 4010 may also include a step 4018 of determining that the request from the entity to place the virtual content at the specific geographic location corresponds to a specific content placement rule. The extended reality appliance receiving the request may implement the specific content placement rule to prevent a display of the virtual content when a condition of the specific content placement rule is not met (step 4020) and may enable the display of the virtual content when the condition of the specific content placement rule is met.
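Steps 4012 through 4020 of method 4010 might be strung together as in the following Python sketch; the callables and the dictionary-based request are placeholders, not an assertion of how the disclosed system is implemented.

```python
def manage_content_placement(request: dict, obtain_info, rules: dict,
                             condition_met) -> bool:
    """Mirror the flow of method 4010: obtain information associated
    with the request (step 4014), access the applicable placement rule
    (steps 4016 and 4018), then prevent or enable display (step 4020)."""
    info = obtain_info(request)                 # step 4014
    specific_rule = rules[request["location"]]  # steps 4016 and 4018
    # Step 4020: True -> display enabled, False -> display prevented.
    return condition_met(specific_rule, info)
```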
Some disclosed embodiments may involve presenting virtual content to multiple viewers. Virtual content, as described more fully in other portions of this disclosure, may include any non-physical representation of information that may be displayed. Virtual content may include two-dimensional virtual objects, three-dimensional virtual objects, animate content, inanimate content, textual data, or other graphical representations. The virtual content may be shared in a physical room or a virtual room with physical viewers and/or virtual viewers. The position and/or orientation of the virtual content may be determined based on one or more viewer locations for an optimal user experience. Multiple individuals wearing extended reality appliances may, for example, be presented with a virtual screen displayed at a single location to all viewers in a room, with each viewer being presented with the same content but from a different perspective. Some disclosed embodiments may involve receiving sensor data indicative of a plurality of wearable extended reality appliances located in a room. Receiving sensor data may refer to collecting, acquiring, gathering, or getting any type of data that is output from a sensor or that is derived from signals output from a sensor. For example, receiving the sensor data may comprise at least one of reading the sensor data from memory, receiving the sensor data from an external device, receiving the sensor data from one or more sensors, capturing the sensor data using one or more sensors, and so forth. A sensor may refer to a device that detects and/or responds to an input from a physical or virtual environment. Non-limiting examples of sensors that may be used to provide sensor data indicative of a plurality of wearable extended reality appliances located in a room may include image sensors, audio sensors, motion sensors, temperature sensors, vibration sensors, infrared sensors, LIDAR sensors, or any other device capable of identifying the presence and/or the identity of a wearable extended reality appliance. Other sensors that may be used to capture sensor data indicative of the plurality of wearable extended reality appliances being in the room, such as positioning sensors, are described above. For example, the sensor data may include sensor data captured using sensors included in wearable extended reality appliances; a processor within a server (e.g., server 210) may receive the sensor data from different processors within the wearable extended reality appliances, or a processor included in one of the wearable extended reality appliances may receive the sensor data. In another example, the sensor data may include sensor data captured using a sensor not included in any wearable extended reality appliance, such as an image sensor positioned in the room. A room may refer to an area, place, accommodation, or other similar space that may be occupied or where events may occur, such as a display of virtual content. The room may be a virtual room, a physical room, or a combination of a virtual and physical space. For example, data in the form of multiple images captured at different times by an image sensor may indicate, show, or portray that more than one wearable extended reality appliance is physically or virtually located in a physical or virtual room. For example, there may be two, three, four, or five wearable extended reality appliances in the room. In some examples, the room may be completely virtual, and, in other examples, the room may be completely physical. By way of example, FIG. 41 illustrates a plurality of wearable extended reality appliances 4102 located in a room 4100.
For example, the room 4100 may be a physical room, and the plurality of wearable extended reality appliances 4102 may include two wearable extended reality appliances 4102. FIG. 41 also illustrates a virtual object 4104 at a position 4112 displayed for viewing via the wearable extended reality appliances 4102. A first wearable extended reality appliance is located at a first location 4106, and a second wearable extended reality appliance is located at a second location 4108. Furniture 4120 and a physical object 4124 (e.g., a vase) are also present in this example, as is a sensor 4114. In some embodiments, the sensor data may be received from at least one of the first wearable extended reality appliance, the second wearable extended reality appliance, or the third wearable extended reality appliance. It should be apparent to one of ordinary skill in the art that the disclosed embodiments, for example, may include at least one processor for providing the functionality described herein. In some embodiments, at least one processor may be associated with a wearable extended reality appliance. The wearable extended reality appliance may be configured to acquire sensor data from an environment (for example, using one or more sensors included in the wearable extended reality appliance) and transmit the sensor data (or a portion of the sensor data) to, for example, a remote server. In some embodiments, the at least one processor may be configured to collect sensor data from a sensor associated with one or more wearable extended reality appliances. For example, if three extended reality appliances are in proximity to each other, one or more of them may transmit data obtained from an on-board sensor. For example, the sensor in one appliance may detect other appliances in proximity and transmit data identifying their presence and/or information reflective of their specific identity. Each wearable extended reality appliance may identify one or more other appliances to provide a collective indication of the appliances located within the same room or common physical (or virtual) space. The transmission of sensor data may occur over one or more networks, which may or may not involve remote servers. At least one processor may receive sensor data from a sensor associated with the wearable extended reality appliance or from some other sensor. At least one processor may use one or more programs or sets of program instructions to analyze data received from one or more sensors. For example, sensor data may be received by a processor within a server (e.g., server 210) from a first wearable extended reality appliance, a second wearable extended reality appliance, or a third wearable extended reality appliance. Alternatively, sensor data may be received by the processor within the server (e.g., server 210) from the first wearable extended reality appliance, the second wearable extended reality appliance, and the third wearable extended reality appliance. Sensor data may also be received by the processor within the server (e.g., server 210) from a combination of a plurality of wearable extended reality appliances. For example, the sensor data may be in the form of images from an image sensor associated with a wearable extended reality appliance or may be in the form of values representing motion from a motion sensor associated with the wearable extended reality appliance.
In some embodiments, the sensor data may be received from a sensor separated from the first wearable extended reality appliance, the second wearable extended reality appliance, and the third wearable extended reality appliance. A sensor may be considered to be "separated" from a wearable extended reality appliance when it is divided, not linked, not connected, or otherwise distinct from the wearable extended reality appliance. Non-limiting examples of a sensor that may be separated from a wearable extended reality appliance may include an infrastructure sensor mounted in a room (e.g., a physical conference room may have installed image sensors) or a moveable sensor placed in a room. The moveable sensor placed in a room may be a dedicated sensor configured for use with embodiments disclosed herein, or may be a sensor that is temporarily in the room, such as the camera of a laptop, phone, or wearable device other than a wearable extended reality appliance. By way of example, FIG. 42A illustrates a sensor 4114 (e.g., two image sensors on two different walls) that is separate from a first wearable extended reality appliance, a second wearable extended reality appliance, and a third wearable extended reality appliance. For example, a physical conference room 4100 may have the sensor 4114 in the form of two image sensors 4114 installed on two different walls. As illustrated in this example, the third wearable extended reality appliance is at a third location 4210. The third wearable extended reality appliance in this example is associated with a user identity 4218. In some embodiments, the sensor data may include image data captured by an image sensor. Further, the image data may be analyzed to determine that at least one of the first wearable extended reality appliance, the second wearable extended reality appliance, or the third wearable extended reality appliance may be physically located in the room. The analysis of the image data may involve a processor executing a sequence of stored instructions (e.g., a program) that takes inputs, processes the inputs, and outputs the results to an output device. For example, at least one processor may receive the image data captured by the image sensor and may then execute a program in order to analyze the image data. For example, image data captured using an image sensor included in a particular wearable extended reality appliance (e.g., the first, second, or third wearable extended reality appliance) may be analyzed using an ego-positioning algorithm, an ego-motion algorithm, or a visual odometry algorithm to determine that the particular wearable extended reality appliance is in the room. In another example, image data captured using an image sensor not included in any wearable extended reality appliance may be analyzed using an object recognition algorithm to identify at least one of the first, second, or third wearable extended reality appliance in the room. Non-limiting examples of image data may include one or more images of a whole room, part of the room, of a wearable extended reality appliance, of more than one wearable extended reality appliance, or of other objects or people in the room. Alternatively, or additionally, the sensor data may include audio data (e.g., information derived from sound signals) from an audio sensor (e.g., a microphone). Audio data may, for example, represent sound or may be derived from sound. Non-limiting examples of audio data may include files in WAV format, MP3 format, or WMA format.
Audio data may also be information derived from a sound file, such as a pattern or sound signature. A microphone may refer to an instrument for converting sound waves into electrical energy variations, which may then be amplified, transmitted, or recorded. Examples of microphones may include dynamic microphones, condenser microphones, and ribbon microphones. Either such a microphone or a processor that receives sound signals from the microphone may process the sound digitally. For example, the audio sensor may be configured to collect audio data, which may then be analyzed by a processor associated with the audio sensor. The analysis of the audio data may recognize voices and differentiate between the recognized voices to determine that at least one of the wearable extended reality appliances is physically located in the room. By way of example, FIG. 42A illustrates the sensor 4114 in the form of two image sensors 4114 on two different walls in a physical conference room 4100. For example, pixels of the images captured by the two image sensors 4114 may be compared against one another to determine that a plurality of wearable extended reality appliances 4102 (e.g., three wearable extended reality appliances) are physically located in the physical conference room 4100. In some embodiments, the sensor data may be analyzed to determine that at least one of the first wearable extended reality appliance, the second wearable extended reality appliance, or the third wearable extended reality appliance may be virtually located in the room, wherein the determined location of the at least one wearable extended reality appliance virtually located in the room is reflective of a location of an avatar of at least one user. An appliance may be considered virtually located in a room if the appliance is not physically located in that room but is participating (through information displayed) as if physically located in that room. The virtual presence may be represented by an avatar. An avatar may refer to a two-dimensional or three-dimensional virtual figure or virtual icon representative of a user. A location that is "reflective" of a location of an avatar may refer to a location that is indicative, expressive, exhibitive, or similarly demonstrative of a location of that avatar. A user may refer to a person wearing an extended reality appliance. The determined location of the at least one wearable extended reality appliance may refer to a position or place within a coordinate system (e.g., a Cartesian coordinate system). For example, pixels of images captured by an image sensor may be compared to one another to determine that at least one of the wearable extended reality appliances may be virtually located in a room. For example, a user who is not physically located in a room will not be detected by image sensors in the room; this is one way of using image data to determine that a participant is virtually located in a room. In such an instance, an avatar may appear virtually in place of that participant while other participants are physically located in the room. The avatar may be a virtual copy of the user, a character, or any other virtual object representative of the user. For example, the avatar may be an extended reality display of Yoda. The determined location of the at least one wearable extended reality appliance may correspond to where the at least one user is located in the different physical location.
By way of example, FIG. 43 illustrates a physical conference room 4100. FIG. 43 illustrates that a position 4112 for displaying a virtual object 4104 may be changed based on the room 4100 containing four wearable extended reality appliances and a physical object 4124 blocking a view of the virtual object 4104. For example, pixels of the images captured by a separated sensor 4114 may be compared against one another to determine that a fourth wearable extended reality appliance 4326 is virtually located in the physical conference room 4100 at a location reflective of a location of an avatar 4316 of at least one user. Some disclosed embodiments may involve receiving a command to share a virtual object with the plurality of wearable extended reality appliances. A command may refer to a direction, an order, an act, or a similar request or indication of an intent or a need. To share a virtual object may refer to distributing, bestowing, sending, providing, or a similar act of showing the virtual object. A virtual object may refer to any type of data representation that may be displayed by a wearable extended reality appliance to a user. Non-limiting examples of the virtual object may be an inanimate extended reality display, an animate extended reality display configured to change over time or in response to triggers, virtual two-dimensional content, virtual three-dimensional content, a virtual overlay over a portion of a physical environment or over a physical object, a virtual addition to a physical environment or to a physical object, virtual promotional content, a virtual representation of a physical object, a virtual representation of a physical environment, a virtual document, a virtual character or persona, a virtual computer screen, a virtual widget, or any other format for displaying information virtually. In one embodiment, the virtual object may be a visual presentation rendered by a computer in a confined region and configured to represent an object of a particular type (such as an inanimate virtual object, an animate virtual object, virtual furniture, a virtual decorative object, a virtual widget, or another virtual representation). The rendered visual object may change to reflect changes to a status of the object or changes in the viewing angle of the object, for example, in a way that mimics changes in the appearance of physical objects. In another embodiment, the virtual object may include a virtual computer screen configured to display information. For example, the command may be determined from an analysis of sensor data or from an analysis of input from an input device. The sensor data may include image data captured using an image sensor, and the pixels of the images may be analyzed by at least one processor to determine that text is present or that a user has gestured in a way corresponding to a signal. The text and/or signal may be compared with a predetermined text or image to determine an associated command. In some embodiments, a user may say "Share," press a key on a keyboard or a screen, blink one or both eyes in a certain recognizable pattern, or move a head to send a command to share the virtual object (e.g., a virtual display screen) with a plurality of wearable extended reality appliances. Other examples of such commands for sharing virtual content are described in other portions of this disclosure. By way of example, FIG. 41 illustrates an input device 4128 (e.g., a physical keyboard) from which a command may be received.
For example, a user may press one or more keys of the input device 4128 to indicate a command to share a virtual object 4104 with a plurality of wearable extended reality appliances 4102. The command may be transmitted from the input device 4128 and received by at least one processor of any one of the plurality of wearable extended reality appliances 4102. In some embodiments, the virtual object may be a virtual display (also referred to as a 'virtual screen' or a 'virtual display screen' herein) configured to present text entered using an input device, and determining the position for displaying the virtual display may be further based on at least one of a location of the input device or an orientation of the input device. As discussed more fully in other portions of this disclosure, an input device may include a button, key, keyboard, computer mouse, touchpad, touchscreen, joystick, or any other mechanism from which input may be received. For example, a position for displaying a virtual display may be based on at least one of a location of the input device (e.g., a keyboard) or an orientation of the input device so that a distance between the virtual display and the input device is within a selected distance range from the input device, or so that an angle between a selected axis of the input device and a normal of the virtual display is within a selected angular range. Non-limiting examples of the selected range of distances may be one meter, two meters, or five meters. In another example, the position for displaying the virtual display may be determined so that the input device will be between the user of the keyboard and the virtual display. In some embodiments, the virtual object may be a virtual display configured to present text entered using an input device, the input device may be positioned on a physical surface, and determining the position for displaying the virtual display may be further based on a characteristic of the physical surface. In some embodiments, the physical surface may include, for example, an exterior, top, side, external, outward, or outside part or uppermost layer of a physical object. Some non-limiting examples of physical surfaces may include a table, a desk, a bed, a floor, a counter, a wall, and any other physical object having a surface. The characteristic of the physical surface may refer to a quality or trait that is distinctive, innate, unique, indicative, or typical, or another feature or quality. Non-limiting examples of a characteristic of a physical surface may be a size of the physical surface, a location of an edge of the physical surface, a type of the physical surface, a texture of the physical surface, a shape of the physical surface, or any other identifying feature of the physical surface. Non-limiting examples of physical surfaces may include a surface of a dining table, a desk, a bar, a bed, a chair, a floor, a book, or a folding table. For example, a virtual object may be a virtual display configured to present text (e.g., the text typed by the user) when a user clicks on keys of a keyboard. For example, the input device may be positioned on a physical surface (e.g., a desk), and the determined position for displaying the virtual display may be based on the physical surface being a desk and not a bed. In some examples with the desk, a virtual display may be positioned at an end of the desk away from the user. In such an instance, the detected edge and/or sides of the desk may aid in determining the position of the virtual display. In other examples, with a physical surface of a bed, the virtual display may be positioned above the bed so that the user may lie in the bed and see the virtual display by looking upward. By way of example, FIG. 41 illustrates a virtual object 4104 as a virtual display configured to present text entered using an input device 4128 (e.g., a keyboard). A position 4112 for displaying the virtual object 4104 is based on at least one of a location of the input device 4128 or an orientation of the input device 4128 so that a distance between the virtual object 4104 and the input device 4128 is within a selected range of distances.
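The distance-range placement relative to an input device could be sketched geometrically as follows; the 0.3 to 1.0 meter clamp and the forward-axis convention are assumptions chosen for the example.

```python
import numpy as np

def virtual_display_position(device_pos: np.ndarray,
                             device_forward: np.ndarray,
                             preferred_distance: float = 0.6) -> np.ndarray:
    """Place a virtual display along the input device's forward axis,
    clamped to a selected distance range, so the device lies between
    the user and the display."""
    d = float(np.clip(preferred_distance, 0.3, 1.0))
    forward = device_forward / np.linalg.norm(device_forward)
    return device_pos + d * forward

# A keyboard at the origin facing +y puts the display 0.6 m ahead of it.
print(virtual_display_position(np.zeros(3), np.array([0.0, 1.0, 0.0])))
```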
Some disclosed embodiments may include analyzing the sensor data to determine a first location in the room of a first wearable extended reality appliance, a second location in the room of a second wearable extended reality appliance, and a third location in the room of a third wearable extended reality appliance. Any number (such as three) of wearable extended reality appliances may be co-located in the same room or common space. In order to present information to each from perspectives corresponding to their unique locations, or for other uses, their locations may be determined. To this end, sensor data from the types of sensors described earlier may be used to identify the location of each wearable extended reality appliance. A single sensor may detect information from which all appliance locations may be determined, or data from multiple sensors may be aggregated to determine the locations of each wearable extended reality appliance. In some cases, data from the sensor in one appliance may be used to determine a location of another appliance, and vice versa. Physical attributes of the room may also be used in location determination. For example, sensor data may be used to identify corners of a room or other physical attributes of the room, and the wearable extended reality appliances may be located relative to those physical attributes. Non-limiting examples of other such physical attributes may include the placement of furniture or other objects. Thus, pixels of images from an image sensor may be analyzed, for example using a visual object localization algorithm, to determine a first location in the room of a first wearable extended reality appliance, a second location in the room of a second wearable extended reality appliance, and a third location in the room of a third wearable extended reality appliance. In another example, the locations of the first, second, or third wearable extended reality appliances may be determined in a coordinate system, such as a global coordinate system or a coordinate system of an extended reality environment. For context, FIG. 42A includes a room 4100, and parts or the entirety of the room 4100 may be described in one or more examples to illustrate parts of the room 4100 in regard to a disclosed embodiment. Additionally, FIG. 42A illustrates the room 4100 with a sensor 4114 separated from a plurality of wearable extended reality appliances 4102. Specifically, the separated sensor 4114 includes two image sensors 4114 on two different walls in the room 4100.
For example, pixels of images from the separated sensors 4114 may be analyzed to determine a first location 4106 in the room 4100 of a first wearable extended reality appliance, a second location 4108 in the room 4100 of a second wearable extended reality appliance, and a third location 4210 in the room 4100 of a third wearable extended reality appliance. Some disclosed embodiments may include determining a position for displaying the virtual object in the room based on the determined first location, the determined second location, and the determined third location. A display position may be determined based on the locations of each wearable extended reality appliance. For example, if three appliances are arranged side by side at a conference room table, a processor may determine that the best position to display the screen is opposite the three appliances, as opposed to at a perpendicular end of the table. Such a determination may take into account the fields of view of each wearable extended reality appliance. In some instances, the determination may take into account articles in the room. For example, if an easel displaying a whiteboard is immediately opposite users of wearable extended reality appliances, the placement may be determined so as not to block the whiteboard on the easel. Non-limiting examples of a position for displaying a virtual object in a room based on the determined locations may include a position between the determined locations, a position of intersection between the determined locations, a position closer to one or two of the determined locations, a position furthest away from the determined locations, or a position closest to the determined locations. For example, if the three determined locations form a triangle, the position for displaying the virtual object in the room may be in the middle of the triangle. In another example, when the three determined locations form a single line or row, the position for displaying the virtual object in the room may be parallel to, and in front of, the line or row. By way of example, FIG. 42A illustrates a determined position 4112 for displaying a virtual object 4104 in a room 4100 based on a determined first location 4106, a determined second location 4108, and a determined third location 4210. For example, the determined position 4112 is at a corner of the room 4100 between the determined first location 4106, the determined second location 4108, and the determined third location 4210.
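The triangle and single-row cases described above might be handled as in this Python sketch, which works with two-dimensional floor coordinates; the collinearity tolerance and the 1.5 meter offset are arbitrary illustrative values.

```python
import numpy as np

def shared_display_position(locations: list[np.ndarray],
                            offset: float = 1.5) -> np.ndarray:
    """Return the centroid when the three appliance locations form a
    triangle; when they are (nearly) collinear, step off the row by
    `offset` along a perpendicular so the display sits in front of it."""
    p = np.stack(locations)          # shape (3, 2): x, y floor positions
    centroid = p.mean(axis=0)
    v1, v2 = p[1] - p[0], p[2] - p[0]
    cross = v1[0] * v2[1] - v1[1] * v2[0]
    if abs(cross) > 1e-3:            # non-degenerate triangle
        return centroid
    line = v1 / np.linalg.norm(v1)
    normal = np.array([-line[1], line[0]])
    return centroid + offset * normal
```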
In some embodiments, the determined position for displaying the virtual object may include a determined orientation of the virtual object in the room, and additional sensor data may be analyzed to determine that at least two of the first wearable extended reality appliance, the second wearable extended reality appliance, or the third wearable extended reality appliance changed orientation or location; adjusting the determined orientation of the virtual object may be based on the orientation or location change. Orientation may refer to direction or the relative position of an object or individual. Individuals tend to change their orientations over time by shifting their focus, turning their bodies, or changing their posture. When they do, the changes may impact a virtual presentation. Similarly, individuals may change their locations by moving from one spot to another. Sensor data may be used to determine that one or more of the wearable extended reality appliances changed orientation or location. For example, if a user shifts in a chair so that the user's body position turns to the right, and another user's body position shifts to the left, the orientational movements may be detected, and a virtual display may shift to accommodate the new positions. Such orientational changes may be determined based on analysis of image data from one or more of the sensors described earlier, and the virtual object may be reoriented to accommodate the orientational changes of one or more users. Similar adjustments may be made for locational changes. For example, if two users were to move from one side of the room to another, closer to a third user, a virtual display that was originally centered in the room may shift toward the center of the reoriented group. Or, if a group encircles a virtual object and some of the group members shift so that the center of the encirclement changes, the virtual object may move toward the new center. Further, pixels from additional images from an image sensor may be analyzed to determine that at least two of a first wearable extended reality appliance, a second wearable extended reality appliance, or a third wearable extended reality appliance (which may include the individual wearing the appliance) changed orientation (e.g., from facing the center of the room to facing the door of the room). In some embodiments, the determined orientation of the virtual object may be adjusted based on the orientation changes so that the virtual object also faces the door of the room. By way of example, FIG. 41 illustrates a determined position 4112 for displaying a virtual object 4104, which may include a determined orientation (e.g., facing two users) of the virtual object 4104 in a room 4100. For example, pixels of additional images from sensors 4114 separated from the wearable extended reality appliances may be analyzed by at least one processor to determine that at least two of a first wearable extended reality appliance, a second wearable extended reality appliance, or a third wearable extended reality appliance changed orientation (e.g., from facing two users to facing three users). The determined orientation of the virtual object 4104 may be adjusted to face three users in the room 4100 based on the orientation changes so that the virtual object 4104 still faces the users. By way of example, FIG. 41 illustrates a determined position 4112 for displaying a virtual object 4104 that includes a determined location (e.g., a corner of the room 4100) of the virtual object 4104 in a room 4100. For example, pixels from additional images from a sensor 4114 separated from the wearable extended reality appliances may be analyzed to determine that at least two of a first wearable extended reality appliance, a second wearable extended reality appliance, or a third wearable extended reality appliance changed location (e.g., a third user entered the room). The determined location of the virtual object 4104 may be adjusted in the room 4100 based on the location changes so that the virtual object 4104 is not occluded. Some disclosed embodiments may include causing a first display of the virtual object at the determined position through the first wearable extended reality appliance, the first display being rendered from a first perspective.
Additionally, other disclosed embodiments may include causing a second display of the virtual object at the determined position through the second wearable extended reality appliance, the second display being rendered from a second perspective different from the first perspective. Still further, additional disclosed embodiments may include causing a third display of the virtual object at the determined position through the third wearable extended reality appliance, the third display being rendered from a third perspective different from the first perspective and the second perspective. A display that is rendered from a perspective may refer to a display delivered, depicted, or presented due to or because of an angle, aspect, context, viewpoint, or a similar point of view. In some embodiments, perspectives are simulated in the virtual world as they would appear in the real world. In the real world, three individuals sitting in different locations in the same room and viewing an object will each see the object somewhat differently, depending on their relative positions. In some disclosed embodiments, instructions stored on a computer readable medium and running on at least one processor may collect data from sensors to determine differing locations or orientations of different wearers of extended reality appliances, and may render the view of each differently, to effectively simulate the perspectives they would see if viewing a real object from their respective orientations. Non-limiting examples of a display that is rendered from a perspective may include the display positioned to the left, the center, the right, higher up, or lower down from a user's focal point. A first display of the virtual object, a second display of the virtual object, and a third display of the virtual object, each rendered from its own perspective, may be the same display. Alternatively, the first display of the virtual object, the second display of the virtual object, and the third display of the virtual object, each rendered from its own perspective, may be different versions of the display. For example, the first display of the virtual object at a determined position through a first wearable extended reality appliance, rendered from the first perspective, may be of the virtual object positioned lower in height in reference to the wearable extended reality appliance. As another example, the second display of the virtual object at a determined position through a second wearable extended reality appliance, rendered from the second perspective, may be of the virtual object positioned higher in reference to the wearable extended reality appliance and straight in front of the wearable extended reality appliance. As yet another example, the third display of the virtual object at a determined position through a third wearable extended reality appliance, rendered from the third perspective, may be of the virtual object positioned higher in reference to the wearable extended reality appliance and to the side of the wearable extended reality appliance. In some examples, the first perspective, second perspective and third perspective may be selected so that users of the first, second and third wearable extended reality appliances may all see a first portion of the virtual object at the same first position in the room, a second portion of the virtual object at the same second position in the room, and a third portion of the virtual object at the same third position in the room.
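The per-wearer perspectives described above could, under simplifying assumptions, be derived from each appliance's pose relative to the shared object position. The following illustrative Python sketch (assuming 2-D positions and a heading angle per appliance; render_parameters is a hypothetical helper, not part of the disclosed embodiments) computes the distance and relative bearing a renderer might use to draw the same object differently for each wearer.

```python
import math

def render_parameters(appliance_pose, object_position):
    """Derive per-appliance rendering parameters for one shared object.

    appliance_pose: ((x, y), heading_radians) of a wearable appliance.
    object_position: (x, y) of the shared virtual object in the room.
    Returns the distance and the bearing of the object relative to the
    wearer's heading, from which a renderer could draw the object as seen
    from that wearer's perspective.
    """
    (ax, ay), heading = appliance_pose
    ox, oy = object_position
    dx, dy = ox - ax, oy - ay
    distance = math.hypot(dx, dy)
    bearing = math.atan2(dy, dx) - heading
    bearing = math.atan2(math.sin(bearing), math.cos(bearing))  # wrap to [-pi, pi]
    return {"distance": distance, "bearing": bearing}

shared_object = (1.0, 2.0)
poses = [((0, 0), 0.0), ((2, 0), math.pi / 2), ((1, -1), math.pi / 4)]
for i, pose in enumerate(poses, 1):
    print(f"appliance {i}:", render_parameters(pose, shared_object))
```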
For example, the first perspective, second perspective and third perspective may be selected so as to mimic the different projections of the object when viewed from different positions in the room (for example, from the positions of the first, second and third wearable extended reality appliances, respectively). By way of example, FIG. 42B illustrates a first display of a virtual object 4104 at a determined position 4112 through a first wearable extended reality appliance. In this example, the first display may be rendered from a first perspective of the virtual object 4104 positioned at an angle where the left side of the virtual object 4104 is closer to the first wearable extended reality appliance than the right side. By way of example, FIG. 42B also illustrates a second display of the virtual object 4104 at a determined position 4112 through a second wearable extended reality appliance. In this example, the second display may be rendered from a second perspective of the virtual object 4104 where the right side of the virtual object 4104 is almost as close as the left side of the virtual object 4104. By way of example, FIG. 42B illustrates a third display of the virtual object 4104 at a determined position 4112 through a third wearable extended reality appliance. In this example, the third display may be rendered from a third perspective of the virtual object 4104 positioned where the right side of the virtual object 4104 is closer to the third wearable extended reality appliance than the left side. Some disclosed embodiments may include determining an identity of at least one of a user of the first wearable extended reality appliance, a user of the second wearable extended reality appliance, or a user of the third wearable extended reality appliance; and wherein determining the position for displaying the virtual object may be further based on the identity of the at least one user. Each user may have different physical characteristics and different preferences. A system may store information relating to those characteristics and preferences and alter the display based on them. For example, some users may prefer to sit farther from a screen. Other users may prefer to view two adjacent screens in an angled rather than linear arrangement. For example, by identifying particular users, a lookup may occur in a data structure of the user's characteristics/preferences, and the virtual display may be adjusted for each user accordingly. By way of another example, the identity of the at least one user may refer to a status or title or other classification. Information displayed to individuals in the same virtual room may differ based on their class. Higher-ups, for example, may be presented with financial data that may not be presented to others. Non-limiting examples of an identity may include a boss, a manager, an employee, a partner, an associate, an owner, a non-employee, an independent contractor, a licensee, a licensor, or a relative. The same data may be presented differently based on identity. For example, if the user's identity is that of a partner, the position for displaying the virtual object may be directly in front of or closer to the partner so that the partner has the best or most clear view of the virtual object. By way of example, FIG. 42A illustrates an identity 4218 (e.g., a partner) of at least one of a user of a first wearable extended reality appliance, a user of a second wearable extended reality appliance, or a user of a third wearable extended reality appliance.
For example, when the determined identity 4218 of the user is a partner, a position 4112 for displaying a virtual object 4104 may be in clear view for the partner. Some disclosed embodiments may include determining a physical characteristic of at least one user of the first wearable extended reality appliance, the second wearable extended reality appliance, or the third wearable extended reality appliance; and wherein determining the position for displaying the virtual object may be further based on the physical characteristic of the at least one user. Non-limiting examples of physical characteristics may include the height of the at least one user or disabilities of the at least one user. For example, when a determined physical characteristic of a user is that the user is tall (which means that the user is used to looking at objects from a higher vantage point), virtual objects may be presented lower in the field of view on that tall user's wearable extended reality appliance. If the determined physical characteristic is that a user is near-sighted, the position for displaying the virtual object may be closer to that user so that the user may clearly see the virtual object. For example, if the determined physical characteristic is that the at least one user has hearing problems, the position for displaying the virtual object may be closer to the at least one user so that sounds associated with the virtual object are closer to the user. Some disclosed embodiments may include determining a layout of the room that may involve one or more physical locations of furniture in the room, and wherein the position for displaying the virtual object may be further determined based on the layout of the room. A layout may refer to an arrangement, format, configuration, design, or way in which a room is set up. An example of a layout may include four chairs, a desk, and a presentation board. Non-limiting examples of furniture may include one or more chairs, desks, plants, and a presentation board. A position of a virtual object (e.g., a virtual display screen) may be determined based on the layout of the room. If four chairs are detected (via sensors and processing described earlier) as arranged in a U-shape, a processor may determine that a virtual object such as a virtual screen should be positioned opposite the open end of the U. In one example, the position for displaying the virtual object may be strategically placed to avoid occlusion of at least part of the virtual object by furniture in the room. In another example, the position of the virtual object may be on a surface of an object, such as a wall or a table. In yet another example, an egress such as a door to the room may be taken into account when positioning a virtual object so that persons entering or leaving the room avoid colliding with the virtual object. By way of example, FIG. 42A illustrates a layout of a room 4100 that may include one or more physical locations of furniture 4120 in the room 4100. For example, the layout of the room 4100 may include two chairs, a desk, and a vase of plants (that serves as an example of a physical object 4124). Further, a position 4112 for displaying a virtual object 4104 may be further determined based on the layout of the room 4100 that may involve the one or more physical locations of furniture 4120. Advantageously, the position 4112 for displaying the virtual object 4104 may be in front of the one or more physical locations of furniture 4120 in the room 4100.
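As one non-limiting illustration of the physical-characteristic adjustments described above, the following Python sketch assumes a simple user-profile dictionary with hypothetical height_m and near_sighted fields; the thresholds and offsets are arbitrary placeholders rather than disclosed values.

```python
def adjust_for_user(base_position, user):
    """Nudge a display position for one user's physical characteristics.

    base_position: (x, y, z) chosen from the appliance locations.
    user: dict with optional 'height_m' and 'near_sighted' entries.
    A taller user gets the object presented slightly lower; a
    near-sighted user gets it moved closer along the viewing axis.
    """
    x, y, z = base_position
    if user.get("height_m", 1.7) > 1.85:
        z -= 0.2   # present lower for a higher vantage point
    if user.get("near_sighted", False):
        x *= 0.7   # bring the object closer to this user
    return (x, y, z)

print(adjust_for_user((3.0, 0.0, 1.6), {"height_m": 1.9, "near_sighted": True}))
```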
Some disclosed embodiments may include determining illumination conditions in the room, and the position for displaying the virtual object may further be determined based on the illumination conditions. An illumination condition may refer to the state of the room's brightness, radiance, glittering, glowing, or similar lighting level. Non-limiting examples of illumination conditions may include the lighting level of two overhead lights in a single room. For example, to avoid glare, a position for displaying a virtual object (e.g., a virtual display screen) may be based on illumination conditions so that the position for display avoids a direct light source with respect to a user. In another example, the determined illumination conditions in the room may be dim or dark so that the position for displaying the virtual object (e.g., a virtual movie screen) is based on the dimness or darkness and positioned in a darker area in the room. Alternatively, in another example, the determined illumination conditions in the room may be bright or well-lit so that the position for displaying the virtual object (e.g., a virtual object with a text document) is based on the bright conditions and positioned in a brighter area in the room for users to read the text more easily. In yet another example, the determined illumination conditions may include locations of borders between shaded and non-shaded areas and the position for displaying the virtual object may be determined so that the virtual object is not across the border. Further, in another example, the determined illumination conditions may include an indication that the borders are about to move (for example, due to motion of the sun), and the position for displaying the virtual object may be determined so that the virtual object is not across the border during a selected time duration. By way of example, FIG. 44 illustrates illumination conditions 4422 in a room 4100. The illumination conditions 4422 may be provided by two overhanging lights in the room 4100. For example, a position 4112 for displaying a virtual object 4104 may further be determined based on the illumination conditions 4422 so that the determined position 4112 avoids glare or a direct light source with respect to a user of a wearable extended reality appliance. Some disclosed embodiments may include determining a type of the virtual object, and the position for displaying the virtual object may further be determined based on the type of the virtual object. Non-limiting examples of a type of a virtual object may include animate virtual objects, inanimate virtual objects, two-dimensional virtual objects, three-dimensional virtual objects, virtual characters, virtual environments, virtual documents, virtual display screens, virtual widgets, or virtual overlays of a portion or whole of a physical environment or physical object. For example, the determined type of virtual object may be a virtual display screen and a position for displaying the virtual display screen may be determined on a wall of a room, for example because the virtual object is a two-dimensional virtual display screen. In another example, the determined type of virtual object may be a three-dimensional object, such as a virtual solar system, and the position for displaying the virtual solar system may be determined to be in the middle of the room to permit three-dimensional virtual viewing.
In yet another example, the determined type of virtual object may be virtual rain and the position for displaying the virtual rain may be determined as originating from the top of the room and falling to the ground of the room, for example because the type of the virtual object is rain. By way of example, FIG. 41 illustrates a type (e.g., a virtual display screen, a two-dimensional virtual object, a three-dimensional virtual object, etc.) of a virtual object 4104 and a position 4112 for displaying the virtual object 4104 may further be determined based on the type of the virtual object 4104. For example, the determined type of virtual object 4104 may be a virtual display screen and the position 4112 for displaying the virtual object 4104 may be determined in the corner of the room as a virtual two-dimensional object. Some disclosed embodiments may include analyzing the sensor data to identify a physical object in the room and the position for displaying the virtual object may be determined so that none of the first display, the second display, and the third display may be occluded by the physical object. A ray casting algorithm may be used to determine the position in which none of the first display, the second display, and the third display are occluded by the physical object. Occluded may refer to blocked, prevented, or a similar obstruction. Non-limiting examples of physical objects may be a user, a piece of furniture (e.g., a chair, table, or door), a laptop, food, a drink, or a presentation board. For example, pixels of images from an image sensor may be analyzed to identify a physical object (e.g., a row of chairs). Further, a position for displaying a virtual object (e.g., a virtual display screen) may be determined to be higher on a wall of a room so that none of a first display, a second display, and a third display may be occluded by the row of chairs. In another example, pixels of images from an image sensor may be analyzed to identify the physical object (e.g., a presentation board). Further, a position for displaying a virtual object may be determined to be moved slightly to the left so that none of the first display, the second display, and the third display may be occluded by the presentation board. By way of example, FIG. 43 illustrates a room 4100 with a sensor 4114 separated from the wearable extended reality appliances and a physical object 4124. For example, pixels of images from the sensor 4114 may be analyzed to identify the physical object 4124. Further, a position 4112 for displaying the virtual object 4104 may be determined to be moved to the left of the physical object 4124 so that none of a first display, a second display, and a third display may be occluded by the physical object 4124. Some disclosed embodiments may include analyzing the sensor data to identify a physical object in the room, and the position for displaying the virtual object may be determined so that none of the first display, the second display, and the third display occlude the physical object. For example, pixels of images from an image sensor may be analyzed to identify a physical object (e.g., a user, a plant, artwork, or furniture). Further, a position for displaying a virtual object (e.g., a virtual display screen) may be determined to shift or move away from the user so that none of a first display, a second display, and a third display occlude the user. A ray casting algorithm may be used to determine the position in which none of the first display, the second display, and the third display occlude the physical object.
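The ray-casting occlusion tests described above may, for example, be implemented by casting a segment from each appliance toward the candidate display position and testing it against bounding boxes of detected physical objects. The following Python sketch shows a standard slab test for a segment against an axis-aligned bounding box; the scene values are illustrative only.

```python
def segment_hits_box(p0, p1, box_min, box_max):
    """Slab test: does the segment from p0 to p1 pass through an AABB?

    p0: viewer (appliance) position, p1: candidate display position,
    box_min/box_max: opposite corners of a physical object's bounding box.
    Returns True when the physical object would occlude the display.
    """
    t_enter, t_exit = 0.0, 1.0
    for a, b, lo, hi in zip(p0, p1, box_min, box_max):
        d = b - a
        if abs(d) < 1e-12:
            if a < lo or a > hi:
                return False  # parallel to this slab and outside it
            continue
        t0, t1 = (lo - a) / d, (hi - a) / d
        if t0 > t1:
            t0, t1 = t1, t0
        t_enter, t_exit = max(t_enter, t0), min(t_exit, t1)
        if t_enter > t_exit:
            return False
    return True

viewer = (0.0, 0.0, 1.6)
candidate = (4.0, 0.0, 1.6)
chair_min, chair_max = (1.5, -0.5, 0.0), (2.5, 0.5, 1.0)
# False: the chair sits below the eye-level sight line, so no occlusion.
print(segment_hits_box(viewer, candidate, chair_min, chair_max))
```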
By way of example, FIG. 43 illustrates a room 4100 with a sensor 4114 separated from the wearable extended reality appliances and a physical object 4124. For example, pixels of images from the sensor 4114 may be analyzed to identify the physical object 4124. Further, a position 4112 for displaying the virtual object 4104 may be determined to be moved to the left of the physical object 4124 so that none of a first display, a second display, and a third display may occlude the physical object 4124. In some embodiments, the first display of the virtual object, the second display of the virtual object, and the third display of the virtual object may be associated with a single version of the virtual object. Additionally, a change introduced by a user of the first wearable extended reality appliance to the virtual object may be detected. Further, the second display and the third display may be updated to reflect the change to the virtual object introduced by the user of the first wearable extended reality appliance. Displays may be associated with a single version if each display presents the same subject matter, even if viewed from differing perspectives. A user may introduce a change by adding material, virtually drawing on or coloring an object, rotating or translating the object, or otherwise modifying the object. When all users are presented with a single version and that single version is altered by one user, the alteration may be displayed to other users through each user's wearable extended reality appliance. For example, a first display of a virtual object (e.g., a virtual display screen), a second display of the virtual object, and a third display of the virtual object may be associated with a single version (e.g., a single view) of the virtual object so that each display is identical for each user, even after one user makes a change to the virtual object. Additionally, a user of a first wearable extended reality appliance may be advancing between virtual slides of a virtual slideshow on the virtual display screen and this change may be detected by a sensor connected to at least one of the first wearable extended reality appliance, a second wearable extended reality appliance, and a third wearable extended reality appliance. In another example, the virtual object may be a virtual display, and the change may include a presentation of text entered by the first user using a physical keyboard. In another example, the change may be introduced by a usage of a marking implement. Further, the second display and the third display may be updated to reflect the new virtual slide of the virtual presentation introduced by the user of the first wearable extended reality appliance. In some embodiments, the first display of the virtual object, the second display of the virtual object, and the third display of the virtual object may be associated with different versions of the virtual object. Additionally, a profile associated with a user of the first wearable extended reality appliance, a profile associated with a user of the second wearable extended reality appliance, and a profile associated with a user of the third wearable extended reality appliance may be obtained. Further, a personalized version of the virtual object based on the profile associated with each user may be determined. In the prior example discussed, a single version of a virtual object was presented to all users. In this example, each user may be presented with a unique version depending on the user's preferences.
For example, some users may have color preferences and those preferences may be carried through to their unique presentations via a lookup of those preferences in a data structure which then causes alteration of the display to correspond to those preferences. Other users may have disabilities that may cause their virtual content to be displayed differently from others. For example, blinking and flashing may be avoided on virtual displays of individuals who have epilepsy to avoid a risk of seizure. Word spacing may change on displays of individuals with dyslexia. These are just examples of profile information that may be stored in a data structure that is accessed to determine how virtual objects will be displayed. Different versions may refer to different details of a virtual object. In other words, different versions may present different points of view of the virtual object. The display may thus be impacted by a profile which may include indications of permission level (e.g., top-secret), accessibility limitations (e.g., confidential information), or preferences related to color scheme or disabilities (e.g., color blind). For example, a first display of a virtual object, a second display of the virtual object, and a third display of the virtual object may be associated with different versions of the virtual object. A profile for each user of a wearable extended reality appliance may be obtained. For example, the first profile may be a top-secret permission level, the second profile may be restricted to non-confidential information, and the third profile may indicate that the third user is color blind. For example, the personalized version of the virtual object may be based on the determined profiles associated with each user. Advantageously, the user with the first profile may see all information presented since he or she has a top-secret permission level. The user with the second profile may see only some or part of the information because he or she is restricted to non-confidential information. The user with the third profile may see the virtual object differently because of a color-blind mode. Some disclosed embodiments may include receiving additional sensor data indicating a change to a status of the third wearable extended reality appliance, while the virtual object is displayed at the determined position through the first wearable extended reality appliance and through the second wearable extended reality appliance. Further, the virtual object may be repositioned based on the determined first location of the first wearable extended reality appliance and the determined second location of the second wearable extended reality appliance. A change to a status may include leaving a room, switching off, switching to a different extended reality environment not including the virtual object, and similar changes to a wearable extended reality appliance. For example, while a virtual object may be displayed at a determined position through a first wearable extended reality appliance and through a second wearable extended reality appliance, additional pixels of images from an image sensor may be received indicating that the third wearable extended reality appliance has switched off. Further, the virtual object may be repositioned based on a determined first location of the first wearable extended reality appliance and a determined second location of the second wearable extended reality appliance.
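By way of illustration only, the profile-driven personalization described above might be sketched as follows in Python, assuming a hypothetical profile dictionary with clearance and accessibility flags; the field names and render settings are placeholders, not part of the disclosed embodiments.

```python
def personalize(virtual_object, profile):
    """Produce a per-user version of a shared virtual object.

    virtual_object: dict with 'public' and 'confidential' content fields.
    profile: dict with 'clearance' ('top-secret' or 'public') and optional
    'color_blind' / 'dyslexia' accessibility flags.
    """
    version = {"content": [virtual_object["public"]]}
    if profile.get("clearance") == "top-secret":
        version["content"].append(virtual_object["confidential"])
    if profile.get("color_blind"):
        version["palette"] = "high-contrast"   # avoid red/green encoding
    if profile.get("dyslexia"):
        version["word_spacing"] = 1.5          # widen spacing for readability
    return version

obj = {"public": "Q3 roadmap", "confidential": "Q3 revenue"}
print(personalize(obj, {"clearance": "top-secret"}))
print(personalize(obj, {"clearance": "public", "color_blind": True}))
```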
In another example, additional pixels of images from the image sensor may be received indicating that the third wearable extended reality appliance is no longer in the room because the user has left the room with the third wearable extended reality appliance. In yet another example, additional pixels of images from the image sensor may be received indicating that the third wearable extended reality appliance is no longer in the room because the virtual user (e.g., avatar) has left the room with the third wearable extended reality appliance. By way of example, FIG. 41 illustrates a virtual object 4104 displayed at a determined position 4112 through a first wearable extended reality appliance and through a second wearable extended reality appliance. For example, while the virtual object 4104 is displayed, additional pixels of images from a sensor 4114 separated from a plurality of wearable extended reality appliances 4102 may be received indicating that a third wearable extended reality appliance is no longer in a room 4100. Further, the virtual object 4104 may be repositioned based on a determined first location 4106 of the first wearable extended reality appliance and the determined second location 4108 of the second wearable extended reality appliance. Some disclosed embodiments may include receiving additional sensor data indicating that a fourth wearable extended reality appliance may be in the room, while the virtual object is displayed at the determined position. Further, other disclosed embodiments may include determining a fourth location of the fourth wearable extended reality appliance, while the virtual object is displayed at the determined position. Further, the virtual object may be repositioned based on the determined first location, the determined second location, the determined third location, and the determined fourth location. For example, while a virtual object is displayed at a determined position, additional pixels of images from an image sensor may be received and analyzed to determine that a fourth wearable extended reality appliance is in a room. Further, the fourth location of the fourth wearable extended reality appliance may be determined and the virtual object may be repositioned in the room based on the determined first location, the determined second location, the determined third location, and the determined fourth location so that the position is in the middle of all the determined locations for a clear view. By way of example, FIG. 43 illustrates a virtual object 4104 displayed at a determined position 4112. For example, while the virtual object 4104 is displayed, additional pixels of images from a sensor 4114 separated from the wearable extended reality appliances may be received and analyzed to determine that a fourth wearable extended reality appliance 4326 is in a room 4100. Further, the fourth location of the fourth wearable extended reality appliance 4326 may be determined and the virtual object 4104 may be repositioned in the room 4100 based on a determined first location 4106, a determined second location 4108, a determined third location 4210, and the determined fourth location so that the position 4112 is in the middle of all the determined locations for a clear view.
Some disclosed embodiments may be executed by the first wearable extended reality appliance, wherein causing the first display may include generating display signals, causing the second display may include transmitting data reflecting the virtual object to the second wearable extended reality appliance, and causing the third display may include transmitting data reflecting the virtual object to the third wearable extended reality appliance. Display signals may include, for example, analog or digital electrical signals that may cause a display device to present content in the form of a virtual or digital representation. The virtual or digital representation may include, for example, one or more still or moving images, text, icons, video, or any combination thereof. The graphical display may be two-dimensional, three-dimensional, holographic, or may include various other types of visual characteristics. The first wearable extended reality appliance may cause one or more analog or digital signals to be generated or transmitted to a display device for presenting the graphical display for viewing by a user. Generate may refer to producing or causing something to arise or come about, or any other production of a thing. To transmit may refer to causing something to pass on from one place or thing to another, broadcasting or sending out something from one place or thing to another, or another way of sending something. In some embodiments, the display device may include a wearable extended reality appliance. For example, the at least one processor may cause one or more analog or digital signals to be generated and transmitted to the display device for presenting a movie, an emoji, a video, a text, or any combination thereof. While some embodiments could rely on a central server to process image signals and update all wearable extended reality appliances sharing a common presentation, in other embodiments, display signal processing may be relegated to one of the extended reality appliances, which may then transmit display signals to the other of the extended reality appliances. Alternatively, multiple wearable extended reality appliances may share image signal processing responsibility and distribute updated display signals to the other appliances. According to another embodiment of the present disclosure, a method for presenting virtual content to multiple viewers may be provided. In some embodiments, the method may be implemented by at least one processor that executes program instructions. The method may include receiving sensor data indicative of a plurality of wearable extended reality appliances located in a room. The method may further include receiving a command to share a virtual object with the plurality of wearable extended reality appliances. The method may additionally include analyzing the sensor data to determine a first location in the room of a first wearable extended reality appliance, a second location in the room of a second wearable extended reality appliance, and a third location in the room of a third wearable extended reality appliance. The method may include determining a position for displaying the virtual object in the room based on the determined first location, the determined second location, and the determined third location. The method may also include causing a first display of the virtual object at the determined position through the first wearable extended reality appliance, the first display being rendered from a first perspective.
The method may include causing a second display of the virtual object at the determined position through the second wearable extended reality appliance, the second display being rendered from a second perspective different from the first perspective. The method may also include causing a third display of the virtual object at the determined position through the third wearable extended reality appliance, the third display being rendered from a third perspective different from the first perspective and the second perspective. FIG. 45 illustrates an exemplary method 4530 for presenting virtual content to multiple viewers. Method 4530 may be performed by one or more processing devices (e.g., 360, 460, or 560) associated with input unit 202 (see FIG. 3), XR unit 204 (see FIG. 4), and/or remote processing unit 208 (see FIG. 5). The steps of the disclosed method 4530 may be modified in any manner, including by reordering steps and/or inserting or deleting steps. Method 4530 may include a step 4532 of receiving sensor data indicative of a plurality of wearable extended reality appliances 4102 located in a room 4100. Method 4530 may include a step 4534 of receiving a command to share a virtual object 4104 with the plurality of wearable extended reality appliances 4102. Method 4530 may include a step 4536 of analyzing the sensor data to determine a first location 4106 in the room 4100 of a first wearable extended reality appliance, a second location 4108 in the room 4100 of a second wearable extended reality appliance, and a third location 4210 in the room 4100 of a third wearable extended reality appliance. Method 4530 may include a step 4538 of determining a position 4112 for displaying the virtual object 4104 in the room 4100 based on the determined first location 4106, the determined second location 4108, and the determined third location 4210. Method 4530 may include a step 4540 of causing a first display of the virtual object 4104 at the determined position 4112 through the first wearable extended reality appliance, the first display being rendered from a first perspective. Method 4530 may include a step 4542 of causing a second display of the virtual object 4104 at the determined position 4112 through the second wearable extended reality appliance, the second display being rendered from a second perspective different from the first perspective. Method 4530 may include a step 4544 of causing a third display of the virtual object 4104 at the determined position 4112 through the third wearable extended reality appliance, the third display being rendered from a third perspective different from the first perspective and the second perspective. FIG. 46 illustrates another exemplary method 4630 for presenting virtual content to multiple viewers. Method 4630 may be performed by one or more processing devices (e.g., 360, 460, or 560) associated with input unit 202 (see FIG. 3), XR unit 204 (see FIG. 4), and/or remote processing unit 208 (see FIG. 5). The steps of the disclosed method 4630 may be modified in any manner, including by reordering steps and/or inserting or deleting steps. Method 4630 may include a step 4532 of receiving sensor data indicative of a plurality of wearable extended reality appliances 4102 located in a room 4100. Method 4630 may include a step 4650 where the sensor data may be received from at least one of a first wearable extended reality appliance, a second wearable extended reality appliance, or a third wearable extended reality appliance. FIG. 47 illustrates another exemplary method 4730 for presenting virtual content to multiple viewers.
Method 4730 may be performed by one or more processing devices (e.g., 360, 460, or 560) associated with input unit 202 (see FIG. 3), XR unit 204 (see FIG. 4), and/or remote processing unit 208 (see FIG. 5). The steps of the disclosed method 4730 may be modified in any manner, including by reordering steps and/or inserting or deleting steps. Method 4730 may include a step 4544 of causing a third display of the virtual object 4104 at the determined position 4112 through the third wearable extended reality appliance, the third display being rendered from a third perspective different from the first perspective and the second perspective. Method 4730 may include a step 4752 of determining a physical characteristic of at least one user of a first wearable extended reality appliance, a second wearable extended reality appliance, or a third wearable extended reality appliance; and wherein determining a position 4112 for displaying the virtual object 4104 may be further based on the physical characteristic of the at least one user. FIG. 48 illustrates another exemplary method 4830 for presenting virtual content to multiple viewers. Method 4830 may be performed by one or more processing devices (e.g., 360, 460, or 560) associated with input unit 202 (see FIG. 3), XR unit 204 (see FIG. 4), and/or remote processing unit 208 (see FIG. 5). The steps of the disclosed method 4830 may be modified in any manner, including by reordering steps and/or inserting or deleting steps. Method 4830 may include a step 4544 of causing a third display of a virtual object 4104 at the determined position 4112 through the third wearable extended reality appliance, the third display being rendered from a third perspective different from the first perspective and the second perspective. Method 4830 may include a step 4854 wherein a first display of the virtual object 4104, a second display of the virtual object 4104, and a third display of the virtual object 4104 may be associated with a single version of the virtual object 4104. Further, step 4854 may include detecting a change to the virtual object 4104 introduced by a user of the first wearable extended reality appliance, and updating the second display and the third display to reflect the change to the virtual object 4104 introduced by the user of the first wearable extended reality appliance. Method 4830 may include a step 4856 wherein the first display of the virtual object 4104, the second display of the virtual object 4104, and the third display of the virtual object 4104 may be associated with different versions of the virtual object 4104. Further, step 4856 may include obtaining a profile associated with a user of the first wearable extended reality appliance, a profile associated with a user of the second wearable extended reality appliance, and a profile associated with a user of the third wearable extended reality appliance, and determining a personalized version of the virtual object 4104 based on the profile associated with each user. Some disclosed embodiments may involve systems, methods, and non-transitory computer readable media configured for making virtual colored markings on objects. Virtual markings may broadly include any visual representation in an extended reality environment that simulates the markings of writing, coloring, or painting implements that exist in the physical world.
Examples of types of virtual markings may include textual markings (e.g., words, sentences), punctuation markings (e.g., dots, commas), format markings (e.g., underlines, highlights), emoji markings (e.g., smileys, symbols), drawing markings (e.g., free hand sketching such as lines, figures, symbols, or any other individual or set of visible traces such as would occur in the physical world with pens, pencils, markers, chalk, or any other writing implement), colorization (such as might occur in the physical world with crayons, markers or paint brushes), and texturizations (such as the addition of surface treatments). The virtual markings may be associated with one or more colors which may be rendered by software on top of one or more objects. The objects on which the virtual markings are drawn may refer to any physical or virtual items having a form which may be observed. For instance, an object may be an extended reality display of a 2D item or 3D item (e.g., a document, an image, a video, a virtual model, a shape, or any other intangible object) or a physical item (e.g., food, furniture, a wall, a landscape, or any other tangible object). Some disclosed embodiments may include receiving an indication of an object. An indication of an object may refer to any data which may provide evidence of the existence of an object. The indication of the object may include any type of data that enables identification of the object. The indication may include a location, form, color, size, distance to the object, material composition, and any other attribute which may describe the object. In some embodiments, the indication of the object may include one or more images of the object. The indication of the object may be received via a communications network, as described in greater detail herein. An indication of an object may be received, for example, via an image sensor that identifies an object in a field of view in the physical world. In another example, an indication of an object may be received from a rendering of an extended reality space that includes a virtual object, from a data-structure of virtual objects in an extended reality space, from an external device controlling the extended reality space, and so forth. By way of example, remote processing unit 208 of FIG. 5 may receive an indication of an object via communications network 214. An indication of an object may be a signal created by remote processing unit 208, XR unit 204, or input unit 202 depicted in FIG. 2 to generate a virtual object or an identification of a physical object. In some embodiments, the object may be a virtual object presented by a wearable extended reality appliance. Additionally or alternatively, the object may be a physical object detectable in an image captured from the physical environment, for example using an image sensor included in the wearable extended reality appliance. By way of example, FIG. 49 illustrates an example of a virtual object 4910 presented to user 100 through wearable extended reality appliance 110 and an example of a physical object 4911. User 100 may draw virtual markings on both objects. FIG. 50 illustrates a view of object 4910 as seen by user 100 through wearable extended reality appliance 110. Some disclosed embodiments may include receiving from an image sensor an image of a hand of an individual holding a physical marking implement. An image sensor may include one or more sensors configured to capture visual information by converting light to image data, as described previously in this disclosure.
A physical marking implement may refer to a tangible instrument that may be utilized by a user to provide inputs to make virtual colored markings on objects. A physical marking implement may take any form which may allow a user to make the virtual colored markings by performing movements with the physical marking implement. For instance, a physical marking implement may be or may resemble a stylus, pen, pencil, crayon, marker, chalk, paint brush, or any other instrument which a user may hold and move to make virtual colored markings on objects. Alternatively, a physical marking implement may take the form of an input device such as a mouse, a keyboard, a joystick, a touchpad or touch screen, or any other input device which may be used to provide an input to create virtual colored markings on objects. By way of example, a processing device may receive from image sensor 472 of XR unit 204 of FIG. 3 an image 5000. As shown in FIG. 50, image 5000 may include a hand 5010 holding a physical marking implement 4920 having a tip 4922, and an object 4910. Some disclosed embodiments may include detecting in the image a color associated with the marking implement. Detecting in the image a color associated with the marking implement may include analyzing image data received from an image sensor to identify a color of at least part of a marking implement being held by a user. For example, if the user is holding a red pen (or an implement configured to resemble a red pen), a processor may detect the color red. In another example, the marking implement may have a colored tip (for example, a blue tip), the image may be analyzed to detect the marking implement using an object detection algorithm, the region of the marking implement corresponding to the tip may be identified in the image using a template matching algorithm, and the values of the pixels in the identified region may be analyzed, for example using a statistical function or a histogram, to determine the color of the tip (for example, blue), thereby detecting that the color associated with the marking implement is the color of the tip (for example, blue). Alternatively, the marking implement may have a unique shape or code thereon that identifies it as being associated with the color red, and that unique shape or code may be recognized through image processing and a lookup in a data structure associating such shapes or codes with colors. In one example, detecting the color associated with the marking implement may first involve identifying the marking implement, which may occur by detection of a human hand holding the physical marking implement, for example using a hand detection algorithm. In some examples, a machine learning model may be trained using training examples to detect colors associated with marking implements from images and/or videos. An example of such a training example may include a sample image and/or a sample video of a sample marking implement, together with a label indicating the color associated with the sample marking implement. The trained machine learning model may be used to analyze the image and determine the color associated with the marking implement. By way of example, a processing device may detect from image 5000 of FIG. 50 a color associated with marking implement 4920, e.g., red. Some examples of marking implements are shown in FIG. 52, represented as marking implements 5220, 5222, 5224, and 5226.
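As a non-limiting illustration of the histogram-style analysis of the tip region described above, the following Python sketch classifies sampled tip pixels by their dominant color channel; the sampled pixel values and the dominant_tip_color helper are hypothetical, and a practical system would operate on the region returned by the template matching step.

```python
from collections import Counter

def dominant_tip_color(tip_pixels):
    """Classify the dominant color of a marking implement's tip region.

    tip_pixels: list of (r, g, b) values sampled from the image region
    identified as the tip. Each pixel is binned to its strongest channel
    and the most common bin wins -- a crude stand-in for the statistical
    or histogram analysis described above.
    """
    def bin_pixel(rgb):
        r, g, b = rgb
        return max((("red", r), ("green", g), ("blue", b)),
                   key=lambda channel: channel[1])[0]

    counts = Counter(bin_pixel(p) for p in tip_pixels)
    return counts.most_common(1)[0][0]

region = [(220, 30, 25), (210, 40, 35), (198, 25, 20), (40, 45, 200)]
print(dominant_tip_color(region))  # 'red'
```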
Some disclosed embodiments may include receiving from the image sensor image data indicative of movement of a tip of the marking implement and locations of the tip. Image data may refer to signals or information received from or derived from the output of an image sensor. Image data may include images, grayscale images, color images, 2D images, 3D images, videos, 2D videos, 3D videos, frames, footages, color identification and/or data derived from or associated with any of the foregoing. A tip of the marking implement may refer to a distal end of a marking implement, for example of a marking implement associated with forming markings (e.g., what is referred to in the physical world as a pen, pencil, or crayon tip, a marking element of a magic marker, or the bristles of a paint brush). Non-physically functional virtual marking implements (i.e., objects that cannot produce physical drawings) may be considered marking implements within this disclosure in the sense that they are capable of causing virtual markings. Thus, the tip of the marking implement may be the end of a marking implement, such as a stylus, which may be used by a user to interact with a location on an image. In some embodiments, the tip of the marking implement may be a separate element which may be selectively attached to and detached from a marking implement. In this example, differing tips may have differing colors and may be interchangeable. Movement of a tip of the marking implement may refer to any motion of the tip of the marking implement through space. Such movements may be detected through analysis of the image data using a visual object tracking algorithm. Locations of the tip may refer to one or more positions of the tip in space (such as a physical space, an extended reality space, etc.) as the tip moves through space or when the tip is stationary. In some embodiments, the tip of the marking implement may be detected as moving in 2D or 3D space. Data indicative of the movement or locations of the tip may refer to any data which may be analyzed to determine the movement or locations of the tip. For instance, the data indicative of the movement or locations of the tip may include positions, angles, distances, pixel locations, orientations, scales, speeds, accelerations, and any other property of the tip which may indicate a movement or location of the tip in 2D or 3D. In one example, the data indicative of the movement or locations of the tip may include a time series of locations or movement vectors associated with the tip. In another example, the data indicative of the movement or locations of the tip may include a group of locations or a group of movement vectors associated with the tip that are not sorted by time. In yet another example, the data indicative of the movement or locations of the tip may include a curved line associated with the movement or locations of the tip. By way of example, a processing device may receive from image sensor 472 of FIG. 3 image data indicative of movement of tip 4922 of marking implement 4920 and locations of tip 4922, as shown in FIGS. 51A to 51D. For instance, the processing device may receive a location 5102 of tip 4922 in image 5100, a location 5112 of tip 4922 in image 5110, a location 5122 of tip 4922 in image 5120, and a location 5132 of tip 4922 in image 5130. The processing device may also receive images from in between each of images 5100, 5110, 5120, and 5130 in the series of events.
Additionally, the processing device may receive data indicating the movement of tip 4922 based on images 5100, 5110, 5120, and 5130. Some disclosed embodiments may include determining from the image data when the locations of the tip correspond to locations on the object. In some examples, the locations of the tip may correspond to locations on the object when, while viewed from a particular distance and orientation, the tip of the physical marking implement may appear to hover directly over locations on the object or when the position of the tip of the physical marking implement overlaps or coincides with a position of a point on the object. For example, a virtual object may be presented in a field of view of a wearable extended reality appliance, and the wearer may virtually color or mark up the virtual object. A processor may detect the coincidence of the tip of the marking implement with a location on the virtual object. Similarly, in the case of a physical object, the processor may detect in a similar way a coincidence of the tip of the marking implement with the physical object. In some embodiments, determining from the image data when the locations of the tip correspond to locations on the object may include analyzing the image data using any of the methods described above. By way of example, a processing device may determine from the image data when the locations of tip 4922 correspond to locations on object 4910. For instance, in FIGS. 51A to 51D, a processing device may determine from images 5100, 5110, 5120, and 5130 that locations 5102, 5112, and 5122 of tip 4922 correspond to locations on object 4910, whereas location 5132 of tip 4922 does not correspond to a location on object 4910. In some examples, a distance of the determined locations of the tip from the locations on the object may be determined, for example by measuring the distance in the image data, by projecting a location of the tip on a 3D model of the object, and so forth. Further, when the distance is below a selected threshold, it may be determined that the location of the tip corresponds to the locations on the object, and when the distance is above the selected threshold, it may be determined that the location of the tip does not correspond to the locations on the object. In some examples, a machine learning model may be trained using training examples to determine when locations of tips of marking implements correspond to locations on objects. An example of such a training example may include a sample image and/or a sample video frame of a sample marking implement and a sample object, together with a label indicating whether the location of the tip of the sample marking implement corresponds to locations of the sample object. The trained machine learning model may be used to analyze the image data and determine when the locations of the tip correspond to locations on the object. Some disclosed embodiments may include generating, in the detected color, virtual markings on the object at the corresponding locations. The virtual markings may be rendered on top of an object at one or more locations at which a user has performed a movement of a physical marking implement. In some embodiments, generating the virtual markings in the detected color may include rendering the virtual markings in the color corresponding to the physical marking implement onto a virtual view of the object which may be observed by one or more users.
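The distance-thresholding scheme described above may be illustrated with the following Python sketch, which treats the object as a set of sampled 3-D surface points and reports whether the tracked tip falls within a selected threshold; the 2-centimeter threshold and the sample points are arbitrary placeholders, not disclosed values.

```python
import math

def tip_touches_object(tip_location, object_points, threshold=0.02):
    """Decide whether the tip location corresponds to a location on the object.

    tip_location: (x, y, z) of the tracked tip; object_points: sampled
    (x, y, z) points on the object's surface (e.g., from a 3-D model);
    threshold: distance in meters below which the tip counts as "on" the
    object, per the thresholding described above. Returns the decision
    and the nearest distance found.
    """
    nearest = min(math.dist(tip_location, p) for p in object_points)
    return nearest <= threshold, nearest

surface = [(0.0, 0.0, 0.0), (0.1, 0.0, 0.0), (0.1, 0.1, 0.0)]
print(tip_touches_object((0.1, 0.0, 0.01), surface))  # within threshold -> True
print(tip_touches_object((0.5, 0.5, 0.4), surface))   # far from surface -> False
```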
By way of example, a processing device may generate, in the detected color, virtual markings on object 4910 at the corresponding locations. For instance, a processing device may generate, in red (the color associated with marking implement 4920), virtual marking 5134 on object 4910, which may correspond to locations 5102, 5112, and 5122 of tip 4922 on object 4910. In some examples, generating, in the detected color, virtual markings on the object at the corresponding locations may include adding the virtual markings to an extended reality environment or to a virtual object in the extended reality environment. In some examples, the virtual markings may be docked to the object in the extended reality environment, and may therefore move with the object when the object is moving. Some disclosed embodiments may include presenting to the individual holding the marking implement the virtual markings via a wearable extended reality appliance. A wearable extended reality appliance may be a wearable device, such as a head-mounted device, for example, smart glasses, smart contact lenses, headsets or any other device worn by a human for purposes of presenting an extended reality to the human, as described elsewhere in this disclosure. Presenting to the individual holding the marking implement the virtual markings via a wearable extended reality appliance may refer to displaying the virtual markings through the wearable extended reality appliance such that the individual, who is looking at the object through the wearable extended reality appliance, may see the virtual markings. Thus, a user may virtually color or annotate a virtual or physical object in the user's field of view. Colorization of a physical object may involve image manipulation to present to the user a colorized overlay corresponding to the movements of the tip of the marking implement on the physical object. By way of example, virtual marking 5134 of FIGS. 51A to 51D may be presented to user 100 holding marking implement 4920 via wearable extended reality appliance 110 of FIG. 49. As another example, virtual markings 5534 and 5536 may be displayed on extended reality display 5500 to user 100 holding marking implement 4920 via a wearable extended reality appliance such as wearable extended reality appliance 110. Some disclosed embodiments may include presenting the virtual markings via a wearable extended reality appliance to a second individual located in proximity to the individual holding the marking implement. A second individual located in proximity may refer to two individuals being separated by a particular distance or a threshold distance. In one embodiment, the second individual is considered to be located in proximity to the individual holding the marking implement when their wearable extended reality appliances may communicate with each other. For example, an individual located in proximity to the individual holding the marking implement may be within 1 meter, 2 meters, 3 meters, 4 meters, 5 meters, 10 meters, 20 meters, 50 meters, or any other appropriate distance from the individual holding the marking implement. In other embodiments, the distances may be much greater, such that the individuals may be in two remote locations.
Regardless, when wearable extended reality appliances are viewing the same space and one user marks up a virtual or physical object, those markings detected by the image sensor of the wearable extended reality appliance associated with the individual doing the marking may be transmitted to the other individual, such that the same markings appear on the display of the other individual. By way of example, FIG. 52 illustrates individual 5200 and user 100 spaced apart by a distance 5240. Second individual 5200 may be in proximity to user 100 because distance 5240 may be less than the threshold distance (e.g., 1 meter). As an example, virtual markings 5234 and 5236 may be presented to user 100 via wearable extended reality appliance 110 and to individual 5200 via wearable extended reality appliance 5202. Some embodiments may include receiving a second indication that the second individual performed an action for modifying a virtual representation of the object. An indication of an action may include any notification that an action took place or is taking place, the action being a manipulation of the marking implement. For example, the notification may be a mirror display of the marking implement on displays of individuals who are part of a group viewing an object that is in the process of modification. Alternatively, some other indicator such as an icon or notice may appear on others' displays indicating that a revision by the marking element is either in progress or has occurred. For instance, a second individual may point to locations on the object to create new virtual markings or may wave their hands to erase existing virtual markings on the object. Those indications might be presented to others. Some embodiments may involve causing additional virtual markings on the object that correspond to the action in response to the received second indication. The additional virtual markings may be modifications to the virtual representation of the object that may affect only the virtual representation of the second individual, or may affect the virtual representations of some or all of the individuals viewing the object. By way of example, as illustrated in FIG. 52, a virtual marking 5234 may be presented via wearable extended reality appliance 5202 to a second individual 5200 located in proximity to user 100 holding marking implement 4920. In some embodiments, second individual 5200 may perform an action for modifying the virtual representation of object 4910 and send a second indication to a processing device via a network. In response to the second indication, the processing device may cause additional virtual markings to appear on object 4910 corresponding to the performed action. Additionally or alternatively, the action may cause the processing device to erase existing virtual markings such as virtual marking 5234 from the virtual representation of object 4910. Some disclosed embodiments may include presenting the virtual markings together with a virtual presentation of the object via wearable extended reality appliances to a plurality of individuals located remotely from the individual holding the marking implement. A plurality of individuals located remotely from the individual holding the marking implement may refer to two or more individuals being physically separated from the individual, for example, in different rooms, buildings, cities, countries, or in the same room but separated by a physical partition.
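As a minimal illustration of the proximity test described above, the following Python sketch compares the distance between two wearers' positions to a selected threshold; the 5-meter radius is just one of the example distances listed above, and the function name is hypothetical.

```python
import math

def should_share_markings(pos_a, pos_b, threshold_m=5.0):
    """Share virtual markings when two appliance wearers are in proximity.

    pos_a, pos_b: (x, y) positions of two wearers; threshold_m: the
    proximity radius (any of the distances listed above could be used).
    """
    return math.dist(pos_a, pos_b) <= threshold_m

print(should_share_markings((0, 0), (3, 4)))    # True: exactly 5.0 m apart
print(should_share_markings((0, 0), (30, 40)))  # False: 50 m apart
```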
For instance, a plurality of individuals may be presented with the virtual markings and the virtual presentation of the object while being in a different location from the object and the individual holding the physical marking implement. As an example, the individual holding the physical marking implement and the plurality of individuals may be separated by a distance of 5 meters, 10 meters, 100 meters, 1 kilometer, 10 kilometers, 100 kilometers, 1000 kilometers, or any other distance which would mean the individual holding the marking implement and the plurality of individuals are physically separated. The plurality of individuals may each view the virtual markings and the virtual presentation of the object via a wearable extended reality appliance, or via a screen which the plurality of individuals may view together. As an example, a university professor may generate virtual markings on an object from an office or classroom, and students in their homes may be presented with the virtual markings together with a virtual presentation of the object. In one example, the object may be a virtual object, and the virtual presentation of the object may be based on a model of the virtual object. In another example, the object may be a physical object, and the virtual presentation of the object may be based on images of the object captured using the image sensor. Some disclosed embodiments may include receiving a second indication that at least one of the plurality of individuals performed an action for modifying the virtual representation of the object. In some embodiments, one or more individuals of the plurality of individuals discussed above may use their own marking implements to make changes to the object or may otherwise take an action signaling that the virtual representation of the object is to be modified. For instance, the one or more individuals may wave their hands to erase existing virtual markings on the object or may use their own marking implements to augment, modify, or erase the markings already added to the object. Additionally or alternatively, the one or more individuals may perform gestures or use their own marking implements to create new virtual markings on the object. In one example, receiving the second indication may include at least one of reading the second indication from memory, receiving the second indication from an external device (for example, from an external device associated with the at least one of the plurality of individuals, from an external device controlling the extended reality environment, etc.), determining the second indication by analyzing sensor data (such as image data captured using an image sensor, gesture data captured using a haptic glove, etc.), and so forth. Some embodiments may involve causing removal of at least part of the virtual markings on the object in response to the received second indication. For example, the modifications to the virtual representation of the object may affect only the virtual representation associated with the individual performing the action, or may affect the virtual representations of a part or all of the individuals viewing the object, including the individual holding the physical marking implement. By way of example, virtual marking 5134 of FIG. 53 may be presented via wearable extended reality appliances 5310 to a plurality of individuals 5300 located remotely from user 100 holding marking implement 4920. 
For instance, user 100 may be located in a room 5320 while the plurality of individuals 5300 may be located in a different room 5330, which may be anywhere in the world where communications network 214 may still reach wearable extended reality appliances 5310. As an example, an individual 5302 from among individuals 5300 may perform an action for modifying the virtual representation of object 4910, and a second indication may be sent to a processing device via a network. In response to the second indication, the processing device may cause the removal of at least part of virtual marking 5134 on object 4910. In some embodiments, the modifications to the virtual representation of object 4910 may affect only the virtual representation associated with individual 5302, may affect the virtual representation associated with some or all individuals 5300, may affect the virtual representation associated with user 100, or may affect the virtual representation associated with user 100 and the virtual representation associated with some or all individuals 5300. Some disclosed embodiments may include presenting the virtual markings via a wearable extended reality appliance to a second individual while the second individual is located in an environment of a physical object and the individual holding the marking implement is no longer in the environment of the object. In one example, an environment of the object may refer to a space in which the object may reside (e.g., a room, a building, an exterior, or any other area where the object may be located). In another example, an environment of the object may refer to all positions with a line-of-sight to at least part of the object. For instance, a view of the object may be partially obscured by a wall for an individual present in a different room than the room in which the object may be located. However, the individual may still be considered to be in the environment of the object because they have a line-of-sight to at least part of the object. In some embodiments, a first individual holding a physical marking implement may make virtual markings on an object and then leave the environment of the object. Upon leaving the environment of the object, the first individual may no longer be in the same space as the object and/or may not have a line-of-sight to any portion of the object. A second individual may enter the environment of the object and may still view the virtual markings that were made by the first individual previously. By way of example, as illustrated in FIG. 54, an object may be a physical object, such as a table 5410. In this example, user 100 may make a virtual marking 5412 on table 5410 using marking implement 4920 and may leave room 5430 to go to room 5420, which is outside of the environment of table 5410. Then, a second individual 5400 may enter room 5430 (and thus the environment of table 5410) and may view virtual marking 5412 on table 5410 via wearable extended reality appliance 5440. Some disclosed embodiments may include determining from the image data an intent of the individual to delete at least some of the virtual markings and removing the at least some of the virtual markings from the object. An intent to delete at least some of the virtual markings may be determined by analyzing received image data to determine that an individual is performing a gesture or using a marking implement in a manner associated with removing virtual markings. For instance, an individual may gesture in a certain way signaling a removal of markings (e.g.
by virtually touching the markings to be deleted and either before or after pressing a physical or virtual button signaling deletion) or by using an opposite end of the marking implement as a virtual eraser. Image data may be analyzed to determine which end of the marking implement is in use (the tip or the eraser), and the functionality associated with the end in use may thereby be implemented. These are just a few examples. Other actions or combinations of actions may be recognized by at least one processor analyzing image data to cause deletions. By way of example, a processing device may determine from image 5130 of FIG. 51D an intent of user 100 to delete at least some of virtual marking 5134. The processing device may determine such intent when, for example, user 100 performs a predetermined action or utilizes an opposing end 5138 of marking implement 4920 to indicate portions of virtual marking 5134 that user 100 wishes to erase. The processing device may then remove the indicated portions and render only the remaining portions of virtual marking 5134. Some disclosed embodiments may include determining from the image data an intent of the individual to change the color of the virtual markings, presenting a virtual color palette via a wearable extended reality appliance to the individual holding the marking implement, determining a selection of a substitute color from the virtual color palette, and generating additional virtual markings on the object in the selected substitute color. Color changes may occur in a manner similar to deletions discussed above, through the use of a marking implement of a differing color detected in image data. Or, a virtual color palette may be provided, and an individual may gesture (e.g., point) to a new color, which in response may change the color of the marking implement. Similarly, pointing toward a virtual eraser may change the function of the marking implement from a marker to an eraser. An individual may indicate an intent to change the color of the virtual markings by performing a gesture with a hand or with the physical marking implement. A gesture may include a conscious or unconscious movement of a hand of an individual which may be detected by a wearable extended reality appliance. For example, a gesture may include pointing, rotating, swiping, grabbing, tapping, or otherwise moving a hand or a finger with the intent to perform an action on a particular part of an extended reality display, an object, or the space visible through the wearable extended reality appliance. A virtual color palette may refer to a user interface enabling a selection of a variety of colors. The virtual color palette may be presented to the individual virtually via the wearable extended reality appliance. The virtual color palette may be virtually presented as a bounded area with smaller areas each representing a different range or shade of color. The virtual color palette may be presented to an individual as an overlay on top of objects visible through the wearable extended reality appliance or displayed via an extended reality display presented to the individual. The individual may perform a gesture with a body part or with the physical marking implement to indicate selection of a substitute color from the virtual color palette. For instance, a specific gesture such as swiping, tapping, double tapping, selecting, pointing, or other appropriate gesture may be recognized as a selection of a substitute color. 
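The tip-versus-eraser decision described above amounts to classifying which end of the implement is in use and dispatching to a draw or erase action. A minimal sketch follows; `classify_end_in_use` stands in for the image-analysis step and is a placeholder, not an API from this disclosure.

```python
# A sketch of dispatching on which end of the marking implement is in use.
def classify_end_in_use(image_data):
    # Placeholder for the image analysis described above: a real
    # implementation would detect whether the tip or the opposite
    # (eraser) end of the implement overlaps the object.
    return "tip"

def apply_implement_action(image_data, markings, location, color):
    end = classify_end_in_use(image_data)  # "tip" or "eraser" (assumed labels)
    if end == "tip":
        markings.append({"location": location, "color": color})  # draw
    elif end == "eraser":
        # erase any marking recorded at the indicated location
        markings[:] = [m for m in markings if m["location"] != location]
    return markings

print(apply_implement_action(None, [], (0.1, 0.2), "red"))
# [{'location': (0.1, 0.2), 'color': 'red'}]
```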
Once the substitute color has been selected, any further virtual markings made on the object by the individual will be in the selected substitute color until the individual selects a new color. In some embodiments, the individual may select to have a part of or all of the existing virtual markings change color to a substitute color. By way of example, a processing device may determine from image data an intent of user 100 to change the color of the virtual markings. The processing device may then present a virtual color palette 5510 of FIG. 55 via wearable extended reality appliance 110 to user 100 holding marking implement 4920. Then, the processing device may determine a selection of a substitute color from virtual color palette 5510 when, for instance, user 100 points to a substitute color. The processing device may generate additional virtual markings on object 5520 in the selected substitute color. In some embodiments, user 100 may perform an action (e.g., a movement of the marking implement or an appropriate gesture to signal that a virtual representation of the object is to be modified, as discussed above) such that existing virtual marking 5534 may change color to the substitute color. Some disclosed embodiments may include detecting a type of the marking implement, accessing a repository of types of marking implements and corresponding mark characteristics associated therewith, and generating the virtual markings with a specific mark characteristic corresponding to the detected type of the marking implement. The type of the marking implement may be detected by analyzing the image data (i.e., data which may be used to represent graphic or pictorial data, as discussed above) by any of the methods described herein. For example, a data structure may hold characteristics of differing marking implements (e.g., pen, pencil, crayon, paint brush, magic marker, etc.). Image data may be compared with the characteristics stored in the data structure to identify the marking implement type (and perhaps also an associated color). When a match is determined, the marking characteristics associated with that marking implement may be retrieved from the data structure and applied. Thus, the visible characteristics of a virtual line (e.g., quality, consistency, shape) drawn by a pencil may differ in appearance from the characteristics of a line drawn by a paint brush, pen, crayon, or magic marker. As another example, the type of the marking implement may be input by a user. A repository of types of marking implements and corresponding mark characteristics associated therewith may refer to a data store which may store information on different types of marking implements and their corresponding characteristics. In some embodiments, the types of marking implements may be associated with different widths of the virtual markings on the object. For example, a width of a pen stroke may be narrower than a width of a brush stroke. As other examples, the types of marking implements may be associated with different fonts, font styles, sizes, underline styles, and any other effects which may affect the appearance of a virtual marking. Some embodiments may include displaying a virtual representation of the detected type of the marking implement in virtual reality. For instance, an individual may be able to confirm that the detected type of the marking implement is indeed the marking implement they are holding. 
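Such a repository can be as simple as a lookup table keyed by implement type. The sketch below illustrates the idea; the specific widths and textures are invented for the example and are not taken from this disclosure.

```python
# Illustrative repository mapping implement types to mark characteristics.
MARK_CHARACTERISTICS = {
    "pen":          {"width_px": 2,  "texture": "solid"},
    "pencil":       {"width_px": 1,  "texture": "grainy"},
    "crayon":       {"width_px": 6,  "texture": "waxy"},
    "paint brush":  {"width_px": 12, "texture": "feathered"},
    "magic marker": {"width_px": 8,  "texture": "solid"},
}

def characteristics_for(implement_type):
    # Fall back to a generic style when the detected type is unknown.
    return MARK_CHARACTERISTICS.get(implement_type,
                                    {"width_px": 3, "texture": "solid"})

print(characteristics_for("paint brush"))  # {'width_px': 12, 'texture': 'feathered'}
```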
Generating the virtual markings with a specific mark characteristic corresponding to the detected type of the marking implement may include creating, for presentation or other display to an individual, virtual markings having properties associated with the marking implement used to create them. For instance, a virtual marking created with a marking implement having a predetermined line width and color associated with it may be generated to have the predetermined line width and color. By way of example, three types of marking implements 5222, 5224, and 5226 are illustrated in FIG. 52. In some embodiments, marking implements 5222, 5224, and 5226 may generate virtual markings having different widths, fonts, font styles, sizes, underline styles, and any other effects which may affect the appearance of a virtual marking. For example, a processing device may detect marking implement 5222 from image 5200 by analyzing image 5200 by any of the methods discussed herein. In this example, the processing device may access a repository of types of marking implements and corresponding mark characteristics associated therewith to determine that the width of the virtual marks generated by marking implement 5222 is "thin." Accordingly, the processing device may generate the virtual markings with the specific mark characteristic (e.g., "thin") and/or color corresponding to marking implement 5222. In some embodiments, wearable extended reality appliance 110 may display a virtual representation 5530 of the detected type of the marking implement in virtual reality to user 100. Some disclosed embodiments may include interpreting three-dimensional movements of the marking implement as two-dimensional inputs. While a user may move a marking implement in three-dimensional space, the output may be presented in two dimensions. In such an instance, the processor converts the three-dimensional detected movements of the marking implement into two-dimensional representations. In some embodiments, the two-dimensional virtual object may include a document, and the three-dimensional movements of the marking implement may be interpreted as handwriting virtual markings on the document in the detected color. By way of example, a processing device may interpret three-dimensional movements of marking implement 4920 of FIG. 55 as two-dimensional inputs to generate virtual marking 5534 including handwriting on object 5520. Some disclosed embodiments may include interpreting three-dimensional movements of the marking implement as three-dimensional inputs. If a user is marking on a three-dimensional object, the processor may interpret the three-dimensional coordinates of the markings and translate them onto the corresponding three-dimensional representation of the object. For example, the three-dimensional virtual object may include a virtual representation of a physical object, and the virtual markings may include markings in the detected color on two traversed surfaces of the virtual representation of the physical object. The three-dimensional movements of the marking implement may be interpreted as three-dimensional inputs by interpreting the motion of the marking implement in a space with more than two coordinate axes. For instance, the two traversed surfaces of the virtual representation of the physical object may be on two planes which are at different angles to each other. That is, a three-dimensional virtual object may have numerous surfaces, some of which may be on different planes. 
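Interpreting 3D tip movements as 2D inputs can be modeled as projecting each 3D tip location onto the document's plane. The sketch below assumes the plane is known through an origin point and two orthonormal in-plane axes; those inputs, and the function name, are assumptions made for illustration.

```python
import numpy as np

# A minimal sketch: map a 3D tip location to 2D document coordinates by
# projecting onto the document plane's two in-plane axes. `origin` is a
# point on the plane; `u` and `v` are orthonormal in-plane direction vectors.
def project_to_document(tip_xyz, origin, u, v):
    d = np.asarray(tip_xyz, dtype=float) - np.asarray(origin, dtype=float)
    return float(np.dot(d, u)), float(np.dot(d, v))

origin = np.array([0.0, 0.0, 1.0])   # document plane passes through z = 1
u = np.array([1.0, 0.0, 0.0])        # document x-axis
v = np.array([0.0, 1.0, 0.0])        # document y-axis
print(project_to_document([0.3, 0.2, 1.05], origin, u, v))  # (0.3, 0.2)
```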
A traversed surface may be one with which the tip of the marking implement overlaps. By way of example, object 4910 of FIGS. 51A to 51D may be a three-dimensional object, and a processing device may interpret three-dimensional movements of marking implement 4920 as three-dimensional inputs to generate virtual markings 5134 and 5136, which may be on two different surfaces of the virtual representation of object 4910. Some disclosed embodiments may include determining from the image data that the individual replaced the physical marking implement with a second physical marking implement associated with a second color, detecting in the image data the second color associated with the second marking implement, and generating additional virtual markings on the object in the second color. For instance, to change to a second color, an individual may simply switch to a second physical marking implement, which may be determined by analyzing the image data via any of the methods described herein. The color of the second physical marking implement may be automatically detected such that the individual may not need to perform any additional actions. In one embodiment, a second physical marking implement may be associated with a second color via data stored in a repository, which may be accessed, once a specific physical marking implement is detected, to determine the specific color associated with that implement. In some embodiments, the additional virtual markings in the second color may overlay the previous virtual markings associated with a first color. By way of example, instead of selecting a new color as discussed with respect to virtual color palette 5510 of FIG. 55, user 100 may simply switch to a second physical marking implement 5222, 5224, or 5226 as shown in image 5200 of FIG. 52. A processing device may detect the second color associated with second physical marking implement 5226 and generate additional virtual marking 5236 on object 5210 in the second color. In some embodiments, additional virtual marking 5236 in the second color may overlay virtual marking 5234 associated with the first color. Some disclosed embodiments may include determining from the image data that the individual holds a second physical marking implement associated with a second color in addition to the physical marking implement, detecting in the image data the second color associated with the second marking implement, determining from the image data locations of a tip of the second marking implement, and generating additional virtual markings on the object in the second color based on the determined locations of the tip of the second marking implement. In some embodiments, any number of additional physical marking implements may be held by an individual and detected by analyzing the image data via any of the methods described herein, such that the virtual markings made by each physical marking implement retain the characteristics of the physical marking implement which performed the action to make the virtual marking. 
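The color-detection step can be approximated by averaging pixel values in a region of the image around the detected implement. The sketch below uses OpenCV's `cv2.mean` for this; the bounding box is assumed to come from a prior detection step, and the synthetic frame exists only to make the example runnable.

```python
import cv2
import numpy as np

# A sketch of detecting a marking implement's color: average the pixels
# inside a bounding box around the implement (the box is assumed to come
# from a prior detection step, e.g., an object detector).
def detect_implement_color(image_bgr, box):
    x, y, w, h = box
    roi = image_bgr[y:y + h, x:x + w]
    b, g, r = cv2.mean(roi)[:3]  # cv2.mean returns a 4-tuple (B, G, R, alpha)
    return int(r), int(g), int(b)

frame = np.zeros((100, 100, 3), dtype=np.uint8)
frame[20:40, 20:40] = (0, 0, 255)  # a red patch standing in for the implement
print(detect_implement_color(frame, (20, 20, 20, 20)))  # (255, 0, 0)
```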
By way of example, a processing device may determine that user 100 may hold an additional physical marking implement 5222, 5224, and/or 5226 associated with an additional color in addition to physical marking implement 5220, may detect the additional color associated with additional physical marking implement 5222, 5224, and/or 5226, may determine locations of a tip of additional marking implement 5222, 5224, and/or 5226, and may generate additional virtual markings on object 5210 in the additional color based on the determined locations of the tip of additional marking implement 5222, 5224, and/or 5226. In some examples, contextual environmental data corresponding to a selected time period may be determined. For example, images and/or videos (such as the image data) may be analyzed using a person detection algorithm to determine the presence of one or more additional people (e.g., besides the individual holding the physical marking implement) at the selected time period, may be analyzed using a face recognition algorithm to determine the identity of the one or more additional people, may be analyzed using a visual action recognition algorithm to determine one or more actions performed by the one or more additional people at the selected time period, may be analyzed using a visual event detection algorithm to detect one or more events occurring at the selected time period, may be analyzed using a scene recognition algorithm to determine a type of the physical environment (such as outdoor, indoor, office, meeting room, home, etc.), and so forth. In one example, the contextual environmental data corresponding to the selected time period may include at least one of the one or more additional people, the identity of the one or more additional people, the one or more actions, the one or more events, or the type of the physical environment. In another example, audio captured from the environment of the individual holding the physical marking implement may be analyzed using a speech recognition algorithm to determine a topic of a conversation taking place at the selected time period, may be analyzed using a speaker diarization algorithm to identify contributors to the conversation at the selected time period, and so forth. In one example, the contextual environmental data corresponding to the selected time period may include at least one of the determined topic of the conversation or the identified contributors. Some disclosed embodiments may involve introducing changes or differentiators to different portions of the virtual markings that are associated with different time periods and/or different contextual environmental data. For example, portions of the virtual marking associated with different time periods corresponding to the creation of the portion of the virtual marking may have a differing appearance (e.g., shading). In some examples, a data structure may associate different portions of the virtual markings with determined contextual environmental data corresponding to the time period associated with the portions of the virtual markings. In one example, a user may provide (for example, via a user interface) an input indicative of specific contextual environmental data or of a group of different contextual environmental data instances. In other examples, such input may be received from an external device, from a memory unit, from an automated process, and so forth. 
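One way to realize such a data structure is an index that stores, for each portion of the markings, its creation time and the contextual data determined for that period, and that can be filtered by a context query. The sketch below is illustrative only; the entries and field names are invented.

```python
# An illustrative index associating marking portions with creation times and
# contextual environmental data. Entries and field names are invented.
context_index = [
    {"portion_id": 1, "time": "2023-01-05T10:02",
     "context": {"topic": "budget", "environment": "meeting room"}},
    {"portion_id": 2, "time": "2023-01-05T10:30",
     "context": {"topic": "design", "environment": "meeting room"}},
]

def portions_matching(context_query):
    # Return the portions whose context contains all queried key/value pairs.
    return [entry["portion_id"] for entry in context_index
            if all(entry["context"].get(k) == v for k, v in context_query.items())]

print(portions_matching({"topic": "design"}))  # [2]
```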
The data structure may be searched based on the input to identify the portions of the virtual markings corresponding to the specific contextual environmental data or to the group of different contextual environmental data instances. The identified portions of the virtual markings may be provided, for example, to the user that provided the input, to a different user, to an external device, to a memory unit, for presentation in an extended reality environment, and so forth. FIG. 56 provides a flowchart of an example method 5600 for making virtual colored markings on objects, executed by a processing device of system 200 as illustrated in FIG. 2. The processing device of system 200 may include a processor within a mobile communications device (e.g., a mobile communications device 206), a processor within a server (e.g., server 210), a processor within a wearable extended reality appliance (e.g., wearable extended reality appliance 110), or a processor within an input device associated with wearable extended reality appliance 110 (e.g., keyboard 104). It will be readily appreciated that various implementations are possible and that any combination of components or devices may be utilized to implement the exemplary method. It will also be readily appreciated that the illustrated method can be altered to modify the order of steps, delete steps, or further include additional steps, such as steps directed to optional embodiments. Method 5600 may include a step 5602 of receiving an indication of an object. Method 5600 may further include a step 5604 of receiving from an image sensor an image of a hand of an individual holding a physical marking implement. Method 5600 may further include a step 5606 of detecting in the image a color associated with the marking implement. Method 5600 may further include a step 5608 of receiving from the image sensor image data indicative of movement of a tip of the marking implement and locations of the tip. Method 5600 may further include a step 5610 of determining from the image data when the locations of the tip correspond to locations on the object. Method 5600 may further include a step 5612 of generating, in the detected color, virtual markings on the object at the corresponding locations. Details and examples for the different steps of method 5600 are described above. In order for multiple users of wearable extended reality appliances to commonly view virtual objects, aspects of this disclosure describe how to develop an on-the-fly coordinate system that may be shared by users. The coordinate system may be created by displaying a code on a mobile device. When the extended reality appliances scan the code, the wearable extended reality appliances share a common reference point and may use that reference point to generate a shared common coordinate system. Additionally, when the mobile device moves, the displayed code may be changed accordingly. Some disclosed embodiments may involve enabling wearable extended reality appliances to share virtual content. Sharing virtual content may refer to the distribution of text, photos, videos, links, or any other information across different displays of wearable extended reality appliances. For example, a wearable extended reality appliance may display virtual content in an extended reality environment. A second wearable extended reality appliance may not display any virtual content. 
The first wearable extended reality appliance may share its display of virtual content with the second wearable extended reality appliance in order for the second wearable extended reality appliance to display the same content as the first wearable extended reality appliance. In some examples, sharing virtual content may include the presentation of the same virtual content by a plurality of wearable extended reality appliances, for example in a same or similar position and orientation in an extended reality environment. Some disclosed embodiments may involve generating a visual code reflecting a first physical position of a mobile device, the visual code being configured to be read by a plurality of wearable extended reality appliances. The term “visual code” may include, for example, a machine-readable code consisting of an array of black and white squares, typically used for storing the address of a webpage or other information. Additionally or alternatively, the visual code may include one or more of numbers, letters, shapes, images, colors, or any other identifier that may be detected by an image sensor. For example, the visual code may be a QR code, Data Matrix code, bar code, alphanumeric code, or any other type of code. The visual code may be read by a plurality of wearable extended reality appliances using a device for recording visual images that may be incorporated within or attached to the wearable extended reality appliances. The device for recording visual images may be a camera, scanner, recorder, or any other type of recording device. The visual code may be displayed on a mobile device, such as a cell phone, tablet, dedicated hardware, or any other physical device capable of code display. In some examples, the information contained in the visual code may represent a physical position of a mobile device in a physical space. A physical position may be the place where an item is located. In some examples, the information contained in the visual code may represent a physical position of a presentation of the visual code in a physical space. For example, for a first presentation of the visual code at a first position on a display screen of the mobile device and a second presentation of the visual code at a second position (different from the first position) on the display screen of the mobile device while the mobile device is at the same position and orientation in the physical space, the first presentation and the second presentation may have different physical positions in the physical space, and the visual code may include different information to represent that. In another example, different positions of the presentation of the visual code on the display screen may compensate for different positions and/or orientations of the mobile device, so that the physical position of the presentation of the visual code is the same, and the visual code may include the same information in the different presentations to represent that. In some examples, the information contained in the visual code may represent a physical orientation of a mobile device in a physical space. A physical orientation may be the direction of an item in a physical space. In some examples, the information contained in the visual code may represent a physical orientation of a presentation of the visual code in a physical space. 
For example, for a first presentation of the visual code at a first orientation on a display screen of the mobile device and a second presentation of the visual code at a second orientation (different from the first orientation) on the display screen of the mobile device while the mobile device is at the same position and orientation in the physical space, the first presentation and the second presentation may have different physical orientations in the physical space, and the visual code may include different information to represent that. In another example, different orientations of the presentation of the visual code on the display screen may compensate for different positions and/or orientations of the mobile device, so that the physical orientation of the presentation of the visual code is the same, and the visual code may include the same information in the different presentations to represent that. A mobile device may be located in many different environments. For example, a mobile device may be located in an office environment, home environment, public space, or any other location. In one exemplary use case, a mobile phone (for example on a conference table) may display the visual code, and a group of individuals, each wearing an extended reality device, may capture an image of the visual code, providing a common reference point for the group. In some embodiments, the mobile device may be a smartphone paired with one of the plurality of wearable extended reality appliances. Pairing may involve creating a link between computing devices to allow communications between the devices. Paired devices may be able to transmit information between each other. For example, a smartphone may be paired with one of the plurality of wearable extended reality appliances so that information may be shared between the smartphone and the wearable extended reality appliance. In some embodiments, the mobile device may be paired with one of the plurality of wearable extended reality appliances. A wearable extended reality appliance may include smart glasses, head-mounted displays, head-up displays, a virtual reality (VR) headset, or any other type of appliance. Prior to generating the visual code, some of the disclosed embodiments may involve determining the first physical position of the mobile device based on data received from at least one location sensor within the mobile device. A location sensor may detect a physical position that may include absolute position (i.e., location) or relative position (i.e., displacement from a position). For example, a location sensor may use GPS technology, an outdoor positioning system, or an indoor positioning system to determine absolute or relative location. As another example, a location sensor may provide coordinate locations relative to a predetermined reference frame. As another example, the location sensor may provide angular positions relative to a predetermined reference frame. Additionally, one or more hubs of a wireless network such as a WiFi network may be used to determine location. A mobile device may contain a location sensor configured for use in determining a physical position of the mobile device. Some disclosed embodiments may involve presenting the visual code on a display of the mobile device for detection by the plurality of wearable extended reality appliances, to thereby enable the plurality of wearable extended reality appliances to share content in a common coordinate system upon the detection of the visual code. 
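As a concrete illustration of generating such a code, the sketch below encodes a JSON payload into a QR code using the third-party `qrcode` package. The payload fields (position, heading, timestamp) are assumptions for the example; the disclosure only requires that the code reflect the mobile device's physical position.

```python
import json
import qrcode  # third-party package: pip install qrcode

# Hypothetical payload describing the mobile device's physical position,
# e.g., as reported by its location sensor. Field names are illustrative.
payload = {
    "position_m": [3.2, 1.1, 0.0],
    "heading_deg": 90.0,
    "issued_at": "2023-01-05T10:00:00Z",
}

# Encode the payload as a QR code and save it; the mobile device would
# present this image on its display for the appliances to detect.
img = qrcode.make(json.dumps(payload))
img.save("shared_reference_code.png")
```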
Detection of a visual code by a wearable extended reality appliance may include an appliance configured to identify the presence of a visual code. For example, as stated above, the wearable extended reality appliance may contain a recording, scanning, or imaging device. The recording, scanning, or imaging device may identify and scan the visual code. For example, the visual code may include a QR code, and image data captured using an image sensor included in a wearable extended reality appliance may be analyzed using a QR code detection algorithm to detect the visual code. In another example, the visual code may include a selected visual pattern, and the image data captured using an image sensor included in a wearable extended reality appliance may be analyzed using a visual pattern recognition algorithm (such as a template matching algorithm) to detect the visual code. Detecting the visual code may enable a plurality of wearable extended reality appliances to share content in a common coordinate system after the detection of the visual code. The term "coordinate system" refers to a common reference frame. A coordinate system may include, for example, an arrangement of reference lines or curves used to identify the locations of points in space. For example, in two dimensions, an x-y plane (that is, a Cartesian coordinate system with a specific origin point and specific directions for the axes) may be used to identify locations of points in space. As another example, in two dimensions, a polar coordinate system with a particular origin point and a specific reference direction may be used to identify locations of points in space. As another example, in three dimensions, an x-y-z plane (that is, a Cartesian coordinate system with a specific origin point and specific directions for the axes) or a spherical coordinate system with a specific origin point, a specific zenith direction, and a specific reference plane may be used to identify locations of points in space. A physical space may contain an infinite number of coordinate systems. Prior to detecting the visual code, each wearable extended reality appliance may have its own coordinate system. After detecting the visual code, the plurality of wearable extended reality appliances may share a common coordinate system (which may replace their previous coordinate systems or may be used in addition to their previous coordinate systems). In one example, a common coordinate system may mean that each coordinate system of each extended reality appliance may be aligned and may have the same arrangements of reference lines or curves. In another example, a common coordinate system may enable different wearable extended reality appliances to present virtual content at the same physical location and/or physical orientation. Aligning coordinate systems may include ensuring that origins of the coordinate systems coincide and that the one or more coordinate axes associated with each coordinate system overlap with or coincide with one or more coordinate axes of another coordinate system, or that other elements that define the coordinate systems coincide. In one example, aligning coordinate systems may include determining one or more transformation functions from one coordinate system to other coordinate systems and vice versa. By way of another example, the visual code, which may be generated and displayed by the mobile device, may define a common reference for a plurality of extended reality appliance wearers. 
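On the appliance side, the detection and alignment steps might look like the following sketch, which decodes the QR payload from a captured frame with OpenCV's `cv2.QRCodeDetector` and then uses the encoded device position as the shared origin. The translation-only alignment is a simplifying assumption; a full implementation would also recover orientation, for example from the code's corner points in the image.

```python
import json
import cv2
import numpy as np

# A sketch of detecting the visual code and establishing a common origin.
def common_origin_from_frame(frame_bgr):
    data, points, _ = cv2.QRCodeDetector().detectAndDecode(frame_bgr)
    if not data:
        return None  # no visual code found in this frame
    payload = json.loads(data)
    return np.array(payload["position_m"])  # field name assumed above

# Translate a point from an appliance's local frame into the common frame,
# given the shared origin expressed in the appliance's local coordinates
# (axes assumed aligned for simplicity).
def to_common_coords(point_local, shared_origin_local):
    return np.asarray(point_local) - np.asarray(shared_origin_local)
```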
Once they share the common reference, details of the shared coordinate system may be a matter of design choice. The coordinate system may be imperceptible to appliance wearers. For example, they may not see any particular visualization of the coordinate system, aside from the fact that the perspective view of each appliance wearer may differ based on their orientations relative to the common reference point. In other embodiments where the coordinate system is perceptible, each wearable extended reality appliance may display the same arrangement of lines with a common reference point. A common reference point may be a point in space with respect to which the position of an object in space is expressed. For example, a mobile device may represent the point in space where the x and y axes cross each other. As another example, a mobile device may represent the point in space where the x, y, and z axes cross each other. A common coordinate system may be created when each wearable extended reality appliance's coordinate system shares a common reference point. For example, each wearable extended reality appliance may read the visual code, determine the physical location of the mobile device based on the location information stored in the visual code, and make the physical location of the mobile device the origin of each wearable extended reality appliance's coordinate system, creating a common coordinate system. Content may be shared in the common coordinate system in the same manner as described above. In some embodiments, users of the plurality of wearable extended reality appliances may view a virtual object at a single location in a physical space. For example, prior to a common coordinate system, a wearable extended reality appliance may display a virtual object in its own coordinate system, and a second wearable extended reality appliance may display the same virtual object in its own, different coordinate system, with the two coordinate systems unaligned. The virtual objects may appear in different locations in the two different coordinate systems. After scanning the visual code and forming a common coordinate system, the plurality of wearable extended reality appliances may display the virtual object in the same position in the common coordinate system. By way of example, FIG. 57 illustrates an example of a plurality of wearable extended reality appliances with their own coordinate systems not aligned. For example, as illustrated in FIG. 57, user 5710 and user 5712 may be in an office conference room. User 5710 may be wearing wearable extended reality appliance 5711. Wearable extended reality appliance 5711 may display virtual content 5715 in coordinate system 5714. User 5712 may be wearing wearable extended reality appliance 5713. Wearable extended reality appliance 5713 may display virtual content 5717 in coordinate system 5716. As illustrated in FIG. 57, coordinate systems 5714 and 5716 may not be aligned, and as a result virtual content 5715 and virtual content 5717 may be displayed at different locations and/or orientations. By way of example, FIG. 58 illustrates an example of a plurality of wearable extended reality appliances with a common coordinate system. For example, as illustrated in FIG. 58, user 5810 and user 5812 may be in an office conference room. User 5810 may be wearing wearable extended reality appliance 5811 and user 5812 may be wearing wearable extended reality appliance 5813. Visual code 5816, displayed on mobile device 5817, may be read by wearable extended reality appliances 5811 and 5813. 
After reading visual code 5816, wearable extended reality appliances 5811 and 5813 may display content 5815 in a common coordinate system 5814. Thereby, both wearable extended reality appliances 5811 and 5813 may present content 5815 at the same location and/or orientation. Some disclosed embodiments may include identifying a trigger for causing the presentation of the visual code. The term "trigger" may mean an action that causes an event or situation to happen. For example, a visual code may only be presented based on an identification of an action. In one embodiment, the trigger may be associated with an input received from a wearable extended reality appliance paired with the mobile device. An input may include data that may be created based on an event. The input may cause a trigger for a visual code to be presented on the mobile device. For example, a mobile device and a wearable extended reality appliance may pair and create an input that may cause a visual code to appear on the mobile device. In one example, the wearable extended reality appliance may include an image sensor, image data captured using the image sensor may be analyzed using a gesture recognition algorithm, and identifying the trigger may include detection of a particular gesture of a user of the wearable extended reality appliance in the image data. In another example, the wearable extended reality appliance may include an audio sensor, audio data captured using the audio sensor may be analyzed using a speech recognition algorithm, and identifying the trigger may include detection of a particular voice command in the audio data. In yet another example, the wearable extended reality appliance may identify a proximity of another wearable extended reality appliance, and identifying the trigger may be based on the identification of the proximity of the other wearable extended reality appliance. In another embodiment, the trigger may be associated with an input from at least one sensor within the mobile device, the input being indicative of an additional wearable extended reality appliance approaching the mobile device. A sensor may be a device that detects and responds to some type of input. Examples of sensors may include position sensors, pressure sensors, temperature sensors, force sensors, or any other type of sensor. A sensor may detect that an additional wearable extended reality appliance is near the mobile device. The sensor may create an input that may generate a trigger for a visual code to be presented on the mobile device. For example, a wearable extended reality appliance may appear in the same room as a mobile device. The mobile device may detect the wearable extended reality appliance via a sensor. The detection may create an input that may cause a visual code to appear on the mobile device. In another embodiment, an input may be received from a user of the mobile device. For example, a user of a mobile device may manually prompt a sensor on the mobile device in order to cause a presentation of the visual code. The user may manually prompt a sensor by pushing a button, selecting an application, downloading a link, or any other manual means. By way of example, FIG. 59 illustrates an example of an additional wearable extended reality appliance joining a plurality of wearable extended reality appliances, where the additional appliance has a coordinate system not aligned with the common coordinate system. For example, as illustrated in FIG. 59, user 5910, user 5912, and user 5916 may be in an office conference room. 
User 5910 may be wearing wearable extended reality appliance 5911 and user 5912 may be wearing wearable extended reality appliance 5913. Wearable extended reality appliances 5911 and 5913 may display content 5915 in a common coordinate system 5914. User 5916 may be wearing wearable extended reality appliance 5917. Wearable extended reality appliance 5917 may display content 5921 in coordinate system 5920 that may be different from coordinate system 5914. As also illustrated in FIG. 59, mobile device 5918 may present visual code 5919. Wearable extended reality appliance 5917 may read visual code 5919 and cause coordinate system 5920 to become aligned with coordinate system 5914, for example as will be explained further in connection with FIG. 60. By way of example, FIG. 60 illustrates an example of an additional wearable extended reality appliance having a common coordinate system with a plurality of wearable extended reality appliances. For example, as illustrated in FIG. 60, user 6010, user 6012, and user 6014 may be in an office conference room. User 6010 may be wearing wearable extended reality appliance 6011, user 6012 may be wearing wearable extended reality appliance 6013, and user 6014 may be wearing wearable extended reality appliance 6015. Wearable extended reality appliances 6011, 6013, and 6015 may display content 6017 in a common coordinate system 6016 after the wearable extended reality appliances read a visual code presented on a mobile device. Upon the detection of the visual code, in some embodiments, the visual code may be configured to provide access to a private extended reality environment. A private extended reality environment may be an environment that may be accessible to a particular user or users and that may be inaccessible to other users. To gain access to a private extended reality environment, a user may need a password, permission, pin code, or another type of authentication information. For example, to gain access to a private extended reality environment, a wearable extended reality appliance may need to read a specific visual code. Reading the visual code may cause the wearable extended reality appliance to access the private extended reality environment. Some disclosed embodiments may involve preventing access to the private extended reality environment following passage of a temporal limit. A temporal limit may be a limit that is based on a length of time. A temporal limit may be measured in seconds, minutes, hours, days, and/or any other length of time. An extended reality environment may only be accessible for a specific and/or limited amount of time, or a specific period of time. For example, a wearable extended reality appliance may detect and read a visual code that may provide access to a private extended reality environment. The private extended reality environment may be configured to deny access after a predetermined amount of time (e.g., 10 minutes, 15 minutes, or any other predetermined time period). After expiry of the predetermined amount of time, the wearable extended reality appliance may no longer be able to display the extended reality environment. In some examples, reading the visual code in the specific period of time may provide the wearable extended reality appliance access to the private extended reality environment, while reading the same visual code after the specific period of time has passed may not provide the wearable extended reality appliance access to the private extended reality environment. 
In one example, the provided access may be limited to the specific period of time, while in another example, once the access was provided in the specific period of time, the access may be preserved for at least a selected amount of time after the specific period of time has passed. In one example, the visual code may be valid for granting access to the private extended reality environment for a specific period of time, which may prevent abuse of the visual code (for example, the usage of a picture of the visual code after the presentation of the visual code by the mobile device stopped). Some disclosed embodiments may involve preventing access to the private extended reality environment when the visual code is detected beyond a spatial limit. A spatial limit may be a limit that is based on an event existing or happening in a certain space. An extended reality environment may only be accessible when a wearable extended reality appliance detects a visual code within a certain defined space. For example, the spatial limit may require that the visual code be detected in a certain room in an office. In another example, the spatial limit may require that the visual code be detected within a certain distance of another wearable extended reality appliance. As another example, the spatial limit may be that the code must be detected in a certain room in a user's home. Access to the private extended reality environment may be denied if the visual code is not detected within the limit. For example, a wearable extended reality appliance may detect a code while in an office setting, and the wearable extended reality appliance may be provided access to the private extended reality environment. In another example, a wearable extended reality appliance may detect a code in a public setting (e.g., outside the office, in a park, in a public place), and the wearable extended reality appliance may be denied access to the private extended reality environment. In some examples, the spatial limit may require that the visual code be detected within a certain distance of the mobile device presenting it, or in a specific space in which the mobile device presenting the visual code is located. This may prevent abuse of the visual code (for example, the usage of a picture of the visual code sent to a remote location away from the mobile device). Some disclosed embodiments may involve generating a first visual code configured to grant access to a first private extended reality environment, and generating a second visual code configured to grant access to a second private extended reality environment. A wearable extended reality appliance may be granted access to multiple different private extended reality environments from a single location. Each different private extended reality environment may be associated with its own visual code. For example, a wearable extended reality appliance may detect a first visual code associated with a first private extended reality environment, and the appliance may be granted access to the first private environment. The user of the wearable extended reality appliance may wish to join a different private environment. A second visual code associated with a second private extended reality environment may be generated. The wearable extended reality appliance may detect the second visual code and be granted access to the second private environment. 
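Combining the temporal and spatial limits, an access check might look like the following sketch. The field names, the ten-minute time-to-live, and the ten-meter radius are assumptions made for illustration.

```python
import math
import time

# A sketch of the temporal and spatial access limits described above: the
# code is honored only within `ttl_s` seconds of issuance (temporal limit)
# and within `radius_m` of the presenting mobile device (spatial limit).
def grant_access(code_issued_at, device_pos, appliance_pos,
                 ttl_s=600, radius_m=10.0):
    fresh = (time.time() - code_issued_at) <= ttl_s
    near = math.dist(device_pos, appliance_pos) <= radius_m
    return fresh and near

# Example: a code issued 30 seconds ago, read 3 meters from the device.
print(grant_access(time.time() - 30, (0, 0, 0), (3, 0, 0)))  # True
```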
Some disclosed embodiments may involve detecting movement of the mobile device to a second physical position different from the first physical position. Detecting movement may include detecting a change in the position of an object relative to its surroundings. Movement may be detected by monitoring changes in sound waves, infrared light, visible light, radio frequency energy, or any other changes associated with a change in location. In some embodiments, the detected movement may include a change in an orientation of the mobile device without a change in a location of the mobile device, and the altering of the presentation may include changing an orientation of the presented visual code to compensate for the change in the orientation of the mobile device. The term "orientation" may include the determination of the relative direction of something. A change in orientation may include a change in the relative direction of something. For example, a mobile device may be presented in a specific direction relative to another direction in a common coordinate system or another object. A mobile device may change relative directions without changing locations in a common coordinate system. For example, a top portion of a mobile device may be facing north in a common coordinate system that may be using cardinal directions. The mobile device may be turned upside down so the top portion of the mobile device may be facing south in the common coordinate system. As another example, the mobile device may be turned left so the top portion of the mobile device may be facing west in the common coordinate system. As another example, the mobile device may be turned right so the top portion of the mobile device may be facing east in the common coordinate system. A visual code that may be presented on a mobile device may also change orientations with the mobile device. For example, a mobile device may be turned upside down so the top portion of the mobile device may be facing south in a common coordinate system. A visual code may also be rotated 180 degrees so the top portion of the visual code may be facing south in the common coordinate system. In some embodiments, the detected movement may include a change in a location of the mobile device without a change in an orientation of the mobile device. When the change in the location of the mobile device is smaller than a selected threshold, the altering of the presentation may include changing a location of the presented visual code on the display of the mobile device to compensate for the change in the location of the mobile device. A selected threshold may be an amount that must be exceeded for a certain reaction, result, or condition to occur. A threshold may be based on distance, location, position, or any other measurement. In one example, a threshold may be pre-determined or selected by a user. In one example, the threshold may be selected based on a display size of the mobile device, based on a current location of the visual code on the display screen of the mobile device, and so forth. A threshold may be selected and when a change in location of a mobile device is smaller than the threshold, the display of the visual code on the mobile device may also be changed. In some embodiments, the display may be moved by a distance proportional to the change in location of the mobile device. For example, a threshold may be set to 5 feet. A change in a location of the mobile device may be 4 feet. 
The change may be smaller than the threshold and therefore a presentation of the visual code may also be moved 4 feet from the first physical position. In another embodiment, a threshold may be set to a specific location. For example, a threshold may be set to only the location of an office conference room. A first physical location of a mobile device may be in the office conference room and a second physical location of the mobile device may also be in the office conference room. The difference between the first and second physical location is below the threshold and therefore a presentation of the visual code may also be moved on the display of the mobile device to compensate for the change in the location of the mobile device. For example, the movement of the visual code on the display of the mobile device may cancel out the movement of the mobile device so that the position of the visual code in the physical environment stays the same. In this example, the threshold may be selected so that the display screen of the mobile device is large enough for relocating the visual code in a way that cancels the movement of the mobile device. In another example, each deviation of the visual code from a selected position on the display screen may encode a non-linear correction to the movement of the mobile device. In some embodiments, the detected movement may include a change in a location of the mobile device and a change in an orientation of the mobile device. When the change in the location of the mobile device is smaller than a selected threshold, the altering of the presentation may include changing a location of the presented visual code on the display of the mobile device to compensate for the change in the location of the mobile device and changing an orientation of the presented visual code to compensate for the change in the orientation of the mobile device. A change in orientation may occur as described above. Additionally, a selected threshold may be determined as described above. For example, a threshold may be set to 5 feet. By way of an example, a change in location of the mobile device between the first physical position and the second physical position may be 4 feet and the mobile device may be rotated by four degrees. The change may be smaller than the threshold and therefore a presentation of the visual code may also be moved 4 feet from the first physical position and rotated by four degrees. Some disclosed embodiments may involve, upon detecting movement of the mobile device, altering the presentation of the visual code so that the visual code is unavailable for use in content sharing. Altering the presentation of the visual code may include changing a visual property of the code. For example, the visual code may disappear, may be distorted, or may be changed in any way that makes it unreadable or invalid. For example, the visual code may disappear so a camera on a wearable extended reality appliance may no longer be able to read the visual code. Likewise, when the visual code is distorted or modified to make it unreadable, the camera on the wearable extended reality appliance may no longer be able to read the visual code. A wearable extended reality appliance may not be able to share content in a common coordinate system when the appliance can no longer read the visual code. 
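The threshold logic above can be sketched as follows: small device movements are cancelled by shifting the code on the display in the opposite direction, while larger movements make the code unavailable. The coordinates, the threshold, and the pixels-per-meter scale are all assumptions for the example.

```python
import math

# A sketch of the threshold logic: shift the on-screen code to cancel small
# device movements, or stop presenting the code when the movement is too
# large for on-screen compensation.
def update_code_presentation(old_pos_m, new_pos_m, code_xy_px,
                             threshold_m=1.5, px_per_m=2000):
    dx = new_pos_m[0] - old_pos_m[0]
    dy = new_pos_m[1] - old_pos_m[1]
    if math.hypot(dx, dy) < threshold_m:
        # Shift the code opposite to the device's motion so that its
        # position in the physical environment stays (roughly) the same.
        return "moved", (code_xy_px[0] - dx * px_per_m,
                         code_xy_px[1] - dy * px_per_m)
    return "unavailable", None  # too far: remove or invalidate the code

print(update_code_presentation((0.0, 0.0), (0.1, 0.0), (540, 960)))
# ('moved', (340.0, 960))
```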
In other embodiments, an amount of change in location or orientation beyond a threshold may result in generation of a new visual code, requiring wearable appliance users to detect the new visual code in order to participate in (or to continue participating in) a common virtual reality experience. When the detected movement includes a change in an orientation of the mobile device without a change in a location of the mobile device, some of the disclosed embodiments may involve rendering the visual code unavailable for use in content sharing. As described above, the orientation of a mobile device may be changed, and in such a situation a prior visual code may become unreadable. For example, a top portion of a mobile device may be facing north in a common coordinate system that may be using cardinal directions. The mobile device may be rotated so the top portion of the mobile device may be facing south in the common coordinate system. Based on the change in orientation, the visual code may be unreadable by the wearable extended reality appliance and the appliance may be unable to share content. In one implementation, the detected movement may include a change in a location of the mobile device without a change in an orientation of the mobile device, and some disclosed embodiments may involve rendering the visual code unavailable for use in content sharing. As described above, when the location of a mobile device displaying the visual code changes, the visual code may become unreadable. For example, a first physical position of a mobile device may be in an office conference room. A second physical position of a mobile device may be in an office hallway. The change in location may be the difference between the mobile device being in the office conference room and the mobile device being located in the office hallway. A visual code may disappear when the mobile device moves into the office hallway. Therefore, the visual code may not be available to be read by the wearable extended reality appliance, and the appliance may not be able to share content in the preexisting shared coordinate system. In some embodiments, altering of the presentation may include removing the presentation of the visual code from the display of the mobile device. A mobile device may no longer display a visual code after moving positions in a physical space. In another embodiment, altering of the presentation may include shrinking the presentation of the visual code, enlarging the presentation of the visual code, changing intensity of the visual code, changing the color scheme of the visual code, adding a visual indicator indicative of the visual code being obsolete, distorting the presentation of the visual code, or any other alterations to the visual appearance of the visual code. For example, altering the presentation may make the visual code undetectable by one or more wearable extended reality appliances. As another example, altering the presentation may enable one or more wearable extended reality appliances to determine the visual code is obsolete. In some embodiments, the altering of the presentation may include presenting an updated visual code on the display of the mobile device, the updated visual code being configured for detection by an additional wearable extended reality appliance, to thereby enable the additional wearable extended reality appliance to share content in the common coordinate system upon the detection of the updated visual code.
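As a purely illustrative sketch of one possible policy for choosing among the presentation alterations described above, the following Python code decides whether to keep, compensate, or invalidate the presented code; the enum values, the threshold constants, and the function name are assumptions introduced here, and the disclosed embodiments are not limited to any such decision rule.

# A minimal sketch of one possible invalidation policy, assuming movement
# is summarized by a displacement (meters) and a rotation (degrees).
from enum import Enum, auto

class CodeAction(Enum):
    KEEP = auto()        # presentation unchanged
    COMPENSATE = auto()  # shift/rotate the presented code
    INVALIDATE = auto()  # remove, distort, or mark the code obsolete

DISPLACEMENT_LIMIT_M = 1.5   # assumed thresholds
ROTATION_LIMIT_DEG = 30.0

def choose_action(displacement_m: float, rotation_deg: float) -> CodeAction:
    """Decide how to alter the presentation after detected movement."""
    if displacement_m == 0.0 and rotation_deg == 0.0:
        return CodeAction.KEEP
    if displacement_m < DISPLACEMENT_LIMIT_M and rotation_deg < ROTATION_LIMIT_DEG:
        return CodeAction.COMPENSATE
    # Larger movements render the code unavailable for content sharing.
    return CodeAction.INVALIDATE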
In some embodiments, altering the presentation may include presenting a new visual code. A new visual code may include a different pattern, different type of code, different coloring, or any other type of difference compared to the original visual code. An additional wearable extended reality appliance may be present in a physical space and may read the updated visual code in order to display the common coordinate system and the virtual objects located in the common coordinate system. Some disclosed embodiments may involve generating the updated visual code to reflect the second physical position of the mobile device. In some examples, the second physical position may reflect a new common reference point for a common coordinate system. An updated visual code may contain information identifying the second physical position. For example, one or more wearable extended reality appliances may already be sharing a common coordinate system. The users of the one or more wearable extended reality appliances may want to share a common coordinate system with an additional wearable extended reality appliance. A mobile device may be moved to a different position in the physical space and display an updated visual code. The plurality of wearable extended reality appliances may scan the updated visual code to enable a common coordinate system with the second physical position of the mobile device as a common reference point. In some examples, since the second physical position is different from the first physical position, the updated visual code may include a correction factor for determining the common coordinate system. The correction factor may be based on a displacement between the first physical position and the second physical position, on a change in the direction of the mobile device, and so forth. In one example, the first physical position may be used as a reference point for the common coordinate system, and the updated visual code may be configured to enable a determination of the first physical position based on the second physical position, for example by encoding the displacement between the two physical positions. In some embodiments, the second physical position of the mobile device may be determined based on an analysis of image data depicting at least part of the visual code and captured by an image sensor included in a particular wearable extended reality appliance of the plurality of wearable extended reality appliances after the detection of the visual code by the plurality of wearable extended reality appliances. The image data captured by the image sensor may be analyzed using a visual pattern recognition algorithm to localize the visual code in the image data. The location of the visual code in the image data may be compared with the location of the visual code at the time of the detection of the visual code by the plurality of wearable extended reality appliances to determine that a mobile device may have moved positions. For example, a mobile device may be located in a first physical position in a space. The image sensor may capture the mobile device and a visual code being displayed on the mobile device at a first location. The mobile device may move in a left direction in the space to a second physical position. The image sensor may capture the visual code at a different location and determine that the mobile device has moved positions in the space.
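To illustrate how an updated visual code could encode a displacement-based correction factor as described above, the following Python sketch builds and decodes a simple payload; the JSON layout and the field names session, position, and correction are assumptions introduced here, not a disclosed encoding.

# A minimal sketch of an updated-code payload that encodes the displacement
# between the first and second physical positions, so that an appliance
# scanning at the second position can recover the original reference point.
import json

def build_updated_payload(first_pos, second_pos, session_id):
    """Return a JSON string that could be rendered as a QR code."""
    correction = tuple(b - a for a, b in zip(first_pos, second_pos))
    return json.dumps({
        "session": session_id,
        "position": second_pos,    # where the code is displayed now
        "correction": correction,  # displacement from the first position
    })

def recover_reference_point(payload: str):
    """Subtract the correction factor to obtain the original reference."""
    data = json.loads(payload)
    return tuple(p - c for p, c in zip(data["position"], data["correction"]))

payload = build_updated_payload([0.0, 0.0, 0.0], [1.2, 0.0, 0.4], "room-42")
assert recover_reference_point(payload) == (0.0, 0.0, 0.0)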
In some embodiments, the movement of the mobile device may be detected based on a comparison of a position of the visual code in the image data and a position in the common coordinate system corresponding to the first physical position of the mobile device. In some examples, the position of the visual code in the image data may correspond to a specific position in the common coordinate system. The image data may be analyzed using a visual pattern recognition algorithm to identify the position of the visual code in the image data. The position of the visual code in the image data, the size of the visual code in the image data and/or the capturing parameters of the image data (such as a position of the image sensor, an orientation of the image sensor, zoom or other optical characteristics associated with the capturing of the image data, etc.) may be used to determine the specific position in the common coordinate system. The specific position in the common coordinate system may be compared with the position in the common coordinate system corresponding to the first physical position of the mobile device. A movement of the mobile device may be detected when the specific position in the common coordinate system is different from the position in the common coordinate system corresponding to the first physical position of the mobile device, or when the distance between the specific position in the common coordinate system and the position in the common coordinate system corresponding to the first physical position of the mobile device exceeds a selected threshold. In one implementation, the visual code may include at least one dynamic marker indicative of detected movements of the mobile device, and some disclosed embodiments may involve changing the at least one dynamic marker when generating the updated visual code. A dynamic marker may include an indicator with characteristics that may change, or different indicators having differing characteristics. For example, a marker that is dynamic may have characteristics that vary over time in one or more of information presented, color saturation, color scheme, brightness, or any other change that is capable of conveying differing information as a result of the changes. Alternatively, a dynamic marker may be one that changes completely. For example, at a first position the marker may reflect a first bar code or QR code, and in a second position the marker may reflect an entirely different bar code or QR code. An updated visual code may include a change in the code based on a detected movement of a mobile device. For example, a mobile device may move in a physical space. The processor may detect the movement of the mobile device. Based on the movement, the dynamic marker of the visual code may change. For example, alphanumeric information may change, symbols, icons or other graphics may change, and/or a change in color saturation, color scheme, or brightness may occur. FIG. 61 illustrates a flow chart of an exemplary method 6110 that may be executed by a processor to perform operations for sharing virtual content. Method 6110 may include a step 6111 of generating a visual code reflecting a first physical position of a mobile device. Method 6110 may also include a step 6112 of presenting the visual code on a display of the mobile device for detection by a plurality of wearable extended reality appliances, to enable the plurality of wearable extended reality appliances to share content in a common coordinate system.
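For illustration only, the following Python sketch captures the comparison described above: the code's position derived from the image data is compared against the position corresponding to the first physical position, with movement detected when the distance exceeds a selected threshold. The function name and the threshold value are assumptions introduced here, and deriving the observed position from the capture parameters is assumed to have happened elsewhere.

# A minimal sketch of movement detection in the common coordinate system.
import math

MOVE_THRESHOLD_M = 0.25  # selected distance threshold (assumption)

def movement_detected(expected_pos, observed_pos,
                      threshold_m: float = MOVE_THRESHOLD_M) -> bool:
    """Compare the position implied by the image data with the position
    corresponding to the first physical position of the mobile device."""
    return math.dist(expected_pos, observed_pos) > threshold_m

# Example: the code appears 0.4 m from where it was first detected.
assert movement_detected((0.0, 0.0, 0.0), (0.4, 0.0, 0.0)) is True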
Further, method 6110 may include a step 6113 of detecting movement of the mobile device to a second physical position different from the first physical position. Method 6110 may include a step 6114 of altering the presentation of the visual code so that the visual code is unavailable for use in content sharing. Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art. The materials, methods, and examples provided herein are illustrative only and not intended to be limiting. Implementation of the method and system of the present disclosure may involve performing or completing certain selected tasks or steps manually, automatically, or a combination thereof. Moreover, according to actual instrumentation and equipment of preferred embodiments of the method and system of the present disclosure, several selected steps may be implemented by hardware (HW) or by software (SW) on any operating system of any firmware, or by a combination thereof. For example, as hardware, selected steps of the disclosure could be implemented as a chip or a circuit. As software or algorithm, selected steps of the disclosure could be implemented as a plurality of software instructions being executed by a computer using any suitable operating system. In any case, selected steps of the method and system of the disclosure could be described as being performed by a data processor, such as a computing device for executing a plurality of instructions. Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, specially designed ASICs (application specific integrated circuits), computer hardware, firmware, software, and/or combinations thereof. These various implementations can include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, coupled to receive data and instructions from, and to transmit data and instructions to, a storage system, at least one input device, and at least one output device. The systems and techniques described here can be implemented in a computing system that includes a back end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front end component (e.g., a client computer having a graphical user interface or a Web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back end, middleware, or front end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), and the Internet. The computing system can include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. While certain features of the described implementations have been illustrated as described herein, many modifications, substitutions, changes and equivalents will now occur to those skilled in the art.
It is, therefore, to be understood that the appended claims are intended to cover all such modifications and changes as fall within the scope of the implementations. It should be understood that they have been presented by way of example only, not limitation, and various changes in form and details may be made. Any portion of the apparatus and/or methods described herein may be combined in any combination, except mutually exclusive combinations. The implementations described herein can include various combinations and/or sub-combinations of the functions, components and/or features of the different implementations described. The foregoing description has been presented for purposes of illustration. It is not exhaustive and is not limited to the precise forms or embodiments disclosed. Modifications and adaptations of the embodiments will be apparent from consideration of the specification and practice of the disclosed embodiments. For example, the described implementations include hardware and software, but systems and methods consistent with the present disclosure may be implemented as hardware alone. It is appreciated that the above-described embodiments can be implemented by hardware, or software (program codes), or a combination of hardware and software. If implemented by software, it can be stored in the above-described computer-readable media. The software, when executed by the processor can perform the disclosed methods. The computing units and other functional units described in the present disclosure can be implemented by hardware, or software, or a combination of hardware and software. One of ordinary skill in the art will also understand that multiple ones of the above-described modules/units can be combined as one module or unit, and each of the above-described modules/units can be further divided into a plurality of sub-modules or sub-units. The block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer hardware or software products according to various example embodiments of the present disclosure. In this regard, each block in a flowchart or block diagram may represent a module, segment, or portion of code, which includes one or more executable instructions for implementing the specified logical functions. It should be understood that in some alternative implementations, functions indicated in a block may occur out of the order noted in the figures. For example, two blocks shown in succession may be executed or implemented substantially concurrently, or two blocks may sometimes be executed in reverse order, depending upon the functionality involved. Some blocks may also be omitted. It should also be understood that each block of the block diagrams, and combination of the blocks, may be implemented by special purpose hardware-based systems that perform the specified functions or acts, or by combinations of special purpose hardware and computer instructions. In the foregoing specification, embodiments have been described with reference to numerous specific details that can vary from implementation to implementation. Certain adaptations and modifications of the described embodiments can be made. Other embodiments can be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein.
It is intended that the specification and examples be considered as example only, with a true scope and spirit of the invention being indicated by the following claims. It is also intended that the sequence of steps shown in figures are only for illustrative purposes and are not intended to be limited to any particular sequence of steps. As such, those skilled in the art can appreciate that these steps can be performed in a different order while implementing the same method. It will be appreciated that the embodiments of the present disclosure are not limited to the exact construction that has been described above and illustrated in the accompanying drawings, and that various modifications and changes may be made without departing from the scope thereof. Other embodiments will be apparent to those skilled in the art from consideration of the specification and practice of the embodiments disclosed herein. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosed embodiments being indicated by the following claims. Moreover, while illustrative embodiments have been described herein, the scope includes any and all embodiments having equivalent elements, modifications, omissions, combinations (e.g., of aspects across various embodiments), adaptations or alterations based on the present disclosure. The elements in the claims are to be interpreted broadly based on the language employed in the claims and not limited to examples described in the present specification or during the prosecution of the application. These examples are to be construed as non-exclusive. Further, the steps of the disclosed methods can be modified in any manner, including by reordering steps or inserting or deleting steps. It is intended, therefore, that the specification and examples be considered as exemplary only, with a true scope and spirit being indicated by the following claims and their full scope of equivalents.
DETAILED DESCRIPTION FIG. 1 illustrates an example system 100 for managing congestion and managing UL and DL data that may be transmitted in a control plane. The system 100 includes one or more wireless communication devices (WCDs), such as user equipment (UE) 101. It further includes a base station, such as NB/eNB 103, a mobility management entity (MME) node 105, a service capability exposure function (SCEF) node 107, a serving gateway (SGW) 111, a PDN gateway (PGW) 113, a PCRF 115, an HSS 117, and an application server 109. In some cases, the UE may be a cellular Internet of Things (CIoT) device, such as a sensor or appliance. Compared to a UE such as a smartphone, a sensor or appliance may transmit (e.g., broadcast) much less UL data and do so much less frequently. Under some circumstances, it may be more efficient to transmit such UL data in the control plane rather than in the user plane. The control plane may include, for example, the non-access stratum (NAS) layer used by the UE 101 and the MME 105 to communicate with each other. The data may include UL data transmitted by the UE 101 for another device (e.g., application server 109) or DL data transmitted by another device (e.g., AS 109) for UE 101. The data, in some situations, are transported using non-IP data delivery (NIDD), as specified in TS 23.682 (see change request 0154, or S2-160423). The SCEF node 107, which is also described in TS 23.682, may facilitate delivery of non-IP data. FIG. 2 illustrates a more specific example of system 100. In this example, the SCEF node 107 may be part of a service capability server (SCS) and/or a machine-type communication (MTC) Interworking function (MTC-IWF) node. In other instances, the SCEF node 107 may be a standalone component. The SCEF node 107 may be located at an edge of the core network and act as a gateway to devices outside the core network. Additionally, while FIG. 1 and FIG. 2 show an MME node 105, the functionality and steps performed in the MME node 105 may in some embodiments be performed in a CIoT service gateway node (C-SGN), either in addition to or as an alternative to the MME node 105. As discussed above, UL and DL data for a UE that is a CIoT device may be more efficiently transmitted in the control plane between the UE 101 and MME 105, and via NIDD between the MME 105 and the application server 109 or other source or destination of the data. The control plane may have a limited amount of transmission resources, such as radio transmission resources (e.g., frequency and time resources) used by the UE 101 and eNB/NB 103 to wirelessly exchange information. The transport of the UL and DL data in the control plane may compete with control signaling for such transmission resources. As a result, it may significantly interfere with control signaling between the MME 105 and the UE 101, and/or between the eNB/NB 103 and the UE 101. Thus, methods are needed to perform rate control for UL data that the UE may attempt to send in the control plane and for DL data that may be intended to be sent to the UE in the control plane.
UL Data Rate Control For performing rate control for UL data, the MME may throttle NAS messages with user data (i.e., NAS Data PDUs) sent using Control Plane CIoT EPS optimization (e.g., the MME may send a message to a UE indicating a time period (a.k.a. "throttling delay") and indicating (implicitly or expressly) a number of NAS Data PDUs that the UE is permitted to send during the indicated time period) by, for example, accepting the NAS message and adding a throttling factor and/or a throttling delay in a NAS message sent to the UE. The UE shall follow the throttling factor and throttling delay sent by the MME until the throttling has been omitted in the next NAS message from the network, or the throttling delay time has expired. (That is, for example, the UE shall not send any subsequent NAS messages with user data sent using Control Plane CIoT EPS optimization until that criterion is fulfilled.) The UE may resume normal operations at the expiry of the throttling delay. The last received value of the throttling factor and throttling delay supersedes any previous values received from that MME. The reception of a throttling delay restarts the UE throttling delay timer. In an alternative embodiment, the UE resumes normal operation when the UE receives a subsequent NAS message from the network where the throttling factor and throttling delay have been omitted. The detection of NAS level congestion is discussed in more detail in section 4.3.7.4.2.1 of TS 23.401, and is reproduced later in the disclosure. FIGS. 3 and 4 provide flow diagrams that also illustrate coordination between an MME node (e.g., MME node 105) and a wireless communication device (e.g., UE 101) for limiting UL data rate and managing signalling congestion between the MME node and the UE. In an embodiment, the process 300 in FIG. 3 may begin in step 302, in which the MME node receives (e.g., accepts) a first control plane message (e.g., a non-access stratum (NAS) message, such as a NAS attach request message) transmitted by the WCD, the first control plane message including uplink (UL) data (e.g., user plane data) intended for relay by the MME node to another device. Examples of the control plane message include a NAS message that contains Control Plane CIoT Optimization data (or "small data") (e.g., a "NIDD Delivery" message). Other examples, depending on protocol layer, are "S1-AP Initial UE Message (NAS Data PDU with EBI)" or "Uplink S1-AP msg (NAS Data PDU with EBI)". In step 306, after receiving the first control plane message, the MME node may create a second control plane message (e.g., a NAS attach accept message), the second control plane message identifying at least one of: i) a throttling factor that indicates a level by which the WCD should reduce the amount of UL data in any future control plane message to the MME node, and ii) a throttling delay that indicates how much time the WCD should wait before including any UL data in any future control plane message to the MME (i.e., the second control plane message may indicate a time period (i.e., "throttling delay") (e.g., 0.5 deci hours) and a number of NAS Data PDUs that the UE is permitted to send during the indicated time period, wherein, in this example, the number is zero). In step 308, the MME node transmits the second control plane message including the at least one of the throttling factor and the throttling delay, the second control plane message intended for the WCD.
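To illustrate the UE-side behavior described above (the last received values supersede earlier ones, a received delay restarts the timer, and omission of both parameters resumes normal operation), the following Python sketch models a simple throttling state; the class and method names are assumptions introduced here, the delay is assumed to be given in seconds, and this is a simplification rather than the NAS encoding.

# A minimal sketch of UE-side handling of throttling parameters received
# in a DL NAS message. All names are illustrative assumptions.
import time

class UeUplinkThrottle:
    def __init__(self):
        self.factor = None        # fraction of UL data to withhold
        self.delay_expiry = None  # monotonic time when throttling ends

    def on_dl_nas(self, throttling_factor=None, throttling_delay_s=None):
        """Last received values supersede earlier ones; a received delay
        restarts the timer; omission of both resumes normal operation."""
        if throttling_factor is None and throttling_delay_s is None:
            self.factor = None
            self.delay_expiry = None
            return
        self.factor = throttling_factor
        if throttling_delay_s is not None:
            self.delay_expiry = time.monotonic() + throttling_delay_s

    def may_send_data_pdu(self) -> bool:
        """Normal operation resumes at the expiry of the throttling delay;
        partial reduction under a factor < 1.0 is left to the caller."""
        if self.delay_expiry is None:
            return True
        if time.monotonic() >= self.delay_expiry:
            self.factor = None
            self.delay_expiry = None
            return True
        return self.factor is None or self.factor < 1.0  # 1.0 = send nothing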
For instance, the MME node may transmit the second control plane message toward the WCD, via a base station between the two nodes. Examples of this message include a Downlink S1-AP message with a DL NAS message that includes the throttling factor and throttling delay. FIG. 3 further illustrates an optional step 304 for process 300. In step 304, the MME node may determine whether congestion of control signaling between the MME node and the WCD has deteriorated past a predetermined threshold. In this example, step 306 may be performed in response to both receiving the first control plane message from the WCD and determining that congestion has deteriorated past the predetermined threshold. In some cases, the received control plane message (e.g., NAS message) may be processed (i.e., its UL data may be forwarded or discarded). Throttling can also be initiated at a later stage when congestion/overload has been detected (but before the signaling connection with the UE is released). Immediate initiation of the throttling may be done if the congestion/overload has already been detected in the MME when the UL NAS message is received. In some instances, the MME node may initiate the throttling process immediately after receiving a control plane message (e.g., in step 302) transmitted by the WCD. In some instances, it may do so without first receiving a control plane message from the WCD, and may instead initiate throttling based on some other throttling criterion or criteria, such as a result of overload/congestion, and/or as a result of the UE having exceeded its small data quota, subscribed maximum bitrate, subscribed maximum bitrate for the Control Plane, Service Level Agreement, etc. In some instances, a combination of these criteria may need to be satisfied (e.g., the MME node has to receive a control plane message and detect control signaling congestion, and/or detect that the WCD has exceeded a maximum UL bit rate or quota) before the MME node will initiate the throttling process. In some instances, the MME node may determine whether the WCD has exceeded a predetermined maximum data quota or maximum data rate (e.g., subscribed maximum bit rate, subscribed UE aggregated maximum bit rate for the CP or for the UE as a whole), wherein the step of transmitting the second control plane message including the throttling factor or the throttling delay is performed in response to receiving the first control plane message from the WCD and determining that the WCD has exceeded a predetermined maximum data quota or maximum data rate. More generally speaking, the receipt of the first control plane message with the uplink data in step 302, the deterioration of congestion past a threshold, and/or the exceeding of the maximum bit rate (e.g., maximum UL bit rate) may be examples of throttling criteria. The MME node may thus trigger throttling when one or more of the throttling criteria are met. This is illustrated in a process in FIG. 4, in which the MME node, in step 402, determines whether one or more throttling criteria have been met. The MME node in this embodiment may transmit the second control plane message that includes the throttling factor or throttling delay in step 308 without first waiting to receive a control plane message with UL data from the WCD. In that instance, the throttling may be triggered by deterioration of signaling congestion at the MME node and/or the WCD exceeding a maximum bit rate or UL data quota.
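The following Python sketch illustrates, purely for exposition, an MME-side check of the throttling criteria just described (congestion past a threshold, a quota exceeded, or a maximum bit rate exceeded); the threshold value, function name, and parameters are assumptions introduced here.

# A minimal sketch of evaluating throttling criteria at the MME node.
CONGESTION_THRESHOLD = 0.8  # fraction of signaling capacity in use (assumption)

def throttling_criteria_met(congestion_level: float,
                            wcd_bytes_sent: int,
                            wcd_quota_bytes: int,
                            wcd_bitrate_bps: float,
                            wcd_max_bitrate_bps: float) -> bool:
    """Return True when at least one throttling criterion is satisfied."""
    if congestion_level > CONGESTION_THRESHOLD:
        return True                    # control signaling congestion
    if wcd_bytes_sent > wcd_quota_bytes:
        return True                    # small-data quota exceeded
    return wcd_bitrate_bps > wcd_max_bitrate_bps  # subscribed max bit rate exceeded

# Example: quota exceeded even though congestion is low.
assert throttling_criteria_met(0.2, 6000, 5000, 10.0, 100.0) is True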
The throttling factor or throttling delay transmitted in step 308 may be overridden. FIG. 3 illustrates the overriding feature with step 312, in which, after transmitting the second control plane message, the MME node transmits a third control plane message which includes another throttling factor or another throttling delay, wherein the other throttling delay overrides the throttling delay in the second control plane message and the other throttling factor overrides the throttling factor in the second control plane message. As discussed above, the UE may stop the throttling when a timer set by a throttling delay (if transmitted) expires, or when the UE receives a subsequent control plane message indicating that the throttling can cease. In step 310, for example, after transmitting the second control plane message, the MME node may transmit a third control plane message which includes no throttling factor and no throttling delay, wherein the omission of the throttling factor and the throttling delay is an indication that the one or more WCDs can stop throttling UL data in control plane messages. In some cases, the throttling factor may indicate that the WCD should include no UL data in any future control plane message to the MME node until the MME node indicates stopping of throttling. In some cases, the throttling factor may indicate a percentage (e.g., 25, 50, 100%) by which the WCD should reduce UL data transmission in the control plane. In some instances, the MME node determines a maximum bit rate (MBR) at which to limit UL data in the control plane between the MME node and one or more WCDs attached to the MME node. The MME may determine the throttling factor or the throttling delay based on the determined MBR. FIG. 5 illustrates data rate control from the perspective of the WCD. The process 500 illustrated in FIG. 5 may, in one embodiment, begin at step 502, in which the WCD transmits a first control plane message which includes uplink (UL) data intended for relay by a mobility management entity (MME) node to another device, the first control plane message intended for the MME node. For instance, the WCD may transmit the control plane message toward the MME node, via a base station between the two nodes. In step 504, the WCD receives a second control plane message transmitted from a mobility management entity (MME) node, the second control plane message including at least one of: i) a throttling factor that indicates a level by which the WCD should reduce the amount of UL data in any future control plane message to the MME node and ii) a throttling delay that indicates how much time the WCD should wait before including any UL data in any future control plane message to the base station or to the MME. In step 506, after receiving the second control plane message, the WCD transmits a third control plane message with an amount of UL data (e.g., zero amount of UL data) based on the throttling factor, or with zero amount of UL data if a timer set based on the throttling delay has not yet expired, the third control plane message intended for the MME node. This step may be part of the WCD's efforts to throttle its transmission of UL data in the control plane. In an embodiment, the WCD may receive, in step 508, a third control plane message transmitted by the MME node that includes another throttling factor or another throttling delay which overrides those in the earlier control plane message.
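As a purely illustrative sketch of assembling the DL control plane messages of steps 308, 310, and 312, the following Python code uses a plain dictionary to stand in for the NAS message encoding; the function and field names are assumptions introduced here, not the actual NAS information elements.

# A minimal sketch of building DL NAS messages carrying the optional
# throttling parameters; omission of both parameters signals that
# throttling can stop. All names are illustrative assumptions.
def build_dl_nas(throttling_factor=None, throttling_delay_s=None):
    """Return a DL NAS message carrying the optional throttling fields."""
    msg = {"type": "DL_NAS"}
    if throttling_factor is not None:
        msg["throttling_factor"] = throttling_factor  # e.g., 1.0 = send no UL data
    if throttling_delay_s is not None:
        msg["throttling_delay"] = throttling_delay_s  # seconds (assumption)
    return msg

start_msg = build_dl_nas(throttling_factor=0.5, throttling_delay_s=1800)   # step 308
override_msg = build_dl_nas(throttling_factor=0.25, throttling_delay_s=600)  # step 312
stop_msg = build_dl_nas()  # step 310: omission indicates throttling may stop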
In an embodiment, the WCD may receive, in step 510, a third control plane message which includes no throttling factor and no throttling delay. The WCD may identify this as an indication that it can stop throttling of UL data in the control plane. FIG. 6 provides a signaling diagram of the UL data rate control. Messages 601 and 602 show that the control plane message can be a NAS message (e.g., a NAS attach request message, as discussed in section 9.8 of TS 24.301) included as a payload in an RRC message between a UE and a base station, and as a payload in an S1-AP message between the base station and the MME node. In the illustrated embodiment, the MME node may initiate throttling after determining in step 604 that there is control signaling congestion. It may transmit a NAS message using messages 605 and 606 to convey a throttling factor or throttling delay to the UE, which throttles UL data in step 607. The MME node may further transmit messages 608 and 609 to modify the throttling, or messages 610 and 611 to stop the throttling. After the throttling is stopped, the UE may resume normal transmission of UL data to the MME node in a control plane (e.g., in the NAS layer). DL Data Rate Control DL data rate control may involve the MME rejecting data delivery requests (e.g., MT NIDD delivery request or NIDD submission request). Such requests may include DL data that may need to be relayed to a WCD in the control plane. Because this DL data may compete with control signaling for radio transmission resources, the DL data may be throttled. The MME node may itself receive and reject individual data delivery requests, or it may offload some of that gateway functionality to the SCEF node. For instance, the MME can reject NIDD Submit Request messages or, to further offload the MME, the MME can request the SCEFs to selectively reduce the number of NIDD Submit Requests they send for downlink traffic according to a throttling factor and for a throttling delay specified in the NIDD Submit Downlink Ack message (or NIDD Submit Ack message). See TS 23.682 for corresponding SCEF logic. The SCEF shall not send any subsequent NIDD Submit Request messages with user data until its throttling delay timer has expired. The SCEF resumes normal operations at the expiry of the throttling delay. The last received value of the throttling factor and throttling delay supersedes any previous values received from the MME. The reception of a throttling delay restarts the SCEF throttling delay timer. In an alternative embodiment, the SCEF resumes normal operation when it receives a subsequent NIDD Submit Downlink Ack message (or NIDD Submit Ack message) from the network where the throttling factor and throttling delay have been omitted. In some instances, the MME node may also restrict the signaling load that its SGW may generate, by throttling downlink data notification requests from the SGW. Throttling downlink data notification requests from the SGW is discussed in TS 23.401, section 4.3.7.4.1a, which is also reproduced below. FIG. 7 illustrates another example of DL data rate control. In the process 700 illustrated in FIG. 7, the MME node may, in step 702, determine whether one or more throttling criteria have been met.
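For illustration, the following Python sketch mirrors the SCEF behavior described above: a received throttling delay restarts the timer, the SCEF withholds NIDD Submit Requests while the timer runs, and omission of the delay resumes normal operation. The class and method names are assumptions introduced here, not the TS 23.682 interface, and the delay is assumed to be in seconds.

# A minimal sketch of SCEF-side handling of a throttling delay received
# in an NIDD Submit Downlink Ack. All names are illustrative assumptions.
import time

class ScefDownlinkThrottle:
    def __init__(self):
        self.delay_expiry = None

    def on_submit_ack(self, throttling_delay_s=None):
        """A received delay restarts the SCEF throttling delay timer;
        omission of the parameter lets the SCEF resume normal operation."""
        if throttling_delay_s is None:
            self.delay_expiry = None
        else:
            self.delay_expiry = time.monotonic() + throttling_delay_s

    def may_submit(self) -> bool:
        """No NIDD Submit Request messages with user data are sent until
        the throttling delay timer has expired."""
        return (self.delay_expiry is None
                or time.monotonic() >= self.delay_expiry)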
The one or more criteria may include, for instance, receipt of a data delivery request message (e.g., MT NIDD delivery request or NIDD submission request) transmitted by the SCEF node, a determination that congestion of control signaling between the MME node and one or more WCDs has deteriorated past a predetermined threshold, and/or the one or more WCDs exceeding a maximum bit rate (e.g., a maximum DL bit rate) or DL data quota (which may be predetermined values set by a network operator, or may be dynamically determined). In step 704, in response to a determination that the one or more throttling criteria have been met, the MME node may create a first data delivery message (e.g., MT NIDD response or NIDD submit downlink ack message) that includes at least one of: i) a throttling factor that indicates a level by which the SCEF node should reduce the number of downlink (DL) data delivery requests to the MME node, and ii) a throttling delay that indicates how much time the SCEF node should wait before transmitting any future data delivery request to the MME node. In some cases, the throttling factor or the throttling delay may be based on a determined maximum bit rate to which the MME node is attempting to limit DL data for one or more WCDs. In step 706, the MME node may transmit the first data delivery message including the at least one of the throttling factor and the throttling delay, the first data delivery message intended for the SCEF node. FIG. 7 further illustrates steps 708 and 710 for modifying the throttling and stopping the throttling, respectively. In some cases, the MME node may send the throttling factor or throttling delay in an empty NIDD response message or as a new message that is created for the purpose of DL data control between the MME node and the SCEF node. Note that if the throttling criteria do not involve the MME node first receiving a NIDD request message (e.g., MT NIDD request message or NIDD submit request message) from the SCEF node, then the throttling indication from the MME node may be sent to the SCEF node in an unsolicited manner. FIG. 8 illustrates the DL data rate control from the perspective of the SCEF node. In step 802, the SCEF node receives a first data delivery message (e.g., NIDD submission ack message or MT NIDD response message) transmitted by a mobility management entity (MME) node, the first data delivery message including at least one of: i) a throttling factor that indicates a level by which the SCEF node should reduce the number of downlink (DL) data delivery requests to the MME node, and ii) a throttling delay that indicates how much time the SCEF node should wait before transmitting any future data delivery request to the MME node. In step 804, after receiving the first data delivery message, the SCEF reduces the number of data delivery requests transmitted to the MME node based on the throttling factor, or refrains from transmitting any data delivery request to the MME node if a timer based on the throttling delay has not yet expired. In an embodiment, the SCEF node may receive subsequent data delivery messages from the MME node that modify the throttling or indicate that the throttling can cease. The DL rate control is also illustrated in the signal diagram in FIG. 9. In this particular example, the throttling may be initiated after the MME node receives a NIDD submission request message 903 that includes DL non-IP data.
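As one purely illustrative way the throttling factor of step 704 could be derived from a determined maximum bit rate, the following Python sketch computes the fraction of DL delivery requests the SCEF should withhold; the derivation, message layout, and names are assumptions introduced here, and the disclosed embodiments are not limited to any particular derivation.

# A minimal sketch of building the first data delivery message of step 704,
# deriving the throttling factor from a target maximum bit rate (MBR).
def build_nidd_ack(current_dl_bps: float, target_mbr_bps: float,
                   throttling_delay_s: int = 600):
    """Return an acknowledgment asking the SCEF to reduce DL delivery
    requests so the DL rate approaches the target MBR."""
    msg = {"type": "NIDD_SUBMIT_DL_ACK"}
    if current_dl_bps > target_mbr_bps:
        # Fraction of requests the SCEF should withhold (assumption).
        msg["throttling_factor"] = 1.0 - target_mbr_bps / current_dl_bps
        msg["throttling_delay"] = throttling_delay_s
    return msg

ack = build_nidd_ack(current_dl_bps=2000.0, target_mbr_bps=500.0)
assert round(ack["throttling_factor"], 2) == 0.75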
The non-IP data may originate from, for example, a service capability server (SCS) and/or an application server (AS), which determines in step 901 that non-IP data exists and transmits a NIDD submission request message 902 to the SCEF node. To initiate throttling, the MME node may transmit a NIDD submission downlink ack message 904 with a throttling factor or throttling delay to the SCEF node. This causes the SCEF node, even after receiving non-IP data in step 905, to throttle NIDD submission request messages in step 906. The throttling may be modified in message 907, and may be stopped by the MME node in step 908. After the throttling is stopped, the SCEF node may continue to forward NIDD submission requests to the MME node, which may then relay the DL data toward the UE in the user plane or the data plane. Coordination Between MME Node and Base Station Rate control (e.g., UL data rate control) may also involve the MME node coordinating with a base station to limit network access (e.g., RAN access) if that access may involve transmission of excessive UL data in the control plane. This coordination allows an MME node to make a request to a base station for all UEs that are camped on the base station and using the control plane to transport data. In one example, the MME node may use an Overload Start message, which is discussed in TS 23.401, at section 4.3.7.4.1, which is also reproduced below. In the example, the MME node may use the Overload Start message to request an eNB to reject new RRC connection requests from UEs that access the network to send user data via the Control Plane for normal priority and/or exception reporting. FIG. 10 illustrates another example of an overload handling mechanism that involves coordination between an MME node and a base station. This example includes a process 1000 that begins, in an embodiment, in step 1002, in which the MME node determines whether congestion of control signaling between the MME node and the one or more WCDs has deteriorated past a predetermined threshold. In step 1006, in response to determining that the congestion has deteriorated past the predetermined threshold, the MME node generates an overload start message that indicates the base station should reject any radio resource control (RRC) connection requests being used by a WCD to access the MME node to send uplink (UL) data in a control plane message (i.e., a request from the WCD for data transfer via control plane CIoT EPS Optimization). In step 1008, the MME node transmits the overload start message to the base station. FIG. 11 illustrates a process 1100 that is from the perspective of the base station. In step 1102, the base station receives an overload start message transmitted by the MME node, the message indicating the base station should reject any radio resource control (RRC) connection requests being used by a WCD to access the MME node to send uplink (UL) data in a control plane message (i.e., a request from the WCD for data transfer via control plane CIoT EPS Optimization). In step 1104, after receiving the overload start message, the base station receives an RRC connection request from one of the WCDs, the RRC connection request including a control plane message (i.e., the request is a request for data transfer via control plane CIoT EPS Optimization). In step 1106, the base station determines whether the control plane message includes UL data. In step 1108, in response to determining that the control plane message includes UL data, the base station rejects the RRC connection request.
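The following Python sketch illustrates, for exposition only, the Overload Start coordination of processes 1000 and 1100: the MME indicates that control-plane data requests should be rejected, and the base station applies that indication to incoming RRC connection requests. The message structures and names are assumptions introduced here, not the S1-AP encoding.

# A minimal sketch of the Overload Start coordination between the MME
# node and the base station. All names are illustrative assumptions.
def make_overload_start(reject_cp_data: bool = True):
    return {"type": "OVERLOAD_START", "reject_cp_ciot_data": reject_cp_data}

def handle_rrc_request(enb_state: dict, rrc_request: dict) -> str:
    """Base station side (steps 1104-1108): reject requests whose control
    plane message carries UL data while Overload Start is in effect."""
    if (enb_state.get("reject_cp_ciot_data")
            and rrc_request.get("contains_ul_data")):
        return "REJECT"
    return "ACCEPT"

enb_state = make_overload_start()
assert handle_rrc_request(enb_state, {"contains_ul_data": True}) == "REJECT"
assert handle_rrc_request(enb_state, {"contains_ul_data": False}) == "ACCEPT"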
In some instances, the information indicating whether the control plane message includes UL data may be in the header of the RRC connection request. Exemplary MME Node FIG. 12 illustrates a block diagram of an example MME node 105. As shown in FIG. 12, the MME node 105 may include: a data processing system 1202, which may include one or more processors 1255 (e.g., microprocessors) and/or one or more circuits, such as an application specific integrated circuit (ASIC), field-programmable gate arrays (FPGAs), etc.; a communication interface 1205 for communicating with the RAN and an interface 1205 for communicating with a SCEF node; and a data storage system 1206, which may include one or more computer-readable data storage mediums, such as non-transitory data storage apparatuses (e.g., hard drive, flash memory, optical disk, etc.) and/or volatile storage apparatuses (e.g., dynamic random access memory (DRAM)). In embodiments where data processing system 1202 includes a processor (e.g., a microprocessor), a computer program product 1233 may be provided, which computer program product includes: computer readable program code 1243 (e.g., instructions), which implements a computer program, stored on a computer readable medium 1242 of data storage system 1206, such as, but not limited to, magnetic media (e.g., a hard disk), optical media (e.g., a DVD), memory devices (e.g., random access memory), etc. In some embodiments, computer readable program code 1243 is configured such that, when executed by data processing system 1202, code 1243 causes the data processing system 1202 to perform steps described herein. In some embodiments, the MME node may be configured to perform steps described above without the need for code. For example, data processing system 1202 may consist merely of specialized hardware, such as one or more application-specific integrated circuits (ASICs). Hence, the features of the present invention described above may be implemented in hardware and/or software. Exemplary Wireless Communication Device (WCD) FIG. 13 illustrates a block diagram of an example of the WCD 106. As shown in FIG. 16, WCD 106 may include: the data processing system (DPS) 1602 (which includes, e.g., a digital signal processor (DSP), and which may include one or more processors (P) 1655 (e.g., microprocessors) and/or one or more circuits, such as an application specific integrated circuit (ASIC), field-programmable gate arrays (FPGAs), etc.); a transceiver 1605 connected to an antenna 1622 for wirelessly transmitting and receiving information; and a data storage system 1606, which may include one or more computer-readable data storage mediums, such as a non-transitory memory unit (e.g., hard drive, flash memory, optical disk, etc.) and/or volatile storage apparatuses (e.g., dynamic random access memory (DRAM)). In embodiments where data processing system 1602 includes a processor 1655 (e.g., a microprocessor), a computer program product 1633 may be provided, which computer program product includes: computer readable program code 1643 (e.g., instructions), which implements a computer program, stored on a computer readable medium 1642 of data storage system 1606, such as, but not limited to, magnetic media (e.g., a hard disk), optical media (e.g., a DVD), memory devices (e.g., random access memory), etc. In some embodiments, computer readable program code 1643 is configured such that, when executed by data processing system 1602, code 1643 causes the data processing system 1602 to perform steps described herein.
In some embodiments, WCD 106 is configured to perform steps described above without the need for code 1643. For example, data processing system 1602 may consist merely of specialized hardware, such as one or more application-specific integrated circuits (ASICs). Hence, the features of the present invention described above may be implemented in hardware and/or software. For example, in some embodiments, the functional components of WCD 106 described above may be implemented by data processing system 1602 executing program code 1643, by data processing system 1602 operating independently of any computer program code 1643, or by any suitable combination of hardware and/or software. In a second embodiment, WCD 106 further includes: 1) a display screen coupled to the data processing system 1602 that enables the data processing system 1602 to display information to a user of WCD 106; 2) a speaker coupled to the data processing system 1602 that enables the data processing system 1602 to output audio to the user of WCD 106; and 3) a microphone coupled to the data processing system 1602 that enables the data processing system 1602 to receive audio from the user. Exemplary SCEF Node FIG. 14 illustrates a block diagram of an example of a SCEF node 107. As shown in FIG. 14, the SCEF node 107 may include: a data processing system 1402, which may include one or more processors 1455 (e.g., microprocessors) and/or one or more circuits, such as an application specific integrated circuit (ASIC), field-programmable gate arrays (FPGAs), etc.; a communication interface 1405 for communicating with the MME; a network interface 1403 for interfacing with an SCS/AS 109; and a data storage system 1406, which may include one or more computer-readable data storage mediums, such as non-transitory data storage apparatuses (e.g., hard drive, flash memory, optical disk, etc.) and/or volatile storage apparatuses (e.g., dynamic random access memory (DRAM)). In embodiments where data processing system 1402 includes a processor (e.g., a microprocessor), a computer program product 1433 may be provided, which computer program product includes: computer readable program code 1443 (e.g., instructions), which implements a computer program, stored on a computer readable medium 1442 of data storage system 1406, such as, but not limited to, magnetic media (e.g., a hard disk), optical media (e.g., a DVD), memory devices (e.g., random access memory), etc. In some embodiments, computer readable program code 1443 is configured such that, when executed by data processing system 1402, code 1443 causes the data processing system 1402 to perform steps described herein. In some embodiments, the SCEF node may be configured to perform steps described above without the need for code 1443. For example, data processing system 1402 may consist merely of specialized hardware, such as one or more application-specific integrated circuits (ASICs). Hence, the features of the present invention described above may be implemented in hardware and/or software. Exemplary Base Station FIG. 15 is a block diagram of an embodiment of a base station.
As shown in FIG. 15, the base station (e.g., eNB/NB 103) may include: a computer system (CS) 1502, which may include one or more processors 1555 (e.g., a general purpose microprocessor and/or one or more other data processing circuits, such as an application specific integrated circuit (ASIC), field-programmable gate arrays (FPGAs), and the like); a network interface 1505 for use in connecting the network node to a network (e.g., core network) and communicating with other units connected to the network; a transceiver 1507 coupled to an antenna 1508 for wirelessly communicating with WCDs; and a data storage system 1506 for storing information (e.g., network slice information received from a network management node (e.g., NM or DM)), which may include one or more non-volatile storage devices and/or one or more volatile storage devices (e.g., random access memory (RAM)). In embodiments where computer system 1502 includes a general purpose microprocessor, a computer program product (CPP) 1541 may be provided. CPP 1541 includes a computer readable medium (CRM) 1542 storing a computer program (CP) 1543 comprising computer readable instructions (CRI) 1544. CRM 1542 may be a non-transitory computer readable medium (i.e., magnetic media (e.g., a hard disk), optical media (e.g., a DVD), flash memory, and the like). In some embodiments, the CRI 1544 of computer program 1543 is configured such that when executed by computer system 1502, the CRI causes the computer system to perform steps described herein. In other embodiments, computer system 1502 may consist merely of one or more ASICs. Hence, the features of the embodiments described herein may be implemented in hardware and/or software. TS 23.401 4.3.7.4.2.1 General NAS level congestion control contains the functions: "APN based congestion control" and "General NAS level Mobility Management control". The use of the APN based congestion control is for avoiding and handling of EMM and ESM signalling congestion associated with UEs with a particular APN. Both UEs and network shall support the functions to provide APN based EMM and ESM congestion control. The MME may detect the NAS signalling congestion associated with the APN and start and stop performing the APN based congestion control based on criteria such as: Maximum number of active EPS bearers per APN; Maximum rate of EPS Bearer activations per APN; One or multiple PDN GWs of an APN are not reachable or indicated congestion to the MME; Maximum rate of MM signalling requests associated with the devices with a particular subscribed APN; and/or Setting in network management. The MME may detect the NAS signalling congestion associated with the UEs belonging to a particular group. The MME may start and stop performing the group specific NAS level congestion control based on criteria such as: Maximum rate of MM and SM signalling requests associated with the devices of a particular group; and/or Setting in network management. The MME may detect the NAS signalling congestion associated with the UEs that belong to a particular group and are subscribed to a particular APN. The MME may start and stop performing the APN and group specific NAS level congestion control based on criteria such as: Maximum number of active EPS bearers per group and APN; Maximum rate of MM and SM signalling requests associated with the devices of a particular group and a particular subscribed APN; and/or Setting in network management. The MME should not apply NAS level congestion control for high priority access and emergency services.
With General NAS level Mobility Management control, the MME may also reject NAS level Mobility Management signalling requests under general congestion conditions. TS 23.401 4.3.7.4.1a Throttling of Downlink Data Notification Requests Under unusual circumstances (e.g. when the MME load exceeds an operator configured threshold), the MME may restrict the signalling load that its SGWs are generating on it, if configured to do so. The MME can reject Downlink Data Notification requests for non-priority traffic for UEs in idle mode or, to further offload the MME, the MME can request the SGWs to selectively reduce the number of Downlink Data Notification requests they send for downlink non-priority traffic received for UEs in idle mode according to a throttling factor and for a throttling delay specified in the Downlink Data Notification Ack message. The SGW determines whether a bearer is to be subjected to the throttling of Downlink Data Notification Requests on the basis of the bearer's ARP priority level and operator policy (i.e. operator's configuration in the SGW of the ARP priority levels to be considered as priority or non-priority traffic). While throttling, the SGW shall throttle the Downlink Data Notification Requests for low and normal priority bearers by their priority. The MME determines whether a Downlink Data Notification request is priority or non-priority traffic on the basis of the ARP priority level that was received from the SGW and operator policy. If ISR is not active for the UE, during the throttling delay, the SGW drops downlink packets received on all its non-priority bearers for UEs known as not user plane connected (i.e. the SGW context data indicates no downlink user plane TEID) served by that MME in proportion to the throttling factor, and sends a Downlink Data Notification message to the MME only for the non throttled bearers. If ISR is active for the UE, during the throttling delay, the SGW does not send DDN to the MME and only sends the DDN to the SGSN. If both MME and SGSN are requesting load reduction, the SGW drops downlink packets received on all its non-priority bearers for UEs known as not user plane connected (i.e. the SGW context data indicates no downlink user plane TEID) in proportion to the throttling factors. The SGW resumes normal operations at the expiry of the throttling delay. The last received value of the throttling factor and throttling delay supersedes any previous values received from that MME. The reception of a throttling delay restarts the SGW timer associated with that MME. TS 23.401 4.3.7.4 MME control of overload 4.3.7.4.1 General The MME shall contain mechanisms for avoiding and handling overload situations. These can include the use of NAS signalling to reject NAS requests from UEs. In addition, under unusual circumstances, the MME shall restrict the load that its eNBs are generating on it if it is configured to enable the overload restriction. This can be achieved by the MME invoking the S1 interface overload procedure (see TS 36.300 [5] and TS 36.413 [36]) to all or to a proportion of the eNBs with which the MME has S1 interface connections. To reflect the amount of load that the MME wishes to reduce, the MME can adjust the proportion of eNBs which are sent S1 interface OVERLOAD START message, and the content of the OVERLOAD START message. The MME should select the eNBs at random (so that if two MMEs within a pool area are overloaded, they do not both send OVERLOAD START messages to exactly the same set of eNBs).
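To illustrate the SGW-side throttling of Downlink Data Notification requests quoted above, the following Python sketch suppresses non-priority DDNs in proportion to a throttling factor while priority traffic passes through; the deterministic counter replaces operator-specific selection logic, and the class and method names are assumptions introduced here.

# A minimal sketch of proportional DDN throttling at the SGW.
class SgwDdnThrottle:
    def __init__(self, throttling_factor: float):
        self.factor = throttling_factor  # fraction of DDNs to suppress
        self.seen = 0
        self.suppressed = 0

    def send_ddn(self, bearer_is_priority: bool) -> bool:
        """Priority traffic is never throttled; non-priority DDNs are
        suppressed in proportion to the throttling factor."""
        if bearer_is_priority:
            return True
        self.seen += 1
        if self.suppressed < int(self.factor * self.seen):
            self.suppressed += 1
            return False
        return True

throttle = SgwDdnThrottle(0.5)
decisions = [throttle.send_ddn(False) for _ in range(10)]
assert decisions.count(True) == 5  # half of the non-priority DDNs suppressed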
The MME may optionally include a Traffic Load Reduction Indication in the OVERLOAD START message. In this case the eNB shall, if supported, reduce the type of traffic indicated according to the requested percentage (see TS 36.413 [36]) (The MME implementation may need to take into account the fact that eNBs compliant to Release 9 and earlier versions of the specifications do not support the percentage overload indication). Using the OVERLOAD START message, the MME can request the eNB to: reject RRC connection requests that are for non-emergency and non-high priority mobile originated services (This blocks PS service and service provided by MSC following an EPS/IMSI attach procedure); reject new RRC connection requests for EPS Mobility Management signalling (e.g. for TA Updates) for that MME; only permit RRC connection requests for emergency sessions and mobile terminated services for that MME. This blocks emergency session requests from UEs with USIMs provisioned with Access Classes 11 and 15 when they are in their HPLMN/EHPLMN and from UEs with USIMs provisioned with Access Classes 12, 13 and 14 when they are in their home country (defined as the MCC part of the IMSI, see TS 22.011 [67]) (The MME can restrict the number of responses to paging by not sending paging messages for a proportion of the events that initiate paging. As part of this process, the MME can provide preference for paging UEs with Emergency Bearer Services and terminations associated with MPS ARP); only permit RRC connection requests for high priority sessions and mobile terminated services for that MME; reject new RRC connection requests from UEs that access the network with low access priority. When rejecting an RRC connection request for overload reasons the eNB indicates to the UE an appropriate timer value that limits further RRC connection requests for a while. An eNB supports rejecting of RRC connection establishments for certain UEs as specified in TS 36.331 [37]. Additionally, an eNB provides support for the barring of UEs configured for Extended Access Barring, as described in TS 22.011 [67]. These mechanisms are further specified in TS 36.331 [37]. An eNB may initiate Extended Access Barring when: all the MMEs connected to this eNB request to restrict the load for UEs that access the network with low access priority; or requested by O&M. If an MME invokes the S1 interface overload procedure to restrict the load for UEs that access the network with low access priority, the MME should select all eNBs with which the MME has S1 interface connections. Alternatively, the selected eNBs may be limited to a subset of the eNBs with which the MME has S1 interface connection (e.g. particular location area or where devices of the targeted type are registered). During an overload situation the MME should attempt to maintain support for emergency bearer services (see clause 4.3.12) and for MPS (see clause 4.3.18). When the MME is recovering, the MME can either: send, to some, or all, of the eNB(s), OVERLOAD START messages with new percentage value that permit more traffic to be carried, or the MME sends OVERLOAD STOP messages to some, or all, of the eNB(s).
In addition, to protect the network from overload the MME has the option of rejecting NAS request messages which include the low access priority indicator before rejecting NAS request messages without the low access priority indicator (see clause 4.3.7.4.2 for more information) (it cannot be guaranteed that voice services will be available for mobile terminated calls while the Mobility Management back-off timer is running; it is recommended that UEs requiring voice services are not configured for low access priority).

While various aspects and embodiments of the present disclosure have been described above, it should be understood that they have been presented by way of example only, and not limitation. Thus, the breadth and scope of the present disclosure should not be limited by any of the above-described exemplary embodiments. Moreover, any combination of the elements described in this disclosure in all possible variations thereof is encompassed by the disclosure unless otherwise indicated herein or otherwise clearly contradicted by context. Additionally, while the processes described herein and illustrated in the drawings are shown as a sequence of steps, this was done solely for the sake of illustration. Accordingly, it is contemplated that some steps may be added, some steps may be omitted, the order of the steps may be re-arranged, and some steps may be performed in parallel.

Advantages of this application include, but are not limited to, the following. The advantage of this application is generally to provide rate control, congestion control, and/or flow control of UL data from a UE and DL data from a SCEF. The advantage is that the control plane of the 3GPP system is not overloaded by CIoT devices (UEs) sending uplink small data, which could have severe effects on the system's ability to send control signaling between the MME and the UE and between the eNB and the UE. Further, using the SCEF means that DL transmission is blocked at the edge of the 3GPP network when excessive DL data has been sent and rate control is triggered, thus not wasting any additional network resources. The advantage is also that the rate control involves the UE (the CIoT device); that is, UL transmission is blocked in the UE when excessive data has been sent and rate control is triggered, thus not wasting any additional radio resources. The advantage is also that the control plane of the 3GPP system is not overloaded by servers on the Internet or Packet Data Networks (PDNs) sending downlink small data to CIoT devices (UEs), which could have severe effects on the system's ability to send control signaling between the MME and the UE and between the eNB and the UE.

Concise Description of Some Embodiments

1) Rate Control Method Performed by MME

In one aspect there is provided a rate control method performed by a mobility management entity (MME). In one embodiment, the method includes the MME receiving an uplink (UL) Non-Access Stratum (NAS) message (e.g., attach request) transmitted by a wireless communication device (WCD). The method further includes the MME, after receiving the UL NAS message, generating a downlink (DL) NAS message and transmitting the DL NAS message towards the WCD. The DL NAS message transmitted by the MME comprises information indicating a number of UL NAS messages containing user data that the WCD is permitted to send to the MME within a certain time period. In some embodiments, the number of UL NAS messages indicated by the information included in the DL NAS message is zero.
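To make the permitted-message-count mechanism of aspect 1 concrete, here is a minimal Python sketch of an MME answering an UL NAS message with a DL NAS message carrying that count and the associated period. It is an illustration only; the field names (`permitted_ul_data_messages`, `period_seconds`) are hypothetical, not fields defined by this disclosure or by 3GPP.

```python
# A minimal sketch, with assumed message fields, of aspect 1's rate control.
def handle_ul_nas(mme_state, ul_nas_msg):
    # e.g., an attach request; any user data in it would be forwarded elsewhere.
    permitted = 0 if mme_state["congested"] else mme_state["quota_per_period"]
    dl_nas_msg = {
        "type": "attach-accept",
        "permitted_ul_data_messages": permitted,  # may be zero
        "period_seconds": mme_state["period_seconds"],
    }
    return dl_nas_msg

mme_state = {"congested": False, "quota_per_period": 10, "period_seconds": 60}
print(handle_ul_nas(mme_state, {"type": "attach-request"}))
```

Setting the count to zero corresponds to the embodiment in which the WCD is permitted to send no data-bearing UL NAS messages within the period.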
In some embodiments, the DL NAS message transmitted by the MME comprises information indicating the certain time period. In some embodiments, the UL NAS message transmitted by the WCD comprises user data, and the method further comprises the MME forwarding the user data to another device. In some embodiments, the method further includes the MME receiving a second UL NAS message transmitted by the WCD, wherein the second UL NAS message comprises user data intended for another device; and the MME discarding the user data such that the MME does not forward the user data to the other device.

2) Another MME Method

In another aspect there is provided a method performed in a mobility management entity (MME) node for managing signaling congestion. In one embodiment the method includes the MME node receiving (e.g., accepting) a first control plane message (e.g., a non-access stratum (NAS) message, such as a NAS attach request message) transmitted by a wireless communication device (WCD), the first control plane message including uplink (UL) data (e.g., user plane data) intended for relay by the MME node to another device. The method also includes, after receiving the first control plane message, the MME node creating a second control plane message (e.g., a NAS attach accept message), the second control plane message identifying at least one of: i) a throttling factor that indicates a level by which the WCD should reduce the amount of UL data in any future control plane message to the MME node, and ii) a throttling delay that indicates how much time the WCD should wait before including any UL data in any future control plane message to the MME. The method further includes the MME node transmitting the second control plane message including the at least one of the throttling factor and the throttling delay, the second control plane message intended for the WCD.

In some embodiments, the method further comprises the MME node determining whether congestion of control signaling between the MME node and the WCD has deteriorated past a predetermined threshold, wherein the step of transmitting the second control plane message including the throttling factor or the throttling delay is performed in response to receiving the first control plane message from the WCD and determining that congestion has deteriorated past the predetermined threshold. In some embodiments, the method further comprises the MME node determining whether the WCD has exceeded a predetermined maximum data quota or maximum data rate, wherein the step of transmitting the second control plane message including the throttling factor or the throttling delay is performed in response to receiving the first control plane message from the WCD and determining that the WCD has exceeded a predetermined maximum data quota or maximum data rate. In some embodiments, the method further comprises, after transmitting the second control plane message, the MME node transmitting a third control plane message which includes no throttling factor and no throttling delay, wherein the omission of the throttling factor and the throttling delay is an indication that the one or more WCDs can stop throttling UL data in control plane messages.
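The throttling decision of aspect 2 can be sketched as a simple criteria check that, when met, places a throttling factor and/or delay into the second control plane message. This is an assumed illustration, not the disclosure's implementation; all field names and the example values are hypothetical.

```python
# Hypothetical sketch of the aspect-2 throttling decision at the MME node.
def build_second_control_msg(congestion_level, threshold, wcd_bytes, quota):
    msg = {"type": "attach-accept"}
    if congestion_level > threshold or wcd_bytes > quota:
        msg["throttling_factor"] = 0.8   # reduce UL data in CP messages by 80%
        msg["throttling_delay_s"] = 120  # wait before sending more UL data
    # A later message omitting both fields would signal that throttling may stop.
    return msg

print(build_second_control_msg(congestion_level=0.9, threshold=0.7,
                               wcd_bytes=5000, quota=10000))
```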
In some embodiments, the method further comprises, after transmitting the second control plane message, the MME node transmitting a third control plane message which includes another throttling factor or another throttling delay, wherein the other throttling delay overrides the throttling delay in the second control plane message and the other throttling factor overrides the throttling factor in the second control plane message. In some embodiments, the throttling factor indicates that the WCD should include no UL data in any future control plane message to the MME node until the MME node indicates stopping of throttling. In some embodiments, the method further comprises the MME node determining a maximum bit rate (MBR) at which to limit UL data in the control plane between the MME node and one or more WCDs attached to the MME node; and the MME node determining the throttling factor or the throttling delay based on the determined MBR. In some embodiments, the control plane message is a non-access stratum (NAS) message transmitted as a payload by the WCD to an eNB in an RRC message and relayed from the eNB to the MME node as a payload in an uplink S1-AP message.

3) Method Performed by WCD

In another aspect, there is provided a method performed in a wireless communication device (WCD) for managing signaling congestion. In some embodiments, the method includes the WCD transmitting a first control plane message which includes uplink (UL) data intended for relay by a mobility management entity (MME) node to another device, the first control plane message intended for the MME node. The method further includes the WCD receiving a second control plane message transmitted from the MME node. The second control plane message includes at least one of: i) a throttling factor that indicates a level by which the WCD should reduce the amount of UL data in any future control plane message to the MME node, and ii) a throttling delay that indicates how much time the WCD should wait before including any UL data in any future control plane message to the base station or to the MME. The method further includes, after receiving the second control plane message, the WCD transmitting a third control plane message with an amount of UL data (e.g., zero amount of UL data) based on the throttling factor, or with zero amount of UL data if the timer set based on the throttling delay has not yet expired, the third control plane message intended for the MME node.

In some embodiments, the method further includes, after receiving the second control plane message, the WCD receiving a third control plane message transmitted from the MME node, the third control plane message including no throttling factor and no throttling delay; and after receiving the third control plane message, the WCD transmitting toward the MME node a fourth control plane message that includes UL data, the amount of UL data not being based on any throttling factor or any throttling delay. In some embodiments, the method further includes, after receiving the second control plane message, the WCD receiving a third control plane message transmitted from the MME node, the third control plane message including another throttling factor or another throttling delay, wherein the other throttling delay overrides the throttling delay in the second control plane message and the other throttling factor overrides the throttling factor in the second control plane message.
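The WCD-side behaviour of aspect 3 can be sketched as honouring a delay timer and scaling the UL data budget by the throttling factor. The class and field names below are assumptions for illustration only.

```python
# Illustrative WCD-side throttling for aspect 3 (hypothetical names).
import time

class Wcd:
    def __init__(self):
        self.factor = 0.0
        self.hold_until = 0.0

    def on_dl_control_msg(self, msg):
        # A later message with new values overrides the old ones; a message
        # omitting both fields stops throttling (factor 0, no delay).
        self.factor = msg.get("throttling_factor", 0.0)
        delay = msg.get("throttling_delay_s", 0)
        self.hold_until = time.monotonic() + delay

    def ul_data_budget(self, wanted_bytes):
        if time.monotonic() < self.hold_until:
            return 0  # zero UL data while the delay timer runs
        return int(wanted_bytes * (1.0 - self.factor))

wcd = Wcd()
wcd.on_dl_control_msg({"throttling_factor": 0.8, "throttling_delay_s": 0})
print(wcd.ul_data_budget(1000))  # 200 bytes allowed under an 80% reduction
```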
4) Method Performed by MME Linked to a SCEF

In another aspect, there is provided a method performed in a mobility management entity (MME) node linked to a service capability exposure function (SCEF) node and to one or more wireless communication devices (WCDs). In one embodiment, the method includes the MME node determining whether one or more throttling criteria have been met (e.g., whether congestion of control signaling between the MME node and the one or more WCDs has deteriorated past a predetermined threshold). The method further includes, in response to determining that the one or more throttling criteria have been met (e.g., congestion of control signaling has deteriorated past the predetermined threshold), the MME node creating a first data delivery message (e.g., an MT NIDD response or NIDD submit downlink ack message) that includes at least one of: i) a throttling factor that indicates a level by which the SCEF node should reduce the number of downlink (DL) data delivery requests to the MME node, and ii) a throttling delay that indicates how much time the SCEF node should wait before transmitting any future data delivery request to the MME node. The method further includes the MME node transmitting the first data delivery message including the at least one of the throttling factor and the throttling delay, the first data delivery message intended for the SCEF node.

In some embodiments, the throttling factor or the throttling delay in the first data delivery message applies to requests from the SCEF that include non-internet-protocol (non-IP) data to be delivered to one of the one or more WCDs. In some embodiments, the first data delivery message is a mobile terminated (MT) non-IP data delivery (NIDD) acknowledgement message and the DL data delivery requests being throttled are MT NIDD delivery requests that include DL data for the one or more WCDs. In some embodiments, the method further includes receiving a previous data delivery request from the SCEF, wherein the step of transmitting the first data delivery message is in response to determining that congestion has deteriorated past the predetermined threshold and to receiving the previous data delivery request. In some embodiments, the method further includes the MME node, after transmitting the first data delivery message including the at least one of the throttling factor and the throttling delay, transmitting a second data delivery message which includes no throttling factor and no throttling delay, wherein the omission of the throttling factor and the throttling delay is an indication that the SCEF node can stop throttling data delivery requests to the MME node. In some embodiments, the method further includes the MME node, after transmitting the first data delivery message including the at least one of the throttling factor and the throttling delay, transmitting a second data delivery message which includes another throttling factor or another throttling delay, wherein the other throttling factor overrides the throttling factor in the first data delivery message and the other throttling delay overrides the throttling delay in the first data delivery message. In some embodiments, the method further includes the MME node determining a maximum bit rate (MBR) at which to limit DL data in the control plane between the MME node and the one or more WCDs; and the MME node determining the throttling factor or the throttling delay based on the determined MBR.
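A sketch of aspect 4, under assumed field names: the MME answers an MT NIDD delivery request from the SCEF with an acknowledgement that carries a throttling factor and/or throttling delay for subsequent non-IP DL data requests. The message layout and values are hypothetical illustrations.

```python
# Hypothetical sketch of the aspect-4 MT NIDD acknowledgement at the MME.
def build_mt_nidd_ack(criteria_met, accepted):
    ack = {"type": "MT-NIDD-response", "result": "ok" if accepted else "busy"}
    if criteria_met:
        ack["throttling_factor"] = 0.5   # SCEF should halve its DL requests
        ack["throttling_delay_s"] = 300  # and wait before the next request
    return ack

# Criteria met (e.g., control-plane congestion past a threshold): the ack
# carries throttling parameters even though this request was not accepted.
print(build_mt_nidd_ack(criteria_met=True, accepted=False))
```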
5) A Method Performed by a SCEF

In another aspect, there is provided a method performed in a service capability exposure function (SCEF) node. In one embodiment the method includes the SCEF receiving a first data delivery message (e.g., NIDD submission ack message or MT NIDD response message) transmitted by a mobility management entity (MME) node, the first data delivery message including at least one of: i) a throttling factor that indicates a level by which the SCEF node should reduce the number of downlink (DL) data delivery requests to the MME node, and ii) a throttling delay that indicates how much time the SCEF node should wait before transmitting any future data delivery request to the MME node. The method further includes, after receiving the first data delivery message, the SCEF reducing the number of data delivery requests transmitted to the MME node based on the throttling factor, or refraining from transmitting any data delivery request to the MME node if a timer based on the throttling delay has not yet expired.

In some embodiments, the method further includes, after receiving the first data delivery message, the SCEF node receiving a second data delivery message from the MME node that includes no throttling factor and no throttling delay; and after receiving the second data delivery message, the SCEF node stopping throttling of data delivery requests to the MME node. In some embodiments, the method further includes, after receiving the first data delivery message, the SCEF node receiving a second data delivery message transmitted from the MME node, the second data delivery message including another throttling factor or another throttling delay, wherein the other throttling delay overrides the throttling delay in the first data delivery message and the other throttling factor overrides the throttling factor in the first data delivery message.

6) Another Method Performed by MME

In another aspect, there is provided a method performed in a mobility management entity (MME) node for managing signaling congestion, the MME node adapted to exchange control signaling with one or more wireless communication devices (WCDs) via a base station. The method includes the MME node determining whether congestion of control signaling between the MME node and the one or more WCDs has deteriorated past a predetermined threshold. The method further includes, in response to determining that congestion has deteriorated past the predetermined threshold, the MME node generating an overload start message that indicates the base station should reject any radio resource control (RRC) connection requests being used by a WCD to access the MME node to send uplink (UL) data in a control plane message. The method further includes the MME node transmitting the overload start message toward the base station. In some embodiments, the overload start message applies to RRC connection requests being used by a WCD to access the MME node to send UL data having a normal priority level.

9) Base Station Method

In another aspect, there is provided a method performed in a base station for managing signaling congestion, the base station linked to an MME node and one or more wireless communication devices (WCDs).
The method includes the base station receiving an overload start message transmitted by the MME node, the message indicating the base station should reject any radio resource control (RRC) connection requests being used by a WCD to access the MME node to send uplink (UL) data in a control plane message; after receiving the overload start message, the base station receiving an RRC connection request from one of the WCDs, the RRC connection request including a control plane message; the base station determining whether the control plane message includes UL data; and in response to determining that the control plane message includes UL data, the base station rejecting the RRC connection request.

10) Another MME Method

In another aspect, there is provided a method performed in a mobility management entity (MME) node for managing signaling congestion. The method includes the MME node determining whether one or more throttling criteria have been met; in response to determining that the one or more throttling criteria have been met, the MME node creating a control plane message (e.g., a NAS attach accept message), the control plane message identifying at least one of: i) a throttling factor that indicates a level by which a wireless communication device (WCD) should reduce the amount of UL data in any future control plane message to the MME node, and ii) a throttling delay that indicates how much time the WCD should wait before including any UL data in any future control plane message to the MME; and the MME node transmitting the control plane message including the at least one of the throttling factor and the throttling delay, the control plane message intended for the WCD. In some embodiments, the one or more throttling criteria include at least one of: i) congestion of control signaling between the MME node and the WCD has deteriorated past a predetermined threshold; and ii) the WCD has exceeded a predetermined maximum data quota or maximum data rate.

11) Another WCD Method

In another aspect, there is provided a method performed in a wireless communication device (WCD) for managing signaling congestion. The method includes the WCD receiving a first control plane message transmitted from a mobility management entity (MME) node, the first control plane message including at least one of: i) a throttling factor that indicates a level by which the WCD should reduce the amount of UL data in any future control plane message to the MME node, and ii) a throttling delay that indicates how much time the WCD should wait before including any UL data in any future control plane message to the base station or to the MME, wherein the first control plane message is transmitted in response to one or more throttling criteria having been met. The method further includes, after receiving the first control plane message, the WCD transmitting a second control plane message with an amount of UL data (e.g., zero amount of UL data) based on the throttling factor, or with zero amount of UL data if the timer set based on the throttling delay has not yet expired, the second control plane message intended for the MME node.

12) Another MME Method

In another aspect, there is provided a method performed in a mobility management entity (MME) node linked to a service capability exposure function (SCEF) node and to one or more wireless communication devices (WCDs).
The method includes the MME node receiving a first data delivery request transmitted by the SCEF node, the first data delivery request including DL data intended for one of the one or more WCDs; after receiving the first data delivery request, the MME node creating a first data delivery response message that includes at least one of: i) a throttling factor that indicates a level by which the SCEF node should reduce the number of downlink (DL) data delivery requests to the MME node, and ii) a throttling delay that indicates how much time the SCEF node should wait before transmitting any future data delivery request to the MME node; and the MME node transmitting the first data delivery response message including the at least one of the throttling factor and the throttling delay, the first data delivery response message intended for the SCEF node.

13) Another MME Method

In another aspect, there is provided a method for CN overload control. In one embodiment, the method includes a network node (e.g., MME) determining that a load has reached a threshold. The method further includes, after determining that the load has reached the threshold, the network node transmitting to a base station an Overload Start message comprising information for configuring the base station such that the base station rejects a request transmitted by a WCD for data transfer via control plane CIoT EPS Optimization.

14) Another Base Station Method

In another aspect, there is provided a method for CN overload control. In one embodiment, the method includes a base station receiving from a network node (e.g., MME) an Overload Start message comprising information indicating that the base station may reject a request for data transfer via control plane CIoT EPS Optimization. The method further includes, after receiving the Overload Start message, the base station receiving from a WCD a request for data transfer via control plane CIoT EPS Optimization. The method further includes, in response to receiving the request transmitted by the WCD, the base station rejecting the request.

15) MME Node

In another aspect, there is provided a mobility management entity (MME) node comprising one or more processors configured for performing any one of the MME methods disclosed herein.

16) WCD

In another aspect, there is provided a wireless communication device (WCD) comprising one or more processors configured for performing any one of the WCD methods disclosed herein.

17) SCEF Node

In another aspect, there is provided a SCEF node comprising one or more processors configured for performing any one of the SCEF methods disclosed herein.

18) Base Station

In another aspect, there is provided a base station comprising one or more processors configured for performing any one of the base station methods disclosed herein.
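As a closing illustration of the overload-control aspects above (aspects 9, 13 and 14), the following hypothetical sketch shows an MME sending an Overload Start indication when load crosses a threshold, and a base station then rejecting RRC requests whose control plane message carries UL data for transfer via control plane CIoT EPS Optimization. All message fields and function names are assumptions, not defined interfaces.

```python
# Hypothetical sketch of aspects 9/13/14: Overload Start for CP CIoT data.
def mme_check_load(load, threshold):
    # Aspect 13: when load reaches the threshold, configure the base station
    # to reject requests for data transfer via control plane CIoT EPS Opt.
    if load >= threshold:
        return {"type": "OVERLOAD-START",
                "reject_cp_ciot_data_transfer": True}
    return None

def base_station_handle_rrc(overload_cfg, rrc_request):
    # Aspects 9/14: reject an RRC connection request whose embedded control
    # plane message carries UL data, while overload restriction is active.
    carries_ul_data = rrc_request.get("nas_payload_has_ul_data", False)
    if overload_cfg and overload_cfg["reject_cp_ciot_data_transfer"] \
            and carries_ul_data:
        return "reject"
    return "accept"

cfg = mme_check_load(load=0.95, threshold=0.8)
print(base_station_handle_rrc(cfg, {"nas_payload_has_ul_data": True}))   # reject
print(base_station_handle_rrc(cfg, {"nas_payload_has_ul_data": False}))  # accept
```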
63,548
11863313
DETAILED DESCRIPTION

The processes of scheduling and hybrid automatic repeat request (HARQ) caused by dynamic change of uplink and downlink transmission should also be considered. For example, the base station sends downlink control information (DCI) in a slot 1 to schedule a slot 3 and a slot 4 for uplink data transmission. Nevertheless, when downlink data with a higher priority, such as a data packet of a downlink ultra-reliable and low-latency communications (URLLC) service, needs to be transmitted suddenly, or when strong interference is detected, how the base station operates to satisfy the service requirements should be considered. For another example, in the process of sending downlink data or sending the discovery reference signal (DRS) in consecutive slots, a sudden uplink scheduling request requires the base station to immediately allocate an uplink resource to the UE for sending a service data packet with a higher priority. How to perform scheduling, how to notify the UE, and how the base station processes the HARQ of the corresponding scheduled data should be considered. Therefore, the implementation of flexible duplex and the relevant signaling design corresponding to different service requirements, as well as their impact on scheduling and subsequent HARQ, should be considered.

The present invention will be described in detail below in conjunction with the drawings and examples. A data transmission method is provided in the present invention. As shown in FIG. 1A, the method includes the steps described below. In step 101, a frame structure of each time unit within a preset duration is adjusted and determined. In step 102, the adjusted frame structure is notified to a UE. In step 103, data transmission is performed according to the adjusted frame structure.

One aspect of the present invention is how to implement flexible duplex or dynamic TDD, how to determine the uplink and downlink configuration, and how to perform indication. First, the uplink and downlink configuration is adjusted and determined by using at least one of:
a priority level of a data service;
a priority level of a channel, a signal or a link;
a sensing result of a carrier;
negotiation between adjacent cells; and
a capability of the UE.

Each configured time unit includes one of: a subframe, a slot, a mini-slot, and a number m of orthogonal frequency division multiplexing (OFDM) symbols. Here, m is an integer greater than or equal to 1. The step of notifying the UE of the adjusted frame structure includes at least one of:
indicating the adjusted frame structure through physical layer signaling;
configuring the adjusted frame structure through higher-layer signaling; and
notifying the adjusted frame structure through multicast signaling or a system message.

For example, the base station notifies the UE of the uplink and downlink configuration, or further of a blank resource, through at least one of: a system information block (SIB), a physical broadcast channel (PBCH), radio resource control (RRC), and dynamic physical layer signaling such as DCI. The blank resource represents a resource that is at least not used for transmitting data information. The step in which the frame structure is adjusted includes the steps described below. A first number of slots or mini-slots or OFDM symbols are configured for uplink transmission of uplink preset information.
The uplink preset information includes at least one of: an uplink acknowledgement (ACK)/negative-acknowledgement (NACK), a scheduling request (SR), a sounding reference signal (SRS), a preamble for initial access, and an uplink retransmitted data packet. A second number of slots or mini-slots or OFDM symbols are configured for downlink transmission of downlink preset information. The downlink preset information includes at least one of: a downlink control channel, a synchronization channel, a downlink broadcast channel, and a discovery reference signal (DRS). A third number of slots or mini-slots or OFDM symbols are configured as reserved resources or as blank resources. The blank resources represent resources that are at least not used for transmitting data information. That is to say, the semi-static and dynamic signaling indication adopts at least one of the following manners.

Manner 1: Two adjacent base stations negotiate and then semi-statically configure certain time domain positions for transmission of downlink or uplink important information. For example, certain slots or OFDM symbols are configured for transmission of an uplink ACK/NACK, an SR, an SRS, a preamble for initial access or a retransmitted data packet. Certain slots or OFDM symbols are configured for transmission of a downlink control channel, a synchronization channel, a DRS signal or the like. Then, the base station may dynamically indicate the remaining time domain resources to be used for uplink or downlink or as blank resources.

Manner 2: A ratio set, or a pattern, or a ratio set and a pattern is configured, and an index of the ratio set, or an index of the pattern, or an index of the ratio set and the pattern is indicated through dynamic signaling. That is, some ratio sets, or some patterns, or some ratio sets and patterns are semi-statically configured, and the index of the ratio sets, or the index of the patterns, or the index of the ratio sets and the patterns is dynamically indicated (a sketch of this pattern-index indication is given below).

Manner 3: A size of a subframe group/slot group is configured, and the uplink and downlink configuration of each subframe/slot in each subframe group/slot group is dynamically indicated. That is, a size of a subframe/slot group is semi-statically configured, and the uplink and downlink configuration of each subframe/slot in each subframe/slot group is dynamically indicated.

The uplink grant information includes information of a time domain position at which uplink data transmission is scheduled, and the downlink grant information includes information of a time domain position at which downlink data transmission is scheduled. The base station performs the above dynamic indication further in the manners described below. An uplink time domain position is determined according to uplink grant information for scheduling uplink data; and a downlink time domain position is determined according to downlink grant information for scheduling downlink data.
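As referenced in Manner 2 above, the following minimal sketch shows a semi-statically configured set of uplink/downlink patterns with dynamic signaling carrying only the pattern index. The pattern contents and the letters 'D'/'U'/'B' (downlink/uplink/blank) are assumptions for illustration, not values defined by this disclosure.

```python
# A sketch of Manner 2: configured pattern set plus a dynamic pattern index.
CONFIGURED_PATTERNS = {          # 'D' downlink, 'U' uplink, 'B' blank
    0: "DDDD",
    1: "DBUU",
    2: "DDBU",
    3: "DUDU",
}

def apply_dci_pattern_index(index):
    # Dynamic signaling (e.g., DCI) carries only the index; the pattern
    # itself was configured semi-statically (e.g., via higher-layer signaling).
    pattern = CONFIGURED_PATTERNS[index]
    return list(pattern)  # one attribute per slot or mini-slot in the group

print(apply_dci_pattern_index(1))  # ['D', 'B', 'U', 'U']
```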
Uplink and downlink configuration information of the subsequent k slots or m mini-slots is indicated, or the subsequent k slots or m mini-slots are indicated as blank resources, through downlink control information borne in a common search space of a downlink control channel.

Manner 1: A time domain position of uplink data is implicitly determined according to uplink grant information for scheduling the uplink data; and a time domain position of downlink data is implicitly determined according to downlink grant information for scheduling the downlink data.

Manner 2: Uplink and downlink attributes of the subsequent k slots or mini-slots are indicated, or a certain slot is indicated as a blank resource, and an attribute of each OFDM symbol in a mixed slot is indicated, through downlink control information borne in a common search space of a downlink control channel. The common control information is sent in a control area of a predefined or configured downlink time unit or is sent in each downlink time unit.

Manner 3: A change of an uplink and downlink configuration attribute of a subsequent t-th slot or s-th mini-slot is indicated, or the subsequent t-th slot or s-th mini-slot is indicated as a blank resource, through specific downlink control information borne in a specific search space of the downlink control channel, where t and s are positive integers.

In particular, for the structure of the mixed slot including both uplink and downlink, a secondary indication is used for determining each symbol to be uplink or downlink or a blank resource. First-level DCI indicates the length of each mini-slot, and second-level DCI indicates the uplink and downlink attribute or the blank resource of each mini-slot. In particular, for the carrier aggregation scenario, the method of dynamically adjusting the uplink and downlink configuration is applicable to all cells, and uplink and downlink configuration indication information of a secondary cell (Scell) is sent on a downlink control channel of a primary cell (Pcell), or is sent only on a downlink control channel of the Scell. For the dual-link scenario, the uplink and downlink configuration indication information of the Scell may also be sent on a downlink control channel of a primary secondary cell (PScell).

Another aspect of the present invention is how to deal with the impact on scheduling and HARQ, and the steps described below may be included. A slot n or a mini-slot in the slot n for downlink data transmission is adjusted to be the slot n or the mini-slot in the slot n for uplink data transmission. A slot m or a mini-slot in the slot m for the uplink data transmission is adjusted to be the slot m or the mini-slot in the slot m for the downlink data transmission. The slot n or the mini-slot in the slot n for the downlink data transmission is adjusted to be a blank resource. The slot m or the mini-slot in the slot m for the uplink data transmission is adjusted to be the blank resource. For example, since the slot n or a certain mini-slot in the slot n is temporarily and dynamically adjusted for sending a service having a high priority level in the uplink or adjusted for coordinating interference between adjacent cells, a certain downlink data packet scheduled to be originally transmitted in the slot n or the certain mini-slot in the slot n may be processed in one of the following manners.
A data packet originally sent in the slot n or the mini-slot in the slot n is discarded, and the data packet being corrupted is indicated to a terminal, where the scheduling is not counted in the number of retransmissions; or a data packet originally sent in the slot m or the mini-slot in the slot m is discarded, and the data packet being corrupted is indicated to the terminal, where the scheduling is not counted in the number of retransmissions.

The data packet to be originally sent in the slot n or the mini-slot in the slot n and the data packet to be originally sent in the slot m or the mini-slot in the slot m are sent with reduced power or a reduced modulation and coding scheme (MCS).

The data packet to be originally sent in the slot n or the mini-slot in the slot n, or the data packet to be originally sent in the slot m or the mini-slot in the slot m, is rescheduled to another resource; or the data packet to be originally sent in the slot n or the mini-slot in the slot n and the data packet to be originally sent in the slot m or the mini-slot in the slot m are rescheduled to another resource for transmission.

The data packet to be originally sent in the slot n or the mini-slot in the slot n, or the data packet to be originally sent in the slot m or the mini-slot in the slot m, is sent on a reserved resource; or the data packet to be originally sent in the slot n or the mini-slot in the slot n and the data packet to be originally sent in the slot m or the mini-slot in the slot m are sent on the reserved resource. The above-mentioned reserved resource and the other resource may substantially be time domain resources, or frequency domain resources, or time domain and frequency domain resources, and the other resource and the reserved resource may refer to different resource positions respectively.

That is to say, in a method 1, the data originally sent at the position is directly discarded, and the terminal is informed that the data packet is corrupted, thereby avoiding the influence of retransmission and merging. The ACK/NACK is not fed back and the scheduling is not counted in the number of retransmissions. In a method 2, the data packet is still sent with reduced power or with a low MCS. In a method 3, the data packet is rescheduled to another time domain position, or another frequency domain position, or another carrier. The feedback of the ACK/NACK may be processed as described below. When the re-indicated data transmission position is located later than the original position of the ACK/NACK feedback, the DCI information also includes new resource position information of the ACK/NACK feedback corresponding to the data packet, and the new position of the data packet and the corresponding position of the ACK/NACK are indicated in a manner of joint coding.

Manner 4: The base station sends the data packet on some reserved downlink resources.

Since the slot m or a certain mini-slot in the slot m is temporarily and dynamically adjusted for sending a service having a high priority level in the downlink or adjusted for coordinating interference between adjacent cells, a certain uplink data packet scheduled to be originally transmitted in the slot m or the certain mini-slot in the slot m may be processed in one of the following manners. Transmission of the data packet originally transmitted in the slot is directly relinquished. The UE continues to blindly detect DCI indication information for triggering transmission.
After the indication information for triggering transmission is detected again, the UE sends the prepared data packet at the indicated position. A timer is set: when new trigger information is received within the time interval set by the timer, transmission is performed according to the indication information; when no new trigger information is received within the time interval set by the timer, the data packet is discarded. Other time-frequency resources include a physical resource block (PRB) position or a new slot position, and may further include a codebook or an orthogonal code resource. For example, trigger transmission signaling that indicates a new scheduling position is sent to the UE originally scheduled at the position. The new scheduling position includes a new PRB position or a new slot position, and may further include a codebook or an orthogonal code resource. Alternatively, the UE sends the data packet on some reserved uplink resources.

The treatment of impacts on scheduling and HARQ timing includes the steps described below. For the scenario of semi-statically configured scheduling and feedback timing, the subframe for data transmission is determined according to the reference subframe configuration and the configured timing value; or the original semi-static configuration timing is switched to dynamic indication timing; and for dynamic signaling indication timing, the timing indication is re-modified.

In addition, different subbands in a bandwidth are configured to have different uplink and downlink configurations, and when two adjacent subbands have different uplink and downlink configurations, a guardband is provided between the two adjacent subbands. Different subbands in a large bandwidth may be configured to have different uplink and downlink configurations. When two adjacent subbands have different uplink and downlink configurations, a guardband may be provided between the two adjacent subbands to avoid adjacent-frequency interference. The base station notifies the UE of the uplink and downlink configurations of different subbands in a time-frequency two-dimensional manner. Preambles of certain signals/channels, such as discovery reference signals (DRSs)/random access channels, are sent in a transmission window, and the transmission positions of these signals may be dynamically adjusted for transmission of a service having a high priority level.

A data transmission method is further provided in the present invention, and is applied to a UE. As shown in FIG. 1B, the method includes the steps described below. In step 201, an adjusted frame structure sent by a base station is received. In step 202, data transmission is performed according to the adjusted frame structure. After the adjusted uplink and downlink configuration information or frame structure sent by the base station is received, the method further includes the step described below. When the UE determines, according to the uplink and downlink configuration information or the frame structure, a change of the uplink and downlink configuration information in a time unit corresponding to the uplink and downlink configuration information or the frame structure, an originally scheduled data packet is processed as follows. The UE blindly detects new scheduling information of the base station within a predefined time. The new scheduling information is scrambled by using a specific identifier. (A sketch of the timer-based blind-detection handling described above is given below.)
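The following minimal sketch, with hypothetical names, illustrates the timer-based handling referenced above: after its scheduled slot is reallocated, the UE keeps blindly detecting a re-scheduling/trigger DCI until the timer expires, then transmits at the indicated position or discards the packet.

```python
# Hypothetical UE-side sketch of timer-bounded blind detection of trigger DCI.
import time

def handle_preempted_packet(detect_trigger_dci, timer_s=0.02, poll_s=0.005):
    deadline = time.monotonic() + timer_s
    while time.monotonic() < deadline:
        dci = detect_trigger_dci()          # one blind-detection attempt
        if dci is not None:
            return f"transmit at {dci['new_position']}"
        time.sleep(poll_s)
    return "discard packet"                 # no trigger within the timer

# Example: a trigger DCI that arrives on the second detection attempt.
attempts = iter([None, {"new_position": "slot 6, PRBs 0-5"}])
print(handle_preempted_packet(lambda: next(attempts, None)))
```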
The new scheduling information indicates that the originally scheduled data packet is rescheduled to another time domain position, another frequency domain position, another carrier, another codebook, or another orthogonal code resource. When the control information of the rescheduling has not been detected within the predefined time, the UE relinquishes sending or receiving of the data packet, or the UE sends or receives the data packet on some reserved resources. Based on the above, the present invention provides the following examples.

Example 1

The example describes in detail how the base station notifies the UEs subject thereto of the link direction, or the uplink/downlink/blank resource attribute. The base station notifies the UE of the uplink and downlink configuration by using at least one of the following: an SIB, a PBCH, RRC, and dynamic physical layer signaling such as DCI. For example, the uplink and downlink attribute at a certain moment is determined in one of the following manners.

Manner 1: The system specifies certain fixed time domain positions, such as certain fixed slots or OFDM symbols, for sending the uplink ACK/NACK, SR, SRS, preamble for initial access or retransmitted data packet. For example, a slot 1 is used for sending the SR, the last OFDM symbol of a slot 4 is used for sending the SRS, and a slot 5 is used for sending a message 1 or a preamble signal in a random access process. The last OFDM symbol of a slot 8 is used for sending the ACK/NACK. Meanwhile, certain slots or OFDM symbols are fixedly used for sending certain downlink data or information. For example, the first OFDM symbol of a slot 0 is fixedly used for sending the downlink control channel, the slot 4 is used for sending the downlink synchronization channel, and a slot 7 is used for sending the DRS. The base station may then dynamically indicate whether the remaining resources other than the fixed resources are uplink or downlink or blank resources.

Manner 2: Two adjacent base stations negotiate and then semi-statically configure certain time domain positions for transmitting downlink data or uplink data. For example, a cell 1 and a cell 2 are two adjacent cells. When the two adjacent cells belong to the same base station, the base station may semi-statically configure, through higher-layer signaling according to requirements, certain time domain resources for transmitting uplink data or downlink data. When the two adjacent cells belong to different base stations, the two base stations may exchange information via an air interface, and then notify the UEs in the cells, through the higher-layer signaling, of the negotiated uplink and downlink attributes of the time domain positions. For example, the adjacent base stations negotiate and determine that, in the two adjacent cells, the slot 1 is used for sending the SR, the last OFDM symbol of the slot 4 is used for sending the SRS, and the slot 5 is used for sending the message 1 or the preamble signal in the random access process. The last OFDM symbol of the slot 8 is used for sending the ACK/NACK. The first OFDM symbol of the slot 0 is used for sending the downlink control channel, the slot 4 is used for sending the downlink synchronization channel, and the slot 7 is used for sending the DRS.
Then, the base station dynamically adjusts or configures the remaining time domain resources according to the uplink and downlink service requirements.

Manner 3: All resources are dynamically indicated by the base station.

Manner 4: A part of the resources are fixedly used for uplink data transmission or downlink data transmission, a part of the resources are semi-statically configured for uplink data transmission or downlink data transmission or as blank resources, and a part of the resources are dynamically indicated for uplink data transmission or downlink data transmission or as blank resources.

The base station performs the above dynamic indication further in the manners described below.

Manner 1: A time domain position of uplink data is implicitly determined according to uplink grant information for scheduling the uplink data; and a time domain position of downlink data is implicitly determined according to downlink grant information for scheduling the downlink data. That is, uplink data is scheduled to the time domain position for uplink data transmission, and downlink data is scheduled to the time domain position for downlink data transmission. The uplink grant information and the downlink grant information are borne in a specific search space of the downlink control channel. The downlink control channel is located on the first few OFDM symbols of certain slots that are semi-statically configured or fixed.

Manner 2: Explicit signaling notification is performed through common control information. For example, downlink control information is carried in a common search space of the downlink control channel to indicate the uplink and downlink attributes of the subsequent k slots or mini-slots, or to indicate a certain slot as a blank resource, and an attribute of each OFDM symbol in a mixed slot. For example, a bitmap is used for indicating whether the subsequent k slots or mini-slots have an uplink attribute or a downlink attribute, where 0 represents the uplink and 1 represents the downlink. Alternatively, inversion of a bit represents a change of the uplink and downlink attribute of the slot or the mini-slot, and no inversion represents no change of the uplink and downlink attribute of the slot or the mini-slot. If a bit corresponding to a slot changes from 0 to 1, it means that the uplink and downlink attribute is changed; otherwise, it means that the attribute of the slot is not changed.

Manner 3: A slot whose configuration changes is notified through specific control information. For example, 3 bits or 4 bits in the DCI indicate the slot or the mini-slot whose configuration has changed, and 1 bit indicates a change between uplink and downlink or the resource becoming a blank resource. For example, when the 1 bit is 0, it indicates that the uplink and downlink attribute of the slot or the mini-slot changes, and when the 1 bit is 1, it indicates that the slot or the mini-slot is a blank resource.

Manner 4: Some uplink and downlink frame structure patterns and the granularity of slot allocation changed by the base station are predefined or semi-statically configured through higher-layer signaling, and then the base station dynamically indicates the indexes of the patterns. For example, a size of the slot group is semi-statically configured to be 4, and then the uplink and downlink configuration of every 4 slots is notified through dynamic DCI signaling. The notification may also be in a manner of notifying the index of the uplink-downlink ratio. As listed in Table 1 below, each ratio index corresponds to an uplink-downlink ratio.
The configuration of each slot is determined by one-to-one correspondence in the order of first the downlink slots and then the uplink slots. Any slot not accounted for by the ratio is a blank slot. If both uplink and downlink slots exist, the blank slot is located between the downlink slots and the uplink slots. (A sketch of this index-to-slot mapping is given at the end of this passage.)

TABLE 1

Index  Signaling  Corresponding uplink-downlink ratio
1      000        0:4
2      001        1:3
3      010        2:2
4      011        3:1
5      100        4:0
6      101        3:0
7      110        0:3
8      111        2:1

The UE determines the receiving or transceiving of data, or the reservation of resources, at each time moment by receiving the above information. Through the above method, the UE can accurately know the direction of data transmission, so as to correctly receive or transmit data.

Example 2

The example describes in detail the situation in which the uplink and downlink configuration in the frame structure is dynamically changed. The uplink and downlink configuration in the frame structure includes reserving or configuring some blank resources. In addition to a dynamic change of the uplink and downlink attribute of a subframe or a slot or a mini-slot or several OFDM symbols, the base station may also dynamically indicate some blank resources and instruct the UE not to send any data on the blank resources during the data transmission. These blank resources include n consecutive PRBs in the frequency domain, and include one or more slots or one or more OFDM symbols within a slot in the time domain. For example, the base station indicates, through a bitmap in the common control information DCI of the first OFDM symbol of a slot, which ones or which one of the remaining six OFDM symbols among the seven OFDM symbols included in the slot are reserved. The reserved or blank resource positions are used for at least one of the following. The UE does not receive data at the reserved position. The reserved or blank resource positions are used for a site to sense and listen for interference. The reserved or blank resource positions are used for dynamic adjustment of receiving and sending. The reserved or blank resource positions are used for coexistence with the legacy system. Some resources are reserved for sending data packets that have not been sent. The reserved or blank resource positions are used for sending multicast services. The position of the blank symbol may also be semi-statically configured through higher-layer signaling, with the frequency domain position occupying a part of the bandwidth. The position of the blank symbol may be located between transmissions of uplink data and downlink data or between two mini-slots. The requirements of the system for forward compatibility and flexible adjustment of resources are satisfied by configuring these blank resources.

The step in which the base station notifies the uplink and downlink configuration further includes: semi-statically configuring a size of a slot group, and then indicating the uplink and downlink configuration in each slot group through dynamic signaling. As shown in FIG. 2A, the size of the slot group initially configured by the base station is 4 slots. Then, the dynamic DCI indicates that the uplink-downlink ratio in the first slot group is 0:4, and the predefined arrangement sequence is first the downlink slot and then the uplink slot. It means that the first slot to the fourth slot are all downlink. For another example, the DCI indicates that the uplink-downlink ratio in the second slot group is 2:1, and then, among the four slots, the first slot is downlink, the second slot is blank, and the third and fourth slots are uplink.
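As referenced above, the following minimal sketch reproduces the Table 1 mapping: a signalled index gives an uplink:downlink count, slots are ordered downlink first and uplink last, and any remaining slot is a blank slot placed between them. The helper name and data structure are assumptions for illustration.

```python
# A sketch of the Table 1 index-to-slot mapping (hypothetical helper names).
RATIO_TABLE = {  # 3-bit signaling -> (uplink, downlink) slot counts
    "000": (0, 4), "001": (1, 3), "010": (2, 2), "011": (3, 1),
    "100": (4, 0), "101": (3, 0), "110": (0, 3), "111": (2, 1),
}

def slots_from_index(signaling, group_size=4):
    ul, dl = RATIO_TABLE[signaling]
    blanks = group_size - ul - dl          # slots not covered by the ratio
    # Downlink slots first, blank slot(s) between, uplink slots last.
    return ["D"] * dl + ["B"] * blanks + ["U"] * ul

print(slots_from_index("111"))  # ['D', 'B', 'U', 'U'], as in the FIG. 2A example
```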
Continuing the example of FIG. 2A, the size of the slot group is then semi-statically changed to 2 slots through the higher-layer signaling. The uplink-downlink ratio in the first slot group is indicated to be 1:1 through the dynamic signaling, which means that the first slot is downlink and the second slot is uplink. Then, the uplink-downlink ratio in the second slot group is 2:0, which means that the first slot is downlink, and the second slot is also downlink. The signaling overhead can be reduced in this indication manner.

Example 3

The example describes the granularity and adjustment of dynamic TDD. The base station determines the uplink and downlink attribute at a certain moment by using at least one of the following. A priority level of a data service is used: a service having a high priority level is sent first, a corresponding uplink and downlink attribute is configured for the data packet, and the final link direction of two data packets having the same service priority level and different directions is determined through contention. A priority level of a channel or a signal is used: different channels and signals are divided into different priority levels through predefinition; a channel or signal having a high priority level is sent first, and then the base station broadcasts the corresponding uplink and downlink attribute; a channel or signal having a low priority level is delayed to be sent. A sensing result of a carrier is used: uplink and downlink data transmission is determined according to a contention result of the carrier. A result of negotiation between adjacent cells is used: if an adjacent cell performs a high-priority-level downlink data transmission at a certain moment, the current cell should also be configured to perform downlink data transmission in order to avoid cross-link interference. A capability of the UE is used.

The granularity of the dynamic change of uplink and downlink slot allocation includes: the granularity of a subframe (1 ms), the granularity of a slot, the granularity of a mini-slot, or the granularity of n OFDM symbols, where n is semi-statically configured or indicated through dynamic signaling. For example, it is assumed that the length of the mini-slot is semi-statically configured to be 2 OFDM symbols, and the structure of a certain slot is as shown in FIG. 2B. The length of the slot is 7 OFDM symbols, and the first OFDM symbol is fixedly used for the downlink control channel and includes the uplink and downlink attributes of the subsequent 3 mini-slots. For example, the bitmap is used for indicating to the user equipment that the adjusted frame structure is 011: the first mini-slot is indicated to be downlink, and the remaining two mini-slots are indicated to be uplink.

Different subbands in a system bandwidth may be configured to have different uplink and downlink configurations within the same time period. For example, for a subframe, the uplink and downlink configuration of a subband 1 is a configuration pattern 1, the uplink and downlink configuration of a subband 2 is a pattern 2, and the uplink and downlink configuration of a subband 3 is a pattern 3. Meanwhile, a guardband is provided between subbands. In the above manner, the uplink and downlink configurations of different frequency bands in the system bandwidth at different times may be dynamically adjusted.

Example 4

The example describes a situation in which the transmission of uplink data is adjusted to the transmission of downlink data.
The base station sends, in the slot 0, DCI information for continuous scheduling of multiple slots to the UE, and the DCI information is borne in the UE-specific search space. For example, as shown in FIG. 3, slots 4, 5 and 6 are continuously scheduled for sending the uplink data packet. However, in the slot 2, the base station suddenly has a downlink data packet having a high priority level, such as a URLLC packet, to be sent, and then the base station sends, in the common DCI of the slot 3, signaling for changing the uplink and downlink configuration in a slot of the frame structure. Therefore, the slot 5 is changed to be downlink, and then the base station sends downlink scheduling information to schedule the URLLC data packet to be sent in the slot 5. After all UEs detect the common indication information, the data packet of the UE scheduled in the slot 5 is processed in one of the following manners.

Manner 1: Transmission of the data packet in the slot 5 is relinquished and the data packet is directly discarded.

Manner 2: The data packet is sent in the slot 5 at a lower power.

Manner 3: The UE continues to blindly detect DCI indication information for triggering transmission. After the indication information for triggering transmission is detected again, the UE sends the prepared data packet at the indicated position. For example, the base station sends scheduling update signaling to the UE originally scheduled at the position. The scheduling update signaling indicates a new scheduling position which includes a new PRB position or a new slot position, and may further include a codebook or an orthogonal code resource. For example, the data packet originally scheduled in the slot 5 is scheduled to the slot 6 for transmission, and an orthogonal code may be configured at the same time.

Manner 4: The UE sends the data packet on some reserved uplink resources. The reserved resources are certain fixed resources or certain semi-statically configured resources. For example, the slot 7 is a reserved slot resource serving as a new position for transmission of a data packet that is originally scheduled but is not sent.

Then the base station receives the data packet according to the indicated new position or the reserved resource position. This implements data transmission and reception in the case of dynamically adjusting the uplink and downlink configuration.

Example 5

The example describes a situation in which a downlink data transmission process is adjusted to an uplink transmission process. The base station sends, on the first OFDM symbol of the slot 0, DCI information for continuous scheduling of multiple slots to the UE, and the DCI information is borne in the UE-specific search space. For example, as shown in FIG. 3, slots 0, 1, 2 and 3 are continuously scheduled for sending the downlink data packet. However, in the slot 1, the base station suddenly receives an uplink scheduling request sent by the UE requiring the base station to immediately allocate an uplink resource to the UE for sending a service data packet having a high priority level. For example, a URLLC packet is to be sent, and then the base station sends uplink and downlink change signaling in the common downlink control information of the slot 2, and the slot 2 is changed to be uplink. After the UE receives the common information, the UE that has a service with a high priority level may perform sensing on certain resources of the slot. If the resource is not used, the uplink data may be sent in a grant-free manner (a sketch of this sense-then-transmit behaviour is given below).
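The following illustrative sketch models the grant-free sense-then-transmit step referenced above; both functions are hypothetical stand-ins for physical-layer procedures, not interfaces defined by this disclosure.

```python
# Hypothetical sense-then-transmit sketch for grant-free uplink access.
def grant_free_uplink(sense_resource, send):
    # The UE senses the indicated resources of the changed slot; if they are
    # unused, it transmits its high-priority packet without an uplink grant.
    if sense_resource() == "idle":
        send()
        return "sent grant-free"
    return "resource busy, wait for grant"

print(grant_free_uplink(lambda: "idle", lambda: None))
```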
Alternatively, the base station simultaneously sends, in the UE-specific search space of the control channel in the slot 2, grant information to the UE that sent the scheduling request, to indicate certain PRB resources of certain OFDM symbols of the slot 2 and scheduling information such as the MCS. For the downlink packet originally scheduled at this position, the base station may perform processing in one of the following manners. Transmission of the data packet is relinquished; meanwhile, the UE does not receive the scheduled downlink data packet and does not feed back ACK/NACK. The data packet is still sent, at a lower power than the original transmit power. The data packet is rescheduled to another time domain position, or another frequency domain position, or another carrier. A timer is set; if the indication information of rescheduling is still not received within the time interval set by the timer, the data packet is discarded. For example, the base station resends DCI to the UE corresponding to the downlink data packet originally scheduled at the position, and notifies the UE that the original data packet is resent in the slot 4, which may be, for example, an offset of k slots from the original position, where k=2. If the re-indicated position is located later than the original position of the ACK/NACK feedback, the DCI information may include new resource position information of the ACK/NACK feedback corresponding to the data packet. For example, the transmission position of the ACK/NACK feedback corresponding to the downlink data packet originally indicated in the slot 2 is the last OFDM symbol of the slot 3. Since the data packet of the base station in the slot 2 is not sent, the UE feeds back NACK or nothing in the slot 3. If the base station re-indicates the position of the ACK/NACK feedback corresponding to the data packet as the slot 5, the UE feeds back the corresponding ACK/NACK in the slot 5 according to the new indication information after the UE receives the data packet in the slot 4. Alternatively, the base station indicates the position of the new data packet and the position of the corresponding ACK/NACK in a manner of joint coding. The base station sends the data packet on some reserved downlink resources. The reserved resources are certain fixed resources or certain semi-statically configured resources. For example, the slot 7 is a reserved slot resource serving as a new position for transmission of a data packet that is originally scheduled but is not sent. This implements data transmission, reception and feedback in the case of dynamically adjusting the uplink and downlink configuration. Example 6 The example describes the process of data scheduling adjustment by the base station in the process of sending the downlink synchronization signal and the measurement signal. For example, the system defines that the downlink synchronization signal and the channel measurement signal may be sent in some predefined time windows. It is assumed that the time window for transmission is 2 ms and includes four slots; that is, these signals have four possible transmission positions in one transmission period and may be transmitted in any one of the four slots. The base station may dynamically adjust the transmission position of these signals according to the requirements of the service priority level.
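The window-based position adjustment described above can be illustrated with a short sketch, assuming a 2 ms window of four candidate slots in which a higher-priority service may block individual candidates. All names and the blocking representation are assumptions for illustration.

```python
# Sketch: a signal (e.g., a DRS) has four candidate slots per 2 ms window,
# and the base station defers it when a higher-priority service claims the
# current candidate.

def pick_transmission_slot(window_slots, blocked_slots):
    """Return the first candidate slot in the window not claimed by a
    higher-priority service, or None if every candidate is blocked."""
    for slot in window_slots:
        if slot not in blocked_slots:
            return slot
    return None

window = [4, 5, 6, 7]   # four candidate slots in one 2 ms window
blocked = {5}           # slot 5 claimed by an uplink URLLC packet
print(pick_transmission_slot(window, blocked))  # 6 -> DRS delayed to next slot
```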
As shown in FIG. 4, the base station originally sends a downlink synchronization signal and a channel measurement signal, such as a DRS, in the slot 5. When an uplink URLLC data packet suddenly is to be sent, the base station may send common DCI information to notify that all or part of the OFDM symbols of the slot 5 are used for the uplink URLLC service, and at the same time the DRS signal is delayed to the next slot for transmission. For example, the transmission position of a message 1 in the predefined random access process is also within some predefined time windows; that is, the system reserves certain resources for transmission of the uplink preamble for initial access. If the base station has a downlink service with a high priority level to be sent, the base station may send common DCI information to notify that certain reserved resources are used for transmission of the downlink data with the high priority level, and then the UE may perform access on other reserved resources in the time window. Channel sensing measurement is performed before the access. Example 7 The example describes ACK/NACK of the data packet. When transmission of uplink and downlink data packets is dynamically changed, the size of the corresponding ACK/NACK payload may also be dynamically changed, and a resource and position of the ACK/NACK feedback may be affected. For example, if an ACK/NACK of m bits is to be fed back at a certain moment and an ACK/NACK corresponding to a newly added downlink data packet is also to be fed back at this moment, the number of bits of the ACK/NACK will increase. Alternatively, the base station allocates another moment for feeding back the ACK/NACK corresponding to the newly added downlink data packet. A feedback resource of the ACK/NACK corresponding to the downlink data is determined in at least one of the following manners. The last OFDM symbol or the first OFDM symbol of each of certain slots is semi-statically configured as a resource for the uplink ACK/NACK. For example, a slot includes 14 OFDM symbols, the base station originally schedules m data packets continuously in a scheduling unit of k (k may be 1, 2 or 4) OFDM symbols, and the ACK/NACK feedback of the m data packets is entirely performed in the slot. When the length of the uplink control channel bearing the ACK/NACK is one slot, the resource is determined in one of the following manners. Manner 1: A position of the ACK/NACK corresponding to the data packet is implicitly determined through a position of DCI corresponding to the scheduled downlink data packet. Manner 2: The base station semi-statically configures a resource set through higher-layer signaling, and then indicates the time-frequency resource through dynamic signaling. In addition, when the size of the ACK/NACK payload is dynamically changed, the resource may be determined in the following manner. Two resource positions are configured by the higher layer and a threshold is predefined. When the size of the ACK/NACK payload is greater than the threshold, the ACK/NACK is sent at a resource position 1 or sent on a long physical uplink control channel (PUCCH). When the size of the ACK/NACK payload is less than the threshold, the ACK/NACK is sent at a resource position 2 or sent on a short PUCCH.
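The threshold rule of Example 7 lends itself to a compact sketch. The concrete threshold value and the function name are assumptions; the mapping of a large payload to resource position 1 (long PUCCH) and a small payload to resource position 2 (short PUCCH) follows the text.

```python
# Sketch: two higher-layer-configured resource positions, with the long PUCCH
# used for large ACK/NACK payloads and the short PUCCH otherwise.

PAYLOAD_THRESHOLD_BITS = 2  # predefined threshold (illustrative value)

def select_ack_nack_resource(payload_bits: int) -> str:
    if payload_bits > PAYLOAD_THRESHOLD_BITS:
        return "resource position 1 (long PUCCH)"
    return "resource position 2 (short PUCCH)"

print(select_ack_nack_resource(1))  # short PUCCH
print(select_ack_nack_resource(8))  # long PUCCH
```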
Example 8 The example describes the situation in which the URLLC service and the enhanced mobile broadband (eMBB) service are multiplexed and transmitted. It is assumed that the base station sends downlink control information to a UE1 on the first OFDM symbol of the slot 1 to schedule a downlink eMBB service for data transmission in slots 1, 2, 3 and 4 continuously. A UE2 suddenly has an uplink URLLC service to be scheduled in the slot 2, and then the base station sends DCI information to the UE2 in the slot 3 to schedule transmission of the uplink URLLC data packet. Meanwhile, the base station removes all eMBB data packets originally intended to be sent in the slot 3, and sends control information to the UE1 on the last OFDM symbol of the slot 4 to notify that the eMBB data packet in the slot 3 is corrupted. The UE1 may then perform interference cancellation reception on the corrupted data after the UE1 receives the control information. Through this explicit signaling, retransmission of the entire eMBB data packet is avoided, and the spectrum efficiency is improved. Example 9 The example is directed to an impact of a dynamic change of uplink and downlink on scheduling and HARQ timing. It is assumed that the base station sends downlink control information in the slot n to schedule four downlink slots from slot (n+1) to slot (n+4) for downlink data transmission, and meanwhile indicates through dynamic signaling that the ACK/NACK corresponding to a data packet borne in each slot is fed back in a slot (n+5). The adjacent cell or the current cell has uplink data with a high priority level to be sent in the slot (n+4), so in order to ensure the performance of the data packet with the high priority level, the base station temporarily sends common DCI information to notify that the slot (n+4) is an uplink subframe. After the information is received, the UE managed by the base station does not receive downlink data in the slot (n+4). The ACK/NACK is not fed back for the downlink data packet originally scheduled at the position, and this scheduled data transmission is not counted in the maximum number of retransmissions. If new DCI information is received within the time interval set by the timer, the data packet is received according to the newly indicated position, and the ACK/NACK is fed back at the corresponding position according to the indication information.
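The Example 9 behavior can be summarized in a short sketch: when common DCI converts a scheduled downlink slot to uplink, the UE neither receives nor acknowledges in that slot, and the lost transmission is not counted toward the maximum number of retransmissions. The names and data structures are illustrative assumptions.

```python
# Sketch: scheduled DL slots minus the slots converted to UL by common DCI.

def process_scheduled_slots(scheduled_dl_slots, converted_ul_slots):
    """Return (slots actually received, slots with ACK/NACK feedback)."""
    received = [s for s in scheduled_dl_slots if s not in converted_ul_slots]
    # Converted slots: no reception, no ACK/NACK, and the lost transmission
    # is not counted toward the maximum number of retransmissions.
    return received, list(received)

n = 10
print(process_scheduled_slots([n + 1, n + 2, n + 3, n + 4], {n + 4}))
# ([11, 12, 13], [11, 12, 13]); slot n+4 is neither received nor acknowledged
```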
Example 10 The example describes the process of URLLC scheduling and data transmission. The URLLC service may be accessed in a scheduling-based manner or in a schedule-free manner. When reception of the initial transmission has an error, in order to reduce the delay, a retransmitted data packet of the UE is accessed in the schedule-free manner, or the UE performs access in the scheduling-based manner and the indicated transmission timing is symbol-level. For example, the URLLC is scheduled in units of 1 OFDM symbol, and the interval between two adjacent scheduling occasions is very small, for example, one or two OFDM symbols. The process of scheduling access may be that the base station sends downlink DCI on the third symbol of the slot 0 to notify the UE that the retransmitted data packet originally accessed in the schedule-free manner is transmitted on a symbol 6. The resource is notified by the base station through common DCI. After the DCI is received, none of the UEs sends uplink data on the resource. The resource is a resource specific for scheduling retransmitted data packets, and will not be occupied by other UEs. In addition, for the schedule-free UE, when the initial transmission error occurs or when the UE does not receive the ACK of the base station within a predefined time, the UE may perform retransmission according to the predefined frequency-hopping pattern. The frequency-hopping pattern is separated by k OFDM symbols in the time domain, where k is less than a predefined threshold, and a random frequency domain position is used. Alternatively, the UE uses a different orthogonal code for each automatic retransmission, or the base station allocates a frequency-hopping pattern for the automatic retransmission when rescheduling is performed (a sketch of this hopping behavior is given after Example 12 below). The above method is used to meet the requirements of low-delay services. Example 11 The example describes the processing flow of the method, applied to the base station side, provided by the present invention. As shown in FIG. 4, first the base station determines an uplink-downlink ratio in a certain cell through negotiation with an adjacent site and according to its own uplink and downlink service requirements. If a data packet having a high priority is to be sent in a cell 1 at a certain moment, the uplink and downlink attribute of the two cells is determined by the data packet. If the data packet is an uplink data packet, uplink data transmission is performed in both of the two adjacent cells at the moment. If the data packet is a downlink data packet, downlink data transmission is performed in both of the two cells at the moment. The processing flow on the base station side corresponding to the example is described in conjunction with FIG. 5. When an uplink data packet and a downlink data packet have the same priority level, the uplink and downlink attribute at the moment is determined through contention. Then, the base station notifies the UEs served by the base station of the determined uplink-downlink ratio. The notification method includes higher-layer signaling, or dynamic physical layer control signaling, or higher-layer signaling and dynamic physical layer control signaling. Then, downlink data is sent and uplink data is received according to the uplink and downlink positions. During the process, the base station temporarily adjusts the uplink and downlink attribute or the blank resource at a certain moment according to the service requirements or the measured interference status. Example 12 The example describes the processing flow of the method, applied to the terminal side, provided by the present invention. As shown in FIG. 6, first the terminal receives relevant information about the uplink and downlink configuration sent by the base station. Then the terminal sends or receives data according to the configuration information. The base station described in the present invention includes a Node B, an evolved base station (eNode B), a home Node B, a relay node (RN), a macro base station, a micro base station, and the like.
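The following sketch illustrates the frequency-hopping retransmission of Example 10, referenced above: retransmissions separated by k OFDM symbols (k below a predefined threshold) at pseudo-random frequency domain positions. The PRB range, the threshold value, and the seeded generator are illustrative assumptions.

```python
# Sketch: grant-free retransmission hopping in time (k-symbol gaps) and
# frequency (random PRB per retransmission).
import random

K_MAX = 4          # predefined threshold on the symbol gap k (assumed)
NUM_PRBS = 50      # assumed size of the frequency-domain hopping range

def hopping_pattern(start_symbol: int, k: int, num_retx: int, seed: int = 0):
    assert k < K_MAX, "k must be less than the predefined threshold"
    rng = random.Random(seed)  # seeded so both ends could derive the pattern
    symbol, pattern = start_symbol, []
    for _ in range(num_retx):
        symbol += k
        pattern.append((symbol, rng.randrange(NUM_PRBS)))
    return pattern

print(hopping_pattern(start_symbol=6, k=2, num_retx=3))
# [(8, prb0), (10, prb1), (12, prb2)] with seeded pseudo-random PRB indices
```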
Based on the above description, a base station is further provided in the present invention. As shown in FIG. 7A, the base station includes a control unit 71 and a communication unit 72. The control unit 71 is configured to adjust and determine a frame structure of each time unit within a preset duration. The communication unit 72 is configured to notify a UE of the adjusted frame structure and perform data transmission according to the adjusted frame structure. Each time unit includes one of: a subframe, a slot, a mini-slot, and a number m of OFDM symbols, where m is an integer greater than or equal to 1. The control unit is configured to notify the UE of the uplink and downlink configuration information or the frame structure by using at least one of: indicating through physical layer signaling; configuring through higher-layer signaling; and notifying through multicast signaling or a system message. The control unit is configured to adjust and determine the uplink and downlink configuration through at least one of the following: a priority level of a data service; a priority level of a channel, a signal or a link; a sensing result of a carrier; negotiation between adjacent cells; and a capability of the UE. The adjusted frame structure includes structures described below. A first number of slots or OFDM symbols are configured for uplink transmission of uplink preset information. The uplink preset information includes at least one of: an ACK/NACK, an SR, an SRS, a preamble initial access, and an uplink retransmitted data packet. A second number of slots or OFDM symbols are configured for downlink transmission of downlink preset information. The downlink preset information includes at least one of: a downlink control channel, a synchronization channel, and a DRS. A third number of slots or mini-slots or OFDM symbols are configured as reserved resources or as blank resources. The blank resources represent resources that are at least not used for transmitting data information. The control unit is configured to perform at least one of the following operations. A ratio set, or a pattern, or a ratio set and a pattern is configured, and an index of the ratio set, or an index of the pattern, or an index of the ratio set and the pattern is indicated through dynamic signaling. A size of a subframe group/slot group is configured, and the uplink and downlink configuration of each subframe/slot in each subframe group/slot group is dynamically indicated. The indication through physical layer signaling includes steps described below. An uplink time domain position is determined according to uplink grant information for scheduling uplink data, and a downlink time domain position is determined according to downlink grant information for scheduling downlink data. Uplink and downlink configuration information of subsequent k slots or m mini-slots is indicated, or the subsequent k slots or m mini-slots are indicated as blank resources, through downlink control information borne in a common search space of a downlink control channel. A slot n or a mini-slot in the slot n for downlink data transmission is adjusted to be the slot n or the mini-slot in the slot n for uplink data transmission. A slot m or a mini-slot in the slot m for the uplink data transmission is adjusted to be the slot m or the mini-slot in the slot m for the downlink data transmission. The control unit is configured to perform one of the following operations. A data packet originally sent in the slot n or the mini-slot in the slot n is discarded, and the data packet being corrupted is indicated to a terminal, where the scheduling is not counted in the number of retransmissions; or the data packet originally sent in the slot m or the mini-slot in the slot m is discarded, and the data packet being corrupted is indicated to the terminal, where the scheduling is not counted in the number of retransmissions. The data packet originally to be sent in the slot n or the mini-slot in the slot n, and the data packet originally to be sent in the slot m or the mini-slot in the slot m, are sent with reduced power or a reduced MCS.
The data packet originally to be sent in the slot n or the mini-slot in the slot n and the data packet originally to be sent in the slot m or the mini-slot in the slot m are rescheduled to another time-frequency resource. The data packet originally to be sent in the slot n or the mini-slot in the slot n and the data packet originally to be sent in the slot m or the mini-slot in the slot m are sent on the reserved resource. The other time-frequency resource includes a new PRB position or a new slot position, and may further include a codebook or an orthogonal code resource. The uplink and downlink configuration further includes a step described below. Different subbands in a bandwidth are configured to have different uplink and downlink configurations, where, when two adjacent subbands have different uplink and downlink configurations, a guardband is provided between the two adjacent subbands. The control unit is further configured to perform operations described below. A signal/channel of a preset type is sent in a specified window. When a signal/channel having a higher priority level than the signal/channel of the preset type is to be sent, the signal/channel having the higher priority level is sent in the specified window. In addition, a UE is further provided. As shown in FIG. 7B, the UE includes a receiving unit 81 and a sending unit 82. The receiving unit 81 is configured to receive adjusted uplink and downlink configuration information or an adjusted frame structure sent by a base station. The sending unit 82 is configured to perform data transmission according to the adjusted frame structure. The UE further includes an adjustment unit 83. The adjustment unit 83 is configured to, when a change of the uplink and downlink configuration in a time unit is determined according to the uplink and downlink configuration information or the frame structure, process an originally scheduled data packet as follows. New scheduling information of the base station is blindly detected within a predefined time. The new scheduling information is scrambled by using a specific identifier. The new scheduling information indicates that the originally scheduled data packet is rescheduled to another time domain position, another frequency domain position, or another carrier. When control information of the rescheduling has not been detected within the predefined time, the UE relinquishes sending or receiving of the data packet, or the UE sends or receives the data packet on some reserved resources. An indication system is further provided in the present invention. As shown in FIG. 7C, the system includes a base station 91 and a UE 92. The base station 91 is configured to adjust and determine a frame structure of each time unit within a preset duration, notify a UE of the adjusted frame structure, and perform data transmission according to the adjusted frame structure. The UE 92 is configured to receive the adjusted uplink and downlink configuration information or frame structure sent by the base station, and perform the data transmission according to the adjusted uplink and downlink configuration information or frame structure.
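The adjustment-unit behavior described above can be sketched as follows: rescheduling DCI is blindly detected within a predefined time, and on timeout the UE either falls back to a reserved resource or relinquishes the packet. The detection stub and all names are illustrative assumptions.

```python
# Sketch: blind detection of rescheduling DCI with a timer-based fallback.

def handle_preempted_packet(detect_dci, timeout_slots: int, reserved_slot=None):
    """detect_dci(slot) returns a new grant (slot, resources) or None."""
    for slot in range(timeout_slots):
        grant = detect_dci(slot)
        if grant is not None:
            return ("reschedule", grant)
    if reserved_slot is not None:
        return ("reserved", reserved_slot)
    return ("relinquish", None)

# DCI found in the second monitored slot:
print(handle_preempted_packet(lambda s: (6, "PRB 0-3") if s == 1 else None, 4))
# No DCI within the timer; fall back to the reserved slot 7:
print(handle_preempted_packet(lambda s: None, 4, reserved_slot=7))
```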
A base station provided in the present invention includes a storage medium and a processor. The storage medium includes a group of instructions that, when executed, cause at least one processor to perform the operations described below. A frame structure of each time unit within a preset duration is adjusted and determined. The adjusted frame structure is notified to a UE. Data transmission is performed according to the adjusted frame structure. A UE provided in the present invention includes a storage medium and a processor. The storage medium includes a group of instructions that, when executed, cause at least one processor to perform the operations described below. An adjusted frame structure sent by a base station is received, and data transmission is performed according to the adjusted frame structure. A computer-readable storage medium is further provided in the embodiments of the present invention, and is configured to store computer-executable instructions which, when executed by a processor, implement any one of the above-mentioned methods. It should be understood by those skilled in the art that functional modules/units in all or part of the steps of the method, the system and the device disclosed above may be implemented as software, firmware, hardware, and appropriate combinations thereof. In a hardware implementation, the division of the functional modules/units mentioned in the above description may not correspond to the division of physical components. For example, one physical component may have several functions, or one function or step may be executed jointly by several physical components. Some or all components may be implemented as software executed by processors such as digital signal processors or microcontrollers, as hardware, or as integrated circuits such as application specific integrated circuits. Such software may be distributed on a computer-readable medium, which may include a computer storage medium (or a non-transitory medium) and a communication medium (or a transitory medium). As is known to those skilled in the art, the term computer storage medium includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storing information (such as computer-readable instructions, data structures, program modules or other data). The computer storage medium includes, but is not limited to, a random access memory (RAM), a read-only memory (ROM), an electrically erasable programmable read-only memory (EEPROM), a flash memory or other memory technologies, a compact disc read-only memory (CD-ROM), a digital versatile disc (DVD) or other optical disc storage, a magnetic cassette, a magnetic tape, a magnetic disk storage or other magnetic storage devices, or any other medium used for storing desired information and accessible by a computer. In addition, as is known to those skilled in the art, the communication medium generally includes computer-readable instructions, data structures, program modules or other data in modulated data signals such as carriers or other transmission mechanisms, and may include any information delivery medium. The above are only exemplary embodiments of the present invention and are not intended to limit the scope of the present invention. INDUSTRIAL APPLICABILITY A data transmission method, a base station, a user equipment, and a system are provided in the embodiments of the present invention. The base station side can flexibly adjust the frame structure of each time unit within the preset duration and send the adjusted frame structure to the UE, so that the data transmission can be performed between the base station and the UE according to the adjusted frame structure.
Therefore, dynamic uplink and downlink data transmission according to service requirements is implemented.
58,340
11863314
DETAILED DESCRIPTION Certain details are set forth herein to provide an understanding of described embodiments of technology. However, other examples may be practiced without various ones of these particular details. In some instances, well-known circuits, control signals, timing protocols, and/or software operations have not been shown in detail in order to avoid unnecessarily obscuring the described embodiments. Other embodiments may be utilized, and other changes may be made, without departing from the spirit or scope of the subject matter presented herein. Examples described herein include systems and methods for granular interference aware selection of modulation and coding schemes (MCS) including user scheduling. Examples may make MCS selections per-sub-band and/or per-user, and in some examples, per-sub-band, per-user, and/or per-stream (e.g., per-number of streams), which may allow for a finer-grain selection of MCS; for example, an appropriate MCS may be selected for each sub-band per-user per-stream based at least in part on the observance of interference (e.g., channel interference), which may improve transmission performance at each sub-band. In some examples, the interference aware adaptive MCS selection and user scheduling techniques described herein may be utilized in broadband wireless access time-division-duplex (TDD) retro-directive beamforming (RDB) systems. As should be appreciated, and as used here, per-sub-band per-stream may be referred to as "granular." As one example, signaling of a subMar, as described herein, may occur at a granular level, per-sub-band per-stream. Examples described herein generally pertain to broadband wireless access systems with dynamic user allocations in either time or frequency or spatial dimensions. Examples of broadband wireless access systems include Long-Term-Evolution (LTE) and WiMAX systems. Such systems generally select the modulation and coding scheme (MCS) based on different metrics to achieve a desired throughput and error rate operating point. The techniques (e.g., algorithms) for selecting the MCS may be referred to as adaptive coding and modulation (ACM) techniques (e.g., algorithms), which may distinguish them from systems that use a fixed MCS. One problem in wireless access is interference, which in some examples may be caused by other wireless transmitters nearby. This may particularly be a problem in unlicensed parts of the radio frequency band but can be a problem in any band. ACM techniques that detect interference and consider interference in MCS selection (e.g., interference aware ACM techniques) may perform better in areas or bands where the wireless access system is subject to interference that may not be spatially separable. Further improvements may be achieved by including interference awareness in the scheduling of users. For example, avoiding scheduling users on frequency bands with interference may improve user experience as well as improve network efficiency. Examples described herein include a suite of ACM techniques for selection of MCS and user scheduling techniques. Examples described herein include techniques designed for time-division-duplex (TDD) retro-directive beamforming (RDB) broadband wireless communications (e.g., access) systems with dynamic allocations in time, frequency, or space (e.g., spatial multiplexing of multiple users).
Generally, a user herein may refer to a particular communications node (e.g., a residential node) and/or a particular computing system connected to a wireless communications node. Examples described herein may include one or more base nodes, which may be intended to communicate wirelessly with one or more other communications nodes (e.g., residential nodes). A base node may, for example, be positioned on a tower or other centralized location. A residential (remote) node may be particular to a building, home, office, or other location. The residential node may in turn be in communication with one or more electronic devices and may facilitate communication between the electronic devices and the base node. Any number or type of electronic device may be serviced using communication techniques described herein including, but not limited to, computing systems, servers, desktops, laptops, tablets, cellular phones, appliances, vehicles, etc. Recall that many existing wireless systems utilize a broadcast control channel where allocations are signaled with one MCS per allocation, where an allocation may, in some examples, span a small and/or a large frequency band. Error rates may be reduced in such systems by using hybrid-automatic-repeat-request (HARQ), which sends a Negative Acknowledgement (NACK) if errors in the transmission are detected, and all or portions of the transmission are repeated. A drawback of this scheme is a high implementation complexity of the HARQ, including storage of the LLRs. Often many parallel HARQ streams are used, which further increases storage requirements. Another drawback of several re-transmissions is an increased latency. Frequency division duplex (FDD) systems that can quickly send an ACK/NACK on another frequency have a relatively low latency even with multiple re-transmissions. Time-division-duplex (TDD) systems have to wait until the switch in link-direction plus a transmit time gap due to propagation delays between uplink and downlink, causing an increase in the latency of the system. As used herein, downlink (DL) may refer to a transmission from a base station (BS) to user equipment (UE). Similarly, and as used here, uplink (UL) may refer to a transmission from a UE to a BS. Examples described herein include ACM systems and techniques for selection of the MCS, which may be used with TDD retro-directive beamforming (RDB) broadband wireless access systems with dynamic allocations in time, frequency, and/or spatial multiplexing of multiple users. Retro-directive beamforming generally refers to use of the spatial structure of the received signal to decide the spatial structure of a transmitted signal. For example, if a received signal impinging on the receiving antenna array comes primarily from one direction, the transmitter may then transmit back in the same direction. Both analog and digital forms of RDB are possible, but the examples herein generally focus on a digital implementation. Since the characteristics of the radio channel typically are frequency dependent, RDB is often implemented for TDD systems where both transmission directions (e.g., from the BS to the UE, from the UE to the BS, from a BN to an RN, from an RN to a BN, etc.) use the same frequency so the receive beamforming weights can be used to derive the transmit beamforming weights. Beamforming weights generally refer to how to weight the signals received and/or transmitted to and/or from different antennas.
This weighting of signals corresponds to forming a spatial signature that may match the channel and may change (e.g., improve and/or optimize) a performance metric. Examples of performance metrics include throughput and signal to noise plus interference ratio (SNIR). For an RDB system with allocations that are dynamic in time, frequency, and/or space, a single MCS per allocation may not be optimal. Consider an orthogonal frequency-division multiple access (OFDMA) RDB system where the allocation unit is a sub-band that is defined as a group of consecutive sub-carriers. The size of a sub-band may be selected such that the channel conditions are similar within a sub-band. However, the conditions between sub-bands can potentially be very different due in part to retro-directive beamforming and the spatial compatibility with other users allocated at each sub-band. Another example that leads to a different MCS per-sub-band per-stream (e.g., per-spatial-stream, per-number of streams) is that a new spatially multiplexed user can enter the sub-band and change the conditions on that sub-band compared to other sub-bands for a short time. Hence, the signal quality conditions change over time, sub-band (frequency), and spatial multiplexing conditions (other users). Operating in an unlicensed band (as may occur in examples described herein) may further expose the system to external interference that also can be dynamic in terms of time, frequency, and space. Under these conditions, a single MCS for an allocation spanning multiple (or many) sub-bands may not be optimal. In some systems, such as RDB systems with dynamic scheduling in time, frequency, space, and possibly exposed to external interference, a better solution may be to use a separate MCS across different parts of the allocation, e.g., a fine-grained MCS selection scheme. A scheme that uses a separate MCS per-sub-band and spatial multiplexing dimension may substantially improve performance in some examples, as the MCS is selected to match the channel conditions better. Another benefit may be that fewer re-transmissions typically are required, since many errors can be reduced and/or avoided by selecting a lower MCS for parts of the band with challenging conditions. This reduces latency, which may be advantageous, for example, in TDD systems. It also may greatly reduce complexity by providing solutions that do not store many LLRs for parallel streams. Of course, example techniques for fine-grained MCS may still be compatible with HARQ, which may yield even higher performance but at a higher complexity. They may also be compatible with ARQ, which is a simpler version of HARQ where the full message is retransmitted, as opposed to HARQ where the re-transmissions typically modify the transmission and the receiver combines information from all transmissions. In some examples, ARQ and HARQ may be implemented in a plurality of ways. As such, the suite of techniques described herein may be applicable to any implementation. As one example, ARQ and HARQ schemes (e.g., techniques) may be implemented by repeating either packets or transport blocks, and the suite of techniques described herein, in some examples, is compatible with both. In some examples, an allocator, such as those described herein, may determine (e.g., decide) a number of streams based on a multi-dimensional interference margin generator, such as those described herein. In some examples, signaling of a subMar may occur at a granular level, e.g., per-sub-band, per-stream, etc.
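As a minimal sketch of such fine-grained selection (not the patent's implementation), the SNIR of each (sub-band, stream) pair can be reduced by its interference margin before a look-up-table selection, with a "no data" outcome when even the lowest entry is unsupported (the FORBID option discussed below). The thresholds, table entries, and names are illustrative assumptions.

```python
# Sketch: per-(sub-band, stream) MCS selection from an SNIR backed off by a
# per-sub-band per-user margin (subMar), with a FORBID fallback.

MCS_TABLE = [          # (minimum SNIR in dB, MCS label); values illustrative
    (22.0, "64QAM r5/6"),
    (16.0, "16QAM r3/4"),
    (9.0,  "QPSK r1/2"),
]

def select_mcs(snir_db: float, submar_db: float) -> str:
    effective = snir_db - submar_db   # back off by the interference margin
    for threshold, mcs in MCS_TABLE:
        if effective >= threshold:
            return mcs
    return "FORBID"  # no MCS supports low-error transmission on this sub-band

measurements = {   # (sub-band, stream): (SNIR dB, subMar dB), all assumed
    ("SB-1", 0): (24.0, 1.0),
    ("SB-2", 0): (14.0, 4.0),
    ("SB-2", 1): (11.0, 4.0),
}
for (subband, stream), (snir_db, submar_db) in measurements.items():
    print(subband, stream, select_mcs(snir_db, submar_db))
# SB-1 keeps a high MCS; SB-2 falls back to QPSK on stream 0, FORBID on stream 1
```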
In some examples, a “no data” MCS indicator may be a solution to the problem that even the lowest MCS may not support low error transmissions. This “no data” MCS indicator may also be referred to, in some examples, as the FORBID option, since it forbids user data to be striped on the affected sub-bands although these sub-bands may be allocated to this user by the scheduler. In some examples, the base SINR (as computed, in some examples, by UL & DL Base SINR 720 of FIG. 7 as described herein) may be used by a network management system to determine whether the majority of the users are experiencing a throughput that is far from optimal. Note that in some examples, the adaptive rate of change as described herein may help reduce the starvation occurrences, but there may still be a duration (e.g., 10-20 mins) before the subMar decays enough to allow for new allocations. As described herein, a common problem in wireless access is interference (e.g., channel interference, etc.), which may in some examples be caused by other wireless transmitters nearby. This may be particularly a problem in unlicensed parts of the radio frequency band but may be a problem in any band. Fine-grained interference aware MCS selection schemes may be able to handle interference generally, and more specifically, may be able to handle interference that covers parts of the band. Detecting and/or observing interference and making the ACM techniques aware of the interference in a fine-grained manner, to adjust the MCS in the interfered parts of the band, can substantially improve performance in the presence of spatially unresolvable (and/or spatially inseparable) interference. Furthermore, including interference knowledge in the scheduling of users may further improve performance and improve the network efficiency. As used herein, spatially inseparable interference refers to interference that cannot be mitigated through spatial means such as multiple antennas. As should be appreciated, and in some examples, systems and methods described herein may be implemented in unlicensed bands that may have no and/or some and/or significant interference. In some examples, the bands (e.g., unlicensed bands) may include spatially inseparable interference. In some examples, systems and methods described herein may be implemented in licensed bands. Accordingly, examples described herein include ACM systems and techniques (e.g., algorithms) using metrics for selecting the fine-grained MCS and user scheduling techniques under a variety of conditions, including the observance and/or determination of interference. To implement fine-grained MCS selection schemes, fine-grained control and feedback channels may be used. FIGS. 1A and 1B are schematic illustrations of a communication system 100 for interference aware adaptive selection of modulation and coding schemes (MCS), arranged in accordance with examples described herein. As described herein, system 100 of FIGS. 1A and 1B includes multiple transceivers, including transceivers 104a, 104b, and 104c. Each transceiver is depicted with two antennas. Transceiver 104a is coupled to antennas 102a and 102b. Transceiver 104b is coupled to antennas 102c and 102d. Transceiver 104c is coupled to antennas 102e and 102f. The transceivers are each coupled to a transform block (e.g., FFT/IFFT 106a-f). The signals at the right-hand side of the transform blocks in FIGS. 1A and 1B are depicted by sets 110.
One set (shown as signals arranged in frequency over time) is depicted for each antenna, and within each set are multiple sub-bands of frequencies (e.g., SB1 through SB M). There is one set for each antenna (e.g., antennas A1 through AN). The transform blocks are coupled to weight processors 108. Signals at the right-hand side of the weight processors are depicted as streams 112. Each stream may include a weighted combination of the per-antenna sets of signals. In some examples, each stream in streams 112 may be a communication stream, and in some examples, each communication stream may include a weighted combination of a per-antenna set of signals that is, in some examples, an output of a weight processor. In some examples, a system (e.g., system 100) may transmit (or multiplex) multiple information/communication streams on the same time and frequency resource using the spatial dimension (e.g., antennas through the weight processors in system 100) to separate them. This is sometimes called spatial division multiplexing, as opposed to time or frequency division multiplexing. In some examples, system 100 may use time, frequency, and spatial division multiplexing, where the spatial streams (through the antennas and weight processors) provide the spatial division. The weight processors are coupled to modulation/demodulation encode/decoders 114. In some examples, there may be one or more weight processors per-sub-band and per-stream, which each may be coupled to modulation/demodulation encode/decoders 114. The modulation/demodulation encoder/decoders may modulate, demodulate, encode, and/or decode the streams in accordance with modulation coding scheme selections made as described herein. Generally, a modulator/demodulator and/or encoder/decoder may be provided for each sub-band of each stream. ACM circuitry 126 is coupled to the modulator/demodulators and/or encoder/decoders, and to multidimensional interference margin generator 136, to provide an MCS selection for use by those components as described herein. ACM circuitry 126 may include selection circuitry for each sub-band and/or each stream and/or each user. Multidimensional interference margin generator 136 may in some examples observe interference (e.g., channel interference) present in one or more channel metrics, including but not limited to an uplink reference symbol signal to interference plus noise power ratio metric, an uplink pilot-based signal to interference plus noise power ratio metric, an uplink transport block error metric, or combinations thereof. Multidimensional interference margin generator 136 may, based at least on the observed and/or determined interference, generate a per-sub-band per-user margin and transmit (e.g., send, relay, etc.) the per-sub-band per-user margin to ACM circuitry 126. While ACM circuitry 126 is depicted as being in a base node, similar ACM circuitry may be present in a remote node, as described herein. In some examples, a remote node comprising ACM circuitry may also generate and/or determine a per-sub-band per-user margin. As used herein, per-sub-band per-user margin and subMar are used interchangeably throughout. As should be appreciated, a DL subMar is interchangeable with a downlink per-sub-band per-user margin, and an UL subMar is interchangeable with an uplink per-sub-band per-user margin. As should further be appreciated, and as described herein, in some examples, on a base node (e.g., BN), there may be many users, each one having a separate subMar.
In some examples, and from the remote node (e.g., RN) perspective, there may be a single user, e.g., itself. The modulator/demodulators and/or encoders/decoders may be coupled to a packet processor 118. The packet processor 118 may be coupled to a switch 120 for receipt of and/or provision of Ethernet or other data. A scheduler/allocator 116 may be coupled to the packet processor 118 and the ACM circuitry 126. A demand estimator 122 may be coupled to the switch 120 and the scheduler/allocator 116. A spatial database 124 may be coupled to the scheduler/allocator 116 and the ACM circuitry 126. ACM circuitry 126 may include and/or be in communication with stored metrics such as look-up table (LUT) 130, hysteresis 132, and/or margin 134. As should be appreciated, both an adaptive margin and a subMar are discussed throughout; in some examples, both may be stored as metrics, such as the margin 134 metric. The components shown in FIGS. 1A and 1B are examples. It is to be understood that additional, fewer, and/or different components may be used in other examples. Generally, components shown and described with reference to FIGS. 1A and 1B, which perform processing, calculating, or other data manipulations, may be understood to be implemented in hardware, software, or combinations thereof. For example, the weight processors, modulators/demodulators, encoders/decoders, and ACM circuitry may be implemented using one or more processors and memory encoded with executable instructions for performing their functions (e.g., software). In some examples, one or more application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), systems on a chip (SOCs), or other logic may be used. Any electronic storage may be used to store metrics or other parameters described herein (e.g., weights, look-up tables, hysteresis, margin), such as any kind of memory. It is to be understood that the communications node of FIGS. 1A and 1B may be used to both transmit and receive signals. In some instances throughout, particular components may be described with reference to their receive operation or their transmit operation; however, it should be understood that the components have dual roles and function in both the receive and the transmit paths. Examples of systems described herein, such as system 100 of FIGS. 1A and 1B, may be incorporated into and/or otherwise used to implement one or more wireless communication nodes. Examples of wireless communication nodes include base stations, routers, access points, cells, computers (e.g., servers), as well as remote nodes of a wireless network and/or mobile devices such as tablets, handsets (e.g., cellular phones), and laptops. Wireless communication nodes may be used to provide data service (e.g., Ethernet, etc.) to other devices having incorporated communication technology such as televisions, set-top boxes, gaming devices, home automation devices, appliances, and automobiles or other vehicles. Multiple systems described herein, including multiple instances of system 100 of FIGS. 1A and 1B, may be deployed in a communication environment. For example, the system 100 may be used to implement one or more base nodes and/or residential nodes (e.g., routers), which may communicate with one or more electronic devices to provide Ethernet service to the electronic device. Depending on the node type, some of the system blocks in system 100 may be different.
For example, the ACM circuitry (e.g., ACM circuitry 126 of FIGS. 1A and 1B) of a base node may select the MCS, while the ACM circuitry of a residential node may receive and apply the selected MCS. Although it may be applicable to any node, including but not limited to a remote node, FIGS. 1A and 1B generally focus on a base node perspective. Accordingly, examples of communication nodes described herein may include transceivers, such as the transceivers 104a-c shown in FIGS. 1A and 1B. Each transceiver may include a dual digital converter unit including an analog to digital converter and/or a digital to analog converter. For example, transceiver 104a may include dual digital converter unit 104d. Transceiver 104b may include dual digital converter unit 104e. Transceiver 104c may include dual digital converter unit 104f. Transceivers may generally include both transmitter and receiver components and/or share circuitry used to perform transmitting and receiving. In some examples, a transceiver may include separate transmitter and receiver components. In some examples, transceivers described herein may include and/or be coupled to two digital-to-analog converters in a dual digital converter unit, such as dual digital converter units 104d-104f. In some examples, converter units described herein convert a signal incident on one or more antennas of system 100 from analog to digital. Additionally and/or alternatively, in some examples, converter units described herein may convert a signal to be transmitted by the one or more antennas of system 100 from digital to analog. Examples of systems described herein include transceivers (e.g., wireless communication transceivers), such as transceiver 104a, transceiver 104b, and transceiver 104c of FIGS. 1A and 1B. While three transceivers are provided with reference numbers in FIGS. 1A and 1B, any number of transceivers may be included in the system (as indicated by the multiple dots in FIGS. 1A and 1B). For example, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 16, 32, 64, 128, and/or other numbers of transceivers may be used in other examples. The transceivers, at their right-hand side in FIGS. 1A and 1B, may each provide and/or receive a data stream corresponding to RF signals incident on and/or transmitted by the antenna(s) to which they are connected. The data stream may be, for example, a discretized version of the RF signals incident on one or more antenna(s). One or more data streams may include data, such as training data, which may be used (e.g., by a weight processor) to generate weights. Examples of transceivers described herein may be coupled to antennas. For example, transceiver 104a is depicted coupled to antenna 102a and antenna 102b. Transceiver 104b of FIGS. 1A and 1B is depicted coupled to antenna 102c and antenna 102d. Transceiver 104c of FIGS. 1A and 1B is depicted coupled to antenna 102e and antenna 102f. Generally, multiple antennas coupled to a single transceiver may each be used to (e.g., tuned to) receive a particular polarization (e.g., as indicated by 'Antenna V' and 'Antenna H' in FIGS. 1A and 1B). In some examples, one or more transceivers may be coupled to only a single antenna. Radio frequency (RF) signals may be incident on and/or transmitted by antennas as described herein. Generally, the RF signals may be received in a frequency band (e.g., a sub-band) that may include multiple subcarrier frequencies. System 100 may include one antenna and/or may include multiple antennas, e.g., system 100 may be a multiple antenna system, otherwise referred to as a multiple-input multiple-output (MIMO) system.
In this manner, any number of antennas may be provided in systems described herein, such as 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 16, 32, 64, 128, 256, or another number of antennas. Multiple antennas provided in a system (e.g., in a mobile communication device) may be referred to as an antenna array. The antenna array may be local to one device (e.g., the antennas may be provided in a same device), or the antenna array may be distributed in some examples. For example, the antennas shown in FIGS. 1A and 1B may be included in separate devices distributed about an environment in some examples. Transceivers in communication with the distributed antennas may be in communication with one another and/or with a beamforming network to facilitate techniques described herein (e.g., TDD RDB). In some examples, the signals incident on the one or more antennas may be time varying signals received by the one or more antennas in the time domain. In some examples, the signals to be transmitted by the one or more antennas need to be transformed from the frequency domain to the time domain. Examples of systems described herein may include transforms, such as Fast Fourier Transform (FFT) and/or Inverse Fast Fourier Transform (IFFT) blocks 106a-f. FFT and/or IFFT blocks 106a-f may be implemented in hardware, software, or combinations thereof. Circuitry may be used to implement FFTs and/or IFFTs. Transforms may be coupled (communicatively or otherwise) to one or more transceivers. For example, FFTs and IFFTs 106a and 106b are depicted coupled to transceiver 104a. FFTs and IFFTs 106c and 106d are depicted coupled to transceiver 104b. FFTs and IFFTs 106e and 106f are depicted coupled to transceiver 104c. In some examples, the transforms may convert (or transform) an output of the connected transceiver, e.g., a signal incident on one or more antennas of a wireless access system, such as antennas 102a-102f of FIGS. 1A and 1B, from the time domain to the frequency domain. In some examples, the transforms may convert (or transform) the signal incident on the one or more antennas using a Fast Fourier Transformation (FFT). Additionally and/or alternatively, in some examples, the transforms may be configured to convert (or transform) a signal to be transmitted by one or more antennas, such as antennas 102a-102f of FIGS. 1A and 1B, from the frequency domain to the time domain. In some examples, an inverse transform may convert (or transform) the signal to be transmitted by one or more antennas using an Inverse Fast Fourier Transformation (IFFT). A size of the transform may correspond to a number of subcarriers used in the communications sent to and from the communications node of FIGS. 1A and 1B. Any number of subcarriers may be used, for example, 4096 subcarriers over a 40 MHz carrier. During operation, signals at the right-hand side of the transform blocks 106a-f in FIGS. 1A and 1B are depicted as sets 110. Each set corresponds to signals provided to and/or from a particular antenna (e.g., a particular transceiver). The sets 110 include signals arranged in frequency and time, with the frequencies spanning multiple sub-bands. Each sub-band may include a number of sub-carriers, 52 sub-carriers in one example, although any number may be used. Generally, a sub-band spans a portion of the frequency (e.g., a portion of the sub-carriers) over which channel characteristics may be approximated as flat (e.g., equal). The sub-bands are labeled SB-1 through SB-M in FIGS. 1A and 1B.
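The transform stage can be sketched as follows, using the sizes mentioned in the text (4096 subcarriers, 52 subcarriers per sub-band) as assumptions for a minimal example; the grouping and names are illustrative only.

```python
# Sketch: the time-domain signal from one antenna is converted to the
# frequency domain with an FFT and the subcarriers are grouped into sub-bands.
import numpy as np

NUM_SUBCARRIERS = 4096
SUBCARRIERS_PER_SUBBAND = 52

rng = np.random.default_rng(0)
time_domain = rng.standard_normal(NUM_SUBCARRIERS) + 1j * rng.standard_normal(NUM_SUBCARRIERS)

freq_domain = np.fft.fft(time_domain)   # FFT on receive
num_subbands = NUM_SUBCARRIERS // SUBCARRIERS_PER_SUBBAND
subbands = [
    freq_domain[m * SUBCARRIERS_PER_SUBBAND:(m + 1) * SUBCARRIERS_PER_SUBBAND]
    for m in range(num_subbands)
]
print(f"{num_subbands} sub-bands of {SUBCARRIERS_PER_SUBBAND} subcarriers each")

# On transmit, the inverse transform (IFFT) is applied instead:
reconstructed = np.fft.ifft(freq_domain)
assert np.allclose(reconstructed, time_domain)
```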
Examples of systems described herein may include a beamforming network (e.g., one or more beamformers), such as beamforming network 128 of FIGS. 1A and 1B. Beamforming network 128, in some examples, may include one or more weight processors, such as weight processor 108. The beamforming network may provide the signal to be transmitted by each of the multiple transceivers and/or antennas of the system (e.g., system 100 of FIGS. 1A and 1B). The beamforming network may additionally and/or alternatively be used to combine signals received at multiple antennas to form a received signal (e.g., a stream, a data stream, etc.). The beamforming network 128 may include a weight processor 108, which may be used to calculate weights for each of the transceivers and/or antennas in the system. Generally, beamforming networks may be implemented using one or more processors (e.g., central processing unit(s) (CPUs), microprocessors, etc.), which may execute instructions stored in one or more computer readable media, and/or circuitry for performing computations (e.g., field programmable gate array (FPGA) circuitry, and/or system on chip (SOC), and/or advanced reduced instruction set machine (ARM) computing architectures, and/or application specific integrated circuits (ASICs)). In some examples, beamforming network 128 may be a bi-directional beamforming network. In some examples, beamforming network 128 may be a time-division-duplex (TDD) retro-directive beamforming (RDB) network. Beamforming networks described herein and utilized in devices and/or systems herein may include one or more weight processors, such as weight processor 108 of FIGS. 1A and 1B. Weight processor 108 may be used to calculate and/or apply weights to be used by beamforming network 128 to generate signals for transmission by multiple (or one or more) antennas in system 100 and/or to combine signals received by multiple antennas in system 100. Accordingly, such a beamforming network 128 may receive data streams from one or more transceivers and may calculate and/or use, for each subcarrier of a frequency band, a respective plurality of weights used to generate signals for transmission by, or combine the signals received at, respective ones of the plurality of antennas. In the example of FIGS. 1A and 1B, the weights are depicted as W having three indices, where the indices correspond to sub-band, stream, and antenna. For example, W1,2,4 would refer to the weight used for sub-band 1 (SB-1), stream 2, and antenna 4. In this manner, the weight processors may utilize the weights to combine certain sub-bands from certain antennas into streams. A particular weight processor may, for example, be particular to a sub-band and a stream; for example, the "SB1, STRM 1 weight processor" of FIGS. 1A and 1B may create one sub-band of signals for one stream. The weight processor may create the sub-band of signals for the stream by combining signals from the multiple antennas in that sub-band in accordance with the weights. In this manner, different weightings of antennas may be used for different sub-bands and different streams. Accordingly, the signals at the right-hand side of weight processors in FIGS. 1A and 1B are depicted as streams S1-SM, with data in each stream arranged in multiple sub-bands, each having frequency and time dimensions.
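A minimal sketch of the receive combining performed by a weight processor follows: stream k on sub-band s is a weighted sum over antennas using the three-index weights W described above. The array shapes, the conjugation convention, and all values are illustrative assumptions.

```python
# Sketch: combine per-antenna frequency-domain signals into per-sub-band,
# per-stream outputs using weights W[s, k, a].
import numpy as np

NUM_SUBBANDS, NUM_STREAMS, NUM_ANTENNAS, NUM_SYMBOLS = 4, 2, 6, 8

rng = np.random.default_rng(1)
# Per-antenna signals: one block per (sub-band, antenna) over time symbols.
x = rng.standard_normal((NUM_SUBBANDS, NUM_ANTENNAS, NUM_SYMBOLS)).astype(complex)
# Weights W[s, k, a]: one complex weight per sub-band, stream, and antenna.
w = rng.standard_normal((NUM_SUBBANDS, NUM_STREAMS, NUM_ANTENNAS)).astype(complex)

# Stream k on sub-band s is the weighted sum over antennas:
#   S[s, k, t] = sum_a conj(W[s, k, a]) * X[s, a, t]
streams = np.einsum("ska,sat->skt", w.conj(), x)
print(streams.shape)  # (4, 2, 8): per-sub-band, per-stream combined signals
```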
Examples of systems described herein may include one or more encoders, decoders, modulators, and/or demodulators, depicted herein as modulation/demodulation encode/decoders 114 of FIGS. 1A and 1B. In some examples, modulation/demodulation encode/decoders 114 may be configured to modulate and encode data intended for transmission by the system and/or de-modulate and decode data intended for reception by the system. While a select number of encoders, decoders, modulators, and demodulators are depicted as modulation/demodulation encode/decoders 114 in system 100 of FIGS. 1A and 1B, it should be appreciated that any number of encoders, decoders, modulators, and demodulators may be used in system 100, including additional, fewer, and/or different encoders, decoders, modulators, and demodulators. As shown in FIGS. 1A and 1B, there may generally be one modulator/demodulator and/or encoder/decoder for each sub-band of each stream. The encoders, decoders, modulators, and demodulators of FIGS. 1A and 1B may operate in accordance with an MCS selection received from ACM circuitry 126. The MCS selection may be particular to a certain sub-band and/or certain stream and/or a certain user (e.g., per-user). Accordingly, the MCS selection used to modulate/demodulate and/or encode/decode a particular sub-band of a particular stream (e.g., SB-M of S1) may be different than the MCS selection used to modulate/demodulate and/or encode/decode a different sub-band of that stream or a different stream (e.g., SB-2 of S2). In some examples, the MCS for the same sub-band may be different per-stream. For example, the MCS of SB-M of S1 may be different from SB-M of S2. In this manner, an MCS selection may be made with regard to metrics appropriate for a particular sub-band and/or stream. In some examples, and as described herein, the MCS selection used to modulate/demodulate and/or encode/decode a particular sub-band of a particular stream for a particular user may be based on a per-sub-band margin generated based at least on observed interference. In some examples, modulation/demodulation encode/decoders 114 may be configured to modulate a data stream intended for transmission by the one or more antennas of system 100 to, for example, other communications nodes (e.g., remote nodes, residential nodes, etc.) within a wireless communications system. In some examples, modulation/demodulation encode/decoders 114 may modulate the data stream in accordance with a modulation and coding scheme selected by modulation and coding scheme circuitry, such as ACM circuitry 126 of FIGS. 1A and 1B, as described herein. In some examples, modulation/demodulation encode/decoders 114 may demodulate a data stream derived from a signal incident on the one or more antennas of system 100 and received from, for example, other communications nodes (e.g., remote nodes, residential nodes, etc.) within a wireless communications system. In some examples, modulation/demodulation encode/decoders 114 may demodulate the data stream in accordance with a modulation and coding scheme selected by modulation and coding scheme circuitry, such as ACM circuitry 126 of FIGS. 1A and 1B, as described herein. In some examples, modulation/demodulation encode/decoders 114 may be configured to encode a data stream intended for transmission by the one or more antennas of system 100 to, for example, other communications nodes (e.g., remote nodes, residential nodes, etc.) within a wireless communications system. In some examples, modulation/demodulation encode/decoders 114 may encode the data stream in accordance with a modulation and coding scheme selected by modulation and coding scheme circuitry, such as ACM circuitry 126 of FIGS. 1A and 1B, as described herein.
In some examples, modulation/demodulation encode/decoders 114 may be configured to modulate a data stream intended for transmission by the one or more antennas of system 100 to, for example, other communications nodes (e.g., remote nodes, residential nodes, etc.) within a wireless communications system. In some examples, modulation/demodulation encode/decoders 114 may modulate the data stream in accordance with a modulation and coding scheme selected by modulation and coding scheme circuitry, such as ACM circuitry 126 of FIGS. 1A and 1B, as described herein. In some examples, modulation/demodulation encode/decoders 114 may demodulate a data stream derived from a signal incident on the one or more antennas of system 100 and received from, for example, other communications nodes (e.g., remote nodes, residential nodes, etc.) within a wireless communications system. In some examples, modulation/demodulation encode/decoders 114 may demodulate the data stream in accordance with a modulation and coding scheme selected by modulation and coding scheme circuitry, such as ACM circuitry 126 of FIGS. 1A and 1B, as described herein. In some examples, modulation/demodulation encode/decoders 114 may be configured to encode a data stream intended for transmission by the one or more antennas of system 100 to, for example, other communications nodes (e.g., remote nodes, residential nodes, etc.) within a wireless communications system. In some examples, modulation/demodulation encode/decoders 114 may encode the data stream in accordance with a modulation and coding scheme selected by modulation and coding scheme circuitry, such as ACM circuitry 126 of FIGS. 1A and 1B, as described herein. In some examples, modulation/demodulation encode/decoders 114 may be configured to decode a data stream derived from a signal incident on the one or more antennas of system 100 and received from, for example, other communications nodes (e.g., remote nodes, residential nodes, etc.) within a wireless communications system. In some examples, modulation/demodulation encode/decoders 114 may decode the data stream in accordance with a modulation and coding scheme selected by modulation and coding scheme circuitry, such as ACM circuitry 126 of FIGS. 1A and 1B, as described herein. In some examples, a decoded stream may comprise decoded data bits. In some examples, an encoded RF signal may comprise encoded data bits. In some examples, beamforming network 128 may operate between the transceivers of system 100 and other components of system 100, such as, for example, modulation/demodulation encode/decoders 114, scheduler 116, packet processor 118, demand estimator 122, spatial database 124, ACM circuitry 126, and switch 120 (or hub), which may be used to route data traffic. In some examples, beamforming network 128 may be bi-directional, meaning that it may function as an adaptive (receive) beamformer during the receive cycle, and as a transmit beamformer during the transmit cycle. Examples of systems described herein may include a hub or switch, such as switch 120 of FIGS. 1A and 1B, that may provide demodulated data to and/or from the encoders. In this manner, RF signals incident on the antennas may generally be converted into received data, and/or data provided to the system 100 may be converted into RF signals transmitted by the antennas. Examples of systems described herein may include one or more schedulers, such as scheduler 116 of FIGS. 1A and 1B. In some examples, scheduler 116 may provide scheduling information to a packet processor, such as packet processor 118 of FIGS. 1A and 1B. In some examples, the scheduling information provided to packet processor 118 may include information regarding sub-band and/or stream allocations for particular users. In some examples, the scheduling information may include spatial information, compatibility information, or combinations thereof. In some examples, scheduler 116 may additionally and/or alternatively provide scheduling information to other components of system 100 described herein. In some examples, scheduler 116 may determine an allocation (e.g., an allocation for a user), where the allocation comprises an uplink allocation, a downlink allocation, or combinations thereof. In some examples, the uplink allocation and/or the downlink allocation may be based at least on a determined uplink per-sub-band per-stream margin, as described herein. In some examples, systems described herein may make matching UL and DL allocations (e.g., to make retro-directive beamforming work). In some examples, an allocator, such as allocators described herein, may base an allocation, in part, on the UL and DL per-sub-band, per-stream, and per-user margin subMar. As described herein, a wireless communications system may include various communications nodes, including base nodes, remote nodes, residential nodes, and the like. In some examples, the distance information transmitted by scheduler 116 to packet processor 118 may include the distance between a particular remote (e.g., residential) node of the wireless communications system and the base node.
In some examples, the spatial information transmitted by scheduler 116 to packet processor 118 may include location information regarding various remote (e.g., residential) nodes of the wireless communications system. In some examples, the compatibility information transmitted by scheduler 116 to packet processor 118 may include information regarding the compatibility between the base node and a remote (e.g., residential) node. In some examples, scheduler 116 may provide information to packet processor 118 regarding which sub-band(s) and/or which stream(s) a user (e.g., a remote node) has allocated. In some examples, that information may be based at least in part on spatial information (between remote nodes of the wireless communications system), compatibility information (between remote nodes of the wireless communications system), distance information, location information, and the like. In some examples, scheduler 116 may provide allocation information to adaptive modulation and coding scheme circuitry, such as ACM circuitry 126 of FIGS. 1A and 1B. In some examples, the allocation information provided to ACM circuitry 126 may include current allocation information, upcoming allocation information, spatial information, or combinations thereof. In some examples, ACM circuitry 126 may base an initial MCS selection at least in part on the allocation information provided by scheduler 116. In some examples, ACM circuitry 126 of FIGS. 1A and 1B may base an initial MCS selection exclusively on the allocation information provided by scheduler 116. In some examples, ACM circuitry 126 of FIGS. 1A and 1B may base an initial MCS selection in part on the allocation information provided by scheduler 116 in addition to other information described herein. In some examples, the allocation information may be dynamic in time, frequency, space, or combinations thereof. In some examples, ACM circuitry 126 may share information with scheduler 116, such as, for example, margin 134, hysteresis 132, and LUT 130. In some examples, based at least on this information, scheduler 116 may determine (e.g., decide) an initial MCS selection. In some examples, scheduler 116 may be configured to determine an allocation (e.g., determine allocation information to provide to ACM circuitry 126) based at least on an expected throughput per-sub-band per-stream. In some examples, scheduler 116 may be configured to determine the expected throughput per-sub-band per-stream based at least on channel sounding feedback, transmit power, an uplink per-sub-band per-stream margin, a downlink per-sub-band per-stream subMar, or combinations thereof. In some examples, allocations may be based on both the UL and the DL subMar. In some examples, the allocation that is most important may depend on UL and/or DL demand. In some examples, scheduler 116 may be configured to determine the expected throughput per-sub-band per-stream based at least on UL and/or DL SINR, UL and/or DL BASE SINR, or combinations thereof. In some examples, SINR, UL and DL BASE SINR, or combinations thereof may be determined based at least on channel knowledge, transmit power, interference knowledge, spatial properties of co-channel users, or combinations thereof. In some examples, scheduler 116 is an interference-aware scheduler. In some examples, and as used herein, BASE SINR may comprise and/or include and/or be indicative of the predicted SINR if the allocator were to make an allocation. In some examples, an SINR is computed based on channel sounding. The SINR may be modified by different margins, of which the subMar (such as the subMar described herein) is one. The SINR after those modifications may be the BASE SINR. In some examples, the calculation may be separate in the UL and DL since there are UL and DL margins. In some examples, there may be a DL BASE SINR and a UL BASE SINR.
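A minimal sketch of one way such an expected throughput per-sub-band per-stream could be formed is shown below. The derating of a sounded SINR by the margin and the LUT-based rate mapping are assumptions for illustration; the MCS rate and threshold values are hypothetical placeholders, not the scheduler 116 algorithm:

    # Hypothetical mapping from MCS index to spectral efficiency (bits/symbol)
    # and required SINR (dB); values are illustrative only.
    MCS_RATE = [0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 4.5, 5.0]
    MCS_LUT = [-2.0, 0.5, 3.0, 5.5, 9.0, 12.5, 14.0, 16.0]

    def expected_throughput(sounded_sinr_db, sub_mar_db, symbols_per_frame):
        # Derate the sounded SINR by the per-sub-band per-stream margin
        # (subMar) to obtain a BASE-SINR-like quantity, then map it to the
        # highest supportable MCS and that MCS's rate.
        base_sinr = sounded_sinr_db - sub_mar_db
        mcs = max((i for i, req in enumerate(MCS_LUT) if base_sinr >= req),
                  default=0)
        return MCS_RATE[mcs] * symbols_per_frame

    # Example: one sub-band/stream, sounded SINR of 10 dB, 2 dB subMar.
    print(expected_throughput(10.0, 2.0, symbols_per_frame=1000))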
Examples of systems described herein may include one or more packet processors, such as packet processor 118 of FIGS. 1A and 1B. In some examples, packet processor 118 may receive various information packets and allocate them (e.g., break them up, fragment them) into particular sub-bands and streams for a particular user (or remote node) of the wireless communications system described herein. In some examples, if system 100 is using ARQ and/or HARQ, such operations and/or functionality may be handled by packet processor 118. In some examples, packet processor 118 may receive other packets additionally and/or alternatively to the Ethernet packets, where the other packets are intended for transmission. Examples of other packets described herein may include information packets, input packets, output packets, modem packets, and the like. Packet processor 118 may, in some examples, break the other packets up (e.g., allocate them, fragment them) into particular sub-bands and streams for a particular user (or remote node) of the wireless communications system described herein. In some examples, packet processor 118 may break up the received Ethernet and/or other packets based on scheduling information (e.g., distance information to a particular communication node of a wireless communications system, spatial information, compatibility information, or combinations thereof) received from scheduler 116. In some examples, packet processor 118 may break up (e.g., allocate, fragment) input packets into modem packets. In some examples, packet processor 118 may break up input packets statically, dynamically, or combinations thereof. As one example, a particular Ethernet packet may need to be transmitted to a particular user of the wireless communications system described herein. Scheduler 116 may determine, receive, etc., scheduling information for that particular user. For example, scheduler 116 may determine that the particular user may receive transmissions using a particular sub-band(s) on a particular stream(s), such as sub-band M on Stream 1, as well as sub-band 4 on Stream 2. Scheduler 116 may provide this (and/or other scheduling information) to packet processor 118. Based at least on the received scheduling information from scheduler 116, packet processor 118 may break up (e.g., allocate) the Ethernet packet for the particular user to sub-band M on Stream 1, and sub-band 4 on Stream 2.
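A minimal sketch of this kind of fragmentation is shown below. The fixed modem-packet size, the names, and the round-robin placement across allocated units are assumptions for illustration, not the packet processor 118 implementation:

    from itertools import cycle

    def fragment(packet: bytes, allocations, modem_packet_size=256):
        # allocations: list of (stream, sub_band) units granted by the
        # scheduler, e.g., [("S1", "SB-M"), ("S2", "SB-4")].
        # Break the input (e.g., Ethernet) packet into fixed-size modem
        # packets and spread them across the allocated units.
        slots = cycle(allocations)
        out = []
        for i in range(0, len(packet), modem_packet_size):
            out.append((next(slots), packet[i:i + modem_packet_size]))
        return out

    # Example: a 1000-byte packet split across two allocated units.
    for (stream, sub_band), chunk in fragment(b"\x00" * 1000,
                                              [("S1", "SB-M"), ("S2", "SB-4")]):
        print(stream, sub_band, len(chunk))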
Examples of systems described herein may include one or more spatial databases, such as spatial database 124 of FIGS. 1A and 1B. Recall that the wireless communications system described herein may include various communications nodes, such as base nodes, residential nodes, remote nodes, and the like. In some examples, spatial database 124 may include spatial information regarding various communications nodes within the wireless communications system with which the communication node of FIGS. 1A and 1B may be in communication. In some examples, the spatial information included in spatial database 124 may be used to determine allocations (e.g., sub-band allocations, stream allocations, user allocations, and/or combinations thereof, and the like) by scheduler 116 of FIGS. 1A and 1B. For example, users (e.g., remote nodes, residential nodes, etc.) within a threshold distance of one another within the wireless communications system (or based on various spatial channel characteristics or geographical location(s)) may not, in some examples, be allocated the same sub-band, as it may cause interference (e.g., transmission interference, receive interference, etc.). In some examples, spatial database 124 of FIGS. 1A and 1B may provide spatial information to adaptive modulation and coding scheme circuitry, such as ACM circuitry 126 of FIGS. 1A and 1B. In some examples, ACM circuitry 126 may base an initial MCS selection at least in part on the spatial information provided by spatial database 124. In some examples, ACM circuitry 126 of FIGS. 1A and 1B may base an initial MCS selection exclusively on the spatial information provided by spatial database 124. In some examples, ACM circuitry 126 of FIGS. 1A and 1B may base an initial MCS selection in part on the spatial information provided by spatial database 124 in addition to other information described herein. Examples of systems described herein may include one or more demand estimators, such as demand estimator 122 of FIGS. 1A and 1B. In some examples, demand estimator 122 may review (e.g., analyze) and/or monitor various information packets (e.g., packets, other packets, output packets, input packets, modem packets, etc.) queued by a switch, such as switch 120 of FIGS. 1A and 1B, for various users (e.g., remote nodes, residential nodes, etc.) and/or by a packet processor, such as packet processor 118 (which in some examples may also maintain a queue), of the wireless communications system described herein. In some examples, if a user has a higher number of queued packets, demand estimator 122 may cause a scheduler, such as scheduler 116 of FIGS. 1A and 1B, to allocate more sub-bands and/or streams to that user to flush out the queue.
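As a rough sketch of that queue-driven behavior (the drain-in-one-frame rule, names, and numbers below are illustrative assumptions, not the demand estimator 122 algorithm):

    def estimate_subband_demand(queued_bytes, bytes_per_subband_per_frame,
                                max_subbands):
        # Hypothetical demand estimate: request enough sub-bands to drain the
        # user's queue in one frame, capped at what the system can grant.
        needed = -(-queued_bytes // bytes_per_subband_per_frame)  # ceiling div.
        return min(needed, max_subbands)

    # A user with a deep queue is granted more sub-bands to flush it out.
    print(estimate_subband_demand(queued_bytes=48_000,
                                  bytes_per_subband_per_frame=4_000,
                                  max_subbands=8))   # -> 8 (capped)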
Examples of systems described herein may include a margin generator, such as multidimensional interference margin generator 136. The margin generator 136 may be a multidimensional interference margin generator in some examples. In some examples, multidimensional interference margin generator 136 may be communicatively coupled to ACM circuitry 126. In some examples, multidimensional interference margin generator 136 may be configured to determine a margin (e.g., an uplink margin and/or a downlink margin) per-sub-band, per-user, and/or per-stream. In some examples, multidimensional interference margin generator 136 may be configured to determine an uplink per-sub-band, per-user, and/or per-stream margin based at least on a channel condition metric indicative of interference in the channel. In some examples, and as described herein, both a UL subMar and a DL subMar may be calculated. In some examples, the channel condition metric indicative of interference in the channel may comprise an uplink reference symbol signal to interference plus noise power ratio metric, an uplink pilot-based signal to interference plus noise power ratio metric, an uplink transport block error metric, or combinations thereof. In some examples, margin generator 136 may be configured to determine an uplink per-sub-band, per-user, and/or per-stream margin based at least on per-sub-band error information. In some examples, the per-sub-band error information may include codec quality metrics, error-detection codes, or combinations thereof. In some examples, the error-detection codes may include cyclic redundancy check codes available per-sub-band and/or per-stream and/or per-user. In some examples, margin generator 136 may receive channel condition metrics indicative of interference from one or more remote (e.g., residential) nodes of a wireless communications system. In some examples, margin generator 136 may receive channel condition metrics indicative of interference from one or more base nodes of the wireless communications system. In some examples, margin generator 136 may receive channel condition metrics indicative of interference from an administrator or owner of the wireless communications system. In some examples, margin generator 136 may receive channel condition metrics indicative of interference from the modulators, demodulators, encoders, and/or decoders, such as modulation/demodulation encode/decoders 114, as described herein. For example, error rates may be provided by encoders and/or decoders. As should be appreciated, while other remote (e.g., residential) nodes, base nodes, and administrators are discussed as sources of channel condition metrics indicative of interference, margin generator 136 may receive channel condition metrics from various other components shown and not shown within system 100 and/or within a wireless communications system, which are each within the scope of the present disclosure. In some examples, margin generator 136 may determine the uplink per-sub-band per-stream margin based at least on a rate of decay based on sub-band and stream allocation. In some examples, margin generator 136 may adapt the uplink per-sub-band per-stream margin to meet a performance criterion. In some examples, the performance criterion may be implemented using a specific error rate, a re-transmission rate, decoder quality performance metrics, or combinations thereof. In some examples, the uplink per-sub-band per-stream margin may decay over a plurality of frames and increase based at least on a performance criterion. In some examples, the performance criterion may be implemented using a specific error rate, a re-transmission rate, decoder quality performance metrics, or combinations thereof. In some examples, a rate of decay for the uplink per-sub-band per-stream margin may be determined (e.g., by ACM circuitry 126, margin generator 136, or one or more other components described herein) based at least on prior decrease decisions, increase decisions, or combinations thereof.
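One possible shape of such a margin adaptation loop is sketched below. The step sizes, the error-triggered increase, and the per-frame decay rule are illustrative assumptions, not the margin generator 136 algorithm: the per-sub-band per-stream margin decays slowly over frames and is increased when the performance criterion (here, an observed transport block error) is violated.

    def update_submar(submar_db, tbe_observed, decay_db_per_frame=0.01,
                      step_up_db=1.0, min_db=0.0, max_db=10.0):
        # Decay the margin a little every frame; jump it back up on errors.
        if tbe_observed:
            submar_db += step_up_db
        else:
            submar_db -= decay_db_per_frame
        return min(max(submar_db, min_db), max_db)

    # Per-(sub-band, stream) margins for one user, updated each frame.
    submar = {("SB-1", "S1"): 2.0, ("SB-1", "S2"): 2.0}
    errors = {("SB-1", "S1"): True, ("SB-1", "S2"): False}  # e.g., TBE feedback
    for key in submar:
        submar[key] = update_submar(submar[key], errors[key])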
Examples of systems described herein may include one or more adaptive modulation and coding circuitry, such as ACM circuitry 126 of FIGS. 1A and 1B. ACM circuitry 126 may select a modulation and coding scheme (MCS) for encoding/decoding and/or modulating/demodulating each stream of the plurality of streams incident on antennas of system 100, and/or signals for transmission by antennas of system 100. In some examples, ACM circuitry 126 selects the MCS based on channel condition metrics (e.g., fine-grained metrics) regarding wireless communication conditions. In some examples, ACM circuitry 126 selects the MCS based on channel condition metrics, such as, for example, channel condition metrics indicative of interference in the channel and/or other interference (e.g., interference observed by multidimensional interference margin generator 136 as described herein). In some examples, channel condition metrics may include a signal-to-interference-and-noise-ratio (SINR), codec conditions, or combinations thereof. In some examples, codec conditions may include error counts, transport block error rates, decoder iterations, or combinations thereof. The channel condition metrics may be particular to individual sub-bands or streams, or groups of sub-bands and streams. In some examples, the channel condition metrics indicative of interference may include an uplink reference symbol signal to interference plus noise power ratio metric, an uplink pilot-based signal to interference plus noise power ratio metric, an uplink transport block error metric, or combinations thereof. In some examples, ACM circuitry 126 selects the UL MCS per-sub-band and per-stream based at least on the uplink per-sub-band per-stream margin determined by multidimensional interference margin generator 136, the determined uplink allocation, the determined downlink allocation, or combinations thereof. In some examples, ACM circuitry 126 selects the UL MCS per-sub-band and per-stream based at least on the determined uplink per-sub-band per-stream margin, a per-user margin for all sub-bands and streams in the selection, or combinations thereof. In this manner, ACM circuitry 126 may utilize a determined uplink per-sub-band per-stream margin based at least on sub-band and/or stream-specific and/or user-specific (e.g., per-user) channel condition metrics indicative of interference to select a UL MCS for individual sub-bands and/or streams per-user, which may be different than selections made for other sub-bands and/or streams per-user. Generally, and as described herein, components shown and described with reference to FIGS. 1A and 1B which perform processing, calculating, or other data manipulations may be understood to be implemented in hardware, software, or combinations thereof. For example, ACM circuitry 126 may be implemented using one or more processors and memory encoded with executable instructions for performing their functions (e.g., software). In some examples, one or more application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), systems on a chip (SOCs), or other logic may be used. As should be appreciated, while UL MCS selection is discussed throughout, systems and methods described herein may additionally and/or alternatively implement downlink (DL) MCS selection per-sub-band and per-stream. In some examples, systems and methods described herein may select a downlink (DL) MCS per-sub-band and per-stream based at least on a downlink per-sub-band per-stream margin (DL subMar) determined, in some examples, at a remote node (e.g., RN) and provided by the RN to multidimensional margin generator 136 of FIG. 1B. As should be appreciated, and in some examples, ACM circuitry 126 may utilize a determined downlink per-sub-band per-stream margin based at least on sub-band and/or stream-specific and/or user-specific (e.g., per-user) channel condition metrics indicative of interference to select a DL MCS for individual sub-bands and/or streams per-user. In some examples, ACM circuitry 126 may receive channel condition metrics (in some examples, channel condition metrics indicative of interference) from one or more remote (e.g., residential) nodes of a wireless communications system. In some examples, ACM circuitry 126 may receive channel condition metrics from one or more base nodes of the wireless communications system. In some examples, ACM circuitry 126 may receive channel condition metrics from an administrator or owner of the wireless communications system. In some examples, ACM circuitry 126 may receive channel condition metrics from the modulators, demodulators, encoders, and/or decoders, such as modulation/demodulation encode/decoders 114, as described herein.
For example, error rates may be provided by encoders and/or decoders. In one non-limiting example, a margin generator, such as multidimensional interference margin generator 136 of FIG. 1 (and/or UL multidimensional interference margin generator (MIMG) 712 of FIG. 7), may be located at a base node. Multidimensional interference margin generator 136 may receive various inputs indicative of channel conditions (e.g., channel metrics). Such inputs may include, but are not limited to, UL RS-SINR 706, UL P-SINR 708, and/or UL TBE 710. In some examples, these inputs may be generated at the base node based on observation of the channel(s). The margin generator may utilize these, and in some cases additional and/or alternative metrics, to generate a subMar (e.g., an UL subMar). This UL subMar may be, in some examples, transmitted to the UL ACM 714 of FIG. 7, as well as to UL BASE SINR 720 of FIG. 7, for use in MCS selection and user scheduling. As should be appreciated, while other remote (e.g., residential) nodes, base nodes, and administrators are discussed as sources of channel condition metrics, ACM circuitry 126 may receive channel condition metrics from various other components shown and not shown within system 100 and/or within a wireless communications system, which are each within the scope of the present disclosure. In some examples, the generated subMar may be used in the MCS selection in the pseudo code, e.g., in Algorithm (1). In that pseudo code, there may be an SINR and a margin. In some examples, the margin may now be the sum of the user margin and the subMar, or the SINR is a derated SINR, which is the per-sub-band per-stream SINR reduced by the subMar. In some examples, this may have the same impact on the inequality in Algorithm (1), but may also have an impact on the implementation as discussed herein. As should be appreciated, and in some examples, there may be separate UL and DL versions of all of the quantities discussed herein. Further, and as should be appreciated, there are at least both an adaptive margin and a subMar, as discussed throughout, and while various metrics are discussed throughout, in some examples the adaptive margin and the subMar may be stored as metrics, such as the margin 134 metric of FIG. 1. Similarly, and as should be appreciated, ACM circuitry 126 may receive the determined uplink per-sub-band per-stream margin from one or more remote (e.g., residential) nodes of a wireless communications system. In some examples, ACM circuitry 126 may receive the determined uplink per-sub-band per-stream margin from one or more base nodes of the wireless communications system. In some examples, ACM circuitry 126 may receive the determined uplink per-sub-band per-stream margin from an administrator or owner of the wireless communications system. In some examples, during operation, ACM circuitry 126 may not have access to, or may not yet have received, channel condition metrics. In such examples (and/or other examples), ACM circuitry 126 may make an initial MCS selection based at least in part on allocation information, such as current allocation information, upcoming allocation information, spatial information, or combinations thereof, provided to it by scheduler 116 as described herein.
Similarly, in such examples (and/or other examples), ACM circuitry 126 may make an initial MCS selection based at least on spatial information regarding various communications nodes within the wireless communications system with which the communication node of FIGS. 1A and 1B may be in communication, provided to it by spatial database 124 as described herein. In some examples, ACM circuitry 126 may have access to channel condition metrics. In such examples (and/or other examples), ACM circuitry 126 may select an MCS using channel condition metrics. In such examples (and/or other examples), ACM circuitry 126 may select an MCS using channel condition metrics and the allocation information. In such examples (and/or other examples), ACM circuitry 126 may select an MCS using channel condition metrics, and/or the spatial information, and/or the determined uplink per-sub-band per-stream margin. In such examples (and/or other examples), ACM circuitry 126 may select an MCS using channel condition metrics, the allocation information, the spatial information, and/or the determined uplink per-sub-band per-stream margin. In some examples, ACM circuitry 126 may additionally and/or alternatively select an MCS based at least on a capacity utilization of the wireless communications system. In some examples, ACM circuitry 126 may additionally and/or alternatively select an MCS based at least on a look-up table (LUT). The LUT may store an association between MCS selection and channel condition metrics, for example. In some examples, ACM circuitry 126 may additionally and/or alternatively select an MCS based at least on various selection parameters. In some examples, selection parameters may include a margin, a hysteresis, or combinations thereof. In some examples, the margin value may adapt during operation, which may be designed to achieve a performance criterion. In some examples, the performance criterion may be predetermined. In some examples, the performance criterion may be determined by an administrator of the wireless communications system. In some examples, the performance criterion may be determined by other components of the wireless communications system. In some examples, the performance criterion may include at least one of a specific error rate, a re-transmission rate, a decoder quality performance metric, or combinations thereof. In some examples, the margin may be configured to adapt based at least on a granularity of a sub-band spanned by the selected MCS. In some examples, ACM circuitry 126 may additionally and/or alternatively select an MCS based at least on a preferred MCS switching rate. In some examples, ACM circuitry 126 may additionally and/or alternatively select an MCS based at least on scheduling information. In some examples, ACM circuitry 126 may additionally and/or alternatively select an MCS based at least on adaptive modulation and coding scheme techniques as described herein. In some examples, ACM circuitry 126 may additionally and/or alternatively select an MCS based at least on a padding used to encode blocks to lower error rates, where in some examples the padding comprises the difference between a capacity and a demand. As should be appreciated, ACM circuitry 126 may additionally and/or alternatively select an MCS based at least on any combination of the above-recited metrics, parameters, look-up tables, and/or information. As described herein, ACM circuitry 126 may select an MCS per-sub-band and/or per-stream and/or per-user based at least on a determined uplink per-sub-band per-stream margin.
In some examples, ACM circuitry 126 may select (and/or determine) the UL MCS based at least on the UL per-sub-band per-stream margin. In some examples, ACM circuitry 126 may adapt a rate of increase for the uplink per-sub-band per-stream margin based at least on one or more prior decrease decisions, increase decisions, or combinations thereof. In some examples, a signaling channel for transmitting the uplink per-sub-band per-stream margin may be beamformed using, for example, one or more of a plurality of antennas, wherein the signaling channel may comprise a control channel element (CCE) channel, a fast control channel (FCCH), or combinations thereof. In some examples, ACM circuitry 126 may separate the uplink per-sub-band per-stream margin for transmission into a slower signaling channel and a faster signaling channel, wherein the slower signaling channel may comprise a CCE and the faster signaling channel may comprise an FCCH. As described herein, ACM circuitry 126 may select an MCS per-sub-band and/or per-stream and/or per-user based at least on a determined uplink per-sub-band per-stream margin. In some examples, the selection of an MCS for a certain sub-band (e.g., per-user and/or per-stream) by ACM circuitry 126 may result in transmitting encoded data bits, decoded data bits, or combinations thereof. In some examples, the selection of an MCS for a certain sub-band (e.g., per-user and/or per-stream) by ACM circuitry 126 may result in transmitting bits other than encoded data bits, decoded data bits, or combinations thereof. In some examples, the bits other than the encoded data bits, the decoded data bits, or combinations thereof may comprise padding bits. As described herein, ACM circuitry 126 may select an MCS per-sub-band and/or per-stream and/or per-user. In some examples, an MCS selected for a particular sub-band and/or stream and/or user by ACM circuitry 126 may be different than an MCS selected for a different sub-band and/or stream and/or user by ACM circuitry 126. For example, ACM circuitry 126 may select different MCSs for different sub-bands and/or different streams and/or different users. In some examples, ACM circuitry 126 may select similar and/or the same MCS for different sub-bands and/or streams and/or users. In some examples, ACM circuitry 126 may additionally and/or alternatively select an MCS based at least on filtering an SINR difference, where the filtering comprises a filter with coefficients corresponding to a fast response, a filter with coefficients corresponding to a slow response, or combinations thereof. In some examples, ACM circuitry 126 may additionally and/or alternatively select an MCS based at least on filtering an SINR difference, where the filtering comprises using a filter with coefficients corresponding to the fast response when a present value is higher than a filtered value. In some examples, ACM circuitry 126 may additionally and/or alternatively select an MCS based at least on filtering an SINR difference, where the filtering comprises using a filter with coefficients corresponding to the slow response when a present value is lower than a filtered value.
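That asymmetric fast/slow filtering can be sketched as a pair of one-pole filters; the coefficient values below are illustrative assumptions. Per the behavior described above, the fast coefficient is used when the present value exceeds the filtered value, and the slow coefficient otherwise:

    def filter_sinr_diff(filtered, present, alpha_fast=0.5, alpha_slow=0.05):
        # Fast coefficient when the present value is higher than the filtered
        # value; slow coefficient when it is lower, as described above.
        alpha = alpha_fast if present > filtered else alpha_slow
        return (1 - alpha) * filtered + alpha * present

    # Example: the filter reacts quickly to increases, slowly to decreases.
    y = 0.0
    for x in [0.0, 3.0, 3.0, 0.5, 0.5]:
        y = filter_sinr_diff(y, x)
        print(round(y, 3))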
In some examples, residential (e.g., remote) nodes as described herein may comprise a baseband analog filter including noise or interference suppression different from the communications node filters. In some examples, the remote node may select a baseband analog filter based at least on interference impact as described herein. In some examples, the baseband analog filter selection may comprise a margin (e.g., in some examples a subMar as described herein; in some examples a margin other than the uplink per-sub-band per-stream margin, a BASE SINR, or combinations thereof). As should be appreciated, and as described herein, in some examples there may be both a UL and a DL BASE SINR. In some examples, the UL subMar may affect the UL BASE SINR. In some examples, the DL subMar may affect the DL BASE SINR. In some examples, ACM circuitry 126 may additionally and/or alternatively select an MCS for a new allocation based at least on channel information, wherein the channel information comprises channel feedback, transmit power, or combinations thereof. In some examples, the opposite direction may also apply. For example, the subMar and allocation could influence transmit power. In some examples, ACM circuitry 126 may additionally and/or alternatively select an MCS based at least on a post-processed SINR, wherein the post-processed SINR comprises SINR averaging. In some examples, ACM circuitry 126 may comprise multidimensional interference aware adaptive modulation and coding scheme circuitry. In some examples, the multidimensional interference aware adaptive modulation and coding scheme circuitry may perform the functions of multidimensional interference margin generator 136 described herein. In some examples, ACM circuitry 126 may transmit a selected MCS (e.g., per-sub-band, per-stream) to a modulator/demodulator and/or an encoder/decoder, such as modulation/demodulation encode/decoders 114 of FIGS. 1A and 1B. As described herein, modulation/demodulation encode/decoders 114 may modulate and/or encode a data stream intended for transmission by the one or more antennas of system 100 to, for example, other communications nodes (e.g., remote nodes, residential nodes, etc.) within a wireless communications system in accordance with an MCS selected by and transmitted from modulation and coding scheme circuitry, such as ACM circuitry 126 of FIGS. 1A and 1B. Additionally, and as described herein, modulation/demodulation encode/decoders 114 may demodulate and/or decode a data stream derived from a signal incident on the one or more antennas of system 100 and received from, for example, other communications nodes (e.g., remote nodes, residential nodes, etc.) within a wireless communications system in accordance with an MCS selected by and transmitted from modulation and coding scheme circuitry, such as ACM circuitry 126 of FIGS. 1A and 1B. As should be appreciated, ACM circuitry 126 may be implemented in hardware, software, firmware, or combinations thereof. ACM circuitry 126 may, for example, include one or more processors, controllers, and/or circuitry. ACM circuitry 126 may include one or more computer readable media encoded with instructions (e.g., code) which, when executed, cause ACM circuitry 126 to perform operations described herein. Now turning to FIG. 3, FIG. 3 is a schematic illustration of a radio frame structure 300 (e.g., frame 300), in accordance with examples described herein. Many modern wireless communications systems (e.g., LTE, WiMax, etc.) have adopted OFDM and/or OFDM multiple-access (OFDMA) as a communication standard, as described herein. Under OFDM, the frequency domain is divided into a set of equally spaced subcarriers, and the time domain is divided into equally spaced symbols, as shown in FIG. 3. In some examples, other transforms that create frequency bands are compatible with the ACM techniques described herein.
Systems and methods described herein may transmit allocation information over a first channel type and the identification of selected modulation and coding scheme(s) over a second channel type. In this manner, the allocation information may be transmitted separately from the MCS selection information. The MCS selection information, recall, may be transmitted on a per-sub-band and/or per-stream basis. FIG. 3 illustrates subcarriers and symbols in accordance with examples described herein. The vertical axis 322 represents frequency, while the horizontal axis 324 represents time. The frequency domain (e.g., vertical axis) is divided into sub-bands, with an undisclosed number of sub-bands being shown in FIG. 3, although any number may be used. Each sub-band contains a specified number of subcarriers, not shown, corresponding to the bins of an OFDM Fast Fourier Transform (FFT) residing within that sub-band. The time domain (e.g., horizontal axis) is divided into FFT blocks, referred to herein as OFDM symbols. An undisclosed number of OFDM symbols are shown in FIG. 3, although any number may be used within the same or a different amount of time by resizing the FFT as needed. The frame structure shown in FIG. 3 may be used to transmit data to and/or from the system shown in FIGS. 1A and 1B. Under OFDM, payload data may be transmitted on each subcarrier in each symbol, in some examples as quadrature amplitude modulated (QAM) constellation samples (e.g., payload blocks 316, 318, and 320). In addition, a subset of subcarriers may be designated as non-payload pilot symbols, which may be used to provide an amplitude and/or phase reference. In some examples, a base station (BS) or base node (BN) is transmitting to a user equipment (UE) or residential node (RN) of a wireless communications system during the first part of the frame 300. This part is often called the down-link (DL) part of the frame, e.g., downlink 310. After a time gap, e.g., time gap 312, the RN is transmitting to the BS or BN for the remainder of the frame. That part is often called the up-link (UL) portion of the frame, e.g., uplink 314. In some examples, user allocations are scheduled in units of a sub-band that spans a number of OFDM subcarriers, e.g., downlink allocation information 302. This may be a first channel type, one that sends allocation information (and/or timing information, power control information, system information, and the like). The channel type containing allocation information may also be referred to as a control channel element (CCE) herein. The CCE refers to a portion of symbols and/or time in a frame used to transmit allocation information (e.g., which users are associated with which sub-bands and/or streams or other transmission segments). In some examples, a user (e.g., remote node) may in some cases have a CCE which spans two sub-bands and one OFDM symbol. As should be appreciated, while the size may be different, in some examples the CCE may always be present. In some examples, the FCCH may be present for allocated sub-bands and streams. In some examples, for TDD RDB systems, UL and DL allocations may span the same sub-bands to exploit channel reciprocity. In some examples, selected MCSs are scheduled in units of a sub-band that spans a number of OFDM subcarriers, e.g., downlink per-sub-band MCS information 304. This may be a second channel type, one that sends MCS information. The channel type containing MCS information may also be referred to as a fast control channel (FCCH).
The FCCH refers to a portion of symbols and/or time in a frame used to transmit MCS information (e.g., which MCS is associated with a particular sub-band and/or stream). In some examples, the remaining symbols may be used for control channels, beamformer training, and channel sounding. In some examples, an MCS selection and the feedback channels (CCE/FCCH) may be different in UL and DL. In some example systems, the BN may select both the UL and DL MCS, but in some examples, the selected MCS for the UL may be different than the MCS selected for the DL. In some examples, and as described herein, various packets, such as input packets, may be converted into modem packets (e.g., by packet processor 118 of FIGS. 1A and 1B) that in turn are sent over the link in one or more transport blocks (TB) (e.g., payloads 316, 318, and 320), depending, in some examples, on packet size and TB MCS across the allocated sub-bands and symbols. The mapping of modem packets into transport blocks can be done in many ways. A simple mapping is the natural order, while more complicated mappings, such as various forms of pseudo-random mappings that spread a modem packet across the transport blocks, are possible. Furthermore, the transport blocks can be mapped or striped into sub-bands in many different ways as well. In some examples, a transport block may be encoded and transmitted in one or more frames (e.g., radio frames) in accordance with an MCS selection. In some examples, a transport block as described herein may be derived from a packet, such as an Ethernet packet, an input packet, and/or a modem packet. In some examples, a modem packet may be derived from an input packet or Ethernet packet. For example, an Ethernet and/or an input packet may be broken down into modem packets, which in some examples may be broken into transport blocks, which in some examples are what the MCS is selected for (e.g., what the MCS is applied to). As should be appreciated, there are various packet-to-transport-block schemes which are contemplated to be within the scope of this disclosure. As should be further appreciated, the ACM applies to the various packet-to-transport-block schemes, and the discussion of Ethernet and/or input packets to modem packets, and modem packets to transport blocks, is in no way limiting. In some examples, the MCS is selected to be applied to transport blocks prior to transmission from, for example, a base node, and/or after being received, for example, at a base node. In some examples, uplink allocation information 306 and per-sub-band MCS information 304 may be sent back to the BS or the BN by the RN. In some examples, uplink allocation information 306 and per-sub-band MCS information 304 sent back by the RN may be used by, for example, scheduler 116, switch 120, or packet processor 118 of FIGS. 1A and 1B for subsequent allocation of sub-bands and subsequent transmission of packets over links of various wireless communications (e.g., access) systems described herein. In some examples, a base node may determine a UL and/or a DL allocation (they are matched for RDB), so an RN does not send information about which sub-bands/streams are allocated, but does send information, e.g., channel quality information, for the allocated sub-bands. In some examples, the information and/or content of a CCE and FCCH channel may be different in the UL and DL. For example, the DL FCCH may in some examples contain the MCS and power control information, and the UL FCCH may contain the SINR and error information per-sub-band.
Similarly, in some examples, the DL CCE may contain allocation information, while the UL CCE may contain channel information from a node (e.g., a remote node), such as, for example, overall errors, power control information, and/or many other metrics. Now turning to FIG. 2, FIG. 2 is a schematic illustration of channel condition metrics, allocation information, and scheduling information used in some examples by ACM circuitry 202 to select modulation and coding schemes, in accordance with examples described herein. As depicted, FIG. 2 includes both components as well as various information inputs of wireless communications systems described herein. As depicted, FIG. 2 includes ACM circuitry 202, scheduler 204, base node 206, and remote node 208. Scheduler 204 may include allocation information (not shown). Scheduler 204 may be coupled (e.g., communicatively and/or otherwise) to one or more spatial databases (as described in FIGS. 1A and 1B) and/or one or more demand estimators (as described in FIGS. 1A and 1B). Scheduler 204 may further include input from the spatial database and/or the demand estimator. In some examples, scheduler 204 may make allocation decisions based at least on the spatial database, the ACM input, the demand estimator, or combinations thereof. The ACM circuitry 202 may be implemented using, and/or may be used to implement, the ACM circuitry 126 of FIGS. 1A and 1B, in some examples. As depicted, and in some examples, ACM circuitry 202 may receive SINR information (e.g., uplink (UL) SINR information) at a communications node (e.g., a base node). In some examples, the SINR information may be a channel condition metric used as an input to ACM circuitry 202. Another input to ACM circuitry 202 is the UL channel code error count, which, as depicted in FIG. 2, is labeled transport block error (TBE). In some examples, channel code error counts (e.g., TBEs) may generally be defined as the number of blocks of information bits that are received in error. In some examples, the TBE information may be a channel condition metric used as an input to ACM circuitry 202. As depicted, and at remote node 208 (e.g., RN), the SINR and the TBE are fed back to ACM circuitry 202. Recall, in some examples, ACM circuitry 202 may be located in, coupled to, or otherwise in communication with a base node (or other nodes), such as base node 206. In some examples, and as described herein, the SINR and the TBE (which in some examples are examples of channel condition metrics) are fed back to ACM circuitry 202 using the UL fast control channel (FCCH) and the UL control channel element (CCE). In some examples, a per-sub-band per-spatial-stream SINR and a highly compressed version of the TBE are sent in the UL FCCH channel (e.g., UL FCCH 210). In some examples, a richer version of the TBE that is still compressed (or, in some examples, that may be compressed) is sent back using the UL CCE channel (e.g., UL CCE 212). These different metrics arrive at ACM circuitry 202 with different delays. For example, the BN metrics are available at the BN faster than the RN metrics. This is depicted in FIG. 2 as latencies L1-L4 for the different metrics. Recall that another source of input to ACM circuitry 202 may be scheduler 204. In some examples, scheduler 204 includes or provides (e.g., stores, receives, determines, etc.) information about current allocations, upcoming allocations, and spatial information about all (or some) users within the wireless communications system. Scheduler 204 may provide such allocation information to ACM circuitry 202.
In some examples, ACM circuitry 202 may process this information (in some examples, along with other information) and select a corresponding UL and DL MCS per-sub-band per-spatial-stream. ACM circuitry 202 may signal the selected UL and DL MCS per-sub-band per-spatial-stream in the DL FCCH channel to the RN. In some examples, ACM circuitry 202 may also inform scheduler 204 of the chosen (e.g., selected) MCS in order for scheduler 204 to better decide how many sub-bands are required to meet the throughput demand for a given user of the wireless communications system described herein. As described herein, scheduler 204 may include allocation information (on which, in some examples, it may partially base an allocation decision). In some examples, scheduler 204 may be coupled to a spatial database (e.g., a channel database, a spatial channel database, etc.), as depicted in FIG. 2. In some examples, the channel database may contain information about one user, one or more users, and/or all users of a wireless communications system, such as wireless communications systems described herein. In some examples, the channel database may include information regarding whether two users are spatially compatible; in such examples, the channel database may be considered a per-user database. As should be appreciated, in some examples, the channel database may contain information regarding how a user relates to other users. In some examples, another reason for informing scheduler 204 is to enable evaluation of the accuracy of scheduler 204's expected throughput versus the achieved throughput. For example, if an allocation is performing below expectation, it can be removed or shifted to other sub-bands in order to ensure a high system throughput. In some examples, once ACM circuitry 202 selects the DL MCS, it may also inform the BN's channel encoder which MCS to use (referring back to FIGS. 1A and 1B, this may include modulation/demodulation encode/decoders 114). Similarly, once the RN has decoded the DL FCCH (e.g., DL FCCH 214), it informs the channel decoder (e.g., modulation/demodulation encode/decoders 114 of FIGS. 1A and 1B) of the chosen DL MCS in order to decode the message with the correct MCS. In some examples, the channel encoder at the transmitter and the decoder at the receiver can be implemented in software or hardware or combinations thereof. Similarly, in some examples, for the UL, the RN (e.g., remote node 208) uses the decoded DL FCCH message to determine the UL MCS and informs the channel encoder. The initial UL MCS for a new allocation may use a different scheme, since there may be no DL FCCH message if the allocation starts with an UL allocation. In such examples, either a rule based on parameters known at both base node 206 and remote node 208, or explicit signaling, can be used to determine the initial UL MCS (although in some examples other information may be used to make such a determination). In this example, the base node 206 decides both the UL and DL MCS in order to utilize the knowledge from scheduler 204 that resides at the base node 206. As should be appreciated, additional and/or alternative implementations are possible and are contemplated to be within the scope of this disclosure, where the MCS decision can be distributed to remote node 208 and/or to different combinations of ACM circuitry coupled to other base nodes and/or remote nodes.
MCS Selection Scheme

Recall that, in some examples, ACM circuitry, such as ACM circuitry 202 of FIG. 2, may select an MCS based at least on using a look-up table (LUT), such as LUT 130 of FIGS. 1A and 1B. In some examples, an effective SINR for a specific MCS to achieve a specific transport block error rate (TBER) is stored in a look-up table (LUT) (e.g., MCS_LUT). As stated throughout, in some examples, the MCS decision is performed for each sub-band (e.g., sb) and spatial stream or dimension (e.g., sd). Below is an example pseudo-code description of one example of the MCS selection algorithm described herein (e.g., Algorithm (1), below), where the SINR, margin (MAR), and hysteresis (HYST) are all defined per-sub-band per-spatial-stream (e.g., per-stream), but to simplify notation, the sub-band and spatial dimension indices are suppressed.

Algorithm (1)

    if SINReff(n) > MCS_LUT(MCS(n)+1) + MAR(n) + HYST(n)
        MCS(n+1) = MCS(n)+1
        for loop = 2:STEP_UP_SIZE
            if SINReff(n) > MCS_LUT(MCS(n)+loop) + MAR(n) + HYST(n)
                MCS(n+1) = MCS(n)+loop
            end
        end
    elseif SINReff(n) < MCS_LUT(MCS(n)) + MAR(n)
        MCS(n+1) = MCS(n)-1
        for loop = 2:STEP_DOWN_SIZE
            if SINReff(n) < MCS_LUT(MCS(n)-loop+1) + MAR(n)
                MCS(n+1) = MCS(n)-loop
            end
        end
    else
        MCS(n+1) = MCS(n)
    end
    MCS(n+1) = min[MCS(n+1), MCSmax, MCSrem + MAXdiff]
    MCS(n+1) = max[MCS(n+1), MCSmin]

As depicted above in Algorithm (1), the index n refers to the current frame and n+1 refers to the next frame. In some examples, if the effective SINR exceeds the LUT value for the next MCS plus the margin and hysteresis, the MCS is increased. In some examples, the MCS is increased all the way up to the current MCS plus the maximum step-up size if the SINR is large enough. On the other hand, in some examples, if the SINR is lower than the LUT value for the current MCS plus the margin, without the hysteresis, the MCS is decreased. In some examples, the MCS is decreased all the way down to the current MCS minus the maximum step-down size if the SINR is small enough. In some examples, additional steps may be used (not shown) to handle edge conditions, such as if the index exceeds the length of the LUT. In some examples, checks may be performed (e.g., by one or more components of the wireless access system described herein) to ensure that a maximum or minimum MCS is not exceeded. In some examples, given that the BN selects both the UL and DL MCS, the difference between the UL and DL MCS is restricted to not exceed a certain limit using the other link direction's MCS (labeled MCSrem in Algorithm (1)).
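For readers who prefer an executable form, the following is a direct Python transcription of Algorithm (1). The function and variable names, and the treatment of LUT edge conditions via index clamping, are assumptions for illustration; the LUT values in the example call are placeholders:

    def select_mcs(mcs, sinr_eff, mcs_lut, mar, hyst, step_up, step_down,
                   mcs_max, mcs_min, mcs_rem, max_diff):
        # mcs_lut[m] is the effective SINR required for MCS index m.
        def lut(m):  # clamp so edge conditions cannot overrun the LUT
            return mcs_lut[max(0, min(m, len(mcs_lut) - 1))]

        nxt = mcs
        if sinr_eff > lut(mcs + 1) + mar + hyst:          # step up
            nxt = mcs + 1
            for loop in range(2, step_up + 1):
                if sinr_eff > lut(mcs + loop) + mar + hyst:
                    nxt = mcs + loop
        elif sinr_eff < lut(mcs) + mar:                   # step down (no hyst.)
            nxt = mcs - 1
            for loop in range(2, step_down + 1):
                if sinr_eff < lut(mcs - loop + 1) + mar:
                    nxt = mcs - loop
        nxt = min(nxt, mcs_max, mcs_rem + max_diff)       # bound by other link
        return max(nxt, mcs_min)

    # Placeholder LUT (dB); one per-sub-band per-stream call, indices suppressed.
    print(select_mcs(3, 11.0, [-2.0, 0.5, 3.0, 5.5, 9.0, 12.5], 1.0, 0.5,
                     2, 2, 5, 0, 4, 2))   # -> 4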
Effective SINR Calculation

In some examples, an effective SINR is a metric (e.g., a channel condition metric) used for MCS selection by, for example, ACM circuitry 202 of FIG. 2. Many implementations of the fine-grained SINR information are possible. In some examples, the effective SINR may be the reported reference symbol (RS) SINR for the current sub-band and spatial dimension. However, there may be reasons to change the granularity by combining multiple sub-bands or spatial dimensions into one MCS decision, despite feedback channels such as the FCCH being capable of finer granularity. In some examples, in practical wireless systems there are also several reasons why the effective SINR might change during a frame. For example, the RN might be mobile, so the SINR estimate from the RS-SINR at the beginning of the frame might be quite different from the effective SINR at the end of the frame. Another reason that the SINR might change during the frame is phase noise that may reduce the SINR across the frame. Yet another reason is burst interference on portions of the frame. Referring briefly to FIG. 3, this may be one reason why the frame structure in FIG. 3 also has pilots inserted across the frame: to estimate the SINR across the frame and detect these possible SINR changes. In some examples, as the carrier frequency error increases, its corresponding effect on the payload achieved SINR becomes more important. This SINR is called the pilot-based SINR, or P-SINR, and is typically equal to or less than the RS-SINR. An example of an effective SINR would be to simply select the RS-SINR as:

SINReff(n) = RS-SINR(n−L11)   Equation (1)

where the delay L11 denotes the delay of the RS-SINR computation. In some examples, a better performing effective SINR than the RS-SINR alone may be a combination of the RS-SINR and the P-SINR. One scheme would be to select the minimum of the two as:

SINReff(n) = min[RS-SINR(n−L11), P-SINR(n−L12)]   Equation (2)

where the delay L12 denotes the delay of the P-SINR. In some examples, in order to maximize payload throughput, there may be a desire to minimize the number of subcarriers used to transmit the pilots from which the P-SINR is estimated. Accordingly, in some examples, the P-SINR estimate may have a higher variance and standard deviation due to a limited number of pilots. A high standard deviation in the P-SINR while using Equation (2) may lead to a high standard deviation in SINReff, and thus also in the selected MCS, which may be undesirable. In some examples, a better choice may be to post-process the P-SINR to reduce the variance at the cost of a less fine-grained P-SINR. This may be accomplished by computing (e.g., using one or more components of system 100 described herein) the minimum of the P-SINR across multiple symbols, which also reduces the number of P-SINRs used by ACM circuitry (such as ACM circuitry 202 of FIG. 2) to one per-sub-band per-spatial-stream. The minimum captures P-SINR reductions due to external interference or phase noise in a fine-grained manner. In some examples, one drawback of that computation may be that the minimum can be lower than the RS-SINR due to the variance alone. In some examples, a slightly more advanced scheme is to compute the ith order statistic across multiple symbols, which limits the variance while trading for a less fine-grained P-SINR. In some examples, a yet more advanced option is to combine the ith order statistic with a threshold that only activates the P-SINR if it is more than a pre-defined amount smaller than the RS-SINR. These schemes can be combined with processing across sub-bands. Note that in some examples, the P-SINR may not be independently signaled between the RN and BN, so it is not part of standards definitions and can have low latency. For highly dynamic systems, in which a sub-band may only be allocated for a single frame or a few frames, the amount of available averaging or statistics collection may be limited. A method offering substantial variance reduction and low computational cost is to average across all sub-bands and all spatial dimensions, as well as over a number of frames. One example of that is to compute a long-term offset from the RS-SINR as:

SINRoffset(n) = (1/(Nn·Nsb·Nsd)) Σk=n−Nn+1…n Σsb∈SB Σsd∈SD [RS-SINR(k, sb, sd) − P-SINR(k, sb, sd)]   Equation (3)

where Nn denotes the number of frames that are averaged, SB denotes the set of allocated sub-bands, SD denotes the set of allocated spatial dimensions, and Nsb and Nsd denote the cardinality of the sets SB and SD. If the allocated sub-bands and spatial dimensions vary across frames, the average needs to be updated appropriately with recursive averaging techniques. The effective SINR may then be calculated (using various components of system 100 described herein, such as ACM circuitry 202) as:

SINReff(n) = RS-SINR(n−L1) − SINRoffset(n)   Equation (4)

where the offset may be based on the average distance between the RS-SINR and P-SINR as defined above. As should be appreciated, additional and/or alternative versions of this may be possible with different sub-band and spatial dimension granularity. As should further be appreciated, this may be computed by ACM circuitry 202 across all sub-bands and streams, as well as over some time duration (e.g., frames). In some examples, additional and/or alternative components of system 100 may perform such calculations.
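A sketch of Equations (2) through (4) in code form is given below; the array shapes, the order-statistic index, and the example values are illustrative assumptions:

    import numpy as np

    def effective_sinr_eq2(rs_sinr, p_sinr_symbols, i_order=1):
        # Equation (2)-style combining, with the P-SINR post-processed as the
        # ith order statistic (ith smallest) across the frame's pilot symbols.
        p_sinr = float(np.sort(np.asarray(p_sinr_symbols))[i_order - 1])
        return min(rs_sinr, p_sinr)

    def sinr_offset(rs_sinr_hist, p_sinr_hist):
        # Equation (3): long-term average of RS-SINR minus P-SINR across
        # frames, sub-bands, and spatial dimensions (both arrays shaped
        # [frames, sub_bands, streams]).
        return float(np.mean(np.asarray(rs_sinr_hist) - np.asarray(p_sinr_hist)))

    def effective_sinr_eq4(rs_sinr, offset):
        # Equation (4): derate the fine-grained RS-SINR by the long-term offset.
        return rs_sinr - offset

    # Example: RS-SINR of 12 dB, per-symbol P-SINRs with one interference dip.
    print(effective_sinr_eq2(12.0, [11.5, 9.8, 11.7, 11.6], i_order=2))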
Hysteresis Selection

Recall that in some examples, ACM circuitry 202 of FIG. 2 may select an MCS based at least on a hysteresis (such as hysteresis 132 of FIGS. 1A and 1B). Selecting the hysteresis in the ACM technique may generally control various aspects of the system performance. In some examples, the purpose of the hysteresis is to reduce (e.g., prevent) MCS jitter between two adjacent MCSs, which may be detrimental in several ways. The jitter in the SINR estimate may increase the error rate by occasionally reporting too high an SINR, which results in a higher MCS than can be supported. Another aspect of hysteresis is that it may increase the variation in the supported bit rate, which interacts in a complicated manner with various traffic protocols such as TCP. In some examples, ACM circuitry, such as ACM circuitry 202 of FIG. 2, may perform hysteresis selection as described herein. In some examples, additional and/or alternative components of system 100 may perform such calculations. In some examples, the hysteresis can be selected by studying its impact on throughput and error rates (either during operation of a system and/or operation of other systems prior to configuration of a particular system) and tuning to the best overall hysteresis across all traffic and channel conditions. In some examples, another option is to determine the optimal switching rate and then compute the required hysteresis based on the variance of the effective SINR. Below is a derivation of the switching rate as a function of the distance δ between neighboring MCS levels in the LUT, the hysteresis h, the standard deviation σ of the effective SINR, and x, the mean effective SINR minus the look-up table value at the lower MCS minus the margin. The below analysis reveals that the maximum switching rate may occur at x = h/2. Since error rates may be dominated by the outliers, one option for selecting the hysteresis is to operate a hypothetical channel at x = h/2 and compute the required hysteresis to achieve a specific switching rate. Many different techniques to numerically solve for h may be formulated, including non-linear search techniques. A benefit of this derivation may be that the switching rate can be kept below a specified maximum by adapting the hysteresis depending on σ, which is known to the system or can be estimated. In some examples, a hysteresis may be tuned based at least on observed behavior of the system (e.g., communication system 100) in real radio channels. For example, too small a hysteresis may lead to too much jitter, which may lead to low throughput, but too large a hysteresis may lead to a lower MCS and thus also lower throughput.
The derivation may reveal the behavior of the jitter, such that it is maximized at x=h/2, and its relationship with the standard deviation. In some examples, it is possible to analyze the MCS hysteresis by treating each MCS as a state and the system of MCSs as a Markov chain. Define the current MCS level as n and consider two MCS levels up and down, as shown below. Further define state transition probabilities based on the SINR thresholds for each MCS Tn, the hysteresis h, and the mean and standard deviation σ of the SINR. The state transition probabilities are also indicated in FIG. 15, which is a state diagram of MCS selection with state transition probabilities. In some examples, a Gaussian distribution is assumed with a relatively small standard deviation, so only two MCSs away from the target MCS are considered. In some examples, only a maximum MCS change of one is considered since, for steady state operation and with a relatively small standard deviation, it is uncommon to change more than one MCS at a time. As depicted in FIG. 15, five states are considered (e.g., n−2, n−1, n, n+1, and n+2). Considering the five states, the transition matrix P becomes:

$$P = \begin{bmatrix} P_{0,0} & P_{0,1} & 0 & 0 & 0 \\ P_{1,0} & P_{1,1} & P_{1,2} & 0 & 0 \\ 0 & P_{2,1} & P_{2,2} & P_{2,3} & 0 \\ 0 & 0 & P_{3,2} & P_{3,3} & P_{3,4} \\ 0 & 0 & 0 & P_{4,3} & P_{4,4} \end{bmatrix}$$

With the transition matrix defined, the steady state probability of all the states can be computed as

$$P_{\infty} = \lim_{n \to \infty} P^{n}$$

where in most cases the result converges relatively quickly. The average transition rate can then be calculated for state n=2 as:

$$T_{\mathrm{ave}} = P_{\infty}(3,:)\ \operatorname{diag}(1 - P)$$

where $P_{\infty}(3,:)$ denotes the third row of the matrix $P_{\infty}$. To calculate the transition rate, the transition probabilities are required. For tractability, assume that the SINRs are Gaussian distributed with a mean of Tn+x, where Tn denotes the SINR threshold including any margin and x denotes the offset from the switching point as illustrated in FIG. 8. The threshold difference between the MCSs is defined as δ=Tn+1−Tn and is the same between all the states in the following analysis. Consider a current MCS n=2; then using FIG. 8 it is clear that the transition probability marked by diagonal lines on the right-hand side of the curve becomes:

$$P_{2,3} = \frac{1}{\sqrt{2\pi}\,\sigma} \int_{\delta+h-x}^{\infty} e^{-\frac{t^{2}}{2\sigma^{2}}}\,dt = Q\!\left(\frac{\delta+h-x}{\sigma}\right) = \frac{1}{2}\operatorname{erfc}\!\left(\frac{\delta+h-x}{\sqrt{2}\,\sigma}\right)$$

where Q( ) denotes the Q-function and erfc( ) the complementary error function (using the MATLAB definition). Similarly, the probability of stepping down from n=2 to n=1, marked by diagonal lines on the left-hand side of the curve in FIG. 8, can be expressed as:

$$P_{2,1} = \frac{1}{\sqrt{2\pi}\,\sigma} \int_{-\infty}^{-x} e^{-\frac{t^{2}}{2\sigma^{2}}}\,dt = 1 - Q\!\left(\frac{-x}{\sigma}\right) = 1 - \frac{1}{2}\operatorname{erfc}\!\left(\frac{-x}{\sqrt{2}\,\sigma}\right)$$

The probability of staying at n=2 then follows as the complement of the step-up and step-down probabilities, based on the SINR error distribution with transition levels offset by the hysteresis h seen in FIG. 8. The remaining transition probabilities are listed below (the step-up probabilities follow the same form as P2,3 and the step-down probabilities the same form as P2,1):

$$\begin{aligned}
P_{0,0} &= 1 - P_{0,1} \\
P_{0,1} &= Q\!\left(\frac{-x-\delta+h}{\sigma}\right) = \frac{1}{2}\operatorname{erfc}\!\left(\frac{-x-\delta+h}{\sqrt{2}\,\sigma}\right) \\
P_{1,0} &= 1 - Q\!\left(\frac{-x-\delta}{\sigma}\right) = 1 - \frac{1}{2}\operatorname{erfc}\!\left(\frac{-x-\delta}{\sqrt{2}\,\sigma}\right) \\
P_{1,1} &= 1 - P_{1,0} - P_{1,2} \\
P_{1,2} &= Q\!\left(\frac{-x+h}{\sigma}\right) = \frac{1}{2}\operatorname{erfc}\!\left(\frac{-x+h}{\sqrt{2}\,\sigma}\right) \\
P_{2,1} &= 1 - Q\!\left(\frac{-x}{\sigma}\right) = 1 - \frac{1}{2}\operatorname{erfc}\!\left(\frac{-x}{\sqrt{2}\,\sigma}\right) \\
P_{2,2} &= 1 - P_{2,1} - P_{2,3} \\
P_{2,3} &= Q\!\left(\frac{\delta-x+h}{\sigma}\right) = \frac{1}{2}\operatorname{erfc}\!\left(\frac{\delta-x+h}{\sqrt{2}\,\sigma}\right) \\
P_{3,2} &= 1 - Q\!\left(\frac{\delta-x}{\sigma}\right) = 1 - \frac{1}{2}\operatorname{erfc}\!\left(\frac{\delta-x}{\sqrt{2}\,\sigma}\right) \\
P_{3,3} &= 1 - P_{3,2} - P_{3,4} \\
P_{3,4} &= Q\!\left(\frac{2\delta-x+h}{\sigma}\right) = \frac{1}{2}\operatorname{erfc}\!\left(\frac{2\delta-x+h}{\sqrt{2}\,\sigma}\right) \\
P_{4,3} &= 1 - Q\!\left(\frac{2\delta-x}{\sigma}\right) = 1 - \frac{1}{2}\operatorname{erfc}\!\left(\frac{2\delta-x}{\sqrt{2}\,\sigma}\right) \\
P_{4,4} &= 1 - P_{4,3}
\end{aligned}$$

Simulation results using the above equations are shown in FIG. 9 for the case of δ=1.5 dB and h=0.5 dB. FIG. 9 depicts the average number of MCS changes versus the standard deviation of the SINR estimation error, in decibels. The dashed curves show the theoretical result, while the solid curves are simply the average over 10000 random realizations applying the ACM technique described herein. It is clear that the theoretical results match the simulated results well.
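As an illustration only, below is a minimal Python sketch of the Markov chain analysis above: it builds the five-state transition matrix from the listed probabilities, approximates the steady state by matrix powers, and evaluates the average transition rate for state n=2. The function names, the number of power iterations, and the printed interpretation are assumptions of this sketch.

import numpy as np
from math import erfc, sqrt

def q(z):
    """Gaussian Q-function via the complementary error function."""
    return 0.5 * erfc(z / sqrt(2.0))

def transition_matrix(delta, h, sigma, x):
    """Five-state MCS transition matrix: step-ups must clear the next
    threshold plus the hysteresis h, step-downs only the threshold."""
    up = [q((-x - delta + h) / sigma),   # P01
          q((-x + h) / sigma),           # P12
          q((delta - x + h) / sigma),    # P23
          q((2*delta - x + h) / sigma)]  # P34
    dn = [1 - q((-x - delta) / sigma),   # P10
          1 - q((-x) / sigma),           # P21
          1 - q((delta - x) / sigma),    # P32
          1 - q((2*delta - x) / sigma)]  # P43
    P = np.zeros((5, 5))
    for i in range(4):
        P[i, i + 1] = up[i]
        P[i + 1, i] = dn[i]
    for i in range(5):
        P[i, i] = 1.0 - P[i].sum()       # stay probabilities
    return P

def avg_transition_rate(P, n_iter=200):
    Pinf = np.linalg.matrix_power(P, n_iter)   # steady state by powers
    return float(Pinf[2] @ (1.0 - np.diag(P))) # rate out of state n=2

P = transition_matrix(delta=1.5, h=0.5, sigma=0.2, x=0.25)  # x = h/2 worst case
print(avg_transition_rate(P))  # about 0.1, i.e. roughly a 10% MCS change rate

The printed value is consistent with the observation below that a 0.2 dB estimation error standard deviation and a 0.5 dB hysteresis give about a 10% MCS change rate.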
Several scenarios where the mean of the Gaussian distribution was varied are included in FIG. 9. The number of transitions increases as x grows from 0 to h/2, as shown in FIG. 9. After h/2, the number of transitions decreases again. The MCS transition probabilities for different hysteresis and standard deviations are shown in FIG. 10, where both the simulated and theoretical values are shown. For an SINR estimation error standard deviation of 0.2 dB, selecting a hysteresis of 0.5 dB gives about a 10% MCS change, or about once every 20 frames assuming a frame rate of 200 frames per second. In some examples, FIG. 10 and the corresponding analysis may be useful as a design tool when deciding the amount of hysteresis to use.

Margin Selection

Recall that in some examples, ACM circuitry 202 of FIG. 2 may select an MCS based at least on a margin, e.g., margin 134 of FIGS. 1A and 1B. Similar to hysteresis, the margin in the ACM technique may generally impact various aspects of the system performance. In some examples, the overall error rate can be controlled by selecting the margin, e.g., margin 134 of FIGS. 1A and 1B, appropriately. Selecting a larger margin may decrease the overall error rates, and correspondingly a smaller margin may increase them. In some examples, a drawback of selecting a larger margin for a low error rate is that the selected MCS (e.g., selected by ACM circuitry 202) may be lower, which may lower the capacity of the system. Accordingly, in some examples, the selected margin may represent a trade-off between capacity and error rate. In some examples, systems described herein may tune the margin based on simulations and measurements to select a margin that yields good performance on average. The margin may be calculated and selected during operation and/or may be preconfigured in a system. In some examples, the margin may be adapted during operation to current conditions. As should be understood, and as discussed herein, hysteresis may also impact the same or a similar error versus capacity trade-off, and in some examples, a large hysteresis may allow for a lower margin. In some examples, for optimal performance, both the margin and hysteresis can be optimized in terms of error rate and capacity, but also for different types of data traffic.

Adaptive Margin

As discussed above, in some examples, the margin may be adapted to the current conditions. In some examples, the performance of a selected MCS based on a LUT (e.g., LUT 130 of FIGS. 1A and 1B) with fixed margin and hysteresis may depend on the LUT parameters accurately reflecting the current channel conditions. Basing the table on SINR alone may ignore other sources of errors that might contribute to a higher TBER. Better performance may be achieved in some examples by adjusting (e.g., adapting) the margin and/or hysteresis based on other channel condition metrics, such as, for example, observed error rates. The granularity of an adaptive margin may depend on the allocation structure. In examples described herein, a single margin (e.g., margin 134 of FIGS. 1A and 1B) may be computed for all sub-bands and spatial dimensions allocated to a user. This may enable statistics collection for rapid error feedback even in very dynamic allocation environments with relatively few sub-bands allocated. The following description focuses on this example, but the technique is also applicable to systems where a separate margin is used for groups of sub-bands or even a separate margin per-sub-band per-spatial-stream.
This is straightforward at the BN, where the UL errors are local to the ACM technique, but the DL errors need to be conveyed from the RN to the BN. Different feedback strategies are discussed later, but first different techniques for adapting the margin to achieve a target error rate are presented. In some examples, achieving a target TBER metric is desirable for adaptive margin selection. The basic idea behind a TBER based adaptive margin is to monitor the TBEs and transport blocks (TBs), increase the margin if the errors exceed a threshold, and correspondingly decrease the margin if the errors are below a lower threshold. Aside from the feedback aspect, the same discussion applies to both the UL and DL adaptive margin. As should be appreciated, while target TBER metrics are discussed with respect to determining an adaptive margin selection, other target metrics may be used with a similar adaptation scheme. In some examples, depending on the system codec, other metrics may be available. For low-density parity-check (LDPC) codes, the number of corrected bits and iterations can provide information about the operating point before actually taking errors, which is a powerful tool to achieve low error rates. Similar metrics are often available for other codecs. However, in some examples, a benefit of operating on the transport block errors (TBEs) is that the TBE is not directly related to traffic and packet sizes, making a low complexity implementation possible. Below is example pseudo-code (e.g., Algorithm (2)) that illustrates an example implementation of adaptive margin selection discussed herein, and more particularly, pseudo-code for adaptive margin based on transport block error rate counters. In some examples, a TBE counter of modulation/demodulation encode/decoders 114 may provide the error count input to the adaptive margin algorithm (e.g., Algorithm (2)) pseudo-code, which is implemented in the ACM circuitry 126.

Algorithm (2)

TBLOCK = TBLOCK + TBLOCK_DELTA
TBLOCK_ERR = TBLOCK_ERR + TBLOCK_ERR_DELTA
if TBLOCK_ERR > N_ERR_INC
    MAR = min[MAR + MAR_STEP_UP, MAR_MAX]
    TBLOCK_ERR = 0
    TBLOCK = 0
elseif TBLOCK_ERR < N_ERR_DEC & TBLOCK > N_TBLOCK
    MAR = max[MAR - MAR_STEP_DWN, MAR_MIN]
    TBLOCK_ERR = 0
    TBLOCK = 0
elseif TBLOCK > N_TBLOCK
    TBLOCK_ERR = 0
    TBLOCK = 0
end

As depicted above in Algorithm (2), TBLOCK denotes the TB counter and TBLOCK_DELTA denotes the number of transport blocks since the last update. The error counter and the corresponding update are denoted as TBLOCK_ERR and TBLOCK_ERR_DELTA. Although Algorithm (2) may be compatible with any update rate, in some examples, the preferred update is every frame. In some examples, if the error counter exceeds the increase margin threshold N_ERR_INC, the counters may be reset and the margin may be increased by MAR_STEP_UP, but not beyond the maximum margin MAR_MAX. Correspondingly, if the number of TBs exceeds the adaptive margin block size N_TBLOCK and the errors are below the decrease margin threshold N_ERR_DEC, the counters may be reset and the margin may be decreased by MAR_STEP_DWN, but not below the minimum margin MAR_MIN. Finally, if the number of TBs exceeds the adaptive margin block size N_TBLOCK but the errors are in between the increase and decrease thresholds, the counters may be reset and the error collection may start over.
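As an illustration only, below is a runnable Python transcription of Algorithm (2). The class structure and the specific default parameter values are assumptions made for illustration, not values from the example system.

class AdaptiveMargin:
    """Sketch of Algorithm (2): TBE-counter based adaptive margin."""
    def __init__(self, mar=3.0, mar_min=0.0, mar_max=10.0,
                 step_up=0.5, step_dwn=0.1,
                 n_err_inc=4, n_err_dec=1, n_tblock=2000):
        self.mar, self.mar_min, self.mar_max = mar, mar_min, mar_max
        self.step_up, self.step_dwn = step_up, step_dwn
        self.n_err_inc, self.n_err_dec, self.n_tblock = n_err_inc, n_err_dec, n_tblock
        self.tblock = 0
        self.tblock_err = 0

    def update(self, tblock_delta, tblock_err_delta):
        """Called once per frame with the TB and TB-error counts."""
        self.tblock += tblock_delta
        self.tblock_err += tblock_err_delta
        if self.tblock_err > self.n_err_inc:
            # Errors exceeded the increase threshold: raise the margin
            # immediately, without waiting for the full block to accumulate.
            self.mar = min(self.mar + self.step_up, self.mar_max)
            self._reset()
        elif self.tblock_err < self.n_err_dec and self.tblock > self.n_tblock:
            # Block complete with few errors: lower the margin.
            self.mar = max(self.mar - self.step_dwn, self.mar_min)
            self._reset()
        elif self.tblock > self.n_tblock:
            # Block complete, errors between thresholds: start over.
            self._reset()
        return self.mar

    def _reset(self):
        self.tblock = 0
        self.tblock_err = 0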
The fact that, in some examples, the margin may be increased as soon as the error target is exceeded may result in a rapid increase in margin when there are errors, since action is taken before the block size N_TBLOCK is met. In some examples, this avoids taking errors for long periods of time before increasing the margin when the channel conditions deteriorate. In some examples, Algorithm (2) may be applicable to both UL and DL margins. Note that, in some examples, the BN may generally not have access to the number of DL TBEs, and a feedback mechanism may be used in order for the BN to update the DL margin. As mentioned herein, the adaptive margin concept may be applicable to any granularity, but the description focuses on the case of a per-user margin. For that case, the RN feeds back the total number of TBEs for a user to the BN using the UL CCE channel. In some examples, for broadband access systems, a large number of TBs may be sent per frame. Using a direct encoding of the TBE may require a large number of bits that may exceed the capacity of the UL CCE control channel. In order to reduce the feedback while maintaining accurate information about the number of TBEs, a compression technique may be employed in some examples. A compression technique is outlined below in Algorithm (3) using pseudo-code for transport block error counter compression encoding and decoding. Algorithm (3) may be used with wireless communications systems described herein, such as system 100 of FIGS. 1A and 1B.

Algorithm (3)

Encode
x = TBLOCK_ERR_DELTA / N;
y = (1/log(1+c)) * log(1 + c*abs(x));
TBE_QUANT = round(y / (2^-b));

Decode
yq = TBE_QUANT * 2^-b;
xq = (exp(abs(yq)*log(1+c)) - 1) / c;
TBE_REC = min(round(N*xq), N);

At the RN, the TBE counter update TBLOCK_ERR_DELTA may be encoded by first dividing by N to reduce the overall range and then logarithmically compressing based on the parameter c. The final output may then be quantized to b bits. Once the TBE_QUANT bits are received at the BN, the count may be expanded based on the number of bits b and exponentially mapped using the same parameter c. Finally, the received TBE count may be computed by scaling up by the same parameter N and then rounding to an integer. Different parameter choices result in different amounts of compression and accuracy. In some examples, selecting N=2304, c=512, and b=7 may result in lossless coding of the number of TBEs up to 21 TBEs while keeping the relative error below 5%. In some examples, testing using both computer simulation and TDD retro-directive beamforming (RDB) systems with dynamic allocations in time, frequency, and/or space radio-frequency links shows that the above algorithms maintain the desired TBER, which can be approximated as TBER=(N_ERR_INC+N_ERR_DEC)/(2*N_TBLOCK). In some examples, selecting the three thresholds helps provide control of not only the TBER but also the system behavior in terms of convergence and switching rates. The maximum and minimum margins further provide control of system behavior. In some examples, an unfortunate characteristic of radio channels is that there can be bursts of errors due to channel behaviors, such as objects moving and changing the radio channel, but also external interference. In such cases, the adaptive margin will rapidly increase, and after the error burst is gone, the margin will decrease but at a slower rate, since each down step requires N_TBLOCK TBs to be accumulated.
Given that the rate at which TBs are accumulated depends on the allocation size, smaller allocations might be slow to return the adaptive margin to its nominal value. A characteristic of recovering from error events is that the margin is decreased many times in a row. Hence, a technique can be designed that adjusts the margin more rapidly if multiple reductions in margin have occurred. Many different implementations of this are possible, such as increasing the down step size with the number of consecutive down adjustments, e.g., an adaptive step size. Another implementation is to reduce the block size with the number of consecutive down adjustments, e.g., an adaptive block size. An updated version of the pseudo-code for adaptive margin based on transport block error counters and an adaptive block size is shown below in Algorithm (4).

Algorithm (4)

TBLOCK = TBLOCK + TBLOCK_DELTA
TBLOCK_ERR = TBLOCK_ERR + TBLOCK_ERR_DELTA
N_BLOCK_RED = LUT(RED_CNTR)
if TBLOCK_ERR > round(N_ERR_INC/N_BLOCK_RED)
    MAR = min[MAR + MAR_STEP_UP, MAR_MAX]
    TBLOCK_ERR = 0
    TBLOCK = 0
    RED_CNTR = 1
elseif TBLOCK_ERR < ceil(N_ERR_DEC/N_BLOCK_RED) ...
        & TBLOCK > ceil(N_TBLOCK/N_BLOCK_RED)
    MAR = max[MAR - MAR_STEP_DWN, MAR_MIN]
    TBLOCK_ERR = 0
    TBLOCK = 0
    RED_CNTR = RED_CNTR + 1
elseif TBLOCK > ceil(N_TBLOCK/N_BLOCK_RED)
    TBLOCK_ERR = 0
    TBLOCK = 0
    RED_CNTR = 1
end

In some examples, if the margin is adjusted (e.g., adapted) down, the block size may be reduced by increasing a reduction counter RED_CNTR that then maps into a look-up-table. For example, after the first down adjustment the block size may be reduced by a factor of two, and after five down adjustments the block size may be reduced by a factor of 12. Note that, in some examples, the increase and decrease thresholds may be adjusted correspondingly to maintain the target TBER, unless large reductions are used where the number-of-errors thresholds are severely quantized. In some examples, it is important to have a ceil operation on the decrease error threshold rather than a round, in order to avoid a comparison with zero when N_BLOCK_RED is large. If there is a margin increase or the errors are in between the thresholds, the reduction counter may be reset. Other similar implementations are possible where the LUT (e.g., LUT 130 of FIGS. 1A and 1B) is replaced with a list of counter thresholds and a corresponding reduction list if the respective thresholds are exceeded. The proposed techniques (e.g., algorithms) discussed herein are applicable to many other scenarios where it is desired to decrease the recovery time after an error event, or other cases where a margin needs to be substantially reduced. In some examples, the improvement in recovery may be directly proportional to the reduction factor used in the LUT (e.g., LUT 130 of FIGS. 1A and 1B) and the rate of modifying the reduction factor. For the adaptive margin described above, a default starting margin is selected, and for users with long periods of inactivity, the margin is reset to the default.

Granularity Optimization

Recall that MCS selection by, for example, ACM circuitry 202 of FIG. 2 may be closely related to the chosen codec, since the selected MCS may determine the modulation and the amount of coding that is applied, which may affect the number of bits and the sub-bands the encoded bits are transmitted on. For wireless systems with one MCS for the whole allocation, such as LTE, the encoding spreads the bits from a single code block across all the sub-bands despite different channel conditions at the different sub-bands.
For fine-grained MCS selection schemes, it is possible to separate a large allocation into many smaller blocks, each of which is encoded separately. Each one of these smaller frequency blocks has a separate MCS and spans a set of contiguous sub-bands that are considered grouped together. The blocks of encoded bits using a specific MCS, e.g., transport blocks (TBs), are then mapped across those sub-bands. Referring briefly to FIG. 5, FIG. 5 is an example of such a mapping or striping. A modem packet may be mapped into potentially many transport blocks. This mapping scheme may be optimized based on channel conditions and hardware capability. Hence, one transport block may span one or many contiguous sub-bands and a different number of OFDM symbols. Thus, there may be a need to find an MCS that yields the desired error rate when the coded bits are sent over multiple sub-bands with potentially different channel conditions. Many metrics may be used to quantify channel conditions, of which the one used in the example system is the effective SINR computed from the RS-SINR and P-SINR. To determine a single MCS (e.g., by ACM circuitry 202 of FIG. 2) for multiple sub-bands, many techniques can be used. For broadband wireless access systems such as LTE and WiMAX, different non-linear mappings, such as mutual information (MI) or exponential effective SINR mapping (EESM) based schemes, are typically used. The optimal mapping clearly depends on the chosen codec and its implementation. For the LDPC code used in the example system, it was found that a reciprocal average technique closely matched the codec performance for all MCSs (a sketch of this computation is provided below):

$$\mathrm{SINR}_{\mathrm{eff}} = 10\log_{10}\left[\frac{1}{\frac{1}{N_{sb}}\sum_{b=1}^{N_{sb}} 10^{-\frac{\mathrm{SINR}(b)}{10}}}\right] \qquad \text{Equation (5)}$$

where $N_{sb}$ denotes the number of sub-bands that are grouped together, SINR(b) is the SINR for sub-band b in units of dB, and SINReff is the corresponding effective SINR in units of dB that is then used for MCS selection. This metric has the benefit of being simpler to compute in some examples than most MI or EESM techniques. Another possible scheme that is more conservative is to select the ith order statistic of the RS-SINR. One benefit of grouping several sub-bands together for transmitting a transport block in some examples is that the variance of the SINR estimator is reduced. Another benefit in some examples of grouping several sub-bands in the example system is that the same bits are sent from multiple FCCH sub-bands, which effectively lowers the code rate and improves the robustness of the FCCH codec through code diversity. From Equation (5), note that the more sub-bands are reciprocal averaged, the lower the variance. A lower SINR variance allows for lower margins and thus also higher capacity. On the other hand, grouping too many sub-bands makes the solution less fine-grained and less capable of adapting to different channel conditions per-sub-band per-spatial-stream. However, just like in wireless broadband access systems such as LTE and WiMAX, coding across different channel conditions allows for extracting frequency diversity that partially offsets this loss of granularity.

Dynamic MCS based on Spatial Information and Allocation Information

Recall that, in some examples, ACM circuitry 202 may select an initial MCS based on allocation information and spatial information. In some examples, a typical implementation of a TDD RDB system may initiate a new allocation by an UL transmission in a first frame followed by both an UL and DL transmission in subsequent frames.
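As referenced above, below is a minimal Python sketch of the reciprocal-average mapping of Equation (5). The function name and the example values are illustrative assumptions only.

import numpy as np

def reciprocal_average_sinr(sinr_db):
    """Equation (5): reciprocal-average effective SINR across the
    grouped sub-bands. Input and output are in dB."""
    inv_linear = 10.0 ** (-np.asarray(sinr_db) / 10.0)
    return 10.0 * np.log10(1.0 / inv_linear.mean())

# One weak sub-band dominates, as a reciprocal average should reflect:
print(reciprocal_average_sinr([20.0, 21.0, 22.0, 8.0]))  # about 13.4 dB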
Given that, in some examples, the first few frames may not have feedback due to the latencies (e.g., L1-L4), a different mechanism, as described throughout (e.g., use of spatial information and/or allocation information), may be used until feedback (e.g., channel condition metrics) is available at ACM circuitry 202. A powerful feature of a fine-grained MCS (such as an MCS selected by ACM circuitry 202 of FIG. 2 using methods described herein) is the ability to connect the MCS selection with a scheduler (e.g., scheduler 204 of FIG. 2) and channel state information. Several performance improvements are possible in some examples in a dynamic allocation environment with spatial multiplexing between users as well as multiple spatial dimensions per user. Many of these benefits are tied to knowledge of the channel state information (CSI) at the RN and BN in the form of a spatial channel database (e.g., spatial database 124 of FIGS. 1A and 1B). In some examples, there may be various ways of obtaining this database. One example is to transmit a known DL sounding signal from the BN. Another example is to use the spatial structure of the received broadcast channel. Regardless of the method of obtaining the CSI, all RNs estimate their channels and send those back to the BN to form a database at the BN. In some examples, if the BN also announces the DL transmit power in a broadcast channel, the RN can estimate the pathloss or power-loss and feed that back to the BN. For the system 100 described herein, the DL sounding signal is sent on the last DL symbols. An UL sounding signal can also be employed to complement or replace the feedback. In some examples, using the information stored in the spatial channel database, several important features can be implemented. As one example, when scheduling a new allocation on a set of sub-bands (e.g., using scheduler 116 of FIGS. 1A and 1B and/or scheduler 204 of FIG. 2), the scheduler may load the per-user spatial channel from the database. If it is the first spatial dimension on a sub-band, a base SINR may be computed from the database based on the pathloss and used to select the MCS for the initial frames until the BN receives SINR feedback. For allocations that only stay up for a short time, the ability to start at the desired MCS provides a large performance improvement. As another example, when scheduling a new allocation on sub-bands that already have allocations, either from different users or from other spatial dimensions of the same user (e.g., using scheduler 116 of FIGS. 1A and 1B and/or scheduler 204 of FIG. 2), the initial MCS computation may need to be different. With knowledge of the spatial properties of the other spatial channels that are spatially multiplexed, a predicted SINR can be computed. In some examples, there are various ways of computing this SINR using the spatial database. One possible method of relatively low complexity is to compute the similarity or correlation between users. For example, the correlation between dominant eigenvectors of the different users' channels can be used to compute the SINR impact. Other methods, of varying complexity, based on the azimuth difference or GPS location of users can also be used to predict the SINR impact of the allocation change. This predicted SINR can also be combined with the base SINR to arrive at the initial MCS.
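As a purely hypothetical illustration of the correlation-based prediction mentioned above, the Python sketch below computes the normalized correlation between the dominant eigenvector of a new user's channel and those of existing users, and maps it to an SINR penalty. The mapping (alpha times the squared correlation magnitude) and all names and values are placeholder assumptions, not a mapping defined by this description.

import numpy as np

def predicted_sinr(h_new, h_existing, base_sinr_db, alpha=10.0):
    """Predict the SINR of a new allocation on a shared sub-band from
    the base SINR and the correlation with existing users' dominant
    channel eigenvectors (illustrative placeholder mapping)."""
    penalty = 0.0
    for h in h_existing:
        rho = np.vdot(h, h_new) / (np.linalg.norm(h) * np.linalg.norm(h_new))
        penalty += alpha * abs(rho) ** 2   # more similar users cost more SINR
    return base_sinr_db - penalty

h1 = np.array([1.0, 0.2j, 0.1])   # illustrative dominant eigenvectors
h2 = np.array([0.1, 1.0, 0.3j])
print(predicted_sinr(h1, [h2], base_sinr_db=25.0))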
For a dynamic system with many users and spatial dimensions that differ per-sub-band per-spatial-stream or per group of sub-bands, this feature can considerably improve the performance. Many mappings of correlation into an SINR prediction can be considered, such as pair-wise correlations, multi-dimensional correlations, or non-linear mappings to compute the predicted SINR based on the channel database. A feature of this in some examples is that not only can the initial SINR be predicted for a new user or spatial dimension, but the SINR impact of a new user or spatial dimension on the existing allocations can also be predicted. Hence, if adding a new allocation reduces the SINR of the existing users, it can be predicted in advance and the MCS adjusted accordingly for those users without incurring TBEs. That is a very important ACM component in a dynamic high throughput system. In another example, after a few frames, the SINR feedback for a new allocation becomes available to the BN's ACM technique and the MCS is adjusted away from the initial MCS value. When no predicted SINR is available, or the prediction is inaccurate, it can take several frames of MCS step-ups to arrive at the appropriate MCS. For example, if the initial MCS is selected as 0 but the SINR feedback indicates that MCS 12 can be supported, it would take twelve frames to arrive at MCS 12 if STEP_UP_SIZE=1. For short allocations this can negatively impact throughput. One way of making the ACM more responsive is to increase the step size at the first frame when the SINR feedback is available. In the example above, assuming that it takes two frames to receive the SINR feedback and a step size on this particular frame of 12, the average MCS for the first 14 frames would be 10.3 using this feature, compared to 5.6 using a fixed step size of one. That corresponds to doubling the throughput. For allocations of even shorter duration, the gain can be even higher. For six frames, the improvement is a factor of five. In another example, in a TDD RDB based system, the initial transmit beamforming weights may be selected based on the spatial channel database. In some examples, doing so may improve the convergence rate of the retro-directive beamforming, which ultimately impacts the convergence of the SINR and thus also the MCS and throughput. Still, the convergence rate will depend on the existence of other spatial users on the same sub-band. This convergence rate can be predicted using the current weights and the spatial database. Other system parameters and control systems can also impact the convergence rate. With a predicted convergence rate, the MCS can be adjusted pre-emptively, compared to waiting for SINR feedback before adjusting the MCS. Doing so may further improve the throughput. In another example, the performance improvement with a predicted SINR may depend on the accuracy of the MCS prediction. Correlation based approaches may yield accurate predictions, but many other approaches, such as mutual information (MI) or exponential effective SINR mapping (EESM), can also be used. Concepts from machine learning, artificial intelligence, and neural networks can also be applied. Even a simple scheme can help, such as asking the scheduler to report the prediction error, averaging it, and then applying it to future predictions to reduce the average error. In some examples, this scheme may be effective in removing bias in the prediction scheme. This scheme may also easily be extended by applying control theory concepts.
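As an illustration of the simple bias-removal scheme just described, below is a minimal Python sketch that tracks a running average of the prediction error and subtracts it from future predictions. The class name and smoothing factor are assumptions of this sketch.

class PredictionBiasCorrector:
    """Remove bias from SINR predictions using the average prediction
    error reported by the scheduler (illustrative sketch)."""
    def __init__(self, beta=0.05):
        self.beta = beta    # exponential smoothing factor (assumed)
        self.bias = 0.0     # running average of (predicted - observed)

    def observe(self, predicted_db, observed_db):
        self.bias += self.beta * ((predicted_db - observed_db) - self.bias)

    def correct(self, predicted_db):
        return predicted_db - self.bias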
Allocation Based Adaptive Margin Block Size

In some examples, one drawback of the adaptive margin techniques described herein may be that the number of frames it takes to reach the block size N_TBLOCK depends on the size of the allocation. For example, an allocation spanning one sub-band may be 64 times slower in adjusting the margin compared to an allocation spanning 64 sub-bands, due to the dependency of the number of TBs per frame on the allocation size. While various adaptive block size schemes are possible, one simple scheme of low implementation complexity is to simply compute a per-frame block size as:

$$N\_TBLOCK(n) = N\_TBLOCK\_FULL \cdot \frac{N_{sb}}{N_{sb}^{\mathrm{Max}}} \qquad \text{Equation (6)}$$

where N_TBLOCK_FULL denotes the full block size that is used for a full allocation of all $N_{sb}^{\mathrm{Max}}$ sub-bands. For smaller allocations, the block size is scaled based on the number of allocated sub-bands, as shown in Equation (6). In order to maintain the same overall error rate, the increase and decrease thresholds also need to be modified as:

$$N\_ERR\_INC(n) = \operatorname{round}\left[N\_ERR\_INC\_FULL \cdot \frac{N_{sb}}{N_{sb}^{\mathrm{Max}}}\right] \qquad \text{Equation (7)}$$
$$N\_ERR\_DEC(n) = \operatorname{ceil}\left[N\_ERR\_DEC\_FULL \cdot \frac{N_{sb}}{N_{sb}^{\mathrm{Max}}}\right]$$

Using the block size in Equation (6) and the thresholds in Equation (7), the adaptation speed or convergence rate may be significantly less dependent on the allocation size. In some examples, this scheme may also be combined with the adaptive block size based on the number of down adjustments described herein. It should be appreciated that additional and/or alternative implementation options may be devised that achieve the same or similar behavior. Another example is to define a few allocation size levels, such as 8, 16, 32, and 64 sub-bands, with corresponding block sizes, and then map each allocation to the closest allocation size level. For allocations that are dynamic in time, the above equations may be adjusted to use a time averaged number of allocated sub-bands instead.

Margin Adjustment based on Granularity

In some examples, one option is to manually offset the margin until performance is optimized, where the margin for each sub-band is selected as MAR(SB)=MAR_AVG+MAR_SB_ADJ(N_GSB), where MAR_AVG denotes the average margin across all sub-band grouping options and MAR_SB_ADJ is a look-up table (e.g., LUT 130 of FIGS. 1A and 1B) with the adjustment as a function of the number of grouped sub-bands. The different adaptive margin methods described herein may then be applied to the average margin MAR_AVG. In some examples, ACM circuitry, such as ACM circuitry 126 of FIGS. 1A and 1B and/or ACM circuitry 202 of FIG. 2, may perform the offsetting and/or other margin adjustment described herein. As should be appreciated, in some examples, additional and/or alternative components of system 100 may also perform such offsetting and/or other margin adjustment. In some examples, another option is to separate the TBEs per-sub-band per-spatial-stream into groups based on the sub-band grouping for each sub-band. Then the adaptive margin can be computed independently for each grouping number. A drawback of this scheme in some examples may be a higher computational cost and slower convergence due to fewer TBs in each group. In some examples, a separate margin for each group size can be obtained by still separating the TBEs into groups based on the number of grouped sub-bands. However, instead of computing a separate adaptive margin, a common adaptive margin is computed and then offset by a vector with zero-mean adjustments for each of the groups.
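As an illustration only, below is a minimal Python sketch of the zero-mean per-grouping adjustment just described for four grouping sizes: the grouping with the highest TBER receives +3D while the others receive -D, keeping the adjustment vector zero-mean. The function name and inputs are assumptions; the printed result matches the worked example that follows.

import numpy as np

def update_grouping_adjustment(mar_sb_adj, tbe_per_group, tb_per_group, d=0.01):
    """Zero-mean per-grouping margin adjustment update (sketch).
    mar_sb_adj: current adjustment vector, one entry per grouping size."""
    tber = np.asarray(tbe_per_group) / np.maximum(np.asarray(tb_per_group), 1)
    ii = int(np.argmax(tber))                 # grouping with the highest TBER
    delta = np.full(len(mar_sb_adj), -d)      # -D everywhere...
    delta[ii] = 3 * d                         # ...except +3D at element ii
    return mar_sb_adj + delta

adj = np.zeros(4)  # groupings of 1, 2, 4, and 8 sub-bands
adj = update_grouping_adjustment(adj, tbe_per_group=[5, 1, 0, 0],
                                 tb_per_group=[1000] * 4)
print(adj)  # [ 0.03 -0.01 -0.01 -0.01], as in the worked example below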
As a worked example, for example systems described herein the possible groupings are 1, 2, 4, or 8 sub-bands, so the length of the vector is four. Each time the adaptive margin is adjusted as described by Algorithm (2), the TBEs and TBs may be sorted into groups based on the sub-band grouping. In some examples, the TBER is computed for each group to find the grouping ii with the highest TBER. Then define an adjustment update vector MAR_SB_ADJ_DELTA as having elements -D, except for element ii, which has the value 3D, resulting in a zero-mean adjustment update vector MAR_SB_ADJ_DELTA. The adjustment vector based on grouping is then computed as MAR_SB_ADJ=MAR_SB_ADJ+MAR_SB_ADJ_DELTA. The final margin may then be calculated as MAR(SB)=MAR_AVG+MAR_SB_ADJ(N_GSB). For example, assume that the margin starts out at 3.00 for all sub-groups. At the first update, the grouping of a single sub-band has the highest TBER, so the adjustment update vector is MAR_SB_ADJ_DELTA=[0.03, −0.01, −0.01, −0.01] for D=0.01. Hence, the adjustment vector becomes MAR_SB_ADJ=[0.03, −0.01, −0.01, −0.01], since no previous adjustment was available. At the next update, the grouping of two sub-bands has the highest TBER, so MAR_SB_ADJ_DELTA=[−0.01, 0.03, −0.01, −0.01] and MAR_SB_ADJ=[0.02, 0.02, −0.02, −0.02]. To avoid dynamic interaction with the overall margin, D should be selected small so that the per sub-group size adjustment varies very slowly. Enhancements such as a maximum adjustment and weighting based on allocated sub-bands may further improve performance.

Error Burst Mode

In some examples, the ACM technique(s) described herein primarily react to observed metrics, such as SINR and TBEs. In many cases, there can be intermittent interference that goes on and off, and each time the SINR may drop and the MCS is lowered. Unfortunately, due to the latency in the SINR feedback, errors accumulate during this time and cause a lower throughput. If the type of error source is not reflected in the SINR, errors may accumulate until the adaptive margin reacts. Although the two sources are different, in both cases the result is errors during each error cycle or burst. In some examples, one way to avoid or reduce the effects of an error burst may be to monitor the current and past behavior and change the state of the link to an error burst mode, where the margin is increased pre-emptively to avoid frequent error cycles. In some examples, one drawback of this approach may be that the eliminated or lowered error rate comes at the cost of a lower MCS until the link is declared to be out of the error burst mode. While various different implementations are possible, one is to monitor TBE bursts and SINR drops, set the link in the error burst mode for a time, and then evaluate whether it can return to normal mode. Different levels of margin increase can be applied within this mode. For example, if the error burst mode is active but errors still accumulate, the margin may be further increased. An error burst mode may be efficient when there are strong interference sources or highly dynamic channel conditions that come and go relatively frequently but more slowly than the standard adaptive margin decay times. Another benefit of an error burst mode in some examples is that it can be implemented with a much finer granularity than an adaptive margin. In some examples, error burst ACM handling is applicable in both UL and DL link directions. To reduce the latency and the impact on the general ACM technique, the local node (e.g., remote node, base node, residential node, etc.)
can pre-emptively reduce the reported SINR. By having the local node reduce the SINR, no extra feedback may be required. The RN may also have access to more channel condition parameters that can be used to make the local decision of activating the error burst mode, rather than feeding burst metrics back to the BN. In some examples, another option for implementing an error burst mode is to use a memory of the lowest SINR across a window of time, which is used for the effective SINR instead of the current SINR. If the number of TBEs over the current memory length is higher than a threshold, then the window size over which the minimum SINR is calculated is increased. Conversely, in some examples, if the transport block error count is below the threshold, then the window size is decreased. Examples of possible window lengths are 0, 2, 4, 8, 16, 32, and 64 frames. If the bursts are very infrequent, retransmissions are effective, so there is no need for very long memory lengths. A short memory should not change the SINR much while the window size is going back to zero. The response time to go from no memory to full memory in the example is only sum(2, 4, 8, 16, 32, 64)=126 frames. Conversely, it is only 126 frames to go back to no memory.

No Data MCS Indicator

In some examples, one limitation of the ACM schemes discussed herein may be that if a sub-band or group of sub-bands is exposed to an error source, such as external interference, even the lowest MCS (or SNR) could generate errors. Consider an allocation that spans many sub-bands, of which one is exposed to very strong interference such that even the lowest MCS cannot be received correctly. If that sub-band is consistently in error, larger packets that require many transport blocks would consistently be in error, triggering ARQ re-transmissions that would also consistently be in error. Furthermore, the errors would accumulate, resulting in a larger margin for the whole allocation and a dramatically lower throughput on the "good" sub-bands. To avoid this scenario, the ACM scheme may report the MCS back to the scheduler (e.g., scheduler 204, as indicated in FIG. 2), which may enable the scheduler (e.g., scheduler 204) to remove the allocation at the error prone sub-band. A problem with this approach in some examples may be that the scheduler is not aware of the difference between the lowest MCS with no errors and the lowest MCS with many errors. One way of avoiding this scenario is to change the PHY to not send any of the information bits on this sub-band, since that would avoid repeated retransmissions. Hence, there is a desire to have a special MCS that indicates that no data transmission is possible. Defining a no data MCS would signal to the scheduler that this sub-band is not capable of sustaining any payload data and needs to be removed. Similarly, a no data MCS used inside the PHY to not map any information bits onto sub-bands avoids packets consistently being in error and repeated retransmissions. This would also avoid running up the adaptive margin due to the errors on the sub-bands not capable of supporting the lowest MCS. Simulations and tests show large gains with this approach for the case of one poor sub-band or a group of poor sub-bands, which can often be a common case in practical broadband wireless access systems operating in unlicensed bands.
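As an illustration only, below is a minimal Python sketch of the no data MCS concept described above. The sentinel value, LUT layout, and thresholds are hypothetical placeholders; the actual signaling value would be defined by the system's MCS table.

NO_DATA_MCS = -1  # hypothetical sentinel for "no data transmission possible"

def assign_mcs(sinr_db, lut, margin_db):
    """Select the highest MCS supported by the effective SINR, or
    report the no data MCS when even the lowest entry cannot be
    sustained, so the scheduler can remove the sub-band and the PHY
    maps no information bits onto it (sketch)."""
    usable = [mcs for mcs, threshold in lut if sinr_db - margin_db >= threshold]
    return max(usable) if usable else NO_DATA_MCS

lut = [(0, 2.0), (1, 4.5), (2, 7.0)]   # illustrative (MCS, SINR threshold) pairs
print(assign_mcs(sinr_db=1.0, lut=lut, margin_db=0.5))  # -1: exclude this sub-band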
MCS Selection Incorporating Allocation Granularity and Packet Size

Broadband wireless access systems are capable of transmitting at very high data rates, such as Gbit/s, but some applications, such as voice-over-internet-protocol (VoIP), have their data rate requirements measured in kbit/s. In many systems, the minimum allocation size can exceed the typical VoIP packet, leading to possibly inefficient allocations. Another aspect is that there are latency requirements making re-transmissions undesirable. Another type of application that has a large portion of small packets with latency constraints is online gaming. A unique aspect of TDD RDB systems is that the UL and DL are matched to enable channel reciprocity. However, the UL and DL traffic requirements can be quite different. For instance, a large packet with an updated graphical picture of the game situation is sent in the DL, while a very short packet, such as walk forward, is sent in the UL. Since the allocations span the same number of sub-bands, only a portion of the UL capacity may be required. In the example system, for an allocation where the encoded information bits do not fill up the allocation, additional bits are added or padded to fill out the allocation. This may be inefficient but sometimes advantageous, as described above. To avoid costly re-transmissions of possibly latency sensitive small packets, a lower MCS can be used to better protect the information bits without losing any throughput. In this manner, the padded bits, which are not used anyway, may be reduced and replaced with additional redundancy created by choosing a lower MCS. This can make a large difference both for the UL/DL example as well as for the small packet example where the allocation granularity is larger than the packet size. MCS based padding minimization may be implemented in many different ways. One PHY oriented technique is to monitor the padding per group of sub-bands and lower the MCS until the padding is minimized. A MAC centric approach is to monitor the queue and match the MCS with the allocation granularity. This works well for single small packets but less well for gaps in allocation granularity. Another technique is to monitor the traffic and separate it into different queues, where a lower MCS is used for queues designated as small packet queues. Possible discrepancies between UL and DL may be monitored well by just looking at the overall queue depth, without monitoring the padding of individual sub-bands and queues.

Dynamic TBER Target based on the Selected MCS and Packet Distribution

The adaptive margin described herein is based at least in part on counting TBEs and achieving a specific TBER, which is one of several PHY parameters. However, the TBER is only partly related to the user experience, which is more directly impacted by throughput and packet errors of different traffic protocols. A different adaptive margin approach is to target the packet error rate (PER) instead of the TBER. A challenge with this approach is that the PER is not directly a PHY parameter. It can be observed by measuring the packet performance at the switch or higher layers. However, with padding and UL and DL traffic imbalances, there might be cases where transport blocks are sent without any corresponding traffic, leaving no relevant PER to base the margin on. Another challenge is that the adaptive margin controls the TBER, which then maps into a PER. That mapping depends on the size of packets as well as the MCS level.
For example, assuming uncorrelated errors, the PER may be expressed as:

$$\mathrm{PER} = 1 - (1 - \mathrm{TBER})^{N_{tb}} \qquad \text{Equation (8)}$$

where $N_{tb}$ denotes the number of transport blocks that are required to transmit one packet. Using the binomial approximation, the PER can be expressed as:

$$\mathrm{PER} \approx N_{tb}\,\mathrm{TBER} \qquad \text{Equation (9)}$$

Note that, in practice, errors may be correlated, which changes the TBER to PER relationship significantly, and that HARQ or ARQ will lower the final error rate at the cost of increased latency. It is clear that the mapping depends strongly on $N_{tb}$. A typical large Ethernet packet is 1500 bytes, while a small packet may be just 64 bytes. Further assuming a minimum TB size of 48 bytes and a maximum TB size of 354 bytes, it is clear that the range of $N_{tb}$ in this example would be $N_{tb} \in [1, 32]$. For example, selecting an operating point of TBER=0.001 could yield anywhere from PER=0.001 all the way up to PER=3.2%, depending on MCS and packet size, before retransmissions. Most traffic control protocols work best at a PER below a certain level, such as 0.001% or 1e-5. Selecting a TBER target of 0.001 but sending a 1500 byte packet using the lowest MCS would result in a 3.2% PER before retransmissions, which may be too large for most traffic control protocols. Retransmissions will reduce the PER but add latency that interacts with most traffic control protocols in a negative way due to the extra latency (particularly for TDD). Based on the above observations, better performance can be obtained by having a dynamic TBER target based on the MCS and packet distribution, with the goal of achieving a target PER. For example, in the above example where one packet spanned 32 transport blocks, a TBER target of 3e-5 would approximate a PER of 0.001. Fortunately, for most codecs, the TBER versus SNR curve is relatively steep, so the SINR increase is not as significant as the difference in operating point may suggest. There are many different techniques that can be designed to calculate the TBER target in order to achieve a target PER. In some examples, a low complexity technique would be to assume large 1500 byte packets, compute the predicted average MCS based on a pathloss that is fairly static, compute $N_{tb}$, and then use Equation (9) to compute the TBER target. In some examples, a slightly more involved technique is to compute a time averaged MCS level, use that to compute $N_{tb}$, and then use Equation (9) to compute the TBER target. This would be more accurate but would still not reflect the distribution of packets. In some examples, the previous technique can be extended to compute both the average MCS and average packet sizes by monitoring the queues and switch metrics. In some examples, an even better performing technique is to separate traffic into different queues based on packet size and use that to select a separate TBER target per queue type, using a separate adaptive margin per queue. For example, the large packet queue would use a lower TBER target than the small packet queue. Yet another possible implementation, in some examples, is to use a margin that depends on the SINR, such that a larger margin is used for lower SINRs where a lower MCS would lead to a larger $N_{tb}$.

Static and Dynamic Packet Fragmentation

Examples described herein have included a description of the relationship between the TBER and the PER, noting that the application performance is closely related to the PER while the ACM module is more closely related to the TBER. Several techniques to dynamically select the TBER target based on the PER and packet size are described herein.
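As an illustration only, below is a minimal Python sketch of the low-complexity dynamic TBER target computation described above: compute the number of transport blocks per packet for a given TB size (implied by the predicted or averaged MCS) and invert Equation (9). The function name and example values are assumptions; the printed values reproduce the 3e-5 operating point from the text.

import math

def tber_target(target_per, packet_bytes, tb_bytes):
    """Equation (9) inverted: TBER target approximately PER / Ntb,
    with Ntb the number of TBs needed to carry one packet."""
    ntb = math.ceil(packet_bytes / tb_bytes)
    return target_per / ntb

# 1500-byte packets at the lowest MCS (48-byte TBs) vs. a high MCS (354-byte TBs)
print(tber_target(1e-3, 1500, 48))   # about 3.1e-5, close to the 3e-5 in the text
print(tber_target(1e-3, 1500, 354))  # about 2e-4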
A different approach to the same issue may be to modify the modem packet size to bring the TBER closer to the PER by using fewer transport blocks to send a modem packet. The modem packet size is defined here as the packet size that the transmit and receive modems use, which may differ from the input packet size. One approach is to split a large input packet into several smaller modem packets, e.g., to fragment the input packet. These smaller modem packets may exhibit a PER that is closer to the TBER than that of the original input packet and may result in better performance. The packet fragmentation can be static or dynamic. For example, all packets larger than 128 bytes may be split into smaller packets with a maximum size of 128 bytes. It can also be dynamic: where links with poor SINR and correspondingly low MCS require many transport blocks per packet, the input packet may be split into smaller packets than on a high MCS link. The static or dynamic packet fragmentation sizes may be based on operator input and/or link parameters such as SINR, pathloss, geographical location, etc. The packet fragmentation may be implemented in software, in hardware, or in a combination thereof.

ACM Integrated with Adaptive Transmit Power Control

There are many examples where there is a benefit to integrating ACM with adaptive transmit power control (ATPC). For example, the DL may have four allocated users, where one user is close to the next higher MCS level while the others are not. In this scenario, ACM can inform ATPC that a small power increase for that user, at the expense of slightly lower power for the other users, would enable that user to select the next higher MCS, with higher system throughput as a result. A variation of this scheme is power sharing between different spatial dimensions used by a single user, often referred to as water-filling. Yet another example is that close users with a high SINR can share some of their power with cell-edge users with a low SINR and thus a low MCS.

Adaptive Margin with Sector Edge Input

A common problem is that users located on the edge between different cell sectors suffer from interference from neighboring sectors. Although an RDB system is uniquely equipped to handle this interference by spatial beamforming, it still has the potential to impact the SINR. For example, if a neighboring sector of the same cell schedules a new allocation on a sub-band that already has a user with a similar spatial structure in a neighboring sector, both will experience a lower SINR than expected. With a fine-grained MCS selection, only the sub-bands affected will suffer from a lower MCS, but the initial MCS before feedback is received might be too high and generate errors. One way of avoiding this scenario is to designate some RNs as sector edge RNs based on their estimated SINR of the broadcast channel of neighboring sectors. For RNs designated as sector edge users, the initial MCS is selected with an additional offset. Of course, a more advanced solution that avoids poor allocations is to avoid scheduling sector edge users from different sectors on the same sub-band. For example, sector edge users may only be allocated to a portion of the band that is different for each sector. This increases the scheduler complexity but may improve the MCS and the resulting throughput.

Adaptive Margin based on Capacity Utilization

Deployed broadband wireless access systems described herein often operate at only a fraction of the total capacity.
In such cases, the network interference level can be reduced by only allocating the sub-bands needed to support the requested traffic. Alternatively, a technique monitoring the capacity utilization may be employed that increases the margin when the capacity utilization is low. This may reduce the errors and improve the user experience considerably, assuming that the network level interference is effectively managed. One method of managing the interference is to only employ the extra margin for non-sector-edge users. The classification of sector-edge users is also described herein. As should be appreciated, and as described herein, ACM circuitry 202 may be implemented in hardware, software, firmware, or combinations thereof. ACM circuitry 202 may, for example, include one or more processors, controllers, and/or circuitry. ACM circuitry 202 may include one or more computer readable media encoded with instructions (e.g., code) which, when executed, cause ACM circuitry 202 to perform operations described herein.

Now turning to FIG. 4, FIG. 4 is a schematic illustration of a radio frame structure, in accordance with examples described herein. Although examples described herein are generally applicable to TDD retro-directive beamforming (RDB) systems with dynamic allocations in time, frequency, and/or spatial multiplexing of multiple users in general, an example implementation depicted in FIG. 4 is used to facilitate understanding. Examples described herein may generally focus on the physical (PHY) layer, but some aspects from the medium-access (MAC) layer are described as well. Many modern wireless communications systems (e.g., LTE, WiMAX, etc.) have adopted OFDM and/or OFDM multiple-access (OFDMA) as a communication standard, as described herein. Under OFDM, the frequency domain is divided into a set of equally spaced subcarriers, and the time domain is divided into equally spaced symbols, as shown in FIG. 4.

FIG. 4 illustrates subcarriers and symbols in accordance with examples described herein. The vertical axis 416 represents frequency, while the horizontal axis 418 represents time. The frequency domain (e.g., vertical axis) is divided into a number of sub-bands, with sub-bands 30-35 being shown in FIG. 4. Any number of sub-bands may be used, where each sub-band contains a specified number of subcarriers, not shown, corresponding to the bins of an OFDM Fast Fourier Transform (FFT) residing within that sub-band. The time domain (e.g., horizontal axis) is divided into FFT blocks, referred to herein as OFDM symbols. 55 OFDM symbols are shown in FIG. 4, although any number may be used within the same or a different amount of time by resizing the FFT as needed. An example of a TDD orthogonal frequency-division multiple access (OFDMA) frame structure is shown in FIG. 4, where a base station (BS) or base node (BN) is transmitting to a user equipment (UE) or residential node (RN) during the first part of the frame (symbols 0-40). This part is often called the down-link (DL) part of the frame, e.g., downlink 410. After a time gap, e.g., time gap 412, the RN may transmit to the BS or BN for the remainder of the frame. That part is often called the up-link (UL) portion of the frame, e.g., uplink 414, and in this example covers symbols 44-55 in FIG. 4. In some examples, user allocations are scheduled in units of a sub-band that spans a number of OFDM subcarriers. In the example implementation depicted in FIG. 4, an OFDM FFT size of 4096 subcarriers over a 40 MHz carrier covers 66 sub-bands with 52 subcarriers in each sub-band.
For other bandwidths, the number of sub-bands may be different. For TDD RDB systems, UL and DL allocations may span the same sub-bands to exploit channel reciprocity. In the example shown in FIG. 4, a user is allocated sub-band 31, resulting in 32 data OFDM symbols over 52 subcarriers in the DL and 8 data symbols over the same subcarriers in the UL. The remaining symbols are used for control channels, beamformer training, and channel sounding. In some examples, and as similarly described with respect to FIG. 3, various packets, such as input packets, may be converted into modem packets (e.g., by packet processor 118 of FIGS. 1A and 1B) that in turn are sent over the link in one or many transport blocks (TBs) (e.g., payloads 420, 422, 424, and 426), depending on packet size and TB MCS, across the allocated sub-bands and symbols. The mapping of modem packets into transport blocks can be done in many ways. A simple mapping is the natural order, while more complicated mappings, such as various forms of pseudo-random mappings that spread a modem packet across the transport blocks, are possible. Furthermore, the transport blocks can be mapped or striped into sub-bands in many different ways as well. In some examples, uplink allocation information 406 and per-sub-band MCS information 408 may be sent back to the BS or the BN by the RN. In some examples, uplink allocation information 406 and per-sub-band MCS information 408 sent back by the RN may be used by, for example, scheduler 116, switch 120, or packet processor 118 of FIGS. 1A and 1B for subsequent allocation of sub-bands and subsequent transmission of packets over links of various wireless communications (e.g., access) systems described herein. In some examples, a base node may determine a UL and/or a DL allocation (they are matched for RDB), so an RN does not send information about which sub-bands/streams are allocated, but does send information, e.g., channel quality information, for the allocated sub-bands. In some examples, the information and/or content of the CCE and FCCH channels may be different in the UL and DL. For example, the DL FCCH may in some examples contain the MCS and power control information, and the UL FCCH may contain the SINR and error information per-sub-band. Similarly, in some examples, the DL CCE may contain allocation information, while the UL CCE may contain channel information from a node (e.g., a remote node), such as, for example, overall errors, power control information, and/or many other metrics.

Now turning to FIG. 5, FIG. 5 illustrates transport block mapping across sub-bands and reference symbols, in accordance with examples described herein. As depicted herein, FIG. 5 is one example of a frame structure 500 (e.g., frame 500). Frame structure 500 includes 18 data symbols (shown on x-axis 502) with an allocation spanning sub-bands 1-6 and 8-16 (skipping sub-band 7) (shown on y-axis 504). In some examples, a TB (e.g., TB 506 and/or TB 508) needs to span eight resource elements, where a resource element is one sub-band and one symbol. The first transport block (TB) in frame 500 (e.g., TB 506) is striped across sub-bands 1-4 and symbols one and two. Alternatively, this can be viewed as grouping sub-bands 1-4 together in terms of MCS, since each TB has a single MCS. In some examples, grouping sub-bands together impacts not only the payload data performance but also the MCS signaling and feedback. As depicted herein, the various TB blocks are labeled TB1 through TB33.
As depicted, the next three TBs are sent over the next six symbols, while the fifth TB is sent over sub-bands 5-6 over symbols one through four. Transport block striping may be optimized for performance, both in terms of payload performance and feedback channel performance. In many examples, the receiver needs to be aware of how the transmitter striped the TBs, either through dedicated signaling or through rules known by both transmitter and receiver. Further enhancements are possible, where certain packets are prioritized to be sent on sub-bands with better performance in terms of performance metrics such as SINR and/or TBER. Each link's allocation and MCS selection operations are managed over a combination of two bidirectional (UL and DL) control channel types: one type termed the link control channel element (CCE), of which there is one per link, and the other type termed the fast control channel (FCCH), of which there is one per-sub-band per-spatial-stream allocated to that link. The allocation information, such as which sub-bands and spatial dimensions a user is allocated to, is conveyed in the first symbol in the CCE. Each user has a dedicated beamformed CCE channel spanning a small number of sub-bands. Hence, the CCE channel is not a broadcast channel, since that would limit the possibilities of beamforming that enables high throughput and robust interference performance. A difference from LTE and WiMAX is that the control channel (CCE) does not contain the MCS selection, which is instead contained in a separate channel labeled the DL FCCH, which is integrated into the reference symbols (RS). For high performance, the beamformer may be trained independently on each sub-band and each frame using the RS. In addition, a separate DL FCCH channel is maintained per-sub-band per-spatial-stream, which enables a distinct MCS per-sub-band per-spatial-stream. Hence, the MCS information is separate from the allocation information that is sent on the CCE in a sub-band or sub-bands unrelated to the allocation. Furthermore, the matching UL allocations provide a fine-grained feedback channel from the RN to the BN of channel condition metrics within the UL FCCH. Examples of metrics include, but are not limited to, the signal-to-interference-and-noise ratio (SINR) and different types of codec conditions (errors, decoder iterations, etc.). Furthermore, if sub-bands are grouped together, for example as shown in FIG. 5, various forms of coding of the feedback may be used to improve the reliability of decoding the FCCH channel. A simple example of coding that improves the reliability is to repeat the same information in the FCCH channel across the grouped sub-bands.

Now turning to FIG. 6, FIG. 6 illustrates a reference symbol (RS) punctuated with a fast control channel (FCCH) for one sub channel, in accordance with examples described herein. Recall that, in examples described herein, a control channel (CCE) may not contain an MCS selection. In some examples, instead, an MCS selection may be contained in a separate channel, such as a DL FCCH. As depicted in FIG. 6, FCCHs, such as FCCH 604a-604n, may be integrated into the reference symbols (RS), such as reference symbols 602a-602n. In some examples, for high performance, the beamformer is trained independently on each sub-band and each frame using the RS. In addition, a separate DL FCCH channel is maintained per-sub-band per-spatial-stream, which in some examples enables a distinct MCS per-sub-band per-spatial-stream.
Hence, the MCS information may be separate from the allocation information that is sent on the CCE in a sub-band or sub-bands unrelated to the allocation. Recall that the feedback of channel conditions (e.g., per-sub-band per-spatial-stream) from a remote node (e.g., residential node, remote node 208 of FIG. 2, etc.) to a base node (e.g., base node 206 of FIG. 2) may, in some examples, enable ACM circuitry (e.g., ACM circuitry 202 of FIG. 2) coupled to the base node to select the UL MCS and/or the DL MCS. In some examples, selecting both UL and DL MCS at the base node (e.g., base node 206 of FIG. 2) may make it possible to include knowledge of the presence of other users (e.g., other remote nodes of wireless communications systems described herein) that may share the same sub-band and their spatial compatibility. In some examples, it may also enable the incorporation of dynamic beamforming effects of new users (e.g., new remote nodes) entering or users leaving the sub-band into the MCS selection. In some examples, if the RN (e.g., remote node 208 of FIG. 2) selects the UL MCS, that type of information would not be available since, in some examples, only the BN (e.g., base node 206 of FIG. 2) has access to the allocation of other users (via a scheduler and/or a spatial database, such as scheduler 116 and/or spatial database 124 of FIGS. 1A and 1B). In some examples, it may also enable a BN (such as base node 206 of FIG. 2) to share scheduling information between itself and nearby BNs to further improve performance by coordinating the per-sub-band per-spatial-stream user scheduling and MCS selection across sectors. In some examples, that information may contain metrics such as distance to the BN but also spatial information and compatibility with other users. Now turning to FIG. 7, FIG. 7 is a schematic illustration of a system 700 for interference aware adaptive selection of modulation and coding schemes, which may include user scheduling, arranged in accordance with examples described herein. Examples of system 700 described herein may include base node 702 and remote node 704. Base node 702 may be similar in architecture, functionality, and operation to the one or more base nodes described herein, such as those of FIGS. 1A and 1B, and/or one or more components of the one or more base nodes described herein, such as components of FIGS. 1A and 1B. For example, base node 702 may be intended to communicate wirelessly with one or more other communications nodes (e.g., residential nodes, remote nodes, remote node 704, and the like). A base node, such as base node 702, may, for example, be positioned on a tower or other centralized location. Remote node 704 may be similar in architecture, functionality, and operation to the one or more remote and/or residential nodes described herein, such as those of FIGS. 1A and 1B, and/or one or more components of the one or more remote nodes described herein, such as components of FIGS. 1A and 1B. For example, remote node 704 may be particular to a building, home, office, or other location. Remote node 704 may in turn be in communication with one or more electronic devices and may facilitate communication between the electronic devices and the base node, such as base node 702. Any number or type of electronic device may be serviced using communication techniques described herein including, but not limited to, computing systems, servers, desktops, laptops, tablets, cellular phones, appliances, vehicles, etc.
Base node 702 may include uplink (UL) multidimensional interference margin generator 712, UL ACM 714, scheduler 716, downlink (DL) ACM 718, and UL and DL base signal-to-interference-plus-noise ratio (UL & DL BASE SINR) 720. In some examples, base node 702 may include uplink reference symbol signal to interference plus noise power ratio metric 706 (UL RS-SINR 706), uplink pilot-based signal to interference plus noise power ratio metric 708 (UL P-SINR 708), and uplink transport block error metric 710 (UL TBE 710). In some examples, UL RS-SINR 706, UL P-SINR 708, and/or UL TBE 710 may be transmitted to various components within base node 702 with varying latencies, such as latency 738a, latency 738b, and latency 738c, respectively. As should be appreciated, UL margin generator 712 may be similar in functionality and/or operation to multidimensional interference margin generator 136 of FIG. 1B. In some examples, the functionalities and/or operations performed by UL margin generator 712 may be implemented using multidimensional interference margin generator 136. Remote node 704 may include DL margin generator 728 and sounding 730. In some examples, remote node 704 may include downlink reference symbol signal to interference plus noise power ratio metric 732 (DL RS-SINR 732), downlink pilot-based signal to interference plus noise power ratio metric 734 (DL P-SINR 734), and downlink transport block error metric 736 (DL TBE 736). In some examples, DL RS-SINR 732, DL P-SINR 734, and/or DL TBE 736 may be transmitted to various components within remote node 704 with varying latencies, such as latency 738d, latency 738e, and latency 738f, respectively. As should be appreciated, DL margin generator 728 may be similar in functionality and/or operation to multidimensional interference margin generator 136 of FIG. 1B. In some examples, the functionalities and/or operations performed by DL margin generator 728 may be implemented using multidimensional interference margin generator 136. In some examples, base node 702 and remote node 704 may communicate information (e.g., encoded data bits, decoded data bits, RF signals, margins, uplink per-sub-band per-stream margins, and the like) utilizing various channels, such as DL fast control channel 722 (DL FCCH 722), UL fast control channel 724 (UL FCCH 724), and/or UL control channel 726 (UL CCE 726). As should be appreciated, additional and/or alternative and/or fewer implementations and/or components may be used to perform the functions of system 700. Operationally, and as one example, base node 702 and remote node 704 may communicate information utilizing packet processor 118 and switch 120 of FIG. 1B. Recall that one common problem in wireless access is interference, which may be caused by one or more external (or internal) sources, such as, for example, other wireless transmitters nearby. In some examples, this may particularly be a problem in unlicensed parts of the radio frequency band but can be a problem in any band. Fine-grained MCS selection schemes are well suited to handle interference in general, and in particular interference that covers part of the band but not the whole band, which is a common scenario. Interference aware ACM techniques for fine-grained MCS selection in the presence of interference can substantially improve performance. Further improvements may be possible by also including interference awareness in the user scheduling. For example, avoiding scheduling users on frequency bands with interference may improve user experience as well as network performance.
Accordingly, system 700 described herein may be configured for interference aware adaptive selection of modulation and coding schemes, including user scheduling, in accordance with examples described herein. As should be appreciated, components of system 700 are described in further detail herein, but an overview of the interaction of certain components of system 700, including the channel metrics (e.g., channel metrics indicative of interference) that may be exchanged between them, follows below. In some examples, UL multidimensional interference margin generator (MIMG) 712 (e.g., similar to multidimensional interference margin generator 136 of FIG. 1B) may be configured to compute an uplink per-sub-band per-stream margin (e.g., a UL subMar) based at least on various metrics, for example, UL RS-SINR 706, UL P-SINR 708, and/or UL TBE 710. In some examples, UL RS-SINR 706, UL P-SINR 708, and/or UL TBE 710 may be computed locally in base node hardware and software, such as locally in base node 702. In some examples, the uplink per-sub-band per-stream margin may be sent (e.g., transmitted) to UL ACM 714 and/or UL & DL BASE SINR 720. As should be appreciated, while shown in FIG. 7 as separate components, UL ACM 714 (and DL ACM 718 discussed herein) and UL & DL BASE SINR 720 may be coupled in the same circuitry. Additionally and/or alternatively, the functions and operations performed by UL ACM 714 (and DL ACM 718 discussed herein) and UL & DL BASE SINR 720 may be performed by ACM circuitry 126 of FIG. 1B. In some examples, UL ACM 714 may select a UL MCS and, in some examples, compute a UL user margin based at least on several channel metrics including UL RS-SINR 706, UL P-SINR 708, and/or UL TBE 710. Each channel metric may have a separate latency, such as latency 738a, 738b, and/or 738c, respectively, that UL ACM 714 may compensate for internally. In some examples, UL ACM 714 may further base the MCS selection on the uplink per-sub-band per-stream margin determined by multidimensional interference margin generator 712, a UL BASE SINR from UL & DL BASE SINR 720 (e.g., ACM circuitry 126 of FIG. 1B), and a UL allocation from scheduler 716 (e.g., scheduler 116 of FIG. 1B). The selected UL MCS, in some examples, is signaled in DL FCCH 722, and in some examples the UL per-user margin (e.g., the determined UL per-sub-band per-stream margin) is sent to UL & DL BASE SINR 720. As should be appreciated, scheduler 116 of FIG. 1B may perform the same functionalities and/or operations as scheduler 716 of FIG. 7, and in some examples, the functionalities and operations of scheduler 716 may be performed on and/or implemented using scheduler 116. In some examples, scheduler 716 may base UL allocations, DL allocations, or combinations thereof, on information about the demand from different users as received from a switch (e.g., such as switch 120 of FIG. 1) and other sources. In some examples, scheduler 716 may base the UL allocations, DL allocations, or combinations thereof, on the UL and DL BASE SINR, such as UL and DL BASE SINR 720. As one example, if the demand is 100 Mb/s, then the number of sub-bands to allocate may depend on the resulting MCS selection, which in some examples may be a function of the BASE SINR that factors in path loss from sounding (e.g., such as sounding 730), interference patterns as observed from UL RS-SINR 706, UL P-SINR 708, and/or UL TBE 710, or combinations thereof.
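As a rough sketch of this sizing step, the following illustrates how a scheduler might translate a demand into a number of sub-bands through a BASE-SINR-to-rate mapping. The MCS_LUT values and the greedy selection are assumptions for illustration, not values or logic from the systems described herein:

# (assumed values) minimum BASE SINR in dB -> per-sub-band rate in Mb/s
MCS_LUT = [(3.0, 2.0), (8.0, 4.0), (14.0, 6.5), (20.0, 9.0)]

def rate_for_sinr(sinr_db):
    """Rate of the highest MCS whose threshold the BASE SINR meets (0 if none)."""
    rate = 0.0
    for threshold, mbps in MCS_LUT:
        if sinr_db >= threshold:
            rate = mbps
    return rate

def subbands_to_meet_demand(demand_mbps, base_sinr_db):
    """Greedily pick sub-bands with the best BASE SINR until the demand is met."""
    chosen, total = [], 0.0
    for sb in sorted(base_sinr_db, key=base_sinr_db.get, reverse=True):
        if total >= demand_mbps:
            break
        chosen.append(sb)
        total += rate_for_sinr(base_sinr_db[sb])
    return chosen

# BASE SINR per sub-band already factors in path loss, margins, and co-users.
sinr = {sb: 16.0 - 0.5 * sb for sb in range(1, 33)}
print(len(subbands_to_meet_demand(60.0, sinr)))  # sub-bands needed for 60 Mb/s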
In some examples, UL & DL BASE SINR 720 may compute and/or determine a BASE SINR for the UL, the DL, or combinations thereof, based at least on the sounding information from a remote node, such as remote node 704, as well as the user and uplink per-sub-band per-stream margins. In some examples, DL ACM 718 may select a DL MCS and compute and/or determine a DL user margin based on one or more metrics including the DL SINR received in UL FCCH 724. In some examples, each metric may include a separate feedback latency in addition to the latency 738d, 738e, 738f (e.g., L4-L6) within the components of remote node 704. In some examples, DL ACM 718 may further base the MCS selection on the DL subMar received in UL CCE 726, UL FCCH 724, or combinations thereof. In some examples, the DL SINR sent in UL CCE 726 may be derated (e.g., directly derated) by the DL per-sub-band per-stream margin. In some examples, DL ACM 718 may use input from scheduler 716 about the allocations and the DL BASE SINR. In some examples, the selected DL MCS may be signaled in DL FCCH 722 and the DL user margin may be sent to UL and DL BASE SINR 720. In some examples, DL multidimensional interference margin generator (MIMG) 728 (e.g., similar to multidimensional interference margin generator 136 of FIG. 1B) may compute a per-sub-band per-stream margin (e.g., a DL subMar, as used herein) based at least on DL RS-SINR 732, DL P-SINR 734, DL TBE 736, or combinations thereof, which in some examples may be computed locally in the remote node hardware and software (e.g., remote node 704). In some examples, the DL subMar is sent to UL FCCH 724, UL CCE 726, sounding 730, or combinations thereof. In some examples, and depending on the chosen method, the DL subMar may also be used to derate the DL SINR that is sent in UL FCCH 724. As should be appreciated, the above was an overview of interactions between and amongst certain components of system 700, including the channel metrics (e.g., channel metrics indicative of interference) that may be exchanged between them. Various functionalities and/or operations of system 700 may entirely and/or in part be performed and/or implemented by various components of FIGS. 1A and 1B described herein. Table (1), below, is an example of metrics that may be exchanged amongst and between certain components of system 700.
TABLE (1)

UL Multidimensional Interference Margin Generator (MIMG) 712 (Base Node 702)
  Inputs: UL RS-SINR 706; UL P-SINR 708; UL TBE 710
  Outputs: UL subMar to UL ACM 714; UL subMar to BASE SINR 720

UL ACM 714 (Base Node 702)
  Inputs: UL RS-SINR 706; UL P-SINR 708; UL TBE 710; UL subMar from UL MIMG 712; UL BASE SINR from BASE SINR 720; UL allocations from Scheduler 716
  Outputs: UL MCS to DL FCCH 722; UL USER margin to BASE SINR 720

Scheduler 716 (Base Node 702)
  Inputs: UL BASE SINR from BASE SINR 720; DL BASE SINR from BASE SINR 720
  Outputs: UL allocations to UL ACM 714; DL allocations to DL ACM 718

UL & DL BASE SINR 720 (Base Node 702)
  Inputs: Adjusted sounding from UL CCE 726; UL subMar from UL MIMG 712; DL subMar from UL FCCH/CCE 724/726; UL user margin from UL ACM 714; DL user margin from DL ACM 718
  Outputs: UL BASE SINR to Scheduler 716; DL BASE SINR to Scheduler 716

DL ACM 718 (Base Node 702)
  Inputs: DL SINR from UL FCCH 724; DL TBE 736 from UL FCCH/CCE 724/726; DL subMar from UL FCCH/CCE 724/726; DL BASE SINR from BASE SINR 720; DL allocations from Scheduler 716
  Outputs: DL MCS to DL FCCH 722; DL USER margin to BASE SINR 720

DL Multidimensional Interference Margin Generator (MIMG) 728 (Remote Node 704)
  Inputs: DL RS-SINR 732; DL P-SINR 734; DL TBE 736
  Outputs: DL subMar to UL FCCH/CCE 724/726; DL subMar to Sounding 730; DL SINR to UL FCCH 724

Sounding 730 (Remote Node 704)
  Inputs: DL subMar from DL MIMG 728
  Outputs: Adjusted sounding to UL CCE 726

Example channel condition metrics that may be exchanged between UL MIMG- and DL MIMG-related components described herein.

Per-Sub-Band Per-Stream (and/or Per-User) Margin

In some examples, systems (such as at least system 100 of FIGS. 1A and 1B, and system 700 of FIG. 7) described herein may transmit reference symbols (RS) that are used to compute beamforming weights for the remainder of the sub-frame. In some examples, the reference symbols may be transmitted using packet processor 118 and switch 120 of FIG. 1, and/or utilizing DL FCCH 722, UL FCCH 724, and UL CCE 726 of FIG. 7. The impact of interference on the system may, in some examples, depend on the duration and timing of the interference relative to the RS, and how frequent it is. As described herein, beamforming weights may be computed by a weights processor, such as weights processor 108 of beamforming network 128 of FIG. 1A. In some examples, the beamforming weights may be calculated for one or more transceivers and/or one or more antennas in a system, such as antennas 102a-102f of system 100 of FIGS. 1A and 1B, and (while not shown) various antennas of system 700 of FIG. 7. In some examples, if interference hits the RS, the beamforming weights may minimize the interference impact if the interference has a spatial signature different from the desired signal. In some examples, even if the beamformer can largely mitigate the interference, such mitigation may result in loss of signal strength to a desired user. In some examples, this may be referred to as beam-packing loss. Since the MCS selection by examples of ACM circuitry described herein, such as ACM circuitry 126 of FIG. 1, may include a time delay, intermittent interference hitting the RS may still result in errors. This may be due to the fact that the MCS may be selected based on a past frame without interference and therefore a higher SNR. If the interference has a short time duration and does not hit the reference symbols but other parts of the frame, the beamforming weights may not mitigate the interference, and/or may not mitigate the interference as well as desired. In this case, the actual SNR on payload symbols may be lower than the RS SNR from which the MCS is selected. Hence, the selected MCS may be too high for later symbols, causing errors.
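The following is a minimal sketch of one way per-sub-band receive weights and an RS-SINR estimate could be derived from the reference symbols; the MMSE-style formulation is an assumption for illustration and is not the actual computation of weights processor 108:

import numpy as np

def rs_weights_and_sinr(rs_rx, rs_ref, noise_var):
    """rs_rx: antennas x RS-symbols received block; rs_ref: known RS sequence."""
    n_ant = rs_rx.shape[0]
    # Sample covariance of the received RS (signal + interference + noise).
    R = rs_rx @ rs_rx.conj().T / rs_rx.shape[1] + noise_var * np.eye(n_ant)
    # Correlating with the known reference gives a channel estimate.
    h = rs_rx @ rs_ref.conj() / rs_ref.shape[0]
    w = np.linalg.solve(R, h)            # MMSE-style combining weights
    s = w.conj() @ rs_rx                 # combined RS after beamforming
    gain = w.conj() @ h                  # effective complex gain
    err = s - gain * rs_ref              # residual vs. the scaled reference
    sinr = np.abs(gain) ** 2 / max(np.mean(np.abs(err) ** 2), 1e-12)
    return w, 10 * np.log10(sinr)

rng = np.random.default_rng(0)
ref = np.exp(2j * np.pi * rng.random(16))                  # unit-power RS
h = rng.standard_normal(4) + 1j * rng.standard_normal(4)   # 4-antenna channel
rx = np.outer(h, ref) + 0.05 * (rng.standard_normal((4, 16))
                                + 1j * rng.standard_normal((4, 16)))
w, rs_sinr_db = rs_weights_and_sinr(rx, ref, noise_var=0.0025)
print(round(rs_sinr_db, 1))

If an interferer is present during the RS, it appears in the covariance R and the weights steer away from it; if it arrives only after the RS, as discussed above, these weights cannot account for it.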
In some examples, if the interference is short in duration and infrequent, other parts of systems described herein, such as components of system 100 of FIGS. 1A and 1B and/or system 700 of FIG. 7, may recover the transmitted data through retransmission strategies (ARQ/HARQ). However, if the interference is short in duration but very frequent, it may negatively impact the system performance since many retransmissions may be necessary, resulting in larger delays and lower throughput. In some examples, this is true regardless of whether the interference hits the RS or not when the interference spatial signature is not fully spatially resolvable by the beamformers. In some examples, if the interference affects all the sub-bands of the system, such as sub-bands included in the set of signals, sets 110, of FIGS. 1A and 1B, the per-user adaptive margin described herein may quickly rise, which may impact all sub-bands; this may be the correct action for full-band interference. However, if the interference is only impacting a portion of the band, the per-user margin may go up and impact not only the affected sub-bands but other sub-bands, and in some examples all sub-bands. In some examples, this may potentially limit throughput since there may be a mismatch between the fine-grained MCS selection and the full-band per-user adaptive margin. One advantage of having a full-band user margin in some examples is that it may allow for a quick response. As seen in Algorithm (4) above, in some examples, the condition for increasing the margin may be based on the accumulated errors exceeding a threshold. In some examples, errors collected over all (or some) allocated sub-bands and multiple streams may accumulate much faster than for a single sub-band when tracking wide-band channel conditions. In some examples, if errors only impact a portion of the band, a separate per-sub-band per-stream margin (e.g., subMar), such as a UL subMar calculated by ACM circuitry 126 of FIG. 1B and/or margin generator 712 of FIG. 7, may perform better even when operating on a slower time scale. In some examples, two narrow-band interferers may cause transport block errors (TBEs) in a few sub-bands in addition to a normal error pattern covering all sub-bands, such as depicted in FIG. 11. In some examples, as the total number of TBEs rises, the user margin also rises, which may impact all sub-bands, not just the interfered ones. The same error scenario is shown in FIG. 12, but with a per-sub-band per-stream margin (subMar), as described herein, in addition to the user margin. In some examples, and as depicted in FIG. 12, the subMar reacts with a per-sub-band granularity similar to the fine-grained MCS selection. It may be set to a high value on the interfered sub-bands but remain at a low or zero value on non-interfered sub-bands that may be covered by the user margin. The MCS selection may then consider both margins, which, in some examples, may result in a lower MCS on the interfered sub-bands, which may reduce and/or eliminate the errors on those sub-bands and thus may avoid an increase in the user margin as seen in FIG. 11. Since the user margin is lower in the subMar scenario depicted in FIG. 12 and described herein, the overall throughput may be higher since for most sub-bands a higher average MCS may be selected compared to the selection that is depicted in FIG. 11.
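A minimal sketch of an MCS selection that derates the measured SINR by both margins follows; the MCS_LUT thresholds are assumed, illustrative values:

# (assumed values) minimum effective SINR in dB -> MCS index
MCS_LUT = [(0.0, 0), (5.0, 1), (10.0, 2), (15.0, 3), (20.0, 4)]

def select_mcs(rs_sinr_db, user_margin_db, submar_db):
    """Derate the measured SINR by both margins, then look up the MCS."""
    effective = rs_sinr_db - user_margin_db - submar_db
    mcs = 0
    for threshold, index in MCS_LUT:
        if effective >= threshold:
            mcs = index
    return mcs

# Interfered sub-band: a high subMar pulls the MCS down; on a clean sub-band
# only the (low) user margin applies, so a high MCS is kept.
print(select_mcs(18.0, 1.0, 6.0))  # interfered sub-band -> MCS 2
print(select_mcs(18.0, 1.0, 0.0))  # clean sub-band      -> MCS 3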
Note that, in some examples, and just as for the user margin, a separate sub-band margin in each link direction (UL/DL) may be beneficial since the interference conditions may be different. In some examples, one reason for maintaining the user margin for MCS selection may be that it may be much faster and more responsive given that it collects errors across all (or some) of the allocated sub-bands. Under a consistent presence of short or intermittent interference, the sub-band margin may react and allow for a lower user margin over the longer term. In some examples, the impact of interference also depends on the spatial structure of the interference compared to the desired user. If the spatial structure of the interferer is very similar to the desired user, the impact may be more severe, making a larger subMar desirable. Furthermore, it may also depend on the number of spatial streams, e.g., the amount of spatial multiplexing the desired user employs. For example, the user might be able to spatially cancel the interference when using one stream but not if using two spatial streams. Hence, the subMar may depend on the amount of spatial multiplexing, and better performance may be achieved by calculating a separate subMar for a single stream, two streams, and so forth, resulting in a subMar per-sub-band per-number-of-streams.

User Scheduling

As described herein, although the subMar (such as a UL per-sub-band per-stream margin and/or a DL per-sub-band per-stream margin) may improve the overall throughput substantially by reducing errors on the per-sub-band level, it may be even better to avoid allocating sub-bands with known high error rates. Incorporating interference knowledge as described herein through the subMar into a scheduler (such as scheduler 716) may also and/or further enable better MAC efficiency in meeting user demands. For example, if the impact of interference is not known to a scheduler (such as scheduler 716), the scheduler may schedule too few sub-bands to meet a demand. Incorporating interference knowledge into scheduling decisions as described herein may be beneficial for wireless systems generally, but especially for the example wireless communication systems described herein that employ retro-directive beamforming and/or spatial multiplexing (such as system 100 of FIG. 1 and/or system 700 of FIG. 7). For the example system, the unique aspects of retro-directive beamforming and channel sounding may provide one example of introducing interference knowledge into a scheduler (such as scheduler 716) through the use of a BASE SINR (such as UL & DL BASE SINR 720) as described herein.

User Scheduling and Retro-Directive Beamforming

In some examples, one principle of retro-directive beamforming (RDB) is to use the spatial structure of the received signal to decide the spatial structure of the transmitted signal. For example, if the received signal impinging on the receiving antenna array comes primarily from one direction, the transmitter may then transmit back in the same direction. Both analog and digital forms of RDB are possible; however, examples described herein focus on a digital implementation. As should be appreciated, while digital examples are described, analog examples are considered to be within the scope of this disclosure.
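The following is a minimal sketch of the digital RDB idea under TDD reciprocity (discussed further below); the conjugate/matched-filter relationship between receive and transmit weights is a simplified assumption for illustration:

import numpy as np

def rdb_transmit_weights(w_rx, total_power=1.0):
    """Transmit back along the spatial signature seen on receive."""
    w_tx = np.conj(w_rx)  # reuse the UL spatial structure for the DL
    return w_tx * np.sqrt(total_power) / np.linalg.norm(w_tx)

# Receive weights from an UL channel estimate (matched filter for simplicity).
h_ul = np.array([1 + 1j, 0.5 - 0.2j, -0.3 + 0.8j, 0.1 + 0.1j])
w_rx = h_ul / np.linalg.norm(h_ul)
w_tx = rdb_transmit_weights(w_rx)
# Under TDD reciprocity the DL channel equals the UL channel (up to
# calibration), so |h^T w_tx| is large: energy is steered back to the remote.
print(abs(h_ul @ w_tx))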
Since the characteristics of the radio channel typically may be frequency dependent, RDB is often implemented for TDD systems where transmissions from the remote node to the base node (UL) and from the base node to the remote node (DL) use the same frequency, so the receive beamforming weights may be used to derive the transmit beamforming weights. The beamforming weights described herein, such as beamforming weights calculated by weights processor 108 of beamforming network 128 of FIG. 1A, may refer to, in some examples, how to weigh signals received from the multiple antennas. In this manner, beamforming, such as beamforming by beamforming network 128 using antennas 102a-102f of FIG. 1A, may generally correspond to forming a spatial signature that approximates and/or matches the channel and may optimize a performance metric, such as, for example, metrics described herein. In order to base the transmit weights on the receive weights, a TDD RDB system (such as system 100 of FIG. 1 and system 700 of FIG. 7 herein) may often employ matching UL and DL allocations in the sense that they occupy the same sub-bands and span the full sub-frame. Hence, each allocation decision may need to factor in the UL demand, the DL demand, or combinations thereof. Although not shown in FIG. 7, scheduler 716 may start the allocation decision from the UL demand, which may be available locally on base node 702, and the DL demand, which may be fed back to base node 702 from remote node 704. Demands for example systems described herein may be provided by a demand estimator, such as demand estimator 122 of FIG. 1, through a switch, such as switch 120 of FIG. 1. The UL and DL throughput gains of scheduling a sub-band to a user may then be compared with the UL and DL demands. In example systems described herein, channel sounding (e.g., via sounding 730 of FIG. 7) may be used to predict the UL and DL throughput gains for the desired user and the possible negative impact of other spatially multiplexed users on the same sub-band. This prediction may, in some examples, be extended to also include external interference knowledge through the subMar into an expected or BASE SINR, such as UL & DL BASE SINR 720. In some examples, another use of interference awareness metrics such as UL RS-SINR 706, UL P-SINR 708, and/or UL TBE 710, as well as DL RS-SINR 732, DL P-SINR 734, and/or DL TBE 736, used to determine one or more subMars described herein (e.g., a UL subMar, a DL subMar, or combinations thereof), may include deciding the amount of spatial multiplexing. Recall that, in some examples, a scheduler, such as scheduler 116 of FIG. 1, may allocate one or more streams per user for transmission. In some examples, it may be that a user can spatially separate the desired signal from interference if that user has a single spatial stream allocated, but may not be able to separate the interference when that user is allocated two spatial streams. In some examples, if the scheduler (such as scheduler 116 of FIG. 1, scheduler 204 of FIG. 2, and/or scheduler 716 of FIG. 7) is aware of this, it may be able to make allocation decisions that result in better user throughput as well as network efficiency. In some examples, one way of providing such information is to calculate a different subMar for different numbers of spatial streams, as described herein. As one non-limiting example, the scheduler (e.g., scheduler 116 of FIG. 1) may receive information that a user cannot be allocated more than a threshold number of streams.
In some examples, this information may come from one or more sources, such as from past communications with the user, hard-coded information from an administrator of the communications system, from interference awareness metrics as described herein, and/or other sources. In some examples, the threshold number of streams may be two streams, five streams, ten streams, and/or any other number of streams. Based on the information, the scheduler may allocate any number of streams for a given user. In some examples, the number of streams allocated to a first user may be different from the number of streams allocated to a second user. In some examples, the number of streams allocated to a user may change (e.g., update) over time to include fewer, additional, and/or alternative streams. In some examples, a margin generator (e.g., multidimensional interference margin generator 136 of FIG. 1) may calculate one subMar value when a user is allocated a first number of streams. In some examples, multidimensional interference margin generator 136 may calculate another subMar value when the user is allocated a second number of streams. In some examples, multidimensional interference margin generator 136 may calculate a subMar value for each stream of the number of streams allocated to a user. In some examples, the calculated subMar for each stream of the number of streams allocated to a user may be the same, may be different, or combinations thereof. In some examples, the subMar(s) generated by a margin generator (e.g., multidimensional interference margin generator 136 of FIG. 1) may be sent to the scheduler. In some examples, based at least in part on the calculated subMar(s), the scheduler may make updates to the allocation for the user. In some examples, these updates may include adding new allocations, subtracting (unallocating) previous allocations, and/or keeping the allocation the same. In some examples, the scheduler may also make determinations about which sub-bands are sufficient to allocate, and which are not, based, in some examples, on observed channel interference (e.g., the calculated subMar(s), etc.). In some examples, the scheduler may determine whether current or potential allocations are within an acceptable range. If not, the scheduler may update the number of streams (or which streams) per user, and/or which streams to choose from for allocation. In some examples, although an administrator may limit the number of streams, the presence of interference may not explicitly generate a threshold. For example, as the scheduler evaluates going from one stream to two streams for a sub-band, it may be that the two-stream subMar is high enough that two streams would result in lower expected throughput than a single stream. In that example, the scheduler may not add one more stream. In some examples, this may not be an explicit threshold but an evaluation of the expected throughput, which may be lower with more streams due to the interference metrics (e.g., subMar). Accordingly, in some examples, and as just one non-limiting example, scheduler 116 of FIG. 1 (and/or scheduler 204 of FIG. 2 and/or scheduler 716 of FIG. 7) may make stream allocation decisions based at least on one or more interference awareness metrics described herein. By basing allocation, in some examples, at least on one or more interference awareness metrics, better throughput per stream may result, as well as better network efficiency.
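As an illustration of such an evaluation, the following sketch picks a stream count by comparing expected throughput under a per-number-of-streams subMar; the rate mapping and margin values are assumed, illustrative quantities:

def rate_from_sinr(sinr_db):
    """Toy SINR-to-rate mapping standing in for the MCS LUT (assumed)."""
    return max(0.0, 0.5 * sinr_db)  # Mb/s per stream, purely illustrative

def best_num_streams(base_sinr_db, submar_by_streams):
    """Pick the stream count with the highest expected total throughput."""
    best, best_tput = 1, 0.0
    for n, submar in submar_by_streams.items():
        tput = n * rate_from_sinr(base_sinr_db - submar)
        if tput > best_tput:
            best, best_tput = n, tput
    return best

# Interference the beamformer can cancel with one stream but not with two:
# the two-stream subMar is high, so a single stream wins.
print(best_num_streams(20.0, {1: 1.0, 2: 12.0}))  # -> 1
# Clean sub-band: spatial multiplexing pays off.
print(best_num_streams(20.0, {1: 1.0, 2: 2.0}))   # -> 2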
In some examples, a subMar may be calculated using multidimensional interference margin generator 136 of FIG. 1B (and/or MIMG 712 and MIMG 728 of FIG. 7) based on various metrics described herein. In some examples, multidimensional interference margin generator 136 may generate subMars per-sub-band, per-stream, and/or per-user. In some examples, multidimensional interference margin generator 136 may generate subMars per-number-of-streams. As illustrated at least in Table (1), the calculated subMars may ultimately be distributed (and/or sent) to a scheduler (such as scheduler 716 of FIG. 7) and used for stream allocation purposes. The allocated streams may, in some examples, be transmitted using antennas 102a-102f of FIG. 1A.

Channel Sounding

In some examples, a scheduler (such as scheduler 716) has knowledge of the channel properties of some and/or all of the users and possible external interference (for example, through calculated subMar(s), such as those discussed herein). Accordingly, the scheduler may predict the throughput result of an allocation decision. In some examples, the better the prediction of the allocation outcome, the better the user experience and network efficiency that may be achieved. This channel knowledge may be acquired by sending a reference or sounding signal from a base node (such as base node 702) to one or more remote nodes (e.g., including remote node 704 and/or other remote nodes described herein but not shown) such that the one or more remote nodes receive and use the reference or sounding signal to compute an effective channel. In some examples, this calculation may involve knowing the transmit power of the sounding signal, from which the path loss of the radio channel can be calculated. The transmit power may be provided (e.g., signaled) through a broadcast channel from the base node (such as base node 702) to the remote node (such as remote node 704). In some examples, one unique aspect of the TDD RDB systems described herein is the additional dependency on the spatial properties of the channel between users that the scheduler (such as scheduler 716) may benefit from. Hence, in example systems described herein, the remote nodes may feed back the estimated channel to the base node using the UL CCE channel (such as UL CCE 212 of FIG. 2 and/or UL CCE 726 of FIG. 7). With this knowledge, the base node and the scheduler may, in some examples, predict the expected per-user DL and UL throughput for a given sub-band and include the impact of other spatially multiplexed users. Further performance improvement may be possible by also including interference awareness as described herein.

BASE SINR (e.g., UL & DL BASE SINR)

In some examples, an expected throughput for a given scheduling decision may be computed (such as by UL & DL BASE SINR 720 of FIG. 7) as the sum of the expected throughput across sub-bands, which in turn is computed from the resulting MCS on each sub-band and the error rate. As described herein, a look-up table (LUT) MCS_LUT based on an effective SNR may be used to determine the MCS, which in turn determines the throughput. Hence, in some examples, a method of predicting the throughput of a potential allocation decision includes computing (such as by UL & DL BASE SINR 720 of FIG. 7) an effective SINR from which the MCS can be predicted, and thus also the throughput. In the following, that effective or expected SINR is, in some examples, called the BASE SINR. A sounding SNR may be computed from the sounding channel knowledge and the transmit power.
This sounding SNR may then be adjusted by the user margin and the subMar to produce separate UL and DL BASE SINRs. For example, the BASE SINR may be found by subtracting a margin particular to a user (e.g., userMargin) and a margin particular to a sub-band (e.g., subMar) from the sounding SNR. In some examples, a coUserLoss may also be subtracted from the sounding SNR to provide the BASE SINR. Equation (10) provides an example equation for calculating the BASE SINR, where SINRbase(SBk) is the BASE SINR for sub-band k, SNRsounding(SBk) is the sounding feedback for sub-band k, userMargin is the determined user margin for a user allocated to sub-band k, subMar(SBk) is the per-stream margin for sub-band k, and coUserLoss(SBk) is a loss from the presence of co-channel users.

SINRbase(SBk) = SNRsounding(SBk) − userMargin − subMar(SBk) − coUserLoss(SBk)  Equation (10)

Throughput(SBk) = f(SINRbase(SBk))  Equation (11)

In some examples, and from the BASE SINR, the expected throughput from a scheduling decision may then be calculated as shown in Equation (11), where the function f( ) represents a computation from the BASE SINR into an MCS using the look-up table (LUT) MCS_LUT described herein, and then to throughput based on the allocation size and selected MCS. In some examples, if more than one user is allocated to a sub-band, e.g., spatial multiplexing, a separate loss term, coUserLoss, may be included in Equation (10) to account for the potential SINR loss from the presence of other co-channel users. This term may be calculated from the sounding channel knowledge at the base node (such as base node 702), which includes information from scheduled co-channel users, all scheduled co-channel users in some examples. By basing an SINR calculation on a loss from the presence of co-channel users, the scheduler may, in some examples, avoid allocating non-spatially-compatible users on the same sub-band. As the allocator (e.g., scheduler 716 of FIG. 7) makes the decision of which sub-band to allocate, when the allocator uses the subMar values, the use of the subMar value(s) may, in some examples, drive allocations to sub-bands with a low subMar, which may indicate sub-bands with less interference. In some examples, this may be because the SINR calculated using Equation (10) may be larger for a particular sub-band that has a lower subMar. As such, the scheduler, such as scheduler 716 of FIG. 7, may be more likely to schedule communications into sub-bands having a higher SINR. Therefore, sub-bands having a lower subMar may be more likely to be allocated (e.g., scheduled) for communications. An extension of this may be an example where the interference is so strong that the subMar is high enough to push the expected SNR below the threshold for even the lowest MCS. In that example, the allocator (e.g., scheduler 716 of FIG. 7) may choose to not allocate, and/or to remove, existing allocations that fall below this threshold. Thus, a fine-grained subMar combined with a fine-grained MCS selection may play an important role in optimizing the system efficiency by avoiding errors, and also by avoiding allocating poor sub-bands and/or removing them. In some examples, the SINR and scheduler allocation discussed herein may be performed for both the UL and DL through the UL & DL BASE SINR (such as UL & DL BASE SINR 720), the UL/DL subMar, and the UL/DL user margin. The scheduler (e.g., scheduler 716) may weigh the UL and DL traffic needs against the UL/DL throughput expectations when allocating a sub-band to a user.
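A minimal sketch of Equations (10) and (11) follows; the MCS_LUT rates are assumed, illustrative values:

# (assumed values) minimum BASE SINR in dB -> rate in Mb/s
MCS_LUT = [(3.0, 1.5), (8.0, 3.0), (14.0, 5.0), (20.0, 7.5)]

def base_sinr(snr_sounding_db, user_margin_db, submar_db, co_user_loss_db=0.0):
    """Equation (10): SINRbase = SNRsounding - userMargin - subMar - coUserLoss."""
    return snr_sounding_db - user_margin_db - submar_db - co_user_loss_db

def predicted_throughput(sinr_base_db):
    """Equation (11): f() maps the BASE SINR to a rate via the MCS LUT."""
    rate = 0.0
    for threshold, mbps in MCS_LUT:
        if sinr_base_db >= threshold:
            rate = mbps
    return rate

# A sub-band whose subMar pushes the BASE SINR below the lowest MCS threshold
# predicts zero throughput, so the allocator may skip or remove it.
for sb, (snr, submar) in {1: (18.0, 0.5), 2: (18.0, 16.0)}.items():
    print(sb, predicted_throughput(base_sinr(snr, 1.0, submar, 0.5)))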
As one example of weighing UL and DL needs, a user with only DL traffic demands may be scheduled on a sub-band with a low DL subMar but a high UL subMar, since the lower UL throughput may not be a problem if a UL MCS low enough for low error rates is available.

Power Based Scheduling

In some examples, increasing the power may improve performance by increasing the relative signal strength of a desired signal over the interference. In some examples, this may also be applicable to cell-edge remote nodes where the signal strength is low and the performance is poor even in the absence of interference. In some examples, if the transmitter has spare power, it may increase the power directly. In some examples, if the transmitter is already using all the power, the number of sub-bands allocated to a user may be reduced, and, in some examples, the transmit power per sub-band may be increased accordingly. In some examples, this is applicable for both the base node (such as base node 702) and the remote node (such as remote node 704). However, in some examples, it may be more often applied at the remote node. Useful metrics for this decision may include, but are not limited to, the BASE SINR, subMar, error rate, and the average MCS, as described herein. Hence, in some examples, by using those metrics, the scheduler (such as scheduler 716) may also consider and/or factor in the potential transmit power change in the allocation decision. As one non-limiting example, a base node (and/or a remote node, etc.) may only be able to transmit or receive signals using up to a threshold value of power. In some examples, the power needed to transmit or receive signals may not reach the power threshold. In other examples, the power threshold may be reached. In some examples, the power threshold may be exceeded. Accordingly, in some examples, a scheduler (e.g., scheduler 116 of FIG. 1) may receive information that the base node (and/or remote node) has not reached its threshold power level. This information may come from one or more sources, such as the BASE SINR, subMar, error rate, the average MCS, etc. In some examples, the scheduler may use this information to schedule (e.g., allocate) additional and/or alternative sub-bands for a user for signal transmission. In some examples, the scheduler may receive information that the base node (and/or remote node) has already reached its power threshold. In some examples, the scheduler may use this information to not allocate additional sub-bands, and/or to unallocate already-allocated sub-bands, for a user for signal transmission. In some examples, the scheduler may dynamically alter (e.g., change, update, etc.) the number of and/or which sub-bands to allocate based on received information regarding the amount of power a base node (and/or a remote node) is using relative to a power threshold value. In some examples, although power levels may be explicitly signaled between a transmitter and a receiver, the information may be additionally and/or alternatively obtained from the sounding information. In some examples, the sounding gives the pathloss, which can be used to calculate an SINR under different power assumptions. For example, if the remote has a large pathloss but a small allocation, it may use all its power on those sub-bands. Given that, in some examples, the BN knows the maximum transmit power and the pathloss, it may know that the remote pooled all its power into those few sub-bands.
In this example, a scheduler, such as the schedulers described herein, might consider adding more sub-bands to that user since it may appear that doing so would increase the throughput. In some examples, with the power information described, the scheduler may recognize that the remote would then have to spread its power across more sub-bands, which may provide lower throughput improvement than expected, or none at all.

Fast-Attack Slow-Decay (FASD) Interference Aware MCS Selection Techniques

In some examples, the systems and methods described herein may compute the beamforming weights as well as an SINR estimate, RS-SINR, from the reference symbols (RS) transmitted on symbols 1-2 as shown in FIG. 6 described herein. In some examples, a different SINR estimate, P-SINR, may additionally and/or alternatively be calculated in an analogous manner using the pilots that are transmitted throughout the frame, as discussed herein. In some examples, a P-SINR value may be calculated for different symbols across the frame, yielding a separate value for each subset of symbols. If interference hits both the reference symbols and the pilot symbols, the two SINRs may be (and in some examples will be) similar. On the other hand, in some examples, if the interference only hits the pilots but not the reference symbols, the P-SINR calculated based on the interfered pilot symbols may be lower than the RS-SINR calculated based on the reference symbols. This may be because the beamforming weights were computed when no interference was present but applied later in the frame when the interference was present. How much lower the P-SINR becomes may, in some examples, depend on the alignment of the interference and the subset of symbols used in calculating the P-SINR. For example, if all symbols in the subset are affected, then the computed P-SINR may capture the full impact of the interference. However, if interference only affects a fraction of the symbols in the subset, the computed P-SINR will not reflect the full amount of interference. Note that interference resulting from scheduling other users on the same sub-band may, in some examples, be largely reflected already in the RS-SINR, since co-channel users may be present throughout the sub-frame, so the interference discussed herein is largely external interference. In some examples, one way of detecting this condition and avoiding errors may be to compute (e.g., using multidimensional interference margin generator 136 of FIG. 1B, and/or MIMG 712 and/or MIMG 728 of FIG. 7) the sub-band margin subMar as the difference between the P-SINR and the RS-SINR and use that subMar in the MCS selection as shown in Equation (11), where the function f( ) represents a computation from the BASE SINR into an MCS using, in some examples, the look-up table (LUT) MCS_LUT described herein, and then to throughput based on the allocation size and selected MCS. As one non-limiting example, a UL & DL BASE SINR, such as UL & DL BASE SINR 720 of FIG. 7, may calculate (e.g., determine, etc.) an SINR based on one or more reference symbols (RS) found in a particular place in a given frame. The UL & DL BASE SINR may also calculate an SINR based on the pilot symbols found in the same or other places in the given frame. This information may be used by a margin generator, such as multidimensional interference margin generator 136 of FIG. 1B, to determine a margin. For example, the margin generator may find the difference between the SINR calculated for the reference symbols and the SINR calculated for the pilot symbols.
This difference may be used as the margin. As should be appreciated, the BASE SINR may still be computed from sounding, with the user margin and the subMar then subtracted out, as described herein. In some examples, the subMar in the case of FASD is calculated from the difference of the RS-SINR and the P-SINR. As used herein, the difference between the RS-SINR and P-SINR will be referred to as droop. Note that, in some examples, the computations of RS-SINR and P-SINR have different delays, so the two values may have to be time-aligned as discussed herein. Given that the interference can hit at any time, it may, in some examples, be desirable to use the minimum P-SINR across the frame when comparing with the RS-SINR to fully detect the interference. In some examples, computing (e.g., using multidimensional interference margin generator 136 of FIG. 1B, etc.) the difference between two estimated quantities, of which one is a minimum of several values, may result in a larger variance. In some examples, using a highly variable SINR difference as the subMar in the ACM selection as described herein may lead to a highly variable MCS, with errors resulting from occasionally selecting too high of an MCS. It is therefore desirable, in some examples, to filter the droop to reduce the variance of the subMar. In some examples, once interference hits, it may be desirable to react quickly and lower the MCS (e.g., using ACM circuitry 126 of FIG. 1B, etc.) to avoid excessive errors. Filtering to reduce variance may slow down the reaction speed and result in errors while the filter converges. In some examples, a solution to this is to use two different filters depending on the difference between the current droop and the filtered droop. A current droop larger than the filtered droop may be indicative of the SINR being lower on the payload symbols than at the reference symbols, indicating the presence of interference. On the other hand, a small droop difference may be indicative of the RS-SINR and the minimum P-SINR being similar, indicating no interference. Hence, with interference, the droop difference may be large, and it may be better to use a fast-reacting filter with one set of coefficients. If the droop difference is small, indicating no interference, it may be better to use a slow filter with another set of coefficients. The slow filter may reduce the variance of the steady-state droop, and thus the MCS, and ensure good performance in the absence of interference. The fast filter may react quickly and result in a rapidly increasing droop when interference hits. To allow for a fine-grained MCS and interference awareness, the droop and the subMar may be computed (e.g., using multidimensional interference margin generator 136 of FIG. 1B, etc.) per spatial stream, per sub-band, and per user. As should be appreciated, filter selection may be automatic, manual, or combinations thereof. In some examples, filter selection may be performed by one or more components of system 100 of FIGS. 1A and 1B, and/or system 700 of FIG. 7. In some examples, filter selection may be performed by UL ACM 714 and/or DL ACM 718 of FIG. 7, and/or ACM circuitry 126 of FIG. 1. In some examples, systems and methods described herein may comprise and/or include a filter selection defined in the pseudo-code in Algorithm (5), where a droop larger than the subMar selects the fast-attack filter (alphaFA) and otherwise the slow-decay filter (alphaSD) is selected. For example, if the droop is large, we may select alphaFA, which could be 0.1, meaning that we incorporate 90% of the droop immediately.
If the droop is small, we may select alphaSD, which could be 0.9999, meaning that we may only incorporate 0.0001 of the new value, resulting in a slow decay. In example systems described herein, sub-bands may be dynamically allocated (e.g., using scheduler 116 of FIG. 1B, scheduler 716 of FIG. 7, etc.), and a sub-band that is allocated in one frame may later not be allocated to the same user or not allocated at all. Hence, the updating of the subMar may depend on whether a sub-band is allocated or not. In some examples, if it is allocated, the fast-attack slow-decay filters may be used, but if a sub-band is not allocated, a different approach is needed. In some examples, one way is to just use the slow filter, but, in many cases, it may be important to keep the subMar high after an interference event such that, if interference returns, the droop and subMar are already high and the selected MCS is low enough to avoid errors. Hence, a separate, even slower decay may be considered for unallocated sub-bands. The different steps in the fast-attack slow-decay (FASD) subMar interference aware MCS selection algorithm are summarized in Algorithm (5), below.

Algorithm (5)
If subband is allocated:
    Compute droop per stream and subband in dB as
        droop(sb,strm) = max(min(RsSinr(sb,strm), high_threshold), low_threshold)
                         - max(min(min_psym[PSINR(sb,strm,psym)], high_threshold), low_threshold)
    Compute droop(sb) per subband as the max droop(sb,strm) across streams, offset the droop
    based on the number of streams, and limit to positive droop in dB:
        droop(sb) = max[ max_strm[droop(sb,strm)] - expDroop[numStrms], 0 ]
    Filter allocated subbands using the FASD filter with two alphas to get the per-sub-band
    margin in dB:
        if droop(sb) > subMar(sb):
            alpha = alphaFA
        else:
            alpha = alphaSD
        where alphaFA = 0.1 and alphaSD = 0.9999
        subMar(sb) = alpha*subMar(sb) + (1 - alpha)*droop(sb)
Else:
    Update per-subband counter k_sb = k_sb + 1; once allocated, reset counter k_sb = 0
    If k_sb > smFrameThr:
        subMar(sb) = subMar(sb)*beta_sm, and reset k_sb = 0
    where smFrameThr and beta_sm are selected for the desired long-term decay and subMar is in dB.
Pseudo Code for FASD subMar Computation

Algorithm (5) contains additional and/or alternative details of the subMar computations described herein. In some examples, since the estimates of the RS-SINR and P-SINR may contain very large or very small values, it may be important to limit the range of these values in the droop computation to avoid outsized impact from a few values. Hence, both the RS-SINR and P-SINR may be limited by a high_threshold and a low_threshold value. In some examples, various components of systems described herein may perform the thresholding. In some examples, MIMG module 136 of FIG. 1B and/or MIMG 712 and/or MIMG 728 of FIG. 7 may perform the thresholding. Other components may, singularly or in combination, additionally and/or alternatively perform the thresholding. This thresholding may additionally and/or alternatively limit the variation in the high and low SNR regimes, further reducing the variance in the absence of interference. Although a subMar may have been computed (e.g., by multidimensional interference margin generator 136 of FIG. 1B, and/or MIMG 712 and/or MIMG 728 of FIG. 7) per-sub-band as well as per-stream, the pseudo code may be used by a UL & DL BASE SINR, such as UL & DL BASE SINR 720 of FIG. 7, to compute the maximum droop across streams, offset by an expected droop. The expected droop may be due to the fact that the statistics of computing the minimum value across multiple values may introduce a negative bias. The two estimated SINRs may also inherently have different biases due to different estimators.
The expected droop may thus compensate for this, and the computation limits the droop to only positive values to avoid an SNR boost from a negative droop. Although many different types of filter structures may be used to filter the droop to produce the subMar, and the many different types of filters are contemplated to be within the scope of this disclosure, in some examples an Infinite Impulse Response (IIR) filter is used in the pseudo code in Algorithm (5), where the alpha coefficient may be selected based at least on whether the droop exceeds the current (e.g., present) subMar value. Examples of coefficients are a fast filter alphaFA = 0.1 and a slow filter alphaSD = 0.9999; however, it should be appreciated that many other coefficients may be used and are contemplated to be within the scope of this disclosure. In some examples, if the sub-band is no longer allocated, the stored subMar may use an exponential decay, although other decay mechanisms can be used. Here, to reduce the HW and DSP processing, a counter may be used to only decrease the subMar every smFrameThr frames. In some examples, the decay parameter betasm may also be selected as a value close to unity for slow decay. For example, selecting betasm = 0.99995 and smFrameThr = 200 may result in a 99% reduction of the sub-band margin in a day. The example pseudo code may be implemented in dB, but may also be implemented in non-logarithmic values. The pseudo-code in Algorithm (5) may first be used by a UL & DL BASE SINR, such as UL & DL BASE SINR 720 of FIG. 7, to compute the droop per-sub-band and per-stream, and a margin generator may then use the computed droop to select the largest droop across all allocated streams on a sub-band to yield a subMar per-sub-band and per-number-of-streams per user. In some examples, the droop may be computed inside of an MIMG as described herein. In some examples, additional and/or alternative components of systems described herein may compute the droop. In some examples, the impact of interference may depend on the spatial signatures of the interferer and the weights selected by each of the user's streams. For example, it may be possible that a single stream is almost unaffected by an interferer, but if the same user instead uses two streams, the impact of the interference may be severe. In some examples, separating the impact into separate streams may be quite difficult since the weights may wander from frame to frame, meaning that the spatial signature of each stream is wandering and may even swap between streams over time. Given this behavior, it may be advantageous to compute, using a margin generator (e.g., multidimensional interference margin generator 136 of FIG. 1B), a per-sub-band per-number-of-streams subMar rather than a single per-sub-band subMar or a per-sub-band per-stream subMar, such as in methods described herein. In some examples, the pseudo-code in Algorithm (5) may be modified to still compute, using a margin generator (e.g., multidimensional interference margin generator 136 of FIG. 1B), the maximum across allocated streams but only update the single-stream subMar if only a single stream is allocated to a user on a sub-band. If two streams are allocated to a user on that sub-band, the two-stream subMar may be updated instead, and so forth for more streams. Hence, only one per-sub-band per-number-of-streams subMar may be updated per frame per sub-band, advantageously keeping the computations low in some examples.
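The following is a runnable sketch of the FASD update of Algorithm (5); the alpha, beta, and smFrameThr values follow the text, while the clamping thresholds and the per-stream inputs are assumptions for illustration:

ALPHA_FA, ALPHA_SD = 0.1, 0.9999        # fast-attack / slow-decay coefficients
BETA_SM, SM_FRAME_THR = 0.99995, 200    # unallocated-sub-band decay parameters
HIGH_T, LOW_T = 40.0, -10.0             # RS-/P-SINR clamping thresholds (dB, assumed)

def clamp(x):
    return max(min(x, HIGH_T), LOW_T)

def fasd_update(submar, k, allocated, rs_sinr, min_p_sinr, exp_droop):
    """One frame of the per-sub-band FASD filter; rs_sinr/min_p_sinr are per-stream lists."""
    if allocated:
        # Per-stream droop, then max across streams, offset and floored at zero.
        droops = [clamp(rs) - clamp(pm) for rs, pm in zip(rs_sinr, min_p_sinr)]
        droop = max(max(droops) - exp_droop, 0.0)
        alpha = ALPHA_FA if droop > submar else ALPHA_SD
        return alpha * submar + (1.0 - alpha) * droop, 0
    # Unallocated sub-band: slow exponential decay every SM_FRAME_THR frames.
    k += 1
    if k > SM_FRAME_THR:
        return submar * BETA_SM, 0
    return submar, k

# An interference burst: droop jumps and fast-attack raises subMar at once.
submar, k = 0.0, 0
submar, k = fasd_update(submar, k, True, [20.0, 19.0], [12.0, 14.0], 1.0)
print(round(submar, 2))  # ~6.3 dB after a single interfered frame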
As should be appreciated, the terms per-sub-band per-stream subMar and per-sub-band per-number-of-streams subMar are used interchangeably herein when the subMar is applied in different modules such as ACM and scheduler modules.

TBER Interference Aware Technique

The FASD technique described herein, in some examples, bases the per-sub-band margin subMar computation (e.g., using a margin generator such as multidimensional interference margin generator 136 of FIG. 1B, etc.) at least on the difference between the RS-SINR and the P-SINR to detect instances of interference and lower the MCS through the subMar to avoid errors. In some examples, it reacts fast but can in some cases react too fast and then take a long time to recover. In some examples, a goal of the per-sub-band margin is to avoid having errors on a portion of the band cause the user margin to go up, which may lower the overall system throughput. In some examples, a more direct approach to achieve this goal may be to analyze the transport block errors (TBEs) on a per-sub-band basis. In some examples, this may have the benefit of capturing other types of localized errors that may not show up in the SINR droop computation. Examples of non-interference types of errors could be hardware (HW) or operator-induced errors. In some examples, using TBE counters may allow for an implementation that requires more persistent errors over several frames to react, which may result in better overall system performance, rather than quickly reacting to a single frame of heavy interference as the FASD technique may do depending on parameter choices. Non-persistent errors may be resolved by retransmission schemes such as ARQ/HARQ. Using TBE counters for the subMar may also allow for more precise control of the target TBE rate (TBER) on a per-sub-band basis. It also complements the user margin, which controls the TBER on a per-user basis (across all sub-bands and streams), also using TBE counters. The per-user margin technique herein may be changed to additionally and/or alternatively operate on a per-sub-band basis. In some examples, with only a single sub-band, the accumulation of TBEs and TBs may be substantially slower than for the per-user margin. In one example, a user may have 256 sub-band-streams allocated, corresponding to 256 times faster TB/TBE accumulation than a single sub-band and stream. With the goal of detecting the long-term interference landscape of the channel and not reacting to single occurrences, the speed difference between the per-user margin and the per-sub-band margin in some examples may not be a problem. An example setting is to select a TBER operating point of 1e-3, N_ERR_INC = 90, and N_TBLOCK = 90,000, corresponding to 90,000/200/4 = 112.5 s for a down step if errors are low, and 20 frames for an increase if all TBs in a sub-band are in error. For other operating points, there is a trade-off between the time for a down step, based on the N_TBLOCK size selection, and the stability and increase speed, based on the N_ERR_INC selection. In some examples, another option is to use a drain-based technique where the subMar is decreased, or drained, by a small amount every frame. This provides a smooth decrease using smaller steps than the per-sub-band version of the user margin. In some examples, this drain may be performed using various components described herein, such as, for example, multidimensional interference margin generator 136 of FIG. 1B, and/or MIMG 712 and/or MIMG 728 of FIG. 7.
Another benefit of the drain-based technique over extending the per-user-margin based technique is that it may enable fine tuning of the behavior. In such examples, a filtered error counter may be used to decide when the subMar is increased. More details are included in Algorithm (6), where, for each frame, the transport block errors (TBEs) per sub-band and stream may be summed across all sub-bands in a sub-band group for each stream. In one non-limiting example, a margin generator may receive information indicative of the need to reduce (e.g., decrease) the subMar for a particular sub-band. In some examples, this information may be received from various components of system 100 and/or system 700. The information may derive from, in some examples, interference metrics described herein. For example, the information from the interference metrics may indicate there is less (e.g., decreased) channel interference, which may result in the ability to use smaller (e.g., decreased) subMars per sub-band for a given channel. As such, in some examples, the margin generator may, based at least on this information, decrease or drain the subMar by a small amount every frame. In some examples, a low complexity TBER update DeltaTber may then be implemented by subtracting the number of transport blocks (TBs) multiplied with a tbeDrain from the TBEs. The tbeDrain is typically selected to be equal to the desired TBER target. A filtered version tbeFilt may then be created by adding DeltaTber to the previous tbeFilt. To avoid negative values that may take a long time to recover from, tbeFilt may be limited to only zero or positive values. Note that due to the transport block striping, the number of TBEs may only be computed on a per-sub-band group basis. If the largest tbeFilt across all allocated streams for a sub-band exceeds a threshold tbeThreshold, the subMar is increased by subMarIncStep and tbeFilt is reset to zero. For all (or some of the) frames, the subMar may be decreased by subMarDrain unless subMar is zero.
Algorithm (6)
Filter TBEs per sub-band and stream to detect sub-bands with a higher error rate:
    DeltaTber = sum across all sub-bands sb in the sub-band group of [ TBE(sb,strm,n) - TB(sb,strm,n)*tbeDrain ]
    tbeFilt(sb,strm,n) = max[ tbeFilt(sb,strm,n) + DeltaTber, 0 ]
    tbeDrain is selected based on the TBER target
    Use the same update for TBE and TB for all sub-bands in a sub-band group
Update sub-band margin with threshold and drain:
    if maxstrm[ tbeFilt(sb,strm,n) ] > tbeThreshold
        subMar(sb,n) = subMar(sb,n-1) + subMarIncStep
        tbeFilt(sb,strm,n) = 0 for all strm
    else
        subMar(sb,n) = subMar(sb,n-1)
    subMar(sb,n) = max[ subMar(sb,n) - subMarDrain, 0 ]
Pseudo Code for TBER subMar Computation
The settings utilized to achieve a specific TBER target may be more involved than for the user-margin based algorithm described herein. The amount drained every frame may, in some examples, need to be matched to the increase step size and the number-of-errors threshold, among other things. By selecting the subMarDrain as: subMarDrain = subMarIncStep * (TBER/2) * TbSBG / tbeThreshold  Equation (12) where TbSBG is the total number of TBs in a sub-band group per frame, the error rate may be close to, but not exactly equal to, the target error rate. Many variations of Equation (12) are possible to formulate to maintain the desired TBER rate. For example, if subMarIncStep=0.1, TbSBG=9, and tbeThreshold=90, then subMarDrain=0.005 TBER. Hence, similar to the other methods described herein, the decrease speed is related to the TBER target.
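As one non-limiting example, the per-frame update of Algorithm (6) together with the drain selection of Equation (12) may be sketched in Python as follows. Array shapes and function names are assumptions; due to transport block striping, the same update applies to every sub-band in a sub-band group.

import numpy as np

def tber_submar_update(tbe_filt, submar, tbe, tb, tbe_drain,
                       tbe_threshold, submar_inc_step, submar_drain):
    # One frame of Algorithm (6) for one sub-band group.
    # tbe_filt: shape (num_streams,), filtered error counts
    # submar:   scalar per-sub-band margin (dB) shared within the group
    # tbe, tb:  shape (num_subbands_in_group, num_streams)
    # Filter TBEs per stream: sum across the group, drain by the expected
    # number of errors at the target TBER, and floor at zero.
    delta_tber = np.sum(tbe - tb * tbe_drain, axis=0)
    tbe_filt = np.maximum(tbe_filt + delta_tber, 0.0)
    # Increase the margin if the worst stream exceeds the threshold.
    if np.max(tbe_filt) > tbe_threshold:
        submar += submar_inc_step
        tbe_filt[:] = 0.0
    # Drain the margin by a small amount every frame, floored at zero.
    submar = max(submar - submar_drain, 0.0)
    return tbe_filt, submar

def select_submar_drain(submar_inc_step, tber_target, tb_sbg, tbe_threshold):
    # Equation (12): e.g., 0.1 * (TBER/2) * 9 / 90 = 0.005 * TBER.
    return submar_inc_step * (tber_target / 2.0) * tb_sbg / tbe_threshold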
In one example, with an error rate of TBER=1e-3, the subMar decreases by 0.005*1e-3*200*100=0.1 over 100 s. This decay rate is similar to that of the subMar technique based on the user margin scheme discussed above, with a decay of 112.5 s per down step. In some examples, the pseudo code in Algorithm (6) may be used by, for example, multidimensional interference margin generator 136 of FIG. 1B, to compute a separate tbeFilt per-sub-band and stream and then increase the per-sub-band subMar if the maximum tbeFilt exceeds a threshold. Similar to the discussion on the FASD interference aware technique described herein, it may be advantageous to compute a per-sub-band per-number-of-streams subMar due to the different spatial properties of individual streams and interference. The pseudo-code in Algorithm (6) may be modified to compute, by, for example, multidimensional interference margin generator 136 of FIG. 1B, the DeltaTber as the sum across allocated streams and then update the single-stream tbeFilt if only a single stream is allocated to a user on a sub-band, and so forth for more streams. The per-sub-band per-number-of-streams subMar may be updated by comparing the tbeFilt for the corresponding number of streams with the threshold. In some examples, one per-sub-band per-number-of-streams subMar is updated per frame per-sub-band, keeping the computations low. As stated throughout, the terms per-sub-band per-stream subMar and per-sub-band per-number-of-streams subMar will be used interchangeably, such as, for example, when the subMar is applied in different modules such as ACM and scheduler modules. Allocator (e.g., Scheduler) Multi-Stream Behavior In some examples, an allocator (e.g., a scheduler, such as scheduler 116 of FIG. 1, scheduler 204 of FIG. 2, and/or scheduler 716 of FIG. 7) may choose to allocate multiple streams to a user in a sub-band, and the impact of interference may be quite different depending on how many streams are allocated. Although the channel knowledge from sounding may show that a user is capable of supporting a two-stream transmission, the interference environment may not enable the user to reliably spatially multiplex two streams and maintain a good user throughput. This may be especially common for dual polarized antenna systems where the interference may be primarily of one polarization. In instances like these, it may be desirable for the scheduler (e.g., a scheduler, such as scheduler 116 of FIG. 1, scheduler 204 of FIG. 2, and/or scheduler 716 of FIG. 7) to get that direct feedback and therefore increase the allocations in the frequency domain instead of the spatial domain. It is therefore important, in some examples, to have a fine-grained subMar approach also in terms of streams. Given that the per-stream beamforming weights are constantly changing, it may be difficult to designate a separate subMar per stream. In some examples, an alternate approach may be to calculate a separate subMar based on the number of streams and not the stream index, as discussed herein. For example, if the beamformers are not capable of cancelling the interference when operating two streams, the two-stream subMar might be high but the single-stream subMar might be low. In some examples, adding an extra stream may not only result in a lower SINR for the added stream but may also lower the SINR of the existing streams and increase the user's error rate.
This may be difficult to predict, but a robust and conservative approach may be to emphasize the degradation of adding streams by using a multiplier on the subMar term in the computation of the expected SINR in Equation (11), which is then used in Equation (10) to estimate the predicted throughput. For example, if the single-stream subMar is 0.1 and the two-stream subMar is 9.0, the expected throughput addition from adding a second stream may be based on an expected SINR that is almost 18 dB lower than the first stream if using a multiplier of two. Such a low SINR may result in a relatively small throughput increase from adding a second stream, so the allocator decision may be incentivized to not add the extra stream. This improves the overall efficiency of the network since allocating low SINR sub-band streams lowers the throughput per resource. Although the examples above use two streams, it should be appreciated that the approach may be extended to more streams. In some examples, an additional and/or alternative technique that may be more complex may have the scheduler (such as scheduler 716) reduce the expected throughput of the existing streams when adding more streams, based on the difference in per-number-of-streams subMar. A scheduler (e.g., a scheduler, such as scheduler 116 of FIG. 1, scheduler 204 of FIG. 2, and/or scheduler 716 of FIG. 7) making allocation decisions (e.g., allocating additional sub-bands, allocating sub-bands, swapping out and/or changing sub-bands, etc. for a user for a given communication) based on information received from various sources, such as interference metrics, has been described throughout. Adaptive Rate of Change of subMar In some examples, there may be a desire to have an adaptive decay of the subMar when a sub-band is not allocated. In some examples, the rate of decay may be controlled through the parameter beta as shown in Algorithm (5), which may be applicable to both FASD and TBER interference aware subMar. If a burst of interference has brought the subMar up to a high value for a portion of the band and the allocator has deallocated those sub-bands, it is important in some examples to not lower the subMar too quickly. If the subMar goes down quickly, the allocator may decide to bring those sub-bands back, causing another burst of errors. In some examples, if this happens frequently, the link performance may suffer. Hence, it is of interest to hold the high value for some time, but not for many hours or days, which would be the case if a uniform decay rate is used to meet the requirement of holding the subMar to avoid frequent re-allocations. The interference techniques described herein may use a similar mechanism as outlined in Algorithm (3) for the user margin to implement an adaptive decay rate of the subMar, by using a reduction counter that is increased for consecutive decreases in the subMar. In some examples, one possible way of implementing that is to use the number of consecutive drains to index into a drain look-up table. An example using a drain look-up table with three regions is shown in FIG. 13, where the first 10 mins use beta=1, which means no decay. The next 10 mins use beta=0.99995 for a slow decay, and after 20 mins a much faster decay of beta=0.995 is used to quickly bring down the subMar, due to the large number of consecutive decreases indicating that the interference may have disappeared.
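As one non-limiting example, the three-region drain look-up table of FIG. 13 may be sketched in Python as follows. The 200 frames/s rate and the assumption that the consecutive-drain counter advances once per frame are illustrative, not from the source.

FRAMES_PER_SEC = 200

def beta_from_lut(consecutive_drains):
    # Return the decay factor beta for an un-allocated sub-band, indexed
    # by how long the subMar has been decreasing consecutively.
    minutes = consecutive_drains / (FRAMES_PER_SEC * 60.0)
    if minutes < 10.0:
        return 1.0       # hold: no decay for the first 10 minutes
    elif minutes < 20.0:
        return 0.99995   # slow decay for the next 10 minutes
    else:
        return 0.995     # fast decay once the interference seems gone

# Per-frame use (illustrative): submar *= beta_from_lut(n_drains); n_drains += 1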
As should be appreciated, although the discussion above focused on un-allocated sub-bands, similar techniques may be used to create an adaptive subMar decay also for allocated sub-bands, and such techniques are considered to be within the scope of this disclosure. For example, a similar counter for frames since the last subMar increase may be used to select a drain from a drain look-up table. Similarly, a counter of the number of frames between increases may be used to increase the rate of increase if strong persistent interference hits. In some examples, one way of doing this is by lowering the tbeThreshold if consecutive increases with few frames in between occur. The tbeThreshold parameter may also be made a function of the TBER target to optimize draining speed at different operating points. As one non-limiting example, a margin generator, such as multidimensional interference margin generator 136 of FIG. 1B, may receive information indicative of channel interference (e.g., high channel interference). As such, the margin generator may, in some examples, generate a large-valued subMar to send to the scheduler, such as scheduler 116 of FIG. 1B, to make allocation decisions. In some examples, those channels with high interference may maintain such interference for a prolonged time. However, in some examples, those channels may experience less and/or no interference at a later time. As such, the margin generator may implement an adaptive decay rate for the subMar associated with the channels experiencing high interference. In some examples, this may provide the scheduler with information to continue not scheduling the channels with high interference until a later time when those channels have less interference. As should be appreciated, and in some examples, implementing an adaptive decay rate may result in a faster decay when no interference has been present for some time, and in some examples, may enable those sub-bands to be used again. Signaling In some examples, the introduction of a per-sub-band margin (subMar) and the connection with the scheduler may introduce new signaling requirements. In some examples, a remote node may compute the subMar locally, but in order for the ACM and the scheduler that reside at the base node to account for it, the computed subMar may need to be signaled to the base node. In some examples, on the other hand, the base node may compute the UL subMar locally and, in such examples, there may be no direct need to signal it to the remote node. However, a UL ACM (such as UL ACM 714 of FIG. 7) and a scheduler (such as scheduler 716 of FIG. 7) on the base node may, in some examples, need to know the UL subMar. For some of the example systems and methods described herein, the DL subMar can be signaled to the ACM and scheduler in the base node in several different ways. In some examples, it is important to design the feedback channels such that they may support a fine-grained approach also for the per-sub-band margin. In some examples, the UL ACM, DL ACM, BASE SINR, and scheduler (as depicted at least in FIG. 7) may account for the DL subMar in different ways, so separate solutions for each are possible. In some examples, the UL subMar may be computed locally at the base node and relayed to the ACM module and the scheduler through the UL BASE SINR computation that includes the current UL subMar value. There exist many possibilities for signaling the DL subMar to the ACM and scheduler.
While two of those are described below, it should be appreciated that other possibilities are contemplated to be within the scope of this disclosure. Derate Approach In some examples, the derating approach may be based at least on derating the estimated DL SINR computed by the remote node with the DL subMar, fed back in the UL FCCH per-sub-band and stream. In some examples, this is the SINR that the ACM module uses for selecting the next MCS. In some examples, that would substantially improve the error rate and link performance in the presence of interference by accounting for the SINR drop in the MCS selection. In some examples, the derating approach may be based at least on derating the sounding feedback from the remote node to the base node with the DL subMar, since the scheduler uses the DL BASE SINR to decide allocations. With interference present, in some examples, the DL BASE SINR may drop due to the DL subMar derating, allowing the scheduler to modify or avoid allocations on interfered sub-bands. As used herein, derating is generally when a system or component is operated below its normal (and/or maximum) operating limit (e.g., current rating, power rating, voltage rating, etc.). In some examples, this reduces the deterioration rate of the component and minimizes failures attributed to extreme operating conditions. In some examples, operating a system or component below its normal (and/or maximum) operating limit may also prolong the system or component's lifespan. As should be understood, although interference aware ACM may prolong lifespan as described herein, interference aware ACM may have additional and/or alternative benefits, as discussed throughout. Signaling Approach In some examples, the signaling approach may be based at least on signaling the DL subMar from the remote node to the base node by extending the UL CCE (e.g., UL CCE 726 of FIG. 7) sounding feedback with the DL subMar. However, that feedback may be relatively slow, and an additional fast feedback can be implemented by sending a one-bit feedback in the UL FCCH (e.g., UL FCCH 724 of FIG. 7) that updates the last known DL subMar from the UL CCE (e.g., UL CCE 726 of FIG. 7) sounding feedback. In some examples, with this approach, a fast per-frame update may be received while the full feedback may be received at a slightly slower rate through the UL CCE (e.g., UL CCE 726 of FIG. 7) sounding feedback, corresponding to a parsimonious approach in terms of the number of bits. At the base node, the received DL subMar may then be used inside the ACM module (e.g., used by ACM circuitry 126 of FIG. 1B, etc.) when selecting the DL MCS. Conversely, inside the BASE SINR module (such as UL & DL Base SINR 720 of FIG. 7), the DL subMar may be used to lower the DL BASE SINR, which may then be sent to the scheduler (such as scheduler 716 of FIG. 7). The same information may also be used inside the scheduler directly to allow the scheduler to modify or avoid allocations on interfered sub-bands. In some examples, although the first approach (e.g., the derate approach) may require no additional signaling, the base node may not learn the actual DL subMar values explicitly, which may be useful for link monitoring and network optimization. Additionally, in some examples, the sounding feedback may be relatively slow, resulting in slow reaction from the scheduler in removing allocations when interference hits.
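As one non-limiting example, the combination of slow full feedback over the UL CCE and fast one-bit feedback over the UL FCCH may be sketched in Python as follows. The step size, the per-sub-band interpretation of the bit, and all names are assumptions, not the actual channel formats.

SUBMAR_STEP = 0.5  # dB applied per one-bit update (example value)

class DlSubMarTracker:
    # Base-node-side view of the DL subMar reported by the remote node.
    def __init__(self, num_subbands):
        self.submar = [0.0] * num_subbands  # last known DL subMar per sub-band

    def on_cce_sounding(self, full_submar):
        # Slow, full-resolution update from the UL CCE sounding feedback.
        self.submar = list(full_submar)

    def on_fcch_bit(self, subband, bit):
        # Fast per-frame update: one bit nudges the last known value.
        delta = SUBMAR_STEP if bit else -SUBMAR_STEP
        self.submar[subband] = max(self.submar[subband] + delta, 0.0)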
The direct signaling of the DL subMar with the fast incremental feedback may allow the scheduler to remove or modify allocations much faster, avoiding lingering poor allocations. In some examples, one of the benefits of the TBER interference aware technique versus the FASD interference aware technique is that the interference aware target TBER may be selected in accordance with the user margin's target TBER. The parameter selection is discussed herein, and may be implemented at the base node directly for the UL TBER target of, e.g., interference aware UL MIMG 712 of FIG. 7. In some examples, since the DL subMar may be computed by the DL MIMG 728 of FIG. 7, located in some examples at the remote node, knowledge of the target TBER may be required to align the DL subMar TBER target with the user margin TBER target. In some examples, this may be achieved in many different ways, one of which is to add the TBER target to the DL broadcast channel DL PBCH. With that knowledge, the remote node may set the DL subMar parameters to target that TBER. In some examples, various components of system 100 and system 700 described herein may set the DL subMar parameters. In some examples, MIMG 728 and/or sounding 730 may set the DL subMar parameters. In some examples, no such signaling is required for the UL subMar TBER target since, in some examples, both the UL ACM 714 and UL MIMG 712 reside on the base node (see, e.g., FIG. 7), where the target error rate may be known (e.g., see UL TBE 710). No Data MCS Indicator with Allocate and Interference Awareness In some examples, a "no data" MCS indicator may be a solution to the problem that even the lowest MCS may not support low-error transmissions. This "no data" MCS indicator may also be referred to, in some examples, as the FORBID option since it forbids user data to be striped on the affected sub-bands although these sub-bands may be allocated to this user by the scheduler. In some examples, transport block errors (e.g., TBEs) on FORBID sub-bands may not be included in the user-margin calculation or the TBER calculation but, in some examples, may be included in the per-sub-band subMar. In some examples, one FORBID use case is cell edge users that may have an allocation with SNRs that fluctuate around the lowest MCS, leading to many errors. In some examples, a no data MCS that results in sending just padding may avoid the case where a few poor sub-bands may dominate the throughput performance. In that case, the information bits and the effective throughput will not be impacted by the poor sub-bands since those do not carry any data and do not contribute to the user margin nor the TBER. Similarly, and in some examples, if a poor allocation happens, it takes time for the scheduler to update allocations, during which time many errors may accumulate without FORBID and may limit link performance. In some examples, the no data option may also be used in conjunction with an UL MIMG (such as UL MIMG 712 of FIG. 7), a DL MIMG (such as DL MIMG 728 of FIG. 7), or combinations thereof, to further improve link performance. In some examples, a scenario where the no data option is useful is where the subMar rises up to a point where the allocator decides to remove the allocation. Although the subMar decay may be slow as described herein, at one point it may fall enough to enable a sub-band to get allocated again. In some examples, if the interference is still present, even the lowest MCS may still result in errors.
Depending on the rate of decay, this may happen several times and individually for each sub-band, resulting in many error events that may limit the link performance. In some examples, this may be avoided by, e.g., the scheduler, using the FORBID option when re-allocating the sub-band. In some examples, if interference is still present, the subMar, computed in some examples by multidimensional interference margin generator 136 of FIG. 1, may go up and the sub-band may be deallocated before the MCS moves out of FORBID. This may be implemented in several ways as described below. As one non-limiting example, the allocator (e.g., scheduler 716 of FIG. 7, scheduler 116 of FIG. 1B, etc.) or ACM circuitry (e.g., ACM circuitry 126 of FIG. 1, UL ACM 714, and/or DL ACM 718) may mark a sub-band as a FORBID sub-band when an allocation is removed, e.g., by a scheduler, due to poor SINR and/or high subMar as calculated by a margin generator. In some examples, when the subMar has decayed and the sub-band gets allocated again, the selected MCS may be FORBID for a long enough duration to have UL MIMG 712 and/or DL MIMG 728 react and increase the subMar if interference is still present. During this time, the UL MIMG 712 and/or DL MIMG 728 may count the TBEs as usual and may update the determined subMar but not the user margin. In some examples, if the interference is still severe enough, the sub-band may be deallocated again, e.g., by a scheduler, without incurring any data errors. Some examples may start out an allocation with FORBID if the expected or BASE SINR is close to the lowest MCS. The FORBID duration may either be selected as a parameter or by using a threshold on a counter of error-free frames. Allocation Starvation Avoidance In some examples, the per-sub-band margin, subMar, described herein provides fine-grained control by reducing the MCS or by de-allocating poor sub-bands suffering from interference or other issues. In some examples, it may be possible that all sub-bands have experienced interference, as may be seen using the interference metrics described herein, which may result in a high subMar that may preclude allocating any sub-band. In such an example, this may result in no payload due to allocation starvation. Allocation starvation may be more common for the FASD method, which may rise quickly when interference hits, as compared to the TBER method, which may require consistent interference, with a given on/off pattern, for some time to rise to a high value. In some examples, if the interference in fact has been reduced (e.g., minimized, eliminated, etc.), it may be possible for a scheduler, such as scheduler 116 of FIG. 1B, etc., to allocate some sub-bands faster than the slow decay of the subMar allows. In some examples, the slow decay may be selected to make the links stable and to avoid repeated error bursts, but in the case of no payload at all, it may be better to allocate some sub-bands exploiting the FORBID feature discussed herein. Note that in some examples, the adaptive rate of change described herein may help reduce the starvation occurrences, but there may still be a duration (e.g., 10-20 mins) before the subMar decays enough to allow for new allocations. In some examples, if no sub-bands are possible to allocate to a user due to the BASE SINR for all sub-bands being below the lowest MCS, the sub-band with the highest BASE SINR may be selected. For that sub-band, the subMar may be artificially reduced or the BASE SINR may be artificially increased such that the sub-band becomes possible to allocate.
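As one non-limiting example, the starvation-avoidance rule just described may be sketched in Python as follows. The lowest-MCS threshold and all names are assumptions.

LOWEST_MCS_SINR = 2.0  # dB needed to support the lowest MCS (example value)

def avoid_starvation(base_sinr, submar):
    # base_sinr, submar: per-sub-band lists in dB. Returns (index, forbid).
    effective = [s - m for s, m in zip(base_sinr, submar)]
    allocatable = [i for i, e in enumerate(effective) if e >= LOWEST_MCS_SINR]
    if allocatable:
        return max(allocatable, key=lambda i: effective[i]), False
    # Starvation: force the sub-band with the highest BASE SINR to become
    # allocatable, and start it in FORBID so no user data is striped on it.
    best = max(range(len(base_sinr)), key=lambda i: base_sinr[i])
    # Artificially reduce the subMar; if the BASE SINR alone is still too
    # low, the alternative in the text is to raise the BASE SINR instead.
    submar[best] = max(base_sinr[best] - LOWEST_MCS_SINR, 0.0)
    return best, True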
In some examples, several different types of implementations are possible depending on the prevalence of allocation starvation, and these are contemplated to be within the scope of this disclosure. In some examples, a conservative option may be to limit the starvation avoidance subMar reduction to a single sub-band, which may allow a link to be present, but at low throughput, until the subMar naturally decreases. In some examples, another, more aggressive option may be to lower the subMar or increase the BASE SINR one sub-band at a time using FORBID, enabling faster recovery from large interference events at the cost of more variable throughput. In some examples, options in between, such as limiting the feature to only be active when the number of allocated sub-bands (e.g., the number of sub-bands possible to allocate) is below a threshold, may additionally and/or alternatively be used. In some examples, the above is formulated for sub-bands, but allocations are made per-sub-band and per-stream (e.g., by scheduler 116 of FIG. 1B), and the BASE SINR may be computed per-sub-band and per-stream (e.g., by UL & DL Base SINR 720 of FIG. 7). The systems and methods discussed above target going from no stream to a single stream per sub-band, but extensions where the BASE SINR for a second stream is increased are also possible, and other scenarios are contemplated to be within the scope of this disclosure. Dynamic Base Band Filter Selection In some examples, wireless communication systems employ a base band analog filter to filter out noise and interference outside the frequencies where the information is sent before digitizing the signal through an analog-to-digital converter (ADC) (e.g., Dual ADC/DAC 104d, 104e, and/or 104f of FIG. 1A). While out-of-band interference can be suppressed to some extent by steep, high-order baseband analog filtering techniques, in-band interference sources hit the ADC with no attenuation. In some examples, strong in-band interference sources may cause the ADC to saturate and therefore require the system to operate at a lower receive gain, which comes with a worse receiver sensitivity and therefore lower throughput. In some examples, when these interference sources are spatially resolvable, the penalty of operating with a worse receiver noise figure may be warranted. However, if the interference is not spatially resolvable, and the subMar increases to a level (as computed by, for example, multidimensional interference margin generator 136 of FIG. 1B) where the allocator (such as scheduler 116 of FIG. 1B) would deallocate these sub-bands, then the user may be paying the price of a worse receiver sensitivity with no benefits at all. In some examples, it may be beneficial for the receiver to select a narrower baseband filter bandwidth that filters out the interference and allows the receiver to operate at a higher gain, therefore improving the user's throughput. In some examples, although many sources of this noise and interference knowledge exist, some example systems may use the per-sub-band margin subMar to determine the bandwidth that optimizes the user's throughput by improving the receiver sensitivity. In some examples, by matching the subMar with different base band filters, the filter that best suppresses interference while maintaining as much bandwidth as possible is selected.
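As one non-limiting example, and before turning to the example of FIGS. 14A and 14B, the filter selection rule just described may be sketched in Python as follows. The throughput scoring model, the interference threshold, and all names are assumptions, not from the source.

import math

def select_filter(candidates, subband_hz, submar_db, sinr_db, nf_gain_db):
    # candidates: list of filter bandwidths in Hz
    # submar_db, sinr_db: per-sub-band lists, indexed from the band edge
    # nf_gain_db: dict mapping bandwidth -> noise-figure gain it allows
    SUBMAR_INTERFERED = 6.0  # dB; treat larger margins as interference
    best_bw, best_score = None, -1.0
    for bw in candidates:
        n_pass = int(bw // subband_hz)  # sub-bands inside the filter
        score = 0.0
        for i in range(min(n_pass, len(sinr_db))):
            if submar_db[i] > SUBMAR_INTERFERED:
                continue  # interfered sub-bands carry little throughput
            eff = sinr_db[i] + nf_gain_db[bw]  # better NF at narrower bw
            score += subband_hz * math.log2(1.0 + 10.0 ** (eff / 10.0))
        if score > best_score:
            best_bw, best_score = bw, score
    return best_bw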
An example is shown in FIGS. 14A and 14B, where using a 40 MHz bandwidth filter (FIG. 14A) captures substantial amounts of interference, detected using the subMar, resulting in a low receive amplifier gain with a poor noise figure. If instead a 20 MHz bandwidth filter is used (FIG. 14B), the interference is suppressed, allowing for a higher receive amplifier gain that reduces the noise figure. In some examples, for strong interference, large gains exceeding 10 dB may be possible to achieve in the scenario depicted in FIGS. 14A and 14B. In some examples, guided at least in part by the subMar magnitude and the received signal strength in different sub-bands, the benefit of different base-band filters may be evaluated on the tradeoff between improved noise figure and reduced bandwidth. In the scenario depicted in FIGS. 14A and 14B, the loss of 10 MHz of spectrum in FIG. 14B is compensated by the improved noise figure, resulting in overall higher throughput and system performance. Interference Aware Network Management In some examples, when operating in unlicensed bands, operators may typically be left with the decision to select an optimal carrier frequency to operate their networks. Although an initial spectrum scan is typically done at the time of the installation, the interference environment may vary with time, resulting in degraded performance over time. In some examples, with the explicit feedback of the DL subMar to the base node (such as base node 702 of FIG. 7) through the UL FCCH (e.g., UL FCCH 724 of FIG. 7) and UL CCE (e.g., UL CCE 726 of FIG. 7), the base node may be equipped with a network view of the local DL interference landscape near each of the users, as well as the UL interference landscape that could be impacting some or all of the users in the UL direction. Coupled with the path loss estimates for each of the users, the base SINR (as computed, in some examples, by UL & DL Base SINR 720 of FIG. 7) may be used by a network management system to determine whether the majority of the users are experiencing a throughput that is far from optimal. Specifically, the per-sub-band view of the interference landscape may enable the operators to know exactly how much the operating frequency of the sector needs to be moved in order to avoid the interference that is not spatially resolvable. SINR Post-Processing in the Presence of Interference In some examples, the UL ACM 714 of FIG. 7 and DL ACM 718 of FIG. 7 as described herein may base the MCS selection at least partly on the reference symbol SINR, RS-SINR. In some examples, in the presence of interference, the RS-SINR may vary drastically depending on whether the interference is present on the reference symbol. In some examples, to maintain a certain transport block error rate (TBER), the user margin and the per-sub-band per-number-of-streams margin may adjust (e.g., as calculated by a margin generator, such as multidimensional interference margin generator 136 of FIG. 1B, and/or MIMG 712 and/or MIMG 728 of FIG. 7) based at least on the strength of the channel interference and how often it is present, e.g., in a given frame, and in some examples, relative to a given reference symbol and/or pilot symbol. In some examples, there may be a scenario where the SINR may be 25 dB when interference is not present but drops to 15 dB when interference is present. In order to avoid an excessive TBER, the combination of the two margins may become 10 dB to account for the SINR drop, leading to an effective SNR of 15 dB without interference and 5 dB with interference.
In some examples, however, when interference is present the actual SINR is 15 dB, and selecting the MCS based on just 5 dB instead of 15 dB may result in a throughput loss. One example of post-processing is to average the SINR in time and use the averaged SINR as one of the inputs to ACM circuitry 126 of FIG. 1B when selecting the MCS. In the example above, if a time-averaged SINR is used for MCS selection, the post-processed SINR may be consistently at 20 dB, with a resulting combined margin of 5 dB. In this example, the throughput may be substantially higher since the MCS is selected based on an effective SINR of 15 dB all the time, without the dips to just 5 dB when interference is present. Another benefit of this post-processing is that the variance of the effective SINR may be lower, which may further lower the required margins. As should be appreciated, many other types of post-processing may be considered, optimizing different performance metrics, and are considered to be within the scope of this disclosure. As one non-limiting example, order statistics may be used to achieve certain TBER rates and/or retransmission rates. The description of certain embodiments included herein is merely exemplary in nature and is in no way intended to limit the scope of the disclosure or its applications or uses. In the included detailed description of embodiments of the present systems and methods, reference is made to the accompanying drawings which form a part hereof, and which are shown by way of illustration specific to embodiments in which the described systems and methods may be practiced. These embodiments are described in sufficient detail to enable those skilled in the art to practice presently disclosed systems and methods, and it is to be understood that other embodiments may be utilized, and that structural and logical changes may be made without departing from the spirit and scope of the disclosure. Moreover, for the purpose of clarity, detailed descriptions of certain features will not be discussed when they would be apparent to those with skill in the art so as not to obscure the description of embodiments of the disclosure. The included detailed description is therefore not to be taken in a limiting sense, and the scope of the disclosure is defined only by the appended claims. From the foregoing it will be appreciated that, although specific embodiments of the invention have been described herein for purposes of illustration, various modifications may be made without deviating from the spirit and scope of the invention. The particulars shown herein are by way of example and for purposes of illustrative discussion of the preferred embodiments of the present invention only and are presented in the cause of providing what is believed to be the most useful and readily understood description of the principles and conceptual aspects of various embodiments of the invention. In this regard, no attempt is made to show structural details of the invention in more detail than is necessary for the fundamental understanding of the invention, the description taken with the drawings and/or examples making apparent to those skilled in the art how the several forms of the invention may be embodied in practice. As used herein and unless otherwise indicated, the terms "a" and "an" are taken to mean "one", "at least one" or "one or more". Unless otherwise required by context, singular terms used herein shall include pluralities and plural terms shall include the singular.
Unless the context clearly requires otherwise, throughout the description and the claims, the words 'comprise', 'comprising', and the like are to be construed in an inclusive sense as opposed to an exclusive or exhaustive sense; that is to say, in the sense of "including, but not limited to". Words using the singular or plural number also include the plural and singular number, respectively. Additionally, the words "herein," "above," and "below" and words of similar import, when used in this application, shall refer to this application as a whole and not to any particular portions of the application. Examples described herein may refer to various components as "coupled" or signals as being "provided to" or "received from" certain components. It is to be understood that in some examples, the components are directly coupled to one another, while in other examples, the components are coupled with intervening components disposed between them. Similarly, a signal(s) may be provided directly to and/or received directly from the recited components without intervening components, but may also be provided to and/or received from those components through intervening components. Of course, it is to be appreciated that any one of the examples, embodiments or processes described herein may be combined with one or more other examples, embodiments and/or processes or be separated and/or performed amongst separate devices or device portions in accordance with the present systems, devices and methods. Finally, the above discussion is intended to be merely illustrative of the present system and should not be construed as limiting the appended claims to any particular embodiment or group of embodiments. Thus, while the present system has been described in particular detail with reference to exemplary embodiments, it should also be appreciated that numerous modifications and alternative embodiments may be devised by those having ordinary skill in the art without departing from the broader and intended spirit and scope of the present system as set forth in the claims that follow. Accordingly, the specification and drawings are to be regarded in an illustrative manner and are not intended to limit the scope of the appended claims.
11863315
DETAILED DESCRIPTION The detailed description set forth below in connection with the appended drawings is intended as a description of various configurations and is not intended to represent the only configurations in which the concepts described herein may be practiced. The detailed description includes specific details for the purposes of providing a thorough understanding of various concepts. However, it will be apparent to those skilled in the art that these concepts may be practiced without these specific details. In some instances, well-known structures and components are shown in block diagram form in order to avoid obscuring such concepts. Several aspects of telecommunication systems will now be presented with reference to various apparatus and methods. These apparatus and methods will be described in the following detailed description and illustrated in the accompanying drawings by various blocks, modules, components, circuits, steps, processes, algorithms, and/or the like (collectively referred to as "elements"). These elements may be implemented using electronic hardware, computer software, or any combination thereof. Whether such elements are implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. By way of example, an element, or any portion of an element, or any combination of elements may be implemented with a "processing system" that includes one or more processors. Examples of processors include microprocessors, microcontrollers, digital signal processors (DSPs), field programmable gate arrays (FPGAs), programmable logic devices (PLDs), state machines, gated logic, discrete hardware circuits, and other suitable hardware configured to perform the various functionality described throughout this disclosure. One or more processors in the processing system may execute software. Software shall be construed broadly to mean instructions, instruction sets, code, code segments, program code, programs, subprograms, software modules, applications, software applications, software packages, routines, subroutines, objects, executables, threads of execution, procedures, functions, and/or the like, whether referred to as software, firmware, middleware, microcode, hardware description language, or otherwise. Accordingly, in one or more example embodiments, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on or encoded as one or more instructions or code on a computer-readable medium. Computer-readable media includes computer storage media. Storage media may be any available media that can be accessed by a computer. By way of example, and not limitation, such computer-readable media can comprise a random-access memory (RAM), a read-only memory (ROM), an electrically erasable programmable ROM (EEPROM), compact disk ROM (CD-ROM) or other optical disk storage, magnetic disk storage or other magnetic storage devices, combinations of the aforementioned types of computer-readable media, or any other medium that can be used to store computer executable code in the form of instructions or data structures that can be accessed by a computer. It is noted that while aspects may be described herein using terminology commonly associated with 3G and/or 4G wireless technologies, aspects of the present disclosure can be applied in other generation-based communication systems, such as 5G and later, including 5G technologies.
FIG. 1 is a diagram illustrating a network 100 in which aspects of the present disclosure may be practiced. The network 100 may be an LTE network or some other wireless network, such as a 5G network. Wireless network 100 may include a number of BSs 110 (shown as BS 110a, BS 110b, BS 110c, and BS 110d) and other network entities. A BS is an entity that communicates with user equipment (UEs) and may also be referred to as a base station, a 5G BS, a Node B, a gNB, a 5G NB, an access point, a transmit receive point (TRP), and/or the like. Each BS may provide communication coverage for a particular geographic area. In 3GPP, the term "cell" can refer to a coverage area of a BS and/or a BS subsystem serving this coverage area, depending on the context in which the term is used. A BS may provide communication coverage for a macro cell, a pico cell, a femto cell, and/or another type of cell. A macro cell may cover a relatively large geographic area (e.g., several kilometers in radius) and may allow unrestricted access by UEs with service subscription. A pico cell may cover a relatively small geographic area and may allow unrestricted access by UEs with service subscription. A femto cell may cover a relatively small geographic area (e.g., a home) and may allow restricted access by UEs having association with the femto cell (e.g., UEs in a closed subscriber group (CSG)). A BS for a macro cell may be referred to as a macro BS. A BS for a pico cell may be referred to as a pico BS. A BS for a femto cell may be referred to as a femto BS or a home BS. In the example shown in FIG. 1, a BS 110a may be a macro BS for a macro cell 102a, a BS 110b may be a pico BS for a pico cell 102b, and a BS 110c may be a femto BS for a femto cell 102c. A BS may support one or multiple (e.g., three) cells. The terms "eNB", "base station", "5G BS", "gNB", "TRP", "AP", "node B", "5G NB", and "cell" may be used interchangeably herein. In some examples, a cell may not necessarily be stationary, and the geographic area of the cell may move according to the location of a mobile BS. In some examples, the BSs may be interconnected to one another and/or to one or more other BSs or network nodes (not shown) in the access network 100 through various types of backhaul interfaces such as a direct physical connection, a virtual network, and/or the like using any suitable transport network. Wireless network 100 may also include relay stations. A relay station is an entity that can receive a transmission of data from an upstream station (e.g., a BS or a UE) and send a transmission of the data to a downstream station (e.g., a UE or a BS). A relay station may also be a UE that can relay transmissions for other UEs. In the example shown in FIG. 1, a relay station 110d may communicate with macro BS 110a and a UE 120d in order to facilitate communication between BS 110a and UE 120d. A relay station may also be referred to as a relay BS, a relay base station, a relay, and/or the like. Wireless network 100 may be a heterogeneous network that includes BSs of different types, e.g., macro BSs, pico BSs, femto BSs, relay BSs, and/or the like. These different types of BSs may have different transmit power levels, different coverage areas, and different impact on interference in wireless network 100. For example, macro BSs may have a high transmit power level (e.g., 5 to 40 Watts) whereas pico BSs, femto BSs, and relay BSs may have lower transmit power levels (e.g., 0.1 to 2 Watts). A network controller 130 may couple to a set of BSs and may provide coordination and control for these BSs.
Network controller 130 may communicate with the BSs via a backhaul. The BSs may also communicate with one another, e.g., directly or indirectly via a wireless or wireline backhaul. UEs 120 (e.g., 120a, 120b, 120c) may be dispersed throughout wireless network 100, and each UE 120 may be stationary or mobile. A UE 120 may also be referred to as an access terminal, a terminal, a mobile station, a subscriber unit, a station, etc. A UE 120 may be a cellular phone (e.g., a smart phone), a personal digital assistant (PDA), a wireless modem, a wireless communication device, a handheld device, a laptop computer, a cordless phone, a wireless local loop (WLL) station, a tablet, a camera, a gaming device, a netbook, a smartbook, an ultrabook, a medical device or equipment, biometric sensors/devices, wearable devices (smart watches, smart clothing, smart glasses, smart wrist bands, smart jewelry (e.g., smart ring, smart bracelet)), an entertainment device (e.g., a music or video device, or a satellite radio), a vehicular component or sensor, smart meters/sensors, industrial manufacturing equipment, a global positioning system device, or any other suitable device that is configured to communicate via a wireless or wired medium. Some UEs 120 may be considered machine-type communication (MTC) or evolved or enhanced machine-type communication (eMTC) UEs. MTC and eMTC UEs include, for example, robots, drones, remote devices, such as sensors, meters, monitors, location tags, etc., that may communicate with a base station, another device (e.g., remote device), or some other entity. A wireless node may provide, for example, connectivity for or to a network (e.g., a wide area network such as the Internet or a cellular network) via a wired or wireless communication link. Some UEs 120 may be considered Internet-of-Things (IoT) devices, and/or may be implemented as NB-IoT (narrowband internet of things) devices. Some UEs 120 may be considered a Customer Premises Equipment (CPE). UE 120 may be included inside a housing that houses components of UE 120, such as processor components, memory components, and/or the like. In some aspects, a UE 120 may be an MTC UE and/or an eMTC UE that operates in an unlicensed radio frequency (RF) spectrum band. In some aspects, an eMTC UE that operates in an unlicensed RF spectrum band may be referred to as an eMTC-u UE. UEs 120 and base stations 110 may communicate over an unlicensed RF spectrum band using one or more radio access technologies, such as a Wi-Fi radio access technology, an LTE radio access technology, a 5G radio access technology, and/or the like. An unlicensed RF spectrum band may refer to an RF spectrum band that is open for shared use by any device that complies with regulatory agency rules for communicating via the RF spectrum band. In contrast with most licensed RF spectrum band usage, users of unlicensed RF spectrum bands do not typically have regulatory protection against radio interference from devices of other users. For example, devices that use the unlicensed RF spectrum band must typically accept any radio interference caused by other devices that use the unlicensed RF spectrum band. Because the unlicensed RF spectrum band may be shared by devices operating under different protocols (e.g., different RATs), transmitting devices may contend for access to the unlicensed RF spectrum band (e.g., using a listen before talk procedure and/or the like).
In some aspects, the unlicensed RF spectrum band may include one or more radio frequencies (e.g., one or more RF spectrum bands) included in the radio spectrum (e.g., the portion of the electromagnetic spectrum corresponding to radio frequencies, or frequencies lower than approximately 300 gigahertz (GHz)). In some aspects, the unlicensed RF spectrum band may include one or more RF spectrum bands that are open for shared use by any device that complies with regulatory agency rules (e.g., associated with a particular country) for communicating via the one or more RF spectrum bands. In some aspects, the unlicensed RF spectrum band may include one or more radio frequencies in the 2.4 GHz band. For example, the unlicensed RF spectrum band may include one or more radio frequencies between approximately 2.4 GHz and 2.48 GHz. Additionally, or alternatively, the unlicensed RF spectrum band may include one or more radio frequencies in the 5 GHz band. For example, the unlicensed RF spectrum band may include one or more radio frequencies between approximately 5.15 GHz and approximately 5.825 GHz. The unlicensed RF spectrum band may be divided into channels via which RF communications may be transmitted. In some aspects, the unlicensed RF spectrum band may include one or more channels of approximately 1.4 MHz bandwidth (e.g., up to 59 channels at 1.4 MHz bandwidth in the 2.4 GHz band). Additionally, or alternatively, the unlicensed RF spectrum band may include one or more channels of approximately 20 MHz bandwidth. Wireless devices may communicate via a channel included in the unlicensed RF spectrum band. For example, a wireless device may communicate via an RF channel using a Wi-Fi radio access technology, an LTE radio access technology, a 5G radio access technology, and/or the like. In some aspects, a UE 120 may contend for access to the unlicensed RF spectrum band before sending a transmission via the unlicensed RF spectrum band. In general, any number of wireless networks may be deployed in a given geographic area. Each wireless network may support a particular RAT and may operate on one or more frequencies. A RAT may also be referred to as a radio technology, an air interface, and/or the like. A frequency may also be referred to as a carrier, a frequency channel, and/or the like. Each frequency may support a single RAT in a given geographic area in order to avoid interference between wireless networks of different RATs. In some cases, 5G RAT networks may be deployed. 5G may refer to radios configured to operate according to a new air interface (e.g., other than Orthogonal Frequency Division Multiple Access (OFDMA)-based air interfaces) or fixed transport layer (e.g., other than Internet Protocol (IP)). In aspects, 5G may utilize OFDM with a CP (herein referred to as cyclic prefix OFDM or CP-OFDM) and/or SC-FDM on the uplink, may utilize CP-OFDM on the downlink, and include support for half-duplex operation using TDD. In aspects, 5G may, for example, utilize OFDM with a CP (herein referred to as CP-OFDM) and/or discrete Fourier transform spread orthogonal frequency-division multiplexing (DFT-s-OFDM) on the uplink, may utilize CP-OFDM on the downlink, and include support for half-duplex operation using TDD.
5G may include Enhanced Mobile Broadband (eMBB) service targeting wide bandwidth (e.g., 80 megahertz (MHz) and beyond), millimeter wave (mmW) targeting high carrier frequency (e.g., 60 gigahertz (GHz)), massive MTC (mMTC) targeting non-backward compatible MTC techniques, and/or mission critical targeting ultra reliable low latency communications (URLLC) service. A single component carrier bandwidth of 100 MHz may be supported. 5G resource blocks may span 12 sub-carriers with a sub-carrier bandwidth of 75 kilohertz (kHz) over a 0.1 ms duration. Each radio frame may include 50 subframes with a length of 10 ms. Consequently, each subframe may have a length of 0.2 ms. Each subframe may indicate a link direction (e.g., DL or UL) for data transmission, and the link direction for each subframe may be dynamically switched. Each subframe may include DL/UL data as well as DL/UL control data. Beamforming may be supported, and beam direction may be dynamically configured. MIMO transmissions with precoding may also be supported. MIMO configurations in the DL may support up to 8 transmit antennas with multi-layer DL transmissions of up to 8 streams and up to 2 streams per UE. Multi-layer transmissions with up to 2 streams per UE may be supported. Aggregation of multiple cells may be supported with up to 8 serving cells. Alternatively, 5G may support a different air interface, other than an OFDM-based interface. 5G networks may include entities such as central units or distributed units. The RAN may include a central unit (CU) and distributed units (DUs). A 5G BS (e.g., gNB, 5G Node B, Node B, transmit receive point (TRP), access point (AP)) may correspond to one or multiple BSs. 5G cells can be configured as access cells (ACells) or data only cells (DCells). For example, the RAN (e.g., a central unit or distributed unit) can configure the cells. DCells may be cells used for carrier aggregation or dual connectivity, but not used for initial access, cell selection/reselection, or handover. In some aspects, DCells may not transmit synchronization signals. In some aspects, DCells may transmit synchronization signals. 5G BSs may transmit downlink signals to UEs indicating the cell type. Based at least in part on the cell type indication, the UE may communicate with the 5G BS. For example, the UE may determine 5G BSs to consider for cell selection, access, handover, and/or measurement based at least in part on the indicated cell type. As indicated above, FIG. 1 is provided merely as an example. Other examples are possible and may differ from what was described with regard to FIG. 1. FIG. 2 shows a block diagram 200 of a design of base station 110 and UE 120, which may be one of the base stations and one of the UEs in FIG. 1. Base station 110 may be equipped with T antennas 234a through 234t, and UE 120 may be equipped with R antennas 252a through 252r, where in general T≥1 and R≥1. At base station 110, a transmit processor 220 may receive data from a data source 212 for one or more UEs, select one or more modulation and coding schemes (MCS) for each UE based at least in part on channel quality indicators (CQIs) received from the UE, process (e.g., encode and modulate) the data for each UE based at least in part on the MCS(s) selected for the UE, and provide data symbols for all UEs.
Transmit processor 220 may also process system information (e.g., for semi-static resource partitioning information (SRPI), and/or the like) and control information (e.g., CQI requests, grants, upper layer signaling, and/or the like) and provide overhead symbols and control symbols. Transmit processor 220 may also generate reference symbols for reference signals (e.g., the CRS) and synchronization signals (e.g., the primary synchronization signal (PSS) and secondary synchronization signal (SSS)). A transmit (TX) multiple-input multiple-output (MIMO) processor 230 may perform spatial processing (e.g., precoding) on the data symbols, the control symbols, the overhead symbols, and/or the reference symbols, if applicable, and may provide T output symbol streams to T modulators (MODs) 232a through 232t. Each modulator 232 may process a respective output symbol stream (e.g., for OFDM and/or the like) to obtain an output sample stream. Each modulator 232 may further process (e.g., convert to analog, amplify, filter, and upconvert) the output sample stream to obtain a downlink signal. T downlink signals from modulators 232a through 232t may be transmitted via T antennas 234a through 234t, respectively. According to various aspects described in more detail below, the synchronization signals can be generated with location encoding to convey additional information. At UE 120, antennas 252a through 252r may receive the downlink signals from base station 110 and/or other base stations and may provide received signals to demodulators (DEMODs) 254a through 254r, respectively. Each demodulator 254 may condition (e.g., filter, amplify, downconvert, and digitize) a received signal to obtain input samples. Each demodulator 254 may further process the input samples (e.g., for OFDM and/or the like) to obtain received symbols. A MIMO detector 256 may obtain received symbols from all R demodulators 254a through 254r, perform MIMO detection on the received symbols if applicable, and provide detected symbols. A receive (RX) processor 258 may process (e.g., demodulate and decode) the detected symbols, provide decoded data for UE 120 to a data sink 260, and provide decoded control information and system information to a controller/processor 280. A channel processor may determine RSRP, RSSI, RSRQ, CQI, and/or the like. On the uplink, at UE 120, a transmit processor 264 may receive and process data from a data source 262 and control information (e.g., for reports comprising RSRP, RSSI, RSRQ, CQI, and/or the like) from controller/processor 280. Transmit processor 264 may also generate reference symbols for one or more reference signals. The symbols from transmit processor 264 may be precoded by a TX MIMO processor 266 if applicable, further processed by modulators 254a through 254r (e.g., for DFT-s-OFDM, CP-OFDM, and/or the like), and transmitted to base station 110. At base station 110, the uplink signals from UE 120 and other UEs may be received by antennas 234, processed by demodulators 232, detected by a MIMO detector 236 if applicable, and further processed by a receive processor 238 to obtain decoded data and control information sent by UE 120. Receive processor 238 may provide the decoded data to a data sink 239 and the decoded control information to controller/processor 240. Base station 110 may include communication unit 244 and communicate to network controller 130 via communication unit 244. Network controller 130 may include communication unit 294, controller/processor 290, and memory 292.
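As one non-limiting example, the spatial processing (e.g., precoding) performed by TX MIMO processor 230 may be sketched in Python as follows, assuming a simple linear precoding model; the shapes and the randomly drawn precoding matrix are illustrative only.

import numpy as np

T_ANTENNAS = 4   # transmit antennas
N_LAYERS = 2     # spatial layers (streams)

rng = np.random.default_rng(0)
# QPSK data symbols for each layer over 12 sub-carriers.
bits = rng.integers(0, 2, size=(N_LAYERS, 12, 2))
symbols = ((2 * bits[..., 0] - 1) + 1j * (2 * bits[..., 1] - 1)) / np.sqrt(2)

# Example precoding matrix (antennas x layers), column-normalized.
W = rng.standard_normal((T_ANTENNAS, N_LAYERS)) \
    + 1j * rng.standard_normal((T_ANTENNAS, N_LAYERS))
W /= np.linalg.norm(W, axis=0, keepdims=True)

# Precoded output: one output symbol stream per transmit antenna.
x = np.einsum('tl,lk->tk', W, symbols)  # shape (T_ANTENNAS, 12)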
Controller/processor240of base station110, controller/processor280of UE120, and/or any other component(s) ofFIG.2may perform one or more techniques associated with avoiding collisions on an uplink data channel and a cell-specific or UE-specific uplink control channel, as described in more detail elsewhere herein. For example, controller/processor240of base station110, controller/processor280of UE120, and/or any other component(s) ofFIG.2may perform or direct operations of, for example, method600ofFIG.6, method700ofFIG.7, and/or other processes as described herein. Memories242and282may store data and program codes for BS110and UE120, respectively. A scheduler246may schedule UEs for data transmission on the downlink and/or uplink. As indicated above,FIG.2is provided merely as an example. Other examples are possible and may differ from what was described with regard toFIG.2. FIG.3is a diagram illustrating an example300of a wireless communication mode for an enhanced machine-type communication (eMTC) UE operating in an unlicensed radio frequency spectrum band. As shown inFIG.3, an eMTC UE120that communicates using an unlicensed RF spectrum band (e.g., which may be referred to as an eMTC-u UE120) may operate in a frequency hopping mode. In the frequency hopping mode, the time domain may be divided into frames (e.g., shown as m-frames) of, for example, 80 milliseconds (ms). The eMTC-u UE120may communicate using different hopping frequencies in different frames. For example, the eMTC-u UE120may communicate using a first hopping frequency in a first frame, may communicate using a second hopping frequency in a second frame, may communicate using an Nthhopping frequency in an Nthframe (e.g., where N equals 16, 32, and/or the like), and may communicate again using the first hopping frequency in frame N+1. In this way, congestion on a particular hopping frequency may be reduced, and the eMTC-u UE120may increase the likelihood of finding a clear channel for communication. As shown by reference number305, at the beginning of a frame, the eMTC-u UE120may tune to an anchor channel to receive a discovery reference signal (DRS), which may include configuration parameters for communicating using the unlicensed RF spectrum band, such as a primary synchronization signal (PSS), a secondary synchronization signal (SSS), a physical broadcast channel (PBCH) communication, an indication of channels for frequency hopping, and/or the like. After obtaining the configuration parameters, the eMTC-u UE120may tune to a hopping frequency to receive downlink communications and/or to transmit uplink communications during a data segment. As shown by reference number310, at the beginning of the data segment, the eMTC-u UE120may perform a listen before talk (LBT) procedure to contend for access to a channel (e.g., a hopping frequency) of the unlicensed RF spectrum band and/or may transmit a preamble (e.g., a Tx preamble, a Wi-Fi preamble, and/or the like) to indicate that the eMTC-u UE120is accessing the channel. In some aspects, the LBT procedure may include performing a clear channel assessment (CCA) procedure to determine whether the channel is available. The CCA procedure may include detecting an energy level on the channel (e.g., an energy level of Tx preambles transmitted by other devices via the channel, such as Wi-Fi preambles) and determining whether the energy level satisfies a threshold. 
When the energy level satisfies the threshold (e.g., is less than or equal to the threshold), the eMTC-u UE120may access the channel to transmit and/or receive communications. When the energy level does not satisfy the threshold (e.g., is greater than the threshold), the channel may not be available for access by the eMTC-u UE120, and the eMTC-u UE120may perform the CCA procedure for the channel again at a later time. In some aspects, the eMTC-u UE120may transmit a preamble via the channel, prior to transmitting and/or receiving communications via the channel, to assist other devices with determining whether the channel is available (e.g., when the other devices are performing an LBT procedure). As shown by reference number315, once the eMTC-u UE120accesses the hopping frequency, the eMTC-u UE120may receive downlink communications and/or may transmit uplink communications during the data segment. The communications during the data segment may include data communications and/or control communications associated with the data communications (e.g., acknowledgement or negative acknowledgment (ACK/NACK) indications for the data communications). In some aspects, the data segment may be divided into a long burst of downlink subframes followed by a long burst of uplink subframes to reduce switching between downlink and uplink. In some aspects, the long burst of downlink subframes may be referred to as a downlink segment of the data segment, and the long burst of uplink subframes may be referred to as an uplink segment of the data segment. The eMTC-u UE120may receive one or more downlink communications during the downlink segment and/or may transmit one or more uplink communications during the uplink segment. According to some unlicensed RF spectrum standards (e.g., regulations regarding the 2.4 GHz RF spectrum band promulgated by the European Telecommunications Standards Institute (ETSI)), within an uplink segment of a frame, an eMTC-u UE120is permitted to transmit an uplink communication for only 5 contiguous milliseconds, after which the eMTC-u UE120must wait 5 milliseconds (e.g., a 5 ms transmission gap) before another uplink transmission. In this way, the eMTC-u UE120may permit other devices to communicate during the transmission gap. Because of these uplink limitations, as well as the contentious nature of the unlicensed RF spectrum band, a base station110and an eMTC-u UE120communicating in the unlicensed RF spectrum band use an asynchronous hybrid automatic repeat request (HARQ) procedure. In an asynchronous HARQ procedure, ACK/NACK indications are transmitted according to a dynamic or semi-static timeline (e.g., a timing between when a downlink data communication is received by the eMTC-u UE120and a corresponding ACK/NACK indication is transmitted by the eMTC-u UE120), rather than a fixed timeline. In this case, the eMTC-u UE120may transmit an ACK/NACK indication using a bitmap that corresponds to a number of HARQ processes (e.g., downlink data communications) to be acknowledged or negatively acknowledged. The base station110may configure uplink control channel (e.g., physical uplink control channel (PUCCH)) instances for transmission of these ACK/NACK indications by the eMTC-u UE120.
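The CCA decision described above reduces to a single comparison. The following Python sketch is illustrative only; the function name, the dBm units, and the example values are assumptions rather than anything mandated by the regulations or by this description, and the "satisfies" convention (at or below the threshold) follows the preceding paragraphs.

def cca_channel_available(measured_energy_dbm: float,
                          threshold_dbm: float) -> bool:
    """Clear channel assessment: the channel is treated as available when
    the detected energy level satisfies (is at or below) the threshold."""
    return measured_energy_dbm <= threshold_dbm

# Hypothetical LBT flow: check the channel, then either access it or retry.
if cca_channel_available(measured_energy_dbm=-85.0, threshold_dbm=-72.0):
    pass  # channel clear: optionally transmit a preamble, then communicate
else:
    pass  # channel busy: perform the CCA again at a later time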
The uplink control channel instances may be configured for different uplink control channel formats (e.g., PUCCH format1, format1a, format2, format2a, and/or the like), and may be configured with different resource block (RB) allocations of the data segment, different periodicities across data segments and/or frames, different offsets within the data segment, different transmission windows, different resource elements (REs) for transmission of an ACK/NACK indication and/or other uplink control information (UCI) (e.g., channel state information (CSI), a scheduling request (SR), and/or the like), and/or other configuration parameters. In some aspects, the base station110may use a cell-specific uplink control channel configuration for initial network setup for a group of eMTC-u UEs120that are not connected to the base station110, and may use a UE-specific uplink control channel configuration for individual eMTC-u UEs120that are connected to the base station110. For example, the base station110may indicate (e.g., in a system information block (SIB)) a cell-specific uplink control channel (e.g., a cell-specific PUCCH) configuration for eMTC-u UEs120that are not connected to the base station110. The cell-specific uplink control channel may be used by an eMTC-u UE120for acknowledging or negatively acknowledging initial network setup messages, such as for a random access channel (RACH) procedure, may be common across all eMTC-u UEs120for initial access to a cell, and may be known or discoverable by all eMTC-u UEs120in the cell (e.g., UEs120that are RRC connected and UEs120that are not RRC connected). Additionally, or alternatively, the base station110may indicate (e.g., in an RRC configuration message) a UE-specific uplink control channel (e.g., a UE-specific PUCCH) for each eMTC-u UE120connected to the base station110. The UE-specific uplink control channel may be used by an eMTC-u UE120, after being RRC connected to the base station110, for acknowledging or negatively acknowledging downlink data communications in the data segment. An eMTC-u UE120that is RRC connected to the base station110may perform one or more collision avoidance techniques to avoid uplink data communication collisions with both the cell-specific uplink control channel (e.g., transmissions by other eMTC-u UEs120) and the UE-specific uplink control channel (e.g., transmission by the eMTC-u UE120), as described in more detail elsewhere herein. In some aspects, an eMTC-u UE120may be configured to repeat uplink data transmissions on an uplink data channel (e.g., a physical uplink shared channel (PUSCH)) due to poor channel conditions, a location of the eMTC-u UE120near a cell edge, and/or the like. This may increase a likelihood of collisions between a scheduled uplink control channel instance (e.g., a cell-specific PUCCH and/or a UE-specific PUCCH) and an uplink data communication scheduled for the eMTC-u UE120on an uplink data channel. To avoid a collision, the eMTC-u UE120may avoid transmitting an uplink data communication on RBs scheduled for the uplink control channel across all subframes or repetitions of the uplink data channel. However, this may waste resources when the uplink control channel is scheduled on fewer than all subframes or repetitions. Additionally, or alternatively, the UE may drop the uplink data communication to avoid a collision with the uplink control channel.
However, this may result in a long delay before another opportunity to transmit the uplink data communication due to uplink limitations, data segment design, and contention for access associated with the unlicensed RF spectrum band. Some techniques and apparatuses described herein may be used to avoid or reduce collisions between uplink data communications and uplink control communications for an eMTC-u UE120operating in an unlicensed RF spectrum band. Furthermore, some techniques and apparatuses may avoid such a collision while still allowing efficient use of resources, such as by deferring an uplink data communication within a data segment, rate matching an uplink data communication around uplink control resources (e.g., cell-specific uplink control resources and/or UE-specific uplink control resources), puncturing the uplink data communication to avoid the uplink control resources, multiplexing uplink control information on an uplink data channel, and/or the like. As indicated above,FIG.3is provided as an example. Other examples are possible and may differ from what was described with respect toFIG.3. FIG.4is a diagram illustrating an example400of avoiding collisions on an uplink data channel and a cell-specific uplink control channel. As shown by reference number405, a UE120(e.g., an eMTC-u UE120) may determine one or more first resource elements (REs) to be used for a cell-specific uplink control channel (e.g., shown as a cell-specific PUCCH as an example). For example, the UE120may receive, from a base station110, an indication of a cell-specific uplink control channel configuration that indicates the one or more first REs to be used for the cell-specific uplink control channel. As used herein, the term uplink control channel may include a PUCCH. In some aspects, a PUCCH may be a specific type of uplink control channel. As described above in connection withFIG.3, in some aspects, the cell-specific uplink control channel may be used by eMTC-u UEs120, that are not connected to the base station110(e.g., that are not connected via an RRC connection), for acknowledging or negatively acknowledging initial network setup messages, such as for a RACH procedure. Additionally, or alternatively, the cell-specific uplink control channel configuration may be common across all eMTC-u UEs120in a cell. In some aspects, the cell-specific uplink control channel configuration may be signaled in a system information block (SIB). Additionally, or alternatively, the cell-specific uplink control channel configuration may be signaled in a group common uplink control channel (e.g., a group common PUCCH) that is accessible by multiple UEs120. As shown by reference number410, the UE120may determine that the one or more first REs (e.g., of the cell-specific PUCCH) collide with one or more second REs scheduled for the UE120on an uplink data channel (e.g., a PUSCH). For example, the UE120may receive a downlink grant, from the base station110, that indicates the one or more second REs scheduled for an uplink data communication (e.g., an initial uplink data communication and/or one or more repetitions of the initial uplink data communication). In some aspects, the downlink grant may indicate a number of repetitions for the uplink data communication. Additionally, or alternatively, the number of repetitions may be indicated in an RRC configuration message. As used herein, the term uplink data channel may include a PUSCH. In some aspects, a PUSCH may be a specific type of uplink data channel. 
The UE120may compare the one or more first REs and the one or more second REs to determine whether there is any overlap between the two sets of REs. If there is overlap, then the UE120may determine that the two sets of REs collide, and may employ one or more collision avoidance techniques, as described below. As shown by reference number415, the UE120may prevent transmission of an uplink data transmission using the one or more first REs (e.g., reserved for the PUCCH) based at least in part on determining that the one or more first REs collide with the one or more second REs. For example, the UE120may prevent a collision by deferring the uplink data transmission, by rate matching the uplink data transmission around the one or more first REs, by puncturing the uplink data transmission to avoid the one or more first REs, and/or the like. In some aspects, the uplink data transmission may include an initial uplink data transmission and/or one or more repetitions of the initial uplink data transmission. As shown by reference number420, in some aspects, the UE120may defer the uplink data transmission to avoid transmitting uplink data in the one or more first REs used for the cell-specific PUCCH. In this way, other UEs120may communicate with the base station110(e.g., to ACK or NACK initial network setup messages) during the REs scheduled for the cell-specific PUCCH. In some aspects, the UE120may defer the uplink data transmission within a data segment of a frame. For example, the UE120may defer the uplink data transmission if the UE120determines that there are sufficient remaining uplink segments (e.g., a threshold number of remaining uplink subframes), within a data segment, to permit transmission of the uplink data within the data segment. In some aspects, the UE120may account for uplink limitations (e.g., a limit on contiguous uplink transmissions, a required transmission gap, and/or the like) when determining whether there are sufficient remaining uplink segments, within the data segment, for transmission of the uplink data. In this way, the UE120may prevent transmission of the uplink data from being delayed to a subsequent frame. As shown by reference number425, in some aspects, the UE120may rate match the uplink data transmission around the one or more first REs to avoid transmitting uplink data in the one or more first REs used for the cell-specific PUCCH. Additionally, or alternatively, the UE120may puncture the uplink data transmission to avoid transmitting uplink data in the one or more first REs used for the cell-specific PUCCH. In some aspects, the UE120may rate match or puncture the uplink data transmission to avoid the one or more first REs based at least in part on a determination that there are not sufficient remaining uplink segments (e.g., a threshold number of remaining uplink subframes), within a data segment, to permit transmission of the uplink data within the data segment, in a similar manner as described above. For example, the UE120may defer the uplink data transmission when a number of subsequent uplink segments (e.g., that occur after the one or more second REs) in a data segment or a frame satisfies a threshold (e.g., is greater than or equal to a threshold), and may rate match or puncture the uplink data transmission when a number of subsequent uplink segments in the data segment or the frame does not satisfy a threshold (e.g., is less than the threshold).
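A minimal sketch of the decision rule just described, assuming REs are modeled as (subframe, resource-element) tuples and that a single configured threshold governs the defer-versus-rate-match choice; the function name, data model, and example values are illustrative assumptions, not the required implementation.

def choose_avoidance(pucch_res: set, pusch_res: set,
                     remaining_ul_subframes: int, threshold: int) -> str:
    """Pick a collision-avoidance action for an uplink data transmission."""
    if not (pucch_res & pusch_res):
        return "transmit"            # no overlap between the two RE sets
    if remaining_ul_subframes >= threshold:
        return "defer"               # enough uplink room left in the segment
    return "rate_match_or_puncture"  # avoid the PUCCH REs without delay

# Example: the RE sets overlap at (0, 4), and six uplink subframes remain
# in the data segment, so the UE defers within the segment.
print(choose_avoidance({(0, 3), (0, 4)}, {(0, 4), (1, 2)},
                       remaining_ul_subframes=6, threshold=4))  # -> defer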
In this way, the UE120may avoid delaying transmission of the uplink data, and may increase the efficiency of network resource usage. In some aspects, the UE120may receive, from the base station110, an indication of whether to defer the uplink data transmission or rate match or puncture the uplink data transmission to avoid the one or more second REs. For example, the base station110may determine whether the base station110will be able to schedule the uplink data transmission in one or more subsequent uplink subframes of the same data segment, and may signal a collision avoidance technique to be used by the UE120based at least in part on this determination, in a similar manner as described above. In some aspects, the UE120may defer the uplink data transmission within the data segment based at least in part on receiving an indication to defer the uplink data transmission. Additionally, or alternatively, the UE120may rate match or puncture the uplink data transmission around the one or more first REs based at least in part on receiving an indication to rate match or puncture the uplink data transmission. In this way, a collision avoidance technique may be flexibly configured to account for dynamic network conditions. As indicated above,FIG.4is provided as an example. Other examples are possible and may differ from what was described with respect toFIG.4. FIG.5is a diagram illustrating an example500of avoiding collisions on an uplink data channel and a UE-specific uplink control channel. As shown by reference number505, a UE120(e.g., an eMTC-u UE120) may determine one or more first resource elements (REs) to be used for a UE-specific uplink control channel (e.g., shown as a UE-specific PUCCH as an example). For example, the UE120may receive, from a base station110, an indication of a UE-specific uplink control channel configuration that indicates the one or more first REs to be used for the UE-specific uplink control channel. As described above in connection withFIG.3, in some aspects, the UE-specific uplink control channel may be used by an eMTC-u UE120, that is connected to the base station110(e.g., via an RRC connection), for acknowledging or negatively acknowledging downlink data communications (e.g., received in a data segment). Additionally, or alternatively, the UE-specific uplink control channel configuration may be different for different eMTC-u UEs120in a cell. In some aspects, the UE-specific uplink control channel configuration may be signaled in an RRC configuration message. As shown by reference number510, the UE120may determine that the one or more first REs (e.g., of the UE-specific PUCCH) collide with one or more second REs scheduled for the UE120on an uplink data channel (e.g., a PUSCH), in a similar manner as described above in connection withFIG.4. If the UE120determines that there is a collision, then the UE120may determine that the two sets of REs collide, and may employ one or more collision avoidance techniques, as described below. As shown by reference number515, the UE120may perform a collision avoidance technique to prevent collision of an uplink data transmission and an uplink control transmission based at least in part on determining that the one or more first REs collide with the one or more second REs. For example, the UE120may prevent a collision by deferring the uplink data transmission, by multiplexing UCI in the uplink data transmission, and/or the like. 
In some aspects, the uplink data transmission may include an initial uplink data transmission and/or one or more repetitions of the initial uplink data transmission. As shown by reference number520, in some aspects, the UE120may defer the uplink data transmission to avoid transmitting uplink data in the one or more first REs used for the UE-specific PUCCH. In this way, the UE120may transmit UCI (e.g., an ACK or a NACK to a downlink data communication, CSI, an SR, and/or the like) to the base station110using the one or more first REs scheduled for the UE-specific PUCCH. In some aspects, the UE120may defer the uplink data transmission within a data segment of a frame, in a similar manner as described above in connection withFIG.4. For example, the UE120may defer the uplink data transmission if the UE120determines that there are sufficient remaining uplink segments (e.g., a threshold number of remaining uplink subframes), within a data segment, to permit transmission of the uplink data within the data segment. In some aspects, the UE120may account for uplink limitations (e.g., a limit on contiguous uplink transmissions, a required transmission gap, and/or the like) when determining whether there are sufficient remaining uplink segments, within the data segment, for transmission of the uplink data. In this way, the UE120may prevent transmission of the uplink data from being delayed to a subsequent frame. As shown by reference number525, in some aspects, the UE120may multiplex UCI on the one or more second REs of the uplink data channel (e.g., the PUSCH, shown as UL-SCH), and may transmit the multiplexed UCI via the uplink data channel. In this case, the UE120may drop the uplink control channel (e.g., the PUCCH) to avoid a collision. In some aspects, the UE120may multiplex the UCI on the uplink data channel based at least in part on a determination that there are not sufficient remaining uplink segments (e.g., a threshold number of remaining uplink subframes), within a data segment, to permit transmission of the uplink data within the data segment, in a similar manner as described above. For example, the UE120may defer the uplink data transmission when a number of subsequent uplink segments (e.g., that occur after the one or more second REs) in a data segment or a frame satisfies a threshold (e.g., is greater than or equal to a threshold), and may multiplex the UCI on the uplink data channel when a number of subsequent uplink segments in the data segment or the frame does not satisfy a threshold (e.g., is less than the threshold). In this way, the UE120may avoid delaying transmission of the uplink data, and may increase the efficiency of network resource usage. In some aspects, the UE120may receive, from the base station110, an indication of whether to defer the uplink data transmission or multiplex the UCI on the uplink data channel to avoid a collision. For example, the base station110may determine whether the base station110will be able to schedule the uplink data transmission in one or more subsequent uplink subframes of the same data segment, and may signal a collision avoidance technique to be used by the UE120based at least in part on this determination, in a similar manner as described above in connection withFIG.4.
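The UE-specific case differs from the FIG.4 scheme only in the fallback action: instead of rate matching or puncturing, the UE multiplexes UCI onto the uplink data channel and drops the PUCCH. A hedged sketch reusing the assumed data model from the earlier example; the extra uci_mux_enabled flag reflects the base-station indication described above and is an illustrative assumption.

def choose_ue_specific_avoidance(pucch_res: set, pusch_res: set,
                                 remaining_ul_subframes: int,
                                 threshold: int,
                                 uci_mux_enabled: bool = True) -> str:
    """Defer, or multiplex UCI on the data channel, per the FIG.5 scheme."""
    if not (pucch_res & pusch_res):
        return "transmit"
    if not uci_mux_enabled or remaining_ul_subframes >= threshold:
        return "defer"        # also chosen when the BS disables UCI muxing
    return "multiplex_uci"    # drop the PUCCH; carry UCI on the PUSCH REs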
In some aspects, the UE120may defer the uplink data transmission within the data segment based at least in part on receiving an indication to defer the uplink data transmission and/or an indication that multiplexing of the UCI on the uplink data channel is to be disabled. Additionally, or alternatively, the UE120may multiplex the UCI on the uplink data channel based at least in part on receiving an indication not to defer the uplink data transmission and/or an indication that multiplexing of the UCI on the uplink data channel is to be enabled. In this way, a collision avoidance technique may be flexibly configured to account for dynamic network conditions. In some aspects, when the UE120multiplexes the UCI on the uplink data channel, the UE120may rate match or puncture the uplink data transmission to avoid the multiplexed UCI. For example, the UE120may rate match the uplink data transmission around the one or more second REs used to transmit the UCI. Additionally, or alternatively, the UE120may puncture the uplink data transmission to avoid the one or more second REs used to transmit the UCI. In this way, the UE120may avoid delaying transmission of the uplink data, and may increase the efficiency of network resource usage. In some aspects, the UE120may determine a manner in which the UCI is to be multiplexed on the uplink data channel based at least in part on a first number of repetitions configured for the UCI and/or a second number of repetitions configured for the uplink data transmission. For example, the UE120may be configured to repeat uplink control transmissions (e.g., UCI) using a first number of repetitions, and/or may be configured to repeat uplink data transmissions using a second number of repetitions. In some aspects, the first number of repetitions and/or the second number of repetitions may be indicated in downlink control information (DCI) (e.g., an uplink grant), an RRC configuration message, a SIB, and/or the like. In some aspects, the UE120may compare the first number of repetitions for the UCI and the second number of repetitions for the uplink data transmission, and may configure UCI multiplexing on the uplink data channel based at least in part on the comparison. In some aspects, if the first number is less than or equal to the second number, then the UE120may multiplex and transmit the UCI on a number of subframes equal to the first number. For example, inFIG.5, the UCI is configured for two repetitions and the uplink data is configured for five repetitions. In this case, the UCI is multiplexed and transmitted two times (e.g., in subframes 0 and 1). In some aspects, if the first number is greater than the second number, then the UE120may increase a beta factor that controls a number of REs used for the UCI (e.g., a beta factor β_offset^PUSCH). For example, the UE120may scale the beta factor by a factor equal to the first number divided by the second number. In this way, the UE120may allow more resources for reliable transmission of the UCI. As indicated above,FIG.5is provided as an example. Other examples are possible and may differ from what was described with respect toFIG.5.
FIG.6is a flow chart of a method600of wireless communication. The method may be performed by a UE (e.g., the UE120ofFIG.1, the apparatus802/802′ ofFIG.8and/orFIG.9, and/or the like). At610, the UE may determine one or more first REs to be used for a cell-specific uplink control channel.
For example, the UE may determine (e.g., using controller/processor280and/or the like) one or more first REs to be used for a cell-specific uplink control channel (e.g., a cell-specific PUCCH), as described above in connection withFIG.4. In some aspects, the cell-specific uplink control channel is associated with acknowledging or negatively acknowledging initial network setup messages in an unlicensed radio frequency spectrum band. At620, the UE may determine that the one or more first REs are scheduled to collide with one or more second REs scheduled for the UE for an uplink data transmission on an uplink data channel. For example, the UE may determine (e.g., using controller/processor280and/or the like) that the one or more first REs, to be used for the cell-specific uplink control channel, are scheduled to collide with one or more second REs scheduled for the UE for an uplink data transmission on an uplink data channel (e.g., a PUSCH), as described above in connection withFIG.4. In some aspects, the uplink data transmission includes an initial uplink data transmission and one or more repetitions of the initial uplink data transmission. At630, the UE may modify the uplink data transmission. For example, the UE may modify (e.g., using controller/processor280, transmit processor264, TX MIMO processor266, MOD254, antenna252, and/or the like) the uplink data transmission based at least in part on determining that the one or more first resource elements are scheduled to collide with the one or more second resource elements, as described above in connection withFIG.4. Method600may include additional aspects, such as any single aspect or any combination of aspects described below. In some aspects, modifying the uplink data transmission comprises deferring the uplink data transmission. In some aspects, the uplink data transmission is deferred based at least in part on receiving an indication that the uplink data transmission is to be deferred. In some aspects, the uplink data transmission is deferred within a data segment or a frame in which the one or more second resource elements are scheduled based at least in part on a determination that a number of subsequent uplink segments in the data segment or the frame satisfies a threshold, wherein the subsequent uplink segments are subsequent to the one or more second resource elements. In some aspects, modifying the uplink data transmission comprises rate matching or puncturing the uplink data transmission to avoid the one or more first resource elements. In some aspects, the uplink data transmission is rate matched or punctured based at least in part on receiving an indication that the uplink data transmission is to be rate matched or punctured. In some aspects, the uplink data transmission is rate matched or punctured based at least in part on a determination that a number of subsequent uplink segments in a data segment or a frame, in which the one or more second resource elements are scheduled, does not satisfy a threshold, wherein the subsequent uplink segments are subsequent to the one or more second resource elements. In some aspects, the one or more first resource elements of the cell-specific uplink control channel are signaled in a system information block. In some aspects, the UE is a machine-type communication device that operates in the unlicensed radio frequency spectrum band.
AlthoughFIG.6shows example blocks of a method of wireless communication, in some aspects, the method may include additional blocks, fewer blocks, different blocks, or differently arranged blocks than those shown inFIG.6. Additionally, or alternatively, two or more blocks shown inFIG.6may be performed in parallel. FIG.7is a flow chart of a method700of wireless communication. The method may be performed by a UE (e.g., the UE120ofFIG.1, the apparatus802/802′ ofFIG.8and/orFIG.9, and/or the like). At710, the UE may determine one or more first REs to be used for a UE-specific uplink control channel. For example, the UE may determine (e.g., using controller/processor280and/or the like) one or more first REs to be used for a UE-specific uplink control channel (e.g., a UE-specific PUCCH), as described above in connection withFIG.5. In some aspects, the UE-specific uplink control channel is associated with acknowledging or negatively acknowledging, by the UE, downlink data communications in an unlicensed radio frequency spectrum band. At720, the UE may determine that the one or more first REs are scheduled to collide with one or more second REs scheduled for the UE for an uplink data transmission on an uplink data channel. For example, the UE may determine (e.g., using controller/processor280and/or the like) that the one or more first REs are scheduled to collide with one or more second REs scheduled for the UE for an uplink data transmission on an uplink data channel (e.g., a PUSCH), as described above in connection withFIG.5. In some aspects, the uplink data transmission includes an initial uplink data transmission and one or more repetitions of the initial uplink data transmission. At730, the UE may defer transmission of the uplink data transmission on the uplink data channel or multiplex UCI on the one or more second REs of the uplink data channel. For example, the UE may defer transmission (e.g., using controller/processor280, transmit processor264, TX MIMO processor266, MOD254, antenna252, and/or the like) of the uplink data transmission on the uplink data channel or may multiplex (e.g., using controller/processor280, transmit processor264, TX MIMO processor266, MOD254, antenna252, and/or the like) UCI on the one or more second REs of the uplink data channel based at least in part on determining that the one or more first REs are scheduled to collide with the one or more second REs, as described above in connection withFIG.5. Method700may include additional aspects, such as any single aspect or any combination of aspects described below. In some aspects, the uplink data transmission is deferred based at least in part on receiving an indication that multiplexing of the UCI on the uplink data channel is to be disabled. In some aspects, the UCI is multiplexed on the one or more second resource elements of the uplink data channel based at least in part on receiving an indication that multiplexing of the UCI on the uplink data channel is to be enabled. In some aspects, the uplink data transmission is rate matched or punctured around the one or more second resource elements on which the UCI is multiplexed. In some aspects, the UCI is multiplexed on the one or more second resource elements of the uplink data channel based at least in part on a first number of repetitions configured for the UCI and a second number of repetitions configured for the uplink data transmission. 
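The repetition comparison referenced above, and detailed in the aspects that follow, can be captured in a few lines. The Python function below is an assumed formulation for illustration only: it returns how many subframes carry the multiplexed UCI and the (possibly scaled) beta factor, following the rules described in connection withFIG.5.

def uci_multiplexing_plan(uci_reps: int, data_reps: int,
                          beta_offset: float) -> tuple:
    """Return (subframes carrying UCI, beta factor) per the comparison rule."""
    if uci_reps <= data_reps:
        # UCI fits within the data repetitions: send it on uci_reps subframes.
        return uci_reps, beta_offset
    # UCI is configured for more repetitions than the data: keep the data
    # repetitions but scale beta by uci_reps / data_reps so the UCI gets
    # more resource elements per subframe.
    return data_reps, beta_offset * (uci_reps / data_reps)

print(uci_multiplexing_plan(2, 5, 1.0))  # (2, 1.0): UCI in subframes 0 and 1
print(uci_multiplexing_plan(8, 4, 1.0))  # (4, 2.0): beta scaled by 8/4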
In some aspects, the UCI is transmitted on a number of subframes equal to the first number when the first number is less than or equal to the second number. In some aspects, a beta factor, which controls a number of resource elements used for the UCI, is adjusted when the first number is greater than the second number. In some aspects, the beta factor is scaled by a factor equal to the first number divided by the second number. In some aspects, the one or more first resource elements of the UE-specific uplink control channel are signaled in a radio resource control configuration message. In some aspects, the UE is a machine-type communication device that operates in an unlicensed radio frequency spectrum band. AlthoughFIG.7shows example blocks of a method of wireless communication, in some aspects, the method may include additional blocks, fewer blocks, different blocks, or differently arranged blocks than those shown inFIG.7. Additionally, or alternatively, two or more blocks shown inFIG.7may be performed in parallel.
FIG.8is a conceptual data flow diagram800illustrating the data flow between different modules/means/components in an example apparatus802. The apparatus802may be a UE. In some aspects, the apparatus802includes a reception module804, a determination module806, a collision avoidance module808, a transmission module810, and/or the like. In some aspects, the reception module804may receive information812, from an apparatus850(e.g., a base station), that indicates one or more first REs to be used for a cell-specific uplink control channel and/or a UE-specific uplink control channel. Additionally, or alternatively, the reception module may receive information812, from the apparatus850, indicating one or more second REs scheduled for the apparatus802on an uplink data channel. The reception module804may provide information identifying the one or more first REs and/or the one or more second REs to the determination module806as information814. The determination module806may determine the one or more first REs and/or the one or more second REs using the information814, and may determine that the one or more first REs collide with one or more second REs. The determination module806may provide an indication of the collision to the collision avoidance module808as information816. The collision avoidance module808may perform one or more collision avoidance techniques to prevent the collision (e.g., to prevent transmission of an uplink data transmission using the one or more first REs). For example, the collision avoidance module808may defer the uplink data transmission, in which case the collision avoidance module808may provide, as information818to the transmission module810, an instruction to defer transmission. In this case, the transmission module810may defer the uplink data transmission, and may transmit the uplink data as information820to the apparatus850at a later time. As another example, the collision avoidance module808may rate match or puncture the uplink data transmission to avoid the one or more first REs, and may provide the rate matched or punctured uplink data transmission to the transmission module810as information818. In this case, the transmission module810may transmit the rate matched or punctured uplink data transmission to the apparatus850as information820. As another example, the collision avoidance module808may multiplex UCI on the one or more second REs of the uplink data channel.
In this case, the transmission module810may transmit the multiplexed UCI to the apparatus850on the uplink data channel as information820. The apparatus may include additional modules that perform each of the blocks of the algorithm in the aforementioned method600ofFIG.6, method700ofFIG.7, and/or the like. As such, each block in the aforementioned method600ofFIG.6, method700ofFIG.7, and/or the like may be performed by a module and the apparatus may include one or more of those modules. The modules may be one or more hardware components specifically configured to carry out the stated processes/algorithm, implemented by a processor configured to perform the stated processes/algorithm, stored within a computer-readable medium for implementation by a processor, or some combination thereof. The number and arrangement of modules shown inFIG.8are provided as an example. In practice, there may be additional modules, fewer modules, different modules, or differently arranged modules than those shown inFIG.8. Furthermore, two or more modules shown inFIG.8may be implemented within a single module, or a single module shown inFIG.8may be implemented as multiple, distributed modules. Additionally, or alternatively, a set of modules (e.g., one or more modules) shown inFIG.8may perform one or more functions described as being performed by another set of modules shown inFIG.8. FIG.9is a diagram900illustrating an example of a hardware implementation for an apparatus802′ employing a processing system902. The apparatus802′ may be a UE. The processing system902may be implemented with a bus architecture, represented generally by the bus904. The bus904may include any number of interconnecting buses and bridges depending on the specific application of the processing system902and the overall design constraints. The bus904links together various circuits including one or more processors and/or hardware modules, represented by the processor906, the modules804,806,808, and/or810, and the computer-readable medium/memory908. The bus904may also link various other circuits such as timing sources, peripherals, voltage regulators, and power management circuits, which are well known in the art, and therefore, will not be described any further. The processing system902may be coupled to a transceiver910. The transceiver910is coupled to one or more antennas912. The transceiver910provides a means for communicating with various other apparatus over a transmission medium. The transceiver910receives a signal from the one or more antennas912, extracts information from the received signal, and provides the extracted information to the processing system902, specifically the reception module804. In addition, the transceiver910receives information from the processing system902, specifically the transmission module810, and based at least in part on the received information, generates a signal to be applied to the one or more antennas912. The processing system902includes a processor906coupled to a computer-readable medium/memory908. The processor906is responsible for general processing, including the execution of software stored on the computer-readable medium/memory908. The software, when executed by the processor906, causes the processing system902to perform the various functions described supra for any particular apparatus. The computer-readable medium/memory908may also be used for storing data that is manipulated by the processor906when executing software. 
The processing system further includes at least one of the modules804,806,808, and/or810. The modules may be software modules running in the processor906, resident/stored in the computer readable medium/memory908, one or more hardware modules coupled to the processor906, or some combination thereof. The processing system902may be a component of the UE120and may include the memory282and/or at least one of the TX MIMO processor266, the RX processor258, and/or the controller/processor280. In some aspects, the apparatus802/802′ for wireless communication includes means for determining one or more first resource elements to be used for a cell-specific uplink control channel associated with acknowledging or negatively acknowledging initial network setup messages in an unlicensed radio frequency spectrum band; means for determining that the one or more first resource elements are scheduled to collide with one or more second resource elements scheduled for the apparatus802/802′ for an uplink data transmission on an uplink data channel, wherein the uplink data transmission includes an initial uplink data transmission and one or more repetitions of the initial uplink data transmission; means for modifying the uplink data transmission based at least in part on determining that the one or more first resource elements are scheduled to collide with the one or more second resource elements; and/or the like. Additionally, or alternatively, the apparatus802/802′ for wireless communication may include means for determining one or more first resource elements to be used for a UE-specific uplink control channel associated with acknowledging or negatively acknowledging, by the apparatus802/802′, downlink data communications in an unlicensed radio frequency spectrum band; means for determining that the one or more first resource elements are scheduled to collide with one or more second resource elements scheduled for the apparatus802/802′ for an uplink data transmission on an uplink data channel, wherein the uplink data transmission includes an initial uplink data transmission and one or more repetitions of the initial uplink data transmission; means for deferring transmission of the uplink data transmission on the uplink data channel or multiplexing uplink control information (UCI) on the one or more second resource elements of the uplink data channel based at least in part on determining that the one or more first resource elements are scheduled to collide with the one or more second resource elements; and/or the like. The aforementioned means may be one or more of the aforementioned modules of the apparatus802and/or the processing system902of the apparatus802′ configured to perform the functions recited by the aforementioned means. As described supra, the processing system902may include the TX MIMO processor266, the RX processor258, and/or the controller/processor280. As such, in one configuration, the aforementioned means may be the TX MIMO processor266, the RX processor258, and/or the controller/processor280configured to perform the functions recited by the aforementioned means. FIG.9is provided as an example. Other examples are possible and may differ from what was described in connection withFIG.9. It is understood that the specific order or hierarchy of blocks in the processes/flow charts disclosed is an illustration of example approaches. Based upon design preferences, it is understood that the specific order or hierarchy of blocks in the processes/flow charts may be rearranged. 
Further, some blocks may be combined or omitted. The accompanying method claims present elements of the various blocks in a sample order, and are not meant to be limited to the specific order or hierarchy presented. The previous description is provided to enable any person skilled in the art to practice the various aspects described herein. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects. Thus, the claims are not intended to be limited to the aspects shown herein, but are to be accorded the full scope consistent with the language of the claims, wherein reference to an element in the singular is not intended to mean “one and only one” unless specifically so stated, but rather “one or more.” The word “exemplary” is used herein to mean “serving as an example, instance, or illustration.” Any aspect described herein as “exemplary” is not necessarily to be construed as preferred or advantageous over other aspects. Unless specifically stated otherwise, the term “some” refers to one or more. Combinations such as “at least one of A, B, or C,” “at least one of A, B, and C,” and “A, B, C, or any combination thereof” include any combination of A, B, and/or C, and may include multiples of A, multiples of B, or multiples of C. Specifically, combinations such as “at least one of A, B, or C,” “at least one of A, B, and C,” and “A, B, C, or any combination thereof” may be A only, B only, C only, A and B, A and C, B and C, or A and B and C, where any such combinations may contain one or more member or members of A, B, or C. All structural and functional equivalents to the elements of the various aspects described throughout this disclosure that are known or later come to be known to those of ordinary skill in the art are expressly incorporated herein by reference and are intended to be encompassed by the claims. Moreover, nothing disclosed herein is intended to be dedicated to the public regardless of whether such disclosure is explicitly recited in the claims. No claim element is to be construed as a means plus function unless the element is expressly recited using the phrase “means for.”
DETAILED DESCRIPTION
In the following description, various embodiments will be described. For purposes of explanation, specific configurations and details are set forth in order to provide a thorough understanding of the embodiments. However, it will also be apparent to one skilled in the art that the embodiments may be practiced without the specific details. Furthermore, well-known features may be omitted or simplified in order not to obscure the embodiment being described.
Various embodiments described herein provide methods and apparatus for reliably delivering data from a sender to a receiver over a channel, such as a packet network channel, where the data is carried in packets that may be lost en route. The data stream is partitioned into blocks at the sender and each block is reliably delivered to the receiver. Each block may be of variable size and may in some cases be aligned with the boundaries of application data objects. The delivery latency for a block is the time between when the first data of the block is available at the sender and when the receiver recovers the block. In the general case, a sender organizes data from a source into one or more blocks and encodes a block into a sequence of packets. Some of these operations might be performed prior to receipt by the sender, but either way the sender can operate on blocks and packets. Typically, a packet is a unit of data that is to be sent and is expected to be received correctly in its entirety (based on whatever packet handling protocol might be needed) or discarded in its entirety. The blocks are in an order that might correspond to the order that the blocks are provided to the sender and the order the blocks are preferably recovered at the receiver. The data of a block might be encoded by the sender, such as using an erasure code, to form encoded packets, which may include base packets and repair packets, which are sent to the receiver. The payloads of the base packets include all the original data from the data block, and thus a receiver can correctly recover the entire block upon receiving all the base packets. The repair packets are packets sent with the base packets and are usable to recover from lost base packets; the number of generated and sent repair packets can vary dynamically during the transmission of encoded packets for the data block, based on continuous feedback sent from the receiver to the sender indicating how many additional repair packets are needed to recover the data block. In some cases, the number of sent repair packets may not depend on any feedback. The blocks of a data stream might be encoded using a systematic code, wherein the base packets for a block correspond to the original data of the block partitioned into packet payloads, and thus the aggregate size of the base packet payloads is the size of the data block. Nonsystematic encoding might be used as well. Where nonsystematic encoding is used, the base packets and repair packets might be generated from a data block using the same process, which might be using an erasure code process. Where systematic encoding is used, the same process might be used to generate each repair packet (but with different input parameters to the process for each repair packet), and repair packets can be generated as needed on-the-fly based on received feedback. In a particular instance, a sender transmits data of a data stream to a receiver.
To ensure or improve reliability of transmission, wherein the reliability of transmission is indicated by a confirmation from the receiver to the sender that the data is considered received, various steps of organizing might be performed. For example, a data stream processor at the sender (or other system associated with or affiliated with the sender) obtains the data of the data stream that is to be sent from a source device coupled to the sender to a destination device, where the destination device is configured to receive data from the receiver in a presumed reliable form. The data stream processor organizes data of the data stream into a sequence of one or more data blocks, each block having a block identifier associated therewith. The data stream processor then encodes a data block of the one or more data blocks into a sequence of packets for the data block, wherein the packets of the sequence of packets for the data block are interchangeable among the sequence of packets. The packets of the sequence of packets can be interchangeable if an exact copy of the data block from which the sequence of packets is generated can be decoded, using an erasure decoding process or otherwise, from any set of a sufficient number of packets from the sequence. The sufficient number of packets can be equal to or slightly more than the data block size in units of packet payloads. The data stream processor can then organize data of a packet of the sequence of packets into a packet header and a packet payload and include, in the packet header of the packet, representations of at least (1) the block size, (2) a global packet sequence number that uniquely identifies the packet relative to other packets of the data stream, (3) a block identifier of the data block, and (4) an encoding identifier indicating which encoded data, derived via the encoding process from the data block, is carried in the packet payload. The data stream processor can also include, in the packet payload of the packet, encoded data derived from the data block and identified by the block identifier of the data block and the encoding identifier included in the packet header. The data stream processor can then determine a sendable number of packets for the data block, compare a total number of sent packets, sent from the sequence of packets for the data block, with the sendable number of packets for the data block, and, based on whether the total number of sent packets is less than the sendable number, send the packet to the receiver. The sendable number of packets for a block represents a target total number of packets to be sent for the block. The sendable number of packets for a data block can be determined using a variety of different methods and processes as described in more detail herein. The sendable number of packets for a data block may vary during the process of sending packets from the sequence of packets for the data block based on feedback from a receiver. Using these methods and apparatus, the bandwidth needed to reliably deliver a data stream can be kept low, and block delivery latency for each block of the data stream can also be kept low. For many applications, such as metaverse applications, it can be required that block delivery latency is reliably below a given threshold time for essentially all blocks of the data stream.
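To make the framing described above concrete, here is a hedged Python sketch of the four header fields and the sendable-number comparison; the dataclass layout, the field and function names, and the send_fn callback are assumptions for illustration, not a wire format defined by this description.

from dataclasses import dataclass

@dataclass
class PacketHeader:
    block_size: int   # size of the data block, in units of packet payloads
    global_seq: int   # sequence number unique across the whole data stream
    block_id: int     # identifier of the block the payload was encoded from
    encoding_id: int  # which encoded data from the block is in the payload

def maybe_send(sent_so_far: int, sendable: int,
               header: PacketHeader, payload: bytes, send_fn) -> bool:
    """Send one more packet only while the count of sent packets is below
    the sendable number for the block; otherwise hold and await feedback."""
    if sent_so_far < sendable:
        send_fn(header, payload)
        return True
    return False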
Using these methods and apparatus, the amount of feedback needed from the receiver, both in terms of the frequency of sending feedback packets and in terms of the amount of bandwidth needed to carry the feedback data from the receiver to the sender, can be kept low. This can be important in some applications when the available bandwidth from the receiver to the sender is much less than the bandwidth from the sender to the receiver. Certified reliable delivery of data blocks, i.e., where the sender can certify that the receiver has recovered each block of the data stream, is often desirable as well. Encoding might be such that a data block can be recovered from any combination of encoded packets from the base and repair packets equal in size to the data block in units of packet payloads (in rare cases, a slightly larger set of encoded packets may be needed to recover the data block). A fountain erasure code (also called a rateless erasure code) or a low-rate erasure code might be used for its ability to generate encoded packets as needed on-the-fly from a block of data. With a code that can generate encoded packets as needed, the number of packets of encoded data generated for a block can be easily adjusted as the sendable number of packets varies. The sender can be configured with an encoding functionality with which it can generate the encoded data to insert into packet payloads, the encoded data being encoded from the source data received by the sender according to the particular erasure code being used. The receiver can include a decoder that decodes the encoded data to recover the original data. Where the receiver needs to rely on the repair packets, the decoder can operate to recover from packet erasures. In general, the specific lost encoded packets need not be retransmitted. Instead, the sender can generate and send additional repair packets as needed, and these can be independent of which particular encoded packets are lost. One advantage is that the receiver does not need to specifically request the lost encoded packets or inform the sender of which packets are lost. With the particular coding, there is a high probability that the repair packets will be useful to the receiver to recover the block for which the repair packets are sent in response to feedback from the receiver. The erasure code used might support a wide range of block sizes, which has an advantage of not limiting the available block sizes at the sender to support data streams over a large range of transmission rates to be delivered over networks with a large range of packet round-trip times between the sender and the receiver. An erasure code might be systematic, in that data in a block is considered as encoded data for that block, and the sender can immediately start transmitting base packets of encoded data from a block before all data of the block is available to the sender. This can help reduce data block delivery latency.
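As a toy illustration of systematic encoding, the Python sketch below cuts a block into base packets and derives one XOR-parity repair packet. A real deployment would use a fountain code such as RaptorQ rather than single-parity XOR, and real payloads are far larger; the payload size and helper names are assumptions chosen only to keep the example self-contained.

from functools import reduce

PAYLOAD = 4  # bytes per packet payload (unrealistically small, for clarity)

def base_packets(block: bytes) -> list:
    """Systematic encoding: base packets are the block cut into payloads."""
    return [block[i:i + PAYLOAD] for i in range(0, len(block), PAYLOAD)]

def xor_repair(bases: list) -> bytes:
    """One parity repair packet; it can recover any single lost base packet."""
    padded = [b.ljust(PAYLOAD, b"\x00") for b in bases]
    return reduce(lambda x, y: bytes(a ^ b for a, b in zip(x, y)), padded)

bases = base_packets(b"hello world!")  # three 4-byte base packets
repair = xor_repair(bases)             # generated on the fly, as needed
# If bases[1] is lost en route, XORing the repair packet with the surviving
# base packets reconstructs it without retransmitting the specific packet.
recovered = bytes(a ^ b ^ c for a, b, c in zip(bases[0], bases[2], repair))
assert recovered == bases[1]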
One fountain erasure code (also called a rateless erasure code) that can be used is the RaptorQ code specified in IETF RFC 6330, as it is systematic (i.e., for the base packets, as a packet payload's worth of data is provided to the sender for a block, it can be packaged into a packet and sent, before other data is available at the sender), and the number of repair packets that can be generated need not be fixed ahead of time or be a fixed ratio relative to the number of base packets for a data block, and in particular repair packets can be generated on-the-fly as needed.
Overview of Sending System
For each data block, the sender can generate encoded packets for that data block using the erasure encoder. As each data block arrives at the sender, the sender can generate and send the encoded data as payloads in base packets and repair packets, and the sender is continuously receiving feedback from the receiver including packet loss statistics and block status statistics for blocks that are currently actively being sent. When feedback indicates there is minimal packet loss between the sender and receiver when a block first becomes available to send, the sendable number of packets for the block is typically large enough to allow all base packets to be sent, and the sendable number of packets may in addition allow sending some repair packets, wherein the aggregate number of repair packets allowed to be sent for the data block is a small percentage of the aggregate number of base packets. Thus, the sendable number of packets is slightly more than the block size in units of packet payloads, and thus if all of the sendable packets arrive at the receiver then the data block can be decoded. The sendable number of packets can be set to slightly more than the block size in units of packet payloads to anticipate that some base and repair packets may be lost en route to the receiver. When feedback indicates the packet loss between the sender and receiver is larger when a block first becomes available to send, the sender might set the sendable number of packets to be a larger number so that the aggregate number of repair packets allowed to be sent along with the base packets is a larger percentage of the number of base packets, anticipating that more of the base and repair packets may be lost en route to the receiver. One objective is to ensure that the sendable number of packets is large enough that with high probability the number of encoded packets that arrive at the receiver is at least the number of base packets (and thus the data block can be recovered), but not too much larger than the number of base packets (and thus not too much bandwidth is wasted in delivering the data block). The feedback protocol and the methods for determining the number of sendable packets based on feedback described hereafter are designed to meet this objective. Feedback is used to automatically adjust the sendable number of packets for each data block to meet this objective for each data block. The sender and receiver might maintain a listing of block status for one or more blocks, for example tracking the status of a block among status states selected from a set of status states {active, recoverable, recovered, abandoned, inactive} and process blocks, data, packets, etc. according to the status state of a block.
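One plausible way to set the sendable number from feedback, consistent with the objective just stated but not a formula mandated by this description: send enough packets that the expected number arriving, under the observed loss rate, is at least the number of base packets, plus a small fixed margin. The function name, the clamp, and the margin are illustrative assumptions.

import math

def sendable_number(num_base_packets: int, observed_loss_rate: float,
                    margin: int = 2) -> int:
    """Target total packets (base + repair) for one block, given feedback."""
    p = min(max(observed_loss_rate, 0.0), 0.5)  # clamp implausible feedback
    return math.ceil(num_base_packets / (1.0 - p)) + margin

print(sendable_number(100, 0.01))  # 104: minimal loss, few repair packets
print(sendable_number(100, 0.20))  # 127: heavier loss, more repair packets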
The receiver might continually send a small amount of feedback on statistics related to each active data block, as well as overall statistics, and based on this feedback, the sender can decide to adjust the sendable number of packets for active blocks, which may cause the sender to generate and send more encoded packets for some of the active data blocks. Feedback can also be used to decide the initial sendable number of packets for a block, i.e., the sendable number of packets for a block at the time the first packets are sent for the block. The sender might initially set the sendable number of packets large enough so that all base packets can be sent, and an additional number of repair packets can be generated and sent, where the number of repair packets is determined based on feedback received up to that point in time. As the encoded packets are transmitted for a data block, the sender can use the continual feedback to dynamically adjust, either up or down, the sendable number of packets for the block, which may allow a number of additional repair packets to be generated and sent for the block. The feedback statistics sent for each block are sufficient for the sender to determine whether the receiver has recovered that block, thus enabling the sender to determine with certainty which blocks have been recovered by the receiver. A block has a status of “active” at the sender between the time data from the block becomes available at the sender and the time when the sender receives confirmation from the receiver that either the block has been recovered or the block is no longer relevant to the receiver. When a current time is greater than a timeout value for a block while the status state for the block is “active”, the sender might change the status state to “inactive” to stop working on sending that block. Feedback from a receiver might indicate that the receiver considers a block to be recovered (e.g., the receiver has recovered all of the data block from received encoded packets) or abandoned (e.g., where the receiver has not completed recovery of the block but no longer needs it), so after a receiver updates its listing of block status, it can provide feedback to the sender on the status state updates. This can be done with very little bandwidth back to the sender. The sender can then have the block status state changed to “inactive” for that block. A receiver might return feedback indicating that a block is not yet recovered, but that the receiver estimates it is recoverable from the encoded data received so far, and thus indicate “recoverable” as its status state. This might be estimated when the receiver has received a quantity of encoded packets (any combination of base and repair packets) that are in aggregate payload size at least the size of the data block. A block might usually be recoverable if the number of packets (Xi) the receiver has received for a block i is at least the number (Ki) of packet payloads that are in aggregate the size of block i (i.e., Ki is the size of block i in units of packet payloads, and thus Ki is the number of base packets).
Overview of Example Structures
FIG. 1 provides a high-level view of the methods described herein. In system 100 illustrated there, input data to be sent (130) arrives at a sender (110) and the sender generates and sends encoded data packets (145) over a network (140) to a receiver (120).
The receiver (120) on a regular basis sends feedback packets (146) to the sender, and the receiver (120) decodes and reproduces a data stream (131) that is as similar as possible to the input data to be sent (130). The input data to be sent (130) might be obtained from storage, whereby all of the data might be available when the sending process starts, and/or might be obtained as a stream, whereby perhaps not all of the data is available when the sending process starts but is available during the sending process as needed. The feedback received from the receiver allows the sender to set the sendable number of packets large enough for each block so that the number of sent encoded packets for each block ensures reliable recovery of the block at the receiver, allows the sender to minimize the number of encoded packets sent while still ensuring reliable recovery, and allows the sender to minimize the block delivery latency. At various times during a transmission process, the sender can determine if there is capacity to send additional packets. Whether or not there is capacity can depend on measured available bandwidth between the sender and receiver, or on a configured sending rate. As an example, when there is a configured sending rate, the sender has capacity to send an encoded packet when the size of an encoded packet divided by the time since the last encoded packet was sent is less than or equal to the configured sending rate. Whenever there is capacity to send, the sender can determine, based on computations the sender performs as described herein, the blocks for which the number of sent packets is less than the sendable number of packets and thus for which additional encoded packets are eligible to be sent (hereafter referred to as eligible blocks). The sender might be programmed to prefer earlier blocks, so that when it can generate and send additional encoded packets, they are from the block that was earliest to arrive at the sender (or earliest to be provided to the sender) from among the active eligible blocks. In some embodiments, the sending rate, bandwidth, and latency are considered in a trade-off process, considering that some bandwidth might be consumed to increase the likelihood of very low latency. FIG. 2 provides a more detailed view 200 of apparatus and methods for a sender to send encoded data to a receiver as described herein. As the input data to be sent (130) arrives at a sender (110), the sender partitions the data stream into blocks of data (210, 211, 212), and uses an encoder (220) to generate encoded data for each block (230, 231, 232). A sender process (240) can be used to determine which encoded data to send in packets (145) at each point in time, and the sender process (240) can also process feedback packets (146) received from the receiver to facilitate the decisions made by the sender process (240). FIG. 3 provides a more detailed view 300 of apparatus and methods for a receiver to receive encoded data from a sender as described herein. As the packets carrying encoded data (145) arrive at a receiver (120) at the receiver process (340), the receiver saves the encoded data received for each block (330, 331, 332), and uses a decoder (320) to recover each block (310, 311, 312). Receiver process (340) is also used to determine which feedback to send in packets (146) at each point in time.
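The configured-sending-rate capacity test described above can be sketched as follows in Python; the class and method names are illustrative assumptions, not part of the described embodiments.

import time

# A minimal sketch of the configured-sending-rate test: the sender has
# capacity to send when packet_size / elapsed_since_last_send is less than
# or equal to the configured sending rate.
class RateLimiter:
    def __init__(self, rate_bytes_per_sec: float):
        self.rate = rate_bytes_per_sec
        self.last_send = None  # monotonic timestamp of the last sent packet

    def has_capacity(self, packet_size: int) -> bool:
        if self.last_send is None:
            return True  # nothing sent yet
        elapsed = time.monotonic() - self.last_send
        return elapsed > 0 and packet_size / elapsed <= self.rate

    def record_send(self) -> None:
        self.last_send = time.monotonic()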
The overall methods have the property that lost packets need not be identified at the receiver and specifically requested to be retransmitted. Instead, newly generated encoded packets can be sent whenever more packets are allowed to be sent for a block, i.e., whenever the sendable number of packets for the block is greater than the number of packets already sent for the block, and a sender determines that more packets are needed. The methods are designed so that the number of encoded packets initially allowed to be sent for a block, i.e., the initial sendable number of packets for the block, is: (1) enough so that with reasonably high probability the number of these packets that arrive at the receiver enables the receiver to recover the block; and (2) not much larger than the minimal number of encoded packets needed to recover the block. Property (1) ensures that the block delivery latency is minimal with reasonably high probability, whereas property (2) ensures that the amount of bandwidth used to reliably deliver the block is not much larger than the minimal amount of bandwidth possible. Using feedback from a receiver, a sender can compute how many packets the receiver received and how many are in flight to the receiver, and, knowing the size of the data block, the sender can then compute or estimate the sendable number of packets for the block and use that number for determining how many repair packets to generate and send for the data block. Using feedback from the receiver, the sender can also compute loss rates for previously sent packets, compute the sendable number of packets for future blocks, and use that for computing how many repair packets to generate and send for future blocks. In some embodiments, the sender will optimize the sendable number of packets for a block based on that feedback, a block size, an expected variation in packet loss rate, etc. Where the repair packets are encoded using a rateless code (fountain code), the number of packets generated can vary from instant to instant without requiring a change of encoding scheme: the sender can just generate more or fewer packets as needed. Further, since the additional repair packets need not be retransmissions of lost packets, the sender can send some number of packets for recovery without having to be concerned about which packets were lost. This also saves on feedback traffic, as the receiver can simply send overall statistics in feedback. A feedback packet sent from a receiver to a sender might contain various fields. Not all feedback packets need contain all fields, and some subcombinations are described herein. For example, a feedback packet might have a field for fseqno, a feedback global packet sequence number corresponding to the largest global packet sequence number received at the receiver up to some point in time, a field for rseqno, a feedback global packet count corresponding to the total number of packets the receiver has received up to the same point in time, and a status state indicator. Some of the feedback fields are on a block-by-block basis (e.g., a plurality of status state indicators, each status state indicator for one of a plurality of active blocks). FIG. 4 illustrates an example table of data that might be used for transmissions. A receiver might send a feedback packet containing feedback(fs, rs, F) having a value fs for the feedback global packet sequence number fseqno, a value rs for the feedback global packet count rseqno, and a value F for a status state indicator. Feedback packets might contain values that are block-specific and/or flow-specific, the latter used where multiple flows are provided for.
For example, referring to the definitions described in FIG. 5, for block i, a feedback packet might contain (fs, rs, i, Xi, statusi), where i indicates the block for which the feedback applies, feedback global packet sequence number fs is the largest global packet sequence number received in a packet at the receiver up to some point in time t, feedback global packet count rs is the total number of packets received up to time t, feedback block packet count Xi is the number of packets of encoded data the receiver has received for block i up to time t, and statusi indicates the status of block i up to time t. The value of time t may also be included in the feedback packet. The sender might have different values for fseqno and rseqno and could update its local copy accordingly. For example, if the sender receives a feedback packet with fseqno=fs while the sender has a value for fseqno that is smaller than fs, the sender can reset its value for fseqno to the fs received in the feedback packet, since it is larger than the value for fseqno it had previously. Similarly, if the sender receives a feedback packet with rseqno=rs while the sender has a value for rseqno that is smaller than rs, the sender can reset its value for rseqno to the rs received in the feedback packet, since it is larger than the value for rseqno it had previously. The status state indicator might be selected from an enumerated set of status states {active, recoverable, recovered, abandoned, inactive}. Methods referred to herein can provide that the feedback from the receiver to the sender can be continuous. If the receiver is not able to recover the block from the reception of encoded packets from the set of encoded packets initially allowed to be sent (the initial sendable number of packets for the block), the feedback from the receiver to the sender enables the sender to automatically increase the sendable number of packets for the block, until ultimately enough encoded packets arrive at the receiver to recover the block. For example, this can occur if packet loss conditions worsen as packets for the data block are being sent by the sender. As the feedback arrives at the sender, at each point in time the design maintains properties (1) and (2) above with respect to the sendable number of packets for the block. Thus, for each block, the methods ensure that the block delivery latency is minimal or close to minimal with very high probability and that the bandwidth used is close to minimal, while at the same time ensuring with essential certainty that blocks are recovered at the receiver. A sender and receiver may be deployed in a device to deliver data across various types of packet-based networks.
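The following Python sketch shows how a sender might fold such a feedback packet into its local state; taking maxima makes stale or reordered feedback harmless. The class and names are illustrative assumptions, not part of the described embodiments.

class SenderFeedbackState:
    def __init__(self):
        self.fseqno = 0        # largest feedback global packet sequence number seen
        self.rseqno = 0        # largest feedback global packet count seen
        self.X = {}            # per-block feedback block packet counts (Xi)
        self.inactive = set()  # blocks reported recovered or abandoned

    def on_feedback(self, fs: int, rs: int, per_block: dict) -> None:
        self.fseqno = max(self.fseqno, fs)
        self.rseqno = max(self.rseqno, rs)
        for i, (X, status) in per_block.items():
            self.X[i] = max(self.X.get(i, 0), X)
            if status in ("recovered", "abandoned"):
                self.inactive.add(i)

s = SenderFeedbackState()
s.on_feedback(fs=7, rs=5, per_block={0: (5, "active")})
s.on_feedback(fs=6, rs=4, per_block={0: (4, "active")})  # stale; max() ignores it
print(s.fseqno, s.rseqno, s.X[0])  # 7 5 5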
Features
As explained herein in more detail, the sender never needs to retransmit lost packets and is not required to know which packets specifically are lost. Feedback from the receiver to the sender might include counts of the number of packets (or bytes) that the receiver has received for active blocks of data. The sendable number of packets for an active block can be greater than the number of packets needed to recover the block. The sendable number of packets for the block can be automatically adjusted based on a continually updated estimate of the packet loss rate, where the estimate of the packet loss rate can be continually updated at the sender based on feedback received from the receiver. Thus, in addition to the base packets, from which all of a block can be recovered if all of the base packets are correctly received, some repair packets can be sent with the base packets and, based on additional feedback, some additional repair packets may be sent. The sendable number of packets for a block can be a function of the block size and an estimate of the packet loss rate. The sender might include a global packet sequence number in the packet header of each sent packet, representing a sequence number that starts at zero and increases by one in each subsequently sent packet, independently of which block that packet is sent for. The receiver can send feedback to the sender that includes a feedback global packet sequence number and corresponding feedback global packet count determined at some point in time t at the receiver. The time, t, may also be included in the feedback. This feedback can be used by the sender to estimate the packet loss rate. This feedback can also be used by the sender to determine how many sent packets should have arrived for each block at the receiver by the time the receiver sent the feedback if there were no packet loss. This, together with the feedback on the number of packets the receiver has received for each active block, perhaps in the form of a feedback block packet count for each active block, enables the sender to determine the number of sent packets for each block that have been lost by the time the receiver sent the feedback, which in turn enables the sender to update the sendable number of packets for each active block. The sender can sometimes send packets carrying no data. This is useful to trigger feedback from the receiver when many or all of the sent packets carrying data for blocks have been lost, as this feedback avoids deadlocks in making progress on reliably delivering the blocks of data.
Implementation Details of Examples
The methods described herein can enable reliably delivering a rich stream of data from a sender to a receiver with ultra-low data delivery latency. The packets that are used to carry the data from the sender to the receiver carry a packet header and a packet payload. The packet header carries fields that identify the data, if any, carried in the packet payload. For example, the packets may be UDP packets, where the packets include the IP/UDP headers together with fields described in more detail below that are specific to the methods described herein. A data stream is scoped by a sender and a receiver, i.e., a data stream includes all the data that is streamed from a particular sender to a particular receiver, as for example identified by the IP addresses of the sender and receiver. The sender partitions the data stream into blocks on-the-fly, wherein all blocks may be the same size, or boundaries between blocks may be determined based on the portion of the data stream that becomes available at the sender for transmission over fixed intervals of time, or the boundaries between blocks may be determined at least in part based on application object boundaries within the data stream. For example, an application may explicitly provide each application object to the sender as a data block.
Sender Parameters
A block number is associated with each block, wherein the block numbers start at zero and increment by one for each subsequent block in the data stream. The block number is carried in the packet header for each packet sent for a block. Sequential numbering of blocks within a data stream allows the receiver to determine if there are blocks for which the receiver has received no packets. For example, if the receiver has received packets for blocks 0, 2, 3, 4, and 5, then the receiver can determine that all packets sent for block 1 have likely been lost. Other ways of generating block numbers may also be used, e.g., positive integers A, B, N may be shared between a sender and receiver, and the block numbers are generated according to the rule (A·i+B) mod N, for blocks i = 0, 1, 2, . . . .
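As a small illustration of the alternative numbering rule just described, the following Python snippet labels blocks with (A·i+B) mod N. The constants are arbitrary example values, not from the description; note that choosing A coprime to N keeps the labels distinct within any window of N consecutive blocks.

# Illustration of the alternative block-numbering rule (A*i + B) mod N.
A, B, N = 5, 3, 7
labels = [(A * i + B) % N for i in range(7)]
print(labels)  # [3, 1, 6, 4, 2, 0, 5]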
A global packet sequence number is associated with each packet, wherein the global packet sequence numbers start at zero and increment by one for each subsequently sent packet in the data stream, independently of which block the packet is sent for. As an example of one alternative, the global packet sequence number may start at some integer value other than zero. The global packet sequence number is carried in each packet header for each packet, independently of whether the packet payload carries any data. The sender maintains a data structure as shown in FIG. 4 for storing variables that are used in the sending process, including seqno, a value representing the global packet sequence number of the most recent packet sent by the sender, fseqno, which is the largest feedback global packet sequence number received at the sender from the receiver, and rseqno, which is the largest feedback global packet count received at the sender from the receiver, representing the total number of packets the receiver has reported as received in feedback received at the sender. There are several other block parameters relevant to the sender during the delivery of a data stream. For simplicity, it is assumed hereafter that all packet payloads that carry data carry the same amount of data. For each block i of the data stream, the sender might maintain a data structure in computer-readable memory such as that depicted in FIG. 5.
Example of Sender Process
While block i is active, the sender is sending encoded data packets for block i. Sent packets for block i for which no feedback has yet been received from the receiver are considered in flight. A target value for the total number of packets to arrive at the receiver is set to Ki+Ri, where Ki is the block size in units of packet payloads and Ri is a predetermined additional number. Referring to FIG. 5, the sendable number of packets for block i is equal to the allowed number of packets in flight for block i plus the number, Xi, of packets that are known to the sender to have arrived at the receiver for block i, which is the largest feedback block packet count for block i received at the sender from the receiver. Benefits are obtained by setting the allowed number of packets in flight for block i in such a way that the expected number of these packets that arrive at the receiver is the target value minus the number of packets known to the sender to have already arrived at the receiver for block i, i.e., the value Ri+Ki−Xi, where Ki−Xi is the number of additional packets needed to recover block i, and Ri is the number of packets expected to arrive at the receiver for block i beyond the minimal number of packets needed to recover block i. Setting the target value in this way ensures that if the actual number of packets in flight that arrive at the receiver for block i falls below the expected number by at most the predetermined additional number Ri, then the receiver is still able to recover block i from packets already in flight.
Ri should be set large enough to ensure block i can be recovered with high probability from packets in flight even when there are variations in packet loss, and Ri should be set small enough to ensure that only a small amount of additional bandwidth is used to reliably recover block i. How the predetermined additional number Ri is calculated might also vary over time. For example, Ri might be initially set to the square root of the size in packet payloads (Ki) of block i, and then be adjusted up or down based on feedback from the receiver. The sendable number of packets for a block can vary from block to block (and in the case of multiple flows, from flow to flow as well) and can be adjusted based on measured packet loss rates, expected packet loss rates, block size, and/or available bandwidth. Since there is an ability to encode additional packets as needed for a block, this variance can be easily handled. FIG. 6 provides a snapshot of a simple example 400 of the sender parameter values while sending encoded data for block 0. In FIG. 6, packets carrying encoded data that have arrived at the receiver by the time the receiver sent the latest feedback that has arrived at the sender are shown as rectangles with a white interior, packets carrying encoded data that have been lost before arrival at the receiver by the time the receiver sent the latest feedback that has arrived at the sender are shown as rectangles with a hatched interior, and packets carrying encoded data that are still in flight to the receiver after the time the receiver sent the latest feedback that has arrived at the sender are shown as rectangles with a black interior. For example, the first packet (410) sent by the sender has arrived at the receiver, whereas the second packet (420) sent by the sender has been lost before its arrival at the receiver. At the time of the snapshot, feedback (450) has arrived at the sender from the receiver, where the feedback 450 indicates that the number of packets received for block 0 is X0=5, and that the highest sequence number in a packet at the receiver was fseqno=7, where the packet carrying fseqno=7 is packet (425). At the time of the snapshot there were four packets in flight, including the most recently sent packet (430), which carries sequence number seqno=11 (440). From the definitions described in FIGS. 4-5, the number of packets in flight for block i is Zi−Yi. Since pest is the current estimate of the packet loss probability, the number of packets in flight for block i that are expected to arrive at the receiver is (Zi−Yi)*(1−pest). Since Xi is the number of packets that the sender knows have arrived at the receiver for block i, and since Ki is the number of packets needed to recover block i, Ki−Xi is the number of packets in flight that need to arrive at the receiver to recover block i. Thus, the sendable number of packets for block i is (Ki−Xi+Ri)/(1−pest)+Yi, whereas the number of sent packets for block i is Zi. Equivalently, the sender is allowed to send additional encoded data packets for block i if (Zi−Yi)*(1−pest)<Ki−Xi+Ri. The sendable number of packets for a block can change as packets are sent for the block and feedback is received from the receiver, i.e., the values of Xi, Yi, pest, and possibly Ri can change.
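The eligibility test just derived can be expressed directly in Python. The sample values below are assumptions chosen only to exercise the test, not figures from the description.

# Sketch of the single-flow eligibility test: more encoded packets may be
# sent for block i while (Zi - Yi) * (1 - pest) < Ki - Xi + Ri.
def may_send_more(Z: int, Y: int, X: int, K: int, R: float, pest: float) -> bool:
    expected_arrivals = (Z - Y) * (1.0 - pest)  # in-flight packets likely to arrive
    return expected_arrivals < (K - X) + R      # packets still needed, plus margin Ri

# Illustrative values: 12 sent, 8 reported on, 5 received, K = 10, R = 3.
print(may_send_more(Z=12, Y=8, X=5, K=10, R=3, pest=0.25))  # True: 3.0 < 8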
Data delivery latency achieved for the methods when both Ri and pest are pinned to zero during the data delivery is a lower bound on the data delivery latency for an optimal retransmission-based solution. Data delivery latency can be reduced below that of retransmission-based solutions, enabling consistently small data delivery latency, using the flexibility provided by allowing Ri to be greater than zero and allowing pest to adjust based on the actual packet loss rate during the data delivery. The value of pest can be a global value, a per-block value, and/or a per-flow value.
Sender Methods
Two parts of the sender methods are described in this section: the sender computations for updating parameters and the sender computations for sending packets. FIG. 7 describes a sender updating parameters process 600. When the sender is ready to start sending the data stream, the parameters nblk, seqno, fseqno, and rseqno are all initialized to zero (610), where nblk is the next block number to be assigned and seqno, fseqno, and rseqno are defined above. When data for the next block becomes available at the sender, it is assigned block number i=nblk, statusi is initialized to active, compi is initialized to false, and Xi, Yi, Zi, and Ki are all initialized to zero, where these parameters are defined above (620). Then, nblk is incremented by one to prepare for the next block (620). When all data for block i is available, compi is set to true, Ki and Ri are set based on the size of block i, and timeouti is set to Δ plus the current time (630). The value of Δ is a time duration after which the sender will cease to send data for block i if block i has not been successfully delivered to the receiver. The value of Δ is typically set to a much longer duration than the timeout value at the receiver. In particular, if the current time is greater than timeouti for some block i with statusi=active, then the sender deactivates block i and sets statusi to inactive (640). If the sender receives feedback (fs, rs, F) from the receiver (650), the sender resets fseqno to the maximum of fs and fseqno, and resets rseqno to the maximum of rs and rseqno, where fs is a feedback global packet sequence number and rs is a corresponding feedback global packet count (660). Furthermore, for each active block i, the sender updates Yi to the number of packets it has sent for block i up to the packet carrying fseqno (660). If F contains feedback (X, status) about active block i (670), then the sender resets Xi to the maximum of X and Xi, where X is a feedback block packet count for block i, and if status is equal to either recovered or abandoned, then statusi is reset to inactive (680). FIG. 8 illustrates a sender sending packets process 700. When it is time for the sender to send a next packet, the sender proceeds as follows (710). Let I be the set of block numbers i such that statusi is active and compi is false and there is unsent data for block i (710). Let I′ be the set of block numbers i such that statusi is active and compi is true and (Zi−Yi)*(1−pest)<Ki−Xi+Ri (710). The union of I and I′ is the set of block numbers of the blocks for which it is allowed to send a next encoded packet at this time. If the union of I and I′ is empty (720), then the sender sends a packet carrying global sequence number seqno with no data in the packet payload (730), and seqno is incremented by one to prepare for sending the next packet (750). Even though there is no data to send from any of the blocks, this packet is sent because it can trigger useful feedback from the receiver. If the union of I and I′ is not empty (720), then let b be the smallest block number in the union of I and I′ (740). The next packet sent will carry encoded data for block number b.
The packet header for the packet carries global sequence number seqno, block identifier b, block size Kb, and encoding identifier Zb, and the packet payload carries the encoded data identified by encoding identifier Zb generated from block b (740). This packet is sent, Zb is incremented by one to prepare for sending the next packet for block b (740), and global sequence number seqno is incremented by one to prepare for sending the next packet (750). As an alternative embodiment, block i may be made available to the sender together with a delivery deadline di, where the ordering of the deadlines for blocks may not coincide with the sequencing of blocks, e.g., the deadline db for delivering block b may precede the deadline di for delivering block i even though b is larger than i, i.e., even though block b started arriving at the sender after block i started arriving at the sender. In this embodiment, if the union of I and I′ is not empty, then let b be the block number in the union of I and I′ such that block b has the earliest delivery deadline among all blocks in the union of I and I′. The next packet sent will carry encoded data for block number b.
Receiver Methods
FIG. 9 illustrates an example receiver receiving packets process 800. Before receiving any packets for the data stream, the receiver initializes fseqno and rseqno to zero, and there are no active blocks at the receiver (805). When the receiver receives a packet carrying a global packet sequence number seqno in the packet header, the receiver updates fseqno to the maximum of seqno and fseqno, and increments rseqno by one (810). The feedback F for this packet is initialized to empty (810). If the received packet carries encoded data for a block (820), then the packet header carries i, K, and Y, where i is the block number, K is either zero (if the block was not complete when this packet was sent) or the size of block i in units of packet payloads (if the block was complete when this packet was sent), and Y is the encoding identifier that identifies the encoded data carried in the packet payload generated from block i (830). The receiver sets feedback block packet count Xi to the total number of packets it has received so far with encoded data for block i with different encoding identifiers (including encoding identifier Y) and sets the block size Ki to the maximum of K and Ki (830). If block i is recoverable from the encoded data received so far, then the receiver sets statusi to recoverable (830). Block i is usually recoverable if Xi is at least Ki and Ki is greater than zero, indicating that Ki is the size of block i in units of packet payloads. If block i is not yet recoverable, then the receiver sets statusi to active, unless block i is no longer relevant to the receiver, in which case the receiver sets statusi to abandoned (830). The receiver adds the feedback (i, Xi, statusi) to F (830). The receiver does the following for every block b that is active at the receiver, apart from block i if the received packet carries encoded data (840). If block b is no longer relevant to the receiver, then statusb is set to abandoned, and otherwise statusb is still active (840). The feedback (b, Xb, statusb) is added to F (840). The receiver sends a feedback packet carrying (fseqno, rseqno, F) to the sender, where fseqno is a feedback global packet sequence number and rseqno is a corresponding feedback global packet count (850).
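A condensed Python sketch of these receiver steps follows. The data structures and names are assumptions for illustration; the sketch counts receptions directly (assuming no duplicate packets) and omits the storage of encoded data for the decoder.

def on_packet(state: dict, seqno: int, block=None):
    # state: {"fseqno": int, "rseqno": int, "blocks": {i: {...}}}
    state["fseqno"] = max(state["fseqno"], seqno)
    state["rseqno"] += 1
    F = []
    if block is not None:
        i, K = block["i"], block["K"]
        blk = state["blocks"].setdefault(i, {"X": 0, "K": 0, "status": "active"})
        blk["X"] += 1                    # one more distinct encoding identifier
        blk["K"] = max(blk["K"], K)
        if blk["K"] > 0 and blk["X"] >= blk["K"]:
            blk["status"] = "recoverable"
        F.append((i, blk["X"], blk["status"]))
    for b, blk in state["blocks"].items():
        if blk["status"] == "active" and (block is None or b != block["i"]):
            F.append((b, blk["X"], blk["status"]))
    return (state["fseqno"], state["rseqno"], F)  # the feedback packet body

st = {"fseqno": 0, "rseqno": 0, "blocks": {}}
print(on_packet(st, seqno=0, block={"i": 0, "K": 2}))  # block 0 becomes active
print(on_packet(st, seqno=2, block={"i": 0, "K": 2}))  # seqno 1 lost; recoverable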
The receiver can also include its own sequence number in the packet header of each feedback packet it sends to the sender so that if feedback packets arrive out of order at the sender, the sender can determine based on this sequence number which is the latest feedback from the receiver. In a particular embodiment, two types of feedback could be included in each feedback packet: block information and sequence information. Block information might comprise, for each active block, the block status, including how many encoded packets have been received for the block up through the packet with the largest global packet sequence number included in the sequence information. Sequence information might comprise, for some subset of packets previously received from the sender, the global packet sequence number of the packet and the total number of previously received packets up through the time the packet is received. The packet with the highest global packet sequence number received so far by the receiver might be included in the subset of packets previously received from the sender. Table 1 illustrates an example of sequence information that might be included in a feedback packet. In this example, the sequence information corresponds to a state in which the receiver received a sequence of packets, where fs is the global packet sequence number in a received packet up to some point in time and rs is the global packet count up to the same point in time, i.e., rs is the total number of packets the receiver has received up through the packet that has the global packet sequence number fs.

TABLE 1
Feedback Packet Contents
fs    rs
23    19
24    20
25    21
40    22
43    23
44    24
45    25
47    26
49    27
50    28
51    29
52    30
53    31

As a first example, the sequence information included in one feedback packet could list all of these (fs, rs) pairs, where fs is a feedback global packet sequence number and rs is a corresponding feedback global packet count, which would provide the sender with a verbose description of the entire pattern of recent packet loss. As a second example, the sequence information in a feedback packet could list the following subset of these (fs, rs) pairs: (23, 19), (25, 21), (40, 22), (49, 27), (53, 31). This second example of sequence information is more succinct than the first example and provides most of the details of the packet loss patterns, i.e., it provides the information that there is no packet loss between fs=23 and fs=25, that there is a burst loss of 14 packets between fs=25 and fs=40, that there is no packet loss between fs=49 and fs=53, and it condenses the exact pattern of loss between fs=40 and fs=49 such that the sender can infer that four out of the nine packets were lost, albeit not necessarily which four of the nine were lost.
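The following Python snippet shows how a sender might turn the succinct sequence information of the second example into per-span loss counts; the pairs are taken from Table 1 above.

pairs = [(23, 19), (25, 21), (40, 22), (49, 27), (53, 31)]
for (fs0, rs0), (fs1, rs1) in zip(pairs, pairs[1:]):
    sent = fs1 - fs0       # packets the sender emitted in this span
    received = rs1 - rs0   # packets that actually arrived in this span
    print(f"fs {fs0}->{fs1}: {sent - received} of {sent} lost")
# fs 23->25: 0 of 2 lost
# fs 25->40: 14 of 15 lost  (the burst described above)
# fs 40->49: 4 of 9 lost
# fs 49->53: 0 of 4 lost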
With this sequence information format for feedback packets, the same (fs, rs) pair can be repeated in multiple feedback packets, which for example allows the feedback to be more robust when there is packet loss on the path between the receiver and the sender. For example, a sparse subset of previously sent (fs, rs) pairs can be sent in the current feedback packet, ensuring that even if previous feedback packets are lost and only the current feedback packet is received by the sender, the sender at least has a coarse idea of previous packet loss patterns. For example, the sequence information included in a feedback packet could include half of the (fs, rs) pairs sent in the previous feedback packet together with some new (fs, rs) pairs corresponding to packets that have arrived at the receiver since the previous feedback packet was sent. Updates to previously sent (fs, rs) pairs can also be provided, e.g., if some packets arrive out of order at the receiver, then the sequence information in a first feedback packet may contain an (fs, rs) pair and the sequence information in a subsequently sent feedback packet may contain an (fs, rs′) pair where rs′ is greater than rs, which can happen if packets sent from the sender to the receiver arrive out of order at the receiver. An advantage of this sequence information format is that if some feedback packets are lost, the sender can still obtain a rough approximation of packet loss from subsequent feedback packets. There can be many variations of the above methods. For example, the sequence feedback could include a global packet sequence number fs together with the number of packets received with consecutively smaller global packet sequence numbers, e.g., the feedback pair (25, 10) could indicate that the data packets with global packet sequence numbers 16, 17, 18, 19, 20, 21, 22, 23, 24, 25 were received by the receiver. Another alternative is to include a global packet sequence number together with a bitmap of received packets with smaller global packet sequence numbers, e.g., the feedback (25, 1011001101) could indicate that the data packets with global packet sequence numbers 16, 18, 19, 22, 23, 25 were received by the receiver and that the data packets with global packet sequence numbers 17, 20, 21, 24 were not received by the receiver. Including different types of feedback in the same feedback packet is another alternative, e.g., (type=1, 25, 10) and (type=2, 11011011) could be included in the same feedback packet, where type=1 indicates a sequence number followed by a consecutive count of how many previous data packets were received, and type=2 indicates a sequence number followed by a bitmap of sequence numbers of previously received data packets. The sender may indicate the frequency of feedback required, for example by indicating to the receiver a pair (X, Y), where X indicates that there should be at most X data packets received at the receiver before sending the next feedback packet to the sender, and Y indicates that the delay between consecutive feedback packets sent from the receiver to the sender should be at most Y milliseconds. For example, instead of sending a feedback packet in response to each received packet, the receiver may send feedback less or more frequently, e.g., the receiver sends feedback for each third received packet, or the receiver sends a feedback packet for each 5 ms that passes, or the receiver sends a feedback packet for each fifth received packet and whenever 10 ms has passed. The amount and frequency of feedback from the receiver to the sender may also be application specific, i.e., the receiving system might be programmed to have feedback that depends on the overall delivery latency requirements of an application. For example, if a first application requires 10 ms delivery latency whereas a second application requires only 100 ms delivery latency, then the frequency of feedback for the first application may be set to be much higher than for the second application.
In some cases, the feedback may be very infrequent compared to the number of packets sent from the sender to the receiver. For example, there may be a 10 Gbps stream of data sent from the sender to the receiver, which equates to around 1 million packets a second when using a typically sized UDP data packet of around 10,000 bits. In this case, there may be only one feedback packet every 10 ms, which is around 100 feedback packets per second, i.e., only one feedback packet sent for each 10,000 data packets sent. Typically, feedback packets are shorter than data packets, and thus the amount of bandwidth required between receiver and sender for feedback is a small fraction of the amount of bandwidth required between the sender and receiver for data. The sender may send a message to the receiver to signal that the receiver should send more feedback or less feedback, based on packet loss of feedback packets sent from the receiver to the sender. For example, if the sender is experiencing higher feedback packet loss, the sender may indicate to the receiver to increase the frequency and redundancy of information provided in the feedback to increase the reliability and accuracy of the feedback at the sender, whereas if the feedback packet loss is lower, the sender may indicate to the receiver to reduce the frequency and redundancy of information provided in the feedback to reduce bandwidth consumption between the receiver and sender.
Automatic Calculation of Parameters at Sender
In an embodiment, the sender sets Ri to the square root of Ki for block i, where Ki is the size of block i in units of packet payloads. In other embodiments, the sender may set Ri to a constant c times the square root of Ki for block i, where c is greater than zero, e.g., c is equal to 0.5 or c is equal to 1.5. Larger values of c provide higher assurance of small data delivery latency at the expense of more bandwidth consumption, whereas smaller values of c provide lower assurance of small data delivery latency while saving on bandwidth consumption. In another embodiment, the sender sets Ri to the square root of seqno−fseqno at the time block i becomes available at the sender. In other embodiments, the sender may set Ri to a constant c times the square root of seqno−fseqno for block i, where c is greater than zero, e.g., c is equal to 0.5 or c is equal to 1.5.
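The Ri sizing rule can be sketched in a few lines of Python; rounding up with ceil() to obtain an integer packet count is an assumption for illustration, and the value K = 5,000 matches the block size used in the simulations discussed later.

import math

# Sketch of the Ri sizing rules described above: Ri = c * sqrt(Ki).
def repair_margin(K: int, c: float = 1.0) -> int:
    return math.ceil(c * math.sqrt(K))

for c in (0.5, 1.0, 1.5):
    print(c, repair_margin(5000, c))  # 36, 71, 107 for K = 5,000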
The value of pest can be initialized to some value prior to sending any packets for the data stream, e.g., pest might be set to zero if the expected packet loss rates are typically low or unknown, and pest might be set to a value alpha greater than zero if there is some a priori information suggesting that the typical packet loss rates are close to alpha, or if there is a requirement to have higher assurance of small data delivery latency at the beginning of the data stream. In a first embodiment, the sender continually updates pest as follows. Let numflt be the current number of packets in flight for all data blocks, i.e., the sender sets numflt to seqno minus fseqno at each point in time, where seqno is the global packet sequence number in the most recent packet sent by the sender, and fseqno is the largest feedback global packet sequence number the sender has received from the receiver in feedback. Let rseqno be the feedback global packet count corresponding to fseqno. Let fseqno′ be the second largest global packet sequence number the sender has received from the receiver in feedback, and let rseqno′ be the feedback global packet count corresponding to fseqno′. Then, upon receiving the feedback (fseqno, rseqno) from the receiver, the sender updates pest as indicated in Equation 1, where ds is equal to fseqno minus fseqno′, dr is equal to rseqno minus rseqno′, and dl is equal to ds minus dr.

pest ← (1 − ds/(numflt + ds)) · pest + dl/(numflt + ds)   (Eqn. 1)

The value of ds is the number of additional packets reported as sent between the last feedback report and the current feedback report from the receiver, the value of dr is the number of additional packets reported as received between the last feedback report and the current feedback report from the receiver, and thus the value of dl is the number of additional packets calculated as lost between the last feedback report and the current feedback report from the receiver. In a second embodiment, pest is updated as follows. Let pprev be the estimate of pest described in the first embodiment at the time in the past when the packet being sent was pseqno=seqno−numflt, and let pcur be the estimate of pest described in the first embodiment at the current time, when the packet just sent was seqno. Let pwin=(e·pcur−pprev)/(e−1), where e=2.71828 . . . (the base of the natural logarithm). The value of pwin can be viewed as a more accurate estimate of the loss rate over just the past numflt packets for which feedback has been received, instead of over all previous feedback. Then, pest is set to the maximum of pcur and pwin. In a third embodiment, pest is updated as follows. Let Kavg be the average number of packets in blocks of data that are being sent. Then, upon receiving the feedback (fseqno, rseqno) from the receiver, the sender updates pest as indicated in Equation 2, where ds and dl are defined as in the first embodiment.

pest ← (1 − ds/(Kavg + ds)) · pest + dl/(Kavg + ds)   (Eqn. 2)

In a fourth embodiment, pest is updated as follows. Let pprev be the estimate of pest described in the third embodiment at the time in the past when the packet being sent was pseqno=seqno−Kavg, and let pcur be the estimate of pest described in the third embodiment at the current time, when the packet just sent was seqno. Let pwin be as defined above; the value of pwin can be viewed as a more accurate estimate of the loss rate over just the past Kavg packets for which feedback has been received, instead of over all previous feedback. Then, pest is set to the maximum of pcur and pwin. The methods above to calculate pest can be extended when the sequence information in a feedback packet includes multiple pairs (fseqno1, rseqno1), (fseqno2, rseqno2), . . . , (fseqnoj, rseqnoj), where fseqno1<fseqno2< . . . <fseqnoj. For example, suppose the sender receives such a feedback packet at the time when the sender has just sent the packet with global packet sequence number seqno, and (fseqno0, rseqno0) is a pair in a previously received feedback packet, fseqno0 is the largest value in such a previously received pair, and fseqno0<fseqno1. The sender can then apply the methods described above for updating pest by updating, for each i=1, . . . , j, the value of pest with respect to the larger pair (fseqnoi, rseqnoi), the second-largest pair (fseqnoi−1, rseqnoi−1), and the current global packet sequence number seqno.
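The first-embodiment update (Eqn. 1) and the windowed refinement pwin used by the second embodiment can be sketched in Python as follows; the names follow the text, while the driver values at the bottom are made-up illustrations.

import math

# Eqn. 1: blend the previous estimate with the newly observed loss fraction.
def update_pest(pest: float, numflt: int, ds: int, dr: int) -> float:
    dl = ds - dr  # packets newly calculated as lost
    return (1 - ds / (numflt + ds)) * pest + dl / (numflt + ds)

# Second embodiment: pwin estimates loss over just the last numflt packets;
# pest is then the maximum of pcur and pwin.
def windowed_pest(pcur: float, pprev: float) -> float:
    pwin = (math.e * pcur - pprev) / (math.e - 1)
    return max(pcur, pwin)

# Feedback reports 10 more packets sent, 8 received, with 40 still in flight.
pest = update_pest(pest=0.0, numflt=40, ds=10, dr=8)
print(round(pest, 3))                      # 0.04
print(round(windowed_pest(0.04, 0.0), 3))  # 0.063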
Multi-Flow Delivery
Multiple paths for delivering data are sometimes available between a sender and receiver; e.g., a mobile sender may be connected to multiple Mobile Network Operators (MNOs) and the sender may be able to concurrently send data to a receiver via each MNO. As another example, there are some networks that natively support multiple paths between senders and receivers, e.g., some mesh networks. In these cases, utilizing multiple paths for delivering a data stream from a sender to a receiver, wherein the sender sends a flow of a portion of the encoded packets to the receiver over each such path, can be beneficial. For example, the network characteristics of different paths, especially of paths that include wireless links, can experience intermittent and long-term variations in signal quality and available bandwidth, making it difficult to reliably deliver data over a single path at a consistent rate with minimal data delivery latency. Enabling data delivery utilizing multiple flows over multiple paths can make it possible to reliably deliver data at a consistently higher rate with improved data delivery latency compared to data delivery over a single path. There are also examples where it is advantageous for a sender to send multiple flows when there is only one path known to the sender between the sender and receiver. For example, when each flow is sent using a protocol that uses a data rate control protocol that does not scale well over large round-trip times, e.g., TCP, multiple TCP flows can be used to deliver data over a single path between the sender and receiver. FIG. 10 illustrates an example sender multi-flow overall process 900. The sender process unit (905) is partitioned into sender flow process units (910(0), 910(1), 910(2)) and a sender destination process unit (920). There is one sender flow process shown for each of the flows, where three flows are shown in FIG. 10. Exemplary sender flow process (910(0)) is for flow 0. Sender flow process (910(0)) generates and sends packets that include a flow identifier 0 in the packet headers to distinguish the flow from the other flows sent to the same receiver destination. Sender flow process (910(0)) initializes and manages its parameters independently of the other sender flow processes: for each flow, the sender might maintain a data structure in computer-readable memory such as that depicted in FIG. 12, and for each flow of each block i of the data stream, the sender might maintain a data structure in computer-readable memory such as that depicted in FIG. 13. Sender flow process (910(0)) determines when the next packet can be sent within flow 0, and when it is time to send the next packet within flow 0, sender flow process (910(0)) signals the sender destination process (920) to provide the next packet to send, sender destination process (920) provides the packet to sender flow process (910(0)), and sender flow process (910(0)) adds its flow identifier 0 and seqno0 to the packet header and sends the packet within flow 0 to the receiver. Sender flow process (910(0)) receives feedback from the receiver containing feedback relevant to flow 0, i.e., fseqno0, rseqno0, and, for each active block i, the values of X0i, Y0i, and Z0i, similar to portions of the process described in FIG. 7. FIG. 11 provides a more detailed description 1000 of sender flow process (910(0)) for an exemplary flow 0. In an initialization step, seqno0, fseqno0, and rseqno0 are initialized to zero (1010).
When sender flow process (910(0)) receives a packet to send to flow 0 from the sender destination unit (920), the flow 0 identifier and seqno0 values are added to the packet header of the packet and the packet is sent to flow 0 (1020). If the packet carries encoded data for some block i (1030) and this is the first packet sent for block i (1040), then X0i and Y0i are initialized to zero and Z0i is initialized to one (1050). If the packet carries encoded data for some block i (1030) and this is not the first packet sent for block i (1040), then Z0i is incremented by one (1060). When sender flow process (910(0)) receives feedback (fs0, rs0, F0) from the receiver for flow 0, where fs0 is a feedback global packet sequence number for flow 0 and rs0 is the corresponding feedback global packet count for flow 0 (1070), then fseqno0 is updated to the maximum of fs0 and fseqno0, rseqno0 is updated to the maximum of rs0 and rseqno0, and, for each active block i, Y0i is updated to the number of packets sent for block i carrying encoded data within flow 0 up to the packet carrying fseqno0 (1080). If F0 contains feedback (X0, status) about an active block i that has not already been processed (1085), then X0i is updated to the maximum of X0 and X0i, where X0 is a feedback block packet count for flow 0, and if status is either recovered or abandoned, then statusi is updated to inactive (1090). The value of the estimate of the packet loss probability pest0 for flow 0 is updated, and the value of pest0 and, for each relevant block i, the values of X0i, Y0i, Z0i, and statusi are provided to the sender destination unit (1095). FIG. 14 provides a more detailed description of sender destination process 1100 for parameter updates. The number of blocks nblk is initialized to zero and, for each flow j, the estimate of the packet loss probability pestj for flow j is initialized to zero (1110). When data for a next block becomes available to the sender destination process (920; see FIG. 10), the block number i is set to nblk, statusi is set to active, compi is set to false, Ki is set to zero, and nblk is incremented by one in preparation for the arrival of the next block (1120). When all data for block i is available, then compi is set to true, timeouti is set to the current time plus an increment of time Δ, and Ki and Ri are set according to the size of block i (1130). If at some point the current time is beyond timeouti for block i, then block i is deactivated and statusi is set to inactive (1140).
FIG. 15 provides a more detailed description 1200 of sender destination process (920) for sending packets to flows and receiving feedback from flows. When some sender flow process j indicates that a packet can be sent, the following processing is executed (1210). For each block i for which statusi is active and compi is true, let expreci be the sum over all flows j of (Zji−Yji)*(1−pestj), and let Xi be the sum over all flows j of Xji (1220). The value of expreci is an estimate of how many of the packets in flight for block i across all flows are expected to reach the receiver based on the estimated packet loss probabilities for each flow, and the value of Xi is the number of packets for block i across all flows that have been acknowledged in feedback as arriving at the receiver. If statusi is active and compi is true and expreci is less than Ki−Xi+Ri for block i, then the sendable number of packets for block i is greater than the number of packets sent so far for block i, i.e., another packet can be sent for block i. Let I be the set of block numbers i such that statusi is active and compi is false and there is unsent data for block i, and let I′ be the set of block numbers i such that statusi is active and compi is true and expreci is less than Ki−Xi+Ri (1230). If the union of I and I′ is empty (1240), then there is no encoded data to send for any block at this point in time, and thus the packet provided to sender flow process j to send is a packet with an empty payload (1250). If the union of I and I′ is not empty (1240), then let b be the minimum block number in the union of I and I′; the packet payload pktpay for the packet is set to the encoded data for block b with encoding identifier Zb, the packet header for the packet is set to (j, Kb, Zb), the packet is provided to flow j, and encoding identifier Zb is incremented by one in preparation for sending the next packet for block b (1260). When some sender flow process j indicates that it has new feedback for the sender destination unit (1270), the estimate of the packet loss probability pestj and, for each relevant block i, the values of Xji, Yji, Zji, and statusi are updated based on the feedback (1280). There are many variants of the above methods. For example, instead of the receiver sending feedback individually for each flow to each sender flow process, e.g., to sender flow process (910(0)) for flow 0, to sender flow process (910(1)) for flow 1, and to sender flow process (910(2)) for flow 2, the receiver may send the feedback combined for all flows either to a separate feedback process or directly to the sender destination process (920). As another alternative, the receiver may feed back the number of packets Xi it has received in aggregate for block i over all flows instead of feeding back the individual number of packets for block i received from each flow. In another variant, a packet loss rate pest averaged over all flows may be calculated by the sender destination process (920) and used to determine whether more packets can be sent for each block i. In another variant, the sender destination process (920) is similar to the sender process (110) depicted in FIG. 2 and described in more detail in FIGS. 7-8. One main difference in this embodiment is that the sender destination process interacts with each of the sender flow processes for each of the flows, wherein each sender flow process indicates to the sender destination process when the sender flow process is able to send another packet within its flow, the sender destination process provides the next fully formed packet (formed according to the details described with reference to FIGS. 7-8), and the sender flow process sends this packet directly to its flow. In this variant, the feedback from the receiver is global feedback about all the packets received across all the flows, using processes very similar to those described with reference to FIG. 9, and the feedback is sent directly back to the sender destination process.
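The multi-flow eligibility test can be sketched in Python as follows: expreci sums the expected in-flight arrivals over all flows j, each discounted by that flow's own loss estimate pestj. The numeric values are illustrative assumptions only.

def exprec(flows) -> float:
    # flows: iterable of (Zji, Yji, pestj) tuples for one block i
    return sum((Z - Y) * (1.0 - pest) for Z, Y, pest in flows)

def may_send_more_multiflow(flows, X: int, K: int, R: float) -> bool:
    return exprec(flows) < K - X + R

flows = [(10, 6, 0.05), (8, 5, 0.30)]                 # two flows, differing loss
print(exprec(flows))                                  # 5.9 expected arrivals
print(may_send_more_multiflow(flows, X=9, K=16, R=2)) # True: 5.9 < 16 - 9 + 2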
Some Simulation Results
FIGS. 16-19 illustrate results of a simulation for a stream of equal-sized blocks sent in a single flow from a sender to a receiver. The parameters of the simulation are K, BW, RTT, blks, and a varying packet loss pattern. The value of K is the size of each block in units of packet payloads. The embodiment method used in the simulations shown in FIGS. 16-19 uses R=√K, where R is the number of additional repair packets of data that the sender can proactively send beyond the base packets for each block when there is no packet loss. The embodiment method uses the fourth embodiment for estimating pest described above in the section “Automatic Calculation of Parameters at Sender”, where pest is initialized to zero. FIGS. 16-19 also show, for comparison, results of a retransmission-based method. The retransmission-based method sends the minimal number of packets to reliably deliver each block and delivers each block with the minimum possible block delivery latency among all methods for reliably delivering blocks based on retransmission. Time slots are available at a regular time interval to the sender in the simulation, where at each time slot the sender can send exactly one packet. For i = 0, . . . , blks−1, block i becomes available to the sender for transmission after i·BW time slots, where blks is the number of blocks delivered in the simulation and the value of BW provides a constraint on the number of time slots available to the sender to deliver each block (this is essentially a bandwidth constraint). The value of BW must be at least K, and BW needs to be larger than K when there is packet loss or when more than K packets are sent proactively for each block. Packets sent from the sender to the receiver are lost at a packet loss rate that is varied over the duration of a simulation. There is no packet loss in the feedback from the receiver to the sender in the simulations. The value of RTT is the number of time slots between when the sender sends a packet and when feedback is received at the sender that was generated at the receiver after having received that packet or a subsequently sent packet. FIGS. 16-19 show simulation results for the setting K=5,000, BW=8,000, RTT=8,000, blks=100. In each chart shown in the figures, moving from left to right on the X-axis corresponds to the passing of time, displayed on the charts in units of block availability, where the number of time slots for sending packets between two consecutive blocks is BW. In FIG. 16, the Y-axis of plot 1300 corresponds to the packet loss rates at each point in time, where the line (1310) corresponds to the packet loss rate set in the simulation. The simulation randomly loses the packet in each time slot with its associated packet loss probability (1310), which determines which packets are delivered to the receiver. The exact same packet loss pattern is used for the embodiment method and for the retransmission-based method. The line (1320) and the line (1330) correspond to the embodiment method packet loss estimates pcur and pest in the fourth embodiment described above, respectively. FIG. 17 shows a magnified portion of the lines (1310, 1320, 1330) shown in FIG. 16. In FIG. 18, the Y-axis of plot 1400 corresponds to the number of sent packets for each block to reliably deliver each block from the sender to the receiver. The line (1410) shows the results for the retransmission-based method, whereas the line (1420) shows the results for the embodiment method. In FIG. 19, the Y-axis of plot 1500 corresponds to the number of time slots between when a block becomes available at the sender and when the block is recovered at the receiver, i.e., this shows the block delivery latency. The RTT is assumed to be evenly split between the sender-to-receiver data delivery path and the receiver-to-sender feedback path, i.e., a sent packet arrives at the receiver (if it is not lost) RTT/2 packet time slots after it is sent, and thus the minimal possible block delivery latency is K+RTT/2=9,000.
The line (1510) shows the block delivery latency for the retransmission-based method, whereas the line (1520) shows the block delivery latency for the embodiment method. The minimal possible block delivery latency is achieved for the retransmission-based method and for the embodiment method when there is no packet loss, e.g., at the beginning of the simulation. However, the block delivery latency is significantly above the minimal possible block delivery latency for the retransmission-based method even when there is minimal packet loss, and the block delivery latency is a large multiple of the minimal possible block delivery latency (and quite variable) as the packet loss rate grows. On the other hand, the block delivery latency consistently remains close to the minimal possible block delivery latency for the embodiment method as the packet loss rate grows.

FIG. 20 is a simplified functional block diagram of a storage device 2048 having an application that can be accessed and executed by a processor in a computer system as might be part of embodiments of sending systems and/or receiving systems and/or a computer system that does data or signal processing. FIG. 20 also illustrates an example of memory elements that might be used by a processor to implement elements of the embodiments described herein. In some embodiments, the data structures are used by various components and tools, some of which are described in more detail herein. The data structures and program code used to operate on the data structures may be provided and/or carried by a transitory computer readable medium, e.g., a transmission medium such as in the form of a signal transmitted over a network. For example, where a functional block is referenced, it might be implemented as program code stored in memory. The application can be one or more of the applications described herein, running on servers, clients or other platforms or devices and might represent memory of one of the clients and/or servers illustrated elsewhere. Storage device 2048 can be one or more memory devices that can be accessed by a processor, and storage device 2048 can have stored thereon application code 2050 that can be configured to store one or more processor readable instructions, in the form of write-only memory and/or writable memory. The application code 2050 can include application logic 2052, library functions 2054, and file I/O functions 2056 associated with the application. The memory elements of FIG. 20 might be used for a server or computer that interfaces with a user, generates data, and/or manages other aspects of a process described herein.

Storage device 2048 can also include application variables 2062 that can include one or more storage locations configured to receive input variables 2064. The application variables 2062 can include variables that are generated by the application or otherwise local to the application. The application variables 2062 can be generated, for example, from data retrieved from an external source, such as a user or an external device or application. The processor can execute the application code 2050 to generate the application variables 2062 provided to storage device 2048. Application variables 2062 might include operational details needed to perform the functions described herein. Storage device 2048 can include storage for databases and other data described herein. One or more memory locations can be configured to store device data 2066. Device data 2066 can include data that is sourced by an external source, such as a user or an external device.
Device data 2066 can include, for example, records being passed between servers prior to being transmitted or after being received. Other data 2068 might also be supplied. Storage device 2048 can also include a log file 2080 having one or more storage locations 2084 configured to store results of the application or inputs provided to the application. For example, the log file 2080 can be configured to store a history of actions, alerts, error messages, and the like.

A computer system may implement the techniques described herein using customized hard-wired logic, one or more ASICs or FPGAs, firmware and/or program logic which in combination with the computer system causes or programs the computer system to be a special-purpose machine. According to one embodiment, the techniques herein are performed by a computer system in response to a processor executing one or more sequences of one or more instructions contained in memory, such as described in reference to FIG. 20. Such instructions may be read into a main memory from another storage medium, such as a storage device. Execution of the sequences of instructions contained in the main memory might cause the processor to perform the process steps described herein. In alternative embodiments, hard-wired circuitry may be used in place of or in combination with software instructions.

The term “storage media” as used herein refers to any non-transitory media that store data and/or instructions that cause a machine to operate in a specific fashion. Such storage media may include non-volatile media and/or volatile media. Non-volatile media includes, for example, optical or magnetic disks. Storage media is distinct from but may be used in conjunction with transmission media. Transmission media participates in transferring information between storage media. For example, transmission media includes coaxial cables, copper wire, and fiber optics, including the wires that comprise a bus. Transmission media can also take the form of acoustic or light waves, such as those generated during radio-wave and infra-red data communications. Various forms of media may be involved in carrying one or more sequences of one or more instructions to a processor for execution. For example, the instructions may initially be carried on a magnetic disk or solid-state drive of a remote computer. The remote computer can load the instructions into its dynamic memory and send the instructions over a network connection.
A method of transmitting data of a data stream reliably from a sender to a receiver, wherein reliability of transmission is indicated by a confirmation from the receiver to the sender that the data is considered received, might comprise obtaining the data of the data stream to be sent from a source device to a destination device, wherein the destination device is configured to receive data from the receiver in a presumed reliable form, organizing the data into a sequence of one or more blocks of data, each block having a block identifier associated therewith, encoding a block of the one or more blocks of data into a sequence of packets for the block, organizing data of a packet of the sequence of packets for a block into a packet header and a packet payload, wherein the packet is to be processed such that it is either received correctly at the receiver or discarded at the receiver if it cannot be determined to be entirely recovered correctly, including, in the packet header of the packet, a first indication and a second indication, wherein the first indication comprises a block size and a block identifier of a block and an encoding identifier from which the packet payload was derived and the second indication comprises a global packet sequence number that uniquely identifies the packet relative to other packets of the data stream, including, in the packet payload of the packet, encoded data derived from the block of data identified by the block size and the block identifier and the encoding identifier included in the packet header, determining a sendable number of packets for the block, sending the packet to the receiver for the block if the total number of packets sent so far from the sequence of packets for the block is less than the sendable number of packets for the block, receiving, from the receiver, a third indication of a largest global packet sequence number among packets received correctly at the receiver and a fourth indication of a total number of packets received correctly, determining, at the sender, the sendable number of packets for the block, at least in part, based on the third indication and the fourth indication, wherein packets of the sequence of packets for the block are interchangeable with packets lost from the plurality of packets of the sequence of packets for the block to recover the block. According to some embodiments, the techniques described herein are implemented by one or more generalized computing systems programmed to perform the techniques pursuant to program instructions in firmware, memory, other storage, or a combination. Special-purpose computing devices may be used, such as desktop computer systems, portable computer systems, handheld devices, networking devices or any other device that incorporates hard-wired and/or program logic to implement the techniques. One embodiment might include a carrier medium carrying data that includes data having been processed by the methods described herein. The carrier medium can comprise any medium suitable for carrying the data, including a storage medium, e.g., solid-state memory, an optical disk or a magnetic disk, or a transient medium, e.g., a signal carrying the data such as a signal transmitted over a network, a digital signal, a radio frequency signal, an acoustic signal, an optical signal or an electrical signal. Operations of processes described herein can be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context. 
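Referring back to the packet fields recited in the method above, the following small sketch restates them as data structures. The field names (flow_id, block_id, and so on) are illustrative assumptions; the text specifies the indications but not a concrete layout or wire format.

```python
from dataclasses import dataclass

@dataclass
class PacketHeader:
    flow_id: int      # the flow the packet belongs to
    block_size: int   # first indication: block size, ...
    block_id: int     # ... block identifier, ...
    encoding_id: int  # ... and encoding identifier for the payload
    global_seq: int   # second indication: sequence number unique across the stream

@dataclass
class Packet:
    header: PacketHeader
    payload: bytes    # encoded data derived from the identified block

@dataclass
class Feedback:
    max_global_seq: int  # third indication: largest global sequence number received
    total_received: int  # fourth indication: total packets received correctly
```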
Processes described herein (or variations and/or combinations thereof) may be performed under the control of one or more computer systems configured with executable instructions and may be implemented as code (e.g., executable instructions, one or more computer programs or one or more applications) executing collectively on one or more processors, by hardware or combinations thereof. The code may be stored on a computer-readable storage medium, for example, in the form of a computer program comprising a plurality of instructions executable by one or more processors. The computer-readable storage medium may be non-transitory. The code may also be provided and/or carried by a transitory computer readable medium, e.g., a transmission medium such as in the form of a signal transmitted over a network.

Conjunctive language, such as phrases of the form “at least one of A, B, and C,” or “at least one of A, B and C,” unless specifically stated otherwise or otherwise clearly contradicted by context, is otherwise understood with the context as used in general to present that an item, term, etc., may be either A or B or C, or any nonempty subset of the set of A and B and C. For instance, in the illustrative example of a set having three members, the conjunctive phrases “at least one of A, B, and C” and “at least one of A, B and C” refer to any of the following sets: {A}, {B}, {C}, {A, B}, {A, C}, {B, C}, {A, B, C}. Thus, such conjunctive language is not generally intended to imply that certain embodiments require at least one of A, at least one of B, and at least one of C each to be present. The use of examples, or exemplary language (e.g., “such as”) provided herein, is intended merely to better illuminate embodiments of the invention and does not pose a limitation on the scope of the invention unless otherwise claimed. No language in the specification should be construed as indicating any non-claimed element as essential to the practice of the invention.

In the foregoing specification, embodiments of the invention have been described with reference to numerous specific details that may vary from implementation to implementation. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. The sole and exclusive indicator of the scope of the invention, and what is intended by the applicants to be the scope of the invention, is the literal and equivalent scope of the set of claims that issue from this application, in the specific form in which such claims issue, including any subsequent correction. Further embodiments can be envisioned by one of ordinary skill in the art after reading this disclosure. In other embodiments, combinations or sub-combinations of the above-disclosed invention can be advantageously made. The example arrangements of components are shown for purposes of illustration, and combinations, additions, re-arrangements, and the like are contemplated in alternative embodiments of the present invention. Thus, while the invention has been described with respect to exemplary embodiments, one skilled in the art will recognize that numerous modifications are possible. For example, the processes described herein may be implemented using hardware components, software components, and/or any combination thereof.
It will, however, be evident that various modifications and changes may be made thereunto without departing from the broader spirit and scope of the invention as set forth in the claims and that the invention is intended to cover all modifications and equivalents within the scope of the following claims. All references, including publications, patent applications, and patents, cited herein are hereby incorporated by reference to the same extent as if each reference were individually and specifically indicated to be incorporated by reference and were set forth in its entirety herein.
85,580
11863318
DETAILED DESCRIPTION

As discussed above, client devices, applications, server computers, etc., may store data on data storage devices and/or data storage systems. The client devices, applications, server computers, etc., may access the data to perform various functions, operations, and actions. The data storage systems may use solid state drives (SSDs, flash memory) as data storage devices. SSDs provide high performance but at a higher cost. For example, SSDs may be an order of magnitude more expensive than hard disk drives (HDDs, magnetic disk drives).

The data that is accessed by client devices, servers, applications, etc., may be stored in a public cloud (e.g., cloud storage). Public cloud storage may be expensive, particularly if large amounts of data are stored in the public cloud. For example, a cloud storage provider may charge for both storing the data and for data traffic (e.g., for reading and/or writing the data to the cloud). In addition, because many public clouds use virtual environments (e.g., virtual machines, containers, etc.), performance may not be guaranteed in a public cloud. Storing data in a public cloud may also result in privacy issues when dealing with sensitive customer data.

Private clouds are another option for storing data. For example, a company may buy hardware/software to set up its own private cloud for storing data. Private clouds typically use x86-based hardware/software solutions, which may be more expensive and may not achieve good performance. In addition, storage appliances used in data centers may hit performance bottlenecks due to x86 server architectures that introduce contention on various resources such as the central processing unit (CPU), memory, and network/disk controllers. This often results in I/O throughput saturation of around 10 MBps per hard drive. Other cheap alternatives that are based on cold storage approaches exist (e.g., storing on tapes or Write-Once Read-Many (WORM) optical disks), but these alternatives are best used for data archival and not for running real-time big data analytics.

In one embodiment, a storage microserver architecture is provided. The storage microserver architecture may result in significantly lower cost (e.g., both hardware costs and power consumption) and may have similar or better I/O performance and/or reliability when compared to commodity x86 servers. In one embodiment, the microservers (e.g., the storage microservers) may use Advanced RISC Machines (ARM) processors and/or coprocessors. The ARM processors/coprocessors may be integrated onto a system on a chip (SoC). The system architecture (e.g., the storage microserver architecture) may use software modifications that are optimized for ARM-based microservers. This may allow the microservers to deliver significantly higher I/O performance when compared to alternatives that run generic unmodified software customized for x86 servers. In addition, the system architecture may provide a fully integrated storage solution that includes backend storage systems, management, and monitoring systems. The system architecture and/or the microservers described herein may reduce costs due to lower hardware costs (e.g., ARM processors may be cheaper than x86 processors) and reduced power consumption. The microservers may also provide improved I/O performance that can lead to improved user experience and reduced downtime due to faster recovery and better reliability.
The microservers may have a hardware architecture that deploys commodity disks in a more compact chassis. The microservers may help avoid the CPU, memory bus, and I/O contention that plague traditional storage solutions. Various software on the microservers, such as the kernel, device drivers, and an accelerator (e.g., an I/O accelerator), may be designed and/or configured for ARM-based microservers. The system architecture may also include middleware and cloud-based infrastructure to handle object storage, block storage, file system storage, database storage, etc.

FIG. 1 is a block diagram that illustrates an example technology stack 100, in accordance with some embodiments of the present disclosure. The technology stack includes the following layers: the microserver, the OS, data I/O, storage middleware, and cloud. As discussed above, at the microserver layer, ARM-based storage microservers may be used to provide reliable, high-performance storage. A microserver may include various hardware 110. For example, a single microserver may include an ARM based SoC. The SoC may include an ARM processor (e.g., a primary processing device) and one or more ARM coprocessors (e.g., one or more secondary processing devices). ARM processors may include 32-bit and 64-bit RISC processors. Other examples of hardware 110 may include dual 1 Gbps Ethernet network interface cards (NICs) for redundancy and multiple disk drives (e.g., 2 HDDs). The microservers can be stacked together into a chassis configuration and inter-connected via an Ethernet switch. The microserver may be cheaper because ARM processors may be cheaper than traditional x86 processors/hardware. In addition, ARM-based processors may consume less power than processors based on an x86 chip-set. Furthermore, an ARM based SoC may use less board space (e.g., may use smaller printed circuit boards (PCBs)). This results in higher storage density and physical space efficiency in data centers. In addition, because a microserver may include fewer disk drives, the failure of a microserver impacts fewer disks (e.g., one or two disks) when compared with an x86 server, which may often have 8, 16, or even up to 72 disk drives. ARM based SoC boards may also include fewer hardware components than x86 servers (e.g., no CPU fan, no video card, no Northbridge/Southbridge chips, etc.). Fewer components may increase the reliability of the microserver.

At the OS layer, a microserver may use a Linux OS with enhancements to help optimize its performance for ARM-based processors. The drivers 122 may include special device drivers that are used to optimize raw I/O performance of disk and network controllers.

At the data I/O layer, there may be additional software that communicates with the hardware/software of the microserver. For example, an accelerator component 131 may speed up I/O performance. Various network protocols 132 may be used to reduce communication overhead between the microserver and other computing devices. Optimizations 133 may include storage related optimizations (memory zero-copy, thread parallelization and pipelining, data checksum computation offload, and accurate resource scheduling) to improve I/O performance (e.g., performance when reading and/or writing data). In one embodiment, the accelerator component 131 may span across kernel space and user space and may bridge the I/O gap between storage software and the capacities provided by lower-level hardware.
The accelerator component 131 may maximize resource utilization on the microserver while avoiding bottlenecks in CPU cycles, memory access, disk read/write, and network packet send/receive. For example, the accelerator component 131 may run multiple threads to parallelize the access to CPU, memory, network, and disk. The accelerator component 131 may also control resource allocation and deallocation, while coordinating resource occupation among threads. To reduce CPU cycles and memory footprint, the accelerator component 131 may adopt zero-copy techniques to create memory mapped directly into the kernel so as to avoid copying memory between kernel and user space and device buffers. For more efficient disk reads/writes, the accelerator component 131 utilizes raw Linux disk I/O APIs with the aid of Direct I/O to bypass kernel cache management.

A data checksum (e.g., an MD5 checksum) is often stored along with the data's metadata in order to verify data correctness. Data checksum computation is CPU-intensive. To accelerate checksum computation, the accelerator component 131 leverages customized checksum algorithms that parallelize computation among data blocks. In addition, the accelerator component 131 offloads the checksum computation to an ARM co-processor called the Cryptographic Engine and Security Accelerator (CESA). To overcome network bottlenecks, the accelerator component 131 may fine-tune network packet size, buffer size, number of queues, transmission control protocol (TCP) slow start, and TCP transfer window size, and exploit NIC features such as large segment offload and jumbo frames. To improve the I/O performance on small files, the small files are batched together to reduce the overhead of network connection creation, disk seeking, and CPU interrupts. Other optimizations performed by the accelerator component 131 may include a memory/object pool, a thread pool, a network connection pool, a customized network protocol (e.g., a customized version of the user datagram protocol (UDP)) to reduce communication overhead, and on-the-fly data compression/decompression to reduce data size.

In one embodiment, the accelerator component 131 has the choice of using TCP or UDP as the transport layer. TCP has connection setup and teardown overhead and is the default option in highly reliable networks with low loss rates. In a network with poor network conditions (e.g., a lossy network), the accelerator component 131 may use a reliable variant of UDP which keeps sequence numbers and retransmits packets if acknowledgements are not received after timeouts. The amount of data sent using UDP may be adjusted adaptively based on how many packets are received successfully over a time window. The accelerator component 131 may also use erasure coding to transmit data. Each object or block is broken into smaller encoded blocks (e.g., codewords, symbols) and transmitted in UDP packets. Only a subset of the encoded blocks is needed to recover the original block.

At the storage middleware layer, the system may provide various types of data storage capabilities. For example, the microserver may provide object storage 141 (e.g., object-based storage). In another example, the microserver may provide block storage 142 (e.g., block level or block-based storage). In a further example, the microserver may provide file storage 143 (e.g., file-based storage). In another example, the microserver may provide database (DB) storage 144 (e.g., table based storage). At the cloud layer, the system may include cloud-based infrastructure to manage a large number of microservers.
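The reliable-UDP behavior described above (sequence numbers, timeout-driven retransmission, and a send amount adapted to recent delivery) can be sketched as follows. This is a minimal Python illustration under assumed names and policy constants (the class name, the 0.2 s retransmit timeout, and the 1 s measurement window are not specified by the text):

```python
import socket
import time

class ReliableUdpSender:
    """Sketch of a reliable UDP variant: sequence numbers, timeout-based
    retransmission, and a send budget adapted to the fraction of packets
    acknowledged over a recent time window. Illustrative only."""

    def __init__(self, peer, timeout=0.2, window=1.0):
        self.sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        self.peer = peer           # (host, port) of the receiver
        self.timeout = timeout     # retransmit if unacked for this long
        self.window = window       # seconds over which delivery is measured
        self.next_seq = 0
        self.unacked = {}          # seq -> (payload, last send time)
        self.history = []          # (time, delivered?) samples in the window

    def send(self, payload: bytes):
        seq, self.next_seq = self.next_seq, self.next_seq + 1
        self.sock.sendto(seq.to_bytes(8, "big") + payload, self.peer)
        self.unacked[seq] = (payload, time.monotonic())

    def on_ack(self, seq: int):
        if seq in self.unacked:
            del self.unacked[seq]
            self.history.append((time.monotonic(), True))

    def retransmit_due(self):
        now = time.monotonic()
        for seq, (payload, sent_at) in list(self.unacked.items()):
            if now - sent_at > self.timeout:
                self.sock.sendto(seq.to_bytes(8, "big") + payload, self.peer)
                self.unacked[seq] = (payload, now)
                self.history.append((now, False))  # count as a loss sample

    def send_budget(self, base_rate: int) -> int:
        # Scale how much may be sent next by the recent delivery ratio.
        now = time.monotonic()
        self.history = [(t, ok) for t, ok in self.history if now - t < self.window]
        if not self.history:
            return base_rate
        ratio = sum(ok for _, ok in self.history) / len(self.history)
        return max(1, int(base_rate * ratio))
```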
The infrastructure consists of management 151 (e.g., management tools/modules), monitoring 152 (e.g., monitoring tools/modules), and usability 153 (e.g., usability tools/modules). The management tools/modules automate cluster deployment, expansion, repairing, and upgrading. The monitoring tools/modules may be responsible for collecting, storing, and visualizing data metrics of system status.

FIG. 2 is a block diagram that illustrates an example system architecture 200, in accordance with some embodiments of the present disclosure. The system architecture 200 includes data storage systems 250, servers 240, and client devices 210. The data storage systems 250 may each include one or more processing devices (e.g., a primary processing device and/or one or more secondary processing devices), an accelerator component 131, and one or more data storage devices 340 (e.g., an HDD). The data storage systems may be deployed within a private cloud (e.g., located on leased racks within a co-location facility or located on premise). The data storage systems may be ARM-based storage microservers. As discussed above, each data storage system 250 may use low-level hardware/software optimizations to speed up I/O performance on the ARM hardware within the data storage system 250.

The client devices 210 may access data stored on the data storage systems 250 via application programming interfaces (APIs) of the storage middleware. For example, in an object store, the API may perform PUT/GET/DELETE of objects using HTTP messages. These requests are handled by the servers 240, which may be referred to as access servers. The servers 240 may receive requests from the client devices 210 and may transform the requests into actual read/write operations/instructions on the data storage systems 250. The servers 240 may also forward data between the client devices 210 and the data storage systems 250.

As discussed above, a cloud-based infrastructure may be used to automate storage cluster deployment, configuration, maintenance, and monitoring. The cloud-based management infrastructure automates all aspects of storage cluster management, including cluster expansion (adding storage capability), repairing (replacing failed hardware/software components), upgrading (changing hardware/software components), and decommissioning (removing outdated hardware). As data storage systems 250 are added and removed, the management system may automatically deal with initialization, rebalancing, and re-distribution among all data storage systems 250. Moreover, the management system deploys a cloud orchestration infrastructure (e.g., OpenStack). The management system may also help ensure that the cluster can tolerate faults with high availability, so that any component failure will not bring down the system architecture 200.

Monitoring may be an important function in the system architecture 200. This allows the system architecture 200 (and/or users) to take timely actions whenever there are disk failures, chassis overheating, power outages, network disconnections, or storage capacity depletion in a production environment. The system architecture may deploy a scale-out and highly available monitoring system for the data storage systems 250. The monitoring system uses low-overhead status collecting software to extract data metrics from the data storage systems 250, and then stores them in a data store with built-in data redundancy. Further, visualizations may be generated based on the collected data metrics. Alerting notifications for anomalies are sent out via phone call, email, SMS, and/or other types of messages.
In one embodiment, the management system includes a caching tier to improve the performance of data reads by prefetching data into a random access memory (RAM) array, a load balancing substrate for backend microservers to avoid any hot spots, and a web-based UI dashboard that displays current statuses, metrics, etc., in a single-panel mode for easy access. The cloud-based infrastructure may enable users to manage and monitor the data storage systems 250 from a web interface. The cloud-based infrastructure may be located in-house within a user's servers. In one embodiment, the cloud-based infrastructure may be based on OpenStack. In other embodiments, the cloud-based infrastructure may operate with any cloud orchestration system. The cloud-based infrastructure may provide HTTP RESTful APIs for object PUT/GET/DELETE.

The accelerator component 131 may boost the I/O performance of the data storage systems 250. The accelerator component 131 may also boost the speed of data recovery if any data storage system 250 fails. The network communication protocol between the servers 240 and the data storage systems 250 has been modified so that the I/O performance boost in the data storage systems 250 can be leveraged by the servers 240 (e.g., the access servers). This may involve replacing current HTTP RESTful calls with the customized protocol used by the accelerator component 131. The architecture of the servers 240 may be based on consistent hashing of an object's keys. This enables the storage system architecture to scale linearly. By adding more data storage systems 250, the system architecture's aggregate I/O performance grows linearly. The system architecture 200 can simultaneously provide diverse storage including object storage, block storage, file storage, etc.

The data storage systems 250 may use error correction codes (e.g., erasure codes) both to transmit data and to store data within the data storage devices 340. For example, codewords may be spread across multiple data storage devices 340 and/or across data storage systems 250. By using erasure coding, the storage capacity requirement can be significantly reduced compared to using copies. In addition, erasure coding allows the client devices 210 to decode and/or reconstruct data even if some of the codewords are not received.

FIG. 3 is a block diagram of an example data storage system 250, in accordance with some embodiments of the present disclosure. In one embodiment, the data storage system 250 may be a server (e.g., a microserver, a storage server, etc.) that may provide storage capabilities for computing devices. The data storage system 250 includes a network interface 350, an offloading component 330, a primary processing device 310, a secondary processing device 320, and a data storage device 340.

The network interface 350 may be one or more devices, components, modules, interfaces, ports, etc., that may receive data from or transmit data to one or more networks or devices. For example, the network interface 350 may be a network interface card (NIC), a wireless network card, an Ethernet port, etc. In one embodiment, a network interface 350 may be an ingress interface that receives data (e.g., messages, packets, frames, etc.). In another embodiment, a network interface 350 may be an egress interface that transmits data. In a further embodiment, a network interface 350 may be both an egress interface and an ingress interface. The network interface 350 may be coupled to a network.
The network may be a public network (e.g., the Internet), a private network (e.g., a local area network (LAN) or wide area network (WAN)), or a combination thereof. In one embodiment, a network may include a wired or a wireless infrastructure, which may be provided by one or more wireless communications systems, such as a wireless fidelity (Wi-Fi) hotspot connected with the network and/or a wireless carrier system that can be implemented using various data processing equipment, communication towers (e.g., cell towers), etc. The network interface may also be referred to as a communication interface.

In one embodiment, the offloading component 330 may be hardware (e.g., a circuit, processing logic, a programmable logic device, a processing device, etc.), firmware, software, or a combination thereof that may provide data to one or more of the primary processing device 310 and the secondary processing device 320. The offloading component 330 may be part of an accelerator component (e.g., the accelerator component 131) illustrated in FIGS. 1 and 2.

In one embodiment, the data storage device 340 may be a device, component, or module that is capable of storing data. For example, the data storage device 340 may allow data to be read from a memory of the data storage device 340 (e.g., one or more magnetic disks, one or more pages, blocks, sectors, tracks, etc.), may allow data to be written to the memory, and/or may allow data to be modified. In some embodiments, the data storage device 340 may be a hard disk drive (HDD). For example, the data storage device 340 may be a magnetic disk drive. Using a magnetic disk drive may reduce the costs to manufacture and/or operate the data storage system 250. For example, a magnetic disk drive may not deteriorate, degrade, break, etc., as quickly as other types of drives (e.g., a solid state drive (SSD)). In other embodiments, the data storage device 340 may be an SSD or some other type of storage device that uses flash memory.

In one embodiment, the primary processing device 310 may be a general purpose processing device. A general purpose processing device may be a processing device that is configured to perform general operations, rather than a specific type of operations. For example, an x86 processor, an advanced reduced instruction set computing (RISC) machine (ARM) processor, etc., are examples of general purpose processors.

In one embodiment, the secondary processing device 320 may be a device that may be configured to perform various other operations to supplement the primary processing device 310. For example, the secondary processing device 320 may be a cryptographic engine that is configured to perform cryptographic operations (e.g., encryption and/or decryption operations) on data (e.g., encrypt data and/or decrypt data). One example of a cryptographic engine may be a cryptographic engine and security accelerator (CESA). Because the secondary processing device 320 may be capable of performing cryptographic operations, this may allow the secondary processing device 320 to be used and/or adapted for generating checksums more easily. In other embodiments, the secondary processing device 320 may perform various other tasks, operations, instructions, etc. For example, the secondary processing device 320 may perform floating point operations, vector operations, matrix operations, tensor operations, graphic operations, etc., to supplement the primary processing device 310.
In one embodiment, the primary processing device 310 may be an advanced reduced instruction set computing (RISC) machine (ARM) processor. The ARM processor may be a lower power general purpose processor when compared to other general purpose processors, such as x86 processors. The secondary processing device 320 may be an ARM coprocessor. By using ARM processors and/or ARM coprocessors, the data storage system 250 may use less power and/or operate more efficiently than if x86 processors were used in the data storage system 250.

In one embodiment, one or more of the primary processing device 310 and the secondary processing device 320 may be part of a system on a chip (SoC). A SoC may be a device (e.g., hardware, circuits, etc.) that integrates some or all of the components of a computing device on a single substrate or chip (e.g., a single microchip). For example, the processing device, primary memory (e.g., random access memory), input/output ports (e.g., bus ports), etc., may be integrated onto a single substrate, single die, single chip, etc. The SoC may improve the performance of the data storage system 250 and/or reduce the power consumption of the data storage system 250. The SoC may also allow for a reduction in the size of the data storage system 250 when compared to traditional systems which may use an x86 processor, a motherboard, and various peripheral devices.

In one embodiment, the offloading component 330 may receive data (e.g., a set of data) via the network interface 350 (e.g., a communication interface). For example, the network interface 350 may receive a set of data transmitted by another computing device. The network interface 350 and the computing device may be communicatively coupled via one or more networks. The network interface 350 may provide the data to the offloading component 330. In other embodiments, the network interface 350 may provide the data (received from the computing device) directly to one or more of the primary processing device 310 and the secondary processing device 320.

As discussed above, a checksum (e.g., checksum data, parity bits, parity data, etc.) may be used to detect errors in data that is stored on the data storage system 250 (e.g., on the data storage device 340). For example, if the data storage device 340 has an issue, error, malfunction, or problem, some of the data stored in the data storage device 340 may have errors or may become corrupted. A checksum for a piece of data may allow a computing device and/or a processing device to determine whether there are any errors in the data.

In one embodiment, the offloading component 330 may determine whether the primary processing device 310 should be used for performing checksum computations for the data that is received via the network interface 350 (e.g., whether the primary processing device 310 should be used to calculate, compute, determine, generate, etc., a checksum). For example, the data storage system 250 may receive data from computing devices that want to store their data on the data storage system 250. The offloading component 330 may analyze various factors, parameters, criteria, thresholds, etc., to determine whether the primary processing device 310 should generate the checksum for a set of data (e.g., one or more blocks, pages, etc., of data). The various factors, parameters, criteria, thresholds, etc., for determining whether the primary processing device 310 should generate the checksum for the set of data are discussed in more detail below.
In one embodiment, the offloading component 330 may provide the first set of data to the primary processing device 310 in response to determining that the primary processing device 310 should be used for performing checksum computations for the first set of data. For example, based on various factors, parameters, criteria, thresholds, etc., the offloading component 330 may forward the data (that was received from the computing device via the network interface 350) to the primary processing device 310 so that the primary processing device 310 can generate a checksum for the data. In another embodiment, the offloading component 330 may instruct and/or cause the network interface 350 to forward the data to the primary processing device 310. For example, the network interface 350 may forward the data directly to the primary processing device 310.

In one embodiment, the offloading component 330 may provide the first set of data to the secondary processing device 320 in response to determining that the primary processing device 310 should not be used for performing checksum computations for the first set of data. For example, based on various factors, parameters, criteria, thresholds, etc., the offloading component 330 may forward the data (that was received from the computing device via the network interface 350) to the secondary processing device 320 so that the secondary processing device 320 can generate a checksum for the data. In another embodiment, the offloading component 330 may instruct and/or cause the network interface 350 to forward the data to the secondary processing device 320. For example, the network interface 350 may forward the data directly to the secondary processing device 320.

In one embodiment, the offloading component 330 may provide a first portion of the data (received from the computing device) to the primary processing device 310 and a second portion of the data to the secondary processing device 320. For example, based on various factors, parameters, criteria, thresholds, etc., the offloading component 330 may determine that the primary processing device 310 should generate a first set of checksums for the first portion of the data and that the secondary processing device 320 should generate a second set of checksums for the second portion of the data. This may spread the work and/or load of generating checksums across the primary processing device 310 and the secondary processing device 320, which may allow the checksums to be generated more quickly, efficiently, etc. In another embodiment, the offloading component 330 may instruct and/or cause the network interface 350 to forward the first portion of the data to the primary processing device 310 and the second portion of the data to the secondary processing device 320. For example, the network interface 350 may directly forward the first portion of the data to the primary processing device 310 and may directly forward the second portion of the data to the secondary processing device 320.

In one embodiment, one of the factors, parameters, criteria, or thresholds that the offloading component 330 may analyze may be the number of checksums generated by the primary processing device 310 over a period of time. The offloading component 330 may determine, analyze, calculate, etc., the number of checksums that the primary processing device 310 generated (e.g., calculated, determined, etc.) over a period of time.
For example, the offloading component 330 may determine the number of checksums (e.g., the amount of checksum data) that the primary processing device 310 has generated in the last second, last 5 seconds, or some other appropriate period of time. If the number of checksums generated by the primary processing device 310 over a period of time is below a threshold number (e.g., is below a threshold number of checksums), the offloading component 330 may determine that the secondary processing device 320 should generate the checksums for the received data. For example, if the number of checksums generated by the primary processing device 310 over a period of time is below a threshold number, this may indicate that the primary processing device 310 is busy performing other operations and/or functions, and may not be able to generate checksums for the received data quickly and/or efficiently.

In one embodiment, one of the factors, parameters, criteria, or thresholds that the offloading component 330 may analyze may be a current load of the primary processing device 310. The offloading component 330 may determine, analyze, calculate, etc., the number of instructions, operations, etc., that the primary processing device 310 is currently executing and/or that are pending execution. For example, the offloading component 330 may analyze an instruction queue to determine the number of instructions and/or operations that are pending execution by the primary processing device 310. If the number of instructions and/or operations in the instruction queue is above a threshold number (e.g., is above a threshold number of instructions), the offloading component 330 may determine that the secondary processing device 320 should generate the checksums for the received data. For example, if the number of instructions and/or operations in the instruction queue is above a threshold number, this may indicate that the primary processing device 310 is busy and/or will be busy performing other operations and/or functions, and may not be able to generate checksums for the received data quickly and/or efficiently.

In one embodiment, one of the factors, parameters, criteria, or thresholds that the offloading component 330 may analyze may be a current data rate of data that is received via the network interface 350. The offloading component 330 may determine a current data rate of data received via the network interface 350. For example, the offloading component 330 may determine the amount of data that has been received from other computing devices via the network interface 350 over a period of time. Based on the current data rate, the offloading component 330 may determine whether the data should be provided to the primary processing device 310 or the secondary processing device 320. For example, if the data rate is above a threshold data rate, the offloading component 330 may determine that the primary processing device 310 is unable to generate checksums for the data and may provide the data to the secondary processing device 320. If the data rate is below the threshold, the offloading component 330 may determine that the primary processing device 310 is capable of generating the checksums and may provide the data to the primary processing device 310.

In one embodiment, one of the factors, parameters, criteria, or thresholds that the offloading component 330 may analyze may be the current time. For example, the offloading component 330 may determine a current time.
The offloading component 330 may determine that different amounts of data are received by the data storage system 250 at different times of the day. Based on the current time (e.g., the time of day), the offloading component 330 may determine whether the primary processing device 310 should be used for performing checksum computations. For example, at the times of the day when a higher amount of data is expected, the offloading component 330 may forward some or all of the data to the secondary processing device 320 to generate checksums.

In one embodiment, the primary processing device 310 and/or the offloading component 330 may store the data (that was received from a computing device via the network interface 350) in the data storage device 340. For example, the primary processing device 310 and/or the offloading component 330 may write the data to the data storage device 340 (e.g., a hard disk drive (HDD), a magnetic disk drive, etc.). The primary processing device 310 and/or the offloading component 330 may also store the checksum for the data in the data storage device 340.

In one embodiment, the data storage system 250 (e.g., the primary processing device 310 and/or the offloading component 330) may receive a request to access a first set of data. For example, the same computing device and/or another computing device may transmit a request to access the data to the data storage system 250. The request may be received via the network interface 350 and may be provided to the primary processing device 310 and/or the offloading component 330. The primary processing device 310 and/or the offloading component 330 may access and/or retrieve the first set of data and a first checksum (for the first set of data) from the data storage device 340. For example, the primary processing device 310 and/or the offloading component 330 may read the first set of data and the checksum from the data storage device 340. The primary processing device 310 and/or the offloading component 330 may verify the first set of data using the first checksum. For example, the primary processing device 310 and/or the offloading component 330 may perform one or more operations using the first checksum to determine whether there are any errors in the first set of data. If there are no errors, the primary processing device 310 and/or the offloading component 330 may transmit the first set of data to the computing device via the network interface 350.

In one embodiment, the data storage system 250 may be a computing device. A computing device may be a device that may include hardware such as processing devices (e.g., processors, central processing units (CPUs), processing cores), memory (e.g., random access memory (RAM)), data storage devices (e.g., hard-disk drives (HDDs), solid-state drives (SSDs), etc.), and/or other hardware devices (e.g., sound card, video card, etc.). A computing device may include any suitable type of device or machine that has a programmable processor including, for example, server computers, desktop computers, laptop computers, tablet computers, smartphones, set-top boxes, etc. In some examples, a computing device may include a single machine or may include multiple interconnected machines (e.g., multiple servers configured in a cluster). Each computing device may execute or include an operating system (OS), as discussed in more detail below. The OS of a computing device may manage the execution of other components (e.g., software, applications, etc.) and/or may manage access to the hardware (e.g., processors, memory, storage devices, etc.) of the computing device.
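Taken together, the factors described above amount to a small decision procedure. The sketch below combines them into one illustrative policy; the threshold values, field names, and the assumed busy window are assumptions made for illustration, since the text leaves the exact policy open:

```python
from dataclasses import dataclass
import time

@dataclass
class OffloadStats:
    recent_checksums: int      # checksums generated by the primary processor lately
    queued_instructions: int   # depth of the primary processor's instruction queue
    ingress_rate_mbps: float   # current data rate seen on the network interface

# Illustrative thresholds; the text does not fix concrete values.
MIN_RECENT_CHECKSUMS = 100
MAX_QUEUED_INSTRUCTIONS = 10_000
MAX_INGRESS_RATE_MBPS = 500.0
PEAK_HOURS = range(9, 18)      # assumed heavy-traffic window (time-of-day factor)

def choose_checksum_engine(stats: OffloadStats, now: float | None = None) -> str:
    """Return 'primary' or 'secondary' per the factors described above."""
    hour = time.localtime(now if now is not None else time.time()).tm_hour
    primary_looks_busy = (
        stats.recent_checksums < MIN_RECENT_CHECKSUMS      # few recent checksums
        or stats.queued_instructions > MAX_QUEUED_INSTRUCTIONS
        or stats.ingress_rate_mbps > MAX_INGRESS_RATE_MBPS
        or hour in PEAK_HOURS                              # heavy traffic expected
    )
    return "secondary" if primary_looks_busy else "primary"
```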
The data storage system 250 may also include a virtual machine (VM). A VM may be a software implementation of a machine (e.g., a software implementation of a computing device) that includes its own operating system (referred to as a guest OS) and executes application programs, applications, and software. A VM may execute on a hypervisor, which executes on top of the OS for a computing device (referred to as a host OS). The hypervisor may also be referred to as a virtual machine monitor (VMM). The hypervisor may be a component of an OS for a computing device, may run on top of the OS for a computing device, or may run directly on host hardware without the use of an OS. The hypervisor may manage system resources, including access to hardware devices such as physical processing devices (e.g., processors, CPUs, etc.), physical memory (e.g., RAM), data storage devices (e.g., HDDs, SSDs), and/or other devices (e.g., sound cards, video cards, etc.). The hypervisor may also emulate the hardware (or other physical resources) which may be used by the VMs to execute software/applications. The hypervisor may also present to other software (e.g., “guest” software) the abstraction of one or more virtual machines (VMs). A VM may execute guest software that uses an underlying emulation of the physical resources (e.g., virtual processors and guest memory).

The data storage system 250 may also include a container. A container may be an isolated set of resources allocated to executing an application, software, and/or process independent from other applications, software, and/or processes. A container may execute on a container engine, which executes on top of the OS for a computing device. The host OS (e.g., an OS of the computing device) may use namespaces to isolate the resources of the containers from each other. A container may also be a virtualized object similar to a virtual machine. However, a container may not implement a separate guest OS (like a VM). The container may share the kernel, libraries, and binaries of the host OS with other containers that are executing on the computing device. The container engine may allow different containers to share the host OS (e.g., the OS kernel, binaries, libraries, etc.) of a computing device. For example, the container engine may multiplex the binaries and/or libraries of the host OS between multiple containers. The container engine may also facilitate interactions between the container and the resources of the computing device. For example, the container engine may manage requests from a container to access a memory (e.g., a RAM) of the computing device. In another example, the container engine may manage requests from the container to access certain libraries/binaries of the host OS. The container engine may also be used to create, remove, and manage containers. In one embodiment, the container engine may be a component of a host operating system. In another embodiment, the container engine may run on top of a host operating system, or may run directly on host hardware without the use of a host operating system.

In one embodiment, the offloading component 330 may perform load balancing functions for the data storage system 250. For example, the offloading component 330 may balance the amount of data that is forwarded to the primary processing device 310 and the secondary processing device 320 for generating checksums. This may help prevent the primary processing device 310 and/or the secondary processing device 320 from being overloaded.
This may also allow the data storage system 250 to operate more efficiently (e.g., to generate checksums more efficiently). In one embodiment, the data storage system 250 may be located in one or more data centers or cloud computing architectures (e.g., clouds) that include multiple computing devices, such as server computers. For example, the data storage system 250 may be located in one or more server/device racks of a data center, along with multiple other data storage systems. As discussed above, the data storage system 250 may be a microserver (e.g., a smaller form factor device) such that multiple data storage systems may be able to fit within a single rack space of a rack in the data center.

FIG. 4 is a diagram of an example sequence diagram 400, in accordance with some embodiments of the present disclosure. The sequence diagram 400 may illustrate operations, functions, actions, etc., performed by different components of a data storage system (e.g., the data storage system 250 illustrated in FIG. 3). For example, the operations of the network interface 350, the offloading component 330, the primary processing device 310, and the secondary processing device 320 are illustrated in sequence diagram 400.

At block 405, the network interface 350 may receive data from a computing device to be stored in a data storage device (e.g., an HDD) of the data storage system. The network interface 350 may forward the data to the offloading component 330 at block 410. In other embodiments, the network interface 350 may forward the data (or portions of the data) directly to one or more of the primary processing device 310 and the secondary processing device 320, rather than to the offloading component 330, as discussed above. At block 415, the offloading component 330 may analyze various factors, parameters, criteria, thresholds, etc., to determine whether the data (or portions of the data) should be forwarded to the primary processing device 310 and/or the secondary processing device 320. For example, the offloading component 330 may analyze the current load on the primary processing device 310, the number of checksums generated by the primary processing device 310, etc. At block 420, the offloading component 330 may forward the data to the primary processing device 310 or may instruct/cause the network interface 350 to forward the data to the primary processing device 310. At block 425, the primary processing device 310 may generate a checksum for the data. At block 430, the offloading component 330 may forward the data to the secondary processing device 320 or may instruct/cause the network interface 350 to forward the data to the secondary processing device 320. At block 435, the secondary processing device 320 may generate a checksum for the data. As discussed above, in different embodiments, the offloading component 330 may forward the data or portions of the data to one or more of the primary processing device 310 and the secondary processing device 320. Thus, any combination of blocks 420, 425, 430, and 435 may be performed by the offloading component 330, the primary processing device 310, and the secondary processing device 320.

FIG. 5 is a flow diagram of a process 500 of storing data in a data storage system, in accordance with some embodiments. Process 500 may be performed by processing logic that may comprise hardware (e.g., circuitry, dedicated logic, programmable logic, a processor, a processing device, a central processing unit (CPU), a system-on-chip (SoC), etc.), software (e.g., instructions running/executing on a processing device), firmware (e.g., microcode), or a combination thereof.
In some embodiments, the process 500 may be performed by one or more of a data storage system, an accelerator component, an offloading component, and/or a computing device.

The process 500 begins at block 505, where the process 500 may receive data from a computing device. The data may be received via a network interface of a data storage system. At block 510, the process 500 may determine whether one or more of the primary processing device and the secondary processing device should be used to generate one or more checksums for the data. For example, the process 500 may analyze the current load of the primary processing device, may analyze the number of checksums generated by the primary processing device over a period of time, etc. If the primary processing device should be used, the process 500 may provide the data to the primary processing device at block 515, where the primary processing device may generate the checksum. If the secondary processing device should be used, the process 500 may provide the data to the secondary processing device at block 520, where the secondary processing device may generate the checksum. If both the primary processing device and the secondary processing device should be used, the process 500 may provide a first portion of the data to the primary processing device and a second portion of the data to the secondary processing device. The primary processing device may generate a first checksum for the first portion and the secondary processing device may generate a second checksum for the second portion.

At block 530, the process 500 may store the data and the checksum to a data storage device (e.g., an HDD). For example, the process 500 may write the data and the checksum to the data storage device. At block 535, the process 500 may receive a request to access the data from a computing device. The process 500 may retrieve the data (e.g., read the data) from the data storage device at block 540. The process 500 may also retrieve the checksum from the data storage device. At block 545, the process 500 may verify the data using the checksum. For example, the process 500 may perform one or more operations (e.g., an exclusive OR (XOR) operation) on the data and the checksum to determine whether there are errors in the data. At block 550, the process 500 may transmit the data to the computing device if there are no errors in the data (e.g., if the data is verified).

FIG. 6 is a block diagram of an example data storage system 250, in accordance with some embodiments of the present disclosure. In one embodiment, the data storage system 250 may be a server (e.g., a microserver, a storage server, etc.) that may provide storage capabilities for computing devices. The data storage system 250 includes a network interface 350, an error component 630, and a data storage device 340. The network interface 350 may be one or more devices, components, modules, interfaces, ports, etc., that may receive data from or transmit data to one or more networks or devices. The network interface 350 may be coupled to one or more networks. The one or more networks may carry communications (e.g., messages, packets, frames, etc.) between the data storage system 250 and other devices (e.g., other computing devices). The one or more networks may include a combination of public networks, private networks, wide area networks, metropolitan area networks, wired networks, wireless networks, etc.
In one embodiment, the error component630may be hardware (e.g., a circuit, processing logic, a programmable logic device, a processing device, etc.), firmware, software, or a combination thereof that may generate one or more codewords (e.g., symbols) for data. The error component630may be part of an accelerator component (e.g., accelerator component131) illustrated inFIGS.1and2. In one embodiment, the data storage device340may be a device, component, or module that is capable of storing data. For example, the data storage device340may be a hard disk drive (HDD). In one embodiment, the network interface350may receive a request for a first set of data stored on the data storage system250(e.g., stored in the data storage device340). For example, the network interface350may receive a request (e.g., a message) from a computing device to access the first set of data. The network interface350may forward the request to the error component630. The error component630may retrieve the first set of data from the data storage device340. For example, the error component630may read the first set of data from the data storage device340. The error component630may also read the checksum for the first set of data and verify the first set of data based on the checksum (e.g., determine whether there are errors in the first set of data). In one embodiment, the error component630may generate a set of codewords based on the first set of data and an error correction code. For example, the error correction code may include and/or be associated with a coding function that may be used to encode data to generate the set of codewords. The set of codewords may also be referred to as symbols. Encoding the first set of data into the set of codewords may allow the first set of data to be recovered and/or decoded even if one or more of the set of codewords is lost, as discussed in more detail below. In one embodiment, the error component630may transmit the set of codewords to the computing device that requested the first set of data via a set of network packets (e.g., a set of messages, packets, frames, etc.). For example, each network packet that is transmitted to the computing device may include a codeword from the set of codewords (that was generated using the error correction code). As discussed above, the first set of data may still be decoded and/or recovered if one or more of the set of codewords is not received by the computing device. For example, one or more of the network packets (that are transmitted by the data storage system250) may not be received by the computing device. The computing device may not receive one or more of the network packets due to various network conditions (e.g., due to a bad link, due to lag, etc.). The set of codewords may include parity data that is generated by the coding function of the error correction code. The parity data may be distributed throughout the set of codewords along with the first set of data. The parity data that is distributed throughout the set of codewords allows the computing device to decode and/or recover the first set of data using a subset of the set of codewords. In one embodiment, the error correction code may be an erasure code. An erasure code may be a type of error correction code that may transform, encode, etc., a set of data into one or more codewords (e.g., symbols). The set of codewords may have a total size that is larger than the set of data. 
The larger size may be due to the parity bits that are included in each of the codewords, which allow for decoding and/or recovery of the set of data using a subset of the set of codewords. For example, the set of data may have a size k. The set of codewords may have a size (e.g., a total size) of n. Thus, the erasure code may have a coding rate of k/n, where k is the number of bits and/or size of the original data and n is the number of bits and/or size of the encoded codewords/symbols. Different erasure codes may allow for a different number of codewords to be lost while still allowing the set of data to be recovered and/or decoded. For example, an erasure code may generate four codewords for a set of data and may allow the set of data to be decoded/recovered if at least three codewords are received. In another example, an erasure code may generate eight codewords for a set of data and may allow the set of data to be decoded/recovered if at least five codewords are received. The minimum number of codewords to decode/recover a set of data may be based on the coding function and/or the type of erasure code used. In one embodiment, the error component630may generate the set of codewords for a set of data by dividing the data into multiple portions. The multiple portions may be of equal size. A coding function of an error correction code and/or erasure code may be applied to the multiple portions to generate the set of codewords. The number of codewords in the set of codewords may be greater than or equal to the number of portions. For example, the set of data may be divided into two portions and four codewords may be generated based on the two portions. The coding function may include equations, actions, operations, or functions that may be applied to the multiple portions to generate the codewords. For example, the coding function may include an OR operation, an exclusive OR (XOR) operation, etc. As discussed above, an error correction code, such as an erasure code, may use a coding function to generate one or more codewords. The minimum number of codewords to decode/recover a set of data may be based on the coding function and/or the type of erasure code used. The error component630may provide an indication of the erasure code and/or the coding function to the computing device. For example, the error component630may transmit a string, identifier, or some other value to the computing device to indicate the coding function and/or the type of erasure code used to generate the codewords. This may allow the computing device to properly decode and/or recover the set of data because the computing device can apply the proper coding function to the codewords to decode and/or recover the set of data from the codewords. In one embodiment, the set of network packets may be transmitted to the computing device using a network protocol that is unable to provide guaranteed (e.g., reliable) delivery of the set of network packets. For example, the error component630may use the user datagram protocol (UDP) to transmit a set of UDP packets to the computing device. Each UDP packet may include one codeword. In other embodiments, the network protocol used by the data storage system250may be unable to provide one or more of the following features: reliable/guaranteed delivery of packets, ordered delivery of packets, congestion control, etc. 
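As a toy illustration of the portion-splitting and XOR-based coding function described above, the following sketch encodes a set of data into three codewords, any two of which recover the data (a simpler (3,2) parity code than the four-codeword example above; the function names and the even-length assumption are introduced here, not taken from the disclosure).

    def xor_bytes(x: bytes, y: bytes) -> bytes:
        return bytes(a ^ b for a, b in zip(x, y))

    def encode(data: bytes) -> list:
        """Split the data into two equal portions A and B and emit three
        codewords [A, B, A^B]; any two of them suffice to rebuild the data."""
        half = len(data) // 2
        a, b = data[:half], data[half:]
        return [a, b, xor_bytes(a, b)]

    def decode(received: dict) -> bytes:
        """Rebuild the data from any two codewords, keyed by codeword index."""
        if 0 in received and 1 in received:
            return received[0] + received[1]
        if 0 in received:  # have A and A^B: recover B = A ^ (A^B)
            return received[0] + xor_bytes(received[0], received[2])
        return xor_bytes(received[1], received[2]) + received[1]  # recover A

    data = b"abcdefgh"  # even length so the two portions are equal
    codewords = encode(data)
    for lost in range(3):  # drop each codeword in turn; decoding still works
        survivors = {i: c for i, c in enumerate(codewords) if i != lost}
        assert decode(survivors) == data

Dropping any single codeword still leaves enough information to reassemble both portions, which is exactly the property that lets the receiving computing device tolerate lost packets.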
Because the network protocol may not be able to provide guaranteed delivery of packets, encoding the data with an error correction code to generate codewords may allow the computing device to recover and/or decode data when some of the packets are not received. As discussed above, the data storage system250may be a computing device. A computing device may be a device that may include hardware such as processing devices (e.g., processors), memory (e.g., RAM), storage devices (e.g., HDDs, SSDs), and other hardware devices (e.g., a video card), as discussed above. A computing device may include any suitable type of device or machine that has a programmable processor. The data storage system250may also include a VM. A VM may be a software implementation of a computing device that includes its own operating system (referred to as a guest OS) and executes application programs, applications, and/or software. A VM may execute on a hypervisor which may manage system resources, including access to hardware devices (e.g., a physical processor, a physical memory or storage device, etc.). The hypervisor may also emulate the physical resources or hardware which may be used by the VMs to execute software/applications. The data storage system may also include a container. A container may be an isolated set of resources allocated to executing an application, software, and/or process independent from other applications, software, and/or processes, as discussed above. A container may execute on a container engine which executes on top of the OS for a computing device which may allow different containers to share the host OS (e.g., the OS kernel, binaries, libraries, etc.) of a computing device. FIG.7is a block diagram illustrating an example process700for generating codewords, in accordance with some embodiments of the present disclosure. As discussed above, a computing device may request a set of data705from a data storage system. The data storage system may retrieve the set of data705and a checksum for the set of data. The data storage system may verify the set of data based on the checksum. To transmit the set of data705to the computing device, the data storage system may generate a set of codewords based on process700. The data storage system may take the set of data705and may divide the set of data705into two portions710. The portions710may be of equal size. For example, each portion710may be half of the size of the set of data705. Although two portions710are illustrated inFIG.7, the set of data705may be divided into another number of portions. The portions may be of equal size or may have different sizes. A coding function may be applied to the portions710to generate the codewords715. For example, the portions710may be XORed together to generate some of the codewords715. Each of the codewords715may be included as the payload of a network packet (e.g., as the payload of a UDP packet) and transmitted to the computing device that requested the set of data705. As discussed above, the computing device may be able to recover and/or decode the set of data705using a subset of the codewords715(e.g., using less than four codewords715). FIG.8is a flow diagram of a process for transmitting data, in accordance with some embodiments of the present disclosure. 
Process800may be performed by processing logic that may comprise hardware (e.g., circuitry, dedicated logic, programmable logic, a processor, a processing device, a central processing unit (CPU), a system-on-chip (SoC), etc.), software (e.g., instructions running/executing on a processing device), firmware (e.g., microcode), or a combination thereof. In some embodiments, the process800may be performed by one or more of a data storage system, an accelerator component, an error component, and/or a computing device. The process800begins at block801, where the process800may optionally provide an indication of an error correction code and/or a coding function of the error correction code to a computing device. At block805, the process800may receive a request for a first set of data from the computing device. For example, the computing device may transmit a message indicating that the computing device would like to access the first set of data. At block810, the process800may retrieve the first set of data from a data storage device (e.g., an HDD). The process800may also retrieve the checksum for the first set of data from the data storage device. At block815, the process800may verify the first set of data based on the checksum. For example, the process800may determine whether the first set of data has any errors based on the checksum. At block820, the process800may generate a set of codewords based on the set of data and an error correction code (e.g., an erasure code). For example, the process800may divide the set of data into multiple portions and may apply a coding function to the multiple portions to generate the set of codewords. The process800may transmit a set of network packets to the computing device and each of the network packets may include one codeword from the set of codewords. FIG.9is a block diagram of an example data storage system250, in accordance with some embodiments of the present disclosure. In one embodiment, the data storage system250may be a server (e.g., a microserver, a storage server, etc.) that may provide storage capabilities for computing devices (e.g., may store data for computing devices). The data storage system250includes a network interface350, a network component930, and a data storage device340. The network interface350may be one or more devices, components, modules, interfaces, ports, etc., that may receive data from or transmit data to one or more networks or devices. The network interface350may be coupled to network950. The network950may carry communications (e.g., messages, packets, frames, etc.) between the data storage system250and the computing device910. The network950may include a combination of public networks, private networks, wide area networks, metropolitan area networks, wired networks, wireless networks, etc. In one embodiment, the network component930may be hardware (e.g., a circuit, processing logic, a programmable logic device, a processing device, etc.), firmware, software, or a combination thereof that may change the network protocol used by the data storage system250. The network component930may be part of an accelerator component (e.g., accelerator component131) illustrated inFIGS.1and2. In one embodiment, the data storage device340may be a device, component, or module that is capable of storing data. For example, the data storage device340may be a hard disk drive (HDD). In one embodiment, the network component930may transmit a first set of data from the data storage system to the computing device910using a first network protocol. 
For example, the network component930may transmit the first set of data (e.g., a first set of network packets) to the computing device910using the transmission control protocol (TCP). In another example, the network component930may transmit the first set of data using UDP. In one embodiment, the network component930may analyze one or more network conditions of the network950used by the computing device910and the data storage system250. For example, the network component930may analyze a retransmission rate for network packets transmitted by the data storage system250. The retransmission rate may indicate a number and/or a percentage of packets that are retransmitted because the packets were not received by the computing device910. The retransmission rate may also be referred to as a loss rate, a packet loss rate, etc. In another example, the network component930may analyze the transmission delay for transmitting packets. The transmission delay may indicate the amount of time (e.g., in milliseconds, in seconds, etc.) for a packet to arrive at the computing device910. The transmission delay may also be referred to as latency, lag, etc. In a further example, the network component930may analyze the throughput or bandwidth of the network. For example, the network component930may analyze the amount of data that was transmitted to the computing device910over a period of time. The retransmission rate, transmission delay, and bandwidth/throughput are examples of network metrics. In other embodiments, other types of network metrics may be used and/or analyzed by the network component930. In one embodiment, the network component930may determine whether a second network protocol should be used to transmit a second set of data to the computing device910, based on the one or more network metrics. For example, if the retransmission rate is higher than a threshold rate, the network component930may determine that a second network protocol should be used to transmit the second set of data to the computing device910. In another example, if the bandwidth/throughput of the data is below a threshold, the network component930may determine that a second network protocol should be used to transmit the second set of data to the computing device910. In a further example, if the transmission delay is above a threshold, the network component930may determine that a second network protocol should be used to transmit the second set of data to the computing device910. In one embodiment, if the network component930determines that a second network protocol should be used to transmit the second set of data to the computing device910, the network component930may switch to the second network protocol and use the second network protocol to transmit the second set of data to the computing device910. If the network component930determines that a second network protocol should not be used to transmit the second set of data to the computing device910, the network component930may continue to use the first network protocol and may use the first network protocol to transmit the second set of data to the computing device910. In one embodiment, the first network protocol may be TCP and the second network protocol may be UDP. For example, the data storage system250may be using TCP to transmit network packets to the computing device910. However, the TCP connection between the computing device910and the data storage system250may have problems or issues (e.g., bad links, faulty network devices, congestion on a link/connection, etc.). 
TCP may be a network protocol that incurs a higher overhead, cost, resource usage, etc., than UDP. If the TCP connection between the data storage system250and the computing device910is not reliable, then it may be better to switch to a lighter-weight protocol such as UDP. When switching from TCP to UDP, the network component930may close any TCP connections between the computing device910and the data storage system250. Closing the TCP connections may be referred to as terminating the connections, tearing down the connections, etc. In one embodiment, the first network protocol may be UDP and the second network protocol may be TCP. For example, the data storage system250may be using UDP to transmit network packets to the computing device910. However, the network component930may periodically analyze network conditions of the network950(e.g., may reanalyze the network conditions of the network950every few minutes, few hours, or some other appropriate period of time). If the network conditions of the network950have improved, the network component930may determine that the TCP protocol should be used. When switching from UDP to TCP, the network component930may establish one or more new TCP connections between the computing device910and the data storage system250. The new TCP connections may be used to communicate data (e.g., network packets) between the computing device910and the data storage system250. As discussed above, the data storage system250may be a computing device. A computing device may be a device that may include hardware such as processing devices (e.g., processors), memory (e.g., RAM), storage devices (e.g., HDDs, SSDs), and other hardware devices (e.g., a video card), as discussed above. A computing device may include any suitable type of device or machine that has a programmable processor. The data storage system250may also include a VM. A VM may be a software implementation of a computing device that includes its own operating system (referred to as a guest OS) and executes application programs, applications, and/or software. A VM may execute on a hypervisor which may manage system resources, including access to hardware devices (e.g., a physical processor, a physical memory or storage device, etc.). The hypervisor may also emulate the physical resources or hardware which may be used by the VMs to execute software/applications. The data storage system may also include a container. A container may be an isolated set of resources allocated to executing an application, software, and/or process independent from other applications, software, and/or processes, as discussed above. A container may execute on a container engine which executes on top of the OS for a computing device which may allow different containers to share the host OS (e.g., the OS kernel, binaries, libraries, etc.) of a computing device. FIG.10is a flow diagram of a process1000of switching network protocols, in accordance with some embodiments. Process1000may be performed by processing logic that may comprise hardware (e.g., circuitry, dedicated logic, programmable logic, a processor, a processing device, a central processing unit (CPU), a system-on-chip (SoC), etc.), software (e.g., instructions running/executing on a processing device), firmware (e.g., microcode), or a combination thereof. In some embodiments, the process1000may be performed by one or more of a data storage system, an accelerator component, a network component, and/or a computing device. 
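As a rough preview of the blocks of process1000, the protocol-switching logic described above might look like the following sketch. The thresholds, peer address, and class name are illustrative assumptions introduced here, not values the disclosure specifies.

    import socket

    RETX_THRESHOLD = 0.05      # assumed: switch away from TCP above a 5% loss rate
    DELAY_THRESHOLD_MS = 200   # assumed: switch away from TCP above 200 ms delay

    class ProtocolSwitcher:
        def __init__(self, peer=("192.0.2.1", 5000)):
            self.peer = peer
            self.protocol = "tcp"
            self.tcp_conn = socket.create_connection(peer, timeout=5)

        def reevaluate(self, retx_rate: float, delay_ms: float) -> None:
            """Analyze metrics and switch transports if needed (blocks 1010-1025)."""
            degraded = retx_rate > RETX_THRESHOLD or delay_ms > DELAY_THRESHOLD_MS
            if self.protocol == "tcp" and degraded:
                self.tcp_conn.close()          # tear down the TCP connection
                self.tcp_conn = None
                self.protocol = "udp"          # lighter weight, no delivery guarantee
            elif self.protocol == "udp" and not degraded:
                # Conditions improved: establish a new TCP connection.
                self.tcp_conn = socket.create_connection(self.peer, timeout=5)
                self.protocol = "tcp"

        def send(self, payload: bytes) -> None:
            if self.protocol == "tcp":
                self.tcp_conn.sendall(payload)
            else:
                with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
                    s.sendto(payload, self.peer)  # e.g., one codeword per datagram

A real implementation would sample the retransmission rate and delay from transport statistics rather than receive them as arguments, and would call reevaluate periodically, as the repeating blocks of process1000 below suggest.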
The process1000begins at block1005, where the process1000may transmit data to a computing device using a first network protocol. At block1010, the process1000may analyze one or more network conditions of a network that couples the data storage system and the computing device. For example, the process1000may determine one or more of the retransmission rate, the bandwidth/throughput, the transmission delay, etc. At block1015, the process1000may determine whether to change network protocols based on the one or more network conditions. For example, if the retransmission rate is above a threshold, the process1000may determine that the network protocol used by the data storage system should be changed. If the process1000determines that the network protocols should be changed, the process1000may switch to a second network protocol and may use the second network protocol to transmit network packets at block1025. If the process1000determines that the network protocols should not be changed, the process1000may continue using the first network protocol and may transmit network packets using the first network protocol at block1020. In some embodiments, the process1000may repeat blocks1010through1025. For example, the process1000may periodically analyze network conditions and determine whether the network protocol currently used by the data storage system should be changed. If so, the process1000may change the network protocol. FIG.11is a block diagram of an example computing device1100that may perform one or more of the operations described herein, in accordance with some embodiments. Computing device1100may be connected to other computing devices in a LAN, an intranet, an extranet, and/or the Internet. The computing device may operate in the capacity of a server machine in a client-server network environment or in the capacity of a client in a peer-to-peer network environment. The computing device may be provided by a personal computer (PC), a set-top box (STB), a server, a network router, switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while only a single computing device is illustrated, the term “computing device” shall also be taken to include any collection of computing devices that individually or jointly execute a set (or multiple sets) of instructions to perform the methods discussed herein. In some embodiments, the computing device1100may be one or more of an access point and a packet forwarding component. The example computing device1100may include a processing device (e.g., a general purpose processor, a PLD, etc.)1102, a main memory1104(e.g., synchronous dynamic random access memory (DRAM), read-only memory (ROM)), a static memory1106(e.g., flash memory) and a data storage device1118, which may communicate with each other via a bus1130. Processing device1102may be provided by one or more general-purpose processing devices such as a microprocessor, central processing unit, or the like. In an illustrative example, processing device1102may comprise a complex instruction set computing (CISC) microprocessor, reduced instruction set computing (RISC) microprocessor, very long instruction word (VLIW) microprocessor, or a processor implementing other instruction sets or processors implementing a combination of instruction sets. 
Processing device1102may also comprise one or more special-purpose processing devices such as an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), network processor, or the like. The processing device1102may be configured to execute the operations described herein, in accordance with one or more aspects of the present disclosure, for performing the operations and steps discussed herein. Computing device1100may further include a network interface device1108which may communicate with a network1120. The computing device1100also may include a video display unit1110(e.g., a liquid crystal display (LCD) or a cathode ray tube (CRT)), an alphanumeric input device1112(e.g., a keyboard), a cursor control device1114(e.g., a mouse) and an acoustic signal generation device1116(e.g., a speaker). In one embodiment, video display unit1110, alphanumeric input device1112, and cursor control device1114may be combined into a single component or device (e.g., an LCD touch screen). Data storage device1118may include a computer-readable storage medium1128on which may be stored one or more sets of instructions, e.g., instructions for carrying out the operations described herein, in accordance with one or more aspects of the present disclosure. Instructions1126implementing one or more of a data storage device, an accelerator component (e.g., an offloading component, an error component, a network component, etc.), may also reside, completely or at least partially, within main memory1104and/or within processing device1102during execution thereof by computing device1100, main memory1104and processing device1102also constituting computer-readable media. The instructions may further be transmitted or received over a network1120via network interface device1108. While computer-readable storage medium1128is shown in an illustrative example to be a single medium, the term “computer-readable storage medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database and/or associated caches and servers) that store the one or more sets of instructions. The term “computer-readable storage medium” shall also be taken to include any medium that is capable of storing, encoding or carrying a set of instructions for execution by the machine and that cause the machine to perform the methods described herein. The term “computer-readable storage medium” shall accordingly be taken to include, but not be limited to, solid-state memories, optical media and magnetic media. Unless specifically stated otherwise, terms such as “receiving,” “determining,” “providing,” “generating,” “encoding,” “decoding,” “retrieving,” “transmitting,” “verifying,” “dividing,” “analyzing,” “establishing,” “closing,” or the like, refer to actions and processes performed or implemented by computing devices that manipulate and transform data represented as physical (electronic) quantities within the computing device's registers and memories into other data similarly represented as physical quantities within the computing device memories or registers or other such information storage, transmission or display devices. Also, the terms “first,” “second,” “third,” “fourth,” etc., as used herein are meant as labels to distinguish among different elements and may not necessarily have an ordinal meaning according to their numerical designation. Examples described herein also relate to an apparatus for performing the operations described herein. 
This apparatus may be specially constructed for the required purposes, or it may comprise a general purpose computing device selectively programmed by a computer program stored in the computing device. Such a computer program may be stored in a computer-readable non-transitory storage medium. The methods and illustrative examples described herein are not inherently related to any particular computer or other apparatus. Various general purpose systems may be used in accordance with the teachings described herein, or it may prove convenient to construct more specialized apparatus to perform the required method steps. The required structure for a variety of these systems will appear as set forth in the description above. The above description is intended to be illustrative, and not restrictive. Although the present disclosure has been described with references to specific illustrative examples, it will be recognized that the present disclosure is not limited to the examples described. The scope of the disclosure should be determined with reference to the following claims, along with the full scope of equivalents to which the claims are entitled. As used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises”, “comprising”, “includes”, and/or “including”, when used herein, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. Therefore, the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting. It should also be noted that in some alternative implementations, the functions/acts noted may occur out of the order noted in the figures. For example, two figures shown in succession may in fact be executed substantially concurrently or may sometimes be executed in the reverse order, depending upon the functionality/acts involved. Although the method operations were described in a specific order, it should be understood that other operations may be performed in between described operations, described operations may be adjusted so that they occur at slightly different times or the described operations may be distributed in a system which allows the occurrence of the processing operations at various intervals associated with the processing. Various units, circuits, or other components may be described or claimed as “configured to” or “configurable to” perform a task or tasks. In such contexts, the phrase “configured to” or “configurable to” is used to connote structure by indicating that the units/circuits/components include structure (e.g., circuitry) that performs the task or tasks during operation. As such, the unit/circuit/component can be said to be configured to perform the task, or configurable to perform the task, even when the specified unit/circuit/component is not currently operational (e.g., is not on). The units/circuits/components used with the “configured to” or “configurable to” language include hardware—for example, circuits, memory storing program instructions executable to implement the operation, etc. Reciting that a unit/circuit/component is “configured to” perform one or more tasks, or is “configurable to” perform one or more tasks, is expressly intended not to invoke 35 U.S.C. 
112, sixth paragraph, for that unit/circuit/component. Additionally, “configured to” or “configurable to” can include generic structure (e.g., generic circuitry) that is manipulated by software and/or firmware (e.g., an FPGA or a general-purpose processor executing software) to operate in a manner that is capable of performing the task(s) at issue. “Configured to” may also include adapting a manufacturing process (e.g., a semiconductor fabrication facility) to fabricate devices (e.g., integrated circuits) that are adapted to implement or perform one or more tasks. “Configurable to” is expressly intended not to apply to blank media, an unprogrammed processor or unprogrammed generic computer, or an unprogrammed programmable logic device, programmable gate array, or other unprogrammed device, unless accompanied by programmed media that confers the ability to the unprogrammed device to be configured to perform the disclosed function(s). The foregoing description, for the purpose of explanation, has been described with reference to specific embodiments. However, the illustrative discussions above are not intended to be exhaustive or to limit the disclosure to the precise forms disclosed. Many modifications and variations are possible in view of the above teachings. The embodiments were chosen and described in order to best explain the principles of the embodiments and their practical applications, to thereby enable others skilled in the art to best utilize the embodiments and various modifications as may be suited to the particular use contemplated. Accordingly, the present embodiments are to be considered as illustrative and not restrictive, and the disclosure is not to be limited to the details given herein, but may be modified within the scope and equivalents of the appended claims.
78,163
11863319
To facilitate understanding, identical reference numerals have been used, where possible, to designate identical elements that are common to the figures. It is contemplated that elements disclosed in one aspect may be beneficially utilized on other aspects without specific recitation. DETAILED DESCRIPTION Aspects of the present disclosure relate to wireless communications, and more particularly, to techniques for utilizing network coding for efficient multicast transmission using a radio link control (RLC) network coding sublayer. The following description provides examples, and is not limiting of the scope, applicability, or examples set forth in the claims. Changes may be made in the function and arrangement of elements discussed without departing from the scope of the disclosure. Various examples may omit, substitute, or add various procedures or components as appropriate. For instance, the methods described may be performed in an order different from that described, and various steps may be added, omitted, or combined. Also, features described with respect to some examples may be combined in some other examples. For example, an apparatus may be implemented or a method may be practiced using any number of the aspects set forth herein. In addition, the scope of the disclosure is intended to cover such an apparatus or method which is practiced using other structure, functionality, or structure and functionality in addition to, or other than, the various aspects of the disclosure set forth herein. It should be understood that any aspect of the disclosure disclosed herein may be embodied by one or more elements of a claim. The word “exemplary” is used herein to mean “serving as an example, instance, or illustration.” Any aspect described herein as “exemplary” is not necessarily to be construed as preferred or advantageous over other aspects. The techniques described herein may be used for various wireless communication technologies, such as LTE, CDMA, TDMA, FDMA, OFDMA, SC-FDMA and other networks. The terms “network” and “system” are often used interchangeably. A CDMA network may implement a radio technology such as Universal Terrestrial Radio Access (UTRA), cdma2000, etc. UTRA includes Wideband CDMA (WCDMA) and other variants of CDMA. cdma2000 covers IS-2000, IS-95 and IS-856 standards. A TDMA network may implement a radio technology such as Global System for Mobile Communications (GSM). An OFDMA network may implement a radio technology such as NR (e.g., 5G RA), Evolved UTRA (E-UTRA), Ultra Mobile Broadband (UMB), IEEE 802.11 (Wi-Fi), IEEE 802.16 (WiMAX), IEEE 802.20, Flash-OFDMA, etc. UTRA and E-UTRA are part of Universal Mobile Telecommunication System (UMTS). New Radio (NR) is an emerging wireless communications technology under development in conjunction with the 5G Technology Forum (5GTF). 3GPP Long Term Evolution (LTE) and LTE-Advanced (LTE-A) are releases of UMTS that use E-UTRA. UTRA, E-UTRA, UMTS, LTE, LTE-A and GSM are described in documents from an organization named “3rd Generation Partnership Project” (3GPP). cdma2000 and UMB are described in documents from an organization named “3rd Generation Partnership Project 2” (3GPP2). The techniques described herein may be used for the wireless networks and radio technologies mentioned above as well as other wireless networks and radio technologies. 
For clarity, while aspects may be described herein using terminology commonly associated with 3G and/or 4G wireless technologies, aspects of the present disclosure can be applied in other generation-based communication systems, such as 5G and later, including NR technologies. New radio (NR) access (e.g., 5G technology) may support various wireless communication services, such as enhanced mobile broadband (eMBB) targeting wide bandwidth (e.g., 80 MHz or beyond), millimeter wave (mmW) targeting high carrier frequency (e.g., 25 GHz or beyond), massive machine type communications (mMTC) targeting non-backward compatible MTC techniques, and/or mission critical targeting ultra-reliable low-latency communications (URLLC). These services may include latency and reliability requirements. These services may also have different transmission time intervals (TTI) to meet respective quality of service (QoS) requirements. In addition, these services may co-exist in the same subframe. Example Wireless Communications System FIG.1illustrates an example wireless communication network100in which aspects of the present disclosure may be performed. For example, UEs120and/or BS110ofFIG.1may be configured to perform operations described below with reference toFIGS.5and6for efficient and reliable multicast transmission using a radio link control (RLC) network coding sublayer. As illustrated inFIG.1, the wireless communication network100may include a number of base stations (BSs)110and other network entities. A BS may be a station that communicates with user equipments (UEs). Each BS110may provide communication coverage for a particular geographic area. In 3GPP, the term “cell” can refer to a coverage area of a Node B (NB) and/or a NB subsystem serving this coverage area, depending on the context in which the term is used. In NR systems, the term “cell” and next generation NodeB (gNB or gNodeB), NR BS, 5G NB, access point (AP), or transmission reception point (TRP) may be interchangeable. In some examples, a cell may not necessarily be stationary, and the geographic area of the cell may move according to the location of a mobile BS. In some examples, the base stations may be interconnected to one another and/or to one or more other base stations or network nodes (not shown) in wireless communication network100through various types of backhaul interfaces, such as a direct physical connection, a wireless connection, a virtual network, or the like using any suitable transport network. In general, any number of wireless networks may be deployed in a given geographic area. Each wireless network may support a particular radio access technology (RAT) and may operate on one or more frequencies. A RAT may also be referred to as a radio technology, an air interface, etc. A frequency may also be referred to as a carrier, a subcarrier, a frequency channel, a tone, a subband, etc. Each frequency may support a single RAT in a given geographic area in order to avoid interference between wireless networks of different RATs. In some cases, NR or 5G RAT networks may be deployed. A BS may provide communication coverage for a macro cell, a pico cell, a femto cell, and/or other types of cells. A macro cell may cover a relatively large geographic area (e.g., several kilometers in radius) and may allow unrestricted access by UEs with service subscription. A pico cell may cover a relatively small geographic area and may allow unrestricted access by UEs with service subscription. 
A femto cell may cover a relatively small geographic area (e.g., a home) and may allow restricted access by UEs having an association with the femto cell (e.g., UEs in a Closed Subscriber Group (CSG), UEs for users in the home, etc.). A BS for a macro cell may be referred to as a macro BS. A BS for a pico cell may be referred to as a pico BS. A BS for a femto cell may be referred to as a femto BS or a home BS. In the example shown inFIG.1, the BSs110a,110band110cmay be macro BSs for the macro cells102a,102band102c, respectively. The BS110xmay be a pico BS for a pico cell102x. The BSs110yand110zmay be femto BSs for the femto cells102yand102z, respectively. A BS may support one or multiple (e.g., three) cells. Wireless communication network100may also include relay stations. A relay station is a station that receives a transmission of data and/or other information from an upstream station (e.g., a BS or a UE) and sends a transmission of the data and/or other information to a downstream station (e.g., a UE or a BS). A relay station may also be a UE that relays transmissions for other UEs. In the example shown inFIG.1, a relay station110rmay communicate with the BS110aand a UE120rin order to facilitate communication between the BS110aand the UE120r. A relay station may also be referred to as a relay BS, a relay, etc. Wireless communication network100may be a heterogeneous network that includes BSs of different types, e.g., macro BS, pico BS, femto BS, relays, etc. These different types of BSs may have different transmit power levels, different coverage areas, and different impact on interference in the wireless communication network100. For example, macro BS may have a high transmit power level (e.g., 20 Watts) whereas pico BS, femto BS, and relays may have a lower transmit power level (e.g., 1 Watt). Wireless communication network100may support synchronous or asynchronous operation. For synchronous operation, the BSs may have similar frame timing, and transmissions from different BSs may be approximately aligned in time. For asynchronous operation, the BSs may have different frame timing, and transmissions from different BSs may not be aligned in time. The techniques described herein may be used for both synchronous and asynchronous operation. A network controller130may couple to a set of BSs and provide coordination and control for these BSs. The network controller130may communicate with the BSs110via a backhaul. The BSs110may also communicate with one another (e.g., directly or indirectly) via wireless or wireline backhaul. The UEs120(e.g.,120x,120y, etc.) may be dispersed throughout the wireless communication network100, and each UE may be stationary or mobile. 
A UE may also be referred to as a mobile station, a terminal, an access terminal, a subscriber unit, a station, a Customer Premises Equipment (CPE), a cellular phone, a smart phone, a personal digital assistant (PDA), a wireless modem, a wireless communication device, a handheld device, a laptop computer, a cordless phone, a wireless local loop (WLL) station, a tablet computer, a camera, a gaming device, a netbook, a smartbook, an ultrabook, an appliance, a medical device or medical equipment, a biometric sensor/device, a wearable device such as a smart watch, smart clothing, smart glasses, a smart wrist band, smart jewelry (e.g., a smart ring, a smart bracelet, etc.), an entertainment device (e.g., a music device, a video device, a satellite radio, etc.), a vehicular component or sensor, a smart meter/sensor, industrial manufacturing equipment, a global positioning system device, or any other suitable device that is configured to communicate via a wireless or wired medium. Some UEs may be considered machine-type communication (MTC) devices or evolved MTC (eMTC) devices. MTC and eMTC UEs include, for example, robots, drones, remote devices, sensors, meters, monitors, location tags, etc., that may communicate with a BS, another device (e.g., remote device), or some other entity. A wireless node may provide, for example, connectivity for or to a network (e.g., a wide area network such as the Internet or a cellular network) via a wired or wireless communication link. Some UEs may be considered Internet-of-Things (IoT) devices, which may be narrowband IoT (NB-IoT) devices. Certain wireless networks (e.g., LTE) utilize orthogonal frequency division multiplexing (OFDM) on the downlink and single-carrier frequency division multiplexing (SC-FDM) on the uplink. OFDM and SC-FDM partition the system bandwidth into multiple (K) orthogonal subcarriers, which are also commonly referred to as tones, bins, etc. Each subcarrier may be modulated with data. In general, modulation symbols are sent in the frequency domain with OFDM and in the time domain with SC-FDM. The spacing between adjacent subcarriers may be fixed, and the total number of subcarriers (K) may be dependent on the system bandwidth. For example, the spacing of the subcarriers may be 15 kHz and the minimum resource allocation (called a “resource block” (RB)) may be 12 subcarriers (or 180 kHz). Consequently, the nominal Fast Fourier Transform (FFT) size may be equal to 128, 256, 512, 1024 or 2048 for system bandwidth of 1.25, 2.5, 5, 10, or 20 megahertz (MHz), respectively. The system bandwidth may also be partitioned into subbands. For example, a subband may cover 1.08 MHz (i.e., 6 resource blocks), and there may be 1, 2, 4, 8, or 16 subbands for system bandwidth of 1.25, 2.5, 5, 10 or 20 MHz, respectively. While aspects of the examples described herein may be associated with LTE technologies, aspects of the present disclosure may be applicable with other wireless communications systems, such as NR. NR may utilize OFDM with a CP on the uplink and downlink and include support for half-duplex operation using TDD. Beamforming may be supported and beam direction may be dynamically configured. MIMO transmissions with precoding may also be supported. MIMO configurations in the DL may support up to 8 transmit antennas with multi-layer DL transmissions up to 8 streams and up to 2 streams per UE. Multi-layer transmissions with up to 2 streams per UE may be supported. Aggregation of multiple cells may be supported with up to 8 serving cells. 
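As a quick arithmetic check of the LTE numerology quoted above (the constants below simply restate the figures in the preceding paragraph; this is illustrative only and not part of the disclosure):

    SUBCARRIER_KHZ = 15            # subcarrier spacing
    RB_SUBCARRIERS = 12            # subcarriers per resource block

    rb_khz = SUBCARRIER_KHZ * RB_SUBCARRIERS       # 180 kHz per resource block
    subband_mhz = 6 * rb_khz / 1000                # 6 RBs -> 1.08 MHz per subband
    fft_size = {1.25: 128, 2.5: 256, 5: 512, 10: 1024, 20: 2048}  # MHz -> FFT size

    print(rb_khz, subband_mhz, fft_size[10])       # 180 1.08 1024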
In some examples, access to the air interface may be scheduled. A scheduling entity (e.g., a BS) allocates resources for communication among some or all devices and equipment within its service area or cell. The scheduling entity may be responsible for scheduling, assigning, reconfiguring, and releasing resources for one or more subordinate entities. That is, for scheduled communication, subordinate entities utilize resources allocated by the scheduling entity. Base stations are not the only entities that may function as a scheduling entity. In some examples, a UE may function as a scheduling entity and may schedule resources for one or more subordinate entities (e.g., one or more other UEs), and the other UEs may utilize the resources scheduled by the UE for wireless communication. In some examples, a UE may function as a scheduling entity in a peer-to-peer (P2P) network, and/or in a mesh network. In a mesh network example, UEs may communicate directly with one another in addition to communicating with a scheduling entity. InFIG.1, a solid line with double arrows indicates desired transmissions between a UE and a serving BS, which is a BS designated to serve the UE on the downlink and/or uplink. A finely dashed line with double arrows indicates interfering transmissions between a UE and a BS. FIG.2shows a block diagram illustrating an example base station (BS) and an example user equipment (UE) in accordance with some aspects of the present disclosure. At the BS110, a transmit processor220may receive data from a data source212and control information from a controller/processor240. The control information may be for the physical broadcast channel (PBCH), physical control format indicator channel (PCFICH), physical hybrid ARQ indicator channel (PHICH), physical downlink control channel (PDCCH), group common PDCCH (GC PDCCH), etc. The data may be for the physical downlink shared channel (PDSCH), etc. The processor220may process (for example, encode and symbol map) the data and control information to obtain data symbols and control symbols, respectively. The transmit processor220may also generate reference symbols, such as for the primary synchronization signal (PSS), secondary synchronization signal (SSS), and cell-specific reference signal (CRS). A transmit (TX) multiple-input multiple-output (MIMO) processor230may perform spatial processing (for example, precoding) on the data symbols, the control symbols, or the reference symbols, if applicable, and may provide output symbol streams to the modulators (MODs)232a-232t. Each modulator232may process a respective output symbol stream (for example, for OFDM, etc.) to obtain an output sample stream. Each modulator may further process (for example, convert to analog, amplify, filter, and upconvert) the output sample stream to obtain a downlink signal. Downlink signals from modulators232a-232tmay be transmitted via the antennas234a-234t, respectively. At the UE120, the antennas252a-252rmay receive the downlink signals from the BS110and may provide received signals to the demodulators (DEMODs) in transceivers254a-254r, respectively. Each demodulator254may condition (for example, filter, amplify, downconvert, and digitize) a respective received signal to obtain input samples. Each demodulator may further process the input samples (for example, for OFDM, etc.) to obtain received symbols. 
A MIMO detector256may obtain received symbols from all the demodulators254a-254r, perform MIMO detection on the received symbols if applicable, and provide detected symbols. A receive processor258may process (for example, demodulate, deinterleave, and decode) the detected symbols, provide decoded data for the UE120to a data sink260, and provide decoded control information to a controller/processor280. On the uplink, at UE120, a transmit processor264may receive and process data (for example, for the physical uplink shared channel (PUSCH)) from a data source262and control information (for example, for the physical uplink control channel (PUCCH)) from the controller/processor280. The transmit processor264may also generate reference symbols for a reference signal (for example, for the sounding reference signal (SRS)). The symbols from the transmit processor264may be precoded by a TX MIMO processor266if applicable, further processed by the modulators in transceivers254a-254r(for example, for SC-FDM, etc.), and transmitted to the BS110. At the BS110, the uplink signals from the UE120may be received by the antennas234, processed by the demodulators232, detected by a MIMO detector236if applicable, and further processed by a receive processor238to obtain decoded data and control information sent by the UE120. The receive processor238may provide the decoded data to a data sink239and the decoded control information to the controller/processor240. The memories242and282may store data and program codes for BS110and UE120, respectively. A scheduler244may schedule UEs for data transmission on the downlink or uplink. The controller/processor280(and/or other processors and modules) at the UE120and/or the controller/processor240(and/or other processors and modules) of the BS110may perform or direct the execution of processes for the techniques described herein (e.g., with reference toFIGS.5and6). Example Multicast Network Coding Scheme for NR Aspects of the present disclosure relate to wireless communications, and more particularly, to techniques for utilizing network coding for more efficient multicast transmission. As will be described in greater detail below, in some cases, a radio link control (RLC) network coding sublayer may be used to increase efficiency and reliability while satisfying latency requirements. Network coding generally refers to a technique where operations (e.g., algebraic algorithms) are performed on packets as they pass through nodes within a network. This is in contrast to traditional routing networks, where packets are simply cached and then forwarded to the next node downstream in the network. Network coding typically merges relevant messages at a node, using a given encoding, then forwards the accumulated result to a destination/receiver for decoding. FIGS.3A and3Billustrate a simple single hop scenario utilizing NC. As shown inFIG.3A, a transmitter device (Tx) generates and sends network coded packets to a receiver device (Rx). The transmitter device Tx (also referred to as a transmitter node, transmitter, encoder node, or encoder) and/or the receiver device Rx (also referred to as a receiver node, receiver, decoder node, or decoder) may be any type of UE, base station, an integrated access and backhaul (IAB) device, and/or the like. As shown inFIG.3B, the transmitter device Tx may generate the network coding packets from a set of original (or source) packets (e.g., packet 1 (p1), packet 2 (p2), and packet 3 (p3)). 
As illustrated, the network coding packets may be the same as a source packet or may include some combination of source packets (e.g., a linear combination of a subset of the source packets). Network coding may be performed using any type of network coding scheme, such as fountain coding, linear network coding, random linear network coding, Luby transform (LT) network coding, Raptor network coding, and/or the like. The number of encoded packets is typically greater than the number of source packets, which provides redundancy and increases reliability. In the example illustrated inFIG.3B, the transmitter device Tx encodes K original packets (e.g., K=3) into N network coded packets (e.g., N=4). As shown, the three source packets (p1, p2, and p3) are encoded into the four network coded packets: p2, p1+p2, p1+p3, and p2+p3. The redundant information carried in the encoded packets may help the receiver device Rx recover the source packets even if not all of the network coded packets are successfully decoded. For example, assuming the receiver device Rx does not successfully decode network coded packet p1+p2 (as indicated by the X), the receiver device Rx may still be able to recover the source packets, as there is sufficient information in the other network coded packets (p2, p1+p3, and p2+p3). For example, the receiver device Rx may first decode network coded packet p2. Using the information for packet p2, the receiver may obtain packet p3 after decoding network coded packet p2+p3 (e.g., because the receiver has already decoded p2 and can use combining techniques to obtain p3 from p2+p3). In a similar manner, the receiver device Rx can obtain packet p1 from the network coded packet p1+p3 (because the receiver device Rx has already decoded packet p3 and can use combining to obtain packet p1 from network coded packet p1+p3). As illustrated, in some cases, the receiver device Rx may provide feedback. In this example, the receiver device Rx indicates the three source packets (p1, p2, and p3) were successfully decoded. As will be described in greater detail below, such feedback may be used to update the network coding scheme. As shown inFIG.4, the network coding scheme can be extended to a multi-hop deployment. In such cases, intermediate nodes (indicated by NC inFIG.4) can either relay packets or encode and relay network coded packets. Aspects of the present disclosure provide network coding schemes that may be applied to multicast transmission scenarios. As will be described in greater detail below, in some cases, an RLC network coding sublayer may be used to increase efficiency and reliability while still satisfying latency requirements. FIG.5illustrates example operations500that may be performed by a transmitter device/UE, in accordance with certain aspects of the present disclosure. For example, operations500may be performed by one of the UEs or base stations ofFIG.1orFIG.2to efficiently and reliably transmit data to a receiver device. Operations500begin, at502, by generating, from an RLC service data unit (SDU), a first number of source packets. At504, the transmitter UE generates, from the first number of source packets, a second number of network coded packets using a network coding scheme, wherein the second number is greater than the first number. At506, the transmitter transmits the second number of network coded packets to a receiver device via one or multiple diverse paths. 
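Treating the “+” in the figures as a bitwise XOR (a common choice for linear network coding, though the disclosure does not mandate it), theFIG.3Brecovery walkthrough above can be sketched end to end; the packet contents and names here are illustrative.

    def xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    # Source packets of equal length.
    p1, p2, p3 = b"AAAA", b"BBBB", b"CCCC"

    # Encode K=3 source packets into N=4 network coded packets, as in FIG. 3B.
    coded = {"p2": p2, "p1+p2": xor(p1, p2),
             "p1+p3": xor(p1, p3), "p2+p3": xor(p2, p3)}

    # Suppose "p1+p2" is lost in transit (the X in FIGS. 3A and 3B).
    del coded["p1+p2"]

    # The receiver can still recover all three source packets by combining:
    r2 = coded["p2"]                # p2 arrives uncoded
    r3 = xor(r2, coded["p2+p3"])    # p3 = p2 XOR (p2+p3)
    r1 = xor(r3, coded["p1+p3"])    # p1 = p3 XOR (p1+p3)
    assert (r1, r2, r3) == (p1, p2, p3)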
FIG.6illustrates example operations600that may be performed by a receiver device, in accordance with certain aspects of the present disclosure, and may be considered complementary to operations500ofFIG.5. For example, operations600may be performed by one of the UEs or base stations ofFIG.1orFIG.2to process network coded packets from a transmitter performing operations500ofFIG.5. Operations600begin, at602, by receiving network coded packets from multiple transmitter devices. At604, the receiver device decodes the network coded packets, based on a network coding decoding scheme, to recover one or more source packets. At606, the receiver device generates feedback based on the decoding. At608, the receiver device transmits the feedback to the multiple transmitter devices. FIG.7is a diagram illustrating an example RLC network coding sublayer, in accordance with certain aspects of the present disclosure. As illustrated, a PDCP PDU may be concatenated (although this step may not be necessary when the PDCP PDU size is relatively large) and divided into K RLC service data units (SDUs) (e.g., S1-SK). The size of SDUs may be predefined and, in some cases, the value of K may be optimized using the SDU size and a generator matrix. In some cases, the value of K may be optimized based on one or more of the following parameters: targeted error rate, channel condition, network coding functions, device computation resources, and delay budget. Next, network coding (e.g., encoding) may be performed to generate N packets (e.g., p1-pN), which may be mapped to N PDUs (PDU1-PDUN). The value of N may be determined, for example, by the particular network coding function, targeted error probability, and/or channel conditions. The N PDUs may then be passed to the MAC layer, to be distributed to physical (PHY) layers for transmission over multiple links (e.g., different component carriers and/or different RATs). At the receiver side, the network coding sublayer may process the network coded packets, performing network decoding operations corresponding to the encoding operations described above, and generating feedback to the transmitter (or multiple transmitters). In a multicast scenario with multiple transmitters, each participating transmitter may have a network coding sublayer, as shown inFIG.7, and may use the same network coding scheme. Depending on the scenario, however, each transmitter may send the same source packets or each transmitter may send its own (e.g., different) source packets. FIGS.8A &8Billustrate a first scenario for multicast network coding, in which multiple transmitters (Tx 1, Tx 2, and Tx n) have the same source packets to send (packets 1-m). Thus, in this scenario, the receiver intends to receive one set of the original (source) packets. As illustrated inFIG.8A, due to the general randomness of the network coding scheme, however, each transmitter Tx 1, Tx 2, and Tx n generates different network coded packets. This application scenario may be used, for example, in a multiple transmitter receiver point (multi-TRP) scenario where each link corresponds to a beam, a dual connectivity (DC) scenario where each link is associated with one gNB, and/or a carrier aggregation (CA) scenario where each link is one carrier. As illustrated inFIG.8B, the receiver decodes the data and sends the same feedback information to all transmitters Tx 1, Tx 2, and Tx n (e.g., indicating which of the packets was successfully received). 
Each transmitter Tx 1, Tx 2, and Tx n then updates the network coding scheme (e.g., via an encoding distribution function), according to the decoded packets report from the receiver. In some cases, the feedback may be provided via a network coding sublayer decoded packets report. FIG.9illustrates, in greater detail, how a network coding scheme may be updated based on feedback received via an NC sublayer decoded packets report. As illustrated, the receiver Rx generates acknowledgment information (ACK) for decoded packets and provides this as feedback to the transmitter Tx NC sublayer. In the illustrated example, the receiver Rx failed to decode one of the packets (as indicated by the X). The feedback may be provided via a special (new) field in an RLC status report or could be a (newly defined) NC sublayer report. As illustrated, upon receiving the feedback, the transmitter Tx NC sublayer may update the NC scheme, for example, adjusting the encoding distribution function according to the feedback. For example, assuming the transmitter Tx wants to send packets p1, p2, p3, p4 to the receiver Rx, the transmitter Tx may first generate encoded (e.g., network coded) packets (pc1, pc2 . . . ) using an encoding function f, such that: pc_i = f(p1, p2, p3, p4), for i = 1, 2, 3, 4, . . . , N, where pc_i is the ith encoded packet. Based on feedback from the receiver Rx, the transmitter Tx may realize that the receiver Rx has already successfully decoded source packets p1 and p3. Thus, the transmitter Tx can update its encoding function from f to F, and send encoded packets: pc_i′ = F(p2, p4), for i′ = 1, 2, 3, 4, . . . , where pc_i′ is the i′th encoded packet after receiving the feedback information. In this example, the updated encoding function F does not take (positively) acknowledged (ACKed) packets (p1, p3) into account when encoding. As another example, again assuming source packets p1, p2, p3, and p4, the transmitter Tx could update its encoding function to F′, and send encoded packets: pc_1 = F′(p1, p3, p4) = p1+p3+p4, and pc_2 = F′(p1, p2, p3) = p1+p2+p3, so that the receiver Rx can further decode p4 and p2, respectively. In this example, the updated encoding function does take ACKed packets (p1, p3) into account when encoding. FIGS.10A &10Billustrate a second scenario for multicast network coding, in which multiple transmitters (Tx 1, Tx 2, and Tx n) have different source packets to send. As illustrated inFIG.10A, Tx 1 sends packets a1-am, Tx 2 sends b1-bm, and Tx n sends n1-nm. Thus, in this scenario, the receiver, in effect, acts as three separate sub-receivers, to receive the different sets of the original (source) packets. This application scenario may also be used, for example, in multi-TRP, DC, and/or CA scenarios as explained above. As illustrated inFIG.10B, the receiver intends to receive a different set of original packets from each of the transmitters. Each transmitter generates different encoded packets using the same network coding scheme, but from different source packets. Thus, in this case, the receiver sends unique feedback information for each corresponding transmitter. Each transmitter then updates the NC scheme according to the feedback information. In this scenario, the feedback may be sent as an NC sublayer report, an RLC status report, or via hybrid automatic repeat request (HARQ) ACK/NACK feedback. FIG.11illustrates, in greater detail, how a NC scheme may be updated based on feedback received via an RLC status report.
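The update from f to F (or F′) described with respect to FIG. 9 can be made concrete with a short sketch, again assuming XOR combining; the names are illustrative.

    def xor_all(packets):
        # XOR an arbitrary number of equal-length packets together.
        out = bytes(len(packets[0]))
        for p in packets:
            out = bytes(x ^ y for x, y in zip(out, p))
        return out

    sources = {"p1": b"1111", "p2": b"2222", "p3": b"3333", "p4": b"4444"}
    acked = {"p1", "p3"}   # from the NC sublayer decoded packets report

    # Updated function F: combine only the not-yet-ACKed sources.
    outstanding = [v for k, v in sources.items() if k not in acked]
    pc_next = xor_all(outstanding)                 # e.g., p2+p4

    # Alternative update F': keep ACKed packets in the combination so
    # the receiver can cancel them, e.g. pc_1 = p1+p3+p4 yields p4.
    pc_1 = xor_all([sources["p1"], sources["p3"], sources["p4"]])
    p4_recovered = xor_all([pc_1, sources["p1"], sources["p3"]])
    assert p4_recovered == sources["p4"]

The same update logic applies when the feedback arrives via an RLC status report, as in FIG. 11.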
As illustrated, the receiver generates the status report with ACK information for decoded packets as feedback to the transmitter Tx NC sublayer. As illustrated, the transmitter Tx may maintain a buffer of the (RLC PDU) packets that were successfully received. The transmitter Tx may then try to tentatively decode the packets from this buffer, to recover the source packets, effectively doing the same work as the receiver. If the transmitter Tx is able to recover a source packet, it may assume the receiver is also able to decode the source packet and may update the encoder function accordingly. FIG.12illustrates, in greater detail, how a network coding scheme may be updated based on feedback received via HARQ ACK/NACK feedback. As illustrated, the receiver generates the ACK/NACK feedback (one layer lower than for the RLC status report example above) for decoded packets as feedback to the transmitter Tx NC sublayer. Because the N encoded packets are transmitted as one or more MAC PDUs, the receiver side extracts MAC PDUs to decode the original packets. As illustrated, based on this feedback, the transmitter will maintain a buffer with MAC PDUs that were successfully decoded at the receiver (e.g., as indicated by the ACK feedback). The transmitter will then extract the RLC PDUs and, in a similar manner as described above with reference toFIG.11, tentatively decode these packets to recover the source packets, effectively doing the same work as the receiver. Again, if the transmitter Tx is able to recover a source packet, the transmitter Tx may assume that the receiver can also decode the source packet and may update the encoder function accordingly. One advantage of the feedback approaches shown inFIGS.11and12is that they may be implemented without a change to a standard specification (e.g., the implementation may just be on the transmitter and receiver side). FIG.13illustrates a communications device1300that may include various components (e.g., corresponding to means-plus-function components) configured to perform operations for the techniques disclosed herein, such as the operations illustrated inFIG.5. The communications device1300includes a processing system1302coupled to a transceiver1308. The transceiver1308is configured to transmit and receive signals for the communications device1300via an antenna1310, such as the various signals as described herein. The processing system1302may be configured to perform processing functions for the communications device1300, including processing signals received and/or to be transmitted by the communications device1300. The processing system1302includes a processor1304coupled to a computer-readable medium/memory1312via a bus1306. In certain aspects, the computer-readable medium/memory1312is configured to store instructions (e.g., computer-executable code) that when executed by the processor1304, cause the processor1304to perform the operations illustrated inFIG.5, or other operations for performing the various techniques discussed herein. In certain aspects, computer-readable medium/memory1312stores code1314for generating, from a radio link control (RLC) service data unit (SDU), a first number of source packets; code1316for generating, from the first number of source packets, a second number of network coded packets based on a network coding scheme, wherein the second number is greater than the first number; and code1318for transmitting the second number of network coded packets to a receiver device via one or multiple diverse paths.
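The tentative decoding that the transmitter performs in FIGS. 11 and 12 amounts to solving for source packets from the buffered, acknowledged combinations, mirroring the receiver. A minimal sketch, assuming each buffered packet carries the set of source identifiers it combines and XOR combining; all names are illustrative.

    def xor(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    # Coded packets the receiver acknowledged, modeled as
    # (set of combined source ids, payload) pairs.
    p1, p2 = b"\x01" * 4, b"\x02" * 4
    acked_buffer = [({"p1"}, p1), ({"p1", "p2"}, xor(p1, p2))]

    decoded = {}
    changed = True
    while changed:                       # simple peeling decoder
        changed = False
        for ids, data in acked_buffer:
            unknown = ids - set(decoded)
            if len(unknown) == 1:        # exactly one unknown: solve it
                for k in ids - unknown:
                    data = xor(data, decoded[k])
                decoded[unknown.pop()] = data
                changed = True

    assert decoded == {"p1": p1, "p2": p2}
    # Any source in `decoded` is assumed decodable at the receiver too,
    # so the encoding distribution function can exclude it going forward.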
In certain aspects, the processor1304has circuitry configured to implement the code stored in the computer-readable medium/memory1312. The processor1304includes circuitry1320for generating, from a RLC SDU, a first number of source packets; circuitry1322for generating, from the first number of source packets, a second number of network coded packets based on a network coding scheme, wherein the second number is greater than the first number; and circuitry1324for transmitting the second number of network coded packets to a receiver device via one or multiple diverse paths. FIG.14illustrates a communications device1400that may include various components (e.g., corresponding to means-plus-function components) configured to perform operations for the techniques disclosed herein, such as the operations illustrated inFIG.6. The communications device1400includes a processing system1402coupled to a transceiver1408. The transceiver1408is configured to transmit and receive signals for the communications device1400via an antenna1410, such as the various signals as described herein. The processing system1402may be configured to perform processing functions for the communications device1400, including processing signals received and/or to be transmitted by the communications device1400. The processing system1402includes a processor1404coupled to a computer-readable medium/memory1412via a bus1406. In certain aspects, the computer-readable medium/memory1412is configured to store instructions (e.g., computer-executable code) that when executed by the processor1404, cause the processor1404to perform the operations illustrated inFIG.6, or other operations for performing the various techniques discussed herein. In certain aspects, computer-readable medium/memory1412stores: code1414for receiving network coded packets from multiple transmitter devices; code1416for decoding the network coded packets, based on a network coding decoding scheme, to recover one or more source packets; code1418for generating feedback based on the decoding; and code1420for transmitting the feedback to the multiple transmitter devices. In certain aspects, the processor1404has circuitry configured to implement the code stored in the computer-readable medium/memory1412. The processor1404includes circuitry1422for receiving network coded packets from multiple transmitter devices; circuitry1424for decoding the network coded packets, based on a network coding decoding scheme, to recover one or more source packets; circuitry1426for generating feedback based on the decoding; and circuitry1428for transmitting the feedback to the multiple transmitter devices. Example Aspects Aspect 1: A method of wireless communication by a receiver device, comprising receiving network coded packets from multiple transmitter devices; decoding the network coded packets, based on a network coding scheme, to recover one or more source packets; generating feedback based on the decoding; and transmitting the feedback to the multiple transmitter devices. Aspect 2: The method of Aspect 1, wherein the network coded packets are based on a same set of source packets encoded using the network coding scheme; and transmitting the feedback comprises transmitting the same feedback to each of the multiple transmitter devices indicating which of the same set of source packets were successfully recovered. Aspect 3: The method of Aspect 2, wherein the feedback is sent via a network coded sublayer decoded packets report. 
Aspect 4: The method of Aspect 2 or 3, wherein the feedback is sent via a field in a radio link control (RLC) status report. Aspect 5: The method of Aspect 4, wherein the RLC status report comprises information associated with the decoded network coded packets. Aspect 6: The method of any of Aspects 1-5, wherein the network coded packets are based on different sets of source packets encoded using the network coding scheme; and transmitting the feedback comprises transmitting different feedback to the multiple transmitter devices, each different feedback indicating which source packets of a different set of source packets were successfully recovered. Aspect 7: The method of Aspect 6, wherein the feedback is sent via a network coded sublayer decoded packets report. Aspect 8: The method of Aspect 6 or 7, wherein the feedback is sent via a field in a RLC status report. Aspect 9: The method of Aspect 8, wherein the RLC status report comprises information associated with the decoded network coded packets. Aspect 10: The method of any of Aspects 6-9, wherein the feedback is sent via a RLC status report with information regarding received network coded packets. Aspect 11: The method of any of Aspects 6-10, wherein the feedback comprises hybrid automatic repeat request (HARQ) acknowledgment feedback. Aspect 12: The method of any of Aspects 1-11, wherein the same feedback is sent to each of the multiple transmitter devices. Aspect 13: The method of any of Aspects 1-12, wherein different feedback is sent to each of the multiple transmitter devices. Aspect 14: A method of wireless communication by a first transmitter device, comprising generating, from a RLC service data unit (SDU), a first number of source packets; generating, from the first number of source packets, a second number of network coded packets based on a network coding scheme, wherein the second number is greater than the first number; and transmitting the second number of network coded packets to a receiver device via one or multiple diverse paths. Aspect 15: The method of Aspect 14, wherein the first transmitter device is one of multiple transmitter devices; and the network coded packets are sent as part of a multi-cast transmission in which each of the multiple transmitter devices generates, from the same first number of source packets, the greater second number of network coded packets using the same network coding scheme, and transmits the greater second number of network coded packets to the receiver device. Aspect 16: The method of Aspect 15, further comprising receiving feedback from the receiver device regarding packets successfully decoded at the receiver device; and updating the network coding scheme based on the feedback. Aspect 17: The method of Aspect 16, wherein the feedback is received via a network coded sublayer decoded packets report. Aspect 18: The method of Aspect 16 or 17, wherein the feedback is received via a field in an RLC status report. Aspect 19: The method of Aspect 18, wherein the RLC status report indicates one or more decoded network coded packets. Aspect 20: The method of any of Aspects 16-19, wherein updating the network coding scheme comprises updating an encoding distribution function according to the feedback.
Aspect 21: The method of any of Aspects 14-20, wherein the first transmitter device is one of multiple transmitter devices; and the network coded packets are sent as part of a multi-cast transmission in which each of the multiple transmitter devices generates, from different source packets, different network coded packets using the same network coding scheme, and transmits the different network coded packets to the receiver device. Aspect 22: The method of Aspect 21, further comprising receiving feedback from the receiver device regarding packets successfully decoded at the receiver device; and updating the network coding scheme based on the feedback. Aspect 23: The method of Aspect 22, wherein the feedback is received via a network coded sublayer decoded packets report. Aspect 24: The method of Aspect 22 or 23, wherein the feedback is received via a field in an RLC status report. Aspect 25: The method of Aspect 24, wherein the RLC status report indicates one or more decoded network coded packets. Aspect 26: The method of any of Aspects 22-25, wherein the feedback is received via a RLC status report with information regarding network coded packets as received at the receiver device. Aspect 27: The method of any of Aspects 24-26, further comprising buffering network coded packets indicated as successfully received in the RLC status report; decoding the buffered network coded packets to recover one or more of the source packets; and updating an encoding distribution function of the network coding scheme based on a result of the decoding. Aspect 28: The method of any of Aspects 22-27, wherein the feedback comprises HARQ acknowledgment feedback; and the method further comprising buffering medium access control (MAC) protocol data units (PDUs) indicated as successfully received in the HARQ acknowledgment feedback; extracting network coded packets from the buffered MAC PDUs; decoding the extracted network coded packets to recover one or more of the source packets; and updating an encoding distribution function of the network coding scheme based on a result of the decoding. Aspect 29: An apparatus for wireless communication by a UE, comprising a memory and at least one processor coupled to the memory, the memory and the at least one processor being configured to perform any of the operations of Aspects 1-28. Aspect 30: An apparatus for wireless communication by a UE, comprising means for performing any of the operations of Aspects 1-28. Aspect 31: A computer readable medium having instructions stored thereon for performing any of the operations of Aspects 1-28. The methods disclosed herein comprise one or more steps or actions for achieving the methods. The method steps and/or actions may be interchanged with one another without departing from the scope of the claims. In other words, unless a specific order of steps or actions is specified, the order and/or use of specific steps and/or actions may be modified without departing from the scope of the claims. As used herein, a phrase referring to “at least one of” a list of items refers to any combination of those items, including single members. As an example, “at least one of: a, b, or c” is intended to cover a, b, c, a-b, a-c, b-c, and a-b-c, as well as any combination with multiples of the same element (e.g., a-a, a-a-a, a-a-b, a-a-c, a-b-b, a-c-c, b-b, b-b-b, b-b-c, c-c, and c-c-c or any other ordering of a, b, and c). As used herein, the term “determining” encompasses a wide variety of actions. 
For example, “determining” may include calculating, computing, processing, deriving, investigating, looking up (e.g., looking up in a table, a database or another data structure), ascertaining and the like. Also, “determining” may include receiving (e.g., receiving information), accessing (e.g., accessing data in a memory) and the like. Also, “determining” may include resolving, selecting, choosing, establishing and the like. The previous description is provided to enable any person skilled in the art to practice the various aspects described herein. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects. Thus, the claims are not intended to be limited to the aspects shown herein, but are to be accorded the full scope consistent with the language of the claims, wherein reference to an element in the singular is not intended to mean “one and only one” unless specifically so stated, but rather “one or more.” Unless specifically stated otherwise, the term “some” refers to one or more. All structural and functional equivalents to the elements of the various aspects described throughout this disclosure that are known or later come to be known to those of ordinary skill in the art are expressly incorporated herein by reference and are intended to be encompassed by the claims. Moreover, nothing disclosed herein is intended to be dedicated to the public regardless of whether such disclosure is explicitly recited in the claims. No claim element is to be construed under the provisions of 35 U.S.C. § 112(f) unless the element is expressly recited using the phrase “means for” or, in the case of a method claim, the element is recited using the phrase “step for.” The various operations of methods described above may be performed by any suitable means capable of performing the corresponding functions. The means may include various hardware and/or software component(s) and/or module(s), including, but not limited to, a circuit, an application specific integrated circuit (ASIC), or processor. Generally, where there are operations illustrated in figures, those operations may have corresponding counterpart means-plus-function components. For example, various operations shown inFIGS.5and6may be performed by various processors shown inFIG.2of the BS110and/or UE120. The various illustrative logical blocks, modules and circuits described in connection with the present disclosure may be implemented or performed with a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device (PLD), discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but in the alternative, the processor may be any commercially available processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. If implemented in hardware, an example hardware configuration may comprise a processing system in a wireless node. The processing system may be implemented with a bus architecture.
The bus may include any number of interconnecting buses and bridges depending on the specific application of the processing system and the overall design constraints. The bus may link together various circuits including a processor, machine-readable media, and a bus interface. The bus interface may be used to connect a network adapter, among other things, to the processing system via the bus. The network adapter may be used to implement the signal processing functions of the PHY layer. In the case of a UE120(seeFIG.1), a user interface (e.g., keypad, display, mouse, joystick, etc.) may also be connected to the bus. The bus may also link various other circuits such as timing sources, peripherals, voltage regulators, power management circuits, and the like, which are well known in the art, and therefore, will not be described any further. The processor may be implemented with one or more general-purpose and/or special-purpose processors. Examples include microprocessors, microcontrollers, DSP processors, and other circuitry that can execute software. Those skilled in the art will recognize how best to implement the described functionality for the processing system depending on the particular application and the overall design constraints imposed on the overall system. If implemented in software, the functions may be stored or transmitted over as one or more instructions or code on a computer readable medium. Software shall be construed broadly to mean instructions, data, or any combination thereof, whether referred to as software, firmware, middleware, microcode, hardware description language, or otherwise. Computer-readable media include both computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. The processor may be responsible for managing the bus and general processing, including the execution of software modules stored on the machine-readable storage media. A computer-readable storage medium may be coupled to a processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. By way of example, the machine-readable media may include a transmission line, a carrier wave modulated by data, and/or a computer readable storage medium with instructions stored thereon separate from the wireless node, all of which may be accessed by the processor through the bus interface. Alternatively, or in addition, the machine-readable media, or any portion thereof, may be integrated into the processor, such as the case may be with cache and/or general register files. Examples of machine-readable storage media may include, by way of example, RAM (Random Access Memory), flash memory, ROM (Read Only Memory), PROM (Programmable Read-Only Memory), EPROM (Erasable Programmable Read-Only Memory), EEPROM (Electrically Erasable Programmable Read-Only Memory), registers, magnetic disks, optical disks, hard drives, or any other suitable storage medium, or any combination thereof. The machine-readable media may be embodied in a computer-program product. A software module may comprise a single instruction, or many instructions, and may be distributed over several different code segments, among different programs, and across multiple storage media. The computer-readable media may comprise a number of software modules. 
The software modules include instructions that, when executed by an apparatus such as a processor, cause the processing system to perform various functions. The software modules may include a transmission module and a receiving module. Each software module may reside in a single storage device or be distributed across multiple storage devices. By way of example, a software module may be loaded into RAM from a hard drive when a triggering event occurs. During execution of the software module, the processor may load some of the instructions into cache to increase access speed. One or more cache lines may then be loaded into a general register file for execution by the processor. When referring to the functionality of a software module below, it will be understood that such functionality is implemented by the processor when executing instructions from that software module. Also, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared (IR), radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray® disc where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Thus, in some aspects computer-readable media may comprise non-transitory computer-readable media (e.g., tangible media). In addition, for other aspects computer-readable media may comprise transitory computer-readable media (e.g., a signal). Combinations of the above should also be included within the scope of computer-readable media. Thus, certain aspects may comprise a computer program product for performing the operations presented herein. For example, such a computer program product may comprise a computer-readable medium having instructions stored (and/or encoded) thereon, the instructions being executable by one or more processors to perform the operations described herein. For example, instructions for performing the operations described herein and illustrated inFIGS.5and6. Further, it should be appreciated that modules and/or other appropriate means for performing the methods and techniques described herein can be downloaded and/or otherwise obtained by a user terminal and/or base station as applicable. For example, such a device can be coupled to a server to facilitate the transfer of means for performing the methods described herein. Alternatively, various methods described herein can be provided via storage means (e.g., RAM, ROM, a physical storage medium such as a compact disc (CD) or floppy disk, etc.), such that a user terminal and/or base station can obtain the various methods upon coupling or providing the storage means to the device. Moreover, any other suitable technique for providing the methods and techniques described herein to a device can be utilized. It is to be understood that the claims are not limited to the precise configuration and components illustrated above. Various modifications, changes and variations may be made in the arrangement, operation and details of the methods and apparatus described above without departing from the scope of the claims.
11863320
DETAILED DESCRIPTION OF EMBODIMENTS Reference will now be made in detail to particular embodiments of the invention, examples of which are illustrated in the accompanying drawings. While the invention will be described in conjunction with the preferred embodiments, it will be understood that they are not intended to limit the invention to these embodiments. On the contrary, the invention is intended to cover alternatives, modifications and equivalents that may be included within the spirit and scope of the invention as defined by the appended claims. Furthermore, in the following detailed description of the present invention, numerous specific details are set forth in order to provide a thorough understanding of the present invention. However, it will be readily apparent to one skilled in the art that the present invention may be practiced without these specific details. In other instances, well-known methods, procedures, processes, components, structures, and circuits have not been described in detail so as not to unnecessarily obscure aspects of the present invention. Some portions of the detailed descriptions which follow are presented in terms of processes, procedures, logic blocks, functional blocks, processing, schematic symbols, and/or other symbolic representations of operations on data streams, signals, or waveforms within a computer, processor, controller, device, and/or memory. These descriptions and representations are generally used by those skilled in the data processing arts to effectively convey the substance of their work to others skilled in the art. Usually, though not necessarily, quantities being manipulated take the form of electrical, magnetic, optical, or quantum signals capable of being stored, transferred, combined, compared, and otherwise manipulated in a computer or data processing system. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, waves, waveforms, streams, values, elements, symbols, characters, terms, numbers, or the like. One problem with having all devices that share the same media also use the same protocol is that this can prevent new and more advanced devices from being added to the same media, whereby such advanced devices may be capable of sending data with higher performance (e.g., a higher data rate). Typically, if the data rate of a shared media channel needs to be increased, such as when the full capacity of the channel has been reached, all of the devices sharing the communication channel may need to be replaced with the new higher performance devices. However, this is a very expensive solution since there may be a large number of such devices sharing the same media in a system. It would be more cost effective if a smaller set of devices (e.g., 2 or more devices) at a time could be replaced, in order to increase the channel capacity. In addition, if new capability needs to be added to a shared media that already has multiple devices operating thereon, the new devices might benefit from higher performance (e.g., a higher data rate), but these devices would typically instead be forced to use the same lower performance communication protocol as the existing devices that are already deployed on the shared media. In particular embodiments, higher or “enhanced” performance devices can be added to media that is shared by other lower or “standard” performance devices that share the same communication protocol.
As described herein, the “standard” format may indicate the physical layer and media access control (MAC) layer format used by such existing devices that are all connected to the same physical media and share the same common standard protocol, while the “enhanced” format may indicate a higher performance physical and MAC layer format that is not normally or otherwise compatible with the standard devices using the standard format protocol. While certain embodiments are applicable to devices (e.g., transceivers) on any shared media, including wired or wireless media types, the examples herein primarily utilize the IEEE 802.3 Ethernet protocol as the standard format protocol. The IEEE 802.3 compliant devices have a MAC layer protocol for getting access onto the wire or communication media, while avoiding collisions as much as possible. In this protocol, transmissions are packetized with variable sized payloads of data of between 46 and 1500 bytes. The available modes for a device to get access to the media include carrier sense multiple access with collision detection (CSMA/CD) mode and beacon mode. For the CSMA/CD mode, devices may first listen for a transmitted preamble, and upon establishing that there is none, begin to transmit a packet using the standard format physical symbols. If the transmitting device determines that another device chose the same moment to also begin a transmission (i.e., collision detection), then that device may stop transmitting, in order to wait for another opportunity. For beacon mode, one node/device sharing the media and using the standard protocol may act as a unique master for the group of all compliant devices sharing the media. This master node periodically transmits a beacon to mark the beginning of a set of opportunities for all nodes to transmit data. Then, each device (e.g., with a unique identifier) may have its own time slot for transmitting data when no other device compliant to the protocol is allowed to transmit. Thus, in beacon mode there are no collisions since every device has its own unique communication time slot. For both modes, each packet may start and end with a known sequence of bit waveforms defined by the physical layer specification in the standard shared protocol. For example, the sequence is 64 bits of preamble, 48 bits each of MAC source and destination addresses for the packet, 16 bits of length of the packet, then the actual packet data sequence consistent with the length field, and finally 32 bits of cyclic redundancy check (CRC) error-detecting code. Normal IEEE 802.3 compliant packets fill the data section with standard format bit waveforms representing the data to be sent, and in the format that is defined in that standard. For example, the length field preceding the data section is a count of how many bytes of data are to be transmitted during the data section at the bit rate defined by the common standard. The 802.3 compliant devices know what data length to expect, having decoded the preceding length field, and these transceivers are designed to tolerate receiving noise on the waveform that may corrupt some of the bits in the data. This is why there is a CRC following the data section, in order to check if the noise has corrupted any bits in the data section during transmission. The CRC calculation may be based on all of the bits as received in the data section.
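The field sequence just described (preamble, addresses, length, data, CRC) can be sketched as follows; this is illustrative only, using zlib.crc32 as a stand-in for the Ethernet CRC-32 rather than reproducing the exact bit ordering of the standard, and the function names are hypothetical.

    import struct
    import zlib

    PREAMBLE = bytes([0x55] * 7) + b"\xd5"       # 64 bits, ending in the SFD

    def build_frame(dst: bytes, src: bytes, payload: bytes) -> bytes:
        assert len(dst) == 6 and len(src) == 6   # 48-bit MAC addresses
        assert 46 <= len(payload) <= 1500        # 802.3 payload bounds
        header = dst + src + struct.pack("!H", len(payload))   # 16-bit length
        crc = struct.pack("!I", zlib.crc32(header + payload))  # 32-bit check
        return PREAMBLE + header + payload + crc

    frame = build_frame(b"\x02" * 6, b"\x04" * 6, bytes(46))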
The CRC can be designed such that if any bit or bits are decoded in error, that CRC value is very unlikely to match the value calculated on the data when there are no errors. Each receiver may calculate a CRC based on the data as received in the data section, and then check that value against the CRC that is sent in the packet after the data section. If the calculated CRC does not match the received CRC value, the packet can be thrown out and the data is not used. It should also be noted that error correction may be present to correct some number of corrupt bits, but the error correction code (ECC) may only be able to correct a finite number of corruptions and can be overwhelmed by too many corruptions. Thus, the CRC can be a final check after error correction to check that all critical payload data are correct. Referring now toFIG.1, shown is a block diagram of an example device and shared media arrangement, in accordance with embodiments of the present invention. In example100, standard transceivers (e.g.,102-0,102-1, etc.) and enhanced transceivers (e.g.,104-0,104-1, etc.) may be connected to shared media (e.g., Ethernet, twisted-pair, WiFi, etc.). It should be noted that any number of transceivers102and104can be included on the shared media. Transceivers102and104may be implemented by an integrated circuit (e.g., custom ASIC, DSP, ADC, DAC, etc.) or other components, but transceivers104may have enhanced performance capability, such as an enhanced data rate, and can utilize an enhanced communication protocol. For example, the shared media can utilize high speed Ethernet standards, such as the IEEE 802.3cg multidrop standard (e.g., for IoT applications and increased bandwidth/data rates). A “multidrop” is a single cable to which multiple devices/nodes can connect without switches, as opposed to point-to-point systems that require switches for network distribution. However, particular embodiments are suitable for the sharing of any single medium between legacy (standard) devices and newer (enhanced) devices, where the newer devices can communicate with each other, while also complying with legacy devices, such as in terms of field length, such that these devices can hold off and not interfere during packet communications. In this way, higher data rates for enhanced devices connected to shared media (e.g., 802.3cg multidrop networks) can be accommodated without negatively affecting the performance of legacy devices. In particular embodiments, the enhanced protocol can utilize a completely different symbol modulation scheme for the data section, while the preamble and length fields may utilize the given standard symbol modulation scheme, whereby the enhanced protocol is incomprehensible to modems/transceivers that understand only the standard symbol modulation scheme. For example, the Ethernet 802.3 standard protocol uses a baseband pulse amplitude modulation (PAM) scheme for sending data, while the enhanced protocol may utilize an orthogonal frequency-division multiplexing (OFDM) or wavelet symbol modulation scheme. This enhanced protocol may allow for data rate increases by factors of 10 to 100 or more, as compared to the standard protocol (e.g., Ethernet 802.3). It is also important in some applications that packets that are sent in the format of the enhanced protocol, but received by modems/transceivers that understand only the standard protocol, not be acted upon (e.g., at least to within a very high probability).
For example, if the packet of the enhanced protocol was sent to control the operation of a large door with the intent to light a warning light (e.g., in an Internet of things (IoT) application), and the receiver that understood only the standard protocol misinterpreted the data to mistakenly close the door rather than light a warning light, then this could cause serious harm, especially if there were people in the way. Thus in particular embodiments, allowing the increased data rate of the enhanced protocol can also ensure that the modems/transceivers using the standard protocol calculate the CRC as invalid and thus do not act upon data in an enhanced protocol packet. In this way, the intentional CRC corruption as described herein allows use in applications where packets of the enhanced protocol must be thrown out by modems understanding only packets of the standard protocol. Many standards start with the same preamble format and then signal somewhere in the header to indicate what the format of the data to be sent will be such that the receivers know what to expect; the length portion can then be in units compatible with that expected data format. However, in certain embodiments, there is no signal in the header to tell the receivers what data format to expect, and the length field that is sent can be timed in units of the expected standard (e.g., slower) data format. In this way, all transceivers conforming to the standard protocol can allow a non-conformant or enhanced transceiver to send more data than would normally be possible without changing the standard and without causing collisions with the slower data packets of the standard protocol. Particular embodiments are suitable for any type of shared media and communication protocols, such as twisted pair channels using BACNET-MSTP formatted data at 78 kbps as a standard protocol, versus an enhanced protocol with formatted data that can result in a greater than 10× increase in data rate (1 Mbps). Certain embodiments are applicable to communication between devices connected to the same physical media, without limiting devices sharing the same media to either all be compliant to some common standard protocol or risk causing collisions and loss of data. In this way, higher performance communication (e.g., a higher data rate, a longer transmission distance, etc.) can be allowed between enhanced devices without causing a loss of data in standard devices that share the same physical media but are not equipped with the enhanced capabilities. Referring now toFIG.2, shown is a schematic block diagram of an example shared media packet with enhanced data section, in accordance with embodiments of the present invention. In example200, the IEEE 802.3 MAC frame format is shown, whereby a start-of-stream delimiter (SSD) marks the beginning of the packetized communication, and an end-of-stream delimiter (ESD) marks the end of the packetized communication. This example packet can include an (e.g., 8 byte) preamble202, which may include a (e.g., 1 byte) start frame delimiter (SFD) value that marks the end of the preamble. This can be followed by a (e.g., 6 byte) destination address204, then by a (e.g., 6 byte) source address206, and then by a (e.g., 2 byte) length field208. In certain embodiments, fields202,204,206, and208can all be sent with standard format symbols, e.g., as formatted in accordance with the standard 802.3 specification, whether the sender is a standard or an enhanced device.
However, the length field, rather than expressing the true value for how many bytes are in the data section, can instead be a length indication or value that represents approximately the length of time that it would take to send the enhanced data, but correlated to how many bytes would be needed using the standard 802.3 specification in order to occupy essentially the same amount of time. Thus, if 1000 bytes of data are to be sent at 10× the data rate of the 802.3 specification, and it takes 100 data symbols at the 802.3 specified rate to send that 1000 bytes of data at the higher rate, then a value of 100 can be provided in the length field. In this way, the standard specification compliant units/devices would know how long of a data block to expect, and thus can track the duration of this data section using their slower bit clocks. Modern modem specifications tolerate the data section being corrupted due to noise, which is why a CRC212character can be included in order to check the validity of the data. After counting out the proper amount of time in the data section210, but not decoding the actual data, the standard compliant receivers can correctly read the value of the CRC that is also sent in the standard format. However, the CRC sent would, with a very high likelihood, not match the CRC calculated using the misread data, thus invalidating the data for those units that cannot decode the enhanced format. As such, the length value can be modified in order to represent the number of standard format symbols to fit the enhanced data waveforms that utilize enhanced format symbols. Further, the CRC as set in field212can be a purposefully incorrect value (e.g., a random number or known/different number) that results in an enhanced packet being discarded by standard devices. For example, prime or other numbers can be utilized in generating CRC values. In particular embodiments, a higher performance waveform (enhanced) can be substituted for the data section (e.g.,210), while standard symbol/waveforms can be employed for the other packet sections (e.g.,202,204,206,208, and212), such that other transceivers can allow the transmission without colliding with its data section. First, length field208may be formed by calculation of the length of the high speed data section and of the corresponding equivalent length at the standard slower speed. This modified slower speed length value can then be sent in the standard specific communication protocol MAC length field208. The data in the high speed format that fits into the length as specified for the data section can be sent in data section210. Following the data section, the CRC value that does not match what was expected in the standard format by standard receivers can be provided at CRC212. The first bit of this CRC value may be time-aligned in order to match the expected timing given the expected length value in208that was sent to standard (e.g., 802.3cg) compliant devices. If the CRC does not match the predicted value from the data packet, such as for a standard device receiving an enhanced data section, the packet can be thrown out by the receiving device. Referring now toFIG.3, shown is a flow diagram of an example method of communicating between devices on a shared media, in accordance with embodiments of the present invention. In example300, a length field that complies with a standard communication protocol can be formed at302. At304, a data field that is compliant with an enhanced communication protocol, and non-compliant with the standard communication protocol, can be formed.
As discussed above, different symbols/waveforms may characterize the enhanced communication protocol relative to the standard communication protocol. At306, the CRC can be set to an incorrect value that otherwise complies with the standard communication protocol. At308, the data signal including the length, the data, and the CRC can be transmitted on the shared media. At310, a standard transceiver that complies with the standard communication protocol may reject the data signal due to the incorrect CRC with respect to the data section as received by that standard transceiver. At312, an enhanced transceiver may properly decode the data signal. Particular embodiments take advantage of the fact that transceivers are tolerant of bit errors in the data section. The standard IEEE 802.3cg waveforms can be sent up until the data section (e.g., in the preamble202, destination address204, source address206, and length208fields). Thus, all devices on the shared media can decode the addresses and the expected length of the data section, since they understand the standard communication protocol. Then, rather than sending the slower 802.3cg waveform in the data section, a completely different type of waveform that holds more data bits than 802.3, but can be accommodated in that designated length of time, can be provided in the data section. Thus, how long the high speed data section would be and what that equivalent length is at the standard slower speed can be calculated. In some cases, the length value might be slightly longer than it takes to send the higher speed data to ensure that the bit timing edges for the field that follows the data field align properly with the expected lower speed bit stream. In any event, the slower speed length value can be sent in the MAC length portion of the packet such that all standard protocol (e.g., 802.3) devices can be made aware of when the data section will end, even though such devices are not equipped to properly decode the enhanced data section.
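A minimal sketch of this equivalent-length calculation, assuming an integer rate factor and ceiling rounding (both illustrative choices; the function name is hypothetical):

    import math

    def equivalent_length(enhanced_bytes: int, rate_factor: int) -> int:
        # Value to advertise in the standard length field so that the
        # standard-rate data window spans the enhanced-rate transmission.
        return math.ceil(enhanced_bytes / rate_factor)

    # FIG. 2 example: 1000 bytes sent at 10x the standard rate occupy
    # the air time of 100 standard-rate bytes.
    assert equivalent_length(1000, 10) == 100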
In any event, the standard 802.3 devices can effectively ignore the enhanced data in these cases due to the addressing mismatch, and regardless of the incorrect CRC value. Further, the MAC addresses can be sent in the slower/standard format that all devices on the shared media are capable of decoding properly. In this way, higher performance proprietary transceivers can be implemented to be deployed alongside already accepted 802.3 standard transceivers without loss of data. Such an enhanced transceiver can communicate with all the standard 802.3 devices at the standard slow speed, but can also communicate faster to devices that have this enhanced transceiver capability. Referring now toFIG.4, shown is a diagram of an example communication between device on shared media, in accordance with embodiments of the present invention. In example400, packet200is shown as being sent via a multidrop network to standard transceivers102and enhanced receivers104. In this particular example, enhanced transceiver104-0can send packet200, which can be discarded by standard transceivers102-0and102-1, but understood and decoded by enhanced transceiver104-1. Generally, an enhanced transceiver104can send a packet to other enhanced receivers using enhanced/high rate modulation, but the standard transceivers102would throw out such packets due to incorrect CRC values. Enhanced transceiver104can also send packets to standard transceivers102using low rate modulation, and these packets would be acceptable. As such, standard transceivers can communicate with each other and/or with enhanced transceivers using low rate modulation, but high rate modulation can effectively be reserved for enhanced transceivers. While the CRC value in CRC section212of packet200can be set to be incorrect such that standard transceivers102-0and102-1will ignore the packet, this incorrect CRC value should not result in the packet being discarded by its intended target (e.g., enhanced transceiver104-1). In some cases, a second CRC value that is in the enhanced communication format can be included at the end of data section210in designated portion402. In other cases, the CRC value in portion212may be understood as correct by the enhanced transceivers, and as incorrect by the standard transceivers. In this way, a CRC value can be understood and correctly set such that enhanced transceivers104may be able to properly decode and accept the packet data. In particular embodiments, a variety of approaches (see, e.g.,FIGS.5and6) can be employed in order to create a CRC value that with high probability will calculate to be wrong for the lower data rate default transceiver (e.g.,102) and a CRC value that will be correct for the high rate data received by the high rate receiver (e.g.,104), assuming the data and CRC are received without actual errors. Referring now toFIG.5, shown is a flow diagram of a first example error check character formation, in accordance with embodiments of the present invention. In example500, the preamble up to the length field can be sent at502. At504, a low data rate equivalent length value can be calculated. At506, a low data rate equivalent length value can be sent. At508, a CRC2 can be calculated using high rate data. At510, the high rate data section that includes CRC2 (e.g., in portion402) therein can be sent. 
At512, the high rate data may be fed through the internal low rate receiver on the transceiver device, as the enhanced transceivers may include both standard/slow and enhanced/fast operational capability and appropriate circuitry therein. At514, a CRC1 can be calculated using the low rate received data. At516, CRC1 can be sent in the CRC portion that uses the low data rate. Accordingly, an advanced transceiver may feed the advanced higher rate data waveform into its own low data rate receiver. Thus, the higher rate data would be interpreted incorrectly because the low data rate receiver may not understand the higher data rate waveforms, but incorrectly in the same way that the other lower rate transceivers (e.g.,102) may interpret the data using their low data rate receivers. Then, the enhanced sending transceiver can calculate the “correct” CRC based on that same incorrect data all the low data rate transceivers receive, and choose to send a different (incorrect) CRC (e.g., in212) than the “correct” one it calculated. Since all standard transceivers102decoding the high rate data may arrive at the same “correct” CRC as the enhanced transmitter, they can interpret a mismatch between their calculated CRC and the different (incorrect) CRC value as sent. As such, the standard transceivers may discard the packet. A second high rate CRC can be calculated from the high rate data, and sent therewith in the high rate data section (e.g., in402). It should be noted that an “error check character” as used herein can include any suitable number of bits (e.g., 8-bits, 16-bits, 32-bits, etc.). Referring now toFIG.6, shown is a flow diagram of a second example error check character formation, in accordance with embodiments of the present invention. In example600, the preamble up to the length field can be sent at602. At604, a low data rate equivalent length value can be calculated. At606, a low data rate equivalent length value can be sent. At608, a CRC2 can be calculated using high rate data. At610, the high rate data section can be sent. At612, the CRC2 value can be sent using the low data rate (e.g., in212). In this way, a high rate CRC can be calculated from the high rate data, but sent in the low rate data section (e.g., in212). CRC algorithms are specifically chosen such that there is a very low probability that different sets of data would produce the same CRC value. Thus, if the CRC that is sent at the low data rate was calculated using the high rate data, it can very likely have a different value than that as calculated by a default/standard transceiver's interpretation of the high rate data as seen by its low rate receiver. Therefore, the high data rate transceiver can send the CRC value as calculated from the high rate data with confidence that transceivers with only the low rate receiver may discard the packet due to a mismatched CRC, while transceivers with the high rate receiver can receive this CRC that is consistent with the high rate data and not throw the packet away if the data and CRC were not corrupted. In another approach, a random CRC value or otherwise known/different number can be utilized for the incorrect CRC value. For example, at614, the high rate data section that includes CRC2 therein (e.g., in portion402) can be sent, and at616, a random CRC value can be sent using the low data rate (e.g., in portion212).
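Both CRC strategies can be sketched in a few lines, with zlib.crc32 standing in for the standard 32-bit CRC and the low rate receiver's (mis)interpretation of the high rate waveform modeled as a deterministic placeholder; all names here are illustrative assumptions, not part of any standard.

    import os
    import struct
    import zlib

    def misread(high_rate_waveform: bytes) -> bytes:
        # Placeholder for feeding the waveform through the internal low
        # rate receiver; deterministic, so every standard node "misreads"
        # the high rate symbols in the same way.
        return bytes(b ^ 0xFF for b in high_rate_waveform)

    def crc_fig5(high_rate_waveform: bytes) -> bytes:
        # FIG. 5: compute CRC1 over the misread data, then deliberately
        # send a different value so standard receivers see a mismatch.
        crc1 = zlib.crc32(misread(high_rate_waveform))
        return struct.pack("!I", (crc1 + 1) & 0xFFFFFFFF)

    def crc_fig6(high_rate_data: bytes) -> bytes:
        # FIG. 6: send CRC2, computed over the high rate data itself;
        # it almost surely differs from the CRC over the misread data.
        return struct.pack("!I", zlib.crc32(high_rate_data))

    def crc_random() -> bytes:
        # Variant at 616: a random 32-bit value in the low rate field.
        return os.urandom(4)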
If the CRC is long, such as 32 bits, and if a random 32 bit number is chosen for the low rate CRC value, the probability of it calculating to be the correct value for the data is less than 1 in 4 billion. In this way, a purposefully incorrect CRC value can be sent in the low data rate portion (e.g.,212), while a second high rate CRC can be calculated from the high rate data and sent within the high rate data section (e.g.,402) of the packet to allow enhanced transceivers104to properly decode the data. Referring now toFIG.7, shown is a flow diagram of an example packet receive operation for an enhanced transceiver, in accordance with embodiments of the present invention. In example700, the preamble up to the length field/portion can be received at702. At704, the first symbol of the data section can be received. At706, the first symbol can be checked to determine if the symbol is a high or low data rate symbol. If the first symbol is a low data rate symbol, the data section can be decoded using the low data rate at708. Also, at710, the CRC following the data section (e.g., in portion212) can be checked using the low data rate. If the CRC is determined to be incorrect at718, the packet may be discarded at720; otherwise, processing may continue. If the first symbol is a high data rate symbol/waveform, the data section can be decoded using the high data rate at712. At714, the high data rate CRC at the end of the data section (e.g., in portion402) can be checked. At716, the low data rate CRC that follows the data section (e.g., in portion212) can be checked. If the CRC is determined to be incorrect at718, the packet may be discarded at720; otherwise, processing may continue. Particular embodiments can be applied to many standards in addition to 802.3cg, thus allowing higher performance communication on the same media without causing loss of data. Many, if not most, standards follow this general model where the length is sent before the data. For example, BACNET-MSTP on twisted pairs and IEEE709.2 for communication over the powerline both send the length, then the data, then the CRC. Particular embodiments may apply to any communication standard that specifies that the length of the data section be sent before the data section is sent. For some protocols, such as 802.11 (i.e., WiFi), both the length and CRC fields are sent before the data payload, and this can also be accommodated in certain embodiments. In any event, information other than the data payload can be sent in the standard format, while the data itself may be sent in the enhanced format, regardless of where the CRC portion is sent. Particular embodiments are suitable for any physical communication media (e.g., twisted pair, radio frequency, coaxial, etc.) so long as the protocol of the transceivers using this shared media specifies that the length of the data segment, when required to be sent by the particular protocol, is sent before the data segment. For some protocols, the length field may actually not be sent at all. In these cases, another mechanism can be used so that receivers are made aware that the data section has ended. For example, some protocols may require checking that signals are currently being sent by looking at, e.g., whether there are baseband transitions occurring at a high enough rate. For other protocols, the data format may require high frequency carriers to be present, in which case the receiver may sense when the carriers go away to know that is the end of the data section.
Both the transition detection and the carrier detection can be termed "carrier sensing." For these protocols, data can be decoded until the data symbols stop being sent, sometimes described as the carrier going away. For such cases, a slightly different technique may be utilized in order to send alternate format data while keeping the standards based devices from transmitting. In certain embodiments, the beginning of the packet before the data section can be sent in the standard format up until the data section. Then, the enhanced format data section may be substituted in during the data section portion of the packet. When the data is complete, any other symbols expected after the data section can accordingly be sent in the specified format, which can include a CRC value, and may be time-aligned with the expected specified symbol boundaries. In these cases, the enhanced format physical symbols can be interpreted by the standard format receivers as having carrier present.

Particular embodiments may be suitable for any communication protocol whereby some devices on the shared media are only able to understand a subset of the possible communication thereon, while other devices can understand a wider range of communication waveforms/symbols. In one example of multidrop Ethernet, the IEEE 802.3cg 10 Mbits/sec mode (standard format) can be enhanced by using something similar to, or the same as, IEEE 1901 (enhanced format) for the data section, which has data rates ranging from 90 Mbits/sec to 1 Gbit/sec, and which can result in a substantially increased (e.g., 10x or more) data rate. Thus, advantages of particular embodiments include enabling implementation of higher performance proprietary transceivers that can be deployed on the same shared physical media (e.g., 802.3) as standard transceivers, without loss of data. This enhanced mode transceiver may have the ability to communicate with all the standard 802.3 devices at the standard slow speed, but also with higher performance (e.g., higher data rate) devices that have this enhanced mode transceiver capability.

While the above examples include circuit, operational, and structural implementations of certain memory arrangements and devices, one skilled in the art will recognize that other technologies and/or architectures, as well as other modes of operation, can be used in accordance with embodiments. Further, one skilled in the art will recognize that other device circuit arrangements, architectures, elements, and the like, may also be used in accordance with embodiments.

The foregoing descriptions of specific embodiments of the present invention have been presented for purposes of illustration and description. They are not intended to be exhaustive or to limit the invention to the precise forms disclosed, and obviously many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the invention and its practical application, to thereby enable others skilled in the art to best utilize the invention and various embodiments with various modifications as are suited to the particular use contemplated. It is intended that the scope of the invention be defined by the claims appended hereto and their equivalents.
34,456
11863321
DETAILED DESCRIPTION Coverage is one of the key factors that an operator considers when commercializing cellular communication networks due to its direct impact on service quality as well as capital expenditure (CAPEX) and operating expense (OPEX). Despite the importance of coverage on the success of new radio (NR) commercialization, a thorough coverage evaluation and a comparison with legacy RATs considering all NR specification details have not been done up to now. Compared to Long Term Evolution (LTE), NR is designed to operate at much higher frequencies such as 28 GHz or 39 GHz in frequency range 2 (FR2). Furthermore, many countries are making available more spectrum in frequency range 1 (FR1), such as 3.5 GHz, which is typically higher in frequency than for LTE or 3G. Due to the higher frequencies, it is inevitable that the wireless channel will be subject to higher path-loss, making it more challenging to maintain an adequate quality of service that is at least equal to that of legacy radio access technologies (RATs).

Embodiments herein describe systems, apparatuses, and methods for implementing coverage enhancements for NR using repetition and feedback from a user equipment. In some embodiments herein the UE uses a soft Acknowledgement/Negative Acknowledgement (ACK/NACK) report to indicate a desired number of repetitions for Physical Downlink Shared Channel (PDSCH) transmissions. In some embodiments herein a user equipment (UE) uses the soft ACK/NACK to indicate an increase to a number of symbols between the end of a PDSCH transmission and the start of a Physical Uplink Control Channel (PUCCH) transmission.

Various operations will be described as multiple discrete operations, in turn, in a manner that is most helpful in understanding the disclosure. The order of the description, however, should not be construed to imply that these operations are necessarily order dependent. In particular, these operations need not be performed in the order of presentation. Additional details and examples are provided with reference to the figures below. The embodiments of the disclosure can be understood by reference to the drawings, wherein like parts are designated by like numerals throughout. The components of the disclosed embodiments, as generally described and illustrated in the figures herein, could be arranged and designed in a wide variety of different configurations. Thus the following detailed description of the embodiments of the systems and methods of the disclosure is not intended to limit the scope of the disclosure, as claimed, but is merely representative of possible embodiments.

FIG.1is a simplified signal flow diagram of an example procedure for transmitting an ACK/NACK report in accordance with one embodiment. As shown, a network node106(e.g., a Next Generation NodeB (gNB)) may transmit a downlink communication to a UE104. The downlink communication may be PDSCH108. In some embodiments, the network node106transmits a bundle of PDSCH repetitions. The more repetitions that the network node106sends, the more likely the UE104is to receive and be able to decode the PDSCH108, thus improving coverage. However, repetitions come at a cost of degraded system efficiency. Embodiments herein balance coverage and efficiency by having the UE104transmit additional information in a soft ACK/NACK report110.
For example, the network node106may be set to transmit an initial number of repetitions of the PDSCH108, and the UE104may indicate whether the initial number of repetitions was sufficient or not. If the initial number of repetitions is sufficient, the UE104includes an ACK message in the soft ACK/NACK report110indicating that the UE was able to successfully decode a transmission block within the PDSCH108. If the initial number of repetitions is not sufficient, the UE104includes a NACK message in the soft ACK/NACK report110indicating that the UE was not able to decode a transmission block within the PDSCH108, and requesting a desired number of additional repetitions in a retransmission.FIGS.2-3include additional details regarding embodiments in which the UE104indicates a desired number of repetitions. The network node106determines if the soft ACK/NACK report110includes a NACK message. If NACK102is present, then the network node106retransmits PDSCH112. The retransmitted PDSCH112includes the number of repetitions indicated in the NACK message as desired by the UE104.

Additionally, in some embodiments the UE104may use the soft ACK/NACK report110to indicate an increase to a number of symbols between the end of a PDSCH transmission and the start of a PUCCH transmission (N1). If the network node106detects from the soft ACK/NACK report110an indication for increasing N1, the network node106may extend the time to make sure the UE104has enough time to perform processing.FIG.5includes additional details regarding a UE indicating a desire to increase N1.

FIG.2is a flow diagram of a method200for a UE to indicate a number of desired repetitions of a PDSCH in accordance with a first embodiment. For PDSCH repetitions, a UE using method200may provide a soft ACK/NACK report instead of a single bit ACK/NACK for the bundle of repetitions. The soft ACK/NACK report may allow the UE to provide more information to a gNB on whether an allocated number of repetitions was sufficient, redundant, or insufficient, and how many more repetitions are needed or desired by the UE. In the illustrated embodiment, the UE receives202a bundle of PDSCH repetitions from a gNB. The UE may attempt204to decode a transmission block within the bundle of PDSCH repetitions. The UE may prepare a soft ACK/NACK report to inform the gNB of whether the UE was able to successfully decode206the transmission block, or whether the UE needs the gNB to retransmit the PDSCH. Additionally, the soft ACK/NACK report may include additional information indicating whether the number of repetitions for the PDSCH was sufficient, redundant, or insufficient, and how many more repetitions are needed or desired by the UE.

The format of the soft ACK/NACK report may depend on whether the transmission block was successfully decoded. Additionally, in some embodiments the ACK/NACK report may use different resources based on whether it is an acknowledge message or a negative acknowledge message. For example, if the UE was able to successfully decode206the transmission block, the UE may generate208a soft ACK/NACK report with a single bit. For instance, when the UE is able to decode the transmission block within the bundle of PDSCH repetitions, the UE may generate and send a single bit ACK message on PUCCH resource A. If the UE fails to decode the transmission block, the UE may generate210and send a soft ACK/NACK report with a multiple bit NACK message. For instance, the UE may send two or more bits instead of a single bit NACK message on PUCCH resource B.
In some embodiments, the bits may be mapped to different code points indicating a number of additional repetitions desired by the UE to successfully decode the transmission block. The gNB receiving the soft ACK/NACK report may add the additional repetitions to the bundle of repetitions originally allocated for the PDSCH and retransmit the PDSCH so the UE may successfully decode the transmission block. For example, a NACK message with two bits may use the two bits to indicate whether one, two, four, or eight more repetitions are needed or desired by the UE for a retransmission of the PDSCH. For instance, in some embodiments if the two bits are 00 then one more repetition is desired by the UE during a retransmission of the PDSCH; if the two bits are 01 then two more repetitions are desired by the UE during a retransmission of the PDSCH; if the two bits are 10 then four more repetitions are desired by the UE during a retransmission of the PDSCH; and if the two bits are 11 then eight more repetitions are desired by the UE during a retransmission of the PDSCH. In some embodiments, the bits may be mapped to different values of repetitions. In some embodiments, additional bits may be used. Further, in some embodiments the bits may also or alternatively be used to indicate a desired redundancy version (RV) sequence and modulation and coding scheme (MCS). The additional information provided by the bits may come at a cost of more complexity and a larger uplink control information (UCI) payload, but may provide valuable information to the gNB. In some embodiments, PUCCH resources A and B can be the same, meaning the gNB may need to go through different UCI payload hypotheses, rather than different PUCCH resources. For instance, rather than transmitting the ACK message on resource A and the NACK message on resource B, the ACK message may be associated with a first hypothesis and the NACK message may be associated with a second hypothesis. A sketch of this two-bit format is shown below.

FIG.3is a flow diagram of a method300for a UE to indicate a number of desired repetitions of a PDSCH in accordance with a second embodiment. In this embodiment, the UE generates a soft ACK/NACK report that is multi-bit for both ACK and NACK messages. In other words, both ACK and NACK are mapped to one or more code points (i.e., a bit sequence). In this embodiment, a single PUCCH resource is used and the gNB does not need to go through different hypotheses. For PDSCH repetitions, a UE using method300may provide a soft ACK/NACK report instead of a single bit ACK/NACK for the bundle of repetitions. The soft ACK/NACK report may allow the UE to provide more information to a gNB on whether an allocated number of repetitions was sufficient, redundant, or insufficient, and how many more repetitions are needed or desired by the UE. In the illustrated embodiment, the UE receives302a bundle of PDSCH repetitions from a gNB. The UE may attempt304to decode a transmission block within the bundle of PDSCH repetitions. The UE may prepare a soft ACK/NACK report to inform the gNB of whether the UE was able to successfully decode306the transmission block, or whether the UE needs the gNB to retransmit the PDSCH. Additionally, the soft ACK/NACK report may include additional information indicating whether the number of repetitions for the PDSCH was sufficient, redundant, or insufficient, and how many more repetitions are needed or desired by the UE. The UE may generate308a soft ACK/NACK report with a multi-bit ACK message or NACK message based on decoding success.
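A minimal Python sketch of the method200 report format described above follows. The dictionary layout and resource names are illustrative assumptions; the two-bit NACK code points are taken from the example mapping in the text (00/01/10/11 requesting one/two/four/eight more repetitions).

    # Example code-point table from the text: extra repetitions -> bits.
    EXTRA_REPS_TO_BITS = {1: "00", 2: "01", 4: "10", 8: "11"}

    def build_method200_report(decoded_ok: bool, extra_reps_desired: int = 0) -> dict:
        if decoded_ok:
            # Single-bit ACK sent on PUCCH resource A.
            return {"resource": "A", "bits": "1"}
        # Multi-bit NACK sent on PUCCH resource B, requesting a retransmission
        # with the indicated number of additional repetitions.
        return {"resource": "B", "bits": EXTRA_REPS_TO_BITS[extra_reps_desired]}

In the variant where resources A and B are the same, the gNB would instead distinguish the two payload hypotheses when decoding the UCI.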
For example, in method300, if the UE was able to successfully decode306the transmission block, the UE may generate308a soft ACK/NACK report with a multi-bit ACK message. If the UE fails to decode the transmission block, the UE may generate308and send a soft ACK/NACK report with a multi-bit NACK message. In some embodiments, the bits may be mapped to different code points indicating a number of additional repetitions desired by the UE to successfully decode the transmission block. The gNB receiving the soft ACK/NACK report may add the additional repetitions to the bundle of repetitions originally allocated for the PDSCH and retransmit the PDSCH so the UE may successfully decode the transmission block. For example, a soft ACK/NACK report may include two bits. In some embodiments if the two bits are 00 then the bits correspond to a NACK indicating eight more repetitions are desired by the UE during a retransmission of the PDSCH; if the two bits are 01 then the bits correspond to a NACK indicating four more repetitions are desired by the UE during a retransmission of the PDSCH; if the two bits are 10 then the bits correspond to a NACK indicating two more repetitions are desired by the UE during a retransmission of the PDSCH; and if the two bits are 11 then the bits correspond to an ACK indicating that the transmission block was successfully decoded. In some embodiments, the bits may be mapped to different values of repetitions. In some embodiments, additional bits may be used. With more bits, ACK may be mapped to different states, and each state can indicate that the number of repetitions originally allocated was too many, as well as how many of those repetitions were extra and were not needed to decode the transmission block. For example, the bits may include code points that can indicate to the gNB that the UE decoded the transmission block in just 2 repetitions.

FIG.4is a flow diagram of a method400for a gNB to determine a desired number of repetitions of a PDSCH in accordance with one embodiment. As illustrated, the gNB may transmit402a bundle of PDSCH repetitions to a UE. Based on whether or not the UE is capable of decoding a transmission block of the PDSCH, the UE may transmit a soft ACK/NACK report. The gNB receives404the soft ACK/NACK report, and decodes406the soft ACK/NACK report. As described with reference toFIGS.2and3, the soft ACK/NACK report may indicate whether an allocated number of repetitions was sufficient, redundant, or insufficient, and how many more repetitions are needed or desired by the UE. The gNB may use this information from the soft ACK/NACK report to determine408whether to maintain the number of repetitions, increase the number of repetitions, or decrease the number of repetitions for a future transmission of the PDSCH. For example, if a NACK is present410in the soft ACK/NACK report, the gNB may retransmit412the PDSCH with the number of additional repetitions indicated in the soft ACK/NACK report. A sketch of this gNB-side handling is shown below.

FIG.5illustrates a flow diagram of a method500for a UE to indicate a desired increase to a number of symbols between the end of a PDSCH transmission and the start of a PUCCH transmission (N1) in accordance with one embodiment. N1 represents the number of symbols between the end of PDSCH and the start of PUCCH transmission. This number N1 depends on the minimum subcarrier spacing (SCS) between PDCCH, PDSCH and PUCCH (min (μ_PDCCH,μ_PDSCH,μ_UL)), and also depends on UE capability. The soft ACK/NACK report may indicate to increase N1, e.g., to N1+d, where d>=0 based on UE capability.
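One plausible sketch of the gNB-side handling of methods 300 and 400 follows; the mapping is an assumed, non-normative table that mirrors the example code points above (00/01/10 as NACKs requesting eight/four/two more repetitions, 11 as ACK).

    # Joint ACK/NACK code points from the method300 example: bits -> (type, extra).
    CODE_POINTS = {"00": ("NACK", 8), "01": ("NACK", 4),
                   "10": ("NACK", 2), "11": ("ACK", 0)}

    def handle_soft_report(bits: str, allocated_reps: int) -> int:
        # Returns the repetition count for a retransmission, or 0 if none is
        # needed (per 406-412: decode the report, then retransmit on NACK).
        kind, extra = CODE_POINTS[bits]
        if kind == "ACK":
            return 0
        return allocated_reps + extra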
The purpose of the extra d symbols in N1+d is to make sure the UE has enough time to perform the required processing. As shown, the UE may receive502the bundle of PDSCH repetitions from the gNB and determine504if there is sufficient time to perform the processing. If there is not sufficient time, the UE may generate506a soft ACK/NACK report to indicate a desired increase to N1. In some embodiments, to report the UE capability, the UE may report N1 only, and d may be fixed in the specification (e.g., d is pre-programmed to equal 1 or 2 symbols). In some embodiments, the UE reports N1 and d together (i.e., N1+d). In some embodiments, several factors may impact the d value determination. For example, handling the soft A/N bits in the soft ACK/NACK report may impact the d value. In some embodiments, the repetition number estimation may impact the d value determination. The repetition number estimation is based on determining an effective signal-to-interference-plus-noise ratio (SINR) from current receptions and mapping the gap between the effective SINR and the desired SINR to an estimate of the further required repetitions. A gNB receiving a soft ACK/NACK report that indicates to increase N1 may accordingly increase the number of symbols between the end of PDSCH and the start of PUCCH transmission.

Example System Architecture

In certain embodiments, 5G System architecture supports data connectivity and services enabling deployments to use techniques such as Network Function Virtualization and Software Defined Networking. The 5G System architecture may leverage service-based interactions between Control Plane Network Functions. Separating User Plane functions from the Control Plane functions allows independent scalability, evolution, and flexible deployments (e.g., centralized location or distributed (remote) location). Modularized function design allows for function re-use and may enable flexible and efficient network slicing. A Network Function and its Network Function Services may interact with another NF and its Network Function Services directly or indirectly via a Service Communication Proxy. Another intermediate function may help route Control Plane messages. The architecture minimizes dependencies between the AN and the CN. The architecture may include a converged core network with a common AN-CN interface that integrates different Access Types (e.g., 3GPP access and non-3GPP access). The architecture may also support a unified authentication framework, stateless NFs where the compute resource is decoupled from the storage resource, capability exposure, concurrent access to local and centralized services (to support low latency services and access to local data networks, User Plane functions can be deployed close to the AN), and/or roaming with both Home routed traffic as well as Local breakout traffic in the visited PLMN.

The 5G architecture may be defined as service-based and the interaction between network functions may include a service-based representation, where network functions (e.g., AMF) within the Control Plane enable other authorized network functions to access their services. The service-based representation may also include point-to-point reference points. A reference point representation may also be used to show the interactions between the NF services in the network functions described by a point-to-point reference point (e.g., N11) between any two network functions (e.g., AMF and SMF).

FIG.6illustrates a service based architecture600in 5GS according to one embodiment.
As described in 3GPP TS 23.501, the service based architecture600comprises NFs such as an NSSF608, a NEF610, an NRF614, a PCF612, a UDM626, an AUSF618, an AMF620, and an SMF622, for communication with a UE616, a (R)AN606, a UPF602, and a DN604. The NFs and NF services can communicate directly, referred to as Direct Communication, or indirectly via a SCP624, referred to as Indirect Communication.FIG.6also shows corresponding service-based interfaces including Nutni, Naf, Nudm, Npcf, Nsmf, Nnrf, Namf, Nnef, Nnssf, and Nausf, as well as reference points N1, N2, N3, N4, and N6. A few example functions provided by the NFs shown inFIG.6are described below.

The NSSF608supports functionality such as: selecting the set of Network Slice instances serving the UE; determining the Allowed NSSAI and, if needed, mapping to the Subscribed S-NSSAIs; determining the Configured NSSAI and, if needed, the mapping to the Subscribed S-NSSAIs; and/or determining the AMF Set to be used to serve the UE, or, based on configuration, a list of candidate AMF(s), possibly by querying the NRF.

The NEF610supports exposure of capabilities and events. NF capabilities and events may be securely exposed by the NEF610(e.g., for 3rd party, Application Functions, and/or Edge Computing). The NEF610may store/retrieve information as structured data using a standardized interface (Nudr) to a UDR. The NEF610may also secure provision of information from an external application to the 3GPP network and may provide for the Application Functions to securely provide information to the 3GPP network (e.g., expected UE behavior, 5GLAN group information, and service specific information), wherein the NEF610may authenticate and authorize and assist in throttling the Application Functions. The NEF610may provide translation of internal-external information by translating between information exchanged with the AF and information exchanged with the internal network function. For example, the NEF610translates between an AF-Service-Identifier and internal 5G Core information such as DNN and S-NSSAI. The NEF610may handle masking of network and user sensitive information to external AFs according to the network policy. The NEF610may receive information from other network functions (based on exposed capabilities of other network functions), and store the received information as structured data using a standardized interface to a UDR. The stored information can be accessed and re-exposed by the NEF610to other network functions and Application Functions, and used for other purposes such as analytics. For external exposure of services related to specific UE(s), the NEF610may reside in the HPLMN. Depending on operator agreements, the NEF610in the HPLMN may have interface(s) with NF(s) in the VPLMN. When a UE is capable of switching between EPC and 5GC, an SCEF+NEF may be used for service exposure.

The NRF614supports the service discovery function by receiving an NF Discovery Request from an NF instance or SCP and providing the information of the discovered NF instances to the NF instance or SCP. The NRF614may also support P-CSCF discovery (a specialized case of AF discovery by SMF), maintain the NF profile of available NF instances and their supported services, and/or notify about newly registered/updated/deregistered NF instances along with their NF services to the subscribed NF service consumer or SCP.
In the context of Network Slicing, based on network implementation, multiple NRFs can be deployed at different levels such as a PLMN level (the NRF is configured with information for the whole PLMN), a shared-slice level (the NRF is configured with information belonging to a set of Network Slices), and/or a slice-specific level (the NRF is configured with information belonging to an S-NSSAI). In the context of roaming, multiple NRFs may be deployed in the different networks, wherein the NRF(s) in the Visited PLMN (known as the vNRF) are configured with information for the visited PLMN, and wherein the NRF(s) in the Home PLMN (known as the hNRF) are configured with information for the home PLMN, referenced by the vNRF via an N27 interface.

The PCF612supports a unified policy framework to govern network behavior. The PCF612provides policy rules to Control Plane function(s) to enforce them. The PCF612accesses subscription information relevant for policy decisions in a Unified Data Repository (UDR). The PCF612may access the UDR located in the same PLMN as the PCF.

The UDM626supports generation of 3GPP AKA Authentication Credentials, User Identification Handling (e.g., storage and management of SUPI for each subscriber in the 5G system), de-concealment of a privacy-protected subscription identifier (SUCI), access authorization based on subscription data (e.g., roaming restrictions), UE's Serving NF Registration Management (e.g., storing serving AMF for UE, storing serving SMF for UE's PDU Session), service/session continuity (e.g., by keeping SMF/DNN assignment of ongoing sessions), MT-SMS delivery, Lawful Intercept Functionality (especially in outbound roaming cases where a UDM is the only point of contact for LI), subscription management, SMS management, 5GLAN group management handling, and/or external parameter provisioning (Expected UE Behavior parameters or Network Configuration parameters). To provide such functionality, the UDM626uses subscription data (including authentication data) that may be stored in a UDR, in which case a UDM implements the application logic and may not require internal user data storage, and several different UDMs may serve the same user in different transactions. The UDM626may be located in the HPLMN of the subscribers it serves, and may access the information of the UDR located in the same PLMN.

The AF628interacts with the Core Network to provide services that, for example, support the following: application influence on traffic routing; accessing the NEF610; interacting with the Policy framework for policy control; and/or IMS interactions with 5GC. Based on operator deployment, Application Functions considered to be trusted by the operator can be allowed to interact directly with relevant Network Functions. Application Functions not allowed by the operator to access directly the Network Functions may use the external exposure framework via the NEF610to interact with relevant Network Functions.

The AUSF618supports authentication for 3GPP access and untrusted non-3GPP access. The AUSF618may also provide support for Network Slice-Specific Authentication and Authorization.
The AMF620supports termination of the RAN CP interface (N2), termination of NAS (N1) for NAS ciphering and integrity protection, registration management, connection management, reachability management, Mobility Management, lawful intercept (for AMF events and interface to the LI System), transport for SM messages between UE and SMF, transparent proxy for routing SM messages, Access Authentication, Access Authorization, transport for SMS messages between UE and SMSF, SEAF, Location Services management for regulatory services, transport for Location Services messages between UE and LMF as well as between RAN and LMF, EPS Bearer ID allocation for interworking with EPS, UE mobility event notification, Control Plane CIoT 5GS Optimization, User Plane CIoT 5GS Optimization, provisioning of external parameters (Expected UE Behavior parameters or Network Configuration parameters), and/or Network Slice-Specific Authentication and Authorization. Some or all of the AMF functionalities may be supported in a single instance of the AMF620. Regardless of the number of Network functions, in certain embodiments there is only one NAS interface instance per access network between the UE and the CN, terminated at one of the Network functions that implements at least NAS security and Mobility Management. The AMF620may also include policy related functionalities.

In addition to the functionalities described above, the AMF620may include the following functionality to support non-3GPP access networks: support of the N2 interface with N3IWF/TNGF, over which some information (e.g., 3GPP Cell identification) and procedures (e.g., Handover related) defined over 3GPP access may not apply, and non-3GPP access specific information may be applied that does not apply to 3GPP accesses; support of NAS signaling with a UE over N3IWF/TNGF, wherein some procedures supported by NAS signaling over 3GPP access may not be applicable to untrusted non-3GPP (e.g., Paging) access; support of authentication of UEs connected over N3IWF/TNGF; management of mobility, authentication, and separate security context state(s) of a UE connected via a non-3GPP access or connected via a 3GPP access and a non-3GPP access simultaneously; support of a coordinated RM management context valid over a 3GPP access and a non-3GPP access; and/or support of dedicated CM management contexts for the UE for connectivity over non-3GPP access. Not all of the above functionalities may be required to be supported in an instance of a Network Slice.
The SMF622supports Session Management (e.g., Session Establishment, modification, and release, including maintaining the tunnel between the UPF and the AN node), UE IP address allocation & management (including optional Authorization) wherein the UE IP address may be received from a UPF or from an external data network, DHCPv4 (server and client) and DHCPv6 (server and client) functions, functionality to respond to Address Resolution Protocol requests and/or IPv6 Neighbor Solicitation requests based on local cache information for the Ethernet PDUs (e.g., the SMF responds to the ARP and/or the IPv6 Neighbor Solicitation Request by providing the MAC address corresponding to the IP address sent in the request), selection and control of User Plane functions including controlling the UPF to proxy ARP or IPv6 Neighbor Discovery or to forward all ARP/IPv6 Neighbor Solicitation traffic to the SMF for Ethernet PDU Sessions, traffic steering configuration at the UPF to route traffic to proper destinations, 5G VN group management (e.g., maintain the topology of the involved PSA UPFs, establish and release the N19 tunnels between PSA UPFs, configure traffic forwarding at the UPF to apply local switching, and/or N6-based forwarding or N19-based forwarding), termination of interfaces towards Policy control functions, lawful intercept (for SM events and interface to the LI System), charging data collection and support of charging interfaces, control and coordination of charging data collection at the UPF, termination of SM parts of NAS messages, Downlink Data Notification, initiation of AN specific SM information sent via the AMF over N2 to the AN, determination of the SSC mode of a session, Control Plane CIoT 5GS Optimization, header compression, acting as I-SMF in deployments where the I-SMF can be inserted/removed/relocated, provisioning of external parameters (Expected UE Behavior parameters or Network Configuration parameters), P-CSCF discovery for IMS services, roaming functionality (e.g., handling local enforcement to apply QoS SLAs (VPLMN), charging data collection and charging interface (VPLMN), and/or lawful intercept (in VPLMN for SM events and interface to the LI System)), interaction with an external DN for transport of signaling for PDU Session authentication/authorization by the external DN, and/or instructing the UPF and NG-RAN to perform redundant transmission on N3/N9 interfaces. Some or all of the SMF functionalities may be supported in a single instance of a SMF. However, in certain embodiments, not all of the functionalities are required to be supported in an instance of a Network Slice. In addition to these functionalities, the SMF622may include policy related functionalities.

The SCP624includes one or more of the following functionalities: Indirect Communication; Delegated Discovery; message forwarding and routing to destination NF/NF services; communication security (e.g., authorization of the NF Service Consumer to access the NF Service Producer's API), load balancing, monitoring, overload control, etc.; and/or optionally interacting with the UDR to resolve the UDM Group ID/UDR Group ID/AUSF Group ID/PCF Group ID/CHF Group ID/HSS Group ID based on UE identity (e.g., SUPI or IMPI/IMPU). Some or all of the SCP functionalities may be supported in a single instance of an SCP. In certain embodiments, the SCP624may be deployed in a distributed manner and/or more than one SCP can be present in the communication path between NF Services. SCPs can be deployed at the PLMN level, the shared-slice level, and the slice-specific level.
It may be left to operator deployment to ensure that SCPs can communicate with relevant NRFs.

The UE616may include a device with radio communication capabilities. For example, the UE616may comprise a smartphone (e.g., handheld touchscreen mobile computing devices connectable to one or more cellular networks). The UE616may also comprise any mobile or non-mobile computing device, such as Personal Data Assistants (PDAs), pagers, laptop computers, desktop computers, wireless handsets, or any computing device including a wireless communications interface. A UE may also be referred to as a client, mobile, mobile device, mobile terminal, user terminal, mobile unit, mobile station, mobile user, subscriber, user, remote station, access agent, user agent, receiver, radio equipment, reconfigurable radio equipment, or reconfigurable mobile device. The UE616may comprise an IoT UE, which can comprise a network access layer designed for low-power IoT applications utilizing short-lived UE connections. An IoT UE can utilize technologies (e.g., M2M, MTC, or mMTC technology) for exchanging data with an MTC server or device via a PLMN, other UEs using ProSe or D2D communications, sensor networks, or IoT networks. The M2M or MTC exchange of data may be a machine-initiated exchange of data. An IoT network describes interconnecting IoT UEs, which may include uniquely identifiable embedded computing devices (within the Internet infrastructure). The IoT UEs may execute background applications (e.g., keep-alive messages, status updates, etc.) to facilitate the connections of the IoT network.

The UE616may be configured to connect or communicatively couple with the (R)AN606through a radio interface630, which may be a physical communication interface or layer configured to operate with cellular communication protocols such as a GSM protocol, a CDMA network protocol, a Push-to-Talk (PTT) protocol, a PTT over Cellular (POC) protocol, a UMTS protocol, a 3GPP LTE protocol, a 5G protocol, a NR protocol, and the like. For example, the UE616and the (R)AN606may use a Uu interface (e.g., an LTE-Uu interface) to exchange control plane data via a protocol stack comprising a PHY layer, a MAC layer, an RLC layer, a PDCP layer, and an RRC layer. A DL transmission may be from the (R)AN606to the UE616and a UL transmission may be from the UE616to the (R)AN606. The UE616may further use a sidelink to communicate directly with another UE (not shown) for D2D, P2P, and/or ProSe communication. For example, a ProSe interface may comprise one or more logical channels, including but not limited to a Physical Sidelink Control Channel (PSCCH), a Physical Sidelink Shared Channel (PSSCH), a Physical Sidelink Discovery Channel (PSDCH), and a Physical Sidelink Broadcast Channel (PSBCH).

The (R)AN606can include one or more access nodes, which may be referred to as base stations (BSs), NodeBs, evolved NodeBs (eNBs), next Generation NodeBs (gNBs), RAN nodes, controllers, transmission reception points (TRPs), and so forth, and can comprise ground stations (e.g., terrestrial access points) or satellite stations providing coverage within a geographic area (e.g., a cell). The (R)AN606may include one or more RAN nodes for providing macrocells, picocells, femtocells, or other types of cells. A macrocell may cover a relatively large geographic area (e.g., several kilometers in radius) and may allow unrestricted access by UEs with service subscription.
A picocell may cover a relatively small geographic area and may allow unrestricted access by UEs with service subscription. A femtocell may cover a relatively small geographic area (e.g., a home) and may allow restricted access by UEs having an association with the femtocell (e.g., UEs in a Closed Subscriber Group (CSG), UEs for users in the home, etc.). Although not shown, multiple RAN nodes (such as the (R)AN606) may be used, wherein an Xn interface is defined between two or more nodes. In some implementations, the Xn interface may include an Xn user plane (Xn-U) interface and an Xn control plane (Xn-C) interface. The Xn-U may provide non-guaranteed delivery of user plane PDUs and support/provide data forwarding and flow control functionality. The Xn-C may provide management and error handling functionality, functionality to manage the Xn-C interface, and mobility support for the UE616in a connected mode (e.g., CM-CONNECTED), including functionality to manage the UE mobility for connected mode between one or more (R)AN nodes. The mobility support may include context transfer from an old (source) serving (R)AN node to a new (target) serving (R)AN node, and control of user plane tunnels between the old (source) serving (R)AN node and the new (target) serving (R)AN node.

The UPF602may act as an anchor point for intra-RAT and inter-RAT mobility, an external PDU session point of interconnect to the DN604, and a branching point to support multi-homed PDU sessions. The UPF602may also perform packet routing and forwarding, packet inspection, enforcement of the user plane part of policy rules, lawful intercept of packets (UP collection), traffic usage reporting, QoS handling for the user plane (e.g., packet filtering, gating, UL/DL rate enforcement), Uplink Traffic verification (e.g., SDF to QoS flow mapping), transport level packet marking in the uplink and downlink, and downlink packet buffering and downlink data notification triggering. The UPF602may include an uplink classifier to support routing traffic flows to a data network. The DN604may represent various network operator services, Internet access, or third party services. The DN604may include, for example, an application server.

FIG.7is a block diagram of an example UE700configurable according to various embodiments of the present disclosure, including by execution of instructions on a computer-readable medium that correspond to any of the example methods and/or procedures described herein. The UE700comprises one or more processor702, transceiver704, memory706, user interface708, and control interface710. The one or more processor702may include, for example, an application processor, an audio digital signal processor, a central processing unit, and/or one or more baseband processors. Each of the one or more processor702may include internal memory and/or may include interface(s) to communicate with external memory (including the memory706). The internal or external memory can store software code, programs, and/or instructions for execution by the one or more processor702to configure and/or facilitate the UE700to perform various operations, including operations described herein.
For example, execution of the instructions can configure the UE700to communicate using one or more wired or wireless communication protocols, including one or more wireless communication protocols standardized by 3GPP such as those commonly known as 5G/NR, LTE, LTE-A, UMTS, HSPA, GSM, GPRS, EDGE, etc., or any other current or future protocols that can be utilized in conjunction with the one or more transceiver704, user interface708, and/or control interface710. As another example, the one or more processor702may execute program code stored in the memory706or other memory that corresponds to MAC, RLC, PDCP, and RRC layer protocols standardized by 3GPP (e.g., for NR and/or LTE). As a further example, the processor702may execute program code stored in the memory706or other memory that, together with the one or more transceiver704, implements corresponding PHY layer protocols, such as Orthogonal Frequency Division Multiplexing (OFDM), Orthogonal Frequency Division Multiple Access (OFDMA), and Single-Carrier Frequency Division Multiple Access (SC-FDMA).

The memory706may comprise memory area for the one or more processor702to store variables used in protocols, configuration, control, and other functions of the UE700, including operations corresponding to, or comprising, any of the example methods and/or procedures described herein. Moreover, the memory706may comprise non-volatile memory (e.g., flash memory), volatile memory (e.g., static or dynamic RAM), or a combination thereof. Furthermore, the memory706may interface with a memory slot by which removable memory cards in one or more formats (e.g., SD Card, Memory Stick, Compact Flash, etc.) can be inserted and removed.

The one or more transceiver704may include radio-frequency transmitter and/or receiver circuitry that facilitates the UE700communicating with other equipment supporting like wireless communication standards and/or protocols. For example, the one or more transceiver704may include switches, mixer circuitry, amplifier circuitry, filter circuitry, and synthesizer circuitry. Such RF circuitry may include a receive signal path with circuitry to down-convert RF signals received from a front-end module (FEM) and provide baseband signals to a baseband processor of the one or more processor702. The RF circuitry may also include a transmit signal path which may include circuitry to up-convert baseband signals provided by a baseband processor and provide RF output signals to the FEM for transmission. The FEM may include a receive signal path that may include circuitry configured to operate on RF signals received from one or more antennas, amplify the received signals, and provide the amplified versions of the received signals to the RF circuitry for further processing. The FEM may also include a transmit signal path that may include circuitry configured to amplify signals for transmission provided by the RF circuitry for transmission by one or more antennas. In various embodiments, the amplification through the transmit or receive signal paths may be done solely in the RF circuitry, solely in the FEM, or in both the RF circuitry and the FEM circuitry. In some embodiments, the FEM circuitry may include a TX/RX switch to switch between transmit mode and receive mode operation.
In some exemplary embodiments, the one or more transceiver704includes a transmitter and a receiver that enable the UE700to communicate with various 5G/NR networks according to various protocols and/or methods proposed for standardization by 3GPP and/or other standards bodies. For example, such functionality can operate cooperatively with the one or more processor702to implement a PHY layer based on OFDM, OFDMA, and/or SC-FDMA technologies, such as described herein with respect to other figures.

The user interface708may take various forms depending on particular embodiments, or can be absent from the UE700. In some embodiments, the user interface708includes a microphone, a loudspeaker, slidable buttons, depressible buttons, a display, a touchscreen display, a mechanical or virtual keypad, a mechanical or virtual keyboard, and/or any other user-interface features commonly found on mobile phones. In other embodiments, the UE700may comprise a tablet computing device including a larger touchscreen display. In such embodiments, one or more of the mechanical features of the user interface708may be replaced by comparable or functionally equivalent virtual user interface features (e.g., virtual keypad, virtual buttons, etc.) implemented using the touchscreen display, as familiar to persons of ordinary skill in the art. In other embodiments, the UE700may be a digital computing device, such as a laptop computer, desktop computer, workstation, etc. that comprises a mechanical keyboard that can be integrated, detached, or detachable depending on the particular exemplary embodiment. Such a digital computing device can also comprise a touch screen display. Many example embodiments of the UE700having a touch screen display are capable of receiving user inputs, such as inputs related to exemplary methods and/or procedures described herein or otherwise known to persons of ordinary skill in the art.

In some exemplary embodiments of the present disclosure, the UE700may include an orientation sensor, which can be used in various ways by features and functions of the UE700. For example, the UE700can use outputs of the orientation sensor to determine when a user has changed the physical orientation of the UE700's touch screen display. An indication signal from the orientation sensor can be available to any application program executing on the UE700, such that an application program can change the orientation of a screen display (e.g., from portrait to landscape) automatically when the indication signal indicates an approximate 90 degree change in physical orientation of the device. In this manner, the application program can maintain the screen display in a manner that is readable by the user, regardless of the physical orientation of the device. In addition, the output of the orientation sensor can be used in conjunction with various exemplary embodiments of the present disclosure.

The control interface710may take various forms depending on particular embodiments. For example, the control interface710may include an RS-232 interface, an RS-485 interface, a USB interface, an HDMI interface, a Bluetooth interface, an IEEE 1394 ("Firewire") interface, an I2C interface, a PCMCIA interface, or the like. In some exemplary embodiments of the present disclosure, the control interface710can comprise an IEEE 802.3 Ethernet interface such as described above.
In some embodiments of the present disclosure, the control interface710may include analog interface circuitry including, for example, one or more digital-to-analog (D/A) and/or analog-to-digital (A/D) converters. Persons of ordinary skill in the art can recognize that the above list of features, interfaces, and radio-frequency communication standards is merely exemplary, and not limiting to the scope of the present disclosure. In other words, the UE700may include more functionality than is shown inFIG.7including, for example, a video and/or still image camera, microphone, media player and/or recorder, etc. Moreover, the one or more transceiver704may include circuitry for communication using additional radio-frequency communication standards including Bluetooth, GPS, and/or others. Moreover, the one or more processor702may execute software code stored in the memory706to control such additional functionality. For example, directional velocity and/or position estimates output from a GPS receiver can be available to any application program executing on the UE700, including various exemplary methods and/or computer-readable media according to various exemplary embodiments of the present disclosure.

FIG.8is a block diagram of an example network node800configurable according to various embodiments of the present disclosure, including by execution of instructions on a computer-readable medium that correspond to any of the example methods and/or procedures described herein. The network node800includes one or more processor802, a radio network interface804, a memory806, a core network interface808, and other interfaces810. The network node800may comprise, for example, a base station, eNB, gNB, access node, or component thereof. The one or more processor802may include any type of processor or processing circuitry and may be configured to perform any of the methods or procedures disclosed herein. The memory806may store software code, programs, and/or instructions executed by the one or more processor802to configure the network node800to perform various operations, including operations described herein. For example, execution of such stored instructions can configure the network node800to communicate with one or more other devices using protocols according to various embodiments of the present disclosure, including one or more methods and/or procedures discussed above. Furthermore, execution of such stored instructions can also configure and/or facilitate the network node800to communicate with one or more other devices using other protocols or protocol layers, such as one or more of the PHY, MAC, RLC, PDCP, and RRC layer protocols standardized by 3GPP for LTE, LTE-A, and/or NR, or any other higher-layer protocols utilized in conjunction with the radio network interface804and the core network interface808. By way of example and without limitation, the core network interface808may comprise an S1 interface and the radio network interface804may comprise a Uu interface, as standardized by 3GPP. The memory806may also store variables used in protocols, configuration, control, and other functions of the network node800. As such, the memory806may comprise non-volatile memory (e.g., flash memory, hard disk, etc.), volatile memory (e.g., static or dynamic RAM), network-based (e.g., "cloud") storage, or a combination thereof.
The radio network interface804may include transmitters, receivers, signal processors, ASICs, antennas, beamforming units, and other circuitry that enables the network node800to communicate with other equipment such as, in some embodiments, a plurality of compatible user equipment (UE). In some embodiments, the network node800may include various protocols or protocol layers, such as the PHY, MAC, RLC, PDCP, and RRC layer protocols standardized by 3GPP for LTE, LTE-A, and/or 5G/NR. According to further embodiments of the present disclosure, the radio network interface804may include a PHY layer based on OFDM, OFDMA, and/or SC-FDMA technologies. In some embodiments, the functionality of such a PHY layer can be provided cooperatively by the radio network interface804and the one or more processor802.

The core network interface808may include transmitters, receivers, and other circuitry that enables the network node800to communicate with other equipment in a core network such as, in some embodiments, circuit-switched (CS) and/or packet-switched (PS) core networks. In some embodiments, the core network interface808may include the S1 interface standardized by 3GPP. In some embodiments, the core network interface808may include one or more interfaces to one or more SGWs, MMEs, SGSNs, GGSNs, and other physical devices that comprise functionality found in GERAN, UTRAN, E-UTRAN, and CDMA2000 core networks that are known to persons of ordinary skill in the art. In some embodiments, these one or more interfaces may be multiplexed together on a single physical interface. In some embodiments, lower layers of the core network interface808may include one or more of asynchronous transfer mode (ATM), Internet Protocol (IP)-over-Ethernet, SDH over optical fiber, T1/E1/PDH over a copper wire, microwave radio, or other wired or wireless transmission technologies known to those of ordinary skill in the art.

The other interfaces810may include transmitters, receivers, and other circuitry that enables the network node800to communicate with external networks, computers, databases, and the like for purposes of operations, administration, and maintenance of the network node800or other network equipment operably connected thereto.

For one or more embodiments, at least one of the components set forth in one or more of the preceding figures may be configured to perform one or more operations, techniques, processes, and/or methods as set forth in the Example Section below. For example, the baseband circuitry as described above in connection with one or more of the preceding figures may be configured to operate in accordance with one or more of the examples set forth below. For another example, circuitry associated with a UE, base station, network element, etc. as described above in connection with one or more of the preceding figures may be configured to operate in accordance with one or more of the examples set forth below in the example section.

Example Section

The following examples pertain to further embodiments.

Example 1 may include an apparatus comprising means to perform one or more elements of a method described in or related to any of the methods or processes described herein.
Example 2 may include one or more non-transitory computer-readable media comprising instructions to cause an electronic device, upon execution of the instructions by one or more processors of the electronic device, to perform one or more elements of a method described in or related to any of the above Examples, or any other method or process described herein.

Example 3 may include an apparatus comprising logic, modules, or circuitry to perform one or more elements of a method described in or related to any of the above Examples, or any other method or process described herein.

Example 4 may include a method, technique, or process as described in or related to any of the above Examples, or portions or parts thereof.

Example 5 may include an apparatus comprising: one or more processors and one or more computer-readable media comprising instructions that, when executed by the one or more processors, cause the one or more processors to perform the method, techniques, or process as described in or related to any of the above Examples, or portions thereof.

Example 6 may include a signal as described in or related to any of the above Examples, or portions or parts thereof.

Example 7 may include a datagram, packet, frame, segment, protocol data unit (PDU), or message as described in or related to any of the above Examples, or portions or parts thereof, or otherwise described in the present disclosure.

Example 8 may include a signal encoded with data as described in or related to any of the above Examples, or portions or parts thereof, or otherwise described in the present disclosure.

Example 9 may include a signal encoded with a datagram, packet, frame, segment, PDU, or message as described in or related to any of the above Examples, or portions or parts thereof, or otherwise described in the present disclosure.

Example 10 may include an electromagnetic signal carrying computer-readable instructions, wherein execution of the computer-readable instructions by one or more processors is to cause the one or more processors to perform the method, techniques, or process as described in or related to any of the above Examples, or portions thereof.

Example 11 may include a computer program comprising instructions, wherein execution of the program by a processing element is to cause the processing element to carry out the method, techniques, or process as described in or related to any of the above Examples, or portions thereof.

Example 12 may include a signal in a wireless network as shown and described herein.

Example 13 may include a method of communicating in a wireless network as shown and described herein.

Example 14 may include a system for providing wireless communication as shown and described herein.

Example 15 may include a device for providing wireless communication as shown and described herein.

Any of the above described examples may be combined with any other example (or combination of examples), unless explicitly stated otherwise. The foregoing description of one or more implementations provides illustration and description, but is not intended to be exhaustive or to limit the scope of embodiments to the precise form disclosed. Modifications and variations are possible in light of the above teachings or may be acquired from practice of various embodiments. Embodiments and implementations of the systems and methods described herein may include various operations which may be embodied in machine-executable instructions to be executed by a computer system.
A computer system may include one or more general-purpose or special-purpose computers (or other electronic devices). The computer system may include hardware components that include specific logic for performing the operations or may include a combination of hardware, software, and/or firmware. It should be recognized that the systems described herein include descriptions of specific embodiments. These embodiments can be combined into single systems, partially combined into other systems, split into multiple systems or divided or combined in other ways. In addition, it is contemplated that parameters, attributes, aspects, etc. of one embodiment can be used in another embodiment. The parameters, attributes, aspects, etc. are merely described in one or more embodiments for clarity, and it is recognized that the parameters, attributes, aspects, etc. can be combined with or substituted for parameters, attributes, aspects, etc. of another embodiment unless specifically disclaimed herein. It is well understood that the use of personally identifiable information should follow privacy policies and practices that are generally recognized as meeting or exceeding industry or governmental requirements for maintaining the privacy of users. In particular, personally identifiable information data should be managed and handled so as to minimize risks of unintentional or unauthorized access or use, and the nature of authorized use should be clearly indicated to users. Although the foregoing has been described in some detail for purposes of clarity, it will be apparent that certain changes and modifications may be made without departing from the principles thereof. It should be noted that there are many alternative ways of implementing both the processes and apparatuses described herein. Accordingly, the present embodiments are to be considered illustrative and not restrictive, and the description is not to be limited to the details given herein, but may be modified within the scope and equivalents of the appended claims.
55,509
11863322
DESCRIPTION OF EMBODIMENTS

FIG. 3 is a schematic diagram of an architecture of a possible communication system according to an embodiment of this disclosure. The communication system includes a plurality of hosts and a plurality of transmission nodes. The transmission nodes at an access layer and an aggregation layer may be divided into different clusters, and in each cluster, each transmission node at the access layer is connected to each transmission node at the aggregation layer. In addition, each transmission node at the aggregation layer is connected to one or more transmission nodes at a core layer, so that each cluster may be connected to any transmission node at the core layer. In the communication system architecture shown in FIG. 3, there are a plurality of transmission paths between any two transmission nodes at the access layer, to implement high-bandwidth and low-delay communication between the two transmission nodes. In addition, during data transmission, any host in FIG. 3 may be used as a source node for data transmission, namely, a data transmitting end, or may be used as a destination node for data transmission, namely, a data receiving end, to transmit data by using a plurality of transmission paths established by a plurality of transmission nodes. For example, a host A is used as the source node for data transmission, and a host B is used as the destination node for data transmission. Transmission paths existing between the source node and the destination node include a transmission path 1: a transmission node 11 to a transmission node 21 to a transmission node 31 to a transmission node 24 to a transmission node 16; a transmission path 2: the transmission node 11 to a transmission node 22 to the transmission node 31 to the transmission node 24 to the transmission node 16; a transmission path 3: the transmission node 11 to a transmission node 23 to the transmission node 31 to the transmission node 24 to the transmission node 16; a transmission path 4: the transmission node 11 to the transmission node 23 to a transmission node 32 to the transmission node 24 to the transmission node 16; a transmission path 5: the transmission node 11 to the transmission node 23 to the transmission node 32 to a transmission node 25 to the transmission node 16; and other transmission paths.

This disclosure aims to resolve the problem of how to distribute the amount of data transmitted on each transmission path when there are a plurality of transmission paths between a source node and a destination node for data transmission, to fully utilize the bandwidth resources of the transmission paths, avoid disorder of the transmitted data, and implement optimal load sharing.

Before embodiments of this disclosure are described, some terms in this disclosure are first described, to help a person skilled in the art have a better understanding.

(1) A host is a device having a receiving and sending function, for example, a handheld device, a vehicle-mounted device, a wearable device, a computing device, a service server, a mobile station (MS), another processing device connected to a wireless modem that has a wireless/wired connection function, a mobile terminal that communicates with one or more core networks by using an access network, and the like. This is not limited in this embodiment of this disclosure.

(2) A transmission node is a device having a data exchange (e.g., forwarding) function, and may be a switch, a device such as a router or a gateway, or another apparatus or device having a data exchange function. This is not limited in this embodiment of this disclosure.
(3) A 5-tuple usually refers to a source IP address, a source port, a destination IP address, a destination port, and a transport layer protocol (for example, TCP). For example, "192.168.1.1 10000 TCP 121.14.88.76 80" is a 5-tuple. It means that a node whose IP address is 192.168.1.1 is connected, through a port 10000 and by using the TCP protocol, to a node whose IP address is 121.14.88.76 and whose port number is 80. In this embodiment of this disclosure, the transmission node may perform hash calculation based on a 5-tuple of a received data packet, and select an egress port based on the hash value obtained through calculation. For example, if the hash value obtained after the 5-tuple hash calculation is 80, an egress port 80 of the transmission node is selected to forward the data packet; and if the hash value obtained after the 5-tuple hash calculation is 100, an egress port 100 of the transmission node is selected to forward the data packet.

(4) An equal cost multipath (ECMP) solution is a flow-by-flow load balancing solution that is unaware of the congestion status. As shown in FIG. 4, in an ECMP scheduling solution, the egress ports for forwarding different data flows are calculated by using a hash method based on a 5-tuple, to complete one-to-one mapping between each data flow and an end-to-end transmission path, and evenly hash different data flows to the end-to-end transmission paths. Because the 5-tuple of each flow is fixed, the egress port of each ECMP hash is also uniquely determined, and the end-to-end transmission path of the flow is therefore also uniquely determined (a short illustrative sketch of this hash-based port selection follows item (7) below). However, the biggest problem of the ECMP load sharing solution is that when traffic sizes are not evenly distributed on a network (an elephant flow and a mouse flow are mixed), the elephant flow and the mouse flow are treated equally and are allocated to different transmission paths. This results in load imbalance between the transmission paths.
(5) A Presto solution is a fine-grained load balancing solution that is unaware of the congestion status. As shown in FIG. 5, the Presto solution does not adjust with a real-time change of the data center status in the entire scheduling process, and its scheduling policy is fixed. In this solution, a round robin method is used, and a plurality of data flows arriving at an end side are sequentially scheduled to different end-to-end transmission paths in turn at a granularity of a flow cell (a data flow with a fixed size of 64 KB). Because this type of technical solution does not need to probe and record real-time status information, the execution efficiency of the solution is very high, and no additional storage and computing overheads are introduced. A main principle of the Presto solution is as follows. It is assumed that the plurality of end-to-end transmission paths in a network are equivalent. Therefore, a simple hash method is used to implement one-to-one mapping between each data flow and each end-to-end transmission path, so that the data flows are evenly distributed and scheduled to the end-to-end transmission paths. However, in a data center network or a wide area network (WAN) of an asymmetric architecture, the quantities of links connected to the transmission nodes in a plurality of end-to-end transmission paths are usually different. In addition, due to heterogeneous devices, the physical transmission rates of the links may also be different. Furthermore, in a scheduling process, because a data flow may arrive suddenly, the load status of each transmission node on each end-to-end transmission path dynamically changes. As a result, the performance of the end-to-end transmission paths in an asymmetric network is likely to vary significantly. In this case, if the data flows are simply evenly scheduled to the end-to-end transmission paths, severe load imbalance of the transmission paths is caused: some transmission paths are heavily congested, while the load of other transmission paths is relatively light. In addition, in an asymmetric network architecture, a data disorder problem may occur.

(6) A CONGA solution is a scheduling policy solution based on end-to-end congestion information. As shown in FIG. 6, before each data scheduling decision is made, the congestion information of all end-to-end transmission paths that pass through an edge transmission node is detected (generally, the congestion status of a path may be approximately analyzed by using an indicator such as round trip time (RTT) or an explicit congestion notification (ECN)). In addition, the congestion statuses of all transmission paths are recorded in a path status table. When a scheduling decision is made, the scheduling algorithm queries the congestion statuses of all end-to-end transmission paths in the path status table, to complete deterministic scheduling of data. For example, a newly generated data flow is scheduled to the end-to-end transmission path with the lightest current load (the method of the CONGA policy). In this policy, the congestion status of the data center network is detected in real time, and the data scheduling solution is continuously adjusted in real time, to implement load balancing across a plurality of transmission paths. In terms of scheduling granularity, to avoid the data disorder problem caused by packet-by-packet scheduling, CONGA uses a scheduling solution based on a flowlet (a burst of packets) to ensure that the packet interval between flowlets is greater than the maximum delay difference of a path. In this way, it is ensured that data disorder is not caused between flowlets on a plurality of paths. However, the biggest disadvantage of the CONGA solution is that probing the status information of all transmission paths in a large-scale network consumes a large amount of information polling time, which results in severely delayed information in the path status table. Serious information polling and time-consuming control are not conducive to real-time scheduling of data flows. In addition, maintaining the congestion information of all end-to-end transmission paths and executing the scheduling policy in a large-scale network cause great storage and computing overheads. Therefore, a policy such as CONGA can be used only in a small-scale layer-2 data center network, and cannot be applied to a large-scale network.

(7) A DRILL solution is a scheduling policy based on local state information. As shown in FIG. 7, in a local scheduling policy, local scheduling of data is completed by using only the local congestion information of a node. Therefore, when this technical solution is applied, each node on an end-to-end transmission path makes a scheduling decision again, instead of a scheduling decision being made only at an edge transmission node or a source node. During each local scheduling, the scheduling policy generates a scheduling decision based on the congestion information (for example, the data backlog amount of each port) of the local node. In this solution, only the status information of local nodes needs to be recorded, and the status information of a global end-to-end transmission path does not need to be recorded. Therefore, the information storage and polling overheads are greatly optimized, and real-time scheduling requirements can be met. However, according to a local-information-based scheduling policy, global scheduling decisions are distributed to local nodes, and end-to-end routing of network data is completed by sequentially executing local node policies on an end-to-end transmission path. The biggest problem of this solution is that the response to link and device faults is slow. In this policy, if a fault occurs on a node (for example, a transmission node), only a neighboring node of the faulty node may detect the fault in time. A remote node cannot detect the congestion and fault status because it does not detect the status information of the end-to-end transmission path. Therefore, when a fault or congestion occurs on a remote node of the end-to-end transmission path, because a local node cannot sense the change of the path status in time, the local node still uses the previous scheduling solution, and does not adjust its scheduling policy until the congestion is transferred to a neighboring node of the local node. Therefore, the scheduling policy tends to form a local congestion tree. In addition, the locally optimal solution also sacrifices precision of load balancing.
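For illustration only, the following minimal Python sketch contrasts the three scheduling families described in items (3), (4), (6), and (7) above. The data structures, the MD5-based hash, and all port and load values are invented for this sketch and are not part of any cited solution or of the embodiments below.

import hashlib

def ecmp_select_port(five_tuple, num_ports):
    # Items (3)/(4): a static hash of the 5-tuple pins a flow to one port.
    key = "|".join(str(f) for f in five_tuple).encode()
    return int.from_bytes(hashlib.md5(key).digest()[:4], "big") % num_ports

# Item (6), CONGA-style: consult a global path status table and pick the
# end-to-end path with the lightest recorded congestion.
path_status_table = {"path 1": 0.7, "path 2": 0.3, "path 3": 0.5}
conga_choice = min(path_status_table, key=path_status_table.get)

# Item (7), DRILL-style: each node looks only at its own port backlogs.
local_port_backlog = {80: 12000, 85: 3000, 100: 7500}  # bytes queued
drill_choice = min(local_port_backlog, key=local_port_backlog.get)

flow = ("192.168.1.1", 10000, "121.14.88.76", 80, "TCP")
print(ecmp_select_port(flow, 128))   # the same flow always maps to the same port
print(conga_choice, drill_choice)    # 'path 2', 85

The sketch makes the trade-off visible: the ECMP mapping never reacts to load, while the CONGA and DRILL choices react to load at the cost of maintaining global or local state.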
In addition, it should be understood that in this embodiment of this disclosure, "at least one" may be alternatively described as one or more, and "more" may represent two, three, four, or more. This is not limited in this disclosure. In embodiments of this disclosure, "/" may represent an "or" relationship between associated objects. For example, A/B may represent A or B. "And/or" may be used to indicate that there are three possible relationships between associated objects. For example, A and/or B may represent: only A exists, both A and B exist, or only B exists, where A and B may be singular or plural. To facilitate description of the technical solutions in embodiments of this disclosure, terms such as "first" and "second" may be used to distinguish between technical features with a same or similar function. The terms such as "first" and "second" do not limit a quantity or an execution sequence, and do not indicate a definite difference. In embodiments of this disclosure, a word such as "example" or "for example" is used to represent an example, an illustration, or a description, and any embodiment or design scheme described as an "example" or with "for example" should not be explained as being more preferred or having more advantages than another embodiment or design scheme. Use of a word such as "example" or "for example" is intended to present a related concept in a specific manner for ease of understanding.

Because a data packet provides the possibility of data flow adjustment at the smallest unit, in this embodiment of this disclosure, a packet-by-packet scheduling solution based on the finest granularity is a theoretically optimal load sharing policy. This disclosure aims to further resolve the data disorder problem of packet-by-packet scheduling, to implement optimal load balancing. The following describes embodiments of this disclosure in detail with reference to the accompanying drawings.
Embodiment 1

FIG. 8 is a schematic diagram of a communication process according to an embodiment of this disclosure. The process includes the following operations.

In operation S801, a source node assigns a first number to a probe data packet in a probe data flow in a sending sequence, where the first number is used to select a transmission path for the probe data packet.

In an embodiment of this disclosure, a congestion control mechanism based on a probe data packet (credit) micro-rehearsal is used to send a service data packet. To be specific, before the service data packet is sent, transmission of the service data packet is simulated by using probe data, and the service data packet is sent based on a simulation result of the probe data packet. To hash data packets in a same data flow to different transmission paths, and implement load sharing between a plurality of transmission paths, in an embodiment of this disclosure, a hash algorithm is redesigned based on an existing hash algorithm for a transmission path. Based on the 5-tuple (i.e., source IP address, source port, destination IP address, destination port, and transport layer protocol), a new variable "seq" is introduced to form a 6-tuple. In this way, different data packets in a same data flow have a same 5-tuple but different numbers, and the hash values calculated through hash calculation based on the 5-tuple + number are different, so that different data packets in a same data flow may be hashed to different transmission paths. In an embodiment, the number may be carried in any idle field in a packet header. For example, the number may be carried by using an options field, a time to live (TTL) field, or the like in the packet header.

In an embodiment, when a service data flow needs to be transmitted by the source node, the source node first sends probe data packets to probe the congestion statuses of a plurality of transmission paths. To ensure that the plurality of probe data packets in a same probe data flow are routed and forwarded through the plurality of transmission paths, the source node may assign a first number to the probe data packets in the same probe data flow in ascending order based on the sending sequence of the probe data packets. For example, a value "1" is assigned to the first number of a first probe data packet in the probe data flow, a value "2" is assigned to the first number of a second probe data packet, a value "3" is assigned to the first number of a third probe data packet, and the rest may be deduced by analogy. In addition, to reduce the traffic overheads caused by the probe data packets, a probe data packet may not include a data domain (e.g., payload), that is, the probe data packet may include only a header, to reduce the amount of data.

In operation S802, the source node sends the probe data packet in the probe data flow to a destination node at a first sending rate.

In a possible embodiment, the source node may use a determined duration as a transmission period, and send the probe data packets to the destination node at equal intervals (pacing) in each transmission period. The first sending rate at which the source node sends the probe data packets to the destination node may be 0.5 Gbps or the like. After receiving a probe data packet (a forward probe data packet) that carries the first number and that is sent by the source node, the transmission node located between the source node and the destination node performs hash calculation based on a total of six input parameters, namely, the 5-tuple + the first number in the probe data packet, to obtain a hash value, and selects an egress port based on the obtained hash value. For example, if the 5-tuple + the first number of a first probe data packet are 192.168.1.1 10000 TCP 121.14.88.76 80 + 1 and the hash value obtained by the transmission node through calculation is 80, the first probe data packet is forwarded through the egress port 80. If the 5-tuple + the first number of a second probe data packet are 192.168.1.1 10000 TCP 121.14.88.76 80 + 2 and the hash value obtained by the transmission node through calculation is 85, the second probe data packet is forwarded through an egress port 85.
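For illustration, the invented hash from the earlier sketch can be extended to the 6-tuple routing just described. The hash function itself is an assumption; the point is only that a changing Seq spreads packets of one flow over different egress ports, while an identical (5-tuple, Seq) pair always maps to the same port, which Embodiment 1 relies on later to put a service data packet onto its probe's path.

import hashlib

def select_egress_port(five_tuple, seq, num_ports):
    # 6-tuple = 5-tuple + Seq; the Seq varies per packet within one flow.
    key = "|".join(str(f) for f in five_tuple + (seq,)).encode()
    return int.from_bytes(hashlib.md5(key).digest()[:4], "big") % num_ports

flow = ("192.168.1.1", 10000, "121.14.88.76", 80, "TCP")
for seq in (1, 2, 3, 4):        # probe numbering from operation S801
    print(seq, select_egress_port(flow, seq, 128))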
In operation S803, the destination node backhauls a probe data packet to the source node each time the destination node receives the probe data packet from the source node.

In an embodiment, each time after receiving a data packet, the destination node may first determine whether the data packet is a probe data packet or a service data packet, and if the received data packet is a probe data packet, the destination node backhauls the probe data packet. For example, the destination node may exchange the source IP address and the destination IP address of the probe data packet and send the probe data packet, thereby backhauling the probe data packet to the source node. If the data packet received by the destination node is a service data packet, the destination node parses the service data packet. For example, the destination node decapsulates the service data packet, and reads the data carried in the service data packet. Similarly, after receiving a probe data packet (e.g., a backhaul probe data packet) that carries the first number and that is backhauled by the destination node, the transmission node located between the source node and the destination node performs hash calculation based on a total of six input parameters, namely, the 5-tuple + the first number in the probe data packet, to obtain a hash value, and selects an egress port based on the obtained hash value.

In operation S804, each time the source node receives a probe data packet backhauled by the destination node, the source node sends a service data packet in a service data flow, where the service data packet is assigned a second number corresponding to the probe data packet, and the second number is used to select a transmission path for the service data packet.

Each time the source node receives a probe data packet (e.g., the backhaul probe data packet) backhauled by the destination node, the source node reads the first number carried in the probe data packet, assigns the first number carried in the received probe data packet to the current service data packet in the service data flow as the second number of the service data packet, and sends the service data packet to the destination node. For example, the source node receives a probe data packet backhauled by the destination node, and the first number carried in the probe data packet is "5". In this case, the source node may assign "5" to the second number of the current service data packet in the service data flow, and send the service data packet to the destination node. After receiving a service data packet (e.g., a forward service data packet) that carries the second number and that is sent by the source node, the transmission node located between the source node and the destination node performs hash calculation based on a total of six input parameters, namely, the 5-tuple + the second number in the service data packet, to obtain a hash value, and selects an egress port based on the obtained hash value.
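The destination-side branch of operation S803 above can be sketched as follows. The dict-based "packet" and its field names are stand-ins for real headers; only the branch logic is the point (ports could be swapped analogously to the IP addresses).

def on_packet_at_destination(packet):
    if packet["kind"] == "probe":
        # Swap source and destination IP addresses and send the probe back.
        packet["src_ip"], packet["dst_ip"] = packet["dst_ip"], packet["src_ip"]
        return packet                    # the backhaul probe data packet
    # Service data packet: decapsulate and read the carried data instead.
    return packet.get("payload")

echo = on_packet_at_destination(
    {"kind": "probe", "src_ip": "192.168.1.1",
     "dst_ip": "121.14.88.76", "seq": 5})
print(echo["src_ip"], echo["dst_ip"], echo["seq"])
# 121.14.88.76 192.168.1.1 5 -> the first number rides back unchanged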
Probe data packets backhauled by the destination node continuously arrive at the source node, which triggers the source node to send service data packets in the service data flow to the destination node. The source node continuously sends the service data packets to the destination node until the service data packets are completely sent.

To accurately simulate service data packet transmission, in an embodiment, the transmission node that is located between the source node and the destination node and that is configured to forward service data packets and probe data packets may allocate different transmission bandwidths to service data packet transmission and probe data packet transmission. For example, the transmission node may allocate a first bandwidth to the service data packets and allocate a second bandwidth to the probe data packets based on the ratio of the average size of the service data packets to the average size of the probe data packets. For example, if the ratio of the average size of the probe data packets to the average size of the service data packets is 1:19, the transmission node may use 5% of its total bandwidth for probe data packet transmission, and use 95% of its total bandwidth for service data packet transmission. In addition, the transmission node may further discard a service data packet and/or a probe data packet that exceeds its transmission capability, and the source node may also adjust, based on the packet loss rate of the probe data packets, the first sending rate for sending the probe data packets. For example, when the packet loss rate of the probe data packets is greater than a specified threshold (for example, 20%), the source node may down-regulate the first sending rate for sending the probe data packets (for example, down-regulate the first sending rate by 10%). In addition, to cope with burst transmission of the probe data packets, a small quantity of buffers may be further reserved in the transmission node. For example, the transmission node may reserve 8 or 10 buffers of the average size of the probe data to cope with the burst transmission of the probe data packets.

As shown in FIG. 9, the total bandwidth of a transmission node is 10 Gbps. The transmission node may configure a bandwidth of 9.5 Gbps to transmit service data packets, and configure a bandwidth of 0.5 Gbps to transmit probe data packets. The transmission rate at which the source node sends probe data packets is 1 Gbps, which exceeds the second bandwidth (0.5 Gbps) used by the transmission node to transmit the probe data packets. The transmission node discards the probe data packets that exceed its transmission capability, and the packet loss rate of the transmission node for the probe data packets is 50%. The probe data packets backhauled by the destination node are therefore only 50% of the probe data packets sent by the source node, and after 50% of the probe data packets are discarded, the arrival rate of the backhaul probe data packets (the probe data packets backhauled by the destination node) is 1 Gbps * 50% = 0.5 Gbps. Each time the source node receives a backhaul probe data packet, the source node sends a true service data packet. With reference to the ratio of the size of the service data packet to the size of the probe data packet, the sending rate of the service data packets is 0.5 Gbps * 19 = 9.5 Gbps. This exactly matches the first bandwidth.
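The arithmetic of the FIG. 9 example can be restated as a short runnable sketch; every number below comes from the text above, and the variable names are invented for illustration.

# Plain-arithmetic restatement of the FIG. 9 example.
total_bw = 10.0                      # Gbps, transmission node capacity
probe_bw = 0.5                       # second bandwidth (probe channel)
service_bw = total_bw - probe_bw     # first bandwidth, 9.5 Gbps
size_ratio = 19                      # service packet : probe packet size

probe_send_rate = 1.0                                 # Gbps, from the source
loss_rate = max(0.0, 1 - probe_bw / probe_send_rate)  # 50% are discarded
backhaul_rate = probe_send_rate * (1 - loss_rate)     # 0.5 Gbps return

# One service data packet is sent per backhauled probe, 19 times larger:
service_rate = backhaul_rate * size_ratio             # 9.5 Gbps
print(service_rate == service_bw)                     # True: full, not over

# The loss-based adjustment mentioned above (thresholds are the examples
# given in the text):
if loss_rate > 0.20:
    probe_send_rate *= 0.90          # down-regulate the first sending rate by 10%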
Therefore, it is ensured that transmission of the service data packets causes neither congestion (buffer overstock on the transmission node) nor an under-throughput (the first bandwidth not being fully occupied). In addition, to avoid discarding of the backhaul probe data packet (the probe data packet backhauled by the destination node), the backhaul probe data packet carries information about a highest transmission priority, and the backhaul probe data packet may be transmitted on the transmission node by using the first bandwidth used to transmit the service data packets. The packet loss priority corresponding to the highest transmission priority is the lowest.

In the foregoing method, a service data packet transmission procedure is in effect rehearsed in the current network environment by using a probe data packet as a "stand-in" for a service data packet. The probe data packet is discarded by the transmission node or backhauled by the destination node, and the accurate load status of the plurality of transmission paths in the network is thereby transferred to the source node. The source node plans the transmission path of each service data packet based on the arrival sequence of the probe data packets: a service data packet that is sent first is sent by using the transmission path of the probe data packet that arrived first (by assigning values to the 5-tuple and the number, it is ensured that the hash results of the service data packet and the probe data packet are consistent, that is, their transmission paths are consistent). In addition, in an embodiment of this disclosure, the strong symmetry of the data center network is mainly used. It is assumed that backhaul probe packets transmitted at the highest priority are not out of order in the symmetric data center network. Therefore, there is no need to add additional sequence information to the probe data packets, and the load status of the forward transmission path of each probe data packet is directly determined by using the arrival sequence of the backhaul probe data packets.

As shown in FIG. 10, the source node sends four probe data packets on a probe data packet channel: Stand-in 1, Stand-in 2, Stand-in 3, and Stand-in 4. The packet Stand-in 4 is lost in the transmission process. After receiving Stand-in 1 to Stand-in 3, the destination node transmits Stand-in 1 to Stand-in 3 back to the source node. The source node first receives Stand-in 3. This indicates that the transmission path load of the probe data packet Stand-in 3 is the lightest. Therefore, when the first service data packet is transmitted, its number assignment value is the same as that of Stand-in 3. Then, the probe data packet Stand-in 2 arrives, and the source node transmits a service data packet 2, where the number assignment value is the same as that of Stand-in 2. Finally, the probe data packet Stand-in 1 arrives, and the source node transmits a service data packet 3, where the number assignment value is the same as that of Stand-in 1.
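The FIG. 10 pairing can be restated as a minimal sketch. The payload names are invented, and in the embodiment the trigger is the arrival of each echo rather than a precomputed list; only the pairing rule is the point.

# Echoes return in the order 3, 2, 1 (Stand-in 4 was lost), and each
# returning echo lends its Seq to the next service data packet.
backhaul_arrivals = [3, 2, 1]                 # Seq values, in arrival order
payloads = ["data-1", "data-2", "data-3"]     # service data, in send order

service_packets = [{"seq": seq, "payload": p}
                   for seq, p in zip(backhaul_arrivals, payloads)]
for pkt in service_packets:
    print(pkt)   # service packet 1 carries Seq 3, packet 2 Seq 2, packet 3 Seq 1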
Table 1 shows the comparison of the number assignment values of the related fields in the service data packets and the probe data packets. In Table 1, src_ip indicates the source IP address, src_port indicates the source port, dst_ip indicates the destination IP address, dst_port indicates the destination port, protocol indicates the transport layer protocol, and Seq indicates the number.

TABLE 1
Sending sequence | Stand-in packet 6-tuple (5-tuple + Seq value): (src_ip, src_port, dst_ip, dst_port, protocol, Seq) | Service data packet 6-tuple (5-tuple + Seq value): (src_ip, src_port, dst_ip, dst_port, protocol, Seq)
1 | (10.111.166.213, 10000, 10.111.166.206, 80, TCP, 1) | (10.111.166.213, 10000, 10.111.166.206, 80, TCP, 3)
2 | (10.111.166.213, 10000, 10.111.166.206, 80, TCP, 2) | (10.111.166.213, 10000, 10.111.166.206, 80, TCP, 2)
3 | (10.111.166.213, 10000, 10.111.166.206, 80, TCP, 3) | (10.111.166.213, 10000, 10.111.166.206, 80, TCP, 1)
4 | (10.111.166.213, 10000, 10.111.166.206, 80, TCP, 4) | —

Finally, in an embodiment of this disclosure, it is ensured that the service data packet that is first sent by the source node is transmitted by using the transmission path with the lightest current load, which ensures that the service data packets arrive at the destination node in sequence. This resolves the disorder problem of packet-by-packet load sharing at the network layer, and does not require an additional order-preserving restoration operation at the transport layer.

As shown in FIG. 11, in a scenario of random service data flow transmission and a scenario of many-to-one data transmission (Incast), the disorder level of service data transmission under different packet-by-packet load sharing policies is separately tested. In the different scenarios, the quantity of concurrently transmitted service data flows is gradually increased (from 16 flows to 2010 flows), and the overall disorder situation of the data center under different loads is compared. Other load sharing policies proposed in the academic field, such as DRILL, DRB, and Presto, are all implemented based on an OMNET simulation platform in a simulation experiment, and are used for comparison with the solutions in this disclosure. It may be learned that, compared with the other latest load sharing methods in the field, the packet-by-packet load sharing in this disclosure optimizes the proportion of out-of-order data packets to within 0.4%. Even in a heavy-load scenario in which 2010 service data flows are concurrently transmitted, the problem of service data packet disorder is largely solved by the solutions of this disclosure. In addition, it should be noted that in the heavy-load scenario (concurrent transmission of 2010 data flows) in the Incast scenario, about 0.4% of data packets are out of order when the solutions in this disclosure are used. A reason may be that high-concurrency probe data packet transmission causes packet loss and backhaul probe data packet disorder. This problem may be resolved by introducing the method for probing an arrival sequence of data packets in the following Embodiment 2. Generally, data disorder barely occurs when the load sharing method in this disclosure is applied.

As shown in FIG. 12, the overall completion time of data flows under the different load sharing policies in the two scenarios shown in FIG. 11 is further compared. Background traffic is continuously increased on the network, and the service data transmission completion time corresponding to the different load balancing policies is compared under different load conditions (from 30% to 90%). In this simulation experiment, the classic load sharing solutions ECMP, CONGA, Presto, and DRILL are implemented based on the OMNET simulation platform, and are used as comparison solutions for this disclosure.
(A) in FIG. 12 and (B) in FIG. 12 respectively show the changes of the service data transmission completion time corresponding to the different load sharing solutions in the random data flow transmission scenario and the Incast scenario. It may be learned that because the packet-by-packet load sharing implemented in this disclosure further balances the load of a plurality of transmission paths, the overall service data transmission completion time is better than that of the other solutions.

Embodiment 2

The solution in Embodiment 1 is mainly intended for a topologically symmetric data center network, and is designed based on the premise that disorder does not occur in the backhaul probe data packets. It is considered that in a general WAN scenario, the forward transmission path and the backhaul transmission path for data transmission between the source node and the destination node may be highly asymmetric. As shown in FIG. 13, the transmission path of a forward probe data packet is different from the transmission path of a backhaul probe data packet, and the load status of the transmission path of the forward probe data packet may differ greatly from the load status of the transmission path of the backhaul probe data packet. Consequently, a disorder problem also occurs in transmission of the backhaul probe data packets transmitted on a plurality of paths. In this case, it is difficult to correctly deduce, based only on the order in which the backhaul probe data packets arrive at the source node, the order in which the forward probe data packets arrived at the destination node.

FIG. 14 describes a scenario in which disorder occurs in the backhaul probe data packets in the WAN scenario. As shown in FIG. 14, the source node sequentially transmits six probe data packets, and the six probe data packets are hashed to different transmission paths. Data disorder does not occur in the forward transmission process, and the destination node sequentially receives a probe data packet 1, a probe data packet 2, . . . , and a probe data packet 6. Each time the destination node receives a probe data packet, the destination node immediately backhauls the probe data packet to the source node. However, the data disorder problem occurs in transmission of the backhaul probe data packets. Due to the different performance and load of the backhaul paths, the probe data packet 2 first arrives at the source node, then the probe data packet 1 arrives, then the probe data packet 4 arrives, . . . , and finally the probe data packet 5 arrives. In this scenario, if the technical solution in Embodiment 1 is still used, the second number of a service data packet 1 is assigned a value based on the first number of the probe data packet 2, and the service data packet 1 is transmitted based on the forward path of the probe data packet 2. The second number of the second transmitted service data packet 2 is assigned a value based on the first number of the probe data packet 1, and the service data packet 2 is transmitted based on the forward path of the probe data packet 1. However, it is known that during transmission on the forward path, the speed of the transmission path of the probe data packet 1 is higher than that of the transmission path of the probe data packet 2. Therefore, the service data packet 2 arrives at the destination node earlier than the service data packet 1. This causes transmission disorder of the service data packets. Therefore, in the WAN network scenario, the technical solution in Embodiment 1 needs to be improved, to further resolve the problem of transmission disorder of the backhaul probe data packets.
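Before the improved procedure is described, the failure mode just discussed can be made concrete with a toy example. The delays below are invented for illustration; only the assignment rule is taken from Embodiment 1.

# Why the Embodiment 1 rule breaks down when the backhaul direction
# reorders the echoes, as in FIG. 14:
forward_delay = {1: 1.0, 2: 3.0}      # per-Seq forward one-way delay (ms)
backhaul_arrival_order = [2, 1]       # the echo of probe 2 returns first

# Embodiment 1: service packet n simply takes the Seq of the n-th echo.
assignment = {n: seq for n, seq in enumerate(backhaul_arrival_order, 1)}

# Service packet 1 -> path of probe 2 (slow forward path), service
# packet 2 -> path of probe 1 (fast forward path), so packet 2 overtakes
# packet 1 on the forward trip:
print(forward_delay[assignment[2]] < forward_delay[assignment[1]])  # True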
FIG. 15 is a schematic diagram of a communication process according to an embodiment of this disclosure. The process includes the following operations.

In operation S1501, a source node assigns a first number to a probe data packet in a probe data flow in a sending sequence, where the first number is used to select a transmission path for the probe data packet.

For the value assignment of the first number in the probe data packet, refer to the related description in Embodiment 1. Details are not described again.

In operation S1502, the source node sends the probe data packet in the probe data flow to a destination node at a first sending rate.

In a possible embodiment, the source node may use a determined duration as a transmission period, and send the probe data packets to the destination node at equal intervals (pacing) in each transmission period. The first sending rate at which the source node sends the probe data packets to the destination node may be 0.5 Gbps or the like. For the implementations of forwarding, by the transmission node, the probe data packet based on the first number of the probe data packet, and the like, refer to the related description in Embodiment 1. Details are not described again.

In operation S1503, the destination node updates a number receiving sequence table each time the destination node receives a probe data packet from the source node, where the number receiving sequence table is used to record, based on a receiving sequence, the first number of each probe data packet that has been received by the destination node in a current period.

For example, as shown in FIG. 16, it is assumed that probe data packets arrive in a sequence from a probe data packet #1 to a probe data packet #6, and the actions of the destination node are as follows. When the probe data packet #1 arrives, the destination node records a first number Seq1 in the number receiving sequence table. When the probe data packet #2 arrives, the destination node records a first number Seq2 in the number receiving sequence table. The rest may be deduced by analogy. When the probe data packet #6 finally arrives, the destination node records a first number Seq6 in the number receiving sequence table.

In operation S1504, the destination node backhauls the probe data packet to the source node, where the backhauled probe data packet carries the number receiving sequence table.

For example, each time the destination node receives a forward probe data packet, after completing the operation of updating the first number carried in the forward probe data packet to the number receiving sequence table, the destination node immediately backhauls the probe data packet, and records the updated number receiving sequence table in the packet header of the probe data packet. As shown in FIG. 16, when backhauling the probe data packet #1, the destination node adds Seq1 to the probe data packet #1. When backhauling the probe data packet #2, the destination node adds Seq1 and Seq2 to the probe data packet #2. When the probe data packet #3 is backhauled, the destination node adds Seq1, Seq2, and Seq3 to the probe data packet #3. The rest may be deduced by analogy. When the probe data packet #6 is finally backhauled, the first numbers Seq1 to Seq5 of the probe data packets #1 to #5 that arrived previously and the first number Seq6 of the probe data packet #6 itself are all recorded in the backhauled probe data packet.
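Operations S1503 and S1504 can be sketched as follows. The dict-based packets, field names, and the plain-list table are stand-ins invented for illustration; only the table-keeping and piggybacking logic is the point.

number_receiving_sequence = []      # the number receiving sequence table

def on_probe_at_destination(probe):
    number_receiving_sequence.append(probe["seq"])       # S1503: record order
    echo = dict(probe)
    echo["src_ip"], echo["dst_ip"] = probe["dst_ip"], probe["src_ip"]
    echo["seq_table"] = list(number_receiving_sequence)  # S1504: carry table
    return echo

for seq in (1, 2, 3):
    echo = on_probe_at_destination({"seq": seq, "src_ip": "A", "dst_ip": "B"})
    print(echo["seq_table"])        # [1], then [1, 2], then [1, 2, 3]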
In operation S1505, each time the source node receives a probe data packet backhauled by the destination node, the source node updates a probe data packet arrival sequence table based on the number receiving sequence table carried in the probe data packet. The probe data packet arrival sequence table records, based on the sequence in which the probe data packets arrived at the destination node, the first number of each probe data packet that has arrived at the destination node in the current period.

For example, it is assumed that the disorder case shown in FIG. 17 occurs in the backhaul probe data packets: the source node first receives the probe data packet #2, and the probe data packet #2 sequentially carries two first numbers, Seq1 and Seq2. The source node sequentially records the two first numbers in the probe data packet arrival sequence table. After receiving the probe data packet #1, the source node records the first number Seq1 in the probe data packet arrival sequence table (if a record of the first number already exists in the probe data packet arrival sequence table, this operation is skipped). Then, the source node receives the probe data packet #4, and sequentially records the four first numbers Seq1 to Seq4 in the probe data packet arrival sequence table. The rest may be deduced by analogy. The probe data packet #5 arrives last; at that point, the probe data packet arrival sequence table of the source node sequentially records six first numbers in total, namely, Seq1 to Seq6.

In operation S1506, the source node sends a service data packet in the service data flow, where the service data packet carries a second number. The second number is a first number that is determined by the source node in the probe data packet arrival sequence table based on the order of receiving the probe data packets backhauled by the destination node in the current period and that is consistent with that order.

Each time the source node receives a probe data packet backhauled by the destination node, the source node sends a true service data packet. The 5-tuple (i.e., the source IP address, destination IP address, source port, destination port, and transport layer protocol) of the service data packet is the same as that of the probe data packet, and the source node plans the transmission path of the service data packet by assigning a specific value to the second number of the service data packet, so that the service data packets arrive at the destination node in sequence.
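As a minimal sketch of operations S1505 and S1506 (the arrival table is a plain list here, the counter and function names are invented, and the merge uses a linear membership test purely for brevity):

arrival_sequence = []        # probe data packet arrival sequence table
sent_service_count = 0

def on_backhaul_probe(echo):
    global sent_service_count
    for seq in echo["seq_table"]:            # S1505: merge, skip known Seqs
        if seq not in arrival_sequence:
            arrival_sequence.append(seq)
    sent_service_count += 1                  # S1506: one service packet per echo
    # The n-th service packet takes the Seq of the n-th forward arrival:
    return arrival_sequence[sent_service_count - 1]

# FIG. 17 disorder: echo #2 returns first, then #1, then #4 ...
print(on_backhaul_probe({"seq_table": [1, 2]}))        # -> Seq 1
print(on_backhaul_probe({"seq_table": [1]}))           # -> Seq 2
print(on_backhaul_probe({"seq_table": [1, 2, 3, 4]}))  # -> Seq 3

The indexing is always valid because, as noted later in the text, when the n-th service data packet is sent the source node has received n echoes and therefore holds at least n Seq values.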
In an embodiment, in the scenario shown in FIG. 17, when receiving the backhaul probe data packet #2, the source node sends a service data packet #1. In this case, the source node obtains the two first numbers Seq1 and Seq2 by parsing the probe data packet. Therefore, it may be learned that although the probe data packet #2 arrived first in the backhaul transmission, the first number of the probe data packet that first arrived at the destination node in the forward transmission is actually Seq1. Therefore, when the service data packet #1 is sent, the second number of the service data packet is assigned Seq1. Then, the backhaul probe data packet #1 arrives, and the source node sends a service data packet #2. In this case, it may be learned, by searching the probe data packet arrival sequence table, that the first number of the second probe data packet to arrive in the forward transmission is Seq2, and the second number of the service data packet #2 is assigned Seq2. Then, the source node receives the backhaul probe data packet #4, and sends a service data packet #3. In this case, the probe data packet arrival sequence table sequentially records the four first numbers Seq1 to Seq4 (recorded in the backhaul probe data packet #4), and the source node learns that the first number of the third probe data packet to arrive in the forward transmission is Seq3. Therefore, the second number of the service data packet #3 is assigned Seq3. The rest may be deduced by analogy. Finally, the source node receives the backhaul probe data packet #5 and sends a service data packet #6. In this case, the probe data packet arrival sequence table of the source node sequentially records six first numbers in total, Seq1 to Seq6, and the source node assigns the first number Seq6 of the probe data packet that arrived last in the forward transmission to the service data packet #6. Finally, the second numbers of the six service data packets are sequentially assigned Seq1 to Seq6, so that the six service data packets arrive at the destination node in the transmission sequence of the forward probe data packets.

In an embodiment of this disclosure, for the processing performed by the source node and the destination node on the probe data packets and the service data packets other than the number assignment described above, and the processing performed by the transmission node located between the source node and the destination node on the probe data packets and the service data packets, refer to Embodiment 1. Repeated parts are not described again.
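For illustration only, the following replay (with invented per-path forward delays, which appear in no figure) shows why assigning the n-th service data packet the Seq of the n-th forward arrival yields in-sequence delivery: the n-th service data packet reuses the n-th fastest forward path, and the service data packets are additionally paced out one echo at a time.

forward_delay = {1: 1.0, 2: 3.0, 3: 2.0, 4: 4.5, 5: 5.0, 6: 2.5}  # ms, invented

# Forward trip: the order in which the probes reach the destination.
dest_arrival_order = sorted(forward_delay, key=forward_delay.get)

# S1506: the n-th service data packet takes the Seq of the n-th arrival,
# i.e. it is placed on the n-th fastest forward path.
service_path_delays = [forward_delay[seq] for seq in dest_arrival_order]

# Even if all service packets left simultaneously, their path delays are
# non-decreasing, so packet n can never overtake packet n-1; sending one
# packet per returning echo only widens the gaps.
assert service_path_delays == sorted(service_path_delays)
print(dest_arrival_order, service_path_delays)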
In an embodiment, the core method procedure of the scheduling policy based on an end-to-end transmission path status designed in this disclosure is summarized in FIG. 18. When the source node needs to transmit data, the source node first sends probe data packets on a control channel (corresponding to the second bandwidth allocated for probe data packet transmission) to detect the congestion statuses of a plurality of transmission paths. To ensure that different probe data packets in a same probe data flow are routed and forwarded over the plurality of transmission paths, the source node assigns ascending values to the probe data packet numbers (Seq) in the same probe data flow based on the probe data packet sending sequence. For example, Seq=1 is assigned to the first probe data packet, Seq=2 is assigned to the second probe data packet, and Seq=3 is assigned to the third probe data packet. In addition, in this disclosure, the conventional hash routing method is improved: the number (Seq) value is introduced as a new input parameter, and the transmission node calculates the hash result and selects the egress port based on six input parameters in total, including the 5-tuple (i.e., the source IP address, destination IP address, source port, destination port, and transport layer protocol) and Seq. After receiving a probe data packet, the destination node reads the Seq value of the probe data packet, and records the Seq values of the probe data packets that arrive sequentially into the number receiving sequence table based on the arrival sequence of the probe data packets. Each time the destination node receives a probe data packet, the destination node exchanges the source address of the probe data packet with the destination address of the probe data packet, and transmits the probe data packet back to the source node with the highest priority. By querying the number receiving sequence table, the destination node adds the Seq values of the 1st to the nth arrived probe data packets to the header of the nth backhaul probe data packet, so that the source node can subsequently resolve the disorder problem of the backhauled probe data packets. After receiving the backhaul probe data packets, the source node reads all Seq values carried in the backhaul probe data packets, and sequentially records the Seq values in the probe data packet arrival sequence table. Each time the source node receives one backhauled probe data packet, the source node sends one service data packet on a data channel (corresponding to the first bandwidth allocated for service data packet transmission). When the nth service data packet is sent, the source node has certainly received n probe data packets, and at least n Seq values have been recorded in the probe data packet arrival sequence table. Therefore, the source node may accurately find, from the probe data packet arrival sequence table, the Seq value (Seq n) corresponding to the nth arrived probe data packet in the forward transmission, and assign that value to the nth service data packet. Because the Seq value and the 5-tuple are the same as those of the nth forward probe data packet, the nth service data packet is transmitted to the destination node by using the same transmission path as the nth forward probe data packet. In conclusion, it is ensured that the nth service data packet arrives nth at the destination node, that is, all service data packets arrive in sequence, and no additional order-preserving restoration operation at the transport layer is required.

Compared with the technical solution in Embodiment 1, the technical solution in this embodiment focuses on a WAN network scenario in which the forward and backhaul paths and the traffic distribution are seriously asymmetric. Therefore, the network topology architecture in Embodiment 1 is modified, and an asymmetric network architecture similar to a WAN is constructed. Specifically, an OMNET++ simulation tool is used to randomly select 25% of the links in the data center network shown in FIG. 1, and the link bandwidth of those links is reduced from 10 Gbps to 3 Gbps, to construct the asymmetric network architecture similar to a WAN. Then, in the Incast scenario, indicators such as the service data transmission completion time (FCT) and the service data packet disorder ratio (percentage of disordered packets) of the network under this implementation of the technical solution are evaluated. The CONGA, Presto, DRILL, and DRB solutions newly proposed in the academic field and the conventional ECMP solution are all implemented in an OMNET++ environment in a simulation experiment, and are used as comparison technologies for the technical solutions in this embodiment.

FIG. 19 shows the service data transmission completion time (A) and the service data packet disorder ratio (B) under the different load balancing solutions. It may be learned from FIG. 19 that, after the arrival sequence information of the probe data packets is introduced, the technical solution of this disclosure almost perfectly resolves the data disorder problem. Even in a heavy-load scenario in which 2010 data flows are concurrently transmitted, zero disorder is still successfully maintained by the technical solution of this disclosure.
It may be learned from the result in FIG. 19 that, under the effect of a packet-by-packet load sharing solution (DRILL, or the solution in this disclosure), the overall service data transmission completion time of the network is better than that of a coarse-grained load sharing solution (CONGA, Presto, and ECMP). In addition, compared with DRILL, the technical solution of this disclosure further resolves the data disorder problem. Therefore, the overall performance of the technical solution of this disclosure is better than that of DRILL.

Then, an unstable network environment is further constructed to test the scheduling effect of the technical solution of this disclosure in a highly dynamic network environment. In the DCN network shown in FIG. 1, two transmission nodes are randomly selected, and a 50% packet loss rate is introduced to the data flows that pass through the two transmission nodes. In this way, a WAN network fault is simulated. In the network fault scenario, the performance of the technical solution in this disclosure is further compared with that of the latest load sharing solutions in the field.

(A) in FIG. 20 and (B) in FIG. 20 separately compare the transmission success rates and the transmission completion time of the different load sharing methods in the WAN network fault scenario. It may be learned from (A) in FIG. 20 that the three solutions DRILL, Presto, and this disclosure may successfully sense a network fault, and 100% transmission of all data flows may be ensured under different network load levels. However, a policy that is unaware of the congestion status, such as ECMP, performs hash routing on a data packet based only on the 5-tuple. As a result, a specific proportion of the data is always hashed to a faulty path, and this part of the data cannot be successfully transmitted. It is further found from (B) in FIG. 20 that, although a network fault may be successfully sensed in all three of the DRILL, Presto, and this-disclosure solutions, the transmission completion time of the technical solution of this disclosure is shorter than that of DRILL and Presto. It should be noted that compared with the latest technologies in the industry, the solutions of this disclosure can sense a change of the network environment more quickly, and therefore are more applicable to a WAN scenario in which the network environment changes highly dynamically.

The foregoing mainly describes the solutions provided in this disclosure from the perspectives of the source node, the destination node, and the transmission node. It may be understood that, to implement the foregoing functions, each network element includes a corresponding hardware structure and/or software module (or unit) for performing each function. A person skilled in the art should easily be aware that, in combination with the units and algorithm operations of the examples described in the embodiments disclosed in this specification, this disclosure may be implemented by hardware or a combination of hardware and computer software. Whether a specific function is performed by hardware or by hardware driven by computer software depends on the particular applications and design constraints of the technical solutions. A person skilled in the art may use different methods to implement the described functions for each particular application, but it should not be considered that such an implementation goes beyond the scope of this disclosure.

FIG. 21 is a schematic diagram of a structure of a possible communication apparatus according to an embodiment of this disclosure.
These communication apparatuses may be configured to implement the functions of the source node, the destination node, or the transmission node in the foregoing method embodiments, and therefore can also achieve the beneficial effects of the foregoing method embodiments. In this embodiment of this disclosure, the communication apparatus may be any source node, any destination node, or any transmission node in FIG. 8 or FIG. 15, or may be a unit (or a module) applied to the source node, the destination node, or the transmission node, or the like.

As shown in FIG. 21, a communication apparatus 2100 may include a processing unit 2102 and a communication unit 2103, and may further include a storage unit 2101. The communication apparatus 2100 is configured to implement a function of the source node, the destination node, or the transmission node in the method embodiment shown in FIG. 8 or FIG. 15. In an embodiment, the processing unit 2102 is configured to implement a corresponding processing function, the communication unit 2103 is configured to support the communication apparatus 2100 in communicating with another network entity, and the storage unit 2101 is configured to store program code and/or data of the communication apparatus 2100. Optionally, the communication unit 2103 may include a receiving unit and/or a sending unit, respectively configured to perform a receiving operation and a sending operation.

When the communication apparatus 2100 is configured to implement a function of the source node in the method embodiment, the processing unit 2102 is configured to assign a first number to a probe data packet in a probe data flow in a sending sequence, where the first number is used to select a transmission path for the probe data packet. The communication unit 2103 is configured to send the probe data packet in the probe data flow to a destination node at a first sending rate. The communication unit 2103 is further configured to send a service data packet in a service data flow each time the communication unit 2103 receives a probe data packet backhauled by the destination node, where the service data packet is assigned a second number corresponding to the probe data packet, and the second number is used to select a transmission path for the service data packet. When the first number is the same as the second number, the probe data packet to which the first number is assigned and the service data packet to which the second number is assigned correspond to a same transmission path.

In an embodiment, when the communication unit 2103 sends the probe data packet in the probe data flow to the destination node at the first sending rate, the communication unit 2103 is configured to send the probe data packet in the probe data flow to the destination node at the first sending rate in one or more transmission periods.

In an embodiment, the probe data packet backhauled by the destination node carries a number receiving sequence table, and the number receiving sequence table is used by the destination node to record, in a receiving sequence when receiving the probe data packets, the first number of each probe data packet received in a current period.
The processing unit 2102 is further configured to: each time the communication unit 2103 receives one probe data packet backhauled by the destination node, update a probe data packet arrival sequence table based on the number receiving sequence table carried in the probe data packet backhauled by the destination node; and record, in the probe data packet arrival sequence table based on the sequence in which the probe data packets arrived at the destination node, the first number of each probe data packet that has arrived at the destination node in the current period.

In an embodiment, the second number is a first number that is determined by the processing unit 2102 in the probe data packet arrival sequence table based on the order of receiving the probe data packets backhauled by the destination node in the current period.

In an embodiment, the processing unit 2102 is further configured to adjust the first sending rate based on a packet loss rate of the probe data packets.

In an embodiment, the probe data packet does not include a data domain.

When the communication apparatus 2100 is configured to implement the functions of the destination node in the method embodiment, the processing unit 2102 is configured to update a number receiving sequence table each time the communication unit 2103 receives a probe data packet from the source node, where the number receiving sequence table is used to record, based on a receiving sequence, the first number of each probe data packet that has been received in the current period. The communication unit 2103 is configured to backhaul the probe data packet to the source node, where the backhauled probe data packet carries the number receiving sequence table.

In an embodiment, the probe data packet backhauled to the source node carries information about a highest transmission priority, where the packet loss priority corresponding to the highest transmission priority is the lowest.

In an embodiment, the probe data packet does not include a data domain.

When the communication apparatus 2100 is configured to implement a function of the transmission node in the method embodiment, the communication unit 2103 is configured to receive a data packet, where the data packet carries a number, and the number is used to select a transmission path for the data packet. The processing unit 2102 is configured to: calculate a hash value based on the source IP address, the destination IP address, the source port, the destination port, the transport layer protocol, and the number that correspond to the data packet; and select, based on the hash value, an egress port corresponding to the hash value to forward the data packet.

In an embodiment, the data packet is a probe data packet or a service data packet.

In an embodiment, the processing unit 2102 is further configured to allocate a first bandwidth to service data packet transmission and allocate a second bandwidth to probe data packet transmission, where the first bandwidth is greater than the second bandwidth.

In an embodiment, a ratio of the second bandwidth to the first bandwidth is the ratio of the average size of the probe data packets to the average size of the service data packets.

In an embodiment, when the probe data packet received by the communication unit 2103 is a forward probe data packet, the communication unit 2103 transmits the forward probe data packet by using the second bandwidth.
When the probe data packet received by the communication unit2103is a backhaul probe data packet, the communication unit2103transmits the backhaul probe data packet by using the first bandwidth. The forward probe data packet is a probe data packet sent by the source node to the destination node, and the backhaul probe data packet is a probe data packet backhauled by the destination node to the source node. In an embodiment, when a rate at which the communication unit2103receives the forward probe data packet is greater than the second bandwidth, the communication unit2103discards the forward probe data packet, for example, discards a forward probe data packet that exceeds a transmission capability (the second bandwidth). In an embodiment, the backhaul probe data packet carries information about a highest transmission priority, and a packet loss priority corresponding to the highest transmission priority is the lowest. In an embodiment, the probe data packet does not include a data domain. Based on the foregoing embodiments, an embodiment of this disclosure further provides a communication apparatus. Refer toFIG.22. A communication apparatus2200includes a communication interface2201, a processor2202, and a memory2203. The communication interface2201, the processor2202, and the memory2203are connected to each other. Optionally, the communication interface2201, the processor2202, and the memory2203are interconnected by using a bus2204. The bus2204may be a peripheral component interconnect (PCI) bus, an extended industry standard architecture (EISA) bus, or the like. The bus may be classified into an address bus, a data bus, a control bus, and the like. For ease of illustration, the bus is indicated by using only one bold line inFIG.22; however, this does not mean that there is only one bus or only one type of bus. When the communication apparatus2200implements the communication method applicable to the source node shown inFIG.8orFIG.15:
the communication interface2201is configured to receive and send data; and
the processor2202is configured to invoke program instructions stored in the memory to perform the following method:
assigning a first number to a probe data packet in a probe data flow in a sending sequence, where the first number is used to select a transmission path for the probe data packet;
sending the probe data packet in the probe data flow to a destination node at a first sending rate by using the communication interface2201;
each time a probe data packet backhauled by the destination node is received by using the communication interface2201, sending a service data packet in a service data flow by using the communication interface2201, where the service data packet is assigned a second number corresponding to the probe data packet, and the second number is used to select a transmission path for the service data packet; and
when the first number is the same as the second number, the probe data packet to which the first number is assigned and the service data packet to which the second number is assigned correspond to a same transmission path.
In an embodiment, the sending the probe data packet in the probe data flow to a destination node at a first sending rate by using the communication interface2201includes:
sending the probe data packet in the probe data flow to the destination node at the first sending rate in one or more transmission periods by using the communication interface2201.
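The forward-probe discard rule described at the start of this passage can be realized, for example, with a token bucket; the token-bucket formulation, the rate units, and the class name are assumptions for illustration, and the disclosure only requires that forward probe data packets arriving faster than the second bandwidth be discarded.

    # Illustrative policer: forward probe data packets are admitted only within
    # the (smaller) second bandwidth; anything beyond that capability is dropped.
    import time

    class ForwardProbePolicer:
        def __init__(self, second_bandwidth_bps, burst_bytes):
            self.rate = second_bandwidth_bps / 8.0  # bytes per second
            self.capacity = burst_bytes
            self.tokens = burst_bytes
            self.last = time.monotonic()

        def admit(self, size_bytes):
            now = time.monotonic()
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= size_bytes:
                self.tokens -= size_bytes
                return True      # transmit within the second bandwidth
            return False         # exceeds the second bandwidth: discard

    policer = ForwardProbePolicer(second_bandwidth_bps=10_000_000, burst_bytes=1500)
    print(policer.admit(64))     # a small, payload-free probe is admitted

Because probe data packets carry no data domain, policing them against the much smaller second bandwidth caps the probing overhead, and the size ratio stated earlier gives a natural way to dimension that bandwidth.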
In an embodiment, the probe data packet backhauled by the destination node carries a number receiving sequence table, and the number receiving sequence table is used by the destination node to record, in a receiving sequence when the destination node receives the probe data packet, a first number of a probe data packet that has been received in a current period, and the method further includes:
each time a probe data packet backhauled by the destination node is received by using the communication interface2201, updating a probe data packet arrival sequence table based on the number receiving sequence table carried in the probe data packet backhauled by the destination node, where the probe data packet arrival sequence table records, based on a sequence in which a probe data packet arrives at the destination node, a first number of the probe data packet that has arrived at the destination node in the current period.
In an embodiment, the second number is a first number that is determined in the probe data packet arrival sequence table based on, and consistent with, the order of receiving probe data packets backhauled by the destination node in the current period.
In an embodiment, the method further includes:
adjusting the first sending rate based on a packet loss rate of the probe data packet.
In an embodiment, the probe data packet does not include a data domain.
In another possible embodiment, when the communication apparatus2200implements the communication method applicable to the destination node shown inFIG.8orFIG.15:
the communication interface2201is configured to receive and send data; and
the processor2202is configured to invoke program instructions stored in the memory to perform the following method:
updating a number receiving sequence table each time the communication interface2201receives a probe data packet from the source node, where the number receiving sequence table is used to record, based on a receiving sequence, a first number of a probe data packet that has been received in a current period; and
backhauling the probe data packet to the source node by using the communication interface2201, where the backhauled probe data packet carries the number receiving sequence table.
In an embodiment, the probe data packet backhauled by the destination node to the source node carries information about a highest transmission priority, where a packet loss priority corresponding to the highest transmission priority is the lowest.
In an embodiment, the probe data packet does not include a data domain (e.g., payload).
In another possible embodiment, when the communication apparatus2200implements the communication method applicable to the transmission node shown inFIG.8orFIG.15:
the communication interface2201is configured to receive and send data; and
the processor2202is configured to invoke program instructions stored in the memory to perform the following method:
receiving a data packet by using the communication interface2201, where the data packet carries a number, and the number is used to select a transmission path for the data packet;
calculating a hash value based on a source IP address, a destination IP address, a source port, a destination port, a transport-layer protocol, and the number corresponding to the data packet; and
selecting, based on the hash value, an egress port corresponding to the hash value to forward the data packet.
In an embodiment, the data packet is a probe data packet or a service data packet.
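The hash-based egress selection just restated can be sketched as follows; the field names and the choice of SHA-256 are illustrative assumptions, since any deterministic hash over the same inputs would serve.

    # Illustrative per-packet egress selection at a transmission node: the hash
    # input is the five-tuple plus the carried number, so a probe data packet and
    # a service data packet with equal numbers (and the same flow identifiers)
    # always map to the same egress port, i.e., the same path.
    import hashlib

    def select_egress_port(packet, egress_ports):
        key = "|".join(str(packet[f]) for f in
                       ("src_ip", "dst_ip", "src_port", "dst_port",
                        "protocol", "number"))
        digest = hashlib.sha256(key.encode()).digest()
        return egress_ports[int.from_bytes(digest[:8], "big") % len(egress_ports)]

    ports = ["port0", "port1", "port2", "port3"]
    probe = {"src_ip": "10.0.0.1", "dst_ip": "10.0.0.2", "src_port": 5000,
             "dst_port": 6000, "protocol": 6, "number": 7}
    service = dict(probe)        # same flow identifiers and the same number
    assert select_egress_port(probe, ports) == select_egress_port(service, ports)

Because the number participates in the hash, probes with different first numbers are spread across the available paths, while each probe/service pair with matching numbers stays on a single path.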
In an embodiment, the method further includes:
allocating a first bandwidth for service data packet transmission and allocating a second bandwidth for probe data packet transmission, where the first bandwidth is greater than the second bandwidth.
In an embodiment, a ratio of the second bandwidth to the first bandwidth is a ratio of an average size of the probe data packet to an average size of the service data packet. In an embodiment, when the probe data packet received by using the communication interface2201is a forward probe data packet, the forward probe data packet is transmitted by using the second bandwidth. When the probe data packet received by using the communication interface2201is a backhaul probe data packet, the backhaul probe data packet is transmitted by using the first bandwidth, where the forward probe data packet is a probe data packet sent by the source node to the destination node, and the backhaul probe data packet is a probe data packet backhauled by the destination node to the source node. In an embodiment, when a rate at which the forward probe data packet is received by using the communication interface2201is greater than the second bandwidth, the forward probe data packet is discarded, for example, a forward probe data packet that exceeds a transmission capability (the second bandwidth) is discarded. In an embodiment, the backhaul probe data packet carries information about a highest transmission priority, and a packet loss priority corresponding to the highest transmission priority is the lowest. In an embodiment, the probe data packet does not include a data domain. In another form of this embodiment, a computer-readable storage medium is provided. The computer-readable storage medium stores instructions. When the instructions are executed, the method applicable to the source node, the destination node, or the transmission node in the foregoing method embodiments may be performed. In another form of this embodiment, a computer program product including instructions is provided. When the instructions are executed, the method applicable to the source node, the destination node, or the transmission node in the foregoing method embodiments may be performed. In another form of this embodiment, a chip is provided. When running, the chip may perform the method applicable to the source node, the destination node, or the transmission node in the foregoing method embodiments. In this embodiment of this disclosure, the processor may be a general-purpose processor, a digital signal processor, an application-specific integrated circuit, a field programmable gate array or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. The processor may implement or execute the methods, operations, and logical block diagrams disclosed in embodiments of this disclosure. The general-purpose processor may be a microprocessor or any conventional processor or the like. The operations of the method disclosed with reference to embodiments of this disclosure may be directly performed by a hardware processor, or may be performed by using a combination of hardware and software modules in the processor. In embodiments of this disclosure, the memory may be a nonvolatile memory, for example, a hard disk drive (HDD) or a solid-state drive (SSD), or may be a volatile memory such as a random access memory (RAM).
The memory may also be any other medium that can be used to carry or store expected program code in a form of instructions or a data structure and that can be accessed by a computer; however, the memory is not limited thereto. The memory in embodiments of this disclosure may alternatively be a circuit or any other apparatus that can implement a storage function, and is configured to store program instructions and/or data. All or some of the methods provided in embodiments of this disclosure may be implemented by using software, hardware, firmware, or any combination thereof. When the embodiments are implemented by using software, all or some of the embodiments may be implemented in a form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, all or some of the procedures or functions according to embodiments of this disclosure are generated. The computer may be a general-purpose computer, a dedicated computer, a computer network, a network device, a terminal device, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or may be transmitted from a computer-readable storage medium to another computer-readable storage medium. For example, the computer instructions may be transmitted from a website, computer, server, or data center to another website, computer, server, or data center in a wired (for example, a coaxial cable, an optical fiber, or a digital subscriber line (DSL)) or wireless (for example, infrared, radio, or microwave) manner. The computer-readable storage medium may be any usable medium accessible by a computer, or a data storage device, for example, a server or a data center, integrating one or more usable media. The usable medium may be a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium (for example, a DVD), a semiconductor medium, or the like. In embodiments of this disclosure, on the premise that there is no logical conflict, embodiments may be mutually referenced. For example, methods and/or terms in the method embodiments may be mutually referenced, and functions and/or terms in the apparatus embodiments may be mutually referenced; likewise, functions and/or terms may be mutually referenced between the apparatus embodiments and the method embodiments. It is clear that a person skilled in the art can make various modifications and variations to this disclosure without departing from the scope of this disclosure. This disclosure is intended to cover these modifications and variations of this disclosure provided that they fall within the scope of protection defined by the following claims and their equivalent technologies.
DETAILED DESCRIPTION Some of the embodiments contemplated herein will now be described more fully with reference to the accompanying drawings. Other embodiments, however, are contained within the scope of the subject matter disclosed herein; the disclosed subject matter should not be construed as limited to only the embodiments set forth herein. Rather, these embodiments are provided by way of example to convey the scope of the subject matter to those skilled in the art. Additional information may also be found in the document(s) provided in the Appendix. As explained in the current version of 3GPP TR 24.890, in order to transmit a 5GSM message, the UE sends a transport message (e.g., uplink (UL) session management (SM) MESSAGE TRANSPORT message) comprising a session management (SM) message (e.g., 5GSM message), PDU session ID and other parameters (e.g., DNN) to an access and mobility management function (AMF). Upon receiving the transport message comprising the SM message, PDU session ID, and other parameters from the UE, the AMF selects an SMF (if not selected already for the PDU session) based on the received transport message, and forwards the SM message to the selected SMF. Clause 8.5.1.1.2.1.1.4 of 3GPP TR 24.890 explains abnormal cases on the network side regarding UE-initiated SM message transport procedures where the AMF may be unable to select an SMF based on the transport message. In some embodiments, a first abnormal case may be where the AMF does not have a PDU session routing context for the PDU session ID of the transport message and the UE, the request type IE of the transport message is set to “initial request,” and the AMF fails to select an SMF. In some embodiments, a second abnormal case may be where the AMF does not have a PDU session routing context for the PDU session ID of the transport message and the UE, the request type IE of the transport message is set to “existing PDU session,” and the user's subscription context obtained from a unified data management (UDM) does not contain an SMF ID corresponding to: (i) the DNN of the transport message, if the DNN is included in the transport message; or (ii) a default DNN, if the DNN is not included in the transport message. In these scenarios, the AMF may fail to select an SMF. In some embodiments, another abnormal case may be where the UE does not provide a request type in the transport message. The AMF may be unable to select an SMF based on the transport message. The current version of 3GPP TR 24.890 does not specify how the AMF informs the UE about the failure to select an SMF, as occurs, for instance, in the abnormal cases described above. Accordingly, in the absence of any such specification, the UE is unable to determine whether the failure is due to a permanent cause (e.g., the requested DNN is not an authorized DNN for the UE), and the UE may retransmit the SM message in a new transport message to the AMF. Upon receipt of the new transport message, the AMF may need to repeat the same SMF selection, only to result in the same failure to select an SMF. In some embodiments, the SM transport procedures (clause 8.5.1.1.2.1) as described by 3GPP TR 24.890 may be improved as described in the present disclosure below. In some embodiments, if the AMF is unable to forward the SM message (e.g., 5GSM message) of the transport message (e.g., UL SM MESSAGE TRANSPORT message), the AMF may create and send a status message (e.g., 5GMM STATUS message) to the UE.
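The abnormal cases above reduce to a small decision tree, sketched below for illustration; helper names such as has_routing_context and select_smf are hypothetical stand-ins and are not 3GPP-defined procedures.

    # Illustrative sketch of when the AMF fails to select an SMF for an
    # UL SM MESSAGE TRANSPORT message. All helpers are hypothetical.
    def handle_ul_sm_transport(amf, ue, msg):
        if amf.has_routing_context(ue, msg["pdu_session_id"]):
            return amf.forward_to_known_smf(ue, msg)
        request_type = msg.get("request_type")
        if request_type == "initial request":
            smf = amf.select_smf(msg)                # may fail: first abnormal case
        elif request_type == "existing PDU session":
            dnn = msg.get("dnn") or amf.default_dnn(ue)
            smf = amf.udm_subscription(ue).smf_id_for(dnn)  # may be None: second case
        else:
            smf = None                               # no request type: third case
        if smf is None:
            return amf.send_status_to_ue(ue, msg)    # proposed behavior, described below
        return amf.forward(smf, msg)

In each failing branch, the behavior proposed next is to return a status message to the UE rather than silently discarding the SM message.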
The status message may comprise a 5GMM message container IE containing the transport message, and a cause of failure to forward the SM message. In some embodiments, if the UE receives the status message comprising the 5GMM message container IE containing the transport message containing the SM message, the 5GMM layer may inform the 5GSM layer about non-delivery of the SM message. Based on the notification about the non-delivery of the SM message, the 5GSM layer may stop any retransmissions of the SM message and consider the 5GSM procedure as unsuccessfully completed. In some embodiments, the AMF may create the status message based on a failure of the AMF to select an SMF as described above, for instance, in the first abnormal case. For example, the AMF may create the status message if the AMF does not have a PDU session routing context for the PDU session ID of the transport message and the UE, the request type IE of the transport message is set to “initial request,” and the AMF fails to select an SMF. The AMF may set a 5GMM message container IE of the created status message to the transport message, according to some embodiments. The AMF may set a cause IE of the created status message to a cause indicating the failure to select an SMF. The AMF may send the created status message to the UE. In some embodiments, the AMF may create the status message based on a failure of the AMF to select an SMF as described above, for instance, in the second abnormal case. For example, the AMF may create the status message if the AMF does not have a PDU session routing context for the PDU session ID of the transport message and the UE, the request type IE of the transport message is set to “existing PDU session,” and the user's subscription context obtained from a unified data management (UDM) does not contain an SMF ID corresponding to the DNN of the transport message, if the DNN is included in the transport message. The AMF may set a 5GMM message container IE of the created status message to the transport message, according to some embodiments. The AMF may set a cause IE of the created status message to a cause indicating the failure to select an SMF. The AMF may send the created status message to the UE. As another example, the AMF may create the status message if the AMF does not have a PDU session routing context for the PDU session ID of the transport message and the UE, the request type IE of the transport message is set to “existing PDU session,” and the user's subscription context obtained from a unified data management (UDM) does not contain an SMF ID corresponding to a default DNN, if the DNN is not included in the transport message. The AMF may set a 5GMM message container IE of the created status message to the transport message, according to some embodiments. The AMF may set a cause IE of the created status message to a cause indicating the failure to select an SMF. The AMF may send the created status message to the UE. In some embodiments, the AMF may create the status message based on a failure of the AMF to select an SMF when the AMF does not have a PDU session routing context for the PDU session ID of the transport message and the UE, and the request type IE of the transport message is not provided. The AMF may set a 5GMM message container IE of the created status message to the transport message, according to some embodiments. The AMF may set a cause IE of the created status message to a cause indicating the failure to select an SMF.
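Taken together, the construction just described can be summarized in a short sketch; the dictionary layout and helper name are assumptions for illustration only, as the actual encoding is defined by the 5GMM protocol.

    # Minimal sketch of building the proposed 5GMM STATUS message: the 5GMM
    # message container IE echoes the undeliverable transport message, and the
    # cause IE carries the SMF-selection failure cause. Field names are
    # illustrative assumptions.
    def build_status_message(transport_message, failure_cause):
        return {
            "type": "5GMM STATUS",
            "5gmm_message_container_ie": transport_message,
            "cause_ie": failure_cause,
        }

    status = build_status_message(
        transport_message={"type": "UL SM MESSAGE TRANSPORT",
                           "pdu_session_id": 5,
                           "sm_message": "PDU SESSION ESTABLISHMENT REQUEST"},
        failure_cause="failure to select an SMF")

Echoing the entire transport message back to the UE lets the UE correlate the failure with the exact SM message it sent, which enables the non-delivery handling described next.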
The AMF may send the created status message to the UE. In some embodiments, clause 8.5.1.1.2.1.1 of 3GPP TR 24.890 may be improved to describe embodiments where a UE-initiated SM message transport initiation is not accepted by the network. The UE may receive the status message (e.g., 5GMM STATUS message) transmitted by the AMF described above, according to some embodiments. Upon reception of the status message with the 5GMM message container IE containing the transport message (e.g., UL SM MESSAGE TRANSPORT message), the UE may pass a non-delivery indication along with the SM message (e.g., 5GSM message) of the transport message to the 5GSM procedures specified in clause 9 of 3GPP TR 24.890. Specifically, the mobility management layer of the UE may pass the non-delivery indication along with the SM message to the session management protocol layer of the UE to notify that the SM message could not be forwarded by the AMF. In some embodiments, the 5GS session management procedures (clause 9.4) as described by 3GPP TR 24.890 may be improved as described in the present disclosure below. Clause 9.4.2.5 of 3GPP TR 24.890 describes abnormal cases in the UE in UE-requested PDU session establishment procedures. In some embodiments, the session management protocol layer of the UE may receive a non-delivery indication from the mobility management layer of the UE along with a session establishment request message (e.g., PDU SESSION ESTABLISHMENT REQUEST message) with PTI IE set to the allocated PTI value. In some embodiments, the non-delivery indication may be a UE internal indication triggered by the UE receiving the status message (e.g., 5GMM STATUS message) transmitted by the AMF. Upon receipt of the non-delivery indication along with the session establishment request message with the PTI IE set to the allocated PTI value, the UE may stop a timer (e.g., Tx), release the allocated PTI value and consider that the PDU session is not established. Clause 9.4.4.5 of 3GPP TR 24.890 describes abnormal cases in the UE in UE-requested PDU session modification procedures. In some embodiments, the session management protocol layer of the UE may receive a non-delivery indication from the mobility management layer of the UE along with a session modification request message (e.g., PDU SESSION MODIFICATION REQUEST message) with a PTI IE set to the allocated PTI value. In some embodiments, the non-delivery indication may be a UE internal indication triggered by the UE receiving the status message (e.g., 5GMM STATUS message) transmitted by the AMF. Upon receipt of the non-delivery indication along with the session modification request message with the PTI IE set to the allocated PTI value, the UE may stop a timer (e.g., Tk), release the allocated PTI value and consider that the PDU session is not modified. Clause 9.4.6.5 of 3GPP TR 24.890 describes abnormal cases in the UE in UE-requested PDU session release procedures. In some embodiments, the session management protocol layer of the UE may receive a non-delivery indication along with a session release request message (e.g., PDU SESSION RELEASE REQUEST message) with a PTI IE set to the allocated PTI value. In some embodiments, the non-delivery indication may be a UE internal indication triggered by the UE receiving the status message (e.g., 5GMM STATUS message) transmitted by the AMF. 
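The UE-side handling in the establishment and modification cases above, and in the release case completed below, follows a single pattern; a hedged sketch, in which the procedure registry, timer object, and names are illustrative assumptions:

    # Common UE-side reaction to a non-delivery indication: look up the pending
    # 5GSM procedure by the PTI carried in the returned request, stop its timer,
    # release the PTI, and consider the procedure unsuccessfully completed.
    def on_non_delivery(sm_layer, returned_request):
        procedure = sm_layer.procedures.pop(returned_request["pti"], None)
        if procedure is None:
            return                                    # no matching allocated PTI
        procedure.timer.stop()                        # e.g., the Tx/Tk/Tz timer above
        sm_layer.release_pti(returned_request["pti"])
        procedure.state = "unsuccessfully completed"  # PDU session not established/modified/released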
Upon receipt of the non-delivery indication along with the session release request message with the PTI IE set to the allocated PTI value, the UE may stop a timer (e.g., Tz), release the allocated PTI value, and consider that the PDU session is not released. In some embodiments, alternative improvements to 3GPP TR 24.890 may be provided as described by the present disclosure below. Alternative (1): the UE-initiated NAS transport procedure may be extended with a transport accept message (e.g., UL SM MESSAGE TRANSPORT ACCEPT message) or a transport reject message (e.g., UL SM MESSAGE TRANSPORT REJECT message), which the AMF sends upon reception and handling of a transport request message (e.g., UL SM MESSAGE TRANSPORT REQUEST message), according to some embodiments. At most one UE-initiated NAS transport procedure may run at any given time. If the AMF is able to forward an SM message (e.g., 5GSM message) of the transport request message, the AMF may send the transport accept message. If the AMF is unable to forward the SM message of the transport request message, the AMF may send the transport reject message. In some embodiments, the transport reject message may contain a cause of failure to forward the SM message of the transport request message. Accordingly, reliability may be provided on the SM transport layer, and the 5GSM procedure will not need to retransmit the SM message. If transport of the SM message fails, the UE will receive the transport reject message and consider the 5GSM procedure as unsuccessfully completed. In some embodiments, alternative (1) may require two NAS messages to transport the SM message, while the existing procedure described in 3GPP TR 24.890 requires one NAS message. Alternative (2): the AMF may be configured with a default SMF for rejection, according to some embodiments. The AMF may forward any SM message (e.g., 5GSM message) that the AMF is otherwise unable to route to the default SMF for rejection. Accordingly, the default SMF may reject the SM message with an appropriate response message (e.g., 5GSM response message). In some embodiments, alternative (2) requires deployment of an SMF. In some embodiments, the SMF may not have to be fully functional. For example, the SMF may only need to be able to reject the SM message from the UE. Alternative (3): the AMF may do nothing and continue to receive retransmissions of the SM message (e.g., 5GSM message) from the UE when the AMF is not able to select an SMF for the SM message, according to some embodiments. Although the subject matter described herein may be implemented in any appropriate type of system using any suitable components, the embodiments disclosed herein are described in relation to a wireless network, such as the example wireless network illustrated inFIG.1. For simplicity, the wireless network ofFIG.1only depicts network106, network nodes160and160b, and WDs110,110b, and110c. In practice, a wireless network may further include any additional elements suitable to support communication between wireless devices or between a wireless device and another communication device, such as a landline telephone, a service provider, or any other network node or end device. Of the illustrated components, network node160and wireless device (WD)110are depicted with additional detail.
The wireless network may provide communication and other types of services to one or more wireless devices to facilitate the wireless devices' access to and/or use of the services provided by, or via, the wireless network. The wireless network may comprise and/or interface with any type of communication, telecommunication, data, cellular, and/or radio network or other similar type of system. In some embodiments, the wireless network may be configured to operate according to specific standards or other types of predefined rules or procedures. Thus, particular embodiments of the wireless network may implement communication standards, such as Global System for Mobile Communications (GSM), Universal Mobile Telecommunications System (UMTS), Long Term Evolution (LTE), and/or other suitable 2G, 3G, 4G, or 5G standards; wireless local area network (WLAN) standards, such as the IEEE 802.11 standards; and/or any other appropriate wireless communication standard, such as the Worldwide Interoperability for Microwave Access (WiMax), Bluetooth, Z-Wave and/or ZigBee standards. Network106may comprise one or more backhaul networks, core networks, IP networks, public switched telephone networks (PSTNs), packet data networks, optical networks, wide-area networks (WANs), local area networks (LANs), wireless local area networks (WLANs), wired networks, wireless networks, metropolitan area networks, and other networks to enable communication between devices. Network node160and WD110comprise various components described in more detail below. These components work together in order to provide network node and/or wireless device functionality, such as providing wireless connections in a wireless network. In different embodiments, the wireless network may comprise any number of wired or wireless networks, network nodes, base stations, controllers, wireless devices, relay stations, and/or any other components or systems that may facilitate or participate in the communication of data and/or signals whether via wired or wireless connections. As used herein, network node refers to equipment capable, configured, arranged and/or operable to communicate directly or indirectly with a wireless device and/or with other network nodes or equipment in the wireless network to enable and/or provide wireless access to the wireless device and/or to perform other functions (e.g., administration) in the wireless network. Examples of network nodes include, but are not limited to, access points (APs) (e.g., radio access points), base stations (BSs) (e.g., radio base stations, Node Bs, evolved Node Bs (eNBs) and NR NodeBs (gNBs)). Base stations may be categorized based on the amount of coverage they provide (or, stated differently, their transmit power level) and may then also be referred to as femto base stations, pico base stations, micro base stations, or macro base stations. A base station may be a relay node or a relay donor node controlling a relay. A network node may also include one or more (or all) parts of a distributed radio base station such as centralized digital units and/or remote radio units (RRUs), sometimes referred to as Remote Radio Heads (RRHs). Such remote radio units may or may not be integrated with an antenna as an antenna integrated radio. Parts of a distributed radio base station may also be referred to as nodes in a distributed antenna system (DAS). 
Yet further examples of network nodes include multi-standard radio (MSR) equipment such as MSR BSs, network controllers such as radio network controllers (RNCs) or base station controllers (BSCs), base transceiver stations (BTSs), transmission points, transmission nodes, multi-cell/multicast coordination entities (MCEs), core network nodes (e.g., MSCs, MMEs), O&M nodes, OSS nodes, SON nodes, positioning nodes (e.g., E-SMLCs), and/or MDTs. As another example, a network node may be a virtual network node as described in more detail below. More generally, however, network nodes may represent any suitable device (or group of devices) capable, configured, arranged, and/or operable to enable and/or provide a wireless device with access to the wireless network or to provide some service to a wireless device that has accessed the wireless network. InFIG.1, network node160includes processing circuitry170, device readable medium180, interface190, auxiliary equipment184, power source186, power circuitry187, and antenna162. Although network node160illustrated in the example wireless network ofFIG.1may represent a device that includes the illustrated combination of hardware components, other embodiments may comprise network nodes with different combinations of components. It is to be understood that a network node comprises any suitable combination of hardware and/or software needed to perform the tasks, features, functions and methods disclosed herein. Moreover, while the components of network node160are depicted as single boxes located within a larger box, or nested within multiple boxes, in practice, a network node may comprise multiple different physical components that make up a single illustrated component (e.g., device readable medium180may comprise multiple separate hard drives as well as multiple RAM modules). Similarly, network node160may be composed of multiple physically separate components (e.g., a NodeB component and a RNC component, or a BTS component and a BSC component, etc.), which may each have their own respective components. In certain scenarios in which network node160comprises multiple separate components (e.g., BTS and BSC components), one or more of the separate components may be shared among several network nodes. For example, a single RNC may control multiple NodeBs. In such a scenario, each unique NodeB and RNC pair may in some instances be considered a single separate network node. In some embodiments, network node160may be configured to support multiple radio access technologies (RATs). In such embodiments, some components may be duplicated (e.g., separate device readable medium180for the different RATs) and some components may be reused (e.g., the same antenna162may be shared by the RATs). Network node160may also include multiple sets of the various illustrated components for different wireless technologies integrated into network node160, such as, for example, GSM, WCDMA, LTE, NR, WiFi, or Bluetooth wireless technologies. These wireless technologies may be integrated into the same or different chip or set of chips and other components within network node160. Processing circuitry170is configured to perform any determining, calculating, or similar operations (e.g., certain obtaining operations) described herein as being provided by a network node.
These operations performed by processing circuitry170may include processing information obtained by processing circuitry170by, for example, converting the obtained information into other information, comparing the obtained information or converted information to information stored in the network node, and/or performing one or more operations based on the obtained information or converted information, and as a result of said processing making a determination. Processing circuitry170may comprise a combination of one or more of a microprocessor, controller, microcontroller, central processing unit, digital signal processor, application-specific integrated circuit, field programmable gate array, or any other suitable computing device, resource, or combination of hardware, software and/or encoded logic operable to provide, either alone or in conjunction with other network node160components, such as device readable medium180, network node160functionality. For example, processing circuitry170may execute instructions stored in device readable medium180or in memory within processing circuitry170. Such functionality may include providing any of the various wireless features, functions, or benefits discussed herein. In some embodiments, processing circuitry170may include a system on a chip (SOC). In some embodiments, processing circuitry170may include one or more of radio frequency (RF) transceiver circuitry172and baseband processing circuitry174. In some embodiments, radio frequency (RF) transceiver circuitry172and baseband processing circuitry174may be on separate chips (or sets of chips), boards, or units, such as radio units and digital units. In alternative embodiments, part or all of RF transceiver circuitry172and baseband processing circuitry174may be on the same chip or set of chips, boards, or units. In certain embodiments, some or all of the functionality described herein as being provided by a network node, base station, eNB or other such network device may be performed by processing circuitry170executing instructions stored on device readable medium180or memory within processing circuitry170. In alternative embodiments, some or all of the functionality may be provided by processing circuitry170without executing instructions stored on a separate or discrete device readable medium, such as in a hard-wired manner. In any of those embodiments, whether executing instructions stored on a device readable storage medium or not, processing circuitry170can be configured to perform the described functionality. The benefits provided by such functionality are not limited to processing circuitry170alone or to other components of network node160, but are enjoyed by network node160as a whole, and/or by end users and the wireless network generally. Device readable medium180may comprise any form of volatile or non-volatile computer readable memory including, without limitation, persistent storage, solid-state memory, remotely mounted memory, magnetic media, optical media, random access memory (RAM), read-only memory (ROM), mass storage media (for example, a hard disk), removable storage media (for example, a flash drive, a Compact Disk (CD) or a Digital Video Disk (DVD)), and/or any other volatile or non-volatile, non-transitory device readable and/or computer-executable memory devices that store information, data, and/or instructions that may be used by processing circuitry170.
Device readable medium180may store any suitable instructions, data or information, including a computer program, software, an application including one or more of logic, rules, code, tables, etc., and/or other instructions capable of being executed by processing circuitry170and utilized by network node160. Device readable medium180may be used to store any calculations made by processing circuitry170and/or any data received via interface190. In some embodiments, processing circuitry170and device readable medium180may be considered to be integrated. Interface190is used in the wired or wireless communication of signalling and/or data between network node160, network106, and/or WDs110. As illustrated, interface190comprises port(s)/terminal(s)194to send and receive data, for example to and from network106over a wired connection. Interface190also includes radio front end circuitry192that may be coupled to, or in certain embodiments a part of, antenna162. Radio front end circuitry192comprises filters198and amplifiers196. Radio front end circuitry192may be connected to antenna162and processing circuitry170. Radio front end circuitry may be configured to condition signals communicated between antenna162and processing circuitry170. Radio front end circuitry192may receive digital data that is to be sent out to other network nodes or WDs via a wireless connection. Radio front end circuitry192may convert the digital data into a radio signal having the appropriate channel and bandwidth parameters using a combination of filters198and/or amplifiers196. The radio signal may then be transmitted via antenna162. Similarly, when receiving data, antenna162may collect radio signals which are then converted into digital data by radio front end circuitry192. The digital data may be passed to processing circuitry170. In other embodiments, the interface may comprise different components and/or different combinations of components. In certain alternative embodiments, network node160may not include separate radio front end circuitry192; instead, processing circuitry170may comprise radio front end circuitry and may be connected to antenna162without separate radio front end circuitry192. Similarly, in some embodiments, all or some of RF transceiver circuitry172may be considered a part of interface190. In still other embodiments, interface190may include one or more ports or terminals194, radio front end circuitry192, and RF transceiver circuitry172, as part of a radio unit (not shown), and interface190may communicate with baseband processing circuitry174, which is part of a digital unit (not shown). Antenna162may include one or more antennas, or antenna arrays, configured to send and/or receive wireless signals. Antenna162may be coupled to radio front end circuitry192and may be any type of antenna capable of transmitting and receiving data and/or signals wirelessly. In some embodiments, antenna162may comprise one or more omni-directional, sector or panel antennas operable to transmit/receive radio signals between, for example, 2 GHz and 66 GHz. An omni-directional antenna may be used to transmit/receive radio signals in any direction, a sector antenna may be used to transmit/receive radio signals from devices within a particular area, and a panel antenna may be a line of sight antenna used to transmit/receive radio signals in a relatively straight line. In some instances, the use of more than one antenna may be referred to as MIMO.
In certain embodiments, antenna162may be separate from network node160and may be connectable to network node160through an interface or port. Antenna162, interface190, and/or processing circuitry170may be configured to perform any receiving operations and/or certain obtaining operations described herein as being performed by a network node. Any information, data and/or signals may be received from a wireless device, another network node and/or any other network equipment. Similarly, antenna162, interface190, and/or processing circuitry170may be configured to perform any transmitting operations described herein as being performed by a network node. Any information, data and/or signals may be transmitted to a wireless device, another network node and/or any other network equipment. Power circuitry187may comprise, or be coupled to, power management circuitry and is configured to supply the components of network node160with power for performing the functionality described herein. Power circuitry187may receive power from power source186. Power source186and/or power circuitry187may be configured to provide power to the various components of network node160in a form suitable for the respective components (e.g., at a voltage and current level needed for each respective component). Power source186may either be included in, or external to, power circuitry187and/or network node160. For example, network node160may be connectable to an external power source (e.g., an electricity outlet) via an input circuitry or interface such as an electrical cable, whereby the external power source supplies power to power circuitry187. As a further example, power source186may comprise a source of power in the form of a battery or battery pack which is connected to, or integrated in, power circuitry187. The battery may provide backup power should the external power source fail. Other types of power sources, such as photovoltaic devices, may also be used. Alternative embodiments of network node160may include additional components beyond those shown inFIG.1that may be responsible for providing certain aspects of the network node's functionality, including any of the functionality described herein and/or any functionality necessary to support the subject matter described herein. For example, network node160may include user interface equipment to allow input of information into network node160and to allow output of information from network node160. This may allow a user to perform diagnostic, maintenance, repair, and other administrative functions for network node160. As used herein, wireless device (WD) refers to a device capable, configured, arranged and/or operable to communicate wirelessly with network nodes and/or other wireless devices. Unless otherwise noted, the term WD may be used interchangeably herein with user equipment (UE). Communicating wirelessly may involve transmitting and/or receiving wireless signals using electromagnetic waves, radio waves, infrared waves, and/or other types of signals suitable for conveying information through air. In some embodiments, a WD may be configured to transmit and/or receive information without direct human interaction. For instance, a WD may be designed to transmit information to a network on a predetermined schedule, when triggered by an internal or external event, or in response to requests from the network. 
Examples of a WD include, but are not limited to, a smart phone, a mobile phone, a cell phone, a voice over IP (VoIP) phone, a wireless local loop phone, a desktop computer, a personal digital assistant (PDA), a wireless camera, a gaming console or device, a music storage device, a playback appliance, a wearable terminal device, a wireless endpoint, a mobile station, a tablet, a laptop, a laptop-embedded equipment (LEE), a laptop-mounted equipment (LME), a smart device, a wireless customer-premise equipment (CPE), a vehicle-mounted wireless terminal device, etc. A WD may support device-to-device (D2D) communication, for example by implementing a 3GPP standard for sidelink communication, vehicle-to-vehicle (V2V), vehicle-to-infrastructure (V2I), vehicle-to-everything (V2X), and may in this case be referred to as a D2D communication device. As yet another specific example, in an Internet of Things (IoT) scenario, a WD may represent a machine or other device that performs monitoring and/or measurements, and transmits the results of such monitoring and/or measurements to another WD and/or a network node. The WD may in this case be a machine-to-machine (M2M) device, which may in a 3GPP context be referred to as an MTC device. As one particular example, the WD may be a UE implementing the 3GPP narrow band internet of things (NB-IoT) standard. Particular examples of such machines or devices are sensors, metering devices such as power meters, industrial machinery, home or personal appliances (e.g., refrigerators, televisions, etc.), or personal wearables (e.g., watches, fitness trackers, etc.). In other scenarios, a WD may represent a vehicle or other equipment that is capable of monitoring and/or reporting on its operational status or other functions associated with its operation. A WD as described above may represent the endpoint of a wireless connection, in which case the device may be referred to as a wireless terminal. Furthermore, a WD as described above may be mobile, in which case it may also be referred to as a mobile device or a mobile terminal. As illustrated, wireless device110includes antenna111, interface114, processing circuitry120, device readable medium130, user interface equipment132, auxiliary equipment134, power source136and power circuitry137. WD110may include multiple sets of one or more of the illustrated components for different wireless technologies supported by WD110, such as, for example, GSM, WCDMA, LTE, NR, WiFi, WiMAX, or Bluetooth wireless technologies, just to mention a few. These wireless technologies may be integrated into the same or different chips or set of chips as other components within WD110. Antenna111may include one or more antennas or antenna arrays, configured to send and/or receive wireless signals, and is connected to interface114. In certain alternative embodiments, antenna111may be separate from WD110and be connectable to WD110through an interface or port. Antenna111, interface114, and/or processing circuitry120may be configured to perform any receiving or transmitting operations described herein as being performed by a WD. Any information, data and/or signals may be received from a network node and/or another WD. In some embodiments, radio front end circuitry and/or antenna111may be considered an interface. As illustrated, interface114comprises radio front end circuitry112and antenna111. Radio front end circuitry112comprises one or more filters118and amplifiers116.
Radio front end circuitry112is connected to antenna111and processing circuitry120, and is configured to condition signals communicated between antenna111and processing circuitry120. Radio front end circuitry112may be coupled to or a part of antenna111. In some embodiments, WD110may not include separate radio front end circuitry112; rather, processing circuitry120may comprise radio front end circuitry and may be connected to antenna111. Similarly, in some embodiments, some or all of RF transceiver circuitry122may be considered a part of interface114. Radio front end circuitry112may receive digital data that is to be sent out to other network nodes or WDs via a wireless connection. Radio front end circuitry112may convert the digital data into a radio signal having the appropriate channel and bandwidth parameters using a combination of filters118and/or amplifiers116. The radio signal may then be transmitted via antenna111. Similarly, when receiving data, antenna111may collect radio signals which are then converted into digital data by radio front end circuitry112. The digital data may be passed to processing circuitry120. In other embodiments, the interface may comprise different components and/or different combinations of components. Processing circuitry120may comprise a combination of one or more of a microprocessor, controller, microcontroller, central processing unit, digital signal processor, application-specific integrated circuit, field programmable gate array, or any other suitable computing device, resource, or combination of hardware, software, and/or encoded logic operable to provide, either alone or in conjunction with other WD110components, such as device readable medium130, WD110functionality. Such functionality may include providing any of the various wireless features or benefits discussed herein. For example, processing circuitry120may execute instructions stored in device readable medium130or in memory within processing circuitry120to provide the functionality disclosed herein. As illustrated, processing circuitry120includes one or more of RF transceiver circuitry122, baseband processing circuitry124, and application processing circuitry126. In other embodiments, the processing circuitry may comprise different components and/or different combinations of components. In certain embodiments, processing circuitry120of WD110may comprise a SOC. In some embodiments, RF transceiver circuitry122, baseband processing circuitry124, and application processing circuitry126may be on separate chips or sets of chips. In alternative embodiments, part or all of baseband processing circuitry124and application processing circuitry126may be combined into one chip or set of chips, and RF transceiver circuitry122may be on a separate chip or set of chips. In still alternative embodiments, part or all of RF transceiver circuitry122and baseband processing circuitry124may be on the same chip or set of chips, and application processing circuitry126may be on a separate chip or set of chips. In yet other alternative embodiments, part or all of RF transceiver circuitry122, baseband processing circuitry124, and application processing circuitry126may be combined in the same chip or set of chips. In some embodiments, RF transceiver circuitry122may be a part of interface114. RF transceiver circuitry122may condition RF signals for processing circuitry120.
In certain embodiments, some or all of the functionality described herein as being performed by a WD may be provided by processing circuitry120executing instructions stored on device readable medium130, which in certain embodiments may be a computer-readable storage medium. In alternative embodiments, some or all of the functionality may be provided by processing circuitry120without executing instructions stored on a separate or discrete device readable storage medium, such as in a hard-wired manner. In any of those particular embodiments, whether executing instructions stored on a device readable storage medium or not, processing circuitry120can be configured to perform the described functionality. The benefits provided by such functionality are not limited to processing circuitry120alone or to other components of WD110, but are enjoyed by WD110as a whole, and/or by end users and the wireless network generally. Processing circuitry120may be configured to perform any determining, calculating, or similar operations (e.g., certain obtaining operations) described herein as being performed by a WD. These operations, as performed by processing circuitry120, may include processing information obtained by processing circuitry120by, for example, converting the obtained information into other information, comparing the obtained information or converted information to information stored by WD110, and/or performing one or more operations based on the obtained information or converted information, and as a result of said processing making a determination. Device readable medium130may be operable to store a computer program, software, an application including one or more of logic, rules, code, tables, etc. and/or other instructions capable of being executed by processing circuitry120. Device readable medium130may include computer memory (e.g., Random Access Memory (RAM) or Read Only Memory (ROM)), mass storage media (e.g., a hard disk), removable storage media (e.g., a Compact Disk (CD) or a Digital Video Disk (DVD)), and/or any other volatile or non-volatile, non-transitory device readable and/or computer executable memory devices that store information, data, and/or instructions that may be used by processing circuitry120. In some embodiments, processing circuitry120and device readable medium130may be considered to be integrated. User interface equipment132may provide components that allow for a human user to interact with WD110. Such interaction may be of many forms, such as visual, audial, tactile, etc. User interface equipment132may be operable to produce output to the user and to allow the user to provide input to WD110. The type of interaction may vary depending on the type of user interface equipment132installed in WD110. For example, if WD110is a smart phone, the interaction may be via a touch screen; if WD110is a smart meter, the interaction may be through a screen that provides usage (e.g., the number of gallons used) or a speaker that provides an audible alert (e.g., if smoke is detected). User interface equipment132may include input interfaces, devices and circuits, and output interfaces, devices and circuits. User interface equipment132is configured to allow input of information into WD110, and is connected to processing circuitry120to allow processing circuitry120to process the input information. User interface equipment132may include, for example, a microphone, a proximity or other sensor, keys/buttons, a touch display, one or more cameras, a USB port, or other input circuitry. 
User interface equipment132is also configured to allow output of information from WD110, and to allow processing circuitry120to output information from WD110. User interface equipment132may include, for example, a speaker, a display, vibrating circuitry, a USB port, a headphone interface, or other output circuitry. Using one or more input and output interfaces, devices, and circuits of user interface equipment132, WD110may communicate with end users and/or the wireless network, and allow them to benefit from the functionality described herein. Auxiliary equipment134is operable to provide more specific functionality which may not be generally performed by WDs. This may comprise specialized sensors for doing measurements for various purposes, interfaces for additional types of communication such as wired communications, etc. The inclusion and type of components of auxiliary equipment134may vary depending on the embodiment and/or scenario. Power source136may, in some embodiments, be in the form of a battery or battery pack. Other types of power sources, such as an external power source (e.g., an electricity outlet), photovoltaic devices or power cells, may also be used. WD110may further comprise power circuitry137for delivering power from power source136to the various parts of WD110which need power from power source136to carry out any functionality described or indicated herein. Power circuitry137may in certain embodiments comprise power management circuitry. Power circuitry137may additionally or alternatively be operable to receive power from an external power source, in which case WD110may be connectable to the external power source (such as an electricity outlet) via input circuitry or an interface such as an electrical power cable. Power circuitry137may also in certain embodiments be operable to deliver power from an external power source to power source136. This may be, for example, for the charging of power source136. Power circuitry137may perform any formatting, converting, or other modification to the power from power source136to make the power suitable for the respective components of WD110to which power is supplied. FIG.2illustrates one embodiment of a UE in accordance with various aspects described herein. As used herein, a user equipment or UE may not necessarily have a user in the sense of a human user who owns and/or operates the relevant device. Instead, a UE may represent a device that is intended for sale to, or operation by, a human user but which may not, or which may not initially, be associated with a specific human user (e.g., a smart sprinkler controller). Alternatively, a UE may represent a device that is not intended for sale to, or operation by, an end user but which may be associated with or operated for the benefit of a user (e.g., a smart power meter). UE200may be any UE identified by the 3rd Generation Partnership Project (3GPP), including a NB-IoT UE, a machine type communication (MTC) UE, and/or an enhanced MTC (eMTC) UE. UE200, as illustrated inFIG.2, is one example of a WD configured for communication in accordance with one or more communication standards promulgated by the 3rd Generation Partnership Project (3GPP), such as 3GPP's GSM, UMTS, LTE, and/or 5G standards. As mentioned previously, the terms WD and UE may be used interchangeably. Accordingly, althoughFIG.2illustrates a UE, the components discussed herein are equally applicable to a WD, and vice-versa.
InFIG.2, UE200includes processing circuitry201that is operatively coupled to input/output interface205, radio frequency (RF) interface209, network connection interface211, memory215including random access memory (RAM)217, read-only memory (ROM)219, and storage medium221or the like, communication subsystem231, power source233, and/or any other component, or any combination thereof. Storage medium221includes operating system223, application program225, and data227. In other embodiments, storage medium221may include other similar types of information. Certain UEs may utilize all of the components shown inFIG.2, or only a subset of the components. The level of integration between the components may vary from one UE to another UE. Further, certain UEs may contain multiple instances of a component, such as multiple processors, memories, transceivers, transmitters, receivers, etc. InFIG.2, processing circuitry201may be configured to process computer instructions and data. Processing circuitry201may be configured to implement any sequential state machine operative to execute machine instructions stored as machine-readable computer programs in the memory, such as one or more hardware-implemented state machines (e.g., in discrete logic, FPGA, ASIC, etc.); programmable logic together with appropriate firmware; one or more stored program, general-purpose processors, such as a microprocessor or Digital Signal Processor (DSP), together with appropriate software; or any combination of the above. For example, the processing circuitry201may include two central processing units (CPUs). Data may be information in a form suitable for use by a computer. In the depicted embodiment, input/output interface205may be configured to provide a communication interface to an input device, output device, or input and output device. UE200may be configured to use an output device via input/output interface205. An output device may use the same type of interface port as an input device. For example, a USB port may be used to provide input to and output from UE200. The output device may be a speaker, a sound card, a video card, a display, a monitor, a printer, an actuator, an emitter, a smartcard, another output device, or any combination thereof. UE200may be configured to use an input device via input/output interface205to allow a user to capture information into UE200. The input device may include a touch-sensitive or presence-sensitive display, a camera (e.g., a digital camera, a digital video camera, a web camera, etc.), a microphone, a sensor, a mouse, a trackball, a directional pad, a trackpad, a scroll wheel, a smartcard, and the like. The presence-sensitive display may include a capacitive or resistive touch sensor to sense input from a user. A sensor may be, for instance, an accelerometer, a gyroscope, a tilt sensor, a force sensor, a magnetometer, an optical sensor, a proximity sensor, another like sensor, or any combination thereof. For example, the input device may be an accelerometer, a magnetometer, a digital camera, a microphone, and an optical sensor. InFIG.2, RF interface209may be configured to provide a communication interface to RF components such as a transmitter, a receiver, and an antenna. Network connection interface211may be configured to provide a communication interface to network243a. 
Network243amay encompass wired and/or wireless networks such as a local-area network (LAN), a wide-area network (WAN), a computer network, a wireless network, a telecommunications network, another like network or any combination thereof. For example, network243amay comprise a Wi-Fi network. Network connection interface211may be configured to include a receiver and a transmitter interface used to communicate with one or more other devices over a communication network according to one or more communication protocols, such as Ethernet, TCP/IP, SONET, ATM, or the like. Network connection interface211may implement receiver and transmitter functionality appropriate to the communication network links (e.g., optical, electrical, and the like). The transmitter and receiver functions may share circuit components, software or firmware, or alternatively may be implemented separately. RAM217may be configured to interface via bus202to processing circuitry201to provide storage or caching of data or computer instructions during the execution of software programs such as the operating system, application programs, and device drivers. ROM219may be configured to provide computer instructions or data to processing circuitry201. For example, ROM219may be configured to store invariant low-level system code or data for basic system functions such as basic input and output (I/O), startup, or reception of keystrokes from a keyboard that are stored in a non-volatile memory. Storage medium221may be configured to include memory such as RAM, ROM, programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), magnetic disks, optical disks, floppy disks, hard disks, removable cartridges, or flash drives. In one example, storage medium221may be configured to include operating system223, application program225such as a web browser application, a widget or gadget engine or another application, and data file227. Storage medium221may store, for use by UE200, any of a variety of various operating systems or combinations of operating systems. Storage medium221may be configured to include a number of physical drive units, such as redundant array of independent disks (RAID), floppy disk drive, flash memory, USB flash drive, external hard disk drive, thumb drive, pen drive, key drive, high-density digital versatile disc (HD-DVD) optical disc drive, internal hard disk drive, Blu-Ray optical disc drive, holographic digital data storage (HDDS) optical disc drive, external mini-dual in-line memory module (DIMM), synchronous dynamic random access memory (SDRAM), external micro-DIMM SDRAM, smartcard memory such as a subscriber identity module or a removable user identity (SIM/RUIM) module, other memory, or any combination thereof. Storage medium221may allow UE200to access computer-executable instructions, application programs or the like, stored on transitory or non-transitory memory media, to off-load data, or to upload data. An article of manufacture, such as one utilizing a communication system may be tangibly embodied in storage medium221, which may comprise a device readable medium. InFIG.2, processing circuitry201may be configured to communicate with network243busing communication subsystem231. Network243aand network243bmay be the same network or networks or different network or networks. Communication subsystem231may be configured to include one or more transceivers used to communicate with network243b. 
For example, communication subsystem231may be configured to include one or more transceivers used to communicate with one or more remote transceivers of another device capable of wireless communication such as another WD, UE, or base station of a radio access network (RAN) according to one or more communication protocols, such as IEEE 802.QQ2, CDMA, WCDMA, GSM, LTE, UTRAN, WiMax, or the like. Each transceiver may include transmitter233and/or receiver235to implement transmitter or receiver functionality, respectively, appropriate to the RAN links (e.g., frequency allocations and the like). Further, transmitter233and receiver235of each transceiver may share circuit components, software or firmware, or alternatively may be implemented separately. In the illustrated embodiment, the communication functions of communication subsystem231may include data communication, voice communication, multimedia communication, short-range communications such as Bluetooth, near-field communication, location-based communication such as the use of the global positioning system (GPS) to determine a location, another like communication function, or any combination thereof. For example, communication subsystem231may include cellular communication, Wi-Fi communication, Bluetooth communication, and GPS communication. Network243bmay encompass wired and/or wireless networks such as a local-area network (LAN), a wide-area network (WAN), a computer network, a wireless network, a telecommunications network, another like network or any combination thereof. For example, network243bmay be a cellular network, a Wi-Fi network, and/or a near-field network. Power source213may be configured to provide alternating current (AC) or direct current (DC) power to components of UE200. The features, benefits and/or functions described herein may be implemented in one of the components of UE200or partitioned across multiple components of UE200. Further, the features, benefits, and/or functions described herein may be implemented in any combination of hardware, software or firmware. In one example, communication subsystem231may be configured to include any of the components described herein. Further, processing circuitry201may be configured to communicate with any of such components over bus202. In another example, any of such components may be represented by program instructions stored in memory that when executed by processing circuitry201perform the corresponding functions described herein. In another example, the functionality of any of such components may be partitioned between processing circuitry201and communication subsystem231. In another example, the non-computationally intensive functions of any of such components may be implemented in software or firmware and the computationally intensive functions may be implemented in hardware. FIG.3is a schematic block diagram illustrating a virtualization environment300in which functions implemented by some embodiments may be virtualized. In the present context, virtualizing means creating virtual versions of apparatuses or devices which may include virtualizing hardware platforms, storage devices and networking resources. 
As used herein, virtualization can be applied to a node (e.g., a virtualized base station or a virtualized radio access node) or to a device (e.g., a UE, a wireless device or any other type of communication device) or components thereof and relates to an implementation in which at least a portion of the functionality is implemented as one or more virtual components (e.g., via one or more applications, components, functions, virtual machines or containers executing on one or more physical processing nodes in one or more networks). In some embodiments, some or all of the functions described herein may be implemented as virtual components executed by one or more virtual machines implemented in one or more virtual environments300hosted by one or more of hardware nodes330. Further, in embodiments in which the virtual node is not a radio access node or does not require radio connectivity (e.g., a core network node), the network node may be entirely virtualized. The functions may be implemented by one or more applications320(which may alternatively be called software instances, virtual appliances, network functions, virtual nodes, virtual network functions, etc.) operative to implement some of the features, functions, and/or benefits of some of the embodiments disclosed herein. Applications320are run in virtualization environment300which provides hardware330comprising processing circuitry360and memory390. Memory390contains instructions395executable by processing circuitry360whereby application320is operative to provide one or more of the features, benefits, and/or functions disclosed herein. Virtualization environment300comprises general-purpose or special-purpose network hardware devices330comprising a set of one or more processors or processing circuitry360, which may be commercial off-the-shelf (COTS) processors, dedicated Application Specific Integrated Circuits (ASICs), or any other type of processing circuitry including digital or analog hardware components or special purpose processors. Each hardware device may comprise memory390-1which may be non-persistent memory for temporarily storing instructions395or software executed by processing circuitry360. Each hardware device may comprise one or more network interface controllers (NICs)370, also known as network interface cards, which include physical network interface380. Each hardware device may also include non-transitory, persistent, machine-readable storage media390-2having stored therein software395and/or instructions executable by processing circuitry360. Software395may include any type of software including software for instantiating one or more virtualization layers350(also referred to as hypervisors), software to execute virtual machines340as well as software allowing it to execute functions, features and/or benefits described in relation to some embodiments described herein. Virtual machines340comprise virtual processing, virtual memory, virtual networking or interface and virtual storage, and may be run by a corresponding virtualization layer350or hypervisor. Different embodiments of the instance of virtual appliance320may be implemented on one or more of virtual machines340, and the implementations may be made in different ways. During operation, processing circuitry360executes software395to instantiate the hypervisor or virtualization layer350, which may sometimes be referred to as a virtual machine monitor (VMM).
Virtualization layer350may present a virtual operating platform that appears like networking hardware to virtual machine340. As shown inFIG.3, hardware330may be a standalone network node with generic or specific components. Hardware330may comprise antenna3225and may implement some functions via virtualization. Alternatively, hardware330may be part of a larger cluster of hardware (e.g., in a data center or customer premise equipment (CPE)) where many hardware nodes work together and are managed via management and orchestration (MANO)3100, which, among other functions, oversees lifecycle management of applications320. Virtualization of the hardware is in some contexts referred to as network function virtualization (NFV). NFV may be used to consolidate many network equipment types onto industry standard high volume server hardware, physical switches, and physical storage, which can be located in data centers, and customer premise equipment. In the context of NFV, virtual machine340may be a software implementation of a physical machine that runs programs as if they were executing on a physical, non-virtualized machine. Each of virtual machines340, and that part of hardware330that executes that virtual machine, be it hardware dedicated to that virtual machine and/or hardware shared by that virtual machine with others of the virtual machines340, forms a separate virtual network element (VNE). Still in the context of NFV, a Virtual Network Function (VNF) is responsible for handling specific network functions that run in one or more virtual machines340on top of hardware networking infrastructure330and corresponds to application320inFIG.3. In some embodiments, one or more radio units3200that each include one or more transmitters3220and one or more receivers3210may be coupled to one or more antennas3225. Radio units3200may communicate directly with hardware nodes330via one or more appropriate network interfaces and may be used in combination with the virtual components to provide a virtual node with radio capabilities, such as a radio access node or a base station. In some embodiments, some signalling can be effected with the use of control system3230which may alternatively be used for communication between the hardware nodes330and radio units3200. With reference toFIG.4, a communication system in accordance with an embodiment is shown. The illustrated communication system includes telecommunication network410, such as a 3GPP-type cellular network, which comprises access network411, such as a radio access network, and core network414. Access network411comprises a plurality of base stations412a,412b,412c, such as NBs, eNBs, gNBs or other types of wireless access points, each defining a corresponding coverage area413a,413b,413c. Each base station412a,412b,412cis connectable to core network414over a wired or wireless connection415. A first UE491located in coverage area413cis configured to wirelessly connect to, or be paged by, the corresponding base station412c. A second UE492in coverage area413ais wirelessly connectable to the corresponding base station412a. While a plurality of UEs491,492are illustrated in this example, the disclosed embodiments are equally applicable to a situation where a sole UE is in the coverage area or where a sole UE is connecting to the corresponding base station412.
Telecommunication network410is itself connected to host computer430, which may be embodied in the hardware and/or software of a standalone server, a cloud-implemented server, a distributed server or as processing resources in a server farm. Host computer430may be under the ownership or control of a service provider, or may be operated by the service provider or on behalf of the service provider. Connections421and422between telecommunication network410and host computer430may extend directly from core network414to host computer430or may go via an optional intermediate network420. Intermediate network420may be one of, or a combination of more than one of, a public, private or hosted network; intermediate network420, if any, may be a backbone network or the Internet; in particular, intermediate network420may comprise two or more sub-networks (not shown). The communication system ofFIG.4as a whole enables connectivity between the connected UEs491,492and host computer430. The connectivity may be described as an over-the-top (OTT) connection450. Host computer430and the connected UEs491,492are configured to communicate data and/or signaling via OTT connection450, using access network411, core network414, any intermediate network420and possible further infrastructure (not shown) as intermediaries. OTT connection450may be transparent in the sense that the participating communication devices through which OTT connection450passes are unaware of routing of uplink and downlink communications. For example, base station412may not or need not be informed about the past routing of an incoming downlink communication with data originating from host computer430to be forwarded (e.g., handed over) to a connected UE491. Similarly, base station412need not be aware of the future routing of an outgoing uplink communication originating from the UE491towards the host computer430. Example implementations, in accordance with an embodiment, of the UE, base station and host computer discussed in the preceding paragraphs will now be described with reference toFIG.5. In communication system500, host computer510comprises hardware515including communication interface516configured to set up and maintain a wired or wireless connection with an interface of a different communication device of communication system500. Host computer510further comprises processing circuitry518, which may have storage and/or processing capabilities. In particular, processing circuitry518may comprise one or more programmable processors, application-specific integrated circuits, field programmable gate arrays or combinations of these (not shown) adapted to execute instructions. Host computer510further comprises software511, which is stored in or accessible by host computer510and executable by processing circuitry518. Software511includes host application512. Host application512may be operable to provide a service to a remote user, such as UE530connecting via OTT connection550terminating at UE530and host computer510. In providing the service to the remote user, host application512may provide user data which is transmitted using OTT connection550. Communication system500further includes base station520provided in a telecommunication system and comprising hardware525enabling it to communicate with host computer510and with UE530. 
Hardware525may include communication interface526for setting up and maintaining a wired or wireless connection with an interface of a different communication device of communication system500, as well as radio interface527for setting up and maintaining at least wireless connection570with UE530located in a coverage area (not shown inFIG.5) served by base station520. Communication interface526may be configured to facilitate connection560to host computer510. Connection560may be direct or it may pass through a core network (not shown inFIG.5) of the telecommunication system and/or through one or more intermediate networks outside the telecommunication system. In the embodiment shown, hardware525of base station520further includes processing circuitry528, which may comprise one or more programmable processors, application-specific integrated circuits, field programmable gate arrays or combinations of these (not shown) adapted to execute instructions. Base station520further has software521stored internally or accessible via an external connection. Communication system500further includes UE530already referred to. Its hardware535may include radio interface537configured to set up and maintain wireless connection570with a base station serving a coverage area in which UE530is currently located. Hardware535of UE530further includes processing circuitry538, which may comprise one or more programmable processors, application-specific integrated circuits, field programmable gate arrays or combinations of these (not shown) adapted to execute instructions. UE530further comprises software531, which is stored in or accessible by UE530and executable by processing circuitry538. Software531includes client application532. Client application532may be operable to provide a service to a human or non-human user via UE530, with the support of host computer510. In host computer510, an executing host application512may communicate with the executing client application532via OTT connection550terminating at UE530and host computer510. In providing the service to the user, client application532may receive request data from host application512and provide user data in response to the request data. OTT connection550may transfer both the request data and the user data. Client application532may interact with the user to generate the user data that it provides. A minimal sketch of this request/response exchange is given below. It is noted that host computer510, base station520and UE530illustrated inFIG.5may be similar or identical to host computer430, one of base stations412a,412b,412cand one of UEs491,492ofFIG.4, respectively. That is to say, the inner workings of these entities may be as shown inFIG.5and, independently, the surrounding network topology may be that ofFIG.4. InFIG.5, OTT connection550has been drawn abstractly to illustrate the communication between host computer510and UE530via base station520, without explicit reference to any intermediary devices and the precise routing of messages via these devices. Network infrastructure may determine the routing, which it may be configured to hide from UE530or from the service provider operating host computer510, or both. While OTT connection550is active, the network infrastructure may further take decisions by which it dynamically changes the routing (e.g., on the basis of load balancing considerations or reconfiguration of the network). Wireless connection570between UE530and base station520is in accordance with the teachings of the embodiments described throughout this disclosure.
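By way of illustration only, the following Python sketch models the request/response exchange just described, in which host application512supplies request data and client application532returns user data over OTT connection550. The function names, and the direct function call standing in for OTT connection550, are assumptions made for the sketch and are not part of the embodiments.

# Illustrative sketch only (names hypothetical): host application 512 sends
# request data to client application 532 over OTT connection 550; the client
# answers with user data. A direct function call stands in for the OTT path.
def client_application(request_data: str) -> str:
    # UE 530 side: produce user data in response to the request data
    # (a real client might also incorporate input from the user).
    return f"user-data({request_data})"

def host_application(request_data: str) -> str:
    # Host computer 510 side: send request data and receive user data.
    return client_application(request_data)

if __name__ == "__main__":
    print(host_application("report-status"))   # -> user-data(report-status)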
One or more of the various embodiments improve the performance of OTT services provided to UE530using OTT connection550, in which wireless connection570forms the last segment. More precisely, the teachings of these embodiments improve the handling of SM messages (e.g., 5GSM messages) transmitted by a UE when an AMF fails to forward a SM message transmitted by the UE to a SMF. Specifically, the teachings of these embodiments allow the AMF to notify the UE regarding the failure to forward the SM message to a SMF by creating a status message (5GMM STATUS message) comprising the SM message and transmitting the status message to the UE. Upon receipt of the status message, the UE determines that the AMF has failed to forward the SM message, thereby preventing the UE from sending the same SM message to the AMF, which would result in the same failure. A measurement procedure may be provided for the purpose of monitoring data rate, latency and other factors on which the one or more embodiments improve. There may further be an optional network functionality for reconfiguring OTT connection550between host computer510and UE530, in response to variations in the measurement results. The measurement procedure and/or the network functionality for reconfiguring OTT connection550may be implemented in software511and hardware515of host computer510or in software531and hardware535of UE530, or both. In embodiments, sensors (not shown) may be deployed in or in association with communication devices through which OTT connection550passes; the sensors may participate in the measurement procedure by supplying values of the monitored quantities exemplified above, or supplying values of other physical quantities from which software511,531may compute or estimate the monitored quantities. The reconfiguring of OTT connection550may include changes to message format, retransmission settings, preferred routing, etc.; the reconfiguring need not affect base station520, and it may be unknown or imperceptible to base station520. Such procedures and functionalities may be known and practiced in the art. In certain embodiments, measurements may involve proprietary UE signaling facilitating host computer510's measurements of throughput, propagation times, latency and the like. The measurements may be implemented in that software511and531causes messages to be transmitted, in particular empty or 'dummy' messages, using OTT connection550while it monitors propagation times, errors, etc. A minimal sketch of such a probe is given after the description ofFIG.6below. FIG.6is a flowchart illustrating a method implemented in a communication system, in accordance with one embodiment. The communication system includes a host computer, a base station and a UE which may be those described with reference toFIGS.4and5. For simplicity of the present disclosure, only drawing references toFIG.6will be included in this section. In step610, the host computer provides user data. In substep611(which may be optional) of step610, the host computer provides the user data by executing a host application. In step620, the host computer initiates a transmission carrying the user data to the UE. In step630(which may be optional), the base station transmits to the UE the user data which was carried in the transmission that the host computer initiated, in accordance with the teachings of the embodiments described throughout this disclosure. In step640(which may also be optional), the UE executes a client application associated with the host application executed by the host computer.
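As a purely illustrative sketch of the 'dummy'-message measurement procedure mentioned above, the following code sends empty probe messages and averages the observed round-trip time. The send_over_ott() placeholder is an assumption of the sketch, not an actual interface of the embodiments.

# Illustrative sketch only: estimate propagation time by sending empty
# 'dummy' messages over the OTT connection and monitoring the round trip.
import time

def send_over_ott(payload: bytes) -> bytes:
    return payload   # stand-in echo; a real probe would await the peer's reply

def mean_round_trip_seconds(samples: int = 5) -> float:
    durations = []
    for _ in range(samples):
        start = time.monotonic()
        send_over_ott(b"")               # empty 'dummy' message
        durations.append(time.monotonic() - start)
    return sum(durations) / len(durations)

if __name__ == "__main__":
    print(f"mean RTT: {mean_round_trip_seconds():.6f} s")

Measured in this way, a rising round-trip time could, for example, trigger the optional reconfiguration of OTT connection550noted above.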
FIG.7is a flowchart illustrating a method implemented in a communication system, in accordance with one embodiment. The communication system includes a host computer, a base station and a UE which may be those described with reference toFIGS.4and5. For simplicity of the present disclosure, only drawing references toFIG.7will be included in this section. In step710of the method, the host computer provides user data. In an optional substep (not shown), the host computer provides the user data by executing a host application. In step720, the host computer initiates a transmission carrying the user data to the UE. The transmission may pass via the base station, in accordance with the teachings of the embodiments described throughout this disclosure. In step730(which may be optional), the UE receives the user data carried in the transmission. FIG.8is a flowchart illustrating a method implemented in a communication system, in accordance with one embodiment. The communication system includes a host computer, a base station and a UE which may be those described with reference toFIGS.4and5. For simplicity of the present disclosure, only drawing references toFIG.8will be included in this section. In step810(which may be optional), the UE receives input data provided by the host computer. Additionally or alternatively, in step820, the UE provides user data. In substep821(which may be optional) of step820, the UE provides the user data by executing a client application. In substep811(which may be optional) of step810, the UE executes a client application which provides the user data in reaction to the received input data provided by the host computer. In providing the user data, the executed client application may further consider user input received from the user. Regardless of the specific manner in which the user data was provided, the UE initiates, in substep830(which may be optional), transmission of the user data to the host computer. In step840of the method, the host computer receives the user data transmitted from the UE, in accordance with the teachings of the embodiments described throughout this disclosure. FIG.9is a flowchart illustrating a method implemented in a communication system, in accordance with one embodiment. The communication system includes a host computer, a base station and a UE which may be those described with reference toFIGS.4and5. For simplicity of the present disclosure, only drawing references toFIG.9will be included in this section. In step910(which may be optional), in accordance with the teachings of the embodiments described throughout this disclosure, the base station receives user data from the UE. In step920(which may be optional), the base station initiates transmission of the received user data to the host computer. In step930(which may be optional), the host computer receives the user data carried in the transmission initiated by the base station. Any appropriate steps, methods, features, functions, or benefits disclosed herein may be performed through one or more functional units or modules of one or more virtual apparatuses. Each virtual apparatus may comprise a number of these functional units. These functional units may be implemented via processing circuitry, which may include one or more microprocessors or microcontrollers, as well as other digital hardware, which may include digital signal processors (DSPs), special-purpose digital logic, and the like.
The processing circuitry may be configured to execute program code stored in memory, which may include one or several types of memory such as read-only memory (ROM), random-access memory (RAM), cache memory, flash memory devices, optical storage devices, etc. Program code stored in memory includes program instructions for executing one or more telecommunications and/or data communications protocols as well as instructions for carrying out one or more of the techniques described herein. In some implementations, the processing circuitry may be used to cause the respective functional unit to perform corresponding functions according to one or more embodiments of the present disclosure. FIG.10depicts a method1000, in accordance with particular embodiments, that is performed by a wireless device. Method1000may begin at step1002in which the wireless device transmits a transport message (e.g., UL SM Message Transport message) to an Access and Mobility Function (AMF), wherein the transport message comprises a SM message (e.g., 5GSM message). In some embodiments, the transport message may further comprise at least one or more of: a protocol data unit (PDU) session identifier (ID), a data network name (DNN), and a request type indication. In some embodiments, the SM message may comprise a procedure transaction identity (PTI) indication identifying a session management transaction (e.g., 5GSM transaction) associated with the SM message. At step1004, the wireless device receives a status message (e.g., 5GMM Status message) transmitted by the AMF, wherein the status message comprises at least a portion of the transport message and an indication of non-delivery of the SM message to a SMF. In such embodiments, the portion of the transport message comprises the SM message. In some embodiments, the indication of non-delivery may comprise a cause of failure to deliver the SM message to a SMF. In some embodiments, the SM message may be one of: (i) a session establishment request message (e.g., PDU Session Establishment Request message), (ii) a session modification request message (e.g., PDU Session Modification Request message), and (iii) a session release request message (e.g., PDU Session Release Request message). In such an embodiment, the method1000may further include the wireless device stopping a timer (e.g., Tx, Tk or Tz) as a result of receiving the indication of non-delivery. In such an embodiment, the method1000may further include determining that a session associated with the SM message is: (i) not established, (ii) not modified or (iii) not released. FIG.11depicts a method1100, in accordance with particular embodiments, that is performed by an Access and Mobility Management Function (AMF). Method1100may begin at step1102in which the AMF receives a transport message (e.g., UL SM Message Transport message) transmitted by a wireless device, wherein the transport message comprises a SM message (e.g., 5GSM message). In some embodiments, the SM message may comprise a procedure transaction identity (PTI) indication identifying a session management transaction (e.g., 5GSM transaction) associated with the SM message. In some embodiments, the transport message may further comprise at least one or more of: a protocol data unit (PDU) session identifier (ID), a data network name (DNN), and a request type indication. At step1104, the AMF determines, based on the transport message, whether the SM message can be forwarded to a SMF.
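A minimal sketch of the device-side behavior of method1000is given below. It assumes simplified message structures and a single retransmission timer; none of the names are normative, and the sketch is not the implementation of the embodiments. A corresponding sketch of the AMF-side determination of method1100is given after the additional disclosure at the end of this description.

# Illustrative sketch of method 1000 (wireless-device side): transmit a
# transport message carrying an SM message (step 1002); on receiving a
# status message carrying it back with a non-delivery indication (step
# 1004), stop the timer and treat the session procedure as failed.
from dataclasses import dataclass
from typing import Optional

@dataclass
class TransportMessage:              # simplified UL SM Message Transport
    sm_message: str                  # e.g., a session establishment request
    pdu_session_id: int
    pti: int                         # procedure transaction identity

@dataclass
class StatusMessage:                 # simplified 5GMM Status
    contained: TransportMessage      # at least a portion of the transport message
    cause: str                       # cause of failure to deliver to a SMF

class WirelessDevice:
    def __init__(self) -> None:
        self.pending: Optional[TransportMessage] = None
        self.timer_running = False
        self.session_established = False

    def transmit(self, msg: TransportMessage) -> TransportMessage:
        # Step 1002: send the transport message and start a Tx/Tk/Tz-style timer.
        self.pending = msg
        self.timer_running = True
        return msg

    def on_status(self, status: StatusMessage) -> None:
        # Step 1004: non-delivery indicated -> stop the timer, release the
        # transaction, and consider the session not established/modified/
        # released; crucially, do not retransmit the same SM message.
        if self.pending and status.contained.pti == self.pending.pti:
            self.timer_running = False
            self.pending = None
            self.session_established = False

if __name__ == "__main__":
    wd = WirelessDevice()
    sent = wd.transmit(TransportMessage("PDU SESSION ESTABLISHMENT REQUEST", 5, pti=1))
    wd.on_status(StatusMessage(contained=sent, cause="SMF selection failure"))
    print(wd.timer_running)   # False: no retransmission will follow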
In some embodiments, the step1104of determining, based on the transport message, whether the SM message can be forwarded to a SMF may further comprise: the AMF determining whether the AMF has a PDU session routing context for the PDU session identifier, wherein the request type indication indicates that the SM message is associated to an initial request; and as a result of determining that the AMF does not have a PDU session routing context for the PDU session identifier, the AMF determining that a SMF cannot be selected for the SM message. In some embodiments, the step1104of determining, based on the transport message, whether the SM message can be forwarded to a SMF may further comprise: the AMF determining whether the AMF has a PDU session routing context for the PDU session identifier, wherein the request type indication indicates that the SM message is associated to an existing PDU session; the AMF obtaining subscription context for the wireless device from a unified data management (UDM), wherein the subscription context comprises at least one or more SMF identifier (ID); and as a result of determining: (i) that the AMF does not have a PDU session routing context for the PDU session identifier and (ii) the at least one or more SMF ID is not associated with the DNN, the AMF determining that a SMF cannot be selected for the SM message. In some embodiments, the step1104of determining, based on the transport message, whether the SM message can be forwarded to a SMF may further comprise: the AMF determining whether the AMF has a PDU session routing context for the PDU session identifier, wherein the request type indication indicates that the SM message is associated to an existing PDU session, and the DNN is not included in the transport message; the AMF obtaining subscription context for the wireless device from a unified data management (UDM), wherein the subscription context comprises at least one or more SMF identifier (ID); and as a result of determining: (i) that the AMF does not have a PDU session routing context for the PDU session identifier and (ii) the at least one or more SMF ID is not associated with a default DNN, the AMF determining that a SMF cannot be selected for the SM message. In some embodiments, the step1104of determining, based on the transport message, whether the SM message can be forwarded to a SMF may further comprise: the AMF determining whether the AMF has a PDU session routing context for the PDU session identifier, wherein the request type indication is not included in the transport message; and as a result of determining that the AMF does not have a PDU session routing context for the PDU session identifier, the AMF determining that a SMF cannot be selected for the SM message. At step1106, as a result of determining that the SM message cannot be forwarded to a SMF, the AMF creates a status message (e.g., 5GMM Status message) comprising at least a portion of the transport message and an indication of non-delivery of the SM message to a SMF. In some embodiments, the indication of non-delivery comprises a cause of failure to deliver the SM message to a SMF. In some embodiments, the portion of the transport message comprises the SM message. At step1108, the AMF transmits the status message to the wireless device. FIG.12illustrates a schematic block diagram of an apparatus1200in a wireless network (for example, the wireless network shown inFIG.1). 
The apparatus may be implemented in a wireless device or network node (e.g., wireless device110or network node160shown inFIG.1). Apparatus1200is operable to carry out the example method described with reference toFIG.10and possibly any other processes or methods disclosed herein. It is also to be understood that the method ofFIG.10is not necessarily carried out solely by apparatus1200. At least some operations of the method can be performed by one or more other entities. Virtual Apparatus1200may comprise processing circuitry, which may include one or more microprocessor or microcontrollers, as well as other digital hardware, which may include digital signal processors (DSPs), special-purpose digital logic, and the like. The processing circuitry may be configured to execute program code stored in memory, which may include one or several types of memory such as read-only memory (ROM), random-access memory, cache memory, flash memory devices, optical storage devices, etc. Program code stored in memory includes program instructions for executing one or more telecommunications and/or data communications protocols as well as instructions for carrying out one or more of the techniques described herein, in several embodiments. In some implementations, the processing circuitry may be used to cause transmitter unit1202to transmit a transport message (e.g., UL SM Message Transport message) to an Access and Mobility Function (AMF), wherein the transport message comprises a SM message (e.g., 5GSM message), receiver unit1204to receive a status message (e.g., 5GMM Status message) transmitted by the AMF, wherein the status message comprises at least a portion of the transport message and an indication of non-delivery of the SM message to a SMF, and any other suitable units of apparatus1200to perform corresponding functions according one or more embodiments of the present disclosure. As illustrated inFIG.12, apparatus1200includes a transmitter unit1202configured to transmit a transport message (e.g., UL SM Message Transport message) to an Access and Mobility Function (AMF), wherein the transport message comprises a SM message (e.g., 5GSM message), and a receiver unit1204configured to receive a status message (e.g., 5GMM Status message) transmitted by the AMF, wherein the status message comprises at least a portion of the transport message and an indication of non-delivery of the SM message to a SMF. FIG.13illustrates a schematic block diagram of an apparatus1300in a wireless network (for example, the wireless network shown inFIG.1). The apparatus may be implemented in a wireless device or network node (e.g., wireless device110or network node160shown inFIG.1). Apparatus1300is operable to carry out the example method described with reference toFIG.11and possibly any other processes or methods disclosed herein. It is also to be understood that the method ofFIG.11is not necessarily carried out solely by apparatus1300. At least some operations of the method can be performed by one or more other entities. Virtual Apparatus1300may comprise processing circuitry, which may include one or more microprocessor or microcontrollers, as well as other digital hardware, which may include digital signal processors (DSPs), special-purpose digital logic, and the like. The processing circuitry may be configured to execute program code stored in memory, which may include one or several types of memory such as read-only memory (ROM), random-access memory, cache memory, flash memory devices, optical storage devices, etc. 
Program code stored in memory includes program instructions for executing one or more telecommunications and/or data communications protocols as well as instructions for carrying out one or more of the techniques described herein, in several embodiments. In some implementations, the processing circuitry may be used to cause receiver unit1302to receive a transport message (e.g., UL SM Message Transport message) transmitted by a wireless device, wherein the transport message comprises a SM message (e.g., 5GSM message), determining unit1304to determine, based on the transport message, whether the SM message can be forwarded to a SMF, creating unit1306to create a status message (e.g., 5GMM Status message) comprising at least a portion of the transport message and an indication of non-delivery of the SM message to a SMF as a result of determining that the SM message cannot be forwarded to a SMF, transmitter unit1308to transmit the status message to the wireless device, and any other suitable units of apparatus1300to perform corresponding functions according to one or more embodiments of the present disclosure. As illustrated inFIG.13, apparatus1300includes a receiver unit1302configured to receive a transport message (e.g., UL SM Message Transport message) transmitted by a wireless device, wherein the transport message comprises a SM message (e.g., 5GSM message), a determining unit1304configured to determine, based on the transport message, whether the SM message can be forwarded to a SMF, a creating unit1306configured to create a status message (e.g., 5GMM Status message) comprising at least a portion of the transport message and an indication of non-delivery of the SM message to a SMF as a result of determining that the SM message cannot be forwarded to a SMF, and a transmitter unit1308configured to transmit the status message to the wireless device. The term unit may have conventional meaning in the field of electronics, electrical devices and/or electronic devices and may include, for example, electrical and/or electronic circuitry, devices, modules, processors, memories, logic, solid state and/or discrete devices, computer programs or instructions for carrying out respective tasks, procedures, computations, outputs, and/or displaying functions, and so on, such as those described herein. EMBODIMENTS Group A Embodiments—UE A1. A method implemented in a wireless device, comprising: transmitting a transport message (e.g., UL SM Message Transport message) to an Access and Mobility Function (AMF), wherein the transport message comprises a SM message (e.g., 5GSM message); and receiving a status message (e.g., 5GMM Status message) transmitted by the AMF, wherein the status message comprises at least a portion of the transport message and an indication of non-delivery of the SM message to a SMF. A2. The method of A1, wherein the portion of the transport message comprises the SM message. A3. The method of A1 or A2, wherein the transport message further comprises at least one or more of: a protocol data unit (PDU) session identifier (ID), a data network name (DNN), and a request type indication. A4. The method of any one of A1-A3, wherein the SM message comprises a procedure transaction identity (PTI) indication identifying a session management transaction (e.g., 5GSM transaction) associated with the SM message. A5.
The method of any one of A1-A4, wherein the SM message is one of: (i) a session establishment request message (e.g., PDU Session Establishment Request message), (ii) a session modification request message (e.g., PDU Session Modification Request message), and (iii) a session release request message (e.g., PDU Session Release Request message), the method further comprising: as a result of receiving the indication of non-delivery, stopping a timer (e.g., Tx, Tk or Tz). A6. The method of A5, the method further comprising: as a result of receiving the indication of non-delivery, determining that a session associated with the SM message is: (i) not established, (ii) not modified or (iii) not released. A7. The method of any one of A1-A6, wherein the indication of non-delivery comprises a cause of failure to deliver the SM message to a SMF. A8. The method of any of the previous embodiments, further comprising: providing user data; and forwarding the user data to a host computer via the transmission to the base station. Group B Embodiments—Base Station B1. A method performed by an Access and Mobility Management Function (AMF), comprising: receiving a transport message (e.g., UL SM Message Transport message) transmitted by a wireless device, wherein the transport message comprises a SM message (e.g., 5GSM message); determining, based on the transport message, whether the SM message can be forwarded to a SMF; as a result of determining that the SM message cannot be forwarded to a SMF, creating a status message (e.g., 5GMM Status message) comprising at least a portion of the transport message and an indication of non-delivery of the SM message to a SMF; and transmitting the status message to the wireless device. B2. The method of B1, wherein the portion of the transport message comprises the SM message. B3. The method of B1 or B2, wherein the SM message comprises a procedure transaction identity (PTI) indication identifying a session management transaction (e.g., 5GSM transaction) associated with the SM message. B4. The method of any one of B1-B3, wherein the transport message further comprises at least one or more of: a protocol data unit (PDU) session identifier (ID), a data network name (DNN), and a request type indication. B5. The method of B4, wherein the determining, based on the transport message, whether the SM message can be forwarded to a SMF further comprises: determining whether the AMF has a PDU session routing context for the PDU session identifier, wherein the request type indication indicates that the SM message is associated to an initial request; and as a result of determining that the AMF does not have a PDU session routing context for the PDU session identifier, determining that a SMF cannot be selected for the SM message. B6. 
The method of B4, wherein the determining, based on the transport message, whether the SM message can be forwarded to a SMF further comprises: determining whether the AMF has a PDU session routing context for the PDU session identifier, wherein the request type indication indicates that the SM message is associated to an existing PDU session; obtaining subscription context for the wireless device from a unified data management (UDM), wherein the subscription context comprises at least one or more SMF identifier (ID); and as a result of determining (i) that the AMF does not have a PDU session routing context for the PDU session identifier and (ii) the at least one or more SMF ID is not associated with the DNN, determining that a SMF cannot be selected for the SM message. B7. The method of B4, wherein the determining, based on the transport message, whether the SM message can be forwarded to a SMF further comprises: determining whether the AMF has a PDU session routing context for the PDU session identifier, wherein the request type indication indicates that the SM message is associated to an existing PDU session, and the DNN is not included in the transport message; obtaining subscription context for the wireless device from a unified data management (UDM), wherein the subscription context comprises at least one or more SMF identifier (ID); and as a result of determining (i) that the AMF does not have a PDU session routing context for the PDU session identifier and (ii) the at least one or more SMF ID is not associated with a default DNN, determining that a SMF cannot be selected for the SM message. B8. The method of B4, wherein the determining, based on the transport message, whether the SM message can be forwarded to a SMF further comprises: determining whether the AMF has a PDU session routing context for the PDU session identifier, wherein the request type indication is not included in the transport message; and as a result of determining that the AMF does not have a PDU session routing context for the PDU session identifier, determining that a SMF cannot be selected for the SM message. B9. The method of any one of B1-B8, wherein the indication of non-delivery comprises a cause of failure to deliver the SM message to a SMF. B10. The method of any of the previous embodiments, further comprising: obtaining user data; and forwarding the user data to a host computer or a wireless device. Group C Embodiments C1. A wireless device comprising: processing circuitry configured to perform any of the steps of any of the Group A embodiments; and power supply circuitry configured to supply power to the wireless device. C2. A base station, the base station comprising: processing circuitry configured to perform any of the steps of any of the Group B embodiments; power supply circuitry configured to supply power to the wireless device. C3. 
A user equipment (UE) comprising: an antenna configured to send and receive wireless signals; radio front-end circuitry connected to the antenna and to processing circuitry, and configured to condition signals communicated between the antenna and the processing circuitry; the processing circuitry being configured to perform any of the steps of any of the Group A embodiments; an input interface connected to the processing circuitry and configured to allow input of information into the UE to be processed by the processing circuitry; an output interface connected to the processing circuitry and configured to output information from the UE that has been processed by the processing circuitry; and a battery connected to the processing circuitry and configured to supply power to the UE. C4. A communication system including a host computer comprising: processing circuitry configured to provide user data; and a communication interface configured to forward the user data to a cellular network for transmission to a user equipment (UE), wherein the cellular network comprises a base station having a radio interface and processing circuitry, the base station's processing circuitry configured to perform any of the steps of any of the Group B embodiments. C5. The communication system of the previous embodiment further including the base station. C6. The communication system of the previous 2 embodiments, further including the UE, wherein the UE is configured to communicate with the base station. C7. The communication system of the previous 3 embodiments, wherein: the processing circuitry of the host computer is configured to execute a host application, thereby providing the user data; and the UE comprises processing circuitry configured to execute a client application associated with the host application. C8. A method implemented in a communication system including a host computer, a base station and a user equipment (UE), the method comprising: at the host computer, providing user data; and at the host computer, initiating a transmission carrying the user data to the UE via a cellular network comprising the base station, wherein the base station performs any of the steps of any of the Group B embodiments. C9. The method of the previous embodiment, further comprising, at the base station, transmitting the user data. C10. The method of the previous 2 embodiments, wherein the user data is provided at the host computer by executing a host application, the method further comprising, at the UE, executing a client application associated with the host application. C11. A user equipment (UE) configured to communicate with a base station, the UE comprising a radio interface and processing circuitry configured to perform the methods of the previous 3 embodiments. C12. A communication system including a host computer comprising: processing circuitry configured to provide user data; and a communication interface configured to forward user data to a cellular network for transmission to a user equipment (UE), wherein the UE comprises a radio interface and processing circuitry, the UE's components configured to perform any of the steps of any of the Group A embodiments. C13. The communication system of the previous embodiment, wherein the cellular network further includes a base station configured to communicate with the UE. C14.
The communication system of the previous 2 embodiments, wherein: the processing circuitry of the host computer is configured to execute a host application, thereby providing the user data; and the UE's processing circuitry is configured to execute a client application associated with the host application. C15. A method implemented in a communication system including a host computer, a base station and a user equipment (UE), the method comprising: at the host computer, providing user data; and at the host computer, initiating a transmission carrying the user data to the UE via a cellular network comprising the base station, wherein the UE performs any of the steps of any of the Group A embodiments. C16. The method of the previous embodiment, further comprising at the UE, receiving the user data from the base station. C17. A communication system including a host computer comprising: communication interface configured to receive user data originating from a transmission from a user equipment (UE) to a base station, wherein the UE comprises a radio interface and processing circuitry, the UE's processing circuitry configured to perform any of the steps of any of the Group A embodiments. C18. The communication system of the previous embodiment, further including the UE. C19. The communication system of the previous 2 embodiments, further including the base station, wherein the base station comprises a radio interface configured to communicate with the UE and a communication interface configured to forward to the host computer the user data carried by a transmission from the UE to the base station. C20. The communication system of the previous 3 embodiments, wherein: the processing circuitry of the host computer is configured to execute a host application; and the UE's processing circuitry is configured to execute a client application associated with the host application, thereby providing the user data. C21. The communication system of the previous 4 embodiments, wherein: the processing circuitry of the host computer is configured to execute a host application, thereby providing request data; and the UE's processing circuitry is configured to execute a client application associated with the host application, thereby providing the user data in response to the request data. C22. A method implemented in a communication system including a host computer, a base station and a user equipment (UE), the method comprising: at the host computer, receiving user data transmitted to the base station from the UE, wherein the UE performs any of the steps of any of the Group A embodiments. C23. The method of the previous embodiment, further comprising, at the UE, providing the user data to the base station. C24. The method of the previous 2 embodiments, further comprising: at the UE, executing a client application, thereby providing the user data to be transmitted; and at the host computer, executing a host application associated with the client application. C25. The method of the previous 3 embodiments, further comprising: at the UE, executing a client application; and at the UE, receiving input data to the client application, the input data being provided at the host computer by executing a host application associated with the client application, wherein the user data to be transmitted is provided by the client application in response to the input data. C26. 
A communication system including a host computer comprising a communication interface configured to receive user data originating from a transmission from a user equipment (UE) to a base station, wherein the base station comprises a radio interface and processing circuitry, the base station's processing circuitry configured to perform any of the steps of any of the Group B embodiments. C27. The communication system of the previous embodiment further including the base station. C28. The communication system of the previous 2 embodiments, further including the UE, wherein the UE is configured to communicate with the base station. C29. The communication system of the previous 3 embodiments, wherein: the processing circuitry of the host computer is configured to execute a host application; the UE is configured to execute a client application associated with the host application, thereby providing the user data to be received by the host computer. C30. A method implemented in a communication system including a host computer, a base station and a user equipment (UE), the method comprising: at the host computer, receiving, from the base station, user data originating from a transmission which the base station has received from the UE, wherein the UE performs any of the steps of any of the Group A embodiments. C31. The method of the previous embodiment, further comprising at the base station, receiving the user data from the UE. C32. The method of the previous 2 embodiments, further comprising at the base station, initiating a transmission of the received user data to the host computer. Additional Disclosure Reason for Change Problem Description TR 24.890 contains the following editor's notes: 8.5.1.1.2.1.1.4 Abnormal cases on the network sideThe following abnormal cases in AMF are identified:a)the AMF does not have a PDU session routing context for the PDU session ID of the ULSM MESSAGE TRANSPORT message and the UE, the request type IE of the UL SMMESSAGE TRANSPORT message is set to “initial request”, and the SMF selectionfails.Editor's note: Handling of this abnormal case is FFS. . .b)the AMF does not have a PDU session routing context for the PDU session ID of the ULSM MESSAGE TRANSPORT message and the UE, the request type IE of the UL SMMESSAGE TRANSPORT message is set to “existing PDU session”, and the user'ssubscription context obtained from the UDM does not contain an SMF ID correspondingto.1)the DNN of the UL SM MESSAGE TRANSPORT message, if the DNN is includedin the NAS SM MESSAGE TRANSPORT message; or2)the default DNN, if the DNN is not included in the UL SM MESSAGETRANSPORT message.Editor's note: Handling of this abnormal case is FFS Similar error can also occur when request type is not provided by the UE. If no handling is defined for the cases above, the failure is due to a permanent cause (e.g. the requested DNN is not authorized DNN for the UE) and the SM messages are retransmitted, then the UE will retransmit the SM message in a new UL SM MESSAGE TRANSPORT message and the AMF needs to repeat the SMF selection again with the same failure. Possible Solutions Alternative-1 UE-initiated NAS transport procedure is extended with an UL SM MESSAGE TRANSPORT ACCEPT message or an UL SM MESSAGE TRANSPORT REJECT message, which AMF sends upon reception and handling of UL SM MESSAGE TRANSPORT REQUEST message. Only up to one UE-initiated NAS transport procedure would be run at any given time. 
If the AMF is able to forward the 5GSM message of the UL SM MESSAGE TRANSPORT REQUEST message, the AMF sends an UL SM MESSAGE TRANSPORT ACCEPT message. If the AMF is unable to forward the 5GSM message of the UL SM MESSAGE TRANSPORT REQUEST message, the AMF sends an UL SM MESSAGE TRANSPORT REJECT message. The UL SM MESSAGE TRANSPORT REJECT message contains a cause. As reliability is provided on the SM transport layer, the 5GSM procedures will not need to retransmit 5GSM messages. If transport of a 5GSM message fails, the 5GSM procedure will consider the 5GSM procedure as unsuccessfully completed.
Alternative-2
If the AMF is unable to forward the 5GSM message of an UL SM MESSAGE TRANSPORT message, the AMF sends a 5GMM STATUS message. The 5GMM STATUS message contains a 5GMM message container IE containing the UL SM MESSAGE TRANSPORT message, and a cause. If the UE receives a 5GMM STATUS message with the 5GMM message container IE containing the UL SM MESSAGE TRANSPORT message containing a 5GSM message, the 5GMM layer informs the 5GSM layer about non-delivery of the 5GSM message. Based on non-delivery of the 5GSM message, the 5GSM procedure will stop any retransmissions of the 5GSM message and consider the 5GSM procedure as unsuccessfully completed.
Alternative-3
The AMF is configured with an SMF for rejection. The AMF routes any SM message which it is unable to route forward to the SMF for rejection. The SMF rejects the 5GSM request message with an appropriate 5GSM response message.
Alternative-4
Do nothing and live with retransmissions in case of the AMF not being able to select an SMF.
Evaluation
Alternative-1 requires two NAS messages to transport a 5GSM message, while the existing procedure requires only one NAS message. Alternative-3 requires deployment of an SMF. The SMF does not need to be fully functional; it only needs to be able to reject the 5GSM message from the UE. Alternative-4 does not solve the problem.
Proposal
It is proposed to apply Alternative-2. It is proposed to agree the following changes to 3GPP TR 24.890.
8.5.1.1.2.1.1.4 Abnormal Cases on the Network Side
The following abnormal cases in AMF are identified:
a) if the AMF does not have a PDU session routing context for the PDU session ID of the UL SM MESSAGE TRANSPORT message and the UE, the request type IE of the UL SM MESSAGE TRANSPORT message is set to "initial request", and the SMF selection fails, then the AMF shall create a 5GMM STATUS message. The AMF shall set the 5GMM message container IE of the 5GMM STATUS message to the UL SM MESSAGE TRANSPORT message. The AMF shall set the cause IE of the 5GMM STATUS message to a cause indicating the cause of failure. The AMF shall send the 5GMM STATUS message to the UE.
b) if the AMF does not have a PDU session routing context for the PDU session ID of the UL SM MESSAGE TRANSPORT message and the UE, the request type IE of the UL SM MESSAGE TRANSPORT message is set to "existing PDU session", and the user's subscription context obtained from the UDM does not contain an SMF ID corresponding to:
1) the DNN of the UL SM MESSAGE TRANSPORT message, if the DNN is included in the NAS SM MESSAGE TRANSPORT message; or
2) the default DNN, if the DNN is not included in the UL SM MESSAGE TRANSPORT message,
then the AMF shall create a 5GMM STATUS message. The AMF shall set the 5GMM message container IE of the 5GMM STATUS message to the UL SM MESSAGE TRANSPORT message. The AMF shall set the cause IE of the 5GMM STATUS message to a cause indicating the cause of failure.
The AMF shall send the 5GMM STATUS message to the UE.
c) if the AMF does not have a PDU session routing context for the PDU session ID of the UL SM MESSAGE TRANSPORT message and the UE, and the request type IE of the UL SM MESSAGE TRANSPORT message is not provided, then the AMF shall create a 5GMM STATUS message. The AMF shall set the 5GMM message container IE of the 5GMM STATUS message to the UL SM MESSAGE TRANSPORT message. The AMF shall set the cause IE of the 5GMM STATUS message to a cause indicating the cause of failure. The AMF shall send the 5GMM STATUS message to the UE.
d) if the AMF has a PDU session routing context for the PDU session ID of the UL SM MESSAGE TRANSPORT message and the UE, the request type IE of the UL SM MESSAGE TRANSPORT message is set to "initial request" and the AMF has not received a reallocation requested indication, the AMF should forward the SM message, the PDU session ID, the S-NSSAI (if received), the DNN (if received) and the request type of the UL SM MESSAGE TRANSPORT message towards the SMF ID of the PDU session routing context.
e) if the AMF has a PDU session routing context for the PDU session ID of the UL SM MESSAGE TRANSPORT message and the UE, the PDU session routing context indicates that the PDU session is an emergency PDU session, and the request type IE of the UL SM MESSAGE TRANSPORT message is set to "initial emergency request", the AMF should forward the SM message, the PDU session ID, the S-NSSAI (if received), the DNN (if received) and the request type of the UL SM MESSAGE TRANSPORT message towards the SMF ID of the PDU session routing context.
f) if the AMF has a PDU session routing context for the PDU session ID of the UL SM MESSAGE TRANSPORT message and the UE, the request type IE of the UL SM MESSAGE TRANSPORT message is set to "initial request", the AMF has received a reallocation requested indication from the SMF indicating that the SMF is to be reallocated, and the PDU session routing context contains a reallocated SMF ID, the AMF should forward the SM message, the PDU session ID, the S-NSSAI (if received), the DNN (if received) and the request type of the UL SM MESSAGE TRANSPORT message towards the reallocated SMF ID of the PDU session routing context.
8.5.1.1.2.1.1.5 UE-Initiated SM Message Transport Initiation not Accepted by the Network
Upon reception of a 5GMM STATUS message with the 5GMM message container IE containing an UL SM MESSAGE TRANSPORT message, the UE passes a non-delivery indication along with the SM message of the UL SM MESSAGE TRANSPORT message to the 5GSM procedures specified in clause 9.
9.4.2.5 Abnormal Cases in the UE
The following abnormal cases can be identified:
a) Tx expired (Editor's note: Further abnormal cases in the UE are FFS.)
b) Upon receiving a non-delivery indication along with a PDU SESSION ESTABLISHMENT REQUEST message with the PTI IE set to the allocated PTI value, the UE shall stop timer Tx, shall release the allocated PTI value and shall consider that the PDU session is not established.
9.4.4.5 Abnormal Cases in the UE
The following abnormal cases can be identified:
a) Tk expired (Editor's note: Further abnormal cases are FFS.)
b) Upon receiving a non-delivery indication along with a PDU SESSION MODIFICATION REQUEST message with the PTI IE set to the allocated PTI value, the UE shall stop timer Tk, shall release the allocated PTI value and shall consider that the PDU session is not modified.
9.4.6.5 Abnormal Cases in the UE
The following abnormal cases can be identified:
a) Tz expired (Editor's note: Further abnormal cases are FFS.)
b) Upon receiving a non-delivery indication along with a PDU SESSION RELEASE REQUEST message with the PTI IE set to the allocated PTI value, the UE shall stop timer Tz, shall release the allocated PTI value and shall consider that the PDU session is not released.
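To make the proposed Alternative-2 behavior concrete, the following is a minimal Python sketch of the UE-side handling of a non-delivery indication as described in clauses 9.4.2.5, 9.4.4.5 and 9.4.6.5 above. The class and attribute names are invented for illustration only; 3GPP defines the behavior in prose, not an API.

```python
# Hypothetical sketch of the UE-side non-delivery handling proposed above
# (Alternative-2). All names here are illustrative, not 3GPP-defined.

class SessionProcedure:
    """Tracks one 5GSM procedure (establishment, modification, or release)."""

    def __init__(self, name, timer_name, pti):
        self.name = name              # e.g. "PDU SESSION ESTABLISHMENT"
        self.timer_name = timer_name  # Tx, Tk, or Tz in the text above
        self.pti = pti                # allocated procedure transaction identity
        self.timer_running = True
        self.completed = None

    def on_non_delivery(self, sm_message_pti):
        # Triggered when the 5GMM layer passes a non-delivery indication
        # (a 5GMM STATUS message carrying the UL SM MESSAGE TRANSPORT message).
        if sm_message_pti != self.pti:
            return                    # indication is for some other procedure
        self.timer_running = False    # stop timer Tx/Tk/Tz
        self.pti = None               # release the allocated PTI value
        self.completed = False        # procedure unsuccessfully completed;
                                      # no further retransmissions are sent


# Example: a PDU session establishment attempt whose transport failed.
proc = SessionProcedure("PDU SESSION ESTABLISHMENT", "Tx", pti=5)
proc.on_non_delivery(sm_message_pti=5)
assert proc.timer_running is False and proc.pti is None
```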
While various embodiments of the present disclosure are described herein, it should be understood that they have been presented by way of example only, and not limitation. Thus, the breadth and scope of the present disclosure should not be limited by any of the above-described exemplary embodiments. Moreover, any combination of the above-described elements in all possible variations thereof is encompassed by the disclosure unless otherwise indicated herein or otherwise clearly contradicted by context. Additionally, while the processes described above and illustrated in the drawings are shown as a sequence of steps, this was done solely for the sake of illustration. Accordingly, it is contemplated that some steps may be added, some steps may be omitted, the order of the steps may be re-arranged, and some steps may be performed in parallel.
Abbreviations
At least some of the following abbreviations may be used in this disclosure. If there is an inconsistency between abbreviations, preference should be given to how it is used above. If listed multiple times below, the first listing should be preferred over any subsequent listing(s).
1×RTT CDMA2000 1× Radio Transmission Technology
3GPP 3rd Generation Partnership Project
5G 5th Generation
ABS Almost Blank Subframe
ARQ Automatic Repeat Request
AWGN Additive White Gaussian Noise
BCCH Broadcast Control Channel
BCH Broadcast Channel
CA Carrier Aggregation
CC Carrier Component
CCCH SDU Common Control Channel SDU
CDMA Code Division Multiple Access
CGI Cell Global Identifier
CIR Channel Impulse Response
CP Cyclic Prefix
CPICH Common Pilot Channel
CPICH Ec/No CPICH Received energy per chip divided by the power density in the band
CQI Channel Quality Information
C-RNTI Cell RNTI
CSI Channel State Information
DCCH Dedicated Control Channel
DL Downlink
DM Demodulation
DMRS Demodulation Reference Signal
DRX Discontinuous Reception
DTX Discontinuous Transmission
DTCH Dedicated Traffic Channel
DUT Device Under Test
E-CID Enhanced Cell-ID (positioning method)
E-SMLC Evolved-Serving Mobile Location Centre
ECGI Evolved CGI
eNB E-UTRAN NodeB
ePDCCH enhanced Physical Downlink Control Channel
E-SMLC evolved Serving Mobile Location Center
E-UTRA Evolved UTRA
E-UTRAN Evolved UTRAN
FDD Frequency Division Duplex
FFS For Further Study
GERAN GSM EDGE Radio Access Network
gNB Base station in NR
GNSS Global Navigation Satellite System
GSM Global System for Mobile communication
HARQ Hybrid Automatic Repeat Request
HO Handover
HSPA High Speed Packet Access
HRPD High Rate Packet Data
LOS Line of Sight
LPP LTE Positioning Protocol
LTE Long-Term Evolution
MAC Medium Access Control
MBMS Multimedia Broadcast Multicast Services
MBSFN Multimedia Broadcast multicast service Single Frequency Network
MBSFN ABS MBSFN Almost Blank Subframe
MDT Minimization of Drive Tests
MIB Master Information Block
MME Mobility Management Entity
MSC Mobile Switching Center
NPDCCH Narrowband Physical Downlink Control Channel
NR New Radio
OCNG OFDMA Channel Noise Generator
OFDM Orthogonal Frequency Division Multiplexing
OFDMA Orthogonal Frequency Division Multiple Access
OSS Operations Support System
OTDOA Observed Time Difference of Arrival
O&M Operation and Maintenance
PBCH Physical Broadcast Channel
P-CCPCH Primary Common Control Physical Channel
PCell Primary Cell
PCFICH Physical Control Format Indicator Channel
PDCCH Physical Downlink Control Channel
PDP Power Delay Profile
PDSCH Physical Downlink Shared Channel
PGW Packet Gateway
PHICH Physical Hybrid-ARQ Indicator Channel
PLMN Public Land Mobile Network
PMI Precoder Matrix Indicator
PRACH Physical Random Access Channel
PRS Positioning Reference Signal
PSS Primary Synchronization Signal
PUCCH Physical Uplink Control Channel
PUSCH Physical Uplink Shared Channel
QAM Quadrature Amplitude Modulation
RACH Random Access Channel
RAN Radio Access Network
RAT Radio Access Technology
RLM Radio Link Monitoring
RNC Radio Network Controller
RNTI Radio Network Temporary Identifier
RRC Radio Resource Control
RRM Radio Resource Management
RS Reference Signal
RSCP Received Signal Code Power
RSRP Reference Symbol Received Power OR Reference Signal Received Power
RSRQ Reference Signal Received Quality OR Reference Symbol Received Quality
RSSI Received Signal Strength Indicator
RSTD Reference Signal Time Difference
SCH Synchronization Channel
SCell Secondary Cell
SDU Service Data Unit
SFN System Frame Number
SGW Serving Gateway
SI System Information
SIB System Information Block
SNR Signal to Noise Ratio
SON Self Optimized Network
SS Synchronization Signal
SSS Secondary Synchronization Signal
TDD Time Division Duplex
TDOA Time Difference of Arrival
TOA Time of Arrival
TSS Tertiary Synchronization Signal
TTI Transmission Time Interval
UE User Equipment
UL Uplink
UMTS Universal Mobile Telecommunication System
USIM Universal Subscriber Identity Module
UTDOA Uplink Time Difference of Arrival
UTRA Universal Terrestrial Radio Access
UTRAN Universal Terrestrial Radio Access Network
WCDMA Wideband CDMA
WLAN Wireless Local Area Network
DETAILED DESCRIPTION The detailed description set forth below in connection with the appended drawings is intended as a description of various configurations and is not intended to represent the only configurations in which the concepts described herein may be practiced. The detailed description includes specific details for the purpose of providing a thorough understanding of various concepts. However, it will be apparent to those skilled in the art that these concepts may be practiced without these specific details. In some instances, well known structures and components are shown in block diagram form in order to avoid obscuring such concepts. Several aspects of telecommunication systems will now be presented with reference to various apparatus and methods. These apparatus and methods will be described in the following detailed description and illustrated in the accompanying drawings by various blocks, components, circuits, processes, algorithms, etc. (collectively referred to as “elements”). These elements may be implemented using electronic hardware, computer software, or any combination thereof. Whether such elements are implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. By way of example, an element, or any portion of an element, or any combination of elements may be implemented as a “processing system” that includes one or more processors. Examples of processors include microprocessors, microcontrollers, graphics processing units (GPUs), central processing units (CPUs), application processors, digital signal processors (DSPs), reduced instruction set computing (RISC) processors, systems on a chip (SoC), baseband processors, field programmable gate arrays (FPGAs), programmable logic devices (PLDs), state machines, gated logic, discrete hardware circuits, and other suitable hardware configured to perform the various functionality described throughout this disclosure. One or more processors in the processing system may execute software. Software shall be construed broadly to mean instructions, instruction sets, code, code segments, program code, programs, subprograms, software components, applications, software applications, software packages, routines, subroutines, objects, executables, threads of execution, procedures, functions, etc., whether referred to as software, firmware, middleware, microcode, hardware description language, or otherwise. Accordingly, in one or more example embodiments, the functions described may be implemented in hardware, software, or any combination thereof. If implemented in software, the functions may be stored on or encoded as one or more instructions or code on a computer-readable medium. Computer-readable media includes computer storage media. Storage media may be any available media that can be accessed by a computer. By way of example, and not limitation, such computer-readable media can comprise a random-access memory (RAM), a read-only memory (ROM), an electrically erasable programmable ROM (EEPROM), optical disk storage, magnetic disk storage, other magnetic storage devices, combinations of the aforementioned types of computer-readable media, or any other medium that can be used to store computer executable code in the form of instructions or data structures that can be accessed by a computer. FIG.1is a diagram illustrating an example of a wireless communications system and an access network100. 
The wireless communications system (also referred to as a wireless wide area network (WWAN)) includes base stations102, UEs104, an Evolved Packet Core (EPC)160, and another core network190(e.g., a 5G Core (5GC)). The base stations102may include macrocells (high power cellular base station) and/or small cells (low power cellular base station). The macrocells include base stations. The small cells include femtocells, picocells, and microcells. The base stations102configured for 4G LTE (collectively referred to as Evolved Universal Mobile Telecommunications System (UMTS) Terrestrial Radio Access Network (E-UTRAN)) may interface with the EPC160through first backhaul links132(e.g., S1 interface). The base stations102configured for 5G NR (collectively referred to as Next Generation RAN (NG-RAN)) may interface with core network190through second backhaul links184. In addition to other functions, the base stations102may perform one or more of the following functions: transfer of user data, radio channel ciphering and deciphering, integrity protection, header compression, mobility control functions (e.g., handover, dual connectivity), inter-cell interference coordination, connection setup and release, load balancing, distribution for non-access stratum (NAS) messages, NAS node selection, synchronization, radio access network (RAN) sharing, multimedia broadcast multicast service (MBMS), subscriber and equipment trace, RAN information management (RIM), paging, positioning, and delivery of warning messages. The base stations102may communicate directly or indirectly (e.g., through the EPC160or core network190) with each other over third backhaul links134(e.g., X2 interface). The first backhaul links132, the second backhaul links184, and the third backhaul links134may be wired or wireless. The base stations102may wirelessly communicate with the UEs104. Each of the base stations102may provide communication coverage for a respective geographic coverage area110. There may be overlapping geographic coverage areas110. For example, the small cell102′ may have a coverage area110′ that overlaps the coverage area110of one or more macro base stations102. A network that includes both small cell and macrocells may be known as a heterogeneous network. A heterogeneous network may also include Home Evolved Node Bs (eNBs) (HeNBs), which may provide service to a restricted group known as a closed subscriber group (CSG). The communication links120between the base stations102and the UEs104may include uplink (UL) (also referred to as reverse link) transmissions from a UE104to a base station102and/or downlink (DL) (also referred to as forward link) transmissions from a base station102to a UE104. The communication links120may use multiple-input and multiple-output (MIMO) antenna technology, including spatial multiplexing, beamforming, and/or transmit diversity. The communication links may be through one or more carriers. The base stations102/UEs104may use spectrum up to Y MHz (e.g., 5, 10, 15, 20, 100, 400, etc. MHz) bandwidth per carrier allocated in a carrier aggregation of up to a total of Yx MHz (x component carriers) used for transmission in each direction. The carriers may or may not be adjacent to each other. Allocation of carriers may be asymmetric with respect to DL and UL (e.g., more or fewer carriers may be allocated for DL than for UL). The component carriers may include a primary component carrier and one or more secondary component carriers. 
A primary component carrier may be referred to as a primary cell (PCell) and a secondary component carrier may be referred to as a secondary cell (SCell). Certain UEs104may communicate with each other using device-to-device (D2D) communication link158. The D2D communication link158may use the DL/UL WWAN spectrum. The D2D communication link158may use one or more sidelink channels, such as a physical sidelink broadcast channel (PSBCH), a physical sidelink discovery channel (PSDCH), a physical sidelink shared channel (PSSCH), and a physical sidelink control channel (PSCCH). D2D communication may be through a variety of wireless D2D communications systems, such as for example, WiMedia, Bluetooth, ZigBee, Wi-Fi based on the Institute of Electrical and Electronics Engineers (IEEE) 802.11 standard, LTE, or NR. The wireless communications system may further include a Wi-Fi access point (AP)150in communication with Wi-Fi stations (STAs)152via communication links154, e.g., in a 5 GHz unlicensed frequency spectrum or the like. When communicating in an unlicensed frequency spectrum, the STAs152/AP150may perform a clear channel assessment (CCA) prior to communicating in order to determine whether the channel is available. The small cell102′ may operate in a licensed and/or an unlicensed frequency spectrum. When operating in an unlicensed frequency spectrum, the small cell102′ may employ NR and use the same unlicensed frequency spectrum (e.g., 5 GHz, or the like) as used by the Wi-Fi AP150. The small cell102′, employing NR in an unlicensed frequency spectrum, may boost coverage to and/or increase capacity of the access network. The electromagnetic spectrum is often subdivided, based on frequency/wavelength, into various classes, bands, channels, etc. In 5G NR, two initial operating bands have been identified as frequency range designations FR1 (410 MHz-7.125 GHz) and FR2 (24.25 GHz-52.6 GHz). The frequencies between FR1 and FR2 are often referred to as mid-band frequencies. Although a portion of FR1 is greater than 6 GHz, FR1 is often referred to (interchangeably) as a “sub-6 GHz” band in various documents and articles. A similar nomenclature issue sometimes occurs with regard to FR2, which is often referred to (interchangeably) as a “millimeter wave” band in documents and articles, despite being different from the extremely high frequency (EHF) band (30 GHz-300 GHz) which is identified by the International Telecommunications Union (ITU) as a “millimeter wave” band. With the above aspects in mind, unless specifically stated otherwise, it should be understood that the term “sub-6 GHz” or the like if used herein may broadly represent frequencies that may be less than 6 GHz, may be within FR1, or may include mid-band frequencies. Further, unless specifically stated otherwise, it should be understood that the term “millimeter wave” or the like if used herein may broadly represent frequencies that may include mid-band frequencies, may be within FR2, or may be within the EHF band. A base station102, whether a small cell102′ or a large cell (e.g., macro base station), may include and/or be referred to as an eNB, gNodeB (gNB), or another type of base station. Some base stations, such as gNB180may operate in a traditional sub 6 GHz spectrum, in millimeter wave frequencies, and/or near millimeter wave frequencies in communication with the UE104. When the gNB180operates in millimeter wave or near millimeter wave frequencies, the gNB180may be referred to as a millimeter wave base station. 
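The frequency-range designations quoted earlier in this description can be restated as a small illustrative helper. The band edges below are taken directly from the text above; the function itself is a sketch, not part of any 3GPP or device API.

```python
# Classify a carrier frequency against the FR1/FR2 designations quoted above.

def classify_frequency(freq_hz: float) -> str:
    """Return the frequency-range designation for a carrier frequency."""
    GHZ = 1e9
    if 410e6 <= freq_hz <= 7.125 * GHZ:
        return "FR1 (often loosely called the 'sub-6 GHz' band)"
    if 24.25 * GHZ <= freq_hz <= 52.6 * GHZ:
        return "FR2 (often loosely called the 'millimeter wave' band)"
    if 7.125 * GHZ < freq_hz < 24.25 * GHZ:
        return "mid-band (between FR1 and FR2)"
    return "outside the ranges discussed here"


print(classify_frequency(3.5e9))   # FR1
print(classify_frequency(28e9))    # FR2
print(classify_frequency(10e9))    # mid-band
```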
The millimeter wave base station180may utilize beamforming182with the UE104to compensate for the path loss and short range. The base station180and the UE104may each include a plurality of antennas, such as antenna elements, antenna panels, and/or antenna arrays to facilitate the beamforming. The base station180may transmit a beamformed signal to the UE104in one or more transmit directions182′. The UE104may receive the beamformed signal from the base station180in one or more receive directions182″. The UE104may also transmit a beamformed signal to the base station180in one or more transmit directions. The base station180may receive the beamformed signal from the UE104in one or more receive directions. The base station180/UE104may perform beam training to determine the best receive and transmit directions for each of the base station180/UE104. The transmit and receive directions for the base station180may or may not be the same. The transmit and receive directions for the UE104may or may not be the same. The EPC160may include a Mobility Management Entity (MME)162, other MMEs164, a Serving Gateway166, a Multimedia Broadcast Multicast Service (MBMS) Gateway168, a Broadcast Multicast Service Center (BM-SC)170, and a Packet Data Network (PDN) Gateway172. The MME162may be in communication with a Home Subscriber Server (HSS)174. The MME162is the control node that processes the signaling between the UEs104and the EPC160. Generally, the MME162provides bearer and connection management. All user Internet protocol (IP) packets are transferred through the Serving Gateway166, which itself is connected to the PDN Gateway172. The PDN Gateway172provides UE IP address allocation as well as other functions. The PDN Gateway172and the BM-SC170are connected to the IP Services176. The IP Services176may include the Internet, an intranet, an IP Multimedia Subsystem (IMS), a PS Streaming Service, and/or other IP services. The BM-SC170may provide functions for MBMS user service provisioning and delivery. The BM-SC170may serve as an entry point for content provider MBMS transmission, may be used to authorize and initiate MBMS Bearer Services within a public land mobile network (PLMN), and may be used to schedule MBMS transmissions. The MBMS Gateway168may be used to distribute MBMS traffic to the base stations102belonging to a Multicast Broadcast Single Frequency Network (MBSFN) area broadcasting a particular service, and may be responsible for session management (start/stop) and for collecting eMBMS related charging information. The core network190may include an Access and Mobility Management Function (AMF)192, other AMFs193, a Session Management Function (SMF)194, and a User Plane Function (UPF)195. The AMF192may be in communication with a Unified Data Management (UDM)196. The AMF192is the control node that processes the signaling between the UEs104and the core network190. Generally, the AMF192provides QoS flow and session management. All user Internet protocol (IP) packets are transferred through the UPF195. The UPF195provides UE IP address allocation as well as other functions. The UPF195is connected to the IP Services197. The IP Services197may include the Internet, an intranet, an IP Multimedia Subsystem (IMS), a Packet Switch (PS) Streaming (PSS) Service, and/or other IP services. 
The base station may include and/or be referred to as a gNB, Node B, eNB, an access point, a base transceiver station, a radio base station, a radio transceiver, a transceiver function, a basic service set (BSS), an extended service set (ESS), a transmit reception point (TRP), or some other suitable terminology. The base station102provides an access point to the EPC160or core network190for a UE104. Examples of UEs104include a cellular phone, a smart phone, a session initiation protocol (SIP) phone, a laptop, a personal digital assistant (PDA), a satellite radio, a global positioning system, a multimedia device, a video device, a digital audio player (e.g., MP3 player), a camera, a game console, a tablet, a smart device, a wearable device, a vehicle, an electric meter, a gas pump, a large or small kitchen appliance, a healthcare device, an implant, a sensor/actuator, a display, or any other similar functioning device. Some of the UEs104may be referred to as IoT devices (e.g., parking meter, gas pump, toaster, vehicles, heart monitor, etc.). The UE104may also be referred to as a station, a mobile station, a subscriber station, a mobile unit, a subscriber unit, a wireless unit, a remote unit, a mobile device, a wireless device, a wireless communications device, a remote device, a mobile subscriber station, an access terminal, a mobile terminal, a wireless terminal, a remote terminal, a handset, a user agent, a mobile client, a client, or some other suitable terminology. Referring again toFIG.1, in certain aspects, the base station180may include a transmission component199configured to receive, from a last hop node, a notification flag indicating one or more potentially skipped resources, the one or more potentially skipped resources being skipped by the last hop node at a beginning of a first set of uplink (UL) or downlink (DL) resources, where a received UL or DL communication is based on the notification flag. Transmission component199may also be configured to receive, via a first set of UL or DL resources, UL or DL communication including at least one data packet, the first set of UL or DL resources allocated for a reception entity of the node. Transmission component199may also be configured to decode, at the reception entity, the at least one data packet during a decoding period, the decoding period including a decoding start time and a decoding completion time. Transmission component199may also be configured to transmit an acknowledgement (ACK) or negative ACK (NACK) upon decoding the at least one data packet at the reception entity of the node. Transmission component199may also be configured to stop reception of at least one remaining first repetition unit of the one or more first repetition units upon successfully decoding the at least one data packet, where the reception of the at least one remaining first repetition unit is stopped at an early termination instance. Transmission component199may also be configured to transmit, to the next hop node, a notification flag indicating one or more potentially skipped resources, the one or more potentially skipped resources being skipped by the forwarding entity at a beginning of the second set of UL or DL resources. 
Transmission component199may also be configured to encode the at least one data packet at the forwarding entity of the node, the at least one data packet being transmitted to the next hop node via the one or more second repetition units after the at least one data packet is encoded, where the one or more second repetition units overlap with the at least one remaining first repetition unit. Transmission component199may also be configured to transmit, via a second set of UL or DL resources, the UL or DL communication including the at least one data packet to a next hop node, the second set of UL or DL resources allocated for a forwarding entity of the node, at least one first resource of the first set of UL or DL resources overlapping with at least one second resource of the second set of UL or DL resources. Referring again toFIG.1, in certain aspects, the base station180may include a transmission component198configured to transmit, to a second node via a first set of uplink (UL) or downlink (DL) resources, UL or DL communication including at least one data packet, the first set of UL or DL resources allocated for a reception entity of the second node. Transmission component198may also be configured to receive, from the second node, an acknowledgement (ACK) or negative ACK (NACK) based on the at least one data packet being decoded at the reception entity of the second node. Transmission component198may also be configured to transmit, to the second node, a notification flag indicating one or more potentially skipped resources, the one or more potentially skipped resources being skipped by the first node at a beginning of the first set of UL or DL resources, where the transmitted UL or DL communication is based on the notification flag. Referring again toFIG.1, in certain aspects, the base station180may include a reception component191configured to receive, from a second node, a notification flag indicating one or more potentially skipped resources, the one or more potentially skipped resources being skipped by a forwarding entity of the second node at a beginning of a first set of uplink (UL) or downlink (DL) resources. Reception component191may also be configured to receive, from the second node via the first set of UL or DL resources, the UL or DL communication including at least one data packet, the first set of UL or DL resources allocated for the forwarding entity of the second node, where at least one second resource of a second set of UL or DL resources overlaps with at least one first resource of the first set of UL or DL resources. Although the following description may be focused on 5G NR, the concepts described herein may be applicable to other similar areas, such as LTE, LTE-A, CDMA, GSM, and other wireless technologies. FIG.2Ais a diagram200illustrating an example of a first subframe within a 5G NR frame structure.FIG.2Bis a diagram230illustrating an example of DL channels within a 5G NR subframe.FIG.2Cis a diagram250illustrating an example of a second subframe within a 5G NR frame structure.FIG.2Dis a diagram280illustrating an example of UL channels within a 5G NR subframe. The 5G NR frame structure may be frequency division duplexed (FDD) in which for a particular set of subcarriers (carrier system bandwidth), subframes within the set of subcarriers are dedicated for either DL or UL, or may be time division duplexed (TDD) in which for a particular set of subcarriers (carrier system bandwidth), subframes within the set of subcarriers are dedicated for both DL and UL. 
In the examples provided byFIGS.2A,2C, the 5G NR frame structure is assumed to be TDD, with subframe 4 being configured with slot format 28 (with mostly DL), where D is DL, U is UL, and F is flexible for use between DL/UL, and subframe 3 being configured with slot format 1 (with all UL). While subframes 3, 4 are shown with slot formats 1, 28, respectively, any particular subframe may be configured with any of the various available slot formats 0-61. Slot formats 0, 1 are all DL, UL, respectively. Other slot formats 2-61 include a mix of DL, UL, and flexible symbols. UEs are configured with the slot format (dynamically through DL control information (DCI), or semi-statically/statically through radio resource control (RRC) signaling) through a received slot format indicator (SFI). Note that the description infra applies also to a 5G NR frame structure that is TDD. Other wireless communication technologies may have a different frame structure and/or different channels. A frame (10 ms) may be divided into 10 equally sized subframes (1 ms). Each subframe may include one or more time slots. Subframes may also include mini-slots, which may include 7, 4, or 2 symbols. Each slot may include 7 or 14 symbols, depending on the slot configuration. For slot configuration 0, each slot may include 14 symbols, and for slot configuration 1, each slot may include 7 symbols. The symbols on DL may be cyclic prefix (CP) OFDM (CP-OFDM) symbols. The symbols on UL may be CP-OFDM symbols (for high throughput scenarios) or discrete Fourier transform (DFT) spread OFDM (DFT-s-OFDM) symbols (also referred to as single carrier frequency-division multiple access (SC-FDMA) symbols) (for power limited scenarios; limited to a single stream transmission). The number of slots within a subframe is based on the slot configuration and the numerology. For slot configuration 0, different numerologies 0 to 4 allow for 1, 2, 4, 8, and 16 slots, respectively, per subframe. For slot configuration 1, different numerologies 0 to 2 allow for 2, 4, and 8 slots, respectively, per subframe. Accordingly, for slot configuration 0 and numerology μ, there are 14 symbols/slot and 2^μ slots/subframe. The subcarrier spacing and symbol length/duration are a function of the numerology. The subcarrier spacing may be equal to 2^μ * 15 kHz, where μ is the numerology 0 to 4. As such, the numerology μ=0 has a subcarrier spacing of 15 kHz and the numerology μ=4 has a subcarrier spacing of 240 kHz. The symbol length/duration is inversely related to the subcarrier spacing.FIGS.2A-2Dprovide an example of slot configuration 0 with 14 symbols per slot and numerology μ=2 with 4 slots per subframe. The slot duration is 0.25 ms, the subcarrier spacing is 60 kHz, and the symbol duration is approximately 16.67 μs. Within a set of frames, there may be one or more different bandwidth parts (BWPs) (seeFIG.2B) that are frequency division multiplexed. Each BWP may have a particular numerology. A resource grid may be used to represent the frame structure. Each time slot includes a resource block (RB) (also referred to as physical RBs (PRBs)) that extends 12 consecutive subcarriers. The resource grid is divided into multiple resource elements (REs). The number of bits carried by each RE depends on the modulation scheme.
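The numerology relationships above can be restated as a short illustrative sketch. It simply encodes the stated formulas (subcarrier spacing 2^μ * 15 kHz, 2^μ slots per 1 ms subframe for slot configuration 0, and a useful symbol duration of roughly 1/SCS, which is what the approximately 16.67 μs figure above corresponds to); the function name is invented for illustration.

```python
# Numerology arithmetic for slot configuration 0 (14 symbols per slot),
# restating the relationships described in the paragraph above.

def numerology_params(mu: int):
    """Return (SCS in kHz, slots per subframe, slot duration in ms,
    useful symbol duration in microseconds) for numerology mu (0..4)."""
    scs_khz = 15 * 2**mu           # subcarrier spacing = 2^mu * 15 kHz
    slots_per_subframe = 2**mu     # a 1 ms subframe holds 2^mu slots
    slot_ms = 1.0 / slots_per_subframe
    useful_symbol_us = 1e3 / scs_khz  # symbol length is ~1/SCS (CP excluded)
    return scs_khz, slots_per_subframe, slot_ms, useful_symbol_us


# The mu = 2 example of FIGS. 2A-2D: 60 kHz SCS, 4 slots per subframe,
# 0.25 ms slots, and a useful symbol duration of about 16.67 microseconds.
print(numerology_params(2))  # (60, 4, 0.25, 16.666...)
```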
As illustrated inFIG.2A, some of the REs carry reference (pilot) signals (RS) for the UE. The RS may include demodulation RS (DM-RS) (indicated as R for one particular configuration, but other DM-RS configurations are possible) and channel state information reference signals (CSI-RS) for channel estimation at the UE. The RS may also include beam measurement RS (BRS), beam refinement RS (BRRS), and phase tracking RS (PT-RS). FIG.2Billustrates an example of various DL channels within a subframe of a frame. The physical downlink control channel (PDCCH) carries DCI within one or more control channel elements (CCEs) (e.g., 1, 2, 4, 8, or 16 CCEs), each CCE including six RE groups (REGs), each REG including 12 consecutive REs in an OFDM symbol of an RB. A PDCCH within one BWP may be referred to as a control resource set (CORESET). A UE is configured to monitor PDCCH candidates in a PDCCH search space (e.g., common search space, UE-specific search space) during PDCCH monitoring occasions on the CORESET, where the PDCCH candidates have different DCI formats and different aggregation levels. Additional BWPs may be located at greater and/or lower frequencies across the channel bandwidth. A primary synchronization signal (PSS) may be within symbol 2 of particular subframes of a frame. The PSS is used by a UE104to determine subframe/symbol timing and a physical layer identity. A secondary synchronization signal (SSS) may be within symbol 4 of particular subframes of a frame. The SSS is used by a UE to determine a physical layer cell identity group number and radio frame timing. Based on the physical layer identity and the physical layer cell identity group number, the UE can determine a physical cell identifier (PCI). Based on the PCI, the UE can determine the locations of the aforementioned DM-RS. The physical broadcast channel (PBCH), which carries a master information block (MIB), may be logically grouped with the PSS and SSS to form a synchronization signal (SS)/PBCH block (also referred to as SS block (SSB)). The MIB provides a number of RBs in the system bandwidth and a system frame number (SFN). The physical downlink shared channel (PDSCH) carries user data, broadcast system information not transmitted through the PBCH such as system information blocks (SIBs), and paging messages. As illustrated inFIG.2C, some of the REs carry DM-RS (indicated as R for one particular configuration, but other DM-RS configurations are possible) for channel estimation at the base station. The UE may transmit DM-RS for the physical uplink control channel (PUCCH) and DM-RS for the physical uplink shared channel (PUSCH). The PUSCH DM-RS may be transmitted in the first one or two symbols of the PUSCH. The PUCCH DM-RS may be transmitted in different configurations depending on whether short or long PUCCHs are transmitted and depending on the particular PUCCH format used. The UE may transmit sounding reference signals (SRS). The SRS may be transmitted in the last symbol of a subframe. The SRS may have a comb structure, and a UE may transmit SRS on one of the combs. The SRS may be used by a base station for channel quality estimation to enable frequency-dependent scheduling on the UL. FIG.2Dillustrates an example of various UL channels within a subframe of a frame. The PUCCH may be located as indicated in one configuration. The PUCCH carries uplink control information (UCI), such as scheduling requests, a channel quality indicator (CQI), a precoding matrix indicator (PMI), a rank indicator (RI), and hybrid automatic repeat request (HARQ) ACK/NACK feedback.
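The PCI derivation described above (identity from the PSS, group number from the SSS) follows the standard NR relationship PCI = 3 * group + identity, which yields 1008 distinct PCIs. The following sketch restates that relationship together with the CCE/REG sizing quoted for the PDCCH; the function name is invented for illustration.

```python
# Sketch of the PCI derivation described above, plus PDCCH CCE/REG sizing.

def physical_cell_id(group_number: int, physical_layer_identity: int) -> int:
    """Standard NR relationship: PCI = 3 * N_ID(1) + N_ID(2)."""
    assert 0 <= group_number <= 335           # from the SSS
    assert 0 <= physical_layer_identity <= 2  # from the PSS
    return 3 * group_number + physical_layer_identity


# Each CCE spans 6 REGs of 12 REs each, i.e. 72 REs per CCE, so an
# aggregation level of 8 CCEs occupies 576 REs.
RES_PER_REG = 12
REGS_PER_CCE = 6
print(physical_cell_id(100, 2))        # 302
print(8 * REGS_PER_CCE * RES_PER_REG)  # 576 REs at aggregation level 8
```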
The PUSCH carries data, and may additionally be used to carry a buffer status report (BSR), a power headroom report (PHR), and/or UCI. FIG.3is a block diagram of a base station310in communication with a UE350in an access network. In the DL, IP packets from the EPC160may be provided to a controller/processor375. The controller/processor375implements layer 3 and layer 2 functionality. Layer 3 includes a radio resource control (RRC) layer, and layer 2 includes a service data adaptation protocol (SDAP) layer, a packet data convergence protocol (PDCP) layer, a radio link control (RLC) layer, and a medium access control (MAC) layer. The controller/processor375provides RRC layer functionality associated with broadcasting of system information (e.g., MIB, SIBs), RRC connection control (e.g., RRC connection paging, RRC connection establishment, RRC connection modification, and RRC connection release), inter radio access technology (RAT) mobility, and measurement configuration for UE measurement reporting; PDCP layer functionality associated with header compression/decompression, security (ciphering, deciphering, integrity protection, integrity verification), and handover support functions; RLC layer functionality associated with the transfer of upper layer packet data units (PDUs), error correction through ARQ, concatenation, segmentation, and reassembly of RLC service data units (SDUs), re-segmentation of RLC data PDUs, and reordering of RLC data PDUs; and MAC layer functionality associated with mapping between logical channels and transport channels, multiplexing of MAC SDUs onto transport blocks (TBs), demultiplexing of MAC SDUs from TBs, scheduling information reporting, error correction through HARQ, priority handling, and logical channel prioritization. The transmit (TX) processor316and the receive (RX) processor370implement layer 1 functionality associated with various signal processing functions. Layer 1, which includes a physical (PHY) layer, may include error detection on the transport channels, forward error correction (FEC) coding/decoding of the transport channels, interleaving, rate matching, mapping onto physical channels, modulation/demodulation of physical channels, and MIMO antenna processing. The TX processor316handles mapping to signal constellations based on various modulation schemes (e.g., binary phase-shift keying (BPSK), quadrature phase-shift keying (QPSK), M-phase-shift keying (M-PSK), M-quadrature amplitude modulation (M-QAM)). The coded and modulated symbols may then be split into parallel streams. Each stream may then be mapped to an OFDM subcarrier, multiplexed with a reference signal (e.g., pilot) in the time and/or frequency domain, and then combined together using an Inverse Fast Fourier Transform (IFFT) to produce a physical channel carrying a time domain OFDM symbol stream. The OFDM stream is spatially precoded to produce multiple spatial streams. Channel estimates from a channel estimator374may be used to determine the coding and modulation scheme, as well as for spatial processing. The channel estimate may be derived from a reference signal and/or channel condition feedback transmitted by the UE350. Each spatial stream may then be provided to a different antenna320via a separate transmitter318TX. Each transmitter318TX may modulate an RF carrier with a respective spatial stream for transmission. At the UE350, each receiver354RX receives a signal through its respective antenna352. 
Each receiver354RX recovers information modulated onto an RF carrier and provides the information to the receive (RX) processor356. The TX processor368and the RX processor356implement layer 1 functionality associated with various signal processing functions. The RX processor356may perform spatial processing on the information to recover any spatial streams destined for the UE350. If multiple spatial streams are destined for the UE350, they may be combined by the RX processor356into a single OFDM symbol stream. The RX processor356then converts the OFDM symbol stream from the time-domain to the frequency domain using a Fast Fourier Transform (FFT). The frequency domain signal comprises a separate OFDM symbol stream for each subcarrier of the OFDM signal. The symbols on each subcarrier, and the reference signal, are recovered and demodulated by determining the most likely signal constellation points transmitted by the base station310. These soft decisions may be based on channel estimates computed by the channel estimator358. The soft decisions are then decoded and deinterleaved to recover the data and control signals that were originally transmitted by the base station310on the physical channel. The data and control signals are then provided to the controller/processor359, which implements layer 3 and layer 2 functionality. The controller/processor359can be associated with a memory360that stores program codes and data. The memory360may be referred to as a computer-readable medium. In the UL, the controller/processor359provides demultiplexing between transport and logical channels, packet reassembly, deciphering, header decompression, and control signal processing to recover IP packets from the EPC160. The controller/processor359is also responsible for error detection using an ACK and/or NACK protocol to support HARQ operations. Similar to the functionality described in connection with the DL transmission by the base station310, the controller/processor359provides RRC layer functionality associated with system information (e.g., MIB, SIBs) acquisition, RRC connections, and measurement reporting; PDCP layer functionality associated with header compression/decompression, and security (ciphering, deciphering, integrity protection, integrity verification); RLC layer functionality associated with the transfer of upper layer PDUs, error correction through ARQ, concatenation, segmentation, and reassembly of RLC SDUs, re-segmentation of RLC data PDUs, and reordering of RLC data PDUs; and MAC layer functionality associated with mapping between logical channels and transport channels, multiplexing of MAC SDUs onto TBs, demultiplexing of MAC SDUs from TBs, scheduling information reporting, error correction through HARQ, priority handling, and logical channel prioritization. Channel estimates derived by a channel estimator358from a reference signal or feedback transmitted by the base station310may be used by the TX processor368to select the appropriate coding and modulation schemes, and to facilitate spatial processing. The spatial streams generated by the TX processor368may be provided to different antenna352via separate transmitters354TX. Each transmitter354TX may modulate an RF carrier with a respective spatial stream for transmission. The UL transmission is processed at the base station310in a manner similar to that described in connection with the receiver function at the UE350. Each receiver318RX receives a signal through its respective antenna320. 
Each receiver318RX recovers information modulated onto an RF carrier and provides the information to an RX processor370. The controller/processor375can be associated with a memory376that stores program codes and data. The memory376may be referred to as a computer-readable medium. In the UL, the controller/processor375provides demultiplexing between transport and logical channels, packet reassembly, deciphering, header decompression, and control signal processing to recover IP packets from the UE350. IP packets from the controller/processor375may be provided to the EPC160. The controller/processor375is also responsible for error detection using an ACK and/or NACK protocol to support HARQ operations. At least one of the TX processor316, the RX processor370, and the controller/processor375may be configured to perform aspects in connection with191,198, and/or199ofFIG.1. Some aspects of wireless communication may include an integrated access and backhaul (IAB) network. In IAB networks, a portion of the wireless spectrum may be utilized for a backhaul connection of nodes or base stations. This utilization of the wireless spectrum may be used in place of another type of connection, e.g., a fiber connection. IAB networks may be beneficial as they have the ability to make high density deployments of wireless networks more economically viable. Based on this, some aspects of wireless communications are increasingly utilizing multiple hop (multi-hop) IAB networks. IAB networks may include a number of different nodes or base stations, such as an IAB donor and an IAB node. An IAB donor is an enhanced base station node with functions to control the IAB network. IAB donors may include a central unit (CU), which is the central entity that controls the entire IAB network through a configuration. Additionally, the CU may include a number of functions, e.g., radio resource control (RRC) protocol or packet data convergence protocol (PDCP) layer functions. IAB donors may also include a distributed unit (DU), which is a scheduling node that may schedule child nodes of the IAB donor. The DU may also include a number of functions, e.g., radio link control (RLC), medium access control (MAC), and/or physical (PHY) layer functions. An IAB node may be a layer 2 (L2) relay node that includes a number of functions, e.g., mobile-termination (MT) unit and DU functions. In some aspects, the MT of an IAB node may be a scheduled node similar to a UE. Moreover, the MT may be scheduled by its parent IAB node or an IAB donor. Further, the DU may be a scheduling node that schedules child nodes of the IAB node. FIG.4is a diagram400illustrating an example IAB network. As shown inFIG.4, diagram400includes core network410, IAB donor422, and IAB nodes432/434/436/438.FIG.4also illustrates a number of UEs, child nodes of IAB node432, wireless access links, and wireless backhaul links. As mentioned above, IAB donor422includes a CU and a DU. Additionally, IAB nodes432/434/436/438include an MT and a DU. Some aspects of IAB networks may include resource management solutions to handle different constraints, e.g., a half-duplex constraint. Under a half-duplex constraint, a node cannot perform transmission (TX) and reception (RX) functions at the same time over the same frequency band. One solution to a half-duplex constraint may be time division multiplexing (TDM), as well as space division multiplexing (SDM) TX, frequency division multiplexing (FDM) TX, SDM RX, or FDM RX.
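The IAB roles described above can be summarized as a toy data model: a donor hosts the CU together with a DU, while each IAB node pairs an MT (scheduled by its parent) with a DU (scheduling its children). This is purely illustrative; the class and field names are invented and do not reflect any standardized data structure.

```python
# Toy model of the IAB topology described above (donor = CU + DU,
# IAB node = MT + DU). Names are invented for illustration only.

from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class IABNode:
    name: str
    parent: Optional["IABNode"] = None                        # the MT attaches toward this node
    children: List["IABNode"] = field(default_factory=list)  # scheduled by this node's DU
    is_donor: bool = False                                    # donors additionally host the CU

    def add_child(self, child: "IABNode") -> None:
        child.parent = self
        self.children.append(child)


donor = IABNode("IAB-donor-422", is_donor=True)
node = IABNode("IAB-node-432")
donor.add_child(node)
node.add_child(IABNode("IAB-node-434"))
print([c.name for c in node.children])
```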
FIGS.5A and5Bare diagrams500and550, respectively, illustrating example nodes. As shown inFIG.5A, diagram500includes parent nodes502/512, IAB nodes504/514, child IAB nodes506/516, and UEs508/518.FIG.5Adisplays one example of a TDM solution between different nodes. As shown inFIG.5B, diagram550includes parent nodes552/562, IAB nodes554/564, child IAB nodes556/566, and UEs558/568.FIG.5Bdisplays one example of SDM RX or FDM RX solutions between different nodes, as well as SDM TX or FDM TX solutions between different nodes. IAB nodes may include a number of DU resource attributes, such as hard (H) resources, not available (NA) resources, and soft (S) resources. A DU may use an H resource unconditionally, but it does not have to use the H resource. The DU may not use an NA resource, except where the resource matches an allocation for certain cell-specific signals or channels. For example, an exception may apply to synchronization signal block (SSB) transmission (both cell-defining SSB (CD-SSB) and non-CD-SSB), RACH receptions, periodic CSI-RS transmissions, and SR receptions. The DU may use an S resource only if a corresponding condition is satisfied. For example, these conditions may include an explicit indication, i.e., where the parent node sends an indication to release the resource; an implicit determination, i.e., where the node determines that the use of the DU resource has no impact on what the MT is expected to do; and the same exception as in the NA case above for cell-specific signals or channels. Some aspects of wireless communications, e.g., 5G NR communication, may include a number of repetition schemes. For instance, the repetition schemes may include slot aggregation or multiple transmission-reception point (multi-TRP) TDM repetition. In slot aggregation, a single DCI may schedule a PDSCH or PUSCH that may span multiple consecutive slots, e.g., N slots. In some instances, a same set of symbols over the N slots may be used for data transmission. Also, a number of aggregated slots may be semi-statically configured via RRC signaling, e.g., a pdsch-AggregationFactor parameter in a PDSCH configuration (pdsch-config). In one case, based on a TDD slot configuration, a UE may determine that the symbol(s) allocated to receive a PDSCH (or transmit a PUSCH) are UL (or DL) symbols, and accordingly determine that there is no transmission in that slot. In multi-TRP TDM repetition, aspects of wireless communications may include support for slot or mini-slot repetition. Multi-TRP TDM repetition may also include TCI state or redundancy version (RV) patterns across repetitions. Additionally, multi-TRP TDM repetition may include a dynamic indication of a number of repetitions via DCI, e.g., a time domain resource assignment. In some instances, the DCI may point to an entry in an RRC configured table (e.g., a PDSCH-TimeDomainResourceAllocation parameter). Also, a RepNumR16 parameter for multi-TRP repetition may be part of a configuration for the PDSCH-TimeDomainResourceAllocation parameter. Additionally, some aspects of wireless communications may include a work item description (WID) for IAB networks. For instance, IAB networks may include a number of enhancements, such as topology, routing, and transport enhancements. IAB networks may also include specifications of enhancements, such as to improve topology-wide fairness, multi-hop latency, and/or congestion mitigation.
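The hard/not-available/soft resource rules described above amount to a small decision procedure, sketched below in Python. The enum values and condition flags paraphrase the text; the cell-specific exceptions cover the SSB, RACH, periodic CSI-RS and SR cases listed above. None of these names come from 3GPP signalling.

```python
# Sketch of the DU resource-attribute rules described above.

from enum import Enum


class DUResource(Enum):
    HARD = "H"
    NOT_AVAILABLE = "NA"
    SOFT = "S"


def du_may_use(attr: DUResource,
               cell_specific_exception: bool = False,
               explicitly_released: bool = False,
               implicitly_no_mt_impact: bool = False) -> bool:
    """Return True if the DU is permitted (not obliged) to use the resource."""
    if attr is DUResource.HARD:
        return True                     # usable unconditionally
    if attr is DUResource.NOT_AVAILABLE:
        return cell_specific_exception  # only for SSB/RACH/CSI-RS/SR etc.
    # Soft: usable if the parent explicitly released it, if the DU
    # determines the MT is unaffected, or under the same exception.
    return (explicitly_released or implicitly_no_mt_impact
            or cell_specific_exception)


assert du_may_use(DUResource.HARD)
assert not du_may_use(DUResource.NOT_AVAILABLE)
assert du_may_use(DUResource.SOFT, implicitly_no_mt_impact=True)
```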
FIGS.6A and6Bare diagrams600and650, respectively, illustrating example repetition transmissions.FIGS.6A and6Bdisplay that aspects of wireless communications may include repetition with non-overlapping resources and overlapping resources across IAB networks. As shown inFIG.6A, diagram600includes parent node602, IAB node604, and child node606.FIG.6Aalso includes a number of repetition resources. For instance, the repetition resources corresponding to the MT of IAB node604are white inFIG.6A, which include K1 repetitions. The repetition resources corresponding to the DU of IAB node604have diagonal lines inFIG.6A, which include K2 repetitions.FIG.6Adepicts repetition with non-overlapping resources across an IAB MT of IAB node604and an IAB DU of IAB node604. As shown inFIG.6B, diagram650includes parent node652, IAB node654, and child node656.FIG.6Bdepicts repetition with overlapping resources across an IAB MT of IAB node654and an IAB DU of IAB node654.FIG.6Balso includes a number of repetition resources. For instance, the repetition resources corresponding to the MT of IAB node654are white inFIG.6B, which include K1 repetitions. The repetition resources corresponding to the DU of IAB node654have diagonal lines inFIG.6B, which include K2 repetitions.FIG.6Balso depicts that there are a number of overlapping repetition resources. As indicated inFIG.6B, repetition with overlapping resources may provide improved latency over a multi-hop IAB network. Additionally, a repetition scheme may refer to slot aggregation or multi-TRP TDM repetition. Based on the above, it may be beneficial to include repetition transmissions with overlapping resources across an IAB node. For instance, repetition with overlapping resources may provide a number of enhancements, such as a latency enhancement. Accordingly, it may be beneficial to include repetition with overlapping resources across an IAB MT and an IAB DU to reduce latency. Aspects of the present disclosure may include repetition transmissions with overlapping resources across an IAB node. For instance, aspects of the present disclosure may utilize repetition with overlapping resources in order to provide a number of enhancements, such as a latency enhancement. Aspects of the present disclosure may also include repetition with overlapping resources across an IAB MT and an IAB DU to mitigate latency issues. FIG.7is a diagram700illustrating an example repetition transmission. As shown inFIG.7, diagram700includes parent node702, IAB node704, and child node706.FIG.7depicts repetition with overlapping resources across an IAB MT of IAB node704and an IAB DU of IAB node704.FIG.7includes a number of repetition resources corresponding to the MT and the DU. For instance, the repetition resources corresponding to the MT of IAB node704are white inFIG.7, which include K1 repetitions. The repetition resources corresponding to the DU of IAB node704have diagonal lines inFIG.7, which include K2 repetitions.FIG.7also depicts that there are a number of overlapping resources among the repetition resources. Further,FIG.7includes an early termination instance, as well as DCI and ACK/NACKs. As shown inFIG.7, in some aspects of the present disclosure the scheduling entity may be aware of an early termination instance at the scheduling time. In some instances, overlapping resources allocated for a next hop may start after an early termination instance of this hop.
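The latency benefit contrasted in FIGS. 6A and 6B can be illustrated with back-of-envelope arithmetic. The sketch below assumes, purely for illustration, that each repetition unit occupies one slot and that the forwarding hop can begin as soon as the reception hop finishes or terminates early; neither simplification is taken from the disclosure.

```python
# Back-of-envelope two-hop latency comparison for the FIG. 6A (sequential)
# and FIG. 6B (overlapped) cases, under the stated simplifying assumptions.

from typing import Optional


def two_hop_latency_slots(k1: int, k2: int,
                          early_term_slot: Optional[int] = None) -> int:
    """Slots until the second-hop transmission completes.

    k1: repetitions allocated on the reception hop (MT side for DL)
    k2: repetitions allocated on the forwarding hop (DU side for DL)
    early_term_slot: 1-based slot at which decoding succeeded, enabling the
        overlapped start; None means no overlap (FIG. 6A style).
    """
    if early_term_slot is None:
        return k1 + k2                    # sequential, non-overlapping hops
    return min(early_term_slot, k1) + k2  # overlapped after early termination


print(two_hop_latency_slots(4, 4))                     # 8 slots, FIG. 6A style
print(two_hop_latency_slots(4, 4, early_term_slot=2))  # 6 slots, FIG. 6B style
```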
As illustrated in FIG. 7, a DL repetition TX may be transmitted via dynamic DCI, where an IAB DU (e.g., a forwarding entity) may perform dynamic scheduling for a next hop upon knowledge of an early termination at a co-located IAB MT (e.g., a reception entity).

FIG. 8 is a diagram 800 illustrating an example repetition transmission. As shown in FIG. 8, diagram 800 includes parent node 802, IAB node 804, and child node 806. FIG. 8 depicts repetition with overlapping resources across an IAB MT of IAB node 804 and an IAB DU of IAB node 804. FIG. 8 includes a number of repetition resources corresponding to the MT and the DU. For instance, the repetition resources corresponding to the DU of IAB node 804 are white in FIG. 8, which include K1 repetitions. The repetition resources corresponding to the MT of IAB node 804 have diagonal lines in FIG. 8, which include K2 repetitions. FIG. 8 also depicts that there are a number of overlapping resources among the repetition resources. Moreover, FIG. 8 includes an early termination instance, as well as DCI and a pre-emption buffer status report (BSR). As shown in FIG. 8, some aspects of the present disclosure may not be aware of an early termination instance at a scheduling time. In some aspects, overlapping resources allocated for a next hop may start before an early termination instance of this hop. As illustrated in FIG. 8, an UL repetition TX may be transmitted via dynamic DCI, where a parent DU is the scheduling node for a next hop TX by an IAB MT (e.g., a forwarding entity). The scheduling node may not know the early termination instance at an IAB DU (e.g., a reception entity).

FIGS. 9A and 9B are diagrams 900 and 950, respectively, illustrating example repetition transmissions. As shown in FIG. 9A, diagram 900 includes parent node 902, IAB node 904, and child node 906. FIG. 9A depicts repetition with overlapping resources across an IAB MT of IAB node 904 and an IAB DU of IAB node 904. FIG. 9A includes a number of repetition resources corresponding to the MT and the DU. For instance, the repetition resources corresponding to the MT of IAB node 904 are white in FIG. 9A, which include K1 repetitions. The repetition resources corresponding to the DU of IAB node 904 have diagonal lines in FIG. 9A, which include K2 repetitions. FIG. 9A also depicts that there are a number of overlapping resources among the repetition resources. FIG. 9A also includes an early termination instance and ACK/NACKs. FIG. 9A depicts that aspects of the present disclosure may correspond to DL semi-persistent scheduling (SPS). As shown in FIG. 9B, diagram 950 includes parent node 952, IAB node 954, and child node 956. FIG. 9B also depicts repetition with overlapping resources across an IAB MT of IAB node 954 and an IAB DU of IAB node 954. FIG. 9B includes a number of repetition resources corresponding to the MT and the DU. For instance, the repetition resources corresponding to the DU of IAB node 954 are white in FIG. 9B, which include K1 repetitions. The repetition resources corresponding to the MT of IAB node 954 have diagonal lines in FIG. 9B, which include K2 repetitions. FIG. 9B also depicts that there are a number of overlapping resources among the repetition resources, as well as an early termination instance. FIG. 9B depicts that aspects of the present disclosure may correspond to an UL configured grant with repetition. As shown in FIGS. 9A and 9B, examples of the present disclosure may correspond to DL SPS and/or an UL configured grant with repetition. For these cases, resources may be periodically allocated beforehand via activation DCI or by RRC configuration.
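FIGS. 7, 8, 9A, and 9B differ mainly in whether the node that schedules the next-hop resources knows the early termination instance of this hop. A minimal sketch of that distinction follows, with hypothetical function and parameter names:

```python
def forwarding_start_unit(scheduler_knows_early_term,
                          early_term_unit, earliest_possible_unit):
    """Pick the first repetition unit available to the forwarding entity.

    FIG. 7 style (DL dynamic DCI): the co-located DU schedules after learning
    the MT's early termination, so the allocation starts after it. FIG. 8 /
    FIGS. 9A-9B style (UL dynamic scheduling, DL SPS, UL configured grant):
    the allocation is made without that knowledge, based on the earliest
    possible successful decoding time, and may start before the actual decode.
    """
    if scheduler_knows_early_term:
        return early_term_unit + 1
    return earliest_possible_unit

print(forwarding_start_unit(True, early_term_unit=2, earliest_possible_unit=1))   # 3
print(forwarding_start_unit(False, early_term_unit=2, earliest_possible_unit=1))  # 1
```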
The starting resource for a forwarding entity may be determined based on an earliest possible successful decoding time at a reception entity. The actual successful decoding time at the reception entity may occur after the starting time of the allocated resources for the forwarding entity.

FIG. 10 is a diagram 1000 illustrating an example repetition transmission. As shown in FIG. 10, diagram 1000 includes parent node 1002, IAB node 1004, and child node 1006. FIG. 10 depicts repetition with overlapping resources across an IAB MT of IAB node 1004 and an IAB DU of IAB node 1004. FIG. 10 includes a number of repetition resources corresponding to the MT and the DU. For instance, the repetition resources corresponding to the MT of IAB node 1004 are white in FIG. 10, which include K1 repetitions. The repetition resources corresponding to the DU of IAB node 1004 have diagonal lines in FIG. 10, which include K2 repetitions. FIG. 10 also depicts a number of overlapping resources of a soft type. FIG. 10 also includes an early termination instance, DCI, and ACK/NACKs. FIG. 10 shows that overlapping resources allocated for a forwarding entity may start after an early termination instance at a reception entity. As depicted in FIG. 10, the reception entity (e.g., an IAB MT for DL) may stop RX upon successfully decoding at least one packet before the end of the repetitions (i.e., at an early termination instance). In this case, although a parent node may not be aware of the early termination at the IAB MT and may continue to transmit, the IAB MT may stop RX and allow a co-located IAB DU to use the remaining resources. As shown in FIG. 10, once the reception entity stops RX upon a successful reception, the co-located forwarding entity (e.g., an IAB DU for DL) may allocate resources with repetitions that overlap with the remaining unused resources of the reception entity for TX toward a next hop node. Also, the overlapping resources may be of a soft type. According to the implicit determination principle for soft resources, the IAB DU may use the remaining soft resources after an IAB MT stops RX upon successful reception (a sketch of this determination appears below, after the introduction of FIG. 11).

FIG. 11 is a diagram 1100 illustrating an example repetition transmission. As shown in FIG. 11, diagram 1100 includes parent node 1102, IAB node 1104, and child node 1106. FIG. 11 depicts repetition with overlapping resources across an IAB MT of IAB node 1104 and an IAB DU of IAB node 1104. FIG. 11 includes a number of repetition resources corresponding to the MT and the DU. For instance, the repetition resources corresponding to the MT of IAB node 1104 are white in FIG. 11, which include K1 repetitions. The repetition resources corresponding to the DU of IAB node 1104 have diagonal lines in FIG. 11, which include K2 repetitions. FIG. 11 also depicts noise among the resources, shown with a dotted pattern, as well as an early termination instance. FIG. 11 shows that overlapping resources allocated for a forwarding entity may start before an early termination instance at a reception entity. As illustrated in FIG. 11, the overlapping resources allocated for a forwarding entity (e.g., an IAB DU for DL or an IAB MT for UL) may start before the early termination instance at a reception entity (e.g., an IAB MT for DL or an IAB DU for UL). In this case, the forwarding entity may not be able to use the full resource allocation for packet forwarding at all times.
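As promised above, the following sketch illustrates the FIG. 10 implicit-determination principle: once the reception entity stops RX at the early termination instance, the remaining units of its allocation have no impact on what the MT is expected to do, so overlapping soft units become usable by the co-located forwarding entity. Unit indexing and names are hypothetical.

```python
def usable_soft_units(k1, early_term_unit, soft_units):
    """Soft repetition units the forwarding entity may claim after early termination.

    k1: reception entity's allocated repetition units, numbered 1..k1.
    early_term_unit: unit at which the reception entity successfully decodes.
    soft_units: units of the overlapping allocation typed as soft (S).
    """
    remaining = set(range(early_term_unit + 1, k1 + 1))  # unused by the MT
    return sorted(remaining & set(soft_units))

# MT allocated units 1..4 decodes at unit 2; overlapping units 3-4 are soft.
print(usable_soft_units(k1=4, early_term_unit=2, soft_units=[3, 4]))  # [3, 4]
```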
The forwarding entity may have to skip some of the beginning repetition units (in slots or mini-slots) until the early termination instance, and start to forward the packet to the next hop node near the middle of the repetition units. As shown in FIG. 11, there may be impacts to the reception entity at the next hop node. For an unknown starting time for effective reception, the first few allocated resources for reception may contain noise. For an under-optimized redundancy version (RV) pattern, the existing RV pattern defined for repetition may not be optimized for cases with skipped transmissions, e.g., the first received signal after skipping may not have an RV with sufficient systematic bits for decoding. Also, there may be a reduced number of repetitions compared to a target value due to the skipped TX.

As indicated above, in some aspects, overlapping resources may start before an early termination instance. For a resource allocation with repetition that spans multiple slots or mini-slots, a flag may be associated with this resource allocation to enable or disable operation of a floating starting time. The flag may be indicated to the TX node and/or the RX node of this resource allocation as part of an RRC configuration associated with the resource allocation by an IAB donor CU, or as part of a DCI grant for scheduling the resource allocation by the scheduling node. The DCI grant may be a grant for dynamic scheduling or an activation grant for SPS or an UL configured grant. In some cases, additional information, such as the number of skipped repetition units, may also be indicated by the forwarding entity to the next hop node via a medium access control (MAC) control element (MAC-CE), DCI, or UCI for a more efficient reception at the next hop node. If an enabling flag is indicated, the RX node may assume that the TX node may start TX near the middle of the repetition units (e.g., in slots or mini-slots) and optimize its reception procedure accordingly. For example, the RX node may perform some hypothesis testing on starting slots or mini-slots within the allocated resources during reception (a sketch of such a test appears at the end of this discussion). A separate RV pattern may be indicated for the case with a floating starting time enabled. For example, this may include RV patterns that contain certain RV versions, e.g., RV0 and/or RV3, with sufficient systematic bits. If additional information, such as the number of skipped repetition units, is also indicated to the RX node, the RX node may also skip these repetition units during reception and/or a decoding procedure.

If an enabling flag is indicated for the resource allocation and the forwarding entity starts TX near a middle of the repetition units, one or more of the following options may be adopted by the forwarding entity. In some aspects, the number of TXs may be equal to the remaining number of repetitions, i.e., the total number of allocated repetitions minus the skipped number of repetitions. In these aspects, the number of TXs may vary depending on the starting time. Also, the forwarding entity may schedule additional TXs via another dynamic DCI to meet a target reliability. In other instances, a fixed number of TXs may be indicated for the resource allocation regardless of the starting time. In these instances, the total number of allocated repetitions may be equal to the latest starting TX plus the indicated number of TXs. Also, the latest starting TX at the forwarding entity of an IAB node may correspond to the end of the allocation at the co-located reception entity of the IAB node.
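On the receiver side, the hypothesis testing mentioned above can be pictured as scanning candidate starting units for the first one that looks like signal rather than noise. The sketch below is a toy stand-in: a per-unit detection metric and threshold replace real channel processing, and all names and values are hypothetical.

```python
def detect_start_unit(unit_metrics, threshold):
    """Hypothesis-test candidate starting units within the allocated resources.

    unit_metrics[i] is a detection metric for repetition unit i (e.g., a
    correlation against a reference-signal hypothesis); units before the
    floating start contain only noise. Returns the first unit whose metric
    clears the threshold, or None if no transmission is detected.
    """
    for unit, metric in enumerate(unit_metrics):
        if metric >= threshold:
            return unit   # decoding starts here; earlier units are skipped
    return None

# Units 0-1 hold noise (skipped TXs); the forwarded signal starts at unit 2.
print(detect_start_unit([0.10, 0.15, 0.92, 0.95], threshold=0.5))  # 2
```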
FIG. 12 is a diagram 1200 illustrating an example repetition transmission. As shown in FIG. 12, diagram 1200 includes parent node 1202, IAB node 1204, and child node 1206. FIG. 12 depicts repetition with overlapping resources across an IAB MT of IAB node 1204 and an IAB DU of IAB node 1204. FIG. 12 includes a number of repetition resources corresponding to the MT and the DU. For instance, the repetition resources corresponding to the DU of IAB node 1204 are white in FIG. 12, which include K1 repetitions. The repetition resources corresponding to the MT of IAB node 1204 have diagonal lines in FIG. 12, which include K2 repetitions. FIG. 12 also depicts an early termination instance, DCI, and a pre-emption BSR. As shown in FIG. 12, for UL communication, a parent DU may utilize some information on a child DU's resource allocation, e.g., a starting slot or mini-slot index, in order to determine the overlapping resources for an IAB MT. This information may be utilized by the parent DU in the case of dynamic UL scheduling and/or a periodic allocation via a configured grant, e.g., a type-2 configured grant. For both cases, information on the child DU's resource allocation, e.g., a starting slot or mini-slot index, may be indicated to the parent DU by a child IAB MT. Signaling overhead may be a concern for dynamic UL scheduling. For periodic allocation via a configured grant, e.g., a type-2 configured grant, signaling overhead may not be a concern because the information may be indicated once for one allocation and applied periodically. This information may not be utilized for a periodic allocation via a type-1 configured grant, as the full resource allocation may be determined by a donor CU via an RRC configuration, and the donor CU may align the overlapping resources.

In some aspects, for an IAB node, a first set of resources with a repetition spanning multiple slots or mini-slots may be allocated for a reception entity of the IAB node to receive at least one packet. Also, a second set of resources with a repetition spanning multiple slots or mini-slots may be allocated for a forwarding entity of the IAB node to forward the received packet to the next hop node, where there are overlapping resources between the first set of resources and the second set of resources. The number of repetition units in the first set of resources may be different from that in the second set of resources. For DL, the reception entity may be an MT of the IAB node, the forwarding entity may be a DU of the IAB node, and the next hop node may be a child node of the IAB node. For UL, the reception entity may be a DU of the IAB node, the forwarding entity may be an MT of the IAB node, and the next hop node may be the parent node of the IAB node. In some instances, the overlapping resources at the forwarding entity may start after a successful reception of a packet at the co-located reception entity. Also, the overlapping resources at the forwarding entity may start before the successful reception of a packet at the co-located reception entity. In some instances, one or more second repetition units may belong to a second set of UL or DL resources allocated to a forwarding entity of a node for communication with a next hop node. Also, at least one first resource of a first set of UL or DL resources may overlap with at least one second resource of the second set of UL or DL resources. The resources with repetition units may be allocated for communication based on an average or worst channel condition in order to achieve a target reliability.
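Given the first and second sets of resources just described, the overlap itself reduces to an intersection of slot or mini-slot indices. A minimal sketch with hypothetical indexing:

```python
def overlapping_units(first_start, k1, second_start, k2):
    """Slot/mini-slot indices shared by the reception and forwarding allocations."""
    first = set(range(first_start, first_start + k1))     # reception entity, K1 units
    second = set(range(second_start, second_start + k2))  # forwarding entity, K2 units
    return sorted(first & second)

# Reception allocation covers units 0..3; a 4-unit forwarding allocation starts at 2.
print(overlapping_units(first_start=0, k1=4, second_start=2, k2=4))  # [2, 3]
```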
Due to varying channel conditions in a wireless network, in some cases with more favorable channel conditions, a reception entity may be able to decode a data packet near the middle of the allocated repetition units. By allowing overlapping resources between the first set of resources allocated for the reception entity and the second set of resources allocated for the forwarding entity, the packet may be forwarded to the next hop node immediately after the early termination instance at the reception entity. As a result, latency may be improved significantly over a multi-hop network.

In some aspects, the second set of resources may be allocated for the forwarding entity after the early termination instance at the reception entity. For example, in dynamic DL scheduling via DCI, the scheduling node for the allocation of the second set of resources may correspond to the forwarding entity. In this case, the forwarding entity may allocate the second set of resources via dynamic DCI to the next hop node after the early termination instance at the co-located reception entity. In this case, the starting location of the second set of resources may be dynamically determined, and may be located after the early termination instance at the reception entity. Also, in this case, both the forwarding entity and the next hop node may perform normal communication based on the allocation. In other aspects, the second set of resources may be allocated for the forwarding entity before the early termination instance at the reception entity. For example, the second set of resources may be semi-persistently allocated via DL SPS or an UL configured grant. In another example, in dynamic UL scheduling, the scheduling node for the allocation of the second set of resources may be another node, which may not know the early termination instance of this node. In this case, the second set of resources may start before the early termination instance at the reception entity, and the forwarding entity may skip some of the beginning repetition units of the second set of resources and transmit the data packet to the next hop node via the second resources after the early termination instance at the reception entity.

Additionally, as indicated above, nodes may receive, from a last hop node, a notification flag indicating one or more potentially skipped resources, the one or more potentially skipped resources being skipped by the last hop node at a beginning of a first set of UL or DL resources, where a received UL or DL communication is based on the notification flag. In some examples, the notification flag may be initiated or generated by a last hop node. In some examples, the notification flag may be initiated by a CU of the network and be delivered via a last hop node. Nodes herein may also transmit, to the next hop node, a notification flag indicating one or more potentially skipped resources, the one or more potentially skipped resources being skipped by the forwarding entity at a beginning of the second set of UL or DL resources.

FIG. 13 is a diagram 1300 illustrating example communication between a first node 1302, e.g., an IAB node, at least one second node 1304, e.g., a last hop node, and at least one third node 1306, e.g., a next hop node. At 1310, second node 1304 may transmit a notification flag, e.g., notification flag 1314, indicating one or more potentially skipped resources.
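Before walking through the remaining steps of FIG. 13, it may help to picture what such a notification flag could carry. The dataclass below is purely illustrative; the field names and the carrier strings merely mirror the options described in this disclosure (RRC or F1-AP from a donor CU, MAC-CE, DCI, or UCI from the transmitting node) and are not defined by any specification.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SkippedResourceNotification:
    """Illustrative contents of the notification flag described above."""
    allocation_id: int                   # which resource allocation it qualifies
    floating_start_enabled: bool         # enable/disable the floating starting time
    skipped_units: Optional[int] = None  # optional count of skipped repetition units
    carrier: str = "MAC-CE"              # e.g., "RRC", "F1-AP", "MAC-CE", "DCI", "UCI"

flag = SkippedResourceNotification(allocation_id=7, floating_start_enabled=True,
                                   skipped_units=2, carrier="DCI")
print(flag)
```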
At 1312, first node 1302 may receive, from a last hop node, a notification flag, e.g., notification flag 1314, indicating one or more potentially skipped resources, the one or more potentially skipped resources being skipped by the last hop node at a beginning of a first set of uplink (UL) or downlink (DL) resources, where a received UL or DL communication is based on the notification flag. In some aspects, the node may be an integrated access and backhaul (IAB) node associated with an IAB network, the reception entity corresponding to a mobile termination (MT) of the node or a distributed unit (DU) of the node, and the forwarding entity corresponding to the DU of the node or the MT of the node. When the UL or DL communication is UL communication, the reception entity may correspond to the DU of the node and the forwarding entity may correspond to the MT of the node, the next hop node corresponding to a parent IAB node or an IAB donor. When the UL or DL communication is DL communication, the reception entity may correspond to the MT of the node and the forwarding entity may correspond to the DU of the node, the next hop node corresponding to a child IAB node or a child user equipment (UE). In some instances, upon receiving the notification flag, the reception entity may apply hypothesis testing on a starting location of a first resource of the first set of UL or DL resources, the first resource transmitted by the last hop node, the hypothesis testing being applied while receiving the UL or DL communication from the last hop node. Also, upon receiving the notification flag, the reception entity may apply a pattern of redundancy version (RV) over one or more repetition resource units, the one or more repetition resource units being different from when the notification flag is not received. The notification flag may be received via a radio resource control (RRC) message or an F1 application protocol (F1-AP) message from an integrated access and backhaul (IAB) donor central unit (CU), or via a medium access control (MAC) control element (MAC-CE) or downlink control information (DCI) from the last hop node. The last hop node may be a parent node of the node for DL communication or a child node of the node for UL communication.

At 1320, second node 1304 may transmit UL or DL communication, e.g., communication 1324, including at least one data packet. At 1322, first node 1302 may receive, via a first set of UL or DL resources, UL or DL communication, e.g., communication 1324, including at least one data packet, the first set of UL or DL resources allocated for a reception entity of the node. At 1330, first node 1302 may decode, at the reception entity, the at least one data packet during a decoding period, the decoding period including a decoding start time and a decoding completion time. At 1340, first node 1302 may transmit an acknowledgement (ACK) or negative ACK (NACK), e.g., ACK/NACK 1344, upon decoding the at least one data packet at the reception entity of the node. At 1342, second node 1304 may receive an ACK/NACK, e.g., ACK/NACK 1344. At 1350, first node 1302 may stop reception of at least one remaining first repetition unit of the one or more first repetition units upon successfully decoding the at least one data packet, where the reception of the at least one remaining first repetition unit is stopped at an early termination instance.
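Steps 1330 and 1350 amount to a decode-per-unit loop with early termination, as the next paragraph notes. The toy model below makes the loop explicit; the random draw stands in for actual channel decoding, and all names and probabilities are hypothetical.

```python
import random

def receive_with_early_termination(k1, decode_prob=0.5, seed=0):
    """Attempt a decode after each of up to k1 repetition units (cf. 1330/1350).

    Returns the early termination unit (1-based) if decoding succeeds before
    the allocation ends, else None after all k1 units are consumed.
    """
    rng = random.Random(seed)
    for unit in range(1, k1 + 1):
        if rng.random() < decode_prob:   # stand-in for a successful decode
            return unit                  # stop RX here; send ACK upstream
    return None

print(receive_with_early_termination(k1=4))  # e.g., 3 with this seed
```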
In some aspects, the reception entity may attempt to decode the at least one data packet upon reception of each of the one or more first repetition units. At 1360, first node 1302 may transmit, to the next hop node, a notification flag, e.g., notification flag 1364, indicating one or more potentially skipped resources, the one or more potentially skipped resources being skipped by the forwarding entity at a beginning of the second set of UL or DL resources. At 1362, third node 1306 may receive a notification flag, e.g., notification flag 1364, indicating one or more potentially skipped resources. At 1370, first node 1302 may encode the at least one data packet at the forwarding entity of the node, the at least one data packet being transmitted to the next hop node via the one or more second repetition units after the at least one data packet is encoded, where the one or more second repetition units overlap with the at least one remaining first repetition unit. At 1380, first node 1302 may transmit, via a second set of UL or DL resources, the UL or DL communication, e.g., communication 1384, including the at least one data packet to a next hop node, the second set of UL or DL resources allocated for a forwarding entity of the node, at least one first resource of the first set of UL or DL resources overlapping with at least one second resource of the second set of UL or DL resources. At 1382, third node 1306 may receive UL or DL communication, e.g., communication 1384, including at least one data packet. The first set of UL or DL resources may include one or more first repetition units and the second set of UL or DL resources may include one or more second repetition units. Also, an amount of the one or more first repetition units may be different from an amount of the one or more second repetition units.

In some aspects, the second set of UL or DL resources may be allocated after the decoding completion time. The allocated second set of UL or DL resources may overlap with one or more remaining first resources of the first set of UL or DL resources, the one or more remaining first resources not being used by the reception entity. Also, the second set of UL or DL resources may be allocated before the decoding completion time. The second set of UL or DL resources may be allocated based on an assumption of an earliest possible decoding completion time. Further, at least one second resource of the second set of UL or DL resources may begin prior to the decoding completion time, where the forwarding entity may skip the at least one second resource, and where the forwarding entity may transmit the at least one data packet to the next hop node via a portion of the second set of UL or DL resources that begins after the decoding completion time. In some instances, a number of repetition transmissions performed by the forwarding entity may be equal to a difference between a total number of allocated repetition units of the second set of UL or DL resources and a number of skipped repetition units before the decoding completion time. A number of repetition transmissions performed by the forwarding entity may also be equal to a fixed number. The fixed number may be equal to a difference between a total number of allocated repetition units and a maximum number of skipped repetition units. Additionally, the first set of UL or DL resources may include one or more first slots or mini-slots and the second set of UL or DL resources may include one or more second slots or mini-slots.
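The two repetition-count options just described, the remaining count and the fixed count, can be written down directly. The sketch below uses hypothetical names; note that with the fixed option the total allocation equals the latest starting TX plus the fixed number:

```python
def remaining_count(total_alloc, skipped):
    # Option 1: the TX count varies with the starting time; additional TXs may
    # be scheduled via another dynamic DCI if a reliability target is missed.
    return total_alloc - skipped

def fixed_count(total_alloc, skipped, fixed):
    # Option 2: a fixed TX count regardless of starting time. The latest
    # admissible start satisfies total_alloc = latest_start + fixed.
    latest_start = total_alloc - fixed
    return fixed if skipped <= latest_start else None  # None: start too late

print(remaining_count(total_alloc=8, skipped=3))        # 5 TXs
print(fixed_count(total_alloc=8, skipped=3, fixed=4))   # 4 TXs
print(fixed_count(total_alloc=8, skipped=5, fixed=4))   # None
```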
The at least one data packet may be associated with one or more data packet repetitions or one or more data packet retransmissions. Further, at least one of the first set of UL or DL resources or the second set of UL or DL resources may be configured via DL semi-persistent scheduling (SPS), configured via an UL configured grant, or scheduled via dynamic downlink control information (DCI). FIG.14is a flowchart1400of a method of wireless communication. The method may be performed by a node or base station or a component of a node or base station (e.g., the base station102,180,310, node604,654,704,804,904,954,1004,1104,1204,1302; the apparatus1802; a processing system, which may include the memory376and which may be the entire base station or a component of the base station, such as the antenna(s)320, receiver318RX, the RX processor370, the controller/processor375, and/or the like). The methods described herein may provide a number of benefits, such as improving communication signaling, resource utilization, and/or power savings. At1404, the apparatus may receive, via a first set of uplink (UL) or downlink (DL) resources, UL or DL communication including at least one data packet, the first set of UL or DL resources allocated for a reception entity of the node, as described in connection with the examples inFIGS.4,5A,5B,6A,6B,7,8,9A,9B,10,11,12, and13. For example, node1302may receive, via a first set of UL or DL resources, UL or DL communication including at least one data packet, the first set of UL or DL resources allocated for a reception entity of the node, as described in connection with1322inFIG.13. Further,1404may be performed by determination component1840. At1406, the apparatus may decode, at the reception entity, the at least one data packet during a decoding period, the decoding period including a decoding start time and a decoding completion time, as described in connection with the examples inFIGS.4,5A,5B,6A,6B,7,8,9A,9B,10,11,12, and13. For example, node1302may decode, at the reception entity, the at least one data packet during a decoding period, the decoding period including a decoding start time and a decoding completion time, as described in connection with1330inFIG.13. Further,1406may be performed by determination component1840. At1416, the apparatus may transmit, via a second set of UL or DL resources, the UL or DL communication including the at least one data packet to a next hop node, the second set of UL or DL resources allocated for a forwarding entity of the node, at least one first resource of the first set of UL or DL resources overlapping with at least one second resource of the second set of UL or DL resources, as described in connection with the examples inFIGS.4,5A,5B,6A,6B,7,8,9A,9B,10,11,12, and13. For example, node1302may transmit, via a second set of UL or DL resources, the UL or DL communication including the at least one data packet to a next hop node, the second set of UL or DL resources allocated for a forwarding entity of the node, at least one first resource of the first set of UL or DL resources overlapping with at least one second resource of the second set of UL or DL resources, as described in connection with1380inFIG.13. Further,1416may be performed by determination component1840. The first set of UL or DL resources may include one or more first repetition units and the second set of UL or DL resources include one or more second repetition units, as described in connection with the examples inFIGS.4,5A,5B,6A,6B,7,8,9A,9B,10,11,12, and13. 
Also, an amount of the one or more first repetition units may be different from an amount of the one or more second repetition units, as described in connection with the examples inFIGS.4,5A,5B,6A,6B,7,8,9A,9B,10,11,12, and13. FIG.15is a flowchart1500of a method of wireless communication. The method may be performed by a node or base station or a component of a node or base station (e.g., the base station102,180,310, node604,654,704,804,904,954,1004,1104,1204,1302; the apparatus1802; a processing system, which may include the memory376and which may be the entire base station or a component of the base station, such as the antenna(s)320, receiver318RX, the RX processor370, the controller/processor375, and/or the like). The methods described herein may provide a number of benefits, such as improving communication signaling, resource utilization, and/or power savings. At1502, the apparatus may receive, from a last hop node, a notification flag indicating one or more potentially skipped resources, the one or more potentially skipped resources being skipped by the last hop node at a beginning of a first set of uplink (UL) or downlink (DL) resources, where a received UL or DL communication is based on the notification flag, as described in connection with the examples inFIGS.4,5A,5B,6A,6B,7,8,9A,9B,10,11,12, and13. For example, node1302may receive, from a last hop node, a notification flag indicating one or more potentially skipped resources, the one or more potentially skipped resources being skipped by the last hop node at a beginning of a first set of uplink (UL) or downlink (DL) resources, where a received UL or DL communication is based on the notification flag, as described in connection with1312inFIG.13. Further,1502may be performed by determination component1840. In some aspects, the node may be an integrated access and backhaul (IAB) node associated with an IAB network, the reception entity corresponding to a mobile termination (MT) of the node or a distributed unit (DU) of the node, and the forwarding entity corresponding to the DU of the node or the MT of the node, as described in connection with the examples inFIGS.4,5A,5B,6A,6B,7,8,9A,9B,10,11,12, and13. When the UL or DL communication is UL communication, the reception entity may correspond to the DU of the node and the forwarding entity may correspond to the MT of the node, the next hop node corresponding to a parent IAB node or an IAB donor, as described in connection with the examples inFIGS.4,5A,5B,6A,6B,7,8,9A,9B,10,11,12, and13. When the UL or DL communication is DL communication, the reception entity may correspond to the MT of the node and the forwarding entity may correspond to the DU of the node, the next hop node corresponding to a child IAB node or a child user equipment (UE), as described in connection with the examples inFIGS.4,5A,5B,6A,6B,7,8,9A,9B,10,11,12, and13. In some instances, upon receiving the notification flag, the reception entity may apply hypothesis testing on a starting location of a first resource of the first set of UL or DL resources, the first resource transmitted by the last hop node, the hypothesis testing being applied while receiving the UL or DL communication from the last hop node, as described in connection with the examples inFIGS.4,5A,5B,6A,6B,7,8,9A,9B,10,11,12, and13. 
Also, upon receiving the notification flag, the reception entity may apply a pattern of redundancy version (RV) over one or more repetition resource units, the one or more repetition resource units being different when the notification flag is not received, as described in connection with the examples inFIGS.4,5A,5B,6A,6B,7,8,9A,9B,10,11,12, and13. The notification flag may be received via a radio resource control (RRC) message or an F1 application protocol (F1-AP) message from an integrated access and backhaul (IAB) donor central unit (CU), or the notification flag is received via a medium access control (MAC) control element (MAC-CE) or downlink control information (DCI) from the last hop node, as described in connection with the examples inFIGS.4,5A,5B,6A,6B,7,8,9A,9B,10,11,12, and13. The last hop node may be a parent node of the node for DL communication or a child node of the node for UL communication, as described in connection with the examples inFIGS.4,5A,5B,6A,6B,7,8,9A,9B,10,11,12, and13. At1504, the apparatus may receive, via a first set of UL or DL resources, UL or DL communication including at least one data packet, the first set of UL or DL resources allocated for a reception entity of the node, as described in connection with the examples inFIGS.4,5A,5B,6A,6B,7,8,9A,9B,10,11,12, and13. For example, node1302may receive, via a first set of UL or DL resources, UL or DL communication including at least one data packet, the first set of UL or DL resources allocated for a reception entity of the node, as described in connection with1322inFIG.13. Further,1504may be performed by determination component1840. At1506, the apparatus may decode, at the reception entity, the at least one data packet during a decoding period, the decoding period including a decoding start time and a decoding completion time, as described in connection with the examples inFIGS.4,5A,5B,6A,6B,7,8,9A,9B,10,11,12, and13. For example, node1302may decode, at the reception entity, the at least one data packet during a decoding period, the decoding period including a decoding start time and a decoding completion time, as described in connection with1330inFIG.13. Further,1506may be performed by determination component1840. At1508, the apparatus may transmit an acknowledgement (ACK) or negative ACK (NACK) upon decoding the at least one data packet at the reception entity of the node, as described in connection with the examples inFIGS.4,5A,5B,6A,6B,7,8,9A,9B,10,11,12, and13. For example, node1302may transmit an acknowledgement (ACK) or negative ACK (NACK) upon decoding the at least one data packet at the reception entity of the node, as described in connection with1340inFIG.13. Further,1508may be performed by determination component1840. At1510, the apparatus may stop reception of at least one remaining first repetition unit of the one or more first repetition units upon successfully decoding the at least one data packet, where the reception of the at least one remaining first repetition unit is stopped at an early termination instance, as described in connection with the examples inFIGS.4,5A,5B,6A,6B,7,8,9A,9B,10,11,12, and13. For example, node1302may stop reception of at least one remaining first repetition unit of the one or more first repetition units upon successfully decoding the at least one data packet, where the reception of the at least one remaining first repetition unit is stopped at an early termination instance, as described in connection with1350inFIG.13. 
Further,1510may be performed by determination component1840. In some aspects, the reception entity may attempt to decode the at least one data packet upon reception of each of the one or more first repetition units, as described in connection with the examples inFIGS.4,5A,5B,6A,6B,7,8,9A,9B,10,11,12, and13. At1512, the apparatus may transmit, to the next hop node, a notification flag indicating one or more potentially skipped resources, the one or more potentially skipped resources being skipped by the forwarding entity at a beginning of the second set of UL or DL resources, as described in connection with the examples inFIGS.4,5A,5B,6A,6B,7,8,9A,9B,10,11,12, and13. For example, node1302may transmit, to the next hop node, a notification flag indicating one or more potentially skipped resources, the one or more potentially skipped resources being skipped by the forwarding entity at a beginning of the second set of UL or DL resources, as described in connection with1360inFIG.13. Further,1512may be performed by determination component1840. At1514, the apparatus may encode the at least one data packet at the forwarding entity of the node, the at least one data packet being transmitted to the next hop node via the one or more second repetition units after the at least one data packet is encoded, where the one or more second repetition units overlap with the at least one remaining first repetition unit, as described in connection with the examples inFIGS.4,5A,5B,6A,6B,7,8,9A,9B,10,11,12, and13. For example, node1302may encode the at least one data packet at the forwarding entity of the node, the at least one data packet being transmitted to the next hop node via the one or more second repetition units after the at least one data packet is encoded, where the one or more second repetition units overlap with the at least one remaining first repetition unit, as described in connection with1370inFIG.13. Further,1514may be performed by determination component1840. At1516, the apparatus may transmit, via a second set of UL or DL resources, the UL or DL communication including the at least one data packet to a next hop node, the second set of UL or DL resources allocated for a forwarding entity of the node, at least one first resource of the first set of UL or DL resources overlapping with at least one second resource of the second set of UL or DL resources, as described in connection with the examples inFIGS.4,5A,5B,6A,6B,7,8,9A,9B,10,11,12, and13. For example, node1302may transmit, via a second set of UL or DL resources, the UL or DL communication including the at least one data packet to a next hop node, the second set of UL or DL resources allocated for a forwarding entity of the node, at least one first resource of the first set of UL or DL resources overlapping with at least one second resource of the second set of UL or DL resources, as described in connection with1380inFIG.13. Further,1516may be performed by determination component1840. The first set of UL or DL resources may include one or more first repetition units and the second set of UL or DL resources include one or more second repetition units, as described in connection with the examples inFIGS.4,5A,5B,6A,6B,7,8,9A,9B,10,11,12, and13. Also, an amount of the one or more first repetition units may be different from an amount of the one or more second repetition units, as described in connection with the examples inFIGS.4,5A,5B,6A,6B,7,8,9A,9B,10,11,12, and13. 
In some aspects, the second set of UL or DL resources may be allocated after the decoding completion time, as described in connection with the examples inFIGS.4,5A,5B,6A,6B,7,8,9A,9B,10,11,12, and13. The allocated second set of UL or DL resources may overlap with one or more remaining first resources of the first set of UL or DL resources, the one or more remaining first resources not being used by the reception entity, as described in connection with the examples inFIGS.4,5A,5B,6A,6B,7,8,9A,9B,10,11,12, and13. Also, the second set of UL or DL resources may be allocated before the decoding completion time, as described in connection with the examples inFIGS.4,5A,5B,6A,6B,7,8,9A,9B,10,11,12, and13. The second set of UL or DL resources may be allocated based on an assumption of an earliest possible decoding completion time, as described in connection with the examples inFIGS.4,5A,5B,6A,6B,7,8,9A,9B,10,11,12, and13. Further, at least one second resource of the second set of UL or DL resources may begin prior to the decoding completion time, where the forwarding entity may skip the at least one second resource, where the forwarding entity may transmit the at least one data packet to the next hop node via a portion of the second set of UL or DL resources that begins after the decoding completion time, as described in connection with the examples inFIGS.4,5A,5B,6A,6B,7,8,9A,9B,10,11,12, and13. In some instances, a number of repetition transmissions performed by the forwarding entity may be equal to a difference between a total number of allocated repetition units of the second set of UL or DL resources and a number of skipped repetition units before the decoding completion time, as described in connection with the examples inFIGS.4,5A,5B,6A,6B,7,8,9A,9B,10,11,12, and13. A number of repetition transmissions performed by the forwarding entity may also be equal to a fixed number, as described in connection with the examples inFIGS.4,5A,5B,6A,6B,7,8,9A,9B,10,11,12, and13. The fixed number may be equal to a difference between a total number of allocated repetition units and a maximum number of skipped repetition units, as described in connection with the examples inFIGS.4,5A,5B,6A,6B,7,8,9A,9B,10,11,12, and13. Additionally, the first set of UL or DL resources may include one or more first slots or mini-slots and the second set of UL or DL resources may include one or more second slots or mini-slots, as described in connection with the examples inFIGS.4,5A,5B,6A,6B,7,8,9A,9B,10,11,12, and13. The at least one data packet may be associated with one or more data packet repetitions or one or more data packet retransmissions, as described in connection with the examples inFIGS.4,5A,5B,6A,6B,7,8,9A,9B,10,11,12, and13. Further, at least one of the first set of UL or DL resources or the second set of UL or DL resources may be configured via DL semi-persistent scheduling (SPS), configured via an UL configured grant, or scheduled via dynamic downlink control information (DCI), as described in connection with the examples inFIGS.4,5A,5B,6A,6B,7,8,9A,9B,10,11,12, and13. FIG.16is a flowchart1600of a method of wireless communication. 
The method may be performed by a node or base station or a component of a node or base station (e.g., the base station102,180,310, node604,654,704,804,904,954,1004,1104,1204,1304; the apparatus1902; a processing system, which may include the memory376and which may be the entire base station or a component of the base station, such as the antenna(s)320, receiver318RX, the RX processor370, the controller/processor375, and/or the like). The methods described herein may provide a number of benefits, such as improving communication signaling, resource utilization, and/or power savings. At1602, the apparatus may transmit, to the second node, a notification flag indicating one or more potentially skipped resources, the one or more potentially skipped resources being skipped by the first node at a beginning of the first set of UL or DL resources, where the transmitted UL or DL communication is based on the notification flag, as described in connection with the examples inFIGS.4,5A,5B,6A,6B,7,8,9A,9B,10,11,12, and13. For example, node1304may transmit, to the second node, a notification flag indicating one or more potentially skipped resources, the one or more potentially skipped resources being skipped by the first node at a beginning of the first set of UL or DL resources, where the transmitted UL or DL communication is based on the notification flag, as described in connection with1310inFIG.13. Further,1602may be performed by determination component1940. At1604, the apparatus may transmit, to a second node via a first set of uplink (UL) or downlink (DL) resources, UL or DL communication including at least one data packet, the first set of UL or DL resources allocated for a reception entity of the second node, as described in connection with the examples inFIGS.4,5A,5B,6A,6B,7,8,9A,9B,10,11,12, and13. For example, node1304may transmit, to a second node via a first set of uplink (UL) or downlink (DL) resources, UL or DL communication including at least one data packet, the first set of UL or DL resources allocated for a reception entity of the second node, as described in connection with1320inFIG.13. Further,1604may be performed by determination component1940. At1606, the apparatus may receive, from the second node, an acknowledgement (ACK) or negative ACK (NACK) based on the at least one data packet being decoded at the reception entity of the second node, as described in connection with the examples inFIGS.4,5A,5B,6A,6B,7,8,9A,9B,10,11,12, and13. For example, node1304may receive, from the second node, an acknowledgement (ACK) or negative ACK (NACK) based on the at least one data packet being decoded at the reception entity of the second node, as described in connection with1342inFIG.13. Further,1606may be performed by determination component1940. FIG.17is a flowchart1700of a method of wireless communication. The method may be performed by a node or base station or a component of a node or base station (e.g., the base station102,180,310, node604,654,704,804,904,954,1004,1104,1204,1306; the apparatus2002; a processing system, which may include the memory376and which may be the entire base station or a component of the base station, such as the antenna(s)320, receiver318RX, the RX processor370, the controller/processor375, and/or the like). The methods described herein may provide a number of benefits, such as improving communication signaling, resource utilization, and/or power savings. 
At1702, the apparatus may receive, from a second node, a notification flag indicating one or more potentially skipped resources, the one or more potentially skipped resources being skipped by a forwarding entity of the second node at a beginning of a first set of uplink (UL) or downlink (DL) resources, as described in connection with the examples inFIGS.4,5A,5B,6A,6B,7,8,9A,9B,10,11,12, and13. For example, node1306may receive, from a second node, a notification flag indicating one or more potentially skipped resources, the one or more potentially skipped resources being skipped by a forwarding entity of the second node at a beginning of a first set of uplink (UL) or downlink (DL) resources, as described in connection with1362inFIG.13. Further,1702may be performed by determination component2040. At1704, the apparatus may receive, from the second node via the first set of UL or DL resources, the UL or DL communication including at least one data packet, the first set of UL or DL resources allocated for the forwarding entity of the second node, where at least one second resource of a second set of UL or DL resources overlaps with at least one first resource of the first set of UL or DL resources, as described in connection with the examples inFIGS.4,5A,5B,6A,6B,7,8,9A,9B,10,11,12, and13. For example, node1306may receive, from the second node via the first set of UL or DL resources, the UL or DL communication including at least one data packet, the first set of UL or DL resources allocated for the forwarding entity of the second node, where at least one second resource of a second set of UL or DL resources overlaps with at least one first resource of the first set of UL or DL resources, as described in connection with1382inFIG.13. Further,1704may be performed by determination component2040. FIG.18is a diagram1800illustrating an example of a hardware implementation for an apparatus1802. The apparatus1802is a base station and includes a baseband unit1804. The baseband unit1804may communicate through a cellular RF transceiver with the UE104. The baseband unit1804may include a computer-readable medium/memory. The baseband unit1804is responsible for general processing, including the execution of software stored on the computer-readable medium/memory. The software, when executed by the baseband unit1804, causes the baseband unit1804to perform the various functions described supra. The computer-readable medium/memory may also be used for storing data that is manipulated by the baseband unit1804when executing software. The baseband unit1804further includes a reception component1830, a communication manager1832, and a transmission component1834. The communication manager1832includes the one or more illustrated components. The components within the communication manager1832may be stored in the computer-readable medium/memory and/or configured as hardware within the baseband unit1804. The baseband unit1804may be a component of the BS310and may include the memory376and/or at least one of the TX processor316, the RX processor370, and the controller/processor375. 
The communication manager1832includes a determination component1840that is configured to receive, from a last hop node, a notification flag indicating one or more potentially skipped resources, the one or more potentially skipped resources being skipped by the last hop node at a beginning of the first set of UL or DL resources, where the received UL or DL communication is based on the notification flag, e.g., as described in connection with step1502above. Determination component1840may also be configured to receive, via a first set of uplink (UL) or downlink (DL) resources, UL or DL communication including at least one data packet, the first set of UL or DL resources allocated for a reception entity of the node, e.g., as described in connection with step1504above. Determination component1840may also be configured to decode, at the reception entity, the at least one data packet during a decoding period, the decoding period including a decoding start time and a decoding completion time, e.g., as described in connection with step1506above. Determination component1840may also be configured to transmit an acknowledgement (ACK) or negative ACK (NACK) upon decoding the at least one data packet at the reception entity of the node, e.g., as described in connection with step1508above. Determination component1840may also be configured to stop reception of at least one remaining first repetition unit of the one or more first repetition units upon successfully decoding the at least one data packet, where the reception of the at least one remaining first repetition unit is stopped at an early termination instance, e.g., as described in connection with step1510above. Determination component1840may also be configured to transmit, to the next hop node, a notification flag indicating one or more potentially skipped resources, the one or more potentially skipped resources being skipped by the forwarding entity at a beginning of the second set of UL or DL resources, e.g., as described in connection with step1512above. Determination component1840may also be configured to encode the at least one data packet at the forwarding entity of the node, the at least one data packet being transmitted to the next hop node via the one or more second repetition units after the at least one data packet is encoded, where the one or more second repetition units overlap with the at least one remaining first repetition unit, e.g., as described in connection with step1514above. Determination component1840may also be configured to transmit, via a second set of UL or DL resources, the UL or DL communication including the at least one data packet to a next hop node, the second set of UL or DL resources allocated for a forwarding entity of the node, at least one first resource of the first set of UL or DL resources overlapping with at least one second resource of the second set of UL or DL resources, e.g., as described in connection with step1516above. The apparatus may include additional components that perform each of the blocks of the algorithm in the aforementioned flowcharts ofFIGS.13-15. As such, each block in the aforementioned flowcharts ofFIGS.13-15may be performed by a component and the apparatus may include one or more of those components. 
The components may be one or more hardware components specifically configured to carry out the stated processes/algorithm, implemented by a processor configured to perform the stated processes/algorithm, stored within a computer-readable medium for implementation by a processor, or some combination thereof. In one configuration, the apparatus1802, and in particular the baseband unit1804, includes means for receiving, via a first set of uplink (UL) or downlink (DL) resources, UL or DL communication including at least one data packet, the first set of UL or DL resources allocated for a reception entity of the node. The apparatus1802may also include means for decoding, at the reception entity, the at least one data packet during a decoding period, the decoding period including a decoding start time and a decoding completion time. The apparatus1802may also include means for transmitting, via a second set of UL or DL resources, the UL or DL communication including the at least one data packet to a next hop node, the second set of UL or DL resources allocated for a forwarding entity of the node, at least one first resource of the first set of UL or DL resources overlapping with at least one second resource of the second set of UL or DL resources. The apparatus1802may also include means for receiving, from a last hop node, a notification flag indicating one or more potentially skipped resources, the one or more potentially skipped resources being skipped by the last hop node at a beginning of the first set of UL or DL resources, where the received UL or DL communication is based on the notification flag. The apparatus1802may also include means for transmitting an acknowledgement (ACK) or negative ACK (NACK) upon decoding the at least one data packet at the reception entity of the node. The apparatus1802may also include means for stopping reception of at least one remaining first repetition unit of the one or more first repetition units upon successfully decoding the at least one data packet, where the reception of the at least one remaining first repetition unit is stopped at an early termination instance. The apparatus1802may also include means for transmitting, to the next hop node, a notification flag indicating one or more potentially skipped resources, the one or more potentially skipped resources being skipped by the forwarding entity at a beginning of the second set of UL or DL resources. The apparatus1802may also include means for encoding the at least one data packet at the forwarding entity of the node, the at least one data packet being transmitted to the next hop node via the one or more second repetition units after the at least one data packet is encoded, where the one or more second repetition units overlap with the at least one remaining first repetition unit. The aforementioned means may be one or more of the aforementioned components of the apparatus1802configured to perform the functions recited by the aforementioned means. As described supra, the apparatus1802may include the TX Processor316, the RX Processor370, and the controller/processor375. As such, in one configuration, the aforementioned means may be the TX Processor316, the RX Processor370, and the controller/processor375configured to perform the functions recited by the aforementioned means. FIG.19is a diagram1900illustrating an example of a hardware implementation for an apparatus1902. The apparatus1902is a base station and includes a baseband unit1904. 
The baseband unit1904may communicate through a cellular RF transceiver with the UE104. The baseband unit1904may include a computer-readable medium/memory. The baseband unit1904is responsible for general processing, including the execution of software stored on the computer-readable medium/memory. The software, when executed by the baseband unit1904, causes the baseband unit1904to perform the various functions described supra. The computer-readable medium/memory may also be used for storing data that is manipulated by the baseband unit1904when executing software. The baseband unit1904further includes a reception component1930, a communication manager1932, and a transmission component1934. The communication manager1932includes the one or more illustrated components. The components within the communication manager1932may be stored in the computer-readable medium/memory and/or configured as hardware within the baseband unit1904. The baseband unit1904may be a component of the BS310and may include the memory376and/or at least one of the TX processor316, the RX processor370, and the controller/processor375. The communication manager1932includes a determination component1940that is configured to transmit, to the second node, a notification flag indicating one or more potentially skipped resources, the one or more potentially skipped resources being skipped by the first node at a beginning of the first set of UL or DL resources, where the transmitted UL or DL communication is based on the notification flag, e.g., as described in connection with step1602above. Determination component1940may also be configured to transmit, to a second node via a first set of uplink (UL) or downlink (DL) resources, UL or DL communication including at least one data packet, the first set of UL or DL resources allocated for a reception entity of the second node, e.g., as described in connection with step1604above. Determination component1940may also be configured to receive, from the second node, an acknowledgement (ACK) or negative ACK (NACK) based on the at least one data packet being decoded at the reception entity of the second node, e.g., as described in connection with step1606above. The apparatus may include additional components that perform each of the blocks of the algorithm in the aforementioned flowcharts ofFIGS.13and16. As such, each block in the aforementioned flowcharts ofFIGS.13and16may be performed by a component and the apparatus may include one or more of those components. The components may be one or more hardware components specifically configured to carry out the stated processes/algorithm, implemented by a processor configured to perform the stated processes/algorithm, stored within a computer-readable medium for implementation by a processor, or some combination thereof. In one configuration, the apparatus1902, and in particular the baseband unit1904, includes means for transmitting, to the second node, a notification flag indicating one or more potentially skipped resources, the one or more potentially skipped resources being skipped by the first node at a beginning of the first set of UL or DL resources, where the transmitted UL or DL communication is based on the notification flag. The apparatus1902may also include means for transmitting, to a second node via a first set of uplink (UL) or downlink (DL) resources, UL or DL communication including at least one data packet, the first set of UL or DL resources allocated for a reception entity of the second node. 
The apparatus1902may also include means for receiving, from the second node, an acknowledgement (ACK) or negative ACK (NACK) based on the at least one data packet being decoded at the reception entity of the second node. The aforementioned means may be one or more of the aforementioned components of the apparatus1902configured to perform the functions recited by the aforementioned means. As described supra, the apparatus1902may include the TX Processor316, the RX Processor370, and the controller/processor375. As such, in one configuration, the aforementioned means may be the TX Processor316, the RX Processor370, and the controller/processor375configured to perform the functions recited by the aforementioned means. FIG.20is a diagram2000illustrating an example of a hardware implementation for an apparatus2002. The apparatus2002is a base station and includes a baseband unit2004. The baseband unit2004may communicate through a cellular RF transceiver with the UE104. The baseband unit2004may include a computer-readable medium/memory. The baseband unit2004is responsible for general processing, including the execution of software stored on the computer-readable medium/memory. The software, when executed by the baseband unit2004, causes the baseband unit2004to perform the various functions described supra. The computer-readable medium/memory may also be used for storing data that is manipulated by the baseband unit2004when executing software. The baseband unit2004further includes a reception component2030, a communication manager2032, and a transmission component2034. The communication manager2032includes the one or more illustrated components. The components within the communication manager2032may be stored in the computer-readable medium/memory and/or configured as hardware within the baseband unit2004. The baseband unit2004may be a component of the BS310and may include the memory376and/or at least one of the TX processor316, the RX processor370, and the controller/processor375. The communication manager2032includes a determination component2040that is configured to receive, from a second node, a notification flag indicating one or more potentially skipped resources, the one or more potentially skipped resources being skipped by a forwarding entity of the second node at a beginning of a first set of uplink (UL) or downlink (DL) resources, e.g., as described in connection with step1702above. Determination component2040may also be configured to receive, from the second node via the first set of UL or DL resources, the UL or DL communication including at least one data packet, the first set of UL or DL resources allocated for the forwarding entity of the second node, where at least one second resource of a second set of UL or DL resources overlaps with at least one first resource of the first set of UL or DL resources, e.g., as described in connection with step1704above. The apparatus may include additional components that perform each of the blocks of the algorithm in the aforementioned flowcharts ofFIGS.13and17. As such, each block in the aforementioned flowcharts ofFIGS.13and17may be performed by a component and the apparatus may include one or more of those components. The components may be one or more hardware components specifically configured to carry out the stated processes/algorithm, implemented by a processor configured to perform the stated processes/algorithm, stored within a computer-readable medium for implementation by a processor, or some combination thereof. 
In one configuration, the apparatus2002, and in particular the baseband unit2004, includes means for receiving, from a second node, a notification flag indicating one or more potentially skipped resources, the one or more potentially skipped resources being skipped by a forwarding entity of the second node at a beginning of a first set of uplink (UL) or downlink (DL) resources. The apparatus2002may also include means for receiving, from the second node via the first set of UL or DL resources, the UL or DL communication including at least one data packet, the first set of UL or DL resources allocated for the forwarding entity of the second node, where at least one second resource of a second set of UL or DL resources overlaps with at least one first resource of the first set of UL or DL resources. The aforementioned means may be one or more of the aforementioned components of the apparatus2002configured to perform the functions recited by the aforementioned means. As described supra, the apparatus2002may include the TX Processor316, the RX Processor370, and the controller/processor375. As such, in one configuration, the aforementioned means may be the TX Processor316, the RX Processor370, and the controller/processor375configured to perform the functions recited by the aforementioned means. It is understood that the specific order or hierarchy of blocks in the processes/flowcharts disclosed is an illustration of example approaches. Based upon design preferences, it is understood that the specific order or hierarchy of blocks in the processes/flowcharts may be rearranged. Further, some blocks may be combined or omitted. The accompanying method claims present elements of the various blocks in a sample order, and are not meant to be limited to the specific order or hierarchy presented. The previous description is provided to enable any person skilled in the art to practice the various aspects described herein. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects. Thus, the claims are not intended to be limited to the aspects shown herein, but are to be accorded the full scope consistent with the language of the claims, wherein reference to an element in the singular is not intended to mean "one and only one" unless specifically so stated, but rather "one or more." The word "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any aspect described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other aspects. Unless specifically stated otherwise, the term "some" refers to one or more. Combinations such as "at least one of A, B, or C," "one or more of A, B, or C," "at least one of A, B, and C," "one or more of A, B, and C," and "A, B, C, or any combination thereof" include any combination of A, B, and/or C, and may include multiples of A, multiples of B, or multiples of C. Specifically, combinations such as "at least one of A, B, or C," "one or more of A, B, or C," "at least one of A, B, and C," "one or more of A, B, and C," and "A, B, C, or any combination thereof" may be A only, B only, C only, A and B, A and C, B and C, or A and B and C, where any such combinations may contain one or more member or members of A, B, or C.
All structural and functional equivalents to the elements of the various aspects described throughout this disclosure that are known or later come to be known to those of ordinary skill in the art are expressly incorporated herein by reference and are intended to be encompassed by the claims. Moreover, nothing disclosed herein is intended to be dedicated to the public regardless of whether such disclosure is explicitly recited in the claims. The words “module,” “mechanism,” “element,” “device,” and the like may not be a substitute for the word “means.” As such, no claim element is to be construed as a means plus function unless the element is expressly recited using the phrase “means for.” The following aspects are illustrative only and may be combined with other aspects or teachings described herein, without limitation. Aspect 1 is an apparatus for wireless communication at a node including at least one processor coupled to a memory and configured to: receive, via a first set of uplink (UL) or downlink (DL) resources, UL or DL communication including at least one data packet, the first set of UL or DL resources allocated for a reception entity of the node; decode, at the reception entity, the at least one data packet during a decoding period, the decoding period including a decoding start time and a decoding completion time; and transmit, via a second set of UL or DL resources, the UL or DL communication including the at least one data packet to a next hop node, the second set of UL or DL resources allocated for a forwarding entity of the node, at least one first resource of the first set of UL or DL resources overlapping with at least one second resource of the second set of UL or DL resources. Aspect 2 is the apparatus of aspect 1, where the first set of UL or DL resources include one or more first repetition units and the second set of UL or DL resources include one or more second repetition units. Aspect 3 is the apparatus of any of aspects 1 and 2, where an amount of the one or more first repetition units is different from an amount of the one or more second repetition units. Aspect 4 is the apparatus of any of aspects 1 to 3, where the reception entity attempts to decode the at least one data packet upon reception of each of the one or more first repetition units, where the at least one processor is further configured to: stop reception of at least one remaining first repetition unit of the one or more first repetition units upon successfully decoding the at least one data packet, where the reception of the at least one remaining first repetition unit is stopped at an early termination instance. Aspect 5 is the apparatus of any of aspects 1 to 4, where the at least one processor is further configured to: encode the at least one data packet at the forwarding entity of the node, the at least one data packet being transmitted to the next hop node via the one or more second repetition units after the at least one data packet is encoded, where the one or more second repetition units overlap with the at least one remaining first repetition unit. Aspect 6 is the apparatus of any of aspects 1 to 5, where the node is an integrated access and backhaul (IAB) node associated with an IAB network, the reception entity corresponding to a mobile termination (MT) of the node or a distributed unit (DU) of the node, and the forwarding entity corresponding to the DU of the node or the MT of the node. 
Aspect 7 is the apparatus of any of aspects 1 to 6, where the UL or DL communication is UL communication, the reception entity corresponding to the DU of the node and the forwarding entity corresponding to the MT of the node, the next hop node corresponding to a parent IAB node or an IAB donor. Aspect 8 is the apparatus of any of aspects 1 to 7, where the UL or DL communication is DL communication, the reception entity corresponding to the MT of the node and the forwarding entity corresponding to the DU of the node, the next hop node corresponding to a child IAB node or a child user equipment (UE). Aspect 9 is the apparatus of any of aspects 1 to 8, where the second set of UL or DL resources is allocated after the decoding completion time. Aspect 10 is the apparatus of any of aspects 1 to 9, where the allocated second set of UL or DL resources overlap with one or more remaining first resources of the first set of UL or DL resources, the one or more remaining first resources not being used by the reception entity. Aspect 11 is the apparatus of any of aspects 1 to 10, where the second set of UL or DL resources is allocated before the decoding completion time. Aspect 12 is the apparatus of any of aspects 1 to 11, where the second set of UL or DL resources is allocated based on an assumption of an earliest possible decoding completion time. Aspect 13 is the apparatus of any of aspects 1 to 12, where at least one second resource of the second set of UL or DL resources begins prior to the decoding completion time, the forwarding entity skipping the at least one second resource, where the forwarding entity transmits the at least one data packet to the next hop node via a portion of the second set of UL or DL resources that begins after the decoding completion time. Aspect 14 is the apparatus of any of aspects 1 to 13, where the at least one processor is further configured to: transmit, to the next hop node, a notification flag indicating one or more potentially skipped resources, the one or more potentially skipped resources being skipped by the forwarding entity at a beginning of the second set of UL or DL resources. Aspect 15 is the apparatus of any of aspects 1 to 14, where the at least one processor is further configured to: receive, from a last hop node, a notification flag indicating one or more potentially skipped resources, the one or more potentially skipped resources being skipped by the last hop node at a beginning of the first set of UL or DL resources, where the received UL or DL communication is based on the notification flag. Aspect 16 is the apparatus of any of aspects 1 to 15, where, upon receiving the notification flag, the reception entity applies hypothesis testing on a starting location of a first resource of the first set of UL or DL resources, the first resource transmitted by the last hop node, the hypothesis testing being applied while receiving the UL or DL communication from the last hop node. Aspect 17 is the apparatus of any of aspects 1 to 16, where, upon receiving the notification flag, the reception entity applies a pattern of redundancy version (RV) over one or more repetition resource units, the one or more repetition resource units being different when the notification flag is not received. 
Aspect 18 is the apparatus of any of aspects 1 to 17, where the notification flag is received via a radio resource control (RRC) message or an F1 application protocol (F1-AP) message from an integrated access and backhaul (IAB) donor central unit (CU), or the notification flag is received via a medium access control (MAC) control element (MAC-CE) or downlink control information (DCI) from the last hop node. Aspect 19 is the apparatus of any of aspects 1 to 18, where the last hop node is a parent node of the node for DL communication or a child node of the node for UL communication. Aspect 20 is the apparatus of any of aspects 1 to 19, where a number of repetition transmissions performed by the forwarding entity is equal to a difference between a total number of allocated repetition units of the second set of UL or DL resources and a number of skipped repetition units before the decoding completion time. Aspect 21 is the apparatus of any of aspects 1 to 20, where a number of repetition transmissions performed by the forwarding entity is equal to a fixed number. Aspect 22 is the apparatus of any of aspects 1 to 21, where the fixed number is equal to a difference between a total number of allocated repetition units and a maximum number of skipped repetition units. Aspect 23 is the apparatus of any of aspects 1 to 22, where the at least one processor is further configured to: transmit an acknowledgement (ACK) or negative ACK (NACK) upon decoding the at least one data packet at the reception entity of the node. Aspect 24 is the apparatus of any of aspects 1 to 23, where the first set of UL or DL resources includes one or more first slots or mini-slots and the second set of UL or DL resources includes one or more second slots or mini-slots. Aspect 25 is the apparatus of any of aspects 1 to 24, where the at least one data packet is associated with one or more data packet repetitions or one or more data packet retransmissions. Aspect 26 is the apparatus of any of aspects 1 to 25, where at least one of the first set of UL or DL resources or the second set of UL or DL resources is configured via DL semi-persistent scheduling (SPS), configured via an UL configured grant, or scheduled via dynamic downlink control information (DCI). Aspect 27 is the apparatus of any of aspects 1 to 26, further including a transceiver or an antenna coupled to the at least one processor. Aspect 28 is an apparatus for wireless communication at a first node including at least one processor coupled to a memory and configured to: transmit, to a second node via a first set of uplink (UL) or downlink (DL) resources, UL or DL communication including at least one data packet, the first set of UL or DL resources allocated for a reception entity of the second node; and receive, from the second node, an acknowledgement (ACK) or negative ACK (NACK) based on the at least one data packet being decoded at the reception entity of the second node. Aspect 29 is the apparatus of aspect 28, where the at least one processor is further configured to: transmit, to the second node, a notification flag indicating one or more potentially skipped resources, the one or more potentially skipped resources being skipped by the first node at a beginning of the first set of UL or DL resources, where the transmitted UL or DL communication is based on the notification flag. 
Aspect 30 is an apparatus for wireless communication at a first node including at least one processor coupled to a memory and configured to: receive, from a second node, a notification flag indicating one or more potentially skipped resources, the one or more potentially skipped resources being skipped by a forwarding entity of the second node at a beginning of a first set of uplink (UL) or downlink (DL) resources; and receive, from the second node via the first set of UL or DL resources, the UL or DL communication including at least one data packet, the first set of UL or DL resources allocated for the forwarding entity of the second node, where at least one second resource of a second set of UL or DL resources overlaps with at least one first resource of the first set of UL or DL resources. Aspect 31 is a method of wireless communication for implementing any of aspects 1 to 30. Aspect 32 is an apparatus for wireless communication including means for implementing any of aspects 1 to 30. Aspect 33 is a computer-readable medium storing computer executable code, where the code when executed by a processor causes the processor to implement any of aspects 1 to 30.
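For illustration only, the repetition-unit accounting described in Aspects 13 and 20 through 22 may be sketched in Python as follows. This is a minimal sketch under assumed simplifications (one integer tick per repetition unit, and hypothetical function and parameter names); it is not the claimed implementation.

# Minimal sketch of the repetition-unit accounting of Aspects 13 and 20-22.
# All names and the one-tick-per-unit timing model are illustrative assumptions.
def forwarded_repetitions(total_allocated: int,
                          decoding_completion_tick: int,
                          second_set_start_tick: int,
                          fixed_mode: bool = False,
                          max_skipped: int = 0):
    """Return (skipped_units, transmitted_repetitions) for the forwarding entity."""
    # Units of the second set that begin before the decoding completion time
    # are skipped by the forwarding entity (Aspect 13).
    skipped = max(0, decoding_completion_tick - second_set_start_tick)
    skipped = min(skipped, total_allocated)  # cannot skip more units than allocated
    if fixed_mode:
        # Aspects 21-22: a fixed number of repetitions regardless of actual skips.
        transmitted = total_allocated - max_skipped
    else:
        # Aspect 20: allocated units minus the units skipped before completion.
        transmitted = total_allocated - skipped
    return skipped, transmitted

# Example: 8 allocated units, second set starting at tick 10, decoding
# completing at tick 12 -> 2 units skipped, 6 repetitions transmitted.
print(forwarded_repetitions(8, decoding_completion_tick=12, second_set_start_tick=10))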
To facilitate understanding, identical reference numerals have been used, where possible, to designate identical elements that are common to the figures. It is contemplated that elements disclosed in one aspect may be beneficially utilized on other aspects without specific recitation.
DETAILED DESCRIPTION
Aspects of the present disclosure provide apparatus, methods, processing systems, and computer-readable media for techniques for reducing ambiguity of HARQ feedback. In certain aspects, a UE is configured to provide feedback to a transmitting device (e.g., a base station (BS)), indicating whether the UE has successfully received and decoded a transmission sent from the transmitting device. In certain aspects, the feedback is one or more of an acknowledgement (ACK) indicating the UE has successfully received and decoded the transmission and/or a negative ACK (NACK) indicating the UE has not successfully received and decoded the transmission. In certain aspects, reference to ACK feedback, HARQ-ACK feedback, or HARQ feedback herein may generally refer to feedback using ACK and/or NACK indications. In certain aspects, a UE transmits an ACK when it has successfully received and decoded the transmission and refrains from transmitting an ACK when it has not successfully received and decoded the transmission. In certain aspects, a UE transmits a NACK when it has not successfully received and decoded the transmission and refrains from transmitting a NACK when it has successfully received and decoded the transmission. In certain aspects (e.g., for a HARQ process with feedback enabled as discussed herein), a UE transmits an ACK when it has successfully received and decoded the transmission and transmits a NACK when it has not successfully received and decoded the transmission. In certain aspects, a UE is configured with one or more HARQ processes. Accordingly, in certain aspects, the UE maintains one or more buffers, each buffer corresponding to one of the one or more HARQ processes. Each HARQ process may be used for buffering data for a given downlink channel (e.g., a control channel such as a physical downlink control channel (PDCCH) or a data channel such as a physical downlink shared channel (PDSCH)) at a time (e.g., per subframe, slot, etc.). In particular, as part of a HARQ process, the UE buffers data that is received even if it cannot successfully decode the data, and informs the BS that it could not decode the data for that channel for that time period. The BS may then resend the data to the UE, and the UE may then use both the previously received data and the resent data in combination (e.g., soft combining) to attempt to decode the data. Accordingly, different HARQ processes of the UE may be assigned to different downlink channels/downlink occasions at a time, and used to try to successfully receive and decode data. Each HARQ process may be identified by an identifier referred to as a HARQ ID, so that the receiver and transmitter are aware of which data belongs to which HARQ process. In aspects, ACK/NACK feedback reported by a UE may be formatted according to a codebook. For example, a codebook with respect to HARQ may define the number of HARQ bits to be reported and the order in which certain HARQ bits are arranged. The codebook may also define what each HARQ bit represents based on the location of the HARQ bit in the HARQ feedback.
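For illustration only, the per-process buffering and soft combining described above may be sketched as follows. The log-likelihood-ratio addition, the toy decoder, and all names in this Python sketch are assumptions for readability, not a procedure mandated by the disclosure.

# Illustrative model of one HARQ process buffering soft bits across attempts.
from dataclasses import dataclass, field

@dataclass
class HarqProcess:
    harq_id: int
    llr_buffer: list = field(default_factory=list)  # soft bits kept across attempts

    def receive(self, llrs, decode):
        # Soft combining: add new soft bits to anything buffered from prior attempts.
        if self.llr_buffer:
            llrs = [a + b for a, b in zip(self.llr_buffer, llrs)]
        if decode(llrs):            # caller-supplied channel decoder
            self.llr_buffer = []    # success: flush the buffer and report ACK
            return "ACK"
        self.llr_buffer = llrs      # failure: keep the combined soft bits, report NACK
        return "NACK"

# Toy decoder: succeeds once the combined soft values are confident enough.
decode = lambda llrs: all(abs(v) > 1.0 for v in llrs)
process = HarqProcess(harq_id=0)
print(process.receive([0.6, -0.7], decode))  # NACK: first attempt too weak, buffered
print(process.receive([0.6, -0.7], decode))  # ACK: combined magnitudes exceed 1.0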
For example, a given HARQ bit may correspond to a specific code block group (CBG), a specific transport block (TB), a specific HARQ process, a specific carrier, and/or a specific serving cell. The codebook provides a mapping of the HARQ bit locations in the HARQ feedback to specific HARQ transmissions based on the respective CBG, TB, HARQ process, carrier, and/or serving cell. As used herein, a carrier may refer to a component carrier. In some cases, a UE may transmit aggregated HARQ feedback in response to a plurality of downlink communications. For example, the UE may be configured to receive up to k downlink communications during a period of time. In some examples, the period of time is also known as a constrained set of resources. The constrained set of resources may include M possible downlink occasions over which a downlink communication may be transmitted. Here, although k and M may be any suitable whole number, examples of possible values are provided throughout this disclosure. After receiving k or fewer downlink communications during a single constrained set of resources, the UE may map a binary vector corresponding to each of one or more downlink transmission occasions over which downlink data is received to a codepoint value in the codebook. The UE may then calculate an aggregated HARQ feedback based on each of the codepoint values for that constrained set. In this way, a single HARQ feedback can indicate an ACK/NACK for each of the k or fewer downlink communications received by the UE in a constrained set. In some cases, the UE may calculate an aggregated HARQ feedback based on fewer than k downlink transmissions (e.g., k−2 downlink transmissions received by the UE) during a constrained set despite the BS having transmitted additional downlink communications that the UE did not receive. In such cases, the aggregated HARQ feedback may provide an ambiguous indication of which transmissions were received by the UE and which were not. For example, the aggregated HARQ feedback may correspond to two or more different possibilities of received or not received downlink transmissions. Thus, the UE may use an enhanced codebook comprising one or more additional pools of codepoints that correspond to scenarios that may result in an ambiguous HARQ feedback. Alternatively, the UE may calculate an aggregated HARQ feedback that includes additional information (e.g., a q value described in more detail below) configured to rule out ambiguous possibilities of which downlink communications were received or not received by the UE. The following description is not limiting of the scope, applicability, or examples set forth in the claims. Changes may be made in the function and arrangement of elements discussed without departing from the scope of the disclosure. Various examples may omit, substitute, or add various procedures or components as appropriate. For instance, the methods described may be performed in an order different from that described, and various steps may be added, omitted, or combined. Also, features described with respect to some examples may be combined in some other examples. For example, an apparatus may be implemented or a method may be practiced using any number of the aspects set forth herein. In addition, the scope of the disclosure is intended to cover such an apparatus or method which is practiced using other structure, functionality, or structure and functionality in addition to, or other than, the various aspects of the disclosure set forth herein.
It should be understood that any aspect of the disclosure disclosed herein may be embodied by one or more elements of a claim. The word "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any aspect described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other aspects. In general, any number of wireless networks may be deployed in a given geographic area. Each wireless network may support a particular radio access technology (RAT) and may operate on one or more frequencies. A RAT may also be referred to as a radio technology, an air interface, etc. A frequency may also be referred to as a carrier, a subcarrier, a frequency channel, a tone, a subband, etc. Each frequency may support a single RAT in a given geographic area in order to avoid interference between wireless networks of different RATs. The techniques described herein may be used for various wireless networks and radio technologies. While aspects may be described herein using terminology commonly associated with 3G, 4G, and/or new radio (e.g., 5G NR) wireless technologies, aspects of the present disclosure can be applied in other generation-based communication systems. NR access may support various wireless communication services, such as enhanced mobile broadband (eMBB) targeting wide bandwidth (e.g., 80 MHz or beyond), millimeter wave (mmW) targeting high carrier frequency (e.g., 25 GHz or beyond), massive machine type communications (mMTC) targeting non-backward compatible MTC techniques, and/or mission critical targeting ultra-reliable low-latency communications (URLLC). These services may include latency and reliability requirements. These services may also have different transmission time intervals (TTI) to meet respective quality of service (QoS) requirements. In addition, these services may co-exist in the same subframe. NR supports beamforming and beam direction may be dynamically configured. MIMO transmissions with precoding may also be supported. MIMO configurations in the DL may support up to 8 transmit antennas with multi-layer DL transmissions of up to 8 streams and up to 2 streams per UE. Aggregation of multiple cells may be supported with up to 8 serving cells. FIG.1illustrates an example wireless communication network100in which aspects of the present disclosure may be performed. For example, the wireless communication network100may be an NR system (e.g., a 5G NR network). As shown inFIG.1, the wireless communication network100may be in communication with a core network132. The core network132may be in communication with one or more base stations (BSs)110and/or user equipment (UE)120in the wireless communication network100via one or more interfaces. As illustrated inFIG.1, the wireless communication network100may include a number of BSs110a-z(each also individually referred to herein as BS110or collectively as BSs110) and other network entities. A BS110may provide communication coverage for a particular geographic area, sometimes referred to as a "cell", which may be stationary or may move according to the location of a mobile BS110. In some examples, the BSs110may be interconnected to one another and/or to one or more other BSs or network nodes (not shown) in wireless communication network100through various types of backhaul interfaces (e.g., a direct physical connection, a wireless connection, a virtual network, or the like) using any suitable transport network.
In the example shown inFIG.1, the BSs110a,110band110cmay be macro BSs for the macro cells102a,102band102c, respectively. The BS110xmay be a pico BS for a pico cell102x. The BSs110yand110zmay be femto BSs for the femto cells102yand102z, respectively. A BS may support one or multiple cells. A network controller130may couple to a set of BSs110and provide coordination and control for these BSs110(e.g., via a backhaul). The BSs110communicate with UEs120a-y(each also individually referred to herein as UE120or collectively as UEs120) in the wireless communication network100. The UEs120(e.g.,120x,120y, etc.) may be dispersed throughout the wireless communication network100, and each UE120may be stationary or mobile. Wireless communication network100may also include relay stations (e.g., relay station110r), also referred to as relays or the like, that receive a transmission of data and/or other information from an upstream station (e.g., a BS110aor a UE120r) and send a transmission of the data and/or other information to a downstream station (e.g., a UE120or a BS110), or that relay transmissions between UEs120, to facilitate communication between devices. According to certain aspects, the BSs110and UEs120may be configured for reducing ambiguous HARQ feedback based on a maximum number of downlink transmissions (k) the UE120is configured to receive during a particular time period, and a number of downlink occasions (M) provided during that time period. As shown inFIG.1, the BS110aincludes a HARQ manager112. The HARQ manager112may be configured to indicate, to the UE120a, a codebook for uplink transmission of HARQ feedback, the codebook based on: (i) a number (M) of candidate downlink transmission occasions for a time period, and (ii) a maximum number (k) of downlink transmissions the UE120acan receive during the time period. The HARQ manager112may also be configured to select an action to perform based on a set of codepoints of the codebook, the set of codepoints comprising: (i) a first combination of codepoints, and (ii) a second combination of codepoints different from the first combination, wherein HARQ feedback calculated from the first combination is equal to HARQ feedback calculated from the second combination. As shown inFIG.1, the UE120aincludes a HARQ manager122. The HARQ manager122may be configured to receive, from a BS110a, an indication of a codebook for uplink transmission of HARQ feedback, the codebook based on: (i) a number (M) of candidate downlink transmission occasions for a time period, and (ii) a maximum number (k) of downlink transmissions the UE can receive during the time period, the codebook comprising: (i) a first plurality of codepoints, (ii) a second plurality of codepoints, (iii) a third plurality of codepoints, and (iv) another one or more codepoints. The HARQ manager122may also be configured to receive a downlink transmission over the time period. The HARQ manager122may also be configured to transmit HARQ feedback for the downlink transmission, the HARQ feedback calculated using one of: (i) the first plurality of codepoints, the second plurality of codepoints, or the third plurality of codepoints if a codepoint location is ambiguous or if the downlink transmission is received in error, or (ii) the other one or more codepoints if a codepoint location is not ambiguous and if the downlink transmission is not received in error.
FIG.2illustrates example components of BS110aand UE120a(e.g., in the wireless communication network100ofFIG.1), which may be used to implement aspects of the present disclosure. At the BS110a, a transmit processor220may receive data from a data source212and control information from a controller/processor240. The control information may be for the physical broadcast channel (PBCH), physical control format indicator channel (PCFICH), physical hybrid ARQ indicator channel (PHICH), physical downlink control channel (PDCCH), group common PDCCH (GC PDCCH), etc. The data may be for the physical downlink shared channel (PDSCH), etc. A medium access control (MAC)-control element (MAC-CE) is a MAC layer communication structure that may be used for control command exchange between wireless nodes. The MAC-CE may be carried in a shared channel such as a physical downlink shared channel (PDSCH), a physical uplink shared channel (PUSCH), or a physical sidelink shared channel (PSSCH). The processor220may process (e.g., encode and symbol map) the data and control information to obtain data symbols and control symbols, respectively. The transmit processor220may also generate reference symbols, such as for the primary synchronization signal (PSS), secondary synchronization signal (SSS), and channel state information reference signal (CSI-RS). A transmit (TX) multiple-input multiple-output (MIMO) processor230may perform spatial processing (e.g., precoding) on the data symbols, the control symbols, and/or the reference symbols, if applicable, and may provide output symbol streams to the modulators (MODs) in transceivers232a-232t. Each modulator in transceivers232a-232tmay process a respective output symbol stream (e.g., for OFDM, etc.) to obtain an output sample stream. Each modulator may further process (e.g., convert to analog, amplify, filter, and upconvert) the output sample stream to obtain a downlink signal. Downlink signals from modulators in transceivers232a-232tmay be transmitted via the antennas234a-234t, respectively. At the UE120a, the antennas252a-252rmay receive the downlink signals from the BS110aand may provide received signals to the demodulators (DEMODs) in transceivers254a-254r, respectively. Each demodulator in transceivers254a-254rmay condition (e.g., filter, amplify, downconvert, and digitize) a respective received signal to obtain input samples. Each demodulator may further process the input samples (e.g., for OFDM, etc.) to obtain received symbols. A MIMO detector256may obtain received symbols from all the demodulators in transceivers254a-254r, perform MIMO detection on the received symbols if applicable, and provide detected symbols. A receive processor258may process (e.g., demodulate, deinterleave, and decode) the detected symbols, provide decoded data for the UE120ato a data sink260, and provide decoded control information to a controller/processor280. On the uplink, at UE120a, a transmit processor264may receive and process data (e.g., for the physical uplink shared channel (PUSCH)) from a data source262and control information (e.g., for the physical uplink control channel (PUCCH)) from the controller/processor280. The transmit processor264may also generate reference symbols for a reference signal (e.g., for the sounding reference signal (SRS)). The symbols from the transmit processor264may be precoded by a TX MIMO processor266if applicable, further processed by the modulators (MODs) in transceivers254a-254r(e.g., for SC-FDM, etc.), and transmitted to the BS110a.
At the BS110a, the uplink signals from the UE120amay be received by the antennas234, processed by the demodulators in transceivers232a-232t, detected by a MIMO detector236if applicable, and further processed by a receive processor238to obtain decoded data and control information sent by the UE120a. The receive processor238may provide the decoded data to a data sink239and the decoded control information to the controller/processor240. The memories242and282may store data and program codes for BS110aand UE120a, respectively. A scheduler244may schedule UEs for data transmission on the downlink and/or uplink. Antennas252, processors266,258,264, and/or controller/processor280of the UE120aand/or antennas234, processors220,230,238, and/or controller/processor240of the BS110amay be used to perform the various techniques and methods described herein. For example, as shown inFIG.2, the controller/processor240of the BS110aincludes the HARQ manager112that may be configured for indicating, to a user equipment (UE), a codebook for uplink transmission of hybrid automatic repeat request (HARQ) feedback, the codebook based on: (i) a number (M) of candidate downlink transmission occasions for a time period, and (ii) a maximum number (k) of downlink transmissions the UE can receive during the time period. The HARQ manager112may be configured for selecting an action to perform based on a set of codepoints of the codebook, the set of codepoints comprising: (i) a first combination of codepoints, and (ii) a second combination of codepoints different from the first combination, wherein HARQ feedback calculated from the first combination is equal to HARQ feedback calculated from the second combination. As shown inFIG.2, the controller/processor280of the UE120aincludes the HARQ manager122that may be configured for receiving, from a base station (BS), an indication of a codebook for uplink transmission of hybrid automatic repeat request (HARQ) feedback, the codebook based on: (i) a number (M) of candidate downlink transmission occasions for a time period, and (ii) a maximum number (k) of downlink transmissions the UE can receive during the time period, the codebook comprising: (i) a first plurality of codepoints, (ii) a second plurality of codepoints, (iii) a third plurality of codepoints, and (iv) another one or more codepoints. The HARQ manager122may be configured for receiving a downlink transmission over the time period. The HARQ manager122may also be configured for transmitting HARQ feedback for the downlink transmission, the HARQ feedback calculated using one of: (i) the first plurality of codepoints, the second plurality of codepoints, or the third plurality of codepoints if a codepoint location is ambiguous or if the downlink transmission is received in error, or (ii) the other one or more codepoints if a codepoint location is not ambiguous and if the downlink transmission is not received in error. NR may utilize orthogonal frequency division multiplexing (OFDM) with a cyclic prefix (CP) on the uplink and downlink. NR may support half-duplex operation using time division duplexing (TDD). OFDM and single-carrier frequency division multiplexing (SC-FDM) partition the system bandwidth into multiple orthogonal subcarriers, which are also commonly referred to as tones, bins, etc. Each subcarrier may be modulated with data. Modulation symbols may be sent in the frequency domain with OFDM and in the time domain with SC-FDM.
The spacing between adjacent subcarriers may be fixed, and the total number of subcarriers may be dependent on the system bandwidth. The minimum resource allocation, called a resource block (RB), may be 12 consecutive subcarriers. The system bandwidth may also be partitioned into subbands. For example, a subband may cover multiple RBs. NR may support a base subcarrier spacing (SCS) of 15 kHz and other SCS may be defined with respect to the base SCS (e.g., 30 kHz, 60 kHz, 120 kHz, 240 kHz, etc.). FIG.3is a diagram showing an example of a frame format300for NR. The transmission timeline for each of the downlink and uplink may be partitioned into units of radio frames. Each radio frame may have a predetermined duration (e.g., 10 ms) and may be partitioned into 10 subframes, each of 1 ms, with indices of 0 through 9. Each subframe may include a variable number of slots (e.g., 1, 2, 4, 8, 16, . . . slots) depending on the SCS. Each slot may include a variable number of symbol periods (e.g., 7, 12, or 14 symbols) depending on the SCS. The symbol periods in each slot may be assigned indices. A mini-slot, which may be referred to as a sub-slot structure, refers to a transmit time interval having a duration less than a slot (e.g., 2, 3, or 4 symbols). Each symbol in a slot may indicate a link direction (e.g., downlink (DL), uplink (UL), or flexible) for data transmission and the link direction for each subframe may be dynamically switched. The link directions may be based on the slot format. Each slot may include DL/UL data as well as DL/UL control information. In NR, a synchronization signal block (SSB) is transmitted. In certain aspects, SSBs may be transmitted in a burst where each SSB in the burst corresponds to a different beam direction for UE-side beam management (e.g., including beam selection and/or beam refinement). The SSB includes a PSS, an SSS, and a two-symbol PBCH. The SSB can be transmitted in a fixed slot location, such as the symbols 0-3 as shown inFIG.3. The PSS and SSS may be used by UEs for cell search and acquisition. The PSS may provide half-frame timing, and the SSS may provide the CP length and frame timing. The PSS and SSS may provide the cell identity. The PBCH carries some basic system information, such as downlink system bandwidth, timing information within radio frame, SS burst set periodicity, system frame number, etc. The SSBs may be organized into SS bursts to support beam sweeping. Further system information, such as remaining minimum system information (RMSI), system information blocks (SIBs), and other system information (OSI), can be transmitted on a physical downlink shared channel (PDSCH) in certain subframes. The SSB can be transmitted up to sixty-four times, for example, with up to sixty-four different beam directions for mmWave. The multiple transmissions of the SSB are referred to as a SS burst set. SSBs in an SS burst set may be transmitted in the same frequency region, while SSBs in different SS burst sets can be transmitted at different frequency regions. In some cases, a base station may communicate downlink transmissions to a UE using a downlink channel in a slot and/or mini-slot. In response, the UE may transmit feedback to the base station indicating receipt or non-receipt of the downlink transmission. For example, the base station may send downlink transmissions on a physical downlink shared channel (PDSCH) using the slot or a portion of the slot.
The UE may receive the downlink data transmitted by the base station and may send feedback transmissions. In some cases, downlink transmissions may include one or more downlink messages and feedback transmissions may include HARQ feedback (e.g., formatted according to a semistatic HARQ-ACK codebook). According to some aspects, the UE may use HARQ feedback to ensure reception of the transmitted data. For example, the UE may send HARQ feedback transmissions that include an acknowledgement (ACK) or a negative acknowledgement (NACK) for data received by the UE. In such cases, the UE may monitor for downlink messages sent by a base station during one or more downlink transmission occasions. In some examples, each downlink transmission occasion is characterized by a time period (e.g., a subframe, slot, mini-slot, etc.) during which the UE monitors a set of resources (e.g., resource elements (REs), resource blocks (RBs), etc.) to identify data sent to the UE from the base station. As described below, a UE (e.g., UE120ofFIG.1) may be configured to communicate HARQ feedback to a base station based on a number (M) of candidate downlink transmission occasions (e.g., downlink resources for transmission of downlink data) for a time period (e.g., a constrained set of resources), and a maximum number (k) of simultaneous downlink transmissions the UE can receive during the time period. Thus, when the UE successfully receives one or more downlink transmissions during the time period from the base station, the UE may determine a type of encoding for the HARQ feedback based on: (i) a number of the one or more downlink transmissions received, (ii) the number (M) of candidate downlink transmission occasions for the corresponding time period, and (iii) the maximum number (k) of simultaneous downlink transmissions. The UE may then encode the HARQ feedback using the determined type of encoding, and transmit the HARQ feedback to the base station. FIG.4is a block diagram illustrating an example of an aggregated HARQ ACK feedback transmitted via a PUCCH410. The UE (e.g., UE120aofFIG.1) may aggregate feedback for downlink transmissions received in one or a plurality of constrained sets (e.g., constrained sets402athrough402z—collectively referred to as "constrained sets402") into a single HARQ ACK message transmitted to a BS (e.g., BS110aofFIG.1). Thus, the UE120amay be configured to receive a plurality of downlink data transmissions in each of one or more constrained sets402prior to generating a HARQ feedback based on an aggregate of the downlink data transmissions received by the UE, and transmitting the HARQ feedback to the BS110a. For example, the UE120amay be configured to generate and transmit aggregated HARQ feedback for every four contiguous constrained sets, or any other suitable number of constrained sets. In some examples, the number of constrained sets that the UE120aaggregates HARQ feedback for may be configured by the BS110aor according to a particular RAT. The plurality of constrained sets402may form a temporally contiguous series of constrained sets. In this example, each of the plurality of constrained sets402includes four slots, wherein each of the four slots includes seven downlink occasions. For example, a first constrained set402aincludes four slots (e.g., slot412athrough412d—collectively referred to as "slots412"), wherein each of the four slots412includes seven downlink occasions for a total of M=28 candidate downlink transmission occasions numbered 0 through 27.
The UE120amay be configured to receive any suitable number of downlink transmissions during each of the constrained sets402. In certain aspects, the UE120amay be configured to receive up to k downlink transmissions during each of the constrained sets402. In some examples, k may be set by a manufacturer according to hardware and/or software capabilities of a particular UE, and may indicate a maximum number of downlink transmissions the UE120acan receive during a given constrained set. Alternatively, k of a UE120amay be configured by a BS110aor according to a particular RAT. For purposes of this example, k of the UE120amay be set to 4. That is, the UE120amay be configured to receive up to 4 downlink transmissions during each of the plurality of constrained sets402. Stated differently, for the UE120awith k=4 and M=28, the UE120amay receive up to four downlink transmissions within the 28 candidate downlink transmission occasions of each of the plurality of constrained sets402. As illustrated in the example ofFIG.4, the UE120amay receive a downlink transmission in candidate occasions16,19,23, and27of the first constrained set402a. AlthoughFIG.4illustrates each received downlink transmission as including one candidate occasion, each of the downlink transmissions may span one or more contiguous candidate occasions. In certain aspects, the BS110amay configure the UE120awith the codebook prior to transmitting downlink data to the UE120a. In some examples, the BS110amay transmit an indication of a codebook for uplink transmission of HARQ ACK feedback to the UE120a. The codebook may be based on: (i) the number (M) of candidate downlink transmission occasions for each of the plurality of constrained sets402, and (ii) the number (k) of downlink transmissions the UE120acan receive within a constrained set. For each of the plurality of constrained sets402, the UE120amay determine a codebook entry that corresponds to the received downlink transmissions. In one example, the UE120amay first determine an M-bit binary vector that corresponds individually to each received downlink data transmission of a constrained set, or alternatively, an M-bit binary vector that corresponds to all of the received downlink data transmissions as a group. In either case, the M-bit binary vector identifies the candidate occasions over which a downlink transmission was received. Thus, for the first constrained set402a, the UE120amay determine four 28-bit binary vectors individually for each of the four downlink data transmissions received by the UE120a: 0000000000000000000000000001, 0000000000000000000000010000, 0000000000000000000100000000, and 0000000000000000100000000000. Alternatively, the UE120amay determine a 28-bit binary vector for the four downlink data transmissions received by the UE120aas a group: 0000000000000000100100010001, or alternatively, 1111111111111111011011101110. The UE120amay then map the determined binary vector(s) to a corresponding codebook entry. In certain aspects, for each determined binary vector, the UE120amay also determine a codebook index or "codepoint" of a codebook entry that corresponds to the vector. The codepoint may be any integer or whole number that uniquely (relative to other codepoints in the codebook) identifies the codebook entry that corresponds to a particular binary vector. For example, the codebook may include an entry having a same 28-bit binary vector as the one determined by the UE120afor the first constrained set402a. Once the UE120adetermines that the vector matches the codebook entry, the UE120amay determine a codepoint of that codebook entry.
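For illustration only, the occasion-to-vector mapping described above may be sketched as follows, assuming (consistently with the vectors above) that candidate occasion 0 maps to the leftmost bit; the codebook entries populated here are hypothetical stand-ins, not the configured codebook.

# Sketch of the occasion-to-vector mapping for M=28 candidate occasions.
M = 28

def occasions_to_vector(received_occasions, m=M):
    """Return the m-bit binary vector marking occasions that carried data."""
    return "".join("1" if i in received_occasions else "0" for i in range(m))

# Group vector for downlink transmissions received in occasions 16, 19, 23, 27.
group_vector = occasions_to_vector({16, 19, 23, 27})
assert group_vector == "0000000000000000100100010001"

# A codebook maps each valid binary vector to a codepoint; only 28 hypothetical
# single-ACK entries are populated here for brevity.
codebook = {occasions_to_vector({o}): cp for cp, o in enumerate(range(M), start=1)}
print(codebook[occasions_to_vector({16})])  # codepoint of a lone ACK at occasion 16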
Accordingly, for each of one or more constrained sets402, the UE120amay determine a codepoint from the codebook based on the determined binary vector of each constrained set. Once the UE120ahas determined a codepoint for each of either (i) the received downlink data transmissions of a single constrained set, or (ii) the plurality of constrained sets402, the UE120amay calculate a sum (x) of the codepoints and then determine a feedback value (L) corresponding to an aggregated HARQ feedback for uplink transmission over the PUCCH410according to Equation 1 below.
L = x mod (M + 1)   (Equation 1)
For example, if the UE120ais configured to transmit an aggregated feedback of four constrained sets, the UE120awill calculate a sum (x) of the four codepoints corresponding to the four constrained sets, then calculate the value of L using x mod (29). Alternatively, if the UE120ais configured to transmit an aggregated feedback of downlink data transmissions received in a single constrained set402a, the UE120awill calculate a sum (x) of codepoints corresponding to each of the received downlink data transmissions, then calculate the value of L using x mod (29). The UE120amay then transmit L to the BS110a. FIG.5is a block diagram illustrating an example codebook500generated by the BS110afor the UE120a. The codebook includes a plurality of entries (e.g., entries502a-502f), each of which corresponds to a particular binary vector and a codepoint. As discussed, the codebook may be generated by the BS110abased on the M and k values for a particular UE120a. For example, where M=28 and k=4, the codebook may include 1 codepoint for 0 ACKs (e.g., the all-zero binary vector), 28 codepoints for 1 ACK, 32 codepoints for 2 ACKs, 32 codepoints for 3 ACKs, and 1 codepoint for 4 ACKs. As such, the codebook may only include entries that correspond to binary vectors having k or fewer bits indicating successful reception of downlink transmissions. That is, the codebook may not include a binary vector having five or more bits indicating a successful transmission because the BS110awill send no more than four downlink transmissions for each constrained set.
Example Method for Providing HARQ Feedback for k−1 and k−2 Downlink Receptions
As discussed, the UE120adetermines one codepoint for each of the plurality of constrained sets402, then computes an aggregated HARQ feedback based on a plurality of codepoints. However, there may be situations where the UE120adoes not receive all k of the downlink transmissions provided by the BS110ain a single constrained set. Thus, the L value transmitted by the UE120amay not account for all of the downlink transmissions, and furthermore, the L value may be ambiguous in that it does not indicate to the BS110awhich of the downlink transmissions the UE120adid not receive (e.g., the L value may indicate at least two possibilities of which downlink transmissions were received by the UE120a). In one example, a UE120amay be configured to determine and transmit an aggregated HARQ feedback (e.g., a calculated L_UE value) to the BS110aafter a constrained set (e.g., constrained set402aofFIG.4, wherein M=28 transmission occasions across four slots412a-412d). In this example, the BS110atransmits k=4 downlink communications to the UE120ain the constrained set, wherein the four transmissions correspond to codepoints {1, 4, 5, and 8} within the codebook. Thus, the sum (x) of the four codepoints associated with downlink transmissions made by the BS110aover the constrained set402aequals 18 (e.g., x=18).
Assuming, however, that the UE120adoes not receive the second downlink transmission corresponding to codepoint 4, then the sum of the codepoints corresponding to downlink transmissions received by the UE120aequals 14 (e.g., x=14). Accordingly, using Equation 1, the UE120acalculates the aggregated HARQ feedback using L_UE = 14 mod 29 and determines L_UE = 14. The UE120athen transmits 14 to the BS110aas the aggregated HARQ feedback for the constrained set. In this example, the BS110amay use the expected L value (e.g., L_e = 18) to determine which of the downlink transmission(s) of the constrained set402awere not received by the UE120a. The BS110amay calculate the missing codepoint (c_m, the codepoint corresponding to the downlink transmission not received by the UE120a) using Equation 2 below.
c_m = L_e − L_UE   (Equation 2)
Here, the BS110acalculates the codepoint not received by the UE120aas c_m = 4. The BS110amay then determine which of the four transmissions of the constrained set402awould have a codepoint that is equal to 4 in order to determine which of the downlink transmissions the UE120adid not receive. However, in a case where the UE120areceives k−2 downlink transmissions during a constrained set, there may be additional ambiguity that prevents the BS110afrom being able to determine which downlink transmission the UE120adid not receive. Using the same example as above, the BS110atransmits k=4 downlink communications to the UE120ain the constrained set402a, wherein the four transmissions correspond to codepoints {1, 4, 5, and 8} within the codebook. However, in this example, the UE120adoes not receive the downlink transmissions corresponding to codepoints 4 or 5. Here, the sum of the codepoints corresponding to downlink transmissions received by the UE120aequals 9 (e.g., x=1+8=9). Accordingly, using Equation 1, the UE120acalculates the aggregated HARQ feedback using 9 mod 29 and determines L_UE = 9. The UE120athen transmits 9 to the BS110aas the aggregated HARQ feedback for the constrained set402a. The BS110amay then use the expected L value (e.g., L_e = 18) to determine which of the k downlink transmissions were not received by the UE120a. In this example, however, if the BS110auses Equation 2 to calculate the missing codepoint c_m, the calculation will result in a value of 9. Because the BS110awould expect that the UE120areceived downlink transmissions corresponding to codepoints 1, 4, 5, and 8, the calculated c_m does not correspond to a valid codepoint reflective of the k downlink transmissions by the BS110a. Because the calculation of c_m does not result in a valid codepoint, the BS110amay attempt to determine if downlink transmissions associated with two of the k downlink transmissions failed to reach the UE120a. However, this may result in ambiguity as to which two of the downlink transmissions were not received by the UE120a. For example, the UE120aprovided "9" as the HARQ feedback, yet a sum of codepoints 4 and 5 equals 9, and a sum of codepoints 1 and 8 also equals 9. Accordingly, the BS110amay not be able to determine with certainty which two codepoints correspond to the downlink transmissions that the UE120adid not receive.
Example Method for Resolving Ambiguity for k−2 Downlink Receptions
In certain aspects, the BS110amay take one or more actions in order to avoid the foregoing ambiguity.
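Before turning to those actions, the arithmetic behind the ambiguity may be reproduced directly. The following minimal sketch applies Equations 1 and 2 to the codepoint values used in the examples above; the function names are illustrative assumptions.

# Numerical check of Equations 1 and 2 and of the k-2 ambiguity described above.
M = 28

def aggregated_feedback(codepoints, m=M):
    return sum(codepoints) % (m + 1)  # Equation 1: L = x mod (M + 1)

sent = [1, 4, 5, 8]                   # codepoints transmitted by the BS
L_e = aggregated_feedback(sent)       # expected feedback: 18

# k-1 case: codepoint 4 is lost, and Equation 2 recovers it unambiguously.
L_ue = aggregated_feedback([1, 5, 8])
print(L_e - L_ue)                     # c_m = 4

# k-2 case: codepoints 4 and 5 are lost; the feedback value 9 is ambiguous
# because {1, 8} and {4, 5} yield the same aggregated value.
print(aggregated_feedback([1, 8]), aggregated_feedback([4, 5]))  # 9 9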
For example,FIG.6is a call flow diagram illustrating communications600between a BS110aand a UE120ain a manner that takes into account the possibility of receiving an ambiguous HARQ feedback in response to downlink communications made in a constrained set of resources. In a first process606, the BS110amay generate a codebook (e.g., the codebook500ofFIG.5) for the UE120abased on M and k values of the UE120a. Then, in a first communication608, the BS110amay transmit an indication of the codebook to the UE120a. In some examples, the BS110amay transmit the entire codebook to the UE120a, or alternatively, the UE120amay already include one or more codebooks stored in a memory, of which the BS110acan select from and enable via the first communication608. In a second process610, the BS110amay determine a set of codepoints comprising: (i) a first combination of codepoints, and (ii) a second combination of codepoints different from the first combination. Here, the BS110adetermines the set based on whether HARQ feedback calculated from the first combination is equal to HARQ feedback calculated from the second combination. Accordingly, the BS110ahas a set of codepoints that can cause ambiguity in a HARQ feedback from the UE120a. In a third process612, the BS110amay determine and select one or more of a plurality of actions to perform when scheduling and transmitting downlink data to the UE120a. The plurality of actions may include scheduling, by the BS110a, a plurality of downlink data transmissions to the UE120aover downlink resources (e.g., downlink transmission occasions) that do not correspond to the first combination of codepoints and the second combination of codepoints. For example, in a k=4 scenario, the BS110amay schedule three downlink transmissions to the UE, then for a fourth downlink transmission the BS110amay determine to transmit over downlink resources that do not correspond to a codepoint that would result in ambiguity with any of the initial three downlink transmissions of a constrained set. In some examples, the plurality of actions may include the BS110adetermining to refrain from transmitting the fourth downlink transmission if there is no alternative downlink resource that will resolve a potential ambiguity (e.g., the fourth downlink transmission can only be sent over downlink resources that correspond to a codepoint in one of the first or second combinations, wherein one of the other three downlink transmissions corresponds to the other of the first or second combination). In some examples, the plurality of actions may include determining, by the BS110a, a downlink transmission occasion in a constrained set (e.g., constrained set402aofFIG.4) for transmitting downlink data to the UE120a. The BS110amay then determine a likelihood that HARQ feedback calculated from the downlink data transmissions will result in an ambiguity. For example, the BS110amay determine a likelihood that the UE120awould not receive one or more downlink transmissions that would result in ambiguous HARQ feedback based on movement of the UE120aor quality of previous communications between the BS110aand the UE120a(e.g., determining that reference signal received power (RSRP), reference signal received quality (RSRQ), and the like are measured to be above a threshold). In this example, the BS110amay determine whether to transmit or schedule one or more downlink transmissions if it is likely that the UE120amay not receive a transmission.
That is, if it is likely that the UE 120a will not receive a transmission, and a HARQ feedback calculated by the UE 120a would result in an ambiguous value if calculated without having received that transmission, then the BS 110a may determine an action to take. In some examples, the plurality of actions may include ignoring, by the BS 110a, any HARQ feedback from the UE 120a that may be ambiguous. In a second communication 614, the BS 110a may transmit downlink data to the UE 120a.
Example Method for Resolving Ambiguity for k−2 Downlink Receptions Via Enhanced Codebook
In certain aspects, the BS 110a may generate an enhanced codebook comprising subsets of codepoints (e.g., one or more pools of codepoints, wherein each pool represents less than all of the codepoints in a codebook) or "pools" of codepoints configured to resolve any ambiguity of a HARQ feedback. For example, where M=28 and k=4, the enhanced codebook may include the following enumerated codepoints: (i) 1 codepoint for 0 ACKs; (ii) 28 codepoints for 1 ACK; (iii) 32 codepoints for 2 ACKs (e.g., a first pool of codepoints corresponding to a scenario wherein the UE 120a receives two downlink control channel transmissions in a single constrained set, or four downlink control channel transmissions in the single constrained set with no ambiguous ACK location); (iv) 32 codepoints for 2 ACKs (e.g., a second pool of codepoints corresponding to a scenario wherein the UE 120a receives three downlink control channel transmissions in a single constrained set and q=0, or four downlink control channel transmissions in the single constrained set with an ambiguous ACK location and q=0); (v) 32 codepoints for 2 ACKs (e.g., a third pool of codepoints corresponding to a scenario wherein the UE 120a receives three downlink control channel transmissions in a single constrained set and q=1, or four downlink control channel transmissions in the single constrained set with an ambiguous ACK location and q=1); (vi) 32 codepoints for 3 ACKs; and (vii) 1 codepoint for 4 ACKs. Accordingly, if the UE 120a receives k downlink transmissions in a constrained set that meet one of the foregoing rules, the UE 120a may provide HARQ feedback using a corresponding codepoint from one of the pools of codepoints in the enhanced codebook. FIG. 7 is a call flow diagram illustrating example communications 700 between a UE (e.g., the UE 120a of FIG. 1) and a BS (e.g., the BS 110a of FIG. 2). As a first example, the BS 110a may transmit four downlink communications to the UE during a first constrained set of downlink resources in a first communication 702. The four downlink communications include: (i) a first downlink data transmission corresponding to codepoint 1, (ii) a first downlink control transmission (e.g., a transmission over PDCCH) corresponding to codepoint 4, (iii) a second downlink data transmission corresponding to codepoint 5, and (iv) a third downlink data transmission corresponding to codepoint 8. In this example, the UE 120a receives the first downlink data transmission, the first downlink control transmission, and the third downlink data transmission, but does not receive the second downlink data transmission. Thus, in a first process 704, the UE 120a may perform a calculation of an aggregated HARQ feedback for the received transmissions. Note that the UE 120a may determine not to include the first downlink control transmission in its calculation of the HARQ feedback. Accordingly, the UE 120a sums the codepoints corresponding to the received downlink data transmissions, resulting in 9.
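Both the second process 610 at the BS and the UE's check in the first process 704 turn on recognizing codepoint combinations whose aggregated feedback collides. The following is a minimal sketch of how such combinations could be enumerated, an illustration under the passage's assumptions (sum modulo 29), not an algorithm prescribed by this disclosure:

```python
from itertools import combinations

MOD = 29  # modulus from the passage's examples

def colliding_combinations(codepoints, k):
    """Return pairs of distinct codepoint combinations (of size 1..k) whose
    aggregated HARQ feedback (sum mod MOD) is equal, e.g. {1, 8} vs {4, 5}."""
    seen = {}       # feedback value -> first combination that produced it
    clashes = []
    for r in range(1, k + 1):
        for combo in combinations(codepoints, r):
            fb = sum(combo) % MOD
            if fb in seen:
                clashes.append((seen[fb], combo))
            else:
                seen[fb] = combo
    return clashes

# For the constrained set of the example (codepoints 1, 4, 5, 8):
for a, b in colliding_combinations([1, 4, 5, 8], k=4):
    print(a, b)   # includes ((1, 8), (4, 5)), the ambiguity discussed above
```

Note that the enumeration also surfaces collisions between combinations of different sizes; which of these matter in practice depends on what the BS can already rule out from its own scheduling.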
The UE 120a then determines whether there are any codepoints that correspond to a valid downlink resource that could lead to ambiguity of the HARQ feedback. Here, because the UE 120a received transmissions corresponding to codepoints 1, 4, and 8, and decoded PDSCH corresponding to codepoints 1 and 8, the UE 120a may determine that codepoint 5 would lead to ambiguity (e.g., the UE would feed back a HARQ ACK for codepoints {1, 8}, which is equivalent to, and thus ambiguous with, a HARQ ACK for codepoints {4, 5}). That is, the UE 120a determines that if the BS 110a transmitted a downlink communication that was not received in a downlink resource that corresponds to codepoint 5, the HARQ feedback of 9 would result in an ambiguous HARQ feedback. The HARQ feedback would be ambiguous because the BS 110a would not be able to determine which of the transmissions were not received by the UE 120a (e.g., 1+8=9 and 4+5=9), so a HARQ feedback based on 9 could provide two different indications of what the UE 120a received. Thus, the UE 120a may determine that codepoint 5 is an ambiguity that could make a HARQ feedback ambiguous if the BS 110a transmitted a downlink communication in a downlink resource corresponding to codepoint 5. Based on the determination that codepoint 5 is an ambiguity, the UE 120a may determine a q value. In some examples, the UE 120a determines a q value by determining a lowest codepoint value among the three codepoints corresponding to the received downlink transmissions and the ambiguous codepoint. For example, if the UE 120a receives codepoints 4, 5, and 8 corresponding to the received downlink transmissions, and determines the ambiguous codepoint to be 1, then the lowest codepoint value is the ambiguous codepoint. However, if the UE 120a receives codepoints 1, 4, and 5 corresponding to the received downlink transmissions, and determines the ambiguous codepoint to be 8, then the lowest codepoint value is the "1" value received in a downlink transmission. The UE 120a may then determine a q value based on whether the lowest codepoint value is a codepoint corresponding to a received downlink communication. For example, if the lowest codepoint value corresponds to the ambiguous codepoint, then the UE may set q=1. However, if the lowest codepoint value corresponds to a codepoint associated with a received downlink communication, then the UE 120a may set q=0. In this example, because the lowest codepoint value is 1, and because the lowest codepoint value corresponds to a downlink data transmission received by the UE 120a, the UE 120a will set the q value to "0" (e.g., q=0). In a second communication 706, the UE 120a transmits the determined HARQ feedback. In this example, the HARQ feedback comprises two values, (9, 0), where the 9 is the HARQ feedback determined based on the received downlink data transmissions, and the 0 is the determined q value. Although outside the scope of the first example, it should be noted that if no ambiguous codepoint is found by the UE 120a during the first process 704, the UE 120a will transmit, during the second communication 706, HARQ ACK feedback using Equation 1 above and the codepoints corresponding to the received downlink transmissions, without a q value. At a second process 708, the BS 110a will determine which (if any) downlink transmission was not received by the UE based on the HARQ feedback.
Here, the BS 110a may use the q value to determine that the downlink transmission corresponding to the lowest ambiguous codepoint value (e.g., 1) was received by the UE 120a, and thus that the downlink transmission corresponding to codepoint 5 was not received. Still referring to FIG. 7, a second example is provided as follows. In this example, in the first communication 702, the BS 110a transmits four downlink communications to the UE 120a (e.g., k=4). The four downlink communications include: (i) a first downlink data transmission corresponding to codepoint 1, (ii) a second downlink data transmission corresponding to codepoint 4, (iii) a third downlink data transmission corresponding to codepoint 5, and (iv) a first downlink control transmission corresponding to codepoint 8. In this example, the UE 120a receives the downlink transmissions that correspond to codepoints 4, 5, and 8, but the UE 120a does not receive the downlink transmission corresponding to codepoint 1. Thus, in the first process 704, the UE 120a calculates a HARQ feedback based on the codepoints of the received downlink data transmissions, resulting in 9. The UE 120a then determines whether there is a downlink resource corresponding to a codepoint that could lead to ambiguity. In this example, because the UE 120a received transmissions corresponding to codepoints 4, 5, and 8, the UE 120a may determine that codepoint 1 would lead to ambiguity. That is, the UE 120a determines that if the BS 110a transmitted a downlink communication that was not received in a downlink resource that corresponds to codepoint 1, the HARQ feedback of 9 would result in an ambiguous HARQ feedback. The HARQ feedback would be ambiguous because the BS 110a would not be able to determine which of the transmissions were not received by the UE 120a (e.g., 1+8=9 and 4+5=9), so a HARQ feedback based on 9 could provide two different indications of what the UE 120a received. Thus, the UE 120a may determine that codepoint 1 is an ambiguity that could make a HARQ feedback ambiguous if the BS 110a transmitted a downlink communication in a downlink resource corresponding to codepoint 1. Based on the determination that codepoint 1 is an ambiguity, the UE 120a may determine a lowest codepoint value among the three codepoints corresponding to the received downlink transmissions and the ambiguous codepoint. In this example, because the lowest codepoint value is 1, and because the lowest codepoint value corresponds to the ambiguous codepoint, the UE 120a will set the q value to "1" (e.g., q=1). In the second communication 706, the UE 120a transmits the determined HARQ feedback. In this example, the HARQ feedback comprises two values, (9, 1), where the 9 is the HARQ feedback determined based on the received downlink data transmissions, and the 1 is the determined q value. At the second process 708, the BS 110a will determine which (if any) downlink transmission was not received by the UE based on the HARQ feedback. Here, the BS 110a may use the q value to determine that the downlink transmission corresponding to the lowest ambiguous codepoint value (e.g., 1) was not received by the UE 120a. FIG. 8 is a flow diagram illustrating example operations 800 for wireless communication, in accordance with certain aspects of the present disclosure. The operations 800 may be performed, for example, by a BS (e.g., such as the BS 110a in the wireless communication network 100). The operations 800 may be complementary operations by the BS to the operations 900 performed by the UE.
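Before walking through the blocks of FIG. 8, the q mechanism running through both FIG. 7 examples can be condensed into a short sketch. This is one reading of the rule as described above, with illustrative names, not normative pseudocode from the disclosure:

```python
def q_value(received_codepoints, ambiguous_codepoint):
    """UE side: q = 0 if the lowest codepoint among the received codepoints
    and the ambiguous codepoint was actually received; q = 1 if the lowest
    one is the ambiguous (potentially missed) codepoint."""
    lowest = min(list(received_codepoints) + [ambiguous_codepoint])
    return 1 if lowest == ambiguous_codepoint else 0

def resolve(q, combo_a, combo_b):
    """BS side (second process 708): given two candidate combinations with
    equal sums, q tells the BS whether the overall-lowest codepoint was
    received (q = 0) or missed (q = 1)."""
    lowest = min(combo_a + combo_b)
    low_combo = combo_a if lowest in combo_a else combo_b
    other = combo_b if low_combo is combo_a else combo_a
    return low_combo if q == 0 else other

# First example: UE received codepoints 1, 4, 8; codepoint 5 is ambiguous.
print(q_value([1, 4, 8], 5))              # 0 -> feedback (9, 0)
print(resolve(0, [1, 8], [4, 5]))         # [1, 8] received; 5 was missed

# Second example: UE received codepoints 4, 5, 8; codepoint 1 is ambiguous.
print(q_value([4, 5, 8], 1))              # 1 -> feedback (9, 1)
print(resolve(1, [1, 8], [4, 5]))         # [4, 5] received; 1 was missed
```

In this example a single extra bit suffices because the two colliding combinations differ in whether they contain the overall lowest codepoint.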
Operations800may be implemented as software components that are executed and run on one or more processors (e.g., controller/processor240ofFIG.2). Further, the transmission and reception of signals by the BS in operations800may be enabled, for example, by one or more antennas (e.g., antennas234ofFIG.2). In certain aspects, the transmission and/or reception of signals by the BS may be implemented via a bus interface of one or more processors (e.g., controller/processor240) obtaining and/or outputting signals. The operations800may begin, at a first block802, by indicating, to a UE, a codebook for uplink transmission of HARQ feedback, the codebook based on: (i) a number (M) of candidate downlink transmission occasions for a time period, and (ii) a maximum number (k) of downlink transmissions the UE can receive during the time period. The operations800may proceed to a second block804by selecting an action to perform based on a set of codepoints of the codebook, the set of codepoints comprising: (i) a first combination of codepoints, and (ii) a second combination of codepoints different from the first combination, wherein HARQ feedback calculated from the first combination is equal to HARQ feedback calculated from the second combination. In certain aspects, the HARQ feedback is calculated from multiple codepoints, and wherein each of the multiple codepoints correspond to a unique one or more of the M candidate downlink transmission occasions. In certain aspects, the selecting the action comprises scheduling, by the BS, a plurality of downlink data transmissions to the UE during the time period, wherein a codepoint of each of a plurality of downlink transmission occasions used for the plurality of downlink data transmissions do not correspond to the first combination of codepoints and the second combination of codepoints. In certain aspects, the selecting the action comprises determining, by the BS, a first downlink transmission occasion in the time period for transmitting downlink data to the UE, wherein the determined first downlink transmission occasion in the time period corresponds to a codepoint in the set of codepoints; determining a likelihood that HARQ feedback calculated from the first combination will be equal to HARQ feedback calculated from the second combination; if the likelihood is below a threshold value, transmitting the downlink data to the UE using the first downlink transmission occasion; and if the likelihood is above the threshold value, refraining from transmitting the downlink data using the first downlink transmission occasion. In certain aspects, the selecting the action comprises: determining, by the BS, a first downlink transmission occasion in the time period for transmitting downlink data to the UE, wherein the determined first downlink transmission occasion in the time period corresponds to a codepoint in the set of codepoints; transmitting the downlink data to the UE using the first downlink transmission occasion; and ignoring HARQ feedback received from the UE if the HARQ feedback is calculated from the first combination or the second combination. In certain aspects, the codebook comprises a plurality of codepoints, and wherein each of the plurality of codepoints correspond to a unique binary vector with M elements having a weight less than or equal to k. 
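The binary-vector characterization in the last sentence can be made concrete. With M = 28 and k = 4, the values used in the examples above, the codebook must distinguish every length-M ACK pattern of weight at most k. A small counting sketch follows (the enumeration order and the names are illustrative):

```python
from itertools import combinations
from math import comb

M, k = 28, 4   # values from the examples above

# Each codepoint corresponds to a unique length-M binary vector of weight <= k.
vectors = [
    tuple(1 if i in ones else 0 for i in range(M))
    for w in range(k + 1)
    for ones in combinations(range(M), w)
]

assert len(set(vectors)) == sum(comb(M, w) for w in range(k + 1))
print(len(vectors))   # 24158 distinct ACK patterns for M = 28, k = 4
```

The aggregated feedback of Equation 1 compresses this space into far fewer reported values, which is precisely what makes the collisions discussed above possible.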
In certain aspects, the codebook comprises: a first plurality of codepoints for calculating HARQ feedback for downlink transmissions made over the time period, the first plurality of codepoints corresponding to: (i) downlink transmissions that include a plurality of downlink control information transmissions and (ii) a plurality of downlink data transmissions that do not correspond to either of the first combination of codepoints or the second combination of codepoints; a second plurality of codepoints for calculating HARQ feedback for downlink transmissions made over the time period, the second plurality of codepoints corresponding to: (i) downlink transmissions that include a plurality of downlink control information transmissions, and (ii) a plurality of downlink data transmissions that correspond to the first combination of codepoints; and a third plurality of codepoints for calculating HARQ feedback for downlink transmissions made over the time period, the third plurality of codepoints corresponding to: (i) downlink transmissions that include a plurality of downlink control information transmissions, and (ii) a plurality of downlink data transmissions that correspond to the second combination of codepoints. In certain aspects, the first combination of codepoints includes a codepoint having a lowest value of codepoints in both the first combination and the second combination. FIG. 9 is a flow diagram illustrating example operations 900 for wireless communication, in accordance with certain aspects of the present disclosure. The operations 900 may be performed, for example, by a UE (e.g., such as a UE 120a in the wireless communication network 100). The operations 900 may be complementary operations by the UE to the operations 800 performed by the BS. Operations 900 may be implemented as software components that are executed and run on one or more processors (e.g., controller/processor 280 of FIG. 2). Further, the transmission and reception of signals by the UE in operations 900 may be enabled, for example, by one or more antennas (e.g., antennas 252 of FIG. 2). In certain aspects, the transmission and/or reception of signals by the UE may be implemented via a bus interface of one or more processors (e.g., controller/processor 280) obtaining and/or outputting signals. The operations 900 may begin, at a first block 902, by receiving, from a base station (BS), an indication of a codebook for uplink transmission of hybrid automatic repeat request (HARQ) feedback, the codebook based on: (i) a number (M) of candidate downlink transmission occasions for a time period, and (ii) a maximum number (k) of downlink transmissions the UE can receive during the time period, the codebook comprising: (i) a first plurality of codepoints, (ii) a second plurality of codepoints, (iii) a third plurality of codepoints, and (iv) another one or more codepoints. The operations 900 may proceed to a second block 904 by receiving a downlink transmission over the time period. The operations 900 may proceed to a third block 906 by transmitting HARQ feedback for the downlink transmission, the HARQ feedback calculated using one of: the first plurality of codepoints, the second plurality of codepoints, or the third plurality of codepoints if a codepoint location is ambiguous or if the downlink transmission is received in error; or the other one or more codepoints if a codepoint location is not ambiguous and if the downlink transmission is not received in error.
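The three-way choice at the third block 906 mirrors the pool layout enumerated earlier for M = 28 and k = 4. The sketch below is one hypothetical way to organize that selection; the pool names, the dictionary structure, and keying the choice on the ACK count are assumptions for illustration, with only the pool sizes taken from the text:

```python
# Pools of the enhanced codebook as enumerated earlier (M = 28, k = 4).
POOLS = {
    "0_acks": 1,
    "1_ack": 28,
    "2_acks_no_ambiguity": 32,   # first plurality of codepoints
    "2_acks_q0": 32,             # second plurality (ambiguous location, q = 0)
    "2_acks_q1": 32,             # third plurality (ambiguous location, q = 1)
    "3_acks": 32,
    "4_acks": 1,
}

def select_pool(num_acks, ambiguous_location=False, q=None):
    """Pick which pool the HARQ feedback codepoint is drawn from
    (illustrative mapping, not the disclosure's normative rule)."""
    if num_acks == 2 and ambiguous_location:
        return "2_acks_q0" if q == 0 else "2_acks_q1"
    return {0: "0_acks", 1: "1_ack", 2: "2_acks_no_ambiguity",
            3: "3_acks", 4: "4_acks"}[num_acks]

print(select_pool(2, ambiguous_location=True, q=0))   # '2_acks_q0'
print(select_pool(3))                                 # '3_acks'
```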
In certain aspects, the codebook comprises a plurality of codepoints, and wherein each of the plurality of codepoints correspond to a unique binary vector with M elements having a weight less than or equal to k. In certain aspects, receiving the downlink transmission over the time period further comprises receiving k−1 downlink transmissions over the time period, the method further comprising calculating a single HARQ feedback for an aggregation of the k−1 downlink transmissions. In certain aspects, the operations900further include determining a single codepoint corresponding to each of the k−1 downlink transmissions, wherein the HARQ feedback is calculated based on a sum of codepoints corresponding to each of the k−1 downlink transmissions. In certain aspects, the k−1 downlink transmissions over the time period includes one or more downlink control information transmissions and one or more downlink data transmissions. In certain aspects, the operations900further include: determining codepoints, wherein each of the determined codepoints corresponds to one of the k−1 downlink transmissions; calculating an ambiguous codepoint based on the determined codepoints, wherein the combination of the ambiguous codepoint and the determined codepoints would result in a calculation of HARQ feedback that is true for two separate combinations of codepoints; determining which of the ambiguous codepoint and the determined codepoints has a lowest value; and selecting one of the second plurality of codepoints or the third plurality of codepoints based on which of the ambiguous codepoint and the determined codepoints has the lowest value. In certain aspects, the operations900further include selecting the first plurality of codepoints for calculating HARQ feedback if one of: k−2 of the downlink transmissions are downlink control information transmissions; or k of the downlink transmissions are downlink control information transmissions. FIG.10illustrates a communications device1000that may include various components (e.g., corresponding to means-plus-function components) configured to perform operations for the techniques disclosed herein, such as the operations illustrated inFIG.8. The communications device1000includes a processing system1002coupled to a transceiver1008(e.g., a transmitter and/or a receiver). The transceiver1008is configured to transmit and receive signals for the communications device1000via an antenna1010, such as the various signals as described herein. The processing system1002may be configured to perform processing functions for the communications device1000, including processing signals received and/or to be transmitted by the communications device1000. The processing system1002includes a processor1004coupled to a computer-readable medium/memory1012via a bus1006. In certain aspects, the computer-readable medium/memory1012is configured to store instructions (e.g., computer-executable code) that when executed by the processor1004, cause the processor1004to perform the operations illustrated inFIG.8, or other operations for performing the various techniques discussed herein. In certain aspects, computer-readable medium/memory1012stores code1060for indicating, to a user equipment (UE), a codebook for uplink transmission of hybrid automatic repeat request (HARQ) feedback, the codebook based on: (i) a number (M) of candidate downlink transmission occasions for a time period, and (ii) a maximum number (k) of downlink transmissions the UE can receive during the time period. 
In some examples, computer-readable medium/memory1012may optionally store code1062for selecting an action to perform based on a set of codepoints of the codebook, the set of codepoints comprising: (i) a first combination of codepoints, and (ii) a second combination of codepoints different from the first combination, wherein HARQ feedback calculated from the first combination is equal to HARQ feedback calculated from the second combination. In certain aspects, the processor1004has circuitry1022configured to implement the code stored in the computer-readable medium/memory1012. The processor1004includes circuitry1040for indicating, to a user equipment (UE), a codebook for uplink transmission of hybrid automatic repeat request (HARQ) feedback, the codebook based on: (i) a number (M) of candidate downlink transmission occasions for a time period, and (ii) a maximum number (k) of downlink transmissions the UE can receive during the time period. In some examples, the processor1004may optionally include circuitry1042for selecting an action to perform based on a set of codepoints of the codebook, the set of codepoints comprising: (i) a first combination of codepoints, and (ii) a second combination of codepoints different from the first combination, wherein HARQ feedback calculated from the first combination is equal to HARQ feedback calculated from the second combination. FIG.11illustrates a communications device1100that may include various components (e.g., corresponding to means-plus-function components) configured to perform operations for the techniques disclosed herein, such as the operations illustrated inFIG.9. The communications device1100includes a processing system1102coupled to a transceiver1108(e.g., a transmitter and/or a receiver). The transceiver1108is configured to transmit and receive signals for the communications device1100via an antenna1110, such as the various signals as described herein. The processing system1102may be configured to perform processing functions for the communications device1100, including processing signals received and/or to be transmitted by the communications device1100. The processing system1102includes a processor1104coupled to a computer-readable medium/memory1112via a bus1106. In certain aspects, the computer-readable medium/memory1112is configured to store instructions (e.g., computer-executable code) that when executed by the processor1104, cause the processor1104to perform the operations illustrated inFIG.9, or other operations for performing the various techniques discussed herein. In certain aspects, computer-readable medium/memory1112stores code1160for receiving, from a base station (BS), an indication of a codebook for uplink transmission of hybrid automatic repeat request (HARQ) feedback, the codebook based on: (i) a number (M) of candidate downlink transmission occasions for a time period, and (ii) a maximum number (k) of downlink transmissions the UE can receive during the time period, the codebook comprising: (i) a first plurality of codepoints, (ii) a second plurality of codepoints, (iii) a third plurality of codepoints, and (iv) another one or more codepoints. Computer-readable medium/memory1112may also store code1162for receiving a downlink transmission over the time period. 
Computer-readable medium/memory1112may also store code1164for transmitting HARQ feedback for the downlink transmission, the HARQ feedback calculated using one of: the first plurality of codepoints, the second plurality of codepoints, or the third plurality of codepoints if a codepoint location is ambiguous or if the downlink transmission is received in error; or the other one or more codepoints if a codepoint location is not ambiguous and if the downlink transmission is not received in error. The computer-readable medium/memory1112may optionally store code1166for determining a single codepoint corresponding to each of the k−1 downlink transmissions, wherein the HARQ feedback is calculated based on a sum of codepoints corresponding to each of the k−1 downlink transmissions. The computer-readable medium/memory1112may optionally store code1168for calculating an ambiguous codepoint based on the determined codepoints, wherein the combination of the ambiguous codepoint and the determined codepoints would result in a calculation of HARQ feedback that is true for two separate combinations of codepoints. The computer-readable medium/memory1112may optionally store code1170for determining which of the ambiguous codepoint and the determined codepoints has a lowest value. The computer-readable medium/memory1112may optionally store code1172for selecting one of the second plurality of codepoints or the third plurality of codepoints based on which of the ambiguous codepoint and the determined codepoints has the lowest value. In certain aspects, the processor1104has circuitry1122configured to implement the code stored in the computer-readable medium/memory1112. In certain aspects, the processing system1102includes circuitry1140for receiving, from a base station (BS), an indication of a codebook for uplink transmission of hybrid automatic repeat request (HARQ) feedback, the codebook based on: (i) a number (M) of candidate downlink transmission occasions for a time period, and (ii) a maximum number (k) of downlink transmissions the UE can receive during the time period, the codebook comprising: (i) a first plurality of codepoints, (ii) a second plurality of codepoints, (iii) a third plurality of codepoints, and (iv) another one or more codepoints. The processing system1102includes circuitry1142for receiving a downlink transmission over the time period. The processing system1102includes circuitry1144for transmitting HARQ feedback for the downlink transmission, the HARQ feedback calculated using one of: the first plurality of codepoints, the second plurality of codepoints, or the third plurality of codepoints if a codepoint location is ambiguous or if the downlink transmission is received in error; or the other one or more codepoints if a codepoint location is not ambiguous and if the downlink transmission is not received in error. The processing system1102includes circuitry1146for determining a single codepoint corresponding to each of the plurality of time periods, wherein the HARQ feedback is calculated based on a sum of codepoints corresponding to each of the plurality of time periods. The processing system1102includes circuitry1148for calculating an ambiguous codepoint based on the determined codepoints, wherein the combination of the ambiguous codepoint and the determined codepoints would result in a calculation of HARQ feedback that is true for two separate combinations of codepoints. 
The processing system1102includes circuitry1150for determining which of the ambiguous codepoint and the determined codepoints has a lowest value. The processing system1102includes circuitry1152for selecting one of the second plurality of codepoints or the third plurality of codepoints based on which of the ambiguous codepoint and the determined codepoints has the lowest value. Example Aspects Implementation examples are described in the following numbered clauses:1. A method for wireless communication by a base station (BS), the method comprising: indicating, to a user equipment (UE), a codebook for uplink transmission of hybrid automatic repeat request (HARQ) feedback, the codebook based on: (i) a number (M) of candidate downlink transmission occasions for a time period, and (ii) a maximum number (k) of downlink transmissions the UE can receive during the time period; and selecting an action to perform based on a set of codepoints of the codebook, the set of codepoints comprising: (i) a first combination of codepoints, and (ii) a second combination of codepoints different from the first combination, wherein HARQ feedback calculated from the first combination is equal to HARQ feedback calculated from the second combination.2. The method of aspect 1, wherein the HARQ feedback is calculated from multiple codepoints, and wherein each of the multiple codepoints correspond to a unique one or more of the M candidate downlink transmission occasions.3. The method of any of aspects 1 and 2, wherein the selecting the action comprises scheduling, by the BS, a plurality of downlink data transmissions to the UE during the time period, wherein a codepoint of each of a plurality of downlink transmission occasions used for the plurality of downlink data transmissions do not correspond to the first combination of codepoints and the second combination of codepoints.4. The method of any of aspects 1-3, wherein the selecting the action comprises: determining, by the BS, a first downlink transmission occasion in the time period for transmitting downlink data to the UE, wherein the determined first downlink transmission occasion in the time period corresponds to a codepoint in the set of codepoints; determining a likelihood that HARQ feedback calculated from the first combination will be equal to HARQ feedback calculated from the second combination; if the likelihood is below a threshold value, transmitting the downlink data to the UE using the first downlink transmission occasion; and if the likelihood is above the threshold value, refraining from transmitting the downlink data using the first downlink transmission occasion.5. The method of any of aspects 1-4, wherein the selecting the action comprises: determining, by the BS, a first downlink transmission occasion in the time period for transmitting downlink data to the UE, wherein the determined first downlink transmission occasion in the time period corresponds to a codepoint in the set of codepoints; transmitting the downlink data to the UE using the first downlink transmission occasion; and ignoring HARQ feedback received from the UE if the HARQ feedback is calculated from the first combination or the second combination.6. The method of any of aspects 1-5, wherein the codebook comprises a plurality of codepoints, and wherein each of the plurality of codepoints correspond to a unique binary vector with M elements having a weight less than or equal to k.7. 
The method of any of aspects 1-6, wherein the codebook comprises: a first plurality of codepoints for calculating HARQ feedback for downlink transmissions made over the time period, the first plurality of codepoints corresponding to: (i) downlink transmissions that include a plurality of downlink control information transmissions and (ii) a plurality of downlink data transmissions that do not correspond to either of the first combination of codepoints or the second combination of codepoints; a second plurality of codepoints for calculating HARQ feedback for downlink transmissions made over the time period, the second plurality of codepoints corresponding to: (i) downlink transmissions that include a plurality of downlink control information transmissions, and (ii) a plurality of downlink data transmissions that correspond to the first combination of codepoints; and a third plurality of codepoints for calculating HARQ feedback for downlink transmissions made over the time period, the third plurality of codepoints corresponding to: (i) downlink transmissions that include a plurality of downlink control information transmissions, and (ii) a plurality of downlink data transmissions that correspond to the second combination of codepoints.8. The method of aspect 7, wherein the first combination of codepoints includes a codepoint having a lowest value of codepoints in both the first combination and the second combination.9. A method for wireless communication by a user equipment (UE), the method comprising: receiving, from a base station (BS), an indication of a codebook for uplink transmission of hybrid automatic repeat request (HARQ) feedback, the codebook based on: (i) a number (M) of candidate downlink transmission occasions for a time period, and (ii) a maximum number (k) of downlink transmissions the UE can receive during the time period, the codebook comprising: (i) a first plurality of codepoints, (ii) a second plurality of codepoints, (iii) a third plurality of codepoints, and (iv) another one or more codepoints; receiving a downlink transmission over the time period; and transmitting HARQ feedback for the downlink transmission, the HARQ feedback calculated using one of: the first plurality of codepoints, the second plurality of codepoints, or the third plurality of codepoints if a codepoint location is ambiguous or if the downlink transmission is received in error; or the other one or more codepoints if a codepoint location is not ambiguous and if the downlink transmission is not received in error.10. The method of aspects 9, wherein the codebook comprises a plurality of codepoints, and wherein each of the plurality of codepoints correspond to a unique binary vector with M elements having a weight less than or equal to k.11. The method of any of aspects 9 and 10, wherein receiving the downlink transmission over the time period further comprises receiving k−1 downlink transmissions over the time period, the method further comprising calculating a single HARQ feedback for an aggregation of the k−1 downlink transmissions.12. The method of any of aspects 9-11, further comprising determining a single codepoint corresponding to each of the k−1 downlink transmissions, wherein the HARQ feedback is calculated based on a sum of codepoints corresponding to each of the k−1 downlink transmissions.13. 
The method of any of aspects 9-12, wherein the k−1 downlink transmissions over the time period includes one or more downlink control information transmissions and one or more downlink data transmissions.14. The method of any of aspects 9-13, further comprising: determining codepoints, wherein each of the determined codepoints corresponds to one of the k−1 downlink transmissions; calculating an ambiguous codepoint based on the determined codepoints, wherein the combination of the ambiguous codepoint and the determined codepoints would result in a calculation of HARQ feedback that is true for two separate combinations of codepoints; determining which of the ambiguous codepoint and the determined codepoints has a lowest value; and selecting one of the second plurality of codepoints or the third plurality of codepoints based on which of the ambiguous codepoint and the determined codepoints has the lowest value.15. The method of any of aspects 9-13, further comprising selecting the first plurality of codepoints for calculating HARQ feedback if one of: k−2 of the downlink transmissions are downlink control information transmissions; or k of the downlink transmissions are downlink control information transmissions.16. A base station (BS), comprising: a memory; and a processor coupled to the memory, the processor and the memory configured to: indicate, to a user equipment (UE), a codebook for uplink transmission of hybrid automatic repeat request (HARQ) feedback, the codebook based on: (i) a number (M) of candidate downlink transmission occasions for a time period, and (ii) a maximum number (k) of downlink transmissions the UE can receive during the time period; and select an action to perform based on a set of codepoints of the codebook, the set of codepoints comprising: (i) a first combination of codepoints, and (ii) a second combination of codepoints different from the first combination, wherein HARQ feedback calculated from the first combination is equal to HARQ feedback calculated from the second combination.17. The BS of aspect 16, wherein the HARQ feedback is calculated from multiple codepoints, and wherein each of the multiple codepoints correspond to a unique candidate downlink transmission occasion.18. The BS of any of aspects 16 and 17, wherein the processor and the memory, being configured to select the action, are further configured to schedule a plurality of downlink data transmissions to the UE during the time period, wherein a codepoint of each of a plurality of downlink transmission occasions used for the plurality of downlink data transmissions does not correspond to the first combination of codepoints and the second combination of codepoints.19. The BS of any of aspects 16-18, wherein the processor and the memory, being configured to select the action, are further configured to: determine a downlink transmission occasion in the time period for transmitting downlink data to the UE, wherein the downlink transmission occasion corresponds to a codepoint; determine a likelihood that HARQ feedback calculated from the first combination will be equal to HARQ feedback calculated from the second combination; if the likelihood is below a threshold value, transmit the downlink data to the UE using the downlink transmission occasion of the time period; and if the likelihood is above the threshold value, refrain from transmitting at least a portion of the downlink data to the UE.20. 
The BS of any of aspects 16-19, wherein the processor and the memory, being configured to select the action, are further configured to: determine a downlink transmission occasion in the time period for transmitting downlink data to the UE, wherein the downlink transmission occasion corresponds to a codepoint; transmit the downlink data to the UE using the downlink transmission occasion of the time period; and ignore HARQ feedback received from the UE if the HARQ feedback is calculated from the first combination or the second combination.21. The BS of any of aspects 16-20, wherein the codebook comprises a plurality of codepoints, and wherein each of the plurality of codepoints correspond to a unique binary vector with M elements having a weight less than or equal to k.22. The BS of any of aspects 16-21, wherein the codebook comprises: a first plurality of codepoints for calculating HARQ feedback for downlink transmissions made over the time period, the first plurality of codepoints corresponding to: (i) downlink transmissions that include a plurality of downlink control information transmissions and (ii) a plurality of downlink data transmissions that do not correspond to either of the first combination of codepoints or the second combination of codepoints; a second plurality of codepoints for calculating HARQ feedback for downlink transmissions made over the time period, the second plurality of codepoints corresponding to: (i) downlink transmissions that include a plurality of downlink control information transmissions, and (ii) a plurality of downlink data transmissions that correspond to the first combination of codepoints; and a third plurality of codepoints for calculating HARQ feedback for downlink transmissions made over the time period, the third plurality of codepoints corresponding to: (i) downlink transmissions that include a plurality of downlink control information transmissions, and (ii) a plurality of downlink data transmissions that correspond to the second combination of codepoints.23. The BS of any of aspects 16-22, wherein the first combination of codepoints includes a codepoint having a lowest value of codepoints in both the first combination and the second combination.24. A user equipment (UE), comprising: a memory; and a processor coupled to the memory, the processor and the memory configured to: receive, from a base station (BS), an indication of a codebook for uplink transmission of hybrid automatic repeat request (HARQ) feedback, the codebook based on: (i) a number (M) of candidate downlink transmission occasions for a time period, and (ii) a maximum number (k) of downlink transmissions the UE can receive during the time period, the codebook comprising: (i) a first plurality of codepoints, (ii) a second plurality of codepoints, (iii) a third plurality of codepoints, and (iv) another one or more codepoints; receive a downlink transmission over the time period; and transmit HARQ feedback for the downlink transmission, the HARQ feedback calculated using one of: the first plurality of codepoints, the second plurality of codepoints, or the third plurality of codepoints if a codepoint location is ambiguous or if the downlink transmission is received in error; or the other one or more codepoints if a codepoint location is not ambiguous and if the downlink transmission is not received in error.25.
The UE of aspect 24, wherein the codebook comprises a plurality of codepoints, and wherein each of the plurality of codepoints correspond to a unique binary vector with M elements having a weight less than or equal to k.26. The UE of any of aspects 24 and 25, wherein the processor and the memory, being configured to receive the downlink transmission over the time period, are further configured to: receive k−1 downlink transmissions over the time period; and calculate a single HARQ feedback for an aggregation of the k−1 downlink transmissions.27. The UE of any of aspects 24-26, wherein the processor and the memory are further configured to determine a single codepoint corresponding to each of the plurality of downlink transmission occasions, wherein the HARQ feedback is calculated based on a sum of codepoints corresponding to each of the downlink transmission occasions over which downlink data is transmitted.28. The UE of any of aspects 24-27, wherein the k−1 downlink transmissions over the time period include one or more downlink control information transmissions and one or more downlink data transmissions.29. The UE of any of aspects 24-28, wherein the processor and the memory are further configured to: determine codepoints for each of the k−1 downlink transmissions; calculate an ambiguous codepoint based on the determined codepoints, wherein the combination of the ambiguous codepoint and the determined codepoints would result in a calculation of HARQ feedback that is true for two separate combinations of codepoints; determine which of the ambiguous codepoint and the determined codepoints has a lowest value; and select one of the second plurality of codepoints or the third plurality of codepoints based on which of the ambiguous codepoint and the determined codepoints has the lowest value.30. The UE of any of aspects 24-29, wherein the processor and the memory are further configured to select the first plurality of codepoints for calculating HARQ feedback if one of: k−2 of the downlink transmissions are downlink control information transmissions; or k of the downlink transmissions are downlink control information transmissions.
ADDITIONAL CONSIDERATIONS
The techniques described herein may be used for various wireless communication technologies, such as NR (e.g., 5G NR), 3GPP Long Term Evolution (LTE), LTE-Advanced (LTE-A), code division multiple access (CDMA), time division multiple access (TDMA), frequency division multiple access (FDMA), orthogonal frequency division multiple access (OFDMA), single-carrier frequency division multiple access (SC-FDMA), time division synchronous code division multiple access (TD-SCDMA), and other networks. The terms "network" and "system" are often used interchangeably. A CDMA network may implement a radio technology such as Universal Terrestrial Radio Access (UTRA), CDMA2000, etc. UTRA includes Wideband CDMA (WCDMA) and other variants of CDMA. CDMA2000 covers IS-2000, IS-95 and IS-856 standards. A TDMA network may implement a radio technology such as Global System for Mobile Communications (GSM). An OFDMA network may implement a radio technology such as NR (e.g., 5G RA), Evolved UTRA (E-UTRA), Ultra Mobile Broadband (UMB), IEEE 802.11 (Wi-Fi), IEEE 802.16 (WiMAX), IEEE 802.20, Flash-OFDMA, etc. UTRA and E-UTRA are part of Universal Mobile Telecommunication System (UMTS). LTE and LTE-A are releases of UMTS that use E-UTRA.
UTRA, E-UTRA, UMTS, LTE, LTE-A and GSM are described in documents from an organization named “3rd Generation Partnership Project” (3GPP). CDMA2000 and UMB are described in documents from an organization named “3rd Generation Partnership Project 2” (3GPP2). NR is an emerging wireless communications technology under development. In 3GPP, the term “cell” can refer to a coverage area of a Node B (NB) and/or a NB subsystem serving this coverage area, depending on the context in which the term is used. In NR systems, the term “cell” and BS, next generation NodeB (gNB or gNodeB), access point (AP), distributed unit (DU), carrier, or transmission reception point (TRP) may be used interchangeably. A BS may provide communication coverage for a macro cell, a pico cell, a femto cell, and/or other types of cells. A macro cell may cover a relatively large geographic area (e.g., several kilometers in radius) and may allow unrestricted access by UEs with service subscription. A pico cell may cover a relatively small geographic area and may allow unrestricted access by UEs with service subscription. A femto cell may cover a relatively small geographic area (e.g., a home) and may allow restricted access by UEs having an association with the femto cell (e.g., UEs in a Closed Subscriber Group (CSG), UEs for users in the home, etc.). A BS for a macro cell may be referred to as a macro BS. A BS for a pico cell may be referred to as a pico BS. A BS for a femto cell may be referred to as a femto BS or a home BS. A UE may also be referred to as a mobile station, a terminal, an access terminal, a subscriber unit, a station, a Customer Premises Equipment (CPE), a cellular phone, a smart phone, a personal digital assistant (PDA), a wireless modem, a wireless communication device, a handheld device, a laptop computer, a cordless phone, a wireless local loop (WLL) station, a tablet computer, a camera, a gaming device, a netbook, a smartbook, an ultrabook, an appliance, a medical device or medical equipment, a biometric sensor/device, a wearable device such as a smart watch, smart clothing, smart glasses, a smart wrist band, smart jewelry (e.g., a smart ring, a smart bracelet, etc.), an entertainment device (e.g., a music device, a video device, a satellite radio, etc.), a vehicular component or sensor, a smart meter/sensor, industrial manufacturing equipment, a global positioning system device, or any other suitable device that is configured to communicate via a wireless or wired medium. Some UEs may be considered machine-type communication (MTC) devices or evolved MTC (eMTC) devices. MTC and eMTC UEs include, for example, robots, drones, remote devices, sensors, meters, monitors, location tags, etc., that may communicate with a BS, another device (e.g., remote device), or some other entity. A wireless node may provide, for example, connectivity for or to a network (e.g., a wide area network such as Internet or a cellular network) via a wired or wireless communication link. Some UEs may be considered Internet-of-Things (IoT) devices, which may be narrowband IoT (NB-IoT) devices. In some examples, access to the air interface may be scheduled. A scheduling entity (e.g., a BS) allocates resources for communication among some or all devices and equipment within its service area or cell. The scheduling entity may be responsible for scheduling, assigning, reconfiguring, and releasing resources for one or more subordinate entities. 
That is, for scheduled communication, subordinate entities utilize resources allocated by the scheduling entity. Base stations are not the only entities that may function as a scheduling entity. In some examples, a UE may function as a scheduling entity and may schedule resources for one or more subordinate entities (e.g., one or more other UEs), and the other UEs may utilize the resources scheduled by the UE for wireless communication. In some examples, a UE may function as a scheduling entity in a peer-to-peer (P2P) network, and/or in a mesh network. In a mesh network example, UEs may communicate directly with one another in addition to communicating with a scheduling entity. The methods disclosed herein comprise one or more steps or actions for achieving the methods. The method steps and/or actions may be interchanged with one another without departing from the scope of the claims. In other words, unless a specific order of steps or actions is specified, the order and/or use of specific steps and/or actions may be modified without departing from the scope of the claims. As used herein, a phrase referring to "at least one of" a list of items refers to any combination of those items, including single members. As an example, "at least one of: a, b, or c" is intended to cover a, b, c, a-b, a-c, b-c, and a-b-c, as well as any combination with multiples of the same element (e.g., a-a, a-a-a, a-a-b, a-a-c, a-b-b, a-c-c, b-b, b-b-b, b-b-c, c-c, and c-c-c or any other ordering of a, b, and c). As used herein, the term "determining" encompasses a wide variety of actions. For example, "determining" may include calculating, computing, processing, deriving, investigating, looking up (e.g., looking up in a table, a database or another data structure), ascertaining and the like. Also, "determining" may include receiving (e.g., receiving information), accessing (e.g., accessing data in a memory) and the like. Also, "determining" may include resolving, selecting, choosing, establishing and the like. The previous description is provided to enable any person skilled in the art to practice the various aspects described herein. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects. Thus, the claims are not intended to be limited to the aspects shown herein, but are to be accorded the full scope consistent with the language of the claims, wherein reference to an element in the singular is not intended to mean "one and only one" unless specifically so stated, but rather "one or more." Unless specifically stated otherwise, the term "some" refers to one or more. All structural and functional equivalents to the elements of the various aspects described throughout this disclosure that are known or later come to be known to those of ordinary skill in the art are expressly incorporated herein by reference and are intended to be encompassed by the claims. Moreover, nothing disclosed herein is intended to be dedicated to the public regardless of whether such disclosure is explicitly recited in the claims. No claim element is to be construed under the provisions of 35 U.S.C. § 112(f) unless the element is expressly recited using the phrase "means for" or, in the case of a method claim, the element is recited using the phrase "step for." The various operations of methods described above may be performed by any suitable means capable of performing the corresponding functions.
The means may include various hardware and/or software component(s) and/or module(s), including, but not limited to a circuit, an application specific integrated circuit (ASIC), or processor. Generally, where there are operations illustrated in figures, those operations may have corresponding counterpart means-plus-function components with similar numbering. The various illustrative logical blocks, modules and circuits described in connection with the present disclosure may be implemented or performed with a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device (PLD), discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but in the alternative, the processor may be any commercially available processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. If implemented in hardware, an example hardware configuration may comprise a processing system in a wireless node. The processing system may be implemented with a bus architecture. The bus may include any number of interconnecting buses and bridges depending on the specific application of the processing system and the overall design constraints. The bus may link together various circuits including a processor, machine-readable media, and a bus interface. The bus interface may be used to connect a network adapter, among other things, to the processing system via the bus. The network adapter may be used to implement the signal processing functions of the PHY layer. In the case of a user terminal (seeFIG.1), a user interface (e.g., keypad, display, mouse, joystick, etc.) may also be connected to the bus. The bus may also link various other circuits such as timing sources, peripherals, voltage regulators, power management circuits, and the like, which are well known in the art, and therefore, will not be described any further. The processor may be implemented with one or more general-purpose and/or special-purpose processors. Examples include microprocessors, microcontrollers, DSP processors, and other circuitry that can execute software. Those skilled in the art will recognize how best to implement the described functionality for the processing system depending on the particular application and the overall design constraints imposed on the overall system. If implemented in software, the functions may be stored or transmitted over as one or more instructions or code on a computer readable medium. Software shall be construed broadly to mean instructions, data, or any combination thereof, whether referred to as software, firmware, middleware, microcode, hardware description language, or otherwise. Computer-readable media include both computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. The processor may be responsible for managing the bus and general processing, including the execution of software modules stored on the machine-readable storage media. 
A computer-readable storage medium may be coupled to a processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. By way of example, the machine-readable media may include a transmission line, a carrier wave modulated by data, and/or a computer readable storage medium with instructions stored thereon separate from the wireless node, all of which may be accessed by the processor through the bus interface. Alternatively, or in addition, the machine-readable media, or any portion thereof, may be integrated into the processor, such as the case may be with cache and/or general register files. Examples of machine-readable storage media may include, by way of example, RAM (Random Access Memory), flash memory, ROM (Read Only Memory), PROM (Programmable Read-Only Memory), EPROM (Erasable Programmable Read-Only Memory), EEPROM (Electrically Erasable Programmable Read-Only Memory), registers, magnetic disks, optical disks, hard drives, or any other suitable storage medium, or any combination thereof. The machine-readable media may be embodied in a computer-program product. A software module may comprise a single instruction, or many instructions, and may be distributed over several different code segments, among different programs, and across multiple storage media. The computer-readable media may comprise a number of software modules. The software modules include instructions that, when executed by an apparatus such as a processor, cause the processing system to perform various functions. The software modules may include a transmission module and a receiving module. Each software module may reside in a single storage device or be distributed across multiple storage devices. By way of example, a software module may be loaded into RAM from a hard drive when a triggering event occurs. During execution of the software module, the processor may load some of the instructions into cache to increase access speed. One or more cache lines may then be loaded into a general register file for execution by the processor. When referring to the functionality of a software module below, it will be understood that such functionality is implemented by the processor when executing instructions from that software module. Also, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared (IR), radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray® disc where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Thus, in some aspects computer-readable media may comprise non-transitory computer-readable media (e.g., tangible media). In addition, for other aspects computer-readable media may comprise transitory computer-readable media (e.g., a signal). Combinations of the above should also be included within the scope of computer-readable media. Thus, certain aspects may comprise a computer program product for performing the operations presented herein. 
For example, such a computer program product may comprise a computer-readable medium having instructions stored (and/or encoded) thereon, the instructions being executable by one or more processors to perform the operations described herein, for example, instructions for performing the operations described herein and illustrated inFIG.8and/orFIG.9. Further, it should be appreciated that modules and/or other appropriate means for performing the methods and techniques described herein can be downloaded and/or otherwise obtained by a user terminal and/or base station as applicable. For example, such a device can be coupled to a server to facilitate the transfer of means for performing the methods described herein. Alternatively, various methods described herein can be provided via storage means (e.g., RAM, ROM, a physical storage medium such as a compact disc (CD) or floppy disk, etc.), such that a user terminal and/or base station can obtain the various methods upon coupling or providing the storage means to the device. Moreover, any other suitable technique for providing the methods and techniques described herein to a device can be utilized. It is to be understood that the claims are not limited to the precise configuration and components illustrated above. Various modifications, changes and variations may be made in the arrangement, operation and details of the methods and apparatus described above without departing from the scope of the claims.
DETAILED DESCRIPTION A base station which communicates with a user equipment (UE) is described. The base station may comprise higher layer processing circuitry configured to send first radio resource control (RRC) configuration information and second RRC configuration information. The first RRC configuration information may indicate that the Physical Downlink Shared Channel Hybrid Automatic Repeat Request-Acknowledgment (PDSCH HARQ-ACK) codebook is semi-static. The second RRC configuration information may indicate that a maximum number of codewords scheduled by Downlink Control Information (DCI) is two. The base station may also comprise transmitting circuitry configured to, after a channel access procedure, transmit, to the UE, a PDSCH which contains only a first transport block. The base station may further comprise receiving circuitry configured to receive, from the UE, a HARQ-ACK feedback including at least a first HARQ-ACK information bit and a second HARQ-ACK information bit. The first HARQ-ACK information bit may correspond to the first transport block of the PDSCH. The second HARQ-ACK information bit may correspond to a second transport block of the PDSCH. The second HARQ-ACK information bit may be set to Negative ACK (NACK). A contention window for the channel access procedure may be adjusted using the HARQ-ACK feedback, wherein the second HARQ-ACK information bit may be ignored. A base station which communicates with a user equipment (UE) is described. The base station may comprise higher layer processing circuitry configured to send radio resource control (RRC) configuration information. The RRC configuration information may indicate that the Physical Downlink Shared Channel Hybrid Automatic Repeat Request-Acknowledgment (PDSCH HARQ-ACK) codebook is semi-static. The base station may also comprise transmitting circuitry configured to, after a channel access procedure, transmit, to the UE, a PDSCH in a first slot and not transmit, to the UE, any PDSCH in a second slot. The base station may further comprise receiving circuitry configured to receive, from the UE, a HARQ-ACK feedback including at least a first HARQ-ACK information bit and a second HARQ-ACK information bit. The first HARQ-ACK information bit may correspond to the PDSCH in the first slot. The second HARQ-ACK information bit may correspond to a PDSCH in the second slot. The second HARQ-ACK information bit may be set to Negative ACK (NACK). A contention window for the channel access procedure may be adjusted using the HARQ-ACK feedback, wherein the second HARQ-ACK information bit may be ignored. A base station which communicates with a user equipment (UE) is described. The base station may comprise higher layer processing circuitry configured to send first radio resource control (RRC) configuration information and second RRC configuration information. The first RRC configuration information may indicate that the Physical Downlink Shared Channel Hybrid Automatic Repeat Request-Acknowledgment (PDSCH HARQ-ACK) codebook is semi-static. The second RRC configuration information may indicate that a PDSCH Aggregation Factor is set to N, which is an integer greater than 1. The base station may also comprise transmitting circuitry configured to, after a channel access procedure, transmit, to the UE, PDSCHs carrying a transport block in N slots. The base station may further comprise receiving circuitry configured to, for the transport block, receive, from the UE, only a HARQ-ACK information bit. 
The HARQ-ACK information bit may correspond to a last slot of the N slots. A contention window for the channel access procedure may be adjusted using a HARQ-ACK information for another slot of the N slots, wherein the HARQ-ACK information may be assumed to be the same value as the HARQ-ACK information bit corresponding to the last slot of the N slots. A method for a base station which communicates with a user equipment (UE) is described. The method may comprise sending first radio resource control (RRC) configuration information. The first RRC configuration information may indicate that the Physical Downlink Shared Channel Hybrid Automatic Repeat Request-Acknowledgment (PDSCH HARQ-ACK) codebook is semi-static. The method may also comprise sending second RRC configuration information. The second RRC configuration information may indicate that a maximum number of codewords scheduled by DCI is two. The method may further comprise, after a channel access procedure, transmitting, to the UE, a PDSCH which contains only a first transport block. The method may further comprise receiving, from the UE, a HARQ-ACK feedback including at least a first HARQ-ACK information bit and a second HARQ-ACK information bit. The first HARQ-ACK information bit may correspond to the first transport block of the PDSCH. The second HARQ-ACK information bit may correspond to a second transport block of the PDSCH. The second HARQ-ACK information bit may be set to Negative ACK (NACK). A contention window for the channel access procedure may be adjusted using the HARQ-ACK feedback, wherein the second HARQ-ACK information bit is ignored. A method for a base station which communicates with a user equipment (UE) is described. The method may comprise sending radio resource control (RRC) configuration information. The RRC configuration information may indicate that the Physical Downlink Shared Channel Hybrid Automatic Repeat Request-Acknowledgment (PDSCH HARQ-ACK) codebook is semi-static. The method may also comprise, after a channel access procedure, transmitting, to the UE, a PDSCH in a first slot and not transmitting, to the UE, any PDSCH in a second slot. The method may further comprise receiving, from the UE, a HARQ-ACK feedback including at least a first HARQ-ACK information bit and a second HARQ-ACK information bit. The first HARQ-ACK information bit may correspond to the PDSCH in the first slot. The second HARQ-ACK information bit may correspond to a PDSCH in the second slot. The second HARQ-ACK information bit may be set to Negative ACK (NACK). A contention window for the channel access procedure may be adjusted using the HARQ-ACK feedback, wherein the second HARQ-ACK information bit may be ignored. A method for a base station which communicates with a user equipment (UE) is described. The method may comprise sending first radio resource control (RRC) configuration information. The first RRC configuration information may indicate that the Physical Downlink Shared Channel Hybrid Automatic Repeat Request-Acknowledgment (PDSCH HARQ-ACK) codebook is semi-static. The method may also comprise sending second RRC configuration information. The second RRC configuration information may indicate that a PDSCH Aggregation Factor is set to N, which is an integer greater than 1. The method may further comprise, after a channel access procedure, transmitting, to the UE, PDSCHs carrying a transport block in N slots. The method may further comprise, for the transport block, receiving, from the UE, only a HARQ-ACK information bit. 
The HARQ-ACK information bit may correspond to a last slot of the N slots. A contention window for the channel access procedure may be adjusted using a HARQ-ACK information for another slot of the N slots, wherein the HARQ-ACK information is assumed to be the same value as the HARQ-ACK information bit corresponding to the last slot of the N slots. The 3rd Generation Partnership Project, also referred to as "3GPP," is a collaboration agreement that aims to define globally applicable technical specifications and technical reports for third and fourth generation wireless communication systems. The 3GPP may define specifications for next generation mobile networks, systems and devices. 3GPP Long Term Evolution (LTE) is the name given to a project to improve the Universal Mobile Telecommunications System (UMTS) mobile phone or device standard to cope with future requirements. In one aspect, UMTS has been modified to provide support and specification for the Evolved Universal Terrestrial Radio Access (E-UTRA) and Evolved Universal Terrestrial Radio Access Network (E-UTRAN). At least some aspects of the systems and methods disclosed herein may be described in relation to the 3GPP LTE, LTE-Advanced (LTE-A) and other standards (e.g., 3GPP Releases 8, 9, 10, 11, 12, 13, 14 and/or 15) including New Radio (NR), which is also known as the 5th Generation NR (5G NR). However, the scope of the present disclosure should not be limited in this regard. At least some aspects of the systems and methods disclosed herein may be utilized in other types of wireless communication systems. A wireless communication device may be an electronic device used to communicate voice and/or data to a base station, which in turn may communicate with a network of devices (e.g., public switched telephone network (PSTN), the Internet, etc.). In describing systems and methods herein, a wireless communication device may alternatively be referred to as a mobile station, a UE, an access terminal, a subscriber station, a mobile terminal, a remote station, a user terminal, a terminal, a subscriber unit, a mobile device, etc. Examples of wireless communication devices include cellular phones, smart phones, personal digital assistants (PDAs), laptop computers, netbooks, e-readers, wireless modems, vehicles, Internet of Things (IoT) devices, etc. In 3GPP specifications, a wireless communication device is typically referred to as a UE. However, as the scope of the present disclosure should not be limited to the 3GPP standards, the terms "UE" and "wireless communication device" may be used interchangeably herein to mean the more general term "wireless communication device." A UE may also be more generally referred to as a terminal device. In 3GPP specifications, a base station is typically referred to as a Node B, an evolved Node B (eNB), a home enhanced or evolved Node B (HeNB), a next Generation Node B (gNB) or some other similar terminology. As the scope of the disclosure should not be limited to 3GPP standards, the terms "base station," "Node B," "eNB," "HeNB," and "gNB" may be used interchangeably herein to mean the more general term "base station." Furthermore, the term "base station" may be used to denote an access point. An access point may be an electronic device that provides access to a network (e.g., Local Area Network (LAN), the Internet, etc.) for wireless communication devices. The term "communication device" may be used to denote both a wireless communication device and/or a base station. 
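By way of illustration only, the contention window adjustment behaviors summarized above can be expressed as a short sketch. This is an editorial sketch rather than the described circuitry: the window ladder, the doubling/reset policy and the 80% NACK threshold are assumptions. What the description above specifies is only which HARQ-ACK bits are counted: padded NACK bits are ignored, and for slot aggregation the single reported bit is assumed for the other N−1 slots.

```python
# Editorial sketch only: the CW ladder, doubling/reset policy and the 80%
# NACK threshold are assumptions; the description above specifies only which
# HARQ-ACK bits are counted when the contention window is adjusted.

CW_SIZES = [15, 31, 63]  # assumed ladder of contention-window sizes

def effective_feedback(feedback_bits, ignored_positions=(), replicate_from=None):
    """Filter a semi-static HARQ-ACK codebook before CW adjustment.

    feedback_bits     -- 'ACK'/'NACK' bits reported by the UE
    ignored_positions -- indices of padded bits (e.g., the NACK generated for
                         a second transport block that was never transmitted)
    replicate_from    -- optional map {other_slot_bit: last_slot_bit} used when
                         the PDSCH Aggregation Factor N > 1: the bit reported
                         for the last slot is assumed for the other slots
    """
    bits = dict(enumerate(feedback_bits))
    for pos in ignored_positions:
        bits.pop(pos, None)
    if replicate_from:
        for other, last in replicate_from.items():
            bits[other] = feedback_bits[last]
    return list(bits.values())

def adjust_cw(cw_index, counted_bits, nack_threshold=0.8):
    """Grow the CW when at least 80% of the counted bits are NACK (assumed
    threshold), otherwise reset to the smallest window."""
    if not counted_bits:
        return cw_index
    nack_ratio = sum(b == 'NACK' for b in counted_bits) / len(counted_bits)
    if nack_ratio >= nack_threshold:
        return min(cw_index + 1, len(CW_SIZES) - 1)
    return 0

# One TB was transmitted; the second codebook bit is a padded NACK and is
# ignored, so the contention window is not grown.
counted = effective_feedback(['ACK', 'NACK'], ignored_positions=[1])
print(CW_SIZES[adjust_cw(0, counted)])  # -> 15
```

The point of the filtering step is that a NACK which the UE generated only to fill a fixed-size codebook never counts as evidence of a collision.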
An eNB and gNB may also be more generally referred to as a base station device. It should be noted that as used herein, a "cell" may be any communication channel that is specified by standardization or regulatory bodies to be used for International Mobile Telecommunications-Advanced (IMT-Advanced) and all of it or a subset of it may be adopted by 3GPP as licensed bands (e.g., frequency bands) to be used for communication between an eNB and a UE. It should also be noted that in E-UTRA and E-UTRAN overall description, as used herein, a "cell" may be defined as "combination of downlink and optionally uplink resources." The linking between the carrier frequency of the downlink resources and the carrier frequency of the uplink resources may be indicated in the system information transmitted on the downlink resources. "Configured cells" are those cells of which the UE is aware and is allowed by an eNB to transmit or receive information. "Configured cell(s)" may be serving cell(s). The UE may receive system information and perform the required measurements on all configured cells. "Configured cell(s)" for a radio connection may include a primary cell and/or no, one, or more secondary cell(s). "Activated cells" are those configured cells on which the UE is transmitting and receiving. That is, activated cells are those cells for which the UE monitors the physical downlink control channel (PDCCH) and, in the case of a downlink transmission, those cells for which the UE decodes a physical downlink shared channel (PDSCH). "Deactivated cells" are those configured cells on which the UE is not monitoring the transmission of the PDCCH. It should be noted that a "cell" may be described in terms of differing dimensions. For example, a "cell" may have temporal, spatial (e.g., geographical) and frequency characteristics. The 5th generation communication systems, dubbed NR (New Radio technologies) by 3GPP, envision the use of time/frequency/space resources to allow for services, such as eMBB (enhanced Mobile Broad-Band) transmission, URLLC (Ultra-Reliable and Low Latency Communication) transmission, and mMTC (massive Machine Type Communication) transmission. Also, in NR, single-beam and/or multi-beam operations are considered for downlink and/or uplink transmissions. Various examples of the systems and methods disclosed herein are now described with reference to the Figures, where like reference numbers may indicate functionally similar elements. The systems and methods as generally described and illustrated in the Figures herein could be arranged and designed in a wide variety of different implementations. Thus, the following more detailed description of several implementations, as represented in the Figures, is not intended to limit scope, as claimed, but is merely representative of the systems and methods. FIG.1is a block diagram illustrating one implementation of one or more gNBs160and one or more UEs102in which systems and methods for downlink and uplink transmissions may be implemented. The one or more UEs102communicate with one or more gNBs160using one or more physical antennas122a-n. For example, a UE102transmits electromagnetic signals to the gNB160and receives electromagnetic signals from the gNB160using the one or more physical antennas122a-n. The gNB160communicates with the UE102using one or more physical antennas180a-n. The UE102and the gNB160may use one or more channels and/or one or more signals119,121to communicate with each other. 
For example, the UE102may transmit information or data to the gNB160using one or more uplink channels121. Examples of uplink channels121include a physical shared channel (e.g., PUSCH (Physical Uplink Shared Channel)) and/or a physical control channel (e.g., PUCCH (Physical Uplink Control Channel)), etc. The one or more gNBs160may also transmit information or data to the one or more UEs102using one or more downlink channels119, for instance. Examples of downlink channels119include a physical shared channel (e.g., PDSCH (Physical Downlink Shared Channel)) and/or a physical control channel (e.g., PDCCH (Physical Downlink Control Channel)), etc. Other kinds of channels and/or signals may be used. Each of the one or more UEs102may include one or more transceivers118, one or more demodulators114, one or more decoders108, one or more encoders150, one or more modulators154, a data buffer104and a UE operations module124. For example, one or more reception and/or transmission paths may be implemented in the UE102. For convenience, only a single transceiver118, decoder108, demodulator114, encoder150and modulator154are illustrated in the UE102, though multiple parallel elements (e.g., transceivers118, decoders108, demodulators114, encoders150and modulators154) may be implemented. The transceiver118may include one or more receivers120and one or more transmitters158. The one or more receivers120may receive signals from the gNB160using one or more antennas122a-n. For example, the receiver120may receive and downconvert signals to produce one or more received signals116. The one or more received signals116may be provided to a demodulator114. The one or more transmitters158may transmit signals to the gNB160using one or more physical antennas122a-n. For example, the one or more transmitters158may upconvert and transmit one or more modulated signals156. The demodulator114may demodulate the one or more received signals116to produce one or more demodulated signals112. The one or more demodulated signals112may be provided to the decoder108. The UE102may use the decoder108to decode signals. The decoder108may produce decoded signals110, which may include a UE-decoded signal106(also referred to as a first UE-decoded signal106). For example, the first UE-decoded signal106may include received payload data, which may be stored in a data buffer104. Another signal included in the decoded signals110(also referred to as a second UE-decoded signal110) may include overhead data and/or control data. For example, the second UE-decoded signal110may provide data that may be used by the UE operations module124to perform one or more operations. In general, the UE operations module124may enable the UE102to communicate with the one or more gNBs160. The UE operations module124may include one or more of a UE scheduling module126. The UE scheduling module126may also be referred to as a UE-side higher layer processing module which performs higher layer processing. The units other than the UE scheduling module126in the UE102may perform physical layer processing. In a radio communication system, physical channels (uplink physical channels and/or downlink physical channels) may be defined. The physical channels (uplink physical channels and/or downlink physical channels) may be used for transmitting information that is delivered from a higher layer. For example, PCCH (Physical Control Channel) may be defined. PCCH is used to transmit control information. 
In uplink, PCCH (e.g., Physical Uplink Control Channel (PUCCH)) is used for transmitting Uplink Control Information (UCI). The UCI may include Hybrid Automatic Repeat Request-Acknowledgment (HARQ-ACK), Channel State Information (CSI), and/or Scheduling Request (SR). The HARQ-ACK is used for indicating a positive acknowledgement (ACK) or a negative acknowledgment (NACK) for downlink data (i.e., Transport block(s) carrying Medium Access Control Control Element (MAC CE) and/or MAC Protocol Data Unit (MAC PDU) which may contain Downlink Shared Channel (DL-SCH)). The CSI is used for indicating the state of a downlink channel. Also, the SR is used for requesting resources for uplink data (i.e., Transport block(s) carrying MAC CE and/or MAC PDU which may contain Uplink Shared Channel (UL-SCH)). The UE102may be configured, for DL, to receive code block group (CBG) based transmissions where retransmissions may be scheduled to carry one or more sub-sets of all the code blocks of a transport block. The UE102may be configured to transmit CBG based transmissions where retransmissions may be scheduled to carry one or more sub-sets of all the code blocks of a transport block. In downlink, PCCH (e.g., Physical Downlink Control Channel (PDCCH)) may be used for transmitting Downlink Control Information (DCI). Here, more than one DCI format may be defined for DCI transmission on the PDCCH. Namely, fields may be defined in the DCI format, and the fields are mapped to the information bits (i.e., DCI bits). For example, a DCI format 1A that is used for scheduling of one physical shared channel (PSCH) (e.g., PDSCH, transmission of one downlink transport block) in a cell is defined as the DCI format for the downlink. The DCI format(s) for PDSCH scheduling may include multiple information fields, for example, a carrier indicator field, frequency domain PDSCH resource allocation field, time domain PDSCH resource allocation field, bundling size field, MCS field, new data indicator field, redundancy version field, HARQ process number field, code block group flush indicator (CBGFI) field, code block group transmission indicator (CBGTI) field, PUCCH power control field, PUCCH resource indicator field, antenna port field, number of layers field, quasi-co-location (QCL) indication field, SRS triggering request field, and RNTI field. More than one piece of the above information may be jointly coded, and in this instance the jointly coded information may be indicated in a single information field. Also, for example, a DCI format 0 that is used for scheduling of one PSCH (e.g., PUSCH, transmission of one uplink transport block) in a cell is defined as the DCI format for the uplink. For example, information associated with PSCH (a PDSCH resource, PUSCH resource) allocation, information associated with a modulation and coding scheme (MCS) for PSCH, and DCI such as a Transmission Power Control (TPC) command for PUSCH and/or PUCCH are included in the DCI format. Also, the DCI format may include information associated with a beam index and/or an antenna port. The beam index may indicate a beam used for downlink transmissions and uplink transmissions. The antenna port may include a DL antenna port and/or an UL antenna port. 
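By way of illustration only, the mapping of DCI information fields to information bits described above can be expressed as a short sketch. The field names and bit widths below are illustrative assumptions loosely modeled on the fields just listed, not an exact 3GPP bit layout.

```python
# Editorial sketch: packing DCI information fields into DCI bits. Field names
# and widths are illustrative assumptions, not an exact 3GPP bit layout.

from collections import OrderedDict

def pack_dci(fields):
    """Concatenate (width, value) fields into one DCI bit string, in field
    order, MSB first within each field."""
    bits = ''
    for name, (width, value) in fields.items():
        if not 0 <= value < (1 << width):
            raise ValueError(f'{name} does not fit in {width} bits')
        bits += format(value, f'0{width}b')
    return bits

dci_fields = OrderedDict([
    ('frequency_domain_ra', (15, 0x1234)),  # resource allocation (assumed width)
    ('time_domain_ra',      (2, 1)),
    ('mcs',                 (5, 17)),
    ('new_data_indicator',  (1, 1)),
    ('redundancy_version',  (2, 0)),
    ('harq_process_number', (4, 3)),
])
payload = pack_dci(dci_fields)
print(len(payload), payload)  # 29 information bits, before CRC attachment
```

Jointly coded information, as mentioned above, would simply appear here as one wider field whose value indexes a combination of settings.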
The DCI format(s) for PUSCH scheduling may include multiple information fields, for example, a carrier indicator field, frequency domain PUSCH resource allocation field, time domain PUSCH resource allocation field, MCS field, new data indicator field, redundancy version field, HARQ process number field, code block group flush indicator (CBGFI) field, code block group transmission indicator (CBGTI) field, PUSCH power control field, SRS resource indicator (SRI) field, wideband and/or subband transmit precoding matrix indicator (TPMI) field, antenna port field, scrambling identity field, number of layers field, CSI report triggering request field, CSI measurement request field, SRS triggering request field, and RNTI field. More than one piece of the above information may be jointly coded, and in this instance the jointly coded information may be indicated in a single information field. Also, for example, PSCH may be defined. For example, in a case that the downlink PSCH resource (e.g., PDSCH resource) is scheduled by using the DCI format, the UE102may receive the downlink data on the scheduled downlink PSCH resource. Also, in a case that the uplink PSCH resource (e.g., PUSCH resource) is scheduled by using the DCI format, the UE102transmits the uplink data on the scheduled uplink PSCH resource. Namely, the downlink PSCH is used to transmit the downlink data. And, the uplink PSCH is used to transmit the uplink data. Furthermore, the downlink PSCH and the uplink PSCH are used to transmit information of a higher layer (e.g., a Radio Resource Control (RRC) layer and/or a MAC layer). For example, the downlink PSCH and the uplink PSCH are used to transmit a RRC message (RRC signal) and/or a MAC Control Element (MAC CE). Here, the RRC message that is transmitted from the gNB160in downlink may be common to multiple UEs102within a cell (referred to as a common RRC message). Also, the RRC message that is transmitted from the gNB160may be dedicated to a certain UE102(referred to as a dedicated RRC message). The RRC message and/or the MAC CE are also referred to as a higher layer signal. The UE operations module124may provide information148to the one or more receivers120. For example, the UE operations module124may inform the receiver(s)120when to receive retransmissions. The UE operations module124may provide information138to the demodulator114. For example, the UE operations module124may inform the demodulator114of a modulation pattern anticipated for transmissions from the gNB160. The UE operations module124may provide information136to the decoder108. For example, the UE operations module124may inform the decoder108of an anticipated encoding for transmissions from the gNB160. The UE operations module124may provide information142to the encoder150. The information142may include data to be encoded and/or instructions for encoding. For example, the UE operations module124may instruct the encoder150to encode transmission data146and/or other information142. The other information142may include PDSCH HARQ-ACK information. The encoder150may encode transmission data146and/or other information142provided by the UE operations module124. For example, encoding the transmission data146and/or other information142may involve error detection and/or correction coding, mapping data to space, time and/or frequency resources for transmission, multiplexing, etc. The encoder150may provide encoded data152to the modulator154. The UE operations module124may provide information144to the modulator154. 
For example, the UE operations module124may inform the modulator154of a modulation type (e.g., constellation mapping) to be used for transmissions to the gNB160. The modulator154may modulate the encoded data152to provide one or more modulated signals156to the one or more transmitters158. The UE operations module124may provide information140to the one or more transmitters158. This information140may include instructions for the one or more transmitters158. For example, the UE operations module124may instruct the one or more transmitters158when to transmit a signal to the gNB160. For instance, the one or more transmitters158may transmit during a UL subframe. The one or more transmitters158may upconvert and transmit the modulated signal(s)156to one or more gNBs160. Each of the one or more gNBs160may include one or more transceivers176, one or more demodulators172, one or more decoders166, one or more encoders109, one or more modulators113, a data buffer162and a gNB operations module182. For example, one or more reception and/or transmission paths may be implemented in a gNB160. For convenience, only a single transceiver176, decoder166, demodulator172, encoder109and modulator113are illustrated in the gNB160, though multiple parallel elements (e.g., transceivers176, decoders166, demodulators172, encoders109and modulators113) may be implemented. The transceiver176may include one or more receivers178and one or more transmitters117. The one or more receivers178may receive signals from the UE102using one or more physical antennas180a-n. For example, the receiver178may receive and downconvert signals to produce one or more received signals174. The one or more received signals174may be provided to a demodulator172. The one or more transmitters117may transmit signals to the UE102using one or more physical antennas180a-n. For example, the one or more transmitters117may upconvert and transmit one or more modulated signals115. The demodulator172may demodulate the one or more received signals174to produce one or more demodulated signals170. The one or more demodulated signals170may be provided to the decoder166. The gNB160may use the decoder166to decode signals. The decoder166may produce one or more decoded signals164,168. For example, a first gNB-decoded signal164may include received payload data (e.g., a UL TB), which may be stored in a data buffer162. A second gNB-decoded signal168may include overhead data and/or control data. For example, the second gNB-decoded signal168may provide data (e.g., uplink control information such as HARQ-ACK feedback information for PDSCH) that may be used by the gNB operations module182to perform one or more operations. In general, the gNB operations module182may enable the gNB160to communicate with the one or more UEs102. The gNB operations module182may include one or more of a gNB scheduling module194. The gNB scheduling module194may also be referred to as a gNB-side higher layer processing module which performs higher layer processing. The units other than the gNB scheduling module194in the gNB160may perform physical layer processing. The gNB operations module182may provide information188to the demodulator172. For example, the gNB operations module182may inform the demodulator172of a modulation pattern anticipated for transmissions from the UE(s)102. The gNB operations module182may provide information186to the decoder166. For example, the gNB operations module182may inform the decoder166of an anticipated encoding for transmissions from the UE(s)102. 
The gNB operations module182may provide information101to the encoder109. The information101may include data to be encoded and/or instructions for encoding. For example, the gNB operations module182may instruct the encoder109to encode information101, including transmission data105. The encoder109may encode transmission data105and/or other information included in the information101provided by the gNB operations module182. For example, encoding the transmission data105and/or other information included in the information101may involve error detection and/or correction coding, mapping data to space, time and/or frequency resources for transmission, multiplexing, etc. The encoder109may provide encoded data111to the modulator113. The transmission data105may include network data to be relayed to the UE102. The gNB operations module182may provide information103to the modulator113. This information103may include instructions for the modulator113. For example, the gNB operations module182may inform the modulator113of a modulation type (e.g., constellation mapping) to be used for transmissions to the UE(s)102. The modulator113may modulate the encoded data111to provide one or more modulated signals115to the one or more transmitters117. The gNB operations module182may provide information192to the one or more transmitters117. This information192may include instructions for the one or more transmitters117. For example, the gNB operations module182may instruct the one or more transmitters117when to (or when not to) transmit a signal to the UE(s)102. The one or more transmitters117may upconvert and transmit the modulated signal(s)115to one or more UEs102. It should be noted that a DL subframe may be transmitted from the gNB160to one or more UEs102and that a UL subframe may be transmitted from one or more UEs102to the gNB160. Furthermore, both the gNB160and the one or more UEs102may transmit data in a standard special slot. It should also be noted that one or more of the elements or parts thereof included in the gNB(s)160and UE(s)102may be implemented in hardware. For example, one or more of these elements or parts thereof may be implemented as a chip, circuitry or hardware components, etc. It should also be noted that one or more of the functions or methods described herein may be implemented in and/or performed using hardware. For example, one or more of the methods described herein may be implemented in and/or realized using a chipset, an application-specific integrated circuit (ASIC), a large-scale integrated circuit (LSI) or integrated circuit, etc. The downlink physical layer processing of transport channels may include: Transport block CRC attachment; Code block segmentation and code block CRC attachment; Channel coding (LDPC coding); Physical-layer hybrid-ARQ processing; Rate matching; Scrambling; Modulation (QPSK, 16QAM, 64QAM and 256QAM); Layer mapping; and Mapping to assigned resources and antenna ports. FIG.2illustrates various components that may be utilized in a UE202. The UE202described in connection withFIG.2may be implemented in accordance with the UE102described in connection withFIG.1. The UE202includes a processor203that controls operation of the UE202. The processor203may also be referred to as a central processing unit (CPU). Memory205, which may include read-only memory (ROM), random access memory (RAM), a combination of the two or any type of device that may store information, provides instructions207aand data209ato the processor203. 
A portion of the memory205may also include non-volatile random access memory (NVRAM). Instructions207band data209bmay also reside in the processor203. Instructions207band/or data209bloaded into the processor203may also include instructions207aand/or data209afrom memory205that were loaded for execution or processing by the processor203. The instructions207bmay be executed by the processor203to implement the methods described above. The UE202may also include a housing that contains one or more transmitters258and one or more receivers220to allow transmission and reception of data. The transmitter(s)258and receiver(s)220may be combined into one or more transceivers218. One or more antennas222a-nare attached to the housing and electrically coupled to the transceiver218. The various components of the UE202are coupled together by a bus system211, which may include a power bus, a control signal bus and a status signal bus, in addition to a data bus. However, for the sake of clarity, the various buses are illustrated inFIG.2as the bus system211. The UE202may also include a digital signal processor (DSP)213for use in processing signals. The UE202may also include a communications interface215that provides user access to the functions of the UE202. The UE202illustrated inFIG.2is a functional block diagram rather than a listing of specific components. FIG.3illustrates various components that may be utilized in a gNB360. The gNB360described in connection withFIG.3may be implemented in accordance with the gNB160described in connection withFIG.1. The gNB360includes a processor303that controls operation of the gNB360. The processor303may also be referred to as a central processing unit (CPU). Memory305, which may include read-only memory (ROM), random access memory (RAM), a combination of the two or any type of device that may store information, provides instructions307aand data309ato the processor303. A portion of the memory305may also include non-volatile random access memory (NVRAM). Instructions307band data309bmay also reside in the processor303. Instructions307band/or data309bloaded into the processor303may also include instructions307aand/or data309afrom memory305that were loaded for execution or processing by the processor303. The instructions307bmay be executed by the processor303to implement the methods described above. The gNB360may also include a housing that contains one or more transmitters317and one or more receivers378to allow transmission and reception of data. The transmitter(s)317and receiver(s)378may be combined into one or more transceivers376. One or more antennas380a-nare attached to the housing and electrically coupled to the transceiver376. The various components of the gNB360are coupled together by a bus system311, which may include a power bus, a control signal bus and a status signal bus, in addition to a data bus. However, for the sake of clarity, the various buses are illustrated inFIG.3as the bus system311. The gNB360may also include a digital signal processor (DSP)313for use in processing signals. The gNB360may also include a communications interface315that provides user access to the functions of the gNB360. The gNB360illustrated inFIG.3is a functional block diagram rather than a listing of specific components. FIG.4is a block diagram illustrating one implementation of a UE402in which systems and methods for downlink and uplink transmissions may be implemented. The UE402includes transmit means458, receive means420and control means424. 
The transmit means458, receive means420and control means424may be configured to perform one or more of the functions described in connection withFIG.1above.FIG.2above illustrates one example of a concrete apparatus structure ofFIG.4. Other various structures may be implemented to realize one or more of the functions ofFIG.1. For example, a DSP may be realized by software. FIG.5is a block diagram illustrating one implementation of a gNB560in which systems and methods for downlink and uplink transmissions may be implemented. The gNB560includes transmit means517, receive means578and control means582. The transmit means517, receive means578and control means582may be configured to perform one or more of the functions described in connection withFIG.1above.FIG.3above illustrates one example of a concrete apparatus structure ofFIG.5. Other various structures may be implemented to realize one or more of the functions ofFIG.1. For example, a DSP may be realized by software. FIG.6is a diagram illustrating one example of a resource grid. The resource grid illustrated inFIG.6may be applicable for both downlink and uplink and may be utilized in some implementations of the systems and methods disclosed herein. More detail regarding the resource grid is given in connection withFIG.1. InFIG.6, physical channels and physical signals may be transmitted/received using one or several slots683. For a given numerology μ, N^μ_RB is the bandwidth configuration of a bandwidth part (BWP) in the serving cell, expressed in multiples of N^RB_sc, where N^RB_sc is the resource block689size in the frequency domain expressed as a number of subcarriers, and N^{SF,μ}_symb is the number of Orthogonal Frequency Division Multiplexing (OFDM) symbols687in a subframe669. In other words, for each numerology μ and for each of downlink and uplink, a resource grid of N^μ_RB · N^RB_sc subcarriers and N^{SF,μ}_symb OFDM symbols may be defined. There may be one resource grid per antenna port p, per subcarrier spacing configuration (i.e., numerology) μ, and per transmission direction (uplink or downlink). A resource block689may include a number of resource elements (RE)691. Multiple OFDM numerologies (also referred to as just numerologies) are supported as given by Table X1. Each of the numerologies may be tied to its own subcarrier spacing Δf.

TABLE X1
μ | Δf = 2^μ · 15 [kHz] | Cyclic prefix
0 | 15 | Normal
1 | 30 | Normal
2 | 60 | Normal, Extended
3 | 120 | Normal
4 | 240 | Normal

For subcarrier spacing configuration μ, slots are numbered n^μ_s ∈ {0, . . . , N^{SF,μ}_slot − 1} in increasing order within a subframe and n^μ_{s,f} ∈ {0, . . . , N^{frame,μ}_slot − 1} in increasing order within a frame. There are N^{slot,μ}_symb consecutive OFDM symbols in a slot, where N^{slot,μ}_symb depends on the cyclic prefix used, as given by Table X2 for normal cyclic prefix and Table X3 for extended cyclic prefix. The number of consecutive OFDM symbols per subframe is N^{SF,μ}_symb = N^{slot,μ}_symb · N^{SF,μ}_slot. The start of slot n^μ_s in a subframe is aligned in time with the start of OFDM symbol n^μ_s · N^{slot,μ}_symb in the same subframe. Not all UEs may be capable of simultaneous transmission and reception, implying that not all OFDM symbols in a downlink slot or an uplink slot may be used.

TABLE X2
μ | N^{slot,μ}_symb | N^{frame,μ}_slot | N^{SF,μ}_slot
0 | 14 | 10 | 1
1 | 14 | 20 | 2
2 | 14 | 40 | 4
3 | 14 | 80 | 8
4 | 14 | 160 | 16

TABLE X3
μ | N^{slot,μ}_symb | N^{frame,μ}_slot | N^{SF,μ}_slot
2 | 12 | 40 | 4

For an initial BWP, N^μ_RB may be broadcast as a part of system information (e.g., Master Information Block (MIB), System Information Block Type 1 (SIB1)). 
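By way of illustration only, the numerology relations of Tables X1 and X2 above can be expressed as a short sketch, assuming normal cyclic prefix (14 symbols per slot); the function name and the output layout are editorial choices, not part of the described systems.

```python
# Sketch of the numerology relations in Tables X1 and X2 above: subcarrier
# spacing scales as 2^mu * 15 kHz, a subframe is 1 ms, a frame is 10
# subframes. Normal cyclic prefix (14 symbols per slot) is assumed.

def numerology(mu, symbols_per_slot=14):
    scs_khz = (2 ** mu) * 15              # Delta-f from Table X1
    slots_per_subframe = 2 ** mu          # N_slot^{SF,mu} from Table X2
    slots_per_frame = 10 * slots_per_subframe
    symbols_per_subframe = symbols_per_slot * slots_per_subframe
    return {
        'scs_khz': scs_khz,
        'slots_per_subframe': slots_per_subframe,
        'slots_per_frame': slots_per_frame,
        'symbols_per_subframe': symbols_per_subframe,
    }

for mu in range(5):
    print(mu, numerology(mu))
# mu=2 with extended CP would instead use 12 symbols per slot (Table X3).
```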
For an SCell (including a Licensed-Assisted Access (LAA) SCell), N^μ_RB is configured by a RRC message dedicated to a UE102. For PDSCH mapping, the available RE691may be the RE691whose index l fulfils l ≥ l_data,start and/or l_data,end ≥ l in a subframe. The OFDM access scheme with cyclic prefix (CP) may be employed, which may be also referred to as CP-OFDM. In the downlink, PDCCH, EPDCCH (Enhanced Physical Downlink Control Channel), PDSCH and the like may be transmitted. A radio frame may include a set of slots683(e.g., 10 slots for μ=0). The RB is a unit for assigning downlink radio resources, defined by a predetermined bandwidth (RB bandwidth) and one slot. A resource block is defined as N^RB_sc = 12 consecutive subcarriers in the frequency domain and one slot (which consists of 14 symbols for normal CP and 12 symbols for extended CP) in the time domain. Carrier resource blocks are numbered from 0 to N^μ_RB − 1 in the frequency domain for subcarrier spacing configuration μ. The relation between the carrier resource block number n_CRB in the frequency domain and resource elements (k, l) is given by n_CRB = floor(k / N^RB_sc), where k is defined relative to the resource grid. Physical resource blocks are defined within a carrier bandwidth part (BWP) and numbered from 0 to N^{size}_{BWP,i} − 1, where i is the number of the carrier bandwidth part. The relation between physical and absolute resource blocks in carrier bandwidth part i is given by n_CRB = n_PRB + N^{start}_{BWP,i}, where N^{start}_{BWP,i} is the carrier resource block where the carrier bandwidth part starts. Virtual resource blocks are defined within a carrier bandwidth part and numbered from 0 to N^{size}_{BWP,i} − 1, where i is the number of the carrier bandwidth part. A carrier bandwidth part is a contiguous set of physical resource blocks, selected from a contiguous subset of the carrier resource blocks for a given numerology μ on a given carrier. The number of resource blocks N^{size}_{BWP,i} in a carrier BWP may fulfil N^{min,μ}_{RB,x} ≤ N^{size}_{BWP,i} ≤ N^{max,μ}_{RB,x}. A UE can be configured with up to four carrier bandwidth parts in the downlink with a single downlink carrier bandwidth part being active at a given time. The UE is not expected to receive PDSCH or PDCCH outside an active bandwidth part. A UE can be configured with up to four carrier bandwidth parts in the uplink with a single uplink carrier bandwidth part being active at a given time. The UE shall not transmit PUSCH or PUCCH outside an active bandwidth part. The RB may include twelve sub-carriers in the frequency domain and one or more OFDM symbols in the time domain. A region defined by one sub-carrier in the frequency domain and one OFDM symbol in the time domain is referred to as a resource element (RE) and is uniquely identified by the index pair (k, l_RG) in the resource grid, where k = 0, . . . , N^μ_RB · N^RB_sc − 1 and l_RG = 0, . . . , N^{SF,μ}_symb − 1 are indices in the frequency and time domains, respectively. Moreover, an RE is uniquely identified by the index pair (k, l) based on a certain reference point, where l is an index in the time domain. The reference point can be based on the resource grid, i.e., component carrier (CC) basis. Alternatively, the reference point can be based on a certain bandwidth part in the component carrier. While subframes in one CC are discussed herein, subframes are defined for each CC and subframes are substantially in synchronization with each other among CCs. 
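By way of illustration only, the resource block numbering relations above, n_CRB = floor(k / N^RB_sc) and n_CRB = n_PRB + N^{start}_{BWP,i}, can be sketched as follows; the helper names and the example BWP values are assumptions for illustration.

```python
# Sketch of the resource block numbering relations given above:
# n_CRB = floor(k / N_sc^RB) with N_sc^RB = 12 subcarriers per RB, and,
# within carrier bandwidth part i, n_CRB = n_PRB + N_BWP,i^start.

N_SC_RB = 12  # subcarriers per resource block

def crb_from_subcarrier(k):
    """Carrier resource block containing subcarrier k of the resource grid."""
    return k // N_SC_RB

def crb_from_prb(n_prb, bwp_start_crb):
    """Map a physical resource block of a BWP to a carrier resource block."""
    return n_prb + bwp_start_crb

# Example (assumed values): a BWP starting at CRB 40; PRB 5 of that BWP is
# CRB 45, and subcarrier k = 547 falls in CRB 45 as well (547 // 12 == 45).
assert crb_from_prb(5, 40) == crb_from_subcarrier(547) == 45
```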
In the uplink, in addition to CP-OFDM, a Single-Carrier Frequency Division Multiple Access (SC-FDMA) access scheme may be employed, which is also referred to as Discrete Fourier Transform-Spreading OFDM (DFT-S-OFDM). In the uplink, PUCCH, PUSCH, Physical Random Access Channel (PRACH) and the like may be transmitted. For each numerology and carrier, a resource grid of N^{max,μ}_{RB,x} · N^RB_sc subcarriers and N^{SF,μ}_symb OFDM symbols is defined, where N^{max,μ}_{RB,x} is given by Table X4 and x is DL or UL for downlink and uplink, respectively. There is one resource grid per antenna port p, per subcarrier spacing configuration and per transmission direction (downlink or uplink).

TABLE X4
μ | N^{min,μ}_{RB,DL} | N^{max,μ}_{RB,DL} | N^{min,μ}_{RB,UL} | N^{max,μ}_{RB,UL}
0 | 20 | 275 | 24 | 275
1 | 20 | 275 | 24 | 275
2 | 20 | 275 | 24 | 275
3 | 20 | 275 | 24 | 275
4 | 20 | 138 | 24 | 138

A UE102may be instructed to receive or transmit using a subset of the resource grid only. The set of resource blocks a UE may be configured to receive or transmit upon is referred to as a carrier bandwidth part and is numbered from 0 to N^μ_RB − 1 in the frequency domain. The UE may be configured with one or more carrier bandwidth parts, each of which may have the same or different numerology. Transmissions in multiple cells can be aggregated, where up to fifteen secondary cells can be used in addition to the primary cell. A UE102configured for operation in bandwidth parts (BWPs) of a serving cell is configured by higher layers, for the serving cell, with a set of at most four bandwidth parts (BWPs) for receptions by the UE (DL BWP set) in a DL bandwidth by parameter DL-BWP-index and a set of at most four BWPs for transmissions by the UE102(UL BWP set) in an UL bandwidth by parameter UL-BWP-index for the serving cell. For unpaired spectrum operation, a DL BWP from the set of configured DL BWPs is linked to an UL BWP from the set of configured UL BWPs, where the DL BWP and the UL BWP have a same index in the respective sets. For unpaired spectrum operation, a UE102can expect that the center frequency for a DL BWP is the same as the center frequency for a UL BWP. The Physical Downlink Control Channel (PDCCH) can be used to schedule DL transmissions on PDSCH and UL transmissions on PUSCH, where the Downlink Control Information (DCI) on PDCCH includes: Downlink assignments containing at least modulation and coding format, resource allocation, and HARQ information related to DL-SCH; and Uplink scheduling grants containing at least modulation and coding format, resource allocation, and HARQ information related to UL-SCH. In addition to scheduling, PDCCH can be used for: Activation and deactivation of configured PUSCH transmission with configured grant; Activation and deactivation of PDSCH semi-persistent transmission; Notifying one or more UEs of the slot format; Notifying one or more UEs of the PRB(s) and OFDM symbol(s) where the UE may assume no transmission is intended for the UE; Transmission of TPC commands for PUCCH and PUSCH; Transmission of one or more TPC commands for SRS transmissions by one or more UEs; Switching a UE's active bandwidth part; and Initiating a random access procedure. One or more sets of PRB(s) may be configured for DL control channel monitoring. 
In other words, a control resource set is, in the frequency domain, a set of PRBs within which the UE102attempts to blindly decode downlink control information (i.e., monitor downlink control information (DCI)), where the PRBs may or may not be frequency contiguous, a UE102may have one or more control resource sets, and one DCI message may be located within one control resource set. In the frequency-domain, a PRB is the resource unit size (which may or may not include DMRS) for a control channel. A DL shared channel may start at a later OFDM symbol than the one(s) which carries the detected DL control channel. Alternatively, the DL shared channel may start at (or earlier than) the last OFDM symbol which carries the detected DL control channel. In other words, dynamic reuse of at least part of resources in the control resource sets for data for the same or a different UE102, at least in the frequency domain, may be supported. Namely, a UE102may have to monitor a set of PDCCH candidates in one or more control resource sets on one or more activated serving cells or bandwidth parts (BWPs) according to corresponding search spaces, where monitoring implies decoding each PDCCH candidate according to the monitored DCI formats. Here, the PDCCH candidates may be candidates for which the PDCCH may possibly be assigned and/or transmitted. A PDCCH candidate is composed of one or more control channel elements (CCEs). The term "monitor" means that the UE102attempts to decode each PDCCH in the set of PDCCH candidates in accordance with all the DCI formats to be monitored. The set of PDCCH candidates that the UE102monitors may be also referred to as a search space or a search space set. That is, the search space (or search space set) is a set of resources that may possibly be used for PDCCH transmission. Furthermore, a common search space (CSS) and a user-equipment search space (USS) are set (or defined, configured). For example, the CSS may be used for transmission of PDCCH with DCI format(s) to a plurality of the UEs102. That is, the CSS may be defined by a resource common to a plurality of the UEs102. For example, the CSS is composed of CCEs having numbers that are predetermined between the gNB160and the UE102. For example, the CSS is composed of CCEs having indices 0 to 15. Here, the CSS may be used for transmission of PDCCH with DCI format(s) to a specific UE102. That is, the gNB160may transmit, in the CSS, DCI format(s) intended for a plurality of the UEs102and/or DCI format(s) intended for a specific UE102. There may be one or more types of CSS. For example, Type 0 PDCCH CSS may be defined for a DCI format scrambled by a System Information-Radio Network Temporary Identifier (SI-RNTI) on a primary cell (PCell). Type 1 PDCCH CSS may be defined for a DCI format scrambled by a Random Access-(RA-)RNTI. Additionally and/or alternatively, Type 1 PDCCH CSS may be used for a DCI format scrambled by a Temporary Cell-(TC-)RNTI or Cell-(C-)RNTI. Type 2 PDCCH CSS may be defined for a DCI format scrambled by a Paging-(P-)RNTI. Type 3 PDCCH CSS may be defined for a DCI format scrambled by an Interference-(INT-)RNTI, where if a UE102is configured by higher layers to decode a DCI format with CRC scrambled by the INT-RNTI and if the UE102detects the DCI format with CRC scrambled by the INT-RNTI, the UE102may assume that no transmission to the UE102is present in OFDM symbols and resource blocks indicated by the DCI format. 
Additionally and/or alternatively, Type 3 PDCCH CSS may be used for a DCI format scrambled by another RNTI (e.g., Transmit Power Control-(TPC-)RNTI, Pre-emption Indication-(PI-)RNTI, Slot Format Indicator-(SFI-)RNTI, Semi persistent scheduling-(SPS-)RNTI, Grant free-(GF-)RNTI, Configured Scheduling-(CS-)RNTI, URLLC-(U-)RNTI, Autonomous Uplink-(AUL-)RNTI, or Downlink Feedback Information-(DFI-)RNTI). A UE102may be indicated by System Information Block Type 0 (SIB0), which is also referred to as MIB, a control resource set for Type0-PDCCH common search space and a subcarrier spacing and a CP length for PDCCH reception. The Type0-PDCCH common search space is defined by the CCE aggregation levels and the number of candidates per CCE aggregation level. The UE may assume that the DMRS antenna port associated with PDCCH reception in the Type0-PDCCH common search space and the DMRS antenna port associated with Physical Broadcast Channel (PBCH) reception are quasi-collocated with respect to delay spread, Doppler spread, Doppler shift, average delay, and spatial Rx parameters. PBCH carries the Master Information Block (MIB), which contains the most important pieces of system information. A PDCCH with a certain DCI format in Type0-PDCCH common search space schedules a reception of a PDSCH with SIB Type1 (SIB1) or with other SI messages. A UE may be indicated by SIB1 control resource set(s) for Type1-PDCCH common search space. A subcarrier spacing and a CP length for PDCCH reception with Type1-PDCCH common search space are the same as for PDCCH reception with Type0-PDCCH common search space. The UE may assume that the DMRS antenna port associated with PDCCH reception in the Type1-PDCCH common search space and the DMRS antenna port associated with PBCH reception are quasi-collocated with respect to delay spread, Doppler spread, Doppler shift, average delay, and spatial Rx parameters. A monitoring periodicity of paging occasions for PDCCH in Type2-PDCCH common search space may be configured to the UE by a higher layer parameter. A UE may be configured by higher layer signaling whether and/or on which serving cell(s) to monitor Type3-PDCCH common search space. The USS may be used for transmission of PDCCH with DCI format(s) to a specific UE102. That is, the USS is defined by a resource dedicated to a certain UE102. That is, the USS may be defined independently for each UE102. For example, the USS may be composed of CCEs having numbers that are determined based on a RNTI assigned by the gNB160, a slot number in a radio frame, an aggregation level, or the like. Here, the RNTI(s) may include C-RNTI (Cell-RNTI) and Temporary C-RNTI. Also, the USS (the position(s) of the USS) may be configured by the gNB160. For example, the gNB160may configure the USS by using the RRC message. That is, the base station may transmit, in the USS, DCI format(s) intended for a specific UE102. Here, the RNTI assigned to the UE102may be used for transmission of DCI (transmission of PDCCH). Specifically, CRC (Cyclic Redundancy Check) parity bits (also referred to simply as CRC), which are generated based on the DCI (or DCI format), are attached to the DCI, and, after attachment, the CRC parity bits are scrambled by the RNTI. The UE102may attempt to decode DCI to which the CRC parity bits scrambled by the RNTI are attached, and detects PDCCH (i.e., DCI, DCI format). That is, the UE102may decode PDCCH with the CRC scrambled by the RNTI. 
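By way of illustration only, the CRC attachment, RNTI scrambling and blind decoding described above can be sketched as follows. A generic 16-bit CRC is used purely for illustration; the actual PDCCH CRC length, polynomial and scrambling details in the 3GPP specifications differ, so the polynomial, width and helper names below are assumptions.

```python
# Editorial sketch of "CRC parity bits scrambled by the RNTI": the gNB XORs
# the CRC with the RNTI, and the UE detects a PDCCH candidate by descrambling
# with its own RNTI and checking the CRC. Generic CRC-16 used for illustration.

def crc(bits, poly=0x11021, width=16):
    """Bitwise CRC over the message appended with `width` zero bits."""
    reg = 0
    for b in bits + [0] * width:
        reg = (reg << 1) | b
        if reg >> width:
            reg ^= poly
    return [(reg >> i) & 1 for i in range(width - 1, -1, -1)]

def attach_and_scramble(dci_bits, rnti):
    """Attach CRC parity bits to the DCI bits and scramble them by the RNTI."""
    parity = crc(dci_bits)
    rnti_bits = [(rnti >> i) & 1 for i in range(15, -1, -1)]  # 16-bit RNTI
    return dci_bits + [p ^ r for p, r in zip(parity, rnti_bits)]

def blind_decode(candidate_bits, rnti, payload_len):
    """UE-side check: descramble the CRC with its RNTI and verify it."""
    payload = candidate_bits[:payload_len]
    rnti_bits = [(rnti >> i) & 1 for i in range(15, -1, -1)]
    descrambled = [c ^ r for c, r in zip(candidate_bits[payload_len:], rnti_bits)]
    return descrambled == crc(payload)

dci = [1, 0, 1, 1, 0, 0, 1, 0]
tx = attach_and_scramble(dci, rnti=0x1234)
assert blind_decode(tx, rnti=0x1234, payload_len=len(dci))      # detected
assert not blind_decode(tx, rnti=0x4321, payload_len=len(dci))  # not for this UE
```

The two assertions make the mechanism visible: only the UE holding the RNTI that scrambled the CRC will find a verifying candidate while monitoring its search space.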
When the control resource set spans multiple OFDM symbols, a control channel candidate may be mapped to multiple OFDM symbols or may be mapped to a single OFDM symbol. One DL control channel element may be mapped on REs defined by a single PRB and a single OFDM symbol. If more than one DL control channel element is used for a single DL control channel transmission, DL control channel element aggregation may be performed. The number of aggregated DL control channel elements is referred to as the DL control channel element aggregation level. The DL control channel element aggregation level may be 1 or 2 to the power of an integer. The gNB160may inform a UE102of which control channel candidates are mapped to each subset of OFDM symbols in the control resource set. If one DL control channel is mapped to a single OFDM symbol and does not span multiple OFDM symbols, the DL control channel element aggregation is performed within an OFDM symbol, namely multiple DL control channel elements within an OFDM symbol are aggregated. Otherwise, DL control channel elements in different OFDM symbols can be aggregated. DCI formats may be classified into at least 4 types: DL regular (also referred to as DCI format 1_1), UL regular (also referred to as DCI format 0_1), DL fallback (also referred to as DCI format 1_0) and UL fallback (also referred to as DCI format 0_0) for PDSCH and PUSCH scheduling. In addition, there may be some other types for control signaling. Furthermore, some more types (e.g., DCI formats 0_2, 0_3, 1_2 and 1_3) may be defined for scheduling of one or more PUSCH(s) and one or more PDSCH(s), which may be applicable to an NR-based unlicensed access (NR-U) cell. Table X5 shows an example of a set of the DCI format types.

TABLE X5
DCI format | Usage | RNTI
0_0 | Scheduling of PUSCH containing up to one TB in one cell | C-RNTI, CS-RNTI, U-RNTI, TC-RNTI
0_1 | Scheduling of PUSCH containing up to two TBs in one cell | C-RNTI, CS-RNTI, SP-CSI-RNTI, U-RNTI
0_2 | Scheduling of one or more PUSCH(s) each containing up to one TB in one cell | C-RNTI, CS-RNTI, U-RNTI, AUL-RNTI, DFI-RNTI
0_3 | Scheduling of one or more PUSCH(s) each containing up to two TBs in one cell | C-RNTI, CS-RNTI, U-RNTI, AUL-RNTI, DFI-RNTI
1_0 | Scheduling of PDSCH containing up to one TB in one cell | C-RNTI, CS-RNTI, U-RNTI, P-RNTI, SI-RNTI, RA-RNTI, TC-RNTI
1_1 | Scheduling of PDSCH containing up to two TBs in one cell | C-RNTI, CS-RNTI, U-RNTI
1_2 | Scheduling of one or more PDSCH(s) each containing up to one TB in one cell | C-RNTI, CS-RNTI, U-RNTI
1_3 | Scheduling of one or more PDSCH(s) each containing up to two TBs in one cell | C-RNTI, CS-RNTI, U-RNTI
2_0 | Notifying a group of UEs of the slot format | SFI-RNTI
2_1 | Notifying a group of UEs of the PRB(s) and OFDM symbol(s) where UE may assume no transmission is intended for the UE | INT-RNTI
2_2 | Transmission of TPC commands for PUCCH and PUSCH | TPC-PUSCH-RNTI, TPC-PUCCH-RNTI
2_3 | Transmission of a group of TPC commands for SRS transmissions by one or more UEs | TPC-SRS-RNTI
2_4 | Notifying a group of UEs of common control information related to the NR-U cell | CC-RNTI

The DL regular DCI format and the UL regular DCI format may have a same DCI payload size. The DL fallback DCI format and the UL fallback DCI format may have a same DCI payload size. Tables X6, X7, X8, and X9 show examples of DCI formats 0_0, 0_1, 1_0 and 1_1, respectively. "Mandatory" may mean the information field is always present irrespective of RRC (re)configuration. "Optional" may mean the information field may or may not be present depending on RRC (re)configuration. 
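By way of illustration only, the payload-size alignment noted above (the fallback formats sharing one DCI payload size) can be sketched as follows; the field widths are loosely modeled on Tables X6 and X8 below, and the zero-padding policy shown is an assumption for illustration.

```python
# Editorial sketch: two DCI formats monitored together are padded to a common
# payload size so one blind decode per candidate suffices. Field widths are
# illustrative, loosely modeled on Tables X6 and X8; the policy is assumed.

def payload_size(fields):
    return sum(width for _, width in fields)

ul_fallback = [('identifier', 1), ('freq_ra', 15), ('time_ra', 2),
               ('hopping', 1), ('mcs', 5), ('ndi', 1), ('rv', 2),
               ('harq_id', 4), ('tpc', 2)]
dl_fallback = [('identifier', 1), ('freq_ra', 15), ('time_ra', 4),
               ('vrb_prb', 1), ('mcs', 5), ('ndi', 1), ('rv', 2),
               ('harq_id', 3), ('dai', 2), ('tpc', 2), ('pucch_ri', 3),
               ('harq_timing', 3)]

target = max(payload_size(ul_fallback), payload_size(dl_fallback))
padding = target - payload_size(ul_fallback)
print(payload_size(ul_fallback), payload_size(dl_fallback), padding)
# The smaller format carries `padding` zero bits so both decode attempts share
# one payload size; the 1-bit identifier then disambiguates UL vs DL formats.
```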
TABLE X6 (DCI format 0_0)
Information field | Number of bits | Mandatory/Optional | Remarks
Identifier for DCI formats | 1 | Mandatory | The value of this bit field may always be set to 0, indicating an UL DCI format
Frequency domain resource assignment | 15 | Mandatory | Virtual Resource Blocks (VRBs) indicated using type 1 resource allocation
Time domain resource assignment | 2 | Mandatory | Index of an entry of a table providing sets of OFDM symbols and a slot used for PUSCH transmission
Frequency hopping flag | 1 | Mandatory | Flag to control whether to use frequency hopping
Modulation and coding scheme | 5 | Mandatory | Modulation and coding scheme (MCS) for a single TB which is contained in the PUSCH
New data indicator | 1 | Mandatory | Indicating whether the TB transmission is an initial transmission (in which case the NDI value is toggled) or a re-transmission (in which case the NDI value is not toggled)
Redundancy version | 2 | Mandatory | Indicating the rate-matching pattern
HARQ process number | 4 | Mandatory |
TPC command for scheduled PUSCH | 2 | Mandatory |
Padding bits, if required | | |
UL/SUL indicator | 0 or 1 | Optional | 1 bit for UEs configured with SUL in the cell as defined in Table 7.3.1.1.1-1 and if the number of bits for DCI format 1_0 before padding is larger than the number of bits for DCI format 0_0 before padding; 0 bit otherwise
TABLE X7 (DCI format 0_1)
Information field | Number of bits | Mandatory/Optional | Remarks
Identifier for DCI formats | 1 | Mandatory | The value of this bit field may always be set to 0, indicating an UL DCI format
Carrier indicator | 0 or 3 | Optional | Indicating the SCellIndex of the serving cell in which the scheduled PUSCH is to be transmitted
UL/SUL indicator | 0 or 1 | Optional | 0 bit for UEs not configured with SUL in the cell, or for UEs configured with SUL in the cell but where only the PUCCH carrier in the cell is configured for PUSCH transmission; 1 bit for UEs configured with SUL in the cell
Bandwidth part indicator | 0, 1 or 2 | Optional | Indicating the BWP ID of the BWP which contains the scheduled PUSCH. If a UE does not support active BWP change via DCI, the UE may ignore this bit field
Frequency domain resource assignment | 25 | Mandatory | Virtual Resource Blocks (VRBs) indicated using type 0 or type 1 resource allocation
Time domain resource assignment | 0, 1, 2, 3 or 4 | Mandatory | Index of an entry of an RRC-configured table providing the set of OFDM symbols used for PUSCH transmission
Frequency hopping flag | 0 or 1 | Optional | 0 bit if only resource allocation type 0 is configured or if the higher layer parameter frequencyHopping is not configured; 1 bit otherwise
Modulation and coding scheme | 5 | Mandatory | MCS for the TB(s) which are contained in the PUSCH
New data indicator | 1 | Mandatory |
Redundancy version | 2 | Mandatory |
HARQ process number | 4 | Mandatory |
1st downlink assignment index | 1 or 2 | Mandatory | 1 bit for the semi-static HARQ-ACK codebook; 2 bits for the dynamic HARQ-ACK codebook
2nd downlink assignment index | 0 or 2 | Optional | 2 bits for the dynamic HARQ-ACK codebook with two HARQ-ACK sub-codebooks; 0 bit otherwise
TPC command for scheduled PUSCH | 2 | Mandatory |
SRS resource indicator | 0, 1 or 2 | Optional |
Precoding information and number of layers | 0, 1, 2, 3, 4, 5 or 6 | Optional | 0 bit if the higher layer parameter txConfig = nonCodeBook or for 1 antenna port
Antenna ports | 2, 3, 4 or 5 | Mandatory |
SRS request | 2 or 3 | Mandatory | This bit field may also indicate the associated CSI-RS
CSI request | 0, 1, 2, 3, 4, 5 or 6 | Optional | The bit size may be determined by the higher layer parameter reportTriggerSize
PTRS-DMRS association | 0 or 2 | Optional | 0 bit if PTRS-UplinkConfig is not configured and transformPrecoder = disabled, or if transformPrecoder = enabled, or if maxRank = 1; 2 bits otherwise
beta_offset indicator | 0 or 2 | Optional | 0 bit if the higher layer parameter betaOffsets = semiStatic; otherwise 2 bits
DMRS sequence initialization | 0 or 1 | Optional |
UL-SCH indicator | 1 | Mandatory | A value of "1" may indicate that UL-SCH shall be transmitted on the PUSCH and a value of "0" may indicate that UL-SCH shall not be transmitted on the PUSCH
TABLE X8 (DCI format 1_0)
Information field | Number of bits | Mandatory/Optional | Remarks
Identifier for DCI formats | 1 | Mandatory | The value of this bit field is always set to 1, indicating a DL DCI format
Frequency domain resource assignment | 15 | Mandatory | VRBs indicated using type 1 resource allocation
Time domain resource assignment | 4 | Mandatory | Index of an entry of a table providing sets of OFDM symbols and a slot used for PDSCH transmission
VRB-to-PRB mapping | 1 | Mandatory | Flag to control VRB-to-PRB mapping
Modulation and coding scheme | 5 | Mandatory | MCS for a single TB which is contained in the PDSCH
New data indicator | 1 | Mandatory | Indicating whether the TB transmission is an initial transmission (in which case the NDI value is toggled) or a re-transmission (in which case the NDI value is not toggled)
Redundancy version | 2 | Mandatory | Indicating the rate-matching pattern
HARQ process number | 3 | Mandatory |
Downlink assignment index | 2 | Mandatory | As counter DAI
TPC command for scheduled PUCCH | 2 | Mandatory | TPC command for the PUCCH on which HARQ-ACK feedback for the scheduled PDSCH is to be transmitted
PUCCH resource indicator | 3 | Mandatory | Indicating a PUCCH resource index
PDSCH-to-HARQ_feedback timing indicator | 3 | Mandatory | Indicating a timing offset between the slot where the scheduled PDSCH is transmitted and the slot where the corresponding PUCCH is to be transmitted
TABLE X9 (DCI format 1_1)
Information field | Number of bits | Mandatory/Optional | Remarks
Identifier for DCI formats | 1 | Mandatory | The value of this bit field is always set to 1, indicating a DL DCI format
Carrier indicator | 0 or 3 | Optional | Indicating the SCellIndex of the serving cell in which the scheduled PDSCH is transmitted
Bandwidth part indicator | 0, 1 or 2 | Optional | Indicating the BWP ID of the BWP which contains the scheduled PDSCH. If a UE does not support active BWP change via DCI, the UE may ignore this bit field
Frequency domain resource assignment | 25 | Mandatory | VRBs indicated using type 0 or type 1 resource allocation
Time domain resource assignment | 0, 1, 2, 3 or 4 | Optional | Index of an entry of an RRC-configured table providing the set of OFDM symbols used for PDSCH transmission
VRB-to-PRB mapping | 0 or 1 | Mandatory | Flag to control VRB-to-PRB mapping; 0 bit if only resource allocation type 0 is configured, 1 bit otherwise
PRB bundling size indicator | 0 or 1 | Optional | 1 bit if the higher layer parameter prb-BundlingType is set to 'dynamic'; 0 bit otherwise
Rate matching indicator | 0, 1 or 2 | Optional | RB-level and/or RE-level indication of REs which are not available for the scheduled PDSCH transmission. Each bit corresponds to a respective higher layer parameter rateMatchPattern
ZP CSI-RS trigger | 0, 1 or 2 | Optional | Indicating CSI-RS REs which are not available for the scheduled PDSCH transmission
UCI on PUSCH information | 2 | Optional | Indication of the beta value for UCI on PUSCH, possibly also other UCI-on-PUSCH-related information
Modulation and coding scheme for TB1 | 5 | Mandatory | MCS for TB1 which is contained in the scheduled PDSCH
New data indicator for TB1 | 1 | Mandatory | NDI for TB1 which is contained in the scheduled PDSCH
Redundancy version for TB1 | 2 | Mandatory | RV for TB1 which is contained in the scheduled PDSCH
Modulation and coding scheme for TB2 | 5 | Optional | MCS for TB2 which is contained in the scheduled PDSCH. Only present if maxNrofCodeWordsScheduledByDCI equals 2
New data indicator for TB2 | 1 | Optional | NDI for TB2 which is contained in the scheduled PDSCH. Only present if maxNrofCodeWordsScheduledByDCI equals 2
Redundancy version for TB2 | 2 | Optional | RV for TB2 which is contained in the scheduled PDSCH. Only present if maxNrofCodeWordsScheduledByDCI equals 2
HARQ process number | 4 | Mandatory |
Downlink assignment index | 0, 2 or 4 | Optional | 4 bits if more than one serving cell is configured in the DL and the higher layer parameter pdsch-HARQ-ACK-Codebook = dynamic, where the 2 MSB (most significant) bits are the counter DAI and the 2 LSB (least significant) bits are the total DAI; 2 bits if only one serving cell is configured in the DL and the higher layer parameter pdsch-HARQ-ACK-Codebook = dynamic, where the 2 bits are the counter DAI; 0 bit otherwise
TPC command for scheduled PUCCH | 2 | Mandatory | TPC command for the PUCCH on which HARQ-ACK feedback for the scheduled PDSCH is to be transmitted
PUCCH resource indicator | 3 | Mandatory | Indicating a PUCCH resource index
PDSCH-to-HARQ_feedback timing indicator | 0, 1, 2 or 3 | Optional | Indicating a timing offset between the slot where the scheduled PDSCH is transmitted and the slot where the corresponding PUCCH is to be transmitted
Antenna port(s) | 4, 5 or 6 | Mandatory | Indicating the antenna ports used for the scheduled PDSCH transmission and/or the number of CDM groups without data (i.e. the number of CDM groups whose REs are not available for the PDSCH transmissions)
Transmission configuration indication | 0 or 3 | Optional | 0 bit if the higher layer parameter tci-PresentInDCI is not enabled; 3 bits otherwise
SRS request | 2 or 3 | Mandatory | This bit field may also indicate the associated CSI-RS
CBG transmission information (CBGTI) | 0, 2, 4, 6 or 8 | Optional | The bit size may be determined by the higher layer parameters maxCodeBlockGroupsPerTransportBlock and Number-MCS-HARQ-DL-DCI for the PDSCH
CBG flushing out information (CBGFI) | 0 or 1 | Optional | The bit size may be determined by the higher layer parameter codeBlockGroupFlushIndicator
DMRS sequence initialization | 0 or 1 | Optional |
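As a worked example of the size alignment described above, the following sketch sums the fixed field sizes of Tables X6 and X8 and pads DCI format 0_0 up to the size of DCI format 1_0. The function name is illustrative, and modeling the alignment as simple tail padding is a simplifying assumption of this sketch rather than a normative procedure.

```python
# Minimal sketch of fallback DCI size alignment, assuming the field sizes
# of Tables X6 and X8; names are illustrative, not normative.

FORMAT_0_0_FIELDS = {  # UL fallback, before padding and UL/SUL indicator
    "Identifier for DCI formats": 1,
    "Frequency domain resource assignment": 15,
    "Time domain resource assignment": 2,
    "Frequency hopping flag": 1,
    "Modulation and coding scheme": 5,
    "New data indicator": 1,
    "Redundancy version": 2,
    "HARQ process number": 4,
    "TPC command for scheduled PUSCH": 2,
}

FORMAT_1_0_FIELDS = {  # DL fallback
    "Identifier for DCI formats": 1,
    "Frequency domain resource assignment": 15,
    "Time domain resource assignment": 4,
    "VRB-to-PRB mapping": 1,
    "Modulation and coding scheme": 5,
    "New data indicator": 1,
    "Redundancy version": 2,
    "HARQ process number": 3,
    "Downlink assignment index": 2,
    "TPC command for scheduled PUCCH": 2,
    "PUCCH resource indicator": 3,
    "PDSCH-to-HARQ_feedback timing indicator": 3,
}

def aligned_sizes(sul_configured: bool) -> tuple:
    """Pad DCI format 0_0 so that both fallback formats have the same size."""
    size_0_0 = sum(FORMAT_0_0_FIELDS.values())
    size_1_0 = sum(FORMAT_1_0_FIELDS.values())
    # Per Table X6: 1 UL/SUL bit only if SUL is configured and 1_0 is larger
    # before padding.
    if sul_configured and size_1_0 > size_0_0:
        size_0_0 += 1
    # "Padding bits, if required" bring 0_0 up to the size of 1_0.
    size_0_0 += max(0, size_1_0 - size_0_0)
    return size_0_0, size_1_0

assert aligned_sizes(sul_configured=True) == (42, 42)
```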
FIG. 7 shows examples of several numerologies. The numerology #1 (μ=0) may be a basic numerology. For example, an RE of the basic numerology is defined with a subcarrier spacing of 15 kHz in the frequency domain and 2048κ·Ts + CP length (e.g., 512κ·Ts, 160κ·Ts or 144κ·Ts) in the time domain, where the product κ·Ts corresponds to a baseband sampling time unit of 1/(15000*2048) seconds. For the μ-th numerology, the subcarrier spacing may be equal to 15 kHz * 2^μ and the effective OFDM symbol length Nu·Ts = 2048 * 2^-μ * κ·Ts. Consequently, the symbol length is 2048 * 2^-μ * κ·Ts + CP length (e.g., 512 * 2^-μ * κ·Ts, 160 * 2^-μ * κ·Ts or 144 * 2^-μ * κ·Ts). Note that κ=64, Ts=1/(Δfmax·Nf), Δfmax=480·10^3 Hz (i.e. Δf for μ=5), and Nf=4096. In other words, the subcarrier spacing of the (μ+1)-th numerology is double that of the μ-th numerology, and the symbol length of the (μ+1)-th numerology is half that of the μ-th numerology. FIG. 7 shows four numerologies, but the system may support another number of numerologies.
FIG. 8 shows a set of examples of subframe structures for the numerologies that are shown in FIG. 7. These examples are based on the slot configuration set to 0. A slot includes 14 symbols, the slot length of the (μ+1)-th numerology is half that of the μ-th numerology, and eventually the number of slots in a subframe (i.e., 1 ms) becomes double. It may be noted that a radio frame may include 10 subframes, and the radio frame length may be equal to 10 ms.
FIG. 9 shows another set of examples of subframe structures for the numerologies that are shown in FIG. 7. These examples are based on the slot configuration set to 1. A slot includes 7 symbols, the slot length of the (μ+1)-th numerology is half that of the μ-th numerology, and eventually the number of slots in a subframe (i.e., 1 ms) becomes double.
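The scaling relations above (subcarrier spacing doubling with μ while symbol and slot lengths halve) can be made concrete with a short sketch. The constants follow the text (κ = 64, Nf = 4096, Δfmax = 480 kHz); the function names are illustrative.

```python
KAPPA = 64
DELTA_F_MAX = 480_000            # Hz, i.e. Δf for μ = 5
N_F = 4096
T_S = 1.0 / (DELTA_F_MAX * N_F)  # basic time unit Ts in seconds

def subcarrier_spacing_hz(mu: int) -> float:
    """Δf = 15 kHz * 2^μ."""
    return 15_000 * 2 ** mu

def effective_symbol_length_s(mu: int) -> float:
    """Effective OFDM symbol length (without CP): 2048 * 2^-μ * κ * Ts."""
    return 2048 * 2 ** -mu * KAPPA * T_S

def slots_per_subframe(mu: int, slot_config: int = 0) -> int:
    """Slot config 0: 14-symbol slots; slot config 1: 7-symbol slots."""
    base = 1 if slot_config == 0 else 2   # slots per 1 ms subframe at μ = 0
    return base * 2 ** mu

# μ = 0: 15 kHz spacing gives a 66.67 us effective symbol (1/15000 s).
assert abs(effective_symbol_length_s(0) - 1 / 15_000) < 1e-12
# Doubling μ doubles the slot count per subframe.
assert slots_per_subframe(1) == 2 * slots_per_subframe(0)
```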
A downlink physical channel may correspond to a set of resource elements carrying information originating from higher layers. The downlink physical channels may include the Physical Downlink Shared Channel (PDSCH), Physical Broadcast Channel (PBCH), and Physical Downlink Control Channel (PDCCH). A downlink physical signal corresponds to a set of resource elements used by the physical layer but might not carry information originating from higher layers. The downlink physical signals may include Demodulation reference signals (DM-RS), Phase-tracking reference signals (PT-RS), Channel-state information reference signals (CSI-RS), the Primary synchronization signal (PSS), and the Secondary synchronization signal (SSS).
An uplink physical channel may correspond to a set of resource elements carrying information originating from higher layers. The uplink physical channels may include the Physical Uplink Shared Channel (PUSCH), Physical Uplink Control Channel (PUCCH), and Physical Random Access Channel (PRACH). An uplink physical signal is used by the physical layer but might not carry information originating from higher layers. The uplink physical signals may include Demodulation reference signals (DM-RS), Phase-tracking reference signals (PT-RS), and the Sounding reference signal (SRS).
The Synchronization Signal and PBCH block (SSB) may consist of primary and secondary synchronization signals (PSS, SSS), each occupying 1 symbol and 127 subcarriers, and the PBCH spanning across 3 OFDM symbols and 240 subcarriers, but on one symbol leaving an unused part in the middle for the SSS. The periodicity of the SSB can be configured by the network, and the time locations where the SSB can be sent are determined by the sub-carrier spacing. Within the frequency span of a carrier, multiple SSBs can be transmitted. The physical cell identities (PCIs) of those SSBs may not have to be unique, i.e. different SSBs can have different PCIs. However, when an SSB is associated with an SIB1 (also known as remaining minimum system information (RMSI)), the SSB may correspond to an individual cell, which has a unique NR Cell Global Identifier (NCGI). Such an SSB may be referred to as a Cell-Defining SSB (CD-SSB). A PCell may always be associated with a CD-SSB located on the synchronization raster.
The slot format indicator (SFI) may be defined to specify a format for one or more slot(s). With SFI, the UE 102 may be able to derive at least which symbols in a given slot are 'DL', 'UL', and 'Unknown', respectively. In addition, it may also indicate which symbols in a given slot are 'Reserved'. With SFI, the UE 102 may also be able to derive the number of slots for which the SFI indicates their formats. SFI may be configured by a dedicated RRC configuration message. Alternatively and/or additionally, SFI may be signaled by a group-common PDCCH (e.g., PDCCH with SFI-RNTI). Yet alternatively and/or additionally, SFI may be broadcast via the master information block (MIB) or remaining minimum system information (RMSI).
For example, each SFI can express up to 8 combinations of 'DL', 'UL', 'Unknown' and 'Reserved', and each combination includes N_symb^{slot,μ} symbol types. More specifically, given that N_symb^{slot,μ}=14, one combination may be all 14 symbols set to 'Unknown'. Another combination may be all 14 symbols set to 'DL'. Yet another combination may be all 14 symbols set to 'UL'. Yet another combination may be a mixture of 'DL', 'Reserved' and 'UL', such as eight 'DL' symbols, followed by 'Reserved' symbols, and ending with 'UL'.
'DL' symbols may be available for DL receptions and CSI/RRM measurements at the UE 102 side. 'UL' symbols may be available for UL transmissions at the UE 102 side. An 'Unknown' resource may also be referred to as 'Flexible' and can be overridden at least by DCI indication. 'Unknown' may be used to achieve the same as 'Reserved' if not overridden by DCI and/or SFI indication. On 'Unknown' symbols, the UE 102 may be allowed to assume any DL and UL transmissions which are configured by higher layers, unless overridden by DCI indicating the other direction, as well as any DL and UL transmissions indicated by DCI. For example, periodic CSI-RS, periodic CSI-IM, semi-persistently scheduled CSI-RS, periodic CSI reporting, semi-persistently scheduled CSI reporting, periodic SRS transmission, and the higher-layer configured Primary synchronization signal (PSS)/secondary SS (SSS)/PBCH can be assumed (i.e. for DL, assumed to be present so that the reception can be performed, and for UL, assumed such that the transmission can be performed). The overriding of 'Unknown' symbols by DCI means that the UE 102 may have to assume only the DL and UL transmissions (PDSCH transmission, PUSCH transmission, aperiodic CSI-RS transmission, aperiodic CSI-IM resource, aperiodic SRS transmission) which are indicated by DCI indications. The overriding of 'Unknown' symbols by SFI means that the UE 102 may have to assume the symbols as either 'DL', 'UL', or 'Reserved' according to SFI indications.
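As a rough illustration of how a slot format combination and the overriding rule might be consumed, consider the following sketch. The encoding of symbol types and the function name are assumptions made for illustration only.

```python
from typing import List, Optional

SYMBOL_TYPES = ("DL", "UL", "Unknown", "Reserved")

def apply_overrides(slot_format: List[str],
                    overrides: Optional[dict] = None) -> List[str]:
    """Return the effective per-symbol directions for one 14-symbol slot.

    slot_format: 14 entries, each one of SYMBOL_TYPES (from the SFI).
    overrides:   optional {symbol_index: 'DL' or 'UL'} from DCI; per the
                 text, only 'Unknown' symbols may be overridden.
    """
    assert len(slot_format) == 14
    assert all(s in SYMBOL_TYPES for s in slot_format)
    effective = list(slot_format)
    for idx, direction in (overrides or {}).items():
        if effective[idx] == "Unknown":
            effective[idx] = direction
    return effective

# Example combination from the text: eight 'DL', then 'Reserved', ending 'UL'.
combo = ["DL"] * 8 + ["Reserved"] * 5 + ["UL"]
print(apply_overrides(combo))
```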
If the UE 102 assumes aperiodic CSI-RS transmission and/or an aperiodic CSI-IM resource, the UE 102 may perform CSI and/or RRM measurement based on the aperiodic CSI-RS transmission and/or aperiodic CSI-IM resource. If the UE 102 does not assume aperiodic CSI-RS transmission and/or an aperiodic CSI-IM resource, the UE 102 may not use the aperiodic CSI-RS transmission and/or aperiodic CSI-IM resource for CSI and/or RRM measurement.
The UE 102 may have to monitor PDCCH on some 'DL' or 'Unknown' symbols. There may be several options for monitoring PDCCH. If all of the OFDM symbols which are assigned for a given control resource set (CORESET) are 'DL', the UE 102 may assume all of the OFDM symbols are valid for monitoring of a PDCCH associated with the given CORESET. In this case, the UE 102 may assume each PDCCH candidate in the CORESET is mapped to all of the OFDM symbols for time-first RE group (REG)-to-control channel element (CCE) mapping. If all of the OFDM symbols which are assigned for a given CORESET are 'Unknown', the UE 102 may assume all of the OFDM symbols are valid for monitoring of a PDCCH associated with the given CORESET. In this case, the UE 102 may assume each PDCCH candidate in the CORESET is mapped to all of the OFDM symbols for time-first REG-to-CCE mapping. If every OFDM symbol which is assigned for a given combination of CORESET and search space set is either 'UL' or 'Reserved', the UE 102 may assume those OFDM symbols are not valid for monitoring of a PDCCH associated with the given combination of CORESET and search space set. If some of the OFDM symbols which are assigned for a given combination of CORESET and search space set are 'DL' and the others are 'UL' or 'Reserved', or if some of the OFDM symbols which are assigned for a given combination of CORESET and search space set are 'Unknown' and the others are 'UL' or 'Reserved', the UE 102 may not monitor PDCCH in the CORESET.
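The four monitoring rules above reduce to a simple decision over the symbol types assigned to the CORESET (or the CORESET/search-space-set combination). A minimal sketch with illustrative names follows; cases not covered by the rules above, such as a mixture of 'DL' and 'Unknown', are conservatively treated as not monitored here, which is an assumption of this sketch.

```python
from typing import List

def pdcch_monitoring_valid(symbol_types: List[str]) -> bool:
    """Decide whether the UE monitors PDCCH on the assigned OFDM symbols.

    Per the rules above: all 'DL' or all 'Unknown' -> monitor (with
    time-first REG-to-CCE mapping over all symbols); all 'UL'/'Reserved',
    or a mixture with 'UL'/'Reserved', -> do not monitor.
    """
    if all(s == "DL" for s in symbol_types):
        return True
    if all(s == "Unknown" for s in symbol_types):
        return True
    # Mixtures not named in the text are treated as not monitored here.
    return False

assert pdcch_monitoring_valid(["DL", "DL", "DL"])
assert not pdcch_monitoring_valid(["DL", "UL", "DL"])
assert not pdcch_monitoring_valid(["UL", "Reserved"])
```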
FIG. 10 is a block diagram illustrating one implementation of a gNB 1060 (an example of the gNB 160). The gNB 1060 may include a higher layer processor 1001 (also referred to as higher layer processing circuitry), a DL transmitter 1002, a UL receiver 1003, and antennas 1004. The DL transmitter 1002 may include a PDCCH transmitter 1005 and a PDSCH transmitter 1006. The UL receiver 1003 may include a PUCCH receiver 1007 and a PUSCH receiver 1008. The higher layer processor 1001 may manage the physical layer's behaviors (the DL transmitter's and the UL receiver's behaviors) and provide higher layer parameters to the physical layer. The higher layer processor 1001 may obtain transport blocks from the physical layer. The higher layer processor 1001 may send/acquire higher layer messages such as common and dedicated RRC messages and/or MAC messages to/from a UE's higher layer. The higher layer processor 1001 may also set and/or store higher layer parameters carried by the higher layer messages. The higher layer processor 1001 may provide the PDSCH transmitter 1006 with transport blocks and provide the PDCCH transmitter 1005 with transmission parameters related to the transport blocks. The UL receiver 1003 may receive multiplexed uplink physical channels and uplink physical signals via receiving antennas and de-multiplex them. The PUCCH receiver 1007 may provide the higher layer processor 1001 with UCI. The PUSCH receiver 1008 may provide the higher layer processor 1001 with received transport blocks.
FIG. 11 is a block diagram illustrating one implementation of a UE 1102 (an example of the UE 102). The UE 1102 may include a higher layer processor 1111, a UL transmitter 1113, a DL receiver 1112, and antennas 1114. The UL transmitter 1113 may include a PUCCH transmitter 1117 and a PUSCH transmitter 1118. The DL receiver 1112 may include a PDCCH receiver 1115 and a PDSCH receiver 1116. The higher layer processor 1111 may manage the physical layer's behaviors (the UL transmitter's and the DL receiver's behaviors) and provide higher layer parameters to the physical layer. The higher layer processor 1111 may obtain transport blocks from the physical layer. The higher layer processor 1111 may send/acquire higher layer messages such as common and dedicated RRC messages and/or MAC messages to/from a UE's higher layer. The higher layer processor 1111 may also set and/or store higher layer parameters carried by the higher layer messages. The higher layer processor 1111 may provide the PUSCH transmitter 1118 with transport blocks and provide the PUCCH transmitter 1117 with UCI. The DL receiver 1112 may receive multiplexed downlink physical channels and downlink physical signals via receiving antennas and de-multiplex them. The PDCCH receiver 1115 may provide the higher layer processor 1111 with DCI. The PDSCH receiver 1116 may provide the higher layer processor 1111 with received transport blocks.
For downlink data transmission, the UE 1102 may attempt blind decoding of one or more PDCCH (also referred to just as control channel) candidates. This procedure is also referred to as monitoring of the PDCCH. The PDCCH may carry a DCI format which schedules a PDSCH (also referred to just as shared channel or data channel). The gNB 1060 may transmit the PDCCH and the corresponding PDSCH in a downlink slot. Upon the detection of the PDCCH in a downlink slot, the UE 1102 may receive the corresponding PDSCH in the downlink slot. Otherwise, the UE 1102 may not perform PDSCH reception in the downlink slot.
FIG. 12 illustrates an example of a control resource unit and reference signal structure. A control resource set may be defined, in the frequency domain, as a set of physical resource block(s) (PRBs). For example, a control resource set may include PRB #i to PRB #i+3 in the frequency domain. The control resource set may also be defined, in the time domain, as a set of OFDM symbol(s). This may also be referred to as the duration of the control resource set, or just the control resource set duration. For example, a control resource set may include three OFDM symbols, OFDM symbol #0 to OFDM symbol #2, in the time domain. The UE 102 may monitor PDCCH in one or more control resource sets. The PRB set may be configured with respect to each control resource set through dedicated RRC signaling (e.g., via dedicated RRC reconfiguration). The control resource set duration may also be configured with respect to each control resource set through dedicated RRC signaling.
In the control resource unit and reference signal structure shown in FIG. 12, control resource units are defined as sets of resource elements (REs). Each control resource unit includes all REs (i.e., 12 REs) within a single OFDM symbol and within a single PRB (i.e., 12 consecutive subcarriers). REs on which reference signals (RSs) are mapped may be counted among those REs, but the REs for RSs are not available for PDCCH transmission, and the PDCCH is not mapped on the REs for RSs. Multiple control resource units may be used for a transmission of a single PDCCH.
In other words, one PDCCH may be mapped to the REs which are included in multiple control resource units. FIG. 12 shows an example in which the UE 102 performs blind decoding of PDCCH candidates assuming that multiple control resource units located at the same frequency carry one PDCCH. The RSs for the PDCCH demodulation may be contained in all of the resource units on which the PDCCH is mapped. The REs for the RS may not be available for either the PDCCH transmission or the corresponding PDSCH transmission.
FIG. 13 illustrates an example of control channel and shared channel multiplexing. The starting and/or ending position(s) of the PDSCH may be indicated via the scheduling PDCCH. More specifically, the DCI format which schedules the PDSCH may include information field(s) for indicating the starting and/or ending position(s) of the scheduled PDSCH.
The UE 102 may include a higher layer processor which is configured to acquire a common and/or dedicated higher layer message. The common and/or dedicated higher layer message may include system information and/or information for higher layer configuration/reconfiguration. Based on the system information and/or higher layer configuration, the UE 102 performs physical layer reception and/or transmission procedures. The UE 102 may also include PDCCH receiving circuitry which is configured to monitor a PDCCH. The PDCCH may carry a DCI format which schedules a PDSCH. Additionally and/or alternatively, the PDCCH may carry a DCI format which schedules a PUSCH. The UE 102 may also include PDSCH receiving circuitry which is configured to receive the PDSCH upon the detection of the corresponding PDCCH. The UE 102 may also include PUCCH transmitting circuitry which is configured to transmit the PUCCH carrying HARQ-ACK feedback related to the PDSCH. Additionally and/or alternatively, the UE 102 may also include PUSCH transmitting circuitry which is configured to transmit the PUSCH upon the detection of the corresponding PDCCH.
The gNB 160 may include a higher layer processor which is configured to send a common and/or dedicated higher layer message. The common and/or dedicated higher layer message may include system information and/or information for higher layer configuration/reconfiguration. Based on the system information and/or higher layer configuration, the gNB 160 performs physical layer reception and/or transmission procedures. The gNB 160 may also include PDCCH transmitting circuitry which is configured to transmit a PDCCH. The PDCCH may carry a DCI format which schedules a PDSCH. Additionally and/or alternatively, the PDCCH may carry a DCI format which schedules a PUSCH. The gNB 160 may also include PDSCH transmitting circuitry which is configured to transmit the PDSCH upon the transmission of the corresponding PDCCH. The gNB 160 may also include PUCCH receiving circuitry which is configured to receive the PUCCH carrying HARQ-ACK feedback related to the PDSCH. Additionally and/or alternatively, the gNB 160 may also include PUSCH receiving circuitry which is configured to receive the PUSCH upon the detection of the corresponding PDCCH.
The UE 102 may monitor PDCCH candidates in a control resource set. The set of PDCCH candidates may also be referred to as a search space. The control resource set may be defined by a PRB set in the frequency domain and a duration in units of OFDM symbols in the time domain. For each serving cell, higher layer signaling such as common RRC messages or UE-dedicated RRC messages may configure the UE 102 with one or more PRB set(s) for PDCCH monitoring.
For each serving cell, higher layer signaling such as common RRC messages or UE-dedicated RRC messages may also configure the UE 102 with the control resource set duration for PDCCH monitoring. For each serving cell, higher layer signaling configures a UE with P control resource sets. For control resource set p, 0<=p<P, the configuration includes: a first symbol index provided by the higher layer parameter CORESET-start-symb; the number of consecutive symbols provided by the higher layer parameter CORESET-time-duration; a set of resource blocks provided by the higher layer parameter CORESET-freq-dom; a CCE-to-REG mapping provided by the higher layer parameter CORESET-trans-type (also referred to as CORESET-CCE-to-REG-mapping); a REG bundle size, in case of interleaved CCE-to-REG mapping, provided by the higher layer parameter CORESET-REG-bundle-size; and antenna port quasi-collocation provided by the higher layer parameter CORESET-TCI-StateRefld. If the UE is not configured with the higher layer parameter CORESET-TCI-StateRefld, the UE may assume that the DMRS antenna port associated with PDCCH reception in the USS and the DMRS antenna port associated with PBCH reception are quasi-collocated with respect to delay spread, Doppler spread, Doppler shift, average delay, and spatial Rx parameters.
For each serving cell, and for each DCI format with CRC scrambled by C-RNTI, SPS-RNTI and/or grant-free RNTI for which a UE is configured to monitor PDCCH, the UE is configured with associations to control resource sets. The associations may include associations to a set of control resource sets by the higher layer parameter DCI-to-CORESET-map. For each control resource set in the set of control resource sets, the associations may include: the number of PDCCH candidates per CCE aggregation level L, by the higher layer parameter CORESET-candidates-DCI; a PDCCH monitoring periodicity of k_p slots, by the higher layer parameter CORESET-monitor-period-DCI; a PDCCH monitoring offset of o_p slots, where 0<=o_p<k_p, by the higher layer parameter CORESET-monitor-offset-DCI; and a PDCCH monitoring pattern within a slot, indicating the first symbol(s) of the control resource set within a slot for PDCCH monitoring, by the higher layer parameter CORESET-monitor-DCI-symbolPattern. The UE 102 may assume that non-slot-based scheduling is configured in addition to slot-based scheduling if the UE 102 is configured with the higher layer parameter CORESET-monitor-DCI-symbolPattern. The UE 102 may assume that non-slot-based scheduling is not configured, but slot-based scheduling only, if the UE 102 is not configured with the higher layer parameter CORESET-monitor-DCI-symbolPattern.
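Gathering the higher layer parameters listed above into one place, a receiver implementation might model them as configuration records like the following. The container classes are hypothetical; only the parameter names come from the text.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class CoresetConfig:
    """Per-CORESET parameters named in the text (hypothetical container)."""
    start_symb: int                      # CORESET-start-symb
    time_duration: int                   # CORESET-time-duration
    freq_dom: List[int]                  # CORESET-freq-dom (resource blocks)
    trans_type: str                      # CORESET-trans-type (CCE-to-REG mapping)
    reg_bundle_size: Optional[int]       # CORESET-REG-bundle-size (interleaved only)
    tci_state_ref_id: Optional[int]      # CORESET-TCI-StateRefld

@dataclass
class SearchSpaceAssociation:
    """Per-DCI-format association parameters named in the text."""
    candidates_per_level: dict           # CORESET-candidates-DCI, {L: count}
    monitor_period_slots: int            # CORESET-monitor-period-DCI (k_p)
    monitor_offset_slots: int            # CORESET-monitor-offset-DCI (o_p)
    monitor_symbol_pattern: Optional[str] = None  # CORESET-monitor-DCI-symbolPattern

def non_slot_based_scheduling(assoc: SearchSpaceAssociation) -> bool:
    # Per the text: configured iff the symbol pattern parameter is provided.
    return assoc.monitor_symbol_pattern is not None
```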
FIG. 14 illustrates PDCCH monitoring occasions for slot-based scheduling (also referred to as Type A resource allocation). A search space set may be identified for a combination of a control resource set and a DCI format (or a DCI format group including DCI formats having a same DCI payload size). In the example shown in FIG. 14, two search space sets are seen, search space set #0 and #1. Both search space set #0 and #1 are associated with a same CORESET. The configuration of the CORESET, such as CORESET-start-symb, CORESET-time-duration, CORESET-freq-dom, CORESET-trans-type, CORESET-REG-bundle-size, and CORESET-TCI-StateRefld, applies to both search space set #0 and #1. For example, CORESET-time-duration set to 3 symbols applies to both of them. Search space set #0 may be associated with a certain DCI format (e.g., DCI format 1, the fallback DCI format), and search space set #1 may be associated with another certain DCI format (e.g., DCI format 2, the regular DCI format). The higher layer parameter CORESET-monitor-period-DCI is set to 2 slots for search space set #0, while the higher layer parameter CORESET-monitor-period-DCI is set to 1 slot for search space set #1. Therefore, DCI format 1 may be potentially transmitted and/or monitored every 2 slots, while DCI format 2 may be potentially transmitted and/or monitored every slot.
FIG. 15 illustrates PDCCH monitoring occasions for non-slot-based scheduling. In the example shown in FIG. 15, two search space sets are seen, search space set #2 and #3. Both search space set #2 and #3 are associated with a same CORESET. This CORESET may or may not be the same CORESET as in FIG. 14. The higher layer parameter CORESET-monitor-period-DCI for both search space set #2 and #3 is set to 1 slot. In addition, the higher layer parameter CORESET-monitor-DCI-symbolPattern is individually configured for search space set #2 and #3. The higher layer parameter CORESET-monitor-DCI-symbolPattern may indicate, using a bitmap scheme, the OFDM symbol(s) on which PDCCH is monitored. To be more specific, the higher layer parameter CORESET-monitor-DCI-symbolPattern per search space set may include 14 bits, the 1st bit to the 14th bit, which correspond to OFDM symbol #0 to #13, respectively. Each of the bits indicates whether or not PDCCH is monitored on the corresponding OFDM symbol (e.g., "0" indicates no PDCCH monitoring and "1" indicates PDCCH monitoring, or vice versa). In this example, the higher layer parameter CORESET-monitor-DCI-symbolPattern for search space set #2 indicates OFDM symbols #0 and #7 for PDCCH monitoring, while the higher layer parameter CORESET-monitor-DCI-symbolPattern for search space set #3 indicates OFDM symbols #0, #2, #4, #6, #8, #10, and #12 for PDCCH monitoring. It is noted that this PDCCH monitoring applies to the slot that is specified by CORESET-monitor-period-DCI and CORESET-monitor-offset-DCI (a sketch of this computation follows this passage).
A control-channel element may include 6 resource-element groups (REGs), where a resource-element group equals one resource block during one OFDM symbol. Resource-element groups within a control-resource set may be numbered in increasing order in a time-first manner, starting with 0 for the first OFDM symbol and the lowest-numbered resource block in the control resource set. A UE can be configured with multiple control-resource sets. Each control-resource set may be associated with one CCE-to-REG mapping only. The CCE-to-REG mapping for a control-resource set can be interleaved or non-interleaved, configured by the higher-layer parameter CORESET-CCE-REG-mapping-type. The REG bundle size is configured by the higher-layer parameter CORESET-REG-bundle-size. For non-interleaved CCE-to-REG mapping, the REG bundle size is 6. For interleaved CCE-to-REG mapping, the REG bundle size is either 2 or 6 for a CORESET with CORESET-time-duration set to 1, and the REG bundle size is either N_symb^CORESET or 6 for a CORESET with CORESET-time-duration N_symb^CORESET set to greater than 1.
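Combining the slot-level periodicity and offset with the per-slot bitmap gives the PDCCH monitoring occasions, as the FIG. 15 example illustrates. A minimal sketch follows, with illustrative names, where character i of the 14-bit pattern corresponds to OFDM symbol #i.

```python
from typing import List, Tuple

def monitoring_occasions(num_slots: int,
                         k_p: int,
                         o_p: int,
                         symbol_pattern: str = "1" + "0" * 13
                         ) -> List[Tuple[int, int]]:
    """Return (slot, symbol) pairs where PDCCH is monitored.

    k_p: CORESET-monitor-period-DCI in slots; o_p: CORESET-monitor-offset-DCI,
    with 0 <= o_p < k_p; symbol_pattern: 14-character bitmap, '1' = monitor.
    """
    assert 0 <= o_p < k_p and len(symbol_pattern) == 14
    occasions = []
    for slot in range(num_slots):
        if slot % k_p == o_p:
            occasions += [(slot, sym)
                          for sym, bit in enumerate(symbol_pattern) if bit == "1"]
    return occasions

# Search space set #3 from the example: every slot, symbols #0,#2,...,#12.
print(monitoring_occasions(2, k_p=1, o_p=0, symbol_pattern="10101010101010"))
```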
The UE may assume: the same precoding in the frequency domain being used within a REG bundle, if the higher-layer parameter CORESET-precoder-granularity equals CORESET-REG-bundle-size; and the same precoding in the frequency domain being used across contiguous RBs in the CORESET, if the higher-layer parameter CORESET-precoder-granularity equals the number of contiguous RBs in the frequency domain within the CORESET. Each control resource set includes a set of CCEs numbered from 0 to N_CCE,p,kp - 1, where N_CCE,p,kp is the number of CCEs in control resource set p in monitoring period k_p. The sets of PDCCH candidates that a UE monitors are defined in terms of PDCCH UE-specific search spaces. A PDCCH UE-specific search space S_kp^(L) at CCE aggregation level L is defined by a set of PDCCH candidates for CCE aggregation level L. L can be one of 1, 2, 4, and 8.
PDSCH and/or PUSCH RE mapping may be affected by higher layer signaling and/or layer-1 signaling, such as a PDCCH with DCI format 1 or 2. For PDSCH, modulated complex-valued symbols may be mapped on REs which meet all of the following criteria: they are in the resource blocks assigned for transmission; they are declared as available for PDSCH according to the rate matching resource set configuration and/or indication; they are not used for CSI-RS; they are not used for the Phase Tracking RS (PT-RS); they are not reserved for SS/PBCH; and they are not declared as 'reserved'.
To decode PDSCH according to a detected PDCCH, a UE may be configured with any of the following higher layer parameters: rate-match-PDSCH-resource-set, including one or multiple reserved pairs of RBs (the higher layer parameter rate-match-PDSCH-resource-RBs, also referred to as bitmap-1) and reserved symbols (the higher layer parameter rate-match-PDSCH-resource-symbols, also referred to as bitmap-2) for which the reserved RBs apply; rate-match-resources-v-shift, including LTE-CRS-vshift(s); rate-match-resources-antenna-port, including LTE-CRS antenna ports (1, 2 or 4 ports); and rate-match-CORESET, including CORESET-ID(s) of CORESET(s) configured to a UE 102 for monitoring. The UE 102 may have to determine the PDSCH RE mapping according to the union of the provided rate-matching configurations. To decode PDSCH, a UE 102 rate-matches around the REs corresponding to the detected PDCCH that scheduled the PDSCH. A UE 102 may not be expected to handle the case where PDSCH DMRS REs are overlapping, even partially, with any RE(s) indicated by the rate-matching configurations rate-match-PDSCH-resource-set, rate-match-resources-v-shift, rate-match-resources-antenna-port and rate-match-CORESET.
If a UE 102 receives a PDSCH without receiving a corresponding PDCCH, or if the UE 102 receives a PDCCH indicating an SPS PDSCH release, the UE 102 may generate one corresponding HARQ-ACK information bit. If a UE 102 is not provided the higher layer parameter PDSCH-CodeBlockGroupTransmission, the UE 102 may generate one HARQ-ACK information bit per transport block. A UE 102 is not expected to be indicated to transmit HARQ-ACK information for more than two SPS PDSCH receptions in a same PUCCH.
For each physical cell group, the UE 102 may be configured with the higher layer parameter pdsch-HARQ-ACK-Codebook, which indicates the PDSCH HARQ-ACK codebook type. The PDSCH HARQ-ACK codebook may be either semi-static (also referred to as a Type-1 HARQ-ACK codebook) or dynamic (also referred to as a Type-2 HARQ-ACK codebook). This may be applicable to both CA and non-CA operation and may correspond to the L1 parameter 'HARQ-ACK-codebook'.
A UE 102 may report HARQ-ACK information for a corresponding PDSCH reception or SPS PDSCH release only in a HARQ-ACK codebook that the UE transmits in a slot indicated by a value of the PDSCH-to-HARQ_feedback timing indicator field in a corresponding DCI format (e.g. DCI format 1_0 or DCI format 1_1). If the UE 102 receives the PDSCH or SPS PDSCH release successfully, the value of the corresponding HARQ-ACK information bit may be basically set to ACK. If the UE 102 does not receive the PDSCH or SPS PDSCH release successfully (i.e. fails to receive it), the value of the corresponding HARQ-ACK information bit may be basically set to NACK. The UE 102 may report NACK value(s) for HARQ-ACK information bit(s) in a HARQ-ACK codebook that the UE transmits in a slot not indicated by a value of the PDSCH-to-HARQ_feedback timing indicator field in a corresponding DCI format (e.g. DCI format 1_0 or DCI format 1_1). If the UE 102 is provided the higher layer parameter pdsch-AggregationFactor, N_PDSCH^repeat is the value of pdsch-AggregationFactor; otherwise, N_PDSCH^repeat = 1. The UE 102 may report HARQ-ACK information only for the last slot of the N_PDSCH^repeat slots.
If a UE reports HARQ-ACK information in a PUSCH or a PUCCH only for an SPS PDSCH release, or only for a PDSCH reception within the M_A,c occasions for candidate PDSCH receptions that is scheduled by DCI format 1_0 with a counter DAI field value of 1 on the PCell, the UE may determine a HARQ-ACK codebook only for the SPS PDSCH release or only for the PDSCH reception, e.g. a one-bit HARQ-ACK codebook. Otherwise, the HARQ-ACK codebook may be more than one bit.
In some cases, a HARQ-ACK information bit may be automatically set to a fixed value (e.g. NACK, or ACK) without referring to PDSCH reception or SPS PDSCH release reception. For example, if the UE is configured with pdsch-HARQ-ACK-Codebook = semi-static, the UE 102 may report NACK value(s) for HARQ-ACK information bit(s) in a HARQ-ACK codebook that the UE transmits in a slot not indicated by a value of the PDSCH-to-HARQ_feedback timing indicator field in a corresponding DCI format (e.g. DCI format 1_0 or DCI format 1_1). Another case where a HARQ-ACK information bit may be automatically set to a fixed value (e.g. NACK, or ACK) without referring to PDSCH reception or SPS PDSCH release reception is the following: if an occasion for a candidate PDSCH reception can be in response to a PDCCH with a DCI format (e.g. DCI format 1_1), and if the higher layer parameter maxNrofCodeWordsScheduledByDCI indicates reception of two transport blocks, then, when the UE receives a PDSCH with one transport block, the HARQ-ACK information is associated with the first transport block, and the UE 102 may generate a NACK for the second transport block if the higher layer parameter harq-ACK-SpatialBundlingPUCCH is not provided, and may generate HARQ-ACK information with a value of ACK for the second transport block if the higher layer parameter harq-ACK-SpatialBundlingPUCCH is provided. Yet another case where a HARQ-ACK information bit may be automatically set to a fixed value (e.g. NACK, or ACK) without referring to PDSCH reception or SPS PDSCH release reception is the following: if the UE 102 is configured by the higher layer parameter maxNrofCodeWordsScheduledByDCI with reception of two transport blocks for the active DL BWP of serving cell c, and if the UE 102 receives one transport block, the UE 102 may assume ACK for the second transport block.
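The two-codeword corner case above can be stated compactly: when two transport blocks are configured but only one is received, the second HARQ-ACK bit is filled with a fixed value that depends on whether harq-ACK-SpatialBundlingPUCCH is provided. A minimal sketch with illustrative names:

```python
def harq_ack_bits_for_pdsch(first_tb_ok: bool,
                            two_tbs_configured: bool,
                            second_tb_received: bool,
                            second_tb_ok: bool = False,
                            spatial_bundling_pucch: bool = False) -> list:
    """Generate per-TB HARQ-ACK bits (True = ACK) per the rules above."""
    bits = [first_tb_ok]
    if two_tbs_configured:
        if second_tb_received:
            bits.append(second_tb_ok)
        else:
            # Fixed value without referring to any reception: NACK if
            # harq-ACK-SpatialBundlingPUCCH is not provided, ACK if it is.
            bits.append(spatial_bundling_pucch)
    return bits

assert harq_ack_bits_for_pdsch(True, True, False) == [True, False]
assert harq_ack_bits_for_pdsch(True, True, False,
                               spatial_bundling_pucch=True) == [True, True]
```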
Yet another case where a HARQ-ACK information bit may be automatically set to a fixed value (e.g. NACK, or ACK) without referring to PDSCH reception or SPS PDSCH release reception is the following: the UE 102 may set to a NACK value, in the HARQ-ACK codebook, any HARQ-ACK information corresponding to a PDSCH reception or SPS PDSCH release scheduled by a DCI format (e.g. DCI format 1_0 or DCI format 1_1) that the UE 102 detects in a PDCCH monitoring occasion that is after the PDCCH monitoring occasion where the UE detects the DCI format (e.g. DCI format 1_0 or DCI format 1_1) scheduling the PUSCH transmission.
NR may support code block group based transmission(s) for PDSCH and PUSCH. If the UE 102 is provided the higher layer parameter PDSCH-CodeBlockGroupTransmission for a serving cell, the UE 102 may receive PDSCHs that include code block groups (CBGs) of a transport block, and the UE 102 may be provided the higher layer parameter maxCodeBlockGroupsPerTransportBlock indicating a maximum number N_HARQ-ACK^CBG/TB,max of CBGs for generating respective HARQ-ACK information bits for a transport block reception for the serving cell, where, for C code blocks (CBs) in a transport block, the UE 102 may determine the number of CBGs as N_HARQ-ACK^CBG/TB = min(N_HARQ-ACK^CBG/TB,max, C).
For CBG-based PDSCH reception, if the UE 102 successfully decodes all CBs in a given CBG of a TB, the value of the HARQ-ACK information bit corresponding to the CBG may be basically set to ACK. If the UE 102 does not successfully decode (i.e. fails to decode) at least one CB in the given CBG of the TB, the value of the HARQ-ACK information bit corresponding to the CBG may be basically set to NACK. In addition, in some cases, a HARQ-ACK information bit for a given CBG may be automatically set to a fixed value (e.g. NACK, or ACK) without referring to the reception of the associated CB(s). For example, the HARQ-ACK codebook includes the N_HARQ-ACK^CBG/TB,max HARQ-ACK information bits and, if N_HARQ-ACK^CBG/TB < N_HARQ-ACK^CBG/TB,max for a transport block, the UE 102 may generate a NACK value for the last N_HARQ-ACK^CBG/TB,max - N_HARQ-ACK^CBG/TB HARQ-ACK information bits for the transport block in the HARQ-ACK codebook. Another case where a HARQ-ACK information bit for a CBG is automatically set to ACK without referring to the reception of the associated CB(s) is the following: if the UE 102 generates a HARQ-ACK codebook in response to a retransmission of a transport block, corresponding to a same HARQ process as a previous transmission of the transport block, the UE 102 may generate an ACK for each CBG that the UE 102 correctly decoded in a previous transmission of the transport block. Yet another case where a HARQ-ACK information bit for a CBG is automatically set to a certain value without referring to the reception of the associated CB(s) is the following: if the UE 102 receives a PDSCH that is scheduled by a PDCCH with a DCI format (e.g. DCI format 1_0), or an SPS PDSCH, or the UE detects an SPS PDSCH release, and if the UE is configured with the higher layer parameter pdsch-HARQ-ACK-Codebook = semi-static, the UE may repeat N_HARQ-ACK^CBG/TB,max times the HARQ-ACK information for the transport block of the PDSCH or for the SPS PDSCH release, respectively, for generating N_HARQ-ACK^CBG/TB,max HARQ-ACK information bits.
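The CBG bookkeeping above can be sketched as follows: the number of reported CBG bits is min(maxCodeBlockGroupsPerTransportBlock, C), a CBG is acknowledged only if all of its CBs decode, and unused bit positions are filled with NACK. The near-even grouping of CBs into CBGs is an assumption of this sketch, and the names are illustrative.

```python
def cbg_harq_ack_bits(cb_decoded: list, max_cbg_per_tb: int) -> list:
    """Per-CBG HARQ-ACK bits (True = ACK) for one transport block.

    cb_decoded: decoding result per code block, in order.
    N_CBG/TB = min(max_cbg_per_tb, C): the CBs are split into that many groups.
    """
    c = len(cb_decoded)
    n_cbg = min(max_cbg_per_tb, c)
    base, extra = divmod(c, n_cbg)
    bits, start = [], 0
    for g in range(n_cbg):
        size = base + (1 if g < extra else 0)  # near-even CB grouping (assumed)
        group = cb_decoded[start:start + size]
        bits.append(all(group))                # ACK only if every CB decoded
        start += size
    # Pad with NACK up to max_cbg_per_tb, as described above.
    bits += [False] * (max_cbg_per_tb - n_cbg)
    return bits

# 5 CBs, at most 4 CBGs: groups of sizes 2,1,1,1 under the assumed grouping.
print(cbg_harq_ack_bits([True, True, False, True, True], max_cbg_per_tb=4))
```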
The 5G NR system may be operated in licensed spectrum which is owned by cellular operators. Additionally and/or alternatively, the 5G NR system may be operated in unlicensed spectrum as a complementary tool for the operators to augment their service offering. NR-based unlicensed access (NR-U) may be applicable to below-6 GHz and above-6 GHz unlicensed bands (e.g., 5 GHz, 37 GHz, 60 GHz). The NR-U cell may be operated in TDD bands with either an LTE-based anchor cell or an NR-based anchor cell (i.e. a standalone NR cell). Furthermore, standalone operation of NR-U in unlicensed spectrum may also be possible. In order to ensure fair co-existence with another NR-U node and/or another radio access technology (RAT) node, such as a wireless LAN node, the gNB 160 and/or the UE 102 may have to perform a Listen Before Talk (LBT) procedure before their transmissions. The LBT procedure is also referred to as a Channel Access procedure. There may be several types of Channel Access (CA) procedures.
FIG. 16 shows the first type of Channel Access procedure. The first type of Channel Access procedure may be used for downlink transmission(s) including PDSCH and PDCCH. The gNB 160 may transmit a transmission including PDSCH and PDCCH on a carrier on which NR-U cell(s) transmission(s) are performed, after first sensing the channel to be idle during the CA slot durations of a defer duration Td, and after the counter N reaches zero in Step S4. The counter N is adjusted by sensing the channel for additional CA slot duration(s) according to Steps S1 to S6. In Step S1, the gNB 160 may set N = Ninit, where Ninit is a random number uniformly distributed between 0 and CWp, and go to Step S4. In Step S2, if N > 0 and the gNB 160 chooses to decrement the counter, the gNB 160 may set N = N - 1. In Step S3, the gNB 160 may sense the channel for an additional CA slot duration, and if the additional CA slot duration is idle, go to Step S4; otherwise, go to Step S5. In Step S4, if N = 0, the gNB 160 may stop; otherwise, go to Step S2. In Step S5, the gNB 160 may sense the channel until either a busy CA slot is detected within an additional defer duration Td or all the CA slots of the additional defer duration Td are detected to be idle. In Step S6, if the channel is sensed to be idle during all the CA slot durations of the additional defer duration Td, the gNB 160 may go to Step S4; otherwise, go to Step S5.
FIG. 17 shows an example of deferment of transmission. If the gNB 160 has not transmitted a transmission including PDSCH/PDCCH on a carrier on which NR-U cell(s) transmission(s) are performed after Step S4 in this procedure, the gNB 160 may transmit a transmission including PDSCH/PDCCH on the carrier if the channel is sensed to be idle at least in a CA slot duration Tsl when the gNB 160 is ready to transmit PDSCH/PDCCH, and if the channel has been sensed to be idle during all the CA slot durations of a defer duration Td immediately before this transmission. If the channel has not been sensed to be idle in a CA slot duration Tsl when the gNB 160 first senses the channel after it is ready to transmit, or if the channel has been sensed to be not idle during any of the CA slot durations of a defer duration Td immediately before this intended transmission, the gNB 160 may proceed to Step S1 after sensing the channel to be idle during the CA slot durations of a defer duration Td.
The defer duration Td may consist of a duration Tf = 16 us immediately followed by mp consecutive CA slot durations, where each slot duration is Tsl = 9 us, and Tf includes an idle CA slot duration Tsl at the start of Tf. A CA slot duration Tsl may be considered to be idle if the gNB 160 senses the channel during the CA slot duration and the power detected by the gNB 160 for at least 4 us within the CA slot duration is less than the energy detection threshold XThresh. Otherwise, the CA slot duration Tsl may be considered to be busy.
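Steps S1 to S6 amount to a random backoff over 9-us CA slots, with the defer duration re-sensed after every busy observation. The following sketch simulates that loop against a caller-supplied sensing callback; it is a simplified model for illustration, not an implementation of the procedure.

```python
import random

def type1_channel_access(sense_idle, cw_p: int, m_p: int) -> None:
    """Simulate Steps S1-S6 of the first type of Channel Access procedure.

    sense_idle(): callback, True if one CA slot (9 us) is sensed idle.
    cw_p: current contention window CWp; m_p: CA slots in the defer
    duration Td (Td = Tf of 16 us, containing one idle slot, + m_p slots).
    """
    def defer_duration_idle() -> bool:
        # One idle slot inside Tf plus m_p further idle CA slots.
        return all(sense_idle() for _ in range(1 + m_p))

    while not defer_duration_idle():
        pass                        # keep sensing until Td is fully idle
    n = random.randint(0, cw_p)     # Step S1: Ninit uniform in [0, CWp]
    while n > 0:                    # Step S4 exit condition is N == 0
        if sense_idle():            # Step S3: additional CA slot idle?
            n -= 1                  # Step S2: decrement the counter
        else:
            while not defer_duration_idle():
                pass                # Steps S5/S6: re-defer after a busy slot
    # Step S4 reached with N == 0: the transmission may start.

random.seed(1)
type1_channel_access(lambda: random.random() < 0.9, cw_p=15, m_p=3)
```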
By using the above-described transmission deferment, more than one cell whose locations are geographically separated may be able to obtain channel access successfully at the same time, and therefore frequency reuse among the cells can be achieved.
CWmin,p <= CWp <= CWmax,p is the contention window. CWp adjustment may be performed by the gNB 160. CWmin,p and CWmax,p may be chosen before Step S1 of the above-described procedure. mp, CWmin,p, and CWmax,p may be derived based on the channel access priority class associated with the gNB transmission. FIG. 18 shows an example of channel access priority classes for downlink transmission(s). In this example, there are 4 classes, and a lower index may correspond to a higher priority. For each class, a parameter set for the channel access procedure is defined. The parameter set for class p may include mp, CWmin,p, CWmax,p, Tmcot,p, and the allowed CWp sizes, where Tmcot,p is referred to as the maximum channel occupancy time (MCOT). The gNB 160 getting channel access with priority class p may not be allowed to continuously transmit on the carrier on which NR-U cell(s) transmission(s) are performed for a period exceeding Tmcot,p.
Similarly, the UE 102 may use the first type of Channel Access procedure for uplink transmission(s) including PUSCH and/or PUCCH. The above-described Channel Access procedure including Step S1 to Step S6 may be used with "gNB 160" replaced by "UE 102", with "PDSCH/PDCCH" replaced by "PUSCH/PUCCH/SRS", and with the uplink channel access priority classes. FIG. 19 shows an example of channel access priority classes for uplink transmission(s). When the first type of Channel Access procedure is used for uplink transmission, it may also be referred to as the Type-1 UL Channel Access procedure.
FIG. 20 shows the second type of Channel Access procedure. The second type of Channel Access procedure may be used for downlink transmission(s) including discovery signal transmission(s) and not including PDSCH. The discovery signal may include SS/PBCH(s), CSI-RS(s) and/or control resource set(s). The second type of Channel Access procedure may make the channel access easier than the first type, since the discovery signal may not occupy a long transmission duration compared with a PDSCH transmission. A gNB 160 may transmit a transmission including a discovery signal but not including PDSCH on a carrier on which NR-U cell(s) transmission(s) are performed immediately after sensing the channel to be idle for at least a sensing interval Tdrs = 25 us, and if the duration of the transmission is less than 1 ms. Tdrs may consist of a duration Tf = 16 us immediately followed by one CA slot duration Tsl = 9 us, and Tf includes an idle CA slot duration Tsl at the start of Tf. The channel is considered to be idle for Tdrs if it is sensed to be idle during the slot durations of Tdrs.
FIG. 21 shows the third type of Channel Access procedure. The channel sensing scheme of the third type of Channel Access procedure is almost the same as that of the second type of Channel Access procedure. The third type of Channel Access procedure may be used for uplink transmission(s) which are to be transmitted inside of a COT obtained by the first type channel access procedure at the gNB 160 side. In the example, the gNB 160 performs the first type channel access procedure right before a Common Control-PDCCH (CC-PDCCH) transmission. The CC-PDCCH may also be referred to as a PDCCH with CRC scrambled by the common control-RNTI (CC-RNTI).
A DCI format carried by the CC-PDCCH may include several bit fields, including bit field(s) for indicating "UL offset" and "UL duration". If UL offset I and duration d are indicated by the CC-PDCCH for subframe n, the UE 102 is not required to receive any downlink physical channels and/or physical signals in slot(s) n+I+i with i = 0, 1, . . . , d-1, and those slot(s) may have to be covered by the MCOT which was obtained by the channel access for the CC-PDCCH transmission at the gNB 160 side. If the UE uses the Type 2 channel access procedure for a transmission including PUSCH, the UE may be allowed to transmit the transmission including PUSCH immediately after sensing the channel to be idle for at least a sensing interval Tshort_ul = 25 us. Tshort_ul consists of a duration Tf = 16 us immediately followed by one CA slot duration Tsl = 9 us, and Tf includes an idle CA slot duration Tsl at the start of Tf. The channel is considered to be idle for Tshort_ul if it is sensed to be idle during the CA slot durations of Tshort_ul. The third type of Channel Access procedure may also be referred to as the Type-2 UL Channel Access procedure. Note that the other types of PDCCH (e.g. PDCCH with DCI format 0_0, 0_1, 0_2, 0_3, 1_0, 1_1, 1_2, 1_3) for slot n may also indicate "UL offset" and "UL duration". In this case, the UE may also be allowed to use the third type of Channel Access procedure, if configured.
FIG. 22 shows the fourth type of Channel Access procedure. The channel sensing scheme of the fourth type of Channel Access procedure is almost the same as that of the second and third types of Channel Access procedure. The fourth type of Channel Access procedure may be used for downlink transmission(s) which include PDCCH but do not include PDSCH and are to be transmitted inside of a COT obtained by the first type channel access procedure at the UE 102 side. If a PUSCH transmission indicates COT sharing, a gNB 160 may be allowed to transmit a transmission including PDCCH but not including PDSCH on the same carrier immediately after sensing the channel to be idle for at least a sensing interval Tpdcch = 25 us, if the duration of the PDCCH is less than or equal to two OFDM symbols in length, and it shall contain at least Downlink Feedback Information (DFI) or a UL grant to the UE from which the PUSCH transmission indicating COT sharing was received. Tpdcch consists of a duration Tf = 16 us immediately followed by one CA slot duration Tsl = 9 us, and Tf includes an idle CA slot duration Tsl at the start of Tf. The channel is considered to be idle for Tpdcch if it is sensed to be idle during the slot durations of Tpdcch.
In order to avoid collisions with transmissions from other nodes, the contention window (CW) size may change depending on how many times collisions occur, or an equivalent measure. If a collision is observed at a node, the node may have to increase the CW size. If no collision is observed, the node may be allowed to reduce the CW size. FIG. 23 shows an example of CW size adjustment. This example assumes that the number of available CW sizes is 7, i.e. CW #0 to CW #6. If a collision is observed, the CW size is increased to the CW size with the next higher index, except for CWmax, in which case the CW size is kept as CWmax. If no collision is observed, the CW size may fall back to CWmin irrespective of the previous CW size. A possible metric for the gNB's decision on whether or not a collision occurred for PDSCH may be the HARQ-ACK feedback from the UE 102. Another possible metric for the gNB's decision on whether or not a collision occurred for PDCCH may be the PUSCH from the UE 102.
For uplink, a possible metric for the UE's decision on whether or not a collision occurred for PUSCH may be whether or not an uplink retransmission is requested.
FIG. 24 shows an example of a reference slot for CW size adjustment for downlink transmission. Reference slot k may be defined as the starting slot of the most recent transmission on the carrier made by the gNB 160 for which at least some HARQ-ACK feedback is expected to be available at the time when the CW size is adjusted. Note that a slot is just an example of the reference. Another time duration can also be used as the reference for the CW size adjustment if it can be a unit of a collision occurrence.
FIG. 25 shows an example of a NACK-based CW size adjustment procedure for downlink transmission. If the gNB 160 transmits transmissions including PDSCH that are associated with channel access priority class p on a carrier, the gNB 160 may maintain the contention window value CWp and adjust CWp before Step S1 of the first type of Channel Access procedure for those transmissions, using Steps D1 and D2. In Step D1, for every priority class p in {1,2,3,4}, the gNB 160 may set CWp = CWmin,p. In Step D2, if at least Z = a certain percentage (e.g. 80%) of the HARQ-ACK values corresponding to PDSCH transmission(s) in reference slot k are determined as NACK, the gNB 160 may increase CWp for every priority class p in {1,2,3,4} to the next higher allowed value and may remain in Step D2; otherwise, go to Step D1.
There may be several rules for determining Z, which is the ratio of the number of HARQ-ACKs with "NACK" to the total number of valid HARQ-ACKs. FIG. 26 shows an example of a rule for determining Z. This rule is that if the gNB 160 detects a 'NACK' state, it may be counted as NACK. FIG. 27 shows another example of a rule for determining Z. This rule is that if the HARQ-ACK values correspond to PDSCH transmission(s) on an NR-U Cell that are assigned by a PDCCH transmitted on the same NR-U Cell, and if no HARQ-ACK feedback is detected for a PDSCH transmission by the gNB 160, it may be counted as NACK. FIG. 28 shows another example of a rule for determining Z. This rule is that if the HARQ-ACK values correspond to PDSCH transmission(s) on an NR-U Cell that are assigned by a PDCCH transmitted on another cell, and if no HARQ-ACK feedback is detected for a PDSCH transmission by the gNB 160, it may be ignored. In a case where HARQ-ACK feedback is ignored, it may not be used (it may be considered invalid) to derive either the numerator (i.e. the number of "NACK"s) or the denominator (i.e. the total number of valid HARQ-ACKs) for the Z determination. Another rule is that if a PDSCH transmission has two codewords, the HARQ-ACK value of each codeword is considered separately. Each codeword may be an array of encoded bits which corresponds to a respective transport block. FIG. 29 shows another example of a rule for determining Z. This rule is that bundled HARQ-ACK across M TBs is considered as M HARQ-ACK responses. For example, if spatial bundling (e.g. a binary AND operation) between the HARQ-ACKs for TB1 and TB2 is applied, and if the bundled HARQ-ACK is ACK, it may be counted as two ACKs, and vice versa. Alternatively, bundled HARQ-ACK across M TBs may be considered as a single HARQ-ACK response. For example, if spatial bundling (e.g. a binary AND operation) between the HARQ-ACKs for TB1 and TB2 is applied, and if the bundled HARQ-ACK is NACK, it may be counted as one NACK, and vice versa.
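Steps D1 and D2 described above amount to a small state update: reset every CWp to its minimum unless at least Z = 80% of the reference-slot HARQ-ACK values are NACK, in which case every CWp steps up to the next allowed value. A sketch follows; the ladder of allowed CW sizes is illustrative, not taken from any figure.

```python
ALLOWED_CW = {p: [15, 31, 63] for p in (1, 2, 3, 4)}  # illustrative ladder

def adjust_cw_nack_based(cw: dict, nack_ratio: float, z: float = 0.8) -> dict:
    """One application of Steps D1/D2 before Step S1 of Type 1 access.

    cw: current CWp per priority class; nack_ratio: fraction of HARQ-ACK
    values for PDSCH(s) in reference slot k that are NACK.
    """
    if nack_ratio >= z:
        # Step D2: move every CWp to the next higher allowed value
        # (stay at CWmax once reached).
        return {p: ALLOWED_CW[p][min(ALLOWED_CW[p].index(cw[p]) + 1,
                                     len(ALLOWED_CW[p]) - 1)]
                for p in cw}
    # Step D1: fall back to CWmin,p for every priority class.
    return {p: ALLOWED_CW[p][0] for p in cw}

cw = {p: 15 for p in (1, 2, 3, 4)}
cw = adjust_cw_nack_based(cw, nack_ratio=0.9)   # -> 31 for every class
cw = adjust_cw_nack_based(cw, nack_ratio=0.1)   # -> back to 15
```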
FIG. 30 shows another example of a rule for determining Z. This rule may apply if the UE 102 is configured with pdsch-HARQ-ACK-Codebook = semi-static, if an occasion for a candidate PDSCH reception can be in response to a PDCCH with DCI format 1_1, and if the higher layer parameter maxNrofCodeWordsScheduledByDCI indicates reception of two transport blocks. The rule is that if HARQ-ACK is transmitted via PUCCH, and if the UE 102 receives a PDSCH with one TB in slot k, the HARQ-ACK for the second TB may be ignored, and only the HARQ-ACK for the first TB may be used for determining Z. Additionally and/or alternatively, the rule is that if HARQ-ACK is transmitted via PUSCH, and if the UE 102 receives a PDSCH with one TB in slot k, the HARQ-ACK for the second TB may be ignored, and only the HARQ-ACK for the first TB may be used for determining Z.
FIG. 31 shows another example of a rule for determining Z. This rule may apply if the UE 102 is configured with pdsch-HARQ-ACK-Codebook = semi-static. The rule is that if the gNB 160 does not transmit any PDSCH for a given UE in slot k, and if HARQ-ACK information for the slot k is included in a HARQ-ACK codebook that the given UE transmits, the HARQ-ACK information for the slot k reported by the given UE may be ignored. In other words, if the gNB 160 transmits PDCCH(s) with a DCI format and none of the PDCCH(s) indicates a PDSCH transmission for a given UE in slot k, and if HARQ-ACK information for the slot k is included in a HARQ-ACK codebook that the given UE transmits, the HARQ-ACK information for the slot k reported by the given UE may be ignored.
If the UE 102 is provided the higher layer parameter pdsch-AggregationFactor, N_PDSCH^repeat is the value of pdsch-AggregationFactor, and the value may be larger than one. In this case, the UE 102 reports HARQ-ACK information only for the last slot of the N_PDSCH^repeat slots. Another rule is that if a single piece of HARQ-ACK information is reported only for the last slot of the N_PDSCH^repeat slots, the reported HARQ-ACK information is considered as N_PDSCH^repeat pieces of HARQ-ACK responses for the N_PDSCH^repeat slots. In other words, if NACK is reported for the last slot of the N_PDSCH^repeat slots, and if one of the other slots among the N_PDSCH^repeat slots is a reference slot k, it may be assumed that NACK is reported for the reference slot k even if there is no actual HARQ-ACK response for the reference slot k.
FIG. 32 shows another example of a rule for determining Z. This rule may apply if the UE 102 is provided the higher layer parameter PDSCH-CodeBlockGroupTransmission for a serving cell. The rule is that if the HARQ-ACK codebook includes the N_HARQ-ACK^CBG/TB,max HARQ-ACK information bits, and if N_HARQ-ACK^CBG/TB = N_HARQ-ACK^CBG/TB,max, they may be counted as either a single ACK or a single NACK. For example, if at least one of the N_HARQ-ACK^CBG/TB,max HARQ-ACK information bits indicates ACK, the gNB 160 may count those HARQ-ACK information bits for the transport block in the HARQ-ACK codebook as a single ACK. If all of the N_HARQ-ACK^CBG/TB,max HARQ-ACK information bits indicate NACK, the gNB 160 may count those HARQ-ACK information bits for the transport block in the HARQ-ACK codebook as a single NACK.
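Several of the counting rules above reduce to classifying each HARQ-ACK report as NACK, ACK, or ignored before forming Z. The sketch below applies the same-cell and cross-cell missing-feedback rules of FIG. 27 and FIG. 28 described earlier; the report encoding is an assumption made for illustration.

```python
def compute_z(reports: list) -> float:
    """Z = (# of NACKs) / (# of valid HARQ-ACKs) for reference slot k.

    Each report is (value, same_cell_scheduled), where value is 'ACK',
    'NACK', or None (no feedback detected). Per the rules above: missing
    feedback counts as NACK when the PDSCH was scheduled from the same
    NR-U cell, and is ignored when scheduled from another cell.
    """
    nacks = valid = 0
    for value, same_cell_scheduled in reports:
        if value is None:
            if not same_cell_scheduled:
                continue          # ignored: counted in neither number
            value = "NACK"        # missing feedback on the same cell -> NACK
        valid += 1
        nacks += value == "NACK"
    return nacks / valid if valid else 0.0

# Two NACKs (one explicit, one missing), one ACK, one ignored: Z = 2/3.
print(compute_z([("NACK", True), (None, True), ("ACK", True), (None, False)]))
```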
The rule is that if the HARQ-ACK codebook includes the NHARQ-ACKCBG/TB,maxHARQ-ACK information bits and, if NHARQ-ACKCBG/TB<NHARQ-ACKCBG/TB,maxfor a transport block, the last NHARQ-ACKCBG/TB,max−NHARQ-ACKCBG/TBHARQ-ACK information bits for the transport block in the HARQ-ACK codebook may be ignored, and the first NHARQ-ACKCBG/TBHARQ-ACK information bits for the transport block in the HARQ-ACK codebook may be used to determine either a single ACK or a single NACK. For example, if at least one of the first NHARQ-ACKCBG/TBHARQ-ACK information bits indicates ACK, the gNB160may count the HARQ-ACK information bits for the transport block in the HARQ-ACK codebook as a single ACK. If all of the NHARQ-ACKCBG/TBHARQ-ACK information bits indicate NACK, the gNB160may count the HARQ-ACK information bits for the transport block in the HARQ-ACK codebook as a single NACK. FIG.34shows another example of a rule for determining Z. This rule may apply, if the UE102is provided higher layer parameter PDSCH-CodeBlockGroupTransmission for a serving cell. The rule is that if the HARQ-ACK codebook includes the NHARQ-ACKCBG/TB,maxHARQ-ACK information bits for slot k and, if the UE102correctly decoded some CBG(s) in a previous transmission of the same transport block, HARQ-ACK information bit(s) for those CBG(s) may be ignored, and only the other HARQ-ACK information bits may be used. Additionally and/or alternatively, if the HARQ-ACK codebook includes the NHARQ-ACKCBG/TB,maxHARQ-ACK information bits for slot k and, if the gNB160does not transmit some CBG(s) in slot k, HARQ-ACK information bit(s) for those CBG(s) may be ignored, and only the other HARQ-ACK information bits may be used. For the use of the other HARQ-ACK information bits, the rule shown inFIG.32and/or the rule shown inFIG.33may apply. FIG.35shows an example of PUSCH-based CW size adjustment procedure for downlink transmission(s). If the gNB160transmits transmissions including PDCCH with DCI format for PUSCH scheduling and not including PDSCH that are associated with channel access priority class p on a channel starting from time t0, the gNB160may maintain the contention window value CWpand adjust CWpbefore Step S1of the first type of Channel Access procedure for those transmissions using Steps E1and E2. In Step E1, for every priority class p∈{1,2,3,4}, the gNB160may set CWp=CWmin,p. In Step E2, if less than a certain percentage (e.g. 10%) of the UL transport blocks scheduled by the gNB160using Type 2 channel access procedure in the time interval between t0and t0+TCOhave been received successfully, the gNB160may increase CWpfor every priority class p∈{1,2,3,4} to the next higher allowed value and may remain in Step E2; otherwise, it may go to Step E1. t0may be the time instant when the gNB160has started transmission. TCO=Tmcot,p+Tg, where Tgmay be the total duration of all gaps of duration greater than 25 us that occur between the DL transmission of the gNB160and UL transmissions scheduled by the gNB160, and between any two UL transmissions scheduled by the gNB160starting from t0. FIG.36is an example of a rule for the decision on a successful reception. This rule may apply, if the UE102is provided higher layer parameter PUSCH-CodeBlockGroupTransmission for a serving cell. If one or more CBG(s) for a TB are transmitted, the gNB160may use all of the transmitted CBG(s) to determine successful reception for the TB. 
For example, if the gNB160successfully decodes at least one of the transmitted CBG(s), the gNB160may consider it as a successful reception for the CW size adjustment. If the gNB160does not successfully decode any one of the transmitted CBG(s), the gNB160may consider it as a failed reception for the CW size adjustment. FIG.37shows an example of reference HARQ process ID for CW size adjustment procedure for uplink transmission(s). The reference HARQ process ID HARQ_ID_ref is the HARQ process ID of UL-SCH in reference slot nref. The reference slot nrefis determined by Step R1and Step R2. Step R1is that if the UE102receives an UL grant or a DFI in slot ng, slot nwis the most recent slot before slot ng−3 in which the UE has transmitted UL-SCH using Type 1 channel access procedure. If the UE transmits transmissions including UL-SCH without gaps starting with slot n0and in slots n0, n1, . . . , nw, reference slot nrefis slot n0. Otherwise, reference slot nrefis slot nw. FIG.38shows an example of NDI-based CW size adjustment procedure for uplink transmission(s). If the UE transmits transmissions using Type 1 channel access procedure that are associated with channel access priority class p on a carrier, the UE may maintain the contention window value CWpand adjust CWpfor those transmissions before Step S1of the first type of Channel Access procedure. If the UE receives an UL grant or a PDCCH with AUL-RNTI and/or DFI-RNTI, for every priority class p∈{1,2,3,4} the UE102may set CWp=CWmin,pif the NDI value for at least one HARQ process associated with HARQ_ID_ref is toggled, or if the HARQ-ACK value(s) for at least one of the HARQ processes associated with HARQ_ID_ref received in the earliest DFI after nref+3 indicates ACK. Otherwise, the UE102may increase CWpfor every priority class p∈{1,2,3,4} to the next higher allowed value. FIG.39shows an example of timer-based CW size adjustment procedure for uplink transmission(s). If there exist one or more previous transmissions {T0, . . . , Tn} using Type 1 channel access procedure, and if N or more slots have elapsed from the start slot(s) of the previous transmission(s) while neither an UL grant nor a DFI was received, where N=max(Contention Window Size adjustment timer X, Ti burst length+1) if X>0 and N=0 otherwise, the UE102may increase CWpfor every priority class p∈{1,2,3,4} to the next higher allowed value. The CWpmay be adjusted once. FIG.40shows a method for a base station which communicates with a user equipment (UE). The method may comprise sending first radio resource control (RRC) configuration information (Step4001). The first RRC configuration information may indicate that the Physical Downlink Shared Channel Hybrid Automatic Repeat Request-Acknowledgment (PDSCH HARQ-ACK) codebook is semi-static. The method may also comprise sending second RRC configuration information (Step4002). The second RRC configuration information may indicate that a maximum number of codewords scheduled by DCI is two. The method may further comprise, after a channel access procedure, transmitting, to the UE, a PDSCH which contains only a first transport block (Step4003). The method may further comprise receiving, from the UE, a HARQ-ACK feedback including at least a first HARQ-ACK information bit and a second HARQ-ACK information bit (Step4004). The first HARQ-ACK information bit may correspond to the first transport block of the PDSCH. The second HARQ-ACK information bit may correspond to a second transport block of the PDSCH. 
The second HARQ-ACK information bit may be set to Negative ACK (NACK). A contention window for the channel access procedure may be adjusted using the HARQ-ACK feedback, wherein the second HARQ-ACK information bit is ignored. FIG.41shows a method for a base station which communicates with a user equipment (UE). The method may comprise sending radio resource control (RRC) configuration information (Step4101). The RRC configuration information may indicate that the Physical Downlink Shared Channel Hybrid Automatic Repeat Request-Acknowledgment (PDSCH HARQ-ACK) codebook is semi-static. The method may also comprise, after a channel access procedure, transmitting, to the UE, a PDSCH in a first slot and not transmitting, to the UE, any PDSCH in a second slot (Step4102). The method may further comprise receiving, from the UE, a HARQ-ACK feedback including at least a first HARQ-ACK information bit and a second HARQ-ACK information bit (Step4103). The first HARQ-ACK information bit may correspond to the PDSCH in the first slot. The second HARQ-ACK information bit may correspond to a PDSCH in the second slot. The second HARQ-ACK information bit may be set to Negative ACK (NACK). A contention window for the channel access procedure may be adjusted using the HARQ-ACK feedback, wherein the second HARQ-ACK information bit may be ignored. FIG.42shows a method for a base station which communicates with a user equipment (UE). The method may comprise sending first radio resource control (RRC) configuration information (Step4201). The first RRC configuration information may indicate that the Physical Downlink Shared Channel Hybrid Automatic Repeat Request-Acknowledgment (PDSCH HARQ-ACK) codebook is semi-static. The method may also comprise sending second RRC configuration information (Step4202). The second RRC configuration information may indicate that the PDSCH Aggregation Factor is set to N, which is an integer greater than 1. The method may further comprise, after a channel access procedure, transmitting, to the UE, PDSCHs carrying a transport block in N slots (Step4203). The method may further comprise, for the transport block, receiving, from the UE, only a single HARQ-ACK information bit (Step4204). The HARQ-ACK information bit may correspond to a last slot of the N slots. A contention window for the channel access procedure may be adjusted using HARQ-ACK information for another slot of the N slots, wherein the HARQ-ACK information is assumed to be the same value as the HARQ-ACK information bit corresponding to the last slot of the N slots. It should be noted that a decision on whether a given channel and/or data (including TB and CB) is successfully received or not may be made by referring to Cyclic Redundancy Check (CRC) bits which are appended to the given channel and/or data. It should be noted that various modifications are possible within the scope of the present invention defined by claims, and embodiments that are made by suitably combining technical means disclosed according to the different embodiments are also included in the technical scope of the present invention. It should be noted that in most cases the UE102and the gNB160may have to assume the same procedures. For example, when the UE102follows a given procedure (e.g., the procedure described above), the gNB160may also have to assume that the UE102follows the procedure. Additionally, the gNB160may also have to perform the corresponding procedures. Similarly, when the gNB160follows a given procedure, the UE102may also have to assume that the gNB160follows the procedure. 
Additionally, the UE102may also have to perform the corresponding procedures. The physical signals and/or channels that the UE102receives may be transmitted by the gNB160. The physical signals and/or channels that the UE102transmits may be received by the gNB160. The higher-layer signals and/or channels (e.g., dedicated RRC configuration messages) that the UE102acquires may be sent by the gNB160. The higher-layer signals and/or channels (e.g., dedicated RRC configuration messages) that the UE102sends may be acquired by the gNB160. It should be noted that names of physical channels and/or signals described herein are examples. The term "computer-readable medium" refers to any available medium that can be accessed by a computer or a processor. The term "computer-readable medium," as used herein, may denote a computer- and/or processor-readable medium that is non-transitory and tangible. By way of example, and not limitation, a computer-readable or processor-readable medium may include RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer or processor. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk and Blu-ray® disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. It should be noted that one or more of the methods described herein may be implemented in and/or performed using hardware. For example, one or more of the methods described herein may be implemented in and/or realized using a chipset, an application-specific integrated circuit (ASIC), a large-scale integrated circuit (LSI) or integrated circuit, etc. Each of the methods disclosed herein comprises one or more steps or actions for achieving the described method. The method steps and/or actions may be interchanged with one another and/or combined into a single step without departing from the scope of the claims. In other words, unless a specific order of steps or actions is required for proper operation of the method that is being described, the order and/or use of specific steps and/or actions may be modified without departing from the scope of the claims. It is to be understood that the claims are not limited to the precise configuration and components illustrated above. Various modifications, changes and variations may be made in the arrangement, operation and details of the systems, methods, and apparatus described herein without departing from the scope of the claims. A program running on the gNB160or the UE102according to the described systems and methods is a program (a program for causing a computer to operate) that controls a CPU and the like in such a manner as to realize the functions according to the described systems and methods. The information that is handled in these apparatuses is temporarily stored in a RAM while being processed. Thereafter, the information is stored in various ROMs or HDDs, and whenever necessary, is read by the CPU to be modified or written. 
As a recording medium on which the program is stored, any one of a semiconductor medium (for example, a ROM, a nonvolatile memory card, and the like), an optical storage medium (for example, a DVD, an MO, an MD, a CD, a BD, and the like), a magnetic storage medium (for example, a magnetic tape, a flexible disk, and the like), and the like may be used. Furthermore, in some cases, the functions according to the described systems and methods are realized by running the loaded program, and in addition, the functions according to the described systems and methods may be realized in conjunction with an operating system or other application programs, based on an instruction from the program. Furthermore, in a case where the programs are available on the market, the program stored on a portable recording medium can be distributed, or the program can be transmitted to a server computer connected through a network such as the Internet. In this case, a storage device in the server computer is also included. Furthermore, some or all of the gNB160and the UE102according to the systems and methods described above may be realized as an LSI, which is a typical integrated circuit. Each functional block of the gNB160and the UE102may be individually built into a chip, and some or all functional blocks may be integrated into a chip. Furthermore, a technique of the integrated circuit is not limited to the LSI, and an integrated circuit for the functional block may be realized with a dedicated circuit or a general-purpose processor. Furthermore, if, with advances in semiconductor technology, an integrated circuit technology that substitutes for the LSI appears, it is also possible to use an integrated circuit to which the technology applies. Moreover, each functional block or various features of the base station device and the terminal device used in each of the aforementioned embodiments may be implemented or executed by circuitry, which is typically an integrated circuit or a plurality of integrated circuits. The circuitry designed to execute the functions described in the present specification may comprise a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic devices, discrete gate or transistor logic, or a discrete hardware component, or a combination thereof. The general-purpose processor may be a microprocessor, or alternatively, the processor may be a conventional processor, a controller, a microcontroller or a state machine. The general-purpose processor or each circuit described above may be configured by a digital circuit or may be configured by an analogue circuit. Further, when a technology for making integrated circuits that supersedes integrated circuits at the present time appears due to advancement of semiconductor technology, an integrated circuit based on this technology is also able to be used.
132,437
11863328
DETAILED DESCRIPTION The present disclosure is related to packet recovery mechanisms in wireless networks, which improve end-to-end (e2e) reliability. Embodiments may utilize Multiple Access Management Service (MAMS), which is a programmable framework to select and configure network paths, as well as adapt to dynamic network conditions, when multiple network connections serve a client device. The MAMS framework is based on principles of user plane interworking, which can be deployed as an overlay without impacting the underlying networks. MAMS co-exists and complements existing communication protocols by providing a way to negotiate and configure the protocols based on client and network capabilities. Further, it allows the exchange of network state information and leveraging of network intelligence to optimize the performance of such communication protocols. MAMS has minimal or no dependency on the actual access technology of the participating links, which allows MAMS to be scalable for the addition of newer access technologies and for independent evolution of the existing access technologies. The present disclosure provides embodiments for recovering packets lost due to temporary interference, collisions, congestion, buffer overflow, etc., in wireless networks overlaid with MAMS. First embodiments involve e2e retransmission between a receiver device and a transmitter device such that any packet that has been lost on the delivery path can be detected and retransmitted. In the first embodiments, the receiver sends a Negative Acknowledgement (NACK) message to the transmitter when a packet loss is detected by the receiver. In response to receipt of the NACK message by the transmitter, the transmitter attempts to identify, in its transmission buffer, the lost packet indicated by the NACK message. If the transmitter cannot find the lost packet in its buffer, the transmitter sends a First Sequence Number (FSN) message to the receiver to indicate a Sequence Number (SN) of the oldest (acknowledged) packet in the buffer. In response, the receiver does not report any lost packets whose SN is smaller (i.e., older) than the FSN. In some embodiments, a minimum NACK time interval (T) may be inserted between two successive NACK reports to avoid unnecessary retransmission. The minimum NACK time interval may be configured by a MAMS server through one or more control messages. The first embodiments are useful for networks that have a limited number of retransmissions due to, for example, limited bandwidth, crowded spectrum, or the like. Second embodiments involve using packet coding and decoding to recover lost packets. In these embodiments, an SN is added to each of the IP packets that are sent between a transmitter (e.g., a MAMS server) and a receiver (e.g., a UE). Additionally, a new control message is defined, which is used to deliver a coded SDU. The coded SDU is generated by applying a network coding technique to one or multiple consecutive SDUs. The receiver may use the coded SDU to recover any of the SDUs that are used in the coding. For example, given two packets (packet A and packet B), a packet (packet C) may be generated by applying an exclusive OR operation (XOR or ⊕) between the two packets. If packet A is lost, packet A may be recovered by B⊕C. Similarly, if packet B is lost, packet B may be obtained by A⊕C. More than two packets may be used to recover lost packets in other embodiments. In one example, Random Linear Network Coding (RLNC) may be used to recover or reconstruct lost packets. 
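As a concrete illustration of the XOR-based coding described above, the following Python sketch codes two packets into a third and recovers a lost packet from the other two. It is a minimal sketch under simplifying assumptions (packets are zero-padded to a common length; a real implementation would carry the original packet length, e.g., in the coded-SDU control message):

```python
def xor_packets(a: bytes, b: bytes) -> bytes:
    """XOR two packets, zero-padding the shorter one."""
    n = max(len(a), len(b))
    a = a.ljust(n, b"\x00")
    b = b.ljust(n, b"\x00")
    return bytes(x ^ y for x, y in zip(a, b))


packet_a = b"payload-A"
packet_b = b"payload-B-longer"
packet_c = xor_packets(packet_a, packet_b)  # coded packet C = A xor B

# If packet A is lost, recover it from B and C; the trailing zero padding is
# stripped here only because the example payload does not end in zero bytes.
recovered_a = xor_packets(packet_b, packet_c).rstrip(b"\x00")
assert recovered_a == packet_a
```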
Other embodiments may be described and/or claimed. Referring now toFIG.1, a multi-access edge computing (MEC) environment100in accordance with various embodiments is shown.FIG.1specifically illustrates the different layers of communication occurring within the environment100, starting from the endpoint sensors or things layer110(e.g., operating in an Internet of Things (IoT) network topology) comprising one or more IoT devices111(also referred to as edge endpoints110or the like); increasing in sophistication to the gateways or intermediate node layer120comprising one or more user equipment (UEs)121aand121b(also referred to as intermediate nodes120or the like), which facilitate the collection and processing of data from endpoints110; increasing in processing and connectivity sophistication to the access or edge node layer130comprising a plurality of access nodes (ANs)131,132, and133(also referred to as edge compute nodes130or the like); and increasing in connectivity and processing sophistication to a backend layer140comprising core network (CN)142and cloud144. The processing at the backend layer140may be enhanced by network services as performed by a remote application server150and/or other cloud services. An end-user device, such as an intermediate node120or endpoint110, has access to multiple communication networks based on different technologies, for example, LTE or new radio (NR)/fifth generation (5G) cellular technology (e.g., as provided by AN131and/or ANs132), WiFi (e.g., as provided by AN133and/or ANs132), DSL, MuLTEfire, etc., for accessing application services. Different technologies exhibit benefits and limitations in different scenarios, and application performance in different scenarios becomes dependent on the choice of the access networks (e.g., WiFi, LTE, etc.) and the used network and transport protocols (e.g., VPN, MPTCP, GRE, etc.). For example, WiFi may provide high throughput for intermediate nodes120and endpoints110when under relatively good coverage, but the throughput degrades significantly as the user moves closer to the edge of the WiFi coverage area or when an AN133serves a relatively large user population (e.g., due to the contention-based WiFi access scheme). In LTE or NR networks, the capacity is often constrained by the limited availability of licensed spectrum, but the quality of the service is predictable even in multi-user scenarios due to the exclusivity of the licensed spectrum and the controlled scheduling provided by a serving base station. Unlike LTE and NR networks that use licensed spectrum, WiFi is a shared medium that operates in the unlicensed radio frequency (RF) ranges of 2.4 GHz and 5 GHz. The 3GPP variant of unlicensed access is called Licensed Assisted Access (LAA). LAA aims to design LTE and/or NR specifications for global harmonization that allow for fair coexistence with WiFi and other networks in a shared medium. LAA employs a medium access scheme similar to WiFi's Enhanced Distributed Channel Access (EDCA). The coexistence impact on fairness and throughput with respect to LTE and/or NR is also a current challenge for both standards. One issue that may arise when utilizing network technologies that operate in a shared medium is that packets may be lost during transmission due to, for example, temporary interference, packet collisions, congestion, and buffer overflow. In current WiFi-based protocols, Media Access Control (MAC) protocols support limited retransmissions to recover lost packets. 
In particular, a WiFi transmitter will give up and drop a packet when a maximum retransmission limit is reached. Additionally, the WiFi-based retransmission method is not applicable when a packet is dropped due to temporary congestion and/or buffer overflow. Similarly, LAA uses a contention window size (CWS) for retransmitting lost packets, where the CWS increases in an exponential manner based on the Hybrid Automatic Repeat Request (HARQ)-Acknowledgement (ACK) feedback in the MAC layer. The present disclosure provides embodiments for recovering lost packets that are an improvement over existing packet recovery mechanisms, such as those discussed previously. First embodiments involve e2e retransmission between a receiver device and a transmitter device such that any packet that has been lost on the delivery path can be detected and retransmitted. Second embodiments involve using packet coding and decoding to recover lost packets, wherein the transmitter device generates an encoded packet based on at least two previously sent packets and sends the encoded packet to the receiver device to be decoded by the receiver device. In either of these embodiments, the receiver device (or simply the "receiver") may be an endpoint110or intermediate node120and the transmitter device (or simply the "transmitter") may be an edge compute node130, or vice versa. In either of the aforementioned embodiments, the transmitter and receiver may implement a protocol stack including one or more MAMS entities. In some embodiments, these protocol stack entities may operate below layer 3 (L3) (e.g., the IP layer) and above layer 2 (L2) (e.g., link layer) entities. In some implementations, the MAMS entities may reside on a MEC platform. The aforementioned embodiments improve the performance (e.g., e2e reliability) of the underlying network technologies while retaining the benefits of the MAMS framework and the underlying network technologies. The various aspects of the embodiments are discussed in more detail infra with respect toFIGS.2-9. Referring back toFIG.1, the environment100is shown to include a UE121aand UE121b(collectively referred to as "UE121" or "UEs121"). In this example, the UE121ais illustrated as a vehicle UE, and UE121bis illustrated as a smartphone (e.g., handheld touchscreen mobile computing device connectable to one or more cellular networks). However, these UEs121may comprise any mobile or non-mobile computing device, such as tablet computers, wearable devices, Personal Data Assistants (PDAs), pagers, desktop computers, wireless handsets, unmanned vehicles or drones, and/or any type of computing device including a wireless communications interface. Environment100also includes IoT devices111, which are uniquely identifiable embedded computing devices (e.g., within the Internet infrastructure) that comprise a network access layer designed for low-power IoT applications utilizing short-lived UE connections. The IoT devices111may be any objects, devices, sensors, or "things" that are embedded with hardware and/or software components that enable the objects, devices, sensors, or "things" to capture and/or record data associated with an event, and to communicate such data with one or more other devices over a network with little or no user intervention. 
For instance, in various embodiments, IoT devices111may be abiotic devices such as autonomous sensors, gauges, meters, image capture devices, microphones, light emitting devices, audio emitting devices, audio and/or video playback devices, electro-mechanical devices (e.g., switch, actuator, etc.), and the like. The IoT devices111can utilize technologies such as machine-to-machine (M2M) or machine-type communications (MTC) for exchanging data with an MTC server (e.g., a server150), a MEC server136and/or MEC system, or device via a PLMN, ProSe or D2D communication, sensor networks, or IoT networks. The M2M or MTC exchange of data may be a machine-initiated exchange of data. The IoT devices111may execute background applications (e.g., keep-alive messages, status updates, etc.) to facilitate the connections of the IoT network. Where the IoT devices111are, or are embedded in, sensor devices, the IoT network may be a wireless sensor network (WSN). An IoT network describes an interconnection of IoT UEs, such as the IoT devices111being connected to one another over respective direct links105. The IoT devices may include any number of different types of devices, grouped in various combinations (referred to as an "IoT group") that may include IoT devices that provide one or more services for a particular user, customer, organization, etc. A service provider (e.g., an owner/operator of server150, CN142, and/or cloud144) may deploy the IoT devices in the IoT group to a particular area (e.g., a geolocation, building, etc.) in order to provide the one or more services. In some implementations, the IoT network may be a mesh network of IoT devices111, which may be termed a fog device, fog system, or fog, operating at the edge of the cloud144. The fog involves mechanisms for bringing cloud computing functionality closer to data generators and consumers, wherein various network devices run cloud application logic on their native architecture. Fog computing is a system-level horizontal architecture that distributes resources and services of computing, storage, control, and networking anywhere along the continuum from cloud144to Things (e.g., IoT devices111). The fog may be established in accordance with specifications released by the OpenFog Consortium (OFC), the Open Connectivity Foundation™ (OCF), among others. In some embodiments, the fog may be a tangle as defined by the IOTA foundation. The fog may be used to perform low-latency computation/aggregation on the data while routing it to an edge cloud computing service (e.g., edge nodes130) and/or a central cloud computing service (e.g., cloud144) for performing heavy computations or computationally burdensome tasks. On the other hand, edge cloud computing consolidates human-operated, voluntary resources as a cloud. These voluntary resources may include, inter alia, intermediate nodes120and/or endpoints110, desktop PCs, tablets, smartphones, nano data centers, and the like. In various implementations, resources in the edge cloud may be in one- to two-hop proximity to the IoT devices111, which may result in reducing overhead related to processing data and may reduce network delay. In some embodiments, the fog may be a consolidation of IoT devices111and/or networking devices, such as routers and switches, with high computing capabilities and the ability to run cloud application logic on their native architecture. Fog resources may be manufactured, managed, and deployed by cloud vendors, and may be interconnected with high speed, reliable links. 
Moreover, fog resources reside farther from the edge of the network when compared to edge systems but closer than a central cloud infrastructure. Fog devices are used to effectively handle computationally intensive tasks offloaded by edge resources. In embodiments, the fog may operate at the edge of the cloud144. The fog operating at the edge of the cloud144may overlap or be subsumed into an edge network130of the cloud144. The edge network of the cloud144may overlap with the fog, or become a part of the fog. Furthermore, the fog may be an edge-fog network that includes an edge layer and a fog layer. The edge layer of the edge-fog network includes a collection of loosely coupled, voluntary and human-operated resources (e.g., the aforementioned edge compute nodes or edge devices). The fog layer resides on top of the edge layer and is a consolidation of networking devices such as the intermediate nodes120and/or endpoints110ofFIG.1. Data may be captured, stored/recorded, and communicated among the IoT devices111(or, for example, among the intermediate nodes120and/or endpoints110that have direct links105with one another as shown byFIG.1). Analysis of the traffic flow and control schemes may be implemented by aggregators that are in communication with the IoT devices111and each other through a mesh network. The aggregators may be a type of IoT device111and/or network appliance. In the example ofFIG.1, the aggregators may be edge nodes130, or one or more designated intermediate nodes120and/or endpoints110. Data may be uploaded to the cloud144via the aggregator, and commands can be received from the cloud144through gateway devices that are in communication with the IoT devices111and the aggregators through the mesh network. Unlike the traditional cloud computing model, in some implementations, the cloud144may have little or no computational capabilities and only serves as a repository for archiving data recorded and processed by the fog. In these implementations, the cloud144provides a centralized data storage system and provides reliability and access to data by the computing resources in the fog and/or edge devices. Being at the core of the architecture, the Data Store of the cloud144is accessible by both Edge and Fog layers of the aforementioned edge-fog network. The UEs121and IoT devices111may be configured to connect, for example, communicatively couple, with a Radio Access Network (RAN) including one or more of the ANs131,132, and/or133. In embodiments, the RAN may be an NG RAN or a 5G RAN, an E-UTRAN, or a legacy RAN, such as a UTRAN or GERAN. As used herein, the term "NG RAN" may refer to a RAN that operates in an NR or 5G system, and the term "E-UTRAN" or the like may refer to a RAN that operates in an LTE or 4G system. The UEs121and IoT devices111may utilize respective connections (or channels)103, each of which comprises a physical communications interface or layer. In this example, the connections103are illustrated as an air interface to enable communicative coupling, and can be consistent with cellular communications protocols, such as a GSM protocol, a CDMA network protocol, a PTT protocol, a POC protocol, a UMTS protocol, a 3GPP LTE protocol, a 5G protocol, a NR protocol, and/or any of the other communications protocols discussed herein. In embodiments, the UEs121and IoT devices111may further directly exchange communication data via respective direct interfaces (or links)105. 
In some implementations, the interfaces105may be a WiFi based link or a personal area network (PAN) based link (e.g., IEEE 802.15.4 based protocols including ZigBee, IPv6 over Low power Wireless Personal Area Networks (6LoWPAN), WirelessHART, MiWi, Thread, etc.; WiFi-direct; Bluetooth/Bluetooth Low Energy (BLE) protocols). In other implementations, the interface105may be an LTE/NR Proximity Services (ProSe) link or PC5 interface. According to various embodiments, the UEs121and IoT devices111and the RAN nodes131/132communicate (e.g., transmit and receive) data over a licensed medium (also referred to as the "licensed spectrum" and/or the "licensed band") and an unlicensed shared medium (also referred to as the "unlicensed spectrum" and/or the "unlicensed band"). The licensed spectrum may include channels that operate in the frequency range of approximately 400 MHz to approximately 3.8 GHz, whereas the unlicensed spectrum may include the 5 GHz band. To operate in the unlicensed spectrum, the UEs121and IoT devices111and the RAN nodes131/132may operate using LAA, enhanced LAA (eLAA), and/or further eLAA (feLAA) mechanisms. In these implementations, the UEs121and IoT devices111and the RAN nodes131/132may perform one or more known medium-sensing operations and/or carrier-sensing operations in order to determine whether one or more channels in the unlicensed spectrum are unavailable or otherwise occupied prior to transmitting in the unlicensed spectrum. The medium/carrier sensing operations may be performed according to a listen-before-talk (LBT) protocol. LBT is a mechanism whereby equipment (e.g., UEs121and IoT devices111, RAN nodes131/132, etc.) senses a medium (for example, a channel or carrier frequency) and transmits when the medium is sensed to be idle (or when a specific channel in the medium is sensed to be unoccupied). The medium sensing operation may include CCA, which utilizes at least ED to determine the presence or absence of other signals on a channel in order to determine if a channel is occupied or clear. This LBT mechanism allows cellular/LAA networks to coexist with incumbent systems in the unlicensed spectrum and with other LAA networks. ED may include sensing RF energy across an intended transmission band for a period of time and comparing the sensed RF energy to a predefined or configured threshold; a sketch of this check follows this passage. The UE121bis shown to be configured to access an access point (AP)133via a connection107. The connection107can comprise a local wireless connection, such as a connection consistent with any IEEE 802.11 protocol, wherein the AP133would comprise a wireless fidelity (WiFi®) router. In this example, the AP133is shown to be connected to the Internet without connecting to the CN120of the wireless system. In various embodiments, the UE121b, RAN nodes131/132, and AP106may be configured to utilize LWA operation and/or LWIP operation. The LWA operation may involve the UE121bbeing configured by a RAN node131/132to utilize radio resources of LTE/NR and WLAN. LWIP operation may involve the UE121busing WLAN radio resources (e.g., connection107) via IPsec protocol tunneling to authenticate and encrypt packets (e.g., IP packets) sent over the connection107. IPsec tunneling includes encapsulating the entirety of original IP packets and adding a new packet header, thereby protecting the original header of the IP packets. The RAN can include one or more AN nodes or RAN nodes131and132(collectively referred to as "RAN nodes" or "RAN node") that enable the connections103. 
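The following Python sketch illustrates the energy-detection (ED) based clear channel assessment described above. It is a minimal sketch, not a normative procedure: sample_rssi_dbm() is a hypothetical stand-in for a radio measurement interface, and the threshold and sensing duration are illustrative configuration values.

```python
import random
import time

ED_THRESHOLD_DBM = -72.0  # illustrative configured ED threshold


def sample_rssi_dbm() -> float:
    """Hypothetical stand-in for a hardware RSSI measurement."""
    return random.uniform(-95.0, -40.0)


def channel_is_clear(sensing_time_s: float = 25e-6, samples: int = 5) -> bool:
    """Sense the medium for the CCA duration; report clear only if every
    sample stays below the ED threshold."""
    for _ in range(samples):
        if sample_rssi_dbm() >= ED_THRESHOLD_DBM:
            return False  # energy detected: treat the channel as occupied
        time.sleep(sensing_time_s / samples)
    return True


# Listen-before-talk: transmit only when the medium is sensed idle.
if channel_is_clear():
    pass  # proceed with the transmission
```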
As used herein, the terms "access node," "access point," or the like may describe equipment that provides the radio baseband functions for data and/or voice connectivity between a network and one or more users. In this example, the RAN node131is embodied as a NodeB, evolved NodeB (eNB), or a next generation NodeB (gNB), and the RAN nodes132are embodied as Road Side Units (RSUs). Any other type of ANs can be used, and the ANs may comprise ground stations (e.g., terrestrial access points) or satellite stations providing coverage within a geographic area (e.g., a cell). As used herein, the term "NG RAN node" or the like may refer to a RAN node131that operates in an NR or 5G system (for example, a gNB), and the term "E-UTRAN node" or the like may refer to a RAN node131that operates in an LTE or 4G system (e.g., an eNB). According to various embodiments, the RAN nodes131may be implemented as one or more of a dedicated physical device such as a macrocell base station, and/or a low power base station for providing femtocells, picocells or other like cells having smaller coverage areas, smaller user capacity, or higher bandwidth compared to macrocells. In some embodiments, all or parts of the RAN nodes131/132may be implemented as one or more software entities running on server computers as part of a virtual network, which may be referred to as a cloud RAN (CRAN) and/or a virtual baseband unit pool (vBBUP). In these embodiments, the CRAN or vBBUP may implement a RAN function split, such as a PDCP split wherein RRC and PDCP layers are operated by the CRAN/vBBUP and other L2 protocol entities are operated by individual RAN nodes131/132; a MAC/PHY split wherein RRC, PDCP, RLC, and MAC layers are operated by the CRAN/vBBUP and the PHY layer is operated by individual RAN nodes131/132; or a "lower PHY" split wherein RRC, PDCP, RLC, MAC layers and upper portions of the PHY layer are operated by the CRAN/vBBUP and lower portions of the PHY layer are operated by individual RAN nodes131/132. This virtualized framework allows the freed-up processor cores of the RAN nodes131/132to perform other virtualized applications. In some implementations, an individual RAN node131/132may represent individual gNB-DUs that are connected to a gNB-CU via individual F1 interfaces (not shown byFIG.1). In these implementations, the gNB-DUs include one or more remote radio heads or RFEMs (see, e.g.,FIG.12), and the gNB-CU may be operated by a server that is located in the RAN (not shown) or by a server pool in a similar manner as the CRAN/vBBUP. Additionally or alternatively, one or more of the RAN nodes131/132may be next generation eNBs (ng-eNBs), which are RAN nodes131/132that provide E-UTRA user plane and control plane protocol terminations toward the UEs121, and are connected to a 5GC via an NG interface. Any of the RAN nodes131/132can terminate the air interface protocol and can be the first point of contact for the UEs121and IoT devices111. In some embodiments, any of the RAN nodes131/132can fulfill various logical functions for the RAN including, but not limited to, radio network controller (RNC) functions such as radio bearer management, uplink and downlink dynamic radio resource management and data packet scheduling, and mobility management. 
In embodiments, the UEs121and IoT devices111can be configured to communicate using OFDM communication signals with each other or with any of the RAN nodes131/132over a multicarrier communication channel in accordance with various communication techniques, such as, but not limited to, an OFDMA communication technique (e.g., for downlink communications) and/or an SC-FDMA communication technique (e.g., for uplink and ProSe or sidelink communications), although the scope of the embodiments is not limited in this respect. The RAN nodes131/132may be configured to communicate with one another via respective interfaces or links (not shown), such as an X2 interface for LTE implementations (e.g., when CN120is an Evolved Packet Core (EPC)), an Xn interface for 5G or NR implementations (e.g., when CN120is a Fifth Generation Core (5GC)), or the like. The ANs131and132are communicatively coupled to CN120. In embodiments, the CN120may be an evolved packet core (EPC) network, a NextGen Packet Core (NPC) network, a 5G core (5GC), or some other type of CN. The CN120may comprise a plurality of network elements, which are configured to offer various data and telecommunications services to customers/subscribers (e.g., users of UEs121and IoT devices111) who are connected to the CN120via a RAN. The components of the CN120may be implemented in one physical node or separate physical nodes including components to read and execute instructions from a machine-readable or computer-readable medium (e.g., a non-transitory machine-readable storage medium). In some embodiments, Network Functions Virtualization (NFV) may be utilized to virtualize any or all of the above-described network node functions via executable instructions stored in one or more computer-readable storage mediums (described in further detail infra). A logical instantiation of the CN120may be referred to as a network slice, and a logical instantiation of a portion of the CN120may be referred to as a network sub-slice. NFV architectures and infrastructures may be used to virtualize one or more network functions, otherwise performed by proprietary hardware, onto physical resources comprising a combination of industry-standard server hardware, storage hardware, or switches. In other words, NFV systems can be used to execute virtual or reconfigurable implementations of one or more CN120components/functions. The CN120is shown to be communicatively coupled to an application server150and a network (e.g., cloud144) via an IP communications interface155. The one or more server(s)150comprise one or more physical and/or virtualized systems for providing functionality (or services) to one or more clients (e.g., UEs121and IoT devices111) over a network (e.g., cloud144). The server(s)150may include various computer devices with rack computing architecture component(s), tower computing architecture component(s), blade computing architecture component(s), and/or the like. The server(s)150may represent a cluster of servers, a server farm, a cloud computing service, or other grouping or pool of servers, which may be located in one or more datacenters. The server(s)150may also be connected to, or otherwise associated with, one or more data storage devices (not shown). 
Moreover, the server(s)150may include an operating system (OS) that provides executable program instructions for the general administration and operation of the individual server computer devices, and may include a computer-readable medium storing instructions that, when executed by a processor of the servers, may allow the servers to perform their intended functions. Suitable implementations for the OS and general functionality of servers are known or commercially available, and are readily implemented by persons having ordinary skill in the art. Generally, the server(s)150offer applications or services that use IP/network resources. As examples, the server(s)150may provide traffic management services, cloud analytics, content streaming services, immersive gaming experiences, social networking and/or microblogging services, and/or other like services. In addition, the various services provided by the server(s)150may include initiating and controlling software and/or firmware updates for applications or individual components implemented by the UEs121and IoT devices111. The server(s)150can also be configured to support one or more communication services (e.g., Voice-over-Internet Protocol (VoIP) sessions, PTT sessions, group communication sessions, social networking services, etc.) for the UEs121and IoT devices111via the CN120. The cloud144may represent a cloud computing service, the Internet, a local area network (LAN) or a wide area network (WAN) including proprietary and/or enterprise networks for a company or organization, or combinations thereof. The cloud144may be a network that comprises computers, network connections among the computers, and software routines to enable communication between the computers over network connections. In this regard, the cloud144comprises one or more network elements that may include one or more processors, communications systems (e.g., including network interface controllers, one or more transmitters/receivers connected to one or more antennas, etc.), and computer readable media. Examples of such network elements may include wireless access points (WAPs), home/business servers (with or without RF communications circuitry), routers, switches, hubs, radio beacons, base stations, picocell or small cell base stations, backbone gateways, and/or any other like network device. Connection to the cloud144may be via a wired or a wireless connection using the various communication protocols discussed infra. More than one network may be involved in a communication session between the illustrated devices. Connection to the cloud144may require that the computers execute software routines which enable, for example, the seven layers of the OSI model of computer networking or equivalent in a wireless (cellular) phone network. Cloud144may be used to enable relatively long-range communication such as, for example, between the one or more server(s)150and one or more UEs121and IoT devices111. In some embodiments, the cloud144may represent the Internet, one or more cellular networks, local area networks, or wide area networks including proprietary and/or enterprise networks, a Transmission Control Protocol (TCP)/Internet Protocol (IP)-based network, or combinations thereof. 
In such embodiments, the cloud144may be associated with a network operator who owns or controls equipment and other elements necessary to provide network-related services, such as one or more base stations or access points, one or more servers for routing digital data or telephone calls (e.g., a core network or backbone network), etc. The backbone links155may include any number of wired or wireless technologies, and may be part of a local area network (LAN), a wide area network (WAN), or the Internet. In one example, the backbone links155are fiber backbone links that couple lower levels of service providers to the Internet, such as the CN142and cloud144. In some embodiments, at least some of the edge nodes130may include or be part of a MEC system135. The MEC system135includes a collection of MEC servers136(including MEC server136aand MEC server136binFIG.1) and MEC management systems (not shown byFIG.1) necessary to run MEC applications (e.g., MEAs1036ofFIG.10and MEAs1136ofFIG.11) within an operator network or a subset of an operator network. The MEC servers136a,136b,136c(collectively referred to as "MEC servers136" or "MEC server136") are physical computer systems (e.g., server compute nodes) that include a MEC platform (e.g., MEP1037ofFIG.10and MEP-VNF1137ofFIG.11) and a virtualization infrastructure (e.g., VI1038ofFIG.10and/or NFVI1104ofFIG.11), and provide compute, storage, and network resources to MEC applications. The MEC servers136may also be referred to as "MEC hosts136" or "edge servers." The virtualization infrastructure (VI) of the MEC servers136provides virtualized environments and virtualized resources (e.g., "virtualized infrastructure") for the MEC hosts136, and the MEC applications may run as virtual machines (VMs) and/or application containers on top of the VI. The components and/or entities of the MEC system135are discussed in more detail infra with respect toFIGS.10-11. As shown byFIG.1, each of the (R)AN nodes131/132and AP133are co-located with MEC servers136a,136b, and136c, respectively. These implementations may be small-cell clouds (SCCs) where a MEC server136is co-located with a small cell (e.g., pico-cell, femto-cell, etc.), or may be mobile micro clouds (MCCs) where a MEC server136is co-located with a macro-cell (e.g., an eNB, gNB, etc.). The MEC servers136may be deployed in a multitude of arrangements other than as shown byFIG.1. In a first example, the MEC servers136may be co-located with or operated by RNCs, which may be the case for legacy network deployments, such as 3G networks. In a second example, the MEC servers136may be deployed at cell aggregation sites or at multi-RAT aggregation points that can be located either within an enterprise or used in public coverage areas. In a third example, the MEC servers136may be deployed at the edge of CN120. These implementations may be used in follow-me clouds (FMC), where cloud services running at distributed data centers follow the UEs121as they roam throughout the network. According to various embodiments, task offloading may be "opportunistic", wherein the MEC system135(or a particular MEC server136selected as the master node in the example ofFIG.1) may offload application tasks to one or more UEs121taking into account the computational complexity of the tasks and/or the amount of computational and network/signaling resources available at the UEs121. 
For example, a MEC server136may offload a certain number and/or type of tasks based on the quality or strength of links103,105, and/or107, the strength or quality of the computational resources available at the UEs121, an amount of available memory or a current memory utilization of the UEs121, and/or based on other operational parameters of (or experienced by) the UEs121. For some identified tasks, the MEC system135may evaluate the offloading opportunity (e.g., the "tradeoff") with respect to available UEs121, in which case the MEC system135may offload tasks to certain UEs121that are capable of providing output data from performing their respective tasks back to the MEC server136in a desired period of time. Based on the operational parameters discussed previously, offloading tradeoffs may be evaluated and optimal or best offloading opportunities may be determined based on the tradeoffs. FIG.2shows an example of a MAMS reference architecture200according to various embodiments. MAMS is a programmable framework that provides mechanisms for flexible selection of network paths in a multi-access communication environment, based on application needs. MAMS leverages network intelligence and policies to dynamically adapt traffic distribution across selected paths and user plane treatment to changing network/link conditions. The network path selection and configuration messages are carried as user plane data between the functional elements in the network and the end-user device, and thus without any impact on the control plane signaling schemes of the underlying access network(s). For example, in a multi-access network with LTE and WiFi technologies, existing LTE and existing WiFi signaling procedures will be used to set up the LTE and WiFi connections, respectively, and MAMS specific control plane messages are carried as LTE or WiFi user plane data. The MAMS framework provides the capabilities of smart selection and flexible combination of access paths and core network paths, as well as the user plane treatment when the traffic is distributed across the selected paths. Thus, it is a broad programmable framework providing functions beyond simple sharing of network policies such as those provided by the Access Network Discovery and Selection Function (ANDSF), which offers policies and rules for assisting 3GPP devices to discover and select available access networks. Further, it allows the choice and configuration of user plane treatment for the traffic over the multiple paths, depending on the needs of the application. MAMS mechanisms are not dependent on any specific access network type or user plane protocols like TCP, UDP, GRE, MPTCP, etc. MAMS co-exists and complements the existing protocols by providing a way to negotiate and configure these protocols, based on client and network capabilities, on a per-access basis to match their use for a given multi-access scenario. Further, it allows load balancing of the traffic flows across the selected multiple accesses and exchange of network state information to be used for network intelligence to optimize the performance of such protocols. Continuing to refer toFIG.2, the MAMS architecture200illustrates a scenario of a client201served by multiple (1 to n) core networks241-1to241-n(where n is a number). 
The MAMS architecture200includes the following functional elements: a client201including a Client Connection Manager (CCM)206and a Client Multi Access Data Proxy (C-MADP)207; multiple (1 to n) AN elements231-1to231-n; a MAMS system235including a Network Connection Manager (NCM)236and a Network Multi Access Data Proxy (N-MADP)237; and the multiple (1 to n) core networks241-1to241-n. The CCM206and NCM236handle control plane aspects, and the C-MADP207and N-MADP237handle user plane aspects. The core networks (or simply "cores")241-1to241-nare elements that anchor the client's201IP address used for communication with applications via the network. One or more of the cores241-1to241-nmay correspond to the CN142and/or the cloud144depicted byFIG.1. The client201is an end-user device (e.g., a UE such as UEs121and/or IoT devices111depicted byFIG.1) supporting connections with multiple access nodes (e.g., edge nodes130inFIG.1), possibly over different access technologies. When the client201is capable of handling multiple network connections, the client201may be referred to as a "multiconnectivity client" or the like. The ANs231-1to231-nare network elements in the network that deliver user data packets to the client201via respective point-to-point access links211-1to211-n, which may include, for example, WiFi links (e.g., link107inFIG.1), LTE or NR cellular links (e.g., links103inFIG.1), digital subscriber line (DSL) connections, and/or the like. In some implementations, the point-to-point access links211-1to211-nmay additionally or alternatively include short-range radio links (e.g., links105inFIG.1) such as, for example, Bluetooth® or BLE, IEEE 802.15.4 based protocols (e.g., 6LoWPAN, WirelessHART, MiWi, Thread, etc.), WiFi-direct, and/or the like. The NCM236is an element in the network that handles MAMS control messages from the client201and configures distribution of data packets over the multiple available access paths221-1to221-n, delivery paths222-1to222-n, and/or core network paths223-1to223-n, as well as user plane treatment of traffic flows. The NCM236handles the MAMS control plane procedures, and configures the network (N-MADP) and client (C-MADP) user plane functions, such as negotiating with the client201on the use of available access network paths221-1to221-n, protocols and rules for processing user plane traffic, and/or link monitoring procedures. The control plane messages exchanged between the NCM236and CCM206are transported as an overlay, without any impact to the underlying access networks. The control plane path224may be overlaid over any access user plane path. A "path" may be a UDP flow between two hosts, which may be denoted by a 4-tuple (IP source address, IP destination address, source port, destination port). In some embodiments, WebSocket is used for transporting management and control messages between the NCM236and CCM206, wherein Multi-Access (MX) Control Messages are carried over (or encapsulated in) a WebSocket, and the WebSocket is carried over (or encapsulated in) TCP/TLS. The CCM206is an entity in the client201that exchanges MAMS signaling with the NCM236and configures the multiple network paths221-1to221-nat the client201for transport of user data. The CCM206is the peer functional element of the NCM236in the client201for handling MAMS control plane procedures. The CCM206manages multiple network connections221-1to221-nat the client201. 
The CCM206is responsible for exchanging MAMS signaling messages with the NCM236for supporting functions such as uplink (UL) and downlink (DL) user network path configuration for transporting user data packets, link probing and reporting to support adaptive network path selection by the NCM236. In the DL for user data received by the client201, the CCM206configures the C-MADP207such that application data packets received over any of the accesses reach the appropriate application on the client201. In the UL for the data transmitted by the client201, the CCM206configures the C-MADP207to determine the best access links221to be used for UL data based on a combination of local policy and network policy delivered by the NCM236over link224. The C-MADP207is an element in the client201that handles user data traffic forwarding across multiple network paths. The C-MADP207is responsible for MAMS-specific user plane functionalities in the client201, such as encapsulation, fragmentation, concatenation, reordering, retransmissions, etc. The C-MADP207is configured by the CCM206based on signaling exchange with the NCM236and local policies at the client201. The CCM206configures the selection of delivery connections222-1to222-nand the user plane protocols to be used for UL user data traffic based on the signaling exchanged with the NCM236. The N-MADP237is an entity in the network that handles user data traffic forwarding across multiple network paths. The N-MADP237is responsible for MAMS-specific user plane ("u-plane") functionalities in the network, such as encapsulation, fragmentation, concatenation, reordering, retransmission, etc. The N-MADP237is the distribution node that routes the UL user plane traffic to the appropriate anchor connection223-1to223-ntowards a respective core network241-1to241-n, and the DL user traffic to the client201over the appropriate delivery connection(s)222-1to222-n. The anchor connections223-1to223-nare network paths from the N-MADP237to the user plane gateway (IP anchor) that has assigned an IP address to the client201, and the delivery connections222-1to222-nare network paths from the N-MADP237to the client201. In the DL, the NCM236configures the use of delivery connections, and user plane protocols at the N-MADP237for transporting user data traffic. The N-MADP237may implement Equal-Cost Multi-Path routing (ECMP) support for the downlink traffic. Additionally or alternatively, the N-MADP237may be connected to a router or other like network element (e.g., AP106ofFIG.1) with ECMP functionality. The NCM236configures the N-MADP237with a load balancing algorithm based on static and/or dynamic network policies. These network policies may include assigning access and core paths for specific user data traffic types, data volume based percentage distribution, link availability and feedback information from exchange of MAMS signaling with the CCM206at the client201, and/or the like. The N-MADP237can be configured with appropriate user plane protocols to support both per-flow and per-packet traffic distribution across the delivery connections. In the UL, the N-MADP237selects the appropriate anchor connection223-1to223-nover which to forward the user data traffic, received from the client201via one or more delivery connections222-1to222-n.
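The C-MADP's UL link selection described above combines a local policy with an NCM-delivered network policy. A minimal sketch of one way this combination could be scored is shown below; the function name, the per-link score inputs, and the linear weighting scheme are all assumptions, since MAMS leaves the selection logic to implementations.

```python
# Illustrative only: rank UL access links by combining local and network
# policy scores. The weighting scheme is an assumption, not MAMS-normative.
def select_ul_link(links, local_weight, network_weight):
    """links: dict mapping link-id -> (local_score, network_score)."""
    def score(item):
        _, (local, network) = item
        return local_weight * local + network_weight * network
    best_link, _ = max(links.items(), key=score)
    return best_link

links = {"lte": (0.6, 0.9), "wifi": (0.8, 0.4)}
print(select_ul_link(links, local_weight=0.5, network_weight=0.5))  # "lte"
```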
The forwarding rules in the UL at the N-MADP237are configured by the NCM236based on application requirements (e.g., Enterprise hosted Application flows via a WiFi anchor241(e.g., cloud144ofFIG.1), Mobile Operator hosted applications via a cellular core241(e.g., CN142ofFIG.1), and/or the like). The NCM236and the N-MADP237can be either collocated with one another or instantiated on different network nodes. The NCM236can set up multiple N-MADP237instances in the network. The NCM236controls the selection of an individual N-MADP237instance by the client and the rules for distribution of user traffic across the N-MADP237instances. In this way, different N-MADP237instances may be used to handle different sets of clients for load balancing across clients. Additionally, the different N-MADP237instances may be used to address different deployment topologies (e.g., the N-MADP237hosted at the user plane node at the access edge or in the core network, while the NCM236is hosted at the access edge node), as well as different access network technology architectures. For example, an N-MADP237instance at a CN node241may be used to manage traffic distribution across LTE and DSL networks, and another N-MADP237instance at a (R)AN node131/132may be used to manage traffic distribution across LTE and WiFi traffic. Furthermore, a single client201can be configured to use multiple N-MADP237instances, which may be used for addressing different application requirements. For example, individual N-MADP237instances may be used to handle TCP and UDP transport based traffic. The CCM206and NCM236exchange signaling messages to configure the user plane functions, C-MADP207and N-MADP237, at the client and network respectively. The CCM206may obtain the NCM236credentials (FQDN or IP Address) for sending the initial discovery messages. As an example, the client201can obtain the NCM236credentials using methods like provisioning or DNS query. Once the discovery process is successful, the (initial) NCM236can update and assign additional NCM236addresses, for example, based on MCC/MNC tuple information received in the MX Discovery Message, for sending subsequent control plane messages. The CCM206discovers and exchanges capabilities with the NCM236. The NCM236provides the credentials of the N-MADP237end-point and negotiates the parameters for the user plane with the CCM206. The CCM206configures the C-MADP207to set up the user plane path (e.g., MPTCP/UDP Proxy Connection) with the N-MADP237based on the credentials (e.g., (MPTCP/UDP) Proxy IP address and port, Associated Core Network Path), and the parameters exchanged with the NCM236. Further, the NCM236and CCM206exchange link status information to adapt traffic steering and user plane treatment with dynamic network conditions. The key procedures are described in detail in the following sub-sections. In embodiments, a UDP connection may be configured between the C-MADP207and the N-MADP237to exchange control messages. The control messages may be or include, for example, keep-alive, probe request (REQ)/acknowledgement (ACK), Packet Loss Report (PLR), First Sequence Number (FSN), Coded MX SDU (CMS), Traffic Splitting Update (TSU), Traffic Splitting ACK (TSA) messages, and/or path quality estimation information. The N-MADP237end-point IP address and UDP port number of the UDP connection are used to identify MX control PDUs. In various embodiments, the C-MADP207may send out PLR messages to report lost MX SDUs, for example, during handover.
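Since MX control PDUs are identified by the N-MADP end-point IP address and UDP port described above, a receiver can demultiplex control traffic from ordinary data traffic with a simple tuple comparison. The sketch below illustrates this; the endpoint values and function name are assumptions.

```python
# A minimal sketch of demultiplexing MX control PDUs by the configured
# N-MADP end-point (IP address, UDP port), as described above.
N_MADP_CTRL_ENDPOINT = ("192.0.2.10", 9999)  # hypothetical configured values

def is_mx_control_pdu(dst_ip: str, dst_port: int) -> bool:
    """True if a received UDP datagram targets the MX control connection."""
    return (dst_ip, dst_port) == N_MADP_CTRL_ENDPOINT

assert is_mx_control_pdu("192.0.2.10", 9999)
assert not is_mx_control_pdu("192.0.2.10", 8443)  # ordinary data path
```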
In response to a PLR message, the N-MADP237may retransmit the lost MX SDU to the client201(C-MADP207) if/when the lost packet is found. In various embodiments, the N-MADP237may send out FSN messages to indicate an oldest MX SDU in its transmission (Tx) buffer if a lost MX SDU is not found in the buffer after receiving a PLR message from the C-MADP207. In these embodiments, the C-MADP207only reports lost packets with an SN not smaller than the FSN. In various embodiments, the N-MADP237and/or C-MADP207may send out a CMS message to support DL or UL packet loss recovery through network coding. In these embodiments, the N-MADP237and/or C-MADP207may use a network coding algorithm to decode lost packets using the information of non-lost packets. Examples of such network coding algorithms may include linear network coding (LNC), random LNC (RLNC) (including block based RLNC and/or sliding window RLNC), caterpillar RLNC (CRLNC), network coded TCP (CTCP), convolutional network coding (CNC), opportunistic listening and opportunistic coding, and/or other like network coding algorithms. In these embodiments, if any of the MX SDUs is/are lost, a coded MX SDU is generated by applying a network coding algorithm to multiple consecutive (uncoded) MX SDUs, and is used for fast recovery without retransmission of the lost MX SDUs. The coded MX SDU may be generated using a number of consecutive MX SDUs (N) and a coding coefficient (K), where N and K are numbers. If N=2 and K=0, then an exclusive OR (XOR) method may be used to generate the coded MX SDU from two consecutive uncoded MX SDUs. As an example, LNC involves a MAMS transmitter (e.g., either the C-MADP207or the N-MADP237) generating one or more new packets using linear combinations of earlier received packets, multiplying them by one or more coefficients. The MAMS receiver (e.g., the other one of the C-MADP207or N-MADP237) receives one or more coded messages (e.g., the CMS discussed infra with respect toFIG.5), and collects the coded messages in a matrix. The original (lost) packets can be recovered by performing, for example, Gaussian elimination on the matrix, wherein the decoded packets may correspond to the rows in reduced row echelon form. As another example, RLNC involves the MAMS transmitter generating a coded message (e.g., the CMS discussed infra with respect toFIG.5) based on random linear combinations of previously transmitted packets (e.g., the MX SDUs discussed infra with respect toFIG.5) with coefficients chosen from a Galois field (GF) to create coded (redundant) symbols, where source symbols can be represented as row vectors of elements of the GF. In CRLNC, the MAMS transmitter encodes a packet using a last uncoded packet received by (or sent to) the MAMS receiver with a fixed coding rate. The coded message includes, inter alia, a header containing a sequence number and coding coefficient(s), and a packet payload containing a coded packet. The coding coefficient (a(i)) specifies how a corresponding i-th uncoded packet is included in the linear combination to form the corresponding coded packet. The various elements depicted in the example ofFIG.2may be implemented using a variety of different physical and/or virtualized components. In some embodiments, for example, the elements within the MAMS network202may be implemented using one or more components of an edge node130, such as one or more LTE or 5G RANs, or the MEC system135ofFIG.1.
In some embodiments, the MAMS system235may be implemented in or by an individual RAN node, such as one or more of the RAN nodes131/132inFIG.1. In one example, the MAMS system235is implemented as part of the layer 3 (L3) protocol stack (e.g., the Radio Resource Control (RRC) layer or the like). In another example, the MAMS system235is implemented as part of a layer above L3 such as the network layer (e.g., IP, UDP, GTP-U, etc.) data-plane protocol stack of the RAN node131/132. In another example of such embodiments, the MAMS system235may be implemented as a separate layer between the L3 and upper layers (see e.g.,FIG.3discussed infra). In another example of such embodiments, the MAMS system235may be implemented in or by a gNB-CU of a CU/DU split architecture. In another example of such embodiments, the MAMS system235may be implemented in or by a vBBU pool, or a cloud RAN (C-RAN). In embodiments where a MEC framework is used, the MAMS system235may be implemented in or by a MEC host (or a MEC server) that is located in, or co-located with, a RAN, such as one or more MEC servers136inFIG.1. Alternatively, the functional elements within the MAMS network202may be implemented by one or more network functions (or as a VNF) of CN120inFIG.1. For example, the N-MADP237may run on an S-GW or P-GW when CN120is an EPC, or the N-MADP237may run on a UPF when CN120is a 5GC. FIG.3depicts example MAMS e2e user plane protocol stacks310and312in a MAMS network300according to various embodiments. This example includes a protocol stack310that is implemented in a client301and a protocol stack312that is implemented in a MAMS system335. The client301may be the same or similar as the client201ofFIG.2, and the MAMS system335may be the same or similar as the MAMS system235ofFIG.2. The protocol stacks310and312may include one or more layer 3 (L3) protocols, and various layer 2 (L2) and layer 1 (L1) protocols (L1/L2_A and L1/L2_B inFIG.3) for different wireless network interfaces. The L3 protocols may be or reside in the network layer of the OSI model. The network layer is responsible for knowing the internetwork path (routing) from a source (sending) device to a destination (receiver) device. The network layer is also responsible for logical addressing schemes that assign logical addresses to network hosts on both sides of the path. L3 protocols send datagrams (or packets) to the L2 entities. The datagrams/packets contain a defined set of data including addressing and control information that is routed between the source and destination devices. Examples of L3 protocols include, inter alia, IP, User Datagram Protocol (UDP), Internetwork Packet Exchange/Sequenced Packet Exchange (IPX/SPX), AppleTalk, DECnet, Routing Information Protocol (RIP), Interior Gateway Routing Protocol (IGRP), Enhanced IGRP (EIGRP), Open Shortest Path First (OSPF), intermediate system-to-intermediate system (IS-IS), Border Gateway Protocol (BGP), and Exterior Gateway Protocol (EGP). In an example, when L3 is an IP layer, the L3 may assign IP addresses to user data packets in any of IPv4, IPv6, or PPP formats, for example. In 3GPP-based networks (e.g., LTE, NR, etc.), the L3 for the control plane includes an RRC protocol/layer/entity and a Non-Access Stratum (NAS) protocol/layer/entity. Typically, the RRC layer communicates data with the L2 protocol entities via one or more service access points (SAPs) and may configure the lower layer entities via corresponding management SAPs (M-SAPs). The L2 protocols may be or reside in the data link layer of the OSI model.
The data link layer is responsible for reliable transmission of data across a physical network link, using specifications that provide different network and protocol characteristics, which include physical addressing, different network topologies, error notifications, frame (L2 data units) sequences, and frame flow control. L2 is concerned with a specific addressing structure, namely physical addressing, as opposed to the L3 logical addressing scheme. Depending on the interface implementations, the physical addressing generally comes in the form of a MAC address that is encoded into the communication interface circuitry of the node. For WiFi-based protocols, the logical link control (LLC) layer may perform L3 multiplexing and demultiplexing operations. On receiving a frame from the MAC layer or the physical layer, the LLC layer identifies an L3 protocol type from an LLC header portion and provides the datagram to the correct L3 protocol via a suitable SAP ("de-multiplexing"). When sending data, the LLC layer provides packets from a particular L3 protocol entity to the MAC layer via a MAC SAP after inserting an L3 protocol type in the LLC header portion of the frame ("multiplexing"). The MAC layer of WiFi-based interfaces (e.g., IEEE 802.3) specifies a physical MAC address that identifies an interface or node on a network. Each frame (e.g., MAC PDU (MPDU)) sent over the wire contains a MAC address field, and only devices with a specific MAC address can process the frame. A source MAC address field is also included in the frame. Each interface implemented by a node has a corresponding L2 protocol stack. For example, a node301or335comprising an IEEE 802.11 based interface may include an L2 protocol stack comprising LLC and MAC layers, and the node may include a 3GPP-based interface including an L2 protocol stack comprising a Service Data Adaptation Protocol (SDAP) layer (e.g., for a 5G interface), a packet data convergence protocol (PDCP) layer, a radio link control (RLC) layer, and a MAC layer. Each of the L2 layers communicates with the others via corresponding SAPs between each layer. In various embodiments, each of the protocol stacks310and312includes a Multi-Access (MX) layer that is located below the L3 and above the L1/L2 entities of various communication protocols. The MX layers include an MX convergence sublayer, and for each network access technology, an MX adaptation sublayer. In embodiments, MX control PDUs and MX data PDUs go through the MX adaptation sublayers in the same way. In these embodiments, the MX convergence sublayer is configured to communicate with one or more L3 protocols via the respective SAPs of the L3 protocols/layers, and each MX adaptation sublayer is configured to communicate with a respective L2 protocol entity via the SAPs of those layers. The MX convergence sublayer performs multi-access specific operations in the user plane, such as access (path) selection, multi-link (path) aggregation, splitting/reordering, lossless switching, fragmentation, concatenation, keep-alive, probing, etc. The MX convergence layer can be implemented using existing user plane protocols including, for example, TCP, Multipath TCP (MPTCP), Quick UDP Internet Connection (QUIC), Multi-Path QUIC (MPQUIC), or by adapting encapsulating header/trailer schemes such as Generic Routing and Encapsulation (GRE), Generic Multi-Access (GMA), and/or the like.
In embodiments where the MX convergence sublayer is an MPTCP-based MX convergence sublayer, multiple access networks are combined into a single MPTCP connection, and therefore, no new u-plane protocol or PDU format is needed. In these embodiments, if the NCM236determines that the N-MADP237is to be instantiated with MPTCP as the MX Convergence Protocol, the NCM236exchanges the support of MPTCP capability in the discovery and capability exchange procedures. An MPTCP proxy may be used to manage traffic steering and aggregation over multiple delivery connections. In embodiments where the MX convergence sublayer is a GRE-based MX convergence sublayer, multiple access networks are combined into a single GRE connection, and therefore, no new u-plane protocol or PDU format is needed. In these embodiments, if the NCM236determines that the N-MADP237is to be instantiated with GRE as the MX Convergence Protocol, the NCM236exchanges the support of GRE capability in the discovery and capability exchange procedures. The MX adaptation sublayers (including MX adaptation sublayer_A and MX adaptation sublayer_B inFIG.3) perform functions to handle tunneling, network layer security, and network address translation (NAT). The tunneling functionality may include UDP tunneling, where the MX adaptation sublayers encapsulate user plane packets of a respective anchor connection223in a UDP tunnel of a delivery connection222between the N-MADP237and C-MADP207; and/or IPsec tunneling, where the MX adaptation sublayers send user plane packets of a respective anchor connection223through an IPsec tunnel of a delivery connection222. The client301NAT may involve the MX adaptation sublayers changing the client IP address of a user plane packet of the anchor connection223, and sending the user plane packet over a delivery connection222. The MX adaptation sublayers may also pass through (or forward) the user plane packets without any change over the anchor connection223. The MX adaptation sublayers also support IPsec Tunneling and Datagram Transport Layer Security (DTLS) to ensure security of user plane packets over the network path. For IPsec tunneling, the MX adaptation sublayers342establish an IPsec tunnel between the N-MADP237and C-MADP207on the network path that is considered untrusted. DTLS is used if/when UDP tunneling is used on the network path that is considered untrusted. The client NAT mechanism may be used if a delivery connection222is trusted and does not involve a NAT function on the path. The client NAT may provide efficiencies due to the absence of tunneling overhead. The UDP or IPsec tunneling mechanisms may be used if a delivery connection222has a NAT function placed on the path. The MX convergence sublayer operates on top of the MX adaptation sublayers in the protocol stacks310and312. From the transmitter perspective, a User Payload (e.g., IP PDU) is processed by the MX convergence sublayer first, and then by an MX adaptation sublayer before being transported over a corresponding delivery connection222and/or access connection221(see e.g.,FIG.2). From the receiver perspective, an IP packet received over an access connection221and/or a delivery connection222(see e.g.,FIG.2) is processed by a corresponding MX adaptation sublayer first, and then by the MX convergence sublayer. According to various embodiments, loss detection and retransmission mechanisms are added to the MX convergence sublayer.
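The transmit and receive ordering just described (convergence sublayer first on transmit, adaptation sublayer first on receive) can be illustrated with a toy pipeline. The placeholder encapsulation functions below are assumptions standing in for the real trailer/tunneling processing; only the ordering is the point.

```python
# Illustrative pipeline for the sublayer ordering described above.
# Requires Python 3.9+ for bytes.removeprefix/removesuffix.
def mx_convergence_tx(ip_pdu: bytes) -> bytes:
    return ip_pdu + b"<mx-trailer>"          # e.g., add SN/trailer

def mx_adaptation_tx(mx_pdu: bytes) -> bytes:
    return b"<udp-tunnel>" + mx_pdu          # e.g., UDP tunnel encapsulation

def transmit(ip_pdu: bytes) -> bytes:
    return mx_adaptation_tx(mx_convergence_tx(ip_pdu))

def receive(frame: bytes) -> bytes:
    mx_pdu = frame.removeprefix(b"<udp-tunnel>")     # adaptation first
    return mx_pdu.removesuffix(b"<mx-trailer>")      # then convergence

payload = b"user-ip-packet"
assert receive(transmit(payload)) == payload
```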
In some embodiments, the MX convergence sublayer may be a trailer-based MX convergence sublayer that integrates multiple connections into a single e2e IP connection, and operates between L2 and L3 (e.g., the network or IP layers). As an example, for each protocol stack310and312, the L1/L2_A1 may be or include L1/L2 entities of a 3GPP-based radio interface and the L1/L2_B1 may be or include L1/L2 entities of a WiFi-based radio interface. For the control plane of the 3GPP-based radio interface, the MX adaptation sublayer_A1 may be below the RRC layer and above the PDCP layer; for the user plane of the 3GPP-based radio interface, the MX adaptation sublayer_A1 may be below an IP and/or UDP layer and above the SDAP layer in 5G systems and above the PDCP layer in LTE systems. In this example, the PHY layer (L1) may transmit and receive PHY layer signals that may be received from or transmitted to one or more other communication devices. The PHY layer signals may comprise one or more physical channels, such as those discussed herein. The PHY layer may further perform link adaptation or adaptive modulation and coding, power control, cell search (e.g., for initial synchronization and handover purposes), and other measurements used by higher layers, such as RRC. The PHY layer may still further perform error detection on the transport channels, forward error correction (FEC) coding/decoding of the transport channels, modulation/demodulation of physical channels, interleaving, rate matching, mapping onto physical channels, and MIMO antenna processing. In embodiments, an instance of PHY may process requests from and provide indications to an instance of MAC via one or more PHY-SAP. According to some embodiments, requests and indications communicated via PHY-SAP may comprise one or more transport channels. Furthermore, in this example, the L1/L2_A2 implemented by the MAMS system335may be or include L1/L2 entities of a wired interface for communicating data with a core network, such as CN142ofFIG.1. For the WiFi-based radio, the MX adaptation sublayer_B may be above the LLC layer and/or the MAC layer(s). In one example, the protocol stack310is, or is part of, the 6LoWPAN, ZigBee, Subnetwork Access Protocol (SNAP), or Thread protocol, each of which comprises, inter alia, physical (PHY) and MAC layers that are based on the IEEE 802.15.4 protocol for the operation of LR-WPANs. The MAC layer manages access to physical channels and network beaconing, and the PHY layer provides an interface to the physical channels, modulation and encoding of data for transmission, and demodulation and decoding of signals into data to be provided to upper layers. In another example, the protocol stack310is, or is part of, the DSRC protocol comprising, inter alia, a DSRC/WAVE PHY and MAC layers that are based on the IEEE 802.11p protocol. The DSRC/WAVE PHY layer is responsible for obtaining data for transmitting over the V2X network from higher layers, as well as receiving raw data over the V2X network and providing data to upper layers. The MAC layer organizes the data packets into network frames. The MAC layer may be split into a lower DSRC/WAVE MAC layer based on IEEE 802.11p and an upper WAVE MAC layer (or a WAVE multi-channel layer) based on IEEE 1609.4. IEEE 1609 builds on IEEE 802.11p and defines one or more of the other higher layers. The LLC layer (e.g., IEEE 802.2) allows multiple network L3 protocols to communicate over the same physical link by allowing the L3 protocols to be specified in LLC fields. 
In either of the aforementioned examples, the L1/L2_B2implemented by the MAMS system335may be or include L1/L2 entities of a wired interface for communicating data with a WiFi-based backbone network, such as cloud144ofFIG.1. FIG.4depicts an example Multi-Access (MX) Data Protocol Data Unit (PDU) format400and an example MX trailer format402according to various embodiments. The MX data PDU format400may carry multiple IP PDUs in the IP payload section/field if concatenation is supported, and may carry a fragment of the IP PDU if fragmentation is supported. The MX data PDU format400also includes an IP header (hdr) section/field and the MX trailer section to carry the trailer-based MX PDU format402. In some embodiments, the Protocol Type field in the IP hdr of the MX data PDU format400may include a value of "114" to indicate the presence of the MX trailer (e.g., the trailer based MAMS u-plane protocol is a "0-hop" protocol, not subject to IP routing). If the MX PDU is transported with the MX adaptation method of IPsec tunneling, client NAT, or Pass Through, the IP length field in the IP header of the MX data PDU format400may also be changed to add the length of "MX Trailer" to the length of the original IP packet, and the IP checksum field in the IP header of the MX data PDU format400may also be recalculated after changing the "Protocol Type" and "IP Length" fields. If the MX adaptation method is UDP tunneling and "MX header optimization" in the "MX_UP_Setup_Configuration_Request" message is true, the "IP length" and "IP checksum" header fields of the MX data PDU format400may remain unchanged. The MX u-plane protocol can support multiple anchor connections223simultaneously, each of which is uniquely identified by Connection ID. The MX u-plane protocol can also support multiple traffic classes per connection, each of which is identified by Traffic Class ID. Moreover, the MX trailer format402may be negotiated dynamically between the NCM236and CCM206(seeFIG.2). For example, the NCM236can send a control message to indicate which of the above fields should be included for an individual delivery connection, on DL and UL, respectively. The trailer-based MX PDU format402allows a Sequence Number (SN) to be added as part of the MX trailer to each IP packet that is sent between a MAMS receiver (e.g., endpoints110, intermediate nodes120ofFIG.1) and a MAMS system (e.g., MAMS system335ofFIG.3).FIG.4shows the MX trailer format402with all the fields present. In some embodiments, the MX flags are encoded in the last octet of the MX Trailer at the end of an MX PDU, and the MAMS receiver may first decode the MX flags field to determine the length of the MX trailer, and then decode each MX field accordingly.
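The header adjustments described above (extend the IP length by the trailer size, set the Protocol Type to 114, recompute the checksum) can be sketched as follows. This is a minimal illustration assuming a plain 20-byte IPv4 header with no options; the helper names are assumptions, and the checksum is the standard one's-complement sum used for IPv4 headers.

```python
# A minimal sketch of appending an MX trailer and fixing up the IPv4 header.
import struct

def ipv4_checksum(header: bytes) -> int:
    """Standard one's-complement sum over 16-bit words."""
    if len(header) % 2:
        header += b"\x00"
    total = sum(struct.unpack(f"!{len(header) // 2}H", header))
    while total > 0xFFFF:
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def add_mx_trailer(ip_packet: bytes, trailer: bytes) -> bytes:
    hdr = bytearray(ip_packet[:20])                 # assumes no IP options
    new_len = struct.unpack("!H", hdr[2:4])[0] + len(trailer)
    hdr[2:4] = struct.pack("!H", new_len)           # IP Length += trailer
    hdr[9] = 114                                    # Protocol Type = 114 (MX)
    hdr[10:12] = b"\x00\x00"                        # zero before recompute
    hdr[10:12] = struct.pack("!H", ipv4_checksum(bytes(hdr)))
    return bytes(hdr) + ip_packet[20:] + trailer
```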
The trailer-based MX PDU format402includes the following fields:
MX flags (e.g., 1 Byte): bit 0 is the most significant bit, bit 7 is the least significant bit, and bits 6 and 7 are reserved for future uses (see the decoding sketch following this list).
Next Header Present (bit 0): If the Next Header Present bit is set to 1, then the Next Header field is present and contains valid information.
Connection ID Present (bit 1): If the Connection ID Present bit is set to 1, then the Connection ID field is present and contains valid information.
Traffic Class Present (bit 2): If the Traffic Class Present bit is set to 1, then the Traffic Class field is present and contains valid information.
Sequence Number Present (bit 3): If the Sequence Number Present bit is set to 1, then the Sequence Number field is present and contains valid information.
Packet Length Present (bit 4): If the Packet Length Present bit is set to 1, then the First SDU (Service Data Unit) Length field is present and contains valid information.
Fragmentation Control Present (bit 5): If the Fragmentation Control Present bit is set to 1, then the Fragmentation Control field is present and contains valid information.
Bits 6-7: reserved.
Next Header (e.g., 1 Byte): the IP protocol type of the (first) IP packet in an MX PDU.
Connection ID (e.g., 1 Byte): an unsigned integer to identify the anchor connection of the IP packets in an MX PDU.
Traffic Class (TC) ID (e.g., 1 Byte): an unsigned integer to identify the traffic class of the IP packets in an MX PDU, for example a Data Radio Bearer (DRB) ID [LWIPEP] for a cellular (e.g., LTE) connection.
Sequence Number (SN) (e.g., 2 Bytes): an auto-incremented integer to indicate the order of transmission of the IP packet, needed for lossless switching or multi-link (path) aggregation or fragmentation. The Sequence Number SHALL be generated on a per-Connection and per-Traffic Class (TC) basis.
First SDU Length (e.g., 2 Bytes): the length of the first IP packet, only included if an MX PDU contains multiple IP packets, e.g., concatenation.
Fragmentation Control (FC) (e.g., 1 Byte): provides necessary information for re-assembly, only needed if an MX PDU carries fragments, i.e., fragmentation.
According to various embodiments, packet loss detection may be supported based on the SN without changes to the existing MAMS protocols. However, protocol enhancements may be needed to the MX convergence sublayer to support the retransmission mechanisms discussed herein. In various embodiments, MX control messages may be used for such purposes, including the following MX control messages:
NACK report: reports the list of lost packets at the receiver, and includes the following information: (Anchor) Connection ID; Traffic Class ID; and Number of Packet Loss Bursts (1 Byte). For each burst: {SN of the first lost packet in the burst (2 Bytes), Number of lost packets in the burst (1 Byte or 2 Bytes)}.
ACK report: reports the last received in-order packet at the receiver, and includes the following information: (Anchor) Connection ID; Traffic Class ID; and Sequence Number (SN) of the last received in-order packet (2 Bytes).
FSN (First Sequence Number) report: reports the SN of the oldest (unacknowledged) packet in the buffer at the transmitter, and includes the following information: (Anchor) Connection ID; Traffic Class ID; and SN of the oldest (unacknowledged) packet in the buffer at the transmitter.
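A short sketch of decoding the MX flags octet per the bit assignments listed above; since bit 0 is the most significant bit, bit i of the specification maps to bit position 7-i of the octet. Only flag decoding is shown; full field extraction would walk backwards through the trailer using the per-field lengths listed above. The function and key names are assumptions.

```python
# Decode the MX flags octet at the end of an MX PDU (bit 0 = MSB).
def decode_mx_flags(mx_pdu: bytes) -> dict:
    flags = mx_pdu[-1]
    names = ["next_header", "connection_id", "traffic_class",
             "sequence_number", "first_sdu_length", "fragmentation_control"]
    return {name: bool(flags & (1 << (7 - i))) for i, name in enumerate(names)}

# Example: 0b11110000 -> Next Header, Connection ID, Traffic Class, and
# Sequence Number fields present; the remaining fields absent.
print(decode_mx_flags(b"...\xf0"))
```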
In the embodiments described above, the NACK report and ACK report are sent by a MAMS receiver (e.g., client201and client301ofFIGS.2and3, respectively), and the FSN report is sent by the MAMS transmitter (e.g., MAMS system235and MAMS system335ofFIGS.2and3, respectively). It is implementation specific how to detect packet loss and send out a NACK report. Usually, the MAMS receiver will send out a NACK report once a packet loss is detected. Moreover, a minimal NACK report time interval (T) may be inserted between two successive NACK reports to avoid unnecessary retransmission. The minimum NACK report time interval may be configured by the MAMS transmitter through the MX control messages. When the MAMS transmitter receives a NACK report, but cannot find the lost packet in its Tx buffer, the MAMS transmitter may send the FSN report to the MAMS receiver to notify the MAMS receiver of the SN of the oldest (unacknowledged) packet in the Tx buffer. In response, the MAMS receiver does not report any lost packets whose SN is smaller (e.g., older) than the FSN. In some embodiments, the NACK report discussed above may be a PLR message, which is used to report lost MX SDUs/PDUs. The contents of the PLR message are discussed infra. In either of the aforementioned embodiments, the (Anchor) Connection ID may be an unsigned integer to identify the anchor connection which the ACK message is for; the Traffic Class ID may be an unsigned integer to identify the traffic class of the anchor connection which the ACK message is for; the ACK number may be the next (in-order) SN that the sender of the PLR message is expecting; and for each loss burst, the Number of Loss Bursts may include or indicate the SN of the first lost MX SDU in a burst, and the number of consecutive lost MX SDUs in the burst. In various embodiments, a new type of MX control message may be used to deliver coded MX service data units (SDUs). Examples of the MX control messages and MX SDUs are shown byFIG.5. The coded MX SDU may be generated by applying the network coding technique to one or multiple consecutive MX SDUs. The receiver may use the Coded MX SDU to recover any of the MX SDUs that are used in the coding. For example, packet A and packet B are two MX SDUs, and a packet C may be generated by simply applying XOR between the two. As a result, if packet A is lost, packet A may be obtained by B⊕C. Similarly, if packet B is lost, packet B may be obtained by A⊕C. FIG.5depicts an example MX control PDU format500and an example coded MX Service Data Unit (SDU) format502according to various embodiments. The MX control PDU format500includes an MX control PDU payload, an IP header, and a UDP header. Control PDUs are sent as UDP messages between the C-MADP207and the N-MADP237to exchange control messages for keep-alive, path quality estimation, or other like purposes. A UDP connection may be configured between the C-MADP207and the N-MADP237for the exchange of the control messages. An N-MADP237end-point IP address and a UDP port number of the UDP connection are used to identify an MX control PDU. An N-MADP237end-point IP address may be included in the IP header section and a UDP port number of the UDP connection may be included in the UDP header section. MX Probe Parameters are negotiated during the User Plane Setup phase (e.g., MX UP SETUP CFG and MX UP SETUP CNF). The MX control PDU payload includes the following fields:
Type (1 Byte): the type of the MX control message.
The Type field may include the following values:
0: Keep-Alive
1: Probe REQ/ACK
2: Packet Loss Report (PLR)
3: First Sequence Number (FSN)
4: Coded MX SDU (CMS)
5: Traffic Splitting Update (TSU)
6: Traffic Splitting Acknowledgement (TSA)
Others: reserved
Connection ID (CID) (1 Byte): the connection ID of the delivery connection for sending out the MX control message.
MX Control Message (variable): the payload of the MX control message.
The Type and CID fields may comprise an MX control header. In some embodiments, the "Type" field is set to "2" for a PLR message. A PLR message includes the following fields (a serialization sketch follows this description):
Connection ID (1 Byte): an unsigned integer to identify the anchor connection which the ACK message is for.
Traffic Class ID (1 Byte): an unsigned integer to identify the traffic class of the anchor connection which the ACK message is for.
ACK number (2 Bytes): the next (in-order) SN that the sender of the PLR message is expecting.
Number of Loss Bursts (1 Byte): for each loss burst, the following are included: the SN of the first lost MX SDU in the burst (2 Bytes), and the number of consecutive lost MX SDUs in the burst (1 Byte).
In some embodiments, the "Type" field is set to "3" for an FSN message. An FSN message includes the following fields:
Connection ID (1 Byte): an unsigned integer to identify the anchor connection which the FSN message is for.
Traffic Class ID (1 Byte): an unsigned integer to identify the traffic class of the anchor connection which the FSN message is for.
First Sequence Number (2 Bytes): the SN of the oldest MX SDU in the (retransmission) buffer of the sender of the FSN message.
In some embodiments, the "Type" field is set to "4" for a CMS message. A CMS message includes the following fields:
Connection ID (1 Byte): an unsigned integer to identify the anchor connection of the coded MX SDU.
Traffic Class ID (1 Byte): an unsigned integer to identify the traffic class of the coded MX SDU.
Sequence Number (2 Bytes): the SN of the first (uncoded) MX SDU used to generate the coded MX SDU.
Fragmentation Control (FC) (1 Byte): provides necessary information for re-assembly, only needed if the coded MX SDU is too long to transport in a single MX control PDU.
N (1 Byte): the number of consecutive MX SDUs used to generate the coded MX SDU.
K (1 Byte): the length (in terms of bits) of the coding coefficient field.
Coding Coefficient (N×K/8 Bytes): a(i), the coding coefficient of the i-th (uncoded) MX SDU, plus padding.
Coded MX SDU (variable): the coded MX SDU.
If N=2 and K=0, the simple XOR method is used to generate the Coded MX SDU from two consecutive uncoded MX SDUs, and the a(i) fields are not included in the message. If the coded MX SDU is too long, it can be fragmented, and transported by multiple MX control PDUs. The N, K, and a(i) fields are only included in the MX PDU carrying the first fragment of the coded MX SDU. FIG.5also shows an example coded MX SDU502of an example MX Convergence Control message format. In embodiments, the coded MX SDU502may be included in the MX Control Message section of the MX control PDU format500. In this example, the CID, TC ID, SN, and FC fields are the same as discussed previously with respect to the MX trailer ofFIG.4and/or the MX control PDU format500. Here, the SN field includes the sequence number of the first MX SDU of the MX SDUs used to generate the coded MX SDU carried in the MX control message.
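A sketch of serializing a PLR message using the byte widths listed above (MX control header: Type=2 and CID; payload: Connection ID, Traffic Class ID, 2-byte ACK number, 1-byte burst count, then per burst a 2-byte first-lost SN and a 1-byte loss count), combined with the minimal NACK report interval (T) described earlier. The names and timing source are illustrative assumptions, and the exact on-the-wire layout follows the byte widths given in the text rather than any particular MAMS revision.

```python
# Illustrative PLR serialization plus NACK report interval suppression.
import struct
import time

def pack_plr(delivery_cid, anchor_cid, tc_id, ack_sn, bursts):
    msg = struct.pack("!BB", 2, delivery_cid)               # Type=2 (PLR), CID
    msg += struct.pack("!BBHB", anchor_cid, tc_id, ack_sn, len(bursts))
    for first_lost_sn, n_lost in bursts:
        msg += struct.pack("!HB", first_lost_sn, n_lost)
    return msg

class PlrSender:
    """Suppresses PLRs sent less than min_interval_s apart (the interval T)."""
    def __init__(self, min_interval_s):
        self.min_interval_s = min_interval_s
        self._last = float("-inf")

    def maybe_send(self, plr_bytes, send_fn):
        now = time.monotonic()
        if now - self._last < self.min_interval_s:
            return False                    # within T; avoid duplicate reports
        send_fn(plr_bytes)
        self._last = now
        return True

sender = PlrSender(min_interval_s=0.05)
plr = pack_plr(delivery_cid=1, anchor_cid=0, tc_id=0, ack_sn=100,
               bursts=[(100, 2)])           # SNs 100-101 lost
sender.maybe_send(plr, send_fn=lambda b: print(b.hex()))
```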
In the example ofFIG.5, the coded MX SDU502includes the following new fields to support packet coding:
N: the number (e.g., up to 16) of (consecutive, uncoded) MX SDUs used to generate the coded MX SDU.
K1: the number (e.g., up to 16) of bits of the coding coefficient.
K2: the number of bits for the coding sequence number.
a(i): the coding coefficient for the i-th MX SDU (1≤i≤N).
b(i): the coding sequence number for the i-th MX SDU (1≤i≤N).
In embodiments, if N=2 and K1=0, an XOR coding method is used to recover the coded packet, and therefore, a(i) and/or b(i) are not included in the control message. Additionally or alternatively, if the coding sequence number is aligned with the packet sequence number, then K2=0, and b(i) will not be included in the control message. Moreover, if a coded MX SDU502is too long to be carried in a single MX control message, the coded MX SDU502may be fragmented and transported by multiple MX control messages (e.g., shown by the "1stFragment . . . " and the "2ndFragment . . . " inFIG.5), and only the MX control message carrying the 1stfragment will include the N, K1, a(i), K2, and b(i) fields. FIGS.6-9show example packet recovery processes/procedures600-900, respectively, in accordance with various embodiments. For illustrative purposes, the various operations of processes600-900are described as being performed by a MAMS receiver (Rx) and a MAMS transmitter (Tx). In some embodiments, the MAMS Rx may be an intermediate node120(e.g., a UE121ofFIG.1) or an endpoint110(e.g., an IoT device111ofFIG.1), and the MAMS Tx may be an edge node130(e.g., one or more MEC servers/hosts136, (R)AN nodes131/132, AP133, relay nodes, distributed units, etc.). In other embodiments, the MAMS Tx may be the intermediate node120or the endpoint110, and the MAMS Rx may be the edge node130. Additionally, the various messages/signaling communicated between the MAMS Rx and the MAMS Tx may be sent and received over various interfaces discussed infra with respect toFIGS.10-18, and using the various mechanisms discussed herein including those discussed infra with respect toFIGS.10-18. While particular examples and orders of operations are illustrated inFIGS.6-9, the depicted orders of operations should not be construed to limit the scope of the embodiments in any way. Rather, the depicted operations may be re-ordered, broken into additional operations, combined, and/or omitted altogether while remaining within the spirit and scope of the present disclosure. FIG.6depicts an example packet retransmission procedure600according to various embodiments. Procedure600begins at operation602where a first packet (Packet_1) is sent from the MAMS Tx to the MAMS Rx, and the Packet_1 is properly received by the MAMS Rx. At operation604, a second packet (Packet_2) is sent from the MAMS Tx to the MAMS Rx, and the Packet_2 is not properly received by the MAMS Rx (i.e., Packet_2 is lost). At operation606, a third packet (Packet_3) is sent from the MAMS Tx to the MAMS Rx, and the Packet_3 is properly received by the MAMS Rx. At operation608, the MAMS Rx detects the lost Packet_2, and then at operation610, the MAMS Rx sends a first PLR (PLR_1) or a NACK report. In response, the MAMS Tx retransmits Packet_2 at operation612, but Packet_2 is lost again. The MAMS Rx waits for the minimum report time interval (T), and then at operation614, the MAMS Rx sends out a second PLR (PLR_2) or another NACK report. At operation616, the MAMS Tx retransmits Packet_2, which is successfully received by the MAMS Rx.
After some time, at operation618the MAMS Rx detects a lost packet (Packet_N), and at operation620, the MAMS Rx sends out a corresponding PLR (PLR_N) or NACK report. However, in this case, at operation622the MAMS Tx cannot find Packet_N in its Tx buffer, which causes the MAMS Tx to generate and send an FSN report at operation624. The FSN indicates that the oldest packet in the buffer of the MAMS Tx has the SN of N+X, where X is a number. As a result, the MAMS Rx will not report any lost packets having an SN smaller than N+X. FIG.7depicts example packet retransmission processes700A and700B according to various embodiments. Process700A may be performed by a MAMS Rx. Process700A begins at operation705where the MAMS Rx receives one or more packets from a MAMS Tx. At operation710, the MAMS Rx determines whether any packets have been lost. If the MAMS Rx determines that no packets have been lost, the MAMS Rx loops back to continue receiving packets from the MAMS Tx. If the MAMS Rx determines that there is a lost packet, then the MAMS Rx proceeds to operation715to generate and send a PLR (or a NACK report) to the MAMS Tx. The PLR (or NACK report) may include the information discussed above with respect toFIGS.4-5. At operation720, the MAMS Rx receives another packet, and at operation725, the MAMS Rx determines whether this other packet is an FSN or not. If the other packet is not an FSN, the MAMS Rx proceeds to operation730to process this packet. The other packet may be a retransmission of the lost packet or a new data packet. If the other packet is an FSN, the MAMS Rx proceeds to operation735to not report any lost packets with an SN that is smaller than an SN indicated by the FSN message. It is implementation specific how the MAMS Rx will prevent future PLRs from reporting packets with SNs smaller than the FSN. In one example, the MAMS Rx may set a suitable data element, database object, or data structure with the SN from the FSN message, and may check this entity prior to generating and sending future PLRs or NACK reports. After operation730or735is performed, process700A may end or repeat as necessary. Process700B may be performed by a MAMS Tx. Process700B begins at operation740where the MAMS Tx receives a PLR (or NACK report) from a MAMS Rx. The PLR (or NACK report) may include the information discussed above with respect toFIGS.4-5. At operation745, the MAMS Tx identifies or determines an SN of the lost packet from the information included in the PLR. At operation750, the MAMS Tx determines whether the lost packet is in its local Tx buffer, such as by searching individual packets in the buffer to determine if any of the buffered packets include the SN determined at operation745. If the lost packet is discovered in the buffer, the MAMS Tx retransmits the lost packet at operation760. If the lost packet is not discovered in the buffer, then at operation755the MAMS Tx generates and sends an FSN indicating an SN of the oldest packet in the buffer. After operation760or755is performed, process700B may end or repeat as necessary. FIG.8depicts an example network coding procedure800according to various embodiments. Procedure800begins at operation802where a first packet (Packet_1) is sent from the MAMS Tx to the MAMS Rx, and the Packet_1 is properly received by the MAMS Rx. At operation804, a second packet (Packet_2) is sent from the MAMS Tx to the MAMS Rx, and the Packet_2 is not properly received by the MAMS Rx (i.e., Packet_2 is lost). At operation806, the MAMS Tx generates a CMS.
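The transmitter-side decision in process700B (retransmit if the packet is still buffered, otherwise send an FSN with the oldest buffered SN) can be sketched compactly. The buffer structure and callback names below are assumptions, and SN wraparound is ignored for simplicity.

```python
# A compact sketch of the Tx-side PLR handling in process 700B.
def on_plr(tx_buffer: dict, lost_sn: int, retransmit, send_fsn):
    """tx_buffer maps SN -> packet bytes for unacknowledged packets."""
    if lost_sn in tx_buffer:
        retransmit(tx_buffer[lost_sn])        # operation 760
    else:
        send_fsn(min(tx_buffer))              # operation 755: oldest SN

tx_buffer = {12: b"p12", 13: b"p13"}
on_plr(tx_buffer, 13, retransmit=print, send_fsn=print)  # retransmits b'p13'
on_plr(tx_buffer, 9, retransmit=print, send_fsn=print)   # sends FSN 12
```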
In the example ofFIG.8, the MAMS Tx generates the CMS by performing an XOR operation on Packet_1 and Packet_2. The CMS may also include the information as discussed previously with respect toFIGS.4-5. In some embodiments, the MAMS Tx may send out the CMS message with or without determining a packet loss. At operation808, the MAMS Tx sends the CMS message to the MAMS Rx. The MAMS Rx recovers the lost Packet_2 from its stored version of Packet_1 and the CMS message. FIG.9depicts example network coding processes900A and900B according to various embodiments. Process900A may be performed by a MAMS Rx. Process900A begins at operation905where the MAMS Rx receives one or more packets from a MAMS Tx. At operation910, the MAMS Rx stores the received packets. At operation915, the MAMS Rx receives a coded message from the MAMS Tx. In embodiments, the coded message may be the CMS message discussed previously, and the CMS message may include the information discussed previously with respect toFIG.5. At operation920, the MAMS Rx applies a network coding algorithm to the coded packet included in the coded message to recover a lost packet. In some embodiments, the coded message indicates an SN of a first uncoded packet used to generate the coded packet and a number of consecutive packets used to generate the coded packet (N). In these embodiments, the MAMS Rx may apply the network coding algorithm to the coded packet and a plurality of consecutive uncoded packets that it had stored at operation910to recover the missing (lost) packet. The plurality of consecutive uncoded packets is a set of consecutive packets starting from a packet with the SN of the first uncoded packet and has a number of packets equal to N. In some embodiments, the coded message also indicates a coding coefficient (a(i)) of an i-th uncoded packet of the plurality of packets, and a number of bits of the coding coefficient (K). In these embodiments, the MAMS Rx may perform an XOR operation on the plurality of consecutive uncoded packets when K is equal to zero. In some embodiments, the MAMS Rx may perform an XOR operation on the plurality of consecutive uncoded packets when N is equal to two and K is equal to zero. In other embodiments, the MAMS Rx may perform an XOR operation on the plurality of consecutive uncoded packets when N is greater than two. Additionally or alternatively, the MAMS Rx may use, as the network coding algorithm, an LNC algorithm, an RLNC algorithm, a CRLNC algorithm, a CTCP algorithm, or some other suitable network coding algorithm. After operation920is performed, process900A may end or repeat as necessary. Process900B may be performed by a MAMS Tx. Process900B begins at operation925where the MAMS Tx sends one or more packets to the MAMS Rx. At operation930, the MAMS Tx applies a network coding algorithm to at least two consecutive packets to generate a coded packet. The coded packet may be the CMS discussed previously. In some embodiments, the MAMS Tx may generate the coded packet on a periodic basis, such as after a predetermined number of packets have been sent to the MAMS Rx. For example, the MAMS Tx may send n coded CMS messages for every N (uncoded) packets, where n is a number and n may be less than, greater than, or equal to N. The particular mechanisms used to determine n and N are up to the MAMS Tx implementation, and are out of the scope of the present disclosure. At operation935, the MAMS Tx generates a coded message including the coded packet. The coded message may be the CMS message discussed previously.
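The receiver-side XOR recovery of operation920 can be sketched as follows: given a CMS built from N consecutive packets starting at a first SN, XOR the coded payload with every stored packet in that window to recover the single missing one. XOR can repair at most one loss per coded SDU, and the sketch assumes equal-length packets; the function and variable names are assumptions.

```python
# A sketch of XOR-based recovery (the N=2-or-more, K=0 case) at the MAMS Rx.
def recover_with_cms(stored: dict, first_sn: int, n: int, coded: bytes):
    """stored maps SN -> packet bytes; returns the recovered SN or None."""
    window = range(first_sn, first_sn + n)
    missing = [sn for sn in window if sn not in stored]
    if len(missing) != 1:
        return None                      # nothing to do, or unrecoverable
    result = bytearray(coded)
    for sn in window:
        if sn in stored:
            for i, byte in enumerate(stored[sn]):
                result[i] ^= byte        # cancel out the known packets
    stored[missing[0]] = bytes(result)
    return missing[0]

stored = {1: b"abc", 3: b"ghi"}                      # SN 2 was lost
cms = bytes(a ^ b ^ c for a, b, c in zip(b"abc", b"def", b"ghi"))
assert recover_with_cms(stored, 1, 3, cms) == 2 and stored[2] == b"def"
```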
Returning to process900B, at operation940, the MAMS Tx sends the coded message to the MAMS Rx. After operation940is performed, process900B may end or repeat as necessary. FIG.10illustrates an example MEC system architecture1000in accordance with various embodiments. The MEC system1000ofFIG.10is a first embodiment of a system architecture of the MEC system135discussed previously. MEC enables implementation of multi-access edge applications (ME apps)1036as software-only entities that run on top of a Virtualization Infrastructure (VI)1038, which is located in or close to the network edge.FIG.10shows the general entities involved, and these entities can be grouped into multi-access edge system level1002, multi-access edge host level1001, and network level entities (not shown). The multi-access edge host level1001includes MEC servers136and multi-access edge (ME) management (mgmt)1030, which provide functionality to run multi-access edge applications (MEAs)1036within an operator network or a subset of an operator network. The multi-access edge system level1002includes multi-access edge system level management1002, a UE (which may be the same or similar to the intermediate nodes120and/or endpoints110discussed herein), and third party entities. The network level (not shown) includes various external network level entities, such as a 3GPP network (e.g., CN120ofFIG.1), a local area network (e.g., a LAN, WLAN, PAN, etc.), and an external network (e.g., network150ofFIG.1). The multi-access edge host level1001includes multi-access edge host level management and MEC server136. The multi-access edge host level management may include various components that handle the management of the multi-access edge specific functionality of a particular MEP1037, MEC server136, and the MEAs1036to be run. The MEC server136includes the MEP1037, MEAs1036, and VI1038. The MEC system1000includes three groups of reference points, including "Mp" reference points regarding the multi-access edge platform functionality; "Mm" reference points, which are management reference points; and "Mx" reference points, which connect MEC entities to external entities. The interfaces/reference points in the MEC system135may include IP-based connections, and may be used to provide Representational State Transfer (REST or RESTful) services, and the messages conveyed using the reference points/interfaces may be in XML, HTML, JSON, or some other desired format, such as those discussed herein. A suitable Authentication, Authorization, and Accounting (AAA) protocol, such as the RADIUS or Diameter protocols, may also be used for communicating over the reference points/interfaces in other embodiments. The MEC host136is an entity that contains an MEP1037and VI1038which provides compute, storage, and network resources for the purpose of running MEAs1036. The VI1038includes a data plane (DP)1038athat executes the traffic rules (TR)1037breceived by the MEP1037, and routes the traffic among applications (e.g., MEAs1036), ME services (MESs)1037a, DNS server/proxy (see e.g., via DNS handling entity1037c), 3GPP network, local networks, and external networks. The MEC DP1038amay be connected with the (R)AN nodes111and CN120ofFIG.1over interfaces114/115, and/or may be connected with the AP106ofFIG.1via a wider network150, such as the internet, an enterprise network, or the like. The other entities depicted byFIGS.1-3may be the same or similar as those discussed with regard toFIG.10.
The MEP1037within the MEC server136may be a collection of essential functionality required to run MEAs1036on a particular VI1038and enable them to provide and consume MESs1037a. The MEP1037can also provide various services and/or functions, such as offering an environment where the MEAs1036can discover, advertise, consume and offer MESs1037a(discussed infra), including MESs1037aavailable via other platforms when supported. The MEP1037may be able to allow authorized MEAs1036to communicate with third party servers located in external networks. The MEP1037may receive traffic rules from the multi-access edge platform manager (MEPM)1031, applications, or services, and instruct the data plane accordingly (see e.g., Traffic Rules Control1037b, also referred to as filtering rules control module1037b). The MEP1037may send instructions to the DP1038within the VI1038via the Mp2 reference point. The Mp2 reference point between the MEP1037and the DP1038of the VI1038may be used to instruct the DP1038on how to route traffic among applications, networks, services, etc. In some implementations, the MEP1037may translate tokens representing UEs in the traffic rules into specific IP addresses. The MEP1037also receives DNS records from the MEPM1031and configures a DNS proxy/server accordingly. The MEP1037hosts MESs1037aincluding the multi-access edge services discussed infra, and provides access to persistent storage and time of day information. Furthermore, the MEP1037may communicate with other MEPs1037of other MEC servers136via the Mp3 reference point. The VI1038may represent the totality of all hardware and software components which build up the environment in which MEAs1036and/or MEP1037are deployed, managed and executed. The VI1038may span across several locations, and the network providing connectivity between these locations is regarded to be part of the VI1038. The physical hardware resources of the VI1038include computing, storage and network resources that provide processing, storage and connectivity to MEAs1036and/or MEP1037through a virtualization layer (e.g., a hypervisor, virtual machine monitor (VMM), or the like). The virtualization layer may abstract and/or logically partition the physical hardware resources of the MEC server136as a hardware abstraction layer. The virtualization layer may also enable the software that implements the MEAs1036and/or MEP1037to use the underlying VI1038, and may provide virtualized resources to the MEAs1036and/or MEP1037, so that the MEAs1036and/or MEP1037can be executed. The MEAs1036may be applications that can be instantiated on a MEC server136within the MEC system135and can potentially provide or consume MESs1037a. MEAs1036may run as virtual machines (VM) on top of the VI1038provided by the MEC server136, and can interact with the MEP1037to consume and provide the MESs1037a. The MEAs1036are instantiated on the VI1038of the MEC server136based on configuration or requests validated by the ME management1030. In some embodiments, the MEAs1036can also interact with the MEP1037to perform certain support procedures related to the lifecycle of the MEAs1036, such as indicating availability, preparing relocation of user state, etc. The MEAs1036may have a certain number of rules and requirements associated with them, such as required resources, maximum latency, required or useful services, etc. These requirements may be validated by the multi-access edge system level management1030, and can be assigned default values if missing.
MESs1037amay be services provided and consumed either by the MEP1037or MEAs1036. When provided by an application, an MES1037acan be registered in a list of services1037dto the MEP1037over the Mp1 reference point. Additionally, the MEAs1036can subscribe to one or more services1037afor which they are authorized over the Mp1 reference point. The MEC system135may support a feature called UserApps. When the MEC system135supports the feature UserApps, the multi-access edge management may support the instantiation of MEAs1036on multiple MEC servers136following a single instantiation request, and when required by the operator in response to a request by the user. The application instance may need to fulfil a number of potential constraints predefined for the application. Once instantiated, connectivity may be established between the UE and the application instance. Potential constraints may include latency, location, compute resources, storage resources, network capability, security conditions, and the like. When the MEC system135supports the feature UserApps, the system1000may, in response to a request by a user, support the establishment of connectivity between a UE and an instance of a specific MEA1036fulfilling the requirements of the MEA1036regarding the UE. If no instance of the MEA1036fulfilling these requirements is currently running, the multi-access edge system management may create a new instance of the application on a multi-access edge host200that fulfils the requirements of the application. Once instantiated, connectivity shall be established between the UE and the new MEA1036instance. Requirements of the application can include latency, location, compute resources, storage resources, network capability, security conditions, and the like. When the MEC system135supports the feature UserApps, the system1000may support the on-boarding of MEAs1036during the execution of an instantiation request, may allow the establishment of connectivity between a UE and a specific instance of an MEA1036, may support the capability to terminate the MEA1036instance when no UE is connected to it anymore, and may support the termination of the MEA1036running on multiple MEC servers136following a single termination request. As shown byFIG.10, the Mp1 reference point is between the MEP1037and the MEAs1036. The Mp1 reference point may provide service registration1037d, service discovery, and communication support for various services, such as the MESs1037a. In addition, the Mp1 interface may provide application availability, session state relocation support procedures, traffic rules and DNS rules activation, access to persistent storage and time of day information, and/or the like. The Mp1 reference point may be used for consuming and providing service specific functionality. Examples of MESs1037ainclude Radio Network Information Service (RNIS), location services, and bandwidth management services. The RNIS, when available, provides authorized MEAs1036with radio network related information, and exposes appropriate up-to-date radio network information to the MEAs1036. The radio network information (RNI) may include, inter alia, radio network conditions, measurement and statistics information related to the user plane, information related to UEs served by the radio node(s) associated with the multi-access edge host (e.g., UE context and radio access bearers), changes on information related to UEs served by the radio node(s) associated with the multi-access edge host, and/or the like.
The RNI may be provided at the relevant granularity (e.g., per UE, per cell, per period of time). The service consumers (e.g., MEAs 1036 and MEP 1037) may communicate with the RNIS over an RNI Application Programming Interface (API) to obtain contextual information from a corresponding RAN. RNI may be provided to the service consumers via an access node (e.g., (R)AN nodes 131, 132 or AP 133 of FIG. 1). The RNI API may support both query and subscription (e.g., a pub/sub) based mechanisms that are used over a Representational State Transfer (RESTful) API or over a message broker of the MEP 1037 (not shown by FIG. 10). A MEA 1036 may query information on a message broker via a transport information query procedure, wherein the transport information may be pre-provisioned to the MEA 1036 via a suitable configuration mechanism. The various messages communicated via the RNI API may be in XML, JSON, Protobuf, or some other suitable format. The RNI may be used by MEAs 1036 and MEP 1037 to optimize the existing services and to provide new types of services that are based on up-to-date information on radio conditions. As an example, a MEA 1036 may use RNI to optimize current services such as video throughput guidance. In throughput guidance, a radio analytics MEA 1036 may use MEC services to provide a backend video server (e.g., server(s) 130) with a near real-time indication on the throughput estimated to be available at the radio downlink interface in a next time instant. The throughput guidance radio analytics application 1036 computes throughput guidance based on the required radio network information it obtains from a multi-access edge service running on the MEC server 136. RNI may also be used by the MEP 1037 to optimize the mobility procedures required to support service continuity, such as when a certain MEA 1036 requests a single piece of information using a simple request-response model (e.g., using RESTful mechanisms) while other MEAs 1036 subscribe to multiple different notifications regarding information changes (e.g., using a pub/sub mechanism and/or message broker mechanisms). In various embodiments, a MEC server 136 acting as a master node for distributed ML (e.g., the MEC server 136 in the example of FIG. 1) may access RNI of individual UEs via a MEA 1036 and/or the MEP 1037 using the RNI API for the purposes of evaluating the channel conditions and/or link quality for partitioning training datasets and/or for assigning computational tasks to the individual UEs. In an example, an application implemented by a MEC entity (e.g., the MEC-O 1021) may access RNI via a MEA 1036 or the MEP 1037 using the RNI API, which may be used to select a MEC server 136 to act as the master node for the distributed ML. The location services (LS), when available, may provide authorized MEAs 1036 with location-related information, and expose such information to the MEAs 1036. With location related information, the MEP 1037 or one or more MEAs 1036 perform active device location tracking, location-based service recommendations, and/or other like services. The LS supports the location retrieval mechanism, i.e., the location is reported only once for each location information request. The LS also supports a location subscribe mechanism, whereby the location can be reported multiple times for each location request, either periodically or based on specific events, such as a location change.
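The following sketch illustrates one way a master node could use RNI of the kind returned by the RNI API to rank UEs by link quality before partitioning training data or assigning computational tasks, as described above. The report fields and the scoring weights are illustrative assumptions, not a standardized algorithm.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class RniReport:
    ue_id: str
    rsrp_dbm: float  # reference signal received power
    sinr_db: float   # signal-to-interference-plus-noise ratio

def rank_ues(reports: List[RniReport]) -> List[str]:
    """Order UEs best-first by a simple link-quality score so that heavier
    tasks or larger training partitions go to better-connected UEs."""
    def score(r: RniReport) -> float:
        # Shift RSRP (typically around -120..-70 dBm) into a positive range
        # and blend with SINR; the 50/50 weighting is an arbitrary choice.
        return 0.5 * (r.rsrp_dbm + 120.0) + 0.5 * r.sinr_db
    return [r.ue_id for r in sorted(reports, key=score, reverse=True)]

if __name__ == "__main__":
    reports = [RniReport("ue-1", -95.0, 12.0),
               RniReport("ue-2", -110.0, 4.0),
               RniReport("ue-3", -88.0, 18.0)]
    print(rank_ues(reports))  # ['ue-3', 'ue-1', 'ue-2']
```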
The location information may include, inter alia, the location of specific UEs currently served by the radio node(s) associated with the MEC server 136, information about the location of all UEs currently served by the radio node(s) associated with the MEC server 136, information about the location of a certain category of UEs currently served by the radio node(s) associated with the MEC server 136, a list of UEs in a particular location, information about the location of all radio nodes currently associated with the MEC server 136, and/or the like. The location information may be in the form of a geolocation, a Global Navigation Satellite Service (GNSS) coordinate, a Cell identity (ID), and/or the like. The LS is accessible through the API defined in the Open Mobile Alliance (OMA) specification "RESTful Network API for Zonal Presence" OMA-TS-REST-NetAPI-ZonalPresence-V1-0-20160308-C. The Zonal Presence service utilizes the concept of "zone", where a zone lends itself to be used to group all radio nodes that are associated to a MEC host or MEC server 136, or a subset thereof, according to a desired deployment. In this regard, the OMA Zonal Presence API provides means for MEAs 1036 to retrieve information about a zone, the access points associated to the zones and the users that are connected to the access points. In addition, the OMA Zonal Presence API allows authorized applications to subscribe to a notification mechanism, reporting about user activities within a zone. In various embodiments, a MEC server 136 acting as a master node for distributed ML (e.g., the MEC server 136 in the example of FIG. 1) may access location information or zonal presence information of individual UEs using the OMA Zonal Presence API to identify the relative location or positions of the UEs. The location or zonal presence information may be used as a basis for selecting individual UEs for offloading ML tasks, partitioning training data, specifying encoding criteria, or for determining other aspects of the embodiments discussed herein. The bandwidth management services (BWMS) provide for the allocation of bandwidth to certain traffic routed to and from MEAs 1036, and specify static/dynamic up/down bandwidth resources, including bandwidth size and bandwidth priority. MEAs 1036 may use the BWMS to update/receive bandwidth information to/from the MEP 1037. In some embodiments, different MEAs 1036 running in parallel on the same MEC server 136 may be allocated specific static, dynamic up/down bandwidth resources, including bandwidth size and bandwidth priority. The BWMS includes a bandwidth management (BWM) API to allow registered applications to statically and/or dynamically register for specific bandwidth allocations per session/application. The BWM API includes HTTP protocol bindings for BWM functionality using RESTful services or some other suitable API mechanism. Referring back to FIG. 10, multi-access edge management comprises multi-access edge system level management and the multi-access edge host level management 1030. The multi-access edge host level management 1030 comprises the MEPM 1031 and the VI manager (VIM) 1032, and handles the management of the multi-access edge specific functionality of a particular MEC server 136 and the applications running on it.
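As a concrete illustration of the BWMS usage described above, the sketch below composes a bandwidth-allocation request body of the kind an MEA might submit per session/application via the BWM API. The field names are assumptions for illustration and do not reproduce the normative BWM API schema.

```python
import json

def build_bwm_allocation(app_instance_id: str, dl_mbps: int, ul_mbps: int,
                         priority: str = "HIGH") -> str:
    """Compose a static bandwidth-allocation body covering the size and
    priority attributes named in the text; fields are illustrative."""
    return json.dumps({
        "appInsId": app_instance_id,
        "fixedBWPriority": priority,
        "fixedAllocation": {"downlinkMbps": dl_mbps, "uplinkMbps": ul_mbps},
        "allocationDirection": "BIDIRECTIONAL",
    }, indent=2)

if __name__ == "__main__":
    print(build_bwm_allocation("mea-video-0001", dl_mbps=50, ul_mbps=10))
```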
In some implementations, some or all of the multi-access edge management components may be implemented by one or more servers located in one or more data centers, and may use virtualization infrastructure that is connected with Network Functions Virtualization (NFV) infrastructure used to virtualize core network elements, or using the same hardware as the NFV infrastructure. An example NFV infrastructure is shown by FIG. 11. The MEPM 1031 is responsible for managing the life cycle of applications including informing the multi-access edge orchestrator (MEC-O) 1021 of relevant application related events. The MEPM 1031 may also provide MEP element management functions (MEPE mgmt 1031a) to the MEP 1037, manage MEA rules and requirements (MERR mgmt 1031b) including service authorizations, traffic rules, DNS configuration and resolving conflicts, and manage MEA 1036 lifecycles (MEALC mgmt 1031). The multi-access edge platform manager 1031 may also receive virtualized resource fault reports and performance measurements from the VIM 1032 for further processing. The Mm5 reference point between the multi-access edge platform manager 1031 and the MEP 1037 is used to perform platform configuration, configuration of the MEPE mgmt 1031a, the MERR mgmt 1031b, the MEALC mgmt 1031, management of application relocation, etc. The VIM 1032 may be an entity that allocates, manages and releases virtualized (compute, storage and networking) resources of the VI 1038, and prepares the VI 1038 to run a software image. To do so, the VIM 1032 may communicate with the VI 1038 over the Mm7 reference point between the VIM 1032 and the VI 1038. Preparing the VI 1038 may include configuring the VI 1038, and receiving/storing the software image. When supported, the VIM 1032 may provide rapid provisioning of applications, such as described in "Openstack++ for Cloudlet Deployments", available at http://reports-archive.adm.cs.cmu.edu/anon/2015/CMU-CS-15-123.pdf. The VIM 1032 may also collect and report performance and fault information about the virtualized resources, and perform application relocation when supported. For application relocation from/to external cloud environments, the VIM 1032 may interact with an external cloud manager to perform the application relocation, for example using the mechanism described in "Adaptive VM Handoff Across Cloudlets", and/or possibly through a proxy. Furthermore, the VIM 1032 may communicate with the multi-access edge platform manager 1031 via the Mm6 reference point, which may be used to manage virtualized resources, for example, to realize the application lifecycle management. Moreover, the VIM 1032 may communicate with the MEC-O 1021 via the Mm4 reference point, which may be used to manage virtualized resources of the MEC server 136, and to manage application images. Managing the virtualized resources may include tracking available resource capacity, etc. The multi-access edge system level management includes the MEC-O 1021 as a core component, which has an overview of the complete MEC system 135. The MEC-O 1021 may maintain an overall view of the MEC system 135 based on deployed multi-access edge hosts 200, available resources, available MESs 1037a, and topology. The Mm3 reference point between the MEC-O 1021 and the multi-access edge platform manager 1030 may be used for the management of the application lifecycle, application rules and requirements and keeping track of available MESs 1037a.
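The following toy model illustrates the VIM role described above: tracking capacity, allocating and releasing virtualized resources, and recording the software image the VI is prepared with. Class and field names are hypothetical; this is a conceptual sketch, not an implementation of the Mm6/Mm7 interfaces.

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class ResourceRequest:
    vcpus: int
    memory_mb: int
    image_ref: str  # software image the VI is prepared with

@dataclass
class ToyVim:
    capacity_vcpus: int
    used_vcpus: int = 0
    allocations: Dict[str, ResourceRequest] = field(default_factory=dict)

    def allocate(self, app_name: str, req: ResourceRequest) -> bool:
        """Allocate virtualized resources; a failure here is the kind of
        capacity condition a VIM would report towards the MEPM."""
        if self.used_vcpus + req.vcpus > self.capacity_vcpus:
            return False
        self.used_vcpus += req.vcpus
        self.allocations[app_name] = req
        return True

    def release(self, app_name: str) -> None:
        """Release resources when an application is terminated or relocated."""
        req = self.allocations.pop(app_name, None)
        if req is not None:
            self.used_vcpus -= req.vcpus

if __name__ == "__main__":
    vim = ToyVim(capacity_vcpus=16)
    print(vim.allocate("mea-video", ResourceRequest(4, 8192, "registry/video:1.0")))  # True
```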
The MEC-O 1021 may communicate with the user application lifecycle management proxy (UALMP) 1025 via the Mm9 reference point in order to manage MEAs 1036 requested by UE application 1005. The MEC-O 1021 may also be responsible for on-boarding of application packages, including checking the integrity and authenticity of the packages, validating application rules and requirements and if necessary adjusting them to comply with operator policies, keeping a record of on-boarded packages, and preparing the VIM(s) 1032 to handle the applications. The MEC-O 1021 may select appropriate MEC host(s) 200 for application instantiation based on constraints, such as latency, available resources, and available services. The MEC-O 1021 may also trigger application instantiation and termination, as well as trigger application relocation as needed and when supported. The Operations Support System (OSS) 1022 refers to the OSS of an operator that receives requests via the Customer Facing Service (CFS) portal 1006 (and over the Mx1 reference point) and from UE applications 1005 for instantiation or termination of MEAs 1036, and decides on the granting of these requests. The CFS portal 1006 (and the Mx1 interface) may be used by third-parties to request the MEC system 135 to run applications 1036 in the MEC system 135. Granted requests may be forwarded to the MEC-O 1021 for further processing. When supported, the OSS 1022 also receives requests from UE applications 1005 for relocating applications between external clouds and the MEC system 135. The Mm2 reference point between the OSS 1022 and the multi-access edge platform manager 1030 is used for the multi-access edge platform 1030 configuration, fault and performance management. The Mm1 reference point between the MEC-O 1021 and the OSS 1022 is used for triggering the instantiation and the termination of multi-access edge applications 1036 in the MEC system 135. The user application lifecycle management proxy ("user app LCM proxy") 1025 may authorize requests from UE applications 1005 in the UE and interact with the OSS 1022 and the MEC-O 1021 for further processing of these requests. The user app LCM proxy 1025 may interact with the OSS 1022 via the Mm8 reference point, and is used to handle UE application 1005 requests for running applications in the MEC system 135. A user application 1005 may be an ME app 1036 that is instantiated in the MEC system 135 in response to a request of a user via an application running in the intermediate nodes 120 and/or endpoints 110 (e.g., UE application 1005). The user app LCM proxy 1025 allows UE applications 1005 to request on-boarding, instantiation, and termination of user applications and, when supported, relocation of user applications in and out of the MEC system 135. It also allows informing the UE applications 1005 about the state of the user applications 1005. The user app LCM proxy 1025 is only accessible from within the mobile network, and may only be available when supported by the MEC system 135. A UE application 1005 may use the Mx2 reference point between the user app LCM proxy 1025 and the UE application 1005 to request the MEC system 135 to run an application in the MEC system 135, or to move an application in or out of the MEC system 135. The Mx2 reference point may only be accessible within the mobile network and may only be available when supported by the multi-access edge system. In order to run an MEA 1036 in the MEC system 1000, the MEC-O 1021 receives requests triggered by the OSS 1022, a third-party, or a UE application 1005.
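To illustrate the Mx2 flow described above, the sketch below builds the kind of request a device-side client might send to the user app LCM proxy to ask the MEC system to run an application. The proxy URI and payload fields are hypothetical; the real request format is deployment- and specification-dependent.

```python
import json
import urllib.request

# Hypothetical user app LCM proxy URI; per the text, Mx2 is only
# reachable from within the mobile network and is deployment-specific.
UALMP_BASE = "http://ualmp.example.mnc.local/mx2/v1"

def build_instantiation_request(app_name: str, provider: str,
                                max_latency_ms: int) -> urllib.request.Request:
    """Build (but do not send) a request asking the MEC system to run an
    application on behalf of a device-side UE application."""
    body = json.dumps({
        "appName": app_name,
        "appProvider": provider,
        "requirements": {"maxLatencyMs": max_latency_ms},
    }).encode("utf-8")
    return urllib.request.Request(
        UALMP_BASE + "/app_contexts",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

if __name__ == "__main__":
    req = build_instantiation_request("ar-renderer", "example-vendor", 20)
    print(req.get_method(), req.full_url)
```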
In response to receipt of such requests, the MEC-O 1021 selects a MEC server 136 to host the MEA 1036 for computational offloading. These requests may include information about the application to be run, and possibly other information, such as the location where the application needs to be active, other application rules and requirements, as well as the location of the application image if it is not yet on-boarded in the MEC system 1000. In various embodiments, the MEC-O 1021 selects one or more MEC servers 136 for computationally intensive tasks. The selected one or more MEC servers 136 may offload computational tasks of a UE application 1005 based on various operational parameters, such as network capabilities and conditions, computational capabilities and conditions, application requirements, and/or other like operational parameters. The application requirements may be rules and requirements associated to/with one or more MEAs 1036, such as deployment model of the application (e.g., whether it is one instance per user, one instance per host, one instance on each host, etc.); required virtualized resources (e.g., compute, storage, network resources, including specific hardware support); latency requirements (e.g., maximum latency, how strict the latency constraints are, latency fairness between users); requirements on location; multi-access edge services that are required and/or useful for the MEAs 1036 to be able to run; multi-access edge services that the MEAs 1036 can take advantage of, if available; connectivity or mobility support/requirements (e.g., application state relocation, application instance relocation); required multi-access edge features, such as VM relocation support or UE identity; required network connectivity (e.g., connectivity to applications within the multi-access edge system, connectivity to local networks, or to the Internet); information on the operator's multi-access edge system deployment or mobile network deployment (e.g., topology, cost); requirements on access to user traffic; requirements on persistent storage; traffic rules; DNS rules, etc. The MEC-O 1021 considers the requirements and information listed above and information on the resources currently available in the MEC system 135 to select one or several MEC servers 136 within the MEC system 135 to host MEAs 1036 and/or for computational offloading. After one or more MEC servers 136 are selected, the MEC-O 1021 requests the selected MEC host(s) 200 to instantiate the application(s) or application tasks. The actual algorithm used to select the MEC servers 136 depends on the implementation, configuration, and/or operator deployment. In various embodiments, the selection algorithm may be based on the task offloading embodiments discussed herein, for example, by taking into account network, computational, and energy consumption requirements for performing application tasks, as well as network functionalities, processing, and offloading coding/encodings, or differentiating traffic between various RATs. Under certain circumstances (e.g., UE mobility events resulting in increased latency, load balancing decisions, etc.), and if supported, the MEC-O 1021 may decide to select one or more new MEC servers 136 to act as a master node, and initiate the transfer of an application instance or application-related state information from the one or more source MEC servers 136 to the one or more target MEC servers 136.
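Since the text notes that the actual selection algorithm is implementation-specific, the following sketch shows one plausible shape for it: hard constraints (resources, latency bound, required services) filter candidate hosts, and the lowest-latency survivor is chosen. All structures and thresholds here are illustrative assumptions, not the MEC-O's defined behavior.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class HostState:
    host_id: str
    free_vcpus: int
    est_latency_ms: float
    has_rnis: bool  # stands in for "required MEC services available"

@dataclass
class AppRequirements:
    vcpus: int
    max_latency_ms: float
    needs_rnis: bool

def select_host(hosts: List[HostState], req: AppRequirements) -> Optional[str]:
    """Filter hosts on hard constraints, then pick the lowest-latency one."""
    feasible = [
        h for h in hosts
        if h.free_vcpus >= req.vcpus
        and h.est_latency_ms <= req.max_latency_ms
        and (h.has_rnis or not req.needs_rnis)
    ]
    return min(feasible, key=lambda h: h.est_latency_ms).host_id if feasible else None

if __name__ == "__main__":
    hosts = [HostState("mec-1", 8, 12.0, True), HostState("mec-2", 2, 5.0, True)]
    print(select_host(hosts, AppRequirements(vcpus=4, max_latency_ms=20.0, needs_rnis=True)))  # mec-1
```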
FIG. 11 illustrates an example multi-access edge system architecture 1100 (or a multi-access edge system architecture) in accordance with various embodiments. MEC system 1100 of FIG. 11 is a second embodiment of a system architecture of the MEC system 135 discussed previously. Like-numbered elements in FIG. 11 are the same as discussed previously with respect to FIG. 10. The MEC system 1100 includes architectures and infrastructures that are used to virtualize one or more network functions (NFs) onto physical resources comprising a combination of industry-standard server hardware, storage hardware, or switches, or alternatively comprising proprietary hardware. Typically, mobile network operators virtualize their NFs using Network Functions Virtualization (NFV), and use virtualization infrastructure (VI) to consolidate various network elements, which are referred to as Virtualized Network Functions (VNFs). In other words, NFV can be used to execute virtual or reconfigurable implementations of one or more components/functions of a CN 120. As mentioned previously, the MEC system 135 (or individual MEC servers 136) may include VI to consolidate and virtualize various MEC components and MEC applications on top of the VI. In this regard, the system 1100 is an architecture where MEC elements are deployed in an NFV environment, which may provide maximum utilization of the underlying VI. In particular, the system 1100 is a MEC architecture that is deployed in NFV environments, wherein the MEP 1037 is deployed as a VNF, the MEAs 1036 appear as VNFs towards the NFV MANO components (MEAs 1036 with specific NFV functionality are referred to as "MEA-VNFs 1136" or the like), and the VI 1038 is deployed as an NFVI 1104 and its virtualized resources are managed by a VIM 1102. In addition to elements discussed previously with respect to FIG. 10, the system 1100 is illustrated as including a virtualized infrastructure manager (VIM) 1102, a network function virtualization infrastructure (NFVI) 1104, a VNF manager (VNFM) 1106, virtualized network functions (VNFs) including, inter alia, MEP-VNF 1137 and MEA-VNFs 1136, a MEC Edge Platform Manager—NFV (MEPM-V) 1110, and an NFV Orchestrator (NFVO) 1112. In embodiments, the MEP 1037 is realized as a VNF (e.g., MEP-VNF 1137 in FIG. 11) and is managed according to typical NFV procedures. In these embodiments, the MEPM 1031 is transformed into the Multi-access Edge Platform Manager—NFV (MEPM-V) 1110, where the MEPM-V 1110 acts as an Element Manager (EM) of the MEP-VNF 1137. The MEPM-V 1110 delegates Life Cycle Management (LCM) parts/tasks to one or more VNFM(s) 1106, including VNFM-MEP LCM 1106A and VNFM-MEA LCM 1106B. In particular, the VNFM 1106 is used to perform LCM of the MEP-VNF, including LCM of the MEP 1037 performed by the VNFM-MEP LCM 1106A and LCM of the MEAs 1036 performed by the VNFM-MEA LCM 1106B. Additionally, the MEC-O 1021 is transformed into a Multi-access Edge Application Orchestrator (MEAO) 1121 that uses the NFVO 1112 for resource orchestration, and for orchestration of the set of MEA-VNFs as one or more NFV Network Services (NSs). The MEA-VNFs 1136 are managed like individual VNFs, where certain orchestration and Life Cycle Management (LCM) tasks are delegated to the NFVO 1112 and VNFM 1106a,b functional blocks. In some embodiments, the MEP-VNF 1137, the MEPM-V 1110, and VNFM-MEA LCM 1106B may be deployed as a single package or ensemble. In other embodiments, the VNFM-MEP LCM 1106A and VNFM-MEA LCM 1106B are part of a generic VNFM 1106, and the MEP-VNF 1137 and the MEPM-V 1110 are provided by a single vendor.
The VIM 1102 manages the resources of the NFVI 1104. The NFVI 1104 includes physical or virtual resources and applications (including hypervisors) used to execute the system 1100. The VIM 1102 manages the life cycle of virtual resources within the NFVI 1104 (e.g., creation, maintenance, and tear down of virtual machines (VMs) associated with one or more physical resources); tracks VM instances; tracks performance, fault, and security of VM instances and associated physical resources; and exposes VM instances and associated physical resources to other management systems. The NFVO 1112 coordinates, authorizes, releases, and engages resources of the NFVI 1104 in order to provide requested services (e.g., to execute a core network function, component, or slice). The VNFM 1106 manages VNFs used to execute core network 120 components/functions. The VNFM 1106 manages the life cycle of the VNFs and tracks performance, fault, and security of the virtual aspects of VNFs. The MEPM-V 1110 tracks the performance, fault and security of the functional aspects of VNFs. The tracking data from the VNFM 1106 and the MEPM-V 1110 may comprise, for example, performance measurement (PM) data used by the VIM 1102 or the NFVI 1104. Both the VNFM 1106 and the MEPM-V 1110 can scale up/down the quantity of VNFs of the system 1100. The Mm3* reference point between the MEAO 1121 and the MEPM-V 1110 is based on the Mm3 reference point discussed previously. The Mm3* reference point in this embodiment may be altered to account for the split between MEPM-V 1110 and VNFM-MEA LCMs 1106B. In addition to the reference points discussed previously with respect to FIG. 10, system 1100 includes the reference points Mv1, Mv2 and Mv3 between elements of the MEC architecture and NFV architectures to support the management of MEA-VNFs 1136 and respective MEC services 1137a. The Mv1 reference point connects the MEAO 1121 and the NFVO 1112 and is the same or similar to the Os-Ma-nfvo reference point in NFV architectures. The Mv2 reference point connects the VNFM-MEA LCM 1106B with the MEPM-V 1110 to allow LCM related notifications to be exchanged between these entities. The Mv2 reference point is the same or similar to the Ve-Vnfm-em reference point in NFV architectures. The Mv3 reference point connects the VNFM-MEA LCM 1106B with MEA-VNF 1136 instance(s) to allow the exchange of messages related to, for example, MEA LCM or initial deployment-specific configurations. The Mv3 reference point is the same or similar to the Ve-Vnfm-vnf reference point in NFV architectures.
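A minimal sketch of the scale-up/scale-down decisions mentioned above, of the kind the VNFM or MEPM-V could derive from collected PM data. The thresholds and the averaging window are arbitrary illustrative choices, not values from any NFV specification.

```python
def scale_decision(cpu_samples, high=0.8, low=0.3):
    """Derive a scaling action from a window of CPU-utilization samples
    (0.0-1.0), as a VNFM or MEPM-V might from PM data."""
    if not cpu_samples:
        return "NO_OP"
    avg = sum(cpu_samples) / len(cpu_samples)
    if avg > high:
        return "SCALE_OUT"  # instantiate an additional VNF instance
    if avg < low:
        return "SCALE_IN"   # remove a VNF instance
    return "NO_OP"

if __name__ == "__main__":
    print(scale_decision([0.91, 0.87, 0.95]))  # SCALE_OUT
```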
Furthermore, the following reference points are used as they are defined for NFV architectures: The Nf-Vn reference point that connects each MEA-VNF 1136 with the NFVI 1104; the Nf-Vi reference point that connects the NFVI 1104 and the VIM 1102; the Os-Ma-nfvo reference point that connects the OSS 1022 and the NFVO 1112, which is primarily used to manage NSs (e.g., a number of VNFs connected and orchestrated to deliver a service); the Or-Vnfm reference point that connects the NFVO 1112 and the VNFM 1106a,b, which is primarily used for the NFVO 1112 to invoke VNF LCM operations; the Vi-Vnfm reference point that connects the VIM 1102 and the VNFM 1106a,b, which is primarily used by the VNFM 1106a,b to invoke resource management operations to manage cloud resources that are needed by the VNF 1137 and/or data plane (DP)-VNF 1138 (where the Vi-Vnfm reference point corresponds to the Mm6 reference point discussed previously); the Or-Vi reference point that connects the NFVO 1112 and the VIM 1102, which is primarily used by the NFVO 1112 to manage cloud resources capacity; the Ve-Vnfm-em reference point that connects the VNFM 1106a,b that manages the lifecycle of the MEP 1037 with the MEPM-V 1110; the Ve-Vnfm-vnf reference point that connects the VNFM 1106a,b that manages the lifecycle of the MEP 1037 with the MEP-VNF 1137; the Nf-Vn reference point that connects the MEP-VNF 1137 and the NFVI 1104; the Nf-Vi reference point that connects the NFVI 1104 and the VIM 1102; the Os-Ma-nfvo reference point that connects the OSS 1022 and the NFVO 1112, which is primarily used to manage NSs, for example, a number of VNFs connected and orchestrated to deliver a service; the Or-Vnfm reference point that connects the NFVO 1112 and the VNFM 1106a,b that manages the lifecycle of the ME platform, which is primarily used for the NFVO 1112 to invoke VNF LCM operations; the Vi-Vnfm reference point that connects the VIM 1102 and the VNFM 1106a,b that manages the lifecycle of the MEP 1037, which is primarily used by the VNFM 1106a,b to invoke resource management operations to manage the cloud resources that are needed by the VNF; and the Or-Vi reference point that connects the NFVO 1112 and the VIM 1102, which is primarily used by the NFVO 1112 to manage cloud resources capacity. When MEC is deployed in a NFV environment, the data plane (DP) 1138 may be implemented as a Physical Network Function (PNF) (e.g., as DP-PNF 1138), a VNF (e.g., as DP-VNF 1138), or a combination thereof. When implemented as a DP-PNF 1138, the DP is connected to the NS that contains the MEA-VNFs 1136, and the Mp2 reference point is kept as a MEC-internal reference point also in the NFV-based deployment of MEC. In another embodiment, for performance enhancements, the Service Function Chaining (SFC) functionality provided by the underlying NFVI 1104 may be reused for traffic routing. In such a deployment, the DP 1138 and the Mp2 reference point are omitted from the system 1100. The SFC functionality in the NFVI 1104 is configured by the NFVO 1112 in the VIM 1102 based on the NFP of the NFV NS, using the Or-Vi reference point. In these embodiments, the MEAO 1121 translates the traffic rules into an NFP and sends it to the NFVO 1112. The MEP-VNF 1137 may not control the traffic redirection directly via the Mp2 reference point, but instead may pass requests to activate/deactivate/update traffic rules to the MEPM-V 1110, which will then be forwarded to the MEAO 1121. When receiving such a request, the MEAO 1121 may request the NFVO 1112 to update the NFP accordingly.
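The traffic-rule-to-NFP translation performed by the MEAO, as described above, can be sketched as follows. The rule and NFP field names are illustrative assumptions rather than the normative NFV descriptors.

```python
def traffic_rule_to_nfp(rule, vnf_chain):
    """Translate a MEC traffic rule into an NFP-like structure: a traffic
    classifier plus an ordered chain of (MEA-)VNF hops that the NFVO can
    install in the VIM over Or-Vi. Field names are illustrative only."""
    return {
        "nfpName": "nfp-" + rule["ruleId"],
        "classifier": rule["match"],   # e.g. {"srcIp": "10.0.3.17/32"}
        "chain": list(vnf_chain),      # ordered VNF instances to traverse
    }

if __name__ == "__main__":
    rule = {"ruleId": "tr-001", "match": {"srcIp": "10.0.3.17/32"}}
    print(traffic_rule_to_nfp(rule, ["mea-vnf-firewall", "mea-vnf-analytics"]))
```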
Furthermore, although not shown by FIG. 11, the system 1100 may also include a network manager (NM). The NM may provide a package of end-user functions with the responsibility for the management of a network, which may include network elements with VNFs, non-virtualized network functions, or both (e.g., management of the VNFs may occur via the MEPM-V 1110). FIG. 12 illustrates an example of infrastructure equipment 1200 in accordance with various embodiments. The infrastructure equipment 1200 (or "system 1200") may be implemented as a base station, radio head, access network node (e.g., the edge nodes 130 shown and described previously), MEC servers 136, server(s) 150, and/or any other element/device discussed herein. In other examples, the system 1200 could be implemented in or by an intermediate node 120 or endpoint 110. The system 1200 includes application circuitry 1205, baseband circuitry 1210, one or more radio front end modules (RFEMs) 1215, memory circuitry 1220, power management integrated circuitry (PMIC) 1225, power tee circuitry 1230, network controller circuitry 1235, network interface connector 1240, positioning circuitry 1245, and user interface 1250. In some embodiments, the device 1200 may include additional elements such as, for example, memory/storage, display, camera, sensor, or input/output (I/O) interface. In other embodiments, the components described below may be included in more than one device. For example, said circuitries may be separately included in more than one device for CRAN, vBBU, or other like implementations. Application circuitry 1205 includes circuitry such as, but not limited to, one or more processors (or processor cores), cache memory, and one or more of low drop-out voltage regulators (LDOs), interrupt controllers, serial interfaces such as SPI, I2C or universal programmable serial interface module, real time clock (RTC), timer-counters including interval and watchdog timers, general purpose input/output (I/O or IO), memory card controllers such as Secure Digital (SD) MultiMediaCard (MMC) or similar, Universal Serial Bus (USB) interfaces, Mobile Industry Processor Interface (MIPI) interfaces and Joint Test Access Group (JTAG) test access ports. The processors (or cores) of the application circuitry 1205 may be coupled with or may include memory/storage elements and may be configured to execute instructions stored in the memory/storage to enable various applications or operating systems to run on the system 1200. In some implementations, the memory/storage elements may be on-chip memory circuitry, which may include any suitable volatile and/or non-volatile memory, such as DRAM, SRAM, EPROM, EEPROM, Flash memory, solid-state memory, and/or any other type of memory device technology, such as those discussed herein. The processor(s) of application circuitry 1205 may include, for example, one or more processor cores (CPUs), one or more application processors, one or more graphics processing units (GPUs), one or more reduced instruction set computing (RISC) processors, one or more Acorn RISC Machine (ARM) processors, one or more complex instruction set computing (CISC) processors, one or more DSPs, one or more FPGAs, one or more PLDs, one or more ASICs, one or more microprocessors or controllers, or any suitable combination thereof. In some embodiments, the application circuitry 1205 may comprise, or may be, a special-purpose processor/controller to operate according to the various embodiments herein.
As examples, the processor(s) of application circuitry 1205 may include one or more Intel Pentium®, Core®, or Xeon® processor(s); Advanced Micro Devices (AMD) Ryzen® processor(s), Accelerated Processing Units (APUs), or Epyc® processors; ARM-based processor(s) licensed from ARM Holdings, Ltd. such as the ARM Cortex-A family of processors and the ThunderX2® provided by Cavium™, Inc.; a MIPS-based design from MIPS Technologies, Inc. such as MIPS Warrior P-class processors; and/or the like. In some embodiments, the system 1200 may not utilize application circuitry 1205, and instead may include a special-purpose processor/controller to process IP data received from an EPC or 5GC, for example. In some implementations, the application circuitry 1205 may include one or more hardware accelerators, which may be microprocessors, programmable processing devices, or the like. The one or more hardware accelerators may include, for example, computer vision (CV) and/or deep learning (DL) accelerators. As examples, the programmable processing devices may be one or more field-programmable gate arrays (FPGAs); programmable logic devices (PLDs) such as complex PLDs (CPLDs), high-capacity PLDs (HCPLDs), and the like; ASICs such as structured ASICs and the like; programmable SoCs (PSoCs); and/or the like. In such implementations, the circuitry of application circuitry 1205 may comprise logic blocks or logic fabric, and other interconnected resources that may be programmed to perform various functions, such as the procedures, methods, functions, etc. of the various embodiments discussed herein. In such embodiments, the circuitry of application circuitry 1205 may include memory cells (e.g., erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), flash memory, static memory (e.g., static random access memory (SRAM), anti-fuses, etc.)) used to store logic blocks, logic fabric, data, etc. in look-up-tables (LUTs) and the like. In some implementations, such as implementations where subsystems of the edge nodes 130, intermediate nodes 120, and/or endpoints 110 of FIG. 1 are individual software agents or AI agents, each agent is implemented in a respective hardware accelerator that is configured with appropriate bit stream(s) or logic blocks to perform their respective functions. In these implementations, processor(s) and/or hardware accelerators of the application circuitry 1205 may be specifically tailored for operating the agents and/or for machine learning functionality, such as a cluster of AI GPUs, tensor processing units (TPUs) developed by Google® Inc., Real AI Processors (RAPs™) provided by AlphaICs®, Nervana™ Neural Network Processors (NNPs) provided by Intel® Corp., Intel® Movidius™ Myriad™ X Vision Processing Unit (VPU), NVIDIA® PX™ based GPUs, the NM500 chip provided by General Vision®, Hardware 3 provided by Tesla®, Inc., an Epiphany™ based processor provided by Adapteva®, or the like. In some embodiments, the hardware accelerator may be implemented as an AI accelerating co-processor, such as the Hexagon 685 DSP provided by Qualcomm®, the PowerVR 2NX Neural Net Accelerator (NNA) provided by Imagination Technologies Limited®, the Neural Engine core within the Apple® A11 or A12 Bionic SoC, the Neural Processing Unit within the HiSilicon Kirin 970 provided by Huawei®, and/or the like.
The baseband circuitry 1210 may be implemented, for example, as a solder-down substrate including one or more integrated circuits, a single packaged integrated circuit soldered to a main circuit board or a multi-chip module containing two or more integrated circuits. The baseband circuitry 1210 includes one or more processing devices (e.g., baseband processors) to carry out various protocol and radio control functions. Baseband circuitry 1210 may interface with application circuitry of system 1200 for generation and processing of baseband signals and for controlling operations of the RFEMs 1215. The baseband circuitry 1210 may handle various radio control functions that enable communication with one or more radio networks via the RFEMs 1215. The baseband circuitry 1210 may include circuitry such as, but not limited to, one or more single-core or multi-core processors (e.g., one or more baseband processors) or control logic to process baseband signals received from a receive signal path of the RFEMs 1215, and to generate baseband signals to be provided to the RFEMs 1215 via a transmit signal path. In various embodiments, the baseband circuitry 1210 may implement a real-time OS (RTOS) to manage resources of the baseband circuitry 1210, schedule tasks, etc. Examples of the RTOS may include Operating System Embedded (OSE)™ provided by Enea®, Nucleus RTOS™ provided by Mentor Graphics®, Versatile Real-Time Executive (VRTX) provided by Mentor Graphics®, ThreadX™ provided by Express Logic®, FreeRTOS, REX OS provided by Qualcomm®, OKL4 provided by Open Kernel (OK) Labs®, or any other suitable RTOS, such as those discussed herein. Although not shown by FIG. 12, in one embodiment, the baseband circuitry 1210 includes individual processing device(s) to operate one or more wireless communication protocols (e.g., a "multi-protocol baseband processor" or "protocol processing circuitry") and individual processing device(s) to implement physical layer (PHY) functions. In this embodiment, the protocol processing circuitry operates or implements various protocol layers/entities of one or more wireless communication protocols. In a first example, the protocol processing circuitry may operate LTE protocol entities and/or 5G/NR protocol entities when the RFEMs 1215 are a cellular radiofrequency communication system, such as millimeter wave (mmWave) communication circuitry or some other suitable cellular communication circuitry. In the first example, the protocol processing circuitry would operate MAC, RLC, PDCP, SDAP, RRC, and NAS functions. In a second example, the protocol processing circuitry may operate one or more IEEE-based protocols when the RFEMs 1215 are a WiFi communication system. In the second example, the protocol processing circuitry would operate WiFi MAC and LLC functions. The protocol processing circuitry may include one or more memory structures (not shown) to store program code and data for operating the protocol functions, as well as one or more processing cores (not shown) to execute the program code and perform various operations using the data. The protocol processing circuitry provides control functions for the baseband circuitry 1210 and/or RFEMs 1215. The baseband circuitry 1210 may also support radio communications for more than one wireless protocol.
Continuing with the aforementioned embodiment, the baseband circuitry 1210 includes individual processing device(s) to implement PHY functions including HARQ functions, scrambling and/or descrambling, (en)coding and/or decoding, layer mapping and/or de-mapping, modulation symbol mapping, received symbol and/or bit metric determination, multi-antenna port pre-coding and/or decoding which may include one or more of space-time, space-frequency or spatial coding, reference signal generation and/or detection, preamble sequence generation and/or decoding, synchronization sequence generation and/or detection, control channel signal blind decoding, radio frequency shifting, and other related functions. The modulation/demodulation functionality may include Fast-Fourier Transform (FFT), precoding, or constellation mapping/demapping functionality. The (en)coding/decoding functionality may include convolution, tail-biting convolution, turbo, Viterbi, or Low Density Parity Check (LDPC) coding. Embodiments of modulation/demodulation and encoder/decoder functionality are not limited to these examples and may include other suitable functionality in other embodiments. User interface circuitry 1250 may include one or more user interfaces designed to enable user interaction with the system 1200 or peripheral component interfaces designed to enable peripheral component interaction with the system 1200. User interfaces may include, but are not limited to, one or more physical or virtual buttons (e.g., a reset button), one or more indicators (e.g., light emitting diodes (LEDs)), a physical keyboard or keypad, a mouse, a touchpad, a touchscreen, speakers or other audio emitting devices, microphones, a printer, a scanner, a headset, a display screen or display device, etc. Peripheral component interfaces may include, but are not limited to, a nonvolatile memory port, a universal serial bus (USB) port, an audio jack, a power supply interface, etc. The radio front end modules (RFEMs) 1215 may comprise a millimeter wave (mmWave) RFEM and one or more sub-mmWave radio frequency integrated circuits (RFICs). In some implementations, the one or more sub-mmWave RFICs may be physically separated from the mmWave RFEM. The RFICs may include connections to one or more antennas or antenna arrays, and the RFEM may be connected to multiple antennas. In alternative implementations, both mmWave and sub-mmWave radio functions may be implemented in the same physical RFEM 1215, which incorporates both mmWave antennas and sub-mmWave antennas. The antenna array comprises one or more antenna elements, each of which is configured to convert electrical signals into radio waves to travel through the air and to convert received radio waves into electrical signals. For example, digital baseband signals provided by the baseband circuitry 1210 are converted into analog RF signals (e.g., modulated waveform) that will be amplified and transmitted via the antenna elements of the antenna array including one or more antenna elements (not shown). The antenna elements may be omnidirectional, directional, or a combination thereof. The antenna elements may be formed in a multitude of arrangements as are known and/or discussed herein. The antenna array may comprise microstrip antennas or printed antennas that are fabricated on the surface of one or more printed circuit boards. The antenna array may be formed as a patch of metal foil (e.g., a patch antenna) in a variety of shapes, and may be coupled with the RF circuitry using metal transmission lines or the like.
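As a concrete instance of the "modulation symbol mapping" PHY function named above, the sketch below maps bit pairs onto a Gray-coded QPSK constellation. This is one textbook mapping chosen for illustration; actual PHY implementations are standard- and implementation-specific.

```python
import math

_A = 1 / math.sqrt(2)  # normalization for unit average symbol power

# Gray-coded QPSK constellation; adjacent symbols differ by one bit,
# which limits bit errors caused by small phase noise.
QPSK = {
    (0, 0): complex(_A, _A),
    (0, 1): complex(-_A, _A),
    (1, 1): complex(-_A, -_A),
    (1, 0): complex(_A, -_A),
}

def map_qpsk(bits):
    """Map an even-length bit sequence onto QPSK symbols."""
    if len(bits) % 2:
        raise ValueError("QPSK maps bit pairs; bit count must be even")
    return [QPSK[(bits[i], bits[i + 1])] for i in range(0, len(bits), 2)]

if __name__ == "__main__":
    print(map_qpsk([0, 0, 1, 1, 0, 1]))
```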
The memory circuitry 1220 may include one or more of volatile memory including dynamic random access memory (DRAM) and/or synchronous dynamic random access memory (SDRAM), and nonvolatile memory (NVM) including high-speed electrically erasable memory (commonly referred to as Flash memory), phase change random access memory (PRAM), magnetoresistive random access memory (MRAM), etc., and may incorporate the three-dimensional (3D) cross-point (XPOINT) memories from Intel® and Micron®. Memory circuitry 1220 may be implemented as one or more of solder down packaged integrated circuits, socketed memory modules and plug-in memory cards. The memory circuitry 1220 is configured to store computational logic (or "modules") in the form of software, firmware, or hardware commands to implement the techniques described herein. The computational logic or modules may be developed using a suitable programming language or development tools, such as any programming language or development tool discussed herein. The computational logic may be employed to store working copies and/or permanent copies of programming instructions for the operation of various components of infrastructure equipment 1200, an operating system of infrastructure equipment 1200, one or more applications, and/or for carrying out the embodiments discussed herein (such as one or more of the operations depicted by FIGS. 6-9 and/or the like). The computational logic may be stored or loaded into memory circuitry 1220 as instructions for execution by the processors of the application circuitry 1205 to provide or perform the functions described herein. The various elements may be implemented by assembler instructions supported by processors of the application circuitry 1205 or high-level languages that may be compiled into such instructions. The permanent copy of the programming instructions may be placed into persistent storage devices of memory circuitry 1220 in the factory during manufacture, or in the field through, for example, a distribution medium (not shown), through a communication interface (e.g., from a distribution server), and/or over-the-air (OTA). The PMIC 1225 may include voltage regulators, surge protectors, power alarm detection circuitry, and one or more backup power sources such as a battery or capacitor. The power alarm detection circuitry may detect one or more of brown out (under-voltage) and surge (over-voltage) conditions. The power tee circuitry 1230 may provide for electrical power drawn from a network cable to provide both power supply and data connectivity to the infrastructure equipment 1200 using a single cable. The network controller circuitry 1235 provides connectivity to a network using a standard network interface protocol such as Ethernet, Ethernet over GRE Tunnels, Ethernet over Multiprotocol Label Switching (MPLS), or some other suitable protocol, such as those discussed herein. Network connectivity may be provided to/from the infrastructure equipment 1200 via network interface connector 1240 using a physical connection, which may be electrical (commonly referred to as a "copper interconnect"), optical, or wireless. The network controller circuitry 1235 may include one or more dedicated processors and/or FPGAs to communicate using one or more of the aforementioned protocols. In some implementations, the network controller circuitry 1235 may include multiple controllers to provide connectivity to other networks using the same or different protocols.
In various embodiments, the network controller circuitry 1235 enables communication with associated equipment and/or with a backend system (e.g., server(s) 130 of FIG. 1), which may take place via a suitable gateway device. The positioning circuitry 1245 includes circuitry to receive and decode signals transmitted/broadcasted by a positioning network of a global navigation satellite system (GNSS). Examples of navigation satellite constellations (or GNSS) include United States' Global Positioning System (GPS), Russia's Global Navigation System (GLONASS), the European Union's Galileo system, China's BeiDou Navigation Satellite System, a regional navigation system or GNSS augmentation system (e.g., Navigation with Indian Constellation (NAVIC), Japan's Quasi-Zenith Satellite System (QZSS), France's Doppler Orbitography and Radio-positioning Integrated by Satellite (DORIS), etc.), or the like. The positioning circuitry 1245 comprises various hardware elements (e.g., including hardware devices such as switches, filters, amplifiers, antenna elements, and the like to facilitate OTA communications) to communicate with components of a positioning network, such as navigation satellite constellation nodes. In some embodiments, the positioning circuitry 1245 may include a Micro-Technology for Positioning, Navigation, and Timing (Micro-PNT) IC that uses a master timing clock to perform position tracking/estimation without GNSS assistance. The positioning circuitry 1245 may also be part of, or interact with, the baseband circuitry 1210 and/or RFEMs 1215 to communicate with the nodes and components of the positioning network. The positioning circuitry 1245 may also provide position data and/or time data to the application circuitry 1205, which may use the data to synchronize operations with various other infrastructure equipment, or the like. The components shown by FIG. 12 may communicate with one another using interface circuitry, which may include any number of bus and/or interconnect (IX) technologies such as industry standard architecture (ISA), extended ISA (EISA), inter-integrated circuit (I2C), a serial peripheral interface (SPI), point-to-point interfaces, power management bus (PMBus), peripheral component interconnect (PCI), PCI express (PCIe), Intel® Ultra Path Interface (UPI), Intel® Accelerator Link (IAL), Common Application Programming Interface (CAPI), Intel® QuickPath interconnect (QPI), Ultra Path Interconnect (UPI), Intel® Omni-Path Architecture (OPA) IX, RapidIO™ system IXs, Cache Coherent Interconnect for Accelerators (CCIA), Gen-Z Consortium IXs, Open Coherent Accelerator Processor Interface (OpenCAPI) IX, a HyperTransport interconnect, and/or any number of other IX technologies. The IX technology may be a proprietary bus, for example, used in an SoC based system. FIG. 13 illustrates an example of a platform 1300 (also referred to as "system 1300," "device 1300," "appliance 1300," or the like) in accordance with various embodiments. In embodiments, the platform 1300 may be suitable for use as intermediate nodes 120 and/or endpoints 110 of FIG. 1, IoT devices 1504-1804 of FIGS. 15-18, and/or any other element/device discussed herein with regard to FIGS. 1-11. Platform 1300 may also be implemented in or as a server computer system or some other element, device, or system discussed herein. The platform 1300 may include any combinations of the components shown in the example.
The components of platform 1300 may be implemented as integrated circuits (ICs), portions thereof, discrete electronic devices, or other modules, logic, hardware, software, firmware, or a combination thereof adapted in the computer platform 1300, or as components otherwise incorporated within a chassis of a larger system. The example of FIG. 13 is intended to show a high level view of components of the computer platform 1300. However, some of the components shown may be omitted, additional components may be present, and different arrangements of the components shown may occur in other implementations. The platform 1300 includes processor circuitry 1302. The processor circuitry 1302 includes circuitry such as, but not limited to, one or more processor cores and one or more of cache memory, low drop-out voltage regulators (LDOs), interrupt controllers, serial interfaces such as serial peripheral interface (SPI), inter-integrated circuit (I2C) or universal programmable serial interface circuit, real time clock (RTC), timer-counters including interval and watchdog timers, general purpose input-output (I/O), memory card controllers such as secure digital/multi-media card (SD/MMC) or similar, universal serial bus (USB) interfaces, mobile industry processor interface (MIPI) interfaces and Joint Test Access Group (JTAG) test access ports. In some implementations, the processor circuitry 1302 may include one or more hardware accelerators, which may be microprocessors, programmable processing devices (e.g., FPGA, ASIC, etc.), or the like. The one or more hardware accelerators may include, for example, computer vision (CV) and/or deep learning (DL) accelerators. In some implementations, the processor circuitry 1302 may include on-chip memory circuitry, which may include any suitable volatile and/or non-volatile memory, such as DRAM, SRAM, EPROM, EEPROM, Flash memory, solid-state memory, and/or any other type of memory device technology, such as those discussed herein. The processor(s) of processor circuitry 1302 may include, for example, one or more processor cores (CPUs), one or more application processors, one or more graphics processing units (GPUs), one or more reduced instruction set computing (RISC) processors, one or more Acorn RISC Machine (ARM) processors, one or more complex instruction set computing (CISC) processors, one or more digital signal processors (DSP), one or more FPGAs, one or more PLDs, one or more ASICs, one or more baseband processors, one or more radio-frequency integrated circuits (RFIC), one or more microprocessors or controllers, or any suitable combination thereof. The processors (or cores) of the processor circuitry 1302 may be coupled with or may include memory/storage and may be configured to execute instructions stored in the memory/storage to enable various applications or operating systems to run on the platform 1300. In these embodiments, the processors (or cores) of the processor circuitry 1302 are configured to operate application software to provide a specific service to a user of the platform 1300. In some embodiments, the processor circuitry 1302 may be a special-purpose processor/controller to operate according to the various embodiments herein. As examples, the processor circuitry 1302 may include an Intel® Architecture Core™ based processor, such as a Quark™, an Atom™, an i3, an i5, an i7, or an MCU-class processor, Pentium® processor(s), Xeon® processor(s), or another such processor available from Intel® Corporation, Santa Clara, California.
However, any number of other processors may be used, such as one or more of Advanced Micro Devices (AMD) Zen® Core Architecture, such as Ryzen® or EPYC® processor(s), Accelerated Processing Units (APUs), MxGPUs, Epyc® processor(s), or the like; A5-A12 and/or S1-S4 processor(s) from Apple® Inc., Snapdragon™ or Centriq™ processor(s) from Qualcomm® Technologies, Inc., Texas Instruments, Inc.® Open Multimedia Applications Platform (OMAP)™ processor(s); a MIPS-based design from MIPS Technologies, Inc. such as MIPS Warrior M-class, Warrior I-class, and Warrior P-class processors; an ARM-based design licensed from ARM Holdings, Ltd., such as the ARM Cortex-A, Cortex-R, and Cortex-M family of processors; the ThunderX2® provided by Cavium™, Inc.; or the like. In some implementations, the processor circuitry 1302 may be a part of a system on a chip (SoC), System-in-Package (SiP), a multi-chip package (MCP), and/or the like, in which the processor circuitry 1302 and other components are formed into a single integrated circuit, or a single package, such as the Edison™ or Galileo™ SoC boards from Intel® Corporation. Other examples of the processor circuitry 1302 are mentioned elsewhere in the present disclosure. Additionally or alternatively, processor circuitry 1302 may include circuitry such as, but not limited to, one or more FPDs such as FPGAs and the like; PLDs such as CPLDs, HCPLDs, and the like; ASICs such as structured ASICs and the like; PSoCs; and the like. In such embodiments, the circuitry of processor circuitry 1302 may comprise logic blocks or logic fabric and other interconnected resources that may be programmed to perform various functions, such as the procedures, methods, functions, etc. of the various embodiments discussed herein. In such embodiments, the circuitry of processor circuitry 1302 may include memory cells (e.g., EPROM, EEPROM, flash memory, static memory (e.g., SRAM, anti-fuses, etc.)) used to store logic blocks, logic fabric, data, etc. in LUTs and the like. The processor circuitry 1302 may communicate with system memory circuitry 1304 over an interconnect 1306 (e.g., a bus). Any number of memory devices may be used to provide for a given amount of system memory. As examples, the memory circuitry 1304 may be random access memory (RAM) in accordance with a Joint Electron Devices Engineering Council (JEDEC) design such as the DDR or mobile DDR standards (e.g., LPDDR, LPDDR2, LPDDR3, or LPDDR4), dynamic RAM (DRAM), and/or synchronous DRAM (SDRAM). The memory circuitry 1304 may also include nonvolatile memory (NVM) such as high-speed electrically erasable memory (commonly referred to as "flash memory"), phase change RAM (PRAM), resistive memory such as magnetoresistive random access memory (MRAM), etc., and may incorporate three-dimensional (3D) cross-point (XPOINT) memories from Intel® and Micron®. The memory circuitry 1304 may also comprise persistent storage devices, which may be temporal and/or persistent storage of any type, including, but not limited to, non-volatile memory, optical, magnetic, and/or solid state mass storage, and so forth. The individual memory devices of memory circuitry 1304 may be implemented as one or more of solder down packaged integrated circuits, socketed memory modules, and plug-in memory cards. The memory circuitry 1304 may be implemented as any number of different package types such as single die package (SDP), dual die package (DDP) or quad die package (Q17P).
These devices, in some examples, may be directly soldered onto a motherboard to provide a lower profile solution, while in other examples the devices are configured as one or more memory modules that in turn couple to the motherboard by a given connector. Any number of other memory implementations may be used, such as other types of memory modules, e.g., dual inline memory modules (DIMMs) of different varieties including but not limited to microDIMMs or MiniDIMMs. In embodiments, the memory circuitry 1304 may be disposed in or on a same die or package as the processor circuitry 1302 (e.g., a same SoC, a same SiP, or soldered on a same MCP as the processor circuitry 1302). To provide for persistent storage of information such as data, applications, operating systems (OS), and so forth, a storage circuitry 1308 may also couple to the processor circuitry 1302 via the interconnect 1306. In an example, the storage circuitry 1308 may be implemented via a solid-state disk drive (SSDD). Other devices that may be used for the storage circuitry 1308 include flash memory cards, such as SD cards, microSD cards, xD picture cards, and the like, and USB flash drives. In low power implementations, the storage circuitry 1308 may be on-die memory or registers associated with the processor circuitry 1302. However, in some examples, the storage circuitry 1308 may be implemented using a micro hard disk drive (HDD). Further, any number of new technologies may be used for the storage circuitry 1308 in addition to, or instead of, the technologies described, such as resistance change memories, phase change memories, holographic memories, or chemical memories, among others. The storage circuitry 1308 stores computational logic 1383 (or "modules 1383") in the form of software, firmware, or hardware commands to implement the techniques described herein. The computational logic 1383 may be employed to store working copies and/or permanent copies of computer programs, or data to create the computer programs, for the operation of various components of platform 1300 (e.g., drivers, etc.), an operating system of platform 1300, one or more applications, and/or for carrying out the embodiments discussed herein. The computational logic 1383 may be stored or loaded into memory circuitry 1304 as instructions 1382, or data to create the instructions 1382, for execution by the processor circuitry 1302 to provide the functions described herein. The various elements may be implemented by assembler instructions supported by processor circuitry 1302 or high-level languages that may be compiled into such instructions (e.g., instructions 1370, or data to create the instructions 1370). The permanent copy of the programming instructions may be placed into persistent storage devices of storage circuitry 1308 in the factory or in the field through, for example, a distribution medium (not shown), through a communication interface (e.g., from a distribution server (not shown)), or over-the-air (OTA).
In an example, the instructions 1382 provided via the memory circuitry 1304 and/or the storage circuitry 1308 of FIG. 13 are embodied as one or more non-transitory computer readable storage media (see e.g., NTCRSM 1402 of FIG. 14) including program code, a computer program product or data to create the computer program, with the computer program or data, to direct the processor circuitry 1302 of platform 1300 to perform electronic operations in the platform 1300, and/or to perform a specific sequence or flow of actions, for example, as described with respect to the flowchart(s) and block diagram(s) of operations and functionality depicted previously (see e.g., FIGS. 6-9). The processor circuitry 1302 accesses the one or more non-transitory computer readable storage media over the interconnect 1306. Although the instructions 1382 are shown as code blocks included in the memory circuitry 1304 and the computational logic 1383 is shown as code blocks in the storage circuitry 1308, it should be understood that any of the code blocks may be replaced with hardwired circuits, for example, built into an FPGA, ASIC, or some other suitable circuitry. For example, where processor circuitry 1302 includes (e.g., FPGA based) hardware accelerators as well as processor cores, the hardware accelerators (e.g., the FPGA cells) may be pre-configured (e.g., with appropriate bit streams) with the aforementioned computational logic to perform some or all of the functions discussed previously (in lieu of employment of programming instructions to be executed by the processor core(s)). The memory circuitry 1304 and/or storage circuitry 1308 may store program code of an operating system (OS), which may be a general purpose OS or an OS specifically written for and tailored to the computing platform 1300. For example, the OS may be Unix or a Unix-like OS such as Linux, e.g., provided by Red Hat Enterprise, Windows 10™ provided by Microsoft Corp.®, macOS provided by Apple Inc.®, or the like. In another example, the OS may be a mobile OS, such as Android® provided by Google Inc.®, iOS® provided by Apple Inc.®, Windows 10 Mobile® provided by Microsoft Corp.®, KaiOS provided by KaiOS Technologies Inc., or the like. In another example, the OS may be a real-time OS (RTOS), such as Apache Mynewt provided by the Apache Software Foundation®, Windows 10 For IoT® provided by Microsoft Corp.®, Micro-Controller Operating Systems ("MicroC/OS" or "μC/OS") provided by Micrium®, Inc., FreeRTOS, VxWorks® provided by Wind River Systems, Inc.®, PikeOS provided by Sysgo AG®, Android Things® provided by Google Inc.®, QNX™ RTOS provided by BlackBerry Ltd., or any other suitable RTOS, such as those discussed herein. The OS may include one or more drivers that operate to control particular devices that are embedded in the platform 1300, attached to the platform 1300, or otherwise communicatively coupled with the platform 1300. The drivers may include individual drivers allowing other components of the platform 1300 to interact or control various input/output (I/O) devices that may be present within, or connected to, the platform 1300.
For example, the drivers may include a display driver to control and allow access to a display device, a touchscreen driver to control and allow access to a touchscreen interface of the platform 1300, sensor drivers to obtain sensor readings of sensor circuitry 1321 and control and allow access to sensor circuitry 1321, actuator drivers to obtain actuator positions of the actuators 1322 and/or control and allow access to the actuators 1322, a camera driver to control and allow access to an embedded image capture device, and audio drivers to control and allow access to one or more audio devices. The OSs may also include one or more libraries, drivers, APIs, firmware, middleware, software glue, etc., which provide program code and/or software components for one or more applications to obtain and use the data from a secure execution environment (SEE), trusted execution environment (TEE), and/or management engine of the platform 1300 (not shown). The components may communicate over the interconnect 1306. The interconnect 1306 may include any number of technologies, including industry standard architecture (ISA), extended ISA (EISA), peripheral component interconnect (PCI), peripheral component interconnect extended (PCIx), PCI express (PCIe), or any number of other technologies. The interconnect 1306 may be a proprietary bus, for example, used in a SoC based system. Other bus systems may be included, such as an I2C interface, an SPI interface, point-to-point interfaces, and a power bus, among others. The interconnect 1306 couples the processor circuitry 1302 to the communication circuitry 1309 for communications with other devices. The communication circuitry 1309 is a hardware element, or collection of hardware elements, used to communicate over one or more networks (e.g., cloud 1301) and/or with other devices (e.g., mesh devices/fog 1364). The communication circuitry 1309 includes baseband circuitry 1310 (or “modem 1310”) and radiofrequency (RF) circuitry 1311 and 1312. The baseband circuitry 1310 includes one or more processing devices (e.g., baseband processors) to carry out various protocol and radio control functions. Baseband circuitry 1310 may interface with application circuitry of platform 1300 (e.g., a combination of processor circuitry 1302, memory circuitry 1304, and/or storage circuitry 1308) for generation and processing of baseband signals and for controlling operations of the RF circuitry 1311 or 1312. The baseband circuitry 1310 may handle various radio control functions that enable communication with one or more radio networks via the RF circuitry 1311 or 1312. The baseband circuitry 1310 may include circuitry such as, but not limited to, one or more single-core or multi-core processors (e.g., one or more baseband processors) or control logic to process baseband signals received from a receive signal path of the RF circuitry 1311 and/or 1312, and to generate baseband signals to be provided to the RF circuitry 1311 or 1312 via a transmit signal path. In various embodiments, the baseband circuitry 1310 may implement a real-time OS (RTOS) to manage resources of the baseband circuitry 1310, schedule tasks, etc. Examples of the RTOS may include Operating System Embedded (OSE)™ provided by Enea®, Nucleus RTOS™ provided by Mentor Graphics®, Versatile Real-Time Executive (VRTX) provided by Mentor Graphics®, ThreadX™ provided by Express Logic®, FreeRTOS, REX OS provided by Qualcomm®, OKL4 provided by Open Kernel (OK) Labs®, or any other suitable RTOS, such as those discussed herein. 
Although not shown by FIG. 13, in one embodiment, the baseband circuitry 1310 includes individual processing device(s) to operate one or more wireless communication protocols (e.g., a “multi-protocol baseband processor” or “protocol processing circuitry”) and individual processing device(s) to implement PHY functions. In this embodiment, the protocol processing circuitry operates or implements various protocol layers/entities of one or more wireless communication protocols. In a first example, the protocol processing circuitry may operate LTE protocol entities and/or 5G/NR protocol entities when the communication circuitry 1309 is a cellular radiofrequency communication system, such as millimeter wave (mmWave) communication circuitry or some other suitable cellular communication circuitry. In the first example, the protocol processing circuitry 1305 would operate MAC, RLC, PDCP, SDAP, RRC, and NAS functions. In a second example, the protocol processing circuitry may operate one or more IEEE-based protocols when the communication circuitry 1309 is a WiFi communication system. In the second example, the protocol processing circuitry would operate WiFi MAC and LLC functions. The protocol processing circuitry may include one or more memory structures (not shown) to store program code and data for operating the protocol functions, as well as one or more processing cores (not shown) to execute the program code and perform various operations using the data. The protocol processing circuitry provides control functions for the baseband circuitry 1310 and/or RF circuitry 1311 and 1312. The baseband circuitry 1310 may also support radio communications for more than one wireless protocol. Continuing with the aforementioned embodiment, the baseband circuitry 1310 includes individual processing device(s) to implement PHY functions including HARQ functions, scrambling and/or descrambling, (en)coding and/or decoding, layer mapping and/or de-mapping, modulation symbol mapping, received symbol and/or bit metric determination, multi-antenna port pre-coding and/or decoding, which may include one or more of space-time, space-frequency or spatial coding, reference signal generation and/or detection, preamble sequence generation and/or decoding, synchronization sequence generation and/or detection, control channel signal blind decoding, radio frequency shifting, and other related functions. The modulation/demodulation functionality may include Fast-Fourier Transform (FFT), precoding, or constellation mapping/demapping functionality. The (en)coding/decoding functionality may include convolution, tail-biting convolution, turbo, Viterbi, or Low Density Parity Check (LDPC) coding. Embodiments of modulation/demodulation and encoder/decoder functionality are not limited to these examples and may include other suitable functionality in other embodiments. The communication circuitry 1309 also includes RF circuitry 1311 and 1312 to enable communication with wireless networks using modulated electromagnetic radiation through a non-solid medium. Each of the RF circuitry 1311 and 1312 includes a receive signal path, which may include circuitry to convert analog RF signals (e.g., an existing or received modulated waveform) into digital baseband signals to be provided to the baseband circuitry 1310. 
Each of the RF circuitry 1311 and 1312 also includes a transmit signal path, which may include circuitry configured to convert digital baseband signals provided by the baseband circuitry 1310 into analog RF signals (e.g., modulated waveform) that will be amplified and transmitted via an antenna array including one or more antenna elements (not shown). The antenna array may be a plurality of microstrip antennas or printed antennas that are fabricated on the surface of one or more printed circuit boards. The antenna array may be formed as a patch of metal foil (e.g., a patch antenna) in a variety of shapes, and may be coupled with the RF circuitry 1311 or 1312 using metal transmission lines or the like. The RF circuitry 1311 (also referred to as a “mesh transceiver”) is used for communications with other mesh or fog devices 1364. The mesh transceiver 1311 may use any number of frequencies and protocols, such as 2.4 Gigahertz (GHz) transmissions under the IEEE 802.15.4 standard, using the Bluetooth® low energy (BLE) standard, as defined by the Bluetooth® Special Interest Group, or the ZigBee® standard, among others. Any number of RF circuitry 1311, configured for a particular wireless communication protocol, may be used for the connections to the mesh devices 1364. For example, a WLAN unit may be used to implement WiFi™ communications in accordance with the IEEE 802.11 standard. In addition, wireless wide area communications, for example, according to a cellular or other wireless wide area protocol, may occur via a WWAN unit. The mesh transceiver 1311 may communicate using multiple standards or radios for communications at different ranges. For example, the platform 1300 may communicate with close/proximate devices, e.g., within about 10 meters, using a local transceiver based on BLE, or another low power radio, to save power. More distant mesh devices 1364, e.g., within about 50 meters, may be reached over ZigBee or other intermediate power radios. Both communications techniques may take place over a single radio at different power levels, or may take place over separate transceivers, for example, a local transceiver using BLE and a separate mesh transceiver using ZigBee. The RF circuitry 1312 (also referred to as a “wireless network transceiver,” a “cloud transceiver,” or the like) may be included to communicate with devices or services in the cloud 1301 via local or wide area network protocols. The wireless network transceiver 1312 includes one or more radios to communicate with devices in the cloud 1301. The cloud 1301 may be the same or similar to cloud 144 discussed previously. The wireless network transceiver 1312 may be an LPWA transceiver that follows the IEEE 802.15.4 or IEEE 802.15.4g standards, among others, such as those discussed herein. The platform 1300 may communicate over a wide area using LoRaWAN™ (Long Range Wide Area Network) developed by Semtech and the LoRa Alliance. The techniques described herein are not limited to these technologies, but may be used with any number of other cloud transceivers that implement long range, low bandwidth communications, such as Sigfox, and other technologies. Further, other communications techniques, such as time-slotted channel hopping, described in the IEEE 802.15.4e specification may be used. Any number of other radio communications and protocols may be used in addition to the systems mentioned for the mesh transceiver 1311 and wireless network transceiver 1312, as described herein. 
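To make the range-based radio selection described above concrete, the following is a minimal Python sketch; the radio names, nominal ranges, and the select_radio() helper are illustrative assumptions and not part of any actual platform API:

    # Hypothetical sketch: pick the lowest-power radio whose nominal range
    # covers the peer device, per the BLE (~10 m) / ZigBee (~50 m) split above.
    RADIOS = [
        ("BLE", 10.0),     # low power local transceiver
        ("ZigBee", 50.0),  # intermediate power mesh transceiver
        ("LPWA", None),    # wide area transceiver for anything farther
    ]

    def select_radio(distance_m):
        for name, max_range in RADIOS:
            if max_range is None or distance_m <= max_range:
                return name

    assert select_radio(5.0) == "BLE"
    assert select_radio(30.0) == "ZigBee"
    assert select_radio(500.0) == "LPWA"

The same dispatch could equally run over a single radio at different power levels, as noted above; the table would then map distances to power settings rather than to separate transceivers.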
For example, the radio transceivers 1311 and 1312 may include an LTE or other cellular transceiver that uses spread spectrum (SPA/SAS) communications for implementing high-speed communications. Further, any number of other protocols may be used, such as WiFi® networks for medium speed communications and provision of network communications. The transceivers 1311 and 1312 may include radios that are compatible with, and/or may operate according to, any one or more of the following radio communication technologies and/or standards including but not limited to those discussed herein. Network interface circuitry/controller (NIC) 1316 may be included to provide wired communication to the cloud 1301 or to other devices, such as the mesh devices 1364, using a standard network interface protocol. The standard network interface protocol may include Ethernet, Ethernet over GRE Tunnels, Ethernet over Multiprotocol Label Switching (MPLS), Ethernet over USB, or may be based on other types of network protocols, such as Controller Area Network (CAN), Local Interconnect Network (LIN), DeviceNet, ControlNet, Data Highway+, PROFIBUS, or PROFINET, among many others. Network connectivity may be provided to/from the platform 1300 via NIC 1316 using a physical connection, which may be electrical (e.g., a “copper interconnect”) or optical. The physical connection also includes suitable input connectors (e.g., ports, receptacles, sockets, etc.) and output connectors (e.g., plugs, pins, etc.). The NIC 1316 may include one or more dedicated processors and/or FPGAs to communicate using one or more of the aforementioned network interface protocols. In some implementations, the NIC 1316 may include multiple controllers to provide connectivity to other networks using the same or different protocols. For example, the platform 1300 may include a first NIC 1316 providing communications to the cloud over Ethernet and a second NIC 1316 providing communications to other devices over another type of network. The interconnect 1306 may couple the processor circuitry 1302 to an external interface 1318 (also referred to as “I/O interface circuitry” or the like) that is used to connect external devices or subsystems. The external devices include, inter alia, sensor circuitry 1321, actuators 1322, and positioning circuitry 1345. The sensor circuitry 1321 may include devices, modules, or subsystems whose purpose is to detect events or changes in its environment and send the information (sensor data) about the detected events to some other device, module, subsystem, etc. Examples of such sensors 1321 include, inter alia, inertial measurement units (IMUs) comprising accelerometers, gyroscopes, and/or magnetometers; microelectromechanical systems (MEMS) or nanoelectromechanical systems (NEMS) comprising 3-axis accelerometers, 3-axis gyroscopes, and/or magnetometers; level sensors; flow sensors; temperature sensors (e.g., thermistors); pressure sensors; barometric pressure sensors; gravimeters; altimeters; image capture devices (e.g., cameras); light detection and ranging (LiDAR) sensors; proximity sensors (e.g., infrared radiation detector and the like); depth sensors; ambient light sensors; ultrasonic transceivers; microphones; etc. The external interface 1318 connects the platform 1300 to actuators 1322 to allow the platform 1300 to change its state, position, and/or orientation, or move or control a mechanism or system. 
The actuators 1322 comprise electrical and/or mechanical devices for moving or controlling a mechanism or system, and convert energy (e.g., electric current or moving air and/or liquid) into some kind of motion. The actuators 1322 may include one or more electronic (or electrochemical) devices, such as piezoelectric biomorphs, solid state actuators, solid state relays (SSRs), shape-memory alloy-based actuators, electroactive polymer-based actuators, relay driver integrated circuits (ICs), and/or the like. The actuators 1322 may include one or more electromechanical devices such as pneumatic actuators, hydraulic actuators, electromechanical switches including electromechanical relays (EMRs), motors (e.g., DC motors, stepper motors, servomechanisms, etc.), wheels, thrusters, propellers, claws, clamps, hooks, an audible sound generator, and/or other like electromechanical components. The platform 1300 may be configured to operate one or more actuators 1322 based on one or more captured events and/or instructions or control signals received from a service provider and/or various client systems. The positioning circuitry 1345 includes circuitry to receive and decode signals transmitted/broadcasted by a positioning network of a global navigation satellite system (GNSS). Examples of navigation satellite constellations (or GNSS) include the United States’ Global Positioning System (GPS), Russia’s Global Navigation Satellite System (GLONASS), the European Union’s Galileo system, China’s BeiDou Navigation Satellite System, a regional navigation system or GNSS augmentation system (e.g., Navigation with Indian Constellation (NAVIC), Japan’s Quasi-Zenith Satellite System (QZSS), France’s Doppler Orbitography and Radio-positioning Integrated by Satellite (DORIS), etc.), or the like. The positioning circuitry 1345 comprises various hardware elements (e.g., including hardware devices such as switches, filters, amplifiers, antenna elements, and the like to facilitate OTA communications) to communicate with components of a positioning network, such as navigation satellite constellation nodes. In some embodiments, the positioning circuitry 1345 may include a Micro-Technology for Positioning, Navigation, and Timing (Micro-PNT) IC that uses a master timing clock to perform position tracking/estimation without GNSS assistance. The positioning circuitry 1345 may also be part of, or interact with, the communication circuitry 1309 to communicate with the nodes and components of the positioning network. The positioning circuitry 1345 may also provide position data and/or time data to the application circuitry, which may use the data to synchronize operations with various infrastructure (e.g., radio base stations), for turn-by-turn navigation, or the like. In some examples, various input/output (I/O) devices may be present within, or connected to, the platform 1300; these are referred to as input device circuitry 1386 and output device circuitry 1384 in FIG. 13. The input device circuitry 1386 and output device circuitry 1384 include one or more user interfaces designed to enable user interaction with the platform 1300 and/or peripheral component interfaces designed to enable peripheral component interaction with the platform 1300. Input device circuitry 1386 may include any physical or virtual means for accepting an input including, inter alia, one or more physical or virtual buttons (e.g., a reset button), a physical keyboard, keypad, mouse, touchpad, touchscreen, microphones, scanner, headset, and/or the like. 
The output device circuitry 1384 may be included to show or otherwise convey information, such as sensor readings, actuator position(s), or other like information. Data and/or graphics may be displayed on one or more user interface components of the output device circuitry 1384. Output device circuitry 1384 may include any number and/or combinations of audio or visual display, including, inter alia, one or more simple visual outputs/indicators (e.g., binary status indicators (e.g., light emitting diodes (LEDs)) and multi-character visual outputs), or more complex outputs such as display devices or touchscreens (e.g., Liquid Crystal Displays (LCD), LED displays, quantum dot displays, projectors, etc.), with the output of characters, graphics, multimedia objects, and the like being generated or produced from the operation of the platform 1300. The output device circuitry 1384 may also include speakers or other audio emitting devices, printer(s), and/or the like. In some embodiments, the sensor circuitry 1321 may be used as the input device circuitry 1386 (e.g., an image capture device, motion capture device, or the like) and one or more actuators 1322 may be used as the output device circuitry 1384 (e.g., an actuator to provide haptic feedback or the like). In another example, near-field communication (NFC) circuitry comprising an NFC controller coupled with an antenna element and a processing device may be included to read electronic tags and/or connect with another NFC-enabled device. Peripheral component interfaces may include, but are not limited to, a non-volatile memory port, a universal serial bus (USB) port, an audio jack, a power supply interface, etc. A battery 1324 may be coupled to the platform 1300 to power the platform 1300, which may be used in embodiments where the platform 1300 is not in a fixed location. The battery 1324 may be a lithium ion battery, a lead-acid automotive battery, or a metal-air battery, such as a zinc-air battery, an aluminum-air battery, a lithium-air battery, a lithium polymer battery, and/or the like. In embodiments where the platform 1300 is mounted in a fixed location, the platform 1300 may have a power supply coupled to an electrical grid. In these embodiments, the platform 1300 may include power tee circuitry to provide for electrical power drawn from a network cable to provide both power supply and data connectivity to the platform 1300 using a single cable. Power management integrated circuitry (PMIC) 1326 may be included in the platform 1300 to track the state of charge (SoCh) of the battery 1324, and to control charging of the platform 1300. The PMIC 1326 may be used to monitor other parameters of the battery 1324 to provide failure predictions, such as the state of health (SoH) and the state of function (SoF) of the battery 1324. The PMIC 1326 may include voltage regulators, surge protectors, and power alarm detection circuitry. The power alarm detection circuitry may detect one or more of brown out (under-voltage) and surge (over-voltage) conditions. The PMIC 1326 may communicate the information on the battery 1324 to the processor circuitry 1302 over the interconnect 1306. The PMIC 1326 may also include an analog-to-digital (ADC) converter that allows the processor circuitry 1302 to directly monitor the voltage of the battery 1324 or the current flow from the battery 1324. The battery parameters may be used to determine actions that the platform 1300 may perform, such as transmission frequency, mesh network operation, sensing frequency, and the like. 
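As a rough illustration of how the battery parameters reported by the PMIC 1326 might drive such actions, the following Python sketch maps state of charge to transmit and sensing intervals; the thresholds, intervals, and function name are hypothetical assumptions, not taken from any actual PMIC interface:

    def duty_cycles(state_of_charge):
        # Map state of charge (0.0-1.0) to transmit/sense intervals in seconds.
        if state_of_charge > 0.5:
            return {"tx_interval_s": 1, "sense_interval_s": 1}
        if state_of_charge > 0.2:
            return {"tx_interval_s": 10, "sense_interval_s": 5}
        # Critically low charge: minimize radio use and sense rarely.
        return {"tx_interval_s": 60, "sense_interval_s": 30}

    print(duty_cycles(0.8))   # {'tx_interval_s': 1, 'sense_interval_s': 1}
    print(duty_cycles(0.15))  # {'tx_interval_s': 60, 'sense_interval_s': 30}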
As an example, the PMIC 1326 may be a battery monitoring integrated circuit, such as an LTC4020 or an LTC2990 from Linear Technologies, an ADT7488A from ON Semiconductor of Phoenix, Arizona, or an IC from the UCD90xxx family from Texas Instruments of Dallas, TX. A power block 1328, or other power supply coupled to a grid, may be coupled with the PMIC 1326 to charge the battery 1324. In some examples, the power block 1328 may be replaced with a wireless power receiver to obtain the power wirelessly, for example, through a loop antenna in the platform 1300. A wireless battery charging circuit, such as an LTC4020 chip from Linear Technologies of Milpitas, California, among others, may be included in the PMIC 1326. The specific charging circuits chosen depend on the size of the battery 1324, and thus, the current required. The charging may be performed using the Airfuel standard promulgated by the Airfuel Alliance, the Qi wireless charging standard promulgated by the Wireless Power Consortium, or the Rezence charging standard promulgated by the Alliance for Wireless Power, among others. Furthermore, the present disclosure may take the form of a computer program product, or data to create the computer program, with the computer program or data embodied in any tangible or non-transitory medium of expression having the computer-usable program code (or data to create the computer program) embodied in the medium. FIG. 14 illustrates example non-transitory computer-readable storage media (NTCRSM) that may be suitable for use to store instructions (or data that creates the instructions) that cause an apparatus (such as any of the devices/components/systems described with regard to FIGS. 1-13), in response to execution of the instructions by the apparatus, to practice selected aspects of the present disclosure. As shown, NTCRSM 1402 may include a number of programming instructions 1404 (or data to create the programming instructions). Programming instructions 1404 may be configured to enable a device (e.g., any of the devices/components/systems described with regard to FIGS. 1-13), in response to execution of the programming instructions, to perform various programming operations associated with operating system functions, one or more applications, and/or aspects of the present disclosure (including various programming operations associated with FIGS. 6-9). In some embodiments, the programming instructions 1404 (or data to create the programming instructions) to be executed may be in a pre-configured form that may require configuration instructions to install or provision the programming instructions 1404 to an apparatus (such as any of the devices/components/systems described with regard to FIGS. 1-13). When installed/provisioned, configured, and executed, the programming instructions 1404 can complete or perform various programming operations associated with operating system functions, one or more applications, and/or aspects of the present disclosure (including various programming operations associated with FIGS. 6-9). In alternate embodiments, programming instructions 1404 (or data to create the instructions) may be disposed on multiple NTCRSM 1402. In alternate embodiments, programming instructions 1404 (or data to create the instructions) may be disposed on computer-readable transitory storage media, such as signals. 
The instructions embodied by a machine-readable medium may further be transmitted or received over a communications network using a transmission medium via a network interface device utilizing any one of a number of transfer protocols (e.g., HTTP). Any combination of one or more computer usable or computer readable medium(s) may be utilized. The computer-usable or computer-readable medium may be, for example but not limited to, one or more electronic, magnetic, optical, electromagnetic, infrared, or semiconductor systems, apparatuses, devices, or propagation media. For instance, the NTCRSM 1402 may be embodied by devices described for the storage circuitry 1308 and/or memory circuitry 1304 described with regard to FIG. 13. More specific examples (a non-exhaustive list) of a computer-readable medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM, Flash memory, etc.), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device and/or optical disks, a transmission media such as those supporting the Internet or an intranet, a magnetic storage device, or any number of other hardware devices. Note that the computer-usable or computer-readable medium could even be paper or another suitable medium upon which the program (or data to create the program) is printed, as the program (or data to create the program) can be electronically captured, via, for instance, optical scanning of the paper or other medium, then compiled, interpreted, or otherwise processed in a suitable manner, if necessary, and then stored in a computer memory (with or without having been staged in one or more intermediate storage media). In the context of this document, a computer-usable or computer-readable medium may be any medium that can contain, store, communicate, propagate, or transport the program (or data to create the program) for use by or in connection with the instruction execution system, apparatus, or device. The computer-usable medium may include a propagated data signal with the computer-usable program code (or data to create the program code) embodied therewith, either in baseband or as part of a carrier wave. The computer usable program code (or data to create the program) may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc. In various embodiments, the program code (or data to create the program code) described herein may be stored in one or more of a compressed format, an encrypted format, a fragmented format, a packaged format, etc. Program code (or data to create the program code) as described herein may require one or more of installation, modification, adaptation, updating, combining, supplementing, configuring, decryption, decompression, unpacking, distribution, reassignment, etc. in order to make them directly readable and/or executable by a computing device and/or other machine. For example, the program code (or data to create the program code) may be stored in multiple parts, which are individually compressed, encrypted, and stored on separate computing devices, wherein the parts, when decrypted, decompressed, and combined, form a set of executable instructions that implement the program code (or the data to create the program code) such as that described herein. 
In another example, the program code (or data to create the program code) may be stored in a state in which they may be read by a computer, but require addition of a library (e.g., a dynamic link library), a software development kit (SDK), an application programming interface (API), etc. in order to execute the instructions on a particular computing device or other device. In another example, the program code (or data to create the program code) may need to be configured (e.g., settings stored, data input, network addresses recorded, etc.) before the program code (or data to create the program code) can be executed/used in whole or in part. In this example, the program code (or data to create the program code) may be unpacked, configured for proper execution, and stored in a first location with the configuration instructions located in a second location distinct from the first location. The configuration instructions can be initiated by an action, trigger, or instruction that is not co-located in storage or execution location with the instructions enabling the disclosed techniques. Accordingly, the disclosed program code (or data to create the program code) are intended to encompass such machine readable instructions and/or program(s) (or data to create such machine readable instructions and/or programs) regardless of the particular format or state of the machine readable instructions and/or program(s) when stored or otherwise at rest or in transit. Computer program code for carrying out operations of the present disclosure (e.g., computational logic 1383, instructions 1382, 1370 discussed previously with regard to FIG. 13) may be written in any combination of one or more programming languages, including an object oriented programming language such as Python, Ruby, Scala, Smalltalk, Java™, C++, C#, or the like; a procedural programming language, such as the “C” programming language, the Go (or “Golang”) programming language, or the like; a scripting language such as JavaScript, Server-Side JavaScript (SSJS), JQuery, PHP, Perl, Python, Ruby on Rails, Accelerated Mobile Pages Script (AMPscript), Mustache Template Language, Handlebars Template Language, Guide Template Language (GTL), PHP, Java and/or Java Server Pages (JSP), Node.js, ASP.NET, and/or the like; a markup language such as Hypertext Markup Language (HTML), Extensible Markup Language (XML), JavaScript Object Notation (JSON), Apex®, Cascading Stylesheets (CSS), JavaServer Pages (JSP), MessagePack™, Apache® Thrift, Abstract Syntax Notation One (ASN.1), Google® Protocol Buffers (protobuf), or the like; or some other suitable programming languages including proprietary programming languages and/or development tools, or any other suitable languages or tools. The computer program code for carrying out operations of the present disclosure may also be written in any combination of the programming languages discussed herein. The program code may execute entirely on the system 1300, partly on the system 1300, as a stand-alone software package, partly on the system 1300 and partly on a remote computer, or entirely on the remote computer or server (e.g., system 1200). In the latter scenario, the remote computer may be connected to the system 1300 through any type of network, including a LAN or WAN, or the connection may be made to an external computer (e.g., through the Internet using an Internet Service Provider). Some non-limiting examples are as follows. 
The following examples pertain to further embodiments, and specifics in the examples may be used anywhere in one or more embodiments discussed previously. Any of the following examples may be combined with any other example or any embodiment discussed herein. Example 1 includes a method for data communication, the method comprising: receiving, by a receiver device, a set of packets from a transmitter device; detecting, by the receiver device, a lost packet from the set of packets; and generating, by the receiver device, a packet based on the detected lost packet. Example 2 includes the method of example 1 and/or some other example(s) herein, wherein the generated packet is a packet loss report (PLR) message, and the method further comprises: transmitting, by the receiver device, the generated packet to the transmitter device; and receiving, by the receiver device, a First Sequence Number (FSN) message based on the PLR, wherein the FSN message indicates a sequence number (SN) of an oldest Service Data Unit (SDU) in a transmission buffer of the transmitter device. Example 3 includes the method of example 2 and/or some other example(s) herein, wherein the FSN message further indicates a connection identifier (ID) and a traffic class ID, the connection ID identifies an anchor connection associated with the FSN message, and the traffic class ID identifies a traffic class of the anchor connection. Example 4 includes the method of any one of examples 2-3 and/or some other example(s) herein, wherein the PLR message indicates an SN of a next expected packet. Example 5 includes the method of example 4 and/or some other example(s) herein, further comprising: generating, by the receiver device, another PLR message when the SN of the next expected packet is not smaller than the SN indicated by the FSN message. Example 6 includes the method of example 1 and/or some other example(s) herein, wherein the generated packet is a packet recovered using a coded packet, and wherein the coded packet is based on at least two consecutive uncoded packets. Example 7 includes the method of example 6 and/or some other example(s) herein, further comprising: receiving, by the receiver device, a coded message from the transmitter device, wherein the coded message includes the coded packet. Example 8 includes the method of example 7 and/or some other example(s) herein, wherein the coded message indicates an SN of a first uncoded packet used to generate the coded packet and a number of consecutive packets used to generate the coded packet (N), and the method comprises: applying, by the receiver device, a network coding algorithm to the coded packet and a plurality of consecutive uncoded packets to recover the lost packet, wherein the plurality of consecutive uncoded packets is a set of consecutive packets starting from a packet with the SN of the first uncoded packet and has a number of packets equal to N. Example 9 includes the method of example 8 and/or some other example(s) herein, wherein the network coding algorithm is one of an exclusive OR (XOR) network coding algorithm, a random linear network coding (RLNC) algorithm, a Caterpillar RLNC (CRLNC) algorithm, and a network coded transmission control protocol (CTCP) algorithm. 
Example 10 includes the method of examples 8-9 and/or some other example(s) herein, wherein the coded message further indicates a coding coefficient and a number of bits of a coding coefficient field (K), and the method comprises: performing, by the receiver device, an XOR on the plurality of consecutive uncoded packets when K is equal to zero. Example 11 includes a method comprising: receiving, by a Multiple Access Management Service (MAMS) receiver, a burst of packets from a MAMS transmitter; generating, by the MAMS receiver, a packet loss report (PLR) message indicating one or more detected lost packets based on the burst of packets; and receiving, by the MAMS receiver, a First Sequence Number (FSN) message based on the PLR, wherein the FSN message includes an FSN, the FSN is a sequence number (SN) of an oldest Service Data Unit (SDU) in a transmission buffer of the MAMS transmitter. Example 12 includes the method of example 11 and/or some other example(s) herein, further comprising: detecting, by the MAMS receiver, the one or more lost packets based on the burst of packets; and transmitting, by the MAMS receiver, the PLR message to the MAMS transmitter. Example 13 includes the method of examples 11-12 and/or some other example(s) herein, wherein the generating comprises generating another PLR message only when an SN of another lost packet is greater than the FSN. Example 14 includes the method of examples 11-13 and/or some other example(s) herein, wherein the FSN message further indicates a connection identifier (CID) and a traffic class identifier (ID), the CID identifies an anchor connection associated with the FSN message, and the traffic class ID identifies a traffic class of the anchor connection. Example 15 includes the method of examples 11-14 and/or some other example(s) herein, wherein the PLR message includes an acknowledgement (ACK) number, wherein the ACK number is a next in-order SN that the MAMS receiver is expecting. Example 16 includes the method of example 15 and/or some other example(s) herein, wherein the PLR message further includes a number of lost bursts, the number of lost bursts including an SN of a first lost packet of the one or more detected lost packets and a number of packets in the burst of packets. Example 17 includes the method of examples 11-16 and/or some other example(s) herein, wherein the MAMS receiver is a Client Multi-Access Data Proxy (C-MADP) or a Network Multi-Access Data Proxy (N-MADP) and the MAMS transmitter is a different one of the C-MADP or the N-MADP. Example 18 includes a method to be performed by a Multiple Access Management Service (MAMS) receiver, the method comprising: receiving, by the MAMS receiver, a first uncoded Multi-Access (MX) Service Data Unit (SDU) from a MAMS transmitter; receiving, by the MAMS receiver, a Coded MX SDU (CMS) message from the MAMS transmitter when a second uncoded MX SDU has not been properly received by the MAMS receiver, the CMS message including a CMS; and applying, by the MAMS receiver, a network coding algorithm to the CMS to recover the second uncoded MX SDU. Example 19 includes the method of example 18 and/or some other example(s) herein, further comprising: detecting the second uncoded MX SDU as being a lost packet; and transmitting a Packet Loss Report (PLR) message to the MAMS transmitter, the PLR message indicating the second uncoded MX SDU. 
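For the XOR case of Examples 8-10 (and, equivalently, Examples 21-22 below), the recovery step can be illustrated with a short Python sketch; the packet contents and helper names are hypothetical, and equal-length packets are assumed for simplicity:

    # A coded packet is the byte-wise XOR of N consecutive uncoded packets,
    # so a single lost packet is recovered by XOR-ing the coded packet with
    # the N-1 packets that were received correctly.
    def xor_bytes(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    def encode(packets):
        # Transmitter side (the K == 0 case of Example 10): XOR N packets.
        coded = packets[0]
        for p in packets[1:]:
            coded = xor_bytes(coded, p)
        return coded

    def recover(coded, received):
        # Receiver side: XOR out the packets that arrived; the remainder
        # is the missing packet.
        missing = coded
        for p in received:
            missing = xor_bytes(missing, p)
        return missing

    p1, p2, p3 = b"aaaa", b"bbbb", b"cccc"  # SNs n, n+1, n+2
    coded = encode([p1, p2, p3])            # carried in the coded message
    assert recover(coded, [p1, p3]) == p2   # p2 was lost in transit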
Example 20 includes the method of examples 18-19 and/or some other example(s) herein, wherein the CMS message further includes a Sequence Number (SN) of a first MX SDU used to generate the CMS, a number of consecutive packets used to generate the CMS, and a coding coefficient of a last uncoded MX SDU used to generate the CMS. Example 21 includes the method of example 20 and/or some other example(s) herein, further comprising: applying, by the MAMS receiver, a network coding algorithm to the coded packet and a set of consecutive uncoded packets to recover the lost packet, wherein the set of consecutive uncoded packets is a set of packets starting from a packet with the SN of the first uncoded packet and has a number of packets equal to the number of consecutive packets used to generate the CMS. Example 22 includes the method of example 21 and/or some other example(s) herein, further comprising: performing, by the MAMS receiver, an XOR operation on the set of consecutive uncoded packets when the number of consecutive packets used to generate the CMS is equal to two and there is no coding coefficient included in the CMS message. Example 23 includes the method of examples 21-22 and/or some other example(s) herein, wherein the CMS message further includes a connection identifier (CID) and a traffic class identifier (ID), the CID identifies an anchor connection associated with the CMS, and the traffic class ID identifies a traffic class of the anchor connection. Example 24 includes the method of examples 21-23 and/or some other example(s) herein, wherein the network coding algorithm is one of an XOR network coding algorithm, a random linear network coding (RLNC) algorithm, a Caterpillar RLNC (CRLNC) algorithm, and a network coded transmission control protocol (CTCP) algorithm. Example 25 includes the method of any one of examples 18-24 and/or some other example(s) herein, wherein the MAMS receiver is a Client Multi-Access Data Proxy (C-MADP) or a Network Multi-Access Data Proxy (N-MADP) and the MAMS transmitter is a different one of the C-MADP or the N-MADP. Example 26 may include an apparatus comprising means to perform one or more elements of a method described in or related to any of examples 1-25, or any other method or process described in the present disclosure. Example 27 may include one or more non-transitory computer-readable media comprising instructions to cause an electronic device, upon execution of the instructions by one or more processors of the electronic device, to perform one or more elements of a method described in or related to any of examples 1-25, or any other method or process described in the present disclosure. Example 28 may include an apparatus comprising logic, modules, or circuitry to perform one or more elements of a method described in or related to any of examples 1-25, or any other method or process described in the present disclosure. Example 29 includes a method, technique, or process as described in or related to any of examples 1-25, or portions or parts thereof, or otherwise described in the present disclosure. Example 30 includes an apparatus comprising: one or more processors and one or more computer-readable media comprising instructions that, when executed by the one or more processors, cause the one or more processors to perform the method, techniques, or process as described in or related to any of examples 1-25, or portions thereof, or otherwise described in the present disclosure. 
The one or more computer-readable media may be transitory or non-transitory computer-readable media. Example 31 includes at least one transitory or non-transitory computer-readable storage medium comprising data, wherein the data is to create, manufacture, or otherwise produce instructions, wherein execution of the instructions is to cause a computing device or computing system to perform the method, techniques, or process as described in or related to any of examples 1-25, or portions thereof, or otherwise described in the present disclosure. Example 32 includes a signal as described in or related to any of examples 1-25, or portions or parts thereof, or otherwise described in the present disclosure. Example 33 includes a signal in a wireless network as shown and described in the present disclosure, or otherwise described in the present disclosure. Example 34 includes a method of communicating in a wireless network as shown and described in the present disclosure. Example 35 includes a system for providing wireless communication as shown and described in the present disclosure. Example 36 includes a device for providing wireless communication as shown and described in the present disclosure. Example 37 includes a packet, frame, segment, protocol data unit (PDU), or message as described in or related to any of examples 1-25, or portions or parts thereof, or otherwise described in the present disclosure. The present disclosure has been described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and/or computer program products according to embodiments of the present disclosure. In the drawings, some structural or method features may be shown in specific arrangements and/or orderings. However, it should be appreciated that such specific arrangements and/or orderings may not be required. Rather, in some embodiments, such features may be arranged in a different manner and/or order than shown in the illustrative figures. Additionally, the inclusion of a structural or method feature in a particular figure is not meant to imply that such feature is required in all embodiments and, in some embodiments, may not be included or may be combined with other features. The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. As used herein, the singular forms “a,” “an” and “the” are intended to include plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. For the purposes of the present disclosure, the phrase “A and/or B” means (A), (B), or (A and B). For the purposes of the present disclosure, the phrase “A, B, and/or C” means (A), (B), (C), (A and B), (A and C), (B and C), or (A, B and C). The description may use the phrases “in an embodiment,” or “in some embodiments,” which may each refer to one or more of the same or different embodiments. Furthermore, the terms “comprising,” “including,” “having,” and the like, as used with respect to embodiments of the present disclosure, are synonymous. The terms “coupled,” “communicatively coupled,” along with derivatives thereof, are used herein. 
The term “coupled” may mean two or more elements are in direct physical or electrical contact with one another, may mean that two or more elements indirectly contact each other but still cooperate or interact with each other, and/or may mean that one or more other elements are coupled or connected between the elements that are said to be coupled with each other. The term “directly coupled” may mean that two or more elements are in direct contact with one another. The term “communicatively coupled” may mean that two or more elements may be in contact with one another by a means of communication including through a wire or other interconnect connection, through a wireless communication channel or link, and/or the like. As used herein, the term “circuitry” refers to a circuit or system of multiple circuits configured to perform a particular function in an electronic device. The circuit or system of circuits may be part of, or include, one or more hardware components, such as a logic circuit, a processor (shared, dedicated, or group) and/or memory (shared, dedicated, or group), an Application Specific Integrated Circuit (ASIC), a field-programmable gate array (FPGA), programmable logic device (PLD), complex PLD (CPLD), high-capacity PLD (HCPLD), System-on-Chip (SoC), System-in-Package (SiP), Multi-Chip Package (MCP), digital signal processor (DSP), etc., that are configured to provide the described functionality. In addition, the term “circuitry” may also refer to a combination of one or more hardware elements with the program code used to carry out the functionality of that program code. Some types of circuitry may execute one or more software or firmware programs to provide at least some of the described functionality. Such a combination of hardware elements and program code may be referred to as a particular type of circuitry. As used herein, the term “processor circuitry” refers to, is part of, or includes circuitry capable of sequentially and automatically carrying out a sequence of arithmetic or logical operations, or recording, storing, and/or transferring digital data, and/or any other device capable of executing or otherwise operating computer-executable instructions, such as program code, software modules, and/or functional processes. As used herein, the term “memory” and/or “memory circuitry” may represent one or more hardware devices for storing data, including random access memory (RAM), magnetic RAM, core memory, read only memory (ROM), magnetic disk storage mediums, optical storage mediums, flash memory devices, or other machine readable mediums for storing data. The term “computer-readable medium” may include, but is not limited to, memory, portable or fixed storage devices, optical storage devices, and various other mediums capable of storing, containing, or carrying instructions or data. As used herein, the term “interface circuitry” may refer to, is part of, or includes circuitry providing for the exchange of information between two or more components or devices. The term “interface circuitry” may refer to one or more hardware interfaces, for example, buses, input/output (I/O) interfaces, peripheral component interfaces, network interface cards, and/or the like. As used herein, the term “module” is one or more independent electronic circuits packaged onto a circuit board, SoC, SiP, MCP, etc., configured to provide a basic function within a computer system. 
The term “module” may refer to, be part of, or include an FPGA, ASIC, a processor (shared, dedicated, or group) and/or memory (shared, dedicated, or group) that execute one or more software or firmware programs, a combinational logic circuit, and/or other suitable components that provide the described functionality. As used herein, the term “element” refers to a unit that is indivisible at a given level of abstraction and has a clearly defined boundary, wherein an element may be any type of entity including, for example, one or more devices, systems, controllers, network elements, modules, etc., or combinations thereof. As used herein, the term “device” refers to a physical entity embedded inside, or attached to, another physical entity in its vicinity, with capabilities to convey digital information from or to that physical entity. As used herein, the term “entity” refers to a distinct component of an architecture or device, or information transferred as a payload. As used herein, the term “controller” refers to an element or entity that has the capability to affect a physical entity, such as by changing its state or causing the physical entity to move. The term “network element” as used herein refers to physical or virtualized equipment and/or infrastructure used to provide wired or wireless communication network services. The term “network element” may be considered synonymous to and/or referred to as a networked computer, networking hardware, network equipment, network node, router, switch, hub, bridge, radio network controller, RAN device, RAN node, gateway, server, virtualized VNF, NFVI, and/or the like. As used herein, the term “computer system” refers to any type of interconnected electronic devices, computer devices, or components thereof. Additionally, the term “computer system” and/or “system” refers to various components of a computer that are communicatively coupled with one another, or otherwise organized to accomplish one or more functions. Furthermore, the term “computer system” and/or “system” refers to multiple computer devices and/or multiple computing systems that are communicatively coupled with one another and configured to share computing and/or networking resources. As used herein, the term “architecture” refers to a fundamental organization of a system embodied in its components, their relationships to one another, and to an environment, as well as to the principles guiding its design and evolution. As used herein, the term “appliance,” “computer appliance,” or the like, refers to a discrete hardware device with integrated program code (e.g., software or firmware) that is specifically or specially designed to provide a specific computing resource. A “virtual appliance” is a virtual machine image to be implemented by a hypervisor-equipped device that virtualizes or emulates a computer appliance or is otherwise dedicated to provide a specific computing resource. As used herein, the term “user equipment” or “UE” refers to a device with radio communication capabilities and may describe a remote user of network resources in a communications network. The term “user equipment” or “UE” may be considered synonymous to, and may be referred to as, client, mobile, mobile device, mobile terminal, user terminal, mobile unit, mobile station, mobile user, subscriber, user, remote station, access agent, user agent, receiver, radio equipment, reconfigurable radio equipment, reconfigurable mobile device, etc. 
Furthermore, the term “user equipment” or “UE” may include any type of wireless/wired device or any computing device including a wireless communications interface. As used herein, the term “channel” may refer to any transmission medium, either tangible or intangible, which is used to communicate data or a data stream. The term “channel” may be synonymous with and/or equivalent to “communications channel,” “data communications channel,” “transmission channel,” “data transmission channel,” “access channel,” “data access channel,” “link,” “data link,” “carrier,” “radiofrequency carrier,” and/or any other like term denoting a pathway or medium through which data is communicated. Additionally, the term “link” may refer to a connection between two devices for the purpose of transmitting and receiving information. As used herein, the term “communication protocol” (either wired or wireless) refers to a set of standardized rules or instructions implemented by a communication device and/or system to communicate with other devices and/or systems, including instructions for packetizing/depacketizing data, modulating/demodulating signals, implementation of protocols stacks, and/or the like. Examples of wireless communications protocols that may be used in various embodiments include a Global System for Mobile Communications (GSM) radio communication technology, a General Packet Radio Service (GPRS) radio communication technology, an Enhanced Data Rates for GSM Evolution (EDGE) radio communication technology, and/or a Third Generation Partnership Project (3GPP) radio communication technology including, for example, 3GPP 5G or NR, Universal Mobile Telecommunications System (UMTS), Freedom of Multimedia Access (FOMA), Long Term Evolution (LTE), LTE-Advanced (LTE Advanced), LTE Extra, LTE-A Pro, cdmaOne (2G), Code Division Multiple Access 2000 (CDMA 2000), Cellular Digital Packet Data (CDPD), Mobitex, Circuit Switched Data (CSD), High-Speed CSD (HSCSD), Wideband Code Division Multiple Access (W-CDMA), High Speed Packet Access (HSPA), HSPA Plus (HSPA+), Time Division-Code Division Multiple Access (TD-CDMA), Time Division-Synchronous Code Division Multiple Access (TD-SCDMA), LTE LAA, MuLTEfire, UMTS Terrestrial Radio Access (UTRA), Evolved UTRA (E-UTRA), Evolution-Data Optimized or Evolution-Data Only (EV-DO), Advanced Mobile Phone System (AMPS), Digital AMPS (D-AMPS), Total Access Communication System/Extended Total Access Communication System (TACS/ETACS), Push-to-talk (PTT), Mobile Telephone System (MTS), Improved Mobile Telephone System (IMTS), Advanced Mobile Telephone System (AMTS), DataTAC, Integrated Digital Enhanced Network (iDEN), Personal Digital Cellular (PDC), Personal Handy-phone System (PHS), Wideband Integrated Digital Enhanced Network (WiDEN), iBurst, Unlicensed Mobile Access (UMA, also referred to as 3GPP Generic Access Network, or GAN standard), Bluetooth®, Bluetooth Low Energy (BLE), IEEE 802.15.4 based protocols (e.g., IPv6 over Low power Wireless Personal Area Networks (6LoWPAN), WirelessHART, MiWi, Thread, 802.11a, etc.), 
WiFi-direct, ANT/ANT+, ZigBee, Z-Wave, 3GPP device-to-device (D2D) or Proximity Services (ProSe), Universal Plug and Play (UPnP), Low-Power Wide-Area-Network (LPWAN), Long Range Wide Area Network (LoRa) or LoRaWAN™ developed by Semtech and the LoRa Alliance, Sigfox, Wireless Gigabit Alliance (WiGig) standard, mmWave standards in general (e.g., wireless systems operating at 10-300 GHz and above such as WiGig, IEEE 802.11ad, IEEE 802.11ay, etc.), Vehicle-to-Everything (V2X) communication technologies, 3GPP cellular V2X, and Dedicated Short Range Communications (DSRC) communication systems such as Intelligent-Transport-Systems (ITS) including the European ITS-G5, ITS-G5B, ITS-G5C, etc. In addition to the standards listed above, any number of satellite uplink technologies may be used for purposes of the present disclosure including, for example, radios compliant with standards issued by the International Telecommunication Union (ITU), or the European Telecommunications Standards Institute (ETSI), among others. The examples provided herein are thus understood as being applicable to various other communication technologies, both existing and not yet formulated. As used herein, the terms “instantiate,” “instantiation,” and the like refer to the creation of an instance, and an “instance” refers to a concrete occurrence of an object, which may occur, for example, during execution of program code. The term “information element” refers to a structural element containing one or more fields. The term “field” refers to individual contents of an information element, or a data element that contains content. As used herein, a “database object”, “data structure”, or the like may refer to any representation of information that is in the form of an object, attribute-value pair (AVP), key-value pair (KVP), tuple, etc., and may include variables, data structures, functions, methods, classes, database records, database fields, database entities, associations between data and/or database entities (also referred to as a “relation”), blocks and links between blocks in block chain implementations, and/or the like. As used herein, the term “resource” refers to a physical or virtual device, a physical or virtual component within a computing environment, and/or a physical or virtual component within a particular device, such as computer devices, mechanical devices, memory space, processor/CPU time, processor/CPU usage, processor and accelerator loads, hardware time or usage, electrical power, input/output operations, ports or network sockets, channel/link allocation, throughput, memory usage, storage, network, database and applications, workload units, and/or the like. The term “network resource” may refer to a resource hosted by a remote entity (e.g., a cloud computing service) and accessible over a network. The term “on-device resource” may refer to a resource hosted inside a device and enabling access to the device, and thus, to the related physical entity. System resources may be considered as a set of coherent functions, network data objects or services, accessible through a server where such system resources reside on a single host or multiple hosts and are clearly identifiable. 
Additionally, a “virtualized resource” may refer to compute, storage, and/or network resources provided by virtualization infrastructure to an application, such as a multi-access edge application. The foregoing description provides illustration and description of various example embodiments, but is not intended to be exhaustive or to limit the scope of embodiments to the precise forms disclosed. Modifications and variations are possible in light of the above teachings or may be acquired from practice of various embodiments. Where specific details are set forth in order to describe example embodiments of the disclosure, it should be apparent to one skilled in the art that the disclosure can be practiced without, or with variation of, these specific details. It should be understood, however, that there is no intent to limit the concepts of the present disclosure to the particular forms disclosed, but on the contrary, the intention is to cover all modifications, equivalents, and alternatives consistent with the present disclosure and the appended claims.
DETAILED DESCRIPTION In the discussion that follows, several ways of multiplexing control information (including, for example, uplink control information, or UCI) onto a shared channel (such as the PUSCH) transmitted as an SC-FDMA signal with shortened transmission time intervals (TTIs) are described. In particular, different mapping solutions are provided by taking into account different short TTI lengths and different demodulation reference signal (DMRS) configurations for the shared channel. The described solutions enable the transmission of uplink control signals on shared channels with shortened TTIs, to obtain improved latency while taking account of the need for reliable reception of certain control information, such as HARQ ACK/NACK data. As discussed in the background section above, one way to reduce latency in a wireless system is to reduce the TTI length. However, when the length of the TTI is reduced, such as in an uplink transmission, transmitting DMRS with one or more SC-FDMA symbols, for each short TTI, leads to an increased overhead and a corresponding decrease in data rates. To reduce the overhead, one possible approach is to multiplex the reference signals from several transmitters into the same SC-FDMA symbol, while the user data from different transmitters are transmitted in separate SC-FDMA symbols. According to another possible approach, which may be applied in the downlink, for example, short Physical Downlink Shared Channel (PDSCH), or sPDSCH, transmissions do not necessarily contain DMRS if recent DMRS transmissions to the same UE have occurred. According to this approach, the presence of DMRS in a downlink short TTI is either signaled in the short PDCCH, or the UE tries to blindly decode the transmission under the two hypotheses that DMRS is or is not present. This dynamic DMRS insertion approach can also be applied to PUSCH for uplink transmissions within short TTIs. Note that the term “TTI,” as used here, refers to the transmission, as mapped to the SC-FDMA framework; thus, a “short TTI” refers to a transmission that is shorter (with respect to the number of SC-FDMA symbols) than an LTE transmission of standard length, which is a 1-millisecond subframe with 12 or 14 symbols, depending on the cyclic prefix length. In the LTE context, it is desirable to provide for uplink short TTI patterns for PUSCH, with several different short TTI lengths. One approach to handling the issue of where to place the reference symbols and data symbols is to fix the positions of reference symbols and data symbols, for each of several predefined short TTIs for a PUSCH, for each subframe. The techniques described herein provide a more flexible approach. Throughout the discussion that follows, the term short PUSCH (sPUSCH) is used to denote an uplink physical shared channel with short TTIs/transmissions. The control information described herein will also be referred to as uplink control information (UCI). It will be appreciated, however, that the disclosed techniques are not limited to channels known by this name, and are not necessarily limited to uplink transmissions and uplink control information. Detailed below and in the accompanying figures are several methods of time-multiplexing control information, e.g., UCI, and data on sPUSCH. Several of the multiplexing methods are designed based on the following rules. A mapping rule for HARQ ACK/NACK: HARQ ACK/NACK is placed in time-domain samples (before DFT-spreading of the SC-FDMA symbols) as close to the DMRS as possible, in order to obtain good channel estimation.
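Because the placement rules here operate on the samples that enter the DFT-spreading stage, the following minimal sketch of how one DFT-spread SC-FDMA symbol is formed may help fix the terminology. It is illustrative only: the FFT size, the localized subcarrier mapping, and the QPSK example are assumptions made for the sketch, not parameters taken from this disclosure.

```python
import numpy as np

def sc_fdma_symbol(pre_dft, n_fft=512, first_subcarrier=0):
    """Form one DFT-spread SC-FDMA symbol from M time-ordered pre-DFT samples.

    pre_dft: complex array of M modulation symbols, ordered k = 0..M-1
             (the "rows" referred to in the text below).
    """
    m = len(pre_dft)
    spread = np.fft.fft(pre_dft)                  # size-M DFT (DFT-spreading)
    grid = np.zeros(n_fft, dtype=complex)         # frequency-domain grid
    grid[first_subcarrier:first_subcarrier + m] = spread  # localized mapping
    return np.fft.ifft(grid)                      # IFFT back to the time domain

# Example: 12 QPSK symbols carried on one SC-FDMA symbol.
bits = np.random.randint(0, 2, (2, 12))
qpsk = ((1 - 2 * bits[0]) + 1j * (1 - 2 * bits[1])) / np.sqrt(2)
tx_symbol = sc_fdma_symbol(qpsk)
```

The point of the sketch is simply that the per-k ordering of pre_dft is what the mapping rules manipulate; after the DFT, an individual control symbol is no longer localized on any particular subcarrier.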
Note that elsewhere herein, these time-domain samples, which are the time-ordered samples provided to the input of the DFT block in FIG. 1, may be referred to as “pre-DFT symbols.” These time-domain symbols can be as close to the DMRS as possible in two different ways, depending on whether they are mapped to an SC-FDMA symbol that follows the symbol carrying DMRS or to an SC-FDMA symbol that precedes the symbol carrying DMRS. If these time-domain symbols are mapped to an SC-FDMA symbol that follows the closest symbol carrying DMRS, they are closest to that DMRS if they appear first, or as close as possible to the beginning, in the time-ordered series of M symbols that are supplied to a size-M DFT for DFT-spreading, with the output of the DFT then being mapped to the SC-FDMA symbol with an Inverse Fast-Fourier Transform (IFFT). If these time-domain symbols are instead mapped to an SC-FDMA symbol that precedes the closest symbol carrying DMRS, they are closest to that DMRS if they appear last, or as close as possible to the end, in the time-ordered series of M symbols that are supplied to a size-M DFT for DFT-spreading, with the output of the DFT then being mapped to the SC-FDMA symbol with an IFFT. A mapping rule for RI/CRI is that a fixed starting position of RI is established, in such a way that the mapping of RI is independent of the HARQ-A/N mapping. According to a first option, “Option 1,” this is done by first defining the maximum number of complex-valued symbols that may be used for HARQ ACK/NACK, where not necessarily all of these complex-valued symbols are used for HARQ ACK/NACK in every TTI. The mapping of RI starts after this maximum number of complex-valued symbol positions. With this approach, the RI is placed near the DMRS, to obtain good channel estimation. According to a second option, “Option 2,” the RI mapping starts from the other end of the same SC-FDMA symbol used for HARQ ACK/NACK. According to a third option, “Option 3,” if there is more than one SC-FDMA symbol carrying user data on the sPUSCH, RI mapping starts from a different SC-FDMA symbol than the one carrying HARQ ACK/NACK, and the RI is placed as close to the DMRS as possible within that SC-FDMA symbol. If DMRS is transmitted before or at the beginning of the sPUSCH, Option 1 and Option 2 mean that HARQ feedback and RI can be time-multiplexed at the beginning of the sPUSCH, close to the DMRS, which gives an optimal mapping from the latency reduction perspective. Option 3 allows for the best flexibility of UCI mapping; that is, more HARQ feedback bits and RI bits can be multiplexed, as compared to Option 1 and Option 2. A mapping rule for CQI/PMI is that if there is only one SC-FDMA symbol carrying user data on the sPUSCH, the CQI/PMI is mapped to the same SC-FDMA symbol carrying the HARQ ACK/NACK, to pre-DFT symbols that are mapped to that same SC-FDMA symbol and unused for RI/CRI. If there is more than one SC-FDMA symbol carrying user data on the sPUSCH, on the other hand, there are two options. According to a first option, “Option 1,” the CQI/PMI data, which may be referred to simply as “CQI data,” may be mapped to all SC-FDMA symbols within the sPUSCH on one “row,” i.e., to the same position in the pre-DFT samples for the respective SC-FDMA symbols, before continuing on the next row. Referring to FIG. 2 and its description above, it should be appreciated that a “row,” as that term is used here, refers to a given k in the illustrated pattern, where the row index k = 0, 1, . . . , M in FIG. 2 is the symbol index before the transform precoding, and where M is the number of subcarriers allocated to the PUSCH. According to a second option for CQI data mapping, “Option 2,” the CQI data are first mapped to pre-DFT symbols corresponding to the first SC-FDMA symbol and left unused by RI/CRI, before using pre-DFT symbols mapped to the next SC-FDMA symbol, if needed. Option 1 for the CQI data mapping provides the best flexibility for multiplexing UCI. It also provides time diversity for CQI/PMI. For Option 2, since UCI is time-multiplexed at the beginning of the sPUSCH, this mapping rule is latency-optimized if DMRS is transmitted before or at the beginning of the sPUSCH. To further simplify the UCI mapping design, a mirroring method can be considered, where the UCI mapping patterns for the cases of DMRS transmitted before and after data are mirrored with each other against the DMRS symbol. In other words, any of the above mapping rules may be used to first design the UCI mapping for the cases where the DMRS is transmitted before the data, and then the UCI mapping for the cases where the DMRS is transmitted after the data is obtained by mirroring the first obtained mapping. The reverse procedure may also be used, i.e., the above mapping rules are used to first design the UCI mapping for the cases where the DMRS is transmitted after the data, and then the UCI mapping for the cases where the DMRS is transmitted before the data is obtained by mirroring the first obtained mapping. As a preliminary matter, example calculations and techniques for determining the size of the control region on sPUSCH are presented. First, how to determine the number of coded modulation symbols per layer Q′ for HARQ ACK/NACK and RI/CRI is considered. For the case when only one transport block is transmitted in the sPUSCH conveying the HARQ-ACK bits, RI bits, or CRI bits:

$$Q' = \min\left( \left\lceil \frac{O \cdot M_{\mathrm{sc}}^{\mathrm{sPUSCH\text{-}initial}} \cdot N_{\mathrm{symb}}^{\mathrm{sPUSCH\text{-}initial}} \cdot \beta_{\mathrm{offset}}^{\mathrm{sPUSCH}}}{\sum_{r=0}^{C-1} K_r} \right\rceil,\ Q'_{\max} \right) \approx \min\left( \left\lceil \frac{O}{Q_m \cdot R} \cdot \beta_{\mathrm{offset}}^{\mathrm{sPUSCH}} \right\rceil,\ Q'_{\max} \right), \tag{1}$$

where O is the number of HARQ-ACK bits, rank indicator bits, or CRI bits, and $Q_m$ and $R$ are the modulation order and coding rate of the transport block. $M_{\mathrm{sc}}^{\mathrm{sPUSCH\text{-}initial}}$ is the scheduled bandwidth for the initial sPUSCH transmission for the transport block, expressed as a number of subcarriers, and $N_{\mathrm{symb}}^{\mathrm{sPUSCH\text{-}initial}}$ is the number of SC-FDMA symbols per short TTI for the initial sPUSCH transmission for the same transport block, excluding the DMRS symbols and the SRS symbols if SRS is transmitted in the initial sPUSCH. C is the number of code blocks for the initial PUSCH transmission for the same transport block (TB), $K_r$ is the number of bits in code block number r, and $\beta_{\mathrm{offset}}^{\mathrm{sPUSCH}}$ is the MCS offset between the data and the corresponding control information, configured by higher-layer signaling to control the additional coding gain (i.e., lower coding rate) for the UCI over the data. $Q'_{\max}$ is the maximum number of coded modulation symbols (i.e., the maximum amount of resources) for the corresponding control information. The value of $Q'_{\max}$ may differ for different UCI mapping rules.
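As a quick illustration of equation (1), a small Python sketch follows; the numbers in the example call are made up for illustration and are not drawn from any specification.

```python
import math

def q_prime_harq_one_tb(O, M_sc, N_symb, beta_offset, K_r, q_max):
    """Coded modulation symbols per layer Q' for HARQ-ACK/RI/CRI,
    single transport block, per equation (1).

    O           number of HARQ-ACK, RI, or CRI bits
    M_sc        scheduled bandwidth of the initial sPUSCH, in subcarriers
    N_symb      SC-FDMA symbols per short TTI, excluding DMRS/SRS symbols
    beta_offset MCS offset between data and control (higher-layer configured)
    K_r         list of code block sizes K_r for the initial transmission
    q_max       cap Q'_max for the corresponding control information
    """
    exact = math.ceil(O * M_sc * N_symb * beta_offset / sum(K_r))
    return min(exact, q_max)

# Hypothetical example: 2 ACK bits, one 12-subcarrier data symbol.
print(q_prime_harq_one_tb(O=2, M_sc=12, N_symb=1, beta_offset=8.0,
                          K_r=[144], q_max=12))   # -> 2
```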
For the case when two transport blocks are transmitted in the PUSCH conveying the HARQ-ACK bits, rank indicator bits, or CRI bits:

$$Q' = \max\left[ \min\left( Q'_{\mathrm{temp}},\ Q'_{\max} \right),\ Q'_{\min} \right], \tag{2}$$

with

$$Q'_{\mathrm{temp}} = \left\lceil \frac{O \cdot M_{\mathrm{sc}}^{\mathrm{sPUSCH\text{-}initial}(1)} \cdot N_{\mathrm{symb}}^{\mathrm{sPUSCH\text{-}initial}(1)} \cdot M_{\mathrm{sc}}^{\mathrm{sPUSCH\text{-}initial}(2)} \cdot N_{\mathrm{symb}}^{\mathrm{sPUSCH\text{-}initial}(2)} \cdot \beta_{\mathrm{offset}}^{\mathrm{sPUSCH}}}{\sum_{r=0}^{C^{(1)}-1} K_r^{(1)} \cdot M_{\mathrm{sc}}^{\mathrm{sPUSCH\text{-}initial}(2)} \cdot N_{\mathrm{symb}}^{\mathrm{sPUSCH\text{-}initial}(2)} + \sum_{r=0}^{C^{(2)}-1} K_r^{(2)} \cdot M_{\mathrm{sc}}^{\mathrm{sPUSCH\text{-}initial}(1)} \cdot N_{\mathrm{symb}}^{\mathrm{sPUSCH\text{-}initial}(1)}} \right\rceil \approx \left\lceil \frac{O}{Q_m^{(1)} \cdot R^{(1)}} \cdot \beta_{\mathrm{offset}}^{\mathrm{sPUSCH}} + \frac{O}{Q_m^{(2)} \cdot R^{(2)}} \cdot \beta_{\mathrm{offset}}^{\mathrm{sPUSCH}} \right\rceil,$$

where O is the number of HARQ-ACK bits, rank indicator bits, or CRI bits, and $Q_m^{(x)}$ and $R^{(x)}$, x = {1,2}, are the modulation order and coding rate of the first and second transport block, respectively. $M_{\mathrm{sc}}^{\mathrm{sPUSCH\text{-}initial}(x)}$, x = {1,2}, are the scheduled bandwidths for sPUSCH transmission in the initial short TTI for the first and second transport block, respectively, expressed as a number of subcarriers, and $N_{\mathrm{symb}}^{\mathrm{sPUSCH\text{-}initial}(x)}$, x = {1,2}, are the numbers of SC-FDMA symbols per short TTI for the initial sPUSCH transmission for the first and second transport block, respectively, excluding the DMRS symbols and the SRS symbols if SRS is transmitted in the initial sPUSCH. $C^{(x)}$, x = {1,2}, are the numbers of code blocks for the initial PUSCH transmission for the first and second transport block, respectively, and $K_r^{(x)}$, x = {1,2}, are the numbers of bits in code block number r for the first and second transport block, respectively. $Q'_{\min} = O$ if $O \le 2$; $Q'_{\min} = \lceil 2O / Q'_m \rceil$ if $3 \le O \le 11$, with $Q'_m = \min(Q_m^{(1)}, Q_m^{(2)})$, where $Q_m^{(x)}$, x = {1,2}, is the modulation order of transport block x; and $Q'_{\min} = \lceil 2O_1 / Q'_m \rceil + \lceil 2O_2 / Q'_m \rceil$ if $O > 11$, with $O_1 = \lceil O/2 \rceil$ and $O_2 = O - \lceil O/2 \rceil$. $\beta_{\mathrm{offset}}^{\mathrm{sPUSCH}}$ is the MCS offset between the data and the corresponding control information, configured by higher-layer signaling to control the additional coding gain (i.e., lower coding rate) for the UCI over the data, and $Q'_{\max}$ is the maximum number of coded modulation symbols (i.e., the maximum amount of resources) for the corresponding control information. The value of $Q'_{\max}$ may differ for different UCI mapping rules. To determine the number of coded modulation symbols per layer Q′ for CQI/PMI:

$$Q' = \min\left( \left\lceil \frac{(O+L) \cdot M_{\mathrm{sc}}^{\mathrm{sPUSCH\text{-}initial}(x)} \cdot N_{\mathrm{symb}}^{\mathrm{sPUSCH\text{-}initial}(x)} \cdot \beta_{\mathrm{offset}}^{\mathrm{sPUSCH}}}{\sum_{r=0}^{C^{(x)}-1} K_r^{(x)}} \right\rceil,\ M_{\mathrm{sc}}^{\mathrm{sPUSCH}} \cdot N_{\mathrm{symb}}^{\mathrm{sPUSCH}} - \frac{Q_{\mathrm{RI}}(x)}{Q_m^{(x)}} \right) \approx \min\left( \left\lceil \frac{O+L}{Q_m^{(x)} \cdot R^{(x)}} \cdot \beta_{\mathrm{offset}}^{\mathrm{sPUSCH}} \right\rceil,\ M_{\mathrm{sc}}^{\mathrm{sPUSCH}} \cdot N_{\mathrm{symb}}^{\mathrm{sPUSCH}} - Q'_{\mathrm{RI}}(x) \right), \tag{3}$$

where O is the number of CQI/PMI bits, and L is the number of CRC bits, given by

$$L = \begin{cases} 0, & O \le 11 \\ 8, & \text{otherwise,} \end{cases}$$

and $M_{\mathrm{sc}}^{\mathrm{sPUSCH}}$ is the scheduled bandwidth for the current sPUSCH transmission for the transport block, expressed as a number of subcarriers. $N_{\mathrm{symb}}^{\mathrm{sPUSCH}}$ is the number of SC-FDMA symbols for the current sPUSCH transmission, excluding the DMRS symbols and the SRS symbols if SRS is transmitted in the current sPUSCH. $Q_{\mathrm{RI}}(x)$ and $Q'_{\mathrm{RI}}(x)$ are the number of coded bits of RI and the number of coded modulation symbols of RI, respectively, multiplexed with the transport block with the highest $I_{\mathrm{MCS}}$ value, and $Q_m^{(x)}$ and $R^{(x)}$ are the modulation order and coding rate of the transport block with the highest $I_{\mathrm{MCS}}$ value indicated by the initial UL grant. $M_{\mathrm{sc}}^{\mathrm{sPUSCH\text{-}initial}(x)}$, $N_{\mathrm{symb}}^{\mathrm{sPUSCH\text{-}initial}(x)}$, $C^{(x)}$, and $K_r^{(x)}$ are parameters related to the same transport block. Equation (3), above, applies for all the UCI mapping rules presented above. For equations (1) and (2), the value of $Q'_{\max}$ depends on the mapping rules used for HARQ ACK/NACK and RI.
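Before detailing those $Q'_{\max}$ rules, equation (3) can be illustrated with a short sketch; the example numbers are made up, and the integer division used for the $Q_{\mathrm{RI}}(x)/Q_m^{(x)}$ term is an implementation assumption.

```python
import math

def q_prime_cqi(O, M_sc_init, N_symb_init, beta_offset, K_r,
                M_sc, N_symb, Q_ri_bits, Q_m):
    """Coded modulation symbols per layer Q' for CQI/PMI, per equation (3).

    The cap is the number of scheduled pre-DFT positions of the current
    sPUSCH minus those already consumed by the coded RI bits.
    """
    L = 0 if O <= 11 else 8                         # CRC bits added to CQI/PMI
    exact = math.ceil((O + L) * M_sc_init * N_symb_init * beta_offset
                      / sum(K_r))
    cap = M_sc * N_symb - Q_ri_bits // Q_m          # Q_RI(x) / Q_m(x)
    return min(exact, cap)

# Hypothetical example: 20 CQI bits, 12 subcarriers, one data symbol.
print(q_prime_cqi(O=20, M_sc_init=12, N_symb_init=1, beta_offset=2.0,
                  K_r=[144], M_sc=12, N_symb=1, Q_ri_bits=4, Q_m=2))  # -> 5
```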
If Option 1 of the RI/CRI mapping rules is adopted, that is, if a maximum number of coded modulation symbols for HARQ ACK/NACK, $Q'_{\mathrm{ACK,max}}$, is defined and the mapping of RI starts after this maximum number of coded modulation symbol positions, the value of $Q'_{\max}$ is determined as follows: for HARQ-ACK, $Q'_{\max} = \alpha \cdot Q'_{\mathrm{ACK,max}}$; for RI/CRI, $Q'_{\max} = \alpha \cdot (M_{\mathrm{sc}}^{\mathrm{sPUSCH}} - Q'_{\mathrm{ACK,max}})$. If Option 2 or Option 3 of the RI/CRI mapping rules is adopted, the value of $Q'_{\max}$ is determined as follows: for HARQ-ACK, $Q'_{\max} = \alpha \cdot M_{\mathrm{sc}}^{\mathrm{sPUSCH}}$; for RI/CRI, $Q'_{\max} = \alpha \cdot M_{\mathrm{sc}}^{\mathrm{sPUSCH}}$. In either case, α is the number of SC-FDMA symbols used for HARQ-ACK and RI mapping. For all the examples discussed below for 2-, 3-, and 4-symbol sPUSCH, as well as the examples shown in FIG. 27, FIG. 28, and on the right-hand side of FIG. 29, α = 1. For the examples shown in FIG. 26 and on the left-hand side and middle of FIG. 29, for 7-symbol sPUSCH, α = 2. In the following, UCI mapping solutions for different short TTI lengths are presented, considering different DMRS configurations for a sPUSCH, according to various embodiments. Many of these mappings conform to the rules presented above. Other variants, which may not conform to all of these rules, are also discussed and/or illustrated. Note that throughout this disclosure, except where legacy LTE mappings are illustrated, i.e., in FIGS. 2 and 26, it is assumed that the order of the modulated data and DMRS symbol mapping is from k=0 (bottom row of the figures) to a maximum k (top row of the figures). If the order of modulated data and DMRS symbol mapping is from top to bottom, which is the case for current LTE mappings, the UCI mapping for the cases where DMRS is transmitted before the data, as shown in the figures discussed below, should be used for the cases where DMRS is transmitted after the data instead, and vice versa. UCI Mapping on 2-Symbol sPUSCH In this section, some examples of UCI mapping on 2-symbol sPUSCH are listed, considering different DMRS configurations. FIG. 3 illustrates two examples of multiplexing UCI, DMRS and data on 2-symbol sPUSCH, where the first SC-FDMA symbol of the sPUSCH is used for transmitting DMRS, and the second symbol is for data transmission. On the left-hand side of FIG. 3, the maximum number of complex-valued symbols for HARQ ACK/NACK is predefined, in this example as 4. In the particular example illustrated, only two are actually used, so the remaining two are used for UL-SCH data. The mapping of RI starts from the 5th complex-valued symbol from the bottom (symbols 5-7 from the bottom). The mapping of CQI/PMI starts from the top of the second SC-FDMA symbol (top 4 symbols). In the alternative approach shown on the right-hand side of FIG. 3, the mapping of RI starts from the top of the second symbol, and the mapping of CQI/PMI starts after the RI. The benefit of this alternative is that there is no need to define a maximum number of complex-valued symbols for HARQ ACK/NACK, which gives more freedom for HARQ ACK/NACK mapping. The drawback is that the RI is not placed as close to the DMRS as in the mapping shown on the left-hand side of FIG. 3.
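To make the left-hand FIG. 3 pattern concrete, here is a toy reconstruction of it; the 12-row size, the label strings, and the non-overlap assumptions are all illustrative choices for the sketch, not details taken from the figure itself.

```python
def fig3_left_pattern(m=12, n_ack=2, ack_max=4, n_ri=3, n_cqi=4):
    """Toy version of the left-hand FIG. 3 mapping: DMRS in symbol 0, data
    in symbol 1; HARQ-ACK from k=0 up (at most ack_max positions reserved),
    RI starting at k=ack_max, CQI/PMI from the top down, and UL-SCH data
    everywhere else. Assumes the regions do not overlap (n_ack <= ack_max,
    ack_max + n_ri + n_cqi <= m)."""
    col = ['data'] * m
    col[:n_ack] = ['ACK'] * n_ack                  # nearest the DMRS
    col[ack_max:ack_max + n_ri] = ['RI'] * n_ri    # after the reserved ACK region
    col[m - n_cqi:] = ['CQI'] * n_cqi              # from the top of the column
    return [['DMRS'] * m, col]

pattern = fig3_left_pattern()
for k in range(11, -1, -1):                        # print the top row first
    print(k, pattern[1][k])
```

Running the loop prints CQI at k = 8..11, RI at k = 4..6, ACK at k = 0..1, and data at k = 2, 3, 7, matching the description above.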
FIG. 4 illustrates additional examples of multiplexing UCI, DMRS and data on 2-symbol sPUSCH, where the first SC-FDMA symbol of the sPUSCH is used for transmitting data, and the second symbol is for DMRS transmission. In this configuration, the HARQ ACK/NACK is mapped from the top of the first SC-FDMA symbol (top 2 symbols), so that it is near the DMRS in the time-domain samples. Similar to FIG. 3, in the left-most example in FIG. 4 it is assumed that the maximum number of complex-valued symbols for HARQ ACK/NACK is 4. The mapping of RI thus starts from the 5th complex-valued symbol from the top of the first SC-FDMA symbol (symbols 5-7 from the top). The mapping of CQI/PMI starts from the bottom of the first symbol (bottom 4 symbols). In the middle example, the mapping of RI starts from the bottom of the first symbol (bottom 3 symbols), and the mapping of CQI/PMI starts after the RI (next 4 symbols). FIG. 5 shows alternative patterns for the case where DMRS follows the data symbol; these patterns are obtained by mirroring the patterns of FIG. 3 against the DMRS symbol. With the mirroring operation, these alternative patterns use the same methodology as the legacy pattern depicted in FIG. 2. FIG. 6 shows examples of multiplexing UCI and data on 2-symbol sPUSCH, where there is no DMRS on this sPUSCH, and the DMRS used for channel estimation for this sPUSCH is transmitted in the previous short TTIs. The mapping of HARQ ACK/NACK starts from k=0 (bottom) of the first symbol, in these examples, so that it is close to the DMRS in the time-domain samples. In the left-most example in FIG. 6, it is assumed that the maximum number of complex-valued symbols for HARQ ACK/NACK is 4. Thus, the mapping of RI starts from the 5th complex-valued symbol from the bottom of the first SC-FDMA symbol. The mapping of CQI/PMI starts from the top of the first symbol. In the middle example in FIG. 6, the mapping of RI starts from the bottom of the second symbol. In the right-most example in FIG. 6, the mapping of RI starts from the top of the first symbol. Comparing these three examples, the position of RI is closest to the DMRS in the time-domain samples in the left-most example, while the distance between the RI and the DMRS in the time-domain samples is the largest in the middle example. However, the mapping in the middle example provides the best flexibility for multiplexing UCI. In the left and middle examples of FIG. 6, the coded CQI/PMI is mapped to all SC-FDMA symbols within the sPUSCH on one row k before continuing on the next row k−1. In the right-most example, the mapping of CQI/PMI starts from the bottom of the second SC-FDMA symbol. Similar to FIG. 6, FIG. 7 shows examples of multiplexing UCI and data on 2-symbol sPUSCH, where there is no DMRS on this sPUSCH, and the DMRS used for channel estimation for this sPUSCH is transmitted in the previous sTTIs. The difference from FIG. 6 is that here the CQI/PMI bits are first mapped to the time-domain samples, for the first SC-FDMA symbol, that are left unused by HARQ feedback and RI/CRI, before using time-domain symbols corresponding to the second SC-FDMA symbol, if needed. This enables a reduction in latency before decoding PMI/CQI and CRI/RI, since these bits are transmitted as early as possible after the DMRS. FIG. 8 shows an example of multiplexing UCI and data on 2-symbol sPUSCH, where there is no DMRS on this sPUSCH, and the DMRS used for channel estimation for this sPUSCH is transmitted after this sPUSCH. FIG. 9 illustrates alternatives for multiplexing UCI and data on 2-symbol sPUSCH, where there is no DMRS on this sPUSCH, and the DMRS used for channel estimation for this sPUSCH is transmitted after this sPUSCH. These alternatives are based on mirroring the patterns of FIG. 6 and FIG. 7 against the DMRS symbol that comes before in those examples.
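One plausible reading of the mirroring operation, sketched below, is that the order of the SC-FDMA symbols is reversed around the DMRS and the pre-DFT order within each data symbol is reversed as well; the toy list-of-lists pattern representation is an assumption of the sketch, not a structure defined in this disclosure.

```python
def mirror_pattern(pattern):
    """Mirror a UCI mapping pattern against the DMRS symbol.

    pattern: one list per SC-FDMA symbol; data symbols list their per-k
             contents from k=0 (bottom) upward, DMRS symbols are ['DMRS'].
    """
    return [col if col == ['DMRS'] else col[::-1] for col in pattern[::-1]]

# Toy 4-sample version of the left-hand FIG. 3 pattern (DMRS, then data).
fig3_left = [['DMRS'], ['ACK', 'ACK', 'RI', 'CQI']]
print(mirror_pattern(fig3_left))
# -> [['CQI', 'RI', 'ACK', 'ACK'], ['DMRS']]: the ACK symbols now sit at
#    the end of the pre-DFT series, nearest the DMRS that follows the data.
```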
FIG. 10 shows an example of multiplexing UCI, data and SRS on 2-symbol sPUSCH, where the first symbol 12 is used for data transmission (UL-SCH data) and the second symbol 13 is used for SRS. The DMRS used for channel estimation for this sPUSCH is transmitted before this sPUSCH. UCI Mapping on 3-Symbol PUSCH In this section are described examples of UCI mapping on 3-symbol sPUSCH, considering different DMRS configurations. FIG. 11 illustrates examples of multiplexing UCI, DMRS and data on 3-symbol sPUSCH, where the first SC-FDMA symbol of the sPUSCH is used for transmitting DMRS, and the second and third symbols are for data transmission. As can be seen from FIG. 11, the UCI mapping rule is the same as the one shown in FIG. 6, that is, multiplexing UCI and data on 2-symbol sPUSCH, where there is no DMRS on this sPUSCH, and the DMRS is transmitted before the 2-symbol sPUSCH. FIG. 12 illustrates latency-optimized alternative patterns to FIG. 11, where the RI/CRI bits and CQI/PMI bits are mapped first to the resource elements of the first SC-FDMA symbol following the DMRS symbol. Note that resource elements of the last SC-FDMA symbol can be used for PMI/CQI if there is not a sufficient number of remaining time-domain symbols corresponding to the previous SC-FDMA symbol. FIG. 13 illustrates examples of multiplexing UCI, DMRS, data and SRS on 3-symbol sPUSCH, where the first SC-FDMA symbol of the sPUSCH is used for transmitting DMRS, the second symbol is for data transmission, and the third one is for SRS. As can be seen from FIG. 13, the UCI mapping rule is the same as the one in FIG. 3, that is, multiplexing UCI and data on 2-symbol sPUSCH, where the first SC-FDMA symbol of the sPUSCH is used for DMRS, and the second symbol is for data. By using the same mapping as shown in FIG. 8, examples of multiplexing of UCI, DMRS and data on 3-symbol sPUSCH with the configuration data+data+DMRS are shown in FIG. 14. FIG. 15 illustrates alternatives for multiplexing UCI and data on 3-symbol sPUSCH, where the DMRS used for channel estimation for this sPUSCH is transmitted after this sPUSCH. These alternatives are based on mirroring the patterns of FIG. 11 against the DMRS symbol. FIG. 16 shows examples of multiplexing UCI and data on 3-symbol sPUSCH, where there is no DMRS on this sPUSCH, and the DMRS used for channel estimation for this sPUSCH is transmitted before this sPUSCH. FIG. 17 illustrates examples of multiplexing UCI, data and SRS on 3-symbol sPUSCH, where there is no DMRS on this sPUSCH, and the DMRS is transmitted before this sPUSCH. The same mapping rule illustrated in FIG. 6 is applied. Here, the coded CQI/PMI is mapped to all SC-FDMA symbols within the sPUSCH on one row before continuing on the next row. UCI Mapping on 4-Symbol PUSCH In this section, we list some examples of UCI mapping on 4-symbol sPUSCH, considering different DMRS configurations. FIG. 18 illustrates examples of multiplexing UCI, DMRS and data on 4-symbol sPUSCH, where the first SC-FDMA symbol of the sPUSCH is used for transmitting DMRS, and the remaining symbols are for data transmission. The UCI mapping rule is the same as the one shown in FIG. 11. FIG. 19 shows latency-optimized alternative patterns to FIG. 18, where the RI/CRI bits and CQI/PMI bits are mapped first to the resource elements of the first SC-FDMA symbol following the DMRS symbol. Note that resource elements of the last SC-FDMA symbol can be used for PMI/CQI if there is not a sufficient number of remaining resource elements in the previous SC-FDMA symbol.
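The row-first CQI placement used in several of the preceding examples (Option 1 of the CQI mapping rules) can be sketched as follows; the free-position callback and the top-down direction are illustrative assumptions consistent with the "one row k, then row k−1" description.

```python
def map_cqi_rowwise(n_data_symbols, n_rows, is_free, n_cqi):
    """Option 1 CQI/PMI placement: fill one row k across all data-carrying
    SC-FDMA symbols before moving down to row k-1.

    is_free(sym, k) -> True if that pre-DFT position is not already taken
    by HARQ-ACK or RI/CRI. Returns (symbol, k) positions in mapping order.
    """
    placed = []
    for k in range(n_rows - 1, -1, -1):          # top row first
        for sym in range(n_data_symbols):
            if is_free(sym, k):
                placed.append((sym, k))
                if len(placed) == n_cqi:
                    return placed
    return placed

# Hypothetical 2 data symbols x 8 rows, with three positions already used.
taken = {(0, 7), (0, 6), (0, 5)}
print(map_cqi_rowwise(2, 8, lambda s, k: (s, k) not in taken, 5))
# -> [(1, 7), (1, 6), (1, 5), (0, 4), (1, 4)]
```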
FIG. 20 illustrates examples of multiplexing UCI, DMRS, data and SRS on 4-symbol sPUSCH, where the first SC-FDMA symbol of the sPUSCH is used for transmitting DMRS, and the last symbol is for SRS. The UCI mapping rule is the same as the one shown in FIG. 11. Note that alternative UCI mappings that are latency-optimized can be used here as well, similar to FIG. 19, with the addition of SRS in the last SC-FDMA symbol. FIG. 21 illustrates examples of multiplexing UCI, DMRS, data and SRS on 4-symbol sPUSCH, where the first three SC-FDMA symbols of the sPUSCH are used for data, and the last symbol is for DMRS. The UCI mapping rule is the same as the one shown in FIG. 8 and FIG. 14. FIG. 22 shows alternatives for multiplexing UCI and data on 4-symbol sPUSCH, where the DMRS used for channel estimation for this sPUSCH is transmitted after this sPUSCH. These alternatives are based on mirroring the patterns of FIG. 20 against the DMRS symbol. Similar to FIG. 16, FIG. 23 shows examples of multiplexing UCI on 4-symbol sPUSCH, where there is no DMRS on the sPUSCH, and the DMRS is transmitted before. FIG. 24 shows latency-optimized alternative patterns to FIG. 23, where the RI/CRI bits and CQI/PMI bits are mapped first to the resource elements of the first SC-FDMA symbol following the DMRS symbol. Note that pre-DFT symbols mapped to the last SC-FDMA symbol can be used for PMI/CQI if there is not a sufficient number of remaining resource elements in the previous SC-FDMA symbol. FIG. 25 illustrates examples of multiplexing UCI on 4-symbol sPUSCH, where the last symbol is used for SRS, and the rest of the symbols are for data transmission. Note that alternative UCI mappings that are latency-optimized can be used here as well, similar to FIG. 24, with the addition of SRS in the last SC-FDMA symbol. UCI Mapping on 7-Symbol PUSCH In this section, some examples of UCI mapping on 7-symbol sPUSCH are provided, considering different DMRS configurations. FIG. 26 shows an example of multiplexing UCI on 7-symbol sPUSCH, where the legacy DMRS position is reused, that is, the DMRS is placed at the middle of the 7-symbol sPUSCH. In this case, the legacy UCI mapping rule is reused. FIG. 27 shows the latency-optimized pattern for 7-symbol sPUSCH. The DMRS is placed in the first SC-FDMA symbol and UCI is mapped to the following SC-FDMA symbol. In case the payload of CQI/PMI is large, CQI/PMI is mapped on consecutive SC-FDMA symbols starting from the SC-FDMA symbol following the DMRS symbol. An example of this case is illustrated in FIG. 28. In high-Doppler scenarios, a single DMRS is not sufficient to provide good channel estimation at the eNodeB. In such cases, more DMRS symbols are needed to improve the channel estimation, and thereby improve the decoding of UCI and data. FIG. 29 shows some examples of UCI mapping on 7-symbol sPUSCH, where 2 SC-FDMA symbols are used for transmitting DMRS. The first and the sixth SC-FDMA symbols of the 7-symbol sPUSCH are used for DMRS, and the rest of the symbols are used for data. Similar mapping rules can be used for other DMRS configurations for 7-symbol sPUSCH. In the left-most and middle examples shown in FIG. 29, mapping rules similar to those shown in FIG. 3, for 2-symbol sPUSCH, are used for multiplexing of HARQ ACK/NACK and RI on 7-symbol sPUSCH. In this example, HARQ ACK/NACK and RI are placed on both symbol 1 and symbol 6. In the right-most example in FIG. 29, HARQ ACK/NACK is placed on symbol 1, which is close to the first DMRS symbol, and RI is placed on symbol 6, which is close to the second DMRS symbol.
The mapping in the left and middle examples of FIG. 29 can provide time diversity for transmitting HARQ ACK/NACK and RI. The left example in FIG. 29 requires a predefined maximum number of complex-valued symbols for HARQ ACK/NACK, which is not needed if the mapping in the middle or right-most example of FIG. 29 is used. Note that for high-Doppler scenarios, the channel varies fast in the time domain; thus, the CSI reports, i.e., RI, CQI and PMI, may not be very useful. In such cases, the middle example is the preferred mapping solution. When more resources are needed for the multiplexing of HARQ ACK/NACK and/or RI, symbol 4 can also be used for HARQ ACK/NACK and/or RI. Case where there is No Data in sPUSCH In case of full TTI operation (1-millisecond TTI), an eNB can schedule an aperiodic CQI report, which is transmitted as UCI on PUSCH by the UE. The UE may not have any data in its buffer, and thus the PUSCH will only contain UCI. A similar behavior is possible in case of short TTI operation, when the eNB schedules an aperiodic CQI report on sPUSCH. The UCI mapping explained previously throughout this document can be reused, but the code rate of CQI/PMI is adapted in this situation so as to exploit the resource elements that are used neither for data, nor for RI/CRI and HARQ feedback. This means that the code rate with which the information bits corresponding to CQI/PMI are coded is lowered, so that the resulting larger sequence of coded bits uses up all scheduled resource elements left unused by RI/CRI and HARQ feedback. FIG. 30 illustrates an example of multiplexing of UCI on 2-symbol sPUSCH and 3-symbol sPUSCH in case the sPUSCH does not carry any data. Similar behavior is intended for all other cases and other sTTI lengths mentioned previously. FIG. 31 is a flow chart of embodiments of the invention for the placement of RI and HARQ-A/N on short TTI PUSCH. An SC-FDMA symbol is DFT-spread starting in time from k=0. Different mapping solutions for different sTTI lengths are provided, without considering the mirroring method. For example, for UCI on PUSCH with sTTI, it is determined whether the closest DMRS in the time domain is before the data (block 3102). If so, HARQ-A/N is placed in the first data SC-FDMA symbol on the lowest k (block 3104). If not, HARQ-A/N is placed in the last data SC-FDMA symbol on the highest k (block 3106). If there is more than one data symbol, RI is placed either in the second data SC-FDMA symbol on the lowest k (3104 path) or in the second-to-last data SC-FDMA symbol on the highest k (3106 path). If not, RI is placed in the data SC-FDMA symbol on the highest k (3104 path) or in the data SC-FDMA symbol on the lowest k (3106 path). FIG. 32 is a flow chart of embodiments of the invention for the placement of RI and HARQ-A/N on short TTI PUSCH. An SC-FDMA symbol is DFT-spread starting in time from k=0. This method provides a uniform mapping solution for all sTTI lengths, without considering the mirroring method. If the closest DMRS in the time domain is before the data, HARQ-A/N is placed in the first SC-FDMA symbol from kmin and up (block 3204). If not, HARQ-A/N is placed in the last SC-FDMA symbol from kmax and down (block 3206). In FIG. 31 and FIG. 32, k = kmin, . . . , kmax is the symbol index before transform precoding (see section 5.3.3 in 3GPP TS 36.211), and r is the predefined maximum number of symbols used for HARQ ACK/NACK.
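The branch structure of the FIG. 31 flow chart condenses into a few lines of code; this is a sketch of the decision logic only, with symbol indices counted among the data-carrying SC-FDMA symbols, and the string labels are illustrative.

```python
def place_harq_and_ri(dmrs_before_data, n_data_symbols):
    """Decision logic of FIG. 31: choose the SC-FDMA symbol and the pre-DFT
    end at which HARQ-A/N and RI mapping starts on a short-TTI PUSCH.

    Returns ((harq_symbol, harq_end), (ri_symbol, ri_end)), where symbols
    index the data-carrying SC-FDMA symbols in time order.
    """
    last = n_data_symbols - 1
    if dmrs_before_data:                           # block 3102 -> 3104 path
        harq = (0, 'lowest k')
        ri = (1, 'lowest k') if n_data_symbols > 1 else (0, 'highest k')
    else:                                          # block 3102 -> 3106 path
        harq = (last, 'highest k')
        ri = (last - 1, 'highest k') if n_data_symbols > 1 else (0, 'lowest k')
    return harq, ri

print(place_harq_and_ri(True, 2))   # ((0, 'lowest k'), (1, 'lowest k'))
print(place_harq_and_ri(False, 1))  # ((0, 'highest k'), (0, 'lowest k'))
```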
When the mirroring method is considered, the UCI mapping for the cases where the closest DMRS is transmitted after the data is obtained by mirroring the UCI patterns based on the left paths of FIG. 31 and FIG. 32. Implementations The techniques referred to above can be implemented by respective transmitting and receiving devices. FIG. 33 is a block diagram of a wireless device (UE 50) that is configured to operate as a transmitting device, and FIG. 35 is a block diagram of a network node (network node 30) configured to operate as a receiving device. For purposes of discussion, FIGS. 33 and 35 show the UE 50 as a transmitting device and the network node 30 as a receiving device. However, the network node 30 may also be configured to operate as a transmitting device, and likewise, the UE 50 may also be configured to operate as a receiving device. FIG. 33 illustrates a diagram of a wireless device (UE 50) configured to operate as a transmitting device (or transmitter apparatus), according to some embodiments. To ease explanation, the UE 50 may be considered to represent any radio communication device, such as a target device (device targeted for communication), device-to-device (D2D) UE, machine-type UE or UE capable of machine-to-machine communication (M2M), a sensor equipped with a UE, iPAD, tablet, mobile terminal, smart phone, laptop embedded equipment (LEE), laptop mounted equipment (LME), USB dongle, Customer Premises Equipment (CPE), etc. The UE 50 communicates with a transmitting device, radio node or base station, such as the network access node 30, via antennas 54 and a transceiver circuit 56. The transceiver circuit 56 may include transmitter circuitry, receiver circuitry, and associated control circuits that are collectively configured to transmit and receive signals according to a radio access technology, for the purposes of providing cellular communication services. According to various embodiments, cellular communication services may be operated according to any one or more of the 3GPP cellular standards, GSM, GPRS, WCDMA, HSDPA, LTE and LTE-Advanced. The UE 50 also includes one or more processing circuits 52 that are operatively associated with the radio transceiver circuit 56. The processing circuit 52 comprises one or more digital processing circuits, e.g., one or more microprocessors, microcontrollers, DSPs, FPGAs, CPLDs, ASICs, or any mix thereof. More generally, the processing circuit 52 may comprise fixed circuitry, or programmable circuitry that is specially adapted via the execution of program instructions implementing the functionality taught herein, or may comprise some mix of fixed and programmed circuitry. The processing circuit 52 may be multi-core. The processing circuit 52 also includes a memory 64. The memory 64, in some embodiments, stores one or more computer programs 66 and, optionally, configuration data 68. The memory 64 provides non-transitory storage for the computer program 66, and it may comprise one or more types of computer-readable media, such as disk storage, solid-state memory storage, or any mix thereof. By way of non-limiting example, the memory 64 comprises any one or more of SRAM, DRAM, EEPROM, and FLASH memory, which may be in the processing circuit 52 and/or separate from the processing circuit 52. In general, the memory 64 comprises one or more types of computer-readable storage media providing non-transitory storage of the computer program 66 and any configuration data 68 used by the UE 50.
In some embodiments, the processor62of the processing circuit52may execute a computer program66stored in the memory64that configures the processor62to perform a method of mapping control information to each of a plurality of TTIs/transmissions, for transmission as SC-FDMA signals, where each of the plurality of transmissions comprises one or more SC-FDMA symbols and where the control information comprises at least HARQ ACK/NACK data for each of the plurality of TTIs and may also include RI data and CQI data. Accordingly, the processing circuit52is configured to determine, for each of the plurality of transmissions, whether user data to be transmitted in the transmission, i.e. TTI, will be closest in time to a DMRS transmitted before the user data or to a DMRS transmitted after the user data. The processing circuit52is also configured to, for each of the plurality of transmissions in which user data to be transmitted in the transmission/TTI will be closest in time to DMRS transmitted before the user data, map all HARQ ACK/NACK data for the transmission/TTI to the earliest in time SC-FDMA symbol of the transmission/TTI that carries user data, and to pre-DFT symbols that correspond to that SC-FDMA symbol and that are closest in time to the DMRS transmitted before the user data. The processing circuit52is also configured to, for each of the plurality of transmissions in which user data to be transmitted in the transmission/TTI will be closest in time to DMRS transmitted after the user data, map all HARQ ACK/NACK data for the transmission to the last in time SC-FDMA symbol of the transmission/TTI that carries user data, and to pre-DFT symbols that correspond to that SC-FDMA symbol and that are closest in time to the DMRS transmitted after the user data. The processing circuit52is further configured to form, for each of the plurality of transmissions, an SC-FDMA signal from user data and control information for the transmission/TTI, based on the mapping of the HARQ ACK/NACK data. This functionality may be performed by the mapping circuitry60in the processing circuit52. The transmitter apparatus may include transmitter circuitry configured to transmit the SC-FDMA signals. Regardless of its specific implementation, the processing circuit52of the network node30is configured to perform a method in a transmitting device, of mapping control information to each of a plurality of TTIs/transmissions, for transmission as SC-FDMA signals, where each of the plurality of transmissions comprises one or more SC-FDMA symbols and where the control information comprises at least HARQ ACK/NACK data, RI data and CQI data for each of the plurality of transmissions, such as method3400ofFIG.34. The method3400includes determining, for each of the plurality of transmissions, whether user data to be transmitted in the transmission/TTI will be closest in time to a DMRS transmitted before the user data or to a DMRS transmitted after the user data (block3402). The method3400includes, for each of the plurality of transmissions in which user data to be transmitted in the transmission/TTI will be closest in time to DMRS transmitted before the user data, mapping all HARQ ACK/NACK data for the transmission/TTI to the earliest in time SC-FDMA symbol of the transmission/TTI that carries user data, and to pre-DFT symbols that correspond to that SC-FDMA symbol and that are closest in time to the DMRS transmitted before the user data (block3404). 
The method 3400 also includes, for each of the plurality of transmissions in which user data to be transmitted in the transmission/TTI will be closest in time to DMRS transmitted after the user data, mapping all HARQ ACK/NACK data for the transmission/TTI to the last-in-time SC-FDMA symbol of the transmission/TTI that carries user data, and to pre-DFT symbols that correspond to that SC-FDMA symbol and that are closest in time to the DMRS transmitted after the user data (block 3406). The method 3400 further includes forming, for each of the plurality of transmissions, an SC-FDMA signal from user data and control information for the transmission/TTI, based on the mapping of the HARQ ACK/NACK data (block 3408). In some embodiments, the method 3400 includes, for each of the plurality of TTIs/transmissions, mapping RI data for the transmission/TTI to the same SC-FDMA symbol to which the HARQ ACK/NACK data is mapped, but to pre-DFT symbols that map to that same SC-FDMA symbol and that are as far as possible from the DMRS closest in time to the user data of the transmission/TTI. In other embodiments, the method includes, for each of the plurality of transmissions in which two or more SC-FDMA symbols are to carry user data, mapping RI data for the transmission/TTI to an SC-FDMA symbol that carries user data and that is immediately adjacent in time to the SC-FDMA symbol to which the HARQ ACK/NACK data is mapped, and to pre-DFT symbols that map to that adjacent SC-FDMA symbol and that are as close as possible to the DMRS closest in time to the user data of the transmission/TTI. In some embodiments, the method 3400 includes, for each of the plurality of transmissions, mapping RI data for the transmission/TTI to the same SC-FDMA symbol to which the HARQ ACK/NACK data is mapped, and to pre-DFT symbols that are as close as possible to the pre-DFT symbols to which the HARQ ACK/NACK data is mapped, given a predetermined maximum number of pre-DFT symbols allocated to HARQ ACK/NACK data. The method 3400 may also include determining, for each of the plurality of transmissions, whether more than one SC-FDMA symbol of the transmission/TTI is to carry user data. For each of the plurality of transmissions in which only one SC-FDMA symbol is to carry user data, the method 3400 then includes mapping RI data for the transmission/TTI to the same SC-FDMA symbol to which the HARQ ACK/NACK data is mapped, but to pre-DFT symbols that map to that same SC-FDMA symbol and that are as far as possible from the DMRS closest in time to the user data of the transmission. For each of the plurality of transmissions in which two or more SC-FDMA symbols are to carry user data, the method 3400 then includes mapping RI data for the transmission/TTI to an SC-FDMA symbol that carries user data and that is immediately adjacent in time to the SC-FDMA symbol to which the HARQ ACK/NACK data is mapped, and to pre-DFT symbols that map to that adjacent SC-FDMA symbol and that are as close as possible to the DMRS closest in time to the user data of the transmission. In some cases, the method 3400 may include, for each of the plurality of transmissions in which two or more SC-FDMA symbols are to carry user data, mapping CQI data for the transmission/TTI as evenly as possible to the two or more SC-FDMA symbols that are to carry user data.
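Before turning to the receiving node, the per-TTI HARQ placement of method 3400, a receiver-side counterpart of the kind used by method 3600 (described further below), and the even CQI split mentioned above might be sketched as follows; the data structures and names are assumptions made for illustration, not interfaces defined by this disclosure.

```python
def map_harq_for_tti(data_symbols, dmrs_before, n_ack, m):
    """Method 3400 core (blocks 3404/3406): map all HARQ ACK/NACK for one
    transmission/TTI to the data SC-FDMA symbol nearest the closest DMRS,
    at the pre-DFT positions nearest that DMRS.

    data_symbols: indices of SC-FDMA symbols carrying user data, time order
    m:            length of the pre-DFT series (allocated subcarriers)
    """
    if dmrs_before:
        return [(data_symbols[0], k) for k in range(n_ack)]
    return [(data_symbols[-1], k) for k in range(m - 1, m - 1 - n_ack, -1)]

def demap_harq_for_tti(samples_by_symbol, data_symbols, dmrs_before, n_ack):
    """Receiver-side counterpart (cf. method 3600, blocks 3606/3608): pull
    the HARQ ACK/NACK soft symbols back out of the post-despreading samples,
    reversed where needed to match the mapping order above."""
    if dmrs_before:
        return samples_by_symbol[data_symbols[0]][:n_ack]
    return list(reversed(samples_by_symbol[data_symbols[-1]][-n_ack:]))

def split_cqi_evenly(n_cqi, data_symbols):
    """CQI spread 'as evenly as possible' over the data-carrying symbols."""
    q, rem = divmod(n_cqi, len(data_symbols))
    return {s: q + (1 if i < rem else 0) for i, s in enumerate(data_symbols)}

print(map_harq_for_tti([1, 2], dmrs_before=True, n_ack=2, m=12))
# -> [(1, 0), (1, 1)]
print(split_cqi_evenly(7, [1, 2, 3]))   # -> {1: 3, 2: 2, 3: 2}
```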
FIG. 35 illustrates a diagram of a network node 30 configured to operate as a receiving device (or receiver apparatus), according to some embodiments. The network node 30 facilitates communication between UEs and the core network. The generic terminology “network node” is used, but the network node 30 can be any kind of network node, such as a radio network node (e.g., base station, radio base station, base transceiver station, base station controller, network controller, evolved Node B (eNB), Node B, Multi-cell/multicast Coordination Entity (MCE), relay node, access point, radio access point, Remote Radio Unit (RRU), Remote Radio Head (RRH)), a core network node (e.g., MME, SON node, a coordinating node, positioning node, MDT node, etc.), or even an external node (e.g., a 3rd party node, a node external to the current network), etc. It may also include, in some cases, an Operations Support System (OSS), Operations and Maintenance (O&M), Self-Organizing Network (SON), positioning node, Evolved Serving Mobile Location Center (E-SMLC), a centralized controller, a core network node, Mobility Management Entity (MME), base station controller, or network controller. The network node 30 has a communication interface circuit 38 that includes circuitry for communicating with other nodes in the core network, radio nodes, and/or other types of nodes in the network for the purposes of providing data and cellular communication services. The network node 30 communicates with UEs via antennas 34 and a transceiver circuit 36. The transceiver circuit 36 may include transmitter circuitry, receiver circuitry, and associated control circuits that are collectively configured to transmit and receive signals according to a radio access technology, for the purposes of providing cellular communication services. According to various embodiments, cellular communication services may be operated according to any one or more of the 3GPP cellular standards, GSM, general packet radio service (GPRS), wideband code division multiple access (WCDMA), high-speed downlink packet access (HSDPA), LTE and LTE-Advanced. The network node 30 also includes one or more processing circuits 32 that are operatively associated with the communication interface circuit 38 or transceiver circuit 36. The network node 30 uses the communication interface circuit 38 to communicate with network nodes and the transceiver circuit 36 to communicate with UEs. For ease of discussion, the one or more processing circuits 32 are referred to hereafter as “the processing circuit 32.” The processing circuit 32 comprises one or more digital processors 42, e.g., one or more microprocessors, microcontrollers, Digital Signal Processors (DSPs), Field Programmable Gate Arrays (FPGAs), Complex Programmable Logic Devices (CPLDs), Application Specific Integrated Circuits (ASICs), or any mix thereof. More generally, the processing circuit 32 may comprise fixed circuitry, or programmable circuitry that is specially configured via the execution of program instructions implementing the functionality taught herein, or may comprise some mix of fixed and programmed circuitry. The processor 42 may be multi-core, having two or more processor cores utilized for enhanced performance, reduced power consumption, and more efficient simultaneous processing of multiple tasks. The processing circuit 32 also includes a memory 44. The memory 44, in some embodiments, stores one or more computer programs 46 and, optionally, configuration data 48. The memory 44 provides non-transitory storage for the computer program 46, and it may comprise one or more types of computer-readable media, such as disk storage, solid-state memory storage, or any mix thereof.
By way of non-limiting example, the memory44comprises any one or more of SRAM, DRAM, EEPROM, and FLASH memory, which may be in the processing circuit32and/or separate from the processing circuit32. In general, the memory44comprises one or more types of computer-readable storage media providing non-transitory storage of the computer program46and any configuration data48used by the network node30. Here, “non-transitory” means permanent, semi-permanent, or at least temporarily persistent storage and encompasses both long-term storage in non-volatile memory and storage in working memory, e.g., for program execution. In some embodiments, the processor42of the processing circuit32may execute a computer program46stored in the memory44that configures the processor42to operate as a receiver (or receiving apparatus) for de-mapping control information from each of a plurality of TTIs/transmissions received as SC-FDMA signals, where each of the plurality of transmission time intervals comprises one or more SC-FDMA symbols and where the control information comprises at least HARQ ACK/NACK data for each of the plurality of transmissions and may also comprise RI data and/or CQI data. Accordingly, the processing circuit32is configured to control receiver circuitry (of the transceiver circuit36) that is configured to receive, for each of the plurality of transmissions, an SC-FDMA signal. The processing circuit32is configured to determine, for each of the plurality of transmissions, whether user data received in the transmission is closest in time to a DMRS transmitted before the user data or to a DMRS transmitted after the user data. The processing circuit32is configured to, for each of the plurality of transmissions in which user data received in the transmission is closest in time to DMRS transmitted before the user data, de-map all HARQ ACK/NACK data for the transmission/TTI from the earliest in time SC-FDMA symbol of the transmission/TTI that carries user data, and from post-despreading symbols closest in time to the DMRS transmitted before the user data. The processing circuit32is also configured to, for each of the plurality of transmissions in which user data received in the transmission is closest in time to DMRS transmitted after the user data, de-map all HARQ ACK/NACK data for the transmission/TTI from the last in time SC-FDMA symbol of the transmission/TTI that carries user data, and from post-despreading symbols closest in time to the DMRS transmitted after the user data. This functionality may be performed by the de-mapping circuitry40in the processing circuit32. Similar to as mentioned above, both the network node30and the UE50may be configured with any combination of the mapping circuitry60and the de-mapping circuitry40. Regardless of the specific implementation, the processing circuit32of the network node30is configured to perform a method3600of de-mapping control information from each of a plurality of transmissions received as SC-FDMA signals, where each of the plurality of transmissions comprises one or more SC-FDMA symbols and where the control information comprises at least HARQ ACK/NACK data, RI data, and CQI data for each of the plurality of transmissions. The method3600is illustrated inFIG.36and includes receiving, for each of the plurality of transmissions, an SC-FDMA signal (block3602). 
The method 3600 also includes determining, for each of the plurality of transmissions, whether user data received in the transmission is closest in time to a DMRS transmitted before the user data or to a DMRS transmitted after the user data (block 3604). For each of the plurality of transmissions in which user data received in the transmission is closest in time to DMRS transmitted before the user data, the method includes de-mapping all HARQ ACK/NACK data for the transmission/TTI from the earliest-in-time SC-FDMA symbol of the transmission/TTI that carries user data, and from post-despreading symbols closest in time to the DMRS transmitted before the user data (block 3606). The method 3600 also includes, for each of the plurality of transmissions in which user data received in the transmission is closest in time to DMRS transmitted after the user data, de-mapping all HARQ ACK/NACK data for the transmission/TTI from the last-in-time SC-FDMA symbol of the transmission/TTI that carries user data, and from post-despreading symbols closest in time to the DMRS transmitted after the user data (block 3608). In some embodiments, for each of the plurality of transmissions, the method 3600 includes de-mapping RI data for the transmission/TTI from the same SC-FDMA symbol to which the HARQ ACK/NACK data is mapped, but from post-despreading symbols that map to that same SC-FDMA symbol and that are as far as possible from the DMRS closest in time to the user data of the transmission. In other embodiments, for each of the plurality of transmissions in which two or more SC-FDMA symbols carry user data, the method 3600 includes de-mapping RI data for the transmission/TTI from an SC-FDMA symbol that carries user data and that is immediately adjacent in time to the SC-FDMA symbol to which the HARQ ACK/NACK data is mapped, and from post-despreading symbols that map to that adjacent SC-FDMA symbol and that are as close as possible to the DMRS closest in time to the user data of the transmission. In some embodiments, for each of the plurality of transmissions, the method 3600 includes de-mapping RI data for the transmission/TTI from the same SC-FDMA symbol to which the HARQ ACK/NACK data is mapped, and from post-despreading symbols that are as close as possible to the post-despreading symbols to which the HARQ ACK/NACK data is mapped, given a predetermined maximum number of post-despreading symbols allocated to HARQ ACK/NACK data. The method 3600 may include determining, for each of the plurality of transmissions, whether more than one SC-FDMA symbol of the transmission carries user data. For each of the plurality of transmissions in which only one SC-FDMA symbol carries user data, the method 3600 then includes de-mapping RI data for the transmission/TTI from the same SC-FDMA symbol to which the HARQ ACK/NACK data is mapped, but from post-despreading symbols that map to that same SC-FDMA symbol and that are as far as possible from the DMRS closest in time to the user data of the transmission. For each of the plurality of transmissions in which two or more SC-FDMA symbols carry user data, the method 3600 then includes de-mapping RI data for the transmission/TTI from an SC-FDMA symbol that carries user data and that is immediately adjacent in time to the SC-FDMA symbol to which the HARQ ACK/NACK data is mapped, and from post-despreading symbols that map to that adjacent SC-FDMA symbol and that are as close as possible to the DMRS closest in time to the user data of the transmission.
In some embodiments, for each of the plurality of transmissions in which two or more SC-FDMA symbols carry user data, the method3600includes de-mapping CQI data for the transmission/TTI as evenly as possible from the two or more SC-FDMA symbols that carry user data. In other embodiments, for each of the plurality of transmissions in which two or more SC-FDMA symbols carry user data, the method3600includes de-mapping CQI data for the transmission/TTI, to the extent possible, from pre-DFT symbols that map to the first SC-FDMA symbol carrying user data, and then de-mapping any remaining CQI data from one or more subsequent SC-FDMA symbols. FIG.37illustrates an example functional module or circuit architecture as may be implemented in the network node30, based on the mapping circuitry60, for mapping control information to each of a plurality of transmissions for transmission as SC-FDMA signals, where each of the plurality of transmissions comprises one or more SC-FDMA symbols and where the control information comprises at least HARQ ACK/NACK data, RI data and CQI data for each of the plurality of transmissions. The illustrated embodiment at least functionally includes a determining module3702for determining, for each of the plurality of transmissions, whether user data to be transmitted in the transmission will be closest in time to a DMRS transmitted before the user data or to a DMRS transmitted after the user data. The implementation includes a mapping module3704for, for each of the plurality of transmissions in which user data to be transmitted in the transmission will be closest in time to DMRS transmitted before the user data, mapping all HARQ ACK/NACK data for the transmission/TTI to the earliest in time SC-FDMA symbol of the transmission that carries user data, and to pre-DFT symbols closest in time to the DMRS transmitted before the user data. The mapping module3704is also for, for each of the plurality of transmissions in which user data to be transmitted in the transmission will be closest in time to DMRS transmitted after the user data, mapping all HARQ ACK/NACK data for the transmission/TTI to the last in time SC-FDMA symbol of the transmission that carries user data, and to pre-DFT symbols closest in time to the DMRS transmitted after the user data. The implementation includes a signal forming module3706for forming, for each of the plurality of transmissions, an SC-FDMA signal from user data and control information for the transmission, based on the mapping of the HARQ ACK/NACK data. FIG.38illustrates an example functional module or circuit architecture as may be implemented in the UE50, based on the circuitry60being configured to also perform mapping control information to each of a plurality of transmissions for transmission as SC-FDMA signals, where each of the plurality of transmissions comprises one or more SC-FDMA symbols and where the control information comprises at least HARQ ACK/NACK data, RI data and CQI data for each of the plurality of transmissions. The illustrated embodiment at least functionally includes a determining module3802for determining, for each of the plurality of transmissions, whether user data to be transmitted in the transmission will be closest in time to a DMRS transmitted before the user data or to a DMRS transmitted after the user data. 
The implementation includes a mapping module3804for, for each of the plurality of transmissions in which user data to be transmitted in the transmission will be closest in time to DMRS transmitted before the user data, mapping all HARQ ACK/NACK data for the transmission/TTI to the earliest in time SC-FDMA symbol of the transmission/TTI that carries user data, and to pre-DFT symbols closest in time to the DMRS transmitted before the user data. The mapping module3804is also for, for each of the plurality of transmissions in which user data to be transmitted in the transmission will be closest in time to DMRS transmitted after the user data, mapping all HARQ ACK/NACK data for the transmission/TTI to the last in time SC-FDMA symbol of the transmission/TTI that carries user data, and to pre-DFT symbols closest in time to the DMRS transmitted after the user data. The implementation includes a signal forming module3806for forming, for each of the plurality of transmissions, an SC-FDMA signal from user data and control information for the transmission/TTI, based on the mapping of the HARQ ACK/NACK data. FIG.39illustrates an example functional module or circuit architecture as may be implemented in the network node30, based on the circuitry40also being configured for de-mapping control information from each of a plurality of transmissions received as SC-FDMA signals, where each of the plurality of transmissions comprises one or more SC-FDMA symbols and where the control information comprises at least HARQ ACK/NACK data, RI data and CQI data for each of the plurality of transmission/TTIs. The illustrated embodiment at least functionally includes a receiving module3902for receiving, for each of the plurality of transmissions, an SC-FDMA signal. The implementation also includes a determining module3904for determining, for each of the plurality of transmissions, whether user data received in the transmission is closest in time to a DMRS transmitted before the user data or to a DMRS transmitted after the user data. The implementation includes a de-mapping module3906for, for each of the plurality of transmissions in which user data received in the transmission is closest in time to DMRS transmitted before the user data, de-mapping all HARQ ACK/NACK data for the transmission/TTI from the earliest in time SC-FDMA symbol of the transmission that carries user data, and from post-despreading symbols closest in time to the DMRS transmitted before the user data. The de-mapping module3906is also for, for each of the plurality of transmissions in which user data received in the transmission is closest in time to DMRS transmitted after the user data, de-mapping all HARQ ACK/NACK data for the transmission/TTI from the last in time SC-FDMA symbol of the transmission that carries user data, and from post-despreading symbols closest in time to the DMRS transmitted after the user data. FIG.40illustrates an example functional module or circuit architecture as may be implemented in UE50, based on the circuitry60being configured for de-mapping control information from each of a plurality of transmissions received as SC-FDMA signals, where each of the plurality of transmissions comprises one or more SC-FDMA symbols and where the control information comprises at least HARQ ACK/NACK data, RI data and CQI data for each of the plurality of transmissions. The illustrated embodiment at least functionally includes a receiving module4002for receiving, for each of the plurality of transmissions, an SC-FDMA signal. 
The implementation also includes a determining module 4004 for determining, for each of the plurality of transmissions, whether user data received in the transmission is closest in time to a DMRS transmitted before the user data or to a DMRS transmitted after the user data. The implementation includes a de-mapping module 4006 for, for each of the plurality of transmissions in which user data received in the transmission is closest in time to DMRS transmitted before the user data, de-mapping all HARQ ACK/NACK data for the transmission/TTI from the earliest in time SC-FDMA symbol of the transmission/TTI that carries user data, and from post-despreading symbols closest in time to the DMRS transmitted before the user data. The de-mapping module 4006 is also for, for each of the plurality of transmissions in which user data received in the transmission is closest in time to DMRS transmitted after the user data, de-mapping all HARQ ACK/NACK data for the transmission/TTI from the last in time SC-FDMA symbol of the transmission/TTI that carries user data, and from post-despreading symbols closest in time to the DMRS transmitted after the user data.

In view of the above discussion and detailed description, it will be appreciated that embodiments of the presently disclosed techniques and apparatus include, but are not limited to, the following enumerated embodiments:

(a). A method, in a transmitting device, of mapping control information to each of a plurality of transmission time intervals (TTIs), for transmission as Single-Carrier Frequency-Division Multiple Access (SC-FDMA) signals, where each of the plurality of transmission time intervals comprises one or more SC-FDMA symbols and where the control information comprises at least Hybrid Automatic Repeat-Request (HARQ) ACK/NACK data for each of the plurality of TTIs, the method comprising: determining, for each of the plurality of TTIs, whether user data to be transmitted in the TTI will be closest in time to a demodulation reference signal (DMRS) transmitted before the user data or to a DMRS transmitted after the user data; for each of the plurality of TTIs in which user data to be transmitted in the TTI will be closest in time to DMRS transmitted before the user data, mapping all HARQ ACK/NACK data for the TTI to the earliest in time SC-FDMA symbol of the TTI that carries user data, and to pre-Discrete-Fourier-Transform (pre-DFT) symbols closest in time to the DMRS transmitted before the user data; for each of the plurality of TTIs in which user data to be transmitted in the TTI will be closest in time to DMRS transmitted after the user data, mapping all HARQ ACK/NACK data for the TTI to the last in time SC-FDMA symbol of the TTI that carries user data, and to pre-DFT symbols closest in time to the DMRS transmitted after the user data; and forming, for each of the plurality of TTIs, an SC-FDMA signal from user data and control information for the TTI, based on the mapping of the HARQ ACK/NACK data.

(b). The method of example embodiment (a), wherein the method further comprises: for each of the plurality of TTIs, mapping rank indicator (RI) data for the TTI to the same SC-FDMA symbol to which the HARQ ACK/NACK data is mapped, but to pre-DFT symbols that map to that same SC-FDMA symbol but that are as far as possible from the DMRS closest in time to the user data of the TTI.
(c). The method of example embodiment (a), wherein the method further comprises: for each of the plurality of TTIs in which two or more SC-FDMA symbols are to carry user data, mapping rank indicator (RI) data for the TTI to an SC-FDMA symbol that carries user data and that is immediately adjacent in time to the SC-FDMA symbol to which the HARQ ACK/NACK data is mapped, and to pre-DFT symbols that map to that adjacent SC-FDMA symbol and that are as close as possible to the DMRS closest in time to the user data of the TTI.

(d). The method of example embodiment (a), wherein the method further comprises: for each of the plurality of TTIs, mapping rank indicator (RI) data for the TTI to the same SC-FDMA symbol to which the HARQ ACK/NACK data is mapped, and to pre-DFT symbols that are as close as possible to the pre-DFT symbols to which the HARQ ACK/NACK data is mapped, given a predetermined maximum number of pre-DFT symbols allocated to HARQ ACK/NACK data.

(e). The method of example embodiment (a), wherein the method further comprises: determining, for each of the plurality of TTIs, whether more than one SC-FDMA symbol of the TTI is to carry user data; for each of the plurality of TTIs in which only one SC-FDMA symbol is to carry user data, mapping rank indicator (RI) data for the TTI to the same SC-FDMA symbol to which the HARQ ACK/NACK data is mapped, but to pre-DFT symbols that map to that same SC-FDMA symbol but that are as far as possible from the DMRS closest in time to the user data of the TTI; and for each of the plurality of TTIs in which two or more SC-FDMA symbols are to carry user data, mapping RI data for the TTI to an SC-FDMA symbol that carries user data and that is immediately adjacent in time to the SC-FDMA symbol to which the HARQ ACK/NACK data is mapped, and to pre-DFT symbols that map to that adjacent SC-FDMA symbol and that are as close as possible to the DMRS closest in time to the user data of the TTI.

(f). The method of any of example embodiments (a)-(e), the method further comprising: for each of the plurality of TTIs in which two or more SC-FDMA symbols are to carry user data, mapping channel quality indicator (CQI) data for the TTI as evenly as possible to the two or more SC-FDMA symbols that are to carry user data.

(g). The method of any of example embodiments (a)-(e), the method further comprising: for each of the plurality of TTIs in which two or more SC-FDMA symbols are to carry user data, mapping channel quality indicator (CQI) data for the TTI, to the extent possible, to pre-DFT symbols that map to the first SC-FDMA symbol carrying user data, and mapping any remaining CQI data to one or more subsequent SC-FDMA symbols.
(h). A transmitter apparatus configured to map control information to each of a plurality of transmission time intervals (TTIs), for transmission as Single-Carrier Frequency-Division Multiple Access (SC-FDMA) signals, where each of the plurality of transmission time intervals comprises one or more SC-FDMA symbols and where the control information comprises at least Hybrid Automatic Repeat-Request (HARQ) ACK/NACK data for each of the plurality of TTIs, the transmitter apparatus comprising: processing circuitry configured to: determine, for each of the plurality of TTIs, whether user data to be transmitted in the TTI will be closest in time to a demodulation reference signal (DMRS) transmitted before the user data or to a DMRS transmitted after the user data; for each of the plurality of TTIs in which user data to be transmitted in the TTI will be closest in time to DMRS transmitted before the user data, map all HARQ ACK/NACK data for the TTI to the earliest in time SC-FDMA symbol of the TTI that carries user data, and to pre-Discrete-Fourier-Transform (pre-DFT) symbols closest in time to the DMRS transmitted before the user data; for each of the plurality of TTIs in which user data to be transmitted in the TTI will be closest in time to DMRS transmitted after the user data, map all HARQ ACK/NACK data for the TTI to the last in time SC-FDMA symbol of the TTI that carries user data, and to pre-DFT symbols closest in time to the DMRS transmitted after the user data; and form, for each of the plurality of TTIs, an SC-FDMA signal from user data and control information for the TTI, based on the mapping of the HARQ ACK/NACK data; and transmitter circuitry configured to transmit the SC-FDMA signals.

(i). The transmitter apparatus of example embodiment (h), wherein the processing circuitry is further configured to: for each of the plurality of TTIs, map rank indicator (RI) data for the TTI to the same SC-FDMA symbol to which the HARQ ACK/NACK data is mapped, but to pre-DFT symbols that map to that same SC-FDMA symbol but that are as far as possible from the DMRS closest in time to the user data of the TTI.

(j). The transmitter apparatus of example embodiment (h), wherein the processing circuitry is further configured to: for each of the plurality of TTIs in which two or more SC-FDMA symbols are to carry user data, map rank indicator (RI) data for the TTI to an SC-FDMA symbol that carries user data and that is immediately adjacent in time to the SC-FDMA symbol to which the HARQ ACK/NACK data is mapped, and to pre-DFT symbols that map to that adjacent SC-FDMA symbol and that are as close as possible to the DMRS closest in time to the user data of the TTI.

(k). The transmitter apparatus of example embodiment (h), wherein the processing circuitry is further configured to: for each of the plurality of TTIs, map rank indicator (RI) data for the TTI to the same SC-FDMA symbol to which the HARQ ACK/NACK data is mapped, and to pre-DFT symbols that are as close as possible to the pre-DFT symbols to which the HARQ ACK/NACK data is mapped, given a predetermined maximum number of pre-DFT symbols allocated to HARQ ACK/NACK data.
(l). The transmitter apparatus of example embodiment (h), wherein the processing circuitry is further configured to: determine, for each of the plurality of TTIs, whether more than one SC-FDMA symbol of the TTI is to carry user data; for each of the plurality of TTIs in which only one SC-FDMA symbol is to carry user data, map rank indicator (RI) data for the TTI to the same SC-FDMA symbol to which the HARQ ACK/NACK data is mapped, but to pre-DFT symbols that map to that same SC-FDMA symbol but that are as far as possible from the DMRS closest in time to the user data of the TTI; and for each of the plurality of TTIs in which two or more SC-FDMA symbols are to carry user data, map RI data for the TTI to an SC-FDMA symbol that carries user data and that is immediately adjacent in time to the SC-FDMA symbol to which the HARQ ACK/NACK data is mapped, and to pre-DFT symbols that map to that adjacent SC-FDMA symbol and that are as close as possible to the DMRS closest in time to the user data of the TTI.

(m). The transmitter apparatus of any of example embodiments (h)-(l), wherein the processing circuitry is further configured to: for each of the plurality of TTIs in which two or more SC-FDMA symbols are to carry user data, map channel quality indicator (CQI) data for the TTI as evenly as possible to the two or more SC-FDMA symbols that are to carry user data.

(n). The transmitter apparatus of any of example embodiments (h)-(l), wherein the processing circuitry is further configured to: for each of the plurality of TTIs in which two or more SC-FDMA symbols are to carry user data, map channel quality indicator (CQI) data for the TTI, to the extent possible, to pre-DFT symbols that map to the first SC-FDMA symbol carrying user data, and map any remaining CQI data to one or more subsequent SC-FDMA symbols.

(o). A method, in a receiving device, of de-mapping control information from each of a plurality of transmission time intervals (TTIs), received as Single-Carrier Frequency-Division Multiple Access (SC-FDMA) signals, where each of the plurality of transmission time intervals comprises one or more SC-FDMA symbols and where the control information comprises at least Hybrid Automatic Repeat-Request (HARQ) ACK/NACK data for each of the plurality of TTIs, the method comprising: receiving, for each of the plurality of TTIs, an SC-FDMA signal; determining, for each of the plurality of TTIs, whether user data received in the TTI is closest in time to a demodulation reference signal (DMRS) transmitted before the user data or to a DMRS transmitted after the user data; for each of the plurality of TTIs in which user data received in the TTI is closest in time to DMRS transmitted before the user data, de-mapping all HARQ ACK/NACK data for the TTI from the earliest in time SC-FDMA symbol of the TTI that carries user data, and from post-despreading symbols closest in time to the DMRS transmitted before the user data; and for each of the plurality of TTIs in which user data received in the TTI is closest in time to DMRS transmitted after the user data, de-mapping all HARQ ACK/NACK data for the TTI from the last in time SC-FDMA symbol of the TTI that carries user data, and from post-despreading symbols closest in time to the DMRS transmitted after the user data.
(p). The method of example embodiment (o), wherein the method further comprises: for each of the plurality of TTIs, de-mapping rank indicator (RI) data for the TTI from the same SC-FDMA symbol to which the HARQ ACK/NACK data is mapped, but from post-despreading symbols that map to that same SC-FDMA symbol but that are as far as possible from the DMRS closest in time to the user data of the TTI.

(q). The method of example embodiment (o), wherein the method further comprises: for each of the plurality of TTIs in which two or more SC-FDMA symbols carry user data, de-mapping rank indicator (RI) data for the TTI from an SC-FDMA symbol that carries user data and that is immediately adjacent in time to the SC-FDMA symbol to which the HARQ ACK/NACK data is mapped, and from post-despreading symbols that map to that adjacent SC-FDMA symbol and that are as close as possible to the DMRS closest in time to the user data of the TTI.

(r). The method of example embodiment (o), wherein the method further comprises: for each of the plurality of TTIs, de-mapping rank indicator (RI) data for the TTI from the same SC-FDMA symbol to which the HARQ ACK/NACK data is mapped, and from post-despreading symbols that are as close as possible to the post-despreading symbols to which the HARQ ACK/NACK data is mapped, given a predetermined maximum number of post-despreading symbols allocated to HARQ ACK/NACK data.

(s). The method of example embodiment (o), wherein the method further comprises: determining, for each of the plurality of TTIs, whether more than one SC-FDMA symbol of the TTI carries user data; for each of the plurality of TTIs in which only one SC-FDMA symbol carries user data, de-mapping rank indicator (RI) data for the TTI from the same SC-FDMA symbol to which the HARQ ACK/NACK data is mapped, but from post-despreading symbols that map to that same SC-FDMA symbol but that are as far as possible from the DMRS closest in time to the user data of the TTI; and for each of the plurality of TTIs in which two or more SC-FDMA symbols carry user data, de-mapping RI data for the TTI from an SC-FDMA symbol that carries user data and that is immediately adjacent in time to the SC-FDMA symbol to which the HARQ ACK/NACK data is mapped, and from post-despreading symbols that map to that adjacent SC-FDMA symbol and that are as close as possible to the DMRS closest in time to the user data of the TTI.

(t). The method of any of example embodiments (o)-(s), the method further comprising: for each of the plurality of TTIs in which two or more SC-FDMA symbols carry user data, de-mapping channel quality indicator (CQI) data for the TTI as evenly as possible from the two or more SC-FDMA symbols that carry user data.

(u). The method of any of example embodiments (o)-(s), the method further comprising: for each of the plurality of TTIs in which two or more SC-FDMA symbols carry user data, de-mapping channel quality indicator (CQI) data for the TTI, to the extent possible, from pre-DFT symbols that map to the first SC-FDMA symbol carrying user data, and then de-mapping any remaining CQI data from one or more subsequent SC-FDMA symbols.
(v). A receiver apparatus configured to de-map control information from each of a plurality of transmission time intervals (TTIs) transmitted as Single-Carrier Frequency-Division Multiple Access (SC-FDMA) signals, where each of the plurality of transmission time intervals comprises one or more SC-FDMA symbols and where the control information comprises at least Hybrid Automatic Repeat-Request (HARQ) ACK/NACK data for each of the plurality of TTIs, the receiver apparatus comprising: receiver circuitry configured to receive, for each of the plurality of TTIs, an SC-FDMA signal; and processing circuitry configured to: determine, for each of the plurality of TTIs, whether user data received in the TTI is closest in time to a demodulation reference signal (DMRS) transmitted before the user data or to a DMRS transmitted after the user data; for each of the plurality of TTIs in which user data received in the TTI is closest in time to DMRS transmitted before the user data, de-map all HARQ ACK/NACK data for the TTI from the earliest in time SC-FDMA symbol of the TTI that carries user data, and from post-despreading symbols closest in time to the DMRS transmitted before the user data; and for each of the plurality of TTIs in which user data received in the TTI is closest in time to DMRS transmitted after the user data, de-map all HARQ ACK/NACK data for the TTI from the last in time SC-FDMA symbol of the TTI that carries user data, and from post-despreading symbols closest in time to the DMRS transmitted after the user data.

(w). The receiver apparatus of example embodiment (v), wherein the processing circuitry is further configured to: for each of the plurality of TTIs, de-map rank indicator (RI) data for the TTI from the same SC-FDMA symbol to which the HARQ ACK/NACK data is mapped, but from post-despreading symbols that map to that same SC-FDMA symbol but that are as far as possible from the DMRS closest in time to the user data of the TTI.

(x). The receiver apparatus of example embodiment (v), wherein the processing circuitry is further configured to: for each of the plurality of TTIs in which two or more SC-FDMA symbols carry user data, de-map rank indicator (RI) data for the TTI from an SC-FDMA symbol that carries user data and that is immediately adjacent in time to the SC-FDMA symbol to which the HARQ ACK/NACK data is mapped, and from post-despreading symbols that map to that adjacent SC-FDMA symbol and that are as close as possible to the DMRS closest in time to the user data of the TTI.

(y). The receiver apparatus of example embodiment (v), wherein the processing circuitry is further configured to: for each of the plurality of TTIs, de-map rank indicator (RI) data for the TTI from the same SC-FDMA symbol to which the HARQ ACK/NACK data is mapped, and from post-despreading symbols that are as close as possible to the post-despreading symbols to which the HARQ ACK/NACK data is mapped, given a predetermined maximum number of post-despreading symbols allocated to HARQ ACK/NACK data.
(z). The receiver apparatus of example embodiment (v), wherein the processing circuitry is further configured to: determine, for each of the plurality of TTIs, whether more than one SC-FDMA symbol of the TTI carries user data; for each of the plurality of TTIs in which only one SC-FDMA symbol carries user data, de-map rank indicator (RI) data for the TTI from the same SC-FDMA symbol to which the HARQ ACK/NACK data is mapped, but from post-despreading symbols that map to that same SC-FDMA symbol but that are as far as possible from the DMRS closest in time to the user data of the TTI; and for each of the plurality of TTIs in which two or more SC-FDMA symbols carry user data, de-map RI data for the TTI from an SC-FDMA symbol that carries user data and that is immediately adjacent in time to the SC-FDMA symbol to which the HARQ ACK/NACK data is mapped, and from post-despreading symbols that map to that adjacent SC-FDMA symbol and that are as close as possible to the DMRS closest in time to the user data of the TTI.

(aa). The receiver apparatus of any of example embodiments (v)-(z), wherein the processing circuitry is further configured to: for each of the plurality of TTIs in which two or more SC-FDMA symbols carry user data, de-map channel quality indicator (CQI) data for the TTI as evenly as possible from the two or more SC-FDMA symbols that carry user data.

(bb). The receiver apparatus of any of example embodiments (v)-(z), wherein the processing circuitry is further configured to: for each of the plurality of TTIs in which two or more SC-FDMA symbols carry user data, de-map channel quality indicator (CQI) data for the TTI, to the extent possible, from pre-DFT symbols that map to the first SC-FDMA symbol carrying user data, and then de-map any remaining CQI data from one or more subsequent SC-FDMA symbols.
(cc). A non-transitory computer readable storage medium storing a computer program comprising program instructions that, when executed on at least one processing circuit of a transmitting device configured for mapping control information to each of a plurality of transmission time intervals (TTIs), for transmission as Single-Carrier Frequency-Division Multiple Access (SC-FDMA) signals, where each of the plurality of transmission time intervals comprises one or more SC-FDMA symbols and where the control information comprises at least Hybrid Automatic Repeat-Request (HARQ) ACK/NACK data for each of the plurality of TTIs, cause the transmitting device to: determine, for each of the plurality of TTIs, whether user data to be transmitted in the TTI will be closest in time to a demodulation reference signal (DMRS) transmitted before the user data or to a DMRS transmitted after the user data; for each of the plurality of TTIs in which user data to be transmitted in the TTI will be closest in time to DMRS transmitted before the user data, map all HARQ ACK/NACK data for the TTI to the earliest in time SC-FDMA symbol of the TTI that carries user data, and to pre-Discrete-Fourier-Transform (pre-DFT) symbols closest in time to the DMRS transmitted before the user data; for each of the plurality of TTIs in which user data to be transmitted in the TTI will be closest in time to DMRS transmitted after the user data, map all HARQ ACK/NACK data for the TTI to the last in time SC-FDMA symbol of the TTI that carries user data, and to pre-DFT symbols closest in time to the DMRS transmitted after the user data; and form, for each of the plurality of TTIs, an SC-FDMA signal from user data and control information for the TTI, based on the mapping of the HARQ ACK/NACK data.

(dd). A non-transitory computer readable storage medium storing a computer program comprising program instructions that, when executed on at least one processing circuit of a receiving device configured for de-mapping control information from each of a plurality of transmission time intervals (TTIs), received as Single-Carrier Frequency-Division Multiple Access (SC-FDMA) signals, where each of the plurality of transmission time intervals comprises one or more SC-FDMA symbols and where the control information comprises at least Hybrid Automatic Repeat-Request (HARQ) ACK/NACK data for each of the plurality of TTIs, cause the receiving device to: determine, for each of the plurality of TTIs, whether user data received in the TTI is closest in time to a demodulation reference signal (DMRS) transmitted before the user data or to a DMRS transmitted after the user data; for each of the plurality of TTIs in which user data received in the TTI is closest in time to DMRS transmitted before the user data, de-map all HARQ ACK/NACK data for the TTI from the earliest in time SC-FDMA symbol of the TTI that carries user data, and from post-despreading symbols closest in time to the DMRS transmitted before the user data; and for each of the plurality of TTIs in which user data received in the TTI is closest in time to DMRS transmitted after the user data, de-map all HARQ ACK/NACK data for the TTI from the last in time SC-FDMA symbol of the TTI that carries user data, and from post-despreading symbols closest in time to the DMRS transmitted after the user data.

(ee). A transmitter apparatus adapted to perform the method of any one of example embodiments (a)-(g).

(ff). A receiver apparatus adapted to perform the method of any one of example embodiments (o)-(u).
(gg). A computer program comprising instructions which, when executed on a processing circuit, cause the processing circuit to carry out the method according to any one of example embodiments (a)-(g) and (o)-(u).

(hh). A carrier containing the computer program of example embodiment (gg), wherein the carrier is one of an electronic signal, an optical signal, a radio signal, or a computer readable storage medium.

(ii). A transmitter apparatus configured for mapping control information to each of a plurality of transmission time intervals (TTIs), for transmission as Single-Carrier Frequency-Division Multiple Access (SC-FDMA) signals, where each of the plurality of transmission time intervals comprises one or more SC-FDMA symbols and where the control information comprises at least Hybrid Automatic Repeat-Request (HARQ) ACK/NACK data for each of the plurality of TTIs, the transmitter apparatus comprising: a determining module for determining, for each of the plurality of TTIs, whether user data to be transmitted in the TTI will be closest in time to a DMRS transmitted before the user data or to a DMRS transmitted after the user data; a mapping module for, for each of the plurality of TTIs in which user data to be transmitted in the TTI will be closest in time to DMRS transmitted before the user data, mapping all HARQ ACK/NACK data for the TTI to the earliest in time SC-FDMA symbol of the TTI that carries user data, and to pre-DFT symbols closest in time to the DMRS transmitted before the user data, the mapping module also being for, for each of the plurality of TTIs in which user data to be transmitted in the TTI will be closest in time to DMRS transmitted after the user data, mapping all HARQ ACK/NACK data for the TTI to the last in time SC-FDMA symbol of the TTI that carries user data, and to pre-DFT symbols closest in time to the DMRS transmitted after the user data; and a signal forming module for forming, for each of the plurality of TTIs, an SC-FDMA signal from user data and control information for the TTI, based on the mapping of the HARQ ACK/NACK data.

(jj). A receiver apparatus configured for de-mapping control information from each of a plurality of transmission time intervals (TTIs), received as Single-Carrier Frequency-Division Multiple Access (SC-FDMA) signals, where each of the plurality of transmission time intervals comprises one or more SC-FDMA symbols and where the control information comprises at least Hybrid Automatic Repeat-Request (HARQ) ACK/NACK data for each of the plurality of TTIs, the receiver apparatus comprising: a receiving module for receiving, for each of the plurality of TTIs, an SC-FDMA signal; a determining module for determining, for each of the plurality of TTIs, whether user data received in the TTI is closest in time to a DMRS transmitted before the user data or to a DMRS transmitted after the user data; and a de-mapping module for, for each of the plurality of TTIs in which user data received in the TTI is closest in time to DMRS transmitted before the user data, de-mapping all HARQ ACK/NACK data for the TTI from the earliest in time SC-FDMA symbol of the TTI that carries user data, and from post-despreading symbols closest in time to the DMRS transmitted before the user data, where the de-mapping module is also for, for each of the plurality of TTIs in which user data received in the TTI is closest in time to DMRS transmitted after the user data, de-mapping all HARQ ACK/NACK data for the TTI from the last in time SC-FDMA symbol of the TTI that carries user data, and from post-despreading symbols closest in time to the DMRS transmitted after the user data.
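The HARQ ACK/NACK placement rule that recurs throughout embodiments (a), (h), (cc) and (ii) above lends itself to a compact illustration. The following Python sketch is not part of the disclosed apparatus; the TTI model, function name and parameters are assumptions made purely for illustration.

```python
# Illustrative sketch (not the patented implementation) of the HARQ
# ACK/NACK placement rule of example embodiment (a): pick the data
# symbol nearest the DMRS, then the pre-DFT positions nearest the DMRS.

def place_harq_ack(tti_symbols, dmrs_position, n_ack, symbol_len):
    """Return (target SC-FDMA symbol index, pre-DFT symbol positions).

    tti_symbols   -- per-symbol labels for one TTI: 'data', 'dmrs' or 'other'
    dmrs_position -- 'before' if the nearest DMRS precedes the user data,
                     'after' if it follows the user data
    n_ack         -- number of pre-DFT symbols needed for ACK/NACK
    symbol_len    -- pre-DFT symbols per SC-FDMA symbol
    """
    data_symbols = [i for i, s in enumerate(tti_symbols) if s == 'data']
    if dmrs_position == 'before':
        # Earliest-in-time data symbol; the first pre-DFT positions of
        # that symbol are closest in time to the preceding DMRS.
        return data_symbols[0], list(range(n_ack))
    # Last-in-time data symbol; the final pre-DFT positions of that
    # symbol are closest in time to the following DMRS.
    return data_symbols[-1], list(range(symbol_len - n_ack, symbol_len))

# DMRS in symbol 0, user data in symbols 1-3: ACK/NACK lands at the
# start of symbol 1, adjacent to the preceding DMRS.
print(place_harq_ack(['dmrs', 'data', 'data', 'data'], 'before', 4, 12))
```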
Notably, modifications and other embodiments of the disclosed invention(s) will come to mind to one skilled in the art having the benefit of the teachings presented in the foregoing descriptions and the associated drawings. Therefore, it is to be understood that the invention(s) is/are not to be limited to the specific embodiments disclosed and that modifications and other embodiments are intended to be included within the scope of this disclosure. Although specific terms may be employed herein, they are used in a generic and descriptive sense only and not for purposes of limitation.
86,416
11863330
MODE FOR CARRYING OUT THE INVENTION

Reference will now be made in detail to the preferred embodiments of the present disclosure, examples of which are illustrated in the accompanying drawings. The accompanying drawings illustrate exemplary embodiments of the present disclosure and provide a more detailed description of the present disclosure. However, the scope of the present disclosure should not be limited thereto. In some cases, to prevent the concept of the present disclosure from being ambiguous, structures and apparatuses of the known art will be omitted, or will be shown in the form of a block diagram based on the main functions of each structure and apparatus. Also, wherever possible, the same reference numbers will be used throughout the drawings and the specification to refer to the same or like parts.

In the present disclosure, a user equipment (UE) is fixed or mobile. The UE is a device that transmits and receives user data and/or control information by communicating with a base station (BS). The term 'UE' may be replaced with 'terminal equipment', 'Mobile Station (MS)', 'Mobile Terminal (MT)', 'User Terminal (UT)', 'Subscriber Station (SS)', 'wireless device', 'Personal Digital Assistant (PDA)', 'wireless modem', 'handheld device', etc. A BS is typically a fixed station that communicates with a UE and/or another BS. The BS exchanges data and control information with a UE and another BS. The term 'BS' may be replaced with 'Advanced Base Station (ABS)', 'Node B', 'evolved-Node B (eNB)', 'Base Transceiver System (BTS)', 'Access Point (AP)', 'Processing Server (PS)', etc. In the following description, a BS is commonly called an eNB.

In the present disclosure, a node refers to a fixed point capable of transmitting/receiving a radio signal to/from a UE by communication with the UE. Various eNBs can be used as nodes. For example, a node can be a BS, NB, eNB, pico-cell eNB (PeNB), home eNB (HeNB), relay, repeater, etc. Furthermore, a node may not be an eNB. For example, a node can be a radio remote head (RRH) or a radio remote unit (RRU). The RRH and RRU have power levels lower than that of the eNB. Since the RRH or RRU (referred to as RRH/RRU hereinafter) is generally connected to an eNB through a dedicated line such as an optical cable, cooperative communication by an RRH/RRU and an eNB can be performed more smoothly than cooperative communication by eNBs connected through wireless links. At least one antenna is installed per node. An antenna may refer to an antenna port, a virtual antenna or an antenna group. A node may also be called a point.

Unlike a conventional centralized antenna system (CAS) (i.e., a single-node system), in which antennas are concentrated in an eNB and controlled by an eNB controller, plural nodes are spaced apart at a predetermined distance or longer in a multi-node system. The plural nodes can be managed by one or more eNBs or eNB controllers that control operations of the nodes or schedule data to be transmitted/received through the nodes. Each node may be connected to an eNB or eNB controller managing the corresponding node via a cable or a dedicated line. In the multi-node system, the same cell identity (ID) or different cell IDs may be used for signal transmission/reception through plural nodes. When plural nodes have the same cell ID, each of the plural nodes operates as an antenna group of a cell. If nodes have different cell IDs in the multi-node system, the multi-node system can be regarded as a multi-cell (e.g.
macro-cell/femto-cell/pico-cell) system. When multiple cells respectively configured by plural nodes are overlaid according to coverage, a network configured by the multiple cells is called a multi-tier network. The cell ID of the RRH/RRU may be identical to or different from the cell ID of an eNB. When the RRH/RRU and eNB use different cell IDs, both the RRH/RRU and the eNB operate as independent eNBs.

In a multi-node system according to the present disclosure, which will be described below, one or more eNBs or eNB controllers connected to plural nodes can control the plural nodes such that signals are simultaneously transmitted to or received from a UE through some or all nodes. While there are differences between multi-node systems according to the nature and implementation form of each node, multi-node systems are distinguished from single-node systems (e.g., CAS, conventional MIMO systems, conventional relay systems, conventional repeater systems, etc.) in that a plurality of nodes provides communication services to a UE in a predetermined time-frequency resource. Accordingly, embodiments of the present disclosure with respect to a method of performing coordinated data transmission using some or all nodes can be applied to various types of multi-node systems. For example, a node generally refers to an antenna group spaced apart from another node by a predetermined distance or more. However, embodiments of the present disclosure, which will be described below, can even be applied to a case in which a node refers to an arbitrary antenna group irrespective of node interval. In the case of an eNB including an X-pole (cross-polarized) antenna, for example, the embodiments of the present disclosure are applicable on the assumption that the eNB controls a node composed of an H-pole antenna and a V-pole antenna.

A communication scheme through which signals are transmitted/received via plural transmit (Tx)/receive (Rx) nodes, signals are transmitted/received via at least one node selected from plural Tx/Rx nodes, or a node transmitting a downlink signal is distinguished from a node transmitting an uplink signal is called multi-eNB MIMO or CoMP (Coordinated Multi-Point Tx/Rx). Coordinated transmission schemes from among CoMP communication schemes can be categorized into JP (Joint Processing) and scheduling coordination. The former may be divided into JT (Joint Transmission)/JR (Joint Reception) and DPS (Dynamic Point Selection), and the latter may be divided into CS (Coordinated Scheduling) and CB (Coordinated Beamforming). DPS may also be called DCS (Dynamic Cell Selection). When JP is performed, more various communication environments can be generated compared to the other CoMP schemes. JT refers to a communication scheme by which plural nodes transmit the same stream to a UE, and JR refers to a communication scheme by which plural nodes receive the same stream from the UE. The UE/eNB combines the signals received from the plural nodes to restore the stream. In the case of JT/JR, signal transmission reliability can be improved through transmit diversity since the same stream is transmitted from/to plural nodes. DPS refers to a communication scheme by which a signal is transmitted/received through a node selected from plural nodes according to a specific rule. In the case of DPS, signal transmission reliability can be improved because a node having a good channel state between the node and a UE is selected as the communication node.
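As a toy illustration of the DPS selection rule just described (a node is chosen from plural nodes according to a specific rule), the sketch below simply picks the node with the best reported channel state; the node identifiers and the SINR figures are assumptions for illustration only, not part of the disclosure.

```python
# Minimal sketch of dynamic point selection (DPS): serve the UE from
# the node with the best reported channel state. Purely illustrative.

def select_transmission_point(channel_quality):
    """channel_quality: dict mapping node id -> reported quality (e.g., SINR in dB)."""
    return max(channel_quality, key=channel_quality.get)

print(select_transmission_point({'node_a': 12.5, 'node_b': 17.0, 'node_c': 9.3}))
# -> 'node_b'
```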
In the present disclosure, a cell refers to a specific geographical area in which one or more nodes provide communication services. Accordingly, communication with a specific cell may mean communication with an eNB or a node providing communication services to the specific cell. A downlink/uplink signal of a specific cell refers to a downlink/uplink signal from/to an eNB or a node providing communication services to the specific cell. A cell providing uplink/downlink communication services to a UE is called a serving cell. Furthermore, the channel status/quality of a specific cell refers to the channel status/quality of a channel or a communication link generated between an eNB or a node providing communication services to the specific cell and a UE. In 3GPP LTE-A systems, a UE can measure the downlink channel state from a specific node using one or more CSI-RSs (Channel State Information Reference Signals) transmitted through antenna port(s) of the specific node on a CSI-RS resource allocated to the specific node. In general, neighboring nodes transmit CSI-RSs on orthogonal CSI-RS resources. When CSI-RS resources are orthogonal, this means that the CSI-RS resources have different subframe configurations, which specify the subframes to which CSI-RSs are allocated according to CSI-RS resource configurations, subframe offsets and transmission periods, and/or different CSI-RS sequences, which specify the symbols and subcarriers carrying the CSI-RSs.

In the present disclosure, PDCCH (Physical Downlink Control Channel)/PCFICH (Physical Control Format Indicator Channel)/PHICH (Physical Hybrid automatic repeat request Indicator Channel)/PDSCH (Physical Downlink Shared Channel) refer to sets of time-frequency resources or resource elements respectively carrying DCI (Downlink Control Information)/CFI (Control Format Indicator)/downlink ACK/NACK (Acknowledgement/Negative ACK)/downlink data. In addition, PUCCH (Physical Uplink Control Channel)/PUSCH (Physical Uplink Shared Channel)/PRACH (Physical Random Access Channel) refer to sets of time-frequency resources or resource elements respectively carrying UCI (Uplink Control Information)/uplink data/random access signals. In the present disclosure, a time-frequency resource or a resource element (RE), which is allocated to or belongs to PDCCH/PCFICH/PHICH/PDSCH/PUCCH/PUSCH/PRACH, is referred to as a PDCCH/PCFICH/PHICH/PDSCH/PUCCH/PUSCH/PRACH RE or PDCCH/PCFICH/PHICH/PDSCH/PUCCH/PUSCH/PRACH resource. In the following description, transmission of PUCCH/PUSCH/PRACH by a UE is equivalent to transmission of uplink control information/uplink data/a random access signal through or on PUCCH/PUSCH/PRACH. Furthermore, transmission of PDCCH/PCFICH/PHICH/PDSCH by an eNB is equivalent to transmission of downlink data/control information through or on PDCCH/PCFICH/PHICH/PDSCH.

FIG. 1 illustrates an exemplary radio frame structure used in a wireless communication system. FIG. 1(a) illustrates a frame structure for frequency division duplex (FDD) used in 3GPP LTE/LTE-A, and FIG. 1(b) illustrates a frame structure for time division duplex (TDD) used in 3GPP LTE/LTE-A. Referring to FIG. 1, a radio frame used in 3GPP LTE/LTE-A has a length of 10 ms (307200 Ts) and includes 10 subframes of equal size. The 10 subframes in the radio frame may be numbered. Here, Ts denotes the sampling time and is represented as Ts = 1/(2048 × 15 kHz). Each subframe has a length of 1 ms and includes two slots. The 20 slots in the radio frame can be sequentially numbered from 0 to 19. Each slot has a length of 0.5 ms.
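The timing figures quoted above can be cross-checked with a few lines of arithmetic; the sketch below (variable names are ours) reproduces the 307200 Ts frame, 30720 Ts subframe and 15360 Ts slot lengths.

```python
# Worked check of the LTE timing numbers above: Ts = 1/(2048 * 15 kHz),
# a 10 ms radio frame, 1 ms subframes and 0.5 ms slots.

TS = 1.0 / (2048 * 15_000)          # basic time unit Ts, in seconds

frame_ts    = round(10e-3 / TS)     # radio frame length in units of Ts
subframe_ts = round(1e-3 / TS)      # subframe length in units of Ts
slot_ts     = round(0.5e-3 / TS)    # slot length in units of Ts

print(frame_ts, subframe_ts, slot_ts)   # 307200 30720 15360
```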
A time for transmitting a subframe is defined as a transmission time interval (TTI). Time resources can be distinguished by a radio frame number (or radio frame index), a subframe number (or subframe index) and a slot number (or slot index). The radio frame can be configured differently according to the duplex mode. Downlink transmission is distinguished from uplink transmission by frequency in FDD mode, and thus the radio frame includes only one of a downlink subframe and an uplink subframe in a specific frequency band. In TDD mode, downlink transmission is distinguished from uplink transmission by time, and thus the radio frame includes both a downlink subframe and an uplink subframe in a specific frequency band. Table 1 shows the DL-UL configurations of subframes in a radio frame in the TDD mode.

TABLE 1

  DL-UL           Downlink-to-Uplink          Subframe number
  configuration   switch-point periodicity    0 1 2 3 4 5 6 7 8 9
  0               5 ms                        D S U U U D S U U U
  1               5 ms                        D S U U D D S U U D
  2               5 ms                        D S U D D D S U D D
  3               10 ms                       D S U U U D D D D D
  4               10 ms                       D S U U D D D D D D
  5               10 ms                       D S U D D D D D D D
  6               5 ms                        D S U U U D S U U D

In Table 1, D denotes a downlink subframe, U denotes an uplink subframe and S denotes a special subframe. The special subframe includes the three fields DwPTS (Downlink Pilot TimeSlot), GP (Guard Period), and UpPTS (Uplink Pilot TimeSlot). DwPTS is a period reserved for downlink transmission and UpPTS is a period reserved for uplink transmission. Table 2 shows the special subframe configurations.

TABLE 2

                  Normal cyclic prefix in downlink       Extended cyclic prefix in downlink
  Special                   UpTS: Normal  UpPTS: Extended          UpPTS: Normal  UpPTS: Extended
  subframe        DwPTS     CP in uplink  CP in uplink   DwPTS     CP in uplink   CP in uplink
  configuration
  0               6592·Ts   2192·Ts       2560·Ts        7680·Ts   2192·Ts        2560·Ts
  1               19760·Ts  2192·Ts       2560·Ts        20480·Ts  2192·Ts        2560·Ts
  2               21952·Ts  2192·Ts       2560·Ts        23040·Ts  2192·Ts        2560·Ts
  3               24144·Ts  2192·Ts       2560·Ts        25600·Ts  2192·Ts        2560·Ts
  4               26336·Ts  2192·Ts       2560·Ts        7680·Ts   4384·Ts        5120·Ts
  5               6592·Ts   4384·Ts       5120·Ts        20480·Ts  4384·Ts        5120·Ts
  6               19760·Ts  4384·Ts       5120·Ts        23040·Ts  4384·Ts        5120·Ts
  7               21952·Ts  4384·Ts       5120·Ts        12800·Ts  4384·Ts        5120·Ts
  8               24144·Ts  4384·Ts       5120·Ts        —         —              —
  9               13168·Ts  4384·Ts       5120·Ts        —         —              —

FIG. 2 illustrates an exemplary downlink/uplink slot structure in a wireless communication system. Particularly, FIG. 2 illustrates the resource grid structure in 3GPP LTE/LTE-A. A resource grid is present per antenna port. Referring to FIG. 2, a slot includes a plurality of OFDM (Orthogonal Frequency Division Multiplexing) symbols in the time domain and a plurality of resource blocks (RBs) in the frequency domain. An OFDM symbol may refer to a symbol period. A signal transmitted in each slot may be represented by a resource grid composed of N_RB^{DL/UL} × N_sc^{RB} subcarriers and N_symb^{DL/UL} OFDM symbols. Here, N_RB^{DL} denotes the number of RBs in a downlink slot and N_RB^{UL} denotes the number of RBs in an uplink slot. N_RB^{DL} and N_RB^{UL} respectively depend on the DL transmission bandwidth and the UL transmission bandwidth. N_symb^{DL} denotes the number of OFDM symbols in the downlink slot and N_symb^{UL} denotes the number of OFDM symbols in the uplink slot. In addition, N_sc^{RB} denotes the number of subcarriers constituting one RB. An OFDM symbol may be called an SC-FDM (Single Carrier Frequency Division Multiplexing) symbol according to the multiple access scheme. The number of OFDM symbols included in a slot may depend on the channel bandwidth and the length of the cyclic prefix (CP). For example, a slot includes 7 OFDM symbols in the case of normal CP and 6 OFDM symbols in the case of extended CP. While FIG. 2 illustrates, for convenience, a subframe in which a slot includes 7 OFDM symbols, embodiments of the present disclosure can be equally applied to subframes having different numbers of OFDM symbols.
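Table 1 is, in effect, a small lookup table; the sketch below encodes it as such together with a helper that classifies a subframe. The data simply restates the table, while the function and its name are illustrative assumptions.

```python
# Table 1 (TDD DL-UL configurations) as a lookup table. 'D' = downlink,
# 'U' = uplink, 'S' = special subframe. Illustrative sketch only.

TDD_CONFIG = {
    0: "DSUUUDSUUU",
    1: "DSUUDDSUUD",
    2: "DSUDDDSUDD",
    3: "DSUUUDDDDD",
    4: "DSUUDDDDDD",
    5: "DSUDDDDDDD",
    6: "DSUUUDSUUD",
}

def subframe_type(config, subframe):
    """Return 'D', 'U' or 'S' for subframe 0..9 of DL-UL configuration 0..6."""
    return TDD_CONFIG[config][subframe]

print(subframe_type(1, 4))  # 'D'
```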
Referring to FIG. 2, each OFDM symbol includes N_RB^{DL/UL} × N_sc^{RB} subcarriers in the frequency domain. Subcarrier types can be classified into data subcarriers for data transmission, reference signal subcarriers for reference signal transmission, and null subcarriers for the guard band and the direct current (DC) component. The null subcarrier for the DC component is a subcarrier remaining unused and is mapped to the carrier frequency (f0) during OFDM signal generation or frequency up-conversion. The carrier frequency is also called the center frequency.

An RB is defined by N_symb^{DL/UL} (e.g., 7) consecutive OFDM symbols in the time domain and N_sc^{RB} (e.g., 12) consecutive subcarriers in the frequency domain. For reference, a resource composed of one OFDM symbol and one subcarrier is called a resource element (RE) or a tone. Accordingly, an RB is composed of N_symb^{DL/UL} × N_sc^{RB} REs. Each RE in a resource grid can be uniquely defined by an index pair (k, l) in a slot. Here, k is an index in the range of 0 to N_RB^{DL/UL} × N_sc^{RB} − 1 in the frequency domain, and l is an index in the range of 0 to N_symb^{DL/UL} − 1 in the time domain.

Two RBs that occupy N_sc^{RB} consecutive subcarriers in a subframe and are respectively disposed in the two slots of the subframe are called a physical resource block (PRB) pair. The two RBs constituting a PRB pair have the same PRB number (or PRB index). A virtual resource block (VRB) is a logical resource allocation unit for resource allocation. The VRB has the same size as the PRB. Depending on the mapping scheme of VRBs into PRBs, VRBs may be divided into localized VRBs and distributed VRBs. Localized VRBs are mapped into PRBs such that the VRB number (VRB index) corresponds to the PRB number; that is, n_PRB = n_VRB. Numbers are given to the localized VRBs from 0 to N_VRB^{DL} − 1, where N_VRB^{DL} = N_RB^{DL}. Accordingly, under the localized mapping scheme, VRBs having the same VRB number are mapped into PRBs having the same PRB number in the first slot and the second slot. Distributed VRBs, on the other hand, are mapped into PRBs through interleaving. Accordingly, distributed VRBs having the same VRB number may be mapped into PRBs having different PRB numbers in the first slot and the second slot. Two PRBs, which are respectively located in the two slots of the subframe and have the same VRB number, will be referred to as a pair of VRBs.

FIG. 3 illustrates a downlink (DL) subframe structure used in 3GPP LTE/LTE-A. Referring to FIG. 3, a DL subframe is divided into a control region and a data region. A maximum of three (or four) OFDM symbols located in the front portion of the first slot within a subframe correspond to the control region, to which control channels are allocated. A resource region available for PDCCH transmission in the DL subframe is referred to as a PDCCH region hereinafter. The remaining OFDM symbols correspond to the data region, to which a physical downlink shared channel (PDSCH) is allocated. A resource region available for PDSCH transmission in the DL subframe is referred to as a PDSCH region hereinafter. Examples of downlink control channels used in 3GPP LTE include a physical control format indicator channel (PCFICH), a physical downlink control channel (PDCCH), a physical hybrid ARQ indicator channel (PHICH), etc. The PCFICH is transmitted on the first OFDM symbol of a subframe and carries information regarding the number of OFDM symbols used for transmission of control channels within the subframe.
The PHICH is a response to uplink transmission and carries an HARQ acknowledgment (ACK)/negative acknowledgment (NACK) signal. Control information carried on the PDCCH is called downlink control information (DCI). The DCI contains resource allocation information and other control information for a UE or a UE group. For example, the DCI includes a transport format and resource allocation information of a downlink shared channel (DL-SCH), a transport format and resource allocation information of an uplink shared channel (UL-SCH), paging information of a paging channel (PCH), system information on the DL-SCH, information about resource allocation of an upper layer control message such as a random access response transmitted on the PDSCH, a transmit control command set with respect to individual UEs in a UE group, a transmit power control command, information on activation of voice over IP (VoIP), a downlink assignment index (DAI), etc. The transport format and resource allocation information of the DL-SCH are also called DL scheduling information or a DL grant, and the transport format and resource allocation information of the UL-SCH are also called UL scheduling information or a UL grant. The size and purpose of the DCI carried on a PDCCH depend on the DCI format, and the size thereof may vary according to the coding rate. Various formats, for example, formats 0 and 4 for uplink and formats 1, 1A, 1B, 1C, 1D, 2, 2A, 2B, 2C, 3 and 3A for downlink, have been defined in 3GPP LTE. Control information such as a hopping flag, information on RB allocation, modulation coding scheme (MCS), redundancy version (RV), new data indicator (NDI), information on transmit power control (TPC), cyclic shift demodulation reference signal (DMRS), UL index, channel quality information (CQI) request, DL assignment index, HARQ process number, transmitted precoding matrix indicator (TPMI), precoding matrix indicator (PMI), etc. is selected and combined based on the DCI format and transmitted to a UE as DCI.

In general, the DCI format for a UE depends on the transmission mode (TM) set for the UE. In other words, only a DCI format corresponding to a specific TM can be used for a UE configured in the specific TM.

A PDCCH is transmitted on an aggregation of one or several consecutive control channel elements (CCEs). The CCE is a logical allocation unit used to provide the PDCCH with a coding rate based on the state of the radio channel. A CCE corresponds to a plurality of resource element groups (REGs). For example, a CCE corresponds to 9 REGs and an REG corresponds to 4 REs. 3GPP LTE defines a CCE set in which a PDCCH can be located for each UE. A CCE set from which a UE can detect its PDCCH is called a PDCCH search space or, simply, a search space. An individual resource on which the PDCCH can be transmitted within the search space is called a PDCCH candidate. A set of PDCCH candidates to be monitored by the UE is defined as the search space. In 3GPP LTE/LTE-A, search spaces for the DCI formats may have different sizes, and a dedicated search space and a common search space are defined. The dedicated search space is a UE-specific search space and is configured for each UE. The common search space is configured for a plurality of UEs. The aggregation levels defining the search spaces are as follows.

TABLE 3

  Search space    Aggregation    Size        Number of PDCCH
  type            level L        [in CCEs]   candidates M(L)
  UE-specific     1              6           6
                  2              12          6
                  4              8           2
                  8              16          2
  Common          4              16          4
                  8              16          2

A PDCCH candidate corresponds to 1, 2, 4 or 8 CCEs according to the CCE aggregation level.
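Table 3 implies how many decoding attempts blind detection costs per monitored DCI format; a small sketch makes the count explicit (the data restates Table 3, while the helper and its name are ours).

```python
# Table 3 as data: aggregation level L -> number of PDCCH candidates M(L).
# The helper counts the blind-decoding attempts per monitored DCI format.

SEARCH_SPACE = {
    "ue_specific": {1: 6, 2: 6, 4: 2, 8: 2},
    "common":      {4: 4, 8: 2},
}

def blind_decode_attempts(space):
    """PDCCH candidates the UE must try, per monitored DCI format."""
    return sum(SEARCH_SPACE[space].values())

print(blind_decode_attempts("ue_specific"))  # 16
print(blind_decode_attempts("common"))       # 6
```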
An eNB transmits a PDCCH (DCI) on an arbitrary PDCCH candidate within a search space, and the UE monitors the search space to detect the PDCCH (DCI). Here, monitoring refers to attempting to decode each PDCCH in the corresponding search space according to all monitored DCI formats. The UE can detect its PDCCH by monitoring plural PDCCHs. Since the UE does not know the position at which its PDCCH is transmitted, the UE attempts to decode all PDCCHs of the corresponding DCI format for each subframe until a PDCCH having its ID is detected. This process is called blind detection (or blind decoding (BD)).

The eNB can transmit data for a UE or a UE group through the data region. Data transmitted through the data region may be called user data. For transmission of the user data, a physical downlink shared channel (PDSCH) may be allocated to the data region. A paging channel (PCH) and a downlink shared channel (DL-SCH) are transmitted through the PDSCH. The UE can read data transmitted through the PDSCH by decoding control information transmitted through a PDCCH. Information representing the UE or UE group to which the data on the PDSCH is transmitted, and how the UE or UE group is to receive and decode the PDSCH data, is included in the PDCCH and transmitted. For example, if a specific PDCCH is CRC (cyclic redundancy check)-masked with a radio network temporary identifier (RNTI) of "A", and information about data transmitted using a radio resource "B" (e.g., frequency position) and transmission format information "C" (e.g., transport block size, modulation scheme, coding information, etc.) is transmitted through a specific DL subframe, the UE monitors PDCCHs using its RNTI information, and the UE having the RNTI "A" detects the PDCCH and receives the PDSCH indicated by "B" and "C" using the information of the PDCCH.

A reference signal (RS) to be compared with a data signal is necessary for the UE to demodulate a signal received from the eNB. A reference signal refers to a predetermined signal having a specific waveform, which is transmitted from the eNB to the UE or from the UE to the eNB and is known to both the eNB and the UE. The reference signal is also called a pilot. Reference signals are categorized into a cell-specific RS shared by all UEs in a cell and a demodulation RS (DM RS) dedicated to a specific UE. A DM RS transmitted by the eNB for demodulation of downlink data for a specific UE is called a UE-specific RS. Both or only one of the DM RS and the CRS may be transmitted on downlink. When only the DM RS is transmitted without the CRS, an RS for channel measurement needs to be additionally provided, because the DM RS, which is transmitted using the same precoder as used for data, can be used for demodulation only. For example, in 3GPP LTE(-A), a CSI-RS corresponding to an additional RS for measurement is transmitted to the UE such that the UE can measure channel state information. Unlike the CRS, which is transmitted in every subframe, the CSI-RS is transmitted with a transmission period corresponding to a plurality of subframes, based on the fact that the channel state varies relatively little with time.

FIG. 4 illustrates an exemplary uplink subframe structure used in 3GPP LTE/LTE-A. Referring to FIG. 4, a UL subframe can be divided into a control region and a data region in the frequency domain. One or more PUCCHs (physical uplink control channels) can be allocated to the control region to carry uplink control information (UCI).
One or more PUSCHs (physical uplink shared channels) may be allocated to the data region of the UL subframe to carry user data. In the UL subframe, subcarriers spaced apart from the DC subcarrier are used as the control region. In other words, subcarriers corresponding to both ends of the UL transmission bandwidth are assigned to UCI transmission. The DC subcarrier is a component remaining unused for signal transmission and is mapped to the carrier frequency f0 during frequency up-conversion. A PUCCH for a UE is allocated to an RB pair belonging to resources operating at a carrier frequency, and the RBs belonging to the RB pair occupy different subcarriers in the two slots. This assignment of the PUCCH is represented as frequency hopping of the RB pair allocated to the PUCCH at the slot boundary. When frequency hopping is not applied, the RB pair occupies the same subcarriers. The PUCCH can be used to transmit the following control information.

- Scheduling Request (SR): This is information used to request a UL-SCH resource and is transmitted using an On-Off Keying (OOK) scheme.
- HARQ ACK/NACK: This is a response signal to a downlink data packet on a PDSCH and indicates whether the downlink data packet has been successfully received. A 1-bit ACK/NACK signal is transmitted as a response to a single downlink codeword, and a 2-bit ACK/NACK signal is transmitted as a response to two downlink codewords. HARQ-ACK responses include positive ACK (ACK), negative ACK (NACK), discontinuous transmission (DTX) and NACK/DTX. Here, the term HARQ-ACK is used interchangeably with the terms HARQ ACK/NACK and ACK/NACK.
- Channel State Indicator (CSI): This is feedback information about a downlink channel. Feedback information regarding MIMO includes a rank indicator (RI) and a precoding matrix indicator (PMI).

The quantity of control information (UCI) that a UE can transmit through a subframe depends on the number of SC-FDMA symbols available for control information transmission. The SC-FDMA symbols available for control information transmission are the SC-FDMA symbols of the subframe other than those used for reference signal transmission. In the case of a subframe in which a sounding reference signal (SRS) is configured, the last SC-FDMA symbol of the subframe is also excluded from the SC-FDMA symbols available for control information transmission. A reference signal is used for coherent detection of the PUCCH. The PUCCH supports various formats according to the information transmitted thereon. Table 4 shows the mapping relationship between PUCCH formats and UCI in LTE/LTE-A.

TABLE 4

  PUCCH    Modulation     Number of bits per
  format   scheme         subframe, Mbit       Usage                               Etc.
  1        N/A            N/A                  SR (Scheduling Request)
  1a       BPSK           1                    ACK/NACK or SR + ACK/NACK           One codeword
  1b       QPSK           2                    ACK/NACK or SR + ACK/NACK           Two codewords
  2        QPSK           20                   CQI/PMI/RI                          Joint coding with ACK/NACK
                                                                                   (extended CP)
  2a       QPSK + BPSK    21                   CQI/PMI/RI + ACK/NACK               Normal CP only
  2b       QPSK + QPSK    22                   CQI/PMI/RI + ACK/NACK               Normal CP only
  3        QPSK           48                   ACK/NACK or SR + ACK/NACK or
                                               CQI/PMI/RI + ACK/NACK

Referring to Table 4, PUCCH formats 1/1a/1b are used to transmit ACK/NACK information, PUCCH formats 2/2a/2b are used to carry CSI such as CQI/PMI/RI, and PUCCH format 3 is used to transmit ACK/NACK information.

Reference Signal (RS)

When a packet is transmitted in a wireless communication system, signal distortion may occur during transmission since the packet is transmitted through a radio channel. To correctly receive a distorted signal at a receiver, the distorted signal needs to be corrected using channel information.
To detect the channel information, a signal known to both the transmitter and the receiver is transmitted, and the channel information is detected from the degree of distortion of the signal when the signal is received through the channel. This signal is called a pilot signal or a reference signal. When data is transmitted/received using multiple antennas, the receiver can receive a correct signal only when the receiver is aware of the channel state between each transmit antenna and each receive antenna. Accordingly, a reference signal needs to be provided per transmit antenna, more specifically, per antenna port.

Reference signals can be classified into uplink reference signals and downlink reference signals. In LTE, the uplink reference signals include:

i) a demodulation reference signal (DMRS) for channel estimation for coherent demodulation of information transmitted through a PUSCH and a PUCCH; and
ii) a sounding reference signal (SRS) used for an eNB to measure uplink channel quality at a frequency of a different network.

The downlink reference signals include:

i) a cell-specific reference signal (CRS) shared by all UEs in a cell;
ii) a UE-specific reference signal for a specific UE only;
iii) a DMRS transmitted for coherent demodulation when a PDSCH is transmitted;
iv) a channel state information reference signal (CSI-RS) for delivering channel state information (CSI) when a downlink DMRS is transmitted;
v) a multimedia broadcast single frequency network (MBSFN) reference signal transmitted for coherent demodulation of a signal transmitted in MBSFN mode; and
vi) a positioning reference signal used to estimate geographic position information of a UE.

Reference signals can also be classified into reference signals for channel information acquisition and reference signals for data demodulation. The former needs to be transmitted over a wide band, as it is used for a UE to acquire channel information on downlink transmission, and it is received by a UE even if the UE does not receive downlink data in a specific subframe. This reference signal is used even in a handover situation. The latter is transmitted along with the corresponding resource by an eNB when the eNB transmits a downlink signal and is used by a UE to demodulate data through channel measurement. This reference signal needs to be transmitted in the region in which the data is transmitted.

Basic Description of HARQ Operation

In an LTE FDD system, 8 stop-and-wait (SAW) HARQ processes are supported with a constant round-trip time (RTT) of 8 ms on both UL and DL. Each HARQ process is defined by a unique HARQ process identifier (or number) of a 3-bit size (a 4-bit size in the case of LTE TDD). The reception end (i.e., a UE in a DL HARQ process and an eNodeB in a UL HARQ process) requires individual soft buffer allocation for combining retransmitted data. Further, in the LTE system, for a HARQ operation, information such as a new data indicator (NDI), a redundancy version (RV), and a modulation and coding scheme (MCS) level is defined as being signaled to the reception end. The DL HARQ process of the LTE system is an adaptive asynchronous scheme. Therefore, DCI for a HARQ process is explicitly accompanied in every DL transmission. On the other hand, the UL HARQ process of the LTE system is a synchronous scheme and may support both adaptive and non-adaptive schemes. The UL non-adaptive HARQ scheme requires a preset RV sequence, i.e., a sequence such as 0, 2, 3, 1, 0, 2, 3, 1, . . .
, for consecutive packet transmissions, because the UL non-adaptive HARQ scheme is not accompanied by explicit signaling of control information. In the UL adaptive HARQ scheme, however, an RV is explicitly signaled.

In the FDD system, a UE may transmit HARQ ACK/NACK information in subframe index n for a PDSCH transmission received in subframe index (n−k) (e.g., k=4 in the LTE system). The UE may determine the PUCCH resource index on which it is to transmit HARQ ACK/NACK in subframe n from the PDCCH indicating the PDSCH transmission in subframe (n−k). For example, the PUCCH resource index in the LTE system is determined as follows.

n_PUCCH^(1) = n_CCE + N_PUCCH^(1)   [Equation 1]

Herein, n_PUCCH^(1) represents the resource index of PUCCH format 1 for ACK/NACK transmission, N_PUCCH^(1) represents a signaling value received from a higher layer, and n_CCE is the smallest of the CCE indexes used for the PDCCH transmission. A cyclic shift, an orthogonal spreading code, and a PRB for PUCCH format 1a/1b are acquired from n_PUCCH^(1). Each PUCCH resource index corresponds to a PUCCH resource for ACK/NACK. For example, if a PDCCH including CCEs 4, 5, and 6 delivers scheduling information for a PDSCH to a UE and CCE 4 is linked to PUCCH resource index 4, the UE transmits ACK/NACK to a BS on PUCCH resource 4 corresponding to CCE 4 constituting the PDCCH.

Next, ACK/NACK transmission in a TDD mode will be described. In the TDD mode, since DL transmission and UL transmission are distinguished from each other by time, subframes within one radio frame are divided into DL subframes and UL subframes. For detailed UL-DL configurations in the TDD mode, reference is made to Table 1. In a TDD system, a UE may transmit, in one UL subframe, ACK/NACK information for PDSCH transmissions in one or more DL subframes. The UE may transmit HARQ ACK/NACK information in UL subframe n for a PDSCH transmission received in DL subframe n−k, where k is given according to the UL-DL configuration. For example, for the UL-DL configurations of Table 1, a set of DL association indexes K: {k0, k1, . . . , kM−1} may be given as shown in Table 5.

TABLE 5

TDD UL/DL configuration | Subframe n=0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9
0 | — | — | 6 | — | 4 | — | — | 6 | — | 4
1 | — | — | 7, 6 | 4 | — | — | — | 7, 6 | 4 | —
2 | — | — | 8, 7, 4, 6 | — | — | — | — | 8, 7, 4, 6 | — | —
3 | — | — | 7, 6, 11 | 6, 5 | 5, 4 | — | — | — | — | —
4 | — | — | 12, 8, 7, 11 | 6, 5, 4, 7 | — | — | — | — | — | —
5 | — | — | 13, 12, 9, 8, 7, 5, 4, 11, 6 | — | — | — | — | — | — | —
6 | — | — | 7 | 7 | 5 | — | — | 7 | 7 | —

For example, in the case of UL-DL configuration 0 given in the above table, k is 4 in UL subframe 9. Thus, ACK/NACK information for data received in DL subframe 5 (=9−4) may be transmitted in UL subframe 9 (a worked example is sketched below).

As more communication devices have demanded higher communication capacity, the need for enhanced mobile broadband (eMBB) communication relative to legacy radio access technology has arisen. In addition, massive machine type communication (MTC), which provides various services anytime and anywhere by connecting a plurality of devices and objects to each other, is also one of the main issues to be considered in next-generation communication. Further, a communication system designed in consideration of services/UEs sensitive to reliability and latency is under discussion. Thus, the introduction of next-generation radio access technology has been discussed, taking into consideration eMBB communication, massive MTC (mMTC), ultra-reliable and low-latency communication (URLLC), and the like. In the present disclosure, the above technology is referred to as new radio access technology (new RAT) for convenience of description. Hereinafter, a proposed method will be described focusing on a new RAT system for convenience of description.
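As referenced above, the following is a minimal worked sketch of the FDD/TDD HARQ-ACK timing and the PUCCH resource mapping of Equation 1, assuming illustrative function names; the association sets are transcribed from Table 5 and the helper names are not part of any specification.

```python
# Downlink association set K per TDD UL-DL configuration (Table 5):
# TDD_K[config][ul_subframe] lists the k values such that the ACK/NACK for a
# PDSCH in DL subframe (n - k) is carried in UL subframe n.
TDD_K = {
    0: {2: [6], 4: [4], 7: [6], 9: [4]},
    1: {2: [7, 6], 3: [4], 7: [7, 6], 8: [4]},
    2: {2: [8, 7, 4, 6], 7: [8, 7, 4, 6]},
    3: {2: [7, 6, 11], 3: [6, 5], 4: [5, 4]},
    4: {2: [12, 8, 7, 11], 3: [6, 5, 4, 7]},
    5: {2: [13, 12, 9, 8, 7, 5, 4, 11, 6]},
    6: {2: [7], 3: [7], 4: [5], 7: [7], 8: [7]},
}

def fdd_ack_subframe(dl_subframe: int, k: int = 4) -> int:
    """FDD: ACK/NACK for a PDSCH in subframe n is sent in subframe n + k."""
    return (dl_subframe + k) % 10

def pucch_format1_resource(n_cce: int, n_pucch_offset: int) -> int:
    """Equation 1: n_PUCCH(1) = n_CCE + N_PUCCH(1), with n_CCE the lowest
    CCE index of the scheduling PDCCH."""
    return n_cce + n_pucch_offset

def tdd_dl_subframes_acked(config: int, ul_subframe: int) -> list:
    """DL subframes (n - k) whose ACK/NACK is carried in UL subframe n."""
    return [(ul_subframe - k) % 10 for k in TDD_K[config].get(ul_subframe, [])]

if __name__ == "__main__":
    # PDCCH occupying CCEs 4, 5, 6 with higher-layer offset 0 -> resource 4.
    print(pucch_format1_resource(n_cce=4, n_pucch_offset=0))  # 4
    # UL-DL configuration 0, UL subframe 9: k = 4, so DL subframe 5 is acked.
    print(tdd_dl_subframes_acked(config=0, ul_subframe=9))    # [5]
```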
However, the range of systems to which the proposed methods described hereinafter are applied may be extended to other systems, such as a 3GPP LTE/LTE-A system, in addition to the new RAT system.

Fast Retransmission

Configurability of HARQ-ACK Behaviors

A URLLC UE may demand a fast HARQ-ACK procedure in order to satisfy stricter latency requirements. To satisfy these requirements, PRACH/SR transmission and/or short TTI (sTTI) based HARQ-ACK processing may be considered by adjusting numerology such as the TTI length. However, for a UE located relatively close to a cell edge, sTTI transmission may not be desirable in terms of securing reliability. To reduce the latency of such a UE, a method of permitting retransmission without HARQ-ACK and/or retransmission scheduling, or of performing repetition (transmission) in advance by increasing the resources or the TTI to lower the code rate, may be considered. Such an operation mode is referred to as a fast retransmission mode in this disclosure. Characteristically, a regular HARQ-ACK based retransmission mode and a fast retransmission mode are configurable, and a rule may be defined such that the retransmission operation of the UE conforms to the corresponding configuration. Detailed examples of the configuration are described below.

Proposal 1: The HARQ-ACK behavior of the UE may be determined through a higher layer or physical layer signal.

Proposal 2: The HARQ-ACK behavior of the UE may be determined per service type. Herein, the service type may represent traffic usage such as eMBB/URLLC/mMTC. As an example, for eMBB, the regular HARQ-ACK based retransmission mode may be applied and, for URLLC, the fast retransmission mode without HARQ-ACK may be applied.

Proposal 3: The HARQ-ACK behavior of the UE may be determined per preconfigured time/frequency resource (set). As an example, the HARQ-ACK behavior may be determined for a specific resource, and the UE may assume that the HARQ-ACK behavior of the corresponding specific mode is applied to data scheduled for that resource.

Proposal 4: The HARQ-ACK behavior of the UE may be determined per DCI format(s) and/or search space of a control channel. For example, when scheduling is performed by a control channel of a specific DCI format, the regular HARQ-ACK based retransmission mode may be applied and, for the other DCI formats, the fast retransmission mode without HARQ-ACK may be applied.

Proposal 5: The HARQ-ACK behavior of the UE may be determined per MCS and/or quality of service (QoS) class (set). That is, the regular HARQ-ACK based retransmission mode or the fast retransmission mode without HARQ-ACK may be applied per MCS and/or QoS class (set).

Proposal 6: A plurality of target block error rates (BLERs) may be defined, and a rule may be defined such that a different HARQ-ACK behavior is applied to each BLER. As an example, a rule may be defined such that, if the target BLER for specific traffic is defined as 10^-3, the UE operates in the regular HARQ-ACK mode and, if the target BLER is defined as 10^-5, the UE operates in the fast retransmission mode.

A rule may be defined such that configurability of the HARQ-ACK mode is applied to initial transmission and/or retransmission.

CSI Feedback

For one data transmission, the target BLERs based on the regular HARQ-ACK mode and the fast retransmission mode may differ with respect to the same use case. For example, if the data BLER target according to the regular HARQ-ACK procedure is 10^-3, the data BLER when fast retransmission is performed twice may be 10^-6.
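A minimal numeric check of the example above, assuming independent errors across transmissions; residual_bler is an illustrative helper, not a quantity defined by the disclosure.

```python
def residual_bler(per_tx_bler: float, num_transmissions: int) -> float:
    """Probability that all num_transmissions copies of a packet fail,
    assuming independent errors per transmission."""
    return per_tx_bler ** num_transmissions

# Per-transmission BLER of 1e-3; "fast retransmission performed twice"
# read as two transmissions in total reproduces the 1e-6 figure above:
print(residual_bler(1e-3, 2))  # ~1e-06
```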
Therefore, the target BLER according to the number of retransmissions/repetitions defined for the fast retransmission mode may be different from the target BLER in the regular HARQ-ACK mode. In this situation, the MCS derived according to the channel estimation performance of the UE may differ. As an example, the derived MCSs may differ when the CSI reference resource is one slot and the target BLER is 10^-5 and when the CSI reference resource is one mini-slot and the target BLER is 10^-3. Accordingly, a rule may be defined such that the CSI reference resource and/or the target BLER is differently configured according to the HARQ-ACK mode. Alternatively, a rule may be defined such that the CSI reference resources and/or the target BLERs of initial transmission and retransmission are differently configured. Alternatively, a rule may be defined such that the CSI reference resource and/or the target BLER is differently configured per numerology (or TTI length). That is, separate CSI feedback may be reported. More generally, the numerology and the HARQ scheme (i.e., an indication of which HARQ-ACK mode is to be used) may be included in the configuration of the reference resource which is to be used during CSI calculation. That is, the configuration of the reference resource which is to be used during CSI calculation may be configured differently according to the numerology and the HARQ-ACK mode.

Details on Retransmission without HARQ-ACK

In performing retransmission without a HARQ-ACK response for initial transmission or a specific transmission, an operation of performing retransmission without scheduling from the network may be defined. Alternatively, in performing the retransmission operation without the HARQ-ACK response for initial transmission or a specific transmission, an operation of scheduling retransmission in advance through multi-TTI scheduling may be defined.

When retransmission is performed without a DL assignment or UL grant from the network, a rule may be defined such that the retransmission operation is performed after a time which is predefined relative to the previous transmission or configured through a higher layer or physical layer signal. Alternatively, a rule may be defined such that the retransmission operation is performed in the first available TTI after the previous transmission (e.g., with the same transmission direction as the previous transmission).

Alternatively, the TTI length used for the regular HARQ-ACK behavior may differ from the TTI length used for the fast retransmission mode behavior. As an example, the regular HARQ-ACK behavior may be based on a mini-slot or a sub-slot, and the fast retransmission behavior may be based on a slot. In this case, the MCS may still be configured based on the mini-slot or sub-slot. When the mini-slot or the sub-slot is mapped to the slot, a scheme of applying rate matching (the RV may be changed) or a repetition format may be used (k repetitions of the mini-slot or sub-slot, where k = slot size/mini-slot size). More generally, the regular HARQ-ACK mode and the fast retransmission mode may be interpreted as different configurations of the TTIs in which data is scheduled. The difference compared to a normal TTI length is that a code rate corresponding to sTTI transmission is applied during transmission of a long TTI: a resource for the long TTI transmission is configured within the sTTI, and the resource is repeatedly configured in the time domain. The above scheme has the effect of consecutively performing retransmission multiple times.
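The repetition-format option just described can be sketched as follows; the symbol counts, the RV pattern, and the function name are illustrative assumptions, not a specified mapping.

```python
RV_PATTERN = [0, 2, 3, 1]  # predefined RV order used in LTE-style HARQ

def map_stti_to_slot(slot_symbols: int, mini_slot_symbols: int,
                     start_rv_index: int = 0):
    """Fill one slot with k = slot_symbols / mini_slot_symbols repetitions
    of a mini-slot-sized transmission, cycling the RV per repetition.
    Returns (symbol_offset, rv) pairs."""
    assert slot_symbols % mini_slot_symbols == 0
    k = slot_symbols // mini_slot_symbols
    return [(i * mini_slot_symbols,
             RV_PATTERN[(start_rv_index + i) % len(RV_PATTERN)])
            for i in range(k)]

# A 14-symbol slot carrying 7 repetitions of a 2-symbol mini-slot transport:
print(map_stti_to_slot(14, 2))
# [(0, 0), (2, 2), (4, 3), (6, 1), (8, 0), (10, 2), (12, 3)]
```

Each repetition carries the same transport block at the short-TTI code rate, which yields the effect of several consecutive retransmissions without any feedback round trip.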
During configuration of the fast retransmission mode, the UE may assume that an sTTI length, which is a reference to calculate the size of a slot, the number of retransmissions, and an MCS, has been configured. If retransmission is performed without DL assignment or UL grant of the network, the maximum number of retransmissions may be predefined/agreed upon or may be configured through a higher layer or physical layer signal. Characteristically, the maximum number of retransmissions may be differently configured per (1) service type, (2) preconfigured time/frequency resource (set), (3) DCI format(s) and/or search space for a control channel, (4) MCS and/or QoS class (set), or (5) target BLER. If retransmission is performed without DL assignment or UL grant of the network, a rule may be defined such that a predefined MCS or an MCS configured through the higher layer signal is used for retransmission. Alternatively, a rule may be defined such that an MCS to which an offset predefined from initial transmission or previous transmission or an offset configured through the higher layer signal is applied is used for retransmission. During a fast retransmission operation, adjustment of the MCS may always be applied or may be applied only to retransmission for initial transmission. Alternatively, adjustment of the MCS may be applied only to a predefined or signaled specific number of transmissions. If retransmission is performed without DL assignment or UL grant of the network, a rule may be defined such that UL transmission power which is predefined or UL transmission power which is configured through the higher layer signal is used for retransmission. Alternatively, a rule may be defined such that UL transmission power to which an offset predefined from initial transmission or previous transmission or an offset configured through the higher layer signal is applied is used for retransmission. During the fast retransmission operation, power control described above may always be applied or may be applied only to retransmission for initial transmission. Alternatively, power control described above may be applied only to a predefined or signaled specific number of retransmissions. If retransmission is performed without DL assignment or UL grant of the network, a rule may be simply defined to maintain the same resource assignment as previous transmission. Alternatively, a rule may be defined such that resource hopping is applied to retransmission according to a predefined/agreed-upon pattern. As an example, in order to further obtain frequency diversity, retransmission may be performed on a frequency resource mirrored based on a center frequency. Alternatively, a rule may be defined such that one of a plurality of predefined/agreed-upon patterns is configured through a higher layer signal or is indicated through a physical layer signal during scheduling for previous transmission. Alternatively, a rule may be defined such that retransmission is performed on a frequency resource separated from a resource assigned for previous transmission by an offset which is predefined/agreed upon, configured through the higher layer signal, or indicated through the physical layer signal during scheduling of previous transmission. Resource hopping may always be applied during the fast retransmission operation or may be applied only to retransmission for initial transmission. Alternatively, resource hopping may be applied only to a predefined or signaled specific number of retransmissions. 
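A minimal sketch of the grant-free retransmission hopping options described above; the band size, the mirroring rule, and the offset handling are illustrative assumptions rather than a specification.

```python
def mirror_allocation(start_rb: int, num_rb: int, total_rb: int) -> int:
    """Mirror an RB allocation about the band center: the retransmission
    occupies the allocation reflected around the center frequency."""
    return total_rb - (start_rb + num_rb)

def offset_allocation(start_rb: int, offset: int, total_rb: int) -> int:
    """Shift the allocation by a (pre)configured or signaled offset,
    wrapping at the band edge."""
    return (start_rb + offset) % total_rb

# Initial transmission on RBs 10..13 in a 100-RB band:
print(mirror_allocation(start_rb=10, num_rb=4, total_rb=100))   # 86 -> RBs 86..89
print(offset_allocation(start_rb=10, offset=48, total_rb=100))  # 58
```

Mirroring maximizes the frequency separation between the initial transmission and the retransmission, which is the frequency-diversity motivation given above.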
If retransmission is performed without a DL assignment or UL grant from the network, a rule may be defined such that the RV is automatically cyclic-shifted during retransmission according to a predefined/agreed-upon pattern. As an example, if the RV is predefined to be used in the order {0, 2, 3, 1}, the RVs corresponding to the respective retransmissions while retransmission is performed three times may be 2, 3, and 1.

For retransmission, a parity part may be transmitted. A parity used for retransmission is desirably configured not to overlap with a parity used for initial transmission. Further, the parity may be selected in a direction that increases spectral efficiency (e.g., for a low density parity check (LDPC) code, a parity may be selected (1) randomly, (2) in correspondence to a parity having a large variable node degree, (3) in correspondence to a parity having a small variable node degree, or (4) in order of the short iteration count within which a parity bit punctured by a belief propagation (BP) algorithm based decoder is updated by reception of information from another node). Alternatively, in the fast retransmission mode, parities starting from the parity at which transmission stopped may be used during retransmission. As another example, when an LDPC encoding result is {info, parity1, parity2, parity3} and an information bit is punctured, {punctured_info, info, parity1, parity2, parity3} may be obtained. The following examples of configuring the RV order may be considered when it is assumed that the information bit is always transmitted (both orderings are sketched after the description of FIG. 5 below):

parity1 → parity2 → parity3 → parity1 → parity2 → parity3 → . . . (punctured_info is not retransmitted)

parity1 → parity2 → parity3 → punctured_info → parity1 → parity2 → parity3 → . . .

While the above rules have been described under the assumption that retransmission is performed without scheduling from the network, the present disclosure is applicable even to the case in which the fast retransmission operation is performed through multi-TTI scheduling.

Since examples of the above-described proposed methods may also be included among the implementation methods of the present disclosure, the examples may be regarded as proposed methods in their own right. Although the above-described proposed methods may be implemented independently, they may also be implemented in a combined (aggregated) form of parts of the proposed methods. A rule may be defined such that the eNB informs the UE of information as to whether the proposed methods are applied (or information about the rules of the proposed methods) through a predefined signal (e.g., a physical layer signal or a higher-layer signal).

FIG. 5 illustrates an operation according to an embodiment of the present disclosure. In FIG. 5, a method of performing fast retransmission in a wireless communication system is illustrated. The method may be performed by a UE. The UE may receive DL data or transmit UL data (S510). Then, the UE may perform a HARQ behavior including retransmission for the DL data or UL data without HARQ-ACK transmission and reception or without retransmission scheduling (S520).
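As referenced above, the two RV orderings for the LDPC parity parts can be sketched as follows; the part names mirror the {punctured_info, info, parity1, parity2, parity3} notation and the helper is purely illustrative.

```python
from itertools import cycle, islice

def rv_order(num_retransmissions: int, retransmit_punctured_info: bool):
    """Cyclic ordering of the parts sent on successive retransmissions,
    assuming the systematic part ("info") is always transmitted."""
    if retransmit_punctured_info:
        base = ["parity1", "parity2", "parity3", "punctured_info"]
    else:
        base = ["parity1", "parity2", "parity3"]
    return list(islice(cycle(base), num_retransmissions))

print(rv_order(6, retransmit_punctured_info=False))
# ['parity1', 'parity2', 'parity3', 'parity1', 'parity2', 'parity3']
print(rv_order(5, retransmit_punctured_info=True))
# ['parity1', 'parity2', 'parity3', 'punctured_info', 'parity1']
```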
In the procedure of FIG. 5, the HARQ behavior may be configured in one of the following ways: whether to perform retransmission may be configured (1) per traffic usage or service type of the DL data or the UL data, (2) per time or frequency resource set in which the DL data or the UL data is transmitted, (3) per DCI format for scheduling the DL data or the UL data and/or per search space of a DL control channel, (4) per MCS and/or QoS class set of the DL data or the UL data, or (5) according to the target BLER of the DL data or the UL data.

The CSI reference resource for this HARQ behavior may be different from the CSI reference resource for a regular HARQ behavior with HARQ-ACK transmission or retransmission scheduling.

Retransmission according to the HARQ behavior may be performed after a predetermined time from the DL data transmission prior to retransmission. Alternatively, retransmission according to the HARQ behavior may be performed in the first TTI which is available after the DL data transmission prior to retransmission. If the TTI length used for retransmission differs from the TTI length used for the regular HARQ behavior with HARQ-ACK transmission or retransmission scheduling, the MCS for retransmission may be configured based on a TTI according to the regular HARQ behavior. The MCS for retransmission may be determined by applying a predetermined offset to the MCS used for initial transmission or for the transmission prior to retransmission. The MCS for retransmission may be used only for retransmission of the initial transmission or for retransmissions within a predetermined number of times.

The maximum number of retransmissions may be individually configured per traffic usage or service type of the DL data, per time or frequency resource set in which the DL data is transmitted, per DCI format for scheduling the DL data and/or search space of a DL control channel, or per MCS and/or QoS class set of the DL data.

Transmission power for retransmission may be determined by applying a predetermined offset to the transmission power used for initial transmission or for the transmission prior to retransmission. The transmission power for retransmission may be used only for retransmission of the initial transmission or for retransmissions within a predetermined number of times.

Resource assignment for retransmission may be performed on a frequency resource separated by a predetermined offset from the resource assigned for the transmission prior to retransmission. Resource assignment for retransmission may include frequency hopping according to one of a plurality of predetermined patterns. An RV for retransmission may be cyclically shifted according to a predetermined pattern. The HARQ behavior may be used only for retransmission of the initial transmission or for retransmissions within a predetermined number of times.
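The following is a minimal sketch of the UE-side FIG. 5 procedure under the fast retransmission mode, combining several of the configurable rules above (maximum retransmission count, MCS/power offsets, RV cycling). All parameter names and default values are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class FastRetxConfig:
    max_retx: int = 3             # predefined or higher-layer configured
    mcs_offset: int = -2          # offset relative to the initial MCS
    power_offset_db: float = 1.0  # offset relative to the initial power
    rv_pattern: tuple = (0, 2, 3, 1)

def fast_retransmission(tb, cfg: FastRetxConfig, send):
    """S510/S520: transmit a TB, then retransmit without waiting for
    HARQ-ACK feedback or retransmission scheduling."""
    send(tb, rv=cfg.rv_pattern[0], mcs_offset=0, power_offset_db=0.0)
    for i in range(1, cfg.max_retx + 1):
        rv = cfg.rv_pattern[i % len(cfg.rv_pattern)]  # e.g. 2, 3, 1
        send(tb, rv=rv, mcs_offset=cfg.mcs_offset,
             power_offset_db=cfg.power_offset_db)

# Example: log what would be transmitted for three automatic retransmissions.
fast_retransmission("TB#0", FastRetxConfig(),
                    send=lambda tb, **p: print(tb, p))
```

With the default pattern, the three retransmissions carry RVs 2, 3, and 1, matching the cyclic-shift example given earlier.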
FIG. 6 is a block diagram of a transmitting device 10 and a receiving device 20 configured to implement exemplary embodiments of the present disclosure.

Referring to FIG. 6, the transmitting device 10 and the receiving device 20 respectively include transmitter/receivers 13 and 23 for transmitting and receiving radio signals carrying information, data, signals, and/or messages, memories 12 and 22 for storing information related to communication in a wireless communication system, and processors 11 and 21 operationally connected to the transmitter/receivers 13 and 23 and the memories 12 and 22 and configured to control the memories 12 and 22 and/or the transmitter/receivers 13 and 23 so as to perform at least one of the above-described embodiments of the present disclosure.

The memories 12 and 22 may store programs for processing and control of the processors 11 and 21 and may temporarily store input/output information. The memories 12 and 22 may be used as buffers. The processors 11 and 21 control the overall operation of various modules in the transmitting device 10 or the receiving device 20. The processors 11 and 21 may perform various control functions to implement the present disclosure. The processors 11 and 21 may be controllers, microcontrollers, microprocessors, or microcomputers. The processors 11 and 21 may be implemented by hardware, firmware, software, or a combination thereof. In a hardware configuration, Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), or Field Programmable Gate Arrays (FPGAs) may be included in the processors 11 and 21. If the present disclosure is implemented using firmware or software, the firmware or software may be configured to include modules, procedures, functions, etc. performing the functions or operations of the present disclosure. The firmware or software configured to perform the present disclosure may be included in the processors 11 and 21 or stored in the memories 12 and 22 so as to be driven by the processors 11 and 21.

The processor 11 of the transmitting device 10 codes and modulates signals and/or data, which are scheduled by the processor 11 or a scheduler connected to the processor 11, to be transmitted to the outside. The coded and modulated signals and/or data are transmitted to the transmitter/receiver 13. For example, the processor 11 converts a data stream to be transmitted into K layers through demultiplexing, channel coding, scrambling and modulation. The coded data stream is also referred to as a codeword and is equivalent to a transport block, which is a data block provided by the MAC layer. One transport block (TB) is coded into one codeword, and each codeword is transmitted to the receiving device in the form of one or more layers. For frequency up-conversion, the transmitter/receiver 13 may include an oscillator. The transmitter/receiver 13 may include Nt (where Nt is a positive integer) transmit antennas.

A signal processing process of the receiving device 20 is the reverse of the signal processing process of the transmitting device 10. Under the control of the processor 21, the transmitter/receiver 23 of the receiving device 20 receives RF signals transmitted by the transmitting device 10. The transmitter/receiver 23 may include Nr receive antennas and frequency down-converts each signal received through the receive antennas into a baseband signal. The transmitter/receiver 23 may include an oscillator for frequency down-conversion. The processor 21 decodes and demodulates the radio signals received through the receive antennas and restores the data that the transmitting device 10 wished to transmit. The transmitter/receivers 13 and 23 include one or more antennas.
An antenna performs a function of transmitting signals processed by the transmitter/receivers 13 and 23 to the exterior, or of receiving radio signals from the exterior and transferring them to the transmitter/receivers 13 and 23. The antenna may also be called an antenna port. Each antenna may correspond to one physical antenna or may be configured by a combination of more than one physical antenna element. A signal transmitted through each antenna cannot be decomposed further by the receiving device 20. A reference signal (RS) transmitted through an antenna defines the corresponding antenna as viewed from the receiving device 20 and enables the receiving device 20 to perform channel estimation for the antenna, irrespective of whether the channel is a single RF channel from one physical antenna or a composite channel from a plurality of physical antenna elements including the antenna. That is, an antenna is defined such that a channel transmitting a symbol on the antenna may be derived from the channel transmitting another symbol on the same antenna. A transmitter/receiver supporting a MIMO function of transmitting and receiving data using a plurality of antennas may be connected to two or more antennas.

In embodiments of the present disclosure, a UE serves as the transmitting device 10 on uplink and as the receiving device 20 on downlink. In embodiments of the present disclosure, an eNB serves as the receiving device 20 on uplink and as the transmitting device 10 on downlink. The transmitting device and/or the receiving device may be configured as a combination of one or more embodiments of the present disclosure.

The detailed description of the exemplary embodiments of the present disclosure has been given to enable those skilled in the art to implement and practice the disclosure. Although the disclosure has been described with reference to the exemplary embodiments, those skilled in the art will appreciate that various modifications and variations can be made in the present disclosure without departing from the spirit or scope of the disclosure described in the appended claims. For example, those skilled in the art may use each construction described in the above embodiments in combination with one another. Accordingly, the disclosure should not be limited to the specific embodiments described herein, but should be accorded the broadest scope consistent with the principles and novel features disclosed herein.

INDUSTRIAL APPLICABILITY

The present disclosure may be used for a wireless communication apparatus such as a user equipment (UE), a relay, and an eNB.
DETAILED DESCRIPTION

The detailed description set forth below, in connection with the appended drawings, is intended as a description of various configurations and is not intended to represent the only configurations in which the concepts described herein may be practiced. The detailed description includes specific details for the purpose of providing a thorough understanding of the various concepts. However, it will be apparent to those skilled in the art that these concepts may be practiced without these specific details. In some instances, well-known structures and components are shown in block diagram form in order to avoid obscuring such concepts.

This disclosure relates generally to wireless communications systems, also referred to as wireless communications networks. In various embodiments, the techniques and apparatus may be used for wireless communication networks such as code division multiple access (CDMA) networks, time division multiple access (TDMA) networks, frequency division multiple access (FDMA) networks, orthogonal FDMA (OFDMA) networks, single-carrier FDMA (SC-FDMA) networks, LTE networks, Global System for Mobile Communications (GSM) networks, 5th Generation (5G) or new radio (NR) networks, as well as other communications networks. As described herein, the terms "networks" and "systems" may be used interchangeably.

An OFDMA network may implement a radio technology such as evolved UTRA (E-UTRA), Institute of Electrical and Electronics Engineers (IEEE) 802.11, IEEE 802.16, IEEE 802.20, flash-OFDM and the like. UTRA, E-UTRA, and GSM are part of the universal mobile telecommunication system (UMTS). In particular, long term evolution (LTE) is a release of UMTS that uses E-UTRA. UTRA, E-UTRA, GSM, UMTS and LTE are described in documents provided from an organization named "3rd Generation Partnership Project" (3GPP), and cdma2000 is described in documents from an organization named "3rd Generation Partnership Project 2" (3GPP2). These various radio technologies and standards are known or are being developed. For example, the 3rd Generation Partnership Project (3GPP) is a collaboration between groups of telecommunications associations that aims to define a globally applicable third generation (3G) mobile phone specification. 3GPP long term evolution (LTE) is a 3GPP project which was aimed at improving the UMTS mobile phone standard. The 3GPP may define specifications for the next generation of mobile networks, mobile systems, and mobile devices. The present disclosure is concerned with the evolution of wireless technologies from LTE, 4G, 5G, NR, and beyond with shared access to wireless spectrum between networks using a collection of new and different radio access technologies or radio air interfaces. In particular, 5G networks contemplate diverse deployments, diverse spectrum, and diverse services and devices that may be implemented using an OFDM-based unified air interface. In order to achieve these goals, further enhancements to LTE and LTE-A are considered in addition to development of the new radio technology for 5G NR networks.
The 5G NR will be capable of scaling to provide coverage (1) to a massive Internet of things (IoT) with an ultra-high density (e.g., ~1M nodes/km2), ultra-low complexity (e.g., ~10s of bits/sec), ultra-low energy (e.g., ~10+ years of battery life), and deep coverage with the capability to reach challenging locations; (2) including mission-critical control with strong security to safeguard sensitive personal, financial, or classified information, ultra-high reliability (e.g., ~99.9999% reliability), ultra-low latency (e.g., ~1 ms), and users with wide ranges of mobility or lack thereof; and (3) with enhanced mobile broadband including extreme high capacity (e.g., ~10 Tbps/km2), extreme data rates (e.g., multi-Gbps rate, 100+ Mbps user experienced rates), and deep awareness with advanced discovery and optimizations.

The 5G NR may be implemented to use optimized OFDM-based waveforms with scalable numerology and transmission time interval (TTI); having a common, flexible framework to efficiently multiplex services and features with a dynamic, low-latency time division duplex (TDD)/frequency division duplex (FDD) design; and with advanced wireless technologies, such as massive multiple input, multiple output (MIMO), robust millimeter wave (mmWave) transmissions, advanced channel coding, and device-centric mobility. Scalability of the numerology in 5G NR, with scaling of subcarrier spacing, may efficiently address operating diverse services across diverse spectrum and diverse deployments. For example, in various outdoor and macro coverage deployments of less than 3 GHz FDD/TDD implementations, subcarrier spacing may occur at 15 kHz, for example, over a bandwidth (BW) of 5, 10, or 20 MHz, and the like. For other various outdoor and small cell coverage deployments of TDD greater than 3 GHz, subcarrier spacing may occur at 30 kHz over an 80/100 MHz BW. For other various indoor wideband implementations, using a TDD over the unlicensed portion of the 5 GHz band, the subcarrier spacing may occur at 60 kHz over a 160 MHz BW. Finally, for various deployments transmitting with mmWave components at a TDD of 28 GHz, subcarrier spacing may occur at 120 kHz over a 500 MHz BW.

The scalable numerology of the 5G NR facilitates a scalable TTI for diverse latency and quality of service (QoS) requirements. For example, a shorter TTI may be used for low latency and high reliability, while a longer TTI may be used for higher spectral efficiency. The efficient multiplexing of long and short TTIs allows transmissions to start on symbol boundaries. 5G NR also contemplates a self-contained integrated subframe design with UL/downlink scheduling information, data, and acknowledgement in the same subframe. The self-contained integrated subframe supports communications in unlicensed or contention-based shared spectrum, and an adaptive UL/downlink that may be flexibly configured on a per-cell basis to dynamically switch between UL and downlink to meet the current traffic needs.

Various other aspects and features of the disclosure are further described below. It should be apparent that the teachings herein may be embodied in a wide variety of forms and that any specific structure, function, or both being disclosed herein is merely representative and not limiting. Based on the teachings herein, one of an ordinary level of skill in the art should appreciate that an aspect disclosed herein may be implemented independently of any other aspects and that two or more of these aspects may be combined in various ways.
For example, an apparatus may be implemented or a method may be practiced using any number of the aspects set forth herein. In addition, such an apparatus may be implemented or such a method may be practiced using other structure, functionality, or structure and functionality in addition to or other than one or more of the aspects set forth herein. For example, a method may be implemented as part of a system, device, apparatus, and/or as instructions stored on a computer readable medium for execution on a processor or computer. Furthermore, an aspect may comprise at least one element of a claim.

In a wireless communication network, a base station (BS) may configure a user equipment (UE) with a configured grant for autonomous transmission or non-scheduled transmission. Each configured grant is associated with a set of resources configured for the UE to transmit UL communications (e.g., data and/or control information) without being scheduled by the BS. The set of configured resources may occur periodically. The set of configured resources may correspond to transmission time occasions. In some instances, the UE may use the configured resources for autonomous or non-scheduled uplink data transmission. To improve communication reliability, the UE may apply hybrid automatic repeat request (HARQ) techniques to the UL data transmission. Additionally, the UE may perform the UL data transmission with repetitions using different redundancy versions to improve decoding performance at the BS. When operating over a licensed band, the BS may assign a HARQ process and/or a HARQ redundancy version for transmission in each transmission time occasion. In other words, the BS may provide a mapping or association between the HARQ process/redundancy version and the configured resource in the time domain. The UE may transmit UL HARQ data in the configured transmission occasions based on the association.

The present application describes mechanisms for unscheduled UL HARQ transmission using configured grant resources in a shared radio frequency band, which may be in a shared spectrum or an unlicensed spectrum. For example, a BS may configure a UE with a set of configured resources and a plurality of redundancy versions (RVNs) for unscheduled UL HARQ transmission using the configured resources. The UE may determine a RV sequence from the plurality of RVNs. The UE may map the RV sequence to transmission slots within a configured resource and transmit one or more redundancy versions of a transport block (TB) during one or more transmission slots within the configured resource. The UE may perform a listen-before-talk (LBT) prior to a transmission and may transmit the one or more redundancy versions of the TB after a successful LBT. In some aspects, the UE may perform the RV mapping for successful transmissions. For instance, the UE may select a RVN from the RV sequence sequentially for each slot in the configured resource, beginning at the slot where an initial transmission of the TB can be transmitted (e.g., after passing the LBT). In some aspects, the UE may perform the RV mapping beginning at the slot associated with the earliest LBT attempt, irrespective of whether a transmission attempt is successful or not. For instance, the UE may select a RVN from the RV sequence sequentially for each slot in the configured resource beginning at the slot where the first transmission attempt is performed. In some instances, the UE may cyclically wrap the RV sequence after using up all the RVNs in the sequence for mapping.
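The two RV-to-slot mapping options just described can be sketched as follows; the function name, the boolean flag, and the LBT representation are illustrative assumptions, not a specified behavior.

```python
def rv_for_slots(lbt_results, rv_sequence, from_first_success=True):
    """lbt_results[i] is True if LBT passed before slot i of the configured
    resource. Returns the RVN used in each slot (None if nothing is sent).
    Option (a): count RVNs from the first successful transmission slot.
    Option (b): count RVNs from the earliest LBT attempt, regardless of
    success, cyclically wrapping the sequence in both cases."""
    out, idx, started = [], 0, False
    for passed in lbt_results:
        if not started:
            started = passed if from_first_success else True
        if started:
            rv = rv_sequence[idx % len(rv_sequence)]  # cyclic wrap
            idx += 1
            out.append(rv if passed else None)
        else:
            out.append(None)
    return out

# LBT fails for the first two slots and passes from the third slot onward:
print(rv_for_slots([False, False, True, True, True, True], [0, 2, 3, 1]))
# option (a): [None, None, 0, 2, 3, 1]
print(rv_for_slots([False, False, True, True, True, True], [0, 2, 3, 1],
                   from_first_success=False))
# option (b): [None, None, 3, 1, 0, 2]
```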
In some aspects, the UE may order the plurality of RVNs in any suitable order in the RV sequence. In some aspects, the BS may configure the UE with a RV sequence including a set of RVNs arranged in a certain order, and thus the UE may use the configured RV sequence for RV-to-slot mapping. In some aspects, the BS may configure the UE with the RV-to-slot mapping. In some aspects, the UE may retransmit the TB in a subsequent configured resource. In some instances, the UE may reinitiate the RV mapping for the retransmission in the subsequent configured resource using the same mapping mechanisms as for the initial transmission. In some instances, the UE may resume from the last RVN in the RV sequence used in the initial transmission. In some instances, the UE may transmit one or more redundancy versions of the TB using any RVNs.

In some aspects, the UE may prioritize TBs of different HARQ processes for unscheduled transmission in a configured resource. In other words, the UE may determine priorities for the TBs of the different HARQ processes for unscheduled transmission in the configured resource. In some instances, the UE may transmit TBs of different HARQ processes in the order of MAC PDU preparation. In some instances, the UE may prioritize a retransmission over an initial transmission. In some instances, the UE may prioritize the TBs of the different HARQ processes based on data priorities and/or latency requirements of the HARQ processes. In other words, the UE may determine priorities for the TBs of the different HARQ processes based on the data priorities and/or latency requirements of the HARQ processes.

FIG. 1 illustrates a wireless communication network 100 according to some aspects of the present disclosure. The network 100 may be a 5G network. The network 100 includes a number of base stations (BSs) 105 (individually labeled as 105a, 105b, 105c, 105d, 105e, and 105f) and other network entities. A BS 105 may be a station that communicates with UEs 115 and may also be referred to as an evolved node B (eNB), a next generation eNB (gNB), an access point, and the like. Each BS 105 may provide communication coverage for a particular geographic area. In 3GPP, the term "cell" can refer to this particular geographic coverage area of a BS 105 and/or a BS subsystem serving the coverage area, depending on the context in which the term is used. A BS 105 may provide communication coverage for a macro cell or a small cell, such as a pico cell or a femto cell, and/or other types of cell. A macro cell generally covers a relatively large geographic area (e.g., several kilometers in radius) and may allow unrestricted access by UEs with service subscriptions with the network provider. A small cell, such as a pico cell, would generally cover a relatively smaller geographic area and may allow unrestricted access by UEs with service subscriptions with the network provider. A small cell, such as a femto cell, would also generally cover a relatively small geographic area (e.g., a home) and, in addition to unrestricted access, may also provide restricted access by UEs having an association with the femto cell (e.g., UEs in a closed subscriber group (CSG), UEs for users in the home, and the like). A BS for a macro cell may be referred to as a macro BS. A BS for a small cell may be referred to as a small cell BS, a pico BS, a femto BS, or a home BS. In the example shown in FIG. 1, the BSs 105d and 105e may be regular macro BSs, while the BSs 105a-105c may be macro BSs enabled with one of three-dimension (3D), full-dimension (FD), or massive MIMO.
The BSs 105a-105c may take advantage of their higher dimension MIMO capabilities to exploit 3D beamforming in both elevation and azimuth beamforming to increase coverage and capacity. The BS 105f may be a small cell BS which may be a home node or a portable access point. A BS 105 may support one or multiple (e.g., two, three, four, and the like) cells.

The network 100 may support synchronous or asynchronous operation. For synchronous operation, the BSs may have similar frame timing, and transmissions from different BSs may be approximately aligned in time. For asynchronous operation, the BSs may have different frame timing, and transmissions from different BSs may not be aligned in time.

The UEs 115 are dispersed throughout the wireless network 100, and each UE 115 may be stationary or mobile. A UE 115 may also be referred to as a terminal, a mobile station, a subscriber unit, a station, or the like. A UE 115 may be a cellular phone, a personal digital assistant (PDA), a wireless modem, a wireless communication device, a handheld device, a tablet computer, a laptop computer, a cordless phone, a wireless local loop (WLL) station, or the like. In one aspect, a UE 115 may be a device that includes a Universal Integrated Circuit Card (UICC). In another aspect, a UE may be a device that does not include a UICC. In some aspects, the UEs 115 that do not include UICCs may also be referred to as IoT devices or internet of everything (IoE) devices. The UEs 115a-115d are examples of mobile smart phone-type devices accessing the network 100. A UE 115 may also be a machine specifically configured for connected communication, including machine type communication (MTC), enhanced MTC (eMTC), narrowband IoT (NB-IoT), and the like. The UEs 115e-115h are examples of various machines configured for communication that access the network 100. The UEs 115i-115k are examples of vehicles equipped with wireless communication devices configured for communication that access the network 100. A UE 115 may be able to communicate with any type of the BSs, whether macro BS, small cell, or the like. In FIG. 1, a lightning bolt (e.g., communication links) indicates wireless transmissions between a UE 115 and a serving BS 105, which is a BS designated to serve the UE 115 on the downlink (DL) and/or uplink (UL), desired transmission between BSs 105, backhaul transmissions between BSs, or sidelink transmissions between UEs 115.

In operation, the BSs 105a-105c may serve the UEs 115a and 115b using 3D beamforming and coordinated spatial techniques, such as coordinated multipoint (CoMP) or multi-connectivity. The macro BS 105d may perform backhaul communications with the BSs 105a-105c, as well as with the small cell BS 105f. The macro BS 105d may also transmit multicast services which are subscribed to and received by the UEs 115c and 115d. Such multicast services may include mobile television or streaming video, or may include other services for providing community information, such as weather emergencies or alerts, such as Amber alerts or gray alerts.

The BSs 105 may also communicate with a core network. The core network may provide user authentication, access authorization, tracking, Internet Protocol (IP) connectivity, and other access, routing, or mobility functions. At least some of the BSs 105 (e.g., which may be an example of a gNB or an access node controller (ANC)) may interface with the core network through backhaul links (e.g., NG-C, NG-U, etc.) and may perform radio configuration and scheduling for communication with the UEs 115.
In various examples, the BSs 105 may communicate, either directly or indirectly (e.g., through the core network), with each other over backhaul links (e.g., X1, X2, etc.), which may be wired or wireless communication links.

The network 100 may also support mission critical communications with ultra-reliable and redundant links for mission critical devices, such as the UE 115e, which may be a drone. Redundant communication links with the UE 115e may include links from the macro BSs 105d and 105e, as well as links from the small cell BS 105f. Other machine type devices, such as the UE 115f (e.g., a thermometer), the UE 115g (e.g., a smart meter), and the UE 115h (e.g., a wearable device), may communicate through the network 100 either directly with BSs, such as the small cell BS 105f and the macro BS 105e, or in multi-step-size configurations by communicating with another user device which relays its information to the network, such as the UE 115f communicating temperature measurement information to the smart meter, the UE 115g, which is then reported to the network through the small cell BS 105f. The network 100 may also provide additional network efficiency through dynamic, low-latency TDD/FDD communications, such as V2V, V2X, and C-V2X communications between a UE 115i, 115j, or 115k and other UEs 115, and/or vehicle-to-infrastructure (V2I) communications between a UE 115i, 115j, or 115k and a BS 105.

In some implementations, the network 100 utilizes OFDM-based waveforms for communications. An OFDM-based system may partition the system BW into multiple (K) orthogonal subcarriers, which are also commonly referred to as subcarriers, tones, bins, or the like. Each subcarrier may be modulated with data. In some instances, the subcarrier spacing between adjacent subcarriers may be fixed, and the total number of subcarriers (K) may be dependent on the system BW. The system BW may also be partitioned into subbands. In other instances, the subcarrier spacing and/or the duration of TTIs may be scalable.

In some aspects, the BSs 105 can assign or schedule transmission resources (e.g., in the form of time-frequency resource blocks (RBs)) for downlink (DL) and uplink (UL) transmissions in the network 100. DL refers to the transmission direction from a BS 105 to a UE 115, whereas UL refers to the transmission direction from a UE 115 to a BS 105. The communication can be in the form of radio frames. A radio frame may be divided into a plurality of subframes or slots, for example, about 10. Each slot may be further divided into mini-slots. In a FDD mode, simultaneous UL and DL transmissions may occur in different frequency bands. For example, each subframe includes a UL subframe in a UL frequency band and a DL subframe in a DL frequency band. In a TDD mode, UL and DL transmissions occur at different time periods using the same frequency band. For example, a subset of the subframes (e.g., DL subframes) in a radio frame may be used for DL transmissions and another subset of the subframes (e.g., UL subframes) in the radio frame may be used for UL transmissions.

The DL subframes and the UL subframes can be further divided into several regions. For example, each DL or UL subframe may have pre-defined regions for transmissions of reference signals, control information, and data. Reference signals are predetermined signals that facilitate the communications between the BSs 105 and the UEs 115.
For example, a reference signal can have a particular pilot pattern or structure, where pilot tones may span across an operational BW or frequency band, each positioned at a pre-defined time and a pre-defined frequency. For example, a BS 105 may transmit cell-specific reference signals (CRSs) and/or channel state information-reference signals (CSI-RSs) to enable a UE 115 to estimate a DL channel. Similarly, a UE 115 may transmit sounding reference signals (SRSs) to enable a BS 105 to estimate a UL channel. Control information may include resource assignments and protocol controls. Data may include protocol data and/or operational data. In some aspects, the BSs 105 and the UEs 115 may communicate using self-contained subframes. A self-contained subframe may include a portion for DL communication and a portion for UL communication. A self-contained subframe can be DL-centric or UL-centric. A DL-centric subframe may include a longer duration for DL communication than for UL communication. A UL-centric subframe may include a longer duration for UL communication than for DL communication.

In some aspects, the network 100 may be an NR network deployed over a licensed spectrum. The BSs 105 can transmit synchronization signals (e.g., including a primary synchronization signal (PSS) and a secondary synchronization signal (SSS)) in the network 100 to facilitate synchronization. The BSs 105 can broadcast system information associated with the network 100 (e.g., including a master information block (MIB), remaining system information (RMSI), and other system information (OSI)) to facilitate initial network access. In some instances, the BSs 105 may broadcast the PSS, the SSS, and/or the MIB in the form of synchronization signal blocks (SSBs) over a physical broadcast channel (PBCH) and may broadcast the RMSI and/or the OSI over a physical downlink shared channel (PDSCH).

In some aspects, a UE 115 attempting to access the network 100 may perform an initial cell search by detecting a PSS from a BS 105. The PSS may enable synchronization of period timing and may indicate a physical layer identity value. The UE 115 may then receive a SSS. The SSS may enable radio frame synchronization and may provide a cell identity value, which may be combined with the physical layer identity value to identify the cell. The PSS and the SSS may be located in a central portion of a carrier or any suitable frequencies within the carrier.

After receiving the PSS and SSS, the UE 115 may receive a MIB. The MIB may include system information for initial network access and scheduling information for the RMSI and/or the OSI. After decoding the MIB, the UE 115 may receive the RMSI and/or the OSI. The RMSI and/or OSI may include radio resource control (RRC) information related to random access channel (RACH) procedures, paging, the control resource set (CORESET) for physical downlink control channel (PDCCH) monitoring, the physical UL control channel (PUCCH), the physical UL shared channel (PUSCH), power control, and SRS.

After obtaining the MIB, the RMSI and/or the OSI, the UE 115 can perform a random access procedure to establish a connection with the BS 105. In some examples, the random access procedure may be a four-step random access procedure. For example, the UE 115 may transmit a random access preamble and the BS 105 may respond with a random access response.
The random access response (RAR) may include a detected random access preamble identifier (ID) corresponding to the random access preamble, timing advance (TA) information, a UL grant, a temporary cell-radio network temporary identifier (C-RNTI), and/or a backoff indicator. Upon receiving the random access response, the UE 115 may transmit a connection request to the BS 105 and the BS 105 may respond with a connection response. The connection response may indicate a contention resolution. In some examples, the random access preamble, the RAR, the connection request, and the connection response can be referred to as message 1 (MSG1), message 2 (MSG2), message 3 (MSG3), and message 4 (MSG4), respectively. In some examples, the random access procedure may be a two-step random access procedure, where the UE 115 may transmit a random access preamble and a connection request in a single transmission and the BS 105 may respond by transmitting a random access response and a connection response in a single transmission.

After establishing a connection, the UE 115 and the BS 105 can enter a normal operation stage, where operational data may be exchanged. For example, the BS 105 may schedule the UE 115 for UL and/or DL communications. The BS 105 may transmit UL and/or DL scheduling grants to the UE 115 via a PDCCH. The scheduling grants may be transmitted in the form of DL control information (DCI). The BS 105 may transmit a DL communication signal (e.g., carrying data) to the UE 115 via a PDSCH according to a DL scheduling grant. The UE 115 may transmit a UL communication signal to the BS 105 via a PUSCH and/or PUCCH according to a UL scheduling grant.

In some aspects, the BS 105 may communicate with a UE 115 using HARQ techniques to improve communication reliability, for example, to provide a URLLC service. The BS 105 may schedule a UE 115 for a PDSCH communication by transmitting a DL grant in a PDCCH. The BS 105 may transmit a DL data packet to the UE 115 according to the schedule in the PDSCH. The DL data packet may be transmitted in the form of a transport block (TB). If the UE 115 receives the DL data packet successfully, the UE 115 may transmit a HARQ ACK to the BS 105. Conversely, if the UE 115 fails to receive the DL transmission successfully, the UE 115 may transmit a HARQ NACK to the BS 105. Upon receiving a HARQ NACK from the UE 115, the BS 105 may retransmit the DL data packet to the UE 115. The retransmission may include the same coded version of the DL data as the initial transmission. Alternatively, the retransmission may include a different coded version of the DL data than the initial transmission. The UE 115 may apply soft-combining to combine the encoded data received from the initial transmission and the retransmission for decoding. The BS 105 and the UE 115 may also apply HARQ for UL communications using substantially similar mechanisms as the DL HARQ.

In some aspects, the network 100 may operate over a system BW or a component carrier (CC) BW. The network 100 may partition the system BW into multiple BWPs (e.g., portions). A BS 105 may dynamically assign a UE 115 to operate over a certain BWP (e.g., a certain portion of the system BW). The assigned BWP may be referred to as the active BWP. The UE 115 may monitor the active BWP for signaling information from the BS 105. The BS 105 may schedule the UE 115 for UL or DL communications in the active BWP. In some aspects, a BS 105 may assign a pair of BWPs within the CC to a UE 115 for UL and DL communications. For example, the BWP pair may include one BWP for UL communications and one BWP for DL communications.
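Returning to the HARQ soft-combining described above, the following is a minimal receiver-side sketch: soft values (here, LLRs) are accumulated across the initial transmission and retransmissions before each decoding attempt. The decode() stand-in and the threshold rule are illustrative assumptions, not a real channel decoder.

```python
def harq_soft_combine(tx_llrs, decode):
    """tx_llrs: list of per-transmission LLR lists (same length).
    Returns (decoded_ok, transmissions_used)."""
    combined = [0.0] * len(tx_llrs[0])
    for i, llrs in enumerate(tx_llrs, start=1):
        combined = [a + b for a, b in zip(combined, llrs)]
        if decode(combined):       # success -> receiver would feed back ACK
            return True, i
        # failure -> NACK; the next (re)transmission's soft values arrive
    return False, len(tx_llrs)

# Toy decoder: "decodable" once every combined LLR magnitude exceeds 2.0.
decode = lambda llrs: all(abs(v) > 2.0 for v in llrs)
print(harq_soft_combine([[1.5, -1.2, 1.1], [1.0, -1.3, 1.2]], decode))
# (True, 2): the first transmission alone fails, but combining succeeds
```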
In some aspects, the network 100 may operate over a shared channel, which may include shared frequency bands and/or unlicensed frequency bands. For example, the network 100 may be an NR-unlicensed (NR-U) network operating over an unlicensed frequency band. In such an aspect, the BSs 105 and the UEs 115 may be operated by multiple network operating entities. To avoid collisions, the BSs 105 and the UEs 115 may employ a listen-before-talk (LBT) procedure to monitor for transmission opportunities (TXOPs) in the shared channel. For example, a transmitting node (e.g., a BS 105 or a UE 115) may perform an LBT prior to transmitting in the channel. When the LBT passes, the transmitting node may proceed with the transmission. When the LBT fails, the transmitting node may refrain from transmitting in the channel. In an example, the LBT may be based on energy detection. For example, the LBT results in a pass when the signal energy measured from the channel is below a threshold. Conversely, the LBT results in a failure when the signal energy measured from the channel exceeds the threshold. In another example, the LBT may be based on signal detection. For example, the LBT results in a pass when a channel reservation signal (e.g., a predetermined preamble signal) is not detected in the channel. A TXOP may also be referred to as channel occupancy time (COT).

In some aspects, when operating over a shared radio frequency band in a shared spectrum or unlicensed spectrum, a BS 105 may configure a UE 115 with configured resources for autonomous UL data transmission. The configured resources may be repeated at a certain time interval. The UE 115 may use the configured resources for UL HARQ data transmission without being scheduled dynamically by the BS 105. Each configured resource may include a set of consecutive transmission slots or time periods. The BS 105 may configure the UE with a set of RVNs. The UE 115 may determine an order for mapping the configured RVNs to the set of slots or transmission periods. The UE 115 may transmit one or more redundancy versions of a TB in consecutive slots or time periods within a configured resource. The UE 115 may also prioritize HARQ processes and/or TBs for transmissions in the configured resources. In other words, the UE 115 may determine priorities for the HARQ processes and/or the TBs for transmissions in the configured resources. Mechanisms for transmitting UL HARQ data using configured resources in a shared radio frequency band are described in greater detail herein.

FIG. 2 illustrates a HARQ communication scenario 200 in a shared radio frequency band according to some aspects of the present disclosure. The scenario 200 may correspond to a HARQ communication scenario in the network 100 when the network 100 operates over a shared frequency band or an unlicensed frequency band. In FIG. 2, the x-axis represents time in some constant units. In the scenario 200, a BS 205, similar to the BSs 105, may communicate data with a UE 215, similar to the UEs 115, using HARQ over a frequency band 202, which may be a shared radio frequency band in a shared spectrum or an unlicensed spectrum shared by multiple network operating entities. The frequency band 202 may be located at any suitable frequencies. In some aspects, the frequency band 202 may be located at about 3.5 GHz, 6 GHz, or 30 GHz. For HARQ communications, a transmitting node (e.g., the UE 215) may transmit data (e.g., in the form of a TB) to a receiving node (e.g., the BS 205). The receiving node may provide the transmitting node with feedback on the reception status of the data.
For example, the receiving node may transmit an ACK to the transmitting node to indicate a successful decoding of the data. Conversely, the receiving node may transmit a NACK to the transmitting node to indicate a decoding failure for the data. When the transmitting node receives an ACK from the receiving node, the transmitting node may transmit new data in a subsequent transmission. However, when the transmitting node receives a NACK from the receiving node, the transmitting node may retransmit the same data to the receiving node. In some instances, the transmitting node may use the same encoding version for the initial transmission and the retransmission. In some other instances, the transmitting node may use different encoding versions for the initial transmission and the retransmission. The encoding versions may be referred to as redundancy versions. Different redundancy versions may include different combinations of systematic data information bits and error correction bits. In some aspects, the receiving node may perform soft-combining to decode the data based on the initial transmission and the retransmission. For simplicity of discussion and illustration,FIG.2illustrates the HARQ communication in the context of UL data communications, though similar HARQ mechanisms may be applied to DL data communications. As an example, the UE215includes a HARQ component220. The HARQ component220is configured to perform multiple parallel HARQ processes222for UL data communications. The HARQ processes222may operate independently of each other. In other words, the ACKs, NACKs, and/or retransmissions are determined and processed separately for each HARQ process222at the BS205and at the UE215. Each HARQ process222may be identified by a HARQ process identifier (ID). For example, the HARQ processes222may be identified by identifiers H1, H2, . . . Hn. Each HARQ process222may have one or more TBs ready for transmission. In the illustrated example ofFIG.2, the HARQ process H1222has one TB230ready for transmission and the HARQ process H2222has one TB232ready for transmission. The BS205may configure the UE215with configured resources for autonomous or unscheduled transmission. The UE215may transmit the TB230and the TB232to the BS205using a configured resource. In some aspects, the BS205may configure the UE215with a configured resource240. The configured resource240may be periodic. For instance, the configured resource240may be repeated at a time interval242. The configured resource240may be partitioned into a plurality of transmission time periods or slots206. Each slot206may include any suitable number of OFDM symbols depending on the transmission configurations or numerology (e.g., the subcarrier spacing (SCS) and/or the cyclic prefix (CP) mode) in use. The UE215may perform an LBT250in the frequency band202prior to a transmission. As an example, a first LBT250attempt for a transmission in a second slot206within the configured resource240failed (shown by the cross symbol). A second LBT250attempt for a transmission in a third slot206within the configured resource240also failed (shown by the cross symbol). A third LBT250attempt for a transmission in a fourth slot206within the configured resource240is a pass. Thus, the UE215may initiate a transmission beginning at the fourth slot206. Once the UE215wins contention (e.g., passes the LBT250), the UE215may use the configured resource for a number of consecutive HARQ transmissions. 
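As a concrete illustration of this contention-then-transmit flow, the following Python sketch models the behavior just described. It is a minimal sketch only: measure_energy_dbm and send_tb are hypothetical callables standing in for actual PHY procedures, and the energy threshold is an arbitrary example value rather than a regulatory figure.

    def lbt_pass(measure_energy_dbm, threshold_dbm=-72.0):
        # Energy-detection LBT: pass when the measured channel energy is
        # below the threshold (channel idle); fail otherwise.
        return measure_energy_dbm() < threshold_dbm

    def transmit_in_configured_resource(measure_energy_dbm, send_tb,
                                        tbs, num_slots, reps_per_tb=4):
        # Attempt an LBT before each slot; once an LBT passes, use the
        # remaining consecutive slots for back-to-back HARQ transmissions.
        slot = 0
        while slot < num_slots and not lbt_pass(measure_energy_dbm):
            slot += 1  # failed LBT: try again for the next slot
        for tb in tbs:
            for _ in range(reps_per_tb):
                if slot >= num_slots:
                    return  # configured resource exhausted
                send_tb(tb, slot)  # one transmission or repetition per slot
                slot += 1

    # Example with stub callables: the channel is busy for the first two
    # LBT attempts, then idle.
    energies = iter([-60.0, -65.0, -80.0])
    log = []
    transmit_in_configured_resource(lambda: next(energies, -80.0),
                                    lambda tb, s: log.append((tb, s)),
                                    tbs=["TB A", "TB B"], num_slots=9)

As in the figure, a later TB may receive fewer repetitions than configured when the configured resource runs out of slots.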
In the illustrated example ofFIG.2, after passing the LBT250, the UE215transmits four repetitions of the TB230, denoted as TB A, followed by two repetitions of the TB232, denoted as TB B, in consecutive slots206. In some aspects, the UE215may transmit the repetitions for the TB230using different redundancy versions and/or the same redundancy versions. In some instances, each repetition may use a different RVN. In some instances, all repetitions may use the same RVN. In some instances, at least two repetitions may use the same RVN. Similarly, the UE215may transmit the repetitions for the TB232using different redundancy versions and/or the same redundancy versions. In some aspects, the UE215may include a RVN and/or a HARQ ID for each transmission, for example, in uplink control information (UCI)260. For instance, the RVN may indicate RV0, RV1, RV2, RV3, RV4, and so on. Each transmission for the TB A230may include UCI260indicating a HARQ ID H1. Similarly, each transmission for the TB B232may include UCI260indicating a HARQ ID H2. The UE215may further indicate whether a transmission is an initial transmission or a retransmission by including a new data indicator (NDI) in the UCI260. For example, the NDI may be set to a value of 1 to indicate that a corresponding transmission is an initial transmission and may be set to a value of 0 to indicate that a corresponding transmission is a retransmission. For instance, the UCI260for each transmission of the TB A230may include a NDI with a value of 1 to indicate that the repetitions of the TB A230are associated with an initial transmission of the TB A230. The UCI260for each transmission of the TB B232may include a NDI with a value of 0 to indicate that the repetitions of the TB B232are associated with a retransmission of the TB B232. In some aspects, the UE215may determine a RV sequence (e.g., a sequence of RVNs) for transmitting one or more redundancy versions of a TB in a configured resource and/or how to prioritize transmission of one TB of a certain HARQ process222over another TB of another HARQ process222without assistance from the BS205. In some other instances, the BS205may provide the UE215with some assistance in the RV sequence determination and/or HARQ ID selection. Mechanisms for determining RVNs and/or HARQ IDs for unscheduled transmission using configured resource240are described in greater detail below. FIG.3is a block diagram of an exemplary UE300according to some aspects of the present disclosure. The UE300may be a UE115discussed above inFIG.1. As shown, the UE300may include a processor302, a memory304, a HARQ module308, a transceiver310including a modem subsystem312and a radio frequency (RF) unit314, and one or more antennas316. These elements may be in direct or indirect communication with each other, for example via one or more buses. The processor302may include a central processing unit (CPU), a digital signal processor (DSP), an application specific integrated circuit (ASIC), a controller, a field programmable gate array (FPGA) device, another hardware device, a firmware device, or any combination thereof configured to perform the operations described herein. The processor302may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. 
The memory304may include a cache memory (e.g., a cache memory of the processor302), random access memory (RAM), magnetoresistive RAM (MRAM), read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read only memory (EPROM), electrically erasable programmable read only memory (EEPROM), flash memory, solid state memory devices, hard disk drives, other forms of volatile and non-volatile memory, or a combination of different types of memory. In an aspect, the memory304includes a non-transitory computer-readable medium. The memory304may store, or have recorded thereon, instructions306. The instructions306may include instructions that, when executed by the processor302, cause the processor302to perform the operations described herein with reference to the UEs115in connection with aspects of the present disclosure, for example, aspects ofFIGS.2and5-17. Instructions306may also be referred to as program code. The program code may be for causing a wireless communication device to perform these operations, for example by causing one or more processors (such as processor302) to control or command the wireless communication device to do so. The terms “instructions” and “code” should be interpreted broadly to include any type of computer-readable statement(s). For example, the terms “instructions” and “code” may refer to one or more programs, routines, sub-routines, functions, procedures, etc. “Instructions” and “code” may include a single computer-readable statement or many computer-readable statements. The HARQ module308may be implemented via hardware, software, or combinations thereof. For example, the HARQ module308may be implemented as a processor, circuit, and/or instructions306stored in the memory304and executed by the processor302. In some instances, the HARQ module308can be integrated within the modem subsystem312. For example, the HARQ module308can be implemented by a combination of software components (e.g., executed by a DSP or a general processor) and hardware components (e.g., logic gates and circuitry) within the modem subsystem312. The HARQ module308may be used for various aspects of the present disclosure, for example, aspects ofFIGS.2and5-17. The HARQ module308is configured to receive a configured grant from a BS (e.g., the BSs105and205) indicating one or more configured resources, receive an indication of a plurality of RVNs from the BS, determine a RV sequence from the plurality of RVNs, map the RV sequence to transmission slots within a configured resource, perform LBT (e.g., based on channel energy detection), and transmit one or more redundancy versions of a TB associated with a HARQ process in one or more slots within the configured resource. In some aspects, the HARQ module308is configured to perform the RV mapping for successful transmissions. For instance, the HARQ module308may select a RVN from the RV sequence sequentially for each slot in the configured resource beginning at a slot where an initial transmission of a TB is successfully transmitted (e.g., after passing an LBT). In some aspects, the HARQ module308is configured to perform the RV mapping beginning at a slot associated with an earliest LBT attempt irrespective of whether a transmission attempt is successful or not. For instance, the HARQ module308may select a RVN from the RV sequence sequentially for each slot in the configured resource beginning at a slot where a first transmission attempt is performed. 
In some aspects, the HARQ module308is configured to receive, from the BS, a RV sequence including RVNs arranged in a certain order and determine a mapping between the configured RV sequence and transmission slots in the configured resource for unscheduled UL HARQ transmission with repetitions. In some aspects, the HARQ module308is configured to receive, from the BS, a mapping or association between the RV sequence and transmission slots in a configured resource and transmit UL HARQ transmission with repetitions in the configured resource according to the RV sequence and the RV-to-slot mapping. In some aspects, the HARQ module308is configured to retransmit the TB in a subsequent configured resource. In some instances, the HARQ module308may reinitiate a RV mapping for the retransmission in the subsequent configured resource using the same mapping mechanisms as for the initial transmission. In some instances, the HARQ module308may resume from a last RVN in the RV sequence used in the initial transmission. In some instances, the HARQ module308may transmit one or more redundancy versions of the TB using any RVNs. In some aspects, the HARQ module308is configured to receive ACK/NACKs from the BS and determine the retransmission for the TB based on receiving a NACK or no ACK/NACK for a previous transmission of the TB. In some aspects, the HARQ module308is configured to prioritize TBs of different HARQ processes for unscheduled transmission in a configured resource. In some instances, the HARQ module308may transmit TBs of different HARQ processes in MAC PDU preparation order. In some instances, the HARQ module308may prioritize a retransmission over an initial transmission. In some instances, the HARQ module308may prioritize the TBs of the different HARQ processes based on data priorities and/or latency requirements of the HARQ processes. Mechanisms for transmitting unscheduled UL data with HARQ using configured resources in a shared radio frequency band are described in greater detail herein. As shown, the transceiver310may include the modem subsystem312and the RF unit314. The transceiver310can be configured to communicate bi-directionally with other devices, such as the BSs105. The modem subsystem312may be configured to modulate and/or encode the data from the memory304and/or the HARQ module308according to a modulation and coding scheme (MCS), e.g., a low-density parity check (LDPC) coding scheme, a turbo coding scheme, a convolutional coding scheme, a digital beamforming scheme, etc. The RF unit314may be configured to process (e.g., perform analog to digital conversion or digital to analog conversion, etc.) modulated/encoded data (e.g., PUSCH data, UCI, UL HARQ data block) from the modem subsystem312(on outbound transmissions) or of transmissions originating from another source such as a UE115or a BS105. The RF unit314may be further configured to perform analog beamforming in conjunction with the digital beamforming. Although shown as integrated together in transceiver310, the modem subsystem312and the RF unit314may be separate devices that are coupled together at the UE115to enable the UE115to communicate with other devices. The RF unit314may provide the modulated and/or processed data, e.g., data packets (or, more generally, data messages that may contain one or more data packets and other information), to the antennas316for transmission to one or more other devices. The antennas316may further receive data messages transmitted from other devices. 
The antennas316may provide the received data messages for processing and/or demodulation at the transceiver310. The transceiver310may provide the demodulated and decoded data (e.g., configured grants, configured RVNs, configured RVN order, RV sequences, configured RV-to-slot mapping, and/or HARQ ACK/NACK) to the HARQ module308for processing. The antennas316may include multiple antennas of similar or different designs in order to sustain multiple transmission links. The RF unit314may configure the antennas316. In an example, the transceiver310is configured to receive a configured grant, a RV sequence, and/or RVNs from a BS and transmit unscheduled HARQ UL data to the BS using configured resources indicated by the configured grant, for example, by coordinating with the HARQ module308. In an aspect, the UE300can include multiple transceivers310implementing different RATs (e.g., NR and LTE). In an aspect, the UE300can include a single transceiver310implementing multiple RATs (e.g., NR and LTE). In an aspect, the transceiver310can include various components, where different combinations of components can implement different RATs. FIG.4is a block diagram of an exemplary BS400according to some aspects of the present disclosure. The BS400may be a BS105in the network100as discussed above inFIG.1. As shown, the BS400may include a processor402, a memory404, a configuration module408, a HARQ module409, a transceiver410including a modem subsystem412and a RF unit414, and one or more antennas416. These elements may be in direct or indirect communication with each other, for example via one or more buses. The processor402may include a CPU, a DSP, an ASIC, a controller, a FPGA device, another hardware device, a firmware device, or any combination thereof configured to perform the operations described herein. The processor402may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. The memory404may include a cache memory (e.g., a cache memory of the processor402), RAM, MRAM, ROM, PROM, EPROM, EEPROM, flash memory, a solid state memory device, one or more hard disk drives, memristor-based arrays, other forms of volatile and non-volatile memory, or a combination of different types of memory. In some aspects, the memory404may include a non-transitory computer-readable medium. The memory404may store instructions406. The instructions406may include instructions that, when executed by the processor402, cause the processor402to perform operations described herein, for example, aspects ofFIGS.2and5-12. Instructions406may also be referred to as code, which may be interpreted broadly to include any type of computer-readable statement(s) as discussed above with respect toFIG.3. Each of the configuration module408and the HARQ module409may be implemented via hardware, software, or combinations thereof. For example, each of the configuration module408and the HARQ module409may be implemented as a processor, circuit, and/or instructions406stored in the memory404and executed by the processor402. In some examples, the configuration module408and the HARQ module409can be integrated within the modem subsystem412. 
For example, the configuration module408and the HARQ module409can be implemented by a combination of software components (e.g., executed by a DSP or a general processor) and hardware components (e.g., logic gates and circuitry) within the modem subsystem412. In some examples, a BS may include one of the configuration module408and the HARQ module409. In other examples, a BS may include both the configuration module408and the HARQ module409. The configuration module408and the HARQ module409may be used for various aspects of the present disclosure, for example, aspects ofFIGS.2and5-12. The configuration module408is configured to determine configured resources in a shared radio frequency band for a UE (e.g., the UE115,215, and/or300), transmit a configured grant to the UE indicating the configured resources, and transmit a configuration to the UE indicating a RV sequence, RVNs, and/or a RV-to-slot mapping for the UE to transmit unscheduled HARQ UL transmission in the configured resources. The HARQ module409may be used for various aspects of the present disclosure, for example, aspects ofFIGS.2and5-12. The HARQ module409is configured to receive PUSCH from the UE, perform decoding on the PUSCH in the configured resource, and transmit ACK/NACK to the UE based on the decoding results. Mechanisms for configuring a UE for unscheduled UL HARQ transmission using configured resources are described in greater detail herein. As shown, the transceiver410may include the modem subsystem412and the RF unit414. The transceiver410can be configured to communicate bi-directionally with other devices, such as the UEs115and/or300and/or another core network element. The modem subsystem412may be configured to modulate and/or encode data according to a MCS, e.g., a LDPC coding scheme, a turbo coding scheme, a convolutional coding scheme, a digital beamforming scheme, etc. The RF unit414may be configured to process (e.g., perform analog to digital conversion or digital to analog conversion, etc.) modulated/encoded data (e.g., configured grant, RV sequences, RVNs, HARQ ACK/NACK) from the modem subsystem412(on outbound transmissions) or of transmissions originating from another source such as a UE115and/or UE300. The RF unit414may be further configured to perform analog beamforming in conjunction with the digital beamforming. Although shown as integrated together in transceiver410, the modem subsystem412and/or the RF unit414may be separate devices that are coupled together at the BS105to enable the BS105to communicate with other devices. The RF unit414may provide the modulated and/or processed data, e.g., data packets (or, more generally, data messages that may contain one or more data packets and other information), to the antennas416for transmission to one or more other devices. This may include, for example, transmission of information to complete attachment to a network and communication with a camped UE115or300according to some aspects of the present disclosure. The antennas416may further receive data messages transmitted from other devices and provide the received data messages for processing and/or demodulation at the transceiver410. The transceiver410may provide the demodulated and decoded data (e.g., UL HARQ data, UCI, PUSCH) to the configuration module408and the HARQ module409for processing. The antennas416may include multiple antennas of similar or different designs in order to sustain multiple transmission links. 
In an example, the transceiver410is configured to transmit a configured grant, a RV sequence, a set of RVNs, and/or HARQ ACK/NACKs to a UE and/or receive UL HARQ data blocks from the UE, for example, by coordinating with the configuration module408and the HARQ module409. In an aspect, the BS400can include multiple transceivers410implementing different RATs (e.g., NR and LTE). In an aspect, the BS400can include a single transceiver410implementing multiple RATs (e.g., NR and LTE). In an aspect, the transceiver410can include various components, where different combinations of components can implement different RATs. FIGS.5-10illustrate various mechanisms for a UE (e.g., the UEs115,215, and/or300) to determine which RVNs to use when using configured resources (e.g., the configured resources240) for unscheduled HARQ UL transmission. InFIGS.5-10, the method500and the schemes600,700,800, and/or900may be employed by a UE such as the UEs115,215, and/or300and a BS such as the BSs105,205, and/or500in a network such as the network100. In particular, the BS may configure the UE with configured resources and RVNs, and the UE may determine RVN mapping based on the configured RVNs for transmitting unscheduled HARQ UL data in the configured resources as shown in the method500and schemes600-1000. The method500is described in relation to the schemes600-1000. For simplicity of discussion and illustration, the method500and the schemes600-1000are described using a RV cycle with a total of four transmissions (including three repetitions) for each TB transmission. However, the number of total transmissions in a RV cycle can be scaled to include any suitable number of repetitions (e.g., about 1, 2, 4, or 5 or more). Additionally, inFIGS.6-10, the x-axes represent time in some arbitrary units. Further, the schemes600-1000are illustrated using the same configured resource structure and HARQ process arrangement as inFIG.2, and may use the same reference numerals as inFIG.2for simplicity sake. FIG.5is a signaling diagram illustrating a HARQ communication method500using configured resources according to some aspects of the present disclosure. The method500is implemented between the BS205and the UE215ofFIG.2. As illustrated, the method500includes a number of enumerated steps, but embodiments of the method500may include additional steps before, after, and in between the enumerated steps. In some aspects, one or more of the enumerated steps may be omitted or performed in a different order. At step510, the BS205transmits a configured grant resource configuration to the UE215. The configuration may indicate a set of configured resources in a shared radio frequency band. The configured resources may also be referred to as configured grant resources. The shared radio frequency band may correspond to the shared radio frequency band202ofFIG.2. The configured resources may correspond to the configured resource240with a certain periodicity (e.g., repeating at a time interval242) shown inFIG.2. The configuration may indicate a set of RVNs, for example, a version 0, a version 1, a version 2, and a version 3 shown as RV0, a RV1, a RV2, and a RV3, respectively, for unscheduled UL data (e.g., PUSCH data) transmissions with HARQ. In some aspects, the configuration may indicate a maximum number of repetitions configured for each TB transmission associated with a HARQ process (e.g., the HARQ processes222H1, H2. . . , Hn shown inFIG.2). 
At step520, the UE215determines a RV mapping between the set of RVNs and transmission slots (e.g., the transmission slots206) in the configured resource for HARQ transmissions. For instance, the UE215may determine a RV sequence from the set of RVNs and a mapping of the RV sequence to the transmission slots in a configured resource. When the configuration includes a configured order for the set of RVNs and/or a configured mapping for the set of RVNs to the transmission slots in the configured resources, the UE215may consider the configured order and/or the configured mapping during the RV mapping determination. The UE215may use various mechanisms to determine the RV mapping as described in greater detail below inFIGS.6-10. At step530, the UE215transmits a UL TB with repetitions (e.g., about 1, 2, 3, 4, or 5 or more) in a configured resource. The UE215may transmit the repetitions using various RVNs based on the determined RV mapping. The BS205may feed back a reception status to the UE215. For instance, if the BS205successfully decodes the UL TB, the BS205may transmit a HARQ ACK to the UE215. If the BS205fails to decode the UL TB, the BS205may transmit a HARQ NACK to the UE215. Subsequently, the UE215may determine whether to retransmit the TB based on the reception status. For instance, if the UE215receives an ACK from the BS205, the UE215may transmit a new TB in the next configured resource. If the UE215receives a NACK from the BS205or fails to receive an ACK/NACK from the BS205, the UE215may retransmit the TB in the next configured resource. FIG.6illustrates a HARQ transmission scheme600using configured resources according to some aspects of the present disclosure. In the scheme600, the BS205may configure the UE215with a set of RVNs with a configured order and the UE215may map the RVNs to successful transmissions according to the configured order. In this regard, the BS205transmits a configuration to the UE215indicating a set of RVNs610(e.g., RV0, RV1, RV2, and RV3) in a configured order. In the illustrated example ofFIG.6, the set of RVNs610in the configured order is shown as [RV0, RV2, RV3, and RV1]. The UE215may select a RVN from the set of RVNs610based on the configured order for each transmission slot206in the configured resource240beginning at a slot206with a successful LBT250attempt. In the illustrated example ofFIG.6, the UE215performs a first transmission attempt in a second transmission slot206within the configured resource240by performing an LBT250prior to the second transmission slot206. The LBT250for transmission in the second transmission slot206failed as shown by the cross symbol. The UE215may determine to perform a second transmission attempt in a next slot206(e.g., the third transmission slot206) by performing an LBT250prior to the third transmission slot206. Again, the LBT250for transmission in the third transmission slot206failed as shown by the cross symbol. 
The UE215may determine to perform another transmission attempt in a next slot206(e.g., the fourth transmission slot206) by performing an LBT250prior to the fourth transmission slot206. The LBT250for transmission in the fourth transmission slot206is successful, and thus the UE215proceeds with the transmission of the TB A230. In the scheme600, the UE215may not consider a transmission slot206with a failed LBT attempt for RV mapping. The UE215determines a RV mapping for transmission in the configured resource240by selecting a RVN from the set of RVNs610based on the configured order for each transmission slot206beginning at the fourth slot206where the LBT250passes. The transmission slots206after the LBT250passes are mapped to RV0, RV2, RV3, and RV1in order for the TB A230transmission. After determining the mapping, the UE215transmits the TB A230with the RVN mapped to the transmission slot206in which the TB is transmitted. The UE215may determine a total number of transmissions for the TB A230in a RV cycle based on a total number of RVNs (e.g., 4) in the set of RVNs. As shown, the UE215uses RV0for a first successful transmission of the TB A230and uses RV2, RV3, and RV1for subsequent repetitions of TB A230based on the determined mapping. After completing the transmission for the TB A230including the repetitions, the UE215may transmit another TB with repetitions in the remaining slots206of the configured resource240by restarting the RV mapping (e.g., starting at a beginning of the RV sequence). As shown, the UE215transmits a TB B232using RV0and a repetition of the TB B232using RV2in the remaining slots206. FIG.7illustrates a HARQ transmission scheme700using configured resources according to some aspects of the present disclosure. The scheme700is described using the same LBT scenario as inFIG.6. The scheme700is substantially similar to the scheme600, where the BS205may configure the UE215with a set of RVNs610(e.g., RV0, RV1, RV2, and RV3) in a configured order, [RV0, RV2, RV3, and RV1]. However, in the scheme700, the UE215may map the set of RVNs610to transmission attempts instead of successful transmissions as in the scheme600. In this regard, the UE215may begin the mapping at a transmission slot206where a first transmission attempt for a TB is performed irrespective of whether a corresponding LBT attempt is successful or not. For instance, the UE215may select a RVN from the set of RVNs610based on the configured order for each transmission slot206in the configured resource240beginning at a slot206associated with an earliest LBT250attempt. Thus, the RV mapping may be applied to transmission slots206where LBT fails and transmission slots206where actual transmissions are performed. In the illustrated example ofFIG.7, the UE215begins a LBT250for a first attempt in transmitting the TB A230in the second transmission slot206of the configured resource. The UE215fails the LBT250in the second transmission slot206and again in a subsequent third transmission slot206. The UE215determines a RV mapping for transmission in the configured resource240by selecting a RVN from the set of RVNs610based on the configured order for each transmission slot206beginning at the second slot206where a first transmission attempt is performed. The UE215may repeat one or more of the RVNs in the set of RVNs610based on the configured order after using up all the RVNs in the set of RVNs610. In other words, the UE215may cyclically wrap the RV sequence formed by the set of RVNs610in the configured order for the mapping. 
The UE215may determine a total number of transmissions for TB A230based on the length of the set of the RVNs610. As shown, the transmission slots206after the start of the LBT250are mapped to RV0, RV2, RV3, RV1, RV0, and RV2in order until all the transmissions/repetitions for the TB A230are mapped. After determining the mapping, the UE215transmits the TB A230with the RVN mapped to the transmission slot206in which the transmission is performed. The UE215uses RV3for a first successful transmission of the TB A230and uses RV1, RV0, and RV2for subsequent repetitions of TB A230based on RVNs of the corresponding transmission slots206in which the transmissions are performed. After completing the transmissions for the TB A230including all the repetitions, the UE215transmits another TB (e.g., the TB B232) with repetitions in the remaining slots206of the configured resource240by restarting the RV mapping. As shown, the UE215transmits a TB B232using RV0and a repetition of the TB B232using RV2in the remaining slots206. FIG.8illustrates a HARQ transmission scheme800using configured resources according to some aspects of the present disclosure. The scheme800is described using the same LBT scenario as inFIG.6. In the scheme800, the BS205may configure the UE215with a set of RVNs610(e.g., RV0, RV1, RV2, and RV3) in a configured order, [RV0, RV2, RV3, and RV1], similar to the scheme600. However, the BS205additionally configures the UE215with a mapping or association between the set of RVNs610and transmission slots206in the configured resource240. As shown, the set of RVNs610is mapped to the transmission slots206in the configured order and the set of RVNs610is cyclically wrapped for the mapping until all transmission slots206are mapped. The UE215may perform an LBT250prior to transmitting the TB A230. Upon passing the LBT250, the UE215transmits the TB A230with the RVN mapped to the transmission slot206in which the transmission is performed. The UE215may determine the total number of transmissions for the TB A230based on a configuration provided by the BS205. As shown, the UE215uses RV1for a first successful transmission of the TB A230and RV0, RV2, and RV3for subsequent repetitions of TB A230based on RVNs of the corresponding transmission slots206in which the repetitions are transmitted. After completing the transmissions for the TB A230including all the repetitions, the UE215transmits the TB B232with repetitions in the remaining slots206of the configured resource240. The UE215uses RV1for a first transmission of the TB B232and RV0for a subsequent repetition of TB B232based on RVNs of the corresponding transmission slots206in which the transmissions are performed. The scheme800does not require repetitions of a TB to align with a RV cycle. In the illustrated example, the RV cycle includes RV0, RV2, RV3, and RV1in the configured order according to the set of RVNs610. The UE215starts the transmission of the TB A230at RV1, which is not a starting RVN in the RV cycle, and the UE215completes the repetitions at RV3, which is not a last RVN in the RV cycle. Thus, while the scheme800restricts the mapping between the set of RVNs610and transmission slots206in configured resources, the scheme800allows the UE215the flexibility to start a TB transmission and/or end a repetition of the TB without aligning to the RV cycle as configured by the set of RVNs610. 
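To make the three mapping rules concrete, the following Python sketch contrasts them. It is an illustrative reading of the schemes600,700, and800, not a normative implementation; slots are indexed from zero and the configured order [RV0, RV2, RV3, RV1] is assumed.

    RV_SEQ = [0, 2, 3, 1]  # configured order of RVNs

    def rv_scheme_600(rv_seq, n_slots, first_pass_slot):
        # Scheme 600: RVNs are consumed only by slots at or after the first
        # successful LBT; slots with failed LBTs consume no RVN. (A new TB
        # restarts the sequence; omitted here for brevity.)
        return {s: rv_seq[(s - first_pass_slot) % len(rv_seq)]
                for s in range(first_pass_slot, n_slots)}

    def rv_scheme_700(rv_seq, n_slots, first_attempt_slot):
        # Scheme 700: mapping starts at the earliest LBT attempt, so slots
        # whose LBT failed still consume RVNs that are never transmitted.
        return {s: rv_seq[(s - first_attempt_slot) % len(rv_seq)]
                for s in range(first_attempt_slot, n_slots)}

    def rv_scheme_800(rv_seq, n_slots):
        # Scheme 800: a fixed, cyclically wrapped RVN per slot of the
        # configured resource, independent of where transmission starts.
        return {s: rv_seq[s % len(rv_seq)] for s in range(n_slots)}

    # FIG. 6: LBT passes at the fourth slot (index 3) -> first transmission uses RV0.
    assert rv_scheme_600(RV_SEQ, 8, 3)[3] == 0
    # FIG. 7: first attempt at the second slot (index 1) -> slot 3 carries RV3.
    assert rv_scheme_700(RV_SEQ, 8, 1)[3] == 3
    # FIG. 8: the fourth slot (index 3) is permanently associated with RV1.
    assert rv_scheme_800(RV_SEQ, 8)[3] == 1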
FIG.9illustrates a HARQ transmission scheme900using configured resources according to some aspects of the present disclosure. The scheme900is substantially similar to the scheme600, where the BS205may configure the UE215with a set of RVNs610in a configured order. However, the scheme900allows the UE215to perform fewer repetitions than a configured number of repetitions for a TB transmission. In this regard, the UE215may determine a number of repetitions for a TB transmission based on a number of transmission slots206available in a configured resource240after passing an LBT. The UE215may split the number of available transmission slots206in a configured resource240among multiple TB transmissions to allow each TB transmission (e.g., the TB A230of the HARQ process H1and the TB B232of the HARQ process H2) a similar number of repetitions. The scheme900may be used in conjunction with the schemes600-800to improve transmission delays. For instance, in the example illustrated inFIG.6, the UE215transmits the TB A230with 3 repetitions and the TB B232with 1 repetition. To provide a good or acceptable decoding performance, the UE215may be required to transmit additional repetitions of the TB B232. While the UE215may transmit additional repetitions in another configured resource240, there is a time gap between the repetitions of the TB B232, for example, depending on the periodicity of the configured resource240. Additionally, the UE215may be required to perform another LBT to gain channel access in the next configured resource240. As such, there may be a long delay for the transmission of the TB B232to complete. The long delay may not be desirable or acceptable, especially for delay-sensitive traffic such as URLLC-type traffic. Accordingly, the scheme900provides the UE215with the flexibility to determine a number of repetitions for a TB transmission. In the illustrated example ofFIG.9, the UE215reduces the number of repetitions for the TB A230to two so that the TB B232may be transmitted with two repetitions within the configured resource240. As shown, the UE215uses RV0for a first successful transmission of the TB A230and uses RV2and RV3for two repetitions of the TB A230. Subsequently, the UE215uses RV0for a first transmission of the TB B232and uses RV2and RV3for two repetitions of the TB B232. By transmitting each of the TB A230and TB B232with two repetitions, the decoding performance may be sufficient for both the TB A230and TB B232. As such, the UE215may avoid transmitting the TB B232with a long delay gap between repetitions of the TB B232. As discussed above inFIG.2, each HARQ TB transmission may include a UCI message (e.g., UCI260) indicating a HARQ ID, a RVN, and/or a NDI for the TB transmission. As such, the BS205may detect the start of the TB B232transmission in the seventh slot206within the configured resource240based on the UCI carried in the corresponding TB B232transmission. 
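A minimal sketch of this slot-splitting follows, under the assumption that the UE simply divides the remaining slots evenly among the pending TBs; the function name is illustrative.

    def split_repetitions(slots_remaining, pending_tbs):
        # Scheme 900-style splitting: share the slots left after a successful
        # LBT roughly evenly among the pending TBs, so that no TB has to be
        # deferred entirely to a later configured resource.
        per_tb = slots_remaining // len(pending_tbs)
        return {tb: per_tb for tb in pending_tbs}

    # FIG. 9 example: 6 slots remain after the LBT passes and two TBs are
    # pending, so each TB gets 3 transmissions (1 initial + 2 repetitions).
    assert split_repetitions(6, ["TB A", "TB B"]) == {"TB A": 3, "TB B": 3}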
FIG.10illustrates a HARQ transmission scheme1000using configured resources according to some aspects of the present disclosure. The scheme1000is described using the same LBT scenario as the scheme600. In the scheme1000, the BS205may configure the UE215with a set of RVNs1010shown as [RV0, RV1, RV2, and RV3], but without a specific order. The UE215may determine any RV mapping order using the set of RVNs1010. In some instances, the UE215may determine a RV sequence for the mapping using one or more RVNs in the set of RVNs1010. In some instances, the UE215may determine a RV sequence for the mapping using all RVNs in the set of RVNs1010. In some instances, the UE215may determine a RV sequence for the mapping by repeating one or more of the RVNs in the set of RVNs1010. In some instances, the UE215may determine any suitable number of repetitions for a TB transmission using the RVNs in the set of RVNs1010. For instance, the UE215may determine a RV sequence with a length of 5 for each TB transmission using the RVNs from the set of RVNs1010. In some other instances, the UE215may determine a RV sequence with a length of 3 for each transmission using the RVNs from the set of RVNs1010. In the illustrated example ofFIG.10, the UE215determines a RV sequence1020from the set of RVNs1010in the order of RV0, RV3, RV1, and RV2. The UE215may use the same RV mapping mechanism as in the scheme600where the mapping is performed for successful transmissions. As shown, the UE215uses RV0for a first successful transmission of the TB A230and uses RV3, RV1, and RV2for subsequent repetitions of TB A230based on the determined mapping with the determined RV sequence1020. After completing the transmission for the TB A230including the repetitions, the UE215may transmit another TB with repetitions in the remaining slots206of the configured resource240by restarting the RV mapping (e.g., starting at a beginning of the RV sequence1020). As shown, the UE215transmits a TB B232using RV0and a repetition of the TB B232using RV3in the remaining slots206. WhileFIG.10is illustrated using the mapping mechanisms of the scheme600, the scheme1000may be used with the mapping mechanisms discussed in the scheme700with respect toFIG.7. For instance, the UE215may determine the order or the RV sequence1020based on a set of configured RVNs (e.g., the set of RVNs1010received from the BS205) and perform the RV mapping based on transmission attempts to include transmission slots206where a corresponding LBT attempt failed. Additionally, the scheme1000may be applied in conjunction with the scheme900discussed above with respect toFIG.9, where the UE215may adjust the number of repetitions for a TB transmission based on a number of transmission slots206available for transmission in a configured resource after passing an LBT. As discussed above inFIG.2, a UCI message (e.g., the UCI260) may be included in a HARQ transmission to indicate whether the TB in the transmission is a new transmission or a retransmission. Additionally, the UCI message may include a RVN to indicate a RVN used for generating the TB transmission. For instance, the transmission of a TB (e.g., the TBs230and/or232) with a redundancy version of RV0may include, in the UCI, a RVN message field indicating RV0(e.g., RVN=0). Similarly, the transmission of a TB with a redundancy version of RV1may include, in the UCI, a RVN message field indicating RV1(e.g., RVN=1), and so on. In some aspects, the bit-width of the RVN message field may be dependent on the set of RVNs (e.g., the set of RVNs610and/or1010). For instance, if the set of RVNs includes RV0to RV3, the RVN message field may include a length of 2 bits to represent the values 0 to 3. Alternatively, if the set of RVNs includes RV0to RV4, the RVN message field may include a length of 3 bits to represent the values 0 to 4. Accordingly, a BS (e.g., the BSs105,205, and/or400) and/or a UE (e.g., the UEs115,215, and/or300) may determine a bit-width for the RVN message field in a UCI message based on the set of configured RVNs. 
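The UCI fields and the bit-width rule just described can be sketched as follows. This is an illustrative model only; the class and field names are hypothetical.

    import math
    from dataclasses import dataclass

    def rvn_field_bits(num_rvns):
        # Bit-width of the UCI RVN field, driven by the configured RVN set:
        # 4 RVNs (RV0-RV3) -> 2 bits; 5 RVNs (RV0-RV4) -> 3 bits.
        return max(1, math.ceil(math.log2(num_rvns)))

    @dataclass
    class Uci:
        harq_id: int  # HARQ process ID (e.g., H1 -> 1)
        rvn: int      # RVN used to generate this transmission
        ndi: int      # new data indicator: 1 = initial transmission, 0 = retransmission

    assert rvn_field_bits(4) == 2 and rvn_field_bits(5) == 3
    uci_tb_a = Uci(harq_id=1, rvn=0, ndi=1)  # first transmission of TB A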
In some aspects, a UE (e.g., the UEs115,215, and/or300) may use any suitable combinations of the schemes600-1000for transmitting unscheduled HARQ UL data in configured resources. For instance, in some instances, the UE may employ any of the scheme600,700,800, or1000in conjunction with the scheme900. In some other instances, the UE may employ the scheme1000to determine a RV sequence and map the RV sequence to successful transmissions only or to transmission attempts including failed LBT attempts, as discussed in the scheme600or700, respectively. In some aspects, a UE (e.g., the UEs115,215, and/or300) may select any suitable RVNs to form a RV sequence and may map the RV sequence to transmission slots (e.g., the slots206) in a configured resource (e.g., the configured resource240) using the schemes600,700,800,900, and/or1000discussed above for transmitting unscheduled HARQ UL data in the configured resource. In some aspects, a UE (e.g., the UEs115,215, and/or300) may prepare a PUSCH transmission (e.g., including UL data) in advance of a determined transmission time. The preparation may include performing data scrambling, encoding, and/or modulation on a media access control (MAC) packet data unit (PDU). For instance, the UE may perform the preparation at a physical layer processing component, such as the transceiver310and/or the modem312. Different UEs may have different capabilities, for example, different processing delays, and thus may require different amounts of time to prepare a PUSCH for transmission. For instance, one UE may require 1-slot time (e.g., the slot206) to prepare for a PUSCH transmission while another UE may require 2-slot time to prepare for a PUSCH transmission. As such, RV mappings that are dependent on a successful LBT (e.g., the scheme600) may impose a certain constraint on the UE's processing capabilities. FIG.11illustrates a HARQ transmission scenario1100according to some aspects of the present disclosure. The scenario1100may correspond to a HARQ transmission scenario in the network100when the UE215uses the scheme600for RV mapping and requires 2-slot time1108to prepare a PUSCH for transmission, for example, including data encoding and modulation. InFIG.11, the x-axis represents time in some arbitrary units and the transmission slots206in the configured resource240are labeled as S0to S6. In the scenario1100, the UE215may prepare at least two PUSCH transmissions before initiating a transmission attempt for the TB A230based on the 2-slot preparation timing requirement. The UE215may prepare PUSCH transmissions as shown in the timeline1102to meet a potential transmission timeline1104. The order of the RVs in the timelines1102and1104is based on the configured set of RVNs610. As shown, the UE215prepares a PUSCH for TB A230with a redundancy version of RV0during the slots S0and S1206for transmission in slot S2206. The UE215prepares a PUSCH for TB A230with a redundancy version of RV2during the slots S1and S2206for transmission in slot S3206. The UE215may have the first two transmissions RV0and RV2prepared before initiating the first transmission attempt for the slot S2206. The LBT250for the transmission attempt in the slot S2206failed, as shown by the cross symbol. The UE215may perform another LBT for a transmission attempt in the next slot S3206, and the LBT is a success. Thus, instead of transmitting the TB A230as in the transmission timeline1104, the UE215transmits the TB A230as shown in transmission timeline1106. 
However, the UE215may have proceeded to prepare a transmission for the TB A230with a redundancy version of RV3during the slot S2206as shown by the timeline1102. In other words, the UE215may have the TB A230with redundancy versions of RV2and RV3in the transmission pipeline in the slot S2206. Since the UE215requires 2-slot time for PUSCH transmission preparation, the UE215may not have sufficient time to prepare a transmission for the PUSCH corresponding to TB A230with RV0again in the slot S2206for transmission in the slot S3206. As such, the configured RV order [RV0, RV2, RV3, RV1] may not be supported by a UE requiring 2 or more slots of timing for PUSCH transmission preparation. Accordingly, a BS (e.g., the BSs105,205, and/or400) may configure a RV sequence for a UE (e.g., the UEs115,215, and/or300) according to the UE's capability. In some aspects, the BS may configure a RV sequence with no repeating RVN at the beginning of the sequence (e.g., [RV0, RV2, RV3, and RV1]) for a UE with a PUSCH preparation timing requirement of one slot (e.g., the slots206) or less. For a UE with a PUSCH preparation timing requirement of N (e.g., N>1) slots or more, the BS may configure the UE with a RV sequence that has N number of repeating RVNs at the beginning of the RV sequence. For instance, for a UE with a 2-slot PUSCH preparation timing requirement, the BS may configure the UE with a RV sequence with 2 repeating RVNs at the beginning of the sequence (e.g., [RV0, RV0, RV3, and RV1]). For a UE with a 3-slot PUSCH preparation timing requirement, the BS may configure the UE with a RV sequence with 3 repeating RVNs at the beginning of the sequence (e.g., [RV0, RV0, RV0, RV1]). In some aspects, the BS may configure a UE with a PUSCH preparation timing requirement of two slots or more to use the scheme700instead of the scheme600for RV mapping. In some aspects, a BS (e.g., the BSs105,205, and/or400) may configure a RV sequence for a UE (e.g., the UEs115,215, and/or300) with any RVNs and/or any order and the UE may determine whether to use the scheme600or700based on the UE's capability. For instance, if the UE is configured with a RV sequence [RV0, RV2, RV3, RV1] and the UE is capable of preparing a PUSCH transmission in less than one slot, the UE may select the scheme600for RV mapping. If the UE is configured with a RV sequence [RV0, RV2, RV3, RV1] and the UE requires two or more slots to prepare for a PUSCH transmission, the UE may select the scheme700for RV mapping. Thus, while the BS may have better control over RV usage when the UE employs the scheme600, the processing timing requirement at the UE may be more restrictive. 
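A small sketch of this capability-aware sequence construction follows. It is one possible reading of the rule above, with an illustrative function name.

    def rv_sequence_for_prep_time(base_seq, prep_slots):
        # Repeat the leading RVN prep_slots times so that a UE needing
        # prep_slots slots to prepare a PUSCH can keep its already-prepared
        # transmissions valid when early LBT attempts fail.
        if prep_slots <= 1:
            return list(base_seq)
        return [base_seq[0]] * prep_slots + list(base_seq[prep_slots:])

    assert rv_sequence_for_prep_time([0, 2, 3, 1], 1) == [0, 2, 3, 1]
    assert rv_sequence_for_prep_time([0, 2, 3, 1], 2) == [0, 0, 3, 1]  # [RV0, RV0, RV3, RV1]
    assert rv_sequence_for_prep_time([0, 2, 3, 1], 3) == [0, 0, 0, 1]  # [RV0, RV0, RV0, RV1]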
FIG.12illustrates a HARQ retransmission scheme1200using configured resources according to some aspects of the present disclosure. The scheme1200may be employed by a UE such as the UEs115,215, and/or300and a BS such as the BSs105,205, and/or400in a network such as the network100. In particular, the UE may perform unscheduled HARQ retransmissions using configured resources as shown in the scheme1200. InFIG.12, the x-axes represent time in some arbitrary units. The scheme1200is described using the same configured resource structure and HARQ process arrangement as inFIG.2, and may use the same reference numerals as inFIG.2for simplicity's sake. Additionally, the scheme1200is illustrated using the mapping mechanisms in the scheme600for a set of initial TB transmissions. As shown, the BS205configures the UE215with a set of RVNs610(e.g., RV0, RV1, RV2, and RV3) in a configured order, [RV0, RV2, RV3, and RV1]. The UE215transmits the TB A230with redundancy versions RV0, RV2, and RV3in a configured resource240shown as240_t(0). The BS205may fail to decode the TB A230successfully and thus may transmit a NACK to the UE215. Alternatively, the BS205may not detect the TB A230in the configured resource240_t(0). Thus, the UE215may retransmit the TB A230in another configured resource240shown as240_t(1). The retransmission may be triggered by a retransmission timer expiration for the HARQ process H1of the TB A230. The UE215may perform LBT250prior to transmitting in the configured resource240_t(1). Upon passing the LBT250, the UE215may retransmit the TB A230. In some aspects, the UE215may select a first option1210for the retransmission. In the first option1210, the UE215performs the retransmission using the same procedure for RV mapping as in the initial transmission (in the configured resource240_t(0)). As shown, the UE215retransmits the TB A230using RV0and RV2according to the configured order in the set of RVNs610. In some aspects, the UE215may select a second option1220for the retransmission. In the second option1220, the UE215performs the retransmission by continuing from a last RVN used in the initial transmission (in the configured resource240_t(0)). As shown, RV0, RV2, and RV3were used in the configured resource240_t(0). Thus, the UE215retransmits the TB A230using RV1(after RV3in the set of RVNs610) in the configured resource240_t(1). After using the last RVN (e.g., RV1) in the RV cycle, the UE215may cyclically wrap the set of RVNs610and use RV0for a next repetition of the TB A230in the configured resource240_t(1). In some aspects, the UE215may select a third option1230for the retransmission. In the third option1230, the UE215performs the retransmission by selecting any RVNs. In some instances, the UE215may select RVNs from the set of RVNs610in any order. As an example, the UE215retransmits the TB A230using RV2and RV3. In some aspects, a NR-U network (e.g., the network100) may support multiple services with different quality-of-service (QoS) requirements. For instance, the network may support URLLC services where low latency is important. Thus, a UE (e.g., UEs115,215, and/or300) may perform HARQ process selection for transmission with consideration for traffic requirements. 
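The three retransmission options can be summarized in a short sketch; option numbers follow FIG.12, and the function is an illustrative reading rather than a normative rule.

    def retx_start_index(option, rv_seq, last_used_index):
        # Option 1210: restart the RV mapping from the start of the sequence.
        # Option 1220: continue from the RVN after the last one used in the
        # initial transmission, cyclically wrapping the sequence.
        # Option 1230: unconstrained; the UE may pick any RVNs.
        if option == 1210:
            return 0
        if option == 1220:
            return (last_used_index + 1) % len(rv_seq)
        return None  # option 1230: caller's free choice

    RV_SEQ = [0, 2, 3, 1]
    # The initial transmission used RV0, RV2, RV3 (indices 0-2), so option
    # 1220 resumes at index 3, i.e., RV1, and then wraps back to RV0.
    assert RV_SEQ[retx_start_index(1220, RV_SEQ, 2)] == 1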
FIGS.13-15illustrate various mechanisms for a UE (e.g., the UEs115,215, and/or300) to perform HARQ process selection for unscheduled HARQ transmission using configured resources. InFIGS.13-15, the schemes1300,1400, and1500may be employed by a UE such as the UEs115,215, and/or300and a BS such as the BSs105,205, and/or400in a network such as the network100. In particular, the UE may prioritize transmission or determine transmission priorities for HARQ processes (e.g., the HARQ processes222H1, H2, . . . , Hn) as shown in the schemes1300-1500. The schemes1300-1500are illustrated using the same HARQ process arrangement as inFIG.2, and may use the same reference numerals as inFIG.2for simplicity's sake. For simplicity of discussion and illustration,FIGS.13-15illustrate two HARQ processes, H1and H2, though similar HARQ process selection mechanisms may be applied to more than two HARQ processes. Additionally, the schemes1300-1500may consider HARQ processes with pending data, but without an ongoing retransmission, during a HARQ process selection. FIG.13illustrates a HARQ transmission scheme1300using configured resources according to some aspects of the present disclosure. For example, the UE215may have one or more TBs ready for transmission in the HARQ process H1222and the HARQ process H2222. The UE215further includes a HARQ selection module1310, which may be implemented using hardware and/or software. The HARQ selection module1310is configured to select a HARQ process222for transmission according to a MAC PDU preparation order. For instance, the UE215prepares a TB from the HARQ process H1222(shown as TB(H1)) followed by preparing a TB from the HARQ process H2222(shown as TB(H2)). Thus, the HARQ selection module1310may select TB(H1) for transmission before TB(H2). As shown, the UE215transmits TB(H1) at time T(0) followed by TB(H2) at time T(1). FIG.14illustrates a HARQ transmission scheme1400using configured resources according to some aspects of the present disclosure. For example, the UE215may have one or more TBs ready for transmission in the HARQ process H1222and the HARQ process H2222. The UE215further includes a HARQ selection module1410, which may be implemented using hardware and/or software. The HARQ selection module1410is configured to prioritize a retransmission over an initial transmission. For instance, the HARQ process H1222has a new TB ready for an initial transmission (shown as TB(H1, newTx)) and the HARQ process H2222has a TB ready for retransmission (shown as TB(H2, ReTx)). Thus, the HARQ selection module1410may prioritize TB(H2, ReTx) over TB(H1, newTx) for transmission. As shown, the UE215transmits TB(H2, ReTx) at time T(0) followed by TB(H1, newTx) at time T(1). FIG.15illustrates a HARQ transmission scheme1500using configured resources according to some aspects of the present disclosure. For example, the UE215may have one or more TBs ready for transmission in the HARQ process H1222and the HARQ process H2222. The UE215further includes a HARQ selection module1510, which may be implemented using hardware and/or software. The HARQ selection module1510is configured to prioritize a HARQ process222with a higher data priority over a HARQ process222with a lower data priority. For instance, the HARQ process H1222has a data priority P2and the HARQ process H2222has a data priority P1higher than the priority P2. Thus, the HARQ selection module1510may prioritize TB(H2, P1) over TB(H1, P2). As shown, the UE215transmits TB(H2, P1) at time T(0) followed by TB(H1, P2) at time T(1). In some aspects, a UE (e.g., the UE115,215, and/or300) may use a combination of the schemes1300,1400, and1500for prioritizing transmissions from different HARQ processes. For instance, the UE may prioritize HARQ retransmissions over new TB transmissions and prioritize transmissions among the HARQ retransmissions. For instance, the UE may prioritize a TB with the highest data priority scheduled for retransmissions over a TB with a lower data priority scheduled for retransmissions. In some other instances, the UE may give highest priority to a highest-priority transmission and then may prioritize a retransmission over a new transmission among the remaining scheduled transmissions. In some aspects, each HARQ process may be represented by a HARQ ID, each HARQ ID may be associated with a data priority, and each TB may be associated or tagged with a HARQ ID. Thus, in some instances, the UE may determine a data priority for a TB based on a corresponding HARQ ID. 
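A combined selection rule along these lines can be sketched as a sort key. This is one illustrative composition of the schemes1300-1500; the field names are hypothetical, and lower priority values are assumed to mean higher data priority.

    def harq_selection_key(tb):
        # Order pending TBs: higher data priority first (scheme 1500), then
        # retransmissions before initial transmissions (scheme 1400), then
        # MAC PDU preparation order (scheme 1300).
        return (tb["priority"], 0 if tb["is_retx"] else 1, tb["prep_order"])

    pending = [
        {"id": "TB(H1)", "priority": 2, "is_retx": False, "prep_order": 0},
        {"id": "TB(H2)", "priority": 1, "is_retx": True,  "prep_order": 1},
    ]
    pending.sort(key=harq_selection_key)  # TB(H2) is transmitted first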
In some aspects, a BS (e.g., the BSs105,205, and/or400) may communicate UL HARQ transmissions with a UE (e.g., the UEs115,215, and/or300) using any suitable combination of the schemes200,600,700,800,900,1000,1200,1300,1400, and/or1500described above with respect toFIGS.2,6,7,8,9,10,12,13,14, and/or15, respectively. FIG.16is a flow diagram of a communication method1600according to some aspects of the present disclosure. Steps of the method1600can be executed by a computing device (e.g., a processor, processing circuit, and/or other suitable component) of a wireless communication device or other suitable means for performing the steps. For example, a wireless communication device, such as the UE115,215, or300, may utilize one or more components, such as the processor302, the memory304, the HARQ module308, the transceiver310, the modem312, and the one or more antennas316, to execute the steps of method1600. The method1600may employ similar mechanisms as in the method500described above with respect toFIG.5and/or the schemes200,600,700,800,900,1000, and/or1200described above with respect toFIGS.2,6,7,8,9,10, and/or12, respectively. As illustrated, the method1600includes a number of enumerated steps, but aspects of the method1600may include additional steps before, after, and in between the enumerated steps. In some aspects, one or more of the enumerated steps may be omitted or performed in a different order. At step1610, the method1600includes receiving, by a UE (e.g., the UEs115,215, and/or300) from a BS (e.g., the BSs105,205, and/or400), a configuration indicating a plurality of RVNs (e.g., the set of RVNs610). In some instances, the UE may correspond to the UE300and may utilize one or more components, such as the processor302, the HARQ module308, the transceiver310, the modem312, and the one or more antennas316, to receive the configuration. At step1620, the method1600includes mapping, by the UE, one or more RVNs of the plurality of RVNs to a first set of configured transmission periods (e.g., the slots206in the configured resources240) in a shared radio frequency band (e.g., the frequency band202). In some instances, the UE may correspond to the UE300and may utilize one or more components, such as the processor302and the HARQ module308, to perform the mapping, for example, by implementing the schemes600,700,800,900, and/or1000. At step1630, the method1600includes transmitting, by the UE to the BS, one or more redundancy versions of a TB (e.g., the TBs230and232) in the shared radio frequency band during one or more configured transmission periods of the first set of configured transmission periods based on the mapping. In some instances, the UE may correspond to the UE300and may utilize one or more components, such as the processor302, the HARQ module308, the transceiver310, the modem312, and the one or more antennas316, to transmit the one or more redundancy versions of the TB. In some aspects, the step1630includes transmitting, by the UE, each redundancy version of the one or more redundancy versions of the TB during a configured transmission period of the one or more configured transmission periods based on a corresponding RVN for the configured transmission period. In some aspects, the step1630includes transmitting, by the UE, two or more redundancy versions of the TB during two or more consecutive configured transmission periods of the first set of configured transmission periods. 
In some aspects, the configuration further indicates a configured order for the plurality of RVNs, and wherein the mapping is based on the configured order. In some aspects, the configured order is based on a capability of the UE. For instance, the capability may be associated with a processing delay of the UE. In some aspects, the method1600includes performing, by the UE, an LBT (e.g., the LBT250) in the shared radio frequency band, where the step1630is performed based on the LBT. In some instances, the UE may correspond to the UE300and may utilize one or more components, such as the processor302, the HARQ module308, the transceiver310, the modem312, and the one or more antennas316, to perform the LBT, for example, including measuring channel energy and comparing the measured energy to a detection threshold. In some aspects, the step1620includes selecting, by the UE, a RVN from the plurality of RVNs for each configured transmission period of the first set of configured transmission periods based on a RV order, the mapping beginning at a configured transmission period associated with a success of the LBT, for example, as discussed in the scheme600. In some aspects, the step1620includes selecting, by the UE, a RVN from the plurality of RVNs for each configured transmission period of the first set of configured transmission periods based on a RV order responsive to an earliest attempt of the LBT, for example, as discussed in the scheme700. In some aspects, the earliest attempt of the LBT is a failure. In some aspects, the step1620includes repeating one or more RVNs of the plurality of RVNs for the mapping based on a RV order. In some aspects, the configuration further indicates a configured mapping between the plurality of RVNs and the first set of configured transmission periods, for example, as discussed in the scheme800, and wherein the mapping of the one or more RVNs is based on the configured mapping. In some aspects, a total number of the one or more redundancy versions corresponds to a total number of RVNs in the plurality of RVNs. In some aspects, a total number of the one or more redundancy versions is different from a total number of RVNs in the plurality of RVNs, for example, as shown in the scheme900. In some aspects, the step1630includes transmitting, by the UE, a communication signal including UCI (e.g., the UCI260) and a first redundancy version of the one or more redundancy versions of the TB during a first configured transmission period of the one or more configured transmission periods, the UCI including a bit-width associated with the plurality of RVNs. In some aspects, the method1600includes retransmitting, by the UE, one or more redundancy versions of the TB in the shared radio frequency band during one or more consecutive configured transmission periods of a second set of configured transmission periods. In some aspects, the retransmitting is based on a mapping of the plurality of RVNs to the second set of configured transmission periods. In some aspects, the mapping for the second set of configured transmission periods is the same as the mapping for the first set of configured transmission periods, for example, as discussed in the option1210ofFIG.12. In some aspects, the mapping for the second set of configured transmission periods is different from the mapping for the first set of configured transmission periods. 
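The LBT-dependent aspects above admit a compact sketch. In the Python fragment below, anchored_to_attempt=False approximates the scheme-600-like behavior (the RV order starts counting at the first period whose LBT succeeds) and True approximates the scheme-700-like behavior (the order is anchored at the earliest LBT attempt, so RVNs mapped to periods lost to a failed LBT are skipped); continue_rv_order sketches an option-1220-like continuation of the RV order into a second set of configured periods. All names, the rotation rule, and the bit-width formula at the end are assumptions for illustration, not the disclosure's normative definitions.

import math

def rv_for_period(rvn_list, period_index, lbt_success_index, anchored_to_attempt):
    """Pick the RVN for a configured period under the two LBT behaviors."""
    if anchored_to_attempt:
        # Scheme-700-like: mapping fixed from the earliest LBT attempt.
        return rvn_list[period_index % len(rvn_list)]
    # Scheme-600-like: mapping restarts where LBT first succeeds.
    return rvn_list[(period_index - lbt_success_index) % len(rvn_list)]

def continue_rv_order(rvn_list, last_rvn):
    """Rotate the RV order so a second window resumes after the last RVN
    used in the first window (an option-1220-like continuation)."""
    start = (rvn_list.index(last_rvn) + 1) % len(rvn_list)
    return rvn_list[start:] + rvn_list[:start]

rvns = [0, 3, 2, 1]
# LBT fails in periods 0-1 and first succeeds in period 2:
print([rv_for_period(rvns, p, 2, False) for p in range(2, 6)])  # [0, 3, 2, 1]
print([rv_for_period(rvns, p, 2, True) for p in range(2, 6)])   # [2, 1, 0, 3]
print(continue_rv_order(rvns, last_rvn=2))                      # [1, 0, 3, 2]
# A UCI field reporting which RVN was used could be sized as
# ceil(log2(len(rvns))) bits -- one reading, stated as an assumption, of
# the "bit-width associated with the plurality of RVNs" aspect above.
print(math.ceil(math.log2(len(rvns))))                          # 2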
In some aspects, the mapping for the second set of configured transmission periods is based on a RVN used for transmitting a last redundancy version of the one or more redundancy versions of the TB in the first set of configured transmission periods, for example, as discussed in the option1220ofFIG.12. FIG.17is a flow diagram of a communication method1700according to some aspects of the present disclosure. Steps of the method1700can be executed by a computing device (e.g., a processor, processing circuit, and/or other suitable component) of a wireless communication device or other suitable means for performing the steps. For example, a wireless communication device, such as the UE115,215, or300, may utilize one or more components, such as the processor302, the memory304, the HARQ module308, the transceiver310, the modem312, and the one or more antennas316, to execute the steps of method1700. The method1700may employ similar mechanisms as in the schemes1300,1400, and/or1500described above with respect toFIGS.13,14, and/or15, respectively. As illustrated, the method1700includes a number of enumerated steps, but aspects of the method1700may include additional steps before, after, and in between the enumerated steps. In some aspects, one or more of the enumerated steps may be omitted or performed in a different order. At step1710, the method1700includes prioritizing, by a UE (e.g., the UEs115,215, and/or300), a plurality of TBs (e.g., the TBs230and232) associated with a plurality of HARQ processes (e.g., the HARQ processes222H1, H2, . . . , Hn ofFIG.2) for transmission during configured transmission periods (e.g., the slots206in the configured resources240). In other words, the UE may determine the priorities for the plurality of TBs associated with the plurality of HARQ processes for transmission during the configured transmission periods. In some instances, the UE may correspond to the UE300and may utilize one or more components, such as the processor302and the HARQ module308, to prioritize the plurality of TBs. At step1720, the method1700includes transmitting, by the UE to a BS (e.g., the BSs105,205, and/or400), the plurality of TBs in a shared radio frequency band (e.g., the frequency band202) during the configured transmission periods based on the prioritizing. In other words, the UE may transmit the plurality of TBs in the shared radio frequency band during the configured transmission periods based on the determined priorities. In some instances, the UE may correspond to the UE300and may utilize one or more components, such as the processor302, the HARQ module308, the transceiver310, the modem312, and the one or more antennas316, to transmit the plurality of TBs. In some aspects, the step1710includes performing the prioritizing based on a TB transmission preparation timing, as discussed in the scheme1300. In other words, the UE may determine the priorities based on the TB transmission preparation timing. In some aspects, the step1710includes prioritizing, by the UE, a retransmission of a first TB of the plurality of TBs over an initial transmission of a second TB of the plurality of the TBs, as discussed in the scheme1400. In some aspects, the step1710includes performing the prioritizing based on data priorities of the plurality of HARQ processes, as discussed in the scheme1500. In other words, the UE may determine the priorities based on data priorities of the plurality of HARQ processes. Further embodiments of the present disclosure include a method of wireless communication. 
The method of wireless communication includes receiving, by a user equipment (UE) from a base station (BS), a configuration indicating a plurality of redundancy version numbers (RVNs). The method of wireless communication also includes mapping, by the UE, one or more RVNs of the plurality of RVNs to a first set of configured transmission periods in a shared radio frequency band; and transmitting, by the UE to the BS, one or more redundancy versions of a transport block (TB) in the shared radio frequency band during one or more configured transmission periods of the first set of configured transmission periods based on the mapping. The method may also include one or more of the following features. For instance, the method includes where the transmitting includes transmitting, by the UE, each redundancy version of the one or more redundancy versions of the TB during a configured transmission period of the one or more configured transmission periods based on a corresponding RVN for the configured transmission period. The transmitting includes transmitting, by the UE, two or more redundancy versions of the TB during two or more consecutive configured transmission periods of the first set of configured transmission periods. The configuration further indicates a configured order for the plurality of RVNs, and where the mapping is based on the configured order. The configured order is based on a capability of the UE. The method may include performing, by the UE, a listen-before-talk (LBT) in the shared radio frequency band, where the transmitting is based on the LBT. The mapping includes selecting, by the UE, a RVN from the plurality of RVNs for each configured transmission period of the first set of configured transmission periods based on a RV order, the mapping beginning at a configured transmission period associated with a success of the LBT. The mapping includes selecting, by the UE, a RVN from the plurality of RVNs for each configured transmission period of the first set of configured transmission periods based on a RV order responsive to an earliest attempt of the LBT. The earliest attempt of the LBT is a failure. The mapping includes repeating, by the UE, one or more RVNs of the plurality of RVNs for the mapping based on a RV order. The configuration further indicates a configured mapping between the plurality of RVNs and the first set of configured transmission periods, and where the mapping of the one or more RVNs is based on the configured mapping. A total number of the one or more redundancy versions corresponds to a total number of RVNs in the plurality of RVNs. A total number of the one or more redundancy versions is different from a total number of RVNs in the plurality of RVNs. The transmitting includes transmitting, by the UE, a communication signal including uplink control information (UCI) and a first redundancy version of the one or more redundancy versions of the TB during a first configured transmission period of the one or more configured transmission periods, the UCI including a bit-width associated with the plurality of RVNs. The method may include retransmitting, by the UE, one or more redundancy versions of the TB in the shared radio frequency band during one or more consecutive configured transmission periods of a second set of configured transmission periods. The retransmitting is based on a mapping of the plurality of RVNs to the second set of configured transmission periods. The mapping for the second set of configured transmission periods is the same as the mapping for the first set of configured transmission periods. 
The mapping for the second set of configured transmission periods is different from the mapping for the first set of configured transmission periods. The mapping for the second set of configured transmission periods is based on a RVN used for transmitting a last redundancy version of the one or more redundancy versions of the TB in the first set of configured transmission periods. Further embodiments of the present disclosure include a method of wireless communication. The method of wireless communication includes prioritizing, by a user equipment (UE), a plurality of transport blocks (TBs) associated with a plurality of hybrid automatic repeat request (HARQ) processes for transmission during configured transmission periods; and transmitting, by the UE to a base station (BS), the plurality of TBs in a shared radio frequency band during the configured transmission periods based on the prioritizing. The method may also include one or more of the following features. For instance, the method includes where the prioritizing is based on a TB transmission preparation timing. The prioritizing includes prioritizing, by the UE, a retransmission of a first TB of the plurality of TBs over an initial transmission of a second TB of the plurality of the TBs. The prioritizing is based on data priorities of the plurality of HARQ processes. Further embodiments of the present disclosure include a user equipment (UE). The user equipment includes a processor configured to map one or more redundancy version numbers (RVNs) of a plurality of RVNs to a first set of configured transmission periods in a shared radio frequency band. The user equipment also includes a transceiver configured to receive, from a base station (BS), a configuration indicating the plurality of RVNs; and transmit, to the BS, one or more redundancy versions of a transport block (TB) in the shared radio frequency band during one or more configured transmission periods of the first set of configured transmission periods based on the mapping. The UE may also include one or more of the following features. For instance, the UE includes where the transceiver configured to transmit the one or more redundancy versions of the TB is configured to transmit each redundancy version of the one or more redundancy versions of the TB during a configured transmission period of the one or more configured transmission periods based on a corresponding RVN for the configured transmission period. The transceiver configured to transmit the one or more redundancy versions of the TB is configured to transmit two or more redundancy versions of the TB during two or more consecutive configured transmission periods of the first set of configured transmission periods. The configuration further indicates a configured order for the plurality of RVNs, and where the mapping is based on the configured order. The configured order is based on a capability of the UE. The processor is further configured to perform a listen-before-talk (LBT) in the shared radio frequency band; and the transceiver configured to transmit the one or more redundancy versions of the TB is configured to transmit the one or more redundancy versions of the TB based on the LBT. The processor configured to map the one or more RVNs is configured to select a RVN from the plurality of RVNs for each configured transmission period of the first set of configured transmission periods based on a RV order, the mapping beginning at a configured transmission period associated with a success of the LBT. 
The processor configured to map the one or more RVNs is configured to select a RVN from the plurality of RVNs for each configured transmission period of the first set of configured transmission periods based on a RV order responsive to an earliest attempt of the LBT. The earliest attempt of the LBT is a failure. The processor configured to map the one or more RVNs is configured to repeat one or more RVNs of the plurality of RVNs for the mapping based on a RV order. The configuration further indicates a configured mapping between the plurality of RVNs and the first set of configured transmission periods; and the processor configured to map the one or more RVNs is configured to map the one or more RVNs based on the configured mapping. A total number of the one or more redundancy versions corresponds to a total number of RVNs in the plurality of RVNs. A total number of the one or more redundancy versions is different from a total number of RVNs in the plurality of RVNs. The transceiver configured to transmit the one or more redundancy versions of the TB is configured to transmit a communication signal including uplink control information (UCI) and a first redundancy version of the one or more redundancy versions of the TB during a first configured transmission period of the one or more configured transmission periods, the UCI including a bit-width associated with the plurality of RVNs. The transceiver is further configured to retransmit one or more redundancy versions of the TB in the shared radio frequency band during one or more consecutive configured transmission periods of a second set of configured transmission periods. The transceiver configured to retransmit the one or more redundancy versions of the TB is configured to retransmit the one or more redundancy versions of the TB based on a mapping of the plurality of RVNs to the second set of configured transmission periods. The mapping for the second set of configured transmission periods is the same as the mapping for the first set of configured transmission periods. The mapping for the second set of configured transmission periods is different from the mapping for the first set of configured transmission periods. The mapping for the second set of configured transmission periods is based on a RVN used for transmitting a last redundancy version of the one or more redundancy versions of the TB in the first set of configured transmission periods. Further embodiments of the present disclosure include a user equipment (UE). The user equipment includes a processor configured to prioritize a plurality of transport blocks (TBs) associated with a plurality of hybrid automatic repeat request (HARQ) processes for transmission during configured transmission periods; and a transceiver configured to transmit, to a base station (BS), the plurality of TBs in a shared radio frequency band during the configured transmission periods based on the prioritizing. The UE may also include one or more of the following features. For instance, the UE includes where the processor configured to prioritize the plurality of TBs is configured to prioritize the plurality of TBs based on a TB transmission preparation timing. The processor configured to prioritize the plurality of TBs is configured to prioritize a retransmission of a first TB of the plurality of TBs over an initial transmission of a second TB of the plurality of the TBs. 
The processor configured to prioritize the plurality of TBs is configured to prioritize the plurality of TBs based on data priorities of the plurality of HARQ processes. Further embodiments of the present disclosure include a non-transitory computer-readable medium having program code recorded thereon. The non-transitory computer-readable medium includes code for causing a user equipment (UE) to receive, from a base station (BS), a configuration indicating a plurality of redundancy version numbers (RVNs). The non-transitory computer-readable medium also includes code for causing the UE to map one or more RVNs of the plurality of RVNs to a first set of configured transmission periods in a shared radio frequency band; and code for causing the UE to transmit, to the BS, one or more redundancy versions of a transport block (TB) in the shared radio frequency band during one or more configured transmission periods of the first set of configured transmission periods based on the mapping. The non-transitory computer-readable medium may also include one or more of the following features. For instance, the non-transitory computer-readable medium includes where the code for causing the UE to transmit the one or more redundancy versions of the TB is configured to transmit each redundancy version of the one or more redundancy versions of the TB during a configured transmission period of the one or more configured transmission periods based on a corresponding RVN for the configured transmission period. The code for causing the UE to transmit the one or more redundancy versions of the TB is configured to transmit two or more redundancy versions of the TB during two or more consecutive configured transmission periods of the first set of configured transmission periods. The configuration further indicates a configured order for the plurality of RVNs, and where the mapping is based on the configured order. The configured order is based on a capability of the UE. The non-transitory computer-readable medium may include code for causing the UE to perform a listen-before-talk (LBT) in the shared radio frequency band, and the code for causing the UE to transmit the one or more redundancy versions of the TB is configured to transmit the one or more redundancy versions of the TB based on the LBT. The code for causing the UE to map the one or more RVNs is configured to select a RVN from the plurality of RVNs for each configured transmission period of the first set of configured transmission periods based on a RV order, the mapping beginning at a configured transmission period associated with a success of the LBT. The code for causing the UE to map the one or more RVNs is configured to select a RVN from the plurality of RVNs for each configured transmission period of the first set of configured transmission periods based on a RV order responsive to an earliest attempt of the LBT. The earliest attempt of the LBT is a failure. The code for causing the UE to map the one or more RVNs is configured to repeat one or more RVNs of the plurality of RVNs for the mapping based on a RV order. The configuration further indicates a configured mapping between the plurality of RVNs and the first set of configured transmission periods; and the code for causing the UE to map the one or more RVNs is configured to map the one or more RVNs based on the configured mapping. A total number of the one or more redundancy versions corresponds to a total number of RVNs in the plurality of RVNs. A total number of the one or more redundancy versions is different from a total number of RVNs in the plurality of RVNs. 
The code for causing the UE to transmit the one or more redundancy versions of the TB is configured to transmit a communication signal including uplink control information (UCI) and a first redundancy version of the one or more redundancy versions of the TB during a first configured transmission period of the one or more configured transmission periods, the UCI including a bit-width associated with the plurality of RVNs. The non-transitory computer-readable medium may include code for causing the UE to retransmit one or more redundancy versions of the TB in the shared radio frequency band during one or more consecutive configured transmission periods of a second set of configured transmission periods. The code for causing the UE to retransmit the one or more redundancy versions of the TB is configured to retransmit the one or more redundancy versions of the TB based on a mapping of the plurality of RVNs to the second set of configured transmission periods. The mapping for the second set of configured transmission periods is the same as the mapping for the first set of configured transmission periods. The mapping for the second set of configured transmission periods is different from the mapping for the first set of configured transmission periods. The mapping for the second set of configured transmission periods is based on a RVN used for transmitting a last redundancy version of the one or more redundancy versions of the TB in the first set of configured transmission periods. Further embodiments of the present disclosure include a non-transitory computer-readable medium having program code recorded thereon. The non-transitory computer-readable medium includes code for causing a user equipment (UE) to prioritize a plurality of transport blocks (TBs) associated with a plurality of hybrid automatic repeat request (HARQ) processes for transmission during configured transmission periods; and code for causing the UE to transmit, to a base station (BS), the plurality of TBs in a shared radio frequency band during the configured transmission periods based on the prioritizing. The non-transitory computer-readable medium may also include one or more of the following features. For instance, the non-transitory computer-readable medium includes where the code for causing the UE to prioritize the plurality of TBs is configured to prioritize the plurality of TBs based on a TB transmission preparation timing. The code for causing the UE to prioritize the plurality of TBs is configured to prioritize a retransmission of a first TB of the plurality of TBs over an initial transmission of a second TB of the plurality of the TBs. The code for causing the UE to prioritize the plurality of TBs is configured to prioritize the plurality of TBs based on data priorities of the plurality of HARQ processes. Further embodiments of the present disclosure include a user equipment (UE). The user equipment includes means for receiving, from a base station (BS), a configuration indicating a plurality of redundancy version numbers (RVNs). The user equipment also includes means for mapping one or more RVNs of the plurality of RVNs to a first set of configured transmission periods in a shared radio frequency band; and means for transmitting, to the BS, one or more redundancy versions of a transport block (TB) in the shared radio frequency band during one or more configured transmission periods of the first set of configured transmission periods based on the mapping. The UE may also include one or more of the following features. 
For instance, the UE includes where the means for transmitting the one or more redundancy versions of the TB is configured to transmit each redundancy version of the one or more redundancy versions of the TB during a configured transmission period of the one or more configured transmission periods based on a corresponding RVN for the configured transmission period. The means for transmitting the one or more redundancy versions of the TB is configured to transmit two or more redundancy versions of the TB during two or more consecutive configured transmission periods of the first set of configured transmission periods. The configuration further indicates a configured order for the plurality of RVNs, and where the mapping is based on the configured order. The configured order is based on a capability of the UE. The UE may include means for performing a listen-before-talk (LBT) in the shared radio frequency band, where the means for transmitting the one or more redundancy versions of the TB is configured to transmit the one or more redundancy versions of the TB based on the LBT. The means for mapping the one or more RVNs is configured to select a RVN from the plurality of RVNs for each configured transmission period of the first set of configured transmission periods based on a RV order, the mapping beginning at a configured transmission period associated with a success of the LBT. The means for mapping the one or more RVNs is configured to select a RVN from the plurality of RVNs for each configured transmission period of the first set of configured transmission periods based on a RV order responsive to an earliest attempt of the LBT. The earliest attempt of the LBT is a failure. The means for mapping the one or more RVNs is configured to repeat one or more RVNs of the plurality of RVNs for the mapping based on a RV order. The configuration further indicates a configured mapping between the plurality of RVNs and the first set of configured transmission periods; and the means for mapping the one or more RVNs is configured to map the one or more RVNs based on the configured mapping. A total number of the one or more redundancy versions corresponds to a total number of RVNs in the plurality of RVNs. A total number of the one or more redundancy versions is different from a total number of RVNs in the plurality of RVNs. The means for transmitting the one or more redundancy versions of the TB is configured to transmit a communication signal including uplink control information (UCI) and a first redundancy version of the one or more redundancy versions of the TB during a first configured transmission period of the one or more configured transmission periods, the UCI including a bit-width associated with the plurality of RVNs. The UE may include means for retransmitting one or more redundancy versions of the TB in the shared radio frequency band during one or more consecutive configured transmission periods of a second set of configured transmission periods. The means for retransmitting the one or more redundancy versions of the TB is configured to retransmit the one or more redundancy versions of the TB based on a mapping of the plurality of RVNs to the second set of configured transmission periods. The mapping for the second set of configured transmission periods is the same as the mapping for the first set of configured transmission periods. The mapping for the second set of configured transmission periods is different from the mapping for the first set of configured transmission periods. 
The mapping for the second set of configured transmission periods is based on a RVN used for transmitting a last redundancy version of the one or more redundancy versions of the TB in the first set of configured transmission periods. Further embodiments of the present disclosure include a user equipment (UE). The user equipment includes means for prioritizing a plurality of transport blocks (TBs) associated with a plurality of hybrid automatic repeat request (HARQ) processes for transmission during configured transmission periods; and means for transmitting, to a base station (BS), the plurality of TBs in a shared radio frequency band during the configured transmission periods based on the prioritizing. The UE may also include one or more of the following features. For instance, the UE includes where the means for prioritizing the plurality of TBs is configured to prioritize the plurality of TBs based on a TB transmission preparation timing. The means for prioritizing the plurality of TBs is configured to prioritize a retransmission of a first TB of the plurality of TBs over an initial transmission of a second TB of the plurality of the TBs. The means for prioritizing the plurality of TBs is configured to prioritize the plurality of TBs based on data priorities of the plurality of HARQ processes. Information and signals may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof. The various illustrative blocks and modules described in connection with the disclosure herein may be implemented or performed with a general-purpose processor, a DSP, an ASIC, an FPGA or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices (e.g., a combination of a DSP and a microprocessor, multiple microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration). The functions described herein may be implemented in hardware, software executed by a processor, firmware, or any combination thereof. If implemented in software executed by a processor, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Other examples and implementations are within the scope of the disclosure and appended claims. For example, due to the nature of software, functions described above can be implemented using software executed by a processor, hardware, firmware, hardwiring, or combinations of any of these. Features implementing functions may also be physically located at various positions, including being distributed such that portions of functions are implemented at different physical locations. 
Also, as used herein, including in the claims, “or” as used in a list of items (for example, a list of items prefaced by a phrase such as “at least one of” or “one or more of”) indicates an inclusive list such that, for example, a list of [at least one of A, B, or C] means A or B or C or AB or AC or BC or ABC (i.e., A and B and C). As those of some skill in this art will by now appreciate and depending on the particular application at hand, many modifications, substitutions and variations can be made in and to the materials, apparatus, configurations and methods of use of the devices of the present disclosure without departing from the spirit and scope thereof. In light of this, the scope of the present disclosure should not be limited to that of the particular embodiments illustrated and described herein, as they are merely by way of some examples thereof, but rather, should be fully commensurate with that of the claims appended hereafter and their functional equivalents.
DETAILED DESCRIPTION Aspects of the present disclosure involve a networking device configured to detect a security intrusion at a port of the device and to provide an indication of the detection of a security intrusion to a central alarm system. In one example, the networking device may include one or more input/output (or bi-directional) ports for communications with other components of the networking device. The bi-directional ports may include a photodetector, such as a photodiode, to detect light signals being received on the port. When the port is not connected, no light is received at the port and the photodetector will subsequently not detect light on the port. The photodetector may be in communication with a controller such that indicator signals of the presence of light on the port may be transmitted to the controller. The controller may be configured to generate an alarm condition in response to the indication signals received from the photodetectors, as described in more detail below. In some instances, an input signal of network traffic may be received at a common input of the networking device. The common input signal may be demultiplexed or copied onto one or more of the bi-directional ports for connection to other networking devices or other components of the networking device. In some instances, however, the demultiplexed signal on the input/output ports may be intercepted by a network capture device. To prevent the connection of a network capture device on the ports, a loopback connector may be coupled with an otherwise empty port causing the light of the common input signal to be redirected back to the port and detected by the photodetector. The controller of the networking device may configure a state for each unused port to monitor for the presence of the light on the port based on the signals received from the photodetectors. In the circumstance that a feedback loop is removed from a port and the feedback signal is lost (possibly to insert a network capture device into the port), the controller may generate an alarm signal based on the indication of the lost signal. In some instances, the alarm condition (as well as information identifying the port associated with the alarm) may be transmitted to an alarm system for processing. In this manner, the networking device may be configured to detect the potential connection of a network capture device (or other nefarious networking device) to an unused port of the networking device and provide an alarm to a monitoring system for verification of the loss of signal and to mitigate any security breaches associated with the networking device. FIG.1is a schematic diagram illustrating an exemplary network system for optical multiplexing connected to a security monitoring system, in accordance with one or more implementations. The system100includes a networking device102, in particular a Reconfigurable Optical Add-Drop Multiplexer (ROADM) device102used to multiplex optical signals onto a common output in which switching among multiple input signals is provided to generate the multiplexed output signal. However, it should be appreciated that other networking devices may also be included in the system100for which the devices, methods, and systems described herein may apply. As described, the ROADM device102provides for switching an incoming optical signal among multiple output or transmission paths. For example, a ROADM102may receive an optical signal from a source, such as Site A. 
The ROADM102provides for redirecting of the signal to any number of alternate sites in communication with the ROADM102, such as Site B. The number of alternate sites or transmission paths available to the ROADM102to switch to may be referred to as the device's degrees of freedom. Thus, a ROADM102with four possible output paths may include four degrees of freedom for transmission of a received optical signal. In addition, portions of the optical signal may be demultiplexed from the input signal and provided to other circuits or customers of a network. This may be referred to as "dropping" portions of the signal from the multiplexed optical signal. InFIG.1, only a portion of the ROADM device102is illustrated for simplicity. In particular, the ROADM102ofFIG.1illustrates a first wavelength selectable switch (WSS)104portion of a ROADM in communication with a second WSS portion106of a ROADM102. Each WSS104,106provides for the multiplexing, switching, demultiplexing, dropping, and the like of an incoming optical signal. A ROADM102may include several WSS modules, although only two are illustrated inFIG.1. The illustrated two WSS modules of the ROADM102therefore provide the ROADM with two degrees of freedom, meaning that the ROADM102may be configured to provide an incoming optical signal to another site or back to the origination site. Additional WSS modules increase the degrees of freedom of the ROADM102by providing additional transmission paths for an incoming optical signal. In this example, the ROADM102is configured to interconnect Site A and Site B such that an optical signal may be transmitted between the two sites via the ROADM102device. The WSS components104,106of the ROADM102illustrated are 8x1 wavelength selectable switches, as they multiplex or demultiplex a signal between one bi-directional port (referred to as a "common" port) and8bi-directional ports (referred to as the "output" ports). The two WSS104,106may be part of the same or separate ROADM devices102. Additional WSS104,106may be included in a ROADM102to increase the degrees of freedom of the ROADM102device. Also, other types of WSS104,106may be incorporated in the ROADM102, such as 4x1, 16x1, or 8x2 WSS components, and so forth. As shown, WSS-A104includes a common port connected to Site A such that an optical signal may be received or transmitted to Site A from WSS-A104. Similarly, WSS-B106includes a common port connected to Site B such that an optical signal may be received or transmitted to Site B from WSS-B106. Within the ROADM102, each WSS104,106provides a copy of the signals received on the common port to the eight output ports of the respective WSS104,106. It is noted that each of the output ports of the WSS104,106are bi-directional such that the term "output port" is used for convenience herein. For example, WSS-A104may receive an input signal from Site A on the common port and provides (through a demultiplexer described below) the received common signal to each of the eight output ports of the WSS-A104. The output ports of each of the WSS104,106provide for interconnection with other WSS of the ROADM102or to drop portions of the optical signal to other circuits or networks. In the example illustrated inFIG.1, WSS-A104and WSS-B106are interconnected over port 2 of each WSS. Further, port 8 of each WSS104,106is utilized to drop out the optical signal through demultiplexers114,116. Although illustrated as connecting across port 2, port 2 of WSS-A104may connect to any port of WSS-B106to interconnect the WSS components104,106. 
Further, the dropped signal may be output from any port of the WSS104,106. Further still, other WSS of the ROADM102may connect to the unused ports of the WSS104,106illustrated to increase the degrees of freedom of the ROADM102. In the example shown, WSS-B106may receive the optical signal from WSS-A104via port 2 and provide the optical signal on the common output port to Site B. In the circumstance in which other WSS are connected to WSS-B106via one or more of the unused ports, WSS-B106may combine or multiplex signals from each of the connected ports through a switching mechanism. The example illustrated inFIG.1provides for an optical signal received from Site A at the ROADM102to be output on port 2 of WSS-A104to port 2 of WSS-B106and output to Site B. In a similar manner, an optical signal from Site B may be transmitted to Site A along the same path in the opposite direction. Portions of the optical signals may be dropped to other circuits or networks in addition to being transmitted to other sites. The ROADM102may also include a controller108in communication with the WSS104,106of the ROADM102. In some instances, the controller108may configure one or more aspects of the WSS104,106, such as assigning an operational state to one or more ports of the WSS, controlling the switching and/or multiplexing functions of the WSS104,106, receiving operational information from the WSS104,106, and the like. In some instances, signals received from the WSS104,106may cause the controller108to generate an alarm condition or state. The controller108may, in some instances, transmit an alarm to an alarm monitor system112via a network110. The transmitted alarm may be based on a detected loss of light at a port of a WSS104,106of the ROADM102and may indicate a potential security breach at the ROADM102. The transmission of the alarm condition to the alarm monitoring system112may also initiate one or more responses or procedures to determine the extent of the detected security breach at the ROADM102. The operations, algorithms, and functions of the controller108are described in more detail below. FIG.2is a schematic diagram illustrating an exemplary networking device104for multiplexing optical signals, in accordance with one or more implementations. In particular, the networking device104ofFIG.2is a schematic illustration of a WSS104of the ROADM102discussed above in relation toFIG.1. Although described in relation to the WSS-A104ofFIG.1, WSS-B106or other WSS components of a ROADM102may include a similar composition or design. The systems, methods, and devices discussed herein to provide a security feature to a ROADM102or other networking device may also apply to other devices or components. Rather, the design ofFIG.2is provided here as an example component to which one or more of the features described herein may apply. As described, the WSS104may include a common port and 8 output ports, although more or fewer common ports and/or output ports may be included in other versions of the WSS. Each port, including the common port, includes an input portion and an output portion such that each port is a bi-directional port. Although the input and output portions of the ports are illustrated separately inFIG.2, each port may include a single interface over which both input and output signals are transmitted. Each of ports 1-8 of the WSS-A104ofFIG.1thus are illustrated as an input portion and an output portion inFIG.2. For example, port 1 of WSS-A104includes input port202and output port226. 
Input ports202-216may be connected to a wavelength switch218that is configured to multiplex input signals from the input ports202-216into a common, multiplexed output signal as described above. Thus, wavelength switch218may transmit the multiplexed output signal to common output port222for transmission to another site or node of a network (such as to Site B ofFIG.1). Wavelength switch218may also switch between signals on input ports202-216to generate the switched multiplexed optical output signal as described. The common input port224may receive an optical signal from a site or node of the network (such as Site A ofFIG.1) and provide the common input signal to a passive demultiplexer220component. The demultiplexer220duplicates the common input signal on common input port224and provides the duplicated signal to output ports 1-8 (226-240). Thus, each output port226-240receives and provides the same signal corresponding to the common input signal. However, one or more of the output ports226-240may remain unused when installed or instantiated in a network. UtilizingFIGS.1and2as an example, WSS-A104receives a common input signal at common input port224from Site A. Passive demultiplexer220of WSS-A104duplicates the input signal and provides the signal to each output port226-240. Output port 2228of WSS-A104is connected to input port 2204of WSS-B106such that the input signal provided to WSS-A104is transmitted out of port 2228into corresponding port 2204of WSS-B106. Upon transmission via input port 2204, the signal is received at wavelength switch218of WSS-B106and provided to common output port222for transmission to Site B. In this manner, signals received at the common input port of the WSS104may be transmitted or switched to other WSS components or sites via the communications ports of the WSS. A photodetector242or other light detecting sensor may be connected to each of the input ports202-216of the ROADM device102. In general, a photodetector242detects the presence of light on the corresponding input port202-216and provides an indication signal of the measurement of light on the input. In some instances, the photodetector242may measure the intensity of the light present on the input port202-216, while other photodetectors242provide an on or off indication. Although not illustrated inFIG.2, each of the photodetectors242may be in communication with controller108to provide the indication signal to the controller108. As described in more detail below, the controller108may determine which of the input ports202-216has a light signal present at the port. The common input port224may also include a photodetector242to detect the presence of a light signal on the common input port224. As further illustrated in the system100ofFIG.1, one or more of the ports of the WSS104,106of the ROADM102may be unused. For example, port 1 and ports 3-7 of both WSS104,106of the ROADM102are unconnected to other WSS, components of the ROADM, networks, etc. These unused or open ports provide a potential security flaw for network traffic carried by the ROADM102. For example,FIG.3is a schematic diagram illustrating a compromised networking device102, in accordance with one or more implementations. The networking device102may be the same ROADM as discussed above that includes WSS-A104and WSS-B106connected across port 2 of the respective WSS. In this illustration, however, a network capture and analyzer device302is connected to port 1 of WSS-A104and port 1 of WSS-B106. 
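Because the demultiplexer places a copy of the common input on every output port, a tap on any single unused port already sees all of the traffic, which is what makes the intrusion ofFIG.3possible. The short Python model below (class and attribute names are invented for illustration and are not drawn from the disclosure) captures this duplication and the per-port photodetector indications:

from dataclasses import dataclass, field

@dataclass
class WssModel:
    """Toy model of an 8-port WSS: the common input is duplicated to every
    output port, and each input port has a photodetector reading."""
    num_ports: int = 8
    common_input_light: bool = False
    input_light: dict = field(default_factory=dict)  # port -> light present

    def output_signal(self, port):
        # Passive demultiplexer: every output port carries the common input.
        return self.common_input_light

    def photodetector(self, port):
        # Indication signal the controller receives for an input port.
        return self.input_light.get(port, False)

wss = WssModel(common_input_light=True)
# An attacker's tap on unused port 1 receives the full common signal:
print(wss.output_signal(1))  # True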
As discussed above in relation toFIG.2, the common input signal provided to common input port224is replicated to each output port226-240. A network analyzer302connected to an unused port of the WSS104,106of the ROADM102may therefore receive the common input signal provided on the common input port224. By connecting the network analyzer302to one port of each of the WSS104,106of the ROADM102, all traffic signals into the ROADM102may be obtained, analyzed, and/or stored. A bad actor attempting to capture all or some of the data of the network may therefore connect a capture device302to the ports of a ROADM102to steal the network information. FIG.4illustrates a method400for detecting a security intrusion of a network device102, in accordance with one or more implementations. One or more of the operations may be performed by components of the ROADM device102, such as controller108. In addition, one or more of the operations may be performed by software programs, hardware components, or a combination of hardware and software components of the networking device102. Beginning in operation402, one or more of the unused ports of WSS104,106of the ROADM102may be configured to detect an output at the port. For example,FIG.5is a schematic diagram illustrating an exemplary secured configuration of a network device102, in accordance with one or more implementations. The network device102ofFIG.5is a similar ROADM device as described above with WSS-A104interconnected with WSS-B106over port 2. However, in the embodiment illustrated, each unused port for WSS-A104and WSS-B106includes a loopback502that connects the output port of each port to the corresponding input portion of the port. More particularly, a loopback circuit502may be inserted or connected to ports 1 and 3-7 of WSS-A104and WSS-B106as those ports are unused by the ROADM102. The loopback circuits502connect the output portion of the respective ports (such as Port 1—Out226of the WSS104ofFIG.2) to the corresponding input portion of the same port (such as Port 1—In202ofFIG.2). The loopbacks502thus provide any signal present on the output portion of the port to the input portion of the port. The loopbacks502may be physical connections or software connections configured to feedback the signal on the output portion of the port to the input portion of the same port. In one instance, the controller108may configure a software interconnect to form the loopback502. In another example, the loopback502may be a physical device that is inserted into the port to feedback the signal. Returning to the method400ofFIG.4, the controller108may configure a port state for each of the ports of the WSS104,106of the ROADM102device. For example, a function of each port may be identified and stored by the controller108based on the configuration of the ROADM102. Such port states may include an unused or unconnected state, an interconnection state (for ports that interconnect to other WSS components of the ROADM102), a traffic drop state (for ports from which one or more portions of the optical signal are dropped to other circuits), and the like. The controller108in the example figures may thus assign an "interconnect" state for port 2 of WSS-A104and port 2 of WSS-B106, a "drop traffic" state for port 8 of the WSS104,106, and an "unconnected" state for ports 1 and 3-7 for the WSS104,106. In some instances, the controller108may assign or associate a "used/connected" state or an "unused/unconnected" state to the ports of the WSS104,106. 
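One hedged way to picture the port-state table and loopbacks just described (names are hypothetical; the disclosure leaves the representation to the implementation) is that a loopback on an unconnected port simply feeds that port's output copy of the common signal back to its own input, so its photodetector should read the same value as the common input:

from enum import Enum

class PortState(Enum):
    INTERCONNECT = "interconnect"  # e.g., port 2, linking WSS-A and WSS-B
    DROP_TRAFFIC = "drop traffic"  # e.g., port 8, dropping signal portions
    UNCONNECTED = "unconnected"    # e.g., ports 1 and 3-7

# Port-state table the controller might keep for one WSS:
port_states = {2: PortState.INTERCONNECT, 8: PortState.DROP_TRAFFIC,
               **{p: PortState.UNCONNECTED for p in (1, 3, 4, 5, 6, 7)}}

def apply_loopbacks(common_input_light, port_states):
    """Return the photodetector reading expected at each input port after
    loopbacks are inserted on every unconnected port: each loopback feeds
    the port's output copy of the common input back to its own input."""
    return {port: common_input_light
            for port, state in port_states.items()
            if state is PortState.UNCONNECTED}

print(apply_loopbacks(True, port_states))
# {1: True, 3: True, 4: True, 5: True, 6: True, 7: True}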
In operation406, the controller108may set or associate a security alarm procedure for ports of the WSS104,106designated as unused or unconnected. In general, the alarm procedure monitors for a loss of light at the port and initiates an alarm condition when a loss of light is detected. For example and as discussed above with reference toFIG.2, the common input signal is duplicated and provided to the output portion of each port226-240of the WSS104such that the common input signal is available from any port226-240of the component. The feedback loops502provide the common input signal on each output port226-240back to the input portion of the respective ports202-216. The associated photodetector242connected to the input portion of each port202-216may then detect that a light signal is present at the input port202-216. As each photodetector242provides an indication signal to the controller108as to the presence or loss of light at the respective port202-216, the controller108may determine the presence of the common input signal being received at each of the unused input ports202-216. Returning to operation406, the alarm procedure associated with the unused ports of the WSS104,106may comprise generating an alarm condition for the port when a loss of light is detected at the port by the corresponding photodetector242. Upon detection of a loss of a light signal at the port, the controller108may generate an alarm condition for the port. In addition to setting the alarm condition for the port, the controller108may transmit an indication of the alarm condition for the port to the alarm monitoring system112, as described in more detail below. With a security alarm procedure associated with the unused ports, the controller108may begin monitoring the ports for a loss of signal. In operation408, the controller108may determine if a loss of a light signal at any of the unused ports is detected, based on the indication signals provided by the photodetectors242of the WSS104,106. A loss of light at an unused port may occur when a loopback502is removed from a port, perhaps to insert a network capture device302into a port of the WSS104,106. If no loss of light at the port is detected, the controller108may continue to monitor for the loss of light in operation410and determine again if a loss of light is detected at the port in operation408. In some instances, determining a loss of light at an input port of the WSS104,106may include the controller108comparing the sensor signal from the photodetector connected to the common input port224to the sensor signals received from the photodetectors242associated with the input ports202-216. A loss of light may be determined if the sensor signals from the compared photodetectors242are different. In this example, a loss of the common input signal may not necessarily trigger an alarm as the common input signal loss would be detected at the common input port224and the input ports202-216simultaneously. Rather, the alarm condition may be triggered when the common input signal is detected at the common input port224but not at one or more of the input ports202-216, indicating a removal of the loopback502from the detected port. The loopback502at each unused port provides the common input signal to the corresponding photodetectors242such that the photodetectors242may detect when the loopback502is removed and the light signal is no longer present at the input port. 
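The comparison described for operations406-408reduces to a simple per-port rule, sketched below in Python; the function name and the alarm representation are assumptions, and a real controller would likely be event-driven rather than polled.

def check_security_alarms(common_port_light, input_port_light, secured_ports):
    """Apply the detection rule described above: raise an alarm for a
    security-enabled port only when light is present at the common input
    port but absent at that port's input, i.e., the loopback was likely
    removed. A simultaneous loss at the common port and the input ports
    indicates an upstream outage, not an intrusion, so no alarm is raised.
    """
    alarms = []
    if not common_port_light:
        return alarms  # common signal lost everywhere; not a port intrusion
    for port in secured_ports:
        if not input_port_light.get(port, False):
            alarms.append({"port": port, "condition": "loss of light"})
    return alarms

# Example: loopback pulled from port 3 while the common signal is present.
readings = {1: True, 3: False, 4: True, 5: True, 6: True, 7: True}
print(check_security_alarms(True, readings, secured_ports=[1, 3, 4, 5, 6, 7]))
# [{'port': 3, 'condition': 'loss of light'}]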
If a loss of light is detected in operation408, the controller108may generate an alarm condition for the affected port in operation412. Further, the controller108may transmit an indication of the alarm condition to the alarm monitoring system112via a network110. The alarm monitoring system112may be any computing device or network associated with the ROADM102for monitoring alarms generated by the ROADM102. In some instances, the alarm monitoring system112may monitor alarms for several network devices and may be associated with a network operational center. The alarm monitoring system112may generate one or more alerts to network engineers or administrators in response to receiving the alarm condition. The generated alarm indication may include information associated with the ROADM102, such as an identification of the device, an identification of the alarm type, an identification of the particular port associated with the alarm, a location of the device102, and the like. In operation414, the controller108may determine the cause of the alarm condition at the affected port. For example, the alarm monitoring system112may generate an alert to a network or device administrator to investigate the cause of the generated alarm. This may include dispatching a technician to the device102to determine if a network capture device was connected to the port in which an alarm was generated. In some instances, the loss of light alarm at the port may be the result of an accidental removal of the loopback502or some network outage at the port. In other instances, however, the alarm may be triggered by the connection of a nefarious device to the port. As long as the alarm cause is not verified or investigated, the controller108may maintain the alarm condition for the port in operation416. However, after verification or investigation of the alarm cause, the alarm condition for the port may be reset in operation418and the controller108may return to monitoring the photodetectors242of the WSS104,106of the ROADM102in operation406by setting the security alarm for the unused ports of the WSS104,106. In one example, resetting of the alarm may be in response to a command provided to the controller108from the alarm monitoring system112via the network110. In another example, the alarm condition may be cleared at the ROADM102device by accessing the controller108. In addition to the photodetectors242, the ports of the WSS104,106may also include one or more physical sensors to detect the insertion of a cable or device into the port. For example, a mechanical switch may be connected to or otherwise associated with each port of the WSS104,106that activates when a device or cable is inserted into the port. Each switch may transmit a signal to the controller108that indicates a position of the respective switch. The controller108may determine, based on the signals provided by the mechanical switches, which ports have a cable or device plugged into the port. For ports that are designated as “unused” or “unconnected”, a signal from the switch indicating that a device is connected to the port may cause the controller108to generate an alarm. In some instances, the controller108may utilize the switch sensor indicator to verify the photodetector242input, override the photodetector242input, or generate an alarm regardless of the photodetector242input. Additional security features may also be included in the ROADM102or WSS104,106of the ROADM. 
For example, the controller108may configure one or more attenuation levels of the unused ports of the WSS104,106. In one instance, the attenuation level of the unused ports may be increased by the controller108such that a transmission signal is no longer transmitted from the output portion of the ports. In another instance, a noise signal may be applied to the input portion of each port of the WSS104,106to fill up the spectrum at the port. In this instance, loopbacks502may not be used as the noise generating devices504are inserted into the input portion of the ports. To accommodate for the lack of loopbacks502, a photodetector242may then be applied or inserted at the output portion of the port and provide a detection signal to the controller108. Removal of the photodetector242at the output port would generate the alarm as described above. In general, the above methods and systems apply to any networking device in which an input signal is replicated on one or more unused ports, causing a potential security vulnerability to the networking device. FIG.6illustrates a system600configured for detecting a security intrusion of a network device102, in accordance with one or more implementations. In some implementations, system600may include one or more servers604in communication with at least one ROADM102or other networking device. The ROADM102may be configured to communicate with the server computing platforms604according to a client/server architecture and/or other architectures. The ROADM102may be configured with machine-readable instructions606that include one or more instruction modules. The instruction modules may include computer program modules. The instruction modules may include one or more of a signal monitoring module608, a security alert initiation module610, a security receiving module612, a security alert transmittal module614, a signal receiving module616, and/or other instruction modules. Security alert initiation module610may be configured to, in response to detecting a change in the light signal, initiate a security alert. A security receiving module612may be configured to, at the processor, receive a security setting for the port for the security enabling, the security setting indicative of the presence of the loopback connector in the port. Generally speaking, remotely or by connecting a service terminal or computer to the network device (e.g., ROADM), a configuration module of the ROADM102may be accessed and any port intended to be looped-back (security enabled) may be configured or set as such. The security setting may enable the monitoring of the change in the light signal on the port of the networking device102. The security enabled port may include a loopback connector502that redirects the light signal, as described. The removal of such loopback connector502may cause the change in the light signal. More particularly, the light signal may be redirected by the loopback connector502for detection by a photodetector242coupled with the security enabled port. The removal of the loopback connector502may cause the change in the light signal and the initiating of the security alert. So, by security enabling the port, the processor detects light changes associated with the port when the loopback connector is removed. Security alert transmittal module614may be configured to transmit the security alert over a network. For example, the server604may be part of a network operations center and be running a monitoring program. The security alert may be received and flagged at the server. 
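As a rough sketch of the alert transmitted to the monitoring server, the Python fragment below assembles such a message; every field name and the JSON serialization are hypothetical, and, as the next paragraph notes, the disclosure only says the signal may identify the device and carry port information and the like.

import json
import time

def build_security_alert(device_id, port, alarm_type="loss of light",
                         location=None):
    """Assemble a security alert message for the monitoring system. The
    JSON shape is illustrative only; any serialization would do."""
    return json.dumps({
        "device": device_id,
        "port": port,
        "alarm": alarm_type,
        "location": location,
        "timestamp": time.time(),
    })

print(build_security_alert("ROADM-102", port=3, location="Site A hut"))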
The security alert signal may include some identification of the device initiating the signal, and may also include port information and the like. In some instances, personnel may then initiate a manual review of the device, removing any nefarious hardware or otherwise correcting any problem associated with the device that triggered the security alert. Alternatively, the monitoring program may generate a signal, for receipt by a signal receiving module616configured to receive a signal to disable the networking device. The initiation of such a signal may be automatic or responsive to a user command entered at the monitoring device. In some implementations, the server(s)604and networking devices102may be operatively linked via one or more electronic communication links. For example, such electronic communication links may be established, at least in part, via a network such as the Internet and/or other networks. It will be appreciated that this is not intended to be limiting, and that the scope of this disclosure includes implementations in which networking devices102, server(s)604, and/or external resources618may be operatively linked via some other communication media. A given server computing platform604may include one or more processors configured to execute computer program modules. The computer program modules may be configured to enable an expert or user associated with the given computing platform604to interface with the system600and/or external resources618, and/or provide other functionality attributed herein to the server platform(s)604. By way of non-limiting example, the server computing platform604may be implemented by any number of possible computing platforms, including some level of virtualization, and may include a blade device, a desktop computer, a laptop computer, a handheld computer, a tablet computing platform, and/or other computing platforms. External resources618may include sources of information outside of system600, external entities participating with system600, and/or other resources. In some implementations, some or all of the functionality attributed herein to external resources618may be provided by resources included in system600. The network device602may include electronic storage620, one or more processors622, and/or other components. The network device602may include communication lines or ports to enable the exchange of information with a network and/or other computing platforms. The network device602may include a plurality of hardware, software, and/or firmware components operating together to provide the functionality attributed herein to security enabling a network device. It should be appreciated that although modules608,610,612,614, and/or616are illustrated inFIG.6as being implemented within a single processing unit, in implementations in which processor(s)622includes multiple processing units, one or more of modules608,610,612,614, and/or616may be implemented remotely from the other modules. The description of the functionality provided by the different modules608,610,612,614, and/or616described herein is for illustrative purposes, and is not intended to be limiting, as any of modules608,610,612,614, and/or616may provide more or less functionality than is described. For example, one or more of modules608,610,612,614, and/or616may be eliminated, and some or all of its functionality may be provided by other ones of modules608,610,612,614, and/or616.
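The alert transmittal and disable-signal handling attributed to modules614and616might look, again purely as a sketch, like the following; the SecurityAlert fields mirror the information items described above (device identification, alarm type, port, and location), while handle_alert and the disable call are illustrative assumptions rather than any actual interface.

```python
# Hypothetical shape of a security alert and its server-side handling.
from dataclasses import dataclass, asdict
import json

@dataclass
class SecurityAlert:
    device_id: str   # identification of the device initiating the signal
    alarm_type: str  # e.g., "loopback-removed"
    port_id: int     # port information
    location: str    # location of the device

def handle_alert(alert: SecurityAlert, network_device, auto_disable=False):
    # The monitoring program receives and flags the alert at the server.
    print("ALERT:", json.dumps(asdict(alert)))
    if auto_disable:
        # Corresponds to the signal receiving module: a disable signal may be
        # issued automatically or in response to a user command.
        network_device.disable()
```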
As another example of this flexibility, processor(s)622may be configured to execute one or more additional modules that may perform some or all of the functionality attributed herein to one of modules608,610,612,614, and/or616. FIG.7is a block diagram illustrating an example of a computing device or computer system700which may be used in implementing the embodiments of the components of the network disclosed above. For example, the computing system700ofFIG.7may be the controller108discussed above. The computer system (system) includes one or more processors702-706. Processors702-706may include one or more internal levels of cache (not shown) and a bus controller or bus interface unit to direct interaction with the processor bus712. Processor bus712, also known as the host bus or the front side bus, may be used to couple the processors702-706with the system interface714. System interface714may be connected to the processor bus712to interface other components of the system700with the processor bus712. For example, system interface714may include a memory controller718for interfacing a main memory716with the processor bus712. The main memory716typically includes one or more memory cards and a control circuit (not shown). System interface714may also include an input/output (I/O) interface720to interface one or more I/O bridges or I/O devices with the processor bus712. One or more I/O controllers and/or I/O devices may be connected with the I/O bus726, such as I/O controller728and I/O device730, as illustrated. I/O device730may also include an input device (not shown), such as an alphanumeric input device, including alphanumeric and other keys for communicating information and/or command selections to the processors702-706. Another type of user input device includes cursor control, such as a mouse, a trackball, or cursor direction keys for communicating direction information and command selections to the processors702-706and for controlling cursor movement on the display device. System700may include a dynamic storage device, referred to as main memory716, or a random access memory (RAM) or other computer-readable devices coupled to the processor bus712for storing information and instructions to be executed by the processors702-706. Main memory716also may be used for storing temporary variables or other intermediate information during execution of instructions by the processors702-706. System700may include a read only memory (ROM) and/or other static storage device coupled to the processor bus712for storing static information and instructions for the processors702-706. The system set forth inFIG.7is but one possible example of a computer system that may employ or be configured in accordance with aspects of the present disclosure. According to one embodiment, the above techniques may be performed by computer system700in response to processor704executing one or more sequences of one or more instructions contained in main memory716. These instructions may be read into main memory716from another machine-readable medium, such as a storage device. Execution of the sequences of instructions contained in main memory716may cause processors702-706to perform the process steps described herein. In alternative embodiments, circuitry may be used in place of or in combination with the software instructions. Thus, embodiments of the present disclosure may include both hardware and software components.
A machine-readable medium includes any mechanism for storing or transmitting information in a form (e.g., software, processing application) readable by a machine (e.g., a computer). Such media may take the form of, but are not limited to, non-volatile media and volatile media and may include removable data storage media, non-removable data storage media, and/or external storage devices made available via a wired or wireless network architecture with such computer program products, including one or more database management products, web server products, application server products, and/or other additional software components. Examples of removable data storage media include Compact Disc Read-Only Memory (CD-ROM), Digital Versatile Disc Read-Only Memory (DVD-ROM), magneto-optical disks, flash drives, and the like. Examples of non-removable data storage media include internal magnetic hard disks, SSDs, and the like. The one or more memory devices606may include volatile memory (e.g., dynamic random access memory (DRAM), static random access memory (SRAM), etc.) and/or non-volatile memory (e.g., read-only memory (ROM), flash memory, etc.). Computer program products containing mechanisms to effectuate the systems and methods in accordance with the presently described technology may reside in main memory716, which may be referred to as machine-readable media. It will be appreciated that machine-readable media may include any tangible non-transitory medium that is capable of storing or encoding instructions to perform any one or more of the operations of the present disclosure for execution by a machine or that is capable of storing or encoding data structures and/or modules utilized by or associated with such instructions. Machine-readable media may include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more executable instructions or data structures. Embodiments of the present disclosure include various steps, which are described in this specification. The steps may be performed by hardware components or may be embodied in machine-executable instructions, which may be used to cause a general-purpose or special-purpose processor programmed with the instructions to perform the steps. Alternatively, the steps may be performed by a combination of hardware, software, and/or firmware. Although the present technology has been described in detail for the purpose of illustration based on what is currently considered to be the most practical and preferred implementations, it is to be understood that such detail is solely for that purpose and that the technology is not limited to the disclosed implementations, but, on the contrary, is intended to cover modifications and equivalent arrangements that are within the spirit and scope of the appended claims. For example, it is to be understood that the present technology contemplates that, to the extent possible, one or more features of any implementation can be combined with one or more features of any other implementation.
11863333
DETAILED DESCRIPTION A software platform, such as a UCaaS platform, may facilitate a conference between multiple participants, such as over conferencing software of the software platform. Typically, a user of conferencing software must first join a conference before being able to access features of that conference, for example, a list of other participants who have already joined the conference, functionality for messaging those participants who have already joined the conference, and an agenda for the conference. In particular, because conventional conferencing software systems require a channel be opened between a user device and the server implementing a conference before conference features can be accessed by a user, such systems do not allow a user to preview conference features without joining a conference. This technical limitation creates challenges in several types of scenarios, as will be discussed. In one example, where a user is running late to a conference, the user may find themselves frustrated when they join the conference late only to find out that no other participants have joined or that the stakeholders have not yet joined. Often times the user may leave a current conference early, before it ends, in order to timely join a next conference only to find themselves waiting for other late participants. Knowing in advance which participants have joined such a next conference can help the user determine an appropriate time to join the conference, whether by leaving a current conference or otherwise. However, conventional conferencing software systems do not have the capability to allow a user device to access participant information for a conference without the user device connecting to the conference. In another example, where a user is running late for a conference in progress, the user may want to contact one or more participants that have already joined the conference to notify them that they are running late or for other purposes. A typical solution is that the user sends an email to all the invitees of the conference; however, this is wasteful and annoying for invitees who declined the conference invitation, and in many cases the conference participants who are actively paying attention to the conference may simply ignore the email message. Conventional conferencing software systems do not have the capability to allow the user, via their user device, to communicate with one or more participants in the conference using an in-conference communication application without the user device connecting to the conference. In yet another example, a user may join a conference without having any idea of what the conference is about. This could be because the user has consecutive conferences scheduled throughout the day and does not have time to prepare for a particular conference, the user has overlapping scheduled conferences, the user has multiple conferences scheduled for the same time slot, the title of the conference is ambiguous (e.g., “discuss problems”), or any combination thereof. Joining a conference unprepared can be embarrassing for the user and a waste of time for other conference participants while the user is briefed on the missed portion of the conference. In this situation, the user may want to preview one or more conference items prior to joining the conference in progress. 
The one or more conference items may include, but are not limited to, a list of topics to be addressed during the conference (e.g., an agenda), a presentation (e.g., a screen share), a real-time transcript, or any file or document that is associated with the conference. However, conventional conferencing software systems do not have the capability to allow a user device to access conference items without the user device connecting to the conference. Implementations of this disclosure address problems such as these using a conferencing system that enables client device access to one or more conference features prior to the client device joining a conference. The conferencing system may generate and store a conference object when a conference is initiated. The conference object may represent a conference that is in progress, a future conference, or a conference that occurred in the past. The conference object can include a list of conference participants, participant messaging data, one or more conference items, or any combination thereof. Each conference participant is associated with an attribute that indicates whether they accepted the conference invite, declined the conference invite, or have joined the conference, as well as a messaging status, a presentation status, an audio status, a video status, or any combination thereof. The participant messaging data may include a log of participant chat messages in a conference chat room, for example. In an example, to enable a user of a client device to preview the participants in a conference without the client device connecting to the conference, the conferencing system may obtain calendar data to generate a graphical output. The graphical output may be a graphical user interface that includes a conference description, a conference topic or list thereof, a conference start time, a conference end time, a conference location, two or more conference participants, two or more conference participant invite statuses, or any combination thereof. The conferencing system is configured to transmit the graphical output for display on a client device that has not yet joined the conference. The conferencing system is configured to determine participant information. The participant information may include a participant invite status, a participant attendance status, a participant attendance time, a participant audio status, a participant video status, a representation of a participant, or any combination thereof. The participant invite status may include an indicator that indicates whether a given participant accepted or declined the conference invite. The participant attendance status may include an indicator that indicates that a given participant has joined the conference, has not joined the conference, or joined the conference and subsequently left the conference. The participant attendance time may include an indicator that indicates a time that a given participant joined the conference or a duration of time that the participant has been in the conference. The participant audio status may include an indicator that indicates that at a given time a given participant's microphone is on, the participant's microphone is muted, or that the participant is speaking. The participant video status may include an indicator that indicates that at a given time a given participant's video is on, the participant's video is off, or that the participant is presenting in the conference.
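One possible in-memory representation of such a conference object is sketched below in Python; the field names and status values are assumptions made for illustration and do not reflect the platform's actual schema.

```python
# Minimal sketch of the conference object and participant attributes described
# above; field names and status values are illustrative assumptions.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Participant:
    name: str
    representation: str = ""           # avatar, photograph, icon, or text
    invite_status: str = "pending"     # "accepted" | "declined" | "pending"
    joined: bool = False
    join_time: Optional[float] = None  # when the participant joined
    audio_status: str = "muted"        # "on" | "muted" | "speaking"
    video_status: str = "off"          # "on" | "off" | "presenting"

@dataclass
class ConferenceObject:
    conference_id: str
    participants: list = field(default_factory=list)     # list of Participant
    messaging_log: list = field(default_factory=list)    # participant chat messages
    conference_items: list = field(default_factory=list) # agenda, files, transcript
```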
A representation of the participant may include an avatar, a photograph, an icon, text, or any combination thereof. The conferencing system is configured to transmit a graphical output based on the participant information for display on the client device. In an example, to enable a user of a client device to message one or more conference participants without the client device connecting to a conference, the conferencing system is configured to obtain information regarding the conference, which may be a conference in-progress. The conference information can include a conference description, a conference topic or list thereof, a conference start time, a conference end time, a conference location, participant information, or any combination thereof. The conferencing system is configured to transmit a graphical output based on the conference information to a client device for display. Prior to the client device joining the conference, the conferencing system is configured to receive a message to initiate a communication with a participant that is in the conference. The communication may be a text communication, such as a chat, an audio communication, such as an audio chat, a video communication, such as a video chat, or any combination thereof. The conferencing system is configured to grant the client device access to communicate with the participant in the conference. In one example, when the client device is granted access to communicate with the participant in the conference, the client device may access an in-conference communication application to communicate with the participant. An in-conference communication application may be a chat room associated with a conference in progress that is configured for text messaging, audio messaging, video messaging, or any combination thereof. In another example, when the client device is granted access to communicate with the participant in the conference, the client device may access a communication application that is not associated with the conference to communicate with the participant. In an example, to enable a user of a client device to preview one or more conference items without the client device connecting to a conference, the conferencing system is configured to obtain a conference item and transmit a graphical output based on the conference item to a client device for display. The conference item may include an editable document, an agenda or list of topics, a real-time transcript, a downloadable file, or a real-time presentation. The conferencing system is configured to receive a request to view the conference item and transmit a graphical output to the client device based on the request to display the conference item. To describe some implementations in greater detail, reference is first made to examples of hardware and software structures used to implement a system for providing a client device access to conference features without a connection between the client device and the conference.FIG.1is a block diagram of an example of an electronic computing and communications system100, which can be or include a distributed computing system (e.g., a client-server computing system), a cloud computing system, a clustered computing system, or the like. The system100includes one or more customers, such as customers102A through102B, which may each be a public entity, private entity, or another corporate entity or individual that purchases or otherwise uses software services, such as of a UCaaS platform provider. 
Each customer can include one or more clients. For example, as shown and without limitation, the customer102A can include clients104A through104B, and the customer102B can include clients104C through104D. A customer can include a customer network or domain. For example, and without limitation, the clients104A through104B can be associated or communicate with a customer network or domain for the customer102A and the clients104C through104D can be associated or communicate with a customer network or domain for the customer102B. A client, such as one of the clients104A through104D, may be or otherwise refer to one or both of a client device or a client application. Where a client is or refers to a client device, the client can comprise a computing system, which can include one or more computing devices, such as a mobile phone, a tablet computer, a laptop computer, a notebook computer, a desktop computer, or another suitable computing device or combination of computing devices. Where a client instead is or refers to a client application, the client can be an instance of software running on a customer device (e.g., a client device or another device). In some implementations, a client can be implemented as a single physical unit or as a combination of physical units. In some implementations, a single physical unit can include multiple clients. The system100can include a number of customers and/or clients or can have a configuration of customers or clients different from that generally illustrated inFIG.1. For example, and without limitation, the system100can include hundreds or thousands of customers, and at least some of the customers can include or be associated with a number of clients. The system100includes a datacenter106, which may include one or more servers. The datacenter106can represent a geographic location, which can include a facility, where the one or more servers are located. The system100can include a number of datacenters and servers or can include a configuration of datacenters and servers different from that generally illustrated inFIG.1. For example, and without limitation, the system100can include tens of datacenters, and at least some of the datacenters can include hundreds or another suitable number of servers. In some implementations, the datacenter106can be associated or communicate with one or more datacenter networks or domains, which can include domains other than the customer domains for the customers102A through102B. The datacenter106includes servers used for implementing software services of a UCaaS platform. The datacenter106as generally illustrated includes an application server108, a database server110, and a telephony server112. The servers108through112can each be a computing system, which can include one or more computing devices, such as a desktop computer, a server computer, or another computer capable of operating as a server, or a combination thereof. A suitable number of each of the servers108through112can be implemented at the datacenter106. The UCaaS platform uses a multi-tenant architecture in which installations or instantiations of the servers108through112are shared amongst the customers102A through102B. In some implementations, one or more of the servers108through112can be a non-hardware server implemented on a physical device, such as a hardware server.
In some implementations, a combination of two or more of the application server108, the database server110, and the telephony server112can be implemented as a single hardware server or as a single non-hardware server implemented on a single hardware server. In some implementations, the datacenter106can include servers other than or in addition to the servers108through112, for example, a media server, a proxy server, or a web server. The application server108runs web-based software services deliverable to a client, such as one of the clients104A through104D. As described above, the software services may be of a UCaaS platform. For example, the application server108can implement all or a portion of a UCaaS platform, including conferencing software, messaging software, and/or other intra-party or inter-party communications software. The application server108may, for example, be or include a unitary Java Virtual Machine (JVM). In some implementations, the application server108can include an application node, which can be a process executed on the application server108. For example, and without limitation, the application node can be executed in order to deliver software services to a client, such as one of the clients104A through104D, as part of a software application. The application node can be implemented using processing threads, virtual machine instantiations, or other computing features of the application server108. In some such implementations, the application server108can include a suitable number of application nodes, depending upon a system load or other characteristics associated with the application server108. For example, and without limitation, the application server108can include two or more nodes forming a node cluster. In some such implementations, the application nodes implemented on a single application server108can run on different hardware servers. The database server110stores, manages, or otherwise provides data for delivering software services of the application server108to a client, such as one of the clients104A through104D. In particular, the database server110may implement one or more databases, tables, or other information sources suitable for use with a software application implemented using the application server108. The database server110may include a data storage unit accessible by software executed on the application server108. A database implemented by the database server110may be a relational database management system (RDBMS), an object database, an XML database, a configuration management database (CMDB), a management information base (MIB), one or more flat files, other suitable non-transient storage mechanisms, or a combination thereof. The system100can include one or more database servers, in which each database server can include one, two, three, or another suitable number of databases configured as or comprising a suitable database type or combination thereof. In some implementations, one or more databases, tables, other suitable information sources, or portions or combinations thereof may be stored, managed, or otherwise provided by one or more of the elements of the system100other than the database server110, for example, the client104or the application server108. The telephony server112enables network-based telephony and web communications from and to clients of a customer, such as the clients104A through104B for the customer102A or the clients104C through104D for the customer102B. 
Some or all of the clients104A through104D may be voice over internet protocol (VOIP)-enabled devices configured to send and receive calls over a network114. In particular, the telephony server112includes a session initiation protocol (SIP) zone and a web zone. The SIP zone enables a client of a customer, such as the customer102A or102B, to send and receive calls over the network114using SIP requests and responses. The web zone integrates telephony data with the application server108to enable telephony-based traffic access to software services run by the application server108. Given the combined functionality of the SIP zone and the web zone, the telephony server112may be or include a cloud-based private branch exchange (PBX) system. The SIP zone receives telephony traffic from a client of a customer and directs same to a destination device. The SIP zone may include one or more call switches for routing the telephony traffic. For example, to route a VOIP call from a first VOIP-enabled client of a customer to a second VOIP-enabled client of the same customer, the telephony server112may initiate a SIP transaction between a first client and the second client using a PBX for the customer. However, in another example, to route a VOIP call from a VOIP-enabled client of a customer to a client or non-client device (e.g., a desktop phone which is not configured for VOIP communication) which is not VOIP-enabled, the telephony server112may initiate a SIP transaction via a VOIP gateway that transmits the SIP signal to a public switched telephone network (PSTN) system for outbound communication to the non-VOIP-enabled client or non-client phone. Hence, the telephony server112may include a PSTN system and may in some cases access an external PSTN system. The telephony server112includes one or more session border controllers (SBCs) for interfacing the SIP zone with one or more aspects external to the telephony server112. In particular, an SBC can act as an intermediary to transmit and receive SIP requests and responses between clients or non-client devices of a given customer with clients or non-client devices external to that customer. When incoming telephony traffic for delivery to a client of a customer, such as one of the clients104A through104D, originating from outside the telephony server112is received, an SBC receives the traffic and forwards it to a call switch for routing to the client. In some implementations, the telephony server112, via the SIP zone, may enable one or more forms of peering to a carrier or customer premise. For example, Internet peering to a customer premise may be enabled to ease the migration of the customer from a legacy provider to a service provider operating the telephony server112. In another example, private peering to a customer premise may be enabled to leverage a private connection terminating at one end at the telephony server112and at the other end at a computing aspect of the customer environment. In yet another example, carrier peering may be enabled to leverage a connection of a peered carrier to the telephony server112. In some such implementations, an SBC or telephony gateway within the customer environment may operate as an intermediary between the SBC of the telephony server112and a PSTN for a peered carrier. When an external SBC is first registered with the telephony server112, a call from a client can be routed through the SBC to a load balancer of the SIP zone, which directs the traffic to a call switch of the telephony server112. 
Thereafter, the SBC may be configured to communicate directly with the call switch. The web zone receives telephony traffic from a client of a customer, via the SIP zone, and directs same to the application server108via one or more Domain Name System (DNS) resolutions. For example, a first DNS within the web zone may process a request received via the SIP zone and then deliver the processed request to a web service which connects to a second DNS at or otherwise associated with the application server108. Once the second DNS resolves the request, it is delivered to the destination service at the application server108. The web zone may also include a database for authenticating access to a software application for telephony traffic processed within the SIP zone, for example, a softphone. The clients104A through104D communicate with the servers108through112of the datacenter106via the network114. The network114can be or include, for example, the Internet, a local area network (LAN), a wide area network (WAN), a virtual private network (VPN), or another public or private means of electronic computer communication capable of transferring data between a client and one or more servers. In some implementations, a client can connect to the network114via a communal connection point, link, or path, or using a distinct connection point, link, or path. For example, a connection point, link, or path can be wired, wireless, use other communications technologies, or a combination thereof. The network114, the datacenter106, or another element, or combination of elements, of the system100can include network hardware such as routers, switches, other network devices, or combinations thereof. For example, the datacenter106can include a load balancer116for routing traffic from the network114to various servers associated with the datacenter106. The load balancer116can route, or direct, computing communications traffic, such as signals or messages, to respective elements of the datacenter106. For example, the load balancer116can operate as a proxy, or reverse proxy, for a service, such as a service provided to one or more remote clients, such as one or more of the clients104A through104D, by the application server108, the telephony server112, and/or another server. Routing functions of the load balancer116can be configured directly or via a DNS. The load balancer116can coordinate requests from remote clients and can simplify client access by masking the internal configuration of the datacenter106from the remote clients. In some implementations, the load balancer116can operate as a firewall, allowing or preventing communications based on configuration settings. Although the load balancer116is depicted inFIG.1as being within the datacenter106, in some implementations, the load balancer116can instead be located outside of the datacenter106, for example, when providing global routing for multiple datacenters. In some implementations, load balancers can be included both within and outside of the datacenter106. In some implementations, the load balancer116can be omitted. FIG.2is a block diagram of an example internal configuration of a computing device200of an electronic computing and communications system. In one configuration, the computing device200may implement one or more of the client104, the application server108, the database server110, or the telephony server112of the system100shown inFIG.1. 
The computing device200includes components or units, such as a processor202, a memory204, a bus206, a power source208, peripherals210, a user interface212, a network interface214, other suitable components, or a combination thereof. One or more of the memory204, the power source208, the peripherals210, the user interface212, or the network interface214can communicate with the processor202via the bus206. The processor202is a central processing unit, such as a microprocessor, and can include single or multiple processors having single or multiple processing cores. Alternatively, the processor202can include another type of device, or multiple devices, configured for manipulating or processing information. For example, the processor202can include multiple processors interconnected in one or more manners, including hardwired or networked. The operations of the processor202can be distributed across multiple devices or units that can be coupled directly or across a local area or other suitable type of network. The processor202can include a cache, or cache memory, for local storage of operating data or instructions. The memory204includes one or more memory components, which may each be volatile memory or non-volatile memory. For example, the volatile memory can be random access memory (RAM) (e.g., a DRAM module, such as DDR SDRAM). In another example, the non-volatile memory of the memory204can be a disk drive, a solid state drive, flash memory, or phase-change memory. In some implementations, the memory204can be distributed across multiple devices. For example, the memory204can include network-based memory or memory in multiple clients or servers performing the operations of those multiple devices. The memory204can include data for immediate access by the processor202. For example, the memory204can include executable instructions216, application data218, and an operating system220. The executable instructions216can include one or more application programs, which can be loaded or copied, in whole or in part, from non-volatile memory to volatile memory to be executed by the processor202. For example, the executable instructions216can include instructions for performing some or all of the techniques of this disclosure. The application data218can include user data, database data (e.g., database catalogs or dictionaries), or the like. In some implementations, the application data218can include functional programs, such as a web browser, a web server, a database server, another program, or a combination thereof. The operating system220can be, for example, Microsoft Windows®, Mac OS X®, or Linux®; an operating system for a mobile device, such as a smartphone or tablet device; or an operating system for a non-mobile device, such as a mainframe computer. The power source208provides power to the computing device200. For example, the power source208can be an interface to an external power distribution system. In another example, the power source208can be a battery, such as where the computing device200is a mobile device or is otherwise configured to operate independently of an external power distribution system. In some implementations, the computing device200may include or otherwise use multiple power sources. In some such implementations, the power source208can be a backup battery. The peripherals210include one or more sensors, detectors, or other devices configured for monitoring the computing device200or the environment around the computing device200.
For example, the peripherals210can include a geolocation component, such as a global positioning system location unit. In another example, the peripherals can include a temperature sensor for measuring temperatures of components of the computing device200, such as the processor202. In some implementations, the computing device200can omit the peripherals210. The user interface212includes one or more input interfaces and/or output interfaces. An input interface may, for example, be a positional input device, such as a mouse, touchpad, touchscreen, or the like; a keyboard; or another suitable human or machine interface device. An output interface may, for example, be a display, such as a liquid crystal display, a cathode-ray tube, a light emitting diode display, or other suitable display. The network interface214provides a connection or link to a network (e.g., the network114shown inFIG.1). The network interface214can be a wired network interface or a wireless network interface. The computing device200can communicate with other devices via the network interface214using one or more network protocols, such as using Ethernet, transmission control protocol (TCP), internet protocol (IP), power line communication, an IEEE 802.X protocol (e.g., Wi-Fi, Bluetooth, or ZigBee), infrared, visible light, general packet radio service (GPRS), global system for mobile communications (GSM), code-division multiple access (CDMA), Z-Wave, another protocol, or a combination thereof. FIG.3is a block diagram of an example of a software platform300implemented by an electronic computing and communications system, for example, the system100shown inFIG.1. The software platform300is a UCaaS platform accessible by clients of a customer of a UCaaS platform provider, for example, the clients104A through104B of the customer102A or the clients104C through104D of the customer102B shown inFIG.1. The software platform300may be a multi-tenant platform instantiated using one or more servers at one or more datacenters including, for example, the application server108, the database server110, and the telephony server112of the datacenter106shown inFIG.1. The software platform300includes software services accessible using one or more clients. For example, a customer302as shown includes four clients—a desk phone304, a computer306, a mobile device308, and a shared device310. The desk phone304is a desktop unit configured to at least send and receive calls and includes an input device for receiving a telephone number or extension to dial to and an output device for outputting audio and/or video for a call in progress. The computer306is a desktop, laptop, or tablet computer including an input device for receiving some form of user input and an output device for outputting information in an audio and/or visual format. The mobile device308is a smartphone, wearable device, or other mobile computing aspect including an input device for receiving some form of user input and an output device for outputting information in an audio and/or visual format. The desk phone304, the computer306, and the mobile device308may generally be considered personal devices configured for use by a single user. The shared device310is a desk phone, a computer, a mobile device, or a different device which may instead be configured for use by multiple specified or unspecified users. In some implementations, a client may be a vehicle or a component thereof. 
A vehicle may include an automobile, an aircraft, a watercraft, a spacecraft, a train, a monorail, or a hyperloop. Each of the clients304through310includes or runs on a computing device configured to access at least a portion of the software platform300. In some implementations, the customer302may include additional clients not shown. For example, the customer302may include multiple clients of one or more client types (e.g., multiple desk phones or multiple computers) and/or one or more clients of a client type not shown inFIG.3(e.g., wearable devices or televisions other than as shared devices). For example, the customer302may have tens or hundreds of desk phones, computers, mobile devices, and/or shared devices. The software services of the software platform300generally relate to communications tools, but are in no way limited in scope. As shown, the software services of the software platform300include telephony software312, conferencing software314, messaging software316, and other software318. Some or all of the software312through318uses customer configurations320specific to the customer302. The customer configurations320may, for example, be data stored within a database or other data store at a database server, such as the database server110shown inFIG.1. The telephony software312enables telephony traffic between ones of the clients304through310and other telephony-enabled devices, which may be other ones of the clients304through310, other VOIP-enabled clients of the customer302, non-VOIP-enabled devices of the customer302, VOIP-enabled clients of another customer, non-VOIP-enabled devices of another customer, or other VOIP-enabled clients or non-VOIP-enabled devices. Calls sent or received using the telephony software312may, for example, be sent or received using the desk phone304, a softphone running on the computer306, a mobile application running on the mobile device308, or using the shared device310that includes telephony features. The telephony software312further enables phones that do not include a client application to connect to other software services of the software platform300. For example, the telephony software312may receive and process calls from phones not associated with the customer302to route that telephony traffic to one or more of the conferencing software314, the messaging software316, or the other software318. The conferencing software314enables audio, video, and/or other forms of conferences between multiple participants, such as to facilitate a conference between those participants. In some cases, the participants may all be physically present within a single location, for example, a conference room, in which case the conferencing software314may facilitate a conference between only those participants and using one or more clients within the conference room. In some cases, one or more participants may be physically present within a single location and one or more other participants may be remote, in which case the conferencing software314may facilitate a conference between all of those participants using one or more clients within the conference room and one or more remote clients. In some cases, the participants may all be remote, in which case the conferencing software314may facilitate a conference between the participants using different clients for the participants. The conferencing software314can include functionality for hosting, presenting, scheduling, joining, or otherwise participating in a conference.
The conferencing software314may further include functionality for recording some or all of a conference and/or documenting a transcript for the conference. The messaging software316enables instant messaging, unified messaging, and other types of messaging communications between multiple devices, such as to facilitate a chat or other virtual conversation between users of those devices. The unified messaging functionality of the messaging software316may, for example, refer to email messaging which includes a voicemail transcription service delivered in email format. The other software318enables other functionality of the software platform300. Examples of the other software318include, but are not limited to, device management software, resource provisioning and deployment software, administrative software, third party integration software, and the like. In one particular example, the other software318can include functionality to provide a client device with access to conference features without the client device connecting to a conference. The software312through318may be implemented using one or more servers, for example, of a datacenter such as the datacenter106shown inFIG.1. For example, one or more of the software312through318may be implemented using an application server, a database server, and/or a telephony server, such as the servers108through112shown inFIG.1. In another example, one or more of the software312through318may be implemented using servers not shown inFIG.1, for example, a meeting server, a web server, or another server. In yet another example, one or more of the software312through318may be implemented using one or more of the servers108through112and one or more other servers. The software312through318may be implemented by different servers or by the same server. Features of the software services of the software platform300may be integrated with one another to provide a unified experience for users. For example, the messaging software316may include a user interface element configured to initiate a call with another user of the customer302. In another example, the telephony software312may include functionality for elevating a telephone call to a conference. In yet another example, the conferencing software314may include functionality for sending and receiving instant messages between participants and/or other users of the customer302. In yet another example, the conferencing software314may include functionality for file sharing between participants and/or other users of the customer302. In some implementations, some or all of the software312through318may be combined into a single software application run on clients of the customer, such as one or more of the clients304through310. FIG.4is a block diagram of an example of a system400for providing a client device access to conference features without a connection between the client and a conference. For example, the system400may be configured to allow a client device to preview conference participants prior to joining the conference, message one or more conference participants prior to joining the conference, view a conference item prior to joining the conference, or any combination thereof. The system400includes a client device402, a server404, client devices406A through406N, and a user system408. The client device402and the client devices406A through406N may each be any one of the clients304to310shown inFIG.3or similar types of client devices, whether or not corresponding to a customer of a software platform. 
The server404may be used to implement at least a portion of a software platform which implements conferencing functionality, such as the software platform300shown inFIG.3. In an example, the functionality to provide access to conference features prior to joining a conference may be implemented in the other software318shown inFIG.3. The user system408may include or otherwise access a database server, such as the database server110shown inFIG.1, that stores calendar records407of users of the system400. In some examples, the user system408may be implemented as a component of the server404. The server404includes conferencing software410. The conferencing software410may, for example, be the conferencing software314shown inFIG.3. The conferencing software410is configured to enable audio, video, and/or other forms of conferences between multiple participants, such as users of the client devices406A through406N. In this example, the client device402has not yet joined the conference, and the client devices406A through406N have joined the conference. Any one of the users of the client devices406A through406N may have initiated the conference. When the conference is initiated, the conferencing software410generates a conference object and stores the conference object in a memory of the server404. Each participant of the conference is associated with an attribute that indicates whether the participant accepted the conference invite, declined the conference invite, or has joined the conference. The conferencing software410is configured to detect changes in the conference, update the conference object, and store the updated conference object in the memory of the server404. The detected changes may include, but are not limited to, a participant joining the conference, a participant leaving the conference, a participant speaking or presenting, a change in audio status of a client device, a change in video status of a client device, a change in a messaging application of the conference, a recording status change, or a change in an agenda or list of topics or other file associated with the conference. The conferencing software410is configured to obtain calendar data from the user system408. In some examples, the conferencing software410may transmit a request for calendar data to the user system408to obtain the calendar data. The calendar data may include a description of the conference, an agenda or list of topics for the conference, a conference start time, a conference end time, a location, two or more conference participants, two or more conference participant invite statuses, or any combination thereof. The conferencing software410is configured to generate a graphical output based on the calendar data and transmit the graphical output to a client device for display. The graphical output may be transmitted to the client device402, the client devices406A through406N, or both. In this example, the client device402receives the graphical output and displays the graphical output on a display of the client device402. The graphical output is displayed as a panel on the display of the client device402. The panel may be a persistent, extensible panel that sits on a display of the client device402to make the user aware of conference information or real-time communications in one or more modalities and to enable the user to take some action in response to the conference information or real-time communication in a single click regardless of the modality.
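The change detection and conference object updating just described may be illustrated by the following hypothetical event handler; the event dictionary shape and the store call are assumptions rather than the actual interface of the conferencing software410.

```python
# Hypothetical handler mirroring the detected changes enumerated above;
# assumes the ConferenceObject/Participant sketch given earlier.
def on_conference_event(server, conference, event):
    participant = next((p for p in conference.participants
                        if p.name == event["participant"]), None)
    if participant is None:
        return
    kind = event["kind"]
    if kind == "join":
        participant.joined = True
        participant.join_time = event["time"]
    elif kind == "leave":
        participant.joined = False
    elif kind == "audio":
        participant.audio_status = event["value"]  # "on" | "muted" | "speaking"
    elif kind == "video":
        participant.video_status = event["value"]  # "on" | "off" | "presenting"
    server.store(conference)  # persist the updated conference object
```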
Returning to the panel, it generally occupies less display space than a typical software application, and, in some cases, its size may be configurable based on user preferences and/or based on a number of applications integrated within the panel. The panel may be persistent such that it remains on top of the display of the client device402for immediate viewing at all times, unless otherwise configured. The single-click actions may correspond to response actions including joining the conference, leaving the conference, messaging participants of the conference, turning video on/off, or muting/unmuting a microphone. In some examples, a single-click action to message participants of the conference may include prewritten messages. The panel may be configured, initialized, or otherwise used within a graphical output using the other software318shown inFIG.3. In this example, the panel is configured to display information associated with a conference that the client device402has not yet joined. The conference can be a conference to which the user was invited as a participant, or a conference that the user created as a host. A portion of the panel may include an indicator that indicates a conference status associated with the conference. The indicator can be a color or an icon. For example, the color may indicate that the conference is in the future, the user is late for the conference (e.g., the conference start time has passed, and no other participants have joined), or that the conference has started (e.g., other participants have joined). The user of the client device402can provide an input to expand the panel to view more information regarding the conference. The input can include hovering a cursor over the portion of the panel, touching the portion of the panel, or pressing one or more keys/buttons, for example, a keyboard shortcut. In response to the input, the client device402is configured to transmit a request to the server404to obtain conference information. The conferencing software410is configured to obtain the conference object from the memory of the server404and determine the conference information based on the request. The conferencing software410is configured to generate a graphical output based on the determined conference information and transmit the graphical output to the client device402for display in the expanded portion of the panel. In some implementations, the conference information is periodically obtained by the client device402without manual user intervention (i.e., without the need for user input) and displayed when the user expands the panel. In some implementations, the conference information is automatically pushed to the client device402without the need for a request and displayed when the user expands the panel. In some implementations, the graphical output may be displayed within a graphical user interface of a client application running at the client device402or as a pop-up window. The display of the graphical user interface using the client application or pop-up window may be implemented to make the user aware of conference information or real-time communications in one or more modalities and to enable the user to take some action in response to the conference information or real-time communication in a single click regardless of the modality. FIG.5is a swim lane diagram of an example of a system500for providing a client device access to conference participant information without a connection between the client and the conference.
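Before describing FIG.5 in detail, the expand-and-request flow just described may be sketched as follows; get_conference_info and render_expanded are hypothetical names standing in for the request to the server404and the rendering of the expanded panel.

```python
# Illustrative client-side handler for the panel-expand input; transport and
# rendering calls are placeholders for the platform's actual mechanisms.
def on_panel_expand(client, server, conference_id):
    # Pull model: request conference information in response to user input.
    # As noted above, a push model would instead deliver updates to the
    # client without this request, ready to display on expansion.
    info = server.get_conference_info(conference_id)
    client.panel.render_expanded(info)
```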
The system500includes a client device502, a server504, a client device506, and a user system508. The client device502may be the client device402shown inFIG.4, and the client device506may be any one of the client devices406A to406N shown inFIG.4. The server504may be the same as server404shown inFIG.4, and may be used to implement the software platform300shown inFIG.3. In an example, the functionality for providing access to conference participant information without having to join the conference may be implemented in the other software318shown inFIG.3. The user system508may be a database server, such as database server110shown inFIG.1. In some examples, the user system508may be implemented as a component of the server504. In this example, a user of the client device502may be in a current conference and running late for a next conference. The user would like to preview the participants of the next conference who are in attendance in order to determine when to leave the current conference to join the next conference. Attendance information for the participants of the next conference may be based on calendar data. As shown inFIG.5, the server504is configured to obtain calendar data510from the user system508. The server504is configured to generate a graphical output512based on the calendar data510, and transmit the graphical output512to the client device502. The client device502is configured to receive the graphical output512and display514the graphical output512, or portions thereof, within a panel on a display of the client device502. The panel may be configured to display information associated with a conference, for example, a conference that the client device502has not yet joined. A portion of the panel may include an indicator that indicates a conference status associated with the conference. The indicator can be a color or an icon. For example, the color may indicate that the conference is a future conference, the conference has started without participants, the conference has started with participants, the conference has started and there is at least one high priority participant in attendance, the conference has an active screen share, or that the conference has ended. At some point in time, the client device506transmits a join message516to the server504to connect to a conference518. The conference518may be a conference in progress, or it may be a conference that is initiated by the join message516. The server504is configured to receive the join message516and create or update520a conference object associated with the conference518. The server504grants the client device506access to the conference518, and the client device506joins the conference518, for example, via conferencing software410shown inFIG.4. At this point, the client device502has not yet joined the conference518. The server504is configured to determine522participant information. The participant information is determined based on the conference object, and may include a participant invite status for one or more participants, a participant attendance status for one or more participants, a participant attendance time for one or more participants, a participant preview status for one or more participants, a participant indicator for one or more participants, such as a participant name, or representation thereof, a participant audio status for one or more participants, a participant video status for one or more participants, a participant presentation status for one or more participants, or any combination thereof. 
The participant invite status may indicate whether the participant accepted or declined the conference invite. The participant attendance status may indicate that the participant has joined the conference, has not joined the conference, or joined the conference and subsequently left the conference. The participant attendance time may indicate a time that the participant joined the conference or a duration of time that the participant has been in the conference. The participant preview status may indicate whether the participant is currently previewing the conference or has previewed the conference within a predetermined period of time. The participant audio status may indicate that the participant microphone is on, the participant microphone is muted, or that the participant is speaking. The participant video status may indicate that the participant video is on, the participant video is off, or that the participant is presenting in the conference. A representation of the participant may include an avatar, a photograph, an icon, text, or any combination thereof. The participant presentation status may be an indication of whether the participant is currently performing a screen share. The server504is configured to generate a graphical output524based on the participant information and transmit the graphical output524to the client device502. The client device502is configured to receive the graphical output524and display526at least a portion thereof in an expanded portion of the panel. The graphical output524may include one or more visual indicators. For example, a visual indicator may indicate whether a participant has their video or audio on or off based on the participant video status or the participant audio status. A visual indicator may indicate who is speaking in the conference based on the participant audio status. A visual indicator may indicate a duration of time that the participant has been in the conference based on participant attendance time. A visual indicator may indicate that a participant is performing a screen share based on the participant presentation status. A visual indicator may indicate whether the conference is being recorded based on the conference status. In some implementations, the client device502may transmit a request528for participant information to the server504. The request528may be transmitted in response to an input from a user to expand the panel to view more information, such as participant information. In some implementations, the graphical output524is periodically transmitted to the client device502without the need for an input, and displayed when the user expands the panel. In some implementations, the graphical output524is automatically pushed to the client device502when there is an update to the conference object, without the need for the request528, and displayed when the user expands the panel. In some implementations, the server504may transmit a notification to the client device502to notify the user that a high priority participant has joined or left the conference518. A high priority participant may be a supervisor of the user or other stakeholder. In this example, the server504may determine a priority level for a participant based on an organizational structure or a user preference. The server504may determine the priority level using a machine learning (ML) model and historical conference data of the user and one or more other users of the system500. In some examples, a priority level for each participant may be determined. 
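As a sketch of the organizational-structure variant just described (the ML-based determination is not shown), a participant might be treated as high priority when the user has flagged them or when they appear in the user's management chain; the OrgDirectory interface and all other names are assumptions:

```typescript
// Sketch: mark a participant as high priority relative to the viewing user.
interface OrgDirectory {
  managerOf(userId: string): string | undefined; // returns the manager's id, if any
}

function isHighPriority(
  participantId: string,
  viewerId: string,
  org: OrgDirectory,
  flaggedIds: Set<string>, // participants the user has flagged as stakeholders
): boolean {
  // High priority if the user explicitly flagged the participant.
  if (flaggedIds.has(participantId)) return true;
  // Otherwise, walk up the viewer's management chain.
  let current = org.managerOf(viewerId);
  while (current !== undefined) {
    if (current === participantId) return true;
    current = org.managerOf(current);
  }
  return false;
}
```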
In some implementations, the server504may transmit a notification to a client device that has not yet joined the conference518, such as the client device502, to notify the user that a threshold of participants for a quorum has been reached in the conference518. The threshold may be based on a percentage, for example, the number of participants that have joined the conference518relative to the number of participants that accepted the conference invite. The threshold may be a configurable value. In an example, the notification may be transmitted when 50% of the participants that accepted the conference invite have joined the conference518. In some examples, a notification may be transmitted when quorum is lost, for example, when a participant leaves the conference518. In some situations, it may be helpful to see who is previewing the conference or how many participants are previewing the conference, for example, to avoid a situation where no one is joining the conference because the conference is empty. In some implementations, the server504may transmit a notification to a client device that has not joined the conference518, such as client device502. The notification may be transmitted based on a determination that one or more participants are previewing the conference518or have previewed the conference518within a predetermined period of time. The notification may be a pop-up message on a client device display to join the conference518when one or more participants are previewing the conference518or have previewed the conference518within a predetermined period of time. The notification may be transmitted when a threshold of a number of previewing participants is met. The notification may be transmitted when a high priority participant is previewing the conference518or has joined the conference518. In some situations, a user may not have received an invite for a particular meeting, either inadvertently or because the host was not aware that the user should be invited. In such a case, and in some implementations, the server504may be configured to transmit a notification to a client device to join a conference to which the user was not invited. For example, the server504may determine that the user should be invited based on historical conference data of the user and one or more other users of the system500that were invited to the conference. FIG.6is a swim lane diagram of an example of a system600for providing a client device access to a conference messaging application without a connection between the client device and the conference. The system600includes a client device602, a server604, and a client device606. The client device602may be the client device402shown inFIG.4, and the client device606may be any one of the client devices406A to406N shown inFIG.4. The server604may be the same as server404shown inFIG.4, and may be used to implement the software platform300shown inFIG.3. In an example, the functionality for providing access to conference participant information without having to join the conference may be implemented in the other software318shown inFIG.3. As shown inFIG.6, client device606transmits a join message608to the server604to connect to a conference610. The conference610may be a conference in progress, or it may be a conference that is initiated by the join message608. The server604is configured to receive the join message608and create or update612a conference object associated with the conference610. 
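Returning to the quorum threshold described above, the check itself reduces to a simple ratio against the configurable threshold. This sketch assumes the 50% default from the example; the names are illustrative:

```typescript
// Sketch of the quorum check: quorum is met when the fraction of accepted
// invitees who have joined reaches the configurable threshold.
function quorumReached(joined: number, accepted: number, threshold = 0.5): boolean {
  if (accepted === 0) return false; // no accepted invitees, so no quorum to reach
  return joined / accepted >= threshold;
}

console.log(quorumReached(4, 7)); // true: 4/7 is roughly 57%, above the 50% default
```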
The server604grants the client device606access to the conference610, and the client device606joins the conference610, for example, via the conferencing software410shown inFIG.4. At this point, the client device602has not yet joined the conference610. The server604may be configured to determine conference information based on the conference object. The conference information can include a conference description, a conference topic or list thereof, a conference start time, a conference end time, a conference location, participant information, or any combination thereof. The server604is configured to generate a graphical output614based on the conference information and transmit the graphical output614to the client device602. The client device602is configured to receive the graphical output614and display616at least a portion thereof in an expanded portion of a panel associated with the conference610on a display of the client device602. The graphical output614may include one or more visual indicators associated with the conference information. The display616of the graphical output614may be triggered by an input to the client device602to expand the panel. The input can include hovering a cursor over the portion of the panel, touching the portion of the panel, or pressing one or more keys/buttons, for example, a keyboard shortcut. In this example, the expanded portion of the panel may display one or more participants that have joined the conference610. The expanded portion of the panel may display an option to initiate a communication via an in-conference communication application with one or more participants that have joined the conference. The in-conference communication application is available for access by participants in the conference. The in-conference communication application may be a text or instant messaging application, an audio messaging application, or a video messaging application. In an example where the in-conference communication application is a text or instant messaging application, the option to initiate a communication may include one or more prewritten messages (e.g., “Coming from another meeting, I will be 5 minutes late,” “Please get started without me,” or “Please do not start without me”). The prewritten messages may be default messages generically available to various users of the conferencing software. Alternatively, the prewritten messages may be automatically generated using a ML model, user calendar data, historical conference data, and/or historical message data associated with the user of the client device602. For example, if the user is in a conference with a group of participants that historically end conferences 10 minutes late, one or more prewritten messages may be automatically generated that include that the user will be 10 minutes late. In another example, if the user typically sends a message to start the meeting without their presence, a prewritten message such as “Please get started without me” may be automatically generated.
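A minimal sketch, assuming a simple average in place of the ML model described above, of generating a lateness message from historical overruns of the group's conferences; the names and the fallback message are assumptions:

```typescript
// Sketch: suggest a prewritten lateness message from historical data on how
// many minutes this group's conferences have run over.
function suggestLatenessMessage(overrunsMinutes: number[]): string {
  if (overrunsMinutes.length === 0) {
    return "Coming from another meeting, I will be 5 minutes late";
  }
  const avg = Math.round(overrunsMinutes.reduce((a, b) => a + b, 0) / overrunsMinutes.length);
  return `Coming from another meeting, I will be ${avg} minutes late`;
}

console.log(suggestLatenessMessage([8, 12, 10])); // "... I will be 10 minutes late"
```

The user may provide another input that triggers the client device602to transmit a message618to the server604. The input may be a single-click response to an option on the expanded portion of the panel. The message618may be a request to access an in-conference communication application to communicate with one or more conference participants.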
The server604is configured to receive the message618and grant access to the in-conference communication application620such that the client device602can communicate with one or more conference participants prior to joining the conference610. In this example, when the access is granted, the client device602can communicate with client device606via the in-conference communication application620without joining the conference610. In an example, if the user is a host of the conference610, the user may notify one or more of the participants in the conference610that they are running late. If the user is not the host of the conference610, the user may notify the host that they are running late. In an example where a user may be in a vehicle and is unable to join a conference on time, the vehicle, or a component thereof, may be configured to transmit the message618to the server604. In this example, the message618may be a prewritten message. The prewritten message may include an estimated time of arrival (ETA) based on navigation and/or traffic data to notify the one or more participants of the conference of an approximate time the user may join the conference. The ETA may include some buffer time to allow the user to park the vehicle and get set up for the conference. In some implementations, the user may wish to post a message to a persistent chat room that is not associated with the conference610, such as a chat room or other chat messaging space, which may, for example, be implemented using the messaging software316shown inFIG.3. For example, a persistent chat room that has the same participants as the conference610may exist, and the user may wish to post a message in the persistent chat room for future reference. In this example, the server604may check for existing persistent chat rooms to determine whether a persistent chat room with matching participants of the conference610exists. If a match is found, the user can be presented with an option to post a message to the matching persistent chat room. In this way, messages communicable from the client device of the user may be posted to a space outside of the conference so that those messages may remain available and easily accessible to relevant users even after the conference has ended. In some implementations, an agenda or list of topics for the conference610may be updated based on a chat message. For example, a user may indicate in the chat message that they will be 15 minutes late to join the conference. Based on the chat message, the server604determines that the list of topics may need to be updated. The server604may make this determination by searching the list of topics for items associated with the user and automatically updating the list of topics by moving any matching items associated with the user to later in the conference610. In some examples, the server may update the list of topics to move the matching items to the end of the conference610. FIG.7is a swim lane diagram of an example of a system700for providing a client device access to conference items without a connection between the client device and the conference. The system700includes a client device702, a server704, and a client device706. The client device702may be the client device402shown inFIG.4, and the client device706may be any one of the devices406A to406N shown inFIG.4. The server704may be the same as server404shown inFIG.4, and may be used to implement the software platform300shown inFIG.3. 
In an example, the functionality for providing access to one or more conference items without having to join the conference may be implemented in the other software318shown inFIG.3. Certain conference items may be viewed prior to the initiation of the conference, for example, a list of topics or a downloadable file associated with the conference. The example shown inFIG.7is for a conference that is in progress. As shown inFIG.7, client device706transmits a join message708to the server704to join a conference710. The conference710may be a conference in progress, or it may be a conference that is initiated by the join message708. The server704is configured to receive the join message708and create or update712a conference object associated with the conference710. The server704grants the client device706access to the conference710, and the client device706joins the conference710, for example, via the conferencing software410shown inFIG.4. At this point, the client device702has not yet joined the conference710. The server704may be configured to determine one or more conference items based on the conference object. In an example, the one or more conference items may be attachments or links included in a conference invite or a description included in the conference invite. The one or more conference items may include an editable document, a list of topics, a real-time transcript, a downloadable file, or a real-time presentation such as a screen share. The server704is configured to generate a graphical output714based on the one or more conference items and transmit the graphical output714to the client device702. The client device702is configured to receive the graphical output714and display716at least a portion thereof in an expanded portion of a panel associated with the conference710on a display of the client device702. The graphical output714may include one or more visual indicators associated with the one or more conference items. The visual indicators may be graphical representations of the one or more conference items, such as icons, thumbnails, text representations, or any combination thereof. The display716of the graphical output714may be triggered by an input to the client device702to expand the panel. The input can include hovering a cursor over the portion of the panel, touching the portion of the panel, or pressing one or more keys/buttons, for example, a keyboard shortcut. In this example, the expanded portion of the panel may display one or more visual indicators associated with the conference items. The expanded portion of the panel may display an option to view and/or edit one or more of the conference items. The user may provide another input that triggers the client device702to transmit a request718to the server704. The request718may be a request to view or edit a conference item. The input may be a single-click response to an option on the expanded portion of the panel. The server704is configured to receive the request718and determine720a conference item based on the request718. Determining the conference item may include obtaining the conference object and extracting data associated with the conference item. If the request718is a request to view the conference item, the server704may generate a graphical output722based on the data associated with the conference item. The server704is configured to transmit the graphical output722to the client device702for display.
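The conference item kinds enumerated above might be modeled as a discriminated union, as in this sketch; the encoding and all field names are assumptions:

```typescript
// Illustrative union of conference item kinds.
type ConferenceItem =
  | { kind: "topics"; items: string[] }                 // editable list of topics
  | { kind: "document"; url: string; editable: boolean }
  | { kind: "transcript"; lines: string[] }             // real-time transcript
  | { kind: "file"; url: string; name: string }         // downloadable file
  | { kind: "screen_share"; streamId: string };         // real-time presentation
```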
The client device702is configured to receive the graphical output722and display724at least a portion thereof in an expanded portion of the panel. If the request718is a request to edit the conference item, the server704may grant the client device702access to edit the conference item. The server may open an application interface to allow the client device702to view and edit the conference item. When the edits to the conference item are completed, the server704may update the conference object to reflect the edits. In some examples, a notification may be sent to the other participants of the conference that the conference item has been edited. In some examples, a notification may be sent to the author of the conference item that the conference item has been edited. In an example, the list of topics for the conference may be a living document that can be edited by conference participants. If the user is running late and cannot join the conference on time, the user can edit the list of topics to move one or more topics associated with the user to a later portion of the conference without having to join the conference. Typical conferences begin with some idle conversations that are not related to the substance of the conference. In some implementations, the server704may detect when the idle conversation has ended and the substantive portion of the conference710has begun. For example, detection of the substantive portion of the conference710may be based on the initiation of a screen share or by using automated speech recognition and a ML model to detect whether the discussion of the list of topics has begun. The server704may transmit a notification to the client device702to notify the user that the substantive portion of the conference710has begun. In some examples, the notification may be a pop-up window on the display of the client device702. The notification may include an option to join the conference710. In an example, a user may be running late for a conference and is in an area with substantial background noise. The user may wish to access an audio portion of the conference to listen in without having to join the conference. In this example, the user does not have to worry about muting their microphone. The server704may transmit a visual indicator to the conference attendees to notify them that the user is listening in. In another example, a user may be running late for a webinar and wants to get caught up before joining the webinar. The user may access the conference items associated with the webinar without having to join the webinar. For example, the user may view the real-time transcript and/or a real-time presentation of the webinar without having to join the webinar. In some examples, the conferencing software may include a notes application that allows participants to take notes during the conference. The conferencing software may provide the participants with an option to make their notes public so that other conference participants can view their notes. The conferencing software may annotate the participant notes with timestamps for future reference. In some implementations, the server704may automatically edit the list of topics based on a presence detection of a user. For example, the server704may detect that a user is running late based on a chat message from the user or determining that the user has not yet joined the conference. The server704may identify one or more items associated with the user and automatically move the identified items to the end of the list of topics. 
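As a sketch of the automatic agenda edit just described, items owned by the late participant might be moved to the end of the list and flagged so the client can render them differently (the font change discussed next); TopicItem and the flag name are assumptions:

```typescript
// Sketch: defer a late participant's agenda items to the end of the list
// and mark them so the client can render them in a different font.
interface TopicItem { title: string; ownerId: string; delayed: boolean; }

function deferTopicsFor(topics: TopicItem[], lateUserId: string): TopicItem[] {
  const others = topics.filter((t) => t.ownerId !== lateUserId);
  const deferred = topics
    .filter((t) => t.ownerId === lateUserId)
    .map((t) => ({ ...t, delayed: true })); // flagged for a different font in the UI
  return [...others, ...deferred];
}
```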
In some examples, the server704may identify one or more items associated with the user and change the font of the items to indicate that the discussion of the identified items may be delayed because the associated user has not yet joined the conference. In some examples, the server704may identify one or more items based on whether they are yet to be discussed or have already been discussed. When the associated user joins the conference, the server704may automatically change the font back to the original font. Changing the font may include a change to a different font type, underlining text, bolding text, italicizing text, highlighting text, changing a text color, changing a text size, or any combination thereof. In some implementations, the server704may determine a conference importance level using a ML model. The conference importance level may be based on the list of topics, conference attendees, participant interactions in previous similar conferences, or any combination thereof. The server704may be configured to transmit a notification to a client device that includes a suggestion to join the conference or skip the conference based on the importance level of the conference to the user. For example, if the user is double-booked for multiple conferences scheduled at the same time, the notification of the importance level of each conference may help the user determine which conference to join. To further describe some implementations in greater detail, reference is next made to examples of methods that may be performed by or using a system for providing a client device access to conference features without a connection between the client device and a conference.FIGS.8through10are flowcharts of examples of methods for providing access to conference features prior to joining a conference. The methods can be executed using computing devices, such as the systems, hardware, and software described with respect toFIGS.1through7. The methods can be performed, for example, by executing a machine-readable program or other computer-executable instructions, such as routines, instructions, programs, or other code. The steps, or operations, of the methods or other techniques, methods, processes, or algorithms described in connection with the implementations disclosed herein can be implemented directly in hardware, firmware, software executed by hardware, circuitry, or a combination thereof. For simplicity of explanation, the methods are depicted and described herein as a series of steps or operations. However, the steps or operations in accordance with this disclosure can occur in various orders and/or concurrently. Additionally, other steps or operations not presented and described herein may be used. Furthermore, not all illustrated steps or operations may be required to implement a technique in accordance with the disclosed subject matter. FIG.8is a flowchart of an example of a method800for providing a client device access to conference participant information without a connection between the client device and the conference. At802, calendar data is obtained by a server. The calendar data may include at least one of a description of a conference, a list of topics associated with the conference, a start time for the conference, an end time for the conference, a location, a participant identifier, a participant invite status, or any combination thereof. At804, the server transmits a first graphical output to a client device. The client device has not yet connected to the conference. 
The graphical output is based on the calendar data. The client device receives the first graphical output and displays at least a portion of the graphical output on a panel associated with the conference. The first graphical output may include a visual indicator that indicates a conference status. The conference status may indicate that the conference is a future conference, the conference has started without participants, the conference has started with participants, the conference has started and a high priority participant is in attendance, or the conference has ended. The panel may be a persistent, extensible panel that sits on a display of the client device to make the user aware of conference information or real-time communications in one or more modalities and to enable the user to take some action in response to the conference information or real-time communication in a single click regardless of the modality. The panel may be expanded to view additional information associated with the conference. At806, the server receives a request for participant information. The request is received prior to the client device joining the conference. The request may be received in response to an input from a user to expand the panel to view more information, such as participant information. At808, the server determines the participant information based on the request. The participant information may be determined based on the conference object, and may include a participant invite status for one or more participants, a participant attendance status for one or more participants, a participant attendance time for one or more participants, a participant preview status for one or more participants, a participant indicator for one or more participants, such as a participant name or title, or representation thereof, a participant audio status for one or more participants, a participant video status for one or more participants, a participant presentation status for one or more participants, or any combination thereof. Prior to the client device connecting to the conference, the server is configured to generate a second graphical output that includes the participant information. At810, the server transmits the second graphical output to the client device for display. Prior to connecting to the conference, the client device receives the second graphical output and displays at least a portion thereof in an expanded portion of the panel. The second graphical output may include one or more visual indicators. For example, a visual indicator may indicate whether a participant has their video or audio on or off based on the participant video status or the participant audio status. A visual indicator may indicate who is speaking in the conference based on the participant audio status. A visual indicator may indicate a duration of time that the participant has been in the conference based on participant attendance time. A visual indicator may indicate that a participant is performing a screen share based on the participant presentation status. A visual indicator may indicate whether the conference is being recorded based on the conference status. In some implementations, the method800may include determining whether a high priority participant (e.g., a stakeholder) has joined the conference. The high priority participant may be determined based on an organizational chart relative to an organizational position of the user of the client device.
In some examples, the high priority participant may be based on a preference of the user of the client device. The method may include transmitting a notification to the client device to notify the user that a high priority participant has joined or left the conference. In some implementations, the method800may include determining that a threshold number of participants for a quorum is met and transmitting a notification to a client device that has not yet joined the conference to notify the user that the quorum threshold is met. The method800may include transmitting a notification to the client device when the quorum is lost, for example, when a participant leaves the conference. In some implementations, the method800includes determining who is previewing the conference or how many participants are previewing the conference. The server may transmit a notification to a client device that has not yet joined the conference to notify the user that one or more participants are previewing the conference or have previewed the conference within a predetermined period of time. FIG.9is a flowchart of an example of a method900for providing a client device access to a conference messaging application without a connection between the client device and the conference. At902, a server obtains conference information. The conference information may be associated with a conference in progress and can include a conference description, a conference topic or list thereof, a conference start time, a conference end time, a conference location, participant information, or any combination thereof. The server obtains the conference information and generates a graphical output based on the conference information. At904, the server transmits the graphical output based on the conference information to a client device for display. The client device receives the graphical output and displays at least a portion thereof in an expanded portion of a panel associated with the conference on a display of the client device. The graphical output may include one or more visual indicators associated with the conference information. The display of the graphical output may be triggered by an input to the client device to expand the panel. The expanded portion of the panel may display an option to initiate a communication with one or more participants in attendance via an in-conference communication application, such as a conference messaging application. At906, the server receives a message to initiate a communication with a participant that is in attendance at the conference. The message is received prior to the client device joining the conference. The message may be transmitted to the server in response to an input received at the client device. The input may be a single-click response to an option on the expanded portion of the panel. The message may be a request to access an in-conference communication application to communicate with one or more conference participants. At908, the server grants the client device access to an in-conference communication application. The access to the in-conference communication application is granted prior to the client device joining the conference. When the access is granted, the client device can communicate with one or more of the participants in the conference via the in-conference communication application without joining the conference. In some implementations, the message may include a prewritten message. 
In an example where the client device is a vehicle, or a component thereof, the prewritten message may include an ETA based on navigation and/or traffic data to notify one or more participants of the conference of an approximate time the user may join the conference. In some examples, the ETA may include some buffer time to allow the user to park the vehicle and get set up for the conference. In some implementations, the method900may include posting a message to a persistent chat room that is not associated with the conference. For example, the method900may include checking for existing persistent chat rooms to determine whether a persistent chat room with matching participants of the conference exists. If a match is found, the method900may include transmitting a notification that includes an option to post a message to the matching persistent chat room to the client device. In some implementations, the method900may include updating a list of topics based on a chat message. The method900may include determining that the list of topics should be updated based on the text of a chat message. For example, it may be determined from the text of the chat message that the user is running late for the conference. The method900may include searching the list of topics for items associated with the user and automatically updating the list of topics by moving any matching items associated with the user to later in the conference. In some examples, the method900may include transmitting a notification to the conference attendees that the list of topics has been updated. FIG.10is a flowchart of an example of a method1000for providing a client device access to conference items without a connection between the client device and the conference. At1002, a server obtains a conference item. The conference item may be an attachment or a link included in a conference invite or a description included in the conference invite. The conference item may include an editable document, a list of topics, a real-time transcript, a downloadable file, or a real-time presentation such as a screen share. The server obtains the conference item and generates a first graphical output based on the conference item. At1004, the server transmits the first graphical output to a client device to display an indicator of the conference item. The client device receives the first graphical output and displays at least a portion thereof in an expanded portion of a panel associated with the conference on a display of the client device. The first graphical output includes an indicator associated with the conference item. The indicator may be a visual indicator and may be a graphical representation of the conference item, such as an icon, a thumbnail, a text representation, or any combination thereof. The display of the first graphical output may be triggered by an input to the client device to expand the panel. The expanded portion of the panel may display an option to view and/or edit the conference item. At1006, the server receives a request to view the conference item. The request may be triggered by another input at the client device. The input may be a single-click response to an option on the expanded portion of the panel. The request may be a request to view or edit a conference item. The server generates a second graphical output based on the conference item indicated in the request. At1008, the server transmits the second graphical output to the client device to display the conference item. 
The client device receives the second graphical output and displays at least a portion thereof in an expanded portion of the panel or in an application that is compatible with the conference item. In some implementations, the method1000may include detecting an initiation of a screen share. The method1000may include determining that a substantive portion of the conference has started based on the screen share. The substantive portion of the conference can be associated with an item of the list of topics. The method1000may include transmitting a notification to the client device that indicates that the substantive portion of the conference has started. In some implementations, the method1000may include detecting a voice in an audio portion of the conference. The method1000may include converting the detected voice to text and comparing the text to the conference item. In this example, the conference item may be a list of topics. The method1000may include determining that a substantive portion of the conference has started based on the comparison, where the substantive portion of the conference is associated with an item of the list of topics. The method1000may include transmitting a notification to the client device that indicates that the substantive portion of the conference has started. The implementations of this disclosure can be described in terms of functional block components and various processing operations. Such functional block components can be realized by a number of hardware or software components that perform the specified functions. For example, the disclosed implementations can employ various integrated circuit components (e.g., memory elements, processing elements, logic elements, look-up tables, and the like), which can carry out a variety of functions under the control of one or more microprocessors or other control devices. Similarly, where the elements of the disclosed implementations are implemented using software programming or software elements, the systems and techniques can be implemented with a programming or scripting language, such as C, C++, Java, JavaScript, assembler, or the like, with the various algorithms being implemented with a combination of data structures, objects, processes, routines, or other programming elements. Functional aspects can be implemented in algorithms that execute on one or more processors. Furthermore, the implementations of the systems and techniques disclosed herein could employ a number of conventional techniques for electronics configuration, signal processing or control, data processing, and the like. The words “mechanism” and “component” are used broadly and are not limited to mechanical or physical implementations, but can include software routines in conjunction with processors, etc. Likewise, the terms “system” or “tool” as used herein and in the figures, but in any event based on their context, may be understood as corresponding to a functional unit implemented using software, hardware (e.g., an integrated circuit, such as an ASIC), or a combination of software and hardware. In certain contexts, such systems or mechanisms may be understood to be a processor-implemented software system or processor-implemented software mechanism that is part of or callable by an executable program, which may itself be wholly or partly composed of such linked systems or mechanisms. 
Implementations or portions of implementations of the above disclosure can take the form of a computer program product accessible from, for example, a computer-usable or computer-readable medium. A computer-usable or computer-readable medium can be a device that can, for example, tangibly contain, store, communicate, or transport a program or data structure for use by or in connection with a processor. The medium can be, for example, an electronic, magnetic, optical, electromagnetic, or semiconductor device. Other suitable mediums are also available. Such computer-usable or computer-readable media can be referred to as non-transitory memory or media, and can include volatile memory or non-volatile memory that can change over time. The quality of memory or media being non-transitory refers to such memory or media storing data for some period of time or otherwise based on device power or a device power cycle. A memory of an apparatus described herein, unless otherwise specified, does not have to be physically contained by the apparatus, but is one that can be accessed remotely by the apparatus, and does not have to be contiguous with other memory that might be physically contained by the apparatus. While the disclosure has been described in connection with certain implementations, it is to be understood that the disclosure is not to be limited to the disclosed implementations but, on the contrary, is intended to cover various modifications and equivalent arrangements included within the scope of the appended claims, which scope is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures as is permitted under the law.
DETAILED DESCRIPTION OF THE DRAWINGS
Various embodiments of a Folder Engine are described herein that provide functionality for a user account to create and organize one or more folders with respect to a plurality of online chat sessions and/or online channels. In some embodiments, the user account may be a recipient user account that is receiving various messages via the online chat sessions and/or the online channels. Various embodiments of the Folder Engine as described herein provide functionality for a recipient user account to receive notifications of unread messages in online chat sessions and/or online channels, whereby the respective notifications are displayed to the recipient user account in response to explicit input received from the recipient user account. For example, the input received from the recipient user account may be an occurrence of a hover action initiated and performed by the recipient user account. According to one or more embodiments, the Folder Engine receives data from the recipient user account requesting creation of a folder. The Folder Engine receives data from the recipient user account representing selection of the online chat session(s) and/or the online channel(s) that are to be included within the folder. The Folder Engine receives data from the recipient user account representing a determination by the recipient user account of an organizational order (such as a display order) of the online chat sessions and/or the online channels included in the folder. In one or more embodiments, the Folder Engine detects a specific type of input action performed by the recipient user account proximate to a folder or a collection of online chat sessions that currently include one or more unread messages. Based on detecting that specific type of input action, the Folder Engine displays a notification. In some embodiments, where the detected specific type of input action was applied proximate to a folder, the Folder Engine displays a notification that includes display of multiple identities of online chat sessions and/or online channels. According to some embodiments, selection of an identity of a sender user account displayed in a notification acts as a shortcut operation(s) for opening a folder in which a corresponding online chat session is located. In some embodiments, the sole location of the online chat session may be within the opened folder. The online chat session includes messages sent between the recipient user account and the sender user account as well as one or more unread messages sent from the sender user account to the recipient user account in the online chat session. The shortcut operation(s) further includes triggering access of the online chat session by the recipient user account and/or triggering access of one or more unread messages in the online chat session. In this specification, reference is made in detail to specific embodiments of the invention. Some of the embodiments or their aspects are illustrated in the drawings. For clarity in explanation, the invention has been described with reference to specific embodiments; however, it should be understood that the invention is not limited to the described embodiments. On the contrary, the invention covers alternatives, modifications, and equivalents as may be included within its scope as defined by any patent claims. The following embodiments of the invention are set forth without any loss of generality to, and without imposing limitations on, the claimed invention.
In the following description, specific details are set forth in order to provide a thorough understanding of the invention. The invention may be practiced without some or all of these specific details. In addition, well known features may not have been described in detail to avoid unnecessarily obscuring the invention. In addition, it should be understood that steps of the exemplary methods set forth in this exemplary patent can be performed in different orders than the order presented in this specification. Furthermore, some steps of the exemplary methods may be performed in parallel rather than being performed sequentially. Also, the steps of the exemplary methods may be performed in a network environment in which some steps are performed by different computers in the networked environment. Some embodiments are implemented by a computer system. A computer system may include a processor, a memory, and a non-transitory computer-readable medium. The memory and non-transitory medium may store instructions for performing methods and steps described herein. FIG.1Ais a diagram illustrating an exemplary environment in which some embodiments may operate. In the exemplary environment100, a sending client device150and one or more receiving client device(s)160are connected to a processing engine102and, optionally, a communication platform140. The processing engine102is connected to the communication platform140, and optionally connected to one or more repositories130and/or databases132of historical virtual online event data, such as historical virtual meeting data. One or more of the databases may be combined or split into multiple databases. The sending client device150and receiving client device(s)160in this environment may be computers, and the communication platform server140and processing engine102may be applications or software hosted on a computer or multiple computers which are communicatively coupled via remote server or locally. The exemplary environment100is illustrated with only one sending client device, one receiving client device, one processing engine, and one communication platform, though in practice there may be more or fewer sending client devices, receiving client devices, processing engines, and/or communication platforms. In some embodiments, the sending client device, receiving client device, processing engine, and/or communication platform may be part of the same computer or device. In an embodiment(s), the processing engine102may perform methods500,600(ofFIGS.5,6) or other methods herein. In some embodiments, this may be accomplished via communication with the sending client device, receiving client device(s), processing engine102, communication platform140, and/or other device(s) over a network between the device(s) and an application server or some other network server. In some embodiments, the processing engine102is an application, browser extension, or other piece of software hosted on a computer or similar device or is itself a computer or similar device configured to host an application, browser extension, or other piece of software to perform some of the methods and embodiments herein. Sending client device150and receiving client device(s)160are devices with a display configured to present information to a user of the device. In some embodiments, the sending client device150and receiving client device(s)160present information in the form of a user interface (UI) with UI elements or components.
In some embodiments, the sending client device150and receiving client device(s)160send and receive signals and/or information to the processing engine102and/or communication platform140. The sending client device150is configured to submit messages (i.e., chat messages, content, files, documents, media, or other forms of information or data) to one or more receiving client device(s)160. The receiving client device(s)160are configured to provide access to such messages to permitted users within an expiration time window. In some embodiments, sending client device150and receiving client device(s) are computer devices capable of hosting and executing one or more applications or other programs capable of sending and/or receiving information. In some embodiments, the sending client device150and/or receiving client device(s)160may be a computer desktop or laptop, mobile phone, virtual assistant, virtual reality or augmented reality device, wearable, or any other suitable device capable of sending and receiving information. In some embodiments, the processing engine102and/or communication platform140may be hosted in whole or in part as an application or web service executed on the sending client device150and/or receiving client device(s)160. In some embodiments, one or more of the communication platform140, processing engine102, and sending client device150or receiving client device160may be the same device. In some embodiments, the sending client device150is associated with a sending user account, and the receiving client device(s)160are associated with receiving user account(s). In some embodiments, optional repositories function to store and/or maintain, respectively, user account information associated with the communication platform140, conversations between two or more user accounts of the communication platform140, and sensitive messages (which may include sensitive documents, media, or files) which are contained via the processing engine102. The optional repositories may also store and/or maintain any other suitable information for the processing engine102or communication platform140to perform elements of the methods and systems herein. In some embodiments, the optional database(s) can be queried by one or more components of system100(e.g., by the processing engine102), and specific stored data in the database(s) can be retrieved. Communication platform140is a platform configured to facilitate communication between two or more parties, such as within a conversation, “chat” (i.e., a chat room or series of public or private chat messages), video conference or meeting, message board or forum, virtual meeting, or other form of digital communication. In some embodiments, the platform140may further be associated with a video communication environment and a video communication environment client application executed on one or more computer systems. FIG.1Bis a diagram illustrating exemplary software modules154,156,158,160of a Folder Engine that may execute at least some of the functionality described herein. According to some embodiments, one or more of exemplary software modules154,156,158,160may be part of the processing engine102. In some embodiments, one or more of the exemplary software modules154,156,158,160may be distributed throughout the communication platform140. The module154functions to detect one or more types of input actions. In some embodiments, the module154functions to detect a hover action(s). The module156functions to initiate display of one or more notifications. 
In some embodiments, the module156functions to initiate display of a notification whereby a notification includes displays of one or more identities and badges. The module158functions to receive selection of a notification. In some embodiments, the module158functions to receive selection of an identity of a sender user account or an online channel included in a display of a notification. The module160functions to initiate access to an online chat session or online channel that corresponds with a selected notification. In some embodiments, the module160functions to initiate access to an online chat session or online channel that corresponds with an identity selected from a display of a notification. The above modules154,156,158,160and their functions will be described in further detail in relation to an exemplary method below andFIGS.4A,4B,4C,4D,5A,5B,5C,6A and6B. As shown in the example ofFIG.2, a user account communications interface200for accessing and communicating with the platform140is displayed at a computer device150. The interface200provides access to video data, audio data, chat data and meeting transcription related to an online event(s), such as a virtual webinar or a virtual meeting joined by a user account associated with the computer device150. The interface200further provides various types of tools, functionalities, and settings that can be selected by a user account during an online event. Various types of virtual meeting control tools, functionalities, and settings are, for example, mute/unmute audio, turn on/off video, start meeting, join meeting, view and call contacts. As shown in flowchart diagram300of the example ofFIG.3, the Folder Engine detects a hover action initiated by a recipient user account proximate to a folder created by the recipient user account. (Step310) It is understood that, prior to detection of the hover action, the Folder Engine receives a request from the recipient user account to instantiate (or create) the folder. In response to the request, the Folder Engine generates the folder. The Folder Engine further receives a selection from the recipient user account of one or more online chats to be placed within the folder. The Folder Engine receives from the recipient user account an ordering of the one or more selected online chats within the folder. In some embodiments, an online chat(s) and/or online channel(s) placed within the folder may be an online chat or online channel that was initiated prior to the request to instantiate the folder. In some embodiments, prior to detection of the hover action, the Folder Engine determines presence of at least one unread message in an online chat(s) that has been placed in the folder and initiates display of an unread message badge proximate to the folder. For example, the Folder Engine determines presence of at least one unread message in multiple online chats organized by the recipient user account in the folder. The Folder Engine determines an aggregate amount of unread messages in the multiple online chats and indicates the aggregate amount of unread messages in the unread message badge.
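A minimal sketch of the badge arithmetic just described, assuming illustrative shapes for chats and folders:

```typescript
// Sketch: the folder's unread message badge shows the aggregate unread
// count across every online chat or channel organized within the folder.
interface ChatSession { id: string; unreadCount: number; }
interface Folder { name: string; sessions: ChatSession[]; }

function folderUnreadTotal(folder: Folder): number {
  return folder.sessions.reduce((sum, s) => sum + s.unreadCount, 0);
}
```

Based on detecting the hover action, the Folder Engine initiates display of one or more notifications that correspond to one or more online chats selected by the recipient user account to be organized in the folder. (Step320) In some embodiments, a notification may include display of an identity of a sender user account or an online channel.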
The notification may further include display of an unread messages badge for each respective sender user account and online channel included in the notification. In some embodiments, a notification displays an identity of a sender user account from which one or more unread messages were sent to a first online chat between the sender user account and the recipient user account. The one or more unread messages are respective messages not yet accessed by the recipient user account. The Folder Engine receives a selection from the recipient user account of a first notification that corresponds to a first online chat within the folder. (Step330) Based on the selection of the first notification, the Folder Engine initiates access of the first online chat by the recipient user account. (Step340) In some embodiments, the Folder Engine opens the folder in which the first online chat is located. As shown in diagram400of the example ofFIG.4A, a user interface associated with a client application for a video communications environment may have a side bar402. In some embodiments, the side bar402may be part of or associated with interface200illustrated inFIG.2. In various embodiments, the side bar402may include a folder404and a badge406. The folder404may include a plurality of online chat sessions and/or a plurality of online channels. Each online chat session may be between the recipient user account and a particular sender user account(s). The badge406displayed proximate to the folder404displays a total aggregate number of unread messages across all online chat sessions that are organized within the folder404. It is understood that an unread message is a message in an online chat session or in an online channel that has yet to be accessed by the recipient user account. In some embodiments, the recipient user account may select a preferred badge mode, in which one or more badges are displayed in a particular shape (such as a circle for example) without display of a number of unread messages. The recipient user account may toggle the preferred badge mode to be active or inactive. The side bar402may further include an unread chat session indicator408. In some embodiments, the unread chat session indicator408corresponds to a collection of online chat sessions and/or online channels that currently have messages that have yet to be read (or accessed) by the recipient user account. In some embodiments, the unread chat session indicator408provides access to any online chat session and/or online channel that currently includes one or more unread messages. An unread chat badge410is displayed proximate to the unread chat session indicator408. The unread chat badge410displays a total aggregate number of unread messages across all online chat sessions and/or online channels. For example, as shown in the example ofFIG.4A, the unread chat badge410indicates a total number of 18 unread messages. As shown in diagram412of the example ofFIG.4B, the Folder Engine detects a hover action414proximate to a folder404. In some embodiments, the hover action414may be an occurrence of user input data placed near an unread message badge404-1that is displayed proximate to the folder404. For example, the hover action414may be placement of a mouse pointer proximate to the unread message badge404-1, whereby the hover action is detected based on the mouse pointer maintaining its presence proximate to the badge404-1for at least a predefined required amount of time.
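A browser-side sketch of the dwell-based detection just described; the 500 ms dwell time is an assumed default, not a value from the disclosure:

```typescript
// Sketch: report a hover action only after the pointer has remained over
// the badge element for a predefined required amount of time.
function onHoverDwell(el: HTMLElement, onHover: () => void, dwellMs = 500): void {
  let timer: number | undefined;
  el.addEventListener("mouseenter", () => {
    timer = window.setTimeout(onHover, dwellMs); // fire only after the dwell elapses
  });
  el.addEventListener("mouseleave", () => {
    if (timer !== undefined) window.clearTimeout(timer); // pointer left early: cancel
  });
}
```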
In response to detecting the hover action414, the Folder Engine displays a notification. In some embodiments, the notification may be a display of an identity416of a sender user account. The identity416of the sender user account may represent an online chat between the sender user account and the recipient user account. The notification may further include display of a badge418. The badge418indicates a total number of unread messages sent from the sender user account currently present in the online chat. For example, the total number of unread messages in the online chat may be one or more messages sent from the sender user account not yet accessed by the recipient user account. As shown in diagram420of the example ofFIG.4C, the Folder Engine detects a hover action422proximate to the unread chat badge410. In response to detecting the hover action422, the Folder Engine displays a notification424that includes display of the identities of multiple sender user accounts and/or online channels. For example, an identity426,434may represent an online chat session between a respective sender user account and the recipient user account. For example, an identity428,430,434may represent an online channel to which the recipient user account is subscribed. The unread chat badge410indicates an aggregate total number of unread messages sent from each of the sender user accounts and/or online channels displayed in the notification424. As shown in diagram436of the example ofFIG.4D, the Folder Engine detects a hover action438proximate to an unread messages badge438of a folder440. In response to detecting the hover action438, the Folder Engine displays a notification442that includes display of identities of multiple channels organized within the folder440. The multiple channels in the folder440are organized and displayed according to an organizational order selected by the recipient user account. Each identity displayed in the notification442includes a respective unread messages badge444,446,448. Each unread messages badge444,446,448indicates a total number of unread messages currently present in the corresponding online channel. The unread messages badge438of the folder440indicates an aggregate total number of unread messages currently in all the online channels displayed in the notification442. As shown in diagram500of the example ofFIG.5A, the Folder Engine detects selection of an identity502of a sender user account. For example, the selected identity502may further represent an online chat session between the sender user account and the recipient user account. As previously described, the unread chat badge410indicates a total aggregate number of unread messages. As shown in diagram504of the example ofFIG.5B, in response to detecting selection of the identity502of the sender user account, the Folder Engine initiates, renders and displays a graphic transition to display an open folder. In some embodiments, detection of the selection of the identity502further triggers the Folder Engine to open a folder in which the corresponding online chat session has been placed by the recipient user account. As shown in diagram506of the example ofFIG.5C, the graphic transition results in display of an open folder404. The display of the open folder404includes a presentation of each online chat session that is organized within the folder404according to an organizational order selected by the recipient user account.
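The badge arithmetic described above (per-chat badges, per-folder aggregates, and the global unread chat badge, plus the preferred badge mode mentioned earlier) reduces to simple sums. A minimal, self-contained sketch follows; all names are illustrative assumptions.

```typescript
// Anything with an unread count can contribute to a badge, whether a
// one-to-one chat session or an online channel.
interface UnreadSource {
  unreadCount: number;
}

// Badge shown on a folder: total across the chats organized inside it.
function folderBadgeCount(chats: UnreadSource[]): number {
  return chats.reduce((sum, chat) => sum + chat.unreadCount, 0);
}

// Badge shown on the unread chat session indicator: total across all
// online chat sessions and channels, e.g., the "18" shown in FIG.4A.
function globalBadgeCount(sources: UnreadSource[]): number {
  return sources.reduce((sum, s) => sum + s.unreadCount, 0);
}

// Preferred badge mode: a plain shape (here a dot) replaces the count.
function renderBadge(count: number, preferredBadgeMode: boolean): string {
  if (count === 0) return "";
  return preferredBadgeMode ? "●" : String(count);
}
```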
Further, the Folder Engine initiates access, by the recipient user account, of the online chat session508that corresponds to the selected identity502. Since the online chat session508has a sole location in the folder, as determined by the recipient user account, receiving selection of identity502by the Folder Engine thereby acts as a shortcut operation(s) for triggering the opening of the folder404and the recipient user account accessing the corresponding online chat session508and/or the recipient user account accessing the unread message(s) in the corresponding online chat session508. Since there was one unread message in the online chat session508and the recipient user account has now accessed that online chat session508, the unread chat badge410indicates an updated aggregate total number of unread messages. For example, the Folder Engine decrements an aggregate total number of unread messages indicated by the unread chat badge410from 18 to 17 to reflect the recipient user account's access of the unread message in the online chat session508. As shown in diagram600of the example ofFIG.6A, in some embodiments a notification424may display the identities602,610,616,622of multiple sender user accounts and/or channels. For example, the displayed identities602,622of sender user accounts may further represent distinct online chat sessions. Each displayed identity602,610,616,622includes display of an unread message badge604,612,618,624. Each unread message badge604,612,618,624indicates a current total number of unread messages present in the corresponding online chat session or channel. The notification424further includes display of a preview608,614,620,626of a most recently received unread message from each sender user account602,622or a most recently received unread message in each channel610,616. As shown in diagram628of the example ofFIG.6B, the Folder Engine displays a folder creation functionality630. The functionality630includes a folder name text field632in which the recipient user account may input a folder name to be assigned to a newly created folder. The functionality630further includes a chat or channel selection field634through which the recipient user account may search for and select one or more online chat sessions and one or more online channels to be included in the newly created folder. It is further understood that the Folder Engine may further receive folder organizational data selected by the recipient user account. The folder organizational data represents an ordering of the one or more online chat sessions and/or the one or more online channels the recipient user account has selected for inclusion within the folder. FIG.7is a diagram illustrating an exemplary computer that may perform processing in some embodiments. As shown in the example ofFIG.7, an exemplary computer700may perform operations consistent with some embodiments. The architecture of computer700is exemplary. Computers can be implemented in a variety of other ways. A wide variety of computers can be used in accordance with the embodiments herein. Processor701may perform computing functions such as running computer programs. The volatile memory702may provide temporary storage of data for the processor701. RAM is one kind of volatile memory. Volatile memory typically requires power to maintain its stored information. Storage703provides computer storage for data, instructions, and/or arbitrary information.
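Returning to the shortcut operation described at the start of this passage, the badge update (decrementing the aggregate from 18 to 17 upon access) could be sketched as below. This is a minimal illustration under assumed names; it is not the disclosed implementation.

```typescript
// Minimal shapes for the sketch; names are hypothetical.
interface ChatEntry {
  id: string;
  unreadCount: number;
}

interface BadgeState {
  aggregateUnread: number; // value shown by the unread chat badge
}

function accessChatViaShortcut(
  state: BadgeState,
  folderChats: ChatEntry[],
  chatId: string,
): ChatEntry {
  const chat = folderChats.find((c) => c.id === chatId);
  if (!chat) throw new Error(`Chat ${chatId} is not in this folder`);
  // Accessing the chat consumes its unread messages, so the aggregate
  // badge drops by that amount (18 -> 17 for a single unread message).
  state.aggregateUnread -= chat.unreadCount;
  chat.unreadCount = 0;
  return chat;
}
```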
Non-volatile memory, which can preserve data even when not powered and which includes disks and flash memory, is an example of storage. Storage703may be organized as a file system, database, or in other ways. Data, instructions, and information may be loaded from storage703into volatile memory702for processing by the processor701. The computer700may include peripherals705. Peripherals705may include input peripherals such as a keyboard, mouse, trackball, video camera, microphone, and other input devices. Peripherals705may also include output devices such as a display. Peripherals705may include removable media devices such as CD-R and DVD-R recorders/players. Communications device706may connect the computer700to an external medium. For example, communications device706may take the form of a network adapter that provides communications to a network. A computer700may also include a variety of other devices704. The various components of the computer700may be connected by a connection medium such as a bus, crossbar, or network. Some portions of the preceding detailed descriptions have been presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the ways used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of operations leading to a desired result. The operations are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like. It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise as apparent from the above discussion, it is appreciated that throughout the description, discussions utilizing terms such as “identifying” or “determining” or “executing” or “performing” or “collecting” or “creating” or “sending” or the like, refer to the action and processes of a computer system, or similar electronic computer device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage devices. The present disclosure also relates to an apparatus for performing the operations herein. This apparatus may be specially constructed for the intended purposes, or it may comprise a general-purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, and magnetic-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions, each coupled to a computer system bus.
Various general-purpose systems may be used with programs in accordance with the teachings herein, or it may prove convenient to construct a more specialized apparatus to perform the method. The structure for a variety of these systems will appear as set forth in the description above. In addition, the present disclosure is not described with reference to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of the disclosure as described herein. It will be appreciated that the present disclosure may include any one and up to all of the following examples.Example 1: A computer-implemented method comprising: detecting a hover action initiated by a recipient user account proximate to a folder created by the recipient user account; based on detecting the hover action, initiating display of one or more notifications that correspond to one or more online chats selected by the recipient user account to be organized in the folder; receiving a selection from the recipient user account of a first notification that corresponds to a first online chat within the folder; and based on the selection of the first notification, initiating access of the first online chat by the recipient user account.Example 2: The method of Example 1, further comprising: prior to detecting of the hover action: receiving a request from the recipient user account to instantiate the folder; generating the folder; receiving a selection from the recipient user account of one or more online chats for the folder; and receiving from the recipient user account an ordering of the one or more selected online chats within the folder.Example 3: The method of any Examples 1-2, wherein a respective online chat selected by the recipient user account was initiated prior to the request to instantiate the folder.Example 4: The method of any Examples 1-3, further comprising: prior to detecting of the hover action: determining presence of at least one unread message in the first online chat; and initiating display of an unread message badge proximate to the folder.Example 5: The method of any Examples 1-4, further comprising: determining presence of at least one unread message in multiple online chats organized by the recipient user account in the folder; determining an aggregate amount of unread messages in the multiple online chats; and indicating the aggregate amount of unread messages in the unread message badge.Example 6: The method of any Examples 1-5, wherein initiating display of notifications comprises: displaying an identity of a first sender user account from which one or more unread messages were sent to the first online chat, the one or more unread messages comprising respective messages not yet accessed by the recipient user account.Example 7: The method of any Examples 1-6, wherein displaying an identity of a first sender user account comprises: displaying an indication of a total number of unread messages sent from the first sender user account to the first online chat, the total number of unread messages comprising a number of unread messages sent from the first sender user account not yet accessed by the recipient user account.
Example 8: A non-transitory computer-readable medium having a computer-readable program code embodied therein to be executed by one or more processors, the program code including instructions for: detecting a hover action initiated by a recipient user account proximate to a folder created by the recipient user account; based on detecting the hover action, initiating display of one or more notifications that correspond to one or more online chats selected by the recipient user account to be organized in the folder; receiving a selection from the recipient user account of a first notification that corresponds to a first online chat within the folder; and based on the selection of the first notification, initiating access of the first online chat by the recipient user account.Example 9: The non-transitory computer-readable medium of Example 8, wherein the first online chat comprises one of: a chat message session between a first sender user account and the recipient user account; a chat message session between a plurality of sender user accounts and the recipient user account; and a chat message channel to which the recipient user account is subscribed.Example 10: The non-transitory computer-readable medium of any Examples 8-9, wherein initiating display of one or more notifications comprises: displaying an identity of a first sender user account that belongs to the first online chat in the folder; displaying a total number of unread messages sent from the first sender account; and displaying a first preview of a particular unread message sent from the first sender account.Example 11: The non-transitory computer-readable medium of any Examples 8-10, further comprising: displaying an identity of a second sender user account that belongs to a second online chat in the folder; displaying a total number of unread messages sent from the second sender account; and displaying a second preview of a particular unread message sent from the second sender account.Example 12: The non-transitory computer-readable medium of any Examples 8-11, further comprising: concurrently displaying the respective identities of the first and second sender accounts, the total number of unread messages sent from the first and second sender accounts, the first preview and the second preview.Example 13: The non-transitory computer-readable medium of any Examples 8-12, further comprising: wherein the total number of unread messages sent from the first sender account comprises: a number of messages sent from the first sender account to the first online chat that the recipient user account has yet to access; and wherein the total number of unread messages sent from the second sender account comprises: a number of messages sent from the second sender account to the second online chat that the recipient user account has yet to access.Example 14: The non-transitory computer-readable medium of any Examples 8-13, further comprising: receiving selection from the recipient user account of the identity of the second sender account; and initiating access of the second online chat by the recipient user account.Example 15: The non-transitory computer-readable medium of any Examples 8-14, further comprising: decrementing the total
number of unread messages sent from the second sender account.Example 16: A communication system comprising one or more processors configured to perform the operations of: detecting a hover action initiated by a recipient user account proximate to a folder created by the recipient user account; based on detecting the hover action, initiating display of one or more notifications that correspond to one or more online chats selected by the recipient user account to be organized in the folder; receiving a selection from the recipient user account of a first notification that corresponds to a first online chat within the folder; and based on the selection of the first notification, initiating access of the first online chat by the recipient user account.Example 17: The communication system of Example 16, further comprising: prior to detecting of the hover action in a video communication environment: receiving a request from the recipient user account, in the video communication environment, to instantiate the folder; generating the folder; receiving a selection from the recipient user account, in the video communication environment, of one or more online chats for the folder; and receiving from the recipient user account, in the video communication environment, an ordering of the one or more selected online chats within the folder.Example 18: The communication system of any Examples 16-17, further comprising: wherein the video communication environment comprises a client application.Example 19: The communication system of any Examples 16-18, further comprising: storing the folder, the one or more online chats for the folder and the ordering of the one or more selected online chats in cloud storage associated with the video communication environment.Example 20: The communication system of any Examples 16-19, further comprising: detecting a hover action proximate to a first badge associated with an unread message indicator for a collection of online chats that currently include one or more unread messages; in response to the hover action, displaying a notification that includes display of an identity of a sender user account and a second badge, the sender user account corresponding to a respective online chat session between the sender user account and the recipient user account, the second badge indicating a current number of unread messages sent from the sender user account in the respective online chat session; receiving a selection from the recipient user account of the identity of the sender user account displayed in the notification; based on the selection: (i) opening a folder in which the respective online chat session is located, the respective online chat session previously selected to be situated in the folder by the recipient user account; (ii) displaying the opened folder; and (iii) initiating access by the recipient user account of one or more unread messages in the respective online chat session. The present disclosure may be provided as a computer program product, or software, that may include a machine-readable medium having stored thereon instructions, which may be used to program a computer system (or other electronic devices) to perform a process according to the present disclosure. A machine-readable medium includes any mechanism for storing information in a form readable by a machine (e.g., a computer).
For example, a machine-readable (e.g., computer-readable) medium includes a machine (e.g., a computer) readable storage medium such as a read only memory (“ROM”), random access memory (“RAM”), magnetic disk storage media, optical storage media, flash memory devices, etc. In the foregoing disclosure, implementations of the disclosure have been described with reference to specific example implementations thereof. It will be evident that various modifications may be made thereto without departing from the broader spirit and scope of implementations of the disclosure as set forth in the following claims. The disclosure and drawings are, accordingly, to be regarded in an illustrative sense rather than a restrictive sense.
DETAILED DESCRIPTION Examples are described herein in the context of systems and methods for providing spotlight cards within a chat channel. Those of ordinary skill in the art will realize that the following description is illustrative only and is not intended to be in any way limiting. Reference will now be made in detail to implementations of examples as illustrated in the accompanying drawings. The same reference indicators will be used throughout the drawings and the following description to refer to the same or like items. In the interest of clarity, not all of the routine features of the examples described herein are shown and described. It will, of course, be appreciated that in the development of any such actual implementation, numerous implementation-specific decisions must be made in order to achieve the developer's specific goals, such as compliance with application- and business-related constraints, and that these specific goals will vary from one implementation to another and from one developer to another. Chat messaging has become a fixture of modern communication. In particular, chat channels are used across numerous platforms, especially within work environments as a means of providing swift and easy communication between individuals. A chat channel involves an application that allows multiple participants to exchange messages, including sharing documents, text messages, audio clips, etc., with other members of the chat channel. Unlike email communication, however, a chat channel generates a running dialogue of messages that are exchanged within the chat channel. As such, chat channels can accumulate thousands of messages, especially if the chat channel involves a high number of members. The high volume of messages within a chat channel can make it cumbersome for members to identify relevant or important information within the chat channel. To provide chat channel members with quick review of and access to important and relevant content within a chat channel, example spotlight cards are provided herein. Spotlight cards can include information for the chat channel that a hosting member, or a member with authority to generate spotlight cards, flags as important. A spotlight card may be positioned within a spotlight panel of the chat channel that is continuously visible while a member is in a chat channel, regardless of where in the chat channel the member scrolls. This may allow the spotlight cards to highlight the importance of the spotlight content contained within the spotlight card. For example, a host member may generate a spotlight card for a chat message that is a reminder to the chat channel members to meet a submission deadline. Since it is important that the chat channel members meet the submission deadline, the spotlight card may be positioned in the spotlight panel to highlight its importance and serve as a constant reminder to all chat channel members regardless of where the member is within the chat channel. Another issue that arises from chat channels is the limited accessibility of other applications. Often, when a chat channel is used within a business or educational setting, members of that chat channel will also use other applications in conjunction with the chat channel. For example, chat channel members may be part of a project team and use another application for work on the project documents. To work in the project documents, the chat channel members must leave the chat channel and open the separate application.
Not only is this time-consuming, but it can also interrupt members' focus or ability to recall what is happening in the chat channel. For example, if there is a request in the chat channel for specific information that is only present in the project documents, a chat channel member may navigate to the application hosting the project documents but forget the details of the request by the time he or she opens the project documents. This may require the member to toggle between the chat channel and the project document application, impacting the experience and focus of the member within the chat channel. To provide access and the ability to interact with resources external to the chat channel, example spotlight cards are provided herein. As discussed above, spotlight cards can be used to highlight important or relevant information for members of a chat channel. In some cases, the important or relevant information may be hosted by a resource, such as an application, that is separate or external from the chat channel. To highlight information that is hosted by the external resource, a spotlight card may be generated. In this case, the spotlight card can provide easy access to the resource without leaving the chat channel. For example, if a project document is hosted by a word processing application, a spotlight card of the project document may be generated and posted in the spotlight panel of the chat channel. If a member wants to preview or access the contents of the project document, the member can simply view or select the spotlight card. As will be described in greater detail below, the member can preview the contents of the project document from the spotlight card or can expand the spotlight card to access the project document. In some embodiments, a member can actively work in the project document, such as by editing the project document, without leaving the chat channel. This can allow chat channel members to remain engaged in a chat channel discussion while accessing relevant content that is hosted by external resources. Importantly, in some embodiments, the spotlight cards are dynamic and continuously update in real-time with information as it is updated in the external resource itself. For example, if sales numbers are being updated in a finance application that is separate and external from the chat channel, a corresponding spotlight card for the sales numbers may also update to reflect the content as it is present in the finance application. This can ensure that the spotlight cards provide relevant and correct content to the chat channel members. Not only can a single chat channel become overwhelming with the volume of content within the chat channel, but members can become overwhelmed by being part of numerous chat channels. For example, a single member may be part of a dozen or more chat channels, each containing a high volume of messages and content. For the purposes of this disclosure, this type of member may be referenced as a multi-channel member. Not only would it be time-consuming, but it could even be impossible for a multi-channel member to identify the most recent and relevant information for each of the channels, especially when content is continuously being updated and generated. To provide a multi-channel member easy access and review of the most recent and relevant information across numerous chat channels, example spotlight home pages are provided herein.
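By way of illustration only, a dynamic spotlight card that mirrors content held by an external resource might be sketched as follows. The ExternalResource interface and all names here are hypothetical; a real integration would use the external application's actual API rather than this illustrative interface.

```typescript
// Hypothetical handle to an external application hosting the content.
interface ExternalResource {
  fetchContent(): Promise<string>;
  // Invokes the callback whenever the hosted content changes.
  onChange(callback: (content: string) => void): void;
}

class SpotlightCard {
  content = "";

  constructor(
    readonly title: string,
    private readonly resource: ExternalResource,
  ) {}

  async attach(): Promise<void> {
    // Seed the card with the current state of the external resource...
    this.content = await this.resource.fetchContent();
    // ...then keep it updated in real time as the resource changes,
    // e.g., sales numbers edited in a separate finance application.
    this.resource.onChange((fresh) => {
      this.content = fresh;
    });
  }
}
```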
A spotlight home page may aggregate the spotlight cards from across multiple chat channels in a simple display for ease of review and access. Additionally, the spotlight home page may identify content in each chat channel that is relevant to the multi-channel member, such as a spotlight thread (e.g., a thread with a high number of replies or messages) or a message that mentions the multi-channel member. By aggregating important and relevant information from numerous chat channels in a single place, the multi-channel member can remain up-to-date on the important content of each channel, as well as easily access the content within the relevant chat channel simply upon selecting the spotlight card. This illustrative example is given to introduce the reader to the general subject matter discussed herein and the disclosure is not limited to this example. The following sections describe various additional non-limiting examples and examples of systems and methods for providing spotlight cards within a chat channel. Referring now toFIG.1,FIG.1shows an example system100that provides videoconferencing functionality to various client devices. The system100includes a video conference provider110that is connected to multiple communication networks120,130, through which various client devices140-180can participate in video conferences hosted by the chat and video conference provider110. For example, the chat and video conference provider110can be located within a private network to provide video conferencing services to devices within the private network, or it can be connected to a public network, e.g., the internet, so it may be accessed by anyone. Some examples may even provide a hybrid model in which a video conference provider110may supply components to enable a private organization to host private internal video conferences or to connect its system to the chat and video conference provider110over a public network. The system optionally also includes one or more user identity providers, e.g., user identity provider115, which can provide user identity services to users of the client devices140-160and may authenticate user identities of one or more users to the chat and video conference provider110. In this example, the user identity provider115is operated by a different entity than the chat and video conference provider110, though in some examples, they may be the same entity. Video conference provider110allows clients to create videoconference meetings (or “meetings”) and invite others to participate in those meetings as well as perform other related functionality, such as recording the meetings, generating transcripts from meeting audio, generating summaries and translations from meeting audio, managing user functionality in the meetings, enabling text messaging during the meetings, creating and managing breakout rooms from the virtual meeting, etc.FIG.2, described below, provides a more detailed description of the architecture and functionality of the chat and video conference provider110. It should be understood that the term “meeting” encompasses the term “webinar” used herein. Meetings in this example video conference provider110are provided in virtual rooms to which participants are connected. The room in this context is a construct provided by a server that provides a common point at which the various video and audio data is received before being multiplexed and provided to the various participants.
While a “room” is the label for this concept in this disclosure, any suitable functionality that enables multiple participants to participate in a common videoconference may be used. To create a meeting with the chat and video conference provider110, a user may contact the chat and video conference provider110using a client device140-180and select an option to create a new meeting. Such an option may be provided in a webpage accessed by a client device140-160or a client application executed by a client device140-160. For telephony devices, the user may be presented with an audio menu that they may navigate by pressing numeric buttons on their telephony device. To create the meeting, the chat and video conference provider110may prompt the user for certain information, such as a date, time, and duration for the meeting, a number of participants, a type of encryption to use, whether the meeting is confidential or open to the public, etc. After receiving the various meeting settings, the chat and video conference provider may create a record for the meeting and generate a meeting identifier and, in some examples, a corresponding meeting password or passcode (or other authentication information), all of which meeting information is provided to the meeting host. After receiving the meeting information, the user may distribute the meeting information to one or more users to invite them to the meeting. To begin the meeting at the scheduled time (or immediately, if the meeting was set for an immediate start), the host provides the meeting identifier and, if applicable, corresponding authentication information (e.g., a password or passcode). The video conference system then initiates the meeting and may admit users to the meeting. Depending on the options set for the meeting, the users may be admitted immediately upon providing the appropriate meeting identifier (and authentication information, as appropriate), even if the host has not yet arrived, or the users may be presented with information indicating that the meeting has not yet started or the host may be required to specifically admit one or more of the users. During the meeting, the participants may employ their client devices140-180to capture audio or video information and stream that information to the chat and video conference provider110. They also receive audio or video information from the chat and video conference provider110, which is displayed by the respective client device140to enable the various users to participate in the meeting. At the end of the meeting, the host may select an option to terminate the meeting, or it may terminate automatically at a scheduled end time or after a predetermined duration. When the meeting terminates, the various participants are disconnected from the meeting, and they will no longer receive audio or video streams for the meeting (and will stop transmitting audio or video streams). The chat and video conference provider110may also invalidate the meeting information, such as the meeting identifier or password/passcode. To provide such functionality, one or more client devices140-180may communicate with the chat and video conference provider110using one or more communication networks, such as network120or the public switched telephone network (“PSTN”)130. The client devices140-180may be any suitable computing or communications devices that have audio or video capability.
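A minimal sketch of the meeting-record lifecycle just described (create a record with an identifier and passcode, then invalidate the meeting information at termination) follows. The field names and the randomized identifier format are assumptions made for illustration only.

```typescript
interface MeetingRecord {
  meetingId: string;
  passcode: string;
  startTime: Date;
  durationMinutes: number;
  started: boolean;
  active: boolean;
}

function createMeeting(startTime: Date, durationMinutes: number): MeetingRecord {
  // The provider generates a meeting identifier and, in some examples,
  // a corresponding passcode, both provided to the meeting host.
  return {
    meetingId: Math.random().toString(36).slice(2, 12),
    passcode: Math.random().toString(36).slice(2, 8),
    startTime,
    durationMinutes,
    started: false,
    active: true,
  };
}

function terminateMeeting(record: MeetingRecord): void {
  // Ending the meeting invalidates the meeting information so later
  // join attempts with the old identifier/passcode are denied.
  record.started = false;
  record.active = false;
  record.passcode = "";
}
```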
For example, client devices140-160may be conventional computing devices, such as desktop or laptop computers having processors and computer-readable media, connected to the chat and video conference provider110using the internet or other suitable computer network. Suitable networks include the internet, any local area network (“LAN”), metro area network (“MAN”), wide area network (“WAN”), cellular network (e.g., 3G, 4G, 4G LTE, 5G, etc.), or any combination of these. Other types of computing devices may be used instead or as well, such as tablets, smartphones, and dedicated video conferencing equipment. Each of these devices may provide both audio and video capabilities and may enable one or more users to participate in a video conference meeting hosted by the chat and video conference provider110. In addition to the computing devices discussed above, client devices140-180may also include one or more telephony devices, such as cellular telephones (e.g., cellular telephone170), internet protocol (“IP”) phones (e.g., telephone180), or conventional telephones. Such telephony devices may allow a user to make conventional telephone calls to other telephony devices using the PSTN, including the chat and video conference provider110. It should be appreciated that certain computing devices may also provide telephony functionality and may operate as telephony devices. For example, smartphones typically provide cellular telephone capabilities and thus may operate as telephony devices in the example system100shown inFIG.1. In addition, conventional computing devices may execute software to enable telephony functionality, which may allow the user to make and receive phone calls, e.g., using a headset and microphone. Such software may communicate with a PSTN gateway to route the call from a computer network to the PSTN. Thus, telephony devices encompass any devices that can make conventional telephone calls and are not limited solely to dedicated telephony devices like conventional telephones. Referring again to client devices140-160, these devices140-160contact the chat and video conference provider110using network120and may provide information to the chat and video conference provider110to access functionality provided by the chat and video conference provider110, such as access to create new meetings or join existing meetings. To do so, the client devices140-160may provide user identification information, meeting identifiers, meeting passwords or passcodes, etc. In examples that employ a user identity provider115, a client device, e.g., client devices140-160, may operate in conjunction with a user identity provider115to provide user identification information or other user information to the chat and video conference provider110. A user identity provider115may be any entity trusted by the chat and video conference provider110that can help identify a user to the chat and video conference provider110. For example, a trusted entity may be a server operated by a business or other organization and with whom the user has established their identity, such as an employer or trusted third-party. The user may sign into the user identity provider115, such as by providing a username and password, to access their identity at the user identity provider115. The identity, in this sense, is information established and maintained at the user identity provider115that can be used to identify a particular user, irrespective of the client device they may be using.
An example of an identity may be an email account established at the user identity provider115by the user and secured by a password or additional security features, such as biometric authentication, two-factor authentication, etc. However, identities may be distinct from functionality such as email. For example, a health care provider may establish identities for its patients. And while such identities may have associated email accounts, the identity is distinct from those email accounts. Thus, a user's “identity” relates to a secure, verified set of information that is tied to a particular user and should be accessible only by that user. By accessing the identity, the associated user may then verify themselves to other computing devices or services, such as the chat and video conference provider110. When the user accesses the chat and video conference provider110using a client device, the chat and video conference provider110communicates with the user identity provider115using information provided by the user to verify the user's identity. For example, the user may provide a username or cryptographic signature associated with a user identity provider115. The user identity provider115then either confirms the user's identity or denies the request. Based on this response, the chat and video conference provider110either provides or denies access to its services, respectively. For telephony devices, e.g., client devices170-180, the user may place a telephone call to the chat and video conference provider110to access video conference services. After the call is answered, the user may provide information regarding a video conference meeting, e.g., a meeting identifier (“ID”), a passcode or password, etc., to allow the telephony device to join the meeting and participate using audio devices of the telephony device, e.g., microphone(s) and speaker(s), even if video capabilities are not provided by the telephony device. Because telephony devices typically have more limited functionality than conventional computing devices, they may be unable to provide certain information to the chat and video conference provider110. For example, telephony devices may be unable to provide user identification information to identify the telephony device or the user to the chat and video conference provider110. Thus, the chat and video conference provider110may provide more limited functionality to such telephony devices. For example, the user may be permitted to join a meeting after providing meeting information, e.g., a meeting identifier and passcode, but they may be identified only as an anonymous participant in the meeting. This may restrict their ability to interact with the meetings in some examples, such as by limiting their ability to speak in the meeting, hear or view certain content shared during the meeting, or access other meeting functionality, such as joining breakout rooms or engaging in text chat with other participants in the meeting. It should be appreciated that users may choose to participate in meetings anonymously and decline to provide user identification information to the chat and video conference provider110, even in cases where the user has an authenticated identity and employs a client device capable of identifying the user to the chat and video conference provider110. The chat and video conference provider110may determine whether to allow such anonymous users to use services provided by the chat and video conference provider110. 
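The verification round-trip described above (the provider consults the user identity provider, which either confirms or denies the identity, and access is granted or denied accordingly) could be sketched as follows. The IdentityProvider interface is a stand-in assumed for illustration; real deployments would use an actual protocol such as single sign-on.

```typescript
// Hypothetical handle to a user identity provider.
interface IdentityProvider {
  verify(credential: string): Promise<boolean>;
}

async function authenticateUser(
  idp: IdentityProvider,
  credential: string, // e.g., a username or cryptographic signature
): Promise<"access-granted" | "access-denied"> {
  // The identity provider either confirms the user's identity or denies
  // the request; the conference provider then grants or denies service.
  const confirmed = await idp.verify(credential);
  return confirmed ? "access-granted" : "access-denied";
}
```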
Anonymous users, regardless of the reason for anonymity, may be restricted as discussed above with respect to users employing telephony devices, and in some cases may be prevented from accessing certain meetings or other services, or may be entirely prevented from accessing the chat and video conference provider110. Referring again to video conference provider110, in some examples, it may allow client devices140-160to encrypt their respective video and audio streams to help improve privacy in their meetings. Encryption may be provided between the client devices140-160and the chat and video conference provider110or it may be provided in an end-to-end configuration where multimedia streams (e.g., audio or video streams) transmitted by the client devices140-160are not decrypted until they are received by another client device140-160participating in the meeting. Encryption may also be provided during only a portion of a communication; for example, encryption may be used for otherwise unencrypted communications that cross international borders. Client-to-server encryption may be used to secure the communications between the client devices140-160and the chat and video conference provider110, while allowing the chat and video conference provider110to access the decrypted multimedia streams to perform certain processing, such as recording the meeting for the participants or generating transcripts of the meeting for the participants. End-to-end encryption may be used to keep the meeting entirely private to the participants without any worry about a video conference provider110having access to the substance of the meeting. Any suitable encryption methodology may be employed, including key-pair encryption of the streams. For example, to provide end-to-end encryption, the meeting host's client device may obtain public keys for each of the other client devices participating in the meeting and securely exchange a set of keys to encrypt and decrypt multimedia content transmitted during the meeting. Thus, the client devices140-160may securely communicate with each other during the meeting. Further, in some examples, certain types of encryption may be limited by the types of devices participating in the meeting. For example, telephony devices may lack the ability to encrypt and decrypt multimedia streams. Thus, while encrypting the multimedia streams may be desirable in many instances, it is not required as it may prevent some users from participating in a meeting. By using the example system shown inFIG.1, users can create and participate in meetings using their respective client devices140-180via the chat and video conference provider110. Further, such a system enables users to use a wide variety of different client devices140-180, from traditional standards-based video conferencing hardware to dedicated video conferencing equipment to laptop or desktop computers to handheld devices to legacy telephony devices, etc. Referring now toFIG.2,FIG.2shows an example system200in which a video conference provider210provides videoconferencing functionality to various client devices220-250. The client devices220-250include two conventional computing devices220-230, dedicated equipment for a video conference room240, and a telephony device250. Each client device220-250communicates with the chat and video conference provider210over a communications network, such as the internet for client devices220-240or the PSTN for client device250, generally as described above with respect toFIG.1.
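The end-to-end key exchange described above (the host wraps a per-meeting content key under each participant's public key, so the provider relays only ciphertext) might be sketched like this. The crypto primitive is deliberately abstracted behind an assumed interface; a real client would rely on a vetted cryptographic library rather than this illustration.

```typescript
// Hypothetical public-key primitive; not a real crypto implementation.
interface PublicKeyCrypto {
  encryptFor(publicKey: string, payload: Uint8Array): Uint8Array;
}

function distributeMeetingKey(
  crypto: PublicKeyCrypto,
  meetingKey: Uint8Array,                     // symmetric key for the streams
  participantPublicKeys: Map<string, string>, // clientId -> public key
): Map<string, Uint8Array> {
  const wrapped = new Map<string, Uint8Array>();
  for (const [clientId, publicKey] of participantPublicKeys) {
    // Only the holder of the matching private key can unwrap the key,
    // so the provider never sees decrypted multimedia streams.
    wrapped.set(clientId, crypto.encryptFor(publicKey, meetingKey));
  }
  return wrapped;
}
```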
The chat and video conference provider210is also in communication with one or more user identity providers215, which can authenticate various users to the chat and video conference provider210generally as described above with respect toFIG.1. In this example, the chat and video conference provider210employs multiple different servers (or groups of servers) to provide different examples of video conference functionality, thereby enabling the various client devices to create and participate in video conference meetings. The chat and video conference provider210uses one or more real-time media servers212, one or more network services servers214, one or more video room gateways216, and one or more telephony gateways218. Each of these servers212-218is connected to one or more communications networks to enable them to collectively provide access to and participation in one or more video conference meetings to the client devices220-250. The real-time media servers212provide multiplexed multimedia streams to meeting participants, such as the client devices220-250shown inFIG.2. While video and audio streams typically originate at the respective client devices, they are transmitted from the client devices220-250to the chat and video conference provider210via one or more networks where they are received by the real-time media servers212. The real-time media servers212determine which protocol is optimal based on, for example, proxy settings and the presence of firewalls, etc. For example, the client device might select among UDP, TCP, TLS, or HTTPS for audio and video and UDP for content screen sharing. The real-time media servers212then multiplex the various video and audio streams based on the target client device and communicate multiplexed streams to each client device. For example, the real-time media servers212receive audio and video streams from client devices220-240and only an audio stream from client device250. The real-time media servers212then multiplex the streams received from devices230-250and provide the multiplexed stream to client device220. The real-time media servers212are adaptive, for example, reacting to real-time network and client changes, in how they provide these streams. For example, the real-time media servers212may monitor parameters such as a client's bandwidth, CPU usage, memory, and network I/O as well as network parameters such as packet loss, latency, and jitter to determine how to modify the way in which streams are provided. The client device220receives the stream, performs any decryption, decoding, and demultiplexing on the received streams, and then outputs the audio and video using the client device's video and audio devices. In this example, the real-time media servers do not multiplex client device220's own video and audio feeds when transmitting streams to it. Instead, each client device220-250only receives multimedia streams from other client devices220-250. For telephony devices that lack video capabilities, e.g., client device250, the real-time media servers212only deliver multiplexed audio streams. The client device220may receive multiple streams for a particular communication, allowing the client device220to switch between streams to provide a higher quality of service. In addition to multiplexing multimedia streams, the real-time media servers212may also decrypt incoming multimedia streams in some examples. As discussed above, multimedia streams may be encrypted between the client devices220-250and the chat and video conference provider210.
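The multiplexing rule just described (each client receives every stream except its own) is straightforward to illustrate. The following sketch reduces stream contents to labels; the names are assumptions for illustration only.

```typescript
interface MediaStreamInfo {
  sourceClientId: string;
  kind: "audio" | "video";
}

function multiplexFor(
  targetClientId: string,
  incoming: MediaStreamInfo[],
): MediaStreamInfo[] {
  // The real-time media server omits the target's own audio/video feeds
  // and bundles the remaining streams for delivery.
  return incoming.filter((s) => s.sourceClientId !== targetClientId);
}

// Example: a telephony device such as client device 250 contributes audio only.
const streams: MediaStreamInfo[] = [
  { sourceClientId: "220", kind: "audio" },
  { sourceClientId: "220", kind: "video" },
  { sourceClientId: "250", kind: "audio" },
];
console.log(multiplexFor("220", streams)); // -> only device 250's audio
```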
In some such examples, the real-time media servers212may decrypt incoming multimedia streams, multiplex the multimedia streams appropriately for the various clients, and encrypt the multiplexed streams for transmission. As mentioned above with respect toFIG.1, the chat and video conference provider210may provide certain functionality with respect to unencrypted multimedia streams at a user's request. For example, the meeting host may be able to request that the meeting be recorded or that a transcript of the audio streams be prepared, which may then be performed by the real-time media servers212using the decrypted multimedia streams, or the recording or transcription functionality may be off-loaded to a dedicated server (or servers), e.g., cloud recording servers, for recording the audio and video streams. In some examples, the chat and video conference provider210may allow a meeting participant to notify it of inappropriate behavior or content in a meeting. Such a notification may trigger the real-time media servers212to record a portion of the meeting for review by the chat and video conference provider210. Still other functionality may be implemented to take actions based on the decrypted multimedia streams at the chat and video conference provider, such as monitoring video or audio quality, adjusting or changing media encoding mechanisms, etc. It should be appreciated that multiple real-time media servers212may be involved in communicating data for a single meeting and multimedia streams may be routed through multiple different real-time media servers212. In addition, the various real-time media servers212may not be co-located, but instead may be located at multiple different geographic locations, which may enable high-quality communications between clients that are dispersed over wide geographic areas, such as being located in different countries or on different continents. Further, in some examples, one or more of these servers may be co-located on a client's premises, e.g., at a business or other organization. For example, different geographic regions may each have one or more real-time media servers212to enable client devices in the same geographic region to have a high-quality connection into the chat and video conference provider210via local servers212to send and receive multimedia streams, rather than connecting to a real-time media server located in a different country or on a different continent. The local real-time media servers212may then communicate with physically distant servers using high-speed network infrastructure, e.g., internet backbone network(s), that otherwise might not be directly available to client devices220-250themselves. Thus, routing multimedia streams may be distributed throughout the video conference system210and across many different real-time media servers212. Turning to the network services servers214, these servers214provide administrative functionality to enable client devices to create or participate in meetings, send meeting invitations, create or manage user accounts or subscriptions, and other related functionality. Further, these servers may be configured to perform different functionalities or to operate at different levels of a hierarchy, e.g., for specific regions or localities, to manage portions of the chat and video conference provider under a supervisory set of servers.
When a client device220-250accesses the chat and video conference provider210, it will typically communicate with one or more network services servers214to access their account or to participate in a meeting. When a client device220-250first contacts the chat and video conference provider210in this example, it is routed to a network services server214. The client device may then provide access credentials for a user, e.g., a username and password or single sign-on credentials, to gain authenticated access to the chat and video conference provider210. This process may involve the network services servers214contacting a user identity provider215to verify the provided credentials. Once the user's credentials have been accepted, the user may interact with the network services servers214to perform administrative functionality, like updating user account information, if the user has an identity with the chat and video conference provider210, or scheduling a new meeting. In some examples, users may access the chat and video conference provider210anonymously. When communicating anonymously, a client device220-250may communicate with one or more network services servers214but only provide information to create or join a meeting, depending on what features the chat and video conference provider allows for anonymous users. For example, an anonymous user may access the chat and video conference provider using client device220and provide a meeting ID and passcode. The network services server214may use the meeting ID to identify an upcoming or on-going meeting and verify the passcode is correct for the meeting ID. After doing so, the network services server(s)214may then communicate information to the client device220to enable the client device220to join the meeting and communicate with appropriate real-time media servers212. In cases where a user wishes to schedule a meeting, the user (anonymous or authenticated) may select an option to schedule a new meeting and may then select various meeting options, such as the date and time for the meeting, the duration for the meeting, a type of encryption to be used, one or more users to invite, privacy controls (e.g., not allowing anonymous users, preventing screen sharing, manually authorizing admission to the meeting, etc.), meeting recording options, etc. The network services servers214may then create and store a meeting record for the scheduled meeting. When the scheduled meeting time arrives (or within a threshold period of time in advance), the network services server(s)214may accept requests to join the meeting from various users. To handle requests to join a meeting, the network services server(s)214may receive meeting information, such as a meeting ID and passcode, from one or more client devices220-250. The network services server(s)214locate a meeting record corresponding to the provided meeting ID and then confirm whether the scheduled start time for the meeting has arrived, whether the meeting host has started the meeting, and whether the passcode matches the passcode in the meeting record. If the request is made by the host, the network services server(s)214activates the meeting and connects the host to a real-time media server212to enable the host to begin sending and receiving multimedia streams. Once the host has started the meeting, subsequent users requesting access will be admitted to the meeting if the meeting record is located and the passcode matches the passcode supplied by the requesting client device220-250.
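A minimal sketch of the join-request checks just described (locate the record, confirm the passcode, and confirm the host has started the meeting, with the host's own arrival activating it) follows; the types and return values are illustrative assumptions.

```typescript
interface StoredMeeting {
  passcode: string;
  started: boolean;
  active: boolean;
}

function handleJoinRequest(
  records: Map<string, StoredMeeting>,
  meetingId: string,
  passcode: string,
  isHost: boolean,
): "admitted" | "denied" | "not-started" {
  const record = records.get(meetingId);
  // No record, a torn-down meeting, or a bad passcode all end the same way.
  if (!record || !record.active || record.passcode !== passcode) {
    return "denied";
  }
  if (isHost) {
    record.started = true; // the host's arrival activates the meeting
    return "admitted";
  }
  return record.started ? "admitted" : "not-started";
}
```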
In some examples, additional access controls may be used as well. But if the network services server(s)214determines to admit the requesting client device220-250to the meeting, the network services server214identifies a real-time media server212to handle multimedia streams to and from the requesting client device220-250and provides information to the client device220-250to connect to the identified real-time media server212. Additional client devices220-250may be added to the meeting as they request access through the network services server(s)214. After joining a meeting, client devices will send and receive multimedia streams via the real-time media servers212, but they may also communicate with the network services servers214as needed during meetings. For example, if the meeting host leaves the meeting, the network services server(s)214may appoint another user as the new meeting host and assign host administrative privileges to that user. Hosts may have administrative privileges to allow them to manage their meetings, such as by enabling or disabling screen sharing, muting or removing users from the meeting, assigning or moving users to the mainstage or a breakout room if present, recording meetings, etc. Such functionality may be managed by the network services server(s)214. For example, if a host wishes to remove a user from a meeting, they may identify the user and issue a command through a user interface on their client device. The command may be sent to a network services server214, which may then disconnect the identified user from the corresponding real-time media server212. If the host wishes to remove one or more participants from a meeting, such a command may also be handled by a network services server214, which may terminate the authorization of the one or more participants for joining the meeting. In addition to creating and administering on-going meetings, the network services server(s)214may also be responsible for closing and tearing-down meetings once they have completed. For example, the meeting host may issue a command to end an on-going meeting, which is sent to a network services server214. The network services server214may then remove any remaining participants from the meeting, communicate with one or more real-time media servers212to stop streaming audio and video for the meeting, and deactivate the meeting record, e.g., by deleting a corresponding passcode for the meeting from the meeting record, or delete the meeting record(s) corresponding to the meeting. Thus, if a user later attempts to access the meeting, the network services server(s)214may deny the request. Depending on the functionality provided by the chat and video conference provider, the network services server(s)214may provide additional functionality, such as by providing private meeting capabilities for organizations, special types of meetings (e.g., webinars), etc. Such functionality may be provided according to various examples of video conferencing providers according to this description. Referring now to the video room gateway servers216, these servers216provide an interface for dedicated video conferencing hardware, such as may be used in dedicated video conferencing rooms. Such video conferencing hardware may include one or more cameras and microphones and a computing device designed to receive video and audio streams from each of the cameras and microphones and connect with the chat and video conference provider210.
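The host-only removal command described above (disconnect the user from the media server and revoke their authorization to rejoin) could be sketched as follows; the session shape and names are illustrative assumptions.

```typescript
interface MeetingSession {
  hostId: string;
  connectedUserIds: Set<string>;
  authorizedUserIds: Set<string>;
}

function removeParticipant(
  session: MeetingSession,
  requestedBy: string,
  userId: string,
): boolean {
  if (requestedBy !== session.hostId) return false; // host-only privilege
  session.connectedUserIds.delete(userId);  // disconnect from the media server
  session.authorizedUserIds.delete(userId); // terminate authorization to rejoin
  return true;
}
```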
For example, the video conferencing hardware may be provided by the chat and video conference provider to one or more of its subscribers, which may provide access credentials to the video conferencing hardware for use in connecting to the chat and video conference provider210. The video room gateway servers216provide specialized authentication and communication with the dedicated video conferencing hardware that may not be available to other client devices220-230,250. For example, the video conferencing hardware may register with the chat and video conference provider when it is first installed and the video room gateway may authenticate the video conferencing hardware using such registration as well as information provided to the video room gateway server(s)216when dedicated video conferencing hardware connects to it, such as device ID information, subscriber information, hardware capabilities, hardware version information, etc. Upon receiving such information and authenticating the dedicated video conferencing hardware, the video room gateway server(s)216may interact with the network services servers214and real-time media servers212to allow the video conferencing hardware to create or join meetings hosted by the chat and video conference provider210. Referring now to the telephony gateway servers218, these servers218enable and facilitate telephony devices' participation in meetings hosted by the chat and video conference provider210. Because telephony devices communicate using the PSTN and not using computer networking protocols, such as TCP/IP, the telephony gateway servers218act as an interface that converts between the PSTN and the networking system used by the chat and video conference provider210. For example, if a user uses a telephony device to connect to a meeting, they may dial a phone number corresponding to one of the chat and video conference provider's telephony gateway servers218. The telephony gateway server218will answer the call and generate audio messages requesting information from the user, such as a meeting ID and passcode. The user may enter such information using buttons on the telephony device, e.g., by sending dual-tone multi-frequency (“DTMF”) audio signals to the telephony gateway server218. The telephony gateway server218determines the numbers or letters entered by the user and provides the meeting ID and passcode information to the network services servers214, along with a request to join or start the meeting, generally as described above. Once the telephony client device250has been accepted into a meeting, the telephony gateway server218, rather than the telephony client device250itself, joins the meeting on the telephony device's behalf. After joining the meeting, the telephony gateway server218receives an audio stream from the telephony device and provides it to the corresponding real-time media server212and receives audio streams from the real-time media server212, decodes them, and provides the decoded audio to the telephony device. Thus, the telephony gateway servers218operate essentially as client devices, while the telephony device operates largely as an input/output device, e.g., a microphone and speaker, for the corresponding telephony gateway server218, thereby enabling the user of the telephony device to participate in the meeting despite not using a computing device or video. It should be appreciated that the components of the chat and video conference provider210discussed above are merely examples of such devices and an example architecture. 
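As a non-limiting illustration of the DTMF entry described above, the sketch below accumulates digits into a meeting ID and passcode. The use of “#” as a field terminator is an assumed convention for the example, not a detail taken from this disclosure.

```typescript
// Illustrative sketch only: hypothetical accumulation of DTMF digits into a
// meeting ID and passcode, as a telephony gateway server might do.
type DtmfDigit = "0" | "1" | "2" | "3" | "4" | "5" | "6" | "7" | "8" | "9" | "*" | "#";

class DtmfCollector {
  private buffer = "";
  private fields: string[] = [];

  press(digit: DtmfDigit): void {
    if (digit === "#") {
      // Assumed convention for this sketch: '#' terminates a field.
      this.fields.push(this.buffer);
      this.buffer = "";
    } else if (digit !== "*") {
      this.buffer += digit;
    }
  }

  /** Returns [meetingId, passcode] once both fields have been entered. */
  credentials(): [string, string] | undefined {
    return this.fields.length >= 2 ? [this.fields[0], this.fields[1]] : undefined;
  }
}

// Usage: the key presses "123456#9876#" would yield meeting ID "123456" and
// passcode "9876", which the gateway would forward to the network services servers.
```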
Some video conference providers may provide more or less functionality than described above and may not separate functionality into different types of servers as discussed above. Instead, any suitable servers and network architectures may be used according to different examples. In some embodiments, in addition to the video conferencing functionality described above, the chat and video conference provider210(or the chat and video conference provider110) may provide a chat functionality. In such examples, the chat and video conference provider210may allow a user to create one or more chat channels where the user may exchange messages with other users (e.g., members) that have access to the chat channel(s). The messages may include text, image files, video files, or other files. In some examples, a chat channel may be “open,” meaning that any user may access the chat channel. In other examples, the chat channel may require that a user be granted permission to access the chat channel. The chat and video conference provider210may provide permission to a user and/or an owner of the chat channel may provide permission to the user. Furthermore, there may be any number of members permitted in the chat channel. Similar to the formation of a meeting, a chat channel may be provided by a server where messages exchanged between members of the chat channel are received and then directed to respective client devices. For example, if the client devices220-250are part of the same chat channel, messages may be exchanged between the client devices220-250via the chat and video conference provider210in a manner similar to how a meeting is hosted by the chat and video conference provider210. Referring now toFIG.3,FIG.3shows an example chat channel322including a spotlight panel330, according to an embodiment herein. The chat channel322may be accessible through a master chat panel300. The master chat panel300may be displayed on a client device, such as the client device220, in response to information sent by a chat and video conference provider, such as the chat and video conference provider110inFIG.1. The master chat panel300may be generated by an application, e.g., a standalone chat client or integrated into a video conferencing application, stored on the client device and run by one or more processors. The master chat panel300may include a general dashboard304, a chat control dashboard320, a sidebar308, a chat window350, a reply dashboard326, and a reply panel324. The general dashboard304may include one or more buttons or links that switch functionalities and/or views of the master chat panel300. For example,FIG.3shows a chat view, perhaps in response to a user command selecting a chat button306in the general dashboard304. In this view, the chat window350, the reply panel324, and other components illustrated inFIG.3may be displayed on the client device. In other examples, a contacts button may be selected by a user. In response to the contacts button being selected, the chat window350, the reply dashboard326, and the reply panel324may be replaced by a display of a contacts window including a list of user contacts associated with the user of the client device. The sidebar308may be displayed alongside the contacts window. Other configurations are also possible. Various buttons on the general dashboard304may correspond to various displays of windows being displayed on the client device. Any number of components shown inFIG.3may be displayed on the client device with any of the various windows. 
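Returning briefly to the chat-channel message delivery described above, the following sketch illustrates one way a server might receive a message from one member and direct it to the respective client devices of the other members; the types and the delivery callback are hypothetical.

```typescript
// Illustrative sketch only: a hypothetical server-side fan-out of a chat
// message to the members of a channel, analogous to hosting a meeting.
interface ChatMessage {
  channelId: string;
  senderId: string;
  body: string;
  sentAt: Date;
}

class ChatChannelHub {
  private members = new Map<string, Set<string>>(); // channelId -> member IDs

  constructor(private deliver: (memberId: string, msg: ChatMessage) => void) {}

  join(channelId: string, memberId: string): void {
    if (!this.members.has(channelId)) this.members.set(channelId, new Set());
    this.members.get(channelId)!.add(memberId);
  }

  post(msg: ChatMessage): void {
    // Direct the message to each member's client device except the sender's.
    for (const memberId of this.members.get(msg.channelId) ?? []) {
      if (memberId !== msg.senderId) this.deliver(memberId, msg);
    }
  }
}
```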
Similarly, any of the components may cease to be displayed in accordance with any of the windows. The sidebar308may include one or more chat channel headings, such as chats312, channels314, and recent318. The chats312heading may include one or more chat channels, such as chat channel313. The chats312may include private chat channels, where messages in a chat channel are exchanged in a one-on-one manner. For example, the chat channel313may be between the member viewing the master chat panel300and one other member, such as Janis Cork, as depicted. Messages exchanged via the chat channel313may only be accessible by the members of the chat channel313. One-on-one chat channels, such as those provided under the chats312heading, may allow members to securely communicate with each other or track communications between themselves. The channels314heading may be for chat channels that include two or more users. For example, a chat channel316may be included under the channels314heading because the chat channel316is for a Design Team. The chat channel316may include two or more members who have access to send and receive messages within the chat channel316. In some examples, the chat channel316may only be accessed by members who have permission to enter the chat channel316, such as members who receive and accept an invitation to join the chat channel316. In some embodiments, a chat channel may have a host or member who has host controls over the chat channel. For example, host controls may include the ability to establish and invite members to a chat channel. Additionally, as will be described in greater detail below, host controls may also grant a member the ability to generate and pin a spotlight card within a spotlight panel330. The recent318heading may indicate chat channels that a viewing member of the master chat panel300has recently viewed. The recent318heading may allow the viewing member easy access to commonly or recently viewed or accessed chat channels. “Recently accessed” chat channels may be determined by the client device to be a fixed number of most recent channels accessed by the viewing member, or may be only those chat channels accessed within a certain time, calculated from the current time. Although only the chat channel headings312,314, and318are shown, other chat channel headings are possible. For example, some examples may include a chat channel heading that displays, on the client device, only those recently accessed channels of which the user associated with the client device is a member. The sidebar308may also include one or more combinatory headings, such as starred combinatory heading310. A combinatory heading may aggregate one or more messages from one or more chat channels, according to a predetermined criterion. The combinatory headings may include a link that, in response to a user command, causes the client device to display one or more messages in the chat window350. The messages may be gathered from one or more chat channels, such as the chat channels312or316, and displayed based on predetermined criteria. InFIG.3, for example, the starred combinatory heading310may gather only those messages that have been marked by a user of the client device. The marked messages may be stored at the client device, and/or may be stored at the chat and video conference provider. The link may cause the one or more processors included on the client device to determine which messages are marked messages and cause them to be displayed in the chat window350. 
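A minimal sketch of how combinatory headings such as the starred heading may be evaluated as predicates over messages gathered from one or more chat channels follows; the message fields and the heading names beyond those described above are assumptions for illustration.

```typescript
// Illustrative sketch only: combinatory headings as predicate-based filters
// over messages gathered from one or more chat channels (hypothetical types).
interface Message {
  id: string;
  channelId: string;
  starred: boolean; // marked by the user of the client device
  read: boolean;
  hasFile: boolean;
}

type Criterion = (m: Message) => boolean;

const combinatoryHeadings: Record<string, Criterion> = {
  starred: m => m.starred,   // only messages marked by the user
  unread: m => !m.read,      // assumed "unread" heading
  allFiles: m => m.hasFile,  // assumed "all files" heading
};

// Determine which messages meet the predetermined criterion and should be
// displayed in the chat window.
function gather(messages: Message[], heading: keyof typeof combinatoryHeadings): Message[] {
  return messages.filter(combinatoryHeadings[heading]);
}
```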
In some examples, the link may cause the client device to send a signal to the chat and video conference provider. The chat and video conference provider may then determine which messages are marked messages and send information to the client device to generate a display of the marked messages in the chat window350. Other combinatory headings (and associated links and functionality) are also contemplated. Other examples may include an unread heading, an all files heading, a contact request heading, and others. As with the starred combinatory heading310, an associated link may cause the client device and/or the chat and video conference provider to determine which messages (if any) meet predetermined criteria associated with the combinatory heading and subsequently display those messages on the client device. As depicted, a viewing participant of the master chat panel300may select to access the chat channel316for the Design Team. Upon selection of the chat channel316, the chat window350may be provided on the master chat panel300. The chat window350may include the chat control dashboard320. The chat control dashboard320may display one or more control buttons and/or information regarding the chat channel316(e.g., the currently viewed chat channel). The control buttons may include links that mark a message (e.g., to mark it such that it is determined to be a marked message via the starred combinatory heading310), begin a video conference, schedule a meeting, create a video message, or perform other tasks. The chat control dashboard may also include a title of the chat channel316currently being displayed on the client device, such as the “Design Team Channel” as depicted, and/or a number of users with access to the chat channel316. One of ordinary skill in the art would recognize many different possibilities and configurations. The chat window350may also include a reply panel324. The reply panel324may include an input field323, where the member can input a message and select to send the message to the chat channel316. The input field323may be accessed by a peripheral device such as a mouse, a keyboard, a stylus, or any other suitable input method. In some examples, the input field323may be accessed by a touchscreen or other system built into the client device. In some examples, a notification may be sent from the client device and/or the chat and video conference provider that indicates a response is being entered into the input field323by the user. In other examples, no notification may be sent. The reply dashboard326may include one or more buttons that, in response to a user command, edit or modify a response input into the input field323. For example, a record button may be provided that allows the client device to capture audio and video. In other examples, there may be a share button that causes the client device to send the message to a different chat channel. In yet another example, there may be a reaction button that causes an image to be sent by the client device to the chat channel in response to a message posted in the chat channel. In some examples, there may be one or more formatting buttons included on the reply dashboard326. The one or more formatting buttons may change the appearance of a reply entered in the input field323. The user may thereby edit and customize their response in the input field323before sending. The reply dashboard326may include a send button328. 
The send button328may, in response to a user command, cause the client device to send the contents of the input field323(or “message”) to the other members of the chat channel316. The client device may transmit the message to the chat and video conference provider210, which may in turn transmit the message to the client devices associated with the other members of the chat channel316. Upon transmission of the message via the send button328, the message may be published within a chat messaging panel322. As noted above, messages exchanged within the chat channel316may include image files, such as JPEG, PNG, TIFF, or files in any other suitable format; video files, such as MPEG, GIF, or video files in any other suitable format; or text entered into the input field323and/or other files attached to the message, such as a PDF, DOC, or other file format. As illustrated, the chat window350may include the chat messaging panel322and a spotlight panel330. The chat messaging panel322may display messages as they are exchanged between members of the chat channel316. The messages may be displayed in the chat messaging panel322in real-time. The chat messaging panel322may include all messages that are exchanged within the chat channel316since the generation of the chat channel316. As can be appreciated, by holding all messages that are exchanged between members of the chat channel316, the chat messaging panel322may include a large volume of messages. A large volume of messages may be generated not only if the chat channel316is active for a long duration of time or includes a large number of members, but also if the members of the chat channel316are highly communicative. When the chat messaging panel322includes a large volume of messages, it can be difficult for members of the chat channel316to identify or easily view relevant or important content. For example, a message that is important to the members of the chat channel316may be exchanged on a Monday and by Wednesday there may be so many messages exchanged after the important message is posted in the chat messaging panel322that a member cannot find the important message without spending time and effort sorting through the content of the chat messaging panel322. To highlight relevant or important content of the chat channel316for members, one or more spotlight cards332-336may be generated and added to the chat channel316. As illustrated, the chat channel316includes three spotlight cards: a first spotlight card332, a second spotlight card334, and a third spotlight card336. The spotlight cards332,334, and336may be generated within a spotlight panel330. The spotlight panel330may be positioned proximate to the chat messaging panel322so that members can continuously view the spotlight cards332,334, and336as they view messages within the chat messaging panel322. In some embodiments, the spotlight panel330may be a persistent panel within the chat window350, meaning that the spotlight panel330is always present when a member is in the chat channel316. For example, a member may scroll through the messages in the chat messaging panel322but the spotlight panel330may maintain its position within the chat window350. Each of the spotlight cards332,334, and336may include spotlight content. Spotlight content may include content that is important or relevant to the chat channel316. 
For example, in some embodiments, the spotlight content may include an important message that was previously exchanged within the chat channel316that a host or other member with authorization wants to highlight as important to the other members of the chat channel316. In some embodiments, the spotlight content may include content from a resource external to the chat channel316. A resource external to the chat channel316may include an application or information that is not available directly from the chat channel316. For example, the spotlight card334may include Q4 sales numbers from a finance application. The finance application may be separate from the chat channel316and thus a resource external to the chat channel316(e.g., an external resource). As another example, the spotlight card336may include information on a leaderboard for the Design Team. The spotlight card336may include information on the leaders of the Design Team, such as contact information or profile information for the leaders. In some embodiments, the spotlight content of the spotlight card336(e.g., the contact information or profile information) may be pulled from contact or profile information for each of the respective leaders stored with the chat and video conference provider. Since the contact or profile information is stored separate from the chat channel316, the contact or profile information may be considered to be an external resource. In some embodiments, a widget or link may be used to access and pull content from an external resource (such as the contact or profile information from the chat and video conference provider or the finance information from the finance application). Examples of external resources may include applications, websites, or content hosted by the video conference provider210, such as a calendar or email application hosted by the video conference provider210; applications, websites, or content not hosted by the video conference provider210, such as a word processing application, an email service hosted by a third party, or a news website; or content or features that are native to the video conference provider210, such as contact or profile information associated with members that are part of the video conference provider210. To generate a new spotlight card, a member of the chat channel316may select a “plus” button338within the spotlight panel330. For example, the member may use a cursor340to select the plus button338. In some embodiments, upon selecting the plus button338, a window344may be presented. The window344may provide an option to add a spotlight card and a settings option. The settings option may provide various options for generating a spotlight card. For example, settings may include which members of the chat channel316have authorization to generate a spotlight card. In some embodiments, only members with host controls, such as a host, co-host, or a member assigned host controls over the chat channel316, may be able to generate a spotlight card. In other embodiments, instead of the button338, a spotlight card may be generated from content exchanged within the chat messaging panel322. For example, a member may select the content, right click, and be provided with an option to “add as spotlight card.” In another scenario, a member may drag and drop content into the spotlight panel330to generate a spotlight card. 
In still another scenario, if a member is adding an application to the chat channel316, the member may be prompted with an option to add the application or content from the application as a spotlight card. Spotlight cards may be generated based on predetermined criteria within the chat channel316. For example, if a thread within the chat channel316has more than a threshold number of replies, then a spotlight card may be generated based on this thread for quick access by the chat channel members. In some embodiments, a member may share a generated spotlight card with other members of the chat channel316. In some cases, the spotlight card may be shared with individual members or with the entire chat channel316. This can allow a generating member to limit the sharing of a spotlight card if, for example, the spotlight card contains sensitive information/data. In some embodiments, when generating a spotlight card, the generating member may select the “add spotlight” option on the window344. From there, the generating member may be presented with a prompt (not shown) to identify the external resource from which the spotlight content of the spotlight card may be drawn. The generating member may then indicate the external resource and the spotlight content within the external resource, in some cases navigating into the external resource to identify the spotlight content for the spotlight card. For example, to generate the spotlight card334, the generating member may indicate that the finance application is the external resource and may navigate into the finance application to indicate that the Q4 sales figures are the spotlight content for the spotlight card. In some embodiments, the spotlight content of the spotlight card may include a preview of the content of an external resource, such as a preview of a shared document, or the spotlight content may be specific content within the external resource, such as only the Q4 sales figures from the finance application. A generating member may indicate the spotlight content when generating a spotlight card. The spotlight content of each of the spotlight cards332,334, and336may update as the content of the external resource updates. For example, if the Q4 sales figures of the spotlight content associated with the spotlight card334update in the finance application, the spotlight card334may update to reflect the most recent Q4 sales figures. In some embodiments, the spotlight card334in a simplified view, as illustrated byFIG.3, displays a quick view of the spotlight content. For example, the simplified view of the spotlight card334may display a header of what the spotlight content is, such as “Q4 Sales.” As will be described in greater detail below with respect toFIGS.4-6, in other embodiments, the spotlight card334may provide more spotlight content when in an expanded view or detailed view. In some embodiments, a display of the spotlight cards332,334, and336may visually change to indicate an update of the corresponding spotlight content. For example, in some embodiments, the spotlight card332may change color or size, may toggle, may include a notification bubble360, or otherwise visually change to indicate an update to the spotlight content. Following the example spotlight card332, if the spotlight content of the spotlight card332is a poll for lunch options, then the spotlight card332may visually change to indicate a poll count change as members of the chat channel316vote on the various lunch options. 
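As a non-limiting illustration of pulling spotlight content from an external resource via a widget-like accessor, and of updating a card as that content changes, consider the following sketch; the accessor interface and identifiers are hypothetical.

```typescript
// Illustrative sketch only: generating a spotlight card whose content is
// pulled from an external resource, and refreshing it as the resource updates.
interface ExternalResource {
  id: string;
  // e.g., a path such as "sales/q4" within a finance application (assumed).
  fetchContent(path: string): Promise<string>;
}

interface SpotlightCard {
  id: string;
  title: string;
  content: string;
  sourceId: string; // the external resource the content is drawn from
}

let nextId = 0;

async function generateSpotlightCard(
  resource: ExternalResource,
  path: string,
  title: string,
): Promise<SpotlightCard> {
  const content = await resource.fetchContent(path); // pull content from the external resource
  return { id: `card-${nextId++}`, title, content, sourceId: resource.id };
}

async function refreshSpotlightCard(
  card: SpotlightCard,
  resource: ExternalResource,
  path: string,
): Promise<void> {
  // The card updates to reflect the most recent content of the external resource.
  card.content = await resource.fetchContent(path);
}
```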
In some embodiments, a notification bubble360may be provided on the spotlight card332to indicate that the poll is closing soon or that the viewing member has not completed the polling questions corresponding to the spotlight content. As depicted, the spotlight cards332,334, and336may be positioned within the spotlight panel330. As can be appreciated, if there are numerous spotlight cards, not all of the spotlight cards332,334, and336may be visible within the spotlight panel330at one time. If there is a large number of spotlight cards, the spotlight panel330may include a scroll bar (not shown) for viewing off-screen spotlight cards. In some embodiments, the spotlight cards332,334, and336(and other spotlight cards if present) may be positioned within the spotlight panel330based on a priority. A priority of a given spotlight card may be based on a number of factors, such as how recently the spotlight card was generated (e.g., the more recent cards may have a high priority for placement within the spotlight panel330), a time sensitivity of the spotlight content of a given spotlight card (e.g., spotlight content that is a poll or questionnaire timing out soon may have a high priority), or the member who generated the spotlight card (e.g., if a host generated the spotlight card, then the spotlight card may have a higher priority than a spotlight card generated by a co-host or a member granted one-time authority to generate a spotlight card). In other embodiments, the priority of a spotlight card may be determined based on an interaction level of the spotlight card. For example, if members interact with the spotlight card334more often within a predetermined time duration than they interact with the spotlight card336, then the spotlight card334may have a higher priority than the spotlight card336. If, over time, the spotlight card334is interacted with less by members than the spotlight card336, then the spotlight card334may be determined to have a lower priority than the spotlight card336and the placement of the spotlight card334within the spotlight panel330may be changed. The priority of a spotlight card may be used to determine the placement of the spotlight card within the spotlight panel330. For example, the higher the priority of a spotlight card, the higher the placement of the spotlight card may be within the spotlight panel330. A higher placement of the spotlight card may mean that the spotlight card is closer to a focus of the spotlight panel330. For example, per the illustration ofFIG.3, the further to the left within the spotlight panel330that a spotlight card is placed, the higher the visibility of the spotlight card may be to members of the chat channel316. If a spotlight card has a lower priority, then the spotlight card may be placed further to the right within the spotlight panel330, meaning that the spotlight card may be placed off-screen if there are numerous spotlight cards having a higher priority. As depicted, the spotlight card332may have a higher priority than the spotlight card334, and the spotlight card334may have a higher priority than the spotlight card336. In still another example, a generating member of a spotlight card may pin a spotlight card at a placement within the spotlight panel330. The pinning of a spotlight card's placement may supersede any other factors that are used to determine a priority of the spotlight card (e.g., interaction level, time sensitivity). 
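One possible way to combine the priority factors named above, with a pinned placement superseding the computed priority, is sketched below. The weights, the time constants, and the simplification of placing pinned cards leftmost are assumptions for illustration, not values from this disclosure.

```typescript
// Illustrative sketch only: a hypothetical priority score combining recency,
// time sensitivity, generator role, and interaction level.
interface CardMeta {
  createdAt: Date;
  expiresAt?: Date; // e.g., a poll or questionnaire that is timing out soon
  generatorRole: "host" | "co-host" | "member";
  interactions: number; // interaction count within a predetermined time duration
  pinnedIndex?: number; // set when a generating member pins the card's placement
}

const MS_PER_HOUR = 3.6e6;

function priorityScore(c: CardMeta, now: Date = new Date()): number {
  let score = c.interactions;
  // Newer cards score higher; the contribution decays by age in hours.
  score += Math.max(0, 100 - (now.getTime() - c.createdAt.getTime()) / MS_PER_HOUR);
  // Time-sensitive content (expiring within an assumed 24 hours) scores higher.
  if (c.expiresAt && c.expiresAt.getTime() - now.getTime() < 24 * MS_PER_HOUR) score += 50;
  if (c.generatorRole === "host") score += 30;
  else if (c.generatorRole === "co-host") score += 15;
  return score;
}

// Higher scores are placed further to the left (closer to the panel's focus).
// Pinned cards supersede the computed priority; as a simplification, this
// sketch places them leftmost in their pinned order.
function orderCards<T extends CardMeta>(cards: T[]): T[] {
  const pinned = cards
    .filter(c => c.pinnedIndex !== undefined)
    .sort((a, b) => a.pinnedIndex! - b.pinnedIndex!);
  const rest = cards
    .filter(c => c.pinnedIndex === undefined)
    .sort((a, b) => priorityScore(b) - priorityScore(a));
  return [...pinned, ...rest];
}
```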
For example, if a host pins the spotlight card332in the highest priority placement position (here the furthest to the left within the spotlight panel330), then the spotlight card332may stay in this placement regardless of a determined priority. That is, the spotlight card332may stay in the highest priority placement position regardless of whether the spotlight card334or the spotlight card336is determined to have a higher priority. As noted above, the spotlight panel330may be expandable so as to provide a viewing member with an expanded view of the spotlight content of each of the spotlight cards332,334, and336. For example, if the viewing member wanted to see a listing of the lunch options provided by the spotlight card332, the viewing member may select the expand button342. Upon selection, the expand button342may expand the spotlight panel330so as to provide a more detailed view of the spotlight cards332,334, and336. Referring now toFIG.4, an example master chat panel400including an expanded spotlight panel430is illustrated, according to an embodiment herein. The master chat panel400may be the same or similar to the master chat panel300. Similar numbering is used to indicate the same or similar components ofFIGS.3and4. For example, a dashboard404may be the same or similar to the dashboard304, including a chat button406. As depicted, a chat channel416may be selected via the sidebar408. The chat channel416may be accessed via a chat window450. The chat window450may include a chat messaging panel422and a spotlight panel430, which may be the same or similar to the chat messaging panel322and the spotlight panel330, respectively, discussed with reference toFIG.3. The spotlight panel430may be an expanded view of the spotlight panel330. For example, the spotlight panel430may be larger than the spotlight panel330. When expanded, the spotlight panel430may provide an expanded view of spotlight cards432,434, and436. The spotlight cards432,434, and436may be the same or similar to the spotlight cards332,334, and336. In the expanded view, the spotlight cards432,434, and436may provide information on the spotlight content of a given spotlight card. For example, the spotlight card432may provide a listing of the lunch options, whereas the spotlight card332in the simplified view shown inFIG.3did not provide such a listing. Similarly, the spotlight card434may provide a preview of the Q4 sales numbers, whereas the spotlight card334did not provide the preview. And finally, the spotlight card436may provide the name and title of the leaders on the leaderboard, whereas the spotlight card336did not provide this information. In some embodiments, one or more of the spotlight cards432,434, or436may include historical information of the spotlight card. For example, the spotlight card432may include a historical polling result for previous lunch options. In some embodiments, a spotlight card may be interactive. That is, a member of the chat channel416can interact with the spotlight content of a given spotlight card. For example, a viewing member may vote on a lunch option of the spotlight content provided in the spotlight card432. As depicted, a viewing member may select Deli Subs using a selection444within the spotlight card432. This edit or input by a viewing member from the chat channel416may be transmitted to the external resource and update the spotlight content within the external resource. 
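As a non-limiting illustration of such an interaction, the sketch below shows a poll vote being transmitted to the external resource and the updated content flowing back; the polling interface and its method names are hypothetical.

```typescript
// Illustrative sketch only: propagating a member's interaction with a
// spotlight card (e.g., a poll vote) back to the external resource.
interface PollResource {
  castVote(optionId: string, memberId: string): Promise<void>;
  results(): Promise<Record<string, number>>; // option -> vote count
}

async function voteViaSpotlightCard(
  poll: PollResource,
  optionId: string,
  memberId: string,
): Promise<Record<string, number>> {
  // The edit updates the spotlight content within the external resource itself,
  await poll.castVote(optionId, memberId);
  // and the updated counts flow back to every member's view of the card.
  return poll.results();
}
```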
In another example, if the viewing member adds another lunch option such as “Wings” to the listing provided by the spotlight card432, then the lunch option “Wings” may be added to the lunch options within the external resource, which may be a polling application. A user who may or may not be part of the chat channel416who is viewing the lunch options using the polling application, thus not through the chat channel416, may then see the lunch option “Wings” that was added via the spotlight card432. In some embodiments, the spotlight panel430may be customizable. That is, a host or other member with authorization to generate spotlight cards may customize the appearance of the spotlight panel430. For example, as illustrated, a host may modify or customize the background446of the spotlight panel430. Customizing the spotlight panel430may improve the user experience of the chat channel416by fitting the spotlight panel430to the theme or character of the chat channel416. If the chat channel416is for an environmental project, then the background446may be changed to a nature themed picture to set the tone of the chat channel416. In some embodiments, changing the background446of the chat channel416may also help orient members, with a visual cue, as to which chat channel he or she is in (e.g., a member may recognize the nature background446and readily know he or she is in the chat channel416for the environmental project). Customizing the spotlight panel430may customize the spotlight panel430for all members of the chat channel416. That is, if a host customizes the spotlight panel430by changing the background446, then the spotlight panel430for every member who accesses the chat channel416may be visually the same. A member, however, may be able to personally modify the spotlight panel430based on his or her preferences such that the spotlight panel430has a certain appearance only for that member's view of the chat channel416. Referring now toFIG.5, another example master chat panel500including an expanded spotlight panel530is illustrated, according to an embodiment herein. The master chat panel500may be the same or similar to the master chat panels300or400. Similar numbering is used to indicate the same or similar components ofFIGS.3-5. For example, a dashboard504may be the same or similar to the dashboard304or404, including a chat button506. As depicted, a chat channel516may be selected via the sidebar508. The chat channel516may be accessed via a chat window550. The chat window550may include a chat messaging panel522and a spotlight panel530, which may be the same or similar to the chat messaging panel322or422, and the spotlight panel330or430, respectively, as discussed with reference toFIGS.3and4. The spotlight panel530may be an expanded view of the spotlight panel330. For example, the spotlight panel530may be larger than the spotlight panel330. As discussed above, when expanded, the spotlight panel530may provide an expanded view of spotlight cards532,534, and536. The spotlight cards532,534, and536may be the same or similar to the spotlight cards432,434, and436, respectively. As illustrated, the spotlight panel530may be repositioned to be on the right-hand side of the chat messaging panel522. A member of the chat channel516may personalize a position and size of the spotlight panel530. That is, the member can reposition the spotlight panel530and resize the spotlight panel530as desired when within the chat channel516. 
When a member personalizes the position and size of the spotlight panel530, the positioning and size of the spotlight panel530may be specific to that member's display of the chat channel516. The spotlight panel530may remain in an original position and size on the other members' chat channel516display, depending on each other member's personalization of the spotlight panel530. To reposition or resize the spotlight panel530, a viewing member may select the spotlight panel530and move the spotlight panel530around the master chat panel500as desired. For example, the viewing member may select the edges548of the spotlight panel530to change the size of the spotlight panel530or move the position of the spotlight panel530. Those skilled in the art may readily appreciate the various methods that can be used to resize or reposition the spotlight panel530. As the spotlight panel530is resized or repositioned, the spotlight cards532,534, and536may change sizes or positions to fit the spotlight panel530. In some embodiments, the spotlight cards532,534, and536may maintain their placement within the spotlight panel530based on the priority of each card. Referring now toFIG.6, an example master chat panel600having a detailed view of a spotlight card634is provided, according to an embodiment herein. The master chat panel600may be the same or similar to the master chat panels300,400, or500. Similar numbering is used to indicate the same or similar components ofFIGS.3-6. For example, a dashboard604may be the same or similar to the dashboard304,404, or504, including a chat button606. As depicted, a chat channel616may be selected via the sidebar608. The chat channel616may be accessed via a chat window650. The chat window650may include a chat messaging panel622and a spotlight panel630, which may be the same or similar to the chat messaging panel322,422, or522, and the spotlight panel330,430, or530, respectively, as discussed with reference toFIGS.3-5. The spotlight panel630may be expanded so as to provide a detailed view of the spotlight card634, which may be the same or similar to the spotlight card334. For example, the spotlight panel630may be larger than the spotlight panel330. To access the detailed view of the spotlight card634, a member may select the spotlight card634or may expand the spotlight panel630until the detailed view of the spotlight card634is provided. When providing a detailed view, the spotlight card634may provide more details of the spotlight content than the expanded view or simplified view of the spotlight card634. As illustrated, when providing the detailed view, the spotlight card634may provide more spotlight content than the expanded view of the spotlight card534or the simplified view of the spotlight card334. The detailed view can allow a member to review details of the spotlight content from the spotlight card634without leaving the chat channel616. This can allow a member to continue communicating with the other chat channel members in the chat messaging panel622while having access to the spotlight content of a desired external resource via the spotlight card634. For example, if another member in the chat channel616requests specific information from a finance application associated with the spotlight card634, a member does not have to leave the chat channel616to access and review the requested information in the finance application. Instead, the member can select the spotlight card634and review the spotlight content in the detailed view. 
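A minimal sketch of how the amount of spotlight content might scale across the simplified, expanded, and detailed views described above follows; the content fields are assumptions for illustration.

```typescript
// Illustrative sketch only: selecting how much spotlight content to show for
// each view level of a spotlight card (hypothetical content model).
type ViewLevel = "simplified" | "expanded" | "detailed";

interface SpotlightContent {
  header: string;  // e.g., "Q4 Sales"
  summary: string; // e.g., a preview of the figures
  details: string; // e.g., the full figures for review without leaving the channel
}

function render(content: SpotlightContent, level: ViewLevel): string {
  switch (level) {
    case "simplified":
      return content.header; // quick view: only a header of what the content is
    case "expanded":
      return `${content.header}\n${content.summary}`; // adds a preview
    case "detailed":
      return `${content.header}\n${content.summary}\n${content.details}`; // full details
  }
}
```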
In some embodiments, a member can edit the spotlight content of the external resource from the spotlight card634. For example, if the member receives the latest project updates via the chat messaging panel622, the member can update the Q4 sales information in the finance application without leaving the chat channel616. That is, the member can select and edit the spotlight content of the finance application from the spotlight card634. Again, this can allow chat channel members to access and edit content from external resources without leaving the chat channel. In some embodiments, the amount of spotlight content provided to a member via the spotlight card634may depend on the permissions level of the member, specifically the permissions level of the member with respect to the external resource. For example, if the finance application associated with the spotlight content of the spotlight card634requires a subscription, then the amount of spotlight content that a member can view via the spotlight card634may depend on that member's subscription status with the finance application. In some embodiments, the amount of spotlight content may be based on the generating member's subscription status. For example, if the member who generates the spotlight card634has a subscription, then there may not be a limit to the spotlight content provided on the spotlight card634, regardless of the other members' subscription status. In contrast, in another embodiment, if a viewing member does not have a subscription to the finance application, then the viewing member may only be provided a preview of the spotlight content on the spotlight card634. It should be appreciated that subscription status may also include a permissions level or access level to an external resource. Referring now toFIG.7, an example master chat panel700including a floating spotlight card734is illustrated, according to an embodiment herein. The master chat panel700may be the same or similar to the master chat panels300-600. Similar numbering is used to indicate the same or similar components ofFIGS.3-7. For example, a dashboard704may be the same or similar to the dashboard304,404,504, or604, including a chat button706. As depicted, a chat channel716may be selected via the sidebar708. The chat channel716may be accessed via a chat window750. The chat window750may include a chat messaging panel722and a spotlight panel730, which may be the same or similar to the chat messaging panel322,422,522, or622, and the spotlight panel330,430,530, or630, respectively, as discussed with reference toFIGS.3-6. In some embodiments, a member may create a floating spotlight card734. As noted above, a member can personalize the size and position of the spotlight panel730. Similarly, a member can personalize the size and position of a spotlight card, such as by creating a floating spotlight card734. A floating spotlight card734may be a spotlight card that is positioned or placed outside of the spotlight panel730, such as next to content within the chat messaging panel722. In some embodiments, a member may pin or fix the floating spotlight card734next to a chat message within the chat messaging panel722, meaning that the floating spotlight card734may remain next to the identified chat message regardless of where the member navigates to within the chat messaging panel722. 
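As a non-limiting illustration of pinning a floating spotlight card proximate to a chat message until it is returned to the spotlight panel, consider the following sketch; the pin model and its field names are hypothetical.

```typescript
// Illustrative sketch only: a floating spotlight card pinned proximate to a
// specific chat message, visible to all members when pinned by a host.
interface FloatingPin {
  cardId: string;
  anchorMessageId: string; // the card stays next to this message while the member navigates
  forAllMembers: boolean;  // true when a host or other authorized member pins it
}

const pins: FloatingPin[] = [];

function pinCard(cardId: string, messageId: string, pinnedByHost: boolean): void {
  pins.push({ cardId, anchorMessageId: messageId, forAllMembers: pinnedByHost });
}

function unpinCard(cardId: string): void {
  const i = pins.findIndex(p => p.cardId === cardId);
  if (i >= 0) pins.splice(i, 1); // the card returns to the spotlight panel
}
```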
In another embodiment, instead of a member personalizing a spotlight card to generate the floating spotlight card734, including pinning the floating spotlight card734proximate to content within the chat messaging panel722, a host or other authorized member of the chat channel716may generate the floating spotlight card734. When the host or other authorized member generates the floating spotlight card734, then the floating spotlight card734may be present for all members of the chat channel716. For example, if the host pins the floating spotlight card734within the chat messaging panel722next to another member's request for the most recent finance numbers, then the floating spotlight card734may be pinned next to that chat message on all members' chat messaging panel722. This can allow the relevant spotlight cards to be placed and remain next to relevant content within the chat messaging panel722. In some embodiments, the floating spotlight card734may remain in the pinned position until returned to the spotlight panel730. In some embodiments, a chat channel member may be a multi-channel member involved in multiple chat channels. As such, the multi-channel member may be exposed to a high volume of messages and content from the various chat channels. Even if relevant or important content is highlighted in a spotlight window for each chat channel, the multi-channel member may still miss important content if he or she does not access each of the individual chat channels to review the spotlight cards. To provide multi-channel members easy access to and review of spotlight cards and relevant information within multiple chat channels, a spotlight home page may be provided. Referring now toFIG.8, an example spotlight home panel800is provided, according to an embodiment herein. The spotlight home panel800may aggregate spotlight cards from different chat channels of which a given user is a member into a simple display for quick and easy review by the multi-channel member. For example, per the illustrated example, a multi-channel member may be part of a chat channel812, a chat channel814, and a chat channel816. The spotlight home panel800can allow the multi-channel member to customize how he or she views respective content from each of the chat channels812,814, and816, and in some cases, share the spotlight home panel800or the content therein with other members. For each of the chat channels812,814, and816, the spotlight cards from each respective channel may be provided on the spotlight home panel800. For example, the spotlight card830may be provided for the chat channel812, the spotlight card831may be provided for the chat channel814, and the spotlight cards832,834, and836may be provided for the chat channel816. The chat channel816may be the same or similar to the chat channel316, and the spotlight cards832,834, and836may be the same or similar to the spotlight cards332,334, or336, respectively. The spotlight cards830and831may be similar to the spotlight cards332,334, or336. For example, the spotlight cards830-836may update in real-time as the spotlight cards update in the respective chat channels. Additionally, the multi-channel member may be able to interact with the spotlight cards, as discussed above, by, for example, expanding each spotlight card to an expanded view or detailed view. The multi-channel member can also edit the spotlight content of the spotlight cards830-836as described above. 
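A minimal sketch of aggregating a multi-channel member's spotlight cards from every channel of which they are a member onto a single home display follows; the card model is hypothetical.

```typescript
// Illustrative sketch only: aggregating spotlight cards across chat channels
// for a spotlight home panel (hypothetical model).
interface HomeCard {
  cardId: string;
  channelId: string;
  title: string;
}

function buildSpotlightHome(
  memberChannels: string[],                 // channels the member belongs to
  panelsByChannel: Map<string, HomeCard[]>, // each channel's spotlight panel
): HomeCard[] {
  const home: HomeCard[] = [];
  for (const channelId of memberChannels) {
    // Cards on the home panel mirror their source channels and would update
    // in real time as those source panels update.
    home.push(...(panelsByChannel.get(channelId) ?? []));
  }
  return home;
}
```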
Additionally, the spotlight home panel800may identify and display content in each chat channel that is relevant to the multi-channel member. For example, the spotlight home panel800may identify and display spotlight threads840,842, and844(e.g., a thread having a high number of replies or messages), or messages852,854, and856that mention the multi-channel member. As depicted, the chat channel812may have a spotlight thread840which has a high number of messages or comments within that thread. Similarly, the chat channel814may include a spotlight thread842and the chat channel816may include a spotlight thread844. A spotlight thread may be identified if the comments of a thread exceed a threshold number of comments or if it is a thread with the most recent comments. In some embodiments, the multi-channel member may indicate that any threads containing comments by a selected member should be identified as a spotlight thread. For example, if the team lead comments on any thread within the chat channel816, then that thread may be determined to be a spotlight thread and the thread may be provided on the spotlight home panel800. In some embodiments, content of a chat channel that references the multi-channel member may be highlighted on the spotlight home panel800. For example, a mentions panel850may be provided on the spotlight home panel800. The mentions panel850may include messages852,854, and856, which each include a mention or reference to the multi-channel member. By providing each of these messages852,854, and856on the spotlight home panel800, the multi-channel member can be notified of the mention and have easy access to respond to the message if desired. In some embodiments, the multi-channel member can respond to or interact with any of the spotlight threads840-844or messages852-856from the spotlight home panel800. In other embodiments, the multi-channel member can select the spotlight threads840-844or messages852-856and be automatically directed to the associated chat channel, specifically to the selected content within the associated chat channel. For example, if the multi-channel member selects the message852, and the message852is from the design team chat channel816, then upon selection the multi-channel member may be directed to the message852within the chat channel816. Referring now toFIG.9, a flowchart of an example method900for providing spotlight cards within a chat channel is provided. The description of the method900inFIG.9will be made with reference toFIGS.3-8; however, any suitable system according to this disclosure may be used, such as the example systems100and200, shown inFIGS.1and2. The method900may include steps905and910. At step905, a first chat channel for exchanging chat messages between client devices may be established. For example, the chat and video conference provider210may establish the chat channel316between a plurality of client devices, such as the client devices220-250. At step910, an indication to generate a first spotlight card within the first chat channel may be received. For example, the chat and video conference provider210may receive an indication to generate a first spotlight card, such as the spotlight card332, within the first chat channel, such as the chat channel316. The indication may identify spotlight content from a first resource external to the first chat channel. 
Example resources external to a chat channel may include an application external to the video conference provider, an application hosted by the video conference provider, or one or more features native to the video conference provider. In some embodiments, prior to generating the first spotlight card, the video conference provider may determine an authorization setting associated with the first client device for the first chat channel. Then, the video conference provider may generate the first spotlight card based on the authorization setting associated with the first client device. The method900may also include steps915and920. At step915, a first spotlight card may be generated. The first spotlight card may identify the spotlight content. In some embodiments, step915may further include accessing the first resource external to the first chat channel, identifying the spotlight content responsive to the indication to generate the first spotlight card, and generating the first spotlight card based on the spotlight content from the first resource external to the first chat channel. At step920, the first spotlight card may be transmitted to one or more of the client devices connected to the first chat channel for display within a spotlight panel of the first chat channel. The spotlight panel may be positioned proximate to a chat messaging panel including chat messages posted to the first chat channel. In some embodiments, the method900may further include updating, by the video conference provider, the first spotlight card within the spotlight panel. For example, the video conference provider may receive updated spotlight content from the first resource external to the first chat channel and update the first spotlight card with the updated spotlight content. In another example, the video conference provider may receive a selection of the first spotlight card within the spotlight panel and provide a detailed view of the first spotlight card to the first client device. In still another example, one or more edits to the spotlight content of the first spotlight card may be received from a first client device and the video conference provider may update the spotlight content of the first spotlight card based on the one or more edits. In still a further example, the first spotlight card may be modified to visually indicate a status change of an application corresponding to the first resource external to the first chat channel. In some embodiments, generating the first spotlight card of step915may include establishing a placement in the first chat channel for the first spotlight card. In such cases, the method900may further include receiving, by the video conference provider, an indication to pin the first spotlight card proximate to content in the chat messaging panel, changing, by the video conference provider, the placement of the first spotlight card from the spotlight panel to a position proximate to the content in the chat messaging panel, and pinning, by the video conference provider, the first spotlight card to the position proximate to the content in the chat messaging panel such that the first spotlight card remains in the position proximate to the content in the chat messaging panel until a removal indication is received. In some embodiments, the method900may further include receiving, by the video conference provider, an indication to generate a second spotlight card within the first chat channel. 
The second spotlight card may include second spotlight content from a second resource external to the first chat channel. The second resource may be different than the first resource associated with the first spotlight card. Responsive to the indication, the video conference provider may generate the second spotlight card identifying the second spotlight content from the second resource and transmit the second spotlight card to one or more client devices of the first chat channel for display within the spotlight panel. Optionally, a first priority of the first spotlight card and a second priority of the second spotlight card may be determined and a placement of the first spotlight card within the spotlight panel may be modified based on the first priority or the second priority. As described above, the first priority and second priority may be determined by a first interaction level for the first spotlight card and a second interaction level for the second spotlight card, respectively. The first interaction level and the second interaction level may be compared to determine the first priority and the second priority. The method900may further include receiving, from a first client device, an indication to modify the spotlight panel, modifying, by the video conference provider, the spotlight panel based on the indication, and modifying, by the video conference provider, the first spotlight card within the spotlight panel based on the modification of the spotlight panel. For example, the video conference provider may expand the first spotlight card to provide a detailed view of the spotlight content of the first spotlight card. In some embodiments, the method900may further include receiving, from a first client device, an indication to move the first spotlight card from the first chat channel, wherein the plurality of client devices comprise the first client device, accessing, by the first client device, a second chat channel, wherein the second chat channel is established by the video conference provider, receiving, from the first client device, an indication to add the first spotlight card to a second spotlight panel of the second chat channel, and modifying, by the video conference provider, the second spotlight panel of the second chat channel to include the first spotlight card. 
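Finally, as a non-limiting sketch tying together the steps of the method900(establishing a channel, generating a card from an indication subject to an authorization setting, and moving a card to a second channel's spotlight panel), consider the following; the service class and its names are hypothetical.

```typescript
// Illustrative sketch only: a hypothetical provider-side flow loosely
// following the steps of the method900.
interface CardIndication {
  channelId: string;
  resourceId: string;  // the external resource the spotlight content is drawn from
  contentRef: string;  // what within the resource is the spotlight content
  requestedBy: string;
}

interface Card {
  id: string;
  channelId: string;
  resourceId: string;
  contentRef: string;
}

class SpotlightService {
  private panels = new Map<string, Card[]>(); // channelId -> spotlight panel
  private nextId = 0;

  // Step 905: establish a chat channel (and its spotlight panel).
  establishChannel(channelId: string): void {
    if (!this.panels.has(channelId)) this.panels.set(channelId, []);
  }

  // Steps 910-920: receive an indication, check an authorization setting,
  // generate the card, and place it in the panel for display.
  generateCard(
    ind: CardIndication,
    authorized: (userId: string, channelId: string) => boolean,
  ): Card | undefined {
    if (!authorized(ind.requestedBy, ind.channelId)) return undefined;
    const card: Card = {
      id: `card-${this.nextId++}`,
      channelId: ind.channelId,
      resourceId: ind.resourceId,
      contentRef: ind.contentRef,
    };
    this.panels.get(ind.channelId)?.push(card);
    return card;
  }

  // Moving a card to a second chat channel's spotlight panel; assumes the
  // second channel has already been established.
  moveCard(card: Card, toChannelId: string): void {
    const from = this.panels.get(card.channelId);
    const i = from?.indexOf(card) ?? -1;
    if (from && i >= 0) from.splice(i, 1);
    card.channelId = toChannelId;
    this.panels.get(toChannelId)?.push(card);
  }
}
```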
In some examples, the communications interface1030may enable communications using one or more networks, including a local area network (“LAN”); wide area network (“WAN”), such as the Internet; metropolitan area network (“MAN”); point-to-point or peer-to-peer connection; etc. Communication with other devices may be accomplished using any suitable networking protocol. For example, one suitable networking protocol may include the Internet Protocol (“IP”), Transmission Control Protocol (“TCP”), User Datagram Protocol (“UDP”), or combinations thereof, such as TCP/IP or UDP/IP. While some examples of methods and systems herein are described in terms of software executing on various machines, the methods and systems may also be implemented as specifically-configured hardware, such as a field-programmable gate array (FPGA) configured specifically to execute the various methods according to this disclosure. For example, examples can be implemented in digital electronic circuitry, or in computer hardware, firmware, software, or in a combination thereof. In one example, a device may include a processor or processors. The processor comprises a computer-readable medium, such as a random access memory (RAM) coupled to the processor. The processor executes computer-executable program instructions stored in memory, such as executing one or more computer programs. Such processors may comprise a microprocessor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), field programmable gate arrays (FPGAs), and state machines. Such processors may further comprise programmable electronic devices such as PLCs, programmable interrupt controllers (PICs), programmable logic devices (PLDs), programmable read-only memories (PROMs), electronically programmable read-only memories (EPROMs or EEPROMs), or other similar devices. Such processors may comprise, or may be in communication with, media, for example one or more non-transitory computer-readable media, which may store processor-executable instructions that, when executed by the processor, can cause the processor to perform methods according to this disclosure as carried out, or assisted, by a processor. Examples of non-transitory computer-readable media may include, but are not limited to, an electronic, optical, magnetic, or other storage device capable of providing a processor, such as the processor in a web server, with processor-executable instructions. Other examples of non-transitory computer-readable media include, but are not limited to, a floppy disk, CD-ROM, magnetic disk, memory chip, ROM, RAM, ASIC, configured processor, all optical media, all magnetic tape or other magnetic media, or any other medium from which a computer processor can read. The processor, and the processing, described may be in one or more structures, and may be dispersed through one or more structures. The processor may comprise code to carry out methods (or parts of methods) according to this disclosure. The foregoing description of some examples has been presented only for the purpose of illustration and description and is not intended to be exhaustive or to limit the disclosure to the precise forms disclosed. Numerous modifications and adaptations thereof will be apparent to those skilled in the art without departing from the spirit and scope of the disclosure. 
Reference herein to an example or implementation means that a particular feature, structure, operation, or other characteristic described in connection with the example may be included in at least one implementation of the disclosure. The disclosure is not restricted to the particular examples or implementations described as such. The appearance of the phrases “in one example,” “in an example,” “in one implementation,” or “in an implementation,” or variations of the same in various places in the specification does not necessarily refer to the same example or implementation. Any particular feature, structure, operation, or other characteristic described in this specification in relation to one example or implementation may be combined with other features, structures, operations, or other characteristics described in respect of any other example or implementation.

Use herein of the word “or” is intended to cover inclusive and exclusive OR conditions. In other words, A or B or C includes any or all of the following alternative combinations as appropriate for a particular usage: A alone; B alone; C alone; A and B only; A and C only; B and C only; and A and B and C.

EXAMPLES

These illustrative examples are mentioned not to limit or define the scope of this disclosure, but rather to provide examples to aid understanding thereof. Illustrative examples are discussed above in the Detailed Description, which provides further description. Advantages offered by various examples may be further understood by examining this specification. As used below, any reference to a series of examples is to be understood as a reference to each of those examples disjunctively (e.g., “Examples 1-4” is to be understood as “Examples 1, 2, 3, or 4”).

Example 1 is a method comprising: establishing, by a video conference provider, a first chat channel for exchanging chat messages between a plurality of client devices; receiving, by the video conference provider, an indication to generate a first spotlight card within the first chat channel, wherein the indication identifies spotlight content from a first resource external to the first chat channel; generating, by the video conference provider, the first spotlight card identifying the spotlight content; and transmitting, by the video conference provider to one or more of the client devices connected to the first chat channel, the first spotlight card for display within a spotlight panel within the first chat channel, the spotlight panel positioned proximate to a chat messaging panel comprising chat messages posted to the chat channel.

Example 2 is the method of any previous or subsequent Example, wherein generating, by the video conference provider, the first spotlight card within the spotlight panel comprises: accessing the first resource external to the first chat channel; identifying the spotlight content responsive to the indication to generate the first spotlight card; and generating the first spotlight card based on the spotlight content from the first resource external to the first chat channel.

Example 3 is the method of any previous or subsequent Example, further comprising updating, by the video conference provider, the first spotlight card within the spotlight panel of the first chat channel.
Example 4 is the method of any previous or subsequent Example, wherein updating, by the video conference provider, the first spotlight card within the spotlight panel comprises: receiving, by the video conference provider, updated spotlight content from the first resource external to the first chat channel; and updating, by the video conference provider, the first spotlight card with the updated spotlight content.

Example 5 is the method of any previous or subsequent Example, wherein updating, by the video conference provider, the first spotlight card within the spotlight panel comprises: modifying the first spotlight card to indicate a status change of an application corresponding to the first resource external to the first chat channel.

Example 6 is the method of any previous or subsequent Example, wherein updating, by the video conference provider, the first spotlight card within the spotlight panel comprises: receiving, from a first client device, a selection of the first spotlight card within the spotlight panel; and providing, by the video conference provider, a detailed view of the first spotlight card to the first client device.

Example 7 is the method of any previous or subsequent Example, wherein updating, by the video conference provider, the first spotlight card within the spotlight panel comprises: receiving, from a first client device, one or more edits to the spotlight content of the first spotlight card within the spotlight panel; and updating, by the video conference provider, the spotlight content of the first spotlight card based on the one or more edits to the spotlight content.

Example 8 is the method of any previous or subsequent Example, wherein generating the first spotlight card identifying the spotlight content comprises establishing a placement in the chat channel for the first spotlight card, the method further comprising: receiving, by the video conference provider, an indication to pin the first spotlight card proximate to content in the chat messaging panel; changing, by the video conference provider, the placement of the first spotlight card from the spotlight panel to a position proximate to the content in the chat messaging panel; and pinning, by the video conference provider, the first spotlight card to the position proximate to the content in the chat messaging panel such that the first spotlight card remains in the position proximate to the content in the chat messaging panel until a removal indication is received.
Example 9 is a system comprising: a non-transitory computer-readable medium; a communications interface; and a processor communicatively coupled to the non-transitory computer-readable medium and the communications interface, the processor configured to execute processor-executable instructions stored in the non-transitory computer-readable medium to: establish, by a video conference provider, a first chat channel for exchanging chat messages between a plurality of client devices; receive, by the video conference provider, an indication to generate a first spotlight card within the first chat channel, wherein the indication identifies spotlight content from a first resource external to the first chat channel; generate, by the video conference provider, the first spotlight card identifying the spotlight content; and transmit, by the video conference provider to one or more of the client devices connected to the first chat channel, the first spotlight card for display within a spotlight panel within the first chat channel, the spotlight panel positioned proximate to a chat messaging panel comprising one or more chat messages posted to the chat channel.

Example 10 is the system of any previous or subsequent Example, wherein the processor is configured to execute further processor-executable instructions stored in the non-transitory computer-readable medium to: receive, by the video conference provider, an indication to generate a second spotlight card within the first chat channel, wherein: the second spotlight card comprises second spotlight content from a second resource external to the first chat channel; and the second resource is different than the first resource associated with the first spotlight card; generate, by the video conference provider, the second spotlight card identifying the second spotlight content from the second resource; and transmit, by the video conference provider to the one or more of the client devices connected to the first chat channel, the second spotlight card for display within the spotlight panel of the first chat channel.

Example 11 is the system of any previous or subsequent Example, wherein the processor is configured to execute further processor-executable instructions stored in the non-transitory computer-readable medium to: determine, by the video conference provider, a first priority of the first spotlight card; determine, by the video conference provider, a second priority of the second spotlight card; and modify, by the video conference provider, a placement of the first spotlight card within the spotlight panel based on the first priority and the second priority.
Example 12 is the system of any previous or subsequent Example, wherein: the processor-executable instructions to determine, by the video conference provider, the first priority of the first spotlight card cause the processor to execute further processor-executable instructions stored in the non-transitory computer-readable medium to determine, by the video conference provider, a first interaction level for the first spotlight card; the processor-executable instructions to determine, by the video conference provider, the second priority of the second spotlight card cause the processor to execute further processor-executable instructions stored in the non-transitory computer-readable medium to determine, by the video conference provider, a second interaction level for the second spotlight card; and the processor is configured to execute further processor-executable instructions stored in the non-transitory computer-readable medium to: compare, by the video conference provider, the first interaction level to the second interaction level; and determine, by the video conference provider, the first priority based on the comparison of the first interaction level of the first spotlight card to the second interaction level of the second spotlight card.

Example 13 is the system of any previous or subsequent Example, wherein the indication to generate the first spotlight card within the first chat channel is received from a first client device of the plurality of client devices, and the processor is configured to execute further processor-executable instructions stored in the non-transitory computer-readable medium to: determine, by the video conference provider, an authorization setting associated with the first client device for the first chat channel; and generate, by the video conference provider, the first spotlight card identifying the spotlight content based on the authorization setting associated with the first client device for the first chat channel.

Example 14 is the system of any previous or subsequent Example, wherein the indication to generate the first spotlight card within the first chat channel is received from a first client device of the plurality of client devices, and the processor is configured to execute further processor-executable instructions stored in the non-transitory computer-readable medium to: determine, by the video conference provider, a permissions setting associated with the first client device for the first resource external to the first chat channel; and generate, by the video conference provider, the first spotlight card identifying the spotlight content based on the permissions setting associated with the first client device for the first resource external to the first chat channel.

Example 15 is the system of any previous or subsequent Example, wherein the first resource external to the first chat channel comprises one or more of: an application external to the video conference provider; an application hosted by the video conference provider; or one or more features native to the video conference provider.
Example 16 is a non-transitory computer-readable medium comprising processor-executable instructions configured to cause one or more processors to: establish, by a video conference provider, a first chat channel for exchanging chat messages between a plurality of client devices; receive, by the video conference provider, an indication to generate a first spotlight card within the first chat channel, wherein the indication identifies spotlight content from a first resource external to the first chat channel; generate, by the video conference provider, the first spotlight card identifying the spotlight content; and transmit, by the video conference provider to one or more of the client devices connected to the first chat channel, the first spotlight card for display within a spotlight panel within the first chat channel, the spotlight panel positioned proximate to a chat messaging panel comprising one or more chat messages posted to the chat channel.

Example 17 is the non-transitory computer-readable medium of any previous or subsequent Example, wherein the processor is configured to execute further processor-executable instructions stored in the non-transitory computer-readable medium to: update, by the video conference provider, the first spotlight card within the spotlight panel of the first chat channel.

Example 18 is the non-transitory computer-readable medium of any previous or subsequent Example, wherein the processor-executable instructions to update, by the video conference provider, the first spotlight card within the spotlight panel cause the processor to execute further processor-executable instructions stored in the non-transitory computer-readable medium to: receive, from a first client device, one or more edits to the spotlight content of the first spotlight card within the spotlight panel, wherein the plurality of client devices comprise the first client device; and update, by the video conference provider, the spotlight content of the first spotlight card based on the one or more edits to the spotlight content.

Example 19 is the non-transitory computer-readable medium of any previous or subsequent Example, wherein the processor-executable instructions to update, by the video conference provider, the first spotlight card within the spotlight panel cause the processor to execute further processor-executable instructions stored in the non-transitory computer-readable medium to: modify, by the video conference provider, the first spotlight card to visually indicate a status change of an application corresponding to the first resource external to the first chat channel.

Example 20 is the non-transitory computer-readable medium of any previous or subsequent Example, wherein the processor is configured to execute further processor-executable instructions stored in the non-transitory computer-readable medium to: receive, from a first client device, an indication to modify the spotlight panel; modify, by the video conference provider, the spotlight panel based on the indication; and modify, by the video conference provider, the first spotlight card within the spotlight panel based on the modification of the spotlight panel.
Example 21 is the non-transitory computer-readable medium of any previous or subsequent Example, wherein the processor-executable instructions to modify, by the video conference provider, the first spotlight card within the spotlight panel based on the modification of the spotlight panel cause the processor to execute further processor-executable instructions stored in the non-transitory computer-readable medium to: expand, by the video conference provider, the first spotlight card to provide a detailed view of the spotlight content of the first spotlight card.

Example 22 is the non-transitory computer-readable medium of any previous or subsequent Example, wherein the processor is configured to execute further processor-executable instructions stored in the non-transitory computer-readable medium to: receive, from a first client device, an indication to move the first spotlight card from the first chat channel, wherein the plurality of client devices comprise the first client device; access, by the first client device, a second chat channel, wherein the second chat channel is established by the video conference provider; receive, from the first client device, an indication to add the first spotlight card to a second spotlight panel of the second chat channel; and modify, by the video conference provider, the second spotlight panel of the second chat channel to include the first spotlight card.
Like reference numbers and designations in the various drawings indicate like elements.

DETAILED DESCRIPTION

This disclosure relates to virtual environments (or venues), for example, in which colleagues, friends, associates, family members and the like can build or maintain relationships without being physically together. Such virtual venues can be used for essentially any event in which two or more people may interact, including but not limited to business meetings, social events, performances, speeches, and the like.

Companies and organizations benefit when their employees have stronger relationships with each other. Employees benefit when they have stronger relationships with their co-workers, family, and friends. Customers and the companies and organizations that serve them also benefit when people involved in transactions have stronger relationships. Relationships are most naturally strengthened through in-person interactions. Before technologies like the telephone, the computer, and the internet, the most effective way to build relationships was in person. As technology has become a part of human existence, it has been utilized to help people build relationships. Making a telephone call and talking with someone enables a dialogue to discover shared values, identify mutual goals, coordinate activities, and reciprocate for help given. Sending email can provide similar opportunities for people to connect and grow closer.

The advent of video conferencing has made it possible to combine a phone call with a real-time visual representation of an individual. Not only can the participants hear each other's voices; they can also see facial expressions and share the screens of their computers with each other. Video conferencing tools like ZOOM or GO TO MEETING are effective ways for small groups of people to build relationships (ideally 1:1), but they are ineffective for larger groups. They do not enable the types of social interactions that are common in larger social gatherings.

In a natural, in-person social gathering, people move around the room to enter and exit conversations as they determine to be optimal for their relationship-building goals. They can hear voices at variable levels of volume, which communicates who is having fun, who might be preparing to leave, and who is struggling with relationship building. Also, it is common for social gatherings to include games, media like music or video, decorations such as balloons and colored themes, live performers such as magicians, artists, and musicians, and toasts and recognition of others.

In today's world, with remote work becoming more popular and some people unable to come to work or travel, companies and organizations cannot rely upon in-person social gatherings to build relationships. Yet they lack tools that combine technologies to gather remotely while also maximizing social interactivity, enabling use of all available social cues, and reproducing the informal and spontaneous interactions required for real relationship building. The present inventors recognized that what is needed is a remote social technology that combines all of the elements of socializing in person and translates them into something that can be experienced remotely.
Potential key elements of this technology combination may include one or more of the following, details of which are described below:

- Enabling more than one speaker to speak at a time.
- Allowing participants to move their presence in the social virtual environment to get closer to people with whom they want to socialize, and to move farther away from people with whom they do not want to socialize.
- Adjusting the volume of voices to be proportional to the distance between people.
- Locating music, entertainment, or content in places that allow participants to get farther away from or closer to it, thus adjusting volume and how dominant the music, entertainment, or content is in the individual's socializing experience.
- Providing for live entertainment and the ability to move closer to or away from it, thereby adjusting the audio volume proportionately.
- Enabling team-building games and activities that are customized to the needs of the participants.
- Enabling one participant to click on a button and "focus" on another participant, to increase their audio volume and decrease the audio volume of others.
- Providing a physics-based experience, where each icon has mass, velocity, and momentum. If one icon bumps into another one, it is moved. If there is a group of icons near each other, in a clump, then another icon may not have sufficient mass to break up the clump nor to move through it. This replicates the "pushing one's way through a crowd" social cue.
- Modifying the mass of an icon as a bonus during a game, after receiving recognition, or for being at the event for a certain period of time. As a result, the icon having the enlarged mass may have enhanced capabilities such as being able to push through a larger crowd or strike a game object (e.g., a virtual soccer ball) with more force.
- Providing the ability to go from one room with one type of experience into another room with a different type of experience. Access to one room or the other may be restricted.
- Enabling one person to "take the floor" and give a speech or toast. This makes everyone else muted or quieter while the person talks, and optionally enlarges or otherwise enhances the video emanating from the speaker's camera, e.g., by increasing its size or otherwise making it more visually prominent.
- Enabling one person to give a live performance, such as a comedian or other performer. The performer can see the reaction of the crowd and their level of engagement.
- Providing a multitude of experience modes. For example, in one mode, the audio experience and quality would be elevated, while requiring participants to use headphones to experience high-fidelity sound. In another mode, the audio experience is more of a background, and the system will use echo cancellation capabilities to enable participants to participate without using headphones.
- Providing several options for how the rooms render visually. For example, a participant with an old computer, slow CPU, slow bandwidth, and/or small screen may be able to see only 10 participants on the screen at a time. A participant with a very powerful machine, fast CPU, very fast bandwidth, and large screen may be able to see 50 participants at a time. These options may be automatically given to participants based on their equipment setup, or they may be selected manually.
- Displaying a name tag below each video icon. This name tag can be clicked, and then a brief pre-recorded video of a person's "elevator pitch" or background could be displayed, thus expanding knowledge of each other.
- Also, another button could be enabled that allows the participant to seamlessly give that person recognition via a recognition platform.
- Furthermore, all these elements can be available via a web browser or application on a computer, tablet, smart phone, or any other internet-connected device. A server hosts the content, including the music, games, and other content.
- A development portal can make it possible for developers of games, hosts of events, event planners, musical talent, music providers, and other enhancers of the group experience to offer their content, games, and talents to hosts of virtual events. The host is able to purchase these products/enhancements and make them available to virtual event participants.
- Uploading photos, video, or audio: a virtual event participant can upload a photo, video, and/or audio segment to the virtual event system and selectively present one or more of them to other meeting participants.

FIGS. 2A-2B depict an example of a dynamic virtual event in progress. This particular meeting includes nine participants 202-218, each represented by a separate icon that may display within the icon static or video images representative of the participant. Each participant also can emit and hear audio, for example, speech of him/herself or other participants. The nine icons in this example are displayed as being distributed in a virtual event venue 200 that includes a stationary audio source 220 at the right side of the venue 200. Each participant, via user input (e.g., the arrow keys), can move his or her icon around within the virtual venue 200 and form groups of two or more participants, similar to how actual humans can move around and pair off in a physical venue. For example, in FIG. 2A, the nine participants are essentially all clustered together in a single group, as indicated by the dashed-line shape 226. However, as shown in FIG. 2B, two of the participants 216 and 218 have moved their respective icons to form a sub-group (also referred to as an audio group) of two people, indicated by the dashed-line shape 224. As a result, because participants 216 and 218 are in the sub-group 224, the audio they emit will be relatively loud as between them, while the audio emitted by the other, more distant participants will be muted proportionally based on their respective distances from the sub-group 224. In other words, participants 216 and 218 will hear each other better (i.e., the volume of their emitted audio will be relatively louder) because they are closer together, while the volume of conversations of other participants farther away (e.g., participants 202 and 204) will be quieter (e.g., background chatter), if not altogether silent, or nearly so. In this manner, the virtual event venue 200 presents a dynamic environment in which the participants can move around the venue and form sub-groups to have conversations, similar to how humans at a physical gathering move around and interact at a physical meeting venue. Optionally, the existence of a sub-group 224 may be indicated visually, e.g., by displaying a line encompassing the sub-group members, or by any other suitable visual indication technique (e.g., highlighting, coloring, shading, etc.). The visual indications of a sub-group may be displayed either only to members of the sub-group, or globally, meaning to all virtual participants at the virtual event.
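The distance-based volume behavior just described can be captured in a single attenuation function. The sketch below is illustrative only, assuming screen-space icon coordinates, a linear falloff, and a hypothetical HEARING_RADIUS constant; the specification does not prescribe a particular attenuation curve. Members of the same audio group hear each other at full volume, consistent with the sub-group 224 behavior.

```typescript
interface Icon {
  id: string;
  x: number;             // screen-space position of the participant's icon
  y: number;
  audioGroupId?: string; // audio group membership, if any
}

const HEARING_RADIUS = 600; // assumed falloff distance, in pixels

// Volume in [0, 1] at which `listener` hears audio emitted by `speaker`.
// Members of the same audio group hear each other at full volume; otherwise
// volume falls off linearly with on-screen distance and is silent beyond
// the hearing radius.
function volumeFor(listener: Icon, speaker: Icon): number {
  if (listener.audioGroupId !== undefined &&
      listener.audioGroupId === speaker.audioGroupId) {
    return 1;
  }
  const distance = Math.hypot(speaker.x - listener.x, speaker.y - listener.y);
  return Math.max(0, 1 - distance / HEARING_RADIUS);
}
```

A stationary audio source such as the audio source 220 can be modeled as a speaker with a fixed position, so the same function covers both participants and sources.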
In a similar manner, the stationary audio source 220 will sound louder or quieter depending on how close a participant moves his or her icon to the audio source 220. For example, as shown in FIG. 2B, the audio source 220 will sound louder to participant 216 because he is closer to the audio source 220, whereas, in contrast, the audio source 220 will sound quieter to participants 202 and 208 because their icons are farther away from the audio source 220. A participant may leave the meeting by clicking on button 222. Further features, functions, and implementation details of dynamic virtual event venues include the following.

Another potential feature of the virtual event environment is dynamic audio grouping, in which, when icons (e.g., video bubbles) representing two or more meeting participants come within a certain distance of each other, the proximity of their respective icons to each other is detected and used to create an audio group (or sub-group, as shown in FIG. 2B). As described above, the formation of an audio group changes the characteristics of the audio volumes that the meeting participants hear. Whereas normally volume is louder if an icon is closer and quieter if it is farther from another icon, in an audio group (e.g., sub-group 224 in FIG. 2B) all of the icons' volumes are at the same level. This enables all members of the audio group to hear each other, even if one or more of the icons are somewhat far apart (in the sense of visual distance between icons being displayed on the screen). The audio groups may be formed or changed either manually (e.g., based on user input) or automatically/dynamically based on parameters such as the level of proximity between two or more icons, and optionally other parameters. For example, to determine if a passerby (e.g., an approaching icon representing a meeting participant) should be joined to an audio group, the approaching icon's speed, distance, and/or time spent lingering near the audio group can be taken into account. Depending on the particular parameters used to implement dynamic audio grouping (e.g., threshold speed, distance, and time spent lingering), the system makes a decision to either let the meeting participant represented by the approaching icon pass without joining the audio group or to join them to the audio group. For example, referring to FIG. 2B, if icons 204, 206, and 210 already have been joined into an audio group, and icon 218 is moving to the left, and thus approaching the audio group, a decision is made to either join icon 218 to the audio group or decline to join icon 218 to the audio group depending, e.g., on the speed of icon 218, its screen distance from the audio group, and/or the time spent lingering (if any) near the audio group. Other parameters can be used in making the join/no-join decision depending on the preferences of the implementer of the virtual event environment. This same technique may be used either to form an audio group in the first instance, or in deciding whether to join a meeting participant to an existing audio group.
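The join/no-join decision can be expressed as a predicate over the parameters the passage names. The thresholds below are assumed values for illustration; an implementation could weight speed, distance, and linger time differently, or combine them into a single score.

```typescript
interface ApproachSample {
  speed: number;    // icon speed, in pixels per second
  distance: number; // current distance to the audio group, in pixels
  lingerMs: number; // time spent within the group's vicinity, in ms
}

// Assumed thresholds; a real implementation would tune these empirically.
const MAX_JOIN_SPEED = 50;  // a fast passerby is allowed to pass without joining
const JOIN_DISTANCE = 120;  // the icon must be close to the group
const MIN_LINGER_MS = 1500; // the icon must linger rather than pass through

function shouldJoinAudioGroup(sample: ApproachSample): boolean {
  return sample.speed <= MAX_JOIN_SPEED
      && sample.distance <= JOIN_DISTANCE
      && sample.lingerMs >= MIN_LINGER_MS;
}
```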
Additionally, as described above, visual indicators may be used to signal the existence of an audio group. Further, such a visual indicator (e.g., a line encompassing all members of the audio group) may change appearance, either if a member of the audio group moves around (e.g., the visual indicator will dynamically change shape in a "rubber-band" manner to continue to encompass all group members as they move their icons within the virtual event environment) and/or when virtual participants enter or leave the audio group.

Alternatively, or in addition, the virtual event environment can automatically move icons within an audio group, e.g., such that the icons of the group's members are visually centered around a virtual center-of-mass, which tends to optimize the number of participants that can be displayed properly. Whenever participants enter or leave an audio group, the virtual event environment causes the icons of the current audio group to automatically re-center themselves about the current center-of-mass, which changes as icons enter and leave the audio group. Optionally, centering icons within an audio group can have a time-delay feature such that the icons are moved to re-center them only after a predetermined period of time (e.g., 3 seconds) has elapsed and the audio group's icons have stabilized. This time-delay helps to prevent excessive and repetitive re-centering when participants come and go repeatedly within a short time.

The number of icons (i.e., event participants) allowed within any individual audio group can be limited to a maximum quantity (e.g., 10). This feature is useful, among other ways, to limit the size of audio groups to a manageable number of participants, which tends to enhance the ability of individual participants in the audio group to speak to any other member (or members) of the same audio group, thereby improving the social effectiveness of the virtual event environment. In addition, the virtual event environment optionally may provide an exclusionary feature in which, for example, the host or other member of an audio group can select (e.g., through user-interface controls) to prevent other participants from joining their audio group, either on a global basis (i.e., no other participants are allowed to join the audio group) or on a selective basis (i.e., only certain participants are prevented from joining the audio group). Such an exclusionary feature may prove beneficial, e.g., while an audio group is having a confidential or otherwise private conversation. Further, in addition to the sound dynamics, the virtual event environment can be configured to center the audio group's video bubbles (i.e., icons in a circular shape that display video or a still image representing the respective meeting participants) so all members of the audio group can view each other in an optimal manner.
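The delayed re-centering might look like the following sketch. The 3-second settle delay mirrors the example above; the data shapes and function names are assumptions for illustration.

```typescript
interface Point { x: number; y: number; }

// Geometric center of the group's icon positions (the "center-of-mass"
// with equal icon masses). Assumes a non-empty group.
function centerOfMass(positions: Point[]): Point {
  const sum = positions.reduce(
    (acc, p) => ({ x: acc.x + p.x, y: acc.y + p.y }),
    { x: 0, y: 0 },
  );
  return { x: sum.x / positions.length, y: sum.y / positions.length };
}

// Returns a handler to call on every join/leave event. Re-centering is
// applied only after membership has been stable for `settleMs`, mirroring
// the time-delay feature described above (e.g., 3 seconds).
function makeRecenterScheduler(group: { positions: Point[] },
                               apply: (center: Point) => void,
                               settleMs = 3000): () => void {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return () => {
    if (timer !== undefined) clearTimeout(timer);
    timer = setTimeout(() => apply(centerOfMass(group.positions)), settleMs);
  };
}
```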
The virtual event environment also may enable hybrid physical/virtual interactions between meeting participants. For example, as shown in FIG. 3, a meeting or conference can include both a physical aspect (e.g., in-person individuals joining together in the same physical space 300) and a virtual aspect; that is, meeting participants who are geographically separated by distances that prevent them from seeing or hearing other participants without the aid of electronic or similar means can nevertheless participate virtually in the meeting (e.g., see and hear other participants) using various electronic means such as cameras, microphones, display screens, internet connectivity, or the like.

For example, in the physical space 300 at which a meeting or conference is being held, one or more kiosks 302, 304, 306, 308, 310, each including a display screen, camera, and microphone, and connected to the internet or other communications network, can be placed at appropriate locations in the physical meeting space 300 such that the physical meeting participants can socialize or otherwise interact (e.g., hear and see) with virtual event participants via the kiosks. More specifically, the display screen on a kiosk can display the dynamic virtual event environment described above, and a physical meeting attendee can use the kiosk to join the virtual event and optionally have all or some of the same functionality and features provided by the virtual event environment. For example, physical participants 312 are adjacent a kiosk 304, thereby giving them access to the virtual event environment so that they can virtually interact with virtual event participants. In contrast, physical participants such as participant 314 are not sufficiently near any kiosk, meaning that they cannot interact with virtual event participants.

The virtual dynamic meeting environment may also make use of metadata—for example, data about individuals (e.g., demographic information and/or information specific to an individual such as personal interests, company of employment, state/city of residence, social status, or the like). The metadata may include information users enter themselves, such as facts about themselves, or even intentions or goals to achieve during the virtual event. For example, a virtual event participant may provide information indicating that they want to make sure to interact with vice presidents, or other decision makers, of four different companies during the virtual event. The system can use such metadata to highlight or otherwise identify other meeting participants that the user having the stated goal should seek out and converse with. Alternatively, or in addition, the system can provide a user with recommendations as to direction, things to say to other participants, suggestions for conversation starters, and the like.
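The per-participant metadata just described can be modeled as a simple record, with goal matching implemented as a filter over those records. The field names below are assumptions for illustration only, not a prescribed schema.

```typescript
// Assumed shape for the metadata described above; all fields are illustrative.
interface ParticipantMetadata {
  name: string;
  title?: string;       // e.g., "Vice President"
  company?: string;
  residence?: string;   // state/city of residence
  interests?: string[]; // personal interests
  goals?: string[];     // participant-entered intentions for the event
}

// Highlight participants matching a stated goal, e.g., "interact with
// vice presidents or other decision makers of four different companies."
function decisionMakersToMeet(attendees: ParticipantMetadata[],
                              titles: string[]): ParticipantMetadata[] {
  return attendees.filter((m) => m.title !== undefined && titles.includes(m.title));
}

// Example usage:
// decisionMakersToMeet(attendees, ["Vice President", "CEO"]);
```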
Another potential feature of the virtual event environment is to regulate audio dynamics between a performer (or presenter) and the crowd (e.g., some or all of the other meeting participants). For example, in a default mode, all participants may speak (or emit other audio) at any time (e.g., simultaneously) and that audio can be heard by all other participants at different volumes based, for example, on the relative and respective distances between the participants' icons within the virtual meeting environment. However, that default mode can be modified to enable certain entities to regulate the audio dynamics such that regulating audio based on distance between icons may be modified or over-ridden entirely. For example, when one of the meeting participants is a performer (e.g., comedian, singer, poet, speaker, etc.) and that performer gives a live performance virtually (e.g., if the performer participant is a comedian and is doing their comedy routine), the system can allow real-time (or near real-time) crowd feedback. In that regard, any of several dynamic audio control modes for either or both the performer and the crowd can be employed. For example, the performer or presenter (or another on their behalf) can selectively quiet the crowd, either the entire crowd or specific members of the crowd.

For example, if one or more participants are actively talking (or generating other audio) and the presenter (or other entity authorized to regulate the audio dynamics) decides that the talking is disruptive of the presentation, the presenter can, using user interface controls, selectively quiet the talking crowd members, either by reducing their volume (potentially on a per-participant or per-group basis) or by silencing their volume entirely. Such reducing or silencing may be configured so that members of a particular talking participant's audio group continue to hear the talking participant's audio, while no other participant (and/or the presenter) hears that talking, either at all or only on a reduced basis. Alternatively, or in addition, such crowd quieting may be performed automatically by the virtual event system, for example, by detecting talking or other audio emanating from sources other than the presenter that exceeds a predetermined threshold. Any such audio that exceeds the threshold can be reduced in magnitude (i.e., the volume is lowered and thus harder to hear by the other participants and/or presenter), or muted entirely. Alternatively, or in addition, quieting the crowd can be regulated such that short and potentially loud spikes of audio (e.g., desirable but loud audio such as audience laughter or cheering) are permitted to continue, but audio that exceeds either or both of a time duration or a decibel level can selectively be reduced or muted entirely, as described above.

As another example, the presenter or performer (or another on their behalf) can select "unrestricted audience participation," where all noise from the crowd streams to them (and potentially other participants), regardless of its relevance, similar to how all noise, both from the performer and the crowd, emanates throughout a physical venue during a punk rock concert or similar performance. The performer alternatively can select "no audience participation," where the audience must listen and cannot converse during the show, e.g., because all microphones but the performer's are muted.

Alternatively, or in addition, a "push-to-talk" ("PTT") mode can be employed, where, for example, a member of the audience (i.e., a virtual event participant who is not the performer) can give shout-outs or ask questions, but only if they first press a button (e.g., a virtual button displayed within the user interface) to do so, and the presenter affirmatively gives them authorization to speak (e.g., by using user interface controls). Such PTT requests can be queued up either on a first-in-first-out basis or potentially on some other basis, such as a characteristic or other parameter associated with the PTT requester, e.g., the requester's rank or title within the company, whether the PTT requester has previously and/or recently made another PTT request, or the like. For example, if the president of a company concludes a virtual presentation to the company's employees, and then asks for questions, and, in response, an audience member makes a PTT request to ask a question or make a comment, that PTT request will be added to the queue and selectively granted, e.g., in turn. When that PTT request is granted, the audio emanating from the PTT requester (e.g., speech) is turned on (and potentially increased in volume) so that all of the other participants listening to the president's presentation will be able to hear the PTT requester's question or comment. At the same time, the audio emanating from every other participant (or other audio source) will be muted (e.g., silenced or reduced) so that the other listening participants can hear the PTT requester's comment or question. In addition, video emanating from the PTT requester, while asking their question, can be made larger or otherwise enhanced to draw the attention of the audience and/or presenter. This process of turning on the audio emanating from the current PTT requester and muting the audio emanating from all other sources (except, potentially, from the president, moderator, or other individual given such privileges) results in an orderly and manageable queue in which questions or comments are made audible one at a time.
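One plausible realization of the PTT queue is a priority queue keyed on a requester attribute (e.g., rank or title) with first-in-first-out tie-breaking, as the passage suggests. The field names and ordering rule below are assumptions for illustration, not the disclosed implementation.

```typescript
interface PttRequest {
  requesterId: string;
  rank: number;        // assumed priority signal, e.g., derived from title
  requestedAt: number; // timestamp in ms, used for FIFO tie-breaking
}

class PttQueue {
  private requests: PttRequest[] = [];

  enqueue(request: PttRequest): void {
    this.requests.push(request);
  }

  // Grant the next request: highest rank first, earliest request on ties.
  // A pure FIFO queue is the special case where every rank is equal.
  grantNext(): PttRequest | undefined {
    this.requests.sort(
      (a, b) => b.rank - a.rank || a.requestedAt - b.requestedAt,
    );
    return this.requests.shift();
  }
}
```

When grantNext() returns a request, the requester's stream would be unmuted (and optionally boosted) while all other non-presenter streams are muted, as described above.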
Alternatively, or in addition, the PTT requests could be moderated, e.g., to identify unacceptable or impermissible content. For example, in moderation mode, each PTT requester would be required to first speak (or otherwise communicate) their comment or question (which could be recorded and queued), which would then be screened by a moderator (e.g., by a human being or automatically using keyword filtering, predetermined rules, or artificial intelligence) before the PTT request was granted. The moderator could screen the comments or questions either by listening to the recorded message from the PTT requester, or optionally the recorded message could be transcribed (e.g., using automated voice-to-text conversion) so that it could be read by the moderator. Objectionable comments or questions would selectively not be presented to the presenter and/or the audience participants. If the PTT request is granted, the recorded question or comment would then be played for everyone to hear. Alternatively, the moderator could choose to allow the PTT requester to speak live and present their question or comment again, rather than playing the recorded comment or question.

Alternatively, or in addition, the virtual event system could employ a layered crowd-feedback approach (potentially in conjunction with the PTT mode) in which only the audio emanating from a subset of participants (e.g., 20) in the total audience (e.g., 400) is audible to other participants. This approach has several benefits; for example, it reduces the bandwidth required by the system in that only 20 audio streams need to be supported at a time, rather than all 400. Such layering could be implemented on a group-specific basis in which the audio emanating from participants in only a certain number of groups (e.g., one) would be made audible at a time, and then, potentially on a time-duration basis, only the audio emanating from a different one or more groups would be made audible. This approach would give the participants (and/or the presenter) the sense of live, natural crowd noise, while at the same time reducing bandwidth requirements. The groups used in the layering approach could either be previously formed audio groups and/or groups arbitrarily defined by the virtual system (e.g., randomly or based on proximity of participants' icons). In addition, the groups could have overlapping membership (e.g., one or more participants in the currently audible group also are members of the next group of participants to be made audible). This technique would tend to improve the realism produced by the layering approach, e.g., by giving both the audience and the presenter the impression that the audience is attentive and being responsive (e.g., laughing at jokes).
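The layered approach amounts to rotating which groups' audio streams are mixed in, with successive sets sharing a group so the crowd noise transitions smoothly. A minimal sketch, assuming non-empty, pre-formed groups and a rotation interval driven elsewhere:

```typescript
// Yields successive sets of audible group ids. Only `setSize` groups are
// live at a time, and consecutive sets overlap by one group so the crowd
// noise transitions smoothly. Assumes a non-empty `groupIds` array.
function* audibleGroupSets(groupIds: string[], setSize = 2): Generator<string[]> {
  let start = 0;
  while (true) {
    const set: string[] = [];
    for (let i = 0; i < setSize; i++) {
      set.push(groupIds[(start + i) % groupIds.length]);
    }
    yield set;
    start = (start + setSize - 1) % groupIds.length; // advance with overlap of one
  }
}

// Example: with groups ["A", "B", "C", "D"] and setSize 2, the sets are
// ["A","B"], ["B","C"], ["C","D"], ["D","A"], ["A","B"], ...
```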
Optionally, the performer or their proxy can screen the audience member and/or their intended audio output before enabling the audience member's microphone. Similarly, the virtual system can reduce required bandwidth, and potentially produce other benefits, by limiting the quantity of video streams emanating from the participants (and displayed within their icons) for display. For example, while a PTT requester is speaking, their streaming video would be displayed, and potentially enhanced (e.g., in size or visual effect). In contrast, participants who are behaving in an undesirable or otherwise less than ideal manner (e.g., inattentive, speaking too loud, asleep or inactive, doing something inappropriate, etc.) will not have their video streams displayed in the virtual event environment; rather, they will be represented in a low-bandwidth manner (e.g., only a static display of the participant's image or name). Similarly, the virtual event system can choose to selectively display the video streams of only certain participants, e.g., randomly selected potentially on a rolling basis, or depending on predetermined criteria such as position within the company hosting the virtual event, status as a preferred customer, celebrity, or recognized expertise and achievement in a certain discipline, and/or essentially any other criterion deemed by the event host to be worthy of special treatment.

As another example, the presenter can select "King of the World" mode (perhaps suitable for webinars or presentations from the CEO of an organization) in which the presenter can view all participants, stop them from talking to each other, and/or turn off their video stream, or otherwise prompt them to listen to the presenter's presentation. Additionally, one or more virtual participants can quiet or silence the presenter (or potentially any other participant, either individually, or on an audio-group basis). For example, if two or more participants are in an audio group, and they are not interested in hearing the presenter (and/or other participants), any of them can selectively silence or quiet (reduce volume) the audio of others, including audio emanating from the presenter, other individual participants, and/or entire audio groups.

Optionally, the virtual event system can determine, for example, whether or not an audience participant is actually watching or otherwise paying attention to the performance, and then use that information to undertake certain actions such as muting the microphone of the inattentive audience participant, sending that participant a message such as "pay attention," or essentially any other appropriate action as desired. For example, if the audience participant is determined to be speaking or otherwise interacting with another virtual participant of the audience during the performance in an ongoing manner (e.g., continually speaking with the other audience participant for more than a threshold period of time), then the audio emanating from that audience participant could be blocked, either entirely (such that the inattentive audience participant cannot speak with any of the other participants, except potentially only participants in the inattentive participant's audio group) or selectively, e.g., the audio emanating from the inattentive audience participant is turned off or otherwise blocked such that only certain participants (e.g., the performer) cannot hear it.
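The attention-based policies above reduce to a threshold test over an observed signal, whether that signal is continuous side-conversation time or, as discussed next, gaze direction. The following is a minimal sketch under assumed names and thresholds; it is not the disclosed implementation.

```typescript
interface AttentionSample {
  atMs: number;       // sample timestamp
  attentive: boolean; // e.g., not side-talking, or gaze on the display screen
}

// True when the participant has been continuously inattentive for longer
// than `thresholdMs`, based on periodic samples ordered oldest to newest.
function exceedsInattentionThreshold(samples: AttentionSample[],
                                     nowMs: number,
                                     thresholdMs = 10_000): boolean {
  for (let i = samples.length - 1; i >= 0; i--) {
    if (samples[i].attentive) {
      return nowMs - samples[i].atMs > thresholdMs;
    }
  }
  // No attentive sample within the observed window at all.
  return samples.length > 0 && nowMs - samples[0].atMs > thresholdMs;
}

// Policy hook (illustrative): when triggered, the system might mute the
// participant's audio, suppress their video stream, or send a reminder.
```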
As another example, if a participant is determined to be inattentive, e.g., because they have quieted or muted the presenter beyond a predetermined duration, then that inattentive participant will be denied access to certain features or content, e.g., the video emanating from that inattentive participant's camera will not be displayed in the virtual event environment. In contrast, if a participant is, for example, watching the presentation in full-screen mode, the system may determine that that participant is active and no features or content will be disabled for that attentive participant, e.g., the video emanating from the attentive participant will potentially be displayed in the virtual event environment. As another example, the system could analyze images obtained from the audience participant's video stream (e.g., captured by the participant's local camera and transmitted to the computing platform hosting the dynamic virtual event environment) and use image analysis to determine whether or not the participant's eyes are directed to their local display screen. If, for example, it is determined that the participant's gaze is elsewhere than their display screen for a threshold duration, then the system could infer that the audience participant in question is inattentive, and in response undertake essentially any appropriate action as desired, such as muting their audio or not displaying their video stream.

As another feature, the virtual event system can measure, potentially in an ongoing and real-time manner, the level, extent, and/or effectiveness of a meeting participant's social activity (i.e., the interactions between the meeting participant being measured and other meeting participants during a virtual event). The system can use this information on a real-time basis, for example, to inform a meeting participant who is a salesperson how effectively they are networking, or have networked, with other participants of the virtual crowd. For example, using metadata and/or other collected information, the system could inform the salesperson that they have interacted with (and for how long) decision-makers of companies in the salesperson's relevant industry.

As another example, an organization planning a virtual event can gather information, e.g., metadata, on each of the potential virtual participants, e.g., name, title, industry, past recognitions or achievements (e.g., best IT Manager of the year, MacArthur Award Winner, designation as Fellow at a prestigious organization, VIP, etc.), related relationships with other people or enterprises, an indicator of the quality of relationship between each participant and the organization planner (e.g., a color-code such as green (good relationship), yellow (neutral relationship), red (bad or troubled relationship)), and similar information. That metadata can then be provided to the organizational planner as a searchable attendee list that can be used to plan activities and identify relevant relationships between various participants. In addition, the metadata may be selectively displayed to event participants in or along with a participant's icon, e.g., using various graphical attributes such as color-coding.
For example, an event participant, if authorized, may be able to search or otherwise access the metadata on a real-time basis, e.g., to identify all participant attendees who work for a specific company, occupy a certain position within a company (e.g., CEO), attended a specific school, are interested in certain hobbies, sports, or literature, or the like. Such information could be displayed, e.g., using either a hover-over mode (which would cause the display of a frame of information) or by clicking on the participant's icon, or essentially any other suitable user-interface technique. The selective display of metadata may be controlled on a per-participant basis, e.g., only to certain employees of the planning organization, to inform specific participants of desirable interactions with other, non-employee participants.

In addition, the system can give the salesperson (or any other participant) real-time prompts to suggest interactions based, for example, on the salesperson's stated intentions, best practices, and/or using metrics and other information collected by the system over time. For example, the system could notify the salesperson (e.g., by displaying a pop-up message or the like on the salesperson's display screen) that a relevant decision-maker is present in the virtual event environment and that the salesperson should use their user interface controls to navigate their icon to the current virtual location of that decision-maker's icon (e.g., "move to the left and up" or "keep moving left—you're getting close").

In addition to salespersons, the system can be used to measure social interactivity and/or give social prompts to essentially any type of virtual event participant. For example, the system can inform a recruiter how many potential recruits they have interacted with, information about those potential recruits, the presence and locations of other recruits with whom the recruiter has not yet interacted, and/or the effectiveness of the recruiter's interactions with the potential recruits (e.g., by measuring the recruits' attentiveness during the interaction, as described above). Additionally, the system can collect relevant information and provide a post-virtual-event report, for example, to a manager so that the manager can be better informed about the quantity, quality, and/or effectiveness of the interactions of their employees who participated in the virtual event. Similarly, the system can provide a post-virtual-event report to the event's host so that the host can be better informed about the quantity and type of interactions that occurred during the virtual event, and/or the virtual event's effectiveness relative to the event host's goals.

As discussed above, the dynamic virtual event system can provide games such as "bubble soccer." For example, each participant's icon is in the shape of a "bubble," and the willing participants are divided into teams of one or more participants. Then, a movable graphical object (e.g., a virtual soccer ball) is introduced into the environment and the teams attempt to score virtual goals by manipulating their user-interface controls to virtually bump into the virtual soccer ball, thereby imparting a virtual force, in an attempt to cause the virtual soccer ball to move toward and/or enter the opposing team's virtual goal (e.g., a static graphical object in the form of a soccer goal and displayed on each game participant's screen).
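The physics-based interactions described earlier (icons with mass and momentum bumping each other or a ball) reduce to a standard elastic-impulse update along the collision normal. The sketch below is simplified and illustrative rather than the disclosed implementation; note that a mass enlarged as a game bonus directly increases the momentum an icon imparts.

```typescript
interface Body {
  x: number; y: number;   // position
  vx: number; vy: number; // velocity
  mass: number;           // may be enlarged as a game bonus or recognition reward
}

// Elastic collision impulse along the line between two bodies. A heavier
// icon transfers more momentum, so a mass-boosted icon strikes the ball
// harder and pushes through clumps of lighter icons more easily.
function collide(a: Body, b: Body): void {
  const nx = b.x - a.x, ny = b.y - a.y;
  const len = Math.hypot(nx, ny) || 1;
  const ux = nx / len, uy = ny / len;                // collision normal
  const closing = (a.vx - b.vx) * ux + (a.vy - b.vy) * uy;
  if (closing <= 0) return;                          // already separating
  const impulse = (2 * closing) / (a.mass + b.mass); // fully elastic collision
  a.vx -= impulse * b.mass * ux; a.vy -= impulse * b.mass * uy;
  b.vx += impulse * a.mass * ux; b.vy += impulse * a.mass * uy;
}
```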
Other potential games may include "group pong," which may be similar to bubble soccer in that a team of virtual participants manipulate their video bubbles to come into virtual contact with a displayed, movable graphical object in the form of a ball or the like, and attempt to bounce it back toward a virtual wall, similar to the familiar Pong videogame. To facilitate such games, the dynamic virtual event system can provide a framework for game and other activity developers so that the developers can build their games on top of the virtual social environment. To do so, the game developers could use known software tools, for example, to display a specific screen layout and format (e.g., a virtual soccer field), expose player positions, names, and/or controls, place virtual objects into the virtual social environment and apply known rules of physics to the virtual objects, and/or respond to inputs from players.

As another feature, the virtual event system can be configured to have multiple (e.g., two or more) different virtual event spaces, which can be linked or otherwise connected to enable virtual participants to move from room to room. Each virtual room may be occupied by different virtual participants, so by moving into a different virtual room, a virtual participant can gain access to a different group of virtual participants with whom to interact. This feature may be useful, among other ways, to reduce the size of the group of virtual participants in any particular virtual room down to a manageable number, thereby enhancing the effectiveness and accessibility of interactions, which tends to improve the overall virtual event experience.

As another feature, a virtual participant may share their screen with other virtual participants, either on a global basis (i.e., with all virtual event participants) or on a group-specific basis (e.g., only with other members of the sharing participant's audio or sub-group). Among other uses, screen sharing may be used to share audio, video, still images, software applications, virtual whiteboards, URLs, or the like. In addition, a member of the group (e.g., the group host or owner, which may be designated either manually, e.g., based on user input, or automatically, e.g., the first virtual participant in the group (e.g., the initiator of the group)) may control whether screen-sharing is global or limited to the group. Such control may be implemented, e.g., using rules based on whether the group's activities are directed to corporate training, to fun activities, or the like.

As another feature, the virtual event environment may be configured to enable virtual participants to place artifacts or graphical objects within the virtual event environment, either on a global basis or on a group-specific basis. For example, a virtual event participant, who is enabled to do so, may place graphical objects such as a form to be filled out, a display frame (that displays, e.g., a URL, software code, an iFrame, or other information), a virtual photo-booth, a virtual whiteboard, a picture with a hover-over mode, a CTA (Call-To-Action) button, or the like.

As another feature, the virtual event environment may be commerce-enabled, e.g., to facilitate sales events, trade shows, or the like.
For example, suppose that a virtual participant is a jewelry salesperson and has a number of high-priced jewelry items to sell; the salesperson may place graphical objects (e.g., photos, videos, or 3D models) in the virtual environment (e.g., in a separate dedicated virtual room) representative of the jewelry items, which other participants (i.e., potential buyers) can inspect and determine whether they want to purchase one or more of them. Each jewelry object can be enabled with various functionalities (e.g., hover-over informational frames, URLs to relevant webpages or other internet-accessible resources, or the like). Both the virtual potential buyer participants and the salesperson can move around the virtual dedicated room as desired and inspect the various jewelry items for sale. In addition, one or more other participants associated with the jewelry items (e.g., the designer, a celebrity endorser, or other spokesperson associated with the jewelry items) can move around the virtual dedicated room and interact with the potential buyers to help bolster sales, answer questions, and the like.

As another feature, the virtual event system may provide a developer framework that enables a wide variety of people, e.g., game developers, corporate trainers, event planners, and the like, to develop resources or other activities (e.g., games, questionnaires, online courses, tutorials, team building events, structured meetings, etc.) that can be deployed in the virtual event environment. In addition, the developer framework could include an event macro layer that enables developers and others to develop macros, or templates, for certain types of events, e.g., scout troop meetings, diversity or other corporate training, negotiation training, etc. Such macros would provide a track-proven, general framework, developed by professionals based on established best practices, for use by others, who could populate an instance of a selected framework with information and other content specific to the particular event that they are planning. This enables a repeatable way to quickly and easily plan and run a specific event that is well-structured and engaging for the event participants. The macros could be uploaded to the virtual event system, and/or an appropriate internet venue such as an online marketplace, and could be shared, traded, or otherwise made available to others for use. In a marketplace environment, the uploaded macros could have, e.g., well-developed and creative backgrounds, custom themes, and/or other content or functionality designed by professionals and proven to be effective or otherwise attractive to use. Users could visit the marketplace and download, potentially at a cost, desired macros that are relevant to their event being planned. For example, if an event planner wants to plan a virtual event relating to team building, the planner could visit the marketplace and peruse the various macros available for team-building exercises, which, for example, could vary based on specific industries, countries or languages, knowledge level of the intended audience, or the like. The selected macro could provide a framework (e.g., present an agenda, invite a guest to give an introductory speech, play a specified video, execute a certain activity, etc.) that prompts the planner to enter content specific to their event. In that manner, the planner is provided with a proven, plug-and-play template into which the planner enters their event-specific content, which then can be executed on demand, for example, during a virtual event.
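An event macro of the kind described can be modeled as an ordered template of typed steps that the planner populates with event-specific content. The schema below is an assumed illustration of the plug-and-play idea, not a prescribed format.

```typescript
// Assumed step vocabulary for an event macro; real frameworks could differ.
type MacroStep =
  | { kind: "agenda"; items: string[] }
  | { kind: "speech"; speaker: string; durationMin: number }
  | { kind: "video"; url: string }
  | { kind: "activity"; name: string; instructions: string };

interface EventMacro {
  title: string;      // e.g., "Negotiation Training"
  audience: string;   // intended audience / knowledge level
  steps: MacroStep[]; // the ordered framework the planner populates
}

// A planner instantiates a downloaded template with event-specific content:
const teamBuilding: EventMacro = {
  title: "Team Building 101",
  audience: "New hires",
  steps: [
    { kind: "agenda", items: ["Welcome", "Icebreaker", "Game", "Wrap-up"] },
    { kind: "speech", speaker: "Guest host", durationMin: 5 },
    { kind: "activity", name: "Bubble soccer", instructions: "Two teams; first to 3 goals wins." },
  ],
};
```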
In that manner, the planner is provided with a proven, plug-and-play template into which the planner enters their event-specific content, which then can be executed on demand, for example, during a virtual event.

FIG. 4 illustrates an exemplary process 400 for conducting a dynamic virtual event. At 402, a plurality of icons are displayed on a display screen of a computing device, each icon representing a different virtual event participant, wherein the plurality of icons includes a first icon representing a virtual event participant associated with the computing device. At 404, input representing a direction of movement for the first icon is received from an input device of the computing device. At 406, in response to receiving the input, the first icon is moved on the display screen in the direction represented by the input.

In some implementations, the first icon comprises video of the virtual event participant associated with the computing device. In some cases, the process 400 further comprises receiving, by the computing device, audio signals associated with a second icon of the plurality of icons; and playing, by the computing device, the received audio signals at a particular volume based on a distance between the first icon and the second icon on the display screen. In some implementations, the particular volume decreases as the distance between the first icon and the second icon on the display screen increases. In some cases, the particular volume is proportional to a volume level at which the virtual event participant associated with the first icon would hear the audio signals emitted by the virtual event participant associated with the second icon at a physical distance in a physical space proportional to the distance between the first icon and the second icon on the display screen.

In some implementations, the process 400 further comprises receiving, by the computing device, a plurality of audio signals each associated with one of the plurality of icons; and playing, by the computing device, each of the received plurality of audio signals at a particular volume based on the distance between the first icon and the icon associated with the received audio signal on the display screen. In some cases, the process 400 further comprises receiving, by the computing device, input designating a particular icon from the plurality of icons upon which to focus; and in response to the input, playing, by the computing device, audio signals associated with the particular icon at a first volume level, and playing audio signals associated with icons other than the particular icon at a second volume level, wherein the first volume level is greater than the second volume level.
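The distance-based volume behavior of process 400 can be illustrated with a short sketch. The following Python fragment is a minimal, non-limiting example; the linear attenuation curve and its constants (max_distance, min_volume) are assumptions chosen for illustration, not part of the process itself.

    import math

    def icon_distance(a, b):
        # a and b are (x, y) display coordinates of two participant icons.
        return math.hypot(a[0] - b[0], a[1] - b[1])

    def playback_volume(first_icon, second_icon, max_distance=600.0, min_volume=0.0):
        # The volume decreases as the on-screen distance between the first
        # icon and the second icon increases, reaching min_volume at
        # max_distance (a linear stand-in for the proportional mapping
        # described above).
        d = icon_distance(first_icon, second_icon)
        if d >= max_distance:
            return min_volume
        return min_volume + (1.0 - min_volume) * (1.0 - d / max_distance)

    # A nearby icon is heard almost at full volume; a distant one is silent.
    print(playback_volume((100, 100), (130, 140)))  # about 0.92
    print(playback_volume((100, 100), (700, 100)))  # 0.0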
FIG. 5 illustrates an exemplary process 500 for conducting a virtual event. As shown, the process 500 includes displaying, on a display screen of a computing device, a plurality of icons, each icon representing a different virtual event participant, wherein the plurality of icons includes a first icon representing a virtual event participant associated with the computing device (502); identifying, by the computing device, a subset of the plurality of icons including the first icon (504); grouping, by the computing device, the subset of the plurality of icons into an audio group (506); receiving, by the computing device, a plurality of audio signals each associated with one of the plurality of icons (508); and playing, by the computing device, each of the received plurality of audio signals, wherein audio signals received from icons included in the audio group are played at a first volume, wherein audio signals received from icons not included in the audio group are played at a second volume, and wherein the first volume is greater than the second volume (510).

In some implementations, identifying the subset of the plurality of icons includes identifying icons that are located within a certain distance of each other. In some cases, identifying the subset of the plurality of icons includes receiving user input identifying the icons. In some implementations, the process 500 includes identifying, by the computing device, an icon from the plurality of icons that is not included within the audio group; and in response, adding, by the computing device, the identified icon to the audio group. In some implementations, identifying the icon from the plurality of icons that is not included within the audio group includes identifying the icon based on the icon's proximity to the audio group. In some cases, identifying the icon from the plurality of icons that is not included within the audio group includes identifying the icon based on a speed at which the icon is approaching a location of the audio group on the display screen. In some implementations, identifying the icon from the plurality of icons that is not included within the audio group includes identifying the icon based on an amount of time the icon has spent within a particular distance from a location of the audio group on the display screen. In some cases, identifying the icon from the plurality of icons that is not included within the audio group includes receiving user input identifying the icon. In some implementations, the process 500 includes displaying, on the display screen, a visual indication of which icons from the plurality of icons are included in the audio group.
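One way to read the distance-based grouping in process 500 is as a connected-components problem over icon positions; treating it that way is an assumption, as are the radius and volume constants in this hedged Python sketch.

    import math

    def form_audio_group(positions, self_id, radius=150.0):
        # positions: dict mapping participant id -> (x, y) icon coordinates.
        # Returns the set of ids transitively within `radius` of the caller's
        # icon, i.e., icons "located within a certain distance of each other".
        group = {self_id}
        frontier = [self_id]
        while frontier:
            current = frontier.pop()
            cx, cy = positions[current]
            for pid, (x, y) in positions.items():
                if pid not in group and math.hypot(x - cx, y - cy) <= radius:
                    group.add(pid)
                    frontier.append(pid)
        return group

    def volume_for(pid, group, first_volume=1.0, second_volume=0.2):
        # Members of the audio group are heard at the louder first volume.
        return first_volume if pid in group else second_volume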
FIG. 6 illustrates an exemplary process 600 for conducting a hybrid physical and virtual event. As shown, the process 600 includes providing a kiosk arranged in a physical meeting space, the kiosk including a microphone, a camera, an input device, and a display screen, and being connected to a communications network (602); displaying, on the display screen of the kiosk, a plurality of icons, each icon representing a different virtual event participant associated with a remote computing device (604); updating, on the display screen of the kiosk, a location at which each of the plurality of icons is displayed based on user input received by the remote computing device associated with each icon (606); receiving, by the kiosk, a plurality of audio signals associated with one or more of the plurality of icons (608); and playing, by the kiosk, each of the received plurality of audio signals at a particular volume based on a distance between the icon representing the kiosk and the icon associated with the received audio signal on the display screen (610).

FIG. 7 illustrates an exemplary process 700 for selectively providing metadata to virtual event participants. As shown, the process 700 includes displaying, on a display screen of a computing device, a plurality of icons, each icon representing a different virtual event participant (702); identifying, by the computing device, metadata associated with one or more virtual event participants represented by the plurality of icons, wherein the metadata includes at least one of a name, a title, an organization, an industry, or a relationship to another virtual event participant (704); and selectively providing, by the computing device, metadata associated with a particular virtual event participant to one or more other virtual event participants (706).

FIG. 8 illustrates an exemplary process 800 for controlling participant audio volume during a virtual event. As shown, the process 800 includes identifying, by a server computing device, a plurality of participant computing devices connected to the server computing device by a communications network, each participant computing device associated with a participant in the virtual event, wherein the server is configured to selectively control playback of audio signals transmitted by any of the participant computing devices on the other participant computing devices in the plurality of participant computing devices (802); identifying, by the server computing device, a presenter device from the plurality of computing devices (804); and in response to identifying the presenter device, instructing, by the server computing device, the plurality of participant computing devices to playback audio signals transmitted by the presenter device at a presenter volume, and to playback audio signals transmitted by participant computing devices other than the presenter device at an audience volume, wherein the audience volume is less than the presenter volume (806).
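A server-side sketch of the two-level volume policy in process 800 follows. The message format and the send() transport are illustrative assumptions; only the requirement that the audience volume be less than the presenter volume comes from the description above.

    PRESENTER_VOLUME = 1.0
    AUDIENCE_VOLUME = 0.3  # less than PRESENTER_VOLUME, per step 806

    def instruct_volumes(participants, presenter_id, send):
        # participants: list of device ids; send(device_id, message) delivers
        # a playback instruction to one participant computing device.
        for device in participants:
            send(device, {
                "volume_by_source": {
                    source: (PRESENTER_VOLUME if source == presenter_id
                             else AUDIENCE_VOLUME)
                    for source in participants if source != device
                }
            })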
FIG. 9 illustrates an exemplary process 900 for conducting a virtual event including push-to-talk audience interactions. As shown, the process 900 includes identifying, by a server computing device, a plurality of participant computing devices connected to the server computing device by a communications network, each participant computing device associated with a participant in the virtual event, wherein the server is configured to selectively control playback of audio signals transmitted by any of the participant computing devices on the other participant computing devices in the plurality of participant computing devices (902); identifying, by the server computing device, a presenter device from the plurality of computing devices (904); in response to identifying the presenter device, instructing, by the server computing device, the plurality of participant computing devices to playback audio signals transmitted by the presenter device at a presenter volume, and to mute or reduce a volume of audio signals transmitted by participant computing devices other than the presenter device (906); receiving, by the server computing device, a push-to-talk request from a requesting computing device (908); and in response to receiving the push-to-talk request, instructing, by the server computing device, the plurality of participant computing devices to playback audio signals transmitted by the presenter device at the presenter volume, to playback audio signals transmitted by the requesting computing device at a push-to-talk volume, and to mute or reduce the volume of audio signals transmitted by participant computing devices other than the presenter device or the requesting computing device (910).

FIG. 10 illustrates an exemplary process 1000 for conducting a virtual event including layered audience feedback. As shown, the process 1000 includes identifying, by a server computing device, a plurality of participant computing devices connected to the server computing device by a communications network, each participant computing device associated with a participant in the virtual event, wherein the server is configured to selectively control playback of audio signals transmitted by any of the participant computing devices on the other participant computing devices in the plurality of participant computing devices (1002); identifying, by the server computing device, a presenter device from the plurality of computing devices (1004); identifying, by the server computing device, an audience feedback subset from the plurality of computing devices other than the presenter device (1006); and instructing, by the server computing device, the plurality of participant computing devices to playback audio signals transmitted by the presenter device at a presenter volume, to playback audio signals transmitted by participant computing devices in the audience feedback subset at an audience volume, and to mute or reduce the volume of audio signals transmitted by participant computing devices that are not included in the audience feedback subset and that are not the presenter device (1008).
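The three-tier policy of process 1000 (presenter loud, feedback subset audible, everyone else muted) can be sketched as follows; the concrete volume values, policy message shape, and send() transport are assumptions layered on the behavior described above.

    def instruct_layered_feedback(participants, presenter_id, feedback_subset,
                                  send, presenter_volume=1.0, audience_volume=0.4):
        # Per steps 1004-1008: presenter at presenter volume, feedback subset
        # at audience volume, all other sources muted (0.0).
        for device in participants:
            policy = {}
            for source in participants:
                if source == device:
                    continue
                if source == presenter_id:
                    policy[source] = presenter_volume
                elif source in feedback_subset:
                    policy[source] = audience_volume
                else:
                    policy[source] = 0.0
            send(device, {"volume_by_source": policy})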
In some cases, the audience feedback subset is a first audience feedback subset, and the process 1000 includes determining, by the server computing device, that a particular amount of time has passed since identifying the first audience feedback subset; in response to determining that the particular amount of time has passed, identifying, by the server computing device, a second audience feedback subset from the plurality of computing devices, wherein the second audience feedback subset is different than the first audience feedback subset; and instructing, by the server computing device, the plurality of participant computing devices to playback audio signals transmitted by the presenter device at the presenter volume, to playback audio signals transmitted by participant computing devices in the second audience feedback subset at the audience volume, and to mute or reduce the volume of audio signals transmitted by participant computing devices that are not included in the second audience feedback subset and that are not the presenter device.

FIG. 11 illustrates an exemplary process 1100 for controlling participant video quality during a virtual event. As shown, the process 1100 includes identifying, by a server computing device, a plurality of participant computing devices connected to the server computing device by a communications network, each participant computing device associated with a participant in the virtual event, wherein the server is configured to selectively control playback of video signals transmitted by any of the participant computing devices on the other participant computing devices in the plurality of participant computing devices (1102); identifying, by the server computing device, a video enhancement subset from the plurality of participant computing devices (1104); and instructing, by the server computing device, participant computing devices in the video enhancement subset to transmit video signals at a first video quality level, and instructing participant computing devices that are not in the video enhancement subset to transmit video signals at a second video quality level, wherein the first video quality level is higher than the second video quality level (1106).
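A hedged sketch of the two-level video-quality instruction in process 1100 follows. The resolution presets and the instruction message format are assumptions; only the requirement that the enhancement subset transmit at the higher quality level comes from the description above.

    HIGH_QUALITY = {"width": 1280, "height": 720, "fps": 30}
    LOW_QUALITY = {"width": 320, "height": 180, "fps": 15}

    def instruct_video_quality(participants, enhancement_subset, send):
        # send(device_id, message) delivers an instruction to one device;
        # devices in the video enhancement subset transmit at the first
        # (higher) quality level, all others at the second.
        for device in participants:
            quality = HIGH_QUALITY if device in enhancement_subset else LOW_QUALITY
            send(device, {"type": "set_transmit_quality", "quality": quality})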
FIG. 12 illustrates an exemplary process 1200 for allowing participants to mute a presenter during a virtual event. As shown, the process 1200 includes identifying, by a server computing device, a plurality of participant computing devices connected to the server computing device by a communications network, each participant computing device associated with a participant in the virtual event, wherein the server is configured to selectively control playback of audio signals transmitted by any of the participant computing devices on the other participant computing devices in the plurality of participant computing devices (1202); identifying, by the server computing device, a presenter device from the plurality of computing devices (1204); in response to identifying the presenter device, instructing, by the server computing device, the plurality of participant computing devices to playback audio signals transmitted by the presenter device (1206); receiving, by the server computing device, a request to mute the presenter from a particular computing device (1208); and in response, instructing, by the server computing device, the particular computing device to mute audio signals transmitted by the presenter device, wherein the plurality of participant computing devices other than the particular computing device continue to playback audio signals transmitted by the presenter device (1210).

FIG. 13 illustrates an exemplary process 1300 for managing inattentive participants during a virtual event. As shown, the process 1300 includes identifying, by a server computing device, a plurality of participant computing devices connected to the server computing device by a communications network, each participant computing device associated with a participant in the virtual event, wherein the server is configured to selectively control playback of audio signals transmitted by any of the participant computing devices on the other participant computing devices in the plurality of participant computing devices (1302); identifying, by the server computing device, an inattentive participant based on either or both audio or video signals transmitted by a participant computing device associated with the inattentive participant (1304); and in response to identifying the inattentive participant, performing, by the server computing device, one or more corrective actions to the participant computing device associated with the inattentive participant including at least one of muting a microphone of the participant computing device, blocking audio transmitted by the participant computing device for a period of time, blocking video transmitted by the participant computing device for a period of time, or sending the participant computing device a warning message for display to the inattentive participant (1306).
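The corrective actions enumerated in step 1306 can be dispatched as in the sketch below. How an inattentive participant is detected is out of scope here; the action names, default duration, warning text, and send() transport are illustrative assumptions.

    def apply_corrective_action(device, action, send, duration_s=60):
        # One of the corrective actions from step 1306, applied to the
        # participant computing device associated with the inattentive
        # participant.
        if action == "mute_microphone":
            send(device, {"type": "mute_microphone"})
        elif action == "block_audio":
            send(device, {"type": "block_audio", "seconds": duration_s})
        elif action == "block_video":
            send(device, {"type": "block_video", "seconds": duration_s})
        elif action == "warn":
            send(device, {"type": "warning",
                          "text": "You appear to be away. Please rejoin the session."})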
FIG. 14 illustrates an exemplary process 1400 for providing a social activity metric to participants in a virtual event. As shown, the process 1400 includes identifying, by a server computing device, a plurality of participant computing devices connected to the server computing device by a communications network, each participant computing device associated with a participant in the virtual event, wherein the server is configured to selectively control playback of audio signals transmitted by any of the participant computing devices on the other participant computing devices in the plurality of participant computing devices (1402); determining, by the server computing device, a metric associated with social activity of a particular participant based on interactions by the particular participant with other participants in the virtual event (1404); and providing, by the server computing device, the metric associated with the social activity of the particular participant to the participant computing device associated with the particular participant (1406).

FIG. 15 illustrates an exemplary process 1500 for suggesting participant actions during a virtual event. As shown, the process 1500 includes identifying, by a server computing device, a plurality of participant computing devices connected to the server computing device by a communications network, each participant computing device associated with a participant in the virtual event, wherein the server is configured to selectively control playback of audio signals transmitted by any of the participant computing devices on the other participant computing devices in the plurality of participant computing devices (1502); determining, by the server computing device, a prompt for display to a particular participant based on intent information provided by the particular participant, wherein the prompt includes at least one of a suggested interaction with a target participant, a suggested navigation action configured to move a display location of an icon associated with the particular participant closer to a display location of an icon associated with the target participant, or information about a target participant (1504); and instructing, by the server computing device, the participant computing device associated with the particular participant to display the prompt on a display screen of the participant computing device (1506).
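One possible shape for the prompt determination in step 1504 is sketched below. The distance heuristic, the metadata fields, and the prompt texts are all assumptions; the process only requires that the prompt suggest an interaction, a navigation action, or information about a target participant.

    import math

    def determine_prompt(self_pos, target_id, positions, metadata):
        # positions: id -> (x, y) icon locations; metadata: id -> dict with
        # assumed keys such as "name" and "title".
        tx, ty = positions[target_id]
        distance = math.hypot(tx - self_pos[0], ty - self_pos[1])
        if distance > 200:  # too far to talk: suggest a navigation action
            return {"kind": "navigate", "toward": (tx, ty),
                    "text": f"Move closer to {metadata[target_id]['name']}"}
        return {"kind": "interact",
                "text": f"Say hello to {metadata[target_id]['name']} "
                        f"({metadata[target_id].get('title', '')})"}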
FIG. 16 illustrates an exemplary process 1600 for conducting a virtual game during a virtual event. As shown, the process 1600 includes identifying, by a server computing device, a plurality of participant computing devices connected to the server computing device by a communications network, each participant computing device associated with a participant in the virtual event, wherein the server is configured to selectively control playback of audio signals transmitted by any of the participant computing devices on the other participant computing devices in the plurality of participant computing devices (1602); assigning, by the server computing device, display locations to a plurality of icons each associated with one of the plurality of participant computing devices, wherein each of the plurality of participant computing devices is configured to display each of the plurality of icons at the assigned display locations on a display screen at each participant computing device (1604); assigning, by the server computing device, a display location to a game object icon, wherein each of the plurality of participant computing devices is configured to display the game object icon at the assigned display location (1606); receiving, by the server computing device, directional input signals from at least a portion of the plurality of computing devices, each directional input signal associated with one of the plurality of computing devices (1608); in response to receiving the directional input signals, selectively updating, by the server computing device, the display locations of each icon of the plurality of icons associated with a participant computing device that is associated with a received directional input signal (1610); detecting, by the server computing device, collisions between icons from the plurality of icons and between the icons and the game object icon (1612); and in response to detecting the collisions, selectively updating, by the server computing device, the display locations of the plurality of icons and the game object icon based on the detected collisions (1614).

FIG. 17 illustrates an exemplary process 1700 for conducting a screen-share during a virtual event. As shown, the process 1700 includes identifying, by a server computing device, a plurality of participant computing devices connected to the server computing device by a communications network, each participant computing device associated with a participant in the virtual event, wherein the server is configured to selectively control playback of audio and video signals transmitted by any of the participant computing devices on the other participant computing devices in the plurality of participant computing devices (1702); receiving, by the server computing device, a request from a presenter computing device from the plurality of participant computing devices to share video signals displayed on a display screen of the presenter computing device with a set of participant computing devices from the plurality of participant computing devices (1704); and providing, by the server computing device, video signals transmitted by the presenter computing device to the set of participant computing devices for display on a display screen of each participant computing device (1706).
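A minimal relay sketch for process 1700 follows. The request schema, the shares table, and the forward() transport are assumptions; the essential behavior, forwarding the presenter's video to the requested set of devices, follows steps 1704 and 1706.

    def handle_share_request(server_state, request):
        # request identifies the presenter and the target set of devices
        # (step 1704); record who receives this presenter's shared screen.
        server_state["shares"][request["presenter_id"]] = set(request["targets"])

    def on_video_frame(server_state, source_id, frame, forward):
        # Relay each frame from a sharing presenter to its target devices
        # for display on each device's screen (step 1706).
        for device in server_state["shares"].get(source_id, ()):
            forward(device, {"type": "shared_screen_frame",
                             "from": source_id, "frame": frame})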
FIG. 18 illustrates an exemplary process 1800 for allowing participants to place graphical objects within a virtual event environment. As shown, the process 1800 includes identifying, by a server computing device, a plurality of participant computing devices connected to the server computing device by a communications network, each participant computing device associated with a participant in the virtual event, wherein the server is configured to selectively control playback of audio signals transmitted by any of the participant computing devices on the other participant computing devices in the plurality of participant computing devices (1802); receiving, by the server computing device, a request from one of the plurality of participant computing devices to place a graphical object at a particular display location within the virtual event environment (1804); and in response to receiving the request, instructing, by the server computing device, the plurality of participant computing devices to display the graphical object at the particular display location within the virtual event environment on a display screen associated with each participant computing device (1806).

In some cases, the graphical object has an associated functionality, and the process 1800 includes receiving input from a participant computing device to interact with the graphical object, thereby enabling the particular computing device to access the graphical object's associated functionality. In some cases, the graphical object's functionality includes at least one of the following: displaying information relating to the graphical object, the information including one or more of descriptive text, a Uniform Resource Locator (URL), a still image, or a video image; or changing an appearance of the graphical object; or alerting another participant's computing device that the particular computing device has interacted with the graphical object. In some implementations, the graphical object is associated with a product for sale and includes a Uniform Resource Locator (URL) associated with a webpage providing information about the product for sale.

FIG. 19 illustrates an exemplary process 1900 for configuring a virtual event environment using a macro. As shown, the process 1900 includes executing, by a server computing device, a predetermined macro including instructions for configuring attributes of the virtual event environment including at least one of a background, a theme, an agenda, a featured activity, or a featured video presentation (1902); and identifying, by the server computing device, a plurality of participant computing devices connected to the server computing device by a communications network, each participant computing device associated with a participant in the virtual event, wherein each participant computing device displays a graphical representation of the virtual event environment on a display screen associated with the participant computing device (1904). In some cases, the process 1900 includes, prior to executing the macro, downloading, by the server computing device, the macro from an online marketplace including a plurality of macros.
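A hedged sketch of macro execution in process 1900 follows. Modeling the macro as a plain dictionary of venue attributes is an assumption; the attribute names mirror those listed in step 1902, but the schema itself is illustrative.

    def execute_macro(venue, macro):
        # Apply only the attributes the macro actually configures.
        for key in ("background", "theme", "agenda",
                    "featured_activity", "featured_video"):
            if key in macro:
                venue[key] = macro[key]
        return venue

    # Example: a hypothetical team-building macro downloaded from a
    # marketplace, per the optional step described above.
    venue = execute_macro({}, {
        "theme": "team_building",
        "agenda": ["welcome", "guest speech", "exercise", "wrap-up"],
        "featured_video": "intro.mp4",
    })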
FIG. 20 is a block diagram of computing devices 2000, 2050 that may be used to implement the systems and methods described in this document, as either a client or as a server or plurality of servers. Computing device 2000 is intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. Computing device 2050 is intended to represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smartphones, and other similar computing devices. Additionally, computing device 2000 or 2050 can include Universal Serial Bus (USB) flash drives. The USB flash drives may store operating systems and other applications. The USB flash drives can include input/output components, such as a wireless transmitter or USB connector that may be inserted into a USB port of another computing device. The components shown here, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the inventions described and/or claimed in this document.

Computing device 2000 includes a processor 2002, memory 2004, a storage device 2006, a high-speed interface 2008 connecting to memory 2004 and high-speed expansion ports 2010, and a low-speed interface 2012 connecting to low-speed bus 2014 and storage device 2006. Each of the components 2002, 2004, 2006, 2008, 2010, and 2012 are interconnected using various busses, and may be mounted on a common motherboard or in other manners as appropriate. The processor 2002 can process instructions for execution within the computing device 2000, including instructions stored in the memory 2004 or on the storage device 2006 to display graphical information for a GUI on an external input/output device, such as display 2016 coupled to high-speed interface 2008. In other implementations, multiple processors and/or multiple buses may be used, as appropriate, along with multiple memories and types of memory. Also, multiple computing devices 2000 may be connected, with each device providing portions of the necessary operations (e.g., as a server bank, a group of blade servers, or a multi-processor system).

The memory 2004 stores information within the computing device 2000. In one implementation, the memory 2004 is a volatile memory unit or units. In another implementation, the memory 2004 is a non-volatile memory unit or units. The memory 2004 may also be another form of computer-readable medium, such as a magnetic or optical disk.

The storage device 2006 is capable of providing mass storage for the computing device 2000. In one implementation, the storage device 2006 may be or contain a computer-readable medium, such as a floppy disk device, a hard disk device, an optical disk device, or a tape device, a flash memory or other similar solid state memory device, or an array of devices, including devices in a storage area network or other configurations. A computer program product can be tangibly embodied in an information carrier. The computer program product may also contain instructions that, when executed, perform one or more methods, such as those described above. The information carrier is a computer- or machine-readable medium, such as the memory 2004, the storage device 2006, or memory on processor 2002.

The high-speed controller 2008 manages bandwidth-intensive operations for the computing device 2000, while the low-speed controller 2012 manages lower bandwidth-intensive operations. Such allocation of functions is exemplary only. In one implementation, the high-speed controller 2008 is coupled to memory 2004, display 2016 (e.g., through a graphics processor or accelerator), and to high-speed expansion ports 2010, which may accept various expansion cards (not shown). In the implementation, low-speed controller 2012 is coupled to storage device 2006 and low-speed expansion port 2014.
The low-speed expansion port, which may include various communication ports (e.g., USB, Bluetooth, Ethernet, wireless Ethernet), may be coupled to one or more input/output devices, such as a keyboard, a pointing device, a scanner, or a networking device such as a switch or router, e.g., through a network adapter.

The computing device 2000 may be implemented in a number of different forms, as shown in the figure. For example, it may be implemented as a standard server 2020, or multiple times in a group of such servers. It may also be implemented as part of a rack server system 2024. In addition, it may be implemented in a personal computer such as a laptop computer 2022. Alternatively, components from computing device 2000 may be combined with other components in a mobile device (not shown), such as device 2050. Each of such devices may contain one or more of computing devices 2000, 2050, and an entire system may be made up of multiple computing devices 2000, 2050 communicating with each other.

Computing device 2050 includes a processor 2052, memory 2064, an input/output device such as a display 2054, a communication interface 2066, and a transceiver 2068, among other components. The device 2050 may also be provided with a storage device, such as a microdrive or other device, to provide additional storage. Each of the components 2050, 2052, 2064, 2054, 2066, and 2068 are interconnected using various buses, and several of the components may be mounted on a common motherboard or in other manners as appropriate.

The processor 2052 can execute instructions within the computing device 2050, including instructions stored in the memory 2064. The processor may be implemented as a chipset of chips that include separate and multiple analog and digital processors. Additionally, the processor may be implemented using any of a number of architectures. For example, the processor 2010 may be a CISC (Complex Instruction Set Computer) processor, a RISC (Reduced Instruction Set Computer) processor, or a MISC (Minimal Instruction Set Computer) processor. The processor may provide, for example, for coordination of the other components of the device 2050, such as control of user interfaces, applications run by device 2050, and wireless communication by device 2050.

Processor 2052 may communicate with a user through control interface 2058 and display interface 2056 coupled to a display 2054. The display 2054 may be, for example, a TFT (Thin-Film-Transistor Liquid Crystal Display) display or an OLED (Organic Light Emitting Diode) display, or other appropriate display technology. The display interface 2056 may comprise appropriate circuitry for driving the display 2054 to present graphical and other information to a user. The control interface 2058 may receive commands from a user and convert them for submission to the processor 2052. In addition, an external interface 2062 may be provided in communication with processor 2052, so as to enable near area communication of device 2050 with other devices. External interface 2062 may provide, for example, for wired communication in some implementations, or for wireless communication in other implementations, and multiple interfaces may also be used.

The memory 2064 stores information within the computing device 2050. The memory 2064 can be implemented as one or more of a computer-readable medium or media, a volatile memory unit or units, or a non-volatile memory unit or units.
Expansion memory 2074 may also be provided and connected to device 2050 through expansion interface 2072, which may include, for example, a SIMM (Single In Line Memory Module) card interface. Such expansion memory 2074 may provide extra storage space for device 2050, or may also store applications or other information for device 2050. Specifically, expansion memory 2074 may include instructions to carry out or supplement the processes described above, and may include secure information also. Thus, for example, expansion memory 2074 may be provided as a security module for device 2050, and may be programmed with instructions that permit secure use of device 2050. In addition, secure applications may be provided via the SIMM cards, along with additional information, such as placing identifying information on the SIMM card in a non-hackable manner.

The memory may include, for example, flash memory and/or NVRAM memory, as discussed below. In one implementation, a computer program product is tangibly embodied in an information carrier. The computer program product contains instructions that, when executed, perform one or more methods, such as those described above. The information carrier is a computer- or machine-readable medium, such as the memory 2064, expansion memory 2074, or memory on processor 2052, that may be received, for example, over transceiver 2068 or external interface 2062.

Device 2050 may communicate wirelessly through communication interface 2066, which may include digital signal processing circuitry where necessary. Communication interface 2066 may provide for communications under various modes or protocols, such as GSM voice calls, SMS, EMS, or MMS messaging, CDMA, TDMA, PDC, WCDMA, CDMA2000, or GPRS, among others. Such communication may occur, for example, through radio-frequency transceiver 2068. In addition, short-range communication may occur, such as using a Bluetooth, WiFi, or other such transceiver (not shown). In addition, GPS (Global Positioning System) receiver module 2070 may provide additional navigation- and location-related wireless data to device 2050, which may be used as appropriate by applications running on device 2050.

Device 2050 may also communicate audibly using audio codec 2060, which may receive spoken information from a user and convert it to usable digital information. Audio codec 2060 may likewise generate audible sound for a user, such as through a speaker, e.g., in a handset of device 2050. Such sound may include sound from voice telephone calls, may include recorded sound (e.g., voice messages, music files, etc.), and may also include sound generated by applications operating on device 2050.

The computing device 2050 may be implemented in a number of different forms, as shown in the figure. For example, it may be implemented as a cellular telephone 2080. It may also be implemented as part of a smartphone 2082, personal digital assistant, or other similar mobile device.

Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, specially designed ASICs (application specific integrated circuits), computer hardware, firmware, software, and/or combinations thereof.
These various implementations can include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, coupled to receive data and instructions from, and to transmit data and instructions to, a storage system, at least one input device, and at least one output device. These computer programs (also known as programs, software, software applications or code) include machine instructions for a programmable processor, and can be implemented in a high-level procedural and/or object-oriented programming language, and/or in assembly/machine language. As used herein, the terms “machine-readable medium” and “computer-readable medium” refer to any computer program product, apparatus and/or device (e.g., magnetic discs, optical disks, memory, Programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term “machine-readable signal” refers to any signal used to provide machine instructions and/or data to a programmable processor. To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to the user and a keyboard and a pointing device (e.g., a mouse or a trackball) by which the user can provide input to the computer. Other kinds of devices can be used to provide for interaction with a user as well; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user can be received in any form, including acoustic, speech, or tactile input. The systems and techniques described here can be implemented in a computing system that includes a back end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front end component (e.g., a client computer having a graphical user interface or a Web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back end, middleware, or front end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network (“LAN”), a wide area network (“WAN”), peer-to-peer networks (having ad-hoc or static members), grid computing infrastructures, and the Internet. The computing system can include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. Although a few implementations have been described in detail above, other modifications are possible. In addition, the logic flows depicted in the figures do not require the particular order shown, or sequential order, to achieve desirable results. Other steps may be provided, or steps may be eliminated, from the described flows, and other components may be added to, or removed from, the described systems. 
Accordingly, other implementations are within the scope of the following claims.
11863337
DESCRIPTION OF EMBODIMENTS

Hereinafter, embodiments of the present invention will be described with reference to the accompanying drawings. In the drawings, the same elements will be referred to by the same reference signs and description thereof will be appropriately omitted.

First Embodiment

FIG. 1 is a diagram illustrating an environment for use of an equipment management device 10 according to a first embodiment. The equipment management device 10 is a device that manages equipment 210 provided in a conference room 20. For example, the equipment 210 is a display device such as a display or a projector. The equipment 210 may be, for example, air conditioning equipment having a temperature control function, may be lighting equipment, or may be equipment having a battery therein (for example, a radio microphone, a remote controller, or an electronic pen). The equipment 210 transmits information indicating an operation state of the equipment 210 (hereinafter referred to as operation information) to the equipment management device 10, and, when an abnormality has occurred in the equipment 210, transmits information indicating that the abnormality has occurred (hereinafter referred to as abnormality information) to the equipment management device 10.

When the abnormality information is received, the equipment management device 10 transmits abnormality-relevant information relevant to the abnormality information to the equipment 210 or another device (for example, another device in the conference room 20). The abnormality-relevant information is information which is to be transmitted due to occurrence of an abnormality and is, for example, control information for the equipment 210. In the following description, abnormality-relevant information is considered to be control information unless otherwise mentioned. A device in the conference room 20 is, for example, the equipment 210. In this case, the equipment 210 operates according to control information transmitted from the equipment management device 10. As described above, a destination of the abnormality-relevant information may be a device other than the equipment 210. For example, when the abnormality-relevant information is questionnaire data, as will be described later in another embodiment, the destination of the abnormality-relevant information may be a display device in the conference room 20 or a terminal carried into the conference room 20.

A device group 220 is provided in the conference room 20. The device group 220 generates information which is required for the equipment management device 10 to generate control information and which is information on the inside of the conference room 20 (hereinafter referred to as room information), and transmits the generated information to the equipment management device 10. In the example illustrated in the drawing, the device group 220 includes a lighting control device 222, an imaging device 224, a microphone 226, and a sensor 228. The abnormality information may be generated not by the equipment 210 but by causing the equipment management device 10 to analyze the room information generated by the device group 220.

The lighting control device 222 is a device that controls a lighting device provided in the conference room 20. The equipment management device 10 identifies a lighting state, for example, an on/off state and a lighting intensity, of the lighting device in the conference room 20 by processing information transmitted from the lighting control device 222.
The imaging device 224 repeatedly images the inside of the conference room 20 and transmits generated images to the equipment management device 10. The equipment management device 10 generates information on persons in the conference room 20 (hereinafter referred to as person information) by processing the images. The person information includes, for example, the number of persons in the conference room 20 and their individual positions and movements.

The microphone 226 generates voice information indicating voice in the conference room 20 and transmits the generated voice information to the equipment management device 10. The equipment management device 10 generates necessary data by processing the voice information. For example, the equipment management device 10 generates text data indicating words spoken in the conference room 20 by performing a voice recognition process on the voice information.

The sensor 228 includes at least an illuminance sensor and a temperature sensor and transmits information indicating detection values to the equipment management device 10. The equipment management device 10 can also detect an abnormality in the device group 220 by processing the information transmitted from the sensor 228. For example, when the equipment 210 is a lighting device and the information transmitted from the lighting control device 222 indicates that the equipment 210 is turned on, it is determined that an abnormality has occurred in the equipment 210 when the detection value from the illuminance sensor serving as the sensor 228 does not satisfy a reference. When the equipment 210 is air conditioning equipment having a temperature control function, it is determined that an abnormality has occurred in the equipment 210 when the detection value from the temperature sensor serving as the sensor 228 departs from a temperature range estimated on the basis of the operation information of the equipment 210 (for example, a set temperature).

The equipment management device 10 causes a terminal of a manager who manages the equipment 210 (hereinafter referred to as a manager terminal 30) to display information on the abnormality occurring in the equipment 210 as necessary. The devices or equipment in the conference room 20 may communicate with other devices in a wired manner or a wireless manner.

FIG. 2 is a diagram illustrating an example of a functional configuration of the equipment management device 10. The equipment management device 10 includes an acquirer 110, a conference information generator 120, and an information transmitter 130. The acquirer 110 acquires operation information and abnormality information from the equipment 210 and acquires various types of information from the device group 220. The conference information generator 120 generates information indicating a status of a conference which is carried out in the conference room 20 (hereinafter referred to as conference information) by processing information output from the device group 220. The conference information includes at least one of information indicating that a conference is in progress, information indicating that a conference has stopped, and information indicating that a conference has ended. A specific example of the routine which is performed by the conference information generator 120 will be described later.
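The sensor-based abnormality checks described above (the illuminance reference for a lighting device and the estimated temperature range for air conditioning equipment) can be sketched as follows. This is a minimal sketch under assumed data shapes; the reference values and the estimate_range() helper are illustrative assumptions, not values from the embodiment.

    def lighting_abnormal(lighting_on, illuminance, reference_lux=300.0):
        # A lit room whose illuminance does not satisfy the reference is
        # treated as an abnormality of the lighting equipment.
        return lighting_on and illuminance < reference_lux

    def estimate_range(set_temperature, tolerance=2.0):
        # Hypothetical estimate of the acceptable room-temperature range
        # derived from the operation information (the set temperature).
        return (set_temperature - tolerance, set_temperature + tolerance)

    def air_conditioning_abnormal(set_temperature, room_temperature):
        # The detected temperature departing from the estimated range is
        # treated as an abnormality of the air conditioning equipment.
        low, high = estimate_range(set_temperature)
        return not (low <= room_temperature <= high)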
The information transmitter 130 varies the timing at which abnormality-relevant information, for example, control information, is transmitted to the equipment 210 (hereinafter referred to as a first timing) according to a progress status of the conference indicated by the conference information generated by the conference information generator 120. Then, the information transmitter 130 transmits the abnormality-relevant information to the equipment 210 at the first timing. When the first timing is varied according to a progress status of the conference, the information transmitter 130 selects the first timing from a plurality of candidates. Examples of the plurality of candidates include a timing at which the abnormality-relevant information is transmitted immediately and a timing at which the abnormality-relevant information is transmitted later. A specific example of the routine which is performed by the information transmitter 130 and a specific example of the abnormality-relevant information will be described later.

FIG. 3 is a block diagram illustrating a hardware configuration of the equipment management device 10. The equipment management device 10 includes a bus 1010, a processor 1020, a memory 1030, a storage device 1040, an input/output interface 1050, and a network interface 1060. The bus 1010 is a data transmission path that allows the processor 1020, the memory 1030, the storage device 1040, the input/output interface 1050, and the network interface 1060 to transmit and receive data to and from each other. A method of connecting the processor 1020 and the like is not limited to the bus. The processor 1020 is a processor that is realized by a central processing unit (CPU), a graphics processing unit (GPU), or the like. The memory 1030 is a main storage device that is realized by a random access memory (RAM). The storage device 1040 is an auxiliary storage device that is realized by a hard disk drive (HDD), a solid state drive (SSD), a memory card, a read only memory (ROM), or the like. The storage device 1040 stores program modules for realizing the functions (for example, the acquirer 110, the conference information generator 120, and the information transmitter 130) of the equipment management device 10. By causing the processor 1020 to read the program modules into the memory 1030 and to execute the program modules, the functions corresponding to the program modules are realized. The input/output interface 1050 is an interface that connects various input/output devices to the equipment management device 10. The network interface 1060 is an interface that connects the equipment management device 10 to other devices (for example, the equipment 210, the device group 220, and a manager terminal 30) over a network.

FIG. 4 is a flowchart illustrating an example of a routine that is performed by the equipment management device 10. As described above, the equipment 210 in the conference room 20 transmits abnormality information when an abnormality occurs in the equipment 210. The abnormality information includes information indicating the type of the equipment 210 and information indicating the type of the abnormality (for example, an error code). The acquirer 110 of the equipment management device 10 acquires abnormality information from the equipment 210 and acquires room information generated by the device group 220 (Step S10). Here, the acquirer 110 may constantly acquire room information regardless of the abnormality information, or may acquire room information when abnormality information is acquired.
In the latter case, the acquirer 110 may start acquisition of the room information after the abnormality information has been acquired. When the room information is constantly (for example, almost in real time) acquired, the acquirer 110 may generate the abnormality information by analyzing the room information instead of acquiring the abnormality information from the equipment 210. This analysis may be performed by a device other than the equipment management device 10. In this case, the acquirer 110 acquires the abnormality information from that device.

Subsequently, the conference information generator 120 generates conference information by processing the room information (Step S20). For example, the conference information generator 120 generates text data indicating spoken details in the conference room 20 from voice information generated by the microphone 226. When a key word indicating occurrence of an abnormality is not included in the text data, the conference information generator 120 generates conference information indicating that a conference is in progress. When a key word indicating occurrence of an abnormality is included in the text data, the conference information generator 120 generates conference information indicating that the conference has stopped. When a key word indicating that the conference has ended is included in the text data, the conference information generator 120 generates conference information indicating that the conference has ended. For example, the key words are stored in advance in the equipment management device 10.

For example, the conference information generator 120 generates person information using an image generated by the imaging device 224 and generates the conference information using the person information. For example, when all persons in the conference room 20 are seated, the conference information generator 120 generates the conference information indicating that the conference is in progress. When a plurality of persons are located near the equipment 210 or when a plurality of persons alternately come near the equipment 210, the conference information generator 120 generates the conference information indicating that the conference has stopped. When persons in the conference room 20 simultaneously start moving, the conference information generator 120 generates the conference information indicating that the conference has ended.

For example, the conference information generator 120 may generate the conference information using the number of persons in the conference room 20. For example, the conference information generator 120 acquires the number of persons scheduled to participate in the conference and generates the conference information indicating that the conference is in progress when a ratio of the number of persons in the conference room 20 to the number of persons scheduled to participate in the conference is equal to or greater than a reference ratio. Here, the number of persons in the conference room 20 is identified, for example, by processing an image generated by the imaging device 224. The conference information generator 120 acquires the number of persons scheduled to participate in the conference, for example, from a schedule management device. For example, the conference information generator 120 may generate the conference information using the number of persons who have spoken within a predetermined time (for example, within 3 minutes) immediately before.
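The key-word rules described above can be sketched in a few lines. This is a minimal sketch assuming simple substring matching; the key word lists are illustrative stand-ins for those stored in advance in the equipment management device 10.

    ABNORMALITY_KEY_WORDS = ("not displayed", "not visible", "no sound")
    END_KEY_WORDS = ("that concludes the meeting", "meeting adjourned")

    def conference_state_from_text(text_data):
        # text_data is the transcription produced by the voice recognition
        # process applied to the voice information from the microphone 226.
        lowered = text_data.lower()
        if any(k in lowered for k in END_KEY_WORDS):
            return "ended"
        if any(k in lowered for k in ABNORMALITY_KEY_WORDS):
            return "stopped"
        return "in_progress"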
For example, the conference information generator 120 identifies the number of persons who have spoken using voiceprints included in voice data generated by the microphone 226. When the number of persons who have spoken satisfies a reference, the conference information generator 120 generates the conference information indicating that the conference is in progress. For example, the conference information generator 120 may generate the conference information using the number of persons looking at a screen of a display or a projection surface of a projector. For example, when the number of such persons satisfies a reference, the conference information generator 120 generates the conference information indicating that the conference is in progress. The number of persons looking at a screen of a display or a projection surface of a projector is identified, for example, by processing an image generated by the imaging device 224. For example, the conference information generator 120 may generate the conference information using whether input data of the display or the projector changes. For example, when input data of the display or the projector starts to change, the conference information generator 120 generates the conference information indicating that the conference is in progress.

When the conference information is generated using the voice information generated by the microphone 226, the conference information generator 120 may generate the conference information using results obtained by machine learning based on the voice information or the text information. When the conference information is generated using the image generated by the imaging device 224, the conference information generator 120 may generate the conference information using results obtained by machine learning based on the image.

Subsequently, the information transmitter 130 identifies abnormality-relevant information to be transmitted, for example, control information for coping with the abnormality, using the type of the abnormality. The information transmitter 130 determines a timing at which the control information is transmitted to the equipment 210, that is, a first timing, using the conference information (Step S30). Here, the information transmitter 130 preferably determines the first timing additionally using the abnormality information. More specifically, the abnormality information includes information indicating details of the abnormality occurring in the equipment 210. Then, the information transmitter 130 determines the first timing using the details of the abnormality indicated by the abnormality information. This is because quick countermeasures may be needed depending on the type of the abnormality. Subsequently, the information transmitter 130 transmits the control information at the first timing (Step S40). Instead of Steps S30 and S40, the information transmitter 130 may transmit the control information to the equipment 210 when the conference information satisfies a reference.

A specific example of the first timing will be described below. In this example, when details of the abnormality satisfy a specific condition, the information transmitter 130 does not transmit the abnormality-relevant information until the conference information indicates that the conference has stopped or ended. When details of the abnormality do not satisfy the specific condition, the information transmitter 130 may transmit the abnormality-relevant information immediately after the abnormality has been detected.
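The selection between the two candidate timings described above can be sketched as follows. The requires_quick_countermeasure() classifier is hypothetical; the embodiment only requires that the first timing depend on the details of the abnormality and the conference status.

    def requires_quick_countermeasure(abnormality):
        # Hypothetical rule for the "specific condition": e.g., a missing
        # video signal during a conference needs an immediate alert, while
        # a light source nearing end of life can wait for a break.
        return not abnormality.get("satisfies_specific_condition", False)

    def choose_first_timing(abnormality, conference_state):
        # conference_state is derived from the conference information and is
        # one of "in_progress", "stopped", "ended", or "not_started".
        if requires_quick_countermeasure(abnormality):
            return "immediately"
        if conference_state in ("stopped", "ended"):
            return "immediately"
        return "defer"  # hold until the conference stops or ends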
For example, when the equipment 210 is a display or a projector and the abnormality information indicates that a video signal has not been input to the equipment, the abnormality-relevant information is control information. The control information is information for causing the equipment 210 to display an alert. When the conference information indicates that the conference is in progress, the information transmitter 130 transmits the control information immediately.

When the equipment 210 is a display or a projector including a plurality of video input terminals and the abnormality information indicates that a video signal is input to the equipment 210 but the selected input terminal is not the input terminal to which display data is being input, the abnormality-relevant information is control information. The control information is a control command for causing the equipment 210 to select the input terminal to which display data is being input. When the conference information indicates that the conference is in progress, the information transmitter 130 transmits the control information immediately.

It is assumed that the equipment 210 is a display or a projector and that contrast (a luminance difference) of the display can be identified by processing an image obtained by imaging the display screen of the display or the projection surface of the projector (for example, an image generated by the imaging device 224). For example, the equipment 210 or a management device thereof identifies a difference between the luminance of characters in the image and the luminance near the characters as the contrast. An example of the abnormality information indicates that the contrast is equal to or less than a reference. In this case, the abnormality-relevant information is control information. The control information is information for causing the equipment 210 to change display (projection) parameters such that the contrast increases. When the conference information indicates that the conference is in progress, the information transmitter 130 transmits the control information immediately. In this example, when the equipment 210 additionally includes a light in the conference room 20, the information transmitter 130 may further control brightness of the light instead of or in addition to control of the display or the projector. With this configuration, it is also possible to improve visibility of a screen or a projected image.

When the equipment 210 is a projector, a decrease in contrast may be caused by deterioration of a light source. In this case, the control information is information for causing the projector to display a recommendation for exchanging the light source of the projector. When the conference information indicates that the conference has not started, that the conference has stopped, or that the conference has ended, the information transmitter 130 transmits control information for performing the displaying. Here, the information transmitter 130 preferably transmits the control information when the conference has stopped or ended. The information transmitter 130 may additionally transmit the abnormality-relevant information to the manager terminal 30 such that a recommendation for exchanging the light source of the projector is displayed thereon. The timing at which the abnormality-relevant information is transmitted is arbitrary, and the abnormality-relevant information may be transmitted, for example, immediately after the abnormality information has been acquired.
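The contrast check described above, a luminance difference between characters and the area near them, can be sketched as follows. This is a hedged sketch: the character-mask input and the reference threshold are assumptions about data the imaging pipeline would supply.

    def display_contrast(luma, char_mask):
        # luma: 2D list of pixel luminance values from the captured image.
        # char_mask: 2D list of booleans, True where a character pixel lies.
        chars, near = [], []
        for y, row in enumerate(luma):
            for x, value in enumerate(row):
                (chars if char_mask[y][x] else near).append(value)
        if not chars or not near:
            return None
        # Difference between character luminance and surrounding luminance.
        return abs(sum(chars) / len(chars) - sum(near) / len(near))

    def contrast_abnormal(luma, char_mask, reference=40.0):
        # Abnormality when the contrast is equal to or less than a reference.
        contrast = display_contrast(luma, char_mask)
        return contrast is not None and contrast <= reference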
When the equipment 210 is a projector, a decrease in contrast may be caused by deterioration of the light source. In this case, the control information is information for causing the projector to display a recommendation for exchanging the light source of the projector. When the conference information indicates that the conference has not started, has stopped, or has ended, the information transmitter 130 transmits the control information for performing this display. Here, the information transmitter 130 preferably transmits the control information when the conference has stopped or ended. The information transmitter 130 may additionally transmit the abnormality-relevant information to the manager terminal 30 such that a recommendation for exchanging the light source of the projector is displayed thereon. The timing at which this abnormality-relevant information is transmitted is arbitrary; it may be transmitted, for example, immediately after the abnormality information has been acquired. In this example, the equipment 210 may count a cumulative emission time of the light source. In this case, the information transmitter 130 acquires the cumulative emission time from the equipment 210 after the acquirer 110 has acquired the abnormality information. At this time, the equipment 210 may transmit the cumulative emission time in response to a request from the information transmitter 130. The information transmitter 130 may set a condition that the cumulative emission time is greater than a reference time as the condition for transmitting the abnormality-relevant information (which may be control information). The equipment 210 may also transmit the cumulative emission time along with the abnormality information to the equipment management device 10.

When the equipment 210 is air conditioning equipment, an example of the details of the abnormality indicated by the abnormality information is that the room temperature departs from a reference range. In this case, the information transmitter 130 transmits the abnormality-relevant information when the conference information indicates that the conference has stopped or ended. That is, when the equipment 210 is air conditioning equipment, the "specific condition" in the details of the abnormality is a condition that the room temperature departs from the reference range. The abnormality-relevant information which is transmitted may be questionnaire data and may indicate maintenance to be performed on the equipment 210. The details of a questionnaire transmitted here relate to, for example, the degree of comfort with the room temperature of the conference room 20. The method of transmitting questionnaire data is the same as in a second embodiment which will be described later. The destination of the abnormality-relevant information may be a display device in the conference room 20, for example, the display or the projector, or may be the manager terminal 30. On the other hand, when the equipment 210 is air conditioning equipment and the details of the abnormality indicated by the abnormality information are that the air conditioning equipment does not operate (that is, when the specific condition is not satisfied), the information transmitter 130 transmits the abnormality-relevant information immediately.

When the equipment 210 is a device having a battery therein, such as a radio microphone, a remote controller, or an electronic pen, and at least the state of charge or the output voltage of the battery becomes equal to or less than a reference value, the equipment 210 transmits information indicating this situation to the equipment management device 10 as the abnormality information. Then, the information transmitter 130 of the equipment management device 10 transmits the abnormality-relevant information, for example, information for outputting a recommendation for exchanging the battery, to the display or the projector in the conference room 20 when the conference information indicates that the conference has stopped or ended. That is, when the equipment 210 is a device having a battery therein, the "specific condition" in the details of the abnormality is a condition that the state of charge or the output voltage of the battery is equal to or less than the reference value. The abnormality-relevant information transmitted here is, for example, information for performing display or voice output recommending exchange of the battery. The information transmitter 130 may also transmit this information to the manager terminal 30; in that case, the information transmitter 130 preferably performs the transmission to the manager terminal 30 immediately after the abnormality information has been acquired.
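The following non-limiting sketch gathers the three threshold checks described above: the air conditioning and battery "specific conditions" that defer transmission until the conference stops or ends, and the cumulative-emission-time gate for the light-source recommendation. The threshold values (TEMP_RANGE, CHARGE_REFERENCE, EMISSION_REFERENCE_H) and function names are assumptions for illustration.

```python
# Illustrative sketch only: evaluating the "specific condition" for the air
# conditioning and battery examples, plus the cumulative-emission-time gate
# for the projector light-source recommendation. All thresholds are assumed.
TEMP_RANGE = (18.0, 28.0)        # assumed room-temperature reference range (deg C)
CHARGE_REFERENCE = 0.2           # assumed state-of-charge reference (20 %)
EMISSION_REFERENCE_H = 3000.0    # assumed light-source reference time (hours)


def aircon_specific_condition(room_temp_c: float) -> bool:
    """Specific condition for air conditioning equipment: the room temperature
    departs from the reference range, so transmission is deferred until the
    conference has stopped or ended."""
    low, high = TEMP_RANGE
    return not (low <= room_temp_c <= high)


def battery_specific_condition(state_of_charge: float) -> bool:
    """Specific condition for battery-powered equipment (radio microphone,
    remote controller, electronic pen): charge at or below the reference."""
    return state_of_charge <= CHARGE_REFERENCE


def recommend_lamp_exchange(cumulative_emission_h: float) -> bool:
    """Gate for the light-source recommendation: only transmit when the
    cumulative emission time exceeds the reference time."""
    return cumulative_emission_h > EMISSION_REFERENCE_H


if __name__ == "__main__":
    print(aircon_specific_condition(31.0))   # True -> defer transmission
    print(battery_specific_condition(0.15))  # True -> defer transmission
    print(recommend_lamp_exchange(3500.0))   # True -> recommendation allowed
```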
In addition to the aforementioned examples, as a standard behavior, the information transmitter 130 preferably transmits the control information when the conference information indicates that the conference has not started, that the conference has stopped, or that the conference has ended.

According to this embodiment, when an abnormality occurs in the equipment 210 in the conference room 20, the information transmitter 130 of the equipment management device 10 transmits control information for coping with the abnormality to the equipment 210. Here, the information transmitter 130 determines the timing at which the control information is transmitted to the equipment 210 using the conference information indicating the conference status. Alternatively, the information transmitter 130 transmits the control information to the equipment 210 when the conference information satisfies a reference. Accordingly, it is possible to cope with an abnormality in the equipment 210 at an appropriate timing.

In this embodiment, the microphone 226 illustrated in FIG. 1 may be a so-called smart speaker. In this case, the smart speaker includes a speaker in addition to the microphone 226. A user of the conference room 20 can operate the equipment 210 using the smart speaker. Specifically, the information transmitter 130 of the equipment management device 10 generates information for operating the equipment 210 using voice data input to the microphone of the smart speaker and transmits the information to the equipment 210.

For example, the smart speaker or the information transmitter 130 of the equipment management device 10 analyzes the details of a conversation in the conference room 20. When the details satisfy a specific condition, the information transmitter 130 determines that an abnormality has occurred in the conference room 20, identifies the details of the abnormality, and generates the abnormality information. The information transmitter 130 then generates abnormality-relevant information including voice data and transmits the voice data to the smart speaker. The smart speaker outputs voice on the basis of the voice data.

For example, when the equipment 210 is a display device such as a display or a projector, an image that is supposedly being input from the outside may not be displayed on the display device. In this case, a conversation in the conference room 20 is likely to include a phrase such as "not displayed" or "not visible." When such a phrase is included in the conversation, the information transmitter 130 outputs a voice such as "Is the right terminal selected?" or "Will the input terminal be switched to another?" from the smart speaker. When, for example, the voice "Will the input terminal be switched to another?" is output from the smart speaker and a word indicating "YES" is acquired by the microphone of the smart speaker, the information transmitter 130 transmits a command for switching the input terminal to the equipment 210.

When the equipment 210 is a display device such as a display or a projector, the contrast or brightness of its display may be changed in a similar interactive manner.
For example, the information transmitter 130 can identify the contrast or brightness of the display by processing an image obtained by imaging the screen of the display or the projection surface of the projector (for example, an image generated by the imaging device 224). When the contrast or brightness is equal to or less than a reference, the information transmitter 130 outputs a voice such as "Will the contrast be increased?" or "Will the screen be made brighter?" from the smart speaker. When a word indicating "YES" is acquired by the microphone of the smart speaker, the information transmitter 130 transmits a command for increasing the contrast (or making the screen brighter) to the equipment 210.

As another example, consider a case in which an Internet conference is held in the conference room 20. In this case, the equipment 210 is a device (for example, a personal computer) that is used for the Internet conference. The information transmitter 130 can identify movement of the mouth of a person in the conference room 20 by processing an image generated by the imaging device 224 and thus determine whether the person is uttering words. When the person is uttering words but the output from the microphone 226 of the equipment 210 is zero, the information transmitter 130 outputs a voice such as "Is the microphone muted?" or "Will the microphone be unmuted?" from the speaker. When a word indicating "YES" is acquired by the microphone of the smart speaker as a response to the voice "Will the microphone be unmuted?," the information transmitter 130 transmits a command for unmuting the microphone 226 to the equipment 210.
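As a non-limiting illustration of the smart-speaker interaction described above, the sketch below detects a trouble phrase in a transcribed conversation, voices a confirmation question, and issues the input-terminal switch command on an affirmative reply. The phrase lists, the send_command stand-in, and the command string are all hypothetical.

```python
# Illustrative sketch only: phrase-driven dialogue for the "not displayed"
# example. Phrase lists, send_command(), and the command string are assumed.
from typing import Optional

TROUBLE_PHRASES = ("not displayed", "not visible")
AFFIRMATIVES = ("yes", "yeah", "please")


def send_command(equipment_id: str, command: str) -> None:
    """Stand-in for transmitting a control command to the equipment 210."""
    print(f"-> {equipment_id}: {command}")


def handle_conversation(transcript: str, reply: Optional[str],
                        equipment_id: str) -> Optional[str]:
    """Return a question to voice through the smart speaker when a trouble
    phrase is heard; on an affirmative reply, transmit the switch command."""
    if not any(p in transcript.lower() for p in TROUBLE_PHRASES):
        return None  # no abnormality inferred from the conversation
    if reply and any(a in reply.lower() for a in AFFIRMATIVES):
        send_command(equipment_id, "SWITCH_INPUT_TERMINAL")
        return None
    return "Will the input terminal be switched to another?"


if __name__ == "__main__":
    # Trouble phrase heard, no reply yet: the smart speaker asks a question.
    print(handle_conversation("the slide is not displayed", None, "display-1"))
    # Affirmative reply: the input-terminal switch command is transmitted.
    handle_conversation("the slide is not displayed", "yes", "display-1")
```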
Second Embodiment

FIG. 5 is a diagram illustrating an example of a functional configuration of an equipment management device 10 according to a second embodiment. The equipment management device 10 illustrated in the drawing has the same configuration as the equipment management device 10 according to the first embodiment except that a timing determiner 140 and a questionnaire transmitter 150 are provided.

The questionnaire transmitter 150 transmits questionnaire data indicating a questionnaire associated with the conference room 20 to a display device (for example, a display or a projector) in the conference room 20 or to a terminal of a conference participant. The terminal of a conference participant is identified, for example, using a mail address of the participant. In this case, the questionnaire transmitter 150 acquires the mail address of the participant, for example, from a schedule management device for the conference room 20.

When an abnormality occurs in the equipment 210, the questionnaire transmitter 150 may determine (or correct) the questionnaire data additionally using the abnormality information transmitted from the equipment 210. For example, the questionnaire transmitter 150 adds an item associated with the type of the abnormality indicated by the abnormality information to the questionnaire data. The information required for generating the questionnaire data is stored in advance in the questionnaire transmitter 150. For example, the questionnaire transmitter 150 stores an item for each type of the equipment 210 and for each type of abnormality, and adds the item corresponding to the abnormality information received from the equipment 210 to the questionnaire data.

Then, the timing determiner 140 determines a timing at which the questionnaire data is transmitted (hereinafter referred to as a second timing) using the conference information generated by the conference information generator 120. For example, the timing determiner 140 transmits the questionnaire data when the conference information satisfies a reference, specifically, at a timing at which the conference has ended. Instead of transmitting the questionnaire data by mail, the timing determiner 140 may display the details of the questionnaire, or information indicating that there is a questionnaire, using the projector or the display in the conference room 20, together with code information indicating a URL for inputting response data.

According to this embodiment, the same advantages as in the first embodiment can be achieved. In addition, it is possible to transmit questionnaire data indicating a questionnaire associated with the conference room 20. In this embodiment, questionnaire data instead of control information may be used as the abnormality-relevant information. In this case, the timing determiner 140 and the questionnaire transmitter 150 may be omitted.
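The following non-limiting sketch illustrates how the questionnaire transmitter 150 and the timing determiner 140 described above might cooperate: an item is added per equipment type and abnormality type from a pre-stored table, and transmission is deferred to the second timing (the end of the conference). The table contents and all identifiers are assumptions, not the disclosed implementation.

```python
# Illustrative sketch only: per-abnormality questionnaire assembly and the
# second-timing gate. The item table and dataclass are assumed.
from dataclasses import dataclass, field

# Item stored in advance per equipment type and abnormality type, as done by
# the questionnaire transmitter 150 (contents here are invented examples).
ITEM_TABLE = {
    ("air_conditioner", "temperature_out_of_range"):
        "How comfortable was the room temperature during the conference?",
}


@dataclass
class Questionnaire:
    room: str
    items: list[str] = field(
        default_factory=lambda: ["Overall, how was the conference room?"])

    def add_for_abnormality(self, equipment_type: str,
                            abnormality_type: str) -> None:
        """Add the item corresponding to the received abnormality information."""
        item = ITEM_TABLE.get((equipment_type, abnormality_type))
        if item:
            self.items.append(item)


def maybe_send(questionnaire: Questionnaire, conference_ended: bool) -> bool:
    """Second timing: transmit only once the conference has ended."""
    if conference_ended:
        print(f"Sending to participants of {questionnaire.room}: "
              f"{questionnaire.items}")
        return True
    return False


if __name__ == "__main__":
    q = Questionnaire("conference room 20")
    q.add_for_abnormality("air_conditioner", "temperature_out_of_range")
    maybe_send(q, conference_ended=False)  # deferred: conference in progress
    maybe_send(q, conference_ended=True)   # transmitted at the second timing
```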
While embodiments of the present invention have been described above with reference to the drawings, the embodiments are merely examples of the present invention, and various configurations other than those described above may be employed.

A plurality of steps (processes) are described sequentially in the flowcharts above, but the order in which the steps are performed in the embodiments is not limited to the described order. In the embodiments, the order of the steps illustrated in the flowcharts can be changed as long as doing so does not interfere with the details of the steps. The aforementioned embodiments can also be combined as long as no conflicts arise in their details.

Some or all of the aforementioned embodiments may be described as the following additional configurations, but the present invention is not limited thereto.

1. An equipment management device including:
an acquirer configured to acquire abnormality information indicating that an abnormality has occurred in equipment provided in a conference room;
a conference information generator configured to generate conference information indicating a progress status of a conference which is carried out in the conference room by processing information output from a device provided in the conference room; and
an information transmitter configured to make a first timing different according to the progress status of the conference indicated by the conference information, the first timing being a timing at which abnormality-relevant information, which is information to be transmitted due to occurrence of the abnormality, is transmitted, and to transmit the abnormality-relevant information to the equipment or another device at the first timing.

2. The equipment management device according to 1, wherein the information transmitter is configured to select the first timing from a plurality of candidates when the first timing is made different according to the progress status of the conference, and
wherein the plurality of candidates include at least a timing at which the abnormality-relevant information is transmitted immediately and a timing at which the abnormality-relevant information is transmitted later.

3. The equipment management device according to 1 or 2, wherein the conference information includes at least one of information indicating that the conference is in progress, information indicating that the conference has stopped, and information indicating that the conference has ended.

4. The equipment management device according to any one of 1 to 3, wherein the abnormality information includes information indicating details of the abnormality, and
wherein the information transmitter is configured to determine the first timing additionally using the details of the abnormality indicated by the abnormality information.

5. The equipment management device according to 4, wherein the information transmitter is configured not to transmit the abnormality-relevant information until the conference stops or ends when the details of the abnormality satisfy a specific condition.

6. The equipment management device according to 5, wherein the equipment is air conditioning equipment that adjusts the room temperature of the conference room, and
wherein the specific condition is a condition that the room temperature departs from a reference range.

7. The equipment management device according to 5, wherein the equipment has a battery therein, and
wherein the specific condition is a condition that a state of charge or an output voltage of the battery is equal to or less than a reference.

8. The equipment management device according to 4, wherein the equipment is a display or a projector,
wherein the abnormality information indicates that a video signal is not input to the equipment,
wherein the information transmitter is configured to determine the first timing such that the abnormality-relevant information is transmitted while the conference is in progress when the conference information indicates that the conference is in progress, and
wherein the abnormality-relevant information is information for displaying an alert.

9. The equipment management device according to 4, wherein the equipment is a display or a projector including a plurality of video input terminals,
wherein the abnormality information indicates that a video signal is input to the equipment but a selected input terminal is not the input terminal to which display data is input,
wherein the information transmitter is configured to determine the first timing such that the abnormality-relevant information is transmitted while the conference is in progress when the conference information indicates that the conference is in progress, and
wherein the abnormality-relevant information is a control command for causing the equipment to select the input terminal to which the display data is input.

10. The equipment management device according to 4, wherein the equipment is a display or a projector,
wherein the abnormality information is generated by processing an image acquired by imaging a display on the display or the projector and indicates that contrast of the display is equal to or less than a reference,
wherein the information transmitter is configured to determine the first timing such that the abnormality-relevant information is transmitted while the conference is in progress when the conference information indicates that the conference is in progress, and
wherein the abnormality-relevant information is information for instructing the equipment to increase the contrast.

11. The equipment management device according to 9, wherein the equipment further includes a light in the conference room, and
wherein the abnormality-relevant information further includes information for controlling brightness of the light.
12. The equipment management device according to 4, wherein the equipment is a projector,
wherein the abnormality information is generated by processing an image acquired by imaging a result of display by the projector and indicates that contrast of the image is equal to or less than a reference,
wherein the information transmitter is configured to determine the first timing such that the abnormality-relevant information is transmitted when the conference has not started, when the conference has stopped, or when the conference has ended, and
wherein the abnormality-relevant information is information for instructing the projector to display a message for recommending exchange of a light source of the projector.

13. The equipment management device according to 12, wherein the information transmitter is configured to further cause a terminal of a manager of the conference room to recommend exchange of the light source of the projector.

14. The equipment management device according to 12 or 13, wherein the information transmitter is configured to determine the first timing such that the abnormality-relevant information is transmitted after the conference has ended.

15. The equipment management device according to any one of 12 to 14, wherein the information transmitter is configured to set a condition that a cumulative emission time of the light source is greater than a reference time as a condition for performing the display.

16. The equipment management device according to any one of 1 to 15, wherein the abnormality-relevant information is questionnaire data indicating a questionnaire associated with the conference room.

17. The equipment management device according to any one of 1 to 15, further including:
a questionnaire transmitter configured to transmit questionnaire data indicating a questionnaire associated with the conference room to a display device in the conference room or a terminal of a participant of the conference; and
a timing determiner configured to determine a second timing at which the questionnaire data is transmitted using the conference information.

18. The equipment management device according to 16 or 17, wherein the questionnaire data is determined using the abnormality information.

19. An equipment management method that is performed by a computer, the equipment management method including:
acquiring abnormality information indicating that an abnormality has occurred in equipment provided in a conference room;
generating conference information indicating a progress status of a conference which is carried out in the conference room by processing information output from a device provided in the conference room; and
making a first timing different according to the progress status of the conference indicated by the conference information, the first timing being a timing at which abnormality-relevant information, which is information to be transmitted due to occurrence of the abnormality, is transmitted, and transmitting the abnormality-relevant information to the equipment or another device at the first timing.

20. The equipment management method according to 19, further including selecting the first timing from a plurality of candidates when the first timing is made different according to the progress status of the conference,
wherein the plurality of candidates include at least a timing at which the abnormality-relevant information is transmitted immediately and a timing at which the abnormality-relevant information is transmitted later.
21. The equipment management method according to 19 or 20, wherein the conference information includes at least one of information indicating that the conference is in progress, information indicating that the conference has stopped, and information indicating that the conference has ended.

22. The equipment management method according to any one of 19 to 21, wherein the abnormality information includes information indicating details of the abnormality, and
wherein the equipment management method further includes determining the first timing additionally using the details of the abnormality indicated by the abnormality information.

23. The equipment management method according to 22, further including not transmitting the abnormality-relevant information until the conference stops or ends when the details of the abnormality satisfy a specific condition.

24. The equipment management method according to 23, wherein the equipment is air conditioning equipment that adjusts the room temperature of the conference room, and
wherein the specific condition is a condition that the room temperature departs from a reference range.

25. The equipment management method according to 23, wherein the equipment has a battery therein, and
wherein the specific condition is a condition that a state of charge or an output voltage of the battery is equal to or less than a reference.

26. The equipment management method according to 22, wherein the equipment is a display or a projector,
wherein the abnormality information indicates that a video signal is not input to the equipment,
wherein the equipment management method further includes determining the first timing such that the abnormality-relevant information is transmitted while the conference is in progress when the conference information indicates that the conference is in progress, and
wherein the abnormality-relevant information is information for displaying an alert.

27. The equipment management method according to 22, wherein the equipment is a display or a projector including a plurality of video input terminals,
wherein the abnormality information indicates that a video signal is input to the equipment but a selected input terminal is not the input terminal to which display data is input,
wherein the equipment management method further includes determining the first timing such that the abnormality-relevant information is transmitted while the conference is in progress when the conference information indicates that the conference is in progress, and
wherein the abnormality-relevant information is a control command for causing the equipment to select the input terminal to which the display data is input.

28. The equipment management method according to 22, wherein the equipment is a display or a projector,
wherein the abnormality information is generated by processing an image acquired by imaging a display on the display or the projector and indicates that contrast of the display is equal to or less than a reference,
wherein the equipment management method further includes determining the first timing such that the abnormality-relevant information is transmitted while the conference is in progress when the conference information indicates that the conference is in progress, and
wherein the abnormality-relevant information is information for instructing the equipment to increase the contrast.
29. The equipment management method according to 27, wherein the equipment further includes a light in the conference room, and
wherein the abnormality-relevant information further includes information for controlling brightness of the light.

30. The equipment management method according to 22, wherein the equipment is a projector,
wherein the abnormality information is generated by processing an image acquired by imaging a result of display by the projector and indicates that contrast of the image is equal to or less than a reference,
wherein the equipment management method further includes determining the first timing such that the abnormality-relevant information is transmitted when the conference has not started, when the conference has stopped, or when the conference has ended, and
wherein the abnormality-relevant information is information for instructing the projector to display a message for recommending exchange of a light source of the projector.

31. The equipment management method according to 30, further including causing a terminal of a manager of the conference room to recommend exchange of the light source of the projector.

32. The equipment management method according to 30 or 31, further including determining the first timing such that the abnormality-relevant information is transmitted after the conference has ended.

33. The equipment management method according to any one of 30 to 32, further including setting a condition that a cumulative emission time of the light source is greater than a reference time as a condition for performing the display.

34. The equipment management method according to any one of 19 to 33, wherein the abnormality-relevant information is questionnaire data indicating a questionnaire associated with the conference room.

35. The equipment management method according to any one of 19 to 33, further including:
transmitting questionnaire data indicating a questionnaire associated with the conference room to a display device in the conference room or a terminal of a participant of the conference; and
determining a second timing at which the questionnaire data is transmitted using the conference information.

36. The equipment management method according to 34 or 35, wherein the questionnaire data is determined using the abnormality information.

37. A program causing a computer to perform:
acquiring abnormality information indicating that an abnormality has occurred in equipment provided in a conference room;
generating conference information indicating a progress status of a conference which is carried out in the conference room by processing information output from a device provided in the conference room; and
making a first timing different according to the progress status of the conference indicated by the conference information, the first timing being a timing at which abnormality-relevant information, which is information to be transmitted due to occurrence of the abnormality, is transmitted, and transmitting the abnormality-relevant information to the equipment or another device at the first timing.

38. The program according to 37, wherein the transmission is configured to select the first timing from a plurality of candidates when the first timing is made different according to the progress status of the conference, and
wherein the plurality of candidates include at least a timing at which the abnormality-relevant information is transmitted immediately and a timing at which the abnormality-relevant information is transmitted later.
39. The program according to 37 or 38, wherein the conference information includes at least one of information indicating that the conference is in progress, information indicating that the conference has stopped, and information indicating that the conference has ended.

40. The program according to any one of 37 to 39, wherein the abnormality information includes information indicating details of the abnormality, and
wherein the transmission is configured to determine the first timing additionally using the details of the abnormality indicated by the abnormality information.

41. The program according to 40, wherein the transmission is configured not to transmit the abnormality-relevant information until the conference stops or ends when the details of the abnormality satisfy a specific condition.

42. The program according to 41, wherein the equipment is air conditioning equipment that adjusts the room temperature of the conference room, and
wherein the specific condition is a condition that the room temperature departs from a reference range.

43. The program according to 41, wherein the equipment has a battery therein, and
wherein the specific condition is a condition that a state of charge or an output voltage of the battery is equal to or less than a reference.

44. The program according to 40, wherein the equipment is a display or a projector,
wherein the abnormality information indicates that a video signal is not input to the equipment,
wherein the transmission is configured to determine the first timing such that the abnormality-relevant information is transmitted while the conference is in progress when the conference information indicates that the conference is in progress, and
wherein the abnormality-relevant information is information for displaying an alert.

45. The program according to 40, wherein the equipment is a display or a projector including a plurality of video input terminals,
wherein the abnormality information indicates that a video signal is input to the equipment but a selected input terminal is not the input terminal to which display data is input,
wherein the transmission is configured to determine the first timing such that the abnormality-relevant information is transmitted while the conference is in progress when the conference information indicates that the conference is in progress, and
wherein the abnormality-relevant information is a control command for causing the equipment to select the input terminal to which the display data is input.

46. The program according to 40, wherein the equipment is a display or a projector,
wherein the abnormality information is generated by processing an image acquired by imaging a display on the display or the projector and indicates that contrast of the display is equal to or less than a reference,
wherein the transmission is configured to determine the first timing such that the abnormality-relevant information is transmitted while the conference is in progress when the conference information indicates that the conference is in progress, and
wherein the abnormality-relevant information is information for instructing the equipment to increase the contrast.

47. The program according to 45, wherein the equipment further includes a light in the conference room, and
wherein the abnormality-relevant information further includes information for controlling brightness of the light.
48. The program according to 40, wherein the equipment is a projector,
wherein the abnormality information is generated by processing an image acquired by imaging a result of display by the projector and indicates that contrast of the image is equal to or less than a reference,
wherein the transmission is configured to determine the first timing such that the abnormality-relevant information is transmitted when the conference has not started, when the conference has stopped, or when the conference has ended, and
wherein the abnormality-relevant information is information for instructing the projector to display a message for recommending exchange of a light source of the projector.

49. The program according to 48, wherein the transmission is configured to further cause a terminal of a manager of the conference room to recommend exchange of the light source of the projector.

50. The program according to 48 or 49, wherein the transmission is configured to determine the first timing such that the abnormality-relevant information is transmitted after the conference has ended.

51. The program according to any one of 48 to 50, wherein the transmission is configured to set a condition that a cumulative emission time of the light source is greater than a reference time as a condition for performing the display.

52. The program according to any one of 37 to 51, wherein the abnormality-relevant information is questionnaire data indicating a questionnaire associated with the conference room.

53. The program according to any one of 37 to 51, causing the computer to further perform:
a questionnaire transmission configured to transmit questionnaire data indicating a questionnaire associated with the conference room to a display device in the conference room or a terminal of a participant of the conference; and
a timing determination configured to determine a second timing at which the questionnaire data is transmitted using the conference information.

54. The program according to 52 or 53, wherein the questionnaire data is determined using the abnormality information.

Priority is claimed on PCT International Application No. PCT/JP2019/38485, filed Sep. 30, 2019, the content of which is incorporated herein by reference.

REFERENCE SIGNS LIST

10 Equipment management device
20 Conference room
30 Manager terminal
110 Acquirer
120 Conference information generator
130 Information transmitter
140 Timing determiner
150 Questionnaire transmitter
210 Equipment
220 Device group
222 Lighting control device
224 Imaging device
226 Microphone
228 Sensor